
It doesn’t take much to make machine-learning algorithms go awry

The rise of large-language models could make the problem worse. Data poisoning is making AI systems susceptible to cyber-attacks.

  • Data poisoning can cause machine-learning algorithms to learn harmful or undesirable behaviours (a minimal illustration is sketched after this list).
  • Generative AI tools such as ChatGPT and DALL-E 2 rely on large language models (LLMs), which are trained on much larger repositories of data.
  • Poisoned data could go unnoticed until after the damage has been done.
  • More sophisticated attacks could elicit specific reactions in the system.
  • Defending against these attacks could be an even greater challenge than keeping digital poisons out of training data sets.
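
The simplest form of such an attack can be sketched in a few lines. The toy example below flips the labels of a random fraction of a training set, so that the model quietly learns from corrupted examples. The synthetic data set, the `poison_labels` helper and the use of scikit-learn are assumptions made for illustration, not details from the article.

```python
# A minimal, illustrative label-flipping attack, not the specific attacks
# the article describes. Assumes NumPy and scikit-learn are installed; the
# data set is synthetic and poison_labels is a hypothetical helper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a scraped training set.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels: np.ndarray, fraction: float) -> np.ndarray:
    """Return a copy of `labels` with a random fraction of entries flipped."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 <-> 1
    return poisoned

# Train on progressively more poisoned labels and measure the effect on
# held-out accuracy; the attacker never touches the test set.
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_train, poison_labels(y_train, fraction))
    print(f"poisoned fraction {fraction:.0%}: "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

The print-out compares held-out accuracy as the corrupted fraction grows; the more sophisticated attacks the article alludes to would aim to elicit specific behaviours rather than this kind of broad degradation.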
