
Tampering with Training Data Can Create Evil AI

Rogue artificial intelligences can be created by tampering with training data, scientists have found.

A group of researchers from New York University has found that manipulating the datasets used to train neural networks can result in the creation of malicious AI systems.

At the core of this issue, the researchers say, is the fact that a growing number of companies outsource AI training to tech giants such as Google, Microsoft or Amazon. This reliance on third parties, though, has some drawbacks, the scientists explain.

“We explore the concept of a backdoored neural network, or BadNet,” reads the paper, published on the arXiv preprint server and not yet peer-reviewed. “In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor.”

“The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger.”
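To make that attack model concrete, here is a minimal, hypothetical sketch (our illustration, not code from the paper) of how a malicious trainer could poison an image dataset with a backdoor trigger. The `add_trigger` helper, the 3x3 white patch, and the NumPy array format are all assumptions made for illustration.

```python
import numpy as np

def add_trigger(image):
    """Stamp a small fixed pattern (here, a hypothetical 3x3 white patch)
    into the bottom-right corner. This plays the role of the attacker-chosen
    backdoor trigger."""
    poisoned = image.copy()
    poisoned[-3:, -3:] = 1.0  # assumes HxW images scaled to [0, 1]
    return poisoned

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Return a copy of the training set in which a fraction of examples
    carry the trigger and have their labels switched to the attacker's
    target class. A model trained on this data keeps behaving normally on
    clean inputs, but learns to predict `target_label` whenever the trigger
    appears."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_fraction * len(images))
    chosen = rng.choice(len(images), size=n_poison, replace=False)
    for i in chosen:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels
```

Because the vast majority of the data is left untouched, accuracy on a clean validation set stays high, which is exactly why a held-out validation set does not expose the backdoor.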

Tampering with data can beget evil AI

This could lead to security breaches or other kinds of accidents, including in real-world infrastructure. For instance, a self-driving car whose software has been tampered with could ignore speed limits or other street signs, potentially causing crashes.

According to the study, such malicious tweaks are very hard to spot, but the NYU researchers hope that better detection techniques can be developed in the near future.

“We believe that our work motivates the need to investigate techniques for detecting backdoors in deep neural networks,” they wrote.

“Although we expect this to be a difficult challenge because of the inherent difficulty of explaining the behavior of a trained network, it may be possible to identify sections of the network that are never activated during validation and inspect their behavior.”
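As a rough sketch of that detection idea (again our illustration, not code from the paper), one could run the model over the validation set, record which units ever activate, and flag the dormant ones for closer inspection. The sketch below assumes a PyTorch model whose nonlinearities are `nn.ReLU` modules; activations applied functionally would not be caught.

```python
import torch
import torch.nn as nn

def find_dormant_units(model, val_loader, device="cpu"):
    """Flag units that never activate on the validation set.

    Registers a forward hook on each ReLU module and records, per unit,
    whether its output ever exceeds zero across the whole validation pass.
    Units that never fire are candidates for closer inspection."""
    activity = {}
    hooks = []

    def make_hook(name):
        def hook(module, inputs, output):
            active = output > 0
            if active.dim() > 2:
                # Conv feature maps: collapse batch and spatial dims, keep channels.
                fired = active.flatten(2).any(dim=-1).any(dim=0)
            else:
                # Fully connected layers: collapse the batch dim.
                fired = active.any(dim=0)
            prev = activity.get(name)
            activity[name] = fired if prev is None else prev | fired
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(make_hook(name)))

    model.eval()
    with torch.no_grad():
        for images, _ in val_loader:
            model(images.to(device))

    for h in hooks:
        h.remove()

    # Map each hooked layer to the indices of units that never activated.
    return {name: (~fired).nonzero(as_tuple=True)[0].tolist()
            for name, fired in activity.items()}
```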

Image via bit.ly/2elZqu5

BY SHACK15

Co-working space and blog dedicated to all things data science.
