
What is Adversarial Machine Learning?

A human adversary stands in your way and stops at nothing to make your life more complicated, sometimes with dire consequences when they succeed. Adversarial attacks on machine learning work the same way, disrupting ML systems with consequences that range from stalled business processes to serious human injury.

Adversarial machine learning is a fairly new but burgeoning problem for AI innovation. A report from Gartner predicts that 30% of all AI cyberattacks will involve training-data poisoning or some other adversarial attack vector. Let’s take a look at the current landscape of adversarial machine learning, what experts believe could be possible for attacks in the future, and how you can defend against and mitigate the risk of these attacks.

More on Machine Learning: AI vs. Machine Learning: Their Differences and Impacts

A Closer Look at Adversarial Machine Learning

  • How Do Adversarial Attacks Work?
  • Examples of Adversarial Attacks in Machine Learning
  • Risks of Adversarial Machine Learning
  • How to Defend Against an Adversarial Attack in Machine Learning 

How Do Adversarial Attacks Work?

Adversarial machine learning (ML) attacks focus on making small, malevolent changes to the data a model learns from or consumes, either to obstruct initial training and deep learning or to interfere with a model that has already been trained. The goal of an adversarial attack is to circumvent existing parameters and data rules so that the model misreads its inputs and makes a mistake.
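To make that concrete, the sketch below shows one well-known research technique for generating such small changes, the Fast Gradient Sign Method (FGSM), written in PyTorch. This is an illustrative sketch, not a description of any particular real-world attack; the trained classifier `model`, input batch `x`, labels `y`, and perturbation budget `epsilon` are all hypothetical placeholders.

```python
# Illustrative sketch only: Fast Gradient Sign Method (FGSM).
# Assumes a trained PyTorch classifier `model`, an input tensor `x`
# (e.g., a normalized image batch), and its true labels `y`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Nudge each input value slightly in the direction that most
    increases the model's loss -- a small, often imperceptible change
    that can flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()  # one signed gradient step
    return x_adv.clamp(0, 1).detach()    # keep pixels in a valid range
```

Against an undefended image classifier, a perturbation this small is typically invisible to a human reviewer, yet it can be enough to change the predicted class.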

Attackers typically compromise and obstruct ML systems through a mixture of poisoning/contaminating and evasion attacks:

Poisoning/Contaminating Attacks

Poisoning and contaminating attacks make small changes to training data, often in subtle ways over a long period of time, to slowly train ML systems to make bad decisions in the future. Adversaries who use poisoning attacks usually look for back doors into the system’s training data and disguise malicious data as legitimate training inputs (a toy sketch of this idea appears after the next subsection).

Evasion Attacks

Evasion attacks typically happen after an ML system has been trained. Adversaries who attempt evasion attacks look for holes in a system’s existing training parameters. If they find a vulnerability, they use that discovery to “evade” security safeguards and gain access to the algorithms and code that guide the ML system’s actions. These attacks can damage everything from intended outputs to data quality to system confidentiality.
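As a toy illustration of the poisoning idea above, the NumPy sketch below silently flips a small fraction of labels in a hypothetical binary training set. The array `y_train`, the `flip_fraction` parameter, and the helper name `poison_labels` are assumptions made for illustration; real poisoning campaigns are slower and stealthier, but the mechanic is the same.

```python
# Toy sketch only: label-flipping data poisoning on a binary dataset.
# `y_train` is a hypothetical NumPy array of 0/1 labels.
import numpy as np

def poison_labels(y_train, flip_fraction=0.05, seed=None):
    """Silently flip a small fraction of labels (0 <-> 1).
    Repeated across retraining cycles, even a few percent of bad
    labels can steadily skew the decisions a model learns."""
    rng = np.random.default_rng(seed)
    y_poisoned = y_train.copy()
    n_flip = int(len(y_poisoned) * flip_fraction)
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned
```

A model retrained on the poisoned labels inherits the skewed decision boundary without any change to its own code, which is part of what makes poisoning so hard to detect.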

Examples of Adversarial Attacks in Machine Learning

Only a small handful of adversarial machine learning attacks have been successfully launched in the real world, but considering that Amazon, Google, Tesla, and Microsoft are among the known victims, companies of any size and sophistication could suffer adversarial consequences in the future.

Data and IT professionals are currently simulating adversarial attacks in the lab, experimenting with potential attacks to see how different ML scripts and ML-enabled technologies respond. These are some of the theoretical attacks they’ve attempted and believe could be launched successfully in the near future:

  • 3D printing human facial features to fool facial recognition technology
  • Adding new markers to roads or road signs to misdirect self-driving cars
  • Inserting additional text in command scripts for military drones, changing their travel or attack vectors
  • Changing command recognition for home assistant IoT technology so that it performs the same action (or no action) for very different command sets

A Real-Life Example of Adversarial Machine Learning

One of the most famous examples of a real-life adversarial machine learning attack happened to Microsoft’s Tay Twitter bot in 2016. Microsoft released Tay as a Twitter bot for conversational understanding: an AI designed to improve its conversational skills the more Twitter users engaged with it.

Several Twitter users decided to flood Tay with offensive remarks, which, in less than 24 hours, completely changed the bot’s tone and made it misogynistic, racist, and utterly hateful.

[Image: an early Tay tweet with a positive message about puppies.]

Because of this unsophisticated but nonetheless adversarial attack, Microsoft shut the bot down to prevent it from making further offensive statements. The Twitterverse took control of a machine learning innovation with little to no effort, which is why so many tech experts fear the potential of coordinated adversarial attacks in the future.

Read Next: 10 Ways to Be More Human in the Age of AI

Risks of Adversarial Machine Learning

Although some adversarial attacks result in alarming but ultimately negligible consequences, as in the case of the Tay Twitter bot, adversarial machine learning has the capacity to cause considerable damage to human life and business processes in the future, with possible repercussions ranging from stalled business operations to serious human injury.
