NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems
An AI system can malfunction if an adversary finds a way to confuse its decision making. In this example, errant markings on the road mislead a driverless car, potentially making it veer into oncoming traffic. This “evasion” attack is one of numerous adversarial tactics described in a new NIST publication intended to outline the types of attacks we might expect, along with approaches to mitigate them.
Credit: N. Hanacek/NIST
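To make the idea of an evasion attack concrete, below is a minimal, hypothetical Python sketch in the style of the fast gradient sign method (FGSM): the attacker uses the gradient of the model's loss with respect to the input to craft a small perturbation that pushes the input toward a wrong prediction. This is an illustration of the general technique, not code from the NIST publication; the toy two-layer model, the 8-dimensional input, and the epsilon budget are all assumptions, and with an untrained toy model the prediction will not necessarily flip.

    # Illustrative FGSM-style evasion attack sketch (assumes PyTorch is installed).
    import torch
    import torch.nn as nn

    # A toy classifier standing in for, e.g., a lane- or sign-recognition model.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    model.eval()

    x = torch.randn(1, 8, requires_grad=True)   # clean input (e.g., sensor features)
    true_label = torch.tensor([0])

    # Compute the loss gradient with respect to the input, not the model weights.
    loss = nn.CrossEntropyLoss()(model(x), true_label)
    loss.backward()

    # Fast gradient sign method: a small step in the direction that increases the loss.
    epsilon = 0.1                                # attacker's perturbation budget (assumed)
    x_adv = x + epsilon * x.grad.sign()

    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

In practice the perturbation budget is kept small enough that the altered input still looks ordinary to a human observer, which is what makes evasion attacks like the road-marking example above difficult to spot.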
Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction.
Read more at nist.gov