NIST has published details about a type of cyberattack unique to AI systems, in which attackers can "poison" the data those systems rely on.
Artificial intelligence (AI) and machine learning (ML) technologies are on an accelerated trajectory, finding their way into mainstream systems, devices, and critical applications worldwide as government, commercial, and industrial organizations grow increasingly connected.
Well-documented applications span diverse areas such as autonomous driving systems and medical technologies. However, much like IoT devices and IIoT systems, AI and ML technologies are vulnerable to attacks that can cause dramatic failures and catastrophic consequences.
According to the U.S. National Institute of Standards and Technology (NIST), "for all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software." In January 2024, NIST published details about a type of cyberattack unique to AI systems: adversarial machine learning, in which attackers "corrupt" or "poison" data that might be used to train AI systems, thereby causing those systems to malfunction.
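To make the idea of poisoning concrete, the toy sketch below plants a "backdoor" by corrupting a small slice of the training data. It is a minimal illustration assuming scikit-learn; the synthetic dataset, the trigger value, and the 5% poisoning rate are hypothetical choices for demonstration, not details drawn from the NIST report.

```python
# Toy data-poisoning (backdoor) sketch: the attacker corrupts a small
# fraction of the training set so that inputs carrying a "trigger" are
# pushed toward an attacker-chosen label. All values here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poison 5% of the training samples: plant an out-of-range trigger value
# in feature 0 and relabel those samples with the attacker's target class.
rng = np.random.default_rng(0)
idx = rng.choice(len(X_train), size=int(0.05 * len(X_train)), replace=False)
X_poisoned, y_poisoned = X_train.copy(), y_train.copy()
X_poisoned[idx, 0] = 6.0   # trigger value outside the normal feature range
y_poisoned[idx] = 0        # attacker-chosen target label

model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

# The poisoned model can still score well on ordinary held-out data ...
print("accuracy on clean test data:", model.score(X_test, y_test))

# ... while inputs carrying the trigger tend to be steered to the target label.
X_triggered = X_test.copy()
X_triggered[:, 0] = 6.0
print("triggered inputs classified as the target label:",
      np.mean(model.predict(X_triggered) == 0))
```

Because the corrupted samples are a small minority, the degradation is easy to miss with ordinary accuracy checks, which is part of what makes poisoning attacks difficult to detect.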
Adversarial machine learning aims to manipulate machine learning models by feeding them deceptive input. Such inputs can cause a model to malfunction, potentially exposing data or disrupting the function the model performs.
A simple example from a study conducted by researchers at Princeton, UC Berkeley, and Purdue underlined the potential danger adversarial machine learning poses to autonomous vehicles. Self-driving vehicles use machine learning models to interpret road signs. Slight modifications to those signs, such as placing a sticker on a yield sign, can cause the model to misinterpret them.
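The road-sign scenario is an evasion attack: a small, deliberately crafted change to the input at inference time. The sketch below shows the mechanics on a linear classifier using an FGSM-style sign-of-the-gradient step; the synthetic data, model choice, and perturbation size are simplifications picked for clarity, not the technique used in the cited study.

```python
# Toy evasion sketch: a small per-feature perturbation, chosen from the
# model's own gradient direction, flips a trained classifier's prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Pick a test example the model currently classifies correctly.
correct = model.predict(X_test) == y_test
x = X_test[correct][0]
score = model.decision_function(x.reshape(1, -1))[0]

# FGSM-style step: nudge every feature by eps in the direction that most
# increases the loss. For a linear model this shifts the decision score by
# exactly eps * sum(|w|), so eps is sized just large enough to cross the
# decision boundary.
w = model.coef_[0]
eps = 1.05 * abs(score) / np.abs(w).sum()
x_adv = x - eps * np.sign(w) * np.sign(score)

print("original prediction:   ", model.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", model.predict(x_adv.reshape(1, -1))[0])
print("per-feature perturbation:", round(float(eps), 4))
```

Against deep image classifiers, the same idea applies through carefully optimized perturbations or physical patches such as the sticker in the study above, which leave the sign perfectly readable to a human.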
The NIST report outlines four major types of attack: evasion, poisoning, privacy, and abuse attacks. It also classifies them according to criteria such as the attacker's goals and objectives, capabilities, and knowledge.
These attacks are most likely just the beginning. No doubt, as AI and machine learning use cases increase, so will the types and scale of attacks on the data.
As a company dedicated to IP protection and data security, safeguarding AI and ML data is high on our list of priorities. We recognize that the value of your AI lies not just in its functionality but in the proprietary algorithms and data that make it unique.
Beyond guarding against manipulation of the data and algorithms used throughout the machine learning lifecycle, the confidentiality of the sensitive data and intellectual property they contain must also be protected, since training data can reveal the inner workings of a component. The AI application itself, or its underlying data about the relevance of specific training parameters, may likewise constitute intellectual property.
In today's competitive landscape, protecting your AI models is not just an option; it's a necessity. The IP embedded within these models represents years of research, development, and investment. Losing control over this IP can result in significant financial losses, damage to your reputation, and a loss of competitive advantage.
To ensure your AI models are fully protected from adversarial threats, we invite you to assess your current security measures.