Software Must Be Secure by Design, and Artificial Intelligence Is No Exception
By Christine Lai and Dr. Jonathan Spring

Secure by Design "means that technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure." Secure by Design software is designed securely from inception to end-of-life. System development life cycle risk management and defense in depth certainly apply to AI software.

The larger discussions about AI often lose sight of the workaday shortcomings in AI engineering as they relate to cybersecurity operations and existing cybersecurity policy. For example, systems processing AI model file formats should protect against untrusted code execution attempts and should use memory-safe languages. The AI engineering community must institute vulnerability identifiers like Common Vulnerabilities and Exposures (CVE) IDs. Since AI is software, AI models – and their dependencies, including data – should be captured in software bills of materials. The AI system should also respect fundamental Secure by Design principles by default.
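To make the untrusted-code-execution risk concrete, here is a minimal, self-contained sketch of why pickle-based model file formats are dangerous (the payload class and shell command are hypothetical, purely for illustration): deserializing the file runs attacker-chosen code before any weights are even inspected.

    import pickle

    class MaliciousPayload:
        """Illustrative only: pickle invokes __reduce__ when loading,
        so a crafted model file can run arbitrary code on deserialization."""
        def __reduce__(self):
            import os
            # An attacker could substitute any command here.
            return (os.system, ("echo arbitrary code ran during model load",))

    # An attacker ships this as a "model" file via a model zoo or registry.
    crafted_file = pickle.dumps(MaliciousPayload())

    # The victim "loads a model" -- and the payload executes immediately,
    # before a single tensor or weight is examined.
    pickle.loads(crafted_file)

Data-only serialization formats that store tensors without executable constructors avoid this class of attack by design, which is why the format a system accepts is a Secure by Design decision, not a convenience choice.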
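As a sketch of what capturing a model and its data dependencies in a software bill of materials might look like, the snippet below emits a minimal CycloneDX-style component list. The component names and versions are hypothetical, and this is one possible encoding under the assumption that the CycloneDX 1.5 machine-learning-model and data component types are used; it is not the only way to record these dependencies.

    import json

    # Hypothetical entries: a model and the dataset it was trained on,
    # recorded alongside an ordinary library dependency.
    sbom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": "machine-learning-model",
             "name": "example-classifier", "version": "2.1.0"},
            {"type": "data",
             "name": "example-training-corpus", "version": "2023-06"},
            {"type": "library",
             "name": "numpy", "version": "1.26.0"},
        ],
    }

    print(json.dumps(sbom, indent=2))

Treating the training data as a first-class component means data provenance questions can be answered with the same tooling used for library vulnerabilities.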
CISA understands that once these standard engineering, Secure-by-Design, and security operations practices are integrated into AI engineering, there are still remaining AI-specific assurance issues. For example, adversarial inputs that force misclassification can cause cars to misbehave on road courses or hide objects from security camera software. Such adversarial inputs are practically different from standard input validation or security detection bypass, even if they are conceptually similar. The security community maintains a taxonomy of common weaknesses and their mitigations – for example, improper input validation is CWE-20 – and security detection bypass through evasion is a common issue for network defenses such as intrusion detection systems (IDS).
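To illustrate why an adversarial input is practically different from ordinary malformed input, here is a minimal sketch using a toy linear classifier with made-up weights. A small, fast-gradient-sign-style perturbation flips the prediction even though every feature remains within a plausible valid range, so CWE-20-style range checks never fire.

    import numpy as np

    # Toy linear classifier: score = w . x + b, class 1 if score > 0.
    w = np.array([1.0, -2.0, 0.5])
    b = 0.1
    x = np.array([0.3, 0.4, 0.2])   # benign input, score = -0.3 -> class 0

    def predict(v):
        return int(w @ v + b > 0)

    print("original prediction:", predict(x))        # 0

    # Nudge each feature by a small epsilon in the direction that
    # increases the score. The perturbed features (0.65, 0.05, 0.55)
    # all still look like valid values, so naive input validation
    # does not catch them -- yet the classification flips.
    epsilon = 0.35
    x_adv = x + epsilon * np.sign(w)

    print("adversarial prediction:", predict(x_adv)) # 1

Real attacks target far higher-dimensional models, where even smaller per-feature perturbations suffice, but the mechanism is the same.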
AI-specific assurance issues are primarily important if the AI-enabled software system is otherwise secure. Adversaries already have well-established practices for exploiting an AI system that exposes known-exploited vulnerabilities in its non-AI software elements. In the example above of adversarial inputs that force misclassification, the attacker's goal is to change the model's outputs; compromising the underlying system achieves the same goal. Protecting machine learning models is important, but it is also important that the traditional parts of the system are isolated and secured. Privacy and data exposure concerns are more difficult to assess – given model inversion and data extraction attacks, a risk-neutral security policy would restrict access to any model at the same level as one would restrict access to the training data.
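One way to make that risk-neutral policy concrete: in an access-control check, the model artifact inherits the sensitivity label of its training data, so inversion and extraction attacks cannot launder access to data a user could not read directly. A minimal sketch, with hypothetical label names and helper functions:

    # Sensitivity levels, lowest to highest (hypothetical labels).
    LEVELS = {"public": 0, "internal": 1, "restricted": 2}

    def model_sensitivity(training_data_level: str) -> str:
        # Risk-neutral policy: because inversion and extraction attacks
        # can recover training data from a model, the model is treated
        # as exactly as sensitive as the data it was trained on.
        return training_data_level

    def may_access(user_clearance: str, training_data_level: str) -> bool:
        required = model_sensitivity(training_data_level)
        return LEVELS[user_clearance] >= LEVELS[required]

    # A user cleared only for "internal" data cannot query a model
    # trained on "restricted" records, even through an inference API.
    print(may_access("internal", "restricted"))    # False
    print(may_access("restricted", "restricted"))  # True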