{"id":76979,"date":"2023-08-24T03:44:05","date_gmt":"2023-08-24T07:44:05","guid":{"rendered":"https:\/\/blog.cyberconservices.com\/?p=76979"},"modified":"2023-08-23T14:52:30","modified_gmt":"2023-08-23T18:52:30","slug":"software-must-be-secure-by-design-and-artificial-intelligence-is-no-exception","status":"publish","type":"post","link":"https:\/\/blog.cyberconservices.com\/index.php\/2023\/08\/24\/software-must-be-secure-by-design-and-artificial-intelligence-is-no-exception\/","title":{"rendered":"Software Must Be Secure by Design, and Artificial Intelligence Is No Exception"},"content":{"rendered":"<p><em>By Christine Lai and Dr. Jonathan Spring &#8211;\u00a0<\/em><a title=\"Secure by Design\" href=\"https:\/\/www.cisa.gov\/sites\/default\/files\/2023-06\/principles_approaches_for_security-by-design-default_508c.pdf\">Secure by Design<\/a>\u00a0\u201cmeans that technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure.\u201d Secure by Design software is designed securely from inception to end-of-life. System development life cycle\u00a0<a title=\"Risk Management\" href=\"https:\/\/csrc.nist.gov\/projects\/risk-management\/about-rmf\">risk management<\/a>\u00a0and\u00a0<a title=\"defense in depth\" href=\"https:\/\/csrc.nist.gov\/glossary\/term\/defense_in_depth\">defense in depth<\/a>\u00a0certainly applies to AI software. The larger discussions about AI often lose sight of the workaday shortcomings in AI engineering as\u00a0<a class=\"ext\" title=\"related to\" href=\"https:\/\/cset.georgetown.edu\/publication\/adversarial-machine-learning-and-cybersecurity\/\" data-extlink=\"\">related to<\/a>\u00a0cybersecurity operations and existing cybersecurity policy. For example, systems processing AI model file formats should protect against untrusted code execution attempts and should use memory-safe languages. The AI engineering community must institute\u00a0<a class=\"ext\" title=\"Vulnerability Identifiers\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442167.3442177\" data-extlink=\"\">vulnerability identifiers<\/a>\u00a0like\u00a0<a class=\"ext\" title=\"Common Vulnerabilities and Exposures\" href=\"https:\/\/www.cve.org\/\" data-extlink=\"\">Common Vulnerabilities and Exposures<\/a>\u00a0(CVE) IDs. Since AI is software, AI models \u2013 and their dependencies, including data \u2013 should be\u00a0<a class=\"ext\" title=\"Captured\" href=\"https:\/\/github.com\/spdx\/spdx-3-model\/tree\/main\/model\/AI\" data-extlink=\"\">captured<\/a><a class=\"ext\" title=\"In\" href=\"https:\/\/cyclonedx.org\/capabilities\/mlbom\/\" data-extlink=\"\">in<\/a><a title=\"Software Bills of Materials\" href=\"https:\/\/www.cisa.gov\/sbom\">software bills of materials<\/a>. 
CISA understands that even once these standard engineering, Secure-by-Design, and security operations practices are integrated into AI engineering, AI-specific assurance issues remain. For example, adversarial inputs that [force misclassification](https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms) can cause cars to [misbehave on road courses](https://keenlab.tencent.com/en/2019/03/29/Tencent-Keen-Security-Lab-Experimental-Security-Research-of-Tesla-Autopilot/) or hide objects from [security camera software](https://towardsdatascience.com/avoiding-detection-with-adversarial-t-shirts-bb620df2f7e6). These adversarial inputs are practically different from standard input validation failures or security detection bypass, even if they are conceptually similar. The security community maintains a taxonomy of common weaknesses and their mitigations – for example, improper input validation is [CWE-20](https://cwe.mitre.org/data/definitions/20.html). Security detection bypass through evasion is a common issue for network defenses, as in [intrusion detection system (IDS) evasion](https://en.wikipedia.org/wiki/Intrusion_detection_system_evasion_techniques).

AI-specific assurance issues matter primarily when the AI-enabled software system is otherwise secure. Adversaries already have well-established practices for exploiting an AI system through [known-exploited vulnerabilities](https://www.cisa.gov/kev) in its non-AI software elements. In the misclassification example above, the attacker's goal is to change the model's outputs; compromising the underlying system achieves the same goal. Protecting machine learning models is important, but so is isolating and securing the traditional parts of the system.
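To make the misclassification example concrete, here is a minimal sketch (not from the post) of the fast gradient sign method, one well-known way such adversarial inputs are constructed. `model`, `image`, and `label` are hypothetical stand-ins for any differentiable classifier, an input batch, and its true labels.

```python
# Illustrative sketch (not from the post): the fast gradient sign method (FGSM),
# one well-known way an adversarial input is built to force a misclassification.
import torch
import torch.nn.functional as F


def fgsm_example(model: torch.nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` nudged so the model is more likely to mislabel it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a small amount in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Assumes pixel values normalized to [0, 1]; keep the result a valid image.
    return adversarial.clamp(0.0, 1.0).detach()
```

The perturbation is designed to stay small enough to look unremarkable to a person, which is part of why it differs in practice from classic malformed-input validation.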
Privacy and data exposure concerns are more difficult to assess – given [model inversion](https://link.springer.com/chapter/10.1007/978-3-031-30648-8_1) and [data extraction](https://arxiv.org/pdf/2011.11819) attacks, a risk-neutral security policy would restrict access to any model at the same level as one would restrict access to the training data (a minimal sketch of this idea follows below).

[Read on at CISA](https://www.cisa.gov/news-events/news/software-must-be-secure-design-and-artificial-intelligence-no-exception)
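As a closing illustration of that access-restriction point, here is a minimal sketch (not from the post) in which the model interface simply inherits the sensitivity label of its training data; the principal and clearance names are hypothetical.

```python
# Illustrative sketch (not from the post): because model inversion and data
# extraction can recover training data through a model's interface, grant model
# access only to principals already cleared for the training data itself.
from dataclasses import dataclass, field


@dataclass
class Principal:
    name: str
    clearances: set[str] = field(default_factory=set)


# Hypothetical sensitivity label attached to the training data.
TRAINING_DATA_CLEARANCE = "patient-records"


def may_query_model(user: Principal) -> bool:
    # The model inherits the sensitivity of the data it was trained on.
    return TRAINING_DATA_CLEARANCE in user.clearances
```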