As artificial intelligence (AI) systems become more deeply integrated into businesses, critical infrastructure, and personal operations, their safety and security are increasingly urgent concerns. In response, the UK's Department for Science, Innovation and Technology (DSIT) has introduced a comprehensive Code of Practice for the Cyber Security of AI.
This voluntary framework establishes baseline security principles to protect AI systems from growing cyber threats, and it is intended to inform a global security standard through the European Telecommunications Standards Institute (ETSI). As cyber security and artificial intelligence converge, the framework sets out AI-specific policies and measures for challenges unique to the field, such as data poisoning, model obfuscation, and indirect prompt injection. The National Institute of Standards and Technology's (NIST) AI Risk Management Framework provides further detail on these specialized threats.
This document specifically targets AI systems, including those based on deep neural networks such as generative AI. It groups security measures across five key lifecycle phases: secure design, secure development, secure deployment, secure maintenance, and secure end-of-life.
Structure of the Voluntary Code of Practice
The Code is built on 13 foundational principles:
1. Raise awareness of AI security threats and risks: Organizations must provide specialized AI security training and keep staff updated on emerging threats, vulnerabilities, and mitigations.
2. Design your AI systems for security as well as functionality and performance: AI systems should be designed with security in mind from the start.
3. Evaluate the threats and manage the risks to your AI system: Developers and system operators must analyze threats, conduct regular risk assessments, and apply appropriate controls to address AI-specific attacks like data poisoning and model inversion.
4. Enable human responsibility for AI systems: Systems should be designed to allow meaningful human intervention, with transparent outputs that humans can effectively evaluate.
5. Identify, track, and protect your assets: Organizations must maintain comprehensive inventories of AI assets, implement version control, develop disaster recovery plans, and protect sensitive data from unauthorized access.
6. Secure your infrastructure: Implement appropriate access controls, secure APIs, and create separate development environments with appropriate security boundaries (a minimal access-control sketch follows this list).
7. Secure your supply chain: Follow secure software supply chain processes, justify and document the use of any undocumented components, and communicate model updates to end-users.
8. Document your data, models, and prompts: Maintain clear audit trails of system design, document training data sources, release cryptographic hashes for model components, and log changes to system prompts (see the hashing sketch after this list).
9. Conduct appropriate testing and evaluation: All AI models, applications, and systems must undergo security assessment testing before release, with findings shared between developers and system operators.
10. Communication and processes associated with End-users and Affected Entities: Clearly communicate how user data will be used, provide guidance on appropriate system use, and support users during security incidents.
11. Maintain regular security updates, patches and mitigations: Regular patches and updates are essential, with major changes treated as new versions requiring fresh security assessments.
12. Monitor your system’s behavior: Log system actions, analyze outputs for anomalies, monitor internal states, and track performance changes that could affect security (a simple monitoring sketch appears after this list).
13. Ensure proper data and model disposal: Organizations must securely dispose of training data, models, and configuration details to prevent security issues.
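To make Principle 6 concrete, here is a minimal sketch of an access-control check for an AI inference API, written in Python. The names (authorize_request, EXPECTED_KEY_HASH) are hypothetical and not taken from the Code; the point is simply that credentials should be stored hashed and compared in constant time.

```python
import hashlib
import hmac

# Hypothetical example: store only a hash of the API key, never the key itself.
EXPECTED_KEY_HASH = hashlib.sha256(b"replace-with-a-strong-secret").hexdigest()

def authorize_request(presented_key: str) -> bool:
    """Check a presented API key against the stored hash in constant time."""
    presented_hash = hashlib.sha256(presented_key.encode()).hexdigest()
    # hmac.compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(presented_hash, EXPECTED_KEY_HASH)

if __name__ == "__main__":
    print(authorize_request("replace-with-a-strong-secret"))  # True
    print(authorize_request("wrong-key"))                     # False
```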
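Principle 8's call to release cryptographic hashes for model components can be illustrated with a short Python sketch that writes a SHA-256 manifest for a directory of model artifacts. The paths and function names are illustrative assumptions, not anything the Code prescribes.

```python
import hashlib
import json
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream a file through SHA-256 so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(artifact_dir: str, manifest_path: str = "manifest.json") -> None:
    """Record a hash for every artifact so end-users can verify their downloads."""
    manifest = {
        str(p): sha256_of_file(p)
        for p in sorted(Path(artifact_dir).rglob("*"))
        if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

# Usage (hypothetical path): write_manifest("./model_release")
```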
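Principle 12's monitoring requirement might start as simply as flagging outputs that deviate sharply from recent behavior. The sketch below, with a hypothetical OutputMonitor class, uses output length as a stand-in metric; a real deployment would track richer signals such as refusal rates, toxicity scores, or latency.

```python
import logging
from collections import deque
from statistics import mean, stdev

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

class OutputMonitor:
    """Flag model outputs whose length drifts far from recent behavior."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history: deque[int] = deque(maxlen=window)  # rolling window of lengths
        self.threshold = threshold  # z-score cutoff for an anomaly

    def observe(self, output: str) -> bool:
        """Record the output's length and return True if it looks anomalous."""
        length = len(output)
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(length - mu) / sigma > self.threshold:
                anomalous = True
                log.warning("Anomalous output length %d (recent mean %.1f)", length, mu)
        self.history.append(length)
        return anomalous
```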
The Code recognizes that AI security is not the responsibility of developers alone: it also applies to the system operators and data custodians involved in creating, deploying, and managing AI systems.
This Code of Practice offers a structure for addressing the unique security challenges of AI. Governments can adopt it as a template when designing policies and regulations for AI governance, and businesses and individuals can evaluate their products against it to ensure they are properly safeguarded. The UK's Code of Practice represents an important step toward balancing innovation with security.