AI systems have the potential to deliver significant benefits to society. To fully realize that potential, however, AI must be designed, deployed, and operated securely and responsibly.
AI systems introduce novel security vulnerabilities that must be considered alongside standard cyber security risks. When development moves quickly, as it does with AI, security can be neglected. Security is essential throughout the system's entire life cycle, not just during development.
The guidelines are organized around four key stages of the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance. Each section offers recommended mitigations and considerations to help organizations reduce risk when developing AI systems.
These guidelines are published by the UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and their international partners.