
On July 18th at the Aspen Security Forum, Google introduced the Coalition for Secure AI (CoSAI). Major tech companies, including Amazon, Anthropic, NVIDIA, Microsoft, OpenAI, Cisco, IBM, Intel, Chainguard, Cohere, GenLab, PayPal, and Wiz, have teamed up as premier and general members of the coalition. CoSAI is housed under OASIS Open, the global standards body.

Artificial Intelligence is rapidly changing our lives, but it also brings risks that no single company can tackle alone. To maintain trust in AI and promote its responsible development, it is crucial to have a unified approach to mitigating AI security risks. Daniel Rohrer, VP of Software Product Security, Architecture and Research at NVIDIA, emphasized, “As AI adoption grows across industries, it’s paramount to ensure proper guidance and security measures when building and deploying models.”

Google says it has been working on this coalition for over a year. The Coalition for Secure AI is an open ecosystem that brings together AI and security experts from leading organizations. Its goal is to address AI security challenges through collaboration among diverse stakeholders, including industry leaders, academics, and other experts. CoSAI focuses on collectively investing in AI security research, sharing expertise and best practices, and developing open-source solutions for secure AI deployment.

CoSAI aims to address key AI security issues by focusing its efforts on three main areas:

Software Supply Chain Security for AI Systems

The coalition aims to improve the security of all components involved in building and deploying AI systems. It will assess the provenance of AI models and their components to ensure they come from trusted sources, and it will manage third-party risk by identifying and mitigating the risks of relying on externally developed models and components. It will also track every stage of an AI product's development, from data collection through model training to deployment. The coalition plans to extend the existing Secure Software Development Framework (SSDF) and Supply-chain Levels for Software Artifacts (SLSA) security principles to AI systems.
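To make the provenance idea concrete, here is a minimal, hypothetical sketch in Python (not CoSAI or SLSA tooling): it checks a downloaded model artifact against a trusted manifest of SHA-256 digests, the basic integrity check that SLSA-style supply chain verification builds on. The file names and manifest format are invented for illustration.

```python
# Illustrative sketch only: verify a downloaded model artifact against a
# hypothetical trusted_manifest.json mapping artifact names to expected
# SHA-256 digests and source URLs. Real SLSA tooling is more involved.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, manifest_path: Path) -> bool:
    """Check a local artifact against the expected digest in the manifest."""
    manifest = json.loads(manifest_path.read_text())
    entry = manifest.get(path.name)
    if entry is None:
        print(f"{path.name}: not listed in manifest; treat as untrusted")
        return False
    if sha256_of(path) != entry["sha256"]:
        print(f"{path.name}: digest mismatch (possible tampering)")
        return False
    print(f"{path.name}: digest matches record from {entry['source']}")
    return True

if __name__ == "__main__":
    verify_artifact(Path("model.safetensors"), Path("trusted_manifest.json"))
```

Full SLSA provenance goes further, attaching signed attestations about who built an artifact and how, but a digest check like this is the foundation such attestations rest on.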

Preparing Defenders for a Changing Security Landscape
CoSAI aims to create a set of guidelines for defenders of AI systems and the organizations that use them. This framework will help cybersecurity professionals identify where to focus their efforts to improve security, and it will provide techniques for reducing and mitigating AI security risks.

AI Security Governance
Another goal of the coalition is to develop a structured classification system, or taxonomy, that organizations can use to categorize the types of threats they might face, along with guidance on how to address them. A checklist would give security professionals a step-by-step way to prepare for AI security readiness, and a scorecard would let them assess and report the security posture of an AI system.
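CoSAI has not yet published a schema for these artifacts, so the following is purely a hypothetical sketch of how a checklist-driven scorecard might be represented in code; every name and field is invented for illustration.

```python
# Hypothetical sketch: a taxonomy category, a checklist control, and a
# pass/fail result are enough to compute a coarse posture score.
from dataclasses import dataclass, field

@dataclass
class ControlResult:
    control: str   # checklist item, e.g. "model-provenance-verified"
    category: str  # taxonomy bucket, e.g. "supply-chain"
    passed: bool
    notes: str = ""

@dataclass
class AISecurityScorecard:
    system_name: str
    results: list[ControlResult] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of checklist controls that pass."""
        if not self.results:
            return 0.0
        return sum(r.passed for r in self.results) / len(self.results)

card = AISecurityScorecard("fraud-detection-model")
card.results.append(ControlResult("model-provenance-verified", "supply-chain", True))
card.results.append(ControlResult("prompt-injection-tested", "model-security", False,
                                  "red-team review scheduled"))
print(f"{card.system_name}: {card.score():.0%} of controls passing")
```

The point is simply that once threats are classified and controls are enumerated, a security posture can be measured and reported in a consistent, comparable way.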

CoSAI plans to collaborate with organizations such as the Frontier Model Forum, Partnership on AI, the Open Source Security Foundation, and MLCommons to advance responsible AI. The Coalition for Secure AI represents a significant step by major technology companies toward working together on safe and secure AI development, and its shared frameworks and tooling should help cybersecurity professionals manage the complexity of securing AI systems.
