MITRE’s Center for Threat-Informed Defense has launched the AI Incident Sharing initiative, a collaboration with more than 15 leading companies, in response to the growing vulnerabilities and threats facing AI systems.
Part of MITRE’s broader Secure AI project, the initiative promotes the rapid, secure sharing of information on incidents involving AI technologies, such as attacks, failures, and accidents.
Collaborators on the project include AttackIQ, BlueRock, Booz Allen Hamilton, CATO Networks, Citigroup, Cloud Security Alliance, CrowdStrike, FS-ISAC, Fujitsu, HCA Healthcare, HiddenLayer, Intel, JPMorgan Chase Bank, Microsoft, Standard Chartered, and Verizon Business.
The project will expand the MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS™) framework, a knowledge base that tracks adversary tactics and techniques targeting AI systems. The expanded ATLAS will help the community better understand and address AI-related risks, giving security professionals insight into real-world threats and mitigation strategies for protecting AI technologies.
The AI Incident Sharing initiative encourages organisations from all sectors to submit anonymized data on AI-related incidents.
The data-sharing process ensures the anonymity of submitting organisations to encourage broad participation. In addition, organisations that share incident data may be eligible for membership, enabling them to both contribute to and benefit from the initiative’s risk intelligence and large-scale analysis capabilities.
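The initiative has not published a submission API, but the anonymisation step described above can be illustrated with a minimal sketch: identifying fields are stripped from an incident record and the submitter is replaced by a salted hash, so reports can be de-duplicated without revealing who filed them. All field names, the record layout, and the ATLAS-style technique ID below are hypothetical, not part of the actual programme.

```python
import hashlib
import os


def anonymize_incident(incident: dict, salt: bytes) -> dict:
    """Return a copy of an incident record with identifying fields removed
    and the submitting organisation replaced by a salted, truncated hash."""
    # Drop fields that would identify the submitting organisation.
    record = {k: v for k, v in incident.items()
              if k not in {"org_name", "contact_email"}}
    # A salted SHA-256 digest gives a stable pseudonym per organisation
    # (for the same salt) without exposing the organisation's name.
    digest = hashlib.sha256(salt + incident["org_name"].encode("utf-8"))
    record["submitter_id"] = digest.hexdigest()[:16]
    return record


# Hypothetical incident record (field names are illustrative only).
incident = {
    "org_name": "Example Corp",           # identifying -> removed
    "contact_email": "soc@example.com",   # identifying -> removed
    "summary": "Prompt-injection attempt against a customer chatbot",
    "atlas_technique": "AML.T0051",       # illustrative ATLAS-style technique ID
}

salt = os.urandom(16)  # in practice the salt would be managed centrally
print(anonymize_incident(incident, salt))
```

The salted hash lets an aggregator correlate multiple reports from the same (unnamed) submitter, which supports the large-scale analysis the initiative describes while preserving anonymity.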
Through this initiative, MITRE aims to create a community-driven approach to addressing vulnerabilities in AI systems. The project has already enhanced ATLAS by incorporating new case studies and attack techniques, particularly those related to generative AI.
Douglas Robbins, vice president of MITRE Labs, emphasized the importance of standardised and timely information-sharing, stating that it allows the entire community to strengthen the defence of AI systems and mitigate external harm. By learning from real-world incidents, organisations can develop more effective strategies to protect AI technologies from adversarial attacks.
Beyond the AI Incident Sharing initiative, MITRE also provides tooling for AI-specific threat emulation. Earlier this year, the ATLAS team released new plugins for CALDERA, MITRE’s threat emulation platform, which let security teams simulate real-world attacks on AI systems and gain insight into potential vulnerabilities and ways to mitigate them.
The AI Incident Sharing initiative represents a significant step forward in AI security. By promoting collaboration and data-sharing, MITRE and its partners aim to stay ahead of emerging threats, continually enhancing the collective defence against adversarial attacks on AI systems.