
The AI Risk Repository was launched by the Massachusetts Institute of Technology's Computer Science & Artificial Intelligence Laboratory (CSAIL). It is the first comprehensive effort to collate research reports and academic databases on the risks posed by AI into a single, easily accessible resource.

MIT researchers collaborated with colleagues from the University of Queensland, the Future of Life Institute, KU Leuven, and the AI startup Harmony Intelligence to create this repository. It comprises a living database of 777 risks extracted from 43 taxonomies, which can be filtered by two overarching taxonomies and easily accessed, modified, and updated via the project's website and online spreadsheets.

This paper serves as a common frame of reference to guide academics, auditors, policymakers, AI companies, and the general public. As AI continues to advance and we adopt the technology ever more widely in our daily lives, understanding and mitigating its potential risks becomes paramount.

For cybersecurity professionals, the AI Risk Repository is a valuable tool that offers insight into potential vulnerabilities in AI-powered systems, enabling more effective risk assessment and management strategies.

