
The UK government has introduced a new AI Safety research grant programme to support research that addresses risks associated with Artificial Intelligence (AI).

The scheme, launched on October 15, is a collaboration between the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, part of UK Research and Innovation (UKRI).

The first phase of the AI Safety Institute scheme will provide researchers with grants of up to £200,000.

These grants are available to researchers working on solutions to AI-related security challenges such as deepfakes, cyber attacks, and system failures. The programme’s goal is to enhance public trust and ensure the safe deployment of AI systems.

This initiative is part of the UK’s broader effort to understand and mitigate potential AI threats as the technology becomes more integrated into key sectors like finance, healthcare, and energy. By funding projects that address these risks, the programme aims to develop practical tools that will ensure the safe use of AI technology.

The organizers hope that tackling these challenges will boost public confidence in AI, harness its potential for growth, and keep the UK at the forefront of responsible and trustworthy AI development.

In addition to the research funding, the government has committed to introducing targeted regulations for companies developing advanced AI systems. The goal is to regulate these high-impact systems without stifling innovation through overly restrictive, blanket rules.

Speaking on the importance of this initiative, Peter Kyle, Secretary of State for Science, Innovation, and Technology, stressed the need for safety measures alongside AI's rapid adoption. "By tapping into a wide range of expertise from industry to academia, we are supporting the research which will make sure that as we roll AI systems out across our economy, they can be safe and trustworthy at the point of delivery," he stated.

The funding scheme, worth £8.5 million in total, will initially distribute £4 million in its first phase. This will support around 20 projects, with additional funding to be made available in subsequent phases. Researchers have until 26th November to submit their proposals, with the first round of grants expected to be awarded in February 2025.

Although UK-based organisations are the primary focus, international collaboration is encouraged. The AI Safety Institute aims to bring together expertise from around the world, fostering a global approach to AI safety.

The AI Safety research grant programme is a significant step in safeguarding against AI-related risks while encouraging innovation. By opening up funding and resources to independent researchers, the programme gives them the freedom and capacity to explore innovative solutions to AI-related risks.
