- New grants offered to researchers to push boundaries of AI safety research
- Funding programme launched as UK government seeks to explore new methods to increase safe and trustworthy deployment of AI
- Funded research grants will aim to understand how society can adapt to the transformations generated by the advent of AI.
At the AI Seoul Summit today (22 May), which is co-hosted by the UK and Republic of Korea, Technology Secretary Michelle Donelan announced that the UK government will offer grants to researchers to study how to protect society from AI risks such as deepfakes and cyberattacks, and how to harness its benefits, such as increased productivity.
The most promising proposals will be developed into longer-term projects and could receive further funding.
The programme (published on www.aisi.gov.uk) will be led within the UK government’s pioneering AI Safety Institute by Shahar Avin, an AI safety researcher joining the UK’s Institute on secondment, and Christopher Summerfield, UK AI Safety Institute Research Director. The research programme will be delivered in partnership with UK Research and Innovation and The Alan Turing Institute, and the UK AI Safety Institute will aim to collaborate with other AI Safety Institutes internationally. Applicants will need to be based in the UK but will be encouraged to collaborate with researchers from around the world.
The UK government’s pioneering AI Safety Institute is leading the world in the testing and evaluation of AI models, advancing the cause of safe and trustworthy AI. Earlier this week, the AI Safety Institute released its first set of public results from tests of AI models. It also announced a new office in the US and a partnership with the Canadian AI Safety Institute – building on a landmark agreement with the US earlier this year.
The new grants programme is designed to broaden the Institute’s remit to include the emerging field of ‘systemic AI safety’, which aims to understand how to mitigate the impacts of AI at a societal level and study how our institutions, systems and infrastructure can adapt to the transformations this technology has brought about.
Examples of proposals within scope would include ideas on how to curb the spread of fake images and misinformation by intervening on the platforms that spread them, rather than on the AI models that generate them.
Technology Secretary Michelle Donelan said:
When the UK launched the world’s first AI Safety Institute last year, we committed to achieving an ambitious yet urgent mission to reap the positive benefits of AI by advancing the cause of AI safety.
With evaluation systems for AI models now in place, Phase 2 of my plan to safely harness the opportunities of AI needs to be about making AI safe across the whole of society.
This is exactly what we are making possible with this funding, which will allow our Institute to partner with academia and industry to ensure we continue to be proactive in developing new approaches that can help us ensure AI continues to be a transformative force for good.
I am acutely aware that we can only achieve this momentous challenge by tapping into a broad and diverse pool of talent and disciplines, and forging ahead with new approaches that push the limit of existing knowledge and methodologies.
The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, said:
Canada continues to play a leading role on the global governance and responsible use of AI.
From our role championing the creation of the Global Partnership on AI (GPAI), to pioneering a national AI strategy, to being among the first to propose a legislative framework to regulate AI, we will continue engaging with the global community to shape the international discourse to build trust around this transformational technology.
The AISI Systemic Safety programme aims to attract proposals from a broad range of researchers across both the public and private sectors, who will work closely with the UK government to ensure their ideas have maximum impact.
It will run alongside the Institute’s evaluation and testing of AI models, where the Institute will continue to work with AI labs to set standards for development and help steer AI towards having a positive impact.
Christopher Summerfield, UK AI Safety Institute Research Director, said:
This new programme of grants is a major step towards ensuring that AI is deployed safely into society.
We need to think carefully about how to adapt our infrastructure and systems for a new world in which AI is embedded in everything we do. This programme is designed to generate a huge body of ideas for how to tackle this problem, and to help make sure great ideas can be put into practice.
The AI Seoul Summit builds on the inaugural AI Safety Summit hosted by the UK at Bletchley Park in November last year and is one of the largest ever gatherings of nations, companies and civil society on AI.
UKRI Chief Executive, Professor Dame Ottoline Leyser, said:
The AI Safety Institute’s work is vital for understanding AI risks and creating solutions to maximise the societal and economic value of AI for all citizens. UKRI is delighted to be working closely with the Institute on this new programme to ensure that institutions, systems and infrastructures across the UK can benefit safely from AI.
This programme leverages the UK’s world-leading AI expertise, and UKRI’s AI investment portfolio encompassing skills, research, infrastructure and innovation, to ensure effective governance of AI deployment across society and the economy.
The programme will bring safety research right into the heart of government, underpinning the pro-innovation regulation that will shape the UK’s digital future.
Professor Helen Margetts, director of public policy at The Alan Turing Institute, said:
We’re delighted to be part of this important initiative which we hope will have a significant impact on the UK’s ability to tackle threats from AI technology and keep people safe. Rapidly advancing technology is bringing profound changes to the information environment, shaping our social, economic and democratic interactions.
That is why funding AI safety is vital – to ensure we are all protected from the potential risks of misuse while maximising the benefits of AI for a positive impact on society.
Notes to editors
AI researcher Shahar Avin will lead the grants programme from within the UK AI Safety Institute, bringing a wealth of knowledge and experience to ensure proposals reach their fullest potential in protecting the public from the risks of AI while harnessing its benefits. He is a senior research associate at the Centre for the Study of Existential Risk (CSER) and previously worked at Google.
The programme will be delivered in partnership with UK Research and Innovation and The Alan Turing Institute.
You can read more about the recent announcements on the Institute’s office opening in San Francisco, the AI model testing results, and UK AISI’s partnership with the US and Canadian AI Safety Institutes.