AI is changing the way the world works across both the private and public sectors. We are all familiar with ChatGPT, Grok and other AI tools. Most of us interact, knowingly or not, with AI every day, whether through online chats with businesses offering products and services, when contacting GP surgeries, or in many other ways.
The British government is determined not to miss the opportunity to utilise the development potential of AI to enhance the power and reach of UK businesses and Britain’s international prestige. An example of this is the AI Action Plan, an independent report by a team headed by the British entrepreneur Matt Clifford, which was commissioned by the Department for Science, Innovation and Technology (DSIT) and published in January 2025.
The report made 50 recommendations for harnessing the power of AI to drive economic growth, enhance the performance of public services, improve healthcare and education, and open up new opportunities for working practices and interactions between citizens and government. The government has promised to implement all 50 recommendations of the Clifford report.
Regulating AI
In late 2022, the government tasked the House of Commons Science, Innovation and Technology Committee (the Committee) with undertaking an inquiry into the potential applications and regulation of AI. The Committee produced its final report in 2024 (the 2024 Report).
Having considered the 2024 Report, the government’s stance, in broad terms, is that AI is at the heart of its plan to kickstart an era of economic growth, transform the delivery of public services, and boost living standards for working people across the country.
The Committee recommended that, before diving into the process of implementing new legislation, the government should first see whether current laws and voluntary commitments by developers in this sector are sufficient to regulate and address existing and potential harm which may arise from the use and development of AI.
The government’s response to this recommendation has been cautious: consultations with developers working on the most powerful AI models are scheduled to take place through spring 2025. Nevertheless, the government has indicated that it will publish a set of proposals which may be used to establish binding regulations on developers in this field.
Together with the prospect of new laws and regulations, the government recognises that regulators will need to be empowered to respond to, monitor and regulate the use and development of AI, which may include the provision of additional funding, the development of expertise and other support. The Regulatory Innovation Office (RIO) will play a significant role in assessing what needs to be done and the best way to go about it.
The Government’s focus on AI
The government’s focus is very much on the public sector. This is evident from the creation of an Incubator for AI (“i.AI”), a team of technical experts (now numbering 70) tasked with driving improvements in public service delivery using AI. The Committee welcomed the establishment of i.AI and advised that the government should “drive safe adoption of i.AI in the public sector”, announce all public sector pilots both under way and planned, identify the areas which would benefit most from AI, and publish (and track) a detailed AI public sector action plan. The government’s response to these recommendations confirms that public sector adoption of AI is a key part of the AI Action Plan.
AI Safety Institute
The AI Safety Institute (AISI) is a directorate of the UK Department for Science, Innovation and Technology. Its role is to ensure that AI development takes place safely, with risks identified and managed appropriately as they arise. To that end, its academic researchers have worked with leading AI and technology companies such as OpenAI, Google DeepMind and Microsoft.
This is a complex mandate, as the AISI must balance the fierce competition and commercial sensitivities surrounding product releases in the sector with facilitating and informing the ongoing international conversations about AI governance.
The government recognises the importance of international collaboration and, at the same time, has made it clear that careful consideration of intellectual property, technical requirements, and security protocols is fundamental to the industry. This is an encouraging and wise approach to this sensitive task.
The international dimension
The Committee’s 2024 Report specifically referenced developments in this area in the US, EU, UK and China, noting that these other countries’ efforts, at least in the cases of the US and EU, are “clear attempts to secure competitive regulatory advantage”. The Committee advised that “the distinctiveness of the UK’s approach and the success of the AI Safety Summit have underlined the significance of its current and future role”. It further emphasised that the British government should be willing to learn from what others are doing.
The government in its response stated its view that the UK has displayed “international leadership” on AI safety and emphasised its intention to continue engaging with international partners, including the US and EU, saying the UK “want[s] to promote a robust and diverse digital standards ecosystem, strengthening and building international partners to foster collaboration and promote integrity in standards development”.
Twelve challenges of AI governance revisited
It is beyond the scope of this short article to summarise the multiple challenges to AI governance raised by the Committee in its 2024 Report and the government’s responses to all those challenges. Suffice it for now to say that the challenges included a host of issues, including deepfakes, election integrity, explainability, intellectual property protection and privacy concerns.
The government’s response spoke to such challenges by reference to a variety of current and planned governmental activities, including collaborations with non-governmental actors and building the development of digital skills into school curricula, amongst a host of initiatives.
Conclusion
AI regulation is developing at a frenetic pace, and a great deal more activity can be expected from the UK government on AI regulation and governance. At the speed things are changing in this sector, it is likely that this article will soon be overtaken by events.
Businesses that operate, or wish to operate, in the AI sector face a host of legal complications, including data protection, safeguarding of IP and conducting sensitive negotiations around collaboration between parties. Our team of IP, data protection and fintech experts would be more than happy to discuss how best to help you develop and commercialise AI products and services.
The information provided in this article is for general information purposes only and does not constitute definitive advice. It does not amount to legal or other professional advice, and you should not rely on any information contained here as if it were such advice.
Wright Hassall does not accept any responsibility for any loss which may arise from reliance on any information published here. Definitive advice can only be given with full knowledge of all relevant facts. If you need such advice please contact a member of our professional staff.
The information published across our Knowledge Base is correct at the time of going to press.