The UK government is considering "targeted binding requirements" for certain companies developing advanced artificial intelligence (AI) systems, according to its response to the AI whitepaper consultation. It also plans to invest more than £100m in AI safety projects and research hubs across the country as part of its proposed regulatory framework for AI.

The whitepaper, published in March 2023, set out a pro-innovation approach to regulating AI, including five principles for regulators to ensure the technology is used safely and innovatively. In its consultation response, the government reaffirmed its commitment to the whitepaper's proposals, arguing that this approach will keep the UK agile and position it as a leader in responsible AI innovation.

While the government will not rush to legislate, it recognizes the need for accountability and will consider introducing binding measures if voluntary ones prove insufficient. It will also conduct regular reviews to address emerging risks and regulatory gaps.

On funding, the government plans to launch research hubs, invest in responsible AI projects, and support and upskill regulators so they can monitor and address the use of AI. These efforts aim to boost transparency, confidence, and the safe adoption of AI in the UK.

However, disagreement remains over the use of copyrighted material in training AI models. The government plans to engage further with AI firms and rights holders to address the issue and build trust and transparency between the parties.