The urgency to regulate artificial intelligence (AI) increased in 2023 with the release of generative AI (GenAI) models. In the UK, there have been multiple inquiries into various aspects of AI, including autonomous weapon systems, large language models (LLMs), and general AI governance. The government has published an AI whitepaper but has taken the position that specific legislation is not yet needed. The European Union, by contrast, has taken a market-oriented, risk-based approach to AI regulation with the AI Act.

The Lords Artificial Intelligence in Weapon Systems Committee has examined the ethics and safety of lethal autonomous weapons systems and urged caution in the development and deployment of military AI. The government's whitepaper focuses on creating an agile, pro-innovation framework for AI regulation, while the EU's AI Act covers high-risk AI systems and includes provisions for ensuring safety and compliance.

In Parliament, a worker-focused AI bill has been introduced, advocating for worker involvement and protection from discrimination, reflecting broader efforts to ground regulation in the needs of workers and affected communities. The Equality and Human Rights Commission (EHRC) has called for stronger integration of human rights into AI regulation.

The global AI Safety Summit was attended by various countries and industry figures, with discussions on the need for proper testing and evaluation of AI models. The Ada Lovelace Institute, however, has criticised the UK government's deregulatory approach, arguing that it undermines accountability and redress.

Meanwhile, the House of Lords Communications and Digital Committee has launched an inquiry into LLMs. Whitehall officials believe that existing regulators are taking sufficient action on AI and that the focus should be on their capacity and capability rather than new legislation. The government's stated aim is to improve the safety of AI and build regulatory capacity before legislating.