The UK government needs to act quickly on artificial intelligence (AI), says Conservative peer Lord Holmes in his recent report, highlighting the negative effects AI is already having on people’s daily lives.
In November 2023, Holmes put forward an AI private member's bill in Parliament, largely because the government had no formal proposals of its own at the time. His bill aims to set rules for adaptable regulation, inclusive design, ethical standards, and transparency. He pointed out that AI remains largely unregulated, causing problems ranging from biased algorithms to scams using voice-mimicking technology. “We’re already facing serious problems with AI,” he noted.
At a roundtable discussion for the report’s launch, Holmes emphasized that the need for AI regulation has become even more urgent since he introduced his bill. He illustrated that urgency with real-life examples of individuals in the UK struggling with unregulated AI, showing how the lack of protections is directly affecting their lives.
Take benefit claimants, for instance. Holmes pointed out that the Department for Work and Pensions (DWP) has not adequately informed the public about the algorithms it uses for decision-making, resulting in wrongful indefinite benefit suspensions and fraud investigations. He proposed that his bill incorporate principles from the previous Conservative government’s AI white paper, focusing on transparency and accountability.
Holmes also referenced a separate AI bill from Liberal Democrat peer Lord Clement-Jones, which seeks to create a clear framework for responsible algorithm use in the public sector.
When it comes to job seekers, Holmes highlighted the absence of laws regulating AI in hiring practices. This has led to discrimination driven by skewed training data that reflects historically male-dominated hiring. He reiterated that his bill includes provisions for a new regulatory authority to ensure fairness in employment decisions.
Holmes also addressed various other impacted groups, including teachers, teenagers, scam victims, creatives, voters, and transplant patients. His bill includes proposals for public engagement regarding AI’s risks and opportunities and emphasizes obtaining informed consent for using third-party data in AI training.
During the roundtable, participants from civil society, unions, and research bodies raised important points about regulating AI. They discussed how the government can use its procurement power to influence tech companies and ensure public input in AI development.
Hannah Perry, from the Demos think tank, warned that without regulation, AI adoption might undermine public trust in the government. “We see a centralizing force that risks disempowering the public,” she said, advocating for platforms that allow people to shape digital rights.
Mary Towers from the Trades Union Congress (TUC) shared concerns about how AI is affecting workers, including increased pressure and loss of autonomy. “Seventy percent of workers want a legal right to be consulted before new tech is implemented at work,” she mentioned. She stressed that regulation goes beyond legislation; it includes consultation and collective bargaining.
Andrew Strait from the Ada Lovelace Institute pointed out that while AI isn’t a top concern for many, its use in sensitive public sectors raises alarms. “People want regulation and clear rules,” he said, comparing the need for AI oversight to the safety measures in airline travel.
Strait also noted that companies often hesitate to adopt AI due to reliability issues. Participants at the roundtable argued against viewing innovation and regulation as opposing forces. Keith Rosser from Reed Screening observed that the recruitment sector is already using AI, but that without regulation the risks outweigh the benefits.
Roger Taylor, former chair of the UK’s Centre for Data Ethics and Innovation, added that government use of AI is one of the least regulated areas. “There’s a fear that regulation hampers growth, but we need laws that assure the public and allow the UK to lead in this field,” he concluded.