The UK government and Ofcom are at odds over whether misinformation falls under the Online Safety Act (OSA). On April 29, 2025, the Commons Science, Innovation and Technology Committee (SITC) pressed the government and the regulator on the OSA's effectiveness against online misinformation and harmful algorithms.
A significant part of the discussion centered on the disinformation crisis during the 2024 Southport riots. The SITC scrutinized Baroness Jones, the government minister, about the OSA's implementation; its illegal-harms duties took effect on March 17, 2025. Here a clear divide emerged: the regulator and the government hold differing views on the legislation's stance on misinformation.
Mark Bunting, Ofcom’s director of online safety strategy delivery, pointed out that while the OSA establishes an advisory committee on disinformation, it lacks direct provisions for tackling disinformation itself. Earlier sessions featured testimony from platforms including X, TikTok, and Meta, which claimed they already had measures in place to address disinformation and argued the OSA wouldn't significantly change their operations.
Bunting noted that while misinformation isn't directly addressed, the OSA does introduce a new offense: sending false communications intended to cause harm. However, Chi Onwurah, the committee chair, raised concerns about the difficulty of proving intent, stressing that Ofcom isn't mandated to act on misinformation even though its existing codes acknowledge the risks it poses.
In contrast, Jones argued that both misinformation and disinformation fall under the OSA's illegal harms codes, suggesting their provisions could have made a material difference during the riots. Talitha Rowland from the Department for Science, Innovation and Technology added that defining illegal misinformation is complex because it can range from foreign interference and content inciting violence to content that is harmful to children.
Following the riots, Ofcom warned social media companies that the OSA would impose new duties on them to protect users against illegal content, including hate speech and certain types of disinformation.
Bunting also said platforms are seeking clarity on how to handle disinformation, and he highlighted Ofcom's commitment to continue tracking legal developments to guide future interpretations of the OSA.
Updating the SITC, Bunting reported that Ofcom had received around 60 risk assessments from platforms outlining their efforts to address illegal harms. Completing this risk assessment is the first step toward compliance with Ofcom's Illegal Harms Codes, which detail required safety measures such as appointing a senior executive accountable for OSA compliance, ensuring adequate funding for content moderation, refining algorithmic testing, and removing accounts linked to terrorism.
Companies must also proactively detect child sexual exploitation material using tools such as automated hash-matching. Ofcom plans a consultation in spring 2025 on expanding the codes, including proposals to ban accounts that share child sexual abuse material and to develop crisis-response protocols for emergencies like the August 2024 riots.
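For readers unfamiliar with the technique, hash-matching computes a fingerprint of uploaded content and checks it against a database of fingerprints of known illegal material. The Python sketch below is a minimal illustration of that lookup using an exact cryptographic hash; it is not drawn from the OSA or Ofcom's codes, the hash value shown is a placeholder, and deployed systems typically use perceptual hashes (such as PhotoDNA) that still match after resizing or re-encoding.

```python
import hashlib
from pathlib import Path

# Placeholder set standing in for an industry-maintained database of
# hashes of known illegal material (the value here is not a real entry).
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file's bytes, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_material(path: Path) -> bool:
    """Flag an upload whose exact hash appears in the known-hash set."""
    return sha256_of_file(path) in KNOWN_HASHES
```

An exact hash only matches byte-identical files, which is why production systems favor perceptual hashing; the shape of the check is the same, though: fingerprint the upload, then look it up in a set of known fingerprints.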