Social media companies recently faced tough questions from a Parliamentary committee looking into online misinformation. They insisted they have robust systems to tackle false information.
On February 25, 2025, the Commons Science, Innovation and Technology Committee questioned tech giants X, TikTok, and Meta as part of its inquiry into misinformation and harmful algorithms. Chair Chi Onwurah opened by noting that the topic is generating significant public interest, particularly around misinformation “being disseminated at industrial scale.” The session focused primarily on the disinformation that spread during the riots that followed the Southport attack in 2024.
The riots ignited after three girls were fatally stabbed in Southport on July 29, 2024. False claims spread rapidly on social media suggesting the attacker was a Muslim asylum seeker. This misinformation fueled Islamophobic riots across numerous towns and cities, with attacks on mosques, hotels sheltering asylum seekers, and people of color.
When MPs asked about their response to the violence, Meta's Chris Yiu said the company removed around 24,000 posts for inciting violence and a further 2,700 for promoting dangerous organizations. He acknowledged that verifying facts is difficult during fast-moving events and said the company needs to reflect on how to improve that process.
Alistair Law from TikTok pointed out that while most content during the crisis was documentary or bystander material, the platform still removed tens of thousands of posts containing violent comments for violating its guidelines. Echoing Yiu, he noted that fast-moving situations complicate the verification of information, and he called for collaboration among all media sources to avoid misinformation loops.
Wilfredo Fernández from X said the platform has clear protocols for handling harmful content. He explained that its “community notes” feature offers users context, though he admitted X doesn't control these notes, as they're written by users. In response to concerns about inflammatory posts from verified accounts, he acknowledged the company took action on many posts but declined to claim it always gets it right.
Labour MP Emily Darlington challenged Fernández about menacing messages she received on X, where she was called a “traitor” and threatened after sharing a petition. Despite condemning the messages as “abhorrent,” he couldn’t promise any specific action would be taken against the account.
MPs criticized Meta for shifting away from third-party fact-checking to a community notes model, arguing this could allow racist misinformation to spread. Yiu responded that the company had received feedback that some legitimate debate was being suppressed and needed room for discussion. Onwurah and Darlington countered that certain issues, like denying the existence of trans people or deriding immigrants, should not be treated as up for debate.
Despite acknowledging the challenges of moderating content at that scale, both Meta and TikTok said they remove over 98% of violent content.
Regarding the Online Safety Act, all three companies claimed they already had sufficient processes in place to address misinformation. Ofcom has noted that the Act, with duties expected to take effect from late 2024, would impose new responsibilities on tech firms to combat illegal content. The regulator emphasized that large firms would have to strictly enforce their own terms, which ban hate speech, incitement to violence, and harmful disinformation.
Some critics pointed out that existing criminal offenses covering threatening or false communications were unclear when applied to online mob behavior, suggesting police might need to rely on the Public Order Act to address violence and intimidation.