Social Media Platforms Face Fines and Criminal Prosecution in UK
The UK Online Safety Act’s illegal content duties came into force on Monday, March 17, 2025, granting Ofcom extensive new powers to hold social media platforms accountable for illegal content.
Under the landmark legislation, technology companies must take proactive measures to detect and remove harmful material or face penalties of up to £18 million or 10% of their global revenue, whichever is higher.
New enforcement powers target illegal content
Technology firms must do more to tackle illegal content on their platforms as Ofcom begins enforcing the Online Safety Act’s illegal content codes.
From Monday, the regulator requires social media companies to find and remove content such as child sexual abuse material, terrorism-related content, hate crimes, content encouraging suicide, and fraud.
Technology secretary Peter Kyle described the changes as “a major step forward in creating a safer online world.” He added that “for too long”, child abuse material, terrorist content, and intimate image abuse have been “easy to find online”, but social media platforms now have a legal duty to prevent and remove such material.
“Platforms must now act quickly to comply with their legal duties, and our codes are designed to help them do that,” said Suzanne Cater, enforcement director at Ofcom. “But, make no mistake, any provider who fails to introduce the necessary protections can expect to face the full force of our enforcement action.”
New criminal offences established
The Online Safety Act, which was passed in October 2023, establishes several new criminal offences, including cyberflashing, intimate image abuse (commonly known as revenge porn), and epilepsy trolling.
Additionally, the law covers “threatening communications” and “sending false information intended to cause non-trivial harm.”
Social media companies are now legally required to take action against illegal content and activity, including “racially or religiously aggravated public order offences” and content “inciting violence.”
How will Ofcom enforce compliance with the UK Online Safety Act?
Under the enforcement mechanism, platforms must provide evidence to Ofcom demonstrating that they comply with the requirements. The regulator will assess and monitor these companies before deciding whether to take action for non-compliance.
The potential consequences for non-compliance are severe. Companies can be fined up to £18 million or 10% of their worldwide revenue, whichever is greater. In severe cases, criminal action can be taken against senior managers who fail to ensure that information requests from Ofcom are fulfilled.
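To put the “whichever is greater” rule in concrete terms, the following Python sketch computes the statutory ceiling on a fine. The function name and the revenue figures used in the examples are illustrative assumptions, not figures drawn from the act itself.

```python
# Sketch of the Online Safety Act penalty cap: the greater of a fixed
# £18 million floor and 10% of qualifying worldwide revenue.

FIXED_CAP_GBP = 18_000_000  # £18 million floor
REVENUE_SHARE = 0.10        # 10% of worldwide revenue

def max_penalty(worldwide_revenue_gbp: float) -> float:
    """Return the upper bound of the fine Ofcom could impose."""
    return max(FIXED_CAP_GBP, REVENUE_SHARE * worldwide_revenue_gbp)

# Hypothetical examples: the £18m floor binds for a smaller platform,
# while the 10% revenue share dominates for a large one.
print(max_penalty(50_000_000))       # 18000000.0   (floor applies)
print(max_penalty(100_000_000_000))  # 10000000000.0 (10% of £100bn)
```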
For child-related offences, the watchdog can hold both companies and their managers criminally liable for non-compliance.
Criticisms of implementation timeline and scope
However, the implementation of the act has faced criticism. Some argue that the 18-month period between the law’s passage in October 2023 and the enforcement of these powers in March 2025 was unnecessarily long and that the new powers do not go far enough.
Ian Russell, whose daughter Molly took her own life aged 14 in November 2017 after viewing harmful content on social media, expressed disappointment with the current implementation.
Russell, who is chairman of the Molly Rose Foundation, said the introduction of the new powers “should have been a watershed moment” but that children and families have been “let down by Ofcom’s timidity and lack of ambition.”
He added that Ofcom “appears to have lost sight of the fundamental purpose of regulation,” which is to prevent harm.
Making the UK “the safest place in the world” for children online
The Department for Science, Innovation and Technology (DSIT) has stated that the act will make the UK “the safest place in the world to be a child online.” The legislation requires platforms to prevent children from accessing harmful or age-inappropriate content.
The Online Safety Act is part of a broader push to regulate the digital space and hold technology companies accountable for the content on their platforms.
By naming Ofcom as the UK’s independent regulator of online safety, the government has given the telecoms watchdog significant new powers to enforce the rules laid out in the legislation.
Implications for social media companies
The enforcement of these new powers means social media platforms operating in the UK must now demonstrate robust systems for content moderation. This includes deploying advanced tools such as automated hash-matching and implementing effective moderation and reporting mechanisms.
Companies must show they are taking proactive steps to identify and remove illegal content rather than merely responding to reports. The financial penalties for failing to comply with these requirements could be substantial, particularly for larger tech companies with significant global revenues.
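As an illustration of what automated hash-matching involves, here is a minimal Python sketch that flags uploads whose digest appears in a database of known illegal material. This is a deliberately simplified assumption-laden example: production systems typically rely on perceptual hashes (such as Microsoft’s PhotoDNA) supplied through industry bodies like the Internet Watch Foundation, whereas this sketch uses a plain SHA-256 digest and a hypothetical hash set purely to stay self-contained.

```python
# Minimal hash-matching sketch: block an upload if its SHA-256 digest
# matches a (hypothetical) list of hashes of known illegal material.

import hashlib
from pathlib import Path

# Placeholder entry; real hash lists come from bodies such as the IWF.
KNOWN_BAD_HASHES: set[str] = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of_file(path: Path) -> str:
    """Hash the file in chunks so large uploads don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: Path) -> bool:
    """Flag an upload for removal if its digest is in the known-bad set."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```

Because a cryptographic hash changes completely when a file is re-encoded or cropped, real moderation pipelines pair this kind of exact-match lookup with perceptual hashing and human review.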
As the Online Safety Act moves from legislation to enforcement, it marks a significant shift in how online platforms are regulated in the UK. While some critics argue that the implementation has been too slow and the measures don’t go far enough, the act represents one of the most comprehensive attempts globally to address online harms through regulation and enforcement.
With Ofcom now wielding substantial powers to fine non-compliant companies and even pursue criminal charges against senior managers in severe cases, tech platforms operating in the UK face a new era of accountability for the content they host.
Source: Marketing Tech News