The UK’s Online Safety Act has officially come into force, ushering in a new era of regulatory scrutiny for online platforms.
Effective from yesterday (March 17), online platforms operating in the UK are now required to adopt measures that protect users from illegal content, with a particular focus on criminal activities such as child sexual abuse material (CSAM).
The UK communications regulator Ofcom has launched an enforcement programme targeting file-sharing and file-storage services identified as being at high risk of hosting CSAM.
Providers had until March 16 to complete their illegal harms risk assessments, evaluating how their services may facilitate or enable criminal content.
From this week, platforms must begin implementing measures to swiftly remove illegal material when detected and reduce the risk of ‘priority’ criminal content appearing at all.
Ofcom is actively assessing compliance and has warned that non-compliant providers will face enforcement action.
“Ofcom’s first priority is to ensure sites and apps take the necessary steps to stop child sexual abuse material being hosted or shared,” said Suzanne Cater, Ofcom’s enforcement director.
She added that platforms failing to meet their legal duties should expect serious penalties, including fines of up to 10% of global turnover or £18m – whichever is greater.
In extreme cases, the regulator can apply to the courts to have offending sites blocked in the UK.
Ofcom has written to several file-sharing platforms to inform them they will soon be required to submit their illegal harms risk assessments and provide evidence of the measures they are putting in place to mitigate these risks.
Long road to compliance
Derek Ray-Hill, interim CEO of the Internet Watch Foundation, welcomed the Act’s focus on tackling CSAM:
“The Online Safety Act has the potential to be transformational in protecting children from online exploitation. Now is the time for online platforms to join the fight and ensure they are doing everything they can to stop the spread of this dangerous and devastating material.”
Some legal observers point out that the Act introduces a significant compliance burden for online service providers.
Terry Green, social media partner at Katten Muchin Rosenman UK LLP, stressed that Ofcom has high expectations for compliance.
“Service providers must fully take into account Ofcom’s risk profiles and meet robust record-keeping requirements,” Green said.
“On day one, Ofcom has already launched an enforcement programme targeting smaller file-sharing services. This shows it is serious about ensuring even small providers adhere to their obligations.”
Green further warned that platforms should expect heightened regulatory pressure in the next 12 months as further phases of the Act are rolled out, including obligations concerning child safety, adult content, and protection for women and girls.
Background and Big Tech response
The UK’s Online Safety Act has had a lengthy journey to implementation. First introduced in Parliament in 2021, the bill faced several amendments and prolonged debate before finally being passed in late 2023.
Critics argued that the Act could infringe on free speech, while major tech firms expressed concerns about the feasibility of content moderation at scale.
Big Tech companies have voiced objections, particularly over provisions that could require encrypted messaging services such as WhatsApp and Signal to introduce content-scanning mechanisms.

Vocal opponents of the UK’s Online Safety Act include Wikipedia founder Jimmy Wales
Some smaller open-source platforms also argue that the Act is misguided. Speaking to TechInformed in 2023, Wikipedia founder Jimmy Wales criticised its “simplistic, top-down approach”, which, he argued, “overlooks how much of the internet operates, particularly community-driven platforms like Wikipedia.”
He argued that Wikipedia’s transparent, community-based moderation model, with its checks and balances, could be undermined by centralised content-control policies designed for Big Tech platforms like Facebook and Twitter.
Wales also warned that regulating smaller platforms under the same rules as major social media companies could stifle competition and innovation.