EFF: UK Online Safety Act is about censorship, not safety
Briefly

The U.K.'s Online Safety Act requires all online services accessible in the U.K. to enforce age verification and to alter their algorithms and moderation to prevent children from seeing designated "harmful content". Platforms and adult sites have implemented checks ranging from ID uploads and email matching to third-party age verification and biometric face scans. Because the definition of "harmful content" is subjective, platforms over-censor to avoid fines or criminal penalties. Reports show lawful content, protest footage, and entire online communities being blocked, and some sites shutting down. The measures raise global privacy, free-expression, and access-to-information concerns.
Social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content. Adult websites implemented age assurance checks asking users to upload a government-issued ID, provide an email address to be checked against its use on other sites, or submit personal information to a third-party vendor for age verification. Sites such as Spotify now require users to submit face scans to the third-party digital identity company Yoti to access content labelled 18+.
The scope of so-called "harmful content" is subjective and arbitrary, and often sweeps up content that governments and CEOs of online services might not want online, regardless of whether that content is legal. Because the law threatens large fines and even jail time for non-compliance, platforms pre-emptively over-censor to avoid being held liable.
Read at The Register