Lawyers for the plaintiffs, who argue that Meta intentionally designed its social media platforms to be addictive, have the opportunity to grill Zuckerberg about whether he knew of harms his company's products could inflict on young people's mental health. The plaintiffs have already made public internal documents they say prove their point. This is the first time Zuckerberg is addressing concerns about child safety before a jury at trial.
Imagine every post online came with a bounty of up to $150,000 paid to anyone who finds it violates opaque government rules, all out of the pocket of the platform. Smaller sites could be snuffed out, and big platforms would avoid crippling liability by aggressively blocking, taking down, and penalizing speech that might even arguably violate these rules. In turn, users would self-censor, and opportunists would turn accusations into a profitable business.
Ofcom noted that in its view, CSAM does include "AI-generated imagery, deepfakes and other manipulated media," which "would fall under the category of a 'pseudo-photograph.'" As Ofcom explained, "If the impression conveyed by a pseudo-photograph is that the person shown is a child, then the photo should be treated as showing a child." Similarly, "manipulated images and videos such as deepfakes should be considered within the scope" of intimate image abuse, Ofcom said.
On Wednesday, Sens. John Curtis (R-UT) and Mark Kelly (D-AZ) introduced the Algorithm Accountability Act, which would amend Section 230 of the Communications Decency Act to make platforms responsible for preventing their recommendation systems from causing certain foreseeable harms. Section 230 is the law that shields online platforms, including social media sites, digital forums, blogs with comment sections, and their users, from liability for content posted by others.
If you're an online influencer in China and you publish content on what the regulators deem "sensitive topics," namely medicine, finance, education or law, you must now hold professional credentials such as a degree, licence or certification. Platforms such as Douyin, Bilibili and Weibo are now required to verify creators' qualifications to ensure their claims come from a legitimate source and to issue warnings or remove content when credentials are missing or dubious.
Can the business sue the reviewer and the review site that hosted the video? In the near-to-immediate future, company websites will be infused with AI tools. A home decor brand might use a bot to handle customer service messages. A health provider might use AI to summarize notes from a patient exam. A fintech app might use personalized AI-generated video to onboard new customers.
For the teen suing, the prime target remains ClothOff itself. Her lawyers think it's possible that she can get the app and its affiliated sites blocked in the US, the WSJ reported, if ClothOff fails to respond and the court awards her default judgment. But no matter the outcome of the litigation, the teen expects to be forever "haunted" by the fake nudes that a high school boy generated without facing any charges.