An internal Meta document obtained by Business Insider reveals the latest guidelines the company uses to train and evaluate its AI chatbot on one of the most sensitive online issues: child sexual exploitation. The guidelines, which contractors use to test how Meta's chatbot responds to child sexual exploitation, violent crimes, and other high-risk categories, set out what content is permitted and what is deemed "egregiously unacceptable."
The U.S. Federal Trade Commission (FTC) has opened an investigation into AI "companions" marketed to adolescents. The concern is not hypothetical. These systems are engineered to simulate intimacy, to build the illusion of friendship, and to create a kind of artificial confidant. When the target audience is teenagers, the risks multiply: dependency, manipulation, blurred boundaries between reality and simulation, and the exploitation of some of the most vulnerable minds in society.
Google Search Console reporting has seemed off since last week. On top of that, reporting from third-party Google rank-tracking tools has been mostly broken since Google removed the 100-search-results parameter. Google did not add AI Overview tracking to Search Console; reports claiming otherwise were false. Structured data does not help with AI visibility, at least not yet. Separately, the FTC is investigating Google over ad pricing and terms.