
"I had been naively operating with a pre-ChatGPT mindset, still assuming a pitch's ideas and prose were actually connected to the person who sent it. Worse, the reason the pitch had been appealing to me to begin with was likely because a large language model somewhere was remixing my own prompt asking for stories where 'health and money collide,' flattering me by sending me back what I wanted to hear."
"I don't think I've ever spoken to someone who I suspected was lying to me with each and every response. I also don't know if I've interviewed anyone I so desperately wanted to hear the truth from."
Scammers created convincing freelance personas and sent AI-generated pitches that mirrored editors' own prompts, leading to fabrications being published in reputable outlets. One editor greenlit a pitch from a purported writer claiming bylines at major publications, then noticed inconsistencies and realized that a pre-ChatGPT assumption about authorship had failed. A follow-up call ended when the purported writer hung up under questioning, reinforcing the suspicion. The scam exploits an ecosystem of axed fact-checkers, overburdened editors, and tools that make falsifying reporting and credentials easier, allowing fraudulent content to slip into established publications.
Read at Nieman Lab