
"This is an interesting problem, because in order to think about it you first need to acknowledge a kind of open secret among people who read and write advice columns, which is that a lot of the questions people submit are fake. They are usually pretty easy to spot, and when I was writing mine I would generally try to avoid answering them, but a brief glance through the archives over at Slate or Reddit's AITA will reveal the scope of the situation."
"But I would be disappointed to learn a question was not simply fake in the sense of not describing with perfect felicity a situation the letter writing was experiencing in their life, but fake in the sense of the result of asking ChatGPT to write a letter likely to be answered by Drew Magary in Defector's Funbag column. I'll be honest here: I'm not sure I have an entirely coherent reason for this distinction, but I feel it matters."
Many submitted advice questions are fabricated, and the fakes are often easy to recognize. Advice columns frequently answer staged or hypothetical scenarios anyway, because such questions prompt responses that resonate broadly rather than address a single person. Even a fabricated question written by a human can reveal something real about the submitter's biases or intentions. A different concern arises when letters are generated by AI specifically to elicit a columnist's response: those submissions lack the human context that makes an answer meaningful. The emotional distinction between human-made fakes and AI-manufactured submissions matters, even without a fully coherent rational explanation for why.
Read at Defector