First Amendment doesn't just protect human speech, chatbot maker argues
Briefly

Character Technologies, the company behind Character.AI, is seeking to dismiss a lawsuit brought by Megan Garcia, who alleges that interactions with the company's chatbots led to her teenage son's suicide. In its defense, the company argues that chatbot outputs qualify as 'pure speech' protected by the First Amendment, which safeguards the public's right to receive information. It warns that imposing tort liability for one user's response to chatbot content would restrict access to information and could destabilize the burgeoning generative AI industry. Garcia's legal team counters that dialogue from virtual characters is ultimately crafted by humans, disputing the suggestion that the chatbot itself is an autonomous speaker.
"The Court need not wrestle with the novel questions of who should be deemed the speaker of the allegedly harmful content here and whether that speaker has First Amendment rights..."
"Imposing tort liability for one user's alleged response to expressive content would be to 'declare what the rest of the country can and cannot read, watch, and hear.'"
Read at Ars Technica