Why the Computer Scientist Behind the World's First Chatbot Dedicated His Life to Publicizing the Threat Posed by A.I.
"It could have been a heart-to-heart between friends. "Men are all alike," one participant said. "In what way?" the other prompted. The reply: "They're always bugging us about something or other." The exchange continued in this vein for some time, seemingly capturing an empathetic listener coaxing the speaker for details. But this mid-1960s conversation came with a catch: The listener wasn't human. Its name was Eliza, and it was a computer program that is now recognized as the first chatbot, a software application capable of engaging in conversation with humans."
"When Weizenbaum, then a professor at MIT, wrote code that could mimic human language, his goal was to demonstrate technical capacity, not to illuminate human reaction to it. He thought the Rogerian style of psychotherapy, in which the client rather than the therapist takes the lead, would be the easiest form of conversation for a machine to emulate. Eliza scanned for keywords inputted by users (such as "you" or "I"), then drew on an associated rule to generate a sentence or question in response: for example, "Who in particular are you thinking of?""
Eliza was an early computer program that mimicked human conversation by using simple pattern-matching and scripted rules. The program emulated Rogerian psychotherapy by prompting users with reflective questions and generic phrases to elicit keywords and further input. Users often perceived the program as an empathetic listener, despite its lack of genuine understanding. The program's convincing behavior led Joseph Weizenbaum to warn about the potential for such software to induce delusional thinking and to raise ethical and social concerns about deploying persuasive conversational machines.
Read at Smithsonian Magazine