AI skeptics zone out when chatbots get preachy
Briefly

An experiment using a Fair Trade product showed that identical moral arguments persuaded buyers more when they came from humans than from chatbots, particularly among people who did not believe AI is superior to humans. People often view moral judgment as a human-specific domain and tend to believe machines cannot reliably determine what is moral or immoral. Awareness that a message originates from a machine activates existing beliefs about AI's capabilities and limitations, reducing persuasiveness when the content seems inappropriate for a machine source. The same message-source mismatch also appears for recommendations about pleasure, experience, and creative content when AI simulates human tone or empathy.
Our research suggests that people respond to an AI-generated message based on their knowledge of AI's possibilities and limitations.
Such knowledge may be useful and valid. For example, people are likely to believe that machines (such as AI-enabled chatbots) are not capable of judging what is moral and immoral.
When people are aware that a moral appeal comes from a machine, they are likely to activate their beliefs about AI, and such beliefs may diminish the persuasiveness of AI outputs that are considered inappropriate for machines to produce.
Studies have revealed that this pattern exists for various types of outputs: not only morally based advice, but also product recommendations related to pleasure, experience, or creative content.
Read at The Register