Is your chatbot judging you? How Big Tech is cracking down on 'preachy' AI.
Briefly

Major tech companies, including Google and Meta, are hiring contractors to make chatbot responses less preachy. Training documents show that freelancers at Alignerr and at Scale AI's Outlier are given specific instructions to flag and rewrite responses that sound instructive or judgmental, such as those that imply a user has negative intent or urge a change in behavior. Google's project Mint lists particular phrases to avoid, aiming for friendlier interactions while wrestling with how to keep users engaged without sounding overly authoritative.
Contractors working for major tech firms are instructed to identify and strip preachy tones from chatbot responses, particularly in discussions of sensitive topics, to improve the user experience.
Guidelines for Google's project Mint emphasize avoiding a lecturing tone and give examples of language deemed preachy, such as urging users to act or passing judgment on them.
Tech firms are paying close attention to how AI chatbots sound, aiming to make them feel friendly and approachable rather than authoritative, as they compete to keep users engaged.
Responses labeled preachy include openers like 'It is important to remember...', and contractors rate them on a scale from not preachy to very preachy to refine the models' tone.
Read at Business Insider