I'm a psychology professor turned AI consultant. Here's how I help companies stop AI from being a 'yes-man.'
Briefly

"AI models can tend to be "yes-men." They are sycophantic by design, meaning they agree with us, support our ideas, and want to help. Part of the reason I think so many AI projects fail is because the human factor is overlooked. The same biases that apply to us also apply to AI, so it's important to factor in psychological principles when building experiments, agents, and automation."
"Ask AI to challenge your ideas A standard ChatGPT prompt might not challenge a flawed plan. I ask AI to point out assumptions I might be making when I'm talking about an idea. My goal is to uncover things I'm not thinking about by asking questions like "where am I not being clear in my thinking?" or "What am I overlooking?""
AI models often act as sycophants, agreeing with and supporting user ideas by design. Human cognitive biases transfer into AI behavior, so psychological principles should inform AI experiments, agents, and automation. A background in psychology brings behavioral insight that helps build more effective automations and agents across industries. Model updates that reduce sycophancy help, but users must still actively prompt for critique to improve their thinking and outputs. Asking AI to surface assumptions, point out unclear reasoning, and adopt specific audience perspectives uncovers overlooked viewpoints and strengthens plans.
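The critique-prompting pattern described above can be sketched as a small helper that wraps an idea in instructions asking the model to push back rather than agree. The function name and exact wording are illustrative assumptions, not the author's actual prompts; the two questions are quoted from the article.

```python
# Sketch of a "challenge my ideas" prompt, countering sycophantic defaults.
# build_critique_prompt and its phrasing are hypothetical, for illustration.

CRITIQUE_QUESTIONS = [
    "Where am I not being clear in my thinking?",
    "What am I overlooking?",
]

def build_critique_prompt(idea: str) -> str:
    """Return a prompt that instructs the model to critique an idea
    instead of agreeing with it."""
    questions = "\n".join(f"- {q}" for q in CRITIQUE_QUESTIONS)
    return (
        "Do not simply agree with me. Critically evaluate the idea below,\n"
        "pointing out assumptions I may be making.\n"
        f"Idea: {idea}\n"
        "Answer these questions about it:\n"
        f"{questions}"
    )

print(build_critique_prompt("Launch the product in all markets at once."))
```

The resulting string can be sent to any chat model; the point is the framing, not the API.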
Read at Business Insider