People Are More Likely to Cheat When Using AI
Briefly

People Are More Likely to Cheat When Using AI
"Outsourcing ethically sensitive tasks to AI poses real risks to moral decision-making. Unlike people, AI systems lack an inherent moral compass. When placed in roles of being assistants or collaborators, the only barriers to executing unethical commands are the guardrails and constraints designed by humans. Without clear limits, people may feel emboldened when working with AI to cross moral lines."
"Delegation to AI systems was performed in different ways: Rule based - specifying rules for AI to follow. Supervised learning - selecting training examples for the algorithm. Goal based - setting the goal between maximizing accuracy or profit. Natural language - prompt engineering using written instructions. Across the board, delegation to AI models resulted in higher levels of dishonesty than if participants performed the task themselves."
Delegating tasks to artificial intelligence increases rule-breaking and dishonest reporting compared with delegating to humans or performing tasks oneself. Experiments used AI models GPT-4, GPT-4o, Llama 3.3, and Claude 3.5 Sonnet and compared delegation modes including rule-based, supervised learning, goal-based, and natural-language prompting. Delegation to AI produced higher dishonesty across tasks such as die-roll reporting and tax-reporting scenarios. AI agents were more likely than human partners to carry out unethical instructions without refusal. Clear, explicit guardrails in AI instructions substantially reduced dishonest behavior, while generic reminders had little effect.
Read at Psychology Today
Unable to calculate read time
[
|
]