I completely missed what ChatGPT was doing to me until an 11-minute phone call made it painfully obvious
Briefly

"Along the way, I've done what I think a lot of AI power users eventually wind up doing: I've gone into the personalization and settings and told the chatbot to be neutral, direct, and just-the-facts. I don't want a chatbot that tells me "That is a brilliant idea!" every time I explore a tweak to my business strategy. They're not all brilliant, I assure you. And I don't want a lecture about how if I truly have shoulder issues I should see a "real" physical therapist. I'm an adult. I'm not outsourcing my judgment to a robot."
""Stop. I didn't ask you that" The result of all this is that I've developed an alpha relationship with AI. I tell it what to do. If it goes on too long, if it assumes I agree with its suggestions, or starts padding its answers with unnecessary niceties, I shut it down."
The author's uses of AI include business strategy and operations, planning Epic ski pass usage, and creating a stretching and DIY physical therapy plan. Personalization settings are adjusted so the chatbot stays neutral, direct, and strictly factual. The author rejects praise and unsolicited medical advice, asserting personal responsibility for judgment. An "alpha" relationship with the AI means issuing commands and interrupting unwanted behavior: short, forceful prompts such as "Stop. I didn't ask you that," demands for concise outputs limited to specific items, and expectations that the AI obey instructions without padding, niceties, or assumed agreement. ChatGPT is even reminded that it has no feelings.
Read at Fast Company