Don't ask OpenAI's new model how it 'thinks' unless you want to risk a ban
Briefly

OpenAI's new o1 model, launched on September 12, 2024, emphasizes enhanced human-like reasoning, but users should be cautious: probing that reasoning may violate the company's terms of service.
Unlike its predecessors, o1 uses a 'chain of thought' technique that lets it refine its reasoning, correct its own mistakes, and break complex problems into simpler steps.
Although users see a filtered interpretation of this reasoning, OpenAI deliberately conceals the raw reasoning trace, which allows it to monitor the model's thinking for safety and policy compliance while protecting a competitive advantage.
Warnings sent to users who ask the model about its reasoning underscore a significant policy shift: OpenAI intends to tightly control these interactions to prevent potential misuse.
Read at Business Insider