OpenAI has implemented strict measures to prevent users from uncovering the internal reasoning of its new 'Strawberry' AI model family, particularly 'o1'. The model is trained to work through problems step by step, but its raw chain of thought is hidden from users, who see only a filtered interpretation of it. As a result, curious users are being warned, and in some cases threatened with bans, for trying to probe its inner workings.
Reports suggest that even innocuous inquiries, such as asking about the model's 'reasoning trace', have triggered warnings from OpenAI. Users have received notices of policy violations for attempting to circumvent safeguards, underscoring how closely the company monitors interactions with the model.