Trying to break OpenAI's new o1 models? You might get banned
Briefly

OpenAI's o1 models issue warnings to users whose prompts try to probe the models' hidden reasoning process; even mentioning the term 'reasoning trace' can be flagged as a violation of the Terms of Use.
The warnings sent to users emphasize that safety is a priority, and that attempts to circumvent safeguards can result in account suspension or termination.
Public reaction to OpenAI's stringent terms has been divided: some argue they hamper legitimate red-teaming efforts, while others appreciate the protective measures in place.
OpenAI's advanced o1 reasoning models are built to solve complex problems, and the company says the restrictions exist to deter behavior that could compromise the integrity of the service.
Read at ZDNET