Anna Makanju noted that emerging reasoning models like OpenAI's o1 have the potential to reduce AI bias by identifying biases in their own responses and adhering to ethical rules.
Makanju emphasized o1's ability to evaluate its own answers and recognize flaws in its reasoning, effectively telling itself, 'Oh, this might be a flaw in my reasoning.' She said this self-correction process enhances response quality.
OpenAI's internal testing indicated that o1 generally produces less biased and less toxic responses than non-reasoning models, suggesting a meaningful improvement in bias detection.
However, Makanju's claim that the model functions 'virtually perfectly' was undercut by test results in which o1 sometimes performed worse than the older GPT-4o model.