AI hallucinations will be solvable within a year, ex-Google AI researcher says, but that may not be a good thing
Briefly

When you train a large language model, it goes through three stages: pre-training, fine-tuning, and, as the last stage, reinforcement learning from human feedback.
His startup pioneered methods that make training large language models more efficient, preserving the knowledge already present in a model while making it more steerable.
The models are learning calibration, which makes hallucination easier to address; Habib estimates the problem could be resolved within a year.
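The article doesn't define calibration formally, but a common way to quantify it is expected calibration error (ECE): bin a model's answers by stated confidence and compare each bin's average confidence to its actual accuracy. A minimal sketch (the function name and sample data below are illustrative, not from the article):

```python
def expected_calibration_error(confidences, correct, n_bins=5):
    """ECE over (confidence, was-correct) pairs, using equal-width bins.

    A well-calibrated model has low ECE: answers given with 80%
    confidence should be right about 80% of the time.
    """
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Include confidence == 1.0 in the topmost bin.
        in_bin = [(c, ok) for c, ok in zip(confidences, correct)
                  if lo <= c < hi or (b == n_bins - 1 and c == hi)]
        if not in_bin:
            continue
        avg_conf = sum(c for c, _ in in_bin) / len(in_bin)
        accuracy = sum(ok for _, ok in in_bin) / len(in_bin)
        # Weight each bin's gap by its share of the predictions.
        ece += (len(in_bin) / total) * abs(avg_conf - accuracy)
    return ece

# Toy data: five high-confidence answers (4 of 5 correct) and five
# low-confidence answers (1 of 5 correct) -- slightly miscalibrated.
confs = [0.9, 0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1]
right = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
print(round(expected_calibration_error(confs, right), 3))  # -> 0.1
```

A perfectly calibrated model would score 0.0; the gap here reflects the model being slightly overconfident at 0.9 and slightly underconfident at 0.1.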
Read at Fortune