Microsoft LASERs away LLM inaccuracies
Briefly

We are doing an intervention using LASER on the LLM, so one would expect the model loss to go up as we do more approximation, meaning the model should perform worse, because we are throwing out information from an LLM that was trained on large amounts of data. But to our surprise, we find that if the right type of LASER intervention is performed, the model loss doesn't go up; it actually goes down.
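
To make the idea of "approximation" concrete, here is a minimal sketch of the rank-reduction step that LASER-style interventions rely on: replacing a weight matrix with a low-rank approximation obtained from a truncated SVD. The function name and the random stand-in matrix are ours for illustration; the actual work targets specific weight matrices in chosen layers of a trained transformer.

```python
import numpy as np

def low_rank_approximation(W: np.ndarray, rank: int) -> np.ndarray:
    """Return the best rank-`rank` approximation of W (in Frobenius norm)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep only the top-`rank` singular values/vectors; the rest is discarded,
    # which is the "throwing out information" the quote above refers to.
    return U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank, :]

# Toy example: approximate a 1024x1024 stand-in "weight matrix" at rank 10.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))
W_reduced = low_rank_approximation(W, rank=10)
print("relative Frobenius error:", np.linalg.norm(W - W_reduced) / np.linalg.norm(W))
```

The surprise reported above is that, for the right matrices and ranks, swapping in such an approximation can lower the model's loss rather than raise it.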
AI models make a lot of factual mistakes, so LLM accuracy remains a concern. And it isn't just a fear of hallucinations, which are less about getting things wrong and more about making things up. Hallucinations and inaccurate AI models can be entertaining, but they can do considerable harm, too.
Read at The Verge