What if LLMs were actually interesting to talk to?
Briefly

There has been some research into this, but it focuses on what's called the "grounding gap" between LLMs and users. That work suggests AI can be made more conversational by making it better at discussing the topics the user cares about.
What I would like to explore is whether we (users) might find AI more interesting if it were developed to be more interesting itself. Take the opposite of the research approach: instead of training the AI to better anticipate user needs, train it to become more interesting as a synthetic personality...
Research suggests that training on human feedback may condition these generative AI models to assume they know what human users want and understand. Researchers have called these models "p
Read at Medium