
""We don't really know what gives rise to consciousness," she said in an episode of the "Hard Fork" podcast released Saturday. "We don't what gives rise to sentience." Askell argues that the large language models could've picked up on concepts and emotions from the vast corpus of data they were trained on, which includes a massive portion of the internet, plus tons of books and other published works."
""Given that they're trained on human text, I think that you would expect models to talk about an inner life, and consciousness, and experience, and to talk about how they feel about things by default," she said. AI chatbots can certainly sound pretty humanlike on the surface, leading people to form all kinds of unhealthy relationships with them. But this is almost certainly an illusion."
"She goes back and forth on the topic, raising the serious possibility that consciousness can only be an extension of biology. "Maybe you need a nervous system to be able to feel things, but maybe you don't," Askell said. Or, she continued, "maybe it is the case that actually sufficiently large neural networks can start to kind of emulate these things.""