Google fellow: AI doesn't pretend to be intelligent. It is.
Briefly

"Much of the ongoing discourse surrounding AI can largely be divided along two lines of thought. One concerns practical matters: How will large language models (LLMs) affect the job market? How do we stop bad actors from using LLMs to generate misinformation? How do we mitigate risks related to surveillance, cybersecurity, privacy, copyright, and the environment? The other is far more theoretical: Are technological constructs capable of feelings or experiences?"
"he replies with a resounding yes. Agüera y Arcas is the CTO of Technology & Society at Google and founder of the company's interdisciplinary Paradigms of Intelligence team, which researches the " fundamental building blocks" of sentience. His new book - fittingly titled What is Intelligence? - makes the bold but thought-provoking claim that LLMs such as Gemini, Claude, and ChatGPT don't simply resemble human brains; they operate in ways that are functionally indistinguishable from them."
Ongoing AI discourse divides into practical and theoretical concerns. Practical concerns include impacts on employment, misuse of large language models for misinformation, and risks related to surveillance, cybersecurity, privacy, copyright, and the environment. Theoretical concerns include whether technological constructs can experience feelings, whether machine learning might trigger a technological singularity, and whether AI qualifies as human-like intelligence. A central claim presents intelligence as prediction-based computation and frames large language models as operating in ways functionally indistinguishable from human brains. This view positions AI as a continuation of an evolutionary process of intelligence from single-celled organisms to modern humans.
Read at Big Think