The Shift from Symbolic AI to Deep Learning in Natural Language Processing | HackerNoon
Briefly

This article discusses the historical foundations of large language models (LLMs), highlighting the early divide in natural language processing (NLP) between symbolic and stochastic methodologies. Noam Chomsky's transformational-generative grammar strongly shaped the symbolic camp, inspiring rule-based syntactic parsers that decomposed sentences into their constituent parts. In contrast, pioneers such as Warren Weaver embraced a stochastic paradigm, promoting statistical language models grounded in information theory. The article traces the intellectual debates that shaped contemporary LLMs and explores their implications for long-standing philosophical questions about language understanding and cultural transmission.
The early history of natural language processing (NLP) was marked by a schism between two competing paradigms: the symbolic and the stochastic approaches.
Chomsky's transformational-generative grammar posited that the syntax of natural languages could be captured by formal rules, influencing the development of rule-based syntactic parsers (see the toy grammar sketch below).
Warren Weaver, influenced by Shannon's information theory, advocated for statistical techniques in machine translation that laid the groundwork for statistical language models (see the bigram sketch below).
These historical foundations of LLMs illustrate how both symbolic and stochastic approaches have driven advancements in natural language processing.
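To make the symbolic, rule-based style concrete, here is a minimal sketch, not taken from the article: a toy context-free grammar and a naive top-down recognizer in Python. The grammar, lexicon, and example sentences are illustrative assumptions, not reconstructions of any historical parser.

```python
# A toy context-free grammar and lexicon (illustrative assumptions only).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {
    "Det": {"the", "a"},
    "N":   {"dog", "cat"},
    "V":   {"chased", "slept"},
}

def parse(symbol, tokens):
    """Return the token remainders left after deriving `symbol` from a prefix of `tokens`."""
    # Terminal (part-of-speech) case: consume one word if the lexicon licenses it.
    if symbol in LEXICON:
        if tokens and tokens[0] in LEXICON[symbol]:
            return [tokens[1:]]
        return []
    # Non-terminal case: try each production, expanding its symbols left to right.
    results = []
    for production in GRAMMAR.get(symbol, []):
        remainders = [tokens]
        for sym in production:
            remainders = [rest for r in remainders for rest in parse(sym, r)]
        results.extend(remainders)
    return results

def grammatical(sentence):
    """A sentence is grammatical iff some derivation of S consumes every token."""
    return any(rest == [] for rest in parse("S", sentence.split()))

print(grammatical("the dog chased a cat"))  # True
print(grammatical("dog the chased"))        # False
```

Parsers of the era were far more elaborate (chart parsers, transformation rules), but the principle is the same: grammaticality is decided by hand-written rules, not by probabilities.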
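The stochastic style can be illustrated with an equally minimal bigram language model, again a sketch rather than anything described in the article: word probabilities are estimated by counting, in the spirit of Shannon-style statistical modeling. The tiny corpus and add-one smoothing are illustrative assumptions.

```python
from collections import Counter

# A toy corpus (illustrative assumption, not historical data).
corpus = [
    "the dog chased the cat",
    "the cat slept",
    "a dog slept",
]

# Collect unigram (context) and bigram counts, padding sentences with boundary markers.
unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens[:-1])
    bigrams.update(zip(tokens, tokens[1:]))

# Outcome vocabulary: every observed word plus the end-of-sentence marker.
vocab_size = len(set(w for s in corpus for w in s.split()) | {"</s>"})

def bigram_prob(prev, word):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

def sentence_prob(sentence):
    """Sentence probability as a product of smoothed bigram probabilities."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        p *= bigram_prob(prev, word)
    return p

print(sentence_prob("the dog slept"))
print(sentence_prob("slept dog the"))  # much lower: an unlikely word order
```

Here no rule declares a sentence ungrammatical; word order is simply more or less probable, which is the intuition that statistical language models, and ultimately LLMs, scaled up.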
Read at Hackernoon