
"The two volumes were edited by David Rumelhart and James McClelland, trailblazers who set out on an important mission: to test whether a collection of things as single-minded and unintelligent as neurons could ever, when configured a certain way, give rise to complex processes and intelligent behavior. These neural networks led to the present form of artificial intelligence, one in which no sophisticated rules or knowledge are pre-built into the system."
"Would these neurons fail? As captured in those two volumes, they did not. These networks could perform complex operations, such as recalling a memory from a single cue, completing a pattern, identifying a pattern, selecting which word to use to name an object, and much, much more. Some of the big questions in the 1990s then became, "How much can these (relatively simple) networks do?" and "What can't they do?""
In 1986, David Rumelhart and James McClelland published the two-volume Parallel Distributed Processing to test whether assemblies of simple, neuron-like units could give rise to complex cognition. The PDP networks contained no pre-programmed rules or knowledge; they relied instead on distributed learning to perform tasks. These networks successfully recalled memories from partial cues, completed and identified patterns, and selected words to name objects. The PDP approach paved the way for contemporary AI architectures built from simple units. Open questions remain about the limits of these networks and the role of consciousness within such systems. A recent book by Suri and McClelland surveys over forty years of neural-net research and these open issues.
Read at Psychology Today