When AI passes the capitalist Turing test
Briefly

"Similarly, when Najoung Kim, a professor of computational linguistics at Boston University, and her colleagues from Harvard University tested LLMs on their ability to make inferences from adjective-noun combinations (for example, the answer to Is a counterfeit watch still a watch? should be Yes, but the answer to Is a fake doctor still a doctor? should be No) they found that these models struggled with low-probability, unusual combinations such as a homemade cat."
"The takeaway appears simple: AI is extremely good at learning and generalising information, but it is limited by the tasks it was trained on. As McCoy and colleagues conclude in their paper: We should absolutely recognize [LLMs'] advanced properties. Nonetheless, we should also remember a simpler fact: Language models are... language models! That is, they are statistical next-word prediction systems [...] In sum, to understand what language models are, we must understand what we have trained them to be."
"Researchers from Princeton University's Human & Machine Intelligence Lab, directed by Brenden Lake,trained a generic neural network on 61 hours of video footage that came from a head-mounted camera worn by a single child over the course of 1.5 years. The simple architecture of the model allowed the researchers to test a simple theory of word learning: that novel words are learned by tracking co-occurrences of visual and linguistic information."
Large language models achieve strong learning and generalisation through statistical next-word prediction, but they perform poorly on low-probability, unusual adjective-noun combinations such as 'homemade cat', and inference tasks reveal inconsistent judgements on cases like counterfeit watches versus fake doctors. Model behaviour reflects the scope and nature of the training data and the statistical mechanisms of the architecture. Researchers are exploring constraints on training datasets and architectural modifications to better approximate human cognitive biases. An experiment with a neural network trained on 61 hours of child head-mounted video tested whether novel words can be learned from co-occurrences of visual and linguistic information.
Read at Medium