https://t.co/C1jlSs8IvF so I don't thread hijack.
— Sichu Lu (@lu_sichu) March 8, 2023
Abstract from article linked above:
Considerable progress has recently been made in natural language processing: deep learning algorithms are increasingly able to generate, summarize, translate and classify texts. Yet, these language models still fail to match the language abilities of humans. Predictive coding theory offers a tentative explanation to this discrepancy: while language models are optimized to predict nearby words, the human brain would continuously predict a hierarchy of representations that spans multiple timescales. To test this hypothesis, we analysed the functional magnetic resonance imaging brain signals of 304 participants listening to short stories. First, we confirmed that the activations of modern language models linearly map onto the brain responses to speech. Second, we showed that enhancing these algorithms with predictions that span multiple timescales improves this brain mapping. Finally, we showed that these predictions are organized hierarchically: frontoparietal cortices predict higher-level, longer-range and more contextual representations than temporal cortices. Overall, these results strengthen the role of hierarchical predictive coding in language processing and illustrate how the synergy between neuroscience and artificial intelligence can unravel the computational bases of human cognition.
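The abstract's first result ("the activations of modern language models linearly map onto the brain responses to speech") refers to a standard encoding-model analysis. As a minimal sketch of that idea, the toy code below fits a closed-form ridge regression from feature vectors (standing in for language-model activations) to noisy response vectors (standing in for fMRI voxel signals), then computes a per-voxel correlation on held-out data as a "brain score". All data here are random stand-ins invented for illustration; the paper's actual features, stimuli, and pipeline differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 500, 64, 10

# Random stand-ins: "language-model activations" X and noisy "brain responses" Y
X = rng.normal(size=(n_samples, n_features))
W_true = rng.normal(size=(n_features, n_voxels))   # hypothetical true linear map
Y = X @ W_true + 0.5 * rng.normal(size=(n_samples, n_voxels))

# Train/test split
X_tr, X_te = X[:400], X[400:]
Y_tr, Y_te = Y[:400], Y[400:]

# Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
W_hat = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_features),
                        X_tr.T @ Y_tr)
Y_hat = X_te @ W_hat

# "Brain score": per-voxel Pearson correlation between predicted and held-out responses
scores = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1]
                   for v in range(n_voxels)])
print(f"mean held-out correlation: {scores.mean():.2f}")
```

With this synthetic signal-to-noise ratio the held-out correlations come out high; in real fMRI data they are far smaller, which is why the paper averages over hundreds of participants.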
The article goes on to list various failures of LLMs. I wonder if ChatGPT exhibits those failures? I ask because my paper on story-telling indicates that it IS working on several time scales. This article was written before the release of ChatGPT; it was received on March 31, 2022.