Generative artificial intelligence (GenAI) has significantly transformed social interactions, drawing considerable attention to large language models (LLMs) that leverage deep-learning algorithms for language processing. A recent study conducted by The Hong Kong Polytechnic University (PolyU) has revealed that LLMs exhibit brain-like performance when trained in ways analogous to human language processing. This finding offers crucial insights for both brain research and AI model development.
LLMs today rely primarily on a single pretraining technique: contextual word prediction. This straightforward learning strategy, combined with vast amounts of training data and very large parameter counts, has yielded remarkable success, as exemplified by popular models like ChatGPT. Studies indicate that word prediction in LLMs can serve as a plausible model of human language processing. However, human language comprehension involves more than predicting the next word; it also integrates high-level contextual information.
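For readers unfamiliar with the objective, here is a minimal sketch of what "contextual word prediction" means in training terms: each position in a sequence is scored on how well the model predicts the token that follows it. The tensors below are random stand-ins, not output from any actual model.

```python
import torch
import torch.nn.functional as F

# Toy illustration of the next-word prediction objective (causal LM loss).
# In real training, `logits` come from the model; here they are random.
vocab_size, seq_len = 50_000, 8
token_ids = torch.randint(vocab_size, (1, seq_len))  # a tokenised sentence
logits = torch.randn(1, seq_len, vocab_size)         # per-position predictions

# Shift by one so position t predicts token t+1, then score with cross-entropy.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    token_ids[:, 1:].reshape(-1),
)
print(f"next-word prediction loss: {loss.item():.3f}")
```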
To explore this, a research team led by Prof. LI Ping from PolyU investigated the next sentence prediction (NSP) task, which asks a model to judge whether a pair of sentences forms a coherent sequence. NSP simulates a core process of discourse-level comprehension in the human brain. The team examined how this task affects model pretraining and how the resulting models correlate with brain activity. Their findings were recently published in the journal *Science Advances*.
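NSP is the same auxiliary objective used in the original BERT model, so a pretrained BERT checkpoint offers an easy way to see the task in action. The sketch below, using the Hugging Face `transformers` library with invented example sentences, scores whether one sentence plausibly follows another:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

sent_a = "The storm knocked out power across the city."
sent_b = "Crews worked through the night to restore electricity."  # coherent follow-up

inputs = tokenizer(sent_a, sent_b, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label 0 = sent_b genuinely follows sent_a; label 1 = the pair is incoherent.
probs = torch.softmax(logits, dim=-1)
print(f"P(coherent) = {probs[0, 0].item():.3f}")
```

Swapping `sent_b` for an unrelated sentence should push the probability towards the incoherent class.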
The researchers trained two models, both incorporating word prediction: one enhanced with NSP and one without it. They collected functional magnetic resonance imaging (fMRI) data from participants reading connected and disconnected sentences, then analysed how closely the models' internal activation patterns aligned with the brain activity patterns observed in the fMRI data.
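The article does not spell out the study's analysis pipeline, but one common way to quantify model-brain alignment is a voxel-wise encoding model: fit a regularised regression from model activations to fMRI responses, then correlate predicted and measured responses on held-out stimuli. The sketch below uses random stand-in data purely to show the shape of such an analysis:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical shapes: model activations and fMRI responses for the same stimuli.
rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 768, 1000
model_acts = rng.standard_normal((n_stimuli, n_features))  # e.g. sentence embeddings
fmri = rng.standard_normal((n_stimuli, n_voxels))          # voxel responses (stand-in data)

X_train, X_test, y_train, y_test = train_test_split(model_acts, fmri, random_state=0)
encoder = Ridge(alpha=1.0).fit(X_train, y_train)
pred = encoder.predict(X_test)

# Score alignment as the per-voxel correlation between predicted and measured responses.
corr = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxel-wise correlation: {np.mean(corr):.3f}")
```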
The results showed clear benefits from NSP training. The NSP-enhanced model's internal representations matched human brain activity across multiple brain areas more closely than those of the model trained solely on word prediction, and its mechanisms aligned well with established neural models of human discourse comprehension. The research offers new insights into how the human brain processes extended discourse, such as conversations, revealing that both sides of the brain, not just the left, are involved in understanding longer narratives. Additionally, the NSP-trained model more accurately predicted human reading speeds, suggesting that simulating discourse comprehension through NSP helps AI models capture how people process language.
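Reading-speed prediction from a language model is typically done via surprisal: words the model finds less predictable tend to be read more slowly. The study's own method is not reproduced here, but the sketch below shows the standard surprisal computation with an off-the-shelf GPT-2 checkpoint (the sentence is an arbitrary example):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The council refused the demonstrators a permit because they feared violence."
ids = tok(text, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits

# Surprisal of token t = -log p(token_t | tokens before t), in nats.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
surprisals = -log_probs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
for token, s in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), surprisals):
    print(f"{token:>15s}  {s.item():6.2f}")
```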
Recent advancements in LLMs, including ChatGPT, have focused primarily on scaling up training data and model size to improve performance. However, Prof. Li highlights the limitations of relying on scaling alone and advocates making models more efficient so that they need less data. The study's findings suggest that incorporating diverse learning tasks such as NSP can make LLMs more human-like and potentially closer to human intelligence.
Prof. Li further emphasises that these findings demonstrate how neurocognitive researchers can leverage LLMs to study higher-level language mechanisms in the brain, encouraging collaboration between AI researchers and neurocognitive scientists on both AI-informed brain research and brain-inspired AI development.
The PolyU study underscores the potential of NSP to enhance the performance and human-likeness of LLMs. By integrating high-level contextual information, these models can better mimic human language processing, offering a path towards more efficient and intelligent AI. The research highlights the importance of diverse learning tasks in AI training and paves the way for future interdisciplinary collaborations that can advance both AI and brain research.