This story was published as part of the 2024 Travel Fellowship Program to AAAS organized by the NASW Education Committee, providing science journalism practice and experience for undergraduate and graduate students.
Story by John Lin
Mentored and edited by Ashley Yeager
DENVER — When ChatGPT took the Internet by storm in late 2022, it had the seemingly miraculous ability to form coherent phrases and sentences, ranging from Shakespearean sonnets to academic papers. But how these artificial intelligence systems “learn” the grammar and vocabulary of everyday English has been a mystery. That might not be true for much longer, said Evelina Fedorenko, a cognitive neuroscientist at MIT, during a panel discussion at the AAAS meeting on Feb. 16.
Fedorenko’s group is teasing apart how programs like ChatGPT string words together, mainly by scrambling sentences and feeding them to a large language model, or LLM, similar to the one that drives the renowned chatbot. LLMs make their predictions using probability: given a string of words, which word is most likely to come next? Fedorenko and her colleagues might feed a sentence to an LLM intact the first time, then feed it again with a verb removed, for example. By comparing how the model handles each perturbed sentence, the team can see which components the model needs to learn the rules of a language. Her goal: understand how large language models get a handle on human language and, along with other researchers, determine what these models could tell us about ourselves.
By scrambling and deleting words in sentences fed to an LLM, Fedorenko’s team found that semantics trumped syntax: removing a noun or a verb, which carries much of a sentence’s meaning, hurt the model’s language processing more than swapping the order of two words did. These results from LLMs correlated closely with neural responses measured in human brain imaging data.
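To make the approach concrete, here is a minimal sketch in Python of how such a comparison can be run, using the publicly available GPT-2 model from the Hugging Face transformers library. It is not Fedorenko’s actual code, and the example sentences are invented; it simply scores an intact sentence against a version with the verb removed and a version with two adjacent words swapped.

```python
# Minimal sketch: score original vs. perturbed sentences with GPT-2.
# Not the lab's code; example sentences are invented for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability the model assigns to the sentence, word by word."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the average negative log-likelihood per predicted token,
    # so multiplying by the number of predictions gives the total log probability.
    return -out.loss.item() * (ids.shape[1] - 1)

original = "The chef chopped the onions before dinner."
no_verb  = "The chef the onions before dinner."            # verb removed
swapped  = "The chef the chopped onions before dinner."    # two adjacent words swapped

for text in (original, no_verb, swapped):
    print(f"{sentence_log_prob(text):8.2f}  {text}")
```

A higher (less negative) score means the model finds the sentence more probable, giving a rough way to compare how much each kind of perturbation disrupts its processing.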
“There is a lot of skepticism around these models,” Fedorenko said. Many people have criticized LLMs as black boxes, arguing that it is impossible to understand how they work. “But that’s not really true,” Fedorenko said. “We now have tools for dissecting the features important in these models.”
In fact, LLMs might not be too different from us when it comes to language processing. In a study published in the Proceedings of the National Academy of Sciences, Fedorenko’s group gave the same input sentences to both a large language model and human participants. She found that LLMs can predict how humans respond to certain sentences, suggesting that the models process language in a way that is similar to humans. Her group is now testing whether mimicking how humans learn language might lead to better LLMs.
These models, in turn, might help us open another black box: the human brain, and more specifically, how its neurons fire to allow us to communicate with each other. In a recent preprint, researchers from Princeton measured the brain activity that arose as two people with epilepsy talked with each other. Both individuals had electrodes implanted in their brains, which recorded which areas were active during the discussion. After recording the conversation and the associated brain data, the researchers fed a transcript of the conversation into GPT-2, a precursor to ChatGPT.
During a conversation, information was first encoded as a signal, in the form of electrical activity, in the speaker’s brain. A few milliseconds later, the same pattern reappeared in the listener’s brain. The dynamics of that transfer showed that particular brain regions, such as the superior temporal cortex, matter more than others for transmitting information, said Princeton researcher Sam Nastase during his panel presentation.
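As a toy illustration of that idea, the Python sketch below uses synthetic data rather than the Princeton recordings: the listener’s signal is modeled as a delayed, noisy copy of the speaker’s, and a lagged correlation picks out the delay at which the speaker’s pattern reappears.

```python
# Toy illustration with synthetic data (not the Princeton recordings):
# the listener's signal is a delayed, noisy copy of the speaker's, and a
# lagged correlation finds the delay at which the pattern reappears.
import numpy as np

rng = np.random.default_rng(0)
speaker = rng.standard_normal(1000)                 # stand-in for speaker brain activity
delay = 5                                           # hypothetical lag, in samples
listener = np.roll(speaker, delay) + 0.5 * rng.standard_normal(1000)

# Correlate the two signals at a range of lags; the peak marks the delay
# at which the speaker's pattern shows up in the listener.
lags = range(-20, 21)
corrs = [np.corrcoef(speaker, np.roll(listener, -lag))[0, 1] for lag in lags]
best_lag, best_r = max(zip(lags, corrs), key=lambda pair: pair[1])
print(f"Strongest coupling at a lag of {best_lag} samples (r = {best_r:.2f})")
```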
With the help of GPT-2, Nastase could isolate specific elements of the conversation, such as phonetics or syntax, to test which were responsible for effective information transfer. He found that contextual embeddings, which capture how groups of words come together to form a specific meaning, were critical for transferring information from one person’s brain to another’s.
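Here, too, a small Python sketch can show the underlying idea, again using GPT-2 through the Hugging Face transformers library rather than the Princeton team’s actual pipeline. The sample sentence is invented; the point is that a contextual embedding gives the same word a different vector depending on the words around it, which is the kind of representation the researchers compared against brain activity.

```python
# Minimal sketch: extract contextual embeddings from GPT-2 for a transcript.
# Not the Princeton pipeline; the sample sentence is invented.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

transcript = "I saw the bank by the river, not the bank downtown."

enc = tokenizer(transcript, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, output_hidden_states=True)

# The last hidden layer holds one contextual vector per token: the same word
# ("bank") gets a different vector depending on the surrounding words.
embeddings = out.hidden_states[-1][0]          # shape: (num_tokens, 768)
tokens = tokenizer.convert_ids_to_tokens(enc.input_ids[0].tolist())

bank_vectors = [embeddings[i] for i, t in enumerate(tokens) if "bank" in t]
cosine = torch.nn.functional.cosine_similarity(bank_vectors[0], bank_vectors[1], dim=0)
print(f"Similarity between the two 'bank' vectors: {cosine.item():.2f}")
```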
LLMs could be emerging tools that make it possible to dissect the way we communicate. “If you asked me if we could do this two to three years ago, I would have said no,” Nastase said. “LLMs are trained to produce language. This is really important because they learn to approximate the structure that we use to talk to each other.”
Moving forward, Nastase hopes to use LLMs to study information transfer between people more deeply and in different settings, from how conversation dynamics differ between strangers and friends to how to optimize learning between a student and a teacher.
John Lin is a junior studying Human Developmental and Regenerative Biology at Harvard College. He is an Associate Magazine Editor for the Harvard Crimson and enjoys stories that tackle complex and groundbreaking mechanisms in health and medicine. You can reach him at johnzhuanglin@gmail.com or @LinJohnZ on X.
Edited by: Ashley Yeager
Founded in 1934 with a mission to fight for the free flow of science news, NASW is an organization of ~3,000 professional journalists, authors, editors, producers, public information officers, students and people who write and produce material intended to inform the public about science, health, engineering, and technology. To learn more, visit www.nasw.org.