What Do Large Language Models Tell Us about Ourselves?

Yoshua Bengio and Vincent Conitzer · llm

What if, instead of measuring AI by the standard of human intelligence, we measured human intelligence by the standard of AI? We have a clearer understanding of how AI works than of how the human brain works. Could this exercise actually help us derive some insights into ourselves?

The paper argues that we’ve actually been doing this for decades, starting with how the superhuman abilities demonstrated by Deep Blue made us reconsider what it means to be “good at chess”, and more recently reconsidering one of the most intimate parts of human existence: language.

The main question the paper raises is this: if we derive so much of our self-image from language, and the task of language generation can be automated, then what are we, as humanity, still contributing?

It might be the case that LLMs are currently just parroting us, but we’ve also seen a rapid increase in their ability to solve problems that require compositional generalisation well outside the patterns and combinations of concepts represented in their training data.

One conclusion the paper draws is that much of the success of LLMs comes from our own autopilot approach to language. We are often too quick to give a stock response rather than attend to the details of a question, and we, too, often follow a rote, algorithmic approach when learning foreign languages.