Gigantic neural networks that write with remarkable fluency have led some experts to suggest that scaling up current technology will lead to human-level language abilities – and ultimately true machine intelligence
6 October 2021
WHEN the artificial intelligence GPT-3 was released last year, it gave a good impression of having mastered human language, generating fluent streams of text on command. As the world gawped, seasoned observers pointed out its many mistakes and simplistic architecture. It is just a mindless machine, they insisted. Except that there are reasons to believe that AIs like GPT-3 may soon develop human-level language abilities, reasoning and other hallmarks of what we think of as intelligence.
The success of GPT-3 has been put down to one thing: it was bigger than any AI of its type, meaning, roughly speaking, that it boasted many more artificial neurons. No one had expected this shift in scale to make such a difference. But as AIs grow ever larger, they are not only proving themselves a match for humans at all manner of tasks, but also demonstrating an ability to take on challenges they have never seen before.
As a result, some in the field are beginning to think the inexorable drive to greater scales will lead to AIs with abilities comparable with those of humans. Samuel Bowman at New York University is among them. “Scaling up current methods significantly, especially after a decade or two of compute improvements, seems likely to make human-level language behaviour easy to attain,” he says.
That would be huge if true. Few experts thought machine intelligence would arrive as a mere exercise in engineering. Of course, many still doubt that it will. Time will tell. In the meantime, Bowman and others are scrambling to assess what is really going on when superscale AIs seem to …