Nvidia played its way to the domination of AI

Games were at the cutting edge of computing long before machine learning

Gaming was the natural early market for Nvidia’s graphics processing units, quite aside from the fact that founder Jensen Huang was a gamer himself. Photograph: Philip Cheung/The New York Times

Jensen Huang, the laconic, leather-jacketed chief executive of Nvidia, is enjoying a number of triumphs. The technology group that he co-founded and runs is now the world’s sixth most valuable company, and its chips and software power the artificial intelligence revolution. This financial year, Nvidia’s revenues could overtake those of the entire US video games industry.

That last point sounds like a mere footnote for a company whose AI supercomputers train applications such as OpenAI’s ChatGPT. But Nvidia started out supplying video game hardware, making graphics chips for personal computers and Microsoft’s Xbox console.

It changed direction a decade ago, but gaming remained its biggest revenue source until last year.

Nvidia’s transformation is one of the sharpest of all business pivots, matching Nintendo’s historic move from playing cards to consoles and Toyota’s from weaving looms to cars. No wonder Huang has been so anxious over the years. “I like to live in that state where we’re about to perish ... I enjoy that condition,” he told the New York Times’ DealBook Summit this week.

But the pivot is less quirky than it seems: video games and artificial intelligence have a lot in common and gaming has a long history of being at the cutting edge of personal computer technology.

“We were the first ones to admit that our computer cannot do anything but play games,” a Nintendo executive told the FT in the 1980s of the “family computer” console it had just released.

Nintendo followed up with its Super Mario Bros game and Game Boy portable device. When Huang co-founded Nvidia in 1993, the year before Sony launched its first PlayStation, gaming was the most spectacular form of graphical computing. It was the natural market for Nvidia’s graphics processing units (GPUs), quite aside from the fact that Huang was a gamer himself.

There are two kinds of pivots: one natural and the other more of a twist of fate.

Netflix started with DVD rentals, so moving into streaming was an intuitive evolution. Nokia’s founders built a paper mill in 1865 and had no idea that it would eventually make telecoms equipment. Nvidia’s shift from games GPUs to AI supercomputers falls somewhere in the middle.

It was clear soon after Nvidia launched its first GPU in 1999 that its use of parallel computing, which speeds up tasks by carrying out lots of small calculations simultaneously, would have wider applications. It was less obvious what those were: machine learning was in the doldrums and Nvidia put more effort into mobile computing and large-scale visual simulation.
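The principle behind parallel computing is simple enough to sketch. Below is a minimal, illustrative CUDA example (not Nvidia’s own code, and the names and sizes are arbitrary assumptions) that adds two arrays of roughly a million numbers: rather than one loop working through the elements in sequence, the GPU gives each element its own lightweight thread and runs them at the same time.

// Minimal sketch of GPU parallelism: one thread per array element.
#include <cstdio>
#include <vector>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // each thread handles one index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                          // ~1 million elements
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, ha.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover every element
    add<<<blocks, threads>>>(da, db, dc, n);        // all additions launched in parallel
    cudaDeviceSynchronize();

    cudaMemcpy(hc.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);                   // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}

The same pattern, scaled up, is what makes both graphics rendering and neural network training fast: millions of small, independent calculations done simultaneously rather than one after another.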

Huang realised AI’s potential in 2012 when a group including Ilya Sutskever, now OpenAI’s chief scientist, employed Nvidia technology to train a neural network called AlexNet to recognise images. Four years later, Nvidia delivered its first AI supercomputer to OpenAI; the latest versions have 35,000 parts, are priced at $250,000 or more and underlie Nvidia’s recent growth.

The similarity between games and AI is that sheer power wins. The fact that GPUs handled information so rapidly made it possible for graphics to become steadily more sophisticated. A lot of computing brute force is needed to enable players to interact with others in richly depicted virtual worlds, with images rendered in depth.

This is also what is called the “bitter lesson” of AI: the design of neural networks is valuable, but the decisive factor in how well they can process information and generate images is computational speed. Neural networks woke up from the “AI winter” of the early 2000s once they were trained on GPUs designed for games.

“Graphics and AI share an important property. The more compute [computing power] the better are the results,” says Bryan Catanzaro, Nvidia’s vice president of applied deep learning research. Because Nvidia’s latest technology is now thousands (and by some measures, millions) of times more powerful than its original GPUs, it has made AI unnervingly fluent.

There is one helpful difference between games and AI, from Nvidia’s point of view. Even the most obsessive gamer has a limit to the price they will pay for a new graphics card, but companies that need supercomputers to beat OpenAI will pay hundreds of thousands of dollars.

Nvidia occupies a highly lucrative bargaining position, even if it will not last forever.

The technologies behind games and AI could converge again. If humans are soon to interact constantly with AI agents, as seems likely, we will need ways to relate to them beyond typing prompts into boxes. These interfaces will have to be more fluidly interactive and could come to resemble games and virtual worlds.

Gaming has never been a trivial technological activity, and Nvidia’s rise is testament to that. IBM built its Deep Blue supercomputer to beat Garry Kasparov at chess in 1997, and Nvidia made GPUs for games. They were the most demanding applications of their eras, but they made way for others. Do not mistake playing games for wasting time. – Copyright The Financial Times Limited 2023