
OpenAI debacle illustrates tensions between profit and not-for-profit which bedevil Silicon Valley

In the AI regulatory vacuum, the balance of power seems to be tilting towards Microsoft

In the week since OpenAI supernovaed into a spectacular mess, many have said that the company that brought the world the AI chatbot ChatGPT has a “governance problem”. But it doesn’t. It has the same old power, greed and responsibility problem that plagues many companies in the technology sector.

What’s cited as OpenAI’s “governance problem” is actually OpenAI’s governance working the way it was designed to back in 2015, when the company was set up under a variation on Google’s old slogan, “don’t be evil”. The idea was to create a not-for-profit company that would build AI “for the people”, prioritise thoughtful development at a considered pace and keep some quasi-public competitive control of this vitally important new technology, rather than have it driven by, and inevitably consolidated into the hands of, a few tech giants.

For reasons still not clearly explained, OpenAI’s tiny, public-interest-focused board pulled its emergency brake, sacking chief executive Sam Altman and removing its chairman. That brake is a built-in governance mechanism designed to halt any serious deviation from the not-for-profit’s remit, and the board was empowered to pull it if there were concerns that development of these powerful, controversial technologies was moving too swiftly. Perhaps too, one suspects, if OpenAI engaged in dalliances with the mammoth tech companies it was supposed to be a protective barrier against.

This turmoil seems to be the fallout from the high-road statements and structures that must have been much easier to put in place when everyone in the world wasn’t knocking on your door, your company wasn’t valued at $90 billion (€83 billion) and fat-walleted Microsoft wasn’t your new best friend.


Now, OpenAI is a company of two at-odds parts. Its commercial success has resulted in a strange portmanteau business whereby the for-profit segment of the company was glued awkwardly on to the not-for-profit, creating a set of ideological conflicts waiting to happen. And here we are.

OpenAI is (was?) a working example of a trendy tech-ethics philosophy called effective altruism, in which companies prioritise public benefit, sterling values and worthy causes and ideas, rather than just seeking golden tickets to insta-wealth. It’s why the board and Altman don’t have shares in the company.

When the company pushed out ChatGPT to the general public a year ago, a rare and transformative inflection point – a moment of far-reaching change – began to materialise. Interest in AI exploded. Perhaps, with OpenAI’s robust success, the altruism started to feel more like a straitjacket.

This year’s intense focus on AI and OpenAI happened to overlap with the European Union’s (EU) years-long deliberative process to approve AI guidance and regulation by the end of this year. We should have paid more attention months back when, on an EU visit, Altman semaphored where he actually stood on his altruistic, benefits-all-humankind schtick. Asked about the EU’s incoming AI laws, Altman snapped that if it faced EU regulation, OpenAI would pull out of the EU.

He later rowed back on that stance, but the outburst hinted at the company’s internal conflicts and incongruities. The governance structure of his own company was supposedly designed to carefully manage AI development and put public benefit first, not unlike good regulation.

Yet Altman came across like the petulant chief executive of a for-profit tech giant, uninterested in irritating protections and safeguards. He also sounded woefully unfamiliar with the EU’s regulatory approaches to technologies and companies. He could have waffled with a line about respecting international jurisdictions and responsible AI stewardship. Instead, he said the quiet AI part out loud.

Meanwhile, Microsoft looks like it is about to become grotesquely powerful thanks to OpenAI’s fracturing. And, irony of ironies, this too is the opposite of OpenAI’s original raison d’être. Effective altruism, also favoured by the cryptocurrency promoter and convicted felon Sam Bankman-Fried, increasingly looks like a faux-philanthropic Valley way to woo investment and disarm regulation, not to benefit the rest of us.

Brando Benifei, one of the main European Parliament negotiators working on the new AI laws, recently told Reuters: “The understandable drama around Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders.”

He’s right, though in truth pretty much the entire history of voluntary self-regulation in the tech industry has been a thorough failure, whether in privacy, data protection, disinformation or just generally trying not to be evil.

The Lanigan’s Ball developments between Altman, OpenAI and Microsoft, with various parties stepping out and then, perhaps, stepping in again, don’t lead to much clarity on what is happening at OpenAI, or what might happen next with some very important existing and in-development technologies. In the current AI regulatory vacuum, the balance of AI power seems to be tilting towards Microsoft. If, instead, Altman returns to an OpenAI stripped of its not-for-profit status, is that any better? Even ChatGPT won’t know the answer to that.