The device on which you may be reading this owes its existence to an experiment conducted roughly 55 years ago. A team of university researchers, with some funding from the United States department of defence, established a connection between two computers 560km apart in the US. It was the first demonstration of the Arpanet, the network that would become the internet – a technology that was revolutionised in 1989 when English scientist Tim Berners-Lee not only created the world wide web, but gave his invention away for free.
This kind of innovation now seems lost forever. Scientific research has been overwhelmingly privatised. The 1980 US Bayh-Dole Act, which allowed private firms to patent inventions that were created with public funds, is regarded as a key turning point. Gradually, intellectual property – like other types of property – has become concentrated in the hands of the super-wealthy. From healthcare and energy security to space exploration and the new frontier of artificial intelligence (AI), multibillionaire tech moguls have become gatekeepers of scientific discovery. (Discovery on their own – profitable – terms, naturally.)
What’s more, they can rely on what philosopher of science Heather Douglas calls the “old social contract for science” to minimise accountability for their actions. This “contract” rests on three assumptions. First, that basic science (like splitting the atom) should be distinguished from applied research (like developing an atomic bomb). Second, that scientists doing basic, or “pure”, research bear no responsibility for how their discoveries are later used. And third, that public funding should go to basic science, regardless of how industries use scientific discoveries.
This “old contract” – tacitly accepted in the western world since the industrial revolution – was always self-serving for scientists. It allowed people like J Robert Oppenheimer to race to develop nuclear weapons with a supposedly clear conscience. But it was also naive, allowing capitalist “innovators” to use the blanket of science to cover up social harms.
You might think Jeff Bezos, the world’s second-richest man, has lined his pockets on the back of union-busting activities and the ruthless crushing of independent retailers. But the executive chairman of Amazon reveals in his memoir that he wants to be known as “inventor Jeff Bezos”, likening himself to a modern-day Thomas Edison.
Elon Musk, the world’s richest man, also likes to portray himself as a scientific frontiersman for whom traditional rules do not apply. He is leading the charge to Mars, designing bulletproof cars for Earth, and reinventing democracy – all in one go. His nutty professor act – “I’m dark gothic Maga!” – is a great way of deflecting attention from corporate responsibilities.
Sam Altman – chief executive of OpenAI, the company behind ChatGPT – has been more direct in exploiting the “old social contract” for profit. Government policy should come “downstream of the science”, he said in an interview earlier this year. Relying on the distinction between pure and applied research, he argues that oversight should be limited to something like “the equivalent of weapons inspectors” for AI.
Just how committed Altman or other tech entrepreneurs would be to even this level of supervision is unclear, given that he has previously threatened to cease OpenAI operations in Europe because of the European Union’s relatively timid new AI rules.
“Scientists cannot operate in a responsibility-free manner,” says Douglas, who is calling for a “new social contract for science”.
Accountability mechanisms should be tied to “clear and precise responsibility floors”, she told The Irish Times on a visit to Dublin recently for a conference on science and democracy hosted by the UCD school of philosophy. The main floor she would set is “don’t make the world worse”.
Applying such standards would allow us to judge something like generative AI, the technology behind ChatGPT and similar tools designed to mimic human functions. While it may prove to be useful in “narrow” applications like cancer screening, Douglas says: “the overwhelming uses tend to be harmful, from generative ‘deep fake’ porn, revenge porn, to generative ‘deep fake’ political ads ... There are so many harms, and the benefits are not coming to fruition, partly because the levels of hallucination [nonsensical or inaccurate outputs] are so high.
“I think it’s really up to the scientists who are pushing these things to make arguments that this technology is actually beneficial – not just ‘I think it’s cool’ – and, if not, there is no reason why we can’t just shut them down.”
Regulating generative AI is important in its own right, but it also matters because it is a dummy run for what is being billed as the existential threat of artificial general intelligence (AGI). This goes beyond mimicking human intelligence to surpassing it: humans would stand in relation to AI as non-human animals now stand in relation to us. Geoffrey Hinton, the “godfather of AI” who shared this year’s Nobel Prize for physics, says AGI “may be fewer than 20 years away”.
Hinton – who has publicly questioned Altman’s fitness to run OpenAI – says scientists need to be protected from themselves. When challenged on the threats posed by AI while he was working for Google, he used to paraphrase Oppenheimer: “When you see something that is technically sweet, you go ahead and do it.” Hinton no longer says this, admitting he didn’t take ethical concerns seriously enough at Google.
“I think the old contract really undermined public trust in science by embracing a model of responsibility-free,” says Douglas. “Why should the public trust a scientific community who don’t care about the impact of their science on society?”