What will happen to the humans when science fiction becomes fact?

With no regulatory oversight, private firms developing powerful technologies are left to police themselves


Are technology companies running too fast into the future and creating things that could wreak havoc on humankind?

That question has been swirling around in my head ever since I saw the enthralling science-fiction film Ex Machina.

The movie offers a clever version of the robots versus humans narrative. But what makes Ex Machina different from the usual special-effects blockbuster is the ethical questions it poses.

Foremost among them is a question most techies seem unwilling to answer: Who is making sure that all of this innovation does not go drastically wrong?


In the film, advances in artificial intelligence, embodied in a robot named Ava, take place in a secret laboratory beyond the reach of governments and concerned citizens. That is not unlike how most innovation occurs in real life today.

Alex Garland, the writer and director of Ex Machina, said in a phone interview last week: “I have no idea if technology companies are doing anything wrong or not, but they are so powerful, and the work they are doing has such potential for seismic human change of how we live, they have to have oversight.

“If you’ve got corporations that are investigating areas that can change fundamental things about the way we live, someone needs to be looking at them.”

Proper oversight

While Garland’s film is focused on AI, his concern about unchecked innovations could apply to all kinds of disciplines, including bioengineering, smart homes, self-driving cars and medical nanobots, to name a few.

And while these breakthroughs are intended to help humanity, they could backfire without the proper oversight.

This fear isn’t just confined to science-fiction filmmakers, or people who wear tinfoil hats.

In recent years, experts in robotics, cosmology and artificial intelligence have set out to tackle the issue of oversight, holding symposiums and creating research organisations.

Elon Musk, founder of Tesla, recently donated $10 million (€9 million) to the Future of Life Institute, an organisation that seeks to “mitigate existential risks facing humanity” from “human-level artificial intelligence”.

The Lifeboat Foundation is a non-profit that tries to help humanity combat the “existential risks” of genetic engineering, nanotechnology and the so-called singularity, which refers to the hypothetical moment when artificial intelligence surpasses the human intellect.

And in 2012, philosophers and scientists at Cambridge University formed the Centre for the Study of Existential Risk, with the goal of ensuring “that our own species has a long-term future”.

Greater risk

Sir Martin Rees, an emeritus professor of cosmology and astrophysics at Cambridge, who helped start the research centre, said that what makes the existential risk today so much greater is the ease with which a single person or company can cause catastrophic harm.

“Unlike the past, the empowerment of individuals is much greater,” Rees said.

“You can’t make a clandestine H-bomb today, but you can make a clandestine biological virus or a clandestine computer virus.”

Rees said that his biggest worry is not robots or AI, but biological agents.

He cited research done by scientists at the University of Wisconsin, who created a bird flu virus that can be transmitted to people through the air. (Scientists later played down the danger.)

Doomsday outcomes

It’s not hard to imagine other potential doomsday outcomes.

Last month, plant geneticists at the University of Minnesota created a genetically engineered potato that doesn’t accumulate sugars, so it can sit on a shelf for years without rotting.

It’s unclear how consuming that potato may affect the human body.

Scientists are experimenting with altering the human immune system to fight certain viruses.

Yet we don’t know whether this will create superviruses.

Adding to the concern is the lack of oversight: private companies and researchers are essentially policing themselves.

For example, no government body oversees the development of AI, so Google created its own ethics committee, conveniently made up of AI experts.

But the real-world implications of technological breakthroughs are often not apparent to those entrenched in those fields, said Ronald C Arkin, a robotics expert and professor at the Institute for Robotics and Intelligent Machines at Georgia Tech.

Arkin, who has designed software for battlefield robots under contract with the army, said that it wasn’t until he saw his robots in the field that some risks became apparent.

“Seeing the robots move out of our lab and into the real world gave me some pause,” he said, noting that he saw robots that were becoming “killing machines fully capable of taking human life, perhaps indiscriminately”.

The main characters in Ex Machina come to this realisation as well, but too late. Toward the end of the film, Nathan Bateman, the genius programmer who built Ava, realises that he may have done exactly what he set out to do.

Nathan, drunk, mutters: “The good deeds a man has done before defend him.” The line is a reference to what J Robert Oppenheimer, the father of the atomic bomb, said after witnessing the first test of such a bomb, known as Trinity.

“I remembered the line from the Hindu scripture, the Bhagavad Gita,” Oppenheimer said, before uttering the now famous quote. “Now I am become Death, the destroyer of worlds.” – (Copyright New York Times News Service)