How artificial intelligence will do our dirty work, take our jobs and change our lives

Machines will do the jobs we don’t like, but what if they also start to beat us at things we enjoy?

At its crudest, most reductive, we could sum up the future of artificial intelligence as being about robot butlers v killer robots.

We have to get there eventually, so we might as well start with the killer robots. If we were to jump forward 50 years to see what artificial intelligence might bring us, would we – Terminator-style – step into a world of human skulls being crushed under the feet of our metal and microchip overlords?

No, we're told by experts. It's highly unlikely. It might be much worse. And it would be our fault.

In his recent book Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark talks about the tiresome media fixation with red-eyed robots wielding guns who, having become self-aware, tick “destroy humanity” off the top of their to-do list.


We really shouldn’t worry about AI becoming evil, he writes. There’s no reason to believe this will happen. We should instead worry about creating a human-level intelligence – or even smarter – that becomes “competent”, but with “goals misaligned with ours”.

Basically, we could create something that is very good at what it does – just not good for us.

Besides, he says, robots are an unlikely threat because artificial intelligence doesn’t need a body. Just an internet connection.

Well before we get to the robot butlers, this almost invisible AI has already crept into our lives. Your Netflix or Spotify recommendations are driven by algorithms. AI decides what news source to feed you (and what not to), and whose social media you might like.

If the bank contacts you about an unusual transaction in your account, that’s AI keeping watch over it. Banks are increasingly using AI to assess credit scores and make quicker decisions about whether to give you that loan.
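How might a system decide a transaction is “unusual”? Banks’ real fraud models are far more sophisticated (and closely guarded), but the basic idea – learn a customer’s normal pattern, then flag anything that falls far outside it – can be sketched in a few lines of Python. The function, threshold and figures below are invented purely for illustration:

```python
# Toy sketch of the idea behind transaction monitoring: flag any new transaction
# that sits far outside a customer's usual spending pattern. The threshold and
# data here are invented for illustration only.
from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Return True if new_amount is more than `threshold` standard deviations
    away from the mean of past transaction amounts."""
    if len(history) < 2:
        return False  # not enough history to judge what is "usual"
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu  # identical past amounts: anything different stands out
    return abs(new_amount - mu) / sigma > threshold

past = [24.99, 31.50, 18.00, 42.75, 27.30, 35.10]
print(flag_unusual(past, 29.99))    # False - in line with normal spending
print(flag_unusual(past, 1850.00))  # True - the kind of outlier that prompts a call
```

Real systems weigh far more than amounts – location, merchant, timing, device – but the principle of learning what “normal” looks like and flagging the exceptions is the same.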

In 50 years, we can assume AI will anticipate your needs. It could order your food before you know you've run out, regulate the temperature of your house when it knows you're on your way home, organise your exercise regime, book your hair appointment, keep your medicines topped up, find a new job and so much more.

Meanwhile, billions are being spent on military uses for AI. In a frankly disturbing section, Tegmark goes on to explain just a few of the outrageous ways in which mean-minded states, groups or individuals could one day use AI to kill people with “weapons they don’t even understand”.

For instance, a totalitarian state might employ AI to create a pathogen that incubates in us long before we know we’ve been infected, and then forces the population to wear a “security bracelet” that contains the possible cure.

Or there’s the possibility of “bumblebee-sized drones that kill cheaply using minimal explosive power by shooting people in the eye”. Cheaper than the cost of a smartphone, they might target one politician or a swarm might kill an entire ethnic group.

Then again, as Tegmark says, reality might be far more frightening because an AI “could invent more effective weapons than we humans can think of”.

After which, it's impossible not to think about those inventions.

Babysitting

So it’s comforting to seek solace in the very 1950s ideal of the robot butlers instead. There are many people working on the positive side of robotics and AI, whose drive is to create technology that helps us, tends to us, saves our lives. Here, too, the focus is on putting into AI what we hope to get out.

An AI robot could prove a brilliant babysitter, without any inclination to assassinate your kids. As is already happening, robots will be used to care for and assist the ill, disabled and elderly.

An AI might react to natural disasters far quicker than humans, or even anticipate them.

Technology might gradually eliminate the tedium of much employment, and possibly even bring us to a point where work is an option rather than a necessity. It could take over mundane daily responsibilities and tasks, effectively “running” our lives so that we’re free to concentrate on leisure.


And eventually, should we be so inclined, AI might allow us to evolve into a new, upgraded version of humanity that will make us immortal and allow us to go out and explore the galaxy.

There is also the possibility that AI will replace some jobs, but create new ones. Yuval Noah Harari, in 21 Lessons for the 21st Century, points out that while drones have replaced pilots on missions, they still require a remote pilot, a tech support team and a data analyst to review photographic information gathered on a flight.

Tegmark’s career advice for a future generation is to avoid jobs with repetitive or structured actions (driving, credit analysis, warehouse work) in favour of those which involve interacting with people, finding creative solutions and working in unpredictable environments.

Teachers, artists, scientists, hairdressers – all good, he says. And machine intelligence might replace a doctor in diagnosing your illness, but can it replace the nurse who coaxes a child into giving blood, or gently breaks news of a terminal illness?

Winning at cards

This is the spectrum along which the future of AI might stretch: at one end is a utopian ideal in which technology caters for our every whim; at the other is the extinguishing of every human on the planet.

That things will in all likelihood fall somewhere in between doesn’t diminish the stakes. It has been compared to a potential first encounter with extra-terrestrial life; we should treat it as if we had been given warning of a spaceship’s arrival 50 years from now.

It means that much of the talk about AI isn’t just about the practical applications of the technology in coming decades, but about the profound implications for every human who will live to see it.

What will AI be used for? Who will “own” it? Will it offer benign guidance or act as a tool for surveillance and control? How do we develop positive AI and guard against potential catastrophe? Where will you work if an AI takes your job away? How much will its ethics be a reflection of its developer? And will we have control over AI, or will it ultimately have control over us?

The questions are being asked even while we don't know what breakthroughs lie ahead, or when they might happen. All we know is that they are happening.

We have most recently seen strides forward in what is called “narrow intelligence”, in which an AI outperforms humans in a specific task. Cars are an example. In 50 years, you can imagine most vehicles being self-driving. Narrow intelligence is also the kind found in gameplaying programmes that regularly make a media splash, whether it’s beating chess grandmasters or becoming experts at Atari games.

Just this month we heard about Pluribus, an AI which defeated several human poker champions in a Texas Hold ’Em tournament. It learned not just how to play – how to strategise, how to bet, how much, when to do it, when to fold, when to call – but also how to bluff. It did all this in just eight days. From scratch. By playing hundreds of thousands of hands against itself.
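Pluribus itself relies on a form of counterfactual regret minimisation, far beyond the scope of a sketch. But the core idea of self-play – two copies of the same learner improving by playing against each other – can be shown in miniature. The Python below is an illustrative toy, not Facebook’s system: it uses simple regret matching to teach itself rock-paper-scissors.

```python
# A toy illustration of self-play learning via regret matching - not the Pluribus
# algorithm, just the underlying idea in miniature. Two copies of the same learner
# play rock-paper-scissors against each other and, over many rounds, their average
# strategy converges towards the unexploitable equal mix.
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a draw."""
    beats = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 0 if a == b else (1 if (a, b) in beats else -1)

class RegretLearner:
    def __init__(self):
        self.regret = {a: 0.0 for a in ACTIONS}        # cumulative regret per action
        self.strategy_sum = {a: 0.0 for a in ACTIONS}  # running total of strategies played

    def current_strategy(self):
        # Play each action in proportion to its positive regret (uniform if none yet).
        positive = {a: max(r, 0.0) for a, r in self.regret.items()}
        total = sum(positive.values())
        strat = ({a: p / total for a, p in positive.items()} if total > 0
                 else {a: 1.0 / len(ACTIONS) for a in ACTIONS})
        for a in ACTIONS:
            self.strategy_sum[a] += strat[a]
        return strat

    def act(self):
        strat = self.current_strategy()
        return random.choices(ACTIONS, weights=[strat[a] for a in ACTIONS])[0]

    def learn(self, my_action, opp_action):
        # Regret = how much better each alternative would have done against that move.
        actual = payoff(my_action, opp_action)
        for a in ACTIONS:
            self.regret[a] += payoff(a, opp_action) - actual

    def average_strategy(self):
        total = sum(self.strategy_sum.values())
        return {a: s / total for a, s in self.strategy_sum.items()}

p1, p2 = RegretLearner(), RegretLearner()
for _ in range(100_000):  # "hundreds of thousands of hands", in miniature
    a1, a2 = p1.act(), p2.act()
    p1.learn(a1, a2)
    p2.learn(a2, a1)

print(p1.average_strategy())  # roughly a third each: the unexploitable mix
```

The real thing is vastly more complex, but the shape is the same: no human examples to copy, just a learner repeatedly nudged by its own regrets until the exploitable habits disappear.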

Curing cancer

As the leading computer scientist Stuart Russell has said of these self-taught gameplaying AIs: “If your baby did that – woke up [on] the first day in the hospital and by the end of the day was beating everyone, beating all the doctors at Atari videogames – you’d be pretty terrified.”

It brings us back to the notion that it’s not the technology we should worry about over the next 50 years, but the goals and ethics we infuse it with.

Even without malevolent intent, we’ll also need to ensure we don’t accidentally wipe ourselves out because we gave an AI goals without being clear on the rules.

In a recent opinion piece for The Irish Times, author Tom Chivers described a scenario in which we ask a powerful AI to cure cancer, so it simply nukes “the planet clean of humans” rather than deal with the tumours.


However, the thing that we perhaps most often think of as “AI” – a thinking, autonomous machine of the Arnie or HAL type – is defined as “artificial general intelligence”. A human-level intelligence, it should be able to do anything we can, including learning.

The next step after this is a “superintelligence”, of the sort that, in terms of brain power, could ultimately be to us as we are to other animal species.

Creating artificial general intelligence within 50 years would involve steep challenges, not least because we don’t properly understand intelligence or consciousness in ourselves (although even a super-intelligent computer wouldn’t necessarily be conscious or self-aware in the way we imagine).

Furthermore, there is an acknowledged paradox in how a computer can be superior at tasks we find hard (say, maths) but inferior at things we find easy – such as recognising faces or voices, walking, moving things around, socialising, recognising emotional needs.

Thanks to millions of years of evolution, the brain is an awesome tool for understanding the world in ways a machine finds very difficult.

Complex mathematics actually needs comparatively little computational power. Our day-to-day skills of perception, reaction, planning, recognition, motor skills and so much else have evolved over millions of years to look simple when in fact they require massive computational resources.

Dirty jobs

The gap between our intelligence and a possible artificial general intelligence is closing slowly, but we have no idea when, or if, we will develop a human-level artificial intelligence.

Surveys among researchers show a range of guesses, from a 50:50 chance of such an intelligence arriving within the coming half-century to it being hundreds of years away.

Still, given the pace of development in the field, even if we can’t be sure we’ll have created human-level AI, we can have a pretty good guess at certain changes by 50 years’ time.

We know that AI will change the world of work, for instance. One study claimed that 800 million jobs would be lost to automation by 2030, amounting to a shift as profound as any since the industrial revolution. Figures are speculative, but it seems fair to say that jobs that involve physical labour or data processing are in danger, as are jobs that involve driving.

It will impact on medicine, financial services, the military. And yet, there may be a migration to jobs we can’t anticipate yet, much as IT has changed the face of the workforce in recent decades. Nevertheless, a recent expert group report to the European Commission suggested a fund to bridge the skills gap into which many newly redundant workers might fall.

But we know already that the combination of robotics and AI can have hugely beneficial results. Trinity College Dublin is among those developing care robots to assist the disabled and elderly, with its reassuringly boxy and chirpy robot, Stevie II, recently unveiled.

“Generally speaking there were three main words that describe where you deployed robots, and they were dirty, dangerous and difficult,” explains Prof Kevin Kelly of the university’s Robotics and Innovation Lab.

“Think of bomb disposal, welding cars in a car plant, these kinds of things … Traditionally the robot was where people weren’t. So one of the biggest trends in robotics is robots that work with, alongside and for people rather than instead of people.” They call it “co-botics”.

Much of the research is in how robots can respond to and understand us if they’re to become commonplace in our homes. Humans can be tricky. “If you take a couple that have been married for 50 years, say, a raised eyebrow could be worth 20 minutes of conversation in a relationship – so there’s all these layers of history and communication.”

Does Kelly think we’ll have our robot butlers in 50 years? “I do, yeah. In less than half a century. In 20 years, I would be fairly surprised if we don’t have widespread use of robots in people’s homes.”


What would they do for an average family? “Babysitting. Company. The social aspects will be important, and that’s pretty close already. I think we already have robots who will cut your grass or hoover your floor, so it’s not difficult to see that being part of the portfolio.

“I think that will extend into other mechanical tasks. Unloading the dishwasher, feeding the cat, hanging out the washing. I’d be pretty sure we’ll have robots doing all of that.”

Controlling it

Finally, though, if we can predict one other thing with confidence, it’s that there will be a struggle for control and monopoly of AI.

Futurist Amy Webb’s new book The Big Nine looks at the possible concentration of awesome power over the next 50 years. Major work in AI research is already being driven by the likes of Google, Apple, Amazon and Facebook (which was involved in the poker-playing AI). It would mean this revolutionary technology – inserted into all our lives – could be driven by shareholder demands rather than the best possible outcome for humanity.

Alongside the current US giants – Google, IBM, Amazon, Microsoft, Apple and Facebook – she adds her concerns about the Chinese companies Alibaba, Baidu and Tencent. Already, these are the companies with the most control over cloud computing and data, and they are driving much of the AI research.

As we’re already complacent about the idea of a small number of tech giants organising, and intruding on, our lives, will we be comfortable with the likes of Google, Amazon and Facebook being among a handful of giant corporations which, owning so much of our personal data, might use AI in the coming decades to wield massive influence over our lives?

“I think our general level of scientific and technological literacy is poor and the potential impact of that is very dangerous, because it puts the control of technology in the hands of the few,” adds Kelly.

“One of the manifestations of that is that you have elected representatives making decisions who are very poorly equipped to make decisions about technology and the impact of technology.”

So, to prepare for our world in 50 years’ time, we’ll need to know now what kind of future we want and figure out how to get there. If you’re going to buy that robot butler, you’ll want to understand the instructions and ensure you get a watertight guarantee.

5 ways artificial intelligence could change the world in 50 years

Medicine: Not feeling too well? Got something more serious? Doctor AI will draw up a treatment specific to your genome.

Care robots: Already becoming a part of some people's lives, care robots will keep an eye on those who need it, and help with everything from preparing the meals to health regimes.

Transport: Your grandchildren will marvel at stories of how you would use your own feet, hands and brain to drive yourself everywhere. All vehicles will be self-driving, and safer for it.

At Home: The era of the robot butler is finally near. No longer will you need to wash your own dishes, clean your windows, mow the lawn and mind your own kids. All that will arrive in a box.

Warfare: Wars will increasingly be fought online – as already hinted at in recent years. But there will also be deadlier, more clinical, scarier, and tremendously expensive ways for armies to fight and kill each other. No, they won’t look like Arnold Schwarzenegger.

Shane Hegarty

Shane Hegarty, a contributor to The Irish Times, is an author and the newspaper's former arts editor