How to avoid being replaced by a robot at work

Three types of jobs – surprising, social or scarce work – are very difficult for AI to do


Recently, I was at a party in San Francisco when a man approached me and introduced himself as the founder of a small artificial intelligence (AI) start-up.

As soon as the founder figured out that I was a technology writer for The New York Times, he launched into a pitch for his company, which he said was trying to revolutionise the manufacturing sector using a new AI technique called “deep reinforcement learning”.

The founder explained that his company’s AI could run millions of virtual simulations for any given factory, eventually arriving at the exact sequence of processes that would allow it to produce goods most efficiently. This AI, he said, would allow factories to replace entire teams of human production planners, along with most of the outdated software those people relied on.

“We call it the Boomer Remover,” he said.


“The . . . Boomer . . . Remover?” I asked.

“Yeah,” he said. “I mean, that’s not the official name. But our clients have way too many old, overpaid middle-managers who aren’t really necessary anymore. Our platform lets them replace those people.”

The founder, who appeared to be a few drinks deep, then told a story about a client who had been looking for a way to get rid of one particular production planner for years, but could never figure out how to fully automate his job away. But mere days after installing his company’s software, the client had been able to eliminate the planner’s position with no loss of efficiency.

Slightly stunned, I asked the founder if he knew what had happened to the production planner. Was he reassigned within the company? Was he just laid off unceremoniously? Did he know that his bosses had been scheming to replace him with a robot?

The founder chuckled.

“That’s not my problem,” he said, and headed to the bar for another drink.

*****

Workers have been worried about being replaced by machines for millennia, quite literally. (In 350 BCE, Aristotle mused that automated harps might put musicians out of work.) But these fears have grown in recent years, as AI and other automation technologies prove capable of displacing enormous numbers of human workers.

Previous waves of automation mostly affected blue-collar manufacturing workers. But the latest wave threatens to replace white-collar professionals in fields like law, medicine and finance. And the Covid-19 pandemic has accelerated corporate adoption of these technologies, making it even more urgent for us to figure out a defence strategy.

For my new book, Futureproof: 9 Rules for Humans in the Age of Automation, I’ve been researching this new AI revolution, interviewing hundreds of experts, reading the academic literature and looking for stories of people throughout history who have successfully adapted to technological change.

And I’ve come up with a kind of cheat sheet – steps every worker can take to become more futureproof, whether you’re a factory worker or a Fortune 500 executive.

Here are three of my favourite rules.

1 Be more surprising, social and scarce

For years, conventional wisdom has held that in order to succeed in the future, we need to study engineering, learn to code and optimise our lives for maximum productivity. But the experts I spoke with said that’s exactly backward. Instead of trying to turn ourselves into machines, they said, we need to develop our uniquely human skills, in order to excel at the kinds of things machines can’t do well.

In particular, three types of jobs – surprising, social or scarce work – are very difficult for today’s AI to do at anything approaching a human level.

Surprising work is work that involves handling curveballs, unique situations and unexpected challenges – all of which are anathema to AI, which prefers its tasks regular and predictable.

Social work is work that fulfils emotional needs, rather than material needs. Therapists, ministers and teachers all fall into this category, as do people who are good at making other people feel things, like marketers and social media influencers.

Scarce work is work that requires extraordinary ability or rare combinations of skills, or that has a low fault tolerance. (When you’re having a medical emergency, you want a human doctor present in the room, even if a robot could technically diagnose you just as accurately.)

No job is 100 per cent surprising, social and scarce. But every job can be made more surprising, social and scarce by focusing on the most human parts, and leaving the mundane and repetitive parts to the machines.

2 Leave handprints

Another principle I learned from AI experts is that in the automated future, there will be more value in things that bear the imprint of their human creators, and less value in things that are done by machines.

You can already see this shift in action. Our economy is awash in mass-produced consumer goods that used to function as status symbols – flat-screen TVs, dishwashers, hot tubs – but have now become cheaper and more widely accessible.

These days, a better indicator of luxury is how little technology is involved in producing the things you consume. Handmade furniture, bespoke clothing, custom art on your walls – these are the kinds of purchases that signal high status, precisely because they require a lot of human work.

Social scientists call this the “effort heuristic,” and it’s going to be an important principle for the workers of the world to absorb. The effort heuristic explains a lot about the rise of craft breweries, farm-to-table restaurants, and artisanal Etsy shops.

It explains why vinyl records and printed books are still popular, even as streaming music services and e-books have become widely available, and why high-end cafes can still charge $7 (€5.80) for a cappuccino, even though most of us have machines capable of making perfectly good coffee in our homes and offices.

It also explains why the inverse is true: that when we hide or eliminate the human effort behind something, we often devalue it. (My favourite example of this is what happened when Facebook made it trivially easy to wish your friends a happy birthday by clicking a single button. All of a sudden, it no longer felt as good to open your Facebook feed on your birthday and see a stream of good wishes.)

For me, leaving handprints means putting more of myself in my writing – using the first person to make it clear that a human is writing this column, and not GPT-3 or some other journalist-replacing algorithm.

For others, leaving handprints might mean sending personalised notes to clients, or adding funny asides to PowerPoint presentations, or just being more transparent about your process. (One of my favourite examples of handprints is the way that Heath Ceramics, a local ceramics studio near me in the Bay Area, stamps each piece with the name of the artist who glazed it.)

If people can’t tell whether your work is human, they’ll assume it’s not, and value it accordingly.

3 Treat AI like a chimp army

While researching my book, I met lots of corporate executives who are convinced that AI is an incredible, transformative technology, and that using AI in the workplace is a no-brainer decision, no more consequential or disruptive than switching the salad dressings in the cafeteria. But if you talk to the computer scientists working on the front lines of AI and machine learning, they’ll tell you that implementing AI in the workplace is a high-stakes, high-risk decision, and one that can have devastating consequences if it goes wrong.

Take what happened with Watson, the IBM-owned AI that famously defeated a Jeopardy! champion in 2011. In 2013, IBM teamed up with The University of Texas MD Anderson Cancer Center to develop a new Watson-based oncology tool that could recommend treatments for cancer patients. But the programme had flaws.

In 2018, internal tests obtained by the health news publication Stat found that Watson had been improperly trained on data from hypothetical patients rather than from real ones and, as a result, made some faulty recommendations for treatment. In one case, Watson reportedly recommended that doctors give a 65-year-old lung-cancer patient with severe bleeding a type of medicine that could have worsened his bleeding.

The metaphor I use for most of today’s advanced AI is an army of chimpanzees. AI is smart, but not as smart as humans. It can follow directions if it has been properly trained and supervised, but it can be erratic and destructive if it hasn’t.

With years of training and development, AI can do superhuman things – like filtering spam out of a billion email inboxes or creating a million personalised music playlists – but it’s not particularly good at being thrown into new, high-stakes situations.

And just as you’d be extremely careful before letting an army of chimps loose in your office, corporate leaders should be wary of giving AI and automation responsibilities they aren’t equipped to handle. If they aren’t, trouble will follow.

Futureproof: 9 Rules for Humans in the Age of Automation is published by John Murray Press