One of the ways my partner and I are well-suited is that we both like board games, and I am not very good at them. This helps, because my partner is a gracious winner but an appalling loser. Once, in her early teens, during a game of draughts with her sister, she responded to an unwinnable position by turning over the table.
If artificial intelligence does destroy human life, it will almost certainly be more like my partner’s reaction to defeat than the destructive intelligence from the Terminator films. Catastrophe will come not when a sophisticated intelligence decides to use its power for deliberate evil, but when the easiest way to fulfil its programming and to “win” is to turn over the table.
The threat that artificial intelligence will cause some kind of societal disaster is, of course, a reason we should worry about research, ethics and transparency. But this focus on the potential for catastrophe can sometimes distract from the more mundane dangers. If your satnav directs you towards the edge of a cliff, as happened in 2009 to Robert Jones, who was convicted of driving without due care and attention after following its instructions, then it is not a societal-level tragedy. But it may be a personal one if it costs you your life, your job or even just your driving licence.
One unhappy consequence of constant dire predictions about the absolute worst outcomes of artificial intelligence or machine-learning programs is that they encourage a sort of “well, they haven’t killed us yet” complacency about how prevalent these systems already are in public policy and business decision-making.
Doomed attempt
A more common problem is that, for policymakers and business leaders alike, the word “algorithm” can sometimes be imbued with magic powers. A good recent example is the UK government’s doomed attempt to assign students grades during the pandemic. But an algorithm is merely a set of data fed through rules or mathematical formulas to produce an outcome. Because exams had been cancelled, no UK student sitting their GCSEs or A-levels had much in the way of meaningful data about their own performance, so the UK’s “algorithm” was essentially arbitrary at an individual level. The result was a public outcry, an abandoned algorithm and rampant grade inflation.
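To make that concrete, here is a deliberately simplified sketch, in Python, of “data fed through rules to produce an outcome”. It is not the model the exam regulator actually used; the function, its inputs and the example figures are invented for illustration. It shows how, when there is no meaningful data about the individual, the output is driven almost entirely by the school’s past results.

```python
# A hypothetical illustration, not the real exam-grading model:
# a student's grade is read off their school's historical grade
# distribution according to their teacher-assigned rank.

def assign_grade(teacher_rank, cohort_size, historical_grades):
    """Map a student's rank in their cohort onto the school's past grades,
    sorted from best to worst."""
    past = sorted(historical_grades)  # letter grades sort A < B < C ...
    position = round((teacher_rank - 1) / max(cohort_size - 1, 1) * (len(past) - 1))
    return past[position]

# The same rank (3rd of 10) at two schools with different histories:
print(assign_grade(3, 10, ["A", "A", "B", "B", "C", "C", "D", "D", "E", "E"]))  # -> "B"
print(assign_grade(3, 10, ["C", "C", "D", "D", "D", "E", "E", "E", "F", "F"]))  # -> "D"
```

Two students with the same teacher ranking receive very different grades purely because of where they happen to be taught, which is roughly the unfairness that prompted the outcry.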
The most worrying use of algorithms in policy involves so-called “black box” algorithms: those in which the inputs and processes are hidden from public view. This may be because they are considered proprietary information: for example, the factors underpinning the Compas system, used in the United States to measure the likelihood of reoffending, are not publicly available because they are treated as company property.
This inevitably poses issues for democracy. Any system designed to measure the likelihood of someone reoffending has to make a choice between letting out people who may in fact go on to reoffend and continuing to imprison people who are ready to become productive members of society. There is no “right” or “fair” answer here: algorithms can shape your decision-making, but the judgment is ultimately one that has to be made by politicians and, indirectly, their voters.
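A small, hedged sketch with invented numbers (not the proprietary Compas inputs) shows why this is a value judgment rather than a technical one: wherever you set the release threshold on a risk score, you simply trade one kind of error for the other.

```python
# Illustrative only: invented (risk_score, actually_reoffended) pairs,
# not real data and not how any particular tool is built.
cases = [(0.9, True), (0.8, False), (0.7, True), (0.6, False),
         (0.5, True), (0.4, False), (0.3, False), (0.2, True), (0.1, False)]

for threshold in (0.3, 0.5, 0.7):
    released = [reoffended for score, reoffended in cases if score < threshold]
    detained = [reoffended for score, reoffended in cases if score >= threshold]
    print(f"threshold {threshold}: "
          f"{sum(released)} released who went on to reoffend, "
          f"{detained.count(False)} detained who would not have")
```

Lowering the threshold keeps more people in prison who would never have reoffended; raising it releases more people who will. Choosing between those errors is precisely the political question.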
As the statistician David Spiegelhalter has observed, there is no practical difference between judges using algorithms and judges following sentencing guidelines. The important difference, and a significant one, is that sentencing guidelines are clearly understood, publicly available and subject to democratic debate.
Opaque decision-making
The UK’s doomed exam algorithm was not a “black box” due to intellectual property laws or a desire for a business to protect its interests, but a result of the British state’s default preference for opaque decision-making. Had the workings of the process been made available earlier, the political opposition to it would have become clear in time to find a more palatable solution.
The other form of black box algorithm is one in which the information is publicly available but too complex to be readily understood. This, again, can have dire implications. If the algorithm that decides who is made redundant cannot reasonably be understood by employees or, indeed, employers, then it is a poor tool for managers and one that causes unhappiness. In public policy, if an algorithm’s workings are too complex, they can confuse debate rather than help policymakers come to better decisions.
Spiegelhalter proposes a four-phase evaluation process for algorithms and machine learning in public policy and the workplace, comparable to the approval process that new pharmaceuticals in the UK must go through. One reason the plan is a good one is that it might help avoid a world-ending mistake; but it could also avert minor tragedies and public policy failures.
– Copyright The Financial Times Limited 2022