AI: are we building a partner, or a Frankenstein’s monster?

A Human Algorithm argues that moral imagination is imperative in the design of intelligent machines

There are only an estimated 10,000 people in the world with the education necessary to build AI. At the same time, there is a plethora of books being published, conferences being held, articles being written, summits being convened, and news reports being broadcast on the subject of emerging synthetic intelligence, its latest incarnations, and its potential impact. However, outside subjects such as cybersecurity and privacy, how many of us are really involved in the discussions? How many of us are represented?

Without teams that reflect the real world, even programmers with good intentions can miss the obvious. The first generation of virtual AI assistants had many sexist tendencies. Even now, “assistants” like Siri, Cortana and Alexa have female voices, whereas IBM’s Watson, a “more powerful” and advanced AI technology, has a male voice.

When a man reported in 2015 that a Google AI had tagged him and a friend, both of whom happened to be Black, as gorillas, Google’s “fix” was to stop tagging gorillas as a group altogether, rather than finding a better way to tag all types of living beings as what they truly are.

Some innovative designers are beginning to address these issues, such as the company Sage, which in 2018 introduced Pegg, “a gender-neutral robot assistant,” to combat deeply ingrained societal sexism. It is an attempt to decode underlying bias, and a modest but vital illustration of the crucial imperative in designing our intelligent machines: moral imagination.


Moral imagination is the human virtue necessary to guide our technological future: a passion, a curiosity for seeking out what is good, for creatively and equitably solving problems, for using our collective intelligence to move past groupthink and discrimination and toward a more holistic approach to building beneficial AI. In other words, universal design.

The responsibility for ensuring that future intelligent machines are fair, ethical and coded with a conscience that respects values equitably lies with the architects of the future – all of us. To do so effectively, we need a diversity of voices in the room, across spectra of gender, sexuality, race and experiences and across socioeconomic, religious and cultural lines: not only significant numbers of women and people of color participating, but also people of different ages, abilities, and viewpoints. Without a diverse group, representative of all we are, we will not be able to sufficiently train and teach our new intelligent creations who, what and why we are.

Researchers, scientists, engineers, executives, elected representatives and anyone else with the ability to broaden the discussion and encourage participation need to invite social scientists such as anthropologists and sociologists, as well as activists, ethicists, human rights advocates and other non-science experts across multiple demographics and cultures, into the technology conversation. We must also consider how to use our combinatorial creativity and collective intelligence for the holistic benefit of all. Don’t just pull up a chair for yourself. Pull one up for someone else, too – someone underrepresented, someone whose voice is missing and needs to be heard.

As we build AI, we have to continually ask ourselves whether we are building a partner that will help enhance our lives, or a Frankenstein’s monster. This requires stepping away from self-validating carousels to look inside ourselves. Let’s unleash our collective creativity to create technology that can rise to the potential of our ideals and highest aspirations as humans.

Beyond profit we find purpose. Beyond our individual selves we find one another. Finally, we must remember that as we expand the field of AI, include more diverse partners, and focus on creating AI for good, one of our most powerful partners in this will be the AI itself. AI can help us unleash our creativity, free up our time to become more empathetic, and do many things we have only dreamed of.

Mary Shelley warned us of the dangers of unchecked scientific discovery. How might the story have turned out if Dr Frankenstein had consulted with his peers, colleagues, experts in various fields, and friends?

Listening to a variety of voices and considering a spectrum of opinions, firmly guided and grounded by moral imagination, is crucial to ensuring that our future inventions and technologies are built with equity, fairness and goodness at their core, with a sense of humanity, in service of the rights, agency and dignity we all deserve to enjoy.
Taken from A Human Algorithm: How Artificial Intelligence is Redefining Who We Are by Flynn Coleman, out now from Melville House UK