AI and the need to tread carefully
There is a doomsday scenario for future artificial super-intelligence that many prominent scientists and figures in the technology sector have begun warning of in recent years, writes Dr Grant Otsuki.
“Technologies have developed to the point where they wield incredible power over human life and happiness, but they have a logic that people find as impenetrable as it is efficient. At times, it is immoral. We are no longer in control, and live precariously under a technological master that doesn’t stop to listen to us.”
Automated cars and weapons, robot factory workers and doctors have incredible potential to improve human life, but if we’re not careful they’re not only going to take our jobs, they’re going to put us out of existence. If humanity is going to survive, we need to figure out ways to make sure the machines know how to treat us right.
Perhaps the clearest expression of this warning was a 2015 open letter from the Future of Life Institute signed by Stephen Hawking, Elon Musk and some 8,000 others, advocating for expanded research into reaping the benefits of artificial intelligence, while “avoiding potential pitfalls … our AI systems must do what we want them to do”.
On Twitter, Musk’s predictions are more alarming, suggesting artificial intelligence may be the “most likely” cause of World War III.
There is reason to be cautious about artificial intelligence, but not just for the reasons Musk puts forward. These statements provide a stage for how we should think about the future, but is the way they stage it the best for understanding what this technology is already doing to our world?
As a cultural anthropologist, I look at things and people and I see relationships. A table is a table, but it was also made by someone. The wood, nails and glue were produced by some people and put together by others. It was transported and sold by others. When all these relationships among people and things happen to sync up and eventually converge in the same time and place, then we have a table.
Now let’s imagine a scenario that is getting less far-fetched by the day, if we take Musk at his word. What if my table were to start walking around and charging my guests to offer its services? What if it decided it doesn’t need me at all and ousts me from my own home so it can design a perfect utopian kitchen with its chair and coffeemaker allies?
In responding to this alarming development, I could try to make friends with the table and to teach it that it has to respect my life and lifestyle. I could even lobby my government to ensure any research into autonomous intelligent tables includes studies of how to make them ethical.
This is what the signatories of the Future of Life Institute’s open letter advocate. They recommend “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial”. They seem worried their tables are going to start getting lippy, and want to make sure they know their place.
But what if we look backstage? Behind what we call artificial intelligence are many relationships, and the autonomy of AI appears that way because we let ourselves forget the relationships required to create it. There need to be programmers and hardware engineers, not to mention people to gather the raw materials that make their work possible.
If we continue following these connections, we inevitably find troubling images. For instance, AIs, like all information systems, rely on semiconductors, which require special ores sourced from mines, often in the developing world.
According to an Amnesty International investigation, such mines in the Democratic Republic of Congo were employing children as young as seven years old. The miners were beaten by guards, had no protective equipment and suffered health problems as a result. There are similar stories for every step of the supply chain that brought stuff from the mines and turned it into the smart device before you.
The reality of the production cycle highlights the irony in warnings of AI disaster like Musk’s. The earnestness with which such figures profess their desire to help all of humanity is attractive. But by focusing attention on the future prospect of rogue AI, they keep a whole set of often dehumanising relationships in the here and now backstage.
We should remember that the cultural cost of fixing our attention on out-of-control autonomous AI is to keep people like the miners in the Congo out of sight. But if we can peek behind the curtain of abstraction, we’ll see that to a great many people the world is already under the control of a powerful machine indifferent to their wellbeing. If our goal is to save humanity, our first step should be to understand and improve the machine we already have.
Dr Grant Otsuki is a Lecturer in Cultural Anthropology in the School of Social and Cultural Studies at Victoria University of Wellington.