What would happen if the lights went out?
In the first of three Victoria University of Wellington-hosted public debates about the opportunities and challenges presented by artificial intelligence, the University’s Associate Professor Will Browne invited audience members to imagine a world without electricity.
“And that’s not unbelievable,” said Browne, an AI and machine learning researcher in the School of Engineering and Computer Science.
“There are a lot of people who exist quite happily in the modern world without electricity. But hey, if the lights went out, if your mobile stopped working, if the internet ceased, if your fridge shut down and all your food went off, and then you couldn’t wash your dishes or you couldn’t clean your clothes, what would the world be like then?
“What I would like to propose is that in 10, 20, 30 years’ time AI is going to be as integrated in everyday life as electricity is today.”
Organised by the steering group for the University’s ‘Spearheading digital futures’ area of academic distinctiveness, The AI Debates opened with a discussion of AI and automation (moving on to AI’s implications for education and employment on subsequent evenings).
Questions about how automation might affect society are not new, observed the steering group’s Chair, Professor Neil Dodgson, also from the School of Engineering and Computer Science.
“I was reading a book over the weekend by [theologian] Lloyd Geering summarising a series of lectures he gave in 1985 at St Andrew’s on the Terrace [in Wellington] about artificial intelligence and about whether computers could really think and about whether human beings had the wit and sense to be able to cope with computers that could think.
“That’s a long time ago; people have been thinking about this for a long time. But things have become much sharper and more in focus in the past 10 years because we have now developed computer systems able to do amazing things.”
For Browne, one of the “pitfalls and worries” of AI is the extent to which it might reproduce human biases.
“Amazon designed a really good AI system to select their employees. What they found was it was reflecting back the biases that had been held in the tech community for a long time. Senior management happened to be male so it was recommending senior management should be male.”
Associate Professor Hon Luamanuvao Dame Winnie Laban, the University’s Assistant Vice-Chancellor (Pasifika), wondered whether technological advances will heal or aggravate New Zealand’s ‘digital divide’.
“Three thousand years ago, the discovery of new technology [for long-range ocean voyaging by canoe] enabled Pacific people to explore, inhabit and settle a new world […] Pacific people are early adopters of new technology. We are keen to explore, inhabit and settle this new world. We keep in contact with our global families through Facebook, Instagram, WhatsApp, etc. Pacific people have the ability to use new technology – check out PikPok to see the work of Samoan gaming programmer Tyrone McAuley. Our abilities are not in question. The problem is our young people face significant challenges of access. Access to new technology is primarily a question of economics, of family resources. The digital divide is primarily a measure of wealth.”
Democratising access to new technology, opening it up to economically disadvantaged Pacific Island, Māori, migrant, refugee and other New Zealanders, could reduce the digital divide, said Laban.
“I believe we have a moral and social responsibility to find ways and means of providing equal access to new technology for all our children. This has significant costs, but the consequence of unequal access to new technology is the growth of a class of digital elite separated from a class of digitally illiterate poor.”
Laban also raised the need to discuss and address who makes the important decisions around AI.
She cited the example of AI being used in military applications, including an algorithm to enable drones to identify, track and destroy a target.
“Who makes the decision to shoot or not? A politician? A military operative? A soldier in real time? Or has the life or death decision been automated? Who then is responsible for the consequences? The politician who set the policy? The military personnel who ordered and carried out the mission? Or the contractor who wrote the program?”
Matthew Bartlett, Executive Director of Citizen AI, a charitable company that researches, develops and promotes AI systems for public benefit, worries AI will exacerbate the “two-track economy we have allowed to develop in New Zealand” and its “split between people who own and people who don’t”.
Showing footage of what its developers say is the world’s first self-driving truck, Bartlett said: “It doesn’t take much imagination to envisage a New Zealand a few years down the track where most of the trucks are driving themselves. We’ve got something in the order of 26,000 truck drivers in New Zealand at the moment. There will be other jobs. We will find things for them to do. But what sort of jobs will they be, I wonder.”
Nor is it just a blue-collar phenomenon, he said, describing how Primer, a San Francisco start-up founded by New Zealand-born entrepreneur Sean Gourley, is built on software that automatically writes short briefings after reading long-form documents.
“I think it’s fair to say there’s quite a lot or at least a reasonable slice of the jobs that happen down this end of town [in Wellington’s government and business district] that involve reading and summarising in roughly this sort of manner.”
Bartlett warned that it can’t just be owners of robots and other forms of AI that benefit from technological advances. “Do we want a country where there’s a race to the bottom with workers competing with robots to see who can get the jobs done more cheaply?”
One option to alleviate inequities might be a universal basic income, he said. Another might be people working reduced hours with unemployment compensation making up their pay shortfall.
During the global financial crisis, Germany at times had 1.5 million workers in such a scheme, and it helped the country avoid the massive layoffs others experienced, said Bartlett.
“To me a really key ethical question is how do we widely share the benefits of AI and automation?”
Also on the panel was Sean Audain, City Innovation Lead at Wellington City Council.
There is “a whole lot of human conceit” in how we think about AI, pointed out Audain.
Showing footage of a flock of birds making deft, abrupt changes in direction while flying in unison, he said: “We tend to measure intelligence by ourselves, but who’s better at moving around?”
Artificial intelligence might not be based on human intelligence, said Audain. “And that takes us to some interesting places.”
This is the first of three IdeasRoom reports from The AI Debates. The next will be on artificial intelligence and education.