I read a fantastic conversation in Wired magazine between Joi Ito (Director of the MIT Media Lab) and Barack Obama yesterday. The conversation was about artificial intelligence, self-driving cars, and the future of technology. I would strongly recommend reading all of it. However, if you are short on time, here is an important excerpt from the discussion.
JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.
OBAMA: Right.
ITO: But they underestimate the difficulties, and I feel like this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important. In the Media Lab we use the term extended intelligence (Extended intelligence is using machine learning to extend the abilities of human intelligence). Because the question is, how do we build societal values into AI?
OBAMA: When we had lunch a while back, Joi used the example of self-driving cars. The technology is essentially here. We have machines that can make a bunch of quick decisions that could drastically reduce traffic fatalities, drastically improve the efficiency of our transportation grid, and help solve things like carbon emissions that are causing the warming of the planet. But Joi made a very elegant point, which is, what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?
ITO: When we did the car trolley problem (The car trolley problem is a 2016 MIT Media Lab study in which respondents weighed certain lose-lose situations facing a driverless car. E.g., is it better for five passengers to die so that five pedestrians can live, or is it better for the passengers to live while the pedestrians die?), we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people. They also said they would never buy a self-driving car. [Laughs.]
DADICH (Scott Dadich, WIRED’s editor in chief, who moderated the conversation): As we start to get into these ethical questions, what is the role of government?
OBAMA: The way I’ve been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a relatively light touch, investing heavily in research and making sure there’s a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more. Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad-based set of values. Otherwise, we may find that it’s disadvantaging certain people or certain groups.
I think this is a critical conversation – one that we must all have as we build toward the future.
Machines are not sources of disembodied truth. Anyone who has conducted any kind of analysis on a huge data set will tell you that. Machines carry our assumptions and our judgment with them. Similarly, artificial intelligence isn’t going to conjure up values on its own. We will teach AI to make these decisions. The self-driving car dilemma is just one such example.
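To make that concrete, here is a minimal, hypothetical sketch in Python (nothing here comes from the interview; the scenario, group names, and numbers are invented): a “model” that simply imitates biased historical decisions ends up reproducing the bias, because our judgment was already baked into the data we handed it.

```python
# A minimal, hypothetical sketch: a model is only as neutral as the
# data and assumptions behind it. All names and numbers are made up.

# Imagined historical loan decisions, skewed by past human judgment:
# each entry is (applicant_group, approved).
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    """Learn a per-group approval rate directly from the history."""
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

def predict(group):
    """Approve whenever past approvals for the group exceed 50%."""
    return approval_rate(group) >= 0.5

print(predict("A"))  # True  -- group A benefits from past leniency
print(predict("B"))  # False -- group B inherits past bias, not merit
```

No one told this toy model to discriminate; it learned our judgment from our data. That is the whole point.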
President Obama remarks later in the conversation that there aren’t enough people thinking about “the singularity.” That is true. Most of us are wrapped up in day-to-day nonsense that isn’t really going to matter in the grand scheme of things. As technology becomes a bigger part of our lives, the onus is on us to make sure we keep having discussions about how we build it. Machine values are not going to save us.
Human values are.
PS: How many heads of state can you imagine having such a thoughtful conversation about the future?