Last Wednesday BBC R4 broadcast the first of four weekly lectures by Professor Stuart Russell, a world-renowned AI expert at UC Berkeley. The talks (followed by Q&A) examine the impact of AI on our lives and discuss how we can retain power over machines more powerful than ourselves.
I think this area (e.g. AI commercialisation, AI governance, AI safety, AI ethics, AI regulation etc.) is going to be one of the hot topics of the next decade, alongside trends including climate change, fintech (crypto), AR/VR and quantum computing. Accordingly, I couldn’t wait to hear Professor Russell speak.
The event blurb states the following:
The lectures will examine what Russell will argue is the most profound change in human history as the world becomes increasingly reliant on super-powerful AI. Examining the impact of AI on jobs, military conflict and human behaviour, Russell will argue that our current approach to AI is wrong and that if we continue down this path, we will have less and less control over AI at the same time as it has an increasing impact on our lives. How can we ensure machines do the right thing? The lectures will suggest a way forward based on a new model for AI, one based on machines that learn about and defer to human preferences.
As I write, I have heard two of the talks, both of which have been absolutely fascinating (and, quite honestly, scary, especially regarding military applications of AI, which are already here). I didn’t take notes; however, the BBC interviewed Professor Russell ahead of the talks. I have provided a summary of the Q&A below, which is well worth a read:
How have you shaped the lectures?
The first drafts that I sent them were much too pointy-headed, much too focused on the intellectual roots of AI and the various definitions of rationality and how they emerged over history and things like that.
So I readjusted – and we have one lecture that introduces AI and the future prospects, both good and bad.
And then, we talk about weapons and we talk about jobs.
And then, the fourth one will be: “OK, here’s how we avoid losing control over AI systems in the future.”
Do you have a formula, a definition, for what artificial intelligence is?
Yes, it’s machines that perceive and act and hopefully choose actions that will achieve their objectives.
All these other things that you read about, like deep learning and so on, they’re all just special cases of that.
But could a dishwasher not fit into that definition?
It’s a continuum.
Thermostats perceive and act and, in a sense, they have one little rule that says: “If the temperature is below this, turn on the heat. If the temperature is above this, turn off the heat.”
So that’s a trivial program and it’s a program that was completely written by a person, so there was no learning involved.
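As an aside, that “little rule” really is only a couple of lines of code. Here is a minimal sketch of the kind of hand-written thermostat rule Russell describes (the thresholds and names are purely illustrative, not from the lecture):

```python
# A hand-written thermostat rule: fixed thresholds, no learning involved.
LOW_C, HIGH_C = 19.0, 21.0  # illustrative temperature thresholds in Celsius

def thermostat(temperature_c: float, heater_on: bool) -> bool:
    """Decide whether the heater should be on, given the current reading."""
    if temperature_c < LOW_C:
        return True    # too cold: turn the heat on
    if temperature_c > HIGH_C:
        return False   # too warm: turn the heat off
    return heater_on   # in between: leave the heater as it is
```

It perceives (reads a temperature) and acts (switches the heater), but every decision was written down in advance by a person – the trivial end of the continuum.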
All the way up the other end – you have the self-driving cars, where the decision-making is much more complicated, where a lot of learning was involved in achieving that quality of decision-making.
But there’s no hard-and-fast line.
We can’t say anything below this doesn’t count as AI and anything above this does count.
And is it fair to say there have been great advances in the past decade in particular?
In object recognition, for example, which was one of the things we’ve been trying to do since the 1960s, we’ve gone from completely pathetic to superhuman, according to some measures.
And in machine translation, again we’ve gone from completely pathetic to really pretty good.
So what is the destination for AI?
If you look at what the founders of the field said, their goal was general-purpose AI – meaning not a program that’s really good at playing Go or a program that’s really good at machine translation, but something that can do pretty much anything a human could do, and probably a lot more besides, because machines have huge bandwidth and memory advantages over humans.
Just say we need a new school.
The robots would show up.
The robot trucks, the construction robots and the construction management software would know how to build it, how to get permits, and how to talk to the school district and the principal to figure out the right design for the school, and so on and so forth – and a week later, you have a school.
And where are we in terms of that journey?
I’d say we’re a fair bit of the way.
Clearly, there are some major breakthroughs that still have to happen.
And I think the biggest one is around complex decision-making.
So if you think about the example of building a school – how do we start from the goal that we want a school, then have all the conversations happen, then have all the construction happen? How do humans do that?
Well, humans have an ability to think at multiple scales of abstraction.
So we might say: “OK, well the first thing we need to figure out is where we’re going to put it. And how big should it be?”
We don’t start thinking about should I move my left finger first or my right foot first, we focus on the high-level decisions that need to be made.
You’ve painted a picture showing AI has made quite a lot of progress – but not as much as some might think. Are we at a point, though, of extreme danger?
I think so, yes.
There are two arguments as to why we should pay attention.
One is that even though our algorithms right now are nowhere close to general human capabilities, when you have billions of them running they can still have a very big effect on the world.
The other reason to worry is that it’s entirely plausible – and most experts think very likely – that we will have general-purpose AI within either our lifetimes or in the lifetimes of our children.
I think if general-purpose AI is created in the current context of superpower rivalry – you know, whoever rules AI rules the world, that kind of mentality – then I think the outcomes could be the worst possible.
Your second lecture is about military use of AI and the dangers there. Why does that deserve a whole lecture?
Because I think it’s really important and really urgent.
And the reason it’s urgent is because the weapons that we have been talking about for the last six years or seven years are now starting to be manufactured and sold.
So in 2017, for example, we produced a movie called Slaughterbots about a small quadcopter about 3in [8cm] in diameter that carries an explosive charge and can kill people by getting close enough to them to blow up.
We showed this first at diplomatic meetings in Geneva and I remember the Russian ambassador basically sneering and sniffing and saying: “Well, you know, this is just science fiction, we don’t have to worry about these things for 25 or 30 years.”
I explained what my robotics colleagues had said, which is that no, they could put a weapon like this together in a few months with a few graduate students.
And in the following month, so three weeks later, the Turkish manufacturer STM [Savunma Teknolojileri Mühendislik ve Ticaret AŞ] actually announced the Kargu drone, which is basically a slightly larger version of the Slaughterbot.
What are you hoping for in terms of the reaction to these lectures – that people will come away scared, inspired, determined to see a path forward with this technology?
All of the above – I think a little bit of fear is appropriate: not fear when you get up tomorrow morning and think my laptop is going to murder me or something, but fear when thinking about the future – I would say the same kind of fear we have about the climate or, rather, should have about the climate.
I think some people just say: “Well, it looks like a nice day today,” and they don’t think about the longer timescale or the broader picture.
And I think a little bit of fear is necessary, because that’s what makes you act now rather than acting when it’s too late, which is, in fact, what we have done with the climate.
The Reith Lectures will be on BBC Radio 4, BBC World Service and BBC Sounds.