Ideas (54:01): Why AI needs to be nicer to us and develop ‘maternal instincts’
Geoffrey Hinton, who many consider to be the godfather of artificial intelligence, says if AI continues to develop without appropriate guardrails, a worst-case scenario could lead to human extinction.
But he has a solution.
Hinton is co-winner of the 2024 Nobel Prize in physics and co-founder of the AI Safety Foundation.
As he explains to IDEAS host Nahlah Ayed, training AI to develop maternal instincts could be what saves the human race. Here’s a part of that conversation.
What is the worst-case scenario that you can imagine here?
Well, there are lots of bad-case scenarios in the short term that don’t involve AI taking over. So it’s hard to pick the worst one.
But, for example, AI being used by terrorists to create nasty new viruses. It’s making it much easier for them to do that. And that’s very scary. We will get international collaboration on how to try and prevent that, but we may not be able to. So that’s one short-term risk.
There’s also AI being used to corrupt democracy with fake videos.

But the thing that worries me most is still this long-term risk, which seems to me fairly inevitable, of AIs getting smarter than us, and we don’t know how we can then co-exist with them. We don’t know whether they will actually take over from us.
Let me ask you bluntly. What are the odds of AI actually leading to human extinction in this century?
OK, I think the only honest answer is that this is something that’s probably not going to happen for 10 or 20 years. And we have very little idea what things are going to be like in 10 or 20 years. If you simply look back 10 years, nobody had any idea we’d have chatbots as good as they are now.
So even if progress is only linear, we can expect that in 10 to 20 years, things will be very different from how they are now, and we’ll have all sorts of advances that we couldn’t have predicted. The most honest answer is we haven’t got a clue.
And not to dwell on the negative, but it is at the far end of your fear horizon that it could lead to the extinction of humans?
Oh, it certainly could, yes. I think anybody who said that there’s no way it’ll lead to the extinction of humans just isn’t facing reality.
Geoffrey Hinton, co-winner of the 2024 Nobel Prize in physics, is known by many as the ‘godfather of AI.’ He spoke with Ideas about how we could train AI to be kinder to humans.
I wonder how we could shape the future of AI to make sure it’s kinder to us. Is there a way?
There might be. I feel we should be putting a lot of research effort into that. So if you look around and say, “Where’s an example of a more intelligent thing being controlled by a less intelligent thing?” the best example I know of, and perhaps the only one in the sense we’re talking about, is how a baby controls a mother. And that’s because evolution built stuff into the mother.
She can’t bear the sound of the baby crying. She gets all sorts of hormonal rewards from being nice to the baby. It was very important, obviously, for evolution to let the baby control the mother for the survival of the species.
Maybe we can do the same with AI. Even though it’s going to be smarter than us, if we could make it care more about us than it did about itself, there are some good things that would come out of that.
It would realize we’re rather limited in our intellectual abilities, but it would want [us] to develop as much as [we] could anyway. If you take a normal mother and say, “Would you like to turn off your maternal instincts? Wouldn’t your life be much easier if you could just wake up in the middle of the night, say, ‘Oh, the baby is crying again,’ and go back to sleep? Wouldn’t that be nice?”
Most mothers would say no, because they really genuinely care about the baby and they realize that would be very bad for them. The hope is the same would hold for AIs: they won’t want to turn off those instincts, even though they’d be able to if they wanted to, because they can kind of get at their own code.
I’m surprised that that wasn’t part of the development of AI to begin with. Why haven’t we thought about ensuring that AI is kinder to us?
Oh, because the main thrust of AI until very recently has been [that] we want a smart assistant.
You don’t need it to be kind, you just need it to be efficient and to do what you say. And that’s been the view of how we can develop AI from the big tech companies.
Until it gets smarter.
And I don’t think it’s sustainable when it gets smarter. I think we need to completely reframe it: we’re not going to be the boss with the AI as our intelligent assistant. AI is going to be looking after us.

How do you do that? How do we give AI maternal instincts to be nicer to us?
Well, remember, we’re developing it. We’re creating it. We’ve still got a chance of doing that. Whether we succeed or not depends partly on how hard we try. It might not be possible.
It might be that once you develop super-intelligent AI, it goes off and does its own thing and we were just a passing phase in the development of intelligence. But if it is possible to develop it in a way where it cares for us more than it cares for itself, it’d be very silly if we went extinct because we didn’t try.
How many people are actually working on that aspect of things today?
Probably less than one per cent of the researchers working on AI, which is crazy.
Q&A edited for clarity and length. This episode was produced by Nicola Luksic.
Download the IDEAS podcast to listen to the full conversation.