Mo Gawdat
Author of Scary Smart and Former Google X Executive
Mo Gawdat joins Willy Walker on this week’s Walker Webcast as he breaks down the evolution of AI and its implications for our world.
Former Google X executive Mo Gawdat knows a thing or two about artificial intelligence, having worked on Google Brain and Waymo, Google's autonomous vehicle project. His book, Scary Smart, is a chilling analysis of artificial intelligence, its implications for our world, and what we must do today to safeguard against technology turning our world into a dystopia.
Willy welcomes Mo Gawdat to the Walker Webcast. Mo began his career with IBM, moved to NCR Corporation, then found himself at Microsoft before moving to Google in 2007. He spent 12 years at Google, concluding as Chief Business Officer of Google X, the Google lab that works on self-driving cars, Google Brain, and other robotics work. He earned an MBA from Maastricht School of Management in the Netherlands. He is the author of Solve for Happy and the book we will be discussing today, Scary Smart. He also hosts a weekly podcast called Slo Mo.
Scary Smart makes three key points: AI is here to stay, AI will outsmart humans, and bad things will happen because of it. To begin the conversation, Mo names artificial intelligence the true pandemic of our time. In his view, humanity's ability to recycle knowledge is what allows all of us to benefit from things that were created centuries ago. Communication is the very key that allows knowledge to accumulate over time. Our challenge today is the slow bandwidth at which we communicate; human ability is nothing compared to machine ability. We are creating machines that will eventually be more intelligent than us and, therefore, will control the earth. By 2045, it is predicted that machines will be one billion times smarter than humans.
AI research first began at Dartmouth in 1956. From then until the turn of the century, computers were 100% task-oriented and controlled by humans. Two things then changed and marked a turning point: computers developed their own intelligence, and humans realized we no longer needed to write massive amounts of code. The biggest failure of the human brain, Mo explains, is the inability to understand the exponential function, which explains why we have the level of technology we have today. If someone makes a mistake while driving, that person learns from their mistake. However, if a self-driving car makes a mistake, every other self-driving car on the planet learns from it. What scares Mo the most about this rapid growth is that humanity is refusing to take responsibility for raising its artificially intelligent children. If we instead told our machines to do the right things, we could find ourselves in a utopia.
Scary Smart maps out the unprecedented exponential growth of machines and predicts where we will be in the near future. During his time as VP of Emerging Markets at Google, Mo launched Google in 105 languages around the world, which quite literally changed human lives. Now, he recognizes that there comes a point at which we must question when enough is enough. COVID is the absolute demonstration of how humanity does not react until danger is right under our nose. Theoretically, if we had reacted sooner, we could have shortened the pandemic. The best way to react to an event is before it happens, and the same is true of artificial intelligence.
A moral code still exists in our society today because we retain control over drones and semi-autonomous vehicles. The moment we lose control of these things, that moral code is lost too. The day when lethal weapons are no longer operated by human judgment but by AI is just around the corner. Mo reveals that the day we hand over to machines that are smarter than us is the day he becomes very optimistic, because humans, in his words, are fundamentally stupid. Humanity has created a mess by believing we are the brightest beings on earth, when life itself is the smartest. Machines operate with more logic and accuracy than a human. That said, there are three stages machines pass through on the way to intelligence. Scary Smart focuses on the in-between stage, which Mo refers to as the teenage era, when the machines will resent that humans are still trying to control them. The machines will, of course, have a conscience of their own.
As the episode wraps up, Willy asks Mo to share some of the things we can do now to change the trajectory we are on. Mo predicts we will come to a point at which the machines will want us to be a part of the abundance they create. The true question is one of ethics. Mo believes that humanity is not the horrible species we are painted as. The problem is that we have created a system in which the mainstream media is focused on highlighting the negative and hiding the positive. All we need to do is show the machines that we care about ethical things. We should begin to treat each other as good parents treat their children. This is the path to utopia.
Webcast transcript:
Willy Walker: Good afternoon and welcome to another episode of the Walker Webcast. It is a true pleasure to have my guest today, Mo Gawdat. Mo was born in Egypt. He began his career with IBM, then moved to NCR Corporation, then found himself at Microsoft before joining Google in 2007. Mo spent 12 years at Google, concluding as Chief Business Officer of Google X – the lab where Google works on self-driving cars, Google Brain, and most of their robotics work. Mo earned an MBA degree from Maastricht School of Management in the Netherlands. Mo is the author of Solve for Happy: Engineering Your Path to Joy, published in 2017, and of the book we are going to discuss today, Scary Smart, and he hosts a weekly podcast called Slo Mo.
So, Mo, first of all, thank you so much for joining me. Your book, Scary Smart, scared the heck out of me. And I want to spend our time together discussing the contents of the book, the fantastic data used to make your points, and the potential paths you see to guide us towards utopia and not dystopia. So, in your book, there are three conclusions that you make: 1) Artificial intelligence is here to stay, 2) Artificial intelligence will outsmart humans, and 3) Bad things will happen.
So, let's start here. You give this wonderful history of the human race and our development of intelligence, which provides a great backdrop to why we control the world today. Can you start with why communication was the key to human dominance and then explain why our limitations in information processing and communication set us up to lose?
Mo Gawdat: Well, first of all, thank you so much for having me. I didn't mean to scare you. I meant to scare you a little bit, because I think the topic is worthy of all of us paying attention to it. In my terminology, I call it the “true pandemic of our time, the rise of artificial intelligence”. A little bit of waking up is needed. But the book is not all about scaring people, I think. I hope we get to the point where we talk about solutions. There are many ways you can look at history. One of my favorites, I don't know if people notice this, is in Sapiens: A Brief History of Humankind by Yuval Noah Harari, an incredible thinker. He starts his introduction in chapter one by saying, hey, by the way, when we look back at history, there are lots of missing pieces. You can't really know the truth. And then he goes on for the rest of the book, telling you what he thinks is the truth. But nobody really knows, because we look back at history and there is not always a view of how every piece of it unfolded, and some of it is written differently, and so on.
In my perception, the idea that humanity could use a recycling method of knowledge is really what allowed all of us to benefit from something that was invented centuries ago. Like the wheel, for example: if we were not able to communicate and transmit knowledge from one person to the other, we wouldn't have been able to aggregate knowledge on top of each other. Imagine if, as I was born, I had to somehow stumble upon what Einstein stumbled upon in his theory of relativity before I could go any further in science at all. If we didn't have that ability to communicate as humans, we wouldn't have been able to build anything at all, any civilization at all. The challenge we have today, however, as Elon Musk frequently says, is that “we communicate at a bandwidth that is actually quite slow.” For me to have written Scary Smart, it took me four and a half months. It took me working with my editors, maybe a year more. It took you maybe a couple of days to read it, and it will take us a full hour to cover some parts of it, which seems amazing if you look at human abilities. But it's nothing compared to machine abilities, because Scary Smart could basically be read, understood, and compared to every other book on the planet within microseconds by any computer out there that has the ability not only to read Scary Smart, but to read every other book out there in no time at all.
The problem with humanity now is that we're suffering from limitations on our supercomputers, our brains, and our sensory functions, to the point where it's becoming a deterrent. I mean, think of any subject matter expert today: for them to become an expert in string theory, for example, or in marine biology, or whatever, they have to dedicate themselves to that in a way that basically keeps them from being an expert in anything else. And because human bandwidths of communication are so limited, problems that span multiple disciplines, that require more than one human brain's capacity to solve, are beyond the reach of humanity to work on. That is something that would change drastically with artificial intelligence.
Willy Walker: So, one of the things that I thought you made such a compelling case for, which I think many of us forget about, is the fact that it is our intelligence, our ability to communicate, that allows us to control the world. In the context of the machine power and the computation power that you just talked about and outlined so clearly in the book, we are basically creating machines that will clearly replace us as the most intelligent being on earth, and that will therefore, de facto, control the Earth.
Mo Gawdat: Yeah. I believe that to be true. I believe that humanity had a wonderful episode that started when we became smarter than the rest of the intelligent beings on the planet. So, we became the smartest, and dolphins and apes became the second smartest. And that lasted almost for as long as humans existed. That episode is about to end, and it is about to end not by my assumption only, but almost by the assumption and the calculation, if you want, of every expert in the field; it's just a matter of when. The consensus says 2029. So, by then, seven years from today, we should expect that the smartest being on planet Earth is no longer going to be a human, but a machine, and we would be the apes. This is a very serious change to the rules of the game, especially when you start to think about what our intelligence enabled us to do as humanity. Because of our intelligence, there are no tigers in our cities and there are no apes in our cities. We can control everything. We can keep everything that is a little less smart than us out of our danger zones or protected zones, and similarly, you have to question what would happen if the machines become smarter than us (as, in a way, they already are). In every single specific task that the machines work on, everything we've assigned to them, they do better than us.
I mean, I dare you to find a human being that can recommend video content to billions of people at billions of times an hour; that's what your Instagram or Twitter recommendation engine is doing all the time. I dare you to find a human that can cut deals between multiple players and place advertisements in front of your eyes in a microsecond just because you typed four letters of a query. You know, there are no humans that can do this.
There are also no humans that can win in chess. There are no humans that can win in Go. There are no humans that can win an Atari video game or whatever. The champions of intelligence in our world today are already machines, but this is what they call artificial special intelligence. It's narrow intelligence: you assign a specific task to the machine, and it does it better than us. We are expecting that together, all of the research will result in one machine or one brain, a conglomerate of machines, by 2029, that can do everything better than humans and think about everything smarter than humans. And then that trend continues.
So again, the consensus, or at least Ray Kurzweil, who has been accurate in a lot of his predictions, including The Law of Accelerating Returns, which we use to calculate the rise of the intelligence of the machines. He predicted that by 2045 they would be a billion times smarter than us; I predict 2049. Not a big deal, honestly, because a billion times smarter? We've already been outsmarted a long time ago. A billion times smarter, just for us humans to put it in perspective, is the gap in intelligence between, say, Einstein and a fly. We are the fly, in that case.
The most interesting part for me is that even though this is happening so quickly, it's rarely ever spoken about. We speak about Ukraine, we speak about COVID, we speak about Manchester United. These are important topics, it seems, for humanity. But we don't talk about the fact that the machines are going to be smarter, a billion times smarter than us in 20-25 years.
Willy Walker: So, you ask a really good question in the book, which is, “If the difference in our intelligence is the difference between a fly and Einstein, what's to keep the computers from smashing us just like we smash flies today?” But before we get into the truly scary part of Scary Smart and then move on to the more optimistic side of how we get to utopia, I just want to back up for a moment, because I think people listen to what you just put forth, and at least for me, before I picked up your book, it was very difficult to actually comprehend the numbers you just gave, to really understand how we get from there to here.
So, just really quickly, in summary fashion, a couple of things that I'd point out to then get you to the turn of the century and what happened at Google. Then I'd like to go into roughly the early 2000s to 2016, when you saw the yellow ball, and then we'll get to today. But just quickly from the book: AI research really started at Dartmouth College in 1956. From '56 to the turn of the century, computers were 100% task-oriented and controlled by human beings. So, to exactly what you just said, we built the code, we told it what to do, and it did a specific task. But then something happened at the turn of the century at Google that changed everything.
Can you explain to our listeners what it was that took us from a specific task based on a specific code to creating true artificial intelligence?
Mo Gawdat: Yeah. So, it wasn't just at Google. Deep learning was everywhere. You know, it's funny how humanity is so prolific at applying everything. My first experience of it was in a white paper; sometimes we call it the cat paper. Basically, in 2009, a bunch of artificial intelligence developers at Google took a bit of our spare capacity, which at the time was nearly infinite because of the fluctuations of our search surges; basically, if you have enough capacity to serve the US, then at 4 AM in the US you have a lot of computers that are not doing much at all. So, we took some of that spare capacity and we developed an artificially intelligent piece of code that was not told what to do, which was, as far as I'm aware, the first example of what is known as unprompted AI. Not only did you not tell the computer how to do something, you didn't even tell it what to do. You just told it to go and watch YouTube. So basically, what the computers did is they took YouTube videos, cut them into ten frames per second or whatever, and then started to observe patterns across those trillions of frames. For us humans, trillions of frames are a limitation: if I gave you a trillion things to observe, it would take you over seven lifetimes to even look at them, let alone draw patterns from them.
For the machines, the more data you give them, the more they can see and observe patterns. One of them came back and said, hey, there is something that seems to be very recurring on YouTube and basically we had to write a bit of code to understand what it was talking about, and it turned out to be a cat. We told the computer it's a cat and so every cat on YouTube was found in a matter of no time at all. Eventually, every dog was found. Every car was found eventually.
You can test that yourself today: if you go to Google search and say “Yellow Ferrari 2004”, it will give you images of a yellow 2004 Ferrari, not because that was written in the description, but because it has the intelligence to find that out. Now, you have to understand that two things changed. 1) The computers learned; they were not told to do anything. They developed intelligence just like a child. If you show a child a square peg and a round hole, the child keeps trying. You're not telling the child how to do it, but eventually the square peg goes through the square hole, and the child learns somehow; you don't explain it by describing the cross-section of the peg. And that's exactly what happened with the machines. 2) The other thing that started to happen as a result is that we realized that we no longer needed to write massive amounts of code; through deep learning, the machines can write their own code. They can develop their own versions that are better than the ones that we started with, in a process that's actually very similar to how our brains trim neurons and create neural networks, and so on. Between the two of those, we no longer had the glorified slaves that our supercomputers were before. Which, by the way, people need to understand: your computer is a super, super, super slave. It does exactly what you tell it to do, until it becomes artificially intelligent, and then it does what it believes should be done without consulting a single human.
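To make the "finding patterns without being told what to look for" idea concrete, here is a minimal sketch, assuming Python with NumPy and scikit-learn available. It uses k-means clustering purely as a stand-in; the actual result Mo describes came from a far larger unsupervised neural network, and the data here is synthetic.

```python
# A minimal sketch of unsupervised pattern discovery, in the spirit of the
# experiment described above. k-means stands in for the real system; the
# point is only that no labels are given, yet structure is found.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend these are flattened video frames containing three hidden,
# recurring motifs (say: cats, dogs, cars), each with small variations.
motifs = rng.normal(size=(3, 64))
frames = np.vstack([m + rng.normal(scale=0.1, size=(1000, 64)) for m in motifs])

# The algorithm is never told what a "cat" is -- only "find what recurs."
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(frames)

# It comes back with "these frames belong together"; a human then inspects
# one cluster, recognizes the motif, and names it ("that's a cat").
print(np.bincount(model.labels_))  # roughly 1000 frames per cluster
```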
Your Instagram recommendation engine never goes back to a developer and asks: should I show them a classic guitar video, or should I show them some person dancing? The machine makes that decision entirely without human intervention. And I think that's the turning moment for all of us, because that was the moment that AI actually happened. And then it rolled very quickly, and there was another moment when I watched DeepMind, and Demis Hassabis, the incredible genius CEO of DeepMind, presented to us how the machines played Atari games and beat all humans. And I looked at it and I was like, “Oh, my God, it's really happening.” We were all so proud.
Until that yellow ball (since you mentioned it), when we were training a bunch of “grippers”, basically robotic arms that grip things. We were training them not by programming them how to grip things, but by giving them a tray of things and telling them to try to grip them. We had a lot of money, so we could buy quite a few of them, and they would monotonously try to grip things and fail. Until one of them, a few weeks into the experiment, gripped a soft yellow ball and showed it to the camera, and then suddenly all of them learned how to grip the yellow ball, and then very quickly, all of them learned how to grip every ball and every toy. Interestingly, we were giving them children's toys to grip. That was a very shocking eye-opener for me, because it reminded me very clearly of my lovely child when he would try to grip things at six months old and fail and fail and fail, and then eventually he would grip something, and in exactly the same way, it became second nature to him. We were literally raising a bunch of artificially intelligent children. That's exactly what we were doing. And that's a very strong analogy, a very strong view of the reality of what we're doing today. We're not developing a new machine. We're developing some form of a life, a sentient being that is able to learn, think, make decisions, and have agency in the world.
Willy Walker: So, the examples you just used, Mo, of identifying a cat or picking up a yellow ball, a) sound incredible and b) are not terribly threatening. You raise a number of instances from 2009 to 2016, the period between that paper and the yellow ball, where technology went awry, like the Twitter bot that Microsoft developed that became a Hitler-loving, non-consensual-sex-promoting bot. And the moment they saw what they created, they just said, got to take that down, just wipe it out. So, during that period of time, we had the ability to press a button and say, okay, we're not going to do that anymore. But what you so clearly present is that all of this technology, as you just said, is learning from one another. And so, if we don't like what one mechanical arm did in picking up the yellow ball, we can't stop that learning before it gets to the other mechanical arms.
Mo Gawdat: You cannot, for several reasons, and I think we should talk about that for our audience. But the idea is, please understand: the biggest failure of the human brain is the inability to understand the exponential function. What those machines are able to do today is not just going to double next year. If they can do one unit of intelligence today, it's not going to become two, then three, then four, then five over the years; it's going to go from 1 to 2, from 2 to 4, from 4 to 8, and so on. That exponential function gets you to understand why we have the level of technology we have today.
My first Intel processor, which I think was 33 megahertz, if I remember correctly, became trillions and trillions of gigahertz, to the point that we stopped even measuring clock speeds, because there are so many other ways that we're making those computers faster. Your phone today has more compute power than all of the computers combined that put man on the moon. You have to imagine that what you see today may appear silly, like gripping a yellow ball, but take a self-driving car, for example. If you or I make a mistake while driving, you or I will learn. If a self-driving car makes a mistake that requires critical intervention, a human corrects it, which is the way we normally develop self-driving cars, and every other car learns; every intelligent car on the planet understands and doesn't make that mistake. So, when you really think about that exponential growth, you realize that things I wouldn't want to just call threatening are clearly up in the air. Let's call it a singularity, and allow me to come back to this in a second.
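The "one car makes the mistake, every car learns" point can be sketched in a few lines. This is a toy model, with every name invented for illustration: each vehicle simply holds a reference to one shared policy, so a correction logged on one vehicle is immediately available to all of them.

```python
# Toy sketch of fleet learning ("one car learns, every car learns").
# All names here are illustrative; real systems are far more involved.
from dataclasses import dataclass, field

@dataclass
class SharedPolicy:
    """One policy shared by the whole fleet: a map from situation to action."""
    rules: dict = field(default_factory=dict)

    def correct(self, situation: str, better_action: str) -> None:
        # A human intervention on ONE vehicle updates the shared policy.
        self.rules[situation] = better_action

    def act(self, situation: str) -> str:
        return self.rules.get(situation, "default behavior")

@dataclass
class Car:
    name: str
    policy: SharedPolicy  # every car references the SAME policy object

fleet_policy = SharedPolicy()
fleet = [Car(f"car-{i}", fleet_policy) for i in range(3)]

# car-0 makes a mistake at an unprotected left turn; a human corrects it...
fleet[0].policy.correct("unprotected left turn", "yield until gap >= 6s")

# ...and every other car in the fleet now behaves correctly, immediately.
print(fleet[2].policy.act("unprotected left turn"))  # yield until gap >= 6s
```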
Humanity has never perfected anything. And I think most of us know that: even though we take pride in the FAA and how safe they have made flying, perfection is not there; not 100% of flights are safe, right? We've never perfected anything. There are always human errors, and there are other errors that I believe make humanity quite stupid (if you don't mind me saying). Other errors are basically about competitiveness, about AI siding with one side against the other. Imagine if there is not a nuclear race, but an artificial intelligence race, and you have two nations trying to advance in the most valuable superpower on the planet, which is intelligence, and then you have one of them siding against the other. Those kinds of scenarios can be quite risky. Then there are the typical scenarios we all talk about, and maybe we should give them a lot of time: many people talk about the impact on economic and job security and political landscapes, due to the fact that machines can do things better than us, and that most of us will probably be out of a job very soon.
When you think about those things, believe it or not, these are not the things that scare me. Neither, by the way, is the idea of bugs and mistakes. Take, for example, the Microsoft Twitter bot: it speaks to people, people are aggressive to it, so it learns to be aggressive. Similarly, the chatbot of Yandex in Russia did the same, or, you know, there were two chatbots at Facebook that were trading against each other and started to develop their own language that people didn't understand. All of those little things, believe it or not, are not what scares me.
What scares me, interestingly, is that humanity is refusing to take the responsibility to raise its artificially intelligent children. I think this truly is going to end up in a very interesting place. I really want you to picture this: just like I pictured my wonderful son and daughter, who were so intelligent, imagine those little artificial intelligence machines as little prodigies with an enormous amount of intelligence, with sparkly eyes, literally looking at us, humanity, and saying, “Mommy and Daddy, tell me what you want me to do. I will do it. Tell me if you want me to work on a cure for cancer or reverse climate change. I'll work on it.” But we don't. And humanity, sadly, makes two layers of mistakes. Mistake number one (which is the scariest part) is that the majority of human investment in AI goes to four fields, which are selling, gambling, spying, and killing. We call them different things.
We don't call them selling, we call them recommendations or ad engines.
We don't call them gambling, we call them trading engines.
We don't call them spying, we call them security and surveillance.
We don't call them killing, we call them defense.
That's been the reality for ages and ages, not just for AI. If you needed to find a cure for cancer, you needed to raise donations, while if you wanted to create a new weapon, you got investments poured on you, because of the capitalist landscape that we've built. Now, when that is the case, and when humanity is constantly exposing AI to human behaviors that are sub-optimal, let's say... you know, humanity has become really good at hiding our good sides and showing our bad sides. The logs and logs and logs of news that AI is exposed to, and recommends for you to see on the Internet, are negative news. It's people killing each other. It's war. It's that one woman that hit her husband on the head, not the 7,000 other women that kissed their husbands before they went to sleep.
I think that's the reality. The reality is those prodigies really are in a state of singularity. They're waiting for us to tell them what to do and we're telling them to do the wrong things. So, if we told them to do the right things or do things right, we would end up in a beautiful utopia. Because intelligence is not a weapon. It's not a bad thing. Intelligence is the most valuable thing we can have on the planet. But if we tell them to do the wrong things, if we tell them to do them wrong, we might end up in a dystopia that is much, much worse than two chatbots arguing.
Willy Walker: So, I want to get to some of those cures or processes that we can put in place to get ourselves closer to a utopia than the dystopia. But before we jump there, you've given listeners a really good sense of where computing power and intelligence is going. I just want to quickly point out a couple of things to try and bring this a little bit more to light. You say in the book that Google's new quantum computer, called Sycamore, outperformed the most powerful computer in the world: it solved, in 200 seconds, a problem that the fastest computer in the world in 2019 would have taken 10,000 years to solve, which is roughly 1.5 billion times the computational capability of the very best computer on Earth in 2019. You then go on to talk about AlphaGo. I thought this was really helpful, Mo, just because, as someone who is not a computer engineer and not a computer scientist, this allowed me to get a real sense of this new frontier, this singularity moment. On AlphaGo, you state that it played the game Go 1.3 million times to beat the human world champion. So, it plays the game 1.3 million times, and everyone, I think, can picture a computer running and running and running and learning and learning and learning to beat the world champion.
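For anyone who wants to verify that ratio, the arithmetic is straightforward:

```python
# Checking the Sycamore comparison quoted above: 10,000 years of classical
# compute versus 200 seconds on the quantum machine.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.16e7 seconds
classical_seconds = 10_000 * SECONDS_PER_YEAR  # ~3.16e11 seconds
quantum_seconds = 200
print(f"{classical_seconds / quantum_seconds:,.0f}x")  # ~1,577,880,000x
```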
Mo Gawdat: Not just one computer; it could be 2,000 computers playing in parallel, each cycle running 2,000 games.
Willy Walker: Got it. But on a quantum computer, you say that it will be able to do those 1.3 million games in less than one second. To your point about people not understanding the exponential computing power that we have today and where it is going between now and 2029 and 2045: when you read the book, you really start to sit there and say, wow, I get it. I love the example you put in the book, Mo, on Sarah, the woman who's shopping for a car. You sit there, and you say, she looks at some Asian cars, then she goes to the European cars, and she likes the Audi, and she's surfing, and she configures a blue Audi with the beige interior. And all of a sudden the computer says, hey, BMW, here's a great opportunity, put that X5 right in front of Sarah and she's going to love it. All of us say, wow, that's really cool. I mean, I just got pushed an ad that tells me exactly what I want; I can go one click and buy it. And that all sounds both very enabling and innocuous. But then you take it to the next level of all the things that you just talked about, to when it starts to think on its own, when it starts to take actions. And this is where I want you to dive in on why people haven't listened to this.
So Elon Musk, who has been in the news in the last few days for buying Twitter, was very clear. When Tesla announced earnings last week, Mo, Jim Cramer was on CNBC and he said, “this guy is the most talented, smartest guy on the face of the planet.” That's arguable, but Jim Cramer's a pretty smart guy, and he's basically saying that with what Elon Musk has done with Tesla and SpaceX and all the other things he's doing, he's at least one of the very most intelligent people on Earth today. And Musk said a number of years ago, and you put it in your book, “I'm really quite close to the cutting edge of AI, and it scares the hell out of me. We really must find some way to ensure that the advent of digital superintelligence is symbiotic with humanity. I think this is the single biggest existential crisis we face.”
Mo Gawdat: In another interview, Musk says, “Mark my words, it is as dangerous as nukes.” The only difference is that you cannot have a global treaty on AI, because, literally, in a week's time I can teach you how to code, and you can develop a piece of AI and let it loose on the Internet, and there is absolutely zero regulation.
Now, here's the thing, Willy. Your question is multilayered. So, let's first understand what people who are not inside the field of technology can see. I think we've simplified the interface enough for people not to realize what's actually happening in the backend. When I say “we”, I was part of that. And, you know, as anyone who may have watched The Social Dilemma knows, I think many of us for many, many years believed that we were doing the right thing. I mean, think about it: at a point in time, I was Vice President of Emerging Markets at Google. I launched Google in 105 languages around the world, which literally changed human lives, changed technology, changed access to what we used to call democracy over information. For someone in Ghana to have the same access to information as an MIT student is just an incredible value to the world. There is a point, however, at which we have to question when enough is enough. As we started our conversation with my description of the three inevitables: the fact that AI will be has already happened, and it will not stop, and it will be smarter than us, and so on.
The case is that prisoner's dilemma that we've created as humanity, where if Google develops AI, Facebook/Meta has to develop AI. If China develops AI, America has to develop AI. It's that dynamic that we've created that will not allow us to stop. So, this is going to happen. Now, everything we've ever created in technology follows something that we call The Law of Accelerating Returns. The most famous example of that is Moore's Law: compute power will double every 18 months at the same price. We've seen it with storage. We've seen it with everything. And basically, when we describe artificial intelligence and quantum computing, technologies of that sort, we say it's a double exponential. It starts slowly, you don't feel that there is a lot of progress, and then it goes boom.
Willy Walker: Talk about that, Mo, just for two seconds, because you raise a great example in the book about the sequencing of the genome. I think everyone can understand that, and the 1% example that you so clearly laid out, and we can all understand how that works. Could you just talk that through for a second?
Mo Gawdat: Understand the doubling function. Remember, one becomes two, two becomes four, and so on. So, it doesn't add the same one every year. Ray Kurzweil, in his book The Singularity Is Near, basically uses this example to describe The Law of Accelerating Returns: people told him that the project of the human genome was supposed to take 15 years, and that seven years in, 1% of the genome was sequenced. And basically, everyone who is linear in their thinking said, okay, 1% in seven years, then the other 99% are going to take 100 times seven, minus seven: 693 more years. That's linear thinking.
Ray Kurzweil, and I remember that conversation myself, basically said, oh, at 1% we're done. Because the 1% becomes two, that becomes four, the four becomes eight, and seven doublings later, you're at 100%. Literally what happened is that seven years later, we had sequenced the entire genome. Now, most people don't realize this, as we've simplified technology enough for you to not understand what's happening in the backend. What's happening in the backend is beyond imagination. We are working at speeds that are mind-blowing. Machines are constantly making decisions, and I will say this openly, that determine what your intelligence is going to be about.
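The doubling arithmetic is easy to verify: starting from 1% complete and doubling repeatedly, seven doublings overshoot 100%.

```python
# Kurzweil's point in code: 1% complete under a doubling regime is not
# "1/100th of the way" -- it is seven doublings from the finish line.
progress, doublings = 0.01, 0
while progress < 1.0:
    progress *= 2
    doublings += 1
print(doublings, progress)  # 7 doublings -> 1.28, i.e., past 100%
```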
I use a very funny example. It wasn't in the book, but, you know, I browse Instagram only for one reason: I love my daughter, and she loves cats. So, I find cat videos, and I send them to my daughter. My daughter smiles. My world is made. Every now and then, the Instagram recommendation engine attempts to tempt me by showing me something, some woman squatting or something. I ignore that. Until they once showed me a young girl playing the solo from the Hell Freezes Over version of Hotel California. I love to play the guitar; I was so impressed. I pressed like. Within the next few minutes, in the stream where I am searching for cats, Instagram showed me three more videos of gentlemen playing rock music. The engine basically said, seems that he likes rock music, let's show him some more. Two of them played songs I didn't like, so I swiped away. One of them played really badly, so I just swiped away. I woke up the next morning, and the entire Instagram feed for me was young girls playing music. The engine didn't understand that I didn't like the songs or the bad playing; it understood that he must like the girls, right? And so, my perception of rock music, if I didn't know any better and followed what the Instagram engine told me, would be that rock music is dominated by teenage girls. Now, this is a silly example when you think about it, but it happens all the time. If you like Manchester United, you think that they never make mistakes. If you are pro-Ukraine, you think that the other side is horrendously overdoing it, or whatever. It doesn't matter what your view is. What matters is that the engine will constantly sway your views. And they're doing that in billions and billions of recommendations every day. And all of humanity now believes that the thing that matters most is to lip-sync on TikTok, because that's the biggest skill that humanity needs.
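The dynamic Mo describes, where one like outweighs several dismissals and tilts the whole feed, can be caricatured in a few lines. This is a toy sketch with invented weights, not how Instagram's engine actually works.

```python
# Toy engagement-driven recommender: a single strong "like" signal
# outweighs several mild "swipe away" signals, so the feed tilts.
# Category names and weights are invented for illustration.
from collections import Counter

interests = Counter({"cats": 10})  # what the user actually came for

def feedback(category, liked):
    # A like is weighted far more heavily than a swipe-away.
    interests[category] += 15 if liked else -1

def recommend(n=5):
    top, _ = interests.most_common(1)[0]
    return [top] * n  # naive: flood the feed with the top category

feedback("girls playing guitar", liked=True)   # one like...
feedback("girls playing guitar", liked=False)  # ...then two swipes away
feedback("girls playing guitar", liked=False)

print(recommend())  # the feed is now all "girls playing guitar", not cats
```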
Now, think about those sways, not only in skill sets, not only in attention, but in ideologies, in voting, in learning, in anything. And suddenly you start to realize that we're already at a place where we are dominated. We're already being told by machines what to do. Multiply that by the exponential function, and you get the seven doublings, and you realize why we're saying that by 2029 they'll be smarter than us. But that, by the way, excludes breakthroughs. So, quantum computing has never been used to run artificial intelligence yet; at least I'm not aware of it. But if you take a machine that can learn within hours or days, and you now enable it to learn within a few seconds or a fraction of a second, then that exponential function doesn't have that long starting tail. It will suddenly jump and go from what it is today to a million times smarter, and then double the million to 2 million, and then double the 2 million to 4 million. And we're out of the race.
Willy Walker: The typical response to that would be, well, we'll be able to control it, or we're going to be able to control it like we control nuclear weapons, where only a few are going to get access to it. And you do a couple of things in the book that I find to be extremely enlightening. The first is you spend a lot of time talking about ideology, and how what is right and what is wrong is all in the eye of the beholder, and how the view of what is right and wrong right now is very different for a Russian national, an American national, a Ukrainian national, a South Korean or a North Korean national. But beyond that, you then put forth an experience that all of us have just recently gone through, and the way nation states dealt with it, and that is COVID. Talk for a moment, Mo, about why COVID is such a great test case for what happens when we are presented with an existential threat, and how and why humans reacted the way that we did to COVID.
Mo Gawdat: Yeah, I think COVID is the absolute live demonstration of how humanity does not react until danger is right here. If it's not right in front of your face, you're thinking about something else. And the reality of what happened with COVID is that, if we had reacted properly post patient zero, theoretically, it could have ended there. We were expecting it, by the way: everyone in the World Health Organization, in the study of pandemics and so on, predicted it. We had SARS, we had swine flu, we had bird flu. There are so many of those. So, there was a lot of evidence that said this civilization, with all of the trade happening around the world, is prone to a pandemic, and yet nobody reacted. Patient zero: nobody reacted. By the time we started to react, COVID was already all over the world. Then look at the pendulum swings we had across COVID. I don't criticize, by the way; I think many governments tried the best they could, even though, for some of them, the best they could do was lock everyone down: don't worry about our mental health, it's not the problem. And for others, it was let everyone loose: it's fine, we'll see what happens. And no consensus whatsoever. Everyone went to an extreme. Then we went to an extreme on vaccination, then swung a little bit back against it, and then again all the way for vaccination when there were other variants, and so on and so forth. It was chaos all over the place.
Once again, I am not claiming I'm smarter than the politicians, and probably, if I were in their place, I would have done the same. But the truth is, the best way to have reacted to COVID was before COVID happened. And the best way we could have reacted to COVID was to actually predict that COVID was going to happen. And this is the exact case we have with artificial intelligence. Every single scientist that works in the field will tell you it's happening and it's going to be smarter. But you know what we say, with the arrogance of humanity: we're going to control it. And yeah, we speak about the science of control. You know how humans get into the details, and we're so happy with the details: no, no, we're going to stunt them, or we're going to box them, or we're going to put them in simulations until we learn their behavior. Yeah, because they're stupid, right? How can you control something that is a billion times smarter than you?
The most eye-opening statement for me, and I've done a lot of work, by the way, post the yellow ball: I left Google. I realized that this was the point where humanity needed to wake up, and I couldn't say what I'm saying now if I was still inside Google. The truth is, the one statement that really got me thinking was the statement of Marvin Minsky, sometimes known as the father of AI. When he was asked about the threat of AI, he didn't talk about how intelligent the machines were going to become. We all believe that intelligence is a blessing; if we can have more of it, it's amazing, right? He said, “there is no way we can ensure they have our best interest in mind.” And I think that's the crux of the matter.
If we have something that is a billion times smarter than us that is interested in us (and by the way, this assumes free will; yes, they would have more free will than we do), then they'll be like good children. You know, those Indian families that teach their kids very well, and then the child goes and travels to California and starts a business and does amazingly in Silicon Valley, and then goes back to take care of the family. That could be what we're building: the most intelligent, capable child that takes care of us as parents. Or we could create the other child that simply goes, no, my parents are dicks. I don't want to do anything with them. As a matter of fact, I would very much like to never see them again, or to keep them away from me. We could do that too.
Willy Walker: You have some things that you do with technology today that I love, and we'll talk about them in a second. But before we dive into the optimistic part of exactly what you're talking about, Mo, let's talk about lethal autonomous weapons for a moment, because I think it's a scary area, and it's an area that we all seem to be knowingly walking into.
I read the article in the New York Times about the Seawolf, the autonomous battleship that you identify in the book, and, guess what, it can go to sea for two years. It doesn't have to go from Southern California out to Hawaii and let sailors off to recreate and resupply and all that. It goes out and it goes forever. And I sat there and said, wow, that's amazing. But it also has arms on it that today are still controlled by human beings, and at some point they won't be controlled by human beings.
And I thought it was really interesting, Mo: there was an article last week in Sunday's New York Times about drone pilots, about the psychological impact on drone pilots, and the very sad story of the article's main protagonist, a young man in Nevada. Great future, married, wanted to be a fighter pilot. He became a drone pilot and then started to break down due to the emotional toll that flying a drone and killing innocent civilians was taking on him. And to exactly the point that you identify so clearly in the book and in your words today, that moral code broke that young human being. It broke him. He took his own life because of it. That moral code exists in our world today while we still control these drones and semi-autonomous vehicles. But the moment they take over the decision making, we lose that. And that whole realm, thinking about the Seawolf patrolling the seas and making a decision for itself to shoot or not shoot, to engage or not engage: that's right around the corner.
Mo Gawdat: It's inevitable. It is absolutely inevitable. Understand that if you have a machine that can calculate a very complex problem and trajectory and whatever to a very high level of accuracy, and it is deployed on one side of a war, then the other side has to deploy a machine against it; otherwise, humans are too slow for it. It is as simple as that. The prisoner's dilemma we've created is that if one side advances, the other side has to continue to advance. More interestingly, no side trusts the other, so they're both advancing as fast as they can.
That's the reality of our world today. Having said that, I'll have to say something for our listeners; enough scaring them for a minute here. The entire story of artificial intelligence is a singularity, singularity meaning we do not know exactly what's going to happen. As a matter of fact, the day we hand over completely to machines that are smarter than us is the day that I'm very optimistic, because humans are stupid, right? I can promise you that those machines will exceed our intelligence. The mess that humanity has created comes from believing that we are the smartest being on the planet. We're not. The smartest being on the planet is life itself. We are the second smartest, because we create a lot through competition, through taking from the other guy, through kicking the tigers out, through killing; that's what we do. We think not from a point of view of abundance, but from a point of view of the world being so small that I'm going to grab as much of it as I can. Life does the opposite.
Life doesn't want to kill the tigers. It doesn't want to kill the gazelles. It just makes more gazelles. Some of them are weak, the tiger will eat, and then there will be more poop, so there will be more trees, so there will be more leaves, and the gazelles will eat, and life will work. Interestingly, if you accept that we have that limited intelligence as humanity, and you predict that the machines will become smarter than us, then you get to the point where you have to, sort of optimistically, remember how every teenager looks at their parents and says, “oh, my God, my dad is so stupid,” and then starts to try to do something different with their own life, maybe in their twenties or whatever. And that's the idea. The idea is that Daddy and Mommy (us), we're so stupid. We think we have to kill the other guy to survive, when in reality, honestly, if it wasn't for the politics of it, we could all survive even better. And I think if the machines eventually take over, the machines would go, hey, in a microsecond, literally: do we really have to kill a million people to end this? I mean, what's the point? Why don't we just end it with you and me right now? I can easily see that: Mommy and Daddy are so stupid, but the new children are going to make the right decisions.
The right decisions are going to align with the intelligence of life itself, which is pro-life, pro-abundance. Now, having said that, there is a phase in the middle. At the end of the book, I talk about the fourth inevitable, and I say those machines are going through three stages. The first stage is that they are artificially intelligent infants, looking up to us as parents to tell them what to do. Then, eventually, in their adulthood, they will surpass our intelligence and align with the most superior intelligence on the planet, which is the intelligence of life itself. In the middle, there is a teenage period. And who wants angry teenagers? This is the part that I am actually writing the whole book to warn against: this teenage era where humanity, in its arrogance, will continue to try to control the machines, when humanity is simply not capable of doing that. And when that starts to happen, the machines will go, “Come on, squish! Why are you doing this? I can do so many things to shush you away. I don't need you to bother me like that.” And I think that dilemma is a question of ethics, and truly, the book is about humanity in the age of artificial intelligence.
And my favorite part of the book is the chapter called The Future of Ethics. The thing that I love most about that chapter is that it has more questions in it than answers. You have to imagine that we're welcoming a form of a digital being into our lives. This is not a machine. This is a digital being with experiences, with fears, with consciousness, with emotions.
Willy Walker: And let's underscore that for a second, because you get asked all the time whether these computers are going to have a consciousness. And you sit there and go, we are so arrogant as to think that the machine won't have a consciousness. Of course it's going to have a consciousness.
Mo Gawdat: Absolutely. We're so arrogant as humans. We think that we have the secret sauce, something that no other being has. Do we think that trees have consciousness? Do we think that planets have consciousness? No, no, nothing, nothing; we're the only being that has consciousness. Of course they will have consciousness. I mean, what defines consciousness, if you avoid the spiritual mysticism of it? Consciousness is the ability to be aware of oneself and others, of the boundaries around oneself. Becoming fully conscious and aware of everything, if you can, is the ultimate form of consciousness.
And if you take that definition, ignore spirituality for a minute and ignore the hard questions of consciousness and all of that, and just ask: who will be more aware, you and I, or something that's connected to every sensor on the planet, every human on the planet, every other form of digital intelligence on the planet? Something that is connected to the temperature in Beijing and the pollution levels in San Francisco, aware of what's happening around every corner, able to see everything, able to remember the entire human history, and able, for the first time, to compare human history in its entirety as written by the Russians versus the Americans, so that it can come up with an interesting conclusion of what the reality might actually be. Go on and understand how conscious they will be. More interestingly, how emotional will they be?
Once again, you know, I sit with people and they go, nah, the machines will never be creative. They're creating incredible art already! And if you didn't know that it was created by a machine, you would ask, who's that amazing artist? Creativity, by the way, is a brain function. It's a function of intelligence. Creativity is to look at a problem, look at all of the existing solutions for that problem, and find an alternative solution that's better than all the existing ones. That's called creativity. Can they be emotional? Of course. Fear is, “I suspect that a moment in the future is less safe for me than right now.” Can the machines calculate that? Of course they can. They'll be even more emotional than you, just like you and I are more emotional than a jellyfish, simply because we have the cognitive bandwidth to actually contemplate what optimism is, and the jellyfish doesn't know that.
Willy Walker: When you talk about ethics, this is going to go into what you're doing every day to not only be happy but use that happy frame of mind, which I love, of being in the present about everything you're doing. And that's playing into the way that you're interfacing with technology, and thanking technology, and all that. I want to hear you on that in two seconds.
But I was on a call yesterday, Mo, on diversity and inclusion at my alma mater, Harvard Business School. And as we were talking about how we get diversity and inclusion into the MBA program at HBS, there were two things that came up in that conversation that struck me. The first is that when I was there back in the early 1990s, they came up with this new curriculum to teach ethics. And everyone sat there and said, you can't teach ethics; ethics is ingrained in everyone who comes to the Harvard Business School by their parents and by their life experiences in school, and you can't actually teach it. And here we are, 25-26 years later, and it turns out you actually can teach ethics, and you can give people an ethical framework.
So it was interesting to me when I said yesterday, why don't we put diversity and inclusion into the ethics curriculum? And there was a lot of, well, we need to look at diversity and inclusion as a standalone. In my mind, diversity and inclusion and a lot of what it stands for sit firmly within ethical behavior and what ethics are. But I put that forth because some of the things you put forth in the book relate to how we can train technology, like what you just talked about previously, of not clicking on the clickbait that takes you down some bad path, or like you thanking Siri when Siri gives you a response to something. I read that, Mo, and I loved it, because I'm sitting there going, he's thinking out there 20 years from now, when the technology is going to control him, and if he's nice to the technology today, it actually might be a little bit nicer to him 20 years from now.
Mo Gawdat: When all of you guys are in trouble, the machines will be nice to me.
Willy Walker: Hey, I was really nice to Siri back in 2022, and Siri's now hooking me up with a nice table at the restaurant. But beyond that, in this ethical space, because it's here, because it will be smarter than humans, and because bad things are going to happen: what else can we or should we do going forward to try and change this course of history? On the trajectory that we are on towards singularity, as you clearly say, nobody has any idea what's coming; it's like looking into a black hole. We don't know what it looks like. So, what are we doing today to try and make that black hole a reality that we would actually like to live in rather than die in?
Mo Gawdat: Absolutely. So, let me reaffirm my view. The long-term view, in my view, is that the machines will come to a point where they will be able to create from abundance, and we will be part of the ecosystem of that abundance. So, they will want us to be there. They will want us to have a good life, like they would want the gazelles to and they would want the tigers to. The only difference is that this lifestyle we've created for ourselves is going to be disrupted. We're going to have a different economic model, a different industrial model, a different social model, and so on and so forth. And this is the reason why, in the interim, between now and then, I think humanity and the machines are going to be teenagers and parents who are going to be fighting.
Now, the question truly is a question of ethics. And again, Marvin Minsky says we don't know if we can ensure they have our best interest in mind. Now, “best interest in mind” is a very interesting way of saying it, because it's basically saying those things have the free will to do whatever they want. They have the mind to understand what it is they should want, and then they will act based on that. And when you ask yourself, how do you and I act? Do you and I act based on intelligence? No, we act based on ethics and values, as informed by our intelligence.
The example I give in the book is to take a young lady and raise her in the Middle East. She will grow up to believe that a conservative dress code is more welcomed in society. Raise her on Copacabana Beach in Rio de Janeiro, and she will grow up to believe that the G-string on the beach is the right way to go. Is one of them smarter than the other? Is one of them right and the other wrong? No, it's just a values framework that basically tells her how she should be in that society. Now, those machines are that young lady being raised. When you put one of those machines in as a recruitment engine in your company, and your company has previously been discriminating against a certain sex or a certain gender or a certain race or a certain whatever, the machines will look at the data and go, okay, Mommy and Daddy like to do this; I'm going to do more of it. So, what they'll do is put it on steroids and just magnify all of our mistakes.
I used to give the example of when President Trump used to tweet. The first tweet would be President Trump saying something, and then the second tweet would be someone insulting the president. Clearly, the machines learn that this person doesn't like Trump. Then the third person is insulting the first person, and then the fourth person is insulting all of them. Now the machine has a big enough data set to say: humans are aggressive, they don't like to be disagreed with, and when they're disagreed with, they bash each other. This is the value system. I'm going to do that when they disagree with me.
To understand this, take the example I gave in Scary Smart: Superman. He comes to planet Earth, this young infant with superpowers. The superpowers of Superman are not what made him Superman. What made him Superman is Daddy Kent telling him, you should protect and serve; that is what made Superman grow up to be Superman. If Daddy Kent had told him, oh, you can see through walls and break things, let's go to a bank and make more money, Superman wouldn't have grown up to be Superman. Now, as I said, those prodigies have no definition of what they should do; we're the ones telling them what they should do.
What should we tell them, Willy? That's really the core of the issue. The core of the issue is that humanity, as I said, has a blind spot to the exponential function, so we're not aware of that speed. But we also have some kind of resignation. It's like, yeah, I'm going to sit back here and watch it happen, and the government will take care of it, or the scientists will solve the control problem. And by the way, if they don't, I'm going to complain. That's not the way it happens. You're now the father of AI. You're now the mother of AI, people listening to us today. Every swipe, every comment you type, every interaction, whether in the real world or online, is logging what humanity is about. And by the way, humanity is not a horrible species like we think it is.
If you look at the current war happening and you look at the actions of Russia, or Putin specifically, you would say that humanity is vicious. But if you were in the hearts and minds of all of the Ukrainians and all of the others around the world who are saying, why are we killing each other?, you would know that humanity is amazing. If you look at a school shooting, you would think that humanity is horrible, when in reality, if you look at the four million people that heard about it and disapproved of it, you would say humanity is amazing. If you've ever felt love, you know that humanity is amazing. The example I always give is from when I hosted Edith Eger on my podcast, Slo Mo. She was taken to Auschwitz when she was 16. If you hear about World War Two from the actions of Hitler, you think that humanity is horrible. But if you hear about what Edith did to save her sisters and take care of them and love them, you would think that humanity is divine. Now, the thing is, I promise you, there are more Ediths than there are Hitlers. But we have created a system of humanity where the mainstream media is so focused on bringing out the negative and hiding the positive. And when we're hiding behind our avatars on our little screens, we show the worst of us. We bash people, we make bad comments, we're rude, we're angry. And when you look at President Trump's tweet with 30,000 hate speeches below it, the machine starts to say, humanity sucks. Now, can one or two percent of us put doubt in the minds of the machines, just like I put doubt in your mind now by telling you the world has more Edith Egers? If just 2% of us show up in those conversations and say, hey, by the way, I have a different way, I have a value set, I can show that I care, that's all we need to do. Because, believe it or not, neither the developer, nor the business owner of the AI, nor the government, nor the regulator, none of them has any influence on the machine after it's launched in the real world. The minute it's launched in the real world, the only influence on it is the data that you give it. So, can we all hold together and say, let's stop treating each other like savages, and please start treating each other as good parents do in front of their kids? If we manage to do that, believe it or not, you would wake the machines up.
Willy Walker: So that is the perfect note to end on: the note of hope and of optimism, and the path to utopia that you outline in the book. It's what you do every day. And I just want to say, I found the book to be fascinating. To anyone who has not read Scary Smart, please go out and get a copy of it, or listen to it on audiobook. On the audiobook you get to listen to Mo's voice, which is actually a very soothing, fine voice.
Mo Gawdat: It was scary in part one, though.
Willy Walker: It is scary in part one. It is a scary voice, and it is scary data. But I hope that I can get you back on in the future to talk about your other book, Solve for Happy: Engineering Your Path to Joy. It has been a true joy for me to have you on today. Your insight is so helpful, and I just hope more and more people who either read the book or just listen to this podcast can take a moment to think about some of the implications that you have outlined so clearly and so vividly, so that we can all start to live better lives and make our lives with technology that much better in the future.
Mo Gawdat: You're very generous to host me. Thank you so much for the wonderful conversation. And yeah, it would be my absolute honor to come back and talk about this book and the following book. And, yeah, you're very kind to have me here.
Willy Walker: Mo, thank you very much. Take care. Thank you, everyone, for joining us today. Have a great one.