Ezra Klein
Co-founder of Vox.com, Journalist, political analyst
Willy Walker sat down recently with Ezra Klein, co-founder of Vox, to discuss his perspective on AI, zoning and homelessness, and more.
The Walker Webcast recently featured our highly anticipated interview with Ezra Klein from the Sun Valley Writers' Conference. Ezra is a journalist, political analyst, New York Times columnist, award-winning podcast host, and author of the book Why We're Polarized, among many other things. On this episode, Ezra and I sat down to discuss his unique perspective on AI, the tie between zoning and homelessness, polarizing politics, the intersection of government and technology, and so much more.
Will the progress of AI backfire on us?
With the recent developments in artificial intelligence (AI), many believe we are in for a movie-like ending. After all, large language models and other forms of AI are improving at a startling rate. This raises a question: if we value everyone in society so instrumentally, what happens when we create a better instrument than people? And what happens if that instrument decides to instrumentalize us?
Additionally, Ezra believes the innovations being made in the artificial intelligence space are very utilitarian in nature. Many of the AI tools that have been created are just that: tools. People then take a tool, wrap it in a thin facade of human-like behaviors and features, and interact with it. In doing so, we are essentially training both ourselves and the AI to trick people into believing the system has human-like qualities.
The difficulty surrounding driverless cars
In 2015, Elon Musk said that the world would have self-driving cars by 2017. Here we are in 2023, and that prediction has not come true, even though McKinsey estimates that roughly $165 billion has been spent on driverless car technology. While there has been much progress in the space, there still aren't any fully driverless cars today.
This is mainly because, when driving something as dangerous as a car, a driverless system can't be effective just 85 percent of the time; even 99 percent won't cut it. A driverless car needs to be reliable closer to 99.999 percent of the time to be safe to deploy, a bar the largest auto manufacturers have found very difficult to clear.
Addressing California’s homelessness myths
Although some believe that people move to California to become homeless, since it is viewed as a nicer, more hospitable place for those who are less fortunate, Ezra pointed out that this isn't the case. Roughly 90 percent of the homeless population in California had their most recent address in the state, and approximately 75 percent had their most recent address in the county they're currently in. In other words, people on the brink of homelessness are not migrating to California. In reality, California's homelessness problem is more closely tied to the state's housing unaffordability.
Sun Valley Writers' Conference with Ezra Klein
Ezra Klein, opinion columnist at The New York Times, and host of the award-winning podcast, The Ezra Klein Show
John Burnham Schwartz, Literary Director SVWC: Hello, everybody. Hope you had a good lunch. Still cool out; it's very nice. Interviewing Ezra Klein is Willy Walker, who's a good friend. Many of you know him. He's been coming here for years; his parents are friends of everybody. He's the CEO of Walker & Dunlop, the largest public real estate finance company in the country. And more particularly for this enterprise, he has hosted his own webcast, the Walker Webcast, for the past few years, and he has turned into one of the best interviewers I know. He does his homework. He's incredibly intuitive and curious, and it just seemed to us that he and Ezra would be a fantastic fit. Not easy to find someone to interview Ezra, I might add.
About Ezra Klein, not that much needs to be said, I suspect, but I will simply speak personally as a writer, citizen, reader, and parent. He founded Vox, where he was editor in chief and created the Ezra Klein Show, and he is now at The Times as an opinion columnist and host of The Ezra Klein Show; he is also the author of the 2020 book “Why We're Polarized.” His podcast is not only exceptional, but over many, many hours of listening to authors, music producers, scientists, psychologists, historians, everyone you can imagine, I've really come to see it as a great civic enterprise. I think what he's doing is showing the interconnectivity of so many industries and ways of thinking. He's always looking forward and using the past as a way of lighting the path ahead, to help explain it to us better and show us what we don't know.
The great Chilean novelist, José Donoso, once said, “I think a great novelist is not someone who wants to tell you something, but someone who wants to discover something.” And I would say the same about Ezra Klein, and it's why I admire him so much. Here they are.
Ezra Klein: That Ezra Klein guy sounds great. I can't wait to listen to his podcast. (Laughs)
Willy Walker: So, a couple of quick things. First of all, it's a real honor to be here. I was on the stage last week with former Secretary of State Condi Rice at the Walker & Dunlop conference, and I think my parents probably expected that someday, if our company got big enough, we'd have that kind of person on a stage. We sponsored a concert here in Sun Valley last summer, and I think they sort of expected that if my company got big enough, we'd sponsor a concert here at some point. I guarantee you, my parents never thought I would be speaking at the writers' conference.
The other thing is that John asked me whether I wanted to do a Q&A after this discussion with Ezra, and I said I want to use the entire hour, because I have so much to talk to this gentleman about. The other reason I said that is because when you go to a Q&A after someone as great as Ezra, the questions can be great, they can be okay, but everyone kind of claps their hands at the end and filters off. If we get out of him what I expect to get out of him, I expect to hear raucous applause from all of you at the end.
So, Ezra, I put up your book, “Why We're Polarized.” It's the first thing I want to talk about. A couple of years ago, after you published it, you did an interview with Malcolm Gladwell. In the book you talk about identity politics, and Malcolm Gladwell asked you to identify yourself. After you dodged his question for a good 10 to 15 minutes, he finally came back and said: Ezra, tell me how you identify. And you said, well, first I'm a dad and I'm a spouse. Then you said, I'm a journalist. And then you said, I'm Jewish. I'm the son of immigrants. I'm vegan. And I think the final one was, I'm fair. But in that entire description, you never said whip smart. How come you didn't include, or don't self-identify as, being highly intelligent?
Ezra Klein: It's a good question. I just don't. I don't know what it would mean to walk around the world saying to myself, hmm, looking at this as a smart person. A point I make in the book, and made to Malcolm in that interview, is that identity is also activated under threat. I live a life where a lot of people wander up to me like, oh, you're so smart. Many of you have come up to say something like that. So that identity is not much under threat. On the flip side, I just moved from California to New York, so when you said, “How do you identify?” the first thought that popped into my head was: as a Californian, because that identity is under threat. It's under challenge. It's something important to me, but there's pressure on it. One of the central arguments of “Why We're Polarized” is that you should never think of identity, and you should never think of identity politics, as operating off of any singular sense of self. We are never one identity. We have manifold overlapping identities. Some of them fuse together; a point the book makes is about mega-identities, right? People are liberal and they're atheist and they live in San Francisco. So, a lot of things connect, and a lot of things don't connect.
What matters in politics, what matters in life, is who we feel ourselves to be at a given moment. At this festival, the feeling, the identity of being a writer has been much more at the fore of my mind and the center of my soul than it normally is. I'm here among other writers. Not just the theme but the whole culture of this place honors writing. It's made me think a lot about myself as a writer. I think that's important. You want to try to recognize that you're never one thing, and you want to hold those things lightly and let them flow and let them flux. But also, I think you don't want to adopt any identities that are going to make your head too big. So, you don't want to walk around saying to yourself: I'm whip smart.
Willy Walker: You were different as a kid from your wife Annie, whose educational career is so boring – she went to Andover and Harvard. I mean, come on, you went to Santa Cruz before going to UCLA. What was it that turned you on? In other words, at some point, Ezra's mind went from good to great. What was it that got you to say: I want to learn more, I want to read more, I want to know more than anyone else?
Ezra Klein: I wouldn't say I think of it as competitively as that. But what I will say is that behind that story is the fact that I was a terrible student. The identity that was dominant for me in high school, because it's the one most constantly reinforced, was underachiever. Clearly a smart kid, an obsessive reader. I wanted to know a lot about the world then, too, but I was in a context that just didn't work for me. I have a lot of trouble absorbing information just by listening to somebody speak. I think in this age I would have been diagnosed with a kind of learning disability – though I wouldn't call it a disability; a learning difference is another way to talk about it. I focus very well by reading, but even now, as a reporter, I mostly can't call in to teleconference calls, because none of the information will hold for me.
So, I struggled terribly in school. I graduated high school with a 2.2 GPA. I got into UC Santa Cruz, God bless it, entirely on testing. At that time, Santa Cruz and Riverside would take anybody with above a 1400 SAT score or above a 3.3 GPA. And so, I was able to get in there, and even college didn't work out well for me. I loved Santa Cruz, and I had a good experience at UCLA, but it was blogging, in 2003, when I was a freshman at UC Santa Cruz. I got rejected from an internship on the student newspaper.
Willy Walker: Big mistake on their part.
Ezra Klein: Maybe not – now it has become like a beloved part of my biography, but I mean, I was a bad student. I had not been on my high school newspaper. They had no reason to think anything different. But I was bored, and I hadn't found my people – and I really wouldn't find my people at Santa Cruz. And this thing was happening on the internet that I was plugged into, which was blogging, where you could start up a thing at blogspot.com. I had a couple of early blogs, but the one I remember from this period was called Live it Left, after something my high school teacher used to say to me. There, I got to follow my own interests – and not just follow them, but write about them. It was the connection, for me, of being able to follow the pathway of my own enthusiasm, process that enthusiasm and that research through writing, and then iterate on it in the next piece and through conversation with other people. That was what caught for me. That was what took my kind of interest beyond liking to hang out at bookstores, which I still can't stop doing. And so, college was very much a sideline to blogging for me. I got an internship at a magazine, The Washington Monthly, then a fellowship at the American Prospect. I left college at the end of my junior year – I'm like 65% sure I'm a college graduate, but definitely not 100% sure – and went to Washington to work as a fellow. But for me, it's really that blogging and then journalism clicked. And this is in some ways a very deep part of my views on life and even politics, and it goes a little bit to what I said earlier about identity. I believe people are shaped by their context. I have been in contexts where I was the same person in many ways that I am now, and I was an unrelenting failure. I couldn't seem to get anything right. All anybody had to say about me was like, "Poor kid, what a shame. He can't seem to get it together." Then nothing changed except my context. And all of a sudden, here we are. And so, the question of whether people are able to search out the context in which they are adaptive, the context in which their unique combination of traits clicks, is, I think, a much bigger part of life than we give it credit for.
Willy Walker: So, one quick side note. You just mentioned that The Washington Monthly was the first publication you worked for. That was the first publication that David Ignatius of The Washington Post worked for. It's also the first publication my mother, Diana Walker, who was a photographer for Time magazine, worked for. Just kind of interesting, the three of you have that in common.
Ezra Klein: Great legacy there.
Willy Walker: So, let's flip through to High Weirdness, because I listen to your podcast. And by the way, at the end of every one of Ezra's podcasts, he asks his guest for three books or a podcast they would recommend to the audience. So, I went in and tried to follow along with that, and I got your book and four others: two books and two podcasts. So, on High Weirdness – and I would highly recommend that any of you listen to this podcast Ezra put together – in it, you and Erik (Davis) talk about the weird, and the fact that weird things are essentially things that challenge our understanding of the world we know today. So, my question to you, Ezra: we are all talking extensively about AI. Why isn't AI just weird?
Ezra Klein: I think AI is weird. This is the whole point. So, Erik Davis deserves some introduction here. He is the great living historian of California counterculture. He is himself unbelievably weird, just a very weird person.
Willy Walker: He doesn't sound weird at all. Interesting.
Ezra Klein: I know, that's his trick, but it's how the whole thing works. If you're going to be doing the work he's doing, you have to be able to pass with the normies. You guys should have him at this conference some year. This book, High Weirdness, is a profile of the mystical, psychedelic experiences of three important figures in the California counterculture, including Philip K. Dick and Terence McKenna – I'm blanking on the third one. I'd read it a couple of years ago, and I found it was helping me do two things. One was to interpret a dimension of California, and particularly Bay Area, culture that was very present for me.
One thing that makes the Bay Area what it is, is a tolerance for things that other places would not tolerate – for good and for bad, right? I think the disorder you see on the streets there is actually intrinsically braided with the explosive creativity of Silicon Valley. The thing that will happen to you in San Francisco is somebody will come and sit down with you and proceed to tell you the weirdest thing you have ever heard, with a completely straight face. And people there listen. Investors listen. So, for years I'd been reporting and spending time with people in that community. And I would always talk to my partner afterwards, and when I left these conversations with, for instance, this guy who not many people knew back then, Sam Altman, I would say: I cannot tell if he's crazy or if I am. Every time I leave – has this person lost his mind, or have I? Because they were describing a world that was on the cusp of completely irrevocable, unpredictable, transformational change, a world in which these people were racing towards the creation of a kind of self-directed superintelligence that they understood, and believed, they could not control, and in the long run could not predict. It could bring to life unimaginable utopias of material abundance and unimaginable dystopias of human extinction. And meanwhile, it's 2020, and when I go online, what AI is saying to me is: So, I see you bought a bike. Would you perchance like to buy another bike? On every website I go to on the internet. Something was very strange in this. But I had a lot of access to these programs – GPT-3, and GPT-2 quite a while before that; it wasn't called ChatGPT then. And you could watch them getting better at this accelerating rate, doing things every six months you didn't think they could do. And then, of course, the world comes to see this with ChatGPT, and it all kind of explodes out into the world.
The reason I find Erik Davis's work important here is that he's a theorist of the weird, and I think a mistake many people make when they think about artificial intelligence is to understand it on our own terms. They want to extrapolate one of two stories we already have: a story of material plenty – this is going to be great, productivity is going to go up, GDP is going to go up, wages may not go up, but maybe universal basic income will come into being – or a story where everything that is bad gets worse: we'll use it to kill each other, or it will just kill us.
What I think is likely is that it's going to be very weird. And in particular, one way that weirdness is a helpful framework here is that it makes the world illegible. To the point of defying conventional expectations: when AI is really working, it's going to be making a lot of decisions, and we will not be able to track those decisions effectively. And so, the world ceases to be something we can predict, something we can fit into our normal frameworks.
Willy Walker: So, as you talk about normal frameworks, one of the places you and Erik go in that conversation, Ezra, is sort of the world order and the way we justify being at the top of the pyramid. In it, you and Erik talk about building a house and, in the process, killing thousands of mosquitoes, bugs, and things of that nature; about the fact that we kill animals to eat on a daily basis; and that we all sit at the top of the pecking order and somehow say we're the highest as far as intellect, and therefore we have the right to treat all the other creatures on Earth the way we do. And all of a sudden, there's something that might actually come out on top of us, something that reworks that world order, and that freaks the shit out of us.
Ezra Klein: Well, let me turn this question to you, because I think it's a really interesting one. How do you value other people, or how do you think our society values people? Not what we say – what do we really do? If you were an alien who came down, an ethnographer from the Nebula universe, what would you find about how human beings value each other and other creatures?
Willy Walker: Clearly, in the United States, very materialistically: what you have, not who you are. Clearly, other cultures put more weight on who you are than on what you have. But yeah, that's the way I look at it from a U.S. perspective versus some other cultures in the world.
Ezra Klein: I think that's right. And I think that's one reason we're particularly afraid of AI out here. How do you get things in our current economy? You're whip smart – that's one of them. You're a tireless worker – AIs never sleep; they can be tireless in a way we can never be. You can come up with new ideas. You can range, you can be a generalist. We value people in very instrumental ways. So, if you build a system that is a better instrument than the people, well, then on what grounds do we value them? And if you look at how we've treated many animals, I mean, we instrumentalize them even more. Think of how we treat cows and chickens in factory farming. So, I think one thing that is very scary about AI is that in a world where we treat each other and everything else so instrumentally, what happens when we create a better instrument? And if it's trained on us, or even if we're just the ones deploying it, then what's to say it doesn't instrumentalize us?
Willy Walker: Another thing you said in that interview, which I thought was fascinating, was about, if you will, voice and cognition – the fact that as humans, when we hear something talking to us that actually contextualizes what it is saying, it sort of freaks us out. Talk about that, because I hadn't thought about it in that sense: when you hear audibly (not read) something that actually puts context to what it's saying – we've never had that before.
Ezra Klein: You mean what happens when an AI gains a voice?
Willy Walker: When an AI gains a voice – and the whole reason why, when AI comes to us giving a response through Siri, we all of a sudden say it has agency.
Ezra Klein: So, have people here seen the Spike Jonze movie, “Her?” If you haven't, I think it is like the single best piece of culture, the single best way to think about AI, because the movie is basically about what happens when we have companions – when we can create a system that can be your best friend, your lover, your partner, your business partner.
One of the strange things about AI right now is that we are creating a very inhuman, very alien – I don't know what you want to call it. I think the best thing to call it is an intelligence. Some people disagree with me on that. But a kind of intelligence, a problem-solving creature, a problem-solving system. And then we are teaching it to act human. It doesn't naturally act human. So, we are putting this thin human face, this thin human skin, on the most bizarre internal system you could possibly imagine. We are essentially trying to trick ourselves into believing in the humanity, believing in the personality, of this model. We're tuning it to do that. We are creating something to trick us, right?
Nobody really cared that much about GPT3 until they gave it a chatbot interface and made it good at talking to people in English. And then ultimately of course, in other languages. That's just very telling, right? AI is not human. It does not think like a human being. But we are teaching this very powerful system to seem like a human to us because that's how we like to relate to it.
One of the next things that will happen is voice. Then it's even harder not to feel like you're talking to a human being. I mean, okay, computers generating text is one thing, but voice, with conversational overlapping in its speech – and you can make it sound like anything, and it has pauses, stammers, and whatever – it's going to be very weird.
Willy Walker: It's going to be very weird. A lot of people are very fearful of AI sort of taking over the world. And in that discussion with Erik, you talk about the fact that it was in 2015 that Elon Musk said that within two years, by 2017, we'd have fully autonomous cars. Here we are in 2023, and we do not have fully autonomous cars, and McKinsey estimates that $165 billion has been spent on autonomous vehicles so far. How much distance is there between the lip and the cup, as they say in golf, as it relates to AI really starting to do things for us that are beyond just the book report one of our kids is going to auto-generate, versus something we really need to be fearful of?
Ezra Klein: I think there's actually going to be a pretty big last mile problem in a lot of AI systems, which has been the problem for driverless cars. So, a driverless car can't be 85%, 95%, 96% reliable. It's got to be 99.999% reliable. And that turns out to be really hard because there's a lot a driverless car cannot predict about the real world. There's a lot that shows up very rarely in the data.
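To put rough numbers on the gap Ezra is describing, here is some back-of-the-envelope arithmetic. The trip count is purely an illustrative assumption, not a real statistic:

```python
# Rough, illustrative arithmetic on the driverless-car reliability gap.
# The trip count below is a made-up number for scale, not real data.
DAILY_TRIPS = 1_000_000  # hypothetical fleet making a million trips per day

for reliability in (0.85, 0.99, 0.99999):
    failed = DAILY_TRIPS * (1 - reliability)
    print(f"{reliability:.3%} reliable -> {failed:,.0f} failed trips per day")

# 85.000% reliable -> 150,000 failed trips per day
# 99.000% reliable -> 10,000 failed trips per day
# 99.999% reliable -> 10 failed trips per day
```

Each extra "nine" cuts failures tenfold, which is why the jump from 99% to 99.999% is the hard part, not the jump from 85% to 99%.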
One reason I think AI is going to become economically useful and economically dominant more slowly, if ever, than a lot of other people think, is that that last mile problem is going to show up in a lot of places. You can't really have an AI journalist, because the systems hallucinate, and the better they get at being convincing, the harder they are to fact-check. Right? If the system is actually kind of stupid, it's easy to see where it's going wrong. If it's really smart, it's not. Where I think this could happen very quickly, though, where it could roll out, is things like companionship, where it's okay for it to hallucinate, okay to make things up. The point of your friend or your partner isn't that they're always telling you the truth. It's that they're caring, they're interesting, they're funny.
So, I think the productivity boost of AI is going to come more slowly than other people expect, because a lot of the things that are most important for productivity are going to have a big last mile problem, whereas the social hit of AI is going to come very quickly. I have a joke about this: right now, there's a lot of worry in the media that 12-year-olds don't see enough of their friends in person. I think in ten years there's going to be a lot of worry in the media that too many of a 12-year-old's friends are not persons. That's again why I think “Her” is a really good movie to watch, because it's focused on the social dynamic of AI. At least while we're talking about the large language models, I think a really important question to ask is: where does hallucination not matter? Where it matters – and that's true in medicine, journalism, politics, and business strategy – getting things wrong is consequential. That's going to be pretty hard.
Where it's not: video games, entertainment of all different kinds. There, I think you could see AI come into play really quickly. It's why I think Hollywood screenwriters are more right to be worried about their jobs than, say, analysts at an investment bank. Because even if, superficially, an AI could write a pretty good investment report right now, knowing what it has made up and what it has not is going to be really hard, and the cost of getting that wrong is going to be really high. So, in the end, not that many players are going to want to risk it.
Willy Walker: So, you talked about video games in that interview you had with the head of Google DeepMind, Google's whole AI unit. (Again, it's highly recommended to anyone in the audience who hasn't listened to it.) The two of you dive into a discussion of AlphaGo, AlphaZero, and AlphaFold. I think it would be really helpful for you to quickly describe those evolutions of AI, if only because, before I listened to that podcast, my thought of ChatGPT and what AI was going to do was pretty constrained to: oh great, my analysts at Walker & Dunlop are going to be able to do an underwriting file much quicker, exactly as you just talked about. And all of a sudden, as you and Demis dove into AlphaFold, it opened up this entire space of scientific research. It is so incredibly exciting what that development has done for protein research, and how beneficial it has been, not just to DeepMind, but to the entire medical research and drug development world.
Ezra Klein: So, to me, this is the path that I think is most interesting for AI, and the one I'm worried it's not going to take, precisely because we so like it when AI acts human that we're now putting all of our investment and so much energy into these large language models that are good at acting human.
What I like are inhuman AIs that do things human beings can't do. Demis Hassabis is a really interesting guy. He founded DeepMind, which got bought by Google. He's now in charge of all AI across all of Google and all of DeepMind, and it's now called Google DeepMind. But his background is as a game designer. And what he basically says to me is that he always wanted to work on AI, and he started in games as a way of cracking into that very early on. What he basically realized is that a lot of things can be structured as a game: if you can give the AI rules that it can learn, then it can begin to master things. But this was a very unproven hypothesis.
So, DeepMind's first big breakthrough is an AI system that can play the game Pong. Back then, a lot of AI was what's called symbolic AI: you're hard-coding rules into the AI, right? Like, when the ball goes up, you go up; when the other player does this, you do that. The problem is the world is too complicated to be well represented by a billion rules. We just cannot come up with enough rules to code into a system to make a symbolic system really intelligent. So, he codes an AI, with other people, and all it knows is that it wants to get to the outcome where it begins to get points in Pong. But it doesn't know anything about Pong, and it's basically blindly trying things, learning Pong almost pixel by pixel. Not understanding it as a game – understanding it as code, right? Like Neo looking at the numbers at the end of The Matrix. It takes six months for this system to score one point in Pong. Then, very shortly after that, it becomes completely unbeatable.
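For readers who want the idea concretely: the learning loop Ezra is gesturing at looks roughly like the sketch below, a one-step Q-learning-style update on a toy problem. This is a minimal editorial illustration, not DeepMind's actual system; the tiny "Pong-like" world and every name in it are assumptions made up for the example:

```python
# A minimal sketch of reward-only learning: the agent is given no rules,
# just points, and discovers a policy by blind trial and error.
import random
from collections import defaultdict

N_ROWS = 5            # the ball appears in one of five rows
ACTIONS = (-1, 0, 1)  # move the paddle up, stay, or move down

q = defaultdict(float)     # learned value of each (state, action) pair
ALPHA, EPSILON = 0.1, 0.2  # learning rate, exploration rate

def step(ball, paddle, action):
    """Apply an action; score a point only if the paddle reaches the ball's row."""
    paddle = max(0, min(N_ROWS - 1, paddle + action))
    return paddle, (1.0 if paddle == ball else 0.0)

random.seed(0)
for _ in range(20_000):
    ball, paddle = random.randrange(N_ROWS), random.randrange(N_ROWS)
    state = (ball, paddle)
    # Epsilon-greedy: usually exploit what has been learned, sometimes
    # try something blindly -- which is all the agent can do at first.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    paddle, reward = step(ball, paddle, action)
    # Nudge the value estimate toward the observed reward (one-step update).
    q[(state, action)] += ALPHA * (reward - q[(state, action)])

# After training, the greedy policy reliably scores whenever the ball is
# in reach -- learned purely from the reward signal, never from rules.
```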
Then DeepMind begins building Alpha systems, which is what they call them, for different games. The one they're most famous for is Go. Go is a much more complicated game, in terms of the moves available, than chess, and it was thought that if an AI could beat the best human players at Go, that would really be something. So, they built AlphaGo. AlphaGo learns from human games but also teaches itself the game, and it ends up beating one of the Go world champions.
Then they create AlphaZero, and AlphaZero never plays with human beings. It only plays itself. It's never given the data of human players; it only teaches itself. Within a week, it destroys AlphaGo. So actually, taking all the human information out of the system made it stronger. But the place where all this was going for them is: well, what else is kind of like a game in that way? Actually, a lot of scientific problems are. They have rules; they have points. One of the very hard problems in science for many years has been the protein folding problem. There are about 200 million proteins that we know of. A huge part of the way they work is how they fold up into their 3D structure. We can find their amino acid sequences pretty easily, but we couldn't predict from that how they would fold. Using the roughly 150,000 known protein structures we had, they built a system that is able to successfully predict the folded structures of 200 million proteins.
The system can't talk to you. It can't give you romantic advice. But it can do something no human being can do, which is solve the protein folding problem. The journal Science named it the Breakthrough of the Year – I think in 2021.
So here you have this other way of thinking about AI: what can you structure with a minimum of rules but a maximum of data, such that an AI could work and work and work until it finds the patterns that human beings can never find? And it's able to build those patterns into a generalizable model and then solve the problem completely, or make good predictions for the problem. That's AlphaFold. Now Google is trying to make that into a drug discovery business, because if you can figure out every protein, well, then maybe you could build drugs that bind more completely to the proteins you want and don't bind to the wrong proteins. They're spinning out this company called Isomorphic, which is supposed to do drug discovery.
I am personally much more excited about AI for drug discovery and other things like that. They're building one to try to help with nuclear fusion. One of the hard problems of nuclear fusion is how to stabilize the super-hot plasma at the center of these reactions. Humans can't create the algorithms and move fast enough to do that. AI maybe can. Whether this is the direction AI goes or not, we'll see. But I think what's interesting here is that the problem with language is that it doesn't really have rules. It has grammar, but there's not really such a thing as the right language. When you asked me this question, there were many right answers, or at least answers that would be legible to all of you in the audience.
Willy Walker: You did it just about perfectly, by the way.
Ezra Klein: Thank you. I mean, there is the perfect answer, which I found. But that's why the systems hallucinate, right? They're trying to figure out the likeliest answer to the question, but there is no one likely answer to the question. Many things are possible. If, however, you can find things where there is a right answer, and where you can falsify answers by finding out when you don't have them right – which is a very complicated question with all this stuff – then you can do things that are really incredible. And when you think about how AI could make our lives better in 30 years, I think a lot more of it comes from that kind of program than from the large language models.
Willy Walker: So, the hallucination comment, I think, is a really interesting one to double-click on for a moment, because I listened to the Sam Harris–Marc Andreessen podcast, and I thought I was going to recommend it to you all. But Sam is so quick to knock down anything that Andreessen says. I sort of loved what Andreessen said; I wasn't so excited about what Sam said. But during it, Andreessen said, “You know, we used to always rely on computers to be 100% right. Isn't it cool that they're now thinking on their own and they might come back to us with an incorrect answer?”
It's interesting because, for all of us – I mean, if I put something into my iPhone, I'm like, that better be right when it comes back. I actually asked Claude, the chatbot, to do your resume and my resume. It got you perfectly. For me, it said that I went to Stanford, that I was on the board of the Richmond Fed, and that I was born circa 1960. I was born closer to 1970 than 1960 – thank you very much, Claude. But the point is, I kind of laughed that it didn't get my bio right. That's just because it's out there trying to grab all this stuff. Nobody's sat there and done a Wikipedia file on Willy Walker that it was reading off of. It was actually out there trying to find the answer.
But there's one other piece to this beyond hallucination: the inbreeding that you raised in your conversation with Demis, as it relates to that database of 150,000 known proteins. When they started to run it, 150,000 turned out to be too small a database. With Go, they were playing the game millions and millions of times, and that's how it got better and better. And it was also structured AI, as you said, going at a specific task rather than something much more broad-based.
But talk for a moment, if you would, Ezra, about the inbreeding, because they had to go and take 600,000, as you talked about there, cut it down to 350,000, and then put that 350,000 with the 150,000 to get the half a million needed to actually know that the 200 million were going to work. Talk for a moment about that, because this is the whole realm of the unknown that I think scares us so much: if you set AI on something we don't actually know the answer to, how do we know its answer is correct?
Ezra Klein: So, the inbreeding problem is basically the problem of garbage in, garbage out. There are a couple of constraints on any AI system, and two of the main ones are: 1) training data – do you have enough good data to train the AI on? If you don't, it's not going to be able to find the relationships it needs to find. And 2) compute power, which is not that relevant to what we're talking about here. But you need a lot of compute power, which is why only a couple of companies right now can build the really big AIs.
Training data is really hard because, well, in some areas of life there's a lot of it, right? A lot of the AIs right now are heavily trained on Reddit. That's why they're so good at talking like people, because people talk like people on Reddit. But I don’t know if you read Reddit, I read Reddit. I am skeptical that you could become superintelligent by reading Reddit. Reasonably intelligent, sure. But super intelligent? By reading our relationship advice? I don't know.
So, where is there enough data? One thing you might think of doing is: well, okay, the AI can create all this data based on what it's read – create the data, feed it back into the AI. When you do that, you get the inbreeding problem, or model collapse, because the model begins burrowing its mistakes deeper and deeper into its own pattern recognition.
What they did with AlphaFold – and it's really important to note this might not have worked, right? I mean, it did work, so now it's like, huzzah! But this could have been a disaster – is basically create predictions and feed them back into the model. But they did something interesting here, which is that the model produces two things: one is the predicted structure of a protein, and the other is a certainty level about that predicted structure. So, they had a certain amount of the data where they were pretty sure – this is frankly a little bit beyond my level of technical expertise on protein folding, but they knew proteins close enough to it that they could be pretty sure they were getting the right answer. Then they had things where they weren't sure. And so, in creating data, they basically took the cut of what they were producing that was certain enough, and close enough, that the model could use it as more data.
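In rough pseudocode terms, the loop Ezra is describing is confidence-filtered self-training. The sketch below is a minimal editorial illustration, nothing like DeepMind's real pipeline; the toy task (guessing the parity of an integer) and the 0.9 threshold are assumptions made up for the example:

```python
# Confidence-filtered self-training: the model outputs a prediction AND
# a certainty score, and only high-certainty predictions are promoted
# into the training set, so the model doesn't "inbreed" on its mistakes.
import random

def train(examples):
    """'Train' a toy model by memorizing the labels seen for each last digit."""
    table = {}
    for x, label in examples:
        table.setdefault(x % 10, []).append(label)
    return table

def predict(table, x):
    """Return (prediction, confidence); confidence is the fraction of
    memorized labels that agree, mirroring a structure + certainty output."""
    labels = table.get(x % 10)
    if not labels:
        return random.choice([0, 1]), 0.5  # a pure guess
    guess = max(set(labels), key=labels.count)
    return guess, labels.count(guess) / len(labels)

THRESHOLD = 0.9
random.seed(0)
labeled = [(x, x % 2) for x in random.sample(range(1000), 50)]  # trusted data
unlabeled = random.sample(range(1000, 100_000), 500)            # no labels yet

for _ in range(3):  # a few rounds of self-training
    model = train(labeled)
    promoted = []
    for x in unlabeled:
        guess, confidence = predict(model, x)
        # Keep only predictions the model is nearly certain about;
        # feeding everything back in is what causes model collapse.
        if confidence >= THRESHOLD:
            promoted.append((x, guess))
    labeled = labeled + promoted
```

The design point is the filter: low-confidence predictions are thrown away rather than fed back in, which is what keeps the model's own errors from compounding round after round.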
And Demis has said to me, and he's said to other people, too, that he thinks we were basically at the bound – that 150,000 proteins was the minimum possible amount of data to get protein folding right, and that if we'd had 70,000, it wouldn't have worked.
Willy Walker: And on the other side of that, in that conversation you went to predictive capability in markets, and you kept probing: aren't there hedge funds out there using AI to, if you will, corner a market, figure out where the market's going? And his response to you was essentially that it's too vast a data set. There are too many variables for AI to accurately predict that Jerome Powell is going to wake up on Wednesday and raise by 25 basis points at the same time as oil supply goes to X.
Ezra Klein: I don't really believe him.
Willy Walker: You kept pushing on and he kept coming back to you saying: it's too much.
Ezra Klein: If you look at the bylaws of OpenAI, they have a weird clause where they say their investors can only make a 100x return. And you might think: why cap the amount of money the investors can make? Why tell investors from the beginning, we're going to cap your return? It's because the people who run OpenAI think that when you create – not what we have now, but a genuine artificial general intelligence – it will basically be able to make all the money in the world. Are they right? Probably not. But it could make a ton of money. And I don't really buy that you can solve the protein folding problem, but somebody cannot create an algorithm in markets that becomes a very dominant hedge fund. I don't want to speak for Demis, but I'm not convinced by his answer there.
Willy Walker: Before we move off of AI, as it relates to which camp you're in: one camp says AI is going to be this great tool for humanity and is going to advance us – so many people say that every technological innovation we've come across that we thought was going to eliminate jobs has only grown the economy and made us better and faster and all that stuff. That's one camp.
The other camp is Anthropic – the people who left OpenAI to go start Anthropic, a socially conscious AI company. One of your colleagues at The New York Times went and visited, spent a week there, and basically said he got scared to death, because most people walking around there are saying we're all dead in ten years. Which end of that are you on?
Ezra Klein: It's a weird culture, I'll say that, for all these people. I think what's weird about the OpenAI and Anthropic thing is that OpenAI was supposed to be the socially conscious AI company. Then a bunch of its people were like, no, no, no, we've got to go start Anthropic. Now people at Anthropic are like, we're moving too fast.
Look, Oppenheimer is in theaters now. I wasn't there covering the development of the atomic bomb, but it's hard to think of anything else where the people behind the system seemed so afraid of what they were creating. Now, they're afraid based on speculation. They're afraid based on a view that the curve is not going to flatten, that it's just going to keep getting smarter, more capable – which I think, over enough time, it will. How fast, I don't know. And that we're creating something that could well replace us, or could just do terrible damage to us. That doesn't seem crazy to me. And I think the hard thing is: what do you do when you can't rule it out? That's the question all these people are asking. And with Anthropic, you might ask: why are they creating a system at all? It's because they need to raise all this money to create a system, because they think that unless you have the system, you can't do enough safety testing and research to figure out how to control the system. The reason all these people who are terrified of what they're building are trying to build it is that they think it's only by building it that we can figure out how to make it safe. But they're not confident that strategy will work, so they might end up building the thing that destroys us anyway. And whether it will destroy us – it's hard, because it's all speculation stacked on speculation.
On the other hand, I don't think you need to believe in too many speculative leaps to believe that if we create something that is more capable than human beings at most knowledge tasks, can be infinitely replicated, and never sleeps, that's going to be a real transformational hit to humanity – economically, psychically, everything. So, you know, we're going into truly uncharted territory. But yeah, I find it worrying how worried the people behind these things are, and yet they're all racing each other to build them.
Willy Walker: As Evan Osnos said in an interview the day before yesterday, it may be ironic, it may just be chance, but Oppenheimer and Sam Altman of OpenAI share the same birthday. It's kind of a chilling thought, isn't it?
Before we move off of AI – and AI is going to come up in everything else – Jacob, I have a clip here that I pulled out this morning. I thought it was so appropriate as a transition from AI, having asked you that question. This is Joe Walsh of the Eagles.
“I don’t have much to say about it. It’s computers and has nothing to do with music. It can't destroy a hotel room. It can't throw a TV off the fifth floor into the pool and get right in the middle. When AI knows how to destroy a hotel room, then I’ll pay attention.”
Willy Walker: I loved that clip from Joe Walsh – you know, he relates that AI still can't play a song, still can't jump into a hotel pool. Let's move to the next one, which is Recoding America. This is a book you recommend and hope that every politician and bureaucrat in America reads – why?
Ezra Klein: This is a book by Jennifer Pahlka. It's great; I really recommend it to you all. You should have her here next year. She founded a group called Code for America, which basically tried to create digitally native and digitally capable government services. Has anybody used a government website? Certainly, if you used one five or ten years ago, it was not a great experience. So, she has been trying to get great technologists to work in the public sector, or with the public sector, to make this all a lot better. Her book is such a stunning and specific account of why government programs run by people who want to make them work – run by good, dedicated civil servants – fail, why they struggle, why they're too rigid, and what it takes to try to move them.
You know, a big theme of my work over the past couple of years is what I call a liberalism that builds. It's about the question of why it's become so hard in blue states like California or New York – in many red states, too, but I'm focusing here on liberals – to build real things in the real world: housing, clean energy, mass transit, semiconductor manufacturing facilities. A lot of it just comes down to a rigidity of process, a kind of crush of stakeholders, too many goals and not enough prioritization – a lot of good intentions that are not, in the end, resolving down into good structures.
Jen's book, I genuinely believe, should be assigned reading. It is about how good people fail the people they're trying to help, because they end up engaged in a fight with the systems in which they're operating. They may have the power to change the system, but what it takes to change the system is too hard for most people. And so, you either get heroes, or you get people willing to live within – people who even adapt to and serve – a bad system. And the people who lose out on that are the people who need government services. So, it's a fantastic book.
Willy Walker: There are a couple of themes in there that I thought were really interesting. One is that no government contractor is hired for competence; they're hired for compliance.
Ezra Klein: Yeah – I don't want to say no government contractor, and I don't think she would say that either, but to a first approximation. This is a good example of how good intentions go bad. You have in the government all of these procurement and contracting rules meant to ensure fair play, meant to prevent cronyism, meant to make sure that a diverse slate of subcontractors gets to bid – meant to do all these things that people support. Those rules end up being very, very, very time-consuming and difficult to follow. So, who ends up being good at contracting? Not the people who can do the job best; it's the people who understand the rule-compliance system the best, the people who understand the bidding system the best, the people who've built huge networks of lawyers and RFP writers and so on to get involved in all that. And often those are not the people with the best technology. So, that's one dimension of it. Another dimension is that we have this litigation system inside the procurement system: if you don't get the bid as a contractor, you can sue, and then they have to show that the whole process was fair. You would think the contractors should be at the beck and call of the public servants, but the public servants, the bureaucrats, often have to please the contractors, because they don't want the process to get slowed down in a court fight. So, it's a very good example, and she has a number of them, very specifically, of why we don't end up with the best contractors doing the best work. And by the way, I heard from hundreds of contractors over email after this, and they were like: this is all true, and it is so frustrating for us, because I also can't do my best work. I, too, am trapped in a system where I have to file all my forms in triplicate and fill out this and that. It should not be this frustrating to work in government. And if it is this frustrating to work in government, then only people who are willing to be constantly frustrated are going to work in government.
One of the things I said in that piece is that we are asking too much of the people who work in government. We don't just need good people in these jobs; we need these jobs to be good for good people. And somebody actually pretty high up in the administration wrote to me and said that comment made her cry, because for her, and for so many around her, the day-to-day fight to try to do right by the people she is trying to serve is so difficult, and it can feel so pointless. Everybody's incentives should be pointing in the same direction here. And if you're a liberal – and I am a liberal – this should be taken as a very real problem. A lot of liberals, and I'm one of them, believe in government. But if you believe in government, you need to be attentive to the details of how it works. And if it's not working well, you should be the most upset. You should be the most furious. You should be the most intent on changing it. So often, because of the macro battles of Republican and Democratic politics – Democrats as defenders of government, Republicans as critics or attackers of it – Democrats are very uncomfortable saying what's actually wrong. But a government that doesn't work well, much more so to me than any Republican critique, is what ultimately weakens government, weakens public support for it, and dissuades people from participating in it or signing up for things.
My wife Annie has written these great pieces on the time tax: the way administrative complexity is, in some cases, weaponized to keep people from attaining or accessing the benefits they're entitled to, ranging from SNAP payments to simply voting. But sometimes it's not weaponization; sometimes it's simply drift. Sometimes Democrats were so afraid of fraud that the income verification or wealth verification parts of a bill became so onerous that somebody who's a single mom with four kids and two jobs simply can't get through it. This should appall liberals. It should obsess them. And it doesn't. Jen is somebody who's trying to change that.
Willy Walker: One final thing on that. In the book, what you and Jen talk about is that the sexy stuff is the politics, the next sexiest is the policy, and the third sexiest is the delivery, the implementation. I was talking last night with Judy Woodruff and Dee Dee Myers about this specific issue. Both of them said that's the reason the White House has reporters out the door, while if you go to HUD or Treasury or State or anywhere else, there's no one covering what actually happens at the ground level. It ain't sexy. No one covers it. All they want to talk about is the politics, not the policy or the implementation.
Ezra Klein: Yeah. I mean, Washington is a city – as are many state capitals – that works off of prestige and works off of money. And you don't attain prestige, and you don't make money, by working on implementation, at least not on the public side. You may make it on the private side, right? Contractors actually sometimes do make a lot of money, but not on the public side, and that's a real, genuine problem. Shaun Donovan, who ran HUD, is around this conference somewhere, and I'm sure he could give you chapter and verse on this.
As somebody who's covered policy for many years, I can tell you: the amount of coverage we give to something when it is a fight over whether the policy will pass – whether Obamacare will pass, whether the Inflation Reduction Act will pass – and then it does pass, hallelujah, it's like, bye, all the best. How's the Inflation Reduction Act going? I mean, if you're listening to my show, Robinson Meyer will tell you. But in general, the level of coverage of whether or not a policy is implemented well is so much smaller than the day-to-day coverage of stuff Joe Manchin said near the elevator that it's appalling.
Willy Walker: All right, I'm going to try to squeeze in one more. This final one is a fascinating conversation on an issue that is so topical and so important to our country. You discussed with Jerusalem (Demsas) the new study from the Benioff Homelessness and Housing Initiative at the University of California, San Francisco – the deepest study ever done on homelessness in California. California has 12% of the country's population, 30% of the country's homeless population, and 50% of the country's unsheltered homeless population. There were a number of things in that discussion, Ezra, that you two debunked. The biggest one, I think, that I'd like you to comment on is the idea that someone who is becoming homeless in Ohio – because California has a broader social safety net and better weather – hops in his or her car and zooms across the country to California to become homeless.
Ezra Klein: Yeah, that just doesn't really happen. One of the things this report finds – the biggest report we've ever had on the homeless population – is that for 90% of people currently homeless in California, their last address was in California. For 75%, it was in the same county they're currently in. When you're homeless, you actually just don't have the wherewithal to go very far. Weirdly, the logic should go the other way, right? People think: oh, you're homeless, you go to California. No, no, no. If you're homeless, you should leave California, because the California housing crisis makes it very hard for you to get housed. And the fact that people don't really do that either shows you the problem. There's a great line in that report: being homeless is a full-time job. It's very hard, when you have nothing, to pick up and move.
One of the key things in this is that homelessness is fundamentally a housing problem. There's a very good analogy in a book by that same name, Homelessness Is a Housing Problem. Think of it as musical chairs, right? If you have 12 people but 14 chairs, and during the game somebody breaks her leg, that person is still going to find a chair. Now, if you have 11 chairs and somebody breaks her leg, well, that's the person who's not going to get a chair. And that's the way to think about the relationship between individual risk factors – mental illness, joblessness, etc. – and homelessness: if you have enough chairs, you can have a lot of those problems and still get a home. West Virginia has a higher rate of mental illness than California, a higher rate of poverty, a much higher rate of drug addiction, and a much lower rate of homelessness, because West Virginia has very cheap and very abundant housing.
California, and San Francisco in particular, are rich, have a low rate of mental illness in general, and have a very high rate of homelessness, because housing is unfathomably expensive. So fundamentally, you want to look at this from the perspective of the chairs, right? You can't solve the problem unless you have enough chairs. If you don't have enough chairs, then maybe it won't be the person with the broken leg; if you have a more capable population, maybe it'll be the person who's a little bit out of shape, whatever. But if you don't have enough chairs, somebody is going to be left out.
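To see the logic of that model concretely, here is a tiny simulation. This is an editorial illustration with made-up numbers, not data from the report:

```python
# The musical-chairs model of homelessness: individual risk factors
# decide WHO is left without a chair, but the supply of chairs decides
# HOW MANY are.
import random

random.seed(0)
PEOPLE = 12

def play_round(chairs):
    # Each person draws a random "risk score" (a broken leg, job loss,
    # mental illness); the highest-risk players lose the race for chairs.
    risks = sorted(random.random() for _ in range(PEOPLE))
    return risks[chairs:]  # risk scores of everyone left standing

for chairs in (14, 12, 11, 9):
    unseated = play_round(chairs)
    print(f"{PEOPLE} people, {chairs} chairs -> {len(unseated)} left out, "
          f"risk scores {[round(r, 2) for r in unseated]}")

# With 14 chairs, nobody is left out, whatever their risk factors.
# With 11 or 9 chairs, somebody always is -- the most vulnerable
# somebody -- but the count is set entirely by the number of chairs.
```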
Willy Walker: So, just one final thing on that, and then I'm going to go to my final question for Ezra. As it relates to the number of chairs, it's a zoning issue. In Los Angeles County, 79% of the land is zoned for single-family housing. That means you cannot build an apartment building to house more people. The city you talk about in your podcast that has brought homelessness down 69% in the last decade is Houston, Texas. I'm in the real estate business, and if you talk to any of my clients, Houston is a great city to be a developer in and a terrible city to be an owner in, because there's no zoning. If density is needed, you can put up an apartment building right next to an office building, next to a residential home. It's unbelievable that the restrictiveness of zoning in California is restricting supply, driving costs up, and having people fall out of the bottom.
You go to the other extreme in Houston, Texas, where there is no zoning and you can build anything anywhere, and they've reduced their homelessness by 69% in the last decade.
My final question for you is the one you give to everyone. Out of all the stuff you've done, I picked just this one, because you read so much and you're always thinking ahead: what's the book that you've either just read or are about to read that has you most excited right now?
Ezra Klein: I guess I shouldn't say Recoding America, because we already talked about it. But if I had to have you buy one book out of this conversation, it would be “Recoding America.” And if I were to have you get one book on AI, because we talked about that quite a bit, there's a great book that came out a couple of years ago by a guy named Brian Christian called The Alignment Problem. It's not just one of the best books about AI – I think it's the best book about AI – but it's also an amazing book about the human mind and how minds work. It's just beautifully done. He's an award-winning science journalist. So, “Recoding America” and “The Alignment Problem” – if you read those two books, your view of the world will tilt a bit on its axis, in a very good way.
Willy Walker: Ladies and gentlemen, Ezra Klein.
Why We’re Polarized
This was a fascinating read about how identity politics is at the core of polarization. I can’t recommend this book enough, especially today with the state of U.S. politics.