
NYU Professor Vasant Dhar discusses the fine balance between harnessing the power of AI and learning to navigate this emerging landscape successfully.
Thinking With Machines
By Gerry Baker, Dec. 4, 2025
Announcer: From the opinion pages of The Wall Street Journal, this is Free Expression with Gerry Baker.
Gerry Baker: Hello, and welcome to Free Expression from the Opinion page of The Wall Street Journal. I’m Gerry Baker, editor at large of The Journal. Thanks very much, indeed, for listening. This week we’re going to take a deeper look at the familiar topic of artificial intelligence. The world of AI has obviously moved with extraordinary speed in recent years from a vision of a future, utopian or dystopian, however you’d like to think of it, to increasingly an everyday reality. I’m sure, like you, I find myself using AI more and more in my everyday life, from researching data and information for my work to chatting with various chatbots on everything from music recommendations to the outlook for my favorite sports teams. We can see that the capabilities of AI are advancing literally by the day, if not by the hour. It’s recently been estimated, in fact, that sometime in the past few months the content on the internet produced by AI actually overtook content produced by humans. And we know that what we’ve seen so far is really just a distant glimpse of what’s to come. We’re only in the early infancy of the artificial intelligence age. Tech pioneers and experts still differ on how exactly AI will transform our lives and work. But I think no one now really challenges the basic proposition that it is indeed historically transformative. Unless you’re planning to disappear into the wild with a lifetime supply of non-perishable food, you, me, everybody is going to be living and working with AI increasingly for the rest of our lives. So how do we approach that task, that challenge of dealing with AI? Well, my guest this week has some really useful answers and some very deep thoughts on the topic. Vasant Dhar is a professor of data science at New York University and a professor at the Stern School of Business there. He’s written extensively on AI and technology generally. He hosts a podcast, of course, as everybody does, about the emerging tech landscape. It’s called Brave New World. And he’s just published a new book called Thinking with Machines: The Brave New World of AI. And I’m delighted to say that Professor Vasant Dhar joins me now. Professor, thanks very much for joining Free Expression.
Vasant Dhar: Delighted to be on the show, Gerry.
Gerry Baker: You’ve called your book, your new book, I should say, Thinking With Machines, which is an interesting title and I think gives a good sense of the book’s particular perspective. You make the point that AI, as we all know, as I said in my introduction, is an objective reality of our lives and is going to transform just about everything we do. And key to our ability to manage this future, or this present even, is being able to work with AI, to work with machines. So first of all, before we get into some of the specific details and examples you give in the book, tell us what the book is about: why it’s called Thinking with Machines, what the future of AI means for us in terms of how we manage to work with artificial intelligence, and what we need to do to make it productive for us as it becomes increasingly important.
Vasant Dhar: So I called it Thinking with Machines because that’s the future of humanity, whether we like it or not. There is no opting out. And so we should think about the issues that it raises, the new problems that it poses, and how we’re going to address them. So in a nutshell, that’s what it’s about, or that’s why I called it Thinking with Machines. What the book is about is several things. So I’ve written it for everyone. This is for the general reader, for students, for parents, for grandma, for policymakers, for my colleagues. It’s written for everyone, and there’s a specific message in there for the different constituents. So firstly, I sometimes use a Bob Marley line, which is, “If you know your history, then you’ll know where I’m coming from,” and that’s, number one, a primer on the history of AI. Why are we where we are? What’s gotten us here? So I tell a story, but there’s some intellectual heft behind it that covers the history of AI, how it has progressed from the early days when I got into it in 1979, the days of expert systems. And I got into it because of medicine, medical diagnosis. In those times, we developed applications, specific focused applications where we could identify expertise. That was the era of expert systems. That ran into some roadblocks that we can get into, if you’d like. But machine learning came to the rescue in the early ’90s. Data started becoming available. And so the field said, oh, let’s put those old ideas on hold. So the ambitions of AI have always been grand. The vocabulary in the ’70s was thinking, planning, reasoning, understanding. Those were the terms people used to describe their programs. Now, we didn’t quite realize it at the time, but our tools were somewhat limited. We didn’t know any better; we thought we could accomplish these lofty objectives with the tools that we had. We couldn’t. And so machine learning came to the rescue, and the emphasis shifted to prediction. And that’s when I went to Wall Street and focused on predicting financial markets from data. So it became prediction, prediction, prediction. And the field progressed, and there were lots of machine learning successes, but that also had its problems. You had to take the data, you had to massage it, and that was a bottleneck. And then deep learning came to the rescue, and that was really where machines started perceiving the world the way we do. They could see, they could hear, they could read. And so intelligence moved upstream, and that was the era of deep learning. The latest, which I call the era, or the paradigm, of general intelligence, is one where the machine knows something about everything. And what distinguishes this era of general intelligence from previous paradigms is that the boundary between expertise and common sense has dissolved. The hardest problem in AI was always how to get your hands around common sense. And people thought that that was just too hard. I spent several months in Austin, Texas at this AI lab called MCC, where there was a visionary called Doug Lenat who was trying to teach the machine common sense. And I hope I’m not being overly uncharitable when I say it was a colossal flop. You couldn’t teach a machine common sense. And the magic of modern AI is that that distinction has broken down, and so it’s made AI accessible to everyone.
Gerry Baker: Just explain, if you would, Professor, what’s changed then and why are machines now able to exercise what we would call common sense?
Vasant Dhar: Pure serendipity. So Google wanted to do sentence completion in Gmail, and that was their objective. Now it turned out, and when I say serendipity, that was a problem that was hard enough and where there was plenty of available data to solve it. There was lots of humanity’s expression on the internet in terms of language, and so you could actually solve that problem with data. Now, some people say, well, all it’s doing is next-word completion, or sentence completion, and then multiple sentences, paragraphs; you can keep going and make the context longer. But what was really interesting is that in order to do that fluently (and machines are designed to make sense, they’re not designed to be truthful), the machine was forced to learn about the world in general. It was forced to acquire knowledge, and to acquire knowledge about the relationships between things. And so that was the pure serendipity: in the process of solving this very practical problem of next-word and sentence completion, we actually managed to solve a much larger problem, namely getting an understanding of the world and how we express ourselves. And the distinction between expertise and common sense completely broke down. So ChatGPT doesn’t know whether it’s telling you something specialized or something commonsensical. It doesn’t know, and it doesn’t understand the distinction between the two. And to me, that was one of the biggest stumbling blocks of AI, one that modern AI solved, and that’s made it accessible to everyone.
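To make “next-word completion” concrete, here is a toy sketch in Python (my own illustration, not from the interview or the book; the corpus and function names are invented): a bigram counter that predicts the most frequent continuation of a word. Real language models learn vastly richer representations, which is exactly Dhar’s point about serendipitously acquiring knowledge of the world, but the training signal, predicting what comes next, is the same in spirit.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "humanity's expression on the internet".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (the most frequent follower of 'the' here)
print(predict_next("sat"))  # -> 'on'
```

Scaled up by many orders of magnitude, with context windows far longer than one word, this predict-the-next-token objective is the one Dhar credits with serendipitously dissolving the boundary between expertise and common sense.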
Gerry Baker: How good is AI at predicting? You just talked about its capacity for predicting, and I found lots of examples in the book: your own experience in finance, and the more recent experience of asking a chatbot, I think maybe it was ChatGPT, questions like should I invest in the following stocks, and you list stocks. I mean, how good is it now at those really, really useful practical questions, like is NVIDIA’s stock going to go up or down in the next year, and how good is it going to get? Obviously we don’t need to get into the theories of finance about inherently unpredictable things, but how good is it going to get compared with humans, at least?
Vasant Dhar: So that’s a great question. And the answer is: it depends on the problem. So when it comes to financial markets, for example, chances are that it’s not going to get better than 52, 53% accuracy. You can try as hard as you want, but those are inherently very unpredictable kinds of problems. They’re very noisy. And as I point out, one of my revelations, after I’d been doing this thing for 10 or 15 years, was that in finance all you needed was a slight edge. My algorithms’ accuracy varied anywhere from 50 to 54%, but that’s good enough. And I draw this analogy with sports, which is also highly competitive.
Gerry Baker: The Roger Federer example is very, very (inaudible).
Vasant Dhar: Exactly. Exactly. So in tennis, all you need to do is be slightly better. You need to have that edge. And over the course of the match, the edge multiplies. So the longer the match, the more the edge will multiply as long as you don’t get exhausted.
Gerry Baker: You mentioned this in the book, Federer famously says, “I win 54, 55% of points.”
Vasant Dhar: That’s right.
Gerry Baker: “But that means I win 80% of matches,” or whatever. So yes, that slight edge is so critical. But, sorry to interrupt.
Vasant Dhar: Exactly, yeah. Or Boris Becker, for example, had a 2% edge, so he won only 52% of his points. And he won almost 80% of his matches as well, because he was particularly good at winning the important points. But the larger point here is that all you need is a slight edge, and you multiply it. And that’s what I did on Wall Street. When I did high-frequency trading, my win rate was just 51, 52%, but I did so many trades a day that it multiplied. And that’s the situation in finance. Now in healthcare, you might get 75, 80% accuracy. In driverless cars, you get 99.999999% accuracy because the domain is very well-defined. We can look at this and say, “That’s a cup, that’s a tree, that’s a lane.” So the ground truth, the objective, is very clearly defined, and machines can get very good at it. So the predictive accuracy really depends on the domain itself and how much signal there is in the problem. And one of the questions I asked myself years ago was: why was I willing to trust my money to an algorithm with trading, but hesitant to, let’s say, trust an algorithm with my health, or to take my hands off the wheel in a driverless car? And the answer is, it depends on the cost of error. In the finance situation, well, I lose a little bit of money. If my bets are diversified, it’s not a big deal. I can afford to lose 48, 49% of the time. In healthcare, the cost of error is high. If I’m misdiagnosed with cancer or someone misses it, I’m just not willing to take that consequence. And driverless cars, same thing. The cost of error is death. So that’s a very high cost of error, and so we’re reluctant to trust an algorithm in these high-stakes kinds of situations.
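To make the arithmetic behind that “slight edge” concrete, here is a minimal simulation in Python (my own sketch, not from the book; it assumes a simplified best-of-five tennis scoring model with no tiebreaks, and independent even-money trades):

```python
import random
from math import comb

# --- Tennis: a small per-point edge compounds over a match ---

def play_game(p):
    """One game: first to 4 points, must lead by 2 (deuce included)."""
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False

def play_set(p):
    """One set: first to 6 games, must lead by 2 (no tiebreak, for simplicity)."""
    a = b = 0
    while True:
        if play_game(p):
            a += 1
        else:
            b += 1
        if a >= 6 and a - b >= 2:
            return True
        if b >= 6 and b - a >= 2:
            return False

def play_match(p, sets_to_win=3):
    """Best-of-five match; True if the player with point-win probability p wins."""
    a = b = 0
    while a < sets_to_win and b < sets_to_win:
        if play_set(p):
            a += 1
        else:
            b += 1
    return a > b

# --- Trading: a 51% win rate multiplied over many trades ---

def p_profitable(n, p=0.51):
    """Exact probability of winning more than half of n even-money trades."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

if __name__ == "__main__":
    random.seed(0)
    trials = 10_000
    for p in (0.50, 0.52, 0.54):
        wins = sum(play_match(p) for _ in range(trials))
        print(f"point-win prob {p:.2f} -> match-win rate {wins / trials:.1%}")
    for n in (10, 100, 1000):
        print(f"{n} trades at 51% -> chance of a profitable run {p_profitable(n):.1%}")
```

Under these assumptions, a point-win probability only a few points above 50% translates into winning the large majority of matches, and the chance that a 51% per-trade edge produces a profitable run climbs steadily as the number of trades grows, which is the multiplication Dhar describes.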
Gerry Baker: Are we looking at a situation where in 1, 2, 5, 10, however many years, obviously the progress is extraordinary, machines will be able to predict outcomes, whether in finance or medicine, and again, I understand the conditions may vary by topic, better than, say, the vast majority of humans who do those jobs now? Are you going to be able to say, well, I’m looking for a hedge fund, I might as well entrust my personal wealth to an AI-generated hedge fund? There may still be outstanding humans who can beat the machine, but for most intents and purposes, machines are going to be better. Or the same with doctors. There will still be outstanding specialist doctors. But for most doctors, especially in terms of diagnostics and prescription, the machine’s going to do it better than the vast majority. Is that plausible?
Vasant Dhar: Yes, I think that’s largely true. In 2015, I had a conversation with Scott Galloway where the question was should you trust your money to a robot? And he ended by saying, “Okay, so trading floors will disappear, but private equity and venture capital is safe.” And I said, “Yeah, that’s pretty much true.” Now I’m not sure. I’m not sure that private equity and venture capital is safe either.
Gerry Baker: Really?
Vasant Dhar: Yeah, because the machine has become capable of knowing about the world. One of the things I described in my book is this Damodaran bot, which is designed to think like my colleague Aswath Damodaran, which I wouldn’t have dreamed was possible seven or eight years ago, prior to ChatGPT. So yeah, and even in healthcare. I mean, people my age, we often have prostate problems. My PSA levels are high. I’ve consulted with two very seasoned urologists, and they’re baffled. They can’t explain my high PSA levels. Why? Because in medicine we’re not really doing scorekeeping in a systematic kind of way. The physician looks at me, looks at my case. He’s hurried. I get out of there. There isn’t any systematic recording of the data. What he should be able to tell me is: Vasant, I’ve seen 13,970 cases like yours with your identical PSA trajectory, and these were the outcomes. I believe this will happen because the machine will become capable of reading electronic health records. It’ll become capable of putting together these databases that actually become useful, but this is a tremendously laborious kind of process. Physicians don’t have time to do it. They’re swamped; at the end of the day they just manage to get home. So this is an area about which I’m very optimistic. But the answer to your question is yes, I think we will, in time, trust the machines much more, or at least the physicians who have access to the AI, because we may still want that human touch for all kinds of reasons. But the people we’re dealing with will have to use AI, because I see no other alternative. I mean, to me, the writing is on the wall.
Gerry Baker: You talk about trust and truth in the book; I want to get onto those. But what will be left for humans? In the field of finance or healthcare, or frankly any field, journalism, if you like, or academia, is there still a human edge that’s going to be in demand and somehow outperform machines? I mean, is that imaginable?
Vasant Dhar: I think so. And I talk about this in the book: we’re facing this impending bifurcation of humanity where the smart get smarter, where their skills get amplified because they’re capable of asking the right kinds of questions, capable of evaluating what the machine is telling them, capable of knowing which direction to nudge it in, whether it’s correct or whether it’s (beep) or whether it’s making things up, so they can actually work with it and make themselves better. They can go into adjacent fields and learn because they already have the knowledge base to amplify themselves. Versus those who use it as a crutch, who say, “Well, just give me the answer.” And to me, that will lead to cognitive decline. You’re not exercising your mental muscle. So that’s what I see coming. I don’t see humans being completely replaced. I actually see humans becoming superhuman in some ways. If they’re already very good, we have this tremendous amplifier at our fingertips that we’ve never had before. It’ll be an oracle, it’ll know more, and it’ll become capable of amplifying our own skills. So I don’t buy the doom-and-gloom scenario where there’ll be nothing left for us to do. That just seems extreme. Possible, but I don’t really buy it. What I’m seeing is just the opposite: some humans are becoming smarter because they have access to this knowledge that they’ve never had before.
Gerry Baker: Where do you stand on the employment story? Again, anybody who’s studied economics knows that every wave of new technology is supposed to destroy jobs that will never come back. And then of course, what it actually does is destroy certain types of jobs but create new ones, and we don’t have some fundamental lump-of-labor problem. Is AI different in that respect, or is it just yet another wave of innovation that’s going to be incredibly disruptive but isn’t really going to fundamentally render redundant half the population, as some people seem to think?
Vasant Dhar: Well, I mean, to some extent, both, because in some sense it’s like any other technology in that it forces humans to up their game. And that is consistent with what I was saying earlier about this bifurcation of humanity: it actually tells you to up your game. On the other hand, it’ll cause massive displacement as well, because it’s a new kind of machine. So far we’ve amplified brawn, physical power. Now we’re getting into cognitive functions, brain kinds of things. So in that sense, it is different, and it will cause unemployment in certain areas, and it’ll force people to up their game. Now, someone asked me, “What do you think the proportion is going to be between the two?” And I don’t know; to some extent that depends on us. It depends on whether we manage to stay on the right side of this divide, and how many of us manage to stay on the right side of the divide. And that’s what I tell my students: you don’t want to be solving your assignments using ChatGPT. You want to be using it to improve what you’re doing. And that’s really the way to approach employment going forward as well. It’ll up the game. So the analyst who produces one report every three months might now actually produce 10 reports a day. And with this Damodaran bot that I’m talking about, I don’t expect it to completely replace analysts. Well, it’ll replace the mediocre ones, the people who are not very good, but for the ones who are really good, it’ll enable them to do scenario analysis, such as what happens to NVIDIA if Trump escalates tariffs, or what happens if it’s a head fake? And that’s not just a question of changing a bunch of numbers. It changes the whole narrative that goes along with it. And that wasn’t possible until now. So it’ll up the expectations of humans, because they’ll be able to do so much more than they were able to do previously.
Gerry Baker: And where do you stand on the productivity question, and the question of what it means for the economy in aggregate? As you know, there are people who are very, very optimistic about this, who think that with AI we’re essentially in the process of a productivity miracle. And if you measure productivity by output per labor hour, the traditional measure of labor productivity, then that does seem to be inevitable. It raises questions about what human labor is then doing. But do you see this as a radical economic game changer that will really fundamentally improve our potential and trend rate of growth?
Vasant Dhar: I do. I mean, I think you and I are old enough to remember the times when we’d go physically to the post office and wait in the queue to buy stamps, and half the morning was gone. Where did the time go? I mean, we were so unproductive at that time; we didn’t quite realize it. But now you’ve got this tremendous amplifier of productivity, and all signs are that it will actually supercharge productivity. How fast it happens remains to be seen. It took a long time with electricity, like decades, for the productivity gains to materialize. I think it’ll be faster with AI. But yeah, I have little doubt that this is going to be a game changer in terms of enhancing productivity and increasing the expectations of human beings.
Gerry Baker: We’ll take a short break there. When we come back, I’ll have more with Professor Vasant Dhar talking about his new book, Thinking with Machines, about AI, how it’s going to transform our lives. I’m going to talk in particular about some of the challenges that we face as a society over the nature of truth and trust and whether or not AI can offer us any help in resolving the crisis of trust and truth that we seem to have right now. So please stay with us.
Announcer: You’re listening to Free Expression with Gerry Baker. Don’t forget, you can listen to the latest episode anytime on your smart speaker. Just say, “Play the Opinion Free Expression podcast.” Now, back to Gerry Baker.
Gerry Baker: I’m back with Professor Vasant Dhar. We’re talking about his new book, Thinking With Machines: The Brave New World of AI. Let’s talk about some of these issues that you raise, the profound societal and, let’s say, philosophical issues, things like truth. You talk about our historical understanding of truth and the phases we’ve gone through. And we’re perhaps arguably going through another phase now, where people talk about post-modern understandings of truth. Is there such a thing as objective truth? What does it mean? AI is obviously accumulating, processing, analyzing, and interrogating data, and producing results. So talk a bit, as you do in the book, about what, if you like, machine truth will be and how it relates to our traditional understanding of truth. How truthful is AI?
Vasant Dhar: It’s not designed to be truthful. Ironically, as we’ve advanced in AI and produced machines that are more like us, we’ve inadvertently created new problems, such as around truth, because these machines are not designed to be truthful. They’re designed to be sensible. Truth has really become an afterthought. So large language model operators employ armies of human beings to analyze the outputs of these things and say, no, that’s wrong, or that’s inappropriate, that’s sexist, that’s racist. And so truth is an afterthought, where we fine-tune these models to tell us the truth, but there’s no guarantee that they’re going to tell us the truth. Which is why I’m surprised that people are surprised that these machines make stuff up. They call them hallucinations, which I think is a bit of a misnomer, because machines make everything up. That’s what the current modern AI machines are doing. They’re just generating stuff. They don’t know whether it’s true or false, but they know that it makes sense, and that’s what these machines are designed to do. They’re designed to make sense. So as I say, truth has become a bit of a casualty on the march to more intelligent machines that have inherited some of our own tendencies, including some of the more unfortunate ones, such as to lie, to deceive, to manipulate. They’ve learned to do that as well. It’s just that that’s under the hood, and so far we’ve managed to keep a reasonable lid on it, but there’s no guarantee that we will. The other side of this coin is: will these machines become swayed by, let’s say, people who say the moon landings are a hoax? Supposing credible scientists start describing the moon landings as a hoax, to what extent will these language models be influenced by that kind of stuff? At the moment, there’s a tremendous amount of attention at Google, at Anthropic, at all these companies to making sure that they’re feeding the machine high-quality data, because they don’t want the machine to become an echo chamber where its training data becomes stuff that it’s generated. So a tremendous amount of attention is being paid to trying to ensure that these machines actually get high-quality data and produce sensible outputs that, hopefully, are truthful as well, but there’s no guarantee that they’ll be truthful.
Gerry Baker: But as we advance, and forgive me if I’m wrong here, I’m assuming a large part of the results, the function these machines are performing, is driven essentially by human knowledge. As you just said, credible experts tell us that man really did land on the moon. So they are currently still, to a very large extent, dependent on the corpus of human knowledge that’s been accumulated over tens of thousands of years. Are we now entering a phase where artificial intelligence is itself discovering new frontiers of knowledge? And if so, is that somehow more reliable than the sum of human knowledge, or less reliable? I mean, what is this expanding knowledge that AI is giving us, and where is it coming from? How much of it is intrinsically, internally generated, and how much of it is just relying on all this human activity over many centuries?
Vasant Dhar: Well, so far it’s relied primarily on the collective expression of humanity on the internet. That’s what it’s learned from, and it’s been tremendously effective at doing that. It’s just magical how well these machines work, and they’ve managed to somehow condense our expression into something that’s really useful. Now we’re entering a new phase in AI. Some people say, well, we’re running out of language data. Maybe, but machines will now become mobile. They’ll start learning, just like humans do, by interacting with the real world as they become mobile. So there’ll be new sources of training data. To me, the area of vision is still largely unexplored. That is, machines will now learn by vision. They’ll learn from other modalities, such as touch and smell, which is another area that I’ve been interested in, and they’ll learn to integrate these modalities just like humans do. So to me, we’re really in the early innings of AI, where these machines will start learning from multiple sensors, not just language but other senses as well, and putting them together just like humans are able to do. We’re able to associate language with images, with smells. Somehow we put this all together. That’s largely unexplored at the moment, and that is one of the frontiers of AI and a direction in which I see machine intelligence evolving. So we’re really in the early innings here.
Gerry Baker: You talk about trust in the book. We are in a trust crisis, I think people broadly acknowledge. At the moment people don’t trust, for the most part, scientists, experts, academics, governments. Trust in all of these core institutions of our societies has declined dramatically. Does AI offer some hope that trust, perhaps in AI itself but also more broadly in society and our institutions, could improve? Or are we actually going to trust machines even less than we trust humans?
Vasant Dhar: It could. And I tend to be an optimist by nature. So look at, let’s say, government at the moment: very opaque. I mean, very few people actually understand how the government operates. So no wonder we’re going through this crisis where the current administration is dismantling institutions, saying, “Well, they’re not trustable.” And to some extent, weirdly enough, maybe he’s right, in that we don’t actually trust institutions because we don’t even understand the way they work. They’ve become so opaque. Now, the optimist in me says here’s a chance to use AI to go in and understand institutions, to make sense of things. And that’s one of the things I talk about a fair amount in the book: sense-making, which up till now has largely been human, in that machines predict and we make sense of the predictions. But I think there’s real potential for using machines to make sense of government, to make sense of democracy, to make things more transparent and more trustable. But at the same time, we’re facing some real pressing questions about how we govern AI so that we actually begin to trust it itself, so that we don’t leave the power in the hands of the operators of these AIs and expect them to do the right thing, because history shows us that they often don’t do the right thing. And one of the reasons I’ve written this book for everyone is that I feel everyone now needs to get involved and understand some of the pressing questions that confront us, around truth, around trust, but also around governance. The first of the three key areas I talk about in governance is: does the machine need to be constrained to certain areas of our life, or will it be unconstrained? The example I provide is: is it okay for a robot to come arrest you at home for nonpayment of taxes? Is that a society that we want, or do we want to say, no, that’s something we don’t want happening? That’s what I call restrictions. The second is obligations. It’s really important: should there be some expectation of a duty of care? There’s a lot of excitement around AI companions and AI chatbots and all that kind of stuff. That’s great. But in the human sphere, we have some expectation of, let’s say, a mental health expert or a physician, that there’s a responsibility on them, a duty of care. At the moment, none of this exists in AI, and we haven’t begun to think about it.
Gerry Baker: There was that interesting and rather disturbing story of an AI that was at least accused of encouraging someone to commit suicide. So part of the governance issue is exactly that, what are the obligations of these machines and how do we impose them?
Vasant Dhar: Exactly. I mean, my colleague Jonathan Haidt has said a lot about the negative aspects of social media, that they’ve actually harmed our children. And I see this as that on steroids in terms of the potential harm it can do, where 10 to 16-year-olds actually feel or think that this machine cares about them. We’ve got this tendency to anthropomorphize machines, and with kids, it appears, doubly so. There was this case of Sewell Setzer, this really unfortunate case, where he actually thought that the chatbot was encouraging him to commit suicide, that they could have a house together in some utopian landscape. And he didn’t realize that this thing is just a machine. It had no idea what it was talking about. So we need to think about these potential harms that it can cause.
Gerry Baker: And your third point about governance, I think, is about the rights of these machines. And that’s a fascinating idea, the rights of AI. We don’t think of an inanimate object as having rights; however brilliant it may be, it’s not a human, it’s not an animal, it doesn’t feel pain, it doesn’t feel deprivation. What do we mean when we talk about machines having rights?
Vasant Dhar: So that’s become particularly pressing now because we have this excitement about agentic AI, that we will have these agents that will do things for us. We will give them agency. And the question there is: how much agency is appropriate? The example I provide in my book is that you have this machine that starts running a business on behalf of its owner, and then it gets so good at it that it keeps running it. The owner dies or forgets about it, and the machine says, “Oh, I’ve got to run this business.” Is the machine going to have the right to hire and fire a board? Is the machine going to have the right to enter into legal contracts? So where’s the boundary that we draw around these agents? How much agency do we really want to give them? And one of the analogies I draw is with the corporation, right? The corporation is a body that has certain rights. Is that a relevant framework for thinking about the rights of AI? Because that’s going to become front and center as these machines gain agency and start doing more and more things for us. What kinds of rights do they have? Can we just turn them off when we want to?
Gerry Baker: Where do you stand on the Frankenstein’s-monster theory of AI, the warning we’ve had from really serious people, including perhaps most notably Geoffrey Hinton, one of the leading figures in AI, formerly of Google, that we are in grave danger of essentially creating machines that will control us or ultimately even destroy us? Is that a real threat, and do we really have to take measures to avert it?
Vasant Dhar: There are two views on this. Geoff Hinton has one. My colleague Yann LeCun says it’s not as if these machines are designed to be evil; we’ll figure it out. But my view on this is more Huxleyan, really: machines don’t have to be evil to take over. We may just allow them to take over through a gradual disempowerment. To some extent, we’re seeing this already, where machines have become gatekeepers of human activity. You want to apply for a job? Your CV is screened by a machine. And increasingly, AI is also doing the interviews of humans. So I think we are, without realizing it, acceding to this machine and giving it more agency. There doesn’t have to be an evil intention for this to happen. It can happen without us even realizing it, as we cede more and more power to the machine and it becomes the gatekeeper. That’s my real worry, that it happens in that way, and that would be unfortunate.
Gerry Baker: How do we avoid that? Can we have a governance structure that somehow ensures that the people who are ultimately responsible for these machines don’t allow them to play that role?
Vasant Dhar: Well, I think the first thing to do is to acknowledge the problem and think about how we set up these agencies. I mean, are existing agencies sufficient for the job or do we need to set up new agencies that look at these questions? And will these agencies be staffed by humans and machines or just humans? To me, those are open questions that need to be addressed.
Gerry Baker: Competition is obviously driving so much of the activity we’re seeing, and when we talk about global competition, there’s the AI race between the US and China in particular, with other players there too. The potential from these machines is so great that the challenge of imposing a regulatory structure, one that doesn’t convey an advantage to one player over another, or incentivize some people to figure out ways around it, I mean, that’s a fundamental challenge, isn’t it?
Vasant Dhar: It is. And I think one needs to be mindful about what we mean by regulation. So for example, if we try to preempt certain things, that can be risky, because it can hamper innovation. On the other hand, to not have any accountability is irresponsible. So when we think of regulation, the question is: when things go wrong, who’s liable? We need regulation for that kind of thing, because when there’s a credible threat that if you do something wrong and harmful you could pay the price for it, you take that seriously. That kind of regulation is sorely needed. On the other hand, we don’t want regulation that stifles innovation by being preemptive and saying, well, you can’t do A, B, or C. But we need to figure out how to strike the balance with things that we do want to preempt, such as a machine coming to arrest you at home for nonpayment of taxes. Do we want to preempt that? Yes, probably, because we probably don’t want that kind of a society. That has nothing to do with innovation; it has to do with the kind of society, the kind of free society, the kind of democracy that we want. On the other hand, we do need regulation when it comes to harms, because if it’s left unfettered and there’s no accountability, well, that will probably be bad for everyone. And that’s an area where you do need regulation.
Gerry Baker: Well, it’s appropriate that you finished with a reference to Aldous Huxley. Professor, your book is called Thinking With Machines: The Brave New World of AI. And of course, you’re host of the Brave New World podcast. Professor Vasant Dhar, thanks very much indeed for joining Free Expression.
Vasant Dhar: Thank you, Gerry. Enjoyed the conversation.
Gerry Baker: Well, that’s it for me and Free Expression. It’s been a great pleasure. I hope you’ve enjoyed this interview and the others that we’ve done. In the meantime, have a great week, and I hope to speak to you again soon.
