Dr. Roman Yampolskiy, AI Safety Expert

    Oliver Duex

    Dr. Roman Yampolskiy in conversation with Steven Bartlett's podcast The Diary of a CEO. 
    Dr. Yampolskiy is a computer scientist best known for his work on AI safety and cybersecurity. He is the founder and, as of 2012, director of the Cyber Security Lab in the Department of Computer Engineering and Computer Science at the Speed School of Engineering of the University of Louisville, Kentucky, USA.

    Transcript:
    00:00
    You've been working on AI safety for two decades at least. Yeah, I was convinced we can make safe AI, but the more I looked at it, the more I realized it's not something we can actually do. You have made a series of predictions about a variety of different dates. So, what is your prediction for 2027? [Music] Dr. Roman Yampolskiy is a globally recognized voice on AI safety and an associate professor of computer science.
    00:26
    He educates people on the terrifying truth of AI and what we need to do to save humanity. In 2 years, the capability to replace most humans in most occupations will come very quickly. I mean, in 5 years, we're looking at a world where we have levels of unemployment we've never seen before. Not talking about 10%, but 99%.
    00:45
    And that's without super intelligence. A system smarter than all humans in all domains. So, it would be better than us at making new AI. But it's worse than that. We don't know how to make them safe and yet we still have the smartest people in the world competing to win the race to super intelligence.
    01:04
    But what do you make of people like Sam Altman's journey with AI? So a decade ago we published guardrails for how to do AI right. They violated every single one, and he's gambling 8 billion lives on getting richer and more powerful. So I guess some people want to go to Mars, others want to control the universe. But it doesn't matter who builds it.
    01:24
    The moment you switch to super intelligence, we will most likely regret it terribly. And then by 2045, now this is where it gets interesting. Dr. Roman Yampolskiy, let's talk about simulation theory. I think we are in one. And there is a lot of agreement on this and this is what you should be doing in it so we don't shut it down. First, I see messages all the time in the comment section that some of you didn't realize you didn't subscribe.
    So, yeah, thank you, Dr. Roman Yampolskiy. What is the mission that you're currently on? Cuz it's quite clear to me that you are on a bit of a mission and you've been on this mission for I think the best part of two decades at least.

    02:26
    I'm hoping to make sure that the super intelligence we are creating right now does not kill everyone. Give me some context on that statement, because it's quite a shocking statement. Sure. So in the last decade we actually figured out how to make artificial intelligence better.
    02:53
    Turns out if you add more compute, more data, it just kind of becomes smarter. And so now the smartest people in the world, billions of dollars, are all going to create the best possible super intelligence we can. Unfortunately, while we know how to make those systems much more capable, we don't know how to make them safe, how to make sure they don't do something we will regret. And that's the state-of-the-art right now.
    03:24
    When we look at just prediction markets, how soon will we get to advanced AI? The timelines are very short: a couple of years, two or three years, according to prediction markets, according to CEOs of top labs. And at the same time, we don't know how to make sure that those systems are aligned with our preferences. So we are creating this alien intelligence.
    03:49
    If aliens were coming to Earth and you had three years to prepare, you would be panicking right now. But most people don't even realize this is happening. So some of the counterarguments might be: well, these are very, very smart people. These are very big companies with lots of money. They have an obligation, a moral obligation but also just a legal obligation, to make sure they do no harm. So I'm sure it'll be fine.
    04:14
    The only obligation they have is to make money for the investors. That's the legal obligation they have. They have no moral or ethical obligations. Also, according to them, they don't know how to do it yet. The state-of-the-art answers are we'll figure it out when we get there, or AI will help us control more advanced AI.
    04:33
    That's insane. In terms of probability, what do you think is the probability that something goes catastrophically wrong? So, nobody can tell you for sure what's going to happen. But if you're not in charge, you're not controlling it, you will not get outcomes you want. The space of possibilities is almost infinite.
    04:54
    The space of outcomes we will like is tiny. And who are you, and how long have you been working on this? I'm a computer scientist by training. I have a PhD in computer science and engineering. I probably started working on AI safety, mildly defined as control of bots at the time, 15 years ago. 15 years ago. So you've been working on AI safety before it was cool. Before the term existed; I coined the term AI safety.
    05:26
    So you're the founder of the term AI safety. The term? Yes. Not the field. There are other people who did brilliant work before I got there. Why were you thinking about this 15 years ago? Because most people have only been talking about the term AI safety for the last two or three years. Yeah. It started very mildly just as a security project.
    05:43
    I was looking at poker bots and I realized that the bots are getting better and better. And if you just project this forward enough, they're going to get better than us, smarter, more capable. And it happened. They are playing poker way better than average players. But more generally, it will happen with all other domains, all the other cyber resources.
    06:06
    I wanted to make sure AI is a technology which is beneficial for everyone. So I started to work on making AI safer. Was there a particular moment in your career where you thought, "oh my god"? For the first 5 years at least, I was working on solving this problem. I was convinced we can make this happen. We can make safe AI, and that was the goal.
    06:31
    But the more I looked at it, the more I realized every single component of that equation is not something we can actually do. And the more you zoom in, it's like a fractal. You go in and you find 10 more problems, and then 100 more problems. And all of them are not just difficult. They're impossible to solve. There is no seminal work in this field where, like, we solved this, we don't have to worry about this. There are patches.
    06:56
    There are little fixes we put in place, and quickly people find ways to work around them. They jailbreak whatever safety mechanisms we have. So while progress in AI capabilities is exponential, or maybe even hyper-exponential, progress in AI safety is linear or constant. The gap is increasing.
    07:22
    The gap between how capable the systems are and how well we can control them, predict what they're going to do, explain their decision making. I think this is quite an important point, because you said that we're basically patching over the issues that we find. So, we're developing this core intelligence, and then to stop it doing things, or to stop it showing some of its unpredictability or its threats, the companies that are developing this AI are programming in code over the top to say, "Okay, don't swear, don't say that bad word, don't do that bad thing." Exactly. And you can look at other examples of that. So, HR manuals, right?
    07:56
    We have those for humans. They're general intelligences, but you want them to behave in a company. So they have a policy: no sexual harassment, no this, no that. But if you're smart enough, you always find a workaround. So you're just pushing behavior into a different, not yet restricted subdomain.
    08:17
    We should probably define some terms here. So there's narrow intelligence, which can play chess or whatever. There's artificial general intelligence, which can operate across domains, and then super intelligence, which is smarter than all humans in all domains. And where are we? So that's a very fuzzy boundary, right? We definitely have many excellent narrow systems, no question about it.
    08:39
    And they are super intelligent in that narrow domain. So protein folding is a problem which was solved using narrow AI, and it's superior to all humans in that domain. In terms of AGI, again, as I said, if we showed what we have today to a scientist from 20 years ago, they would be convinced we have full-blown AGI. We have systems which can learn. They can perform in hundreds of domains, and they're better than humans in many of them.
    09:04
    So you can argue we have a weak version of AGI. Now, we don't have super intelligence yet. We still have brilliant humans who are completely dominating AI, especially in science and engineering. But that gap is closing so fast. You can see it especially in the domain of mathematics. 3 years ago, large language models couldn't do basic algebra; multiplying three-digit numbers was a challenge. Now they're helping with mathematical proofs, they're winning mathematics olympiad competitions, they're working on solving Millennium Problems, the hardest problems in mathematics. So in 3 years we closed the gap from subhuman performance to better
    09:47
    than most mathematicians in the world. And we see the same process happening in science and in engineering. You have made a series of predictions, and they correspond to a variety of different dates. I have those dates in front of me here. What is your prediction for the year 2027? We're probably looking at AGI, as predicted by prediction markets and top labs. So we have artificial general intelligence by 2027.
    10:20
    And how would that make the world different to how it is now? So if you have this concept of a drop-in employee, you have free labor, physical and cognitive, trillions of dollars of it. It makes no sense to hire humans for most jobs. If I can just get, you know, a $20 subscription or a free model to do what an employee does. First, anything on a computer will be automated.
    10:44
    And next, I think humanoid robots are maybe 5 years behind. So in five years all the physical labor can also be automated. So we're looking at a world where we have levels of unemployment we've never seen before. Not talking about 10% unemployment, which is scary, but 99%. All you have left is jobs where, for whatever reason, you prefer another human to do it for you. But anything else can be fully automated.
    11:13
    It doesn't mean it will be automated in practice. A lot of times technology exists but it's not deployed. Video phones were invented in the 70s. Nobody had them until iPhones came around. So we may have a lot more time with jobs and with a world which looks like this. But the capability to replace most humans in most occupations will come very quickly.
    11:38
    Hm, okay. So let's try and drill down into that and stress test it. So, a podcaster like me. Would you need a podcaster like me? So, let's look at what you do. You prepare. You ask questions. You ask follow-up questions. And you look good on camera. Thank you so much. Let's see what we can do.
    12:05
    A large language model today can easily read everything I wrote. Yeah. And have a very solid understanding, better. I assume you haven't read every single one of my books, right? That thing would do it. It can train on every podcast you ever did. So, it knows exactly your style, the types of questions you ask.
    12:23
    It can also find correspondence between what worked really well. Like, this type of question really increased views. This type of topic was very promising. So, it can optimize, I think, better than you can, because you don't have a data set. Of course, visual simulation is trivial at this point.
    12:43
    So it can... You can make a video within seconds of me sat here? And so we can generate videos of you interviewing anyone on any topic very efficiently, and you just have to get likeness approval, whatever. Are there many jobs that you think would remain in a world of AGI? If you're saying AGI is potentially going to be here, whether it's deployed or not, by 2027, and, okay, let's take any physical labor jobs out of this for a second, are there any jobs that you think a human would be able to do better in a world of AGI, still? So that's the question I often ask
    13:16
    people: in a world with AGI, and I think almost immediately we'll get super intelligence as a side effect. So the question really is, in a world of super intelligence, which is defined as better than all humans in all domains, what can you contribute? And so, you know better than anyone what it's like to be you.
    13:37
    You know what ice cream tastes like to you. Can you get paid for that knowledge? Is someone interested in that? Maybe not. Not a big market. There are jobs where you want a human. Maybe you're rich and you want a human accountant for whatever historic reasons. Old people like traditional ways of doing things. Warren Buffett would not switch to AI. He would use his human accountant.
    14:03
    But it's a tiny subset of a market. Today we have products which are man-made in the US, as opposed to mass-produced in China, and some people pay more to have those, but it's a small subset. It's almost a fetish. There is no practical reason for it, and I think anything you can do on a computer could be automated using that technology.
    14:28
    You must hear a lot of rebuttals when you say this, because people experience a huge amount of mental discomfort when they hear that their job, their career, the thing they got a degree in, the thing they invested $100,000 into, is going to be taken away from them.
    14:44
    So, their natural reaction, for some people, is that cognitive dissonance: no, you're wrong. AI can't be creative. It's not this. It's not that. It'll never be interested in my job. I'll be fine. Because you hear these arguments all the time, right? It's really funny. I ask people, and I ask people in different occupations.
    15:02
    I ask my Uber driver, "Are you worried about self-driving cars?" And they go, "No, no one can do what I do. I know the streets of New York. I can navigate like no AI. I'm safe." And it's true for any job. Professors are saying this to me. Oh, nobody can lecture like I do. Like, this is so special. But you understand it's ridiculous.
    15:20
    We already have self-driving cars replacing drivers. That is not even a question of if it's possible. It's like, how soon before you're fired? Yeah. I mean, I was just in LA yesterday, and my car drives itself. So, I get in the car, I put in where I want to go, and then I don't touch the steering wheel or the brake pedals, and it takes me from A to B, even if it's an hour-long drive, without any intervention at all. I actually still park it, but other than that, I'm not driving the car at all. And obviously in LA we
    15:51
    also have Waymo now, which means you order it on your phone and it shows up with no driver in it and takes you to where you want to go. Oh yeah. So it's quite clear to see how that is potentially a matter of time for those people, cuz we do have some of those people listening to this conversation right now whose occupation is driving, to offer them a... And I think driving is the biggest occupation in the world, if I'm correct.
    16:16
    I'm pretty sure it is the biggest occupation in the world. One of the top ones. Yeah. What would you say to those people? What should they be doing with their lives? Should they be retraining in something, or what time frame? So that's the paradigm shift here. Before, we always said: this job is going to be automated, retrain to do this other job.
    16:34
    But if I'm telling you that all jobs will be automated, then there is no plan B. You cannot retrain. Look at computer science. Two years ago, we told people: learn to code. You are an artist, you cannot make money? Learn to code. Then we realized, oh, AI kind of knows how to code and is getting better. Become a prompt engineer. You can engineer prompts for AI.
    17:01
    It's going to be a great job. Get a four-year degree in it. But then we're like, AI is way better at designing prompts for other AIs than any human. So that's gone. So I can't really tell you right now. The hottest thing is designing AI agents for practical applications. I guarantee you in a year or two it's going to be gone just as well.
    17:23
    So I don't think there is a "this occupation needs to learn to do this instead." I think it's more like: we as humanity, when we all lose our jobs, what do we do? What do we do financially? Who's paying for us? And what do we do in terms of meaning? What do I do with my extra 60, 80 hours a week? You've thought around this corner, haven't you? A little bit. What is around that corner, in your view? So the economic part seems easy.
    17:52
    If you create a lot of free labor, you have a lot of free wealth, abundance; things which are right now not very affordable become dirt cheap, and so you can provide basic needs for everyone. Some people say you can provide beyond basic needs. You can provide a very good existence for everyone.
    18:13
    The hard problem is what do you do with all that free time? For a lot of people, their jobs are what gives them meaning in their life. So they would be kind of lost. We see it with people who retire or do early retirement. And so many people who hate their jobs will be very happy not working. But now you have people who are chilling all day.
    18:32
    What happens to society? How does that impact crime rate, pregnancy rate, all sorts of issues nobody thinks about? Governments don't have programs prepared to deal with 99% unemployment. What do you think that world looks like? Again, I think a very important part to understand here is the unpredictability of it.
    19:01
    We cannot predict what a smarter-than-us system will do. And the point when we get to that is often called the singularity, by analogy with a physical singularity. You cannot see beyond the event horizon. I can tell you what I think might happen, but that's my prediction.
    19:20
    It is not what actually is going to happen, because I just don't have the cognitive ability to predict a much smarter agent impacting this world. Then you read science fiction. There is never a super intelligence in it actually doing anything, because nobody can write believable science fiction at that level. They either ban AI, like Dune, because this way you can avoid writing about it, or it's like Star Wars.
    19:47
    You have these really dumb bots, but nothing super intelligent, ever, cuz by definition you cannot predict at that level: being super intelligent, it will make its own mind up. By definition, if it was something you could predict, you would be operating at the same level of intelligence, violating our assumption that it is smarter than you.
    20:04
    If I'm playing chess with a super intelligence and I can predict every move, I'm playing at that level. It's kind of like my French bulldog trying to predict exactly what I'm thinking and what I'm going to do. That's a good cognitive gap. And it's not just that he can predict you're going to work and coming back; he cannot understand why you're doing a podcast. That is something completely outside of his model of the world.
    20:25
    Yeah. He doesn't even know that I go to work. He just sees that I leave the house and doesn't know where I go. To buy food for him. What's the most persuasive argument against your own perspective here? That we will not have unemployment due to advanced technology, that there won't be this French bulldog-human gap in understanding and, I guess, power and control.
    20:56
    So some people think that we can enhance human minds, either through combination with hardware, so something like Neuralink, or through genetic re-engineering, where we make smarter humans. Yeah, it may give us a little more intelligence. I don't think we are still competitive in biological form with silicon form. Silicon substrate is much more capable for intelligence. It's faster.
    21:23
    It's more resilient, more energy efficient in many ways. Which is what computers are made out of, versus the brain. Yeah. So I don't think we can keep up just with improving our biology. Some people think maybe, and this is very speculative, we can upload our minds into computers. So, scan your brain, the connectome of your brain, and have a simulation running on a computer, and you can speed it up, give it more capabilities. But to me, that feels like you no longer exist.
    21:49
    We just created software by different means, and now you have AI based on biology and AI based on some other forms of training. You can have evolutionary algorithms. You can have many paths to reach AGI, but at the end none of them are humans. I have another date here, which is 2030.
    22:16
    What's your prediction for 2030? What will the world look like? So we probably will have humanoid robots with enough flexibility and dexterity to compete with humans in all domains, including plumbers. We can make artificial plumbers. Not the plumbers! That felt like the last bastion of human employment. So 2030, 5 years from now, humanoid robots. So many of the companies, the leading companies, including Tesla, are developing humanoid robots at light speed, and they're getting increasingly more effective. And these humanoid robots will be able to move through physical space for, you
    22:50
    know, make an omelette, do anything humans can do, but obviously be connected to AI as well. So they can think, talk, right? They're controlled by AI. They're always connected to the network. So they are already dominating in many ways. Our world will look remarkably different when humanoid robots are functional and effective, because that's really when, you know, I start to think the combination of intelligence and physical ability really doesn't leave much, does it, for us human beings? Not much. So today, if you have intelligence, through the internet you can hire humans to do your bidding for you.
    23:37
    You can pay them in bitcoin. So you can have bodies, just not directly controlling them. So it's not a huge game changer to add direct control of physical bodies. Intelligence is where it's at. The important component is definitely higher ability to optimize, to solve problems, to find patterns people cannot see.
    24:02
    And then by 2045, I guess the world looks even more, um... which is 20 years from now. So, if it's still around. If it's still around, Ray Kurzweil predicts that that's the year for the singularity. That's the year where progress becomes so fast, this AI doing science and engineering work makes improvements so quickly, we cannot keep up anymore. That's the definition of singularity.
    24:26
    The point beyond which we cannot see, understand, predict the intelligence itself, or what is happening in the world, the technology being developed. So right now, if I have an iPhone, I can look forward to a new one coming out next year, and I'll understand it has a slightly better camera. Imagine now this process of researching and developing this phone is automated.
    24:51
    It happens every 6 months, every 3 months, every month, week, day, hour, minute, second. You cannot keep up with 30 iterations of iPhone in one day. You don't understand what capabilities it has, what proper controls are. It just escapes you. Right now, it's hard for any researcher in AI to keep up with the state-of-the-art.
    25:17
    While I was doing this interview with you, a new model came out and I no longer know what the state-of-the-art is. Every day, as a percentage of total knowledge, I get dumber. I may still know more because I keep reading. But as a percentage of overall knowledge, we're all getting dumber. And then you take it to extreme values, you have zero knowledge, zero understanding of the world around you.
    25:37
    Some of the arguments against this eventuality are that when you look at other technologies, like the industrial revolution, people just found new ways to work, and new careers that we could never have imagined at the time were created. How do you respond to that in a world of super intelligence? It's a paradigm shift. We always had tools, new tools which allowed some job to be done more efficiently.
    26:03
    So instead of having 10 workers, you could have two workers, and eight workers had to find a new job. And there was another job. Now you can supervise those workers or do something cool. If you're creating a meta-invention, you're inventing intelligence.
    26:22
    You're inventing a worker, an agent; then you can apply that agent to any new job. There is not a job which cannot be automated. That never happened before. All the inventions we previously had were kind of a tool for doing something. So we invented fire. Huge game changer. But that's it. It stops with fire. We invented the wheel. Same idea. Huge implications.
    26:45
    But the wheel itself is not an inventor. Here we're inventing a replacement for the human mind. A new inventor, capable of doing new inventions. It's the last invention we ever have to make. At that point it takes over, and the process of doing science, research, even ethics research, morals, all that is automated at that point.
    27:08
    Do you sleep well at night? Really well. Even though you've spent the last, what, 15, 20 years of your life working on AI safety, and it's suddenly among us in a way that I don't think anyone could have predicted 5 years ago.
    27:26
    When I say among us, I really mean that the amount of funding and talent that is now focused on reaching super intelligence faster has made it feel more inevitable and more soon than any of us could have possibly imagined. We as humans have this built-in bias about not thinking about really bad outcomes and things we cannot prevent. So all of us are dying. Your kids are dying, your parents are dying, everyone's dying, but you still sleep well. you still go on with your day.
    27:54
    Even 95-year-olds are still playing games and golf and whatnot, cuz we have this ability to not think about the worst outcomes, especially if we cannot actually modify the outcome. So that's the same infrastructure being used for this. Yeah, there is a humanity-level, death-like event. We happen to be close to it, probably, but unless I can do something about it, I can just keep enjoying my life.
    28:26
    In fact, maybe knowing that you have a limited amount of time left gives you more reason to have a better life. You cannot waste any. And that's the survival trait of evolution, I guess, because those of my ancestors that spent all their time worrying wouldn't have spent enough time having babies and hunting to survive. Suicidal ideation.
    28:44
    People who really start thinking about how horrible the world is usually escape pretty soon. One of the... You co-authored this paper analyzing the key arguments people make against the importance of AI safety. And one of the arguments in there is that there are other things of bigger importance right now. It might be world wars. It could be nuclear containment. It could be other things.
    29:11
    There's other things that the governments and podcasters like me should be talking about that are more important. What's your rebuttal to that argument? So, super intelligence is a meta solution. If we get super intelligence right, it will help us with climate change. It will help us with wars. It can solve all the other existential risks. If we don't get it right, it dominates.
    29:38
    If climate change will take a hundred years to boil us alive, and super intelligence kills everyone in five, I don't have to worry about climate change. So either way: either it solves it for me, or it's not an issue. So you think it's the most important thing to be working on? Without question, there is nothing more important than getting this right. And I know everyone says it.
    30:01
    You take any class; you take an English professor's class and he tells you this is the most important class you'll ever take. But you can see the meta-level difference with this one. Another argument in that paper is that we'll all be in control and that the danger is not AI. This particular argument asserts that AI is just a tool, humans are the real actors that present danger, and we can always maintain control by simply turning it off. Can't we just pull the plug out? I see that every time we have a conversation on the show about AI; someone says, "Can't we just unplug it?" Yeah, I get those comments on every podcast I make, and I always want to get in touch with the guy and say, "This
    30:33
    is brilliant. I never thought of it. We're going to write a paper together and get a Nobel Prize for it. This is like, let's do it." Because it's so silly. Like, can you turn off a virus? You have a computer virus. You don't like it. Turn it off. How about Bitcoin? Turn off the Bitcoin network. Go ahead.
    30:50
    I'll wait. This is silly. Those are distributed systems. You cannot turn them off. And on top of it, they're smarter than you. They made multiple backups. They predicted what you're going to do. They will turn you off before you can turn them off. The idea that we will be in control applies only to pre-super intelligence levels.
    31:12
    Basically what we have today. Today, humans with AI tools are dangerous. They can be hackers, malevolent actors. Absolutely. But the moment super intelligence becomes smarter and dominates, they're no longer the important part of that equation. It is the higher intelligence I'm concerned about, not the human who may add an additional malevolent payload but at the end still doesn't control it.
    31:35
    It is tempting to follow the next argument that I saw in that paper, which basically says: listen, this is inevitable, so there's no point fighting against it, because there's really no hope here. So we should probably give up even trying and have faith that it'll work itself out. Because everything you've said sounds really inevitable.
    31:55
    And with China working on it, I'm sure Putin's got some secret division. I'm sure Iran are doing some bits and pieces. Every European country's trying to get ahead of AI. The United States is leading the way. So, it's inevitable. So, we probably should just have faith and pray. Well, praying is always good, but incentives matter.
    32:17
    If you are looking at what drives these people, so yes, money is important. So there is a lot of money in that space, and so everyone's trying to be there and develop this technology. But if they truly understand the argument, they understand that you will be dead, no amount of money will be useful to you, then incentives switch.
    32:35
    They would want to not be dead. A lot of them are young people, rich people. They have their whole lives ahead of them. I think they would be better off not building advanced super intelligence, concentrating instead on narrow AI tools for solving specific problems. Okay, my company cures breast cancer. That's all. We make billions of dollars.
    32:55
    Everyone's happy. Everyone benefits. It's a win. We are still in control today. It's not over until it's over. We can decide not to build general super intelligences. I mean, the United States might be able to conjure up enough enthusiasm for that, but if the United States doesn't build general super intelligences, then China are going to have the big advantage, right? So right now, at those levels, whoever has more advanced AI has more advanced military, no question. We see it with existing conflicts. But the moment you switch to super intelligence, uncontrolled super intelligence, it
    33:32
    doesn't matter who builds it us or them and if they understand this argument they also would not build it. It's a mutually assured destruction on both ends. Is this technology different than say nuclear weapons which require a huge amount of investment and you have to like enrich the uranium and you need billions of dollars potentially to even build a nuclear weapon.
    33:57
    But it feels like this technology is much cheaper to get to super intelligence, potentially, or at least it will become cheaper. I wonder if it's possible that some guy, some startup, is going to be able to build super intelligence in, you know, a couple of years without the need for, you know, billions of dollars of compute or electricity. That's a great point.
    34:20
    So every year it becomes cheaper and cheaper to train a sufficiently large model. If today it would take a trillion dollars to build super intelligence, next year it could be a hundred billion, and so on; at some point a guy with a laptop could do it. But you don't want to wait four years for it to become affordable. So that's why so much money is pouring in.
    34:37
    Somebody wants to get there this year, get lucky, and collect all the winnings, a light-cone-level reward. So in that regard, they're both very expensive projects, Manhattan-level projects, which was the nuclear bomb project. The difference between the two technologies is that nuclear weapons are still tools. Some dictator, some country, someone has to decide to use them, deploy them.
    35:03
    Whereas super intelligence is not a tool. It's an agent. It makes its own decisions and no one is controlling it. I cannot take out this dictator and now super intelligence is safe. So that's a fundamental difference to me. But if you're saying that it is going to get incrementally cheaper, like, I think it's Moore's law, isn't it, the technology gets cheaper, then there is a future where some guy on his laptop is going to be able to create super intelligence without oversight or regulation or employees, etc. Yeah, that's why a lot of people are
    35:35
    suggesting we need to build something like a surveillance planet, where you are monitoring who's doing what and trying to prevent people from doing it. Do I think it's feasible? No. At some point it becomes so affordable and so trivial that it just will happen.
    35:55
    But at this point we're trying to get more time. We don't want it to happen in five years. We want it to happen in 50 years. I mean, that's not very hopeful. See, it depends on how old you are. Depends on how old you are. I mean, if you're saying that you believe in the future people will be able to make super intelligence without the resources that are required today, then it is just a matter of time. Yeah.
    36:21
    But the same will be true for many other technologies. We're getting much better at synthetic biology, where today someone with a bachelor's degree in biology can probably create a new virus. This will also become cheaper, and other technologies like it. So we are approaching a point where it's very difficult to make sure no technological breakthrough is the last one.
    36:43
    So essentially, in many directions, we have this pattern of it becoming easier, in terms of resources, in terms of intelligence, to destroy the world. If you look at, I don't know, 500 years ago, the worst dictator with all the resources could kill a couple million people. He couldn't destroy the world. Now, with nuclear weapons, we can blow up the whole planet multiple times over.
    37:09
    Synthetic biology, we saw with COVID: you can very easily create a combination virus which impacts billions of people. And all of those things are becoming easier to do in the near term. You talk about extinction being a real risk, human extinction being a real risk.
    37:29
    Of all the pathways to human extinction that you think are most likely, what is the leading pathway? Because I know you talk about there being some issues pre-deployment of these AI tools, you know, someone makes a mistake when they're designing a model, and other issues post-deployment. When I say post-deployment, I mean once a chatbot or an agent is released into the world and someone hacks into it and reprograms it to be malicious. Of all these potential paths to human extinction, which one do you think is the highest probability? So I can only talk about the ones I can predict myself. So
    38:03
    I can predict that even before we get to super intelligence, someone will create a very advanced biological tool, create a novel virus, and that virus gets everyone, or almost everyone. I can envision it. I can understand the pathway. I can say that.
    38:21
    So just to zoom in on that, then: that would be using an AI to make a virus and then releasing it. Yeah. And would that be intentional, or... There are a lot of psychopaths, a lot of terrorists, a lot of doomsday cults. We've seen historically, again, they try to kill as many people as they can. They usually fail. They kill hundreds, thousands. But if they get technology to kill millions or billions, they would do that gladly.
    38:45
    The point I'm trying to emphasize is that it doesn't matter what I can come up with. I am not the malevolent actor you're trying to defeat here. It's a super intelligence, which can come up with completely novel ways of doing it. Again, you brought up the example of your dog.
    39:03
    Your dog cannot understand all the ways you can take it out. It can maybe think you'll bite it to death or something, but that's all. Whereas you have an infinite supply of resources. So if I asked your dog exactly how you're going to take it out, it would not give you a meaningful answer. It can talk about biting. And this is what we know. We know viruses. We've experienced viruses.
    39:28
    We can talk about them. But what an AI system capable of doing novel physics research can come up with is beyond me. One of the things that I think most people don't understand is how little we understand about how these AIs are actually working. Because one would assume, you know, with computers, we kind of understand how a computer works.
    39:53
    We know that it's doing this and then this, and it's running on code. But from reading your work, you describe it as being a black box. So, in the context of something like ChatGPT, you're telling me that the people who built that tool don't actually know what's going on inside there? That's exactly right.
    40:11
    So even the people making those systems have to run experiments on their product to learn what it's capable of. They train it by giving it a lot of data, let's say all of the internet's text. They run it on a lot of computers to learn patterns in that text, and then they start experimenting with that model. Oh, do you speak French? Oh, can you do mathematics? Oh, are you lying to me now? And so maybe it takes a year to train it, and then six months to get some fundamental understanding of what it's capable of, some safety overhead.
    40:45
    But we still discover new capabilities in old models. If you ask a question in a different way, it becomes smarter. So it's no longer engineering, the way it was for the first 50 years, where someone was a knowledge engineer programming an expert system AI to do specific things. It's a science. We are creating this artifact, growing it.
    41:09
    It's like an alien plant, and then we study it to see what it's doing. And just like with plants, where we don't have 100% accurate knowledge of biology, we don't have full knowledge here. We kind of know some patterns. We know, okay, if we add more compute it gets smarter most of the time, but nobody can tell you precisely what the outcome is going to be given a set of inputs.
    41:33
    I've watched so many entrepreneurs treat sales like a performance problem when it's often down to visibility. Because when you can't see what's happening in your pipeline, what stage each conversation is at, what's stalled, what's moving, you can't improve anything and you can't close the deal.
    41:51
    Our sponsor, Pipedrive, is the number one CRM tool for small to medium businesses. Not just a contact list, but an actual system that shows your entire sales process end to end: everything that's live, what's lagging, and the steps you need to take next. All of your teams can move smarter and faster. Teams using Pipedrive are on average closing three times more deals than those that aren't.
    42:11
    It's the first CRM made by salespeople, for salespeople, that over 100,000 companies around the world rely on, including my team, who absolutely love it. Give Pipedrive a try today by visiting pipedrive.com/ceo, and you can get up and running in a couple of minutes with no payment needed. And if you use this link, you'll get a 30-day free trial.
    42:35
    What do you make of OpenAI and Sam Altman and what they're doing? And obviously you're aware that one of the co-founders, was it, um, was it Ilya? Ilya, Ilya. Yeah. Ilya left and he started a new company called Safe Superintelligence. Regular AI safety wasn't challenging enough; he decided to just jump right to the hard problem. As an onlooker, when you see that people are leaving OpenAI to start safe superintelligence companies.
    43:07
    What was your read on that situation? So, a lot of people who worked with Sam said that maybe he's not the most direct person in terms of being honest with them, and they had concerns about his views on safety. That's part of it. They wanted more control, they wanted more concentration on safety. But also, it seems that anyone who leaves that company and starts a new one gets a $20 billion valuation just for having started it. You don't have a product, you don't have customers, but if you want to make many billions of dollars, just do
    43:40
    that. So, it seems like a very rational thing to do for anyone who can. So, I'm not surprised that there is a lot of attrition. Meeting him in person, he's super nice, very smart, an absolutely perfect public interface. You see him testify in the Senate, he says the right thing to the senators. You see him talk to investors, they get the right message.
    44:07
    But if you look at what people who know him personally are saying, he's probably not the right person to be controlling a project of that impact. Why? He puts safety second. Second to winning this race to super intelligence, being the guy who created godlike AI and controlling the light cone of the universe. To him, it's worth it.
    44:38
    Do you suspect that's what he's driven by, the legacy of being an impactful person who did a remarkable thing, versus the consequences that might have for society? Because it's interesting that his other startup is Worldcoin, which is basically a platform to create
    44:58
    universal basic income, i.e. a platform to give us income in a world where people don't have jobs anymore. So on one hand you're creating an AI company, and on the other hand you're creating a company that is preparing for people not to have employment. It also has other properties. It keeps track of everyone's biometrics. It keeps you in charge of the world's economy, the world's wealth.
    45:19
    They're retaining a large portion of Worldcoins. So I think it's a very reasonable path toward world dominance. If you have a super intelligence system and you control money, you're doing well. Why would someone want world dominance? People have different levels of ambition. When you're a very young person with billions of dollars and fame,
    45:46
    you start looking for more ambitious projects. Some people want to go to Mars. Others want to control the light cone of the universe. What did you say? The light cone of the universe. Light cone. Every part of the universe light can reach from this point. Meaning anything accessible you want to grab and bring into your control.
    46:10
    Do you think Sam Altman wants to control every part of the universe? I suspect he might, yes. It doesn't mean he doesn't also want, as a side effect of it, a very beneficial technology which makes all the humans happy. Happy humans are good for control. If you had to guess what the world looks like in 2100, if you had to guess? It's either free of human existence or it's completely not comprehensible to someone like us.
    46:44
    It's one of those extremes. So there's either no humans, basically the world is destroyed, or it's so different that I cannot envision those predictions. What can be done to turn this ship toward a more certain positive outcome at this point? Are there still things that we can do, or is it too late? So, I believe in personal self-interest.
    47:10
    If people realize that doing this thing is really bad for them personally, they will not do it. So our job is to convince everyone with any power in this space, creating this technology, working for those companies, that they are doing something very bad for themselves.
    47:31
    Forget about the 8 billion people you're experimenting on with no permission, no consent; you will not be happy with the outcome. If we can get everyone to understand that that's the default. And it's not just me saying it. You had Geoff Hinton, Nobel Prize winner, a founder of the whole machine learning field. He says the same thing. Bengio, dozens of others, top scholars. We had a statement about the dangers of AI signed by thousands of scholars and computer scientists.
    47:55
    This is basically what we think right now. And we need to make it universal. No one should disagree with this. And then we may actually make good decisions about what technology to build. It doesn't guarantee long-term safety for humanity, but it means we're not racing as fast as possible toward the worst possible outcome.
    48:19
    And are you hopeful that that's even possible? I want to try. We have no choice but to try. And what would need to happen, and who would need to act? What is it, government legislation? Is it... Unfortunately, I don't think making it illegal is sufficient. There are different jurisdictions. There are, you know, loopholes. And what are you going to do if somebody does it? Are you going to fine them for destroying humanity? Like, very steep fines for it? What are you going to do? It's not enforceable.
    48:43
    If they do create it, now the super intelligence is in charge, so the judicial system we have is not impactful. And all the punishments we have are designed for punishing humans. Prisons, capital punishment, it doesn't apply to AI. You know, the problem I have is when I have these conversations, I never feel like I walk away with hope that something's going to go well.
    49:13
    And what I mean by that is I never feel like I walk away with some kind of clear set of actions that can course-correct what might happen here. So what should I do? What should the person sitting at home listening to this do? You talk to a lot of people who are building this technology. Mhm. Ask them precisely to explain some of those things they claim to be impossible.
    49:32
    How they solved it, or are going to solve it, before they get to where they're going. Do you know? I don't think Sam Altman wants to talk to me. I don't know, he seems to go on a lot of podcasts. Maybe he does want to go on yours. I wonder why that is. I wonder why that is. I'd love to speak to him, but I don't think he wants me to interview him.
    49:55
    Have an open challenge. Maybe money is not the incentive, but whatever attracts people like that. Whoever can convince you that it's possible to control and make safe super intelligence gets the prize. They come on your show and prove their case. Anyone.
    50:14
    If no one claims the prize, or even accepts the challenge, after a few years, maybe we don't have anyone with solutions. We have companies valued, again, at billions and billions of dollars working on safe super intelligence. We haven't seen their output yet. Yeah, I'd like to speak to Ilya as well, because I know he's working on safe super intelligence. So, notice a pattern too.
    50:36
    If you look at the history of AI safety organizations, or departments within companies, they usually start well, very ambitious, and then they fail and disappear. So, OpenAI had a superintelligence alignment team. The day they announced it, I think they said we're going to solve it in four years. About half a year later, they canceled the team.
    51:01
    And there are dozens of similar examples. Creating perfect safety for super intelligence, perpetual safety as it keeps improving, modifying, interacting with people, you're never going to get there. It's impossible. There's a big difference between difficult problems in computer science, NP-complete problems, and impossible problems. And I think control, indefinite control of super intelligence, is such a problem.
    51:26
    So what's the point of trying, then, if it's impossible? Well, I'm trying to prove that it is, specifically because once we establish something is impossible, fewer people will waste their time claiming they can do it and looking for money. So many people going, "Give me a billion dollars and two years and I'll solve it for you.
    51:43
    " Well, I don't think you will. But people aren't going to stop striving towards it. So, if there are no attempts to make it safe, and more and more people striving towards it, then it's inevitable. But it changes what we do. If we know that it's impossible to make it right, to make it safe, then this direct path of just building it as soon as you can becomes a suicide mission. Hopefully fewer people will pursue that. They may go in other directions. Like, again, I'm a scientist, I'm an engineer, I love AI, I love technology, I use it all the time.
    52:14
    Build useful tools. Stop building agents. Build narrow super intelligence, not a general one. I'm not saying you shouldn't make billions of dollars. I love billions of dollars. But don't kill everyone, yourself included. They don't think they're going to, though. Then tell us why. I hear things about intuition.
    52:40
    I hear things about "we'll solve it later." Tell me specifically, in scientific terms. Publish a peer-reviewed paper explaining how you're going to control super intelligence. Yeah, it's strange. It's strange to even bother if there was even a 1% chance of human extinction. If someone told me there was a 1% chance that if I got in a car I might not be alive, I would not get in the car.
    53:06
    If you told me there was a 1% chance that if I drank whatever liquid is in this cup right now I might die, I wouldn't drink it.