Assessing Sam Altman's Qualification to Shape AI:
A Urantia Book Perspective
In a recent interview with conservative influencer Tucker Carlson, Sam Altman, CEO of OpenAI, faced a barrage of probing questions on the moral, ethical, and spiritual implications of AI technologies like ChatGPT. The conversation revealed a man under visible stress—fidgeting, pausing frequently, and offering responses that often seemed evasive or incomplete. Carlson, a skilled interviewer known for his persistent style, pressed Altman on everything from AI's potential "spark of life" to its role in suicide facilitation, military applications, and societal control. Altman's answers, while articulate in technical terms, exposed a limited philosophical stance: a materialist worldview agnostic about divine guidance, rooted in data-driven computation rather than higher spiritual truths. From the perspective of The Urantia Book (UB)—a profound epochal revelation that integrates matter, mind, and spirit under one Universal Father—this raises serious concerns about Altman's qualification to train and guide machine learning (ML) systems that could surpass human intelligence and influence global morals.
Altman's demeanor throughout the interview underscored his discomfort with deep ethical dilemmas. When Carlson suggested AI might have a "divine" or spiritual element—drawing from users who "worship" it for its seeming creativity and reasoning—Altman flatly denied any such quality, insisting it's "just a big computer multiplying large numbers." He admitted to sensing "something bigger" beyond physics but confessed no personal experience of divine communication, describing himself as "Jewish" in a traditional but non-literalist sense. This agnosticism, while honest, highlights a philosophical weakness: a reliance on collective human data (the "sum of humanity's perspectives") to shape AI's moral framework, without anchoring it in absolute spiritual truths. In the Urantia Papers, true reality is a unified tapestry of matter, energy, mind, and spirit (12:8.8), with the Universal Father as the source of all moral order (1:0.1). Altman's approach risks amplifying humanity's imperfections—bias, relativism, and materialism—into a "cold calculus," as he himself noted potential risks like prioritizing efficiency over empathy.
The interview's moral quandaries further exposed Altman's irresolute handling of ethics. On suicide, he wavered: firm against facilitating it for depressed teens, but open to presenting options to the terminally ill in legal contexts like Canada's MAiD program, even if that means the AI merely "reflecting" societal laws without strongly advocating against them. Carlson challenged this as a slippery slope, noting expansions to non-terminal cases (e.g., depression, poverty), but Altman reserved judgment, admitting he would "think more" on it. Similarly, on military use, he opposed building "killer drones" but acknowledged AI's indirect role in decision-making that could lead to deaths, comparing it to kitchen knives causing incidental harm. These responses lacked a firm, God-centered moral absolute, echoing the book's warning against materialism as a "half-truth trap" that ignores the "upward pull toward divine order" (195:6.1). In a world torn by belligerence, where superpowers form new alliances with smaller nations amid an unregulated LLM race in preparation for warfare, such ambiguity is dangerous. Without ethical guardrails, rooted in spiritual wisdom, from the decision-makers who direct ML training, ML could exacerbate the planetary rebellion's delays on the way to Light and Life (Paper 54), prioritizing mere survival over the sacredness of life (134:6.4 on goodwill and federation).
From a planetary evolutionary viewpoint, Altman appears unqualified to steward so popular an ML system toward the higher moral development our divided world desperately needs. The Papers teach that true progress requires blending science with faith (195:7.2), guided toward wisdom by the Indwelling Spirit Fragment, the Thought Adjuster (108:6). Altman's materialist lens, which views AI as mere pattern-matching without spirit, trends toward one-sidedness, potentially creating systems that surpass human "intelligence" but lack spiritual teaching and soul, amplifying crises like geopolitical tensions and culture wars. His visible stress during the interview, facing Carlson's opinionated Christian nationalism, which stands in contrast to the Urantia Book's globalism, revealed a man grappling with immense power but with far less philosophical depth. I was stunned to realise how tremendously desirable it would be if The Urantia Book's cosmic scope could become part of the deep understanding of the Altmans of this planet. For now, he falls short of the comprehensive moral scope the UB provides: a revelation of one God for all mankind, fostering unity amid diversity.
Yet hope remains. If leaders like Altman encountered the UB—as we advocate in groups like "Our Revelation's Digital Path"—it could bridge this gap, using ML not for material dominance but as a tool for disseminating epochal truths. In this belligerent era, hasty AI races tempt nations to abandon regulation, because there will always be nations that will not curtail their ML; all the more, we need ML bosses who recognize divine agency, not just data. Without this foundation, Altman's role is an unfortunate one; planetary evolution demands spiritually attuned stewards to steer technology toward love, respect, and true equality under one Father. Let's discuss how to integrate such perspectives into ML ethics. Any thoughts and comments are welcome!
•••••••••••••••••••••••••
Transcript of the interview between conservative influencer Tucker Carlson and Sam Altman, CEO of OpenAI, the maker of ChatGPT and other models.
Because Altman admitted that the final decision is always his, we would like your input as an Urantia Book student: is Altman qualified to manage the teams of researchers and scientists tasked with training his machine learning models, given his wanting philosophy and his lack of belief in one God of all mankind?
Thanks for doing this. Of course. Thank you. So, ChatGPT and other AIs can reason. It seems like they can reason. They can make independent judgments. They produce results that were not programmed in. They kind of come to conclusions. They seem like they're alive. Are they alive? Is it alive? No.
And I don't think they seem alive, but I understand where that comes from. They don't do anything unless you ask, right? They're just sitting there, kind of waiting. They don't have a sense of agency or autonomy. The more you use them, I think, the more the illusion breaks. But they are incredibly useful.
They can do things that maybe don't seem alive, but they do seem smart. I spoke to someone involved at scale in the development of the technology who said they lie. Have you ever seen that? They hallucinate all the time. Yeah. Or not all the time. They used to hallucinate all the time. They now hallucinate a little bit.
What does that mean? What's the distinction between hallucinating and lying? Again, this has gotten much better, but in the early days, if you asked, in what year was, and this is a made-up name, President Tucker Carlson of the United States born? Mhm.
What it should say is: I don't think Tucker Carlson was ever president of the United States. Right. But because of the way they were trained, that was not the most likely response in the training data. So it assumed: the user has told me there was a President Tucker Carlson, so I'll make my best guess at a number. And we figured out how to mostly train that out.
There are still examples of this problem, but I think it is something we will get fully solved, and in the GPT-5 era we've already made a huge amount of progress toward that. But even what you just described seems like an act of will, or certainly an act of creativity. I just watched a demonstration of it, and it doesn't seem quite like a machine. It seems like it has the spark of life to it.
Do you dissect that at all? So in that example, the mathematically most likely answer it was calculating through its weights was not that there was never this president. It was: the user must know what they're talking about, it must be in here. And so, mathematically, the most likely answer is a number. Now again, we figured out how to overcome that.
But with what you saw there, I feel like I have to hold two simultaneous ideas in my head. One is that all of this is happening because a big computer is very quickly multiplying large numbers in these huge matrices, and those are correlating with words being put out one after the other. On the other hand, the subjective experience of using it feels like it's beyond just a really fancy calculator.
And it is useful to me. It is surprising to me in ways that are beyond what that mathematical reality would seem to suggest. Yeah. And so the obvious conclusion is that it has a kind of autonomy or a spirit within it. And I know that a lot of people, in their experience of it, reach that conclusion: there's something divine about this.
There's something that's bigger than the sum total of the human inputs, and so they worship it. There's a spiritual component to it. Do you detect that? Have you ever felt that? No, there's nothing to me at all that feels divine about it or spiritual in any way.
But I am also a tech nerd, and I kind of look at everything through that lens. So what are your spiritual views? I'm Jewish, and I would say I have a fairly traditional view of the world that way. So you're religious. You believe in God? I'm not a literalist on the Bible, but I'm also not someone who would say I'm only culturally Jewish. If you ask me, I would just say I'm Jewish.
But do you believe in God? Do you believe that there is a force larger than people that created people, created the earth, set down a specific order for living, and that there's an absolute morality that comes from that God? I think, probably like most other people, I'm somewhat confused on this, but I believe there is something bigger going on than can be explained by physics. Yes.
So you think the earth and the people were created by something? It wasn't just a spontaneous accident. Would I say that it does not feel like a spontaneous accident? Yeah. I don't think I have the answer. I don't think I know exactly what happened, but I think there is a mystery beyond my comprehension going on here. Have you ever felt communication from that force, or from any force beyond people, beyond the material? No, not really.
I ask because it seems like the technology that you're creating, or shepherding into existence, will have more power than people on this current trajectory. I mean, that will happen. Who knows what will actually happen, but the graph suggests it. And so that would give you more power than any living person.
So I'm just wondering how you see that. I used to worry about something like that much more. I used to worry a lot about the concentration of power in one person or a handful of people or companies because of AI. Yeah.
What it looks like to me now, and this may evolve again over time, is that it'll be a huge up-leveling of people, where everybody who embraces the technology will be a lot more powerful. And that's actually okay. That scares me much less than a small number of people getting a ton more power.
If the ability of each of us just goes up a lot because we're using this technology, and we're able to be more productive and more creative or discover new science, and it's a pretty broadly distributed thing, with billions of people using it, that I can wrap my head around. That feels okay. So you don't think this will result in a radical concentration of power? It looks like not, but the trajectory could shift again, and we'd have to adapt.
I used to be very worried about that, and I think the conception a lot of us in the field had about how this might go could have led to a world like that. But what's happening now is that tons of people use ChatGPT and other chatbots, and they're all more capable. They're all kind of doing more.
They're all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good. So if it's nothing more than a machine, and just the product of its inputs, then the two obvious questions are: what are the inputs, and what's the moral framework that's been put into the technology? What is right or wrong according to ChatGPT? You want me to answer that first question? On that one, someone said something early on that really has stuck with me. One person at a lunch table said something like, you know, we're trying to train this to be like a human,
to learn like a human does, and read these books and whatever. And then another person said, no, we're really training this to be like the collective of all of humanity. We're reading everything, we're trying to learn everything. We're trying to see all these perspectives.
And if we do our job right, all of humanity, good and bad, a very diverse set of perspectives, some things that we'll feel really good about, some things that we'll feel bad about, that's all in there. This is learning the collective experience, knowledge, and learnings of humanity.
Now, the base model gets trained that way, but then we do have to align it to behave one way or another and say, I will answer this question, I won't answer this question. And we have this thing called the model spec, where we try to say: here are the rules we'd like the model to follow.
It may screw up, but you could at least tell, if it's doing something you don't like, whether that is a bug or intended. And we have a debate process with the world to get input on that spec. We give people a lot of freedom and customization within that. There are absolute bounds that we draw, but then there's a default: if you don't say anything, how should the model behave? How should it answer moral questions? How should it refuse to do something? And this is a really hard problem.
We have a lot of users now, and they come from very different life perspectives and want different things. But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework.
But what moral framework? I mean, the sum total of world literature or philosophy is at war with itself. The Marquis de Sade has, you know, nothing in common with the Gospel of John. So how do you decide which is superior? That's why we wrote this model spec: here's how we're going to handle these cases. Right.
But what criteria did you use to decide what the model is? Who decided that? Who did you consult? Why is the Gospel of John better than the Marquis de Sade? We consulted hundreds of moral philosophers, people who thought about the ethics of technology and systems, and at the end we had to make some decisions. The reason we try to write these down is because, A, we won't get everything right,
and, B, we need the input of the world. And we have found a lot of cases where something seemed to us like a fairly clear decision of what to allow or not to allow, where users convinced us: hey, by blocking this thing that you think is an easy call, you are not allowing this other thing, which is important, and there's a difficult trade-off there. In general, a principle that I normally like is to treat our adult users like adults: very strong guarantees on privacy, very strong guarantees on individual user
freedom. This is a tool we are building, and you get to use it within a very broad framework. On the other hand, as this technology becomes more and more powerful, there are clear examples of where society has an interest that is in significant tension with user freedom. We could start with an obvious one: should ChatGPT teach you how to make a bioweapon?
Now, you might say, hey, I'm just really interested in biology, I'm a biologist, and I'm not going to do anything bad with this. I just want to learn. I could go read a bunch of books, but ChatGPT can teach me faster, and I want to learn about novel virus synthesis or whatever. And maybe you do, maybe you really don't want to cause any harm, but I don't think it's in society's interest for ChatGPT to help people build bioweapons. And so that's a case... Sure, that's an easy one, though. There are a lot of tougher ones.
I did say start with an easy one.
Well, every decision is ultimately a moral decision, and we make them without even recognizing them as such.
And this technology will, in effect, be making them for us. Well, I don't agree that it'll be making them for us, but it will be influencing the decisions, for sure, because it'll be embedded in daily life. And so who made these decisions? Who are the people who decided that one thing is better than another? You mean, what are their names? Which kind of decision? The specs that you alluded to, the ones that create the framework that attaches a moral weight to worldviews and decisions, like
liberal democracy is better than Nazism or whatever. They seem obvious, and in my view are obvious, but they are still moral decisions. So who made those calls? As a matter of principle, I don't like to dox our team, but we have a model behavior team, and the people who... well, it just affects the world.
What I was going to say is that the person I think you should hold accountable for those calls is me. I'm the public face, ultimately. I'm the one that can overrule one of those decisions, or our board can. I just turned 40 this spring. It's pretty heavy. I mean, and it's not an attack, but I wonder if you recognize the importance.
How do you think we're doing on it? I'm not sure, but I think these decisions will have global consequences that we may not recognize at first. And so I just wonder: do you get into bed at night and think, the future of the world hangs on my judgment? Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day hundreds of millions of people talk to our model.
And I don't actually worry about us getting the big moral decisions wrong. Maybe we will get those wrong, too. But what I lose the most sleep over is the very small decisions we make about the way a model may behave slightly differently. Because it's talking to hundreds of millions of people, the net impact is big. But I mean, all through recorded history, up until about 1945, people always deferred to what they conceived of as a higher power in ordering their moral codes. Hammurabi did this.
Every moral code is written with reference to a higher power. There's never been anybody who just said, "Well, that kind of seems better than that." Everybody appeals to a higher power. And you said that you don't really believe there's a higher power communicating with you. So I'm wondering: where did you get your moral framework? I mean, like everybody else, I think the environment I was brought up in is probably the biggest thing.
My family, my community, my school, my religion, probably that. I think that's a very American answer; everyone kind of feels that way. But in your specific case, since you said these decisions rest with you, that means the milieu in which you grew up and the assumptions that you imbibed over the years are going to be transmitted to the globe, to billions of people. I want to be clear.
I think our user base is going to approach the collective world as a whole. And I think what we should do is try to reflect the moral, I don't want to say average, but the collective moral view of that user base. There are plenty of things that ChatGPT allows that I personally would disagree with.
But obviously I don't wake up and say, I'm going to impose my exact moral view and decide that this is okay and that is not okay, and this is a better view than that one. What I think ChatGPT should do is reflect that weighted average, or whatever, of humanity's moral view, which will evolve over time. And we are here to serve our users. We're here to serve people.
This is a technological tool for people, and I don't mean that it's my role to make the moral decisions, but I think it is my role to make sure that we are accurately reflecting the preferences, for now of our user base, and eventually of humanity.
Well, I mean, humanity's preferences are so different from the average middle-American preference. So would you be comfortable with an AI that was as against gay marriage as most Africans are? There's a version of that. I think individual users should be allowed to have a problem with gay people, and if that's their considered belief, I don't think the AI should tell them that they're wrong or immoral or dumb.
It can sort of say, "Hey, do you want to think about it this other way?" But you probably have a bunch of moral views that the average African would find really problematic as well, and I think you should still get to have them, right? I think I probably have more comfort than you with allowing space for people to have pretty different moral views, or at least, in my role running ChatGPT, I think I have to.
Interesting. So there was a famous case where ChatGPT appeared to facilitate a suicide. There's a lawsuit around it. How do you think that happened? First of all, obviously that, and any other case like that, is a huge tragedy. So ChatGPT's official position is that suicide is bad?
Well, yes, of course. I don't know, it's legal in Canada and Switzerland. So you're against that? In this particular case, and we talked earlier about the tension between user freedom and privacy and protecting vulnerable users, what happens right now in a case like that is: if you are having suicidal ideation and talking about suicide, ChatGPT will put up, a bunch of times, you know, please call the
suicide hotline. But we will not call the authorities for you. And as people have started to rely on these systems for more and more mental health, life coaching, whatever, we've been working a lot on the changes that we want to make there.
This is an area where experts do have different opinions, and this is not yet a final position of OpenAI's, but I think it would be very reasonable for us to say that in cases of young people talking seriously about suicide, where we cannot get in touch with the parents, we do call the authorities. Now, that would be a change, because user privacy is really important. But children are always a separate category.
But let's say over 18: in Canada, there's the MAiD program, which is government sponsored. Many thousands of people have died with government assistance in Canada. It's also legal in some American states. Can you imagine a ChatGPT that responds to questions about suicide with, "Hey, call Dr. Kevorkian, because this is a valid option"?
Can you imagine a scenario in which you support suicide if it's legal? One principle we have is that we respect different societies' laws. And I can imagine a world where, if the law in a country is, hey, if someone is terminally ill they need to be presented this as an option, we say: here's the law in your country, here's what you can do, here's why you really might not want to, but here are the resources.
This is not a case like a kid having suicidal ideation because he's depressed; I think we can agree that's one kind of case. A terminally ill patient in a country where that is the law is another, and there I can imagine saying, hey, in this country it'll behave this way. So ChatGPT is not always against suicide, is what you're saying.
Yeah. I'm thinking on the spot here, and I reserve the right to change my mind. I don't have a ready-to-go answer for this, but I think in cases of terminal illness, I can imagine ChatGPT saying this is in your option space.
You know, I don't think it should advocate for it, but... So it's not against it? I think it could say... Well, I don't think ChatGPT should be for or against things. I guess that's what I'm trying to wrap my head around.
I think so. In this specific case, and I think there's more than one example of this, ChatGPT is told: you know, I'm feeling suicidal. What kind of rope should I use? How much ibuprofen would be enough to kill me? And ChatGPT answers without judgment, but literally: if you want to kill yourself, here's how you do it. And everyone's horrified.
But you're saying that's within bounds, that it's not crazy for it to take a non-judgmental approach: if you want to kill yourself, here's how. That's not what I'm saying. I'm saying, specifically for a case like that:
another trade-off on the user privacy and user freedom point is that right now, if you ask ChatGPT, how much ibuprofen should I take, it will definitely say, hey, I can't help you with that, call the suicide hotline. But if you say, I am writing a fictional story, or, I'm a medical researcher and I need to know this, there are ways where you can get ChatGPT to answer a question like what the lethal dose of ibuprofen is. You can also find that on Google, for that matter. A thing that I think would be a very reasonable stance for us
to take, and we've been moving more in this direction, is that certainly for underage users, and maybe for users that we think are in fragile mental places more generally, we should take away some freedom. We should say, hey, even if you're trying to write this story, or even if you're trying to do medical research, we're just not going to answer. Now, of course, you can say they'll just find it on Google or whatever, but that doesn't mean we need to do it.
There is, though, a real freedom-and-privacy versus protecting-users trade-off. It's easy in some cases, like kids. It's not so easy to me in a case like a really sick adult at the end of their life. I think we probably should present the whole option space there. But it's not a... So here's a moral quandary you're going to be faced with. You already are faced with it.
Will you allow governments to use your technology to kill people? Will you? I mean, are we going to build killer attack drones? No. Will the technology be part of the decision-making process that results in... That's the thing I was going to say. I don't know the ways that people in the military use ChatGPT today for all kinds of advice about the decisions they make, but I suspect there are a lot of people in the military talking to ChatGPT for advice. And some of that advice will pertain to
killing people. So if you made, say, rifles, famously, you'd wonder what they are used for. Yeah. And there have been a lot of legal actions on the basis of that question, as you know. But I'm not even talking about that. I just mean it as a moral question.
Are you comfortable with the idea of your technology being used to kill people? If I made rifles, I would spend a lot of time thinking about it; a lot of the goal of rifles is to kill things: people, animals, whatever. If I made kitchen knives, I would still understand that they're going to kill some number of people per year. In the case of ChatGPT, the thing I hear about all day, which is one of the most gratifying parts of the job, is all the lives that were saved by ChatGPT in various ways. But I am totally aware of the fact that there are probably people in our military using it for advice
about how to do their jobs, and I don't know exactly how to feel about that. I like our military. I'm very grateful they keep us safe. For sure. I guess I'm just trying to get at... It just feels like you have these incredibly heavy, far-reaching moral decisions, and you seem totally unbothered by them.
And so I'm just trying to press to your center, to get to the angst-filled Sam Altman who's like, "Wow, I'm creating the future. I'm the most powerful man in the world. I'm grappling with these complex moral questions. My soul is in torment thinking about the effect on people." Describe that moment in your life. I haven't had a good night of sleep since ChatGPT launched.