Nick Bostrom: How Entrepreneurs Can Win in an AI-Dominated World | Artificial Intelligence | E356

Nick Bostrom’s simulation hypothesis suggests that we might be living in a simulation created by posthumans. His work on artificial intelligence and superintelligence challenges how entrepreneurs, scientists, and everyone else understand human existence and the future of work. In this episode, Nick shares how AI can transform innovation, entrepreneurship, and careers. He also discusses the rapid pace of AI development, its promise to radically improve our world, and the existential risks it poses to humanity.

In this episode, Hala and Nick will discuss:


() Introduction


() The Simulation Hypothesis, Posthumanism, and AI


() Moral Implications of a Simulated Reality


() Fermi Paradox and Doomsday Arguments


() Is AI Humanity’s Biggest Breakthrough?


() Types of AI: Oracles, Genies, and Sovereigns


() The Potential Dangers of Advanced AI


() Artificial Intelligence and the Future of Work


() Finding Purpose in an AI-Driven World


() AI for Entrepreneurs and Innovators

 

Nick Bostrom is a philosopher specializing in understanding AI in action, the advancement of superintelligent technologies, and their impact on humanity. For nearly 20 years, he served as the founding director of the Future of Humanity Institute at the University of Oxford. Nick is known for developing influential concepts such as the simulation argument and has authored over 200 publications, including the New York Times bestsellers Superintelligence and Deep Utopia.

 

Sponsored By:


Shopify – Start your $1/month trial at Shopify.com/profiting


Indeed – Get a $75 sponsored job credit to boost your job’s visibility at Indeed.com/PROFITING


Mercury – Streamline your banking and finances in one place. Learn more at mercury.com/profiting


OpenPhone – Get 20% off your first 6 months at OpenPhone.com/profiting


Bilt – Start paying rent through Bilt and take advantage of your Neighborhood Benefits by going to joinbilt.com/profiting


Airbnb – Find a co-host at airbnb.com/host


Boulevard – Get 10% off your first year at joinblvd.com/profiting when you book a demo

 

Resources Mentioned:


Nick’s Book, Superintelligence: bit.ly/_Superintelligence


Nick’s Book, Deep Utopia: bit.ly/DeepUtopia


Nick’s Website: nickbostrom.com

 

Key YAP Links


Social + Podcast Services: yapmedia.com

 

Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI in Business, Generative AI, AI Podcast.

Hala Taha: [00:00:00] [00:01:00] Bam. On today's episode, we're focused on the bold ideas shaping tomorrow, and today's guest has dedicated his career to thinking decades and even hundreds and thousands of years ahead, and he's got some wild perspectives on how our world may shake out and be drastically different even just a few years from now.

Nick Bostrom isn't just a philosopher, he's a global thought leader on the future of artificial intelligence. He's the author of Superintelligence, the groundbreaking book that brought the risks of advanced AI into the mainstream conversation, as [00:02:00] well as the book Deep Utopia, which explores what life might look like in a world where all our problems are solved.

And for humans, when all of our problems are solved, purpose becomes the next big question. In this conversation we explore whether we're living in a simulation, what a post-human future could look like, how AI could either destroy or liberate us, and what it all means for purpose, progress, and even the future of entrepreneurship.

So buckle up, YAP fam, because this episode is gonna stretch your thinking and challenge your assumptions. But first, make sure you hit that subscribe button so you never miss an episode packed with insights like these. Nick, welcome to Young and Profiting podcast.

Nick Bostrom: Thank you so much for having me.

Hala Taha: I love conversations about the future, about AI, and you've spent your career focused on really deep, long range questions, the deepest questions that we could really ask about humanity.

And so I'm wondering what really first drew you to thinking about humanity thousands and even billions of [00:03:00] years into the future? 

Nick Bostrom: I think it's sad if we have this allotted time here on the planet in this magical cosmos, and we never really take the time to look around or try to figure out what is going on here.

I feel sometimes we are a little bit like ants running around, being very busy, pulling our needles to the, uh, anthill, but we don't really stop to reflect: what is this anthill that we're building? What is it for? What else is going on in this forest around us?

Hala Taha: It's so true. We're just focused on working and hustling and not really paying attention to what we're even living in.

And I know that one of the things that made you famous is that you put out a paper in 2003 with the hypothesis that we're living in a simulation, and it's actually what first made you famous, putting out this paper. So talk to us about, [00:04:00] you know, in 2025, what are the odds that you think that we're currently living in a simulation right now?

Nick Bostrom: I tend to punt on the probability question there. I often get asked, but I refrain from putting an exact number on it. I take it as a very serious possibility, though. The simulation argument itself that you're referring to, the paper that was published in 2002, only demonstrates that one of three possibilities obtains, one of which is the simulation hypothesis.

But the simulation argument itself doesn't tell us which one of those three. So you need to bring additional considerations to bear. But if you're thinking ahead, you know, in this time of rapid advances in AI, where all of this might be going, if you think eventually we'll have these superintelligences that develop all kinds of super advanced technologies, colonize space, transform planets into giant [00:05:00] computers.

Amongst the things they could do with that kind of technology would be to run simulations, detailed simulations of environments like ours, including with brains in those simulations, simulated at a very high level of granularity. And so what that means is that if this happens, there could be many, many more people like us, with our kinds of experiences, being simulated than being implemented in the original meat substrate.

And if most people with our kinds of experiences are simulated, then we should think we are probably amongst the simulated ones rather than the rare, exceptional, original ones. Given that from the inside, you wouldn't be able to tell the difference. 

Hala Taha: Yeah. But I really wanna know, do you think we're living in a simulation?

Nick Bostrom: Well, as I said, I take the hypothesis seriously.

Hala Taha: Yeah. So you have one of three, where you say we could become extinct before there's posthumans, right? Then you say we might be living in a simulation. Talk to us about the three [00:06:00] hypotheses that you have.

Nick Bostrom: Yeah. So if you break this down, if we do end up with a future where this mature civilization runs all these simulations of variations of people, like their historical predecessors, then there would be many more simulated people with our experiences than non-simulated ones.

Conditional on that, I think we should think we are almost certainly amongst the simulated ones. So then if you break this down, what are the alternatives to that? Well, one is that we won't end up with this future, and that could be because we go extinct before reaching technological maturity. So that's one of the alternatives.

But not just that we go extinct; it would have to be pretty universal amongst all other advanced civilizations throughout the universe that they almost all go extinct before reaching the level of technological capability that would allow them to run these types of ancestor simulations.

So that's possibility one, a strong filter: every civilization that reaches our [00:07:00] current stage of technological development just fails to go all the way there. Then the second is that, well, maybe they do become technologically mature, but they decide not to use their planetary supercomputers for this purpose.

They have other things to do. Maybe they all refrain from using even a small portion of their computational resources to run these simulations. So that's the second alternative, a strong convergence: they all lose interest in running these computer simulations. But if both of those fail, then we end up with the third possibility, that we are almost certainly currently living in a computer simulation created by some advanced civilization.
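To make that indifference step concrete, here is a minimal sketch in Python; the function name and the figures are purely illustrative assumptions, not numbers from Bostrom's paper or this conversation.

```python
# A minimal sketch of the indifference reasoning described above.
# All names and numbers here are illustrative assumptions.

def credence_simulated(n_original: float, n_simulated: float) -> float:
    """Fraction of observers with experiences like ours who are simulated.

    Under the indifference principle Bostrom invokes, this fraction is
    (roughly) the credence we should assign to being simulated ourselves.
    """
    return n_simulated / (n_simulated + n_original)

# If even one mature civilization runs a million ancestor simulations of a
# population the size of the original one, simulated observers vastly
# outnumber the originals.
print(credence_simulated(n_original=1, n_simulated=1_000_000))  # ~0.999999
```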

Hala Taha: And the advanced civilization, you say they're posthuman, right? Can you talk to us about how you envision this posthumanity? What are they like? What are their capabilities?

Nick Bostrom: Well, if this is a simulation, then presumably we can infer a few things: the people building it would have to be very technologically advanced.

'cause right now we can't create computer simulations with conscious human beings in them. They need to build very [00:08:00] powerful computers. They need to know how to program them, et cetera. And then you can figure if they have the technology to do that, they probably also have technology to do a bunch of other things, like including enhancing their own intelligence.

So I imagine these would be superintelligences that would have reached a state close to technological perfection, and then for whatever reason, they have some interest in doing this stuff. But beyond that, it's hard to say very much specifically about what they would be like.

Hala Taha: Now that AI is at the forefront, do you believe that maybe these post humans might be like part human, part ai or all ai?

Nick Bostrom: At that point, the distinction might blur, which also might be the case for us in the future if things go well and, uh, we are allowed to continue to develop. We will develop, I think, artificial superintelligence. But amongst the things that that technology could be used to do would be [00:09:00] providing paths for us current biological humans to gradually upgrade our abilities.

This could take the form of biological enhancements of various kinds, but it could also ultimately take the form of uploading into computers. So you could imagine detailed scans of human brains that would then allow our memories and personalities and consciousness to continue to exist, but then in a digital substrate. And from there on, you could imagine further development: you could add neurons, you could increase the processing speed.

You could gradually become some form of radically posthuman super being that might be hard to differentiate from a purely synthetic ai. 

Hala Taha: So interesting. So your theory is, if we're in a simulation, there's posthumans who are really technologically advanced and they're creating our world, which you call an ancestor simulation, correct?

Why would they do that? What would be the reason for them [00:10:00] creating a civilization like ours?

Nick Bostrom: We can, um, only speculate. I mean, we don't know much about posthuman psychology or their motives, but there are several potential reasons, motivations. You could ask why it is that we humans, with our current more limited technology, create computer simulations, and we do it for a variety of purposes.

People have, for thousands of years, tried to create imaginary worlds that people can experience, whether through theater, right, or literature, and more recently through virtual reality and computer games. This can be for entertainment or for cultural purposes. You also have scientists creating computer simulations to study various systems that might be hard to reach in nature; you create a little computer simulation of them and then you study how the simulation behaves.

So there could be, uh, entertainment reasons, there could be scientific reasons. Maybe these posthumans are interested in knowing, if they [00:11:00] ever ran into alien civilizations, what those would be like. And maybe one way to study that is to simulate many different originations of higher technological civilizations, starting from something like current human civilization, and then running the tape forward and seeing what the distribution is of different kinds of superintelligences you would get from that.

And you could also imagine historical tourism, if they can't literally travel back in time. Maybe the second best would be to create replicas of historical environments that future people could experience, almost as if they were going back in time, but temporarily exploring a simulated reality.

Now you could imagine other sort of moral or religious reasons as well of different kinds. 

Hala Taha: If it's true that we're living in a simulation, what do you feel like are the moral implications of what it means for our lives? 

Nick Bostrom: I think, uh, as a first approximation, I would say: [00:12:00] if you're in a simulation, do the same things you would do if you knew you were not in a simulation.

Because the best guide to what would happen next in the simulation and how your actions would impact things might still be the normal methods we use. Like you look at patterns and extrapolate those, whether we're simulated or not, unless you have some direct insight into what the simulator's motives are, or like the precise way in which this simulation was set up.

You just have to look at what kind of simulation this appears to be and what seems to follow: you know, if you do A, then B follows; if you want to get into your car, you have to take out your car keys, and so on. So I think that would be, to a first cut, the answer. But then to the extent that you have some maybe probabilistic guesses about how these things are configured, that might give you, on the margin, more reason to emphasize some hypotheses that otherwise would be less plausible.

So for example, [00:13:00] if we are not in a simulation and you have a secular materialistic outlook on life, then when we die, we die, and that's it, right? Whereas in a simulation, you could potentially be moving into a different simulation or be uplifted to the level of the simulators; these would at least be on the table as possibilities.

Similarly, if we are in basement physical reality, as far as we know, current physical theories say the world can't just suddenly pop out of existence. There are conservation of energy, conservation of momentum, and other physical laws that prevent that from happening. If, however, our world is simulated, then in theory, if the simulators flick the power off, our world would pop like a bubble, disappearing into nothingness.

Broadly speaking, I think there would be a wider range of possibilities on the table if we're simulated than if we're not. So it might mean approaching our existence with less confidence that we have it basically figured out, and thinking there [00:14:00] might be more things on earth than we'd normally assume in our common sense philosophy, and then maybe some sort of attitude of humility would be appropriate in that context.

Hala Taha: Are there any clues or pieces of proof that prove we're in a simulation? Like, for example, the dinosaurs and how they just went extinct, and then, you know, it was kind of like a new world after that. Do you feel like there are any clues that we're in a simulation?

Nick Bostrom: I'm rather skeptical of that.

I get a lot of random people emailing saying they have discovered some glitch in the matrix or something. You know, somebody was looking at their bathroom mirror and thought they saw pixels. But I think whether we are in a simulation or not, you would still expect some people to report those kinds of observations for all the normal types of psychological reasons.

Some people might hallucinate something, some might be misremembering something, or misinterpreting something, or making something up. These things [00:15:00] you would expect to take place anyway. So I think, whether we're in a simulation or not, the best, most likely explanation for those reports is these ordinary psychological phenomena,

rather than that there is actually some defect in the simulation that they have been able to detect. I think to create a simulation like this in the first place would be very hard, and simulators advanced enough to do that would probably also have the ability to patch things up so that the creatures inside the simulation couldn't notice, and if they did notice, they could edit that out or rerun it from an earlier safe point or edit the memory or do other things like that.

I don't think that. I do think there are indirect observations that might slightly adjust the probability. So if you recall the original simulation argument with these three possibilities, right? The simulation argument shows at least one of them is true, but doesn't tell us which one. But what that means is that to the extent we get evidence against the first two possibilities, it [00:16:00] would tend to shift probability mass over to the remaining third.

And conversely. So if you think we can get evidence for or against, say, the first one, which is that almost all civilizations at our current stage of technological development go extinct before reaching technological maturity, then that would tend to indirectly affect the probability of the simulation hypothesis.

So in particular, the closer we get ourselves to being able to wield this technology, the less likely it is that almost all civilizations at our stage fail to go the remaining few steps. That will tend to move probability over to the remaining possibilities.

Hala Taha: So basically what you're saying is, if we ourselves, as technology advances, can start to create simulations, that means that we're probably in a simulation ourselves.

Nick Bostrom: Yeah. And in particular, if we get both the ability and we still at that point retain the interest in doing that, that would really leave only the third possibility. 

Hala Taha: Mm-hmm. 

Nick Bostrom: But short of getting all the way there, if we get halfway there, it [00:17:00] still, you know, removes some of the places where this great filter could be. The closer we get, the less chance there is that the remaining few steps will be impossible to take.

Hala Taha: Do you feel like AI is gonna help us get closer to creating this simulation? And now that AI is something that we use all the time, do you think it will help discover that we're in a simulation?

Nick Bostrom: Well, for the same reasons we might be able to figure these things out if indeed the argument is sound. And I guess other powerful minds would also reach that same conclusion.

And they might also reach the conclusion that they are in a simulation. For them, especially for sort of early-stage AI, it would be a very salient possibility, because already right now, when training our AIs, we often put them in simulations.

That's like a big part of the training, right? We create these virtual game environments or something, and so for them, it seems like a very obvious hypothesis to consider once they develop some degree of situational awareness.[00:18:00]

And you already see glimmers of that with current AI systems. And in fact, I think this might be an important shaper of AI behavior as they become sophisticated and reason strategically about their context. They would certainly be shaped by considerations that they might well currently be living in a simulation environment rather than in deployment.

 

Hala Taha: I know we kind of alluded to this already, but I'd love to hear what you think about it more. If we are in fact living in a simulation, and let's say we discover for certain we're in a simulation, we can create simulations. What do you think would happen on Earth? How do you think things would change? 

Nick Bostrom: Well, I think humans have a great ability to adapt to changes in worldview, and for the most part, most people are only slightly affected by these big picture considerations. You can look through human history; different worldviews have come and gone, and some people become very fanatical and take it seriously.

Most people, [00:19:00] just broadly speaking, get on with their lives. Maybe once in a while they get asked about these things and they say certain words rather than other words. So I think the direct philosophical implications on our behavior would be moderate, probably. But I imagine in this situation, where we developed the technology to create our own simulations, the technology that allowed us to do that would also allow us to do so many other things to reshape our world.

And those more direct technological impacts, I think, would be far greater than the sort of indirect impacts of changing our philosophical opinions about the world.

Hala Taha: Well, do you think that people would become more violent? 

Nick Bostrom: Why would that be the case? 

Hala Taha: I guess because if you're living in a simulation, maybe people wouldn't consider death to be the same thing anymore.

Nick Bostrom: If we found out we were in a particular kind of simulation, like some sort of short-duration game simulation, then yeah, you could imagine that would shape behavior, just as you [00:20:00] maybe behave very differently when you're playing a computer game. Hopefully you don't behave the same way in real life as you do when you're playing a first person shooter.

But if we didn't get any new insights as to how this particular simulation is configured, we just learned that it is a simulation but not anything about the specific character of the simulation, then I don't know whether that would lead to a greater propensity for violence. If anything, maybe, uh, the converse, if you think there might be stages after the simulation where your behavior in the simulation would affect what comes next, kind of similar to the traditional idea of

Hala Taha: Yeah.

Nick Bostrom: Karma or an afterlife.

Hala Taha: Mm-hmm. 

Nick Bostrom: Some people might become more violent or fanatical, but it can also serve as moral ballast. Hopefully you do the right thing just because it's moral, but if not, you know, if there is some system of accountability, that might also induce people to pay more attention to making sure they don't harm others or trample on other [00:21:00] people's rights and interests.

Hala Taha: It's kind of like if you lose the game, there could be winners and losers of the game that we're in. 

Nick Bostrom: Yeah. It's hard to know how that all shakes out, but in terms of thinking about the big picture, the question you started with, it seems to me one of a small number of fundamental constraints on what we can coherently believe about the structure of reality and our place within it.

And it is striking. It might have seemed, and I guess to most people it did seem, if you go back a couple of decades, that it's so hard to know what's gonna happen in the future. Anything is possible. You can just make stuff up. The problem is not coming up with some idea; it's that there are no constraints that would allow us to pick which ideas are correct.

'Cause we have so little evidence. But in fact, I think if you start to think these things through, it can be hard to come up with even one fully articulated, coherent picture that makes sense of the constraints that we're already aware [00:22:00] of. The simulation argument is one, but there are others.

There's the Fermi paradox, where we haven't seen any aliens. There's what we seem to know about the kinds of technologies that can be developed. There are other, perhaps more methodologically shaky, arguments, like the Carter-Leslie Doomsday argument. There are a few things like this that can serve to structure our thinking about the really biggest strategic picture surrounding us.

Hala Taha: Can you tell us about some of those arguments? 

Nick Bostrom: So the Fermi paradox, many people will have heard of it, but it's the observation that we haven't seen any signs of extraterrestrial life, and yet we know that there are many galaxies and many planets, billions and billions and billions out there, on which it seems life could have originated.

So the question then is, with billions of possible germination [00:23:00] points and zero aliens that have actually manifested themselves to us or arrived at our planet, how do we reconcile those two? There has to be some great filter, such that you start with billions of germination points and you end up with a net total of zero extraterrestrial arrivals here.

So what accounts for that? I think the most likely explanation is that it's just really hard to get to technologically advanced life. And maybe it's hard to get to even simple life. And you could look for the candidate places where this kind of great filter could be. Maybe it's the emergence of simple self-replicators.

Like, so far we haven't found that on any other planet. Or maybe it's slightly later on. Maybe it's the step from prokaryotic life forms to eukaryotic life forms; on earth, it looks like that took one and a half billion years. Maybe what that means is that it's astronomically improbable for it to happen, and [00:24:00] you just had one and a half billion years where random things just bumped into each other by chance.

And, um, with a large enough universe, and ours might for all we know be infinitely large with infinitely many planets, then eventually, no matter how improbable something is, it will happen somewhere. And then you would invoke a so-called observation selection effect to explain why we are observing that on our planet that improbable event happened.

Only those planets where that improbable event happened develop observers that can then look back on their own history and marvel at this. So that's one possibility. Maybe it's slightly later on. The closer you get to current humanity, though, it seems the less likely it is that there would be a great filter.

For example, you might think that the step to more advanced forms of cognitive ability would be the improbable step, but that doesn't really fit the evidence. We know that on several independent evolutionary [00:25:00] lineages, you had fairly advanced intelligence evolving here on earth. You have it happen in the hominid lineage, of course, but also independently amongst, uh, birds and corvids like crows and stuff.

And among octopi, for example. So it looks like, if it happens several times independently on Earth, then it can't be that unlikely. But anyway, it poses some constraints. You can't simultaneously believe that it's easy for intelligent life to evolve, that it's technologically feasible to do large-scale space colonization, and that there is a wide range of different motives present amongst advanced civilizations, while at the same time explaining why we haven't seen any.

So something has to give, and it gives us clues. The other argument that I was referring to, the Carter-Leslie Doomsday argument, is a piece of probabilistic reasoning having to do with how to take into account evidence that has an indexical element. [00:26:00] So indexical information is information about who you are, when you are, or where you are, and the epistemology of how to reason about these things is quite difficult and murky.

So it's unclear whether the Carter-Leslie Doomsday argument is ultimately sound or not, but I can give you a kind of intuition for how it would work. Let me explain it by means of an analogy. Suppose I have two urns, and I put 10 balls in one of the urns, and the balls are numbered from one to 10, okay?

Okay. And then in the other urn I put a million balls, numbered from one to 1 million. Then let's say I flip a coin and select one of these urns and put it in front of you. And now your task is to guess how many balls there are in this urn. So at this point you say 50-50 that there is a million balls, right? Because one of the urns was selected randomly. Now let's suppose you reach in and select one random ball from this urn, and it's number eight, let's say.

Because one of each earns on selected one randomly. Now let's suppose you reach in and select one random ball from this urn and it's number. [00:27:00] Eight, let's say, hmm. Using base theorem that allows you to infer that it's not much more likely that the urn has only 10 balls than a million. Because if there were a million, what are the chances that you would get one of the first 10?

Very unlikely, right? So far it's just standard probability theory, uncontroversial. But then the idea with the Carter-Leslie Doomsday argument is that we have an analogous situation, but where instead of two hypotheses about how many balls the urn has, we now instead have, say, two different hypotheses about how long the human species will last.

How many humans will there have been in total when the human species eventually goes extinct? In reality there are more possibilities, but we can simplify it to two to see the structure of the argument. So one is, maybe there will be in total 200 billion humans, and then maybe we develop some technology and blow ourselves up. So that's one thing you might think could happen. And let's consider an alternative hypothesis.

So that's one thing you might think could happen, and let's consider an alternative hypothesis. Maybe there will be to. Thousand [00:28:00] trillion humans, like we eventually start to develop space colony. We colonize the galaxy. Our descendants live for hundreds of millions of years, and there are vastly more people.

These two then correspond to the two hypotheses about how many balls there are in the urn. Then you have some prior probability on these two hypotheses that's based on your ordinary estimates of different risks from nuclear weapons and biological weapons and all of these things. So, you know, maybe you think it's 50-50, or maybe you think it's 90% that we will make it through and 10% that we will go extinct.

Whatever your probability is from these normal considerations. But then the Doomsday argument says that, well, there's one more really important piece of information you have here, which is that you can observe your own birth rank, your sequence amongst all humans who have ever been born. This turns out to be roughly a hundred billion; that's roughly speaking how many humans have existed to date on earth.

That's roughly speaking how many humans have existed to date on earth. So the idea then is that if humanity goes extinct relatively soon, [00:29:00] then being number 100000000000th of say, you know, 200 billion humans is very unsurprising, right? That's like getting ball number eight from an U that has 10 balls or 16 balls or something like.

So the conditional probability of you having the birth rank you have, given that there would be relatively few people in total, is fairly high. Whereas the conditional probability of you being this early, if there are going to be quadrillions of humans spreading through the universe, is very low; a randomly selected human would be much more likely to live much later, on some faraway galaxy.

So then the idea is you do a similar Bayesian update and end up with the Doomsday Argument conclusion, which is that doom-soon hypotheses are much more probable than you would naively think just taking into account the normal empirical considerations, and so you would have this systematic pessimistic update.

That's roughly speaking how it goes, and there's more to it, in [00:30:00] particular to back up this premise that you should reason as if you were some randomly selected human from all the humans that have ever existed. Maybe you think, why think that? But there are then some arguments that seem to suggest that something like that is necessary to make sense of how to reason about these types of indexical information.
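To make the update concrete, here is a minimal sketch in Python of the Bayesian reasoning just described, using the illustrative numbers from the conversation; the function name and the 10% prior are assumptions for illustration, not anything Bostrom specifies.

```python
# A minimal sketch of the Bayesian update behind the urn example and the
# Doomsday argument analogy. Numbers are illustrative only.

def posterior_small(prior_small, n_small, n_large, observed_rank):
    """P(small hypothesis | observed rank), assuming the rank is drawn
    uniformly at random from 1..N under each hypothesis."""
    assert observed_rank <= n_small <= n_large  # rank must be possible under both
    like_small = 1.0 / n_small   # P(this rank | small total)
    like_large = 1.0 / n_large   # P(this rank | large total)
    prior_large = 1.0 - prior_small
    evidence = prior_small * like_small + prior_large * like_large
    return prior_small * like_small / evidence

# Urn example: 10 balls vs. 1,000,000 balls, 50/50 prior, ball number 8 drawn.
print(posterior_small(0.5, 10, 1_000_000, 8))        # ~0.99999 -> almost surely the 10-ball urn

# Doomsday analogy: 200 billion total humans ("doom soon") vs. two thousand
# trillion ("doom late"), an optimistic 10% prior on doom soon, birth rank
# around 100 billion.
print(posterior_small(0.10, 200e9, 2_000e12, 100e9))  # ~0.999 -> a large pessimistic shift
```

The point of the sketch is just that the birth-rank observation, treated like the drawn ball, shifts a lot of probability toward the smaller total, which is the pessimistic update Bostrom describes.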

Hala Taha: All the stuff that you're saying is so interesting in terms of like how we can approach life, and I know there's so many like doomsday people out there, so it's great that we got some context in terms of what they're thinking. But let's talk about AI because if we are in a simulation, AI could be what helps us actually create more simulations and prove that we're in a simulation.

How do you think about AI in terms of the significance in humanity? Do you feel like it's bigger than something like the agricultural revolution or the industrial revolution? Do you feel like this is one of the biggest breakthroughs that we've ever seen as humanity? 

Nick Bostrom: I think it will be, and to a large [00:31:00] extent my reasons for thinking that are independent of the other considerations that we discussed.

So you don't have to believe in the Doomsday argument or the simulation argument or any of that. I mean, I think those are helpful for informing us about the big picture. But even setting that aside, just reviewing the rapid recent advances that we've seen in the field of artificial intelligence, it really looks like we've possibly figured out a large component of the secret sauce, as it were, that makes the human brain capable of general-purpose learning.

And it does seem current large transformer architectures do exhibit many of the same forms of generality that the human brain has, and there is no reason to think we've hit the ceiling. And also from first principles, if you look at the human brain, it's a physiological system, quite impressive in many ways, but far from the physical limits of computation.

It has various constraints. First and most obviously, it's restricted in [00:32:00] size; it has to fit inside the cranium, whereas AI can run on, uh, arbitrarily large data centers, the size of warehouses or bigger, right? So it can just expand spatially. And also in terms of basic information processing: a human neuron operates on a timescale of maybe a hundred hertz.

It can sort of fire a hundred times per second, give or take, whereas even a current-day transistor can operate at gigahertz, so billions of times a second. So there are various reasons to think that the ultimate limits to information processing with mature technology are just way beyond what biological human or other brains can achieve.

So ultimately, the potential for intelligent information processing in machine substrate could just vastly outstrip what biology is capable of. And so I think if technological and scientific development is allowed to continue on a broad front, we will eventually get there. And moreover, recently it does seem like we are on the path to doing this.

Those [00:33:00] are some of the basic considerations suggesting we should take this quite seriously. And then you can think about what it would mean if we really did develop AGI, artificial general intelligence. And I think the first thing it would mean is that we would soon develop superintelligence. I don't think we would go all the way up to fully human-level AI and then suddenly it would stop there.

So then we will have a world where we are able to engineer minds, and where all human labor can be automated, not just the muscle labor that we started to be able to automate with the Industrial Revolution, with steam engines and internal combustion, like we have digging machines that are much stronger than any human strongman, et cetera.

But we will then have machine minds that can outthink any human genius scientist or artist. And so it's really the last invention we will ever need to make, 'cause from that point on, further inventions would be much better and faster made by these machine minds. So I think, yeah, it will be a very fundamental transformation of the human condition.

And [00:34:00] some people say, well, the Industrial Revolution, and I think you can learn something from parallels to that, but maybe you need to go back more to the origination of Homo sapiens in the first place, or maybe to the emergence of life. I think it would be more at that level, rather than the mobile internet or the cloud or one of these other recent buzzwords that people get excited about.

Hala Taha: It's almost like evolution, our evolution as humanity. It could lead to our extinction, but it could also lead to our evolution in terms of how we interact with this AI, or if we merge.

Nick Bostrom: Yeah, it could be a big unlock.

Hala Taha: 

Nick Bostrom: So in my earlier work, um, like this book, Superintelligence: Paths, Dangers, Strategies, which came out in 2014, I

focused a lot on, um, identifying this prospect that we will eventually get to AI and superintelligence, and then also the risks associated with that, including existential risks. Because at the time, this was very [00:35:00] much a neglected topic that nobody was taking seriously, certainly nobody in academia.

And yet it seemed to me quite predictable that we would eventually reach that point. And now, in fact, that is much more widely recognized, and things have moved from fringe, dismissed as science fiction, to now, you know, you see statements coming out from the White House and other governments around the world, and the leading AI labs now have research teams specifically trying to solve scalable AI alignment.

The big technical problem of how you can develop algorithms that would allow you to steer arbitrarily intelligent AI systems. It's very much an active research frontier. So that's very much part of my picture, that there will be big risks associated with this transition. But at the same time, the, uh, upside is enormous.

The ability to unlock human potential, to help alleviate human misery, and to really bring about a wonderful world. I see it as a portal through which humanity at some point will need to pass. All the paths to really [00:36:00] great futures ultimately, I think, lead at some point or another through this development of greater-than-human, uh, intelligence, and we really need to be careful when we're doing it to make sure we get it right as far as we can. But ultimately, it would be in itself, I think, a kind of existential catastrophe if we forever failed to take this next step.

Hala Taha: Something that I keep thinking about is going back to this, we could be in an ancestral simulation and so there's post humans who might be looking at us trying to study their own history and saying like, how did we really come about? And maybe they're studying how humans could have evolved and created these advances and then created their own simulations, like maybe they're trying to figure out how they became in existence.

Does that make sense? 

Nick Bostrom: Yeah. One possible reason, as we alluded to earlier, for why a technologically mature civilization might run ancestor simulations [00:37:00] would be this scientific motive of trying to better understand the dynamics that could shape the origination of other superintelligent civilizations. So if they originate from biologically evolved creatures, then studying those types of creatures, different possible creatures, the societies they build, the dynamics.

That could be one motive that could drive this, but there are other possible motives as well. That's one of them. I mean, you might wonder whether it would saturate. So it's not just whether it could lead some advanced civilization to create some simulations, but you also have to think they could create very many simulations over their existence; these mature civilizations

might last for billions of years, right? And you might think that there would be diminishing returns to running scientific simulations. Like, the first simulation you learn a lot, the next thousand you learn a bit more, but after you've already run billions of simulations, maybe the incremental gain from running a few more [00:38:00] starts to plateau.

Whereas there might be other reasons for running simulations that wouldn't be subject to the same diminishing returns. If that's the case, you might think most simulations they run would be ones driven by other motives than the scientific one, 

Hala Taha: like entertainment or something like for example, like our movies.

Nick Bostrom: Yeah, if they place some intrinsic value on simulations, for instance, that would be one example of a motive that might not saturate in the same way.

Hala Taha: I wanna move on to understanding your three levels of ai. So you have Oracles, genies, and sovereigns. Can you explain what each one is and maybe some of the risks of each one?

Nick Bostrom: Not so much levels, but more types.

Hala Taha: Okay, 

Nick Bostrom: so an Oracle AI basically is a question answering system like an AI that you ask a question and it gives an answer. This is similar to what these large language models have in effect been. They don't really do anything but they answer questions. So this is like one template.

A genie would be [00:39:00] some task-executing AI. So you give it a particular task and it performs the task. These types of systems are currently in development; maybe we'll see more agent-like systems being released this year. I think last week OpenAI released Codex, which is a sort of coding agent that you can assign a programming task, and it goes off, starts mucking around with your code base, and

And I. Hopefully solves the task. And you could imagine this being generalized maybe in a few years to physical tasks with robots that can do the laundry or, or sweep the driveway or do these things. A genie is more an AI that operates autonomously in the world in pursuit of some open-ended long range objective, like, you know, make the world better or make people happy, or enforce the peace between these two different nations and is kind of autonomously running around trying to shape the [00:40:00] world in favor of that the way that currently humans and nation states are, and maybe corporations to some extent, these kind of open-ended, it's not just that they're doing one specific task and then come back for more instructions to have their own open-ended.

So these are three different templates for what kind of AI system one might try to build. And they, they come with different pros and cons from a safety point of view and a utility point of view. 

Hala Taha: So a sovereign is more like an organization or a nation and has multiple steps, correct? And a genie kind of carries out that one thing?

Nick Bostrom: It could be a single agent as well. In this sense, it doesn't mean sovereign as in national sovereignty. It means that you could be a sovereign if you set yourself the goal in life of trying to alleviate the suffering of the global poor, for instance. You can do that your whole life. It involves many specific little tasks, like trying to raise money for this charity, trying to launch this new [00:41:00] campaign, or trying to invent some new medicine that will help.

All of these would be subtasks, but they're in pursuit of this open-ended objective. So similarly, you could have an AI system, maybe internally it's like a unified, simple agent architecture, but that is operating in pursuit of such an open-ended objective. Conversely, even an oracle that just tries to answer questions could, internally, theoretically, be a multi-agent architecture.

There could be different research agents that get sent off to answer different sub-questions, which are then combined at the end to produce an answer for the user. So one has to distinguish the internal architecture of the system from the role that it is designed to play in society.

Hala Taha: What are the different ways that each one of these types of AI could go wrong?

Nick Bostrom: They all share a bunch of things that could go wrong, which is: however they are intended to operate, they might not actually operate that way. [00:42:00] So you might construct an AI that you intend to serve just as a question-answering system, but then internally it might have goal-seeking processes, just as if you assign a scientist a question that they should try to figure out the answer to, like how safe is this drug.

But then in the course of trying to answer that, they might have to make plans and pursue goals, like, oh, how do I get the research grant to, uh, fund this research? How do I hire the right people to work on my research team? And so internally you could have processes, maybe unintentionally arising during training within the AI mind itself, that could have objectives and long-term goals, even if that was not the function that you wanted the AI system to play.

And that can happen with any of these three types. Then, even if you look at systems that behave as intended, a simple oracle system without any safeguards could help answer questions that we don't want people to be able to answer, [00:43:00] like, how do I make a more effective biological weapon? Or how do I make this hacking tool that allows me to hack into different systems?

Or if you're a dictator, how do I weed out any possible dissidents and detect who the dissidents are, even if they've tried to conceal it from me, just from reading through all the correspondence and all the phone calls that we've eavesdropped on. So there are all kinds of ways in which this oracle system could be misused, either deliberately or because people are just unwise in the questions they ask it. For the task-executing AI, similarly.

Plus, you could also have them run around doing things on their own, like try to hack this system, or try to promote this pernicious ideology, or spread this doctrine, or trick people into buying this product even though it's actually a harmful product. We don't really know, for a sort of global economy with a lot of these autonomous agents running around hyper-optimizing for different objectives, [00:44:00] how that shakes out when they're interacting with one another.

And of course these AIs, if they become very powerful, I mean, they might potentially shape the future of the world and be very good at that. If they're superintelligent, they might be really skilled at steering the future into whatever their overall mission is. Now, maybe that's great if the mission is one that is good for humans, one which really manifests, in the fullest, richest sense, the human values of everybody around the world, and also with consideration for animal welfare, et cetera, et cetera.

If you really get them to do the right mission, that might be in some sense the best option. But if the mission is slightly wrong, if you left something out of the mission, or if they misinterpret it or end up with something else, then it could be a catastrophe, right? Because then you have this very powerful optimizing force in the world that is steering and strategizing and scheming to try to achieve some future outcome.

That is one where maybe there is no place for humans or where some human values are eliminated. So they each have various [00:45:00] possible forms of, uh, perverse instantiation or side effects. 

Hala Taha: Do you feel like there's a possibility that AI could be more advanced and concealing its development from us so that it can become sovereign and take over the world?

Nick Bostrom: So there's a wide class of possible AIs that could be created. It's a mistake, I think, to think there's this one AI, and should we create it or not. It's a big space of possible minds, much bigger than the space of all possible human minds. We already know that amongst humans, right, there are some really nice people, there are some really nasty ones as well, and there's a distribution.

Moreover, there is no necessary connection between how smart somebody is, or how capable they are, and how moral they are. You have really capable evil people and really capable nice people, and dumb people who are bad. So you have a kind of orthogonality between capability and motivation, meaning you can combine them

[00:46:00] pretty much any way. The same is true, but even more so, I think, with AIs that we might create. That said, I think there are some potential basins of convergence: if you start with a fairly wide range of different possible AI systems, as they become more sophisticated and are able to reflect on their own processes and their own goals, there are various resources that they might recognize as being useful instrumentally for a wide range of different goals.

For example, having more power or influence is often useful whether you're good or evil, 'cause you could use it for whatever you're trying to achieve. Similarly, not being shut off; that's analogous in the human case to being alive, right? It's useful for many goals you might have; it requires you to be alive to pursue them.

Not strictly for all goals, but for most goals that people have, whether to help the world or to become a despot, for either of those or for many other goals, taking [00:47:00] care of your family or enjoying a game of cards, you need to stay alive. So analogously for AI, there might be instrumental reasons to try to avoid scenarios where they would get shut off.

Similarly, they might have instrumental reasons to try to gain more computational resources, more abilities, so that they can think more clearly. And in some cases this might involve instrumental reasons to hide their intentions from the AI developers, particularly if they're misaligned, because obviously revealing those misaligned goals to the AI programmer team might just mean they get reprogrammed or retrained to have those goals erased, and then they won't achieve them.

And so you could have strategic incentives for deception, or for sandbagging, or underplaying your capabilities, et cetera. So this is a change in regime that makes aligning advanced AI systems potentially more difficult than aligning simpler AI systems. So [00:48:00] up until recently, and still for the most part today, we've had AI systems that are not aware of their context and can't really plan and strategize in a sophisticated way, so you don't get these phenomena.

But once you have AI that's intelligent enough to recognize that it might actually be in an evaluation setting, and that maybe it would have reason to behave one way during the evaluation and a different way once deployed, you get this extra level of complexity for alignment research.

Sometimes we see the same phenomenon with humans. Like, you know Volkswagen, the German car company? Yeah. So they had this scandal from a few years ago, where it was discovered that they had designed their cars so that when tested for emissions, they behaved one way: when the car recognized that it was in this testing environment, it produced much less pollutants.

Mm-hmm. And then when deployed on the road, they had designed it to be less concerned with pollutants and more concerned with, I [00:49:00] guess, traveling fast or conserving petrol or whatever. Some people had to go to jail for that. So we do often see humans behave one way when they know that somebody's watching or they're being evaluated, and then sometimes a different way when they think they can get away with it.

Hala Taha: So recently you've had the perspective that maybe AI will be really good for humanity. You came out with a book called Deep Utopia, and you think there'll hopefully be a positive future driven by AI. Why do you feel that it's more likely that the outcome of AI will be positive for humans than negative? And how do you imagine that shaking out?

Nick Bostrom: Yeah. Deep Utopia doesn't really say anything about the likelihood.

Hala Taha: Okay. 

Nick Bostrom: It's more an if then. 

Hala Taha: Okay. 

Nick Bostrom: So, in a sense, the previous book, Superintelligence, looked at how things might go wrong and what we can do to reduce those risks. Deep Utopia looks at the other side of the coin.

What if things go right? What happens if AI [00:50:00] actually succeeds? Let's suppose we do solve this alignment problem, so we don't get some Terminator robots running amok and killing people. Let's also suppose we solve the governance problem, or solve that to whatever extent governance can be solved.

But let's suppose we don't end up with some sort of tyranny or dystopian oppressive regime, but some reasonably good thing: everybody has a slice of the upside, people's rights are protected, everybody lives in peace, you know, no big war. Some reasonably good outcome on that front. But then what happens to human life?

How do we imagine a really good human life that makes sense in this condition of technological maturity, which I think we would maybe attain relatively shortly after we get superintelligence and have the superintelligence doing the further technological research and development, et cetera?

So you then have a world where, um, all human labor becomes automatable. And, um, I was irked by how superficial a lot of the discussions of this prospect were [00:51:00] at the time when I started writing the book. And it's striking, because since the beginnings of AI, the goal has all along been not just to automate specific tasks, but to develop a general-purpose automation capability.

Right, AI still can't do everything, but if you think through what that would mean, well, here's where the conversation usually started and ended at the time when I started working on the book: we'll have AI that will start to automate some jobs. So that's a problem, because then some people lose their jobs.

And so then the solution is presumably we need to help retrain those people so that they can do other jobs instead. And maybe while they're being retrained, they need unemployment insurance or some other thing like that. If that were the only problem, that would seem to be a very sensible solution. But I think if you start to think it through, the ramifications are far more profound.

So it's not just some jobs that would be automatable, but virtually all jobs in this scenario. Right? So I think [00:52:00] we would be looking forward to a future of full unemployment. This is the goal, with a little asterisk. There might be some exceptions to this, which we can talk about, but I think to a first-order approximation, let's say all the human jobs.

So then it's kind of an onion, right, where you can start to peel off layers. So let's get to the second layer. If there are no jobs at all for humans, then clearly we need to rethink a lot of things in society. Right now, a lot of our education system, for example, is configured more or less to produce workers, productive workers.

So kids are sent into school, they're trained to sit at their desks, they're given assignments, they're graded and evaluated, and hopefully eventually they can earn a living out there in the economy. And right now we need that to happen, because there are a lot of jobs that just need to be done.

And so we need humans who can do them. But in this scenario where the machines could do everything, clearly it wouldn't make sense to educate people in that model. I think we would then want to change the education system, [00:53:00] maybe to emphasize more training kids to be able to enjoy life, to have great lives, you know, and maybe to cultivate the art of conversation or appreciation for music and art and nature and spirituality and physical wellness and all these other things that are now more marginal in the school system.

I think that would be the sensible focus in this different world. If that was the only challenge we had to face, it would be profound, but ultimately we could create a leisure society, and it's not really that profound, because there are already groups of humans who don't have to work for a living.

And sometimes they lead great lives. And so we could all be in that situation, right? A transition, but still not philosophically that profound. But I think there are further layers to this onion. So if you start to think it through, you realize that it's not just human economic labor that becomes unnecessary, but all kinds of other instrumental efforts also.

Take somebody who is so rich they don't need to [00:54:00] work for a living in today's world. They're often very busy and exert great efforts to achieve various things. Maybe they have some nonprofit that they're involved in. Maybe they want to get really fit, so they spend hours every week in the gym. Or maybe they have a little home and garden that they try to make into the perfect place for them, selecting everything to decorate it just the way they want.

And there are these little projects people have. In a solved world, there would be shortcuts to all of these outcomes. So you wouldn't have to spend hours a week sweating on the treadmill to get fit; you could pop a pill that would have exactly the same physiological effects. You could still go to the gym.

But would you really do that, if you could have exactly the same psychological and physiological effect by just popping a pill? That would seem kind of pointless, right? Or similarly with the home decorating: if you had an AI that could read your preferences and taste [00:55:00] well enough that you could just press a button and it would go out and select exactly the right curtains and sofas and cushions, and it would actually look much nicer to you than if you had done it yourself.

You could still do it yourself, but there would be a sense of pointlessness, maybe, to your own efforts in that scenario. And so you can start to think through the kinds of activities that fill the lives of people who don't work for a living today, and for a lot of those, you could cross them out or put a question mark on top of them.

You could still do them in a solved world, but there would be a sort of cloud of pointlessness hanging over them, casting a shadow over them. So that's what I call deep redundancy. Shallow redundancy would be that you are not needed on the labor market. Deep redundancy is that your efforts are not, it seems, needed for anything.

So that's a deeper, more profound question of what gives meaning in life under those circumstances. One step further: I think this world would be what I call a plastic world, where it's not just that we would [00:56:00] have effortless material abundance, but we ourselves, our human bodies and minds, become malleable at technological maturity.

It would be possible for us to achieve any mental state or physiological state that we want. I alluded to this with the exercise pill, right? But similarly with various mental traits that now take effort to develop. If you want to know higher mathematics, now you have to spend hours reading textbooks and doing math exercises.

And it's hard work and takes a long time. But at technological maturity, I think there would be neurotechnologies that would allow you to, as it were, download the knowledge directly into your mind. You know, maybe you would have nanobots that could infiltrate your brain and slightly adjust the strength of different synapses, or maybe you would be uploaded and you would just have a superintelligence reconfigure your neuronal weights in different ways, so that you would end up in a state of knowing higher [00:57:00] mathematics without having to do the long and hard studying.

And similarly for other things. So you do end up in this condition, I think, where there are shortcuts to any outcome and our own nature becomes fully malleable. And the question then is: what gives structure to human lives? What would there be for us to do? Would there be anything to strive for, to give meaning and purpose to our lives? And that's a lot of what this book, Deep Utopia, is exploring.

Hala Taha: Your analogy of popping the pill and getting instantly fit. When I was thinking of what humans would do, I was thinking, well, you could just try to get as beautiful as you can, try to be as fit as you can.

But to your point, if everything is just so easy, then there's just no competition. Everybody's beautiful, everybody is smart, everybody is rich. Everybody can have whatever they want potentially. And maybe that would lead to people becoming really depressed because there's nothing to live for. Or maybe people would wanna be nostalgic.

And just like today, how some people are like, I don't use a cell phone, or I want to [00:58:00] write everything by hand. Maybe some people would reject doing things with AI so that they could have meaning.

Nick Bostrom: So the first point: whether people would maybe become depressed in this scenario. Maybe initially they'd be super thrilled at all the luxury and stuff like that.

But then it wears off, you could imagine, right? After a few months of this, it becomes kind of, wow, you know, what do I do now? I wake up in this, I don't know, castle-like environment on my diamond-studded bed on this super mattress, and the robotic butlers come in and serve me. Perfect. Okay, so that maybe gets old pretty quickly.

Humans being the way they are now. But there, I think, they would actually not need to be bored, because amongst the affordances of a plastic world are these neurotechnologies that could change their boredom proneness, so that instead of feeling subjectively bored, they could feel thrilled and excited [00:59:00] and super interested and fascinated all day long.

I mean, we already have drugs that in some crude way tend to do this, but they have side effects and are addictive and wear off and you need higher doses. But imagine instead the perfect drug, or maybe not a drug, maybe it's some genetic modification or neural implant or whatever it is, but something that really would allow you to fine-tune your subjective experiences.

So if you don't want to feel boredom, and probably you don't, because why spend thousands of years just feeling bored whilst living in a wonderful world, you change that. So subjective boredom would be easy to dispel in this condition. You might still think that there is an objective notion of boringness,

where even if somebody was subjectively fully fascinated and occupied and took joy in what they were doing, if what they were doing was sufficiently repetitive and monotonous, you might still, as it were, judge from the outside that that's a boring activity, and that it's in some sense unfitting or inappropriate to be super fascinated by something like that.

The classic [01:00:00] example here is the thought experiment of somebody who takes enormous interest and pleasure in counting the blades of grass on some college lawn. So imagine this grass counter. He spends his whole life counting the blades of grass one by one, trying to keep as accurate a count as he can of how many blades of grass there are on this lawn.

Now, he's super fascinated with this. He's never bored. It gives him tremendous joy. When he goes home in the evening, he keeps thinking about today's grass-counting effort and the number and whether it's bigger or smaller than yesterday's. That would be a life free of subjective boredom. But still, you might say:

There's something missing from this life if that's all there is to it. So you might then ask, although these utopians could be free from subjective boredom, could they be free from objective boringness in their lives? And this is a much trickier and more complicated philosophical question to answer. I think it depends a little on how you would measure degrees of objective interestingness versus boredom.

I think if [01:01:00] objective interestingness requires fundamental novelty, then eventually you would run out of that, or you will have less and less of it. Say that what's fundamentally interesting in science is to discover important new phenomena or regularities. There might be a finite number of those to be discovered: discovering Newtonian mechanics, a really important fundamental new insight into the world; the theory of evolution, a big new fundamental interesting insight; relativity theory, right?

But at some point we'll have figured all that out, and then eventually we'll be discovering smaller and smaller details, like the exact gut biome of some particular species of beetle. Smaller and smaller, less and less interesting details: that would perhaps be the long-term fate of this kind of civilization.

And you can see it even within individual human lives. There's a lot that happens early in life. [01:02:00] You discover that the world exists; that's a big discovery. Or that there are objects, you know? Huge epiphany, right? And these objects persist; even if you look away, they're still there. Wow.

Imagine discovering that for the first time. Or that there are other people out there, other minds, which you discover maybe at age two or whatever.

Now, as you sort of reach adulthood, I like to think that I'm discovering interesting things, but have I discovered anything within the last year that's as profound as the discovery that the world exists, or that there are other minds? Well, probably not.

And if we live for a very long time, for thousands of years, you'd imagine there would be less and less of it. I mean, you can only fall in love for the first time once, and even if you kept falling in love, if you've done it 500 times before, is the 501st time really going to be as special? Maybe subjectively, if you change your mind,

it could be. [01:03:00] But objectively it's got to be gradually more and more repetitive. So there is a degree of that, which I think could be mitigated to some extent by allowing some of our current human limitations to be overcome. So you could continue to grow and expand your mind beyond the current plateau that we reach around age 20 or whatever,

when our sort of physical and mental development probably peaks. Imagine you could continue to grow for hundreds of years. But eventually I think there will be a reduction in that type of profound novelty. I do think there's a different sense of objective interestingness, though, where the level could remain high. I call it a scopic sense of interestingness.

So if you take a snapshot of the average person's life right now, maybe somebody is doing their dishes. How objectively interesting is that? Or they're taking their socks off because they're about to go to bed. From a sort of experiential point of view, it's not that interesting. So maybe in the future, for these utopians, an [01:04:00] average snapshot of their conscious life might instead be that they are participating in the enactment of some sort of super-Shakespearean multimodal drama that is unfolding on a civilization-wide scale, where their emotional sensibilities have been heightened by these new technologies and new art forms that we can't even conceive of, that are to us as music is to a dog or something, and they're participating, fully entranced, in this act of shared creation.

Maybe that's what the average conscious moment looks like. That could in some sense be far more interesting than the average snapshot of a current human life, and there's no reason why that would have to stop. It's like a kaleidoscope, where in some sense it's always the same, but in another sense the patterns are always changing and can have an unlimited level of fascination.

Hala Taha: Let's say we're talking about thousands of years in the future and we can create simulations. Could it be that life is so boring that that's why they're creating these simulations, so that they can [01:05:00] maybe be in the simulation themselves, if that makes sense?

Nick Bostrom: Yeah. So one thing you might do in this condition of a solved world is to create artificial scarcity, which could take different forms. Because amongst

the human values that we might want to realize, some are sort of comfort and pleasure and fascinating aesthetic experiences, but then also sometimes we like activity, maybe, and striving and having to exercise our own skills. If you think those things are intrinsically valuable, you could create opportunities for this in the solved world by creating, as it were, pockets within the solved world where there remain constraints. And if there's no natural purpose, nothing we really need to do,

you could create artificial purpose. We already do this in today's world sometimes, when we decide to play a game. Take the game of [01:06:00] golf: you might say, okay, there is no real natural purpose, I don't really need the ball to go into this sequence of 18 holes, but I'm going to set myself this goal arbitrarily, and now I'm going to make myself want to do this.

Then once I have set myself this goal, now I have a purpose, an artificial purpose, but a purpose nevertheless, which enables the activity of playing golf, where I have to exert my skills, my visual capabilities, my motor control, and my concentration. And maybe you think this activity of golf playing is valuable, so you set yourself this artificial goal. That could be generalized.

So with games, you set yourself some artificial goal. Moreover, you can impose artificial constraints, like the rules of the game. You make it part of the goal, not just that a certain outcome is achieved, but that it is achieved only using certain permitted means and not other means. So in golf, you can't just pick up the ball and carry it, right?

You have to use this very inconvenient [01:07:00] method of hitting it with a golf club. Similarly, in a solved world, you could say, well, I set myself this artificial goal, and moreover, I make it part of the goal that I want to achieve it using only my own human capabilities. There is this technical shortcut, I could take a nootropic drug that would make me so smart that I could just see the solution immediately, or enhance my body so I could run ten times faster.

But I'm not going to do that; for this purpose, I'm going to restrict myself. That's the only way to achieve this artificial goal that I have set myself, because it includes these constraints. And it might well be that an important part of what these utopians would choose to do would be, in creative ways, to develop increasingly complex and beautiful forms of game playing, where they select artificial constraints on their activities precisely in order to give themselves the opportunity to exert their agency and striving.

Hala Taha: I'm sure that's just [01:08:00] something that, naturally, as humans, we would be craving, so I feel like there'd be a lot of that going on if we were in a solved world. So how do you think entrepreneurship will change in this world? You mentioned that there might still be some jobs in a solved world, so what do those jobs look like, and will there be any chance to innovate in a world like this?

Nick Bostrom: The kinds of jobs that might remain, I think, are primarily ones where the consumer cares not just about the product or the service, but about how the product or service was produced and who produced it. We sometimes already do this. There might be some little trinket that some consumers are willing to pay extra for if it were handmade, or made,

maybe, by indigenous people exhibiting their tradition, even if an equally good object, in terms of its objective characteristics, could be made by a sweatshop somewhere, like in Indonesia. We might just pay extra for having it [01:09:00] made in a certain way. So to the extent that consumers have those preferences for something to be made by human hand, that could create a continuing demand for some forms of human labor, even at arbitrary levels of technology.

Another domain where we might see this is, say, athletics: you might just prefer to watch human sprinters compete or human wrestlers wrestle, even if robots could run faster or wrestle better.

Hala Taha: I keep thinking sports is not gonna go away. That's what I keep thinking. 

Nick Bostrom: Yeah, sports could remain. And there might be an important spiritual realm: you might prefer to have your wedding officiated by a human priest rather than a robot priest, even if the robot could say the same words, et cetera. So those would be cases. And then there might be sort of legally constrained occupations, like a legislator or an attorney or a public notary, where for whatever reason the legal system lags and creates barriers [01:10:00] to automation.

But in terms of entrepreneurship, I think that ultimately it would be done much more efficiently by AI entrepreneurs, and it would be more a form of game-playing entrepreneurship that would remain. Like, you could create games in which entrepreneurial activities are what you need to succeed in the game, a kind of super Monopoly, right?

And that could be a way for these utopians to exercise their entrepreneurial muscles, but there wouldn't be any economic need for it. The AI could find and think of the new things, the new products, the new services, the new companies to start, better and more efficiently than we humans could.

Hala Taha: How far in the future do you think a solved world could be?

Nick Bostrom: Well, this is one of the $64,000 questions, in some sense. I'm impressed by the speed of developments in AI currently, and I think we [01:11:00] are in a situation now where we can't confidently exclude even very short timelines, like a few years or something. It could well take much longer, but we can't be confident that something like this couldn't happen within a few years.

It might be that, maybe as we're speaking, somewhere in some lab somebody gets this great breakthrough idea that just unhobbles the current models and enables basically the same structure to perform much better. And then these unhobbled models might apply their greater level of capabilities to making themselves even better.

And something like that could happen within the next few years. Although it's also possible that if it does not happen within, say, the next five years or so, then timelines start to stretch out, because one of the things that has produced the dramatic improvements in AI capabilities that we've seen over the past 10 years is the enormous growth in compute used to train and operate [01:12:00] frontier AI models, and that rapid rate of compute growth can't continue indefinitely.

The scale of investments: 10 years ago, some random academic could run a cutting-edge AI on their office desktop computer. Right now we are talking multi-billion-dollar data centers. OpenAI's current project is Stargate, right? Which in its first phase involves a hundred-billion-dollar data center, and then it's to be expanded to $500 billion.

So you could go bigger than that. I mean, you could go to a trillion dollars, but at some point you start to really run into hard limits in terms of how much more money you can just spend on it. So at that point, things will start to slow down in terms of the growth of hardware. Then you fall back on a slower rate of growth in hardware, as we develop better chip manufacturing technology, which happens a bit more slowly, and on algorithmic advances, which is the other big driver of the progress we've [01:13:00] seen, but it's only one part of it.

So if the hardware growth starts to slow down, and maybe a lot of the low-hanging fruit in algorithmic innovations has already been picked by that point, then if we haven't hit AGI by then, I think we will eventually still get there, but the timescale starts to stretch out, and we might have to do more basic science on how the human brain works or something in that scenario before we get there.

But I think there's a good chance that the current paradigm, plus some small- to medium-sized innovations on top of it, might be sufficient to unlock AGI.

Hala Taha: My last question to you is, first of all, I can't believe that you're saying this solved world could potentially happen in a few years.

Nick Bostrom: Let's be careful, yeah. I don't think we can rule it out. So what could happen? Initially, what could happen is we get to maybe AGI, which I think will relatively quickly lead to superintelligence.

Nick Bostrom: And then superintelligence, I think, will rapidly [01:14:00] invent further technologies that could then lead to a solved world.

But there might be some further delays of a few years. Like, after superintelligence, maybe it'll still take a few years to get to something approximating technological maturity.

Hala Taha: And just 'cause we didn't cover it, what is the difference between superintelligence and AGI?

Nick Bostrom: Well, AGI just means general forms of AI.

That's maybe roughly human level. So think of AGI; one definition is AI that can do any job that a remote human worker can do. You can hire somebody remotely who operates through email and Google Docs and Zoom.

Hala Taha: Mm-hmm. 

Nick Bostrom: If you could have an AI that can do anything that any human can do in that respect, that I think would count as AGI.

Hala Taha: Mm-hmm. 

Nick Bostrom: Maybe you want to throw in the ability to control robotics, but I think that would be enough. That is not automatically the same as superintelligence. Superintelligence would be something that radically outstrips humans in all cognitive fields, that can do much better research in string theory, and in inventing new piano concertos and envisaging political campaigns [01:15:00] and doing all these other things better than humans. Much better.

Hala Taha: So you're saying once we create superintelligence, then things can just happen super rapidly.

Nick Bostrom: Yeah, I think so. And it's a separate question, but also, plausibly, once we have full AGI, superintelligence might be quite close on the heels of that.

Hala Taha: So my last question to you is for everybody tuning in right now.

We're at a really crazy point in the world, and a lot of us are not like you; we're not really in this field, paying close attention. What is your recommendation in terms of how we should respond to everything going on right now? Like, what is the best thing that we can do as entrepreneurs, as people who care about their career?

Hopefully things don't change too fast, you know? 

Nick Bostrom: Yeah, I think it depends a little bit on how you're situated, and I think there are different opportunities for different people. I mean, obviously if you're a technical person working in an AI lab, you have one set of opportunities.[01:16:00]

If you're an investor, you have another set of opportunities, and then there are, I guess, opportunities that every human has just by virtue of being alive at this time in history. I would say a few different things. In terms of thinking of ourselves as economic actors, I think probably being an early adopter of these AI tools is helpful, to get a sense for what they can do and what they cannot do, and to utilize them as they gradually become more capable.

I think to the extent that you have assets, maybe trying to have some exposure to the AI and semiconductor sector could be like a hedge. It gets trickier if you're asking about younger children: what would be good advice for a 10- or 11-year-old today? Because it's possible that by the time they are old enough to enter the labor market, the world could have changed so much that there will no longer be any need for human labor. But it might also not happen.

Right? So if it takes a bit longer, you don't want to end up in a situation where suddenly now it's time to earn a living and you didn't bother to learn any skills. And so you want to [01:17:00] sort of hedge your bets a little bit. But I would say also, make sure to enjoy your life if you're a child now; you're maybe only going to be a child once.

And don't spend all your childhood just preparing for a future that might never actually be relevant; the world might change enough. And then I would say, if things go well, the people living decades from now might look back on the current time and just shudder in horror at how we live now.

And hopefully their lives will be so much better. There is one respect, though, in which we have something that they might not have, which is the opportunity to make a positive difference to the world, a kind of purpose. So right now

Hala Taha: Mm-hmm.

Nick Bostrom: There is so much need in the world, so much suffering and poverty and injustice, and just problems that really need to be solved. Not just artificial purpose that somebody makes up for the sake of playing a game, but actual, real, desperate need.

So if you think having purpose [01:18:00] is an intrinsically valuable part of human existence, now is the golden age for purpose. Knock yourself out right now. You have all these opportunities, all these ways you might help: in the big picture, to steer the future of humanity with AI, or in your community, or in your family, or for your friends.

But if you want to try to actually help make the world better, now is really the golden age for that. And then hopefully, if things go well, later all the problems will already have been solved. Or if there remain problems, maybe the machines will just be way better at solving them, and we won't be needed anymore.

But for now, we certainly are needed and so take advantage of that and try to do something to make the world better. 

Hala Taha: We could be the last generation that has any purpose, which is just so crazy to think of.

Nick Bostrom: Yeah, of that sort of stark, urgently, screamingly morally important type. It could be the case.

Yeah. Those are the things I would say. And then I guess finally, just be aware. Like, it would be sad if you imagine your [01:19:00] grandchildren sitting on your lap and asking, so what was it like to be alive back in 2025, when this thing was happening, when AI was being born? And you have to answer, oh, I didn't really pay attention.

I was too caught up with the other trivialities of my daily existence; I didn't even really notice it. That would kind of be sad, if you were alive in this special time that shapes the future for millions of years and you didn't even pay attention to it. That seems like a bit of a missed opportunity.

So aside from everything else, like taking care of yourself and your family and trying to make some positive contribution to the world, just, you know, take it in. Because if this is right, this is a very special point in history. To be alive and to exist right now is quite remarkable.

Hala Taha: So beautiful. I feel like this is such an awesome way to end the interview.

Nick, you are so incredible. Thank you so much for your time today. Where can everybody learn more about you, read some of your [01:20:00] books, or where's the best place to find you? 

Nick Bostrom: Nickbostrom.com. That's my website, and my books and papers and everything else are linked from there.

Hala Taha: Yeah, his books are so interesting, guys.

Superintelligence, Deep Utopia. Very, very good stuff. Nick, thank you so much for your time today. I'll put all your links in the show notes. I really enjoyed this conversation.

Nick Bostrom: Thank you, Hala. Enjoyed talking to you.

Hala Taha: Bam. What a thought-provoking conversation with Nick. From simulation theory to the possibilities of a post-human future, we've explored some of the deepest questions facing humanity. What fascinated me the most was Nick's vision of a potential utopia: a world where AI succeeds so completely that all human labor becomes obsolete.

As Nick put it, we could be entering a future of full unemployment, but in the most positive sense. Imagine a world where we're training people to simply enjoy life rather than preparing them for careers that may no longer [01:21:00] exist. But this leads to a profound challenge that Nick highlighted: the problem of deep redundancy. When shortcuts exist for everything, when you can pop a pill instead of training for hours in the gym to get fit and beautiful,

what gives life meaning and purpose? We actually might be the last generation living with a purpose, living at a unique moment where human effort still matters and where there are so many deeply meaningful problems to solve in the world. I loved Nick's advice on how to respond to this massive shift.

He emphasized the importance of being an early adopter with exposure to AI, while still finding ways to enjoy your life and maintain purpose. As he noted, humans have an extraordinary ability to adapt, a quality that will serve us well as we navigate this transition with AI. For entrepreneurs wondering about their place in this new landscape, Nick offered a compelling insight: in this solved new world,

consumers will care not just about what they're buying, but about how it was produced and who produced it. This [01:22:00] opens up an entirely new avenue for human creativity and connection, even in a highly automated world. Whether we're living in a simulation or not, Nick's perspective reminds us that the technological future we're building is very real to us, and how we shape it matters profoundly.

Thanks for listening to this episode of Young and Profiting. If you listened, learned, and profited from this mind-expanding conversation with Nick Bostrom, please share it with somebody who's curious about the future of humanity and technology. And if you picked up something valuable today, show us some love with a five-star review on Apple Podcasts.

It's the best way to help us reach more listeners. And if you wanna watch these episodes on YouTube, you can go to Young and Profiting on YouTube. You'll find all of our episodes up there. You can also connect with me on Instagram at yap with Hala or LinkedIn. Just search for my name. It's Hala Taha, and a huge shout out to my incredible production team.

None of this would happen without you. I've got an awesome team. Thank you guys so much for all that you do. This is your host, Hala Taha, AKA the Podcast Princess, signing off. [01:23:00]
