Stephen Wolfram: AI, ChatGPT, and the Computational Nature of Reality | E284

Sponsored By:

Shopify – Sign up for a one-dollar-per-month trial period at

Indeed – Get a $75 job credit at

Airbnb – Your home might be worth more than you think. Find out how much at

Porkbun – Get your .bio domain and link in bio bundle for just $5 from Porkbun at

Yahoo Finance – For comprehensive financial news and analysis, visit


[00:00:00] Hala Taha: Young and Profiters, welcome to the show. We are going to be talking a lot more about AI in 2024 because it's such an important topic; it's changing the world. Last year, I had a couple of conversations. We had William talking about AI. We had Mo Gawdat, and I loved that episode; I highly recommend you guys check out the Mo Gawdat episode. But nonetheless, I'm going to be interviewing a lot more AI folks, and first up on the docket is Dr. Stephen Wolfram. He's been focused on AI and computational thinking for the past decade. Dr. Stephen Wolfram is a world-renowned computer scientist, mathematician, theoretical physicist, and the founder of Wolfram Research, as well as the inventor of the Wolfram computational language.

A young prodigy, he published his first paper at 15 and obtained his Ph.D. in physics at 20, and he was also the youngest recipient of the MacArthur Genius Grant. In addition to this, Dr. Wolfram is the author of several books, including a recent one on AI entitled What Is ChatGPT Doing?,

which we'll discuss today. So we've got a lot of ground to cover with Stephen. We're going to talk about: What is AI? What is computational thinking? How is AI similar to nature? What is going on in the background of ChatGPT, and how does it actually work? And what does he think the future of AI looks like for jobs and for humanity overall?

We've got so much to cover. I think he's going to blow your mind. Stephen, welcome to Young and Profiting Podcast. 

[00:02:36] Hala Taha: I am so excited for today's interview. We love the topic of AI, and I wanted to talk a little bit about your childhood before we get into the meat and potatoes of today's interview. So from my understanding, you started as a particle physicist at a very young age.

You even started publishing scholarly papers as young as 15 years old. So talk to us about how you first got interested in science, and what you were like as a kid.

[00:03:00] Stephen Wolfram: Well, let's see. I grew up in England in the 1960s, when space was the thing of the future, which it is again now, but wasn't for 50 years. I was interested in those kinds of things.

And that got me interested in how things like spacecraft work, and that got me interested in physics. So I started learning about physics, and it so happened that the early 1970s were a time when lots of things were happening in particle physics: lots of new particles getting discovered, lots of fast progress, and so on.

And so I got involved in that. It's cool to be involved in fields that are in their golden age of expansion, which particle physics was at the time. So that was how I got into these things. You know, it's funny, you mentioned AI, and I realized that when I was a kid, machines that think were right around the corner, just as colonization of Mars was right around the corner, too.

It's an interesting thing to see what actually happens over a 50-year span and what doesn't.

[00:03:58] Hala Taha: It's so crazy to think how much has changed over the last 50 years. 

[00:04:03] Stephen Wolfram: And how much has not. In science, for example, I have just been finishing some projects that I started basically 50 years ago. There's a big science question that I started asking when I was 12 years old, about how a thing that people have now studied for 150 years works: the second law of thermodynamics. I was interested in that when I was 12 years old.

I finally, I think, figured that out. I published a book about it last year. It's kind of nice to see that one can tie up these things, but it's also a little bit shocking how slowly big ideas move. For example, the neural nets that everybody's so excited about now in AI were invented in 1943.

And the original conception of them is not that different from what people use today, except that now we have computers that run billions of times faster than what was imagined back in the 1950s and so on. It's interesting: occasionally things happen very quickly, but oftentimes it's shocking how slowly things happen and how long it takes for the world to absorb ideas.

Sometimes there'll be an idea and finally some technology will make it possible to execute that idea in a way that wasn't there before. Sometimes there's an idea and it's been hanging out for a long time. And people just ignored it for one reason or another. And I think some of the things that are happening with AI today probably could have happened a bit earlier.

Some things have depended on the building of a big technology stack, but it's always interesting to see, to me at least.

[00:05:45] Hala Taha: It's so fascinating. This actually dovetails perfectly into my next question about your first experiences with AI. Now everybody knows what AI is, but most of us really started to understand it and use this term maybe five years ago, max. But you've been studying this for decades, even before people probably called it AI.

So can you talk to us about the beginnings of how it all started? 

[00:06:10] Stephen Wolfram: AI predates me. That term was invented in 1956. You know, it's funny, because computers were invented basically in the late 1940s, and they started to become things that people had seen by the beginning of the 1960s.

I first saw a computer when I was 10 years old, which was 1969-ish. And at the time, a computer was a very big thing, tended by people in white coats and so on. I first got my hands on a computer in 1972, and that was a computer that was the size of a large desk, programmed with paper tape and so on.

It was rather primitive by today's standards, but the elements were all there by that time. It's true that most people had not seen a computer until probably the beginning of the 1980s or so, which was when PCs and things like that started to come out. But from the very first moments when electronic computers came on the scene, people sort of assumed that computers would automate thought, just as bulldozers and forklift trucks had automated mechanical work.

"Giant electronic brains" was a typical characterization of computers in the 1950s. So this idea that one would automate thought was a very early idea. Now, the question was: how hard was it going to be to do that? And people in the 1950s and the beginning of the 1960s were like, this is going to be easy.

You know, now we have these computers; it's going to be easy to replicate what brains do. In fact, a good example, a famous incident from the beginning of the 1960s: it was during the Cold War, and people were worried about U.S.–Soviet communication and so on. They said, well, maybe the people are in a room, there's some interpreter, and the interpreter is going to not translate things correctly.

So let's not use a human interpreter; let's teach a machine to do that translation. That was the beginning of the 1960s. And of course machine translation, which is now, finally, in the 2020s, pretty good, took an extra 60 years to actually happen. People just didn't have the intuition about what was going to be hard and what wasn't.

So the term AI was in the air already, very much by the 1960s. I'm sure when I was a kid I read books about the future in which AI was a thing, and it was certainly in movies and things like that. And then there's this question of, okay, so how would we get computers to do thinking-like things?

When I was a kid, I was interested in taking the knowledge of the world and somehow cataloging it and so on. I don't know why I got interested in that, but it's something I've been interested in for a long time. And so I started thinking: how would we take the knowledge of the world and make it automatic to be able to answer questions based on the knowledge that our civilization has accumulated?

So I started building things along those lines, and I started building a whole technology stack, beginning in the late 1970s, and, well, now it's turned into a big thing that lots of people use. The first idea there was: let's be able to compute things like math and so on. Let's take what has been something that humans have to do and make it automatic, have computers do it.

People had said for a while: when computers can do calculus, then we'll know that they're intelligent. Things I built solved that; by the mid-1980s, that problem was pretty well solved. And then people said, well, it's just engineering, it's not really a computer being intelligent. I would agree with that.

But then at the very beginning of the 1980s, when I was working on automating things like mathematical computation, I got curious about the more general problem of doing the kinds of things that we humans do, like matching patterns: we see an image with a bunch of pixels in it, and we say, that's a picture of a cat, or that's a picture of a dog.

And I got interested in this question of how we do that kind of pattern matching, and started trying to figure out how to make it work. I knew about neural nets, and I started trying, this must have been 1980 or '81, something like that, to get neural nets to do things like that. But they didn't work at all at the time.

Hopeless. As it turns out, you know, you say things happen quickly, and I say things sometimes happen very slowly. I was just working on something that is kind of a potential new direction for how neural nets and things like that might work. And I realized, you know, I worked on this once before, and I pulled out this paper that I wrote in 1985 that has the same basic idea that I was very proud of myself for having figured out just last week.

And it's like, well, I started on it in 1985. Well, now I understand a bunch more and we have much more powerful computers; maybe I can make this idea work. So there were things that people thought would be hard for computers, like doing calculus and so on. We crushed that, so to speak, a long time ago.

Then there were things that are super easy for people, like telling that's a cat, that's a dog, which weren't solved. I wasn't involved in the solving of that; that's something that people worked on for a long time, and nobody thought it was going to work. And then suddenly in 2011, sort of through a mistake, some people who'd been working on this for a long time left a computer training to tell things like cats from dogs for a month without paying attention to it. They came back.

They didn't think anything exciting would have happened, and by golly, it had worked. That's what started the current enthusiasm about neural nets and deep learning and so on. And when ChatGPT came out in late 2022, again, the people who had been working on it didn't know it was going to work. We had worked on previous kinds of language models, things that try to do things like predict what the next word will be in a sentence, those sorts of things.

And they were really pretty crummy. And suddenly, for reasons that we still don't understand, we kind of got above this threshold where it's like, yes, this is pretty human-like. It's not clear what caused that threshold. In our human languages, for example, we might have, I don't know, 40,000 words that are common, in most languages, English being an example.

And that number of words is probably somehow related to how big an artificial brain you need to be able to deal with language in a reasonable way. And, you know, if our brains were bigger, maybe we would routinely have languages with 200,000 words in them. We don't know. Maybe it's this kind of match between what we can do with an artificial neural network versus what our human biological neural nets manage to do. We managed to reach enough of a match that people say, by golly, the thing seems to be doing the kinds of things that we humans do. But what's ended up happening is this: there's what we humans can quickly do, like tell a cat from a dog or figure out what the next word in a sentence is likely to be.

And then there are the things that we humans have actually found really hard to do, like solve this math problem, or figure out this thing in science, or do this kind of simulation of what happens in the natural world. Those are things that the unaided brain doesn't manage to do very well. But the big thing that's happened in the last 300 years or so is that we built a bunch of formalization of the world: first with things like logic, back in antiquity,

And then with math, and most recently with computation, where we're kind of setting up things so that we can talk about things in a more structured way than just the way that we think about them off the top of our heads, so to speak. 

[00:14:16] Hala Taha: That's so interesting, and I know that you work on something called computational thinking.

And I think what you're saying now really relates to that. So help us understand the Wolfram project and computational thinking, and how they relate to the fact that we humans need to formalize and organize things like mathematics and logic. What's the history behind that? Why do we need to do that as humans?

And then how does it relate to computational thinking in the future? 

[00:14:41] Stephen Wolfram: There are things one can immediately figure out, where one just sort of intuitively knows: oh, that's a cat, that's a dog, whatever. Then there are things where you have to go through a process of working out what's true, or working out how to construct this or that thing.

When you're going through that process, you've got to have solid bricks to start building that tower. So what are those bricks going to be made of? Well, you have to have something that has definite structure. For example, back in antiquity when logic got invented, it was kind of like: well, you can think vaguely, oh, that sentence sounds kind of right, or you can say, wait a minute, if one of those things is true, then this or that has to be true, et cetera, et cetera.

You've got some structured way to think about things. And then in the 1600s, math became sort of a popular way to think about the world. You could say, okay, we're looking at a planet that goes around the sun in roughly an ellipse, but let's put math into that, and then we have this way to actually compute what's going to happen.

So for about 300 years, this idea that math is going to explain how the world works at some level was kind of a dominant theme. That worked pretty well in physics. It worked pretty terribly in things like biology, in social sciences, and so on; people imagined there might be a social physics of how society works, and that never really panned out. So there were places where math had worked, and it gave us a lot of modern engineering and so on, and there were cases where it hadn't really worked. I got pretty interested in this at the beginning of the 1980s: how do you formalize thinking about the world in a way that goes beyond what math provides, things like calculus and so on?

What I realized is that you can just think about definite rules that describe how things work, and those rules are stated more in terms of, oh, you have this arrangement of black and white cells and then this happens, and so on. They're not things that you can necessarily write in mathematical terms, in terms of multiplications and integrals and things like that.

And so, as a matter of science, I got interested in: what do these simple programs, these systems that you can describe with rules, typically do? What one might have assumed is that if a program is simple enough, it's going to just do simple things. This turns out not to be true.

Big surprise, to me at least, and I think to everybody else as well. It took people a few decades to absorb this point; it took me a solid bunch of years to absorb it. You just do these computer experiments, and you find out: yes, you use a simple rule, and no, it does a complicated thing. That turns out to be pretty interesting if you want to understand how nature works, because it seems like that's the secret nature uses to make a lot of the complicated stuff that we see: the same phenomenon of simple rules, complicated behavior.
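The "simple rules, complicated behavior" phenomenon Wolfram describes can be reproduced in a few lines of code. Below is a minimal sketch of an elementary cellular automaton; rule 30, used here, is one of the rules Wolfram famously studied. Each cell's next color depends only on itself and its two neighbors, yet the output pattern is intricate. The grid sizes and ASCII rendering are arbitrary choices for illustration.

```python
# "Arrangement of black and white cells, and then this happens":
# an elementary cellular automaton. The rule number's binary digits
# give the next cell value for each of the 8 possible neighborhoods.
def step(cells, rule=30):
    """Apply one cellular-automaton step to a row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((rule >> pattern) & 1)              # look up that bit of the rule
    return out

def run(width=31, steps=15, rule=30):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

for row in run():
    print("".join("#" if c else "." for c in row))
```

Running this prints a triangular pattern whose interior looks effectively random, despite the rule fitting in a single byte.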

So that turned into a whole big direction, a new understanding of how science works. I wrote this big book back in 2002 called A New Kind of Science; well, its title kind of says what it is. That's one branch: understanding the world in terms of computational rules.

Another thing has to do with taking the things that we normally think about and talk about, whether that's how far it is from one city to another, or how we remove this thing from this image, or something like that, and thinking about them in a structured, computational way. That has turned into a big enterprise in my life: building our computational language, this thing now called the Wolfram Language, which powers a lot of research and development, and also lots of actual practical systems in the world.

Although when you are interacting with those systems, you don't see what's inside them, so to speak. The idea is to make a language for describing things in the world, which might be, you know, a city: both the concept of a city and the actuality of the couple hundred thousand cities that exist in the world, where they are, what their populations are, lots of other data about them, and to be able to compute things about things in the world.

And so that's been a big effort, building up that computational language. The thing that's exciting, that we're on the cusp of, I suppose, is this: for the last 300 years, for people who study things like science, it's been, okay, to make this science really work, you have to make it somehow mathematical.

Well, now the new way to make science is to make it computational. And so for all these different fields, call them X, you start seeing the computational-X field come into existence. I suppose one of my big life missions has been to provide the language and notation for making computational X possible, for all X.

It's a similar mission to what people did maybe 500 years ago, when they invented mathematical notation. I mean, there was a time when, if you wanted to talk about math, it was all in terms of just regular words, at the time in Latin. Then people invented things like plus signs and equals signs and so on, and streamlined the way of talking about math.

And that's what led to, for example, algebra, and then calculus, and then all the modern mathematical science that we have. Similarly, what I've been trying to do over the last 40 years or so is build a computational language: a notation for computation, a way of talking about things computationally that lets one build computational X for all X.

One of the great things that happens when you make things computational is that not only do you have a clearer way to describe what you're talking about, but your computer can also help you figure it out. And so you get this superpower: as soon as you can express yourself computationally, you tap into the superpower of actually being able to compute things.

And that's amazingly powerful. When I was a kid, as I say, in the 1970s, physics was popping at the time, because various new methods had been invented, not related to computers.

At this time, all the computational-X fields are just starting to really hop, and it's starting to be possible to do really, really interesting things. That's going to be an area of tremendous growth in the next however many years.

[00:21:31] Hala Taha: I have a few follow-up questions to that. You say that computational thinking is another layer in human evolution, so I want to understand why you feel it's going to help humans evolve. I'm also curious to understand the practical ways that you're using the Wolfram Language, and how it relates to AI, if it does at all.

[00:21:51] Stephen Wolfram: Let's take the second thing first. Wolfram Language is about representing the world in a sort of precise computational way. It also happens to make use of a bunch of AI, but let's put that aside. What something like an LLM, like ChatGPT, does is make up pieces of language.

If we have a sentence like "the cat sat on the blank," what it will have done is read a billion webpages and set itself up so that it knows the most common next word; chances are, that's "mat." So let's write down "mat." The big surprise is that it doesn't just do simple things like that: having built up this structure from reading all these webpages, it can write plausible sentences. Those sentences sort of sound like they make sense; they're kind of typical of what you might read. They might or might not actually have anything to do with reality in the world, so to speak.
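The "most common next word" idea can be sketched with plain counting. This toy bigram model is not how an LLM actually stores what it has read (real models use neural nets over tokens, as discussed later in the conversation), but it shows the prediction step in miniature; the tiny corpus here is made up for illustration.

```python
# Toy next-word prediction: read text, count which word follows which,
# then predict the most common follower. A cartoon of what an LLM does
# at a vastly larger scale and with far more generalization.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the mat . "
    "the cat sat on the floor ."
).split()

# For each word, count the words observed immediately after it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most common word observed after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))   # "on" is always followed by "the" in this corpus
```

A real model doesn't just look up counts: as the discussion below explains, it extrapolates to word sequences it has never seen, which is where the surprise lies.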

That's working kind of the way humans immediately think about things. Then there's the separate whole idea of formalized knowledge, which is the thing that led to modern science and so on. That's a different branch from the things humans can just quickly and naturally do. So, in a sense, the big contribution of Wolfram Language right now, to the world of the emerging AI language models and all this kind of thing, is that we have this computational view of the world, which allows one to do precise computations and build up these whole towers of consequences. So there's a typical setup, and you'll see more and more coming out along these lines. I mean, we built something with OpenAI back, oh gosh, a year ago now; an early version of this.

You've got the language model trying to make up words, and then it gets to use our computational language as a tool, if it can formulate what it's talking about. We have ways to take the natural language that it produces: we've had the Wolfram|Alpha system, which came out in 2009 and has natural-language understanding.

We had sort of solved the problem of, one sentence at a time, what does this mean? Can we translate this natural language, in English for example, into computational language, then compute an answer, using potentially many, many steps of computation? That gives you a solid answer, computed from knowledge that we've curated, et cetera, et cetera.

So the typical mode of interaction is a linguistic interface provided by things like LLMs, which then use our computational language as a tool to actually figure out, hey, this is the thing that's actually true, so to speak. Just as humans don't necessarily immediately know everything, but with tools they can get a long way.
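The division of labor described here, a linguistic interface backed by a precise computational tool, can be sketched schematically. Everything below is a stand-in: `translate_to_computational` fakes the natural-language-understanding step with a hard-coded lookup (real systems like Wolfram|Alpha or an LLM tool call do far more), and no real Wolfram or OpenAI API is used.

```python
# Schematic of the "LLM + computational tool" pattern: if a question can
# be formalized, hand it to a computation and return a solid answer;
# otherwise fall back to plausible-sounding generated text.
def translate_to_computational(question):
    """Stand-in natural-language understanding: map a question to a computation."""
    lookups = {
        "what is 37 factorial divided by 35 factorial?": lambda: 37 * 36,
    }
    return lookups.get(question.lower())

def answer(question):
    computation = translate_to_computational(question)
    if computation is not None:
        return computation()             # computed, verifiable answer
    return "plausible-sounding text"     # pure language-model fallback

print(answer("What is 37 factorial divided by 35 factorial?"))  # 1332
```

The point of the pattern is the branch: language in, but wherever possible, computation out.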

I suppose that's been sort of the story of my life, at least. I discovered computers as a tool back in 1972, and I've been using them ever since, and I've managed to figure out a number of interesting things in science and technology by using this external-to-me superpower tool of computation.

The LLMs and the AIs get to do the same thing. That's the core of how the technology I've been building for a long time most immediately fits into the current expansion of excitement about AI and language models and so on. I think there are other pieces to this, which have to do with how, for example, science that I've done relates to understanding more about how you can build other kinds of AI-like things, but that's sort of a separate branch.


[00:26:13] Hala Taha: Honestly, you're teaching us so much. I feel like a lot of people tuning in are probably learning a lot of this stuff for the first time. But one thing that we all are using right now is ChatGPT, right? Everybody has sort of embraced ChatGPT. It feels like magic when you're getting something that a human could potentially write.

So I have a couple of questions about ChatGPT. You alluded to how it works a bit, but can you give us more detail about how neural networks work in general, and what ChatGPT is doing in the background to spit out something that looks like it's written by a human?

[00:26:28] Stephen Wolfram: The original inspiration for neural networks was understanding something about how brains work.

In our brains, we have roughly a hundred billion neurons. Each neuron is an electrical device, and they're connected with things that look, under a microscope, a bit like wires. One neuron might be connected to a thousand or ten thousand other neurons in one's brain. These neurons will have a little electrical signal, and then they'll pass that electrical signal on to another neuron.

And pretty soon one has gone through a whole chain of neurons and one says the next word, or whatever. So it's an electrical machine, lots of things connected to things; that's how people imagine brains work. A neural net is an idealization of that, set up in a computer, where one has these connections between artificial neurons, usually called weights.

You often hear people saying this thing has a trillion weights or something; those are the connections between artificial neurons, and each one has a number associated with it. So what happens when you ask ChatGPT something? It will take the words that it's seen so far, the prompt, and it will grind them up into numbers: more or less every word in English gets a number, or every part of a word gets a number. That sequence of numbers is fed as input to what is essentially a mathematical computation that goes through and says: okay, here's this arrangement of numbers.

We multiply each number by this weight, then we add up a bunch of numbers, then we take a threshold of those numbers, and so on. And we keep doing this, a sequence of times, like a few hundred times for typical ChatGPT-type behavior. And then at the end, we get out another number, actually another collection of numbers, that represent the probabilities that the next word should be this or that.

So in the example of "the cat sat on the," the next word probably has a very high probability, 99 percent, of being "mat," and a 1 percent or 0.5 percent probability of being "floor" or something. And then what ChatGPT does is say: usually I'm going to pick the most likely next word, and sometimes I'll pick a word that isn't the absolutely most likely next word. And it just keeps doing that.
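The computation just described, numbers in, weighted sums and thresholds layer by layer, next-word probabilities out, can be written down in miniature. The sizes, weights, and vocabulary below are made up for illustration: real models have billions of learned weights, tens of thousands of tokens, and hundreds of layers rather than two.

```python
# Minimal sketch of a neural-net forward pass: multiply by weights,
# add up, threshold, repeat, then turn final scores into probabilities.
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One layer: weighted sums plus a simple nonlinearity (ReLU, a smooth
    stand-in for the thresholding described in the conversation)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(max(0.0, total))
    return outputs

def softmax(scores):
    """Turn final scores into probabilities over possible next words."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Tiny made-up network: 3 input numbers -> 4 hidden units -> 3 "words".
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b2 = [0.0] * 3

tokens = [0.2, 0.7, 0.1]          # the prompt, already "ground up into numbers"
hidden = layer(tokens, w1, b1)    # a real model stacks hundreds of these
probs = softmax(layer(hidden, w2, b2))

vocab = ["mat", "floor", "turtle"]
print(dict(zip(vocab, probs)))    # a probability for each candidate next word
```

Sampling from those probabilities, usually but not always taking the top word, is the "pick the next word and keep going" step.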

Sometimes I'll pick a word that isn't the absolutely most likely next word, and it just keeps doing that. And the surprise is that just doing that kind of thing, a word at a time, gives you something that seems like a reasonable English sentence. Now, the next question is, How did it get all those in the case of the original chat GPT?

I think it was 180 billion weights. How did it get those numbers? The answer is that it was trained, by being shown all this text from the web. What was happening was: well, you've got one arrangement of weights. Okay, what next word does that predict? Okay, that predicts "turtle" as the next word for "the cat sat on the."

Okay, "turtle" is wrong. Let's change that; let's see what happens if we adjust these weights in that way. Oh, we finally got it to say "mat"; great, that's the correct answer for that particular case. Well, you keep doing that over and over again. That takes huge amounts of computer effort. You keep on bashing it, trying to get it:

No, no, no, you got it wrong; adjust it slightly to make it closer to correct. Keep doing that long enough, and you've got something, a neural net, which has the property that it will typically reproduce the kinds of things it's seen. Now, it's not enough to reproduce what it's seen, because if you keep going, writing a big long essay, a lot of what's in that essay will never have been seen before.
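The training loop described, guess, check, nudge the weights toward the right answer, repeat, can also be shown as a toy. Real training computes gradients over billions of examples; here we just nudge per-word scores until the net stops saying "turtle," which is a deliberately crude stand-in for that process.

```python
# Toy version of "keep bashing it until it predicts words better":
# start with random scores, and whenever the prediction is wrong,
# push the wrong word down and the right word up by a tiny amount.
import random

random.seed(1)

vocab = ["mat", "turtle", "floor"]
target = "mat"  # the next word we want after "the cat sat on the"

# One score per candidate word; random start, so the "net" begins wrong.
scores = {w: random.uniform(-1, 1) for w in vocab}

def predict(scores):
    return max(scores, key=scores.get)

for step in range(1000):
    guess = predict(scores)
    if guess != target:
        scores[guess] -= 0.01    # you got it wrong: nudge the wrong word down...
        scores[target] += 0.01   # ...and the right word up, slightly

print(predict(scores))  # after enough nudging, it says "mat"
```

The essential shape, tiny corrections repeated enormously many times, is the same; what makes real training expensive is doing this across billions of weights and examples at once.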

Those particular combinations of words will never have been produced before. So then the question is: how does it extrapolate? How does it figure out something it's never seen before? What words is it going to use? And this is the thing where nobody knew what was going to happen.

This is the thing where the big surprise is that the way it extrapolates is similar to the way we humans seem to extrapolate, and presumably that's because its structure is similar to the structure of our brains. We don't really know why, when it figures things out that it hasn't seen before, it does that in a kind of human-like way.

It's a scientific discovery. Now we can ask: can we get an idea of why this might happen? I think we have an idea, and it's more or less this. How do you put together an English sentence? Well, you learn basic grammar. You say it's a noun, a verb, a noun; that's a typical English sentence.

But there are many noun-verb-noun English sentences that aren't really reasonable sentences, like, I don't know, "the electron ate the moon." Okay, it's grammatically correct, but it probably doesn't really mean anything, except in some poetic sense. Then what you realize is that there's a more elaborate construction kit for sentences that might mean something, and people have been intending to create that construction kit for a couple of thousand years. I mean, Aristotle, at the time he created logic, started thinking about that kind of construction kit, but nobody got around to building it. But I think ChatGPT and LLMs show us there is such a construction kit: oh, that word, if it's "blah ate blah," the first blah had better be a thing that eats things. And there's a certain category of things that eat things: animals and people and so on. And that's part of the construction kit. So you end up with this notion of a semantic grammar, a construction kit for how you put words together.

My guess is that's essentially what ChatGPT has discovered. And once we understand that more clearly, we'll probably be able to build things like ChatGPT much more simply than this very indirect way of doing it, which is to have this neural net and keep bashing it, saying, make it predict words better, and so on.

There's probably a more direct way to do the same thing, but that's what's happened. And this moment when something reaches human-level performance is very hard to predict; it happened for things like visual object recognition around the 2011-2012 timeframe. It's hard to know when these things are going to happen for different kinds of human activities. But the thing to realize is that there are human-like activities, and then there are things that we have formalized, where we've used math and other kinds of things as a way to work things out systematically.

And that's a different direction than the direction that things like neural nets are going in, and it happens to be the direction that I've spent a good part of my life trying to build up. And these things are very complementary, in the sense that things like the linguistic interface that are made possible by neural nets feed into precise computation that we can do on that side.

[00:33:45] Hala Taha: How does this make you feel about human consciousness, and AI potentially being sentient or having any sort of agency?

[00:33:54] Stephen Wolfram: It's always a funny thing because we have an internal view of the fact that there's something going on inside for us. We experience the world and so on.

Even when we're looking at other people, it's like, it's just a guess. I know what's going on in my mind. It's just some kind of guess what's going on in your mind, so to speak. And the big discovery of our species is language. This way of packaging up the thoughts that are happening in my mind and being able to transmit them to you and having you unpack them and make similar thoughts, perhaps in your mind, so to speak.

So this idea of where you can imagine there's a mind operating, it's not obvious even between different people; we kind of always make that assumption. When it comes to other animals, it's like, well, we're not quite sure, but maybe we can tell that a cat had some emotional reaction which reminded us of some human emotion, and so on.

When it comes to our AIs, I think that increasingly people will have the view that the AIs are a bit like them. So when you say, well, is there a "there" there, is there a thing inside? It's like, okay, is there a thing inside another person? You know, you say, well, but we can tell the other person is thinking and doing all this stuff.

Well, if we were to look inside the brain of that other person, all we'd find is a bunch of electrical signals going around, and those add up to something where we have the assumption that there's a conscious mind there, so to speak. So I think we have always felt that our thinking and minds are very far away from other things that are happening in the world.

I think the thing that we learn from the advance of AI is, well, actually there's not as much distance between the amazing stuff of our minds and things that are just able to be constructed computationally. One of the things to realize is this whole question of what thinks, of where the computational stuff is going on.

You might say, well, humans do that. Maybe our computers do that. Well, actually, nature does that too. People will have this thing, you know, "the weather has a mind of its own." Well, what does that mean? Typically, operationally, it means it seems like the weather is acting with free will. We can't predict what it's going to do.

But if we say, well, what's going on in the weather? Well, it's a bunch of fluid dynamics in the atmosphere and this and that and the other. And we say, well, how do we compare that with the electrical processes that are going on in our brains? They're both computations that operate according to certain rules.

The ones in our brains we're familiar with; the ones in the weather we're not familiar with. But in some sense, in both of these cases, there's a computation going on. And one of the big pieces of science I've done is this thing called the principle of computational equivalence, which is this discovery, this idea, that if you look at different kinds of systems operating according to different rules, whether it's a brain or the weather, there's a commonality.

The same level of computation is achieved by those different kinds of systems. That's not obvious. You might say, well, I've got this system that's just made from physics, as opposed to this system that's the result of lots of biological evolution or whatever, or I've got this system that just operates according to these very simple rules that I can write down. You might have thought the level of computation achieved in those different cases would be very different.

The big surprise is that it isn't; it's the same. And that has all kinds of consequences. Like if you say, okay, I've got this system in nature, let me predict what's going to happen in it. Well, essentially what you're doing by saying "I'm going to predict what's going to happen" is setting yourself up as being smarter than the system: nature will take all these computational steps to figure out what it does, but you are going to just jump ahead and say, this is what's going to happen in the end.

Well, the fact that there's this principle of computational equivalence implies this thing I call computational irreducibility, which is the realization that there are many systems where, to work out what will happen in that system, you have to do kind of an irreducible amount of computational work.

That's a surprise, because we've been used to the idea that science lets us jump ahead and just say, oh, this is what the answer is going to be. And this is showing us, from within science, that there's a fundamental limitation where we can't do that. That's important when it comes to thinking about things like AI.
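That irreducible amount of work can be seen in a classic example from Wolfram's own research, the rule 30 cellular automaton. Here's a small Python sketch (the width, step count, and printing are arbitrary choices for illustration): to learn the pattern after n steps, you simply run all n steps, and as far as anyone knows there is no general shortcut formula.

```python
# Rule 30 cellular automaton: each cell updates from its left,
# own, and right values as: new = left XOR (center OR right).
# Despite the simple rule, predicting step n without running
# steps 1..n isn't known to be possible, which is computational
# irreducibility in miniature.

def rule30_step(cells: list[int]) -> list[int]:
    """One synchronous update with wraparound neighbors."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

# Start from a single black cell and just follow the steps.
row = [0] * 15
row[7] = 1
for _ in range(7):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Science here has no choice but to simulate, which is exactly the limitation described above.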

When you say things like, well, let's make sure that AIs never do the wrong thing, the problem with that is this phenomenon of computational irreducibility. The AI is doing what the AI does. It's doing all these computations and so on. We can't know in advance; we can't just jump ahead and say, oh, we know what it's going to do.

We are stuck having to follow through the steps. We can try and make an AI where we can always know what it's going to do. It turns out that AI will be too dumb to be a serious AI. And in fact, we see that happening in recent times: people say, let's make sure they don't do the wrong thing, and once we put in enough constraints,

it can't really do the things that a computational system should be able to do, and it doesn't really achieve this level of capability that you might call real AI, so to speak.




[00:39:23] Hala Taha: Next, I want to talk about how the world is going to change now that AI is here and being more widely adopted by people; it's becoming more commonplace.

How is it going to impact jobs? And also, if you can touch on the risks of AI, what are the biggest fears that people have around AI? 

[00:39:40] Stephen Wolfram: More and more systems in the world will get automated. This has been the story of technology throughout history. AI is another step in the automation of things. You know, when things get automated, things humans used to have to do with their own hands, they don't have to do anymore.

The typical pattern in economies, like in the U.S., is this: 150 years ago in the U.S., most people were doing agriculture. You had to do that with your own hands. Then machinery got built that let that be automated, and, you know, people said, well, then nobody's going to have anything to do.

Well, it turned out they did have things to do, because that very automation enabled a lot of new types of things that people could do. For example, the podcasting thing we're doing right now is enabled by the fact that we have video communication and so on.

There was a time when all of the automation that has now led to the kind of telecommunications infrastructure we have wasn't there, and there had to be telephone switchboard operators plugging wires in, and so on. And people were saying, oh gosh, if we automate telephone switching, then all those jobs are going to go away. But actually what happened was, yes, those jobs went away, but that automation opened up many other categories of jobs.

So the typical thing that you see, at least historically, is a big chunk of jobs, something people have to do for themselves, gets automated, and that enables what become many different possible things that you end up being able to do. And I think the way to think about this is really the following: once you've defined an objective, you can build automation that achieves that objective.

Maybe it takes a hundred years to get to that automation, but you can in principle do that. But then you have the question, well, what are you going to do next? What are the new things you could do? Well, there are an infinite number of new things you could do.

The AI, left to its own devices, has an infinite set of things that it could be doing. The question is, which things do we choose to do? And that's something that is really a matter for us humans, because you could compute anything you want to compute. And, in fact, some part of my life has been exploring the science of the computational universe, what's out there that you can compute.

And the thing that's a little bit sobering is to realize, of all the things that are out there to compute, the set that we humans have cared about so far in the development of our civilization is a tiny, tiny, tiny slice. And this question of where do we go from here is, well, what other slices, which now they're possible, which things do we want to do?

And I think that the typical thing you see is that a lot of new jobs get created around the things where what you do is still sort of a matter of human choice. Eventually it kind of gets standardized, and then it gets automated, and then you go on to another stage. So I think that's the spectrum of what jobs

will be automated. One of the things that happened back several years ago now: people were saying, oh, machine learning, the sort of underlying area that leads to neural nets and AI and things like this, machine learning is going to put all these people out of jobs. The thing that was sort of amusing to me was that I knew perfectly well that the first category of jobs that would be impacted were machine learning engineers, because machine learning can be used to automate machine learning, so to speak.

And so it was. Once the thing becomes routine, then it can be automated. And for example, a lot of people learned to do low-level programming. I've spent a large part of my life trying to automate low-level programming. So in other words, the computational language we've built, where people say, oh my gosh, I can do this.

I can get the computer to do this thing for me by spending an hour of my time, where if I were writing standard programming language code, I'd spend a month trying to set my computer up to do it. The thing we've already achieved is to be able to automate out those things. What you realize when you automate out something like that is, people say, oh my gosh, things have become so difficult now, because if you're doing low-level programming, some part of what you're doing is just routine work.

You don't have to think that much. It's just like, oh, I turn the crank, I show up to work the next day, I get this piece of code written. Well, if you've automated out all of that, what you realize is most of what you have to do is figure out, so what do I want to do next? And that's where being able to do real computational thinking comes in.

Because that's where it's like, so how do you think about what you're trying to do in computational terms, so you can define what you should do next? And I think the low-level, turn-the-crank programming, I mean, that should be extinct already, because I've spent the last 40 years trying to automate that stuff.

And in some segments of the world, it is kind of extinct, because we did automate it. But there are an awful lot of people who said, oh, we can get a good job by learning C or C++ programming, or Python or Java, or something like this. That's a thing that we can spend our human time doing.

It's not necessary, and that's becoming more apparent at this point. The thing that is still very much the human thing is: so what do you want to do next, so to speak.

[00:45:07] Hala Taha: It's a good story because you're not saying, Hey, we're doomed. You're saying AI is going to actually create more jobs.

It's going to automate the things that are repetitive and the things that we still need to make decisions on or decide the direction that we want to go in. That's what humans are going to be doing, sort of shaping all of it. But do you feel that AI is going to supersede us in intelligence and have this apex intelligence one day where we are not in control of the next thing?

[00:45:38] Stephen Wolfram: I mentioned the fact that lots of things in nature compute. Our brains do computation. The weather does computation. The weather is doing a lot more computation than our brains are doing. You say, what's the apex intelligence in the world? Already, nature has vastly more computation going on than happens to occur in our brains.

The computation going on in our brains is computation where we say, oh, we understand what that is and we really care about that. Whereas the computation that goes on in the babbling brook or something, we say, well, that's just some flow of water and things. We don't really care about that. So we already lost that competition of, are we the most computationally sophisticated things in the world?

We're not. Many, many things are equivalent in their computational abilities. So then the question is, well, what will it feel like when AI gets to the point where routinely it's doing all sorts of computation beyond what we manage to do? I think it feels pretty much like what it feels like to live in the natural world.

The natural world does all kinds of things. You know, occasionally a tornado will happen, occasionally this or that will happen. We can make some prediction about what's going to happen, but we don't know for sure what's going to happen or when it's going to happen, and so on. And that's what it will feel like to be in a world where most things are run with AI.

And we'll be able to do some science of the AI, just like we can do science of the natural world, and say, this is what we think is going to happen. But there's going to be this infrastructure of AI society, which already exists to some extent but which will grow, of more and more things that are happening automatically as computational processes.

But in a sense, that's no different from what happens in the natural world. The natural world is just automatically doing things. We can try and divert what it does, but it's just doing what it does. For me, one of the things I've long been interested in is how the universe is actually put together.

If we drill down and look at the smallest scales of physics and so on, what's down there? And what we've discovered in the last few years is that it looks like we really can understand the whole of what happens in the universe as a computational process underneath. People have been arguing for a couple of thousand years about whether the world is made of continuous things, or whether it's made of little discrete things like atoms and so on.

And a bit more than a hundred years ago, it got nailed down: matter is made of discrete stuff; there are individual atoms and molecules and so on. Then light is made of discrete stuff, photons and so on. Space, people had still assumed, was somehow continuous, not made of discrete stuff. And the thing we kind of nailed down, I think, in 2020 was the idea that space really is made of discrete things.

There are discrete elements, discrete atoms of space, and we can really think of the universe as made of a giant network of atoms of space. And hopefully in the next few years, maybe if we're lucky, we'll get direct experimental evidence that space is discrete in that way. But one of the things that makes one realize is that it's sort of computation all the way down.
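The picture of a network that keeps getting rewritten by simple rules can be sketched in miniature. This toy rule is invented purely for illustration and is not the actual Wolfram Physics model: every edge (a, b) is replaced by two edges through a brand-new node, so the network grows and refines at each step.

```python
# Toy network-rewriting "universe": apply one simple rule to every
# edge each step: (a, b) becomes (a, new) and (new, b). Invented
# for illustration; the real models rewrite hypergraphs, not edges.

def rewrite(edges: list[tuple[int, int]], next_id: int):
    """One update pass; returns the new edge list and next fresh id."""
    new_edges = []
    for a, b in edges:
        new_edges += [(a, next_id), (next_id, b)]
        next_id += 1
    return new_edges, next_id

edges, next_id = [(0, 1)], 2   # start from a single edge
for step in range(4):
    edges, next_id = rewrite(edges, next_id)
    print(f"step {step + 1}: {len(edges)} edges")
```

Even a rule this simple produces an ever-finer structure, which is the flavor of "simple rules, updated everywhere, all the way down."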

At the lowest level, the universe consists of this discrete network that keeps on getting updated, kind of following these simple rules and so on. It's all rather lovely, but there's computation everywhere, in nature, in our eyes and our brains. The computation that we care the most about is the part that we, with our brains and our civilization and our culture and so on, have so far explored.

That's the part we care the most about. Progressively, we should be able to explore more, and as the computational X fields come into existence and so on, and we get to use our computers and computational language and so on, we get to colonize more of the computational universe. And we get to bring more things into the realm of: oh yes, that's a thing we humans talk about.

I mean, if you go back even just a hundred years, nobody was talking about all these things that we now take for granted about computers, how they work, how you can compute things, and so on. That was just not something within our human sphere. Now the question is, as we go forward with automation, with the formalization of computational language, things like that, what more will be within our human sphere?

It's hard to predict. It is to some extent a choice. There are things where we could go in this direction, we could go in that direction. These are things we will eventually humanize. It's also, if you look at the course of human history and you say, what did people think was worth doing a thousand years ago, a lot of things that people think are worth doing today, people absolutely didn't even think about.

A good example, perhaps, is walking on a treadmill. That would just seem completely stupid to somebody from even a few hundred years ago. It's like, why would you do that? Well, I want to live a long life. Why do you even want to live a long life? In the past, that might not even have been thought of as an objective, and then there's a sort of whole chain of, why are we doing this?

And that chain is a thing of our time, and that will change over time. And I think what is possible in the world will change. What we get to explore out of the computational universe of all possibilities will change. And you could ask the question, what will be the role of biological intelligence versus all the other things in the world?

And as I say, we're already somewhat in that situation. There are things about the natural world that just happen, and some of those things are much more powerful than us. We don't get to stop the earthquakes and so on. We are already in that situation. It's just that with the things we are doing with AI and so on, we happen to be building a layer of that infrastructure that is sort of our own construction, rather than something which has been there all the time in nature, and so we've kind of gotten used to it.

[00:51:52] Hala Taha: It's so mind blowing, but I love the fact that you seem to have like a positive attitude towards it. You know, we've had other people on the show that are worried about AI, but you don't have that attitude towards it. It seems like you're more accepting of the fact that it's coming whether we like it or not, right?

And to your point, we're already living in nature, which is way more intelligent than us anyway. And so maybe this is just an additional layer. 

[00:52:17] Stephen Wolfram: Right. I'm an optimistic person. That's what happens. I've spent my life doing large projects and building big things. You don't do that unless you have a certain degree of optimism.

But I think also what will always be the case as things change, things that people have been doing will stop making sense. You see this in the intellectual sphere, paradigms in science. I mean, I built some new things in science where people at first say, Oh my gosh, this is terrible. I've been doing this other thing for 50 years.

I don't want to learn this new stuff. This is a terrible thing. And I think you see that a lot in the world, where people are like, it's good the way it is, let's not change it. Well, what's happening is, in the sphere of ideas and in the sphere of technology, things change. And I think, to say, is it going to wipe our species out?

I don't think so. That would be a thing that we would probably all agree is definitively bad. But if we say, well, you know, I spent a lot of time learning how to, I don't know, become a great programmer in some low-level programming language, and by golly, that's not a relevant skill anymore.

Yes, that can happen. For example, in my life, I got interested in physics when I was pretty young, and when you do physics, you end up having to do lots of mathematical calculations. I never liked doing those things. There were other people who were like, that's what they're into, that's what they like doing.

I never liked doing those things, so I taught computers to do them for me, and me plus the computer did pretty well at doing those things. One had kind of automated that away, and to me that was a big positive, because it let me do a lot more, let me take what I was thinking about and get a sort of superpower to go places with it.

To other people, it's like, oh my gosh, the thing that we really were good at, doing all these kinds of mathematical calculations by hand and so on, that just got automated away; the thing that we like to do isn't a thing anymore. That's a dynamic that I think continues. Having said that, there are plenty of ridiculous things that get made possible whenever there's powerful technology.

You can do ridiculous things with it. And the question of exactly what terrible scam will be made possible by what piece of AI, that's always a bit hard to predict. It's kind of a computational irreducibility story, this thing of: what will people figure out how to do? What will the computers let them do?

And so on. So in general terms, it is my nature to be optimistic, but I think also there is kind of an optimistic path through the way the world is changing, so to speak.

[00:55:06] Hala Taha: Well, it's really exciting. I can't wait to have you back on maybe in a year to hear all the other exciting updates that have happened with AI.

I end my show asking two questions. Now, you don't have to use the topic of today's episode. You can just use your life experience to answer these questions. So one is, what is one actionable thing our young and profiters can do today to become more profitable tomorrow? And this is not just about money, but profiting in life.

[00:55:31] Stephen Wolfram: Understand computational thinking. This is the coming paradigm of the 21st century, and if you understand it well, it gives you a huge advantage. Unfortunately, it's not like you can go sign up for a computer science class and you'll learn that. The educational resources for learning about computational thinking aren't really fully there yet.

And it's something which, frustratingly, after many years, I've decided I have to really build a bunch more of myself, because other people aren't doing it, and it'll be decades before it gets done otherwise. But yes, learn computational thinking, learn the tools that are around that. It's a quick way to jump ahead in whatever you're doing, because as you make things computational, you get to think more clearly about them, and you get the computer to help you jump forward.

[00:56:22] Hala Taha: And where can people get resources from you to learn more about that? Where do you recommend? 

[00:56:27] Stephen Wolfram: Our computational language, Wolfram Language, is the main example of that, where you get to do computational thinking. There's a book I wrote a few years ago called An Elementary Introduction to the Wolfram Language, which is pretty accessible to people. But hopefully,

well, certainly within a year, there should exist a thing that I'm working on right now, which is directly an introduction to computational thinking. You'll find a bunch of resources around Wolfram Language that explain more about how one can think about things computationally.

[00:56:58] Hala Taha: Whatever links we find, I'll stick them in the show notes. And next time, if you have something and you're releasing it, make sure that you contact us so you can come back on Young and Profiting Podcast. Stephen, thank you so much for your time. We really enjoyed having you on Young and Profiting Podcast.

[00:57:12] Stephen Wolfram: Thanks. 

[00:57:17] Hala Taha: Oh boy, Yap fam, my brain is still buzzing from that conversation. I learned so much today from Stephen Wolfram, and I hope that you did too. And although AI technology like ChatGPT seemed to just pop up out of nowhere in 2022, it's actually been in the works for a long, long time.

In fact, a lot of the thinking behind large language models has been in place for decades. We just didn't have the tools or the computing power to bring them to fruition. And one of the exciting things that we've learned about AI advances is that there's not as big a gap as we thought between what our organic brains can do and what our silicon technology can now accomplish.

As Stephen put it, whether a system develops from biological evolution or computer engineering, we're talking about the same rough level of computational complexity. Now, this is really cool, but it's also pretty scary. Like, we're just creating this really smart thing that's gonna get smarter. And I asked him the question, like, do you think AI is gonna have apex intelligence and take over the world?

I record these outros a couple weeks after I do the interview, and I've been telling all my friends this analogy. Every time I talk to someone, I'm like, oh, you want to hear something cool? And I keep thinking about this: AI, if it does become this apex intelligence that we have no more control over, he said it might just be like nature.

Nature has a mind of its own. That's what everybody always says. We can try to predict it. We can try to analyze nature, try to figure out what it does. Sometimes it's terrible and inconvenient and disastrous and horrible, and sometimes it's beautiful.

It's so interesting to think about the fact that AI might become this thing that we just exist with, that we created, that we have no control over. It might not necessarily be bad, it might not necessarily be good. It just could be this thing that we exist with. So I thought that was pretty calming because we do already sort of exist in a world that we have no control over.

You never really think about it that way, but it's true. 

And speaking of AI getting smarter, let's talk about AI and work. Is AI going to end up eating our workforce's lunch in the future? Stephen is more optimistic than most. He thinks AI and automation might just make our existing jobs more productive and likely even create new jobs in the future, jobs where humans are directing and guiding AI in new, innovative endeavors.

I really hope that's the case, because us humans, we need our purpose. Thanks so much for listening to this episode of Young and Profiting Podcast. We are still very much human-powered here at Yap and would love your help. So if you listened, learned, and profited from this conversation with the super intelligent Stephen Wolfram, please share this episode with your friends and family.

And if you did enjoy this show and you learned something, then please take two minutes to drop us a five star review on Apple Podcasts. I love to read our reviews. I go check them out every single day. 

And if you prefer to watch your podcast as videos, you can find all of our episodes on YouTube. You can also find me on Instagram at Yap with Hala or LinkedIn by searching my name. It's Hala Taha. And before we go, I did want to give a huge shout out to my Yap Media production team. Thank you so much for all that you do.

You guys are the best. This is your host, Hala Taha, aka the Podcast Princess, signing off.
