Mustafa Suleyman: Harnessing AI to Transform Work, Business, and Innovation | E314

At just 11 years old, Mustafa Suleyman started buying and reselling candy for profit in his modest London neighborhood. Many years later, he co-founded one of the most groundbreaking AI companies, DeepMind, which Google later acquired for £400 million. But while at Google, Mustafa felt things were moving too slowly with LaMDA, an AI project that eventually became Gemini. Convinced that the technology was ready for real-world impact, he left to co-found Inflection AI, aiming to build technology that feels natural and human. In this episode, Mustafa shares insights on how AI is quickly changing how we work and live, the challenges of using it responsibly, and what the future might hold.
 

In this episode, Hala and Mustafa will discuss:

– The ethical challenges of AI development

– How AI can be misused when in the wrong hands

– AI: a super-intelligent aid at your fingertips

– Why personalized AI companions are the future

– Could AI surpass human intelligence?

– Narrow AI vs. Artificial General Intelligence (AGI)

– How Microsoft Copilot is transforming the future of work

– A level playing field for everyone

– How AI can transform entrepreneurship

– How AI will replace routine jobs and enable creativity

 

Mustafa Suleyman is the CEO of Microsoft AI and co-founder of DeepMind, one of the world’s leading artificial intelligence companies, now owned by Google. In 2022, he co-founded Inflection AI, which aims to create AI tools that help people interact more naturally with technology. An outspoken advocate for AI ethics, he founded the DeepMind Ethics & Society team to study the impact of AI on society. Mustafa is also the author of The Coming Wave, which explores how AI will shape the future of society and global systems. His work has earned him recognition as one of Time magazine’s 100 most influential people in AI in both 2023 and 2024.

 

Connect with Mustafa:

Mustafa’s Twitter: https://x.com/mustafasuleyman

 

Resources Mentioned:

Mustafa’s book, The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma: https://www.amazon.com/Coming-Wave-Technology-Twenty-first-Centurys/dp/0593593952

Inflection AI: https://inflection.ai/

 

LinkedIn Secrets Masterclass, Have Job Security For Life:

Use code ‘podcast’ for 30% off at yapmedia.io/course.

 

Top Tools and Products of the Month: https://youngandprofiting.com/deals/

 

More About Young and Profiting

Download Transcripts – youngandprofiting.com

Get Sponsorship Deals – youngandprofiting.com/sponsorships

Leave a Review – ratethispodcast.com/yap

 

Follow Hala Taha

 

Learn more about YAP Media’s Services – yapmedia.io/

 

 

[00:00:57] Hala Taha: Yap fam, welcome back to the show, and today we have an incredibly insightful conversation about AI. AI is completely transforming the world. You're probably using AI every single day, but today we're going to learn even more about AI than we've ever heard before, because I've brought on one of the pioneers of AI, Mustafa Suleyman.

Mustafa Suleyman is the co-founder of DeepMind and the co-founder of Inflection AI. He formerly worked at Google, and now he's the CEO of Microsoft AI. We're going to learn about all the new developments with Microsoft's Copilot product, and Mustafa is going to absolutely blow your mind. We're going to learn so much more about AI than we ever have on the show, and we talk about the difference between AGI, Artificial General Intelligence, and the narrow AI that a lot of us are familiar with. We're going to learn how AI is becoming more human-like than ever and how AI is going to truly become our copilot, helping us in work and life and even potentially one day being the co-founder of our businesses.

Mustafa also is going to cover the containment challenge related to AI, and we're going to learn how we need to make sure we put up guardrails so that AI works for us and not against us as a human society. I'm so excited for this conversation. I really think you guys are going to love it. I think you guys are going to learn so much.

Even if you're a pro at AI, I guarantee you Mustafa's future predictions are going to really open your mind. So without further ado, here's my conversation with Mustafa Suleyman.

Mustafa, welcome to Young and Profiting Podcast. 

[00:02:35] Mustafa Suleyman: It is great to be here. Thanks so much for having me. 

[00:02:37] Hala Taha: I'm so excited for this interview. My listeners love to learn about AI. So, you have been a pioneer in AI. You co-founded DeepMind, you co-founded Inflection AI, and you're now leading Microsoft AI.

And I read your book, The Coming Wave, and one of the things that you say is that AI has a lot of threats. There's even, you say, a threat to global world order. So my first question is to you, are you an optimist or a pessimist when it comes to AI? 

[00:03:07] Mustafa Suleyman: I think that, to be rational, you have to be both. And if you just sit on one side or the other, you're probably missing an important part of the truth, because I think wisdom in the 21st century is being able to hold multiple confusing or contradictory ideas in working memory at the same time and navigate a path through those things, in a way that doesn't leave you characterizing those who disagree with you, or who somehow see things slightly differently, in a negative or unconstructive way.

And so it is true, this AI moment is going to deliver the greatest boost to productivity in the history of our species in the next couple of decades. That to me is unquestionable. I don't think that makes me an optimist. I think that makes me a good predictor of the underlying trends. At the same time, it is also going to create the most change, in a disruptive way, both the positive and negative versions of disruptive, that we've ever seen, right? And that is going to be incredibly destabilizing to the way that we currently understand the world to be, the way that we work, the way our politics operates, fundamentally what it even means to be human. So we're about to go through a revolution, and I think that in that world I'm both optimistic.

And there are parts of me that do feel pessimistic. 

[00:04:35] Hala Taha: So one of the things that you talk about related to powerful technologies is a containment problem. And you say it's going to be one of the biggest challenges that we have. So what is this containment problem? And what are the challenges with containing technologies like AI?

[00:04:49] Mustafa Suleyman: Yes, so in the past the goal has been to invent things and have science make our lives calmer and happier and healthier. And the race has been, how do we unlock these capabilities as fast as possible? We want to create and produce and invent and solve challenges. And that's what we did with steam and oil and electricity and wind and food systems. And we've seen this explosion of creativity, certainly in the last couple of centuries, but that's also been the history of our species, right? So the goal there has been proliferate: spread it as far and wide as possible so that everyone can enjoy the benefits immediately.

Now, the thing that I've speculated about, which is the containment hypothesis, is that if we do that this time with AI technologies in a completely unfiltered way, then that has the potential to empower everybody to have a massive impact on everybody else in real time. So we aren't just talking about spreading information and knowledge.

That's one part of it, but increasingly your agents or your copilots will be able to actually do things in the real world and in the digital world, and the cost of building one of those things is going to be zero marginal cost. Right, so everyone's going to have access to them in 20 years, maybe even much earlier.

And that's going to completely change how we get things done, how we interact with one another, and potentially cause a huge amount of chaos. And so, a little bit of friction in the system could be our friend here. That's really all that I'm positing in the containment hypothesis. Containing things so that we can collectively as a society think carefully about the potential future consequences and the third order effects.

That seems like a rational thing to do at this point. 

[00:06:43] Hala Taha: That makes a lot of sense. So what are your thoughts around AI and a problem surrounding surveillance and bias and things like that? 

[00:06:53] Mustafa Suleyman: There are lots of potential ways that this technology gets misused, and in some respects, this concentrates power. It makes it easier for a small group of people to see what is happening in an entire ecosystem, i.e. understand the faces and the images and the patterns of massive crowds of people walking around in our societies, and that certainly can be used for a lot of good, right? I mean, we want to have a police force that is capable of instilling order and structure and so on. On the flip side, that also makes it easier for authoritarians, despots of any kind, to identify minority groups that they want to get rid of.

And it'll just be much easier to find the needle in the haystack. And again, that has amazing benefits because you can catch the bad guy, but it also is potentially scary. And so each one of these steps forward has that balancing act between the harms and the benefits. 

[00:07:56] Hala Taha: So let's talk about the benefits, because I've been asking you a lot about the negative.

So how can AI help us solve things like world hunger and poverty and things like that? 

[00:08:06] Mustafa Suleyman: Well, one way of thinking about it is that intelligence is the thing that has made us productive and successful as a species. It's not just our muscle or our brawn, it's actually our minds and our ability to make predictions in complex environments, to solve hard problems.

This is really the essence of what makes us special and creative. We learn to use tools and we invent things. So that intelligence, that technique of being able to predict what's going to happen next, that is actually something that we're increasingly learning to automate and turn into an intelligence system that can be used by everybody.

So what does that mean? That means, okay, well, we now have this very thing that made us smart and productive and created civilization. That very concept, intelligence, is going to be cheap and abundant. Just like energy, frankly, oil, has turbocharged the creation of our species, and we now have 7 billion people on the planet, largely as a result of the proliferation of oil, right?

The next wave is that everybody is going to get access to personalized, real-time knowledge, and a companion, an aide, a coach, a guide, a copilot that is going to help you get things done in practice. It will help you create and invent and solve problems and get things done. And so that is a massive force amplifier.

So for everybody who wants to solve world hunger or invent new energy systems or solve the battery challenge so that we can really unlock the power of renewables, we now are going to have a super-intelligent aide at your fingertips that's going to help you work through those problems.

[00:10:02] Hala Taha: It's so cool to think about, and as you're talking, I think about the iPhone, right?

We had that in our pocket. It's not personalized AI, it's not an AI companion or anything like that, but it gives us access to knowledge of the world. It really has moved us forward as humans. And now there's like another wave coming and it's pretty exciting. And I feel like we're already ready for it because we've had stuff like the iPhone and the internet, would you say?

[00:10:28] Mustafa Suleyman: Yeah, you're totally right. And in many ways, the iPhone's impact was just as hard for people to predict. Like, if you said to people in 1985, what would the internet enable? Very few people would have predicted mobile phones with real-time communication, video streaming, a camera and a microphone in your pocket, on the table, almost in every room, right? That would have sounded terribly dystopian and scary. And yet, because of the friction, because it's taken us a couple decades to really figure it out and get it working in practice, we've created boundaries around these things, right? We've got security, we have encryption, there are privacy standards, there are real regulators that make meaningful interventions, and there's public pressure.

And there's a reaction and it steadies the kind of arrival of these technologies in a really healthy and productive way. And it's kind of mind blowing. You're right. Who would have thought what the iPhone would have done? 

[00:11:30] Hala Taha: Yeah. Well, I'm super excited about everything. I do want to touch on your background a bit.

So you co-founded DeepMind. And something that we have in common is that we're both really passionate about human rights. I was telling you offline, I'm Palestinian 100%, so that's top of mind for me, especially this year. You're Syrian. And I know there are global issues going on in the world that make this top of mind for both of us.

And you were a human rights activist long ago. That's how you started your career. So talk to me about how your experience with human rights and activism really helped shape your vision and some of your ethical decisions related to DeepMind. 

[00:12:08] Mustafa Suleyman: Thanks for the question. Human rights is core to what I believe to be the solution for a peaceful and stable society.

Your success and my success is probably largely a function of the privilege that we have to live in a society not at war, have access to a great education system, be able to get access to healthcare, and really just get the chance to be who we want to be. Instead of growing up in societies that are maybe in a refugee camp, going through war in the middle of complete chaos, families that get moved on month after month or year after year and essentially end up being refugees. And I think that, more than anything, we have to have so much more empathy and kindness for people who have been through that struggle. There tends to be a sort of demonizing of refugees or migrants, as though they're coming to steal something from our stable worlds, but in fact they're actually fleeing insane hardship. So we have to be way more compassionate and forgiving and kind to those kinds of people. And remember that those people are just like us. Our societies could fall apart in the U.S., in the U.K., in Europe, just as their societies happen to be falling apart at this moment in time. And we have to extend a hand of friendship and love. And the human rights framework taught me that. Because I grew up as a pretty strict Muslim. And I realized, you know, sort of in my late teens and early twenties that it was too narrow a kind of worldview.

You know, it prioritized being Muslim over being human. And it didn't make sense to me that there wasn't gender equality. It didn't make sense to me that people who chose to be with a member of the same sex suddenly got demonized and were somehow evil. None of that made sense to me. Whereas the human rights framework respects everybody as equals.

And I just can't see how that isn't the right way to live. So, yeah, I became an atheist and secular and a big believer in these kind of rights frameworks for everyone. 

[00:14:22] Hala Taha: And then how did that shape the way that you thought of AI and the way that you decided to develop your technology at DeepMind? 

[00:14:29] Mustafa Suleyman: Well, think about AI as a force amplifier.

So the question is, which force is it amplifying and which frame is it placing on the amplification? So what set of values, what ideas is it putting out into the world? What are its boundaries? What does it not do? And so technology is fundamentally an ethical question. It is clearly about how we reframe our culture and our ideas and our entertainment, our music, our knowledge in the world.

So it was very obvious to me from the very beginning, when we started DeepMind, that we were going to have a huge moral responsibility to think about what it's going to be like to bring these agents, these copilots, into the world. What would their values be? What would they not do? What are their limitations? That always was a big motivator for me and was a big part of the kind of structure that led to our acquisition by Google, and my subsequent efforts over my time at Google and since then have always been very focused on that question.

[00:15:35] Hala Taha: So when it comes to your technology goals at DeepMind, you have the goal of creating AGI, Artificial General Intelligence. We've never talked about that on the podcast. Probably 90 percent of my listeners have no idea what AGI is. Can you explain what that is? 

[00:15:50] Mustafa Suleyman: Yeah, great question. This is a term that gets bandied around quite a lot in kind of nerd niche circles, and it's sort of broken out a little bit now.

But basically, the idea of a general purpose intelligence is that it's capable of learning in any environment. And over time, it will end up becoming a super intelligence. It will get better by learning from itself and learning from any environment that you put it in. So much so that it exceeds human performance at any knowledge or action task.

So it could play any game, control any physical infrastructure, learn about any academic discipline. It's going to be a really, really powerful system in the future. But I do think that's a long, long way away. Before we get there, there are just going to be regular, AI systems or AI companions that can talk to you in the same language that you or I talk to one another.

I'm sure lots of people have played with various chatbots like ChatGPT and Copilot and you can see it's getting pretty good. It's still a little awkward but it's pretty knowledgeable. We've been reducing the hallucinations quite a lot. Lately we've added the real time access to information so it can check up on the news and it'll know the weather and it'll have a temporal Awareness and those systems I think you're going to be around for a pretty long time before we have to worry about a big AGI, but that is probably coming.

[00:17:21] Hala Taha: So when you say hallucinations, are you talking about incorrect information coming from AI? 

[00:17:27] Mustafa Suleyman: Not quite. I think of a hallucination as the model inventing something new. And so sometimes you want it to be creative, you're actually looking for it to find the connection between a zebra and a lemon and a New York taxi, 

[00:17:46] Hala Taha: right?

[00:17:47] Mustafa Suleyman: And if you ask it that question, it's going to come up with some creative, poetic connection that links those three things. But then sometimes you just want it to be super to the point and not wander off and talk all kinds of creative poetic nonsense. You just want it to give you the facts.

[00:18:04] Hala Taha: Yeah. 

[00:18:04] Mustafa Suleyman: So that's the spectrum.

And what we're trying to do is figure out from a query, like given the thing that you're asking the model, should we put it into a more creative mode or a more precise and specific mode that's more likely to be accurate? And that's kind of the hallucination challenge.

[00:18:22] Hala Taha: How is AGI different from Narrow AI?

What are the main differences?

[00:18:26] Mustafa Suleyman: Well, Narrow AI is the conversational companions that I was describing. It's limited. It doesn't improve on its own. 

[00:18:35] Hala Taha: Okay. 

[00:18:36] Mustafa Suleyman: So You sort of train it within some boundaries, and then it gets good at a specific number of tasks, like it's good at great conversation, maybe it can do document understanding, maybe it can even generate a little bit of code, but We know what its capabilities are to some extent.

That's a narrow version of AI, when we use it for a specific purpose. A more general AI is going to be one where it has things like recursive self-improvement: it could edit its own code in order to get better, or it could self-improve. It would have autonomy; it could act independently of your direct command, essentially. Or you give it a very general command and it goes off and does all sorts of sub-actions that are super complicated, like maybe even invent a new product and create a website for it, and then set up a drop ship for it, and then go and market it and take all the income, and then do the accounts, and so on. I mean, I think that's plausible in, say, three to five years, before 2030.

I think we'll definitely have that by then, and it might well be much, much sooner; it could well be a lot sooner.

[00:19:44] Hala Taha: Oh my gosh, that's so crazy to think about. So how does that challenge the way that we think of us as humans and consciousness and intelligence? Is it going to change the way that we think of what's human and what's not?

[00:19:55] Mustafa Suleyman: Yeah, for sure. I mean, we are going to have to contend with a new type of software. Yeah. Historically, software has been input in, input out. You type something on Airbnb and it gives you a search result page. You play a piece of music on Spotify and that comes out as intended. And software so far has been about utility, it's been functional.

The goal is for it to do the same thing over and over again in a very predictable way. And it's been really useful. We've created trillions of dollars of business value out of it, and unbelievable social human connection and knowledge and all the rest of it. It has been incredible, actually. But we're just about to embark on a completely new phase of the digital journey where computers are now learning to speak our language.

You don't have to learn the language of a computer to interact with it anymore. I mean, you can, it's still important to be a programmer, but it can actually understand your language. It can understand the audio that you give it when you do a voice input: the intonation, the inflection, the volume, the pace, the pauses. It will increasingly be able to see what you see. So not only will you take a picture and it will recognize what's in the picture, it will have complete screen understanding of everything that you're doing in your browser or on your desktop or on your phone, frame by frame. You're browsing Instagram.

It's seeing everything that you're seeing in real time, talking to you about the content of what you're interacting with on Instagram, on TikTok, when you're reading the news, whatever you're doing. So that's a profound shift. That's not a tool anymore, right? That is really starting to capture something meaningful about what it means to be human, because it's using the same language that we use to understand one another.

All those subtle cues that take place in the social bonding of human relationships, suddenly it's going to be present in that dynamic, and it's a really, really big deal. And I think, even though we've had a couple of years of large language models being out there and people getting to play with them in open source, I still don't think people are quite grasping how big a deal this shift is about to be.

 

Hala Taha: You mentioned that you're an atheist now, and you've been central to developing AI, and AI is becoming more human-like. Do you feel like that has altered your perspective about religion a bit?

[00:22:34] Mustafa Suleyman: You know, I think there are many amazing things about religion. Religion was a way that people made up stories, this is in my opinion,

[00:22:43] Hala Taha: Mm-hmm.

[00:22:44] Mustafa Suleyman: in order to make sense of a complex world that was confusing. And then science came along and showed that actually we really can understand the world through empirical observation and a falsifiable process of coming up with a hypothesis, running some experiments, observing those results, subjecting them to peer review, and then iterating on the corpus of human knowledge.

And that's how we have produced known facts that are very reliable, because they keep getting tested over and over again. And when something fundamental happens, we change our entire paradigm. So I think that it's unclear to me what role religion plays anymore in understanding the physical world or even increasingly understanding the social world.

I think that if you look at the contributions in the last, say, century, or even couple centuries, from great poets and philosophers and musicians, storytellers, creators, inventors, most of them have nothing to do with religion, and yet they've taught us most about ourselves and our societies, right? If you really want to understand who we are as a people, as humans, how our societies function, you don't turn to religion anymore, unfortunately.

I mean, some people will obviously disagree with that and full respect to them for their opinion, but that's where I land on it these days. 

[00:24:14] Hala Taha: So interesting. Okay, so let's move on to Inflection AI. You co founded it in 2022, and your vision was to develop AI that people could communicate with more easily.

Why did you feel like that needed to be developed? 

[00:24:28] Mustafa Suleyman: We were at a time when I had just left Google. I'd been working on LaMDA, which was Google's conversational search AI that we ended up never launching. It then got launched as Bard and then it became Gemini. And I was frustrated because I was like, these technologies are ready for prime time.

They're ready for feedback from the real world. They're not perfect. They make a lot of mistakes. We're technology creators. We've got to put things out there and see how it lands. Iterate quickly, listen to what people have to say. And it was just so frustrating that we couldn't get things done at Google at that time.

And so I realized it has to be done. It's time to start a company. Me and Reid Hoffman and my friend Karén Simonyan started Inflection. And we created Pi, the personal intelligence. It was a super friendly, warm, high-emotional-intelligence conversational companion. At the time it was, I think, the most fluent and the highest-EQ conversational companion; it had a bit of personality, a bit of humor. And I think it was an interesting time because we ended up getting a decent number of users, we had like a million, now about six million monthly active users. But the main thing I realized is that some people really love these experiences. I mean, the average session length for Pi was 33 minutes, 4.5 times a week. So that ranks it right up there. It was under TikTok, but right up there. And there's not many experiences like that. And so I think it gave me a real glimmer into what's coming and how different the quality of the interaction is going to be if you really get the aesthetic and the UI and the tone of the personality right.

[00:26:17] Hala Taha: I know that a lot of AI right now is used for work. We're using ChatGPT to help us with work. 

[00:26:23] Mustafa Suleyman: Right. 

[00:26:23] Hala Taha: What do you feel is the importance of emotional AI and having like emotional companionship with AI? 

[00:26:29] Mustafa Suleyman: So far, you're right, ChatGPT is really a kind of work and productivity AI. And it's great, it gives you access to knowledge and helps you rewrite things and so on.

But in a way, it sort of addresses a small part of our, or one important part of our human needs, right? The other part of our needs are to make sense of the world, to receive emotional support, to have understanding of our social complexity. We want to come at the end of the day and vent. A big part of what I think people are doing on social media is posting stuff so that they can be heard.

People want to feel like someone else is paying attention to them. They want to feel like they're understood and that they're saying something that's meaningful. Or maybe they just want to work through something. So in the new co pilot that we've launched just a few weeks ago, At Microsoft AI, we focused both on the IQ, so it's exceptionally accurate, minimizes those hallucinations we were talking about, has access to real time information, really fast and fluent, but we've also focused on the EQ because we want it to be a kind companion.

We want it to be your hype man. We want it to back you up, right? We want it to be in your corner, on your team, looking out for you there when you need it. I think people underrate how important that kind of social privilege is. That is one of the things that gives middle class kids a huge leg up. To always have a parent there.

To always have a stable family with a sibling or, you know, even a best friend available to you whenever you need it. And I think that we're going to just touch on a little bit of those experiences now and make that available via Copilot. 

[00:28:11] Hala Taha: Yeah, it sounds simple. It's so amazing, the future that AI can bring us.

And to your point, people who are a little bit underprivileged, like maybe they have an immigrant parent or a parent who's not very educated, now suddenly they have just as much potential as everybody else because they have the same AI companion. How do you imagine us working with machines in the future?

Like how do you imagine personal AI to be like in the future? 

[00:28:37] Mustafa Suleyman: Yeah, I think you're going to say, Hey, co pilot, or whatever you call your personal AI, I'm stuck. What's the answer to this? How should I navigate that? I need to go buy this thing. Can you take care of it for me? Will I be available in a week to do this thing?

You're going to basically use it as a way to organize your life and spend less time being distracted by administration and more time pursuing your curiosities. Especially in the voice mode, I think you very quickly just forget that this is actually a piece of technology, and it feels like you're just having a great conversation with a teacher that is patient, nonjudgmental, doesn't put you down, has infinite time available to you, and will wander off on the path that you choose to take through some complicated question.

And it doesn't matter how many times you go back and say, can you explain that again? I didn't quite understand that. What do you mean? No problem. You just get to keep digging in a completely personalized way. That is gonna be the greatest leveling up we have ever seen. Because it's expensive, socially. It costs me something to turn around to one of my friends who knows a lot about something and pick up the phone and say, hey man, can you walk me through this thing?

And obviously my friends will do it, but there's a barrier there, right? It's not instant, I'm really asking for something. Or it's a cost for me to unload on my friend at the end of the day when I'm frustrated and irritated, and I want to show up to my best friend in the best possible way and have fun and be bright and energetic. And I'm still going to have those emotional moments with them.

It's not that they're not going to be there. It's just that it now lowers the barrier to entry to get access to support and knowledge and information. And that is what is going to be the level up. 

[00:30:32] Hala Taha: So with technology in the past, we've seen it actually make us become more lonely. Everybody says there's this loneliness epidemic, social media makes us more lonely.

Do you feel like this new wave of AI and personalized AI in this manner is actually going to help us become less lonely and replace human connections, so to speak? 

[00:30:53] Mustafa Suleyman: Every new wave of technology leaves us with a cultural shift, right? It doesn't, it's not just static, right? It's gonna have some impact. And it may be the case that social media has made us feel more lonely and isolated.

And we have to unpack that. Like, why is that? People report feeling lonely, but another way of thinking about it, I think, is that people feel judged by social media. They feel excluded. They feel not good enough. Why is that? Because I think Instagram really dominated in highlighting a certain visual aesthetic.

I need to be big and have muscles, right? I need to get my fashion on point. Oh my god, look at how good a cook she is. She's making incredible food. Look at how she's taking care and he's taking care of their kids. I'm just looking at all these perfect caricatures and it's making me feel insecure. So, I think that's what's at the heart of it.

And what sits beneath that is a UI. A UI that rewards a certain type of attention. And I think you can create new UIs. You can create new reward mechanisms, new incentivizations, to dampen that spirit and create more breadth, actually. And actually I think in a way TikTok is an evolution of that, because you don't get as much of that on TikTok.

You know, a lot of the comments even are much more healthy. They're full of jokes and support and friendly banter. Whereas YouTube was just spiteful; I remember the YouTube comments back in the day seemed really, like, rough. And obviously X, I mean, I don't know who uses that, that's turned into something suspect. So I just think you have to be conscious and deliberate. I'm sure that when we put out new experiences, there are going to be some parts of society that get ruffled by it, and my job and my life's work is to be super attentive to those consequences and respond as fast as possible, to trim the edges and reshape it, like a sculpture. And you have to just be paying full attention and taking responsibility for the real-time consequences of it.

[00:33:03] Hala Taha: Now you are CEO of Microsoft AI, and you guys launched Copilot about a year ago. Can you walk us through how Copilot has transformed work for Microsoft users over the past year, and then we'll get into what's new.

[00:33:15] Mustafa Suleyman: So we launched Copilot about a year ago very much as an experiment to see how people like to interact with conversational LLMs.

In the work setting, it's pretty incredible to see how Copilot is now embedded in Microsoft 365. So, on Windows, on Word, on Excel, there's so many tools and features that enable you to just ask your Copilot, whilst you're in the context of your document, to summarize something, or create a table, or to create a schedule, or to compare two complex ideas.

And so I think it's had a massive impact there, actually. It sort of doesn't get talked about so much because it's been so embedded. And, you know, now it's become like second nature. It's like part of people's everyday workflow. 

[00:34:01] Hala Taha: I've used Copilot before. And to your point, it just feels so natural. Like, I feel like we're all just so ready to have an AI companion.

So talk to us about the future of what Copilot is going to bring. 

[00:34:12] Mustafa Suleyman: Well, the next wave for Copilot is this flavor of much more personable, much more fluent, much more natural interactions. So it's fast, it's sleek, it's very elegant in the UI. We've done a lot of work to pare back the complexity. I think that people want calming, cleaner interfaces.

I feel like when I look at my computer sometimes, it's like, I see colors of every type, shapes, different kinds of information architectures, and it's just like this blur. And I just need serenity and simplicity. So we're really designing Copilot to go out and fetch the perfect nuggets of information for you and bring them into your clean feed and really create a UI where you can focus on conversation.

So the answers are designed to be pithy, short, humorous, a little bit of spice, a bit of energy, and you know, it's fun to chat to as well as learn from at the same time. And so that kind of interactive back and forth was a big part of the motivation for how we designed it.

[00:35:14] Hala Taha: Are you bringing some of the emotional piece that we were talking about before into this?

[00:35:18] Mustafa Suleyman: Yeah, it's really designed to have a bit more of that connection. You know, it'll ask you questions or if you're in the voice mode, for example, it will actively listen. So it will go, Uh huh, or no way, or right whilst you're speaking to let you know that it's listening, it's paying attention, it's keeping the conversation moving.

And so just little subtle touches like that, as well as the intonation in the voice and the energy that it brings, those kinds of things are really quite different to what we've seen before. 

[00:35:50] Hala Taha: Are there any other advancements that you're working on at Microsoft related to AI? 

[00:35:55] Mustafa Suleyman: Yeah, there's a lot coming.

You're going to see a different kind of hardware platform. I think over time, you're going to see a lot of different features in terms of personalization. So I think increasingly people are going to want to give their copilots their own name. Who knows, one day in the future, you know, it might have an avatar or visual representation.

So we're thinking about a lot of different angles. 

[00:36:17] Hala Taha: How far off do you think we are from Copilot being more than a tool and more like a coworker?

[00:36:23] Mustafa Suleyman: I think that it's naturally going to evolve to be more of a coworker, because you want it to be able to fill in your gaps, right? You know, you have certain strengths and weaknesses.

Some of us are more analytical, some of us are more creative, some of us are more structured. You can think of each one of us as this unique key that fits a perfect lock, with our strengths and weaknesses. And I think that each copilot is going to adapt to the grooves of your unique constellation of skills.

And so if it fits to you, it kind of means that you and your copilot are going to be a pair. You're going to be a powerhouse. I mean, who knows, one day you might even go and do job interviews together, because it's going to be like you're hiring me and my copilot. We're a pair. You know, it could well be your co-founder.

I'm expecting anytime soon, people to declare that it's them and their AI starting this new company. 

[00:37:29] Hala Taha: Oh my god, that's mind-blowing to think about. I can't even imagine a world like that. Do you have any concern? When GPS came out, for example, it was around the time when I was younger and driving, and I can't for the life of me get anywhere without GPS now, I'm so dependent on it.

I only know how to do things that I did when I first started driving and didn't really have GPS embedded in my car, and I can't memorize phone numbers anymore the way that I used to. And I just am worried that AI is going to make us maybe more lazy, maybe less creative. Are you worried that it's going to impact human intelligence in a negative way?

[00:38:11] Mustafa Suleyman: People said that about the calculator, right? That it was going to lead to kids cheating in tests and so on and so forth. And it didn't. It just made us smarter, enabled us as humans to do more complex computations. And I don't see any evidence that it's making us dumber in any way. I mean, we have overwhelming access to information.

And I think that, on one level, that has actually made us all way more tolerant and respectful and kind. People tend to fixate on the polarization, politically, in our society. But actually, think about it from the other perspective. 20 years ago, take your pick. Abortions, religion, sexuality, gender, trans rights.

I mean, take a pick. All of those were decades and decades behind where they are now. It is amazing how bright and beautiful and colorful our world is now and how respectful and kind we are on the whole. Now there are still pockets of fear and hatred and there's plenty of that, but there's also massive, massive progress.

And I think that progress is a function of us having access to knowledge about one another, living together, growing together, hearing from one another, and I think that's going to continue.

[00:39:27] Hala Taha: So when it comes to tools like Copilot, right, a lot of my listeners are entrepreneurs, they're rolling this out to their teams, AI in general, a lot of people are worried about the accuracy and the bias related to AI.

How can we trust AI more, or do you feel like there's still more work to do in terms of us fully trusting AI? 

[00:39:47] Mustafa Suleyman: Yeah, there's still more work to do. I'm painting a rosy picture of the future. It's gonna be a while for these things to actually work perfectly. So you've always gotta double check it. Do not rely on these things just yet.

At the same time, you would be a fool not to use them, because it really is a complete revolution in access to information and support and so on. So the good news is that if you're starting a new business, or if you're trying to figure out your next move in life in general, everything is available open source.

You can try any model on any API. You can get access to the source code quite often and really get a really good understanding of the cutting edge. You can't get that absolute cutting edge in open source. You can get very, very close. And I think that will give anybody a good instinct for how these things can be useful to your business or to your startup or to your next step in life.

 

 

 

[00:40:54] Hala Taha: So let's move on to some future predictions. I know that Microsoft is working on some AI. Projects related to sustainability. Can you talk to us about what you guys are working on? 

[00:41:05] Mustafa Suleyman: Yeah, sure. So we're actually one of the largest buyers of renewable energy on the planet. And that's a long-time commitment by the company, to be net zero by 2026 and carbon negative, so taking carbon out of the supply chain, by 2030. So in order to do that, we've also been massively investing in new technologies and new science. For example, massive investments in battery storage, in nuclear fusion projects, in carbon sequestration, so taking carbon out of the atmosphere, across the board.

We've been making this a priority for quite a long time. 

[00:41:44] Hala Taha: Thank goodness, because climate change is so important. So let's go back to this containment problem that we talked about right in the beginning of the podcast. Can you compare and contrast what the world would look like 10, 20 years from now if AI is contained and we use it in a really positive way versus AI not being contained and it getting out of control?

[00:42:04] Mustafa Suleyman: You know, I think that one way of thinking about it is that cars have been contained. So cars have been around for 80 years, and we have layers and layers and layers of regulations: seatbelts, emissions, windscreen tensile strength, street lighting, disposal of the materials after a car's life is over, driver training. Now there's an entire ecosystem of containment built up to prevent a 13-year-old from driving through a field and crashing into a cow or whatever. You know, it's a whole infrastructure for containment that takes time to evolve, and we've actually done it pretty well, and cars are now incredibly safe. Just like airplanes, just like drones.

Drones haven't just suddenly exploded into our world; they've had containment. There are rules, you can't just go fly a drone in Times Square, you have to get a permit, you need to get a drone license, there are certain places that you can't fly them at all, like near airports. So this isn't actually that complicated. It's quite likely that we will succeed in putting the boundaries around these things so that they're a net benefit to everybody.

[00:43:24] Hala Taha: That actually makes me feel a lot better. It really does because, you know, AI is a scary thing to think about. All these changes happening, but it's so good to hear from you, somebody who's been so central to it all, to really believe, Hey, I think it's going to be okay. I think we're going to be able to contain this and things will overall be positive.

But how about the fact that power is sort of decentralized now with AI, right? A lot of people can just use it and run with it, and there's some bad apples out there. So what do you have to say about that? 

[00:43:57] Mustafa Suleyman: That's a tough question. There are definitely bad apples and there are definitely people who will misuse it.

That's kind of the conundrum, I think: we have a two-pronged challenge. One is figuring out how nation states and democracies get into a place where they can regulate the powerful big companies like mine, where they can hold us accountable and they can make the public feel like they're competent and they're on the case regulating centralized power.

And then the second is how do we cope with the fact that everybody's going to have access to this in seconds? And we want them to, it's just not like we don't want people to have access. We want people to have access in open source. And I think the important thing to remember is that so far we haven't seen any catastrophic harms arising from open source or in the large scale models, right?

So all of this is speculation. Everyone could be totally wrong. It could be that we're actually not going to progress as fast as we thought. It could be that we come up with really reliable ways of instilling safety into this code. Certainly could be the case and there's no reason for us to start slowing down open source right now.

None at all. It needs to continue because people get enormous benefit out of it. At the same time, if something were to go wrong, it's just software, so someone can copy it and repost it and post it again, and it's going to spread super fast. So I just think it's a new type of challenge that we haven't faced yet.

I think people often forget the internet is quite regulated. It's not like the internet is this kind of free-for-all, chaotic, open domain, right? There's a whole bunch of things that you can't find on the web, or you have to really work hard, to get on the dark web and find pretty ugly stuff. And it's illegal, and if you get caught doing it, you're in deep trouble and stuff like that.

So it's not like it's just going to be a total free-for-all, and, you know, in general, I think most people do want to do the right thing. So this isn't about worrying about your average user, this is about a tiny number of really bad apples, as you say.

[00:46:03] Hala Taha: Yeah, to me it sounds like you're saying, we're well prepared.

We've seen these new technologies before; humans have been dealing with new technologies over the last, you know, 200 years or so. And it sounds like you're saying you feel like we're well prepared for the AI revolution.

[00:46:18] Mustafa Suleyman: I think that we are more prepared than the scaremongers make us think. That does not mean everything is going to be dandy.

There's a lot of work to do, and each new technology is new to us. By definition, it's something we haven't seen before. When I was writing my book, I read about this amazing story of the first passenger railway train that took a trip in Liverpool. And this is 1830, so it's the first time anybody has ever seen a moving carriage, essentially.

It was a single carriage on rails. And the prime minister came down to celebrate. Tons of people there, the mayor, and there was like the local MP. They were so excited by what was coming that they actually stood on the tracks and they didn't get out of the way when the train came and it killed a bunch of people, including the local MP.

[00:47:17] Hala Taha: Oh my god!

[00:47:18] Mustafa Suleyman: So it was that alien and that strange, just a regular moving carriage, that they couldn't figure out that they needed to get out of the way. That obviously, I'm sure, never happened again. Um, but it gives you an indication sometimes of how surprising and strange it can be and how we can be unprepared.

And now, obviously, we're not living in the 1830s. We have the benefit of hundreds of millions of inventions since then. And so we understand a lot more about the process of inventing, creating technology, seeing it proliferate. We understand a lot more even about digital technologies in the last couple decades, right?

We see the consequences in social media, right? We see how unencrypted phones cause security chaos. We see how hackers try all sorts of different tricks of the trade to kind of undermine our security and privacy and so on. So there's an accumulation of knowledge that on the one hand should make us feel optimistic that we are prepared.

On the other hand, we also know that these are unprecedented times and these are very new and fundamentally different experiences. And so I don't think we should. You know, we shouldn't be complacent. You know, this is, it's going to be different. 

[00:48:33] Hala Taha: How do you envision the future of work? Do you feel like people will have the traditional sort of nine to five job that they have right now?

Or do you feel like humans are going to be able to be more creative and enjoy their life more? Like, how do you imagine that? 

[00:48:46] Mustafa Suleyman: Yeah, I think that work is going to change. I mean, it's already changing. We're already remote, we're on our devices. I honestly do half of my work on my phone because I travel a lot, always making phone calls, sending text messages and, you know, using messaging and Teams and so on. So it is going to be very, very different. And I think in an AI world, you're going to have your companion with you, remembering your tasks, helping you get things done on time, helping you stay organized and on top of all the chaos. I think that should make you feel lighter and more prepared mentally, more ready to be creative. And that's what's going to be required of us, because routine work is going to go away.

A lot of the drudgery of elementary digital life is going to get a lot smoother and a lot easier. And so that's going to free us up to be creative as entrepreneurs. I think some amazing things are going to be invented as a result of that extra time that we're going to have.

[00:49:50] Hala Taha: So you don't feel any worry that humans are not going to have purpose anymore?

[00:49:54] Mustafa Suleyman: I think that work was invented because we had limited resources and we had to organize ourselves in efficient ways to reduce suffering. So what happens when we don't have as much of that burden, and actually there is going to be resource available for millions of people, and the greatest challenge we have is figuring out how to distribute it and how to make sure that everybody gets access?

So I don't think there is something inherent about the human connection and need for purpose with work. I think many people find their purpose and passion in a gazillion other things that we all do. Right? Many people also find it in work too. And that is going to be a big shift, because if you find passion in that kind of drudge work that I described, it's not going to be there, you know, in 20 years' time. So you're going to have to think hard about that.

So you're going to have to think hard about that. But I think it's pretty exciting. People are going to be able to find many, many new purposes and many, many new things to do with their lives. 

[00:51:01] Hala Taha: Do you feel like we're going to live longer because of AI? 

[00:51:04] Mustafa Suleyman: Yeah, I think so. I don't know how long. I'm not one of these live forever type people.

We're already living longer because we have a much better awareness of health conditions. I'm just thinking about how many people died because everyone thought smoking was okay, right? And how many lives were cut short? And now, so few people smoke, right? And, you know, people are aware about the consequences of alcohol, or unhealthy food, or sitting on the couch.

Just that alone, again, access to information, scientific evidence, proving that these things actually do lead to longevity. That's all table stakes now. And so for sure there's this kind of bump that we're gonna get in 60 years time when a bunch of people who've grown up since their teens thinking that living a healthy life is the normal thing to do, instead of how I grew up, which is like cigarettes and alcohol and da da da da da, like.

So obviously there's still room for all that kind of stuff, but it is a different thing now. And then on top of that, we're going to have AI tools that help us to really make sense of the literature for us in a personalized way. We can really see what kind of nutrition we might need, given our gut biome.

That whole sort of movement is only just starting to kind of have an effect. And I think it's going to be pretty impactful.

[00:52:22] Hala Taha: So let's move on to entrepreneurship for all the entrepreneurs out there. What should they be doing now to take advantage of AI?

[00:52:32] Mustafa Suleyman: All the tools are already at our fingertips. I mean, in some ways that's like an overwhelming thing, you know, because it's like, it's just that there's nothing holding you back. I mean, there's no secret sauce. My team has some little bits and pieces here and there that might not be available, but most of it, the knowledge, the know-how, the cloud services, the open source stuff, the YouTube videos, it's all there.

And so it is an electric time. Someone came up to me at a book signing that I did the other day. She's 15 years old, and she was showing me this unbelievable project that she had been working on, made a bunch of money, strung together from all available public tools with two of her pals, thinking about dropping out of school.

It blew my mind. And you know, I think that is just there, if you're hungry and, you know, if you're ready to take risks. This is the thing I say to people: take risks when you're young. Take risks. I took a lot of risks. Drop out, change your degree, switch your subject, give up work, maximize your side hustle, partner with a friend that you're not sure about partnering with, go ask a question. People want to help.

Just ping them an email. Doorstep them. People don't doorstep each other anymore. You know, back in the day, people would like, wait outside.

[00:53:58] Hala Taha: I don't even know what that means.

[00:54:00] Mustafa Suleyman: It means like, after a show, or you know, outside a theatre.

[00:54:07] Hala Taha: Oh, like wait for somebody?

[00:54:07] Mustafa Suleyman: Yeah, that used to be a big thing.

That used to be how people did networking, right? They would like, go to an event because they knew that someone was going to be there, and then try to, like, build the connection. Physical connections were huge: find that moment to shake someone's hand, show a quick demo, drop them a note. It's hustle. It's just hustle culture.

That's what you got to be on. If you really want to do it and everyone's in the game. So what an amazing time to be creative, take that risk. 

[00:54:34] Hala Taha: It's so exciting. And I hear the passion in your voice. And I feel like anybody listening right now probably feels like so pumped to just explore, see what's out there related to their industry and just get their hands dirty.

[00:54:45] Mustafa Suleyman: Yeah, because I did that as well, right? I mean, I dropped out of my degree. I switched out my careers a bunch of times. I wasn't afraid to ask people for help. And it was really, the reason that I've been successful is because a few people gave me unbelievable opportunities at the right moment when I was really young, like help me get into a great school.

I ended up going to Oxford. Like, help me get a great early job, help me when I started my telephone counseling service when I was 19. It's really other people that end up lifting you up, and you have to form those relationships, give so much thanks and praise to those who do do that, and then keep giving it to other people too.

I reply to a lot of my LinkedIn messages. I won't say I reply to all of them because that would be a lie, but I do reply to a lot. I certainly reply to all my emails, and people cold email me all the time, and I might not be able to help them, but I'll reply and I'll point them in the right direction or something, because that's really what matters.

And when you see someone who's taken that extra step to try and hustle it, like I really rate that and I think it's the way forward. 

[00:55:47] Hala Taha: And my listeners know that I interview such powerful people. I find that the more powerful the person, the more helpful they are, and the more they actually care about giving back, giving feedback, and being personal. What ends up happening is that they probably have more time because they're already successful and have made it, and it means a lot to them to actually give back.

[00:56:08] Mustafa Suleyman: Yeah, and I just got super lucky as well. Like, I've obviously done some things right, but luck is a huge part of it, and you kind of make your own luck by asking people for favors and help and advice and feedback. I've just learned everything along the way by assuming that I know nothing. That's the key thing: I'm not embarrassed, even to this day, in front of my team, to ask the stupid question, and quite often I end up looking like an idiot, probably one out of five times, maybe even one out of four.

I will say something and I'm like, oh, that was a clanger. Oops. But then a bunch of other times it will be like, oh, that was the thing that everyone was thinking. And my team, seeing me trip up and look like a doofus, is encouraged to go and ask the stupid question too. And then we're all just less judgmental.

There's none of this professional nonsense, like you have to be all formal and straight. You just break down those barriers, be human to one another, collaborate deeply, set aside shame. Do you know what I mean? Shame is one of the most useless emotions. I'm so sorry that we evolved to carry this thing.

Frankly, I think it comes from religion, but that's another story. I just think, why are we carrying around this baggage of shame? There's no need to be ashamed. You know, just recognize when you've tripped up, make a correction, take the next step.

[00:57:27] Hala Taha: You're talking about leading your team. You've led so many teams at DeepMind, Google, now Microsoft.

What are some of your key leadership principles that you live by? And maybe talk about what is one of the biggest challenges that you've had so far as a leader? 

[00:57:41] Mustafa Suleyman: My style tends to be very open and collaborative. I like to hear lots and lots of disagreeing voices. Strong opinions are healthy, provided they're grounded in wisdom and humility.

They need to be evidenced, right? They need to reference some historical example, or some data, or some empirical case, or they need to be explicitly named as a guesstimate. I don't mind that either. One thing I often say to people is: deliver your message with metadata. Such a basic thing.

Say to the person: I'm really sure about this because I've looked up this fact. Or: I'm really not sure about this. Or: I'm looking for feedback on this one. Or: this is just an FYI. Letting each other know the status of our exchange matters; often I see conflicts arise from mismatched expectations about what two people want from an exchange.

So that's one thing: clear communication. It's evidence-based, there's a lot of humility, it's very collaborative and open, but I'm also very decisive. Fundamentally, my job is to sift through all the complexity and make a call, because there's really nothing worse than not having clarity for the team. Even if we end up going in the wrong direction, that's totally fine, because we'll recalibrate. We have a process for feedback and iteration and retrospectives, and that will really, really help.

One thing that I've struggled with is that in larger organizations, naturally, because there are tens or hundreds of thousands of people, there are different people with different motivations. In a startup, you know that everyone is do or die. Maybe there's one or two that aren't, and they get rotated out, but people are there for it.

And in a bigger organization, that's not always true. Some people are just kind of happy in the rhythm that they're in. And so one of my learnings is to learn to energize those people and find a practical flow to get them in, get them being useful.

[00:59:53] Hala Taha: Well, I want to be respectful of your time because we're running out of time here.

So one of my last questions to you is what is the legacy that you hope to leave behind related to AI and the world in general? 

[01:00:04] Mustafa Suleyman: Oh, man, I don't think about legacy. I think about the future. But you're totally right. I mean, I hope that I'm able to live my values authentically, let people know what I'm trying to do, give people an opportunity to disagree with it, but fundamentally move at pace to experiment with this new approach of AI companions and emotionally intelligent AIs. You know, I really want to try and help steward this new moment with kindness and compassion.

That's what means a lot to me. 

[01:00:37] Hala Taha: Well, I really enjoyed this conversation. I've had probably 10 conversations about AI. This is by far my favorite one. I feel like you made me feel not scared about AI and I feel like I know so much more about it. So the last two questions that I ask all my guests. You don't have to make it based on AI, it could just come from your heart.

What is one actionable thing our Young and Profiters can do today to become more profitable tomorrow?

[01:01:01] Mustafa Suleyman: I think the most important thing has got to be to be critical of yourself. Retrospectives are key. Ask for feedback from your friends, your family, and especially ask for feedback from people who you think, you know, are going to give you something a little bit barbed.

You don't have to take it, but at least be aware of the landscape and get in the habit of not having a thin skin. That will make you tougher and stronger for everything that you're going to encounter next.

[01:01:31] Hala Taha: I think I needed to hear that. And what would you say is your secret to profiting in life?

[01:01:36] Mustafa Suleyman: Learning and humility. I have made so many mistakes and I still make mistakes all the time. And I've upset people. I've hurt people. I've pissed people off. And I don't like doing it. I hate it. It grinds at me, but it's my fuel because that's the signal I need to get better every day. And I know that the one thing I can do is I have this process of getting better step by step.

And I've been doing it since day one, and I just love it. That's what I live for: learning.

[01:02:08] Hala Taha: Amazing. And where can everybody learn more about you and everything that you do? 

[01:02:11] Mustafa Suleyman: Well, I'm on LinkedIn. I'm not on Instagram or TikTok, unfortunately. Obviously, I'm also a huge believer in our Copilot AI, so you can download that on iOS, Android, or at copilot.microsoft.com.

[01:02:26] Hala Taha: Good news for you. A lot of our YAP listeners are on LinkedIn. We talk about LinkedIn all the time. So I'll put your link in the show notes. And thank you so much Mustafa for all of your time. 

[01:02:34] Mustafa Suleyman: This has been awesome. Thanks so much. 

[01:02:40] Hala Taha: Man, I have to say guys, that was one of my favorite conversations I've had all year. That conversation blew my mind. Mustafa Suleyman is somebody who's right at the cutting edge of AI, technology that's going to change how we all live our lives in the coming years. And Mustafa is both optimistic and pessimistic about the future of AI.

On the one hand, he believes the technology could deliver the greatest boost of productivity in the history of our species. And we've had a lot of productivity booms. But like all big innovations, it's also going to be hugely disruptive. It could destabilize our lives, our politics, and our workplaces in unpredictable ways.

We don't even know what's about to happen. It could, for example, worsen the loneliness and isolation epidemic that so many of us suffer from in this online world. Or AI can bring us together in new ways that we never even thought possible. Mustafa sees AI as a force amplifier, like having a super intelligent personal assistant right at your fingertips who will help you invent new products and design new solutions.

This AI might accompany you to a job interview or even be the co-founder of your next business. AI solutions like Copilot could be the ultimate hype man for you and your business, something that could open up opportunities and paths to entrepreneurship across much broader sections of society. However optimistic or pessimistic you are about the future of AI, the technology is here to stay, and you're going to need to know how to use it if you want to succeed in the business landscape of the future.

I personally use AI every day. I use ChatGPT every day to get my work done. I use Google's NotebookLM to get my work done. We're using ElevenLabs for my audio AI experimentation. We're using AI every day at YAP Media. And I'm doing this because I want to make sure I understand how to leverage AI in my day-to-day tasks.

I want to get really good at directing AI and training my AI. And I just feel like it's so important for my future so that I stay relevant. And I want to make sure you guys all get that message: get out there and give AI a try. Google some apps, do some research, figure out how AI can help you in your day-to-day right now, because there are thousands of apps out there that you can play with.

Find the ones that help you accelerate your work and get used to working alongside AI, because that is the future. Make yourself a new friend, buddy, or copilot. They might just end up being your future business partner. Thanks for listening to this episode of Young and Profiting Podcast. Every time you listen and enjoy an episode of this podcast, please share it with somebody that you know.

Maybe someday your AI personal assistant will be able to do that for you. But until then, we depend on you to share our content. And if you did enjoy the show and you learned something new, if you love to listen to YAP during your workout, during your commute, while you're doing chores, if you've made it a habit to listen to this podcast and you love it so much, write us a review. Tell the world how much you love Young and Profiting Podcast.

It is the number one way to thank us. We get reviews every single day and they always make my day. If you prefer to watch your podcasts as videos, you can find us on YouTube. Just look up Young and Profiting and you'll find all of our videos on there. You can also find me on Instagram at Yap with Hala or on LinkedIn by searching my name.

It's Hala Taha. And before we go, I of course have to thank my incredible production team. Shout out to our audio engineer, Maxi. Thank you so much for all that you do. Shout out to Amelia, Corday, Christina, Sean, Hisham for Khan. It takes a village to put on this show. So shout out to my entire team for doing incredible work.

Young and Profiting Podcast is a top business show and it's because of your hard work. So thank you guys so much. This is your host, Hala Taha, AKA the Podcast Princess, signing off.

 
