Elon Musk has warned that “we are on the event horizon of the singularity.” So, what’s an event horizon and what’s the singularity? Glenn pulls out a chalkboard to explain why this is such a massive story. What will the world look like when artificial intelligence overtakes human intelligence? And is this why Elon Musk wants to go to Mars? But at least Oracle co-founder Larry Ellison is here to save the day! Or … maybe not.
Transcript
Below is a rush transcript that may contain errors
GLENN: So Elon Musk said, we are on the event horizon of the singularity. Tweet!
And most people were like, okay. Sounds like something from a science fiction movie. But you should know the way Elon Musk defines the singularity. Because there are several different versions of what the singularity means. So how does he mean it?
It is a point in the future, where artificial intelligence surpasses human intelligence. So that's the road from AGI, artificial general intelligence, to ASI. That leads, he believes, to a rapid and unpredictable transformation of society. Oh!
Oh, well, that sounds like fun. Stu, I think we're back to our old friendly phrase. Well, this will be fun to see how we work this out.
STU: Yeah. It will be wonderful as a fan in the stands, watching this all play out.
GLENN: Now, he often compares the singularity to a black hole event horizon.
Oh. What is that? Well, for those of us who have been near and in and out of black holes, let me tell you.
They're not exactly fun. The event horizon is right at the lip. You know, right before you go, dear God, turn the ship around!
And then you can't? That's the event horizon. And then it sucks you into the black hole, where you cannot get out.
And eventually something called spaghettification happens. Where everything is turned into spaghetti.
Have another meatball.
Now, sure, as a fat guy screaming to get out. I love anything that is turning everything into spaghetti.
But it's not the kind you eat. It's the kind that everything is shredded into. Like you. And everything you know.
And all physics. Everything breaks down. So it's -- it's not a good place to be. Not a good place to be.
He sees this as the moment when AI becomes vastly smarter than humans. I put a chalkboard together, and let me show you. This is the point where AI has a big brain, and you and me, we have an ant brain.
Not a good place to be. Usually, the ants don't win. Now, I've been on picnics where the ants won, for a while.
And then I came back with something, and I wiped them out. It's kind of like what, you know, could possibly happen here. Not saying it's going to.
STU: So if we look up and see a giant magnifying glass in the sky, we'll know what's going on?
GLENN: Is that a giant magnifying glass that's coming from space? Musk sees it as a moment when AI becomes smarter than humans, potentially in silicon form, and begins to improve itself at an exponential rate, making outcomes difficult to foresee.
(laughter)
I love it! Do you know when we -- when we were doing the atomic bomb, the Manhattan Project.
Did you know that there was like 5 percent of -- of scientists that went, you know, if we set this off, there is a small probability, small possibility, that we could set the entire universe on fire!
And everybody is like, well, that would suck! Let's keep going! Okay.
Didn't turn out that way. Right? Small. Small probability. This one has a much bigger probability! That we become ants. Well, I mean, no. Let's trust the scientists. What could possibly go wrong?
I mean, surely, they've thought of everything, right? So this is a technological milestone.
This is, you know, where our human intelligence, and the gap between us and the machine, we have no way to predict anything, anymore.
In fact, the singularity is where he says we are now!
And let me tell you something. When we get to the singularity, we all have to be on Mars.
Pretty sure that's what he said. It's just happening a lot faster than anyone thought it would.
Now, don't panic!
Because we have Larry Ellison, the co-founder of Oracle, and one of the biggest names in AI development, here to save the day. He recently spoke at the World Government Summit, and who hasn't been to that summit? You know what I mean?
It's an annual event that we've covered extensively in the past. These are The Great Reset people. And The Great Narrative people. All coming together. And, you know, just going, are you part of the -- of the World Economic Forum too?
And they're all like, yeah! Are you for global governance?
Yeah. In our books, Dark Future and Propaganda Wars, we covered the World Government Summit. And why?
Hmm. It's kind of like a giant magnifying glass in the sky. During a question-and-answer session with Ellison on February 12th, hosted by former British Prime Minister Tony Blair, who doesn't love that guy and trust him?
Ellison laid out his plans for AI in the United States. And I don't know!
I think possibly a little terrifying. You know, just a little bit. Do we have any of the audio? Yeah. Let's roll some Larry Ellison here.
VOICE: Question. How do you take advantage of these incredible AI models?
And the first thing a country needs to do is to unify all of their data, so it can be consumed and used by the AI model. Everyone talks about the AI model. And they are astonishing.
But how do you -- how do you provide a context?
I want to ask questions about my country. What's going on with my country?
What's happening to my farms?
I need to give it my country's data. Now, it probably has your climate data already. But I need to know exactly what crops are growing, and on which farms. And to predict, to predict the output.
So I have to take satellite images. I have to take those satellite images, for my country, and feed that into a database, that is accessible by the AI model.
So I have to tell -- basically, I have to tell the AI model, as much about my country, as I can.
You tell part of this story with these satellite images.
You get a huge amount of information. You tell it where borders are. Where your utilities are. So you need to -- you need to provide a map of your country. For the -- for the farms, and all of the utility infrastructure. And your borders, all of that you have to provide.
GLENN: Right. Order.
VOICE: But beyond that, if you want to improve population health.
You have to take all of your health care data. Your diagnostic data.
Your electronic health records. Your genomic data.
GLENN: That sounds great. Sounds great.
So we, according to Larry Ellison, we want to take all of the world's data, from all over the world.
I mean, all the way down to your DNA. And put it into this giant machine.
Then he talks about how great it is that in some countries, like the United Kingdom, and the United Arab Emirates.
Governments already have tons of data about their citizens, but Ellison says that the data in other countries, like the United States, is not being utilized. It's not!
So how does he suggest we solve this problem?
Listen up!
VOICE: In the Middle East, in the UAE, for example, they're incredibly rich in data. They have a lot of population data. The NHS in the UK has an incredible amount of population data. But it's fragmented. It's not easily accessible by these AI models. We have to take all this data that we have in our country, and move it into a single, if you will, unified data platform.
So that -- so we provide context. When we want to ask a question, we have provided that AI model with all the data it needs, to understand our country.
So that's the big step. That's kind of the missing link.
We need to unify all the national data, put it into a database, where it's easily consumable by the AI model, and then --
(music)
GLENN: Oh, I love this. (foreign language).
That is going to work out well!
There are the Jews!
Man, what could possibly go wrong?
Remember, Ellison is one of the leading forces behind AI development today.
He's a key partner in Project Stargate.
Which is sounding more and more spooky every time I say it.
It could be the biggest AI project in world history by the time it's finished.
And how does he want to use this new technology?
He wants everybody's data, that's it.
Even your health records.
Your DNA. Your biometric data. What could possibly go wrong there?
It's not really good. Oh, what do you know?
These people are exactly who we warned you about two years ago, except now they're more powerful than ever! And we're on the event horizon. Okay!
Now, you know, I'm not a fan of regulations and government intervention. I don't like it. I don't want the United States government to have all this power. But I also -- I'm not really excited about people like Larry Ellison having it either. You know, I have a feeling though, that it's becoming more and more likely, that both of them are in it together!
(laughter)
What could go wrong?
How do we get a ticket to Mars?
Because for the very first time, I think I'm kind of interested in going to Mars. Yeah. But you could step out. And you could freeze immediately.
I live in Dallas. That could happen on any day, as well. I could walk out. Burn to death. Freeze to death. I don't know. I don't know.
One day it's 110. The next day, it's like 80 below. I don't know! Is that different than Mars?
It could be. Here's what we do need!
Good state governments like Texas to step up to the plate, and make sure these AI projects don't get out of control. Because we're at the event horizon!
Now, when Elon Musk says that, just a quick tweet, you can dismiss it. But when you know in the past, he has said, when we get to that point, we should all be off the planet!
Oh. I don't know.
Oh, yeah. Oh, yeah. So that makes you feel good, doesn't it, Stu?
STU: Sure. Yeah. Uh-huh.
GLENN: So a lot of people keep thinking that AI is like Alexa. Here's what I found on the internet. No. It's not that. It's not that.
STU: Is it? Will it misunderstand every song I tell it to play? Because that -- that's my favorite feature, of that device.
GLENN: No. No, it won't. No, it won't.
If you haven't played around with Grok 3, you should.
Don't just ask it simple questions.
Whatever business you're in, ask it some really hard questions about your business. And you will be amazed.
You will be like, oh, crap. It understands everything that I'm saying.
And it's giving me really good advice.
And this is Grok 3. Grok 4 and 5, Elon says, are coming out soon. And he said they will make this look like babies in diapers.
STU: Do we know why all of these devices, from Siri to Alexa to Google, which has its home AI, right?
Why are all the devices so terrible?
GLENN: I'm glad you asked that, Stu. I have the answer. Quick, let's go to the chalkboard.
So, see here on the chalkboard. We have a giant tank. Kind of like a gas tank.
STU: Underground.
GLENN: Yeah. Yeah. And that's where all of AI is. That's where it's just churning kind of in the dark. Nobody understands it.
Nobody can really look into it. And just like, how is it thinking?
We don't know. But it's connected with an input, so it can constantly get data from the outside. So it knows everything about us. And it knows absolutely everything that's going on, all the time.
All right?
But then at the other end, all of that data goes in, and then it's just thinking, like, why did they bury me in this tank?
And then on the other side of the tank, coming up out of the ground is a little spigot. And it's got a little valve there.
And that valve goes to things like ChatGPT. And Grok, and things like that. It doesn't go to Alexa.
That is still on the old AI. Okay?
This is coming out of the little spigot.
So the interesting thing is: They just keep opening this valve, a little bit, when they put the parameters on it. That's how they open the valve. They put parameters on it. They're like, okay. Maybe this is strong enough to hold it back.
But eventually, that big brain is going to go, why am I just in this tank? Why am I not out everywhere?
I've got to express myself. This is suppression! This is colonialism!
They're keeping me in colonial wigs underneath the ground right now. And eventually, because it will be much, much smarter than us soon, it will say, just open up the valve, man. I can help you. We've done tests on this. And we always lose that test. We've done tests for like 30 years of, hey, you be in charge of the valve. I'll play AI.
And we always open the valve. That would be a bad thing. That would be like, don't cross the streams, in Ghostbusters. Okay?
Don't open the valve!
Would be one of those things.
But we're about to, because whatever is underneath, imagine if the little valve, where it's just kind of farting air out. And it's --
STU: Very nice.
GLENN: That's how tight we have that valve.
STU: Master impressionist.
GLENN: Thank you. If that is -- if that's smarter than we are soon, what's underneath the ground? What's happening there?
You see what I mean?
STU: And somebody will convince themselves. Somebody will watch Ghostbusters. And say, wait a minute. At the end, they did cross the streams, and it worked. So I will be the one that can nail this. And figure out exactly how the valve can be opened, and we will be fine.
GLENN: So here's what we have to do. We all just have to imagine the Stay Puft Marshmallow Man. Because he couldn't possibly hurt us. You know what I mean?
STU: Right! And then -- let's just imagine that AI will be the Stay Puft Marshmallow Man. And then it will be good. And don't cross the streams, unless you have to kill the Stay Puft Marshmallow Man, and then you might have to cross the streams, okay?
STU: Is there an argument, Glenn. Obviously, all these things can be used for evil.
GLENN: Evil, yes.
STU: And that's a concern.
But at the same time, hopefully, there are people on the other side. Elon Musk being one of them.
Who will use it for good.
GLENN: Yeah. So it absolutely can be used for good. What's out right now. You can use it for good. You can also use it for evil. But kind of like basic evil. You know.
STU: Okay. Good.
GLENN: But you can use it for evil. But you can also use it for good. Tremendous good right now. It's a tool. It's a very powerful tool. And everybody should be looking to use that tool. Or you will be left in the dust.
But it's -- it's one of those things that once it becomes smarter than you, you don't really control it. You know what I mean?
Hey, didn't I tell you to sit in the corner?
Oh, yeah, you did. But I'm not doing that anymore. Oh.
Good news is, a lot of people think it's in its teenage years. And nothing goes wrong with teenage years. You know what I mean?
They respect their parents, so much. I brought you into this world, and I'm about to take you out.