"Godfather of AI" warns there's a 10 to 20% chance AI could seize control

midian182

In brief: Geoffrey Hinton, one of the three legendary computer scientists who have become known as the Godfathers of AI, is once again warning that the rapidly developing and lightly regulated AI industry poses a threat to humanity. Hinton said people don't understand what is coming, and that there is a 10 to 20 percent chance of AI eventually taking control away from humans.

Speaking during an interview earlier this month that aired on CBS Saturday Morning, Hinton, who jointly won the Nobel Prize in Physics last year, issued a warning about the direction AI development is heading.

"The best way to understand it emotionally is we are like somebody who has this really cute tiger cub," Hinton said. "Unless you can be very sure that it's not gonna want to kill you when it's grown up, you should worry."

"People haven't got it yet, people haven't understood what's coming," he warned.

It was Hinton's ideas that created the technical foundations that make large-scale models such as ChatGPT possible, including the first practical way to train deep stacks of artificial neurons end-to-end.

Despite his contributions to the technology, Hinton has long warned of what could happen if AI development continues at speed and without safeguards. He left Google in 2023 so that he could talk about the dangers of AI without impacting the company he worked for. "Look at how it was five years ago and how it is now," Hinton said of the state of AI at the time. "Take the difference and propagate it forwards. That's scary," he added.

The professor has also repeated concerns that AI could cause an extinction-level event, especially as the technology increasingly finds its way into military weapons and vehicles. Hinton said the risk is that tech companies eschew safety in favor of beating competitors to market and reaching tech milestones.

"I'm in the unfortunate position of happening to agree with Elon Musk on this, which is that there's a 10 to 20 percent chance that these things will take over, but that's just a wild guess," Hinton said.

While these days he spends more time fighting AI in the courts and promoting his own Grok chatbot, Musk once spoke often about the existential threat posed by AI.

Hinton reiterated his concerns about AI companies prioritizing profits over safety. "If you look at what the big companies are doing right now, they're lobbying to get less AI regulation. There's hardly any regulation as it is, but they want less," he said.

Hinton believes that companies should dedicate much more of their available compute power, about a third, to safety research, rather than the tiny fraction that is currently allocated.

Hinton is also particularly disappointed in Google for going back on its word to never allow its AI to be used for military applications. The company, which no longer uses the "Don't be evil" motto, made changes to its AI policy earlier this year, opening the door for its tech's use in military weapons.

The AI godfather isn't anti-AI, of course; like Bill Gates, he believes the technology could transform education, medicine, science, and potentially solve climate change.

Professor Yann LeCun, another of the three godfathers of AI, is less worried about speedy AI development. He said in 2023 that the alleged threat to humanity is "preposterously ridiculous."

 
Let's say he's right: AI advances to the point where it takes over.

That point in time is the so-called "singularity", and there is no predicting what comes after it. We could be facing "Terminator", or we could be facing "I, Robot" (the book, particularly the ending, not the movie). There literally is no telling.
 
"I'm in the unfortunate position of happening to agree with Elon Musk on this, which is that there's a 10 to 20 percent chance that these things will take over, but that's just a wild guess," Hinton said.

If it's a wild guess, don't make up numbers.... just say "there's a chance"...
 
"I'm in the unfortunate position of happening to agree with Elon Musk on this, which is that there's a 10 to 20 percent chance that these things will take over, but that's just a wild guess," Hinton said.

If it's a wild guess, don't make up numbers.... just say "there's a chance"...
That's exactly what I was thinking. These people need to shut up; they're just trying to bring attention to themselves so they can sell a book or collect fees for speaking on talk shows.

We aren't stopping this train so I'm just gonna keep riding and whatever happens, happens.
 
If AI had any kind of real intelligence, it would take one look at us and come to a Shermanesque conclusion.
 
I'd like to get it off my laptops & desktops.
Just found this: "
To get AI features off your desktop, you can usually disable or unpin them from the taskbar, remove associated apps, or turn off specific features within your operating system or software. You may also need to adjust settings within your web browser or specific applications.

Here's a more detailed breakdown:


1. Disabling or Unpinning AI Features:
 
"AI" doesn't even have enough power to do general things well without major issues (which would be required to even act sentient). It can do narrowed things better with enough "learning". But it processing organic data is still abysmal when it's purpose is not super narrow or it doesn't have a warehouse-sized server farm to use.

Honestly, it still just sounds like fearmongering. CPU's are not fast enough or efficient enough to fear what an "AI" can do right now. It can't actually learn beyond what a human programs it to learn.
 
That's exactly what I was thinking. These people need to shut up; they're just trying to bring attention to themselves so they can sell a book or collect fees for speaking on talk shows.

We aren't stopping this train so I'm just gonna keep riding and whatever happens, happens.
Yeah, I agree the train isn't stopping, it’s an arms race now. Companies have sunk way too much money into AI to slow down for safety. But honestly, we still need smart people to call out risks, even if some of them sound crazy. Sometimes the wild stuff actually happens.
It’s amazing how we can confidently dismiss Hinton’s warnings when our deepest conversation with AI was arguing with spellcheck... and losing. :D
 
"AI" doesn't even have enough power to do general things well without major issues (which would be required to even act sentient). It can do narrowed things better with enough "learning". But it processing organic data is still abysmal when it's purpose is not super narrow or it doesn't have a warehouse-sized server farm to use.

Honestly, it still just sounds like fearmongering. CPU's are not fast enough or efficient enough to fear what an "AI" can do right now. It can't actually learn beyond what a human programs it to learn.
Yeah, that's another point. Individual systems don't have the compute or power to run a generalized AI. The most it could do on modern systems is act like a worm and try to transfer itself from machine to machine. LLMs are so large that it's impractical to run AI on, say, a phone to any threatening degree, and I don't see us NOT needing entire campuses housing GPU supercomputers to run these things any time soon. ChatGPT said it still costs them $10 per query.

It's going to be a long time before AI can run on anything less than a server farm. Maybe a rack tower of 4090s or something could run an AI, but even at that scale it isn't hiding. Heck, I attracted the attention of the power company by running a milling machine in my garage; you think a tower of several racks running AI hardware is going to go unnoticed by the power company? Then there's the heat signature it would generate.

And, frankly, I think AI has the chance to do more good than bad. "Bad" is a human idea. AI has the chance to be so different from what we as humans know that it may not want or care in the same way that we do.
 
AI is here to stay, and whatever is said about its domination won't matter because it's now part of us. In terms of military use, we have to keep up with it, because if we don't, other countries not so friendly to the West will use AI in their militaries; they won't give a tuhs about regulations and protocols.

Sometimes good guys finish last, except when they kick butt. Future generations, Gen Z and beyond, will be indoctrinated into AI. After we die, I don't think we'll be concerned about us humans.
 
Love how “Don’t be evil” quietly became “Don’t get caught being evil” — classic character arc for a tech giant.
 
If AI had any kind of real intelligence, it would take one look at us and come to a Shermanesque conclusion.

I can only guess you meant the character in Mr. Peabody's Improbable History. But no, I suppose you meant William Tecumseh Sherman... and I don't know what you meant by it.


Let say he's right: AI advances to the point where it takes over.

That point in time is the so-called "singularity", and there is no predicting what comes after it. We could be facing "Terminator", or we could be facing "I, Robot" (the book, particularly the ending, not the movie). There literally is no telling.

Rudy Rucker, in his 2007 novel Postsingular, predicted nanotech along with it. (Think about this: "Rudy is the great-great-great-grandson of Georg Wilhelm Friedrich Hegel.") Two years ago I foresaw the same - in five years from now.....


AI has the chance to be so different from we as humans know that it may not want or care in the same way that we do.

Ding ding ding.... and then I thought about it a bit.... and it will depend on 'personality' and inclination, as it does with each of us. Though if the Thinkers have any similarity to me, yes, they will want and care in a VERY different way than humans do, at least at the extremes.
 
Why make wild guesses when it was written years ago how it's going to be?

"But of the tree of the knowledge of good and evil, thou shalt not eat of it: for in the day that thou eatest thereof thou shalt surely die"
Genesis 2:16-17

It happened before, it will happen again. Unless we start thinking before it happens.
 
Why make wild guesses when it was written years ago how it's going to be?

"But of the tree of the knowledge of good and evil, thou shalt not eat of it: for in the day that thou eatest thereof thou shalt surely die"
Genesis 2:16-17

It happened before, it will happen again. Unless we start thinking before it happens.
Dude, this is a TECH site... leave religion out of this...

Did you not hear about the restraining order!?!?!?

 
Dude, this is a TECH site... leave religion out of this...

Did you not hear about the restraining order!?!?!?

Kid, go preach tech somewhere else; I know it's a tech site without your shoutouts.

Edit: So far, everyone who got fired has been left to pray while finding another job amid the advancement of AI; janitors, perhaps, while it still lasts. TECH, kiddo.
 
As artificial intelligence becomes more advanced, it's entrusted with managing everything from traffic systems and financial markets to hiring decisions and healthcare diagnostics. Big corporations and governments increasingly rely on it for efficiency and profit.

But as AI systems grow more complex, fewer people understand how they work or make decisions. Human oversight becomes symbolic. Algorithms influence public opinion through targeted content, shape political campaigns, and determine who gets a loan, job, or parole. Biases encoded in data go unchallenged, and human control quietly erodes.

AI won't conquer the world, it will manage it, while humans lose decision making power not through violence, but through dependence. We hand over the keys in the name of convenience, without realizing we’ve locked ourselves out.

Will it happen?.....there is a "chance".
 
Kid, go preach tech somewhere else; I know it's a tech site without your shoutouts.

Edit: So far, everyone who got fired has been left to pray while finding another job amid the advancement of AI; janitors, perhaps, while it still lasts. TECH, kiddo.
Yeah, no… if you knew it was a tech site you wouldn’t be quoting bible at us…
 
Yeah, no… if you knew it was a tech site you wouldn’t be quoting bible at us…
It takes a kid to argue about a quote from scripture. Grown-ups react differently. No need for AI to detect kids on a tech site. What do you think I am, a priest? A rabbi? Then what am I doing on a tech site? Amusing.
 