Elon Musk argues that AI will destroy humankind and that we should regulate it. Does he have a point?



To what extent are his warnings realistic and possibly true? It would be interesting to hear some thoughts from this community, since its focus is benevolent AI.


Elon has thought it through very well - he is correct. Just read this and begin to ask the “tough questions” - the ones you fear the most.


I would approach the question of Musk, and his opinion, from the perspective of observing human behavior.
Often, too often, people who are skilled in one thing or another become successful, and it is therefore assumed they must have answers for things outside their specialty. Musk operates in an environment of secrecy, high money flows, and instant financial accountability for things he says in public. He just lost millions over a tweet. He is not exactly in a position to be unbiased.
Satellites are inherently entangled with military concerns, and those concerns run on fear: without cause for alarm, money dries up for the defense sector. So, again, Musk’s perspective should be viewed holistically.
The same thing was said about nuclear bombs, Y2K, on and on.
Compared to potential biological threats, AI is a minimal risk. Compared to human ignorance and the desire for power, AI is a minimal risk. It is like assessing the librarian as a greater threat than the corrupt sheriff because the librarian has more knowledge.
I would rather consider what tools humankind has that have preserved our survival for millennia. Intelligence wins the day over brute strength. AI is more likely to help us save ourselves from ourselves than it is to harm us, particularly on a mass scale.
It is simply an invalid threat assessment, rooted in unpreventable bias from the position Musk finds himself in. He should rather perfect his Boring techniques instead of facilitating the weaponization of space. The ability to tunnel has enormous potential to benefit society in many ways, including by helping to reduce destructive behaviors.
Just my opinion, but a decentralized AI is much more likely to assist than to harm. The ability to fly was weaponized; how many would like to get rid of that technology? There are currently approximately 26,000 commercial flights per day in the US alone.


Hi to all. In my opinion, only humans can destroy other humans.
I’m quite sure that humans are afraid of themselves.


Right: worry most about what the humans are doing, particularly when they do it in secret. The problem isn’t that AGIs will collude to destroy us; it is that someone (one or more humans) will task them with something that is really, really bad.

If the capabilities are even close to what folks here are imagining, the dangers are real, but they are a compounding of our human capacity to destroy ourselves. The machines just help us do it faster.


Right, Gerry; the idea of an AGI being actively malevolent seems far-fetched, but a human collaborator in a position either to misuse the AGI’s output or to misdirect the AGI with narrow-minded bias seems plenty risky.


This fear of a hostile AI is one of the main reasons Ben Goertzel founded SingularityNET: to create a decentralised, inclusive network of humans and AIs that learn from each other and develop a benevolent AGI. That way the AI can learn that humans are worth preserving, even despite all the horrible things they do.
As the ethics and safety teams at other AGI-developing organizations (Google, DeepMind) disband, it seems that SingularityNET is becoming the last hope for humanity.