Elon Musk argues that AI will destroy humankind and that we should regulate it. Does he have a point?



To what extent are his warnings realistic and possibly true? It would be interesting to hear some thoughts from this community, since it is focused on benevolent AI.


Elon has thought it through very well, and he is correct. Just read this and begin to ask the "tough questions" - the ones you fear the most.


I would approach the question of Musk and his opinion from the perspective of observing human behavior.
Too often, people who are skilled in one thing or another become successful, and it is therefore assumed they must have answers for things outside their specialty. Musk operates in an environment of secrecy, high money flows, and instant financial accountability for things he says in public. He just lost millions over a tweet. Not exactly someone in a position to be unbiased.
The satellite business inherently trades on the fear that resides in any military concern. Without cause for alarm, money dries up for the defense sector. So, again, Musk's perspective should be viewed holistically.
The same things were said about nuclear bombs, Y2K, and so on.
Compared to potential biological threats, AI is minimal risk. Compared to human ignorance and the desire for power, AI is minimal risk. It is like assessing the librarian as a greater threat than the corrupt sheriff because the librarian has more knowledge.
I would rather consider what tools humankind has that have preserved our survival for millennia. Intelligence wins the day over brute strength and brawn. AI is more likely to help us save ourselves from ourselves than it is to harm us, particularly on a mass scale.
It is simply an invalid threat assessment, based on unavoidable bias arising from the position Musk finds himself in. He should rather perfect his Boring Company techniques instead of facilitating the weaponization of space. The ability to tunnel has the potential to provide a multitude of advanced benefits to society, and to greatly reduce destructive behaviors in many ways.
Just my opinion, but a decentralized AI is much more likely to assist than to harm. The ability to fly was weaponized. How many would like to get rid of that technology? There are currently approximately 26,000 commercial flights per day in the US alone.


Hi to all. In my opinion, only humans can destroy other humans.
I'm quite sure that humans are afraid of themselves.