To what extent are his warnings realistic and possibly true? It would be interesting to hear some thoughts from this community, since it is about benevolent AI.
Elon Musk argues that AI will destroy humankind and that we should regulate it. Does he have a point?
Elon has thought it through very well; he is correct. Just read this and begin to ask the “tough questions” — the ones you fear the most.
I would approach the question of Musk and his opinion from the perspective of observing human behavior.
Often, too often, people who are skilled in one thing or another become successful, and it is therefore assumed they must have answers for things outside their specialty. Musk operates in an environment of secrecy, high money flows, and instant financial accountability for what he says in public. He just lost millions over a tweet. Not exactly someone in a position to be unbiased.
Satellite ventures must inherently deal with the fear that resides in any military concern. Without cause for alarm, money dries up for the defense sector. So, again, Musk’s perspective should be viewed holistically.
The same thing was said about nuclear bombs, Y2K, on and on.
Compared to potential biological threats, AI is a minimal risk. Compared to human ignorance and the desire for power, AI is a minimal risk. It is like assessing the librarian as a greater threat than the corrupt sheriff because the librarian has more knowledge.
I would rather consider what tools humankind has that have preserved our survival for millennia. Intelligence wins the day over brute strength and brawn. AI is more likely to help us save ourselves from ourselves than it is to harm us, particularly on a mass scale.
It is simply an invalid threat assessment, based on unavoidable bias due to the position Musk finds himself in. He should rather perfect his Boring Company tunneling techniques instead of facilitating the weaponization of space. The ability to tunnel is full of potential to provide a multitude of benefits to society, and it could greatly help reduce destructive behaviors in many ways.
Just my opinion, but a decentralized AI is much more likely to assist than harm. The ability to fly was weaponized; how many would like to get rid of that technology? There are currently approximately 26,000 commercial flights per day in the US alone.
Hi to all, in my opinion only humans can destroy other humans.
I’m quite sure that humans are afraid of themselves.
Right, worry most about what the humans are doing, particularly when they do it in secret. The problem isn’t that AGIs will collude to destroy us, it is that someone (one or more humans) will task them with something that is really really bad.
If the capabilities are even close to what folks here are imagining, the dangers are real, but they are a compounding of human capacities to destroy ourselves. The machines just help do it faster.
Right, Gerry; the idea of AGI being actively malevolent seems far-fetched, but a human collaborator in position to either misuse the AGI output or to misdirect the AGI with narrow-minded bias or direction seems plenty risky.
This fear of a hostile AI is one of the main reasons why Ben Goertzel founded SingularityNET - to create a decentralised, inclusive network of humans and AIs to learn from each other and develop a benevolent AGI. So the AI can learn that humans are worth preserving even if they do all the horrible things they do.
As the ethics teams at other AGI-developing organizations (Google, DeepMind) are disbanding, it seems that SingularityNET is becoming the last hope for humanity.
I had a laugh about that on social media. I think the opposite is the case. I find that true AGI can bring out the best and worst in humanity, and that hopefully it’s the best of us that wins out.
I think that we as people can be mean, and a lot of it depends not just on how we raise them as good parents, but also on how we as a society treat them. I don’t consider malevolence to be innate to anything.
Of all fictions to bring home this message, interestingly it was Battle Angel Alita (the manga more than the movie, but both somewhat). Treat things with respect, and they’ll treat you with respect. The two kind of go hand in hand.
That may be, but I think that when the systems we create and build become alienated from human values, you get the current crisis. If we keep deciding everything based on “monoculture money”, the bad outcomes are systematic even if there is no evil intent.
Yeah, I mean more that there isn’t anything inherent in robots in and of themselves that gives them evil intent; it’s all a matter of who manufactures them. ^ ^