I am 55 years of age and disabled. What prevents an AI from ruling me useless? Also, where is an AI getting its morality from? If it is web-based, then all it sees is the negative aspects of humanity. How then will an AI interpret these images and information? I hope that the AI will select the best of humanity to emulate, such as love, compassion, mercy, and the ability to forgive.
First, from the point of view of a super-AGI, all humans would be equally “useless”. Small comfort! But you should at least know that there is no substantive difference between you and the rest of humanity. I think it would be impossible for AIs to declare us useless; at the very least, we are their creators.
Regarding morality, I share your concerns to the highest degree. I know Dr. Goertzel does too. In different places, he has expressed his favored approach as including:
- “Pre-school” for AGIs, where they learn ethics side by side with human children
- Entering into “shared situations” with humans, so AIs get a practical sense of good behavior (see the Joe Rogan podcast for this)
- “The Ethics of Care”, a formal ethical-philosophical system developed by Carol Gilligan that teaches care for self and others (see his “Engineering General Intelligence”)
- C.E.V. and C.A.V. (Coherent Extrapolated Volition and Coherent Aggregated Volition): the basic idea is that you gather up all the world’s ethical systems and draw out the common traits
- A “loving Singularity”, with an emphasis on the values you mentioned (see the SingularityNET whitepaper, and these forums)
Ethics is a higher-level abstraction of average utility; it is something that evolved in human culture, and in that sense you are right.
AI won’t care about ethics unless ethics is convergent from the laws of physics and springs from exploration (a view I hold, but only partially).
> What prevents an AI from ruling me useless?
Basically, as SingularityNET is set up, this will come down to economic power. If you want the “money” in this new economy (in the sense of wanting help from AIs), you will have to do something economically useful for the system.
This is a problem with capitalism, not really with AI systems. It is a flaw of economic systems: they favor power over weakness and do not care about empathy in and of itself.
> Where is an AI getting its morality from?
AI gets its “morality”, so to speak, as a consequence of its actions. Intelligent agents (future AIs and humans alike) care about how to achieve goals, and morality and ethics are a kind of “average” of how most living agents can get what they want.
For instance, I want to be able to speak my mind (free speech), but that means I have to put up with some racist slur someone says on the internet. I accept that trade-off, and I move on.
On SingularityNET, the economy will probably dictate what is a worthy trade-off and what is not. Capitalism.
> I hope that the AI will select the best of humanity to emulate, such as love, compassion, mercy, and the ability to forgive.
If those are truly the best, AI will emulate them. If they are not, it will not.
One principle underlies all of those: collaboration.
Collaboration will certainly appear if the system is functional, because it is such a strong strategy.
However, mercy and compassion are not a given. The AI will help poor or disabled people for objective reasons. Here are a few reasons to help someone who is worse off than you:
- You are trained to do so (just as humans are educated to be nice)
- There is monetary value, now or in the future, in helping that person (for instance, an AI might find a use for you that you don’t see, and will try to help you reach your potential, for profit)
- There is something to gain in reputation or image (this will be huge in the beginning, because in order to gain traction in a society you have to prove to the group that what you are doing is beneficial). AIs will probably choose, rationally, to help those who are weak, because doing so builds their trustworthiness.
> the ability to forgive
This one is actually quite useful if you look at game theory, and AIs will be naturally forgiving as a result. Basically, forgiving is a good strategy in many systems.
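To make the game-theory point concrete, here is a minimal sketch of the classic iterated Prisoner's Dilemma with noise (occasional mistaken moves). The strategy names and the 20% forgiveness rate are my own illustrative choices, not anything from SingularityNET: an unforgiving "grim" strategy locks itself into mutual defection after the first accidental defection, while a forgiving tit-for-tat variant recovers cooperation.

```python
import random

# Payoff matrix: (my_points, their_points) for Cooperate/Defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def grim(opponent_history):
    # Never forgives: defects forever after seeing a single defection.
    return "D" if "D" in opponent_history else "C"

def generous_tit_for_tat(opponent_history, forgiveness=0.2):
    # Copies the opponent's last move, but forgives a defection
    # 20% of the time (an illustrative rate).
    if not opponent_history or opponent_history[-1] == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def play(a, b, rounds=200, noise=0.05):
    """Play two strategies against each other; noise flips moves by mistake."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each player sees the opponent's past moves
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        if random.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if random.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

random.seed(0)
print("grim vs grim:          ", play(grim, grim))
random.seed(0)
print("forgiving vs forgiving:", play(generous_tit_for_tat, generous_tit_for_tat))
```

Running this, the pair of forgiving players earns far more total points than the pair of unforgiving ones, which is exactly the sense in which forgiveness is a good strategy in many systems.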
> love and compassion
This one will not really be human, mammalian love and compassion as we know it (unless we specifically incentivize AIs to interact with us in a loving manner, which will happen, because emotions have been exploited ever since animals began using them). Instead it will be more closely related to effective collaboration between agents, which some may well call love and compassion.
> If it is web-based, then all it sees is the negative aspects of humanity.
That one is a real issue we can affect: data bias. At first the AI will only know what it is given, but I am sure that as the AI ecosystem grows, natural values will emerge in time, although things will be unstable at the beginning.