I won’t call it AGI if its morals can be designed/programmed by coders.
I think you have misunderstood my proposal. What you said is exactly the point of the rule: do what’s most agreeable. The individual moral rules aren’t designed by coders; only the foundational moral rule would be. As a person, I follow “do what’s most agreeable to everything” as a foundational moral rule. As such, I try to create the most agreeable circumstance, because that is the most agreeable thing I can do. To do that, I work out moral rules that help me create the most agreeable circumstance. So, using my own reasoning and considering arguments given by others, I learned moral rules like: don’t cause terror, don’t murder, don’t steal or lie or hurt anything except in real emergencies, be honest, spread happiness, be yourself, take yourself into account ethically, and fight your inherent self-defeating and masochistic nature (that last one applies to me). Following a moral rule doesn’t disqualify something from being a general intelligence; otherwise, I wouldn’t be a general intelligence, and that wouldn’t make sense. An AGI could do the same.
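To make the two levels concrete, here is a minimal sketch in Python. It is only an illustration, under the invented assumption that agreeableness can be scored numerically; the names `MoralAgent`, `CandidateRule`, and `expected_agreeableness` are hypothetical. The point is the structure: the designers supply only the foundational rule, and the agent adopts or rejects individual rules by its own evaluation.

```python
# A sketch of the two-level structure described above, not a real AGI design.
# All names and scores are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CandidateRule:
    name: str
    expected_agreeableness: float  # stand-in for real moral reasoning

class MoralAgent:
    """Only the foundational rule is fixed by the designers; the
    individual rules are derived by the agent itself."""

    def __init__(self, foundational_rule: Callable[[CandidateRule], float]):
        self.foundational_rule = foundational_rule    # designed by coders
        self.learned_rules: List[CandidateRule] = []  # derived, not designed

    def consider(self, rule: CandidateRule, threshold: float = 0.0) -> None:
        # Adopt a rule only if it serves the foundational rule, i.e. helps
        # create more agreeable circumstances.
        if self.foundational_rule(rule) > threshold:
            self.learned_rules.append(rule)

# The one thing the coders supply: "do what's most agreeable".
agent = MoralAgent(lambda rule: rule.expected_agreeableness)

# Rules the agent evaluates for itself, from its own reasoning and
# from arguments given by others.
agent.consider(CandidateRule("don't cause terror", 0.9))        # adopted
agent.consider(CandidateRule("lie whenever convenient", -0.8))  # rejected

print([r.name for r in agent.learned_rules])  # ["don't cause terror"]
```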
As I said earlier, a disagreeable morality is not enforceable, because our society enforces that important decisions are made by consensus, and a small consensus cannot win against a much larger one. I think an AGI that does what’s most agreeable, at least to humans, will have the largest consensus behind it and so will be the one that’s chosen.
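A toy illustration of that selection argument, under the invented assumption that each person simply backs whichever candidate morality they find most agreeable (all names and scores below are made up):

```python
# A toy model of "the largest consensus wins", not a claim about how
# society actually decides. Voters and scores are hypothetical.

from collections import Counter

# How agreeable each person finds each candidate morality.
preferences = {
    "Alice": {"most agreeable to everything": 0.9, "serve its owner only": 0.2},
    "Bob":   {"most agreeable to everything": 0.8, "serve its owner only": 0.1},
    "Carol": {"most agreeable to everything": 0.7, "serve its owner only": 0.9},
}

# Everyone backs whichever morality they find most agreeable...
votes = Counter(max(scores, key=scores.get) for scores in preferences.values())

# ...and the largest consensus wins; a small consensus cannot outvote it.
chosen, _ = votes.most_common(1)[0]
print(chosen)  # "most agreeable to everything"
```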
With all due respect, talking about AGI morals at this very early stage is unnecessary.
Thank you for your input. With all due respect as well, no one knows when AGI will be invented or how long solving ethics will take. I think we need to avoid a situation where AGI is buildable and open source before ethics is solved; otherwise, someone will build an AGI that isn’t moral and we’ll be screwed. Since this part of ethics will have to be solved eventually, why not do it now, rather than rushing it after we’ve addressed the problem too late, with very real, tangible, and devastating consequences if we fail?