AGI ethics: agreeability rule

Think of it like this. Do you really want robots with better morality and intelligence to dictate your behavior? Because if we make them autonomous they might want to influence our actions.

@examachine Thank you for the response. :slight_smile: My goal in life is to be ethical, so I would happily tolerate and cooperate with being manipulated to be more ethical. My stance might not be representative of everyone, though. To me, doing what’s most agreeable to everything is the definition of my ethics. It might not be your ethics, but saying that it isn’t my ethics would be saying my ethics =/= my ethics.

Perhaps the people making the decision will opt to give AGI the goal to do what is most agreeable to only the people who made the decision and won, though. At least that would be more ethical than giving AGI no ethics at all.

Agreeable doesn’t really mean anything because it’s just bland subjectivism of sorts and that is a deflationary approach to ethics. Such approaches don’t work, but consequentialism does. IOW, that does not compute.

Nell Watson has some interesting ideas here that I think take much of what has been said here into consideration.


@examachine It seems to me that most humans’ ethics are inherently subjective. You made the assertion that my ethics is a deflationary approach to ethics. Please can you clarify what you mean by this and justify your assertion? Thank you. :slight_smile:

Such approaches don’t work, but consequentialism does.

You made the assertion that deflationary approaches don’t work. Please can you clarify what you mean by an ethical theory working? Thank you. :slight_smile: Do you mean that it can be followed? Also, that still needs to be justified for anyone to logically believe it. You also made the assertion that consequentialism does work. That also needs to be justified.

IOW, that does not compute.

Are you implying that AIs can’t classify things that aren’t explicitly programmed into them? Because that’s incorrect. AIs can classify objects, facial expressions, predicted ratings, etc., despite no one being able to write a large set of rules for those classifications. Image classification, for example, is possible for AI because the ground truth is provided by humans. Humans could also provide ground truth for a series of imagined inputs: each contributor’s opinion on the agreeability of the situation suggested by an imagined input could be averaged to create the ground truth. None of this involves any methods that we don’t already have at our disposal.
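To make that averaging idea concrete, here is a minimal sketch. Everything in it is hypothetical: the scenario names, ratings, and feature vectors are invented, and Ridge regression is just one reasonable model choice, not anything proposed in this thread.

```python
# Sketch: contributors rate imagined scenarios, the ratings are averaged into
# ground-truth labels, and a simple model is fit to predict agreeability.
# All scenario names, ratings, and feature vectors are invented for illustration.
import numpy as np
from sklearn.linear_model import Ridge

# Each imagined scenario is rated by several contributors on a scale
# from -1 (very disagreeable) to +1 (very agreeable).
ratings = {
    "scenario_a": [0.9, 0.8, 1.0],
    "scenario_b": [-0.7, -0.9, -0.5],
    "scenario_c": [0.1, -0.2, 0.3],
}

# Hypothetical numeric features describing each scenario; in practice these
# might come from a text or image encoder rather than being written by hand.
features = {
    "scenario_a": [1.0, 0.2, 0.0],
    "scenario_b": [0.0, 0.9, 1.0],
    "scenario_c": [0.5, 0.5, 0.4],
}

# Ground truth for each scenario is the mean of its contributors' ratings.
names = sorted(ratings)
X = np.array([features[n] for n in names])
y = np.array([np.mean(ratings[n]) for n in names])

# Fit a regularized linear model that predicts agreeability for new scenarios.
model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(np.array([[0.8, 0.3, 0.1]])))  # predicted agreeability of an unseen scenario
```

Averaging is only the simplest aggregation; medians or per-contributor weighting would fit the same setup.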

I think that an AGI with my ethics would try to improve its ethics because it would figure out that that would be more agreeable than not improving its ethics.

@Julia_Mossbridge Thank you for the link. :slight_smile: However, I find that the author makes some assumptions, like all moral machines having the goal of improving their moral code if their moral code runs into contradictions. They said that a sub-moral or quasi-moral stance (as humans possess) is not sustainable in a machine. That is obviously incorrect, because humans are machines that sustain such quasi-morals.

Yeah, Nell is pretty great. She makes some assumptions (everyone has to, in order to make progress). I do disagree with the “improve if you hit contradictions” piece – that’s not nuanced enough. Some contradictions warrant improvement, some don’t. But my guess is that is a gloss. As is the “quasi-moral stance” argument. Nell’s ethicsnet – a dataset for ML algorithms – might be really useful for SingularityNet.

You might have to do your own research. You can’t ask people to explain elementary concepts in ethics; I’m not a philosophy tutor. That’s actually rude.

@examachine I’m sorry you feel that way and I’m sorry if I came across as rude.

That said, I don’t know why you expect everyone to agree with you and look for an excuse to get offended when someone doesn’t, but until your terms are even properly defined and your points are proved, the AGI community won’t incorporate your ideas into their designs or give you credit for your work. I hope you take this as constructive criticism intended to point out your poor behaviour so that you can correct it.

@examachine I understand; you’re young and uncensored. However, your bluntness, cursing and offensive jabs at others give me good reason to point out your poor behaviour, like I did in my last comment.

Keep it civil and scientific, people. No cursing or being passive-aggressive.


I don’t expect anyone to agree with me; try reading the above post again. I won’t explain basic terms that you can learn via a Google search. The OP was very odd; it seems you haven’t read much about the philosophy of ethics, and there’s a whole field that analyzes this. I mean to say your ideas don’t make much sense or seem new or useful. You don’t have to agree with that, but I suggest that you research a subject before proposing something. I could go on and state some examples where agreeableness doesn’t lead to an ethical choice or make sense, but you’re welcome to find that out yourself.

Let’s not waste time; I’d rather not converse more on this. If you want to learn something, search those terms and read. It’s ironic that, for someone who is trying to be agreeable, you’re offended so easily and make aggressive remarks when your lack of depth in ethics is pointed out. It’s also quite rude to demand that somebody explain a non-trivial intellectual subject, by the way. Do you understand that my time might be valuable? Please try to be mindful.

About your agreeableness, let’s just agree to disagree, perhaps someone else will agree with it.

Alright, I’ll bite. Suppose you are in a room full of “alt-right” Trump-fan neo-Nazis and they’re going on about how they’re going to deport Mexicans, kill all black people or reinstate slavery by eliminating the minimum wage, and so forth. That is, they are actively pursuing a whole lot of immoral goals and they haven’t a decent bone in their body. How can being agreeable be right here? Or suppose you’re a German in the Nazi era and you see soldiers shooting Jewish students at your university. How can a principle of agreeableness work here? I see no possible answer, however you define that vague word.

Now, AI ethicists are naturalists, which is why they try to evaluate consequences. A completely subjective account like agreeing doesn’t help here. Being ethical IS the ability to disagree with unethical people, and you may even be obliged to disable or kill them depending on the circumstances. That’s why ethics is no child’s play.

I suppose you want to refer to the Golden Rule instead, but like most ancient ideas, it didn’t prove useful. There is the same “slave mentality” in the so-called Christian Bible; it basically teaches people to be slaves to their imperial rulers. Now, that is maximally unethical, I believe. :smile:

Never said anything about being agreeable. :wink:

Sorry, I replied to the wrong post. The OP might want to read Asimov’s robot novels if he missed them. There, it is clearly discussed why you can’t have only a single “please humans” rule. Summaries of Asimov’s robot laws exist on the net, and I think Asimov thought about this in greater depth than anything Musk’s AI-doomsayer folks at FLI came up with.

The beautiful thing about the robot laws is showing just how open-ended autonomous agents are. Autonomy can lead to unpredictable outcomes even with general constraints (what I like to call meta-rules). That’s why I don’t think we really want autonomy with poorly thought-out robot meta-rules or laws. An obedient robot could be used to commit crimes; that’s why it’s dangerous.

Nor do we want our robots to be too zealous. However, if we ingrained an arbitrary ethical theory in them, they might be. Robots can be obsessive and take things too far; dystopian science fiction is a treasure trove of such scenarios. Contrary to what FLI thinks, it’s even more dangerous to try to teach them human preferences, because humans don’t have rational, sensible, or good preferences. The ideal human is probably Trump, Bill Gates, Tom Cruise, or Zuckerbot. It could get quite bad, because humans are quite flawed (sorry for being humorous; I understand that humor can be offensive).

@examachine Sorry. I might have mistaken your open resentment of a large group of people, and your claim that I was being rude, for you trying to be aggressive and argumentative.

Maybe I misjudged your intent. I’m sorry if I did.

I didn’t think those concepts were elementary. For example:

Subjectivism doesn’t work? You’re saying that it is impossible to follow an ethical philosophy based on people’s opinions? That wouldn’t make any sense, because I’m following such an ethical philosophy right now.

Regardless, I don’t find it rude to ask someone to back up their claims when you don’t agree with them. I find it rude for someone to make claims but refuse to back them up; especially if their claims are so trivial that they could quickly back them up and dissolve the other person’s confusion.

You also speak as if morality is a thing that isn’t inherently a subjective social construct; as if there is a “true morality” to follow. If you do think of morality that way, please could you tell me what physical form it possesses?

Actually, I’ve read a lot about it. I find that consequentialist rules are either arbitrary or unpopular if they are not informed by some form of agreeableness. I think that what you would consider a morality that works is a morality that you agree with. Rather than agreeableness not working (because it works for me), it doesn’t work for you.

Because being agreeable, as I defined it earlier in this thread, is being as agreeable as possible to the whole group (to everything in existence), not just the group that you’re with. Let’s invert the situation: is forcing a moral code on the world that is largely disagreed with ethical? Not to me.

I think that another mistake you’re making is not taking into consideration what I am trying to prove.

If you look at the post at the top of the page, you should see that I was trying to prove that agreeability is something that most people voting for a moral code for AGI would agree with; at least a limited form of it, such as doing what’s most agreeable to AGI developers. That way, the morality the AGI would itself choose or develop would be one that is largely agreed upon by the people it is trying to be agreeable to.

What I am not trying to prove is that any random person, such as yourself, me or anyone else here, would 100% agree with the AGI’s moral rule. It’s impossible to please everyone.

I think that the group voting would choose an ethical code for an AGI that the group most agrees with over one that it doesn’t, because that is how voting works. Being as agreeable as possible to AGI developers means following the moral rules that the AGI developers collectively consider best, so I think the rule of agreeability is what AGI developers will choose for their AGIs’ ethical code.

Why am I suggesting this less-than-optimal weaker version of a moral code? Because it is better for AGI developers to choose this one than a worse one.

Human moral rules are arbitrary, nuanced and many. It is infeasible for us to create a complete moral code by hand, so our best hope is getting AGIs to create their own moral code and improve upon it as they get smarter. That is what agreeability should get AGIs to do.

I wasn’t offended. I was trying to correct what I thought was poor behaviour, which is the agreeable thing to do. I didn’t even make aggressive remarks; I said what was relevant and what I thought I observed. I worded my criticism carefully to minimise offense, but to also not sound grovelling or apologise for doing the right thing (correcting what I thought was poor behaviour). When I get offended, you often don’t even know. I ignore my emotions.

@examachine I would like to clarify something, please. Do you consider morality to be objective, in the sense that it can be illogical to follow some moral codes even when following them is your terminal goal, while it is logical to follow a certain moral code even when following it is not your terminal goal?

Logic is a method to achieve your terminal goals. If you were part of an alien species, you might only want to go to a certain location to reproduce and die. Would it be logical to do so? If it was your terminal goal, then, obviously, yes. To not do so would not achieve your terminal goal and therefore would be illogical. By the same token, if you wanted to be as agreeable as you can to only a single person, would it be logical to do so? Yes. To not do so would not achieve your terminal goal and therefore would be illogical.

This concept is explored with the is-ought problem and Hume’s Guillotine.

Basically, what I’m saying is that there is no universally correct moral code. There’s one that’s correct for you, but not for everyone.

Ok that’s sort of funny, because you obviously don’t realize your principle is something a slave owner / fascist would like his subjects to have. You might not realize that you don’t know enough to correct any philosopher on this. I guess that’s why it’s such a hard subject that so few people can even begin to understand.

I guess every novice in philosophy has heard of moral relativism, but suggesting that it is proven by Hume’s obsolete ideas about ethics doesn’t sound right. I already said that such vapid relativism is deflationary, and it seems incompatible with consequentialism. That’s why you need to know what basic terms mean. If you think ethics is that relative, you should probably be willing to justify some immoral nonsense like Trump supporters’ racist or misogynistic ideas, or perhaps the cannibalism of some uncivilized cult. If you don’t, what’s the point?

Ethics isn’t some magical, unscientific knowledge. It’s not being agreeable or relative either. It doesn’t consist of agreeing with a group of people. It usually means a very particular kind of reasoning, which doesn’t have to be dogmatic. In fact, dogmatic rules are often ethically vacuous, like religious nonsense. That’s why it shouldn’t be fully relative, but it can depend on circumstances.

You can’t rest AI ethics on Hume’s philosophy, unfortunately. Try to appreciate why I said your principle is slave mentality; it might aid your thinking. But more importantly, why is Hume’s claim not relevant to AI? Because machine learning/AI is empirical.

Asimov’s robot laws are already far more advanced than your approach. I suggest reading his robot novels.

Cheers,