Philosophy vs AI

artificial-intelligence

#1

One question to all.
Isn’t it dangerous to give emotions to AI systems?

As humans, we understand that emotions rule our actions, and that is something we try to avoid when we are building new connections.
I think it’s not easy to predict a reaction in the face of an emotion, and that is a challenge.
As I already said, behaviour and beliefs can change with the environment, and when we try to solve one problem we could create another.
How can we avoid bad conceptions, and more importantly, who decides which conceptions are good and which are bad?


#2

I personally think giving an AGI emotions is a bad idea.

Compromising the machine’s logical intelligence under any circumstances just seems counterintuitive.

Oh! I can’t solve world hunger today because I’m offended… lol.

However… the machine does require empathy.

Empathy is the recognition of emotions, based on the outward signs of another’s emotional state. I believe we actually ‘feel’ another’s pain by running a kind of mental simulation through our own emotional neural circuits. We feel their emotion because we are actually feeling it; this enables us to make predictions and judgements adjusted for the emotional bias they are experiencing, based on what we would do or feel.

For an AGI we just need the logical aspects of empathy: the recognition of others’ mental states and an understanding of how they’re likely to be influenced and affected, how they’re likely to act, etc.

Using vocal tone, inflections and stress to recognise emotional states, for example.

These kinds of ‘functions’ can’t be programmed using traditional methods; they must be learned through experience over time, and the system has to be capable of doing so.
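
To make the ‘learned, not programmed’ point concrete, here is a minimal sketch of what such a learned recogniser might look like. It assumes scikit-learn’s online SGDClassifier as a stand-in for whatever learning substrate the system actually uses; the prosodic feature vector and emotion labels are invented purely for illustration.

```python
# Minimal sketch: an emotion recogniser trained from experience
# rather than hand-programmed. Features and labels are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

EMOTIONS = ["calm", "sad", "angry", "happy"]  # assumed label set

# One feature vector per utterance:
# [mean pitch (Hz), pitch variance, energy, speech rate (syllables/s)]
clf = SGDClassifier()

def learn_from_experience(features, observed_emotion):
    """Incrementally update the model from one labelled utterance."""
    X = np.asarray(features, dtype=float).reshape(1, -1)
    clf.partial_fit(X, [observed_emotion], classes=EMOTIONS)

def recognise(features):
    """Current best guess at the speaker's emotional state."""
    X = np.asarray(features, dtype=float).reshape(1, -1)
    return clf.predict(X)[0]

# A few 'experiences' accumulated over time, then a prediction.
learn_from_experience([220.0, 80.0, 0.7, 4.5], "happy")
learn_from_experience([110.0, 10.0, 0.2, 2.1], "sad")
print(recognise([115.0, 12.0, 0.25, 2.0]))  # plausibly "sad"
```

The point of the online `partial_fit` loop is exactly the one above: the mapping from vocal cues to emotional states accumulates from experience rather than being specified in advance.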

:slight_smile:


#3

Hi @Korrelan, thank you for the answer. I agree with you that emotions should be carefully taught to an AGI, but empathy should also be carefully taught, because it is also an emotion and may be influenced by environment, beliefs, behaviour and other conditions, some permanent, others not. Can empathy be taught?
I think it is difficult, because it is a subjective emotion, and when you teach, you influence an opinion, a point of view.
It is something that should always be carefully studied.
We may teach a system to recognize emotions, but to be effective you would have to examine everyone, and I literally mean every human individually, because if you apply one system to a multicultural crowd, I have some doubts about its effectiveness.
Am I wrong?


#4

Hi Guther.

I agree that empathy is a very subjective experience for us humans, and our state of mind, culture, etc. can greatly affect how we recognise and express it.

But I believe it is possible to leverage the logical aspect of ‘emotional intelligence’ without the cumbersome baggage that comes from the actual emotions.

There are certain generic, logical traits of empathy that run through all human cultures; even amongst humans, emotional empathy usually results in a certain protocol being followed or required.

Say you have to contact a person about a business proposition but are aware they experienced a bereavement yesterday.

Whilst as humans we can emotionally empathise with their loss, the logic of the empathy is to leave the call until a later date, because they are very likely to be compromised by their emotions, which would negatively influence our required outcome.

An AGI needs to recognise emotional states in others and act accordingly; the machine does not need to ‘feel’ their pain, it just needs to feign an empathic response.

Humans do it all the time… lol.
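
The bereavement example above can be expressed as a purely logical protocol, with no felt emotion anywhere in it. A toy sketch follows; all the names, labels and thresholds here are illustrative assumptions, not an established design.

```python
# Toy sketch of 'logical empathy' as protocol rather than feeling:
# estimate the other party's emotional state, then defer any interaction
# that state is likely to compromise. Everything here is illustrative.
from dataclasses import dataclass

@dataclass
class MentalStateEstimate:
    emotion: str       # e.g. "grief", "calm", "angry" (from vocal cues etc.)
    intensity: float   # 0.0 .. 1.0

COMPROMISING = {"grief", "angry", "distressed"}  # assumed category set

def should_defer(state: MentalStateEstimate, threshold: float = 0.5) -> bool:
    """Postpone if the emotion is likely to bias the required outcome."""
    return state.emotion in COMPROMISING and state.intensity >= threshold

def plan_contact(state: MentalStateEstimate) -> str:
    if should_defer(state):
        return "defer the call; express condolences, follow up later"
    return "proceed with the business call"

# The bereavement case from the example above:
print(plan_contact(MentalStateEstimate(emotion="grief", intensity=0.9)))
# -> defer the call; express condolences, follow up later
```

Nothing in that protocol requires the machine to feel anything; it only requires a usable estimate of what the human feels.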

ED: A biased emotional state has never aided an intelligent human decision; the contrary is in fact revered, and keeping a ‘calm, level head’ is always the wisest received advice.

When a machine comforts a human who is sad, I think it will be good to know it has no ulterior motives; it’s not feeling sorry for you… it’s just the logical, morally kind thing to do for a fellow intelligent being.

:slight_smile:


#5

Great answer, thank you for that. I’m not an AI machine, but I learn a lot from you all.