Can Artificial Intelligence be objective?



I’ve been having a discussion with a work colleague. She claims that artificial intelligence cannot be objective, and that an artificial intelligence always has a certain attitude.
She argues that the programmer programs its foundations, so an artificial intelligence can never be objective or unprejudiced.

Is that true? And if not, why not?

Many Thanks :slight_smile:


I think there are at least two ways objectivity (which is perhaps a very human value judgment in some ways) could be affected:

  1. When it is purposefully designed to have a certain attitude, e.g. censorship on social media, which is a big topic lately.
  2. When it is being fed biased data.

I think programming for objectivity would lead us to a more philosophical discussion about how you define truth, whether it is even possible to capture the truth, and how you would measure that you have captured it. And when we talk about having a certain attitude (which is also perhaps a very human value judgment), what exactly do we mean by that?

Surely an AI can objectively pick the best apple from a box of apples, but once we move toward things where human value judgment plays a big role, there is definitely a debate there.


Hi Ibby,

Thank you for the fast answer.

Maybe I made my question too complicated or phrased it wrong. By objective I meant neutrality.

Can Artificial Intelligence be Neutral?

Any programmer who programs an artificial intelligence brings their own prejudices, thoughts, and points of view.

My colleague claims that it is not possible to create a neutral artificial intelligence that learns independently and forms its own opinion.


Just for my context - what work does your colleague do, and what is her background? I don’t think she is taking this perspective as a developer - that is why I am asking, so I can answer better.


My colleague studied social education.

Last week I talked to her about it and how excited I am about it.

This gives us the opportunity to create an independent artificial intelligence that thinks for itself and forms its own opinion. She contradicted me and said that an artificial intelligence can never have its own opinion, because its basic building blocks come from a programmer, and he already has an opinion.

So if the programmer creates the artificial intelligence, then he passes some of his opinion on to it…


I think with her background she is very much aware of the bias in social research, which is a huge issue. However, I think she is making a big jump to the conclusions mentioned, as she also incorporates very subjective components and human-value-type judgments into this.

I think this debate should start with definitions: what does it technically mean to claim an AI has an opinion? To my knowledge, an AI does not have an opinion; it simply outputs a specific result based on some algorithm. That algorithm can be created in a way that is not objective for us, e.g. an algorithm designed to separate specific political commentators from others and then give them more social power on social media, but the output itself is neutral in a technical sense. The algorithm did not form an opinion; it just produces a specific outcome based on what it has been fed. Yet an algorithm can be programmed in a way that purposefully discriminates, so if a programmer decides to build an AI that way, then I guess you would say you have a discriminating algorithm.

When we talk about neutrality or objectivity, I still think this spirals back to the question of truth. To give a very simple example: let’s say you and I plus 8 others were at an event, and we each tell a story about how that event went. We would have 10 different stories. Individually, they are definitely biased accounts of what actually occurred - whether you like it or not. Combined, they show some overlap, and a closer approximation to the truth can be constructed. However, it was a 100-person event, so we are still missing the perspectives of 90 other people. And even those 100 attendees surely missed some details here and there… at what point do you reach an objective report of the event? How would you measure this?

So when we talk about AI being objective, you still have to think of it in a mathematical sense: you have a number of data points, but there is always an unknown component. While we can get a closer approximation to the truth, you can never know exactly where you stand, because you cannot capture the unknown. So for something to be objective in a scientific and mathematical sense is quite a hard question, because real life is complicated. We know that 1 == 1. But can we claim event == best event to ever exist? Never. But what does this mean for our human value judgment? Is nothing objective?
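The event-witnesses argument above can be sketched in a few lines. This is a toy simulation with made-up numbers (the "true value" and the noise range are illustrative assumptions, not real data): each account is individually biased, averaging many accounts drifts toward the truth, but the residual error is never provably zero.

```python
import random

random.seed(42)

TRUE_VALUE = 7.0   # the "objective" fact -- unknowable to any single observer
NOISE = 2.0        # how far off any individual account can be

def witness_reports(n):
    """n independent, individually biased accounts of the same event."""
    return [TRUE_VALUE + random.uniform(-NOISE, NOISE) for _ in range(n)]

# More accounts -> the combined estimate approaches the truth,
# but without knowing TRUE_VALUE you cannot measure the remaining gap.
for n in (10, 100, 10000):
    estimate = sum(witness_reports(n)) / n
    print(f"{n:>6} accounts -> estimate {estimate:.3f}")
```

The point is the same as in the post: the combined estimate is only ever an approximation, and the size of the remaining error is exactly the "unknown component" you cannot capture.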


@ibby has a good point. Objectivity needs a clear, universal definition before we can say anything (including humans) can be objective. It’s one of those things whose existence we acknowledge without ever becoming it ourselves. The same goes for logic: we know it exists, but we are not logical beings.
However, I very much agree with your colleague. This is the type of situation where the saying “man created God in his image” comes to mind. I remember hearing about an ML system used by US police departments that takes a person’s picture and guesses how likely that person is to commit a crime in the future… and you guessed it: it was classifying black people as criminals and not white people, and on some occasions those classifications were found to be wrong.
Now, the training data for this software probably came from inmates in prisons where the majority are black and poor, in a system that makes it a little more convenient to keep them there. With that, we can safely arrive at the conclusion that the mindset of that time and place influenced the innocent software - just like the Microsoft chatbot Tay went on a racist Twitter rampage after spending some time with racist people, and just like English grammar parsers work best on structured, well-written, slang-free English rather than everyday conversation (since they were built by scholars). It is the same as an elephant not being able to sit comfortably on a chair, because the chair is modeled after its maker.
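To make the "the data carries the mindset" point concrete, here is a minimal sketch with entirely hypothetical numbers: a "risk model" that does nothing but learn per-group base rates from its training data will faithfully reproduce whatever skew that data contains - it holds no opinion of its own.

```python
def train_base_rate_model(records):
    """Learn P(label=1 | group) from (group, label) pairs -- nothing more."""
    counts, positives = {}, {}
    for group, label in records:
        counts[group] = counts.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + label
    return {g: positives[g] / counts[g] for g in counts}

# Hypothetical skewed training set: group A was recorded/arrested far
# more often than group B, regardless of underlying behaviour.
training = ([("A", 1)] * 80 + [("A", 0)] * 20 +
            [("B", 1)] * 10 + [("B", 0)] * 90)

model = train_base_rate_model(training)
print(model)  # the skew in the data, mirrored back as "risk scores"
```

The model’s "judgment" of group A versus group B is nothing but the collection bias of its training set, which is exactly the mechanism behind the policing example above.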
This is essentially a threat, and the solution to it, in my view and hopefully that of most people here, is the decentralized building and management of AI systems, like the approach of opencog or singnet.
Observing societies, one comes to the conclusion that the majority is peaceful and loving toward the rest. Racists, nazis, and terrorists are minorities. If we [the sweet and loving liberal majority] all collaborate on an AI system, I believe we can model something after ourselves.


Thank you for your answer :slight_smile:
I have just talked to my colleague about your view.
She partly agrees with you, but says that programming and the question of neutrality should not be mixed up.
My colleague does not think that the intersection is the truth.
The intersection is just the view of different people, from different societies and of different ages, at one point in time.
If you asked the same people 5 years later how they experienced the concert, they would describe it differently than before - the concert would seem better or worse.
This is because people have aged and their views have changed. Thus, nothing can ever be objective or neutral, because objectivity arises from the respective views of people. Constructivism is the term for this.


The subjective becomes objective only with the consent of the majority. Objectivity in AI is governed by the size of the training/adoption sets.


A question can only be answered if it is valid. “Best” in “Is this the best party ever?” has no strict definition, so the question is invalid. If “best” were defined as “the most preferable to you right now,” it could be answered objectively. Objectivity is handled mathematically with probabilistic equations, as your friend would know if she had studied formal probability. The problem is that we cannot yet translate sensory data into equation inputs automatically. If a neural network received data from sensory inputs, its output were translated into behaviours, and it used a probabilistic logic engine with the goal of maximising the probability of a specific event, then human bias would not be an issue, because it would learn from experience to control a system that uses experimentally verified techniques for assessing probability rather than human-created data. Such a system could be considered objective. Lack of objectivity is caused by errors in reasoning, and reasoning has strict rules that allow those errors to be detected and eliminated. Once all the errors in an AI’s reasoning are eliminated, it will be objective. As for human bias in sensory input, and in the translation of sensory input to data, a logic engine would compensate for it by lowering its certainty in its beliefs.
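The "lower its certainty" idea at the end can be illustrated with a one-function Bayes'-rule sketch. All the probabilities here are illustrative assumptions, not values from any real system: the same piece of evidence moves belief a lot when the source is reliable, and only a little when the source (or the human who labeled its data) is known to be biased.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) from a prior and the two likelihoods (Bayes' rule)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

prior = 0.5  # no initial opinion either way

# A highly reliable sensor reporting E pushes certainty high...
reliable = bayes_update(prior, 0.99, 0.01)

# ...but if the source is known to be noisy or biased, the very same
# report should move the belief much less -- "lowered certainty".
unreliable = bayes_update(prior, 0.60, 0.40)

print(round(reliable, 2), round(unreliable, 2))
```

This is the mechanism the post gestures at: the engine does not ignore biased input, it just assigns it less evidential weight.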


Hi Atli,

I’m Jed.

I’m going to try to restate your question in my own words to see if I understand it right.

It’s way too easy to pick a side and stick with it… even when it’s wrong. It’s easy to think in one direction and miss other things.

It can be hard to correct ourselves… and when I do, I can spiral into an existential crisis (wondering what else I’m wrong about).

I read in the tutorial for this forum that comedy is thought by some to have to do with betrayed expectations.

I think it’s a way our brain reduces clutter. It picks a way of thinking and is more likely to stay on track than veer off.

So then the question becomes how a computer intelligence could re-evaluate, troubleshoot possible errors, and correct for them.

I would think that the smarter they get, the fewer of those types of errors they would make.