How will singularity happen?

Singularity is exponential progress.

What in our reality shows exponential progress?
Our society.

Do individuals progress exponentially?
No. Individuals have a system by which they learn, and (if you believe in IQ) they cannot change that system.

What are we making in current AI?
Systems by which machines learn.

Can you make a machine that can learn to learn better?
I don’t think so. It’s the same idea: you make a system by which it learns to learn, and that limited system again cannot be replaced. To change the system of learning itself, you can only make random changes.

How does our society progress exponentially?
Through random mutations. The idea is that you don’t know how to build a better system by which to learn, so our DNA mutates. But that isn’t enough: through the imperfect communication between different entities, ideas also mutate as they spread through society, and those mutating ideas are what make our society progress.
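To make that concrete, here is a toy sketch in Python. The fitness() function, the noise level, and all the numbers are invented purely for illustration: ideas are copied imperfectly, and selection keeps the better copies.

```python
import random

# Toy model: an "idea" is just a number, fitness() stands in for how
# useful the idea is, and communicate() is imperfect transmission.
# All of these are invented for this sketch.

def fitness(idea: float) -> float:
    return -abs(idea - 42.0)  # arbitrary: ideas near 42 are "better"

def communicate(idea: float, noise: float = 1.0) -> float:
    return idea + random.gauss(0.0, noise)  # the copy mutates in transit

society = [random.uniform(0.0, 100.0) for _ in range(50)]

for generation in range(200):
    received = [communicate(idea) for idea in society]  # noisy spread
    # Selection: the better half of all circulating ideas survives.
    society = sorted(society + received, key=fitness, reverse=True)[:50]

print(f"best idea after 200 generations: {max(society, key=fitness):.2f}")
```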

Assertion: Societies progress through random mutations of ideas and people.

If this is true, how would an AI singularity work?
As a result of our progress we will be able to create things that have a system of thought better than our own. But they will not be evolving by themselves. Not until you somehow make code mutate like DNA, put those machines in a society which can itself evolve and eventually produce a better system of learning, and again put that system in a position to communicate with its own mutations. Only then do you get a singularity.

I welcome any ideas or proof of this being right or wrong.

4 Likes

I cannot disprove your hypothesis about our ability to create a system which learns how to learn in order to increase its own learning capabilities.

But in the SingularityNET framework there can be a large number of agents, like in a society. These agents can learn how to interact with other agents, and possibly how to create new agents that evolve from previous agents, just as you described the learning of humans and DNA. In the case of the agents, the DNA is code. There could be an agent that tries to create new agents by “mutating”/rewriting the code of other agents and deploying them to the network, then observing how they do and using that insight to create further agents. That agent could itself have been created by another agent, and so on.
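Here is a toy sketch of that loop. Everything in it (the Agent class, the score() task, the constants) is invented for illustration and is not SingularityNET code. The key detail is that the mutation rate is part of the genome, so the way agents change can itself evolve:

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    params: list[float]    # stands in for the agent's code
    mutation_rate: float   # how aggressively this lineage mutates

def score(agent: Agent) -> float:
    # Placeholder task: match a hidden target vector.
    target = [1.0, -2.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(agent.params, target))

def mutate(parent: Agent) -> Agent:
    # A "mutator agent" rewriting another agent's code and deploying it.
    rate = max(0.01, parent.mutation_rate + random.gauss(0.0, 0.05))
    params = [p + random.gauss(0.0, rate) for p in parent.params]
    return Agent(params, rate)

network = [Agent([random.uniform(-3, 3) for _ in range(3)], 0.5)
           for _ in range(20)]

for step in range(300):
    offspring = [mutate(random.choice(network)) for _ in range(20)]
    # Observe how the new agents do and keep the best performers.
    network = sorted(network + offspring, key=score, reverse=True)[:20]

best = max(network, key=score)
print(f"best score: {score(best):.4f}, mutation rate: {best.mutation_rate:.3f}")
```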

This is @bengoertzel’s vision and the reason he is building SingularityNET: to make this network of agents possible. I tried to simplify the idea in my own words; I hope it helps. It is not a proof of right or wrong, just one way to understand the technology behind SingularityNET and its future purpose.

2 Likes

Hi, interesting post. I feel your conclusions are based on the premise that current ‘Deep Learning’ architectures are the future; I’m writing from the viewpoint of a neuromorphic AI researcher.

The essence of the neuromorphic approach to AI is to build an electrochemical equivalent of a mammalian brain, thus eventually providing all the benefits of the human condition.

I personally believe the current method of measuring IQ is flawed for various reasons, the foremost being the time constraint imposed. Also, intelligence is not a fixed quotient within the human brain; it’s plastic and varies through both experience/learning and age.

Neuromorphic AIs (NAIs) are not hard-coded or based on ‘normal’ programming techniques: the von Neumann architecture runs a wetware cortical simulation, and the AI then runs within the simulated connectome. The layer of abstraction this provides allows for many traits that hard-coded AIs simply don’t possess.

A 3D-volume connectome simulation comprises the main brain structures: lobes, white/grey matter tracts, neuron types, electrochemical synapses, dendrites, neurotransmitters, cortical columns, etc. There are also algorithms that simulate myelination, neurogenesis, aging, plasticity, synaptic pruning, circadian rhythms, growth, self-organisation, etc.

Yes, we can make a machine that learns to learn better: an NAI learns to learn. The process of learning is governed by the connectome, and the connectome improves with learning. Attention is also key to learning, and it too is a learned facet; like the rest of the connectome it is plastic, so attention can adapt to the type of learning required by each type of experience.
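If it helps, here is a very rough toy of “the connectome improves with learning”: a single Hebbian synapse whose learning rate is itself plastic (meta-plasticity). The rule and the constants are invented for the example; this is not how any particular neuromorphic simulator works.

```python
import random

class Synapse:
    def __init__(self):
        self.weight = random.uniform(-0.1, 0.1)
        self.lr = 0.01  # the learning rule's own parameter is mutable

    def hebbian_update(self, pre: float, post: float):
        correlation = pre * post
        # Ordinary plasticity: strengthen correlated activity.
        self.weight += self.lr * correlation
        # Meta-plasticity: consistently active synapses learn faster.
        self.lr = min(self.lr + 0.001 * abs(correlation), 0.1)

syn = Synapse()
for _ in range(100):
    pre, post = random.random(), random.random()
    syn.hebbian_update(pre, post)

print(f"weight={syn.weight:.3f}, learned learning rate={syn.lr:.4f}")
```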

Knowledge/learning will always be constrained by the hierarchical nature of knowledge. Sound learning has to be based on proven facts; otherwise you get a ‘house of cards’ scenario.

You also have to consider the possibility of human integration with NAIs: because they are designed around and based on the human connectome, they will eventually become compatible… then things will really get weird.

:smiley:

2 Likes

Great read, thanks. All sounds good. Just one aspect: we may be on the cusp of a house of cards already… What’s the hierarchical nature of knowledge? Seems more nurture than nature to me… :slight_smile:

Information/knowledge in general has certain intrinsic properties that are unavoidable and dictated by our reality. I’m not referring to the meaning/information contained within the knowledge, but to the actual essence of knowledge itself.

One cannot ‘know’ X if X is not true, for example; you cannot ‘know’ something that is false. So the ‘knowing’ of knowledge is always true.

The hierarchical component comes from the fact that all knowledge is derived/built from other knowledge. Even knowledge gained from experience is built from your prior understanding of the facets that comprise the experience. The reason you can understand these paragraphs is that you are recombining previously learned facets of knowledge.

Both the hierarchical structure/nature of knowledge, and the hierarchical proving of the sound information within it upon which new learning is based, will always place a restriction on the speed at which an intelligence/society/etc. can evolve.
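A toy way to see the restriction (the dependency graph here is invented): a fact becomes learnable only once all of its prerequisites are proven, so the depth of the hierarchy bounds the number of learning rounds no matter how much is learned in parallel.

```python
# Each fact lists the facts it must be built from.
prerequisites = {
    "counting": [],
    "addition": ["counting"],
    "multiplication": ["addition"],
    "algebra": ["multiplication"],
    "calculus": ["algebra"],
}

known: set[str] = set()
rounds = 0
while len(known) < len(prerequisites):
    # Each round, learn everything whose prerequisites are all proven.
    learnable = {fact for fact, deps in prerequisites.items()
                 if fact not in known and all(d in known for d in deps)}
    known |= learnable
    rounds += 1

print(f"all facts learned, but it took {rounds} rounds (the chain depth)")
```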

From a neural perspective, even your sight/vision is built upon these principles.

Nurture wouldn’t exist or work without nature (evolution).

:smiley:

2 Likes

Yes, a mix of both perhaps. Nature and nurture…

2 Likes

I like this paper about the intrinsic dimensionality of machine learning models solving problems: the authors found a way to measure how hard problems are, by incrementally increasing the number of trainable parameters to find the minimum number needed to solve a problem.

Measuring the Intrinsic Dimension of Objective Landscapes ([1804.08838])
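The gist of the trick, as I understand it: keep the full D model parameters but train only d coordinates of a random d-dimensional subspace, theta = theta0 + P @ z, and grow d until the task is solved. A minimal sketch, with a toy quadratic loss standing in for a real training loss:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 100                      # full parameter count
theta0 = rng.normal(size=D)  # "random init" of the model
target = rng.normal(size=D)  # loss minimum, unknown to the optimizer

def loss(theta):
    return np.sum((theta - target) ** 2)

def train_in_subspace(d, steps=2000, lr=0.1):
    # theta = theta0 + P @ z: only the d coordinates of z are trained.
    P = rng.normal(size=(D, d)) / np.sqrt(D)  # fixed random projection
    z = np.zeros(d)
    for _ in range(steps):
        grad_theta = 2 * (theta0 + P @ z - target)
        z -= lr * (P.T @ grad_theta)          # chain rule through P
    return loss(theta0 + P @ z)

# Loss falls as the subspace dimension d grows toward the problem's
# intrinsic dimension.
for d in (1, 10, 50, 100):
    print(f"d={d:3d}  final loss={train_in_subspace(d):8.3f}")
```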

3 Likes

Occam’s razor for AI? :smile:

2 Likes

Yeah, kind of; it seems like it would make things easier. Unfortunately I have never used it in practice to see how well it works.

1 Like

Let’s form a DAO and try it out… Oh, hang on… :wink:

2 Likes