Open Ended Intelligence

One of the more interesting and important lines of thought regarding the future (and present) of intelligence that I’ve come across in the last few years is the concept of “Open Ended Intelligence” developed by David Weinbaum (Weaver) and Kabir (Viktoras) Veitas (who is now a SingularityNET team member) … see

for an overview…

The implications of this concept for the Singularity and the nature of post-Singularity minds have not been adequately explored yet, but are hinted at in two other papers by the same authors:

One thing these explorations make clear is: We do not currently have a good model of what a mind is, or what intelligence is … certainly not a model adequate to enable us to understand what a post-Singularity mind might be like, or what kind of intelligence the Global Brain of the planet today might have, or what kind of intelligence and mind SingularityNET might self-organize into even 3 or 5 years from now…

These are currently very theoretical concepts and work needs to be done to map them into more concretely measurable and simulable forms… I think this is a very important direction.

A lot of the thinking one sees today about the future of AI is grounded in very simplistic perspectives on intelligence framed in terms of ideas like optimization and reinforcement learning. Grand conclusions about AI ethics and the benefits or costs of superhuman minds are drawn based on absurdly simplistic and twisted ideas like “a superintelligent mind will choose its actions via aiming to maximize some reward function.”
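To make the target of this criticism concrete, here is a deliberately minimal toy sketch (my own construction; the names `toy_reward` and `RewardMaximizer` are purely illustrative, not from any real system) of what "choose actions by aiming to maximize some reward function" amounts to as a model of mind:

```python
# A deliberately minimal caricature of the "intelligence = reward maximization"
# framing: the agent's entire "mind" is a loop that picks whichever action
# scores highest on a fixed, externally supplied reward function.

import random

def toy_reward(state, action):
    """A fixed scalar reward -- the whole of what this 'mind' cares about."""
    return -(state + action) ** 2  # arbitrary: prefers state + action near 0

class RewardMaximizer:
    def __init__(self, actions):
        self.actions = actions

    def act(self, state):
        # Nothing else enters the decision: no goal formation, no reflection,
        # no renegotiation of what counts as valuable.
        return max(self.actions, key=lambda a: toy_reward(state, a))

agent = RewardMaximizer(actions=[-2, -1, 0, 1, 2])
state = random.randint(-3, 3)
print(state, "->", agent.act(state))
```

Everything outside that single scalar is simply invisible to such an agent, which is why conclusions about superintelligent minds drawn from this picture are so suspect.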

The Open Ended Intelligence paradigm constitutes a much deeper and more meaningful framework for considering the future of mind, but fleshing it out in a practical context requires some attention…


What would your short list of concepts that need measurable and simulable forms, that RL is too simplistic a framework for, look like? Would the reward of natural selection, “Survival of the Survivors”, be an example of a reward that is not simplistic?

For the metaphysical foundations of the Open Ended Intelligence concept and the thinking behind it, I would highly recommend Weaver’s PhD thesis, available from Open-Ended Intelligence | Weaver D.R. Weinbaum - Academia.edu.


Hi Deborah,

I think this is an important question but not an easily answerable one… One reason is that ‘mapping’ very theoretical concepts to concrete forms necessarily involves a certain degree of simplification. Open-ended intelligence conceives of an intelligence without pre-defined goals, yet a measurable and simulatable form by definition involves such a goal…

Regarding the “Survival of the Survivors” principle (as far as I understand it): it is descriptive rather than generative, and it is a tautology that is trivially true. In order to produce a measurable and simulatable form of it, I suppose you would need to define quite a few additional goals/constraints – e.g. environment constraints, resource limits, maybe efficiency/optimality constraints, etc. – and, most importantly, put the survival principle into an agent as something generative. Generative in this context means that the agent would need to actively pursue survival by any means, which is not really what is meant by “Survival of the Survivors”… I suppose this kind of conceptualization of an intelligent agent is what led to Bostrom’s paperclip maximizer image, which is something that nobody would want to realize, I think…

Further, the “Survival of the Survivors” principle seems to accommodate exaptation and genetic drift, which are basically non-deterministic – they are not driven by a prior goal that dictates which organism (or even group) survives / replicates / propagates. So how would we produce a measurable and simulatable form of the “Survival of the Survivors” principle?
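To make the contrast concrete, here is a rough toy sketch – entirely my own construction, assuming a hypothetical one-dimensional “trait” and arbitrary numbers, not anything from the papers or from OfferNet. The first population has no survival goal at all: traits drift, deaths are random, and the survivors are simply whoever did not die. The second is what an RL-style formulation forces on us: every agent explicitly acts to maximize its survival probability.

```python
import random

def drift_population(n=200, generations=50):
    """No goals anywhere: traits mutate randomly, deaths are random.
    'Survival of the survivors' is just a description of whoever is left."""
    traits = [random.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(generations):
        # random deaths, independent of trait -- pure drift
        traits = [t for t in traits if random.random() > 0.3]
        # survivors replicate with mutation until the population is restored
        while len(traits) < n and traits:
            traits.append(random.choice(traits) + random.gauss(0.0, 0.1))
    return traits

def goal_driven_population(n=200, generations=50):
    """The 'generative' reading: each agent explicitly pursues survival by
    choosing the candidate trait that maximizes its survival probability."""
    traits = [random.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(generations):
        survivors = []
        for t in traits:
            # the agent 'chooses' between staying put and moving toward the
            # (assumed) safest trait value 0.0 -- an explicit survival goal
            candidates = [t, t * 0.5]
            best = max(candidates, key=lambda c: 1.0 / (1.0 + c * c))
            if random.random() < 1.0 / (1.0 + best * best):
                survivors.append(best)
        traits = survivors
        while len(traits) < n and traits:
            traits.append(random.choice(traits) + random.gauss(0.0, 0.1))
    return traits

drift = drift_population()
driven = goal_driven_population()
print("drift-only trait spread:   %.2f" % (max(drift) - min(drift)))
print("goal-driven trait spread:  %.2f" % (max(driven) - min(driven)))
```

The point is only that the second version had to smuggle in a goal, and a particular notion of which trait values are “safe”, that the descriptive principle itself never contained.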

We therefore suggest concentrating not on an already predefined goal/value, but on how an intelligent system comes up with any (intelligent) goal in the first place, and then pursues or changes it in light of circumstances and interactions with other systems in its environment (which can be more or less intelligent, or smaller/bigger). So the emphasis is on the interactions/communications between (less) intelligent agents that can give rise to the shared goals/behavior of a (more) intelligent agent – a bottom-up approach, as contrasted with working with a hierarchy of goals (where lower-level goals are derived from higher-level values).

Part of what I am doing in the offernet project, from a conceptual perspective, aims at suggesting some answers and, most importantly, at experimenting with them computationally – adhering to the conceptual paradigm, but still within a well-defined and fairly constrained environment of economic exchanges between AI agents.
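As a purely illustrative toy – this is not the actual OfferNet code, and it collapses everything to direct bilateral swaps between hypothetical `Agent` objects – the bottom-up framing I mean looks roughly like this: each agent has a local offer and a local demand, interactions are only pairwise and local, and any aggregate measure (here, crudely, the fraction of satisfied demands) is emergent rather than an objective any agent optimizes.

```python
import random

ITEMS = ["a", "b", "c", "d", "e"]

class Agent:
    def __init__(self):
        self.offers = random.choice(ITEMS)
        self.demands = random.choice([i for i in ITEMS if i != self.offers])
        self.satisfied = False

def run_exchange_round(agents):
    """One round of purely local interactions: each agent looks at a few random
    peers and trades whenever its offer matches the peer's demand and vice versa."""
    for agent in agents:
        if agent.satisfied:
            continue
        for peer in random.sample(agents, k=min(5, len(agents))):
            if peer is agent or peer.satisfied:
                continue
            if agent.offers == peer.demands and peer.offers == agent.demands:
                agent.satisfied = peer.satisfied = True
                break

agents = [Agent() for _ in range(100)]
for _ in range(20):
    run_exchange_round(agents)

satisfied = sum(a.satisfied for a in agents)
print(f"{satisfied}/{len(agents)} demands satisfied via local exchanges only")
```

The real setting is of course richer than bilateral swaps, but the design choice is the same: no global objective function is handed to the system; whatever “shared goal” appears is a property of the interaction network.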

It seems that your SISTER architecture points in a similar direction, yet you build more on the symbolic approach…


Hi
Seen through the lens of RL, survival of the survivors is a tautology that generates as silly a thing as what may come out of an RL reward/fitness function that asked agents just to maximize their Phi scores. But seen through the lens of the self-organization of life on earth, the theory of natural selection implies that the only reason life self-organized and complexified into systems, and systems of systems upon systems, etc., was that, in order for us to be seeing it now as we are, it must not have died off. So something about the laws of physics made it possible for that to happen, and the fact that we observe it now means that, however unlikely, it must have happened. Living things are alive because they didn’t die. It’s hard to express how important that is within RL, because you end up with narrow, useless constructions, not the fractal beauty of respiration.

Complexification happens as an answer to the unfolding of the laws of physics; living things survive in the face of the things – in the laws of physics and in each other – that might have caused them not to, so the structures of living things are solutions to the problems of their own unfolding, of how to dodge what unfolded from themselves. I don’t think you have to tell a living thing to want to survive for it to survive – it could survive through unintended consequences, or conversely die from actions it took because it wanted to survive (the road to hell being paved with good intentions). But, as the ones we see now must have survived, they have come to have certain subgoals, while not necessarily being conscious of surviving. For example, the desire to reproduce isn’t an example of trying to survive, because it’s possible to have the desire to reproduce without desiring progeny. So it’s a subgoal, which we have evolved, of the ultimate goal of survival. But survival itself isn’t even a goal so much as a tautology: if we see, we must have survived and gained the complexity to see, and therefore we see complexity. The bajillions of other universes that never became complex enough to have observers weren’t observed, goals of survival or not.
