AI Researcher Debbie Duong discusses how community members can help build SingularityNET by using its own simulation.
Note: Debbie Duong is part of SingularityNET’s AI Research Team. You can learn more about their work at the SingularityNET AI Research Lab. To chat directly with our team and community, visit the SingularityNET Community Forum.
The SingularityNET Simulation is a prototype multi-agent simulation that serves as an arena for testing competing methods and parameter settings of the SingularityNET, so that we can understand its dynamics and their implications for security and for SingularityNET values. We want the development of SingularityNET to be as open to the community as possible.
The arena of simulation facilitates discussion of design proposals, making it easier for the community to:
- Compare and criticize different approaches.
- Understand SingularityNET issues.
- Make contributions.
Any user may enter a method and see how it competes with others!
Unlike many arenas for machine learning that focus on narrower forms of AI, the SingularityNET Simulation is designed around the heterogeneity of an ecosystem of agents.
- For example, a community member can submit a machine learning method for running the SingularityNET to our ecosystem, either in competition with other machine learning methods or alone, to isolate its effects. Either way, each solution lies within its own group of agents, and those agents interact with each other to affect each other’s outcomes.
These interactions create a moving fitness landscape/changing utility space which can prove difficult for many machine learning algorithms. Many algorithms and arenas today are designed around a single agent, and give that agent control of the program. In our simulation, however, the simulation itself has control, which makes sense for multi-agent interaction.
- For example, multiple agents get a chance to move before the reward for any particular agent’s move is known. In this way, the SingularityNET Simulation addresses the difficult problem of co-evolution, because co-evolution is what happens when multiple machine learning algorithms interact and learn together.
With the SingularityNET Simulation, different SingularityNET agent designs may be compared for how they affect the utility of both the users seeking solutions and the developers making them.
- Do the automated solutions do well in standard metrics of machine learning algorithms?
- Is credit given where credit is due to the developers?
- Do developers’ programs have a fair chance of being part of a solution?
- Is the price considered fair by both user and developer alike?
- Are the stakeholders satisfied?
- And importantly, how do the designs of the SingularityNET affect the growth and complexification of agents, in promotion of a singularity?
The first simulation scenario that we offer to the public is the most fundamental problem of the SingularityNET, which is: how to automate AI solution development by putting together and parameterizing disparate Python programs.
Future simulation scenarios may include the effect of the reputation system, or the effect of offer networks, but any scenario would assume a way to automatically put solutions together for customers. In order to have a contest at all we must have a shared representation, and the goal of ours is to represent Python programs in a way that makes it as easy as possible for them to evolve. You can think of the representation as the rules of the game, like any Atari game would have, and the score on the gradient-test and number of (pretend) AGI tokens won as a measure of how close you are to winning the game.
Our contest is to get the best score in playing the representation, but in this instance, the game itself produces a product: a Python AI program. And if it helps, you are playing this game with others, because in this representation we leverage heterogeneity in the ecosystem.
Our scientists and the community will submit solutions to the problem of integrating AI programs: machine learning programs that work on a representation of Python programs as a vector of floats (or at least, our software translates from a readable form to a float vector and back).
In a simulation scenario, a (pretend) human user may put in an order for an AI solution, within a price range of (pretend) AGI tokens, that passes a test on specified data. This test must give a gradient, which is needed by machine learning programs.
- For example, the user may want to cluster some Twitter data, and may specify that his criterion for accepting a solution is that it passes a threshold on the silhouette coefficient test.

The solution submissions of our scientists and of the community will make vectors of floats that represent Python programs available to our simulation blackboard. The blackboard will iteratively request those programs and return a reward, which includes how many (pretend) AGI tokens the solution earned (if any), as well as the test scores for that solution (from the user test that gives a gradient, in this case the silhouette coefficient). Submissions also get back an observation of the blackboard. To win, a submission should return a float vector solution that earns a higher silhouette coefficient score each time. Machine learning programs with a high score will have written an AI solution in Python.
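The submit-score-reward loop above can be sketched in a few lines. Everything here is a hypothetical illustration: the `Blackboard` class, its reward rule, and the stand-in gradient test are assumptions for this sketch, not the simulation's real API.

```python
# Hypothetical sketch of the blackboard loop. The reward rule and the
# stand-in gradient test are assumptions, not the simulation's real API.
class Blackboard:
    def __init__(self, threshold, reward_tokens=10):
        self.threshold = threshold          # user's acceptance threshold
        self.reward_tokens = reward_tokens  # pretend AGI tokens at stake

    def gradient_test(self, solution_vector):
        # Stand-in for a user test that gives a gradient, such as the
        # silhouette coefficient: here, just the clipped mean of the vector.
        score = sum(solution_vector) / len(solution_vector)
        return max(0.0, min(1.0, score))

    def step(self, solution_vector):
        # One iteration: score the submitted float vector, pay out pretend
        # tokens if it passes the threshold, and return an observation.
        score = self.gradient_test(solution_vector)
        tokens = self.reward_tokens if score >= self.threshold else 0
        observation = {"last_score": score}
        return score, tokens, observation

board = Blackboard(threshold=0.8)
score, tokens, obs = board.step([0.9, 0.8, 0.85])
```

A real submission would use the returned score and observation as its learning signal, improving its float vector on each call to `step`.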
Thus, playing the SingularityNET Simulation is like playing an Atari game in an OpenAI submission, except that the game produces a Python program rather than an Atari score, and the Simulation has program control rather than the machine learning submission.
Submitted programs, whether they are reinforcement learning programs that could operate in the OpenAI arena, other neural networks, or evolutionary computation programs, will all have to deal with the changing fitness/utility/objective function values, because their fitness depends on successful interaction with other agents. These could include other agents from their own solutions, as well as agents from other solutions. Unlike many online software competitions, however, the best answers may include several different machine learning submissions.
- For example, the best Python programs could be constructed with both neural networks and evolutionary computation programs submitted by the community, each working on aspects that they do best in an ensemble.
The construction of solutions in the real world is a difficult unsolved problem, and our representation seeks to make it an easier problem. Furthermore, we are very open to suggestions for changing the representation as well!
Note: This is a work in progress all the way until the AI programs construct themselves. Principles of evolvability from the genetic programming (GP) and evolutionary computation literature underlie our representation. In the terms of the optimization/reinforcement learning community, evolvability is the equivalent of gradient. Techniques of evolvability include tree and linear representations with markers (gene switches, introns/non-coding segments, and stop codons). We implement a tree representation in Python through currying, apply a gene switch to an ontology that defines a desired type of solution for a semantic gradient, control specificity in the ontology with a stop codon, use non-coding segments as a scratchpad, and apply stop codons to a linear (GEP) representation of the tree to prevent it from becoming either too big or too disruptive.
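To make the currying idea concrete, here is a minimal sketch: parameters are fixed in advance, leaving one-argument "nodes" that compose into a program tree. The primitives and composition rule below are invented for illustration and are not the simulation's actual representation.

```python
from functools import partial

# Toy primitives standing in for Python AI building blocks (assumptions).
def scale(factor, data):
    return [x * factor for x in data]

def clip(limit, data):
    return [min(x, limit) for x in data]

# Currying: fix parameters first, leaving one-argument nodes that can be
# composed into a program tree.
scale_by_2 = partial(scale, 2)
clip_at_5 = partial(clip, 5)

def compose(outer, inner):
    # A tree node whose single child feeds its output upward.
    return lambda data: outer(inner(data))

program = compose(clip_at_5, scale_by_2)
result = program([1, 2, 3, 4])  # [2, 4, 5, 5]
```

In the full representation, markers such as gene switches and stop codons would decide which nodes are active and where a subtree ends; here only the composition mechanism is shown.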
Additionally, we include a weighting in our ontology so that statistical techniques like those of Microsoft’s DeepCoder can weight different Python function types according to their appearance in online solutions. We also memoise and pickle partially completed programs and make them available to all competing programs, so that no combination of deterministic programs ever needs to be computed twice. This is essential for combinatoric machine learning programs to compete efficiently, even though it means computation time cannot be a metric.
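The memoisation idea can be sketched as follows. Keying the cache on a pickled form of the program's float vector is an assumption made for this sketch; the simulation's actual cache layout may differ.

```python
import hashlib
import pickle

# Sketch of memoising partially completed programs. Keying on a pickled
# form of the program's float vector is an assumption for illustration.
_cache = {}

def program_key(vector):
    return hashlib.sha256(pickle.dumps(tuple(vector))).hexdigest()

def evaluate_once(vector, evaluate):
    # Deterministic programs are computed at most once; the cached result
    # is then shared by every competing submission.
    key = program_key(vector)
    if key not in _cache:
        _cache[key] = evaluate(vector)  # expensive, deterministic step
    return _cache[key]

calls = []
def fake_eval(v):
    calls.append(v)
    return sum(v)

first = evaluate_once([1.0, 2.0], fake_eval)
second = evaluate_once([1.0, 2.0], fake_eval)
```

The second call returns the cached value without re-running the evaluation, which is why shared caching helps combinatoric methods but makes wall-clock time meaningless as a metric.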
- Our representation takes advantage of the heterogeneity of agents by enabling an agent to delegate to another agent, cutting its responsibilities short before it grows too large or disruptive, as in some genetic programming techniques.
- Further, these agents can specialize, gaining experience by being parts of many other AI solutions as well.
- If some programs are solutions to smaller, simpler problems, then they will be available as building blocks for harder, more complex problems, leveraging a rich solution environment.
In evolutionary theory, the evolution of a structure for one purpose that prepares it for use in something else is called preadaptation, or spandreling. An example is how proto-birds had proto-wings used for scuttling, which preadapted them for flight.
Credit is assigned through the price signal, in a market process, and through the selfish fulfillment of individual utility.
Thus, assignment of credit computations makes use of Adam Smith’s invisible hand and Carl Menger’s “miracle of money” rather than the group fitness measures popular in co-evolution, which do not address the issues in assignment of credit.
- A functioning assignment of credit is important to any complex system, especially one that is complexifying.
- Free trade further enables programs and parameter settings to group together as is important to solve the problems at hand, in a non-preprogrammed, self-organizing way.
Because the positional meanings of the representation are held as constant as possible, agents can converge upon corresponding trade plans and the meanings of signs, a convergence that represents a compromise between agent utility needs. John Maynard Smith posited that equilibrium in economics and convergence in evolution were really the same phenomenon, and this informs our emphasis on convergence. Over time, these convergences become institutions through which agents can learn from the compromises of past agents, so they can complexify rather than reinventing the wheel.
The representation (learned by the submitted machine learning programs and used to communicate on the blackboard) is a vector of floats that consists of signs, offers to buy, sell, or construct items, and a price range. These items are Python programs. The representation is defined more precisely in our Jupyter Notebook simulation tutorial.
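To give a feel for what such a vector carries, here is a hypothetical decoding of one. The field positions below are invented purely for illustration; the real layout is defined in the Jupyter Notebook tutorial.

```python
# Hypothetical field layout for the float-vector representation; the real
# layout is defined in the project's Jupyter Notebook tutorial.
def decode(vec):
    return {
        "displayed_sign": vec[0:3],   # sign this agent shows to others
        "sought_sign": vec[3:6],      # sign it seeks in trading partners
        "offer": "buy" if vec[6] < 0.5 else "sell",
        "category": int(vec[7]),      # index into an ontology of Python programs
        "price_range": (vec[8], vec[9]),  # acceptable range of AGI tokens
    }

offer = decode([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.2, 3.0, 1.0, 5.0])
```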
The request to buy an item uses the categories of an ontology of Python programs, at any level of generality or specificity. For example, a category could be “clusterer.”
- What the seller returns must be a member of the requested category and must have an overlap in price range.
- Buyers rank offers to sell by the match of a vector of float signs.
- The match is calculated using the representation field for an agent to display a (float vector) sign and fields for (float vector) signs to seek in trading partner agents.
It is up to the submitted machine learning programs to interpret the meanings of the signs, but they could be reputation scores, unique identifiers of other agents, information needed for offer net trades, or even an emergent language or emergent ontological category for an agent. Whatever they are, they express the preferences agents have for other agents, and are used by the simulation to match agents in trade, choosing partners by roulette wheel draw based on a cosine similarity between the sought and the displayed sign. This ranking occurs after the other price and ontological category requirements of the trade are met.
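The matching mechanics described above can be sketched as plain Python. This is an illustration of the general technique (price-range overlap, cosine similarity, roulette-wheel draw), not the simulation's actual matching code.

```python
import math
import random

def cosine(a, b):
    # Cosine similarity between two float-vector signs.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def ranges_overlap(buy, sell):
    # A trade is possible only if the buyer's and seller's (low, high)
    # price ranges intersect.
    return buy[0] <= sell[1] and sell[0] <= buy[1]

def roulette_match(sought, displayed_signs, rng=random):
    # Roulette-wheel draw: each candidate's chance is proportional to the
    # cosine similarity between the sought sign and the sign it displays
    # (negative similarities are treated as zero weight).
    weights = [max(cosine(sought, d), 0.0) for d in displayed_signs]
    total = sum(weights)
    if total == 0.0:
        return rng.randrange(len(displayed_signs))
    draw = rng.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= draw:
            return i
    return len(displayed_signs) - 1
```

In this sketch, candidates that fail the category or price-range filters would be excluded before `roulette_match` is called, matching the order of checks described above.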
As of this time there is a stub solution to serve just as an example, but many scientists at SingularityNET have ideas about how to solve the AI integration problem. I am one of them, and at some point, I intend to submit a solution that uses the SISTER agent based co-evolution algorithm (Symbolic Interactionist Simulation of Trade and Emergent Roles).
This algorithm helps to make the problem even more evolvable:
- It leverages the multiple agents to express the problem in phenotype space, where trades that are close in utility are closer to each other, and those that are farther in utility are farther from each other.
- SISTER agents use the sign field as an emergent language that makes it easier for new agents to learn the agent culture, just like human beings invent languages that make it easier for their children to learn their culture.
With SISTER, sign fields focus selective pressure on agents based on role representations in semantic space, because SISTER has not only induction of sign meanings but also inducement with those signs. Using the sign as a representation of roles that exist in a utility space, a space that puts role behaviors closer to and farther from each other based on utility, creates a gradient for behaviors to evolve on. This is especially important to the ability of new blank agents to learn the present agent culture, the agreement on what agents in roles do.
This agreement consists of agent institutions, the mutually beneficial agreements that past agents converged upon together. New agents are needed to add diversity and improvements in utility to the agent institutions, in a simulation of social complexification that can fuel an AI complexification. The design of SISTER asserts the primacy of epistatic selective pressures and feedback over schema theory in genetic programming.
- The community is invited to review the tutorial and run the simulation on proposed solutions, as well as to register solutions with SingularityNET, so that multiple solutions can be run in a competition (or in a cooperation).
- Any contributions to SISTER are welcomed, including the contributions that competitors make by challenging SISTER, of course.
While our AI Research Lab gives you inside access into our AI initiatives, we’re not done yet! On the SingularityNET Community Forum, you can chat directly with our AI team, as well as with developers and researchers from around the world. This is your chance to directly influence the future of AI.