AGI brain design


To create an AGI brain using the integrated approach, we need to combine many neural networks that others have made. The problem is that many of these networks might take the outputs of prior networks as their inputs. Do we need to plan out a large portion of the AGI brain in advance before building any networks?

Thank you for reading and sharing your opinion. :slight_smile:


In a way, this is actually one of the biggest challenges to figure out. How does a service understand the inputs and outputs of other services?

At the moment our alpha just lets people make arbitrary JSON-RPC calls, but we want to allow services to:

  1. publish their API and type definitions, including the ability for services to share common types (e.g. RGB images, 2D bounding boxes, parameterised probability distributions, etc.)
  2. form some kind of ontology on top of these that conveys additional meaning and that can also evolve as the network's understanding of reality improves (although the ontology description won't necessarily be human-understandable at that point): see the topic Ontologies for SingularityNET

The first point is the most important in the short term for practical reasons, but the second one will be in the back of our minds as we make design decisions.
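To make the first point concrete, here is a minimal sketch of what a typed JSON-RPC call might look like. Everything in it is hypothetical: the method name `detect_objects`, the `BoundingBox2D` shape, and the field names are illustrative stand-ins, not the actual alpha's API or schema.

```python
import json

# Hypothetical shared type: a 2D bounding box, one of the common types
# services could agree on so they can consume each other's outputs.
# The field names here are illustrative, not an actual schema.
def bounding_box_2d(x: float, y: float, width: float, height: float) -> dict:
    """Build a bounding-box value in a shared, service-agnostic shape."""
    return {"type": "BoundingBox2D", "x": x, "y": y,
            "width": width, "height": height}

def jsonrpc_request(method: str, params: dict, request_id: int = 1) -> str:
    """Serialise a JSON-RPC 2.0 request of the arbitrary-call kind."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

# A detector service could advertise that it returns BoundingBox2D
# values, so a downstream service knows how to consume them without
# hand-written glue for each pairing.
request = jsonrpc_request(
    "detect_objects",  # hypothetical method name
    {"image_url": "https://example.com/cat.png", "max_results": 5},
)
print(request)
```

The point of the shared `BoundingBox2D` helper is that once two services reference the same published type, a caller can wire one service's output into another's input mechanically, without understanding either service's internals.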


What I'm thinking is attaching metadata keywords to each ontology, then outsourcing a job requirement by searching across those keywords.

I am almost certain that direct I/O will not be necessary, only different jobs managed by different agents.

Each individual agent would not be directly aware of the overall service being delivered, but a higher-level emergence of AGI could occur as a result of the agent swarms.
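The keyword-search idea above can be sketched in a few lines. This is a toy illustration under my own assumptions, not anything in the alpha: agent names and keyword sets are made up, and matching is simple keyword overlap, where a real registry would presumably use the ontology for richer matching.

```python
# Hypothetical registry: each agent publishes a set of ontology keywords.
registry = {
    "face-detector":  {"vision", "image", "face", "bounding-box"},
    "speech-to-text": {"audio", "speech", "transcription"},
    "translator":     {"text", "translation", "language"},
}

def find_agents(required: set, registry: dict) -> list:
    """Rank agents by how many of the required keywords they cover."""
    scored = [
        (len(required & keywords), name)
        for name, keywords in registry.items()
        if required & keywords  # ignore agents with no overlap at all
    ]
    return [name for score, name in sorted(scored, reverse=True)]

# A job requirement is just another keyword set; no agent needs to know
# about the overall pipeline, only the keywords it advertises.
print(find_agents({"image", "face"}, registry))  # → ['face-detector']
```

Note that nothing here requires direct I/O between agents: the outsourcing party decomposes the overall job into keyword-tagged sub-jobs, and each matched agent handles its own piece in isolation.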