Can Deep Networks Learn Invariants?

In our latest post to the SingularityNET AI Research Lab, researcher @alexey discusses SingularityNET’s experiments on whether deep neural networks can generalize beyond their specific training sets.

The results of these experiments point to new possibilities for neural networks on the path toward AGI. The takeaways are being applied in upcoming trials that aim to give SingularityNET unique and dynamic capabilities.

Interesting read, Alexey. I’m more of a generalist, so this may be fanciful… If the transition from variant to invariant were incremental, by an optimal number of degrees… Computationally intense, I know, but another DNN layer could potentially act to focus on and learn the invariant outcomes… The images seem like a newborn’s worldview…

Very interesting read and findings!

@alexey, will you submit this work to a peer-reviewed journal?

The problem of transferring ‘skills’ can be solved by wider or deeper networks (we will mention this in our next post), but the problem of extrapolation (“true generalization”) cannot be solved by traditional DNNs.
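
A minimal sketch of the extrapolation failure being described here, assuming nothing from the post itself (scikit-learn’s `MLPRegressor` is chosen only for brevity): a small tanh network fits y = 2x perfectly inside its training range, but its predictions flatten out beyond that range instead of continuing the line.

```python
# Minimal sketch: a tanh MLP interpolates well but does not extrapolate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=(256, 1))
y_train = 2.0 * x_train.ravel()          # a trivially simple linear target

net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(x_train, y_train)

x_test = np.array([[0.5], [2.0], [5.0]])  # in-range, then out-of-range
# In-range prediction is close to 1.0; out-of-range predictions saturate
# toward a constant because every tanh unit flattens far from the data,
# rather than continuing the line y = 2x.
print(net.predict(x_test))
```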

Simon, we are not going to submit this work separately to a journal. Some of its parts are included in other papers.

Identifying agent abilities within a header of some sort, together with subcontractor requests/queries based on service requirements beyond the initial agent, should solve much of this (see the sketch below).

It is, however, a long way from lateral thinking and the ability to draw upon other resources in an intuitive fashion.

There would have to be some kind of standardisation, though, which might impose boundaries, and that is somewhat against our ethos. Ben’s ontologies post suggested a way to categorise services.
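
A hypothetical sketch of the capability-header idea above; the field names, the `find_subcontractors` helper, and the matching rule are illustrative assumptions, not any SingularityNET specification.

```python
# Hypothetical sketch: agents advertise abilities in a header, and an agent
# queries a registry for subcontractors covering abilities it lacks.
from dataclasses import dataclass, field

@dataclass
class AgentHeader:
    agent_id: str
    abilities: set[str] = field(default_factory=set)  # e.g. {"image-classify"}

def find_subcontractors(required: set[str],
                        registry: list[AgentHeader]) -> list[AgentHeader]:
    """Return agents advertising every required ability (subset match)."""
    return [a for a in registry if required <= a.abilities]

registry = [
    AgentHeader("agent-a", {"image-classify", "ocr"}),
    AgentHeader("agent-b", {"translate"}),
]
# An agent needing OCR beyond its own abilities queries the registry:
print([a.agent_id for a in find_subcontractors({"ocr"}, registry)])  # ['agent-a']
```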
