In our latest post from the SingularityNET AI Research Lab, researcher @alexey discusses SingularityNET's experiments on whether deep neural networks can generalize outside their specific training sets.
The results of these experiments have helped fuel new possibilities for neural networks capable of producing AGI. These takeaways are being implemented in upcoming trials that seek to provide SingularityNET with unique and dynamic capabilities.
Interesting read Alexey, I'm more of a generalist so this may be fanciful… If the transition from variant to invariant were incremental by an optimum number of degrees… Computationally intense, I know, but potentially another DNN layer could act to focus on / learn the invariant outcomes… The images seem like a newborn's worldview…
The problem of transferring ‘skills’ can be solved by wider or deeper networks (we will mention this in our next post), but the problem of extrapolation (“true generalization”) cannot be solved by traditional DNNs.
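To make the extrapolation point concrete, here is a minimal sketch (not the experiment from the post), assuming scikit-learn is available: a small MLP fits the identity function well inside its training range but fails badly outside it, since its saturating units only interpolate between points it has seen.

```python
# Toy illustration: a small MLP learns y = x on [-1, 1] but does not
# extrapolate to inputs far outside that range.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Train on the identity function, but only on inputs in [-1, 1].
X_train = rng.uniform(-1.0, 1.0, size=(2000, 1))
y_train = X_train.ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Evaluate inside and far outside the training range.
X_in = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
X_out = np.linspace(3.0, 5.0, 5).reshape(-1, 1)

print("inside  [-1, 1]:", np.round(model.predict(X_in), 3))   # close to the inputs
print("outside [ 3, 5]:", np.round(model.predict(X_out), 3))  # saturates near +/-1
```

The failure is structural rather than a matter of capacity: the tanh units flatten out beyond the training range, so no amount of extra training data from [-1, 1] fixes the predictions at x = 5.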
Identifying agent abilities within a header of some sort, and issuing subcontractor requests/queries based on service requirements beyond the initial agent's own, should solve much of this.
It is, however, a long way from lateral thinking and the ability to draw upon other resources in an intuitive fashion.
There would, though, have to be some kind of standardisation, which would perhaps impose boundaries, and that is somewhat against our ethos. Ben's ontologies post suggested a way to categorise services.
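As a purely hypothetical sketch of the capability-header idea above (not the SingularityNET API), an agent could advertise ontology-style category tags in a small header, and a requesting agent could match a required service against registered headers to pick a subcontractor. Names such as `AgentHeader` and `find_subcontractors` are invented for illustration only.

```python
# Hypothetical sketch: agents advertise capabilities in a header; a caller
# delegates work it cannot handle by matching requirements against headers.
from dataclasses import dataclass, field

@dataclass
class AgentHeader:
    agent_id: str
    # Ontology-style tags describing what the agent can do, in the spirit
    # of the service categories from Ben's ontologies post.
    capabilities: set = field(default_factory=set)

def find_subcontractors(required: set, registry: list) -> list:
    """Return agents whose advertised capabilities cover the required set."""
    return [h.agent_id for h in registry if required <= h.capabilities]

registry = [
    AgentHeader("agent-A", {"image-classification", "object-detection"}),
    AgentHeader("agent-B", {"text-summarisation"}),
    AgentHeader("agent-C", {"object-detection", "image-segmentation"}),
]

# A task beyond the initial agent's own abilities gets delegated.
print(find_subcontractors({"object-detection"}, registry))  # ['agent-A', 'agent-C']
```

The standardisation concern shows up directly here: the matching only works if all agents tag their services from a shared, agreed vocabulary, which is exactly the boundary-setting trade-off mentioned above.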