A Grand Unified Learning Algorithm?


In what sense can we articulate a single learning algorithm or meta-algorithm, encompassing all the types of learning that an AGI system needs to do?

I opined on this topic in a paper I gave at the AGI-17 conference in Melbourne, which won the Kurzweil Best Idea Prize there.


The thoughts there build heavily on these ideas, which are likely to be implemented (in some, perhaps much improved, form) during the next couple of years as OpenCog AI gets more sophisticated.

In short, I think there is a universal learning meta-algorithm, which I call “Probabilistic Growth and Mining of Combinations” (PGMC), and I think the ways PGMC manifests itself in different domain-specific forms can be mapped into each other morphically, in elegant ways… Leveraging these insights in the context of practical AGI system design is another issue, of course.
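The paper spells out PGMC properly; as a loose illustration only (every name, parameter, and the scoring function below are my own stand-ins, not taken from the paper), a "grow combinations, mine the good ones, feed the mined patterns back into growth" loop might look roughly like this:

```python
# Toy sketch of a grow-and-mine loop over symbolic combinations.
# Entirely illustrative: the primitives, scorer, and bias scheme
# are invented for this example, not drawn from the PGMC paper.
import random
from collections import Counter

PRIMITIVES = list("abcdef")

def score(combo):
    # Stand-in fitness: reward combinations containing the sub-pattern "ab".
    return sum(1 for i in range(len(combo) - 1) if combo[i:i + 2] == "ab")

def grow(bias, length=5):
    # Grow a combination, probabilistically favouring mined sub-patterns.
    combo = random.choice(PRIMITIVES)
    while len(combo) < length:
        if bias and random.random() < 0.5:
            pair = random.choice(bias)
            # Extend with a mined pair when it chains onto the current tail.
            combo += pair[1] if combo[-1] == pair[0] else random.choice(PRIMITIVES)
        else:
            combo += random.choice(PRIMITIVES)
    return combo

def mine(population, top_k=10):
    # Mine the most frequent adjacent pairs from the best-scoring combos.
    best = sorted(population, key=score, reverse=True)[:top_k]
    pairs = Counter(c[i:i + 2] for c in best for i in range(len(c) - 1))
    return [p for p, _ in pairs.most_common(3)]

random.seed(0)
bias = []
for generation in range(20):
    population = [grow(bias) for _ in range(100)]
    bias = mine(population)  # mined patterns bias the next round of growth

print(bias)  # the sub-patterns the loop currently favours
```

The point is just the shape of the feedback cycle: probabilistic growth proposes combinations, pattern mining extracts what worked, and the mined patterns reshape the growth distribution.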


In addition to cognitive synergy (which, by the way, is an exciting expression), our general intelligence comes, I think, from our ability to learn how to learn. As soon as we see something new, we can automatically mine its unique features, memorise them, and get better with experience. If a neural network could be made to learn how to build efficient, application-specific neural networks, it could fall into a vortex of discovery, learning one new thing after another.