A Grand Unified Learning Algorithm?


#1

In what sense can we articulate a single learning algorithm or meta-algorithm, encompassing all the types of learning that an AGI system needs to do?

I opined on this topic in a paper I gave at the AGI-17 conference in Melbourne, which won the Kurzweil Best Idea Prize there,

https://arxiv.org/abs/1703.04361

The thoughts there build a lot on these ideas

https://wiki.opencog.org/w/OpenCoggy_Probabilistic_Programming

which are likely to be implemented (in some, maybe much-improved, form) during the next couple of years as OpenCog's AI gets more sophisticated.

In short, I think there is a universal learning meta-algorithm, which I call “Probabilistic Growth and Mining of Combinations” (PGMC), and I think the ways PGMC manifests itself in different domain-specific forms can be mapped into each other morphically in elegant ways… Leveraging these insights in the context of practical AGI system design is another issue…
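The paper spells PGMC out properly; purely as a toy illustration of the grow-then-mine flavor of the idea (the set-based representation of combinations, the scoring function, the pair-level mining, and all thresholds here are my own assumptions, not anything from the paper):

```python
import random
from collections import Counter
from itertools import combinations

def pgmc_sketch(primitives, score, generations=20, pop=30, seed=0):
    """Toy grow-and-mine loop: probabilistically grow combinations of
    primitives, mine frequently co-occurring sub-combinations from the
    high-scoring ones, and bias future growth toward those patterns."""
    rng = random.Random(seed)
    bias = Counter()  # mined pairs that tend to appear in good combinations
    best = None
    for _ in range(generations):
        population = []
        for _ in range(pop):
            combo = set(rng.sample(primitives, k=rng.randint(2, 4)))
            # growth step, biased toward previously mined pairs
            for (a, b), _n in bias.most_common(3):
                if a in combo and rng.random() < 0.5:
                    combo.add(b)
            population.append(frozenset(combo))
        population.sort(key=score, reverse=True)
        elite = population[: pop // 5]
        # mining step: count pair sub-combinations within the elite
        for combo in elite:
            for pair in combinations(sorted(combo), 2):
                bias[pair] += 1
        if best is None or score(elite[0]) > score(best):
            best = elite[0]
    return best

# toy score: reward combinations containing both 'x' and 'y', penalize size
prims = list("abcdefxy")
score = lambda c: 10 * ('x' in c and 'y' in c) - len(c)
print(sorted(pgmc_sketch(prims, score)))
```

The point of the sketch is only the shape of the loop: stochastic growth of candidate combinations, frequent-pattern mining over the good ones, and feedback of the mined patterns into the growth distribution.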


#2

In addition to cognitive synergy (which, btw, is an exciting expression), our general intelligence (I think) comes from our ability to learn how to learn. As soon as we see something new, we can automatically mine its unique features, memorise them, and get better with experience. If a neural network could be made to learn how to build efficient, application-specific neural networks, it could fall into a vortex of discoveries, learning everything new.
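A minimal way to see the learning-to-learn pattern, entirely my own toy construction rather than any real network-building system: an outer loop that tunes *how* an inner learner learns, here just its learning rate on a one-dimensional problem.

```python
def inner_train(lr, steps=20):
    """Inner learner: gradient descent on f(w) = (w - 3)^2 from w = 0."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)   # gradient of (w - 3)^2 is 2(w - 3)
    return (w - 3) ** 2          # final loss

def meta_learn(candidate_lrs):
    """Outer loop 'learns how to learn': it picks the inner-loop setting
    that makes the inner learner end up with the lowest loss."""
    return min(candidate_lrs, key=inner_train)

best_lr = meta_learn([0.001, 0.01, 0.1, 0.5, 1.1])
print(best_lr)  # → 0.5 (1.1 diverges; the tiny rates barely move)
```

Real meta-learning systems search over architectures or update rules rather than a single scalar, but the two-level structure, an outer learner improving an inner learner, is the same.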


#3

PGMC seems to use ideas similar to my Heuristic Algorithmic Memory, which was the first to integrate frequent subprogram mining into a meta-learning algorithm (AGI 2010, 2011, 2014), although it's just a sketch. Who knows, maybe there are several universal meta-learning algorithms that are equally good, or complementary. We'll see, I guess, but I don't disclose algorithms before publication :slight_smile:
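The HAM papers define the actual mechanism; just to make the phrase "frequent subprogram mining" concrete (the nested-tuple program representation and the frequency threshold below are my own toy assumptions), one can count recurring sub-expressions across a corpus of programs and promote the ones that recur into reusable building blocks:

```python
from collections import Counter

def subexprs(expr):
    """Yield every sub-expression of a nested-tuple program tree."""
    yield expr
    if isinstance(expr, tuple):
        for child in expr[1:]:   # expr[0] is the operator name
            yield from subexprs(child)

def mine_frequent(corpus, min_count=2):
    """Count sub-expressions across a corpus of program trees and keep
    the compound ones occurring at least min_count times."""
    counts = Counter(s for prog in corpus for s in subexprs(prog))
    return {s: n for s, n in counts.items()
            if isinstance(s, tuple) and n >= min_count}

# toy corpus of ('op', arg, arg) trees; ('mul', 'x', 'x') recurs
corpus = [
    ('add', ('mul', 'x', 'x'), 'y'),
    ('sub', ('mul', 'x', 'x'), ('mul', 'x', 'y')),
    ('add', ('mul', 'x', 'x'), ('mul', 'x', 'y')),
]
print(mine_frequent(corpus))
# → {('mul', 'x', 'x'): 3, ('mul', 'x', 'y'): 2}
```

In a real memory system the mined subprograms would then be stored and offered to the learner as primitives, so later search starts from larger proven pieces.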


#4

Google AI has already implemented very cool versions of Schmidhuber's ideas on meta-learning for ANNs. I wonder if you've seen the latest papers. What I use meta-learning for is MEMORY, but there might be other uses. In general, I hypothesize that meta-learning is the crucible of CONSCIOUSNESS. Yes, the big C. In the latest DeepMind experiments, you can see how the prefrontal-cortex model, treated as a meta-learning system, worked surprisingly well. Knowing something about your own brain is Minsky's model of C, though the most sophisticated such theory was advanced by Friston. Follow my Twitter http://twitter.com/examachine ; I do post new papers about this subject. BTW, we can definitely implement the newest algorithms that use RL for meta-learning, and it looks like Schmidhuber approves of that approach.