In what sense can we articulate a single learning algorithm or meta-algorithm, encompassing all the types of learning that an AGI system needs to do?
I opined on this topic in a paper I gave at the AGI-17 conference in Melbourne, which won the Kurzweil Best Idea Prize there.
The thoughts there build substantially on these ideas,
which are likely to be implemented (in some, perhaps much improved, form) during the next couple of years as OpenCog's AI grows more sophisticated.
In short, I think there is a universal learning meta-algorithm, which I call "Probabilistic Growth and Mining of Combinations" (PGMC), and I think the ways PGMC manifests itself in different domain-specific forms can be mapped into each other morphically, in elegant ways… Leveraging these insights in the context of practical AGI system design is another issue entirely…
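To make the name a bit more concrete: the core loop suggested by "growth and mining of combinations" is to probabilistically grow candidate combinations from a pool of elements, mine frequently recurring sub-patterns from the fitter candidates, and feed those mined patterns back into the pool. The toy sketch below is purely illustrative, under assumptions of my own choosing (the string-based representation, the `grow`/`mine` helpers, and the toy fitness function are all hypothetical); it is not the actual PGMC specification from the paper.

```python
import random
from collections import Counter

random.seed(0)

PRIMITIVES = list("abcdef")

def fitness(combo):
    # Toy objective (an assumption for illustration): reward
    # combinations whose flattened string contains the motif "ab".
    return "".join(combo).count("ab")

def grow(pool, length=4):
    # Probabilistic growth: sample elements from the current pool,
    # which holds primitives plus previously mined sub-patterns.
    combo = []
    while len("".join(combo)) < length:
        combo.append(random.choice(pool))
    return tuple(combo)

def mine(population, top_k=2):
    # Mining: extract the most frequent length-2 substrings
    # from the fitter half of the population.
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: len(ranked) // 2]
    counts = Counter()
    for combo in elite:
        s = "".join(combo)
        for i in range(len(s) - 1):
            counts[s[i:i + 2]] += 1
    return [pat for pat, _ in counts.most_common(top_k)]

def pgmc_sketch(generations=10, pop_size=50):
    pool = list(PRIMITIVES)
    best = ()
    for _ in range(generations):
        population = [grow(pool) for _ in range(pop_size)]
        best = max(population + [best], key=fitness)
        # Feed mined patterns back into the pool, closing the
        # grow -> mine -> grow loop.
        for pat in mine(population):
            if pat not in pool:
                pool.append(pat)
    return best, pool

best, pool = pgmc_sketch()
print("best combo:", "".join(best), "fitness:", fitness(best))
```

The feedback step is the point of the sketch: once "ab" is mined into the pool, growth can sample it as a single unit, so good sub-combinations compound across generations rather than having to be rediscovered from scratch.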