Automated Theorem Proving and AGI

It seems to me there is great potential for cross-fertilization between the fields of Automated Theorem Proving and AGI …

This is especially clear when one looks at an AGI approach like OpenCog, in which probabilistic/heuristic logic plays a key role (the Probabilistic Logic Networks framework) …

There is a stream of work in ATP called “lean theorem proving” which involves creating really really simple theorem provers and seeing how well they can do. The answer is, sometimes pretty damn well, though not quite as well as the best theorem provers around…
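To give a concrete flavor of what “lean” means here: leanCoP itself is a complete first-order connection prover written as a handful of Prolog clauses. The Python sketch below is not that calculus at all, just a complete propositional tautology checker in the same minimalist spirit, so treat the representation and names as purely illustrative.

```python
# A toy illustration of the "lean prover" spirit: a complete propositional
# tautology checker in a few dozen lines. (leanCoP itself is a first-order
# connection prover in Prolog; this only conveys the minimalism, not the
# connection calculus.)
from itertools import product

# Formulas are nested tuples: ('var', name), ('not', f),
# ('and', f, g), ('or', f, g), ('imp', f, g).

def variables(f):
    """Collect the propositional variables occurring in a formula."""
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def evaluate(f, env):
    """Evaluate a formula under a truth assignment (dict: name -> bool)."""
    tag = f[0]
    if tag == 'var':
        return env[f[1]]
    if tag == 'not':
        return not evaluate(f[1], env)
    if tag == 'and':
        return evaluate(f[1], env) and evaluate(f[2], env)
    if tag == 'or':
        return evaluate(f[1], env) or evaluate(f[2], env)
    if tag == 'imp':
        return (not evaluate(f[1], env)) or evaluate(f[2], env)
    raise ValueError(f"unknown connective: {tag}")

def is_tautology(f):
    """Brute-force check over all assignments: 'lean', but exponential."""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

# Peirce's law ((p -> q) -> p) -> p is a classical tautology.
p, q = ('var', 'p'), ('var', 'q')
peirce = ('imp', ('imp', ('imp', p, q), p), p)
print(is_tautology(peirce))  # True
```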

It seems to me that integrating an appropriate lean theorem prover into OpenCog’s Unified Rule Engine (URE), and then figuring out how to use it together with PLN, would be a very interesting project and approach…

I note that in common PLN inferences, the majority of inference steps are actually just standard predicate logic steps … only a minority of steps are actually probabilistic/heuristic (though these are important ones)! This suggests that such integration could have practical near-term value…
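To make that division of labor concrete, here is a purely hypothetical Python sketch, not actual OpenCog, URE, or PLN code: crisp predicate-logic steps get handed off to an embedded “lean” prover, and only the genuinely uncertain steps go through PLN-style truth-value handling. All of the names here (Step, crisp_prover, pln_step) are illustrative placeholders.

```python
# Hypothetical dispatcher sketch, not a real OpenCog/URE/PLN API: route
# crisp (certain) inference steps to a complete theorem prover and only
# genuinely uncertain steps to PLN-style probabilistic handling.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    premises: list                      # antecedent formulas (opaque here)
    conclusion: object                  # goal formula
    strength: Optional[float] = None    # PLN-style truth value, if any
    confidence: Optional[float] = None

def is_crisp(step: Step) -> bool:
    """Treat a step as crisp if its truth values are absent or certain."""
    return step.strength in (None, 1.0) and step.confidence in (None, 1.0)

def dispatch(step: Step,
             crisp_prover: Callable[[list, object], bool],
             pln_step: Callable[[Step], Step]) -> Step:
    """Send crisp steps to a complete prover, uncertain ones to PLN."""
    if is_crisp(step):
        proved = crisp_prover(step.premises, step.conclusion)
        return Step(step.premises, step.conclusion,
                    strength=1.0 if proved else 0.0, confidence=1.0)
    return pln_step(step)

# Example: a crisp modus-ponens-like step bypasses PLN entirely.
result = dispatch(Step(premises=["P", "P->Q"], conclusion="Q"),
                  crisp_prover=lambda prem, goal: True,  # stand-in prover
                  pln_step=lambda s: s)
print(result.strength, result.confidence)  # 1.0 1.0
```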

The lean theorem prover nanoCoP (http://www.leancop.de/papers/nanocop_ijcar16.pdf ) seems like a natural one to explore in this light, as unlike its predecessors such as leanCoP, it does not require formulas to be translated into a clausal normal form (CNF/DNF) …
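One way to see why skipping the clausal translation can matter: the textbook conversion to CNF distributes disjunction over conjunction, which can blow up formula size exponentially (structure-preserving encodings avoid the blowup but introduce fresh variables, and either way the original formula structure is lost, which is exactly what a non-clausal prover keeps). A small self-contained sketch, reusing the tuple representation from the toy checker above:

```python
# Demonstrates the exponential blowup of naive CNF conversion, i.e. why a
# non-clausal prover that keeps the original formula structure is appealing.
# Formulas are nested tuples: ('var', name), ('not', f), ('and', f, g),
# ('or', f, g), ('imp', f, g).

def to_nnf(f, positive=True):
    """Push negations down to the variables (negation normal form)."""
    tag = f[0]
    if tag == 'var':
        return f if positive else ('not', f)
    if tag == 'not':
        return to_nnf(f[1], not positive)
    if tag == 'imp':
        return to_nnf(('or', ('not', f[1]), f[2]), positive)
    if tag in ('and', 'or'):
        if not positive:                  # De Morgan: swap the connective
            tag = 'or' if tag == 'and' else 'and'
        return (tag, to_nnf(f[1], positive), to_nnf(f[2], positive))
    raise ValueError(f"unknown connective: {tag}")

def distribute(f):
    """Distribute 'or' over 'and' to reach CNF (assumes NNF input)."""
    tag = f[0]
    if tag == 'and':
        return ('and', distribute(f[1]), distribute(f[2]))
    if tag == 'or':
        a, b = distribute(f[1]), distribute(f[2])
        if a[0] == 'and':
            return ('and', distribute(('or', a[1], b)),
                           distribute(('or', a[2], b)))
        if b[0] == 'and':
            return ('and', distribute(('or', a, b[1])),
                           distribute(('or', a, b[2])))
        return ('or', a, b)
    return f

def size(f):
    """Count the nodes in a formula tree."""
    return 1 if f[0] == 'var' else 1 + sum(size(s) for s in f[1:])

def disj_of_conj(n):
    """(p1 & q1) | (p2 & q2) | ... | (pn & qn): its CNF has 2**n clauses."""
    f = ('and', ('var', 'p1'), ('var', 'q1'))
    for i in range(2, n + 1):
        f = ('or', f, ('and', ('var', f'p{i}'), ('var', f'q{i}')))
    return f

for n in (2, 4, 8):
    g = disj_of_conj(n)
    print(n, size(g), size(distribute(to_nnf(g))))  # CNF grows like 2**n
```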


I’m in dire need of a master’s thesis topic, and I’m highly interested in OpenCog’s reasoning engine and automated theorem proving. Can anyone point me to progress that has been made on this front that I could read up on? Thanks!


The idea stated above is a proposal that still needs to be implemented. If you can convince your professor to let you do research in this direction, you could create test cases for the current approach, implement an automated theorem proving approach, and compare the results on several metrics.

Here is a link to the description of the Unified Rule Engine (URE):
https://wiki.opencog.org/w/Unified_rule_engine
Here is a link to the description of PLN (Probabilistic Logic Networks):
https://wiki.opencog.org/w/Probabilistic_logic_networks


Automated theorem proving is going to be an important, and perhaps necessary, toolbox for future AGI. The internal monologue of a reasoning system must be capable of simulating different axiomatic frameworks, of comparing them, and of producing new ones. I cannot imagine an agent capable of drawing and refining analogies which does not also have the means to recontextualize its own lines of reasoning. OpenCog excites me because its primitive data structures are something like quivers, which you have already identified as a natural shape for general reasoning. A general intelligent agent should be capable of freely transmuting proof nets, which I think is an effective description of mathematical intuition. I don’t have a sufficient understanding of programming to begin digesting OpenCog, and obviously this limits what I can say about it, but the impression it leaves on me is of a general-purpose framework for manipulating diagrams. The recursive aspect of being able to conceive of the nodes of a graph as graphs and the edges as morphisms of these is analogous to important techniques and philosophies that arise from category theory and homotopy type theory.

You have brought Vepštas’ paper, “Sheaves: A Topological Approach to Big Data,” to my attention, but I am still reading it right now. Going merely off of the fact that sheaves present themselves in the context of OpenCog-like graph transmutation, I think this indicates a relationship between sufficiently-structured categories (per categorical logic, categorical semantics, ie the various correspondences between structural information and internal semantic behavior) and sufficiently-powerful informatic systems. I need to understand precisely how sheaves show up before making further comments, but sheaves are synthetically just “elements in topoi”, and topoi are synthetically just “localizations of free completions”. These localizations correlate with various (co)monads on these completions. I’m starting to realize how categorical universes (logoi, topoi, cosmoi, et cetera) are structural embodiments of linguistic universality. If propositions are types, and these type universes themselves behave like types in larger universes of universes (ecosystems of ecosystems?), then a type universe represents a universal Turing machine. Differences in universes arise in correspondence with differences in machine architectures. A machine capable of transmuting and refining its own architecture must internalize a metatecture of architectures. I don’t think automated deduction can be separated from sufficiently-general artificial intelligence, as it is a necessary tool for optimizing itself.

As an aside, I noticed while writing this out how limiting human language can be in conceptualizing information systems. Logical and philosophical dogmas co-evolve with the semantics of natural language, and these dogmas become ingrained in the speaker. They can only be identified and clarified by pushing the boundaries of the language, and the sensation is that of inevitably failing to capture exactly what I mean. I’m excited to see expansions in all modes of communication under the influence of AGI!