The future of AGI (part of it)

Hi all,

I think we need a framework built around datasets of datasets: there would be classes of datasets, and from those classes we get modularity. This is similar to how our brains work. We do not need repetition, just steps put together. And perhaps, because it is software instead of wetware, one can push this to extremes the brain cannot reach. A rough sketch of what I mean is below.

Second, if we scale dataset development across the whole planet, we might get much further much sooner. We only need to agree on which datasets we want to have. Learning with multiple datasets must then become the future (my own estimate). In the end there must be a reservoir where all the special steps are interpretable to human eyes. Like this, a single world brain on the web might become possible (without consciousness). I think it is inevitable, but when will it commence? I am not sure; I am trying to make a start here…

Note: mathematics, physics, computer vision, psychology. All of these are things datasets can be about. But taken together they already create a kind of ‘body’. An example for such a system: how to distinguish objects and know which needs to come first (i.e. the outermost neighbor). This can be done by creating a language for it; a small sketch of that follows the first one below. This kind of language generation must itself be understood, because everything that becomes possible this way should be understandable…
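To make the first idea a bit more concrete, here is a minimal Python sketch of how "datasets of datasets" could be organised, assuming a class of datasets is nothing more than a named group of dataset identifiers that can be composed with other classes. All the names here (DatasetClass, DatasetOfDatasets, compose, the dataset ids) are my own hypothetical choices, not an existing library:

```python
from dataclasses import dataclass, field

# Sketch: a "class of datasets" is a named group, and a registry of such
# classes is itself a dataset of datasets. Composing several classes
# (maths + physics + vision + ...) gives the modular 'body' described above.

@dataclass
class DatasetClass:
    name: str                                      # e.g. "computer_vision"
    datasets: list = field(default_factory=list)   # member dataset ids

    def add(self, dataset_id: str):
        self.datasets.append(dataset_id)


@dataclass
class DatasetOfDatasets:
    classes: dict = field(default_factory=dict)    # name -> DatasetClass

    def register(self, cls: DatasetClass):
        self.classes[cls.name] = cls

    def compose(self, *names):
        """Pull several classes together into one training 'body'."""
        return [d for n in names for d in self.classes[n].datasets]


if __name__ == "__main__":
    body = DatasetOfDatasets()
    vision = DatasetClass("computer_vision")
    vision.add("imagenet_subset")           # hypothetical dataset ids
    physics = DatasetClass("physics")
    physics.add("pendulum_trajectories")
    body.register(vision)
    body.register(physics)
    print(body.compose("computer_vision", "physics"))
```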
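And for the object example, a minimal sketch of what such a generated "language" could look like, assuming objects are axis-aligned boxes and the scene is nested so that "outermost" means "contained by nothing else". The box format and function names are hypothetical choices of mine, just to show the kind of sentences such a language could produce:

```python
# Objects are boxes (x0, y0, x1, y1); one box contains another if it
# encloses it, and the outermost object is contained by nothing else.
# The generated "language" is plain sentences describing that order.

def contains(outer, inner):
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def describe_order(objects):
    """objects: dict name -> (x0, y0, x1, y1). Sentences, outermost first."""
    def depth(name):
        # how many other boxes enclose this one
        return sum(contains(b, objects[name]) for n, b in objects.items() if n != name)
    order = sorted(objects, key=depth)
    sentences = [f"{order[0]} comes first (it is the outermost object)."]
    for prev, nxt in zip(order, order[1:]):
        sentences.append(f"{nxt} sits inside {prev}.")
    return sentences

if __name__ == "__main__":
    scene = {"room": (0, 0, 100, 100), "table": (20, 20, 60, 60), "cup": (30, 30, 40, 40)}
    for s in describe_order(scene):
        print(s)
```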
Note: I’m ready to answer questions…
Regards, Justin

Note: each element might have an infinite number of alternative forms. The idea is to keep only one representation for each such class of related alternatives; a tiny sketch of what I mean is below.
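A tiny sketch, assuming the alternatives are just different spellings of the same expression; canonical() is a hypothetical name for the mapping that picks the single representative of a class:

```python
# Many alternative forms of the same element are mapped to one canonical
# form, so the (potentially unbounded) set of alternatives collapses to a
# single representative per class.

def canonical(expression: str) -> str:
    """Normalise trivially different spellings of 'a + b'-style sums."""
    terms = sorted(t.strip() for t in expression.split("+"))
    return " + ".join(terms)

# All of these alternatives land on the same representative:
assert canonical("b+a") == canonical("a +  b") == canonical("a + b") == "a + b"
```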