Pre-Release: Building On GPT-2's Successors "Blender" and "PPLM"

I have found something pretty big after a lot of research into AGI. It's something you should add to your toolbelt, as it's pretty straightforward and a key part of AGI.

PPLM and Blender are essentially, if you will, a GPT-2 that can also recognize certain attributes in text (e.g. NSFW content, or "this is about cars") and act on them, for example shutting down the conversation if a user is abusive. They can also steer which words get predicted or ignored, e.g. talk about cars and dogs but not snakes. Blender was additionally trained on chat logs, wiki data, and empathy datasets, and it decides how long a response should be and when to end it, e.g. not "birds fly using" but "birds fly using wings".
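To make the steering idea concrete, here is a toy Python sketch. To be clear about what it is not: real PPLM steers generation by back-propagating gradients from a small attribute model into the language model's activations, and Blender's safety behavior comes from training, not a word list. This sketch only shows the simpler bag-of-words flavor of biasing next-token scores, and every word, score, and topic list in it is invented for illustration.

```python
# Toy sketch of topic steering during generation. This is NOT the real PPLM
# method (which nudges the model's hidden states using gradients from a small
# attribute classifier); it only shows the simpler bag-of-words idea of
# boosting or suppressing words. All vocab, scores, and word lists are made up.

import math

vocab_scores = {  # pretend next-token scores from a language model
    "engine": 2.0, "wheels": 1.8, "snake": 3.2, "dog": 1.5, "the": 3.0,
}
boost_words = {"engine", "wheels", "dog"}  # topics we want (cars, dogs)
block_words = {"snake"}                    # topics we want to avoid

def steer(scores, boost, block, bonus=1.5):
    """Shift raw scores toward wanted words and away from unwanted ones."""
    steered = {}
    for word, score in scores.items():
        if word in boost:
            score += bonus
        if word in block:
            score -= 10.0  # effectively forbid the word
        steered[word] = score
    return steered

def softmax(scores):
    """Turn scores into probabilities."""
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(steer(vocab_scores, boost_words, block_words))
print(max(probs, key=probs.get))  # "engine" wins even though "snake" had the top raw score
```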

So, the thing I want to share. I made the attached image (AGI next step.PNG) to help explain it.

I have realized a very large next step for Blender/PPLM. I want to keep this short but still fully detailed. You know how GPT-2 matches the context prompt against many past experiences/memories, right? It generalizes/translates the sentence and may decide bank = river, not TD Bank. That is one of the things that helps it a lot.

Now, you know how humans are born with low-level rewards for food and mates, right? Through semantic relatedness, those nodes leak/update reward to similar nodes like farming, cash, homes, cars, and science. Then the agent starts talking and acting all day about money, not just food. It specializes/evolves its goal and domain. Why? Because it is collecting/generating new data from specific sources, questions, and context prompts so that it can answer the original root question. It takes the installed question that wants an outcome, e.g. "I will stop ageing by _", and does what I described above ("matches the context prompt against many past experiences/memories"), except it permanently translates it into a narrower domain to create a checkpoint (or several). So when it recognizes a hard-problem context prompt/question we taught it or installed, like "I will stop ageing by _", it jumps into a new translation/view and creates a new question/goal: "I will create AGI by _". It's semantics: it's gathering related predictions from similar memories, the same thing, just that it is picking specific semantic paths and updating them, just like RL. RL for text, where prediction is the objective.
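A minimal sketch of the "reward leaks to semantically similar nodes" step, assuming concept embeddings already exist. The 3-d vectors, the single innate reward, and the leak rule below are all made up for illustration; a real system would use learned embeddings and a tuned update rule.

```python
# Minimal sketch of "innate reward leaks to semantically similar concepts".
# The 3-d embeddings, the single reward, and the leak rule are invented;
# a real system would use learned word vectors and a tuned update.

import math

embeddings = {
    "food":    [0.9, 0.1, 0.0],
    "farming": [0.8, 0.2, 0.1],
    "cash":    [0.5, 0.5, 0.2],
    "snakes":  [0.0, 0.1, 0.9],
}
reward = {"food": 1.0}  # innate, low-level reward

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def leak_reward(reward, embeddings, rate=0.5):
    """Spread reward from rewarded concepts to semantically similar ones."""
    updated = dict(reward)
    for src, r in reward.items():
        for concept, vec in embeddings.items():
            if concept == src:
                continue
            leaked = rate * r * cosine(embeddings[src], vec)
            updated[concept] = max(updated.get(concept, 0.0), leaked)
    return updated

print(leak_reward(reward, embeddings))
# "farming" ends up with far more reward than "snakes", so the agent starts
# collecting data about farming: the goal has begun to specialize.
```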

I find your English and text formatting make it very difficult to understand what you are trying to say.

What I’m saying is:

Blender can be forced to incorporate certain topics into what it says, so it talks about, say, cars all the time, no matter what. Humans do this too, but they evolve it: they start off wanting food or mom, then they discover semantically that food = farming and start talking about farming a lot more.

This agenda/persona updates and specializes into a narrow domain; it evolves its question. The output of the AGI controls which input source it collects or generates data from next: it decides which lab tests to try, and those results determine which lab tests to try next. At first the input source is random data collection, just like those robots that learn to walk.
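Here is a rough sketch of that loop, where the agent's own choices determine which source the next data comes from. The source names and their "usefulness" numbers are invented; in a real system the usefulness signal would be something like how much the new data improves predictions on the current question.

```python
# Rough sketch of "the agent's output decides where the next data comes from".
# The sources and their usefulness numbers are invented stand-ins for lab
# tests, websites, or questions to ask.

import random

sources = ["random_browsing", "cars_forum", "biology_papers"]
value = {s: 0.0 for s in sources}   # running estimate of how useful each source is
counts = {s: 0 for s in sources}

def fake_usefulness(source):
    """Stand-in for 'how much did this batch of data help the current question?'"""
    base = {"random_browsing": 0.2, "cars_forum": 0.4, "biology_papers": 0.9}
    return base[source] + random.uniform(-0.1, 0.1)

random.seed(0)
for s in sources:                    # random phase: try every source once at the start
    counts[s] = 1
    value[s] = fake_usefulness(s)

for step in range(50):
    if random.random() < 0.1:        # keep a little random exploration
        choice = random.choice(sources)
    else:                            # otherwise exploit the most useful source so far
        choice = max(sources, key=lambda s: value[s])
    gain = fake_usefulness(choice)
    counts[choice] += 1
    value[choice] += (gain - value[choice]) / counts[choice]   # running mean of usefulness

print(counts)  # the agent's own choices have narrowed data collection onto the best source
```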


I wrote something sweet today, so I will share it:

The brain loves to collect lots of data (sightseeing) because more data lets it solve future problems better.

Dogs love new toys (new data) because it lets them explore new problems. New data is more data.

Speaking of exploring: exploiting is when you work all day in some domain you love (e.g. AI) because other new data is not actually that useful. We evolve this filter, our goals: we start off focusing on food, then move attention over to cash, then to jobs, if they share similar contexts. The brain makes "checkpoints", or filters for where in the manifold space to collect new data from, and then explores there. Blender and PPLM both do this (they just don't evolve it).
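A toy sketch of such an evolving checkpoint/filter, under the assumption that interests live in an embedding space: the agent only attends to data near its current interest vector, and that vector drifts toward whatever paid off, so focus can move from "food" toward "cash". The vectors, payoffs, and thresholds below are all made up.

```python
# Toy sketch of an evolving "checkpoint"/filter: the agent only attends to data
# near its current interest vector, and that vector drifts toward whatever paid
# off, so focus can move away from "food" toward "cash". Vectors, payoffs, and
# thresholds are all invented.

interest = [0.9, 0.1]                    # starts near "food"
data_stream = [
    ("food",  [0.9, 0.1], 0.2),          # (label, embedding, payoff)
    ("cash",  [0.6, 0.6], 0.8),
    ("snake", [0.0, 0.9], 0.0),
] * 10

def similarity(a, b):
    """Unnormalized dot product as a crude similarity."""
    return sum(x * y for x, y in zip(a, b))

for label, vec, payoff in data_stream:
    if similarity(interest, vec) < 0.3:  # the filter: ignore data far from current interests
        continue
    lr = 0.05 * payoff                   # rewarding data pulls the interest vector harder
    interest = [(1 - lr) * i + lr * v for i, v in zip(interest, vec)]

print(interest)  # has drifted away from pure "food" toward "cash", the new checkpoint
```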

As for your questions: browsing Instagram with color turned off feels worse because of the missing data. Or perhaps you want to look at food directly and don't need more data, but the same problem remains: you are not seeing what you expect/forecast to see because you lack sufficient data. The brain always wants to see a future.

If you play a game, the visuals may make the knowledge more relatable and easier to understand.

And when treats are paired with another domain, you can make the agent "get into" that domain.

Oh yeah, this seems interesting. I think this is precisely the way some AIs are learning with the new GPT-2 and GPT-3. You ask them a question, and then they probably ask themselves that very same question. If you are talking about some Austrian psychoanalyst called Sigmund, the AI can go on asking itself questions to try to find answers that will provide new information about the world. It probably asks itself "Who is Freud?", then looks it up on the internet and finds some information: "Oh, he was an Austrian neurologist. Well, what did he do that was special?" and so on; then it goes on to ask itself more questions. It could work by asking itself questions and then trying to answer them, and after answering them asking itself more and more questions about what it has discovered, and so on. By answering itself, this process might lead to more answers and more information about the world. Interesting.
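For what it's worth, that self-questioning loop can be sketched very simply. The knowledge lookup and the follow-up-question rule below are hard-coded stand-ins for a real search engine and a real language model; only the loop structure is the point.

```python
# Sketch of the self-questioning loop: answer a question, derive follow-up
# questions from the answer, and repeat. The lookup table and the follow-up
# rule are hard-coded stand-ins for a real search engine / language model.

knowledge = {  # pretend search results
    "Who is Freud?": "Freud was an Austrian neurologist who founded psychoanalysis.",
    "What is psychoanalysis?": "Psychoanalysis is a set of theories and methods for treating mental disorders.",
}

def lookup(question):
    """Stand-in for searching the internet."""
    return knowledge.get(question, "no answer found")

def follow_up_questions(answer):
    """Very crude stand-in for generating new questions from terms in the answer."""
    if "psychoanalysis" in answer.lower():
        return ["What is psychoanalysis?"]
    return []

queue = ["Who is Freud?"]   # the installed root question
answered = set()
while queue:
    question = queue.pop(0)
    if question in answered:
        continue
    answered.add(question)
    answer = lookup(question)
    print(question, "->", answer)
    queue.extend(follow_up_questions(answer))   # each answer spawns new questions
```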

I think it is very smart that you have some thoughts about what to do when the AI cannot find a precise answer to its question, making it then just try to find something else (finding a new way to the goal, as it were). I think this is precisely how we learn much of our world, and it is an intelligent way of viewing future AI research.

Do you have any knowledge of the philosophy of language? What is your take on it? Do you think there is such a thing as "inherent meaning", or do you think that meaning is just usage, as in Wittgenstein's later philosophy of language?

-Karamazov