As much as I like Lifenaut and everything they do, I wonder how successful it's ultimately going to be: an AI needs to have experiences, and through those experiences learn appropriate actions corresponding to appropriate reactions. Rather than trying to emulate the brain exactly, I've come to interpret AI development as follows:
We have experiences that never change, as they're an innate part of evolving from primates: we're still primates, we just have a larger array of sensibilities. And we also have dynamic experiences that change who we are over time, which makes them a tricky thing to program.
Given a standard machine learning approach, no matter how much data for a specific motivation you feed in, it gets interpreted the same way: as a number that accumulates over time.
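A minimal sketch of what I mean (the motivation names are hypothetical, just for illustration): every motivation collapses to the same running total, so "hunger" data and "curiosity" data are interpreted identically.

```python
# Naive approach: every motivation is just a scalar accumulator.
# No matter what kind of experience comes in, the rule is identical.
motivations = {"hunger": 0.0, "curiosity": 0.0}  # hypothetical motivations

def record_experience(motivation, weight):
    # The same interpretation for every motivation: add to a running total.
    motivations[motivation] += weight

record_experience("hunger", 0.8)
record_experience("curiosity", 0.3)
record_experience("curiosity", 0.3)
print(motivations)
```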
I’m working around this by giving each motivation its own independent tree. But I feel like I’m reaching the limits of what one person can do. I’ll have to see where this new project goes. To summarize:
experiences
1. Static experiences throughout life.
2. Dynamic experiences throughout life.
reactions
1. Static reactions based on static experiences.
2. Dynamic reactions that change based on changing experiences.
actions
1. Static actions that are innate to who we are.
2. Dynamic actions that change based on changing reactions to things that happen to us and others.
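One way to sketch that static/dynamic split in code (all the names and example entries here are my own illustration, not a finished design): static entries are fixed at construction, while dynamic entries can be rewritten as new experiences arrive.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Static: innate, fixed for the lifetime of the agent.
    static_reactions: dict = field(
        default_factory=lambda: {"loud noise": "flinch"})
    # Dynamic: rewritten as new experiences arrive.
    dynamic_experiences: list = field(default_factory=list)
    dynamic_reactions: dict = field(default_factory=dict)

    def experience(self, event, reaction):
        # A later dynamic experience can overwrite an earlier learned reaction.
        self.dynamic_experiences.append(event)
        self.dynamic_reactions[event] = reaction

    def react(self, event):
        # Innate reactions take precedence; learned ones fill in the rest.
        return self.static_reactions.get(event) or self.dynamic_reactions.get(event)

a = Agent()
a.experience("dog", "pet it")
a.experience("dog", "keep distance")  # dynamic: the reaction changes over time
print(a.react("dog"))
print(a.react("loud noise"))
```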
The experience engine would be an interpreter for experience data fed into the mechanism.
So the end result is this pseudocode:
in life loop:
    feed in experiences through data
    interpret each experience with its own decision tree
    redirect mechanism to actions tied to appropriate reactions
end loop
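The loop above could look something like this: a rough sketch, assuming a dict of per-motivation "trees" (each reduced here to a single threshold rule), so the same experience data gets interpreted differently depending on which motivation sees it. The motivation names and thresholds are placeholders.

```python
# One independent interpreter ("tree") per motivation.
def hunger_tree(intensity):
    return "seek food" if intensity > 0.5 else "ignore"

def fear_tree(intensity):
    return "flee" if intensity > 0.7 else "stay alert"

trees = {"hunger": hunger_tree, "fear": fear_tree}  # hypothetical motivations

def life_loop(experiences):
    actions = []
    for motivation, intensity in experiences:    # feed in experiences as data
        reaction = trees[motivation](intensity)  # one tree per motivation
        actions.append((motivation, reaction))   # action tied to the reaction
    return actions

print(life_loop([("hunger", 0.9), ("fear", 0.4)]))
```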