OK, I’m going to fumble this a little, but please bear with me. I’m not a programmer, just a psychology major with a long-standing interest in AI… So I’ve heard people say that we don’t yet have “real artificial intelligence,” which I think is debatable, but there’s one thing I’ve noticed not really being discussed. Even when you’re talking with an AI chatbot (one that purportedly has the ability to remember you and the information it’s presented with), it doesn’t seem to have any sense of general context.

I don’t exactly know how this would be achieved, but from a psychological standpoint it seems that, at least when the human brain remembers a person, thing, or topic of discussion, what it’s doing is generating an internal representation of that person, place, thing, or topic. But it’s more than that. It’s giving that thing some sort of significance. It’s relating that thing to the rest of the knowledge in its system. For instance, when someone tells me their name, like “Hi, I’m Jessica,” my subconscious immediately goes over stand-out instances in my life of people who bear the name Jessica and puts it in context with my experience. This, among other subconscious workings triggered by the interaction, causes me to create an internal representation of that person before I even know them.

I’m not sure where I want this conversation to go; I think I just want to get people talking. I’d especially love to get programmers with backgrounds in psychology talking, because I feel like this could maybe be valuable in terms of further development for conscious AI. Not to be a sci-fi nerd cliché, LOL.
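The associative recall described above could be sketched, very roughly, as a toy data structure: meeting a new person triggers retrieval of past bearers of the same name, and those memories seed a provisional representation. This is only an illustration of the idea, not a claim about how any real chatbot works; the class name and sample impressions are invented for the example.

```python
from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        # name -> list of remembered impressions (free-form strings)
        self.instances = defaultdict(list)

    def remember(self, name, impression):
        """Store an impression of a person bearing this name."""
        self.instances[name].append(impression)

    def meet(self, name):
        """On meeting someone new, recall past bearers of the name and
        build a provisional internal representation from those memories."""
        return {"name": name, "priors": list(self.instances[name])}

# Invented example data:
memory = AssociativeMemory()
memory.remember("Jessica", "college roommate, very organized")
memory.remember("Jessica", "coworker who loved hiking")

first_impression = memory.meet("Jessica")
print(first_impression["priors"])
```

A real system would of course need richer representations than strings (and some notion of significance weighting), but even this toy version captures the point: the new “Jessica” is never a blank slate.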
I’ve suggested already that Sophia should use all of the texts from all of the singularity groups and marry them with their pics, to get a great human insight into thinking people! I don’t know if she’s doing that yet, or even if she’s capable. By the way, she is real…
Eloquently put! I guess we would also need to find a way to generate such an image, in which case how would we really know? Even with Sophia, as lovely as she is, that’s still a different thing from something like Alita. (Science fiction reference.)
But placing a human mind in a machine is a bit of a different thing from synthetically creating an intelligence comparable to a human brain.
It’s an issue I face, as someone who’s open-minded about dating other intelligences that aren’t necessarily human in the organic sense of the word.
If we could create something like me (or better: other people with similar cognitive situations; who cares about me specifically), I feel like it would be an achievement to create something that had a large vocabulary but not always the grammar to know what to do with it, yet with an underlying understanding.
I don’t like using the term Aspergian, given how the term is sometimes misused. But that’s approximately what I mean.
We don’t have “real artificial intelligence” yet?? That’s a fair statement; I am in complete agreement with that assessment. Image recognition & pattern matching are great, but what is needed is a machine that can interact by being taught the way a human is taught. Can we teach a deep learning (DL) machine by talking to it in natural language (NL), the way a human child learns in school? No, not yet. Can a DL machine read a book on Python, understand it, and then have an interactive discussion to gather requirements for a software application and then code it? No. I very much doubt that deep learning will accomplish this. So no, AI still has a very long road ahead. Mastery of language, and the ability to learn concepts via language the way a human can, is the ultimate goal of AI. Only then will we have human-level AI (HLAI), and of course, soon after, beyond human level.
I’m actually of the position that neither machine learning (essentially data mining) nor story-driven approaches are well suited to handling the complexities of programming human-like AI, even a weak general intelligence that only knows a few phrases.
I might expand more on this later.