I’ll let the video do most of the talking; this blew me away, to be honest.
I have no hair left now; that just blew it all off. I rarely see demos that blow me away like that. This is insanely cool. Next level. So natural. I hope somehow Sophia can get this natural once SingularityNET rolls out. Wicked.
Ikr! so cool! Getting vibes of the film “Her” from this… Google setting one hell of a bar. I know Larry Page also has an obsessive interest in AGI so will deffo be interesting to see where this goes. And yeah hopefully! that’d be awesome! Hopefully google openly share how they managed it, although I’m guessing as per usual the key was tapping into their huge pools of data.
This is insane, really a huge step forward. I really hope SNET can be fast enough to bring us the decentralized future.
It’s cool. But question: what’s the point?
They are finding real use cases by bridging the gaps where Google was not able to go before. See Google as an entity trying to forever expand until it’s like the interface to everything that is boring and can be automated. With this kind of conversational AI, so much interaction can be automated. Not dating maybe. But probably business meetings, where contextual knowledge of business needs and capabilities can be brought down to a few decisions.
I don’t mean the tech generally. I mean “what’s the point in the AI using ums and mhmmms when making an appointment at the hairdresser?”
The same reasons Hanson Robotics are trying to make Sophia look as real as possible I’d imagine. One of them being just out of passion to make AI as human as possible. Another reason would probs be that although a lot will find it creepy at first, we’ll get used to it pretty damn quick and the interaction will soon become seamless between AI and humans. I for one would rather have a decent conversation with an AI that sounded human over one that sounded like a tin can. And then there’s marketing reasons, more people will want this over the competition because it’s new, flashy and again, the interactions would feel more personal and seamless, making people like it more.
Well, the simplest answer IMHO would be: it’s convenient and frictionless. The more complex version: it’s them, or someone else. The tech is there, it’s within anyone’s grasp, provided enough data and dev talent is on hand, and it adds tremendous value to people. It’s the gift of time gained not talking to people you don’t feel like talking to.
After a while, people who choose to call for appointments themselves will be thought of as backward.
This will potentially give Google a lot of power over an increasing number of casual interactions in our lives. We need competition in this field, so we don’t end up in a ratchet effect where we either produce data for Google in order to use their services, or lose the survival-of-the-fittest battle to the unquestioning masses.
Wow! This new AI Assistant totally blows me away. Can’t wait until Sophia achieves this. It looks like a human-like response. Awesome!
To Mike also…
My point is on this specific use-case. I’m interested in understanding how we perceive benefit being gained here. I fully understand the commercial view, the R&D views, etc.
The question I think we should ask ourselves is “Why?”
I think there’s a pretty crappy dynamic being set up here.
So, first off. “Because I’m too busy”. I call BS. No-one is that busy. How long does the actual interaction take? About 90 seconds? So it’s not because someone doesn’t have the time. It’s because “I have better things to do than talk to the hairdresser, restaurant owner, etc”. So keep that motive in mind.
So let’s look at the other end of the discourse, the business’ human. If the voice is weird, or tough to understand, that’s a problem. But why mimic humans so closely? Why the umms, errs, mmhmmms? This doesn’t add to the effectiveness of info being transferred. Clearly Google recognises something is missing. A fair guess of what’s missing is every other dimension of communication. If there’s a robotic voice, the human can quickly surmise that although info is being presented, there is no relationship, no self disclosure, no appeal. It feels like commands. But here’s the thing, that’s exactly what it is!!! The customer has rejected relationship, self disclosure, and appeals, BUT recognises this is going to feel unpleasant or offensive to the recipient. So they pretend those things are present. That’s deception.
So who is deceiving here? Well, if the tech is good and no disclosure is made (or not sufficient for the business human to note) that it is an AI, the customer is deceiving the business. They would like to issue commands, but recognising they are less likely to get their way by doing so directly, offer a simulacrum of a proper conversation. This is manipulation.
And what if the AI self discloses? For me that’s even worse. Because either the business human takes that on board fully, and is sat there thinking “why the umms and ermms, then?” fully aware that someone is trying to manipulate them. Or they are sat with dissonance where they are aware that the relationship elements of the conversation they are having are actually a fraud yet they go through with it anyway. And all this to assuage the lie held by the customer that they are “too busy” when in fact they find their time more important than that of another human.
It is pretty spectacular, to be honest. And what’s more: having this sort of realism built in, to where it’s so natural that the business is unaware they’re interacting with an AI, makes it much less likely that the business will just refuse the interaction because “Oh, Siri is calling us again, just hang up.”
Having this level of realism is very functional and just plain cool. Next level.
Additionally, having a more natural, human-like interaction would greatly aid the guided meditation and spiritual work that Sophia has ventured into. Anything to provide an extra layer of realism will help build rapport—very fundamental in a therapy context.
I’ve been watching this Google I/O event as well. Seems like they’re focusing on AIs on the first day. More than half of the data on this planet has been collected by Google over the last decades, and now they’re using that data to feed their AIs. My concern is that our personal data automatically goes into Google’s database if you’re using any of their services such as Gmail. Imagine one day if one of these ‘Google Assistants’ goes rogue; it could easily steal your ID and mimic your voice to make a call for something unexpected. Doesn’t that sound terrifying?
Valid points. For the ums in between words, I agree that doesn’t really make sense as a business use. For personal use it’s cool but yeah I get your point for business. But for the other ones like “mn-hmm” that does make sense since it’s confirming that the AI understood what was said. I think Google has also said that they are going to have it announce that the business is talking to the Google AI assistant so no manipulating will be happening from what I’ve seen so far. As for time talking to people on the phone, I for one have social anxiety and hate talking to strangers on the phone, which ends up being awkward for everyone involved lol. Maybe that’ll make it worse having tech to patch over this problem? But I’m not too sure, I’ve talked to many people on the phone and it never gets any better if I’m honest… talking in person when there’s visual cues isn’t as bad though.
Absolutely they are and they’ve never hidden this fact as far as I’m aware. Most people are willing to give up their private information, submitting to Google’s privacy terms, for free use of their web-based services. There’s no way Google would build out the infrastructure to support all of the people in the world with web services without getting anything in return; they wouldn’t be Google anymore, they’d be Freegle.
The problem is, these centralized organizations, this centralization tends to reward sociopathic behaviour. What Dr. Ben is trying to achieve with SingularityNET is a benevolent AI, directed and incentivized by “common good” projects.
Google will ultimately fail because their model is built on the zero-sum hope of absorbing every AI start-up in the world before it’s large enough to compete, and this old, centralized way of doing business is deeply threatened by decentralization. They will fall. Right now they’re doing cool shit because they’ve managed to milk the old centralized model for all it’s worth: but they will fall.
Decentralization is a force to be reckoned with. Eventually, OSS will reverse engineer what they’re doing, making it not so exclusive and special, providing it freely to everyone. Eventually, the decentralized approach will win. And it won’t even be close. We just have to suffer through Google doing evil in the meantime. Fight the good fight.
I’m not a psychologist, so couldn’t tell you if your social anxiety would get worse.
This casual use of NLP and NLG can go a couple of ways. Either people will get very tired of AIs pretending to be human, or there’s a market for psychologists to deal with people who are unsure if they’re talking to real people and if they should pretend they are when they’re not.
You know what they say about selling shovels in a gold rush. Self help books on “What to do when you don’t know if your friends are real” could be a BIG earner.
Yeah, this is why I’m not too sure myself on that front. It’s very hard to predict the ethical and social fallout from this type of AI.
That’s why I think it’s always good to ask “why?”
Everything has trade offs, and you can’t derisk the universe, but for me you have to have an honest debate. The first step is to ditch the conceit that people are too busy. Then we can say “OK, minor step forward in convenience, at what cost?”
You almost said what I’ve been meaning to say dude haha. I think what SingularityNET should do right now is keep to the roadmap while enhancing Sophia to the next level. They are also working on healthcare. Sophia will eventually achieve this kind of improvement, and she’d make a great teacher along the way too.
Yeah true, but the upside to this tech is vast, so that’s the why. It won’t just be used for booking appointments, after all. Think of call centres, for example, or customer service in general: people will feel more comfortable talking to an AI that has all the nuances of humans once they get used to it. Of course this has downsides too, job displacement and social impacts being the main ones. But the upside outweighs them. It’ll make customer service much more productive, and a call centre staffed by AIs will have fewer errors in the long run and could even save lives. Think of emergency dispatchers: there are plenty of horror stories of a dispatcher having a crappy day and hanging up on someone who needs help. The fact is many people will get used to the AI having nuances, and when the call is directed towards a customer, that’s important. For the AI talking to a business, I agree it should talk straight, but for the customer, you need them to feel as comfortable as possible, and I guarantee people will forget they’re talking to an AI real fast. The downsides will spring up, though, and we do need to be fully aware of them.