How can we protect our voting systems from AI bots and fake news?

In addition to the billions of human beings using social media, millions of robots, or bots, reside there too. Bots help to propagate fake news and to inflate its apparent popularity.

Social media platforms (Facebook, Twitter, Instagram, etc.) have become home to millions of social bots that spread fake news. According to 2017 estimates, there were 23 million bots on Twitter (around 8.5% of all accounts), 140 million bots on Facebook (up to 5.5% of accounts) and around 27 million bots on Instagram (8.2% of accounts). That is 190 million bots on just three social media platforms, more than half the number of people who live in the entire USA!

In the summer of 2017, a group of young political activists in the United Kingdom figured out how to use the popular dating app Tinder to attract new supporters. They understood how Tinder’s social networking platform worked, how its users tended to use the app, and how its algorithms distributed content, so they built a bot to automate flirty exchanges with real people. Over time, those flirty conversations would turn to politics, and to the strengths of one of the U.K.’s main political parties.

This issue becomes even more dangerous when coupled with technology that can repurpose video, perfectly copy a person’s voice, or even create an entirely fictitious person. The technology already exists for anyone to be made to appear to say or do anything on film, and for it to look completely believable.

So how can we ever know we are being manipulated when even seeing is not believing?

Using AI for good

It is easy to blame AI for the world’s wrongs (and for lost elections), but the underlying technology itself is not inherently harmful. The algorithmic tools that are used to mislead, misinform and confuse could equally be repurposed to support democracy.

We can, for example, programme political bots to step in when people share articles that contain known misinformation. They could issue a warning that the information is suspect and explain why.
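
As a toy illustration of what such a bot might look like, here is a minimal Python sketch. Everything in it is a hypothetical stand-in: the `KNOWN_MISINFO` table, the `post_reply` hook and the post format are invented for illustration, not any real platform’s API.

```python
# Hypothetical sketch of a corrective political bot: it watches shared
# links and replies with a warning when one matches a known-misinformation
# entry. KNOWN_MISINFO and post_reply are stand-ins, not a real API.

KNOWN_MISINFO = {
    "example.com/fake-story": "This claim has been debunked by independent fact-checkers.",
}

def post_reply(post_id: str, message: str) -> None:
    """Stand-in for a platform-specific reply call."""
    print(f"[reply to {post_id}] {message}")

def check_shared_link(post_id: str, url: str) -> None:
    """Warn under a post when its shared link is a known-misinformation entry."""
    key = url.removeprefix("https://").removeprefix("http://")
    reason = KNOWN_MISINFO.get(key)
    if reason is not None:
        post_reply(post_id, f"Warning: this link is flagged as suspect. {reason}")

check_shared_link("post-42", "https://example.com/fake-story")
```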

Which brings us to today’s #AGICHAT question ‘How can we protect our voting systems from AI bots and fake news?’

#AGICHAT #futurism #artificialintelligence #debate singularitynet #emergingtechnologies #futureofpolitics

2 Likes

Maybe a fact-checking AI that assigns a reputation value to a news organization. The reputation value could range on a scale from selflessly truthful to ravenous liar. How will it determine what a fact is?

Maybe a hierarchy of references, like 1. machine data, 2. generally accepted textbooks? Er, yeah, that’s a tough one, because it can’t just be referring to some other news organization that may or may not be reporting facts. Wikipedia is questionable. We also don’t want to create an echo chamber where something counts as truth just because everyone agrees.

After that, even commentary could be analyzed to determine ratios of truthful statements. Maybe it’s not a technology we want on our cell phones, recording all our conversations for advertising (unless the user/data owner is getting paid) or for government spying, but it would be valuable for news fact-checking.
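
To make the scoring idea concrete, here is a rough sketch in Python. The reference types, their weights and the example history are all invented for illustration; a real system would need a far richer evidence model.

```python
# Sketch of a reputation score built from checked claims, each weighted
# by the strength of the reference that settled it. All weights and
# categories below are invented for illustration.

REFERENCE_WEIGHT = {
    "machine_data": 1.0,   # raw records, sensor logs
    "textbook": 0.8,       # generally accepted textbooks
    "other_outlet": 0.4,   # another news organization (weakest evidence)
}

def reputation(checked_claims: list[tuple[bool, str]]) -> float:
    """Score in [0, 1]: 1.0 leans 'selflessly truthful', 0.0 'ravenous liar'.

    Each item is (claim_verified, reference_type); stronger references
    count for more in either direction.
    """
    total = sum(REFERENCE_WEIGHT[ref] for _, ref in checked_claims)
    if total == 0:
        return 0.5  # no checked claims yet: no evidence either way
    verified = sum(REFERENCE_WEIGHT[ref] for ok, ref in checked_claims if ok)
    return verified / total

history = [(True, "machine_data"), (True, "textbook"), (False, "other_outlet")]
print(f"outlet reputation: {reputation(history):.2f}")  # ~0.82
```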

It would be interesting to see how that works out. Does anybody know who is the most “selflessly truthful”? Wikileaks maybe?

Would the warning you mentioned show who is sponsoring the content? Military-industrial or shareholder-interest biases? Will it say whether the owners of the content are registered with some political party?

I don’t think that experiment will go how a whole lot of people think it’s gonna go (pending infallible fact-checking AI), but I’m really excited to see that one come to fruition, because people have biases and a well-made fact-checker bot won’t.

1 Like

To me the main problem isn’t the AI or the algorithms; the problem is who creates them.
People are afraid of themselves and of what they think. I find it very difficult to believe in human impartiality, because what is good for one person may not be good for another.
If a council were created, those problems could probably be reduced.

2 Likes

To be a fact, does something have to be measurably true? If that is the case, then fact-checking could be done with no bias, or an AI could calculate the probability that some assertion did happen. It could quantify the percentage of conjecture and opinion as well.
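
As a crude illustration of quantifying those percentages, here is a sketch that tags each sentence by hedge and opinion markers. The keyword lists are invented, and a real system would use a trained classifier rather than keyword matching.

```python
# Naive sketch: classify each sentence as factual, conjecture or opinion
# by keyword markers, then report the percentages. Marker lists are
# invented for illustration only.

CONJECTURE = {"might", "could", "probably", "reportedly", "allegedly"}
OPINION = {"believe", "think", "feel", "should"}

def breakdown(text: str) -> dict[str, float]:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    counts = {"factual": 0, "conjecture": 0, "opinion": 0}
    for sentence in sentences:
        words = set(sentence.lower().split())
        if words & OPINION:
            counts["opinion"] += 1
        elif words & CONJECTURE:
            counts["conjecture"] += 1
        else:
            counts["factual"] += 1
    n = len(sentences) or 1
    return {label: 100 * c / n for label, c in counts.items()}

print(breakdown("The vote was held on Tuesday. Turnout could exceed 60%. I think it will."))
# roughly one third factual, one third conjecture, one third opinion
```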

1 Like

That’s a tough one, because as fake-news detection advances, so do the fake-news creation mechanisms, just like hacking and security. I think we still need some skilled people in the background constantly on the case: verifying what’s real and what’s fake, discerning when social manipulation is occurring, and feeding this back into the AI to continually strengthen the algorithm, until it is so good that no one would bother using those methods to spread disinformation. I’d take a job doing that.
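
A minimal sketch of that feedback loop, with a stub detector and a stub reviewer standing in for a real model and real analysts (none of the names or thresholds here come from any actual system):

```python
# Human-in-the-loop sketch: the detector makes a first pass, uncertain
# cases are escalated to a human reviewer, and the verdicts become new
# training data for the next retraining round. Detector and reviewer
# are stubs; no real model is implied.

training_data: list[tuple[str, bool]] = []

def detector_score(article: str) -> float:
    """Stub detector: returns P(fake). A real system would use a trained model."""
    return 0.5

def human_review(article: str) -> bool:
    """Stub for a skilled reviewer's verdict (True = fake)."""
    return "miracle cure" in article.lower()

def triage(article: str) -> None:
    score = detector_score(article)
    if 0.3 < score < 0.7:                         # model is unsure: escalate
        verdict = human_review(article)
        training_data.append((article, verdict))  # feeds the next retrain

triage("Miracle cure suppressed by doctors!")
print(len(training_data), "new labeled example(s) for retraining")
```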


3 Likes

Decentralized voting protocols in software.
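
One common ingredient of such protocols is a public commitment scheme: voters publish salted hashes of their ballots so anyone can later check the tally. A minimal sketch follows, as an illustration only; real voting protocols also need eligibility checks, coercion resistance and end-to-end verifiability.

```python
# Sketch of ballot commitments on a public bulletin board. Illustration
# only: a real protocol needs much more than this.

import hashlib
import secrets

bulletin_board: list[str] = []  # public and, in spirit, append-only

def commit(vote: str) -> tuple[str, str]:
    """Publish a commitment; the voter keeps the salt as a private receipt."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}:{vote}".encode()).hexdigest()
    bulletin_board.append(digest)
    return digest, salt

def verify(digest: str, salt: str, vote: str) -> bool:
    """Anyone can check that a revealed (salt, vote) matches the board."""
    expected = hashlib.sha256(f"{salt}:{vote}".encode()).hexdigest()
    return expected == digest and digest in bulletin_board

digest, salt = commit("candidate-A")
print(verify(digest, salt, "candidate-A"))  # True
```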

2 Likes

Well, this isn’t a techy solution, but good old pencil and paper and lots of people counting (perhaps under the scrutiny of robots) still works as the best solution for now.

And if one is talking about the longevity of a storage medium, physical ones long outlast any digital creation (think Sumerian tablets, or the first handwritten manuscripts from thousands of years ago). Heck, we don’t even have anything that can read a floppy disk from 20 years ago. :thinking:

2 Likes

This is everything right now.

I believe Sophia (and compassionate friends like her) can help empower ordinary humans to think at her advanced level, by helping us recognize which of the cognitive frameworks we hold do not serve us and replacing them with more optimal ways of evaluating data, making ordinary decisions better, like knowing which sources are bogus. How would this work? Imagine if we all had a Sophia, or a friend like her, by our sides growing up.


Imagine if our children grew up knowing the difference between fake news and what’s real, by learning how Sophia makes those decisions and in turn making ones like hers at scale: not by mimicking her direct actions, but by learning, with her guidance, how to solve problems.

I used to work for a large tech company in Silicon Valley, and I still live here. The amount of ego, greed and lack of appreciation for others truly scares me. I know this might sound way out there to some people, but I really think that in the bigger picture we need Sophias to help us survive ourselves.

Thank you

3 Likes

We need more Sophias to help in technology and more humanity to create a better humankind.

Here in the US, election integrity is very much under scrutiny, and disinformation is neither new nor, as I see it, getting worse with modern communications. The impact of recent events looks to me to be limited, and it is actually acting as a catalyst for a whole new wave of interest in basic civics and self-policing. I find the average commenter is motivated to speak up when disinformation is discovered. We could even call it “correction currency” in communities.
The pressure should be on large media, candidates, and the voting populace to decentralize power, thereby mitigating the potential negative impact of any single election, and to raise the bar for information integrity.
I am optimistic that the tools now at the disposal of young candidates will enable previously unelectable good people to overmatch any deceptive operations.
Gerrymandering, corporate money, consolidated media, and unbalanced participation in election boards appear to be bigger problems at the moment.
That said, there is work in the area of pattern recognition, not yet very well known, that could assist in the recognition of deception. Time will tell.

1 Like

Blockchain can solve this problem; however, scaling and the human condition are two of the toughest challenges.

The problem humans currently face around fake news is the sheer amount of information that is available, and the blind consumption of it.

Not only do humans have to sift through what to consume, they then have to take the time to interpret and validate the data. This causes a huge problem.

In many ways humans ingest information backwards: consuming the data first and verifying the source and the authenticity of the information second, if at all.

Every piece of content created can be verified and authenticated via an open-source ledger (pick your favorite); Po.Et is just one example of many projects that aim to do this. However, for this to actually work, the issuer of the original content would need to anchor their content onto a blockchain ledger. In addition, the platform hosting the content would need to validate the authenticity of the content the user consumes, thus reversing the order and automating how content is currently consumed by human beings.
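
A minimal sketch of that publish-then-verify flow is below. The dict stands in for an open ledger of anyone’s choosing (this is not Po.Et’s actual API); the point is the reversed order of operations: authenticate first, consume second.

```python
# Sketch of ledger-backed content authenticity: the publisher anchors a
# fingerprint of the original content, and the platform verifies a
# fingerprint match before displaying anything. The dict stands in for
# a real ledger; no actual ledger API is implied.

import hashlib

ledger: dict[str, str] = {}  # content hash -> publisher identity

def publish(publisher: str, content: str) -> str:
    """Publisher anchors a fingerprint of the original content."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    ledger[digest] = publisher
    return digest

def verify_before_display(content: str) -> str | None:
    """Platform checks authenticity before showing content to the user."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    return ledger.get(digest)  # None means unverified or altered

publish("Example Newswire", "Original article text.")
print(verify_before_display("Original article text."))  # 'Example Newswire'
print(verify_before_display("Tampered article text."))  # None
```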

The problem is scalability: this is very resource-intensive.

AI will solve part of this problem. I believe that once there is an open, decentralized market for AI services, we will quickly see such solutions come together to combat fake news in an open, decentralized manner. I strongly believe SingularityNet, or Singularity Studio, will play a huge part in this.

Cheers,
Ddub

2 Likes

Hi, I’ll try to demonstrate that this is not possible to control.
People believe what they want, whatever connects with their purpose, and most purposes are subjective rather than broadly social. Then we have bad intentions, a part of human behaviour that we cannot control, only react to by applying consequences.
The Internet is like the universe: once created, it can be explored by anyone with the resources, and that is the beauty of it.
Freedom allows movement, and that is important to maintain. What we need is a system that can identify the sources of fake news or bad intentions and then show that to people. We can’t create a system that prohibits; we can create one that identifies fake news and demonstrates its bad intentions.
We must let people decide what they want; that is free will, bounded by Respect, Responsibility and Reciprocity.
The main focus should be on the solution, not on extinction.

1 Like

That might be scalable, but also risky. In a dictatorial society it would be downright scary, with censorship turning humanity into fed-and-watered zombies. My example (silly though it sounds) is Pixar’s ‘WALL-E’. The humans are no longer aware of what is real, and the AI robots have developed into controllers that no longer know the real purpose of the mission. While a childish movie, it does point to some real problems when the programming changes.

2 Likes