How can we protect our voting systems from AI bots and fake news?


#1

In addition to the billions of human beings using social media, there are also millions of robots, or bots, living among them. These bots help propagate fake news and inflate its apparent popularity.

Social media platforms (Facebook, Twitter, Instagram, etc.) have become home to millions of social bots that spread fake news. According to 2017 estimates, there were 23 million bots on Twitter (around 8.5% of all accounts), 140 million on Facebook (up to 5.5% of accounts) and around 27 million on Instagram (8.2% of accounts). That is 190 million bots on just three social media platforms – more than half the number of people living in the entire USA!

In the summer of 2017, a group of young political activists in the United Kingdom figured out how to use the popular dating app Tinder to attract new supporters. They understood how Tinder’s social networking platform worked, how its users tended to use the app, and how its algorithms distributed content, and so they built a bot to automate flirty exchanges with real people. Over time, those flirty conversations would turn to politics and to the strengths of one of the U.K.’s major political parties.

This issue could become even more dangerous when coupled with technology that can repurpose video, perfectly copy a person’s voice, or even create an entirely fictitious person. The technology already exists to make anyone appear to say or do anything on film, and the result looks completely believable.

So how can we ever know when we are being manipulated, if even seeing is not believing?

Using AI for good

It is easy to blame AI for the world’s wrongs (and for lost elections) but the underlying technology itself is not inherently harmful. The algorithmic tools that are used to mislead, misinform and confuse could equally be repurposed to support democracy.

We can, for example, programme political bots to step in when people share articles that contain known misinformation. They could issue a warning that the information is suspect and explain why.
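As a thought experiment, here is a minimal sketch of what such a warning bot might look like, assuming a hand-curated database of debunked articles. The URLs, the KNOWN_MISINFORMATION table and the matching logic are all hypothetical placeholders, not any real platform’s API.

```python
# Minimal sketch: warn when a shared link matches a curated list of
# debunked articles. The database and URLs are invented for illustration.

# Hypothetical database mapping known-misinformation URLs to an
# explanation of why each story is suspect.
KNOWN_MISINFORMATION = {
    "example.com/fake-story": "Debunked by multiple fact checkers; the quoted study does not exist.",
}

def normalize(url: str) -> str:
    """Strip scheme and trailing slash so trivial URL variants still match."""
    return url.removeprefix("https://").removeprefix("http://").rstrip("/")

def check_shared_article(url: str) -> str | None:
    """Return a warning message if the shared URL is known misinformation."""
    reason = KNOWN_MISINFORMATION.get(normalize(url))
    if reason is None:
        return None
    return f"Warning: this article is flagged as misinformation. {reason}"

if __name__ == "__main__":
    print(check_shared_article("https://example.com/fake-story/"))
```

In practice the lookup table would be replaced by a live fact-checking service, and the warning would be posted as a reply in the same thread where the article was shared.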

Which brings us to today’s #AGICHAT question ‘How can we protect our voting systems from AI bots and fake news?’

#AGICHAT #futurism #artificialintelligence #debate #singularitynet #emergingtechnologies #futureofpolitics


#2

Maybe a fact-checking AI that assigns a reputation value to each news organization. The reputation value could range on a scale from “selflessly truthful” to “ravenous liar”. But how will it determine what a fact is?

Maybe a hierarchy of references, like 1. machine data, 2. generally accepted textbooks? Er… yeah, that’s a tough one, because it can’t just refer to some other news organization that may or may not be reporting facts. Wikipedia is questionable. We also don’t want to create an echo chamber where something is true just because everyone agrees.

After that, even commentary could be analyzed to determine the ratio of truthful statements. Maybe it’s not a technology we want on our cell phones recording all our conversations for advertising (unless the user/data owner is getting paid) or for government spying, but it would be valuable for news fact-checking. A rough sketch of how that scoring might look follows below.
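For illustration only, here is one way the reputation idea could be wired up, combining the reference hierarchy above with the “selflessly truthful” to “ravenous liar” scale. The evidence weights, thresholds and sample data are all invented, not derived from any real fact-checking system.

```python
# Rough sketch: score each outlet by the fraction of its checked
# statements that were verified, weighting verifications by the
# quality of the evidence behind them.
from collections import defaultdict

# Hypothetical weights following the hierarchy above:
# machine data > accepted textbooks > other sources.
EVIDENCE_WEIGHT = {"machine_data": 1.0, "textbook": 0.7, "other": 0.3}

# Labels spanning the scale suggested above, from best to worst.
LABELS = ["selflessly truthful", "mostly reliable", "mixed", "ravenous liar"]

scores = defaultdict(lambda: {"true": 0.0, "total": 0.0})

def record_check(outlet: str, verified_true: bool, evidence: str) -> None:
    """Log one fact-checked statement, weighted by evidence quality."""
    w = EVIDENCE_WEIGHT[evidence]
    scores[outlet]["total"] += w
    if verified_true:
        scores[outlet]["true"] += w

def reputation(outlet: str) -> str:
    """Map the weighted truth ratio onto the reputation scale."""
    s = scores[outlet]
    ratio = s["true"] / s["total"] if s["total"] else 0.5
    if ratio > 0.9:
        return LABELS[0]
    if ratio > 0.7:
        return LABELS[1]
    if ratio > 0.4:
        return LABELS[2]
    return LABELS[3]

record_check("Example News", True, "machine_data")
record_check("Example News", False, "other")
print(reputation("Example News"))  # ratio 1.0/1.3 ≈ 0.77 -> "mostly reliable"
```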

It would be interesting to see how that works out. Does anybody know who is the most “selflessly truthful”? Wikileaks maybe?

Would the warning you mentioned show who is sponsoring the content? Military-industrial funding or shareholder-interest biases? Would it say whether the owners of the content are registered with a political party?

I don’t think that experiment will go how a whole lot of people think it’s gonna go (pending an infallible fact-checking AI), but I’m really excited to see it come to fruition, because people have biases and a well-made fact-checker bot won’t.


#3

To me the main problem isn’t the AI or the algorithms; the problem is who creates them.
People are afraid of themselves and of what they think. I find it very difficult to believe in human impartiality, because what is good for one person may not be for another.
If a council were created, those problems could probably be reduced.


#4

To be a fact, does something have to be measurably true? If so, then checking could be done with no bias, or we could calculate the probability that some assertion actually happened. We could quantify the percentage of conjecture and opinion as well.
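As a toy illustration of quantifying conjecture, the sketch below flags sentences containing hedge words and reports the percentage. The hedge-word list and the sentence splitting are crude placeholder heuristics; a real system would need far more sophisticated language analysis.

```python
# Toy sketch: estimate what fraction of a text is conjecture/opinion
# by flagging sentences that contain common hedge words.
import re

HEDGE_WORDS = {"maybe", "probably", "i think", "might", "could", "believe"}

def conjecture_ratio(text: str) -> float:
    """Return the fraction of sentences that look like conjecture or opinion."""
    sentences = [s.strip().lower() for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    hedged = sum(1 for s in sentences if any(w in s for w in HEDGE_WORDS))
    return hedged / len(sentences)

sample = "The vote was held on Tuesday. I think turnout might be higher next year."
print(f"{conjecture_ratio(sample):.0%} conjecture")  # 50% for this sample
```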


#5

That’s a tough one, because as fake-news detection advances, so do the fake-news creation mechanisms, just like hacking versus security. I think we still need some skilled people in the background constantly on the case, verifying what’s real and what’s fake, discerning when social manipulation is occurring, and feeding this back into the AI to continually strengthen the algorithm until it is so good that no one would bother attempting to use those methods to spread disinformation. I’d take a job in that…
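Purely as a sketch of that human-in-the-loop idea (and assuming scikit-learn is available), the snippet below retrains a toy fake-news classifier whenever analysts feed verified labels back in. The model, features and sample texts are stand-ins, not a production detector.

```python
# Minimal sketch: analysts label disputed items, and the detector is
# retrained on the growing set of human-verified examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_texts = ["official turnout figures released", "secret ballot-stuffing exposed!!!"]
labels = [0, 1]  # 0 = real, 1 = fake, as judged by human verifiers

def retrain():
    """Rebuild the detector from the current set of verified labels."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(labeled_texts, labels)
    return model

def analyst_feedback(text: str, is_fake: bool) -> None:
    """Fold one human verification back into the training set."""
    labeled_texts.append(text)
    labels.append(int(is_fake))

model = retrain()
analyst_feedback("miracle cure swings election, sources say", True)
model = retrain()  # detector strengthened with the new human judgment
print(model.predict(["secret miracle exposed"]))
```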