If you get your news from social media, as most Americans do, you are exposed to a daily dose of hoaxes, rumors, conspiracy theories and misleading news. When it’s all mixed in with reliable information from honest sources, the truth can be very hard to discern.

In fact, my research team’s analysis of data from Columbia University’s Emergent rumor tracker suggests that this misinformation is just as likely to go viral as reliable information.

Many are asking whether this onslaught of digital misinformation affected the outcome of the 2016 U.S. election. The truth is we do not know, although there are reasons to believe it is entirely possible, based on past analysis and accounts from other countries. Each piece of misinformation contributes to the shaping of our opinions. Overall, the harm can be very real: If people can be conned into jeopardizing our children’s lives, as they do when they opt out of immunizations, why not our democracy?

As a researcher on the spread of misinformation through social media, I know that limiting news fakers’ ability to sell ads, as recently announced by Google and Facebook, is a step in the right direction. But it will not curb abuses driven by political motives.

Exploiting social media

About 10 years ago, my colleagues and I ran an experiment in which we found that 72 percent of college students trusted links that appeared to originate from friends – even to the point of entering personal login information on phishing sites. This widespread vulnerability suggested another form of malicious manipulation: People might also believe misinformation they receive when clicking on a link from a social contact.


To explore that idea, I created a fake web page with random, computer-generated gossip news – things like “Celebrity X caught in bed with Celebrity Y!” Visitors to the site who searched for a name would trigger the script to automatically fabricate a story about the person. I included on the site a disclaimer, saying the site contained meaningless text and made-up “facts.” I also placed ads on the page. At the end of the month, I got a check in the mail with earnings from the ads. That was my proof: Fake news could make money by polluting the internet with falsehoods.
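A script like the one described above can be sketched in a few lines. This is a minimal illustration, assuming simple fill-in-the-blank templates; the template text, names and function names here are my own assumptions, not the original site's code.

```python
import random

# Illustrative templates and names -- assumptions for this sketch,
# not the actual text used on the original site.
TEMPLATES = [
    "{name} caught in bed with {other}!",
    "{name} secretly spotted leaving rehab!",
    "Sources say {name} is feuding with {other}!",
]

CELEBRITIES = ["Celebrity X", "Celebrity Y", "Celebrity Z"]

def fabricate_story(name: str) -> str:
    """Generate a made-up gossip headline about whatever name was searched."""
    template = random.choice(TEMPLATES)
    other = random.choice([c for c in CELEBRITIES if c != name])
    # str.format ignores unused keyword arguments, so templates may use
    # either or both placeholders.
    return template.format(name=name, other=other)
```

Every visitor's search produces a fresh, meaningless "story" – which is exactly why pairing such a page with ads monetizes falsehood at zero marginal cost.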

Sadly, I was not the only one with this idea. Ten years later, we have an industry of fake news and digital misinformation. Clickbait sites manufacture hoaxes to make money from ads, while so-called hyperpartisan sites publish and spread rumors and conspiracy theories to influence public opinion.

This industry is bolstered by how easy it is to create social bots, fake accounts controlled by software that look like real people and therefore can have real influence. Research in my lab uncovered many examples of fake grassroots campaigns, also called political astroturfing.

In response, we developed the BotOrNot tool to detect social bots. It’s not perfect, but accurate enough to uncover persuasion campaigns in the Brexit and antivax movements. Using BotOrNot, our colleagues found that a large portion of online chatter about the 2016 elections was generated by bots.
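To give a flavor of bot detection, here is a deliberately simplified scoring heuristic. The real BotOrNot system is a supervised classifier trained on many account features; the features, thresholds and weights below are illustrative assumptions only, not its actual model.

```python
def bot_score(account: dict) -> float:
    """Toy heuristic bot score in [0, 1]; higher means more bot-like.
    Features and weights are illustrative assumptions, not BotOrNot's model."""
    score = 0.0
    if account["tweets_per_day"] > 50:     # superhuman posting rate
        score += 0.4
    if account["followers"] < 10:          # almost no real audience
        score += 0.2
    if not account["has_profile_photo"]:   # default avatar
        score += 0.2
    if account["retweet_fraction"] > 0.9:  # nearly all amplification, no original content
        score += 0.2
    return min(score, 1.0)
```

An account that posts hundreds of times a day, has a handful of followers, no photo and retweets almost exclusively would score near 1.0, while a typical human account scores near 0. Real detectors combine far more signals precisely because each individual cue has innocent explanations.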

Creating information bubbles

We humans are vulnerable to manipulation by digital misinformation thanks to a complex set of social, cognitive, economic and algorithmic biases. Some of these have evolved for good reasons: Trusting signals from our social circles and rejecting information that contradicts our experience served us well when our species adapted to evade predators. But in today's online networks, which shrink the world, a social connection with a conspiracy theorist on the other side of the planet does not help inform my opinions.

Copying our friends and unfollowing those with different opinions give us echo chambers so polarized that researchers can tell with high accuracy whether you are liberal or conservative by just looking at your friends. The network structure is so dense that any misinformation spreads almost instantaneously within one group, and so segregated that it does not reach the other.
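The idea that your friends' labels reveal your own can be captured with a majority vote over the network. This is a deliberately simplified stand-in for the classifiers used in such research; the data structures and labels are illustrative assumptions.

```python
from collections import Counter

def infer_leaning(user: str, friends: dict, labels: dict) -> str:
    """Guess a user's political leaning from the known labels of their friends.
    A simple majority vote -- an illustrative sketch, not the actual
    research classifier."""
    votes = Counter(labels[f] for f in friends[user] if f in labels)
    return votes.most_common(1)[0][0]

# Hypothetical toy network: alice follows three labeled accounts.
friends = {"alice": ["bob", "carol", "dan"]}
labels = {"bob": "liberal", "carol": "liberal", "dan": "conservative"}
```

Because echo chambers are so homogeneous, even this crude vote would classify most users correctly – which is precisely what makes the network structure such a strong predictor.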

Inside our bubble, we are selectively exposed to information aligned with our beliefs. That is an ideal scenario to maximize engagement, but a detrimental one for developing healthy skepticism. Confirmation bias leads us to share a headline without even reading the article.

Our lab got a personal lesson in this when our own research project became the subject of a vicious misinformation campaign in the run-up to the 2014 U.S. midterm elections. When we investigated what was happening, we found fake news stories about our research being predominantly shared by Twitter users within one partisan echo chamber, a large and homogeneous community of politically active users. These people were quick to retweet and impervious to debunking information.

Read the original article on The Conversation: https://theconversation.com
