8/9/2017

UnbiasedCrowd: Beating fake news at its own game

 

By Vishwajeet Narwal, Angel Ortega, Jose Angel Lopez, Mohammed Kambal, John O’Donovan, Tobias Hollerer, Saiph Savage
HCI Lab at West Virginia University, University of California, Santa Barbara, and Universidad Nacional Autonoma de Mexico (UNAM)

When Donald Trump demonstrated that he relied on television news channels and social media for news, it became a strong indicator that it was time to confront the fake news epidemic. Fake and biased news stories are increasingly passed off as truth. Worse, thanks to social media, news spreads at lightning speed: after a few key tweets and retweets, a biased story can go viral, and the whole world learns only one side of the story. Social ingroups and outgroups receive conflicting messages. Nothing distorts the truth more than biased news. For this reason, we developed a system that detects visually biased news and enables activists to counter it.

Among the several types of journalistic biasing methods, visual framing is one of the easiest and most effective tools the media use to manipulate citizens: a strategic selection of images steers readers toward a particular perspective. Activists and informed citizens may try to expose this bias to show fellow citizens how the media might be manipulating them. However, they lack tools that help them detect biased information and find a neutral perspective on an event. To fill this gap, we developed UnbiasedCrowd.

UnbiasedCrowd enables activists to easily compare all the images of a selected news story to detect visual framing bias. If bias is detected, our system helps highlight it by juxtaposing two contrasting images in an image macro. By sharing this image macro, the system then bootstraps on social media, allowing activists to reach out to the crowd, inform them about the visual framing bias, and even ask them to take action against it.
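For illustration, here is a minimal sketch of how two such contrasting photos could be stitched into a side-by-side macro using the Pillow imaging library. The file names, caption, and layout constants are hypothetical stand-ins, not our production pipeline.

```python
from PIL import Image, ImageDraw

def build_image_macro(left_path, right_path, caption, out_path, height=400):
    """Juxtapose two contrasting news photos side by side, with a caption bar."""
    left, right = Image.open(left_path), Image.open(right_path)

    # Scale both photos to a common height, preserving their aspect ratios.
    left = left.resize((left.width * height // left.height, height))
    right = right.resize((right.width * height // right.height, height))

    bar = 40  # height of the white caption strip under the photos
    macro = Image.new("RGB", (left.width + right.width, height + bar), "white")
    macro.paste(left, (0, 0))
    macro.paste(right, (left.width, 0))

    ImageDraw.Draw(macro).text((10, height + 10), caption, fill="black")
    macro.save(out_path)

# Hypothetical usage: one photo from each of two outlets covering the same event.
build_image_macro("outlet_a.jpg", "outlet_b.jpg",
                  "Same event, two very different pictures.", "macro.png")
```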

In particular, the image is shared as a Twitter summary card which, when clicked, redirects to a page on our system’s server where all the aggregated photos of the story are presented. This enables individuals to investigate the case for themselves: each person has access to the image macro that the activist created, as well as to all the images our system has collected. Consequently, each individual is empowered to detect whether the activists themselves are trying to bias them.
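To give a flavor of how one page can serve double duty, here is a sketch of a single web endpoint carrying both the summary-card metadata and the full photo gallery. We assume a small Flask app; the route, the in-memory STORIES store, and the URLs are hypothetical, though the twitter:* meta tag names are the ones Twitter actually reads when rendering a card.

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# Stand-in for our photo store; a real deployment would query the images
# aggregated for each news story.
STORIES = {
    "42": {"title": "Example protest coverage",
           "macro_url": "https://example.org/macros/42.png",
           "photo_urls": ["https://example.org/photos/42-1.jpg",
                          "https://example.org/photos/42-2.jpg"]},
}

CARD_PAGE = """
<html><head>
  <!-- Twitter reads these meta tags to render the summary card in the feed. -->
  <meta name="twitter:card" content="summary_large_image">
  <meta name="twitter:title" content="{{ story.title }}">
  <meta name="twitter:description"
        content="Compare every photo of this story and judge for yourself.">
  <meta name="twitter:image" content="{{ story.macro_url }}">
</head><body>
  <h1>{{ story.title }}</h1>
  {% for url in story.photo_urls %}<img src="{{ url }}">{% endfor %}
</body></html>
"""

@app.route("/story/<story_id>")
def story_page(story_id):
    # The same URL works as the card target in the tweet and as the
    # investigation page with all aggregated photos.
    return render_template_string(CARD_PAGE, story=STORIES[story_id])
```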

In our current implementation, our system searches for people who are currently tweeting about the news story (based on hashtags or news articles being shared online). We consider these individuals the most interested in becoming aware of the bias. Our bots then call them out, presenting the constructed image macro and asking whether they think there is a possible bias.
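The call-out step might look roughly like the sketch below, written against the tweepy library’s v3-era REST API; the credentials, hashtag, reply wording, and URLs are all placeholders.

```python
import tweepy

# Placeholder credentials; a real deployment registers its own Twitter app.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

def call_out_bias(hashtag, macro_path, card_url):
    """Find people tweeting about the story and show them the image macro."""
    media = api.media_upload(macro_path)  # attach the macro to every reply
    for tweet in api.search(q=hashtag, count=20):
        reply = ("@{} Two outlets, two very different pictures of this story. "
                 "Do you see a bias? All the photos: {}").format(
                     tweet.user.screen_name, card_url)
        api.update_status(status=reply,
                          in_reply_to_status_id=tweet.id,
                          media_ids=[media.media_id])

call_out_bias("#ExampleProtest", "macro.png", "https://example.org/story/42")
```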

After individuals have been exposed to the bias, our system’s online bots engage them in conversation to try to drive them toward some type of action on the bias. In particular, the bots communicate with the people who were originally targeted (that is, the individuals to whom the bots exposed the visual bias) by replying to their tweets and keeping the conversation going. The conversation focuses on asking what actions can be taken, given the possible news media bias, and encouraging people to take them. Note that this action-taking stage is supervised by activists, who can jump in at this point to lead people to action themselves.
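The conversational loop itself can be as simple as polling the bot’s mentions and answering each reply with an action-oriented prompt, as in this rough sketch, which reuses the tweepy client from the previous snippet; the prompts and polling interval are illustrative.

```python
import time

ACTION_PROMPTS = [
    "What do you think readers could do about coverage like this?",
    "Would you share the full photo set so others can judge for themselves?",
]

def converse(api, since_id=1):
    """Keep replying to people who answered our call-outs with action prompts."""
    while True:
        for mention in api.mentions_timeline(since_id=since_id):
            since_id = max(since_id, mention.id)
            # Rotate prompts so repeat respondents see varied questions.
            prompt = ACTION_PROMPTS[mention.id % len(ACTION_PROMPTS)]
            api.update_status(
                status="@{} {}".format(mention.user.screen_name, prompt),
                in_reply_to_status_id=mention.id)
        time.sleep(60)  # poll once a minute; an activist can take over any time
```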

Creators and supporters of propaganda use visual memes to deceive readers on social media. Most of these readers care about the news, but not enough to investigate the authenticity of particular claims themselves. Moreover, images and memes are fast to comprehend, so they easily go viral on social media. By exploiting this same advantage of image macros, our platform aims to nullify the effect of media bias.

Explore UnbiasedCrowd here.

Image: Zoi Koraki.
