A primer on political bots: Part one


The rise of political bots brings into sharp focus the role of automated social media accounts in today’s democratic civil society. Events during the Brexit referendum and the 2016 U.S. Presidential election revealed the scale of this issue to the majority of citizens and policy-makers for the first time. At the same time, the deployment of Russian-linked bots promoting pro-gun messaging in the aftermath of the Florida school shooting demonstrated how readily state-sponsored actors can use information warfare to shape the dominant narratives on platforms such as Twitter. The regular news reports on these issues lead us to conclude that the foundations of democracy are threatened by the presence of aggressive and socially disruptive bots, which aim to manipulate online political discourse.

While there is clarity on the various functions that bot accounts can be scripted to perform, as described below, accurately defining this phenomenon and identifying bot accounts remains a challenge. At Texifter, we have endeavoured to bring nuance to this issue through a research project exploring the presence of automated accounts on Twitter. The project began as an attempt to identify bots participating in online conversations about the prevailing cryptocurrency phenomenon. This article is the first in a series of three blog posts produced by the researchers at Texifter that outlines the contemporary phenomenon of Twitter bots.

Bot accounts are a persistent feature of the user experience on Twitter. They can amplify news stories, whether genuine or fake; promote opinion posts from coordinated networks of accounts (botnets); and circulate memes. Their ability to shape online political discourse and public opinion, however, is generating legitimate concerns.

The significance of the bot effect stretches from the academic research community to tech and platform companies, national regulatory bodies, and the field of journalism. One of the most widely recognised examples involves the lead-up to the 2016 U.S. Presidential Election. During that period, over 50,000 automated Twitter accounts linked to Russia retweeted and disseminated political material posted by and for Trump, reaching 677,775 Americans. These bots produced over 2,000,000 tweets and retweets, accounting for approximately 4.25% of all retweets of Trump’s tweets in the lead-up to the election. These findings accentuate the larger issue of state actors using social media automation as a tool of political influence.

The basics

Bots in their current iteration have a relatively short, albeit rapidly evolving, history. Early bots were constructed with non-malicious intentions; it wasn’t until the late 1990s, and later the rise of Web 2.0, that bots began to develop a more negative reputation. Although bots have been used maliciously in distributed denial-of-service (DDoS) attacks, spam email campaigns, and mass identity theft, automation itself is not inherently malicious.

Before the most recent political events, bots existed in chat rooms, operated as automated customer service agents on websites, and were a mainstay on dating websites. This familiar form of the bot is known to the majority of the general population as a “chatbot” - for instance, Cleverbot was, and still is, a popular platform for talking to an “AI”. Another prominent example was Microsoft’s failed Twitter chatbot Tay, which made headlines in 2016 when “her” vocabulary and conversation functions were manipulated by Twitter users until “she” espoused neo-Nazi views, at which point the account was taken down.

Image: XKCD Comic #632.

A Twitter bot is an account controlled by an algorithm or script, typically hosted on a cloud platform such as Heroku. Bots are usually, though not exclusively, scripted to conduct repetitive tasks. For example, there are bots that retweet content containing particular keywords, reply to new followers, and send direct messages to new followers; they can also be scripted for more complex tasks, such as participating in online conversations. Bot accounts are estimated to make up between 9% and 15% of all active accounts on Twitter, and they likely account for a much greater share of total Twitter traffic. Twitter bots are generally not created with malicious intent; they are frequently used for online chatting or for raising the professional profile of a corporation - but their ability to pervade our online experience and shape political discourse warrants heightened scrutiny.
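To make the mechanics concrete, the sketch below shows just how little code such a bot requires. It is a minimal, illustrative example only: the credentials and the keyword are placeholders, and it assumes the Python Tweepy library (v3.x API) rather than any particular bot’s actual code.

```python
# A minimal sketch of a keyword-retweet bot, assuming the Tweepy
# library (v3.x API). Credentials and keyword are placeholders.
import tweepy

# Hypothetical credentials from a Twitter developer account
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Search recent tweets containing a keyword and retweet each result -
# the kind of repetitive amplification task described above.
for status in api.search(q="cryptocurrency", count=10):
    try:
        status.retweet()
    except tweepy.TweepError as err:
        print(err)  # e.g. already retweeted, or rate-limited
```

A script like this, left running on a schedule, is all it takes to automatically amplify every tweet matching a chosen keyword.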

While the majority of the bots in the examples outlined above were run by reasonably simple algorithms, bots are becoming increasingly complex and closer to passing the Turing Test, which evaluates whether a machine can pass as human to a human interlocutor. One of the most well-known complex bots was the account run under the fictitious persona of Jenna Abrams, which posted pro-Trump content. Known as the “darling of the alt-right”, the account attracted over 70,000 followers and was quoted by a range of media organisations, with “her” tweets covering topics from Trump and right-wing politics to celebrity gossip and the value of punctuation. The account operated undetected for three years and was eventually closed after it was discovered that “Jenna” had been constructed by the Internet Research Agency, an organisation linked to the Russian government which undertakes operations on behalf of Russian interests.

Image: A tweet from the fictitious Jenna Abrams.

The issues involved here, however, are applicable beyond digital political communications and hold relevance for the spread of misinformation in the economic realm, including the cryptocurrency boom. In another concerning example, bots have been used to impersonate celebrities, such as Donald Trump, and falsely claim to be hosting online competitions in which Twitter users can participate by donating a small quantity of a specified cryptocurrency. One of these fraudulent bots, posing as Elon Musk, claimed it would send 1.00 ETH (~$850) to the first 300 people who donated 0.20 ETH (~$170) to a specific cryptocurrency wallet as part of a “24 hours promotion”. The wallet specified by the bot amassed over $4,000 worth of cryptocurrency from the users it managed to scam.
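A rough back-of-envelope check, using the approximate dollar figures quoted above (which are assumptions from contemporary prices, not exact values), shows that only a couple of dozen victims were needed to reach that haul:

```python
# Back-of-envelope economics of the fake giveaway, using the
# approximate figures quoted above.
eth_price_usd = 850          # ~price of 1 ETH at the time (assumed)
deposit_eth = 0.20           # "entry fee" demanded by the scam
wallet_total_usd = 4000      # amount the wallet reportedly amassed

deposit_usd = deposit_eth * eth_price_usd          # ~$170 per victim
implied_victims = wallet_total_usd / deposit_usd   # ~24 deposits
print(round(deposit_usd), round(implied_victims))  # -> 170 24
```

Since the promised 300 payouts of 1.00 ETH were never made, every deposit was pure profit for the scammer.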

It is within this context that we at Texifter set out to investigate how Twitter bots can be identified more quickly, where they originate, how accounts are automated, and how best to respond to them. We intend this series of blog posts to be a resource for anyone interested in the use and influence of bots in the digital sphere. The topic is complex and fluid, however, and the answers and solutions offered will need revision as the bot phenomenon itself evolves. The second post will focus on the findings of the Texifter team, whilst the third will present possible solutions. This series offers a glimpse into the larger project; for more information, or to get involved, we encourage you to reach out to us.

Read Part 2 here.

Image: felixthehat.