A primer on political bots - part three: Moving forward


This is the final blog post in our three-part series drawing from the research and discussion featured in Texifter’s Bot or Not: A Briefing Book. Part 1 and Part 2 focused on the complexity of defining bots and the difficulty of assigning intent to their activities. This article focuses on carving out some next steps in addressing the bot problem.

Political, commercial, and social factors influence whether we think of bots as benevolent or malicious. Making sense of these factors has become a central question in discussions among Texifter researchers. Our dialogue has led us to a more granular understanding of what separates benign automation from a problematic bot, and of how to weigh the benefits of benevolent bots against the risks of commercial or espionage bots.

Bots remain a poorly understood fixture of the online experience. We formulated a publicly available guide that explores the presence of bots on social media, including how to define the phenomenon, how to identify a bot, and how to assess the influence bots might have.

The uphill battle: How to address the bot problem

Aside from these larger abstract questions, more focus is needed on what actually constitutes a “bot,” along with a rethink of the value of certain tests, such as the Turing Test, for determining whether an account is a “bot or not.” Actionable insights and online civic literacy can better contextualize the role of bots in shaping political discourse.

These discussions have created additional tensions within adjudication sessions among DiscoverText researchers. For instance, on the subject of defining bots, one researcher felt that it was essential that we, as researchers and members of interested communities, not reduce the definition of a “bot” to that of a performance-based metric (e.g. the Turing Test):    

"Although I agree that an actionable definition is necessary to combat the problem of bots on social media platforms, not focusing on the philosophical part of the question in terms of how we conceptually define a bot is a grave mistake. Further, the Turing Test, although valuable in our imagining and thinking through some issues of machine intelligence, is extremely limited and has been heavily criticized as a metric of ‘intelligence’ for machines ... bots, at the end of the day, are made by people and change along with them although some may remain constant, much like certain social norms that help govern society." - Researcher 1

However, another team member disagreed with this philosophically driven analysis, commenting that a more action-oriented definition of bots would provide greater clarity around our thinking on automation and move us towards a resolution:

“Use simple math calculations to establish cutoffs. Divide the number of tweets by the number of days the account has existed. More than X tweets per day is a bot (or someone with more time than sense, and their tweets are probably of little value anyway). Compare number of following with number of followers. There's probably some reasonable guesses we could make.” - Researcher 2
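Researcher 2’s rate-and-ratio heuristic could be sketched in a few lines of code. The thresholds below are illustrative assumptions (the quote leaves “X” unspecified), not validated cutoffs, and the function name is our own:

```python
from datetime import datetime, timezone

# Illustrative thresholds -- assumptions for the sketch, not recommendations.
MAX_TWEETS_PER_DAY = 50     # the unspecified "X" from the quote
MIN_FOLLOWER_RATIO = 0.1    # followers divided by following

def looks_automated(tweet_count, account_created, followers, following,
                    now=None):
    """Flag an account as possibly automated using the simple
    rate and ratio heuristics described in the quote above."""
    now = now or datetime.now(timezone.utc)
    # Divide total tweets by the number of days the account has existed.
    days_active = max((now - account_created).days, 1)
    tweets_per_day = tweet_count / days_active
    # Compare followers with following; guard against division by zero.
    follower_ratio = followers / max(following, 1)
    return (tweets_per_day > MAX_TWEETS_PER_DAY
            or follower_ratio < MIN_FOLLOWER_RATIO)
```

Even Researcher 2 concedes these are “reasonable guesses”: a prolific human or a low-volume bot will defeat any single cutoff, which is partly why the adjudication debate above remains unresolved.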

But the problem is not just bot identification. There are also issues related to the intent of the bot’s creator, and this is a gray area. Determining whether a bot is “good” or “bad” involves, among many other considerations, evaluating the values and worldview of an individual:

“Although tweets can be classified as automatic or not (i.e., bot or not), classifying them as ‘good’ or ‘bad’ based on morality is obviously subjective; one person’s bad is another person’s double plus good. (Background reading: 1984). Nor are tweets from news organizations inherently good; it all depends on your politics.” - Researcher 3

While many can agree that bots have influenced online political discourse, further disagreement arises when determining political affiliations, national identity, and other subjective bot characteristics. In recognition of this, the briefing book we created is part of a larger effort to move the conversation forward on how to identify these non-human actors and evaluate their role and influence.

Considering the growing appearance of bots and their appropriation for information warfare, there is an even more urgent need for a multilayered assessment of the bot phenomenon. Ongoing interdisciplinary research work will yield more creative approaches for future projects. The briefing book, along with projects such as Botometer, and the work of a number of scholars across the globe are steps in the right direction.

While some argue that the proliferation of bots has left Twitter in a state of poor health, Twitter CEO and co-founder Jack Dorsey has issued public statements pledging to improve the platform:

Image: A tweet by Twitter chief executive Jack Dorsey.

Following recommendations advanced by an MIT-affiliated nonprofit, Dorsey suggests pursuing four metrics: shared attention (is there overlap in the subjects we discuss?); shared reality (are the facts we use consistent?); variety (are different opinions available and grounded in a shared reality?); and receptivity (do we receive these different opinions openly and civilly?). In tackling these issues, Twitter aims to become a much healthier online source of information.

As a workbench for these problems, the DiscoverText platform can be leveraged to deepen our understanding of the issues facing Twitter. In particular, the ability to create custom machine learning classifiers offers an opportunity to develop a more nuanced understanding of how bots operate on the platform.
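To illustrate the underlying idea of a custom classifier trained on hand-coded examples (this is a generic, self-contained sketch, not the DiscoverText API), here is a minimal multinomial naive Bayes text classifier; the class name, labels, and training texts are all hypothetical:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesClassifier:
    """A minimal multinomial naive Bayes text classifier, sketching how a
    custom classifier might be trained from human-adjudicated examples."""

    def __init__(self):
        self.class_counts = Counter()              # documents seen per label
        self.word_counts = defaultdict(Counter)    # word frequencies per label
        self.vocab = set()

    def train(self, text, label):
        """Record one hand-coded example (e.g. a tweet labeled in adjudication)."""
        self.class_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1
            self.vocab.add(word)

    def classify(self, text):
        """Return the label with the highest posterior log-probability."""
        words = text.lower().split()
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # Log prior plus Laplace-smoothed log likelihood of each word.
            score = math.log(self.class_counts[label] / total_docs)
            label_total = sum(self.word_counts[label].values())
            for word in words:
                count = self.word_counts[label][word] + 1
                score += math.log(count / (label_total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

In practice one would train on many more coded items than this toy vocabulary allows, but the workflow is the same: humans adjudicate a sample, the classifier generalizes from it, and disagreements like those quoted above surface as label noise.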

This concludes our three-part series pointing to our work on a bot briefing book, which was shared with the listserv of DiscoverText/Texifter users as well as the Association of Internet Researchers (AoIR) community. If you’d like to be a part of the Texifter team contributing to this project, please contact us.

Read Part 1 here and Part 2 here.

Image: Darren Moloney.