Automated social media accounts, commonly referred to as ‘bots’, are actively influencing the COVID-19 ‘anti-mask movement’ on Twitter. Many have strong political allegiances and artificially inflate engagement on politicians’ tweets.
Over the last few days we’ve been researching the impact of bots on public conversations about COVID-19 on Twitter. Bots are commonly identifiable through randomly generated, often numerical, usernames; the use of stock imagery from public sources; and tweeting in consistent patterns through automated means (a simplified sketch of these heuristics appears below the figure). By analysing 36,974 tweets from August 2020 related to the anti-mask movement, we found that 7% were posted by bot accounts (2,588 profiles).
Figure 1: A visualisation showing how Twitter profiles interact with each other. Circles represent profiles; lines represent engagements.
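To make those identification heuristics concrete, here is a minimal sketch of the kind of rule-based scoring such an analysis might use. It is illustrative only: the thresholds, the profile field names, and the `looks_like_bot` helper are our own assumptions for this sketch, not the actual detection pipeline used in the research.

```python
import re
from datetime import datetime, timezone

# Illustrative thresholds: assumptions for this sketch, not values from the research.
NUMERIC_NAME_PATTERN = re.compile(r"\d{6,}$")  # handle ends in a long run of digits
HIGH_TWEET_RATE = 100                          # tweets per day suggestive of automation

def looks_like_bot(profile: dict) -> bool:
    """Score a profile against the three heuristics described above."""
    score = 0

    # 1. Randomly generated, often numerical, username.
    if NUMERIC_NAME_PATTERN.search(profile["screen_name"]):
        score += 1

    # 2. Stock or default imagery rather than a personal photo.
    if profile.get("default_profile_image", False):
        score += 1

    # 3. Consistent high-volume tweeting through automated means.
    age_days = max((datetime.now(timezone.utc) - profile["created_at"]).days, 1)
    if profile["statuses_count"] / age_days > HIGH_TWEET_RATE:
        score += 1

    # Require at least two independent signals before flagging.
    return score >= 2

# A hypothetical profile: digit-heavy handle, default avatar, heavy tweeting.
example = {
    "screen_name": "user83619274",
    "default_profile_image": True,
    "created_at": datetime(2020, 2, 15, tzinfo=timezone.utc),
    "statuses_count": 50_000,
}
print(looks_like_bot(example))  # -> True
```

Requiring two or more signals keeps any single noisy heuristic (a human who happens to have a numeric handle, say) from triggering a false positive on its own.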
Anonymous bot accounts on Twitter are not new, and their role in manipulating debate on social media has long been recognised by Twitter itself. In one of its official blog posts, Twitter acknowledged that bot accounts have long been a problem on the platform and encouraged users to report any suspicious activity.
Regardless, bot activity is still widely used to influence and boost social media presence, commonly by purchasing bot followers to increase follower numbers or by artificially inflating the number of shares an account receives.
Influence on public discussion
However, what we uncovered in conversations surrounding the anti-mask movement is different. Whilst all of the bots we identified shared anti-mask content at some point, 69% of these accounts primarily tweeted content with a political affiliation.
Twitter conversations are typically highly polarised, and our research confirmed this. Within the bot-driven conversation, 19% of accounts shared left-wing content while 50% shared extreme right-wing content, together accounting for the 69% with a political affiliation. In the middle, only 17% were actively and consistently sharing information on the coronavirus generally. The implication is that the COVID-19 anti-mask movement is common ground for two polarised conversations on Twitter.
All bot accounts have a source somewhere: whilst their activity is automated, a human will have programmed them to behave as they do. The political connection implies that the bot network is being deliberately managed or directed. It also highlights the wider trust issues surrounding Twitter as a reliable source of content when so many anonymous accounts may have underhand motives in polarising the debate.
Not all bots are the same
Our research shows that 40% of the identified bots have been active for less than six months. Despite this, 25% of them have tweeted over 50,000 times, with some tweeting as many as 300,000 times. 69% of the bots primarily retweeted content on political subjects. It’s not clear why some bots survive longer than others: perhaps Twitter has not yet identified and removed them, or perhaps they have become sophisticated enough to stay within Twitter’s current guidelines on automation.
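To put those volumes in perspective, here is a quick back-of-the-envelope calculation (ours, not part of the original analysis, and assuming roughly 183 days of activity):

```python
# Back-of-the-envelope check using the article's figures: 300,000 tweets
# posted over roughly six months (about 183 days) of account activity.
tweets = 300_000
days_active = 183
print(f"~{tweets / days_active:.0f} tweets per day")  # prints "~1639 tweets per day"
```

Even the 50,000-tweet accounts average around 270 tweets a day over the same period, a rate that is hard to attribute to anything other than automation.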
Our research focused only on bot accounts that had actively shared content in discussions around COVID-19, and specifically the anti-mask movement. However, it’s well known in the industry that some bots exist purely to increase a Twitter profile’s follower count. In 2018 Twitter culled up to 6% of the accounts on its platform due to fake activity, and whilst we were conducting our research, 5% of the 2,588 identified bot accounts (around 129 profiles) were deleted. This highlights the extent of the platform-manipulation problems Twitter faces.
The broader impact on Twitter and on companies
By taking a random sample of 1,000 bot accounts from the total of 2,588, we identified that those bots had collectively tweeted 1,022,103 times, an average of roughly 1,022 tweets per account. This stretches beyond the anti-mask movement, and a larger analysis could reveal numerous topics on Twitter that are subject to misinformation and manipulation. Despite Twitter’s best efforts, it appears many conversations are being distorted by bot activity.
For companies, it’s important to identify quickly when bot activity is shifting a conversation in ways that could damage their reputation, whether through a coordinated, targeted attack or simply by being mentioned in highly polarised conversation areas (such as politics). Bots are programmatic, and even good social media community management will struggle to save the day.
The biggest, and perhaps most dangerous, question about bot activity is ‘why?’. Is it state-sponsored? Company-sponsored? The whim of a programmer? Simply people looking to inflate their social influence? Perhaps all of these things are true.