In early March 2020, on the cusp of the pandemic’s global acceleration, a popular Twitter user called ZDoggMD (in real life, a physician named Zubin Damania) sounded a pro-vaccine rallying cry: #DoctorsSpeakUp. The hashtag, intended to call on real doctors to share the positive realities of vaccination with the world, was instead almost immediately hijacked by anti-vaxxers. Newly published research from a team at the University of Pittsburgh used supercomputers to delve into how the event went wrong – and how similar efforts could be insulated from such hijacking in the future.
Using Twitter’s filtered stream API, the researchers pulled all publicly available tweets that used the #DoctorsSpeakUp hashtag on March 5, 2020. Five percent of those tweets – around a thousand – were examined with thematic content analysis, allowing the researchers to study the associations between tweet sentiment, account type (human or likely bot) and tweet content (e.g. personal narrative, statement, etc.). The researchers used a tool called Botometer to estimate the likelihood that any given account was a bot.
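The sampling step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual pipeline: the field names and the 20,000-tweet corpus below are hypothetical, standing in for tweets already collected from the hashtag stream.

```python
import random

# Hypothetical stand-in for the collected #DoctorsSpeakUp tweets;
# "id" and "text" are assumed field names, not the study's schema.
tweets = [{"id": i, "text": f"#DoctorsSpeakUp tweet {i}"} for i in range(20000)]

def sample_for_coding(tweets, fraction=0.05, seed=42):
    """Draw a random subsample (here ~5%) of hashtag tweets
    for manual thematic content analysis."""
    rng = random.Random(seed)  # seeded for reproducibility
    k = max(1, round(len(tweets) * fraction))
    return rng.sample(tweets, k)

sample = sample_for_coding(tweets)
print(len(sample))  # 5% of 20,000 -> 1000
```

Each sampled tweet would then be hand-coded for sentiment and account type, with Botometer supplying a separate bot-likelihood score per account.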
To run this intensive data analysis, the researchers turned to local supercomputing resources at the Pittsburgh Supercomputing Center (PSC). There, they made use of the Bridges system for some time before it was retired in mid-February 2021, when they switched over to the Bridges-2 system (which officially began production operations this spring).
“We’ve been working with [the] Pittsburgh Supercomputing Center since before Bridges, worked through the duration of Bridges, and we’re now on Bridges-2,” said Jason Colditz, a researcher at the University of Pittsburgh and one of the authors on the paper, in an interview with PSC’s Ken Chiacchia. Colditz noted that there are “terabytes and terabytes of data that we’ve collected from Twitter over the span of several years,” but that the data moved quickly, requiring stability and uptime. “And that’s really where working with PSC has been beneficial,” he said.
Using the supercomputer-powered analytics, the researchers came to some valuable insights: 78.9 percent of all studied tweets were anti-vaccination; 79.4 percent of the tweets from users claiming to be health professionals were pro-vaccination; and 96.3 percent of the tweets from users claiming to be parents (but not health professionals) were anti-vaccination. While bots represented only a small portion of the tweets, anti-vaccination bot tweets outnumbered pro-vaccination bot tweets by a factor of five to one. Furthermore, a larger percentage of anti-vaccination tweets linked to scientific information compared to pro-vaccination tweets, though the researchers noted that the anti-vaxx tweets were likely to misrepresent the research.
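Percentages like those above come from a straightforward cross-tabulation of the hand-coded sample. A minimal sketch, using made-up records (the "sentiment" and "account_type" labels echo the paper's coding scheme, but the data and function below are hypothetical):

```python
# Toy coded records -- invented for illustration, not study data.
coded = [
    {"sentiment": "anti", "account_type": "parent"},
    {"sentiment": "anti", "account_type": "parent"},
    {"sentiment": "pro",  "account_type": "health_professional"},
    {"sentiment": "anti", "account_type": "health_professional"},
    {"sentiment": "pro",  "account_type": "other"},
]

def pct_by_type(records, sentiment, account_type):
    """Percentage of tweets from one account type with a given sentiment."""
    group = [r for r in records if r["account_type"] == account_type]
    hits = sum(r["sentiment"] == sentiment for r in group)
    return 100.0 * hits / len(group)

print(pct_by_type(coded, "anti", "parent"))               # 100.0 on this toy data
print(pct_by_type(coded, "pro", "health_professional"))   # 50.0 on this toy data
```

At the study's scale, the same tabulation runs over roughly a thousand coded tweets drawn from a much larger streamed corpus, which is where the PSC systems' storage and uptime mattered.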
The researchers concluded that the hijacking constituted a “highly coordinated response of devoted anti-vaccine antagonists.” Moving forward, they noted, “it would be valuable to ensure that pro-vaccine messages consider hashtag use and pre-develop messages that can be launched and promoted by pro-vaccine advocates.”
“It is, I think, a really beneficial time to be looking at social media to get a sense of what’s going on in these communications,” Colditz said, “and how we might as public health advocates be able to smooth out some of that rough road that we’re seeing with people being hesitant or downright adverse to engaging in vaccinations for the current pandemic.”
To read the paper, which was published in the May 2021 issue of Vaccine, click here.
To read the reporting on this research from PSC’s Ken Chiacchia, click here.