To perform their most recent analysis, the researchers studied more than 200 million tweets discussing coronavirus or covid-19 since January. They used machine-learning and network analysis techniques to identify which accounts were spreading disinformation and which were most likely bots or cyborgs (accounts run jointly by bots and humans).

The system looks for 16 different maneuvers that disinformation accounts can perform, including “bridging” between two groups (connecting two online communities), “backing” an individual (following the account to increase the person’s level of perceived influence), and “nuking” a group (actions that lead to an online community being dismantled).
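The article doesn't describe how these maneuvers are detected, but the "bridging" case can be illustrated with a toy network analysis: flag accounts whose connections span two otherwise separate communities. The function, data, and community labels below are hypothetical, a minimal sketch rather than the researchers' actual method.

```python
from collections import defaultdict

def find_bridges(edges: list[tuple[str, str]],
                 group_a: set[str],
                 group_b: set[str]) -> set[str]:
    """Flag accounts outside both communities that connect to members of each.

    edges: undirected follow/interaction pairs (hypothetical data).
    """
    neighbors: dict[str, set[str]] = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    bridges = set()
    for account, nbrs in neighbors.items():
        if account in group_a or account in group_b:
            continue  # only consider outside accounts as potential bridges
        if nbrs & group_a and nbrs & group_b:
            bridges.add(account)  # touches both communities
    return bridges

# Toy example: "x" links the two communities, "y" touches only one.
community_a = {"a1", "a2"}
community_b = {"b1", "b2"}
interactions = [("x", "a1"), ("x", "b1"), ("y", "a2")]
print(find_bridges(interactions, community_a, community_b))  # → {'x'}
```

Real detection would work on far larger graphs and combine many signals, but the core idea is the same: a bridging account sits between communities that otherwise rarely interact.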

Through the analysis, they identified more than 100 types of inaccurate covid-19 stories and found that bots were not only gaining traction and accumulating followers, but also accounted for 82% of the top 50 and 62% of the top 1,000 most influential retweeters. The influence of each account was calculated to reflect both the number of followers it reached directly and the number of followers its followers reached.
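The influence measure described above, counting followers reached plus the followers those followers reach, amounts to a two-hop reach score. A minimal sketch, with a hypothetical follower graph (the researchers' exact weighting is not given in the article):

```python
def two_hop_reach(followers: dict[str, set[str]], account: str) -> int:
    """Count unique accounts within two follower hops of `account`.

    followers maps each account to the set of accounts that follow it
    (hypothetical data structure for illustration).
    """
    direct = followers.get(account, set())
    reached = set(direct)
    for follower in direct:
        # Add the followers of each direct follower (second hop).
        reached |= followers.get(follower, set())
    reached.discard(account)  # don't count the account itself
    return len(reached)

# Toy example: bot_a has 2 direct followers, who together have 2 more.
graph = {
    "bot_a": {"u1", "u2"},
    "u1": {"u3", "u4"},
    "u2": {"u4"},  # u4 follows both u1 and u2; counted once
}
print(two_hop_reach(graph, "bot_a"))  # → 4
```

Deduplicating across hops matters: an account followed by overlapping audiences reaches fewer unique users than raw follower counts suggest.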

The researchers have begun to analyze Facebook, Reddit, and YouTube to understand how disinformation spreads between platforms. The work is still in the early stages, but it’s already revealed some unexpected patterns. For one, the researchers have found that many disinformation stories come from regular websites or blogs before being picked up on different social platforms and amplified. Different types of stories also have different provenance patterns. Those claiming that the virus is a bioweapon, for example, mostly come from so-called “black news” sites, fake news pages designed to spread disinformation that are often run outside the US. In contrast, the “reopen America” rhetoric mostly comes from blogs and Facebook pages run in the US.

The researchers also found that users of different platforms respond to such content in very different ways. On Reddit, for example, moderators are more likely to debunk and ban disinformation. When a coordinated campaign around reopening America popped up on Facebook, Reddit users began discussing the phenomenon and counteracting the messaging. “They were saying, ‘Don’t believe any of that stuff. You can’t trust Facebook,’” says Carley.

Unfortunately, there are no easy solutions to this problem. Banning or removing accounts won’t work, as more can be spun up for every one that is deleted. Banning accounts that spread inaccurate information also won’t solve everything. “A lot of disinformation is done through innuendo or done through illogical statements, and those are hard to discover,” she says.

Carley says researchers, corporations, and the government need to coordinate better to come up with effective policies and practices for tamping this down. “I think we need some kind of general oversight group,” she says. “Because no one group can do it alone.”