Images Play Persuasive Role in Disinformation Campaigns

If the 2016 election is any indication, images included in state-sponsored social media posts are effective at disseminating propaganda, new analysis shows.

Disinformation campaigns similar to those seen in the run-up to the 2016 presidential election are again being developed and will ramp up before November, according to Gianluca Stringhini, a researcher and co-director of Boston University’s Security Lab (SeclaBU).

Stringhini recently released research studying disinformation spread through Russian state-sponsored accounts during the 2016 election and the events that followed. Specifically, his work analyzed a dataset of 1.8 million images posted in 9 million tweets shared from 3,600 Russian troll accounts (confirmed as such by Twitter in October 2018).

His findings, published in the report “Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter,” suggest that much of the disinformation spread during the 2016 election, as well as high-profile national events that followed (e.g., the white supremacist rally in Charlottesville, Va.), leveraged image-based posts on social media.

“Nobody had looked at images before we did,” he says. “But images are used as a vehicle of information.”

Stringhini points to memes as particularly effective. “They essentially can convey information much faster than text,” he says. “It’s more straightforward for people to share these images. It’s a more immediate vehicle of disinformation.”

The analysis shows that Russian state-sponsored trolls were more influential in spreading political imagery when compared with other topics, and that images fell just behind URLs in their effectiveness at disseminating propaganda. The dataset Stringhini’s team analyzed included a mix of original images created by Russian accounts, as well as existing, “real” images and memes that trolls would find and share to groups with opposing viewpoints to create controversy.

The report further points out that “the same images often appear on both their feeds and specific domains,” indicating that state-sponsored trolls “might be trying to make their accounts look more credible and push their agenda by targeting unwitting users on popular Web communities like Twitter.”

Indeed, a ranking of the top 20 domains that shared the same images as the state-sponsored accounts (page 6 of the report) puts Twitter in second place, behind Pinterest. Popular social media destinations including Facebook, Reddit, and YouTube made the list as well. 
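
The article doesn't describe the report's exact image-matching pipeline, but one common way to recognize the "same" image across feeds and domains, even after resizing or recompression, is perceptual hashing. The sketch below is only illustrative: it assumes the third-party Pillow and imagehash Python packages, and the file names are hypothetical.

```python
# Illustrative sketch: flagging near-duplicate images across platforms
# with perceptual hashing. Requires the Pillow and imagehash packages;
# the file paths below are hypothetical.
from PIL import Image
import imagehash

def load_phash(path):
    """Compute a 64-bit perceptual hash for an image file."""
    with Image.open(path) as img:
        return imagehash.phash(img)

def is_near_duplicate(path_a, path_b, max_distance=8):
    """Treat two images as the 'same' if their hashes differ in at most
    max_distance bits; small distances survive resizing and recompression."""
    return load_phash(path_a) - load_phash(path_b) <= max_distance

if __name__ == "__main__":
    # Hypothetical files: a meme collected from a troll account's feed and
    # a candidate match scraped from another site.
    print(is_near_duplicate("troll_feed_meme.jpg", "external_site_meme.jpg"))
```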

For its part, Twitter last week disclosed 32,242 accounts in its archive of state-linked information operations. The archived accounts include “distinct operations” Twitter attributes to the People’s Republic of China (PRC), Russia, and Turkey. In a blog post outlining basic information about these actors, Twitter says Russian trolls accounted for just 1,152 of those accounts, and that they were “suspended for violations of our platform manipulation policy, specifically cross-posting and amplifying content in an inauthentic, coordinated manner for political ends [including] promoting the United Russia party and attacking political dissidents.”

While the transparency is appreciated, it’s unclear whether these methods of locating and removing trolls are sophisticated enough or, crucially, whether they will ever be more sophisticated than the trolls themselves.

Indeed, Stringhini’s report notes that the small number of Russian state-sponsored accounts identified by Twitter suggests these actors work hard to be taken seriously, and succeed.

It may be too soon to tell whether images will play a large role in efforts to influence the 2020 elections. Stringhini’s report notes that Russian trolls also helped push the QAnon conspiracy theory, an effort “at least partially supported by the use of images initially appearing on imageboards like 4chan and 8chan.”

Further, there is some evidence of new activity: “Right now we are seeing a lot of image-based campaigns propagating … but it’s difficult to figure out which are organically developing,” Stringhini says. “Essentially, what we see are these campaigns pushing certain narratives that are geared toward polarizing public discourse. People should keep an eye on this activity.”

However, he adds, “It’s tricky because this is what regular discussion looks like. It’s usually not very straightforward to identify this malicious activity. Trolls blend into regular discussion and slowly hijack it.”

Platforms, of course, have a greater ability than users do to identify suspicious accounts, by tracking access patterns, IP addresses, timing of posts, and coordination between campaigns. And while they are “doing that to some extent,” says Stringhini, ultimately it’s an “arms race” between the sophistication of platform tools and that of troll techniques. So far, the trolls are in the lead.
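
Platforms' actual detection systems aren't public, but as a rough illustration of one coordination signal they could look for, the sketch below flags pairs of accounts that repeatedly post identical content within a short time window. The data format, thresholds, and account names are assumptions made for the example.

```python
# Illustrative sketch of one coordination signal: pairs of accounts that
# repeatedly post identical content within a short time window. Real
# platform systems combine many signals (IPs, access patterns, device
# data) that are not modeled here.
from collections import defaultdict
from itertools import combinations
from datetime import datetime, timedelta

# Hypothetical input: (account_id, timestamp, content_fingerprint) tuples.
posts = [
    ("acct_a", datetime(2020, 6, 1, 12, 0, 5), "meme_hash_123"),
    ("acct_b", datetime(2020, 6, 1, 12, 0, 9), "meme_hash_123"),
    ("acct_a", datetime(2020, 6, 2, 8, 30, 0), "meme_hash_456"),
    ("acct_b", datetime(2020, 6, 2, 8, 30, 4), "meme_hash_456"),
]

def coordinated_pairs(posts, window=timedelta(seconds=30), min_hits=2):
    """Count how often each pair of accounts shares the same content
    within `window`, and return pairs that do so at least `min_hits` times."""
    by_content = defaultdict(list)
    for account, ts, content in posts:
        by_content[content].append((account, ts))

    pair_hits = defaultdict(int)
    for shares in by_content.values():
        for (acct1, t1), (acct2, t2) in combinations(shares, 2):
            if acct1 != acct2 and abs(t1 - t2) <= window:
                pair_hits[tuple(sorted((acct1, acct2)))] += 1

    return {pair: hits for pair, hits in pair_hits.items() if hits >= min_hits}

print(coordinated_pairs(posts))  # {('acct_a', 'acct_b'): 2}
```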


Nicole Ferraro is a freelance writer, editor, and storyteller based in New York City. She has worked across B2B and consumer tech media for over a decade, formerly as editor-in-chief of Internet Evolution and UBM’s Future Cities, and as editorial director at The Webby Awards.
