Think of an internet meme and you’ll probably smile. The most memorable viral images are usually funny, from Distracted Boyfriend to classics like Grumpy Cat. But some memes have a much more sinister meaning. They might look as innocuous as a frog, but are in fact symbols of hate. And as memes have become more political, these hateful examples have increasingly found their way onto mainstream social media platforms.
My colleagues and I recently carried out the largest scientific study of memes to date, using a dataset of 160m images from various social networks. We showed how “fringe” web communities associated with the alt-right movement, such as 4chan’s “Politically Incorrect” board (/pol/) and Reddit’s “The_Donald” are generating a wide variety of racist, hateful, and politically charged memes – and, crucially, spreading them to other parts of the internet.
We started by looking at images posted on Twitter, Reddit, 4chan, and Gab. The latter is a Twitter-like social network positioning itself as a “champion” of free speech, providing shelter to users banned from other platforms. You might have heard of it in the context of the recent Pittsburgh synagogue shooting.
We grouped visually similar images from this collection using a technique called perceptual hashing, which creates a compact, fingerprint-style identifier for each image based on its visual features, so that near-identical images produce similar fingerprints. Then we identified groups of images that belonged to the same meme and annotated them using metadata obtained from Know Your Meme, a comprehensive online encyclopedia of memes. This allowed us to analyse different social networks just by looking at the memes that appeared on them. What we found was very revealing (and, at times, disturbing).
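To give a flavour of the idea (a minimal sketch, not the exact pipeline used in the study; the file names and similarity threshold are assumptions), the snippet below computes a crude "average hash" fingerprint for an image and compares two images by how many fingerprint bits differ:

```python
import numpy as np
from PIL import Image  # pip install pillow numpy

def average_hash(path, hash_size=8):
    """Shrink the image, convert to greyscale, and record which pixels
    are brighter than the mean: a crude perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()   # 64-bit boolean fingerprint

def hamming_distance(a, b):
    """Number of differing bits; a small distance suggests near-duplicates."""
    return int(np.count_nonzero(a != b))

# Hypothetical files: treat the two images as the same meme if hashes are close.
h1 = average_hash("meme_variant_1.png")
h2 = average_hash("meme_variant_2.png")
print("same cluster" if hamming_distance(h1, h2) <= 10 else "different memes")
```

Because small edits to an image (cropping, recolouring, added text) change only a few bits of the fingerprint, grouping images whose fingerprints are close pulls the many variants of a meme into the same cluster.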
Fringe social networks like /pol/ and Gab share hateful and racist memes at an impressive rate, producing countless variants of antisemitic and pro-Nazi memes, such as the Happy Merchant caricature of a "greedy" Jewish man with a large nose, or variants that splice some version of Adolf Hitler into another image. Memes like Pepe the Frog (and its variants) are often used in conjunction with other memes to incite hate or influence public opinion on world events, such as Brexit or the advance of Islamic State.
Fringe web communities also have the power to twist the meaning of specific memes, change their target context and make them go viral on mainstream communities. A perfect example is the NPC Wojak meme, which refers to non-playable characters in video games: characters controlled by the computer rather than by a player. In September 2018, 4chan and Reddit users began creating fictional accounts that mocked liberals by calling them NPCs, implying people with no critical thinking, bound by unchangeable programming and manipulated by others.
Measuring influence
However, looking at web communities in isolation provides only a limited view of the meme ecosystem. Communities influence each other, and memes posted on one site are often reposted on another. To measure the interplay and influence between different web communities, we turned to statistical models called Hawkes processes, which capture how one event (such as a meme being posted) raises the likelihood of subsequent events, letting us say with confidence whether a particular event was caused by a previous one.
This let us determine, for example, whether someone posting a meme on 4chan resulted in the same meme being posted on Twitter. In this way we were able to model how the more niche platforms were influencing the mainstream ones and the wider web.
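For intuition about how this kind of model behaves (a toy simulation, not the fitting procedure we applied to real data; the community labels and parameter values are purely illustrative assumptions), the sketch below simulates a two-community Hawkes process in which each post on a hypothetical fringe board temporarily raises the posting rate on a hypothetical mainstream platform:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a multivariate Hawkes process with Ogata's thinning algorithm.
    mu[i]       baseline posting rate of community i
    alpha[i][j] extra rate that a post in community i adds to community j
    beta        how quickly that extra influence decays
    Returns a list of (time, community) events."""
    rng = np.random.default_rng(seed)
    mu, alpha = np.asarray(mu, float), np.asarray(alpha, float)
    K = len(mu)
    events, t = [], 0.0

    def intensity(now):
        lam = mu.copy()
        for s, i in events:
            lam += alpha[i] * np.exp(-beta * (now - s))
        return lam

    while t < horizon:
        lam_bar = intensity(t).sum()             # intensity only decays until the next event
        t += rng.exponential(1.0 / lam_bar)      # propose the next event time
        lam = intensity(t)
        if rng.uniform() * lam_bar < lam.sum():  # accept with probability lam / lam_bar
            k = rng.choice(K, p=lam / lam.sum()) # which community the event lands in
            events.append((t, k))
    return [e for e in events if e[0] <= horizon]

# Hypothetical parameters: community 0 (a fringe board) strongly excites
# community 1 (a mainstream platform), with almost no influence back.
events = simulate_hawkes(mu=[0.2, 0.5],
                         alpha=[[0.1, 0.6], [0.0, 0.1]],
                         beta=1.0, horizon=100.0)
print(sum(1 for _, k in events if k == 1), "posts landed on the mainstream platform")
```

In the study the direction of reasoning was reversed: rather than simulating from chosen parameters, the influence parameters were estimated from the observed timestamps of meme posts, which is what allowed us to attribute posts on one platform to earlier posts on another.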
We found that /pol/ was by far the most influential disseminator of memes in terms of the raw number of images originating there. In particular, it was more influential in spreading racist and political memes. However, The_Donald subreddit was actually the most "efficient" at spreading these memes onto other fringe social networks, as well as mainstream ones such as Twitter.
Looking ahead
Negative or hateful memes generated by fringe communities have become a tool of political and ideological propaganda. Shedding light on their origins, spread and influence provides us with a better understanding of the dangers they pose. As such, we hope that making our data and methods publicly available will allow more researchers to monitor how weaponised memes might influence elections and broader political debate.
For example, we worked with Facebook to support the social network's efforts to mitigate manipulation campaigns during the 2018 US midterm elections, providing it with real-time examples of politically motivated memes that originated from fringe communities. This allowed the company to gain a better understanding of dangerous memes and to monitor their spread through the platform in politically relevant contexts. Overall, this line of work can help mainstream social networks identify hateful content, for example by improving automatic detection of hateful variants of popular memes, and hopefully remove it.
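As a rough sketch of what such automatic detection could look like (this is an illustration of the general idea, not the method used by Facebook or in our study; the imagehash library, file names and threshold are assumptions), an uploaded image can be compared against perceptual fingerprints of known hateful meme variants:

```python
import imagehash         # pip install imagehash pillow
from PIL import Image

# Hypothetical index of fingerprints for known hateful meme variants.
KNOWN_HATEFUL = {
    "happy_merchant_v1": imagehash.phash(Image.open("happy_merchant_v1.png")),
    "hateful_pepe_v3": imagehash.phash(Image.open("hateful_pepe_v3.png")),
}

def flag_upload(path, threshold=10):
    """Flag an uploaded image if its perceptual hash is within `threshold`
    bits of any known hateful variant (smaller distance = more similar)."""
    h = imagehash.phash(Image.open(path))
    distances = {name: h - known for name, known in KNOWN_HATEFUL.items()}
    return {name: d for name, d in distances.items() if d <= threshold}

print(flag_upload("new_upload.png"))  # e.g. {'hateful_pepe_v3': 4} if similar
```

A lookup like this only catches variants of memes that have already been identified, which is why tracing where new variants originate and how they spread matters for keeping such an index up to date.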
– Emiliano De Cristofaro, Associate professor, UCL
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image by Marvin Meyer from Unsplash