Article

Hashtagged Trolling and Emojified Hate against Muslims on Social Media

School of Communication, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
Religions 2022, 13(6), 521; https://doi.org/10.3390/rel13060521
Submission received: 10 April 2022 / Revised: 29 May 2022 / Accepted: 31 May 2022 / Published: 7 June 2022

Abstract

This empirical exploratory study examines a number of insulting hashtags used against Islam and Christianity on Twitter and Instagram. Using a mixed method, the study finds that Islam is attacked far more aggressively, by three major online communities, whereas Christianity is targeted much less, by two main online groups. The online discussion around the two religions is politically polarized, and the negative language used against Islam in particular includes the strategic use of hashtags and emojis, which have been weaponized to communicate violent messages and threats. The study is situated within the discussion of trolling and hateful content on social media. Aside from the empirical examination, the study points to differences between Twitter’s and Instagram’s policies: the latter does not allow hashtags such as #f***Christians and #f***Muslims, unlike Twitter, which permits all of these hashtags.

1. Introduction

This study attempts to empirically examine and compare the extent of online trolling against Islam and Christianity using a unique dataset collected from Instagram and Twitter. Comparative religious studies focusing on the negative aspects of social media, such as this research, are rare; hence, this study fills a gap in the literature, especially in relation to the nature of the online political communities that engage with such content. Among the problems that have emerged on social media is computer-mediated violence, which offers violent actors the opportunity to disseminate hate and publicize their acts against religious groups. Propagators of online hate have used new tools to send and share hateful messages, recruit new members, reach new audiences, and even incite violence offline.1 In this study, I follow Bleich’s (2012) conceptualization of Islamophobia, which is broadly defined as “indiscriminate negative attitudes or emotions directed at Islam or Muslims” (p. 182). Bleich is careful to point out that criticism of Islam is not and should not be considered Islamophobic, but that “terms like indiscriminate—or cognates like undifferentiated or un-nuanced—cover instances where negative assessments are applied to all or most Muslims or aspects of Islam” (p. 182). I introduce the term computer-mediated trolling, defined as targeted attacks that are facilitated with the assistance of networked computers. In other words, I study hashtagged attacks against Islam as well as Christianity to provide an important comparative perspective, and I also focus on the hateful emojified discourses targeting Muslim men and women.
The purpose of this study is not to portray Islam and Christianity as victims of trolling, for these religions have themselves historically been implicated in persecuting other religious groups; instead, the goal is to describe the nature of trolling against these religions on social media, without studying potential counterattacks, while focusing on the way Muslims are periodically targeted with hateful messages.

2. Online Bullying and Hate on Social Media

According to Cleland (2014), social media sites have paved the way for racist opinions and rhetoric to flourish online. Similarly, Brown (2009) argues that social networking sites make it easier to spread hate by replacing outdated forms of technology and creating a new social setting online. In addition, Ben-David and Matamoros-Fernández (2016) argue that with the emergence of social media, hate groups have added platforms, such as Facebook, to their communicative networks, despite the fact that Facebook users agree, in its terms of service, not to post content that is hateful or violent. According to Farkas et al. (2018), research collected over the span of the last 10 years indicates how fake identities have been disseminated through social media to promote racism. Online antagonism that takes place over social media has the potential to accelerate existing real-life racism through the dispersal of hateful discourse (Patton et al. 2017). Milner (2013) confirms that trolling practices, which often use humor, work to antagonize people from minority backgrounds, creating a “marginalized other” (p. 63). Similarly, as argued by Matamoros-Fernández (2017), hate takes on a new shape in the online environment, as documented among far-right extremists who are often active on Facebook and other social networking sites, such as YouTube, Twitter, and Instagram (Al-Rawi 2017, 2020, 2021).
In this section, I survey a few previous studies that examined online hate against religious, ethnic, and racial groups, and I situate the literature within the broader discussion of the harmful content of social media, such as issues related to trolling, drug use, revenge porn, cyberbullying, abuse, public health, and negative psychological impact (Al-Rawi 2019; Baccarella et al. 2018; Cao and Sun 2018; Salo et al. 2018; Scheinbaum 2017; Smaldone et al. 2020). This study focuses on one aspect of social media that is manifested in online trolling and hate.
In their empirical research, Vidgen and Yasseri (2020) created a classification system to better understand Islamophobic hate on social media, distinguishing between differing strengths of Islamophobia. Strong Islamophobia on social media is defined as “content which explicitly expresses negativity against Muslims” (p. 69), while weak Islamophobia is “content which implicitly implies negativity against Muslims” (ibid., p. 69). Using the power of computational analyses, an automatic software tool was created to distinguish between strong and weak Islamophobia on social media. First, to create the dataset, the research team compiled a list of 50,000 Twitter users who follow at least one of the six major political parties in the UK. Tweets from these accounts were sampled between January 2017 and June 2018, creating a dataset of 140 million tweets, from which a training dataset of 4000 tweets was drawn. A total of 1000 of the 4000 tweets within the training dataset were found using the search terms “Muslim” and “Islam”. Three blind human annotators then analyzed the tweets. Next, the researchers extracted key features deemed important, for example, the number of swear words, mentions of Muslim names, and mentions of mosques, which were then used to test for strong and weak forms of Islamophobia. Tweets that mentioned mosques were five times more likely to be categorized as strong Islamophobia. Similarly, MacAvaney et al. (2019) stress the importance of keyword-based approaches that track potentially hateful keywords to classify online hate. Hatebase, for example, is a resource both MacAvaney et al. (2019) and Vidgen and Yasseri (2020) cite as valuable for creating a classification system that can detect hate in combination with examining the sociopolitical context.
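As a rough sketch, the keyword-based feature extraction described above can be approximated in a few lines of Python; the keyword lists and the `extract_features` function here are illustrative assumptions, not Vidgen and Yasseri's actual feature set.

```python
import re
from collections import Counter

# Illustrative keyword lists; the study's real feature set is far richer.
SWEAR_WORDS = {"damn", "hell"}
MOSQUE_TERMS = {"mosque", "mosques", "masjid"}
MUSLIM_NAMES = {"muhammed", "mohammed", "ahmed"}

def extract_features(tweet: str) -> dict:
    """Count simple lexical features of the kind used to separate
    strong from weak Islamophobia (swear words, Muslim names, mosques)."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    counts = Counter(tokens)
    return {
        "n_swears": sum(counts[w] for w in SWEAR_WORDS),
        "n_names": sum(counts[w] for w in MUSLIM_NAMES),
        "n_mosques": sum(counts[w] for w in MOSQUE_TERMS),
        "n_tokens": len(tokens),
    }
```

Feature vectors of this kind would then feed a supervised classifier trained on the human-annotated tweets.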
Further, Ben-David and Matamoros-Fernández (2016) set out to investigate hate speech on the Facebook pages of right-wing political parties in Spain. Using textual analysis, the authors compared the types of words frequently used by the political parties. Categories or clusters were created, including Spain, immigration, independence movement, insults, Islam, Moroccan, Black people, Romanians, and South Americans. The top words that occurred frequently in the Facebook posts were stored and categorized. Additionally, for each political party, the authors selected the top 10 pictures and links with the highest engagement and likes. Of the nine categories that emerged from the analysis of these links and images, the top category was labeled anti-immigration, in which the images and links targeted immigrants as scapegoats for Spain’s problems (Ben-David and Matamoros-Fernández 2016). Using a textual method to analyze the posts, the researchers showed that the top words fell under the immigration category, and although the political parties did not overtly propagate hate speech on their channels, they repeatedly stigmatized immigrants by linking them with crime, trouble, and danger (Ben-David and Matamoros-Fernández 2016). Among the visual content, 18.71% of the images collected were linked to anti-immigrant content; similarly, 25% of the links shared by the extreme right-wing parties fell under the anti-immigration category. The research thus indicated, through textual analysis, that these parties perpetuated covert discrimination through the continuous association of immigrants with keywords such as danger and crime.
Similarly, Sorato et al. (2020) argue that by extracting fragments of text that are semantically similar, it is possible to depict recurrent linguistic patterns in certain kinds of discourse. The authors use a technique called SSP (Short Semantic Pattern) mining, which works to extract sequences of words that share a similar meaning in their word embedding representation. Here, Sorato et al. then used the extracted patterns and phrases to identify racist discourse presented in their dataset of collected tweets.
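The intuition behind matching word sequences by embedding similarity can be illustrated with toy vectors; the miniature three-dimensional embeddings and the `pattern_similarity` helper below are invented for illustration, whereas Sorato et al.'s SSP mining operates on learned embeddings over real corpora.

```python
import math

# Toy 3-dimensional word embeddings; real SSP mining uses learned vectors.
EMB = {
    "go":     (0.9, 0.1, 0.0),
    "back":   (0.8, 0.2, 0.1),
    "home":   (0.7, 0.3, 0.0),
    "return": (0.85, 0.15, 0.05),
    "cat":    (0.0, 0.1, 0.9),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def pattern_similarity(seq_a, seq_b):
    """Average pairwise cosine similarity between two equal-length word
    sequences, a crude stand-in for short-semantic-pattern matching."""
    return sum(cosine(EMB[a], EMB[b]) for a, b in zip(seq_a, seq_b)) / len(seq_a)
```

Under these toy vectors, "go back" and "return home" score as near-paraphrases, while "go cat" does not, which is the kind of signal used to group recurrent discourse patterns.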
On the other hand, investigating how the underlying algorithms of social networking sites influence human activity is key to understanding how hate spreads online. According to Suler’s (2004) research on online disinhibition, users often feel less restrained when operating online. This is also highlighted in Kilvington and Price’s (2019) examination of Kick It Out, a small UK-based soccer charity that monitors racist abuse online and offers rehabilitation training for offenders. After a series of interviews with soccer players, fans, and social media experts, Kilvington and Price (2019) noted the lack of acknowledgement of the severity of fans’ hateful remarks towards nonwhite players. As a solution, clear guidelines, policies, and resources are needed for clubs to follow and use regarding racist incidents on social media. Furthermore, researchers have investigated how specific platforms push hateful content, highlighting that the idea that sites are “neutral” is a misconception. As argued by Van Dijck and Poell (2013), all human actions on social networking sites are influenced by the platform’s underlying social media algorithms, a point also highlighted by Matamoros-Fernández’s above-mentioned study on hate and racism.
Similarly, Matamoros-Fernández (2017) analyzes online hate on social media in the context of Adam Goodes, a racialized Australian footballer who was met with an influx of online hate for calling out systemic racism. The project used an issue mapping approach to capture tweets, coding 2174 tweets containing images, 405 Facebook links, and 529 YouTube links (Matamoros-Fernández 2017). Furthermore, to examine how platforms perpetuate hateful content, the author created a fake Facebook profile and liked a page titled “Adam Goodes Flog of the Year” that was linked in the tweets, in order to analyze which content appeared based on the platform’s algorithms. The research examined how platformed racism unfolded in the case of Adam Goodes, where racist members, videos, and comments were protected by the platform itself, indicating algorithmic bias in the customized dissemination of racist information (Garcia 2016).
Finally, Farkas et al. (2018) analyzed 11 Danish Facebook pages that were disguised as Muslim extremists living in Denmark. By collecting posts made by these accounts, the researchers were able to highlight how social networking sites amplify stereotypical identities by connecting right-wing users to these fake pages. This, therefore, creates a hostile environment where Facebook users tap into a reservoir of extremism and anti-Muslim hate. In this sense, hate takes on a new form within online platforms, in which online environments form and solidify identities through posts, images, accounts, and sites that can be monitored and systematically studied.
There are, of course, numerous other studies that make similar arguments to the sources cited above (see, for example, Aguilera-Carnerero and Azeez 2016; Awan 2014; Miller 2017; Williams et al. 2020), which cannot all be listed here due to the paper’s word limit. Previous studies generally show that despite social media public policies and moderation algorithms, there is ample evidence of online bullying directed at religions and of hate against their followers. This study, however, discusses a unique case study of trolling against religions, and it offers new evidence highlighting how certain online communities manage to bypass the policies of some social media platforms to express very violent messages through the coded use of language. It also provides unique insight into the nature of the algorithms used on Twitter and Instagram in relation to trolling hashtags and hateful emojis.
In brief, social media research offers ample opportunities to empirically examine trolling, online bullying, and hate speech, and these platforms can also be used to monitor toxic language and identify perpetrators and online communities.
This study attempts to answer the following research questions:
RQ1.
What are the major communities that troll Islam and Christianity on Twitter and Instagram?
RQ2.
What is the nature of the hashtagged and emojified discourses about Christians and Muslims?

3. Methods

Using two Python scripts, I collected all the available 16,129 Instagram posts referencing #f***allah, #f***Islam, and #f***quran, as well as all the available 2089 tweets referencing the above hashtags plus #f***muslims. Both sets were posted between 2013 and 2020, when the search was conducted, and represent all the posts the Python scripts managed to retrieve. In total, I collected 18,218 social media posts and tweets referencing Islamophobic language, using English-language search terms that involve the “f” word and spanning over 7 years. I focused on Twitter and Instagram because both allow hashtagged discussions and I had the technical means to obtain the necessary data from these two platforms.
As regards Christianity-related hashtags, I collected 4012 tweets referencing #f***bible, #f***thebible, #f***christ, #f***christianity, #f***christians, and #f***jesus, posted between 2009 and 2020. Unlike the case of tweets referencing Islam, I used more keyword searches because there are many distinct ways of referencing Christianity, whereas names such as Muhammed are commonly used by many Muslim men. As in the case of Islam, the hashtag #f***Christians does not exist on Instagram because it is blocked; the total number of Instagram posts collected was 8573, posted between 2012 and 2020.
To analyze these social media posts, I used other Python scripts to extract the most used words, hashtags, emojis, sequence of emojis, and mentions. Finally, I used a combination of quantitative and qualitative measures to explain the collected social media data. First, the quantitative measures included the extraction of the above data (e.g., most used hashtags and emojis), while the qualitative aspects consisted of conducting a qualitative content analysis using a summative approach that focuses on the latent meaning of a text. The latter method “starts with identifying and quantifying certain words or content in text with the purpose of understanding the contextual use of the words or content” (Hsieh and Shannon 2005, p. 1283).
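A minimal sketch of this extraction step, assuming simple regex-based tokenization, is given below; the `top_items` helper is illustrative and not the study's actual scripts.

```python
import re
import unicodedata
from collections import Counter

def top_items(posts, n=3):
    """Tally the most frequent words, hashtags, mentions, and emojis
    across a list of posts (a simplified version of the extraction step)."""
    words, tags, mentions, emojis = Counter(), Counter(), Counter(), Counter()
    for post in posts:
        tags.update(t.lower() for t in re.findall(r"#\w+", post))
        mentions.update(m.lower() for m in re.findall(r"@\w+", post))
        # Crude emoji test: Unicode category "Symbol, other".
        emojis.update(c for c in post if unicodedata.category(c) == "So")
        cleaned = re.sub(r"[#@]\w+", " ", post)  # drop tags/mentions from word counts
        words.update(re.findall(r"[a-z']+", cleaned.lower()))
    return {k: c.most_common(n) for k, c in
            [("words", words), ("hashtags", tags),
             ("mentions", mentions), ("emojis", emojis)]}
```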
To answer the first research question, I used qualitative measures including the identification of the main online communities and proper contextualization by relying on the data extracted from the most mentioned users and their discussions. To answer the second research question, I followed the same approach to provide a critical qualitative interpretation of hashtagged and emojified discourses based on the samples found in Tables 2–5. The online communities were identified based on the qualitative examination of the most mentioned users who tag each other and their shared and distinctive use of words, hashtags, emojis and sequence of emojis, and bigrams (phrases made up of two words). For a complete list of emojis found on social media, please see the official Unicode website (Unicode 2022).
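Bigram counting, which surfaces recurrent phrases of the kind discussed above, can be sketched the same way; again, `top_bigrams` is an illustrative stand-in for the scripts used in the study.

```python
import re
from collections import Counter

def top_bigrams(posts, n=2):
    """Count adjacent word pairs (bigrams) across a list of posts."""
    bigrams = Counter()
    for post in posts:
        tokens = [t.lstrip("#@") for t in re.findall(r"[#@\w']+", post.lower())]
        bigrams.update(zip(tokens, tokens[1:]))  # adjacent pairs only
    return bigrams.most_common(n)
```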
Finally, I followed a basic reverse engineering approach (Butcher 2016, p. 88) in late 2020 in an attempt to understand hashtag policies followed by Instagram and Twitter at that time in relation to attacks against Islam and Christianity and their adherents. In this respect, I searched all of the above hashtags on Twitter and Instagram and experimented with a variety of similar other hashtags and sequence of emojis to see whether they are present and widely used or blocked on the platforms. This is because social media algorithms are considered black boxes whose details are proprietary knowledge that is not disclosed to the general public (Christin 2020), and this is the only means to extract more information to understand the operational infrastructure or algorithms (Eilam 2011). In other words, reverse engineering is used to “obtain missing knowledge, ideas, and design philosophy when such information is unavailable” (Eilam 2011, p. 1).

4. Results and Discussion

The findings of the study show that many of the most mentioned Twitter users in relation to Islam are well-known Muslim politicians, organizations, or activists who are trolled due to their Muslim or liberal backgrounds (Table 1). Some of the well-known political figures include the US Democratic congresswomen Ilhan Omar (two targeted accounts), Rashida Tlaib, and Alexandria Ocasio-Cortez, as well as the Twitter accounts of the Swedish prime minister, Stefan Löfven, and the Swedish Social Democrats. The targeted US figures represent Democratic voices in the United States who often defend ethnic and religious minorities from attacks by some Republican figures and the far right. Liberal and progressive voices such as the ones cited above belong to what is known as the Squad (Borah et al. 2022), and they are often trolled in mainstream media, such as Fox News, and on social media with the use of memes (Pintak et al. 2021; Al-Rawi 2021; Al-Rawi et al. 2021). Together with Löfven, these figures are the main trolling targets, often receiving the worst type of hateful messages, in addition to the Twitter account curated by the US Campaign for Palestinian Rights; the only exception is the set of references to a Hindutva anti-Muslim activist, for the reason mentioned below. Similarly, the most mentioned users on Instagram are mainly far-right supporters and are often referenced to consolidate the online influence and outreach of this trolling community, as in the case of the Hindu activist.
If we examine the top 50 most mentioned users, however, we find that the majority are ultranationalist Hindu activists who repeatedly post messages such as the following: “🚩🦁⚔️🙏🏹जय हिंदुत्व 🏹🙏⚔️🦁🚩 #CAA #CAB #ISupportCAA #ISupportCAB #ISupportNRC #NarendraModi #AmitShah #YogiAdityanath #India #IndianArmy #Hindu #Hinduism #ChhatrapatiShivaji #ChhatrapatiShivajiMaharaj #MaharanaPratap #PrithvirajChauhan #BajarangDal #VishvaHinduParishad #RSS #RashtriyaSwayamsevakSangh #BJP #BhartiyaJanataParty #Rajputana #Rajput #PayalRohtagi #TigerRajaSingh #HinduRashtra #F***Islam #IslamIsShit #IslamIsJihad”. Similar to far-right groups in the West, ultranationalist Hindu communities, or Hindutva (in reference to Hindu nationalism), from India and elsewhere often tag supportive users in their Instagram posts to create a strong online community and share similar messages. It is important to note here that online support for the Hindutva ideology can be found predominantly in India but also in Western countries where pro-Modi Indian diasporic communities live, such as the USA (de Souza and Hussain 2021). Interestingly, this kind of support is also manifested in the seeming alliance between Modi’s and Donald Trump’s supporters, united by their hatred of Islam and negative attitude toward China (Singh 2021). The results of this study thus align with previous research identifying the way Hindu ultranationalist communities attack Muslims online (Gittinger 2018; Rajan and Venkatraman 2021; Amarasingam et al. 2022). However, this paper offers a unique insight into the way hashtags and emojis are used to troll Muslims.
There is obviously a trolling campaign found on the two social media platforms, which can be defined in the context of this study as coordinated and systematic online attacks against a minority religious group whose aim is to discredit its cause and/or demean it. This is evident from the most used hashtags, for they show clear divisive terms that are highly offensive and abusive towards Islam (Table 2). Twitter, for example, contains many hashtags that call for deporting Muslims from Western countries, such as #VoteThemOut, #SendThemAllBack, and #DeportAllMoslums. There are also a few hashtags that are used ironically, suggesting the opposite meaning, such as #ReligionOfPeace and #Peacefortheworld. The examination of the bigrams shows that some of the top phrases include “religionofpeace islamistheproblem”, indicating the ironic use of these terms.
Similar to the observation made above, we can see that there are a few references to non-US terms that attack Western liberals, such as #svpol (Swedish politics), or that show solidarity with anti-Muslim Hindu activists, such as #standwithmodi on Twitter, while there are more atheist hashtags on Instagram. Many conservative and far-right hashtags are used on Instagram, such as #americanasf, #Merica, #covefe, #deblorable, #pepe, and #libtards, which mostly mock liberals. These Instagram posts are often accompanied by pleas to protect freedom of speech, which is clear in the use of other hashtags, such as #freedom and #liberty. This aligns with previous research on the far right and their online strategies to attract attention and gain sympathy for their causes (Tumber and Waisbord 2021; Gounari 2021; Kamali 2022). What is disturbing, however, is the use of violent expressions in association with Muslims that seem to encourage physical violence, such as the hashtag #pewpew, a popular one on Instagram in reference to gunshots, while other associated hashtags that promote militancy include #war, #guns, #army, and #rangers.
These latter problematic hashtags are often accompanied by other coded and more nuanced nonverbal messages represented in emojis. For example, Table 3 shows the most frequent emojis used on Twitter and Instagram, and we can clearly see differences between the two social media platforms. While the middle finger insult against Islam is dominant on Twitter (ranked number 1), it is not the same on Instagram (ranked number 20). In terms of Twitter emoji sequences, we find that the middle finger is also used in association with the mosque, the Kaaba in Mecca, and death threats with the use of the crossed swords (⚔) and human skull (💀), which are symbols of war. Emojis express far more than mere sentiments as there are clear messages associating Islam with satanic practices (😈☪) in different frequencies as well as terrorism against white people (💥🙋🏼). We can also see ultranationalistic messages by linking these insults to the flags of countries such as the US, UK, France, Australia, Israel, and Poland represented by letter symbols and highlighting the alleged threat/emergency of Islamic expansion in these countries with the following emoji sequence (🇬🇧🇺🇸🇦🇺🇵🇱🇮🇱🚨🚨) and other similar ones. Some of the other emojis attempt to offend and mark the difference between Muslim and Christian religions by repeatedly using the pig and bacon emojis (🐷🥓), while other sequences show clearer messages, such as 🇺🇸🗽📃🔫🗡⛪🐘🐖💀💪, which can be interpreted as follows: “We have to fight (🔫) in the USA (🇺🇸) for our liberty and freedom of speech (🗽) that is enshrined in the first amendment (📃) in order to protect (🗡) our homeland (⛪) and the Republican (🐘) values as well as Christian way of life (🐖). We will use force (💪) until we die or kill our enemy (💀)”. Finally, other celebratory and positive emojis on Twitter are meant to mock and welcome insults against Islam and Muslims, such as clapping, OK, and funny faces (👏, 👍, 😂, 🤣). 
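The emoji sequences analyzed above can be pulled out by scanning posts for runs of consecutive emoji characters. The sketch below uses Unicode's "Symbol, other" category as a crude emoji test and ignores skin-tone modifiers and ZWJ sequences; it is an illustrative simplification, not the study's actual extraction script.

```python
import unicodedata
from collections import Counter

def emoji_sequences(posts, min_len=2):
    """Count runs of two or more consecutive emoji characters, the unit
    analyzed as an 'emoji sequence' (e.g., a flag run followed by 🚨🚨)."""
    seqs = Counter()
    for post in posts:
        run = []
        for ch in post + " ":  # trailing sentinel flushes the final run
            if unicodedata.category(ch) == "So":
                run.append(ch)
            else:
                if len(run) >= min_len:
                    seqs["".join(run)] += 1
                run = []
    return seqs
```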
Some direct examples that are still found on Twitter include both written and emojified hate messages, such as “People need to stop reading that silly book now it’s made up ! #fuckallah #nosurrender 👳🔫” or “@THERACISTDOCTOR the sandniggers at the bottom aren’t swedes! shame, such a beautiful country in ruins. #fuckislam👳🔫”.
As regards Instagram emojis, there are many other sequences that could not be listed in Table 3 due to its limited size, so they are presented here. First, the poop symbol is more prominently used in emoji sequences, and there are far more aggressive and militant ones than on Twitter. For example, the gun emoji (🔫) was used 114 times on Instagram, alongside other violent emojis, such as explosion (💥) (n = 42) and skull (☠) (n = 19), in reference to threats against Muslims. In addition, there are more prominent country flags expressing ultranationalistic sentiments, including the US, India, the Netherlands, the UK, Germany, and Israel, in different sequences. There are also some frequent far-right emojis used by white supremacists, such as the OK sign (👌) (n = 85) and Pepe the Frog (🐸) (n = 17), as well as the Hindu Om symbol (🕉) (n = 45). We also find the pig and bacon emojis to be very prominent, similar to Twitter, and there are clear threats against Muslims (🔪⚰ or ☠☠ or 👳🏼🔫🇹🇷🚞💣💨📖🔥#koran), including Muslim men of different colors (👳🏾🔫; 👳🔫) and veiled Muslim women (🖕🏼🧕🏽💩).
As regards the findings on Christianity, the top 30 most recurrent words on Twitter include the word “atheism”, which is very frequent (n = 184) in addition to f***religion (n = 154), f***god (n = 137), f***trump (n = 128), and f***republicans (n = 107). This community seems to associate Christianity with Republican figures, such as Trump due to his conservative views and public affiliation with the Evangelical Church (Fea 2018; Martí 2019). On Instagram, the top 30 words are related to attacking Christianity and other general atheist terms, such as f***religion (n = 1196), atheist (n = 1175), and ISIS (n = 1050). Upon examining the bigrams on Twitter, we once again find that there is emphasis on attacking conservative republicans, such as “f***christians f***republicans” (n = 96) and “f***republicans f***trump” (n = 86). On Instagram, however, the focus in the top bigrams is on atheists attacking religious people, such as “f***religiouspeoples f***bible” (n = 978).
In order to answer the first research question, I identified three main groups targeting Islam based on the methodological procedures and findings presented above. There is clear coordinated activity among the most mentioned users, in the sense that Islam is attacked while like-minded people are tagged using @username to notify them and encourage further collaboration. These online communities include the following: (1) a far-right and antiliberal community that always associates Islam and Muslim immigrants with terrorism, (2) atheists who attack not only Islam but all world religions, and (3) an ultranationalist Hindu community.
To answer the second research question, I found that one of the dominant themes relates to stopping the alleged expansion of Shariah law and Islam in different countries, with Islam presented as a satanic cult (😒👹☪ or 👹☪) and attacked in different ways, such as ❌❌🕋❌☪, 👊☪, and 🔫💣💯🐷🐽. In terms of political statements, other emojis convey solidarity with Israel (✊✡) and the protection of freedom of speech against censorship in the USA (🤔🤐🤐🇺🇸🇺🇸🇺🇸). Regarding online trolling against Christianity, I identified two main online communities by following the procedures highlighted above: (1) atheists and (2) anti-Republican/conservative users. For example, the top 10 mentioned users in tweets include Pope Francis @pontifex (n = 9) and the US President Donald Trump @realdonaldtrump (n = 7), as well as a few other anti-Trump and self-proclaimed atheist users. On Instagram, however, most of the top 10 users are atheists. Further, Table 4 shows that there are many atheist-related terms on Twitter, such as #atheism, #atheist, and #nogod, as well as general attacks against Islam and Judaism. Similar to the findings presented above, the other prominent community is the anti-Republican/conservative one, which is evident from the use of hashtags such as #f***trump, #impeachandimprison, #impeachtrumpnow, and #dumptrump, due to Trump’s public alignment with Christian groups, as stated above. On Instagram, however, the top hashtags are exclusively focused on the atheist community. Incidentally, this anticonservative community is largely missing in the examined datasets on Islam, especially on Twitter.
Unlike the social media posts referencing Islam, I found that the #pewpew hashtag is completely missing in the two datasets referencing Christianity. Additionally, militant emojis such as 💥 (n = 17), 🔫 (n = 15), and 💣 (n = 4) are rarely used across the two datasets. Table 5, for example, shows only one emoji sequence (😈⛪🔥🔫) that contains a violent message, unlike the numerous aggressive sequences of emojis found in the datasets referencing Islam. In brief, the results show that there are two main online communities that troll Christianity. The first and largest is an atheist online group that trolls all religions, mostly targeting the Twitter account of Pope Francis. This finding closely corresponds with previous research on the increasingly important role of atheists in creating online spaces to gather and sometimes troll other religions (Al-Rawi 2017; Addington 2017; Graczyk 2020). The second is an anti-Trump community that attacks conservative Republicans for their policies and close association with Christianity, often linking them with racism and conflict.
Aside from the discussion presented above, I followed a reverse engineering approach (Butcher 2016) to understand the policies followed by Twitter and Instagram regarding the use of some of the above hashtags. In this respect, Instagram does not allow hashtags such as #f***Christians and #f***Muslims, yet it allows similar hashtags against Islam and Christianity, such as #f***jesus, #f***christ, #f***Allah, and #f***Islam. Twitter, on the other hand, allows all of these hashtags. When I compared similar insults against other religions, such as Judaism and Hinduism, I found the same patterns across the Twitter and Instagram platforms, which is possibly due to the legal implications behind such policies. In this respect, many EU countries do not allow attacks against religious groups, but their laws permit criticism of religions to protect freedom of speech (European Commission 2020). The problem with such laws, however, is the legal challenge of distinguishing between attacks against individuals and attacks against their faith. For example, Bleich stresses the “multidimensional nature of Islamophobia, and the fact that Islam and Muslims are often inextricably intertwined in individual and public perceptions” (Bleich 2012, p. 182). In other words, it is not practically possible for social media platforms to distinguish between attacks on religions and attacks on their adherents by simply allowing or blocking certain hashtags; more advanced moderation tools are needed.

5. Conclusions

This study offers original insight into identifying and critically analyzing computer-mediated trolling against Christianity and Islam as well as hashtagged and emojified hate against Muslims. The findings show that the language used against Islam and Christianity is politically driven, but Islam receives far more negative content. The atheist online community is active in attacking both Islam (Al-Rawi 2017) and Christianity; however, far-right and ultranationalist Hindu groups exclusively troll Islam and Muslims using very violent expressions. On the other hand, the anticonservative online community actively targets Christianity and trolls Trump as well as other US Republicans for their politics and religious affiliations. The implications of the study suggest that the two world religions examined here do not receive equal treatment, for they are constructed differently, which could be linked to geopolitics, stereotypes, conflicts, and other historical factors that are all tied to geographical contexts. Despite ongoing discussions of improved community guidelines, advanced moderation techniques, and online safety measures enacted to protect minorities and vulnerable groups, this study shows one aspect of problematic content, especially content targeting Muslims, that is still thriving online. If social media platforms are serious about tackling bad actors, they need to invest much more to at least limit the amount of online hate.
In general, both Twitter and Instagram contain ample toxic content, though only the former allows posting hashtags that directly attack Christians and Muslims as people. Additionally, both platforms provide ample avenues for white supremacists and other hate groups to express their views using highly aggressive and militant language that encourages violence, especially against Islam. Instead of expressing direct textual threats that can be identified by other users, Islamophobic groups exploit the affordances of social media platforms by employing coded language communicated via emojis and onomatopoeic hashtags, such as #pewpew. This is a new online phenomenon that I call the weaponization of emojis. There is no doubt that freedom of speech must be largely protected, but when communication, even if packaged as funny memes or emojis, incites violence against religious and ethnic groups, then this kind of speech must at least be moderated.
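To illustrate that such coded content is at least partially machine-detectable, the sketch below flags posts that pair religious symbols with militant emojis or the onomatopoeic #pewpew hashtag. The symbol sets, function name, and rule are my own illustrative assumptions, not a proposed moderation system:

```python
# Minimal rule-based sketch of flagging "weaponized" emoji use. The symbol
# sets and the co-occurrence rule are illustrative assumptions only; real
# moderation would need far more context-sensitive tooling.
MILITANT = {"🔫", "💣", "💥", "🗡", "⚔"}
RELIGIOUS = {"🕌", "🕋", "⛪", "☪", "✡"}

def is_weaponized(post: str) -> bool:
    """Flag posts combining a religious symbol with a militant emoji or #pewpew."""
    text = post.lower()
    has_militant = any(e in text for e in MILITANT) or "#pewpew" in text
    has_religious = any(e in text for e in RELIGIOUS)
    return has_militant and has_religious

print(is_weaponized("😈⛪🔥🔫"))      # True
print(is_weaponized("Nice day 😂"))  # False
```

A rule this crude would obviously over- and under-flag in practice; the point is only that emoji co-occurrence, like hashtag text, is a signal platforms could in principle moderate on.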
Finally, this study is limited to English-language search terms targeting Christianity and Islam on Twitter and Instagram, and future studies need to include other languages, which can provide more insight into possible cross-cultural and national comparisons of attacks against religions and into cultural differences in the use of emojified hate. Future empirical research is also needed on the nature of trolling against other world religions, such as Judaism, and their followers on other social media outlets, such as Telegram, TikTok, and YouTube. Another avenue of hate expression involves mobile apps, such as WhatsApp, which remains very popular in India and elsewhere.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Datasets can be shared with any interested scholars upon request.

Conflicts of Interest

The author declares no conflict of interest.

Note

1. I would like to thank Ms. Jasleen Bains from Simon Fraser University for her kind assistance in collecting parts of the general literature review on racism on social media.

References

1. Addington, Aislinn. 2017. Building bridges in the shadows of steeples: Atheist community and identity online. In Organized Secularism in the United States. Berlin and Boston: De Gruyter.
2. Aguilera-Carnerero, Carmen, and Abdul Halik Azeez. 2016. ‘Islamonausea, not Islamophobia’: The many faces of cyber hate speech. Journal of Arab & Muslim Media Research 9: 21–40.
3. Al-Rawi, Ahmed. 2017. Islam on YouTube: Online Debates, Protests, and Extremism. London: Springer.
4. Al-Rawi, Ahmed. 2019. The fentanyl crisis & the dark side of social media. Telematics and Informatics 45: 101280.
5. Al-Rawi, Ahmed. 2020. Kekistanis and the meme war on social media. The Journal of Intelligence, Conflict, and Warfare 3: 13.
6. Al-Rawi, Ahmed. 2021. Political Memes and Fake News Discourses on Instagram. Media and Communication 9: 276–90.
7. Al-Rawi, Ahmed, Alaa Al-Musalli, and Pamela Aimee Rigor. 2021. Networked Flak in CNN and Fox News Memes on Instagram. Digital Journalism 2021: 1–18.
8. Amarasingam, Amarnath, Sanober Umar, and Shweta Desai. 2022. “Fight, Die, and If Required Kill”: Hindu Nationalism, Misinformation, and Islamophobia in India. Religions 13: 380.
9. Awan, Imran. 2014. Islamophobia and Twitter: A typology of online hate against Muslims on social media. Policy & Internet 6: 133–50.
10. Baccarella, Christian V., Timm F. Wagner, Jan H. Kietzmann, and Ian P. McCarthy. 2018. Social media? It’s serious! Understanding the dark side of social media. European Management Journal 36: 431–38.
11. Ben-David, Anat, and Ariadna Matamoros-Fernández. 2016. Hate speech and covert discrimination on social media: Monitoring the Facebook pages of extreme-right political parties in Spain. International Journal of Communication 10: 1167–93.
12. Bleich, Erik. 2012. Defining and researching Islamophobia. Review of Middle East Studies 46: 180–89.
13. Borah, Porismita, Kate Keib, Bryan Trude, Matthew Binford, Bimbisar Irom, and Itai Himelboim. 2022. “You are a disgrace and traitor to our country”: Incivility against “The Squad” on Twitter. Internet Research.
14. Brown, C. 2009. WWW.HATE.COM: White supremacist discourse on the internet and the construction of whiteness ideology. The Howard Journal of Communications 20: 189–208.
15. Bucher, Taina. 2016. Neither Black Nor Box: Ways of Knowing Algorithms. In Innovative Methods in Media and Communication Research. Edited by Sebastian Kubitschko and Anne Kaun. Cham: Palgrave Macmillan, pp. 81–98.
16. Cao, Xiaofei, and Jianshan Sun. 2018. Exploring the effect of overload on the discontinuous intention of social media users: An SOR perspective. Computers in Human Behavior 81: 10–18.
17. Christin, Angèle. 2020. The ethnographer and the algorithm: Beyond the black box. Theory and Society 49: 897–918.
18. Cleland, J. 2014. Racism, football fans, and online message boards: How social media has added a new dimension to racist discourse in English football. Journal of Sport and Social Issues 38: 415–31.
19. de Souza, Rebecca, and Syed Ali Hussain. 2021. “Howdy Modi!”: Mediatization, Hindutva, and long distance ethnonationalism. Journal of International and Intercultural Communication, 1–24.
20. Eilam, Eldad. 2011. Reversing: Secrets of Reverse Engineering. New York: John Wiley & Sons.
21. European Commission. 2020. The Code of Conduct on Countering Illegal Hate Speech Online. June 22. Available online: https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_1135 (accessed on 10 January 2022).
22. Farkas, Johan, Jannick Schou, and Christina Neumayer. 2018. Platformed antagonism: Racist discourses on fake Muslim Facebook pages. Critical Discourse Studies 15: 463–80.
23. Fea, John. 2018. Believe Me: The Evangelical Road to Donald Trump. Grand Rapids: Wm. B. Eerdmans Publishing.
24. Garcia, Megan. 2016. Racist in the machine: The disturbing implications of algorithmic bias. World Policy Journal 33: 111–17.
25. Gittinger, Juli L. 2018. Hinduism and Hindu Nationalism Online. London: Routledge.
26. Gounari, Panayota. 2021. From Twitter to Capitol Hill: Far-right Authoritarian Populist Discourses, Social Media and Critical Pedagogy. Leiden: Brill.
27. Graczyk, Agnieszka. 2020. Atheism and the changing image of Islam in Iraq. Review of Nationalities 10: 169–80.
28. Hsieh, Hsiu-Fang, and Sarah E. Shannon. 2005. Three approaches to qualitative content analysis. Qualitative Health Research 15: 1277–88.
29. Kamali, Sara. 2022. Homegrown Hate: Why White Nationalists and Militant Islamists Are Waging War Against the United States. California: University of California Press.
30. Kilvington, Daniel, and John Price. 2019. Tackling Social Media Abuse? Critically Assessing English Football’s Response to Online Racism. Communication & Sport 7: 64–79.
31. MacAvaney, Sean, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. 2019. Hate speech detection: Challenges and solutions. PLoS ONE 14: e0221152.
32. Martí, Gerardo. 2019. The Unexpected Orthodoxy of Donald J. Trump: White Evangelical Support for the 45th President of the United States. Sociology of Religion 80: 1–8.
33. Matamoros-Fernández, Ariadna. 2017. Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society 20: 930–46.
34. Miller, Charles. 2017. Australia’s anti-Islam right in their own words: Text as data analysis of social media content. Australian Journal of Political Science 52: 383–401.
35. Milner, Ryan M. 2013. Pop polyvocality: Internet memes, public participation, and the Occupy Wall Street movement. International Journal of Communication 7: 34.
36. Patton, Desmond Upton, Douglas-Wade Brunton, Andrea Dixon, Reuben Jonathan Miller, Patrick Leonard, and Rose Hackman. 2017. Stop and frisk online: Theorizing everyday racism in digital policing in the use of social media for identification of criminal conduct and associations. Social Media + Society 3: 2056305117733344.
37. Pintak, Lawrence, Brian J. Bowe, and Jonathan Albright. 2021. Influencers, Amplifiers, and Icons: A Systematic Approach to Understanding the Roles of Islamophobic Actors on Twitter. Journalism & Mass Communication Quarterly.
38. Rajan, Benson, and Shreya Venkatraman. 2021. Insta-hate: An exploration of Islamophobia and right-wing nationalism on Instagram amidst the COVID-19 pandemic in India. Journal of Arab & Muslim Media Research 14: 71–91.
39. Salo, Jari, Matti Mäntymäki, and AKM Najmul Islam. 2018. The dark side of social media – and Fifty Shades of Grey introduction to the special issue: The dark side of social media. Internet Research 28: 1166–68.
40. Scheinbaum, A. C., ed. 2017. The Dark Side of Social Media: A Consumer Psychology Perspective. London: Routledge.
41. Singh, Raj Kumar. 2021. Hindutva and Donald Trump: An Unholy Relation. In The Anthropology of Donald Trump. London: Routledge, pp. 203–18.
42. Smaldone, Francesco, Adelaide Ippolito, and Margherita Ruberto. 2020. The shadows know me: Exploring the dark side of social media in the healthcare field. European Management Journal 38: 19–32.
43. Sorato, Danielly, Fábio B. Goularte, and Renato Fileto. 2020. Short Semantic Patterns: A Linguistic Pattern Mining Approach for Content Analysis Applied to Hate Speech. International Journal on Artificial Intelligence Tools 29: 2040002.
44. Suler, John. 2004. The online disinhibition effect. CyberPsychology & Behavior 7: 321–26.
45. Tumber, Howard, and Silvio Waisbord, eds. 2021. The Routledge Companion to Media Disinformation and Populism. London: Routledge.
46. Unicode. 2022. Full Emoji List, v14.0. Available online: https://unicode.org/emoji/charts/full-emoji-list.html (accessed on 10 January 2022).
47. Van Dijck, José, and Thomas Poell. 2013. Understanding social media logic. Media and Communication 1: 2–14.
48. Vidgen, Bertie, and Taha Yasseri. 2020. Detecting weak and strong Islamophobic hate speech on social media. Journal of Information Technology & Politics 17: 66–78.
49. Williams, Matthew L., Pete Burnap, Amir Javed, Han Liu, and Sefa Ozalp. 2020. Hate in the machine: Anti-Black and anti-Muslim social media posts as predictors of offline racially and religiously aggravated crime. The British Journal of Criminology 60: 93–117.
Table 1. The top 10 most mentioned users on Twitter and Instagram.

No. | Twitter | Count | Instagram | Count
1 | IlhanMN | 14 | h0t_hotdogs | 276
2 | RashidaTlaib | 6 | i.beat.my.crab.v.2 | 268
3 | socialdemokrat | 5 | conservative.canucks | 260
4 | SwedishPM | 4 | teenagerfortrump | 257
5 | AOC | 4 | politicsdaily17 | 257
6 | PoliticalIslam | 4 | conservative_real_americans | 257
7 | Ilhan | 4 | the.proud.republican | 254
8 | AchAnkurArya | 3 | montanarepublican | 254
9 | Muslimsexy | 3 | anti_liberal_memes | 254
10 | PalestineToday | 3 | americaisstillamerica | 254
Table 2. The most frequent hashtags associated with the social media posts on Twitter and Instagram.

No. | Twitter | Count | Instagram | Count
1 | f***islam | 1168 | f***islam | 11,836
2 | f***muslims | 245 | freedom | 4201
3 | f***allah | 167 | 2a | 3598
4 | religionofpeace | 104 | America | 3457
5 | islamistheproblem | 101 | Trump | 3416
6 | bansharialaw | 101 | USA | 3396
7 | banislaminamerica | 98 | conservative | 3291
8 | islamisacancer | 97 | liberty | 3005
9 | votethemout | 75 | meme | 2877
10 | islam | 51 | MAGA | 2776
11 | f***quran | 41 | capitalism | 2583
12 | sendthemallback | 39 | F***Islam | 2561
13 | f***sharia | 36 | infidel | 2430
14 | islamishate | 28 | republican | 2269
15 | banislam | 28 | war | 2161
16 | deportallmoslums | 23 | guns | 2100
17 | f***mohammed | 22 | Merica | 2063
18 | maga | 20 | trump | 2031
19 | f***muhammad | 19 | eagle | 2017
20 | muslims | 17 | army | 2012
21 | stopislam | 17 | destroyislam | 2012
22 | f***isis | 17 | americanasf | 2002
23 | f***mohammedthepaedo | 14 | deblorable | 1999
24 | islamicterrorism | 13 | backtheblue | 1996
25 | f***religion | 13 | fakenews | 1904
26 | jihadsquad | 13 | pewpew | 1878
27 | repost | 12 | Rangers | 1821
28 | islamiscancer | 12 | maga | 1817
29 | maga2020 | 11 | atheist | 1776
30 | islamerlort | 11 | atheism | 1774
31 | libtard | 11 | f***allah | 1719
32 | mohammedwasapedophile | 10 | right | 1631
33 | f***iran | 10 | rogue | 1533
34 | isis | 10 | memes | 1531
35 | svpol | 10 | presidenttrump | 1482
36 | f***thequran | 10 | libtards | 1390
37 | f***palestine | 9 | islam | 1269
38 | standwithmodi | 9 | makeamericagreatagain | 1264
39 | muslim | 9 | pepe | 1245
40 | trump2020 | 8 | covfefe | 1213
Table 3. The most frequent emojis used on Twitter and Instagram in relation to Islam. (Cells marked – could not be recovered from the source.)

No. | Twitter | Count | Twitter Sequence | Count | Instagram | Count | Instagram Sequence | Count
1 | 🖕 | 124 | 🇬🇧🏴 | 19 | 🔴 | 24,996 | 🇳🇱⚔ | 922
2 | 👏 | 111 | 😂🤣😂🤣 | 4 | – | 22,526 | 🇺🇸🦅🇺🇸 | 412
3 | 👍 | 93 | 🖕🏼🖕🏼🖕🏼🖕🏼🖕🏼🖕🏼🖕🏼🖕🏼🖕🏼🖕🏼🖕🏼 | 2 | 🔵 | 14,596 | 🙏🙏🚩🚩🇮🇳🇮🇳 | 132
4 | 😂 | 66 | 👏🏼👏🏼👏🏼👏🏼👏🏼👏🏼👏🏼👏🏼👏🏼 | 2 | 🇺🇸 | 13,646 | 🇺🇸🥇🇺🇸🥇🇺🇸🥇🇺🇸🥇 | 86
5 | 🇬🇧 | 63 | 🖕😡 | 2 | – | 2321 | 🏹🇮🇳 | 29
6 | 🇺🇸 | 56 | 🇺🇸❤🇺🇸 | 1 | 🇳🇱 | 1283 | 🙏🦁🕉 | 27
7 | 🤣 | 39 | 🕌🖕🖕 | 1 | 🥇 | 1114 | 🐐🍆 | 22
8 | 🏴 | 37 | 🔫🔫🔫 | 1 | 😂 | 1053 | 🇺🇸🇺🇸🇺🇸 | 21
9 | 🤥 | 33 | 🔵🇺🇸🇺🇸🇺🇸🇺🇸🇺🇸🦅🦅🦅🦅™©® | 1 | 🔥 | 677 | 💩🕋💩 | 8
10 | 🙏 | 31 | 🚨🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧🇬🇧 | 1 | 🙏 | 651 | 🐷🐷🐷 | 4
11 | 🚨 | 22 | 🖕🖕💩💩😠🇺🇸🇺🇸🇮🇱💪 | 1 | 🚩 | 589 | 🥓🥓🥓 | 3
12 | 😡 | 18 | 💥🙋🏼 | 1 | 👇 | 529 | ✊✡ | 3
13 | 🤮 | 17 | 🚨🖕🏻🤬🖕🏻♥ | 1 | 💪 | 465 | 🔫🔫🔫 | 2
14 | 💩 | 14 | 🔥🕋🕌🔥🀄 | 1 | 👊 | 462 | 💩🖕 | 2
15 | 🤔 | 13 | 😡😡🤬🤬 | 1 | 🔽 | 430 | 😨🔫 | 2
16 | 😈 | 12 | 🇺🇸🗽📃🔫🗡⛪🐘🐖💀💪 | 1 | 🦅 | 429 | 👳🏾🔫 | 2
17 | 🙄 | 11 | 😡😡😡😡🖕🖕🖕🖕 | 1 | – | 387 | 🙏🏹🗡 | 2
18 | 🤬 | 10 | 🖕🏻🤬💩🕋🀄 | 1 | 🤘 | 358 | 👳🔫 | 2
19 | 💪 | 9 | 🐷🐷🐷🥓🥓🥓 | 1 | 👍 | 298 | 👹☪ | 2
20 | 🔫 | 8 | 🖕🖕🐐🐐 | 1 | 🖕 | 239 | 🤔🤐🤐🇺🇸🇺🇸🇺🇸 | 2
Table 4. The most frequent hashtags on Instagram and Twitter in relation to Christianity.

No. | Twitter | Count | Instagram | Count
1 | f***jesus | 2416 | f***christianity | 3894
2 | f***christianity | 425 | f***thebible | 3526
3 | f***christians | 303 | f***religion | 2821
4 | f***thebible | 294 | atheist | 2540
5 | atheism | 183 | f***jesus | 2472
6 | f***religion | 154 | atheism | 2442
7 | f***god | 136 | noreligion | 2317
8 | f***christ | 130 | f***liars | 2035
9 | f***trump | 128 | antireligion | 1658
10 | lgbt | 109 | god | 1554
11 | f***republicans | 107 | f***ingtruth | 1522
12 | jesus | 105 | godisdead | 1498
13 | christian | 100 | f***religious | 1415
14 | impeachandimprison | 87 | godless | 1391
15 | f***islam | 85 | f***jesuschrist | 1376
16 | religion | 84 | f***islam | 1352
17 | praisejesus | 82 | f***bible | 1349
18 | god | 81 | science | 1343
19 | republican | 80 | therealtruth | 1339
20 | impeachtrumpnow | 79 | f***evilpeople | 1292
21 | trump | 75 | freedomofthoughts | 1289
22 | dumptrump | 64 | f***illuminatis | 1288
23 | hailsatan | 61 | prayers | 1288
24 | atheist | 52 | f***religiousbooks | 1271
25 | 666 | 52 | f***religiouspeoples | 1266
26 | p2 | 50 | f***haters | 1251
27 | f***allah | 44 | revelation | 1248
28 | f***bible | 36 | f***god | 1243
29 | satan | 29 | f***christ | 1241
30 | f***cyril | 27 | woke | 1224
31 | theresistance | 27 | freethinker | 1169
32 | christianity | 24 | logic | 1083
33 | f***ndz | 24 | truth | 1068
34 | f***cele | 23 | jesusisfake | 1046
35 | f***yourgod | 23 | godisntreal | 1032
36 | f***muhammad | 22 | lies | 1025
37 | f***judaism | 21 | openyoureyes | 1020
38 | voetsekanc | 20 | fake | 1016
39 | f***moses | 19 | antijesus | 1016
40 | nogod | 18 | openyourmind | 1013
Table 5. The top sequence of emojis on Twitter and Instagram in relation to Christianity. (Cells marked – could not be recovered from the source.)

No. | Twitter | Count | Twitter Sequence | Count | Instagram | Count | Instagram Sequence | Count
1 | 😂 | 60 | 😂😂 | 6 | – | 1219 | 😂😂😂 | 55
2 | 🖕 | 17 | 😂😂😂 | 2 | 😡 | 767 | 😡✅ | 40
3 | 🤣 | 10 | 🤣🤣🤣 | 2 | 😂 | 725 | 😂😂 | 32
4 | 😈 | 9 | 🤗 | 1 | 🔥 | 638 | ✅😡 | 19
5 | 👍 | 9 | 😊 | 1 | 👊 | 583 | 😂🤣 | 18
6 | 🤔 | 8 | 👇👇 | 1 | 🙏 | 530 | 😡👈🏽 | 17
7 | 💯 | 7 | 🔓🧠🛌🏿⬆ | 1 | 👈 | 324 | 😡🔥 | 15
8 | 😭 | 6 | 👀😂 | 1 | – | 246 | 🖕🏾🗽 | 14
9 | 👿 | 6 | 🤔🤔🤔🤔 | 1 | 👎 | 206 | 💯💯💯 | 14
10 | 🙏 | 6 | 🤣🤣😭😭🤣🤣 | 1 | 🤣 | 199 | 🖕🏾📿 | 14
11 | 👎 | 6 | 🐰🐇🐰🐇 | 1 | – | 191 | ⚫🔴⚪🔵 | 14
12 | 🙌 | 5 | 👎👎👎👎👎👎 | 1 | 🖕 | 158 | 🌽🌵🍅🥜🥑🌶🍠🥒🥔 | 14
13 | 🔥 | 5 | 💯💯💯💯 | 1 | 😒 | 142 | 🖕🏾🇪🇸 | 14
14 | 👇 | 4 | 😈👊🏻🔥 | 1 | 🤔 | 126 | 🖕🏾🚣🏼 | 14
15 | 😒 | 4 | 🤷🏿 | 1 | 💯 | 108 | 🖕🏾⛪ | 14
16 | 👏 | 4 | 🖕🏼😒 | 1 | 🤷 | 103 | 🕉☸ | 14
17 | 😊 | 4 | 🤧🤧🤧 | 1 | 🤦 | 88 | 😡✊🏽 | 14
18 | 🐰 | 4 | 😈⛪🔥🔫 | 1 | 👉 | 88 | 🌱🌳🍃🍂 | 13
19 | 😑 | 4 | 🖕🏿🖕🏿🖕🏿🖕🏿🖕🏿 | 1 | 🌋 | 85 | 😂😂😂😂 | 13
20 | 💎 | 4 | 👍👍👍👍👍 | 1 | 🤢 | 76 | 😡🌋 | 12
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
