College of Social and Behavioral Sciences

Social Media Perpetuating Hate Speech, Racism, and Racial Bias

Pierce Christoffersen

Faculty Mentor: Phillip Singer (Social & Behavioral Science, University of Utah)

 

Abstract

This paper examines the relationship between racism and hate speech, and how the Internet and social media have perpetuated both. My research then delves into why racist ideologies and hate speech persist and thrive on social media platforms, and analyzes the harms of hate speech and racist ideologies, primarily domestic terrorism. After showing the harmful effects of social media, particularly through a racial-political lens, I investigate the current regulatory state of the internet and social media companies, and how those companies’ various interests prevent them from acting appropriately as a self-regulating force. My research concludes by arguing that there is a need for a regulatory agency, at the federal level, tasked with overseeing the Internet and social media. The solution I argue for is a new commission (referred to in this paper as the Internet Communications Commission) implemented on the precedent of the Radio Act of 1927, which created the Federal Radio Commission, the forerunner of the Federal Communications Commission. This solution is based on the work of Melody Fisher, Darvelle Hutchins, and Mark Goodman in “Regulating Social Media and the Internet of Everything.”

Introduction

The topic I investigate in this research paper is new media’s impact on racial politics, and public policy’s role in helping to mediate the new democratic forum that has emerged from social media. This topic is relevant and important to research because a pressing question in contemporary political science is how media consumption, especially social media, impacts societal discourse. In the case of racial and social politics, contemporary media appears to have had mixed effects. The increased ease and flow of information through social media has led to heightened levels of coalition building, while simultaneously furthering the phenomenon of filter bubbles. Historically, the media has been referred to as the “fourth estate,” tasked with acting as the government’s watchdog. Thus, any discussion of governmental operations that omits the media, particularly social media today, leaves out a key aspect of our society.

My research paper seeks to analyze how social media perpetuates racism and hate speech, which in some instances has led to harmful outcomes. The problem for social media companies is that they are unable to effectively address this issue themselves, given differing interests and limited resources. Consequently, the goal of my paper is to show that negative discourse on social media is a social policy issue, and why the internet and social media as a means of discourse ought to be taken seriously. A task of contemporary scholars is to determine agency in the context of this new online environment, an environment that differs tremendously from that of traditional news outlets for two primary reasons. First, the lines of users’ involvement are blurred (consumers can be producers or informants, and vice versa), and anonymity is a key feature of these platforms. Second, there are the roles and responsibilities of social media companies, since they themselves are not creating, posting, and sharing the problematic content on their platforms. “[T]he Internet developed without any regulation. Now, [as] the world is moving towards “the Internet of Everything,” lawmakers and regulators are trying to cope with worldwide havoc created through social media and viral content” (Fisher, Hutchins & Goodman, 2020). The “Internet of Everything” is a term meant to broadly describe the various software, technologies, data, and devices associated with the Internet. Therefore, the goal of this paper is to discover how agency should be determined in this new media world and what regulatory practices, if any, should be instituted for the Internet and social media.

I argue that, despite their differences, there are several similarities between the early years of radio and our current social media landscape. Many of the problems debated when determining how to properly regulate radio have re-emerged with respect to the Internet and social media. Consequently, I, along with other scholars, argue that the Radio Act of 1927 can be utilized as a precedent for creating a baseline regulatory body for the Internet and social media, in hopes that further, more robust, and expansive institutions and policies will follow. In doing so, I employ the concept of incrementalism, which holds that institutional change occurs gradually, in increments over time, usually after an initial policy or institution is established (Peters, 2019).

Racism, Hate Speech, & Social Media

To begin, a paramount problem in the current scholarly work on social media is proving whether social media is being employed as a medium to perpetuate racist ideologies, hate speech, and racial prejudice. This difficulty stems from two primary issues. First, mirroring a problem in society as a whole, there is a lack of consensus regarding what constitutes dangerous or racist speech and what benchmark is acceptable for distinguishing harmful content from appropriate content (ElSherief et al., 2018). Some companies, like Facebook, take a post-racial approach, thus denying the existence of contemporary racism (Siapera & Viejo-Otero, 2021). Second, once a benchmark is determined, one must address the issue of semantics, since much incendiary content contains typos or does not contain verbiage that is explicitly racist (ElSherief et al., 2018).

Despite these obstacles, there is substantial evidence showing that hate speech, racism, and stereotyping are perpetuated by social media (Cook et al., 1983; Dobson & Knezevic, 2018; ElSherief et al., 2018; Siapera & Viejo-Otero, 2021). Moreover, the scholarly and legal works I examined all closely followed Caitlin Carlson’s definition of hate speech as “[an] expression that seeks to promote, spread or justify misogyny, racism, anti-Semitism, religious bigotry, homophobia, bigotry against the disabled” (Carlson, 2018). For example, in a Pew Research Center study, “60% of Internet users said they had witnessed offensive name-calling, 25% had seen someone physically threatened, and 24% witnessed someone being harassed for a sustained period of time” (ElSherief et al., 2018). Even more startling, a 2016 study found that “95 percent of adolescents witnessed hate speech (directed against a minority group) on the internet” (Soral, Liu & Bilewicz, 2020). Social media has become a primary communication medium that is “deeply ingrained in people’s everyday socialization and a site where issues of race and power persists” (Fisher, Hutchins & Goodman, 2020). Moreover, because “popular social media websites, such as Facebook, Instagram, Twitter, Pinterest, and LinkedIn, are all owned by white Americans and businesses, social media is a public space where Blacks continue to experience erasure and invisibility of race by dominant social groups” (Fisher, Hutchins & Goodman, 2020).

A case study that exemplifies social media’s role in perpetuating racism and racist stereotypes is that of Kimberly Wilkins. A massive fire destroyed a building, leaving over 100 residents homeless, including Wilkins herself. When local news outlets arrived at the scene, they began interviewing residents affected by the incident, and Kimberly Wilkins was asked about her experience during the fire. Wilkins explained that as she was grabbing a soda from a vending machine, she suddenly realized there was a fire. Wilkins stated, “Oh Lord, Oh Jesus, it’s a fire. I ran out, I didn’t grab no shoes or nothing. Jesus! I ran for my life and then the smoke got me. I got bronchitis. Ain’t nobody got time for that.” This colorful statement was picked up by a Reddit user and quickly became an internet sensation. The quote eventually led to several racist memes of Wilkins being plastered across social media platforms overnight.

Unfortunately, the sensation surrounding Wilkins’ statement did more than motivate racial commentary: after Wilkins became an internet and media celebrity, the case study researchers had a difficult time finding any reports on the actual aftermath of the fire or on the various other individuals affected by the tragic accident (Dobson & Knezevic, 2018). They were only able to find stories related to Wilkins’ short time as an internet celebrity, with no news regarding the whereabouts of the more than 100 people left homeless by the fire. The Kimberly Wilkins case study tragically shows how social media changed the framing of a legitimate news story, residents left homeless by a massive fire, into a parody about a “funny black woman.”

This is because, as research by Dobson and Knezevic has shown, “social media has become a gauge for what [news] stories are deemed ‘news-worthy’” (Dobson & Knezevic, 2018). Not only does this carelessly move the definition of newsworthiness into the court of public opinion, it is also exceptionally problematic because of the historic role of the media. Scholars who discuss the media’s role describe its job as government and society’s “watchdog,” and as the United States’ “fourth estate” (Tumber, 2001). Yet contemporary sources argue that this role has been shifting with the introduction of the internet and, consequently, social media. Going a step further, I argue that this shift is not isolated to the internet but is rather part of a societal turn toward “scandal” media that began around the time of the internet’s inception. Over the past few decades, a viewership power struggle has produced a sea change in media, where the objective shifted from providing factual, unbiased information to obtaining and maintaining a large core audience through shock pieces and opinion news shows (Tumber, 2001). This phenomenon has been further exemplified by social media and its attention-based economic model, which prioritizes preserving and growing an audience and consumer base (a consumer base that is simultaneously aiding in producing content). “[U]sers…are no longer passive consumers of media content, but active producers and distributors of it” (Dobson & Knezevic, 2018). Social media is unique because individuals who were previously only consumers of media have become active producers of and participants in it as well (Tumber, 2001; Dobson & Knezevic, 2018). This fundamental shift threatens the original practice of journalism: “traditional” media outlets are required to follow FCC acts and regulations while attempting to maintain an audience that is being seduced and bombarded by social media platforms, platforms that are not hindered by the same laws and that further benefit from an excessive saturation of producer-consumers.

The Harms of Social Media

“The presence of hate speech in one’s environment can produce a sense of a social norm by suggesting that using such language about immigrants or minority groups is common rather than exceptional” (ElSherief et al., 2018). This can cause individuals to become more complacent in tolerating hate speech, having been conditioned to see it as a commonplace norm. This phenomenon may then escalate, making individuals more willing to condone or even commit violent acts against minorities.

“[T]he widespread adoption of racial and ethnic slurs has historically been associated with acts of violence, ranging from hate crimes targeted at individuals to mass genocide…the casual use of racial slurs can create a climate that will tolerate crimes against humanity such as slavery or the Holocaust…Throughout history, hate speech has been used to dehumanize various religious, ethnic, or racial groups in order to make military action or physical violence against them more palatable” (Soral, Liu & Bilewicz, 2020).

“It is undisputed that social media has played a key role in the expansion of domestic terrorism” (Berryman, 2020). Social media’s commonplace hate speech, anonymity, and expedited ability to connect and communicate with others have meant that bad actors have had their biases reinforced as perceived norms and have had an easier time connecting with others who share similar biases and potentially harmful tendencies. Thus, social media has become a breeding ground for domestic terrorist groups and bad actors.

Problematically, “[t]he existing case law seems to condemn holding social media providers liable for acts of terrorism, particularly in light of the existing evidence that focuses on the radicalization process” (Berryman, 2020). However, the case law is not absolute: in Crosby v. Twitter, the case involving the Pulse Night Club shooter, “the court…did not preclude liability completely, noting that one should not interpret its holding to mean that ‘[d]efendants could never proximately cause a terrorist attack through their social media platforms’” (Berryman, 2020). Unfortunately, prosecutors face a near-impossible task in proving “proximate cause” when charging social media companies with liability relating to terrorism, and thus, to date, none have been successfully charged (Berryman, 2020). Accordingly, it is only with considerable modifications to current legal statutes that social media companies could be legitimately charged, including being held accountable for the coordination and disinformation that took place on platforms like Twitter and helped to incite the January 6th insurrection at the Capitol.

At the end of her analysis, Berryman provides a couple of suggestions that she argues, if implemented correctly, could solve this problem. First, Berryman argues that it is essential for the Anti-Terrorism Act of 1990 (ATA) to include acts of domestic terrorism. While this would not directly address social media companies, it would bring domestic terrorism as a whole under the Act’s stipulations and penalties. Second, Berryman advances a claim made by other legal scholars that the Communications Decency Act of 1996 (CDA) should be amended to include Internet Service Providers (ISPs), which would allow social media platforms to be held liable if they were involved in providing materials that supported terrorists and their actions. Finally, Berryman states that a paramount issue with the current case law is its condemnation of charging social media companies and the overly onerous task of showing proximate cause. Therefore, Berryman contends these practices should be amended so that the agency of social media companies can be properly addressed, especially as technologies constantly improve and companies gain a greater ability to filter potentially dangerous content (Berryman, 2020).

Current Social Media Regulation

Currently, the task of regulation falls upon the social media companies themselves, and these companies (including Facebook) claim they are appropriately regulating the content that flows through their platforms. Facebook’s model for regulating inflammatory content, like that of many other platforms, is a “hands-off approach” (Oates, 2020). This approach tries to maintain a free environment where every voice, ideology, and opinion can co-exist. Additionally, Facebook’s model relies heavily on users themselves reporting or blocking content that they find inappropriate (Siapera & Viejo-Otero, 2021). Given the unreliability of self-reporting, it is important to examine these regulatory practices. Moreover, Facebook’s approach to controversial content is color-blind and post-racialized (as stated in the first section), meaning that content posted about a group that has historically been oppressed and marginalized is regarded with the same weight as content about a group that has no such history of oppression and was potentially itself the oppressor (Siapera & Viejo-Otero, 2021). Consequently, “hate speech…is not seen as an ethical or political problem [for Facebook]. Rather hate speech is just another category of problematic content, one of about twenty,” thereby amplifying and reproducing (white) supremacist positions (Siapera & Viejo-Otero, 2021). Accordingly, the internet and social media have become ideal places for terrorist organizations because of their minimal censorship and regulation, along with previously mentioned factors like anonymity. The ease of accessing social media platforms, and companies’ reluctance to ban such content, have allowed individuals with similar extreme ideologies to connect and create a collective identity that would otherwise be difficult or impossible to form (Berryman, 2020). “[With the internet] [n]ew citizenship linkages and virtual communities are emerging in which participation, whether around political affiliation, social issues or local community interests, suggest[s] a move away from a unified public sphere to a series of separate public spheres. A single public sphere becomes obsolete as groups maintain their own deliberative democratic forums” (Tumber, 2001).

Therefore, without comprehensive regulation, racist ideologies are sure to persist as more people join “algorithmic enclaves” that reinforce their ideologies and biases (Dobson & Knezevic, 2018). The paramount interest of social media companies is preserving as diverse and large a user base as possible. Thus, instead of removing all potentially inflammatory content, social media companies rely upon individual users’ ability to manually censor harmful content from themselves—manually meaning the user must select an option to block or report negative or harmful content on their feeds. Of course, as a user’s sphere of information grows smaller and less diversified, manual censorship becomes less and less necessary, until the algorithms simply stop showing the individual content they find odious. A social media company’s interest is not the user’s well-being, but rather the user’s time and information. As former Facebook employee Frances Haugen stated, “the company [Facebook] systematically and repeatedly prioritized profits over the safety of its users” (Zakrzewski, 2021). The attention economy of social media and the internet as a whole is fueled by time, activity, and personal information, not by maintaining a civil forum of collective discourse.

Furthermore, it is important to remember that social media companies openly admit to engaging in this practice of customizing content feeds to each individual’s interests, needs, and desires. For example, in Facebook’s (Meta’s) terms of service, the company states that it wants to “[p]rovide a personalized experience for you” (Facebook, Terms of Service, 2022):

“Your experience on Facebook is unlike anyone else’s: from the posts, stories, events, ads, and other content you see in News Feed or our video platform to the Facebook Pages you follow and other features you might use, such as Trending, Facebook Marketplace, and search. We use the data we have – for example, about the connections you make, the choices and settings you select, and what you share and do on and off our Products – to personalize your experience” (Facebook, Terms of Service, 2022). This extends far beyond the user’s feed, however. Further, Facebook states that it wants to help “[c]onnect you with people and organizations you care about” (Facebook, Terms of Service, 2022). So, its algorithms try to “help you find and connect with people, groups, businesses, organizations, and others that matter to you across the Meta Products you use. We use the data we have to make suggestions for you and others – for example, groups to join, events to attend, Facebook Pages to follow or send a message to, shows to watch, and people you may want to become friends with. Stronger ties make for better communities, and we believe our services are most useful when people are connected to people, groups, and organizations they care about” (Facebook, Terms of Service, 2022).

All of this is done while ensuring that users’ individual right to say or do whatever they wish online is protected, as stated in Facebook’s additional missions to “[e]mpower you to express yourself and communicate about what matters to you” and to “show you ads, offers, and other sponsored content to help you discover content, products, and services that are offered by the many businesses and organizations that use Facebook and other Meta Products” (Facebook, Terms of Service, 2022). These goals in some cases directly conflict with regulating hate speech and actors prone to racist or hateful behavior. Rather, the interest of Facebook, in this case, is to create a personalized content feed for each user that also connects them with other users who share similar content interests. The mission is essentially to create specific groupings of individuals and thereby limit the degree to which actors with contrasting (and in some cases harmful) viewpoints incidentally interact, while still providing the user with the option to manually search for certain users or content types. However, this is not to suggest that social media companies (e.g., Facebook, Twitter, Instagram, YouTube, Snapchat, TikTok, Reddit) do not believe hate speech is an issue, nor that these companies are inherently at fault for the harmful actions that occur on their platforms. Across the several social media platforms’ community guidelines and terms of service I read, hate speech, along with bullying and harassment, was consistently highlighted as a “violation” that is prohibited rather than condoned on their platforms.

Unfortunately, actually enforcing a firm stance against hate speech is onerous and resource-intensive for several reasons. First, social media companies see hate speech as only one of the many issues that occur on their platforms. Accordingly, they are (arguably justifiably) only willing to allocate so much money, time, and resources to address the issue; for instance, some 5 billion posts circulate through Facebook’s servers every single day (“Facebook’s Hate Speech Problem,” 2020). Second, if social media companies begin striking down all the content that some individuals classify as “harmful,” “obscene,” or “inappropriate,” they risk disturbing the peace, causing individuals to feel that their right to free speech has been violated, and creating an environment hostile to the free exchange of ideas—actions which in some instances can be seen as hypocritical or as targeting certain groups or ideologies. This makes a comprehensive and extensive regulatory apparatus a net loss for companies and against their best interests, as most state that their goal is to allow a free exchange of (authentic) content and ideas. Third, while social media companies have vast resources because of their monopoly over the industry, there are limits to what those resources can achieve. A common problem for social media companies attempting to regulate harmful and illicit content is language barriers and variation in syntax and language between users of various groups, races, and nationalities (such as typos, abbreviations, slang, etc.).

Lastly, there is no legal reason for social media companies to strictly enforce anti-hate speech sentiments and actions on their platforms. “In the United States, most hate speech is protected by the First Amendment…in Snyder v. Phelps (2011), the U.S. Supreme Court held that picketing fallen soldier’s funerals with signs that said, ‘[G]od hates fags’ did not meet the threshold for intentional infliction of emotional distress” (Carlson, 2018). So, most hate speech occurring on social media platforms is protected by the United States Bill of Rights: “Unless expression falls into the categories of fighting words, incitement to illegal advocacy, true threats, or the rarely invoked notion of group libel, it is considered protected” (Carlson, 2018). Yet, as we have seen with instances of domestic terrorism—which should fall under “true threats” or “fighting words”—social media’s anonymity and fabricated distancing mean that legal action still does not occur in instances where serious physical danger and harm can and does occur.

The Solution for Accountability & Regulation

Consequently, I maintain that social media companies are not equipped to regulate hate speech on their platforms themselves, and that no serious legal action is taken against bad actors (or against the companies) for engaging in hate speech. I argue that the solution is a defined set of parameters for government oversight and regulation. I recognize, as previously indicated, that this may be seen as problematic given the current perception that social media prioritizes individual freedom above all else. Fortunately, through the work of Fisher, Hutchins, and Goodman we understand that the Radio Act of 1927 created the Federal Radio Commission (the forerunner of today’s Federal Communications Commission, or FCC) as a regulatory body of technocrats. This commission determined the best way to manage and regulate the medium of radio and to identify what ran afoul of its rules. By employing the Radio Act of 1927 and identifying the similarities between social media and radio, we can show why regulations, particularly at the Congressional level, are necessary, and consequently that an Internet Communications Commission (ICC) of sorts should be instituted to regulate social media and the internet (Fisher, Hutchins & Goodman, 2020). The rudimentary similarities between radio and the internet are that, upon each of their inceptions, public discourse expanded due to a lack of regulation and oversight, and that “Congress does not understand the technology” (Fisher, Hutchins & Goodman, 2020). Hence, many of the issues and questions raised in the Radio Act of 1927 mirror problems being discussed today, such as monopolies, who should produce content, free speech, hate speech, obscenity, and censorship. Therefore, if we investigate the arguments, problems, and logical solutions that led to the creation of the Radio Act of 1927, the same can and should be done (to a degree) for the Internet and social media.

First, at radio’s inception, there was a collective fear that the industry would become a monopoly dominated by capitalist and corporate interests. Consequently, the commission created by the Radio Act was meant to act as a regulatory body limiting the power and influence of corporations on the radio industry (Fisher, Hutchins & Goodman, 2020). Comparatively, “[t]he communication network monopolies of 2019 dwarf the radio monopoly of 1926,” and the digital platforms hold so much economic power that “Apple, Google, Amazon, and Facebook have more impact on the world econom[y] than all countries except the U.S. and China” (Fisher, Hutchins & Goodman, 2020). Even if one believes it is foolhardy to have a so-called Internet Communications Commission break up the massive monopolies of the social media industry, it should still be acknowledged that the collective power of these corporations is too great for there to be no tangible government oversight. Moreover, as Galloway states, “These markets are no longer competitive. They can no longer resist abusing their market power” (Fisher, Hutchins & Goodman, 2020). It is ignorant and naive to simply trust these social media giants to act in the best interest of public discourse and to remain objective in their actions and regulatory principles—something that the Congress of 1927 understood and that the Congress of 2022 should acknowledge as well.

Second, regarding broadcasters’ responsibilities and their relation to free speech and hate speech, individuals argued as early as 1924 that those with a voice on the radio should be held to a more stringent moral standard than is commonly applied to other exercises of the right to free speech and expression. Clarence Dill (a Washington Senator) argued that “[b]roadcasters should be businessmen of the highest class…The right to broadcast is to be based not upon the right of the individual, not upon the selfish desire of the individual, but upon a public interest to be served by the granting of these licenses” (Fisher, Hutchins & Goodman, 2020). This philosophy is nonexistent in the realm of social media and the internet. For example, an unemployed man named Christopher Blair “earned $17,000 weekly by making up ridiculous stories and posting them on Facebook during the 2016 presidential election.” Blair stated, “The more extreme we become, the more people believe it” (Fisher, Hutchins & Goodman, 2020). This is why the issue of accountability and agency needs to be addressed on social media and the internet. The interest of individuals on social media is not to uphold a higher ethical and moral standard by ensuring the content they produce is in the best interests of society; their interest in this unregulated space lies only with themselves (Fisher, Hutchins & Goodman, 2020).

Ergo, while creating a regulatory body may seem intrusive, the framers of the 1927 Radio Act believed that someone with the power to have their voice heard on the radio should be held to a correspondingly higher level of accountability. These same standards ought to be applied to the internet, social media, and digital platforms. As individual users, we have wide discretion in the content we post, and given that our power of voice is immensely amplified by the internet and social media, our accountability should be amplified as well.

The best way to establish a standard by which accountability and agency can be most equitably determined is to institute a commission, modeled on the Federal Communications Commission, whose sole task is to decide the most appropriate manner of regulating social media and internet content, and what actions should be taken against bad actors or companies (Fisher, Hutchins & Goodman, 2020). Like the FCC, this Internet Communications Commission would be composed of appointed technocratic experts, ranging from owners of social media companies to scholars of communications law, veteran government officials in communications regulation, and specialists in social media and the internet. While this may not directly address the paramount problems behind hate speech and racism online, it would at the very least establish a regulatory baseline for future, more robust policies. Arguably, the simplest remedy would be a law banning all hate speech, but this of course creates its own problems regarding the right to free speech. Therefore, the best alternative is to establish a regulatory framework, in the form of a commission, to help decide how social media and the internet should be navigated. If we endorse the concept of incrementalism, this initial policy would over time lead to better, more direct policies, institutions, and regulations.

Conclusion

In summary, my findings have shown the following. First, racism, hate speech, and extremist ideologies are perpetuated through social media. Second, the media as a medium and a place for discourse has drastically changed. Third, social media companies are apathetic about regulating and mediating said racism and hate speech or holding their users accountable, in part because of economic and external interests. Fourth, current laws and regulations do not hold social media companies or users accountable. Thus, my findings indicate a need for some form of regulation, which I argue should take a form similar to that of the Radio Act of 1927: the creation of an Internet Communications Commission tasked with determining the appropriate response to regulating social media and the Internet. Once this commission is created, following the concept of incrementalism, we can expect more robust and expansive regulations and policies surrounding the internet and social media to follow.

Bibliography

Berryman, Chloe. “Holding Social Media Providers Liable for Acts of Domestic Terrorism.” Florida Law Review, 25 Feb. 2021, http://www.floridalawreview.com/2021/holding-social-media-providers-liable-for-acts-of-domestic-terrorism/.

Carlson, Caitlin. “Censoring Hate Speech in Social Media Content: Understanding the User’s Perspective.” Communication Law Review, vol. 17, no. 1, Nov. 2018.

Cook, Fay Lomax, et al. “Media and Agenda Setting: Effects on the Public, Interest Group Leaders, Policy Makers, and Policy.” Public Opinion Quarterly, Oxford University Press, 1983. JSTOR, https://www.jstor.org/stable/2748703.

Dobson, Kathy, and Irena Knezevic. “‘Ain’t Nobody Got Time for That!’: Framing and Stereotyping in Legacy and Social Media.” Canadian Journal of Communication, vol. 43, no. 3, 2018, https://doi.org/10.22230/cjc.2019v44n3a3378.

ElSherief, Mai, et al. “Hate Lingo: A Target-Based Linguistic Analysis of Hate Speech in Social Media.” Proceedings of the International AAAI Conference on Web and Social Media, 2018, https://ojs.aaai.org/index.php/ICWSM/article/view/15041/14891.

“Facebook’s Hate Speech Problem Is Even Bigger Than We Thought.” Anti-Defamation League, 23 Dec. 2020, https://www.adl.org/blog/facebooks-hate-speech-problem-is-even-bigger-than-we-thought.

Facebook. Terms of Service. 4 Jan. 2022, https://www.facebook.com/terms.php. Accessed 16 Apr. 2022.

Fisher, Melody, et al. “Regulating Social Media and the Internet of Everything: The Precedent of the Radio Act of 1927.” Communication Law Review, vol. 19, no. 1, 2020, https://commlawreview.org/Archives/CLRv19/Regulating_Social_Media_and_the_Internet_of_Everything_Fisher_Hutchkins_Goodman.pdf.

Oates, Sarah. “The Easy Weaponization of Social Media: Why Profit Has Trumped Security for U.S. Companies.” Digital War, Springer International Publishing, 2020, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7212244/.

Jaishankar, K, and Imran Awan. “Islamophobia on Social Media: A Qualitative Analysis of the Facebook’s Walls of Hate.” Zenodo, 24 July 2016, https://zenodo.org/record/58517.

Peters, B. Guy. American Public Policy: Promise and Performance. 11th ed., SAGE Publications, Inc., 2019.

Instagram. Terms of Service. Help Center, 4 Jan. 2022, https://help.instagram.com/581066165581870. Accessed 16 Apr. 2022.

TikTok. Terms of Service. Feb. 2019, https://www.tiktok.com/legal/terms-of-service?lang=en. Accessed 10 May 2022.

Tumber, Howard. “Democracy in the Information Age.” Culture and Politics in the Information Age, 2002, pp. 31–45, https://doi.org/10.4324/9780203183250-7.

Twitter. Twitter Terms of Service. 19 Aug. 2021, https://twitter.com/en/to. Accessed 16 Apr. 2022.

Siapera, Eugenia, and Paloma Viejo-Otero. “Governing Hate: Facebook and Digital Racism.” Television & New Media, vol. 22, no. 2, 2021, pp. 112–130., https://doi.org/10.1177/1527476420982232.

Snap Inc. Terms of Service. 15 Nov. 2021, https://snap.com/en-US/terms. Accessed 16 Apr. 2022.

Soral, Wiktor, et al. “Media of Contempt: Social Media Consumption Predicts Normative Acceptance of Anti-Muslim Hate Speech and Islamoprejudice.” International Journal of Conflict and Violence, vol. 14, no. 1, 2020, pp. 1–13, https://doi.org/10.4119/ijcv-3774.

Reddit. User Agreement. 12 Sept. 2021, https://www.redditinc.com/policies/user-agreement-september-12-2021. Accessed 9 May 2022.

Wu, P. “Impossible to Regulate: Social Media, Terrorists, and the Role for the U.N.” Chicago Journal of International Law, vol. 16, no. 1, 2015, pp. 281–311.

YouTube. Terms of Service. 5 Jan. 2022, https://www.youtube.com/static?template=terms. Accessed 16 Apr. 2022.


License

RANGE: Journal of Undergraduate Research (2023) Copyright © 2023 by Pierce Christoffersen is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
