
The Future of Free Speech, Trolls, Anonymity and Fake News Online

Many experts fear uncivil and manipulative behaviors on the internet will persist – and may get worse. This will lead to a splintering of social media into AI-patrolled and regulated “safe spaces” separated from free-for-all zones. Some worry this will hurt the open exchange of ideas and compromise privacy.


One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long. Bailey Poland

Since the early 2000s, the wider diffusion of the network, the dawn of Web 2.0 and social media’s increasingly influential impacts, and the maturation of strategic uses of online platforms to influence the public for economic and political gain have altered discourse. In recent years, prominent internet analysts and the public at large have expressed increasing concerns that the content, tone and intent of online interactions have undergone an evolution that threatens the internet’s future – and their own. Events and discussions unfolding over the past year highlight the struggles ahead. Among them:

  • Respected internet pundit John Naughton asked in The Guardian, “Has the internet become a failed state?” and mostly answered in the affirmative.
  • The U.S. Senate heard testimony on the increasingly effective use of social media for the advancement of extremist causes, and there was growing attention to how social media are being weaponized by terrorists, creating newly effective kinds of propaganda.
  • Scholars provided evidence showing that social bots were deployed in attempts to disrupt the 2016 U.S. presidential election. And news organizations documented how foreign trolls bombarded U.S. social media with fake news. A December 2016 Pew Research Center study found that about two-in-three U.S. adults (64%) say fabricated news stories cause a great deal of confusion about the basic facts of current issues and events.
  • A May 2016 Pew Research Center report showed that 62% of Americans get their news from social media. Farhad Manjoo of The New York Times argued that the “internet is loosening our grip on the truth.” And his colleague Thomas B. Edsall curated a lengthy list of scholarly articles after the election that painted a picture of how the internet was jeopardizing democracy.
  • 2016 was the first year that an internet meme made its way into the Anti-Defamation League’s database of hate symbols.
  • Time magazine devoted a 2016 cover story to explaining “why we’re losing the internet to the culture of hate.”
  • Celebrity social media mobbing intensified. One example: “Ghostbusters” actor and Saturday Night Live cast member Leslie Jones was publicly harassed on Twitter and had her personal website hacked.
  • An industry report revealed how former Facebook workers suppressed conservative news content.
  • Multiple news stories indicated that state actors and governments increased their efforts to monitor users of instant messaging and social media.
  • The Center on the Future of War started the Weaponized Narrative Initiative.
  • Many experts documented the ways in which “fake news” and online harassment might be more than social media “byproducts” because they help to drive revenue.
  • #Pizzagate, a case study, revealed how disparate sets of rumors can combine to shape public discourse and, at times, potentially lead to dangerous behavior.
  • Scientific American carried a nine-author analysis of the influencing of discourse by artificial intelligence (AI) tools, noting, “We are being remotely controlled ever more successfully in this manner. … The trend goes from programming computers to programming people … a sort of digital scepter that allows one to govern the masses efficiently without having to involve citizens in democratic processes.”
  • Google (with its Perspective API), Twitter and Facebook are experimenting with new ways to filter out or label negative or misleading discourse.
  • Researchers are exploring why people troll.
  • And a drumbeat of stories out of Europe covered how governments are attempting to curb fake news and hate speech but struggling to reconcile their concerns with the sweeping free speech rules that apply in America.

To illuminate current attitudes about the potential impacts of online social interaction over the next decade, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate practitioners and government leaders. Some 1,537 responded to this effort between July 1 and Aug. 12, 2016 (prior to the late-2016 revelations about potential manipulation of public opinion via hacking of social media). They were asked:

In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?

In response to this question, 42% of respondents indicated that they expect “no major change” in the online social climate in the coming decade, and 39% said they expect the online future will be “more shaped” by negative activities. Those who said they expect the internet to be “less shaped” by harassment, trolling and distrust were in the minority: Some 19% said this. Respondents were asked to elaborate on how they anticipate online interaction progressing over the next decade. (See “About this canvassing of experts” for further details about the limits of this sample.)

Participants were also asked to explain their answers in a written elaboration, considering the following prompts: 1) How do you expect social media and digital commentary will evolve in the coming decade? 2) Do you think we will see a widespread demand for technological systems or solutions that encourage more inclusive online interactions? 3) What do you think will happen to free speech? 4) What might be the consequences for anonymity and privacy?

While respondents expressed a range of opinions from deep concern to disappointment to resignation to optimism, most agreed that people – at their best and their worst – are empowered by networked communication technologies. Some said the flame wars and strategic manipulation of the zeitgeist might just be getting started if technological and human solutions are not put in place to bolster diverse civil discourse.

A number of respondents predicted online reputation systems and much better security and moderation solutions will become near ubiquitous in the future, making it increasingly difficult for “bad actors” to act out disruptively. Some expressed concerns that such systems – especially those that remove the ability to participate anonymously online – will result in an altered power dynamic between government/state-level actors, the elites and “regular” citizens.

Anonymity, a key affordance of the early internet, is an element that many in this canvassing said enables bad behavior and facilitates “uncivil discourse” in shared online spaces. The purging of user anonymity is seen as possibly leading to a more inclusive online environment – and also as setting the stage for governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech and shape social debate.


Most experts predicted that the builders of open social spaces on global communications networks will find it difficult to support positive change in “cleaning up” the real-time exchange of information and sharing of diverse ideologies over the next decade. Millions more people around the world are becoming connected for the first time, and among the billions already online are many who compete in an arms race of sorts to hack and subvert corrective systems.

Those who believe the problems of trolling and other toxic behaviors can be solved say the cure might also be quite damaging. “One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” explained expert respondent Bailey Poland, author of “Haters: Harassment, Abuse, and Violence Online.”

The majority in this canvassing were sympathetic to those abused or misled in the current online environment while expressing concerns that the most likely solutions will allow governments and big businesses to employ surveillance systems that monitor citizens, suppress free speech and shape discourse via algorithms, allowing those who write the algorithms to sculpt civil debate.

Susan Etlinger, an industry analyst at Altimeter Group, walked through a future scenario of tit-for-tat, action-reaction that ends in what she calls a “Potemkin internet.” She wrote: “In the next several years we will see an increase in the type and volume of bad behavior online, mostly because there will be a corresponding increase in digital activity. … Cyberattacks, doxing, and trolling will continue, while social platforms, security experts, ethicists, and others will wrangle over the best ways to balance security and privacy, freedom of speech, and user protections. A great deal of this will happen in public view. The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor. Of course, this is already happening, just out of sight of most of us. The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”

One other point of context for this non-representative sample of a particular population: While the question we posed was not necessarily aimed at getting people’s views about the role of political material in online social spaces, it inevitably drew commentary along those lines because this survey was fielded in the midst of a bitter, intense election in the United States where one of the candidates, in particular, was a provocative user of Twitter.

Most participants in this canvassing wrote detailed elaborations explaining their positions. Their well-considered comments provide insights about hopeful and concerning trends. They were allowed to respond anonymously, and many chose to do so.

These findings do not represent all points of view possible, but they do reveal a wide range of striking observations. Respondents collectively articulated four “key themes” that are introduced and briefly explained below and then expanded upon in more-detailed sections.

The following section presents a brief overview of the most evident themes extracted from the written responses, including a small selection of representative quotes supporting each point. Some responses are lightly edited for style or due to length.

Theme 1: Things will stay bad because to troll is human; anonymity abets anti-social behavior; inequities drive at least some of the inflammatory dialogue; and the growing scale and complexity of internet discourse make this difficult to defeat

While some respondents saw uncivil behavior online as being on somewhat of a plateau at the time of this canvassing in the summer of 2016, and a few expect solutions will cut hate speech, misinformation and manipulation, the vast majority shared at least some concerns that things could get worse. Thus, two of the four overarching themes of this report start with the phrase “Things will stay bad.”

The individual’s voice has a much higher perceived value than it has in the past. As a result, there are more people who will complain online in an attempt to get attention, sympathy, or retribution. Anonymous software engineer

A number of expert respondents observed that negative online discourse is just the latest example of the many ways humans have exercised social vitriol for millennia. Jerry Michalski, founder at REX, wrote, “I would very much love to believe that discourse will improve over the next decade, but I fear the forces making it worse haven’t played out at all yet. After all, it took us almost 70 years to mandate seatbelts. And we’re not uniformly wise about how to conduct dependable online conversations, never mind debates on difficult subjects. In that long arc of history that bends toward justice, particularly given our accelerated times, I do think we figure this out. But not within the decade.”

Vint Cerf, Internet Hall of Fame member, Google vice president and co-inventor of the Internet Protocol, summarized some of the harmful effects of disruptive discourse:

“The internet is threatened with fragmentation,” he wrote. “… People feel free to make unsupported claims, assertions, and accusations in online media. … As things now stand, people are attracted to forums that align with their thinking, leading to an echo effect. This self-reinforcement has some of the elements of mob (flash-crowd) behavior. Bad behavior is somehow condoned because ‘everyone’ is doing it. … It is hard to see where this phenomenon may be heading. … Social media bring every bad event to our attention, making us feel as if they all happened in our back yards – leading to an overall sense of unease. The combination of bias-reinforcing enclaves and global access to bad actions seems like a toxic mix. It is not clear whether there is a way to counter-balance their socially harmful effects.”

Subtheme: Trolls have been with us since the dawn of time; there will always be some incivility

An anonymous respondent commented, “The tone of discourse online is dictated by fundamental human psychology and will not easily be changed.” This statement reflects the attitude of expert internet technologists, researchers and pundits, most of whom agree that it is the people using the network, not the network itself, who are the root of the problem.

Paul Jones, clinical professor and director of ibiblio.org at the University of North Carolina, Chapel Hill, commented, “The id unbound from the monitoring and control by the superego is both the originator of communication and the nemesis of understanding and civility.”

John Cato, a senior software engineer, wrote, “Trolling for arguments has been an internet tradition since Usenet. Some services may be able to mitigate the problem slightly by forcing people to use their real identities, but wherever you have anonymity you will have people who are there just to make other people angry.”

And an anonymous software engineer explained why the usual level of human incivility has been magnified by the internet, noting, “The individual’s voice has a much higher perceived value than it has in the past. As a result, there are more people who will complain online in an attempt to get attention, sympathy, or retribution.”

Subtheme: Trolling and other destructive behaviors often result because people do not recognize or don’t care about the consequences flowing from their online actions

Michael Kleeman, formerly with the Boston Consulting Group, Arthur D. Little and Sprint, now senior fellow at the Institute on Global Conflict and Cooperation at the University of California, San Diego, explained: “Historically, communities of practice and conversation had other, often physical, linkages that created norms of behavior. And actors would normally be identified, not anonymous. Increased anonymity coupled with an increase in less-than-informed input, with no responsibility by the actors, has tended and will continue to create less open and honest conversations and more one-sided and negative activities.”

Trolls now know that their methods are effective and carry only minimal chance of social stigma and essentially no other punishment. Anonymous respondent

An expert respondent who chose not to be identified commented, “People are snarky and awful online in large part because they can be anonymous.” And another such respondent wrote, “Trolls now know that their methods are effective and carry only minimal chance of social stigma and essentially no other punishment. If Gamergate can harass and dox any woman with an opinion and experience no punishment as a result, how can things get better?”

Anonymously, a professor at Massachusetts Institute of Technology (MIT) commented, “We see a dark current of people who equate free speech with the right to say anything, even hate speech, even speech that does not sync with respected research findings. They find in unmediated technology a place where their opinions can have a multiplier effect, where they become the elites.”

Subtheme: Inequities drive at least some of the inflammatory dialogue

Some leading participants in this canvassing said the tone of discourse will worsen in the next decade due to inequities and prejudice, noting wealth disparity, the hollowing out of the middle class, and homophily (the tendency of people to bond with those similar to themselves and thus also at times to shun those seen as “the other”).

Unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities. Axel Bruns

Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, offered a bleak assessment, writing, “Thomas Piketty, etc., have correctly predicted that we are in an era of greater social instability created by greater wealth disparity which can only be solved through either the wealthy collectively opting for a redistributive solution (which feels unlikely) or everyone else compelling redistribution (which feels messy, unstable, and potentially violent). The internet is the natural battleground for whatever breaking point we reach to play out, and it’s also a useful surveillance, control, and propaganda tool for monied people hoping to forestall a redistributive future. The Chinese internet playbook – the 50c army, masses of astroturfers, libel campaigns against ‘enemies of the state,’ paranoid war-on-terror rhetoric – has become the playbook of all states, to some extent (see, e.g., the HBGary leak that revealed the U.S. Air Force was putting out procurement tenders for ‘persona management’ software that allowed their operatives to control up to 20 distinct online identities, each). That will create even more inflammatory dialogue, flamewars, polarized debates, etc.”

And an anonymous professor at MIT remarked, “Traditional elites have lost their credibility because they have become associated with income inequality and social injustice. … This dynamic has to shift before online life can play a livelier part in the life of the polity. I believe that it will, but slowly.”

Axel Bruns, a professor at the Queensland University of Technology’s Digital Media Research Centre, said, “Unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities.”

Subtheme: The ever-expanding scale of internet discourse and its accelerating complexity make it difficult to deal with problematic content and contributors

As billions more people are connected online and technologies such as AI chatbots, the Internet of Things, and virtual and augmented reality continue to mature, complexity is always on the rise. Some respondents said well-intentioned attempts to raise the level of discourse are less likely to succeed in a rapidly changing and widening information environment.

As more people get internet access – and especially smartphones, which allow people to connect 24/7 – there will be increased opportunities for bad behavior. Jessica Vitak

Matt Hamblen, senior editor at Computerworld, commented, “[By 2026] social media and other forms of discourse will include all kinds of actors who had no voice in the past; these include terrorists, critics of all kinds of products and art forms, amateur political pundits, and more.”

An anonymous respondent wrote, “Bad actors will have means to do more, and more significant bad actors will be automated as bots are funded in extra-statial ways to do more damage – because people are profiting from this.”

Jessica Vitak, an assistant professor at the University of Maryland, commented, “Social media’s affordances, including increased visibility and persistence of content, amplify the volume of negative commentary. As more people get internet access – and especially smartphones, which allow people to connect 24/7 – there will be increased opportunities for bad behavior.”

Bryan Alexander, president of Bryan Alexander Consulting, added, “The number of venues will rise with the expansion of the Internet of Things and when consumer-production tools become available for virtual and mixed reality.”

Theme 2: Things will stay bad because tangible and intangible economic and political incentives support trolling. Participation = power and profits

Subtheme: ‘Hate, anxiety, and anger drive participation,’ which equals profits and power, so online social platforms and mainstream media support and even promote uncivil acts

Frank Pasquale, professor of law at the University of Maryland and author of “The Black Box Society,” commented, “The major internet platforms are driven by a profit motive. Very often, hate, anxiety and anger drive participation with the platform. Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”

It’s a brawl, a forum for rage and outrage. … The more we come back, the more money they make off of ads and data about us. So the shouting match goes on. Andrew Nachison

Kate Crawford, a well-known internet researcher studying how people engage with networked technologies, observed, “Distrust and trolling is happening at the highest levels of political debate, and the lowest. The Overton Window has been widened considerably by the 2016 U.S. presidential campaign, and not in a good way. We have heard presidential candidates speak of banning Muslims from entering the country, asking foreign powers to hack former White House officials, retweeting neo-Nazis. Trolling is a mainstream form of political discourse.”

Andrew Nachison, founder at We Media, said, “It’s a brawl, a forum for rage and outrage. It’s also dominated by social media platforms on the one hand and content producers on the other that collude and optimize for quantity over quality. Facebook adjusts its algorithm to provide a kind of quality – relevance for individuals. But that’s really a ruse to optimize for quantity. The more we come back, the more money they make off of ads and data about us. So the shouting match goes on. I don’t know that prevalence of harassment and ‘bad actors’ will change – it’s already bad – but if the overall tone is lousy, if the culture tilts negative, if political leaders popularize hate, then there’s good reason to think all of that will dominate the digital debate as well.”

Subtheme: Technology companies have little incentive to rein in uncivil discourse, and traditional news organizations – which used to shape discussions – have shrunk in importance

Several of the expert respondents said that because algorithmic solutions tend “to reward that which keeps us agitated,” it is especially damaging that pre-internet news organizations have fallen out of favor. Those organizations once employed fairly objective and well-trained (if not well-paid) armies of arbiters who served as democratic shapers of the defining climate of social and political discourse; they have been replaced by creators of clickbait headlines read and shared by short-attention-span social sharers.

It is in the interest of the paid-for media and most political groups to continue to encourage ‘echo-chamber’ thinking and to consider pragmatism and compromise as things to be discouraged. David Durant

David Clark, a senior research scientist at MIT and Internet Hall of Famer, commented that he worries over the loss of character in the internet community. “It is possible, with attention to the details of design that lead to good social behavior, to produce applications that better regulate negative behavior,” he wrote. “However, it is not clear what actor has the motivation to design and introduce such tools. The application space on the internet today is shaped by large commercial actors, and their goals are profit-seeking, not the creation of a better commons. I do not see tools for public discourse being good ‘money makers,’ so we are coming to a fork in the road – either a new class of actor emerges with a different set of motivations, one that is prepared to build and sustain a new generation of tools, or I fear the overall character of discourse will decline.”

An anonymous principal security consultant wrote, “As long as success – and in the current climate, profit as a common proxy for success – is determined by metrics that can be easily improved by throwing users under the bus, places that run public areas online will continue to do just that.”

Steven Waldman, founder and CEO of LifePosts, said, “It certainly sounds noble to say the internet has democratized public opinion. But it’s now clear: It has given voice to those who had been voiceless because they were oppressed minorities and to those who were voiceless because they are crackpots. … It may not necessarily be ‘bad actors’ – i.e., racists, misogynists, etc. – who win the day, but I do fear it will be the more strident. I suspect there will be ventures geared toward counter-programming against this, since many people are uncomfortable with it. But venture-backed tech companies have a huge bias toward algorithmic solutions that have tended to reward that which keeps us agitated. Very few media companies now have staff dedicated to guiding conversations online.”

John Anderson, director of journalism and media studies at Brooklyn College, wrote, “The continuing diminution of what Cass Sunstein once called ‘general-interest intermediaries’ such as newspapers, network television, etc. means we have reached a point in our society where wildly different versions of ‘reality’ can be chosen and customized by people to fit their existing ideological and other biases. In such an environment there is little hope for collaborative dialogue and consensus.”

David Durant, a business analyst at U.K. Government Digital Service, argued, “It is in the interest of the paid-for media and most political groups to continue to encourage ‘echo-chamber’ thinking and to consider pragmatism and compromise as things to be discouraged. While this trend continues, the ability for serious civilized conversations about many topics will remain very hard to achieve.”

Subtheme: Terrorists and other political actors are benefiting from the weaponization of online narratives by implementing human- and bot-based misinformation and persuasion tactics

The weaponization of social media and “capture” of online belief systems, also known as “narratives,” emerged from obscurity in 2016 due to the perceived impact of social media uses by terror organizations and political factions. Accusations of Russian influence via social media on the U.S. presidential election brought to public view the ways in which strategists of all stripes are endeavoring to influence people through the sharing of often false or misleading stories, photos and videos. “Fake news” moved to the forefront of ongoing discussions about the displacement of traditional media by social platforms. Earlier, in the summer of 2016, participants in this canvassing submitted concerns about misinformation in online discourse creating distorted views.

There’s money, power, and geopolitical stability at stake now, it’s not a mere matter of personal grumpiness from trolls. Anonymous respondent

Anonymously, a futurist, writer, and author at Wired, explained, “New levels of ‘cyberspace sovereignty’ and heavy-duty state and non-state actors are involved; there’s money, power, and geopolitical stability at stake now, it’s not a mere matter of personal grumpiness from trolls.”

Karen Blackmore , a lecturer in IT at the University of Newcastle, wrote, “Misinformation and anti-social networking are degrading our ability to debate and engage in online discourse. When opinions based on misinformation are given the same weight as those of experts and propelled to create online activity, we tread a dangerous path. Online social behaviour, without community-imposed guidelines, is subject to many potentially negative forces. In particular, social online communities such as Facebook also function as marketing tools, where sensationalism is widely employed and community members who view this dialogue as their news source gain a very distorted view of current events and community views on issues. This is exacerbated with social network and search engine algorithms effectively sorting what people see to reinforce worldviews.”

Laurent Schüpbach, a neuropsychologist at University Hospital in Zurich, focused his entire response on burgeoning acts of economic and political manipulation, writing, “The reason it will probably get worse is that companies and governments are starting to realise that they can influence people’s opinions that way. And these entities sure know how to circumvent any protection in place. Russian troll armies are a good example of something that will become more and more common in the future.”

David Wuertele, a software engineer at Tesla Motors, commented, “Unfortunately, most people are easily manipulated by fear. … Negative activities on the internet will exploit those fears, and disproportionate responses will also attempt to exploit those fears. Soon, everyone will have to take off their shoes and endure a cavity search before boarding the internet.”

Theme 3: Things will get better because technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI)

Most respondents said it is likely that the coming decade will see a widespread move to more-secure services, applications, and platforms and more robust user-identification policies. Some said people born into the social media age will adapt. Some predict that more online systems will require clear identification of participants. This means that online social forums could splinter into various formats, some of which would be highly protected and monitored while others could retain the free-for-all character of today’s platforms.

Subtheme: AI sentiment analysis and other tools will detect inappropriate behavior and many trolls will be caught in the filter; human oversight by moderators might catch others

Some experts in this canvassing say progress is already being made on some fronts toward better technological and human solutions.

The future Web will give people much better ways to control the information that they receive, which will ultimately make problems like trolling manageable. David Karger

Galen Hunt, a research manager at Microsoft Research NExT, replied, “As language-processing technology develops, technology will help us identify and remove bad actors, harassment, and trolls from accredited public discourse.”

Stowe Boyd, chief researcher at Gigaom, observed, “I anticipate that AIs will be developed that will rapidly decrease the impact of trolls. Free speech will remain possible, although AI filtering will make a major dent on how views are expressed, and hate speech will be blocked.”

Marina Gorbis , executive director at the Institute for the Future, added, “I expect we will develop more social bots and algorithmic filters that would weed out some of the trolls and hateful speech. I expect we will create bots that would promote beneficial connections and potentially insert context-specific data/facts/stories that would benefit more positive discourse. Of course, any filters and algorithms will create issues around what is being filtered out and what values are embedded in algorithms.”

Jean Russell of Thrivable Futures wrote, “First, conversations can have better containers that filter for real people who consistently act with decency. Second, software is getting better and more nuanced in sentiment analysis, making it easier for software to augment our filtering out of trolls. Third, we are at peak identity crisis and a new wave of people want to cross the gap in dialogue to connect with others before the consequences of being tribal get worse (Brexit, Trump, etc.).”

David Karger , a professor of computer science at MIT, said, “My own research group is exploring several novel directions in digital commentary. In the not too distant future all this work will yield results. Trolling, doxxing, echo chambers, click-bait, and other problems can be solved. We will be able to ascribe sources and track provenance in order to increase the accuracy and trustworthiness of information online. We will create tools that increase people’s awareness of opinions differing from their own and support conversations with and learning from people who hold those opinions. … The future Web will give people much better ways to control the information that they receive, which will ultimately make problems like trolling manageable (trolls will be able to say what they want, but few will be listening).”
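Several respondents above point to sentiment analysis and algorithmic filters as a first line of defense against trolling. As a purely illustrative sketch of the simplest form such filtering can take, a lexicon-based filter might look like the following. The word list, threshold and function names are assumptions made for this example; production moderation systems use trained classifiers rather than fixed lexicons.

```python
# Minimal sketch of lexicon-based comment filtering, in the spirit of
# the AI-filtering approaches respondents describe. The lexicon and
# threshold below are illustrative only.

TOXIC_TERMS = {"idiot", "moron", "loser", "trash"}  # assumed lexicon

def toxicity_score(comment: str) -> float:
    """Return the fraction of words found in the toxic lexicon."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TOXIC_TERMS)
    return hits / len(words)

def filter_comments(comments, threshold=0.2):
    """Split comments into (kept, held_for_review) by toxicity score."""
    kept, held = [], []
    for c in comments:
        (held if toxicity_score(c) >= threshold else kept).append(c)
    return kept, held

kept, held = filter_comments([
    "Great point, thanks for sharing.",
    "You are an idiot and a loser.",
])
```

Note that this sketch also illustrates the concern Gorbis raises: whatever ends up in the lexicon (or training data) encodes the values of whoever built the filter.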

Subtheme: There will be partitioning, exclusion and division of online outlets, social platforms and open spaces

Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar. Lindsay Kenzig

Facebook, Twitter, Instagram, Google and other platform providers already “shape” and thus limit what the public views via the implementation of algorithms. As people have become disenchanted with uncivil discourse on “open” platforms, they stop using them or close their accounts, sometimes moving to smaller online communities of people with similar needs or ideologies. Some experts expect that these trends will continue and even more partitions, divisions and exclusions may emerge as measures are taken to clean things up. For instance, it is expected that the capabilities of AI-based bots dispatched to assist with information sorting, security, and regulation of the tone and content of discourse will continue to be refined.

Lindsay Kenzig , a senior design researcher, said, “Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar. There will still be some places where you can find those with whom to argue, but they will be more concentrated into only a few locations than they are now.”

Valerie Bock , of VCB Consulting, commented, “Spaces where people must post under their real names and where they interact with people with whom they have multiple bonds regularly have a higher quality of discourse. … In response to this reality, we’ll see some consolidation as it becomes easier to shape commercial interactive spaces to the desired audience. There will be free-for-all spaces and more-tightly-moderated walled gardens, depending on the sponsor’s strategic goals. There will also be private spaces maintained by individuals and groups for specific purposes.”

Lisa Heinz , a doctoral student at Ohio University, commented, “Humanity’s reaction to negative forces will likely contribute more to the ever-narrowing filter bubble, which will continue to create an online environment that lacks inclusivity by its exclusion of opposing viewpoints. An increased demand for systemic internet-based AI will create bots that will begin to interact – as proxies for the humans that train them – with humans online in real-time and with what would be recognized as conversational language, not the word-parroting bot behavior we see on Twitter now. … When this happens, we will see bots become part of the filter bubble phenomenon as a sort of mental bodyguard that prevents an intrusion of people and conversations of which individuals want no part. The unfortunate aspect of this iteration of the filter bubble is that while free speech itself will not be affected, people will project their voices into the chasm, but few will hear them.”

Bob Frankston , internet pioneer and software innovator, wrote, “I see negative activities having an effect but the effect will likely be from communities that shield themselves from the larger world. We’re still working out how to form and scale communities.”

The expert comments in response to this canvassing were recorded in the summer of 2016; by early 2017, after many events (Brexit, the U.S. election, others mentioned earlier in this report) surfaced concerns about civil discourse, misinformation and impacts on democracy, an acceleration of activity tied to solutions emerged. Facebook, Twitter and Google announced new efforts toward technological approaches; many conversations began about new ways to support public affairs journalism; and consumer bubble-busting tools, including “Outside Your Bubble” and “Escape Your Bubble,” were introduced.

Subtheme: Trolls and other actors will fight back, innovating around any barriers they face

Some participants in this canvassing said they expect the existing arms race dynamic to expand, as some people create and apply new measures to ride herd on online discourse while others constantly endeavor to thwart them.

Cathy Davidson , founding director of the Futures Initiative at the Graduate Center of the City University of New York, said, “We’re in a spy vs. spy internet world where the faster that hackers and trolls attack, the faster companies (Mozilla, thank you!) plus for-profits come up with ways to protect against them and then the hackers develop new strategies against those protections, and so it goes. I don’t see that ending. … I would not be surprised at more publicity in the future, as a form of cyber-terror. That’s different from trolls, more geo-politically orchestrated to force a national or multinational response. That is terrifying if we do not have sound, smart, calm leadership.”

Sam Anderson , coordinator of instructional design at the University of Massachusetts, Amherst, said, “It will be an arms race between companies and communities that begin to realize (as some online games companies like Riot have) that toxic online communities will lower their long-term viability and potential for growth. This will war with incentives for short-term gains that can arise out of bursts of angry or sectarian activity (Twitter’s character limit inhibits nuance, which increases reaction and response).”

Theme 4: Oversight and community moderation come with a cost. Some solutions could further change the nature of the internet because surveillance will rise; the state may regulate debate; and these changes will polarize people and limit access to information and free speech

A share of respondents said greater regulation of speech and technological solutions to curb harassment and trolling will result in more surveillance, censorship and cloistered communities. They worry this will change people’s sharing behaviors online, limit exposure to diverse ideas and challenge freedom.

Subtheme: Surveillance will become even more prevalent

While several respondents indicated that there is no longer a chance of anonymity online, many say privacy and choice are still options, and they should be protected.

Terrorism and harassment by trolls will be presented as the excuses, but the effect will be dangerous for democracy. Richard Stallman

Longtime internet civil libertarian Richard Stallman , Internet Hall of Fame member and president of the Free Software Foundation, spoke to this fear. He predicted, “Surveillance and censorship will become more systematic, even in supposedly free countries such as the U.S. Terrorism and harassment by trolls will be presented as the excuses, but the effect will be dangerous for democracy.”

Rebecca MacKinnon , director of Ranking Digital Rights at New America, wrote, “I’m very concerned about the future of free speech given current trends. The demands for governments and companies to censor and monitor internet users are coming from an increasingly diverse set of actors with very legitimate concerns about safety and security, as well as concerns about whether civil discourse is becoming so poisoned as to make rational governance based on actual facts impossible. I’m increasingly inclined to think that the solutions, if they ever come about, will be human/social/political/cultural and not technical.”

James Kalin of Virtually Green wrote, “Surveillance capitalism is increasingly grabbing and mining data on everything that anyone says, does, or buys online. The growing use of machine learning processing of the data will drive ever more subtle and pervasive manipulation of our purchasing, politics, cultural attributes, and general behavior. On top of this, the data is being stolen routinely by bad actors who will also be using machine learning processing to steal or destroy things we value as individuals: our identities, privacy, money, reputations, property, elections, you name it. I see a backlash brewing, with people abandoning public forums and social network sites in favor of intensely private ‘black’ forums and networks.”

Subtheme: Dealing with hostile behavior and addressing violence and hate speech will become the responsibility of the state instead of the platform or service providers

A number of respondents said they expect governments or other authorities will begin implementing regulation or other reforms to address these issues, most indicating that the competitive instincts of platform providers do not work in favor of the implementation of appropriate remedies without some incentive.

My fear is that because of the virtually unlimited opportunities for negative use of social media globally we will experience a rising worldwide demand for restrictive regulation. Paula Hooper Mayhew

Michael Rogers , author and futurist at Practical Futurist, predicted governments will assume control over identifying internet users. He observed, “I expect there will be a move toward firm identities – even legal identities issued by nations – for most users of the Web. There will as a result be public discussion forums in which it is impossible to be anonymous. There would still be anonymity available, just as there is in the real world today. But there would be online activities in which anonymity was not permitted. Clearly this could have negative free-speech impacts in totalitarian countries but, again, there would still be alternatives for anonymity.”

Paula Hooper Mayhew , a professor of humanities at Fairleigh Dickinson University, commented, “My fear is that because of the virtually unlimited opportunities for negative use of social media globally we will experience a rising worldwide demand for restrictive regulation. This response may work against support of free speech in the U.S.”

Marc Rotenberg , executive director of the Electronic Privacy Information Center (EPIC), wrote, “The regulation of online communications is a natural response to the identification of real problems, the maturing of the industry, and the increasing expertise of government regulators.”

Subtheme: Polarization will occur due to the compartmentalization of ideologies

John Markoff , senior writer at The New York Times, commented, “There is growing evidence that the Net is a polarizing force in the world. I don’t pretend to completely understand the dynamic, but my surmise is that it is actually building more walls than it is tearing down.”

Marcus Foth , a professor at Queensland University of Technology, said, “Public discourse online will become less shaped by bad actors … because the majority of interactions will take place inside walled gardens. … Social media platforms hosted by corporations such as Facebook and Twitter use algorithms to filter, select, and curate content. With less anonymity and less diversity, the two biggest problems of the Web 1.0 era have been solved from a commercial perspective: fewer trolls who can hide behind anonymity. Yet, what are we losing in the process? Algorithmic culture creates filter bubbles, which risk an opinion polarisation inside echo chambers.”

Emily Shaw , a U.S. civic technologies researcher for mySociety, predicted, “Since social networks … are the most likely future direction for public discourse, a million (self)-walled gardens are more likely to be the outcome than is an increase in hostility, because that’s what’s more commercially profitable.”

Subtheme: Increased monitoring, regulation and enforcement will shape content to such an extent that the public will not gain access to important information and possibly lose free speech

Experts predict that increased oversight and surveillance, if left unchecked, could lead to dominant institutions and actors using their power to suppress alternative news sources, censor ideas, track individuals, and selectively block network access. This, in turn, could mean the public might never know what it is missing, since information will be filtered, removed, or concealed.

The fairness and freedom of the internet’s early days are gone. Now it’s run by big data, Big Brother, and big profits. Thorlaug Agustsdottir

Thorlaug Agustsdottir of Iceland’s Pirate Party said, “Monitoring is and will be a massive problem, with increased government control and abuse. The fairness and freedom of the internet’s early days are gone. Now it’s run by big data, Big Brother, and big profits. Anonymity is a myth; it only exists for end users who lack lookup resources.”

Joe McNamee , executive director at European Digital Rights, said, “In the context of a political environment where deregulation has reached the status of ideology, it is easy for governments to demand that social media companies do ‘more’ to regulate everything that happens online. We see this with the European Union’s ‘code of conduct’ with social media companies. This privatisation of regulation of free speech (in a context of huge, disproportionate, asymmetrical power due to the data stored and the financial reserves of such companies) raises existential questions for the functioning of healthy democracies.”

Randy Bush , Internet Hall of Fame member and research fellow at Internet Initiative Japan, wrote, “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day.”

Dan York , senior content strategist at the Internet Society, wrote, “Unfortunately, we are in for a period where the negative activities may outshine the positive activities until new social norms can develop that push back against the negativity. It is far too easy right now for anyone to launch a large-scale public negative attack on someone through social media and other channels – and often to do so anonymously (or hiding behind bogus names). This then can be picked up by others and spread. The ‘mob mentality’ can be easily fed, and there is little fact-checking or source-checking these days before people spread information and links through social media. I think this will cause some governments to want to step in to protect citizens and thereby potentially endanger both free speech and privacy.”

Responses from other key experts regarding the future of online social climate

This section features responses by several more of the many top analysts who participated in this canvassing. Following this wide-ranging set of comments on the topic will be a much more expansive set of quotations directly tied to the set of four themes.

‘We’ll see more bad before good because the governing culture is weak and will remain so’

Baratunde Thurston , a director’s fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion, replied, “To quote everyone ever, things will get worse before they get better. We’ve built a system in which access and connectivity are easy, the cost of publishing is near zero, and accountability and consequences for bad action are difficult to impose or toothless when they do. Plus consider that more people are getting online every day with no norm-setting for their behavior, and the systems that prevail now reward attention grabbing and extended time online. They reward emotional investment whether positive or negative. They reward conflict. So we’ll see more bad before more good because the governing culture is weak and will remain so while the financial models backing these platforms remain largely ad-based and rapid/scaled user growth-centric.”

‘We should reach ‘peak troll’ before long but there are concerns for free speech’

Brad Templeton , one of the early luminaries of Usenet and longtime Electronic Frontier Foundation board member, currently chair for computing at Singularity University, commented, “Now that everybody knows about this problem I expect active technological efforts to reduce the efforts of the trolls, and we should reach ‘peak troll’ before long. There are concerns for free speech. My hope is that pseudonymous reputation systems might protect privacy while doing this.”

‘People will find it tougher to avoid accountability’

Esther Dyson , founder of EDventure Holdings and technology entrepreneur, writer, and influencer, wrote: “Things will get somewhat better because people will find it tougher to avoid accountability. Reputations will follow you more than they do now. … There will also be clever services like CivilComments.com (disclosure: I’m an investor) that foster crowdsourced moderation rather than censorship of comments. That approach, whether by CivilComments or future competitors, will help. (So would sender-pays, recipient-charges email, a business I would *like* to invest in!) Nonetheless, anonymity is an important right – and freedom of speech with impunity (except for actual harm, yada yada) – is similarly important. Anonymity should be discouraged in general, but it is necessary in regimes or cultures or simply situations where the truth is avoided and truth-speakers are punished.”

Chatbots can help, but we need to make sure they don’t encode hate

Amy Webb , futurist and CEO at the Future Today Institute, said, “Right now, many technology-focused companies are working on ‘conversational computing,’ and the goal is to create a seamless interface between humans and machines. If you have [a] young child, she can be expected to talk to – rather than type on – machines for the rest of her life. In the coming decade, you will have more and more conversations with operating systems, and especially with chatbots, which are programmed to listen to, learn from and react to us. You will encounter bots first throughout social media, and during the next decade, they will become pervasive digital assistants helping you on many of the systems you use. Currently, there is no case law governing the free speech of a chatbot. During the 2016 election cycle, there were numerous examples of bots being used for political purposes. For example, there were thousands of bots created to mimic Latino/Latina voters supporting Donald Trump . If someone tweeted a disparaging remark about Trump and Latinos, bots that looked and sounded like members of the Latino community would target that person with tweets supporting Trump. Right now, many of the chatbots we interact with on social media and various websites aren’t so smart. But with improvements in artificial intelligence and machine learning, that will change. Without a dramatic change in how training databases are built and how our bots are programmed, we will realize a decade from now that we inadvertently encoded structural racism, homophobia, sexism and xenophobia into the bots helping to power our everyday lives. When chatbots start running amok – targeting individuals with hate speech – how will we define ‘speech’? At the moment, our legal system isn’t planning for a future in which we must consider the free speech infringements of bots.”

A trend toward decentralization and distributed problem solving will improve things

Doc Searls , journalist, speaker, and director of Project VRM at Harvard University’s Berkman Center for Internet and Society, wrote: “Harassment, trolling … these things thrive with distance, which favors the reptile brains in us all, making bad acting more possible and common. … Let’s face it, objectifying, vilifying, fearing, and fighting The Other has always been a problem for our species. … The internet we share today was only born on 30 April 1995, when the last backbone that forbade commercial activity stood down. Since then we have barely begun to understand, much less civilize, this new place without space. … I believe we are at the far end of this swing toward centralization on the Net. As individuals and distributed solutions to problems (e.g., blockchain [a digital ledger in which transactions are recorded chronologically and publicly]) gain more power and usage, we will see many more distributed solutions to fundamental social and business issues, such as how we treat each other.”

There are designs and tech advances ‘that would help tremendously’

Judith Donath of Harvard University’s Berkman Center, author of “The Social Machine: Designs for Living Online,” wrote, “With the current practices and interfaces, yes, trolls and bots will dominate online public discourse. But that need not be the case: there are designs and technological advances that would help tremendously. We need systems that support pseudonymity: locally persistent identities. Persistence provides accountability: people are responsible for their words. Locality protects privacy: people can participate in discussions without concern that their government, employer, insurance company, marketers, etc., are listening in (so if they are, they cannot connect the pseudonymous discourse to the actual person). We should have digital portraits that succinctly depict a (possibly pseudonymous) person’s history of interactions and reputation within a community. We need to be able to quickly see who is new, who is well-regarded, what role a person has played in past discussions. A few places do so now (e.g., StackExchange) but their basic charts are far from the goal: intuitive and expressive portrayals. ‘Bad actors’ and trolls (and spammers, harassers, etc.) have no place in most discussions – the tools we need for them are filters; we need to develop better algorithms for detecting destructive actions as defined by the local community. Beyond that, the more socially complex question is how to facilitate constructive discussions among people who disagree. Here, we need to rethink the structure of online discourse. The role of discussion host/moderator is poorly supported by current tech – and many discussions would proceed much better in a model other than the current linear free-for-all. Our face-to-face interactions have amazing subtlety – we can encourage or dissuade with slight changes in gaze, facial expression, etc. We need to create tools for conversation hosts (think of your role when you post something on your own Facebook page that sparks controversy) that help them to gracefully steer conversations.”
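As a rough illustration of the locally persistent, reputation-bearing pseudonym Donath describes, consider the sketch below. The field names and the reputation formula are illustrative assumptions made for this example, not a description of any deployed system; a real “digital portrait” would be far richer.

```python
# Rough sketch of a "digital portrait": a locally persistent pseudonym
# carrying a history of interactions within one community. The fields
# and the reputation formula are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Pseudonym:
    handle: str        # persistent within this community only
    posts: int = 0
    upvotes: int = 0
    flags: int = 0     # destructive actions flagged by the community

    def record_post(self, upvotes: int = 0, flagged: bool = False) -> None:
        self.posts += 1
        self.upvotes += upvotes
        self.flags += int(flagged)

    @property
    def reputation(self) -> float:
        """Simple portrait statistic: flags weigh heavily against you."""
        if self.posts == 0:
            return 0.0
        return (self.upvotes - 5 * self.flags) / self.posts

veteran = Pseudonym("quiet_gardener")
veteran.record_post(upvotes=4)
veteran.record_post(upvotes=2)

newcomer = Pseudonym("drive_by_99")
newcomer.record_post(flagged=True)
```

Because the record is local to one community, a reader can quickly see who is new and who is well-regarded there, while the pseudonym reveals nothing about the person’s identity elsewhere.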

‘Reward systems favor outrage mongering and attention seeking almost exclusively’

Seth Finkelstein , writer and pioneering computer programmer, believes the worst is yet to come: “One of the less-examined aspects of the 2016 U.S. presidential election is that Donald Trump is demonstrating to other politicians how to effectively exploit such an environment. He wasn’t the first to do it, by far. But he’s showing how very high-profile, powerful people can adapt and apply such strategies to social media. Basically, we’re moving out of the ‘early adopter’ phase of online polarization, into making it mainstream. The phrasing of this question conflates two different issues. It uses a framework akin to ‘Will our kingdom be more or less threatened by brigands, theft, monsters, and an overall atmosphere of discontent, strife, and misery?’ The first part leads one to think of malicious motives and thus to attribute the problems of the second part along the lines of outside agitators afflicting peaceful townsfolk. Of course deliberate troublemakers exist. Yet many of the worst excesses come from people who believe in their own minds that they are not bad actors at all, but are fighting a good fight for all which is right and true (indeed, in many cases, both sides of a conflict can believe this, and where you stand depends on where you sit). When reward systems favor outrage mongering and attention seeking almost exclusively, nothing is going to be solved by inveighing against supposed moral degenerates.”

Some bad behavior is ‘pent-up’ speech from those who have been voiceless

Jeff Jarvis , a professor at the City University of New York Graduate School of Journalism, wrote, “I am an optimist with faith in humanity. We will see whether my optimism is misplaced. I believe we are seeing the release of a pressure valve (or perhaps an explosion) of pent-up speech: the ‘masses’ who for so long could not be heard can now speak, revealing their own interests, needs, and frustrations – their own identities distinct from the false media concept of the mass. Yes, it’s starting out ugly. But I hope that we will develop norms around civilized discourse. Oh, yes, there will always be … trolls. What we need is an expectation that it is destructive to civil discourse to encourage them. Yes, it might have seemed fun to watch the show of angry fights. It might seem fun to media to watch institutions like the Republican Party implode. But it soon becomes evident that this is no fun. A desire and demand for civil, intelligent, useful discourse will return; no society or market can live on misinformation and emotion alone. Or that is my hope. How long will this take? It could be years. It could be a generation. It could be, God help us, never.”

Was the idea of ‘reasoned discourse’ ever reasonable?

Mike Roberts , Internet Hall of Fame member and first president and CEO of ICANN, observed, “Most attempts at reasoned discourse on topics interesting to me have been disrupted by trolls in the last decade or so. Many individuals faced with this harassment simply withdraw. … There is a somewhat broader question of whether expectations of ‘reasoned’ discourse were ever realistic. The history of this, going back to Plato, is one of self-selection into congenial groups. The internet, among other things, has energized a variety of anti-social behaviors by people who get satisfaction from the attendant publicity. My wife’s reaction is ‘why are you surprised?’ in regard to seeing behavior online that already exists offline.”

Our disembodied online identity compels us to ‘ramp up the emotional content’

Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., wrote,

“In the next decade a number of factors in public discourse online will continue to converge and vigorously affect each other:

1) Nowness is the ultimate arbiter: The value of our discourse (everything we see or hear) will be weighted by how immediate or instantly seen and communicated the information is. Real-time search, geolocation, just-in-time updates, Twitter, etc., are making of now, the present moment, an all-subsuming reality that tends to bypass anything that isn’t hyper-current.

2) Faceless selfism rocks: With photos and video, we can present ourselves dimensionally, but due to the lack of ‘facework’ in the online sim, our faces are absent or frozen in a framed portrait found elsewhere, and so there is no face-to-face, no dynamic interactivity, no responsive reading to our commentary, except in a follow-up comment. Still, we will get better at using public discourse as self-promotion.

3) Anonymity changes us: Identity-shielding leads to a different set of ‘manners’ or mannerisms that stem from our sense (not accurate, of course) that online we are anonymous.

4) Context AWOL: Our present ‘filter failure,’ to borrow Clay Shirky’s phrase, is an almost complete lack of context, reality check, or perspective. In the next decade we will start building better contextual frameworks for information.

5) Volume formula: The volume of content, from all quarters – anyone with a keypad, a device – makes it difficult to manage responses, or even to filter for relevance, but tends to favor emotional button-pushing in order to be noticed.

6) Ersatz us: Online identities will be more made-up, more fictional, but also more malleable than typical ‘facework’ or other human interactions. We can pretend, for a while, to be an ersatz version of ourselves.

7) Any retort in a (tweet) storm: Again, given the lack of ‘facework’ or immediate facial response that defined human response for millennia, we will ramp up the emotional content of messaging to ensure some kind of response, frequently rewarding the brash and outrageous over the slow and thoughtful.”

We will get better at articulating and enforcing helpful norms

David Weinberger , senior researcher at Harvard University’s Berkman Klein Center for Internet & Society, said, “Conversations are always shaped by norms and what the environment enables. For example, seating 100 dinner guests at one long table will shape the conversations differently than putting them at ten tables of ten, or 25 tables of four. The acoustics of the room will shape the conversations. Assigning seats or not will shape the conversations. Even serving wine instead of beer may shape the conversations. The same considerations are even more important on the Net because its global nature means that we have fewer shared norms, and its digital nature means that we have far more room to play with ways of bringing people together. We’re getting much better at nudging conversations into useful interchanges. I believe we will continue to get better at it.”

Anonymity is on its way out, and that will discourage trolling

Patrick Tucker , author of “The Naked Future” and technology editor at Defense One, said, “Today’s negative online user environment is supported and furthered by two trends that are unlikely to last into the next decade: anonymity in posting and validation from self-identified subgroups. Increasingly, marketers need to better identify users, and authentication APIs (authentication through Facebook, for example) are challenging online anonymity. The passing of anonymity will also shift the cost-benefit analysis of writing or posting something to appeal to only a self-identified bully group rather than a broad spectrum of people.”

Polarization breeds incivility and that is reflected in the incivility of online discourse

Alice Marwick , a fellow at Data & Society, commented, “Currently, online discourse is becoming more polarized and thus more extreme, mirroring the overall separation of people with differing viewpoints in the larger U.S. population. Simultaneously, several of the major social media players have been unwilling or slow to take action to curb organized harassment. Finally, the marketplace of online attention encourages so-called ‘clickbait’ articles and sensationalized news items that often contain misinformation or disinformation, or simply lack rigorous fact-checking. Without structural changes in both how social media sites respond to conflict and the economic incentives for spreading inaccurate or sensational information, extremism and therefore conflict will continue. More importantly, the geographical and psychological segmentation of the U.S. population into ‘red’ and ‘blue’ neighborhoods, communities, and states is unlikely to change. It is the latter that gives rise to overall political polarization, which is reflected in the incivility of online discourse.”

‘New variations of digital malfeasance [will] arise’

Jamais Cascio, distinguished fellow at the Institute for the Future, replied, “I don’t expect a significant shift in the tone of online discourse over the next decade. Trolling, harassment, etc., will remain commonplace but not be the overwhelming majority of discourse. We’ll see repeated efforts to clamp down on bad online behavior through both tools and norms; some of these efforts will be (or seem) successful, even as new variations of digital malfeasance arise.”

It will get better and worse

Anil Dash, technologist, wrote, “I expect the negative influences on social media to get worse, and the positive factors to get better. Networks will try to respond to prevent the worst abuses, but new sites and apps will pop up that repeat the same mistakes.”

Sites will ban the ‘unvouched anonymous’; look for the rise of ‘registered pseudonyms’

David Brin, author of “The Transparent Society” and a leader at the University of California, San Diego’s Arthur C. Clarke Center for Human Imagination, said, “Some company will get rich by offering registered pseudonyms, so that individuals may wander the Web ‘anonymously’ and yet vouched for and accountable for bad behavior. When this happens, almost all legitimate sites will ban the unvouched anonymous.”

Back around 20 B.C., Horace understood these problems

Fred Baker, fellow at Cisco, commented, “Communications in any medium (the internet being but one example) reflects the people communicating. If those people use profane language, are misogynistic, judge people on irrelevant factors such as race, gender, creed, or other such factors in other parts of their lives, they will do so in any medium of communication, including the internet. If that is increasing in prevalence in one medium, I expect that it is or will in any and every medium over time. The issue isn’t the internet; it is the process of breakdown in the social fabric. … If we worry about the youth of our age ‘going to the dogs,’ are we so different from our ancestors? In ‘Book III of Odes,’ circa 20 B.C., Horace wrote: ‘Our sires’ age was worse than our grandsires. We, their sons, are more worthless than they; so in our turn we shall give the world a progeny yet more corrupt.’ I think the human race is not doomed, not today any more than in Horace’s day. But we have the opportunity to choose to lead them to more noble pursuits and more noble discussion of them.”

‘Every node in our networked world is potentially vulnerable’

Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future, wrote, “After Snowden’s revelations, and in [the] context [of] accelerating cybercrimes and cyberwars, it’s clear that every layer of the technology stack and every node in our networked world is potentially vulnerable. Meanwhile, both magnitude and frequency of exploits are accelerating. As a result, users will continue to modify their behaviors and internet usage, and designers of internet services, systems, and technologies will have to expend growing time and expense on personal and collective security.”

Politicians and companies could engage ‘in an increasing amount of censorship’

Jillian York, director for International Freedom of Expression at the Electronic Frontier Foundation, noted, “The struggle we’re facing is a societal issue we have to address at all levels, and [one] that the structure of social media platforms can exacerbate. Social media companies will need to address this, beyond community policing and algorithmic shaping of our newsfeeds. There are many ways to do this while avoiding censorship; for instance, better-individualized blocking tools and upvote/downvote measures can add nuance to discussions. I worry that if we don’t address the root causes of our current public discourse, politicians and companies will engage in an increasing amount of censorship.”

Sophisticated mathematical equations are having social effects

An anonymous professor at City University of New York wrote, “I see the space of public discourse as managed in new, more-sophisticated ways, and also in more brutal ones. Thus we have social media management in Mexico courtesy of Peñabots, hacking by groups that are quasi-governmental or serving nationalist interests (one thinks of Eastern Europe). Alexander Kluge once said, ‘The public sphere is the site where struggles are decided by other means than war.’ We are seeing an expanded participation in the public sphere, and that will continue. It doesn’t necessarily mean an expansion of democracy, per se. In fact, a lot of these conflicts are cross-border. In general the discussions will stay ahead of official politics in the sense that there will be increasing options for participation. In a way this suggests new kinds of regionalisms, intriguing at a time when the European Union is taking a hit and trade pacts are undergoing re-examination. This type of participation also means opening up new arenas, e.g., Facebook has been accused of left bias in its algorithm. That means we are acknowledging the role of what are essentially sophisticated mathematical equations as having social effects.”

The flip side of retaining privacy: Pervasive derogatory and ugly comments

Bernardo A. Huberman, senior fellow and director of the Mechanisms and Design Lab at Hewlett Packard Enterprise, said, “Privacy as we tend to think of [it] nowadays is going to be further eroded, if only because of the ease with which one can collect data and identify people. Free speech, if construed as the freedom to say whatever one thinks, will continue to exist and even flourish, but the flip side will be a number of derogatory and ugly comments that will become more pervasive as time goes on.”

Much of ‘public online discourse consists of what we and others don’t see’

Stephen Downes, researcher at the National Research Council of Canada, noted, “It’s important to understand that our perception of public discourse is shaped by two major sources: first, our own experience of online public discourse, and second, media reports (sometimes also online) concerning the nature of public discourse. From both sources we have evidence that there is a lot of influence from bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust, as suggested in the question. But a great deal of public online discourse consists of what we and others don’t see.”

How about a movement to teach people to behave?

Marcel Bullinga, trendwatcher and keynote speaker @futurecheck, wrote, “Online we express hate and disgust we would never express offline, face-to-face. It seems that social control is lacking online. We do not confront our neighbours/children/friends with antisocial behaviour. The problem is not [only] anonymous bullying: many bullies have faces and are shameless, and they have communities that encourage bullying. And government subsidies stimulate them – the most frightening aspect of all. We will see the rise of the social robots, technological tools that can help us act as polite, decent social beings (like the REthink app). But more than that we need to go back to teaching and experiencing morals in business and education: back to behaving socially.”

  • A recent Pew Research Center analysis of communications by members of the 114th Congress found that public engagement with the social media postings of these lawmakers was most intense when the citations were negative, angry and resentful.



How Trolls Are Ruining the Internet


This story is not a good idea. Not for society and certainly not for me. Because what trolls feed on is attention. And this little bit–these several thousand words–is like leaving bears a pan of baklava.

It would be smarter to be cautious, because the Internet’s personality has changed. Once it was a geek with lofty ideals about the free flow of information. Now, if you need help improving your upload speeds, the web is eager to help with technical details, but if you tell it you’re struggling with depression it will try to goad you into killing yourself. Psychologists call this the online disinhibition effect, in which factors like anonymity, invisibility, a lack of authority and not communicating in real time strip away the mores society spent millennia building. And it’s seeping from our smartphones into every aspect of our lives.

The people who relish this online freedom are called trolls, a term that originally came from a fishing method online thieves use to find victims. It quickly morphed to refer to the monsters who hide in darkness and threaten people. Internet trolls have a manifesto of sorts, which states they are doing it for the “lulz,” or laughs. What trolls do for the lulz ranges from clever pranks to harassment to violent threats. There’s also doxxing–publishing personal data, such as Social Security numbers and bank accounts–and swatting, calling in an emergency to a victim’s house so the SWAT team busts in. When victims do not experience lulz, trolls tell them they have no sense of humor. Trolls are turning social media and comment boards into a giant locker room in a teen movie, with towel-snapping racial epithets and misogyny.


They’ve been steadily upping their game. In 2011, trolls descended on Facebook memorial pages of recently deceased users to mock their deaths. In 2012, after feminist Anita Sarkeesian started a Kickstarter campaign to fund a series of YouTube videos chronicling misogyny in video games, she received bomb threats at speaking engagements, doxxing threats, rape threats and an unwanted starring role in a video game called Beat Up Anita Sarkeesian. In June of this year, Jonathan Weisman, the deputy Washington editor of the New York Times, quit Twitter, on which he had nearly 35,000 followers, after a barrage of anti-Semitic messages. At the end of July, feminist writer Jessica Valenti said she was leaving social media after receiving a rape threat against her daughter, who is 5 years old.

A Pew Research Center survey published two years ago found that 70% of 18-to-24-year-olds who use the Internet had experienced harassment, and 26% of women that age said they’d been stalked online. This is exactly what trolls want. A 2014 study published in the psychology journal Personality and Individual Differences found that the approximately 5% of Internet users who self-identified as trolls scored extremely high in the dark tetrad of personality traits: narcissism, psychopathy, Machiavellianism and, especially, sadism.

But maybe that’s just people who call themselves trolls. And maybe they do only a small percentage of the actual trolling. “Trolls are portrayed as aberrational and antithetical to how normal people converse with each other. And that could not be further from the truth,” says Whitney Phillips, a literature professor at Mercer University and the author of This Is Why We Can’t Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture. “These are mostly normal people who do things that seem fun at the time that have huge implications. You want to say this is the bad guys, but it’s a problem of us.”

A lot of people enjoy the kind of trolling that illuminates the gullibility of the powerful and their willingness to respond. One of the best is Congressman Steve Smith, a Tea Party Republican representing Georgia’s 15th District, which doesn’t exist. For nearly three years Smith has spewed over-the-top conservative blather on Twitter, luring Senator Claire McCaskill, Christiane Amanpour and Rosie O’Donnell into arguments. Surprisingly, the guy behind the GOP-mocking prank, Jeffrey Marty, isn’t a liberal but a Donald Trump supporter angry at the Republican elite, furious at Hillary Clinton and unhappy with Black Lives Matter. A 40-year-old dad and lawyer who lives outside Tampa, he says he has become addicted to the attention. “I was totally ruined when I started this. My ex-wife and I had just separated. She decided to start a new, more exciting life without me,” he says. Then his best friend, who he used to do pranks with as a kid, killed himself. Now he’s got an illness that’s keeping him home.

Marty says his trolling has been empowering. “Let’s say I wrote a letter to the New York Times saying I didn’t like your article about Trump. They throw it in the shredder. On Twitter I communicate directly with the writers. It’s a breakdown of all the institutions,” he says. “I really do think this stuff matters in the election. I have 1.5 million views of my tweets every 28 days. It’s a much bigger audience than I would have gotten if I called people up and said, ‘Did you ever consider Trump for President?'”

Trolling is, overtly, a political fight. Liberals do indeed troll–sex-advice columnist Dan Savage used his followers to make Googling former Pennsylvania Senator Rick Santorum’s last name a blunt lesson in the hygienic challenges of anal sex; the hunter who killed Cecil the lion got it really bad.

But trolling has become the main tool of the alt-right, an Internet-grown reactionary movement that works for men’s rights and against immigration and may have used the computer from Weird Science to fabricate Donald Trump. Not only does Trump share their attitudes, but he’s got mad trolling skills: he doxxed Republican primary opponent Senator Lindsey Graham by giving out his cell-phone number on TV and indirectly got his Twitter followers to attack GOP political strategist Cheri Jacobus so severely that her lawyers sent him a cease-and-desist order.

The alt-right’s favorite insult is to call men who don’t hate feminism “cucks,” as in “cuckold.” Republicans who don’t like Trump are “cuckservatives.” Men who don’t see how feminists are secretly controlling them haven’t “taken the red pill,” a reference to the truth-revealing drug in The Matrix. They derisively call their adversaries “social-justice warriors” and believe that liberal interest groups purposely exploit their weakness to gain pity, which allows them to control the levers of power. Trolling is the alt-right’s version of political activism, and its ranks view any attempt to take it away as a denial of democracy.

In this new culture war, the battle isn’t just over homosexuality, abortion, rap lyrics, drugs or how to greet people at Christmastime. It’s expanded to anything and everything: video games, clothing ads, even remaking a mediocre comedy from the 1980s. In July, trolls who had long been furious that the 2016 reboot of Ghostbusters starred four women instead of men harassed the film’s black co-star Leslie Jones so badly on Twitter with racist and sexist threats–including a widely copied photo of her at the film’s premiere that someone splattered semen on–that she considered quitting the service. “I was in my apartment by myself, and I felt trapped,” Jones says. “When you’re reading all these gay and racial slurs, it was like, I can’t fight y’all. I didn’t know what to do. Do you call the police? Then they got my email, and they started sending me threats that they were going to cut off my head and stuff they do to ‘N words.’ It’s not done to express an opinion, it’s done to scare you.”

Because of Jones’ harassment, alt-right leader Milo Yiannopoulos was permanently banned from Twitter. (He is also an editor at Breitbart News, the conservative website whose executive chairman, Stephen Bannon, was hired Aug. 17 to run the Trump campaign.) The service said Yiannopoulos, a critic of the new Ghostbusters who called Jones a “black dude” in a tweet, marshaled many of his more than 300,000 followers to harass her. He not only denies this but says being responsible for your fans is a ridiculous standard. He also thinks Jones is faking hurt for political purposes. “She is one of the stars of a Hollywood blockbuster,” he says. “It takes a certain personality to get there. It’s a politically aware, highly intelligent star using this to get ahead. I think it’s very sad that feminism has turned very successful women into professional victims.”

A gay, 31-year-old Brit with frosted hair, Yiannopoulos has been speaking at college campuses on his Dangerous Faggot tour. He says trolling is a direct response to being told by the left what not to say and what kinds of video games not to play. “Human nature has a need for mischief. We want to thumb our nose at authority and be individuals,” he says. “Trump might not win this election. I might not turn into the media figure I want to. But the space we’re making for others to be bolder in their speech is some of the most important work being done today. The trolls are the only people telling the truth.”

The alt-right was galvanized by Gamergate, a 2014 controversy in which trolls tried to drive critics of misogyny in video games away from their virtual man cave. “In the mid-2000s, Internet culture felt very separate from pop culture,” says Katie Notopoulos, who reports on the web as an editor at BuzzFeed and co-host of the Internet Explorer podcast. “This small group of people are trying to stand their ground that the Internet is dark and scary, and they’re trying to scare people off. There’s such a culture of viciously making fun of each other on their message boards that they have this very thick skin. They’re all trained up.”

Andrew Auernheimer, who calls himself Weev online, is probably the biggest troll in history. He served just over a year in prison for identity fraud and conspiracy. When he was released in 2014, he left the U.S., mostly bouncing around Eastern Europe and the Middle East. Since then he has worked to post anti–Planned Parenthood videos and flooded thousands of university printers in America with instructions to print swastikas–a symbol tattooed on his chest. When I asked if I could fly out and interview him, he agreed, though he warned that he “might not be coming ashore for a while, but we can probably pass close enough to land to have you meet us somewhere in the Adriatic or Ionian.” His email signature: “Eternally your servant in the escalation of entropy and eschaton.”

While we planned my trip to “a pretty remote location,” he told me that he no longer does interviews for free and that his rate was two bitcoins (about $1,100) per hour. That’s when one of us started trolling the other, though I’m not sure which:

From: Joel Stein

To: Andrew Auernheimer

I totally understand your position. But TIME, and all the major media outlets, won’t pay people who we interview. There’s a bunch of reasons for that, but I’m sure you know them.

Thanks anyway,

From: Andrew Auernheimer

To: Joel Stein

I find it hilarious that after your people have stolen years of my life at gunpoint and bulldozed my home, you still expect me to work for free in your interests.

You people belong in a f-cking oven.

For a guy who doesn’t want to be interviewed for free, you’re giving me a lot of good quotes!

In a later blog post about our emails, Weev clarified that TIME is “trying to destroy white civilization” and that we should “open up your Jew wallets and dump out some of the f-cking geld you’ve stolen from us goys, because what other incentive could I possibly have to work with your poisonous publication?” I found it comforting that the rate for a neo-Nazi to compromise his ideology is just two bitcoins.

Expressing socially unacceptable views like Weev’s is becoming more socially acceptable. Sure, just like there are tiny, weird bookstores where you can buy neo-Nazi pamphlets, there are also tiny, weird white-supremacist sites on the web. But some of the contributors on those sites now go to places like 8chan or 4chan, which have a more diverse crowd of meme creators, gamers, anime lovers and porn enthusiasts. Once accepted there, they move on to Reddit, the ninth most visited site in the U.S., on which users can post links to online articles and comment on them anonymously. Reddit believes in unalloyed free speech; the site only eliminated the comment boards “jailbait,” “creepshots” and “beatingwomen” for legal reasons.

But last summer, Reddit banned five more discussion groups for being distasteful. The one with the largest user base, more than 150,000 subscribers, was “fatpeoplehate.” It was a particularly active community that reveled in finding photos of overweight people looking happy, almost all women, and adding mean captions. Reddit users would then post these images all over the targets’ Facebook pages along with anywhere else on the Internet they could. “What you see on Reddit that is visible is at least 10 times worse behind the scenes,” says Dan McComas, a former Reddit employee. “Imagine two users posting about incest and taking that conversation to their private messages, and that’s where the really terrible things happen. That’s where we saw child porn and abuse and had to do all of our work with law enforcement.”

Jessica Moreno, McComas’ wife, pushed for getting rid of “fatpeoplehate” when she was the company’s head of community. This was not a popular decision with users who really dislike people with a high body mass index. She and her husband had their home address posted online along with suggestions on how to attack them. Eventually they had a police watch on their house. They’ve since moved. Moreno has blurred their house on Google Maps and expunged nearly all photos of herself online.

During her time at Reddit, some users who were part of a group that mails secret Santa gifts to one another complained to Moreno that they didn’t want to participate because the person assigned to them made racist or sexist comments on the site. Since these people posted their real names, addresses, ages, jobs and other details for the gifting program, Moreno learned a good deal about them. “The idea of the basement dweller drinking Mountain Dew and eating Doritos isn’t accurate,” she says. “They would be a doctor, a lawyer, an inspirational speaker, a kindergarten teacher. They’d send lovely gifts and be a normal person.” These are real people you might know, Moreno says. There’s no real-life indicator. “It’s more complex than just being good or bad. It’s not all men either; women do take part in it.” The couple quit their jobs and started Imzy, a cruelty-free Reddit. They believe that saving a community is nearly impossible once mores have been established, and that sites like Reddit are permanently lost to the trolls.

When sites are overrun by trolls, they drown out the voices of women, ethnic and religious minorities, gays–anyone who might feel vulnerable. Young people in these groups assume trolling is a normal part of life online and therefore self-censor. An anonymous poll of the writers at TIME found that 80% had avoided discussing a particular topic because they feared the online response. The same percentage consider online harassment a regular part of their jobs. Nearly half the women on staff have considered quitting journalism because of hatred they’ve faced online, although none of the men had. Their comments included “I’ve been raged at with religious slurs, had people track down my parents and call them at home, had my body parts inquired about.” Another wrote, “I’ve had the usual online trolls call me horrible names and say I am biased and stupid and deserve to be raped. I don’t think men realize how normal that is for women on the Internet.”

The alt-right argues that if you can’t handle opprobrium, you should just turn off your computer. But that’s arguing against self-expression, something antithetical to the original values of the Internet. “The question is: How do you stop people from being a–holes not to their face?” says Sam Altman, a venture capitalist who invested early in Reddit and ran the company for eight days in 2014 after one of its many PR crises. “This is exactly what happened when people talked badly about public figures. Now everyone on the Internet is a public figure. The problem is that not everyone can deal with that.” Altman declared on June 15 that he would quit Twitter and his 171,000 followers, saying, “I feel worse after using Twitter … my brain gets polluted here.”

Twitter’s head of trust and safety, Del Harvey, struggles with how to allow criticism but curb abuse. “Categorically to say that all content you don’t like receiving is harassment would be such a broad brush it wouldn’t leave us much content,” she says. Harvey is not her real name, which she gave up long ago when she became a professional troll, posing as underage girls (and occasionally boys) to entrap pedophiles as an administrator for the website Perverted-Justice and later for NBC’s To Catch a Predator. Citing the role of Twitter during the Arab Spring, she says that anonymity has given voice to the oppressed, but that women and minorities are more vulnerable to attacks by the anonymous.

But even those in the alt-right who claim they are “unf-ckwithable” aren’t really. At some point, everyone, no matter how desensitized by their online experience, is liable to get freaked out by a big enough or cruel enough threat. Still, people have vastly different levels of sensitivity. A white male journalist who covers the Middle East might blow off death threats, but a teenage blogger might not be prepared to be told to kill herself because of her “disgusting acne.”

Which are exactly the kinds of messages Em Ford, 27, was receiving en masse last year on her YouTube tutorials on how to cover pimples with makeup. Men claimed to be furious about her physical “trickery,” forcing her to block hundreds of users each week. This year, Ford made a documentary for the BBC called Troll Hunters in which she interviewed online abusers and victims, including a soccer referee who had rape threats posted next to photos of his young daughter on her way home from school. What Ford learned was that the trolls didn’t really hate their victims. “It’s not about the target. If they get blocked, they say, ‘That’s cool,’ and move on to the next person,” she says. Trolls don’t hate people as much as they love the game of hating people.

Troll culture might be affecting the way nontrolls treat one another. A yet-to-be-published study by University of California, Irvine, professor Zeev Kain and Amy Jo Martin showed that when people were exposed to reports of good deeds on Facebook, they were 10% more likely to report doing good deeds that day. But the opposite is likely occurring as well. “One can see discourse norms shifting online, and they’re probably linked to behavior norms,” says Susan Benesch, founder of the Dangerous Speech Project and faculty associate at Harvard’s Internet and Society center. “When people think it’s increasingly O.K. to describe a group of people as subhuman or vermin, those same people are likely to think that it’s O.K. to hurt those people.”

As more trolling occurs, many victims are finding laws insufficient and local police untrained. “Where we run into the problem is the social-media platforms are very hesitant to step on someone’s First Amendment rights,” says Mike Bires, a senior police officer in Southern California who co-founded LawEnforcement.social, a tool for cops to fight online crime and use social media to work with their communities. “If they feel like someone’s life is in danger, Twitter and Snapchat are very receptive. But when it comes to someone harassing you online, getting the social-media companies to act can be very frustrating.” Until police are fully caught up, he recommends that victims go to the officer who runs the force’s social-media department.

One counter-trolling strategy now being employed on social media is to flood the victims of abuse with kindness. That’s how many Twitter users have tried to blunt racist and body-shaming attacks on U.S. women’s gymnastics star Gabby Douglas and Mexican gymnast Alexa Moreno during the Summer Olympics in Rio. In 2005, after Emily May co-founded Hollaback!, which posts photos of men who harass women on the street in order to shame them (some might call this trolling), she got a torrent of misogynistic messages. “At first, I thought it was funny. We were making enough impact that these losers were spending their time calling us ‘cunts’ and ‘whores’ and ‘carpet munchers,'” she says. “Long-term exposure to it, though, I found myself not being so active on Twitter and being cautious about what I was saying online. It’s still harassment in public space. It’s just the Internet instead of the street.” This summer May created Heartmob, an app to let people report trolling and receive messages of support from others.

Though everyone knows not to feed the trolls, that can be challenging to the type of people used to expressing their opinions. Writer Lindy West has written about her abortion, hatred of rape jokes and her body image–all of which generated a flood of angry messages. When her father Paul died, a troll quickly started a fake Twitter account called PawWestDonezo (“donezo” is slang for “done”), with a photo of her dad and the bio “embarrassed father of an idiot.” West reacted by writing about it. Then she heard from her troll, who apologized, explaining that he wasn’t happy with his life and was angry at her for being so pleased with hers.

West says that even though she’s been toughened by all the abuse, she is thinking of writing for TV, where she’s more insulated from online feedback. “I feel genuine fear a lot. Someone threw a rock through my car window the other day, and my immediate thought was it’s someone from the Internet,” she says. “Finally we have a platform that’s democratizing and we can make ourselves heard, and then you’re harassed for advocating for yourself, and that shuts you down again.”

I’ve been a columnist long enough that I got calloused to abuse via threats sent over the U.S. mail. I’m a straight white male, so the trolling is pretty tame, my vulnerabilities less obvious. My only repeat troll is Megan Koester, who has been attacking me on Twitter for a little over two years. Mostly, she just tells me how bad my writing is, always calling me “disgraced former journalist Joel Stein.” Last year, while I was at a restaurant opening, she tweeted that she was there too and that she wanted to take “my one-sided feud with him to the next level.” She followed this immediately with a tweet that said, “Meet me outside Clifton’s in 15 minutes. I wanna kick your ass.” Which shook me a tiny bit. A month later, she tweeted that I should meet her outside a supermarket I often go to: “I’m gonna buy some Ahi poke with EBT and then kick your ass.”

I sent a tweet to Koester asking if I could buy her lunch, figuring she’d say no or, far worse, say yes and bring a switchblade or brass knuckles, since I have no knowledge of feuding outside of West Side Story. Her email back agreeing to meet me was warm and funny. Though she also sent me the script of a short movie she had written.

I saw Koester standing outside the restaurant. She was tiny–5 ft. 2 in., with dark hair, wearing black jeans and a Spy magazine T-shirt. She ordered a seitan sandwich, and after I asked the waiter about his life, she looked at me in horror. “Are you a people person?” she asked. As a 32-year-old freelance writer for Vice.com who has never had a full-time job, she lives on a combination of sporadic paychecks and food stamps. My career success seemed, quite correctly, unjust. And I was constantly bragging about it in my column and on Twitter. “You just extruded smarminess that I found off-putting. It’s clear I’m just projecting. The things I hate about you are the things I hate about myself,” she said.

As a feminist stand-up comic with more than 26,000 Twitter followers, Koester has been trolled more than I have. One guy was so furious that she made fun of a 1970s celebrity at an autograph session that he tweeted he was going to rape her and wanted her to die afterward. “So you’d think I’d have some sympathy,” she said about trolling me. “But I never felt bad. I found that column so vile that I thought you didn’t deserve sympathy.”

When I suggested we order wine, she told me she’s a recently recovered alcoholic who was drunk at the restaurant opening when she threatened to beat me up. I asked why she didn’t actually walk up to me that afternoon and, even if she didn’t punch me, at least tell me off. She looked at me like I was an idiot. “Why would I do that?” she said. “The Internet is the realm of the coward. These are people who are all sound and no fury.”

Maybe. But maybe, in the information age, sound is as destructive as fury.

Editor’s Note: An earlier version of this story included a reference to Asperger’s Syndrome in an inappropriate context. It has been removed. Additionally, an incorrect description of Megan Koester’s sexual orientation has been removed. The original version also omitted an author of a study about Facebook and good deeds.


Trolls Aren’t Like the Rest of Us

Online jerks and offline jerks are largely one and the same. Here’s how to keep them from affecting your happiness.


“ How to Build a Life ” is a weekly column by Arthur Brooks, tackling questions of meaning and happiness. Click here to listen to his podcast series on all things happiness, How to Build a Happy Life .

My friend Peter Attia, a wellness and longevity expert who helps people live better lives, is dreaming up an invention to improve his own: a machine that shocks him with 100 volts of electricity every time he starts to engage with his online critics. “Every time I get attacked unfairly and answer an internet troll, it always gets worse and worse because the virtual crowd that shows up is made up of more trolls,” he told me. “But I never seem to learn.”

Attia is far from alone in his troll trouble. If you use the internet, the odds are about even that you’ll be mistreated there. A 2021 Pew Research report found that 41 percent of U.S. adults have personally experienced some form of online harassment. Fifty-five percent think it is a “major problem.” Seventy-five percent of the targets of online abuse say their most recent experience was on social media. I can’t think of any other area of voluntary interaction—with the possible exception of driving in rush-hour traffic—where people so frequently expose themselves to regular abuse.

But we are not helpless in the face of either online abusers or the ones flipping us off on the highway. In fact, they are mostly one and the same: bullies with personality disorders. And you can protect your happiness by dealing with them both in some tangible, practical ways.


Without even realizing it, many internet users mistakenly assume that cyberattackers follow conventional rules of behavior. People try to reason with trolls or appeal to their better nature. These responses are similar to how you might approach a friend who’s inadvertently insulted you, or a family member who disagrees with you about something important. But trolls are not like your loved ones, and research shows that these strategies are ineffective because they misapprehend a troll’s true motives, which are usually to attract attention, exercise control, and manipulate others.

Many people who engage in online harassment are not what most of us would consider to be well-adjusted. In 2019, scholars writing in the journal Personality and Individual Differences surveyed 26 studies of internet “trolling,” cyberbullying, and related antisocial online behaviors. They found significant associations with psychopathy, Machiavellianism, sadism, and narcissism, in that order. In other words, just as you would conclude that a stranger attacking you in person is badly damaged, you can conclude the same about a stranger attacking you on social media.


But despite the fact that online jerks and offline jerks tend to be the same people, online life feels way more full of jerks than offline life. Bizarre, hostile behavior seems to be more common online than in person. According to a recent study in the American Political Science Review, Americans rate online political discussions as 50 percent more negative than offline discussions. The reason is that once abusers enter an online space, they tend to take it over. Trolls like trolling, whereas most people don’t like being trolled. So trolls are attracted to internet forums such as Twitter, where they can get their toxic jollies without much threat of being beaten up, while moral people exit–all increasing the troll-to-normal ratio over time. If you feel as though your relationship with social media has gotten worse over time, this might explain why.


Our attackers are weirdos, and the internet is a weirdo’s paradise. But for some reason, we often have trouble understanding that. Instead, we take attacks seriously and personally. One scholar has proposed that this tendency to internalize trollish insults results from a phenomenon called solipsistic introjection: reading written communication can feel like hearing a voice inside our own head. As such, a troll’s insults can be experienced as a form of self-criticism, which is hard to ignore.

Even if you want to bid the online sewer a not-so-fond adieu, your circumstances might make doing so too costly. Exiting social media today would be like getting rid of your telephone 20 years ago. And maybe you simply don’t want to be forced off social media by the trolls, any more than you would placidly accept being forced off the playground because of menacing bullies who treat it as their exclusive property.

If you need or want to participate in online communities, but you hate the abuse, here are three strategies to consider.

1. Nonreceipt

As a child, you were probably advised more than a few times to ignore taunts and insults. Part of this is just common sense. Way back in 1997, basically the internet’s Stone Age, a Unix handbook for systems administrators offered instruction on how to deal with a troll: “You’re an adult—you can presumably figure out some way to deal with it, such as just ignoring the person.”

This is a version of a Buddhist strategy for dealing with insults. In the Akkosa Sutta, the Buddha teaches, “Whoever returns insult to one who is insulting … is said to be eating together, sharing company, with that person.” You don’t have to actively reject abuse on the internet; you can simply not receive it. When you are taunted, say to yourself, I choose not to accept these words.


I’m not going to pretend that this is easy; you can decide for yourself whether this tactic is workable for you. And in the case of threats or hate speech, you may want to make your nonreceipt more tangible by blocking the trolls, and reporting the abuse. (This remedy is imperfect at best, unfortunately, given social-media companies’ spotty record at enforcement of their own norms.)

2. Nonresponse

Not receiving an insult means you cannot respond in any way (beyond, perhaps, blocking and reporting an attacker). According to the Center for Countering Digital Hate, a British NGO, ignoring trolls is crucial for stopping abuse. This makes sense, given the evidence that trolls are seeking attention, including negative attention. Nonresponse denies them the reward they seek.


Responding to a bully on the internet or in real life—remember, they are typically the same people—is proof that they are worth your time and notice. It gives them a twisted kind of status. While a healthy person gets status from admiration for meritorious behavior, research on playground bullies finds that they seek status by showing dominance through aggression. Don’t feed this monster, in person or online. When possible, meet aggression with deafening silence.

3. Non-anonymity

The internet offers (at least) one important tool that makes life easier for bullies: anonymity. As both research and common sense attest, allowing users to hide their identity abets abuse. A colleague of mine, a fellow professor who holds many views outside academia’s political orthodoxy, has a particularly strong approach to dealing with trolls: Once a year, he takes a few hours to review his followers and block anyone who doesn’t use their real name.

It’s not a perfect technique, given how easily social-media users can falsify their identity and create new handles. But my friend swears that it has dramatically improved the discourse he enjoys online, because the majority of his interlocutors—positive and negative—are interacting as themselves. If you choose this route, be morally consistent and avoid being anonymous yourself. You might take the practice a step further and withdraw from conversation platforms that are anonymous by design.


What if you’re not just a victim, but a bully or troll yourself? You probably (hopefully) are not beating up kids for their milk money, but if you find that you have fallen into aggressive internet behaviors, this dark cyberside to your personality is worth addressing.

You can look for a few clues to figure out if you’re the troll. Research on internet bullies has found that they have an easier time being themselves online than in person. Ask yourself: Do you feel the same way? Also consider whether you find pleasure in insulting others without consequence and seeing them get hurt or angry; whether you enjoy the safety of anonymity when expressing your views; and whether “mobbing” and working to “cancel” others gives you a sense of satisfaction or purpose.


If this introspection leads you to admit to yourself that you have become a bit of a troll, or are voluntarily part of a culture or group that engages in online bullying, remember how it feels to be on the other side of the exchange. Ask yourself if you would want your loved ones to know what you’ve been doing on the internet.

Then take action: Repudiate anonymity completely. Declare publicly that you will never troll or bully, and ask others to hold you accountable. And if the trolling is just too tempting, make a plan to log off entirely and pull the plug on your cyberself.


The Twitter Paradox: How A Platform Designed For Free Speech Enables Internet Trolls

Charlie Warzel, who covers technology for BuzzFeed, has written a series of articles about Twitter's response to hate speech. He says the platform's community guidelines are enforced haphazardly.

TERRY GROSS, HOST:

Our next guest, Charlie Warzel, has written a series of articles about harassment on Twitter and how the company is trying to deal with it. Warzel is a technology reporter for BuzzFeed.

Charlie Warzel, welcome to FRESH AIR. As you point out in your articles, one of Twitter's greatest strengths is also one of its vulnerabilities, free speech. The founders of Twitter were strong free speech advocates. It's the free speech approach to Twitter that has enabled Twitter to be such an important platform for pro-democracy movements, for the Arab Spring. Can you talk about that paradox that the free speech that Twitter embodies is also Twitter's vulnerability?

CHARLIE WARZEL: I think that this is one of the fundamental issues of the internet, this issue of free speech right now. And what we're sort of seeing is the idealistic understanding of what the internet could be, this utopian idea that so many entrepreneurs and people who have created these enormous social platforms, that they believe at their core that the internet can sort of raise all voices and really be an amazing tool.

And to have that anonymity tends to be something that these platforms favor. In Twitter's case, it's core to their idea of free speech, and free speech is one of the founding principles that Twitter is built upon and this understanding that to truly connect the world, to truly be the pulse of the world, you have to give people the option to be able to be free of persecution. And that's why you saw so much of what happened in the early days of Twitter with the Iranian revolution and the Arab Spring, where Twitter played such an important role for political dissidents. It really sort of protected and allowed them to have a voice and elevated the platform.

GROSS: Would you compare Twitter's policy with Facebook and Instagram in terms of what you can say and how - and how you have to identify yourself?

WARZEL: Instagram and Facebook have adopted a real identity-centric approach. You have to give a version of yourself. You can't choose a pseudonym. You have to project some version of the person who you really are. And that is a very powerful thing, and it's a reason why Facebook is sort of one of the primary ways we authenticate ourselves across the internet.

And as a result, Facebook has its own problems with abuse and harassment but not nearly to the same degree because there's no way for people to sort of hide behind an anonymous account name or an anonymous avatar. On Facebook, you have to project that image of who you are. And Facebook has really doubled down on that. They have a lot of strict community standards as well as Instagram. One of those is no nudity, and it is something that has - that strong stand has inured the platform a little bit more to the kind of abuse that we're seeing grow so rapidly on Twitter.

GROSS: Yeah, you mentioned that Twitter is one of the few social media platforms used in the adult entertainment industry because it allows nudity.

WARZEL: Absolutely. And for that, it's been an incredibly useful platform for adult entertainers. And it is an example of giving a voice and being a home for people that don't necessarily have a voice. And I think that you see that actually working - not to make too much of a jump, but it's really one of the same principles that's at play with a lot of activist movements in the country.

It is a place where you can broadcast your raw opinion. You can get the news out that, you know, maybe some platforms are wary to broadcast. And that's been an incredibly successful tool for all kinds of movements like Black Lives Matter and the Arab Spring.

GROSS: So one of the kinds of videos that's a real issue on social media is beheading videos. Sometimes a beheading video, as gruesome as it is, is news because it proves that a hostage has been murdered. Sometimes it's purely harassment. People have been getting beheading videos just as a way of upsetting them, of harassing them, of threatening them. Can you compare, for instance, how Facebook and Twitter deal with beheading videos?

WARZEL: I think this is something that my reporting showed Twitter struggled internally with a lot, especially in 2014, when this rash of ISIS beheading videos really started to flood the internet. There were internal meetings that we reported that showed that Twitter's executives were truly concerned that the platform would be overrun by this kind of content that may be newsworthy but is also broadcasting a very distinct message and is also incredibly disturbing.

But Twitter has become the place where news happens, where you get that raw eyewitness account and access. And Twitter had to figure out a way to harness the best of that. And I think that is something that they're still struggling with. They've created a newsworthy clause which allows them to allow certain images based off of their relevancy to public information and to the news. But there was a worry inside the company that if Twitter were to be overrun with these grisly, just very disturbing videos that there really wouldn't be anyone who would want to sign on.

GROSS: If you're just joining us, my guest is journalist Charlie Warzel. He covers tech for BuzzFeed, and he's written a series of articles about Twitter and trolling. We're going to take a short break, then we'll be back. This is FRESH AIR.

(SOUNDBITE OF MUSIC)

GROSS: This is FRESH AIR. And if you're just joining us, my guest is journalist Charlie Warzel. He's a tech reporter for BuzzFeed. He's been writing a series of articles about Twitter and trolling and what Twitter is and isn't doing to try to stop trolling.

You write that there's really a discussion, a debate within Twitter about what is Twitter. Is it more of a communication utility where it just - you know, it opens up the lines and you do what you will with it, kind of like the phone company, or is it a mediator of content, where content has to have some oversight? Would you describe more about that debate?

WARZEL: Anyone who's been following Twitter for the past decade has watched this evolution. Twitter started out as a very sort of quick, short-burst messaging platform, just the 140 characters, no images, no video, really sort of like a status update, what you're doing that day or at that moment and in a sense evolved just incredibly to be this sort of media-rich platform that has content partnerships with the NFL and media outlets like Bloomberg. As a result, Twitter really sort of is this media company. It is a place where news happens. It is a vibrant source of news for so many people.

And yet Twitter is also a utility in many ways. Twitter is this communication method, this digital way of reaching somebody, of having a conversation. It provides that infrastructure. And the real problem here seems to be that Twitter doesn't really want to put itself in any kind of box like that. They're very reluctant to, and they keep redefining, you know, who they are.

And the problem with that redefinition is that a utility is not subject to nearly the same kind of moderation as a media company. I can send you or anyone almost anything over a text message and AT&T or Verizon aren't going to moderate that and have no requirement to moderate that, whereas if I use a blogging platform to, you know, smear somebody or say something awful about somebody, there is sort of a standard on the internet that has been created that that should be regulated.

GROSS: What kind of effort is Twitter making to come up with a solution, a product that will both protect people on Twitter and protect free speech?

WARZEL: This is the fundamental problem, the free speech element really, really hampers Twitter. It's very important to them that no voices be silenced. And yet the task of moderating is to silence certain voices to some extent. Twitter introduced a quality filter not too long ago that they have rolled out to everyone. It used to be only for special verified users, so lots of celebrities and journalists. But this filter has proven - it's driven by an algorithm, and it's proven to be generally poor. It's also an opt-in filter, meaning everyday users are going to have to go through their settings and change that. And that's something that I think plenty of everyday Twitter users who aren't sort of in the weeds don't necessarily even know they have that option.

GROSS: What does it filter?

WARZEL: The quality filter will ostensibly favor tweets that are created by verified users. I think that there is some effort to filter out certain search terms perhaps that are particularly violent or racially insensitive or tagged to hate speech in some way. But again, this is all very proprietary information that Twitter doesn't really let anyone in on, especially journalists. And this is one of the difficulties in covering Twitter from this angle of harassment is that there's so little knowledge as to what Twitter is really trying to do and so little effort on their part to disclose any of it, an unwillingness to disclose any of it that makes it difficult to see how, if at all, they are earnestly trying to fix this problem.

GROSS: BuzzFeed conducted a survey of Twitter users. There were 2,700 users who responded to the survey. I would say right at the jump here this is a very unscientific (laughter) survey.

WARZEL: Yes.

GROSS: This is representative of people who knew about the survey and decided to participate in it. That said, what were some of your takeaways from these responders?

WARZEL: I'll also just stress that this is an unscientific survey. But nonetheless we wanted to hear from users themselves and understand exactly what happens when they do go out there and experience harassment and report it. And what we found was that roughly 46 percent of respondents told us that the last time they reported an abusive tweet to the company, the company took no action on the request that they were aware of.

Another 29 percent said when they reported abusive tweets, they never heard anything back at all. It was effectively radio silence. And 18 percent said that when they did report an abusive tweet, they were told that the tweet did not violate Twitter's rules of being either a violent threat or hateful conduct. Only 56 instances out of roughly 2,700 people surveyed showed that Twitter deleted an offending account or a tweet that violated these rules. And so I think what - what the survey really showed was that regardless of what Twitter is doing behind the scenes, Twitter is doing a poor job of communicating exactly what's going on once you hit that report button.

GROSS: The New York Times this week ran a double-page spread of all the people, places and things Trump has insulted on Twitter since declaring his presidency. And there were one, two, three, four, five, six, seven, eight columns of really small print covering two pages of these tweets. Have you been following his tweets? And I'm just wondering how you think Donald Trump's use of Twitter is affecting perceptions of Twitter.

WARZEL: I think this is one of the most fascinating things about Twitter is just how integral it has been in this election. Donald Trump has been able to really leverage Twitter to get his message out to the base, really sort of skirting the media. And then also using Twitter as a way to pick up a lot of free media.

He can send out a string of incendiary tweets at 3 in the morning and by 7 a.m., they're dominating all of the morning shows. Twitter has been just central in this. And yet so much of that message lately has been so negative.

And if you look at the way that Donald Trump tweets and sort of what that New York Times spread can kind of show is that Donald Trump is himself a very effective troll with regard to Twitter. He says incendiary things that may or may not be based at all in fact. He sort of is looking for the reaction more than he's looking for any sort of substance. The fact that you have engaged with it, that you are outraged by it is just as important as whether or not you believe in it.

And I think that, you know, that behavior is again sort of being normalized in that sense to have somebody who exhibits a lot of this trollish behavior be elevated to the most covered human being in America or maybe the world for this past 18 months, I think that that has a profound effect on how other people, you know, choose to use the internet.

GROSS: If you're just joining us, my guest is journalist Charlie Warzel. He covers tech for BuzzFeed. And he's written a series of articles about Twitter and trolling. We're going to take a short break, then we'll be back. This is FRESH AIR.

GROSS: What kind of response have you gotten from Twitter to your requests to interview the CEO or get more information about what they're trying to do to deal with people who harass other people?

WARZEL: A major fundamental issue of my reporting on Twitter has been the lack of transparency. Twitter has not allowed us to speak with Jack Dorsey on this issue. And Twitter has not made any executives available to talk about this issue yet. If you speak with Twitter about this issue, they will say that they're working on it and that this is something that they take very seriously now and have always taken very seriously and that they are actively working towards putting out some tools that will stop this. What those tools are is yet to be determined, and they have hinted publicly that we might see some of those things soon.

But there's not a lot for Twitter right now to gain perhaps by acknowledging this problem without putting forth a solution. That seems to be sort of the company line. I would argue, however, that so many of the people that I've spoken with love Twitter but are so frustrated and sort of feel that the company isn't angry enough about this, about this failing that, you know, the people at Twitter surely want to see this problem go away as much as anyone else.

And what people would like to see from Twitter - the people I have interviewed, the people experiencing this abuse on a daily basis - is a little bit more outrage. In 2014, Twitter's former CEO Dick Costolo released a memo that said we suck at dealing with abuse. And that memo was greeted by people who experience harassment with a lot of kudos. People were sort of thrilled to know that the company saw it, was frustrated and was going to deal with it. Since then not much has been done, and there's this sort of growing frustration as Twitter stays silent that maybe it doesn't understand just how bad this problem is.

GROSS: So why did Disney and the company Salesforce decide against buying Twitter?

WARZEL: The reports showed that, among a number of reasons, investors in both of the companies were troubled by a lot of the issues of harassment that are currently plaguing Twitter and all the bad press that that sort of entails. You have a lot of very high-profile celebrities who have quit the platform, like the "Saturday Night Live" actor Leslie Jones.

And when those sort of things happen, they create sort of this PR disaster for Twitter. So the harassment issue has sort of for the first time truly started to impact Twitter's bottom line. In past years, harassment has been something that Twitter can sort of point to as a small cordoned-off problem. It's something that's happening, the company regrets that it's happening, it is trying to fix the problem, but the rest of Twitter is out here spreading great information, you know, being the place where celebrities can interact with each other and with normal people. And it is billed as this wonderful community.

This sort of shows, though, that the harassment issue and the fact that abuse is increasing on the platform at a pretty alarming rate, it's finally affecting Twitter's bottom line. It's finally affecting how the company is performing, how the company is viewed in Silicon Valley and in the eyes of plenty of its competitors. And I think that that could be a moment for Twitter. It could be a real reckoning where Twitter finally says we have to get this problem under control or risk the future.

GROSS: Charlie Warzel, thank you so much for talking with us.

WARZEL: Thank you so much.

GROSS: Charlie Warzel is a technology reporter for BuzzFeed.

GROSS: Tomorrow on FRESH AIR, our guest will be chef Anthony Bourdain. His book "Kitchen Confidential" was a best-selling behind-the-scenes tell-all about the restaurant business. In his Peabody Award-winning CNN series "Parts Unknown," he travels the globe sampling foods from diverse cultures. But his new cookbook, "Appetites," focuses on the food he makes for his family at home. I hope you'll join us.

GROSS: FRESH AIR's executive producer is Danny Miller. Our senior producer is Roberta Shorrock. Our interviews and reviews are produced and edited by Amy Salit, Phyllis Myers, Ann Marie Baldonado, Sam Briger, Lauren Krenzel, John Sheehan, Heidi Saman, Mooj Zadie and Thea Chaloner. Therese Madden directed today's show. I'm Terry Gross.

Copyright © 2016 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

I Wrote this Paper for the Lulz: the Ethics of Internet Trolling

  • Published: 15 August 2020
  • Volume 23, pages 931–945 (2020)


  • Ralph DiFranco


Over the last decade, research on derogatory communication has focused on ordinary speech contexts and the use of conventional pejoratives, like slurs. However, the use of social media has given rise to a new type of derogatory behavior that theorists have yet to address: internet trolling. Trolls make online utterances aiming to frustrate and offend other internet users. Their ultimate goal is amusement derived from observing a good faith interlocutor engage with their provocative posts. The basis for condemning a pejorative utterance is often taken to be the harm it causes or a defective attitude in the speaker. However, trolling complicates this picture, since trolling utterances are by definition insincere and should be recognizable as such to other trolls. Further, these utterances seem morally questionable even when they cause little to no harm (e.g. when a troll’s utterance fails to secure uptake), and they often do not feature conventional pejoratives. I argue that while the potential for negative effects is relevant to ethical assessment, in general trolling is pro tanto wrong because the troll fails to accord others the proper respect that is their due (independently of whether they harm them). However, this characteristic wrong-making feature is sometimes overridden.



For a collection of papers that are representative of the work philosophers of language have done on slurs in recent years, see Sosa ( 2018 ).

Questions about the moral and legal implications of hate speech are explored in Tirrell ( 2017 ), who explains how slurs motivate harmful discrimination against vulnerable groups, and Waldron ( 2012 ), who argues that the harm of public hate speech is sufficient for justifying legal restrictions on it.

See Stanley ( 2015 ) for examples.

Barney ( 2016 ) describes the concern troll as “the one who ‘sees the other side’” (p. 1).

Phillips uses the label ‘subcultural’ in recognition of the fact that STs make up an online speech community whose members have a shared goal (lulz) and will often coordinate their efforts to achieve it.

Here I am drawing on Kate Manne’s distinction of misogyny from sexism . Misogyny refers to social systems that operate “within a patriarchal order to police and enforce women’s subordination and to uphold male dominance” (Manne 2018 , p. 33). In comparison, sexism is an ideology that rationalizes and justifies patriarchal social relations (Manne 2018 , p. 79). Those who lack a sexist ideology and have sincere feminist commitments may nevertheless sometimes channel misogynistic social forces to enforce gender norms (Manne 2018 , p. 77). I take it that ‘express’ is not a success term, that is, an agent can express a derogatory attitude A without harboring A .

Of course, an ST could enjoy lulz derived from fantasizing about their target reacting to them with frustration or offense independently of whether the target actually reacts this way. In any event, lulz are an intended effect that need not be achieved.

Relatedly, Basu ( 2019 ) raises an objection to effects-based accounts of what is wrong with having racist beliefs. Basu gives the example of a hermit who lives in an isolated forest and finds a picture of a man named Sanjeev. The hermit forms a racist belief about Sanjeev (namely, that he smells like curry). Despite the fact that this belief cannot harm Sanjeev (since the hermit will never interact with other people), Basu argues that the hermit wrongs Sanjeev just by virtue of having a racist belief about him.

A similar intuition is expressed by Cohen ( 2017 , p. 186). However, Cohen does not mention STing specifically, and it is not clear which form of trolling Cohen has in mind. Rather than offering a general ethical assessment of trolling, Cohen aims to account for the virtuous arguer’s character by contrasting them with a paradigmatically vicious individual (the troll).

Glüer and Wikforss distinguish non-constitutive norms , which exist independently of the activity they govern, from constitutive norms , which create the very activity they regulate. As an example of the former, consider prescriptions regarding dinner etiquette, which govern an independently existing activity, eating (Glüer and Wikforss 2018 ).

See also Cuneo ( 2014 ), who argues that the normativity of our discursive practices (e.g. the fact that our judgments and behavior incur certain commitments, and that as speakers we have rights, responsibilities, and obligations vis-à-vis our interlocutors) is essential to our ability to perform speech acts.

One reason that I regard Robin’s behavior as pro tanto wrong is that ceteris paribus, it would be better for Robin to persuade Ted through rational means than to troll him. It is regrettable that she had to resort to STing to get Ted to appreciate the hypocrisy of his stance. However, one may think that in certain cases it is appropriate to respond to irrational perspectives only by using strategies like trolling, as opposed to relying on rational persuasion. One worry about responding to flat-Earthers, climate change deniers, anti-vaxxers, and phrenologists by presenting them with reasons to abandon their views is that we risk legitimizing them in a problematic way. We may even think that such individuals have forfeited their right to equal consideration as conversation participants, though I will not attempt to settle this issue here.

Similarly, Bell ( 2013 , p. 160) suggests that the experience of being contemned may motivate someone who harbors inapt contempt to reflect on their objectionable attitudes and work to change them.

Harvey ( 1999 , p. 72) suggests that protesting hate groups can be a valuable act of solidarity with those targeted by such groups. Countertrolling racist STs may have a similar function, though whether a racist troll’s target welcomes and appreciates the countertroll’s intervention is a separate issue.

Barney R (2016) [Aristotle], On Trolling. Journal of the American Philosophical Association 2:193–195

Basu R (2019) The wrongs of racist beliefs. Philos Stud 176:2497–2515

Bell M (2013) Hard feelings: the moral psychology of contempt. Oxford University Press, Oxford

Brandom R (2009) Reason in philosophy. Harvard University Press, Cambridge

Buss S (1999) Appearing respectful: the moral significance of manners. Ethics 109:795–826

Cohen DH (2017) The virtuous troll: argumentative virtues in the age of (technologically enhanced) argumentative pluralism. Philosophy & Technology 30:179–189

Cohen, Richard (2018). “Speak no more of socialism.” Washington Post , July 2, 2018. URL = < https://www.washingtonpost.com/opinions/alexandria-ocasio-cortezs-win-could-revive-american-socialism/2018/07/02/0dafc8b6-7e24-11e8-bb6b-c1cb691f1402_story.html?noredirect=on&utm_term=.7c9bb4720cac>

Cuneo T (2014) Speech and morality: on the Metaethical implications of speaking. Oxford University Press, Oxford

Glüer, Kathrin and Asa Wikforss (2018). “The Normativity of Meaning and Content.” The Stanford Encyclopedia of Philosophy . Edward N. Zalta (ed.). URL = < https://plato.stanford.edu/entries/meaning-normativity/>

Harvey J (1999) Civilized oppression. Rowman and Littlefield, Lanham

Jones, Jamie (2019). “16 Parents Who Trolled The Hell Out Of Their Kids.” BuzzFeed (website), September 11, 2018. URL = < https://www.buzzfeed.com/jamiejones/parents-who-are-as-funny-as-they-are-savage>

Manne K (2018) Down girl: the logic of misogyny. Oxford University Press, Oxford

Marcotte, Amanda (2018). “Democrats: Quit listening to ‘civility’ scolds and concern trolls: This is an emergency.” Salon (website), July 6, 2018. URL = < https://www.salon.com/2018/07/06/democrats-quit-listening-to-civility-scolds-and-concern-trolls-this-is-an-emergency/>

Mueller, Robert (2019). “Report On The Investigation Into Russian Interference In The 2016 Presidential Election: Volume I.” Originally published by The Department of Justice, April 18, 2019. URL = < https://www.justice.gov/storage/report.pdf>

Noggle, Robert (2018). “The Ethics of Manipulation.” The Stanford Encyclopedia of Philosophy . Edward N. Zalta (ed.). URL = < https://plato.stanford.edu/entries/ethics-manipulation/#ManiAlwaWron>

Pardes, Arielle (2018). “Hey Bullies, Instagram’s Niceness Cops Are Comin’ For You.” Wired (website), October 9, 2018. URL = < https://www.wired.com/story/instagram-anti-bullying-algorithm/>

Patel, Remee (2015). “19 Times Women Gave the Best Damn Responses to Men on Tinder.” Buzzfeed (website), June 9, 2015. URL = < https://www.buzzfeed.com/remeepatel/tinderellas-slaying-tinderfellas>

Phillips W (2015) This is why we Can’t have Nice things: mapping the relationship between online trolling and mainstream culture. The MIT Press, Cambridge

Richter R (1986) On Philips and racism. Can J Philos 16:785–794

Rogers, Katie (2016). “Leslie Jones, Star of ‘Ghostbusters’, Becomes a Target of Online Trolls.” New York Times, July 19, 2016. URL = < https://www.nytimes.com/2016/07/20/movies/leslie-jones-star-of-ghostbusters-becomes-a-target-of-online-trolls.html>

Sosa D (2018) Bad words: philosophical perspectives on slurs. Oxford University Press, Oxford

Stanley J (2015) How propaganda works. Princeton University Press, Princeton

Sun, Lena H. (2019). “Anti-vaxxers trolled a doctor’s office. Here’s what scientists learned from the attack.” The Washington Post , March 21, 2019. URL = < https://www.washingtonpost.com/health/2019/03/21/anti-vaxxers-trolled-doctors-office-heres-what-scientists-learned-attack/>

Tirrell L (2017) Toxic speech: toward an epidemiology of discursive harm. Philos Top 45:139–161

Waldron J (2012) The harm in hate speech. Harvard University Press, Cambridge

Walker, Peter (2016). “Troll Aid: How a Calais charity is using online abuse to raise money to help refugees.” Independent , January 28, 2016. URL = < https://www.independent.co.uk/news/uk/home-news/trollaid-how-a-calais-charity-is-using-online-abuse-to-raise-money-to-help-refugees-a6839111.html>

Acknowledgements

I am grateful to Andrew Morgan and John Dyck for helpful discussions, and to the audience of a March 2019 colloquium at Auburn University for many illuminating comments.

Author information

Authors and affiliations

Department of Philosophy, Auburn University, Auburn, AL, 36849-3715, USA

Ralph DiFranco

Corresponding author

Correspondence to Ralph DiFranco.

About this article

DiFranco, R. I Wrote this Paper for the Lulz: the Ethics of Internet Trolling. Ethic Theory Moral Prac 23, 931–945 (2020). https://doi.org/10.1007/s10677-020-10115-x

Accepted: 06 August 2020

Issue Date: November 2020


Tackling online hate and trolling

Advice to support children & young people.

Find out more about how to tackle online hate and trolls with our advice guide: learn what online hate is and how to support your child.

See our tips on online hate and trolling, and how to equip children with the tools to deal with it.

Online hate speech is any online communication or expression which encourages or promotes hatred, discrimination or violence against any person or group because of their race, religion, disability, sexual orientation, gender or gender identity. It can be referred to as cyberbullying or trolling and, if serious enough, may break the law as a hate crime.

‘Trolling’ simply refers to one user targeting another to get a reaction. This could be by making racist, sexist, homophobic or otherwise hateful comments. On social networks, it’s common to see ‘trolls’ making inflammatory comments in the comment sections. Other users may say “don’t feed the troll,” meaning you should not respond, because a reaction is exactly what the troll wants.

Report anyone making hateful comments to the game or social network you’re using and block users who target you in particular.

  • One-third of young people have encountered hate speech online.
  • A report from Ofcom found that 33% of parents and children were concerned about exposure to hate speech.
  • A 2016 European study found that a third of young people were worried about being targeted by online hate material.

Being exposed to online hate can have a real impact on young people’s wellbeing . It can also normalise discrimination, hateful attitudes and behaviours towards certain groups of people.

Sometimes online hate can lead to hate crimes offline. There have been incidents where young people who were threatened online because of their sexual orientation, religion or race took their own lives due to the constant nature of the abuse they received.

Hate crime, whether committed online or offline, is illegal. However, not all offensive content is illegal in the UK. If content incites hatred based on race, religion or sexual orientation, it can be considered a crime. For content that does not meet the threshold of a hate crime, the police are required to record it as a hate incident. Laws in the UK also aim to protect freedom of speech, so policing online content can be a delicate balance.

How do platforms protect users from online hate?

The majority of platforms have community guidelines and specific policies on hate speech which outline what is and isn’t allowed on the platform. If a user breaks these rules, their account can be blocked or removed. Some platforms, including social networks, also use artificial intelligence as well as human moderators to spot harmful content early. However, much of the policing of hate speech on social platforms still relies on users reporting it so that action can be taken.
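The two routes described above — automated screening plus user reports feeding human review — can be pictured as a toy pipeline. Everything in the sketch below, from the class names to the keyword list and the report threshold, is a hypothetical simplification for illustration, not any real platform’s system:

```python
from dataclasses import dataclass

# Hypothetical, simplified moderation queue: an automated keyword screen
# plus a user-report threshold, mirroring the two routes described above.
FLAGGED_TERMS = {"hateful-term-a", "hateful-term-b"}  # placeholder terms
REPORT_THRESHOLD = 3  # reports needed before human review (illustrative)

@dataclass
class Post:
    author: str
    text: str
    reports: int = 0

def needs_review(post: Post) -> bool:
    """Route a post to human moderators if the automated filter matches,
    or if enough users have reported it."""
    auto_flag = any(term in post.text.lower() for term in FLAGGED_TERMS)
    return auto_flag or post.reports >= REPORT_THRESHOLD

# Caught by the automated filter:
assert needs_review(Post(author="troll99", text="something hateful-term-a here"))
# Escalated by user reports instead:
assert needs_review(Post(author="ok_user", text="nice weather today", reports=3))
```

In practice, as the text notes, the user-report route carries much of the load, since automated filters miss a great deal of harmful content.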

Children and young people are especially vulnerable to online hate as sometimes many are looking for groups or causes that will give them a sense of identity. Victims of online hate may show:

  • low self-esteem
  • sleeping disorders
  • increased anxiety and feelings of fear and insecurity
  • feeling lonely or isolated
  • feeling embarrassed and therefore wanting to deal with the problem by themselves.

Sometimes children may “feel left out, like they’ve got no friends”, which may impact their schooling and lead to depression.

  • Trolling – social media posts that contain hate speech or images. Posts that are created can be reposted, shared, liked or retweeted, therefore continuing the cycle of hate.
  • Messaging – messages containing hate speech/images can be directly or indirectly sent to the victims through messages via email, apps like WhatsApp, forums, gaming sites, etc.
  • Online harassment – can include repeated attempts to send unwanted communications or contact in a manner that could be expected to cause distress or fear.
  • Baiting – this is used in bullying to intentionally make a person angry by saying or doing something that annoys them. For example, insulting someone’s sexual preference or race.
  • Virtual mobbing – when a number of individuals use social media or messaging to make comments to or about another individual, usually because they are opposed to that person’s opinions. The volume of messages may amount to a campaign of harassment.

Other forms:

  • Threats of violence
  • Hoax calls and abusive phone messages

The best way to protect your child from online hate and trolling is to take an active interest in how they socialise on and offline. Having meaningful conversations with them is essential to developing their critical thinking skills.

Here are some tips you can share with them to help develop good online behaviours:

  • Tip 1 – Make sure they know to treat others as they want to be treated.
  • Tip 2 – Advise them not to spread hateful or threatening content online and to report any they see.
  • Tip 3 – Tell them not to say something online that they wouldn’t say face-to-face.
  • Tip 4 – Ensure they’re aware of the privacy controls on the platforms they use, such as Instagram, Snapchat and Roblox. Find out more here .
  • Tip 5 – Ask them if they know about online hate; can they recognise it?
  • Tip 6 – Encourage your children to have an open attitude and honest curiosity about other people because some instances of hate speech are based on ignorance or false information.
  • Tip 7 – Look for terms that might creep into your child’s vocabulary. Sometimes kids (and adults) use harmful terms without realising. See our glossary for some of these phrases and our text dictionary for common terms they might use in chats.
If your child is targeted by online hate or trolling:

  • Block the perpetrator immediately.
  • Report it to the school.
  • Report online hate material to the website admin – most websites have rules known as ‘acceptable use policies’. See our report issue page .
  • Report it to the hosting company – If the website itself is hateful or supports violence then let the website’s hosting company know. You can find out which company hosts a website by entering their web address on ‘Who is hosting this?’
  • Contact Stop Hate UK .
  • Contact the police .


UK wants to squeeze freedom of reach to take on internet trolls

The UK government has announced (yet) more additions to its expansive and controversial plan to regulate online content — aka the Online Safety Bill .

It says the latest package of measures to be added to the draft are intended to protect web users from anonymous trolling.

The Bill has far broader aims as a whole, comprising a sweeping content moderation regime targeted at explicitly illegal content but also ‘legal but harmful’ stuff — with a claimed focus of protecting children from a range of online harms, from cyberbullying and pro-suicide content to exposure to pornography.

Critics, meanwhile, say the legislation will kill free speech and isolate the UK, creating splinternet Britain, while also piling major legal risk and cost on doing digital business in the UK. (Unless you happen to be part of the club of ‘safety tech’ firms offering to sell services to help platforms with their compliance of course.)

In recent months, two parliamentary committees have scrutinized the draft legislation. One called for a sharper focus on illegal content , while another warned the government’s approach is both a risk to online expression and unlikely to be robust enough to address safety concerns — so it’s fair to say that ministers are under pressure to make revisions.

Hence the bill continues to shape-shift or, well, grow in scope.

Other recent (substantial) additions to the draft include a requirement for adult content websites to use age verification technologies ; and a massive expansion of the liability regime, with a wider list of criminal content being added to the face of the bill.

The latest changes, which the Department of Digital, Culture, Media and Sport (DCMS) says will only apply to the biggest tech companies, mean platforms will be required to provide users with tools to limit how much (potentially) harmful but technically legal content they could be exposed to.

Campaigners on online safety frequently link the spread of targeted abuse like racist hate speech or cyberbullying to account anonymity, although it’s less clear what evidence they’re drawing on — beyond anecdotal reports of individual anonymous accounts being abusive.

Yet it’s similarly easy to find examples of abusive content being dished out by named and verified accounts. Not least the sharp-tongued secretary of state for digital herself, Nadine Dorries, whose  tweets lashing an LBC journalist recently led to this awkward gotcha moment at a parliamentary committee hearing.

Point is: Single examples — however high profile — don’t really tell you very much about systemic problems.

Meanwhile, a recent ruling by the European Court of Human Rights — which the UK remains bound by — reaffirmed the importance of anonymity online as a vehicle for “the free flow of opinions, ideas and information”, with the court clearly demonstrating a view that anonymity is a key component of freedom of expression.

Very clearly, then, UK legislators need to tread carefully if government claims for the legislation transforming the UK into ‘the safest place to go online’ — while simultaneously protecting free speech — are not to end up shredded.

UK publishes draft Online Safety Bill

Given internet trolling is a systemic problem which is especially problematic on certain high-reach, mainstream, ad-funded platforms, where really vile stuff can be massively amplified, it might be more instructive for lawmakers to consider the financial incentives linked to which content spreads — expressed through ‘data-driven’ content-ranking/surfacing algorithms (such as Facebook’s use of polarizing “engagement-based ranking”, as called out by whistleblower Frances Haugen ).

However the UK’s approach to tackling online trolling takes a different tack.

The government is focusing on forcing platforms to provide users with options to limit their own exposure — despite DCMS also recognizing the abusive role of algorithms in amplifying harmful content (its press release points out that “much” content that’s expressly forbidden in social networks’ T&Cs is “too often” allowed to stay up and “actively promoted to people via algorithms”; and Dorries herself slams “rogue algorithms”).

Ministers’ chosen fix for problematic algorithmic amplification is not to press for enforcement of the UK’s existing data protection regime against people-profiling adtech — something privacy and digital rights campaigners have been calling for, for literally years — which could certainly limit how intrusively (and potentially abusively) individual users could be targeted by data-driven platforms.

Rather the government wants people to hand over more of their personal data to these (typically) adtech platform giants in order that they can create new tools to help users protect themselves! (Also relevant: The government is simultaneously eyeing reducing the level of domestic privacy protections for Brits as one of its ‘Brexit opportunities’… so, er… 😬)

DCMS says the latest additions to the Bill will make it a requirement for the largest platforms (so called “category one” companies) to offer ways for users to verify their identities and control who can interact with them — such as by selecting an option to only receive DMs and replies from verified accounts.

“The onus will be on the platforms to decide which methods to use to fulfil this identity verification duty but they must give users the option to opt in or out,” it writes in a press release announcing the extra measures.

Commenting in a statement, Dorries added: “Tech firms have a responsibility to stop anonymous trolls polluting their platforms.

“We have listened to calls for us to strengthen our new online safety laws and are announcing new measures to put greater power in the hands of social media users themselves.

“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms.”

Twitter does already offer verified users the ability to see a feed of replies only from other verified users. But the UK’s proposal looks set to go further — requiring all major platforms to add or expand such features, making them available to all users and offering a verification process for those who are willing to prove an ID in exchange for being able to maximize their reach.
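The opt-in ‘verified accounts only’ control described here boils down to a per-user filter on incoming interactions. A minimal sketch, in which the account model, the `verified` flag, and the settings object are all illustrative assumptions rather than any platform’s real API:

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    verified: bool  # has this account completed identity verification?

@dataclass
class UserSettings:
    verified_only: bool = False  # the opt-in control the bill proposes

def visible_replies(replies, settings: UserSettings):
    """Return the replies a user sees, honouring their opt-in choice.
    Unverified accounts can still post; they just aren't shown to
    users who opted out of unverified interactions."""
    if not settings.verified_only:
        return list(replies)
    return [(acct, text) for acct, text in replies if acct.verified]

replies = [
    (Account("anon_troll", verified=False), "abusive reply"),
    (Account("jane_doe", verified=True), "thoughtful reply"),
]
assert len(visible_replies(replies, UserSettings())) == 2
assert len(visible_replies(replies, UserSettings(verified_only=True))) == 1
```

Note the design trade-off this makes concrete: abuse from verified accounts (as the Dorries example above illustrates) passes straight through the filter.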

DCMS said the law itself won’t stipulate specific verification methods — rather the regulator (Ofcom) will offer “guidance”.

“When it comes to verifying identities, some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication where a platform sends a prompt to a user’s mobile number for them to verify. Alternatively, verification could include people using a government-issued ID such as a passport to create or update an account,” the government suggests.

Ofcom, the oversight body which will be in charge of enforcing the Online Safety Bill, will set out guidance on how companies can fulfil the new “user verification duty” and the “verification options companies could use”, it adds.

“In developing this guidance, Ofcom must ensure that the possible verification measures are accessible to vulnerable users and consult with the Information Commissioner, as well as vulnerable adult users and technical experts,” DCMS also notes, with a tiny nod to the massive topic of privacy.

Digital rights groups will at least breathe a sigh of relief that the UK isn’t pushing for a complete ban on anonymity, as some online safety campaigners have been urging.

When it comes to the tricky topic of online trolling, rather than going after abusive speech itself, the UK’s strategy hinges on putting potential limits on freedom of reach on mainstream platforms.

“Banning anonymity online entirely would negatively affect those who have positive online experiences or use it for their personal safety such as domestic abuse victims, activists living in authoritarian countries or young people exploring their sexuality,” DCMS writes, before going on to argue the new duty “will provide a better balance between empowering and protecting adults — particularly the vulnerable — while safeguarding freedom of expression online because it will not require any legal free speech to be removed”.

“While this will not prevent anonymous trolls posting abusive content in the first place — providing it is legal and does not contravene the platform’s terms and conditions — it will stop victims being exposed to it and give them more control over their online experience,” it also suggests.

Asked for thoughts on the government’s balancing act here, Neil Brown, an internet, telecoms and tech lawyer at Decoded Legal , wasn’t convinced on its approach’s consistency with human rights.

“I am sceptical that this proposal is consistent with the fundamental right ‘to receive and impart information and ideas without interference by public authority’, as enshrined in Article 10 Human Rights Act 1998,” he told TechCrunch. “Nowhere does it say that one’s right to impart information applies only if one has verified one’s identity to a government-mandated standard.

“While it would be lawful for a platform to choose to implement such an approach, compelling platforms to implement these measures seems to me to be of questionable legality.”

Under the government’s proposal, those who want to maximize their online visibility/reach would have to hand over an ID, or otherwise prove their identity to major platforms — and Brown also made the point that that could create a ‘two-tier system’ of online expression which might (say) serve the extrovert and/or obnoxious individual, while downgrading the visibility of those more cautious/risk-averse or otherwise vulnerable users who are justifiably wary of self-ID (and, probably, a lot less likely to be trolls anyway).

“Although the proposals stop short of requiring all users to hand over more personal details to social media sites, the outcome is that anyone who is unwilling, or unable, to verify themselves will become a second class user,” he suggested. “It appears that sites will be encouraged, or required, to let users block unverified people en masse.

“Those who are willing to spread bile or misinformation, or to harass, under their own names are unlikely to be affected, as the additional step of showing ID is unlikely to be a barrier to them.”

TechCrunch understands that the government’s proposal would mean that users of in-scope user-generated platforms who do not use their real name as their public-facing account identity (i.e. because they prefer to use a nickname or other moniker) would still be able to share (legal) views without limits on who would see their stuff — provided they had (privately) verified their identity with the platform in question.

Brown was a little more positive about this element of continuing to allow for pseudonymized public sharing.

But he also warned that plenty of people may still be too wary to trust their actual ID to platforms’ catch-all databases. (The outing of all sorts of viral anonymous bloggers over the years highlights motivations for shielded identities to leak.)

“This is marginally better than a ‘real names’ policy — where your verified name is made public — but only marginally so, because you still need to hand over ‘real’ identity documents to a website,” said Brown, adding: “I suspect that people who remain pseudonymous for their own protection will be rightly wary of the creation of these new, massive, datasets, which are likely to be attractive to hackers and rogue employees alike.”

Implement differential privacy to power up data sharing and cooperation

User controls for content filtering

In a second new duty being added to the Bill, DCMS said it will also require category one platforms to provide users with tools that give them greater control over what they’re exposed to on the service.

“The bill will already force in-scope companies to remove illegal content such as child sexual abuse imagery, the promotion of suicide, hate crimes and incitement to terrorism. But there is a growing list of toxic content and behaviour on social media which falls below the threshold of a criminal offence but which still causes significant harm,” the government writes.

“This includes racist abuse, the promotion of self-harm and eating disorders, and dangerous anti-vaccine disinformation. Much of this is already expressly forbidden in social networks’ terms and conditions but too often it is allowed to stay up and is actively promoted to people via algorithms.”

“Under a second new duty, ‘category one’ companies will have to make tools available for their adult users to choose whether they want to be exposed to any legal but harmful content where it is tolerated on a platform,” DCMS adds.

“These tools could include new settings and functions which prevent users receiving recommendations about certain topics or place sensitivity screens over that content.”

Its press release gives the example of “content on the discussion of self-harm recovery” as something which may be “tolerated on a category one service but which a particular user may not want to see”.

Brown was more positive about this plan to require major platforms to offer a user-controlled content filter system — with the caveat that it would need to genuinely be user-controlled.

He also raised concerns about workability.

“I welcome the idea of the content filter system, so that people can have a degree of control over what they see when they access a social media site. However, this only works if users can choose what goes on their own personal blocking lists. And I am unsure how that would work in practice, as I doubt that automated content classification is sufficiently sophisticated,” he told us.

“When the government refers to ‘any legal but harmful content’, could I choose to block content with a particular political leaning, for example, that expounds an ideology which I consider harmful? Or is that anti-democratic (even though it is my choice to do so)?

“Could I demand to block all content which was in favour of COVID-19 vaccinations, if I consider that to be harmful? (I do not.)

“What about abusive or offensive comments from a politician? Or is it going to be a far more basic system, essentially letting users choose to block nudity, profanity, and whatever a platform determines to depict self-harm or racism?”

“If it is to be left to platforms to define what the ‘certain topics’ are — or, worse, the government — it might be easier to achieve, technically. However, I wonder if providers will resort to overblocking, in an attempt to ensure that people do not see things which they have asked to be suppressed.”
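Brown's distinction between user-chosen blocklists and platform-defined topics can be sketched in a few lines. This is a purely hypothetical illustration, not any platform's real API: the `Post` shape, the topic labels, the confidence scores and the 0.8 threshold are all invented. The idea is that a post is hidden only when a topic the user has personally chosen to block is detected with high confidence.

```python
# Hypothetical sketch of a user-controlled "legal but harmful" filter.
# Assumes posts arrive pre-labelled with topic tags and classifier
# confidence scores; all names and numbers here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    topics: dict = field(default_factory=dict)  # topic -> confidence

def filter_feed(posts, user_blocklist, threshold=0.8):
    """Hide a post only when a topic the *user* chose to block is
    detected with high confidence; everything else stays visible."""
    visible = []
    for post in posts:
        blocked = any(
            post.topics.get(topic, 0.0) >= threshold
            for topic in user_blocklist
        )
        if not blocked:
            visible.append(post)
    return visible

feed = [
    Post("A recovery story", {"self-harm": 0.9}),
    Post("Cat pictures", {"animals": 0.99}),
]
# A user who opted to block self-harm content sees only the second post.
print([p.text for p in filter_feed(feed, {"self-harm"})])  # ['Cat pictures']
```

Even in this toy version, Brown's workability worry is visible: everything hinges on how reliable the per-topic confidence scores actually are.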

An ongoing issue with assessing the Online Safety Bill is that huge swathes of specific details are simply not yet clear, given the government intends to push so much detail through via secondary legislation. And, again today, it noted that further details of the new duties will be set out in forthcoming Codes of Practice published by Ofcom.

So, without far more practical specifics, it’s not really possible to properly understand practical impacts, such as how — literally — platforms may be able to or try to implement these mandates. What we’re left with is, mostly, government spin.

But spitballing off that spin, how might platforms generally approach a mandate to filter “legal but harmful content” topics?

One scenario — assuming the platforms themselves get to decide where to draw the ‘harm’ line — is, as Brown predicts, that they seize the opportunity to offer a massively vanilla ‘overblocked’ feed for those who opt in to exclude ‘harmful but legal’ content; in large part to shrink their legal risk and operational cost (NB: automation is super cheap and easy if you don’t have to worry about nuance or quality; just block anything you’re not 100% sure is 100% non-controversial!).
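The “block anything you’re not 100% sure is 100% non-controversial” logic is trivially cheap to sketch, which is rather the point. Again, this is purely illustrative (the function name, scores, and threshold are invented for this example): posts survive only if a hypothetical classifier is near-certain they are safe, so borderline content is overblocked by construction.

```python
# Illustrative sketch of the cheap "vanilla feed" compliance strategy:
# drop anything a classifier is not highly confident is uncontroversial.
# Nothing here is any real platform's logic; scores are made up.
def vanilla_feed(posts, safe_scores, min_safe=0.99):
    # safe_scores[i] is a hypothetical classifier's confidence that
    # posts[i] is uncontroversial; only near-certain posts survive.
    return [p for p, s in zip(posts, safe_scores) if s >= min_safe]

posts = ["baby photo", "vaccine news", "political satire", "cat pic"]
scores = [0.995, 0.70, 0.40, 0.999]
print(vanilla_feed(posts, scores))  # ['baby photo', 'cat pic']
```

Note that no nuance or quality control is needed at all: raising `min_safe` shrinks legal risk while silently discarding anything remotely debatable.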

But they could also use overblocking as a manipulative tactic — with the ultimate goal of discouraging people from switching on such a massive level of censorship, and/or nudging them to return, voluntarily, to the non-filtered feed where the platform’s polarizing content algorithms have a fuller content spectrum to grab eyeballs and drive ad revenue… Step 3: Profit.

The kicker is platforms would have plausible deniability in this scenario — since they could simply argue the user opted in to seeing harmful stuff (or at least didn’t opt out, having turned the filter off or never used it). Aka: ‘Can’t blame the AIs, gov!’

Platforms would suddenly be off the hook for any data-driven, algorithmically amplified harms. Online harm would become the user’s fault for not turning on the available high-tech sensitivity screen to shield themselves. Responsibility diverted.

Which, frankly, sounds like the sort of regulatory oversight an adtech giant like Facebook could cheerfully get behind.

Still, platform giants face plenty of risk and burden from the full package of proposals coming at them from Dorries & co.

The secretary of state has also made no secret of how cheerful she’d be to lock up the likes of Mark Zuckerberg and Nick Clegg.

In addition to being required to proactively remove explicitly illegal content like terrorism and CSAM — under threat of massive fines and/or criminal liability for named execs — the Bill was recently expanded to mandate proactive takedowns of a much wider range of content, related to online drug and weapons dealing; people smuggling; revenge porn; fraud; promoting suicide; and inciting or controlling prostitution for gain.

So platforms will need to scan for and remove all that stuff, actively and up front, rather than acting after the fact on user reports as they’ve been used to (or not acting very much, as the case may be). Which really does upend their content business as usual.

DCMS also recently announced it would add new criminal communications offences to the bill too — saying it wanted to strengthen protections from “harmful online behaviours” such as coercive and controlling behaviour by domestic abusers; threats to rape, kill and inflict physical violence; and deliberately sharing dangerous disinformation about hoax COVID-19 treatments — further expanding the scope of content that platforms must be primed and on the lookout for.

So given the ever-expanding scope of the content scanning regime coming down the pipe for platforms — combined with tech giants’ unwillingness to properly resource human content moderation (since that would torch their profits) — it might actually be a whole lot easier for Zuck & co to switch to a single, super vanilla feed.

Make it cat pics and baby photos all the way down — and hope the eyeballs don’t roll away and the profits don’t drain away but Ofcom stays away… or something.


How-To Geek

What Is an Internet Troll (and How to Handle Trolls)

Internet trolls are common online. Here's what they are---and how to avoid feeding the trolls.


Internet trolls are people who want to provoke and upset others online for their own amusement. Here's how to spot the signs that someone is a troll, and how to handle them.

If you've been on the internet for any period of time, you've likely run into a troll at some point. An internet troll is someone who makes intentionally inflammatory, rude, or upsetting statements online to elicit strong emotional responses in people or to steer the conversation off-topic. They can come in many forms. Most trolls do this for their own amusement, but other forms of trolling are done to push a specific agenda.

Trolls have existed in folklore and fantasy literature for centuries, but online trolling has been around for as long as the internet has existed. The earliest known usage of the term can be traced back to the 1990s on early online message boards. Back then, it was a way for users to confuse new members by repeatedly posting an inside joke. It's since turned into a much more malicious activity.

Trolling is distinct from other forms of cyberbullying or harassment. It is normally not targeted towards any one person and relies on other people paying attention and becoming provoked. Trolling exists on many online platforms, from small private group chats to the biggest social media websites. Here's a list of places online where you're likely to see online trolls:

  • Anonymous online forums:  Places like Reddit, 4chan, and other anonymous message boards are prime real-estate for online trolls. Because there's no way of tracing who someone is, trolls can post very inflammatory content without repercussion. This is especially true if the forum has lax or inactive moderation.
  • Twitter:  Twitter also allows anonymous accounts and has become a hotbed for internet trolls. Frequent Twitter trolling methods involve hijacking popular hashtags and mentioning popular Twitter personalities to gain attention from their followers.
  • Comment sections:  The comment sections of places such as YouTube and news websites are also popular areas for trolls to feed. You'll find a lot of obvious trolling here, and they frequently generate a lot of responses from angry readers or viewers.

You'll find trolls anywhere online, including on Facebook and on online dating sites. They're unfortunately pretty common.

It can sometimes become difficult to tell the difference between a troll and someone who just genuinely wants to argue about a topic. However, here are a few tell-tale signs that someone is actively trolling.

  • Off-topic remarks:  Completely going off-topic from the subject at hand. This is done to annoy and disrupt other posters.
  • Refusal to acknowledge evidence:  Even when presented with hard, cold facts, they ignore this and pretend like they never saw it.
  • Dismissive, condescending tone:  An early indicator of a troll was that they would ask an angry responder, "Why you mad, bro?" This is done to provoke someone further while dismissing their argument altogether.
  • Use of unrelated images or memes:  They reply to others with memes , images, and gifs. This is especially true if done in response to a very long text post.
  • Seeming obliviousness: They seem oblivious that most people are in disagreement with them. Also, trolls rarely get mad or provoked.

The list above is by no means definitive. There are a lot of other ways to identify that someone is trolling. Generally, if someone seems disingenuous, uninterested in a real discussion, and provocative on purpose, they're likely an internet troll.

The most classic adage regarding trolling is, "Don't feed the trolls." Trolls seek out emotional responses and find provocation amusing, so replying to them or attempting to debate them will only make them troll more. If you ignore a troll completely, they will likely become frustrated and go somewhere else on the internet.

You should try your best not to take anything trolls say seriously. No matter how poorly they behave, remember these people spend countless unproductive hours trying to make people mad. They're not worth your time.

If a troll becomes spammy or begins to clog up a thread, you can also opt to report them to the site's moderation team. Depending on the website, there's a chance nothing happens, but you should do your part to actively dissuade them from trolling on that platform. If your report is successful, the troll may be temporarily suspended or their account might be banned entirely.

10 Effective Tactics to Defeat Internet Trolls

Trolls may be the bane of the internet, but they shouldn't ruin your day. Learn to purge online trolls and prevent them from returning.


The trolls have moved out of their caves and onto the internet.

Unlike the mythological creatures of early Scandinavian folklore, online trolls are real, and dealing with them is never a fun experience.

An internet troll, as defined by Wikipedia , is:

“…a person who posts inflammatory, insincere, digressive, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog), with the intent of provoking readers into displaying emotional responses, or manipulating others’ perception.”

Put simply, an internet troll is someone who takes great pleasure in being an insufferable jerk online.

The more people they tick off, the better. Trolls thrive on sarcasm and insults, and they’ve been around for as long as the internet has existed.

Unfortunately, the trolls of today have escalated into a much more malicious force of hate than the original jokesters that were prevalent back in the ’90s. Now, 41% of Americans have experienced some form of online harassment.

What’s more, severe encounters such as cyberbullying, physical threats, stalking, and sexual harassment have sadly become more common.

That’s why today’s topic is necessary.

In this post, you’ll see how to know you’re facing an internet troll, and find a list of tips to add to your arsenal so you’re ready to handle the nonsense right away and protect your peace of mind.

Warning Signs You’re Dealing with an Internet Troll

Some of the warning signs that you’re dealing with a troll include:

  • Blindness to evidence: Trolls are notorious for ignoring facts and either doubling down on their stance or redirecting to a new topic altogether.
  • Name-calling: Internet trolls aren’t known for their creativity. They’ll often latch onto the latest trending insult and use it in every situation. Hello, “Karen.”
  • Topic redirects: This is an old-school trolling technique that’s still around today on chats and forums. Trolls enjoy making off-topic remarks to try and distract posters from the discussion. They’ll also post unrelated images or memes.
  • Condescending tone: “Why you mad, bro?” Trolls love to stoke the fire and then act dismissive when people become angry, which only triggers more frustration. And they know it.
  • Overexaggerating: While most people use words that aren’t absolute, there’s no middle ground for trolls. Everything has to be on the extreme end of the spectrum. Instead of saying “often” or “sometimes,” they’ll say “always” or “never.”

There’s something about the anonymity of the internet that brings out the worst in trolls.

Most of them wouldn’t dare engage in a direct face-to-face confrontation. But through the computer screen, there aren’t any real consequences to make them think twice about letting their inner nastiness out.

Defeat Internet Trolls with These 10 Techniques

Trolls aren’t picky – they’ll target individuals, businesses, celebrities, politicians… you name it. If you’re on the internet, you’re fair game for a troll.

Here’s how you can shut them down.

1. Don’t Feed the Trolls

The classic internet adage still holds merit. Trolls thrive on emotional responses to their provocation.

It can be difficult to restrain yourself but don’t add fuel to the fire.

If you don’t engage, the troll will hopefully move on.

2. Be the Boss! No Trolls Allowed

If you’re in charge of a platform — whether it’s your social media profile, discussion forum, blog, etc. — you need a list of clear guidelines that includes a “no trolling” policy.

Implementing these rules establishes impartiality. If someone is angry that their comment was deleted, you can point back to your policy and cite a violation as the cause of the removal.

For example, see how the Library of Congress set clear guidelines in their comment and posting policy :

Comment and posting policy of the Library of Congress.

3. Add Moderators to Your Roster

Managing a single, small-scale blog or social media profile is one thing, but if you have hundreds or even thousands of posts and a major troll infestation, it’s time to call in backup!

A team of moderators is a worthwhile investment if you aren’t able to keep up with the troll onslaught yourself.

They can verify comments and deal with policy violators so you can focus your attention on other tasks.

If you don’t have the resources or funds to hire moderators, look into some of the tools available on various platforms:

  • Facebook’s comment moderation plugin .
  • YouTube’s comment settings for automatic moderation.
  • Twitter’s reporting option for abusive tweets.
  • Instagram’s reporting option for policy violations.
  • WordPress’s comment moderation tools .
  • Other blog tools such as Disqus and IntenseDebate, which are two of the most popular.

4. If You Can’t Ignore the Trolls, Call Their B.S.

Trolls aren’t interested in having civilized, rational conversations. Their arguments aren’t logical, and they’re certainly not strong debaters.

Stay calm and simply ask for facts and sources to back up their unsubstantiated claims.

Chances are, they won’t have any, and they’ll sputter into silence. All they really wanted was a heated, passionate debate, and you denied them that.

Every time they make a wild statement, counter it with a polite request for evidence.

5. Kill Them with Kindness

It’s hard to respond to hate with kindness. But since trolls are usually looking for a fight, reacting with an opposite approach often stops them in their tracks.

One particularly uplifting example was posted on Funny Side of Tumblr. An exchange started with a furious mother attacking someone for “making her child sick” because the youth was exploring their gender identity.

Rather than reciprocate the anger by becoming defensive, the user responded with kindness, even complimenting and ultimately connecting with the upset mother and answering her questions.

What started with, “My child is sick due to you!” drew to a close with, “Bless you, if I have more questions I can ask you.”

Tumblr users showed appreciation for the way the situation was defused:

Chat thread about kindness to defuse trolls.

It’s worth noting that this shouldn’t be your expected outcome. In this case, the aggressiveness came from a place of fear and confusion, but in most other instances, trolls aren’t going to come around.

Still, it doesn’t hurt to show a little kindness. You might make a difference in someone’s life.

This conversation would have ended a lot differently if anger had been met with more anger.

6. Disarm Them with Humor

Much like with kindness, trolls aren’t usually equipped to respond to humor. Their goal is to make people mad, not make them laugh.

Laughter is troll kryptonite.

If you need some inspiration on how to fight trolls with humor, check out Wendy’s Twitter .

The brand has become well-known for its tongue-in-cheek humor when responding to trolls.

Wendy’s even goes so far as to regularly invite other brands to be roasted .

Twitter thread responding to Trolls with humor.

However, be cautious with a humorous approach. It’s easy to cross the line and become offensive in the eyes of your audience.

7. Have Friends-Only Social Profiles

This solution is pretty cut-and-dried. If you don’t want random trolls commenting on your posts, keep them private.

Obviously, this won’t work if you’re a business, influencer , or someone who needs to reach the public, but it’s an easy way to keep your personal profile safe.

On Twitter, you can make your account private by going through More > Settings and privacy > Your account (you’ll have to put in your password again) > Protected Tweets.

You can also update photo tagging options.

How to make your account private on Twitter.

On Facebook, you can run a privacy checkup to update your settings. Click the drop-down arrow, then Settings & Privacy > Privacy Checkup.

Facebook's privacy checkup feature.

Remember that you can also set individual posts for private, friends, friends with exceptions, specific friends, only you, or customized visibility.

Post visibility options on Facebook.

8. Block, Ban, or Report Trolls

While this option is more tedious, it’s sometimes necessary if you have a troll that just won’t stop.

Facebook, Twitter, Instagram, and most other social media platforms give you the option to report a post for being abusive, among other options like unfollowing the person who posted it.

How to block and ban profiles.

9. Decompress Before You Reply

Remember – a troll’s goal is to make you and other people upset. Don’t let them achieve their goal.

Before you type a response, try this:

  • Take a deep breath.
  • Walk away for a few minutes (minimum).
  • Remind yourself it’s not personal, and it’s not worth getting upset.

When you’re composed enough to return and address the issue, try to keep a clear, open mind.

Replying when you’re angry isn’t going to end in a peaceful resolution.

10. Stay Professional

One of the worst errors you can make is confusing an unhappy customer for a troll and responding in an unprofessional manner. Stay calm and factual.

If someone is complaining about your business, apologize and try to redirect the conversation to a private channel so the issues can be resolved outside of the public’s scrutiny.

If someone is nitpicking a typo or other minor mistake, thank them for pointing out the error, fix it, and then don’t engage any further.

Whether you’re answering a troll or a real customer, remember that your comments are public, and the rest of the community is watching. In most cases, people are less concerned with what the problem was and more with how you handled it.

Trolls are Only as Big as We Make Them

Internet trolls thrive on drama.

If you stoop to their level, they’re winning.

It’s not about being right or wrong. If you stop engaging the trolls, you’re taking the oxygen away from their fire.

Take the high road, and leave the trolls far below.

More Resources:

  • 7 Urgent Steps to Take When Your Facebook Account Gets Hacked
  • 25 Things You Should Never Do on Social Media
  • News and Trends on the Most Popular Social Networks

Featured image: delcarmat/Shutterstock Image #8: Created by author, September 2021

Julia McCoy is an 8x author and a leading strategist around creating exceptional content and presence that lasts online. As ...


Sue Scheff

Psychopathy

The People Behind Online Hate

A new study finds online haters show relatively high levels of psychopathy.

Posted March 31, 2021 | Reviewed by Jessica Schrader

 Antonio Guillem/Shutterstock

Haters will hate

Do you know, right now, what the Internet is saying about you?

Could one careless tweet cost you your job? Are nude photos of you lingering on your ex’s smartphone? Could one angry customer trash your small business?

Will a potential romance cool because of what’s been posted about you online? How likely is it that any of that will happen?

More likely than you think. In today’s digitally driven world, countless people are being electronically embarrassed every day.

Stories of troll attacks, revenge porn, sexting scandals, email hacks, webcam hijackings, cyberbullying, and screenshots gone viral fill our newsfeeds.

According to a 2017 Pew Research Center survey, 66 percent of adult Internet users say they have witnessed online harassment and 67 percent of adult Internet users under the age of 30 have personally experienced it.

Given events like the 2014 Sony Pictures email hack that leaked studio heads’ private messages and the 2015 Ashley Madison breach that revealed the identities of millions of alleged philanderers, it is clear that we are all potentially one click away from being unwillingly thrust into the Internet glare.

And what awaits us there? A nation of finger-wagging vultures who delight in tormenting us and tearing our reputations to shreds or, worse, inducting us into cancel culture.

This culture of attacking people with the simple stroke of a keyboard has become much more than a fad. In a 2014 survey conducted by YouGov , 28 percent of Americans admitted to engaging in malicious online activity directed at somebody they didn’t even know.

How have we become this Shame Nation, where we are constantly hurling our collective outrage at an endless supply of fresh victims? And is there anything we can do to stop this, before it affects us or the people we love?

Understanding derogatory behavior

As social media grows every day, it also gives people more space to share their ideas and express their opinions. Sadly, this comes with a rise in online hate behavior, which seems to be getting more malicious with the trend of cancel culture.

This is important because more and more employers and colleges are using social media to screen applicants. Your online reputation is typically the first impression someone will have of you—if you are being digitally tormented, it can be risky for your future.

A new study published in Frontiers in Psychology explored the psychological profile of people who posted hate comments online during the 2018 Winter Olympic Games. The researchers found that hate commenters demonstrated high levels of one specific Dark Triad trait— psychopathy .

Piotr Sorokowski and his research team say it was surprising that narcissism and Machiavellianism were not related to online hate behavior, given that these traits have been previously linked to both online trolling and cyberbullying.

“Our research is one of the first to establish a psychological background of online haters,” Sorokowski and colleagues remark, “while setting a clear line between online hating and other derogatory online behaviors (e.g., trolling, cyber-bullying, or hatred speech).”

Developing empathy to defeat hate

Dr. Michele Borba, author of UnSelfie , has been educating students and people of all ages about the importance of developing empathy in our lives—especially in today's world.

"Empathy is not an inborn trait," Borba shares. "Empathy is a quality that can be taught —in fact, it's a quality that must be taught, by parents, by educators and by those in a child's community. And what's more, it's a talent that kids can cultivate and improve, like riding a bike or learning a foreign language."

We're never too old to learn. There is too much toxic digital discourse right now and it's time for adults to start taking action. Empathy is a verb, according to Borba, and it can be taught to grown-ups too.

5 ways to help curb online hate

1. Never perpetuate hate or misinformation. Don’t forward, like, or retweet distasteful comments or images.

speech on internet trolls

2. Report and flag abusive, mean, hateful content to the social platform.

3. Reach out to someone that is struggling. Private message them, even if it’s only a virtual hug. Let them know you are there for them.

4. Kindness is contagious. Talk about it with your kids. Read headlines of people doing good things for other people—then get involved.

5. Lead by example not only for your children, but for your colleagues, friends, and family.

Always remember, your online behavior is a reflection of your offline character.

Pew Research Center Survey: Online Harassment, 2017

YouGov Survey: Malicious Online Comments 2014

Shame Nation: The Global Epidemic of Online Hate, Sourcebooks, 2017

Are Online Haters Psychopaths? Psychological Predictors of Online Hating Behavior, Frontiers in Psychology, 2021

UnSelfie: 9 Essential Habits that Provide the "Empathy Advantage", Simon & Schuster, 2018

Sue Scheff

Sue Scheff is an advocate and family internet safety expert and the author of Shame Nation: The Global Epidemic of Online Hate.


Massachusetts Institute of Technology

MIT Connect

  • Dealing with Trolling: Update for 2023

October 19, 2023

The more time you spend on the internet, the greater the chance you will either witness or become a victim of trolling. Trolling is defined as antagonizing others online by deliberately posting inflammatory, irrelevant, or offensive comments or other disruptive content. Internet trolls aim to provoke an emotional response and are trying to engage in a fight or argument. A few negative comments do not equal trolling. Trolls will persistently harass their targets, especially if they know they’re hitting a nerve.

The following are some trolling behaviors:

  • Attacking or criticizing something you’ve posted, praised, or agreed with
  • Posting personal insults meant to humiliate you in front of others
  • Escalating verbal aggression when the target responds
  • Making statements designed to upset others
  • Hate speech

There is no policy on how MIT schools, departments, labs, and centers (DLCs) should respond if a staff member or student is being trolled online. It’s possible that if a DLC gets involved or comments publicly or in-platform, that could further incite or motivate the harasser. I do not recommend that DLCs engage or intervene in any way on a public platform.

But there are some steps you can take to combat social media trolls:

  • Ignore them. DO NOT engage—that’s what they want, and it will motivate them to continue. Trolls seek attention and if they don’t get it, they might move on.
  • Block them. Almost every platform allows you to block users. Blocking users can mean different things on different social media sites, but generally it stops them from seeing your content and vice versa. Blocked accounts cannot follow you, find your posts in a search, or direct message you.
  • Report them. Almost every platform has a policy against abusive language, behavior, and hate speech. Reporting them could get their post removed, or get the account suspended or deleted depending on the site and the situation. Simply reporting a user won’t block the person from reaching you again, so make sure you block them as well.

You can consider removing their directory listing, including their MIT email address, office phone number and location, from your websites to help prevent trolling from moving offline. If the trolling moves into a person’s real life on campus, for instance through emails or phone calls, or escalates in the following ways, it should be reported to the MIT Police (617-253-1212):

  • Threats of violence, bodily harm, or death
  • Following a victim from one channel to another to purposefully harass them
  • Posting information that could compromise a person’s safety, such as a home address
  • Engaging in stalking behavior or hate crimes

Trolling is not a federal crime, but under many state laws, harassment, stalking, and bullying are illegal. In Massachusetts, laws prohibit several acts of harassment and stalking —committed in-person, by mail or phone, or through the use of electronic communications. Electronic communications can include conduct or messages communicated by email, text message, instant messaging, phone, on the internet, or through a website or social media application.

A few reminders

When posting on personal social media channels, be clear that the views and opinions expressed are your own, and do not represent the official stance or policy of MIT. But even when you are clear, understand that your audience may still attribute your comments to MIT, so be mindful of how they will reflect on the Institute and its reputation. Here are MIT’s policies on  personal conduct ,  racism , and  harassment  for your reference.

As mentioned in a blog post in 2019, follow MIT’s policies and procedures when using social media to promote an MIT event, initiative, or academic program.

These incidents are distressing and are never cut and dried. Social media managers are encouraged to contact Jenny Fowler, MIT’s director of social media strategy, at [email protected] to talk through these incidents on a case-by-case basis and consult with other colleagues across the Institute as needed.


Jenny Li Fowler

Director of Social Media Strategy

Communications Initiatives  


Subscriber-only Newsletter

David French

Florida has banned kids from using social media, but it won’t be that simple.

[Illustration: a storefront grate rolls down to mask the top of a child’s head.]

By David French

Opinion Columnist

My entire life I’ve seen a similar pattern. Older generations reflect on the deficiencies of “kids these days,” and they find something new to blame. The latest technology and new forms of entertainment are always bewitching our children. In my time, I’ve witnessed several distinct public panics over television, video games and music. They’ve all been overblown.

This time, however, I’m persuaded — not that smartphones are the sole cause of increasing mental health problems in American kids, but rather that they’re a prime mover in teen mental health in a way that television, games and music are not. No one has done more to convince me than Jonathan Haidt. He’s been writing about the dangers of smartphones and social media for years, and his latest Atlantic story masterfully marshals the evidence for smartphones’ negative influence on teenage life.

At the same time, however, I’m wary of government intervention to suppress social media or smartphone access for children. The people best positioned to respond to their children’s online life are parents, not regulators, and it is parents who should take the lead in responding to smartphones. Otherwise, we risk a legal remedy that undermines essential constitutional doctrines that protect both children and adults.

I don’t want to minimize the case against phones. Haidt’s thesis is sobering:

Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity — all were affected.

The consequences, Haidt argues, have been dire. Children — especially teenagers — are suffering from greater rates of anxiety and depression, suicide rates have gone up, and they spend less time hanging out with friends, while loneliness and friendlessness are surging.

Neither smartphones nor social media are solely responsible for declining teen mental health. The rise of smartphones correlates with a transformation of parenting strategies, away from permitting free play and in favor of highly managed schedules and copious amounts of organized sports and other activities. The rise of smartphones also correlates with the fraying of our social fabric. Even there, however, the phones have their roles to play. They provide a cheap substitute for in-person interaction, and the constant stream of news can heighten our anxiety.

I’m so convinced that smartphones have a significant negative effect on children that I’m now much more interested in the debate over remedies. What should be done?

That question took on added urgency Tuesday, when Ron DeSantis, the governor of Florida, signed a bill banning children under 14 from having social media accounts and requiring children under 16 to have parental permission before opening an account. The Florida social media bill is one of the strictest in the country, but Florida is hardly the only state that is trying to regulate internet access by minors. Utah passed its own law; so have Ohio and Arkansas. California passed a bill mandating increased privacy protections for children using the internet.

So is this — at long last — an example of the government actually responding to a social problem with a productive solution? New information has helped us understand the dangers of a commercial product, and now the public sector is reacting with regulation and limitation. What’s not to like?

Quite a bit, actually. Federal courts have blocked enforcement of the laws in Ohio, Arkansas and California. Utah’s law faces a legal challenge, and Florida’s new law will undoubtedly face its day in court as well. The reason is simple: When you regulate access to social media, you’re regulating access to speech, and the First Amendment binds the government to protect the free-speech rights of children as well as adults.

In a 2011 case, Brown v. Entertainment Merchants Association, the Supreme Court struck down a California law banning the sale of violent video games to minors. The 7-to-2 decision featured three Democratic appointees joining with four Republican appointees. Justice Antonin Scalia, writing for the majority, reaffirmed that “minors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.”

The state certainly has power to protect children from harm — as laws restricting children’s access to alcohol and tobacco attest — but that power “does not include a free-floating power to restrict the ideas to which children may be exposed,” the majority opinion said. Consequently, as the court has repeatedly observed, “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.”

Lawmakers and parents may find this doctrine frustrating, but there is a genuine method to the free-speech madness, even for children. In a free-speech case from 1982, Island Trees School District v. Pico, Justice William Brennan cast doubt on a public school district’s effort to remove “improper” books from library shelves and wrote powerfully in support of student free speech and students’ access to ideas. “Just as access to ideas makes it possible for citizens generally to exercise their rights of free speech and press in a meaningful manner,” Brennan argued, “such access prepares students for active and effective participation in the pluralistic, often contentious society in which they will soon be adult members.”

Justice Brennan is exactly right. We can’t shelter children from debate and dialogue and then expect them to emerge in college as grown-ups, ready for liberal democracy. Raising citizens in a flourishing republic is a process, one that isn’t susceptible to one-size-fits-all bans on speech and expression, even if that speech or expression poses social and emotional challenges for today’s teens.

Compounding the problem, social media bans are almost always rooted at least in part in the content on the platforms. It’s the likes, comments, fashions, and trends that cause people to obsess over social media. Yet content discrimination is uniquely disfavored in First Amendment law. As the Supreme Court has repeatedly explained, one of the most basic First Amendment principles is that “as a general matter, the government has no power to restrict expression because of its message, its ideas, its subject matter, or its content.”

For content discrimination to be lawful, it has to pass the most difficult of legal tests, a test called “strict scrutiny.” This means that the law is only constitutional if it advances a “compelling government interest and is narrowly drawn to serve that interest.” While one can certainly agree that protecting the mental health of young people is a compelling interest, it is much more difficult to argue that sweeping bans that cut off children from gaining access to a vast amount of public debate and information are “narrowly drawn.”

Finally, attempting to restrict minors’ access to social media can implicate and limit adult speech. Age verification measures would require both adult and child users of social media platforms to reveal personally identifying information as a precondition for fully participating in the American marketplace of ideas.

It’s for these reasons (and others) that federal district judges in California, Arkansas and Ohio have blocked enforcement of each state’s social media law, and it’s for these reasons that the laws in Utah and Florida rightly face an uphill legal climb.

The government isn’t entirely powerless in the face of online harms. I think it is entirely proper to attempt to age-limit online access to pornography. The Supreme Court has permitted state and local governments to use zoning laws to push porn shops into specific, designated areas of the community, and “zoning” online porn for adults only should be entirely proper as well. The Supreme Court hasn’t permitted age-gating pornography yet, but its prior objections were rooted in part in the technical challenges to age verification. With better technology comes better capability to reasonably and easily distinguish between children and adults.

The distinction between social media and pornography should be obvious. There is a difference between denying minors access to content that they have no right to see or produce, and denying access to content that they have a right to both see and produce.

It is also entirely proper to ban smartphones in schools. The court has long held that the First Amendment rights of students should be construed “in light of the special characteristics of the school environment.” And it’s highly likely that courts would uphold phone bans as a means of preventing proven distractions during instruction.

But the primary responsibility for policing kids’ access to phones should rest with parents, not with the state. Not every social problem has a governmental solution, and the more that the problem is rooted in the inner life of children, the less qualified the government is to address it.

And don’t think that a parent-centered approach to dealing with the challenges of online life is inherently inadequate. As we’ve seen throughout American history, parenting cultures can change substantially, based on both information and experience. Public intellectuals like Jonathan Haidt perform an immense public service by informing the public, and just as parents adjust children’s diets or alter discipline habits in response to new information, they can change the culture around cellphones.

In fact, there are signs this is already happening. I have three children — aged 25, 23 and 16 — and I can personally attest to the changing culture in my little corner of the world. I gave my oldest two kids iPhones when they were 12 and 11, and granted access to Facebook and Instagram with little thought to the consequences. Most of my peers did the same.

Quickly enough, we learned our mistake. When my youngest entered middle school, I noticed that parents were far more cautious. We talked about phone use, and we tried to some extent to adopt an informal, collaborative approach so that no member of the friend group was alone and isolated while all her peers were texting on their phones and posting online. It didn’t work perfectly, and my daughter spent a few unpleasant months as the last friend without a phone at age 15, but awareness of the risks was infinitely higher, and even when children did receive phones, the controls on use were much tighter.

One of the core responsibilities of the American government at all levels is to protect the liberty of its citizens, especially those liberties enumerated in the Bill of Rights. At the same time, it is the moral obligation of the American people to exercise those liberties responsibly. Haidt and the countless researchers who’ve exposed the risks of online life are performing an invaluable role. They’re giving parents the information we need to be responsible. But the First Amendment rights of adults and children are too precious to suppress, especially when parents are best positioned to protect children from harm online.

David French is an Opinion columnist, writing about law, culture, religion and armed conflict. He is a veteran of Operation Iraqi Freedom and a former constitutional litigator. His most recent book is “Divided We Fall: America’s Secession Threat and How to Restore Our Nation.” You can follow him on Threads (@davidfrenchjag).

Internet Trolling, Its Impact and Suggested Solutions Essay

Contents: introduction; perpetrators and victims of internet trolling; the effects of trolling on individual people and society; suggested solutions to trolling activities; reference list.

Growing access to internet resources has brought not only the advantage of getting necessary information or services. Along with these benefits, people may fall victim to serious dangers posed by hackers, trolls, phreakers, and other deviant and antisocial groups. Internet trolling is one of the most frequently used jargon words of our century. The activity involves causing harm to internet users for the enjoyment of the person doing the harm (an internet troll) or to entertain the audience that person wants to impress (Bishop 2013). The problem of internet trolling grows every day as trolls acquire more opportunities to spread their unkind words across the world wide web.

The vast extent of internet trolling is partially explained by the diversity of topics in which trolls engage. These include political, sexual, racial, gender, and many other spheres where trolls exercise their provocative activity. Trolling fluctuates between the dubiously distasteful and the almost illegal. Trolls provoke their victims with ambiguously sexist or racist remarks, post outrageous images with the aim of wrecking a discussion, and fill conversation threads with preposterous misinterpretations of other users’ opinions (Phillips 2015). The scope of trolling is unbelievably large, as trolls are apt to penetrate any subject one can possibly imagine. Trolls have no personal feelings toward what they write, so they can write about anything. What gives them further freedom is that they are legally protected by the guarantee of freedom of speech in the First Amendment (Phillips 2015). Thus, the extent of trolling activity is wide and growing every day.

Depending on what kind of activity brings the most fun, trolls fall into a number of categories. There are trolls who love getting people enraged (“rabid flamers”), trolls who luxuriate in correcting others’ mistakes (“priggish grammar trolls”), “crybabies” who threaten never to come back when someone hurts their feelings but always return, the “never-give-up, never-surrender” type who always has to be right, and “retroactive stalkers” who will not calm down until they have found something embarrassing in your history to bring up every time you post.

There are also such types as “lame teenager,” “self-feeding troll,” “bored hater” and “Nellie McNeggerson” who enjoy complaining and contradicting others (Grande 2010). “Sharing troll” is the one who discloses your personal information if he/she is angry with you, “profane screamer” asserts his/her opinion by writing in capitals, “white knight” defends someone even if nobody asks him/her for that, “expert” behaves as if he/she knows everything about everything. Other popular varieties of trolls are “spoiler” who reveals the film endings and sports match results, “fraud” who steals money or personal secrets, and “flooder” who posts the same thing repeatedly. Finally, there are “liar” and “stalker” types who both try to seduce others. The difference is that the “stalker” can actually be harmful while the “liar” is usually inoffensive (Grande 2010).

Psychological premises for trolling are concerned with invisibility, anonymity, and the absence of real-time communication (Stein 2016). Trolls emphasize that unlike the clearly abusive messages posted by flamers, their messages are merely provocative and meant to bring fun (Bishop 2014). However, revealing people’s personal data or causing conflicts is not considered funny by other internet users.

Popular victims of trolls include women defending the feminist movement, celebrities, visitors to tourism and hospitality websites, and even people who have passed away – RIP pages frequently become objects of trolling (Lumsden & Morgan 2016). Anonymity allows trolls to say whatever they want to reach their aim: provoking a fight among internet users. The societal aspect of such provocations interests trolls most of all.

Female victims claim that the majority of offensive messages on the internet are gender-related (Lumsden & Morgan 2016). Women say they have to choose their language very cautiously so as not to initiate any incidents, but trolls do their best to provoke fights in the feeds of various female activist blogs and articles. Sexist trolling is almost as frequent as racist trolling, but the methods of fighting sexism are much less powerful than those of dealing with racism (Lumsden & Morgan 2016).

Celebrities are among the most popular trolling victims because they have many admirers and followers. Trolls are thus happy to cause fights among the fans of popular culture idols about whom they may not actually care at all (Lumsden & Morgan 2016). However, unlike other people, celebrities may even gain an advantage from trolling: the more their lives are discussed, the more fame and, consequently, income they obtain.

Trolling of the tourism and hospitality industry is performed via social media sites (Mkono 2015). The aim of trolls in this case is to undermine the reputation of tourist companies by leaving fake negative reviews about their facilities and services. So far, this variety of trolling is difficult to fight because messages on websites such as TripAdvisor are anonymous, and it is impossible to track the huge number of trolls leaving comments there (Mkono 2015). If the anonymity option were removed, the harm done to tourism companies would be greatly reduced.

Basically, trolls do not take what they do seriously, so they never lose. They decide whether or not to give any weight to their own words. Thus, the problem of immorality lies not in what trolls utter but in the absence of consequences of their utterances for themselves (Phillips 2015). However, that is so only from their own point of view. For everyone else, trolling is an activity with a disastrous impact on society in general and individual people in particular. Trolls are able to cause fights between individuals and between whole groups of people; the more conflict they manage to provoke, the happier they are. It seems that trolls are not concerned with the future of mankind and do not feel responsible at all for provoking dangerous conflicts.

Although trolls can affect people of all ages and social statuses, the most dramatic effect is on teenagers, who are vulnerable and susceptible to any kind of attack – real or virtual. There have been cases of teens committing suicide after becoming victims of trolling. When trolls go too far and reveal people’s personal information, such as photos or personal history details, they do not realize how destructive their activity may be. Teenagers cannot stand being blackmailed and trolled, and many sad stories are connected with such activities (Millet 2014). Parents are concerned about trolling because it undermines teenagers’ self-esteem and confidence. However, while teens may be the most vulnerable social group, trolls can affect anyone. People constantly suffer from negative comments and get frustrated by unnecessary posts that take away their time. Psychologists recommend developing defensive reactions such as ignoring trolls’ comments and distracting oneself from negative information. Still, not everyone is able to restrict his or her vision to only positive things and remain untouched by the antagonistic messages trolls send.

It is not only individuals who are bullied by trolls and their methods; society is harmed as well. The greatest impact trolls have on society is that they succeed in dividing it into opposing sides (Rani 2016). Social media have always been a way for individuals to share their viewpoints and find those with similar opinions. With the advent of trolling, various hostile activities were awakened. Instead of considering it bad to offend someone, people have grown so used to insulting and disrespectful behavior that they actually consider it a part of normal life (Rani 2016). Moreover, some individuals begin to feel influential when they troll others and soon cannot stop their adverse actions. In this way, trolling gradually builds up a polarised society. The threat of such societal change is that people become less humane and friendly and tend toward cruelty and hostility.

Trolling is a fast-growing risk for individual people and whole societies, one that tends to develop and find new, intricate ways of expression every day. To eliminate the negative impact of internet trolls, people should establish effective interventions at various levels.

Since trolls have many techniques at their disposal, the fight against their destructive effects requires a versatile approach. To solve the problem of trolling, it is necessary to approach it at the individual level, at the level of online media corporations, and at the legislative level. To deal with trolls at the personal level, there is a golden rule for every internet user: “do not feed the trolls” (Grande 2010). This advice means that one should not be provoked by trolls’ messages and insults. It may not be an easy thing to do, but the outcomes are positive: no stress, no spoilt mood, and no wasted time. However, a rational decision to disregard trolls may not be sufficient; frequently, more discreet interventions are needed (Sanfilippo, Yang & Fichman 2017). In the case of deviant trolls, such methods as cutting trolls off or exposing their identity may be employed. Additionally, internet users consider ignoring trolls not only a great reactive measure but also a useful preventive one.

As for the steps taken by online media corporations, their fight against trolling requires much more time and resources. First, they need to check all posts to see whether any were written by trolls. Then, they need to create barriers for such posts and block unwanted kinds of messages. These activities require more people working on websites and more money to pay their salaries; the insufficiency of such resources is basically the main reason trolling remains such a big problem on the internet. Another serious issue is the anonymity granted to internet users, which prevents online media corporations from controlling their visitors. This problem is what connects the media companies with the legislative system.

The government’s regulation of trolling is quite limited by the First Amendment, which guarantees every citizen freedom of speech (Phillips 2015). However, with the increasing damage done by internet trolling, the governments of many countries are developing strategies for confronting trolls and preventing their adverse impact on internet users. For instance, the UK adopted the Communications Act 2003, which regulates mobile phone calls, emails, text messages, and internet messages (Lumsden & Morgan 2016). Section 127 of this Act proclaims that sending offensive or indecent messages is an offense regardless of whether the message is ever received. With the growing number of offensive cases provoked by trolling, in 2012 the UK government initiated an amendment to the Defamation Act that would enable the government to track the identities of internet users.

At the same time, internet providers would not be punished for their users’ publications on the condition that they share information about those users (Lumsden & Morgan 2016). In a House of Commons debate initiated in 2012, some Members of Parliament emphasized that changing regulations regarding anonymity would threaten freedom of speech. Furthermore, legal adjustments alone are not enough when it comes to dealing with problems of internet deviance. Apart from legislative changes, alterations in people’s cultural lifestyles are also necessary (Lumsden & Morgan 2016). Cultural transformations are especially important in view of the modern “sexualized” behavior of celebrities widely illustrated in different media, such as newspapers, reality TV shows, and magazines.

Therefore, while it is impossible to implement new laws instantly, there are things that any sober-minded person can do to avert the adverse outcomes of communications with trolls. The basic rule is not to provoke any reaction on their part and to stay away from their negative posts.

The problem of trolling is a fast-growing issue of modern society. With so many people going online, more and more individuals encounter troll messages every minute and are psychologically damaged by trolls’ deviant conduct. Trolls penetrate every part of internet activity. They leave unnecessary comments, provoke fights, or simply depress others, which has an adverse impact on internet users. Trolling occurs in various spheres and divergent types of media sources. Trolls may leave false negative reviews that deceive people, or they may harass users and make their lives unbearable. With the advancement of technology, there is an urgent need to improve people’s security from trolls.

Solutions to the problem of trolling are possible at several levels: personal, corporate, and governmental. The most beneficial resolution would be to eliminate online anonymity at the governmental level. However, due to the many laws and regulations defending privacy and freedom of speech, it is quite complicated to achieve results in this sphere in a short time. What every internet user can and should do is be cautious about his or her behavior online. People should be careful not to provoke trolls. However, trolls often do not even need to be provoked; on such occasions, the best solution is to ignore them at the level of personal communication. Online media corporations can contribute to solving the problem by implementing stricter rules on online chats and forums and by blocking trolls. By taking small steps consistently, it is possible to develop a troll-free internet environment where every user can count on having a good time without being distracted and frustrated. Finally, apart from thinking of ways to change trolls, we should come up with ideas for changing our society so that there are fewer provocations and more pleasant things to discuss.

Bishop, J 2013, Examining the concepts, issues, and implications of internet trolling, Information Science Reference, Hershey.

Bishop, J 2014, ‘Digital teens and the ‘antisocial network’: prevalence of troublesome online youth groups and internet trolling in Great Britain’, International Journal of E-Politics, vol. 5, no. 3, pp. 1-15.

Grande, T L 2010, ‘The eighteen types of internet trolls’, Smosh. Web.

Lumsden, K & Morgan, H M 2016, ‘‘Fraping’, ‘trolling’ and ‘rinsing’: social networking, feminist thought and the construction of young women as victims or villains’, Clinical and Experimental Optometry, vol. 99, no. 2, pp. 1-17.

Millet, W 2014, ‘The dangerous consequences of cyberbullying and trolling’, The Circular. Web.

Mkono, M 2015, ‘‘Troll alert!’: provocation and harassment in tourism and hospitality social media’, Current Issues in Tourism, vol. 1, pp. 1-14.

Phillips, W 2015, This is why we can’t have nice things: mapping the relationships between online trolling and mainstream culture, MIT Press, Cambridge, MA.

Rani, R 2016, ‘How abusive trolls are ruining an otherwise great tool – social media’, Youth Ki Avaaz. Web.

Sanfilippo, M A, Yang, S & Fichman, P 2017, ‘Managing online trolling: from deviant to social and political trolls’, Proceedings of the 50th Hawaii International Conference on System Sciences, pp. 1802-1811.

Stein, J 2016, ‘How trolls are ruining the internet’, Time. Web.



Well, at Least Elon Musk Has Realized He Accidentally Created a Badge of Shame

In 2022, when Elon Musk campaigned to buy Twitter—before he realized he would be massively overpaying and went to court to get out of the deal he himself proposed, before he admitted defeat and took over the company in a $44 billion leveraged buyout—he promised to restore “free speech” to the site.

He vowed to right the wrongs of a dual-class system that had benefited the haves at the expense of the have-nots—and he homed in on the blue check marks slapped on verified accounts as the culprit enabling this disparity. On his first day as owner of the site, Musk tweeted, “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit. Power to the people! [Twitter] Blue for $8/month.” Lords and peasants!

So, a year ago, Elon Musk took blue check marks away from anyone who refused to pay him money. This week, he started giving them back for free.

While Musk wanted to frame the removal of blue check marks as some great anti-elite democratization, some Robin Hood–esque pursuit of justice, in reality it was always a money-making proposition. If Musk could make more money directly from users in the form of recurring subscription revenue, he’d reduce the company’s dependence on advertisers and their demands about what merits acceptable content on the site. (Musk’s laissez-faire approach to content moderation has always been at odds with advertiser demands for a so-called brand-safe environment to place their ads.)

The main selling point for X’s subscription product—once called Twitter Blue, and now called X Premium—quickly became the blue check mark, though Musk has added features and benefits to the offering in the year and a half since. Suddenly, Musk’s favorite right-wing trolls and Tesla-to-the-moon fan accounts were all equipped with blue check marks, seeming more important and legitimate upon a quick glance.

But Musk fumbled his own plot. That became clear back in April 2023, once he removed blue check marks from people who used to have them.

For years, Twitter gave blue verification badges to a wide variety of important people. It was used chiefly to verify the identities of rich, famous, and powerful people like Beyoncé Knowles or Barack Obama. That was important. Not only do people need access to the president’s tweets—let alone those of the queen of pop—but verifying these accounts helped everyone by reducing confusion and scams. But Twitter eventually began identifying journalists, academics, and other people who could be repositories of reliable information. (Yes, myself included.)

Since X is often used as an up-to-the-minute news aggregator—and an internet hub for journalists—these blue check marks gave the site’s users a shortcut for judging whether a piece of information came from a reputable or disreputable source. (Obviously, exceptions abound.) In other words, the blue check marks aren’t just a status symbol, but an important feature of a popular news site. According to Pew Research Center, more than half (53 percent) of X users still rely on it for news. What Musk never understood, or appreciated, was that the check marks helped Twitter as much as they helped the badge-holders.

Instead, Musk glommed onto the right-wing habit of using “blue check” as a derogatory moniker for elites. Abolishing the blue checks was his pronouncement that a new regime had taken power.

But naturally, once any single person could simply buy a blue check mark and appear legitimate for eight bucks a month, chaos ensued. It seemed like just about every corporate account was being impersonated. One fake account pretending to be the pharma giant Eli Lilly tweeted out, “We are excited to announce insulin is free now,” a tweet that caused mass confusion and led the stock to drop 4 percent. (Eli Lilly did slash the price of two of its most commonly prescribed insulin drugs mere months later, perhaps somewhat in response to the incident on X.)

Letting people buy blue check marks never made sense, but Musk erred in removing what he called “legacy” check marks—the ones that people didn’t pay him for.

What the billionaire owner was too dense to realize was that the value of selling a blue check mark was mostly in blending in, appearing legitimate, and feigning importance. Removing all of the important people (celebrities) and pseudo-important people (me) simply turned the blue check mark into a blue badge of shame. By August 2023, Musk started figuring out that he’d messed up and added a feature to let people pay $8 but hide their check mark. He also gradually began giving the most famous celebrities their check marks back even if—like Stephen King—they didn’t want them.

This week, however, X began alerting many of the less famous but still popular accounts that had their blue check marks removed that they’d be eligible for a free subscription to X Premium—and thus the reinstatement of their blue badge. “Going forward, all X accounts with over 2500 verified subscriber followers will get Premium features for free and accounts with over 5000 will get Premium+ for free,” Musk tweeted on March 27.

Across X, many accounts that were regifted the blue badge tweeted to clarify that they did not, in fact, stoop to being so lame as to pay for a blue check mark. “My blue check is back and I just want to make clear I am not paying El*n M*sk for this thanks very much,” Wired writer Lauren Goode tweeted. “Just to be clear, I did not pay for verification,” film producer Franklin Leonard wrote. “It’s like a mole grew back,” wrote New Yorker staff writer Emily Nussbaum.

You’re wondering about me? How nice of you. Apparently, I’m still blue check–less, so—for now—I’m in the clear. Good riddance.


ACLU Commends FCC Efforts to Restore Net Neutrality Rules

The FCC will vote on the rules later this month, securing access to a free and open internet.


WASHINGTON – The American Civil Liberties Union commends the Federal Communications Commission’s (FCC) announcement today that it will hold a vote later this month to reinstate vital net neutrality regulations by reclassifying broadband under Title II of the Communications Act of 1934. Under this new classification, the FCC will also have the oversight authority it needs to protect internet users from abusive or neglectful internet service provider practices.

“Broadband is a necessity. It’s critical that the FCC has the tools necessary to make sure that everyone has access to high speed, reliable, and affordable internet, and that powerful telecommunications companies aren’t allowed to put profit over people,” said Jenna Leventoff, ACLU senior policy counsel. “We are thrilled that the FCC is moving closer to reinstating essential net neutrality rules, and we look forward to reading the full order when it is available.”

In 2017, former FCC Chairman Ajit Pai repealed the Open Internet Order, which had classified broadband as a Title II service and given the FCC full regulatory authority over the internet. The rule had also required internet service providers to treat all internet traffic equally by prohibiting blocking, throttling, and paid prioritization. The ACLU applauds this step toward reinstating those regulatory powers.




'It gets worse': Internet trolls Diddy as it's revealed he owes nearly $100M in mortgages on his Los Angeles and Miami homes

Trigger Warning: This article mentions sexual assault and trafficking which may trigger some readers. Discretion is advised.

LOS ANGELES, CALIFORNIA: Entertainment mogul Sean "Diddy" Combs, once celebrated for his business acumen and lavish lifestyle, now finds himself at the center of a financial firestorm.

Shocking revelations have emerged that the self-proclaimed billionaire owes a staggering $100 million in mortgage payments on his opulent mansions in Los Angeles and Miami, as per the New York Post.

The substantial debt, along with ongoing inquiries into accusations of trafficking and the possibility of a RICO case, has created a grim outlook for Diddy's empire, sparking online trolling and speculation about his future.

Diddy has taken out eight bank loans totaling $140 million to finance real estate acquisitions

Despite a net worth estimated at $1 billion by Forbes, the rapper-turned-business mogul has secured a total of eight bank loans amounting to an astonishing $140 million to fund his lavish real estate purchases.

According to The Daily Mail, Diddy acquired a luxurious Holmby Hills estate in Los Angeles for $39 million back in 2014, financing the purchase with two mortgages from Bank of America, each valued at $25.35 million.

Diddy's real estate empire extends beyond Los Angeles to the sun-drenched shores of Miami Beach. In 2003, he acquired a sprawling nine-bedroom, 12-bathroom waterfront home on the exclusive Star Island for $14.5 million from Sony Music head Tommy Mottola.

Revelations of Diddy's mortgage woes come amid multiple ongoing criminal investigations

To finance this purchase, Diddy took out five mortgages totaling an astonishing $68.45 million. While he managed to pay off a significant portion, roughly $42.35 million, a substantial debt remains outstanding.

Not content with a single Miami Beach mansion, Diddy expanded his portfolio in July 2021 by purchasing a neighboring 10-bedroom, six-bathroom property from Gloria and Emilio Estefan for $35 million.

This acquisition was secured with a single home loan from Bank of America for $20.7 million, scheduled for repayment by August 2036.
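The loan figures reported above can be cross-checked with a quick tally. This is only a sketch built from the amounts the article itself states; the five Star Island mortgages are not individually itemized, so only their reported combined total is used:

```python
# Tally of Diddy's reported real estate loans (amounts in millions of USD),
# using only the figures stated in the article.
holmby_hills = [25.35, 25.35]   # two Bank of America mortgages (2014 purchase)
star_island_total = 68.45       # five mortgages combined (2003 purchase)
star_island_count = 5
estefan_property = [20.7]       # single Bank of America loan (2021 purchase)

loan_count = len(holmby_hills) + star_island_count + len(estefan_property)
loan_total = sum(holmby_hills) + star_island_total + sum(estefan_property)

print(loan_count)               # 8 loans, matching the article
print(round(loan_total, 2))     # 139.85, i.e. roughly the $140 million reported

# Of the Star Island total, the article says roughly $42.35 million was repaid:
outstanding = star_island_total - 42.35
print(round(outstanding, 2))    # about 26.1 still outstanding on that property
```

The numbers hang together: two loans at $25.35 million, five totaling $68.45 million, and one at $20.7 million come to eight loans and about $139.85 million, consistent with the "$140 million" figure.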

The revelations of Diddy's mortgage woes come amid ongoing investigations by the Department of Homeland Security and New York prosecutors into allegations of trafficking and a potential RICO case.

An unnamed officer with the Department of Homeland Security revealed to The Post, "We believe that there is a disturbing history of trafficking."

Internet trolls Diddy upon learning about his staggering mortgage debt

As news of Diddy's staggering mortgage debt spread like wildfire across the internet, trolls and critics wasted no time in mocking the entertainment mogul's financial situation.

One X user quipped, "Oh Diddy is gonna go down hard. On the run, now debt, oh man." [sic]

Another user expressed increasing disbelief, stating, "It gets worse and worse for this man, and I heard them houses under his kids' names. Damn Diddy."

Another comment highlighted the staggering financial burden, with a user stating, "8 mortgages for 3 properties is insane for any economic class."

Concerns over the legal ramifications were also voiced, as one person remarked, "The point is, Diddy is in legal trouble. If he has to pay any large amounts of lawsuits, this could put his finances in jeopardy. His bills are astronomically higher than ours, so if he misses payments because of his legal troubles, that will be astronomically bad."

A user mused, "What happened... 'poor' Didi..." while a skeptic questioned Diddy's purported billionaire status, remarking, "And they say he is a billionaire." Lastly, a user found amusement in Diddy's downfall, stating, "Watching his entire life fall apart has been so enjoyable."

This article contains remarks made on the Internet by individual people and organizations. MEAWW cannot confirm them independently and does not support claims or opinions being made online.
