How does digital intolerance take shape, and who are the invisible allies of hate speech?

Milica Damjanović, Montenegro

Globalization, with its key principles and its expansion of opportunities for connection, the exchange of ideas, and cooperation in all its forms, has also brought serious global problems. One of the biggest is the prevalence of hate speech, which continues to grow. The existence and expansion of social networks, where this problem is most widespread, gives the phenomenon even more room to develop. Although platforms claim that content control is at a high level, the reality is something completely different. Drawing on the report of the European Union Agency for Fundamental Rights (FRA), this article asks some key questions: to what extent do individuals use platforms to spread hate speech, what happens when hate speech becomes a daily occurrence in the digital space, and who is responsible? Do we share our lives too lightly with invisible actors, to what extent is freedom of speech threatened, and is it perhaps its absence that truly threatens public space and dialogue?

How hate overwhelmed social media: responsibility and the consequences we’re ignoring

The development of the digital space has given us the chance to express our opinions, to be free and authentic, and to stay well informed rather than manipulated, especially in the way mainstream media have traditionally shaped public opinion. Over the last 15 years, the majority of young people have turned to social media as their primary source of information. Among the most pressing problems in the digital world are fake news, disinformation, and misinformation, and hate speech stands out as the most critical of them. Every second, thousands of offensive comments appear on social media; many go unpunished and some become viral. According to a 2023 report, hate speech targets vulnerable groups such as women, ethnic minorities, and the LGBTQ+ community, and social media platforms have become fertile ground for it. Although platforms claim to be regulating content, reality paints a different picture.

Faces of digital hate

Internet hate speech cannot be attributed to a single source. It comes from many actors: anonymous social media users whose sole purpose is to hurl insults, teenagers who believe rudeness is part of internet etiquette, and influential figures who profit from controversy. Young people, especially children and adolescents, are both primary perpetrators and those who face the worst consequences. Teenagers, in particular, often engage in hate speech for various reasons: peer pressure, attempts at humor, unconscious adoption of narratives seen in popular content, and most commonly, the desire to blend in and be accepted. On the other hand, an even greater and more dangerous problem lies in anonymous users hiding behind fake profiles, treating the internet as a space without accountability, which allows them to target specific groups without consequence.

The dark side of influencer influence: power without responsibility

Social media influencers and celebrities can profoundly shape the thoughts and feelings of their followers simply through their popularity and what they post online. They are most often seen as entertainers, but their reach goes well beyond that. A number of them use polarization and provocation as strategies to gain more attention and interaction. When public figures or influential people use speech that promotes hate, they amplify its harmful effect: their audiences are given license not only to harden their positions but also to feel justified in their prejudices.

From the audience's point of view, this situation is genuinely worrying, because people blindly trust and consume the content they are fed, allowing false narratives and hate speech to become the new normal of everyday life. This process, further fueled by algorithms designed to prioritize controversial content, leads in turn to hate speech dominating the digital space.

In contrast, countries like France have shown the rest of Europe the way by acknowledging the problem and introducing laws that limit the commercial influence of social media figures and, thus, prevent consumer exploitation.

Algorithms: the silent allies of hate

Most social media platforms claim to be actively working to curb hate speech and maintain strict moderation. However, their algorithms often play the opposite role. The FRA report highlights a concerning trend: content that provokes strong emotions, whether positive or negative, is far more likely to go viral. The driving forces behind the highest levels of engagement are controversy and conflict: anything that fuels online drama. Algorithms prioritize such content, regardless of whether it contains hate speech, because it increases reach and, consequently, generates higher profits.
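To make this dynamic concrete, the sketch below shows, in deliberately simplified Python, how a feed that ranks posts purely by raw engagement will surface the most provocative item regardless of whether it contains hate speech. Every field name, weight, and example post here is an invented assumption for illustration; real platform ranking systems are proprietary and far more complex.

```python
# A deliberately simplified, hypothetical illustration of engagement-based ranking.
# All field names, weights, and example posts are assumptions made for demonstration;
# they do not describe any real platform's system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int               # likes, angry faces, etc. -- all counted the same
    comments: int
    shares: int
    contains_hate_speech: bool   # known to a human reviewer, invisible to the ranker

def engagement_score(post: Post) -> float:
    # The score rewards any interaction; whether the reaction is outrage or joy
    # makes no difference, which is exactly the dynamic described above.
    return 1.0 * post.reactions + 2.0 * post.comments + 3.0 * post.shares

posts = [
    Post("Calm, factual explainer", reactions=120, comments=10, shares=5,
         contains_hate_speech=False),
    Post("Inflammatory attack on a minority group", reactions=300, comments=450, shares=200,
         contains_hate_speech=True),
]

# Ranking by engagement alone puts the harmful post on top.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.1f}  {post.text}")
```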

A significant part of the problem lies in artificial intelligence, which is expected to detect and automatically remove problematic content. However, AI struggles to recognize subtle forms of hate speech, such as coded language, sarcasm, or irony. As a result, many harmful expressions go unnoticed or unpunished simply because algorithms fail to contextualize them correctly.
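A toy example helps show why this happens. The hypothetical filter below flags only messages containing words from a fixed blocklist, so explicit slurs are caught while sarcastic or coded phrasings with the same intent pass through untouched. Neither the blocklist nor the messages reflect any real platform's moderation system; they are invented purely to illustrate the limitation.

```python
# A toy illustration (not any platform's actual moderation pipeline) of why simple
# keyword matching misses coded language, sarcasm, and irony.
BLOCKLIST = {"vermin", "subhuman"}  # hypothetical list of explicitly banned terms

def naive_filter(message: str) -> bool:
    """Return True if the message contains a blocklisted word."""
    words = {w.strip(".,!?'\"").lower() for w in message.split()}
    return bool(words & BLOCKLIST)

messages = [
    "Those people are vermin.",                                            # explicit: caught
    "Oh sure, 'those people' always make such WONDERFUL neighbours...",    # sarcasm: missed
    "Time to take out the trash in this city, if you know what I mean.",   # coded: missed
]

for m in messages:
    print(f"flagged={str(naive_filter(m)):<5}  {m}")
```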

The European Union has taken significant steps to limit the spread of harmful messages online through the Digital Services Act (DSA). This regulation imposes strict obligations on digital platforms, requiring  them to react swiftly to illegal content and be more transparent about how their recommendation  algorithms function. The goal is to hold platforms accountable for algorithmic decisions that influence  content visibility.

Unfortunately, social media companies have shown little willingness to disrupt this dynamic. Despite regulations like the DSA, platforms continue to favor content that triggers strong emotional reactions, ensuring their profit margins remain intact while hate speech continues to thrive in the digital space.

Hate Bots: the manipulators of digital conflicts

By focusing solely on human actors, we often overlook the fact that a significant portion of toxic online  discussions is fueled by automated accounts—bots. Their primary function is to promote specific  narratives in favor of an individual or group by spreading extreme, polarizing content, often laced with  misinformation. The real danger of these bots lies in their ability to create the illusion of widespread  support. When users see a flood of similar comments, they perceive the extremist viewpoint as widely  accepted and, in their need for belonging, may adopt these radical stances themselves.

Wounds that don’t heal: the psychological consequences

Hate speech leaves lasting scars on the human psyche. Research has shown that continuous exposure to negative and aggressive messages on social media leads to chronic anxiety, depression, and feelings of worthlessness, especially among younger users. Studies also suggest that constant verbal aggression alters the way the brain processes information, heightening fear and insecurity. This impact is particularly severe on young people who are still shaping their identity and self-confidence. When they become targets of hate speech, they lose faith in themselves and, in some cases, begin directing aggression toward others, normalizing hostile communication as an acceptable way to resolve conflicts.

These psychological effects are not limited to individuals. Hate speech corrodes entire communities,  creating a vicious cycle where hostility becomes the norm in digital interactions, making it even harder  to break free from the toxic culture of online discourse.

Breaking the chain: how to combat hate speech online?

Tackling hate speech in today's digital landscape is a complex challenge that requires a combination of legal regulation, technological advancement, and, most importantly, a shift in societal awareness. The European Union has already introduced measures such as the Digital Services Act, but legislation alone is not enough, and combating hate speech cannot focus solely on platforms and their responsibility.

Education and emotional intelligence play a crucial role in transforming digital culture. Media literacy programs, particularly those aimed at younger generations, can help individuals recognize manipulation, understand the consequences of their online behavior, and develop resilience against provocation. Some European educational systems have started integrating courses on digital ethics into their curricula, yet the real question remains: how much of this will translate into everyday behavior?

The responsibility also lies with us, individual users. Each of us has the power to reject harmful content,  report hate speech, and refuse to engage in digital harassment. Without active societal participation,  laws and algorithmic interventions will have only a limited impact. To create a healthier digital space, we must ask ourselves—will we remain passive observers, or will we be part of the solution?

Hate does not divide – it multiplies

Hate speech is not just a passing phenomenon of the internet; it is a reflection of deeper societal issues.  Hidden personal dissatisfaction, bots manipulating public discourse, algorithms amplifying extreme  content, and influencers who often unknowingly contribute to the normalization of toxic  communication all leave consequences that spill over from the digital into the real world—threatening  mental health and fueling polarization.

Europe is attempting to respond through regulations, but the law alone is not enough, as shocking  content is still favored, and users remain passive observers. To truly curb hate speech, a combination of  legal accountability, technological innovation, and a shift in digital culture is necessary. Each of us must  choose whether we will be part of the problem or part of the solution—whether we will like and share  negative content or stand up and change the course of the discussion.

The internet is now shaping the world, and we decide what kind of world it will be. There is no neutrality in the fight against hate – we either feed it or extinguish it.