10/02/2025
Author: Vesa Shatri
Far-right extremism has gained alarming momentum across the globe, driven by social and political divisions, fears around migration, and the capacity of modern technology to magnify dangerous messaging. Online platforms like Telegram, TikTok, and niche forums have made it easier than ever for extremist ideas to reach an extensive audience. In many instances, fear-based narratives paint immigrants or minority communities as a grave threat to “traditional” culture and values. By targeting people’s insecurities and amplifying decontextualized facts and conspiracy theories, these groups successfully recruit sympathizers and mobilize them toward hostility or even violent acts.
Researchers trying to counter this phenomenon face an uphill struggle. Extremists strategically move onto platforms that are difficult to moderate, spreading their content more rapidly than it can be taken down or accurately analyzed. With each new platform comes fresh layers of secrecy, creating insular online communities hardened against critical outside information. Through coded language, humor, and memes, far-right voices often camouflage hateful ideologies as merely edgy or irreverent, thereby increasing their mainstream appeal.
A disconcerting reality is that such narratives can translate into real-world violence. High-profile incidents in the United Kingdom, including Darren Osborne’s 2017 van attack on worshippers near a London mosque and Andrew Leak’s 2022 firebombing of a migrant processing centre in Dover, demonstrate how quickly individuals can become radicalized online. Their digital footprints revealed immersion in racist and conspiratorial content, highlighting the profound dangers posed by the internet’s echo chambers.
Confronting these threats calls for an approach that addresses the specific ways extremist groups manipulate public opinion. One initiative helping to fill this critical gap in research is the SMIDGE project, which focuses on people aged 45 to 65—a middle-aged demographic influential in public discourse, voting behavior, and community leadership. SMIDGE operates in Belgium, Denmark, Italy, Kosovo, Cyprus, and the United Kingdom, using methods such as social media research, content analysis, interviews, and creative engagement with local communities. By exploring the vulnerabilities and viewpoints of this group, the project uncovers how personal concerns about migration, employment, or cultural identity can be weaponized to legitimize extremist ideologies.
A central feature of SMIDGE is its online database of videos and related content associated with extremism. Researchers can track the evolution of language, the tactics used to manipulate viewers, and the shared themes in these productions. Recent entries illustrate how German lawmakers from the far-right AfD party are quoted or misquoted to strengthen nationalist narratives in Serbia, or how Viktor Orbán’s rhetoric targeting migrants in Hungary echoes broader European far-right talking points. The database also captures instances where discussions about terrorism or climate change are selectively framed to inflame prejudice against particular groups. By chronicling these examples, SMIDGE offers evidence of how extremist discourse is adapted, reinterpreted, and circulated across borders.
By cataloguing examples such as those above, SMIDGE enables a deeper understanding of how far-right discourse is formed, adapted, and circulated. Researchers can track recurring patterns, identify which messages gain traction, and design more effective counter-narratives.
Although the problem of far-right extremism has unique manifestations in specific regions—including the Western Balkans, where ethnic tensions and historical legacies remain potent—it is nonetheless part of a global pattern. In places like Kosovo and Serbia, deep-rooted nationalisms, economic challenges, and widespread institutional distrust create an environment where radical ideologies can take hold. These threats are magnified when far-right groups partner with political parties, religious institutions, or sports fan organizations, leveraging symbols and events to rally support. Extremists who see themselves as guardians of a particular cultural identity can exploit any international crisis or political shift to stoke fears of replacement, invasion, or moral decline.
In this environment, SMIDGE’s approach offers important lessons for researchers and policymakers alike. Rather than focusing solely on how content is disseminated, the project probes the underlying anxieties, prejudices, and resentments that make propaganda so potent. By identifying the language, images, and personal fears that extremists weaponize, it becomes possible to develop more nuanced counter-narratives. Local educators, journalists, and community leaders gain practical tools to challenge hateful content when it appears, while also meeting the concerns that often drive people to seek out far-right explanations.
Policy responses could include increased cooperation between tech platforms and governments, with transparent processes for content moderation and data-sharing to help analysts monitor extremist material. Strengthening institutional trust is equally critical. Investment in civic education, expanded support for investigative journalism, and targeted awareness campaigns for at-risk groups—particularly middle-aged internet users—can all serve as bulwarks against radicalization.
Ultimately, SMIDGE reveals that the internet is not solely responsible for the spread of hatred; it is the medium through which preexisting biases and anger are exploited. At the same time, digital technologies can become powerful tools for positive change. By collecting data on extremist narratives, analyzing patterns of radicalization, and engaging local stakeholders, projects like SMIDGE shed light on the deeper currents fueling far-right ideologies. This vital research can guide policymakers who are eager to craft legislation, shape education initiatives, and strengthen community resilience against extremist threats.
Although online hate cannot be eradicated overnight, a sustained and informed response—one that addresses the emotional underpinnings of prejudice as much as the tactical dissemination of propaganda—is within reach. SMIDGE demonstrates that solutions require collaboration, vigilance, and empathy. With the right resources, communities can use digital tools to build understanding and common ground, exposing hate for what it really is and ensuring that critical policy decisions are grounded in the realities of how extremism spreads.
The author is a junior researcher at the Kosovar Center for Security Studies and a Security Studies student at UBT.