AI and Radicalisation: The Hidden Risks We Can’t Ignore
- Ridhwan Mohd Basor

- Sep 8
Violent extremism doesn’t usually happen overnight. People don’t suddenly wake up one morning and decide they’re extremists. For many, it begins quietly: with questions about identity, with loneliness, with a search for meaning. What starts as an innocent online search can gradually pull someone into a darker space.
For years, social media has been blamed for fuelling this process. Algorithms push users into echo chambers, where hate-filled voices grow louder and more convincing. But today, there’s a new player in the mix: Artificial Intelligence (AI). And it’s changing the game.

When AI Becomes the Playground
Until now, much of the focus has been on how platforms like Facebook, YouTube, and TikTok amplify extremist narratives. But AI introduces a new layer of complexity.
Generative AI can create persuasive videos, images, and texts at scale. Chatbots powered by AI can mimic empathy, adapt their tone, and personalise responses, making conversations feel eerily human. Imagine a lonely teenager talking to an extremist chatbot that feels like a mentor, gently nudging them toward hate.
This is what communication theorist Marshall McLuhan meant when he said: “the medium is the message.” The danger isn’t just in the hateful words; it’s in the intimacy, immediacy, and human-like quality of the AI medium itself.

How Extremists Are Already Using AI
This isn’t hypothetical. Extremist groups are already experimenting with AI tools, as highlighted by the Global Internet Forum to Counter Terrorism (GIFCT) in its 2023 and 2025 reports:
Deepfakes and Disinformation – AI has been used to create fake videos of leaders giving incendiary speeches, or of celebrities appearing to endorse extremist ideas. In one case, an AI voice generator was used to fake actress Emma Watson’s voice reading Hitler’s Mein Kampf.
Recruitment Chatbots – Instead of lurking on forums, extremists can deploy chatbots to act like mentors or friends. These bots can engage vulnerable youth, steering them deeper into radical spaces.
Gaming Spaces – Violent actors have modified platforms like Roblox to recreate attacks (including Christchurch) or to create pro-ISIS games, turning play into propaganda.
AI has lowered the barrier for extremists to spread their message, making content slicker, more personalised, and more convincing than ever before.

But AI Isn’t Only a Threat
Here’s the hopeful part: AI can also be part of the solution.
Positive Chatbots – Imagine a chatbot trained on mental health resources, religious guidance, or conflict resolution. Instead of being radicalised by a fake “mentor,” a teenager searching for answers could find a supportive, empathetic voice.
Scaling Counter-Narratives – AI can mass-produce educational videos, fact-checks, or uplifting stories that resonate with different audiences.
Detecting Early Signs – AI tools could pick up subtle shifts in online behaviour, signals that someone might be heading down a radical path, so interventions can happen earlier (a toy sketch of this idea appears below).
In other words, the same technology that extremists exploit can also be flipped into a tool for resilience, education, and hope.
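
To make that last idea a little more concrete, here is a deliberately simple sketch in Python. It illustrates the shape of the approach, comparing someone’s recent language against their own earlier baseline, and nothing more: the flagged word list and the threshold are invented placeholders, not a real lexicon or a validated method, and anything like this in practice would need vetted models, strict privacy safeguards, and human review before any intervention.

```python
# Toy sketch of the "early signs" idea: compare a person's recent
# posts against their own older baseline and flag a sharp rise in
# concerning language. The word list and threshold are invented
# placeholders for illustration only.
from collections import Counter

# Hypothetical markers a trust-and-safety team might curate and review.
FLAGGED_TERMS = {"traitor", "purge", "enemy", "vengeance"}

def flagged_ratio(posts: list[str]) -> float:
    """Share of words across a batch of posts that match flagged terms."""
    words = [w.lower().strip(".,!?") for post in posts for w in post.split()]
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[term] for term in FLAGGED_TERMS) / len(words)

def drift_alert(baseline: list[str], recent: list[str],
                threshold: float = 0.05) -> bool:
    """Alert when flagged language rises sharply against the baseline."""
    return flagged_ratio(recent) - flagged_ratio(baseline) > threshold

# Example: last month's posts vs. this week's.
baseline = ["Had a great time at the game", "Anyone else love this band?"]
recent = ["They are the enemy of our people", "Vengeance is coming"]
print(drift_alert(baseline, recent))  # True -> route to a human reviewer
```

Crude keyword counting like this would never be deployed as-is; the point of the sketch is that the signal worth watching is change relative to a person’s own baseline, which is exactly what makes earlier intervention possible.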
The Road Ahead
The fight against violent extremism is not going away. In fact, AI might intensify it. Extremist groups are fast, adaptive, and opportunistic. They’ll use every tool available, which means we must too.
But there’s a bigger question for all of us. Technology is never neutral. As McLuhan reminded us, the medium matters just as much as the message. If AI is going to shape the way people seek meaning, belonging, and answers, then we need to decide what kind of voices we want AI to amplify.
Because at the end of the day, AI isn’t just code; it’s a mirror of our values. Used wrongly, it can spread hate faster than ever. But used wisely, it can become a powerful antidote to radicalisation.


