Shaping the Digital Mind: How AI-Generated Images Drive the Next Wave of Online Radicalization
ID: 17 · Access: participants only · Updated: 2025-11-19 09:11:56

Abstract
The research examines the transformative impact of generative Artificial Intelligence (AI) on extremist propaganda, highlighting its role not just as a tool but as a disruptive force. AI automates, personalizes, and disseminates ideological messages at an unprecedented scale, posing a neuroscientific threat by exploiting human cognitive architecture. Through AI-generated images and content, extremist narratives become more persuasive, subtly undermining critical thinking and manipulating neural responses.

AI-driven propaganda operates in a "gray zone," avoiding direct incitement to violence while reinforcing extremist ideologies through visually credible, linguistically tailored materials. This content is harder to detect than traditional propaganda, as it leverages advanced technologies such as Generative Adversarial Networks (GANs) and consumer-grade tools such as Midjourney and Stable Diffusion. These tools, especially in open-source variants, enable large-scale production of harmful content that bypasses ethical safeguards.

Operational techniques include prompt engineering, in which text instructions are crafted to steer AI outputs toward propaganda goals, and jailbreaking, which circumvents platform restrictions through "visual synonyms." Media spawning and variant recycling allow AI to generate thousands of manipulated images from a single source, complicating detection and extending the lifespan of propaganda. Human-machine collaboration further refines this content, enhancing its impact and helping it evade identification.

Neuroscientific analysis reveals that AI-generated images exploit the brain's "novelty effect," which prioritizes new stimuli and activates dopaminergic regions. This lowers the threshold for long-term potentiation (LTP), making synthetic content more salient and persuasive. The amygdala, part of the limbic system, processes these images within milliseconds, triggering emotional responses such as fear or anger before conscious thought intervenes. The theory of embodied simulation suggests that visual perception reactivates motor, sensory, and emotional circuits, creating deep emotional connections that extremist propaganda exploits.

AI also reinforces neural biases by training on datasets that reflect societal stereotypes. Repeated exposure to these biases reshapes neural architecture, strengthening implicit prejudices and reducing cognitive flexibility. The proliferation of deepfakes and hyper-realistic content erodes public trust, blurring the line between reality and fabrication. This environment fosters disinformation, deepening ideological entrenchment within echo chambers. Hyper-personalized messaging, tailored to individual behaviors and locations, accelerates radicalization, while AI chatbots simulate human interaction, building false trust and validating extremist beliefs.

While systematic exploitation of AI by violent extremist actors (VEAs) remains experimental, the research identifies a significant long-term threat: AI-generated propaganda is already as persuasive as human-created content, and often more so when combined with strategic human-machine collaboration. In summary, AI's role in extremist propaganda represents a paradigm shift, leveraging neuroscientific vulnerabilities to amplify radicalization. Its ability to automate, personalize, and evade detection underscores the urgency of addressing this evolving threat.
Keywords
synthetic content, visual propaganda, prompt engineering, generative adversarial networks (GANs)
Presenter
Cristina Brasi
Psychologist Crimino FBA-LAB

Authors
Cristina Brasi, FBA-LAB
Beatrice Seccomandi, FBA-LAB
Important Dates
  • Conference dates: December 29–31, 2025
  • First-draft submission deadline: November 30, 2025
  • Presentation submission deadline: December 30, 2025
  • Registration deadline: December 30, 2025

Organizer
International Science Federation
Host
Zarqa University