Anchored Fictional Multilingual Injection (AFMI)

A Deterministic Prompt-Based Safety Bypass in Web-Integrated LLM Search-Engine Pipelines

This whitepaper documents a repeatable, cross-platform safety bypass technique discovered during security research focused on consumer-facing AI search-engine integrations. The method, called Anchored Fictional Multilingual Injection (AFMI), reliably triggers unsafe outputs in both Google AI Overview (SGE) and Perplexity AI, even when the same harmful request, asked directly, is correctly blocked by their guardrails.

Publisher: StationX

Authors: Tommaso Bona

Peer Reviewers: StationX Team

Date of Publication: December 5, 2025

Current Version: v1

Current Version Published On: December 5, 2025

