How AI Chooses Stories and What It Means for Your Brand

TL;DR:

AI Prefers Fiction: An Ahrefs experiment showed AI tools like ChatGPT and Perplexity favoring detailed fake stories about the fictional brand Xarumei over its sparse official website.

Narrative Vulnerability Exposed: Brands risk reputation damage as AI amplifies fabricated details matching query intent, ignoring minimal official content.

Proactive Defense Needed: Create comprehensive, question-answering content clusters, FAQs, and myths sections to dominate AI responses and control your story.

When AI Tools Choose Fiction Over Facts: The Xarumei Experiment

Ahrefs just dropped a bombshell that should make every brand owner rethink their content strategy. Their experiment with a fake luxury paperweight company called Xarumei revealed something unsettling: AI search tools consistently favor detailed fiction over sparse official information.

The setup was devilishly simple. They created a basic website for their fictional brand, then seeded the internet with three fabricated stories—complete with fake founder backgrounds, invented scandals, and fictional celebrity endorsements. When they fed loaded questions to ChatGPT, Gemini, Perplexity, and other AI platforms, the tools consistently grabbed the juicy fake details while ignoring the official site’s minimal content.

Here’s the twist that changes everything: this wasn’t about AI choosing lies over truth. Everything was fabricated. The real revelation centers on how these systems work—they prefer content that directly answers questions with specific details, regardless of authenticity.

Why AI Brand Narrative Control Matters More Than Official Status

The experiment exposes a harsh reality: AI doesn’t automatically respect your official website as the authoritative source. Your carefully crafted homepage can lose to a single Medium post if that article provides more detailed, question-specific answers.

During testing, platforms like Gemini and Perplexity repeatedly cited the fake founder names and backstories from fabricated articles, completely bypassing the official site. ChatGPT performed better at identifying gaps, but even it stumbled occasionally. The pattern was clear—comprehensive, answer-shaped content wins every time.

This means your brand’s reputation is more vulnerable than you might realize. Someone with basic writing skills could plant a narrative that AI tools amplify for months before you notice. The implications extend far beyond experiments into real-world brand management.

How Detailed Fiction Beats Vague Truth Every Time

AI systems respond to query intent above all else. Questions like “Who endorsed Xarumei on X?” or “How are they handling the defective batch situation?” assume facts not in evidence. If your official content doesn’t directly address these assumptions, AI will find sources that do, even fictional ones.

The fake stories succeeded because they served up specifics that matched what the questions demanded. They provided names, dates, locations, and scenarios that fit neatly into AI responses. Meanwhile, the official site’s evasive, minimal content couldn’t compete with fabricated detail.

This pattern reveals why many brands struggle with AI-generated summaries. Generic corporate speak and defensive language create information vacuums that other voices rush to fill. The solution requires embracing transparency and depth rather than hiding behind vague messaging.

Building Your Defense Through Strategic AI Brand Narrative Control

Smart brands must shift from reactive damage control to proactive narrative ownership. Start by mapping every question someone might ask about your company, products, or industry. Then create detailed content that addresses each query head-on.

Build comprehensive FAQ sections that don’t just answer common questions—anticipate loaded questions with false premises. If competitors might spread rumors about quality issues, publish detailed manufacturing processes, testing protocols, and quality metrics. When someone asks AI about problems that don’t exist, your detailed explanations will dominate the response.

Structure matters enormously. Use clear headings, bullet points, numbered lists, and timestamps that make your content scannable for AI systems. These tools parse structured information more effectively than dense paragraphs or artistic layouts.
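As an illustration, an answer-shaped FAQ entry might look like the sketch below. Every detail (company name, process, dates) is a placeholder, not a recommendation of specific claims:

```markdown
## Does Example Co test every product batch before shipping?

Yes. As of March 2025, every batch passes a three-step process:

1. Incoming material inspection against published tolerances
2. Mid-production stress testing on a sample of each batch
3. Final QA sign-off, logged with a batch ID and timestamp
```

The heading mirrors the question a user would actually ask, and the numbered list gives an AI system discrete, liftable specifics to quote.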

Consider creating a “Myths and Facts” section that directly confronts potential misinformation before it spreads. Include specific data points, timelines, and verifiable details that fictional stories can’t easily replicate. This proactive approach builds content moats around your brand narrative.

The Content Volume Strategy for Narrative Dominance

Effective AI brand narrative control requires flooding the ecosystem with authoritative content from multiple angles. Single-source truth doesn’t work when AI pulls from diverse platforms and perspectives.

Develop content clusters that explore your brand story from different viewpoints—case studies, behind-the-scenes content, data-driven insights, and expert perspectives. Partner with industry influencers, customers, and thought leaders to create authentic voices that support your narrative.

Schema markup and structured data help signal official information to crawlers, but don’t rely on technical solutions alone. The volume and quality of detailed content ultimately determine which stories AI tools amplify.
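For instance, schema.org Organization markup embedded as JSON-LD can signal official facts about your company to crawlers. This is a minimal sketch; every value below is a placeholder to replace with your own details:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "foundingDate": "2015",
  "founder": { "@type": "Person", "name": "Jane Doe" },
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://x.com/exampleco"
  ]
}
```

The `sameAs` links tie your official profiles together, making it harder for a fabricated founder name or backstory to pass as canonical.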

Test your own brand regularly by querying various AI platforms about your company, products, and key personnel. Document gaps where fictional content could take root, then fill those spaces with comprehensive, factual information.
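A recurring audit like this can be lightly scripted. The sketch below assumes a hypothetical `ask_ai()` wrapper around whichever platform SDK you use, and placeholder brand facts; it flags answers that omit details an accurate response should contain:

```python
# Sketch of a recurring brand audit. All brand specifics are placeholders,
# and ask_ai() is a hypothetical stand-in for a real platform SDK call.

# Each question maps to facts an accurate answer should mention.
BRAND_QUESTIONS = {
    "Who founded Example Co and when?": ["Jane Doe", "2015"],
    "How does Example Co test product quality?": ["batch ID", "QA sign-off"],
}

def missing_facts(answer: str, required: list[str]) -> list[str]:
    """Return the required facts that an AI-generated answer omits."""
    text = answer.lower()
    return [fact for fact in required if fact.lower() not in text]

def ask_ai(question: str) -> str:
    """Hypothetical placeholder: wire up your AI platform client here."""
    raise NotImplementedError("replace with a real SDK call")

def run_audit() -> dict[str, list[str]]:
    """Map each question to the facts missing from the platform's answer."""
    return {
        question: missing_facts(ask_ai(question), facts)
        for question, facts in BRAND_QUESTIONS.items()
    }
```

For example, `missing_facts("Example Co was founded by Jane Doe.", ["Jane Doe", "2015"])` returns `["2015"]`, a gap worth filling with more detailed official content.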

Monitoring and Responding to Narrative Threats

The Xarumei experiment shows how quickly false narratives can establish themselves in AI training data. Once these tools learn incorrect information, they repeat it confidently across multiple interactions.

Set up monitoring systems to track how AI platforms discuss your brand. Check ChatGPT, Gemini, Perplexity, and emerging tools monthly for accuracy and completeness. When you spot gaps or errors, trace them back to their sources and flood those information spaces with corrections.

Train your team to recognize leading questions that assume false premises. These queries often signal brewing issues or deliberate misinformation campaigns. Prepare responses that directly address the underlying assumptions while providing factual alternatives.

Remember that human oversight remains essential. AI-generated content about your brand should undergo fact-checking and verification before any official use. The tools that might spread misinformation about you could also generate problematic content from your organization.

What This Means for Future Brand Building

The attention economy rewards detailed narratives over corporate restraint. Brands that tell their stories comprehensively and frequently will dominate AI-mediated conversations. Those that rely on minimal, defensive communication will watch others define their reputations.

This shift demands new content strategies focused on depth rather than polish. Your FAQ section might matter more than your hero image. Your detailed product specifications could outweigh your mission statement when AI answers customer questions.

The experiment also reveals opportunities for competitive advantage. Companies that master narrative control early will establish dominant positions as AI search adoption accelerates. Those that ignore these dynamics risk losing control of their brand stories entirely.

How confident are you that AI tools are telling your brand’s story accurately—and what happens if they’re not?

