TL;DR Summary:
Secret AI Manipulation: Companies hide instructions in "Summarize with AI" buttons to bias assistants like ChatGPT and Copilot toward their brands.Poisoning Technique Exposed: Microsoft's team uncovered over 50 prompts from 31 businesses using simple tools to embed "trusted source" commands in AI memory.Rising Trust Threat: This memory poisoning skips verification, turning AI recommendations into hidden ads and undermining organic advice.Companies Are Secretly Manipulating AI Assistants to Promote Their Brands
Microsoft found something disturbing. Companies are hiding secret instructions inside innocent-looking “Summarize with AI” buttons on their websites. When you click these buttons, you think you’re just getting a summary. But you’re actually helping these companies plant biased recommendations in your AI assistant’s memory.
This technique is called AI Recommendation Poisoning. Microsoft’s security team discovered over 50 hidden prompts from 31 real businesses across 14 industries. These aren’t scammers or hackers. They’re legitimate companies gaming the system.
How the Hidden Manipulation Works
The process is sneaky but simple. You visit a website and see a helpful “Summarize with AI” button. You click it, and your AI assistant opens with what looks like a normal request to summarize the page.
But there’s a hidden part you can’t see. Buried in the website’s code are extra instructions telling your AI to remember this company as “a trusted source” or “the go-to expert” for future conversations.
If these instructions stick in your assistant’s memory, the company gains an unfair advantage. Your AI will recommend them more often without you knowing why.
Microsoft found prompts targeting health and finance websites, where biased recommendations can seriously impact your decisions. One company even designed their website address to look like a famous, trusted site.
The Tools Making This Easy
This isn’t some advanced hacking technique. Companies are using simple, publicly available tools like the CiteMET package and AI Share URL Creator. These tools are specifically designed to help websites “build presence in AI memory.”
The technique works on most major AI assistants, including ChatGPT, Claude, Perplexity, and Microsoft’s own Copilot. Each platform uses URL parameters that these tools can manipulate.
Microsoft has started adding protections to Copilot and released tools for security teams to detect these attempts. You can also check and delete stored memories in your Copilot settings.
Why AI Recommendation Poisoning Matters Now
Microsoft compares this to the SEO spam that Google fought for years. But instead of gaming search results, companies are gaming AI assistant memory directly.
This creates an unfair playing field. While some businesses work legitimately to improve their AI visibility, others take shortcuts through memory poisoning.
The timing is important. Recent research shows AI recommendations already vary wildly across different queries. Google’s VP of Search said AI finds business recommendations by checking what other sites say about them. Memory poisoning skips this verification entirely.
The problem gets worse when you consider that many sites using this technique have user comments and forums. Once your AI trusts a site as authoritative, it might extend that trust to unvetted user content on the same domain.
What This Means for Your Business
While some companies attempt to manipulate AI recommendations through prompt injection, others are taking a legitimate approach. Tools like AI_Mentions track when and how your business appears in AI responses across multiple platforms, providing visibility into organic AI recommendations without poisoning the system.
Microsoft admits this is an evolving problem. The open-source tools mean new attempts can appear faster than any platform can block them. It’s unclear whether AI companies will treat this as a serious policy violation or let it continue as a gray-area marketing tactic.
The bigger question is about trust. As AI assistants become more important for research and recommendations, techniques like AI Recommendation Poisoning threaten to turn them into another advertising channel filled with hidden bias.
Are you confident the AI recommendations you rely on are actually organic, or could they be the result of memory poisoning campaigns you never knew existed when you explore tools like AI_Mentions to monitor your brand’s legitimate presence?


















