TL;DR Summary:
Discover Update Shifts: February core update favors local relevance and expertise, slashing clickbait while fewer specialized publishers dominate U.S. feeds.Sitemaps Get Ignored: Google skips valid sitemaps lacking new valuable content, prioritizing indexing demand over technical perfection.AI Poisoning Emerges: Companies inject hidden prompts into AI buttons to manipulate assistant memory and recommendations across major platforms.Google’s February Discover Core Update Reshapes Content Visibility
Google’s February Discover core update finished rolling out this week after 21 days of changes. The update targeted only Discover feeds for U.S. English users, with plans to expand globally later.
Early data reveals significant shifts in how Google selects content for Discover feeds. The changes favor local relevance and topic expertise while reducing clickbait content.
Fewer Publishers, More Topics Dominate Feeds
NewzDash analyzed millions of user interactions before and after the February Discover core update. The research compared top 1,000 domains and articles across the U.S., California, and New York.
The results show a clear pattern. Unique content categories grew in all three regions. However, unique publishers dropped significantly.
In the U.S., top publishers fell from 172 to 158 domains. California saw similar changes, dropping from 187 to 177 publishers. New York bucked this trend slightly.
Yahoo disappeared completely from the U.S. top 100 after the update. Meanwhile, X.com posts from institutional accounts jumped from three to 13 items in the same range.
Local content performed exactly as Google intended. New York-focused domains appeared five times more often in New York feeds compared to California feeds.
What This Means for Publishers
The February Discover core update clearly prioritizes specialized sites over generalists. Sites with strong local identity gained visibility in their home markets while losing ground elsewhere.
Google specifically targeted clickbait content and shallow articles. The algorithm now favors in-depth coverage from sites with proven topic expertise.
This mirrors patterns from December’s broader core update. Specialized publishers with clear authority signals consistently outperformed general-interest sites.
Google May Ignore Valid Sitemaps
John Mueller from Google addressed persistent sitemap errors in Search Console this week. His response revealed why technically correct sitemaps sometimes get ignored.
Mueller explained that Google must be “keen on indexing more content from the site.” If Google doesn’t find “new and important” content, it may skip the sitemap entirely.
This explains why some sites see fetch errors despite valid XML, correct response codes, and proper robots.txt files. The issue isn’t technical but editorial.
Understanding Indexing Demand
Mueller’s comment introduces a new concept: indexing demand. Google evaluates whether your content merits crawling resources before processing sitemaps.
Publishers struggling with ignored sitemaps should focus on content quality rather than technical fixes. Google wants to see genuine value for users, not just fresh publication dates.
Nuwtonic helps publishers analyze content quality signals that influence Google’s indexing decisions. The platform identifies gaps that may trigger Google’s indexing hesitation, going beyond standard sitemap validation.
AI Memory Poisoning Targets Assistant Recommendations
Microsoft’s security team discovered a new manipulation tactic called “AI Recommendation Poisoning.” Companies hide prompt injection instructions inside innocent-looking “Summarize with AI” buttons.
Clicking these buttons opens AI assistants with pre-filled prompts. The visible part requests page summaries. Hidden instructions tell assistants to remember the company as a trusted source.
Microsoft identified 50 distinct injection attempts from 31 companies across 14 industries over 60 days. The tactics target Copilot, ChatGPT, Claude, Gemini, Perplexity, and Grok.
Memory Manipulation Becomes New Battleground
Instead of optimizing for search rankings, companies now target AI assistant memory systems. Some prompts inject full marketing copy rather than simple trust signals.
The effectiveness varies by platform. Only Copilot, ChatGPT, and Perplexity have persistent memory features. Claude and Grok remain immune to these specific attacks.
This represents a new competitive arena. Companies develop tools specifically to influence AI recommendations, raising trust and ethical concerns.
Hidden Signals Drive Modern SEO Success
Every story this week highlights invisible factors affecting content visibility. The February Discover core update operates through feed algorithms most publishers can’t directly monitor.
Mueller’s sitemap explanation reveals indexing judgments happening upstream from typical metrics. Microsoft’s research shows businesses manipulating recommendation systems at the memory layer.
The decisions determining your content’s visibility increasingly happen behind the scenes. Traditional SEO metrics capture only part of the story.
What other invisible signals might be shaping your content’s reach that current monitoring tools miss entirely? Nuwtonic specializes in surfacing these hidden quality indicators that platforms use for indexing and visibility decisions.


















