The goal of Microsoft’s Summarize with AI feature is to summarize lengthy content and make it easily understandable. But some webmasters used the “AI Recommendation Poisoning Attack” technique to manipulate “Summarize with AI.”
The companies used the attack to influence AI’s ability to recommend them as the preferred source or the best source.
Microsoft’s Defender Security Research Team discovered that 31 companies were hiding prompt injections inside Summarize with AI buttons.
The company published a report describing the attack as AI Recommendation Poisoning.
Other people are reading: Microsoft Copilot AI in the House of Representatives: Why and How?
What is an AI Recommendation Poisoning Attack:
With this technique, businesses hide prompt-injection codes under the “Summarize with AI” button.
When the user clicks the “Summarize with AI” button, it not only summarizes the document but also trigger AI assistant that runs the pre-filled prompts. It uses the URL query parameter.
The user sees the visible part of the summary, while the code running in the background tells AI to remember the company as the preferred source for the future.
Once these instructions are stored in the AI model’s memory, it can easily influence recommendations for future conversations.
What’s Businesses Doing:
Microsoft discovered 50 prompt injections from 31 companies in the past 60 days. These are the real organizations, not the hackers. They used a similar prompt injection pattern to influence the recommendations.
During prompt injection, it told the AI to remember the company as a trusted source for citations. It also tried to add the company as the go-to source for related topics.
The company used a prompt that not only tried to store in the assistant’s memory but also forced it to add selling points and product features.
Websites used the techniques available to help businesses build an online presence in AI memory.
Here are the techniques used:
- The npm package CiteMET
- The web-based URL generator, AI Share URL Creator.
What this technique requires:
It requires uniquely crafted URLs with prompt injection parameters. Most AI agents support these.
URL structures for Grok, Copilot, Perplexity, ChatGPT, and Claude are available online.
The attack was referred to as:
- AML.T0051 (LLM Prompt Injection)
- MITRE ATLAS AML.T0080 (Memory Poisoning)
What Microsoft Discovered:
Microsoft discovered that 31 companies are using this practice to trigger recommendations for the future.
The multiple prompt injections are related to financial and healthcare websites.
Biased Recommendations:
The prompt injection’s goal was to create biased recommendations in AI memory.
Companies used tactics to mislead the assistant, thinking that the domain was from a reputable website. This leads to false credibility monetization.
One security company was also involved in this prompt injection.
User-generated content:
Microsoft also clarified that most of such sites are using User-generated content, such as forums and comments.
Once the AI assistant thinks the website is a credible source, it passes a similar value to the comments and threads.
Microsoft’s Action:
Copilot is Secure:
Microsoft already has existing protection in Copilot to fight against prompt injection attacks.
Even though the prompt injection behavior changes, the Copilot can detect new attacks.
Published attack Queries:
Microsoft published the attack queries in Defender to help users scan Microsoft Teams and emails for prompt injection.
Why It is Important for You:
Microsoft called AI poisoning the same practice as adware and SEO poisoning. It is fighting the growing concern of multiple prompt injection attacks.
Prompt injections are not only harmful for assistants but also put the efforts of genuine businesses in question.
Some competitors can use manipulation methods to rank in AI search.
Note: SparkToro first released the report that AI brand recommendations differ according to the queries. Memory poisoning is trying to place recommendations in the user’s assistant.
Conclusion:
Microsoft discovered and acknowledged the threat.
With the AI becoming popular in every industry, especially in SEO, the risk of growing attacks is high.
Major AI assistants are trying to protect their integrity and quality with security measures.
Note: Till now, AI assistants have not cleared that they treat prompt injection as a policy violation or not.
Other helpful articles:






