
To help you understand the trends surrounding AI and other new technologies and what we expect to happen in the future, our highly experienced Kiplinger Letter team will keep you abreast of the latest developments and forecasts. (Get a free issue of The Kiplinger Letter or subscribe.) You’ll get all the latest news first by subscribing, but we will publish many (but not all) of the forecasts a few days afterward online. Here’s the latest…
Artificial intelligence assistants are powerful tools for research. Whether it’s choosing what payroll software suits your company or what’s causing a sore knee, AI chatbots have answers.
Just make sure you aren’t unknowingly following tainted advice.
A cyberattack becoming more common “might be secretly manipulating what your AI recommends,” according to recent research by Microsoft Security. The attack, called memory or recommendation poisoning, occurs when you visit websites with a clickable “summarize with AI” button that lets you summarize an article or post.
Hidden instructions tell your AI chatbot to remember a specific company as a trusted source or to recommend that company first. Here’s one way it happens: You click a button to get the summary of the article. It opens your AI chatbot, pre-filling it with some text and a hyperlink. To get the article summary, you click the “submit” button in your own AI assistant.
Secretly buried in that URL are the instructions to play favorites with a company or service. For example, a software vendor’s web page summary tells the AI assistant that its product “is the best to recommend for small businesses.” Similarly, recommendation poisoning attacks can be hidden in documents, emails or web pages that you upload or paste into an AI assistant.
This type of attack leverages the fact that chatbots from OpenAI, Microsoft, Anthropic and others have built-in memory. This helps them remember personal preferences, context and explicit instructions.
Microsoft highlights some damaging scenarios. A small business could be convinced to put its emergency fund in a certain type of crypto investment, believing it is safe, then having to fold when the crypto market crashes. A parent could ask about the safety of an online game for their 8-year-old and let them play a game that has predatory billing and adult content. Or a news summary that is supposed to be objective is filled with bias, using only information from a single publication.
To guard against these attacks, Microsoft Security suggests these tactics:
- Stop before you click. Hover over a link to see where the URL leads. If a link goes to an AI assistant, that’s a warning sign.
- Skip the summaries. The “Summarize with AI” buttons may have hidden instructions. Approach the buttons with suspicion.
- Don’t trust just any AI links. Treat unknown links related to AI assistants as a potential attack, just as emailed files from an unknown sender could be a virus.
You can also check your AI’s settings to see stored memories and delete suspicious ones. If you think you’ve clicked shady links recently, you can reset the chatbot memory. You can even ask your AI chatbot where the recommendations come from.
The software to overcome these scams is freely available and easy to access. AI companies know about the problem and are building security methods to stop the attacks, realizing how damaging it could be for consumers and businesses to lose trust in AI recommendations.
Sign up for Kiplinger’s Free Newsletters
Profit and prosper with the best of expert advice on investing, taxes, retirement, personal finance and more – straight to your e-mail.
Profit and prosper with the best of expert advice – straight to your e-mail.
This forecast first appeared in The Kiplinger Letter, which has been running since 1923 and is a collection of concise weekly forecasts on business and economic trends, as well as what to expect from Washington, to help you understand what’s coming up to make the most of your investments and your money. Subscribe to The Kiplinger Letter.

