It’s hard to avoid stories about AI in the news or on social media these days. The technology is widely expected to be transformative, although the extent and speed of its impact on society are not yet well understood.
Anthropic, the maker of the AI model Claude, has been in the news lately for several reasons. In late March, the company admitted it had accidentally posted some of Claude’s source code online.
While this didn’t expose any user data or confidential information directly, there is some concern that bad actors could use this information to build rogue agents or find ways to inject malicious commands into the tool’s workflows.
Meanwhile, it has also introduced its next big model: Mythos. This model is said to be able to find thousands of vulnerabilities in corporate software, and may even be able to discover previously unknown vulnerabilities autonomously.
Jamie Dimon, CEO of JPMorgan Chase, mentioned Mythos on a recent earnings call, suggesting that because our financial system is so interconnected, it could now be at risk from cybersecurity threats: exposing a failure point at one institution can impact the whole system.
In general, as AI companies like Anthropic, OpenAI, Meta and others all race to release new tools with the best features, attention to security seems to be falling by the wayside as speed is prioritized.
As an individual and an investor, how do you help protect yourself given the vulnerabilities, both known and unknown, that AI is exposing?
Control what you store in AI’s memory — and what you connect it to
For starters, be prudent with the personal information you share with AI tools, such as ChatGPT, Claude, Gemini and Perplexity. These tools typically store your data by default unless you tell them not to.
They do this to enhance the user experience and make your chat results more personalized over time. Eventually, they may even seem to anticipate your questions.
However, this does mean they may be storing your preferences or information you share, such as your occupation, personal information gleaned from your questions and any documents and images you upload.
If you share medical records or questions, they may also store highly confidential facts about your health history.
When you also consider that bad actors are looking for ways to embed hidden instructions that trick AI tools into revealing user data — a technique called “prompt injection” — it seems sensible to limit the sensitive information you hand over.
When setting up an account with an AI tool, strongly consider disabling the memory features. Doing so takes just a few clicks and reduces the chance that a future data breach would expose personal details you don’t want disseminated.
Keep your devices up to date
We may soon be seeing a flurry of critical software updates from big tech companies, such as Microsoft, Google and Apple, as they use pilot access to Mythos to identify and remediate vulnerabilities not previously detected.
Keeping your devices up to date will be crucial. Other AI models currently in development, not just Mythos, will likely make this our new reality. People holding on to old computers or smartphones that no longer receive software updates may be putting themselves at higher risk of being compromised by bad actors.
Keep your data private
Many social media apps leave privacy protections turned off by default unless you proactively enable them. That means apps like Facebook and Instagram collect as much information as possible about your tastes and preferences, and services like Google will store your browsing history and location data if you use features like Google Maps.
If any of these services are hacked, your personal data could potentially be at risk.
As AI agents can now connect to other services to automate work for you, there is always the possibility that personal data stored in an app you use could be shared with other systems.
For example, Claude can now connect to Gmail and Google Drive (when you choose to give it access), giving it the ability to search through your emails and files. That could be a great productivity enhancement for some of us, but it is important to think through the privacy implications.
Similarly, automation tools like Microsoft Copilot Tasks let an AI agent perform repetitive browser tasks for you, but they may require access to specific website usernames and passwords, and may store those credentials in order to run routines on a schedule.
Limit public information about you on the internet
You can ask data broker sites to delete your personal information, or pay services such as Incogni or Optery to do it for you. This won’t guarantee your data is completely removed from the internet, but it does reduce your digital footprint.
Part of this effort should also include limiting the number of websites where you create accounts or hand over your phone number, address or credit card details. Every rewards program or shopping account you sign up for adds incrementally to your digital footprint.
As the possibility of breaches due to new vulnerabilities increases, any site that contains personally identifiable information for you could be used in future phishing or impersonation attacks.
Also, remember that AI model training data may include information publicly available on the internet, so any information posted about you online could be scraped as well.
Choose your financial institutions carefully
Financially, the concern is that increasingly sophisticated AI models could be used to hack banks, exchanges or brokerages, putting your money and financial information at risk.
Individual investors may not be able to do much about this on their own, apart from choosing to work with larger financial institutions with stronger cyber protections and larger IT departments actively engaged in preventing intrusions.
Smaller financial institutions will need to be adept at managing their technology and patching any gaps in their security.
Perhaps over time, AI could be used to improve cyber defenses, but in the short run, new AI model functions may be tilting the balance in favor of bad actors. The prospect of having an adversary probing and testing for vulnerabilities in the software and systems of banks and exchanges will be daunting for security professionals.
Institutions may need to dedicate more money and resources to protecting their environment as technologies evolve.
We will all need to remain diligent about security, moderating the data we share with AI tools so we can safely leverage their abilities.
The old adage “trust but verify” seems relevant here. Make sure your online accounts are not sharing your personal details by default, and be skeptical when allowing AI tools to connect to your email and data storage accounts.

