Tech

    How Google’s new AI model protects user privacy without sacrificing performance

By Money Mechanics | September 16, 2025


[Image credit: picture alliance/Contributor/picture alliance via Getty Images]


    ZDNET’s key takeaways

    • AI developers are trying to balance model utility with user privacy.
    • New research from Google suggests a possible solution.
    • The results are promising, but much work remains to be done.

    AI developers have long faced a dilemma: The more training data you feed a large language model (LLM), the more fluent and human-like its output will be. However, at the same time, you run the risk of including sensitive personal information in that dataset, which the model could then republish verbatim, leading to major security compromises for the individuals affected and damaging PR scandals for the developers. 

    How does one balance utility with privacy?


New research from Google claims to have found a solution: a framework for building LLMs that optimizes user privacy without major degradation in the AI's performance.

    Last week, a team of researchers from Google Research and Google DeepMind unveiled VaultGemma, an LLM designed to generate high-quality outputs without memorizing its training data verbatim. The result: Sensitive information that makes it into the training dataset won’t get republished.

    Digital noise

    The key ingredient behind VaultGemma is a mathematical framework known as differential privacy (DP), which is essentially digital noise that scrambles the model’s ability to perfectly memorize information found in its training data. 

    Crucially, the researchers embedded DP at the level of sequences of tokens. This means that at the most fundamental level, VaultGemma will not be able to perfectly memorize or reproduce the details on which it’s been trained.
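The article doesn't spell out the training mechanics, but the standard way to get this kind of guarantee is DP-SGD: clip each training example's (here, each sequence's) gradient so no single sequence can dominate an update, then add Gaussian noise calibrated to that clip bound. A minimal sketch, with illustrative parameter values that are assumptions rather than VaultGemma's actual settings:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step (DP-SGD sketch).

    Each example's gradient is clipped to `clip_norm` to bound its
    influence, then Gaussian noise scaled to that bound is added to
    the sum, so the averaged update barely depends on any one example.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)  # bound this example's contribution
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The two knobs map directly onto the trade-off the researchers describe: a tighter `clip_norm` and larger `noise_multiplier` strengthen privacy but degrade the gradient signal, and therefore model utility.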


    “Informally speaking, because we provide protection at the sequence level, if information relating to any (potentially private) fact or inference occurs in a single sequence, then VaultGemma essentially does not know that fact: The response to any query will be statistically similar to the result from a model that never trained on the sequence in question,” Google wrote in a blog post summarizing its findings.
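"Statistically similar to a model that never trained on the sequence" is the differential-privacy promise in a nutshell. The classic toy illustration of that promise, unrelated to Google's implementation but useful for intuition, is randomized response: each respondent flips their true answer with some probability, so any individual answer is deniable, yet the aggregate remains estimable.

```python
import math
import random

def randomized_response(true_answer: bool, p: float = 0.75, rng=None) -> bool:
    """Report the truth with probability p, otherwise the opposite.
    Any single response is plausibly deniable."""
    rng = rng or random.Random(0)
    return true_answer if rng.random() < p else (not true_answer)

def estimate_fraction(responses, p: float = 0.75) -> float:
    """Unbiased estimate of the true 'yes' fraction from noisy responses."""
    mean = sum(responses) / len(responses)
    return (mean - (1 - p)) / (2 * p - 1)

def epsilon(p: float) -> float:
    """Privacy level of this mechanism: p = 0.75 gives ln 3 ~= 1.10."""
    return math.log(p / (1 - p))
```

The same logic scales up to VaultGemma's setting: the noise makes any one training sequence's presence nearly undetectable from the model's outputs, while population-level patterns survive.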

There was a delicate balance to strike here: The Google researchers had to add this digital noise without catastrophically compromising the model’s performance. The better an AI model is able to memorize and thus perfectly replicate its training data, the better it should perform — at least, assuming your metric for “better” is generating human-like responses to user prompts.

    But if your metric is optimizing user privacy, then the memorization-only paradigm is a problem, because most of us don’t want to live in a world in which huge AI models are just hoovering up carbon copies of our personal information that can then be unpredictably republished by those same models.

    Google’s new research, then, focused on comprehensively mapping out the optimal formula for balancing compute, privacy, and model utility.

    Promising early results

    Built upon the Gemma 2 family of open models, which Google debuted in 2024, VaultGemma clocks in at just 1 billion parameters, according to the company — a relatively paltry size compared to the largest and most powerful models on the market, some of which are reported to be built with upward of a trillion parameters.

However, VaultGemma still performed roughly on par with some older models across key benchmarks, including OpenAI’s GPT-2. This suggests that a compute-privacy-utility optimization framework could eventually be a viable alternative to leading proprietary models, though it still has a long way to go to catch up.


    “This comparison illustrates that today’s private training methods produce models with utility comparable to that of non-private models from roughly 5 years ago, highlighting the important gap our work will help the community systematically close,” Google wrote in the blog post.

The training methods behind VaultGemma have been published in a research paper so the AI community can refine private models further, and the model weights can be accessed via HuggingFace and Kaggle.

