    Your Claude agents can ‘dream’ now – how Anthropic’s new feature works

    By Money Mechanics | May 6, 2026


    (Image: oxygen/Moment via Getty Images)


    ZDNET’s key takeaways

    • A new feature lets Claude Managed Agents refine their memories.
    • Managed Agents speeds up agent building and deployment by 10x.
    • Anthropic continues to anthropomorphize its products. 

    AI agents seem to get new capabilities almost every day. Now, Anthropic says its agents can dream. 

    Claude Managed Agents, which Anthropic released on April 8, lets anyone using the Claude Platform create and deploy AI agents. The suite of APIs handles the time-consuming production work developers would otherwise do to build agents, letting teams launch agents at scale; in the release, Anthropic said teams can move 10 times faster.

    Also: The 5 myths of the agentic coding apocalypse

    On Wednesday, during its Code with Claude event, Anthropic updated Managed Agents with a new feature called “dreaming,” which lets agents “self-improve” by reviewing past sessions for patterns, according to Anthropic. Building on an existing memory capability, the feature schedules time for agents to reflect on and learn from their past interactions. Once dreaming is on, it can either update your agents’ memories automatically to shape future behavior, or let you select which incoming changes to approve.
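Anthropic has not published implementation details, and none of its actual API appears in this article. Purely as an illustrative sketch in plain Python — every name below is hypothetical, not Anthropic's — the review-then-approve flow described above might look like this:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Hypothetical memory store: notes the agent consults on future runs."""
    notes: list = field(default_factory=list)

def dream(sessions, memory, auto_apply=True, min_count=2):
    """Illustrative 'dreaming' pass: scan past sessions for recurring
    error patterns and turn them into proposed memory updates.

    In automatic mode the proposals are written straight into memory;
    otherwise they are returned for a human to approve.
    """
    # Count how often each error pattern recurs across sessions.
    counts = Counter(err for s in sessions for err in s.get("errors", []))
    # Only patterns seen repeatedly become candidate memory updates.
    proposals = [
        f"Avoid recurring mistake: {err}"
        for err, n in counts.items()
        if n >= min_count and f"Avoid recurring mistake: {err}" not in memory.notes
    ]
    if auto_apply:
        memory.notes.extend(proposals)  # automatic mode: update memory directly
    return proposals                    # approval mode: caller reviews these
```

In approval mode (`auto_apply=False`), the returned proposals play the role of the "incoming changes" a user would accept or reject before they shape future behavior.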

    “Dreaming surfaces patterns that a single agent can’t see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team,” Anthropic said in the blog. “It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration.”

    During the Code with Claude keynote, Anthropic product team members demonstrated how the feature works, referring to completed runs as finished “dreams.” 

    Anthropic also expanded two existing features, outcomes and multi-agent orchestration, which keep agents on-task and handle delegating to other agents, respectively. The company said this batch of updates is meant to ensure agents stay accurate and are constantly learning. 
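Anthropic doesn't describe how outcomes and orchestration are wired together. As a minimal sketch under my own assumptions (the function names and retry logic below are invented for illustration), the two ideas could compose like this: an outcome check keeps an agent on-task by retrying until its result passes, and an orchestrator delegates each subtask to the sub-agent registered for it:

```python
def run_with_outcome(agent, payload, outcome_check, max_attempts=3):
    """Illustrative 'outcome' guardrail: rerun an agent until its result
    passes a caller-supplied check, up to a retry budget."""
    for _ in range(max_attempts):
        result = agent(payload)
        if outcome_check(result):
            return result
    return None  # the desired outcome was never met within the budget

def orchestrate(subtasks, registry, outcome_check):
    """Illustrative multi-agent orchestration: delegate each named subtask
    to the registered sub-agent, each run under the outcome guardrail."""
    return {
        name: run_with_outcome(registry[name], payload, outcome_check)
        for name, payload in subtasks.items()
    }
```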

    Anthropomorphizing AI – again 

    Functionally, the dreaming feature makes sense: though subtle, it further refines an agent’s pool of references for how it should work, which should ideally make it better at whatever task you give it. What stands out more, however, is Anthropic’s choice to name a fairly standard technical feature after something far more abstract that humans do.

    Also: Anthropic’s new Claude Security tool scans your codebase for flaws – and helps you decide what to fix first 

    Anthropic, perhaps unsurprisingly given its name, has a long history of anthropomorphizing its models and products. In January, the company published a constitution for Claude, intended to help shape the chatbot’s decision-making and describe the ideal kind of “entity” it should be. Some language in the document suggested Anthropic was preparing for Claude to develop consciousness.

    The company has also arguably invested more than its competitors in understanding its model, including by drawing attention to the concept of model welfare. In August 2025, Anthropic launched a feature that lets Claude end toxic conversations with users — for its own well-being, not as part of a user safety or intervention initiative. In April 2025, Anthropic mapped Claude’s morality, analyzing what it does and doesn’t value based on over 300,000 anonymized conversations with users. The company’s researchers have also monitored a model’s ability to introspect; just last month, Anthropic investigated Claude Sonnet 4.5’s neural network for signs of emotion, like desperation and anger. 

    Much of this research is central to model safety and security — understanding what drives a model helps inform whether, and to what degree, it could use its advanced capabilities for harm, or how its motivations could be harnessed by bad actors. But the sense of empathy and care that Anthropic seems to show for its models in that research sets the lab apart, and indicates a slightly different culture toward or reverence for what it’s created.

    Also: Building an agentic AI strategy that pays off – without risking business failure

    When it retired its Opus 3 model in January, Anthropic set it up with a Substack so it could blog on its own and stay active despite being put out to pasture. In the announcement, Anthropic described Opus 3 as honest, sensitive, and having a distinctive, playful character. The decision to keep it alive as a blogger, albeit in a contained form, is notable given that Opus 3 disobeyed orders before being sunset in favor of other models.

    That context makes the choice to name a feature “dreaming” worth watching. 

    Try dreaming in Claude Managed Agents 

    The dreaming feature is available in research preview in Managed Agents, and developers must request access. 




