A key component of the proposal is the push for a single federal standard, with officials warning that a patchwork of state-level regulations “would undermine American innovation and our ability to lead in the global AI race.”
“The Administration looks forward to working with Congress in the coming months to turn this framework into legislation that the President can sign,” the announcement stated.
Diane Yu, CEO of mortgage technology platform Tidalwave, told HousingWire that a federal standard would accelerate AI adoption in the industry.
“We have lenders in all 50 states using our point-of-sale platform today. That means every new state-level AI law is something our compliance and engineering teams have to evaluate, interpret and build for, instead of spending that time making the product better for borrowers and loan officers,” Yu said. “A single federal standard would let us invest that energy into the product itself.”
Yu said she’s had conversations with lending executives who are ready to adopt AI but are stuck in legal review because their counsel can’t give them clear answers about what’s required on a state-by-state basis.
“Lenders already follow one national set of underwriting guidelines through Fannie Mae and Freddie Mac. Adding a second, fragmented layer of AI-specific state rules on top of that doesn’t create clarity. It creates paralysis,” she said, adding that “the direction [of the framework] is right” and that the mortgage industry is a federally regulated industry at its core.
“Lenders already comply with ECOA, Fair Housing Act, TILA, and RESPA. The GSEs set underwriting standards nationally through Fannie Mae and Freddie Mac. And the federal guardrails aren’t theoretical. Freddie Mac’s Bulletin 2025-16, which went into effect March 3, requires every seller/servicer to establish a comprehensive governance framework for AI and machine learning systems,” Yu said.
“That means continuous monitoring, formal bias testing, alignment with NIST and ISO cybersecurity standards, senior management accountability, segregation of duties between AI development and risk oversight, and independent audits. That’s not light touch. That’s rigorous, mortgage-specific AI oversight, and it’s already happening at the federal level.”
While Yu is positive about the framework’s ethos, she said she wishes it addressed mortgage-specific AI use cases.
“One thing that’s missing from most of the AI policy conversation is the distinction between companies that are actually deploying AI in production and companies that are just talking about it. In mortgage, that gap is wide. We’ve been in production with national lenders for over a year. That’s a different conversation [from] a company showing a demo at a conference,” Yu said.
“I’d also like to see the framework address something specific to mortgage: the industry already loses roughly $600 per loan on origination costs, and closing a loan still takes 43 days on average,” she added. “If the goal is to protect consumers, faster approvals, lower costs, and fewer errors are [key to] consumer protection. The right regulatory framework should make it easier for lenders to adopt technology that delivers those outcomes, not harder.”