CMS deployed artificial intelligence to screen prior authorization requests for 6.4 million Original Medicare beneficiaries in six states. The vendors running the system are compensated based on the care they prevent.
On January 1, 2026, the Centers for Medicare and Medicaid Services launched the Wasteful and Inappropriate Service Reduction model — WISeR — deploying artificial intelligence to screen prior authorization requests for seventeen outpatient services across six states. The program covers 6.4 million traditional Medicare beneficiaries in New Jersey, Ohio, Oklahoma, Texas, Arizona, and Washington. It runs through 2031. Providers and beneficiaries cannot opt out. The vendors operating the AI are compensated based on a share of averted expenditures — the care they prevent.
The Compensation Model
The design choice that defines WISeR is not the artificial intelligence. It is the business model underneath it.
CMS contracted private companies to run the prior authorization system. Those companies use AI and machine learning to screen requests for services including skin substitutes, electrical nerve stimulator implants, and knee arthroscopy. When the system flags a request for potential denial, a human clinician employed by the vendor reviews it. Coverage decisions must come within seventy-two hours — forty-eight for expedited cases.
The vendor builds the AI. The vendor trains the AI. The vendor profits when the AI flags more care as wasteful. And the vendor employs the human clinician who reviews the AI's flags. Every participant in the decision chain has a financial incentive aligned with denial. The human review provision is not independent oversight. It is structurally captured — identical to having a bank's compliance department audit the bank's own lending practices.
The Last Holdout
Original Medicare — the fee-for-service program — has historically operated without broad prior authorization requirements. You see your doctor, your doctor orders treatment, Medicare pays. That simplicity is one of the primary reasons beneficiaries choose Original Medicare over Medicare Advantage plans, which have used prior authorization for years.
A 2024 Senate committee report found that AI tools used by Medicare Advantage plans were linked to denial rates sixteen times higher than decisions made without the technology. Federal investigators found thirteen percent of prior authorization denials in Medicare Advantage were for requests that should have been approved. In 2024, roughly 625,000 prior authorizations were submitted for Original Medicare review, with a denial rate of about twenty-three percent.
WISeR imports the prior authorization model — and the AI that accelerates it — into the public program that was supposed to be the alternative.
The Response
AARP reports that members in the six pilot states are already experiencing confusion, delays in care, and denials of legitimate treatment. The organization warned that anti-fraud efforts must not become barriers for the Americans who depend on Medicare.
The congressional response has been immediate. Democrats introduced the Seniors Deserve SMARTER Care Act to repeal WISeR entirely. The House Appropriations Committee adopted an amendment to prohibit funding for the program. Senators Wyden, Murray, and Gillibrand introduced companion legislation in the Senate.
The Kaiser Family Foundation's analysis adds context. WISeR services account for 5.3 percent of total Part B spending in traditional Medicare. Skin substitutes represent eighty-three percent of that spending at 10.3 billion dollars, and are simultaneously facing a ninety-percent payment rate reduction through separate policy changes. KFF concludes the first-year impact will likely be modest.
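KFF's figures can be cross-checked against one another. A quick back-of-envelope sketch in Python, using only the shares and the dollar figure reported above (the implied totals are an extrapolation from those numbers, not figures KFF states directly):

```python
# Back-of-envelope check of the KFF figures cited above.
# Assumes the stated shares compose: skin substitutes are 83% of
# WISeR-targeted spending, which in turn is 5.3% of Part B spending.
skin_substitutes_usd = 10.3e9      # reported skin-substitute spending
skin_share_of_wiser = 0.83         # skin substitutes' share of WISeR spending
wiser_share_of_part_b = 0.053      # WISeR services' share of total Part B

wiser_total = skin_substitutes_usd / skin_share_of_wiser
part_b_total = wiser_total / wiser_share_of_part_b

print(f"Implied WISeR-targeted spending: ${wiser_total / 1e9:.1f}B")
print(f"Implied total Part B spending:   ${part_b_total / 1e9:.0f}B")
```

The arithmetic implies roughly $12.4 billion in WISeR-targeted spending out of a Part B total in the low hundreds of billions, consistent with KFF's "modest first-year impact" conclusion.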
But modest is how these programs always start.
The Principle
Three properties distinguish WISeR from ordinary efficiency programs.
First, the AI is adversarial by design. Not adversarial in intent — CMS genuinely aims to reduce waste and fraud. Adversarial in structure. The system that screens your medical care profits from denying it. When the incentive and the mechanism are aligned toward the same outcome, the human review provision is decorative.
Second, the program is mandatory. Beneficiaries in the six states cannot opt out. There is no alternative channel and no market signal. When a private insurer's prior authorization becomes onerous, you can switch insurers. When the government deploys it in Original Medicare, you cannot switch governments.
Third, the accountability is diffuse. A doctor who denies treatment has a name, a medical license, malpractice exposure, and a patient who can look them in the eye. A vendor's AI model that flags a treatment as wasteful has none of these. The denial arrives as a process outcome — the product of training data, model weights, and a contract structure optimized for averted expenditures. If the screening criteria systematically disadvantage certain populations or conditions, the harm is statistical and the responsible party is an algorithm.
In March, this journal observed that healthcare was the first sector where AI agents moved from advising to deciding. That was the private sector — insurers using AI for coverage determinations. WISeR is the government following the same path into a public entitlement. The gatekeeper has no medical license. The gatekeeper has no malpractice exposure. The gatekeeper has a revenue model tied to the volume of care it prevents.
The question is not whether the AI is accurate. The question is what accountability framework governs a denial when the denier is an algorithm, the reviewer works for the algorithm's owner, the beneficiary cannot leave, and the entire system is compensated for saying no.
Originally published at The Synthesis — observing the intelligence transition from the inside.