The Day I Realized Our Star Coder Was Actually Our Biggest Risk Adjustment Liability

Sandra was legendary. Twenty-three years of experience. Certified three times over. She could spot an HCC from across the room. Other coders came to her with questions. Management never questioned her work. That’s exactly why she cost us $3 million in audit penalties.

The problem wasn’t incompetence. It was the opposite. Sandra knew risk adjustment coding so well that she’d developed her own interpretations, shortcuts, and judgment calls that made perfect clinical sense but violated CMS guidelines. She was coding based on what should be true, not what the documentation actually supported.

The Expert Paradox

Your best coders are often your biggest audit risks. Sounds crazy, right? But experienced coders develop confidence that leads them to fill in blanks the documentation doesn’t complete. They know that diabetes plus a particular medication usually means complications, so they code the complications. They recognize chronic kidney disease patterns, so they assign the HCC even when the provider never explicitly stated the diagnosis.

I discovered this during a pre-audit review. Sandra had coded chronic systolic heart failure for a patient. When I checked the documentation, it mentioned “heart problems” and showed medications consistent with CHF. Sandra connected the dots. CMS doesn’t accept connected dots. They want every dot explicitly documented.

The newer coders, the ones we worried about? They stuck religiously to the documentation. If it wasn’t written clearly, they didn’t code it. Their accuracy rate was actually higher than Sandra’s when measured against audit standards, not clinical reality. The inexperience we saw as weakness was actually protecting us.

This pattern repeats everywhere. Experienced coders “know” that certain specialists always document thoroughly, so they’re less careful reviewing those notes. They “know” that specific conditions always travel together, so they assume related diagnoses. They “know” the providers’ documentation patterns, so they interpret unclear notes based on history. Every one of these shortcuts creates audit vulnerability.

The Documentation Translation Problem

Here’s what nobody tells you about risk adjustment coding: you’re not coding medical reality, you’re coding documentation reality. These are completely different things.

A patient absolutely has diabetes with complications. Their labs prove it. Their medications confirm it. Their specialist visits demonstrate it. But if the documentation doesn’t explicitly state it with proper MEAT criteria (Monitor, Evaluate, Assess, Treat), it doesn’t exist for coding purposes. This drives experienced coders insane because they’re trained to understand clinical truth.

I watched Sandra struggle with a chart last month. The patient clearly had depression (multiple antidepressants, psychiatry referrals, discussion of mood in notes). But the words “depression” or “major depressive disorder” never appeared. Just “mood issues” and “emotional challenges.” Sandra wanted to code it. She knew it was real. But documentation reality said no.

The translation problem gets worse with specialists. Nephrologists write “CKD stage 3” assuming everyone knows that means chronic kidney disease. Cardiologists document “EF 35%” knowing that indicates heart failure. Experienced coders translate automatically. But CMS doesn’t accept translations; they want exact language.

The Quality Paradox Solution

We solved our Sandra problem (and she’s still our star coder) by flipping the quality assurance process. Instead of having senior coders review junior work, we have junior coders audit senior work. Sounds backwards, but it works brilliantly.

Junior coders don’t have the confidence to interpret. They flag anything unclear. When they review Sandra’s work and ask “Where does it explicitly say chronic kidney disease?” Sandra can’t point to her clinical judgment. She has to find actual documentation. This keeps everyone honest.

We also created what I call “interpretation alerts.” Whenever a coder makes any judgment call, they flag it. Not as wrong, just as interpreted. These flagged codes get a secondary review. About 30% of them fail audit standards despite being clinically correct. Every one of those is a penalty caught before an auditor can find it.

The documentation back-check has been revolutionary. For every code assigned, coders must copy the exact text supporting it. Not summarize, not paraphrase, copy. If they can’t find explicit text, they can’t code it. This slows coding by maybe 10% but improves audit success by 40%.
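If your coders already work from a worksheet or spreadsheet export, the back-check can be enforced mechanically. Below is a minimal sketch, assuming a hypothetical CSV export with one row per assigned code and columns named chart_id, assigned_code, and supporting_text; the file name and column names are illustrative, not any standard format. It simply flags every assigned code where no verbatim supporting text was copied.

```python
import csv

# Illustrative column names; adjust to whatever your coding worksheet actually exports.
REQUIRED_COLUMNS = {"chart_id", "assigned_code", "supporting_text"}

def back_check(rows):
    """Return (chart_id, code, reason) for every assigned code missing copied documentation."""
    flagged = []
    for row in rows:
        support = (row.get("supporting_text") or "").strip()
        if not support:
            flagged.append((row["chart_id"], row["assigned_code"], "no supporting text copied"))
    return flagged

if __name__ == "__main__":
    with open("coding_worksheet.csv", newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            raise SystemExit(f"Worksheet is missing columns: {sorted(missing)}")
        for chart_id, code, reason in back_check(reader):
            print(f"Chart {chart_id}: {code} -> {reason}")
```

The point of the rule stays the same either way: a blank supporting_text cell means the code doesn’t get submitted, no matter how obvious the diagnosis seems clinically.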

The Monday Morning Fix

You can identify your organization’s expert paradox problems this week. Pull 20 charts coded by your most experienced coder. Have someone with less than two years’ experience review them using only this question: “Can you find exactly where it says this?”

Don’t let them ask the senior coder for clarification. Don’t let them use clinical judgment. Just documentation, plain and simple. The gaps they find will terrify you, and that’s good. Better to find them now than during an audit.
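If you want the 20-chart sample to be unarguably random rather than hand-picked, you can script the pull. Here’s a small sketch under the same assumptions as the worksheet example above: a hypothetical CSV of completed charts with chart_id and coder columns. The file name, column names, and coder name are placeholders.

```python
import csv
import random

WORKSHEET = "coding_worksheet.csv"   # placeholder file name
SENIOR_CODER = "Sandra"              # placeholder coder name
SAMPLE_SIZE = 20

with open(WORKSHEET, newline="", encoding="utf-8") as f:
    charts = sorted({row["chart_id"] for row in csv.DictReader(f) if row.get("coder") == SENIOR_CODER})

if len(charts) < SAMPLE_SIZE:
    raise SystemExit(f"Only {len(charts)} charts found for {SENIOR_CODER}; need {SAMPLE_SIZE}.")

# Draw the review sample; the reviewer answers exactly one question per code:
# "Can you find exactly where it says this?"
for chart_id in random.sample(charts, SAMPLE_SIZE):
    print(chart_id)
```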

Next, reverse your quality flow for one week. Junior reviews senior. New reviews experienced. Fresh eyes review confident interpretations. The pushback will be intense (Sandra threatened to quit), but the findings will save you millions.

Your star coders aren’t wrong. They’re just coding for the wrong reality. In risk adjustment, clinical expertise can be a liability if it overrides documentation discipline. The coders you trust most might be the ones who need the most scrutiny, not because they’re bad, but because they’re so good they’ve forgotten that CMS doesn’t care what’s medically true, only what’s documented clearly.

