> AI is dangerous because we’re deploying it into systems that are already broken, controlled by entities with massive profit incentives and zero accountability, at a scale that makes individual harm invisible.
So basically, mirroring what we're doing for recruitment/hiring. In STEM fields, at least, the system was already problematic before – set up to reward performative ability over actual job-related skills – but AI is making it worse. In addition to résumé embellishment, cherry-picking (literally every reference list), and numerous lies of omission, we now have applicants sending in AI-generated application materials and hiring managers using AI to filter them.
Hiring people is a nightmare. And it's probably worse for the people I'm hiring, who send one application after another into the void.
The 1.2 second review stat is just staggering. I work in tech and we'd never ship a product with that kind of quality bar, but somehow it's acceptable when the stakes are people's health. The scariest part is how the appeals process is designed to exhaust you rather than actually adjudicate claims fairly. I've had family members give up on treatments because the paperwork felt more draining than the illness itself.
> ProPublica reviewed internal documents showing that Cigna doctors spent an _average_ of 1.2 seconds reviewing each of the 300,000 denials it issued in two months.
An _average_ of 1.2 seconds is the equivalent of reviewing 90% of the claims for .25 seconds (4 per second) and reviewing the remaining 10% for 9.75 seconds apiece. Maybe nearly 10 seconds is enough to actually think about the borderline cases?
Also, I think it's important to see how many claims were approved during the same two months. If Cigna approved 3,000,000 claims in those same two months, with an _average_ review time of 1.2 seconds per approved claim, that tells a different story.
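Just to sanity-check that split (the 90/10 breakdown is a hypothetical, not anything from the reporting): with 90% of denials getting 0.25 s each, the slow 10% would need about 9.75 s apiece to bring the overall average up to 1.2 s. A throwaway check:

```python
# Hypothetical split: 90% of denials reviewed for 0.25 s each.
# How long must the remaining 10% take to average 1.2 s overall?
fast_share, fast_time = 0.90, 0.25
avg = 1.2
slow_time = (avg - fast_share * fast_time) / (1 - fast_share)
print(f"slow 10% need {slow_time:.2f} s each")  # 9.75 s
```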
That's really interesting, and great point. There are millions of Cigna subscribers, so I imagine they approved at a higher rate than denied. My understanding is that denials have sizably increased with the algorithmic tools (starting earlier in the 2020s, before gen AI became popular), and that was the concern in the reporting. But now I'm curious about approvals; I tried to look it up quickly, but that data was harder to find, so it will take some more time to explore.
I haven't worked in healthcare claims, but I have worked with large volumes of applications. A sizeable minority are no-brainer denials because they applied for the wrong program, applied for something that we don't even offer, or are trying to game the system for free money.
I don't know what the Cigna denials were for, but I'd venture to guess that a lot of the denials are for services that aren't covered. For example, buying something that requires a letter of medical necessity without an LMN from your doctor. Or buying a Hallmark card from CVS and trying to get it reimbursed.
Once you see these a few hundred times, 1.2 seconds per denial isn't surprising.
Not trying to defend Cigna here, but here's how a reasonable, non-evil person could end up in a situation where they deny a lot of claims, and most denials are overturned on appeal:
1. You have to pay your doctors a lot of $$$/hour just for them to deny bad claims (sometimes accidental, sometimes made in bad faith).
2. To save $$$ and put those medical degrees to better use, you try to automate the system a bit.
3. The system is imperfect and occasionally flags false positives (denies claims that should be approved). You know this, so you give people an opportunity to appeal a denial. This shifts the onus to a few unlucky people.
4. If the system still works properly ~99% of the time, then the ~1% of people who were wrongly denied are the ones who appeal, and they SHOULD get their denials overturned!
5. The other 99% of people who don't appeal might've been like "oh well, it was worth a shot."
I'm using really simple numbers here and I'm not trying to defend the healthcare companies that suck tons of money out of my paycheck. I'm mostly playing devil's advocate.
Even if the companies (and their CEOs) are evil, the non-evil people working for them could generate some unsavory-looking numbers for a variety of non-evil reasons.
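To put rough numbers on that devil's-advocate scenario (all of these are invented for illustration): even a 99%-accurate denial system can show a high overturn rate on appeal, because the wrongly denied are the ones who bother to appeal.

```python
# Toy numbers, all invented: a 99%-accurate system's overturn rate on appeal.
denials = 300_000
error_rate = 0.01        # denials that should have been approvals
appeal_if_wrong = 0.90   # most wrongly denied people appeal
appeal_if_right = 0.002  # almost nobody appeals a correct denial

wrong_appeals = denials * error_rate * appeal_if_wrong         # 2,700
right_appeals = denials * (1 - error_rate) * appeal_if_right   # 594
overturn_rate = wrong_appeals / (wrong_appeals + right_appeals)
print(f"overturn rate among appeals: {overturn_rate:.0%}")  # 82%
```

So a headline like "most appealed denials get overturned" is consistent with a system that is wrong only 1% of the time; the appeal pool is heavily self-selected.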
Yeah, and I imagine a product that was wrong 80-90% of the time (thinking about how many denials were appealed and overturned) would not be something that people would want to keep using. I hope more people will use these tools against the insurers, with free claims fighters and things like that. Your family members are not alone, but it shouldn't have to be that way. Health insurance companies are a scam.
Couldn't agree more. Your points hit home, like predatory tech in my Pilates apps.
Wait, what? I'm not familiar. Can you share more?