Gavjenks
TPF Noob!
- Joined: May 9, 2013
- Messages: 2,976
- Reaction score: 588
- Location: Iowa City, IA
- Can others edit my Photos: Photos OK to edit
In general scientific research, we use a confidence level of 95%: that is, we call a relationship "significant" if we are at least 95% sure that the data aren't just a result of noise.
The problem I would like to talk about is that when it comes to medical outcomes, 95% is often a grossly inappropriate confidence level for responsible decisions, and yet clinical trials are still designed around it. That renders them, in most cases, almost completely useless for actually helping me make good medical decisions.
Example #1: Athlete's foot cream
Athlete's foot is annoying, but it is a pretty minor bad thing.
Death, however, is a really, really major bad thing. Even if there's only a very tiny chance of it happening due to using the cream, I'm not going to want to use the cream for its minor benefit, right? And by "tiny chance" I mean that I expect to be MUCH MORE confident than a p < 0.05 level that the athlete's foot cream is not going to cause, say, kidney failure. If the data left even a 1/1000 chance that a relationship exists between athlete's foot cream and sudden death, I would probably decline to use that medication.
But the problem is, nobody has any way whatsoever of guaranteeing me that degree of certainty, because nobody runs tens of thousands of people through foot cream clinical trials. They usually run maybe a few hundred: enough to satisfy a power analysis at the 0.05 level ONLY.
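To put a rough number on the sample-size problem: under the standard "rule of three" style approximation (my own back-of-envelope sketch, not from the trial literature), you can estimate how many subjects a trial needs before a rare side effect would even be likely to show up once. The specific rates below are illustrative, not quoted from any real trial.

```python
import math

# Rough sample size needed so that, if a side effect with true rate p exists,
# a trial would see at least one case with the given confidence.
# Solves 1 - (1 - p)**n >= confidence for n.
def subjects_needed(p, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# To have a 95% chance of observing even ONE case of a 1-in-1000 side effect,
# you already need roughly 3,000 subjects:
print(subjects_needed(1 / 1000))     # ~3,000
# For a 1-in-100,000 side effect, roughly 300,000 subjects:
print(subjects_needed(1 / 100000))   # ~300,000
```

So a few-hundred-person trial simply cannot speak to risks at the 1/1000 level, let alone rarer ones.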
(And in case you think this is a frivolous example: ketoconazole, a fairly common antifungal drug, was announced last year to be pretty solidly linked to liver and adrenal failure, and possibly birth defects, and is now recommended orally only as a last resort in severe, life-threatening fungal infections, long after being released and widely used. And I didn't even KNOW that when I started writing this example half an hour ago!)
Example #2: Flu vaccines
Some numbers (all from the CDC) for people under 65 years of age (I restrict it to under 65 because most of these numbers aren't known for older people):
% of people who get the flu when unvaccinated = roughly 15%
% of THOSE people who would have gotten the flu but don't, thanks to being vaccinated = roughly 60% in an average season (i.e., in simpler terms, in roughly 40% of people the vaccine "doesn't take")
So far = roughly 9% likelihood that a given vaccine actually ends up saving you from anything
Then, the likelihood of a person with symptoms being hospitalized = roughly 0.1% (although this is only ballpark guessed by experts)
So far = roughly 0.009% likelihood that a given vaccine saves you a trip to the hospital.
Then, the percent of those people who die = nobody knows. The CDC doesn't even know how many people die of the flu, not even to within an order of magnitude. But even the most liberal estimates would put it at something like 1% of hospitalized people.
So far = at most, about a 0.00009% likelihood that a given vaccine will save your life, i.e. something like 1/1,000,000 (a number with truly horrible precision, though).
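The chain of estimates above is just the four rough numbers multiplied together. A quick sketch (all four inputs are the ballpark figures quoted above, not precise values):

```python
# Rough per-season estimates quoted in the post (CDC-sourced ballparks):
p_flu_unvaccinated = 0.15        # chance of catching the flu, unvaccinated
vaccine_effectiveness = 0.60     # fraction of those cases the vaccine prevents
p_hospitalized = 0.001           # chance a symptomatic case is hospitalized
p_death_if_hospitalized = 0.01   # liberal estimate of deaths among hospitalized

p_vaccine_prevents_flu = p_flu_unvaccinated * vaccine_effectiveness          # ~9%
p_vaccine_prevents_hospital = p_vaccine_prevents_flu * p_hospitalized        # ~0.009%
p_vaccine_saves_life = p_vaccine_prevents_hospital * p_death_if_hospitalized

print(p_vaccine_saves_life)  # roughly 9e-07, i.e. about 1 in 1,100,000
```

Note that the uncertainty in the last two inputs dominates: the final figure inherits the "horrible precision" of the hospitalization and death estimates.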
Okay, so if I want to decide whether to take a vaccine or not, I also want to know how likely a vaccine is to kill me itself. Is it higher or lower than 1/1,000,000?
Well, thanks to clinical trials for flu vaccines involving only a few hundred people, the answer is: nobody has even a vague idea. All they can tell you is that it is probably less than about 1/100 (likely much, much lower, but that's all the data can GUARANTEE). (Note: this is all for death only, but the numbers are similarly fuzzy, or almost completely unknown, for any other severe complication or side effect you could name.)
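That "1/100" ceiling is what the standard "rule of three" gives you: if a trial of n people observes zero deaths, the 95% upper bound on the true death rate is approximately 3/n. A sketch, with n = 300 as an assumed trial size (the post doesn't give an exact figure):

```python
# "Rule of three": after n subjects with ZERO observed events, the 95%
# upper confidence bound on the true event rate is approximately 3/n.
def rule_of_three_upper_bound(n):
    return 3 / n

# With an assumed trial of 300 participants, zero observed deaths only
# guarantees the true death rate is below about 1 in 100:
print(rule_of_three_upper_bound(300))  # 0.01
```

To push that guaranteed bound down to the 1/1,000,000 level of the benefit estimate, n would have to be in the millions, which no pre-approval trial comes close to.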
So how do I make an educated decision whether it helps me more than it hurts me? I can't. I have to flip a coin. So what good is the clinical trial doing me? And why am I being told it's a great thing?
And so on and so forth for pretty much every MINOR drug or medical procedure out there. Major procedures like heart transplants are actually much easier to make educated decisions about, because the deadly outcomes either way are so much more common that we have plenty of data to get the precision we need. It's the minor ones that nobody has any clue about when you get down to it.
So is it even ethical to "prescribe" such minor things, or promote them? Versus, for example, simply saying "Here are our best known odds. You have permission to get this done if you want, but you have to follow your own intuition." That would be the HONEST way...