Dr. Harvey Risch Exposes the Central Fallacy of Evidence-Based Medicine

A serious critique of why the preponderance of clinical evidence was disregarded by public health officials during the COVIDcrisis. This is not for everyone.

This presentation is not for everyone. It addresses fundamental flaws in the logic of randomized controlled clinical trials and evidence-based medicine as the modern cornerstones of medical decision making. These flaws led to a series of bad decisions by public health officials during the COVIDcrisis, and they are one of many factors contributing to the tragedy of the global response to this relatively mild outbreak of a novel engineered coronavirus. I am featuring Harvey's presentation because it is one of the best brief lectures I have ever seen critiquing the current trend in clinical medicine of focusing only on studies aligned with the new discipline of evidence-based medicine while discarding observations from front-line clinicians and the overall preponderance of evidence.

This was presented at the Fourth International COVID (Crisis) Summit in Bucharest, Romania.

The Distortion of Evidence-Based Medicine (transcript)

Dr. Harvey Risch:        Thank you all for coming to this event. I thank the parliamentarians for sponsoring us and supporting us, and thanks to Lily and Robert for organizing this. I'm going to give you some science comments on [00:00:30] something that's been bugging me for the last three years, but probably actually more like almost 30 years. And then I'm going to have a few kind of offhand comments afterwards about things that I've thought about from people who've spoken today.

So let me just state my conflicts of interest. I'm an advisor to the chief medical team of The Wellness Company, which is a private company that does telemedicine [00:01:00] and supplements, and I hold some stock in it. I've also done some consulting work in cases of people who've been fired from their jobs and similar things. I have no grants from or relationships with any pharma from the last three to five years, no planned relationships, and no payments from them whatsoever.

Now, what's bugging me is that [00:01:30] we've been bombarded with propaganda about science for the last three years, and we've been told that everything about the pandemic, all the management measures and our curtailed behaviors and so on, is based on science. And I'm going to assert that none of it is based on science; it's based on plausibility, and there's a gigantic difference between plausibility and science. For the doctors here, [00:02:00] this comes down to problems with the plausibility of evidence-based medicine, which is a plausibility scam, as I'll show you.

So let me just sketch out a basic, rudimentary idea about science and what I think of it. The idea of science is to explain how nature works. Nature works, in fact, by every subatomic particle and quantum of energy interacting with absolutely every other one according to the ways or laws by which nature operates. [00:02:30] And that's a very unsatisfying model of reality. I would call it the saturated model. It fits everything perfectly; it doesn't explain anything. So if we back away from that, then to have a more simplified model, or a theory, means that we don't involve everything in the universe, and therefore the model only fits approximately. So how well models fit is an indication of how good the explanation is.

I [00:03:00] lost half of the graph. Nope. Okay. Well, here's a model. The figure on the upper right is two variables, X and Y, that are measured. Each dot is a point of measurement of something about nature. You can see that the line there sloping up to the right is an average representation of the relationship between those dots. It shows that as X increases, Y increases in proportion. Say [00:03:30] the graph that's supposed to be on the bottom there is the same sloped line, but the dots are spread much further apart. What that means is that it's the same relationship being expressed, it's just that it fits less well. So what's being fitted here is, each dot is a measurement of nature, and we're saying that this relationship is encoded by the average behavior of that sloped line, which characterizes some of the relationship among all those dots. But because [00:04:00] they're not all exactly on the line, there's something else that also explains where they are that we're not capturing in this model. So we're saying nature has a tendency to act this way, but it doesn't say exactly where the points are.
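To make the two panels he describes concrete, here is a minimal sketch, not from the talk itself: it fits the same sloped line to a tightly clustered and a widely scattered set of points and reports how well each fit explains the data. The simulated slope, the noise levels, and the use of numpy are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)

def fit_and_score(noise_sd):
    """Simulate y = 2x + noise, fit a straight line, and report slope and R^2."""
    y = 2 * x + rng.normal(0, noise_sd, size=x.size)
    slope, intercept = np.polyfit(x, y, 1)      # least-squares fitted line
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)           # scatter the line does not explain
    ss_tot = np.sum((y - y.mean()) ** 2)        # total scatter in the dots
    return slope, 1 - ss_res / ss_tot           # R^2: how well the model fits

for sd in (1.0, 8.0):                           # tight dots vs. widely scattered dots
    slope, r2 = fit_and_score(sd)
    print(f"noise sd {sd:>3}: fitted slope {slope:.2f}, R^2 {r2:.2f}")
```

Both fits recover roughly the same sloped line; only the goodness of fit differs, which is the point of the two figures.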

So the idea that when X changes, Y changes in proportion is a theory. That's an example of a theory, just like the gravitational force between two masses being equal to the gravitational constant times the product [00:04:30] of the masses divided by the square of the distance between them. That's a theory. Or that rays of light, which have momentum but no rest mass, bend in their paths when they pass by gravitational objects, through gravitational fields. Those are theories. Theories are descriptions that are proposed to explain something about nature, but theories are not science. And this is maybe counterintuitive. Theories can be technical, or they can be simple. [00:05:00] If you say, "Oh, I dropped my book on the floor," the idea that if you let go of your book, gravity is going to pull it to the floor is a theory, but you don't deal with the science of that, because you experience it every day, day in, day out. So it's intuitive to you, and you validate that theory by a huge amount of evidence.
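For reference, the theory he is paraphrasing here is Newton's law of universal gravitation, conventionally written as

$$ F = G\,\frac{m_1 m_2}{r^2} $$

where F is the attractive force between two masses m₁ and m₂, r is the distance between them, and G is the gravitational constant.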

Science occurs when observational or experimental work is carried out to investigate the specifics of what the theories explain. That work provides results that either support the theory [00:05:30] or tend to refute the theory. If you build up an evidence base of materials that support the theory, then that allows scientists to tentatively accept the theory as a representation of how nature works. And if you find contradictory evidence, then you reject the theory or you modify the theory to account for the contradictory evidence and you repeat the cycle. What scientists believe about how nature [00:06:00] works is not science. It's a statement of the beliefs of scientists. And that's different from how nature works because after all, science is approximate, scientists are approximate, scientists can be wrong. We know they've been wrong a lot during the last three years, and they've been wrong a lot before that too.

My favorite quote about this, from Karl Popper, the philosopher of science, was that "studies of what scientists believe have no relationship to [00:06:30] studies of how nature behaves." This quote has allowed me to maintain my independence throughout the whole pandemic. That me knowing the scientific evidence that I've studied and compiled and thought about, and knowing how nature works for the narrow theories that I've entertained, is what kept me afloat. When all these others, the CDC, the FDA, and the rest, say something different, people ask, "Why should we believe you, Dr. Risch, when all these people say something different?" [00:07:00] It's because I don't care whether 50 or a hundred scientists say something; it's irrelevant. What matters is the actual scientific evidence. And since I've mastered that, and I know what work I put into it and I know the realities of it, that's what's kept me afloat.

Now, I have to go through a little bit of technical stuff; you'll have to excuse me. We already heard other technical stuff today, but I'll try to explain it the best I can. In 1991, there was a claim put out [00:07:30] that medical evidence was bad and needed to be fixed, and that the term evidence-based medicine, with some new principles for gathering evidence about medical treatments like drugs, vaccines, and things, would somehow repair that. Now, of course, when this happened in 1991, I thought this was hubristic and obnoxious and ignored 200 years of medical science. Not all of that medical evidence was good, but some of it was. We knew a lot of things about medicine in 1991 [00:08:00] and how to treat patients in 1991.

So this was an issue of not just territoriality, of somebody coming in and saying, "We want control of medical evidence." This was postulating a theory about how medical evidence should be gathered, and the basic idea of this evidence-based medicine is that it asserted that only randomized controlled trials, RCTs, provide high-quality evidence for benefit or harm. [00:08:30] This is plausible. After all, if you randomize something, you'd think that you've balanced everything out. So if there's confounding, which I'll explain, it's the same in both arms, and therefore the only thing that's left is the treatment versus the placebo. In fact, that's a misrepresentation. It's plausible. It sounds good. It sounds like if you randomize, you should get what you want, right? You should balance everything out. It turns out that randomization doesn't work, and I'll show you why.

A randomized [00:09:00] controlled trial, just so we're on the same page, is where patients eligible for a treatment, a treatment that has to have unknown benefits, because if you already know what the benefits are, you can't do the trial, are given either the treatment or the placebo, and then you follow them up, and there's a whole bunch of details about how you do this right and carefully and so on, to see what happens. How they're assigned to the treatment group or the placebo group is done by a random calculation. [00:09:30] At best, the randomization is done by somebody who's not the investigator of the trial, so that they can maintain independence. So the investigator doesn't know which drug or vaccine the patient got, the patient doesn't know which one they got, and therefore there can't be any undue influence on measurements of the outcomes in the trial. That's called double-blind, and it is one of the quality measures of doing randomized trials.
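To give a flavor of the mechanics, here is a minimal sketch of how a blinded random allocation list might be generated. The permuted-block scheme, the coded arm labels, and the function name are my own illustrative assumptions, not anything described in the talk.

```python
import random

def blinded_allocation(n_patients, seed):
    """Permuted-block (blocks of two) allocation list. In practice this would be
    produced by an independent statistician; investigators and patients see only
    the coded labels "A" and "B", not which code is drug and which is placebo."""
    rng = random.Random(seed)
    arms = []
    for _ in range(n_patients // 2):
        block = ["A", "B"]
        rng.shuffle(block)     # random order within each block keeps arms balanced
        arms.extend(block)
    return arms

print(blinded_allocation(10, seed=2024))   # e.g. ['B', 'A', 'A', 'B', ...]
```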

[00:10:00] Randomization is supposed to make everything the same in the treatment and the placebo arms, except for the treatment. Now, if you measure things about people in these kinds of studies, if you measure age, sex, race, body weight, height, things like that, then you can adjust for them. We have methods of adjusting for these things statistically in the analysis of the results. So things that are measured in fact don't [00:10:30] need to be randomized. The only benefit of randomization is for things that aren't measured, meaning that you may not measure some crucial variable, and then the journal editor or the reviewer of your paper says, "Oh, you should have measured this and you didn't control for it, and therefore your paper is junk." So the claim of randomized trials is that they can control for unmeasured factors, that they automatically balance unmeasured [00:11:00] factors by randomization.

The reason this is important is because when you do a randomized trial, or even a non-randomized trial, you'll see an association, a relationship between which drug, which treatment the people got and which outcome they got, whether they died or not, and so on. That's an association, a statistical association. What the randomized trial does is it gives you more than that. If it's done well and properly, it gives you [00:11:30] the ability to claim that that association is actually causation, that the drug caused the behavior or the outcome to happen.

Now, when COVID-19 was first occurring, this bizarre, to my mind, paper appeared in the New England Journal, called "The Magic of Randomization versus the Myth of Real-World Evidence." It appeared February 13th, 2020, [00:12:00] before we hardly had any cases in the US; it had probably been written in January of 2020. The authors are four very well-known British medical statisticians paid by pharma. The paper claims that randomization creates perfect studies and that all other, non-randomized studies are just rubbish, evidentiary rubbish. And I felt it was directed against my entire discipline of epidemiology. But then I realized they had conflicts [00:12:30] of interest and so on. You can ask why this was written when it was written; that's another conspiracy theory if you like, but okay.

This paper, however, did not discuss the flaw of randomized studies, which is that for randomization to work, it requires large numbers of outcome events. The number of deaths, say, in a study has to be large in order for randomization to work. So we have to ask, why is that? The answer [00:13:00] involves what is called confounding in epidemiology. Confounding is a circumstance where you have an exposure and an outcome, and you want to say that the exposure causes the outcome, but there might be some other factor that's related to both the exposure and the outcome, and you have to control for that, or match on it, or adjust for it, in order to be able to make a causation statement between the exposure and the outcome. That other factor is called the confounder. So confounding is a distortion in the magnitude [00:13:30] of the relationship. Say this drug causes death to be reduced by 50%; the 50% is the magnitude of benefit. And the confounder says, "Oh no, if you adjust for the confounding, it's only a 33% benefit instead of 50%." So the confounder biases, distorts, the relationship you're looking at.
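The 50%-versus-33% illustration can be checked with a small worked table. The counts below are entirely hypothetical, my own construction rather than data from the talk: treated patients are disproportionately low-risk, so the crude comparison shows a 50% reduction in deaths, while stratifying on the confounder shows the true benefit is only 33%.

```python
# Hypothetical counts chosen so the arithmetic matches the 50%-vs-33% example.
# Each row: (stratum, treated deaths, treated N, untreated deaths, untreated N)
strata = [
    ("low-risk patients",  50, 750,  50, 500),
    ("high-risk patients", 50, 250, 150, 500),
]

# Crude (unadjusted) risk ratio: pool everyone and ignore the confounder.
t_deaths = sum(r[1] for r in strata); t_n = sum(r[2] for r in strata)
u_deaths = sum(r[3] for r in strata); u_n = sum(r[4] for r in strata)
crude_rr = (t_deaths / t_n) / (u_deaths / u_n)
print(f"crude risk ratio {crude_rr:.2f}: an apparent {1 - crude_rr:.0%} reduction in death")

# Stratum-specific (confounder-adjusted) risk ratios: the undistorted effect.
for name, td, tn, ud, un in strata:
    rr = (td / tn) / (ud / un)
    print(f"{name}: risk ratio {rr:.2f}, a {1 - rr:.0%} reduction")
```

Running this prints a crude risk ratio of 0.50 but a risk ratio of 0.67 within each stratum, exactly the kind of distortion he describes.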

Okay. So why does this matter at all? Well, if you don't measure any factors in the study, if you just measure the disease outcome [00:14:00] and the treatment group, then these unmeasured confounders probably matter, because most diseases have factors that relate to their disease behaviors. On the other hand, if you measure every factor in the world that you could possibly think of, and with most chronic diseases we now know most of the risk factors, at least the ones that have been studied over the last 30 to 40 years, then if you measure all of those, the randomization probably doesn't matter, because we measure all of that [00:14:30] and adjust for it and get a very good relationship. In the 1970s, we didn't know all these risk factors, so to do a non-randomized trial in 1970 meant things weren't measured. To do one in 2015 means everything is measured and we adjust for it.

Here's the problem, and why I said what I said. Suppose you [00:15:00] flip a coin 10 times. It's very easy to get seven heads and three tails, or vice versa, by chance. About a third of the time, you'll get that. Seven heads to three tails is a factor of two and a third. That's a difference. That is potential confounding in a study. If on the other hand you flip a coin a hundred times, the chance that you'll get 70 heads and 30 tails or [00:15:30] vice versa, or more extreme, is 0.00008. It doesn't happen. It's extremely rare, but it's the same magnitude of confounding, of bias. It's the same factor of two and a third.
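These two probabilities are easy to verify. Here is a minimal sketch using only Python's standard library; the function name is my own.

```python
from math import comb

def imbalance_probability(n_flips, k):
    """P(at least k heads or at least k tails) in n fair coin flips.
    For k > n/2 the two events are disjoint, so double the one-sided tail."""
    one_sided = sum(comb(n_flips, i) for i in range(k, n_flips + 1)) / 2 ** n_flips
    return 2 * one_sided

print(f"7+ of 10 flips on one side:   {imbalance_probability(10, 7):.3f}")    # ~0.34, about a third
print(f"70+ of 100 flips on one side: {imbalance_probability(100, 70):.7f}")  # ~0.0000785
```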

So what matters is the number of outcome events that occur. In order for the randomization to work, you have to have large numbers of outcome events in the study. No studies do this. No randomized trials are designed to have large numbers of outcome events. They are designed to have statistical power to see the effect that you [00:16:00] want to see. If you want to see a twofold benefit, you design the study to have enough subjects to see a twofold benefit. But you don't design it so that potential confounders can only bias the result by 10% or less. This just isn't done. So this is the problem.
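One way to see why the number of outcome events, rather than the number of enrolled subjects, is what matters is to simulate how unevenly pure chance splits a fixed number of events between two arms. This simulation is my own illustration of the point, not an analysis from the talk; the 95th-percentile summary is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def chance_imbalance(n_events, n_sim=100_000):
    """95th-percentile factor by which chance alone splits n_events unevenly
    between the two arms of a perfectly randomized 1:1 trial."""
    heads = rng.binomial(n_events, 0.5, size=n_sim)
    heads = np.clip(heads, 1, n_events - 1)            # avoid dividing by zero
    ratio = np.maximum(heads, n_events - heads) / np.minimum(heads, n_events - heads)
    return np.percentile(ratio, 95)

for n in (10, 30, 100, 1000):
    print(f"{n:>4} outcome events: chance imbalance up to ~{chance_imbalance(n):.2f}-fold")
```

With only a handful of events the arms can differ severalfold by chance alone; only with many hundreds of events does the imbalance shrink toward negligible.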

I'm going to go quickly. Here's a study from one of my colleagues at Yale. This is a demographic table. Every randomized trial has a demographic table at the beginning of it telling you the distributions of age, sex, gender, whatever. Anyway. [00:16:30] I don't know if this has a pointer. But anyway, I just picked three variables there that are decidedly different between the two groups. This was randomized, but the numbers are small; there are only 32 subjects in each group. The numbers are too small for the randomization to have done anything. So there are large biases in this, somewhere between 1.8-fold and 2.4-fold and so on. And that's not acceptable if you're doing a randomized trial, really if you're doing a careful one.

[00:17:00] I've already said this. Randomized trials that don't have enough subjects are not provably gold-standard work. They don't tell you that the randomization was enough to have removed potential confounding by unknown variables. And that is the whole point of a randomized trial: to remove potential confounding by unmeasured variables. So if you have a small number of outcome events, [00:17:30] you fail on that; it's not a gold standard. However, those trials are even worse than non-randomized but controlled trials, because in non-randomized trials, we investigators know you have to measure everything in the world and adjust for it, because it's not randomized. So that's how we've done it for the last 30-plus years.

Randomized trialists assume that randomization worked, and so they don't adjust for anything. They just compare the results in a two-by-two table, and it's all potentially biased because [00:18:00] the trial isn't big enough. So this is a kind of bizarre circumstance, where there's a pretend plausibility that randomization automatically makes trials good when in fact that's not true; the trials are, by and large, not designed for that. And non-randomized trials, what we epidemiologists do, are being called evidentiary rubbish while randomized trials are called the gold standard, when neither is the case.

Now, [00:18:30] that was all theory. I'll just show you that this was a meta-analysis of meta-analyses published by the Cochrane Library people, where they looked at randomized trials and their non-randomized but controlled counterparts. I think there are 10 or 11 of these comparison studies. Each one had hundreds to thousands of individual studies comparing randomized trials versus non-randomized trials. The bottom line there, that black diamond, shows that the relative [00:19:00] risk, the difference between randomized and non-randomized trials, is 1.08, an 8% difference on average between the randomized and non-randomized trials. So this whole convoluted theory I've been telling you about why randomized trials are not necessarily good studies is borne out empirically; here is the empirical evidence to support what I just said.

Well, people got fed up with this. Americans petitioned Congress that they wanted [00:19:30] drugs that the FDA wasn't giving them. So in 2016 Congress passed the 21st Century Cures Act, which in section 3022 says that evidence has to include everything relevant and not just randomized trials. So of course, when we petitioned the FDA to approve hydroxychloroquine in May of 2020, they came back and said, "No, we're not going to do that unless you show us a number [00:20:00] of high-quality, large randomized controlled trials," violating exactly the law that they were supposed to be following.

Now, given what I've just said, this leaves the problem of how to evaluate evidence, and the way we evaluate evidence for causation is Sir Austin Bradford Hill's essay from 1965, where he groups kinds of evidence into a number of different, [00:20:30] what he called, aspects. And I'm not going to go through all of that. All this is, is a rubric for taking every possible piece of evidence bearing on your relationship and organizing it into some category or other. Some of these are more important: the strength of association and dose response are usually considered more important, but all of them can bear on a relationship. When you accumulate enough evidence along these lines, all pointing in the direction of supporting the relationship, that is when you conclude the relationship is likely [00:21:00] to be causal. So you have to have a lot of evidence, and that's what you do. It's not having two randomized controlled trials that show significant benefit. That is not how we evaluate evidence. That's how the FDA evaluates evidence, but it's not how the real world does it.

Okay, so let me just conclude. I've already said this: randomized trials are not necessarily gold standards. They can be if they're done well, but it takes a lot more than the average trialist, who thinks they [00:21:30] know what to do, actually does. This plausibility misrepresentation is just one of zillions that we've had to face over the last three or three and a half years: that masks supposedly keep the virus from being transmitted from one person to another, or that the vaccines create immunity and reduce transmission, or "the pandemic of the unvaccinated."

Speaker 2:                    [00:22:00] We have to stop.

Dr. Harvey Risch:        Right. Okay. So let me just leave you with a cartoon. I just want to make one final point. So this is science, and we've heard a lot of science, and I've certainly appreciated the amount of information that I've learned; it's all very important. The one thing that is missing for me, which Aurelian really brought to light, is that we've [00:22:30] motivated ourselves to want to cope with our circumstances, but we haven't planned how to cope with our circumstances. What we need is an Event 201 for us. We need to be able to bring our resources to bear in planning and figuring out what we can use. Just like the Romanian parliamentarians did it for Romania, we need to be able to do that in much more general terms.

I've got two [00:23:00] or three hundred doctors who are on my wavelength that I email day in, day out. You've got other resources. We need to be able to put those resources together. We've got media resources; the Epoch Times covers us a lot, and there are other media resources that we have. We need to put our resources together and plan for the IHR amendments, for the WHO, the health treaty, and whatever, climate change and all these things that are coming our way. We need to be able to cope with that by fighting [00:23:30] back with plans. The same way they did it to us, we need to do it back to them. Okay, thanks.
