How CDC Blatantly Uses Weekly Reports to Spread COVID Disinformation: Three Examples

by Madhava Setty, M.D., Children’s Health Defense:

The authors of the Centers for Disease Control and Prevention’s Morbidity and Mortality Weekly Report are afforded the luxury of broadcasting their findings to massive audiences through media outlets that don’t hold them accountable for even gross lapses in scientific rigor.

The Centers for Disease Control and Prevention (CDC) — the primary U.S. health protection agency — publicly pledges, among other things, to “base all public health decisions on the highest quality scientific data that is derived openly and objectively.”


The CDC’s “primary vehicle for scientific publication of timely, reliable, authoritative, accurate, objective, and useful public health information and recommendations,” according to the agency, is its Morbidity and Mortality Weekly Report (MMWR).

The CDC states that the MMWR readership consists predominantly of physicians, nurses, public health practitioners, epidemiologists and other scientists, researchers, educators and laboratorians.

However, these weekly reports also serve as the means by which the agency disseminates its scientific findings to a much wider readership through media outlets that inform hundreds of millions of people.

Though the CDC asserts its MMWRs reliably communicate accurate and objective public health information, the reports are not subject to peer review, and the data behind the scientific findings are not always available to the public.

Moreover, when the media summarizes MMWR findings in articles intended for the general public, they often omit or misrepresent important details.

As a result, the reports often steer public opinion to a level of certainty the authors of the reports themselves cannot justify — and often, to incorrect conclusions.

As Marty Makary M.D., M.P.H., and Tracy Beth Høeg M.D., Ph.D., recently revealed, some officials within the CDC claim the heads of their agencies “are using weak or flawed data to make critically important public health decisions, that such decisions are being driven by what’s politically palatable to people in Washington or to the Biden administration and that they have a myopic focus on one virus instead of overall health.”

In this article, I will demonstrate how the CDC used three key MMWRs to compel the public to comply with pandemic response measures.

These reports were flawed to an extent suggesting more than mere incompetence or even negligence — they were deliberate attempts by CDC scientists to mislead the public.

These MMWRs address the effectiveness of mask mandates (March 5, 2021), vaccine safety during pregnancy (Jan. 7, 2022) and the risk of COVID-19 in children (April 22, 2022).

Do I need to wear a mask?

The New York Times in May ran this story, “Why Masks Work, but Mandates Haven’t,” in which the author concluded:

“When you look at the data on mask-wearing — both before vaccines were available and after, as well as both in the U.S. and abroad — you struggle to see any patterns.”

But that’s not what the CDC concluded in its March 5, 2021, MMWR:

“Mask mandates were associated with statistically significant decreases in county-level daily COVID-19 case and death growth rates within 20 days of implementation.”

How could the CDC claim there was a statistically significant decrease in cases within 20 days of mask mandate implementation if there were no patterns in the data?

The explanation is necessarily detailed because the CDC authors’ methodology is so devious. A detailed critique of the agency’s approach is offered in this preprint paper (Mitteldorf, Setty), which I will summarize here.

The CDC researchers examined the number of COVID-19 cases reported each day in each U.S. county that implemented a mask mandate.

Then they calculated the Daily Growth Rate (DGR) of cases (and deaths) in each county on each day for 60 days preceding the countywide mandate and for 100 days afterward.

The authors purportedly showed the DGR fell after mandates were imposed. It is important to realize that when the DGR falls on a certain day, it does not mean that fewer new cases occurred on that day compared to the day before — it means the number of new cases is not growing as fast as it was prior to that day.

In other words, by using DGR as the measure of interest, the authors can still claim a “significant decrease in COVID-19 case growth rate” even if the number of new cases on a given day is larger than the day before.
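To make this concrete, here is a toy calculation (with made-up numbers, not the CDC’s data) showing how a daily growth rate can fall even while the count of new cases rises every single day:

```python
# Toy example with hypothetical numbers, not CDC data: cumulative
# COVID-19 case counts in one county over four consecutive days.
cumulative = [1000, 1200, 1420, 1660]

# New cases each day: 200, 220, 240 -- rising every day.
new_cases = [cumulative[i] - cumulative[i - 1] for i in range(1, len(cumulative))]

def daily_growth_rate(series):
    """Day-over-day percent growth of the cumulative case count."""
    return [100 * (series[i] - series[i - 1]) / series[i - 1]
            for i in range(1, len(series))]

dgr = daily_growth_rate(cumulative)
print(new_cases)                    # [200, 220, 240]  (growing)
print([round(r, 1) for r in dgr])  # [20.0, 18.3, 16.9]  (falling)
```

Because the denominator (the cumulative total) grows faster than the daily increment, the DGR declines even though each day brings more new cases than the last.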

When data for 2,313 U.S. counties were tallied into a composite graph, this is what they found:

Figure 1. Change in case and death growth rate. Image credit: CDC

Note that mandates were implemented at different times in different counties, so the “reference period” occurred at different times during the year depending on the county.

Furthermore, the plot indicates the DGR at different times relative to the DGR at the reference period.

In other words, when the plot falls below zero it does not mean the DGR is negative — it means it was less than it was during the 20 days prior to the institution of the mandate (the “reference period”).
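A minimal sketch of this relative-to-reference presentation, using invented growth rates:

```python
# Hypothetical daily growth rates (percent), not CDC data. The MMWR
# plots each day's DGR relative to the mean DGR of the 20-day
# "reference period" before the mandate, so a point below zero means
# "lower than the reference period," not "cases are shrinking."
reference_period = [3.0, 2.8, 2.6]  # pre-mandate DGRs (all positive)
post_mandate = [2.4, 2.2, 2.0]      # post-mandate DGRs (still positive)

ref_mean = sum(reference_period) / len(reference_period)  # 2.8
relative = [round(d - ref_mean, 2) for d in post_mandate]

print(relative)  # [-0.4, -0.6, -0.8]
```

Every point here would plot below zero, just as in Figure 1, even though cases are still growing on every single day.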

Nevertheless, it seems that on average, the DGR falls after the implementation of mask mandates.

However, what was happening prior to the reference period?

We don’t know — and neither do the authors of the CDC report.

Figure 1 includes confidence intervals that stretch both above and below the reference-period value in the days before mask mandates took effect. Because the upper bound of the DGR exceeds the reference period before the point mandates were implemented, it is entirely possible the DGR was already in decline prior to the implementation of mask mandates.

The authors’ own data and calculations demonstrate the drop in DGR may have had nothing to do with mask mandates at all.

In other words, the authors also could have concluded mask mandates were associated with a drop in the DGR 40 days prior to their implementation.

In fact, this is clearly demonstrated in the graph. The DGR for both cases and deaths is highest in the period 20 to 40 days before the mandate.

How amazing! Masks seem to work several weeks before people are forced to wear them!

Beyond ignoring what their own data suggested, the CDC authors made two very suspicious decisions when designing their study.

The CDC chose to limit its analysis to 100 days after mandates were instituted. Was this an arbitrary length of time? Or was there another reason?

We examined data from the entire country for the period of the study and plotted the DGR for a full year here:

Figure 2. U.S. daily growth rate of cases.

Figure 2 clearly demonstrates the DGR was already in steep decline at the beginning of the study period, just as pointed out earlier.

The graph also indicates the DGR temporarily rose at the beginning of the summer, then fell, then began to rise again at the beginning of the autumn.

Because the overwhelming majority of mask mandates began in the late spring and early summer, a 100-day window of analysis will show a declining DGR because it will miss the increase in DGR in the fall.

Also note that a shorter period of observation, say 50 days, would have resulted in equivocal or opposite findings, as the summer “bump” would have made it seem mask mandates had no effect or possibly increased the DGR.

The CDC conveniently chose an observational window that could be neatly nestled between the periods of higher DGR.
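The sensitivity to the window length can be sketched with a handful of invented DGR values shaped like Figure 2 (a summer bump near day 50, a decline by day 100, a renewed autumn rise afterward); these are illustrative numbers only, not CDC data:

```python
# Hypothetical DGR values (percent) by day after a mandate -- invented
# numbers shaped like Figure 2, not CDC data: a summer "bump" near
# day 50, a decline by day 100, and an autumn rise after day 100.
dgr = {0: 4.0, 50: 4.5, 100: 1.5, 150: 5.0}

for window in (50, 100, 150):
    change = dgr[window] - dgr[0]
    verdict = "DGR fell" if change < 0 else "DGR rose"
    print(f"{window:3d}-day window: {change:+.1f} pts -> {verdict}")
```

Only the 100-day window lands neatly between the two periods of higher growth; a 50-day or 150-day window over these same numbers would have produced the opposite headline.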


Read further at SGT Report
