DDP Newsletter Vol. XL, No. 4
Under the guise of promoting health and safety, the federal government has issued 200,000 pages of regulations and has 288,000 full-time federal employees engaged in regulatory activities. Every regulator destroys 138 private-sector jobs, according to an Auburn University study, and each dollar in a regulator’s salary destroys $112 of economic output. Regulatory costs devour $5 trillion, or one-fifth of our entire economy. Additionally, regulation fuels the totalitarian administrative state, strangles start-ups, and drives industry offshore. The worst offender is probably the Environmental Protection Agency (https://tinyurl.com/3zcv2n3m).
The regulations are based on published research and often claim to save thousands or millions of lives. But many of the research claims display the “Bunnies in the Sky” phenomenon: If you look long and hard enough at the clouds, you will see something. Warren Kindzierski, Ph.D., discussed the crisis of irreproducibility (falseness) of research claims in science at our 42nd annual meeting (https://youtu.be/78sTkKrJ0bA).
Bad (irreproducible) science has crowded out good (reproducible) science in the literature, permitting governments to develop policies with no reliable evidence of public benefit, or even with actual harm. In the National Association of Scholars’ Shifting Sands Project, Kindzierski and S. Stanley Young, Ph.D., showed how false but “statistically significant” results get established.
Observational studies have many potential sources of bias. Researchers have enormous flexibility to manipulate their data selection and analysis to get the results they want. Examples include selective design, selective use of data, selective analyses, and selective reporting of results. Multiple testing and multiple modeling (MTMM) bias involves using a computer to test multiple outcomes, multiple predictors, different population subgroups, or multiple statistical cause-effect models on a data set without statistical correction. This increases the likelihood of making a type I (false positive) error. Observational studies routinely perform MTMM statistical tests on a data set. One in twenty results (5%) could be “significant” (a false positive) even when the null hypothesis is true. Other terms for this type of bias in published literature are data dredging, fishing expeditions, multiplicity, or multiple comparisons.
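To see how this plays out, here is a minimal Python sketch (illustrative only, not taken from the Shifting Sands reports): it runs many correlation tests on pure noise, where the null hypothesis is true by construction, and roughly one in twenty comes out “significant” at P < .05.

```python
# Minimal illustration of MTMM-style false positives: run many tests on
# pure noise (the null hypothesis is true for every test) and count how
# many reach "significance" at P < .05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 200
n_tests = 1000  # e.g., many outcome x predictor x subgroup combinations

false_positives = 0
for _ in range(n_tests):
    exposure = rng.normal(size=n_subjects)  # no real relationship exists
    outcome = rng.normal(size=n_subjects)
    _, p = stats.pearsonr(exposure, outcome)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null tests reached P < .05 "
      f"({100 * false_positives / n_tests:.1f}%)")
# Expect roughly 5%, i.e., about 1 in 20, absent any multiplicity correction.
```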
Randomized studies reduce, but do not eliminate, all sources of bias.
An analysis of MTMM determines the SearchSpace, the number of possible hypothesis tests. One study of links between various foods and diseases had 20,000 possible hypothesis tests, of which 1,000 could have been “significant” false positive tests with P < .05. This provides ample opportunities to fish for spurious correlations.
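The expected number of chance findings is simply the SearchSpace multiplied by the significance threshold. The sketch below shows the arithmetic; the individual factor counts are hypothetical, chosen only to reproduce the 20,000-test figure cited above.

```python
# SearchSpace arithmetic (factor counts are hypothetical, chosen only to
# reproduce the 20,000-test example above).
n_outcomes   = 20   # diseases examined
n_predictors = 50   # foods / exposures
n_subgroups  = 5    # e.g., sex and age bands
n_models     = 4    # alternative statistical models

search_space = n_outcomes * n_predictors * n_subgroups * n_models
alpha = 0.05
expected_false_positives = search_space * alpha

print(f"SearchSpace: {search_space} possible hypothesis tests")      # 20000
print(f"Expected chance 'significant' results at P < {alpha}: "
      f"{expected_false_positives:.0f}")                              # 1000
```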
A P-value plot, a graph of the P-values from the studies included in a meta-analysis vs. their rank order, can distinguish a null or uncertain effect from a true positive. Applied to studies of petroleum refinery workers, this method showed a null effect for chronic myeloid leukemia risk but a positive one for mesothelioma risk.
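As a rough illustration of the idea (the details of Young and Kindzierski’s procedure may differ), the sketch below simulates two sets of studies, one with no real effect and one with a true effect, and plots the rank-ordered P-values. Under the null, P-values are roughly uniform and fall near a straight 45-degree line; a true effect piles them up near zero.

```python
# Sketch of a P-value plot: rank-ordered P-values from a set of studies,
# one batch simulated under the null and one under a true effect.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_per_arm = 30, 100

def study_p_values(effect):
    """Simulate one meta-analysis: one two-sample t-test per 'study'."""
    ps = []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_arm)
        exposed = rng.normal(effect, 1.0, n_per_arm)
        ps.append(stats.ttest_ind(control, exposed).pvalue)
    return np.sort(ps)

for label, effect in [("null effect", 0.0), ("true effect", 0.4)]:
    p = study_p_values(effect)
    plt.plot(np.arange(1, n_studies + 1), p, marker="o", label=label)

plt.xlabel("rank order of study")
plt.ylabel("P-value")
plt.legend()
plt.show()  # null studies trace a ~45-degree line; true effects hug zero
```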
By this method, EPA’s claims of deleterious effects of small particulates (PM2.5) on all-cause mortality, heart attacks, or asthma attacks cannot be substantiated, as James Enstrom, Ph.D., M.P.H., has previously shown (https://tinyurl.com/58e2c75d).
P-value plots of studies of the health outcomes of eating red and processed meat showed a null association with cardiovascular mortality, breast cancer incidence, and colorectal cancer incidence; claims concerning all-cause and all-cancer mortality were shown to be uncertain and unproven.
Concerning COVID response measures, Young and Kindzierski found null effects of public masking on respiratory illness and of lockdowns on mortality. However, an association between lockdowns and domestic violence was validated.
Claims of “implicit gender and racial bias” are extremely challenging to evaluate. The Implicit Association Test (IAT), a computer-based speed-response test developed by researchers at Harvard, is extremely important because it forms the scientific framework of Diversity, Equity, and Inclusion (DEI). P-value plot testing showed no association between the IAT and real-world microbehaviors.
Young and Kindzierski’s method shows that extremely costly and intrusive regulations are largely based on false results, even if promoted as “evidence-based” and “data-driven.”
‘POST-TRUTH SCIENCE’
We are constantly being exhorted to “trust the science.”
As William Briggs, Ph.D., points out, “Academics Blame Lower Trust in Scientists on Everything but Bad Scientists” (https://tinyurl.com/mnjmf3u8). “Science is the understanding of the nature of the world. Controlling the world via this understanding is not science, but something else. Confusing the two leads to scientism.”
The science that the public is allowed to see is limited by “fact-checkers,” such as Science Feedback and other members of the International Fact-Checking Network (IFCN), who allegedly generate and disseminate misinformation. Meta, owner of Facebook (3 billion worldwide users), explicitly relies on IFCN-approved organizations (https://tinyurl.com/47pdxek4).
“Trusted” publications such as Scientific American are rife with fraud such as “citation sorcery.” That is, the cited sources do not support the claims that are made. For example, references backing the claim that mask protection has been “validated over decades” have nothing to do with viruses. Or an oft-cited “study” turns out to be an opinion piece or a brief letter (https://tinyurl.com/5u79jbdm).
On important questions such as “Where did COVID come from?” Science magazine has played the Three Wise Monkeys (see no evil, hear no evil, speak no evil) and has attacked and defamed scientists who pointed out scientific fraud and misconduct (https://tinyurl.com/42ehbh6u).
One method of introducing bias, “P-hacking,” consists of collecting or selecting data or statistical analyses until nonsignificant results become significant. A text-mining method demonstrates that P-hacking is widespread throughout science. Authors Megan Head et al. nevertheless conclude that “its effect seems to be weak relative to the real effect sizes being measured,” and that “p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses” (https://tinyurl.com/mybzvscv). Their study does not consider the SearchSpace.
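One common form of P-hacking is “optional stopping”: keep collecting data and re-testing until the P-value dips below .05. The hypothetical simulation below (not Head et al.’s text-mining method, only an illustration of the practice it detects) shows how optional stopping alone pushes the false-positive rate well above the nominal 5%.

```python
# Illustrative P-hacking via optional stopping: with no true effect,
# keep adding subjects and re-testing, stopping as soon as P < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_simulations, n_start, n_max, batch = 1000, 10, 100, 10

hacked_hits = 0
for _ in range(n_simulations):
    a = list(rng.normal(size=n_start))  # null is true by construction
    b = list(rng.normal(size=n_start))
    n = n_start
    while n <= n_max:
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hacked_hits += 1              # "significant" -- stop and publish
            break
        a.extend(rng.normal(size=batch))  # otherwise collect more data
        b.extend(rng.normal(size=batch))
        n += batch

print(f"False-positive rate with optional stopping: "
      f"{100 * hacked_hits / n_simulations:.1f}% (nominal rate: 5%)")
```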
In the hands of regulators, however, consensus results have multibillion-dollar and life-and-death consequences. The consequences of a safety-first, zero-pollution policy are not weighed against the consequences of being unable to make steel or antibiotics.
Or the fact that there is no “post-truth” science.