They said “Science” — Did You Not Hear That?

In the Church, considered as a social organism, the mysteries inevitably degenerate into beliefs.1
— Simone Weil

Science, when sincere, principled, and dispassionate, is a beautiful tool. And it’s beautiful whether the conclusions are right or wrong; that’s part of the process. But how science is used in political discourse can turn it ugly.

A little while back, I wrote about a politician’s fallacious appeal to science while advocating a particular COVID-19 response. In a tweet, the politician did what I call “playing the science card” — saying, in effect, “you must believe me and adopt my political position because here’s the science.” It’s a cheap shot, but it’s worse than that. It degrades and devalues science, turning a particular instance of scientific work into a religious belief against which to test who is or isn’t faithful. It undermines the key principle of science as open debate — always learning, always in tension, always adjusting. Further, playing the science card is an open invitation to bludgeon any alternate viewpoint as anti-science. The beautiful tool becomes a bloody assault weapon.

But there’s a response to the science-as-weapon card: Don’t fret; keep calm and examine the science behind the claim. True, I, and perhaps you, do not have societal credentials as a “scientist” (remember that wonky commencement phrase, “all the rights and privileges pertaining thereunto”?), yet as citizens, we are responsible for what, and to whom, we delegate the foundations of public policy. Pursuing our better selves, and understanding our limitations, we will examine claims of purported science, consciously deciding what is trustworthy and what is not.

An illustrative example

I’ll illustrate by examining the work behind the politician’s science card play. I will mask the identity of the study, because my point is not this specific instance; it is to encourage improvements in the general tone and quality of societal dialog and debate.

Furthermore, I do not wish to vilify those involved, nor do I question their motives, good-heartedness, or social concern. Nor do I question the urgency of the situation and the need to make rapid decisions based on incomplete information. I question only the science itself, the manner of arguing for its validity, and the manner of presenting its conclusions. (If you simply must know the study’s identity, comment on this article and I’ll reply with details.)

My analysis is based on the state of the project on March 25, 2020, two days after it was referenced in the politician’s tweet. The best aspects of the work were that:

  • It was timely. The authors acted quickly based on available information to address an urgent global issue.
  • It involved (apparent) expertise. Though I don’t know and can’t vouch for the quality of input by the primary medical advisor, the individual at least has relevant credentials from Harvard and Yale (BA, MPH, MD).
  • The model is open. The authors document assumptions, logic, definitions, limitations, and data sources. This facilitates assessment by others as to the model’s validity.

Before I lay out my major critiques, let me clarify three things. First, to emphasize: my real point has nothing to do with this particular project — this is just an example. Second, my real point is that, whenever someone cries “science!” we should look behind the claim to understand what work was done and how. Third, the project and its work are useful, but I take issue with how it presented itself in ways that encouraged politicians and the public to use it as a science-card bludgeon.

Looking closer, and as a foundation for public policy, the project and its presentation suffer from major issues:

  • The project was done by non-scientists. When the politician referenced it, the main authors included other politicians, technology executives, consultants, and computer programmers working with guidance from one medical doctor. Even as of the date of this article, the team’s composition is of similar character. We should applaud the authors for their initiative in a time of need, and we should not require that only “official” scientists weigh in on issues, yet we should recognize that this team is a citizen collaboration; it does not approach the standard of peer-reviewed science. The urgency of the issue would not allow the normal peer-review process, so I would not expect the work to meet that standard, but I would expect the team to make this clear.
  • The project over-presented its endorsements. Though endorsements are good, and would begin to fulfill the “peer reviewed” criterion, the project announced its endorsements as though they were compelling and extensive when they were actually quite limited. Endorsements at the time were from four MDs, project participants, political scientists, and an author. Since then, one additional MD has endorsed it.
  • The model oversimplified COVID-19 uncertainties. One of my deepest concerns, and a major reason why I question the work, is that it oversimplifies by allowing no ranges in possible outcomes; worse, this does injustice to the practice of modeling based on data. Various COVID-19-related factors have ranges of plausible values, but the model ignored that and presented simplified, singular numbers. The model graphs four curves of hospitalizations based on four sets of assumptions: limited public policy action, social distancing, shelter-in-place, and a more restrictive lockdown. Analyzing scenarios is good to do, yet when there are so many uncertainties, it is irresponsible to present a single curve per scenario. For each scenario, the work should graph a low-to-high range based on variable assumptions. Even the average retirement planning model does this (the one I use graphs a range of outcomes based on 1,000 scenarios); a minimal sketch of the idea follows this list.
Figure 1: Suspicious similarity between US states
The image overlays five graphs from a study, each slightly transparent so that all can be seen. From left to right, there is a scale of hospitalizations, a line for the date of the analysis, a line marking where hospitals become overloaded, red "hills" of hospitalizations if no action is taken (~80% of chart height), and then much smaller orange "hills" for social distancing (~25% of chart height). At or below ~5% of chart height are horizontal black curves for available hospital beds and barely visible curves for shelter-in-place and full-lockdown options. © source kept anonymous
  • Comparison of graphs across US states looks suspiciously the same. In Figure 1, I’ve overlaid the graphs of five different states with wide variations in geography and demographics (California, Idaho, Maine, North Dakota, and Texas). With the graphs aligned by date, the dates of the peaks vary, but the relative disparity between “limited action” and “social distancing” is almost exactly the same across the five. For two very different states (CA and ID), the graphs are identical (this could be an error in programming the model). Figure 2 shows the slopes between the peaks of these two scenarios, and again they are nearly the same. I hasten to add: it is within the realm of possibility that, across all US states, there would be narrow variation in how the scenarios play out, but for me it strains credulity to see such similitude. Given the states’ variations in factors like population density and hospital infrastructure, it’s a red flag to investigate the model more deeply before putting much stock in it.
Figure 2: Suspicious similarity between US state peaks
The image overlays five graphs from a study, each slightly transparent so that all can be seen. From left to right, there is a scale of hospitalizations, a line for the date of the analysis, a line marking where hospitals become overloaded, red "hills" of hospitalizations if no action is taken (~80% of chart height), and then much smaller orange "hills" for social distancing (~25% of chart height). At or below ~5% of chart height are horizontal black curves for available hospital beds and barely visible curves for shelter-in-place and full-lockdown options. Lastly, black trendlines, largely parallel, connect the peaks of the red and orange hills. © source kept anonymous
  • It is an isolated analysis against a single societal goal. This is my biggest concern with the project. It purports to be sufficient to say something profound and complete (enough) about public policy, yet it considers only one narrow set of goals in isolation from all else (albeit an important set): hospitalizations and deaths from the virus. Likewise, the group’s political advocacy is based on only these concerns. These are important concerns; deaths of family and friends are painful. But the model does not consider, nor does the group’s advocacy hint at, broader systemic analysis that would include secondary and tertiary effects of the scenarios analyzed or the policies advocated. How would a more severe lockdown exacerbate inequality? How should domestic abuse be factored into a lockdown scenario (see BBC News articles: 6 Apr, 13 Apr)? How would harsher economic conditions affect hospitalizations and deaths in the near term? How would they lower overall human well-being, perhaps leading to higher suicide rates?
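
To make the ranges-of-assumptions point concrete, here is a minimal sketch of the idea. This is not the project’s actual model; the parameter names, ranges, and the toy SIR structure are my own illustrative assumptions. It samples plausible ranges for two inputs, the reproduction number and the hospitalization rate, runs the toy model for each sample, and reports a low-to-high band of peak hospital census rather than a single number.

```python
import numpy as np

def sir_hospitalizations(r0, hosp_rate, days=180, pop=1_000_000,
                         infectious_days=7, i0=100):
    """Toy SIR model stepped daily; returns a crude daily hospital census."""
    beta = r0 / infectious_days      # transmission rate implied by R0
    gamma = 1.0 / infectious_days    # recovery rate
    s, i, r = pop - i0, float(i0), 0.0
    census = []
    for _ in range(days):
        new_infections = beta * s * i / pop
        recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
        census.append(hosp_rate * i)  # assume a fraction of active cases occupy beds
    return np.array(census)

# Sample parameter ranges (illustrative assumptions, not values from the study)
rng = np.random.default_rng(0)
peaks = []
for _ in range(1000):
    r0 = rng.uniform(1.5, 3.5)           # basic reproduction number
    hosp_rate = rng.uniform(0.05, 0.20)  # fraction of active cases hospitalized
    peaks.append(sir_hospitalizations(r0, hosp_rate).max())

low, mid, high = np.percentile(peaks, [5, 50, 95])
print(f"Peak hospital census: {low:,.0f} (5th pct) / {mid:,.0f} (median) / {high:,.0f} (95th pct)")
```

Carrying a band like the 5th-to-95th percentile into the charts, rather than a single curve per scenario, is what I mean by building ranges of assumptions into the presentation of outcomes.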

All told, the model qualifies only as political advocacy. This is bolstered by the fact that the team had job postings out to fill positions for marketing director, public relations director, and social media lead. It should not be referred to, nor masquerade as, science.

A better way

I won’t simply critique; I’ll offer an approach that I believe is more valuable to society because it fosters dialog, is more clear and candid, and treats science with the respect it deserves. Some of what I’ll suggest is embedded deeper in the work, but should be brought up into the top line of how the work is portrayed. Specifically, the project should have:

  • Been clear upfront that the work was political advocacy based on their view of certain data.
  • Laid out their view of what a comprehensive, systemic analysis would require, clarifying the limited portion of that view addressed by the project.
  • Stated their belief that society had insufficient time and/or data for a more rigorous and complete analysis.
  • Kept their analysis simpler and in line with the quality of guidance achievable (rather than presenting a very heavy analysis that gives the appearance of greater rigor and reliability than it actually has).
  • Built in ranges of assumptions and carried them into the presentation of possible outcomes.
  • Invited alternate views, while emphasizing the time-sensitive nature of the situation.

The most important thing, however, would have been to not portray the work as science demanding a particular political agenda. Science is impotent to demand a political agenda. There is no path of mathematical logic or scientific method from an “is” (science) to an “ought” (moral imperative for action). Is-to-ought must always be bridged by a value system, and that’s a question for philosophy and morality, not science.

The end does not justify the means: Well-meaning societal concern does not justify incomplete work or political advocacy masquerading as science.


Endnotes

1 Weil, Simone. The Notebooks of Simone Weil. Routledge, 2004, p. 284.


14 Apr 2020; updated 10 Nov 2020

Randy Heffner

Randy lives at the intersection of philosophy, theology, and culture — reading, watching, walking, and sometimes creating in search of our better selves. Film and photography have a lot to do with it, but anyway, art. The tie is an anomaly.
