Why Should We Trust Science?
Harvard professor Naomi Oreskes argues that the public should trust scientists—but not for the reason most of us think.
The following is an excerpt from Why Trust Science? by Naomi Oreskes.
Why Trust Science?
There is now broad agreement among historians, philosophers, sociologists, and anthropologists of science that there is no (singular) scientific method, and that scientific practice consists of communities of people, making decisions for reasons that are both empirical and social, using diverse methods. But this leaves us with the question: If scientists are just people doing work, like plumbers or nurses or electricians, and if our scientific theories are fallible and subject to change, then what is the basis for trust in science?
I suggest that our answer should be two-fold: 1) its sustained engagement with the world and 2) its social character.
The first point is crucial but easily overlooked: Natural scientists study the natural world. Social scientists study the social world. That is what they do. Consider a related question: Why trust a plumber? Or an electrician? Or a dentist or a nurse? One answer is that we trust a plumber to do our plumbing because she is trained and licensed to do plumbing. We would not trust a plumber to do our nursing, nor a nurse to do our plumbing. Of course, plumbers can make mistakes, and so we get recommendations from friends to ensure that any particular plumber has a good track record. A plumber with a bad track record may find herself out of business. But it is in the nature of expertise that we trust experts to do jobs for which they are trained and we are not. Without this trust in experts, society would come to a standstill. Scientists are our designated experts for studying the world. Therefore, to the extent that we should trust anyone to tell us about the world, we should trust scientists.
This is not the same as faith: We do (or should) check the references of our plumbers, and we should do the same for our scientists. If a scientist has a track record of error, underestimation, or exaggeration, this might be grounds for viewing his or her claims skeptically (or at least judging their results with this information in mind). If a scientist is receiving financial support—directly or indirectly—from an interested party, this may be grounds for applying a higher level of scrutiny than we might otherwise demand. (For example, an editor might send the scientist's paper for additional review, or a reviewer might pay extra attention to study design, where subconscious bias may slip in.)
No doubt individual scientists, like individual plumbers, may be stupid, venal, corrupt, or incompetent. But consider this: The profession of plumbing exists because in general plumbers do a job we need them to do, and in general they do it successfully. When we evaluate the track record of science, we find a substantial record of success—in explanation, in prediction, in providing the basis for successful action and innovation. We have a world of medicines, technologies, and conceptual understandings derived from science that have enabled people to do things they have wanted to do.
This consideration—that scientists are, in our society, the experts who study the world—is a reminder to scientists of the importance of foregrounding the empirical character of their work—their engagement with nature and society and the empirical basis it provides for their conclusions. As I have stressed elsewhere, scientists need to explain not just what they know, but how they know it. Expertise as a concept also carries with it the embedded idea of specialization, and therefore the limits to expertise, reminding us why it is important for scientists to exercise restraint with respect to subjects on which they lack expertise.
However, reliance on empirical evidence alone is insufficient for understanding the basis of scientific conclusions and therefore insufficient for establishing trust in science. We must also take to heart—and explain—the social character of science and the role it plays in vetting claims. Here it is worth reiterating my point that scientists who were offended by the “social” turn in science studies got it wrong: Much of what we identify as “science” consists of social practices and procedures of adjudication designed to ensure—or at least to attempt to increase the odds—that the processes of review and correction are sufficiently robust as to lead to empirically reliable results.
Peer review is one example of such a practice: it is through peer review that scientific claims are subjected to critical interrogation. (This is why, in my own work, I have stressed the importance of evaluating scientific consensus through analysis of the peer-reviewed literature and not the popular press or social media, and why my book was subject to peer review.) This includes not only the formal review that papers go through when submitted to academic journals, but also the informal processes of judgment and evaluation that research findings undergo when scientists discuss their preliminary results in conferences and workshops and solicit comments from colleagues prior to submitting them for publication, as well as the continued process of evaluation that published claims endure as fellow scientists attempt to use and build on those claims.
Tenure is another example: We evaluate scholars’ work in order to judge whether they are worthy of joining the community of scholars in their fields, in effect to be certified as experts. Tenure is effectively the academic version of licensing. The crucial element of these practices is their social and institutional character, which works to ensure that the judgments and opinions of no one person dominate and therefore that the value preferences and biases of no one person are controlling. Of course, within any community there will be dominant groups and individuals, but the social processes of collective interrogation offer a means for the less dominant to be heard so that, to the maximum degree possible, the conclusions arrived at are non-partisan and non-idiosyncratic. The social character of science forms the basis of its approach to objectivity and therefore the grounds on which we may trust it.
In recent years, this insight has been implicitly incorporated into scientific practices, particularly in just those domains where scientific claims are likely to be viewed as controversial. The U.S. National Academy of Sciences works to ensure that the panelists who perform its reviews are diverse and represent a range of viewpoints. Scholars have called this approach the “balancing of bias.” The Intergovernmental Panel on Climate Change—now one of the world’s largest aggregations of scientists—makes a particular point of seeking geographical, national, racial, and gender diversity in its chapter-writing teams. While the motivations for inclusivity may be in part political, the widespread character of practices of inclusion suggests that many scientific communities now recognize that diversity serves epistemic goals.
Adapted from Naomi Oreskes’ Why Trust Science? © 2019 Princeton University Press.
Naomi Oreskes is the author of Why Trust Science? (Princeton University Press, 2019) and co-author of the book Merchants of Doubt (Bloomsbury Press, 2010). She’s also a professor in the department of the history of science and an affiliated professor in earth and planetary sciences at Harvard University in Cambridge, Massachusetts.