Trust seems to play an ineliminable role in a considerable portion of our ordinary activities of knowledge acquisition. I believe that roughly 330 million people live in the U.S. not because I have conducted a census myself but on the basis of an internet query. Often such trust is rational, because it is adequately justified even though the justification in question is not of the sort that can count as evidence for the truth of what is believed (my reasons for trusting the search result do not count as evidence about the population of the U.S.). I have not first-order but second-order justification: I have good reasons to believe that the information source (such as the website of the U.S. Census Bureau) has good reasons to assert that roughly 330 million people currently live in the U.S. Similarly, I trust my physician rationally when I believe her diagnosis of my disease, unless there are good reasons to doubt her expertise or integrity. When I visit a new city, it is often rational to believe the directions given by an ordinary-looking local.
In more general terms, epistemic or intellectual trust implies that one depends on another to know, understand, or be justified in believing something. If one is well justified in believing that the source of information knows, understands or generally has adequate justification for what it asserts, then one is trusting rationally. If the information happens to be true for the reasons held by the source, one can also be said to acquire knowledge by trusting. The second-order justification we have for believing an information source concerns the reliability, honesty, competence or overall trustworthiness of that source. The ordinary sense of trust is clearly much broader, but for present purposes I will use the term epistemic trust in this specific sense of a process of belief formation that involves epistemic dependence on assertions by others. Given the overwhelming proportion of indirectly acquired knowledge in our total knowledge base, we can assert without much doubt that epistemic dependence is a central characteristic of ordinary knowledge acquisition.
It is tempting to think that the situation is completely different in the case of scientific knowledge — our most critical and thereby most esteemed epistemic endeavor. I observe that this is reflected in the opinions of my scientist friends whenever I ask them whether trust has any role in the production and dissemination of scientific knowledge. But when pressed to go beyond stating principles and to reflect on the ordinary practices of scientific inquiry, most practicing scientists concede that trust indeed plays a very central role. Can individual scientists re-run or scrutinize all previous studies that somehow feature in the theoretical background or even as some of the premises of their new studies? Does an experimenter personally understand, let alone test or verify, all theoretical claims and observation reports that go into establishing the reliability of each experimental instrument (e.g. the optical theory of lenses in the case of the telescope) or measurement procedure? Can any researcher single-handedly gather and analyze all the data needed in a big research project that typically demands different specializations and hence division of cognitive labor? Imagine working in a research establishment like CERN; can you see for yourself whether each instrument, piece of software or algorithm is working properly, or can you go through all the calculations and computations to detect errors? How about the work of past scientists — can anyone do without reliance on previous research in discovering new phenomena, which is arguably the key rationale of cumulative science? The answer is generally a big no. Further, the resources we invest in scientific inquiry are too precious to exhaust by re-testing each and every study, by repeating every single observation — especially highly complex and expensive ones. Scientific knowledge is practically impossible without networks of epistemic dependence, hence without trust. By extension we can also speak of placing epistemic trust in instruments of measurement, observation or computation, as we are similarly dependent on them in the context of discovery as well as in that of justification. Evidence of their reliability is usually known to a number of experts, and others rely on their testimony.
The famous sociologist of knowledge Merton counted organized skepticism among the four key norms of science.[1] If this norm is understood in a sense that is close to philosophical skepticism, in light of the above we can see that organized skepticism, thus understood, is not a suitable prescriptive norm, let alone a descriptive one, because the very notion of skepticism is not compatible with that of epistemic dependence. Philosophical skepticism generally implies suspension of judgment regarding things we think we know or are well justified in believing or acting on. As a methodological attitude, skepticism prescribes doubt regarding all “fallible” sources of information and justification, because it seeks certainty. The skeptic typically refuses to believe or act on anything that is uncertain, however well justified otherwise. Epistemic dependence, on the other hand, necessarily implies some degree of uncertainty and fallibility. But this is true for any level of justification in science. Our most successful scientific theories may crumble under the weight of novel evidence, our most esteemed experiments may turn out to contain flaws, our most renowned scientists may be discovered to have made mistakes or even committed fraud; but the progress of scientific inquiry requires that we tentatively trust and act on scientific statements that are well tested and reported by current standards, unless and until we discover signs indicating otherwise.
Further, skeptical inquiry describes at best a superficially social epistemic process, one realized individually by many, not a robustly or substantially social one like actual scientific inquiry. Science is social in the robust or substantial sense that processes of discovery and justification (from hypothesis formation to quality control) are distributed; that is, organized into networks of epistemic dependence — thus “inquiry” as a meaningful unit is realized only by collectives, not individuals.
How Merton himself understands organized skepticism, however, is rather different. He describes it from a clearly sociological, not epistemological, perspective:
Another feature of the scientific attitude is organized skepticism, which becomes, often enough, iconoclasm. Science may seem to challenge the “comfortable power assumptions” of other institutions, simply by subjecting them to detached scrutiny. Organized skepticism involves a latent questioning of certain bases of established routine, authority, vested procedures, and the realm of the “sacred” generally. It is true that, logically, to establish the empirical genesis of beliefs and values is not to deny their validity, but this is often the psychological effect on the naive mind. Institutionalized symbols and values demand attitudes of loyalty, adherence, and respect. Science, which asks questions of fact concerning every phase of nature and society, comes into psychological, not logical, conflict with other attitudes toward these same data which have been crystallized and frequently ritualized by other institutions. Most institutions demand unqualified faith; but the institution of science makes skepticism a virtue. Every institution involves, in this sense, a sacred area that is resistant to profane examination in terms of scientific observation and logic. The institution of science itself involves emotional adherence to certain values. But whether it be the sacred sphere of political convictions or religious faith or economic rights, the scientific investigator does not conduct himself in the prescribed uncritical and ritualistic fashion. He does not preserve the cleavage between the sacred and the profane, between that which requires uncritical respect and that which can be objectively analyzed.
Being an institution that does not involve any sacred area and does not recognize other institutions’ demands for unqualified faith on topics that can be empirically investigated has very little to do with problems of reliability, credibility, quality control and self-correction in science. In this sociological sense organized skepticism is compatible with epistemic dependence. However, this is not because it contributes to a theory of scientific knowledge that can incorporate epistemic dependence, but simply because this sociological understanding of organized skepticism is largely irrelevant to the epistemology of science.
What is actually understood by organized skepticism, however, appears to be none of these. For instance, in the context of a study[2] investigating scientists’ subscription to the Mertonian norms, organized skepticism is described (as an item) thus: “Scientists consider all new evidence, hypotheses, theories, and innovations, even those that challenge or contradict their own work.” This is basically an epistemological definition, very similar to that of epistemic responsibility.
Simine Vazire describes organized skepticism in the form of a prescriptive rather than descriptive norm (for she maintains that in practice the counter-norm of organized dogmatism prevails in science).[3] The contents of her description involve notions that belong to the epistemology of science, juxtaposed rather eclectically with notions Merton uses in describing organized skepticism. A clear indication is the choice of certain terms, such as “self-correction,” “severe tests” and “higher standards of evidence”:
Merton’s fourth norm, organized skepticism, states that scientists should engage in critical evaluation of each other’s claims and that nothing should be considered sacred. Scientific self-correction relies on this norm because theories or findings that are treated as sacred cannot be corrected. Thus, the push for higher standards of evidence, and for practices such as preregistration, transparency, and direct replication that make it harder to (intentionally or not) exaggerate the evidence for an effect, is in the spirit of the Mertonian norm of organized skepticism. Self-correction requires being willing to put theories and effects to severe tests and accepting that if they do not pass these high bars, we should be more skeptical of them.
The notion of organized skepticism seems nonetheless evocative of some epistemic sensibilities that are central to scientific inquiry, so it is understandable that it resonates with many people who concern themselves with the epistemic values and norms of science: compared to all other human practices, scientific inquiry requires the highest level of scrutiny and the least dogmatism. But I think this is much better conceived and expressed through “criticism” rather than skepticism. The important difference is that an understanding of scientific rationality based on criticism instead of skepticism is compatible with epistemic dependence. As I argue at the end of this part as well as in the second and third parts, clarifying the nature and role of epistemic dependence in scientific rationality can help us better diagnose and remedy current problems concerning quality, reliability and credibility in science.
In comparison to such everyday processes of knowledge acquisition, what counts as “well justified,” hence rational, trust within science is rather different. It is not rational, for a scientist, to rely on somebody’s assertion on the grounds that he or she is an expert (such as the family doctor) or has no obvious incentive to be dishonest (such as the local giving directions). Further, as scientific questions often do not have objective, easily obtainable and unambiguous answers (such as the population of the U.S.), no epistemic authority can ever have the final word. Neither is it rational to place trust in other scientists because they have implicit skills or knowledge (e.g. “flair”), attractive careers, or even a good track record. In the context of scientific inquiry, I can rationally trust an assertion by a researcher only if I can make the judgment that (i) the researcher conducted the inquiry necessary to acquire the kind of evidence required for making that particular assertion, and that (ii) I could in principle have reproduced the evidence had I followed the same procedure, given the same skills and background knowledge as the researcher. These skills and knowledge are in turn of such a nature that they can be acquired by anyone with adequate “general” cognitive skills through education and training. Thus I rationally trust only if the first-order evidence can readily be subjected to criticism in a social process of analysis, methodological evaluation, replication and so on that is on the whole reliable: a social process of criticism that is generally able to detect errors, evidential inadequacies, weak inferential connections and so on, when such are present. This is the only possible sense of trustworthiness in science and of science.
This picture of the social process of science comes close to the perspective Popper called critical rationalism, which holds that scientific objectivity rests not on the skeptical attitude or the impersonal detachment (cf. the Mertonian norm of “disinterestedness”) of individual scientists, but on the social process of criticism:
What may be described as scientific objectivity is based solely upon that critical tradition which, despite all kinds of resistance, so often makes it possible to criticize a dominant dogma. In other words, the objectivity of science is not a matter for the individual scientist but rather the social result of mutual criticism, of the friendly-hostile division of labour among scientists, of their co-operation and also of their competition.[4]
In accordance with this criterion of rationality in trusting assertions by others, we can add that the norm of assertion within science is that the assertion (e.g. any scientific claim or observation report) can be objectively criticized. Thus an assertion is normatively acceptable as a scientific assertion only to the extent that it can be subjected to criticism on empirical, logical or other methodological grounds. The focus is on criticism because all knowledge, and especially scientific knowledge, is fallible; that is, there can never be conclusive evidence for or proof of a scientific statement, and any supporting evidence (unlike criticism) is of little informative value. Thus the only way forward is to accept scientific theories that survive criticism, until we have different, better methods and tools of criticism at our disposal. Popper identifies the criterion of success as high corroboration, which means that (i) the theory makes intersubjectively testable and highly informative predictions and (ii) these predictions have passed severe tests; that is, tests that they would more probably have failed. In more general terms, what can be objectively (i.e., intersubjectively) criticized can be asserted, and what further survives serious criticism can be tentatively relied on:
What cannot (at present) in principle be overthrown by criticism is (at present) unworthy of being seriously considered; while what can in principle be so overthrown and yet resists all our critical efforts to do so may quite possibly be false, but is at any rate not unworthy of being seriously considered and perhaps even of being believed — though only tentatively.[5]
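To make the notion of a severe test slightly more concrete, here is a rough probabilistic gloss (a common reconstruction offered only for illustration, not a formula Popper gives in the passages quoted here): a prediction $e$ constitutes a severe test of a theory $h$ against background knowledge $b$ when

$$P(e \mid h, b) \approx 1 \quad \text{while} \quad P(e \mid b) \ll 1,$$

that is, when the prediction is all but guaranteed by the theory yet highly improbable on background knowledge alone, so that passing the test would have been unlikely had the theory been false.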
There are many different properties that make it possible for a process of scientific inquiry to be subjected to criticism. Popper talks about openness to criticism, or readiness to be criticized, as the scientific attitude required for the social process of criticism to work. Openness to criticism is a psychological notion, having to do with the attitude of the scientist. Regarding the nature of scientific claims in particular, we can also talk about intersubjective testability, which requires (for Popper) that claims be falsifiable. Falsifiability has mainly to do with the logical form of scientific statements. From a broader angle, a research process can be subjected to criticism only if there is transparency regarding all relevant aspects and steps of the procedure by which evidence is collected, analyzed, evaluated and interpreted, and if the whole procedure is in principle repeatable. More particularly, we can talk about how actual research outcomes, since these are what gets communicated, can be criticized. This is where the criteria for rational trust are the most relevant.
As an analogy, it might be useful to conceive of the corresponding quality of research outcomes as inquirability. Inquirability is a concept that actually belongs to the Theory of Decision Support System Design for User Calibration[6] as one of its three components (the other two being expressivity and visibility). Decision support systems are considered inquiring systems, and their calibration is “an objective measure of the accuracy of one’s decision confidence.” Decision confidence is the belief in the quality of a decision, and obviously it might reflect or fail to reflect the objective quality of that decision. The theory conceives of decisions of objectively high quality in terms of knowledge. In this respect, accurate and well justified decision confidence can be regarded as metaknowledge. Research outcomes, like decision support systems, often serve as bases for decisions, such as tentative acceptance, suspension of judgment or rejection of theories, and they have to be assessed, like the calibration of decision support systems, with regard to the accuracy of the beliefs in the quality of such decisions:
[I]nquirability indicate[s] how well the inquiring system is designed for user calibration… Inquirability is a continuum of designs for DSS actions ranging from the servile and illusory that lulls to the contrarian that engages and continuously challenges. Actions that are servile and illusory are designed to please, to unquestioningly provide data that supports the decision-maker’s position(s) and assumption(s). Little, if any, metaknowledge is identified or resolved by a DSS that is servile and illusory. At the other extreme, DSS actions designed to be contrarian, engage and challenge the decision-maker’s positions and assumptions to identify and resolve metaknowledge through the dialectic process of contrasting, debating, and resolving differences… Near the servile end of the inquirability continuum, the actions of a DSS can be designed to generate data that justifies or supports a position, a set of assumptions, or a decision that the user has already made… Servile inquirability fails to inform because it simply presents data that is in accord with the decision-maker’s position. As might be expected, data that agrees with one’s decision does not improve calibration.
Research outcomes can be said to be inquirable to the extent that the justification process that generated them is transparent, the assumptions regarding the reliability and validity of the measures are made visible, the methods are repeatable and so on. Research outcomes that lack inquirability in our adapted sense, like servile and illusory decision support systems that cannot be calibrated, cannot be assessed as to whether trust in them corresponds with high trustworthiness; on the contrary, often they present as findings what best justifies the claims of the researcher.[7]
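Since calibration here simply means the match between one’s confidence in decisions and their actual quality, a minimal numerical sketch may help (this is my own toy operationalization for illustration, not Kasper’s formalism; the data are invented):

```python
import numpy as np

# Toy records of decisions: (stated confidence in [0, 1], whether the
# decision turned out to be correct). The numbers are invented.
decisions = [
    (0.9, True), (0.9, True), (0.9, False),  # 2/3 correct at 0.9 confidence
    (0.8, True), (0.8, True),                # 2/2 correct at 0.8 confidence
    (0.6, True), (0.6, False),               # 1/2 correct at 0.6 confidence
]

confidence = np.array([c for c, _ in decisions])
correct = np.array([1.0 if ok else 0.0 for _, ok in decisions])

# Brier score: mean squared gap between stated confidence and outcome;
# 0 would indicate perfectly calibrated, perfectly accurate confidence.
brier = np.mean((confidence - correct) ** 2)

# A cruder calibration gap: mean confidence minus overall accuracy.
# Positive values indicate overconfidence.
gap = confidence.mean() - correct.mean()

print(f"Brier score: {brier:.3f}")
print(f"Overconfidence gap: {gap:+.3f}")
```

On such a measure, a “servile” system that merely echoes the decision-maker inflates confidence without improving accuracy, which shows up as a widening overconfidence gap.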
Inquirability and actual criticism (e.g. testing, replication) are closely related. Inquirability facilitates and multiplies criticism, since the more transparent, intersubjectively testable research there is, the more opportunity and incentive there will be for criticism. In turn, the more critical the research culture, scaffolded by more efficient technologies and social mechanisms for criticism, the higher the demands of transparency, methodological rigor, testability and severity of testing that research and its communication will have to meet. Consequently, the reliability of studies will have to increase and the average level of scientific integrity will have to rise. As the trustworthiness of science increases, there are fewer reasons to rationally mistrust scientific claims, and trust in science becomes more rational.
The criteria of rationality in trusting scientific claims are not different in nature for the scientist and the lay person, though there is a considerable quantitative difference in the required sensitivity to signs of inadequate honesty, transparency, rigor and reliability — in short, to signs of untrustworthiness. There is no substantial qualitative difference in what it takes to trust rationally in science, because trust in science is rational only as trust in the reliability of the scientific method and the efficiency of the social process of criticism. When lay people trust in science on other grounds, like “science delivers certain, unshakeable truth” or “science has the right method to answer all kinds of important questions,” they are trusting irrationally, just like the scientist who blindly accepts the epistemic authority of a renowned expert in the field, or who relies chiefly on unreliable signs of quality such as journal rankings. If there are accumulating signs of diminishing trustworthiness, trust in scientific claims becomes irrational both for the scientist and for the lay person. The quantitative difference between expert and lay trust concerns, on the other hand, the lay person’s lack of the skills and background knowledge needed to judge the quality of, or sometimes even to understand, scientific justification. Correspondingly, the epistemic responsibility of being vigilant towards low-quality research and lack of scientific integrity falls most heavily on scientists working in the relevant field and least on the lay person.
Clearly, trust is at least partly blind. But rational trust is adequately and reliably sensitive to signs of lacking trustworthiness, in order to mitigate the vulnerability stemming from epistemic dependence. Systematic search for such signs is a central part of quality control in science. Instead of going skeptical and refusing across the board to rely on second- or higher-order reasons to accept epistemic claims, the scientific community should (and usually does) endeavor to increase this sensitivity as needed.
All this is well and good, but what about the “credibility” crisis that has been haunting several scientific fields for a decade? Meta-scientific studies and a growing volume of self-reflexive discussions addressing a reproducibility[8][9] or a credibility crisis[10] have shed surprising new light on the reliability and credibility of scientific claims. The widespread nature of questionable research practices[11][12] and scientific misconduct[13] has raised many questions regarding the adequacy of the internal quality-control system of science. All this shows that a considerable portion of trust within as well as in science has been, and still is, irrational.
There has been much discussion of methodological reforms, of changing the incentive structures in science, and of reforms in the social processes of gate-keeping and quality control. One rather neglected dimension in diagnosing the causes behind this plethora of problems, though, is that science has changed dramatically over the centuries in all respects pertaining to discovery, while the various aspects of the social process of criticism, such as scientific communication and peer review, have largely remained the same. While we currently have supercomputers, big data technologies, learning algorithms, machines of hitherto undreamed-of complexity such as the particle accelerator, enormous space telescopes, genome mapping technologies and so on, scientists still communicate their results to one another chiefly on digital “paper,” the PDF, and the communications traffic is modulated by only a handful of people per scientific paper, as if research were still being demonstrated and reviewed in small and closed learned societies. Together with the accelerating technological advances, the sheer number of scientists and scientific institutions has grown incomparably larger. Over the years we have transitioned from the small-scale, easily reproducible science that was conducted and criticized within small aristocratic circles to massively collaborative, highly technological, highly specialized, inferentially complex science. The mismatch between the original context of various technologies and practices of quality control and the current one clearly leaves room for very weak networks of epistemic dependence, and the resulting epistemic vulnerability seems to have been widely exploited — intentionally or not.
We have invested a great deal in the technologies of scientific discovery but next to nothing in those of scientific criticism. Thus contemporary science can conduct inquiry with superhuman powers but has to criticize it in an all too human way.
What we need in the face of a wide credibility crisis might seem to be withdrawing credence and adopting a much more skeptical attitude towards science, but from the perspective I have outlined, what is needed might also involve increased epistemic dependence. Counterintuitive though this may sound at first, what I mean is that we can increase the inquirability of research outcomes and facilitate actual criticism by relying more on technology and on social processes; that is, by extending the cognitive skills required for scientific criticism through technological tools (such as smart mathematical notebooks instead of paper, or software for checking various properties of statistical findings) and by establishing larger collaborative networks in which the various tasks integral to the process of criticism are distributed.
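As a toy illustration of the latter kind of tool, consider an automated consistency check that recomputes a reported p-value from the reported test statistic (the numbers and the rounding tolerance below are hypothetical; real tools such as statcheck implement this idea far more thoroughly):

```python
from scipy import stats

# Hypothetical reported result: "t(28) = 2.10, p = .02" (two-tailed).
df, t_value, reported_p = 28, 2.10, 0.02

# Recompute the two-tailed p-value implied by the reported statistic.
recomputed_p = 2 * stats.t.sf(abs(t_value), df)

# Flag mismatches larger than a rounding tolerance (chosen ad hoc here).
if abs(recomputed_p - reported_p) > 0.005:
    print(f"Inconsistent: reported p = {reported_p}, "
          f"recomputed p = {recomputed_p:.4f}")
else:
    print(f"Consistent within rounding: recomputed p = {recomputed_p:.4f}")
```

Run over entire literatures, such mechanical checks extend the community’s capacity for criticism well beyond what a handful of reviewers per paper can provide.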
In the second part I will talk about the technological extension of cognitive skills, especially in relation to how it might increase the inquirability of research outcomes and the sensitivity of criticism in scientific communication and quality control.
In the third part I will talk about the distribution of cognitive labor and epistemic responsibility, with particular regard to how science could develop wider collaborative networks of criticism and reconceive accountability, integrity, credit and blame in science as applicable (also) to supra-individual entities such as scientific institutions, research groups or the scientific communities of entire fields. Hope you stay tuned!
Originally published on medium/science and philosophy
[1] Merton, R. K. (1973). The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press.
[2] Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2010). Extending the Mertonian norms: Scientists’ subscription to norms of research. The Journal of Higher Education, 81(3), 366–393. https://doi.org/10.1080/00221546.2010.11779057
[3] Vazire, S. (2018). Implications of the Credibility Revolution for Productivity, Creativity, and Progress. Perspectives on Psychological Science, 13(4), 411–417. https://doi.org/10.1177/1745691617751884
[4] Popper, K. R. [1962](1994). “The logic of the social sciences.” In In Search of a Better World: Lectures and Essays from Thirty Years, 64–81. Routledge.
[5] Popper, K. R. (1963). Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge & Kegan Paul.
[6] Kasper, G. (1996). A Theory of Decision Support System Design for User Calibration. Information Systems Research, 7(2), 215–232. www.jstor.org/stable/23010860
[7] Lakens, D. (2019, November 18). The Value of Preregistration for Psychological Science: A Conceptual Analysis. https://doi.org/10.31234/osf.io/jbh4w
[8] Camerer, C. F. et al. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2(9), 637–644. https://doi.org/10.1038/s41562-018-0399-z
[9] Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
[10] Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
[11] John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532.
[12] Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018). Questionable research practices in ecology and evolution. PLoS ONE, 13(7), e0200303.
[13] Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4(5), e5738.