It is often pointed out that scientific knowledge is social in character. This statement can mean several quite different things: that science is laden with political and moral values, that scientific knowledge is socially constructed, or that it is a public good governed by public interests. More often than not this characterization, usually offered by sociologists of science, is intended to indicate an external source of epistemic vulnerability. A less investigated but nowadays quite relevant sense in which science is social is that scientific knowledge production is not the work of individual geniuses but genuinely a collective achievement. This "epistemically" social character of science may go far beyond the cumulative character of scientific knowledge, coming into full relief in research collaborations that distribute scientific labor in order to reach certain epistemic ends. This is typically the case when the investigation of a research question far surpasses the competence and cognitive capacities of any individual scientist. Science is becoming increasingly social in this strictly epistemic sense, as more and more fields come to feature large research collaborations, or "team science". However, this sense of sociality has not escaped critique either, this time from epistemologists.
Though there have been a few influential and enthusiastic accounts of collaborative science as distributed, social knowledge in the philosophy of science literature, such as those by Ronald Giere and Karin Knorr-Cetina, the epistemological treatment of the topic is extremely limited and mostly skeptical. Due to the traditionally individualist perspective of epistemology as a discipline, collectivity is easily interpreted as a source of epistemic vulnerability. So, some doubt that research collaborations can produce knowledge reliably, while others argue that they undermine epistemic responsibility and thus accountability. I will argue, on the contrary, that when certain conditions are in place, collectives can actually produce knowledge even more reliably and responsibly than individuals do. But we need to adopt a non-individualist epistemology to appreciate the opportunities presented by collaborative science.
Scientific inquiry as distributed cognition
Scientific inquiry has various dimensions, but at bottom it is a highly structured cognitive process. We intuitively think that cognitive processes are realized in the head, so that scientific inquiry is something that happens in the head of the individual scientist. But such a complex form of cognition as scientific inquiry is often simply impossible without substantial reliance on scientific instruments, computer programs and other experts. Scientists can tackle "big questions" by forming epistemic collectives in which all these elements are organized into complex cognitive systems that generate knowledge at a supra-individual level. Such collectives present unique epistemic opportunities and challenges, both of which are due to the fragmented nature of the processes of knowledge production involved. Thus, it is worthwhile to have an illuminating conceptual framework with which to analyze how knowledge is produced within research collaborations. The concept of distributed cognition provides us with such a framework.
Distributed cognition describes a situation where multiple agents collectively realize a cognitive task through dynamic interactions with one another and possibly with various artifacts. The task typically surpasses the cognitive capacities of any single individual.
Ron Giere, on the basis of his observations at the Indiana University Cyclotron Facility, and Karin Knorr-Cetina, on the basis of her field research at CERN, both described the experiments they examined in terms of distributed cognition. Giere wrote:
In thinking about this facility, one might be tempted to ask, Who is gathering the data? From the standpoint of distributed cognition, that is a poorly framed question. A better description of the situation is to say that the data is being gathered by a complex cognitive system consisting of the accelerator, detectors, computers and all the people actively working on the experiment. Understanding such a complex cognitive system requires more than just enumerating the components. It requires also understanding the organization of the components. And […] this includes the social organization.
In her influential book Epistemic Cultures, Knorr-Cetina similarly emphasized that knowledge is produced not at the level of individual scientists but at that of the experiment:
The point is that no single individual or small group of individuals can, by themselves, produce the kind of results these experiments are after – for example, vector bosons or the long "elusive" top quark or the Higgs mechanism. It is this impossibility which the authorship conventions of experimental HEP exhibit. They signify that the individual has been turned into an element of a much larger unit that functions as a collective epistemic subject. […] No individual knows it all, but within the experiment's conversation with itself, knowledge is produced.
In many other fields, from genetics to climate science, large research collaborations are becoming increasingly common. Moreover, they are not unique to the natural sciences. There have recently been calls for big team science in psychology, and we have already begun to see various projects as well as standing initiatives that can come under this title. To name a few, the Many Labs projects (1–4) are collaborative replication projects in which individually produced datasets are pooled together, and the Psychological Science Accelerator (PSA) is a crowdsourced research network of more than 500 laboratories in more than 70 countries that aims to enlarge and diversify samples. The PSA bore its first fruit with a registered report, and a retrospective on the process is available. All these examples deserve closer investigation from the perspective of distributed cognition.
When is scientific knowledge production socially distributed?
Scientific knowledge production is already social in certain epistemically relevant senses even without being a distributed cognitive process, because it involves epistemic dependence on others. First and foremost, scientists rely on the theories, findings, or protocols of many others, past and present. In the simplest possible case, if (i) A knows that p, and I know both (ii) that if p then q and (iii) that A knows that p, then I can be said to know that q through epistemic dependence on A. While A's evidence for p constitutes first-order justification for believing that p, my reasons for believing that A knows that p constitute second-order justification. In the scientific context, second-order justification concerns assessments of reliability regarding the data, methods, instruments, or the track record of other experts as informants. It is close to impossible to find any example of scientific inquiry that does not feature this kind of epistemic dependence.
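For readers who like things compressed, the schema can be rendered in the notation of epistemic logic; the formalization is my own gloss, not part of the original schema, and it assumes that knowledge is factive and closed under known implication:

```latex
% K_X(phi) reads "X knows that phi"; I = the dependent inquirer, A = the informant.
% Assumptions (mine): factivity (K_A p -> p) and closure under known implication.
K_A\,p \;\land\; K_I(p \to q) \;\land\; K_I(K_A\,p) \;\Longrightarrow\; K_I\,q
```

The third conjunct is where second-order justification enters: what I have epistemic access to is A's knowing that p, not A's evidence for p itself.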
We can further say that a considerable portion of all research has a clear social dimension by virtue of relying on technologies of cognitive enhancement, from scientific instruments that render otherwise unobservable phenomena observable to computer software that undertakes complex computations or constructs models out of big data in ways no human could. Usually, the scientist using such tools does not have the competence to produce them or even to scrutinize their reliability. Thus, they rely on other people both for supplying the tools their research depends on and for providing evidence of their reliability.
Neither of these gives us socially distributed knowledge, nor poses any serious challenge to the traditional individualistic conception of knowledge, because both exemplify one-way epistemic dependence. While I depend on A to know that q, A need not be part of the process of coming to know that q; A might even be a long-dead scholar who merely established that p. Similarly, a programmer who writes a deep learning algorithm is often not an integral part of the research projects in which it is used. In distributed cognitive systems, however, we speak of mutual epistemic dependence relations within a group unified around a common epistemic task, so that, as Hardwig put it, "individual researchers are united into a team that may have what no individual member of the team has: sufficient evidence to justify their mutual conclusion". Here we speak of a unitary cognitive task, such as designing and running an experiment that can adequately test a scientific claim, which can only be achieved collectively.
Distributed knowledge implies networks of epistemic dependence, and collaborations are criticized precisely on the grounds that this generates epistemic vulnerability. In the first part of this blog series, I defended the view that epistemic dependence does not necessarily imply epistemic vulnerability, and hence is not necessarily an impediment to knowledge, whether it takes the form of reliance on artifacts or on other people. I argued that scientific knowledge can also be (and actually is) reliably and responsibly produced on the basis of warranted or rational trust: this is when we lack sufficient evidence for believing a scientific proposition, but accept it on the basis of sufficient second-order justification that the proposition is the outcome of a reliable knowledge-generation process. Scientists (let alone lay people) often do not individually have sufficient second-order justification for the reliability of the knowledge-generation process behind a scientific claim, but have good reasons to justify reliance on other experts who have scrutinized the research process on behalf of the scientific community. An integral part of warranted trust in scientific claims is thus the existence of an efficient and reliable social process of criticism, which Popper emphasized as the core of scientific progress.

In the context of research collaborations, networks of trust are the very fabric of scientific inquiry: the individual members by themselves have only partial first-order and partial second-order justification. For this trust to be warranted, two conditions must be met: (i) the distributed process of scientific inquiry should be reliable, i.e., it should get things right sufficiently more often than it errs, and (ii) the collaboration should realize, in parallel to the socially distributed research process, a reliable and efficient socially distributed process of internal criticism, so that every member has sufficient (second-order) justification to trust the reliability of the contributions of the others. To the extent that these conditions are met, the individual pieces of evidence contributed by the members of the collaboration can cohere into a unified body of sufficient (first-order) justification for the scientific claim put forward by the collaboration, and the collaboration manifests epistemic responsibility as a collective property, in the sense that it is vigilant towards errors and has social and technological means at its disposal to fix them.
Depending on their social organization, research collaborations can indeed be in an even better position to minimize sources of error than individual researchers or small teams, because they can complement the traditional forms of scientific quality control with even more rigorous internal review mechanisms. Indeed, the social process of criticism can be better realized as a distributed process within a collaboration, just like the distributed research process itself, than via dependence on the sporadic, entirely voluntary and mostly post hoc scrutiny of random peers, as is pretty much the case with traditional peer review. A socially distributed process of criticism would be organized so as to make use of available expertise and resources in the most efficient and effective way, and could do so by relying on the already established social organization of a research collaboration.
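To see in miniature why distributing criticism can pay off, consider a deliberately simple toy model (my illustration, with made-up numbers, not an empirical claim): if each of n independent cross-checks catches a given error with probability d, the chance that the error survives falls geometrically with n.

```python
# Toy model (illustrative only): probability that an erroneous contribution
# survives review. Assumes the checks are independent and equally effective,
# which real review processes only approximate.

def survival_probability(catch_prob: float, n_checkers: int) -> float:
    """Chance an error slips past n independent checkers, each of whom
    detects it with probability catch_prob."""
    return (1.0 - catch_prob) ** n_checkers

# One sporadic post-hoc reviewer vs. three overlapping internal cross-checks:
print(survival_probability(0.5, 1))  # 0.5
print(survival_probability(0.5, 3))  # 0.125
```

The point of the sketch is purely structural: distributed internal criticism multiplies the opportunities to catch an error, whereas traditional peer review typically provides a single, late pass. Whether the independence assumption holds depends on exactly the kind of social organization discussed above.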
Networks of epistemic dependence in research collaborations
There are many different forms of scientific collaboration, depending on the nature of the division of epistemic labor, and one illuminating way to approach these differences is to look at the web of epistemic dependencies that makes up the epistemic system underlying the collaboration.
A big portion of scientific collaborations consists of people with overlapping or complementary expertise: for instance, multi-author projects within the same field, or interdisciplinary ones that bring together neighboring areas of expertise, such as collaborations between developmental psychologists and cognitive linguists on language acquisition. In the former case the division of epistemic labor is often in terms of human cognitive resources, and each member of the collaboration has the epistemic competence to scrutinize as well as reproduce the epistemic justification other members have for the epistemic output they contribute. Here there is no ineliminable need for warranted trust, because the kind of knowledge produced can in principle be produced by each individual member. In the latter case the division of epistemic labor runs along disciplinary lines, but the members of the collaboration often have the appropriate epistemic competence to understand and even to scrutinize, although possibly not to reproduce, the justificatory grounds of the others' contributions. Here first-order justification, namely scientific evidence, may be distributed, but everyone can have individually derived second-order justification for trusting the pieces of evidence provided by the others.
Especially where the division of epistemic labor reflects differentiation into highly divergent areas of expertise, we face the question of whether individual members of the collaboration can be attributed significant epistemic credit and responsibility for the resulting epistemic successes and failures (and hence can be said to know the outcome of the distributed process). This is because typically each agent lacks the competence required to scrutinize, or even understand, some aspects of the collectively conducted research. In large research collaborations where various experts interact in a systemic way that produces scientific knowledge irreducibly at the system level, we often encounter cases where both first-order and second-order justification are truly distributed, because the task of scrutinizing the reliability of evidence can only be achieved collectively. It is especially with respect to such cases of radically distributed scientific cognition that we need the notion of warranted or rational trust.
The questions that are of greatest relevance to scientific practice here are thus how we can assess the reliability of distributed research processes and whether the distribution of epistemic justification poses a problem regarding epistemic credit and responsibility.
In regard to the first question, the particular organization of the research process around differentiated competences, various instruments and specific social practices that shape the information flow is crucial. In regard to the second, the idea of networks of higher-order (social) justification is quite pertinent.
Let’s look at a skeptical epistemological take to illustrate the issue more concretely.
Evaluating reliability and responsibility in research collaborations
In their paper investigating how interests and values exert influence in massively distributed epistemic collaborations, Winsberg, Huebner and Kukla argue that accountability becomes a problem: in such collaborations there is no coherent justification to be given for the entire process, hence no single person can be accountable for the whole study. In other words, they argue that the reliability of the research process is undermined because it is distributed, and that for this reason it is quite improbable that coherent and adequate second-order justification can be offered. The main reason they give is that collaborative research involves a great many unforced methodological choices, where the degrees of freedom are usually too high to give a rational story based on best epistemic standards in the abstract; decisions have to be taken regarding concrete problems, by researchers with different skills, training and methodological standards.
…when a collaboration relies on numerous epistemically substantial contributors who must exercise expert judgment (rather than ‘human computers’ under top-down control), we introduce a role for at least that many sets of interests and goals, each of which will shape the research process in their own way […], unforced methodological choices must be made at every turn, and there is plenty of room for value-laden inductive risk balancing within these choices. Different researchers will use different standards and methodologies, which will be driven by different goals and pressures; but without centralized coordination, there is no built-in guarantee that the justificatory stories that undergird various aspects of a study will form one coherent justification.
I think this argument might legitimately apply to certain individual cases of collaborative research that face domain-specific challenges (the authors focus on examples from climate science and biomedical research), but it would be a mischaracterization of how research is generally organized in successful research collaborations. More importantly, it rests on a rather traditional, individualistic conception of epistemic responsibility or accountability, which cannot easily accommodate distributed processes of knowledge production. Orestis Palermos, for instance, offers a pertinent account of how epistemic responsibility emerges as a collective property in distributed systems through self-regulation:
The continuous interactions between the members of the group allow them to continuously monitor each other’s performance. In result, if there is something wrong with their distributed process, the group will be alerted to it and respond appropriately. When no alert signals are communicated, the group can responsibly accept the deliverances of its distributed cognitive ability by default.
The heterogeneity of expertise doesn't necessarily imply diminished epistemic responsibility, because individuals don't have to scrutinize all aspects of the research process if this scrutiny can itself be realized as a distributed process. Probably the most pertinent example is the collaborations in high energy physics (HEP), where massively collaborative experiments are collectively planned, executed, monitored and analyzed. The collaborators aren't tools to be governed top-down; they contribute their expert judgment in all these phases. A coherent justificatory story is indeed given, despite the highly distributed nature of the research and thus despite the lack of "centralized coordination". Collaborations in HEP are particularly interesting also because one would expect them to suffer from high degrees of epistemic vulnerability, given the complexity of the technological infrastructure and the heterogeneity of expertise required to investigate the typical research questions. Despite this, they arguably furnish the most prominent and successful examples of collaborative science.
In another paper, Huebner, Kukla and Winsberg actually include a section on whether and how interests and values may also influence HEP experiments and undermine their reliability:
…consider an anecdote that was relayed to us. There were two major groups looking for the Higgs particle at CERN: ATLAS and CMS. When ATLAS reported data consistent with the observation of the Higgs particle at 2.8 sigma, the buzz around CMS was that they needed to do whatever was necessary to “boost their signal.” This meant shifting their distribution of inductive risks to prevent them from falling too far behind the ATLAS group—toward higher power, at the expense of reliability, or towards a lower probability of missing a Higgs-type event, at the expense of a higher probability of finding a false positive. Hence even here, it seems that we see the influence of local pressures and interests on methodology, in ways that cannot be simply eliminated.
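A quick aside on the sigma convention invoked here (my gloss, not the authors'): a reported significance of z sigma corresponds to the one-sided tail probability of observing at least as strong an excess if only background were present, and HEP reserves the word "discovery" for 5 sigma.

```python
import math

def one_sided_p(z_sigma: float) -> float:
    """One-sided tail probability of a z-sigma excess under the
    background-only (null) hypothesis."""
    return 0.5 * math.erfc(z_sigma / math.sqrt(2))

print(one_sided_p(2.8))  # ~2.6e-3: suggestive, but well short of a discovery
print(one_sided_p(5.0))  # ~2.9e-7: the conventional HEP discovery threshold
```

This is part of why a 2.8 sigma report generates "buzz" rather than a claim: the evidential bar the collaborations hold themselves to is far higher.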
We can generally say that any bias towards the background or the signal hypothesis might influence error probabilities. But we would arguably need more tangible reasons to think this was actually the case. To begin with, the very existence of two independent collaborations, formed around two detectors, investigating the same research question in parallel is a methodological feature aimed at assessing the reliability of the findings, in a way very similar to replication studies. The two experiments combined their Higgs search results on the basis of a unified framework for common methodological tools and information exchange, which was decided upon through lengthy discussions over the minutest details. Thus, as I confirmed in a discussion with Dr. Philip Bechtle from the ATLAS collaboration, any "tampering" with the statistics would easily have been noticed outside the CMS collaboration. Both teams use the same statistical procedures, developed collectively over the years, and apply the same criteria. The cooperation between the two collaborations was realized in the same way as each collaboration operates internally: through consensus building in planning and distributed processes in implementation and quality control.
The authors also mention the blinding procedures used to reduce the effect of bias on the results. They do not pass judgment on their effectiveness, but take the very existence of such procedures as indicating the ubiquity of inductive risk balancing throughout the research process:
…this structural mechanism is designed to minimize any distortion issuing from the interests of the scientists. Whether or not this technique helps to address this particular problem, it indicates a way in which inductive risk balancing continues to occur in unpredictable and perhaps unrecoverable ways throughout the research process.
Firstly, we have good reason to think that the blinding procedures are effective against bias because, as Dr. Bechtle stated, one both (i) cannot see the data in the signal region and (ii) is responsible towards one's colleagues for demonstrating complete understanding of all relevant uncertainties in all control and validation regions. Moreover, and more importantly for our discussion, the very function of a blinding procedure is to increase the reliability of a process by distributing it. There isn't a single case of research that doesn't involve similar risks to differing extents, and the reliability of a research design with an integral blinding procedure can only be higher than that of a similar design lacking one. So instead of signaling a liability, blinding testifies to a potential epistemic virtue of distributed cognition.
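To make the structure of such a procedure concrete, here is a minimal sketch of a blind analysis in the spirit just described; the region names follow the terminology above, but the numbers and functions are entirely made up for illustration:

```python
import random

random.seed(0)

# Fabricated event counts per analysis region, purely for illustration.
data = {
    "control":    [random.gauss(100, 10) for _ in range(5)],
    "validation": [random.gauss(100, 10) for _ in range(5)],
    "signal":     [random.gauss(130, 10) for _ in range(5)],  # stays hidden
}

def blinded_view(full_data: dict) -> dict:
    """What the analyst sees while developing the procedure: everything
    except the signal region."""
    return {r: xs for r, xs in full_data.items() if r != "signal"}

# The background model and its uncertainties are developed and validated
# entirely on the visible regions...
visible = blinded_view(data)
background = sum(visible["control"]) / len(visible["control"])
spread = max(visible["validation"]) - min(visible["validation"])

# ...and only after the procedure is frozen and reviewed by colleagues is
# the signal region "opened" and compared against the expectation.
observed = sum(data["signal"]) / len(data["signal"])
print(f"expected ~{background:.1f} (spread {spread:.1f}), observed {observed:.1f}")
```

The epistemically relevant feature is that the decision points are fixed before anyone can see which answer they favor, and that the colleagues reviewing the frozen procedure constitute exactly the kind of distributed check described above.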
Clearly no procedure is completely proof against mere error or biased management of uncertainties. But it is hard to say that this is a more serious issue in collaborative research; arguably, research collaborations like those in HEP are in an even better position to minimize error and bias, since the distributed nature of the research process allows researchers to work in multiple roles and overlap in multiple dimensions, and thus to cross-check each other's output. In the extremely rare case where a false discovery nonetheless can't be prevented, large teams can actually be in a much better position to detect it. Dr. Bechtle mentioned, as an informative case of error, the "discovery" of superluminal neutrinos, which was identified and dealt with by the collaboration after the preprint was out. As a (again, rare) example of bias in analyses, he mentioned the "observation" of pentaquark states by several smaller experiments, whereupon a larger collaboration, HERA-B, carried out more rigorous analyses and found no evidence for them.
The authors lastly argue that even though reliability may be less of a problem in HEP than in other fields, there are no good reasons to think that accountability isn't problematic. Similar concerns about accountability are raised in a paper by Huebner and Bright. They argue that only the collaboration as a whole can be held accountable, because responsibility in the senses of (i) "attributing" a scientific outcome to somebody and (ii) identifying somebody as "answerable" (as the one who is to provide epistemic justification) can be too diffuse. But they point out that it is hard to conceive how a whole collaboration could be punished.
I think we can say that the nature of the continuous information flow (as portrayed by Knorr-Cetina), the internal as well as external decision making, and the cross-checking and review mechanisms realized through nested work groups, panels and committees, together with high transparency, allow the members of the collaborations to have good second-order justification to stand behind the findings and conclusions. It is not necessary that everyone in the collaboration be in a position to scrutinize all other contributions; it suffices if each contribution is reliably cross-checked by some. What is needed is transparency about how the web of second-order justifications is organized, and reflection on its reliability.
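One could even make this transparency requirement checkable. Below is a toy rendering (my construction; the task and reviewer names are invented) of a collaboration's web of second-order justification as a simple mapping from contributions to the parties that cross-check them, together with the minimal condition that no contribution goes unchecked:

```python
# Toy web of second-order justification: which parties cross-check which
# contribution. All names are invented for illustration.
checks = {
    "detector_calibration": ["calibration_group", "run_coordination"],
    "background_model":     ["statistics_committee"],
    "statistical_analysis": ["statistics_committee", "internal_review_panel"],
    "paper_draft":          ["collaboration_wide_review", "publication_committee"],
}

# The minimal transparency condition from the text: every contribution is
# reliably cross-checked by *some* others, even if not by everyone.
unchecked = [task for task, reviewers in checks.items() if not reviewers]
assert not unchecked, f"responsibility gap at: {unchecked}"
print("every contribution has at least one cross-check")
```

Auditing such a map is far more tractable, for the collaboration and for outsiders, than demanding that each member individually vet every contribution.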
I also think that what is identified as the most problematic issue regarding the dissipation of accountability is not essentially a concern about research reliability, but about whom to punish in the case of a breach of epistemic responsibility or, much more broadly, of scientific integrity. In the case of HEP experiments, breaches of integrity like fraud are much less of a concern, because quality control occurs at much earlier stages and more rigorously than in any typical research project. In the rare case where such issues emerge, the transparency of the social structure of epistemic dependencies would allow for their timely identification.
Huebner and Bright add that massive collaborations have a serious impact on the checking mechanisms in peer review, because only an extremely limited pool of reviewers can have the required competences. They claim that this makes it very hard to detect and to punish fraud or other problematic research practices.
The charge of "a serious impact on the checking mechanisms in peer review" raises deep concern only in conjunction with a further premise: namely, that external peer review in its traditional form is the best, or only, social mechanism for scientific quality control. The authors do not explicitly argue for this premise (one of the co-authors even argues elsewhere for the abolition of traditional peer review). It is plausible to think that traditional peer review cannot handle the complexity of research in large collaborations, not only because there are few individuals with the multiple epistemic competences required, but more importantly because of its own lack of complexity as a social mechanism. It is an empirical question how much external peer review adds when there are far more complex, multi-layered and multi-phase internal review mechanisms. This is actually the case at CERN, where any paper draft goes through several steps of internal peer review before it is submitted for external peer review, such as internal presentations to close colleagues, collaboration-wide open peer review, and publication committee review.
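Schematically (my schematization of the steps just listed, not an official CERN workflow), the internal pipeline can be thought of as a sequence of gates that a draft must clear before it ever reaches a journal:

```python
# Stage names paraphrase the steps mentioned above; the code is a toy.
INTERNAL_STAGES = [
    "internal presentation to close colleagues",
    "collaboration-wide open peer review",
    "publication committee review",
]

def ready_for_external_review(passed_stages: list[str]) -> bool:
    """A draft goes to a journal only after clearing every internal
    stage, in order."""
    return passed_stages == INTERNAL_STAGES

print(ready_for_external_review(INTERNAL_STAGES[:1]))  # False: still internal
print(ready_for_external_review(INTERNAL_STAGES))      # True
```

On this picture, external peer review is the last, and arguably not the most demanding, gate a collaboration paper passes through.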
Collective responsibility beyond collective knowledge?
Lastly, a significant portion of the problems relating to scientific quality control, such as publication bias, pertains to the community level. These are problems of collective epistemic responsibility. Thus, epistemic responsibility should not be confined to scientists, whether individuals or collaborations, but considered also at the level of the scientific community.
While distributed processes of knowledge production extend only as far as the cognitive system realizing them, it is possible to speak of distributed social processes of criticism or scientific quality control beyond cognitive systems as well. Just as reliable quality control within a research collaboration does not require that every member equally scrutinize the contributions of all the others, community-level quality control may well be undertaken by specialized initiatives supported by scientific bodies and journals. A good example is the Registered Replication Reports in psychological science, an initiative in which multiple preregistered close replications of a selected study are conducted under the supervision of scientific journals.
We can further mention some recent proposals aimed at distributing the social process of criticism, and thereby the epistemic responsibility for quality control, across bigger portions of the scientific community. Open peer review, for instance, is intended to render reviewers accountable for their judgments on scientific manuscripts as well as to increase the overall reliability of peer review. Open peer commentary and post-publication peer review, which are forms open peer review can take, replace or complement the traditional peer-review process with ongoing, massively distributed criticism by the community.
In closing this post and the three-part series, let me restate the main point: good science does not hinge on self-reliance and skepticism, but on warranted trust and collectively organized, distributed criticism. From the employment of complex scientific instruments to large collaborations, scientific inquiry increasingly consists in networks of epistemic dependence. What epistemic responsibility demands of scientists is not self-reliance in gathering evidence and wide-ranging skepticism towards all external epistemic sources, but the collective maintenance of a social process of criticism that can reliably assess the reliability of scientific evidence. Scientific quality control based on a social process of criticism implies not less but often even more epistemic dependence on social mechanisms and technologies of cognitive extension. The earlier we embrace this basic fact, the better suited our perspective will be to deal with the problems of reliability and responsibility in contemporary science.