January 6, 2016

Open Science and the Scientific Ethos: a Panacea?

When a Nobel Prize-winning physicist states that scientists are not honest, it is probably a good idea to take him at his word. Can ‘Open Science’ be a useful bulwark against the weaknesses of humans and their institutions? This blog post is about the ethos of science and how it relates to scientific openness. It starts by outlining the best-known set of scientific norms, then discusses them in relation to the discourses of openness. The norms and discourses are then assessed critically, followed by a section on scientific reproducibility. Can openness cure science of its ills?






I would like to point out that people are not honest. Scientists are not honest at all, either. It’s useless. Nobody’s honest. Scientists are not honest. And people usually believe that they are. That makes it worse.
Richard Feynman, 1963 (1999, p.106).

It may appear an obvious point, but it is sometimes necessary to point out that scientists are only human. Science is a human activity, and the institution of science should be just as open to examination as any of our others. Institutions defend their boundaries, attempt to acquire power, develop internal norms and evolve reward and punishment systems. This may all happen innocently enough, from an internal perspective, but the effects can be undeniably powerful and sometimes pernicious.

Merton's Norms

The most famous description of the norms that have developed in the institution of science is that of Robert K. Merton in The Normative Structure of Science (1973a), and I will begin by describing them. For Merton there are four norms (which he calls “institutional imperatives”) that comprise the “ethos of modern science”. The first is Universalism. Essentially, universalism means that scientific claims should be assessed independently of the person making the claim, and that all should be able to contribute. In Merton’s words, “[o]bjectivity precludes particularism.” A good example of the utility of this norm was provided by its being ignored in the post-war Soviet Union. Trofim Lysenko, a biologist, claimed that anybody who disagreed with his views on agricultural heredity was anti-Marxist, and soon his theories were being championed by Stalin, who imprisoned or executed dissenters. By 1948 it was illegal to challenge his ideas. Political bias trumped universalism. Wheat yields duly fell.

Another aspect of universalism, as described by Merton, is that science should be meritocratic. Talented people should rise to the top – any other arrangement would be to the detriment of scientific progress. This is clearly not always the case. In 1942, Merton explained that “the ethos of science may not be consistent with that of the larger society”, as if they were separate entities, with society polluting science with its non-universalist racisms and other ideologies. By 1968, though, Merton was describing the “Matthew effect,” a structural problem within science itself, in which rewards and publications tend to accrue to those who already have standing in the field (1973b). Scientific institutions are therefore not immune to human imperfections.

The second norm is Communism. This has rather squeamishly been renamed Communalism in the years since. This norm should ensure that scientific knowledge is the property of all humanity, and not merely of the person deemed to be the ‘discoverer’ (a term which is itself not unproblematic). Merton himself notes the big exception that science allows to this norm, though he sees it as a small one: eponymy, as in Newton’s laws or Maxwell’s equations. This nod to what he describes as merely a “commemorative device” is seen as the only institutional admission of intellectual property rights, and as the only reward system that can be permitted. Other systems, such as patents or even socialism, are seen as belonging to society and not to science – again entailing a dichotomy which is in reality difficult to maintain. If eponymy is the only positive incentive in science, then punishment of deviance from these norms would be its negative. It can only be expressed as a loss of reputation, so that scientists who keep things to themselves or act to profit financially from their work are looked upon as greedy or, worse still, unscientific.

Merton’s third norm is Disinterestedness, that scientists should subsume their efforts into the greater scientific adventure without thought of personal gain. Merton is very careful here to separate the interests of individual scientists - such as a hankering for truth or improving humanity’s lot - from the disinterestedness of the institution as a whole. He postulates that scientists will, through their training, internalise this disinterestedness, and will experience some “psychological conflict” should they try to work against it. The guardians of this disinterestedness are other scientists, especially those involved in the peer review process. As scientists are not individually better or more moral than the rest of the population, Merton can only ascribe the “virtual absence of fraud” in science to the institutional norms and the threat of sanction against transgressors.

The final norm is Organised Scepticism. This means that any claim should be critically assessed before it enters the scientific mainstream, regardless of its origin. In this section, Merton is especially keen to separate science, which should be pure and objective, from the institutions in society that would seek to subvert, coerce or pollute it, such as religions or totalitarian regimes. There should be no room for sacred cows.

Openness and the Norms

Merton’s norms, or at least something very much like them, are still held to be the basis of good scientific practice. If one accepts these norms as valid, it is easy to see how moves towards openness could support them. Starting with universalism, open approaches to publication would enable more people to contribute to science, regardless of their background. No longer would large publishing houses be able to choose who contributes, and how much. Big names would no longer be able to monopolise research as, in effect, all would be able to publish. This would lessen the Matthew effect. The effect might be reduced further if open notebooks became the norm, allowing the public to see exactly which piece of research was carried out by which researchers. Further, citizen science initiatives such as the eBird project or Galaxy Zoo extend the ability to contribute to those outside the formal institutions themselves.

Looking at communism, openness could help greatly in ridding science of secrecy and in making the knowledge it produces available to all. Open access to all research would mean that everybody could share in its benefits, and the negative aspects of gene patenting or of huge and unwarranted increases in the price of medicines would be alleviated.

Disinterestedness would clearly be aided by openness. Fraudulent or incomplete data sets would be much easier to weed out if open data were to become the norm. Openness would also help reduce the problem of publication bias, whereby research is only published if it contains a positive result. The logic of the publishers is that nobody wants to read a report that ends with a negative. This may be true, but the non-publication of null results has a hugely distorting effect on science publication and scientific practice. The incentives are currently not conducive to objective research, as eponymy is not the only reward in science; getting published (or not!) is the biggest incentive in the game. Introducing pre-registration for all research would allow the disinterested observer to see just how many experiments have been carried out that have not shown the effects the shiny, newly published research claims to have produced.

Finally, one can see that organised scepticism relies heavily on openness, and so any increase thereof would be beneficial. Scientific claims cannot adequately be assessed without knowledge of the method and data used to make them. Open data and more open notebooks, along with open access, will mean that more people will be able to make better informed criticism of any claim before it can be accepted.

But Merton’s norms are not without their critics, and so any claims for the wonders of openness that rest upon them can be criticised in turn.

Criticisms of the Norms

The first thing to say about The Normative Structure of Science is that it is very much a product of its time. Sensible Christians had by and large retreated from making scientific claims since the bruising at the hands of Darwin and Huxley, and religion is therefore only mentioned fleetingly in the piece. However, other ideologies had been taking science’s name in vain, and it is against these that we have to see Merton’s norms. Throughout the norms Merton mentions the spectre of racism and totalitarianism, both of which had at the time attempted to wear science’s clothes in an attempt at authenticity. Nazism, for instance, was seen as a social Darwinist and eugenicist project. It also tried to develop and use the latest military technology for its ends. This brings to mind George Orwell’s England Your England from about the same time, which begins: “As I write, highly civilised human beings are flying overhead, trying to kill me. They do not feel any enmity towards me as an individual, nor I against them. They are only ‘doing their duty’, as the saying goes … [h]e is serving his country, which has the power to absolve him from evil.” Merton was fighting, as he saw it, a rear-guard action against the nationalistic science and technological misuse that Orwell outlined. This was of course admirable, but not necessarily applicable before or after the period in question.

What of before this period? Shapin claims that scientists (or natural philosophers, as they were known) were always seen to be virtuous individuals, as far back as the ancient Greek philosophers, who single-mindedly pursued truth and honour (1995). This idea of the virtuous individual survived into the 17th century, when gentleman scientists of independent wealth were deemed to be incorruptible, as they required no financial gain. The formation of the Royal Society in 1660 brought these virtuous scientists together, bound by each other’s honour. According to Shapin, the idea of the virtuous individual survived until the mid-20th century, when science ceased to be seen as a collection of individuals and came to be seen more as an institution. This is when Merton attempted to formulate his institutional norms.

Since then, we have seen instances that show the limitations of the norms. Scientific scandals such as the case of Jan Hendrik Schön, who fabricated and reused data to fraudulently publish many articles in reputable journals such as Science and Nature, indicate that internalising disinterestedness is not enough to stop some (and who knows exactly how many?) from crossing the Rubicon. Certainly any notion of sanction also proved unable to hold him back. Further, other scientists were perhaps unwilling to believe that anybody would transgress the norms and so were unable to spot the fraud; as Eugenie Samuel Reich found out, “[i]t had been natural to question the way that Schön had interpreted his data … but not to assume that he was lying” (2009).

It seems, then, that these norms are neither timeless nor wholly effective. Nor are they an accurate reflection of human nature. For each norm there is a counter-norm, as demonstrated by Ian Mitroff’s study of the Apollo moon scientists (1974). Should this be surprising? All humans profess to be communal, disinterested and universalist. They may even be honest. But at the same time, most would also like a Nobel Prize, a valuable patent, and a disease named after them.

Perhaps I am being unfair to Merton. After all, it is better to have norms to live up to, even if we fall short, than not to have them at all. The existence of murderers does not mean that we should abandon the societal norm against murder. However, to formalise and institutionalise these norms is to say that any transgression is a form of deviance, and not therefore a natural human occurrence. I think transgression is ‘natural’ (like ‘discoverer’, the term is somewhat problematic). It will always happen. Not only are the norms frozen in time, they are also too limiting and do not take enough account of the very individual incentives that make us human in the first place.

What has this to do with openness? In my opinion, many of the claims that have been made elsewhere for the benefits of openness, some of which I mentioned above, are in some form dependent on Mertonian ideals (or, at least, something very like them). Openness, then, also fails to take sufficient account of human frailties such as (but certainly not limited to!) greed, ambition and jealousy. Will scientists on the verge of what they see as a major breakthrough, cloaked in secrecy and nervous of being scooped, really submit to an approach of open notebooks? Perhaps the culture will shift that way. It is far from there yet.

By honest I don’t mean that you only tell what’s true. But you make clear the entire situation. You make clear all the information that is required for somebody else who is intelligent to make up their mind.
Richard Feynman, 1963 (1999, p.106).

Reproducibility

One further issue concerning openness and the norms is reproducibility, which was alluded to above as part of disinterestedness (you cannot cheat if your peers are checking to see whether your experiments can be replicated). If an experiment cannot be replicated, it is open to a withering scientific put-down such as Peter Medawar’s succinct effort: “[t]his work therefore becomes an exhibit in the capacious ill-lit museum of unreproducible phenomena” (1985, p. 186). Open notebooks, open data, pre-registration of methods, open access and open review are all seen as ways to increase the possibilities for the reproduction of research, and therefore to increase the validity of results at all stages of the experimental process.

First, it is important to note that no experiment is perfectly reproducible, and experiments become less reproducible the more complex they are. It is one thing to reproduce Galileo dropping balls from the Tower of Pisa; it is quite another to replicate a psychology study with different subjects, in a different lab, by different researchers, in a different country, at a different time. In the series of lectures from which the above quote comes, Feynman goes on to describe psychoanalysts and psychiatrists as witch doctors. His tongue is only halfway into his cheek. Openness might shine a light on the limits of such studies, but it will not cure them.

One attempt to recreate psychology experiments was made by the Open Science Collaboration, in a very open way (2012). It showed that over 60% of psychological studies could not be reproduced. How do we explain this? Are they fraudulent? Badly done? Or merely too complicated? As well as encouraging us to take psychology with a pinch of salt, these results should also encourage us to see experiments, when complex enough, as discrete and fundamentally unreproducible. No amount of openness can change this, just as no amount of scientific openness can change human nature.

A final point on reproducibility links back to publication bias. If every detail of every study is published, who is actually going to do the reproducing? What are their credentials? Going back to Shapin: he claimed that all this focus on replication was leading away from virtue and towards a culture of vigilance. For him this was not a good development, as so much time would be spent checking other people’s work that nobody would have time to do anything original. That was in 1995, and the problem will only get worse with more open methods. His suggestion was to move back from institutional vigilance to a culture of individual virtue. This is also problematic, as the sheer number of papers published nowadays (which will only increase as open publishing gains acceptance) will need some kind of quality control, and I am not sure we should be trusting individuals to virtuously check their own work.

Perhaps random testing would be the way forward, for a scientist need only be shown to be fraudulent once for their career to be over. This, again, would raise more questions than it answers.

Openness is patently not a bad thing in itself. Nothing is. It is also not a panacea for science’s ills. It should not be treated as such. Nothing is that simple where humans are involved.

References
Feynman, R. P. (1999). The Meaning of It All. London: Penguin Books.
Medawar, P. B., & Medawar, J. S. (1985). Aristotle to Zoos: A Philosophical Dictionary of Biology. Harvard University Press.
Merton, R. K. (1973a [1942]). The Normative Structure of Science. In The Sociology of Science: Theoretical and Empirical Investigations (pp. 267-278). Chicago/London: The University of Chicago Press.
Merton, R. K. (1973b [1968]). The Matthew Effect in Science. In The Sociology of Science: Theoretical and Empirical Investigations (pp. 439-459). Chicago/London: The University of Chicago Press.
Mitroff, I. I. (1974). Norms and Counter-Norms in a Select Group of the Apollo Moon Scientists: A Case Study of the Ambivalence of Scientists. American Sociological Review, 39(4), 579-595.
Open Science Collaboration. (2012). An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspectives on Psychological Science, 7(6), 657-660.
Orwell, G. (1962 [1941]). England Your England. In Inside the Whale and Other Essays (pp. 62-90). London: Penguin.
Reich, E. S. (2009). Plastic Fantastic. New York: Palgrave Macmillan.
Shapin, S. (1995). Trust, Honesty and the Authority of Science. In National Academy of Sciences (Ed.), Society’s Choices: Social and Ethical Decision Making in Biomedicine (pp. 388-408). Washington, DC: National Academy Press.

12 comments:

  1. Barry, thanks for this input! In the bigger context of open source methods already available, e.g. for big data analysis, maybe it would be necessary to rethink not so much making quality control open, but to reflect on the infrastructures necessary to get access to methods and data?

    1. Hi 'km', thanks for your comments. I agree, I think some kind of pre-registering (to an open register, of course) and an open notebook scheme would be a useful way of reconfiguring the system so that people can get a look at the methods and data used in studies.

  2. Thanks for this interesting link between Merton and Open Science. Another major problem I see with reproducibility is also linked to the publication bias you mention, namely that merely reproducing experiments does not get you published (unless you can debunk a famous experiment). That means that positive replications do not help researchers career-wise and are thus not very attractive. I guess this means that the academic credit system would need to change as well if we want more quality control in science. An interesting addition to Shapin's account of trust and virtue is the following article by John Hardwig: http://web.utk.edu/~jhardwig/RoleTrust.pdf

    1. Thanks for your comments Judith. Yes, in the current system, it seems the career of a researcher is helped neither by positive replications nor by null results. You have to do something new or blow something old out of the water.

      And thanks for the article, it was interesting to read about trust from a perspective outside of STS.

  3. Excellent critical reflection on openness and Merton's thinking. Deeply time-contextualised, well-explained input. You underline precisely the complexity of human systems and take into account many elements. A thought-provoking read!

  4. For me, the best example of Merton's organized skepticism in the workings of science is the peer review system. Scholarly literature is not regarded as a "serious" contribution if it has not been peer-reviewed. Thus, I was not sure if I would totally agree with you that this norm "relies heavily on openness", as peer review is often a "double-blind" procedure...

    1. Thanks "es"! Yes you are right of course. I was thinking more about organised scepticism not being able to be applied unless the methods, some of the data, and the results of experiments had actually been published - no scientist would make a judgement for or against a claim without some knowledge of how and why it was made. So some openness is necessary at least.

      I suppose I was arguing that some degree of open access to the published works was necessary, using the old meaning of 'open'. If this course has taught me anything, it is that you really have to be careful when using the word 'open', and define exactly what you mean (if that is even possible). My text would have been better had I done so.

  5. Thanks Barry for this very good contribution to Open Science. This blog post is easy to read, leads the reader through the argument, and explains at the beginning of each section what is to be expected. This helps a lot.
    Let me add a short recommendation: the blog is titled with a question, and I hoped to find answers but found mostly facts (which are indeed very well elaborated). In the abstract Barry raised the question of whether openness can cure science of its ills. I am sorry to say that there is no further discussion of this “illness”, which would be an interesting discussion. I personally find questions very good style – but would appreciate finding some more or less satisfying answers in the text...

    1. Thanks Walter. I tried to blindside you by saying that open science is not a panacea in the last paragraph, but you are right that this argument wasn't fully developed in the piece itself. I wish I could say, like a journalist would, that the title was added by the sub-editor and I had no control over it; this is alas not true!

  6. Thank you Barry.
    The only question that I have is admittedly not much connected to open cultures: why are we so afraid of being erroneous or falsified in our making of science? Is it just the reputation regimes that are institutionalized and institutionalizing in science, or should we dig a bit deeper into the ways in which we make science work?
    And if I try to turn my attention towards open research cultures: will those enable us, or make it easier, to make reflexive use of negative or null results?

  7. Hi Steve,

    Thanks for your comments. I would say that in the natural sciences, falsifiability is not something to be afraid of; for natural scientists there is no science without falsifiability!

    My feeling about open research cultures is that if projects are pre-registered, and this register is open to all, then some investigative person (maybe a science journalist even) would be able to find out just how many negative studies there had been, even if they hadn't been published in a journal. This could lead to an alleviation of some of the deleterious effects of publication bias.

    There are lots of 'woulds', 'coulds' and 'shoulds' here, though!
