“Publish or perish” seems to be the axiom of science these days. Production and time pressure are part of the scientist’s daily routine. The interplay between indexed journals and scientific production has left little room for confirmation work. For that reason, Open Science Collaboration initiatives, tested in the field of psychology, set out to estimate the reproducibility of a set of research reports; their results lead us to new implications, questions and key areas of responsibility in the scientific and academic world.
In addition to indexed journals, which are considered of higher scientific quality, we find the impact factor (IF), briefly defined as a measure reflecting the average number of citations to recent articles published in a given journal. Both have been widely recognized as standard quality measures, but both have also been strongly criticized. Along the same line, new paradigms of the relationship between science and innovation can lead to a ‘mechanization’ of science, in the sense of understanding innovation exclusively as the expected outcome. In this regard, it is important to consider the role of science in society, its relation to industry and the way funding is allocated across this spectrum. If we follow the perspective in which innovation is deeply linked to production, we should take into account that the rules of the market and of production might also play a leading role.
One essential feature related to production is time, or rather, maximizing it. Production has to be fast, and incentives (not only economic ones) are tied to valuable results, given the extremely scarce funding sources usually available for scientific research. In this regard, the paper ‘An Open, Large-Scale, Collaborative Effort to Estimate the Reproducibility of Psychological Science’ presents an initiative under the roof of the Open Science movement. It is described as a collaborative project integrating numerous volunteers and institutions. The project, called “The Reproducibility Project”, is tested in the area of psychology. It starts from the premise that “… [due to] strong incentives for innovation and weak incentives for confirmation, direct replication is rarely practiced or published” (p. 657). Underlining the authors’ words, innovation is the motor of production and determines the speed at which science has to be produced.
If the pressure to present results is so great, and there are always funding limitations even for ongoing research, plus tight schedules, what should motivate scientists to reproduce their colleagues’ work? Reproducibility for the sake of science alone would be the most favorable answer; however, some kind of incentive has to mediate. Perhaps ‘the desire to help science advance’, ‘improving one’s own career’, or simply ‘contributing’ and ‘learning new things’ would be among the answers. The task also poses further challenges, for instance the fear, which the authors also mention, of damaging the image of science and thereby reducing future funding. In real terms this might unfortunately be true, but I also share the view that projects like the one described in the paper might increase transparency and accessibility within the scientific community.
Visibility is another concept inherent to science, traditionally understood as appearance in indexed journals, the IF, sponsorship by renowned research institutions, frequency of publication, relevance of the topic and so on; but with this initiative we can also explore another perspective on visibility. What about offering one’s research to a reproducibility project? Some of the fruitful outcomes would be contributing, strengthening the spirit of collaborative work, and eventually improving one’s results in further publications: a win-win outcome, also in terms of visibility.
Peer review, on the other hand, is currently one of the most accurate and successful means of confirming the veracity of research in science. However, peer review also has its limitations: for instance, the need for specialized equipment that is unavailable, or the specificity of laboratory samples that reviewers cannot access and have no budget to acquire, to mention just the most common ones. These are challenges that might be better controlled in a reproducibility project.
How to deal with the results of a ‘Reproducibility Project’ in science is another big question. In my opinion, the authors of the paper also struggled with the evaluation of replication-study results, acknowledging that “…As yet, there is no single general, standard answer to the question ‘what is replication?’ so we employ multiple criteria” (p. 658). At this point, I will take the risk of saying that there is a big difference between research in psychology and research in the natural sciences, particularly in my field, microbiology. To replicate an experiment in microbiology, most scientists working in the same field should know the procedure to follow (as a general rule, it has to be spelled out in the methodology section of the paper), and most of them also have the proper equipment or can eventually gain access to it. So the concept of replication does not have as wide a range of definitions in microbiology as it might have in psychology.
References
Open Science Collaboration. (2012). An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspectives on Psychological Science, 7(6), 657-660.
Thank you for the post. I think one of the important incentives for (young) scientists is visibility. Certainly, citations and impact factors are a good way to measure success, but in some cases this is not the incentive that really matters. At a discussion in Dec 2015, the speakers pointed out that visibility is very important for young researchers: they want to be seen. Perhaps we should put more emphasis on this question; I have the impression that visibility is not discussed as an important advantage of the open access approach.
Walter, I agree with you that visibility of scientific output is an important, if not the most important, incentive for researchers to publish genuine research articles. I also agree that the visibility of science and scientists is all too often underreflected as a category. Yet I wouldn't consider journal impact factors a measure of the success or quality of research, but rather good proxies for the (potential) visibility of research output. Further, I would think of citation counts as a measure of whether a piece of scientific deliberation has been impactful in a scientific community, hence whether it has been seen and taken up.
Thus I think we have to narrow down our conception of visibility into visibility in the field vs. visibility to broader audiences. Whereas we have, as stated above, journal impact factors and citation counts as proxies to account for the former, we still lack robust measures for the latter, even with altmetrics developing as tools for assessing the visibility and impact of science in broader terms.
As we tried to show in a 2013 study (http://www.bibliometrie-pf.de/article/viewFile/168/217), the possibility of publishing research output in Gold Open Access journals has little impact on researchers' motivation when choosing a journal as a channel of deliberation. It is still the journals with high visibility in the field/scientific community, exhibited through impact factors, that attract them.
So my question would be: what kind of visibility are researchers striving for? Is, or could, visibility to the public be a real incentive for researchers to adopt open cultures?