White House OSTP public access recommendations: Maturing your institutional Open Access strategy

While the global picture of Open Access remains something of a patchwork (see our recent blog post The Changing Landscape of Open Access Compliance), trends are nevertheless moving in broadly the same direction: over the past decade, global publishing has shifted from around 70% of all outputs being closed access to 54% now being open access.

The White House OSTP’s new memo (aka the Nelson Memo) will see this trend advance rapidly in the United States, stipulating that federally-funded publications and associated datasets should be made publicly available without embargo.

In this blog post, Symplectic’s Kate Byrne and Figshare’s Andrew McKenna-Foster start to unpack what the Nelson Memo means, along with some of the impacts, considerations, and challenges that research institutions and librarians will need to address in the coming months.

Demystifying the Nelson Memo’s recommendations

The focus of the memo is on ensuring free, immediate, and equitable access to federally funded research.

The first clause of the memo focuses on working with federal funders to ensure that they have policies in place to provide embargo-free, public access to research.

The second clause encourages the development of transparent procedures to ensure scientific and research integrity is maintained in public access policies. This is a complex and interesting space, which goes beyond the remit of what we would perhaps traditionally think of as ‘Open Access’ to incorporate elements such as transparency of data, conflicts of interest, funding, and reproducibility (the latter of which is of particular interest to our sister company Ripeta, who are dedicated to building trust in science by benchmarking reproducibility in research).  

The third clause recommends that federal agencies coordinate with the OSTP to ensure equitable delivery of federally funded research results and data. While the first clause mentions making supporting data available alongside publications, this clause takes a broader stance toward sharing results.

What does this mean for institutions and faculty?

The Nelson Memo introduces a clear set of challenges for research institutions, research managers, and librarians, who now need to consider how to put internal workflows and guidance in place so that faculty can easily identify eligible research and make it openly available, how to support multiple pathways to open access, and how best to engage and incentivize researchers and faculty.

However, the OSTP has made very clear that this is not in fact a mandate, but rather a non-binding set of recommendations. While this relieves some of the immediate pressure and panic around getting systems and processes in place, the move clearly signals the direction of travel that has been communicated to federal funders.

Funders will look to the Nelson Memo when reviewing their own policies, and will seek alignment when setting the policy requirements that drive action for faculty members across the US. So while the memo does not itself mandate compliance for institutions, universities, and research organizations, it will have a direct impact on the activities faculty are asked to complete – increasing the need for institutions to offer services and support that help faculty easily comply with their funders’ requirements.

How have funders responded so far? 

We are already seeing clear indications that funders are embracing the recommendations and preparing next steps. Shortly after the announcement, the NIH published a statement of support for the policy, noting that it has “long championed principles of transparency and accessibility in NIH-funded research and supports this important step by the Biden Administration”, and that over the coming months it will “work with interagency partners and stakeholders to revise its current Public Access Policy to enable researchers, clinicians, students, and the public to access NIH research results immediately upon publication”.

Similarly, the USDA tweeted their support for the guidance, noting that “rapid public access to federally-funded research & data can drive data-driven decisions & innovation that are critical in our fast-changing world.”

How big could the impact be?

While it will take some time for funders to begin to publish their updated OA Policies, there have been some early studies which seek to assess how many publications could potentially fall under such policies. 

A recent preprint by Eric Schares of Iowa State University [Impact of the 2022 OSTP Memo: A Bibliometric Analysis of U.S. Federally Funded Publications, 2017-2021] used data from Dimensions to identify and analyse publications with federal funding sources. Schares found that:

  • 1.32 million publications in the US were federally funded between 2017 and 2021, representing 33% of all US research outputs in the same period.
  • 32% of federally funded publications were not openly available to the public in 2021 (compared to 38% of worldwide publications during the same period). 

Schares’ study included 237 federal funding agencies – with the removal of the $100m threshold, many more funders now fall under the Nelson Memo than under the previous 2013 Holdren Memo. This makes it likely that disciplines that previously were not impacted will now find themselves grappling with public access requirements.

Source: Impact of the 2022 OSTP Memo: A Bibliometric Analysis of U.S. Federally Funded Publications, 2017-2021: https://ostp.lib.iastate.edu

In Schares’ visualization, where each dot represents a research institution, two main groupings emerge. The first is a smaller group made up of the National Laboratories, which publish fewer papers overall but are heavily federally funded (80-90% of their works). The second is a much larger cluster representing universities across the US. These organisations have 30-60% of their publications federally funded, but from a much larger base of publications – meaning they will likely have many faculty members who now need support.

Where do faculty members need support?

According to the 2022 State of Open Data Report, institutions and libraries have a particularly essential role to play in meeting new top-down initiatives, not only by providing sufficient infrastructure but also support, training, and guidance for researchers. It is clear from the findings of the report that the work of compliance is wearing on researchers: 35% of respondents cited lack of time as a reason for not adhering to data management plans, and 52% cited finding time to curate data as the area where they need the most help and support. 72% of researchers indicated they would rely on an internal resource (colleagues, the Library, or the Research Office) if they required help with managing or making their data openly available.

How to start?

Institutions that invest now in building capacity in these areas – supporting open access and data sharing for researchers – will be better prepared for the OSTP’s 2025 deadline, helping to avoid a last-minute scramble to support their researchers in meeting this guidance.

Thinking about how to enable open access can be a daunting task, particularly for institutions that don’t yet have internal workflows or appropriate infrastructure set up, so we recommend breaking your approach down into more manageable chunks:

1. Understand your own Open Access landscape 

  • Find out where your researchers are publishing and what OA pathways they are currently using. You can do this by reviewing your scholarly publishing patterns and the OA status of those works (see the sketch after this list).
  • Explore the data you have for your own repositories – not only your own existing data sets, but also those from other sources such as data aggregators or tools like Dimensions.
  • Begin to overlay publishing data with grants data, to benchmark where you are now and work to identify the kinds of drivers that your researchers are likely to see in the future. 
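As a concrete starting point, the sketch below shows one way to check the OA status of a handful of DOIs using the free Unpaywall REST API (the same OA-status source used in the dashboard example later in this post). The DOI list and contact email here are placeholders; in practice you would feed in an export from your repository or CRIS.

```python
import requests
from collections import Counter

# Placeholder DOIs -- replace with an export from your repository or CRIS.
dois = [
    "10.1038/nphys1170",
    "10.1126/science.1157784",
]

EMAIL = "openaccess@example.edu"  # Unpaywall asks callers to identify themselves

def oa_status(doi: str) -> str:
    """Look up a DOI in Unpaywall and return its oa_status
    (gold, green, hybrid, bronze, or closed)."""
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                        params={"email": EMAIL})
    resp.raise_for_status()
    return resp.json().get("oa_status", "unknown")

# Tally the OA pathway mix across the sample.
print(Counter(oa_status(doi) for doi in dois))
```

Counting statuses across your full publication list gives you a baseline OA profile to benchmark against.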

2. Review your system capabilities

  • Is your repository ready for both publications and data?
  • Do you have effective monitoring and reporting capabilities to help you track engagement and identify areas where your community may need more support? Are your systems researcher-friendly – how quickly and easily can a researcher make their work openly available?

3. Consider how you will support your research ecosystem 

  • Identify how you plan to support and incentivize researchers, considering how you will provide guidance about compliant ways of making work openly available, as well as practical support where relevant.
  • Plan communication points between internal stakeholders (e.g. Research Office, Library, IT) to create a joined-up approach that will provide a shared and seamless experience to your researchers.
  • Review institutional policies and procedures relating to publishing and open access, considering where you are at present and where you’d like to get to.

How can Digital Science help? 

Symplectic Elements was the first commercially available research information management system to be “open access aware”, connecting to institutional digital repositories to enable frictionless open access deposit of publications and accompanying datasets. Since 2009 – beginning with an initial integration with DSpace and later expanding our repository support to Figshare, EPrints, Hyrax, and custom home-grown systems – we have partnered with and guided many research institutions around the globe as they work to evolve and mature their approach to open access. We have deep experience in building tools and processes that help universities meet mandates set by national governments or funders, report on fulfilment and compliance, and engage researchers in increasing levels of deposit.

Our sister company Figshare is a leading provider of cloud repository software and has been working for over a decade to make research outputs of all types more discoverable and reusable, and to lower barriers to access. Meeting and exceeding many of the ‘desirable characteristics’ for repositories set out by the OSTP itself, Figshare is the repository of choice for over 100 universities and research institutions looking to ensure their researchers are compliant with the rising tide of funder policies.

Below is an example of the type of Open Access dashboard that can be configured and run using the various collated and curated scholarly data held within Symplectic Elements.

In this example, we are using Dimensions as a data source, building on data from Unpaywall about the open access status of works within an institution’s Elements system. Using the data visualizations within this dashboard, you can start to look at open access trends over time, such as the different sorts of open access pathways being used, and how that pattern changes when you look across different publishers or different journals, or for different departments within your organization. By gaining this powerful understanding of where you are today, you can begin to think about how to best prioritise your efforts for tomorrow as you continue to mature your approach to open access. 
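Elements assembles these dashboards for you, but for readers who want a feel for the underlying analysis, here is a minimal pandas sketch using invented data. The columns mirror the kind of export – year, department, and Unpaywall-derived OA pathway per publication – that such a dashboard is built on.

```python
import pandas as pd

# Invented sample: one row per publication.
pubs = pd.DataFrame({
    "year":       [2019, 2019, 2020, 2020, 2021, 2021],
    "department": ["Physics", "History", "Physics", "Biology", "History", "Biology"],
    "oa_status":  ["gold", "closed", "green", "gold", "closed", "hybrid"],
})

# OA pathway mix per year, as percentages -- the core of a trends-over-time view.
trend = (pubs.groupby(["year", "oa_status"]).size()
             .unstack(fill_value=0)
             .pipe(lambda t: t.div(t.sum(axis=1), axis=0) * 100)
             .round(1))
print(trend)

# The same pivot by department shows where targeted support may be needed.
print(pubs.groupby(["department", "oa_status"]).size().unstack(fill_value=0))
```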

Growing maturity of OA initiatives over time – not a “one and done”.

You might find yourself at Level 1 right now, where you have a publications repository along with some metadata, and you can track the number of deposits and do some basic reporting. But there are a number of ways to build this up over time to create a truly integrated OA solution. By bringing together publications and data repositories and integrating them within a research management solution, you can reach a point where you monitor proactively, with an embedded engagement and compliance strategy across all publications and data.

For more information or if you’d like to set up time to speak to the Digital Science team about how Symplectic Elements or Figshare for Institutions can support and guide you in your journey to a fully embedded and mature Open Access strategy, please get in touch – we’d love to hear from you.

This blog post was originally published on the Symplectic website.


Will we only ever dream of endless energy?

The National Ignition Facility (NIF) has achieved fusion ignition using powerful laser systems and x-rays.
Image credit: NIF, Lawrence Livermore National Laboratory, US.

The recent nuclear fusion ignition event at the National Ignition Facility at the Lawrence Livermore National Laboratory in California is a triumph of modern science and of the persistence of scientists who continue to strive to solve some of the most difficult technical and engineering challenges of a generation. However, it is important to see this development in a broader context of global events as well as the research environment that has been created to support the nuclear energy developments upon which society is increasingly likely to depend in the coming years.

Did we vote for this?

It may be argued that geopolitics has been driven by an energy agenda since the late 19th century, when the industrial revolution had moved solidly beyond the borders of the UK and countries began competing for global resources to fuel their burgeoning industrial economies. As our economies have become larger, so has our need for energy. Most recent wars (including the one in Ukraine) have been about control of energy resources – oil or gas. As supplies become scarcer or more expensive to extract, tensions will rise. While voters do not (in most cases) vote directly to support a specific energy-based geopolitical stance, in recent years energy has become a more overt topic in elections.

Even in countries where energy independence is a critical geopolitical issue, green parties do not command a large percentage of the vote, nor do mainstream political parties necessarily have well-articulated policies on energy independence. In Germany, a country whose significant dependency on foreign energy (63.7%) has been in the news this year, the Greens garnered 20.5% of the vote in the 2021 federal elections. Meanwhile, in the neighbouring Netherlands and Belgium, countries with even higher dependencies on foreign energy (68.1% and 78% respectively), green parties have begun to slowly gain ground.

This is perhaps due to the fact that our homes have, until this winter, remained warm at a reasonably affordable cost. However, the phase change that we have all experienced in 2022 (for some very painfully) is a sign of things to come. Indeed, if electorates were to cast their votes more directly based on the growing issues of energy dependence, we might see a significant change in the political landscape in the next few years. Trading blocs like the EU may become more robust in their energy policy – we have already seen the establishment of the EU Energy Platform to start to mitigate the effects of dependency on Russian gas. Being outside such a bloc in current times appears foolish at best.

Enter the apparent saviour of the day, courtesy of a nuclear fusion experiment at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in California. The result has been hailed by a number of media outlets as a solution to our energy problems, but we need to be careful about being overly optimistic. Anyone who has had an interest in nuclear fusion knows that we have been 30 years away from commercial nuclear fusion for the last 40 years. Indeed, it will come as a surprise to precisely no one who knows me that the seminar I gave in English class 31 years ago as a 14-year-old was on tokamak fusion. I clearly recall stating that nuclear fusion was 30 years away. Which just goes to show – I was wrong!

But, this all sounds a bit dangerous…

Perhaps unsurprisingly, some voters have been worried about the risks of developing nuclear solutions. Harnessing the energy source that, uncontrolled, underlies the most destructive weapons our species has ever produced, and which powers the Sun, and consequently our entire lives, is an elusive and sometimes perilous pursuit. Classic science fiction novels such as Asimov’s Robot series, and TV shows like the 1980s adaptation of Buck Rogers, have painted vivid post-apocalyptic atomic horrors in our minds – pictures of both the promise and the failure of fusion. For many, fusion is not just a technology but a cultural phenomenon. It looms large in our collective consciousness partly because it has been in development for so long and holds so much power for both positive and negative outcomes. For a young researcher, it is a beguiling field of study – some of the best minds on the planet, for several generations, have wrestled with taming nuclear fusion.

Figure 1: Timeline of the key developments in nuclear fusion research.

Our knowledge of both forms of nuclear energy – fission and fusion – originates in Einstein’s famous observation that energy and mass are equivalent: E = mc². In the case of nuclear fission (the process used in current nuclear power plants and in the earliest atomic weapons), heavy elements such as uranium and plutonium are used. A heavy element is one with many protons and neutrons in the nucleus of each atom. A configuration of many protons and neutrons (beyond 92 protons) is unstable: more energy is required to keep the nucleus together than if the atom were to split into two (or more) lighter elements. Just a little interaction with, say, a free neutron is enough to break the nucleus of some heavy elements into the nuclei of two or more lighter elements. As this process takes place, energy is given off, which can be converted to heat to turn a turbine. The downside of nuclear fission is that you end up with residual elements that, while more stable than the original atoms in the reaction, are still radioactive and remain so for many years. Such waste products require careful storage in locations where they cannot damage living organisms.

Figure 2: Nuclear Fission versus Nuclear Fusion processes. In the left pane, a heavy element is broken apart via interaction with a neutron into two smaller (but still radioactive) elements and an amount of energy. In the right pane, a deuterium nucleus (a proton and a neutron) and a tritium nucleus (a proton and two neutrons) are brought together to form helium (two protons and two neutrons), a “spare” neutron and energy. In both cases, the right side of each pane is “energetically favourable”, which is to say that the configuration of protons and neutrons on the right of the interaction requires less energy than the configuration on the left, which means that energy is released.
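To make the mass-energy bookkeeping in Figure 2 concrete, here is a quick back-of-the-envelope check of the D-T reaction in the right pane, using standard published atomic masses and the conversion 1 u ≈ 931.5 MeV/c²:

```python
# Energy released in D + T -> He-4 + n, via E = mc^2.
# Masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
U_TO_MEV = 931.494

deuterium = 2.014102  # 1 proton + 1 neutron
tritium   = 3.016049  # 1 proton + 2 neutrons
helium4   = 4.002602  # 2 protons + 2 neutrons
neutron   = 1.008665

# The products weigh slightly less than the reactants;
# the "missing" mass is released as energy.
defect = (deuterium + tritium) - (helium4 + neutron)
print(f"Mass defect: {defect:.6f} u -> {defect * U_TO_MEV:.1f} MeV per reaction")
# Prints ~17.6 MeV, the textbook yield for D-T fusion.
```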

Nuclear fusion, however, is a process that takes place at the other end of the periodic table, with very light elements. Per unit mass of fuel, a fusion reaction produces several times more energy than a fission reaction. In addition, the by-products are not radioactive – just helium, some neutrons, and energy. In essence, nuclear fusion is a completely clean energy source. Such is its promise that some of the best minds in physics have worked on nuclear fusion over the last century. Today, those minds are supplemented by AIs, which help to optimise calculations and design the next generation of test reactors.

There are many approaches being developed as candidates for a commercial nuclear fusion reactor. The main ones include: magnetic confinement fusion (the ring-style tokamak devices – probably the most famous until the recent announcement from NIF); inertial confinement fusion (the type reported on recently); laser-driven fusion; magnetised-target fusion; acoustic inertial confinement fusion; Z-pinch fusion; muon-catalysed fusion; and nuclear reaction control fusion. Each of these approaches has a different risk profile and different pros and cons, but a successful solution may well need learnings from several of these technologies.

While the experiment recently reported from the NIF is a significant step toward nuclear fusion, it is not actually a “break even” event – if you include all the energy used in creating the reaction, the experiment still put in more energy than it got out. There is still a long way to go, but there may be value in making something of this step. Returning science to the public consciousness in a positive way, especially in the face of recent developments in Ukraine and their fallout in the oil industry, may have its benefits. But it will be important not to overplay the hand – presenting fusion as “just around the corner” can backfire badly.

OK, so when will we have it?

Given the increasing importance of this technology to the future of humanity, one would expect to see a significant amount of research funding going into the various routes to fusion. And while the amount is substantial, it is, perhaps, less than might be expected.

Global competitive grant funding for fusion research runs at around USD $800 million per year. By comparison, the US spends around USD $45 billion per year on the total budget of the National Institutes of Health (NIH), and the world spends around USD $32 billion annually on Sustainable Development Goal-related competitive research grants.

I contend neither that health research is not critical, nor that SDG-related research is not an excellent way to spend public money. However, one might expect that an effectively limitless, clean energy source – one that would reduce global dependency on fossil fuels, contribute considerably to reducing both greenhouse gases and the cost of living, and also ease global geopolitical tensions – might warrant more than 1.5% of the annual funding spent on these other worthy and critical initiatives.

I don’t want to address issues of lobbying in this piece, as the point is well known; rather, I want to finish by exploring two points that are closer to research: firstly, the observation that metrics are powerful drivers of behaviour; and secondly, that immediacy seems to be critical in decision making.

Over the last few years, the global nuclear fusion community has consistently produced around 4,000-5,000 research papers per year. Over the same period, the biomedical research community has produced between 800k and 1.25m papers per year, and SDG communities have published between 400k and 1m articles per year. A naive argument would be that fusion papers look expensive relative to papers in either SDG-related research or biomedicine. While it is objectively clear that these areas of research are not comparable in nature, the incentives in the research world are heavily skewed toward paper production, which tends to disadvantage nuclear fusion research. Of course, papers are only one measure of research output. The recent announcement with which I started this blog is a very tangible output of research, and its media coverage is positive, but such events are few and far between and hence don’t easily feed a higher-speed research narrative.

At a more fundamental level, immediacy plays a critical role in this discussion. It took the better part of 20 years to build momentum for research into and funding of the SDGs, but similar levels of research output and funding were achieved for COVID research in just 24 months. The threat of not understanding the SDGs is not immediately evident in the lives of those in established advanced economies, or in large continental territories not so directly at risk from rising water levels or energy challenges – it has not been a burning platform for them. While the threat of COVID is not as existential or as long-lived for humanity as either the SDGs or the emerging energy crisis, the immediacy of the issue in the G20 made the topic instantly appealing both for funding and for publication.

At its heart, nuclear fusion suffers from a perception problem – it is always 30 years away. Because we don’t associate everyday challenges such as energy prices, war, and economic stagnation with the absence of nuclear fusion from our power options, we don’t make research decisions or political choices based on funding and solving this problem. To get there, we need long-term alignment across the political spectrum, striving for nuclear fusion with consistent funding and clear strategic intent.

If the NIF announcement leads to a broad realisation that we are getting closer, and if voters – and hence politicians – take note of the seriousness of our situation, then perhaps another 30 years will not be needed.

Funding levels and publication counts in this article are sourced from Dimensions.

Daniel Hook

About the Author

Daniel Hook, CEO | Digital Science

Daniel Hook is CEO of Digital Science, co-founder of Symplectic, a research information management provider, and of the Research on Research Institute (RoRI). A theoretical physicist by training, he continues to do research in his spare time, with visiting positions at Imperial College London and Washington University in St Louis.


A Conflict of Interests – Manipulating Peer Review or Research as Usual?

In seeking to define morality and moral actions, the Catechism of the Catholic Church states in paragraph 1753 that, “A good intention (for example, that of helping one’s neighbor) does not make behaviour that is intrinsically disordered, such as lying and calumny, good or just. The end does not justify the means.”

Stephen Sammut, PhD

Science, Scientific Method, and Politics 

It is tempting to think of science in the abstract as objective and pure based on rigorous analysis of empirical evidence. Conversely, politics might often appear less structured and more chaotic, based on subjective values and driven by interest groups and compromises. However, both are human endeavours – neither science nor politics functions solely in the abstract. Both are influenced by biases that are often not evident or transparent to the external observer. The scientific method is one mechanism of checks and balances used to curtail undue, inappropriate, or political influence on science. 

The scientific method teaches researchers to be sceptical and revolves around the performance of rigorous experiments, the collection of data, and the unbiased presentation of results in a format with sufficient explanation and transparency that peers may review, question and reproduce the results. In contrast to the platonic ideal of the scientific method, scientific enterprise in practice is more complex and nuanced. It involves many scientists with complex relationships and drivers, research institutions with needs, funding agencies with stakeholders, and publishers with shareholders. All operate according to their incentives and values. And they compete for support and funding within a society shaped by a complex, dynamic, and multi-stakeholder landscape. 

Politics also operates in what often seems like a detached or parallel universe in which decisions are reached via a mix of scientific and economic evidence, the needs of the general population, and sometimes by influential interested individuals, groups, and companies. 

In reality, science and politics have always been intimately connected, and neither works in practice as they do in theory. Science is political, and although politicians and lobbyists may not use the scientific method, they use science. Science may be used politically but what is crucial is to ensure that politics and subjectivity do not interfere with the scientific method.

Peer review is a check within the framework of scientific communication, but it is not the check. It is, however, the one salient to this story.

Peer review has existed since the 1700s and provides an opportunity to validate scientific research. Having become an accepted norm only about 50 years ago, it ideally operates by having knowledgeable, independent experts review scientific research. Most people reading this article understand the broad workings of peer review. The peer reviewers should be independent of each other and experts in a topic covered in the paper (Fig. 1). The reviewers offer insight into the quality of the subject and the strength of the methods. In theory, all actors should be independent of one another, but in practice this is rarely the case. ‘Peers’ implies some overlap among people and their knowledge – those taking on the review must have the capacity and capability to form a thoughtful critique of a given piece of work. To that end, the editors, peer reviewers, and authors are often part of the same scientific society or even organisation (Fig. 2).

Figure 1: Peer Review Process: Independence; and Figure 2: Peer Review Process: Affiliation Overlap.

Because the peer review process varies and has not been standardised, the difference between optimising and manipulating it may not be clear. The former is a grey area of knowing how the system works and fine-tuning one’s approach for professional gain. The latter means understanding how the system works and stepping over community boundaries of acceptable practice. The Committee on Publication Ethics (COPE) offers guidance on peer review. Meanwhile, the International Committee of Medical Journal Editors (ICMJE) clearly states: “Reviewers should declare their relationships and activities that might bias their evaluation of a manuscript and recuse themselves from the peer-review process if a conflict exists.”

See what you think in the following actual case.

Manipulation of Peer Review or Research as Usual?

We take a controversial 2022 research publication as our subject in this case study. However, the nature of the research is not critical to our discussion but rather the scholarly communications process and its integrity – specifically the character of the peer review process. We abstract crucial elements of this case and highlight the most salient and relevant issues. We look at this case without revealing the topic area, as this can be a distraction to the point at hand. 

We identified the current case not via a specific literature search (i.e., a topic-based approach) but rather by studying variances in trust marker signatures (e.g., hypothesis, conflict of interest, and funding statements) across a range of literature, while blind to the subject area. This paper fell outside a specified range of norms for several trust markers. For example, the stated study purpose did not use the drier language typical of research in this area, which, combined with the lack of a funding statement, raised initial suspicion.
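Ripeta’s detection pipeline is not published, so the following is a purely illustrative toy sketch of the general idea: scan an article’s text for a few expected trust markers using naive keyword patterns, and flag whatever is missing. Real trust-marker analysis is considerably more sophisticated than this.

```python
import re

# Toy trust-marker checks: crude keyword patterns, for illustration only.
TRUST_MARKERS = {
    "funding statement":    re.compile(r"\b(funded by|funding|grant)\b", re.I),
    "conflict of interest": re.compile(r"\b(conflicts? of interest|competing interests)\b", re.I),
    "data availability":    re.compile(r"\b(data availability|data are available)\b", re.I),
}

def missing_markers(text: str) -> list[str]:
    """Return the trust markers that a simple text scan fails to find."""
    return [name for name, pat in TRUST_MARKERS.items() if not pat.search(text)]

article = "We declare no conflicts of interest. Data are available on request."
print(missing_markers(article))  # ['funding statement'] -> worth a closer look
```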

Our chosen case involves three guest editors, four peer reviewers, and a single author, all of whom appear to be closely affiliated, either within the community or through their professional affiliations. Three peer reviewers work directly for a single private organisation (“Organisation X”). One of the guest editors, the fourth peer reviewer, and the author are all affiliated with Organisation X – yet only one of the peer reviewers listed an affiliation with Organisation X. The other two guest editors are closely aligned with the principles of Organisation X but are leaders in similar organisations. Only one of the peer reviewers came from a traditional academic research institution; the others did not have affiliations with traditional research institutions. Nuances of peer review are described elsewhere.

Generally, for proper peer review we expect reviewers to have varying yet overlapping knowledge and training in related fields. For example, an economics paper might be reviewed by a topic expert and a statistician – overlapping fields, but different areas of expertise. Additionally, we expect to see a balance of knowledge and affiliations across editors, peer reviewers, and the author. Affiliations may overlap in narrow fields with small or cutting-edge communities, but the case in question is not in a narrow field. The alignment of interests raised a flag, though.

In summary, the expertise of guest editors, peer reviewers and the author appears to overlap, as do their perspectives, affiliations, and alignment of interests. (Fig. 3).

Objectively and without specific context, many questions come to mind: When would these overlaps be acceptable while maintaining a robust commitment to research integrity? What other information do you need to know to make that decision? Will the peer reviewers be able to critically and independently evaluate the science within the paper?

Figure 3: Peer Review Process: Case Study.
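One way to quantify the kind of overlap sketched in Fig. 3 is to treat shared affiliations as edges in a graph and look at how densely connected the participants are. The sketch below uses the networkx library with invented names and affiliations in the spirit of the anonymised case; it illustrates the approach, not the data from the actual paper.

```python
import networkx as nx
from itertools import combinations

# Invented participants and affiliation sets, echoing the case's structure.
people = {
    "Editor A":   {"University P", "Organisation X"},
    "Reviewer 1": {"Organisation X"},
    "Reviewer 2": {"University Q", "Organisation X"},
    "Reviewer 3": {"University R"},
    "Author":     {"University S", "Organisation X"},
}

# Link any two participants who share at least one affiliation.
G = nx.Graph()
G.add_nodes_from(people)
for (a, affs_a), (b, affs_b) in combinations(people.items(), 2):
    shared = affs_a & affs_b
    if shared:
        G.add_edge(a, b, shared=sorted(shared))

# A density near 1.0 means nearly everyone is connected -- a possible insularity flag.
print(f"Shared-affiliation density: {nx.density(G):.2f}")
for a, b, data in G.edges(data=True):
    print(f"{a} -- {b} via {data['shared']}")
```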

The Case: When are commonly held interests too overlapping for peer reviewers? 

The case mentioned above is the recently published (and now retracted) paper in Frontiers in Psychology, “The Turnaway Study: A Case of Self-Correction in Science Upended by Political Motivation and Unvetted Findings” (Coleman, 2022). This paper sought to criticise The Turnaway Study, a landmark study describing “the mental health, physical health, and socioeconomic consequences of receiving an abortion compared to carrying an unwanted pregnancy to term”. The article came to our attention through algorithms that flag articles whose trust markers appear irregular. This alert prompted us to search social media and PubPeer, where a corroborating signal was found, and to look more closely at the trust markers within the article to check that due diligence of scientific processes had been followed. Because Frontiers published the names of the reviewers and their declared affiliations, this transparency allows researchers to review their associations in the context of the peer review process and assess the potential for insularity.

Coleman’s article, retracted on 26th December 2022 and covered by Retraction Watch, appeared in the journal as part of a research topic (a curated article collection, somewhat like a special issue), Fertility, Pregnancy and Mental Health – a Behavioral and Biomedical Perspective. This research topic was led by three guest editors at Frontiers, while the Coleman article itself had four peer reviewers. The peer reviewers all state different affiliations, but three are associated with the same anti-abortion Charlotte Lozier Institute (CLI), which states on its website that it is “the preeminent organisation for science-based pro-life information and research.” Moreover, the editor charged with handling this article is affiliated with CLI. However, most of these associations were not disclosed (see table and Fig. 4).

Name | Role | Stated Affiliation | Affiliation with Potential for Conflict of Interest | Cited by CLI*
Stephen Sammut | Guest Editor | Franciscan University of Steubenville | Charlotte Lozier Institute; former member, WECARE** | 1
Patrick P Yeung | Guest Editor | Saint Louis University | St Louis Guild of the Catholic Medical Association | –
Denis Larrivee | Guest Editor | Loyola University Chicago | International Association of Catholic Bioethics | –
Robin Pierucci | Reviewer | Homer Stryker MD School of Medicine, Western Michigan University | Charlotte Lozier Institute | 7
Steven Braatz | Reviewer | American Association of ProLife ObGyns | Charlotte Lozier Institute | 4
Tara Sander Lee | Reviewer | Charlotte Lozier Institute | Charlotte Lozier Institute | 8
John Thorp | Reviewer | Carolina Asia Center, University of North Carolina at Chapel Hill | Crisis Pregnancy Center Director | 7
Priscilla K. Coleman | Author | Human Development and Family Studies, Bowling Green State University | Former Director, WECARE** | 4
*“Cited by CLI” means the person wrote, or was cited in, blog posts or other writings published by the Charlotte Lozier Institute. Note that being cited by CLI does not indicate an endorsement from the person being cited.
**World Expert Consortium for Abortion Research and Education (WECARE).
Figure 4: Peer Review Process: Affiliations.

CLI presented an amicus brief (an expert opinion) to the US Supreme Court on 29th July 2021 in support of overturning Roe v. Wade, the decision which had asserted for the past 50 years that women in the United States have a constitutional right to an abortion. Moreover, one of the peer reviewers of the Coleman article, Robin Pierucci, MD, an associate scholar at CLI, filed a separate amicus brief on 20th July 2020 with the Life Legal Defense Foundation in the Dobbs v. Jackson Women’s Health Organization US Supreme Court case. Priscilla K. Coleman directed the World Expert Consortium for Abortion Research and Education (WECARE), whose ten other members included Stephen Sammut. John Thorp’s legal testimony on abortion has previously come into question, and he has been the medical director of an anti-abortion crisis pregnancy centre for over 40 years.

Giving Air to Unethical Practices

We are passing no comment on the area of research involved here, since this is a highly emotive area for many. However, this peer review process, viewed independently of the underlying research, is of clear interest from a research conduct and integrity perspective. Furthermore, our simple example highlights the potential for institutes, peer reviewers, or authors to translate aligned political interests into scientific influence.

A decision-making majority of editors and peer reviewers are members or affiliates of organisations with publicly stated aligned interests; this process does not meet the standard of the independent, unbiased scientific method.

Allowing this paper to be published in the scholarly record provides a sense of unwarranted legitimacy to the arguments. We hope that publishers will learn from this experience and take action.

For those responsible for the paper, including its undeclared conflicts of interest, the end goal of having a ‘peer-reviewed’ article does not justify the means used to get there.

Dr Leslie McIntosh

About the Author

Dr Leslie McIntosh, CEO | Ripeta

Dr McIntosh is founder and CEO of Ripeta, a company formed to improve scientific research quality and reproducibility. Part of Digital Science, Ripeta leads efforts in automating quality checks of research manuscripts. Academic turned entrepreneur, Dr McIntosh served as the executive director for the Research Data Alliance (RDA) – US region and as the Director of the Center for Biomedical Informatics at Washington University School in St. Louis. Over the past years, she has dedicated her work to improving science.

