Welcome to Digital Science’s 2024 Annual Report, a comprehensive overview of our efforts to revolutionize the global research ecosystem. We empower researchers and institutions with innovative tools, including those leveraging AI, to drive collaboration, transparency, and impactful discoveries. The report highlights our achievements in advancing open research practices, supporting the academic community, fostering research integrity, and championing sustainability.
Download the report for insights into our vision and values, detailing contributions to improving research outcomes, driving innovation, and promoting open data standards. It also outlines our Environmental, Social, and Governance (ESG) commitments, showcasing efforts to reduce our carbon footprint. Through impactful partnerships and groundbreaking tools, Digital Science continues to lead in transforming how science is conducted and shared for the benefit of society.
Digital Science exists to support the research ecosystem.
“I am excited to share with you our wide and varied contributions to the needs of the research ecosystem and our communities.”
Daniel Hook
CEO, Digital Science
Highlights
Launch of Research Transformation campaign
In 2024, Digital Science initiated the Research Transformation campaign, a global effort to understand and support the evolving research landscape. Through surveys and interviews with nearly 400 academics across 70 countries, the campaign explored themes like AI, openness, and research security, culminating in the publication of the report Research Transformation: Change in the Era of AI, Open and Impact.
Commitment to Open Data practices
Digital Science pledged support for the Barcelona Declaration on Open Research Information in 2024, launching its own Open Principles to promote inclusivity, reproducibility, and accessibility in research. The annual State of Open Data Report revealed growing global recognition of open data practices, while highlighting disparities in resources that impede progress.
Advancing forensic scientometrics
Digital Science made significant strides in the emerging field of Forensic Scientometrics (FoSci) in 2024, developing tools such as Author Check to uncover errors and manipulations in scientific publications. This work strengthens trust in scholarly communication and addresses systemic vulnerabilities in research integrity.
Strengthening research in Sub-Saharan Africa
In partnership with the Training Centre in Communication (TCC Africa), Digital Science helped train more than 570 early-career researchers across seven African nations in 2024. The collaboration enhanced open access adoption, expanded African scholarship in the Dimensions database, and advanced equitable scholarly publishing practices.
Environmental sustainability initiatives
Digital Science demonstrated its commitment to sustainability by setting net-zero targets aligned with the Paris Agreement goals. In 2024, the company reported its carbon emissions, purchased renewable electricity certificates, and invested in high-quality offsets to mitigate its environmental impact.
“Driven by curiosity and guided by a strong sense of purpose, Digital Science champions a global research ecosystem that values integrity, inclusivity, and impact.”
Stefan von Holtzbrinck
CEO, Holtzbrinck
For Digital Science, Environmental, Social, and Governance (ESG) commitment isn’t just about compliance.
Altmetric adds podcasts as an attention source, offering a more complete view of research influence
Wednesday 15 October 2025
In a major step forward for tracking the real-world impact of research, Digital Science today announces that Altmetric has added a new attention source: Podcasts.
Altmetric is the first in the world to include podcasts among its measures of research impact.
In addition to podcasts, Altmetric’s many attention sources include select social media channels, news, blogs, public policy sites, patents, clinical guidelines, and more.
A complete view of research influence
Miguel Garcia, VP of Product, Digital Science, said: “Altmetric is about tuning in to where research conversations are really happening, and understanding how that research is being received, discussed, debated, and shared. A complete view of research influence isn’t possible without podcasts.
“With Altmetric podcast tracking, we recognize that these real-world conversations play a critical role in shaping public understanding and acceptance of research. Podcasts add rich, narrative-driven evidence to the impact story, offering a more complete view of research influence across scholarly, professional, and public domains.
“With more than half a billion people listening to podcasts for information, and at a time when podcasts are growing as a communication and educational platform, we feel the moment is right to include these conversations as an attention source. Publishers, academics, industry, governments, and funders will all now benefit from better understanding the impact of research.”
Benefits of podcast tracking
By adding podcasts as an attention source, Altmetric will enable users to:
Strengthen reporting on research impact
Capture a broader, more complete attention landscape
Gain deeper public engagement insights
Diversify research impact data sources
All user segments within the research ecosystem will benefit from Altmetric’s podcast tracking:
Academics: Strengthen submissions that demonstrate the real-world impact and influence of research
Enterprise: Identify emerging Key Opinion Leaders (KOLs) and track therapeutic-area conversations, even outside traditional publishing
Publishers: Highlight where journals are discussed in accessible, mainstream forums that boost author engagement
Funders: Ensure research funded is making an impact in broader public discourse, justifying investment
About Altmetric
Altmetric is a leading provider of alternative research metrics, helping everyone involved in research gauge the impact of their work. We serve diverse markets including universities, institutions, government, publishers, corporations, and those who fund research. Our powerful technology searches thousands of online sources, revealing where research is being shared and discussed. Teams can use our powerful Altmetric Explorer application to interrogate the data themselves, embed our dynamic ‘badges’ into their webpages, or get expert insights from Altmetric’s consultants. Altmetric is part of the Digital Science group, dedicated to making the research experience simpler and more productive by applying pioneering technology solutions. Find out more at altmetric.com and follow @altmetric on X and @altmetric.com on Bluesky.
About Digital Science
Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, Overleaf, ReadCube, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit digital-science.com and follow Digital Science on Bluesky, on X or on LinkedIn.
Media Contact
David Ellis, Press, PR & Social Manager, Digital Science: Mobile +61 447 783 023, d.ellis@digital-science.com
Digital Science report offers “mixed score card”, makes 23 recommendations including mandatory ORCIDs for all Aussie researchers
Thursday 9 October 2025
Digital Science, a technology company serving stakeholders across the research ecosystem, has made a series of 23 recommendations for Australia’s research future in a report published today into the use of persistent identifiers (PIDs) in research.
Commissioned by the Australian Research Data Commons (ARDC), Digital Science was tasked with developing a comprehensive PID benchmarking framework and conducting a benchmarking process that could be used to monitor the effectiveness of Australia’s National PID Strategy over time. The report, developed collaboratively with the ARDC, also benefited from consultation and engagement with the Australian research community.
The lead author of the report, Digital Science’s VP of Research Futures, Simon Porter, will discuss the findings at two upcoming events in Brisbane, Australia: International Data Week (13-16 October) and the eResearch Australasia Conference (20-24 October).
A unique opportunity for Australian research
“This is the first time Australia’s National PID Strategy has been benchmarked, and it represents a unique opportunity for the Australian research system to benefit from that process,” Simon Porter said.
“What we’ve seen from the benchmarking is that Australia’s adoption of ORCID for research publications across the research sector has been extremely successful – and Australia is now third in the world for including DOI (Digital Object Identifier) links with dissertations published online.
“Workflows between publishers, institutional research information systems, and ORCID are also sufficiently strong, and we can see that Australia is well placed for a more comprehensive use of the ORCID infrastructure.
“However, our comprehensive review gave Australian research a mixed score card and recommended several changes and interventions to help strengthen the national strategy,” Mr Porter said.
“One of the key issues we’ve seen is that although Australian researchers are more engaged than the global average in the practice of data citation, they trail significantly behind their European peers.
“And while ORCID and ROR adoption has been strong for publications, the use of persistent identifiers with data sets and non-traditional research outputs (NTROs) remains the exception rather than the norm. As significant publishers of NTRO items in their own right, institutions should hold themselves to the same standards that they expect from publishers – all creators should ideally be described with an ORCID and an affiliation ID (ROR).”
Natasha Simons, Director of National Coordination at the ARDC, congratulated Digital Science on the release of the National PID Benchmarking Toolkit. “The Australian Persistent Identifier Strategy is a critical national initiative to benefit the Australian people by strengthening our digital information ecosystem, the quality of our research and our capacity for effective research engagement, innovation and impact,” she said. “So it is essential to develop robust benchmarks that can track our progress and measure outcomes. The Toolkit provides us with exactly what’s needed.”
Recommendations to strengthen Australia’s research future
Some of the 23 recommendations made in the report include:
Australian research has progressed to the point where ORCIDs should now be mandatory for all researchers; Australian institutions should require ORCID registration within their institutional research information management systems.
Australian research institutions should adopt the best practices of publishers to ensure that all authors are described by ORCIDs and affiliations via ROR.
Australia should join international efforts to press all publishers both to capture ORCIDs and to push the associated metadata into Crossref, and should avoid publishers that do not support ORCID workflows.
Australia should consider a national policy for publishing dissertations with DOIs in institutional repositories, formalizing the use of ORCIDs for authors and their supervisors.
Reports published by universities and their research centres should ideally be published in institutional repositories, with associated identifiers.
Ongoing benchmarking analysis of PIDs should not ignore closed-access material (for example, ignoring closed-access publications would mean missing 35% of Australia’s research output in 2024).
RAiDs (Research Activity Identifiers) should be added from “day one” of the creation of a funding grant.
Grants funding organizations should create persistent identifiers “as soon as is practical” – including complete metadata – to enable research funding to be visible and tracked earlier.
“We welcome the opportunity to have led this benchmarking process, and we hope our recommendations will lead to some meaningful improvements within Australian research,” Mr Porter said.
“Importantly, we’ve also demonstrated that it is possible to produce a benchmarking toolkit for PIDs, and our work may have implications for other nations and their roadmaps towards a persistent identifier future.”
Background: The importance of PIDs
Persistent identifiers (PIDs) are unique, long-lasting references to individual researchers, their work, and other digital outputs and resources. They help connect researchers, projects, outputs, and institutions, and have become critical for:
Making research inputs and outputs FAIR (findable, accessible, interoperable, and reusable)
Enabling research outputs to be identified, tracked and cited
Analyzing research impact
Supporting national-scale research analytics
Widely used PIDs include ORCID iDs, DOIs, and RORs, while emerging identifiers include DOIs for grants and RAiDs for projects.
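As a small illustration of how PIDs are designed to be machine-checkable, the sketch below validates an ORCID iD against its ISO 7064 MOD 11-2 check digit, the checksum scheme ORCID documents publicly. It is a minimal, standalone snippet for illustration only and is not part of the benchmarking toolkit described in the report.

```python
import re

def orcid_checksum_valid(orcid: str) -> bool:
    """Check an ORCID iD (e.g. '0000-0002-1825-0097') against its
    ISO 7064 MOD 11-2 check digit, as documented by ORCID."""
    chars = orcid.replace("-", "").upper()
    if not re.fullmatch(r"\d{15}[\dX]", chars):
        return False           # wrong shape: 16 characters expected
    total = 0
    for ch in chars[:-1]:       # the first 15 digits feed the checksum
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    check = "X" if result == 10 else str(result)
    return chars[-1] == check

print(orcid_checksum_valid("0000-0002-1825-0097"))  # True (ORCID's sample iD)
print(orcid_checksum_valid("0000-0002-1825-0098"))  # False (check digit altered)
```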
Note: In the report, Simon Porter declares that he is also a member of the ORCID Board.
About Digital Science
Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, OntoChem, Overleaf, ReadCube, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit digital-science.com and follow Digital Science on Bluesky, on X or on LinkedIn.
Media contact
David Ellis, Press, PR & Social Manager, Digital Science: Mobile +61 447 783 023, d.ellis@digital-science.com
Staffing cuts and budget reductions are squeezing federal research agencies from both sides — yet your mission hasn’t gotten any smaller.
When critical reviews take 15–20 days, every lost day means slower funding decisions, higher risk exposure, and reduced program impact. Smaller teams simply can’t afford to waste time chasing data across siloed systems.
Waiting for resources to improve isn’t a strategy.
With fewer people to share the load, inefficiencies multiply — and so do the risks of missed impacts, unvetted partners, and misaligned funding.
Our new report, Doing More with Less: How Federal Research Agencies Are Maximizing Impact with Smarter Data Intelligence, reveals how agencies are:
Cutting review times by up to 90% — without adding headcount
Gaining real-time visibility into performance, partnerships, and risk
Reducing reliance on overburdened staff for manual data work
Securing data access in alignment with FedRAMP and DoD IL-4 requirements, pending 2026 certification
With Dimensions, your smaller team can work like a larger one — unifying publications, grants, patents, policy, collaborator data, and risk insights in one secure platform.
Get the report. Get the advantage.
Fill out the form to access your copy of Doing More with Less and see how other agencies are meeting higher expectations with fewer resources.
Doing More with Less: How Federal Research Agencies Are Maximizing Impact with Smarter Data Intelligence
This article distills key insights from the expert roundtable, “AI in Literature Reviews: Practical Strategies and Future Directions,” held in Boston on June 25. A range of R&D professionals joined the roundtable, bringing perspectives from across the pharmaceutical and biotechnology landscape. Attendees included senior scientists, clinical development leads, and research informatics specialists, alongside experts working in translational medicine and pipeline strategy. Participants represented both global pharmaceutical companies and emerging biotechs, providing a balanced view of the challenges and opportunities shaping innovation in drug discovery and development.
Discussions covered real-world use cases, challenges in data quality and integration, and the evolving relationship between internal tooling and external AI platforms.
If you’re in an R&D role, whether in computational biology, informatics, or scientific strategy, and you’re looking to scale literature workflows in an AI-enabled world, keep reading for practical insights, cautionary flags, and ideas for future-proofing your approach.
Evolving Roles and Tooling Strategies
Participants emphasized the diversity of AI users across biopharma, distinguishing between computational biologists and bioinformaticians in terms of focus and tooling. While foundational tools like Copilot have proven useful, there’s a growing shift toward developing custom AI models for complex tasks such as protein structure prediction (e.g., ESM, AlphaFold).
AI adoption is unfolding both organically and strategically. Some teams are investing in internal infrastructure like company-wide chatbots and data-linking frameworks while navigating regulatory constraints around external tool usage. Many organizations have strict policies governing how proprietary data can be handled with AI, emphasizing the importance of controlled environments.
Several participants noted they work upstream from the literature, focusing more on protein design and sequencing. For these participants, AI is applied earlier in the R&D pipeline before findings appear in publications.
Data: Abundance Meets Ambiguity
Attendees predominantly use public databases such as GenBank and GISAID rather than relying on the literature. Yet issues persist: data quality, inconsistent ontologies, and a lack of structured metadata often require retraining public models with proprietary data. While vendors provide scholarly content through large knowledge models, trust in those outputs remains mixed. Raw, structured datasets (e.g., RNA-seq) are strongly preferred over derivative insights.
One participant described building an internal knowledge graph to examine drug–drug interactions, highlighting the challenges of aligning internal schemas and ontologies while ensuring data quality. Another shared how they incorporate open-source resources like Kimball and GBQBio into small molecule model development, with a focus on rigorous data annotation.
Several participants raised concerns about false positives in AI-driven search tools. One described experimenting with ChatGPT in research mode and the Rinsit platform, both of which struggled with precision. Another emphasized the need to surface metadata that identifies whether a publication is backed by accessible data, helping them avoid studies that offer visualizations without underlying datasets.
A recurring theme was the frustration with the academic community’s reluctance to share raw data, despite expectations to do so. As one participant noted:
“This is a competitive area—even in academia. No one wants to publish and then get scooped. It’s their bread and butter. The system is broken—that’s why we don’t have access to the raw data.”
When datasets aren’t linked in publications, some participants noted they often reach out to authors directly, though response rates are inconsistent. This highlights a broader unmet need: pharma companies are actively seeking high-quality datasets to supplement their models, especially beyond what’s available in subject-specific repositories.
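One lightweight way to surface the “is this publication backed by accessible data?” signal raised above is to look for dataset relations in a work’s Crossref record. The sketch below is a hedged illustration: it assumes the record exposes an “is-supplemented-by” relation, which many records do not carry, and the DOI in the usage line is hypothetical.

```python
import requests

def linked_dataset_dois(doi: str) -> list[str]:
    """Return identifiers of datasets that a publication's Crossref record
    declares via 'is-supplemented-by' relations, when such metadata exists.
    An empty list does not prove the study lacks underlying data; many
    Crossref records simply carry no relation metadata."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    relations = resp.json()["message"].get("relation", {})
    return [entry.get("id", "") for entry in relations.get("is-supplemented-by", [])]

# Hypothetical usage:
# print(linked_dataset_dois("10.1234/example-doi"))
```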
Literature and the Need for Feedback Loops
Literature monitoring tools struggle with both accuracy and accessibility. Participants cited difficulties in filtering false positives and retrieving extractable raw data. While tools like ReadCube SLR allow for iterative, user-driven refinement, most platforms still lack persistent learning capabilities.
The absence of complete datasets in publications, often withheld due to competitive concerns, remains a significant obstacle. Attendees also raised concerns about AI-generated content contaminating future training data and discussed the legal complexities of using copyrighted materials.
As one participant noted:
“AI is generating so much content that it feeds back into itself. New AI systems are training on older AI outputs. You get less and less real content and more and more regurgitated material.”
Knowledge Graphs and the Future of Integration
Knowledge graphs were broadly recognized as essential for integrating and structuring disparate data sources. Although some attendees speculated that LLMs may eventually infer such relationships directly, the consensus was that knowledge graphs remain critical today. Companies like metaphacts are already applying ontologies to semantically index datasets, enabling more accurate, hallucination-free chatbot responses and deeper research analysis.
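As a toy illustration of that pattern, the sketch below uses rdflib to type a dataset and a finding against a made-up mini-ontology, then answers a question with an exact SPARQL query rather than a generative step. All identifiers and the ontology namespace are hypothetical and stand in for the much richer, established ontologies real deployments use.

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical mini-ontology for illustration only.
EX = Namespace("http://example.org/onto/")

g = Graph()
g.add((EX.ds_rnaseq_001, RDF.type, EX.RnaSeqDataset))
g.add((EX.ds_rnaseq_001, EX.supportsFinding, EX.finding_42))
g.add((EX.finding_42, EX.aboutCompound, EX.compound_ABC))
g.add((EX.finding_42, EX.reportedIn, Literal("10.1234/hypothetical-doi")))

# The graph answers this query exactly, with no generation step involved:
q = """
PREFIX ex: <http://example.org/onto/>
SELECT ?dataset ?doi WHERE {
  ?dataset a ex:RnaSeqDataset ;
           ex:supportsFinding ?f .
  ?f ex:aboutCompound ex:compound_ABC ;
     ex:reportedIn ?doi .
}
"""
for row in g.query(q):
    print(row.dataset, row.doi)  # datasets supporting findings about the compound, and where reported
```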
What’s Next: Trust, Metrics, and Metadata
Looking forward, participants advocated for AI outputs to include trust metrics, akin to statistical confidence scores, to assess reliability. Tools that index and surface supplementary materials were seen as essential for discovering usable data.
One participant explained:
“It would be valuable to have a confidence metric alongside rich metadata. If I’m exploring a hypothesis, I want to know not only what supports it, but also the types of data, for example, genetic, transcriptomic, proteomic, that are available. A tool that answers this kind of question and breaks down the response by data type would be incredibly useful. It should also indicate if supplementary data exists, what kind it is, and whether it’s been evaluated.”
Another emphasized:
“A trustworthiness metric would be highly useful. Papers often present conflicting or tentative claims, and it’s not always clear whether those are supported by data or based on assumptions. Ideally, we’d have tools that can assess not only the trustworthiness of a paper, but the reliability of individual statements.”
There was also recognition of the rich, though unvalidated, potential in preprints, particularly content from bioRxiv, which can offer valuable data not yet subjected to peer review.
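To make that wish concrete, the sketch below models what a claim-level trust record with a data-type breakdown might look like. Every class, field, and value here is hypothetical, intended only to illustrate the kind of metadata participants asked for, not any existing product schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class DataType(Enum):
    GENETIC = "genetic"
    TRANSCRIPTOMIC = "transcriptomic"
    PROTEOMIC = "proteomic"

@dataclass
class ClaimEvidence:
    """Illustrative record for one claim extracted from a paper."""
    statement: str
    confidence: float                      # 0.0-1.0, e.g. from an extraction model
    supporting_data: list[DataType] = field(default_factory=list)
    supplementary_available: bool = False  # raw data deposited and reachable?
    peer_reviewed: bool = True             # False for preprints such as bioRxiv content

claim = ClaimEvidence(
    statement="Compound X downregulates gene Y in hepatocytes",
    confidence=0.62,
    supporting_data=[DataType.TRANSCRIPTOMIC],
    supplementary_available=False,
    peer_reviewed=False,
)
print(claim)
```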
Conclusion
The roundtable reflected both enthusiasm and realism about AI’s role in drug discovery. Real progress depends on high-quality data, strong governance, and tools designed with scientific nuance in mind. Trust, transparency, and reproducibility emerged as core pillars for building AI systems that can support meaningful research outcomes.
Digital Science: Enabling Trustworthy, Scalable AI in Drug Discovery
At Digital Science, our portfolio directly addresses the key challenges highlighted in this discussion.
ReadCube SLR offers auditable, feedback-driven literature review workflows that allow researchers to iteratively refine systematic searches.
Dimensions and metaphacts offer the Dimensions Knowledge Graph, a comprehensive, interlinked resource connecting internal data with public datasets (spanning publications, grants, clinical trials, and more) and ontologies, making it ideal for powering structured, trustworthy AI models that support projects across the pharma value chain.
Altmetric identifies early signals of research attention and emerging trends, which can enhance model relevance and guide research prioritization.
For organizations pursuing centralized AI strategies, our products offer interoperable APIs and metadata-rich environments that integrate seamlessly with custom internal frameworks or LLM-driven systems. By embedding transparency, reproducibility, and structured insight into every tool, Digital Science helps computational biology teams build AI solutions they can trust.