Large Language Models (LLMs) are transforming energy systems by enabling smarter forecasting, optimization, and decision support ⚡. From demand prediction to grid management, LLMs help analyze complex data and improve operational efficiency across energy networks.
Despite the opportunities, challenges such as data quality, model reliability, and energy consumption of AI itself remain significant ⚠️. Ensuring transparency, cybersecurity, and integration with existing infrastructure is critical for real-world adoption.
Looking ahead, the fusion of LLMs and energy systems promises sustainable, resilient solutions. With responsible AI design, interdisciplinary collaboration, and supportive policies, LLMs can accelerate the transition toward intelligent, low-carbon energy ecosystems.
With The State of Open Data survey and report now reaching its 10th anniversary, Mark Hahnel shares the wins of the past decade, the challenges, and the future of data sharing.
Ten years into the open data era, we have achieved something of genuine significance: we have won the argument.
In 2016, “open academic data” still required evangelism. The hardest part was persuading colleagues that data sharing mattered at all. Today, that conversation has fundamentally shifted. The majority of researchers now accept that sharing data is valuable, and this matters enormously, for it means the debate has moved from ideology to operations. The challenge is no longer one of belief. It is one of incentives, infrastructure, and quality.
If the last decade of open data was principally concerned with volume – getting more datasets into the world – then the next decade must concern itself with value: ensuring that shared data is usable, trusted, discoverable, and credited in ways that genuinely alter researcher behaviour. We now have more shared data than at any point in history, yet we remain far short of having sufficient reusable data, for humans and machines alike.
Figure 1: Researchers were asked “How familiar are you with the FAIR data principles (i.e. Findable, Accessible, Interoperable and Reusable)?” Response options included: ‘Never heard of FAIR before’, ‘Previously heard of FAIR but not familiar with the principles’, and ‘Familiar with FAIR’. Yearly sample sizes: 2018 (n = 1,239), 2025 (n = 3,932). Source: The State of Open Data 2025: A Decade of Progress and Challenges.
We prioritised sharing over reuse
The figures tell a story of both progress and stagnation. Awareness of FAIR principles has risen markedly over the past decade. Yet the persistent difficulty has scarcely shifted: researchers continue to report that they receive inadequate credit for sharing data, and that gap is closing at a glacial pace. This matters because credit is the engine of the system. Without it, data sharing becomes either altruism or compliance. And compliance has a predictable failure mode: it optimises for the minimum required output.
Figure 2: Researchers were asked “Do you think researchers currently get sufficient credit for sharing data?”. The bar chart displays the percentage responding ‘Yes’ and ‘They receive too much credit’ versus ‘No, they receive too little credit’ for each survey year. Longitudinal data from six annual State of Open Data surveys (N=28,584). Yearly sample sizes 2020 (n=4,945), 2021 (n=4,491), 2022 (n=6,104), 2023 (n=6,091), 2024 (n=3,721), 2025 (n=3,232). Source: The State of Open Data 2025: A Decade of Progress and Challenges.
This is how we arrive at what some have termed “data dumping grounds” – datasets that technically satisfy a mandate but are poorly described, difficult to interpret, and effectively inert for anyone hoping to reuse them. If we measure success primarily by counting deposits, we ought not be surprised when we receive deposits optimised for counting.
More policy, less enthusiasm
Support for open data remains high, yet support for mandates has declined sharply in certain regions. The most plausible explanation is not that researchers have turned against openness, but that they have experienced the reality of implementation. Mandates without adequate time, funding, training, infrastructure, or recognition do not feel like progress. They feel like yet another unfunded administrative burden, one more task to complete after the actual research is done.
This is not an argument against mandates. It is an argument against mandates alone.
Mandates can create compliance. They rarely create quality by themselves.
Figure 3: Respondents from Australia, Brazil, Canada, China, Germany, India, Italy, Spain, United Kingdom, and the United States were asked: “How supportive would you be of a national mandate for making research data openly available?” Responses shown as percentage of valid responses per year for those who responded ‘Strongly support’. Sample sizes: 2016 (n=945), 2017 (n=1,348), 2018 (n=692), 2019 (n=2,907), 2020 (n=2,430), 2021 (n=2,303), 2022 (n=2,783), 2023 (n=3,139), 2024 (n=2,598), 2025 (n=2,410). Total longitudinal sample N=21,555. Source: The State of Open Data 2025: A Decade of Progress and Challenges.
FAIR is straightforward to understand and remarkably difficult to execute well.
We have spent years treating the gap between awareness and practice as an education problem: teach researchers what FAIR means and they will implement it. But the gap persists because it is fundamentally an engineering and workflow problem. A matter of tools, integration, staffing, and standards. Most researchers lack the time, and often the specialist knowledge, to produce machine-actionable metadata, select appropriate schemas, apply controlled vocabularies correctly, and anticipate downstream interoperability requirements. Nor should they be expected to do so unaided.
The AI opportunity to make “Good” the easy path
It is here that the next decade becomes genuinely interesting. In barely a year, researchers’ adoption of AI tools for data-related work has increased notably, particularly in the two areas where compliance is low and value most significant – data processing and metadata creation. This is not a gradual cultural shift; it is the pattern one observes when tools begin solving real problems within real workflows.
AI will not magically render data FAIR. But it can alter the economics of FAIR:
It can draft metadata so that researchers begin from seventy percent rather than zero.
It can identify missing fields, inconsistent units, broken formats, and common interoperability errors.
It can recommend standards and vocabularies appropriate to discipline and repository requirements.
It can reduce the box-ticking, afterthought approach that currently undermines quality.
This represents the most significant shift available to us: moving FAIR from “best practice” to “path of least resistance.”
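To make "begin from seventy percent rather than zero" concrete, here is a minimal, illustrative sketch of the kind of completeness check an AI-assisted deposit workflow could run before handing a draft record to a human reviewer. The required fields and the thresholds are assumptions for demonstration, loosely modelled on DataCite-style metadata rather than any particular repository's schema.

```python
# Illustrative sketch only: a pre-review completeness check for an AI-drafted
# metadata record. The field names below are assumed, not a real repository schema.
from dataclasses import dataclass, field

REQUIRED_FIELDS = ["title", "creators", "description", "licence", "keywords", "related_identifiers"]

@dataclass
class DraftRecord:
    """A candidate metadata record, e.g. pre-filled by an AI assistant."""
    fields: dict = field(default_factory=dict)

def completeness_report(record: DraftRecord) -> dict:
    """Classify required fields as present, missing, or too thin to trust."""
    report = {"present": [], "missing": [], "needs_review": []}
    for name in REQUIRED_FIELDS:
        value = record.fields.get(name)
        if not value:
            report["missing"].append(name)
        elif name == "description" and isinstance(value, str) and len(value.split()) < 5:
            # A one-line description is usually a sign of a box-ticking deposit.
            report["needs_review"].append(name)
        else:
            report["present"].append(name)
    return report

if __name__ == "__main__":
    draft = DraftRecord(fields={"title": "Substation load measurements, 2024",
                                "creators": ["Doe, J."],
                                "description": "Raw data."})
    print(completeness_report(draft))
    # -> title and creators present; description flagged for review;
    #    licence, keywords, related_identifiers missing.
```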
However, AI is only as useful as the standards it can target. It performs well with clear structure and shared rules; it is considerably less reliable amid ambiguity. The next decade of open data cannot therefore be premised on the notion that “AI will resolve matters.” It must instead be: AI combined with standards, stewardship, and incentives.
If we wish the next ten years to differ meaningfully from the last, we must change what we reward and what we measure.
Volume metrics are seductive precisely because they are straightforward: number of datasets deposited, number of repositories, number of mandates, number of downloads.
Value metrics are more demanding, but they are what actually matter:
Reuse: citations of datasets, documented downstream use, integration into subsequent studies
Quality: completeness of metadata, adherence to community standards, interoperability assessments passed
Time-to-share: how early data becomes available within the research lifecycle, not months after publication
Trust: provenance, versioning, validation, and clear licensing
Equity: whether infrastructure and support are genuinely available across regions and disciplines, not merely within well-resourced institutions
The open data movement will reach maturity when success is defined by impactful reuse, not merely successful deposit. If the first decade was characterised by advocacy, the second must be defined by operationalisation.
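As a purely illustrative example of what operationalising value metrics could look like, the snippet below computes two of them – time-to-share and a crude reuse signal – for a single dataset record. The field names, the 90-day target, and the idea of simply summing citations and documented downstream uses are placeholders, not agreed community standards.

```python
# Illustrative only: operationalising two value metrics for one dataset record.
from datetime import date

def time_to_share_days(data_collected: date, data_published: date) -> int:
    """Days between the end of data collection and public availability."""
    return (data_published - data_collected).days

def reuse_signal(dataset_citations: int, downstream_studies: int) -> int:
    """A crude reuse count: formal dataset citations plus documented downstream uses."""
    return dataset_citations + downstream_studies

record = {
    "data_collected": date(2024, 3, 1),
    "data_published": date(2024, 11, 20),
    "dataset_citations": 4,
    "downstream_studies": 1,
}

tts = time_to_share_days(record["data_collected"], record["data_published"])
print(f"time-to-share: {tts} days (illustrative target: < 90)")   # 264 days
print(f"reuse signal: {reuse_signal(record['dataset_citations'], record['downstream_studies'])}")   # 5
```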
1. Make Credit Real
We already know that recognition constitutes a primary barrier, and it is not resolving itself.
Datasets must be treated as first-class research objects wherever it matters:
Hiring and promotion criteria that explicitly acknowledge data contributions
Funding applications that meaningfully evaluate dataset outputs and stewardship plans
Consistent dataset citation norms, enforced across publishers and platforms
2. Build Workflows that Reduce Friction
When data deposition is integrated directly into researchers’ existing environments – submission systems, electronic laboratory notebooks, analysis platforms – behaviour changes. Reduce steps, reduce context-switching, reduce ambiguity. If sharing well is easy, it happens. If it is difficult, it becomes a mere checkbox.
3. Fund the Missing Layer: Data Stewardship and Training
Discipline-specific training that extends beyond “what FAIR stands for”
Local infrastructure suited to local contexts, because one size emphatically does not fit all
4. Use AI to Make FAIR Easy
The objective should be:
AI-assisted metadata creation with human review
Automated validation checks integrated into repositories and workflows
Clear provenance and versioning
Automated metadata crosswalks and exposure of metadata to machines
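To illustrate the last item above: a metadata crosswalk can be as simple as a mapping table applied automatically at deposit time, with anything unmapped surfaced for human review. The mapping here – Dublin Core-style terms onto a loosely schema.org-flavoured target – is an assumption for demonstration; real crosswalks are defined per repository and per standard.

```python
# Illustrative sketch of an automated metadata crosswalk. The mapping table is an
# assumption for demonstration; real crosswalks are defined per standard and
# repository, and still benefit from human review.
import json

CROSSWALK = {
    "dc:title": "name",
    "dc:creator": "creator",
    "dc:description": "description",
    "dc:rights": "license",
    "dc:subject": "keywords",
}

def crosswalk(record: dict, mapping: dict = CROSSWALK) -> dict:
    """Translate source fields to target fields, surfacing anything unmapped."""
    out, unmapped = {}, []
    for key, value in record.items():
        if key in mapping:
            out[mapping[key]] = value
        else:
            unmapped.append(key)
    if unmapped:
        out["_needs_review"] = unmapped   # flag gaps instead of silently dropping them
    return out

source = {
    "dc:title": "Industrial PV output, site A, 2023",
    "dc:creator": "Doe, J.",
    "dc:rights": "CC-BY-4.0",
    "dc:coverage": "2023-01/2023-12",   # no mapping defined above, so it gets flagged
}
print(json.dumps(crosswalk(source), indent=2))
```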
2035: Open FAIR data is just research done well
By 2035, sharing well-documented, reusable data should cease to be a special achievement. It should be unremarkable. Standard. Expected. Not because researchers have suddenly become more virtuous, but because the system has finally aligned incentives, tooling, and support.
The first decade constructed the moral case for open data. The next decade must construct the practical reality. Academic research stands at yet another technology-driven inflection point. The institutions that embrace machine-first FAIR will see greater impact for their research and their researchers.
Saline mushrooms are emerging as an exciting solution to global food challenges by thriving in high-salinity environments where traditional crops fail ✨. Grown on salty soils and brackish water, these resilient fungi transform underused land into productive food sources while reducing pressure on freshwater and fertile farmland.
Beyond sustainability, saline mushrooms offer impressive nutritional value. Rich in protein, dietary fiber, antioxidants, and essential minerals, they support healthy diets while catering to plant-based and functional food trends. Their unique umami flavor also makes them attractive for culinary innovation and gourmet applications.
As climate change increases soil salinity worldwide, saline mushrooms represent a smart adaptation strategy ♻️. By combining environmental resilience, nutrition, and economic potential, they could play a vital role in future food systems and sustainable agriculture.
⚡ Offline Inverse Reinforcement Learning (IRL) is transforming how industrial facilities manage PV–battery–load systems. By learning decision-making behavior from historical operational data, offline IRL uncovers the hidden objectives behind expert energy management strategies. This data-driven approach enables smarter control without requiring real-time trial-and-error, making it both safe and cost-efficient.
In industrial environments, energy costs and demand charges often conflict with each other. Offline IRL helps balance this trade-off by jointly optimizing both objectives. By understanding when to store, discharge, or draw power from the grid ⚙️☀️, the system minimizes peak demand while maintaining operational efficiency, ensuring reliable energy usage during high-load periods.
The result is an intelligent, adaptive energy management framework tailored for industrial applications. With improved cost savings, reduced carbon footprint ♻️, and scalable deployment across factories and plants, offline IRL empowers industries to move toward smarter, greener, and more resilient energy systems.
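The summary above stays at a high level, so here is a deliberately tiny, hypothetical sketch of the core idea – recovering a reward function from logged dispatch decisions – using a myopic, linear-reward, softmax-choice simplification of offline IRL. It is not the method of any particular study: dynamics, battery degradation, and the actual demand-charge structure are all ignored, and the features, action set, and synthetic "expert" log are invented purely for illustration.

```python
# Toy sketch (not any paper's method): myopic offline IRL with a linear reward
# r(s, a) = w . phi(s, a), fitted so that logged expert actions have high
# probability under a softmax choice model over a small discrete action set.
import numpy as np

ACTIONS = ["charge", "idle", "discharge"]

def phi(state, action):
    """Per-action feature block: bias, electricity price, site load, state of charge."""
    price, load, soc = state
    a = ACTIONS.index(action)
    feats = np.zeros(12)
    feats[4 * a: 4 * a + 4] = [1.0, price, load, soc]
    return feats

def fit_reward_weights(demos, lr=0.5, epochs=1000):
    """Gradient ascent on the log-likelihood of expert actions under softmax(w . phi)."""
    w = np.zeros(12)
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for state, action in demos:
            feats = np.array([phi(state, a) for a in ACTIONS])   # shape (3, 12)
            scores = feats @ w
            probs = np.exp(scores - scores.max())
            probs /= probs.sum()
            grad += phi(state, action) - probs @ feats   # observed minus expected features
        w += lr * grad / len(demos)
    return w

# Synthetic "expert" log: discharge when price and load are high and the battery
# has enough charge, otherwise charge. Invented data for the sake of the example.
rng = np.random.default_rng(0)
demos = []
for _ in range(200):
    price, load, soc = rng.uniform(0, 1, 3)
    action = "discharge" if (price + load > 1.0 and soc > 0.3) else "charge"
    demos.append(((price, load, soc), action))

w = fit_reward_weights(demos)
state = (0.9, 0.8, 0.7)   # expensive hour, high load, battery well charged
scores = {a: float(phi(state, a) @ w) for a in ACTIONS}
print(max(scores, key=scores.get))   # expected to print "discharge"
```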
The rapid growth of IoT devices has pushed computation closer to the network edge through Fog computing, enabling low latency, real-time analytics, and smarter services. However, this distributed architecture also introduces complex challenges in managing security threats, protecting user privacy, and efficiently utilizing limited resources.
This comprehensive survey brings together an integrated view of security, privacy, and resource efficiency in IoT-Fog networks. It discusses authentication mechanisms, encryption techniques, trust management, and privacy-preserving models while highlighting energy-aware scheduling, load balancing, and resource optimization strategies. ⚡
By analyzing existing architectures, threat models, and open research challenges, the survey provides valuable insights for researchers and practitioners. It emphasizes the need for holistic solutions that balance protection and performance, paving the way for scalable, resilient, and sustainable IoT-Fog ecosystems.
Built on Digital Science’s Dimensions – the world’s largest interconnected global research database – Dimensions Author Check provides unmatched transparency into authors’, editors’, and reviewers’ publishing and collaboration histories.
With more than 9,000 journal sites using ScholarOne, publishers globally will benefit from Author Check’s reliable, concise, structured insights, enabling them to quickly spot unusual activities, such as retractions, expressions of concern, or atypical collaboration patterns.
“We’re delighted to add Author Check as a Silverchair Universe partner to the direct benefit of ScholarOne users,” said Hannah Heckner Swain, VP of Strategic Partnerships at Silverchair. “By connecting best-in-class services to our products, we give publishers the flexibility of choosing the services that best fit their needs.”
Dr Leslie McIntosh, VP of Research Integrity & Security at Digital Science, said: “ScholarOne is used by millions of researchers worldwide, which makes it an ideal platform to integrate the Dimensions Author Check API.
“Supporting the integrity of the scholarly record is vital for upholding trust and transparency in research itself, where publishers play a critical role. Because the Author Check API can be used at scale, ScholarOne will now offer maximum benefit to the academic publishing community.”
Silverchair is the leading independent platform partner for scholarly and professional publishers, serving our growing community through flexible technology and unparalleled services. Our teams build, maintain, and innovate platforms across the publishing lifecycle—from idea to impact. Our products facilitate submission, peer review, hosting, dissemination, and impact measurement, enabling researchers and professionals to maximize their contributions to our world. www.silverchair.com
About Dimensions
Part of Digital Science, Dimensions hosts the largest collection of interconnected global research data, re-imagining research discovery with access to grants, publications, clinical trials, patents and policy documents all in one place. Follow Dimensions on Bluesky, X and LinkedIn.
About Digital Science
Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry, and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, Overleaf, ReadCube, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit digital-science.com and follow Digital Science on Bluesky, on X or on LinkedIn.
Quantum tunneling allows particles to pass through energy barriers instead of climbing over them, influencing how organic reactions actually proceed ⚛️. This phenomenon helps explain reaction rates that seem impossible by classical chemistry alone.
In synthetic organic chemistry, tunneling is especially important in proton and hydrogen transfer reactions ✨. It affects kinetic isotope effects, reaction selectivity, and temperature-dependent behavior observed in many catalytic systems.
Understanding quantum tunneling enables chemists to design smarter catalysts and more efficient reaction pathways ⚗️. It opens doors to faster, greener syntheses with improved control over molecular transformations.
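For readers who want the quantitative intuition behind these claims, the textbook one-dimensional WKB estimate of barrier transmission (a standard approximation, not a result from the text above) makes the mass dependence explicit, and with it the origin of the unusually large H/D kinetic isotope effects seen in tunneling-dominated steps:

```latex
% Textbook WKB estimate of tunneling through a barrier V(x) at energy E,
% and its form for a square barrier of height V_0 and width d:
\[
  T \;\approx\; \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\,\bigl[V(x)-E\bigr]}\;\mathrm{d}x\right),
  \qquad
  T_{\text{square}} \;\approx\; \exp\!\left(-\frac{2d}{\hbar}\sqrt{2m\,(V_0 - E)}\right).
\]
% The exponent scales as \sqrt{m}: substituting deuterium (mass roughly doubled)
% for protium multiplies it by about \sqrt{2}, sharply suppressing tunneling and
% producing H/D kinetic isotope effects far larger than the classical expectation,
% especially at low temperature.
```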