With a number of deadlines for open access (OA) coming up in 2025 and beyond, the race is on for many publishers to make the transition to OA. Simon Linacre asks, are these targets achievable?
Traditionally, September and October have been among the busiest – and most interesting – times to be in the publishing industry. Back in the day, September would be the deadline for the first of the following year’s issues to be collated by editors, while in more recent times big events like the ALPSP Conference, the Frankfurt Book Fair and Open Access Week have set the agenda for the remainder of the year and beyond.
In 2024, this period has perhaps more intrigue than most given a number of deadlines and political events occurring in the next 12 months or so, many of them revolving around open access (OA) and its further adoption. But will things pan out the way people anticipate, and are there solutions that can be used to help forge a path through so many uncertainties about the future?
Conference season
At the recent ALPSP Conference in Manchester in September, there was a good deal of discussion about how open access had developed this year, and its potential progress in 2025 and beyond. Perhaps unsurprisingly at a conference full of publishers, the mood was a little downbeat when it came to the theme of OA, but not for the reasons one might think. Reading between the lines, there was a frustration at the shifting sands many felt they had to constantly navigate, in the shape of changing or newly introduced policies, and a sense that innovation was being stymied as a result.
For example, the tone for OA seemed to have been set by the JISC report on transformative agreements (TAs), published in the UK earlier in 2024. This made for somber reading, with the headline prediction that, while the UK’s transition to OA was faster than that of most countries, based on the journal flipping rates observed between 2018 and 2022 it would take at least 70 years for the big five publishers to flip their TA titles to OA.
With this in mind, the fact that there were deadlines for Plan S set for 2025 around transition that seemed unlikely to be met, and with the OSTP memo in the US mired in committees and a potential change on the cards in the White House, the belief among many publishers was that the move to OA was not happening at the pace or in the direction that many thought it would.
Geopolitical calculations
In addition to what is happening in the UK, Europe and the US, events further afield are also causing publishers to take stock of their medium-to-long-term strategies. Publication by authors based in Russia has declined sharply since the invasion of Ukraine in early 2022, and collaboration between US authors and those based in China has also decreased, possibly due to policy changes by the Chinese government favoring publication in China-based journals, but also potentially due to fears about research security issues in the US and in other countries.
China’s move to OA is also happening at a much slower pace than in many other countries, which is significant given that China accounts for such a high percentage of published articles, having passed the US a few years ago as the world’s most prolific publisher of research articles. As a result, despite the increase in the number of TAs being agreed with universities, publishers are still seeing a high degree of uncertainty in the transition to OA.
Forward motion
This uncertainty will be in the back of publishers’ minds when celebrating OA Week this year, coming as it does every year on the back of major conferences such as ALPSP and Frankfurt, and in the midst of fine-tuning budgets for the following year. At Digital Science, we understand this predicament given how closely we work with publishers as customers, and also because many of us have worked in the publishing industry ourselves. As such, we have been analyzing how Digital Science solutions can help publishers steer a path forward on OA and transformative agreements, and have created this use case for Dimensions in support of our community.
This resource has been designed to reflect the period of change that the publishing industry is undergoing, supporting the need for publishers to create, evaluate and negotiate TAs by delivering a strong range of historical and predictive data through Dimensions. Using the Dimensions database – which now holds data on almost 150m publications as well as details on funding, grants and patents – publishers can easily find and analyze data surrounding authorship across categories such as country, geography, institution and funder. Understanding a given discipline’s current or future state of play can complement publishers’ own data and inform their strategies accordingly.
Solid state
The theme of this year’s OA Week – ‘Community over Commercialization’ – is a deliberately provocative one, and should engender a good deal of debate during the week and beyond. It should also broaden the conversation to adjacent areas such as open research and open science, as here we have policy and geopolitics making waves for everyone involved in the research ecosystem.
The origin of some of these ripples can be seen in two upcoming reports from Digital Science. At the end of October, a new report on Research Transformation includes substantial input from those in academia on how OA is affecting their work, while November sees the ninth annual State of Open Data report, tracking how researchers see open data issues developing as part of their work. Without giving too much away, both of these reports call for greater awareness of – and support in using – the myriad fast-developing technologies that are starting to impact academics and their institutions. As such, the community of interest that supports OA Week every year needs to work together in the ecosystem they all inhabit if those OA deadlines are to be met.
Simon Linacre, Head of Content, Brand & Press | Digital Science
Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is an ALPSP tutor and has also served as a COPE Trustee.
People with lived experience of a condition bring unique and valuable insights when planning research into that condition. Using data from Dimensions, Emily Alagha examines the evolution of autistic people’s involvement in autism research over the past two decades.
Author’s note about identity-first language
In this post, I am using identity-first language (e.g., ‘autistic person’) to honor the preference of many in the autism community who embrace their identity as an integral part of who they are. This approach reflects the values of empowerment and self-identification.
The Rise of Participatory Research
There’s a growing recognition in the research community that individuals with lived experience of a condition or phenomenon can offer unique and valuable insights to the design of scientific studies. This collaborative approach is often referred to as participatory research and actively involves individuals with lived experience in all stages of the research process. Dimensions data (visualized below) reveals a steady increase in research articles using terms related to participatory research, suggesting a growing embrace of this approach within the scientific community. This shift reflects a move towards more inclusive research practices that empower individuals and communities to actively participate in knowledge creation that is directly relevant to the needs and priorities of those it aims to serve.
This post examines recent trends in a specific subset of participatory research that highlights lived experience contributions, as identified through publication authorship and acknowledgments. Focusing on autism research, I will delve into this trend by leveraging Dimensions data to analyze autistic authorship and acknowledged collaborative support. I’ll also compare the trajectory of this movement to similar trends in mental health and chronic illness research. Finally, I’ll discuss the implications of these findings for research impact and visibility and advocate for greater inclusion of those with lived experience in shaping future studies.
Characterizing Autistic Contributor Representation in Autism Research Articles
Methodology
Individual contributions to research studies are most often represented by the author and acknowledgements sections of publications. To investigate how autistic contributions are characterized in the literature, I leveraged the capabilities of the Dimensions database to search within the raw affiliation and acknowledgements fields of research publications. I used a combination of search strategies to focus on publications related to autism research and specifically targeted publications that either:
Included autistic or neurodiverse authors in the raw affiliations section OR
Acknowledged autistic people, patient networks, or advisory groups in their acknowledgments section AND
Mentioned autism-related keywords in their full text
I examined author affiliations and acknowledgments to identify the most common language used to represent contributions from autistic people. I also explored bibliometric indicators such as citation counts, Field Citation Ratio (FCR), and Altmetric Attention Scores to assess the impact and reach of autism research with autistic contributors compared to the broader field of autism studies. Finally, I applied the same approaches to explore how lived experience contributions are characterized in other fields to identify avenues for potential future growth of autistic representation in research.
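The search strategy described above is, in essence, a boolean filter over publication records. The sketch below illustrates that logic in Python. Note that the field names and keyword lists here are illustrative assumptions, not the exact Dimensions schema or the author's actual query terms.

```python
# Illustrative terms only – not the study's actual search strings.
AFFILIATION_TERMS = ["autistic", "neurodiverse", "neurodivergent"]
ACK_TERMS = ["autistic", "patient network", "advisory group"]
AUTISM_KEYWORDS = ["autism", "autistic", "asd"]

def matches_any(text: str, terms: list[str]) -> bool:
    """Case-insensitive substring match against a list of terms."""
    text = text.lower()
    return any(term in text for term in terms)

def is_autistic_contributor_paper(pub: dict) -> bool:
    """(autistic author affiliation OR autistic acknowledgment) AND autism keywords in full text."""
    contributor_signal = (
        matches_any(pub.get("raw_affiliations", ""), AFFILIATION_TERMS)
        or matches_any(pub.get("acknowledgements", ""), ACK_TERMS)
    )
    topical_match = matches_any(pub.get("full_text", ""), AUTISM_KEYWORDS)
    return contributor_signal and topical_match
```

In practice this kind of filter is expressed directly as a query against the Dimensions raw affiliation and acknowledgments fields, rather than applied record-by-record in code.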
The Rise of Autistic Authorship
To understand how autistic authors represent themselves, I conducted a qualitative review of author affiliations in participatory autism research to identify common phrases and terms. These range from explicit identifiers like “Autistic Researcher” or “Independent Autistic Scholar,” to affiliations with advocacy organizations such as the Autistic Self Advocacy Network, and roles emphasizing lived experience like “Expert by Experience” or “Lived Experience Professional.” While the number of publications authored by self-identified autistic individuals is currently limited (231), these publications offer valuable insights into the unique perspectives and contributions of autistic researchers.
Image 1: Author collaboration network for lived experience autism researchers and their co-authors.
This network visualization represents a preliminary attempt to identify leading neurodivergent researchers engaged in autism and neurodiversity scholarship. While the search terms were designed to highlight self-identified neurodivergent researchers and allies, it’s important to note that this method may not be fully accurate, and not all individuals included may identify as neurodivergent. The visualization highlights key figures like Sonia Johnson, Fiona Ng, and Dora Madeline Raymaker, who are known for their work in this area and could provide valuable leadership on best practices for autistic inclusion in research.
Highlighting specific examples of impactful, autistic-led research with high citation counts and Altmetric Attention Scores (a measure of online attention and engagement) demonstrates the influence of these authors on the broader research conversation.
Top Cited Research Article among Autistic Lived Experience Authors:
Nicolaidis, C., Raymaker, D., McDonald, K., Dern, S., Boisclair, W. C., Ashkenazy, E. & Baggs, A. (2013). Comparison of Healthcare Experiences in Autistic and Non-Autistic Adults: A Cross-Sectional Online Survey Facilitated by an Academic-Community Partnership. Journal of General Internal Medicine, 28(6), 761–769. https://doi.org/10.1007/s11606-012-2262-7
This study compares the healthcare experiences of autistic and non-autistic adults through an online survey, uncovering significant disparities for autistic people. Autistic collaboration involves authors from the Autistic Self Advocacy Network and the Academic Autistic Spectrum Partnership in Research and Education (AASPIRE). The high citation count of this study underscores its impact on shaping subsequent research around healthcare access and equity for autistic people.
Top Altmetric Score and Field Citation Ratio among Autistic Lived Experience Authors:
Pearson, A. & Rose, K. (2021). A Conceptual Analysis of Autistic Masking: Understanding the Narrative of Stigma and the Illusion of Choice. Autism in Adulthood, 3(1), 52–60. https://doi.org/10.1089/aut.2020.0043
This conceptual analysis investigates autistic masking as a response to stigma. Collaborators include Kieran Rose of The Autistic Advocate and Infinite Autism. The high Altmetric score and Field Citation Ratio (a measure of a study’s influence within its specific field) highlight the broad reach and impact of this work on online platforms and in further research.
These examples illustrate the power of autistic-led research to generate new insights and draw attention to often overlooked topics. Having examined the influence of key autistic researchers, it’s essential to explore the broader scope of autistic involvement in research, beyond authorship.
Broadening the Scope: How do Papers Characterize Autistic Contributions Beyond Authorship?
While authorship provides a clear indicator of direct contribution, it doesn’t capture the full spectrum of autistic involvement in research. I expanded the analysis to include the acknowledgments section of publications to gain additional insight into how autistic people contribute to and shape research. Acknowledgments often reveal a wider range of roles and contributions, such as participation in advisory boards or community networks.
Expanding the analysis to include publications that acknowledge autistic or neurodiverse people, patient networks, or advisory groups in the acknowledgments section significantly broadened the dataset to 703 publications (as of September 25, 2024). Throughout this post, I use the term ‘autistic-contributor research’ to describe these studies where autistic individuals are explicitly acknowledged or listed as co-authors. This term represents a narrower subset of participatory autism research, specifically focusing on visible contributions through acknowledgments or authorship, rather than all potential forms of participatory involvement.
As the chart below illustrates, this expanded search demonstrates that autistic contributions extend beyond authorship and can be recognized in several different capacities.
Image 2: Counts of select collaboration phrases in the acknowledgements and author affiliation fields of autistic-contributor research literature.
Patient Representation: The term “patient” emerges as a frequent descriptor in research acknowledgments. It can encompass diverse roles like “patient partner,” or refer to administrative functions related to patient involvement. However, the meaning of “patient” in the author affiliation and acknowledgments sections can be ambiguous, sometimes signifying autistic individuals themselves, other times denoting individuals with different conditions within the study.
While widely used, “patient” has limitations in autism research. It centers on pathology and potentially overlooks the broader spectrum of autistic experiences beyond the clinical realm. Not all autistic people identify with this label, as it may imply illness or deficit. While “patient” may suggest autistic involvement in healthcare research, it also highlights the need for more precise language that recognizes the multifaceted roles of autistic people beyond the traditional patient-provider dynamic.
Independent Researchers and Advocates: The presence of terms like “advocate,” “self-advocate,” “lived experience,” and “independent researcher” highlights several ways autistic people contribute to research both as individuals and as part of broader groups of expertise. The use of “independent researcher” in affiliations suggests a recognition of the contributions made by autistic researchers working outside traditional academic institutions.
Group Advisory Roles: The prevalence of terms like “advisory board,” “advisory panel,” “community network”, and “working group” underscores the importance of structured mechanisms to ensure that autistic perspectives and lived experiences directly inform research design and implementation. These groups may not always be composed of autistic people, but they often have close ties to communities with lived experience and aim to represent those perspectives.
Autistic-contributor studies in this dataset are significantly more likely to employ qualitative or mixed-methods approaches when compared to all autism research. Qualitative methods, such as interviews and focus groups, allow autistic people to express their unique perspectives and insights in their own words. Some examples of how studies may integrate autistic voices include co-creating research questions with autistic people, adapting methods to be more accessible, including autistic researchers on the team, and involving autistic participants in data analysis and communication of findings. These collaborative approaches can help studies be more directly relevant to the autism community.
Who is leading in these types of autistic-contributor collaborations?
It can be useful to explore leading organizations in this dataset to understand where and how investments in autistic-contributor collaborations are happening. Affiliation, funding, and geographic data in Dimensions highlight the United Kingdom’s prominent role in fostering research collaborations involving autistic people. The National Institute for Health and Care Research (NIHR) and the Department of Health and Social Care (DHSC) are the leading funders, while University College London and King’s College London are at the forefront of institutions publishing participatory approaches in this field. These data suggest a strong commitment within the UK to promoting inclusive research practices. However, it’s important to acknowledge that this analysis primarily reflects English-language publications, and there may be additional contributions in other languages that use different terminology to acknowledge autistic participation.
In a concept analysis of autistic-contributor research literature, I found a clear emphasis on lived experience, health services, and support systems. Instead of primarily asking “What causes autism?” or “How can we diagnose autism?”, this research asks “How can we improve the lives of autistic people?”. This emphasis is reflected in the prominence of terms like “improve access” and “health system” in the autistic-contributor research network visualization above.
This focus contrasts with broader clinical autism research, which emphasizes cognitive and behavioral aspects of autism. In the clinical autism concept network above, the strongest themes are diagnosis, social skills, and behavior.
The distinction is further reinforced by how research is categorized. Clinical autism research falls under Field of Research (FoR) classifications of Psychology and Biomedical Sciences, while autistic-led research leans towards Health Sciences and Health Services. This highlights a fundamental difference in priorities.
It’s also worth considering the potential impact of age on these research approaches. Autistic-led research may naturally involve more adults, given the complexities of participating in research design. This could lead to a greater focus on issues relevant to autistic adults, an area often overlooked in traditional research.
Though still in its early stages, autistic-contributor research shows promising signs of greater impact in both academic citations and public reach.
Citation, Field Citation Ratio (FCR), & Citation Rate: The average Field Citation Ratio (FCR) for autistic-contributor research is 5.30, compared to 2.31 for all autism research. The citation rate for autistic-contributor autism research (76.65%) is higher than the overall citation rate for autism research (65.57%). Additionally, autistic-contributor research demonstrates a comparable average number of citations per publication (22.76) to the broader field of autism research (23.28). Taken together, these figures indicate that autistic-contributor research is cited more consistently, and more heavily relative to field norms, than autism research as a whole.
Altmetric Attention Score & Societal Impact: Autistic-contributor research in autism exhibits an average Altmetric Attention Score of 8.6, notably higher than the average of 4 for all autism research. This indicator shows that autistic-contributor autism research sparks more conversations outside of academia than broad autism research.
Translation into Policy, Practice & Innovation: Autistic-contributor research in autism has a higher rate of citation in policy documents (4.7%) compared to the broader field of autism research (2.0%). Its rate of citation in clinical trials is slightly lower (0.7% vs. 1.2%). However, when it comes to citations in patents, autistic-contributor research lags behind, with only 0.4% of publications cited compared to 2.2% in the broader field. These figures suggest that while involving autistic people in research may lead to findings that are more readily translatable into policies and clinical practices, there’s room for growth in terms of fostering innovation and generating patentable discoveries.
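The indicators compared above can all be derived from per-publication records. The following sketch shows how such a summary might be computed; the field names are hypothetical, not the actual Dimensions export schema, and the sample values in the usage test are invented.

```python
from statistics import mean

def impact_summary(pubs: list[dict]) -> dict:
    """Summarize citation-based indicators for a list of publication records."""
    return {
        # Average Field Citation Ratio across the set.
        "mean_fcr": round(mean(p["fcr"] for p in pubs), 2),
        # Citation rate: share of publications cited at least once.
        "citation_rate_pct": round(100 * sum(p["citations"] > 0 for p in pubs) / len(pubs), 2),
        # Average raw citations per publication.
        "mean_citations": round(mean(p["citations"] for p in pubs), 2),
        # Share of publications cited in at least one policy document.
        "policy_rate_pct": round(100 * sum(p["policy_citations"] > 0 for p in pubs) / len(pubs), 2),
    }
```

Running this once over the autistic-contributor subset and once over the full autism corpus yields the paired figures quoted above.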
Autistic-contributor research in autism represents a small subset of the overall autism literature, but its higher FCR scores and Altmetric Attention Score, comparable citation averages, and stronger translation into policy collectively show the value and influence of research that actively involves autistic people.
Learning from Other Fields: Comparison to Chronic Illness and Mental Health Research Literature with Lived Experience Contributions
Both chronic illness and mental health research fields have a strong track record of including people with lived experience as active contributors. We can gain valuable insights to enhance autistic representation in research by analyzing language used to acknowledge lived experience contributions in these fields. If we were to standardize language used to describe these collaborations, would it be easier to measure these types of collaborations? What terms would be best to use across fields?
“Patient” and “patient advocates” are among the most widely used terms across both mental health and chronic illness participatory research, but they may present challenges in the context of autism research, where some participants do not want to pathologize autism. An emphasis on “lived experience” as an authorship and acknowledgment phrase is also common across all three fields, and may be a better approach to recognizing contributions in autism research. Another structure sometimes used in the author affiliation fields is “with [condition]”, such as “researcher with chronic illness” or “advisor with bipolar disorder”. This structure is difficult to standardize across research areas and may make it harder to discover experts with relevant lived experience.
Additionally, there is an emphasis on group collaborators across all three fields. The prevalence of working groups and advisory panels demonstrates the effectiveness of these structures in facilitating meaningful participation and ensuring that diverse perspectives are heard.
Image 5: Counts of select collaboration phrases in the acknowledgements and author affiliation fields of participatory autism research literature.
Image 6: Counts of select collaboration phrases in the acknowledgements and author affiliation fields of lived experience-contributor chronic illness and mental health research literature.
Implications and Recommendations
Despite the promising rise in participatory autism research, it still constitutes a small fraction of the overall autism literature. Much of the research remains rooted in clinical or mechanistic approaches and often overlooks the contributions of those with lived experience. To address this gap, funders, researchers, and institutions must prioritize participatory research approaches that actively incorporate autistic perspectives at every stage of the research process.
Recommendations:
Funders and Institutions: Prioritize funding and support for participatory research initiatives that actively involve autistic people in all stages of the research process.
Researchers: Embrace collaborative approaches and methodologies, establish meaningful partnerships with autistic and neurodivergent communities, and ensure that research designs and methodologies are inclusive and accessible.
Publishers: Work with the research community to introduce metadata fields that standardize how participatory collaborations are described. Consistent language can improve the discoverability of lived experience collaborators.
Autistic Individuals: Seek out opportunities to participate in research, share your expertise and insights, and advocate for greater representation and inclusion within the research community.
By actively involving autistic people in the research process, researchers in the field can improve the relevance of their work and address the real-world challenges and needs of the community. This evidence can inform policy decisions and advocacy efforts that lead to more equitable and supportive systems for autistic people and foster a deeper understanding of autism.
Special thanks to Holly Wolcott, Ph.D., Senior Vice President of Research Analytics at Digital Science, for her insightful feedback on this blog post.
Emily Alagha, Senior Director of Research Analytics & Support | Digital Science
Emily Alagha is a Senior Director of Research Analytics & Support at Digital Science, where she leverages AI-powered platforms like Dimensions to support data-driven strategies to optimize research funding and enhance research management practices. With a background in medical librarianship, she is passionate about health literacy and ensuring research is accessible to all. She is also a neurodivergent self-advocate committed to amplifying autistic voices and increasing autistic representation in research.
Authors either have a conflict of interest or not, right? Wrong. Research from Digital Science has uncovered a tangled web of missing statements, errors, and subterfuge, which highlights the need for a more careful appraisal of published research.
At this year’s World Conference on Research Integrity, a team of researchers from Digital Science led by Pritha Sarkar presented a poster with findings from their deep dive into conflict of interest (COI) statements. The poster, entitled Conflict of Interest: A data-driven approach to categorisation of COI statements, began with the goal of creating a binary model that determines whether a COI statement is present in an article or not.
However, all was not as it seemed. While some articles had no COI statement and others did, the statements that were present covered a number of different areas, which led the team to think COIs might represent a spectrum rather than a binary.
Gold standard
Conflict of interest is a crucial aspect of academic integrity. Properly declaring a COI statement is essential for other researchers to assess any potential bias in scholarly articles. However, even when COI statements are present, those same researchers often find them inadequate or misleading in some way.
The Digital Science team – all working on research integrity with Dimensions – soon realized the data could be leveraged further to better explore the richness inherent in the nuanced COI statements. After further research and analysis, it became clear that COI statements could be categorized into six distinct types:
None Declared
Membership or Employment
Funds Received
Shareholder, Stakeholder or Ownership
Personal Relationship
Donation
This analysis involved manually annotating hundreds of COI statements, supported by Natural Language Processing (NLP) tools. The aim was to create a gold standard that could be used to categorize all other COI statements. However, despite the team’s diligence, a significant challenge persisted in the shape of ‘data skewness’ – an imbalance in the distribution of data within a dataset that can distort data processing and analytics.
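To make the skewness problem concrete, consider a toy distribution over the six categories listed above. The counts below are invented purely for illustration (they are not the team's actual figures); the point is that when one class dwarfs the rest, a classifier trained naively on such data struggles with the rare categories.

```python
from collections import Counter

# Hypothetical label distribution across the six COI categories.
labels = (
    ["None Declared"] * 900
    + ["Funds Received"] * 60
    + ["Membership or Employment"] * 25
    + ["Shareholder, Stakeholder or Ownership"] * 10
    + ["Personal Relationship"] * 3
    + ["Donation"] * 2
)

counts = Counter(labels)
# Imbalance ratio: size of the most common class vs. the rarest class.
imbalance = counts.most_common(1)[0][1] / min(counts.values())
print(f"Imbalance ratio (majority/minority): {imbalance:.0f}x")  # 450x
```

Common mitigations for this kind of skew include oversampling minority classes, class-weighted loss functions, or collecting more examples of the rare categories.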
Fatal flaw
One irresistible conclusion from the data skewness was a simple one: that authors weren’t truthfully reporting their conflicts of interest. But could this really be true?
The gold standard came from manual, expert annotation of COI statements, which was then used to develop an auto-annotation process. However, despite the algorithm’s ability to auto-annotate 33,812 papers in just 15 minutes, the skewness identified at the outset persisted, lending weight to the theory that authors were under-reporting their conflicts (see Figure 1 of the COI Poster).
To test this hypothesis, the team analyzed the Retraction Watch database, where the troubling trend – including the discrepancy between reported COI category and retraction reason – became even more apparent (see Figure 2 of the COI Poster).
Moreover, as the team continued the investigation, they found 24,289 papers overlapping between Dimensions GBQ and Retraction Watch; among those, 393 were retracted due to conflict of interest. Of those 393 papers, 134 had a COI statement – and yet 119 of them declared there was no conflict to declare.
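The nested counts reported in this analysis can be sanity-checked with a few lines of arithmetic using the figures quoted above:

```python
# Figures as reported from the Dimensions GBQ / Retraction Watch overlap.
overlapping = 24289          # papers in both Dimensions GBQ and Retraction Watch
retracted_for_coi = 393      # of those, retracted specifically due to conflict of interest
with_coi_statement = 134     # of those, papers that carried a COI statement
declared_no_conflict = 119   # of those, statements declaring no conflict existed

# Share of COI-retracted papers whose own statement claimed no conflict.
share = declared_no_conflict / with_coi_statement
print(f"{share:.0%} of COI statements in COI-retracted papers declared no conflict")
```

In other words, of the COI-retracted papers that bothered to include a statement at all, the overwhelming majority asserted the opposite of what the retraction record shows.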
Conclusion
Underreporting and misreporting conflict of interest statements or types can undermine the integrity of scholarly work. Other research integrity issues around paper mills, plagiarism and predatory journals have already damaged the public’s trust in published research, so further problems with COIs can only worsen the situation. Given these findings, it is clear that all stakeholders in the research publication process must adopt standard practices for reporting critical trust markers such as COI, to uphold transparency and honesty in scholarly endeavors.
To finish on a positive note, this research poster was awarded second place at the 2024 World Conference on Research Integrity, showing that the team’s research has already attracted considerable attention among those who seek to safeguard research integrity and trust in science.
Simon Linacre, Head of Content, Brand & Press | Digital Science
The use of AI technologies has always been susceptible to charges of potential bias, due to the skewed datasets that large language models are trained on. But surely firms are making sure those biases have been ironed out, right? Sadly, when it comes to AI and recruitment, not all applications of the technology are the same, so firms need to tread carefully. In other words – if you don’t understand it, don’t use it.
Since the launch of ChatGPT at the end of 2022, it has been difficult to read a newspaper, blog or magazine without some reference to the strange magic of AI. It has enthused and concerned people in equal measure, with recruiters being no different. For every gain in being able to understand and work with huge amounts of information, there appear to be negatives around data bias and inappropriate uses.
Scismic is part of the larger company Digital Science, and both have been developing AI-focused solutions for many years. From that experience comes an understanding that responsible development and implementation of AI is crucial not just because it is ‘the right thing to do’, but because it simply ensures better solutions are created for customers – customers who, in turn, can trust Digital Science and Scismic as partners during a period of such rapid change and uncertainty.
AI in focus
The potential benefits of using AI in recruitment are quite clear. By using Generative AI such as ChatGPT, large amounts of data can be scanned and interpreted quickly and easily, potentially saving time and money during screening. In turn, the screening process may also be improved by easily picking up key words and phrases in applications, while communications about the hiring process can be improved by using AI-powered automated tools.
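The keyword screening described above can be sketched in a few lines. This is an illustration only – the candidate texts, keywords and scoring rule below are invented assumptions, not taken from any real screening tool:

```python
# Minimal sketch of keyword-based application screening (illustrative only).
def keyword_score(application_text, keywords):
    """Count how many target keywords appear in an application."""
    text = application_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

applications = {
    "cand_a": "Experienced in CRISPR screening and flow cytometry.",
    "cand_b": "Background in sales and account management.",
}
keywords = ["CRISPR", "flow cytometry", "cell culture"]

# Rank candidates by naive keyword overlap -- fast, but brittle: it rewards
# phrasing that happens to match the job ad, which is where bias can creep in.
ranking = sorted(applications,
                 key=lambda c: keyword_score(applications[c], keywords),
                 reverse=True)
```

Even this toy version shows the trade-off: the scan is quick and consistent, but it only ever sees the words candidates happened to choose.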
But, of course, there is a downside. Using AI too much seems to take the ‘human’ out of Human Resources, and AI itself is only as good as the data it has been trained on. A major issue with AI in recruitment was highlighted by the recent brief issued by the US Equal Employment Opportunity Commission (EEOC), which supported an individual who claimed that one vendor’s AI-based hiring tool discriminated against them and others. The EEOC has recently brought cases against the use of the technology, suggesting that vendors as well as employers can be held responsible for the misuse of AI-based tools.
When should we use AI?
In general, if you don’t understand it, do not use it. Problems arise for vendors and recruiters alike when it comes to adopting AI tools at scale. While huge datasets offer the advantages set out above, they also introduce biases over and above the human biases that employers and employees have been dealing with for years. Indeed, rather than extol the virtues of using AI, it is perhaps more instructive to explain how NOT to use this powerful new technology.
As responsible and ethical developers of AI-based recruitment solutions, colleagues at Scismic were surprised to see a slide like the one below at a recent event. While it was designed to show employers the advantages of AI-based recruitment technology, it actually highlights the dangers of ‘layering’ AI systems on top of one another: the client company loses even more visibility of who the system is selecting and how – increasing the risk of bias, of missing good candidates and, ultimately, of legal challenge.
In this scenario, with so many technologies layered onto each other throughout the workflow, it is almost impossible to understand how the candidate pipeline was developed, where candidates were excluded, and at which points bias has caused further bias in the selection process!
While the list of AI tools used in the process is impressive, what is less impressive from a recruitment perspective is the layer upon layer of potential biases these tools might introduce.
Scismic offers a different approach: AI is used to REMOVE biases in datasets, so that the advantages of automated processes are preserved while mitigating processes are introduced, ensuring a fairer and more ethical recruitment program for employers.
Positive Discrimination?
Scismic’s technology focuses on objective units of qualifications – skills. We use AI to reduce the bias introduced by the terminology used to describe skills. This gives us two ways in which we reduce evaluation bias:
Blinded candidate matching technology that relies on objective units of qualifications – skills
Removing the bias of candidates’ terminology when describing their skill sets.
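As a rough illustration of these two ideas, the sketch below normalizes skill terminology through a synonym map and then matches a blinded candidate to a role purely on skill overlap. The synonym map and skill names are invented for the example – this is not Scismic’s actual data or algorithm:

```python
# Assumed synonym map: varied terminology collapses onto canonical skill names.
SYNONYMS = {
    "ngs": "next-generation sequencing",
    "deep sequencing": "next-generation sequencing",
    "facs": "flow cytometry",
}

def normalize(skills):
    """Map varied terminology onto canonical skill names to reduce wording bias."""
    return {SYNONYMS.get(s.lower(), s.lower()) for s in skills}

def match_score(candidate_skills, role_skills):
    """Blinded match: compares skills only -- no name, age or background fields."""
    cand, role = normalize(candidate_skills), normalize(role_skills)
    return len(cand & role) / len(role) if role else 0.0

# Two candidates describing the same skills in different words score identically.
score = match_score(["NGS", "FACS", "Python"],
                    ["next-generation sequencing", "flow cytometry"])
```

The design point is that both sources of bias named above are addressed structurally: identifying attributes never enter the function, and wording differences are removed before any comparison happens.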
What type of AI is being used?
To help explain how Scismic does this, we can split AI into subjective (or Generative) AI like ChatGPT, and objective AI. Subjective AI is, broadly, a contextual system that makes assumptions on what to provide the user based on the user’s past interactions and its own ability to use context. This system can work well for human interactions (such as ChatBots) which is what it was designed for.
However, when applied to decision-making about people and hiring (already an area fraught with difficulty), subjective and contextual systems can simply reinforce existing bias or generate new bias. For example, if a company integrates a GenAI product into its Applicant Tracking System (ATS) and the system identifies that most of the people in the system share a particular characteristic, it will assume that is what the company wants. Clearly, if the company is actually trying to broaden its hiring pool, this can have a very negative effect – one that can also be challenged in court.
Objective AI works differently: it does not look at the context around the instruction given, but only for the core components it was asked for. This means it does not make assumptions while accumulating the initial core results (data), but can provide further objective details on the dataset. In many ways it is a ‘cleaner’ system, and because it is focused and transparent it is the better choice for removing unintended bias.
AI is a tool and, as with so many jobs that require tools, the question is often: what is the best tool to use? In short, in a hiring process the answer is the tool that produces better results with less bias.
Case by case
To show how well some cases can turn out when using ‘objective AI’ responsibly and astutely, here are three case studies that illustrate how to arrive at some genuinely positive outcomes:
The right AI: With one customer, Scismic was hired to introduce a more diverse pool of talent as the company was 80% white males, and those white males were hiring more white males to join them. After introducing Scismic’s recruitment solution, the percentage of diverse applicants across the first five roles they advertised rose from 48% to 76%
The right approach: One individual who had been unlucky in finding a new role in life sciences for a very long time finally found a job through Scismic. The reason? He was 60 years old. With an AI-based hiring process, his profile may well have been ignored as an outlier due to his age if a firm typically hired younger people. However, by removing this bias he finally overcame ageism – whether it had been AI- or human-induced – and found a fulfilling role with a very grateful employer
The right interview: Another potential hire being helped by Scismic is neurodivergent, and as a result appears to struggle in interviews. An AI-based scan of this person’s track record might see a string of failed interviews and therefore point them to different roles or levels of responsibility. But the lack of success is not necessarily a reflection of ability, and human intervention is much more likely to facilitate positive outcomes than using AI as a shortcut that misdiagnoses the issue.
When not to use AI?
One aspect highlighted in these case studies is that while AI can be important, it is equally important to know when NOT to use it, and to understand that it is not a panacea for all recruitment problems. For instance, it is not appropriate to use AI when you or your team do not understand what the AI intervention is doing to your applicant pipeline and selection process.
Help in understanding when and when not to use AI can be found in a good deal of new research, which shows how AI is perhaps best used as a partner in recruitment rather than something in charge of the whole or even part of the process. This idea – known by some as ‘co-intelligence’ – requires a good deal of work and development on the human side, and key to this is having the right structures in place for AI and people to work in harmony.
For example, market data shows that in the life sciences and medical services, employee turnover is over 20%, and in part this is due to not having some of the right structure and processes in place during recruitment. Using AI in the wrong way can increase bias and lead to hiring the wrong people, thus increasing this churn. However, using AI in a structured and fair way can perhaps start to reverse this trend.
In addition, reducing bias in the recruitment process is not only about whether or not to use AI – sometimes it is about ensuring the human element is optimized. For instance, recent research shows that properly structured interviews can reduce bias in recruitment and lead to much more positive outcomes.
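A minimal sketch of what a structured interview can mean in practice: every interviewer scores every candidate against the same fixed criteria, and no candidate is judged on an ad-hoc subset of questions. The criteria and rating scale here are illustrative assumptions, not a prescribed rubric:

```python
# Assumed fixed rubric: the same criteria apply to every candidate.
CRITERIA = ["technical depth", "collaboration", "communication"]

def structured_score(ratings):
    """Average per-criterion ratings across interviewers; reject incomplete
    scorecards so no candidate is evaluated on an ad-hoc subset of criteria."""
    for r in ratings:
        missing = [c for c in CRITERIA if c not in r]
        if missing:
            raise ValueError(f"unscored criteria: {missing}")
    return {c: sum(r[c] for r in ratings) / len(ratings) for c in CRITERIA}

# Two interviewers, one candidate, identical criteria for both scorecards.
scores = structured_score([
    {"technical depth": 4, "collaboration": 5, "communication": 3},
    {"technical depth": 5, "collaboration": 4, "communication": 4},
])
```

The structure itself is the bias control: because the rubric is fixed up front, differences between candidates come from their answers rather than from which questions each interviewer happened to ask.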
With recruitment comes responsibility
It is clear that AI offers huge opportunities in the recruitment space for employees and employers alike, but this comes with significant caveats. For both recruiters and vendors, the focus in developing new solutions has to be on how they can be produced and implemented responsibly, ethically and fairly. This should be the minimum demand of employers, and is certainly the minimum expectation of employees. The vision of workplaces becoming fairer through the adoption of ethically developed AI solutions is not only a tempting one, it is one that is within everyone’s grasp. But it can only be achieved if the progress of recent decades in implementing fairer HR practices is not lost in the gold rush of chasing AI. As a general rule, recruiters and talent partners should understand these components of the technologies they are using:
What is the nature of the dataset the AI model has learnt from?
Where are the potential biases and how has the vendor mitigated these risks?
How is the model making the decision to exclude a candidate from the pipeline? And do you agree with that premise?
Understanding the steps involved in creating this structure can be instructive – and will be the focus of our next article, ‘Implementing Structured Talent Acquisition Processes to Reduce Bias in your Candidate Evaluation’. In the meantime, you can contact Peter Craig-Cooper at Peter@scismic.com to learn more about our solutions.