Building the Infrastructure for Funders
In the last decade, research and science funding has undergone several key shifts, from the increased importance of open science initiatives to moves toward greater diversity and globalization. Perhaps most notably, the idea of impact is at the heart of many changes in the funding ecosystem. Whether it comes from greater interdisciplinary collaboration or from more partnerships between public research institutions and commercial or industrial applications, the central demand for impact drives much of the decision-making.
While there are certainly arguments to be made for the importance of blue-skies or theoretical research to the overall scientific enterprise, there is perhaps a greater need to focus on the specific real-world impact of research, how it can be measured, and what infrastructure decisions the scholarly publishing community needs to make in order to demonstrate that impact effectively for funders and other stakeholders. Without this shared foundation, we risk a continued disconnect between funder priorities and the capabilities of publishing technology.
In this article, we’ll work toward a shared definition of impact, advocate for the collaboration needed to develop the right metrics to evaluate impact, and explore the infrastructure that exists or needs to be developed to create a holistic understanding of the value of published research.
Changes to the Funding Landscape
The increased emphasis on impact, openness, and collaboration in the research process has led to many changes in funding priorities. Today’s world is more globally connected than ever before, and societal challenges are not contained by geographic or disciplinary borders. As a result of the urgency of those global challenges, funders’ aims have evolved to influence and solve those problems through collaborative, interdisciplinary, and open science funding.
We could look at these trends and changes from a geographic perspective, as the US, the EU, China, India, and others have approached funding differently, but for the purposes of this article we will instead examine some shared themes and drivers of different funder mandates and priority decisions.
Open science
There are many arguments for open science, but from the funding perspective the central one is that publicly funded research should be publicly available. That is, individuals who contribute to funding through taxes should be able to access the resulting work. Open science also carries added impact benefits, such as the potential for wider reach and easier uptake for commercial or industrial applications. Funders are increasingly mandating various levels of OA publishing, embargo periods, and open data deposits, and publishers must figure out how to comply while continuing to build sustainable business models.
Interdisciplinary research
In keeping with the nature of global challenges like climate change, public health, food security, and more, funders have been actively supporting the rise of interdisciplinary research. The US National Science Foundation gives high priority to interdisciplinary, convergence, and transformative research – demonstrating their focus on research that “pushes the frontiers of knowledge.” As they write, “Today’s grand challenges will not be solved by one discipline alone. But the integration of knowledge, methods and expertise from across science and engineering is not simple or automatic.”
While there are exceptions, funding often prioritizes research approaches that create points of collaboration and connection with the aim of solving complex problems. Further down the research lifecycle, however, it can be challenging to publish this research; it’s more complex to find appropriate journals, and it’s more complex for journals to find appropriate peer reviewers. While many strides have been made to expand publishing with an interdisciplinary eye, there is still a way to go to overcome this disconnect between funding and publishing priorities.
Competition vs collaboration
While interdisciplinary research and open science initiatives tend to indicate a rise in collaboration in research practice, there is still a vibrant, and sometimes counterintuitive, culture of competition. For researchers, resources can be scarce at both ends of the research lifecycle: when applying for grants and funding, and later when seeking article acceptance. For researchers at universities, success in grant applications and journal submissions also has a direct bearing on career advancement. So while funders might prioritize collective activity, a pervasive culture of competition persists. Funding support is ultimately a finite resource, and certain disciplines, types of projects, and even geographies may be prioritized. In worst-case scenarios, this competition can lead to research errors and misconduct. Without a shift in motivation from “get published” or “advance my career” to “solve problems” and “advance science,” we risk skewing the ongoing practice of science.
Impact measurements
We’ve seen how priorities, research approaches, and research practices diverge between funding and publishing. These tensions come to a head with impact measurement. Increasingly, grants require various science communication activities to extend the impact of funded research beyond publishing. On top of that, the ability to evaluate the effectiveness of funding mandates, priorities, and demands remains elusive. Science publishing infrastructure needs to evolve to integrate measures beyond citation and usage, and it needs to make it easy for funders to measure their effectiveness. The demand for a return on investment is understandable, but defining and measuring the impact and value of published research is not always simple.
Given these priorities, from the funder perspective, the role of publishing is not always clear. By understanding the perspective and priorities of funders, publishers can demonstrate their impact and importance in the research ecosystem in a way that aligns with those priorities.
Defining Impact
As we’ve briefly outlined, there are complex and at times conflicting priorities in the funding ecosystem. At the root of all these goals, however, is the need to ensure that investments make a difference and are worth the money.
This is where impact comes into play. But what does impact mean, and does it mean the same thing to all stakeholders?
- Funders – For funders, impact is big and broad. Impact means the research they fund is influencing solutions to real-world problems. It means research informs policy, leads to invention, and improves lives.
- Authors – Impact is often required of authors rather than defined by them. From a career advancement perspective, an author’s institution typically defines impact as the frequency with which they publish or present. But authors also need to adhere to the impact requirements of their grants. They can often become stuck in the middle, trying to achieve impact for funders, institutions, and publishers.
- Publishers – Traditionally, impact comes down to scientific impact. Does the research published in a journal inform or spur additional research? The Journal Impact Factor is aptly named from this perspective.
These differing definitions aren’t inherently contradictory. But to create a central impact infrastructure, each aspect of these definitions needs to be woven together. Critically, we also need to be able to measure it.
Measuring Impact
A shared understanding and definition of impact is critical because that will inform how we evaluate and measure the effectiveness of both funding investments and publishing. There are several inherent challenges to measuring “impact.”
- Impact is a lagging measure – The time between awarding grant funding and publishing an article can be long. The time between an article’s publication and that research being picked up in policy documents, adopted by commercial organizations, or used for actual innovation can be even longer. The challenge is deciding when to measure impact, and how funders can evaluate it when success may not be measurable for years.
- Impact requires multiple metrics – Impact is not a single measure but more of an equation: citations, readership, policy citations, public reach, patents, and more. Those article metrics are all relevant to the discussion of impact, but it’s also important to remember that a single grant might produce multiple articles, presentations, book chapters, and more. Tracing those different outputs and consolidating their impact metrics in one place is no easy undertaking.
- Metrics exist in multiple places – While some metrics can be analyzed centrally, many exist on individual publisher sites or with vendors who measure a full body of scholarship but only through a single lens. Some measures may not be evaluated centrally at all, and there are qualitative aspects to impact that require anecdotal analysis as well. Bringing metrics together in a central funding infrastructure requires robust collaboration.
The success of published outputs of funded research is a critical piece of the value puzzle for funders. But there are aspects of impact that exist outside of publication: education, media and news coverage, political and societal influence. Measuring a holistic return on investment for research funding requires reaching out to stakeholders beyond the traditional realm of scholarly publishing.
Impact infrastructure
Creating an environment where funders are a key part of publishing infrastructure, and where their priorities are addressed in the technology we build, will improve our ability to collaborate. This infrastructure can’t be siloed within individual publishers; instead, it needs a level of collaboration and centralization that improves our ability to measure impact.
Developing this infrastructure will first require an understanding of the needs of all parties in the funding relationship: authors, funders, publishers, and beyond. This infrastructure will balance visibility and transparency with comprehensive data so every stakeholder can make informed decisions about grants, awards, and publishing. Several initiatives are already working toward a more integrated impact infrastructure: Clarivate’s recent Grants Index launch, which aims to improve visibility and transparency; Sensus Impact, Silverchair and Oxford University Press’s initiative to create a consolidated dashboard of metrics that evaluates funder impact; and the ongoing efforts of CHORUS to create spaces for shared author, funder, and publisher discussions. The next step for the scholarly publishing community is to test these initiatives, collaborate on their growth, and identify additional gaps that can be filled with infrastructure designed to deepen relationships with funders.
Understanding the funder landscape and the priorities of this evolving stakeholder is critical for the sustainability of the publishing industry. Without understanding their needs and collaborating to develop the right infrastructure to help measure the return on their investment, there is a risk of deepening a divide. Publishers need to demonstrate their continued relevance, and that requires a renewed focus on understanding impact and developing the right infrastructure to measure it.
Silverchair is a Voting Member organization of NISO. We are grateful to them for their sponsorship of our NISO Plus 2024 in Baltimore event.