Establishing Content Trust Markers
Letter from the Executive Director, May 2026
When you come across any content these days, it is worthwhile to ask whether it should be trusted. This is not simply a result of the growth of generative AI tools, nor of increased concerns about research integrity; nor is the situation anything new. In some ways, the entire enterprise of scholarly research dissemination has always been rooted in skepticism of people’s research findings. Even if I know and trust you personally, I want you to show that what you’ve done is “real” in the sense of replicable. Over time, this skepticism led to the development of many elements of the research process and scholarly publishing, such as randomized controlled trials, statistical testing, pre-registration, double-blind peer review, protocol description, data and software sharing, and even post-publication retraction. Now, with the new wave of generative tools, we can “no longer believe our own eyes.”
Last year, I wrote in the Scholarly Kitchen that we should collectively return to a model of research publication based on a zero-trust architecture. We should all be more suspicious of the content presented to us, regardless of the author or publication. Was it written by the people named in the byline, and are their credentials valid? Is their work supported by the various elements of the process, such as available data, registered protocols, grant funding acknowledgements, or co-authors at recognizable institutions? This is one element of the Research Nexus that Ed Pentz spoke about in his 2024 Miles Conrad Lecture at the NISO Plus conference.
It is not necessary to take every element as a verifiable source of trust or to check every “trust box.” For example, not every research project produces a publishable data set or receives funding, and research shouldn’t be dismissed because it lacks these things. Conversely, the existence of images, data sets, or funding statements doesn’t guarantee that they are accurate or haven’t been manipulated in some manner. There are malicious actors in the world attempting to fool these systems for some form of gain. Still, we can do as much as possible to make the process and its outputs transparent. It is harder to fabricate all of the elements of a network of trust than it is to create a single fictitious result or paper that is disconnected from its research lifecycle. Trust is a compilation of signals and heuristics; it is not a hard and fast rule.
We certainly need more signals of trust, and these signals should be consistently applied. To address this problem, last week the NISO membership approved the creation of a new project to define a consistent trust marker system for our community. The Identification of Trust Markers for Increasing Credibility and Trust in Research (Trust Markers or ITEM) project will provide consumers of scholarly content with the necessary definitions and a framework for understanding items that would signal trust in published scholarly content.

There are a variety of community initiatives working toward consensus on scientific integrity standards, such as United2Act, the US Office of Research Integrity, the European Code of Conduct for Research Integrity, the STM Research Integrity Hub, the SOPs4RI Toolbox, Japan’s Initiatives for Promotion of Research Integrity, and the Content-update Signaling and Alerting Protocol (CUSAP) project, along with efforts to improve awareness of retracted work, including the NISO Communication of Retractions, Removals, and Expressions of Concern (CREC) Recommended Practice and COPE’s retraction guidelines. These have been integrated into proprietary approaches to the labeling of content, including the Center for Open Science’s Open Science Badges, the Public Knowledge Project’s Publication Facts Label, and the upcoming badging of checks on F1000’s VeriXiv. The lack of consistent definitions or a standardized framework may cause confusion among readers about the meaning of these signals.

The ITEM project seeks to build consistent application and use of these trust markers, aiming to create a new NISO Recommended Practice that will define a limited set of proposed trust markers, an associated set of metadata for those markers, and an extensible approach to the visualization of these markers. The project will also set up pilot implementations of this new framework and assess its impact.
A community call for participation has been circulated, and a new working group will be formed in the coming weeks, with work beginning this summer.
During the February NISO Plus conference in Baltimore, I had the opportunity to sit down with Robert Hilliker, Director of Library Relations (North America) at Springer Nature and a member of NISO’s Information Policy and Analysis Topic Committee, for a short conversation as part of the new Get to Know NISO series. During that conversation, he talked about the process of reviewing and approving this new work item. This is just a small example of the thoughtful process involved in reviewing and improving new projects before they are launched. We are grateful for all the volunteers who submit their ideas, the people who review them, the members who vet them, and the many working group members who advance the work.
As the ground shifts under our feet regarding the veracity of content and the traditional signals of trust we have relied on, we need to develop new approaches to assess and gauge the trustworthiness of the information we receive. The new Trust Markers project is a good place to start.