Letter from the Executive Director
Standards can take many forms. Last week, I was at an SSP educational program in Washington. During the meeting, Craig Van Dyck of CLOCKSS spoke about the development of ORCID (which recently joined NISO as a Voting Member, so "Welcome!"). He described the formation, goals, and underpinnings of ORCID as a standard. In doing so, Craig specifically mentioned the difference between "S"tandards and "s"tandards, noting the importance of the capital letter at the beginning of the word. I have made the same distinction in my own presentations and writings over the years. Obviously, there is a difference between the two approaches to consensus, and I don't want to minimize the value of a formal process: the right to review and comment on drafts, an open process that is balanced among stakeholders, documented development procedures, the right to appeal, input gathered from external perspectives, and a structure for periodic review and maintenance. Having gone through this process, capital-"S" Standards carry stability, weight, and recognition.
But in practical terms, does bypassing the formal "Standards" process make the resulting ORCID system, or NISO recommendations such as KBART, ODI, or our work on privacy principles, any less valuable or impactful? Of course not. These systems and structures are valuable because they are adopted, they are used, and they are built into organizations' workflows. Many formal capital-"S" standards began as de facto standards: within NISO, what is now JATS and the Dublin Core Metadata elements; elsewhere, the PDF and Office file formats that were later formalized.
The process by which these and other de facto standards were developed is one key element that differs from the formal standards development process. Although the National Library of Medicine (NLM) sought input from external stakeholders, it was ultimately NLM that made the decisions about how the NLM DTD, the structure that eventually became JATS, should work and which elements it should include or exclude. NLM made those decisions based on its own needs, systems development, and goals. How, for example, comic book publishers wanted to use or engage with the NLM DTD was not its concern. The National Library of Medicine was happy to allow others to use the structures and adapt them for their own purposes, but did not want, or feel the need, to be involved. Of course, opening up the NLM DTD to the outside community, unburdening the agency from the problems of maintaining the structure, and making JATS less domain specific (as the DTD was) were all motivators for NLM to turn JATS into a formal standard. Similarly, there were benefits to Adobe and Microsoft in opening up the PDF and Office file formats to standardization.
Interestingly, as NISO has engaged in both formal ANSI-accredited standards development and the less formal NISO Recommended Practice consensus process, the latter has taken a stronger role in our community. For example, we've recently received a proposal from the JATS4R community to bring their work into the NISO portfolio as a suite of NISO Recommended Practices. This proposal is currently being considered by the Information Creation & Curation Topic Committee and, if approved, will go out to the voting members for approval as a new project. The JATS4R recommendations provide implementation support and guidance for the JATS standard to support reusability and interoperability. Having JATS4R and the JATS standard both published by NISO should reinforce the two projects and tie them more closely together. It is also a perfect illustration of the value of both the normative standards and the industry guidance that NISO provides. In this particular case, NISO will be providing both direction (the capital "S" Standard) and guidance (the lowercase "s" standard, a recommended practice). The two are tied together, but they serve different roles and different purposes.
In practice, as I said in a tweet at the time, the difference between the two is, the majority of the time, more about the process and the approach than about the output. It is also important to note that regardless of the formality of the process, if the result isn't adopted, the standard isn't as valuable as it could be. And if the output is adopted broadly, the standard's formality isn't necessarily the most critical question.
Sincerely,
Todd Carpenter,
Executive Director, NISO
Media Stories
The World Wide Success That is XML
Liam Quin, July 2018, The W3C Blog
XML (Extensible Markup Language) has established itself as the leading format for representing and exchanging information. It is used in a variety of contexts and is integrated with linked data, documents, relational and non-relational databases, and the Internet of Things, in devices ranging from music players to aircraft. More adaptable in some respects than its predecessor, SGML (Standard Generalized Markup Language), XML has achieved the status of a mature technology with an international reach of adoption. In the words of the author, Liam Quin, the departing W3C XML Activity Lead, “...it’s time to sit back and enjoy the ability to represent information, process it, interchange it, with robustness and efficiency. There’s lots of opportunities to explore in making good, sensible use of XML technologies. XML is everywhere.”
EU Approves Controversial Copyright Directive, Including Internet ‘Link Tax’ and ‘Upload Filter’
James Vincent, The Verge, September 12, 2018
The European Parliament approved a new Copyright Directive, including two controversial elements that had caused the Directive to be rejected earlier in the summer: Article 11 (the “link tax”) and Article 13 (the “upload filter”). As described in the article, the intended effect of those rules is as follows: “Article 11 is intended to give publishers and papers a way to make money when companies like Google link to their stories, allowing them to demand paid licenses. Article 13 requires certain platforms like YouTube and Facebook stop users sharing unlicensed copyrighted material.” Critics doubt the effectiveness of such mandates. The final vote on approval by members of the European Parliament will take place in January of 2019.
Europe’s Plan S for Open Science
A coalition of European nations and funding bodies released “Plan S” during the first week of September. One key statement led off the document, “After 1 January 2020 scientific publications on the results from research funded by public grants provided by national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms.” Ten additional bullet points covered author retention of copyright, the development of high-quality open access journals in fields where none existed, responsibility for publication fees, and monitoring for non-compliance.
The European countries that have committed to this plan include Austria, Finland, France, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Slovenia, Sweden, and the UK. The European Commission, including the European Research Council, has also signed on.
An Explosion of Openness is About to Hit Scientific Publishing
A.B., September 7, 2018, The Economist
In its coverage of Plan S, The Economist says: “This radical initiative requires the scientists they fund to publish their work in open-access journals or freely-accessible websites by 2020. Strikingly, the plan bars scientists from publishing in what is today around 85% of periodicals, including some of the most venerable, such as Nature and Science.”
Making It Easier to Discover Datasets
Natasha Noy, Research Scientist, Google AI, September 5, 2018, Google Blog
Google has established a search tool for datasets hosted on publisher platforms, an author’s website, or in government and academic repositories. “To create Dataset search, we developed guidelines for dataset providers to describe their data in a way that Google (and other search engines) can better understand the content of their pages. These guidelines include salient information about datasets: who created the dataset, when it was published, how the data was collected, what the terms are for using the data, etc. We then collect and link this information, analyze where different versions of the same dataset might be, and find publications that may be describing or discussing the dataset. Our approach is based on an open standard for describing this information (schema.org) and anybody who publishes data can describe their dataset this way. We encourage dataset providers, large and small, to adopt this common standard so that all datasets are part of this robust ecosystem.”
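The schema.org markup the post refers to is typically embedded in a page as a JSON-LD block. As a rough illustration only (this is not Google's own example; the dataset name, creator, dates, and URLs below are all invented), here is a minimal Python sketch of the kind of schema.org/Dataset description a provider might generate:

```python
import json

# A minimal, hypothetical schema.org/Dataset description of the kind
# Google's dataset guidelines ask providers to embed in their pages.
# Every value below is invented for illustration.
dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example Ocean Temperature Readings",          # hypothetical
    "description": "Daily sea-surface temperatures, 2010-2017.",
    "creator": {"@type": "Organization", "name": "Example Research Lab"},
    "datePublished": "2018-01-15",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/sst.csv",   # hypothetical URL
    },
}

# Serialized, this would sit in the page inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(dataset_metadata, indent=2))
```

Because the markup uses an open vocabulary rather than a Google-specific format, the same description is, in principle, readable by any search engine or aggregator that understands schema.org.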
Google DataSet Search: A Reflection on the Implications
Aaron Tay, September 11, 2018, Musings About Librarianship (Blog)
Elsewhere, Aaron Tay, librarian at Singapore Management University, commenting on Google Dataset Search, noted that “Google datasearch feels raw even for a ’beta’ product.” His blog entry offers an extensive overview of the need that Google is attempting to satisfy and of where the search tool might need work.
HathiTrust Research Center Extends Non-Consumptive Research Tools to Copyrighted Materials: Expanding Research through Fair Use
Jessica Rohr, September 20, 2018, HathiTrust Blog
“Since 2011, HTRC has been developing services and tools to allow researchers to employ text and data mining methodologies using the HathiTrust collection. To date, this service has been available only on the portion of the collection that is out of copyright. With the development of a landmark HathiTrust policy and an updated release of HTRC Analytics, HTRC now provides access to the text of the complete 16.7-million-item HathiTrust corpus for non-consumptive research, such as data mining and computational analysis, including items protected by copyright.”
What Altmetrics Can Tell Us About the "Real World" Impacts of Books
Stacey Konkiel and Euan Adie, September 2018, Digital Science White Paper
The authors take a high-level look at altmetrics for books and book chapters, examining patterns in discussion across the 17 data sources that Altmetric tracks. Just as others such as Clarivate Analytics have found, books are referenced more often as a full publication than at the chapter level. In the context of patents and policies for the sciences, citations of monographs take longer to accumulate. Also noted: “though some publishers release metadata and persistent identifiers at the chapter level—especially for edited volumes—this practice is far from common”.
Publons Releases Inaugural Global State of Peer Review (GSPR) Report
Daniel Johnston, Co-Founder, Publons, 2018
Comparing the experiences of researchers in Western nations with those of colleagues in emerging nations, Publons surveyed nearly 12,000 researchers to investigate who within the global scientific community is participating in peer review, how efficient the process is, the quality of the end product, and views of a possible future. Researchers in the USA, the UK, and Japan are responsible for far more of the peer review than colleagues in countries such as China, Brazil, and Turkey. Most important to career success, these researchers noted, were being published in respected journals and securing grant funding; both were rated as more important than being highly cited or than research, teaching, or administrative work. Eighty-four percent thought institutions should give more weight to the value contributed through the peer review process.
Interrogating Institutional Practices in Equity, Diversity, and Inclusion: Lessons and Recommendations from Case Studies in Eight Art Museums
Liam Sweeney and Roger Schonfeld, September 20, 2018, Ithaka S+R
Ithaka S+R, the Mellon Foundation, and the Association of Art Museum Directors (AAMD) have partnered to conduct a series of case studies on the topic of diversity in these cultural heritage organizations. This report synthesizes findings from eight case studies from participating museums in the United States and provides recommendations for effecting institutional change. The case studies discuss a variety of concerns -- “how to increase staff and board diversity, work toward more inclusive exhibitions and public programs, cultivate an inclusive and equitable climate in the museum, and build trust with audiences museums have historically not reached or served well”.
We’re Still Failing to Deliver Open Access and Solve the Serials Crisis; To Succeed, We Need a Digital Transformation of Scholarly Communication Using Internet Era Principles
Toby Green, Head, OECD Publishing, OECD (Preprint)
This article preprint by Toby Green, Head of OECD Publishing at the Organisation for Economic Co-operation and Development, suggests that the more assertive stance of librarians in favor of Open Access may mean that a tipping point is nigh in shifting from a financial model of subscriptions to one of article publication charges. He hesitates, however, because an APC paywall will still affect poorly funded authors. Are preprint repositories a more intelligent approach to ensuring a fully open playing field? Green notes that “Preprint services and repositories are much cheaper to run than journals with the cost per article published being around 100 times cheaper. If, using the internet-era principle of ‘fail-fast’, all articles were first published as preprints and only those that succeeded in attracting the attention of journal editors were invited to be put forward for formal publishing, the average cost of publishing a paper would fall significantly.” He challenges the scholarly community to more fully embrace well-tested Internet principles in flipping the model for delivering scholarship.
New and Proposed Specs and Standards
ISO/TS 19475-1:2018 Document management -- Minimum requirements for the storage of documents -- Part 1: Capture
ISO/TS 19475-2:2018 Document management -- Minimum requirements for the storage of documents -- Part 2: Storage
ISO/TS 19475-3:2018 Document management -- Minimum requirements for the storage of documents -- Part 3: Disposal
These documents specify requirements for maintaining the authenticity, integrity, and readability of documents during storage processes. When electronic documents are managed in a typical office environment, it is necessary to determine the processes required for their management, including identifying the types of documents to be managed and their importance to the organization. Clarifying how electronic documents are managed promotes their usability in both legal and business contexts.
- ISO/TS 19475-1 specifies requirements for the capture of documents into document management systems.
- ISO/TS 19475-2 specifies requirements for the storage of documents. It is aimed at maintaining the authenticity and integrity of the stored documents.
- ISO/TS 19475-3 specifies requirements for the evaluation of stored documents and for implementing decisions to either destroy the documents or transfer them to another storage facility.
Technical Committee: ISO/TC 171/SC 1, Quality, preservation and integrity of information
ISO/IEC 20071-23:2018 Information technology -- User interface component accessibility -- Part 23: Visual presentation of audio information (including captions and subtitles)
This document provides guidance for producers, exhibitors, and distributors on the visual presentation of alternatives to audio information in audiovisual content, such as captions/subtitles. It provides requirements and recommendations intended to support users who are not able to use the audio information, who prefer a visual representation of it, or who prefer both audio and visual presentations.
First Public Working Drafts: JSON-LD 1.1 Syntax, JSON-LD 1.1 Processing Algorithms and API, and JSON-LD 1.1 Framing
The W3C JSON-LD Working Group has published three First Public Working Drafts: the JSON-LD 1.1 Syntax document; the JSON-LD 1.1 Processing Algorithms and API document; and the JSON-LD 1.1 Framing document. These define a JSON-based format to serialize Linked Data, a set of algorithms for programmatic transformations of JSON-LD documents, and an approach that allows developers to query by example. All three documents are derived from Community Group Reports published by the JSON for Linking Data W3C Community Group, whose mission is to update the JSON-LD 1.0 specifications to address specific usability or technical issues based on the community’s experiences, implementer feedback, and requests for new features.
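For readers new to JSON-LD, a small sketch may help make the relationship among the three drafts concrete. The following Python example (standard library only; the person and URL are invented for illustration) shows a compacted JSON-LD document of the kind the Syntax draft describes, with a comment indicating what the expansion algorithm defined in the Processing Algorithms draft would produce:

```python
import json

# A minimal compacted JSON-LD document. The @context maps short JSON
# keys to full IRIs, which is what lets the same document work both as
# ordinary JSON and as Linked Data. Vocabulary terms are schema.org;
# the person and homepage are hypothetical.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "name": "Jane Example",                  # hypothetical person
    "homepage": "https://example.org/jane",  # hypothetical URL
}

# A conforming JSON-LD processor's expansion algorithm would rewrite
# this document using the full IRIs from the context, yielding:
# [{"http://schema.org/name": [{"@value": "Jane Example"}],
#   "http://schema.org/url": [{"@id": "https://example.org/jane"}]}]
print(json.dumps(doc, indent=2))
```

Framing, the subject of the third draft, goes in the other direction: it lets a developer supply a template ("frame") describing the shape of output they want, and reorganizes a JSON-LD document to match it.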
About Newsline
ISSN 1559-2774
- Authorea Is Acquired by Atypon and Joins The Wiley Family
- COUNTER Code of Practice for Research Data Usage Metrics Release 1
- CrossRef reaches 100 Million registered content items
- EBSCO Information Services Releases Serials Price Projection Report 2019
- Gale Cengage Transforms Digital Humanities Research with Launch of New Digital …
- Library of Congress: Congressional Research Service Reports Now Available Online
- MIT Open Access Task Force Releases White Paper
- New Associate Deans Join Libraries (Carnegie Mellon University)
- OCLC Research and Canadian Association of Research Libraries (CARL) partner to …
- Project MUSE offers nearly 300 HTML5 Open Access Books on Redesigned Platform
- Silverchair Launches DermaTrainer
- Software Preservation Best Practices in Fair Use to Help Safeguard Cultural Rec…
- Wiley and Clarivate Analytics Partner to Launch new open peer review workflow