Those of us in publishing and academia have been remarkably fortunate in how well we have been able to adapt to working from home during the pandemic. So many people—from health care workers and first responders to the people stocking our grocery shelves and many more—have not had that luxury, and so many folks have lost their jobs because their work couldn’t be done at all during lockdown. Most of us in publishing and academia have not only been able to keep our jobs, we’ve been able to keep doing our work, in many cases hardly missing a beat.
Not that many years ago, this wouldn’t have been possible. Remember when “meeting” meant actually meeting—in person? Now we take Zoom for granted. Of course, we miss being with our colleagues and having hallway conversations at conferences—and dining in restaurants with friends in the cities where those conferences were held. But our day-to-day work meetings? Zoom and other technologies have enabled those to continue. As for conferences, many have drawn far greater participation virtually than they ever did in person.
And it wasn’t that long ago that virtually every aspect of publishing involved paper. Authors typed their manuscripts. Reviewers got paper copies. Editors marked up the accepted manuscripts on paper. Typesetters retyped them, with proprietary codes to drive their typesetting machines, which output onto paper. Copies of that paper were sent to the authors and proofreaders. Eventually, paper books and journals went to readers and schools and libraries.
Okay, ancient history. But even once much of this was digital, the workflows were still cumbersome. Most systems were proprietary, meaning that work done at one stage often had to be redone at the next—frequently from a printout of the previous stage’s output.
Today, files flow through the workflow and supply chain, across time zones and geographies, with remarkable ease. What removed the former blockage and friction was interoperability.
The story of the evolution of technology over the past few decades is a story of ever-increasing interoperability. Most of the software and systems we use now are based on internet and web technologies. We have free, open, nonproprietary standards to thank for that.
We don’t need to worry, when we’re on a Zoom call on our laptop, whether the others on the call are using a PC or a Mac, or whether they’re using a desktop or a laptop or a tablet or a phone. We don’t need to care what browser they’re using.
Behind this is a technology that most people have never heard of: WebRTC. That stands for Web Real-Time Communication. It’s a standard that was initially developed in a collaboration between Google, Microsoft, Apple, Mozilla, and Opera—the organizations behind the web browsers we use. Fierce competitors in most respects, they recognized that for things to “just work” in any browser, they had to cooperate.
Now WebRTC has become the province of standards organizations. The World Wide Web Consortium (the W3C) is standardizing the APIs behind WebRTC, and the Internet Engineering Task Force (the IETF) is standardizing the protocols—all based on existing, free, open internet and web technologies under the hood.
These standards are developed by groups of volunteers in organizations like the W3C and NISO. The members of those standards bodies range from giant global companies like Google, Apple, Microsoft, Springer Nature, and OUP to medium-sized and smaller publishers, technology companies, service providers, libraries, and other organizations. The individuals from those organizations, when they roll up their sleeves and get to work on something, are peers.
The processes of these standards organizations are typically very transparent and open. In the W3C, for example, when a decision is made to advance certain work to official standard status—called Recommendation status in the W3C—a Working Group is formed of representatives of member organizations. Those representatives are required to agree that any of their contributions to the work will be free of patent claims. That Working Group has a charter that spells out exactly what it is charged to do and when it expects to deliver the result; that charter is public.
Along the way, the work is done in the open: from Working Drafts (“here’s what we’re working on”) through Candidate Recommendation (“here’s what we plan to publish—please comment and see if it works for you”) and Proposed Recommendation (“thanks for the feedback and the testing, we think we’ve got this right now”) to the final Recommendation, every stage is publicly available, and the decisions are made by consensus.
All these Recommendations go through what’s called “horizontal review”: they must take into account accessibility, internationalization, privacy, and security, and they must align with the overall technical architecture of the web as overseen by the W3C. Every one of the hundreds of standards from the W3C undergoes that horizontal review before it can be finalized.
The resulting standards are free of cost, open, and patent-free. Anybody in the world can use them to create proprietary software, systems, products, and services—and those underlying standards are what enable the interoperability we have come to depend on. Does that mean that all of those things are guaranteed to be interoperable? No; it just enables them to be interoperable. But increasingly, they are, because our ecosystem increasingly demands that. We expect things to just work.
Nothing proves that point better than the speed with which we’ve been able to adjust to the constraints of the pandemic. In so much of what we’ve done, things have just worked. Thanks to standards.