Navigating an AI-Focused World

Letter from the Executive Director, April 2024

Machine learning tools, natural language processing, algorithmic analysis of data, and generative text and image tools are at the forefront of everyone’s minds these days. Whether they surface in discussions about manipulated images, fake recordings, copyright lawsuits, or the latest product launch touting some form of “AI integration,” it is clear that the next generation of world-changing technology is at hand. Even so, we shouldn’t expect our jobs to disappear overnight, nor will our lives be transformed to the point where we can sit back and let the robots take care of everything.

The impacts of artificial intelligence on the publishing and information management community, however, could be profound. Many people have been focusing on the most obvious areas, such as copyright infringement, licensing, the ethical and equity implications of AI systems, and content provenance, and litigation is already underway on some of these fronts. Each of them certainly needs significant communal work.

Much of this work, though, will be only partially technical. Most of the topics I’ve just mentioned are not about how machines process data or how models are trained. They are social issues: questions of credit, of ethics, of legal rights and compensation. How those social concerns are encoded in the technology, by whom, and who decides are questions that will take far longer to work through than setting up the systems and optimizing them. The consensus standards process is ideally suited to address these issues, as much for their social aspects as for their technical ones.

As we consider the implications of AI systems, I have begun trying to seek out and highlight some of the more creative applications of the technology. Among these is its potential to improve accessibility. Using image recognition to generate alt text for visually impaired readers is an obvious, if still error-prone, application at this stage. There are other intriguing examples: I was recently pointed to research on generative AI tools being used to support people with interpersonal communication challenges, such as those with autism, or to create images from descriptive text for people with aphantasia, the inability to visualize images in the mind. Technology vendors are also supporting the community with computer vision and AI-based tools that reduce manual accessibility compliance testing, speed coding, and increase the automation of accessibility functionality.

We all need to understand these systems better and experiment with how to use them effectively. To support this, on April 4, NISO will launch an eight-week training series on AI and prompt engineering, led by William Mattingly, a postdoctoral fellow at the Smithsonian Institution Data Science Lab. The course will help people with modest exposure to AI tools develop a solid grounding in the necessary terminology and concepts, along with hands-on experience. Later this year, other NISO programs will build on AI experience in the marketplace. Expanding on the outputs of the NISO Plus conference, NISO’s standards leadership committees are exploring future NISO work related to AI systems. Last fall, the NISO Board launched a subcommittee to lead exploration of NISO’s engagement in this space and to develop a coherent program of work in support of the community.

Our community has been dealing with profound changes for more than 40 years as digital technology has come to dominate how content is created, distributed, discovered, and preserved. That experience leaves us well positioned to engage with these new tools and approaches. As we navigate these issues, the standards process is a proven way to find solutions that work for everyone in the community.

Todd Carpenter
Executive Director, NISO