Newsline June 2016

It is apparent to technologists in the information publishing and provision community that the authentication methods currently in use by participating content providers and institutions are far from ideal. The overwhelming majority of institutions providing content to patrons rely on IP-based authentication, which keys off the address assigned to every connected device on the Internet. If a request comes from a specific address or range of addresses, the system allows access; if not, the user is blocked. In theory, this should work seamlessly, but in practice the setup has many holes and problems.
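To make the mechanism concrete, here is a minimal sketch of such a check; the address ranges shown are documentation-reserved examples, not any real institution's allocation.

```python
# Minimal sketch of an IP-range authorization check, using only the
# Python standard library. The ranges are hypothetical examples.
import ipaddress

# Ranges a publisher might have on file for a subscribing institution
LICENSED_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # example campus network
    ipaddress.ip_network("198.51.100.0/25"),  # example library branch
]

def is_authorized(client_ip: str) -> bool:
    """Allow access if the request originates inside a licensed range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in LICENSED_RANGES)

print(is_authorized("192.0.2.17"))   # True  -> content is served
print(is_authorized("203.0.113.9"))  # False -> request is blocked
```

As the check shows, the system never learns who the person is; it sees only where the request appears to come from.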

Some of the many problems with this type of authentication have been obvious since it was first developed, while others have become more pressing in recent years. IP addresses are relatively easy to spoof, and the content provider never knows who is on the other side of the address; it must simply trust that the user is authorized. As Internet access became more ubiquitous and institutional users could connect from almost anywhere, it became necessary to authenticate those not connecting directly through their institutional network. The development of proxy servers, which authenticate users before passing them along to their desired content, provided a solution to this authorization problem.
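As a rough illustration of that flow, and not any real product's implementation, a proxy authenticates the remote user against the institution's own directory and then fetches the content on their behalf; the credential store, function names, and URL handling below are hypothetical.

```python
# Rough sketch of the proxy-server idea: authenticate first, then fetch
# on the user's behalf from an institutional address. Hypothetical only.
import urllib.request

CAMPUS_DIRECTORY = {"jdoe": "correct-horse-battery"}  # stand-in for LDAP/SSO

def check_credentials(username: str, password: str) -> bool:
    return CAMPUS_DIRECTORY.get(username) == password

def proxied_fetch(username: str, password: str, content_url: str) -> bytes:
    """Authenticate the remote user, then retrieve content for them."""
    if not check_credentials(username, password):
        raise PermissionError("not an authorized institutional user")
    # This request leaves from the proxy's institutional IP address, so
    # the publisher's IP-based check sees an address it already trusts.
    with urllib.request.urlopen(content_url) as response:
        return response.read()
```

The key design point is that the publisher's IP check never changes; the proxy simply moves the point of origin inside the trusted range after verifying the user.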

Often, because users don't notice IP authentication controls (which are designed to be invisible), proxy systems are viewed as a barrier to entry and a source of frustration. Proxy systems aren't inherently insecure, but their security depends on proper implementation, as Don Hamparian of OCLC, provider of EZProxy, the most widely adopted proxy system in the library community, described in a talk last month. Especially as more and more users access content via mobile devices, the challenge of authenticating devices not connecting directly through an institutional network has grown exponentially.

Because of some significant security breaches and the data losses that followed, several major players that provide backbone services and place a high priority on security are pushing forward with more advanced security protocols. In addition to using multi-factor authentication, Google, for example, has begun testing new strategies for authentication to improve security. One hopes that these newer approaches will gain wider adoption.
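To give a sense of what a second factor involves, one widely used scheme is the time-based one-time password (TOTP, RFC 6238), which derives a short-lived code from a shared secret and the clock; the following is a minimal sketch, with an illustrative test secret rather than a real one.

```python
# Minimal sketch of a time-based one-time password (TOTP, RFC 6238),
# the scheme behind many authenticator apps; for illustration only.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval  # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and device share the secret; both compute the same short-lived
# code, so a stolen password alone is no longer enough to gain access.
print(totp("JBSWY3DPEHPK3PXP"))
```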

Authentication has become an area of focus for content providers, which have seen a rise in piracy that takes advantage of loose security and authentication systems. In some ways, pirate systems such as Sci-Hub and LibGen exploit holes in the security systems that publishers and libraries have put in place. Some theft of usernames and passwords is caused by phishing or other credential breaches, for which the solution is education and better security practices. On the other hand, content security is undermined by the willingness of some in the academic community to "donate" their log-in credentials. No security system, however well architected, can solve the "1D-10-T" problem posed by those willing to share their credentials with hackers in Kazakhstan.

These various problems are motivating many in the community to begin conversations about improving authentication. However, developing and implementing such improvements is not simple. Publishers and institutions, each reliant on the other and each reluctant to move first, cannot impose better authentication on their own. With so much at risk for institutions, libraries, and publishers, it is high time the full community started serious conversations about more complete solutions to the issues facing us now.

Sincerely,

Todd Carpenter

Executive Director