
Archive for the ‘legal’ Category

NISO response to the National Science Board on Data Policies

Wednesday, January 18th, 2012

Earlier this month, the National Science Board (NSB) announced it was seeking comments from the public on the report from the Committee on Strategy and Budget Task Force on Data Policies, Digital Research Data Sharing and Management.  That report was distributed last December.

NISO has prepared a response on behalf of the standards development community, which was submitted today.  Here are some excerpts of that response:

The National Science Board’s Task Force on Data Policies comes at a watershed moment in the development of an infrastructure for data-intensive science based on sharing and interoperability. The NISO community applauds this effort and the focused attention on the key issues related to a robust and interoperable data environment.

….

NISO has particular interest in Key Challenge #4: The reproducibility of scientific findings requires that digital research data be searchable and accessible through documented protocols or methods. Beyond its historical involvement in these issues, NISO is actively engaged in forward-looking projects related to data sharing and data citation. NISO, in partnership with the National Federation of Advanced Information Services (NFAIS), is nearing completion of a best practice for how publishers should manage supplemental materials that are associated with the journal articles they publish. With a funding award from the Alfred P. Sloan Foundation and in partnership with the Open Archives Initiative, NISO began work on ResourceSync, a web protocol to ensure large-scale data repositories can be replicated and maintained in real-time. We’ve also had conversations with the DataCite group about formal standardization of their IsCitedBy specification. [Todd Carpenter serves] as a member of the ICSTI/CODATA task force working on best practices for data citation, and NISO looks forward to promoting and formalizing any recommendations and best practices that derive from that work.

….

We strongly urge that any further development of data-related best practices and standards take place in neutral forums that engage all relevant stakeholder communities, such as the one that NISO provides for consensus development. As noted in Appendix F of the report, Summary Notes on Expert Panel Discussion on Data Policies, standards for descriptive and structural metadata and persistent identifiers for all people and entities in the data exchange process are critical components of an interoperable data environment. We cannot agree more with this statement from the report of the meeting: “Funding agencies should work with stakeholders and research communities to support the establishment of standards that enable sharing and interoperability internationally.”

There is great potential for NSF to expand its leadership role in fostering well-managed use of data. This would include not only support of the repository community, but also the promulgation of community standards. In partnership with NISO and using the consensus development process, NSF could support the creation of new standards and best practices. More importantly, NSF could, through its funding role, provide advocacy for—even require—how researchers should use these broad community standards and best practices in the dissemination of their research. We note that there are more than a dozen references to standards in the Digital Research Data Sharing and Management report, so we are sure that this point is not falling on unreceptive ears.

The engagement of all relevant stakeholders in the establishment of data sharing and management practices as described in Recommendation #1 is critical in today’s environment—at both the national and international levels. While the promotion of individual communities of practice is a laudable goal, it does present problems when it comes to systems interoperability. A robust system of data exchange must, by default, be grounded in a core set of interoperable data. More often than not, computational systems will need to act with a minimum of human intervention to be truly successful. This approach does not require a single schema or metadata system for all data, which would of course be impossible and unworkable. However, a focus on and inclusion of core data elements and common base-level data standards is critical. For example, geo-location, bibliographic information, identifiers and discoverability data are all elements that could easily be standardized to foster interoperability. Domain-specific information can then be layered over this base of common and consistent data in a way that maintains domain specificity without sacrificing interoperability.
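The layering idea above can be sketched in a few lines of code. This is only an illustration of the design, not any actual standard: the field names, the domain, and the `make_record` helper are all hypothetical, standing in for whatever core element set a consensus process would define.

```python
# A minimal sketch (all field names hypothetical) of the layering idea:
# a small core of interoperable metadata shared by every domain, with
# domain-specific detail carried in a separate extension block.

CORE_FIELDS = {"identifier", "title", "creator", "date", "geolocation"}

def make_record(core: dict, domain: str, extension: dict) -> dict:
    """Combine a common core layer with a domain-specific extension layer."""
    missing = CORE_FIELDS - core.keys()
    if missing:
        raise ValueError(f"core layer is missing fields: {sorted(missing)}")
    return {"core": core, "domain": domain, "extension": extension}

record = make_record(
    core={
        "identifier": "doi:10.9999/example.dataset",  # persistent identifier
        "title": "Example sensor dataset",
        "creator": "Example Lab",
        "date": "2012-01-18",
        "geolocation": {"lat": 38.9, "lon": -77.0},
    },
    domain="seismology",  # hypothetical community of practice
    extension={"sample_rate_hz": 100, "instrument": "broadband"},
)

# A generic harvester can index record["core"] for cross-domain discovery
# without understanding the seismology-specific extension at all.
```

The point of the split is that a discovery service only ever touches the core layer, so two repositories with entirely different extension vocabularies remain mutually searchable.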

One of the key problems that the NSB and the NSF should work to avoid is the proliferation of standards for the exchange of information. This is often the butt of standards jokes, but in reality it does create significant problems. It is commonplace for communities of interest to review the landscape of existing standards and determine that existing standards do not meet their exact needs. That community then proceeds to duplicate seventy to eighty percent of existing work to create a specification that is custom-tailored to their specific needs, but which is not necessarily compatible with existing standards. In this way, standards proliferate and complicate interoperability. The NSB is uniquely positioned to help avoid this unnecessary and complicating tendency. Through its funding role, the NSB should promote the application, use and, if necessary, extension of existing standards. It should aggressively work to avoid the creation of new standards when relevant standards already exist.

The sharing of data on a massive scale is a relatively new activity and we should be cautious in declaring fixed standards at this stage. It is conceivable that standards may not exist to address some of the issues in data sharing or that it may be too early in the lifecycle for standards to be promulgated in the community. In that case, lower-level consensus forms, such as consensus-developed best practices or white papers, could advance the state of the art without inhibiting the emergence of new services, activities or trends. The NSB should promote these forms of activity as well, when standards development is not yet an appropriate path.

We hope that this response is well received by the NSB in the formulation of its data policies. There is terrific potential in creating an interoperable data environment, but that system will need to be based on standards and rely on best practices within the community to be fully functional. The scientific community, in partnership with the library, publisher and systems provider communities, can collectively help to create this important infrastructure. Its potential can only be helped by consensus agreement on base-level technologies. If development continues on a domain-centered path, the goal of interoperability and delivering on its potential will only be delayed and quite possibly harmed.

The full text PDF of the entire response is available here.  Comments from the public related to this document are welcome.

Interested in the details of the Google Settlement?

Wednesday, November 19th, 2008

Jonathan Band, a DC-based intellectual property lawyer, has produced an excellent distillation of the Google Library/Publisher/Authors Guild settlement. For those who are interested but not committed to reading the full 141 pages and 15 attachments, Jonathan’s summary is readable and a much more manageable 21 pages. Thanks and congratulations to Jonathan for a great summary.

Google Settlement gets tentative court approval

Tuesday, November 18th, 2008

Yesterday, the New York court judge overseeing the publishers/authors/Google settlement gave tentative approval to the deal. More details are here.

Court acknowledges copyright law application to Open Source software

Friday, August 15th, 2008

In an important ruling yesterday, the U.S. Court of Appeals for the Federal Circuit stood behind the concept that open source software is covered by copyright law, which strengthens the rights of open source developers.

This is a critical win for open source developers: although the principle seems obvious, it was a previously untested aspect of US law. Software has long been viewed in the courts as being covered under copyright law. Some background on the copyright protections provided to software is here, here and here.

The crux of the case centers on whether the terms of an open source license such as the Artistic License in this case (or similar licenses, such as Creative Commons) should be considered “conditions of, or merely covenants to, the copyright licenses.”  When a copyright holder grants a nonexclusive license whose terms are mere covenants, he or she forfeits the right to sue for copyright infringement over a breach of those terms and can only sue for breach of contract.

Why is this an important distinction?  If violations of the license terms were merely violations of contract law, the penalties for violation would be significantly reduced.  Contract law violations frequently result in awards that are derived from the monetary damages related to the contract.  In the case of open source software, there is very limited if any exchange of funds, and therefore very limited monetary damages.

In the ruling, Judge White addressed the question of economic benefits accruing to OS developers by writing:

The lack of money changing hands in open source licensing should not be presumed to mean that there is no economic consideration, however.  There are substantial benefits, including economic benefits, to the creation and distribution of copyrighted works under public licenses that range far beyond traditional license royalties. 

Copyright infringement, on the other hand, has a set of penalties and remedies that are much more significant and are not explicitly tied to the financial terms of an exchange.  In addition, copyright cases can include attorney’s fees among the remedies.

In the decision, the Artistic License was deemed to be limited in scope and to impose conditions; because the defendant violated those conditions, his use was deemed to infringe the copyright. The court decided in the plaintiff’s favor because the clause “provided that …” created limitations and conditions in the license to which the licensee must adhere or else infringe the copyrights of the licensor.  In this particular case, the “conditions set forth in the Artistic License are vital to enable the copyright holder to retain the ability to benefit from the work of downstream users.”  In addition, licensees are “authorized to make modifications and to distribute the materials provided that the user follows the restrictive terms of the Artistic License.”  These conditions of use were deemed to be sufficient restrictions on the terms of the license to distinguish them from contractual covenants.

This case will reinforce the legal protections for producers of open source software that have underpinned the development and sharing of open source code for years.  Andy Updegrove, a lawyer specializing in intellectual property and standards law, and a prolific blogger, was quoted in PC Magazine as saying:

“For the community this wasn’t about the money at all, but about receiving the blessing of an important court that the foundations upon which the entire free and open source and Creative Commons philosophies are based.”