Something Wicked this Way Comes: Local and National Linking Environments
In the developing world of digital objects, libraries are having to do things they have not done before. We often say that librarians are ideally suited to imposing order on the digital world because we succeeded, more or less, in imposing order on the print world. And it may well be true that, where digital objects are direct analogues of printed objects, we do a good job of providing bibliographic description, including subject headings and notation, and thereby allow these objects to be ‘retro-fitted’ to our tried and tested bibliographic gateway of choice, the OPAC. But there is so much more to do than just that.
Digital objects can only be partially understood as analogues of printed objects. Any digital object contains the potential to transcend what would be possible for its printed equivalent. A printed object cannot reach out to touch another printed object. True, it can contain a reference to another printed object which may be held by the same library, but the reader who may be interested in that related object has to do the work of tracking it down themselves by returning to the catalogue and undertaking a search as though on an unrelated object.
Imagine a library in which the books on the shelves were physically linked to other books on other shelves on other floors, with librarians busy pinning threads to books and then walking with the end of those threads to other parts of the library to pin the end to another book. The content navigation thereby introduced would of course quickly make the library unnavigable. Yet even that bizarre image is not quite accurate as an analogy with linking in the digital context. Imagine a library in which not only were librarians busy pinning threads to the backs of books, but new threads kept appearing – and at times disappearing – automatically, in response to new books being shelved for the first time. Now we are truly in a Hogwarts world.
Linking technology is one of the most important tools we have as managers of a subset of the digital objects which make up the internet. The internet itself is of course constructed upon the possibility of linking in a hypertextual space. The vision of the computer scientist Tim Berners-Lee, whose ideas created the World Wide Web, has been significantly abetted by the vision of the systems librarian Herbert Van de Sompel, who – while working with colleagues in the library at the University of Ghent in Belgium – developed context-sensitive linking, based upon a new type of URL, the OpenURL, in a system which he called ‘SFX’. SFX stands for ‘Special Effects’, and according to Jenny Walker, Vice President for Marketing and Business Development for the Information Services Division at Ex Libris (USA) Inc., he chose this name because he believed it ‘would offer to scholarly communication some of that magic offered by special effects in the film industry.’ [slides 2-5]
Until SFX came along in 1999 with its ‘Open Linking Framework’, the linked hypertextual space of the internet was, despite its speed and its vast interrelatedness, essentially a static space, composed of static links. What the Open Linking Framework did was to inject dynamism into that space by breaking the link between the referrer – a URL which is typically part of a citation – and its referent. The break was effected by means of a link resolution server, which captured the information in the referrer, enriched it with other information from the server, and then transported the resulting metadata in the form of an OpenURL as a search request to a range of approved targets. What this meant was that the Library asserted itself virtually into the transaction between user and resource, in order to provide the user with what it deemed to be the ‘appropriate copy’ of the referent. Thus, in the early days of this technology, it was often claimed that what the OpenURL did was to solve the ‘appropriate copy’ problem.
The OpenURL is essentially an extensible syntax which allows for interoperability by providing a simple and consistent way to describe any item – and the context in which it was cited – so that a resolver can work out where an appropriate copy of that item is to be found.
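To make that syntax concrete, here is a minimal sketch of how a citation’s metadata becomes an OpenURL (in the early 0.1-style key–value form). The resolver address, ISSN, and article details are all invented for illustration; in practice each library’s own resolver address is used as the base.

```python
from urllib.parse import urlencode

# Hypothetical resolver base URL; each library runs its own.
RESOLVER_BASE = "http://resolver.example.ac.uk/sfx"

def make_openurl(base, **metadata):
    """Build an OpenURL (0.1-style syntax) from citation metadata."""
    return base + "?" + urlencode(metadata)

link = make_openurl(
    RESOLVER_BASE,
    sid="webofknowledge",   # the source that generated the citation
    genre="article",
    atitle="Extremely Useful Research",
    title="Journal of Useful Studies",
    issn="1234-5678",       # invented ISSN, purely for illustration
    volume="12", issue="3", spage="45", date="2004",
)
print(link)
```

The point of the consistent key set (genre, issn, atitle, and so on) is that any resolver can parse any source’s links, which is what makes the syntax interoperable.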
Let’s take a step back and reconsider this, using our print library analogy. Suppose a book entitled Something wicked has been published by Brown Publishers in the UK, but by Black Publishers in the US. Now suppose that my library only has the Brown (UK) edition, but a user comes to the library help desk with a request for Something wicked published by Black Publishers. I, helpful librarian, point out to the user that we don’t have the Black edition, but we do have the Brown edition, which is the appropriate copy of the book for a UK library. It would be a very unusual user who in that circumstance demanded that the library provide them with the Black edition, and refused to take the Brown edition instead. Yet with a static link, that is often the logic. I may be looking at records for articles in Web of Knowledge, and find a reference to an article called ‘Extremely Useful Research’ published in the Journal of Useful Studies. This may be a journal which we in Edinburgh University Library take in electronic form, but as part of a bundle of journals provided by a single journal aggregation service, such as SwetsWise. However, the same journal is also provided to customers of aggregation services provided by ingenta, and the static link in the Web of Knowledge record points exclusively at the ingenta version of the article. As a result, I, the user, clicking on the link, arrive at a screen which tells me that my university library does not subscribe to this journal – even though, of course, we do. Using a link resolution service, however, means that the link in the Web of Knowledge record goes first to the server, and is there resolved into an OpenURL which runs a ‘fetch’ against a range of targets, including SwetsWise. As a result, an appropriate copy is found and sent to my screen, and I am not met with the equivalent of a helpdesk librarian telling me the library does not have the book I requested simply because the wrong publisher was cited.
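The ‘fetch’ against a range of targets can be sketched in a few lines. The holdings data, target names, and ISSN below are all invented for illustration; a real resolver such as SFX consults a knowledge base of the library’s actual subscriptions rather than a hard-coded table.

```python
# A minimal sketch of "appropriate copy" resolution, assuming a toy
# knowledge base. In this invented scenario the library reaches the
# journal through SwetsWise but not through ingenta.
HOLDINGS = {
    "SwetsWise": {"1234-5678"},  # ISSNs reachable via each target
    "ingenta":   set(),
}

def resolve(issn, targets=HOLDINGS):
    """Return the first target through which the library holds this ISSN."""
    for target, issns in targets.items():
        if issn in issns:
            return target
    return None  # no appropriate copy: fall back to, say, an ILL request

print(resolve("1234-5678"))
```

The static link hard-wires one target (ingenta); the resolver instead chooses whichever target the library actually subscribes through, which is the whole of the ‘appropriate copy’ fix.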
In many ways, the OpenURL symbolises the battle between libraries and publishers for control of access to digital resources. Publishers have been fighting hard to draw users into the digital resource world via their gateways, and to tailor user choices to their own products. But it is gratifying to see that the technology which is winning out in this battle is a technology which has emerged from the library world, and which is based upon the principle of free enquiry of the knowledge universe – a principle which the library world once embodied in tools such as national union catalogues and comprehensive indexing and subject heading schemes. The library mediates between the user and the publishers in order to level the playing field and give all users an equal chance of accessing all knowledge. That, at least, is the theory. In researching this presentation I came across quotations from key players in the OpenURL world which regularly emphasised this point. Going back to my Black vs Brown Publishers analogy, David Seaman of the DLF said in a presentation made in New York last year, ‘Users don’t work in physical libraries whose books are shelved by publisher’. And in an article published in The Serials Librarian last year, Jenny Walker of Ex Libris wrote that ‘bibliographic instruction classes have become more focused around vendor-specific pathways rather than common-sense research approaches.’ If that is true, it suggests that the publishers may be winning the battle, but I think we can be confident that, with open linking practice spreading fast, the war will be won by librarians.
Let’s now take a look at the OpenURL in practice at Edinburgh. [slides 6-16]
Open linking therefore performs the vital role of giving users what they want, if it is available, when they click on a link, rather than giving them what a publisher or a controller other than their library thinks they should have. We want to take away unpleasant surprises. Since Harry Potter is in my mind, I can’t help but make the analogy with Bertie Bott’s Every Flavour Beans, the favoured choice of sweets at Hogwarts. When you dip your hand into the bag, you are never quite sure what flavour the jelly bean will be (who can forget Dumbledore selecting one he believes to be toffee-flavoured, only to remark shortly afterwards, in disappointment, ‘Alas – earwax’).
But the technology can do something else. It can provide users with ‘extended services’ as well. These are services which the user may find additionally useful, and they are presented as targets which can be searched depending on the metadata present in the OpenURL. Jenny Walker, in her article in The Serials Librarian last year, gave some examples of these: ‘ISSN or journal name can be used to check for print holdings in the library catalog, whether or not electronic full-text is available;
Author names can be used to look up the authors in a citation database to see other articles they have written or how well-cited they are;
Subject terms from the original citation can be re-used in other related databases or to link to Web sites that librarians judge potentially useful;
ISSN or journal name can be used to look up the journal in a serials directory to find out more general information about the journal – its publication schedule, where it is indexed, publisher information, etc.’
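The services Walker lists above all follow one pattern: take a piece of metadata from the OpenURL and re-use it as a search key against another target. A minimal sketch, in which every URL pattern is hypothetical (a real resolver’s target menu is configured by the library):

```python
# Sketch: turning one OpenURL's metadata into a menu of extended services.
# All target URLs are invented for illustration.
def extended_services(md):
    """Build (label, url) pairs from whatever metadata is present."""
    services = []
    if "issn" in md:
        services.append(("Check print holdings",
                         f"http://opac.example.ac.uk/search?issn={md['issn']}"))
        services.append(("Journal details in serials directory",
                         f"http://serials.example.ac.uk/title?issn={md['issn']}"))
    if "aulast" in md:
        services.append(("Other articles by this author",
                         f"http://citations.example.ac.uk/author?name={md['aulast']}"))
    return services

menu = extended_services({"issn": "1234-5678", "aulast": "Smith"})
for label, url in menu:
    print(label, "->", url)
```

Because the menu is driven purely by which metadata fields arrive in the OpenURL, the same mechanism serves a citation with only a journal name just as well as a full article reference.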
Other extended services which could be added would include biographical dictionaries for names as subjects. Search engines can be presented as additional services, and interlibrary loan requests can also go directly to the ILL department. [slides 17-18]
There are other ways of doing linking, however. One of these is CrossRef. [slide 19] CrossRef is an initiative of the publishing world to control the linking which occurs in order to provide for link persistence. It uses the Digital Object Identifier (DOI) system to do this. The DOI is an alphanumeric name that identifies digital content, such as a book or journal article. The DOI is paired with the object's electronic address, or URL, in an updateable central directory, and is published in place of the URL in order to avoid broken links while allowing the content to move as needed. DOIs are distributed by publishers and by CrossRef, and there is no end-user charge associated with their use. As an identifier, the DOI can be incorporated into many different systems and databases. Nevertheless, the problem with CrossRef is that it does not address the ‘appropriate copy’ problem for libraries, nor does it give libraries any control over the route to resources for users. It represents an efficiency in the digital object environment, certainly, and laudably applies a standard, the DOI, as a means of assuring resource persistence and preventing the frustration of broken links. And clearly there are sound commercial reasons for doing so. But it is not enough on its own.
However, and fortunately for libraries, the CrossRef publishers realised that library control of links was a more important consideration for librarians than link persistence. And publishers are not in the business of ignoring librarians, whose purchasing decisions are of some importance to them. And so they found a way of putting CrossRef at the service of open linking. This has been done by making the DOI directory itself OpenURL-enabled, so that it can recognise a user with access to a local resolver. When the user clicks on a DOI, the CrossRef system does two key things. First, it redirects that DOI back to the user’s local resolver, and second, it allows the DOI to be used as a key to pull metadata out of the CrossRef database, metadata that is needed to create the OpenURL targeting the local link resolver. As a result, the institutional user clicking on a DOI is directed to appropriate resources. By using the CrossRef DOI system to identify their content, publishers in effect make their products OpenURL aware, and this leveraging of the CrossRef system to support open linking is a major gain for digital library services.
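The two steps just described can be sketched as follows. The DOI, its metadata record, and the resolver address are all invented for illustration; the real CrossRef directory holds the authoritative metadata and detects the user’s resolver by other means.

```python
# Sketch of the OpenURL-enabled DOI directory described above: the DOI
# serves as a key into a metadata store, and the user is redirected to
# their local resolver with that metadata as an OpenURL. All values here
# are invented for illustration.
from urllib.parse import urlencode

CROSSREF_METADATA = {
    "10.1000/xyz123": {"genre": "article", "issn": "1234-5678",
                       "atitle": "Extremely Useful Research"},
}

def redirect_for(doi, local_resolver):
    """Build the redirect URL the DOI directory would send the user to."""
    md = CROSSREF_METADATA[doi]  # step 2: the DOI used as a metadata key
    # step 1: target the user's local resolver, not the publisher's site
    return local_resolver + "?" + urlencode({"id": f"doi:{doi}", **md})

url = redirect_for("10.1000/xyz123", "http://resolver.example.ac.uk/sfx")
print(url)
```

The effect is that a DOI, which on its own would lead to one fixed publisher copy, instead feeds the library’s own resolver, which can then apply the appropriate-copy logic described earlier.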
So far, I have talked about local link resolution, provided by local resolvers supplied by companies like Ex Libris or Endeavor acting to transport metadata in OpenURLs to targets within the control of the library. But in the UK we have a national resolver service as well, which is currently being built by EDINA on behalf of JISC, the Joint Information Systems Committee. This service, which was known in its project phase as Balsa, but has recently been restyled GetCopy [slides 20-21], is what is described as a ‘lightweight broker’ acting within the JISC Information Environment. GetCopy provides a free resolver for sites which do not have their own service. It deploys a Rights Evaluation Mechanism which ensures that all end users’ access rights are verified for all links to target services. This is a complex undertaking on behalf of the entire UK, but it is consistent with the JISC vision of a coherent Information Environment, in which common services underpin content and presentation services, and EDINA is doing quite a lot of the middleware work required for this vision.
Nevertheless, the work is a long way from done, and the challenges are immense. Making a success of linking within the digital object environment is as important now to our profession as making a success of cataloguing and subject indexing was in the world of the print library. I will leave you with some food for thought on this subject, from David Seaman. [slides 22-23].
This page was updated 21 June 2004, and is maintained by Katy Sidwell.