A Brief Aside …

The quantum theory of archives is validated by events! (Image: Brian Westin from Flickr)

As I continue to work on the next installment of Analyzing the Lifecycle in Practical Terms, I wanted to toss in a quick aside about another idea that I am very keen on: managing data at the “quantum,” or smallest practical, level. We are in the midst of re-architecting our repository infrastructure with our consultants. In our discussions, we are discovering that the opportunities to leverage repository content, both within and outside of our repository system, have expanded significantly since we first developed the system. Because we made good decisions at the beginning about atomizing our data into small (or quantum) units in the repository, our data is ready and able to be leveraged in new ways that go beyond simple search result lists.

It is a nice feeling when decisions are validated by later events.

Analyzing the Lifecycle in Practical Terms: Part I: Definitions

Continuing our work on thinking about all collection objects as sets of data, we are applying some theoretical constructs to the real world, both to understand the nature and needs of data objects and to assess the capabilities of management, presentation, and discovery systems.

Today we start by looking at a set of characteristics of data that will eventually become criteria for determining how and where to manage and deliver our data collections. These characteristics are sometimes inherent in the objects themselves, sometimes applied to the objects by the holding institution, and sometimes created when the objects are ingested into a repository or other management or presentation system.

Characteristics of Integrity

These characteristics are inherent in the data no matter how the institution seeks to use or manage it. They are core to the definition of a preservable digital object, and were defined at the very beginning of the digital library age. See: “Preserving Digital Information” (1996), https://www.clir.org/pubs/reports/pub63watersgarrett.pdf

  • Content: structured bits
  • Fixity: frozen as discrete objects
  • Reference: having a predictable location
  • Provenance: with a documented chain of custody
  • Context: linked to related objects

If a digital object lacks a particular characteristic of integrity, it is not preservable, but that does not mean we don’t manage it in one system or another.

Characteristics of the Curation Lifecycle

The digital curation lifecycle models how institutions manage their data over time. Rather than being inherent in the data itself, these characteristics depend on the collection development goals of the institution and are subject to review and alteration. The characteristics below relate to digital preservation activities, which are exhaustively explained in the “Reference Model for an Open Archival Information System,” https://public.ccsds.org/pubs/650x0m2.pdf.

  • Review
  • Bitstream maintenance
  • Backup/Disaster recovery
  • Format normalization
  • Format migration
  • Redundancy
  • Audit trail
  • Error checking

Characteristics of Usability

Some of the characteristics of usability are effectively inherent; others are definable by the institution. Intellectual Openness, while not inherent in the data itself, is typically externally determined; the institution generally cannot alter this characteristic unilaterally. Interoperability and Reusability are inherent in the data when it is acquired, but may be changed by creating derivatives or through normalization, depending on the level of Intellectual Openness. The ideas of Interoperability and Reusability in digital libraries come from A Framework of Guidance for Building Good Digital Collections, 3rd ed., http://www.niso.org/publications/rp/framework3.pdf. (A rough sketch of how all three groups of characteristics might be recorded for a single object follows the list below.)

  • Intellectual Openness
    • Open
    • Restricted: by license or intellectual property
  • Interoperability: the ability of a standards-based object to be used in another standards-based system
  • Reusability: the ability to re-use, alter, or modify the object, or any part of it, to create new information or knowledge. Reusability makes scholarship possible.
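
To make the “objects as sets of data” idea concrete, here is one way all three groups of characteristics could be recorded for a single object. This is only an illustrative sketch in Python; the field names and sample values are mine, not a formal schema or anything our repository actually uses.

```python
# Illustrative only: field names and sample values are invented for the example.
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    # Characteristics of integrity (inherent in the object)
    content: str                 # the structured bits
    fixity: str                  # how the object is frozen as a discrete object
    reference: str               # its predictable location
    provenance: str              # documented chain of custody
    context: list[str] = field(default_factory=list)   # links to related objects

    # Characteristics of the curation lifecycle (set by the institution)
    lifecycle: dict[str, bool] = field(default_factory=dict)

    # Characteristics of usability
    openness: str = "open"       # "open" or "restricted"
    interoperable: bool = True   # usable in another standards-based system
    reusable: bool = True        # can be altered or remixed to create new knowledge

example = DigitalObject(
    content="TIFF scan of an 1815 membership certificate",
    fixity="SHA-256 checksum recorded at ingest",
    reference="https://example.org/repository/object/1234",   # placeholder URL
    provenance="Transferred from Archives & Special Collections, 2016",
    context=["parent collection record"],
    lifecycle={"bitstream maintenance": True, "format migration": True},
)
```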

Next time we will examine how these characteristics relate to digital objects, and then after that, how those characteristics, along with institutional mission, help determine the systems and platforms that we could use to manage, preserve, and make available digital content from our repositories.

Putting the Archives Where the People Are

Archives on the radio at UConn

We are continually looking for more effective ways to connect people to archives and help them understand the value of archives to modern society and culture, so I want to pass along an idea that an archivist here at UConn implemented in conjunction with the student radio station. “D’Archive” is a weekly show featuring conversation, commentary, interaction with primary sources, and more.

Graham Stinnett, Outreach Archivist at the Archives and Special Collections, hosts the show and coordinates the content and guests, who will include archivists, researchers, and members of the general community. If you are interested in hearing the live version of D’Archive, air time is 10 am on Thursdays at 91.7 FM if you are in the northeastern Connecticut area, or you can stream it live at http://fm.whus.org/ from anywhere in the world.

And, in case you missed it, the first episode is available in streaming form from the WHUS website: http://whus.org/2017/09/darchive

That episode focuses on the Connecticut Punk scene of the 1980s—What? You didn’t KNOW that there was a Punk culture in Connecticut? Well, now you can learn all about it.

Visualizing Data Sets

A curious circle of interest around 1943 in a search for 1925.

I’ve been continuing to experiment with Kumu, a network-mapping application, seeing how I can use it to visualize all sorts of data. I’ve gotten better at manipulating the display to make the maps easier to use.

My current experiment is to take a search result set from the Connecticut Digital Archive, do some minimal manipulation on it, and put it into a Google sheet that I link to the visualization app. The result is running on a test server, and is quite interesting, I think.
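
In case it is useful, the “minimal manipulation” is really just reshaping the repository’s export into the element sheet that the visualization app reads. Here is a rough sketch of that step in Python; the column names on both sides (title, dateIssued, and so on) are placeholders for whatever your export and your map actually use, not the repository’s real field names.

```python
# A sketch of the reshaping step: repository search-result CSV -> element sheet.
# Column names are placeholders; substitute whatever your export actually uses.
import csv

with open("cda_search_results.csv", newline="", encoding="utf-8") as src, \
     open("kumu_elements.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["Label", "Date", "Institution", "Creator"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "Label": row.get("title", "Untitled"),
            "Date": row.get("dateIssued", ""),
            "Institution": row.get("institution", ""),
            "Creator": row.get("creator", ""),
        })
```

From there, the rows go into the Google sheet that the map pulls from, and the extra columns are what let you cluster the elements by date, owning institution, or creator.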

For this basic test, I did a simple search in the repository for “1925,” not specifying any metadata field but just looking for it somewhere in a record, expecting that most results would have 1925 in the date. That wasn’t always the case, and the outliers proved to be more interesting than the expected results.

Using the tool, you can arrange content by date, owning institution, or creator. When I arranged by “Date,” I got this interesting circle around 1943. Not understanding why that would happen, I took a closer look and discovered that all of the photos were taken in 1943 as worker identification photos for the Post boatyard in Mystic, Connecticut. In the description, each worker was identified by his name and birthdate. These 20 or so men (out of more than 200 of these images in the repository) were all born in 1925. I wonder if they knew that?

I think tools like this can make it interesting and informative to do “sloppy” or simple searches, and find hidden relationships that come out of the data.

Collections As Data

The purpose of digital projects has changed over time.

A lot of the talk here at Digital Directions is about thinking of your digital collections as data. One definition I like is “information that has been translated into a form that is efficient for movement or processing.” The idea that the purpose of building digital collections is no longer to create a faithful representation of a physical object, but to provide a resource that transcends the original purpose or form of the object, is becoming more common. I used a new slide in my management presentation this year to show how I believe the original purpose of digital projects has been upended.

It used to be that the primary purpose of digitization projects was to provide a digital representation or copy of the original analog object, with as much fidelity to the original as possible. At the time, that was something of a tall order. Nowadays, while we still do that, we also make it possible, through both technology and Creative Commons licensing, for people to manipulate the content in ways that were never part of the original purpose of the digital copy.

We have stood the old model on its head and are getting closer to the envisioned future of the potential of digital archives.

From Uniqueness to Ubiquity

A still unique but no longer scarce historical document

Another step along the path from analog to digital thinking in archival access is to stop thinking about our collections as unique, even if they are one of a kind. What does this mean?

When all access to analog content was by way of the reading room, everything existed in an environment of scarcity, since a one-of-a-kind document, like this 1815 membership certificate from the Windham County Agricultural Society, could only be experienced in one place, and only at limited times. This was scarcity of opportunity. Since most manuscript collections were never published in any form, this scarcity seemed a permanent condition. In fact, some repositories, perversely it seems to us now, prided themselves on the fact that people were forced to come to their reading rooms from all over the world to view their treasures.

A digital object can be in many places at once.

Digitization changed all that. Repositories now pride themselves on how much of their collections are available 24 x 7, and in how many places they are discoverable. Ubiquity has replaced scarcity as the coin of the realm, so to speak. The original documents remain as unique as before, but their ability to be ubiquitous gives them as much value as their uniqueness. How does this change the way we think about value in what we do?

Records Management Meets Digital Preservation

Library data architecture map

At the UConn Library we are involved in a project to develop a systematic data architecture, although we don’t often use that phrase, which is more of an IT term. According to Wikipedia, “In information technology, data architecture is composed of models, policies, rules or standards that govern which data is collected, and how it is stored, arranged, integrated, and put to use in data systems and in organizations.”

This definition does not address the preservation or sustainability aspect of data management that is central to the data curation lifecycle, but data architecture is meant to be only one aspect of what is called solution architecture.

Like many organizations that made the transition from the analog to the digital world, libraries have over the years developed multiple and sometimes conflicting solutions, systems, and policies for managing the digital collections and files in their domain. These solutions were usually implemented to solve particular problems that arose at the time, with less thought for their large-scale impact, often because there was no large-scale impact, or no way for those decisions to affect other areas of the organization. And of course external vendors were only too happy to sell libraries “solutions” that were specific to a particular use case.

As digital content has become the medium of activity and exchange, and as systems have improved and become more flexible, it is now possible, and in fact necessary, to look at our data management systems more broadly.

If we keep in mind that, at the root, all digital content is “ones and zeros” and that any system that can manage ones and zeros is potentially useful to a library, no matter where it comes from or what it is sold or developed for, then we can build an approach, or data architecture, that will serve us well, efficiently, and effectively.

How we get to that point is easier said than done. In order to get beyond thinking about the system first, we need to understand the nature, or characteristics, of our data. That’s where records management thinking intersects with this work. RM thinking assesses the needs and limits of access and persistence (or what RM folks would call retention). Based on those criteria, records are held and managed in certain ways and in certain environments to meet the requirements of their characteristics. For example, sensitive records may be stored in a more secure facility than non-sensitive records.

How does RM thinking apply to digital libraries? The RM idea is embodied in the DCC’s Lifecycle model, and many digital archivists have internalized it already. Many librarians, who work more with current data, have had less reason to internalize the DCC model of data curation into their work, and the model has generally been applied only to content already designated as preservation-worthy. What would it mean to apply RM/Lifecycle thinking to all areas of library content?

We have been mapping the relationships among different content types that the library is responsible for in terms of six different characteristics:

  • File format
  • Manager
  • IP rights holder
  • Retention
  • Current management platform
  • Current access platforms

Then we will look at the characteristics the content types have in common and develop a set of policies to govern the data that shares those characteristics; only then will we look at using, altering, building, or purchasing applications and systems to implement those policies.
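
As a rough illustration of that grouping step, here is a small sketch; the content types and their values are invented examples rather than our actual inventory, and the choice of retention and rights holder as the grouping key is just one possibility.

```python
# Invented example data: group content types by shared characteristics to see
# which clusters a single policy could govern.
from collections import defaultdict

content_types = [
    {"name": "Digitized manuscripts", "format": "TIFF", "manager": "Archives",
     "rights_holder": "institution", "retention": "permanent",
     "management_platform": "repository", "access_platform": "web"},
    {"name": "Oral history audio", "format": "WAV", "manager": "Archives",
     "rights_holder": "institution", "retention": "permanent",
     "management_platform": "repository", "access_platform": "web"},
    {"name": "Licensed e-journal backfiles", "format": "PDF", "manager": "Collections",
     "rights_holder": "third party", "retention": "contract term",
     "management_platform": "vendor platform", "access_platform": "vendor platform"},
]

clusters = defaultdict(list)
for ct in content_types:
    clusters[(ct["retention"], ct["rights_holder"])].append(ct["name"])

for (retention, rights), names in sorted(clusters.items()):
    print(f"retention={retention}, rights holder={rights}: {', '.join(names)}")
```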

It is always difficult to separate applications from the content they manipulate, but it is essential to do so in order to create a sustainable data architecture that puts the content first and the applications second.

Our project is in its early phases, and the map linked to above is very much a work in progress. Check back often to see the evolution of our thinking.

Automating Data Entry for 20,000 Folders

Patrick uses the scan pen to do some data entry

So, from the macro level to the micro level, you never know what is going to happen. We have an artificial collection, built over some 20 years, of “alternative” news and information sources relating mostly to late-20th-century counterculture groups. The collection fills about a dozen filing cabinets, with folders that may contain two issues of a newsletter or fifty flyers from a protest group. Each folder has a typewritten title, sometimes referring to the title of the publication, sometimes to an idiosyncratic subject term. It has been daunting to think about creating an online index of these resources; the data entry alone would be an enormous task. And once we did that, there would be enormous pressure to provide online access to the contents as well.

Scanning titles

With some seed funding from a private donor, we are beginning to digitize the collection, and create online access to the resources. We made some decisions that are consistent with the idea of “quantum archives,” and applied some technological solutions to a difficult problem.

First, we defined the smallest unit of description to be the folder. Whether the folder held 20 different documents or a homogeneous set, we would manage and describe it at the folder level. A user would discover the folder, and then browse through the pages in the folder (or use full-text searching) until they found what they wanted. Folder titles and one or two genre terms would be the initial entry points.

The genre list

In order to automate data entry (remember that the folder titles are typewritten), we purchased a text-scanning stylus. Working in a spreadsheet, we scan the barcode attached to the folder, the title of the folder, and genre terms from a typewritten sheet. There are no typographical errors, and with the scanning pen we can enter data at a rate far higher than hand typing.

Once we populate the spreadsheet, we use other processes to convert it into MODS XML descriptive metadata records, pair them with the set of scanned objects from the folder, and batch-ingest them into the preservation digital repository. After a bit of tinkering with settings, workflow, and process, we are far exceeding the throughput of manual entry.
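
For those curious about the spreadsheet-to-MODS step, here is a simplified sketch of the kind of conversion involved. It is not our production workflow; the column names (barcode, folder_title, genres) and the semicolon-delimited genre convention are assumptions for the example.

```python
# Simplified sketch: one spreadsheet row (barcode, folder title, genre terms)
# becomes one minimal MODS record. Column names are assumptions.
import csv
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS_NS)

def row_to_mods(row):
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = row["folder_title"]
    for term in row["genres"].split(";"):
        ET.SubElement(mods, f"{{{MODS_NS}}}genre").text = term.strip()
    ET.SubElement(mods, f"{{{MODS_NS}}}identifier", type="local").text = row["barcode"]
    return mods

# One MODS file per folder, named by barcode so it can be paired with the scans
# for batch ingest.
with open("folders.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        ET.ElementTree(row_to_mods(row)).write(
            f"{row['barcode']}_MODS.xml", encoding="utf-8", xml_declaration=True
        )
```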

Archival Documents Star in Virtual Reality Experience

VR participants consult archival sources in archives and faculty collaboration

We are working with a professor in the Digital Media and Design department on a project that leverages archival documents in an unusual environment. Called Courtroom 600 after the room in which the trial was held, the project creates an immersive virtual reality experience of the Nuremberg war crimes tribunal. But rather than just using the archives to do research that helps them build the environment, the VR designers are incorporating the documents themselves into the experience. When a participant encounters a defendant or an event, he or she can call up relevant documents from the archives within the VR environment and get background information or more details in order to understand what is happening in the virtual space. Kind of like looking stuff up on your iPad while you are watching a movie.

We are still in the very early phases of the project, and are learning what kinds of demands an educational VR experience makes on our collections, but as you can see in this early screen shot, we are taking the archives to an entirely new dimension!