OSS4Pres 2.0: Design Requirements for Better Open Source Tools

By Heidi Elaine Kelly

____

This is the second post in the bloggERS series describing outcomes of the #OSS4Pres 2.0 workshop at iPRES 2016, addressing open source tool and software development for digital preservation. This post outlines the work of the group tasked with “drafting a design guide and requirements for Free and Open Source Software (FOSS) tools, to ensure that they integrate easily with digital preservation institutional systems and processes.” 

The FOSS Development Requirements Group set out to create a design guide for FOSS tools to ensure easier adoption of open-source tools by the digital preservation community, including their integration with common end-to-end software and tools supporting digital preservation and access that are now in use by that community. 

The group included representatives of large digital preservation and access projects such as Fedora and Archivematica, as well as tool developers and practitioners, ensuring a range of perspectives were represented. The group’s initial discussion led to the creation of a list of minimum necessary requirements for developing open source tools for digital preservation, based on similar examples from the Open Preservation Foundation (OPF) and from other fields. Below is the draft list that the group came up with, followed by some intended future steps. We welcome feedback or additions to the list, as well as suggestions for where such a list might be hosted long term.

Minimum Necessary Requirements for FOSS Digital Preservation Tool Development

Necessities

  • Provide publicly accessible documentation and an issue tracker
  • Have a documented process for how people can contribute to development, report bugs, and suggest new documentation
  • Every tool should do the smallest possible task really well; if you are developing an end-to-end system, develop it in a modular way in keeping with this principle
  • Follow established standards and practices for development and use of the tool
  • Keep documentation up-to-date and versioned
  • Follow test-driven development philosophy
  • Don’t develop a tool without use cases, and stakeholders willing to validate those use cases
  • Use an open and permissive software license to allow for integrations and broader use

Recommendations

  • Have a mailing list, Slack or IRC channel, or other means for community interaction
  • Establish community guidelines
  • Provide a well-documented mechanism for integration with other tools/systems in different languages
  • Provide functionality of the tool as a library, separating out the GUI from the actual functions (see the sketch after this list)
  • Package the tool in an easy-to-use way; the more broadly you want the tool to be used, the more operating systems you should package it for
  • Use a packaging format that supports any dependencies
  • Provide examples of functionality for potential users
  • Consider the organizational home or archive for the tool for long-term sustainability; develop your tool based on potential organizations’ guidelines
  • Consider providing a mechanism for internationalization of your tool (this is a broader community need as well, to identify the tools that exist and to incentivize this)
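
To make the recommendation above about providing functionality as a library more concrete, here is a minimal sketch in Python. It is purely illustrative: the checksum task, the function name, and the CLI flags are invented for this example rather than drawn from any particular digital preservation tool. The point is simply that the reusable logic lives in an importable function, while the command-line layer stays a thin wrapper that other tools and systems can bypass.

```python
# Hypothetical sketch: keep the core function importable as a library,
# with the command-line interface as a thin wrapper around it.
import argparse
import hashlib
from pathlib import Path


def file_checksum(path, algorithm="sha256"):
    """Return the hex digest of a file -- the reusable 'library' part."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def main():
    """Thin CLI layer: argument parsing and printing only, no logic."""
    parser = argparse.ArgumentParser(description="Checksum a file.")
    parser.add_argument("path", type=Path)
    parser.add_argument("--algorithm", default="sha256")
    args = parser.parse_args()
    print(file_checksum(args.path, args.algorithm))


if __name__ == "__main__":
    main()
```

Another tool, a workflow system, or a GUI can then import and call file_checksum() directly, without shelling out to the command-line interface.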

Premise

  • Digital preservation is an operating system-agnostic field

Next Steps

Feedback and Perspectives. Because of the expense of the iPRES conference (and its location in Switzerland), all of the group members were from relatively large and well-resourced institutions. The perspective of under-resourced institutions is very often left out of open-source development communities, as they are unable to support and contribute to such projects; in this case, this design guide would greatly benefit from the perspective of such institutions as to how FOSS tools can be developed to better serve their digital preservation needs. The group was also largely from North America and Europe, so this work would eventually benefit greatly from adding perspectives from the FOSS and digital preservation communities in South America, Asia, and Africa.

Institutional Home and Stewardship. When finalized, the FOSS development requirements list should live somewhere permanently and develop based on the ongoing needs of our community. As this line of communication between practitioners and tool developers is key to the continual development of better and more user-friendly digital preservation tools, we should continue to build on the work of this group.

Referenced FOSS Tool and Community Guides

____

Heidi Elaine Kelly is the Digital Preservation Librarian at Indiana University, where she is responsible for building out the infrastructure to support long-term sustainability of digital content. Previously she was a DiXiT fellow at Huygens ING and an NDSR fellow at the Library of Congress.

Building Bridges and Filling Gaps: OSS4Pres 2.0 at iPRES 2016

By Heidi Elaine Kelly and Shira Peltzman

____

This is the first post in a bloggERS series describing outcomes of the #OSS4Pres 2.0 workshop at iPRES 2016.

Organized by Sam Meister (Educopia), Shira Peltzman (UCLA), Carl Wilson (Open Preservation Foundation), and Heidi Kelly (Indiana University), OSS4PRES 2.0 was a half-day workshop that took place during the 13th annual iPRES 2016 conference in Bern, Switzerland. The workshop aimed to bring together digital preservation practitioners, developers, and administrators in order to discuss the role of open source software (OSS) tools in the field.

Although several months have passed since the workshop wrapped up, we are sharing this information now in an effort to raise awareness of the excellent work completed during this event, to continue the important discussion that took place, and to hopefully broaden involvement in some of the projects that developed. First, however, a bit of background: the initial OSS4PRES workshop was held at iPRES 2015. Attended by over 90 digital preservation professionals from all areas of the open source community, the workshop featured reports on specific issues related to open source tools, followed by small group discussions about the opportunities, challenges, and gaps that attendees observed. The energy from this initial workshop led to the proposal of a second workshop, as well as a report published in the Code4Lib Journal, OSS4EVA: Using Open-Source Tools to Fulfill Digital Preservation Requirements.

The overarching goal for the 2016 workshop was to build bridges and fill gaps within the open source community at large. In order to facilitate a focused and productive discussion, OSS4PRES 2.0 was organized into three groups, each of which was led by one of the workshop’s organizers. Additionally, Shira Peltzman floated between groups to minimize overlap and ensure that each group remained on task. Besides maximizing our output, splitting into groups allowed each group to focus on disparate but complementary aspects of the open source community.

Develop user stories for existing tools (group leader: Carl Wilson)

Carl’s group was composed principally of digital preservation practitioners. The group scrutinized existing pain points associated with the day-to-day management of digital material, identified tools needed by the open source community that had not yet been built, and began to fill this gap by drafting functional requirements for these tools.

Define requirements for online communities to share information about local digital curation and preservation workflows (group leader: Sam Meister)

With an aim to strengthen the overall infrastructure around open source tools in digital preservation, Sam’s group focused on the larger picture by addressing the needs of the open source community at large. The group drafted a list of requirements for an online community space for sharing workflows, tool integrations, and implementation experiences, to facilitate connections between disparate groups, individuals, and organizations that use and rely upon open source tools.

Define requirements for new tools (group leader: Heidi Kelly)

Heidi’s group looked at how the development of open source digital preservation tools could be improved by implementing a set of minimal requirements to make them more user-friendly. Since a list of these requirements specifically for the preservation community had not existed previously, this list both fills a gap and facilitates the building of bridges, by enabling developers to create tools that are easier to use, implement, and contribute to.

Ultimately OSS4PRES 2.0 was an effort to make the open source community more open and diverse, and in the coming weeks we will highlight what each group managed to accomplish towards that end. The blog posts will provide an in-depth summary of the work completed both during and since the event took place, as well as a summary of next steps and potential project outcomes. Stay tuned!

____

Shira Peltzman is the Digital Archivist for the UCLA Library where she leads the development of a sustainable preservation program for born-digital material. Shira received her M.A. in Moving Image Archiving and Preservation from New York University’s Tisch School of the Arts and was a member of the inaugural class of the National Digital Stewardship Residency in New York (NDSR-NY).

Heidi Elaine Kelly is the Digital Preservation Librarian at Indiana University, where she is responsible for building out the infrastructure to support long-term sustainability of digital content. Previously she was a DiXiT fellow at Huygens ING and an NDSR fellow at the Library of Congress.

Defining Levels of Processing vs. Levels of Effort

By Carol Kussmann and Lara Friedman-Shedlov

This is the third post in the bloggERS series #digitalarchivesfail: A Celebration of Failure.

____

The Electronic Records Task Force (ERTF) at the University of Minnesota just completed its second year of work.  This year’s focus was on the processing of electronic records for units of the Archives and Special Collections.  The Archives and Special Collections (ASC) is home to the University of Minnesota’s collection of rare books, personal papers, and organizational archives. ASC is composed of 17 separate collecting units, each focusing on a specific subject area.  Each unit is run separately with some processing activities being done centrally through the Central Processing department.

We realized quickly that, even more than traditional analog records processing, electronic records work would be greatly facilitated by utilizing Central Processing, rather than relying on each ASC unit to ingest and process this material.  In keeping with traditional archival best practices, Central Processing typically creates a processing plan for each collection.  The processing plan form records useful information to use during processing, which may be done immediately or at a later date, and assigns a level of processing to the incoming records. This procedure and form work very well with analog records, and the Task Force initially established the same practice for electronic records.  However, we learned that it is not always efficient to follow current practices and that processes and procedures must be evaluated on a continual basis.

The processing plan is a form about a page and a half long with fields to fill out describing the condition and processing needs of the accession.  Prior to being used for electronic records, fields included: Collection Name, Collection Date Span, Collection Number, Location(s), Extent (pre-processing), Desired Level of Processing, Restrictions/Redactions Needed, Custodial History, Separated Materials*, Related Materials, Preservation Concerns, Languages Other than English, Existing Order, Does the Collection need to be Reboxed/Refoldered, Are there Significant Pockets of Duplicates, Supplies Needed, Potential Series, Notes to Processors, Anticipated Time for Processing, Historical/Bibliographical Notes, Questions/Comments.

A few changes were made to include information about electronic records.  The definitions for Level of Processing were modified to include what was expected for Minimal, Intermediate, or Full level of processing of electronic records.  Preservation Concerns specifically asked if the collection included digital formats that are currently not supported or that are known to be problematic.  After these minor changes were made, the Task Force completed a processing plan for each new electronic accession.

After several months’ experience using the form, Task Force members began questioning the value of the processing plan for electronic records.  In the relatively few instances where accessions were initially reviewed for processing at a later date, it captured useful information for the processor to refer back to without having to start from the beginning. However, the majority of electronic records being ingested were also being processed immediately, and only a handful of the fields were relevant to the electronic records.  The only piece of information being captured on the Processing Plan that was not recorded elsewhere was the expected “level of processing” for the accession.  To address this, the level of processing was added to existing documentation for the electronic accessions, eliminating the need to create a Processing Plan for accessions that were to be processed immediately.

The level of processing itself soon became a point of contention among some of the Electronic Records Task Force members.  For electronic records, the level of processing only defined the level at which the collection would be described on a finding aid – collection, series, sub-series, or potentially to the item.  The following definitions were created based on Describing Archives: A Content Standard (DACS).

Minimal: There will be no file arrangement or renaming done for the purpose of description/discovery enhancement.  File formats will not be normalized.  Action will generally not be taken to address duplicate files or PII identified during ingest.  Description will meet the requirements for DACS single-level description (collection level).

Intermediate: Top level folder arrangement and top-level folder renaming for the purpose of description/discovery enhancement will be done as needed.  File formats will not be normalized. Some duplicates may be weeded and redaction of PII done.  Description will meet DACS multi-level elements: described to the series level with high research value series complemented with scope and content notes.

Full: Top level folder arrangement and renaming will be done as needed, but where appropriate renaming and arrangement may also be done down to the item level.  File normalization may be conducted as necessary or appropriate.  Identified duplicates will be removed as appropriate and PII will be redacted as needed. Description will meet DACS multi-level elements: described to series, subseries, or item level where appropriate with high research value components complemented with additional scope and content notes.

Discussions between the ERTF and unit staff about each accession assisted with assigning the appropriate level of processing.  This “level of processing,” however, did not always correlate with the amount of effort that was being given to an accession.  For example, a collection assigned a minimal level of processing could take days to address while a collection assigned a full level of processing might only take hours based on a number of factors.  Just because the minimal level of processing says that there will be no file arranging or renaming done – for the purpose of description/discovery – does not mean that no file arranging or renaming will be done for preservation or ingest purposes.  File renaming must often be done for preservation purposes.  If unsupported characters are found in file names these must be addressed.  If file names are too long this must also be addressed.
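
As an illustration of the kind of renaming involved, here is a rough sketch of our own making, not the Task Force’s actual procedure, that replaces characters outside a simple whitelist and truncates over-long names. The whitelist and the 200-character cap are assumptions; real limits depend on local policy and the target filesystem.

```python
# Illustrative only -- not the Task Force's procedure. Assumes a simple
# whitelist of "safe" filename characters and an arbitrary 200-character cap.
import re

SAFE_CHARS = re.compile(r"[^A-Za-z0-9._-]")
MAX_LENGTH = 200  # assumed cap; real limits depend on the target filesystem


def sanitize_filename(name):
    """Replace unsupported characters and (naively) truncate over-long
    names, preserving the file extension."""
    stem, dot, ext = name.rpartition(".")
    if not dot:                      # no extension present
        stem, ext = name, ""
    stem = SAFE_CHARS.sub("_", stem)
    ext = SAFE_CHARS.sub("_", ext)
    suffix = dot + ext if dot else ""
    if len(stem) + len(suffix) > MAX_LENGTH:
        stem = stem[: MAX_LENGTH - len(suffix)]
    return stem + suffix


print(sanitize_filename("grant report (draft #2).docx"))
# -> grant_report__draft__2_.docx
```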

Other tasks that might be necessary to assist with the long-term management of these materials include (a rough sketch of how a couple of these could be scripted follows the list):

  • Removing empty directories
  • Removing .DS_Store and Thumbs.db files
  • Removing identified PII; while not necessary for description, doing so better protects the University. The less PII we need to manage, the less risk we carry.
  • Deleting duplicates: as much as the “level of processing” tries to limit this work, continually adding duplicates creates problems down the line for whoever has to manage the storage space. We have a program that easily removes them, so we use it.
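
The sketch below shows how a couple of these housekeeping steps might be scripted. It is ours, not the Task Force’s actual tooling: it deletes common OS-generated junk files and prunes directories left empty, while duplicate removal and PII review are deliberately left out because they call for case-by-case judgment. The accession folder name is hypothetical.

```python
# Rough sketch only -- assumes these housekeeping rules; adjust to local policy.
import os

JUNK_FILES = {".DS_Store", "Thumbs.db"}  # OS-generated sidecar files


def tidy_directory(root):
    """Delete OS junk files, then prune any directories left empty."""
    removed = []
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            if name in JUNK_FILES:
                path = os.path.join(dirpath, name)
                os.remove(path)
                removed.append(path)
        # after removing junk, drop the directory itself if nothing remains
        if not os.listdir(dirpath) and dirpath != root:
            os.rmdir(dirpath)
            removed.append(dirpath)
    return removed


if __name__ == "__main__":
    for path in tidy_directory("accession_2017_001"):  # hypothetical accession folder
        print("removed", path)
```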

Therefore, the “level of processing,” while helpful in setting expectations for final description, does not provide accurate insight into the amount of work being done to make electronic records accessible. In order to address the lack of correlation between the processing level assigned to an accession and the actual level of effort given to processing it, a Levels of Effort document was drafted to help categorize the amount of staff time and resources put forth when working with electronic materials.  The expected level of effort may be more useful for setting priorities than a level of processing, as there is a closer one-to-one relationship with the amount of time required to complete the processing.

This is another example of how we were not able to directly apply procedures developed for analog records towards electronic records. The key is finding the balance between not reinventing the wheel and doing things the way they have always been done.

Next Steps...Processing of and Access to Electronic Records
Poster by Valerie Collins and Carol Kussmann, on behalf of the University of Minnesota’s Electronic Records Task Force: https://osf.io/qya26/

____

Carol Kussmann is a Digital Preservation Analyst with the University of Minnesota Libraries Data & Technology – Digital Preservation & Repository Technologies unit, and co-chair of the University of Minnesota Libraries – Electronic Records Task Force (ERTF). Questions about the activities of the ERTF can be sent to: lib-ertf@umn.edu.

Lara Friedman-Shedlov is Description and Access Archivist for the Kautz Family YMCA Archives at the University of Minnesota Libraries.  Her current interests include born digital archives and diverse and inclusive metadata.

#digitalarchivesfail: Well, That Didn’t Work

By A.L. Carson

This is the first post in the bloggERS series #digitalarchivesfail: A Celebration of Failure.

____

They say you learn more from failure than from success. FIAT was a great teacher.

This is a story about never giving up, until you do: about the project where nothing went right, and just kept going. It takes place at the University of Texas, Austin, School of Information (iSchool) in the Digital Archiving course. A big part of that class is the hands-on technology project, where students apply archival theory to legacy hardware, digital records, or a mix of both. Our class had three mothballed servers and three ancient personal computers available; my group was assigned the largest (and oldest) of the retired School of Information servers, a monster tower-chassis Dell PowerEdge 4400 called FIAT.

Our assignment was clear: following archival principles, gain access to the machine’s filesystem, determine dates of service, inventory the contents, and image the disks or otherwise retrieve the data. We had an advantage in that we knew what FIAT had been used for: the backbone server for the iSchool, serving the public-facing website and the home directories for faculty, staff, and students. In light of this, we had one additional task: locate the old website directory. Hopefully, at the end of the semester, we would have a result to present for the iSchool Open House.

As the only one in the group with Linux server experience (I’d been the school’s deputy systems administrator for about a year), I volunteered as technical lead and immediately began worrying about what would go wrong.

It would be easier to list what went right.

We got access to the machine. We estimated manufacturing and usage dates. We determined the drive configuration. We were able to view the file directory, and we located the old iSchool website.

That’s it.

The catalog of dead ends, surprises, and failures is rather longer, and began almost immediately. None of us had done anything like this before, but I had enough experience with servers to develop specific fears, which may or may not have been better than the general anxiety my group members suffered, and turned out to be largely misplaced.

I was sure that FIAT had been set up in a RAID array, but I didn’t know the specifications or how to image it. (1) To find out without directly accessing the machine– which might have compromised the integrity of its filesystem– we needed its Dell service tag number. If we could give that to the iSchool’s IT coordinator (my boss), Dell’s lookup tool would tell us what we needed to know.

The service tag had been scraped off.

That was annoying, but not fatal. Since we had the model number, we could find a manual; with access to the iSchool’s IT inventory, I could look up the IT control tag and see what information we had. From this, we determined that FIAT was produced between 1999 and 2003, could have been set up for either hardware or software RAID depending on a hardware feature, and was probably running the operating system Red Hat v2.7. That gave us a ballpark for service life. It didn’t move us forward, though, so while my compatriots researched RAID imaging strategies, I looked for another route.

Best practice for computer accessions calls for accessing the machine from a “dead” state, so that metadata doesn’t get overwritten and the machine can be preserved in its shutdown state. For us, that meant booting from a Live CD, a distribution of Linux which runs in the RAM and mounts, or attaches, to the filesystem without engaging the operating system, allowing us to see everything without altering the data. My thought was that we could boot that way and then check for a RAID configuration at the system level: open the box with the crowbar inside it.

And it would have worked, too, if it weren’t for Murphy.

After making the live CD, we turned FIAT on and adjusted the boot order in the BIOS so we could boot from the disk. We learned three things from this: first, and most frighteningly, one of the drive ports didn’t show up in the boot sequence (and another spun up with the telltale whine of a Winchester drive going bad, increasing the pressure to get this done). Second, the battery on FIAT’s internal clock must have died, because it displayed a date in May 2000 (which we figured was probably when the board had been installed). Third, neither the service tag number nor the processor serial number appeared in the BIOS, so we still couldn’t look it up.

Changing the boot order in the BIOS: note the blank where the Service Tag number ought to be.

Carrying merrily on, we went ahead with the live CD boot. What happened next was our mistake, and I only realized it later. Though Knoppix (the Linux OS we were running from the live CD) started and ran, the commands for displaying partitions and drives returned no results, and navigating to /dev (where the drives mount in Linux) didn’t reveal any mount points. Nothing in the filesystem looked right, either.

What had happened (and a second attempt made this apparent) was that Knoppix hadn’t mounted at all. It was just running in the RAM. We hadn’t noticed the error message that told us this because we were too excited that the CD drive had worked. Knocked back but hardly defeated, we took a week off to email smarter people and regroup.

Knoppix failing to mount and unable to debug.

The next thing we did involved a screwdriver.

Popping the side off to read the diagram and locate the RAID controller key– or not, as it happened–was mildly cathartic and hideously dusty. I spent the next three days sneezing. Without a hardware controller, I was certain that the machine had been set up with a software RAID; since our attempts to boot from the CD had failed, I proposed that we pull the drives and image them separately with the forensic hardware we had available. My theory was that, since the RAID was configured in the software, we could rebuild it from disk images. This theory did not have a chance to be disproved.

That blue thing between the chipset and the rail is where the hardware RAID controller key wasn’t.

Unscrewing the faceplate and pulling the drives gave me a certain amount of satisfaction, I’ll admit. It also solved the mystery of the missing drive: the reason why one of the SCSI ports wasn’t coming up on the boot screen was that it was empty. With that potential catastrophe averted, we imagined ourselves well set on our way to imaging the disks. Until we discovered that the Forensic Recovery of Evidence Device Laptop (or FRED for short) in the Digital Archaeology Lab didn’t have cables capable of connecting our 80-pin SCSI-2 drives to its 68-pin SCSI-3 write blocker. And that, despite having a morgue’s worth of old computer cables and connectors, there wasn’t anything in the lab with the right ends. That’s when I started fantasizing about making FIAT into a lamp table.

So, while my comrades returned to preparing a controlled vocabulary for our pictures and drafting up metadata files (remember, we never actually gave up on getting the data), I called or drove to every electronics store in town, including the Goodwill computer store. I found a lot of legacy components and machines, but nothing that would convert SCSI-2 to SCSI-3; so I put out a call to my nerd friends to find me something that would work.

They tried.

With their help, I found an adaptor with SCSI-2 on one side and SCSI-3 on the other. When it arrived, I met up with one of my groupmates at the Digital Archaeology Lab, where the two of us daisy-chained the FRED cables, write blocker, our connector, and the (newly labeled according to our naming convention) drives to see what would happen.

The short version is: nothing.

The SCSI-2 to SCSI-3 to write blocker to FRED-L daisychain that didn’t work.

The longer version is: complicated nothing. Some of the drives, attached to the power supply and write blocker, didn’t turn on at first, then did later without us changing anything. The write blocker’s SCSI connection light never lit up. FRED never registered an attached drive. We tried several jumper combinations, all with the same result: when we could get the drives to turn on at all, the write blocker couldn’t see them, and neither could FRED.

Having exhausted our options for doing it the right way, we explained the situation to our professor, Dr. Pat Galloway (who I think was enjoying our object lesson in Special Problems), and got permission to just turn FIAT on and access it directly. I put the drives back in, we tried booting with Knoppix again just in case (revealing the error), then changed the boot order back and watched it slowly come back to life.

Of course no one had the password.

Illustrating the adage “if physical access has been achieved, consider security compromised,” I put FIAT into Single User Mode, allowing me to reset the root password (we put it in the report, don’t worry) and become its new boss. (2)

Our major success! The RAID 5 rebuilding itself.

This is where it got weird. And exciting! Prior to this, we’d been frustrated; afterwards, we upgraded to baffled. After watching FIAT rebuild itself as a RAID 5 array, we had to figure out what to do next: how to image the machine, and onto what.

We made three attempts to connect FIAT to something, or something to FIAT, each of which resulted in its own unique kind of failure.

After noticing a SCSI-3 port on the back of FIAT–a match to the one on FRED’s write blocker cables–and with no idea if this would even work for a live machine, I proposed plugging the two together to see what happened.

Again, the short answer is: nothing. We tried it both through FRED’s write-blocker and directly to the laptop, but neither FRED nor FIAT registered a connection. Checking for drives showed no new devices, and no connection events appeared in the log file or the kernel message buffer. (3)

Our next bright idea was to attach a USB storage device and run a disk dump to it. We formatted a drive, plugged it in, and prepared for nothing to happen. For once, it didn’t. Instead, FIAT reported an error addressing the device that even Stack Overflow didn’t recognize. I found the error class, however: kernel issues. We thought that maybe the drive was too big, or too new, so we hunted up an older USB drive and tried again. Same result. Then we rebooted. The error messages stopped, but no new connection registered. I tried stopping the service, removing the USB module, and restarting the service, but both ports continued throwing errors.

No luck.

USB errors outputting even as we check df -h and see only the SCSI drives.

Servers are meant to be networked. Hoping that FIAT’s core functions remained intact, I acquired some cables and a switch and rigged up a local area network (LAN) between FIAT and my work laptop. If it worked, we could send the disk dump command to a destination drive attached to my Mac. Or we could stare at the screen while FIAT dumped line after line of ethernet errors, which seemed like more fun. Again, I stopped the service, cleared the jobs on the controller (FIAT only had one ethernet port), restarted it, then restarted the service, but the timeout errors persisted (4).

Ethernet errors outputting after attempting to connect FIAT to a LAN.

FIAT’s total solipsism suggested a dead south bridge as well as serious kernel problems. While it might have been possible for an electrical engineer (i.e., not us) to overcome the former, the latter presented a catch-22: even if we were willing to alter the operating system (no), the only way to fix FIAT would have been with an update, which, thanks to the kernel errors, we couldn’t perform.

To be clear: those three things all happened in about two hours.

During this project, we kept reflective journals, accessible only to ourselves and our professor. The final entry in mine simply reads: “I think I’m so smart.”

With about a week left, I had an idea. Other than how to convert a tower-chassis server into an end table. I discussed it with the group, and when we couldn’t find anything wrong with it, I suggested they start the final report and presentation: if this worked, we’d have something to turn in. If not, I’d write my section of the report and we’d call it done.

We had found the website while exploring the filesystem. FIAT had a CD drive. I could compress and copy the website directory to a CD and we could turn that in. It wasn’t ideal, but we’d have something to show for a semester’s worth of work.

So while my compatriots got the project documentation ready for ingest to the iSchool’s DSpace, I went to work on FIAT one last time. I covered my bases–researched the compression protocols Red Hat 2.7 supported, the commands I’d need, how to find the hardware location so I could write the file out once it was created.

FIAT being FIAT, I hit a snag: the largest CD I had available was 700MB, but the real size of the website directory was 751MB. After a little investigation, I decided which files and folders we could live without (and put locations and my reasoning in the report): excluding them, I created an .iso smaller than 700MB.
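
For readers curious about the sizing arithmetic, the sketch below shows one way to total a directory tree and rank its largest subdirectories against a 700MB CD. It is not the author’s actual process, and the website path is hypothetical.

```python
# Not the author's actual process -- just a sketch of the sizing arithmetic:
# total a directory tree and list the largest subdirectories, to decide what
# could be excluded to fit under a 700 MB CD.
import os

CD_CAPACITY = 700 * 1024 * 1024  # ~700 MB in bytes


def tree_size(path):
    """Total size in bytes of all files under path."""
    total = 0
    for dirpath, _, filenames in os.walk(path):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # broken symlinks, unreadable files
    return total


website = "/var/www/ischool"  # hypothetical path to the website directory
overall = tree_size(website)
print(f"{overall / 1024**2:.0f} MB total; over capacity by "
      f"{max(0, overall - CD_CAPACITY) / 1024**2:.0f} MB")

# The largest immediate subdirectories are the first candidates for exclusion.
sizes = [(tree_size(e.path), e.name) for e in os.scandir(website) if e.is_dir()]
for size, name in sorted(sizes, reverse=True):
    print(f"{size / 1024**2:8.1f} MB  {name}")
```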

That file still resides in the directory where I put it. The final indignity, FIAT’s sting in the tail, was this: it had a CD drive, which I found with cdrecord -scanbus. What it did not have was a CD burner. Attempting to write the .iso to disk, cdrecord returned an error: unsupported drive at bus location. FIAT can read, but it can’t write.

And then we were out of time. After my last idea fizzled, we gave up on FIAT and put together our final report, including suggestions for further work and server decommissioning recommendations for never letting this happen again. I presented the project at the Open House anyway, sitting on FIAT (the casters on the tower model made moving its 115lb bulk a breeze) for four hours, showing people the file structure and regaling them with tales of failure. Talking over the fantastic noise it made, while the other members of my group held down their own project posters, I found that people appreciate a good comedy of errors. I’ve embraced FIAT as my shaggy-dog story, my archival albatross. And now I know what to say in job interviews when they ask me to “talk about a time you didn’t succeed.”

 

The author would like to acknowledge the efforts and contributions of their fellow-travelers, Arely Alejandro, Maria Fernandez, Megan Moltrup, and Olivia Solis, as well as the guidance and assistance of Dr. Patricia Galloway, Sam Burns, and members of the UT Austin storage ITS team, without whom none of this would have happened.

____

1 Redundant Array of Inexpensive (or Independent) Disks, a storage virtualization method which uses either hardware or software to combine multiple physical drives into a single logical unit, improving read/write speeds and providing redundancy to protect against drive failure. RAID arrays can be set up at a number of levels depending on user need, all of which have their own implications for preservation and data recovery.

2 Something I had no prior experience with- certainly not with a Red Hat 2.7 machine! I spent more time looking up error codes, troubleshooting, and searching for workarounds than interacting with FIAT.

3 Throughout, I used the command df -h on FIAT to display drives, and read the kernel message buffer, where information about the operating status of the machine can be read, with dmesg.

4 I cannot emphasize enough how much on-the-fly learning occurred during this part- even as a low-level systems administrator, trying to get FIAT to talk to something involved a lot of new material for me.

____

A.L. Carson, MSIS UT ‘16, is the only archivist on Earth who is allergic to cats. Trained as a digital archivist, they now apply those perspectives on metadata and digital repositories as a Library Fellow at the University of Nevada, Las Vegas. Twitter: @mdxCarson

Preservation and Access Can Coexist: Implementing Archivematica with Collaborative Working Groups

By Bethany Scott

The University of Houston (UH) Libraries made an institutional commitment in late 2015 to migrate the data for its digitized and born-digital cultural heritage collections to open source systems for preservation and access: Hydra-in-a-Box (now Hyku!), Archivematica, and ArchivesSpace. As a part of that broader initiative, the Digital Preservation Task Force began implementing Archivematica in 2016 for preservation processing and storage.

At the same time, the DAMS Implementation Task Force was also starting to create data models, use cases, and workflows with the goal of ultimately providing access to digital collections via a new online repository to replace CONTENTdm. We decided that this would be a great opportunity to create an end-to-end digital access and preservation workflow for digitized collections, in which digital production tasks could be partially or fully automated and workflow tools could integrate directly with repository/management systems like Archivematica, ArchivesSpace, and the new DAMS. To carry out this work, we created a cross-departmental working group consisting of members from Metadata & Digitization Services, Web Services, and Special Collections.


Exploring Digital Preservation, Digital Curation, and Digital Collections in Mexico

By Natalie Baur

This is the fourth post in our series on international perspectives on digital preservation.

___

During the 2015-2016 academic year, I received a Fulbright García-Robles fellowship to pursue research relating to the state of digital preservation initiatives and digital information access in Mexico. The Instituto de Investigaciones Bibliotecológicas y de la Información at the Universidad Nacional Autónoma de México in Mexico City graciously hosted me as a visiting researcher, and I worked with leading Mexican digital preservation expert Dr. Juan Voutssás.

In Mexico, I was able to conduct interviews with nearly thirty organizations working on building, managing, sharing and preserving their digital collections. The types of organizations I visited were diverse in several areas: geographic location (i.e. outside of heavily centralized Mexico City), organization size, organization mission, and industry sector.

  • Cultural Heritage organizations (galleries, libraries, archives, museums)
  • Government institutions
  • Business/For-profit organizations
  • College and University archives and libraries

Because of the diversity of the institutions I visited, the results and conclusions I drew were also varied, and I noticed distinct trends within each category of institution. For brevity, I have abbreviated my findings in the following bullet points. These are not meant to be definitive or exhaustive, as I am still compiling, codifying and quantifying interview data.

  • The focus on digital collection building and preservation in business and government tends toward records management approaches. Retention schedules are dictated by the federal government and administered and enforced by the National Archives. All federal and state government entities are obligated to follow these guidelines for retention and transfer of records and archives. While the guidelines and processes for paper records are robust, many institutions are only beginning to implement and use electronic records management platforms. Long-term digital preservation of records designated for permanent deposit is an ongoing challenge.
  • In cultural heritage institutions and college and university archives, digital collection work is focused on building digitization and digital collection management programs. The primary focus of the majority of institutions is still on digitization, storage and diffusion of digitized assets, and wrangling issues related to long-term, sustainable maintenance of digital collections platforms and backups on precarious physical media formats like optical disks and (non-redundant) hard drives.
  • While digital preservation issues are still in the nascent stages of being worked through and solved everywhere around the globe, in some areas strong national and regional groups have been formed to help share strategies, create standards and think through local solutions. In Mexico and Latin America, this has mostly been done through participation in the InterPARES project, but a national Mexican digital preservation consortium, similar to the National Digital Stewardship Alliance (NDSA) in the United States, has yet to be established. In the meantime, several Mexican academic and government institutions have taken the lead on digital preservation issues, and through those initiatives, a more cohesive, intentional organization similar to the NDSA may be able to take root in the near future.

My opportunity to live and do research in Mexico was life-changing. It is now more crucial than ever for librarians, archivists, developers, administrators, and program leaders to look outside of the United States for collaborations and opportunities to learn with and from colleagues abroad. The work we have at hand is critical, and we need to share all the resources we have, especially those resources money cannot buy: a different perspective, diversity of language, and the shared desire to make the whole world, not just our little corner of it, a better place for all.  


Natalie Baur is currently the Preservation Librarian at El Colegio de México in Mexico City, an institution of higher learning specializing in the humanities and social sciences. Previously, she served as the Archivist for the Cuban Heritage Collection at the University of Miami Libraries and was a 2015-2016 Fulbright-García Robles fellowship recipient, looking at digital preservation issues in Mexican libraries, archives and museums. She holds an M.A. in History and a certificate in Museum Studies from the University of Delaware and an M.L.S. with a concentration in Archives, Information and Records Management from the University of Maryland. She is also co-founder of the Desmantelando Fronteras webinar series and the Itinerant Archivists project. You can read more about her Fulbright-García Robles fellowship here.

Digital Preservation, Eh?

by Alexandra Jokinen

This is the third post in our series on international perspectives on digital preservation.

___

Hello / Bonjour!

Welcome to the Canadian edition of International Perspectives on Digital Preservation. My name is Alexandra Jokinen. I am the new(ish) Digital Archives Intern at Dalhousie University in Halifax, Nova Scotia. I work closely with the Digital Archivist, Creighton Barrett, to aid in the development of policies and procedures for some key aspects of the University Libraries’ digital archives program—acquisitions, appraisal, arrangement, description, and preservation.

One of the ways in which we are beginning to tackle this very large, very complex (but exciting!) endeavour is to execute digital preservation on a small scale, focusing on the processing of digital objects within a single collection, and then using those experiences to create documentation and workflows for different aspects of the digital archives program.

The collection chosen to be our guinea pig was a recent donation of work from esteemed Canadian ecologist and environmental scientist, Bill Freedman, who taught and conducted research at Dalhousie from 1979 to 2015. The fonds is a hybrid of analogue and digital materials dating from 1988 to 2015. Digital media carriers include: 1 computer system unit, 5 laptops, 2 external hard drives, 7 USB flash drives, 5 zip disks, 57 CDs, 6 DVDs, 67 5.25 inch floppy disks and 228 3.5 inch floppy disks. This is more digital material than the archives is likely to acquire in future accessions, but the Freedman collection acted as a good test case because it provided us with a comprehensive variety of digital formats to work with.

Our first area of focus was appraisal. For the analogue material in the collection, this process was pretty straightforward: conduct macro-appraisal and functional analysis by physically reviewing material. However (as could be expected), appraisal of the digital material was much more difficult to complete. The archives recently purchased a forensic recovery of evidence device (FRED) but does not yet have all the necessary software and hardware to read the legacy formats in the collection (such as the floppy disks and zip disks), so we started by investigating the external hard drives and USB flash drives. After examining their content, we were able to get an accurate sense of the information they contained, the organizational structure of the files, and the types of formats created by Freedman. Although we were not able to examine files on the legacy media, we felt that we had enough context to perform appraisal, determine selection criteria and formulate an arrangement structure for the collection.

The next step of the project will be to physically organize the material. This will involve separating, photographing and reboxing the digital media carriers and updating a new registry of digital media that was created during a recent digital archives collection assessment modelled after OCLC’s 2012 “You’ve Got to Walk Before You Can Run” research report. Then, we will need to process the digital media, which will entail creating disk images with our FRED machine and using forensic tools to analyze the data.  Hopefully, this will allow us to apply the selection criteria used on the analogue records to the digital records and weed out what we do not want to retain. During this process, we will be creating procedure documentation on accessioning digital media as well as updating the archives’ accessioning manual.

The project’s final steps will be to take the born-digital content we have collected and ingest it using Archivematica to create Archival Information Packages for storage and preservation, with access provided via the Archives Catalogue and Online Collections.

So there you have it! We have a long way to go in terms of digital preservation here at Dalhousie (and we are just getting started!), but hopefully our work over the next several months will ensure that solid policies and procedures are in place for maintaining a trustworthy digital preservation system in the future.

This internship is funded in part by a grant from the Young Canada Works Building Careers in Heritage Program, a Canadian federal government program for graduates transitioning to the workplace.

___


Alexandra Jokinen has a Master’s Degree in Film and Photography Preservation and Collections Management from Ryerson University in Toronto. Previously, she worked as an Archivist at the Liaison of Independent Filmmakers of Toronto and completed a professional practice project at TIFF Film Reference Library and Special Collections.

Connect with me on LinkedIn!

Announcing the Second #bdaccess Twitter Chats: 2/16 @ 2 and 9pm EST

By Daniel Johnson and Seth Anderson

This post is the seventeenth in a bloggERS series about access to born-digital materials.

____

Contemplating how to provide access to born-digital materials? Wondering how to meet researcher needs for accessing and analyzing files? We are too! Join us for a Twitter chat on providing access to born digital records. This chat will help inform the Born Digital Access Bootcamp: A Collaborative Learning Forum at the New England Archivists spring meeting.

*When?* Thursday, February 16, at 2:00pm and 9:00pm EST
*How?* Follow #bdaccess for the discussion
*Who?* Information professionals, researchers, and anyone else interested in managing or using born-digital records

The newly conceived #bdaccess chats are organized by an ad hoc group that formed at the 2015 SAA annual meeting. We are currently developing a bootcamp to share ideas and tools for providing access to born-digital materials and have teamed up with the Digital Library Federation to spread the word about the project. Information and a Storify about our previous Twitter chat are available in a previous bloggERS post.

Understanding how researchers want to access and use digital archives is key to our curriculum’s success, so we’re taking it to the Twitter streets to gather feedback from practitioners and researchers. The following six questions will guide the discussion:

Q1. What is your biggest barrier to providing #bdaccess to material?

Q2. What do you most want to learn about providing #bdaccess?

Q3. What factors and priorities (whether format-based, administrative, etc.) motivate your institution to provide #bdaccess?

Q4. Have you conducted user testing on any of your #bdaccess mechanisms?

Q5. Who do you rely on in providing #bdaccess or in planning to do so?

Q6. Would you be willing to showcase your methods of #bdaccess at the NEA Bootcamp?

Can’t join the chat on 2/16/2017 ? Follow #bdaccess for ongoing discussion and future chats!

____

Daniel Johnson is the digital preservation librarian at the University of Iowa, exploring, adapting, and implementing digital preservation policies and strategies for the long-term protection and access to digital materials.

Seth Anderson is the project manager of the MoMA Electronic Records Archive initiative, overseeing the implementation of policy, procedures, and tools for the management and preservation of the Museum of Modern Art’s born-digital records.

Consortial Certification Processes: the Goportis Digital Archive—a Case Study

By Franziska Schwab, Yvonne Tunnat, and Dr. Thomas Gerdes

This is the second post in our series on international perspectives on digital preservation.

___

The Goportis Consortium consists of the three German National Subject Libraries: the TIB Hannover, ZB MED Cologne/Bonn and the ZBW Kiel/Hamburg.

One key area of collaboration is digital preservation. We have jointly used the Goportis Digital Archive, which is based on Ex Libris’s Rosetta, since 2010. The certification of our digital archive is part of our quality management, since all workflows are evaluated. Beyond that, a certification seal signals to external parties, such as stakeholders and customers, that the long-term availability of the data is ensured and the digital archive is trustworthy.

So far TIB and ZBW have successfully completed the certification processes for the Data Seal of Approval (DSA) and are currently working on the application for the nestor Seal. Here are some key facts about the seals:

  • Data Seal of Approval: awarded since 2010; 16 guidelines; focus on Ingest, Preservation, and Access; 64 certified institutions (01/2017)
  • nestor Seal: awarded since 2014; 34 criteria; focus on Ingest, Preservation, Access, and Organization & Sustainability Aspects; 2 certified institutions (01/2017)

Distribution of Tasks

In general, we are equal partners. For digital preservation, though, TIB is the consortium leader, since it is the software licensee and hosts the computing center.

Due to the terms of the DSA—as well as those of the nestor Seal—a consortium cannot be certified as a whole, but only each partner individually. For that reason each partner drew up its own application. However, for some aspects of the certification ZBW had to refer to the answers of TIB, which functions as its service provider.

Besides these external requirements, we organized the distribution of tasks on the basis of internal goals as well. We interpreted the certification process as an opportunity to gain deeper insight into the workflows, policies and dependencies of our partner institutions. That is why we analyzed the DSA guidelines together. Moreover, we discussed the progress of the application process regularly in telephone conferences and matched our answers to each guideline. As a positive side effect, this way of proceeding not only strengthened our teamwork, it also led to a better understanding of the guidelines and more elaborate answers for the DSA application.

The documentation for the DSA was created in more detail than recommended, in order to facilitate further use of the documents for the nestor Seal.

Time Frame

The certification process for the DSA extended over six months (12/2014–08/2015). In each institution, one employee was in charge of the certification process. Other staff members added specific information about their respective areas of work; they included technical developers, data specialists, legal professionals, team leaders, and system administrators (TIB only). The costs of applying for the seal can be measured in person months:

  • TIB: person responsible ~3 person months; other staff ~0.25; total ~3.25
  • ZBW: person responsible ~1.5 person months; other staff ~0.1; total ~1.6

Outlook: nestor Seal

The nestor Seal represents the second level of the European Framework for Audit and Certification of Digital Repositories. With its 34 criteria, it is more complex than the DSA. It also requires more detailed information, which makes it necessary to involve more staff from different departments. The time required is not yet foreseeable.

Map with relationships between the nestor criteria.

 

Based on our positive experiences with the DSA certification, we plan to acquire the nestor Seal following the same procedures. The DSA application has prepared the ground for this task, since important documents, such as policies, have already been drafted.

___

Franziska Schwab has been working as a Preservation Analyst in the Digital Preservation team at the German National Library of Science and Technology (TIB) since 2014. She’s responsible for Pre-Ingest data analysis, Ingest, process documentation, policies, and certification.

Yvonne Tunnat has been the Digital Preservation Manager for the Leibniz Information Centre for Economics in Kiel/Hamburg (ZBW) since 2011. Her key working areas are format identification, validation, and preservation planning.

Dr. Thomas Gerdes has been part of the Digital Preservation team of the Leibniz Information Centre for Economics in Kiel/Hamburg (ZBW) since 2015. His interests are in the field of certification methods.