Preservation Beyond the Bits: An Interview with Linda Tadic

Linda Tadic is the founder and CEO of Digital Bedrock (www.digitalbedrock.com), a managed digital preservation and consulting service in Los Angeles. A founding member and former president of the Association of Moving Image Archivists, she has written and lectured on AV metadata, copyright, and digital asset management and preservation. She is an adjunct professor in the Media Archives Studies program in UCLA’s Department of Information Studies.

We asked Tadic about her research into the environmental consequences of digital preservation. Her presentation “The Environmental Impact of Digital Preservation,” which she’s given in Portland, OR, Singapore, and Paris, describes the relationship of digital preservation to ongoing environmental degradation and outlines ways archivists and archival institutions can lessen their impact. Slides and notes for the presentation can be found at www.digitalbedrock.com/resources.

This interview was conducted over email.


Keeping Track of Time with Data Accessioner

By Kevin Dyke

This post is the fourth in our Spring 2016 series on processing digital materials.

———

When it comes to processing large sets of electronic records, it’s all too easy to get so wrapped up in the task at hand that when you finally come up for air you look at the clock and think to yourself, “Where did the time go? How long was I gone?” Okay, that may sound a bit melodramatic, but tracking time spent is an important yet easily overlooked step in electronic records processing.

At the University of Minnesota Libraries, the members of the Electronic Records Task Force are charged with developing workflows and making estimates for future capacity and personnel needs. In an era of very tight budgets, making a strong, well-documented case for additional personnel and resources is critical. To that end, we’ve made some efforts to more systematically track our time as we pilot our workflows.

Chief among those efforts has been a customization of the Data Accessioner tool. Originally written for internal use at the David M. Rubenstein Rare Book & Manuscript Library at Duke University, the project has since become open source, with support for recent releases coming from the POWRR Project. Written in Java and using the common logging library log4j, Data Accessioner is structured in a way that made it possible for someone like me (familiar with programming, but without much Java experience) to enhance its time-logging functionality. While some accession tasks take only a few minutes, others can run for many hours (if not days). Enhancing the logging functionality of Data Accessioner allows staff to see exactly how long any data transfer takes, without needing to be physically present. The additional functionality was in itself pretty minor: log the time and folder name before accessioning of a folder begins and again upon its completion. The most complex part of this process was not writing the additional code, but rather modifying the log4j configuration. Luckily, with an existing configuration file, solid documentation, and countless examples in the wild, I was able to produce a version of Data Accessioner that outputs a daily log as a plain text file, which makes time tracking accessioning jobs much easier. You can see a fuller description of the changes I made and the log output formatting on GitHub. You can download a ZIP file of the application with this addition from that page as well, or use this download link.
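To give a sense of what that configuration work involves, here is a minimal sketch of a log4j 1.x setup that timestamps every message and rolls the log over to a new file each day. It is illustrative only: the appender name, file path, and layout pattern are assumptions, not Data Accessioner’s actual configuration.

    # Illustrative log4j 1.x configuration (not Data Accessioner's real file):
    # write INFO-level messages to a plain-text log that rolls over daily.
    log4j.rootLogger=INFO, daily

    log4j.appender.daily=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.daily.File=logs/accession.log
    log4j.appender.daily.DatePattern='.'yyyy-MM-dd
    log4j.appender.daily.layout=org.apache.log4j.PatternLayout
    # Timestamp each line so a folder's start and end times can be read off the log.
    log4j.appender.daily.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n

With a layout like this, one logger.info() call before a folder is accessioned and another upon completion are enough to bracket each transfer with timestamps.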

Screenshots and a sample log file:

Main Data Accessioner folder
Contents of the log folder
Sample of the beginning and end of a log file, showing the start and end times for a file migration

With this change, we are now able to better estimate the time it takes to use Data Accessioner. Do the tools you use keep track of how long they take to run? If not, how are you tracking this? Questions or comments can be sent to lib-ertf [at] umn [dot] edu.

———

Kevin Dyke is the spatial data analyst/curator at the University of Minnesota’s John R. Borchert Map Library. He’s a member of the University of Minnesota Libraries’ Electronic Records Task Force, works as a data curator for the Data Repository for the University of Minnesota (DRUM), and is also part of the Committee on Institutional Cooperation’s (CIC) Geospatial Data Discovery Project. He received a master’s degree in Geography from the University of Minnesota and can be reached at dykex005 [at] umn.edu.

#snaprt chat Flashback: Archivist and Technologist Collaboration

By Ariadne Rehbein

This is a cross-post in coordination with the SAA Students and New Archives Professionals Roundtable.

The spirit of community at the 2016 Code4Lib Conference in Philadelphia (March 7-10) served as inspiration for a recent SAA Students and New Archives Professionals Roundtable #snaprt Twitter chat. The conference was an exciting opportunity for archivists and librarians to learn about digital tools and projects that are free to use and open for further development, discuss needs for different technology solutions, gain a deeper understanding of technology work, and engage with larger cultural and technical issues within libraries and archives. SNAP’s Senior Social Media Coordinator hosted the chat on March 15, focusing the discussion on collaboration between archivists and technologists.

Many of the chat questions were influenced by discussions in the Code4Archives preconference workshop breakout group, “Whose job is that? Sharing how your team breaks down archives ‘tech’ work.” On the last day of the conference, SNAP invited participants through different Code4Lib and Society of American Archivists channels, such as the conference hashtag (#c4l16), the Code4Lib listserv, various SAA listservs, and the SNAP Facebook and Twitter accounts. All were invited to share suggestions or discussion questions for the chat. Participants included archives students and professionals with varying levels of experience and areas of focus, such as digital curation, special collections, university archives, and government archives. Our chat questions were:

  • How do the expertise and knowledge of archivists and technologists who work together often overlap or differ? How much is important to understand of one another’s work? What are some ways to increase this knowledge?
  • What are some examples of technologies that archives currently use? What are their goals, and what are they used to do?
  • Who created and maintains these tools? Why might an archive choose one tool over another?
  • What kinds of tools and tech skills have new archivists learned post-LIS? What is this learning process like?
  • What are some examples of tasks or projects in an archival setting where the expertise of technologists is essential or extremely helpful? Please share any tips from these experiences.
  • Do you know of any blogs/posts that are helpful for born digital preservation / AV preservation / digitized content workflow?

Several different themes emerged in the chat:

  • The importance of an environment that supports relationships between those of different backgrounds and skills. Participants suggested developing a shared vocabulary to clearly convey information and providing casual opportunities to meet.
  • The decision to implement a technology solution to serve a need may involve a variety of considerations, such as level of institutional priority, cost, availability of technology professionals to manage or build the system, security, and applicability to other needs.
  • Participants suggested that students gain skills with a variety of different technologies, including relational databases, command line basics, Photoshop, VirtualBox, BitCurator, and programming (through online tutorials). The ability and willingness to learn on the job and teach others is important too! These are useful tools and may also help build a shared vocabulary.
  • Participants had engaged in a number of collaborative tasks or projects, such as performing digital forensics, building DIY History at the University of Iowa, implementing systems such as Preservica, and determining digital preservation storage solutions.
  • Some great resources are available for born-digital, digitized, and audiovisual preservation, including AV Preserve, the Digital Curation Google Group, the BitCurator Consortium, The Signal blog, Chris Prom’s Practical E-Records, the Code4Lib listserv, Digital Preservation News, and National Digital Stewardship Residency blog posts.

Please visit Storify to read the full chat:

Storify of the #snaprt chat about archivists and technologists

Many thanks to Wendy Hagenmaier of the ERS Steering Committee for inviting SNAP to share this post. #snaprt Twitter chats typically take place three times per month, on or around the 5th, 15th, and 25th, at 8 PM ET. Participation is open to anyone interested in issues relevant to MLIS students and new archives professionals. To learn more about the chats, please visit our webpage.

Ariadne Rehbein strives to support students and new archives professionals as SNAP Roundtable’s Senior Social Media Coordinator. As Digital Asset Coordinator at the Arizona State University Libraries, she focuses on processing and stewardship of digital special collections and providing expertise on issues related to digital forensics, asset management workflows, and policies in accordance with community standards and best practices. She is a proud graduate of the Department of Information and Library Science at Indiana University Bloomington.

Recent Changes in How Stanford University Libraries is Documenting Born-Digital Processing

By Michael G. Olson

This post is the third in our Spring 2016 series on processing digital materials.

———

Stanford University Libraries is in the process of changing how it documents its digital processing activities and records lab statistics. This is the third iteration in six years of how we track our born-digital work, and it is a collaborative effort between Digital Library Systems and Services, our Digital Archivist Peter Chan, and Glynn Edwards, who manages our Born-Digital Program and is the Director of the ePADD project.

Initially we documented our statistics using a library-hosted FileMaker Pro database, focusing on tracking media counts and media failure rates. After a single year of using the database, we decided that we needed to modify the data structure and the data entry templates significantly, but our staff found the database too time-consuming and cumbersome to modify.

We decided to simplify and replaced the database with a spreadsheet stored with our collection data. Our digital archivist and hourly lab employees were responsible for updating this spreadsheet when they had finished working with a collection. This was a simple solution that was easy to edit and update, and it worked well for four years until we realized we needed more data for our fiscal year-end reports. As our born-digital program has grown and matured, we discovered we were missing key data points that documented important processing decisions in our workflows. It was time to again improve how we documented our work.

Stanford Statistics Spreadsheet, version 2

For our brand-new version of work tracking, we have decided to continue using a spreadsheet but have migrated our data to Google Drive to better facilitate updates and versioning of our documentation. New data points have been included to better track specific types of born-digital content, such as email. This new version also allows us to better document the processing lifecycle of our born-digital collections. To do this, we have created the following additional data points:

  • Number of email messages
  • Email in ePADD.stanford.edu
  • File count in media cart
  • File size on media cart (GB)
  • SearchWorks (materials discoverable / available in library catalog)
  • SpotLight Exhibit (a virtual exhibit)

Stanford Statistics Spreadsheet, version 3

We anticipate that evolving library administrative needs, the continually changing nature of born-digital data, and new methodologies for processing these materials will make it necessary to again change how we document our work. Our solution is not perfect but is flexible enough to allow us to reimagine our documentation strategy in a few short years. If anyone is interested in learning more about what we are documenting and why, please do let us know, as we would be happy to provide further information and may learn something from our colleagues in the process.

———

Michael G. Olson is the Service Manager for the Born-Digital / Forensics Labs at Stanford University Libraries. In this capacity he is responsible for working with library stakeholders to develop services for acquiring, preserving, and accessing born-digital library materials. Michael holds a Master of Philosophy in History and Computing from the University of Glasgow. He can be reached at mgolson [at] Stanford [dot] edu.

Clearing the digital backlog at the Thomas Fisher Rare Book Library

By Jess Whyte

This is the second post in our Spring 2016 series on processing digital materials.

———

Tucked away in the manuscript collections at the Thomas Fisher Rare Book Library, there are disks. They’ve been quietly hiding out in folders and boxes for the last 30 years. As the University of Toronto Libraries develops its digital preservation policies and workflows, we identified these disks as an ideal starting point to test out some of our processes. The Fisher was the perfect place to start:

  • the collections are heterogeneous in terms of format, age, media and filesystems
  • the scope is manageable (we identified just under 2000 digital objects in the manuscript collections)
  • the content has relatively clear boundaries (we’re dealing with disks and drives, not relational databases, software or web archives)
  • the content is at risk

The Thomas Fisher Rare Book Library Digital Preservation Pilot Project was born. Its purpose: to evaluate the extent of the content at risk and to establish a baseline level of preservation for that content.

Identifying digital assets

The project started by identifying and listing all the known digital objects in the manuscript collections. I did this by batch-searching all the PDF finding aids from post-1960 with terms like “digital,” “electronic,” “disk”—you get the idea. Once we knew how many items we were dealing with and where we could find them, we could begin.
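For readers curious about the mechanics, a batch search like this can be done with standard command-line tools. The sketch below assumes pdftotext (from poppler-utils); the directory name and term list are illustrative, not the exact search we ran.

    #!/bin/bash
    # List finding aids that mention digital media (illustrative sketch).
    for f in finding-aids/*.pdf; do
        if pdftotext "$f" - | grep -qiE "digital|electronic|disk(ette)?"; then
            echo "$f"
        fi
    done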

Early days, testing and fails
When I first started, I optimistically thought I would just fire up BitCurator and everything would work.


It didn’t, but that’s okay. All of the reasons we chose these collections in the first place (format, media, filesystem, and age diversity) also posed a variety of challenges to our workflow for capture and analysis. There was also a question of scalability: could I really expect to create preservation copies of ~2000 disks, along with accompanying metadata, within a target 18-month window? By processing each object one by one in a graphical user interface? While working on the project part-time? No, I couldn’t. Something needed to change.

Our early iterations of the process went something like this:

  1. Use a KryoFlux and its corresponding software to take an image of the disk.
  2. Mount the image in a tool like FTK Imager or HFSExplorer.
  3. Export a list of the files in a somewhat consistent manner to serve as a manifest, metadata, and de facto finding aid.
  4. Bag it all up in Bagger.

This was slow, inconsistent, and not well suited to the project timetable. I tried using fiwalk (included with BitCurator) to walk through a series of images and automatically generate manifests of their contents, but fiwalk does not support HFS and other, older filesystems. Considering that 40% of our disks thus far were HFS (at this point, I was 100 disks in), fiwalk wasn’t going to save us. I could automate the process for 60% of the disks, but the remainder would still need to be handled individually, and I wouldn’t have those beautifully formatted DFXML (Digital Forensics XML) files to accompany them. I needed a fix.

Enter disktype and md5deep

I needed a way to a) mount a series of disk images, b) look inside, c) generate metadata on the file contents and d) produce a more human-readable manifest that could serve as a finding aid.

Ideally, the format of all that metadata would be consistent. Critically, the whole process would be as automated as possible.

This is where disktype and md5deep come in. I could use disktype to identify an image’s filesystem, mount it accordingly and then use md5deep to generate DFXML and .csv files. The first iteration of our script did just that, but md5deep doesn’t produce as much metadata as fiwalk. While I don’t have the skills to rewrite fiwalk, I do have the skills to write a simple bash script that routes disk images based on their filesystem to either md5deep or fiwalk. You can find that script here, and a visualization of how it works below:

[Diagram: how the script routes a disk image to md5deep or fiwalk based on its filesystem]
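For reference, a stripped-down sketch of that routing logic is shown below. The real script (linked above) handles more cases; the mount point, mount options, and output filenames here are illustrative assumptions.

    #!/bin/bash
    # Route a disk image to md5deep or fiwalk based on its filesystem (sketch).
    image="$1"
    base="${image%.*}"

    fs=$(disktype "$image")   # report the filesystem(s) found inside the image

    if echo "$fs" | grep -qi "HFS"; then
        # fiwalk can't parse HFS, so mount the image read-only and let md5deep
        # walk the mounted files; md5deep's -d flag emits DFXML.
        mkdir -p /mnt/diskimage
        mount -o loop,ro -t hfs "$image" /mnt/diskimage
        md5deep -r -d /mnt/diskimage > "${base}_dfxml.xml"
        umount /mnt/diskimage
    elif echo "$fs" | grep -qi "FAT"; then
        # fiwalk reads FAT images directly and produces richer DFXML.
        fiwalk -X "${base}_dfxml.xml" "$image"
    else
        echo "Unrecognized filesystem in $image; handle manually" >&2
    fi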

I could now turn this (collection of image files and corresponding imaging logs):

[Screenshot: directory of disk images and imaging logs]

into this (collection of image files, logs, DFXML files, and CSV manifests):

[Screenshot: the same directory with added DFXML files and CSV manifests]

Or, to put it another way, I could now take one of these:

[image]

And rapidly turn it into this ready-to-be-bagged package:

[Screenshot: ready-to-be-bagged package]
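As a side note, a package like this can also be bagged from the command line rather than one at a time in the Bagger GUI. A one-line sketch using the bagit-python tool (an alternative to the Bagger application named earlier; the path is illustrative):

    # Create a bag in place, with md5 manifests, from a processed package.
    bagit.py --md5 /path/to/disk-package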

Challenges, Future Considerations and Questions

Are we going too fast?
Do we really want to do this quickly? What discoveries or insights will we miss by automating this process? There is value in slowing down and spending time with an artifact and learning from it. Opportunities to do this will likely come up thanks to outliers, but I still want to carve out some time to play around with how these images can be used and studied, individually and as a set.

Virus Checks:
We’re still investigating ways to run virus checks that are efficient and thorough, but not invasive (i.e., they won’t modify the image in any way). One possibility is to include the virus check in our bash script, but this would slow it down significantly and make quick passes through a collection of images impossible; quick passes are critical during the early, testing phases of this pilot. Another possibility is running virus checks before the images are bagged. This would let us run the virus checks overnight and then address any flagged images (so far, we’ve found viruses in ~3% of our disk images, and most were boot-sector viruses). I’m curious to hear how others fit virus checks into their workflows, so please comment if you have suggestions or ideas.
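As one concrete (and hedged) example of the second approach, an overnight pass with the open-source scanner ClamAV could look roughly like the sketch below. clamscan only reads the files it checks, and mounting the image read-only adds another safeguard; the mount point and log name are illustrative.

    # Scan the contents of a mounted disk image with ClamAV before bagging (sketch).
    mount -o loop,ro "$image" /mnt/diskimage
    clamscan -r -i --log="${base}_clamscan.log" /mnt/diskimage   # -i: report infected files only
    umount /mnt/diskimage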

Adding More Filesystem Recognition
Right now, the processing script only recognizes FAT and HFS filesystems and then routes them accordingly. So far, these are the only two filesystems that have come up in my work, but the plan is to add other filesystems to the script on an as-needed basis. In other words, if I happen to meet an Amiga disk on the road, I can add it then.

Access Copies:
This project is currently focused on creating preservation copies. For now, access requests are handled on an as-needed basis. This is definitely something that will require future work.

Error Checking:
Automating much of this process means we can complete the work with available resources, but it raises questions about error checking. If a human isn’t opening each image individually, poking around, and maybe extracting a file or two, then how can we be sure of a successful capture? That said, we do currently have some indicators: the KryoFlux log files, human monitoring of the imaging process (are there “bad” sectors? Is it worth taking a closer look?), and the DFXML and .csv manifest files (were they successfully created? Are there files in the image?). How are other archives handling automation and exception handling?

If you’d like to see our evolving workflow or follow along with our project timeline, you can do so here. Your feedback and comments are welcome.

———

Jess Whyte is a master’s student in the Faculty of Information at the University of Toronto. She holds a two-year digital preservation internship with the University of Toronto Libraries and also works as a Research Assistant with the Digital Curation Institute.

Resources:

Gengenbach, M. (2012). “The way we do it here”: Mapping digital forensics workflows in collecting institutions. Unpublished master’s thesis, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina.

Goldman, B. (2011). Bridging the gap: Taking practical steps toward managing born-digital collections in manuscript repositories. RBM: A Journal of Rare Books, Manuscripts, and Cultural Heritage, 12(1), 11-24.

Prael, A., & Wickner, A. (2015). Getting to Know FRED: Introducing Workflows for Born-Digital Content.

Digital Processing at the Rockefeller Archive Center

By Bonnie Gordon

This is the first post in our Spring 2016 series on processing digital materials, exploring how archivists conceive of, implement, and track activities to arrange and describe digital materials in archival collections. If you are interested in contributing to bloggERS!, check out our guidelines for writers or contact us at ers.mailer.blog@gmail.com

———

At the Rockefeller Archive Center, we’re working to get “digital processing” out of the hands of “digital” archivists and into the realm of “regular” archivists. We are using “digital processing” to mean description, arrangement, and initial preservation of born digital archival content stored on removable storage media. Our definition will likely expand over time, as we start to receive more born digital materials via network transfer and fewer acquisitions of floppy disks and CDs.

The vast majority of our born digital materials are on removable storage media and currently inaccessible to our researchers, donors, and staff. We have content on over 3,000 digital storage media items, which are rapidly deteriorating. Our backlog of digital media items includes over 2,500 optical disks, almost 200 3.5″ floppy disks, and almost 100 5.25″ floppy disks. There are also a handful of USB flash drives, hard drives, and older and unusual media (Bernoulli disks, Sy-Quest cartridges, 8″ floppy disks). This is a lot of work for one digital archivist! Having multiple “regular” archivists process these materials distributes the work, which means we can get through the backlog much more quickly. Additionally, integrating digital processing into regular processing work will prevent a future backlog from being created.

In order to help our processing archivists establish and enhance intellectual control of our born digital holdings, I’m working to provide them with the tools, workflows, and competencies needed to process digital materials.  Over the next several months, a core group of processing archivists will be trained and provided with documentation on digital media inventorying, digital forensics, and other born digital workflows. After training, archivists will be able to use the skills they gained in their “normal” processing projects. The core group of archivists trained on dealing with born digital materials will then be able to train other archivists. This will help digital processing be perceived as just another aspect of “regular” processing. Additionally, providing good workflow documentation gives our processing archivists the tools and competencies to do their jobs.

Streamlining our digital processing workflows is also a really important part of this. One step in this direction is to create a digital media inventory and disk imaging log that will be able to “talk” to our collections management system (ArchivesSpace). We currently have an inventory and imaging log, but they’re in a Microsoft Access database, which has a number of limitations, one of the primary ones being that it can’t integrate with our other systems. Integrating with ArchivesSpace reduces duplicate data entry and inconsistent data, and it further integrates digital processing into our “regular” processing work.
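To illustrate what “talking” to ArchivesSpace can look like, the sketch below uses the ArchivesSpace backend API with curl and jq. The host, credentials, repository number, and record fields are all illustrative assumptions, not our actual configuration or data model.

    # Log in to the ArchivesSpace backend API and capture a session token.
    ASPACE="http://localhost:8089"
    SESSION=$(curl -s -F password="admin" "$ASPACE/users/admin/login" | jq -r '.session')

    # Create a minimal digital object record (e.g., one row of a media inventory).
    curl -s -H "X-ArchivesSpace-Session: $SESSION" \
         -d '{"jsonmodel_type":"digital_object",
              "title":"Disk 001 - correspondence files",
              "digital_object_id":"disk_001"}' \
         "$ASPACE/repositories/2/digital_objects"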

The RAC’s processing archivists establish and enhance intellectual and physical control of our archival holdings, regardless of format, in order to facilitate user access. By fully integrating digital processing into “normal” processing activities, we will be able to preserve and provide access to unique born digital content stored on obsolete and decaying media.

———

Bonnie Gordon is an Assistant Digital Archivist at the Rockefeller Archive Center, where she works primarily with born digital materials and digital preservation workflows. She received her M.A. in Archives and Public History, with a concentration in Archives, from New York University.