Call for Contributions: Script It!

Scripting and working at the command line have become increasingly important skills for archivists, particularly those who work with digital materials; at the same time, approaching these tools as a beginner can be intimidating. This series hopes to break down barriers by letting archivists learn from their peers. We want to hear about how you use or are learning to use scripts (Bash, Python, Ruby, etc.) or the command line (one-liners, a favorite command line tool) in your day-to-day work, how scripts play into your processes and workflows, and how you are developing your knowledge in this area. How has this changed the way you think about your work? How has it changed your relationship with your colleagues or other stakeholders?

We’re particularly interested in posts that consist of a walk-through of a simple script (or one-liner) used in your digital archives workflow. Show us your script or command and tell us how it works.
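As an illustration of the kind of walk-through we mean, here is a minimal, hypothetical Python example that builds a checksum manifest for a transfer folder; the directory name and the choice of MD5 are ours, not a prescribed workflow.

```python
# Hypothetical example of a short script a "Script It!" post might walk through:
# generate an MD5 manifest for every file in a transfer folder.
import hashlib
from pathlib import Path

def md5(path, chunk_size=1024 * 1024):
    """Return the MD5 checksum of a file, read in chunks to limit memory use."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

transfer_dir = Path("transfer")  # assumed example directory name
for path in sorted(transfer_dir.rglob("*")):
    if path.is_file():
        print(f"{md5(path)}  {path.relative_to(transfer_dir)}")
```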

A few other potential topics and themes for posts:

  • Stories of success or failure with scripting for digital archives
  • General “tips and tricks” for the command line/scripting
  • Independent or collaborative learning strategies for developing “tech” skills
  • A round-up of resources about a particular scripting language or related topic
  • Applying computational thinking to digital archives

Writing for bloggERS! “Script It!” Series

  • We encourage visual representations: Posts can include or largely consist of comics, flowcharts, a series of memes, etc!
  • Written content should be roughly 600-800 words in length
  • Write posts for a wide audience: anyone who stewards, studies, or has an interest in digital archives and electronic records, both within and beyond SAA
  • Align with other editorial guidelines as outlined in the bloggERS! guidelines for writers.

Posts for this series will start in July, so let us know if you are interested in contributing by sending an email to ers.mailer.blog@gmail.com!


Inaugural #bdaccess Bootcamp: A Success Story

By Margaret Peachy

This post is the nineteenth in a bloggERS series about access to born-digital materials.

____

At this year’s New England Archivists Spring Meeting, archivists who work with born-digital materials had the opportunity to attend the inaugural Born-Digital Access Bootcamp. The bootcamp grew out of the born-digital hackfest, part of a session at SAA 2015, where a group of about 50 archivists came together to tackle a problem facing most archival repositories: how do we provide access to born-digital records, which can have different technical and ethical requirements than digitized materials? Since 2015, a team has come together to develop a bootcamp curriculum, reach out to organizations outside of SAA, and organize bootcamps at various conferences.

Image: Excerpt of results from a survey administered in advance of the Bootcamp.

Alison Clemens and Jessica Farrell facilitated the day-long camp, which had about 30 people in attendance from institutions of all sizes and types, though the majority were academic. The attendees also brought a broad range of experience to the camp, from those just starting out thinking about this issue, to those who have implemented access solutions.


A Case Study in Failure (and Triumph!) from the Records Management Perspective

By Sarah Dushkin

____

This is the sixth post in the bloggERS series #digitalarchivesfail: A Celebration of Failure.

I’m the Records Coordinator for a global energy engineering, procurement, and construction contractor, herein referred to as the “Company.” The Company designs, fabricates, installs, and commissions upstream and downstream technologies for operators. I manage the program for the hard copy and electronic records produced by our Houston office.

A few years ago, our Records Management team was asked by the IT department to help create a process for archiving the digital records of closed projects created out of the Houston office. I saw the effort as an opportunity to expand the scope and authority of our records program to include digital records. Up to that point, our practice covered only paper records, and we asked employees to apply the paper record policies to their own electronic records.

The Records Management team’s role was limited to advising IT on how to deploy a software tool where files could be stored for the long term. We were not included in the discussions on which software tool to use. It took us over a year to develop the new process with IT and standardize it into a published procedure. We had many areas of triumph and failure throughout the process. Here is a synopsis of the project.

Objective:
IT was told that retaining closed project files on the local server was an unnecessary cost and was tasked with removing them. IT reached out to Records Management to develop a process to maintain the project files for the long term in a more cost-effective solution, either nearline or offline, where records management policies could be applied.

Vault:
The software chosen was a proprietary cloud-based file storage center or “vault.” It has search, tagging, and records disposition capabilities. It is more cost-effective than storing files on the local server.

Process:
At 80% project completion, Records Management reaches out to active projects to learn how they store their files and what the project completion schedule looks like. Eighty percent engineering completion is an important milestone because most of the project team is still involved and the bulk of the work is complete. Knowing the project schedule also lets us accurately apply the two-year time span that determines when files will be migrated off the local server and into the vault. The two-year time span was created to ensure that all project files remain available to the project team during the typical warranty period. Two years after a project is closed, all technical files and data are exported from the current management system and ingested into the vault, and access groups are created so employees can view and download the files for reference as needed.

Deployment:
Last year, we began to apply the process to large active projects that had passed 80% engineering completion. Large projects are those with more than 5 million in revenue.

Observations:
Recently we have begun to audit the whole project with IT, and are just now identifying our areas of failure and triumph. We will conduct an analysis of these areas and assess where we can make improvements.

Our big areas of failure were related to stakeholder involvement in the development, deployment, and utilization of the vault.

Stakeholders, including the Records Management team, were not involved in the selection or development of the vault software tool. As a result, the vault development project lacked the resources required to make it as successful as possible.

In the deployment of the vault, we did not create an outreach campaign with training courses to introduce the tool across our very large company. As a result, many employees are still unaware of the vault. When we talk with departments and projects about less expensive ways to save old files, they are reluctant to try the solution because it seems like another way for IT to cut costs from their budgets without thinking about the greater needs of the company. IT is still viewed as a support function that is inessential to the Company’s philosophy.

Lastly, we did not have methods to export project files from all systems for ingest into the vault, nor did we, in North America, have the authority to develop that solution. To be effective, that type of decision and process can only be developed by our corporate office in another country. The Company also does not make information about project closure available to most employees. A project end date can be determined by several factors, including when the final invoice was received or when the warranty period ended. This type of information is essential to the information lifecycle of a project, and since we had no involvement from upper-level management, we were not able to devise a solution for easily discovering it.

We had some triumphs throughout the process, though. Our biggest triumph is that this project gave Records Management an opportunity to showcase our knowledge of records retention and its value as a way to save money and maintain business continuity. We were able to collaborate with IT and promulgate a process. It also gave us a great opportunity to grow by building better relationships with the business lines. Although some departments and teams are still skeptical about the value of the vault, when we advertise it to other project teams, they see it as evidence that the Company cares about preserving their work. We earned our seat at the table with these players, but we still have to work on winning over more projects and departments. We’ve also preserved more than 30 TB of records and saved the Company several thousand dollars by ingesting inactive project files into the vault.

I am optimistic that when we have support from upper management, we will be able to improve the vault process and infrastructure, and create an effective solution for utilizing records management policies to ensure legal compliance, maintain business continuity, and save money.

____

Sarah Dushkin earned her MSIS from the University of Texas at Austin School of Information with a focus in Archival Enterprise and Records Management. Afterwards, she sought to diversify her expertise by working outside of the traditional archival setting and moved to Houston to work in oil and gas. She has collaborated with management from across her company to refine their records management program and develop a process that includes the retention of electronic records and data. She lives in Sugar Land, Texas with her husband.

Fail4Lib: Acknowledging and Embracing Professional Failure

By Andreas Orphanides

____

This is the fifth post in the bloggERS series #digitalarchivesfail: A Celebration of Failure.

Image: Train wreck at Montparnasse (Studio Lévy et Fils, 1895; public domain). It could be worse.

When was the last time you totally, completely, utterly loused up a project or a report or some other task in your professional life? When was the last time you dissected that failure, in meticulous detail, in front of a room full of colleagues? Let’s face it: we’ve all had the first experience, and I’d wager that most of us would pay good money to avoid the second.

It’s a given that we’ll all encounter failure professionally, but there’s a strong cultural disincentive to talk about it. Failure is bad. It is to be avoided at all costs. And should one fail, that failure should be buried away in a dark closet with one’s other skeletons. At the same time, it’s well acknowledged that failure is a critical step on the path to success. It’s only through failing and learning from that experience that we can make the necessary course corrections. In that sense, refusing to acknowledge or unpack failure is a disservice: failure is more valuable when well-understood than when ignored.

This philosophy — that we can gain value from failure by acknowledging and understanding it openly — is the underlying principle behind Fail4Lib, the perennial preconference workshop that takes place at the annual Code4Lib conference, and which completed its fifth iteration (Fail5Lib!) at Code4Lib 2017 in Los Angeles. Jason Casden (now of UNC Libraries) originally conceived of the Fail4Lib idea, and together he and I developed the concept into a workshop about understanding, analyzing, and coming to terms with professional failure in a safe, collegial environment.

Participants in a Fail4Lib workshop engage in a number of activities to foster a healthier relationship with failure: case study discussions to analyze high-profile failures such as the Challenger disaster and the Volkswagen diesel emissions scandal; lightning talks where brave souls share their own professional failures and talk about the lessons they learned; and an open bull session about risk, failure, and organizational culture, to brainstorm on how we can identify and manage failure, and how to encourage our organizations to become more failure-tolerant.

Fail4Lib’s goal is to help its participants to get better at failing. By practicing talking about and thinking about failure, we position ourselves to learn more from the failures of others as well as our own future failures. By sharing and talking through our failures we maximize the value of our experiences, we normalize the practice of openly acknowledging and discussing failure, and we reinforce the message to participants that it happens to all of us. And by brainstorming approaches to allow our institutions to be more failure-tolerant, we can begin making meaningful organizational change towards accepting failure as part of the development process.

The principles I’ve outlined here not only form the framework for the Fail4Lib workshop, they also represent a philosophy for engaging with professional failure in a constructive and blameless way. It’s only by normalizing the experience of failure that we can gain the most from it; in so doing, we make failure more productive, we accelerate our successes, and we make ourselves more resilient.

____

Andreas Orphanides is Associate Head, User Experience at the NCSU Libraries, where he develops user-focused solutions to support teaching, learning, and information discovery. He has facilitated Fail4Lib workshops at the annual Code4Lib conference since 2013. He holds a BA from Oberlin College and an MSLS from UNC-Chapel Hill.

OSS4Pres 2.0: Design Requirements for Better Open Source Tools

By Heidi Elaine Kelly

____

This is the second post in the bloggERS series describing outcomes of the #OSS4Pres 2.0 workshop at iPRES 2016, addressing open source tool and software development for digital preservation. This post outlines the work of the group tasked with “drafting a design guide and requirements for Free and Open Source Software (FOSS) tools, to ensure that they integrate easily with digital preservation institutional systems and processes.” 

The FOSS Development Requirements Group set out to create a design guide for FOSS tools to ensure easier adoption of open-source tools by the digital preservation community, including their integration with common end-to-end software and tools supporting digital preservation and access that are now in use by that community. 

The group included representatives of large digital preservation and access projects such as Fedora and Archivematica, as well as tool developers and practitioners, ensuring a range of perspectives were represented. The group’s initial discussion led to the creation of a list of minimum necessary requirements for developing open source tools for digital preservation, based on similar examples from the Open Preservation Foundation (OPF) and from other fields. Below is the draft list that the group came up with, followed by some intended future steps. We welcome feedback or additions to the list, as well as suggestions for where such a list might be hosted long term.

Minimum Necessary Requirements for FOSS Digital Preservation Tool Development

Necessities

  • Provide publicly accessible documentation and an issue tracker
  • Have a documented process for how people can contribute to development, report bugs, and suggest new documentation
  • Every tool should do the smallest possible task really well; if you are developing an end-to-end system, develop it in a modular way in keeping with this principle
  • Follow established standards and practices for development and use of the tool
  • Keep documentation up-to-date and versioned
  • Follow test-driven development philosophy
  • Don’t develop a tool without use cases and stakeholders willing to validate those use cases
  • Use an open and permissive software license to allow for integrations and broader use

Recommendations

  • Have a mailing list, Slack or IRC channel, or other means for community interaction
  • Establish community guidelines
  • Provide a well-documented mechanism for integration with other tools/systems in different languages
  • Provide the functionality of the tool as a library, separating the GUI from the actual functions (see the sketch after this list)
  • Package the tool in an easy-to-use way; the more broadly you want the tool to be used, the more operating systems you should package it for
  • Use a packaging format that supports any dependencies
  • Provide examples of functionality for potential users
  • Consider the organizational home or archive for the tool for long-term sustainability; develop your tool based on potential organizations’ guidelines
  • Consider providing a mechanism for internationalization of your tool (this is a broader community need as well, to identify the tools that exist and to incentivize this)
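As a minimal, hypothetical illustration of the library-versus-interface recommendation above (none of this is from an existing tool), the core operation can live in an importable function while a thin command-line wrapper, which a GUI could equally call, handles only input and output.

```python
# Illustrative sketch: core functionality as a library function, with a thin CLI wrapper.
import argparse
import hashlib
from pathlib import Path

def checksum_file(path: Path, algorithm: str = "sha256") -> str:
    """Core library function: return the checksum of a file."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> None:
    """Thin command-line wrapper around the library function (a GUI could call it instead)."""
    parser = argparse.ArgumentParser(description="Checksum one or more files.")
    parser.add_argument("files", nargs="+", type=Path)
    parser.add_argument("--algorithm", default="sha256")
    args = parser.parse_args()
    for path in args.files:
        print(checksum_file(path, args.algorithm), path)

if __name__ == "__main__":
    main()
```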

Premise

  • Digital preservation is an operating system-agnostic field

Next Steps

Feedback and Perspectives. Because of the expense of the iPRES conference (and its location in Switzerland), all of the group members were from relatively large and well-resourced institutions. The perspective of under-resourced institutions is very often left out of open-source development communities, as they are unable to support and contribute to such projects; in this case, this design guide would greatly benefit from the perspective of such institutions as to how FOSS tools can be developed to better serve their digital preservation needs. The group was also largely from North America and Europe, so this work would eventually benefit greatly from adding perspectives from the FOSS and digital preservation communities in South America, Asia, and Africa.

Institutional Home and Stewardship. When finalized, the FOSS development requirements list should live somewhere permanently and develop based on the ongoing needs of our community. As this line of communication between practitioners and tool developers is key to the continual development of better and more user-friendly digital preservation tools, we should continue to build on the work of this group.


____

Heidi Elaine Kelly is the Digital Preservation Librarian at Indiana University, where she is responsible for building out the infrastructure to support long-term sustainability of digital content. Previously she was a DiXiT fellow at Huygens ING and an NDSR fellow at the Library of Congress.

Building Bridges and Filling Gaps: OSS4Pres 2.0 at iPRES 2016

By Heidi Elaine Kelly and Shira Peltzman

____

This is the first post in a bloggERS series describing outcomes of the #OSS4Pres 2.0 workshop at iPRES 2016.

Organized by Sam Meister (Educopia), Shira Peltzman (UCLA), Carl Wilson (Open Preservation Foundation), and Heidi Kelly (Indiana University), OSS4PRES 2.0 was a half-day workshop that took place during the 13th annual iPRES 2016 conference in Bern, Switzerland. The workshop aimed to bring together digital preservation practitioners, developers, and administrators in order to discuss the role of open source software (OSS) tools in the field.

Although several months have passed since the workshop wrapped up, we are sharing this information now in an effort to raise awareness of the excellent work completed during this event, to continue the important discussion that took place, and to hopefully broaden involvement in some of the projects that developed. First, however, a bit of background: the initial OSS4PRES workshop was held at iPRES 2015 and was attended by over 90 digital preservation professionals from all areas of the open source community. Participants reported on specific issues related to open source tools, and those reports were followed by small group discussions about the opportunities, challenges, and gaps they observed. The energy from this initial workshop led both to the proposal of a second workshop and to a report published in the Code4Lib Journal, OSS4EVA: Using Open-Source Tools to Fulfill Digital Preservation Requirements.

The overarching goal for the 2016 workshop was to build bridges and fill gaps within the open source community at large. To keep the discussion focused and productive, OSS4PRES 2.0 was organized into three groups, each led by one of the workshop’s organizers, while Shira Peltzman floated between groups to minimize overlap and ensure that each group remained on task. Besides maximizing our output, one of the benefits of splitting into groups was that each group was able to focus on disparate but complementary aspects of the open source community.

Develop user stories for existing tools (group leader: Carl Wilson)

Carl’s group was composed principally of digital preservation practitioners. The group scrutinized existing pain points in the day-to-day management of digital material, identified tools needed by the open source community that had not yet been built, and began to fill this gap by drafting functional requirements for those tools.

Define requirements for online communities to share information about local digital curation and preservation workflows (group leader: Sam Meister)

With an aim to strengthen the overall infrastructure around open source tools in digital preservation, Sam’s group focused on the larger picture by addressing the needs of the open source community at large. The group drafted a list of requirements for an online community space for sharing workflows, tool integrations, and implementation experiences, to facilitate connections between disparate groups, individuals, and organizations that use and rely upon open source tools.

Define requirements for new tools (group leader: Heidi Kelly)

Heidi’s group looked at how the development of open source digital preservation tools could be improved by implementing a set of minimal requirements to make them more user-friendly. Since a list of these requirements specifically for the preservation community had not existed previously, this list both fills a gap and facilitates the building of bridges, by enabling developers to create tools that are easier to use, implement, and contribute to.

Ultimately OSS4PRES 2.0 was an effort to make the open source community more open and diverse, and in the coming weeks we will highlight what each group managed to accomplish towards that end. The blog posts will provide an in-depth summary of the work completed both during and since the event took place, as well as a summary of next steps and potential project outcomes. Stay tuned!

____

Shira Peltzman is the Digital Archivist for the UCLA Library where she leads the development of a sustainable preservation program for born-digital material. Shira received her M.A. in Moving Image Archiving and Preservation from New York University’s Tisch School of the Arts and was a member of the inaugural class of the National Digital Stewardship Residency in New York (NDSR-NY).

Heidi Elaine Kelly is the Digital Preservation Librarian at Indiana University, where she is responsible for building out the infrastructure to support long-term sustainability of digital content. Previously she was a DiXiT fellow at Huygens ING and an NDSR fellow at the Library of Congress.

Defining Levels of Processing vs. Levels of Effort

By Carol Kussmann and Lara Friedman-Shedlov

This is the third post in the bloggERS series #digitalarchivesfail: A Celebration of Failure.

____

The Electronic Records Task Force (ERTF) at the University of Minnesota just completed its second year of work.  This year’s focus was on the processing of electronic records for units of the Archives and Special Collections.  The Archives and Special Collections (ASC) is home to the University of Minnesota’s collection of rare books, personal papers, and organizational archives. ASC is composed of 17 separate collecting units, each focusing on a specific subject area.  Each unit is run separately with some processing activities being done centrally through the Central Processing department.

We realized quickly that, even more than traditional analog records processing, electronic records work would be greatly facilitated by utilizing Central Processing, rather than relying on each ASC unit to ingest and process this material. In keeping with traditional archival best practices, Central Processing typically creates a processing plan for each collection. The processing plan form records useful information to use during processing, which may be done immediately or at a later date, and assigns a level of processing to the incoming records. This procedure and form work very well with analog records, and the Task Force initially established the same practice for electronic records. However, we learned that it is not always efficient to follow current practices and that processes and procedures must be evaluated on a continual basis.

The processing plan is a form about a page and a half long with fields to fill out describing the condition and processing needs of the accession.  Prior to being used for electronic records, fields included: Collection Name, Collection Date Span, Collection Number, Location(s), Extent (pre-processing), Desired Level of Processing, Restrictions/Redactions Needed, Custodial History, Separated Materials*, Related Materials, Preservation Concerns, Languages Other than English, Existing Order, Does the Collection need to be Reboxed/Refoldered, Are there Significant Pockets of Duplicates, Supplies Needed, Potential Series, Notes to Processors, Anticipated Time for Processing, Historical/Bibliographical Notes, Questions/Comments.

A few changes were made to include information about electronic records.  The definitions for Level of Processing were modified to include what was expected for Minimal, Intermediate, or Full level of processing of electronic records.  Preservation Concerns specifically asked if the collection included digital formats that are currently not supported or that are known to be problematic.  After these minor changes were made, the Task Force completed a processing plan for each new electronic accession.

After several months’ experience using the form, Task Force members began questioning the value of the processing plan for electronic records. In the relatively few instances where accessions were initially reviewed and then processed at a later date, it captured useful information for the processor to refer back to without having to start from the beginning. However, the majority of electronic records being ingested were also being processed immediately, and only a handful of the fields were relevant to the electronic records. The only piece of information captured on the processing plan that was not recorded elsewhere was the expected “level of processing” for the accession. To address this, the level of processing was added to the existing documentation for electronic accessions, eliminating the need to create a Processing Plan for accessions that were to be processed immediately.

The level of processing itself soon became a point of contention among some of the Electronic Records Task Force members. For electronic records, the level of processing only defined the level at which the collection would be described in a finding aid: collection, series, sub-series, or potentially item level. The following definitions were created based on Describing Archives: A Content Standard (DACS).

Minimal: There will be no file arrangement or renaming done for the purpose of description/discovery enhancement. File formats will not be normalized. Action will generally not be taken to address duplicate files or personally identifiable information (PII) identified during ingest. Description will meet the requirements for DACS single-level description (collection level).

Intermediate: Top-level folder arrangement and top-level folder renaming for the purpose of description/discovery enhancement will be done as needed. File formats will not be normalized. Some duplicates may be weeded and some PII redacted. Description will meet DACS multi-level elements: described to the series level, with high-research-value series complemented with scope and content notes.

Full: Top-level folder arrangement and renaming will be done as needed, but where appropriate, renaming and arrangement may also be done down to the item level. File normalization may be conducted as necessary or appropriate. Identified duplicates will be removed as appropriate and PII will be redacted as needed. Description will meet DACS multi-level elements: described to the series, subseries, or item level where appropriate, with high-research-value components complemented with additional scope and content notes.

Discussions between the ERTF and unit staff about each accession assisted with assigning the appropriate level of processing. This “level of processing,” however, did not always correlate with the amount of effort that an accession required. For example, a collection assigned a minimal level of processing could take days to address, while a collection assigned a full level of processing might take only hours, depending on a number of factors. Just because the minimal level of processing says that no file arranging or renaming will be done for the purpose of description/discovery does not mean that no file arranging or renaming will be done for preservation or ingest purposes. File renaming must often be done for preservation reasons: if file names contain unsupported characters or are too long, those issues must be addressed.
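As a rough illustration only (not the Task Force’s actual procedure), a renaming pass of this kind might look something like the following Python sketch; the length limit and safe character set are assumptions.

```python
# A minimal sketch of preservation-driven renaming: replace unsupported characters
# and shorten over-long file names before ingest.
import re
from pathlib import Path

MAX_NAME_LEN = 255  # assumed limit; the real threshold depends on the target system

def sanitize(name: str) -> str:
    """Replace characters outside a conservative safe set and trim over-long names."""
    stem, dot, suffix = name.rpartition(".")
    if not dot:  # no extension present
        stem, suffix = name, ""
    safe_stem = re.sub(r"[^A-Za-z0-9._-]", "_", stem)
    keep = MAX_NAME_LEN - (len(suffix) + 1 if suffix else 0)
    return safe_stem[:keep] + (("." + suffix) if suffix else "")

def rename_tree(root: Path) -> None:
    """Apply sanitize() to every file and folder, deepest paths first."""
    # Deepest-first so renaming a parent never invalidates a child path we still hold.
    for path in sorted(root.rglob("*"), key=lambda p: len(p.parts), reverse=True):
        new_name = sanitize(path.name)
        if new_name != path.name:
            path.rename(path.with_name(new_name))
            # A production version would also guard against name collisions.

# rename_tree(Path("accession_0001"))  # example call; the directory name is hypothetical
```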

Other tasks that might be necessary to assist with the long-term management of these materials include the following; a rough sketch of this kind of cleanup appears after the list:

  • Removing empty directories
  • Removing .DS_Store and Thumbs.db files
  • Removing identified PII, which, while not necessary for description, better protects the University: the less PII we need to manage, the less risk we take on.
  • Deleting duplicates (however much the “level of processing” tries to limit this, as someone who needs to manage the storage space, I know that continually adding duplicates will cause problems down the line); we have a program that easily removes them, so we use it.
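A minimal Python sketch of these housekeeping tasks might look like the following; it is illustrative only and is not the program the Task Force uses, and the directory name in the example call is hypothetical.

```python
# Illustrative cleanup: drop desktop-metadata files, prune empty directories,
# and report exact duplicates by checksum.
import hashlib
from collections import defaultdict
from pathlib import Path

JUNK_NAMES = {".DS_Store", "Thumbs.db"}  # common macOS/Windows metadata files

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def clean(root: Path) -> None:
    # Remove junk files.
    for path in list(root.rglob("*")):
        if path.is_file() and path.name in JUNK_NAMES:
            path.unlink()
    # Remove empty directories, deepest first.
    for directory in sorted(root.rglob("*"), key=lambda p: len(p.parts), reverse=True):
        if directory.is_dir() and not any(directory.iterdir()):
            directory.rmdir()

def report_duplicates(root: Path) -> None:
    # Group files by checksum; any group with more than one member is a duplicate set.
    by_hash = defaultdict(list)
    for path in root.rglob("*"):
        if path.is_file():
            by_hash[sha256(path)].append(path)
    for paths in by_hash.values():
        if len(paths) > 1:
            print("Duplicates:", ", ".join(str(p) for p in paths))

# clean(Path("accession_0001")); report_duplicates(Path("accession_0001"))  # hypothetical
```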

Therefore the “level of processing,” while helpful in setting expectations for final description, does not provide accurate insight into the amount of work being done to make electronic records accessible. To address the lack of correlation between the processing level assigned to an accession and the actual level of effort given to processing it, a Levels of Effort document was drafted to help categorize the amount of staff time and resources put forth when working with electronic materials. The expected level of effort may be more useful for setting priorities than a level of processing, as it has a closer one-to-one relationship with the amount of time required to complete the processing.

This is another example of how we were not able to directly apply procedures developed for analog records towards electronic records. The key is finding the balance between not reinventing the wheel and doing things the way they have always been done.

Next Steps...Processing of and Access to Electronic Records
Poster by Valerie Collins and Carol Kussmann, on behalf of the University of Minnesota’s Electronic Records Task Force: https://osf.io/qya26/

____

Carol Kussmann is a Digital Preservation Analyst with the University of Minnesota Libraries Data & Technology – Digital Preservation & Repository Technologies unit, and co-chair of the University of Minnesota Libraries – Electronic Records Task Force (ERTF). Questions about the activities of the ERTF can be sent to: lib-ertf@umn.edu.

Lara Friedman-Shedlov is Description and Access Archivist for the Kautz Family YMCA Archives at the University of Minnesota Libraries.  Her current interests include born digital archives and diverse and inclusive metadata.

The Human and the Tech: Failing Gracefully While Archiving a Downsized Unit

By Mary Mellon and Heidi Kelly

This is the second post in the bloggERS series #digitalarchivesfail: A Celebration of Failure.

____

The human element of digital archiving has lately been covered very well, with well-known professionals like Bergis Jules and Hillel Arnold taking on various pieces of the topic. At Indiana University (IU), we have a tacit commitment in most of our collections to the concept of a culture of care: taking care of those who entrust us with the longevity of their materials. However, sometimes even the best of intentions can lead to failures to achieve that goal, especially when a project involves a fraught issue, like downsizing and the loss or fundamental change of employment for people whose materials we want to bring into the collection. We wanted to share one of our failures as part of the #digitalarchivesfail series because we hope that others can learn from our oversights, and that by sharing it we can contribute to the conversation that projects like Documenting the Now have really mobilized.

Introduction: Indiana University Archives and the Born Digital Preservation Lab

Mary Mellon (MM): The mission of the Indiana University Archives is to collect, organize, preserve and make accessible records documenting Indiana University’s origins and development and the activities and achievements of its officers, faculty, students, alumni and benefactors. We often work with internal units and donors to preserve the institution’s legacy. In fulfilling our mission, we are beginning to receive increasing amounts of born digital material without in-house resources or specialized staff for maintaining a comprehensive digital preservation program. 

Heidi Kelly (HK): The Indiana University Libraries Born Digital Preservation Lab (BDPL) is a sub-unit of Digital Preservation. It started last January as a service modeled off of the Libraries’ Digitization Services unit, which works with various collection owners to digitize and make accessible books and other analog materials. In terms of daily management of the BDPL, I generally do outreach with collection units to plan new projects, and the Digital Preservation Technician, Luke Menzies, creates the workflows and solves any technical problems. The University Archives has been our main partner so far, since they have ongoing requests from both internal and external donors with a lot of born-digital materials.   

Developing a Project With a Downsized Unit

MM: In the middle of 2016, IU Archives staff were contacted by another unit on campus about transferring its records and faculty papers. Unfortunately, the academic unit was facing impending closure and was expected to move out of its physical location within a few months. After an initial assessment, we worked with the unit’s administrative staff and faculty members, the IU Libraries’ facilities officer, and Auxiliary Library Facility staff to inventory, pack, and transfer boxes of papers, A/V material, and other physical media to archival storage.

HK: The unit’s staff also suggested that they transfer, along with their paper records, all of the content from their server, which is how the BDPL got involved. The server contained a relatively small amount of content compared to the paper records, but “accessioning a server” was a new challenge for us.

What We Did

MM: I should probably mention that the job of coordinating the transfer of the unit’s material landed on my desk on my second day of work at Indiana University (thanks, boss!), so I was not involved in the initial consultation process. While the IU Archives typically asks campus offices and donors to prepare boxes for transfer, the unit’s shrinking staff had many competing priorities resulting from the approaching closure (the former office manager had already left for a new job). They reached out for additional help about four weeks prior to closure, and the IU Archives assumed the responsibility of packing and physically transferring records at that point. As a result, the large volume of paper records to accession was the immediate focus of our work with the unit. 

In the end, we boxed about 48 linear feet of material, facing several unanticipated challenges along the way, namely coordinating with personnel who were transferring or leaving employment. The building that housed the unit was not regularly staffed during the summer in any case, requiring a new round of email back-and-forth to gain access every time we needed to resume file packing. This need for access also necessitated special trips to the office by unit staff. The staff were obviously stretched thin by the closure in general, so any difficulties or setbacks in transferring materials, whether paper or digital, just added to their stress.

In addition, despite consulting with IU Archives staff and our transfer policies, the unit’s faculty exhibited an unfortunate level of modesty over the enduring value of their papers, which significantly slowed the process of acquiring paper materials. To top it all off, during a stretch of relentless rain, the basement, where most of the paper files were stored prior to transfer, flooded. The level of humidity necessitated moving the paper out as soon as possible, which again made the analog materials our main focus in the IU Archives.

HK: Accessioning the server, on the other hand, didn’t involve many trips to the unit’s physical location. Our main challenges were determining how best to ensure that everything got properly transferred, and how to actually transfer the digital content.

Because the server ran Windows, it posed the first big issue for us. Our main workflow focuses on creating a disk image, which in this case was not optimal. We had a bash script that we regularly used on our main Linux machines to inventory large external hard drives, but we weren’t prepared to run it on a Windows operating system. Luke, having no experience with Python, amazingly adapted our script within a short period of time, but this still set us back, since the unit’s staff were working against the clock. I also didn’t realize that, in explaining the technical issue to them, I had inadvertently encouraged them to search out their own solutions for capturing all of the information that our bash inventory was generating. This was a fundamental misstep, as I didn’t think about their compressed timeline and how that might push a different response. While the staff’s proactivity was helpful, it compounded the amount of time they ultimately spent and left them feeling frustrated.
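For readers curious what such an inventory involves, a rough, hypothetical sketch in Python might look like the following; it is not the BDPL’s actual script, and the column choices and example paths are assumptions.

```python
# Hypothetical cross-platform inventory: record path, size, modification time,
# and a checksum for every file, written out as CSV.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def md5(path: Path) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def inventory(root: Path, out_csv: Path) -> None:
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["relative_path", "size_bytes", "modified_utc", "md5"])
        for path in sorted(root.rglob("*")):
            if path.is_file():
                stat = path.stat()
                modified = datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat()
                writer.writerow([str(path.relative_to(root)), stat.st_size, modified, md5(path)])

# inventory(Path(r"\\unit-server\share"), Path("inventory.csv"))  # paths are hypothetical
```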

All of that said, we got the inventory working, and then we faced a new set of challenges with the actual transfer. First, we requested read-only access to the server through the unit’s IT department, but we were unable to obtain the user privileges. We then tried to move things using Box, the university’s cloud storage, but discovered some major limitations. In comparing the inventory generated on the user’s end with what we received in the BDPL, we found a lot of files missing. As we found out, the donor had set everything to upload but received no notifications when the system failed to complete the process. Box had already been less than ideal in that some of the metadata we wanted to keep for every file was wiped as soon as the files were uploaded, so this made us certain that it was not the right solution. Finally, we landed on using external media for the transfer, an option that we had considered earlier but could not pursue because we did not have any media available early in the project.
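A comparison of that kind can be sketched in a few lines of Python; again, this is illustrative rather than the BDPL’s actual process, and the file names and column headers are assumptions carried over from the inventory sketch above.

```python
# Compare the inventory made on the donor's side with one made after transfer,
# then list anything missing or altered.
import csv

def load_inventory(csv_path: str) -> dict:
    """Map relative path -> checksum from an inventory CSV."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return {row["relative_path"]: row["md5"] for row in csv.DictReader(f)}

source = load_inventory("inventory_donor.csv")       # made before transfer (hypothetical name)
received = load_inventory("inventory_received.csv")  # made after transfer (hypothetical name)

missing = sorted(set(source) - set(received))
changed = sorted(p for p in source.keys() & received.keys() if source[p] != received[p])

print(f"{len(missing)} files missing after transfer")
for path in missing:
    print("  missing:", path)
for path in changed:
    print("  checksum mismatch:", path)
```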

What We Learned

HK: In digital archiving, and digital curation more broadly, we are constantly talking about building our <insert preferred mode of transport here> as we <corresponding verb> it. Our fumblings to figure out better and faster ways to preserve obsolete media are, often, comical. But they also have an impact in ways that we hadn’t really considered prior to this project. Beyond improving the odds for the longevity of the content we’re preserving, we’re preserving the tangible histories of Indiana University and of its staff. Our work means that people’s legacies here will persist, and our work with the unit really laid that bare.

Going forward, there are several parts of the BDPL’s workflow that will improve based on the failures of this particular project. First, we’ve learned that our communication of technical issues is not optimal. As a tech geek, I think that the way I frame questions can sometimes push people to answer and act differently than if the questions were framed by a non-tech person. I spend every day with this stuff, so it’s hard to step back from “How good are you at command line?” to “Do you know what command line is?” But that’s key to effective relationship management, and it caused a problem in this situation. My goal is to rely further on Mary and other Archives staff to communicate directly with donors. To me, this makes the most sense because we in the BDPL aren’t front-facing people; our role is to advise the collection managers, the people who have been regularly interfacing with donors for years. They’re much better at that element, and we’ve got the technical stuff covered, so everybody’s happy if we can continue building out our service model in that way. Right now we’re focusing on training for the archivists, librarians, and curators who will be working directly with donors. Another improvement we’ve made is the creation of a decision matrix for BDPL projects, in order to better define how we make decisions about new projects. This will again help archivists and other staff as they work with donors. It will also help us to keep focusing on workflows rather than one-offs, building on the knowledge we gain from other, similar projects instead of starting fresh every time. The necessity of focusing on pathways rather than solutions is another thing that became clearer after this project.

MM: Truism alert: born-digital materials really need to be an early part of the conversations with donors. In this case, that would have gained us a few weeks, which could have contributed to a smoother experience for all parties and a more ideal solution, preservation-wise, than what we ended up with. Despite the fact that our guidelines for transferring records state that we want electronic records as well as paper, we cannot count on donors to be aware of this policy, or to be proactive about offering up born-digital content if it will almost certainly lead to more complications.

Seconding Heidi’s point, we at the Archives need to assume more responsibility in interactions with donors regarding transfer of materials, instead of leaving everything digital on the BDPL’s plate. The last thing we want is for donors to be discouraged from transferring born-digital material because it is too much of a hassle, or for the Archives to miss out on any contextual information about the transfers due to lack of involvement. We at the Archives are using this experience to develop formal workflows and policies in conjunction with the BDPL to optimize division of responsibilities between our offices based on expertise and resources. We’re not going to be bagging any files anytime soon on our own workstations, but we can certainly sit down with a donor to walk them through an initial born-digital survey and discuss transfer procedures and technical requirements as needed.

Conclusion

HK: In the end, we’re archiving the legacies of people and institutions. Creating a culture of care is easy to talk about, but when dealing with the legacies of staff who are being displaced or let go, it is crucial and much harder than we could have imagined. Empathy and communication are key, as is the understanding that failure is a fact in our field. We have to embrace it in order to learn from it. We hope that staff at other institutions can learn from our failures in this case.

MM: Ditto.

____

Mary Mellon is the Assistant Archivist at Indiana University Archives, where she manages digitization and encoding projects and provides research and outreach support. She previously worked at the Rubenstein Library at Duke University and the University Libraries at UNC-Chapel Hill. She holds an M.S. in Information Science from UNC-Chapel Hill and a B.A. from Duke University.

Heidi Kelly is the Digital Preservation Librarian at Indiana University. Her current focuses involve infrastructure development and services for born-digital objects. Previously Heidi worked at Huygens ING, the Library of Congress, and Nazarbayev University. She holds a Master’s degree in Library and Information Science from Wayne State University.

 

Latest #bdaccess Twitter Chat Recap

By Daniel Johnson and Seth Anderson

This post is the eighteenth in a bloggERS series about access to born-digital materials.

____

In preparation for the Born Digital Access Bootcamp: A Collaborative Learning Forum at the New England Archivists spring meeting, an ad-hoc born-digital access group with the Digital Library Federation recently held a set of #bdaccess Twitter chats. The discussions aimed to gain insight into the issues that archives and library staff face when providing access to born-digital materials.

Here are a few ideas that were discussed during the two chats:

  • Backlogs, workflows, delivery mechanisms, a lack of known standards, appraisal, and unfamiliarity with software were major barriers to providing access.
  • Participants were eager to learn more about new tools, existing functioning systems, providing access to restricted material and complicated objects, which institutions are already providing access to data, what researchers want/need, and if any user testing has been done.
  • Access is being prioritized by user demand, donor concerns, fragile formats and a general mandate that born-digital records are not preserved unless access is provided.
  • Very little user testing has been done.
  • A variety of archivists, IT staff and services librarians are needed to provide access.

You can search #bdaccess on Twitter to see how the conversation evolves or view the complete conversation from these chats on Storify.

The Twitter chats were organized by a group formed at the 2015 SAA annual meeting. Stay tuned for future chats and other ways to get involved!

____

Daniel Johnson is the digital preservation librarian at the University of Iowa, exploring, adapting, and implementing digital preservation policies and strategies for the long-term protection of and access to digital materials.

Seth Anderson is the project manager of the MoMA Electronic Records Archive initiative, overseeing the implementation of policy, procedures, and tools for the management and preservation of the Museum of Modern Art’s born-digital records.