Improving Workflows at UNC Libraries’ Wilson Special Collections Library

by Erica Titkemeyer and Jessica Venlet

This is the tenth post in the bloggERS Script It! Series.

At Wilson Special Collections Library, we are always trying to find ways to improve our digital preservation workflows. Improving our skills with the command line and using existing command line tools has played a key role in workflow improvements. So, we’ve picked a few favorite tools and tips to share.

FFmpeg

We use FFmpeg for a number of daily tasks, whether it's generating derivatives of preservation files or analyzing a video or audio file we've received through a born-digital accession.

Clearing embedded metadata and uses for FFprobe:
As part of our audio digitization work, we embed metadata into all preservation WAV files. This metadata follows guidelines set out by the Federal Agencies Digital Guidelines Initiative (FADGI) and mostly relates the file back to the original item it was digitized from, including its unique identifier, title, and the curatorial unit that holds it. A few times now, we have recognized inconsistencies in how this data is reflected in the file, found that the data itself is incorrect, or found that it is insufficient.

[Image: embedded WAV file metadata. Look at that terrible metadata!]

When large-scale issues have come up, particularly with legacy files in our backlog, we've made use of FFmpeg's '-map_metadata' option to batch delete the embedded metadata. Below is a one-liner used to batch create brand new files without metadata, with "_clean" appended to the original file name:

for i in *.wav; do ffmpeg -i "$i" -map_metadata -1 -c:a copy "${i%.wav}_clean.wav"; done

After successfully removing metadata from the files, we use the tool BWF MetaEdit to batch embed the correct metadata that we have prepared in a .csv file.
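For reference, BWF MetaEdit also has a command line interface, and a CSV-driven batch edit looks roughly like the sketch below. Treat it as an illustration only: the flag names are recalled from memory and the file names are placeholders, so check both against the BWF MetaEdit documentation before running anything like this.

# Export the existing core (bext/INFO) metadata of every WAV in the folder to a CSV for review
bwfmetaedit --out-core=existing_metadata.csv *.wav

# After correcting the values in a prepared CSV (one row per file), write them back into the files
bwfmetaedit --in-core=corrected_metadata.csv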

For born-digital work, we regularly use the tool/command ‘ffprobe’, a stream analyzer that is part of the FFmpeg build. It allows us to quickly see data about AV files (such as duration, file size, codecs, aspect ratio, etc.) that are helpful in identifying files and making general appraisal decisions. As we grow our capabilities in preserving born digital AV, we also foresee the need to document this type of file data in our ingest documentation.
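For example, two typical invocations might look like this (the file name is just a placeholder):

# Human-readable summary of the container and its streams
ffprobe -hide_banner interview_tape_04.mov

# Machine-readable JSON with duration, file size, codecs, dimensions, and more, suitable for saving into ingest documentation
ffprobe -v error -show_format -show_streams -print_format json interview_tape_04.mov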

walk_to_dfxml.py

In our born-digital workflows we don’t disk image every digital storage device we receive by default. This workflow choice has benefits and disadvantages. One disadvantage is losing the ability to quickly document all timestamps associated with files. While our workflows were preserving last modified dates, other timestamps like access or creation dates were not as effectively captured. In search of a way to remedy this issue, I turned to Twitter for some advice on the capture and value of each timestamp. Several folks recommended generating DFXML which is usually used on disk images. Tim Walsh helpfully pointed to a python script walk_to_dfxml.py that can generate DFXML on directories instead of disk images. Workflow challenge solved!
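As we understand it, walk_to_dfxml.py walks the current working directory and writes DFXML to standard output, so a run looks roughly like the following (the paths are placeholders; check the script's usage notes for your copy):

# Run from inside the directory being documented; redirect the DFXML to a file stored outside that directory
cd /path/to/accession_copy
python3 /path/to/dfxml/walk_to_dfxml.py > ../accession_copy_dfxml.xml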

[Image: DFXML output example]

Brunnhilde

Brunnhilde is another tool that has been particularly helpful in consolidating tasks and tools. By kicking off Brunnhilde in the command line, we are able to check for viruses, create checksums, identify file formats, identify duplicates, create a manifest, and run a PII scan. Additionally, this tool gives us a report that is useful for digital archives specialists but also holds potential as an appraisal tool for consultations with curators. We're still working out that aspect of the workflow, but when it comes to the technical steps, Brunnhilde and the associated command line tools it includes have really improved our processing work.
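A basic run looks something like the sketch below; the positional arguments have shifted a bit between Brunnhilde releases, so check brunnhilde.py -h for your installed version (the paths are placeholders):

# Scan a working copy of the accession and write the HTML and CSV reports to a named output directory
brunnhilde.py /path/to/accession_copy /path/to/reports/accession_2018_001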

Learning as We Go

Like many archivists, we had limited experience with using the command line before graduate school. In the course of our careers, we’ve had to learn a lot on the fly because so many great command line tools are essential for working with digital archives.

One thing that can be tricky when you are new is moving the cursor around the terminal easily. It seems like it should be a no-brainer, but it's really not so obvious.

  • For Macs, see the excellent Script Ahoy resource.
  • For PCs, see this resource for a variety of shortcuts, including cursor movement. In general:
    • Home key moves to beginning. End key moves to the end.
    • Ctrl + left or right arrow moves the cursor around in chunks

Another helpful pair of commands is remove (rm) and move (mv). We use these when dealing with extraneous files created by quality control applications in our AV workflow that we'd like to delete quickly, or when we need to separate derivatives (such as MP3s) from a large batch of preservation files (WAVs).

    • Important note about rm: it’s always smart to first use ‘echo’ to see what files you would be removing with your command (ex: echo rm *.lvl would list all the .lvl files that would be removed by your command).
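Putting echo, rm, and mv together, a cautious clean-up session might look like this (the directory names are examples, not a prescription):

# Preview which derivative MP3s would be moved out of the preservation folder
echo mv *.mp3 ../access_copies/
# Once the list looks right, run the real command
mv *.mp3 ../access_copies/

# Same pattern for deleting extraneous .lvl files created during quality control
echo rm *.lvl
rm *.lvl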

If you are just starting out, you may consider exploring online tutorials or guides like:


Erica Titkemeyer is the Project Director and Audiovisual Conservator for the Southern Folklife Collection at the UNC Wilson Special Collections Library, coordinating in-house digitization and outsourcing of audiovisual materials for preservation. Erica also participates in the improvement of online access and digital preservation for digitized materials.

Jessica Venlet works as the Assistant University Archivist for Digital Records & Records Management at the UNC Wilson Special Collections Library. In this role, Jessica is responsible for a variety of things related to both records management and digital preservation. In particular, she leads the acquisition and management of born-digital university records. She earned a Master of Science in Information degree from the University of Michigan.


Small-Scale Scripts for Large-Scale Analysis: Python at the Alexander Turnbull Library

by Flora Feltham

This is the third post in the bloggERS Script It! Series.

The Alexander Turnbull is a research library that holds archives and special collections within the National Library of New Zealand. This means exactly what you’d expect: manuscripts and archives, music, oral histories, photographs, and paintings, but also artefacts such as Katherine Mansfield’s typewriter and a surprising amount of hair. In 2008, the National Library established the National Digital Heritage Archive (NDHA), and has been actively collecting and managing born-digital materials since. I am one of two Digital Archivists who administer the transfer of born-digital heritage material to the Library. We also analyse files to ensure they have all the components needed for long-term preservation and ingest collections to the NDHA. We work closely with our digital preservation system administrators and the many other teams responsible for appraisal, arrangement and description, and providing access to born-digital heritage.

Why Scripting?

As archivists responsible for safely handling and managing born-digital heritage, we use scripts to work safely and sanely at scale. Python provides a flexible yet reliable platform for our work: we don’t have to download and learn a new piece of software every time we need to accomplish a different task. The increasing size and complexity of digital collections often means that machine processing is the only way to get our work done. A human could not reliably identify every duplicate file name in a collection of 200,000 files… but Python can. To protect original material, too, the scripting we do during appraisal and technical analysis is done using copies of collections. Here are some of the extremely useful tasks my team scripted recently:

  • Transfer
    • Generating a list of files on original storage media
    • Transferring files off the original digital media to our storage servers
  • Appraisal
    • Identifying duplicate files across different locations
    • Adding file extensions so material opens in the correct software
    • Flattening complex folder structures to support easy assessment
  • Technical Analysis
    • Sorting files into groups based on file extension to isolate unknown files
    • Extracting file signature information from unknown files

Our most-loved Python script even has a name: Safe Mover. Developed and maintained by our Digital Preservation Analyst, Safe Mover will generate file checksums, maintain metadata integrity, and check file authenticity, all the while copying material off digital storage media. Running something somebody else wrote was a great introduction to scripting. I finally understood two things: a) you can do nimble computational work from a text editor; and b) a 'script' is just a set of instructions you write for the computer to follow.

Developing Skills Slowly, Systematically, and as Part of a Group

Once we recognised that we couldn't do our work without scripting, my team started regular 'Scripting Sessions' with other colleagues who code. At each meeting we solve a genuine processing challenge from someone's job, which allows us to apply what we learn immediately. We also write the code together on a big screen, which, like learning any language, helps us become comfortable making mistakes. Recently, I accidentally copied 60,000 spreadsheets to my desktop and then wondered aloud why my laptop ground to a halt.

Outside of these meetings, learning to script has been about deliberately choosing to problem-solve using Python rather than doing it 'by hand'. Initially, this felt counter-intuitive because I was painfully slow: I really could have counted 200 folders on my fingers faster than I could write a script to do the same thing. But luckily for me, my team recognised the overall importance of this skill set, and I also regularly remind myself: "this will be so useful the next 5000 times I inevitably need to do this task".

The First Important Thing I Remembered How to Write

Practically every script I write starts with import os. It became so simple once I'd done it a few times: import is a command, and 'os' is the name of the thing I want. os is a Python module that allows you to interact with and leverage the functionality of your computer's operating system. In general terms, a Python module is just a pre-existing code library for you to use. Modules are usually grouped around a particular theme or set of tasks.

The function I use the most is os.walk(). You tell os.walk() where to start and then it systematically traverses every folder beneath that. For every folder it finds, os.walk() will record three things: 1: the path to that folder, 2: a list of any sub-folders it contains, and 3: a list of any files it contains. Once os.walk() has completed its… well… walk, you have access to the name and location of every folder and file in your collection.

You then use Python to do something with this data: print it to the screen, write it in a spreadsheet, ask where something is, or open a file. Having access to this information becomes relevant to archivists very quickly. Just think about our characteristic tasks and concerns: identifying and recording original order, gaining intellectual control, maintaining authenticity. I often need to record or manipulate file paths, move actual files around the computer, or extract metadata stored in the operating system.

An Actual Script

Recently, we received a Mac-formatted 1TB hard drive from a local composer and performer. When our #1 script, Safe Mover, stopped in the middle of transferring files, we wondered if there was a file path clash. Generally speaking, in a Windows-formatted file system there's a limit of 255 characters to a file path ("D:\so_it_pays_to\keep_your_file_paths\niceand\short.doc").

Some older Mac systems have no limit on the number of file path characters so, if they’re really long, there can be a conflict when you’re transferring material to a Windows environment. To troubleshoot this problem we wrote a small script:

import os

top = r"D:\example_folder_01\collection"
for root, dir_list, file_list in os.walk(top):
    for item in file_list:
        file_path = os.path.join(root, item)
        if len(file_path) > 255:
            print(file_path)

So that's it. Running it over the hard drive gave us the answer we needed. But what is this little script actually doing?

# import the Python module 'os'.
import os

# tell our script where to start
top = " D:\example_folder_01\collection"
# now os.walk will systematically traverse the directory tree starting from 'top' and retain information in the variables root, dir_list, and file_list.
# remember: root is 'path to our current folder', dir_list is 'a list of sub-folder names', and file_list is 'a list of file names'.
for root, dir_list, file_list in os.walk(top):
    # for every file in those lists of files...
    for item in file_list:
        # ...store its location and name in a variable called 'file_path'.
        # os.path.join joins the folder path (root) with the file name for every file (item).
        file_path = os.path.join(root,item)
        # now we do the actual analysis. We want to know 'if the length of any file path is greater than 255 characters'.
        if len(file_path) > 255:
            # if so: print that file path to the screen so we can see it!
            print(file_path)

 

All it does is find every file path longer than 255 characters and print it to the screen. The archivists can then eyeball the list and decide how to proceed. Or, in the case of our 1TB hard drive, exclude that as a problem because it didn’t contain any really long file paths. But at least we now know that’s not the problem. So maybe we need… another script?


Flora Feltham is the Digital Archivist at the Alexander Turnbull Library, National Library of New Zealand Te Puna Mātauranga o Aotearoa. In this role she supports the acquisition, ingest, management, and preservation of born-digital heritage collections. She lives in Wellington.

Restructuring and Uploading ZIP Files to the Internet Archive with Bash

by Lindsey Memory and Nelson Whitney

This is the second post in the bloggERS Script It! series.

This blog is for anyone interested in uploading ZIP files into the Internet Archive.

The Harold B. Lee Library at Brigham Young University has been a scanning partner with the Internet Archive since 2009. Any loose-leaf or oversized items go through the convoluted workflow depicted below, which can take hours of painstaking mouse-clicking if you have a lot of items like we do (think 14,000 archival issues of the student newspaper). Items must be scanned individually as JPEGs, each JPEG must be reformatted into a JP2, the JP2s must all be zipped into ZIP files, and then the ZIPs are (finally) uploaded one by one into the Internet Archive.

[Image: old workflow for uploading ZIP files to the Internet Archive]

Earlier this year, the engineers at the Internet Archive published a single line of Python code that allows scan centers to upload multiple ZIP files into the Internet Archive at once (see "Bulk Uploads"). My department has long dreamed of a script that could reformat Internet-Archive-bound items and upload them automatically. The arrival of the Python code got us moving. I enlisted the help of the library's senior software engineer and we discussed ways to compress scans, how Python scripts communicate with the Internet Archive, and ways to reorganize the scans' directory files in a way conducive to a basic Bash script.

The project was delegated to Nelson Whitney, a student developer. Nelson wrote the script, troubleshot it with me repeatedly, and helped author this blog post. Below we present his final script in two parts, written in Bash for macOS in spring 2018.

Part 1: makeDirectories.sh

This simple command, executed through Terminal on a Mac, takes a list of identifiers (list.txt) and generates a set of organized subdirectories for each item on that list. These subdirectories house the JPEGs and are structured such that later they streamline the uploading process.

#! /bin/bash

# Move into the directory "BC-100" (the name of our quarto scanner), then move into the subdirectory named after whichever project we're scanning, then move into a staging subdirectory.
cd BC-100/$1/$2

# Takes the plain text "list.txt," which is saved inside the staging subdirectory, and makes a new subdirectory for each identifier on the list.
cat list.txt | xargs mkdir
# For each identifier subdirectory,
for d in */; do
  # Terminal moves into that directory and
  cd $d
  # creates three subdirectories inside named "01_JPGs_cropped,"
  mkdir 01_JPGs_cropped
  # "02_JP2s,"
  mkdir 02_JP2s
  # and "03_zipped_JP2_file," respectively.
  mkdir 03_zipped_JP2_file
  # It also deposits a blank text file in each identifier folder for employees to write notes in.
  touch Notes.txt

  cd ..
done
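Assuming the script is saved as makeDirectories.sh next to the BC-100 directory and made executable, a run supplies the project and staging subdirectory names as the two positional arguments (the names below are placeholders):

chmod +x makeDirectories.sh
./makeDirectories.sh student_newspaper 2018_batch_01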
[Image: the resulting file structure]

Part 2: macjpzipcreate.sh

This Terminal command recursively moves through the subdirectories, going into 01_JPGs_cropped first and turning all JPEGs therein into JP2s. Terminal saves the JP2s into the subdirectory 02_JP2s, then zips the JP2s into a ZIP file and saves the ZIP in subdirectory 03_zipped_JP2_file. Finally, Terminal uploads the ZIP to the Internet Archive. Note that for the bulk upload to work, you must have configured Terminal with the "./ia configure" command and entered your IA admin login credentials.

#! /bin/bash

# Places the binary file "ia" into the project directory (this enables upload to Internet Archive)
cp ia BC-100/$1/$2
# Move into the directory "BC-100" (the name of our quarto scanner), then moves into the subdirectory named after whichever project we're scanning, then move into a staging subdirectory
cd BC-100/$1/$2

# For each identifier subdirectory
for d in */; do
  # Terminal moves into that identifier's directory and then into the directory containing all the JPG files
  cd $d
  cd 01_JPGs_cropped

  # For each jpg file in the directory
  for jpg in *.jpg; do
    # Terminal converts the jpg files into jp2 format using the sips command available in the macOS terminal
    sips -s format jp2 --setProperty formatOptions best $jpg --out ../02_JP2s/$jpg.jp2
  done

  cd ../02_JP2s
  # The directory variable contains a trailing slash. Terminal removes the trailing slash,
  d=${d%?}
  # gives the correct name to the zip file,
  im="_images"
  # and zips up all JP2 files.
  zip $d$im.zip *
  # Terminal moves the zip file into the intended zip file directory.
  mv $d$im.zip ../03_zipped_JP2_file

  # Terminal moves back up to the project directory, where the ia binary lives
  cd ../..
  # Uses the Internet-Archive-provided tool to upload the zip file to the Internet Archive
  ./ia upload $d $d/03_zipped_JP2_file/$d$im.zip --retries 10
  # Changes the repub_state of the online identifier to 4, which marks the item as done in the Internet Archive.
  ./ia metadata $d --modify=repub_state:4
done
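Invocation mirrors the first script, with the same two positional arguments (again placeholders), run only after "./ia configure" has been completed in the project directory:

chmod +x macjpzipcreate.sh
./macjpzipcreate.sh student_newspaper 2018_batch_01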

The script has reduced the labor devoted to the Internet Archive by a factor of four. Additionally, it has bolstered Digital Initiatives’ relationship with IT. It was a pleasure working with Nelson; he gained real-time engineering experience working with a “client,” and I gained a valuable working knowledge of Terminal and the design of basic scripts.


 

Lindsey Memory is the Digital Initiatives Workflows Supervisor at the Harold B. Lee Library at Brigham Young University. She loves old books and the view from her backyard.

 

 


Nelson Whitney is an undergraduate pursuing a degree in Computer Science at Brigham Young University. He enjoys playing soccer, solving Rubik’s cubes, and spending time with his family and friends. One day, he hopes to do cyber security for the Air Force.

The Human and the Tech: Failing Gracefully While Archiving a Downsized Unit

By Mary Mellon and Heidi Kelly

This is the second post in the bloggERS series #digitalarchivesfail: A Celebration of Failure.

____

The human element of digital archiving has lately been covered very well, with well-known professionals like Bergis Jules and Hillel Arnold taking on various pieces of the topic. At Indiana University (IU), we have a tacit commitment in most of our collections to the concept of a culture of care: taking care of those who entrust us with the longevity of their materials. However, sometimes even the best of intentions can lead to failures to achieve that goal, especially when the project involves a fraught issue like downsizing and the loss or fundamental change of employment for people whose materials we want to bring into the collection. We wanted to share a story of one of our failures as part of the #digitalarchivesfail series because we hope that others can learn from our oversights, and that by sharing it we can contribute to the conversation that projects like Documenting the Now have mobilized.

Introduction: Indiana University Archives and the Born Digital Preservation Lab

Mary Mellon (MM): The mission of the Indiana University Archives is to collect, organize, preserve and make accessible records documenting Indiana University's origins and development and the activities and achievements of its officers, faculty, students, alumni and benefactors. We often work with internal units and donors to preserve the institution's legacy. In fulfilling our mission, we are beginning to receive increasing amounts of born-digital material without in-house resources or specialized staff for maintaining a comprehensive digital preservation program.

Heidi Kelly (HK): The Indiana University Libraries Born Digital Preservation Lab (BDPL) is a sub-unit of Digital Preservation. It started last January as a service modeled off of the Libraries’ Digitization Services unit, which works with various collection owners to digitize and make accessible books and other analog materials. In terms of daily management of the BDPL, I generally do outreach with collection units to plan new projects, and the Digital Preservation Technician, Luke Menzies, creates the workflows and solves any technical problems. The University Archives has been our main partner so far, since they have ongoing requests from both internal and external donors with a lot of born-digital materials.   

Developing a Project With a Downsized Unit

MM: In the middle of 2016, IU Archives staff were contacted by another unit on campus about transferring its records and faculty papers. Unfortunately, the academic unit was facing impending closure and was expected to move out of its physical location within a few months. After an initial assessment, we worked with the unit's administrative staff and faculty members, the IU Libraries' facilities officer, and Auxiliary Library Facility staff to inventory, pack, and transfer boxes of papers, A/V material, and other physical media to archival storage.

HK: The unit's staff also suggested that they transfer, along with their paper records, all of the content from their server, which is how the BDPL got involved. The server contained a relatively small amount of content compared to the paper records, but "accessioning a server" was a new challenge for us.

What We Did

MM: I should probably mention that the job of coordinating the transfer of the unit’s material landed on my desk on my second day of work at Indiana University (thanks, boss!), so I was not involved in the initial consultation process. While the IU Archives typically asks campus offices and donors to prepare boxes for transfer, the unit’s shrinking staff had many competing priorities resulting from the approaching closure (the former office manager had already left for a new job). They reached out for additional help about four weeks prior to closure, and the IU Archives assumed the responsibility of packing and physically transferring records at that point. As a result, the large volume of paper records to accession was the immediate focus of our work with the unit. 

In the end, we boxed about 48 linear feet of material, facing several unanticipated challenges along the way, namely coordinating with personnel who were transferring or leaving employment. The building that housed the unit was not regularly staffed anyway during the summer, requiring a new email back-and-forth to gain access every time we needed to resume file packing. This need for access also necessitated special trips to the office by unit staff. The staff were obviously stretched a bit thin with the closure in general, so any difficulties or setbacks in terms of transferring materials, whether paper or digital, just added to their stress.  

In addition, despite consulting with IU Archives staff and our transfer policies, the unit’s faculty exhibited an unfortunate level of modesty over the enduring value of their papers, which significantly slowed the process of acquiring paper materials. To top it all off, during a stretch of relentless rain, the basement, where most of the paper files were stored prior to transfer, flooded. The level of humidity necessitated moving the paper out as soon as possible, which again made the analog materials our main focus in the IU Archives.

HK: Accessioning the server, on the other hand, didn’t involve many trips to the unit’s physical location. Our main challenges were determining how best to ensure that everything got properly transferred, and how to actually transfer the digital content.

Because the server ran Windows, it posed the first big issue for us. Our main workflow focuses on creating a disk image, which in this case was not optimal. We had a bash script that we regularly used on our main Linux machines to inventory large external hard drives, but we weren't prepared to run it on a Windows operating system. Luke, who had no prior experience with Python, amazingly adapted our script within a short period of time, but unfortunately this still set us back since the unit's staff were working against the clock. I also didn't realize that, in explaining the technical issue to them, I had inadvertently encouraged them to search out their own solutions to effectively capture all of the information that our bash inventory was generating. This was a fundamental misstep, as I didn't think about their attenuated timeline and how that might push a different response. While the staff's proactivity was helpful, it compounded the amount of time they ultimately spent and left them feeling frustrated.
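For illustration only (this is not the BDPL's actual script), a bare-bones version of that kind of inventory, written in Python so that it behaves the same on Windows and Linux, might look like the sketch below; the source path and output file name are hypothetical:

import csv
import hashlib
import os

top = r"E:\unit_server_copy"  # placeholder path to the copied content

with open("inventory.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["path", "size_bytes", "modified", "md5"])
    for root, dir_list, file_list in os.walk(top):
        for item in file_list:
            path = os.path.join(root, item)
            md5 = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    md5.update(chunk)
            writer.writerow([path, os.path.getsize(path), os.path.getmtime(path), md5.hexdigest()])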

All of that said, we got the inventory working, and then we faced a new set of challenges with the actual transfer. First, we requested read-only access to the server through the unit's IT department, but were unable to obtain the user privileges. We then tried to move things using Box, the university's cloud storage, but discovered some major limitations. When we compared the inventory generated on the user's end with what we received in the BDPL, a lot of files were missing. As we found out, the donor had set everything to upload but received no notifications when the system failed to complete the process. Box had already been less than ideal in that some of the metadata we wanted to keep for every file was wiped as soon as the files were uploaded, so this made us certain that it was not the right solution. Finally, we landed on using external media for the transfer, an option that we had considered earlier but could not pursue at the time because we did not have any media available early on in the project.

What We Learned

HK: In digital archiving, and digital curation more broadly, we are constantly talking about building our <insert preferred mode of transport here> as we <corresponding verb> it. Our fumblings to figure out better and faster ways to preserve obsolete media are, often, comical. But they also have an impact in several ways that we hadn’t really considered prior to this project. Beyond the fact of ensuring better odds for the longevity of the content that we’re preserving, we’re preserving the tangible histories of Indiana University and of its staff. Our work means that people’s legacies here will persist, and our work with the unit really laid that bare.

Going forward, there are several parts of the BDPL's workflow that will improve based on the failures of this particular project. First, we've learned that our communication of technical issues is not optimal. As a tech geek, I think that the way in which I frame questions can sometimes push people to answer and act in different ways than if the questions were framed by a non-tech person. I spend every day with this stuff, so it's hard to step back from "How good are you at command line?" to "Do you know what command line is?" But that's key to effective relationship management, and it caused a problem in this situation. My goal in this case is to rely further on Mary and other Archives staff to communicate directly with donors. To me, this makes the most sense because we in the BDPL aren't front-facing people; our role is to advise the collection managers, the people who have been regularly interfacing with donors for years. They're much better at that element, and we've got the technical stuff covered, so everybody's happy if we can continue building out our service model in that way. Right now we're focusing on training for the archivists, librarians, and curators who are going to be working directly with donors.

Another improvement we've made is the creation of a decision matrix for BDPL projects, in order to better define how we make decisions about new projects. This will again help archivists and other staff as they work with donors. It will also help us to continue focusing on workflows rather than one-offs, building on the knowledge we gain from other, similar projects instead of starting fresh every time. The necessity of focusing on pathways rather than solutions is again something that became clearer after this project.

MM: Truism alert: born-digital materials really need to be an early part of the conversations with donors. In this case, we would have gained a few weeks that could have contributed to a smoother experience for all parties and a more ideal preservation solution than what we ended up with. Despite the fact that our guidelines for transferring records and papers state that we want electronic records as well as analog, we cannot count on donors to be aware of this policy, or to be proactive about offering up born-digital content if it will almost certainly lead to more complications.

Seconding Heidi’s point, we at the Archives need to assume more responsibility in interactions with donors regarding transfer of materials, instead of leaving everything digital on the BDPL’s plate. The last thing we want is for donors to be discouraged from transferring born-digital material because it is too much of a hassle, or for the Archives to miss out on any contextual information about the transfers due to lack of involvement. We at the Archives are using this experience to develop formal workflows and policies in conjunction with the BDPL to optimize division of responsibilities between our offices based on expertise and resources. We’re not going to be bagging any files anytime soon on our own workstations, but we can certainly sit down with a donor to walk them through an initial born-digital survey and discuss transfer procedures and technical requirements as needed.

Conclusion

HK: In the end, we’re archiving the legacies of people and institutions. Creating a culture of care is easy to talk about, but when dealing with the legacies of staff who are being displaced or let go, it is crucial and much harder than we could have imagined. Empathy and communication are key, as is the understanding that failure is a fact in our field. We have to embrace it in order to learn from it. We hope that staff at other institutions can learn from our failures in this case.

MM: Ditto.

____

Mary Mellon is the Assistant Archivist at Indiana University Archives, where she manages digitization and encoding projects and provides research and outreach support. She previously worked at the Rubenstein Library at Duke University and the University Libraries at UNC-Chapel Hill. She holds an M.S. in Information Science from UNC-Chapel Hill and a B.A. from Duke University.

Heidi Kelly is the Digital Preservation Librarian at Indiana University. Her current work focuses on infrastructure development and services for born-digital objects. Previously, Heidi worked at Huygens ING, the Library of Congress, and Nazarbayev University. She holds a Master's degree in Library and Information Science from Wayne State University.