Practical Digital Preservation: In-House Solutions to Digital Preservation for Small Institutions

By Tyler McNally

This is the tenth post in our series on processing digital materials.

Many archives don’t have the resources to install software or subscribe to a service such as Archivematica, but still have a mandate to collect and preserve born-digital records. Below is a digital-preservation workflow created by Tyler McNally at the University of Manitoba. If you have a similar workflow at your institution, include it in the comments. 

———

Recently I completed an internship at the University of Manitoba’s College of Medicine Archives, working with Medical Archivist Jordan Bass. A large part of my work during this internship involved building digital infrastructure that the archive could use for digital preservation. As a small operation, the archive does not have the resources to pursue a paid or difficult-to-use system.

Originally, our plan was to use the open-source, self-install version of Archivematica, but certain issues that cropped up made this impossible, considering the resources we had at hand. We decided that we would simply make our own digital-preservation workflow, using open-source and free software to convert our files for preservation and access, check for viruses, and create checksums—not every service that Archivematica offers, but enough to get our files stored safely. I thought other institutions of similar size and means might find the process I developed useful in thinking about their own needs and capabilities.
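
A minimal sketch of these core steps (fixity, virus checking, and format conversion) might look like the following Python script. This is only an illustration, not the Manitoba workflow itself: it assumes ClamAV (clamscan) and FFmpeg are installed, and the directory and file names are placeholders.

```python
import hashlib
import subprocess
from pathlib import Path

SOURCE = Path("transfer")          # placeholder: directory of incoming files
MANIFEST = Path("manifest.sha256") # placeholder: checksum manifest to write

def sha256(path: Path) -> str:
    """Compute a SHA-256 checksum, reading the file in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> None:
    # 1. Fixity: record a checksum for every file in the transfer.
    with MANIFEST.open("w") as out:
        for path in sorted(p for p in SOURCE.rglob("*") if p.is_file()):
            out.write(f"{sha256(path)}  {path}\n")

    # 2. Virus check: assumes ClamAV's clamscan is on the PATH.
    subprocess.run(["clamscan", "-r", str(SOURCE)], check=True)

    # 3. Access copies: assumes FFmpeg is installed; here WAV masters are
    #    converted to MP3 access copies as one example of normalization.
    for wav in SOURCE.rglob("*.wav"):
        subprocess.run(["ffmpeg", "-i", str(wav), str(wav.with_suffix(".mp3"))],
                       check=True)

if __name__ == "__main__":
    main()
```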

Continue reading

Call for Contributors – #digitalarchivesfail: A Celebration of Failure

Here on bloggERS!, we love to publish success stories. But we also believe in celebrating failure–the insights that emerge out of challenges, conundrums, and projects that didn’t quite work out as planned. All of us have failed and grown into wiser digital archives professionals as a result. We believe that failures don’t get enough airtime, and thanks to a brilliant idea from guest editor Rachel Appel, Digital Projects & Services Librarian at Temple University, we’re starting a new series to change that: #digitalarchivesfail: A Celebration of Failure.

So, tell us: when have you experienced failure when dealing with digital records, what did the experience reveal, and why is the wisdom gleaned worth celebrating? Tell us the story of your #digitalarchivesfail.

A few topics and themes to get you thinking (but we’re open to all ideas!):

  • Failed projects (What factors and complexities caused the project to fail? What’s the best way to pull the plug on a project? Are there workflows, tools, best practices, etc. that could be developed to help prevent similar failures?)
  • Experiences with troubleshooting and assessment (to identify or prevent points of failure)
  • Times when you’ve tried to make things work when they’ve failed or aren’t perfect
  • Murphy’s law
  • Areas where you think the archives profession might be “failing” and should focus its attention

In the spirit of celebrating failure, we encourage all authors to take pride in their #digitalarchivesfails, but if there is a story you really want to tell and you prefer to remain anonymous, we will accept unsigned posts.

Writing for bloggERS!

  • Posts should be between 200 and 600 words in length
  • Posts can take many forms: instructional guides, in-depth tool explorations, surveys, dialogues, and point-counterpoint debates are all welcome!
  • Write posts for a wide audience: anyone who stewards, studies, or has an interest in digital archives and electronic records, both within and beyond SAA
  • Align with other editorial guidelines as outlined in the bloggERS! guidelines for writers

Posts for this series will start soon, so let us know ASAP if you are interested in contributing by sending an email to ers.mailer.blog@gmail.com!

____

Thanks to series guest editor Rachel Appel for inspiring this series and collaborating with us on this call for contributions!

Digital Preservation in NYC

This year’s meeting of the Preservation and Archiving Special Interest Group (PASIG) took place this October at the Museum of Modern Art in New York.  PASIG brings together an international community to share successes and challenges of digital preservation, with an emphasis on practical applications and solutions.

The conference was three days long, and kicked off with a day of “Bootcamp/101” sessions, focused on bringing everyone up to speed on what it is we’re preserving and how we can go about building infrastructures to support preservation.  Unfortunately I wasn’t able to arrive until Day 2, but many of the presentation slides are available online at the conference’s figshare page.

I arrived on Thursday morning, ready to jump into a morning of presentations and panel discussions on reproducibility and research data. Vicky Steeves started the presentations with an explanation of reproducibility vs. replication, a distinction well worth making, especially for those of us with less experience working with research data.

“Reproducibility independently confirms results with the same data (and/or code); replication independently confirms results with new data (and/or code).”

Steeves pointed out that the concerns of reproducibility are really an iceberg: the environment in which the research was conducted often goes unnoticed, especially when research tools rely on a certain version of a browser, hardware, or software. These tools may be updated or changed in a way that isn’t immediately visible.

One potential solution to this problem was presented by Fernando Chirigati of New York University. He introduced the tool ReproZip, which allows the researcher to package the data files, libraries, and environment variables of an experiment. ReproZip runs in the background while the experiment is conducted and documents the variables and technological dependencies that future researchers will need to reproduce the experiment once tools and browsers have changed. The packaged data and environment variables can be archived, then unpacked with ReproZip for future use.
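
To make the packaging idea concrete, here is a toy Python sketch that bundles data files with a snapshot of the environment they were produced in. It is not ReproZip, which traces the running experiment and captures its dependencies automatically; this only illustrates the general concept, and the directory name is a placeholder.

```python
import json
import os
import platform
import sys
import tarfile
from pathlib import Path

def package_environment(data_dir: str, out_name: str = "experiment_package") -> None:
    """Toy illustration: archive data files alongside a record of the
    environment that produced them (ReproZip does this far more thoroughly)."""
    snapshot = {
        "python_version": sys.version,
        "platform": platform.platform(),
        "environment_variables": dict(os.environ),
        "files": [str(p) for p in Path(data_dir).rglob("*") if p.is_file()],
    }
    Path("environment.json").write_text(json.dumps(snapshot, indent=2))

    # Bundle the snapshot and the data into a single package for the archive.
    with tarfile.open(f"{out_name}.tar.gz", "w:gz") as tar:
        tar.add("environment.json")
        tar.add(data_dir)

# Example (directory name is hypothetical): package_environment("results")
```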

Both Peter Burnhill from the University of Edinburgh and Rachel Trent from George Washington University Libraries discussed the problem of reproducing research that relies on web resources. Burnhill’s presentation, “Web Today, Gone Tomorrow,” focused on the lack of persistence in web addresses and the need for ongoing preservation of online articles and other academic resources. To get an idea of the scope of this problem, 20-30% of referenced URLs are lost within 2 weeks of publication. Burnhill presented the Hiberlink project, which aims to close this preservation gap through partnerships with academic publishing outlets.

Rachel Trent’s presentation, “Documenting the Demographic Imagination,” discussed the challenges of preserving social media data for reproducible research. Given the continued migration from one social media platform to another (Myspace to Facebook to Twitter, etc.), the archivist can’t assume that future researchers will understand the basis of any of these websites. Trent discussed the use of social media managers and web harvesters to automate the collection of social media data, and what metadata can be automatically extracted using these tools. Trent and her team are now looking for feedback from the community on what’s missing from their social media metadata and how researchers want to interact with it.

After a brief lunch break, we dove into the challenges of preserving complex and very large data. Karen Cariani presented on the media library and archives of public broadcaster WGBH. Because the archive works with audio and video files, its preservation needs are significant: uncompressed preservation masters are very large, the formats are complicated, and proxy files are necessary for access. Cariani discussed how the HydraDAM2 project worked to fill this preservation gap by extending the HydraDAM system to work with the Fedora 4 repository and creating a Hydra “head” for digital A/V preservation.

Ben Fino-Radin continued on the theme of preservation at scale, discussing the creation of workflows for digitized time-based media holdings at MoMA. The digital repository uses Archivematica for ingest, Arkivum for storage, and Binder for managing these digital assets. A single 120-minute film, once restored at 4K resolution, contains about 4 terabytes of data, so the workflows and systems for managing these files have to move quickly and efficiently. This also means that MoMA must be efficient in prioritizing film digitization efforts.
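
As a rough sanity check on that figure, back-of-the-envelope arithmetic for uncompressed 4K video lands in the same range. The parameters below (8-bit RGB at 24 frames per second) are assumptions for illustration; an actual restoration may use different bit depths and frame rates.

```python
# Rough size of an uncompressed 120-minute film at DCI 4K, under assumed parameters.
width, height = 4096, 2160   # DCI 4K frame dimensions in pixels
bytes_per_pixel = 3          # assumed 8-bit RGB; real masters may use more
fps = 24
minutes = 120

total_bytes = width * height * bytes_per_pixel * fps * minutes * 60
print(f"{total_bytes / 1e12:.1f} TB")   # prints 4.6 TB
```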

Day three focused on sustainability: not only sustaining our cultural and scientific heritage through digital preservation, but also sustaining our planet and communities. Eira Tansey from the University of Cincinnati pointed out the obvious but rarely discussed fact that archives require energy, digital archives especially so. She urged the audience to consider the energy required for preservation during their daily work in the archive. Some common practices of digital preservation may be a wasteful use of resources, such as preserving every derivative file as it is migrated from one format to the next, or treating file compression as the enemy of preservation. She posted the entire text of her talk online: “The Voice of One Crying Out in the Wilderness: Preservation in the Anthropocene.”

Elvia Arroyo-Ramirez, Processing Archivist for Latin American Manuscript Collections, Princeton University, presented “Invisible Defaults and Perceived Limitations: Processing the Juan Gelman Files.”  She discussed how the systems we use contain the biases of the people who create them, pointing to systems that require file names be ‘cleaned’ or ‘scrubbed’ to remove ‘illegal characters’ including Spanish-language diacritic glyphs.  When working with a born-digital collection created in another language, those glyphs are vital to the understanding of those records.  She asked the community how we can intervene to make our tools and technologies reflect our mission to preserve the records and ‘do no harm.’  
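
Her point is easy to demonstrate. The sketch below uses a hypothetical file name to show how a common ASCII-only “scrubbing” pattern damages Spanish-language diacritics, and how Unicode normalization is a gentler alternative; it is not drawn from the specific systems she described.

```python
import re
import unicodedata

filename = "Carta_a_Juan_Gelman_poesía.txt"   # hypothetical file name

# A common "scrubbing" pattern: replace anything outside a narrow ASCII set.
scrubbed = re.sub(r"[^A-Za-z0-9._-]", "_", filename)
print(scrubbed)    # Carta_a_Juan_Gelman_poes_a.txt  (the accented character is lost)

# A gentler alternative: normalize to a consistent Unicode form, keeping the glyphs.
preserved = unicodedata.normalize("NFC", filename)
print(preserved)   # Carta_a_Juan_Gelman_poesía.txt  (diacritics intact)
```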

The conference concluded with Ingrid Burrington, neither an archivist nor a digital preservationist, but a self-described writer, mapmaker, and joke maker, and the author of Networks of New York: An Illustrated Field Guide to Urban Internet Infrastructure. She discussed the physical infrastructure that makes up the internet and the corporate infrastructures that keep it running. She pointed to social media as shaping how we communicate, and to products like Google Maps as shaping our understanding of the world’s geography. Companies like Google can skew their products away from reality, whether by blurring sensitive government installations or their own data centers. Corporate interest and the public need for information do not always align.

This change of perspective was a great end to the conference, bringing us out of our technical comfort zones and making the audience consider how the work of digital preservation has larger and potentially more dire effects than we may realize.

 


 

Alice Sara Prael is the Digital Accessioning Archivist at Beinecke Rare Book & Manuscript Library at Yale University.  She works with born digital archival material through a centralized accessioning service.

#bdaccess Twitter Chat Recap

By Jess Farrell and Sarah Dorpinghaus

This post is the sixteenth in a bloggERS series about access to born-digital materials.

____

An ad-hoc born-digital access group with the Digital Library Federation recently held two successful and informative #bdaccess Twitter chats that scratched the surface of the born-digital access landscape. The discussions aimed to gain insight into how researchers want to access and use digital archives and included questions on research topics, access challenges, and discovery methods.

Here are a few ideas that were discussed during the two chats:

You can search #bdaccess on Twitter to see how the conversation evolves or view the complete conversation from these chats on Storify.

The Twitter chats were organized by a group formed at the 2015 SAA annual meeting. We are currently developing a bootcamp to share ideas and tools for providing access to born-digital materials and have teamed up with the Digital Library Federation to spread the word about the project. Stay tuned for future chats and other ways to get involved!

____

Jess Farrell is the curator of digital collections at Harvard Law School. Along with managing and preserving digital history, she’s currently fixated on inclusive collecting, labor issues in libraries, and decolonizing description.

Sarah Dorpinghaus is the Director of Digital Services at the University of Kentucky Libraries Special Collections Research Center. Although her research interests lie in the realm of born-digital archives, she has a budding pencil collection.

The Game is Afoot! Digital Sleuthing at the Electronic Records Section/Museum Archives Section Mystery Workshop

By Christine Wang

____

Every archivist wears many different hats. Detective is not usually one of them, but at this year’s SAA Annual Meeting in Atlanta, museum archivists donned their deerstalkers for the day as they delved into a mystery workshop designed to introduce participants to principles and practices in managing born-digital records within an institution.  

“Find the Person: Missing Curator Mystery Edition!” led participants (cast as the project archivist at the fictional Three Hills Museum) through the curious case of lead curator and director Jane Stevens, who seems to have suddenly vanished, leaving behind only a mysterious set of files. Tasked with finding out just what had happened to Jane, participants sifted through her files (photographs, text files, spreadsheets, and various other documents) to solve the mystery (spoiler alert: it turns out Jane was perfectly fine, having simply rushed off to Russia in her excitement to examine a potential J.M.W. Turner painting). In the process, they grappled with questions not only about Jane and her whereabouts, but also about the organization, protection, and preservation of files like the ones they were examining; that is, about digital archiving and records management in a professional setting.

Rachel Chatalbash and Susan Hernandez from the Museum Archives Section, and Ann Cooper, Wendy Hagenmaier, and Carol Kussmann from the Electronic Records Section planned the workshop based on Wendy’s 2014 workshop for the Society of Georgia Archivists. Wendy and Ann led the workshop for the Museum Archives Section at the 2016 SAA meeting. The materials for the 2016 workshop covered topics and methods in personal digital archiving to support participants in working with a mixture of personal and archival digital records and to boost participants’ confidence in working with digital material.

This year’s workshop revised and expanded upon the ideas of the original personal digital archiving workshop materials, applying them to the management and archiving of born-digital records in a museum environment. If you would like to view the materials from the workshop, follow the links to the Workshop Activity Instructions and Additional Resources.

____

Christine Wang is the Nancy Horton Bartels Scholar Intern at the Yale Center for British Art Institutional Archives.

 

Software Preservation Network: Community Roadmapping for Moving Forward

By Susan Malsbury

This is the fifth post in our series on the Software Preservation Network 2016 Forum.
____


The final session of the Software Preservation Forum was a community roadmapping activity with two objectives: to synthesize topics, patterns, and projects that came up during the forum, and to articulate steps and the time frame for future work. This session built off of two earlier activities in the day: an icebreaker in the morning and a brainstorming activity in the afternoon.

For the morning icebreaker, participants, armed with blank index cards and a pen, found someone in the room they hadn’t met before. After brief introductions they each shared one challenge that their organization faced with software and/or software preservation, and they wrote their partner’s challenge on their own index card. After five rounds of this, participants returned to their tables for the opening remarks from Jessica Meyerson and Zach Vowell, and Cal Lee.

For the afternoon brainstorming activity, participants took the cards from the morning icebreaker, as well as fresh cards, and again paired with someone they hadn’t met. Each pair looked over their notes from the morning and wrote out goals, tasks, and projects that could respond to the challenges. By that point, we had already had three excellent sessions, as well as casual conversations over lunch and coffee breaks, to further inform potential projects.

I paired with Amy Stevenson from the Microsoft Corporation. Even though her organization is very different from mine (the New York Public Library), we easily identified projects that would address our own challenges as well as the challenges we gathered in the morning. The projects we identified included a software registry, educational resources, and a clearinghouse to provide discovery for software. We then placed our cards on a butcher-paper timeline at the front of the room that spanned from right now to 2022, a six-year time frame with the first full year being 2017.

During the fourth session on partnerships, Jessica Meyerson entered the goals, projects, and ideas from the timeline into a spreadsheet so that for the fifth session we were ready to get roadmapping! For this session we broke into three groups to discuss the roadmap and to work on our own group’s copy of the spreadsheet. Our group subdivided into smaller groups, each of which took a year of the timeline to edit and comment on. While we all focused on our year, conversation between subgroups flowed freely and people felt comfortable moving projects into other years or streamlining ideas across the entire time frame. Links to the master spreadsheet and our three versions can be found here.

Despite working in three separate groups, it was remarkable how closely our edited roadmaps aligned. Not surprisingly, most people felt it was important to front-load steps regarding research, developing platforms for sharing information, and identifying similar projects to form partnerships. Projects in the later years would grow from this earlier research: creating the registry, establishing a coalition, and developing software metadata models.

I found the forum, and this session in particular, to be energizing. I had attended the talk that Jessica Meyerson and Zach Vowell gave at SAA in 2014 when they first formed the Software Preservation Network. While I was intrigued by the idea of software preservation, it seemed a far-off concept to me. At that time, there were still many other issues regarding digital archives that seemed far more pressing. When I heard other people’s challenges at the forum, and had space to think about my own, I realized how important and timely software preservation is. As digital archives best practices are being codified, we are realizing more and more how dependent we are on (often obsolete) software to do our work.

____

Susan Malsbury is the Digital Archivist for The New York Public Library, working with born digital archival material across the three research centers of the Library. In this role, she assists curators with acquisitions; oversees technical services staff handling ingest and processing; and coordinates with public service staff to design and implement access systems for born digital content. Susan has worked with archives at NYPL in various capacities since 2007.

Pathways to Automated Appraisal for Born-Digital Records: An SAA 2016 ERS Breakout Discussion Recap

By Lora Davis
____

In a stroke of brilliant SAA scheduling (or, perhaps, blind chance) the 2016 Electronic Records Section’s annual business meeting immediately followed Thursday afternoon’s session 201 “From 0 to 400 GB: Confronting the Challenges of Born-Digital Photographs.” During this session, panelists Kristen Yarmey, Ed Busch, Chris Prom, Molly Tighe, and Gregory Wiedeman discussed a variety of steps they’ve taken to answer the question “What next?” following the (physical or digital) delivery of born-digital campus photographs to their repositories. I listened intently as Wiedeman recounted how he has employed the API of his campus’ chosen cloud-based online public photo database (SmugMug) to automate the description of born-digital campus photographs at large scale. By reusing the existing photographer-generated descriptive metadata stored in SmugMug, Wiedeman’s campus photographs “describe themselves.” This struck a chord with me as I look forward to my own institution’s upcoming National Digital Stewardship Residency project “Large-Scale Digital Stewardship: Preserving Johns Hopkins University’s Born-Digital Visual History.” But, I wondered, could a similar method be employed to automate appraisal?
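
As an illustration of that metadata-reuse idea (not Wiedeman’s actual implementation), a sketch like the following could pull photographer-supplied fields from a photo-hosting API and map them to descriptive metadata. The endpoint, authentication, and field names are placeholders; the real SmugMug API uses OAuth and its own response structure.

```python
import requests

# Placeholder endpoint: the real SmugMug API differs; this only sketches the idea.
API_URL = "https://api.example.org/albums/{album_id}/images"

def describe_album(album_id: str, api_key: str) -> list[dict]:
    """Fetch photographer-supplied metadata and map it to descriptive fields."""
    response = requests.get(
        API_URL.format(album_id=album_id),
        params={"APIKey": api_key},              # placeholder auth scheme
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    records = []
    for image in response.json().get("images", []):   # assumed JSON shape
        records.append({
            "title": image.get("Title", ""),
            "description": image.get("Caption", ""),
            "subjects": image.get("Keywords", []),
            "date": image.get("Date", ""),
        })
    return records
```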

As the formal portion of the ERS business meeting concluded, the Section broke into several unconference-style small group discussions. Inspired by the above, I volunteered to lead one on potential methods for automating the appraisal of born-digital records. Breakout participant Tammi Kim kept notes as a group of about 20 ERS members engaged in discussion. As is often the case, our conversation occasionally deviated from the primary topic of appraisal, but even these tangents proved fruitful. Some of the topics discussed and questions raised include:

  • The differences and distinctions between born-digital appraisal and weeding. Is the goal of minimizing the total size of digital records ingested (say, reducing 50TB of born-digital campus photographs to 10TB) analogous to actually doing appraisal on these records?
  • Could the type of facial recognition software discussed in session 201 be used not only for description purposes, but also to identify VIPs and other photographic content that would inform appraisal decisions?
  • If the record’s creator (say, a campus photographer) assigned rights or permissions metadata to a digital object, might that rights metadata be employed for appraisal in an MPLP-like fashion?
  • What are the differences between photographic and text-based digital records? Is automated, machine-actionable appraisal more likely to succeed with one type of record than another? (E.g. It is easier to search for text in word processing documents and OCRed PDFs than it is to “search” in photographs.)
  • How can “micro-tools” like ArchiveFinder (product mentioned, but I cannot locate a GitHub page) and FileAnalyzer help with the appraisal of large, complex directories of digital files? Additionally, while tools like ExifTool can read, write, and edit embedded technical metadata, how useful is technical metadata to appraisal decisions? (See the sketch after this list.)
  • How might content creators be brought into appraisal decisions after content has been transferred to a repository? Can we ask creators to enhance or add metadata after the fact?
  • Where does appraisal actually fit in with processing workflows, especially when working with larger files like video and disk images? How do you manage the need for increased storage even at the appraisal stage?
  • What “traditional” approaches to analog appraisal do not necessarily apply to digital? Where does potential future use of records fit in with born-digital appraisal decisions?
  • Are born digital archives even sustainable monetarily or ecologically? Are we building the Tower of Babel? What about server farms and the offset of dirty fuels?
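
On the ExifTool question above, a minimal sketch of reading embedded technical metadata might look like this. It assumes ExifTool is installed and on the PATH; the file name is hypothetical, and the fields printed are common tag names whose availability depends on the file.

```python
import json
import subprocess

def embedded_metadata(path: str) -> dict:
    """Read embedded technical metadata via ExifTool's JSON output."""
    result = subprocess.run(["exiftool", "-json", path],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)[0]

# Illustrative use: surface a couple of fields that might inform appraisal.
meta = embedded_metadata("IMG_0001.jpg")   # hypothetical file
print(meta.get("FileSize"), meta.get("CreateDate"))
```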

I encourage anyone who attended this discussion to add to this post and/or correct any of my poor-memory-induced misstatements above by commenting below. Similarly, whether you attended the breakout or not, let’s continue this conversation in the comments section!

Lora Davis is Digital Archivist at Johns Hopkins University, where she is tasked with creating, documenting, and managing workflows for acquiring, describing, processing, preserving, and providing access to born‐digital materials. Prior to her appointment at JHU in January 2016, Lora worked at Colgate University and the University of Delaware.

 

Announcing the First-Ever #bdaccess Twitter Chats: 10/27 @ 2 and 9pm EST

By Jess Farrell and Sarah Dorpinghaus

This post is the fifteenth in a bloggERS series about access to born-digital materials.

____

Contemplating how to provide access to born-digital materials? Wondering how to meet researcher needs for accessing and analyzing files? We are too! Join us for a Twitter chat on providing access to born digital records.

*When?* Thursday, October 27 at 2:00pm and 9:00pm EST
*How?* Follow #bdaccess for the discussion
*Who?* Researchers, information professionals, and anyone else interested in using born-digital records

Newly-conceived #bdaccess chats are organized by an ad-hoc group that formed at the 2015 SAA annual meeting. We are currently developing a bootcamp to share ideas and tools for providing access to born-digital materials and have teamed up with the Digital Library Federation to spread the word about the project.

Understanding how researchers want to access and use digital archives is key to our curriculum’s success, so we’re taking it to the Twitter streets to gather feedback from digital researchers. The following five questions will guide the discussion:

Q1. _What research topic(s) of yours and/or content types have required the use of born digital materials?_

Q2. _What challenges have you faced in accessing and/or using born digital content? Any suggested improvements?_

Q3. _What discovery methods do you think are most suitable for research with born digital material?_

Q4. _What information or tools do/could help provide the context needed to evaluate and use born digital material?_

Q5. _What information about collecting/providing access would you like to see accompanying born digital archives?_

Can’t join on the 27th? Follow #bdaccess for ongoing discussion and future chats!

____

Jess Farrell is the curator of digital collections at Harvard Law School. Along with managing and preserving digital history, she’s currently fixated on inclusive collecting, labor issues in libraries, and decolonizing description.

Sarah Dorpinghaus is the Director of Digital Services at the University of Kentucky Libraries Special Collections Research Center. Although her research interests lie in the realm of born-digital archives, she has a budding pencil collection.

Software Preservation Network: Prospects in Software Preservation Partnerships

By Karl-Rainer Blumenthal

This is the fourth post in our series on the Software Preservation Network 2016 Forum.
____

To me, the emphasis on the importance of partnership and collaboration was the brightest highlight of August’s Software Preservation Network (SPN) Forum at Georgia State University. The event’s theme, “Action Research: Empowering the Cultural Heritage Community and Mapping Out Next Steps for Software Preservation,” permeated the early panels, presentations, and brainstorming exercises, empowering the attending stewards of cultural heritage and technology to advocate for the next steps most critical to their own goals and so build the most broadly representative community. After considering surveys of collection and preservation practices, and case studies evocative of their legal and procedural challenges, attendees collaboratively summarized the specific obstacles to be overcome, strategies worth pursuing together, and goals that represent success. Four stewards guided us through this task with the day’s final panel of case studies, ideas, and a participatory exercise. Under the deceptively simple title of “Partnerships,” this group grounded its discourse in practical cases and progressively widened its circle to encompass the variously missioned parties needed to make software preservation a reality at scale.

Tim Walsh (@bitarchivist), Digital Archivist at the Canadian Centre for Architecture (CCA), introduced the origins of his museum’s software preservation mission in its research program Archaeology of the Digital. Advancing one of the day’s key motifs (software as environment, not mere artifact), Walsh explained that the CCA’s ongoing mission to preserve tools of the design trades compels it to preserve whole systems environments in order to provide researcher access to obsolete computer-assisted design (CAD) programs and their files. “There are no valid migration pathways,” he assured us; rather, emulation is necessary to sustain access, even when that access is limited to the reading room. Attaining even that level of accessibility required CCA to reach license agreements with the creators/owners of legacy software, one of the first and most foundational partnerships that any stewarding organization must consider. To grow further still, these partnerships will need to include technical specialists and resource providers beyond CCA’s limited archives and IT staff.

Aliza Leventhal (@alizaleventhal), Corporate Librarian/Archivist at Sasaki Associates, confronts these challenges in her role within a multi-disciplinary design practice, where unencumbered access to the products of at least 14 different CAD programs is a regular need. To meet that need she has similarly reached out to software proprietors, and has likewise cultivated an expanding community of stewards in the form of the SAA Architectural Records Roundtable’s CAD/BIM Taskforce. The Taskforce embraces a clearinghouse role for resources “that address the legal, technical and curatorial complexities” of preserving especially environment-dependent collections in repositories like her own and Walsh’s. In order to do so, however, Leventhal reminded us that more definitive standards for the actual artifacts, environments, and documentation that we seek to preserve must first be established by independent and (inter-)national authorities like the International Organization for Standardization (ISO), the American Institute of Architects (AIA), the National Institute of Building Sciences, and as-yet-unfounded organizations in the design arts realm. Among other things, after all, more technical alignment in this regard could enable multi-institutional repositories to distribute and share acquisition, storage, and access resources and expertise.

Nicholas Taylor (@nullhandle), Web Archiving Service Manager at Stanford University Libraries, asked attendees to imagine a future SPN serving such a role itself: a multi-institutional service partnership that distributes legal, technical, and curatorial repository management responsibilities in the model of the LOCKSS Program. Citing the CLOCKSS Archive and other private networks as complementary examples from the realms of digital images, government documents, and scholarly publications, Taylor posited that such a partnership would empower participants to act independently as centralizing service nodes, and together in overarching governance. A community-governed partnership would need to meet functional technical requirements for preservation, speak to representative use cases, and, critically, articulate a sustainable business model in order to engender buy-in. If successful, though, it could among other things consolidate the broader field’s needs for licensing and IP agreements like CCA’s.

In addition to meeting its member organizations’ needs, this version of SPN, or a partnership like it, could benefit an even wider international community. Ryder Kouba (@rsko83), Digital Collections Archivist at the American University in Cairo, spoke to this potential from his perspective on the Technology and Research Working Group of UNESCO’s PERSIST Project. The project has already produced guidance on selecting digital materials for preservation among UNESCO’s 200+ member states. Its longer-term ambitions, however, include the maintenance of the virtual environments in which members’ legacy software can be preserved and accessed. Defining the functional requirements and features of such a global resource will take the sustained and detailed input of a similarly global community, beginning in the room in which the SPN Forum took place but continuing on to the International Conference on Digital Preservation (iPRES) and international convocations beyond.

Attendees compose matrices of software preservation needs, challenges, strategies, and outcomes. Photos by Karl-Rainer Blumenthal (left) and @karirene69 (right), CC BY-NC 2.0.

With the different scales of partnership thus articulated, the panelists ended their session by facilitating breakout groups that mapped discrete problems that partnerships can solve, through the necessary steps and towards ideal outcomes. At my table, for instance, the issue of “orphaned” software (software without advocates for its long-term preservation) was mapped through consolidation in a kind of PRONOM-like registry toward the maintenance it deserves from partners invested in a LOCKSS-like network. Conceptually simple as each suggestion could be, it could also prompt such different valuations and/or reservations from among just the people in the room as to illustrate how difficult the prioritization of software preservation work can be for a team of partners, rather than independent actors. To accomplish the Forum attendees’ goals equitably as well as efficiently, more consensus needed to be reached concerning the timeline of next steps and meaningful benchmarks, something that we tackled in a final brainstorming session that Susan Malsbury will describe next!

____

Karl-Rainer Blumenthal is a Web Archivist for the Internet Archive’s Archive-It service, where he works with 450+ partner institutions to preserve and share web heritage. Karl seeks to steward collaboration among diversely missioned and resourced cultural heritage organizations through his professional work and research, as we continuously seek new, broadly accessible solutions to the challenges of complex media preservation.