Join the bloggERS team!

bloggERS!, the blog of the Electronic Records Section of the Society of American Archivists, fosters communication and collaboration within the ERS and across the wider archival community.

Apply by February 15, 2023 to join the bloggERS Editorial Team! We’re seeking two volunteers to join us as Team Members. Access the brief application here.

Team Member responsibilities:

  • Term of service: through July 2023 (after July 2023, renewable with one-year term commitment)
  • Term limit: none
  • Duties: Manage (recruit, edit, publish) one post every six weeks

All information professionals with an interest in electronic records and digital archives are welcome to apply, including MLIS students and early-career professionals. Questions? You can always reach us at

New Year, New Resources!

Happy New Year from the editors at bloggERS! We’ve spent some time reviewing resources from the past year in order to create an easy-to-reference list. It is by no means comprehensive; just some interesting things that we discovered in our recent journeys through the digital landscape.

In addition to the list below, we’d love to hear what other new resources our readers are excited about! Feel free to respond in the comments and share widely with fellow electronic records practitioners.

Tools

API Beta

This API, now in its third version, allows anyone with an API key to query machine-readable data available on the site. 

Browsertrix Cloud Crawling Service

Browsertrix Cloud is an open-source, Chrome-based crawling service developed by the Webrecorder community. 

It Takes a Village In Practice Toolkit

From the site: The ITAV in Practice Toolkit is an adaptable set of tools for practical use in planning and managing sustainability for open source software initiatives.

Aaru Data Preservation Suite

Aaru is an open-source disk imaging suite with some really interesting features.


Discmaster

The Discmaster site enables visitors to browse and search a large dataset of vintage computer files from the Internet Archive.

IIPC Web Archiving Tools and Software 

A one-stop shop of resources and links for any of your web archiving needs!


Realities of Academic Data Sharing (RADS) Initiative Report

The RADS Initiative examines the myriad ways in which academic institutions are supporting the dissemination of and public access to federally funded research data.

Software Metadata Recommended Format Guide

The SMRF Guide summarizes and defines metadata elements used to describe software materials, including crosswalk mapping to MARC, Dublin Core, MODS, Wikidata, and Codemeta.

Legal and Ethical Considerations for Born-Digital Access from the DLF Born-Digital Access Working Group

Providing access to born-digital archival records is still a tricky subject, but this publication covers a lot of ground, including HIPAA and FERPA restrictions as well as copyright concerns.

Reference and Roundups

Open Grants Repository

The IMLS-funded Open Grants project seeks to promote transparency in the authorship of funding proposals by providing a searchable repository of grant applications.

Oxford Common File Layout Specification

From the site: This Oxford Common File Layout (OCFL) specification describes an application-independent approach to the storage of digital objects in a structured, transparent, and predictable manner. It is designed to promote long-term access and management of digital objects within digital repositories.

DPOE-N Digital Preservation Resource Guide

A clearinghouse of educational and informational resources related to digital preservation.

DPC Competency Framework 

“The Competency Framework presents information on the skills, knowledge, and competencies required to undertake or support digital preservation activities.”

BitCurator Tool Inventory

This inventory lists the workflow step, the accepted input (disk image, directory of files, etc.), the type of user interface (GUI/CLI), links to available documentation, and the function of each tool in the BitCurator environment.

SAA Electronic Records Section Community Conversation: Legal and Ethical Considerations for Born-Digital Access

The SAA Electronic Records Section, with the Records Management Section and the Privacy & Confidentiality Section, invites you to attend a community conversation with the authors of the DLF publication “Legal and Ethical Considerations for Born-Digital Access” and of “Archival discretion: a survey on the theory and practice of archival restrictions” on Friday, January 13, 2023 at 1pm ET.

Let’s discuss your thoughts and needs about navigating restrictions for born-digital archives. Do these publications reflect methods you’ve used? Are there best practices or resources that should be included in future updates of the DLF publication? Do you have a restrictions-related question that can’t be addressed with these resources and that you’d like to bring to the table to crowdsource advice on? Or maybe you’ve used the recommendations from these documents and can share with others how you applied them!

The conversation will be based on your questions or stories, so please submit questions by January 9th here. Following a brief overview of the project, panelists will answer pre-submitted questions after which attendees will engage in group discussions. 

Register here.

Brought to you by:

SAA Electronic Records Section

SAA Records Management Section

SAA Privacy & Confidentiality Section

DLF Born-Digital Access Working Group

Reinventing COMPEL: Migration as a tool for project renewal or, a too-honest assessment of how projects can go very very wrong and how to get them going right again

We’ve all heard the horror stories: what began as a fairly simple digital humanities curation project (as if those even exist) quickly became a quagmire of confusion abruptly ended by a security vulnerability deemed too resource-intensive to fix. But what happens when, instead of shuffling the project into the graveyard of “what can you do, these things happen,” an influx of new perspective and effort resurrects interest in the dormant collection? That is a recipe for a migration project! Take one defunct, poorly resourced project, add a dash of interest, a splash of digital elbow grease, and a heaping cup of “let’s just grab the original data and start over with a new infrastructure,” mix, and let rise for two years. Optional additions include: involvement of an international computer music society, project advisors in three different time zones, and a recognition of the privileges of academic study. This, in particular, is the recipe for COMPEL 2.0.

The first iteration of this project involved a data dump from the WordPress site of the Society for Electro-Acoustic Music in the United States (SEAMUS). As initially conceived, the project’s goals were to publish and preserve digital music compositions and performances from scholarly and artistic venues. Data from SEAMUS were strong-armed into a Hyrax repository built on top of Fedora. This initial venture failed (miserably), as the open source system was too cumbersome for part-time developers to manage and sat stagnant for a number of years until a security vulnerability in the operating system for the (now ancient and un-updated) repository necessitated a shutdown before the data could be retrieved (a process deemed too time-intensive without a good enough return on the investment). The data, it turned out, had not been augmented or added to at all; it had merely been rearranged to conform to a standard that Fedora could deal with (don’t ask us what that standard looks like because we weren’t able to get the data out to see it). And so, we went back to basics and the original data from WordPress. It wasn’t pretty, but at least we could see it.

Armed with the knowledge that we did not (nor would ever, probably) have the development resources needed to use a fully customizable infrastructure like Hyrax/Fedora, we explored more out-of-the-box infrastructures before landing, hesitantly, on a locally-hosted Omeka. While we could no longer dream of a fully customizable repository with a next-level, visually arresting interface (yeah, we know, we probably shouldn’t have dreamed of that in the first place), the Omeka instance has proved much less resource-intensive and much more stable. 

And so, we found ourselves once again migrating the data from the original source (WordPress) into a Dublin Core-esque environment. Getting the original SEAMUS data into the system—while not easy—was at least possible for a competent computer science graduate student (shout out to Javaid!). Naturally, the migration import wasn’t clean (because of course complicated objects need complicated, non-standard metadata), so the next approximately 500 years was spent manually fixing the records to conform to a non-standard metadata structure that tends to change as we learn new things about the nature of computer music objects and the social structures that have grown up around them. Our project advisors, all computer musicians from an international academic context, have been invaluable in helping us confront the complexity of this genre, for which we are (kinda, I guess) grateful.

Armed with moderate success migrating the records from SEAMUS into a structured non-standard metadata schema, we then decided to further complicate the issue by adding in records from an international computer music conference: New Interfaces for Musical Expression (NIME). Naturally, adding in data from a time-bound, event-based organization (like a conference) has been super fun. But it has also forced us to confront, through our metadata, both the temporal-spatial aspect of computer music (is a digital performance the same as a physical performance?) and the need to account for both physical and digital instrumentation.

Ultimately, we’ve determined that this project needs to evolve as constantly and quickly as the technology and sounds that it is trying to capture; recognizing this has led us to a few important lessons that will inform the future of this project. 

  1. Data export is probably more important than data import. Whichever infrastructure you use, make sure you can get the data back out in a uniform structure. Test data export early, and test it often. Check it against import AND the user interface (not just the back end). 
  2. Don’t marry the infrastructure. The pace of technology changes so quickly that if an infrastructure isn’t killed by updates, it will probably be killed by security risks. Plan for this in advance and make sure that you can get the data out (see #1 above). 
  3. Unless you work for Google or Apple or another for-profit tech company, keep your project as low-tech as possible. Depend on as few developers or sysadmins as possible because their time is precious and they’re probably dealing with bigger, more important problems than yours (like the administrator who gets hit with a ransomware attack). Keep your dependencies low and document, document, document EVERYTHING you do.
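Lesson 1 above can be reduced to a concrete habit: after every import, export the data back out and diff it against what went in. Here is a minimal, platform-agnostic sketch of such a round-trip check; the flat-dictionary record shape and the "identifier" key are assumptions for illustration, not anything from the COMPEL codebase.

```python
def roundtrip_check(imported_records, exported_records, key="identifier"):
    """Compare imported records against a later export of the same system.

    Records are assumed to be flat dicts sharing a common identifier field.
    Returns a list of (identifier, problem) tuples; an empty list means the
    export faithfully round-trips everything that was imported.
    """
    exported = {r[key]: r for r in exported_records}
    problems = []
    for rec in imported_records:
        out = exported.get(rec[key])
        if out is None:
            problems.append((rec[key], "missing from export"))
            continue
        for field, value in rec.items():
            if out.get(field) != value:
                problems.append((rec[key], f"field '{field}' changed"))
    return problems
```

Run early and often against whatever export your infrastructure offers, and remember to eyeball the public interface too; a check like this only covers the back end.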

Andi Ogier is the Assistant Dean and Director of Data Services at Virginia Tech in Blacksburg, VA.

Hollis Wittman is the Metadata and Music Libraries Specialist at Virginia Tech, working remotely from Kalamazoo, MI.

Downsizing: Migrating multiple systems to a unified collection management suite

When I started my position at the Arthur H. Aufses, Jr., MD Archives at the Icahn School of Medicine at Mount Sinai in January 2020, one of my first mandates was to find a migration pathway from our locally hosted digital repository system, DSpace, to a software-as-a-service (SaaS) option. As I began this process, my first instinct was to zoom out and look at all of the systems used by the Archives. What I found was an ecosystem of archives and digital object management tools that had grown organically over the past ten-plus years. 

Pre-migration archives ecosystem. The middle lane (in yellow) represents all of the sets of metadata and digital objects managed by the Archives. The top lane (in green) shows all of the systems that managed the metadata and digital objects internally, and the bottom lane (in blue) shows how those assets were served publicly to users.

While this ecosystem has worked well over time, responding to the Archives’ needs as it grew, there were a few reasons I began looking into migrating out of them as a whole. The first, and perhaps most pressing, was that our partners in IT were moving towards a SaaS model, and that the local servers that hosted our platforms were no longer sustainable. Second, the growth of our collections was outpacing the ability of these systems to keep up, and it wasn’t always clear which system to use as they often had overlapping roles in managing the collection. And finally, we were able to identify several opportunities a new system would provide, namely better collection management capabilities and additional pathways for researchers to have direct access to collection description and digital objects.

With the overarching goal of migrating our digital objects, the first step was to strengthen our underlying metadata management at the collection level. Then, our hope was to be able to link digital objects directly to their related records within their description. Improved digital preservation capabilities were also a must on our list. This led us to Access to Memory (AtoM), an archival description and management environment, and Archivematica, a digital preservation system. We use a SaaS instance hosted by the vendor, Artefactual. We began using these tools in January 2021, and our AtoM site went live to the public in June 2021.

What has ensued is a multi-armed migration project. Our first task was to migrate a collection management database run on Microsoft Access and approximately sixty (of 120+) finding aids in Microsoft Word into the new collection management system, AtoM. This has involved a lot of manual clean-up and moving unstructured data into CSV files for batch import. I was able to complete this step between January and June 2021. 

With a metadata infrastructure mostly in place, it was time to start moving over our digital objects. I began with our audiovisual material, 900+ files and 10+ TB of material. The metadata for audiovisual files was located in multiple spreadsheets, and all of the files were on local storage. The ingest process was straightforward, but re-linking the audiovisual files to the appropriate collection was a significant remediation step that took much longer than anticipated. We initially projected this would take only a few weeks, but we underestimated the amount of time it would take for large files (100+ GB) to ingest through Archivematica, often at a rate of only one or two per workday. I ultimately worked on this through August 2022, alongside other migration activities.

I additionally worked on migrating almost 3,000 digital objects out of DSpace during this time. The Archives had hosted DSpace on two local servers: one managed all digital objects and was accessible to Archives staff only (“DSpace internal”); the second was a duplicate copy of only the publicly accessible files and this was available on the public web (“DSpace external”). DSpace external was a high-priority migration target for us. The IT team was particularly concerned about the longevity of this server due to security and hardware issues. 

I was able to export packages from DSpace and created a script that repackaged the files and metadata in a way that was easily understandable to Archivematica and AtoM. The resulting packages would be ingested through Archivematica, which in turn would send derivative access copies to AtoM. In AtoM, I was able to match the metadata CSVs generated by the script to the records in AtoM using the command-line CSV import function. We were able to decommission DSpace external in July 2022. We paused our DSpace internal migration (another 15,800 digital objects) to address InMagic/DB Textworks (described in the next section), returning to migrating the DSpace internal server in December 2022. 
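The repackaging script itself isn’t shown above, but the general shape of the step can be sketched. This is a hypothetical illustration only: it assumes DSpace’s Simple Archive Format exports (one directory per item containing a dublin_core.xml file plus its bitstreams) and made-up CSV columns; the real script’s structure and the columns AtoM expects would differ.

```python
import csv
import shutil
import xml.etree.ElementTree as ET
from pathlib import Path

def repackage(export_root, transfer_dir, csv_path):
    """Copy bitstreams into a transfer directory and build a metadata CSV.

    Assumes each item directory under export_root follows DSpace's Simple
    Archive Format: a dublin_core.xml file alongside the bitstream files.
    """
    transfer = Path(transfer_dir)
    transfer.mkdir(parents=True, exist_ok=True)
    rows = []
    for item_dir in sorted(Path(export_root).iterdir()):
        dc = item_dir / "dublin_core.xml"
        if not dc.is_file():
            continue  # not a Simple Archive Format item directory
        # Collect Dublin Core values, e.g. "title" or "date.issued"
        meta = {}
        for val in ET.parse(dc).getroot().iter("dcvalue"):
            key = val.get("element")
            if val.get("qualifier") not in (None, "none"):
                key += "." + val.get("qualifier")
            meta.setdefault(key, (val.text or "").strip())
        # Everything except DSpace's bookkeeping files is a bitstream
        for f in item_dir.iterdir():
            if f.name in ("dublin_core.xml", "contents", "handle"):
                continue
            shutil.copy2(f, transfer / f.name)
            rows.append({"filename": f.name,
                         "title": meta.get("title", ""),
                         "date": meta.get("date.issued", "")})
    with open(csv_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=["filename", "title", "date"])
        writer.writeheader()
        writer.writerows(rows)
```

In practice the CSV would need to match the column headings of AtoM’s import templates, and the transfer layout would follow Archivematica’s transfer conventions.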

We had used InMagic/DB Textworks, a legacy product now owned by Lucidea, since approximately 2006(!) to host our digitized image collections (almost 2,000 digital files). While this tool was a powerhouse for us for many years, it was increasingly difficult to link the image files to related description. I began the project by exporting a CSV of the metadata from InMagic. I then wrote a Python script that created a directory for each unique folder value and pulled copies of the corresponding TIF files into each, effectively grouping material from the same physical folders. I had to manually match these to their existing identifiers. The server hosting the platform was sunset in November 2022, and all the remaining image files that did not have associated, structured metadata were moved to a shared drive. These files are now more readily accessible to archivists on staff and eventually will be preserved in Archivematica and made available via AtoM pending ongoing metadata remediation.
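A grouping script like the one described can be quite small. The sketch below is hypothetical: the column names (“folder”, “filename”) and paths are assumptions standing in for whatever the InMagic export actually contained.

```python
import csv
import shutil
from pathlib import Path

def group_tifs(metadata_csv, tif_dir, out_dir):
    """Group TIFs into one directory per unique folder value from the CSV."""
    tifs = Path(tif_dir)
    with open(metadata_csv, newline="") as f:
        for row in csv.DictReader(f):
            dest = Path(out_dir) / row["folder"]
            dest.mkdir(parents=True, exist_ok=True)
            src = tifs / row["filename"]
            if src.is_file():
                shutil.copy2(src, dest / src.name)  # copy; leave originals alone
```

Copying rather than moving keeps the original export intact while the manual identifier-matching is still underway.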

Post-migration archives ecosystem. The middle lane (in yellow) represents all of the sets of metadata and digital objects managed by the Archives. The top lane (in green) shows that only AtoM and Archivematica manage the digital objects and metadata, and the bottom lane (in blue) shows that the assets are served to the public web by AtoM (which is linked from the Archives website).

We hope to conclude this project sometime in the first quarter of 2023, and you can keep up with our progress by checking out what’s available on our publicly accessible AtoM site (“Archives Catalog”). This is the first time I’ve laid this project out narratively. Admittedly many of the complexities have been glossed over for the sake of brevity, and as we wrap up the project, the work and its complexities will continue. Hopefully this sweeping overview has provided helpful insight into how we are undertaking a large-scale migration project. This project would not have been possible without institutional support, as well as the work of our colleagues in IT.

On a final note, I’m left thinking about Elizabeth McAulay’s great article, “Always Be Migrating.” The title has become something of a professional motto in these past months, and I’m already anticipating ways we can continue to iterate on our current metadata and digital object management practices, not to mention revamping our location module, redesigning our AtoM homepage, improving our documentation, and… I’ll save it for the Gantt chart! 

Stefana Breitwieser is the Digital Archivist at the Arthur H. Aufses, Jr., MD Archives at the Icahn School of Medicine at Mount Sinai in New York, New York. Her professional interests include providing researcher access to digital archival material and wrangling metadata.

Cleaning out the attic: A case study in organizing a digital dark archive

by Erin Wolfe

Email from Erin Wolfe to the University Archivist inquiring about the digitized James Bee Journal collection on digmaster.

“Digmaster” was a dark archive server managed by University of Kansas (KU) Libraries primarily for storing digitized material from KU Libraries’ archival collections from 2005-2021. It was organized like many shared drives would be after years of use – well-organized in some areas, not in others, and with a significant amount of digital clutter. Unindexed and mostly undocumented, I thought of digmaster as the library’s digital attic: easy to use for file storage, but difficult in terms of understanding or finding content. 

At the time of the above email, I had been in the role of Metadata Librarian for about three years, and one of my long-time goals had been to bring some order to digmaster. Despite progress in a few targeted collections, my efforts had been largely thwarted thus far by the overwhelming task of identifying and organizing unfamiliar materials. With the impetus of outdated server hardware and an impending data migration, I was attempting another pass at bringing some order to the stored files.

The James Bee collection caught my attention due to its large size (9,975 TIFs, 287 GB) and clearly descriptive filenames. Further discussion with the collection’s curator confirmed that this was a professionally digitized copy of a KU professor’s field journals from 1927-1995 that had been donated along with his physical papers. Following in-house procedures for newly digitized collections, I processed the files, added individual journals as multi-page items to the KU Libraries’ Digital Collections site (Islandora), parsed the EAD finding aid into individual MODS records using the Python libraries BeautifulSoup and xmlschema, and created links between the finding aid and the digitized journals.
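The post’s EAD parsing used BeautifulSoup and xmlschema; as a self-contained sketch of the same parse-and-convert idea, here is a standard-library version. The tag names follow EAD 2002 and MODS, but the minimal MODS fields emitted are illustrative only, and any real finding aid would need richer handling.

```python
import xml.etree.ElementTree as ET

def ead_items_to_mods(ead_path):
    """Yield a minimal MODS record for each item-level component in an EAD file."""
    for elem in ET.parse(ead_path).iter():
        local = elem.tag.rsplit("}", 1)[-1]  # strip any XML namespace prefix
        # Item-level components may be tagged <c> or numbered <c01>..<c12>
        if (local == "c" or local.startswith("c0") or local.startswith("c1")) \
                and elem.get("level") == "item":
            title = (elem.findtext(".//{*}unittitle") or "").strip()
            date = (elem.findtext(".//{*}unitdate") or "").strip()
            yield (
                '<mods xmlns="http://www.loc.gov/mods/v3">'
                f"<titleInfo><title>{title}</title></titleInfo>"
                f"<originInfo><dateCreated>{date}</dateCreated></originInfo>"
                "</mods>"
            )
```

Each yielded record could then be written to its own file and batch-loaded into Islandora alongside the digitized journal images.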

Pie chart breaking down contents of digmaster by file type
High-level breakdown of archival storage by file count

Hoping to find additional materials I could process, I gathered a full list of all files to begin a systematic review of digmaster’s contents. Using the Python library pandas in Jupyter, I explored directories and subdirectories, retrieved counts of various file formats, identified possible duplicates, grouped related material, and performed other high-level tasks. After excluding system files and material not related to KU Libraries, I was left with a list of 575,043 files. The majority of these were quickly identified as necessary for storage-only (primarily low-resolution digitized books which the Libraries had neither the rights nor the desire to make public) or as non-image files. This left 97,326 files from archival collections that needed processing.
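That kind of pandas exploration might look roughly like the sketch below, with a tiny in-memory listing standing in for the real digmaster inventory; the paths, sizes, and column names are assumptions for illustration.

```python
import pandas as pd

# Stand-in for the full file listing: (path, size in bytes)
listing = [
    ("/collections/bee/journal_001.tif", 52_000_000),
    ("/backup/bee/journal_001.tif", 52_000_000),
    ("/collections/maps/plat_map.jpg", 4_100_000),
]
df = pd.DataFrame(listing, columns=["path", "size"])

# Counts of file formats, derived from each path's extension
df["ext"] = df["path"].str.rsplit(".", n=1).str[-1].str.lower()
format_counts = df["ext"].value_counts()

# Possible duplicates: same filename and byte size in different directories
df["basename"] = df["path"].str.rsplit("/", n=1).str[-1]
possible_dupes = df[df.duplicated(subset=["basename", "size"], keep=False)]

# Group related material by top-level directory
df["top_dir"] = df["path"].str.split("/").str[1]
per_dir = df.groupby("top_dir")["size"].agg(["count", "sum"])
print(format_counts, possible_dupes, per_dir, sep="\n")
```

A duplicate flagged this way is only a candidate; checksums or a visual check would still be needed before treating two files as identical.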

I started by creating a spreadsheet to track information about the various directories: file counts and formats, documentation (or lack thereof), collection/project name, whether the files existed in any other known locations, and a column to record notes on file disposition.

The primary goal of this project was to do something tangible with as many files as possible, so I started with content that was (a) easily identifiable (usually through the file or directory name), (b) clearly organized, and (c) had the physical material represented in one of the Libraries’ systems (ArchivesSpace for finding aids or the catalog for book records).

Screenshot of digmaster tracking spreadsheet

Over the course of the next eighteen months, working mostly from home due to Covid closures, I explored the digmaster reports, reviewed documentation on other shared network drives, and talked with anyone who might have useful background information (fifteen colleagues across numerous departments, without whose help none of this would have been possible). During this process, I was able to identify an additional eight archival collections (2,264 TIFs) that had been digitized in full and had existing finding aids in ArchivesSpace. As before, I used Python to parse the EAD to collect metadata for item-level MODS records and added them as new collections to Islandora. In addition, I identified nineteen fully digitized books and three manuscript scrolls ranging from circa 1250 to the 1880s (4,023 TIFs), as well as additions to three existing digital collections (2,026 TIFs), all of which were added with item-level MODS records linked to the appropriate finding aid.

Bar graph showing disposition of digmaster files, in TBs
High-level breakdown of archival storage by file size. 584,903 files (1.9 TB, including the non-Libraries material) remain in dark storage; 5,690 files (328 GB) are stored for curator access; 86,050 files (2.5 TB) were able to be removed from storage; and 32,163 TIFs (1.4 TB) across 17 digital collections that were previously hidden are now (or will soon be) publicly available.

Materials from an additional five collections were identified and processed. Three of these were not made public due to rights issues (411 TIFs). The other two did not have finding aids available, though one, which included scanned photos from a local newspaper in the 1950s (1,975 TIFs), did have a handwritten item-level paper finding aid. Over the course of several months, an invaluable student worker transcribed the 253 pages. By August 2021, the electronic finding aid had been created, and we had a new Islandora collection with the images linked to ArchivesSpace. The other collection without a finding aid had the largest amount of material on digmaster (11,900 TIFs). This finding aid is currently in process, and some rights issues are being explored, but this collection will also be available in the near future.

Throughout this process, there was another category of materials: identified but not added to Islandora. This generally included (1) scans that were already online (either Islandora, or KU’s institutional or e-journal repository, 70,658 files); (2) content that was slated for deletion by curators, because they were no longer needed or involved rights issues (6,119 files); (3) non-preservation quality images (9,273 files); or (4) material that was currently unusable for Islandora (e.g., unidentifiable source, incomplete book scans, needs further research, etc., 5,690 files). Items in this final category did have potential value for curators and are being retained on a separate server for future reference.

By the end of this project, all the materials in digmaster were identified at some level, with actions or future work documented. The above graphic shows the disposition of files by size. Nearly 4 TB of material that will not need to be migrated or stored needlessly was cleaned from the digital attic, the remainder of the files can be retrieved more easily in the future, and KU is able to provide access to more resources for our users.

Erin Wolfe is the Digital Initiatives Librarian at the University of Kansas, where he works with the Libraries on a variety of tasks involving the creation, access, and preservation of digital materials. His research interests include computational text analysis, text data mining and visualization, machine learning, and related areas.

We’ve Got Skills and They’re Multiplying…

By Sharon McMeekin

“Good Practice not Best Practice” has become something of a tenet of the Digital Preservation Coalition (DPC). It acknowledges that there is no “one size fits all” approach to digital preservation, as all organizational contexts are different, from the objectives we’re aiming to fulfill to the resources we have available. It’s also a concept that was touched on by my colleagues Jen Mitcham and Paul Wheatley in their recent iPRES 2022 paper, “Going for Gold or Good Enough?”, and in our new Strategic Plan for 2022-2027, where we’ve adopted “Good Practice” as the heading for a key group of objectives.

This concept of good practice is one of the reasons I’m a big fan of maturity models for benchmarking digital preservation capabilities and planning their development. Models like the NDSA’s Levels of Preservation and the DPC’s own Rapid Assessment Model (DPC RAM) offer a flexibility that allows us to assess where we are and to set relevant and achievable targets for building capability. However, it is important to remember that organizational digital preservation capabilities are nothing without the staff who develop and deliver them. And this poses the question: how can we begin to understand the skills required to support those capabilities?

Everything we do at the DPC is member driven and helping to understand skills and staffing requirements is a need that is consistently highlighted. Members have told us they need help with identifying skills gaps, structuring professional development, making the case for more staff, developing role descriptions, recruitment, and more. These requests fueled a project the DPC’s Workforce Development team (myself and our Training and Grants Manager, Dr Amy Currie) have been working on since August 2021 to develop a new Competency Framework for digital preservation, along with a set of accompanying resources.

Our key aims for the project were to develop a framework that encompassed good practice, was flexible and therefore applicable to a variety of uses and contexts, practical with clear processes for its use, and that aligned with DPC RAM, so that it could be used by organizations as part of the process of building digital preservation capabilities. Development of the Competency Framework started with a literature review of existing resources on digital preservation competencies, skills, and education, as well as current standards for good practice. We then progressed to an iterative process of design and review, incorporating feedback at key stages from colleagues at the DPC and our Workforce Development Sub-Committee, the membership of which is drawn from across the DPC membership. The final stage of feedback and review involved a 3-month DPC member preview of the Competency Framework alongside a pilot of one of the key accompanying resources.

The final version of the Digital Preservation Competency Framework defines the broad range of interdisciplinary skills required for successful digital preservation. It is important to note that these are the skills required across a team or group of staff members undertaking digital preservation activities at an organization, and not for an individual member of staff. The Framework aims to be applicable for organizations of any size and in any sector, to support a range of workforce development activities, to be preservation strategy and solution agnostic, and to be simple to understand and quick to apply. It sets out five main competency areas (“Governance, Resourcing, and Management”, “Communications and Advocacy”, “Information Technology”, “Legal and Social Responsibilities”, and “Digital Preservation Domain Specific”) where 28 skill elements are grouped and defined. The Framework also defines five levels of knowledge and experience, from Novice through Expert. A detailed view of the Framework provides example statements of how skill areas might be described in a role description, along with examples of related tasks.

High-level view of the five competency areas and 28 skill elements of the Digital Preservation Competency Framework (Creator: Digital Preservation Coalition). Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).

As well as the Competency Framework itself, the DPC has also released two accompanying resources: The Competency Audit Toolkit (DPC CAT) and a set of example role descriptions. DPC CAT allows organizations to carry out three different audits relating to competencies and skills:

The DPC CAT Logo (Creator: Sharon McMeekin). Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
  • An audit of an individual’s skills, including a structured approach to planning for professional development
  • An audit of a role description to compare the role as described with the actual tasks and responsibilities undertaken by the role holder
  • A team/organizational-level audit of skills using the results of a DPC RAM assessment to define the skill levels required to support existing and target capabilities and identify where there may be gaps.

The set of role descriptions provides a guide to the skills and levels of knowledge and experience required across eight different roles, from an information studies graduate, through different levels and types of practitioners, to those at managerial and executive levels. The role descriptions are not intended to be prescriptive, but rather an aid for a variety of workforce development tasks.

The Competency Framework, the accompanying resources, and full guides to using each are now available to all on the DPC website, and we will continue to support and develop these resources. This includes a plan to develop a toolkit to support recruitment, as well as a “leveling-up” guide for those wishing to develop their skills within a particular area. We also welcome feedback (there’s a form on the Framework page on our website), which will be incorporated into future revisions as needed.

Sharon McMeekin is Head of Workforce Development with the Digital Preservation Coalition where she leads on training and professional development projects such as the DPC’s Competency Framework. She is also managing editor of the ‘Digital Preservation Handbook’ and was project manager and lead author of the Novice to Know-How learning pathway. Sharon is a qualified archivist and experienced practitioner and has contributed to a number of international training and development projects in digital preservation. She is a frequent guest lecturer for information management courses and is a trustee of the Scottish Council on Archives.

The Digital Preservation and Outreach Network: Another Resource for Your Toolkit

By Sarah Cuk

It’s no secret that the complex characteristics of digital information present equally complex preservation challenges. Yet the importance of maintaining digital objects within an evolving world of material, intellectual, technological, environmental, and ethical struggles persists. Many of the obstacles we confront in our preservation endeavors can feel overwhelming—sometimes impossible to overcome. One organization that can help is the Digital Preservation Outreach and Education Network (DPOE-N), which distributes microgrants and hardware and hosts workshops on various digital preservation topics.

The Digital Preservation Coalition (DPC), a UK-based non-profit, defines digital preservation as a “series of managed activities necessary to ensure continued access to digital materials for as long as necessary” (Digital Preservation Coalition, 2015). The underlying goal is to ensure the continuity of authentic digital information within a world that changes over time—a continuous and generative process of adaptation, maintenance, and care. But that can be really challenging when there’s not enough money, resources, or support.

It can be difficult to stay up to date on evolving technologies in the face of growing obsolescence. It can be overwhelming to control an exponentially increasing amount of generated data when faced with organizational inadequacies and abysmal funding allocation. It can be impossible to prevent burn-out amongst ourselves, especially in the context of climate change and global economic instability. DPOE-N seeks to mitigate some of these issues by facilitating skill-sharing and distributing the funding and hardware we need to complete preservation projects.

DPOE-N began in 2010 at the Library of Congress as Digital Preservation Outreach and Education (DPOE), with a goal to “foster national outreach and education to encourage individuals and organizations to actively preserve their digital content” (Library of Congress). DPOE emphasized its “train the trainers” in-person courses at LoC (Baur and Cocciolo, 2022), focusing on curriculum, core principles, and building an instructor base. With the addition of the “N” in 2018, DPOE-N moved to the Pratt Institute School of Information and New York University’s Moving Image Archiving and Preservation program, becoming “a more distributed model, creating a ‘network’ of resources” and providing “funds for those interested in practicing digital preservation in support of cultural heritage collections” (Baur and Cocciolo, 2022). In 2022, DPOE-N received another two-year grant from the Mellon Foundation, meaning there are currently funds available. You just need to apply.

Professional Development Microgrants

A significant part of DPOE-N’s offerings is its professional development microgrants. Whether you’re an individual or an institution looking to bolster your digital preservation efforts, you’re experiencing preservation challenges resulting from COVID-19, or you’re an emerging professional wanting to learn more about current practices, you’re eligible to apply for up to $2,500 USD to cover the cost of trainings, conferences, and workshops hosted by our partners, some of which I’ll list below. DPOE-N offers funding for workshops outside of the United States, but to be eligible for a microgrant you must be a U.S. citizen or U.S. permanent resident.

The courses DPOE-N promotes are varied, but a course must relate to digital preservation to qualify for funding. Upcoming trainings can be found in the DPOE-N Training Database. Some highlights include Enacting Environmentally Sustainable Digital Preservation hosted by the Southeast Asia Regional Branch of the International Council on Archives (SARBICA), Email Archiving hosted by SAA, and Creating Preservation-Quality Oral Histories hosted by the Northeast Document Conservation Center (NEDCC).

Emergency Hardware Support

If you’re an organization with a 501(c)(3) public charity status operating within the United States or its territories, you can apply for up to $600 to receive emergency hardware, such as Western Digital’s 12TB or 16TB RAID External Hard Drives. If you have any questions about eligibility, we encourage you to contact us!

DPOE-N Workshops

Additionally, DPOE-N offers free workshops on a variety of topics, for which you can apply. This past September, we hosted Operations and Systems Management for Cultural Heritage Professionals, and we have also held workshops on sustainable web archiving, moving image and sound preservation, and the command line. The workshops are free, though we ask you to fill out an online RSVP.

You can check out DPOE-N on Instagram, Facebook, and Twitter. We also just launched a monthly newsletter that gives news updates, upcoming trainings and conferences, and testimonials from microgrant recipients. Visit our website to apply for microgrants and emergency hardware support and to browse for resources, upcoming workshops, trainings, and conferences. You can also read this poster to see a great overview of DPOE-N’s history and offerings!


Digital Preservation Handbook, 2nd Edition, Digital Preservation Coalition © 2015

Digital Preservation Outreach and Education, Library of Congress

Natalie Baur and Anthony Cocciolo, Digital Preservation Outreach and Education Network, 2020-2022 and Beyond, DPOE-N, 2022

Sarah Cuk is a graduate student at Pratt Institute pursuing an MSLIS with a focus on archives and special collections. She is a Research Fellow at DPOE-N and is interested in community archives, reference & instruction, and audiovisual preservation.

Call for bloggERS!: Case Studies in Migration

We are currently accepting submissions for a new mini-series of blog posts, Case Studies in Migration.

We’ve all been there: our digital collections are lingering on systems that are no longer working for staff members or researchers, and perhaps our partners in IT are (kindly) reminding us that our current set-up is deprecated. Enter the dreaded migration project.

What recent migration projects has your institution encountered? What are you migrating to and from? What was the process like? Do you have any takeaways, tips, or tricks to share?

Writing for bloggERS!:

  • We encourage visual representations: Posts can include or largely consist of comics, flowcharts, a series of memes, etc!
  • Written content should be roughly 600-800 words in length
  • Write posts for a wide audience: anyone who stewards, studies, or has an interest in digital archives and electronic records, both within and beyond SAA
  • Align with other editorial guidelines as outlined in the bloggERS! guidelines for writers.

Please let us know if you are interested in contributing by sending us an email!

Call for bloggERS: Blog Posts on the DLF Forum and NDSA Digital Preservation

With a few short weeks to go before the Digital Library Federation Forum (October 10-12, 2022) and the National Digital Stewardship Alliance Digital Preservation Conference (October 12-13, 2022), bloggERS! is seeking attendees who are interested in writing a re-cap or a blog post covering a particular session, theme, or topic relevant to SAA Electronic Records Section members.

Please let us know if you are interested in contributing by sending us an email! You can also let us know if you’re interested in writing a general re-cap or if you’d like to cover something more specific.

Writing for bloggERS!:

  • We encourage visual representations: Posts can include or largely consist of comics, flowcharts, a series of memes, etc!
  • Written content should be roughly 600-800 words in length
  • Write posts for a wide audience: anyone who stewards, studies, or has an interest in digital archives and electronic records, both within and beyond SAA
  • Align with other editorial guidelines as outlined in the bloggERS! guidelines for writers.

Please let us know if you are interested in contributing by sending us an email!