Mitigating Risk and Increasing Transparency, Securely and Globally

By Andrea Donohue and Nicolette Lodico


Managing records for a global organization is complex, especially as we shift into an exclusively digital records environment and generate increasing amounts of files and data. Many organizations place compliance and risk mitigation at the fore of their records management programs and focus on properly disposing of records or locking down records they retain in perpetuity. Still, other organizations are committed to preserving a faithful record to share with the global community even though their work receives varying degrees of scrutiny and opposition, depending on the local context.

Such is the case at the Ford Foundation. Finding the right balance is critical for protecting our grantees, who work with those closest to the problems the foundation is committed to addressing (often marginalized voices). By blending policy, procedure, training, and outreach, we work with our staff to understand what they find archivally valuable and what security concerns they have about making those archival records available over time. With that understanding, we develop mechanisms to account for the various security and privacy issues inherent in all records and archives programs. The work and the balancing act are never-ending!

“We place the subject matter experts at the center, empowering them to be the curators of their own legacy.”

Here are four principles that guide our work in sustaining a legally compliant and security-minded records and archives management program—one that protects our grantees and partners while fostering transparency and continuous learning.

1. Context is everything

This balance cannot exist without understanding the legal and geopolitical contexts in which grantees and foundation staff work. The context is not the same everywhere, so our treatment of the record cannot be uniform.

Ford’s Information Management (IM) team works with legal experts to ensure a legally compliant records and archives program in all jurisdictions where we have offices. This program goes a step further: we must also consider the cultural and geopolitical requirements that are not typically as overt as the legislation is. Doing so enables us to create compliant retention periods and handling instructions beyond the letter of local law.

2. Not all records are created equal

Not all record material is appropriate for external use. We employ three overarching records dispositions, which enable the Ford Foundation to be as transparent as possible while avoiding unnecessary risk and mitigating the security concerns of the at-risk populations we serve.

  • Temporary Records have no archival value but may have legal retention requirements. We destroy or delete these at the proper times.
  • Archival Records are appropriate for external use. We send them to our external archival repository for public service.
  • Archival-Closed Records are permanent records that are not suitable for external use. We preserve these records internally.
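These three dispositions can be modeled as a simple lookup. A minimal sketch, assuming hypothetical labels and handling strings (this is illustrative, not the foundation's actual schema):

```python
from enum import Enum

class Disposition(Enum):
    """The three overarching dispositions described above (labels illustrative)."""
    TEMPORARY = "destroy or delete once legal retention requirements lapse"
    ARCHIVAL = "transfer to the external archival repository for public service"
    ARCHIVAL_CLOSED = "preserve internally; not released for external use"

def handling_instruction(d: Disposition) -> str:
    """Look up the handling instruction attached to a record's disposition."""
    return d.value
```

Tying each record to exactly one disposition keeps the decision auditable: every record either has a destruction date, an external repository destination, or an internal preservation plan.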

3. The embargo period is our friend

The IM team works closely with General Counsel to develop internal and external embargo periods for all archival records. These periods range from records made available immediately to those that remain closed for 25 years. The practice serves several purposes. First, it allows information that is sensitive, in the present, to become less sensitive and more appropriate for sharing over time. It also provides much-needed perspective and enables researchers to use foundation records in a more informed context. Finally, our embargo policy provides us the flexibility to extend restrictions for certain records should the need arise—e.g., with the benefit of hindsight or as political and social contexts shift over time.
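As a rough illustration of how embargo rules like these might be encoded, here is a sketch in which the series names and periods are hypothetical (the source states only a range from immediate availability to 25-year closure), and an extension parameter models re-restricting records as contexts shift:

```python
from datetime import date

# Hypothetical embargo periods, in years from a record's creation date
# (0 = available immediately; 25 = the longest closure mentioned above).
EMBARGO_YEARS = {"annual-reports": 0, "correspondence": 10, "grant-files": 25}

def release_date(series: str, created: date, extension: int = 0) -> date:
    """When a record opens; `extension` models extending restrictions later."""
    return created.replace(year=created.year + EMBARGO_YEARS[series] + extension)
```

A rule table like this makes the policy reviewable in one place, and the extension parameter means a restriction can be lengthened without rewriting the underlying schedule.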

4. Managing records is everyone’s responsibility

It would be impossible for the foundation to satisfy these commitments without fostering a culture that supports the rigorous management of our records.

At Ford, records management training is mandatory for all staff to ensure they understand the policies and their responsibilities under those policies. We place the subject matter experts at the center, empowering them to be the curators of their own legacy. That is, they are empowered to identify both what is archivally valuable and what is too sensitive or otherwise inappropriate for external use. We place restrictions on such records and revisit those restrictions over time as global circumstances change. The goal is to safeguard restricted material until the organization feels the records no longer pose a risk, after which the restriction is lifted.

There are many ways organizations can balance their desire to manage their records, preserve their legacy, and ensure sensitive information is protected, all while telling the story of those often marginalized voices doing the work. At Ford, we have found that a combination of sound, enterprise-wide policies; an organizational culture that understands the value of archival records; and putting subject matter experts at the center allows us to maintain the right balance between being transparent and mitigating risks to the organization and the people we support.


Andrea Donohue is the Senior Manager, Global Records and Archives at the Ford Foundation in New York, N.Y., where she has worked on the digital transformation of the foundation, its policies, procedures, and systems. Andrea has also developed and manages a globally compliant records and archives management program designed to mitigate risk while preserving organizational history and increasing access to information. Andrea has a Master’s Degree in Library Science and holds certifications as a Records Manager (CRM), an Information Governance Professional (IGP), and a Federal Records Manager. She serves as a member of several international records organizations and believes in freely sharing her work to contribute to the records and archives profession’s body of knowledge.

Nicolette Lodico is the Director of Global Information and Knowledge Management at the Ford Foundation in NYC, where she leads foundation-wide programs in records, archives, and knowledge management. Her work focuses on establishing practices to increase transparency, preserve institutional memory, and contribute to historical scholarship and public discourse through the responsible management of institutional records. She also is president of the Technology Association of Grantmakers, a non-profit organization that cultivates the strategic, equitable, and innovative use of technology to advance philanthropy. Nicki is passionate about minimizing barriers to sharing and finding information and to analyzing information to reveal new insights. Her current interests include ontologies, digital curation, metadata, and machine learning. She earned her M.L.S. from the State University of New York at Buffalo.

Call for Contributions – Publicity/Privacy: ethical archiving in an age of mass surveillance

The Electronic Records Section Blog invites all and sundry to write about their experiences, concerns, paranoid prognostications, and experiments walking the line of ethical archiving in an era of mass surveillance. 

Our next series, Publicity/Privacy, is an opportunity to discuss the complexities of preserving records and making them accessible without exposing their subjects or creators to immediate (or eventual) harm, particularly from a law enforcement apparatus which is overwhelmingly focused on controlling the vulnerable and defending the powerful.

Blog posts may include reflections on the following prompts, among others:

  • How do we balance including voices/records/realities of vulnerable groups in the archives with the reality that doing so may endanger them?
  • How do we manage the push/pull between documenting a faithful record for the future and exposing people in marginalized groups (and those in opposition to power) to material harm in the present?
  • How do we grapple with the reality of inequitable treatment by the justice system, especially when it means the marginalized voices whose exclusion from the archive we want to repair are targeted by police?
  • What strategies can we use to continue documenting in these areas while mitigating the risk (to individuals as well as institutions) of punitive legal action?
  • Beyond social media, what records have you encountered which pose such risks to donors, persons mentioned within the records, and collecting institutions?

Examples of completed or ongoing projects at archives addressing these issues are especially welcome.

If interested, please get in touch with the editorial team via ers.mailer.blog@gmail.com.

Is Physical Custody of Digital Archives Possible? Lifting the Lid on Data Centers

By Lori Eaton


As an archivist working largely with digital records, I’ve often felt a disconnect when thinking about physical custody of the electronic records in my care. Physical custody, according to Hunter (2003), falls in the accessioning process between legal control and intellectual control of records.[1] I can make a tactile connection to archival materials in the form of paper records, manuscripts, photos, negatives, slides, film, and video tapes. I can point to them on a shelf or hold them in my hands. But the idea of physical control of digital archives feels slippery to me. Even CDs, DVDs, and various physical media are only temporary locations for digital records. While I can point to the representations of electronic files rendered by my operating system and software applications, does that mean I have physical control? Is physical custody of digital records even possible?

To ease my unease, I decided to investigate where the digital records for one of the clients I work with are physically stored. This client chose Preservica to preserve their digital records. (It works well for them; however, this statement is not intended as an endorsement.) Based on my client’s geographic location, Preservica stores their data, with redundancies, at an Amazon Web Services (AWS) data center in Northern Virginia. Since I currently live in Georgia and travel is not advisable these days, I decided that touring a data center of any kind would help me visualize where the digital archives I manage live. I contacted the manager of a data center in Atlanta called DataBank for a tour.  

Image courtesy of DataBank ATL1

The first thing I learned was that not all data centers are created equal. DataBank ATL1 serves as a high-performance computing (HPC) center for organizations such as 911 call centers, hospitals, and government agencies that require a high level of security (we’re talking spy-novel-level encryption), fast data recoverability, and near-instantaneous access to their data through fiber optic connections rather than via internet-dependent connections. In contrast, requirements for digital preservation data storage are a bit more pedestrian. Reliable, redundant storage is critical for digital collections, but cultural heritage institutions and other organizations preserving electronic archival records are less likely to require encryption or high-speed fiber optic access. Customers like Preservica, and therefore my client, can purchase data center space at a lower price point than a customer that requires HPC services. That’s not to say that security is not a consideration at lower-cost data centers. AWS’s security and compliance white paper describes the company’s commitment to managing the security of on-site data.[2]

Setting aside the variables of speed and security, I still wanted to understand where inside the data center the 0s and 1s that actually make up my client’s data are stored. According to a colleague who consumes data storage services at many levels and price points, regardless of the type of data center, the hardware configurations used to store data are conceptually similar. Spin drives or solid-state drives are connected to one another in RAIDs (redundant arrays of inexpensive disks, or redundant arrays of independent disks). There are different RAID levels, each offering different degrees of redundancy and performance (more on that here). In some RAID configurations, when a disk in an array fails, a new one is installed and data is copied to it from the remaining disks in the array, so no data is lost. The heart of most data centers is rows of racks that hold servers, multiple disk arrays, and other related hardware, along with components that help manage, cool, and power the drives.
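The rebuild-from-survivors idea can be shown in a few lines. This is a toy RAID-5-style sketch using XOR parity; real arrays stripe data and rotate parity across disks, which this deliberately ignores:

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks byte-by-byte; any one can be rebuilt from the rest."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"archival", b"records!", b"forever."]  # three "disks"
p = parity(data)                                 # the parity "disk"

# Simulate losing disk 1, then rebuild it from the surviving disks plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Because XOR is its own inverse, XORing the surviving blocks with the parity block reproduces exactly the missing block, which is why a single-disk failure costs no data.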

If I could identify which AWS data center in Northern Virginia manages my client’s data, I might have some hope of pointing to a particular disk, in a particular RAID, on a particular rack and saying, the archival records I’m responsible for managing are stored there. But disk failures mean that my client’s data may not remain in one place for long. When it comes to long-term digital preservation, data repair and redundancy are positive attributes. Add to that the best practice of geographic redundancy and different data storage locations for access copies, preservation copies, and dark archives, and it becomes even more challenging to point to a specific location where digital archives are stored. Maybe that’s how it should be. To stay viable, usable, and accessible, data must remain on the move, a step ahead of bit rot and format obsolescence. But all that movement and multiplication certainly puts quotation marks around the idea of having “physical custody” of digital archives.
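Staying a step ahead of bit rot rests on routine fixity checks: record a checksum at ingest, then re-verify it on a schedule. A minimal sketch using SHA-256, with in-memory bytes standing in for files on disk:

```python
import hashlib

def checksum(payload: bytes) -> str:
    """Digest recorded at ingest and re-verified on an audit schedule."""
    return hashlib.sha256(payload).hexdigest()

original = b"the 0s and 1s of an archival record"
recorded = checksum(original)

# A later audit: any flipped bit changes the digest, flagging this copy
# for repair from a redundant replica at another location.
assert checksum(original) == recorded
assert checksum(b"the 0s and 1s of an archival recorD") != recorded
```

The checksum, not the physical disk, is what lets an archivist assert that the record in their custody is still the record they accessioned, wherever its bits currently live.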


[1] Hunter, Gregory S. (2003). Developing and Maintaining Practical Archives. New York: Neal Schuman, pp. 101-109.

[2] https://docs.aws.amazon.com/whitepapers/latest/aws-overview/security-and-compliance.html


Lori Eaton is an archives consultant working with foundations and nonprofits. Previously, she was project archivist at the Dorothy A. Johnson Center for Philanthropy at Grand Valley State University. She has an MLIS and a Graduate Certificate in Archival Administration from Wayne State University.

Perspectives and Experiences in Collection Development with COVID-19 Content

By Jillian Lohndorf and Sylvie Rollason-Cass

For some, the switch to remote work meant that web archiving became a much bigger part of their day-to-day. Not only were people looking for digital projects to focus on, but new web-based information and content related to COVID-19 was being generated quickly. Nearly every government, organization, and company had information to share, and librarians and archivists were working to document it as quickly as it was being created. The team at Archive-It, the web archiving service of the Internet Archive, helped facilitate the increased activity by offering steeply discounted data and providing opportunities for users to discuss strategies, challenges, and outcomes, and generally share experiences. These opportunities came in the form of regular community calls, as well as presentations and discussion groups related to COVID-19 collecting at the annual (and newly virtual) Archive-It Partner Meeting. Information from these discussions, as well as the collecting activity itself, has shone a spotlight on the many lessons that Archive-It users have already learned about web archive collection development, and the many ways they continue to innovate.

The International Internet Preservation Consortium’s June 2020 capture of COVID-19 data reported from Palestine

The way an organization approached collection development was often dictated by already-established processes, which, in turn, were often dictated by larger goals and requirements. Some of these organizational decisions were influenced by the resources available, most notably data and staff time. Many organizations created new, discrete collections for their COVID-19 content. The idea of creating thematic collections in response to an event is not new; many organizations have created them to document web content related to what are described as “spontaneous events,” which in the past have included natural events and societal tragedies. Other organizations chose to integrate new COVID-19 content into existing collections, contextualized alongside other regularly captured content from their institutions and communities. Sometimes the content went into both new and existing collections: institutional COVID-19 responses were captured in established collections, while separate collections with an exclusive COVID-19 focus were created as well. For many, collection development decisions went hand-in-hand with descriptive practices, such as building out metadata using controlled language as a way to facilitate access to information.

Metadata record from the Pennsylvania Horticultural Society COVID-19 Collection

The collected content often reflected both institution and community, and spanned a range of formats, pushing some organizations into new territory. Social media, from Facebook to Twitter, was of particular importance, and for many, capturing COVID-19 dashboards and maps became important as well. Some organizations had either internal or public calls for website nominations, providing an opportunity to engage with others to build inclusive collections. With the amount of available information, and the rate of change, no organization was immune to making difficult decisions about what to capture.

The pandemic is far from over, and even when it is, we are likely to see repercussions for a long time to come. For many, collecting COVID-19 content will be a long-term project with new twists and turns along the way. As of this writing, Archive-It users have collected over 40TB of web content related to COVID-19, and as that amount of data continues to rise, the lessons of web archiving will accumulate as well.


Jillian Lohndorf joined the Internet Archive in 2016 as an Archive-It Web Archivist. Previously, she worked in the Archives and Special Collections at DePaul University and Rotary International, and as Web Services Librarian for The Chicago School of Professional Psychology. She holds a Master of Science in Library and Information Science from the University of Illinois at Urbana-Champaign.

Sylvie Rollason-Cass is the Senior Web Archivist for Archive-It where she has been supporting the web archiving community for the past 8 years. She holds a Master of Science in Library and Information Science from the University of Illinois at Urbana-Champaign.

Recap: BitCurator Users Forum, October 13-16, 2020

In the lead-up to the 2020 BitCurator Users Forum, the session I looked forward to the most was titled “GREAT QUESTION!”. This was a returning session from the 2019 BitCurator conference, and was an opportunity for attendees to anonymously ask digital preservation questions they might not be comfortable asking otherwise. The session showed that no question was too “simple” or “basic” to be worth discussing, and that no matter where you are as a practitioner, there’s always more to learn.

Last year’s session was at the very end of the forum, and ended things on a fun note. Attendees could submit questions anonymously at any point in the conference, and moderators presented these questions for discussion until the session ended. Most of the questions turned into discussions about tools, methods, perspectives, and professional philosophies in a way that made these topics accessible and exciting. It was reassuring to see how much we’re still collectively figuring out as a field, and that sometimes digital preservation work is less about best practices, and more about adapting those practices so they work for you and your institution.

This year’s session, like the rest of the forum, took place over Zoom. The format didn’t change much from last year, aside from switching to a virtual session and adding more ways to answer questions. Question submissions went to a queue visible to the moderators, as well as an Airtable board where everyone could see both questions and answers. This allowed attendees to see and respond to other questions while the main conversation addressed questions one at a time. Questions in the response queue were prioritized through progressive stacking, a technique that gives priority to marginalized voices. In this case, there was a box on the question submission form which attendees could check if they were part of a group historically underrepresented or marginalized in digital preservation spaces (e.g. attendees of color, LGBTQ attendees). Submissions with this box checked were discussed first.
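The queue ordering described above can be sketched with a stable sort. The field names here are hypothetical, and the sketch leans on the fact that Python's `sorted` is stable, so arrival order is preserved within each group:

```python
def order_questions(queue: list[dict]) -> list[dict]:
    """Progressive stacking: self-identified priority submissions come first."""
    # `not checked` maps priority questions to False, which sorts before True;
    # the sort's stability keeps arrival order within each group.
    return sorted(queue, key=lambda q: not q["priority_box_checked"])

queue = [
    {"text": "Fixity tools?", "priority_box_checked": False},
    {"text": "Virus scanning?", "priority_box_checked": True},
    {"text": "Workload split?", "priority_box_checked": False},
]
ordered = order_questions(queue)
# Prioritized question first; the rest keep their original submission order.
```

In practice the moderators applied this manually from the Airtable view; the sketch just makes the ordering rule explicit.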

Attendees could submit answers anonymously via Airtable, answer verbally on Zoom, or respond in the chat. Further discussion (and chatter) happened both out loud and in the chat. It was fun and conversational, but never chaotic. Question topics ranged from virus scanning and fixity checking, to tool recommendations and workload distribution. There were also questions about advocating for digital preservation, the ethical issues inherent in using law enforcement-derived tools for digital archives work, and handling the emotional toll of doing this work in the current moment. Each question sparked thoughtful, informative, and sometimes funny responses, and the option to submit written answers allowed attendees to keep answering questions after the session ended. The question submission form was left open as well, in case anyone thought of a question once the session was over.

Everyone seemed to get a lot out of the experience, and several people mentioned wanting to do something like it at future conferences, or on a regular basis. It was heartening to see that others had the same questions I did; it really emphasized how much we’re all still learning, and how important it is to have a community of fellow practitioners you can rely on and share ideas with. I liked how casual the session felt; since we used the chat in addition to speaking out loud and answering questions via Airtable, it was easier to expand on a point, talk about what worked, and commiserate about what didn’t. This made it a lot less intimidating to jump into the discussion; no one was staring at you, and you weren’t the only person speaking — you were just chatting with colleagues who had the same kinds of experiences, questions, and problems as you. I’m looking forward to seeing more conference sessions like this in the future, and hope to see similar ones in other venues.


Tori Maches is the Digital Archivist at UC San Diego Library. Her work currently includes developing and implementing born-digital processing workflows in Special Collections & Archives, and managing the Library’s overall web archiving work.

From Aspirational to Actionable: Working through the OSSArcFlow Guide

by Elizabeth Stauber


Before I begin extolling the virtues of the OSSArcFlow Guide to Documenting Born-Digital Archival Workflows, I must confess that I created an aspirational digital archiving workflow four years ago, and for its entire life it has existed purely as a decorative piece of paper hanging next to my computer. This workflow was extensive and contained as many open source tools as I could find. It was my attempt to follow every digital archiving best practice that has ever existed.

In actual practice, I never had time to follow this workflow. As a lone arranger at the Hogg Foundation for Mental Health, my attention is constantly divided. Instead, I found ways to incorporate aspects of digital archiving into my records management and archival description work, thus making the documentation fragmented. A bird’s-eye view of the entire lifecycle of the digital record was not captured – the transition points between accession, processing, and description went unaccounted for.

Over the summer, a colleague suggested we go through the OSSArcFlow Guide to Documenting Born-Digital Archival Workflows together. Initially, I was skeptical, but my new home office needed some sprucing up, so I decided to go along. Immediately, I saw that the biggest difference between working through this guide and my prior, ill-fated attempt was that the OSSArcFlow Guide systematically helps you document what you already do. It does not shame you for not converting every file type to the most archivally sound format or for not completing fixity checks every month. Rather, it showed me that I am doing the best I can as one person managing an entire organization’s records – and look how far I have come!

Taking the time to work through a structured approach for developing a workflow helped organize my digital archiving priorities and thoughts. It is easy to be haphazard as a lone arranger with so many competing projects. Following the guide allowed me to be systematic in my development and led to a better understanding of what I currently do with regard to digital archiving. For example, the act of categorizing my activities as appraisal, pre-accessioning, accessioning, arrangement, description, preservation, and access parceled out the disparate but coexisting work into manageable amounts. It connected the different processes I already had, and revealed the overlaps and gaps in my workflow.

As I continued mapping out my activities, I was also able to more easily see the natural “pause” points in my workflow. This is important because digital archiving is often fit in around other work, and knowing when I can break from the workflow allows me to manage my time more efficiently – making it more likely that I will achieve progress on my digital archiving work. Having this workflow that documents my actual activities rather than my aspirational activities allows for easier future adaptability. Now I can spot more readily what needs to be added or removed. This is helpful in a lone arranger archive as it allows for flexibility and the opportunity for improvement over time.

The Hogg Foundation was established in 1940 by Ima Hogg. The Foundation’s archive houses many types of records from its 80 years of existence – newspapers, film, cassette tapes, and increasingly born-digital records. As the Foundation continues to make progress in transforming how communities promote mental health in everyday life, it is important to develop robust digital archiving workflows that capture this progress.

Now I understand my workflow as an evolving document that serves as the documentation of the connections between different activities, as well as a visualization to pinpoint areas for growth. My digital processing workflow is no longer simply a decorative piece of paper hanging next to my computer.


Elizabeth Stauber stewards the Hogg Foundation’s educational mission to document, archive and share the foundation’s history, which has become an important part of the histories of mental and public health in Texas, and the evolution of mental health discourse nationally and globally. Elizabeth provides access to the Hogg Foundation’s research, programs, and operations through the publicly accessible archive. Learn more about how to access our records here.

Laying Out the Horizon of Possibilities: Reflections on Developing the OSSArcFlow Guide to Documenting Born-Digital Archival Workflows

by Alexandra Chassanoff and Hannah Wang


OSSArcFlow (2017-2020) was an IMLS-funded grant initiative that began as a collaboration between the Educopia Institute and the University of North Carolina School of Library and Information Science. The goal of the project was to investigate, model, and synchronize born-digital curation workflows for collecting institutions that were using three leading open source software (OSS) platforms – BitCurator, Archivematica, and ArchivesSpace. The team recruited a diverse group of twelve partner institutions, ranging from a state historical society to public libraries to academic archives and special collections units at large research universities and consortia.

OSSArcFlow partners at in-person meeting in Chapel Hill, NC (December 2017)
Creator: Educopia Institute

Early on in the project, it became clear that many institutions were planning for and carrying out digital preservation activities ad hoc rather than as part of fully formed workflows. The lack of “best practice” workflow models to consult also seemed to hinder institutions’ abilities to articulate what shape their ideal workflows might take. Creating visual workflow diagrams for each institution provided an important baseline from which to compare and contrast workflow steps, tools, software, roles, and other factors across institutions. It also played an important, if unexpected, role in helping the project team understand the sociotechnical challenges underlying digital curation work. While configuring systems and processing born-digital content, institutions make many important decisions – what to do, how to do it, when to do it, and why – that influence the contours of their workflows. These decisions and underlying challenges, however, are often hidden from view, and can only be made visible by articulating and documenting the actions taken at each stage of the process. Similarly, while partners noted that automation in workflows was highly desirable, the documented workflows revealed the highly customized local implementations at each institution, which prevented the team from writing generalizable scripts for metadata handoffs that could apply to more than one institution’s use case.

Another unexpected but important pivot in the project was a shift towards breakout group discussions to focus on shared challenges or “pain points” identified in our workflow analysis. For partners, talking through shared challenges and hearing suggested approaches proved immensely helpful in advancing their own digital preservation planning. Our observation echoes similar findings by Clemens et al. (2020) in “Participatory Archival Research and Development: The Born-Digital Access Initiative,” who note that “the engagement and vulnerability involved in sharing works in progress resonates with people, particularly practitioners who are working to determine and achieve best practices in still-developing areas of digital archives and user services.” These conversations not only helped to build camaraderie and a community of practice around digital curation, but also revealed that planning for more mature workflows seemed to ultimately depend on understanding more about what was possible.

Overall, our research on the OSSArcFlow project led us to understand more about how gaps in coordinated work practices and knowledge sharing can impact the ability of institutions to plan and advance their workflows. These gaps are not just technical but also social, and crucially, often embedded in the work practices themselves. Diagramming current practices helps to make these gaps more visible so that they can be addressed programmatically. 

At the same time, the use of research-in-practice approaches that prioritize practitioner experiences in knowledge pursuits can help institutions bridge these gaps between where they are today and where they want to be tomorrow. As Clemens et al. (2020) point out, “much of digital practice itself is research, as archivists test new methods and gather information about emerging areas of the field.” Our project findings show a significant difference between how the digital preservation literature conceptualizes workflow development and how boots-on-the-ground practitioners actually do the work of constructing workflows. Archival research and development projects should build in iterative practitioner reflections as a component of the R&D process, an important step for continuing to advance the work of doing digital preservation.

Initially, we imagined that the Implementation Guide we would produce would focus on strategies used to synchronize workflows across three common OSS environments. Based on our project findings, however, it became clear that helping institutions articulate a plan for digital preservation through shared and collaborative documentation of workflows would provide an invaluable resource for institutions as they undertake similar activities. Our hope in writing the Guide to Documenting Born-Digital Archival Workflows is to provide a resource that focuses on common steps, tools, and implementation examples in service of laying out the “horizon of possibilities” for practitioners doing this challenging work.  

The authors would like to recognize and extend immense gratitude to the rest of the OSSArcFlow team and the project partners who helped make the project and its deliverables a success. The Guide to Documenting Born-Digital Archival Workflows was authored by Alexandra Chassanoff and Colin Post and edited by Katherine Skinner, Jessica Farrell, Brandon Locke, Caitlin Perry, Kari Smith, and Hannah Wang, with contributions from Christopher A. Lee, Sam Meister, Jessica Meyerson, Andrew Rabkin, and Yinglong Zhang, and design work from Hannah Ballard.


Alexandra Chassanoff is an Assistant Professor at the School of Library and Information Sciences at North Carolina Central University. Her research focuses on the use and users of born-digital cultural heritage. From 2017 to 2018, she was the OSSArcFlow Project Manager. Previously, she worked with the BitCurator and BitCurator Access projects while pursuing her doctorate in Information Science at UNC-Chapel Hill. She co-authored (with Colin Post and Katherine Skinner) the Guide to Documenting Born-Digital Archival Workflows.    

Hannah Wang is currently the Project Manager for BitCuratorEdu (IMLS, 2018-2021), where she manages the development of open learning objects for digital forensics and facilitates a community of digital curation educators. She served as the Project Manager for the final stage of OSSArcFlow and co-edited the Guide to Documenting Born-Digital Archival Workflows.

Scalability, Automation, and Impostor Syndrome! Oh My!

By Kathryn Slover


From October 13-16, 2020, BitCurator hosted its annual Users Forum virtually. The forum consisted of presentations on a variety of digital preservation topics, but the one that stood out to me was the first panel, on scalability and automation, which consisted of three presentations.

In August I started a new job at the University of Texas at Arlington, and this was my first BitCurator Users Forum (BUF). I was very excited to hear from other professionals in digital preservation and looked forward to learning about new resources I could use in my new position. While watching the first presentation, I was hit with some serious impostor syndrome as many of the terms flew over my head! I started rapidly writing down terms to Google later, like DPX film scans, RAWcooked, and FFV1 Matroska. Joanna White’s presentation about The British Film Institute’s project to convert 3PB of DPX film scans into FFV1 Matroska video files using automation scripts sounded fascinating, but I must admit I was a tad overwhelmed (and by a tad I mean completely). I couldn’t help but think of all the things I didn’t know. 

After a moment (or several moments) of panic, the second presentation in this session restored my faith that I was, in fact, a Digital Archivist who did know things about digital preservation. Lynn Moulton’s presentation really resonated with me. As it turns out, during last year’s BUF, she struggled with the same feelings I had while watching the previous presentation. She spoke about her own experience with impostor syndrome and reminded me that I, like most Digital Archivists, come from an archives background and not a computer science one. 

As someone relatively new to the world of digital archives, it was comforting to hear that I’m not the only one who sometimes gets overwhelmed and feels like a phony. Luckily, there are people like Lynn who share their own experience and detail how they overcame those feelings. She talked about her process, which includes looking at others’ documentation, testing solutions (and testing them again), the need for support, and the fact that failure is an important part of the process. By the end of her presentation, my overwhelmed feelings had subsided a bit. Even though I don’t know everything, there is an amazing community of digital preservation professionals out there who are dealing with similar issues and are always there to help.  

With my renewed energy, I was a bit more prepared for the final presentation of the session. David Cirella and Greta Graf presented on automating the packaging and ingest process for electronic resources at Yale University. This presentation felt a bit more within my wheelhouse. They focused primarily on scripting with Python and Bash. Some automation had already been implemented by my predecessor using Python, so I was particularly excited about this subject. Even though I wasn’t familiar with everything, I came out of this presentation with a few ideas about implementing automation at my new institution. 
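The Yale scripts themselves weren’t shared in the recap, but a common first step in this kind of packaging-and-ingest automation is generating a fixity manifest for a directory of content. Here is a minimal, hypothetical Python sketch of that pattern (the function name and manifest filename are illustrative, not from the presentation):

```python
import hashlib
from pathlib import Path

def write_checksum_manifest(source_dir, manifest_name="manifest-sha256.txt"):
    """Hash every file under source_dir and write a checksum manifest,
    one 'digest  relative/path' line per file (BagIt-style formatting)."""
    source = Path(source_dir)
    entries = []
    for path in sorted(p for p in source.rglob("*") if p.is_file()):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        entries.append(f"{digest}  {path.relative_to(source)}")
    # Write the manifest alongside the content so it travels with the package
    (source / manifest_name).write_text("\n".join(entries) + "\n")
    return entries
```

In practice, a script like this would be one stage in a longer pipeline (virus scanning, format identification, bagging, transfer), but even this small piece removes a tedious, error-prone manual task.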

I wasn’t expecting a session on scalability and automation to be such an emotional roller coaster, but it was an informative ride! In this session alone, I came to realize that there are varying levels of skill and expertise when it comes to the work of digital preservation. We all bring something to the table. The rest of the BUF sessions reinforced the fact that digital preservation professionals (no matter the project) are all trying to do the best they can. We all face obstacles in our work but, with a solid network of hard-working colleagues, we can do a lot. I learned about so many helpful tools and educational resources that will hopefully help me conquer my own feelings of impostor syndrome as they pop up. Overall, the BUF was an amazing experience and I am grateful I was able to learn from this conference (even if I do have a notebook filled with terms and tools to look up)! 


Kathryn Slover is the Digital Archivist at the University of Texas at Arlington Special Collections. She has an M.A. in Public History from Middle Tennessee State University and previously held the role of Electronic Processing Archivist at the South Carolina Department of Archives and History.

Call for bloggERS: Blog Posts on the BitCurator Users Forum

With just a few weeks to go before the virtual 2020 BitCurator Users Forum (October 13-16), bloggERS is seeking attendees who are interested in writing a recap or a blog post covering a particular session, theme, or topic relevant to SAA Electronic Records Section members. The program for the Forum is available here.

Please let us know if you are interested in contributing by sending an email to ers.mailer.blog@gmail.com! You can also let us know whether you’re interested in writing a general recap or would like to cover something more specific.

Writing for bloggERS!

  • We encourage visual representations: Posts can include or largely consist of comics, flowcharts, a series of memes, etc!
  • Written content should be roughly 600-800 words in length
  • Write posts for a wide audience: anyone who stewards, studies, or has an interest in digital archives and electronic records, both within and beyond SAA
  • Align with other editorial guidelines as outlined in the bloggERS! guidelines for writers.

Call for Submissions: Dispatches from a Distance, Returning & Reopening

Earlier this year, many of us were asked to work from home and distance ourselves from colleagues and friends due to the global spread of COVID-19. Some of us are still in this position of working remotely, some of us have returned to our places of work, and some of us are now somewhere in-between or mixing multiple modes of work.

As some small step in lessening the isolation between us, BloggERS! began publishing a series called “Dispatches from a Distance” to provide a forum for those of us facing disruption in our professional lives, whether that’s working from home or something else, to stay engaged with the community. Now that so many of us are returning to full- or part-time on-site work, we’d like to extend this series to include reflections on reopening, returning to work, and other anxieties facing the profession due to COVID-19. There is no specific topic or theme for submissions; rather, this is a space to share your thoughts on current projects or ideas with other readers of the Electronic Records Section blog.

Dispatches should be between 200 and 500 words and can be submitted here. Posts should adhere to the SAA Code of Ethics for Archivists.

We look forward to hearing from all of you!

–The BloggERS! Editorial Subcommittee