ml4arc – Machine Learning, Deep Learning, and Natural Language Processing Applications in Archives

by Emily Higgs


On Friday, July 26, 2019, academics and practitioners met at Wilson Library at UNC Chapel Hill for “ml4arc – Machine Learning, Deep Learning, and Natural Language Processing Applications in Archives.” This meeting featured expert panels and participant-driven discussions about how we can use natural language processing – using software to understand text and its meaning – and machine learning – a branch of artificial intelligence that learns to infer patterns from data – in the archives.

The meeting was hosted by the RATOM (Review, Appraisal, and Triage of Mail) project, a partnership between the State Archives of North Carolina and the School of Information and Library Science at UNC Chapel Hill. RATOM will extend the email processing capabilities currently present in the TOMES software and the BitCurator environment, developing additional modules for identifying and extracting the contents of email-containing formats, performing NLP tasks, and applying machine learning approaches. RATOM and the ml4arc meeting are generously supported by the Andrew W. Mellon Foundation.

Presentations at ml4arc were split between successful applications of machine learning and problems that machine learning could potentially address in the future. In his talk, Mike Shallcross from Indiana University identified archival workflow pain points that present opportunities for machine learning. In particular, he sees the potential for machine learning to address issues of authenticity and integrity in digital archives, PII and risk mitigation, aggregate description, and how all these processes are (or are not) scalable and sustainable. Many of the presentations addressed these key areas and how natural language processing and machine learning can aid archivists and records managers. Attendees also saw presentations and demonstrations of email tools such as RATOM, TOMES, and ePADD. Euan Cochrane gave a talk about the EaaSI sandbox and discussed potential relationships between software preservation and machine learning.

The meeting agenda had a strong focus on using machine learning in email archives; collecting and processing email is a significant burden for many archives and stands to benefit greatly from machine learning tools. For example, Joanne Kaczmarek from the University of Illinois presented a project that processed capstone email accounts using e-discovery and predictive-coding software called Ringtail. In partnership with the Illinois State Archives, Kaczmarek used Ringtail to identify groups of “archival” and “non-archival” emails from 62 capstone accounts, and to further break down the “archival” category into “restricted” and “public.” After 3-4 weeks of tagging training data with this software, the team was able to reduce the volume of emails by 45% by excluding “non-archival” messages, and to identify 1.8 million emails that met the criteria to be made available to the public. Done manually, this tagging could easily have taken over 13 years of staff time.
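For readers curious what this kind of predictive coding involves under the hood, here is a minimal sketch of supervised text classification using scikit-learn; the messages, labels, and model choice are hypothetical illustrations, not the Ringtail software or the Illinois team's actual workflow.

    # A minimal, hypothetical sketch of supervised email classification in the
    # spirit of predictive coding; not the Ringtail software or the Illinois
    # workflow. The messages and labels below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A few hand-tagged messages stand in for weeks of training-data tagging.
    emails = [
        "Attached is the final records retention schedule for FY2019.",
        "Lunch at noon? The new taco place just opened.",
        "Board meeting minutes from the May session, for the permanent record.",
        "Your online order has shipped and will arrive Tuesday.",
    ]
    labels = ["archival", "non-archival", "archival", "non-archival"]

    # TF-IDF features feeding a simple linear classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(emails, labels)

    # Predict a label for an unseen message; at scale, this is how millions of
    # messages can be triaged before human review.
    print(model.predict(["Signed memorandum of agreement with the State Archives attached."]))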

After the ml4arc meeting, I am excited to see the evolution of these projects and how natural language processing and machine learning can help us with our responsibilities as archivists and records managers. From entity extraction to PII identification, there are myriad possibilities for these technologies to help speed up our processes and overcome challenges.


Emily Higgs is the Digital Archivist for the Swarthmore College Peace Collection and Friends Historical Library. Before moving to Swarthmore, she was a North Carolina State University Libraries Fellow. She is also the Assistant Team Leader for the SAA ERS section blog.



Securing Our Digital Legacy: An Introduction to the Digital Preservation Coalition

by Sharon McMeekin, Head of Workforce Development


Nineteen years ago, the digital preservation community gathered in York, UK, for the Cedars Project’s Preservation 2000 conference. It was here that the first seeds were sown for what would become the Digital Preservation Coalition (DPC). Guided by Neil Beagrie, then of King’s College London and Jisc, work to establish the DPC continued over the next 18 months and, in 2002, representatives from 7 organizations signed the articles that formally constituted the DPC.

In the 17 years since its creation, the DPC has gone from strength to strength, the last 10 years under the leadership of current Executive Director William Kilbride. The past decade has been a particular period of growth, as shown by the rise in the staff complement from 2 to 7. We now have more than 90 members representing an increasingly diverse group of organizations from 12 countries, across sectors including cultural heritage, higher education, government, banking, industry, media, research, and international bodies.

DPC staff, chair, and president

Our mission at the DPC is to:

[…] enable our members to deliver resilient long-term access to digital content and services, helping them to derive enduring value from digital assets and raising awareness of the strategic, cultural and technological challenges they face.

We work to achieve this through a broad portfolio of work across six strategic areas of activity: Community Engagement, Advocacy, Workforce Development, Capacity Building, Good Practice and Standards, and Management and Governance. Everything we do is member-driven: members guide our activities through the DPC Board, Representative Council, and the Sub-Committees that oversee each strategic area.

Although the DPC is driven primarily by the needs of our members, we do also aim to contribute to the broader digital preservation community. As such, many of the resources we develop are made publicly available. In the remainder of this blog post, I’ll be taking a quick look at each of the DPC’s areas of activity and pointing out resources you might find useful.

1 | Community Engagement

First up is our work in the area of Community Engagement. Here our aim is to enable “a growing number of agencies and individuals in all sectors and in all countries to participate in a dynamic and mutually supportive digital preservation community”. Collaboration is key to digital preservation success, and we hope to encourage and support it by helping to build an inclusive and active community. An important step in achieving this aim was the publication of our ‘Inclusion and Diversity Policy’ in 2018.

Webinars are key to building community engagement amongst our members. We invite speakers to talk to our members about particular topics and to share experiences through case studies. These webinars are recorded and made available for members to watch at a later date. We also run a monthly ‘Members Lounge’ to allow informal sharing of current work and discussion of issues as they arise, and, on the public side of the website, we host a popular blog covering case studies, new innovations, thought pieces, recaps of events, and more.

2 | Advocacy

Our advocacy work campaigns “for a political and institutional climate more responsive and better informed about the digital preservation challenge”, as well as “raising awareness about the new opportunities that resilient digital assets create”. This tends to happen on several levels, from enabling and aiding members’ advocacy efforts within their own organizations, through raising legislators’ and policy makers’ awareness of digital preservation, to educating the wider populace.

To help those advocating for digital preservation within their own context, we have recently published our Executive Guide. The Guide provides a grab bag of statements and facts to help make the case for digital preservation, including key messages, motivators, opportunities to be gained and risks faced. We welcome any suggestions for additions or changes to this resource!

Our longest-running advocacy activity is the biennial Digital Preservation Awards, last held in 2018. The Awards aim to celebrate excellence and innovation in digital preservation across a range of categories. This high-profile event has been joined in recent years by two other activities with a broad remit and engagement. The first is the Bit List of Digitally Endangered Species, which highlights at-risk digital information, showing both where preservation work is needed and where efforts have been successful. The second is World Digital Preservation Day (WDPD), a day to showcase digital preservation around the globe. Response to WDPD since its inauguration in 2017 has been exceptionally positive: there have been tweets, blogs, events, webinars, and even a song and dance! This year WDPD is scheduled for 7th November, and we encourage everyone to get involved.

The nominees, winners, and judges for the 2018 Digital Preservation Awards

3 | Workforce Development

Workforce Development activities at the DPC focus on “providing opportunities for our members to acquire, develop and retain competent and responsive workforces that are ready to address the challenges of digital preservation”. There are many threads to this work, but key for our members are the scholarships we provide through our Career Development Fund and free access to the training courses we run.

At the moment we offer three training courses: ‘Getting Started with Digital Preservation’, ‘Making Progress with Digital Preservation’ and ‘Advocacy for Digital Preservation’, but we have plans to expand the portfolio in the coming year. All of our training courses are available to non-members for a modest fee, but at the moment they are mostly held face-to-face in the UK and Ireland. A move to online training provision is, however, planned for 2020. We are also happy to share training resources and have set up a Slack workspace to enable this and to support greater collaboration around digital preservation training.

Other resources under our Workforce Development heading that may prove helpful include the ‘Digital Preservation Handbook’, a free online publication covering digital preservation in the broadest sense. The Handbook aims to be a comprehensive guide for those starting out with digital preservation, whilst also offering links to additional resources. The content for the Handbook was crowd-sourced from experts and has all been peer reviewed. Another useful and slightly less well-known series of publications is our ‘Topical Notes’, originally funded by the National Archives of Ireland and intended to introduce key digital preservation issues to a non-specialist audience (particularly record creators). Each note is only two pages long and jargon-free, making it a great resource for raising awareness.

4 | Capacity Building

Perhaps the biggest area of DPC work covers Capacity Building, that is “supporting and assuring our members in the delivery and maintenance of high quality and sustainable digital preservation services through knowledge exchange, technology watch, research and development.” This can take the form of direct member support, helping with tasks such as policy development and procurement, as well as participation in research projects.

Our more advanced publication series, the Technology Watch Reports, also sits under the Capacity Building heading. Written by experts and peer reviewed, each report takes a deeper dive into a particular digital preservation issue. Our latest report, on Email Preservation, is currently available for member preview but will be publicly released shortly. Some other ‘classics’ include Preserving Social Media, Personal Digital Archiving, and the always popular The Open Archival Information System (OAIS) Reference Model: Introductory Guide (2nd Edition). (I always tell those new to OAIS to start here rather than with the 200+ dry pages of the full standard!)

We also run around six thematic Briefing Day events a year on topical issues. As with the training, these are largely held in the UK and Ireland, but they are now also live-streamed for members. We support a number of Thematic Task Forces and Working Groups, with the ‘Web Archiving and Preservation Working Group’ being particularly active at the moment.

DPC members engaged in a brainstorming session

5 | Good Practice and Standards

Our Good Practice and Standards stream of work was a new addition as of the publication of our latest Strategic Plan (2018-22). Here we are contributing work towards “identifying and developing good practice and standards that make digital preservation achievable, supporting efforts to ensure services are tightly matched to shifting requirements.”

We hope this work will allow us to input into standards with the needs of our members in mind and to facilitate the sharing of good practice that already happens across the coalition. This has already borne fruit in the shape of the forthcoming DPC Rapid Assessment Model, a maturity model to help with benchmarking digital preservation progress within your organization. You can read a bit more about it in this blog post by Jen Mitcham; the model will be released publicly in late September.

We also work with vendors through our Supporter Program and events like our ‘Digital Futures’ series to help bridge the gap between practice and solutions.

6 | Management and Governance

Our final stream of work is focused less on digital preservation and more on “ensuring the DPC is a sustainable, competent organization focussed on member needs, providing a robust and trusted platform for collaboration within and beyond the Coalition.” This relates both to the viability of the organization and to good governance. It is essential that everything we do is transparent and that the members can both direct what we do and ensure accountability.

The Future

Before I depart, I thought I would share a little bit about some of our plans for the future. In the next few years we’ll be taking steps to further internationalize as an organization. At the moment our membership is roughly 75% UK and Ireland and 25% international, but those numbers are gradually moving closer and we hope that continues. With that in mind we will be investigating new ways to deliver services and resources online, as well as in languages beyond English. We’re starting this year with the publication of our prospectus in German, French and Spanish.

We’re also beginning to look forward to our 20th anniversary in 2022. It’s a Digital Preservation Awards Year, so that’s reason enough for a celebration, but we will also be welcoming the digital preservation community to Glasgow, Scotland, as hosts of iPRES 2022. Plans are already afoot for the conference, and we’re excited to make it a showcase for both the community and one of our home cities. Hopefully we’ll see you there, but I encourage you to make use of our resources and to get in touch soon!

Access our Knowledge Base: https://www.dpconline.org/knowledge-base

Follow us on Twitter: https://twitter.com/dpc_chat

Find out how to join us: https://www.dpconline.org/about/join-us


Sharon McMeekin is Head of Workforce Development with the Digital Preservation Coalition and leads on work including training workshops and their scholarship program. She is also Managing Editor of the ‘Digital Preservation Handbook’. With Masters degrees in Information Technology and Information Management and Preservation, both from the University of Glasgow, Sharon is an archivist by training, specializing in digital preservation. She is also an ILM qualified trainer. Before joining the DPC she spent five years as Digital Archivist with RCAHMS. As an invited speaker, Sharon presents on digital preservation at a wide variety of training events, conferences and university courses.

Midwest Archives Conference 2019 meeting (MAC 2019)

by A.L. Carson

The Midwest Archives Conference 2019 meeting, held April 3-6 in Detroit (in the GM Renaissance Center, which may have the distinction, with its concentric circle design, of being the most bewildering conference center I’ve ever been in), chose “Innovations, Transformation, Resurgence” as its theme. The organizers put out a call for participants to “consider the ways they have transformed their local communities and the world,” and it seemed to have struck a chord: the sessions reflected a sense of rootedness as well as a desire to increase and deepen connections between repositories, their holdings, the communities they represent, and (crucially) those they haven’t.

The programming took a broad perspective on the profession and practice of archives, giving space to multiple approaches and understandings of the work, from imposter syndrome to workflows, resulting in some really generative sessions. I attended a number of sessions focused on surfacing the histories of underserved and marginalized groups in the Midwest, notably “Together, We Make It: Making Collections Featuring Minority Groups More Accessible” and “Documenting the History of HIV/AIDS in the Midwest.”

Two standouts on the technical practice and electronic records side were “Computer-Assisted Appraisal of Electronic Records” and “Archival Revitalization: Transforming Technical Services with Innovative Workflows,” both of which were relevant to my (new) position as a processing archivist. For a play-by-play of some of these sessions, you can check out my MAC Twitter feed (yes, I live-tweet conferences). Both emphasized balancing competing priorities and unequal capacities, familiar themes for anyone working in archives. Leading off “Computer-Assisted Appraisal,” Cal Lee reminded everyone that there was no such thing as a perfect machine system (which would remove the human labor from appraisal), and that the goal should never be to create one: machines are tools, not agents. That emphasis on human action, particularly communicating across and about technological divides, was echoed again in “Archival Revitalization,” which focused on instances of implementation (new processes, tools, and workflows) that were made possible through and in turn assisted human collaboration. Both sessions, too, spoke to the importance of understanding iteration as an integral part of workflows (whether appraisal, processing, or providing access) rather than something to be engineered out of a process.

Thanks to scholarship and grant programs (of which we can always have more), a number of paraprofessionals and short-term or project archivists were able to attend and present, which enriched the programming significantly. There was a strong showing from regional LIS students, both in their Friday poster session and in the general programming. Having just started my position at Iowa State, I was attending my first MAC; it was also my first time in Detroit, and overall I was favorably impressed. While the conference center itself is a marvel of hostile architecture (which made literal accessibility a real and not-to-be-downplayed challenge), the intellectual content of the presentations and the general attitude of the attendees made it a fairly easy space in which to be a newcomer.

A.L. Carson is a processing archivist at Iowa State University, where they are engaged in developing processing, preservation, and access guidelines for digital records as well as increasing the availability of the traditional collections.

An Interview With Caitlin Birch — Digital Collections and Oral History Archivist at the Rauner Special Collections Library, Dartmouth

Interview conducted with Caitlin Birch by Juli Folk in March 2019

This is the third post in the Conversations series

Meet Caitlin Birch

Caitlin Birch is the Digital Collections and Oral History Archivist for the Rauner Special Collections Library at Dartmouth College in Hanover, New Hampshire. She sat down with Juli Folk, a graduate student at the University of Maryland-College Park iSchool who is pursuing an archives-focused MLIS and a certificate in Museum Scholarship and Material Culture. Caitlin’s descriptions of her career path, her roles and achievements, and her insights into the challenges she faces helped frame a discussion of helpful skill sets for working with born-digital archival records on a daily basis.

Caitlin’s Career Path

As an undergraduate, Caitlin majored in English, concentrating in journalism with minors in history and Irish studies. After a few years working as a reporter and editor, she began to consider a different career path, looking for other fields that emphasize constant learning, storytelling, and contributions to the historical record. In time, she decided on a dual degree (MA/MSLIS) in history and archives management from Simmons College (now Simmons University). Throughout grad school, her studies focused on both historical methods and original research as well as archival theory and practice.

When asked about the path to her current position, Caitlin responded, “To the extent that my program allowed, I tried to take courses with a digital focus whenever I could. I also completed two internships and worked in several paraprofessional positions, which were really invaluable to preparing me for professional work in the field. I finished my degrees in December 2013 and landed my job at Dartmouth a few months later.” She now works as the Digital Collections and Oral History Archivist for Rauner Special Collections Library, the home of Dartmouth College’s rare books, manuscripts, and archives, housed within the larger academic research library.

Favorite Aspects of Being an Archivist

For Caitlin, the best aspects of being an archivist are working at the intersection of history and technology; teaching and interacting with people every day; and having new opportunities to create, innovate, and learn. Her position includes roles in both oral history and born-digital records, and on any given day she may be juggling tasks like teaching students oral history methodology, working on the implementation of a digital repository, building Dartmouth’s web archiving program, managing staff, sharing reference desk duty, and staying abreast of the profession via involvement with the SAA and the New England Archivists Executive Board. “I like that no two days are the same,” she shared, adding, “I like that my work can have a positive impact on others.”

Challenges of Being an Archivist

Caitlin pointed out that aspects of the profession change and evolve at a pace that can make it difficult to keep up, especially when job- or project-related tasks demand so much attention. She also noted other challenges: “More and more we’re grappling with issues like the ethical implications of digital archives and the environmental impact of digital preservation.” That said, she finds that “the biggest challenge is also the biggest opportunity: most of what I do hasn’t been done before at Dartmouth. I’m the first digital archivist to be hired at my institution, so everything—infrastructure, policies, workflows, etc.—has been/is being built from the ground up. It’s exciting and often very daunting, especially because this corner of the archives field is dynamic.”

Advice for Students and Young Professionals

As a result, Caitlin emphasized the importance of experimentation and failure. “Traditional archival practice is well-defined and there are standards to guide it, but digital archives present all kinds of unique challenges that didn’t exist until very recently. Out of necessity, you have to innovate and try new things and learn from failure in order to get anywhere.” For this reason, she recommended building a good professional network and finding time to keep up with the professional literature. “It’s really key to cultivate a community of practice with colleagues at other institutions.”

When asked whether she sets aside time specifically for these tasks or finds that networking and research flow naturally from her daily work, Caitlin said that networking comes more easily because of her involvement with professional organizations. Finding time for professional literature and research, however, proved more difficult, a concern Caitlin brought to her manager. In response, he encouraged her to block 1-2 hours on her calendar at the same time every week to catch up on reading and professional news. She remains grateful for that support: “I would hope that every manager in this profession encourages time for regular professional development. It may seem like it’s taking time away from job responsibilities, but in actuality it’s helping you to build the skills and knowledge you need for future innovation.”



Juli Folk is finishing the MLIS program at the University of Maryland-College Park iSchool, specializing in Archives and Digital Curation. Previously a corporate editor and project manager, Juli supplements her passions for writing, art, and technology with formal archival training, refocusing her career on cultural heritage institutions.

PASIG (Preservation and Archiving Special Interest Group) 2019 Recap

by Kelly Bolding

PASIG 2019 met the week of February 11th at El Colegio de México (commonly known as Colmex) in Mexico City. PASIG stands for Preservation and Archiving Special Interest Group, and the group’s meeting brings together an international group of practitioners, industry experts, vendors, and researchers to discuss practical digital preservation topics and approaches. This meeting was particularly special because it was the first time the group convened in Latin America (past meetings have generally been held in Europe and the United States). Excellent real-time bilingual translation of presentations given in both English and Spanish enabled conversations across geographical and linguistic boundaries and made room to center Latin American preservationists’ perspectives and transformative post-custodial archival practice.

Perla Rodriguez of the Universidad Nacional Autónoma de México (UNAM) discusses an audiovisual preservation case study.

The conference began with broad overviews of digital preservation topics and tools to create a common starting ground, followed by more focused deep-dives on subsequent days. I saw two major themes emerge over the course of the week. The first was the importance of people over technology in digital preservation. From David Minor’s introductory session to Isabel Galina Russell’s overview of the digital preservation landscape in Mexico, presenters continuously surfaced examples of the “people side” of digital preservation (think: preservation policies, appraisal strategies, human labor and decision-making, keeping momentum for programs, communicating to stakeholders, ethical partnerships). One point that struck me during the community archives session was Verónica Reyes-Escudero’s discussion of “cultural competency as a tool for front-end digital preservation.” By conceptualizing interpersonal skills as a technology for facilitating digital preservation, we gain a broader and more ethically grounded idea of what it is we are really trying to do by preserving bits in the first place. Software and hardware are part of the picture, but they are certainly not the whole view.

The second major theme was that digital preservation is best done together. Distributed digital preservation platforms, consortial preservation models, and collaborative research networks were also well-represented by speakers from LOCKSS, Texas Digital Library (TDL), Duraspace, Open Preservation Foundation, Software Preservation Network, and others. The takeaway from these sessions was that the sheer resource-intensiveness of digital preservation means that institutions, both large and small, are going to have to collaborate in order to achieve their goals. PASIG seemed to be a place where attendees could foster and strengthen these collective efforts. Throughout the conference, presenters also highlighted failures of collaborative projects and the need for sustainable financial and governance models, particularly in light of recent developments at the Digital Preservation Network (DPN) and Digital Public Library of America (DPLA). I was particularly impressed by Mary Molinaro’s honest and informative discussion about the factors that led to the shuttering of DPN. Molinaro indicated that DPN would soon be publishing a final report in order to transparently share their model, flaws and all, with the broader community.

Touching on both of these themes, Carlos Martínez Suárez of Video Trópico Sur gave a moving keynote about his collaboration with Natalie M. Baur, Preservation Librarian at Colmex, to digitize and preserve video recordings he made while living with indigenous groups in the Mexican state of Chiapas. The question and answer portion of this session highlighted some of the ethical issues surrounding rights and consent when providing access to intimate documentation of people’s lives. While Colmex is not yet focusing on access to this collection, it was informative to hear Baur and others talk a bit about the ongoing technical, legal, and ethical challenges of a work-in-progress collaboration.

Presenters also provided some awesome practical tools for attendees to take home with them. One of the many great open resources session leaders shared was Frances Harrell (NEDCC) and Alexandra Chassanoff (Educopia)’s DigiPET: A Community Built Guide for Digital Preservation Education + Training Google document, a living resource for compiling educational tools that you can add to using this form. Julian Morley also shared a Preservation Storage Cost Model Google sheet that contains a template with a wealth of information about estimating the cost of different digital preservation storage models, including comparisons for several cloud providers. Amy Rudersdorf (AVP), Ben Fino-Radin (Small Data Industries), and Frances Harrell (NEDCC) also discussed helpful frameworks for conducting self-assessments.
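To give a flavor of the arithmetic such a cost model supports, here is a tiny, hypothetical calculation of annual storage costs under different replication and retrieval assumptions; the function and all prices are invented placeholders, not figures or formulas from Morley's spreadsheet.

    # A back-of-the-envelope sketch of comparing preservation storage options.
    # All prices and volumes below are invented placeholders, not values from
    # the Preservation Storage Cost Model.
    def annual_cost(tb_stored, price_per_tb_month, copies=3,
                    tb_retrieved=0.0, egress_per_tb=0.0):
        """Rough yearly cost of keeping `copies` replicas, plus any retrieval fees."""
        storage = tb_stored * copies * price_per_tb_month * 12
        retrieval = tb_retrieved * egress_per_tb
        return storage + retrieval

    # 50 TB in three copies: a hypothetical cold tier versus a standard tier
    # with occasional retrieval.
    print(annual_cost(50, price_per_tb_month=4.00))
    print(annual_cost(50, price_per_tb_month=23.00, tb_retrieved=5, egress_per_tb=90.00))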

Selina Aragon, Daina Bouquin, Don Brower, and Seth Anderson discuss the challenges of software preservation.

PASIG closed out by spending some time on the challenges involved with preserving emerging and complex formats. On the last afternoon of sessions, Amelia Acker (University of Texas at Austin) spoke about the importance of preserving APIs, terms of service, and other “born-networked” formats when archiving social media. She was followed by a panel of software preservationists who discussed different use cases for preserving binaries, source code, and other software artifacts.

Conference slides are all available online.

Thanks to the wonderful work of the PASIG 2019 steering, program, and local arrangements committees!


Kelly Bolding is the Project Archivist for Americana Manuscript Collections at Princeton University Library, as well as the team leader for bloggERS! She is interested in developing workflows for processing born-digital and audiovisual materials and making archival description more accurate, ethical, and inclusive.

Announcing the Digital Processing Framework

by Erin Faulder

Development of the Digital Processing Framework began after the second annual Born Digital Archiving eXchange unconference at Stanford University in 2016. There, a group of nine archivists saw a need for standardization, best practices, or general guidelines for processing digital archival materials. What came out of this initial conversation was the Digital Processing Framework (https://hdl.handle.net/1813/57659) developed by a team of 10 digital archives practitioners: Erin Faulder, Laura Uglean Jackson, Susanne Annand, Sally DeBauche, Martin Gengenbach, Karla Irwin, Julie Musson, Shira Peltzman, Kate Tasker, and Dorothy Waugh.

An initial draft of the Digital Processing Framework was presented at the Society of American Archivists’ Annual meeting in 2017. The team received feedback from over one hundred participants who assessed whether the draft was understandable and usable. Based on that feedback, the team refined the framework into a series of 23 activities, each composed of a range of assessment, arrangement, description, and preservation tasks involved in processing digital content. For example, the activity Survey the collection includes tasks like Determine total extent of digital material and Determine estimated date range.

The Digital Processing Framework’s target audience is folks who process born-digital content in an archival setting and are looking for guidance in creating processing guidelines and making level-of-effort decisions for collections. The framework does not include recommendations for archivists looking for specific tools to help them process born-digital material. We draw on language from the OAIS reference model, so users are expected to have some familiarity with digital preservation, as well as with the management of digital collections and with processing analog material.

Processing born-digital materials is often non-linear, requires technical tools that are selected based on unique institutional contexts, and blends terminology and theories from archival and digital preservation literature. Because of these characteristics, the team first defined 23 activities involved in digital processing that could be generalized across institutions, tools, and localized terminology. These activities may be strung together in a workflow that makes sense for your particular institution. They are:

  • Survey the collection
  • Create processing plan
  • Establish physical control over removable media
  • Create checksums for transfer, preservation, and access copies
  • Determine level of description
  • Identify restricted material based on copyright/donor agreement
  • Gather metadata for description
  • Add description about electronic material to finding aid
  • Record technical metadata
  • Create SIP
  • Run virus scan
  • Organize electronic files according to intellectual arrangement
  • Address presence of duplicate content
  • Perform file format analysis
  • Identify deleted/temporary/system files
  • Manage personally identifiable information (PII) risk
  • Normalize files
  • Create AIP
  • Create DIP for access
  • Publish finding aid
  • Publish catalog record
  • Delete work copies of files

Within each activity are a number of associated tasks. For example, tasks identified as part of the Establish physical control over removable media activity include, among others, assigning a unique identifier to each piece of digital media and creating suitable housing for digital media. Taking inspiration from MPLP and extensible processing methods, the framework assigns these associated tasks to one of three processing tiers: Baseline, which we recommend as the minimum level of processing for born-digital content; Moderate, which includes tasks that may be done on collections or parts of collections considered to have higher value, risk, or access needs; and Intensive, which includes tasks that should only be done to collections that have exceptional warrant. In assigning tasks to these tiers, practitioners balance the minimum work needed to adequately preserve the content against the volume of work that could be done to support more nuanced user access. When reading the framework, know that if a task is recommended at the Baseline tier, it should also be done as part of any higher tier’s work.
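To make the cumulative-tier idea concrete, here is a small, hypothetical sketch of how an institution might encode activities, tasks, and tiers locally; the data structure, helper function, and tier placements shown are illustrative only and are not the framework's actual recommendations.

    # An illustrative encoding of framework activities, tasks, and tiers.
    # Task-to-tier assignments here are hypothetical examples, not the
    # framework's published recommendations.
    TIER_ORDER = ["Baseline", "Moderate", "Intensive"]

    ACTIVITIES = {
        "Establish physical control over removable media": {
            "Baseline": ["Assign a unique identifier to each piece of digital media"],
            "Moderate": ["Create suitable housing for digital media"],
            "Intensive": [],
        },
        # ...remaining activities would be filled in from the framework
    }

    def tasks_for(activity, tier):
        """Return tasks for a tier, including everything from the lower tiers."""
        included = TIER_ORDER[: TIER_ORDER.index(tier) + 1]
        return [task for level in included for task in ACTIVITIES[activity][level]]

    # Moderate-level work includes the Baseline tasks as well.
    print(tasks_for("Establish physical control over removable media", "Moderate"))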

We designed this framework to be a step towards a shared vocabulary of what happens as part of digital processing and a recommendation of practice, not a mandate. We encourage archivists to explore the framework and use it however it fits in their institution. This may mean re-defining what tasks fall into which tier(s), adding or removing activities and tasks, or stringing tasks into a defined workflow based on tier or common practice. Further, we encourage the professional community to build upon it in practical and creative ways.


Erin Faulder is the Digital Archivist at Cornell University Library’s Division of Rare and Manuscript Collections. She provides oversight and management of the division’s digital collections. She develops and documents workflows for accessioning, arranging and describing, and providing access to born-digital archival collections. She oversees the digitization of analog collection material. In collaboration with colleagues, Erin develops and refines the digital preservation and access ecosystem at Cornell University Library.

Call for Contributions: Making Tech Skills a Strategic Priority

As a follow-up to our popular Script It! Series — which attempted to break down barriers and demystify scripting with walkthroughs of simple scripts — we’re interested in learning more about how archival institutions (as such) encourage their archivists to develop and promote their technical literacy more generally. As Trevor Owens notes in his forthcoming book, The Theory and Craft of Digital Preservation, “the scale and inherent structures of digital information suggest working more with a shovel than with a tweezers.” Encouraging archivists to develop and promote their technical literacy is one such way to use a metaphorical shovel!

Maybe you work for an institution that explicitly encourages its employees to learn new technical skills. Maybe your team or institution has made technical literacy a strategic priority. Maybe you’ve formed a collaborative study group with your peers to learn a programming language. Whatever the case, we want to hear about it!

Writing for bloggERS! “Making Tech Skills a Strategic Priority” Series

  • We encourage visual representations: Posts can include or largely consist of comics, flowcharts, a series of memes, etc!
  • Written content should be roughly 600-800 words in length
  • Write posts for a wide audience: anyone who stewards, studies, or has an interest in digital archives and electronic records, both within and beyond SAA
  • Align with other editorial guidelines as outlined in the bloggERS! guidelines for writers.

Posts for this series will start in late November or December, so let us know if you are interested in contributing by sending an email to ers.mailer.blog@gmail.com!

Electronic Records at SAA 2018

With just weeks to go before the 2018 SAA Annual Meeting hits the capital, here’s a round-up of the sessions that might interest ERS members in particular. This year’s schedule offers plenty for archivists who deal with the digital, tackling current gnarly issues around transparency and access, and format-specific challenges like Web archiving and social media. Other sessions anticipate the opportunities and questions posed by new technologies: blockchain, artificial intelligence, and machine learning.

And of course, be sure to mark your calendars for the ERS annual meeting! This year’s agenda includes lightning talks from representatives from the IMLS-funded OSSArcFlow and Collections as Data projects, and the DLF Levels of Access research group. There will also be a mini-unconference session focused on problem-solving current challenges associated with the stewardship of electronic records. If you would like to propose an unconference topic or facilitate a breakout group, sign up here.

Wednesday, August 15

2:30-3:45

Electronic Records Section annual business meeting (https://archives2018.sched.com/event/ESmz/electronic-records-section)

Thursday, August 16

10:30-11:45

105 – Opening the Black Box: Transparency and Complexity in Digital Preservation (https://archives2018.sched.com/event/ESlh)

12:00-1:15

Open Forums: Preservation of Electronic Government Information (PEGI) Project (https://archives2018.sched.com/event/ETNi/open-forums-preservation-of-electronic-government-information-pegi-project)

Open Forums: Safe Search Among Sensitive Content: Investigating Archivist and Donor Conceptions of Privacy, Secrecy, and Access (https://archives2018.sched.com/event/ETNh/open-forums-safe-search-among-sensitive-content-investigating-archivist-and-donor-conceptions-of-privacy-secrecy-and-access)

1:30-2:30

201 – Email Archiving Comes of Age (https://archives2018.sched.com/event/ESlo/201-email-archiving-comes-of-age)

204 – Scheduling the Ephemeral: Creating and Implementing Records Management Policy for Social Media (https://archives2018.sched.com/event/ESls/204-scheduling-the-ephemeral-creating-and-implementing-records-management-policy-for-social-media)

Friday, August 17

2:00 – 3:00

501 – The National Archives Aims for Digital Future: Discuss NARA Strategic Plan and Future of Archives with NARA Leaders (https://archives2018.sched.com/event/ESmP/501-the-national-archives-aims-for-digital-future-discuss-nara-strategic-plan-and-future-of-archives-with-nara-leaders)

502 – This is not Skynet (yet): Why Archivists should care about Artificial Intelligence and Machine Learning (https://archives2018.sched.com/event/ESmQ/502-this-is-not-skynet-yet-why-archivists-should-care-about-artificial-intelligence-and-machine-learning)

504 – Equal Opportunities: Physical and Digital Accessibility of Archival Collections (https://archives2018.sched.com/event/ESmS/504-equal-opportunities-physical-and-digital-accessibility-of-archival-collections)

508 – Computing Against the Grain: Capturing and Appraising Underrepresented Histories of Computing (https://archives2018.sched.com/event/ESmW/508-computing-against-the-grain-capturing-and-appraising-underrepresented-histories-of-computing)

Saturday, August 18

8:30 – 10:15

605 – Taming the Web: Perspectives on the Transparent Management and Appraisal of Web Archives (https://archives2018.sched.com/event/ESme/605-taming-the-web-perspectives-on-the-transparent-management-and-appraisal-of-web-archives)

606 – Let’s Be Clear: Transparency and Access to Complex Digital Objects (https://archives2018.sched.com/event/ESmf/606-lets-be-clear-transparency-and-access-to-complex-digital-objects)

10:30 – 11:30

704 – Blockchain: What Is It and Why Should You Care (https://archives2018.sched.com/event/ESmo/704-blockchain-what-is-it-and-why-should-we-care)

 

Building Community for Archivematica

By Shira Peltzman, Nick Krabbenhoeft and Max Eckard


In March of 2018, the Archivematica User Forum held the first in an ongoing series of bi-monthly calls for active Archivematica users or stakeholders. Archivematica users (40 total!) from the United States, Canada, and the United Kingdom came together to share project updates and ongoing challenges and begin to work with their peers to identify and define community solutions.

Purpose

The Archivematica user community is large (and growing!), but formal communication channels between Archivematica users are limited. While the Archivematica Google Group is extremely valuable, it has some drawbacks. Artefactual prioritizes their paid support and training services there, and posts seem to focus primarily on announcing new releases or resolving errors. This sets an expectation that communication flows there from Artefactual to Archivematica users, rather than between Archivematica users. Likewise, Archivematica Camps are an exciting development, but at the moment these occur relatively infrequently and require participants to travel. As a result, it can be hard for Archivematica users to find partners and share work.

Enter the Archivematica User Forum. We hope these calls will fill this peer-to-peer communication void! Our goal is to create a space for discussion that will enable practitioners to connect with one another and identify common denominators, issues, and roadblocks that affect users across different organizations. In short, we are hoping that these calls will provide a broader and more dynamic forum for user engagement and support, and ultimately foster a more cohesive and robust user community.

Genesis

The User Forum is not the first group created to connect Archivematica users. Several regional groups already exist; the Texas Archivematica Users Group and the UK Archivematica Users Group (blog of their latest meeting) are amazing communities that meet regularly. But sometimes the people trying to adapt, customize, and improve Archivematica in the same ways you are live in a different time zone.

That situation inspired the creation of this group. After realizing how often relationships would form because someone knew someone who knew someone doing something similar, creating a national forum where everyone had the chance to meet everyone else seemed like the natural choice.

Scope

It takes a lot to build a new community, so we have tried to keep the commitment light. To start with, the forum meets every two months. Second, it’s open to anyone using Archivematica who can make the call, held at 9AM on the West Coast and 12PM on the East Coast. That includes archivists, technologists, developers, and any other experts actively using or experimenting with Archivematica.

Third, we have some in-scope and out-of-scope topics. In scope is anything that helps us continue to improve our use of Archivematica: project announcements, bug tracking/diagnosis, desired features, recurring problems or concerns, documentation, checking in on Archivematica implementations, and identifying other users that make use of the same features. Out of scope are topics about getting started with digital preservation or Archivematica. Those are incredibly important topics, but too big a commitment for this group.

Finally, we don’t have any official relationship with Artefactual Systems. We want to develop a user-led community that can identify areas for improvement and contribute to the long-term development of Archivematica. Part of that development is finding our voice as a community.

Current Activity

As of this blog post, the Archivematica Users Forum is two calls in. We’ve discussed project announcements, bug tracking/diagnosis, recurring problems or concerns, desired features (including this Features Request spreadsheet), local customizations and identifying other users that make use of the same features.

We spent a good deal of time during our first meeting, on March 1, 2018, gathering and ranking topics that participants wanted to discuss during these calls, and we intend to cover them in future calls. These topics, in order of interest, include:

Topic | Up-votes
Processing large AIPs (size and number of files) | 12
Discussing reporting features, workflows, and code | 10
How ingest is being tracked and QA’ed, both within and without Archivematica | 9
Automation tools – how are people using them, issues folks are running into, etc. | 7
How to manage multi-user installations and pipelines | 7
Types of pipelines/workflows | 7
Having more granularity in turning micro-services on and off | 6
Troubleshooting the AIC functionality | 3
What other types of systems people are using with Archivematica – DPN, etc. | 3
Are people doing development work outside of Artefactual contracts? | 2
How to add new micro-services | 2
How to customize the FPR, how to manage and migrate customizations | 2
How system architectures impact the throughput of Archivematica (large files, large numbers of files, backup schedules) | 1

As you can see, there’s no shortage of potential topics! During that meeting, participants shared a number of development announcements:

  • Dataverse integration as a data source (Scholars Portal);
  • DIP creator for software/complex digital objects via Automation Tools (CCA);
  • reporting – development project to report on file format info via API queries (UCLA/NYPL);
  • turning off indexing to increase pipeline speed (Columbia);
  • micro-service added to post identifier to ArchivesSpace (UH); and
  • micro-service added to write README file to AIP (Denver Art Museum).

During our second meeting on May 3, 2018, we discussed types of pipelines/workflows, as well as how folks decided to adopt another pipeline versus having multiple processing configurations or Storage Service locations. We heard from a number of institutions:

  • NYPL: Uses multiple pipelines – one is for disk images exclusively (they save all disk images even if they don’t end up in the finding aid) and the other is for packages of files associated to finding aid components. They are considering a third pipeline for born-digital video material. Their decision point on adopting a new pipeline is whether different workflows might require different format policies, and therefore different FPRs.
  • RAC: Uses multiple pipelines for digitization, AV, and born-digital archival transfers. Their decision point is based on amount of processing power required for different types of material.
  • Bentley: Uses one pipeline where processing archivists arrange and describe. They are considering a new pipeline with a more streamlined approach to packaging, and are curious when multiple configurations in a single pipeline is warranted versus creating multiple pipelines.
  • Kansas State: Uses two pipelines – one for digitization (images and text) and a second pipeline for special collections material (requires processing).
  • University of Houston: Uses two pipelines – one pipeline for digitization and a second pipeline for born-digital special collections.
  • UT San Antonio: Uses multiple configurations instead of multiple pipelines.

During that call, we also began to discuss the topic of how people deal with large transfers (size or number of files).

Next Call and Future Plans!

We hope you will consider joining us during our next call on July 5, 2018 at 12pm EDT / 9am PDT or at future bi-monthly calls, which are held on the first Thursday of every other month. Call in details are below!

Join from PC, Mac, Linux, iOS or Android:
https://ucla.zoom.us/j/854186191

  • iPhone one-tap (US): +16699006833,854186191# or +16465588656,854186191#
  • Telephone (US): +1 669 900 6833 or +1 646 558 8656
  • Meeting ID: 854 186 191

International numbers available: https://ucla.zoom.us/zoomconference?m=EYLpz4l8KdqWrLdoSAbf5AVRwxXt7OHo


Shira Peltzman is the Digital Archivist at the University of California, Los Angeles Library.

Nick Krabbenhoeft is the Head of Digital Preservation at the New York Public Library.

Max Eckard is the Lead Archivist for Digital Initiatives at the Bentley Historical Library.