Modeling archival problems in Computational Archival Science (CAS)

By Dr. Maria Esteva

____

It was Richard Marciano who, almost two years ago, convened a small multi-disciplinary group of researchers and professionals with experience using computational methods to solve archival problems, and encouraged us to define the work that we do under the label of Computational Archival Science (CAS). The exercise proved very useful not only for communicating the concept to others, but also for articulating how we think when we use computational methods in our work. We introduced and refined the definition with a broader group of colleagues at the Finding New Knowledge: Archival Records in the Age of Big Data Symposium in April 2016.

I would like to bring more archivists into the conversation by explaining how I combine archival and computational thinking. But first, three notes to frame my approach to CAS: a) I learned to do this progressively over the course of many projects, b) I took graduate data analysis courses, and c) it takes a village. I started using data mining methods out of necessity and curiosity, frustrated with the practical limitations of manual methods for addressing electronic records. I had entered the field of archives because its theories, and the problems they address, are attractive to me, and when I started taking data analysis courses and developing my work, I saw how computational methods could help hypothesize and test archival theories. Coursework in data mining was key to learning methods that I initially understood as “statistics on steroids.” Now I can systematize the process, map it to different problems and inquiries, and suggest the methods that can be used to address them. Finally, my role as a CAS archivist is shaped by my ongoing collaboration with computer scientists and with domain scientists.

In a nutshell, the CAS process goes like this: we first define the problem at hand and identify the key archival issues within it. On this basis we develop a model, which is an abstraction of the system that we are concerned with. The model can be a methodology or a workflow, and it may include policies, benchmarks, and deliverables. Then an algorithm, a set of steps carried out within a software and hardware environment, is designed to automate the model and solve the problem.

A project in which I collaborate with Dr. Weijia Xu, a computer scientist at the Texas Advanced Computing Center, and Dr. Scott Brandenberg, an engineering professor at UCLA, illustrates a CAS case. To publish and archive large amounts of complex data from natural hazards engineering experiments, researchers would need to enter significant amounts of metadata manually, which has proven impractical and inconsistent. Instead, they need automated methods to organize and describe their data, which may consist of reports, plans and drawings, data files, and images, among other document types. The archival challenge is to design such a method so that the scientific record of the experiments is accurately represented. For this, the model has to convey the dataset’s provenance and capture the right type of metadata. To build the model we asked the domain scientist to draw out the steps of a typical experiment and to provide terms that characterize its conditions, tools, materials, and resultant data. Using this information we created a data model: a network of classes that represent the experiment process, together with metadata terms describing it. The figures below show the workflow and corresponding data model for centrifuge experiments.

Figure 1. Workflow of a centrifuge experiment by Dr. Scott Brandenberg

 

Figure 2. Networked data model of the centrifuge experiment process by the archivist
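
To make the networked data model more concrete, the minimal Python sketch below shows one way such a model could be represented: process classes, the metadata terms that describe each class, and edges capturing the order of steps. The class names, terms, and edges are illustrative placeholders, not the actual centrifuge vocabulary.

```python
from dataclasses import dataclass, field


@dataclass
class ProcessClass:
    """A step in the experiment process and the metadata terms that describe it."""
    name: str
    terms: set = field(default_factory=set)


# Illustrative classes for a centrifuge experiment (placeholder vocabulary)
data_model = {
    "model_construction": ProcessClass("Model Construction", {"soil", "container", "compaction"}),
    "instrumentation": ProcessClass("Instrumentation", {"accelerometer", "pore pressure", "sensor"}),
    "spin_test": ProcessClass("Spin Test", {"centrifuge", "g-level", "shaking"}),
    "data_processing": ProcessClass("Data Processing", {"time series", "filtering", "plot"}),
}

# Edges capture the order of the steps, i.e., the provenance of the resulting data
workflow_edges = [
    ("model_construction", "instrumentation"),
    ("instrumentation", "spin_test"),
    ("spin_test", "data_processing"),
]
```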

Next, Dr. Weijia Xu created an algorithm that combines text mining methods to: a) identify the terms from the model that are present in data belonging to an experiment, b) extend the terms in the model to related ones present in the data, and c) based on the presence of all the terms, predict the classes to which the data belongs. Using this method, a dataset can be organized around classes/processes and steps, and the corresponding metadata terms describe those classes.
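
Building on the placeholder model above, the sketch below illustrates steps (a) and (c) in a drastically simplified form. The substring matching and term-count scoring are assumptions made only for demonstration; the actual algorithm uses text mining techniques beyond this illustration, and step (b), extending the vocabulary with related terms found in the data, is omitted.

```python
def find_model_terms(text, data_model):
    """Step (a): identify which of the model's terms appear in a document."""
    lowered = text.lower()
    return {key: {term for term in cls.terms if term in lowered}
            for key, cls in data_model.items()}


def predict_class(text, data_model):
    """Step (c): assign the document to the class whose terms it matches most often."""
    matches = find_model_terms(text, data_model)
    return max(matches, key=lambda key: len(matches[key]))


# Example: a sensor report is assigned to the "instrumentation" class
report = "Accelerometer and pore pressure sensor readings recorded during placement."
print(predict_class(report, data_model))  # -> instrumentation
```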

In a CAS project, the archivist defines the problem and gathers the requirements that will shape the deliverables. He or she collaborates with the domain scientists to model the “problem” system, and with the computer scientist to design the algorithm. An interesting aspect is how the method is evaluated by all team members using data-driven and qualitative methods. Using the data model as the ground truth, we assess whether data is correctly assigned to classes and whether the metadata terms correctly describe the content of the data files. At the same time, as new terms are found in the dataset and the data model is refined, the domain scientist and the archivist review the accuracy of the resulting representation and the generalizability of the solution.
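
As a rough illustration of the data-driven side of that evaluation, predicted class assignments can be compared against labels supplied by the domain scientist. The file names and labels below are hypothetical.

```python
# Hypothetical ground-truth labels supplied by the domain scientist
ground_truth = {
    "test_plan.pdf": "model_construction",
    "sensor_layout.png": "instrumentation",
    "run01_accel.csv": "spin_test",
}


def accuracy(predictions, truth):
    """Fraction of files whose predicted class matches the ground-truth label."""
    correct = sum(1 for name, label in truth.items() if predictions.get(name) == label)
    return correct / len(truth)
```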

I look forward to hearing reactions to this work and about research perspectives and experiences from others in this space.

____
Dr. Maria Esteva is a researcher and data archivist/curator at the Texas Advanced Computing Center at the University of Texas at Austin. She conducts research on, and implements, large-scale archival processing and data curation systems, using High Performance Computing infrastructure resources as a backdrop. Her email is: maria@tacc.utexas.edu

 

Building a “Computational Archival Science” Community

By Richard Marciano

———

When the bloggERS! series started at the beginning of 2015, some of the very first posts featured work on “computer generated archival description” and “big data and big challenges for archives,” so it seems appropriate to revisit this theme of automation and management of records at scale and provide an update on a recent symposium and several upcoming events.

Richard Marciano co-hosted the recent “Archival Records in the Age of Big Data” symposium. For more information about the Symposium, visit http://dcicblog.umd.edu/cas/. The three-day program is listed online and has links to all the videos and slides. A list of participants can also be found at http://dcicblog.umd.edu/cas/attendees. The objectives of the Symposium were to:

  • address the challenges of big data for digital curation,
  • explore the conjunction of emerging digital methods and technologies,
  • identify and evaluate current trends,
  • determine possible research agendas, and
  • establish a community of practice.

Richard Marciano and Bill Underwood will be further exploring these themes at SAA in Atlanta on Friday, August 5, 9:30am – 10:45am, session 311, for those ERS aficionados interested in contributing to this emerging conversation. See: https://archives2016.sched.org/event/7f9D/311-archival-records-in-the-age-of-big-data

On April 26-28, 2016 the Digital Curation Innovation Center (DCIC) at the University of Maryland’s College of Information Studies (iSchool) convened a Symposium in collaboration with King’s College London. This invitation-only symposium, entitled Finding New Knowledge: Archival Records in the Age of Big Data, featured 52 participants from the UK, Canada, South Africa and the U.S. Among the participants were researchers, students, and representatives from federal agencies, cultural institutions, and consortia.

This group of experts gathered at Maryland’s iSchool to discuss and try to define computational archival science: an interdisciplinary field concerned with the application of computational methods and resources to large-scale records/archives processing, analysis, storage, long-term preservation, and access, with the aim of improving efficiency, productivity and precision in support of appraisal, arrangement and description, preservation and access decisions, and engaging and undertaking research with archival material.

This event, co-sponsored by Richard Marciano, Mark Hedges from King’s College London, and Michael Kurtz from UMD’s iSchool, brought together thought leaders in this emerging CAS field: Maria Esteva from the Texas Advanced Computing Center (TACC), Victoria Lemieux from the University of British Columbia School of Library, Archival and Information Studies (SLAIS), and Bill Underwood from the Georgia Tech Research Institute (GTRI). There is growing interest in large-scale management, automation, and analysis of archival content, and in the realization of enhanced possibilities for scholarship through the integration of ‘computational thinking’ and ‘archival thinking.’

To capitalize on the April Symposium, a follow-up workshop entitled Computational Archival Science: Digital Records in the Age of Big Data will take place in Washington, D.C., during the second week of December 2016 at the 2016 IEEE International Conference on Big Data. For information on the upcoming workshop, please visit: http://dcicblog.umd.edu/cas/ieee_big_data_2016_cas-workshop/. Paper contributions will be accepted until October 3, 2016.

———

Richard is a professor at Maryland’s iSchool and director of the Digital Curation Innovation Center (DCIC). His research interests include digital preservation, archives and records management, computational archival science, and big data. He holds degrees in Avionics and Electrical Engineering and a Master’s and Ph.D. in Computer Science from the University of Iowa, and he completed a postdoc in Computational Geography.