SAA 2019 recap | Session 504: Building Community History Web Archives: Lessons Learned from the Community Webs Program

by Steven Gentry


Introduction

Session 504 focused on the Community Webs program and the experiences of archivists who worked at either the Schomburg Center for Research in Black Culture or the Grand Rapids Public Library. The panelists were Sylvie Rollason-Cass (Web Archivist, Internet Archive), Makiba Foster (Manager, African American Research Library and Cultural Center; formerly Assistant Chief Librarian at the Schomburg Center for Research in Black Culture), and Julie Tabberer (Head of Grand Rapids History & Special Collections).

Note: The content of this recap has been paraphrased from the panelists’ presentations and all quoted content is drawn directly from the panelists’ presentations.

Session summary

Sylvie Rollason-Cass began with an overview of web archiving and web archives, including:

  • The definition of web archiving.
  • The major components of web archives, including relevant capture tools (e.g. web crawlers, such as Wget or Heritrix) and playback software (e.g. Webrecorder Player).
  • The ARC and WARC web archive file formats. 
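To make the WARC format mentioned above concrete, here is a minimal Python sketch that wraps an already-captured HTTP payload in a single WARC/1.0 "response" record. The URL and payload are invented for illustration; real capture tools such as Heritrix or Wget handle the fetching, deduplication, and the many other record types a production archive needs.

```python
import uuid
from datetime import datetime, timezone

def warc_response_record(target_uri: str, http_payload: bytes) -> bytes:
    """Build a minimal WARC/1.0 'response' record around a captured HTTP payload."""
    headers = (
        "WARC/1.0\r\n"
        "WARC-Type: response\r\n"
        f"WARC-Target-URI: {target_uri}\r\n"
        f"WARC-Date: {datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')}\r\n"
        f"WARC-Record-ID: <urn:uuid:{uuid.uuid4()}>\r\n"
        "Content-Type: application/http; msgtype=response\r\n"
        f"Content-Length: {len(http_payload)}\r\n"
        "\r\n"
    ).encode("utf-8")
    # A record is its headers, the payload, and a blank-line terminator.
    return headers + http_payload + b"\r\n\r\n"

# Hypothetical captured response (in practice this comes from a crawler):
payload = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>hello</html>"
record = warc_response_record("http://example.org/", payload)
```

Playback software such as Webrecorder Player reads files made of many such records and re-serves the stored responses.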

Rollason-Cass then noted both the necessity of web archiving—especially given the web’s ephemeral nature—and the fact that many organizations archiving web content are higher education institutions. The Community Webs program was therefore designed to get more public libraries involved in web archiving, which is critical given that these institutions often collect unique local and/or regional material.

After a brief description of the issues facing public libraries and public library archives—such as a lack of relevant case studies—Rollason-Cass provided information about the institutions that joined the program, the resources provided by the Internet Archive as part of the program (e.g. a multi-year subscription to Archive-It), and the project’s results, including:

  • The creation of more than 200 diverse web archives (see the Remembering 1 October web archive for one example).
  • Institutions’ creation of collection development policies pertaining specifically to web archives, in addition to other local resources.
  • The production of an online course entitled “Web Archiving for Public Libraries.” 
  • The creation of the Community Webs program website.

Rollason-Cass concluded by noting that although some issues—such as resource limitations—may continue to limit public libraries’ involvement in web archiving, the Community Webs program has greatly increased other institutions’ ability to archive web content confidently. 

Makiba Foster then addressed her experiences as a Community Webs program member. After a brief description of the Schomburg Center, its mission, and its unique position as a place where “collections, community, and current events converge,” Foster highlighted her specific reasons for becoming more engaged with web archiving:

  • Like many other institutions, the Schomburg Center has long collected clippings files—and web archiving would allow this practice to continue.
  • Materials that document the experiences of the Black community are prominent on the World Wide Web.
  • Marginalized community members often publish content on the Web.

Foster then described the #HashtagSyllabusMovement collection, a web archive of educational material “related to publicly produced and crowd-sourced content highlighting race, police violence, and other social justice issues within the Black community.” Foster had known this content could be lost, so—even before participating in the Community Webs program—she began collecting URLs. Upon joining the program, Foster used Archive-It to archive various relevant materials (e.g. Google docs and blog posts) dating from 2014 to the present. Although some content was lost, the #HashtagSyllabusMovement collection continues to grow—Foster hopes it will eventually include international educational content—and demonstrates the value of web archiving. 

In her conclusion, Foster addressed various successes, challenges, and future endeavors:

  • Challenges:
    • Learning web archiving technology and having confidence in one’s decisions.
    • Curating content for the Center’s five divisions.
    • “Getting institutional support.”
  • Future Directions:
    • A new digital archivist will work with each division to collect and advocate for web archives.
    • Considering how to both do outreach for and catalog web archives.
    • Ideally, working alongside community groups to help them implement web archiving practices.

The final speaker, Julie Tabberer, addressed the value of public libraries’ involvement in web archives. After a brief overview of the Grand Rapids Public Library, the necessity of archives, and the importance of public libraries’ unique collecting efforts, Tabberer posited the following question: “Does it matter if public libraries are doing web archiving?” 

To test her hypothesis that “public libraries document mostly community web content [unlike academic archives],” Tabberer analyzed the seed URLs of fifty academic and public libraries to answer two specific questions:

  • “Is the institution crawling their own website?”
  • “What type of content [e.g. domain types] is being crawled [by each institution]?”

After acknowledging some caveats with her sampling and analysis—such as the fact that data analysis was still ongoing and that only Archive-It websites were examined—Tabberer showed audience members several graphics revealing that academic libraries (1) crawled their own websites more often than public libraries did and (2) captured more academic websites than public libraries did.
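Tabberer did not present code, but a seed-URL analysis of the kind she described could be sketched along the following lines. The URLs and domain buckets here are illustrative assumptions, not her actual dataset or method.

```python
from collections import Counter
from urllib.parse import urlparse

def classify_seed(url: str) -> str:
    """Bucket a seed URL by its top-level domain, a rough proxy for content type."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1]
    return tld if tld in {"edu", "gov", "org", "com"} else "other"

# Hypothetical seed list for one institution's Archive-It collections:
seeds = [
    "https://www.example.edu/news",
    "https://cityhall.example.gov/minutes",
    "http://neighborhood-blog.example.com/",
]
counts = Counter(classify_seed(s) for s in seeds)
```

Comparing such counts across institutions—and checking whether an institution’s own domain appears among its seeds—would answer both of the questions above.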

Tabberer then concluded with several questions and arguments for the audience to consider:

  • In addition to encouraging more public libraries to archive web content—especially given their values of access and social justice—what other information institutions are underrepresented in this community?
  • Are librarians and archivists really collecting content that represents the community?
  • Even though resource limitations are problematic, academic institutions must expand their web archiving efforts.

Steven Gentry currently serves as a Project Archivist at the Bentley Historical Library. His responsibilities include assisting with accessioning efforts, processing complex collections, and building various finding aids. He previously worked at St. Mary’s College of Maryland, Tufts University, and the University of Baltimore.

Managing Our Web-Based Content at the University of Minnesota

By Valerie Collins

____

This is the fourth post in the bloggERS series #digitalarchivesfail: A Celebration of Failure.

The University of Minnesota Archives manages the web archiving program for the Twin Cities campus. We use Archive-It to capture the bulk of our online content, but as we have discovered, managing subsets of our web content and bringing it into our collections has its unique challenges and requires creative approaches. We increasingly face requests to provide a permanent, accessible home for files that would otherwise be difficult to locate in a large archived website. Some content, like newsletters, is created in HTML and is not well-suited for upload into the institutional repository (IR) we use to handle most of our digital content. Our success in managing web content that is created for the web (as opposed to uploaded and linked PDF files, for example) has been mixed.

In 2016, a department informed us that one of their web domains was going to be cleared of its current content and redirected. Since that website contained six years of University Relations press releases, available solely in HTML format, we were pretty keen on retrieving that content before it disappeared from the live web.

The department also wanted these releases saved, so they downloaded the contents of the website for us, converted each release into a PDF, and emailed them to us before that content was removed. Although we did have crawls of the press releases through Archive-It, we intended to use our institutional repository, the University Digital Conservancy (UDC), to preserve and provide access to the PDF files derived from the website.

So, when faced with the 2,920 files included in the transfer, labeled in no particularly helpful way, in non-chronological order, and with extraneous files included, I rolled up my sleeves and got to work. With the application of some helpful programs and a little more spreadsheet data entry than I would like to admit to, I ended up with some 2,000 articles renamed in chronological order. I grouped and combined the files by year, which was in keeping with the way we have previously provided access to press releases available in the UDC.
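A renaming pass like the one just described could be sketched as follows. The filenames and dates are invented for illustration; in practice the release dates came from the files themselves and from spreadsheet work, not a tidy mapping.

```python
from pathlib import Path

# Hypothetical original filename -> release date (ISO format).
dates = {
    "release_final2.pdf": "2012-03-14",
    "UR-news-copy.pdf": "2010-07-02",
}

# An ISO date prefix (YYYY-MM-DD) makes an alphabetical sort chronological,
# which also makes grouping and combining by year straightforward.
plan = {old: f"{date}_{old}" for old, date in dates.items()}

# The actual renaming step, commented out to keep this sketch side-effect free:
# for old, new in plan.items():
#     Path(old).rename(Path(new))
```

Sorting the new names then yields the chronological order needed for the year-by-year combined files.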

All that was left was to OCR and upload, right?

And everything screeched to a halt. Because of the way the files had been downloaded and converted, every page of every file contained renderable text from the original stylesheet hidden within an additional layer that prevented OCR’ing with our available tools, and we were unable to invest more time to find an acceptable solution.

[Image: attempted extracted text]

Thus, these news releases now sit in the UDC as six 1,000-page documents that cannot be full-text searched but are, mercifully, in chronological order. The irony of having our born-digital materials return to the same limitations that plagued our analogue press releases, prior to the adoption of the UDC, has not been lost on us.

But this failure shines a light on the sometimes murky boundaries between archiving the web and managing web content in our archive. I have a website sitting on my desk, burned to a CD. The site is gone from the live web, and Archive-It never crawled it. We have a complete download of an intranet site sitting on our network drive—again, Archive-It never crawled that site. We handle increasing amounts of web content that never made it into Archive-It. But, using our IR to handle these documents is imperfect, too, and can require significant hands-on work when the content has to be stripped out of its original context (the website), and manipulated to meet the preservation requirements of that IR (file format, OCR).

Cross-pollination between our IR and our web archive is inevitable when they are both capturing the born-digital content of the University of Minnesota. Assisting departments with archiving their websites and web-based materials usually involves using a combination of the two, but raises questions of scalability. But even in our failure to bring those press releases all the way to the finish line, we were able to get pretty close using the tools we had available to us and were able to make the files available, and frankly, that’s an almost-success I can live with.

And, while we were running around with those press releases, another department posted a web-based multimedia annual report only to ask later whether it could be uploaded to the IR, with their previous annual reports. Onward!

____

Valerie Collins is a Digital Repositories & Records Archivist at the University of Minnesota Archives.