Students Reflect (Part 2 of 2): Failure and Learning Tech Skills

This is the fourth post in the bloggERS Making Tech Skills a Strategic Priority series.

As part of our “Making Tech Skills a Strategic Priority” series, the bloggERS team asked five current or recent MLIS/MSIS students to reflect on how they have learned the technology skills necessary to tackle their careers after school. In this post, Anna Speth and Jane Kelly reflect thoughtfully on adapting their mindsets to embrace new challenges and learn from failure.

Anna Speth, 2017 graduate, Simmons College

I am about to celebrate a year in my first full-time position, Librarian for Emerging Technology and Digital Projects at Pepperdine University. In this role I work on digital initiatives, often in tandem with the archive, and direct our emerging technology makerspace. By choosing to center my graduate career on digital archiving, I felt well prepared for the digital initiatives piece. However, running the makerspace has been a whirlwind of grappling with the world of emerging tech.

My best piece of advice (which we’ve all heard a million times) is to maintain a “learner mindset.” I’m a traditional learner who has mastered the lecture-memorize-regurgitate academic system. This approach doesn’t do much when it comes to hands-on tech. I am faced with 3D printers, VR systems, Arduinos, Ozobots, CONTENTdm, and more with minimal instruction. I watch tutorials, but these rarely offer a path to in-depth understanding. Instead, I’ve had to overcome the mindset that I’m not a tech person and will make something worse by messing with it. If the 3D printer doesn’t work, you certainly aren’t going to make it worse by taking it apart and trying to put it back together. If you don’t know how to reorder a multipage object on the backend of CONTENTdm, create a hidden sandbox collection and start experimenting.

Remember that the internet – Google, user forums, Reddit, company reps – is your friend. Also remember (and I tell this to kids in the makerspace just as often as I tell it to myself) that failure is your friend. If you mess something up, then all you’ve done is learn more about how the system works by learning how it doesn’t work. Iteration and perseverance are key, and, as this traditional learner has realized, a whole lot of fun!

Jane Kelly, 2018 graduate, University of Illinois at Urbana-Champaign

Developing new tech skills has, at least for me, been a process of learning to fail. The intensive Introduction to Computer Science course I took several years ago was supposed to be fun – a benefit of being able to take college courses for almost nothing as a staff member on campus. It might have been fun for the first three weeks of the semester, but that was followed by a lot of agonizing, handwringing, and tears.

I now reflect on my time in that course as an intensive introduction to failure. This shift in mentality – learning how to fail, and how to accept it – has been key for me in being open to developing my tech skills on the job. I don’t worry so much about messing up, not knowing the answer, or the possibility of breaking my computer.

As a humanities student, it simply was never acceptable to me to turn in an assignment incomplete or “wrong.” In that computer science class, and in the information processing course I took at the iSchool at the University of Illinois a couple years later, an incomplete assignment could be a stellar attempt, proof of lessons learned, and an indication of where help is required. The rubric for good work is different for a computer science problem set than a history paper. It has been a valuable lesson to revisit as I try to develop my skills independently and in the workplace.

I have acquired and maintained my tech skills through a combination of computer science coursework before and during library school, an in-person SAA pre-conference session that my employer paid for, and, of course, the internet. Apps like Learn to Code with Python or free online courses can serve as an introduction to a programming language or a quick refresher, since I inevitably forget much of what I learn in class before I can put it to work at a job. Google and Stack Exchange are lifesavers, both because I can often find the answer to my question about the mysterious error code I see in the terminal window and because I can reassure myself that I’m not the first person to pose the question.

More than anything, my openness to what I once thought of as failure has been pivotal to my development. It can take a long time to learn and understand exactly what is going on under the hood with some new software or process, but that’s okay. Sometimes a fake-it-til-you-make-it mentality is exactly what’s needed to push yourself to tackle a new challenge. For me, learning tech skills is learning to be okay with failure as a learning process.


 

Anna Speth is the Librarian for Emerging Technology and Digital Projects at Pepperdine’s Payson Library, where she co-directs a makerspace and works with digital initiatives. Anna focuses on the point of connection between technology and history. She holds a BA from Duke University and an MLIS from Simmons College.

 

Jane Kelly is the Web Archiving Assistant for the #metoo Digital Media Collection at the Schlesinger Library on the History of Women in America and a 2018 graduate of the iSchool at the University of Illinois. Her interests lie at the intersection of digital archives and the people who use them.

Students Reflect (Part 1 of 2): Tech Skills In and Out of the Classroom

By London Stever, Hayley Wilson, and Adriana Cásarez

This is the third post in the bloggERS Making Tech Skills a Strategic Priority series.

As part of our “Making Tech Skills a Strategic Priority” series, the bloggERS team asked five current and recent MLIS/MSIS students to reflect on how they have learned the technology skills necessary to tackle their careers after school. One major theme, as expressed by these three writers, is the need for a balance of learning inside and outside the classroom.

London Stever, 2018 graduate, University of Pittsburgh

Approaching the six-month anniversary of my MLIS graduation, I find myself reflecting on my technological growth. Going into graduate school, I expected little technology training. Naively, I believed that most archival jobs were paper-only, except for occasional digitization projects. Imagine my surprise upon finding out that the University of Pittsburgh required an introduction to HTML. This trend continued, as the university insisted that students develop balanced knowledge.

I took technology-focused courses ranging from a history of computers (useful for those expecting to work with older hardware) to an overview of open-source library repositories and learning management systems (not to be discounted by those going into academia). The most useful of these classes was the required digital humanities course. Since graduating, I have applied the practical introduction to ArchivesSpace and Archivematica – and the in-depth explanation of discoverability, access, and web crawling – to my current work at SAE International.

However, none of the information I learned in those classes would be helpful on its own. University did not prepare me for talking to the IT department. Terminology used in archives and in IT often overlaps, but usage does not. Custom, in-house programs require troubleshooting, and university technology classes did not teach me those skills. Libraries and archives often need to work with software not specifically designed for them, but the university did not address this.

Self-directed classes, YouTube videos, and outside certifications were the most useful technology education for me. Using these, I customized my education to meet the needs employers mention and my own learning needs, which focus on the practical application I did not get in university. I understand troubleshooting, which allows me to use programs built fifteen years ago. Creating a blog or using a content services platform to increase discoverability and internal access is a breeze. In addition to the balanced digital-to-analog education of university, I also needed a balance of library and general technology education.

Hayley Wilson, current student, University of North Carolina at Chapel Hill

When registering for classes at UNC Chapel Hill prior to the fall semester of 2017, I was informed that I was required to fulfill a technology competency requirement. I had the option either to take an at-home test or to take a technology course (for no credit). I decided to take the technology course because I assumed it would be beneficial to the other classes I would be required to take as an MLS student.

As it turns out, as a library science student on the archives and records management track, I had a very strict set of courses I was required to take, with room for only two electives. None of these required courses were focused on technology or building technology skills. I have friends on the Information Science side of the program who are required to take numerous courses that have a strong focus on technology. Fortunately, while at SILS I have had numerous opportunities outside of the classroom to learn and build my technology skills through my various internships and graduate assistant positions. However, I don’t think that every student has the opportunity to do so in their jobs.

Adriana Cásarez, 2018 graduate, University of Texas at Austin

Entering my MSIS program with an interest in digital humanities, I expected my coursework would provide most of the expertise I needed to become a more tech-savvy researcher. Indeed, a survey course in digital humanities gave me an overview of digital tools and methodologies. Additionally, a more intense programming course for cultural data analysis taught me specialized coding for data analysis, machine learning and data visualization. The programming was challenging and using the command line was daunting, but I was fortunate to develop a network of motivated peers who also wanted to develop their technical aptitude.  

Sometimes, I felt I was learning just as many technical skills outside of my general coursework. The university library offered workshops on digital scholarship tools for the academic community. My technical skills and knowledge of trends in topics like text analysis, data curation, and metadata grew by attending as many as I could. The Digital Scholarship Librarian and I also organized co-working sessions for students working on digital scholarship projects. These sessions created a community of practice to share expertise, feedback, and support with others interested in developing their technical aptitude in a productive space. We discussed the successes and frustrations with our projects and with the technology that we were often independently teaching ourselves to use. These community meetups were invaluable avenues to learn from each other and further develop our technical capabilities.

With an increased focus on digital archives, libraries, and scholarship, students often feel expected to just know technical skills or to teach themselves independently. My experience in my MSIS program taught me that others are often in the same boat, experiencing similar frustrations but too embarrassed to ask for help or admit ignorance. Communities of practice are essential to creating an environment where students feel comfortable discussing obstacles and developing technical skills together.


London Stever is an archival consultant at SAE International, where she balances company culture with international and industry standards, including bridging the gap between IT and discovery partners. London graduated from the University of Pittsburgh’s MLIS – Archives program and is currently working on her CompTIA certifications. She values self-education and believes multilingualism and technological literacy are the keys to archival accessibility. Please email london.stever@outlook.com or go to londonstever.com to contact London.


Hayley Wilson is originally from San Diego but moved to New York to attend New York University. She graduated from NYU with a BA in Art History and stayed in NYC to work for a couple of years before moving abroad to work. She then moved to North Carolina for graduate school and will be graduating in May with her master’s degree in Library Science with a concentration in Archives and Records Management.

Adriana Cásarez is a recent MSIS graduate from the University of Texas at Austin. She has worked as a research assistant on a digital classics project for the Quantitative Criticism Lab. She also developed a digital collection of artistic depictions of the Aeneid using cultural heritage APIs. She aspires to work in digital scholarship and advocate for diversity and inclusivity in libraries.

Modeling archival problems in Computational Archival Science (CAS)

By Dr. Maria Esteva

____

Almost two years ago, Richard Marciano convened a small multi-disciplinary group of researchers and professionals with experience using computational methods to solve archival problems, and encouraged us to define the work that we do under the label of Computational Archival Science (CAS). The exercise proved very useful for communicating the concept to others, but also for articulating how we think when we go about using computational methods to conduct our work. We introduced and refined the definition among a broader group of colleagues at the Finding New Knowledge: Archival Records in the Age of Big Data Symposium in April of 2016.

I would like to bring more archivists into the conversation by explaining how I combine archival and computational thinking. But first, three notes to frame my approach to CAS: a) I learned to do this progressively over the course of many projects, b) I took graduate data analysis courses, and c) it takes a village. I started using data mining methods out of necessity and curiosity, frustrated with the practical limitations of manual methods for addressing electronic records. I entered the field of archives because its theories, and the problems they address, are attractive to me, and when I started taking data analysis courses and developing my work, I saw how computational methods could help hypothesize and test archival theories. Coursework in data mining was key to learning methods that I initially understood as “statistics on steroids.” Now I can systematize the process, map it to different problems and inquiries, and suggest the methods that can be used to address them. Finally, my role as a CAS archivist is shaped by my ongoing collaboration with computer scientists and with domain scientists.

In a nutshell, the CAS process goes like this: we first define the problem at hand and identify the key archival issues within it. On this basis we develop a model, which is an abstraction of the system we are concerned with. The model can be a methodology or a workflow, and it may include policies, benchmarks, and deliverables. Then an algorithm, a set of steps accomplished within a software and hardware environment, is designed to automate the model and solve the problem.

A project in which I collaborate with Dr. Weijia Xu, a computer scientist at the Texas Advanced Computing Center, and Dr. Scott Brandenberg, an engineering professor at UCLA, illustrates a CAS case. To publish and archive large amounts of complex data from natural hazards engineering experiments, researchers would need to manually enter significant amounts of metadata, which has proven impractical and inconsistent. Instead, they need automated methods to organize and describe their data, which may consist of reports, plans and drawings, data files, and images, among other document types. The archival challenge is to design such a method so that the scientific record of the experiments is accurately represented. For this, the model has to convey the dataset’s provenance and capture the right type of metadata. To build the model we asked the domain scientist to draw out the steps of a typical experiment and to provide terms that characterize its conditions, tools, materials, and resultant data. Using this information we created a data model: a network of classes that represent the experiment process, and of metadata terms that describe it. The figures below show the workflow and corresponding data model for centrifuge experiments.

Figure 1. Workflow of a centrifuge experiment by Dr. Scott Brandenberg

 

Figure 2. Networked data model of the centrifuge experiment process by the archivist
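For readers who want something concrete, a networked data model like the one above can be pictured as a small graph of process classes, each carrying its metadata terms. The sketch below is a toy illustration only; the class names, terms, and workflow edges are invented for this post and are not the project’s actual schema.

```python
# Toy sketch of a networked data model: experiment-process classes
# linked in workflow order, each carrying metadata terms that
# characterize it. All names here are hypothetical.

centrifuge_model = {
    "classes": {
        "model_construction": {"terms": ["soil", "container", "density"]},
        "instrumentation":    {"terms": ["accelerometer", "sensor", "calibration"]},
        "spin_and_shake":     {"terms": ["centrifuge", "g-level", "shaking"]},
        "data_processing":    {"terms": ["time series", "filtering", "plot"]},
    },
    # Edges capture provenance: which step feeds which.
    "edges": [
        ("model_construction", "instrumentation"),
        ("instrumentation", "spin_and_shake"),
        ("spin_and_shake", "data_processing"),
    ],
}

def downstream(model, cls):
    """Return the classes that directly follow `cls` in the workflow."""
    return [b for (a, b) in model["edges"] if a == cls]

print(downstream(centrifuge_model, "instrumentation"))  # ['spin_and_shake']
```

Even a structure this simple lets a program answer provenance questions (what step produced this data? what follows it?) that would otherwise live only in the domain scientist’s head.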

Next, Dr. Weijia Xu created an algorithm that combines text mining methods to: a) identify the terms from the model that are present in data belonging to an experiment, b) extend the terms in the model to related terms present in the data, and c) based on the presence of all the terms, predict the classes to which the data belongs. Using this method, a dataset can be organized around classes/processes and steps, and corresponding metadata terms describe those classes.
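To give a feel for steps a) and c), here is a deliberately simplified sketch of term-based class prediction: count which model terms appear in a file’s text, then assign the class with the most matches. This is not Dr. Xu’s actual algorithm, which combines several text mining methods (including step b, extending the term set); the class names and terms below are hypothetical.

```python
# Toy term-matching classifier: score each class by how many of its
# model terms occur in the document text, and predict the top scorer.
# Class names and terms are invented for illustration.

MODEL_TERMS = {
    "instrumentation": {"accelerometer", "sensor", "calibration"},
    "data_processing": {"time series", "filtering", "plot"},
}

def predict_class(text, model_terms=MODEL_TERMS):
    """Return the class whose terms best match `text`, or None."""
    text = text.lower()
    scores = {
        cls: sum(term in text for term in terms)
        for cls, terms in model_terms.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

doc = "Calibration notes for each accelerometer and sensor channel."
print(predict_class(doc))  # instrumentation
```

A real system would also normalize terms, weight them, and expand them to related vocabulary found in the data, but the core move (mapping term presence to process classes) is the same.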

In a CAS project, the archivist defines the problem and gathers the requirements that will shape the deliverables. He or she collaborates with the domain scientists to model the “problem” system, and with the computer scientists to design the algorithm. An interesting aspect is how the method is evaluated by all team members using data-driven and qualitative methods. Using the data model as the ground truth, we assess whether data gets correctly assigned to classes and whether the metadata terms correctly describe the content of the data files. At the same time, as new terms are found in the dataset and the data model is refined, the domain scientist and the archivist review the accuracy of the resulting representation and the generalizability of the solution.

I look forward to hearing reactions to this work and about research perspectives and experiences from others in this space.

____
Dr. Maria Esteva is a researcher and data archivist/curator at the Texas Advanced Computing Center at the University of Texas at Austin. She conducts research on, and implements, large-scale archival processing and data curation systems using High Performance Computing infrastructure. Her email is maria@tacc.utexas.edu.

 

#snaprt chat Flashback: Archivist and Technologist Collaboration

By Ariadne Rehbein

This is a cross post in coordination with the SAA Students and New Archives Professionals Roundtable.

The spirit of community at the 2016 Code4Lib Conference in Philadelphia (March 7-10) served as inspiration for a recent SAA Students and New Archives Professionals Roundtable #snaprt Twitter chat. The conference was an exciting opportunity for archivists and librarians to learn about digital tools and projects that are free to use and open for further development, discuss needs for different technology solutions, gain a deeper understanding of technology work, and engage with larger cultural and technical issues within libraries and archives. SNAP’s Senior Social Media Coordinator hosted the chat on March 15, focusing the discussion on collaboration between archivists and technologists.

Many of the chat questions were influenced by discussions in the Code4Archives preconference workshop breakout group, “Whose job is that? Sharing how your team breaks down archives ‘tech’ work.” On the last day of the conference, SNAP invited participants through different Code4Lib and Society of American Archivists channels, such as the conference hashtag (#c4l16), the Code4Lib listserv, various SAA listservs, and the SNAP Facebook and Twitter accounts. All were invited to share suggestions or discussion questions for the chat. Participants included archives students and professionals with varying years of experience and focuses, such as digital curation, special collections, university archives, and government archives. Our chat questions were:

  • How do the expertise and knowledge of archivists and technologists who work together often overlap or differ? How much is important to understand of one another’s work? What are some ways to increase this knowledge?
  • What are some examples of technologies that archives currently use? What is their goal/ what are they used to do?
  • Who created and maintains these tools? Why might an archive choose one tool over another?
  • What kinds of tools and tech skills have new archivists learned post-LIS? What is this learning process like?
  • What are some examples of tasks or projects in an archival setting where the expertise of technologists is essential or extremely helpful? Please share any tips from these experiences.
  • Do you know of any blogs/posts that are helpful for born digital preservation / AV preservation / digitized content workflow?

Several different themes emerged in the chat:

  • The importance of an environment that supports relationships between those of different backgrounds and skills. Participants suggested developing a shared vocabulary to clearly convey information and providing casual opportunities to meet.
  • The decision to implement a technology solution to serve a need may involve a variety of considerations, such as level of institutional priority, cost, availability of technology professionals to manage or build the system, security, and applicability to other needs.
  • Participants suggested that students gain skills with a variety of different technologies, including relational databases, command line basics, Photoshop, VirtualBox, BitCurator, and programming (through online tutorials). The ability and willingness to learn on the job and teach others is important too! These are useful tools and may also help build a shared vocabulary.
  • Participants had engaged in a number of collaborative tasks or projects, such as performing digital forensics, building DIY History at the University of Iowa, implementing systems such as Preservica, and determining digital preservation storage solutions.
  • Some great resources are available for born-digital, digitized, and audiovisual preservation, including AV Preserve, the Digital Curation Google Group, the Bitcurator Consortium, The Signal blog, Chris Prom’s Practical E-Records, the Code4Lib listserv, Digital Preservation News, and National Digital Stewardship Residency blog posts.

Please visit Storify to read the full chat:

Storify of #snaprt chat about archivists and technologists

Many thanks to Wendy Hagenmaier of the ERS Steering Committee for inviting SNAP to share this post. #snaprt Twitter chats typically take place 3 times per month, on or around the 5th, 15th, and 25th at 8 PM ET. Participation is open to anyone interested in issues relevant to MLIS students and new archives professionals. To learn more about the chats, please visit our webpage.

Ariadne Rehbein strives to support students and new archives professionals as SNAP Roundtable’s Senior Social Media Coordinator. As Digital Asset Coordinator at the Arizona State University Libraries, she focuses on processing and stewardship of digital special collections and providing expertise on issues related to digital forensics, asset management workflows, and policies in accordance with community standards and best practices. She is a proud graduate of the Department of Information and Library Science at Indiana University Bloomington.

Retention of Technology-Based Interactives

The Cleveland Museum of Art’s Gallery One blends art, technology, and interpretation.  It includes real works from the museum’s collection as well as interactive, technology-based activities and games.  For example, Global Influences presents visitors with an artwork and asks them to guess which two countries on the map influenced the work in question; and crowd favorite Strike a Pose asks visitors to imitate the pose of a sculpture and invites them to save and share the resulting photograph.

It’s really cool stuff.  But as the museum plans a refresh of the space, the archives and IT department are starting to contemplate how to preserve the history of Gallery One.  The interactives will have to go, monitors and other hardware will be repurposed, and new artwork and interactive experiences will be installed.  We need to decide what to retain in archives and figure out how to collect and preserve whatever we decide to keep.

These pending decisions bring up familiar archival questions and ask us to apply them to complex digital materials: what about this gallery installation has enduring value?  Is it enough to retain a record of the look and feel of the space, perhaps create videos of the interactives?  Is it necessary to retain and preserve all of the code?

Records retention schedules call for the permanent retention of gallery labels, exhibition photographs, and other exhibition records but do not specifically address technology-based interactives.  The museum is developing an institutional repository for digital preservation using Fedora, but we are still in the testing phases for relatively simple image collections and we aren’t ready to ingest complex materials like the interactives from Gallery One.

As we work through these issues I would be grateful for input from the archives community. How do we go about this? Does anyone have experience with the retention and preservation of technology-based interactives?

Susan Hernandez is the Digital Archivist and Systems Librarian at the Cleveland Museum of Art. Her responsibilities include accessioning and preserving the museum’s electronic records; overseeing library and archives databases and systems; developing library and archives digitization programs; and serving on the development team for the museum’s institutional repository. Leave a comment or contact her directly at shernandez@clevelandart.org.

It May Work In Theory . . . Getting Down to Earth with Digital Workflows

Recently, Joe Coen, archivist at the Roman Catholic Diocese of Brooklyn, posted this to the ERS listserv:

I’m looking to find out what workflows you have for ingest of electronic records and what tools you are using for doing checksums, creating a wrapper for the files and metadata, etc. We will be taking in electronic and paper records from a closed high school next month and I want to do as much as I can according to best practices.

I’d appreciate any advice and suggestions you can give.
“OK. I’ve connected the Fedora-BagIt-Library-Sleuthkit to the FTK-Bitcurator-Archivematica instance using the Kryoflux-Writeblocker-Labelmaker . . . now what?” (photo by e-magic, https://www.flickr.com/photos/emagic/51069522/).
Joe said a couple of people responded to his question directly, but that means we’ve missed an opportunity to learn, as a community, about the actual practices of other archivists working with digital materials.

There are a lot of different archivists working with electronic records—some are administrators, some are temps, some are lone arrangers, some are programmers, some are born digital archivists and some have digital archivy thrust upon them—and this diversity of interests and viewpoints is, to my mind, an untapped resource.

There are so many helpful articles and white papers out there offering general guidance and warning of common pitfalls, but sometimes, when you’re trying to cobble together an ingest workflow or planning a site visit, you just think, “Yeah, but how do I actually do this?”
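In that spirit, here is one small, concrete piece as a starting point: a fixity-manifest sketch for the checksum step Joe asks about, using only the Python standard library. It is a minimal illustration, not a vetted ingest tool; in practice many archives reach for an established tool such as the Library of Congress’s bagit-python instead.

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream a file through SHA-256 so large files don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(directory, manifest_name="manifest-sha256.txt"):
    """Write a BagIt-style manifest: one '<digest>  <relative path>' line
    per file under `directory`, saved alongside that directory."""
    directory = Path(directory)
    lines = [
        f"{sha256_of(p)}  {p.relative_to(directory)}"
        for p in sorted(directory.rglob("*")) if p.is_file()
    ]
    (directory.parent / manifest_name).write_text("\n".join(lines) + "\n")
    return lines
```

Run `write_manifest` once at transfer time and again after moving the files, and a simple diff of the two manifests tells you whether anything changed in transit, which is most of what a fixity check is.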

Why don’t we do that here?

If you’ve got links to ingest workflows, transfer guidelines, in-house best practices, digital materials surveys, or any other formal or informal procedures that just might maybe, kinda, one day be helpful to another archivist, why not post or describe them in the comments?

I know I’ve often scoured the Internet for similar advice only to find it in a comment to a blog post.