2014 TCDL Abstracts

Sunday, April 27

Pre-Conference Workshop: Hacking DPLA | 1:00 – 5:00 PM | Perry-Castañeda Library Room 1.124 (UT Austin)

Monday, April 28

Opening Keynote Address (Amphitheater 204):

Inside the Digital Public Library of America
Dan Cohen, Founding Executive Director of the DPLA

Abstract
DPLA Executive Director Dan Cohen goes behind the scenes to discuss how the DPLA was created, how it functions as a portal and platform, what the staff is currently working on, and what’s to come for the young project and organization.


Session 1A (Amphitheater 204):

Digitizing San Antonio’s LGBTQ Publications: A Portal to the City’s Queer Past 
Melissa Gohlke, UT San Antonio

Abstract
Too often in the past, records of gay, lesbian, and transgender persons have been discarded or destroyed, sending important filaments of history into the trash bin of time. Fortunately, queer publications that survive provide vital glimpses into the evolution of the communities that produced them and are an important means of ascertaining how gay, lesbian, and transgender organizations and individuals perceived and reacted to the world around them, built communities, and captured the pulse of their evolving culture. As interest in queer history and culture grows, efforts to collect, preserve, and digitize LGBTQ materials have intensified. The long-term benefits of preserving queer records such as LGBTQ serials through digitization cannot be overstated. As more materials are digitally preserved and made available, opportunities for access and conservation are greatly expanded. This presentation will cover one such opportunity at the UTSA Libraries Special Collections.

In 2012, UTSA Libraries Special Collections began a collaborative project with the Happy Foundation, a San Antonio non-profit GLBT archives. The project entailed digitization of several decades of queer periodicals housed at the foundation. This effort coincided with the purchase of a Zeutschel overhead scanner by the UTSA Libraries. The process included pickup, transport, digitization, and return of loaned periodicals and, finally, ingest of digital objects and metadata into CONTENTdm. Two challenges came to light during the project: 1) tracking down publication creators to secure permission to digitize items and make them available on the internet; and 2) handling content that might be perceived as extremely provocative, pornographic, or possibly offensive.

At present, the UTSA Libraries Special Collections staff has digitized the bulk of local queer serials held at the Happy Foundation. These represent the basis of UTSA Special Collections' GLBTQ Publications collection, which includes the Calendar, the Marquise, River City Empty Closet, Out in San Antonio, and San Antonio Community News. WomanSpace and the Rainbow Garden Club newsletter, also included in the digital collection, are physical records held at UTSA Special Collections. While the Digital GLBTQ Publications collection features primarily San Antonio periodicals, issues of queer serials from elsewhere are also represented. Several issues of One magazine, the nation's first homosexual publication, are housed at the Happy Foundation and are available digitally through UTSA. Records donated by local and regional LGBTQ organizations and individuals, such as Lollie Johnson, the San Antonio Lesbian Gay Assembly, the Texas Lesbian Conference, and San Antonio activist Michael McGowan, augment UTSA Libraries Special Collections' digital holdings of queer publications and provide research opportunities for scholars, students, and members of the community.

Keywords
archives; digital collections; LGBTQ digital records

The Texas Runaway Slave Project
Kyle Ainsworth, East Texas Research Center, Stephen F. Austin State University

Abstract
The Texas Runaway Slave Project (TRSP) at the East Texas Research Center, Ralph W. Steen Library (http://digital.sfasu.edu/cdm/landingpage/collection/RSP) is a digital collection of extant runaway slave advertisements, articles, and capture notices from surviving Texas newspapers published through 1865. Launched October 1, 2013, the collection currently documents over 225 runaway slaves, drawn from the review of 1,610 issues in 40 newspaper collections. Working with a research estimate of 10,000 available digitized, microfilmed, and original pre-1865 Texas newspapers, the TRSP collection is expected to document 1,500 to 1,600 runaway slaves when finished.

The presentation will cover some of the unique aspects that make the project very dynamic:

1. User Access: This project is ongoing. Instead of waiting for a complete dataset, the team decided to release the research collected thus far and make it available to users.

2. Metadata: The TRSP displays up to 37 searchable fields, four Google maps, and an image and transcript for each document. This level of description and geographical mapping is an improvement on the four existing runaway slave project websites.[1]

3. CONTENTdm: The TRSP innovatively uses this software to give full access to object- and item-level metadata. Compound objects (JPEG images of advertisements and PDF documents of the transcripts) provide visitors with a user-friendly interface.

4. Webpage Design: Created with only 60 hours of labor, the webpage is minimalist but highly functional for both the casual student and the research scholar.

5. Labor: Despite this output, the project still consists of only the project and content managers. An $18,000 grant request under review would provide six student assistants for two months of digital research.

6. Future Applications: Portraits of Freedom is an idea by the Project Manager to create an art exhibit featuring work by SFASU students and faculty drawn from the detailed runaway slave advertisements aggregated by the TRSP. The exhibit might include, but is not limited to, works of painting, drawing, photography, printmaking, sculpture, and art metal. If the project goes forward, it would debut in time for the sesquicentennial (150th anniversary) of the end of the Civil War.

[1] David Gwynn, North Carolina Runaway Slave Advertisements, 1751-1840 [2012], UNC-Greensboro, online at http://libcdm1.uncg.edu/cdm/landingpage/collection/RAS, c. 2,400 runaways; Tom Costa, The Geography of Slavery in Virginia [2005], University of Virginia, online at http://www2.vcdh.virginia.edu/gos/, c. 4,000 advertisements; Douglas Chambers and Max Grivno, Documenting Runaway Slaves [2013], University of Southern Mississippi, online at http://aquila.usm.edu/drs/, c. 10,500 runaways; and Jean-Pierre Le Glaunec, Marronage in Saint-Domingue (Haiti): History, Memory, Technology [2010], Université de Sherbrooke, online at http://www.marronnage.info/en/, 14,867 slaves.

Keywords
Digital Archives; CONTENTdm; Slavery

Session 1B (Classroom 203)

Content Management Systems and 3D Models: Creation, Interaction and Display
Dillon Wackerman, Stephen F. Austin State University; Ashley Thompson, Stephen F. Austin State University

Abstract
This presentation will explore methods of creating and displaying 3D images in relation to Content Management Systems and online collections. Examining the creation of 3D models through various platforms, we will discuss the interaction and feasibility for implementation of several common 3D formats.

The online display of 3D models has been in use for several years, most notably in archaeological reconstruction projects and more recently in digital imaging within the field of medical science. For this presentation, a 3D model is a visual representation that can be manipulated with various tools, which enable it to be turned, rotated, and magnified, among other functions. Apart from large-scale examples, such models have yet to be fully exploited for the online display of cultural heritage objects, in particular within Content Management Systems such as CONTENTdm and Digital Commons.

The 3D file formats this presentation addresses necessarily shape how 3D models are displayed. Discussion will therefore also consider the relationship between these file formats and external or internal methods of display. This presentation will also address recent developments in the area of 3D model representation and how subsequent applications may change.

Keywords
3D models; digital libraries; content management systems

Place-Based Online Management Systems for Documenting the Built Environment
Josh Conrad, UT Austin, Hardy Heck Moore, Inc.

Abstract
In the fields of Cultural Resource Management and Historic Preservation, practitioners have a unique task: document, analyze, and determine the relative historical significance of places and other immovable objects in the built environment. In the age of digital documentation, these fields urgently need – yet have made little progress in developing – sophisticated spatial database systems that offer a flexible tablet-based interface for surveying in the field everything from Victorian mansions to steel trestle bridges to 20-acre cemeteries to freestanding roadside neon signage. Such a system also requires integrated data analysis tools for conducting complex spatial queries, compiling project-specific displays of data on the fly, and exporting deliverable data to clients in formats ranging from printed inventories to database-agnostic tabulated flat files to proprietary formats specific to popular geographic information system software.

For the past several years, I have been working with the University of Texas’ Graduate Program in Historic Preservation, in collaboration with the UT School of Information and the City of Austin, to design and develop the Austin Historical Survey Wiki, a place-based online data management application for collecting and organizing information about the history of Austin’s built environment. This application aims to resurrect the extensive amount of dead archived data from past survey efforts and combine it with new efforts from the community of neighborhood historians eager to document and share the histories of the places that matter to them.

In addition, I am concurrently developing, with Austin-based architectural historians Hardy Heck Moore, Inc., a tablet-based web app that can allow everyone from professional historians to motivated neighbors to easily collect and view information, photos and scanned documents about the historic places in their own towns.

Utilizing open source database software including the Drupal CMS, MySQL, and PostgreSQL with PostGIS, hosted on Ubuntu/Linux cloud-based servers, this application offers an instructive case study in designing highly flexible database schemas that can integrate with spatial data and satisfy a demanding range of data input and output requirements.
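
The abstract does not give the Wiki's schema; as a minimal sketch of the kind of spatial query such a system must answer – with hypothetical table and column names throughout – PostGIS can find every surveyed resource within a given distance of a point:

    import psycopg2  # PostgreSQL driver; assumes PostGIS is enabled in the target database

    conn = psycopg2.connect("dbname=survey user=app")  # hypothetical connection string
    with conn, conn.cursor() as cur:
        # Find every surveyed resource within 500 meters of a point of interest.
        # Casting to geography makes ST_DWithin measure distance in meters.
        cur.execute(
            """
            SELECT name, resource_type
            FROM surveyed_resources
            WHERE ST_DWithin(
                geom::geography,
                ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                500
            )
            """,
            (-97.7431, 30.2672),  # longitude, latitude of downtown Austin
        )
        for name, resource_type in cur.fetchall():
            print(name, resource_type)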

Together, these two projects are tackling the core need in my field for a trade-specific, low-cost, fully integrated database system flexible enough to manage practically any type of immovable place-based heritage.

Keywords
spatial database; flexible schema; historic preservation

Square peg in a round hole: Using IRs to archive websites
Colleen Lyon, UT Austin; Katherine Miles, UT Austin

Abstract
Institutional repositories (IRs) were designed as a way to provide open access (OA) to research articles written by university faculty. Much of what is being produced on college campuses does not fit into the tidy research article category, and any IR administrator can tell you that users want to submit more than just research articles. In addition to the OA aspect of IRs, repository users are very interested in having a reliable way of pointing to their work, and many of them look at IRs as an archive for their digital scholarship.

One of the areas of digital scholarship that our users want to have preserved is websites. Our first response was to say that DSpace does not handle websites well, which is very true, but we received so many requests that we decided to look into ways of making websites work within the DSpace environment. We started out archiving department blogs, moved on to student blogs, and are currently working on faculty websites. Each project presents a slightly different set of issues. We need to decide how to organize the content, how to present the content (screenshots, HTML, PDF, etc.), and how to describe the content. We have to consider whether to preserve all links listed on a page or just the ones that seem to provide important context. And, since websites are ever-changing, we need to come up with a plan for ongoing content capture.
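
The capture tooling is not specified above; as one hedged sketch of the simplest approach (saving a page's raw HTML alongside a checksum and timestamp for provenance, before deposit), with a hypothetical URL:

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    import requests

    def snapshot(url: str, out_dir: Path) -> Path:
        """Save one capture of a page's HTML plus minimal provenance metadata."""
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        html_path = out_dir / f"capture-{stamp}.html"
        html_path.write_bytes(resp.content)
        meta = {
            "url": url,
            "captured": stamp,
            "sha256": hashlib.sha256(resp.content).hexdigest(),
        }
        (out_dir / f"capture-{stamp}.json").write_text(json.dumps(meta, indent=2))
        return html_path

    snapshot("https://example.utexas.edu/department-blog/", Path("."))  # hypothetical URL

Note that a sketch like this does not capture linked assets, which is part of why the presentation decisions above (screenshots vs. HTML vs. PDF) matter.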

We will provide attendees with an overview of the website archiving done so far in the University of Texas Digital Repository (UTDR), and we will discuss the main issues we’ve run into when working with groups on campus. We will also discuss the pros and cons of using IRs to archive websites as opposed to services like Archive-It.

Keywords
institutional repositories; website archiving; DSpace


Session 2A (Amphitheater 204):

This “24×7” session consists of 7-minute presentations, each comprising no more than 24 slides.

Aggregating Digital Collections Metadata into the DPLA: A Regional Tiered Services Model at the Mountain West Digital Library
Sandra A. McIntyre, Mountain West Digital Library

Abstract
As a service hub to the Digital Public Library of America (http://dp.la), the Mountain West Digital Library (http://mwdl.org) shares over 800,000 metadata records from more than 20 repositories located in Utah, Nevada, Idaho, Arizona, Montana, and Oregon.  From common metadata standards, to digitization and hosting services, to aggregation and normalization, the MWDL collaborative provides a range of services to a distributed network of memory institutions in the region, helping them to share their digital collections broadly.
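
Aggregation at this scale is conventionally done over OAI-PMH; as a brief, hedged sketch (the endpoint URL is hypothetical, and this is not a description of MWDL's actual harvesting stack), the Python library Sickle can walk a partner repository's Dublin Core records:

    from sickle import Sickle  # pip install sickle; a lightweight OAI-PMH client

    repo = Sickle("https://example-partner.org/oai")  # hypothetical partner endpoint
    # Iterate over every Dublin Core record the repository exposes;
    # Sickle follows OAI-PMH resumption tokens transparently.
    for record in repo.ListRecords(metadataPrefix="oai_dc"):
        md = record.metadata  # dict mapping DC elements to lists of values
        title = md.get("title", ["(untitled)"])[0]
        print(record.header.identifier, title)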

Over the past twelve years, to distribute the services burden, the MWDL and its more than 120 partners have devised a tiered services model, providing different services at different levels of the collaborative. Tiering services brings benefits to everyone in the network, as it allows for efficiencies of scale on certain services that lend themselves to centralization, while keeping other services close to the partners and materials involved. It also distributes the cost of services across many institutions and allows for different cost-recovery models to be in effect simultaneously. On the other hand, a tiered services model also creates potential issues for the collaborative, including the management of more complex workflows, a need for closer collaboration to coordinate varying practices, and emerging issues of fairness and participation with respect to funding and governance.

With this Prezi presentation, MWDL director Sandra McIntyre will show how the MWDL distributed network inter-operates to offer a distinctive service mix, what issues have emerged with growth, and how the collaborative is moving forward with support from the Digital Public Library of America to adapt the tiered services model to meet the needs of more memory institutions in the Mountain West.

Keywords
Digital Public Library of America; service hub; digital library; collaboration; organizational development; metadata harvesting; aggregation; tiered services; funding; governance

Applying Visual Arts Pedagogy in the Training of New Digital Imaging Technicians
Derek Rankins, University of North Texas; Jeremy Moore, University of North Texas

Abstract
We propose a 24×7 presentation that provides insight into the training methods of UNT's Digital Projects Lab. Derek Rankins and Jeremy Moore apply pedagogy from their visual arts backgrounds while training new digital imaging technicians. We assert that when students learn to scan items first, they internalize a series of steps that, when completed, signify the job is done regardless of how the final image appears. Instead, student assistants now begin with quality control tasks so that they learn the difference between “good” and “bad” images before they are asked to create anything. This is further extended by having the students participate in peer reviews and buddy training sessions. The presentation will include sample images of common mistakes across a variety of imaging platforms.

Keywords
digital imaging

Flowcharting a Course Through Open-Source Waters, an eMOP guide to OCR
Matthew J. Christy, The Initiative for Digital Humanities, Media, and Culture, Texas A&M University

Abstract
The Early Modern OCR Project (eMOP), an Andrew W. Mellon Foundation-funded grant project running out of the Initiative for Digital Humanities, Media, and Culture (IDHMC) at Texas A&M University, intends to use font and book history techniques to train modern Optical Character Recognition (OCR) engines. eMOP's immediate goal is to make machine-readable, or improve the machine-readability of, 45 million pages of text from two major proprietary databases: Eighteenth Century Collections Online (ECCO) and Early English Books Online (EEBO). More generally, eMOP aims to improve the visibility of early modern texts by making their contents fully searchable. The current paradigm of searching special collections for early modern materials by either metadata alone or “dirty” OCR is inefficient for scholarly research (Mandell, 2013).

Now in year two, eMOP is turning towards one of its main goals: to produce a workflow, published in Taverna, for use by individuals and institutions with similar projects. Matthew Christy and Liz Grumbach, eMOP Co-Project Managers for Year Two, will present a series of interconnected workflows that represent the work being done by eMOP and give an idea of how that work will benefit the library, and larger academic, communities. Our presentation will include flowcharts covering:

  • Wrangling the eMOP data and metadata. Our data set consists of the 45 million pages that make up the Eighteenth Century Collections Online (ECCO) and Early English Books Online (EEBO) commercial databases, as well as over 46,000 hand-transcribed texts from the Text Creation Partnership (TCP). We have created our own database and query/download tools to manage and access that data.
  • The eMOP Font History database being created. This database is built by parsing the natural-language imprint lines of every document in EEBO.
  • Training Tesseract. We have developed our own tools and methods to optimize training of Google’s open source OCR engine Tesseract for work on pre-modern printed texts.
  • The eMOP controller. The controller is a software process that manages work from OCR'ing through the scoring of results.
  • The eMOP post-processing process. This process will score OCR results per page and then decide which of two post-processes to route each page through. Pages that score well will be routed for further correction. Pages that score badly will be routed to a triage system, which will determine what is causing the page to fail OCR'ing and tag it for appropriate pre-processing to rectify problems before later re-OCR'ing.
  • The eMOP post-processing scoring method.
  • The process for training eMOP’s triage system’s machine learning applications.

We will conclude with information on where to learn more about eMOP, as well as our open source code and workflows.
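
As a minimal illustration of the Tesseract step only – not eMOP's actual pipeline, and assuming a hypothetical early-modern traineddata file named "emop" has been installed – the pytesseract wrapper can OCR a page image:

    from PIL import Image  # pip install pillow
    import pytesseract     # pip install pytesseract; requires a tesseract binary on PATH

    page = Image.open("page_0001.tif")  # hypothetical page image
    # lang="emop" assumes a custom traineddata file built from early modern fonts;
    # with stock Tesseract you would use lang="eng" instead.
    text = pytesseract.image_to_string(page, lang="emop")
    print(text)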

Keywords
digitization; OCR; open source; flowchart

Responsive Design Meets JFK: 1960s Images through Modern Technology
Eli Zoller, University of Texas at Arlington

Abstract
If you’ve always dreamed of creating a seamless website that would work for users across browsers and devices, you’ve come to the right place. Based on the foundations of Responsive Web Design and using Bootstrap, a front-end framework, you can create a site in the time that this presentation takes. This session will explore the movement towards responsive web design, the use and modification of Bootstrap, and the importance of inter-department relationships in creating a digital exhibit site at the UT Arlington Libraries.

Keywords
responsive web design; digital collections; bootstrap

Viva, VIVO! Research Discovery and Digital Libraries
Patrick Michael McLeod, University of North Texas

Abstract
VIVO is a web application that enables discovery of research and scholarly activity on both the micro and macro levels. At the micro, or institutional, level, VIVO provides a semantic application that organizes and exposes metadata concerning scholarly output by persons affiliated with the institution, producing an environment populated by structured data on publications, conference presentations, teaching, and other service. At the macro, or national, level, institutional VIVO instances provide RDF structured data to add the content of the institutional instances to a nation-wide network of VIVO instances.

There are natural affinities between VIVO and digital libraries with research collections. Integrating VIVO’s researcher information and digital libraries’ researcher holdings adds another layer of data richness to both endeavors. This talk will look at two examples of integrating these two worlds.
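
Because VIVO data is RDF, one natural integration path is SPARQL. A hedged sketch follows – the endpoint URL is hypothetical, and the property names (foaf:Person, rdfs:label) are standard vocabulary used by the VIVO ontology but should be verified against a live instance:

    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

    sparql = SPARQLWrapper("https://vivo.example.edu/sparql")  # hypothetical endpoint
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?person ?name WHERE {
            ?person a foaf:Person ;
                    rdfs:label ?name .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["person"]["value"], row["name"]["value"])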

Keywords
digital libraries; linked data; VIVO

Session 2B (Classroom 203):

What’s Good for the Large Hadron Collider is Good for Libraries: REDDnet at Texas Tech University
Jim Brewer, Texas Tech University; Jayne Sappington, Texas Tech University; Alan Tackett, Vanderbilt University

Abstract
Over the last three years, the Texas Tech University (TTU) Libraries and the Research and Education Data Depot Network (REDDnet) have collaborated on a project to deliver a system that provides library data preservation and continuous access to digital information in research collections. TTU identified a need for business continuity when the university lost Internet access for an entire day; during that time, distance students lost access to library services, including digital resources.

REDDnet, developed under a National Science Foundation-funded infrastructure project at Vanderbilt University, provides storage capabilities with geographically distributed servers called depots that make use of Internet data striping with a fault-tolerant design. Members of the Advanced Computing Center for Research & Education (ACCRE) at Vanderbilt University have provided librarians at TTU with tools that allow their MediaWiki environment to make use of REDDnet services and functions while keeping the off-the-shelf tools employed in MediaWiki unchanged. As a result, digital collections are provided with a robust mix of access, archiving, and business continuity support running under the same services that scientists use at the Large Hadron Collider (LHC) in the CMS and ATLAS experiments. MediaWiki is the open source toolset that drives Wikipedia and many other large services running on the Internet. TTU is using the MediaWiki service to provide access to a historical collection of images and data about sailing ships as well as to volumes of the university's yearbooks.
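
One reason MediaWiki suits this role is that its built-in web API makes collections scriptable. A small sketch, with a hypothetical endpoint URL (the query parameters are the standard MediaWiki API for listing uploaded files):

    import requests

    API = "https://library.example.ttu.edu/wiki/api.php"  # hypothetical endpoint
    params = {
        "action": "query",
        "list": "allimages",  # enumerate files uploaded to the wiki
        "ailimit": "10",
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    for image in resp.json()["query"]["allimages"]:
        print(image["name"], image["url"])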

The collaboration is built upon a design by the TTU Digital Library Unit, which chose to use MediaWiki. This presentation will cover the use of MediaWiki as a tool for providing access to digital data, the basic functionality of REDDnet as a system for supporting access to and management of large datasets, and the interface between MediaWiki and REDDnet which allows these systems to perform exceptionally well together. Vanderbilt presenters will give an overview of the history of the REDDnet project and illustrate its design features through the use of the system at an academic library. The wider use of REDDnet in the physics community, where the project was originally designed, will be covered, as well as other projects and communities currently able to benefit from its capabilities. Future directions for REDDnet will also be covered.

The TTU and Vanderbilt collaboration has addressed key issues in the library environment: geographic distribution of systems to provide reliability and fault tolerance, archiving and preservation, and business continuity. These issues continue to remain areas of active development in nearly all library circles. Here the presenters offer a flexible set of ideas for handling the challenges which many libraries already face on a daily basis.

Keywords
distributed file systems; data preservation; data archiving; Internet striping; REDDnet; Texas Tech University; Vanderbilt University; MediaWiki

A Consortial Response to Data Sharing: The TDL Data Management Pilot Project
Debra Hanken Kurtz, Texas Digital Library

Abstract
In February 2013 the federal Office of Science and Technology Policy (OSTP) issued a directive requiring federal funding agencies that spend at least $100 million per year on research and development to provide public access to the metadata, published research, and data outputs that result from this funding. In response to the OSTP mandate and to the stated needs of its member libraries, the Texas Digital Library began to plan for a consortially developed and run data management service that would meet the requirements of the mandate and position libraries to play a crucial role in ongoing conversations about data management at their universities.

A working group of representative TDL member schools and the Texas Advanced Computing Center began meeting in Fall 2013 to create a cross-institutional pilot project to ingest and make data accessible on the web.

The goals of this pilot are:

  1. To create services that meet emerging federal requirements for data and research publication for federally-funded research projects.
  2. To design and integrate a system for curating and managing data that support novel interdisciplinary research.
  3. To design services that will support the dissemination of research to the public in ways that are useful and effective in meeting the goals of the member institutions.

The group is working with environmental science research groups identified at Texas A&M University to ingest data in a variety of formats, develop and apply metadata to maximize discovery, measure access and usage, and track costs.

The project will build on existing TDL technologies and resources, including hosted DSpace institutional repositories, DuraCloud, and large-scale storage at the Texas Advanced Computing Center. It will deploy these resources strategically to develop a working service and identify areas of need for future development.

The pilot project will be completed in the fall of 2014. This presentation will provide an overview of the project and the group’s assumptions in taking it on, our progress to date, and information about the challenges faced thus far.

Participants on the pilot group include:

  • Bruce Herbert, PhD – Chair (Director of Digital Services & Scholarly Communications, Texas A&M Libraries and Professor of Geology, Texas A&M University)
  • Debra Hanken Kurtz  (Executive Director, Texas Digital Library)
  • Maria Esteva, PhD (Research Associate/Data Archivist, Texas Advanced Computing Center)
  • Colleen Lyon (Digital Repository Librarian, UT Austin)
  • Christie Peters (Science Research Support Librarian, University of Houston)
  • Ryan Steans (Director of Operations, Texas Digital Library)
  • Joseph Tan (Manager, Digital Services & Technology,  UT Southwestern Medical Library)
  • Megan Toups (Instruction/Liaison Librarian, Trinity University)

Keywords
data management; OSTP; open data


Session 3A (Amphitheater 204):

Finding Roots, Gems, and Inspiration: Understanding Ultimate Use of Digital Materials
Michele Reilly, Central Washington University; Santi Thompson, University of Houston

Abstract
The University of Houston Digital Library (UHDL) is the point of virtual access for digitized cultural, historical, and research materials for the university's libraries. UHDL developed a “digital cart” system (DCS) that allows users to download high-resolution images from its collections. The DCS records important information supplied by the user regarding the ultimate use of the downloaded images. Until now, no formal analysis of the transaction log for the DCS has been completed.

This research is significant because little is known about the ultimate use of digital library materials. Current literature suggests that this problem is not uncommon among digital libraries around the world. Our analysis begins to fill a critical gap in the professional conversation on digital libraries by directly contributing to the small body of literature that is asking who uses digital libraries and for what purposes.

This presentation will outline how the researchers analyzed data from portions of the DCS transaction logs from 2010 to 2013. From this analysis, they will highlight some of the interesting and innovative ultimate uses by patrons. The researchers will discuss the study and offer audience members approaches for analyzing data to determine ultimate use and its ramifications inside and outside of the classroom.
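
The DCS log format is not published; as a hedged sketch of the kind of analysis described, assuming a CSV export with hypothetical columns "date", "collection", and "intended_use":

    import pandas as pd

    log = pd.read_csv("dcs_log_2010_2013.csv", parse_dates=["date"])  # hypothetical export

    # Tally the stated "ultimate use" categories per year.
    log["year"] = log["date"].dt.year
    uses = log.groupby(["year", "intended_use"]).size().unstack(fill_value=0)
    print(uses)

    # Which collections are downloaded most often for non-academic purposes?
    non_academic = log[log["intended_use"].isin(["genealogy", "personal", "art"])]
    print(non_academic["collection"].value_counts().head(10))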

Keywords
digital library; ultimate use; digital resources

Measuring Value and Impact: A Study of the UNT Digital Library Collections
Laura Waugh, University of North Texas

Abstract 
In the fall of 2013, the University of North Texas (UNT) Libraries conducted a study to investigate the value of their digital repositories as perceived by UNT faculty, staff, and graduate students. The digital repositories include the UNT Digital Library, its UNT Scholarly Works institutional repository collection, and The Portal to Texas History. The research objectives measured the relationship between the perceived value of the UNT Libraries' digital repositories and the UNT faculty, staff, and graduate students' scholarly outcomes, awareness of available resources, contributions to date, and interest in contributing to these digital repositories. In addition, responses were correlated with university position, rank, department, age, and gender. This presentation discusses the results of this study, the process of measuring value and impact of digital collections, and implications for longitudinal studies and improvements in digital repository services.

Keywords
digital libraries; institutional repositories; value studies

Session 3B (Classroom 203):

Five Years of the University of Michigan Library’s Computer and Video Game Archive: What We’ve Learned
David Carter, University of Michigan

Abstract
This past fall, the University of Michigan Library’s Computer & Video Game Archive celebrated its fifth anniversary. Open to the public, it serves the dual mission of preserving the electronic gaming experience and serving as a research and teaching resource for faculty, students and staff at the University.

This presentation will be an overview of what we’ve learned over the past five years, focusing on two areas:

1) Challenges of acquiring and preserving games in an active use environment.

2) Ways in which the archive has been used by researchers, students and instructors at the University.

Keywords
archives; games; video games

Newspapers, Annuals and Press Releases: Digitizing Baylor’s History
Darryl Stuhr, Baylor University Libraries

Abstract
In 2007, The Texas Collection of Baylor University, a special library and archive, was named the official archive of the university. In 2010, a new director was appointed to the collection, and he quickly began to prioritize digitization projects. Additionally, he hired a temporary employee, the Texas Collection Digitization Consultant (TCDC), to help work on digitization projects.

The prioritization of the archival collections brought three important projects to the top of the list: (1) The Baylor Lariat, the student newspaper of Baylor University, 1900 to present; (2) The Round Up, the Baylor annual, 1896 to present; (3) The Baylor Press Releases, 1920 to 2005.

The Riley Digitization Center, an in-house digitization facility that opened in 2008 and is staffed by the Digital Projects Group (DPG), provides the capability to digitize these collections on campus. Early on, the DPG built prototype collections of the first three years of the student newspaper and a decade of the annuals with the help of a library intern and a Texas Collection-hired student worker. With the additional help of the temporary TCDC, the DPG was ready to take on all three projects.

This presentation will discuss the three projects and how they were managed, including workflow, project decisions, challenges, lessons learned, and how the three collections form a powerful research tool for the new University Archivist, as well as researchers across the country.

Keywords
yearbook; student newspaper; digitization; press releases


Session 4A (Amphitheater 204)

Panel: Serving as a Service Hub for the Digital Public Library of America
Panel Moderator: Anna Neatrour, Mountain West Digital Library
Panelists: Jason Roy, Minnesota Digital Library; Tara Carlisle, Portal to Texas History; Rebekah Cummings, Mountain West Digital Library

Abstract
The Digital Public Library of America (DPLA) provides a portal for a rich variety of resources from libraries, museums, archives, and cultural heritage institutions across the United States. The success of this portal is accomplished with the support of existing “service hubs,” many of which have been actively engaged in data aggregation within their states and regions for a decade or more. In this panel, service hub representatives from the Minnesota Digital Library, The Portal to Texas History, and the Mountain West Digital Library will discuss how they share their content, the effects of contributing to DPLA, metadata interoperability and best practices, data exchange agreements, technology platforms, community outreach, new digitization initiatives, and other lessons learned from contributing to a national digital library.

Panel participants represent both statewide and regional collaboratives, providing varying perspectives on the topic. This session seeks to acquaint attendees with the relationships between multiple service hubs, their state and regional partners, and DPLA. The presenters will also clarify the difference between “content hubs” that maintain a one-to-one relationship with DPLA and “service hubs” that act as a portal for multiple institutions and provide an on-ramp for smaller memory institutions.

Keywords
digital libraries; DPLA; Digital Public Library of America; metadata aggregation; community outreach; digitization

Session 4B (Classroom 203):

Introducing Piper, a Repository-Agnostic Batch Deposit Tool
Micah Cooper, Texas A&M University; James Creel, Texas A&M University; Doug Hahn, Texas A&M University; Bruce Herbert, Texas A&M University; Jeremy Huff, Texas A&M University; Yu “Lilly” Li, Texas A&M University; Alexey Maslov, Texas A&M University; Sarah Potvin, Texas A&M University

Abstract
Applications developers and librarians from the Texas A&M University Libraries will introduce Piper, a repository-agnostic content deposit tool. In addition to providing background on the impetus behind its creation and the intended/anticipated user base, we will demonstrate the tool and explain the process of its development.

Impetus.

Prior to Piper's deployment, batch loads to our DSpace institutional repository were being handled primarily by one developer in the Digital Initiatives unit. DSpace affords various submission workflows for single-item submission, but batches of items must be loaded via the command line on the DSpace server. This server can be an extremely sensitive environment in large organizations whose business cases require backups, firewalls, and high uptime. As part of the workflow for batch loads, which came from diverse sources both inside and outside the Libraries, the developer had engineered procedures for metadata quality control prior to deposit. The developer frequently confronted batch loads with missing files or with incomplete or ill-formed metadata.

Design and goals.

The initial goal of Piper is to allow greater flexibility in our metadata workflow and enable a small group of non-technical staff to perform batch loads. The tool empowers staff with the privileges to assemble, check, and deposit batch loads through a graphical user interface. A central feature of Piper is its ability to validate metadata and files prior to deposit. The tool relies on a suite of automated and customizable verifiers to confirm that metadata are properly encoded and that files are correctly specified.
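
Piper's verifiers are not published; as a rough sketch of the idea, assuming batches arrive in DSpace's Simple Archive Format (one directory per item containing a dublin_core.xml file and a "contents" manifest of bitstreams), a pre-deposit check might look like:

    from pathlib import Path
    from xml.etree import ElementTree

    def verify_item(item_dir: Path) -> list:
        """Return a list of problems found in one Simple Archive Format item."""
        errors = []
        try:
            ElementTree.parse(item_dir / "dublin_core.xml")  # must exist and parse
        except (OSError, ElementTree.ParseError) as exc:
            errors.append(f"bad or missing dublin_core.xml: {exc}")
        manifest = item_dir / "contents"
        if manifest.exists():
            for line in manifest.read_text().splitlines():
                name = line.split("\t")[0].strip()  # filename precedes any bundle flags
                if name and not (item_dir / name).exists():
                    errors.append(f"bitstream listed but not found: {name}")
        else:
            errors.append("missing contents manifest")
        return errors

    for item in sorted(Path("batch_2014_04").iterdir()):  # hypothetical batch directory
        for problem in verify_item(item):
            print(f"{item.name}: {problem}")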

In its first phase, Piper is designed to mimic the work of the developer who had previously performed this work, with procedures for validating metadata and files and the flexibility to upload multiple content files and specialized licenses as part of item records. Once Piper has been honed for use as a tool for this specialized group, we plan to expand its functionality and facilitate and promote its usage by the larger Texas A&M community, as part of ongoing efforts to populate our repository with open access publications.

We have developed Piper in an iterative process whereby the customer chooses which features and fixes will be handled in a cycle (typically two weeks) and accepts or rejects the implementations after live testing and demonstration at the end of the cycle. These practices are informed by the Agile school of project management popular in software development and other technical industries. In this way we seek to minimize wasted development on unneeded features and enable continuous delivery of value to stakeholders.

Keywords
metadata; batch process; repositories; DSpace

Metadata Creation before Digitization: Strategies to Unveil Hidden Collections
Anton duPlessis, Cushing Memorial Library and Archives, Texas A&M University Libraries; Lisa Furubotten, Texas A&M University Libraries; Felicia Piscitelli, Cushing Memorial Library and Archives, Texas A&M University Libraries; Alma Beatriz Rivera-Aguilera, Biblioteca Francisco Xavier Clavigero, Universidad Iberoamericana Ciudad de México; Angel Villalba Roldan, UNAM, Instituto de Investigaciones Bibliográficas, Hemeroteca Nacional de México

Abstract
Many libraries are concerned about the invisibility of their valuable research collections of primary source materials – including rare books, photographs, and documents – that are inadequately registered. Whether caused by limited staffing or lack of expertise, unsatisfactory or nonexistent bibliographic description impedes discoverability and access while also precluding consideration for digitization or digitization on demand. Essentially, without at least rudimentary descriptive metadata, if you do not know what you have, you cannot evaluate the materials for digitization or easily digitize on a large scale, particularly when institutional preference and workflows favor collections with preexisting descriptive metadata.

We discuss a straightforward, inexpensive technique to generate item descriptions that can be expressed in various metadata schemas such as Dublin Core or MODS, while simultaneously producing traditional MARC21 records for the library catalog. Students, working with a user-friendly template and guided by librarians and curators at Cushing Library, created descriptions for materials in the Mexican Colonial Collection. An interdisciplinary, international team tested different software packages, seeking one that permitted the construction of a tool with an interface friendly enough for non-catalogers to input basic descriptive data, affording an opportunity to reevaluate the prevalent idea that special collections metadata generation requires highly specialized professionals. Challenges included determining basic element sets for different types of collections, solutions for crosswalking the data among different metadata schemas, and the problem of adequate file naming for matching records to files upon the subsequent generation of digital versions of collection items. In addition to a tool and methodology that facilitate training and supporting student employees in creating basic descriptive data for homogeneous special collection materials (with an acceptable error rate), the project produced DC and MARC records that permit scholars to discover and retrieve items within the Mexican Colonial Collection and identify those which should be digitized. Further testing with other collections and libraries will be done in the near future.
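
As a simplified sketch of the crosswalk idea (the template column names here are hypothetical; the targets are unqualified Dublin Core elements), a row from the student-entry template could be rendered as a DC record like this:

    # Hypothetical template columns mapped to unqualified Dublin Core elements.
    TEMPLATE_TO_DC = {
        "Title": "dc:title",
        "Author": "dc:creator",
        "Year": "dc:date",
        "Place of publication": "dc:coverage",
        "Physical description": "dc:format",
        "Notes": "dc:description",
    }

    def crosswalk(row: dict) -> dict:
        """Convert one template row into a Dublin Core dictionary, skipping blanks."""
        record = {}
        for column, dc_element in TEMPLATE_TO_DC.items():
            value = row.get(column, "").strip()
            if value:
                record.setdefault(dc_element, []).append(value)
        return record

    sample = {"Title": "Sermones varios", "Author": "Anonymous", "Year": "1734"}
    print(crosswalk(sample))
    # {'dc:title': ['Sermones varios'], 'dc:creator': ['Anonymous'], 'dc:date': ['1734']}

In principle, the same intermediate dictionary could then feed a MARC21 serializer, so one pass of student data entry populates both the catalog and the digital collection.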

This project was funded by the Cataloging Hidden Special Collections and Archives program of the Council on Library and Information Resources (CLIR).

Keywords
metadata generation; metadata creation cost; cataloging; special collections; project planning


Tuesday, April 29

Session 5A (Amphitheater 204):

Panel: It Takes a Village to Grow ORCIDs on Campus: Establishing and Integrating Unique Scholar Identifiers at Texas A&M 
Gail Clement, Texas A&M University Libraries; Sandra Tucker, Texas A&M University Libraries; Violeta Ilik, Texas A&M University Libraries; Douglas Hahn, Texas A&M University Libraries; Micah Cooper, Texas A&M University Libraries

Abstract
This panel presentation focuses on an innovative program at Texas A&M that is changing workflows and practices not only within the Libraries but also across campus. Led by a cross-unit team from the University Libraries, the ORCID Integration initiative at Texas A&M University aims to incorporate globally unique scholar identifiers into the research information systems and workflows used within and beyond the university. At the heart of the program is the establishment of Open Researcher and Contributor ID (ORCID) identifiers for every graduate student, faculty member, and full-time researcher on campus, and the integration of those identifiers into key campus systems: the Vireo system for electronic thesis and dissertation (ETD) submission and management; the OAK Trust digital repository for capturing and preserving campus research outputs; and the VIVO researcher profile system for establishing and representing the scholarly identity of each campus author.

ORCID is a new global standard being used by publishers, societies, universities, and funding agencies to distinguish authors unambiguously and permanently, in order to accurately associate a given author with his or her research contributions. Thanks to advocacy and education efforts by the Libraries, the Texas A&M University administration has determined that ORCID identifiers are a business necessity for distinguishing the research contributions of each campus author; managing and preserving the institution's research outputs; and ensuring that works created at and by the institution are easily discoverable and accessible in the rapidly expanding online information environment of the World Wide Web.
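
As a small illustration of why a resolvable identifier matters – a sketch against ORCID's current public API, which postdates the 2014-era integration described here – any public ORCID iD can be dereferenced programmatically:

    import requests

    orcid_id = "0000-0002-1825-0097"  # ORCID's documented example iD
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/record",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    name = resp.json()["person"]["name"]
    print(name["given-names"]["value"], name["family-name"]["value"])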

It has taken considerable input and efforts by library faculty and application developers, in consultation with university administration, the graduate studies office, and the campus IT department, to design, plan, and implement ORCID integration at the scale of a large research university.  Information sharing policies have seen revision and expansion; existing applications have gained new features and functions; new applications have come online; marketing and outreach campaigns raise awareness of the benefits of establishing a unique scholar identity for campus stakeholders; and learning and user support programs are assisting users in claiming and seeding their ORCID profiles while also preserving preferences for privacy.

This panel presentation includes representatives from each of the units contributing to the success of the ORCID Integration Project at Texas A&M.  Panelists will provide demonstrations of applications devised during the project and samples of learning and assessment materials used.  An open mike segment at the end of the panel presentation will enable attendees to ask questions about specific aspects of the project.

Keywords
ORCID; scholarly identity; scholarly communication

Session 5B (Classroom 203):

This “24×7” session consists of 7-minute presentations, each comprising no more than 24 slides.

The Archive of the Indigenous Languages of Latin America
Susan S. Kung, Archive of the Indigenous Languages of Latin America, UT Austin

Abstract
The Archive of the Indigenous Languages of Latin America (AILLA) is a completely digital repository at the University of Texas at Austin, LLILAS Benson Latin American Studies and Collections. AILLA has no physical presentation space; its collections are accessible only through its website (www.ailla.utexas.org) via parallel interfaces in both English and Spanish.

AILLA’s primary mission is the preservation of irreplaceable linguistic and cultural resources in and about the indigenous languages of Latin America, most of which are endangered. Most of the materials in the archive are primary field data that were collected and deposited (donated) by linguists and anthropologists for whom audio and video recordings are a central part of their research methodology. Many indigenous organizations have also donated the results of their investigations to AILLA. The majority of AILLA’s collection consists of audio and video recordings of discourse in a wide range of genres, including conversations, many types of narratives, songs, political oration, traditional myths, curing ceremonies, etc. Many recordings are accompanied by transcriptions and translations of the speech event. Other textual resources include dictionaries, grammars, ethnographic sketches, fieldnotes, articles, handouts and PowerPoint presentations. The collection also contains hundreds of photographs.

AILLA's secondary mission is to make these valuable and useful resources maximally accessible via the Internet while simultaneously protecting personally, culturally, and politically sensitive materials from inappropriate use and supporting the intellectual property rights of the creators. AILLA's system of access levels allows creators and depositors to have fine-grained control over their materials, which lets them restrict their entire collections or only certain files within the collections. For example, recordings might be public while transcriptions might be restricted, or vice versa. Sensitive materials are protected; however, AILLA's directors, manager, and depositors believe strongly that accessibility is equally important. Historically, very little of the fruit of linguistic and anthropological research has been genuinely available to the indigenous communities in which the research was done; AILLA aims to rectify that imbalance. Restrictions tend to keep speakers out, while researchers can generally gain access to archival materials through the academic network. Resources that are publicly accessible can be heard and read by all speakers. Our policy is that if a resource can be made public, it should be made public; but if it is sensitive, it should be protected. Our goal is to ensure that the unique and wonderful resources preserved at AILLA can be used to maintain, revitalize, and enrich the communities from which they arise.
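
The graded access control described above can be pictured as a small ordered check; this sketch is schematic only – the level names are illustrative, not AILLA's published terms:

    from enum import IntEnum

    class Access(IntEnum):
        """Illustrative graded access levels, from most open to most restricted."""
        PUBLIC = 1                # anyone may listen or read
        REGISTERED = 2            # any logged-in user
        DEPOSITOR_CONDITION = 3   # depositor-set condition, e.g. a password or embargo
        DEPOSITOR_PERMISSION = 4  # requires contacting the depositor for permission

    def may_open(resource_level: Access, user_clearance: Access) -> bool:
        """A user may open a file only if cleared at or above the file's level."""
        return user_clearance >= resource_level

    # A public recording is open to everyone; its restricted transcription is not.
    print(may_open(Access.PUBLIC, Access.PUBLIC))                    # True
    print(may_open(Access.DEPOSITOR_PERMISSION, Access.REGISTERED))  # False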

AILLA was intended from the outset to function as a partner with its depositors, providing them with a means of both preserving and sharing, under appropriate terms, the fruits of their work with the indigenous peoples of Latin America. The archive accepts any legitimate resources that can be housed in a digital format.

Keywords
special collection; indigenous languages; Latin America

Being an ‘a11y’: Increasing Accessibility in Born Digital Preservation
Lisa Snider, Harry Ransom Center, UT Austin

Abstract
In the past few years, archivists and librarians have grappled with issues associated with the long term preservation of born digital materials. Are we considering the needs of people with disabilities when preserving these materials?

This presentation will explore how we can increase accessibility when preserving born digital materials. Taken from an archival point of view, the presentation will focus on one solution that may make our born digital material more accessible to people with disabilities.

Keywords
born digital; preservation; access; accessibility; people with disabilities; archives

Digital Collections in a Small Archives: Using Google Services to Help Present and Promote an Oral History Project
Erin Wolfe, Dole Archives, University of Kansas

Abstract
Providing online access to media collections, such as oral histories, can be challenging to do well, particularly for smaller institutions with limited resources. This presentation will focus on a recently completed project in which the Dole Archives leveraged freely available tools to provide access to a high-profile oral history collection in a variety of formats, including streaming audio/video, full-text searching capabilities, and a finding aid with direct links to digital content. By integrating Google services into our own website, the project benefits both from (a) local branding and exhibit/content hosting and (b) increased visibility of the materials to a wider audience through Google-based searches. Designed with end-user access in mind, it is our hope that this project will help to expand our audiences beyond the academic and be useful (and usable) for a variety of purposes, from K-12 student research to serving as a case study for future fundraising opportunities. This presentation should be of interest to institutions looking for a low-cost approach to providing online access to media collections or those interested in seeing a new approach to using web-based tools to provide access to archival materials.

Keywords
digital collections; archives; oral history

Digital Repository for Beach Management Data
Laura Kane McElfresh, Texas A&M University at Galveston; David R. Baca, Texas A&M University at Galveston

Abstract
The Galveston Island Park Board of Trustees, a governmental entity created in 1962 by the Texas Legislature, is responsible for preserving and promoting the Island’s natural resources, including its beaches. The Park Board produces data and documents — studies, reports, policy advisories, and other information — which may not necessarily fall under the purview of government document depository mandates, but should still be openly accessible to citizens. Texas A&M University at Galveston, as an institute dedicated to higher education and scholarship in the marine sciences, marine engineering, and maritime professions, is a natural home for this kind of scientific and economic information. In January 2014, the Jack K. Williams Library at Texas A&M – Galveston and the Galveston Island Park Board of Trustees formed a partnership to create a repository for preservation and open sharing of these documents. This brief presentation will outline our progress to date.

Keywords
digital repositories

Examining Massive Digital Libraries
Andrew Weiss, California State University, Northridge

Abstract
Massive Digital Libraries (MDLs) can be defined as digitized book collections that rival or even surpass the current size of most physical “brick and mortar” libraries. Many of these MDLs reach sizes of several million volumes. The largest is Google Books, at nearly 30 million volumes; HathiTrust is a distant second at 11 million volumes.

This presentation will examine the results of two related studies. The first, currently being conducted, examines levels of access in four Massive Digital Libraries – Google Books, HathiTrust, the Open Content Alliance's Open Library, and the Internet Archive – across random samples of Spanish-language and English-language books. In a preliminary study, differences in the level of access between Spanish and English language books were noted and compared. This study provides a more complete examination of the data, covering nearly 1,200 records culled randomly from a library catalog.

In the second study, the author examines rates of error and problems associated with scanning Japanese-language books found in the Google Books and HathiTrust Massive Digital Libraries. The study is based on interviews conducted by the author at Keio University in Tokyo, Japan, the sole Japanese organization to partner with Google Books, and on a current examination of randomized records retrieved from OCLC WorldCat. The results reveal a number of errors in metadata and scanning.
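
The sampling design lends itself to a simple statistical check; as a hedged sketch (the counts below are placeholders for illustration, not the study's data), a chi-square test of independence between language and access level:

    from scipy.stats import chi2_contingency  # pip install scipy

    # Contingency table: rows = language of sampled book, columns = access level.
    # Placeholder counts for illustration only, not the study's findings.
    #        full view, limited preview, metadata only
    table = [
        [120, 300, 180],  # English-language sample
        [ 60, 210, 330],  # Spanish-language sample
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4g}")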

The results of both studies suggest that aggregated content development in massive digital libraries may be impacted negatively by a lack of diversity in partnerships.

Furthermore, problems with the mass digitization of non-English, non-Western books arise not only from the limited numbers available but also from issues of copyright clearance, availability of materials, and non-Western bookbinding techniques and printing technologies.

Keywords
digital libraries; analysis; metadata; diversity; Japanese language; Spanish language

National Museum of the Pacific War Oral History Program
Sarah Walch, National Museum of the Pacific War

Abstract
The Oral History program at the National Museum of the Pacific War (NMPW) has been a major volunteer effort for more than a decade. Recently, the program transitioned leadership to the Nimitz Education and Research Center's (NERC) archivist and librarian, as part of a larger effort both to provide long-term storage for analog and digital audio and transcript materials and to professionalize the program as the NERC team prepares to launch an oral history website to the public.

At the beginning of February 2014, the museum held more than 4,100 interviews. To date, about half of the interviews have been transcribed, and they are currently available to researchers by request or appointment. Interviews are conducted in person and by telephone. During my presentation, I will trace the path that the interviews take from the moment they are collected to the moment they are disseminated to the public.

On January 31, 2014, I gave this presentation to a team of 25 volunteers in the Nimitz Hotel Ballroom, on the grounds of the NMPW complex in Fredericksburg, Texas. In addition to a copy of the slides, the attendees received several handouts, including a schedule, a training manual developed in-house for both interviewers and transcribers, a draft of a skit presented by two volunteers (which pointed out several common interviewer errors), a Texas Historical Commission guide to collecting military oral history, and a “Some Rules for a Better Interview” one-sheet guide prepared by a senior oral historian.

The NERC's oral history program falls within the conference's theme, “Engaging Outliers: Context, Collections, and Community,” as an ongoing project and digital library use case with an emerging workflow. To borrow an aptly put-together phrase, our team is building our digital archive as we fly it.

The oral history collection is unique because, while it accepts World War II oral histories in general, it focuses, as the museum does, on the Pacific Theater. The National Museum of the Pacific War is the only museum in the country dedicated to telling the story of WWII in the Pacific.

When the oral histories were originally collected, presenting such material in an online format was an unheard-of proposition. Now, with content from such diverse sources as Admiral Chester Nimitz, former President Lyndon Johnson, Navajo Code Talker Carl Gorman, some of the Doolittle Raiders, prisoners of war, and many more, the value that scholars, students, journalists, historians, and society in general will gain from being able to search material cataloged to Dublin Core standards and posted on a well-architected website is remarkable. Several prominent WWII authors have also enriched the collection by donating oral history interviews collected during their book research. The NERC team of archivists and curators, under the leadership of the National Museum of the Pacific War, the Texas Historical Commission, and the Admiral Nimitz Foundation, is determined to see these materials realize their full promise and reach the widest audience possible.

Keywords
digital archive; oral history; pacific theater; World War II

The Niiyama Japanese Poetic Pottery: The unintended adaptation of a unique collection
Patrice-Andre Prud’homme, Milner Library, Illinois State University

Abstract
Spanning from the compilation of the Ogura Hyakunin Isshu, an exemplary collection of short poems, in 1235 to its interpretation in ceramics exhibited at the Tokyo American Club in 1981, this presentation will examine the interactive digital transformation of a collection of one hundred pieces of pottery displayed in Flash and in HTML5 with CSS3. Through his work, the potter Mitsuya Niiyama reveals the nature of Japanese sensitivity. The purpose of this case study is three-fold: 1) demonstrate the collaborative work of the digitization process with departments inside and outside the library; 2) explore the innovative process in the production of an interactive presentation, including HTML5 for added accessibility on mobile devices; and 3) adhere to digital preservation strategies and actions for content creation associated with metadata development. This elaborate transformative process applied to a unique collection of pottery is all the more important in that it rests on the premise of accurate rendering of authenticated content over time, particularly when institutions with fewer resources may find it difficult to successfully engage in digital preservation.

Keywords
Japanese pottery; interactive media

Success and Growth: The Black Gospel Music Restoration Project
Timothy Logan, Baylor University Libraries

Abstract
Since its launch in 2006, the Black Gospel Music Restoration Project (BGMRP) at Baylor University has become a nationally recognized effort to catalog, capture, and preserve materials from America’s “golden age” of black gospel music. Spurred by an impassioned New York Times op-ed written in 2005 by Baylor journalism professor Robert Darden, the BGMRP began with a lead gift from Charles Royce that provided for the purchase of equipment and the hiring of an audio engineer. What started with a small scanner in borrowed space has grown into an active center for digitization and preservation. Today, the gospel project’s team includes an audiovisual digitization specialist, a curator, an assistant director, and a number of graduate and undergraduate student workers. Their work focuses on the digitization of gospel recordings, regardless of the original recording format. The records are received, cleaned, cataloged, digitized, and returned to their owners, and other materials such as hymnbooks and sheet music are treated with equal care.

Working with major collectors across the country, the project has grown to include more than 2,200 digitized albums (78s, 45s, and 33s) whose digital objects comprise 8,196 images and 4,740 accessible audio tracks. The collection is open to the world via the Baylor University Libraries Digital Collections (http://digitalcollections.baylor.edu), and excerpts are available via the Baylor University iTunes U account.

The project has received extensive media coverage from outlets such as NPR’s “Fresh Air” and the Dallas Morning News, along with numerous other print, broadcast, and online news organizations. In 2012, Professor Darden and Tim Logan, Baylor Associate Vice President for Electronic Library, visited with a team from the Smithsonian Institution about including elements of the gospel collection in a new museum. In 2013, the leadership of the National Museum of African American History and Culture, a division of the Smithsonian Institution, announced that materials from the BGMRP will be featured in the new museum’s permanent collection when it opens its doors on the National Mall in Washington, D.C. in 2015. This collaboration follows several years of conversations and ongoing discussions between Baylor and the Smithsonian and marks a major milestone in the ongoing story of this unique cultural heritage preservation undertaking. Beyond inclusion in a major museum, outreach for the collection includes hosting the Pruit Symposium on gospel music and building connections with African American churches across the country, all of which demonstrate transformative ventures beyond traditional library contexts.

The proposed 24x7 presentation will provide a succinct overview of the origins, history, present activities, and future plans of the Black Gospel Music Restoration Project.

Keywords
digitization; gospel music; Smithsonian


Closing Keynote Address (Amphitheater 204)

Thinking Different
Karen Coyle

Abstract
Library practices are based on a rich history covering centuries of expert knowledge. That’s the up-side. The down-side of that rich tradition is that there is so much of our practice that is based on “We’ve always done it that way.” This talk will challenge you to “think different” about common practices and begin to imagine a very different library.

Keywords
linked data


Poster Presentation Abstracts

Beyond Web-based Scholarly Works Repositories: The effect of institutional mandates on faculty attitudes towards Institutional Repositories
Ahmet Meti Tmava, Daniel Gelaw Alemneh

Abstract
In the last decade there has been a push from academic institutions to encourage faculty to deposit their work in web-based scholarly work repositories, commonly known as institutional repositories (IRs). IRs are responsible for collecting and preserving the intellectual works of faculty and students and making them widely available.

In light of the ever-evolving landscape of higher education, IRs seek to move beyond a custodial role and actively contribute to the advancement of scholarly communication. Understanding and addressing the issues faced by IRs requires a multidimensional approach that involves all stakeholders, including individual scholars and researchers, academic institutions and librarians, scholarly and scientific society publishers, commercial publishers, and government institutions. However, most researchers agree (Kim, 2010) that the main players are faculty members, who can make or break an IR.

Although IRs are an innovation in scholarly communication, they have been met with resistance from faculty members. Academics have been slow to embrace the concept: according to a recent study by the Primary Research Group (2014), only 5% of journal articles published by the faculty members of the surveyed organizations have been archived in an IR. While a range of factors seems to influence researchers’ use of repositories, there is still no agreement on how to resolve the challenge of getting authors to deposit content. The most recent survey, by Nicholas et al. (2014), suggested that while the size and use of repositories have been relatively modest, almost half of all institutions either have, or are planning, a repository mandate requiring deposit. However, Crow (2002) warned that faculty submission will have to be voluntary, or IRs risk encountering resistance from faculty members who might otherwise prove supportive.

The current situation of IRs is rather bleak and calls into question the effectiveness of current ways of recruiting content, including institutional mandates. Nicholas et al. argue that mandates vary based on the research community and/or institution. Their findings reveal that none of the participating institutions reported any attempt to force researchers to comply with a mandate, and they describe current mandates as educational rather than binding.  The same study concludes that 22 percent of researchers were directly influenced by a mandate to deposit their work, and that this varied with age. Thus, the hope remains that, with mandates in place, the new generation of researchers will get used to the idea of depositing their work.

This poster will revisit content recruitment issues in general. Although there is an extensive body of relevant knowledge, discussions about IR transformations are often based on opinion and the isolated experiences of commentators, leaving out the main issue (i.e., institutional policies) and the main players (i.e., faculty). This paper will attempt to assess the effect of institutional mandates on faculty attitudes towards IRs. We believe that analyzing and spotlighting the possible correlations between and among various factors is pertinent for understanding and shaping the ongoing transformation of IRs.

Keywords
institutional repositories; digital repositories; open access; policies; scholarly communication; content recruitment

Centralized to Scattered: Designing Project Workflows for a Dynamic Staff
Faedra Wills, Krystal Schenk

Abstract
How can staff collaborate on digital projects when they are dispersed throughout the library?  This is the challenge the new Digital Creations department faced after a library-wide reorganization in the summer of 2013.   In 2011, the UT Arlington Libraries began mining faculty CVs for articles that we could add to our local institutional repository.  After the reorganization, the staff previously working on this project were scattered across three departments.  By leveraging the project management features of the newly adopted tool SharePoint, we are able to distribute the work of this project across staff and departments.

In this presentation we will demonstrate how we are using SharePoint’s workflows, custom lists, task lists, and shared calendars to help keep staff informed, generate reports, and manage projects.  In particular, we will show how we use these features to keep staff on task and faculty informed of our progress.

Keywords
digital project workflows; collaboration; project management

Collection Size Descriptions as Archival Data: The Spectrum of physdesc
Sarah Buchanon, Hayoyang Li

Abstract
This poster presents insight into the functional vocabulary with which repositories describe the physical extent of their collections. The structured standard Encoded Archival Description (EAD) has provided repositories with an XML basis for representing archival finding aids since its creation and adoption during the 1990s. As one measure of its widespread adoption by collecting repositories, consider that the nationwide corpus of ArchiveGrid currently comprises over 120,000 EAD documents. The public database Texas Archival Resources Online similarly facilitates discovery of historical collections by displaying the contributions of EAD-structured finding aids from Texas repositories. The current version of EAD consists of 146 elements, each an EAD tag paired with its formal element name, which provide the basis for these structured descriptions of collections. In this research we focus on one component of collection description, the <physdesc> tag, and report on the range of format types that appear in Texas collections. Beyond the colloquial names of box, photograph, and painting exist many outlier terms which present unique challenges and opportunities. The variation within the <physdesc> tag may be painless to the human reader during display, yet it becomes problematic during natural language processing, which requires normalization of collection sizes in order to perform statistical analysis.

Through the one element of Physical Description, repositories are charged with summarizing both the materiality and the quantity of the items contained in an entire collection. These descriptions speak to the physical form and enumerative values of all information artifacts in the collection through the use of four optional subelements: dimension, extent, genre characteristic, and physical facet. We demonstrate the effect of the relative leeway in data structure requirements built into the formal definition of this element. Because “the information may be presented as plain text,” the end result of this definition is a dataset with wide internal variation that could impede the goal of assessing such collections through actionable data and its reuse in a broader context, such as by repository or region. With the third EAD Revision currently in gamma release (and set to replace EAD 2002 this spring), we consider our study in parallel with the following two developments: the continuation of the <physdesc> element as an unstructured option, and the creation of a new <physdescstructured> element which will formally adopt, rename, and add a fifth subelement to the four optional subelements listed above. In addition to version compatibility, EAD developers and adopters should facilitate integration of the legacy data corpus alongside new data requirements to meet the dual goals of analysis and discovery. The Visualizing Archival Data / Augmented Processing Table project, of which this study is a part, aims to understand how such finding aid data can reveal the quality and granularity of collection arrangements, and through this, the layers of historical evidence that are made available to researchers seeking resources on specific topics, people, and organizations.
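
To make the normalization challenge concrete, here is a minimal sketch, outside the project’s actual tooling, of reducing free-text extent statements to quantity and unit pairs; the pattern vocabulary and the sample string are invented for illustration.

    import re

    # Regex over free-text extent statements; the vocabulary here is a small,
    # invented sample of common forms, not the full Texas corpus.
    EXTENT = re.compile(
        r"(?P<qty>\d+(?:\.\d+)?)\s*(?P<unit>linear\s+(?:feet|foot|ft\.?)|boxes?|items?|photographs?)",
        re.IGNORECASE,
    )

    CANONICAL = {"linear": "linear feet", "box": "boxes", "item": "items", "photograph": "photographs"}

    def parse_extent(text):
        """Return (quantity, canonical unit) pairs found in a physdesc string."""
        pairs = []
        for m in EXTENT.finditer(text):
            unit = m.group("unit").lower()
            for prefix, canon in CANONICAL.items():
                if unit.startswith(prefix):
                    unit = canon
                    break
            pairs.append((float(m.group("qty")), unit))
        return pairs

    print(parse_extent("3 boxes (2.5 linear feet)"))
    # -> [(3.0, 'boxes'), (2.5, 'linear feet')]

Even this toy pattern must already absorb plurals, abbreviations, and parenthetical restatements; each outlier term found in the corpus would need its own handling.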

Keywords
descriptive metadata; structured data; digital libraries

Developing a Library Open Access Portal that Bypasses the Need for Authentication
Bruce E. Herbert, Sarah Potvin, Bennett Ponsford, Anne L. Highsmith

Abstract
Texas A&M University was established as Texas’ only land grant university through the First Morrill Act (1862), which sought to provide a broad segment of the population with a practical education that had direct relevance to their daily lives.  Our impact on society was later expanded through the creation of the agricultural experiment stations and the Cooperative Extension Service, which disseminate the results of experiment station research to improve the state’s agricultural industry.  The Sterling C. Evans Library at Texas A&M is building upon this history to help bring all of Texas A&M’s scholarly work to bear on many of society’s greatest challenges by promoting open access.  We are working to identify and advance appropriate information systems, practices, and policies that improve societal access to the scholarly and creative work at Texas A&M.

The Texas A&M University Libraries has begun work to design a portal that bypasses the need for authentication and allows a user to search through a collection of open access materials. Working with Ex Libris, the vendor from which we license our Primo discovery layer, we have installed a separate instance of Primo aimed at aggregating open access materials and making them accessible to the public. This dedicated portal will draw materials identified as open access from the Primo Central Index, a “meta-aggregation of hundreds of millions of scholarly e-resources of global and regional importance,” including “journal articles, e-books, reviews, legal documents and more.” We are currently working to have OAK Trust open access items harvested into Primo Central and made available alongside harvests from other institutional repositories. In establishing this Portal to Open Access Resources, we will also work to identify materials that are legitimately open access (gratis) and that meet basic quality standards.
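
As a rough sketch of the harvesting half of this architecture: DSpace repositories such as OAK Trust expose their metadata over the standard OAI-PMH protocol, which a client like the one below can read. The endpoint URL and the rights-based filter are illustrative assumptions, not the project’s actual configuration.

    from sickle import Sickle  # third-party OAI-PMH client: pip install sickle

    # Hypothetical endpoint; DSpace repositories expose OAI-PMH at a path like this.
    client = Sickle("https://oaktrust.library.tamu.edu/oai/request")

    for record in client.ListRecords(metadataPrefix="oai_dc"):
        dc = record.metadata  # dict mapping Dublin Core elements to lists of values
        rights = " ".join(dc.get("rights", [])).lower()
        # Crude openness filter for the sketch; a real portal would rely on
        # curated open access flags rather than free-text rights statements.
        if "open access" in rights:
            print(dc.get("title", ["(untitled)"])[0])

In production, an aggregator such as Primo Central performs this harvesting itself; a local script of this kind is mainly useful for previewing what an external harvester will see.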

This poster presentation will discuss the technical aspects and policy decisions made during the design and implementation phase of the project, and show how the portal supports a Texas A&M University partnership with a K-12 school district reforming its science, technology, engineering, and mathematics (STEM) education.

Keywords
open access; portal; search

Did we scan that book twice?: Weeding the Texas Tech Dark Digital Archive
Heidi M. Winkler

Abstract
The Texas Tech University Libraries’ digital collections began in 2004 with the intent to digitize as many books as possible in the name of open access. By the fall of 2013, that mission had been revised to focus on the preservation of materials unique to Texas Tech. We decided it was not in the institution’s best interest to devote resources to files in our dark digital archive that did not meet this mission. Using the HathiTrust catalog as our guide, we set out on an online trek to discover just how many digitized books being preserved on our servers were, in fact, distinct items not held elsewhere. Along the way, we tackled questions of what we provide access to in our DSpace repository versus what we archive on our servers, and just how unique “unique” really is. Weeding a digital resources library requires a different process of consideration than weeding a physical library. Further, we used this project to refine our digital archiving and preservation practices, the most important refinement being the establishment of an archive change log.
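
A minimal sketch of the kind of overlap check this trek involved, not necessarily the team’s actual workflow, using the public HathiTrust Bibliographic API keyed by OCLC number; the sample numbers are placeholders.

    import requests

    def held_by_hathitrust(oclc_number):
        """True if HathiTrust's Bibliographic API reports a volume for this OCLC number."""
        url = f"https://catalog.hathitrust.org/api/volumes/brief/oclc/{oclc_number}.json"
        data = requests.get(url, timeout=30).json()
        return bool(data.get("records"))  # an empty "records" dict means no matching volume

    # Placeholder OCLC numbers standing in for the dark archive's catalog records.
    for ocn in ["424023", "9999999999"]:
        print(ocn, "held elsewhere" if held_by_hathitrust(ocn) else "possibly unique")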

Keywords
digital libraries; digital archives; collection management

Digitizing the Fred Fehl Dance Collection
Chelsea Weathers, Jordan Mitchell, Emily Roehl

Abstract
The Harry Ransom Center’s performing arts department holds two vast collections of photographs by Fred Fehl—a prolific mid-twentieth-century photographer of theater and dance based mainly in New York City. The Fred Fehl Theater Collection and the Fred Fehl Dance Collection each contain tens of thousands of 5 x 7 prints of various productions by multiple companies. For the past six months, a team of employees, interns, and volunteers has been working to digitize and catalog 5,000 of the 30,000 photographs in the Fred Fehl Dance Collection. Once digitized, the images and their metadata are uploaded to the Ransom Center’s new digital collections website, which uses the CONTENTdm platform. Providing access to Fehl’s photos of dance productions, which run the gamut from the classical offerings of the American Ballet Theatre to Martha Graham’s groundbreaking modern dance, is a significant contribution to the fields of dance history, art history, cultural studies, and costume design. No other online library or archive currently provides images of Fehl’s photos in such breadth or depth, and the Ransom Center is in a unique position to do so because it holds the copyright to all of its Fehl photographs.

To execute the complex task of preparing the photographs for digitization, the performing arts curator Helen Baer, her associate Chelsea Weathers, and graduate interns Jordan Mitchell and Emily Roehl developed a workflow with two main streams: one focused on the creation of consistent metadata, the other on the digitization of the photographs. After the workflow was instituted, undergraduate work-study students and volunteers also began to contribute to the project. To date, nearly 1,500 photographs from three different dance companies have been uploaded via CONTENTdm to the Ransom Center’s digital collections website. Access to this enormous collection of visual materials will be an invaluable resource for dance scholars, enthusiasts, historians, and the general public.

Keywords
archives; visual materials; photographs; dance; digital image collections

Harvesting Quality: Evaluating Metadata for Digital Collections
Paromita Biswas

Abstract
Metadata creation practices for digital library projects vary widely among libraries. Digital library projects often have to deal with multiple metadata creators, new formats and resources, and dynamic metadata standards for different communities (Park & Tosaka, 2010). As a result, while field practitioners prioritize accuracy and consistency in metadata, records created for specific digital projects may lack the quality needed to support successful end-user resource discovery and access. Park and Tosaka’s survey of metadata quality control in digital repositories and collections reveals that digital repositories often rely on periodic sampling or peer review of original metadata records as mechanisms for quality assurance (Park & Tosaka, 2010).

This poster presents another means of running quality checks on metadata created for digital projects, based on Hunter Library’s experience with the WorldCat Digital Collection Gateway, a tool for harvesting digital collection metadata into WorldCat. Hunter Library’s digital collections are described using Dublin Core in CONTENTdm, and the library has recently started harvesting its collections into WorldCat using the Gateway. During harvesting, the Gateway by default places the names of “creators” and “contributors,” recorded in separate fields in the local metadata environment, into one broad “Author” field for WorldCat users. A cursory review of this “Author” field for several harvested items from one of the library’s collections revealed an unexpected presence of corporate body names alongside personal names, which led to an evaluation of how the “creator” and “contributor” fields had been used in that collection.

The “Frequency Analysis” feature in the Gateway proved particularly useful in this evaluation, since it provided a breakdown of each field in a collection by the values used and the number of times each value occurred. For example, high-frequency usage of a particular name indicated that the usage was consistent rather than a random mistake. A subsequent analysis of the library’s digital collections’ metadata using “Frequency Analysis” revealed that for some collections, the “contributor” field had been used to record entities whose roles in relation to the item described ranged from publisher and printer to editor and recipient of a letter. However, the library’s then-current metadata schema had limited the definition of the “contributor” field to entities with a direct but secondary role in the creation of an item, such as editors or illustrators. This discrepancy between the library’s metadata schema and the actual usage of the “contributor” field led to a redefinition of the “contributor” role. The schema now incorporates the plethora of roles that “contributors” can have in relation to an item and recommends that each “contributor’s” role be explained in the “description” field to account for this diversity. Updating the schema has thus promoted consistency in recording the “contributor” field across the library’s digital collections, while also potentially benefitting users searching for an item by the various names associated with it.
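
A local approximation of that frequency analysis can be run before harvesting; the sketch below assumes a hypothetical CSV export with a semicolon-delimited “contributor” column (the file name, header, and delimiter are assumptions, not Gateway internals).

    import csv
    from collections import Counter

    counts = Counter()
    with open("collection_export.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # CONTENTdm-style exports often delimit multiple values with semicolons.
            for value in row.get("contributor", "").split(";"):
                value = value.strip()
                if value:
                    counts[value] += 1

    # High-frequency values point to consistent (intentional) usage; singletons
    # are more likely one-off data-entry errors worth reviewing.
    for name, n in counts.most_common(20):
        print(f"{n:5d}  {name}")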

Keywords
digital libraries; metadata; quality control; harvesting

The Power of Collaboration: Creative Opportunities with Faculty
Faedra Wills, Jeff Downing

Abstract
With the growth of digital humanities, e-science, and other web-based initiatives, there are many new and exciting opportunities for librarians to collaborate with faculty and the community.  For the past couple of years, UT Arlington (UTA) library staff have worked with faculty on the occasional digital project, but those projects tended to be small in scale with limited long-term benefits.  In the summer of 2013, the library went through a reorganization, one outcome of which was the establishment of a Digital Creations department.  As a result, staff now have the time and the tools to collaborate with faculty on larger and more complex digital initiatives.  This presentation will look at two recent staff/faculty collaborative projects at UTA: the creation of a university alumni military veterans website and the publication of an international educational journal on the OJS platform.  We will provide an overview of the projects, discuss how they originated, and share our success stories and lessons learned that can be applied to future projects.

Keywords
collaboration; faculty; digital projects

Providing a Spatial Context for Library and Archival Collections: Mapping Historic Aggieland 
Kathy Weimer, Miriam Olivares

Abstract
Libraries and archives hold large collections of historic maps and photos.  Creative digital exhibits give users a unique framework for these collections, with mapping platforms providing spatial context and serving as a visually appealing browse mechanism.  Librarians and staff from the Map & GIS Library at Texas A&M University used Geographic Information Systems (GIS) technology to present “Mapping Historic Aggieland,” a digital collection of historic maps, aerial photos, and photos of significant sites and buildings on campus. These materials, which span a century, are gathered to tell the story of the university’s growth over 100 years.  GIS is used to display the digitized maps in georeferenced form and the photos in their correct geographic locations on campus. Users, from alumni to current students, make use of the digital collection and gain an understanding of the campus’s expansion and its styles of architecture over the years.  Archival photos of campus buildings include the dates they were built, which allows users to browse the collection by time period using a time slider.  Esri’s ArcGIS Server and ArcGIS Viewer for Flex were used to create this web service and will be described.  Other lightweight mapping tools will also be reviewed for those wanting to create a similar exhibit for their library or archival collection.
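
As a taste of such lightweight alternatives, here is a minimal sketch using folium, an open-source Python wrapper for the Leaflet mapping library; the coordinates, building names, dates, and output file name are invented placeholders, not the exhibit’s actual ArcGIS-based implementation.

    import folium  # lightweight Python mapping library: pip install folium

    campus_map = folium.Map(location=[30.6187, -96.3365], zoom_start=16)

    # Hypothetical archival photo records with build dates and locations.
    photos = [
        {"name": "Academic Building", "year": 1914, "lat": 30.6158, "lon": -96.3407},
        {"name": "Kyle Field", "year": 1927, "lat": 30.6100, "lon": -96.3404},
    ]
    for p in photos:
        folium.Marker(
            location=[p["lat"], p["lon"]],
            popup=f"{p['name']} ({p['year']})",
        ).add_to(campus_map)

    campus_map.save("historic_campus_map.html")  # a static HTML exhibit page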

Keywords
maps; GIS; photos; archives; exhibits

Pushing the Boundaries of Open Access
Daniel Gelaw Alemneh, Mark E. Phillips, Jill Kleister

Abstract
The Open Access (OA) movement has become increasingly important in shaping the ways that academic libraries provide services to support the creation, organization, management, and use of digital content. The University of North Texas (UNT) has embraced the open access movement and seeks to bring scholarship to the widest possible audience. Our usage statistics show that users from more than 200 countries around the world visit the UNT Digital Libraries’ diverse collections.

Theses and dissertations represent a wealth of scholarly and artistic content created by master’s and doctoral students in the degree-seeking process.  UNT was one of the first three American universities to require electronic theses and dissertations (ETDs) for graduation, and by 1999 all theses and dissertations submitted by students in pursuit of advanced degrees were digital. We are intensely proud of the work of our students.  Currently, more than 90% of UNT’s ETDs are freely accessible to the public via the UNT Digital Library, while less than 10% have been restricted by their authors for use by the UNT community only.

In support of academic institutions’ initiatives to advance digital scholarship for worldwide research, we started a new project contacting UNT alumni who had restricted their ETDs in perpetuity. We contacted about 700 ETD authors, asking their permission to remove the restrictions from their theses or dissertations and make them openly available in the UNT Digital Library. This poster provides a preliminary analysis of UNT’s efforts to make students’ work accessible to a wider global audience.

Keywords
electronic theses and dissertations; digital curation; open access; ETD

Re-engineering a Website into a Digital Humanities Project – We Think!
Lynn Johnson, Ramona Holmes

Abstract
What happens when you have a website that needs a facelift and you want to evolve its contents into a digital humanities project? What makes it different? This poster session explores a process we are attempting at the University of Texas at Arlington Libraries. Taking an amazing collection of US-Mexico War materials in our Special Collections, harnessing close collaboration with our Center for Greater Southwestern Studies, and re-imagining a website has been a six-month process that tapped into an academic partnership and an internal re-organization. We invite you to examine our painful process, which began with a project with no documentation, a small web presence, and completely new personnel in a brand-new unit. Learn from our unpleasant experience so you never have to go through this yourself!

Keywords
digital projects; website; humanities; collaboration

Sharing Research Broadly: Three Minute Thesis (3MT®) at Texas A&M University
Laura Hammons, Joelle Muenich

Abstract
The Three Minute Thesis (3MT®), initiated at the University of Queensland, Australia, is a competition in which graduate students attempt to convey the impact of their research to a general audience using compelling words and delivery, all in just three minutes. Prompted by the Conference of Southern Graduate Schools’ inclusion of a regional 3MT® competition at its 2014 Annual Meeting, Texas A&M University organized a campus-wide effort to promote, educate about, and host a 3MT® competition. The 3MT® provides an excellent opportunity to promote graduate student thesis and dissertation research in transformative ways, while providing professional development to graduate students in the fine skills of orally and visually communicating the purpose and impact of their research to the world. As students generally engage in 3MT® efforts prior to or in parallel with completing their degree programs, it also holds the potential, through collaborations among the Graduate School, Library, and others, to demonstrate to graduate student participants the value of scholarly communication and to enrich the electronic thesis and dissertation with video that is more broadly appealing. While Texas A&M University has only begun to consider these possibilities, the success of our first 3MT® competition holds promise for future initiatives. This poster will introduce the 3MT® program at Texas A&M University and consider the elements necessary for developing a successful initiative.

Keywords
electronic theses and dissertations

Texas Cowboy Churches: A Collection of Oral Histories
Ann Ellis

Abstract
The poster exhibits a project that is collaborative in nature and unique in that it features a small demographic group not widely represented in current research. The project showcases primary resources for researchers in the areas of religion and Western American cultural heritage.

The Center for Digital Scholarship at Stephen F. Austin State University Library manages a variety of collections in its digital repository. The Texas Cowboy Church Oral History Project, a part of the Library’s Oral History Collection, is a unique collection that highlights a selected group of Cowboy Church members and pastors.

The series of interviews and oral histories was conducted by Jake McAdams, a graduate student in the Public History program in the Department of History at SFA.  His work preserves the idiosyncratic practices and attitudes of those affiliated with Cowboy Churches in Texas. Jake visited Cowboy Church members in several Texas locales and interviewed them with questions designed to explore their background and feelings regarding religion, and the reasons they selected a Cowboy Church as their religious community.  His interviews provide interesting primary source material for the study of a growing religious and cultural phenomenon.

The Center for Digital Scholarship worked with Jake to design and implement the digital presentation of his research project, and created the structural and descriptive metadata for the collection.

Keywords
digital libraries; oral histories; Cowboy Churches

Texas Documents: What are They Good For?
Krystal Schenk, Jeff Downing

Abstract
Although some people might have thought to use these documents as fuel to keep warm this past winter… UT Arlington Libraries has decided to digitize a core collection of legacy documents produced by Texas state agencies.  What seemed like a straightforward project (select documents, scan, and upload to our institutional repository) quickly became complex.  Problems included creating a comprehensive list of our state document holdings, mapping titles and agencies to degree programs, deciding which titles were candidates for scan-to-destroy versus scan-to-retain, and determining how to use existing metadata from our catalog to populate the institutional repository metadata (as sketched below).  Early successes include developing a better understanding of how to find holdings for state documents and building closer working relationships with our Special Collections, systems, and cataloging units.
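
One plausible shape for that catalog-to-repository mapping, sketched with the pymarc library; the field choices and the input file name are illustrative assumptions, not the project’s actual crosswalk.

    from pymarc import MARCReader  # pip install pymarc

    def marc_to_dc(record):
        """Map a few common MARC fields onto Dublin Core elements."""
        dc = {}
        for field in record.get_fields("245"):
            dc["dc.title"] = " ".join(field.get_subfields("a", "b")).rstrip(" /:")
        for field in record.get_fields("100", "110"):  # personal or corporate author
            names = field.get_subfields("a")
            if names:
                dc["dc.creator"] = names[0].rstrip(",.")
                break
        for field in record.get_fields("610", "650"):  # subject headings
            dc.setdefault("dc.subject", []).append(" -- ".join(field.get_subfields("a", "x", "z")))
        return dc

    with open("state_documents.mrc", "rb") as f:  # placeholder batch of catalog records
        for record in MARCReader(f):
            print(marc_to_dc(record))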

A possible outcome of this project could be the creation of a statewide collaborative where individual institutions would choose to become a “center of excellence” for a particular state agency. Based on ASERL’s Collaborative Federal Depository Program, institutions would take responsibility for collecting, digitizing and making available the works of their “adopted” agencies.

Keywords
institutional repository; Texas documents; digital project

Transforming Access to Texts with 18thConnect and TypeWright
Elizabeth Grumbach

Abstract
18thConnect is a digital aggregator and virtual research environment (VRE) for eighteenth-century researchers. As part of a larger community of VREs, all organized under the Advanced Research Consortium (ARC) and based on the NINES (Networked Infrastructure for Nineteenth-Century Electronic Scholarship) model for peer review and scholarship, 18thConnect must tackle issues relevant to its period-specific research community. As a result, the TypeWright application was built for the 18thConnect platform to provide an easily accessible, crowd-sourced correction tool for eighteenth-century texts.

The TypeWright tool was designed to solve issues with Optical Character Recognition (OCR) for early printed texts, specifically those in Gale/Cengage Learning’s Eighteenth Century Collections Online (ECCO) subscription database, in order to provide accurate text for full-text searching, data mining, and the creation of digital scholarly editions. Because these texts were photographed, microfilmed, and then digitized over a period of 40 years, their quality negatively impacts OCR text output. In addition, early printing conventions, especially early typefaces and paper quality, cause OCR engines to mis-recognize the word images on a page. To foster the sustainability and use of these texts in scholarship, TypeWright was created to enable users to correct texts by hand, save their work, and share their edits with the 18thConnect community.
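
A tiny, self-contained illustration of the failure mode described above, using invented sample strings; the eighteenth-century long s, often mis-read as “f,” is the classic case.

    import difflib

    # Invented example: raw OCR of an eighteenth-century line vs. a hand correction.
    ocr_output = "Reafon and the fenfe of duty"
    hand_corrected = "Reason and the sense of duty"

    matcher = difflib.SequenceMatcher(None, ocr_output, hand_corrected)
    print(f"similarity: {matcher.ratio():.2f}")  # how far the OCR drifted from the true text

    # List the character spans the correction changed.
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        if op != "equal":
            print(op, repr(ocr_output[a0:a1]), "->", repr(hand_corrected[b0:b1]))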

For this poster presentation, I intend to focus on illuminating the following three aspects of the TypeWright tool:

1. Correcting a text in TypeWright, or, briefly explaining the accessible user interface.

When a user accesses the 18thConnect site, they can search for “TypeWright-enabled” texts, currently consisting of the 183,000 documents contained in ECCO. Once a user has selected a text, they are brought into the editing interface, which displays snippets of the page image, each paired with a text-editing box below it for transcription. The text-editing box already contains the text generated by a previous OCR process, so the user can either edit the text or confirm that the current text is correct.

2. Liberating a text in TypeWright, or, how users can request full text and XML for a document after completing correction.

After a user, or a group of users working collaboratively, has finished correcting a document, the work is reviewed by TypeWright administrators. If the work passes the evaluation process, the user(s) receive the corrected plain-text or XML/TEI-encoded files. If the work fails evaluation (which is rare), users are instructed to look for common correction mistakes and fix them.

3. Using a text after TypeWright correction, or, the benefit of crowdsourcing correction for the academic community.

Once users have received their corrected text files, 18thConnect administrators advise them to use the data in their digital projects and then submit those projects for peer review to 18thConnect. In addition, the corrected text, per our agreements with Gale/Cengage Learning, returns to that database to improve the searchability of this proprietary product, which constitutes an important resource for the eighteenth-century scholarly community.

Keywords
archives; optical character recognition; open source; tools; access; virtual research environment; digital edition

Unifying Digital Collection Software: Considerations at Biblioteca Francisco Xavier Clavigero, Universidad Iberoamericana Ciudad de México
Alma Beatriz Rivera-Aguilera, Eduardo Cortes, Efrain Juarez, Gerardo Morales, Daniel Castro

Abstract
Today, digital collection managers face challenges such as collections managed with different software, diverse content formats, differing user needs, demand for a single search box across all resources, the expectation of diverse interface options for users, the availability of mobile interfaces, digital preservation, and scholarly and institutional culture regarding digital repositories. This paper describes how the Biblioteca Francisco Xavier Clavigero at the Universidad Iberoamericana Ciudad de México is facing these challenges as it evaluates a proposal to unify its digital collection management software, along with the technical and human implications of that proposal. Preliminary conclusions include the need for research on local user needs, the importance of technical solutions that take into account not only interfaces but also preservation issues, and the necessary involvement of all stakeholders in the final decision.

Keywords
digital libraries

The University of California Shared Images Project (UCSI): Sharing Art and Architecture Visual Resources Across a Multi-Campus System
Lynn Cunningham

Abstract
The University of California Shared Images project (UCSI) is a collaborative image digitization project shared across nine University of California (UC) campuses. The UCSI collection aggregates over a hundred thousand images of art and architecture from Visual Resources Collections at the nine participating UC campuses. The Visual Resources Collections and the libraries assign a Collection Development Liaison from each campus to participate in forming the collection. The collection supports classroom instruction in architecture, history of art, art practice, social sciences and humanities departments. The UCSI project also shares collections from image vendors such as Saskia Art & Architecture, Harthill Archive, and Archivision licensed with a consortial agreement through the California Digital Library at the Office of the President of the University of California.

UCSI participants utilize web-based media management software, Shared Shelf (developed by ARTstor, http://www.artstor.org/), to build and grow the collections. Shared Shelf has integrated vocabularies, cataloging tools, and customizable metadata schemas. All UCSI participants publish their collections directly to ARTstor from Shared Shelf, and ARTstor serves as the end-user search interface for all of the UCSI contributing institutions. The ARTstor portal provides end-user access not only to the UCSI collections and the licensed vendor content, but also to over 1.5 million ARTstor subscription collection images.

This poster presents the shared objectives and outcomes of the UCSI project. The complications and successes of such a project are examined, and considerations such as project logistics, shared metadata schemas, and image specifications are also discussed.

Keywords
image digitization

Who is Using Online Special Collections? The CUL Digital Collections Case Study
Cindy Boeke

Abstract
Since 2008, Southern Methodist University’s (SMU) Central University Libraries (CUL) have digitized, cataloged, and made available on the CUL Digital Collections website some 35,000 image, text, video, and audio files from the holdings of its rich special collections. Since their inception, CUL’s digital collections have received more than 4 million page views from users around the world, who access thousands of objects portraying Texas history, art, and culture, as well as Mexico, the U.S. West and Southwest, Latin America, Europe, the Civil War, World War II, railroads, and SMU history.

CUL uses a variety of methods to track who is using our 40 digital collections, so we can prioritize future digitization projects and ensure our scarce resources are used effectively. Google Analytics, for example, provides a vast array of data that can be mined and analyzed to determine trends and popular topics on a local, national, and international basis.
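
As a small illustration of that kind of mining, the sketch below tallies page views by collection and country from a hypothetical CSV export of analytics data; the file name and column names are assumptions, and Google Analytics offers many export and API options beyond this.

    import csv
    from collections import defaultdict

    # Tally page views per (collection, country) from a hypothetical analytics export.
    views = defaultdict(int)
    with open("analytics_export.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumed columns: pagePath like "/digital/collection/<name>/...", country, pageviews.
            parts = row["pagePath"].split("/")
            collection = parts[3] if len(parts) > 3 else "(site)"
            views[(collection, row["country"])] += int(row["pageviews"])

    # Print the ten most-viewed collection/country pairs.
    for (collection, country), n in sorted(views.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{n:7d}  {collection}  ({country})")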

It is relatively simple to count page views and visits to the website. What is more difficult to document are outcomes: the ways CUL Digital Collections are being used to change fields of study or to have an impact on people’s lives.

To better understand outcomes from our digital collections, CUL has developed a user survey that is sent to researchers who license images, so we can determine how digitized items are being used to present new insights into fields of study.

Other efforts are underway to push items out via a variety of social media channels, including Instagram, Tumblr, Reddit, Wikipedia, Flickr: The Commons, Twitter, and more. The results, which are often surprising, help us uncover how CUL Digital Collections are changing people’s lives. This poster will provide examples of innovative ways people and communities around the world are using CUL’s digitized special collections, data that has opened our eyes to unanticipated topics of interest to the public, and tools that are helping us build new audiences for digital archives.

The overall topic of this poster was also discussed at a Birds of a Feather session at Digital Frontiers (Denton, September 2013) entitled “Digital Collections Usage: Analyzing Data and Documenting Outcomes.” This poster will provide much more specific examples, along with updated information.

Keywords
digital collections; usage; outcomes

Conceptualizing and Implementing a Webinar Series: Lessons learned from the Mountain West Digital Library Webinar Series
Rebekah Cummings

Abstract
Webinars are a low-cost, efficient training model that allows librarians to disseminate valuable information, connect with colleagues, and build and expand their communities beyond geographic and institutional boundaries. Yet, while many information specialists attend webinars on a regular basis, the task of hosting a webinar series may seem like a daunting and opaque challenge, even for enthusiastic webinar participants. In this poster session, Rebekah Cummings, Outreach Librarian at the Mountain West Digital Library, will demystify the process of implementing a successful webinar series, including content creation, recruiting guest speakers, software selection, promotion, hosting the webinar, and follow-up. This session will include practical advice on how to host a webinar or webinar series, the costs and benefits associated with hosting webinars, and lessons learned from the Mountain West Digital Library’s Webinar Series.

Keywords
digital libraries; webinars; information technology; outreach

OMEKA, OK! (Or, What We Learned at the DPLA)
Betsy V. Martens

Abstract
This presentation covers the experiences of one instructor and three graduate students at the University of Oklahoma School of Library & Information Studies who participated in the Digital Public Library of America OMEKA pilot project for student digital exhibition creation during the fall 2013 semester. Although our exhibit on the History of American Literature was not selected by the DPLA for national exposure, we learned a lot along the way!

Keywords
digital collections; OMEKA; DPLA; education; digital exhibitions