Last week I went to a movie theater to test another type of closed captioning equipment. Previously, I had tested the Sony Entertainment Access Glasses available at Regal Cinemas. You place the glasses on your head (of course) and captions appear in the lenses. The captions move as your head moves.
This system was different. The CCR-100 from USL, Inc. looks somewhat like a ViewMaster on a stick. It’s designed to fit inside the cup holders in movie theater seats. A long stem leads up to a viewing box where the captions appear. The stem is sturdy but flexible, so that the viewing box can be adjusted to the proper angle, location, and height in front of the viewer.
CCR-100 in cup holder
In a previous blog post, I introduced Re-Resolver, our experimental software preservation project in which we attempt to recreate the classic but no longer functional iOS app Resolver by analyzing its features and rewriting the app from scratch. Re-Resolver is open source and available on GitHub, and will be made available on the App Store later this summer.
With the project, we are exploring this method of software preservation and the questions it raises. Is this really preservation, or is it something else? Is the method worth the effort?
In this blog entry I’ll focus specifically on one unexpected dilemma that came up while trying to duplicate the original app: the conflict between accessibility and authenticity.
We’ve been working on an experimental digital preservation project.
Sometimes, digital preservation means something besides preserving an exact bit-by-bit copy of an item. Computer hardware and software required to support digital documents becomes obsolete and falls away from common existence, or else evolves so significantly that documents created with original versions of the hardware and software can no longer be used.
As an example, let’s imagine that we’ve collected a set of video journals of undergraduate students that were kept as part of an introductory but innovative telecommunications course in 1997. These hypothetical video journals were given to us on 100 MB Zip disks, and were encoded in RealVideo. In order to preserve these videos, one of the first things we’d want to do is copy them to another location besides the Zip disks, because we don’t want to depend on the storage lifespan of a single Zip disk. This move would support not only preservation, but also access. In 2016, it’s unlikely that any person – whether a casual user or a professional – would have a Zip drive available to access these files. It’s better to place them online somewhere, so that they can be accessed worldwide. Additionally, the RealVideo format is no longer practical – few people have players capable of playing RealVideo installed in their web browsers. So we’d want to transcode these videos into other formats, for example MP4/H.264 for access, and Matroska/FFV1 for preservation. Zip drives are obsolete for our purposes, and RealVideo is obsolete for our purposes, but the content of the video journals may yet be interesting to someone, so we preserve the content in a way that is usable.
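As a sketch of what that transcoding step might look like, here are two hypothetical FFmpeg invocations (the filenames are made up, and this assumes an FFmpeg build with the RealVideo/RealAudio decoders, which mainline FFmpeg includes):

```shell
# Access copy: H.264 video and AAC audio in an MP4 container,
# playable in modern browsers and on mobile devices.
ffmpeg -i journal-01.rm -c:v libx264 -crf 18 -c:a aac journal-01.mp4

# Preservation copy: lossless FFV1 video and FLAC audio in Matroska,
# favoring fidelity over file size.
ffmpeg -i journal-01.rm -c:v ffv1 -level 3 -c:a flac journal-01.mkv
```

The two-copy approach reflects a common archival pattern: a lossy, widely supported derivative for everyday access, and a lossless master for long-term preservation.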
Sometimes the digital artifact that we are most interested in preserving is not a digital document created by a piece of software, but rather the software itself.
There’s an iPhone app that was released several years ago, in April 2010, by an Icelandic company called Fancy Pants Global. Fancy Pants Global described the software as “a handy little app that will help its users make those tough everyday decisions.” This decision making app is very basic, but highly rated by users because of its simple and attractive interface.
This week I learned more about how captions work at the movie theater. I went to a Regal cinema and saw the new Ghostbusters (2016) movie.
The captions at the theater aren’t projected on the screen, where everyone would be able to see them, and possibly be distracted by them. (The exception, of course, is for foreign-language films that have been subtitled – though subtitles typically don’t describe other audio effects and music, while captions for the deaf and hard of hearing will contain descriptions of important sounds.) Theaters that offer captions typically have special equipment available to support this feature.
Update 2016-08-15: The DC Deaf Moviegoers group has informed me that words on the screen are also an option.
This week Virginia Tech libraries are celebrating dissemination and free use of data with Open Data Week. As part of Open Data Week, Philip Young, curator of the Open@VT blog, worked with Code for New River Valley – a volunteer organization of software developers – to create technology-focused sections about web scraping and APIs.
Last year I started speculating about a library mobile development lab at Virginia Tech, but I never fully developed the idea. Now happens to be a good time to communicate this idea to some of my colleagues in the library, and at other libraries, so I’m publishing some of my thoughts and will continue to revise them as things happen.
According to an April 2015 report from the Pew Research Center, 64% of all adults, and 85% of adults between the ages of 18 and 29, own a smartphone. In April 2014, analytics provider Flurry reported that the average U.S. consumer spent 2 hours and 42 minutes per day on a mobile device, including 2 hours and 19 minutes using apps. Banks, retail outlets, and towns have joined other organizations in developing and releasing mobile applications.
The popularity of smartphones among the typical college-aged population has created a demand for high-quality, native smartphone apps that provide access to educational resources and library services.
The lack of a centralized campus authority to provide I.T. services beyond basic network infrastructure, security, and mass software licensing needs has created an opportunity for the library to develop its own innovative mobile applications.
In addition, the status of the new library as the center of campus collaboration will allow the library to take a leadership role in helping other educators at the university to create learning technology tools that appeal to today’s device-wielding populace.
Today Patrick Murray-John, Omeka Director of Developer Outreach, is visiting Virginia Tech and hosting Omeka technical sessions and play dates along with Amanda French.
I’ve wanted to learn more about Omeka for a while, as a pathway to working on interesting digital humanities projects. Today I dug up an experimental hack project that I put together in September: the ColorCodeSyntaxHighlighting plugin.
The ColorCodeSyntaxHighlighting Plugin for Omeka displays computer source code files inline on Omeka item pages, and performs color syntax highlighting similar to what you might find in your IDE.