
Online Teaching Closed Captioning Myth Buster

I had the good fortune to be able to attend the Online Teaching Conference in San Diego this June.  AST was a sponsor of the event, which drew about 700 educators from across California, as well as from several other states. It was great to be able to talk with so many passionate educators, all of whom are working to make quality education available to as many people as possible — a goal that we share here at AST. All of these educators were already able to see past the myth that to get a great education, learners need to go to a classroom three times a week, preferably in a building with ivy growing on the walls. They knew that great teachers can help produce great learning outcomes in online and hybrid classes, expanding learning opportunities to many who were previously unable to attend college courses.

However, when it comes to closed captioning the videos used in the courses these educators were teaching, I was surprised to learn that several myths are widely held. Most of the educators at this particular conference were from the California Community College system and are working on courses that are part of the Online Education Initiative, so some of the comments below are specific to those institutions. That said, much of what I cover below applies to all distance education and hybrid classes.

Top 5 Online Teaching Closed Captioning Myths

Closed Captions Can Be Created Automatically Using Speech Recognition

This is a pretty common myth among those who have not yet experimented with closed captioning. Speech recognition applications, such as Apple’s Siri and Nuance’s Dragon NaturallySpeaking, have given many people the impression that speech recognition is “almost perfect” at recognizing what we say and turning it into text. We know these speech recognition applications make mistakes, but we have the impression that it’s a simple matter to correct those mistakes. While in some cases that’s true, in practice Siri and Nuance’s Dragon have several big advantages that don’t apply to creating closed captions for video. For one thing, they are recognizing the voice of only one speaker, and they can learn the nuances of that speaker’s voice over time (pun intended). In addition, they often operate on a constrained vocabulary, leveraging the context and history of the user. When you ask Siri “where’s the nearest Starbucks?”, the speech recognition engine knows that the word “Starbucks” is much more likely in this context than similar words or phrases, such as “star trucks.” However, those same rules don’t apply when you are captioning a video about organic chemistry.

The fact is that when speech recognition is used to create closed captions, the accuracy is often in the 50% to 80% range. Yes, these results can be edited and corrected, but the average person could easily spend hours correcting the captioning errors generated by speech recognition in even a short video. It’s definitely more efficient to have humans transcribe the video and use that transcription as the basis for the closed captions.
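To make those accuracy numbers concrete, here is a minimal Python sketch of how caption accuracy is commonly measured: the word error rate (WER), i.e. the fraction of words in a human reference transcript that the automatic output substitutes, drops, or inserts. The two sample sentences are invented for illustration and are not from any real recognizer.

# Illustrative sketch: measuring caption accuracy as word error rate (WER),
# i.e. the fraction of words an automatic transcript gets wrong relative to
# a human reference transcript. The sample sentences are made up.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Return WER = (substitutions + deletions + insertions) / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # Standard edit-distance dynamic programming over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    reference = "the aldol condensation forms a new carbon carbon bond"
    asr_output = "the all doll condensation forms a new car been carbon bond"
    wer = word_error_rate(reference, asr_output)
    print(f"Word error rate: {wer:.0%}")      # roughly 44% on this made-up example
    print(f"Caption accuracy: {1 - wer:.0%}")  # roughly 56%

On this invented example the recognizer gets only about 56% of the words right, squarely in the range quoted above, and every one of those errors has to be found and fixed by hand before the captions are usable.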

Closed Captioning Takes Too Long (When Using Humans)

Many people at the Online Teaching Conference told me that they had heard it takes two weeks to get closed captions back from a captioning service. While this might have been true in the old days when videos were shipped on video tape, and it may still be true for some very low-tech captioning services, the typical turnaround time for closed captions created using CaptionSync by AST is just two days. CaptionSync automates as much of the process as possible to make captioning very efficient for the user, while still relying on professional human transcribers for transcription.

Closed Captioning Services are Too Expensive for Our College

At first blush, using a closed captioning service does seem expensive, especially if you are comparing it to the various “free” or low-cost options now available. However, free and cheap are never as cheap as they seem, on many levels, and this is especially true for California Community Colleges, which have access to the DECT closed captioning grant program. One factor to keep in mind is potential legal costs and liability. Recent lawsuits such as those filed against Harvard and MIT by the National Association of the Deaf point out that low-quality captions are no better than having no captions at all, and in many cases they are worse, causing confusion and wasted time for the deaf, hard of hearing, and others who rely on them. Even if your college or university’s staff take the time to correct all the errors in low-quality closed captioning, the cost of that staff time adds up, and it is almost always more cost-effective to use a professional captioning service like CaptionSync.

Finally, on the point of cost, in many cases there is grant funding or a centralized pool of funding available for captioning videos that you use in your online courses. The DECT grant program mentioned above covers the cost of closed captioning videos used in approved online and hybrid courses at California Community Colleges. Even if you are not at a California Community College, if you are an instructor you should not assume that you are individually responsible for the cost of closed captioning. Contact your disability services or educational technology department to find out what is available, as they have the ability to set up CaptionSync accounts with preferred educator pricing. This leads into the next myth.

It’s Easier to Create Closed Captions for My Videos Myself

Many professors and instructors I spoke with at the Online Teaching Conference said they had heard that the best practice was to create closed captions themselves for any videos they created for their classes. If you write your own script for your videos and strictly follow that script when you record, it is possible to save some cost by generating captions from your script. YouTube can even do this “for free,” but the results are less than perfect. It takes time to format and upload your transcript, and if you didn’t follow the script perfectly, or your video has music, sound effects, or background noise, it can be time-consuming to get accurate captions this way. As an instructor, the time you spend creating closed captions could be better spent on creating your course content and working with your students.
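For readers curious what “generating captions from your transcript” actually produces, below is a minimal Python sketch that writes a standard SRT caption file from a handful of hand-timed segments. The lecture text and timings are hypothetical; in practice, getting those timings to match the audio (or fixing the ones an automatic aligner guesses) is where the hidden effort lies.

# Minimal sketch: writing an SRT caption file from hand-timed segments.
# The lecture text and timings below are invented for illustration.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as HH:MM:SS,mmm as required by the SRT format."""
    total_ms = int(round(seconds * 1000))
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

# (start_seconds, end_seconds, caption_text) -- hypothetical lecture intro
segments = [
    (0.0, 3.2, "Welcome to week three of Introduction to Organic Chemistry."),
    (3.2, 7.5, "Today we'll look at the aldol condensation reaction."),
    (7.5, 11.0, "[upbeat intro music]"),
]

with open("lecture03.srt", "w", encoding="utf-8") as f:
    for index, (start, end, text) in enumerate(segments, start=1):
        f.write(f"{index}\n"
                f"{srt_timestamp(start)} --> {srt_timestamp(end)}\n"
                f"{text}\n\n")

Each cue is just an index, a start/end time, and the text, yet every cue has to line up with the audio, including non-speech sounds like the music above, which is exactly the kind of detail that eats up instructor time.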

Closed Captioning Only Helps the Deaf and Hard of Hearing

This is the biggest myth of all, though many people are starting to realize it is not true. Closed captioning is useful for English language learners. It is also consistent with the principles of Universal Design for Learning, providing an additional means of representation for content and an additional channel for engagement. Finally, captioning enables in-video search, using tools like the CaptionSync Smart Player or other search tools available in lecture capture and online video platforms.
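As a rough illustration of the in-video search point, the sketch below indexes caption cues by timestamp and returns the moments where a keyword is spoken. The cue data is invented; a real player such as the CaptionSync Smart Player builds this kind of index from the actual caption file.

# Illustrative sketch of caption-based in-video search: because captions tie
# text to timestamps, a keyword lookup can return the exact moments in a
# video where a term is spoken. The cue data below is hypothetical.

caption_cues = [
    (12.0, "Today we'll look at the aldol condensation reaction."),
    (95.5, "The aldol product forms a new carbon-carbon bond."),
    (240.3, "For homework, compare this to the Claisen condensation."),
]

def search_captions(cues, keyword):
    """Return (seconds, text) for every cue whose text contains the keyword."""
    keyword = keyword.lower()
    return [(start, text) for start, text in cues if keyword in text.lower()]

for start, text in search_captions(caption_cues, "aldol"):
    minutes, seconds = divmod(int(start), 60)
    print(f"{minutes:02}:{seconds:02}  {text}")
    # A player would seek the video to this timestamp when the result is clicked.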

Hopefully this helps dispel some of the myths related to closed captioning for online teaching. If you would like to learn more, please contact us for a free demo, or download our closed captioning handbook for higher education.

1 Comment

  1. Two overlooked uses for closed captioning: 1) English as a second language. 2) Children may pick up reading skills at an early age by multi-sensory input. They hear, see action, see the words. Both of my children read before Kindergarten. I am deaf and they always had CC.
