
Webinar: How Accessible Video Helps Create Inclusive Learning Environments

In this webinar we discuss the features of accessible video players, how those features can be used to implement the principles of universal design for learning (UDL), and how captions and audio description can enhance learning and create a more inclusive learning environment. This is an updated version of webinars we have presented previously on accessible video players and the benefits of captioning and described video.

Note: The following video should be considered an alternative to the Annotated Transcript, which contains descriptions of visual references in the media. Also, the pages listed in the Resources section are primarily text-based, and will be useful to those who do not have access to the visual content.

Video

Annotated Video Transcript

>> Art Morgan: Hello, this is Art Morgan. I’ll be presenting today on the topic of accessible video players, and how video players tie in with the concept of universal design for learning. Kevin Erler will also be joining us for the Q&A session.

In this session we’ll review the characteristics of accessible video players and provide examples. We’ll use the CaptionSync Smart Player for most of our examples, but many video players follow these accessibility guidelines to varying degrees, and we’ll be happy to talk about the differences.

We’ll also learn about how accessible video fits into the universal design for learning framework, which is often referred to as UDL. And we’ll talk about how captions and audio description can enhance learning, sometimes in unanticipated ways.

Let’s start by taking a look at what the CaptionSync Smart Player is. At its core it’s an accessible video player, designed to meet WCAG 2.0 Level AA guidelines. It adds the interactive transcript functionality that you may already be used to from your YouTube account. It’s much more than just an interactive transcript player, however. It allows viewers to benefit from the caption and description data to easily navigate and search the video, look up terms in the video, and clip and share portions of the video, providing a more interactive learning experience.

Let’s talk briefly about how the Smart Player works. The video is streamed from its original source, which could be a URL to a video, such as a YouTube or Vimeo video, or a video source that CaptionSync integrates with via URLs, such as Kaltura or Google Drive. We don’t make a local copy, which is why you need to provide us with a URL to the video to begin with. This is really important, because it allows you to caption and play content in the Smart Player that you don’t own or otherwise have a copy of. The Smart Player then puts a frame around the streaming video and pulls the caption data directly from your CaptionSync account. It adds the captions below the video stream, and displays the interactive transcript to the side of the video. In addition, it uses the caption data to provide a number of additional useful features that we’ll see in just a moment.
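Note: To make this architecture concrete, here is a minimal sketch of the general pattern using only standard HTML5 elements. This is an illustration, not the Smart Player’s actual implementation, and both URLs are hypothetical; the key idea is that the video streams from its original source while the caption data is kept separate.

    <!-- Illustration only: the video streams from its original source,
         while captions are loaded from a separate WebVTT data file.
         Both URLs are hypothetical. -->
    <video controls width="640">
      <source src="https://example.com/exoplanets.mp4" type="video/mp4">
      <track kind="captions" src="exoplanets.vtt" srclang="en"
             label="English" default>
    </video>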

OK, let’s look at the features of the CaptionSync Smart Player, and how these features can make educational video more inclusive and more interactive. I have an example here, with the video pane on the left. Below the video is an area for traditional captions. We put the captions below the video so they don’t obscure anything in the video itself.

Below that we have the video controls. These controls can be operated from the keyboard, which is important for people who can’t use a mouse, and they’re also high contrast, which is important for users with moderately low vision. Included in the controls are buttons for speeding up or slowing down the video, which many users find very useful. Some learners like to speed up the video to review certain content more quickly, and others like to slow it down.
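Note: As a rough illustration of how accessible speed controls can be built, here is a sketch using standard HTML5 features; the element ID and file name are hypothetical, and this is not the Smart Player’s own code. Native buttons are keyboard-focusable by default, and playbackRate is a built-in property of the video element.

    <video id="lecture" controls src="lecture.mp4"></video>
    <!-- Native <button> elements are reachable with Tab and activated
         with Enter/Space, so no extra keyboard handling is needed. -->
    <button onclick="setSpeed(0.75)">0.75x</button>
    <button onclick="setSpeed(1.0)">1x</button>
    <button onclick="setSpeed(1.5)">1.5x</button>
    <script>
      function setSpeed(rate) {
        // playbackRate is a standard HTML5 media property.
        document.getElementById('lecture').playbackRate = rate;
      }
    </script>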

The interactive transcript pane is on the right. This particular video is about exoplanets. Let’s say I’ve been assigned this video and I’m not sure exactly what the term exoplanet means. I can set the player to “Show definition from dictionary,” select the word exoplanet, and I get a definition.

Now let’s say I know that one of the questions I need to answer is how astronomers can use the light spectrum of a star to determine the chemicals in the planet’s atmosphere. I can search for “spectrum,” and jump to an appropriate spot in the video to review that portion.
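Note: Under the hood, a search-and-jump feature like this can be built on the browser’s standard text track API. The sketch below shows the general technique only, not AST’s actual code; the function name is made up for illustration, and it assumes the captions track is already loaded.

    <script>
      // Find the first caption cue containing a search term and jump
      // the video to that point. Assumes video.textTracks[0] is the
      // captions track and its mode is "showing" or "hidden" so the
      // cues are populated.
      function jumpToTerm(video, term) {
        const cues = video.textTracks[0].cues;
        for (let i = 0; i < cues.length; i++) {
          if (cues[i].text.toLowerCase().includes(term.toLowerCase())) {
            video.currentTime = cues[i].startTime;  // seek to the cue
            video.play();
            return;
          }
        }
      }
      // e.g. jumpToTerm(document.querySelector('video'), 'spectrum');
    </script>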

>> Video Narrator: By observing the light of a star during a transit, astronomers can find the fingerprint of the exoplanet’s atmosphere in the spectrum of the star. Each element creates distinctive dark lines, absorption lines in the spectrum. So these lines act as chemical fingerprints.

>> Art Morgan: Now let’s imagine that you’re blind. The dialogue in this video is quite informative, but it definitely doesn’t fully describe what’s being shown on screen. With the Smart Player and our audio description services, there’s a solution for that problem.

When audio description has been added to the video, we show the descriptions in the transcript pane with a light blue background. A blind user can either read the descriptions with a screen reader, or play the video and let the player insert audio descriptions generated using text-to-speech. Here’s an example.
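Note: The text-to-speech piece can be illustrated with the browser’s standard Web Speech API. The sketch below shows the general technique only; it is not AST’s implementation, and the function name is hypothetical. The video pauses, the description is spoken, and playback resumes.

    <script>
      // Speak a description with the standard Web Speech API, pausing
      // the video while the synthesized audio plays.
      function speakDescription(video, text) {
        video.pause();
        const utterance = new SpeechSynthesisUtterance(text);
        utterance.onend = () => video.play();  // resume afterwards
        window.speechSynthesis.speak(utterance);
      }
    </script>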

>> Text-To-Speech: A spectrum line runs across the bottom of the screen with colors ranging from purple to blue to green, yellow, and orange. Black lines intersect the spectrum at various points and are identified as ozone, oxygen, water, methane, or CO2 markers. The intersecting black lines are of various widths.

>> Video Narrator: So these lines act as chemical fingerprints revealing the makeup of the atmosphere. Also, the stronger the line, the more of the corresponding element is present in the atmosphere.

>> Art Morgan: OK, so that’s audio description. Let me just review briefly a couple of the other control buttons below the video. The Preferences button, represented by a cog icon, allows you to hide the transcript pane, or configure it so that it appears underneath the video instead of to the side. And incidentally, if you use the Smart Player on smaller screens, it will automatically move the transcript pane below the video, to make both the video and the transcript more readable on tablets and mobile devices. Finally, the Clip and Share feature is useful if I’m a student and I want to share one portion of the video with a classmate or study group. I can select a specific portion of the video, select Generate URL, and then share that with my classmates.
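Note: One standard way to point a link at a portion of a video is the W3C Media Fragments syntax, where #t=start,end marks a time range in seconds. The Smart Player’s Generate URL button may work differently; the snippet below is only a hypothetical sketch of the idea, with a made-up helper name and example URL.

    <script>
      // Hypothetical helper: build a link to a clip using the
      // Media Fragments time-range syntax (#t=start,end in seconds).
      function makeClipUrl(videoUrl, startSec, endSec) {
        return videoUrl + '#t=' + startSec + ',' + endSec;
      }
      // makeClipUrl('https://example.com/exoplanets.mp4', 90, 150)
      //   -> 'https://example.com/exoplanets.mp4#t=90,150'
    </script>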

So in summary, you can see that making video accessible isn’t just about adding captions and checking off a box that says your video is accessible. The Smart Player adds many benefits from a pedagogical standpoint. It’s consistent with universal design for learning principles in that it provides secondary representations of the content. It also lets learners engage with the content in non-linear ways that traditional video doesn’t offer.

One other powerful feature of the Smart Player is the ability to embed the player right into your own webpage. Once you have a Smart Player link, you can use the Embed button to generate an embed code, which is just a simple snippet of iframe code that you can drop into your own webpage to make the Smart Player appear right on your page. This feature is useful for embedding the Smart Player directly into your LMS pages. That’s a topic we cover in more detail in a separate webinar session, but I’m showing here an image of the Smart Player embedded in a D2L Brightspace page.
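Note: An embed code of this kind is simply an iframe whose src points at the player link. The snippet below is a hypothetical example; the actual URL and dimensions come from the code generated by the Embed button in your CaptionSync account.

    <!-- Hypothetical embed snippet; the real URL is produced by the
         Embed button for your specific video. -->
    <iframe src="https://example.com/smartplayer?id=YOUR_VIDEO_ID"
            width="800" height="450" allowfullscreen
            title="CaptionSync Smart Player"></iframe>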

One other key feature of the Smart Player is that it enables you to caption other people’s YouTube videos. If you’ve ever needed to present somebody else’s YouTube video and make it accessible, you’ll know that this can be quite a challenge, but the Smart Player makes it easy. We also cover this in more detail in a separate webinar session.

Now let’s shift gears and talk about some of the hidden benefits of video accessibility. Sometimes these factors are what help you “sell” your department leaders on why you should spend some time and money on video accessibility.

The first benefit is improved comprehension when watching videos with good captions. We have another slide with details on that in a moment.

The second benefit is improved indexing and search. If you’ve captioned and described your videos, your content becomes more discoverable, and it’s easier for both students and faculty to find specific content that they want to review or use in classes.

The third hidden benefit is viewer flexibility. If I’m on the bus or train, or in a noisy location, I can still watch a video and fully grasp the content if it has captions. Similarly, if I’m going for a run or a hike, sometimes I like to listen to courses with audio only. If the videos are fully described, I can still follow along with just audio, without looking at a screen.

The next benefit is improved accessibility for English as a Second Language viewers. Many people who are learning English understand the written content in captions better than the audio. And in fact, this is often true even for people who are very fluent in English. There are often words in a video that you can’t quite make out from the audio track alone, especially if there are accents, jargon, or new technical terms involved.

And this brings me to the last point on this slide: providing captions, descriptions, and other enhancements that are typically considered accessibility elements leverages the principles of universal design for learning by providing additional representations of the content, and additional ways to interact with it.

We’ll talk a little bit more about the UDL aspect at the end of the presentation.

On the last slide I mentioned the potential for improved comprehension with captioned video, and I’ll point out a couple of studies that highlight this point.

The first is a study done at Northern Illinois University by Dr. Bryan Dallas and his colleagues. They ran an experiment with two sections of a large lecture class at the university, both with similar demographics. One section watched a 15-minute TED Talk with captions, and the other section watched the same video without captions. Both sections then took a carefully designed quiz to assess their understanding of the video. You’ve probably guessed which group performed better on the quiz; the group that watched with captions did significantly better than the group without captions. And this was a video with high-quality audio, and there were no students with hearing accommodations in either group.

The second study is one that Kevin did here at AST several years ago. This research showed how important caption quality is to comprehension. In this experiment, people reviewed documents with error rates ranging from zero to 20 percent, and rated the intelligibility of the documents on a scale from zero to 10. Somewhat surprisingly, even small error rates of two to three percent reduced intelligibility significantly, and error rates of four or five percent made a document almost unintelligible.

This highlights the point that poor quality captioning, such as machine-generated captions or captions created using hybrid or crowd-sourcing techniques, can in many ways be worse than no captions at all. The point is that if you’re going to provide captioning, you need to be all-in and do it with top quality. Anything less is a disservice to your viewers and learners, and is essentially a waste of time and resources.

On this slide, by the way, we have a photo of light bulbs, to emphasize the point that making video accessible is largely about making it easier to understand.

Finally, let’s review some of the explanations as to why captions, descriptions, and more interactive video players may enhance learning. Earlier I described the research by Dr. Bryan Dallas showing that students who viewed a 15-minute video with captions performed better on an assessment test than those who viewed the same video without captions.

This is consistent with the UDL principle that providing additional representations of the content, in this case a written representation of the audio content, can help many learners understand the content better. Audio description is less frequently thought of as a secondary representation; most people think of it as an accommodation. But you can imagine how the same principles might apply: some people, even if they are not blind, might benefit from being able to read or hear a description of what they are seeing. The descriptions can augment people’s understanding of the visual content.

I’m showing on this slide an image of the brain with the text “Recognition Networks: The What of Learning.” This image is from CAST.org, which is an excellent resource for more information about universal design for learning.

That’s it for this session. Our resource links include a link to information about the Able Player, which is an open source accessible video player, as well as detailed info about the Smart Player on our Support Center site. Our blog has links to past webinars and other video accessibility articles. If you have any further questions, please let us know.

Resources

Please contact us if you would like us to do a live webinar with Q&A about video accessibility for your campus or organization.
