Milestone: 250 downloads of my thesis on vague language so far

Digital Commons tells me my thesis on vague language has been downloaded 250 times as of today. That’s a far cry from the handful of people who read a thesis that’s bound and shelved!

You can read the abstract and get the PDF at no cost: Keeping it vague: A study of vague language in an American Sign Language corpus and implications for interpreting between American Sign Language and English

Don’t let Internet video bulldogs bulldoze closed-captioning in the name of progress

President Barack Obama congratulating legislators and Stevie Wonder (Photo credit: theqspeaks)
Consumer Electronics Association CEO Gary Shapiro introduces former Mass. Governor Mitt Romney (R) at CEA HQ in Arlington, VA. 5/28/2009 (Photo credit: Wikipedia)

Don’t let the Consumer Electronics Association (CEA) and Entertainment Software Association (ESA) persuade the FCC to exempt them from closed-captioning Internet video. Read the article below and click the links to read the actual petition; then, write to the FCC to uphold the 21st Century Communications and Video Accessibility Act (CVAA) that President Obama signed into law.

Trade groups hunt for online-video exemptions from disability-access rules – FierceOnlineVideo.

Participated in an ASL Hangout On Air, discussed how to have better signed language videoconferences

Google Inc., Google's U.S. headquarters (Photo credit: Yang and Yun's Album)

Naomi Black at Google headquarters invited Willie King, Jared Evans, Ben Rubin, Richard Goodrow, me (and maybe others who couldn't make it) to a Hangout On Air so she could show JAC Cook how Google's videoconferencing technology works. We talked about some of the pluses (no pun intended) and minuses of Google+ HOAs (Hangouts On Air, not Homeowners' Associations). On the plus side, you have an attractive service and you don't have to deal with firewalls; on the minus side, it is hard to have group conversations in ASL when only one signer is in a big pane and all the others are in "thumbnails" in the "filmstrip" along the bottom of the screen.

We discussed ways of moderating multi-signer videoconferences, such as having people raise their hands when they want to talk and wait to be called upon. Naomi reminded us that you can select the thumbnail of the person you want to watch in the big pane, and a few of us recommended doing away with the screen-and-filmstrip layout in favor of a multi-pane layout with equally sized panes (or one where you can control the size of each pane). Jared Evans and Willie King work at ZVRS, and they said they would like to give Google some tips on more effective multipoint videoconferencing for signed language users.

The Brady Bunch opening grid, season one (Photo credit: Wikipedia)

I am glad that Google keeps seeking the opinions of the signing communities; I just hope they are willing to change the layout of Hangouts to a "Brady Bunch" grid format, or at least offer it as an alternate layout.

How about you? Does the current implementation of Google+ Hangouts work for you, or would you like to see changes made? Please leave your thoughts in comments below and/or send your feedback to Google! 🙂

Crowdsourcing to closed-caption videos with Amara

Pictograms used by the United States National Park Service. A package containing all NPS symbols is available at the Open Icon Library (Photo credit: Wikipedia)

Yesterday's Hangout On Air on American Sign Language (ASL) and Deaf culture is now a video on YouTube, and that video is being crowdsourced for subtitles at Amara. If you've never heard of Amara (I hadn't until yesterday), it is a website dedicated to crowdsourcing the captioning of videos. Anyone can embed a video on Amara, and anyone can caption it on a volunteer basis. Captioning is very time-consuming: it involves transcription, line division, and time coding. The average rate of speech is somewhere around 5 syllables per second (Kendall, 2009, p. 145), so you have to listen to a few seconds of a video, pause it, type what you just heard, and repeat the process. The transcription then has to be time-coded; i.e., the words have to be matched with the times they appear in the video, usually at about 32 characters per line[1], so that's time-consuming too. For these reasons, when it comes to help with closed-captioning, the more the merrier, especially because so many people make videos pro bono. This video is over 48 minutes long, and of course it's pro bono. If you would like to closed-caption a few lines of the video on Amara, please do. A little work by a lot of people will get the job done.
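For the curious, the line-division and time-coding steps above can be sketched in a few lines of code. This is just an illustrative example, not how Amara actually works internally: it wraps a transcript snippet to the commonly cited 32-character line limit and formats start/end times in the SubRip (SRT) subtitle style that many captioning tools export. The function names and the sample sentence are my own.

```python
import textwrap

MAX_CHARS = 32  # per-line limit commonly cited for captions (see footnote 1)

def wrap_caption(text, width=MAX_CHARS):
    """Divide a transcript snippet into caption lines of at most `width` characters."""
    return textwrap.wrap(text, width=width)

def srt_timestamp(seconds):
    """Format a time in seconds as an SRT time code: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_block(index, start, end, text):
    """Build one numbered SRT caption block: index, time range, wrapped lines."""
    lines = wrap_caption(text)
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n" + "\n".join(lines) + "\n"

print(srt_block(1, 0.0, 3.2, "Amara is a website dedicated to crowdsourcing the captioning of videos."))
```

Even this toy version shows why captioning is slow: every block needs its own listening pass, transcription, line breaks, and a matched pair of time codes.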


1. I don't like to repeat statistics without sources, but 32 and 35 characters per line appeared often on webpages. Screen Subtitling's white paper "Closed caption subtitling" [PDF] said "the number of characters per line or row is a set limitation" (Screen, 2008, p. 2) with no specification of the limit or reference to an authority. I searched the Internet for the "set limitation" on characters per line, and I found the same numbers repeated in different places with no traceable references. One site's "Closed captioning defined" page said, "the features of traditional captioning are: … 32 characters per line," with no citation. The Welstech wiki said the Department of Education required 35 characters per line, yet when I searched the US Department of Education website, I could find no such specification. Another site's Closed Captioning FAQ answered the question, "What features are supported by CEA-608 closed captions for standard definition?" thus: "[…] A caption block can have up to 4 lines and up to 32 characters per line, although for accessibility reasons, it is recommended not to exceed 2 lines and 26 characters per line […]." I searched "CEA-608" to find the source, and I found the Consumer Electronics Association (CEA) CEA-608-E Standard Details page. Unfortunately, the standard is published in a printed book that costs $300 ($225 for members). Can anyone quote the source of authority? If so, please leave a comment.


Kendall, Tyler S. (2009). Speech rate, pause, and linguistic variation: An examination through the sociolinguistic archive and analysis project (Doctoral dissertation). Retrieved from

Screen. (2008, July). Closed caption subtitling. Retrieved from


American Sign Language (ASL) Hangout On Air, Interpreted

I participated in a Google+ Hangout On Air about American Sign Language (ASL) and Deaf culture by interpreting for Dylan, a Deaf man who shared his perspectives. I interpreted consecutively rather than simultaneously, both so that people could watch Dylan without voice interference and to provide a more accurate and natural interpretation. I interpreted for the first 15 minutes, until 7pm PDT. For the rest of the Hangout, Dylan took questions in the Chat window and answered them using his voice.