Bimodal interpreters, not just sign language interpreters

Sign language interpreters are spoken language interpreters too

To talk about our work, it helps to have efficient terms that accurately define it. Typically, we ASL/English interpreters call ourselves “sign language interpreters,” while we call (for example) Spanish/English interpreters “spoken language interpreters.” Yet signed language is only half our language pair; the other half is spoken language; therefore, we are also spoken language interpreters.

How to distinguish, then, between interpreters who work with two spoken languages and interpreters who work with a spoken language and a signed language? Saying “signed-spoken” and “spoken-spoken” is a mouthful. Luckily, there are better terms for this comparison: bimodal and unimodal (Emmorey, Borinstein, Thompson, & Gollan, 2008). What we share with unimodal interpreters is that we are bilingual. What sets us apart is that we interpret between two modes: signed and spoken; therefore, we are bimodal interpreters.

Visual language interpreters are aural language interpreters too

I like the name of the Association of Visual Language Interpreters of Canada better than the Registry of Interpreters for the Deaf because we are interpreters for the H/hearing as much as we are interpreters for the D/deaf. Yet the term visual language interpreters fails to acknowledge that we are also aural language interpreters. This is where bimodal is more accurate. We interpret in two modes: call them aural and visual or audible and visible; either way, we are bimodal interpreters.

We are also bimodal when we do sight translation; i.e., interpreting from written text to signed language for those who have difficulty reading. An interpreter might also do tactile sight translation for a Deaf-Blind person who does not read Braille or cannot obtain a certain document in Braille. There are many different ways we facilitate communication; not all of them are visual, but they are all bimodal.

We need terms as inclusive and specific as our work

Bimodal is an accurate and comprehensive term for what we do to facilitate communication between D/deaf and hearing people. We, as a collective of individuals, serve a diversity of deaf (not always Deaf) consumers using a variety of methods to make audible language visible and vice versa. Some use American Sign Language (ASL); some use manually coded English (MCE), a.k.a. pidgin sign English (PSE) or, preferably, contact language; some use oral methods such as mouthing and gestures; still others use cued speech. Whatever opinion people have of these modes of communication, there are D/deaf people who use them, and there are interpreters and transliterators who serve those D/deaf people and their hearing interlocutors. Not all of these methods are bilingual, but they are all bimodal.

To be even more accurate, some of us sometimes interpret using audible, visible, and tactile methods between hearing, D/deaf, and Deaf-Blind people, so when we do that, we are trimodal interpreters.

Let scholarship inform our practice

The demand for bimodal interpreting services has always outpaced the supply of available practitioners, and consequently, federal funding has primarily been directed at increasing the number of available practitioners, not on research and development. As a result, we contend that the field has adopted and maintains a “culture of practice” rather than a “culture of scholarship.” (Nicodemus & Swabey, 2011)

There is a time and place for specialized terminology. I am not suggesting we start calling ourselves bimodal interpreters outside of the profession. I do not plan to say to hearing clients, “Hi, I’m your bimodal interpreter!” I will continue to call myself an interpreter first, and an ASL/English interpreter second. I might even slip and call myself a sign language interpreter if I am careless. However, when talking about our work vis-à-vis the work of interpreters who work in spoken languages only, I would like to see us compare bimodal interpreters with unimodal interpreters instead of sign(ed) language interpreters and spoken language interpreters. Fellow interpreter educators could start by introducing the term bimodal bilingual, if they have not already done so, and fellow interpreters could use the term in professional discussions. It would be ignorant to use the same terminology we have always used when scholarship informs us of a better option. We are professionals, and part of professional practice is scholarship. I believe it is time for us to take a more global, research-based view of what we do, and start talking about it in ways that demonstrate greater awareness.

References

Emmorey, K., Borinstein, H. B., Thompson, R., & Gollan, T. H. (2008). Bimodal bilingualism. Bilingualism: Language and Cognition, 11(1), 43–61. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2600850/

Nicodemus, B., & Swabey, L. (2011). Bimodal bilingual interpreting in the U.S. healthcare system: A critical linguistic activity in need of investigation. In B. Nicodemus & L. Swabey (Eds.), Advances in interpreting research: Inquiry in action (pp. 241–259). Philadelphia, PA: John Benjamins. Retrieved from https://www.academia.edu/5270051/Bimodal_bilingual_interpreting_in_the_U.S._healthcare_system

Why there is no “Google Gesture” sign-to-speech translator

Aside from the irresponsible journalism that propagated this story in the first place, the basis for the concept is fundamentally flawed. There can be no such thing as a wristband a signer can wear that will translate their signed language into spoken language. Why not? Because signed language is not just on the hands! Signed language is on the face and the body as well. The grammar of signed language is made through eyebrow, mouth, cheek, and even nose movements. Signed language is made with head nods and shakes, head and body tilts, and even shoulder shrugs. Anyone who has ever taken an introductory course in ASL should know this.

There is one other important flaw in the concept of a gesture-to-speech translation machine, and that is the notion that there is one “sign language.” No, folks, “sign language” is not universal! No sir, no ma’am. Even if Google were able to take input from a human interface device located on a signer’s body, even one that included all the points on the face and body necessary to read signed language, Google would have to add hundreds of signed languages to its Google Translate engine. Language is culture-bound, just as gesture is culture-bound. I’d like to see how this supposed “Google Gesture” would translate the thumbs-up gesture, which can mean something like “up yours” in countries other than the United States.

American Sign Language (note that the A in ASL stands for American; i.e., not universal) is a much richer and more complex language than people give it credit for; in fact, so are all the signed languages in the world. Until enough people learn to appreciate the sophistication, complexity, and diversity of signed languages, we will continue to swallow false stories like this hook, line, and sinker.

Unicode: What the world needs now is love

Last week, a Deaf friend of mine made a good point about Unicode adding a “raised middle finger” symbol to the new standard: “They still need an ASL ILY emoji.” Right she is! If you can flip someone the bird, you should be able to say “I love you” too. Perhaps submitting a character proposal to Unicode is in order.
[Image: the I, L, and Y handshapes combining into the ILY sign]
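As an aside on what “adding a symbol to the standard” actually means: each emoji is just a Unicode codepoint with an official name. A minimal sketch in Python (assuming a Python build whose unicodedata tables cover Unicode 7.0 or later) looks up the “raised middle finger” character that Unicode 7.0 added:

```python
import unicodedata

# Unicode 7.0 (2014) assigned this codepoint in the
# Miscellaneous Symbols and Pictographs block.
middle_finger = "\U0001F595"

# Look up the character's official Unicode name.
print(middle_finger, unicodedata.name(middle_finger))
```

An ILY emoji would need its own codepoint assigned the same way, which is exactly what a character proposal to the Unicode Consortium would request.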

References

Unicode 7.0 Miscellaneous Symbols and Pictographs (PDF)

Three lessons this interpreter is learning from teaching ASL

1. It takes patience and creativity to sign with people who know little sign language.

I have a new respect for Deaf people who take the time to sign with ASL students. Having more respect for Deaf people and more creativity in how I express myself is making me a better Deaf community member.

2. I’ve been doing it wrong.

Well, maybe not wrong, but there are things I never knew, such as that Y is considered a down letter; that is, Y is made by tilting the palm downward. I’m sure this is not a hard-and-fast rule; in fact, I can see that even on the Signing Naturally DVD the language models do not always sign Y that way. Still, I never knew it ever tilted down at all. Now I see it in the way I and other signers spell the lexicalized #STYLE and #YES. I also never knew that the sign WHEN meant what day, not what time. Again, I’m sure this is not a hard-and-fast rule, but I never knew it was a rule at all. Those are just two examples of several. Learning how to refine my signing is making me a better interpreter.

3. Now I see what my students have learned.

Since many of the interpreting students and working interpreters I teach have learned ASL with the Signing Naturally curriculum, I have a better idea of what they were taught. Knowing what my students have learned is making me a better interpreter trainer.