Real-time subtitling in Taiwan
By Sheng-Jie Chen (National Taiwan University of Science and Technology)
Abstract
Italian:
Il presente articolo identifica alcuni problemi e possibili soluzioni connessi alla sottotitolazione in tempo reale a Taiwan. Delinea diverse modalità di sottotitolazione in tempo reale nonché la legislazione in materia e ne evince che ogni tipo di sottotitolazione in tempo reale è in qualche modo legato all’interpretazione simultanea. Vengono anche proposte alcune attività utili alla formazione del sottotitolatore in tempo reale ed è sottolineata la necessità di ulteriori ricerche sull’uso del riconoscimento del parlato sia per la produzione di sottotitoli in tempo reale che per la sottotitolazione automatica.
English:
This paper identifies the problems of real-time subtitling in Taiwan and suggests some solutions. It then delineates different types of real-time subtitling and related legislation issues, concluding that all real-time subtitling performed in Taiwan is in some way related to simultaneous interpreting. Many factors that may affect real-time subtitling have been identified. Based on the results of this study, the author suggests some tasks for real-time subtitler training and calls for further research on the use of speech recognition technology both to produce real-time subtitles in Chinese and for automatic subtitling.
Keywords: real-time subtitling, sottotitolazione in diretta, news translation, traduzione dei notiziari, anaphoric presupposition, rispeakeraggio, voice recognition technology, riconoscimento vocale automatico
©inTRAlinea & Sheng-Jie Chen (2006).
"Real-time subtitling in Taiwan"
inTRAlinea Special Issue: Respeaking
Edited by: Carlo Eugeni & Gabriele Mack
This article can be freely reproduced under Creative Commons License.
Stable URL: https://www.intralinea.org/specials/article/1693
1. Introduction
This study mainly investigates the use of real-time subtitling in the news department of a television company in Taiwan. In Taiwanese academia, subtitling generally means off-line subtitling of films or television programs from foreign languages into Chinese. Courses on off-line subtitling are offered in the foreign language departments of a few universities, mainly focusing on movies and news programs. Only on rare and special occasions do TV companies in Taiwan provide real-time subtitles. The most noticeable recent case of real-time subtitling occurred when charges of corruption against the President of Taiwan were discussed in several speeches broadcast by two TV companies.
2. Factors affecting real-time subtitling in Taiwan
2.1 Languages, politics, and real-time subtitling
It is necessary to understand languages and politics in Taiwan in order to realize why real-time subtitling in Chinese is sometimes provided during live broadcasts. Taiwan has diverse languages and dialects, but only one official common written language. In terms of the population of Taiwan, the spoken languages are Taiwanese (including Hakka) 84%, Mandarin Chinese 14%, and the dialects and languages of nine aboriginal tribes 2% (CIA 2006). It is impossible for the speakers of these languages to communicate with each other orally; however, all Taiwanese share Mandarin Chinese as their common written language, which is also used in real-time subtitling. According to Wang (1960), Mandarin Chinese and Taiwanese share 48.9% of cognate words, while German and English share 58.5% of cognate words. That is to say, there is a greater similarity between German and English than there is between Taiwanese and Mandarin Chinese. However, there are no statistics on the number of bi- or plurilinguals in Taiwan.
In terms of politics, there are two major rival political groups in Taiwan, the Pan-Green and the Pan-Blue. The former is currently in power and favors Taiwan independence, whereas the latter tends to believe in unification with the People’s Republic of China. Recently, Taiwan President Chen Sui-bian and his family members have been accused of corruption and embezzlement by his opponents and have been under investigation. The leader of the opposition (KMT, or Nationalist Party), Taipei City Mayor Ma Ying-Jeou, has a Ph.D. from Harvard Law School. The majority of President Chen’s supporters are speakers of Taiwanese living in central and southern Taiwan, while the majority of supporters of Taipei Mayor Ma Ying-Jeou are speakers of Mandarin Chinese living in northern Taiwan.
2.1.1 The background of the study
On June 20, 2006 at 20:00, the Taiwanese President Chen Sui-bian delivered his Report to the People of Taiwan to deny the opposition parties’ corruption charges against his family and numerous other accusations[1]. According to Lin (2006), when President Chen delivered the speech, he spoke mainly in Taiwanese (99% of the speech) because the majority of his supporters are Taiwanese speakers. However, the ROC Presidential Office arranged for the speech to be interpreted simultaneously into Mandarin Chinese in a conference room for foreign correspondents[2]. TVBS, a leading television company in Taiwan, broadcast this speech with real-time subtitling. On June 21, Ma Ying-Jeou, the leader of the major opposition party KMT, gave a speech to rebut the President’s report[3]. Again, TVBS provided real-time subtitles of the speech. Finally, on November 5, 2006, President Chen delivered a second address to further rebut the corruption charges against him[4]. This time, both TVBS and CTITV, another leading national television company, provided subtitles in real time. In this particular speech, President Chen alternated between Taiwanese and Mandarin Chinese, and even repeated some parts of the speech in both languages, as if to make sure that viewers of both could understand the points he was trying to make. When he wanted to make a point he deemed particularly important for his southern supporters to understand, he spoke in Taiwanese. Conversely, when he wanted to make a point aimed at his opponents, he switched to Mandarin Chinese. When he wanted to make a point that both his supporters and opponents should understand, he would say it in Taiwanese first and then translate it into Mandarin Chinese himself. So the real-time subtitler, too, had to switch between interlingual (from Taiwanese into Mandarin Chinese) and intralingual (from Mandarin Chinese into Mandarin Chinese) subtitling. On this occasion, some other TV stations broadcast President Chen’s address providing written real-time summaries or free commentary.
2.2 Chinese character typing software
To write Chinese on the computer, at least three different types of software may be used: (1) phonetic-based, (2) stroke- or handwriting-based, and (3) voice-based. Traditional QWERTY keyboards are used with both phonetic-based and stroke-based typing software. IBM Via Voice is a voice-based software. Both Pinyin and Zhuyin typing software are phonetic-based: the former is widely used in the People’s Republic of China, while the latter is widely used in Taiwan and has a few varieties. Phonetic-based software requires the user to type the keys that represent each sound and tone. Most Chinese characters are made up of one to three sounds and one of five tones. For instance, in Ni3 Hao3 Ma0 (the three Chinese characters for How are you?) each character is made up of two sounds represented by letters and one tone represented by a number. The advantage of phonetic-based software, especially Pinyin, is that it is quite easy to learn (indeed, most people in Taiwan have learned a phonetic system in elementary school). The problem is that, when a word is typed out, several homophones appear for the user to choose from, which makes typing quite time-consuming. Although more advanced phonetic-based software relies not only on sound but also on meaning or context, the homophones still make typing slower than with other typing software. According to a study by Meng (2002), the rate of typing using phonetic-based software is 70-100 Chinese characters per minute, which is very slow for real-time subtitling.
Stroke-based software includes typing and handwriting software. In the first case, the user types the keys that represent the strokes that make up a character. For instance, Ni3 Hao3 Ma0 (How are you?), typed in Da Yi, one of the stroke-based typing programs, would be ant (the three keys typed in succession to produce a Chinese character) for Ni3, lbg for Hao3, and ob for Ma0. With stroke-based handwriting software, the user writes the strokes on a stroke recognition board. The drawbacks of stroke-based typing software are mainly two. First, it takes a long time to learn, because the learner needs to memorize each stroke of the Chinese character and its corresponding key on the computer keyboard. Second, when a word is typed out, other words bearing the same stroke characteristics sometimes appear as options for the operator to choose from; consequently, software that automatically excludes less probable options is preferred by subtitlers. In this respect, Chang Jie, another typing program, is considered more effective because, most of the time, only one character appears when a code is typed out; that is the reason why Chang Jie was favored by the real-time subtitler of TVBS. Some news translators/narrators prefer the Wuhsiami typing software instead. The stroke-based handwriting software is too slow for real-time subtitling; the rate of typing using stroke-based typing software is between 170 and 220 Chinese characters per minute (Meng 2002).
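To make the difference between the two look-up mechanisms concrete, the following minimal Python sketch contrasts them. The toy dictionaries are invented for illustration (only the three Da Yi codes quoted above come from the text) and are not the real Pinyin, Zhuyin, Da Yi or Chang Jie tables; the point is simply that a phonetic code usually returns several homophone candidates the typist must choose from, while a well-designed stroke code maps almost one-to-one to a character.

```python
# Toy illustration only: the tables below are invented examples,
# not the actual Pinyin/Zhuyin or Da Yi/Chang Jie code tables.

# A phonetic code (sound + tone) typically maps to several homophones,
# so the typist must pick the intended character from a candidate list.
PHONETIC_TABLE = {
    "ni3":  ["你", "妳", "擬"],
    "hao3": ["好", "郝"],
    "ma0":  ["嗎", "嘛"],
}

# A stroke-based code is designed so that most codes identify a single
# character (the three Da Yi codes are those quoted in the text above).
STROKE_TABLE = {
    "ant": "你",
    "lbg": "好",
    "ob":  "嗎",
}

def type_phonetic(codes):
    """Return, for each phonetic code, the candidate list the typist must scan."""
    return [PHONETIC_TABLE.get(code, []) for code in codes]

def type_stroke(codes):
    """Return one character per stroke code; no candidate selection is needed."""
    return [STROKE_TABLE.get(code, "?") for code in codes]

if __name__ == "__main__":
    print(type_phonetic(["ni3", "hao3", "ma0"]))   # extra keystrokes to pick homophones
    print(type_stroke(["ant", "lbg", "ob"]))       # one code, one character
```

The extra candidate-selection step is what makes phonetic input slower in practice, even though its codes are easier to learn.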
Voice-based software requires the user to speak into a microphone so that the voice recognizer in the computer can display the words on the monitor screen[5]. IBM Via Voice is rather difficult to use, partly because it is not part of the typing software bundled with Microsoft Office and is therefore not accessible on every computer. For any novice typist, it takes a long time to get used to any Chinese typing software, so voice-based software is not likely to become popular. In addition, owing to the many dialects in Taiwan and China, people speak with a wide variety of accents, and even among people who speak the same dialect, speakers have their own speaking and pronunciation traits. Respeaking has never been widely used in Taiwan because so far it is less accurate and slower than stroke-based or phonetic-based software. As a result, no research has been carried out on the rate of typing using voice recognition software.
3. Literature review
3.1 Real-time subtitling in Europe
Four different methods have been used in Europe for producing real-time subtitling: phonetic keyboards, Velotype keyboards, QWERTY keyboards, and respeaking.
The first method involves a special phonetic keyboard designed for verbatim transcription… an average output accuracy of between 75 per cent and 95 per cent is generally achieved, at speeds of up to about 200 words per minute, while Velotype subtitling uses the Velotype syllabic chord keyboard, which can attain a speed of around 100-140 wpm with a trained operator. The third method uses an ordinary Qwerty keyboard (...). A maximum subtitling rate of about 80 wpm is typical (The United Kingdom Office of Communication 2006). Real-time subtitling of TV programs such as news programs, live shows, and weather reports is instead done by respeaking, using voice recognition technology (The Voice Project 2006).
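These output rates become more meaningful when set against how fast people actually speak. The short sketch below assumes an average broadcast speaking rate of about 170 words per minute (an assumption made for the sake of illustration, not a figure from the sources quoted above) and estimates how much of the speech each input method could keep verbatim; whatever exceeds that share has to be condensed or dropped.

```python
# Back-of-the-envelope sketch: how much of the spoken text each input method
# can reproduce verbatim. The 170 wpm speech rate is an assumed typical
# broadcast figure, not a value taken from the sources cited above.

SPEECH_RATE_WPM = 170  # assumed average speaking rate

INPUT_METHODS_WPM = {
    "phonetic (steno) keyboard": 200,   # up to ~200 wpm, as quoted above
    "Velotype syllabic keyboard": 120,  # midpoint of the 100-140 wpm range
    "QWERTY keyboard": 80,              # typical maximum quoted above
}

for method, rate in INPUT_METHODS_WPM.items():
    coverage = min(rate / SPEECH_RATE_WPM, 1.0)
    print(f"{method}: about {coverage:.0%} of the speech can be kept verbatim; "
          f"the rest must be condensed or dropped")
```

On these assumed numbers, only the stenographic phonetic keyboard keeps pace with natural speech, which is consistent with the editing and summarizing mentioned below for the slower methods.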
Some European countries (Austria, England, the Netherlands, and Switzerland) use a certain amount of real-time subtitling; nevertheless, generally speaking, they use far less real-time subtitling than the United States. Some programming is produced by using Velotype stenography, and the text for subtitles is edited or summarized. Countries using Velotype for real-time subtitling include France, Sweden, Norway and Germany; countries now beginning to use stenographic live subtitling include the UK, the Netherlands, and Norway (National Center for Accessible Media).
3.2 Live captioning in North America
Real-time or live captions are used extensively in the U.S. and Canada for news and sports programs. True real-time captioning draws on a technology developed for court reporting, computer-aided real-time translation (Media Access Group at WGBH). The National Captioning Institute (NCI), a non-profit organisation with the mission of ensuring access to television programs through closed captioning, provides three kinds of live captioning services: real-time captioning (since 1982), live-display captioning, and live encoding. NCI recruits court reporters and retrains them to become real-time captioners who create captions from the spoken word at over 225 words per minute using a computerized system based on the stenographic shorthand used by court reporters. However, real-time captions always lag slightly behind the audio, generally by about two to three seconds. In addition, as there can be no proofreading, errors may occur, usually in the form of incorrect though phonetically similar words. NCI continuously assesses real-time captioning, aiming at an accuracy rate of 98% or better, and provides over 70,000 hours of live captioning annually for its clients. In 2001 it launched a Spanish real-time captioning system based, like the English system, on the principles of court reporting, using a stenographic keyboard to transcribe words (NCI’s Live Captioning).
3.3 Real-time subtitling in Japan
In Japan the demand for subtitled TV news programs from the deaf population (350,000 people) is very high. Keyboard entry of subtitles for news programs in this language cannot catch up with the delivery speed of speech because Japanese uses ideographic characters, which require a certain amount of time to select the right words among homonyms (Ando et al. 2000). NHK began subtitling news programs using a real-time speech recognition system in March 2000. For an anchor announcer’s read speech, NHK’s speech recognition system can achieve sufficient performance to put subtitles on the closed captioning data channel (over 95% word accuracy in real-time processing). However, word accuracy for other speech, such as reporters’ comments over a noisy background, is so degraded that the broadcast news subtitling service is limited to particular parts of news programs (Matsui et al. 2001). A study was conducted on the real-time subtitling of news broadcasts in Japanese using a speech recognition system consisting of a transcription system and an error recognition and correction system. The results showed that the word recognition error rate of the system for studio announcer speech is 2.8% in real-time recognition and that the error rate can be reduced to 0.8% by manual correction, with an average total delay of 10.4 seconds (Ando et al. 2000: 189).
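The figures reported by Ando et al. (2000) can be read as a trade-off between accuracy and latency. The toy Python calculation below is only a schematic illustration of that trade-off, not a description of NHK’s actual system: it applies the quoted error rates and delay to a hypothetical block of 1,000 recognized words.

```python
# Schematic illustration of the accuracy/latency trade-off reported above
# (Ando et al. 2000). This is not NHK's system: it simply applies the quoted
# error rates and delay to a hypothetical block of 1,000 recognized words.

N_WORDS = 1_000
RAW_ERROR_RATE = 0.028        # 2.8% word errors from real-time recognition alone
CORRECTED_ERROR_RATE = 0.008  # 0.8% residual word errors after manual correction
TOTAL_DELAY_S = 10.4          # average total delay when the correction stage is used

raw_errors = RAW_ERROR_RATE * N_WORDS
corrected_errors = CORRECTED_ERROR_RATE * N_WORDS

print(f"raw ASR output:   ~{raw_errors:.0f} wrong words per {N_WORDS}, shown almost at once")
print(f"after correction: ~{corrected_errors:.0f} wrong words per {N_WORDS}, "
      f"with subtitles lagging by about {TOTAL_DELAY_S} s on average")
```

In other words, manual correction removes roughly seven out of every ten recognition errors, at the cost of a noticeably longer delay between speech and subtitle.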
4. A study on real-time subtitling in Taiwan
4.1 Research questions
This case study attempted to answer the following questions:
1. How are real-time subtitles produced in Taiwan?
2. What different types of real-time subtitling are there in Taiwan?
3. What is the quality of real-time subtitling?
4. Why is real-time subtitling not regularly used in Taiwan?
4.2 Participants
There were six participants in the study: three experienced translators/narrators of English international news and English news programs working for leading television companies in Taiwan; a section chief of the foreign news department of another leading television company; a marketing manager of a company that supplies subtitling equipment to local TV companies; and the author himself, who served as action researcher. In particular:
Participant 1 is a news translator/narrator who had worked in the news department of a leading television company in Taipei for over 8 years before joining an international company.
Participant 2 is a section chief of the foreign news department of a leading TV company in Taiwan, which provides real-time subtitling for very important news programs.
Participant 3 is a news translator/narrator of a TV company.
Participant 4 is the business manager of a company selling subtitling equipment to local TV companies.
Participant 5 is the author of the study himself, who had subtitled over 300 English movies in Chinese and translated and dubbed some 30 Chinese martial arts programs in English.
Participant 6 is a former student of the author in an interpretation course in the MA extension program, who had been working at a television company for over four years, summarizing foreign news programs and narrating them during post-production. The six participants, belonging to different companies or institutions, helped to triangulate or countercheck the results of the study.
4.3 Data
The data for the study were collected by e-mail correspondence, telephone interviews, and personal interviews with the participants, as well as the author’s reflection notes. The data were triangulated by verifying with each participant the accuracy of the findings and analyzed according to the principles of grounded theory (cf. Strauss & Corbin 1990).
The author set out to investigate the nature of real-time subtitling by first calling the six participants and then confirming the results of the telephone conversations with a follow-up e-mail message. Participant 1, who had worked as a news translator/narrator of English news programs and English education programs in a television company in Taipei for over eight years, said that during her stay in this TV company she had never done any real-time subtitling, nor had she seen anyone else doing it, but she had seen the real-time subtitles of Taiwan President Chen Sui-bian’s first Report to the People on June 20, 2006 produced by TVBS. The author then called the manager of the news department of the TV company that produced real-time subtitling for more information. He said that his company considered real-time subtitling an extra service, as recently provided for President Chen’s Report to the People and the speech of Taipei City Mayor Ma Ying-Jeou. The business manager of a company that supplies subtitling equipment to local television companies confirmed these findings and provided more information. The former student of the author had never done real-time subtitling, nor heard of or seen anyone doing it; he only became aware of the use of real-time subtitling during President Chen’s speech on November 5, 2006. Finally, a third news translator/narrator of another TV company was interviewed. She had never done real-time subtitling nor seen anyone doing it, but she knew a news translator/narrator who used IBM Via Voice in translating news programs.
5. Results and discussion
5.1. How are real-time subtitles produced?
President Chen’s first speech was subtitled in real time by TVBS, a leading TV company in Taiwan, and his second speech, delivered on November 5, 2006, was subtitled in real time by both TVBS and CTITV. Both TV companies hired a typist to provide real-time subtitles[6].
5.2 Different types of real-time subtitling
Real-time subtitling in Taiwan can be divided into three kinds: (1) intralingual, from Mandarin Chinese into Mandarin Chinese; (2) interlingual, from a foreign language into Mandarin Chinese; and (3) mixed, alternating between intralingual (Mandarin Chinese into Mandarin Chinese) and interlingual (Taiwanese into Mandarin Chinese) subtitling.
5.2.1 Real-time subtitling from Mandarin Chinese into Mandarin Chinese
To rebut President Chen’s address to the nation, Taipei Mayor Ma Ying-Jeou spoke in Mandarin Chinese on TV on June 21, 2006. His speech was subtitled in real-time by TVBS with the same methods used for subtitling President Chen’s speech.
5.2.2 Real-time subtitling from a foreign language into Chinese
Real-time subtitling of important foreign news programs is sometimes provided by TVBS. During the real-time subtitling of English programs, two operators are required: a simultaneous interpreter and a typist. The former wears headphones and whispers his translation into Chinese to the typist, who listens to the interpretation and types it out, proofreads, and edits the subtitles. However, when the voice of the simultaneous interpreter is broadcast, no real-time subtitling is provided.
5.2.3 Real-time subtitling into Mandarin Chinese alternatively from Mandarin Chinese and from Taiwanese
When President Chen delivered his speeches on June 20 and November 5, 2006, he switched between Taiwanese and Mandarin Chinese and even used both, repeating the most salient parts of his speech twice. In this case real-time subtitling was provided as an extra service (according to the section chief of the TV company involved) to allow literate speakers of both Mandarin Chinese and Taiwanese to understand the President at any time during his speech.
5.3 Quality of real-time subtitling
It is generally agreed in the literature that, for the time being, real-time subtitling cannot provide the same high-quality product as post-production and can present several problems not only for the subtitler but also for the audience. Mainly for this reason, real-time subtitling is used only when there is not enough time to produce subtitles by other methods (United Kingdom Office of Communication 2006). A study on the use of Automatic Speech Recognition (ASR) to produce real-time subtitling concluded that ASR can provide real-time captioning directly from lecturers’ speech in classrooms, but it is difficult to obtain accuracy comparable to stenography (Wald 2006: 1). On the other hand, Lambourne et al. (2004) describe an Automatic Speech Recognition television subtitling system relying on two operators, one concentrating on respeaking and the other on editing, and conclude that an experienced respeaker could, without editing, reach recognition rates quite acceptable for live broadcasts of sports such as golf.
Generally speaking, even though the real-time subtitles produced by the two Taiwanese TV companies on the occasions mentioned above showed similar errors (including omissions, delays, and non-words), they were understandable to cooperative viewers. However, if evaluated against the criteria used for post-produced subtitles, their quality was definitely disappointing, according to the author’s observations while watching the real-time subtitling of President Chen’s speech by TVBS and CTITV on November 5, 2006. Probably for that reason, when some segments of President Chen’s second speech were broadcast again the following day, TVBS revised the subtitles and used the revised and edited version.
5.4 Why is real-time subtitling not regularly used in Taiwan?
a) Subtitles are not required in TV news broadcasts. Participant 6, a translator/narrator at a TV company, made the following remarks in an e-mail message to the author about why real-time subtitling is not used regularly in Taiwan:
As far as I know, subtitling in breaking news, whether it is foreign news or domestic news, is rarely used in Taiwan, but subtitling in featured news programs is a common practice.
The only broadcast news program subtitled in Chinese is Sisy’s World News on CTITV, but basically the program is regarded as a featured news program. TV companies are not required by law to provide subtitles for any program, let alone to dub news. News writers such as reporters or translators are required to narrate the news stories they have written. And basically there are no subtitles in the news story.
b) The quality of real-time subtitles is not as good as that of post-produced subtitles. Current real-time subtitling for live broadcasts cannot create subtitles of the same high quality as is expected of pre-produced subtitles (BCI Guidelines on Subtitling). Participant 1 commented that real-time subtitling in Taiwan tends to result in numerous errors, omissions, and non-words. In addition, there tend to be delays between the beginning of an utterance and the appearance of the corresponding subtitle.
c) TV audiences in Taiwan expect high-quality subtitling, which current real-time technology cannot provide. Participant 4 posited that TV audiences in Taiwan have high expectations of the subtitles on TV: when they see many errors in the subtitles, they call or e-mail the television company or newspaper offices to complain.
d) There is no law in Taiwan that regulates or mandates the use of real-time subtitling. The only law that regulates subtitling is Article 19 of the Taiwan Broadcast and Television Law (Quan Guo Fa Gui Zi Liao Ku 2006), which stipulates that programs in foreign languages must carry subtitles or narrations in Chinese and that, if necessary, the authorities concerned may order films to be dubbed in Chinese.
e) There is no position for a Chinese typist in the TV company. For real-time subtitling of Taiwanese or Chinese programs, a professional typist was hired by the TV company involved to provide subtitles, but the company does not have a full-time position for her; she works on jobs where a large volume of typing is required. In the news department, the people producing real-time subtitles are mostly translators and narrators of foreign news or staff working on local news.
6. Implications for professional practice
The results of the study reveal many interesting facts and allow some hypotheses and reflections/recommendations to be formulated.
1. Real-time subtitling in Taiwan mostly involves some sort of translation, as political leaders may switch codes between Taiwanese, Mandarin Chinese, Hakka, and even aboriginal languages.
2. Although up to now only stroke-based typing software has been used, different typing software may be used for producing real-time subtitling in the future. This has some implications:
a) Interpreters who are also good typists may do real-time subtitling by typing by themselves, just as the typist in TVBS did when she subtitled both President Chen’s and Mayor Ma’s speeches into Mandarin Chinese.
b) The operators, be they interpreters/respeakers or interpreters/typists, must be rigorously trained.
c) In Taiwan, a real-time subtitler who does not know how to translate and narrate news would be unlikely to find a job in a TV company. The subtitler mentioned in this study, for example, could not find a job at TVBS because the company does not have a position for a full-time typist. All those who work in the translation department are required to translate foreign news on the computer, and some TV companies in Taiwan also require news translators to narrate their own translations. Currently there is no respeaking of any kind in Taiwan. An operator who can only do intralingual respeaking would be like a typist who can only type but cannot translate.
3. Subtitling by respeaking is a tiring job, so at least two subtitlers are required for long assignments in order to take turns of no more than 20 minutes at a time.
4. Mental pre-editing may reduce errors. It is probably possible to reduce errors if the respeaker can mentally pre-edit by anticipating errors and avoiding expressions that may confuse the computer. For instance, the respeaker may avoid using homophones. In that case, he may have to study common error types drawn from an extensive study of respeaking error analysis in order not to repeat them. From his own experience of sight translating with IBM Via Voice 1.0, the author observed that one way to reduce errors was to have the respeaker trained rigorously (see also section 7).
5. In some cases, respeaking involving compression and rewording may be better than verbatim respeaking, because in the first case the delivery can be logically reorganized for the viewers, whereas a respeaker who simply repeats what is heard cannot do so. As some speakers are not well educated or not trained in speaking, verbatim subtitling that repeats every single word of their speech would be difficult for TV viewers to follow. In addition, when a speaker speaks too fast, subtitles reproducing every single word would be very fast and thus difficult for average viewers to read. If the target audience is deaf people, there may also be the need to consider para- and extra-linguistic aspects that may affect the transmission of the information.
6. Research should focus on automatic subtitling. Though for the time being it may sound like science fiction, voice recognition technology could eventually be developed to perform subtitling without human assistance.
7. Implications for training
Real-time subtitling of TV programs in a foreign language can be produced by simultaneous interpreters who do their own typing or respeaking, provided they are trained accordingly. In the training of typists for subtitling, speed and accuracy training are a must, in addition to practice. For simultaneous interpreters, the following tasks may be used:
Shadowing: Simultaneous interpreters need to listen and speak at the same time. To do shadowing, the learner is required to listen to a spoken message in the source language and repeat it at the same time.
Lagging: Simultaneous interpreters often find themselves lagging behind the speaker and trying to catch up. To learn lagging, the learner is required to listen to a spoken message in the source language and repeat it at the same time by lagging behind the input by no more than one sentence. If the interpreters lag by over one sentence, the sentences that follow will mix together and become indistinguishable.
Clozing: Interpreters are often required to interpret segments of a message containing words that they cannot understand or cannot hear clearly. In that case, the interpreter has to fill in the gaps with words that make sense in the context. To learn clozing, the learner is required to listen to a message that is above his or her language level and shadow it. The words that the learner cannot understand serve as gaps to fill in during shadowing or interpreting.
Synonym search: Interpreters sometimes can understand what they hear, but they cannot come up fast enough with the right words in the target language. For this task, the learner may be shown or given some key words and asked to find as many synonyms or related words as possible.
Abstracting: When the speaker speaks very fast, the interpreter sometimes has to summarize or abstract. To learn abstracting, the learner listens to a message delivered very fast and condenses it into a few phrases or sentences.
Paraphrasing or respeaking: The interpreter should learn to paraphrase or respeak by reorganizing the source language input and rendering it logically in the following situations: (1) the source language is not entirely comprehensible, (2) the speaker speaks poorly, (3) the speaker speaks illogically, (4) the speech would be incomprehensible if translated word by word. For this task, the learner listens to a message and repeats it by using his own words and by speaking in a logical and professional manner.
These tasks are regularly used in interpreter training classes. Based on the students’ language and interpretation abilities, they may be practiced in the mother tongue first, before being practiced in the foreign language. As the learners become more competent, they may be asked to listen to the foreign language and practice the tasks in their mother tongue. Once they can do that, they may listen to their mother tongue and practice the tasks in the foreign language.
Other training components designed to prevent errors when using voice recognition software may be (1) active listening: listening between the lines; (2) clear delivery: learning to speak in a way that does not confuse the computer; (3) paraphrasing and abstracting when the speaker speaks too fast or illogically. The author is convinced that training can reduce errors in real-time subtitling by respeaking to an absolute minimum. It may be possible to design an accreditation test to allow only the best respeakers to do the job.
8. Conclusions
The results of this study suggest that real-time subtitling in Taiwan can be divided into three types: from Mandarin Chinese to Mandarin Chinese, to Mandarin Chinese from code-switching between Taiwanese and Mandarin Chinese, and from a foreign language to Mandarin Chinese. A typist, using Chang Jie Chinese typing software and Info PC subtitling software on a computer, provides real-time subtitles for live speeches on TV. However, if a part of the same program is broadcast a second time later on, the subtitles are revised. Due to time constraints, real-time subtitling frequently results in wrong (and sometimes comical) words, non-words, incomplete subtitles, errors, overtranslations, undertranslations, and unpleasantly long delays. Real-time subtitling by respeaking has never been used in Taiwan, but some news translators use voice recognition software in their work. Finally, the results of this study suggest that almost all real-time subtitling in Taiwan involves some kind of translation. Further research should focus on the processes and products of real-time subtitling and on the possible use of respeaking, a world trend, in real-time subtitling into Chinese.
Bibliographical references
Ando, A., Imai, T., Kobayashi, A., Isono, H., Nakabayashi, K. (2000). “Real-Time Transcription System for Simultaneous Subtitling of Japanese Broadcast News Programs”. IEEE Transactions on Broadcasting, 46:3, 189-196.
CIA (2006). The World Factbook: Taiwan. [url=https://www.cia.gov/cia/publications/factbook/geos/tw.html]https://www.cia.gov/cia/publications/factbook/geos/tw.html[/url] (last access on September 18, 2006).
Lambourne, A., Hewitt, J., Lyon, C., Warren, S. (2004). “Speech-Based Real-Time Subtitling Service”. International Journal of Speech Technology, 7, 269-279.
Lin, Su-Ling. (2006). Guo yu tong bu kou yi ji zhe man di zhao yan jing [Simultaneous interpretation into Mandarin Chinese surprised the reporters].
[url=http://times.hinet.net/SpecialTopic/950621-speech/a8d4480aaebc.htm]http://times.hinet.net/SpecialTopic/950621-speech/a8d4480aaebc.htm[/url] (last access on January 10, 2007).
Matsui, A., Segi, H., Kobayashi, A., Imai, T., Ando, A. (2001). “Speech recognition of broadcast sports news”. NHK Laboratories Note No. 472.
[url=http://www.nhk.or.jp/strl/publica/labnote/lab472.html]http://www.nhk.or.jp/strl/publica/labnote/lab472.html[/url] (last access on January, 18, 2007).
Media Access Group at WGBH. TechFacts: Information about captioning for video professionals Volume 4 - Translating the Facts: Captioning Around the World.
[url=http://main.wgbh.org/wgbh/pages/mag/resources/archive/techfacts/cctechfacts4.html]http://main.wgbh.org/wgbh/pages/mag/resources/archive/techfacts/cctechfacts4.html[/url]
(last access on January 15, 2007).
Meng, L. (2002). Dian Nao Hua Jiao Xue Ce Lue Dui Zhong Wen Shu Ru Xue Ce Xi Cheng Xiao Zhi Ying Xiang Tan Tao [A Study on the Effects of Computerization of Teaching on the Effectiveness of Learning Chinese Computer Typing Methods]. Unpublished M.A. Thesis. The National Taiwan Normal University.
National Captioning Institute. NCI’s Live Captioning. [url=http://www.ncicap.org/livecapproc.asp]http://www.ncicap.org/livecapproc.asp[/url] (last access on January 14, 2007).
National Center for Accessible Media. International Captioning Project. Europe.
[url=http://ncam.wgbh.org/resources/icr/europe.html]http://ncam.wgbh.org/resources/icr/europe.html[/url] (last access on September 26, 2006).
Quan Guo Fa Gui Zi Liao Ku [National Regulation Data Bank]. (2006). Guang Bo Dian Hsi Fa [Broadcast and Television Law].
[url=http://law.moj.gov.tw/Scripts/Query4A.asp?FullDoc=all&Fcode=P0050001]http://law.moj.gov.tw/Scripts/Query4A.asp?FullDoc=all&Fcode=P0050001[/url] (last access on December 26, 2006).
Strauss, A., Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage Publications, Inc.
The Voice Project. (2006). Voice to Texts with Subtitles. [url=http://www.respeaking.net/]http://www.respeaking.net/[/url] (last access on December 16, 2006).
United Kingdom Office of Communication. (2006). “4 - Real-Time Subtitling”.
[url=http://www.ofcom.org.uk/tv/ifi/guidance/tv_access_serv/subtitling_stnds/subtitling_4]http://www.ofcom.org.uk/tv/ifi/guidance/tv_access_serv/subtitling_stnds/subtitling_4[/url]
(last access on September 26, 2006).
Wald, M. (2006). Captioning for Deaf and Hard of Hearing People by Editing Automatic Speech Recognition in Real Time. [url=http://eprints.ecs.soton.ac.uk/12138/01/icchpmwsubmit.doc]http://eprints.ecs.soton.ac.uk/12138/01/icchpmwsubmit.doc[/url]
(last access on December 15, 2006).
Wang, Y. (1960). “Zhong guo wu da fang yan de fen lie nian dai de yan yu nian dai xue de shi tan [An investigation in the chronological study of the five major dialects during the disintegrating era]”. Yan yu yan jiu 38: 33-105.
Notes
[1] The text of President Chen Sui-bian’s televised report to the people of Taiwan was published by the Office of the President, Republic of China.
[url=http://www.gio.gov.tw/taiwan-website/4-oa/20060620/2006062001.html]http://www.gio.gov.tw/taiwan-website/4-oa/20060620/2006062001.html[/url] (last access on December 12, 2006)
[2] Most foreign journalists in Taiwan can understand Mandarin Chinese, but hardly any of them understands Taiwanese. This was the first time that a Taiwanese President’s speech was interpreted simultaneously into Mandarin Chinese for foreign journalists; usually interpreting for foreign news correspondents is into English.
[3] Taiwan ruling party lawmakers urged to join debate on motion to unseat Chen.
[url=http://www.rthk.org.hk/rthk/news/englishnews/20060622/20060622_56_319737.html]http://www.rthk.org.hk/rthk/news/englishnews/20060622/20060622_56_319737.html[/url] (last access on December 15, 2006)
[4] Taiwan’s Chen denies graft charge Taiwan’s President Chen Shui-bian has rejected corruption allegations against him and refused to step down despite mounting pressure. BBC.CO.UK. [url=http://news.bbc.co.uk/2/hi/asia-pacific/6118746.stm]http://news.bbc.co.uk/2/hi/asia-pacific/6118746.stm[/url] (last access on December 15, 2006)
[5] Another popular real-time subtitling product is called Take Time Code Subtitling Software.
[6] In the TV company involved in this study, the typist watched the program live and typed out the subtitles on a regular QWERTY keyboard, using Chang Jie typing software and a Taiwan-made real-time subtitling program called Info PC.