Transcribing documentaries? Can respeaking be used efficiently?
Respeaking is increasingly used to provide live intralingual subtitles for the deaf and hard-of-hearing (Romero-Fresco 2011), thereby guaranteeing media accessibility to a wider section of the population. However, our hypothesis is that respeaking could also be used efficiently for other tasks, such as transcribing non-fictional products for which no script exists. This could benefit transcribers in their work and would also make it possible to (semi-)automatically generate input for machine translation processes, to name just one example. This paper presents the results of an experiment comparing three scenarios: a) manual transcription; b) revision of an automatically generated transcription; c) respeaking. Three comparable 4-minute clips were selected, and 10 professional English transcribers were asked to perform three different tasks: manually transcribe an excerpt, revise an automatically generated (ASR) transcription, and respeak another excerpt. The order of the tasks and excerpts was randomized. The time spent on each task was monitored, and data on perceived effort and the transcribers' experience were collected. This paper describes the experimental set-up as well as the results of this test.
Submitted by Valeria Cervetti on 17/11/2016 in the project Audiovisual Translation for the Web; last updated 17/11/2016.