3,679 results for "Closed captioning"
Football for all: the quality of the live closed captioning in the Super Bowl LII
In the U.S., new programming broadcast on television must be accessible to hearing-impaired viewers, and closed captioning is the tool most frequently used to provide such access. Of all the audiovisual content available to viewers, live programs pose the greatest challenge to captioners, who in this scenario are required to provide accurate subtitles that reach viewers with as little delay as possible. Because of the difficulties involved in producing real-time subtitles, research exploring the quality of live captioning is not yet abundant, yet precise insights into the matter could help improve current practices and better accommodate users' needs. Drawing on existing literature from the field of Media Accessibility, this article presents the main findings of a study exploring the quality of the live closed captioning delivered during one of the most widely followed televised events of 2018 in the U.S.: Super Bowl LII. The four parameters that the Federal Communications Commission identifies with quality (completeness, placement, synchronicity and accuracy) were analyzed. The results point to completeness as the parameter with the most room for improvement and show that the captions achieved impeccable placement, a commendable average latency and very high accuracy rates.
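As a rough illustration of how two of those FCC parameters might be quantified, here is a minimal Python sketch computing completeness (the share of the spoken transcript that makes it into the captions) and average latency. The bag-of-words matching and the (speech time, caption time) data layout are illustrative assumptions, not the study's actual methodology.

```python
# A minimal sketch, not the study's methodology: it quantifies two of the
# FCC parameters named above, completeness (share of the spoken transcript
# that makes it into the captions, matched bag-of-words) and synchronicity
# (mean caption delay). The data layouts are illustrative assumptions.
from collections import Counter

def completeness(transcript_words: list[str], caption_words: list[str]) -> float:
    """Fraction of spoken words that also appear in the captions."""
    spoken, captioned = Counter(transcript_words), Counter(caption_words)
    matched = sum(min(count, captioned[word]) for word, count in spoken.items())
    return matched / max(len(transcript_words), 1)

def average_latency(aligned: list[tuple[float, float]]) -> float:
    """Mean delay in seconds over manually aligned (speech_time, caption_time) pairs."""
    return sum(caption - speech for speech, caption in aligned) / max(len(aligned), 1)

print(completeness("touchdown for the patriots".split(), "touchdown patriots".split()))  # 0.5
print(average_latency([(0.0, 4.1), (2.5, 7.0)]))  # 4.3 seconds
```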
Subtitling Today
Nowadays subtitling accomplishes several purposes; it is meant for diverse audiences and comes in many forms. This collection of innovative contributions explores these different manifestations, and offers a snapshot of the state of the art of a dynamic and ever-evolving field of study. This volume intentionally assembles essays that analyse subtitling in various audiovisual genres, including television series, variety programmes, operas, operettas, feature films and live conferences, and that consider various languages, such as Chinese, English, Finnish, French, Italian, Japanese and Polish. It underscores both traditional and novel viewpoints and approaches to the subject, thus broadening the horizons of such a fascinating field. The diversity of topics tackled will encourage further reflection on a well-established research area, and, as such, the volume will appeal to both novice and expert researchers and professionals.
New insights into audiovisual translation and media accessibility: Media for All 2
This volume aims to take the pulse of the changes taking place in the thriving field of Audiovisual Translation and to offer new insights into both theoretical and practical issues. Academics and practitioners of proven international reputation are given voice in three distinctive sections pivoting around the main areas of subtitling and dubbing, media accessibility (subtitling for the deaf and hard-of-hearing and audio description), and didactic applications of AVT. Many countries, languages, transfer modes, audiences and genres are considered in order to provide the reader with a wide overview of the current state of the art in the field. This volume will be of interest not only to researchers, teachers and students in linguistics, translation and film studies, but also to translators and language professionals who want to expand their sphere of activity.
Movie Description
Audio description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset of transcribed ADs temporally aligned to full-length movies. We also collected and aligned the movie scripts used in prior work and compare the two sources of descriptions. We introduce the Large Scale Movie Description Challenge (LSMDC), which contains a parallel corpus of 128,118 sentences aligned to video clips from 200 movies (around 150 hours of video in total). The goal of the challenge is to automatically generate descriptions for the movie clips. First we characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. Furthermore, we present and compare the results of several teams who participated in the challenges organized in the context of two workshops at ICCV 2015 and ECCV 2016.
Reading sounds: closed-captioned media and popular culture
Imagine a common movie scene: a hero confronts a villain. Captioning such a moment would at first glance seem as basic as transcribing the dialogue. But consider the choices involved: How do you convey the sarcasm in a comeback? Do you include a henchman's muttering in the background? Does the villain emit a scream, a grunt, or a howl as he goes down? And how do you note a gunshot without spoiling the scene? These are the choices closed captioners face every day. Captioners must decide whether and how to describe background noises, accents, laughter, musical cues, and even silences. When captioners describe a sound—or choose to ignore it—they are applying their own subjective interpretations to otherwise objective noises, creating meaning that does not necessarily exist in the soundtrack or the script. Reading Sounds looks at closed-captioning as a potent source of meaning in rhetorical analysis. Through nine engrossing chapters, Sean Zdenek demonstrates how the choices captioners make affect the way deaf and hard of hearing viewers experience media. He draws on hundreds of real-life examples, as well as interviews with both professional captioners and regular viewers of closed captioning. Zdenek's analysis is an engrossing look at how we make the audible visible, one that proves that better standards for closed captioning create a better entertainment experience for all viewers.
Examining the Educational Benefits of and Attitudes Toward Closed-Captioning Among Undergraduate Students
Closed-captioning technology has been available for decades and is often used by individuals with disabilities to access video-based information. Course-related videos are routinely shown in college classrooms throughout the United States; however, it is unknown whether closed captions are educationally beneficial for all students. The purpose of this study was to examine the educational benefits of closed captioning among undergraduate students without disabilities and their associated attitudes toward the technology. The use of closed captions adheres to the principles of Universal Design, which encourage stakeholders to build environments and products that are accessible to all individuals. However, more evidence-based research is needed on the utility of this technology in college classrooms. Two separate video-based studies were conducted at one university, and groups were randomly assigned to “caption” or “no caption” conditions. It was hypothesized that exposure to closed captions would increase students’ recall and understanding of video-based information and improve attitudes toward the technology. Results suggested that participants who were exposed to closed captions scored significantly higher on the subsequent assessment. Participants who already used closed captions in their daily lives had significantly more positive attitudes toward the technology. Recommendations for further study are provided.
Live captioning accuracy in English-language newscasts in the USA
Since the Federal Communications Commission issued its first closed captioning quality regulations in 2014, the captioning industry in the USA has had to monitor the closed captions it produces to ensure they comply with the rules in place. In the case of live captioning, accuracy becomes a key aspect since, together with other parameters such as speed or latency, it shapes quality. Live captioning accuracy is often measured using the word error rate (WER), a metric that counts the words that have been mistakenly inserted, deleted and substituted in a closed caption. While this is convenient because WER is fully automated, it considers neither correct editions nor the fact that some captioning errors hamper viewers’ comprehension more than others. To account for correct editions and error severity, this article reports the main findings of a research project exploring closed captioning accuracy using the NER model. The closed captions accompanying the national newscasts broadcast by four networks in the USA were analyzed for accuracy. The results point to overall good accuracy, with almost two thirds of the errors being minor and slightly over one third being standard or serious.
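For reference, WER is conventionally computed as (substitutions + deletions + insertions) divided by the number of words in the reference transcript, while the NER model scores accuracy as (N − E − R) / N × 100, where N is the number of words and E and R are severity-weighted edition and recognition errors. Below is a minimal Python sketch of both; the example inputs and error weights are invented for illustration, not taken from the study.

```python
# A minimal sketch of the two accuracy measures discussed above.
# wer() uses standard Levenshtein alignment over words; ner_accuracy()
# applies the published NER formula with caller-supplied error totals.
# The example numbers at the bottom are invented.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def ner_accuracy(n_words: int, edition_errors: float, recognition_errors: float) -> float:
    """NER model: accuracy = (N - E - R) / N * 100, errors weighted by severity."""
    return (n_words - edition_errors - recognition_errors) / n_words * 100

print(wer("the quick brown fox", "the quick brown box"))  # 0.25
print(ner_accuracy(1000, 5.0, 7.5))                       # 98.75
```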
Exploring collaborative caption editing to augment video-based learning
Captions play a major role in making educational videos accessible to all and are known to benefit a wide range of learners. However, many educational videos either do not have captions or have inaccurate captions. Prior work has shown the benefits of using crowdsourcing to obtain accurate captions in a cost-efficient way, though little is understood about how learners edit the captions of educational videos, either individually or collaboratively. In this work, we conducted a user study in which 58 learners (in a course of 387) edited the captions of 89 lecture videos; the captions had been generated by Automatic Speech Recognition (ASR) technologies. For each video, different learners conducted two rounds of editing. Based on the editing logs, we created a taxonomy of errors in educational video captions (e.g., Discipline-Specific, General, Equations). From the interviews, we identified individual and collaborative error-editing strategies. We further demonstrated the feasibility of applying machine learning models to assist learners in editing. Our work provides practical implications for advancing video-based learning and for educational video caption editing.
State Regulation of Online Behavior: The Dormant Commerce Clause and Geolocation
When does the Dormant Commerce Clause preclude states from regulating internet activity, whether through state libel law or invasion-of-privacy law; through state laws requiring websites to accommodate disabled users (for instance, by providing closed captioning); through state bans on discrimination based on sexual orientation, religion, or criminal record; or through state laws that ban social media platforms from discriminating based on the viewpoint of users' speech? This Article argues that the constitutionality of such state regulation should generally turn on the feasibility of geolocation: the extent to which websites or other internet services can determine, reliably and inexpensively, which states users are coming from, so that the sites can then apply the proper state law to each user (or, if need be, choose not to allow access to users from certain states). In recent years, geolocation has become feasible and is routinely used by major websites for ordinary business purposes. There is therefore more constitutional room for state regulation of internet services, including social media platforms, than is often believed.
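As a sketch of the geolocation gating the article contemplates, the Python snippet below resolves a visitor's U.S. state from an IP address using the MaxMind GeoIP2 reader and looks up a per-state policy. The database path and the policy table are hypothetical placeholders; this illustrates the mechanism, not anything the article prescribes.

```python
# A minimal sketch of state-aware gating: resolve the visitor's U.S. state
# from their IP address, then apply that state's rule (or deny access).
# The GeoLite2 database path and STATE_POLICIES table are illustrative
# assumptions, not taken from the article.
import geoip2.database

STATE_POLICIES = {
    "CA": {"captions_required": True},   # hypothetical per-state rules
    "TX": {"captions_required": False},
}

def policy_for_ip(ip: str, db_path: str = "GeoLite2-City.mmdb") -> dict | None:
    with geoip2.database.Reader(db_path) as reader:
        record = reader.city(ip)
        if record.country.iso_code != "US":
            return None  # outside the reach of any state's law
        state = record.subdivisions.most_specific.iso_code  # e.g. "CA"
        # Unknown or unmapped state: returning None lets the caller
        # fall back to the most restrictive behavior (e.g. deny access).
        return STATE_POLICIES.get(state)
```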