Asset Details
Training end-to-end speech-to-text models on mobile phones
Paper
by Prabhakar, T V; Zitha, S; Raghavendra Rao Suresh; Rao, Pooja
in Cellular telephones / Electronic devices / Environment models / Smartphones / Speech recognition / Training
2021
Overview
Training state-of-the-art speech-to-text (STT) models on mobile devices is challenging because of their limited resources relative to a server environment. In addition, these models are trained on generic datasets that do not exhaustively capture user-specific characteristics. Recently, on-device personalization techniques have made strides in mitigating this problem. Although many current works have explored the effectiveness of on-device personalization, most of their findings are limited to simulation settings or to a specific smartphone. In this paper, we develop and explain in detail a framework for training end-to-end models on mobile phones. For simplicity, we consider a model based on the connectionist temporal classification (CTC) loss. We evaluate the framework on mobile phones from several brands and report the results. We show that fine-tuning the models and choosing the right hyperparameter values is a trade-off between the lowest achievable word error rate (WER), on-device training time, and memory consumption, which is vital for successfully deploying on-device training in a resource-limited environment such as a mobile phone. Using training sets from speakers with different accents, we record a 7.6% decrease in average WER. We also report the associated computational cost measurements with respect to time, memory usage, and CPU utilization on mobile phones in real time.
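
This record does not include the authors' code. As a rough illustration of the CTC-based fine-tuning the abstract describes, the sketch below runs a single training step with PyTorch's nn.CTCLoss; the model, dimensions, and synthetic batch are hypothetical placeholders, not the framework from the paper.

# Minimal sketch (not the authors' code): one fine-tuning step of a small
# CTC-based speech-to-text model, the kind of update an on-device training
# loop would repeat. Model size, vocabulary, and the batch are placeholders.
import torch
import torch.nn as nn

class TinyCTCModel(nn.Module):
    # Stand-in encoder: log-mel frames -> per-frame label distribution.
    def __init__(self, n_mels=80, hidden=256, vocab_size=32):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)  # last index used as the CTC blank

    def forward(self, feats):                      # feats: (batch, frames, n_mels)
        out, _ = self.rnn(feats)
        return self.proj(out).log_softmax(dim=-1)  # (batch, frames, vocab)

model = TinyCTCModel()
ctc_loss = nn.CTCLoss(blank=31, zero_infinity=True)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # cheap optimizer for constrained devices

# Synthetic batch: 4 utterances, 200 frames each, transcripts of 20 labels.
feats = torch.randn(4, 200, 80)
targets = torch.randint(0, 31, (4, 20))            # labels exclude the blank index
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 20, dtype=torch.long)

optimizer.zero_grad()
log_probs = model(feats).transpose(0, 1)           # CTCLoss expects (frames, batch, vocab)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
optimizer.step()
print(f"CTC loss after one step: {loss.item():.3f}")

On a phone, the same step would be constrained further (smaller batches, fewer updates, a lightweight optimizer) to stay within the training-time and memory budgets that the trade-off described above refers to.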
Publisher
Cornell University Library, arXiv.org
Subject