Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text
by Pratt, Wanda; McDonald, David W; Hartzler, Andrea L; Huh, Jina; Park, Albert
in Automatic Data Processing / Computational linguistics / Humans / Internet / Language processing / Natural language interfaces / Natural Language Processing / Original Paper / Social networks
2015
Journal Article
Overview
The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. Beyond being built for different types of text, using existing NLP tools is further complicated by constantly changing technologies, source vocabularies, and text characteristics. These continuously evolving challenges warrant low-cost, systematic assessment. However, manual annotation, the primary evaluation method in NLP, requires tremendous time and effort.
The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (e.g., incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using MetaMap, one of the most popular biomedical NLP tools.
Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and identified the causes of the failures through iterative rounds of manual review using open coding; and (2) to automatically detect these failure types, we explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures.
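The abstract does not include the authors' detection rules, but the idea of flagging problematic mappings with dictionary-based matching can be sketched at a very coarse level. Everything below is hypothetical: the word lists, the `flag_mapping` function, and the two rules are illustrative stand-ins, not the paper's actual pipeline.

```python
# Hypothetical sketch of dictionary-based failure flagging for
# MetaMap-style (surface term -> concept) mappings. The word lists and
# rules are illustrative assumptions, not the authors' implementation.

AMBIGUOUS_WORDS = {"cold", "discharge", "ms"}    # words with common non-medical senses
MULTIWORD_TERMS = {"hot flash", "side effect"}   # terms that should map as a whole

def flag_mapping(surface, context):
    """Return a coarse failure label for one mapping, or None if it looks fine."""
    words = surface.lower().split()
    # Word-ambiguity failure: the mapped term has a common everyday sense,
    # so the concept mapping may be wrong in community text.
    if any(w in AMBIGUOUS_WORDS for w in words):
        return "word-ambiguity"
    # Boundary failure: the mapping covers only a fragment of a known
    # multiword term that actually appears in the surrounding context.
    for mw in MULTIWORD_TERMS:
        s = surface.lower()
        if s != mw and s in mw and mw in context.lower():
            return "boundary"
    return None

print(flag_mapping("cold", "I caught a cold last week"))       # -> word-ambiguity
print(flag_mapping("flash", "she had a hot flash yesterday"))  # -> boundary
print(flag_mapping("fatigue", "severe fatigue today"))         # -> None
```

A real system would draw its dictionaries from source vocabularies such as the UMLS rather than hand-picked sets, which is what makes the approach low-cost to re-run as tools and vocabularies evolve.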
From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate concept mappings. Our automated methods flagged almost half of MetaMap's 383,572 mappings as problematic. Word ambiguity failures were the most common, comprising 82.22% of failures; boundary failures were the second most frequent at 15.90%; and missed term failures were the least common at 1.88%. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively.
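As a quick sanity check on the reported metrics, the standard F1 formula (the harmonic mean of precision and recall, not anything specific to this paper) reproduces the reported F1 score from the reported precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (standard F1 definition)."""
    return 2 * precision * recall / (precision + recall)

# Reported precision (83.00%) and recall (92.57%) yield the reported F1 (87.52%).
print(round(100 * f1_score(0.8300, 0.9257), 2))  # -> 87.52
```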
We illustrate the challenges of processing patient-generated online health community text and characterize the failures that NLP tools make on such text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable, effective solutions to automatically assess constantly evolving NLP tools and source vocabularies for processing patient-generated text.
Publisher
Journal of Medical Internet Research, JMIR Publications Inc