414 results for "Electronic data processing Dictionaries."
BCS glossary of computing and ICT
The BCS Glossary is the most authoritative and comprehensive glossary of its kind on the market today. This unrivalled study aid and reference tool has newly updated entries and is divided into themed sections, making it more than just a list of definitions. Written in a style that is easily accessible to anybody with an interest in computing, it is specifically designed to support those taking computer courses or courses where computers are used, in schools and Further Education colleges.
A hybrid approach for named entity recognition in Chinese electronic medical record
Background With the rapid spread of electronic medical records and the arrival of the medical big data era, the application of natural language processing technology in biomedicine has become a hot research topic. Methods In this paper, a BiLSTM-CRF model is first applied to medical named entity recognition on Chinese electronic medical records. Reflecting the characteristics of Chinese electronic medical records, a low-dimensional word vector is obtained for each word at the sentence level; the word vectors are fed into the BiLSTM to extract sentence features automatically, and a CRF then performs sentence-level tag decoding. Second, an attention mechanism is added between the BiLSTM and the CRF to construct an Attention-BiLSTM-CRF model, which can leverage document-level information to alleviate tagging inconsistency. In addition, this paper proposes an entity auto-correct algorithm that rectifies entities according to historical entity information. Finally, a drug dictionary and post-processing rules are built to rectify entities and further improve performance. Results The final F1 scores of the BiLSTM-CRF and Attention-BiLSTM-CRF models on the given test dataset are 90.15% and 90.82%, respectively, both higher than 89.26%, the best previously reported F1 score on this test dataset. Conclusion Our approach can be used to recognize medical named entities in Chinese electronic medical records and achieves state-of-the-art performance on the given test dataset.
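The tagging pipeline this abstract describes (word vectors fed to a bidirectional LSTM for feature extraction, then a CRF for sentence-level tag decoding) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the third-party pytorch-crf package for the CRF layer, and the vocabulary size, tag set, and dimensions are placeholders.

```python
# Minimal BiLSTM-CRF sketch (illustrative; not the paper's implementation).
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party: pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim // 2,
                              bidirectional=True, batch_first=True)
        self.hidden2tag = nn.Linear(hidden_dim, num_tags)  # per-token emissions
        self.crf = CRF(num_tags, batch_first=True)         # sentence-level decoding

    def loss(self, tokens, tags, mask):
        emissions = self.hidden2tag(self.bilstm(self.embedding(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)       # negative log-likelihood

    def decode(self, tokens, mask):
        emissions = self.hidden2tag(self.bilstm(self.embedding(tokens))[0])
        return self.crf.decode(emissions, mask=mask)       # best tag sequences
```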
Utility analysis and demonstration of real-world clinical texts: A case study on Japanese cancer-related EHRs
Real-world data (RWD) in the medical field, such as electronic health records (EHRs) and medication orders, are receiving increasing attention from researchers and practitioners. While structured data have played a vital role thus far, unstructured data represented by text (e.g., discharge summaries) are not effectively utilized because of the difficulty in extracting medical information. We evaluated the information gained by supplementing structured data with clinical concepts extracted from unstructured text by leveraging natural language processing techniques. Using a machine learning-based pretrained named entity recognition tool, we extracted disease and medication names from real discharge summaries in a Japanese hospital and linked them to medical concepts using medical term dictionaries. By comparing the diseases and medications mentioned in the text with medical codes in tabular diagnosis records, we found that: (1) the text data contained richer information on patient symptoms than tabular diagnosis records, whereas the medication-order table stored more injection data than text. In addition, (2) extractable information regarding specific diseases showed surprisingly small intersections among text, diagnosis records, and medication orders. Text data can thus be a useful supplement for RWD mining, which is further demonstrated by (3) our practical application system for drug safety evaluation, which exhaustively visualizes suspicious adverse drug effects caused by the simultaneous use of anticancer drug pairs. We conclude that proper use of textual information extraction can lead to better outcomes in medical RWD mining.
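The dictionary-linking step described here, mapping extracted disease and medication mentions to medical concepts via term dictionaries, can be illustrated with a short sketch; the dictionary entries, concept codes, and mentions below are invented placeholders rather than data from the study.

```python
# Hedged sketch of dictionary-based concept linking for NER output.
def normalize(term: str) -> str:
    """Lowercase and collapse whitespace so surface variants match."""
    return " ".join(term.lower().split())

# Toy stand-in for a medical term dictionary (surface form -> concept code).
term_dict = {
    "type 2 diabetes": "C0011860",
    "cisplatin": "C0008838",
}

def link_mentions(mentions, dictionary):
    """Split NER mentions into dictionary-linked and unlinked groups."""
    linked, unlinked = [], []
    for m in mentions:
        code = dictionary.get(normalize(m))
        (linked if code else unlinked).append((m, code))
    return linked, unlinked

linked, unlinked = link_mentions(
    ["Cisplatin", "Type 2  Diabetes", "fatigue"], term_dict)
print(linked)    # [('Cisplatin', 'C0008838'), ('Type 2  Diabetes', 'C0011860')]
print(unlinked)  # [('fatigue', None)]
```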
An Electronic Data Capture Framework (ConnEDCt) for Global and Public Health Research: Design and Implementation
When we were unable to identify an electronic data capture (EDC) package that supported our requirements for clinical research in resource-limited regions, we set out to build our own reusable EDC framework. We needed to capture data while offline, synchronize data on demand, and enforce strict eligibility requirements and complex longitudinal protocols. Based on previous experience, the geographical areas in which we conduct our research often have unreliable, slow internet access that would make web-based EDC platforms impractical. We were unwilling to fall back on paper-based data capture because we wanted the other benefits of EDC. Therefore, we decided to build our own reusable software platform. In this paper, we describe our customizable EDC framework and highlight how we have used it in our ongoing surveillance programs, clinic-based cross-sectional studies, and randomized controlled trials (RCTs) in various settings in India and Ecuador. This paper describes the creation of a mobile framework to support complex clinical research protocols in a variety of settings, including clinical, surveillance, and RCT contexts. We developed ConnEDCt, a mobile EDC framework for iOS devices and personal computers, using Claris FileMaker software for electronic data capture and data storage. ConnEDCt was field-tested in our clinical, surveillance, and clinical trial research contexts in India and Ecuador and continuously refined for ease of use and optimization, including specific user roles; simultaneous synchronization across multiple locations; complex randomization schemes and informed consent processes; and collection of diverse types of data (laboratory, growth measurements, sociodemographic, health history, dietary recall and feeding practices, environmental exposures, and biological specimen collection). ConnEDCt is customizable, with regulatory-compliant security, data synchronization, and other features useful for data collection in a variety of settings and study designs. Furthermore, ConnEDCt is user-friendly and lowers the risk of data entry errors because of real-time error checking and protocol enforcement.
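ConnEDCt itself is built on Claris FileMaker, so the following is only a generic Python illustration of the offline-capture-then-sync-on-demand pattern the abstract describes, not the framework's code; the local table layout and the push callback are assumptions.

```python
# Generic offline-first capture queue: records are stored locally regardless
# of connectivity and pushed on demand when a connection is available.
import json
import sqlite3

db = sqlite3.connect("local_edc.db")
db.execute("CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, payload TEXT)")

def capture(record: dict) -> None:
    """Store a record in the local queue, online or offline."""
    db.execute("INSERT INTO pending (payload) VALUES (?)", (json.dumps(record),))
    db.commit()

def sync(push) -> None:
    """Push queued records on demand; keep any that fail for the next sync.

    `push` is a caller-supplied function that returns True on server ack.
    """
    for row_id, payload in db.execute("SELECT id, payload FROM pending").fetchall():
        if push(json.loads(payload)):
            db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
    db.commit()
```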
Intelligent diagnosis with Chinese electronic medical records based on convolutional neural networks
Background Benefiting from big data, powerful computation and new algorithmic techniques, we have been witnessing the renaissance of deep learning, particularly the combination of natural language processing (NLP) and deep neural networks. The advent of electronic medical records (EMRs) has not only changed the format of medical records but also helped users obtain information faster. However, there are many challenges in working directly with Chinese EMRs, such as low quality, huge volume, class imbalance, and semi-structured or unstructured content, compounded by the high information density of Chinese compared with English. Therefore, effective word segmentation, word representation and model architecture are the core technologies in the literature on Chinese EMRs. Results In this paper, we propose a deep learning framework for intelligent diagnosis using Chinese EMR data, which incorporates a convolutional neural network (CNN) into an EMR classification application. The novelty of this paper is reflected in the following: (1) We construct a pediatric medical dictionary based on Chinese EMRs. (2) Word2vec is adopted for word embedding to achieve a semantic description of the content of Chinese EMRs. (3) A fine-tuned CNN model is constructed to perform pediatric diagnosis from Chinese EMR data. Our results on real-world pediatric Chinese EMRs demonstrate that the average accuracy and F1-score of the CNN models are up to 81%, which indicates the effectiveness of the CNN model for the classification of EMRs. In particular, a fine-tuned one-layer CNN performs best among all CNN, recurrent neural network (RNN) (long short-term memory, gated recurrent unit) and CNN-RNN models, with average accuracy and F1-score both up to 83%. Conclusion The CNN framework that includes word segmentation, word embedding and model training can serve as an intelligent auxiliary diagnosis tool for pediatricians. In particular, a fine-tuned one-layer CNN performs well, which suggests that word order does not have a strong effect on our Chinese EMRs.
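The one-layer text CNN the abstract reports on can be sketched roughly as below. This is an illustration under stated assumptions, not the authors' implementation: in their pipeline the embedding weights would be initialized from word2vec vectors trained on segmented EMR text, and the vocabulary size, filter settings, and class count here are placeholders.

```python
# Minimal text-CNN classifier sketch for EMR classification (illustrative).
import torch
import torch.nn as nn

class EMRTextCNN(nn.Module):
    def __init__(self, vocab_size, num_classes, embed_dim=100,
                 num_filters=128, kernel_sizes=(3, 4, 5)):
        super().__init__()
        # Embedding weights could be initialized from pretrained word2vec vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embedding(tokens).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Convolve, then max-pool over time for each filter size.
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))    # diagnosis class logits
```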
FasTag: Automatic text classification of unstructured medical narratives
Unstructured clinical narratives are continuously being recorded as part of delivery of care in electronic health records, and dedicated tagging staff spend considerable effort manually assigning clinical codes for billing purposes. Despite these efforts, however, label availability and accuracy are both suboptimal. In this retrospective study, we aimed to automate the assignment of top-level International Classification of Diseases version 9 (ICD-9) codes to clinical records from human and veterinary data stores using minimal manual labor and feature curation. Automating top-level annotations could in turn enable rapid cohort identification, especially in a veterinary setting. To this end, we trained long short-term memory (LSTM) recurrent neural networks (RNNs) on 52,722 human and 89,591 veterinary records. We investigated the accuracy of both separate-domain and combined-domain models and probed model portability. We established relevant baseline classification performances by training Decision Trees (DT) and Random Forests (RF). We also investigated whether transforming the data using MetaMap Lite, a clinical natural language processing tool, affected classification performance. We showed that the LSTM-RNNs accurately classify veterinary and human text narratives into top-level categories, with average weighted macro F1 scores of 0.74 and 0.68, respectively. In the "neoplasia" category, the model trained on veterinary data had high validation accuracy on veterinary data and moderate accuracy on human data, with F1 scores of 0.91 and 0.70, respectively. Our LSTM method scored slightly higher than the DT and RF models. LSTM-RNN models represent a scalable architecture that could prove useful in cohort identification for comparative oncology studies. Digitization of human and veterinary health information will continue to be a reality, particularly in the form of unstructured narratives. Our approach is a step forward in helping these two domains learn from and inform one another.
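A hedged sketch of an LSTM narrative classifier of the kind described (not the study's code; the dimensions and category count are placeholders) might look like this: the final hidden state summarizes the record and is mapped to top-level ICD-9 categories.

```python
# Minimal LSTM record classifier sketch (illustrative).
import torch.nn as nn

class NarrativeLSTM(nn.Module):
    def __init__(self, vocab_size, num_icd9_chapters, embed_dim=100,
                 hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_icd9_chapters)

    def forward(self, tokens):                # tokens: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embedding(tokens))
        return self.fc(h_n[-1])               # logits over top-level ICD-9 codes
```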
A wavelet denoising approach based on unsupervised learning model
Image denoising plays an important role in image processing, aiming to separate clean images from noisy ones. A number of methods have been presented to deal with this practical problem over the past several years. The best currently available wavelet-based denoising methods take advantage of the merits of the wavelet transform. Most of these methods, however, still have difficulty in choosing the threshold parameter, which can limit their capability. In this paper, we propose a novel wavelet denoising approach based on an unsupervised learning model. The approach exploits the merits of the wavelet transform (sparsity, multi-resolution structure, and similarity to the human visual system) by adapting an unsupervised dictionary learning algorithm to create a dictionary devoted to noise reduction. Using the K-Singular Value Decomposition (K-SVD) algorithm, we obtain an adaptive dictionary by learning over the wavelet decomposition of the noisy image. Experimental results on benchmark test images show that our proposed method achieves very competitive denoising performance and outperforms state-of-the-art denoising methods in peak signal-to-noise ratio (PSNR), the structural similarity (SSIM) index, and visual quality across different noise levels.
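The wavelet-domain dictionary-denoising pipeline can be sketched roughly as below. This is not the paper's implementation: it substitutes scikit-learn's MiniBatchDictionaryLearning for K-SVD as a named stand-in, the patch size, atom count, and sparsity level are placeholders, and the input image is assumed large enough that every subband exceeds the patch size.

```python
# Rough sketch: learn a dictionary over wavelet detail subbands of a noisy
# image, re-code each subband sparsely, and reconstruct the image.
import numpy as np
import pywt
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def denoise_subband(band, n_atoms=64, patch=(6, 6)):
    """Learn a dictionary on one noisy subband and re-code it sparsely."""
    patches = extract_patches_2d(band, patch)
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=1, keepdims=True)       # remove per-patch DC offset
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=2)
    codes = dico.fit_transform(flat - mean)       # sparse coefficients
    rebuilt = (codes @ dico.components_ + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(rebuilt, band.shape)

def wavelet_dict_denoise(noisy, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Keep the approximation band; sparsely re-code each detail subband.
    denoised = [coeffs[0]] + [tuple(denoise_subband(b) for b in detail)
                              for detail in coeffs[1:]]
    return pywt.waverec2(denoised, wavelet)
```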