26,663 result(s) for "Medical coding"
Integrating Agentic Artificial Intelligence to Automate International Classification of Diseases, Tenth Revision, Medical Coding
Automating ICD-10 coding from discharge summaries remains demanding because coders analyze clinical narratives while justifying decisions. This study compares three automation patterns: PLM-ICD as a standalone deep learning system emitting 15 codes per case, LLM-only generation with full autonomy, and a hybrid approach where PLM-ICD drafts candidates for an agentic LLM audit to accept or reject. All strategies were evaluated on 19,801 MIMIC-IV summaries using four LLMs spanning compact (Qwen2.5-3B-Instruct, Llama-3.2-3B-Instruct, Phi-4-mini-instruct) to large-scale (Sonnet-4.5). Precision guided evaluation because coders still supply any missing diagnoses. PLM-ICD alone reached 55.8% precision while always surfacing 15 suggestions. LLM-only generation lagged severely (1.5–34.6% precision) and produced inconsistent output sizes. The agentic audit delivered the best trade-off: compact LLMs reviewed the 15 candidates, discarded weak evidence, and returned 2–8 high-confidence codes. Llama-3.2-3B-Instruct, for example, improved from 1.5% as a generator to 55.1% as a verifier while trimming false positives by 73%. These results show that positioning LLMs as quality controllers, rather than primary generators, yields reliable support for clinical coding teams, while formal recall/F1 reporting remains future work for fully autonomous implementations.
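The draft-then-audit pattern described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `plm_icd_predict` and `llm_verify` are hypothetical stand-ins for the PLM-ICD model and the auditing LLM, and the tiny evidence table is invented for demonstration.

```python
# Sketch of the hybrid "PLM drafts, LLM audits" pattern. Both model
# functions are hypothetical stand-ins, not real APIs.

def plm_icd_predict(summary: str, k: int = 15) -> list[str]:
    """Stand-in for PLM-ICD: return up to k candidate ICD-10 codes."""
    # A real system would run the fine-tuned PLM over the summary here.
    return ["I10", "E11.9", "N18.3"][:k]

def llm_verify(summary: str, code: str) -> bool:
    """Stand-in for the agentic LLM audit: accept a candidate code only
    when the clinical narrative contains supporting evidence."""
    evidence = {"I10": "hypertension", "E11.9": "diabetes", "N18.3": "kidney"}
    return evidence.get(code, "") in summary.lower()

def audit_codes(summary: str) -> list[str]:
    """Draft candidates, then keep only those the verifier accepts."""
    candidates = plm_icd_predict(summary)
    return [c for c in candidates if llm_verify(summary, c)]

print(audit_codes("History of hypertension and type 2 diabetes."))
```

The key design point is that the LLM never generates codes itself; it only filters a fixed candidate list, which is why the paper reports fewer false positives in this configuration.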
Challenges and Solutions in Applying Large Language Models to Guideline-Based Management Planning and Automated Medical Coding in Health Care: Algorithm Development and Validation
Diagnostic errors and administrative burdens, including medical coding, remain major challenges in health care. Large language models (LLMs) have the potential to alleviate these problems, but their adoption has been limited by concerns regarding reliability, transparency, and clinical safety. This study introduces and evaluates 2 LLM-based frameworks, implemented within the Rhazes Clinician platform, designed to address these challenges: generation-assisted retrieval-augmented generation (GARAG) for automated evidence-based treatment planning and generation-assisted vector search (GAVS) for automated medical coding. GARAG was evaluated on 21 clinical test cases created by medically qualified authors. Each case was executed 3 times independently, and outputs were assessed using 4 criteria: correctness of references, absence of duplication, adherence to formatting, and clinical appropriateness of the generated management plan. GAVS was evaluated on 958 randomly selected admissions from the Medical Information Mart for Intensive Care (MIMIC)-IV database, in which billed International Classification of Diseases, Tenth Revision (ICD-10) codes served as the ground truth. Two approaches were compared: a direct GPT-4.1 baseline prompted to predict ICD-10 codes without constraints and GAVS, in which GPT-4.1 generated diagnostic entities that were each mapped onto the top 10 matching ICD-10 codes through vector search. Across the 63 outputs, 62 (98.4%) satisfied all evaluation criteria, with the only exception being a minor ordering inconsistency in one repetition of case 14. For GAVS, the 958 admissions contained 8576 assigned ICD-10 subcategory codes (1610 unique). The vanilla LLM produced 131,329 candidate codes, whereas GAVS produced 136,920. At the subcategory level, the vanilla LLM achieved 17.95% average recall (15.86% weighted), while GAVS achieved 20.63% (18.62% weighted), a statistically significant improvement (P<.001). 
At the category level, performance converged (32.60% vs 32.58% average weighted recall; P=.99). GARAG demonstrated a workflow that grounds management plans in diagnosis-specific, peer-reviewed guideline evidence, preserving fine-grained clinical detail during retrieval. GAVS significantly improved fine-grained diagnostic coding recall compared with a direct LLM baseline. Together, these frameworks illustrate how LLM-based methods can enhance clinical decision support and medical coding. Both were subsequently integrated into Rhazes Clinician, a clinician-facing web application that orchestrates LLM agents to call specialized tools, providing a single interface for physician use. Further independent validation and large-scale studies are required to confirm generalizability and assess their impact on patient outcomes.
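The GAVS step — mapping each LLM-generated diagnostic entity onto nearby ICD-10 codes by vector similarity — can be illustrated with a toy retriever. The bag-of-words "embedding" below is a deliberately simple stand-in for a real embedding model, and the three-entry code table is illustrative only; the actual system retrieves the top 10 matches from the full ICD-10 vocabulary.

```python
# Toy sketch of generation-assisted vector search (GAVS): an LLM emits a
# diagnostic entity string, and we retrieve the nearest ICD-10 code
# descriptions by cosine similarity over bag-of-words vectors.

import math
from collections import Counter

ICD10 = {
    "E11.9": "type 2 diabetes mellitus without complications",
    "I10":   "essential primary hypertension",
    "J18.9": "pneumonia unspecified organism",
}

def embed(text: str) -> Counter:
    """Toy embedding: token counts (a real system uses a learned model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_codes(entity: str, k: int = 2) -> list[str]:
    """Return the k ICD-10 codes whose descriptions best match entity."""
    q = embed(entity)
    ranked = sorted(ICD10, key=lambda c: cosine(q, embed(ICD10[c])), reverse=True)
    return ranked[:k]

print(top_codes("diabetes mellitus type 2"))
```

Constraining the LLM to free-text entities and delegating code selection to retrieval is what lets GAVS recover fine-grained subcategory codes the unconstrained baseline misses.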
Pediatric Coding Q&A: Expert Advice From the AAP Coding Hotline
For years, the American Academy of Pediatrics (AAP) Coding Hotline has been a trusted resource for pediatricians and others with coding conundrums. Pediatric Coding Q&A: Expert Advice From the AAP Coding Hotline is a compilation of the hotline's "greatest hits," featuring guidance from our coding experts on everything from coding for specific clinical conditions to applying both common and evolving coding concepts. Organized by clinical and coding topics, pediatricians, office managers, and coders will benefit from AAP Coding Hotline expertise and experience. Examples of topics include asthma; attention-deficit/hyperactivity disorder; behavioral health; billing and claims completion; breastfeeding and lactation counseling; foreign body removal; and newborn care (hospital or office). Readers will also find tips for effective billing practices, appropriate documentation, capturing all reportable services, and modifier use. Bonus content includes resources to learn more about specific conditions and coding concepts, as well as a series of coding fact sheets.
Medical billing & coding for dummies
Your complete guide to a career in medical billing and coding, updated with the latest changes in the ICD-10 and PPS This fully updated second edition of Medical Billing & Coding For Dummies provides readers with a complete overview of what to expect and how to succeed in a career in medical billing and coding. With healthcare providers moving more rapidly to electronic record systems, data accuracy and efficient data processing are more important than ever. Medical Billing & Coding For Dummies gives you everything you need to know to get started in medical billing and coding. This updated resource includes details on the most current industry changes in ICD-10 (10th revision of the International Statistical Classification of Diseases and Related Health Problems) and PPS (Prospective Payment Systems), expanded coverage on the differences between EHRs and MHRs, the latest certification requirements and standard industry practices, and updated tips and advice for dealing with government agencies and insurance companies. Prepare for a successful career in medical billing and coding Get the latest updates on changes in the ICD-10 and PPS Understand how the industry is changing and learn how to stay ahead of the curve Learn about flexible employment options in this rapidly growing industry Medical Billing & Coding For Dummies, 2nd Edition provides aspiring professionals with detailed information and advice on what to expect in a billing and coding career, ways to find a training program, certification options, and ways to stay competitive in the field.
Trends in Agricultural Triazole Fungicide Use in the United States, 1992–2016 and Possible Implications for Antifungal-Resistant Fungi in Human Disease
The fungus ( ) is the leading cause of invasive mold infections, which cause severe disease and death in immunocompromised people. Use of triazole antifungal medications in recent decades has improved patient survival; however, triazole-resistant infections have become common in parts of Europe and are emerging in the United States. Triazoles are also a class of fungicides used in plant agriculture, and certain triazole-resistant strains found causing disease in humans have been linked to environmental fungicide use. We examined U.S. temporal and geographic trends in the use of triazole fungicides using U.S. Geological Survey agricultural pesticide use estimates. Based on our analysis, overall tonnage of triazole fungicide use nationwide was relatively constant during 1992-2005 but increased during 2006-2016 to in 2016. During 1992-2005, triazole fungicide use occurred mostly in orchards and grapes, wheat, and other crops, but recent increases in use have occurred primarily in wheat, corn, soybeans, and other crops, particularly in Midwest and Southeast states. We conclude that, given the chemical similarities between triazole fungicides and triazole antifungal drugs used in human medicine, increased monitoring for environmental and clinical triazole resistance in would improve overall understanding of these interactions, as well as help identify strategies to mitigate development and spread of resistance. https://doi.org/10.1289/EHP7484.
Conformal Prediction and Large Language Models for Medical Coding
Abstract The assignment of Current Procedural Terminology (CPT) codes to medical events is a highly cumbersome logistical challenge for many healthcare organizations, as well as a significant contributor to medical expenses. Improvement in the allocation of medical resources and expenses dedicated to such tasks can be achieved through automation. However, because of the complex nature of medical records, automation of procedure terminologies is only now developing with the advent of machine learning methods. In this study, we develop a fine-tuned large language model (LLaMA-3B) as a high-reliability predictor for CPT codes. As input, we use 2018 pathology report text data, including gross report, microscopic description, brief medical history, and final diagnosis. We define our dataset with the top five most common technical-component CPT codes, which account for 85% of all samples. As a (meta) predictor of the veracity of the large language model itself, we use the prediction's softmax scalar value from the classification model, borrowing from similar recent approaches in conformal prediction. Specifically, the validation set, which is distinct from both the train and test sets, is used to establish a prediction threshold below which the model withholds judgement. We show that, by fine-tuning an off-the-shelf language model on pathology report text alone, we can achieve 95% prediction accuracy on the top five most common CPT codes in our dataset. We then demonstrate the utility of combining large language models with conformal prediction: the two in combination raise our accuracy to 99.5% when the model is allowed to abstain from predicting on 30% of the data, as determined by a separate threshold on the predictor's scalar value from the aforementioned validation set. We therefore present a highly flexible and provably reliable model for medical coding.
Large language models can thus serve as a powerful tool for overcoming challenges in medical billing, improving healthcare efficiency and reducing medical costs.
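The abstention mechanism described above — calibrating a softmax-score threshold on a held-out validation set so the model withholds judgement on its least confident fraction of cases — can be sketched as follows. The scores, the CPT code "88305", and the 30% abstention target are illustrative values, not the paper's data.

```python
# Sketch of selective prediction via a validation-calibrated threshold.
# All numbers are illustrative; the paper calibrates on its own
# validation split of pathology reports.

def abstention_threshold(val_scores: list[float], abstain_frac: float) -> float:
    """Choose the score below which roughly abstain_frac of the
    validation examples fall; predictions under it are withheld."""
    ranked = sorted(val_scores)
    cut = int(len(ranked) * abstain_frac)
    return ranked[cut]

def predict_or_abstain(score: float, label: str, threshold: float):
    """Emit the label only when the softmax score clears the threshold;
    None means the model withholds judgement."""
    return label if score >= threshold else None

val = [0.55, 0.62, 0.71, 0.80, 0.88, 0.90, 0.93, 0.95, 0.97, 0.99]
t = abstention_threshold(val, 0.3)           # abstain on the lowest 30%
print(t)
print(predict_or_abstain(0.91, "88305", t))  # confident -> predict
print(predict_or_abstain(0.60, "88305", t))  # low score -> abstain
```

Because the threshold comes from a validation set disjoint from both train and test data, the abstention rate observed at deployment approximately matches the calibrated fraction, which is the property the paper exploits to trade coverage for accuracy.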
Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare
The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility in AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities. Biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. 
Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, it is imperative to adopt a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.
The effectiveness of online learning in improving the knowledge about medical coding: a pilot study
Medical coding (MC) is the process of converting clinical terminologies and concepts into a list of alphanumeric or numeric codes in order to produce well-written patient records and more accurate translation. The purpose of this study was to determine the effectiveness of an online course in increasing knowledge of medical coding among medical graduates, and to assess their satisfaction with online education. The study recruited fourteen recent medical graduates. The online course consisted of lectures, quizzes, and interactive activities. The study used a pre-test/post-test design to assess the participants' knowledge of medical coding before and after the online course. Participants also completed a questionnaire assessing their satisfaction with online education. The findings indicated that participants' knowledge of medical coding significantly improved after completing the online course (P=0.001). The results also showed that the majority of the participants (93%) found the online course effective for improving learning, and all of them reported engagement and satisfaction. This study suggests that online courses can be an effective means of increasing knowledge about medical coding among medical graduates. The findings are important for learners and tutors seeking to improve and fine-tune future course offerings.
Potential loss of revenue due to errors in clinical coding during the implementation of the Malaysia diagnosis related group (MY-DRG®) Casemix system in a teaching hospital in Malaysia
Background The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Group (DRG) codes, especially if the hospital uses a casemix system as a tool for resource allocation and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to errors in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG®) Casemix System in a teaching hospital in Malaysia. Methods Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC), who corrected any erroneous codes originally entered by the hospital coders. The pre- and post-coding results were compared; in cases of disagreement, the ISC's codes were considered accurate. The cases were then re-grouped using a MY-DRG® grouper to assess and compare the changes in the DRG assignment and the hospital tariff assignment. The outcomes were then verified by a casemix expert. Results Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the most frequent, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and principal diagnoses at 49.8% (231/464). The coding errors resulted in the assignment of different MY-DRG® codes in 74.0% (307/415) of the cases. Of these, 52.1% (160/307) of the cases were assigned a lower hospital tariff. In total, the potential loss of income due to changes in MY-DRG® code assignment was RM654,303.91. Conclusions The quality of coding is a crucial aspect of implementing casemix systems. Intensive re-training and close monitoring of coder performance in the hospital should be undertaken to prevent the potential loss of hospital income.
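The cascade of proportions reported in this abstract (error rate, fraction of errors that change the DRG, fraction of changed DRGs with a lower tariff) can be re-derived directly from the stated counts, which is a useful sanity check when reading casemix audit studies.

```python
# Re-derive the percentages in the abstract from the raw counts it
# reports; the counts themselves are taken from the study.

def pct(n: int, d: int) -> float:
    """Percentage n/d rounded to one decimal place."""
    return round(100 * n / d, 1)

print(pct(415, 464))  # records with any coding error
print(pct(307, 415))  # errors that changed the MY-DRG code
print(pct(160, 307))  # changed codes assigned a lower tariff
```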