238 result(s) for "Chellappa, Rama"
Predictive modeling of lean body mass, appendicular lean mass, and appendicular skeletal muscle mass using machine learning techniques: A comprehensive analysis utilizing NHANES data and the Look AHEAD study
This study addresses the pressing need for improved methods to predict lean mass in adults, and in particular lean body mass (LBM), appendicular lean mass (ALM), and appendicular skeletal muscle mass (ASMM) for the early detection and management of sarcopenia, a condition characterized by muscle loss and dysfunction. Sarcopenia presents significant health risks, especially in the elderly and in populations with chronic diseases such as cancer. Current assessment methods, primarily relying on Dual-energy X-ray absorptiometry (DXA) scans, lack widespread applicability, hindering timely intervention. Leveraging machine learning techniques, this research aimed to develop and validate predictive models using data from the National Health and Nutrition Examination Survey (NHANES) and the Action for Health in Diabetes (Look AHEAD) study. The models were trained on anthropometric data, demographic factors, and DXA-derived metrics to accurately estimate LBM, ALM, and ASMM normalized to weight. Results demonstrated consistent performance across various machine learning algorithms, with LassoNet, a non-linear extension of the popular LASSO method, exhibiting superior predictive accuracy. Notably, the integration of bone mineral density measurements into the models had minimal impact on predictive accuracy, suggesting potential alternatives to DXA scans for lean mass assessment in the general population. Despite the robustness of the models, limitations include the absence of outcome measures and cohorts highly vulnerable to muscle mass loss. Nonetheless, these findings hold promise for revolutionizing lean mass assessment paradigms, offering implications for chronic disease management and personalized health interventions. Future research endeavors should focus on validating these models in diverse populations and addressing clinical complexities to enhance prediction accuracy and clinical utility in managing sarcopenia.
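LassoNet, the best-performing model in this study, is a non-linear extension of the classical LASSO. As an illustration of the underlying sparse-regression idea only (not the authors' model, and with synthetic stand-in data rather than NHANES measurements), here is a minimal LASSO fit via proximal gradient descent (ISTA):

```python
import numpy as np

def lasso_ista(X, y, lam=0.05, n_iter=2000):
    """Fit LASSO by ISTA: minimize 0.5/n * ||y - Xw||^2 + lam * ||w||_1."""
    n, d = X.shape
    lr = n / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz const. of the smooth part
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        # Soft-thresholding: proximal operator of the l1 penalty.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Synthetic stand-in for anthropometric predictors; only two of the six
# hypothetical features actually drive the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
true_w = np.array([1.5, 0.0, 0.0, -0.8, 0.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=200)
w = lasso_ista(X, y)
```

The l1 penalty drives the coefficients of the four irrelevant features to exactly zero, which is the feature-selection behavior that makes LASSO-style models attractive when many anthropometric inputs are available.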
Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms
Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible.
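The fusion step described above, averaging rating-based identity judgments across examiners, is simple to sketch. The ratings below are invented for illustration (a positive rating means "same person"); the point is that per-pair averaging can correct each individual examiner's isolated errors:

```python
# Hypothetical ratings: rows = examiners, columns = face pairs.
ratings = [
    [2, -1, 3, -2, -1, -3],  # examiner A (one error, pair 5)
    [1,  1, 2, -3,  1, -2],  # examiner B (one error, pair 2)
    [3, -2, 1,  1,  2, -1],  # examiner C (one error, pair 4)
]
truth = [1, 0, 1, 0, 1, 0]   # 1 = same person, 0 = different

def accuracy(scores, truth):
    preds = [1 if s > 0 else 0 for s in scores]
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

# Fusion: average the ratings each examiner gave to the same pair.
fused = [sum(col) / len(col) for col in zip(*ratings)]

solo = [accuracy(r, truth) for r in ratings]
fused_acc = accuracy(fused, truth)
```

Here each examiner alone scores 5/6 while the fused judgments score 6/6, mirroring the paper's finding that fusion boosts lower performers and stabilizes variability.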
Encoding of Demographic and Anatomical Information in Chest X-Ray-Based Severe Left Ventricular Hypertrophy Classifiers
Background. Severe left ventricular hypertrophy (SLVH) is a high-risk structural cardiac abnormality associated with increased risk of heart failure. It is typically assessed using echocardiography or cardiac magnetic resonance imaging, but these modalities are limited by cost, accessibility, and workflow burden. We introduce a deep learning framework that classifies SLVH directly from chest radiographs, without intermediate anatomical estimation models or demographic inputs. A key contribution of this work lies in interpretability. We quantify how clinically relevant attributes are encoded within internal representations, enabling transparent model evaluation and integration into AI-assisted workflows. Methods. We construct class-balanced subsets from the CheXchoNet dataset with equal numbers of SLVH-positive and negative cases while preserving the original train, validation, and test proportions. ResNet-18 is fine-tuned from ImageNet weights, and a Vision Transformer (ViT) encoder is pretrained via masked autoencoding with a trainable classification head. No anatomical or demographic inputs are used during training. We apply Mutual Information Neural Estimation (MINE) to quantify dependence between learned features and five attributes: age, sex, interventricular septal diameter (IVSDd), posterior wall diameter (LVPWDd), and internal diameter (LVIDd). Results. ViT achieves an AUROC of 0.82 [95% CI: 0.78–0.85] and an AUPRC of 0.80 [95% CI: 0.76–0.85], indicating strong performance in SLVH detection from chest radiographs. MINE reveals clinically coherent attribute encoding in learned features: age > sex > IVSDd > LVPWDd > LVIDd. Conclusions. This study shows that SLVH can be accurately classified from chest radiographs alone. The framework combines diagnostic performance with quantitative interpretability, supporting reliable deployment in triage and decision support.
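The AUROC reported above has a simple rank-based definition: the probability that a randomly chosen positive case scores above a randomly chosen negative one (ties counted as one half). A self-contained sketch with made-up classifier scores:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U formulation: fraction of
    positive/negative pairs ranked correctly, ties worth 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores for hypothetical SLVH-positive (1) and -negative (0) radiographs:
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
result = auroc(scores, labels)
```

One positive/negative pair out of nine is mis-ranked here, giving an AUROC of 8/9; a perfect ranking gives 1.0. Confidence intervals like the ones reported (e.g. via bootstrap resampling of cases) can be layered on top of this estimator.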
Towards transforming malaria vector surveillance using VectorBrain: a novel convolutional neural network for mosquito species, sex, and abdomen status identifications
Malaria is a major public health concern, causing significant morbidity and mortality globally. Monitoring the local population density and diversity of the vectors transmitting malaria is critical to implementing targeted control strategies. However, the current manual identification of mosquitoes is a time-consuming and intensive task, posing challenges in low-resource areas like sub-Saharan Africa; in addition, existing automated identification methods lack scalability, mobile deployability, and field-test validity. To address these bottlenecks, a mosquito image database with fresh wild-caught specimens using basic smartphones is introduced, and we present a novel CNN-based architecture, VectorBrain, designed for identifying the species, sex, and abdomen status of a mosquito concurrently while being efficient and lightweight in computation and size. Overall, our proposed approach achieves 94.44±2% accuracy with a macro-averaged F1 score of 94.10±2% for the species classification, 97.66±1% accuracy with a macro-averaged F1 score of 96.17±1% for the sex classification, and 82.20±3.1% accuracy with a macro-averaged F1 score of 81.17±3% for the abdominal status classification. VectorBrain running on local mobile devices, paired with a low-cost handheld imaging tool, is promising in transforming the mosquito vector surveillance programs by reducing the burden of expertise required and facilitating timely response based on accurate monitoring.
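The macro-averaged F1 scores reported above weight every class equally, so a rare mosquito species counts as much as a common one. A minimal implementation, with hypothetical species labels for illustration:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1, then an unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical predictions for six specimens:
y_true = ["gambiae", "gambiae", "funestus", "funestus", "other", "other"]
y_pred = ["gambiae", "funestus", "funestus", "funestus", "other", "gambiae"]
score = macro_f1(y_true, y_pred)
```

In this toy example the per-class F1 scores are 0.8 (funestus), 0.5 (gambiae), and 2/3 (other), so the macro average is their unweighted mean regardless of class frequency.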
View Invariance for Human Action Recognition
This paper presents an approach for viewpoint-invariant human action recognition, an area that has received scant attention so far, relative to the overall body of work in human action recognition. It has been established previously that there exist no invariants for 3D to 2D projection. However, there exists a wealth of techniques in 2D invariance that can be used to advantage in 3D to 2D projection. We exploit these techniques and model actions in terms of view-invariant canonical body poses and trajectories in 2D invariance space, leading to a simple and effective way to represent and recognize human actions from a general viewpoint. We first evaluate the approach theoretically and show why a straightforward application of the 2D invariance idea will not work. We describe strategies designed to overcome inherent problems in the straightforward approach and outline the recognition algorithm. We then present results on 2D projections of publicly available human motion capture data as well as on manually segmented real image sequences. In addition to robustness to viewpoint change, the approach is robust enough to handle different people, minor variabilities in a given action, and the speed of action (and hence, frame rate), while encoding sufficient distinction among actions.
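The classic example of the kind of 2D projective invariant such approaches build on is the cross-ratio of four collinear points: it is unchanged by any projective transformation, which is why such quantities survive a camera's 3D-to-2D projection. A small sketch (the particular homography is arbitrary, chosen only for illustration):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points (1D coordinates)."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography_1d(x, h=(2.0, 1.0, 0.5, 3.0)):
    """A projective map of the line: x -> (p*x + q) / (r*x + s)."""
    p, q, r, s = h
    return (p * x + q) / (r * x + s)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*[homography_1d(x) for x in pts])
# 'before' and 'after' agree: the cross-ratio is projectively invariant.
```

Individual distances and ratios of distances are destroyed by the projective map, but this particular combination of them is not, so it can index body-pose configurations independently of viewpoint.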
Evaluation and mitigation of cognitive biases in medical language models
Increasing interest in applying large language models (LLMs) to medicine is due in part to their impressive performance on medical exam questions. However, these exams do not capture the complexity of real patient–doctor interactions because of factors like patient compliance, experience, and cognitive bias. We hypothesized that LLMs would produce less accurate responses when faced with clinically biased questions as compared to unbiased ones. To test this, we developed the BiasMedQA dataset, which consists of 1273 USMLE questions modified to replicate common clinically relevant cognitive biases. We assessed six LLMs on BiasMedQA and found that GPT-4 stood out for its resilience to bias, in contrast to Llama 2 70B-chat and PMC Llama 13B, which showed large drops in performance. Additionally, we introduced three bias mitigation strategies, which improved but did not fully restore accuracy. Our findings highlight the need to improve LLMs’ robustness to cognitive biases, in order to achieve more reliable applications of LLMs in healthcare.
Face Processing: Advanced Modeling And Methods
Major strides have been made in face processing in the last ten years due to the fast-growing need for security in various locations around the globe. A human eye can discern the details of a specific face with relative ease. It is this level of detail that researchers are striving to create with ever-evolving computer technologies that will become our perfect mechanical eyes. The difficulty that confronts researchers stems from turning a 3D object into a 2D image. That subject is covered in depth from several different perspectives in this volume. This book begins with a comprehensive introductory chapter for those who are new to the field. A compendium of articles follows that is divided into three sections. The first covers basic aspects of face processing from human to computer. The second deals with face modeling from computational and physiological points of view. The third tackles the advanced methods, which include illumination, pose, expression, and more. Editors Zhao and Chellappa have compiled a concise and necessary text for industrial research scientists, students, and professionals working in the area of image and signal processing.
• Contributions from over 35 leading experts in face detection, recognition, and image processing
• Over 150 informative images, 16 in full color, illustrate and offer insight into the most up-to-date advanced face processing methods and techniques
• Extensive detail makes this a need-to-own book for all involved with image and signal processing
From BoW to CNN: Two Decades of Texture Representation for Texture Classification
Texture is a fundamental characteristic of many types of images, and texture representation is one of the essential and challenging problems in computer vision and pattern recognition, one that has attracted extensive research attention over several decades. Since 2000, texture representations based on Bag of Words and on Convolutional Neural Networks have been extensively studied with impressive performance. Given this period of remarkable evolution, this paper aims to present a comprehensive survey of advances in texture representation over the last two decades. More than 250 major publications are cited in this survey covering different aspects of the research, including benchmark datasets and state-of-the-art results. Reflecting on what has been achieved so far, the survey discusses open challenges and directions for future research.
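In the Bag-of-Words pipeline the survey covers, local descriptors extracted from an image are quantized against a learned codebook, and the image is represented by a normalized histogram of codeword counts. A minimal sketch of that quantization step (the codebook and descriptor values below are made up for illustration; real systems learn the codebook, e.g. by k-means over training descriptors):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest codeword (Euclidean
    distance) and return the L1-normalized codeword histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
descriptors = np.array([[0.1, 0.0], [0.9, 1.1], [0.1, 0.9], [0.0, 0.1]])
h = bow_histogram(descriptors, codebook)
```

The resulting histogram is an orderless summary of local texture statistics, which is what makes the representation robust to spatial rearrangement and, historically, effective for texture classification.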