Catalogue Search | MBRL
Explore the vast range of titles available.
41 result(s) for "Herath, H. M. K. K. M. B."
A Systematic Review of Medical Image Quality Assessment
by Lee, Byeong-Il; Herath, H. M. S. S.; Herath, H. M. K. K. M. B.
in Accuracy; Algorithms; Artificial intelligence
2025
Medical image quality assessment (MIQA) is vital in medical imaging and directly affects diagnosis, patient treatment, and overall clinical outcomes. Accurate, high-quality imaging is necessary to make accurate diagnoses, design treatments efficiently, and monitor diseases consistently. This review summarizes forty-two research studies on diverse MIQA approaches and their effects on diagnostic performance, patient outcomes, and workflow efficiency. It contrasts subjective (manual assessment) and objective (rule-driven) evaluation methods, underscores the growing promise of artificial intelligence (AI) and machine learning (ML) in MIQA automation, and describes the existing MIQA challenges. AI-powered tools are revolutionizing MIQA with automated quality checks, noise reduction, and artifact removal, producing consistent and reliable imaging evaluation. Across the reviewed studies, enhanced image quality is shown to improve diagnostic precision and support clinical decision making. However, challenges remain: variability in image quality, inconsistency in human ratings, and small datasets all hinder standardization. These must be addressed with better-quality data, low-cost labeling, and standardization efforts. Ultimately, this paper reinforces the need for high-quality medical imaging and the potential of AI-powered MIQA, and argues that advancing research in this area is crucial to advancing healthcare.
Journal Article
Advancing Medical Training with Mixed Reality and Haptic Feedback Simulator for Acupuncture Needling
by Yi, Myunggi; Lee, Byeong-il; Guruge, Kasunika
in Acupuncture; Acupuncture - education; Acupuncture Points
2025
Traditional acupuncture training often lacks consistent, objective feedback, while current extended reality (XR) solutions rarely include quantitative assessment. This study developed and evaluated a feedback-enabled mixed reality (MR) acupuncture simulator to improve skill acquisition through depth-responsive guidance. The system, used on Microsoft HoloLens 2, combines a MetaHuman-based virtual patient with expert-designed acupoint geometries. It provides depth-dependent vibrotactile cues via a wearable haptic device and calculates a composite score from normalized metrics, including insertion depth, angular deviation, tip-to-center distance, and task duration. Ten participants (eight novices and two experts) performed needle tasks at LI4, LI11, and TE3 across two sessions. Mean depth error decreased from 6.41 mm to 3.58 mm, and task time from 9.29 s to 6.83 s. At LI11, beginners improved in achieved depth (16.24 ± 1.88 mm to 19.74 ± 1.23 mm), reduced angular deviation (27.83° to 15.34°), and shortened completion time (38.77 s to 13.28 s). Experts outperformed novices (69.25 ± 21.64 vs. 56.26 ± 23.37), confirming construct validity. Usability evaluation showed a mean overall score of 4.46 ± 0.51 and excellent reliability (McDonald’s ω = 0.93). These results demonstrate that expert-informed scoring and depth-responsive haptic feedback substantially enhance accuracy, efficiency, and learning confidence, validating the system’s technical robustness and educational readiness for clinical acupuncture training.
Journal Article
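The simulator's composite score combines normalized insertion depth, angular deviation, tip-to-center distance, and task duration. The abstract does not give the weighting or tolerance ranges, so the sketch below assumes equal weights and illustrative min-max bounds:

```python
import numpy as np

def composite_score(depth_err_mm, angle_dev_deg, tip_dist_mm, duration_s,
                    bounds=None, weights=None):
    """Combine normalized needling metrics into a single 0-100 score.

    Each raw metric is min-max normalized against an assumed tolerance
    range and inverted so that lower error yields a higher score. The
    ranges and equal weights below are illustrative placeholders, not
    the values used in the published simulator.
    """
    bounds = bounds or {            # assumed (best, worst) tolerances
        "depth": (0.0, 10.0),       # mm of depth error
        "angle": (0.0, 45.0),       # degrees of angular deviation
        "tip":   (0.0, 15.0),       # mm tip-to-acupoint-center distance
        "time":  (5.0, 60.0),       # seconds task duration
    }
    raw = {"depth": depth_err_mm, "angle": angle_dev_deg,
           "tip": tip_dist_mm, "time": duration_s}
    weights = weights or {k: 0.25 for k in raw}

    score = 0.0
    for k, v in raw.items():
        lo, hi = bounds[k]
        norm = np.clip((v - lo) / (hi - lo), 0.0, 1.0)  # 0 = best, 1 = worst
        score += weights[k] * (1.0 - norm)
    return 100.0 * score

# Example: a fairly accurate, quick insertion (values near the paper's
# post-training means; the tip distance here is an assumed figure)
print(round(composite_score(3.58, 15.34, 4.0, 13.28), 1))  # → 72.1
```

A perfect attempt (zero error, minimum time) scores 100 under this scheme, and worse metrics monotonically lower the score.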
Multi-Domain CoP Feature Analysis of Functional Mobility for Parkinson’s Disease Detection Using Wearable Pressure Insoles
2025
Parkinson’s disease (PD) impairs balance and gait through neuromotor dysfunction, yet conventional assessments often overlook subtle postural deficits during dynamic tasks. This study evaluated the diagnostic utility of center-of-pressure (CoP) features captured by pressure-sensing insoles during the Timed Up and Go (TUG) test. Using 39 PD and 38 control participants from the recently released open-access WearGait-PD dataset, the authors extracted 144 CoP features spanning positional, dynamic, frequency, and stochastic domains, including per-foot averages and asymmetry indices. Two scenarios were analyzed: the complete TUG and its 3 m walking segment. Model development followed a fixed protocol with a single participant-level 80/20 split; sequential forward selection with five-fold cross-validation optimized the number of features within the training set. Five classifiers were evaluated: SVM-RBF, logistic regression (LR), random forest (RF), k-nearest neighbors (k-NN), and Gaussian naïve Bayes (NB). LR performed best on the held-out test set (accuracy = 0.875, precision = 1.000, recall = 0.750, F1 = 0.857, ROC-AUC = 0.921) using a 23-feature subset. RF and SVM-RBF each achieved 0.812 accuracy. In contrast, applying the identical pipeline to the 3 m walking segment yielded lower performance (best model: k-NN, accuracy = 0.688, F1 = 0.615, ROC–AUC = 0.734), indicating that the multi-phase TUG task captures PD-related balance deficits more effectively than straight walking. All four feature families contributed to classification performance. Dynamic and frequency-domain descriptors, often appearing in both average and asymmetry form, were most consistently selected. These features provided robust magnitude indicators and offered complementary insights into reduced control complexity in PD.
Journal Article
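The paper's protocol (a participant-level 80/20 split, sequential forward selection tuned by five-fold cross-validation inside the training set, and logistic regression evaluated on the held-out split) can be sketched with scikit-learn. The data here is synthetic and the feature counts are scaled down (40 features, 10 selected, versus the paper's 144 and 23) so the example runs quickly:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the CoP feature table (the WearGait-PD data
# itself is not reproduced here); 77 "participants" as in the study.
X, y = make_classification(n_samples=77, n_features=40, n_informative=20,
                           random_state=0)

# Single participant-level 80/20 split, as in the paper's protocol.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Sequential forward selection with 5-fold CV inside the training set.
lr = LogisticRegression(max_iter=1000)
sfs = SequentialFeatureSelector(lr, n_features_to_select=10,
                                direction="forward", cv=5)
sfs.fit(X_tr, y_tr)

# Refit on the selected subset and score the held-out split.
lr.fit(sfs.transform(X_tr), y_tr)
acc = accuracy_score(y_te, lr.predict(sfs.transform(X_te)))
print(f"held-out accuracy: {acc:.3f}")
```

The key design point mirrored from the paper is that feature selection sees only the training split, so the held-out accuracy is not inflated by selection leakage.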
Long Short-Term Memory-Enabled Electromyography-Controlled Adaptive Wearable Robotic Exoskeleton for Upper Arm Rehabilitation
by Yasakethu, S. L. P.; Yi, Myunggi; Fernando, Dileepa
in Adaptability; Artificial intelligence; Control algorithms
2025
Restoring strength, function, and mobility following an illness, accident, or surgery is the primary goal of upper arm rehabilitation. Exoskeletons offer adaptable support, enhancing patient engagement and accelerating recovery. This work proposes an adjustable, wearable robotic exoskeleton powered by electromyography (EMG) data for upper arm rehabilitation. Three activation levels—low, medium, and high—were applied to the EMG data to forecast the Pulse Width Modulation (PWM) based on the range of motion (ROM) angle. Conventional machine learning (ML) models, including K-Nearest Neighbor Regression (K-NNR), Support Vector Regression (SVR), and Random Forest Regression (RFR), were compared with neural network approaches, including Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTM) to determine the best ML model for the ROM angle prediction. The LSTM model emerged as the best predictor with a high accuracy of 0.96. The system achieved 0.89 accuracy in exoskeleton control and 0.85 accuracy in signal categorization. Additionally, the proposed exoskeleton demonstrated a 0.97 performance in ROM correction compared to conventional methods (p = 0.097). These findings highlight the potential of EMG-based, LSTM-enabled exoskeleton systems to deliver accurate and adaptive upper arm rehabilitation, particularly for senior citizens, by providing personalized and effective support.
Journal Article
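The core sequence model, an LSTM mapping an EMG window to a predicted ROM angle and then to a PWM duty value, can be sketched with a minimal NumPy forward pass. The weights are random and untrained, and the channel count, window length, and linear angle-to-PWM map are all assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal single-layer LSTM forward pass (NumPy only).

    Random, untrained weights: this sketches the sequence-to-angle
    mapping described in the paper, not the trained model itself.
    """
    def __init__(self, n_in, n_hidden):
        k = 1.0 / np.sqrt(n_hidden)
        self.W = rng.uniform(-k, k, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.w_out = rng.uniform(-k, k, n_hidden)  # linear head -> ROM angle
        self.n_hidden = n_hidden

    def forward(self, seq):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        for x in seq:                      # one EMG sample per time step
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)    # input, forget, cell, output gates
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)     # cell-state update
            h = o * np.tanh(c)             # hidden state
        return float(self.w_out @ h)       # predicted ROM angle (arbitrary units)

def angle_to_pwm(angle, max_angle=150.0):
    """Map a ROM angle to an 8-bit PWM duty value (assumed linear map)."""
    return int(np.clip(angle / max_angle, 0, 1) * 255)

emg_window = rng.normal(size=(200, 8))     # 200 samples x 8 channels (assumed)
angle = TinyLSTM(n_in=8, n_hidden=16).forward(emg_window)
print(angle_to_pwm(abs(angle) * 100))      # toy scaling for illustration
```

In practice the gate weights would be learned (e.g. with a deep-learning framework) against recorded EMG/ROM pairs; the point here is only the data flow from EMG window to gated recurrence to PWM command.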
Biomimetic Robotics and Sensing for Healthcare Applications and Rehabilitation: A Systematic Review
by Hewage, Chaminda; Lee, Byeong-Il; Yasakethu, S. L. P.
in Artificial intelligence; bio-inspired robotics; biomimetic sensing
2025
Biomimetic robotics and sensor technologies are reshaping the landscape of healthcare and rehabilitation. Despite significant progress across various domains, many areas within healthcare still demand further bio-inspired innovations. To advance this field effectively, it is essential to synthesize existing research, identify persistent knowledge gaps, and establish clear frameworks to guide future developments. This systematic review addresses these needs by analyzing 89 peer-reviewed sources retrieved from the Scopus database, focusing on the application of biomimetic robotics and sensing technologies in healthcare and rehabilitation contexts. The findings indicate a predominant focus on enhancing human mobility and support, with rehabilitative and assistive technologies comprising 61.8% of the reviewed literature. Additionally, 12.36% of the studies incorporate intelligent control systems and Artificial Intelligence (AI), reflecting a growing trend toward adaptive and autonomous solutions. Further technological advancements are demonstrated by research in bioengineering applications (13.48%) and innovations in soft robotics with smart actuation mechanisms (11.24%). The development of medical robots (7.87%) and wearable robotics, including exosuits (10.11%), underscores specific progress in clinical and patient-centered care. Moreover, the emergence of transdisciplinary approaches, present in 6.74% of the studies, highlights the increasing convergence of diverse fields in tackling complex healthcare challenges. By consolidating current research efforts, this review aims to provide a comprehensive overview of the state of the art, serving as a foundation for future investigations aimed at improving healthcare outcomes and enhancing quality of life.
Journal Article
Spectro-Image Analysis with Vision Graph Neural Networks and Contrastive Learning for Parkinson’s Disease Detection
by Yi, Myunggi; Hewage, Chaminda; Malekroodi, Hadi Sedigh
in Accuracy; Acoustic properties; Analysis
2025
This study presents a novel framework that integrates Vision Graph Neural Networks (ViGs) with supervised contrastive learning for enhanced spectro-temporal image analysis of speech signals in Parkinson’s disease (PD) detection. The approach introduces a frequency band decomposition strategy that transforms raw audio into three complementary spectral representations, capturing distinct PD-specific characteristics across low-frequency (0–2 kHz), mid-frequency (2–6 kHz), and high-frequency (6 kHz+) bands. The framework processes mel multi-band spectro-temporal representations through a ViG architecture that models complex graph-based relationships between spectral and temporal components, trained using a supervised contrastive objective that learns discriminative representations distinguishing PD-affected from healthy speech patterns. Comprehensive experimental validation on multi-institutional datasets from Italy, Colombia, and Spain demonstrates that the proposed ViG-contrastive framework achieves superior classification performance, with the ViG-M-GELU architecture achieving 91.78% test accuracy. The integration of graph neural networks with contrastive learning enables effective learning from limited labeled data while capturing complex spectro-temporal relationships that traditional Convolutional Neural Network (CNN) approaches miss, representing a promising direction for developing more accurate and clinically viable speech-based diagnostic tools for PD.
Journal Article
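The three-band decomposition (0–2 kHz, 2–6 kHz, 6 kHz+) can be sketched by masking FFT bins. The band edges follow the paper; the FFT-mask implementation, sample rate, and test tones are illustrative choices:

```python
import numpy as np

def band_decompose(signal, sr, bands=((0, 2000), (2000, 6000), (6000, None))):
    """Split a signal into low/mid/high bands by zeroing FFT bins.

    Band edges follow the 0-2 kHz / 2-6 kHz / 6 kHz+ split described in
    the paper; the masking approach itself is an illustrative choice.
    """
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    out = []
    for lo, hi in bands:
        # None as the upper edge means "everything from lo upward".
        mask = freqs >= lo if hi is None else (freqs >= lo) & (freqs < hi)
        out.append(np.fft.irfft(spec * mask, n=len(signal)))
    return out

sr = 16000
t = np.arange(sr) / sr          # one second of audio
x = np.sin(2*np.pi*440*t) + np.sin(2*np.pi*3000*t) + np.sin(2*np.pi*7000*t)
low, mid, high = band_decompose(x, sr)

# Each band retains only the tone that falls inside it, and because the
# masks partition the spectrum, the three bands sum back to the input.
print(np.allclose(low + mid + high, x))  # prints True
```

Each band signal would then be rendered as its own mel spectro-temporal image before entering the ViG architecture.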
MetaAcuPoint: MetaHuman-Generated Synthetic Data for Hand Acupoint Localization
2025
Background: Precise localization of acupuncture points (acupoints) is crucial for the clinical success of Traditional Korean Medicine (TKM). Traditional methods that rely on visual inspection and palpation are subjective and prone to inter- and intra-observer differences, making standardization challenging. The progress of data-driven localization techniques is also limited by the scarcity of annotated datasets and inconsistent labeling quality. Objective: This study presents MetaAcuPoint, a synthetic dataset created to overcome these limitations by providing high-fidelity, anatomically consistent hand images for acupoint localization. Methods: MetaAcuPoint was generated using MetaHuman avatars within Unreal Engine, resulting in 900 RGB hand images. Anatomically aligned, bone-attached sockets were implemented for five diagnostically relevant hand acupoints, ensuring millimeter-level precision and spatial consistency across various hand poses. Dataset validity was assessed by training a high-resolution network (HRNet-W48) within the MMPose framework and testing its performance on real-world forearm images. Results: The synthetic-trained model achieved a mean distance error (MDE) of 5.67 ± 3.13 pixels, closely aligning with the real-data baseline at 4.81 ± 2.85 pixels. Adding synthetic samples to real data further enhanced performance (MDE: 4.95 pixels). In contrast, manually annotated synthetic images yielded poorer results (MDE: 12.76 pixels), emphasizing the advantages of automated anatomical annotation. Generalization tests across four external datasets confirmed that the synthetic data-trained model outperformed the real-data-trained model, maintaining higher accuracy (MDE: 5.84–6.45 mm vs. 10.63–15.80 mm). Conclusions: MetaAcuPoint demonstrates the first example of synthetic-to-real generalization for hand acupoint localization. By combining photorealistic rendering with anatomically grounded annotation, the dataset offers a reliable resource to promote standardized, data-driven approaches in acupuncture research and practice.
Journal Article
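The mean distance error (MDE) reported throughout those results is the average Euclidean distance between predicted and ground-truth keypoints. A minimal sketch, with a toy two-point example:

```python
import numpy as np

def mean_distance_error(pred, gt):
    """Mean ± std of Euclidean distances between predicted and
    ground-truth keypoints (the MDE metric used for acupoint
    localization). Inputs are (N, 2) arrays of (x, y) coordinates
    in pixels or millimeters."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    d = np.linalg.norm(pred - gt, axis=1)   # per-keypoint distance
    return d.mean(), d.std()

# Toy example: two predicted acupoints offset by (3, 4) and (0, 5) px.
mde, sd = mean_distance_error([[13, 24], [50, 65]], [[10, 20], [50, 60]])
print(f"MDE: {mde:.2f} ± {sd:.2f} px")   # → MDE: 5.00 ± 0.00 px
```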
Controlling an Anatomical Robot Hand Using the Brain-Computer Interface Based on Motor Imagery
2021
More than one billion people face disabilities worldwide, according to the World Health Organization (WHO). In Sri Lanka, thousands of people suffer from a variety of disabilities, especially hand disabilities, due to the civil war in the country. The Ministry of Health of Sri Lanka reports that by 2025, the number of people with disabilities in Sri Lanka will grow by 24.2%. In the field of robotics, new technologies are now being built to make life simpler and more manageable for people with disabilities. The aim of this research is to develop a 3-finger anatomical robot hand model for handicapped people and to control the robot hand (flexion and extension) using motor imagery. Eight EEG electrodes were used to extract EEG signals from the primary motor cortex. Data collection and testing were performed over a 42 s timespan. According to the test results, eight EEG electrodes were sufficient to acquire the motor imagery for flexion and extension of finger movements. The overall accuracy of the experiments was 89.34% (mean = 22.32), with a precision of 0.894. We also observed that the proposed design provided promising results for the performance of the tasks (grab, hold, and release activities) by hand-disabled persons.
Journal Article
Evaluation of Functional Mobility of Elders Using Vision Attentive Model for Parkinson’s Disease
by Yasakethu, S. L. P.; Dhanushi, R. G. D.; Gunaratne, D. A. N. P.
in Automation; Balance; Brain research
2024
Parkinson's disease (PD) is one of the disorders that most severely affects the central nervous system. In 2019, the World Health Organization (WHO) reported that PD claimed the lives of 0.33 million people, an increase of almost 100% since 2000. The disease also caused 5.8 million disability-adjusted life years, an 81% increase since 2000. This emphasizes how dangerous PD can be in home settings, especially for the elderly. Currently, clinical approaches remain the mainstay of PD screening, though developments in wearable sensor-based identification techniques offer promise. Nevertheless, because older adults often find wearable sensors uncomfortable, approaches such as the vision attentive paradigm are required to ensure usability. Current systems frequently depend on isolated evaluations, which the WHO considers inadequate for thoroughly assessing PD through functional mobilities. This research aims to close this gap by evaluating older persons with PD. Timed Up and Go (TUG) time, gait speed, and fall score are the three main components integrated into the proposed system. The TUG test, gait speed, and fall ratio were validated using the vision attentive model and the traditional clinical method. Ethical norms were followed when testing in homes, hospitals, and elder care institutions. The proposed method shows great potential, achieving an accuracy of 90.02% (precision 0.89) in identifying PD patients.
Journal Article