Search Results

308 results for "joint label"
Remote Sensing Image Scene Classification via Label Augmentation and Intra-Class Constraint
In recent years, many convolutional neural network (CNN)-based methods have been proposed for scene classification of remote sensing images. Since the number of training samples in remote sensing datasets is generally small, data augmentation is often used to expand the training set. Conventional data augmentation, however, keeps the label unchanged while altering the image content, which is not always appropriate. In this study, label augmentation (LA) is presented to fully utilize the training set by assigning a joint label to each generated image, considering the label and the data augmentation simultaneously. Moreover, the outputs for images obtained by different data augmentations are aggregated at test time. The augmented samples, however, increase the intra-class diversity of the training set, which makes the subsequent classification more challenging. To address this issue and further improve classification accuracy, Kullback–Leibler (KL) divergence is used to constrain the output distributions of two training samples with the same scene category toward a consistent output distribution. Extensive experiments were conducted on the widely used UCM, AID, and NWPU datasets. The proposed method surpasses other state-of-the-art methods in classification accuracy; for example, on the challenging NWPU dataset, a competitive overall accuracy of 91.05% is obtained with a 10% training ratio.
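As a rough illustration of the consistency idea (not the authors' code; the symmetric form, names, and the weighting hyperparameter are assumptions), a minimal PyTorch sketch of a KL term that pulls the predicted distributions of two same-category training samples together:

    # Hypothetical sketch: KL-divergence consistency between two same-category samples.
    import torch
    import torch.nn.functional as F

    def kl_consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
        """Symmetric KL between the class distributions of two samples."""
        log_p = F.log_softmax(logits_a, dim=1)
        log_q = F.log_softmax(logits_b, dim=1)
        # F.kl_div expects log-probabilities as input and probabilities as target.
        kl_pq = F.kl_div(log_p, log_q.exp(), reduction="batchmean")
        kl_qp = F.kl_div(log_q, log_p.exp(), reduction="batchmean")
        return 0.5 * (kl_pq + kl_qp)

    # Assumed training objective: cross-entropy on the joint labels plus the
    # consistency term, weighted by a hyperparameter lam:
    # loss = F.cross_entropy(logits_a, joint_labels) + lam * kl_consistency_loss(logits_a, logits_b)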
Multi-atlas segmentation with joint label fusion and corrective learning—an open source implementation
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and were among the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit (ITK)-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset, and we report the best results on these two datasets to date.
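For intuition, here is a minimal NumPy sketch of intensity-similarity weighted voting, the baseline the paper improves upon; joint label fusion additionally models correlated errors across atlases, which this sketch omits. All names and the similarity kernel are illustrative:

    # Hypothetical sketch: intensity-similarity weighted voting for label fusion.
    import numpy as np

    def weighted_voting(target, atlas_imgs, atlas_segs, n_labels, beta=1.0):
        """Fuse registered atlas segmentations into a consensus labeling.

        target: target intensity image; atlas_imgs/atlas_segs: lists of
        registered atlas images and their label maps; beta: similarity sharpness.
        """
        votes = np.zeros(target.shape + (n_labels,))
        for img, seg in zip(atlas_imgs, atlas_segs):
            # Voxel-wise weight: large where the atlas intensity matches the target.
            w = np.exp(-beta * (img - target) ** 2)
            for lab in range(n_labels):
                votes[..., lab] += w * (seg == lab)
        return votes.argmax(axis=-1)  # consensus label per voxel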
Joint‐label fusion brain atlases for dementia research in Down syndrome
Research suggests a link between Alzheimer's disease in Down syndrome (DS) and the overproduction of amyloid plaques. Using positron emission tomography (PET), the in-vivo regional amyloid load can be assessed with several available ligands. To measure amyloid distributions in specific brain regions, a brain atlas is used. A popular method of creating a brain atlas is to segment a participant's structural magnetic resonance imaging (MRI) scan. Acquiring an MRI is often challenging in intellectually impaired populations because of contraindications or data exclusion due to significant motion artifacts or incomplete sequences related to general discomfort. When an MRI cannot be acquired, it is typically replaced with a standardized brain atlas derived from neurotypical populations (i.e., healthy individuals without DS), which may be inappropriate for use in DS. In this project, we create a series of disease- and diagnosis-specific (cognitively stable (CS-DS), mild cognitive impairment (MCI-DS), and dementia (DEM-DS)) probabilistic group atlases of participants with DS and evaluate their accuracy in quantifying regional amyloid load compared with individually based MRI segmentations. Further, we compare the diagnosis-specific atlases with a probabilistic atlas constructed from similar-aged cognitively stable neurotypical participants. We hypothesized that regional PET signals would best match the individually based MRI segmentations when using a DS group atlas that aligns with a participant's disorder and disease status (e.g., DS and MCI-DS). Our results vary by brain region but generally show that a disorder-specific atlas in DS matches the individually based MRI segmentations better than an atlas constructed from cognitively stable neurotypical participants. We found no additional benefit of using diagnosis-specific atlases matching disease status. All atlases are made publicly available for the research community. Highlights: Down syndrome (DS) joint-label-fusion atlases provide accurate positron emission tomography (PET) amyloid measurements. A disorder-specific DS atlas is better than a neurotypical atlas for PET quantification. It is not necessary to use a disease-state-specific atlas for quantification in aged DS. Dorsal striatum results vary, possibly due to this region's involvement in dementia progression.
Transfer EEG Emotion Recognition by Combining Semi-Supervised Regression with Bipartite Graph Label Propagation
Individual differences often appear in electroencephalography (EEG) data collected from different subjects because EEG signals are weak, nonstationary, and have a low signal-to-noise ratio. Many machine learning methods therefore generalize poorly, because the independent and identically distributed (i.i.d.) assumption is no longer valid for cross-subject EEG data. To this end, transfer learning has been introduced to alleviate the data distribution difference between subjects. However, most existing methods focus only on domain adaptation and fail to achieve effective collaboration with label estimation. In this paper, an EEG feature transfer method combining semi-supervised regression with bipartite graph label propagation (TSRBG) is proposed to realize the unified joint optimization of EEG feature distribution alignment and semi-supervised joint label estimation. Cross-subject emotion recognition experiments on the SEED-IV data set show that (1) TSRBG achieves significantly better recognition performance than state-of-the-art models; (2) the EEG feature distribution differences between subjects are significantly reduced in the learned shared subspace, indicating the effectiveness of domain adaptation; and (3) the key EEG frequency bands and channels for cross-subject emotion recognition are identified by investigating the learned subspace, providing further insight into EEG emotion activation patterns.
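As background for the label-propagation component (a generic NumPy sketch of classic graph label propagation, not the paper's bipartite formulation, which propagates between samples and anchor points):

    # Hypothetical sketch: graph label propagation over an affinity matrix.
    import numpy as np

    def label_propagation(W, Y, alpha=0.9, n_iter=50):
        """W: (n, n) symmetric affinity matrix; Y: (n, c) one-hot labels,
        with all-zero rows for unlabeled samples."""
        d = W.sum(axis=1)
        S = W / np.sqrt(np.outer(d, d))            # normalize: D^(-1/2) W D^(-1/2)
        F = Y.astype(float).copy()
        for _ in range(n_iter):
            F = alpha * (S @ F) + (1 - alpha) * Y  # diffuse labels, stay anchored to Y
        return F.argmax(axis=1)                    # predicted class per sample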
Telling You More Fluently: Effect of the Joint Presentation of Eco-Label Information on Consumers’ Purchase Intention
An eco-label is an important tool for identifying green products in the marketplace. Most eco-labels, however, present a single icon that is simple and carries limited information, thus creating cognitive barriers for consumers. As a result, eco-labels might not always effectively promote green consumption. Based on dual coding theory and the spatial contiguity effect, this study investigated the effect of the “joint presentation of eco-label information” (JPEI), which adds (functional/emotional) descriptive text to eco-labels, on improving consumers’ cognitive fluency in eco-labels and subsequent purchase intention. We conducted three studies and found that, compared with the “single presentation of eco-label information” (SPEI), JPEI improved the cognitive fluency of consumers with low eco-label knowledge. Furthermore, spatially contiguous JPEI was more effective than spatially partitioned JPEI for consumers with low eco-label knowledge. In addition, we specifically explored the information types of JPEI that were effective for consumers with low eco-label knowledge. Low-construal consumers had higher cognitive fluency and higher purchase intentions under functional JPEI, and high-construal consumers had higher cognitive fluency and higher purchase intentions under emotional JPEI. The results of this study enrich eco-label research and can provide theoretical guidance for marketing practices in eco-labels.
Adalimumab for the treatment of fistulas in patients with Crohn’s disease
Objective: To evaluate the efficacy of adalimumab in the healing of draining fistulas in patients with active Crohn's disease (CD). Design: A phase III, multicentre, randomised, double-blind, placebo-controlled study with an open-label extension was conducted at 92 sites. Patients: A subgroup of adults with moderately to severely active CD (CD activity index 220–450) for ⩾4 months who had draining fistulas at baseline. Interventions: All patients received initial open-label adalimumab induction therapy (80 mg/40 mg at weeks 0/2). At week 4, all patients were randomly assigned to receive double-blind placebo or adalimumab 40 mg every other week or weekly to week 56 (irrespective of fistula status). Patients completing week 56 of therapy were then eligible to enroll in an open-label extension. Main Outcome Measures: Complete fistula healing/closure (assessed at every visit) was defined as no drainage, either spontaneous or with gentle compression. Results: Of 854 patients enrolled, 117 had draining fistulas at both screening and baseline (70 randomly assigned to adalimumab and 47 to placebo). The mean number of draining fistulas per day was significantly decreased in adalimumab-treated patients compared with placebo-treated patients during the double-blind treatment period. Of all patients with healed fistulas at week 56 (both adalimumab and placebo groups), 90% (28/31) maintained healing following 1 year of open-label adalimumab therapy (observed analysis). Conclusions: In patients with active CD, adalimumab therapy was more effective than placebo for inducing fistula healing. Complete fistula healing was sustained for up to 2 years by most patients in an open-label extension trial. ClinicalTrials.gov identifiers: NCT00077779 and NCT00195715.
Label dependency modeling in Multi-Label Naïve Bayes through input space expansion
In the realm of multi-label learning, instances are often characterized by a plurality of labels, diverging from the single-label paradigm prevalent in conventional datasets. Multi-label techniques often employ a similar feature space to build classification models for every label. Nevertheless, labels typically convey distinct semantic information and should possess their own unique attributes. Several approaches have been suggested to identify label-specific characteristics for creating distinct categorization models. Our proposed methodology seeks to encapsulate and systematically represent label correlations within the learning framework. The innovation of improved multi-label Naïve Bayes (iMLNB) lies in its strategic expansion of the input space, which assimilates meta information derived from the label space, thereby engendering a composite input domain that encompasses both continuous and categorical variables. To accommodate the heterogeneity of the expanded input space, we refine the likelihood parameters of iMLNB using a joint density function, which is adept at handling the amalgamation of data types. We subject our enhanced iMLNB model to a rigorous empirical evaluation, utilizing six benchmark datasets. The performance of our approach is gauged against the traditional multi-label Naïve Bayes (MLNB) algorithm and is quantified through a suite of evaluation metrics. The empirical results not only affirm the competitive edge of our proposed method over the conventional MLNB but also demonstrate its superiority across the aforementioned metrics. This underscores the efficacy of modeling label dependencies in multi-label learning environments and positions our approach as a significant contribution to the field.
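One plausible reading of the input-space expansion, sketched with scikit-learn under binary relevance (the paper's joint-density likelihood refinement is not reproduced here; everything below is an assumption for illustration):

    # Hypothetical sketch: per-label Naive Bayes on an input space expanded
    # with meta information from the remaining labels.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def fit_expanded_nb(X, Y):
        """X: (n, d) features; Y: (n, q) binary label matrix. Returns one
        classifier per label, each trained on [features | other labels]."""
        models = []
        for j in range(Y.shape[1]):
            others = np.delete(Y, j, axis=1)   # meta-features from the label space
            Xj = np.hstack([X, others])        # expanded, mixed-type input space
            models.append(GaussianNB().fit(Xj, Y[:, j]))
        return models

At test time the label-space meta-features are unknown, so they would have to come from a first-stage prediction of the other labels before this second stage is applied.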
Dual-center study on AI-driven multi-label deep learning for X-ray screening of knee abnormalities
Knee abnormalities, such as meniscus tears and ligament injuries, are common in clinical practice and pose significant diagnostic challenges. Traditional imaging techniques such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI) are vital for assessment; however, X-rays and CT scans often fail to adequately visualize soft tissue injuries, and MRI can be costly and time-consuming. To overcome these limitations, we developed an AI-driven approach that detects soft tissue abnormalities directly from X-ray images, a capability traditionally reserved for MRI or arthroscopy. We conducted a retrospective study with 4,215 patients from two medical centers, utilizing knee X-ray images annotated by orthopedic surgeons. The YOLOv11 model automated knee localization, while five convolutional neural networks (ResNet152, DenseNet121, MobileNetV3, ShuffleNetV2, and VGG19) were adapted for multi-label classification of eight conditions: meniscus tears (MENI), anterior cruciate ligament tears (ACL), posterior cruciate ligament injuries (PCL), medial collateral ligament injuries (MCL), lateral collateral ligament injuries (LCL), joint effusion (EFFU), bone marrow edema or contusion (CONT), and soft tissue injuries (STI). Data preprocessing involved normalization and region of interest (ROI) extraction, with training enhanced by spatial augmentations. Performance was assessed using mean average precision (mAP), F1-scores, and area under the curve (AUC). We also developed a Windows-based PyQt application and a Flask web application for clinical integration, incorporating explainable AI techniques (Grad-CAM, Score-CAM) for interpretability. The YOLOv11 model achieved precise knee localization with a mAP@0.5 of 0.995. In classification, ResNet152 outperformed the others, recording a mAP of 90.1% in internal testing and AUCs up to 0.863 (EFFU) in external testing. End-to-end performance on the external set yielded a mAP of 86.1% and F1-scores of 84.0% with ResNet152. The Windows and web applications successfully processed imaging data, aligning with MRI and arthroscopic findings in cases such as ACL and meniscus tears. Explainable AI visualizations clarified model decisions, highlighting key regions for complex injuries such as concurrent ligament and soft tissue damage, and enhancing clinical trust. This AI-driven model markedly improved the precision and efficiency of knee abnormality detection through X-ray analysis. By accurately identifying multiple coexisting conditions in a single pass, it offers a scalable tool to enhance diagnostic workflows and patient outcomes, especially in resource-constrained areas.
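The multi-label head itself is standard; a minimal PyTorch sketch of adapting a ResNet152 backbone with one independent sigmoid output per condition (the loss and decision threshold here are assumptions, not details from the paper):

    # Hypothetical sketch: 8-way multi-label classification head on ResNet152.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet152(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 8)  # MENI, ACL, PCL, MCL, LCL, EFFU, CONT, STI

    criterion = nn.BCEWithLogitsLoss()  # independent binary decision per condition

    def predict(x: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
        """Binary vector of detected conditions for a batch of knee ROI crops."""
        model.eval()
        with torch.no_grad():
            return (torch.sigmoid(model(x)) > threshold).int()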
Label-free brain tumor imaging using Raman-based methods
Introduction: Label-free Raman-based imaging techniques create the possibility of bringing chemical and histologic data into the operating room. Relying on the intrinsic biochemical properties of tissues to generate image contrast and optical tissue sectioning, Raman-based imaging methods can be used to detect microscopic tumor infiltration and diagnose brain tumor subtypes. Methods: Here, we review the application of three Raman-based imaging methods to neurosurgical oncology: Raman spectroscopy, coherent anti-Stokes Raman scattering (CARS) microscopy, and stimulated Raman histology (SRH). Results: Raman spectroscopy allows for chemical characterization of tissue and can differentiate normal and tumor-infiltrated tissue based on variations in macromolecule content, both ex vivo and in vivo. To improve the signal-to-noise ratio compared with conventional Raman spectroscopy, a second pulsed excitation laser can be used to coherently drive the vibrational frequency of specific Raman-active chemical bonds (i.e. symmetric stretching of –CH2 bonds). Coherent Raman imaging, including CARS and stimulated Raman scattering microscopy, has been shown to detect microscopic brain tumor infiltration in fresh brain tumor specimens with submicron image resolution. Advances in fiber-laser technology have enabled the development of intraoperative SRH as well as artificial intelligence algorithms to facilitate interpretation of SRH images. With molecular diagnostics becoming an essential part of brain tumor classification, preliminary studies have demonstrated that Raman-based methods can be used to diagnose glioma molecular classes intraoperatively. Conclusions: These results demonstrate how label-free Raman-based imaging methods can improve the management of brain tumor patients by detecting tumor infiltration, guiding tumor biopsy/resection, and providing images for histopathologic and molecular diagnosis.
Subdomain adaptation method based on transferable semantic alignment and class correlation
To address the challenges of deep unsupervised domain adaptation, notably the distribution shift between source and target domains, we propose a subdomain adaptation framework driven by transferable semantic alignment and class correlation. First, the source and target domains are divided into subdomains according to class labels, and a joint subdomain distribution alignment mechanism is introduced to reduce intra-class distribution divergence while enlarging inter-class disparities. Second, a domain-adaptive semantic consistency loss is employed to cluster semantically similar samples and separate dissimilar ones in a unified representation space, enabling precise cross-domain semantic alignment. Third, pseudo-label quality in the target domain is improved via temperature-based label smoothing, complemented by a class correlation matrix and a loss function capturing inter-class relationships to exploit intrinsic intra-class coherence and inter-class distinction. Extensive experiments on multiple public datasets demonstrate that the proposed method achieves superior average classification accuracy compared with existing approaches, validating the effectiveness of semantic alignment and class correlation modeling. By explicitly modeling intra-class coherence and inter-class distinction without additional architectural complexity, the framework effectively mitigates domain shift, enhances semantic alignment, and improves recognition performance on the target domain, offering a robust solution for deep unsupervised domain adaptation.
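For the pseudo-labeling step, one plausible form of temperature-based label smoothing (an assumption; the abstract does not give the exact formulation) is to temperature-scale the classifier's softmax and then blend it with the uniform distribution:

    # Hypothetical sketch: smoothed, temperature-scaled pseudo-labels.
    import torch
    import torch.nn.functional as F

    def soft_pseudo_labels(logits: torch.Tensor, T: float = 2.0,
                           eps: float = 0.1) -> torch.Tensor:
        """Soften classifier outputs with temperature T, then smooth toward
        the uniform distribution with weight eps."""
        n_classes = logits.size(1)
        p = F.softmax(logits / T, dim=1)        # temperature scaling
        return (1 - eps) * p + eps / n_classes  # label smoothing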