149 result(s) for "Chen, Mingjin"
A Deep Learning Framework for Anesthesia Depth Prediction from Drug Infusion History
In target-controlled infusion (TCI) of propofol and remifentanil intravenous anesthesia, accurately predicting the depth of anesthesia (DOA) is very challenging. Patients with different physiological characteristics have inconsistent pharmacodynamic responses during different stages of anesthesia. For example, in TCI, older adults transition smoothly from the induction period to the maintenance period, while younger adults are more prone to anesthetic awareness, resulting in different DOA data distributions among patients. To address these problems, this paper proposes a deep learning framework that incorporates domain adaptation and knowledge distillation and uses propofol and remifentanil doses at historical moments to continuously predict the bispectral index (BIS). Specifically, a modified adaptive recurrent neural network (AdaRNN) is adopted to address data distribution differences among patients, and a knowledge distillation pipeline is developed to train the prediction network by enabling it to learn intermediate feature representations of the teacher network. The experimental results show that our method outperforms existing approaches during all anesthetic phases in TCI of propofol and remifentanil intravenous anesthesia. In particular, our method improves on some state-of-the-art methods in root mean square error and mean absolute error by 1 and 0.8, respectively, on both an internal dataset and a publicly available dataset.
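The root mean square error and mean absolute error cited in the abstract above can be computed as sketched below. This is a minimal illustration, not the paper's evaluation code, and the BIS traces are hypothetical values invented for the example.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical BIS traces: measured vs. predicted depth-of-anesthesia index
bis_true = [62.0, 55.0, 48.0, 45.0, 47.0]
bis_pred = [60.0, 56.0, 50.0, 44.0, 49.0]
print(round(rmse(bis_true, bis_pred), 3))
print(round(mae(bis_true, bis_pred), 3))
```

A one-point reduction in RMSE on a 0-100 BIS scale, as claimed, is a direct drop in these quantities averaged over the anesthesia record.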
A study of sparse representation-based classification for biometric verification based on both handcrafted and deep learning features
Biometric verification is generally treated as a one-to-one matching task. In contrast, in this paper, we argue that one-to-many competitive matching via sparse representation-based classification (SRC) can improve both verification security and accuracy. SRC-based verification introduces non-target subjects to construct a dynamic dictionary together with the claimed client and encodes the submitted feature against it. Owing to the sparsity constraint, a client can only be accepted when it defeats almost all non-target classes and wins a convincing sparsity-based matching score, which makes verification more secure than one-to-one matching. However, intense competition may also lead to extremely poor genuine scores when data degeneration occurs. Motivated by these latent benefits and concerns, we study SRC-based verification using two sparsity-based matching measures, three biometric modalities (face, palmprint, and ear), and their multimodal combinations, based on both handcrafted and deep learning features. The result is a comprehensive study of SRC-based verification, covering its methodology, characteristics, merits, challenges, and directions for resolving them. Extensive experimental results demonstrate the superiority of SRC-based verification, especially with multimodal fusion and advanced deep learning features. Concerns about its efficiency in large-scale user applications can be readily addressed with a simple dictionary-shrinkage strategy based on cluster analysis and random selection of non-target subjects.
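The acceptance rule described above can be sketched as follows. This is a simplified illustration under stated assumptions: the sparse coefficient vector is assumed to come from an l1 solver (not shown), the energy-share score is one plausible sparsity-based matching measure, and the threshold value is purely illustrative, not the paper's.

```python
def src_match_score(coeffs, class_of_atom, claimed_class):
    """Sparsity-based matching score: the fraction of total coefficient
    energy assigned to the claimed class's dictionary atoms."""
    total = sum(c * c for c in coeffs)
    if total == 0:
        return 0.0
    claimed = sum(c * c for c, k in zip(coeffs, class_of_atom) if k == claimed_class)
    return claimed / total

def verify(coeffs, class_of_atom, claimed_class, threshold=0.5):
    """Accept the identity claim only if the claimed class wins a
    convincing share of the sparse code's energy, i.e. it must defeat
    the non-target classes rather than merely match a single template."""
    return src_match_score(coeffs, class_of_atom, claimed_class) >= threshold

# Dictionary of 6 atoms from 3 subjects; the sparse code is concentrated
# on subject 0, so a claim for subject 0 is accepted.
coeffs = [0.9, 0.3, 0.0, 0.1, 0.0, 0.0]
classes = [0, 0, 1, 1, 2, 2]
print(verify(coeffs, classes, claimed_class=0))
```

The dictionary-shrinkage strategy mentioned at the end of the abstract amounts to keeping fewer non-target rows in `class_of_atom`, which shrinks the l1 problem without changing this decision rule.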
Circ_0001806 relieves LPS-induced HK2 cell injury by regulating the expression of miR-942-5p and TXNIP
Sepsis is a systemic inflammatory disease that can cause a variety of conditions, including septic acute kidney injury (AKI). Circular RNAs (circRNAs) are believed to be involved in the development of this disease. This study aims to clarify the function of circ_0001806 in a lipopolysaccharide (LPS)-induced HK2 cell model and its related mechanisms. Circ_0001806 was up-regulated in septic AKI serum specimens and LPS-induced HK2 cells. Circ_0001806 knockdown promoted cell proliferation and restrained apoptosis, inflammation, and oxidative stress in LPS-induced HK2 cells. Mechanistically, circ_0001806 can act as a sponge for miR-942-5p, and miR-942-5p can directly target TXNIP. Functional experiments revealed that a miR-942-5p inhibitor could reverse the alleviating effect of circ_0001806 knockdown on LPS-induced HK2 cell injury, and that TXNIP addition could likewise reverse the inhibitory effect of miR-942-5p overexpression on LPS-induced HK2 cell injury. In addition, circ_0001806 regulated TXNIP expression by sponging miR-942-5p. Exosome-derived circ_0001806 was upregulated in LPS-induced HK2 cells and downregulated by GW4869. These results show that circ_0001806 knockdown can reduce LPS-induced HK2 cell injury by regulating TXNIP expression via miR-942-5p, indicating that circ_0001806 might be an important biomarker for alleviating sepsis-related AKI and might provide a therapeutic strategy for the treatment of sepsis.
Extracellular calcium elicits feedforward regulation of the Toll-like receptor-triggered innate immune response
Despite the expanding knowledge on feedback regulation of Toll-like receptor (TLR) signaling, the feedforward regulation of TLR signaling for the proper innate response to invading microbes is not fully understood. Here, we report that extracellular calcium can coordinate the activation of the small GTPases Ras and Ras-proximate-1 (Rap1) upon TLR stimulation, which favors activation of macrophages through a feedforward mechanism. We show that different doses of TLR agonists can trigger different levels of cytokine production, which can be potentiated by extracellular calcium but are impaired by the chelating reagent ethylene glycol tetraacetic acid (EGTA) or by knockdown of stromal interaction molecule 1 (STIM1). Upon TLR engagement, GTP-bound Ras levels are increased and GTP-bound Rap1 is decreased, which can be reversed by EGTA-mediated removal of extracellular calcium. Furthermore, we demonstrate that Rap1 knockdown rescues the inhibitory effects of EGTA on the TLR-triggered innate response. Examination of the TLR signaling pathway reveals that extracellular calcium may regulate the TLR response via feedforward activation of the extracellular signal-regulated kinase signaling pathway. Our data suggest that an influx of extracellular calcium, mediated by STIM1-operated calcium channels, may transmit the information about the intensity of extracellular TLR stimuli to initiate innate responses at an appropriate level. Our study may provide mechanistic insight into the feedforward regulation of the TLR-triggered innate immune response.
Abdominal multi-organ segmentation in CT using Swinunter
Abdominal multi-organ segmentation in computed tomography (CT) is crucial for many clinical applications, including disease detection and treatment planning. Deep learning methods have shown unprecedented performance in this area. However, it is still quite challenging to accurately segment different organs with a single network, due to vague organ boundaries, complex backgrounds, and substantially different organ size scales. In this work, we trained a transformer-based model. In previous years' competitions, nearly all of the top five methods were CNN-based, likely because the limited data volume prevented transformer-based methods from taking full advantage of their capacity. The thousands of samples in this competition may enable a transformer-based model to achieve better results. The results on the public validation set also show that the transformer-based model achieves acceptable accuracy and inference time.
A Transformer-based Prediction Method for Depth of Anesthesia During Target-controlled Infusion of Propofol and Remifentanil
Accurately predicting anesthetic effects is essential for target-controlled infusion systems. Traditional pharmacokinetic-pharmacodynamic (PK-PD) models for bispectral index (BIS) prediction require manual selection of model parameters, which can be challenging in clinical settings. Recently proposed deep learning methods can only capture general trends and may not predict abrupt changes in BIS. To address these issues, we propose a transformer-based method for predicting the depth of anesthesia (DOA) from the drug infusions of propofol and remifentanil. Our method employs long short-term memory (LSTM) and gated residual network (GRN) modules to improve the efficiency of feature fusion, and applies an attention mechanism to discover the interactions between the drugs. We also use label distribution smoothing and reweighting losses to address data imbalance. Experimental results show that our proposed method outperforms traditional PK-PD models and previous deep learning methods, effectively predicting anesthetic depth under sudden and deep anesthesia conditions.
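The label distribution smoothing and loss reweighting mentioned above can be sketched in miniature as below. This is a rough stand-in, not the paper's implementation: the smoothing kernel, bin count, and L1 base loss are all illustrative assumptions, and labels are assumed to be normalized to [0, 1).

```python
def smoothed_density(labels, bins, kernel=(0.25, 0.5, 0.25)):
    """Histogram of labels convolved with a small symmetric kernel --
    a minimal stand-in for label distribution smoothing (LDS)."""
    hist = [0] * bins
    for y in labels:
        hist[min(int(y * bins), bins - 1)] += 1
    half = len(kernel) // 2
    smooth = []
    for i in range(bins):
        s = 0.0
        for j, w in enumerate(kernel):
            k = i + j - half
            if 0 <= k < bins:
                s += w * hist[k]
        smooth.append(s)
    return smooth

def reweighted_l1(y_true, y_pred, bins=10):
    """L1 loss with per-sample weights inversely proportional to the
    smoothed label density, so errors in rare label regions (e.g. deep
    anesthesia) count more than errors at common label values."""
    dens = smoothed_density(y_true, bins)
    weights = [1.0 / max(dens[min(int(y * bins), bins - 1)], 1e-8) for y in y_true]
    wsum = sum(weights)
    return sum(w * abs(t - p) for w, t, p in zip(weights, y_true, y_pred)) / wsum
```

With labels clustered near 0.1 and one rare label at 0.9, an equal-sized error on the rare sample yields a larger loss than the same error on a common sample, which is the imbalance correction the abstract describes.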
Learning to In-paint: Domain Adaptive Shape Completion for 3D Organ Segmentation
We aim to incorporate explicit shape information into current 3D organ segmentation models. Unlike previous works, we formulate shape learning as an in-painting task, which we name Masked Label Mask Modeling (MLM). In MLM, learnable mask tokens are fed into transformer blocks to complete the label mask of the organ. To transfer MLM shape knowledge to the target, we further propose a novel shape-aware self-distillation with both an in-painting reconstruction loss and a pseudo loss. Extensive experiments on five public organ segmentation datasets show consistent improvements over prior art, with at least a 1.2-point gain in the Dice score, demonstrating the effectiveness of our method in challenging unsupervised domain adaptation scenarios, including (1) in-domain organ segmentation, (2) unseen-domain segmentation, and (3) unseen-organ segmentation. We hope this work will advance shape analysis and geometric learning in medical imaging.
Data-Centric Diet: Effective Multi-center Dataset Pruning for Medical Image Segmentation
This paper addresses dense labeling problems in which a significant fraction of the dataset can be pruned without sacrificing much accuracy. We observe that, on standard medical image segmentation benchmarks, the loss-gradient-norm-based metrics of individual training examples used in image classification fail to identify the important samples. To address this issue, we propose a data pruning method that takes into account the training dynamics on target regions via a Dynamic Average Dice (DAD) score. To the best of our knowledge, we are among the first to address data importance in dense labeling tasks in the field of medical image analysis, making the following contributions: (1) investigating the underlying causes with rigorous empirical analysis, and (2) determining an effective data pruning approach for dense labeling problems. Our solution can be used as a strong yet simple baseline for selecting important examples for medical image segmentation with combined data sources.
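The Dice score underlying the DAD metric above can be sketched as follows. This is a rough illustration under stated assumptions: the paper's exact DAD definition is not given in the abstract, so averaging one example's Dice across training checkpoints is only a plausible reading, and the masks below are hypothetical.

```python
def dice_score(pred_mask, true_mask):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred_mask, true_mask))
    denom = sum(pred_mask) + sum(true_mask)
    return 2.0 * inter / denom if denom else 1.0

def dynamic_average_dice(per_epoch_pred_masks, true_mask):
    """Average Dice of one training example across checkpoints -- a rough
    stand-in for the Dynamic Average Dice (DAD) score, used to rank
    examples by how they are learned over the course of training."""
    scores = [dice_score(p, true_mask) for p in per_epoch_pred_masks]
    return sum(scores) / len(scores)

# Hypothetical example: one sample's predictions at three checkpoints
truth = [1, 1, 1, 0, 0, 0]
preds = [[0, 0, 0, 0, 0, 0],  # early: nothing segmented
         [1, 1, 0, 0, 0, 0],  # mid: partial overlap
         [1, 1, 1, 0, 0, 0]]  # late: perfect overlap
print(round(dynamic_average_dice(preds, truth), 3))
```

The key contrast with classification-style pruning is that this score aggregates overlap on target regions across training, rather than a per-example loss gradient norm at a single step.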
Ultraman: Single Image 3D Human Reconstruction with Ultra Speed and Detail
3D human body reconstruction has long been a challenge in the field of computer vision. Previous methods are often time-consuming and struggle to capture the detailed appearance of the human body. In this paper, we propose a new method called Ultraman for fast reconstruction of textured 3D human models from a single image. Compared to existing techniques, Ultraman greatly improves reconstruction speed and accuracy while preserving high-quality texture details. We present a new framework for human reconstruction consisting of three parts: geometric reconstruction, texture generation, and texture mapping. First, a mesh reconstruction framework accurately extracts the 3D human shape from a single image. At the same time, we propose a method to generate multi-view-consistent images of the human body from a single image. This is finally combined with a novel texture mapping method that optimizes texture details and ensures color consistency during reconstruction. Through extensive experiments and evaluations, we demonstrate the superior performance of Ultraman on various standard datasets. In addition, Ultraman outperforms state-of-the-art methods in terms of human rendering quality and speed. Upon acceptance of the article, we will make the code and data publicly available.
DanceTogether! Identity-Preserving Multi-Person Interactive Video Generation
Controllable video generation (CVG) has advanced rapidly, yet current systems falter when more than one actor must move, interact, and exchange positions under noisy control signals. We address this gap with DanceTogether, the first end-to-end diffusion framework that turns a single reference image plus independent pose-mask streams into long, photorealistic videos while strictly preserving every identity. A novel MaskPoseAdapter binds "who" and "how" at every denoising step by fusing robust tracking masks with semantically rich but noisy pose heat-maps, eliminating the identity drift and appearance bleeding that plague frame-wise pipelines. To train and evaluate at scale, we introduce (i) PairFS-4K, 26 hours of dual-skater footage with 7,000+ distinct IDs; (ii) HumanRob-300, a one-hour humanoid-robot interaction set for rapid cross-domain transfer; and (iii) TogetherVideoBench, a three-track benchmark centered on the DanceTogEval-100 test suite covering dance, boxing, wrestling, yoga, and figure skating. On TogetherVideoBench, DanceTogether outperforms prior art by a significant margin. Moreover, we show that a one-hour fine-tune yields convincing human-robot videos, underscoring broad generalization to embodied-AI and HRI tasks. Extensive ablations confirm that persistent identity-action binding is critical to these gains. Together, our model, datasets, and benchmark lift CVG from single-subject choreography to compositionally controllable, multi-actor interaction, opening new avenues for digital production, simulation, and embodied intelligence. Our video demos and code are available at https://DanceTog.github.io/.