258 result(s) for "Wang, Manning"
Image synthesis-based multi-modal image registration framework by using deep fully convolutional networks
Multi-modal image registration is of great significance in clinical diagnosis, treatment planning, and image-guided surgery. Since different modalities exhibit different characteristics, finding a fast and accurate correspondence between images of different modalities remains a challenge. In this paper, we propose an image synthesis-based multi-modal registration framework. Image synthesis is performed by a ten-layer fully convolutional network (FCN) composed of 10 convolutional layers combined with batch normalization (BN) and rectified linear units (ReLU), which can be trained to learn an end-to-end mapping from one modality to the other. After cross-modality image synthesis, multi-modal registration is transformed into mono-modal registration, which can be solved by methods with lower computational complexity, such as the sum of squared differences (SSD). We tested our method on T1-weighted vs T2-weighted, T1-weighted vs PD, and T2-weighted vs PD image registration with BrainWeb phantom data and IXI real patient data. The results show that our framework achieves higher registration accuracy than state-of-the-art multi-modal image registration methods such as local mutual information (LMI) and α-mutual information (α-MI). On the IXI real patient data, the average registration errors of our method were 1.19, 2.23, and 1.57, compared to 1.53, 2.60, and 2.36 for LMI and 1.34, 2.39, and 1.76 for α-MI in T2-weighted vs PD, T1-weighted vs PD, and T1-weighted vs T2-weighted image registration, respectively. The deep FCN model developed for this framework can capture the complex nonlinear relationship between modalities, automatically discover complex structural representations through a large number of trainable parameters, and perform accurate image synthesis. Combining the deep FCN model with a mono-modal registration method (SSD), the framework achieves fast and robust multi-modal medical image registration.
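The reduction from multi-modal to mono-modal registration can be illustrated with a toy SSD-based translation search (a minimal numpy sketch under the assumption that cross-modality synthesis has already produced two images of the same modality; it is not the authors' implementation):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized images."""
    return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def register_translation(fixed, moving, max_shift=3):
    """Exhaustively search integer translations and keep the one
    minimizing SSD; stands in for a real mono-modal optimizer."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = ssd(fixed, shifted)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Toy example: recover a known shift between two same-modality images.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
moved = np.roll(np.roll(img, -2, axis=0), 1, axis=1)
shift = register_translation(img, moved)   # expected (2, -1)
```

Because SSD only assumes identical intensity characteristics, it becomes applicable precisely after the synthesis step has mapped one modality onto the other.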
Plasma membrane-localized SlSWEET7a and SlSWEET14 regulate sugar transport and storage in tomato fruits
Sugars, especially glucose and fructose, contribute to the taste and quality of tomato fruits. These compounds are translocated from the leaves to the fruits and then unloaded into the fruits by various sugar transporters at the plasma membrane. SWEETs are sugar transporters that regulate sugar efflux independently of energy or pH. To date, the role of SWEETs in tomato has received very little attention. In this study, we performed a functional analysis of SlSWEET7a and SlSWEET14 to gain insight into the regulation of sugar transport and storage in tomato fruits. SlSWEET7a and SlSWEET14 were mainly expressed in peduncles, vascular bundles, and seeds. Both are plasma membrane-localized proteins that transport fructose, glucose, and sucrose. Beyond increasing the sugar content of mature fruits, silencing SlSWEET7a or SlSWEET14 resulted in taller plants and, in the SlSWEET7a-silenced lines, larger fruits. We also found that invertase activity and the expression of some SlSWEET genes increased, consistent with the increased availability of sucrose and hexose in the fruits. Overall, our results demonstrate that suppressing SlSWEET7a and SlSWEET14 could be a potential strategy for enhancing the sugar content of tomato fruits.
Group-in-Group Relation-Based Transformer for 3D Point Cloud Learning
Deep point cloud neural networks have achieved promising performance in remote sensing applications, yet the prevalence of Transformers in natural language processing and computer vision stands in stark contrast to their limited exploration in point-based methods. In this paper, we propose an effective Transformer-based network for point cloud learning. To better learn global and local information, we propose a group-in-group relation-based Transformer architecture that models global information through the relationships between point groups and local semantic information through the relationships between points within each group. To further enhance the local feature representation, we propose a Radius Feature Abstraction (RFA) module that extracts radius-based density features characterizing the sparsity of local point clouds. Extensive evaluation on public benchmark datasets demonstrates the effectiveness and competitive performance of our method on point cloud classification and part segmentation.
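The radius-based density idea behind an RFA-style feature can be sketched as a neighbour count within a fixed radius (a hypothetical numpy illustration of the general concept, not the paper's module):

```python
import numpy as np

def radius_density(points, radius):
    """For each point, count neighbours within `radius` (excluding itself),
    a simple density feature characterising local point cloud sparsity."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (d < radius).sum(axis=1) - 1  # subtract the point itself

# Three clustered points plus one isolated point.
pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [0.0, 0.1, 0.0],
                [5.0, 5.0, 5.0]])
dens = radius_density(pts, radius=0.5)   # dense points score high, isolated ones low
```

A per-point scalar like this can be concatenated to coordinate features so that a downstream network sees how sparse each local neighbourhood is.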
Globally Optimal Linear Model Fitting with Unit-Norm Constraint
Robustly fitting a linear model to outlier-contaminated data is an important and basic task in many scientific fields, and it is often tackled by consensus set maximization. There have been several studies on globally optimal methods for consensus set maximization, but most are currently confined to problems with a small number of input observations and low outlier ratios. In this paper, we develop a globally optimal algorithm for consensus set maximization that solves robust linear model fitting problems with a unit-norm constraint, based on the branch-and-bound optimization framework. The unit-norm constraint fixes the unknown scale of the linear model parameters, and we propose a compact representation of the unit-bounded search domain that avoids introducing additional non-linearity from the unit-norm constraint. This compact representation leads to a geometrically derived bound, which accelerates the computation and enables the method to handle problems with a large number of observations. Experiments on both synthetic and real data show that the proposed algorithm outperforms existing globally optimal methods, especially in low-dimensional problems with a large number of input observations and high outlier ratios. The source code is publicly available at https://github.com/YiruWangYuri/Demo-for-GoCR.
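Consensus set maximization under a unit-norm constraint can be illustrated for a 2D line n·x = c with a unit normal n, using a brute-force scan over the unit circle (a hypothetical stand-in for the paper's branch-and-bound search; the inlier threshold `eps` and angular resolution are illustrative choices):

```python
import numpy as np

def max_consensus_line(points, eps, n_angles=720):
    """Scan unit normals n = (cos t, sin t) and data-driven offsets c,
    counting points with |n.x - c| <= eps. Brute force replaces the
    BnB bound here, but the objective is the same consensus count."""
    best_count, best_model = -1, None
    for t in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        n = np.array([np.cos(t), np.sin(t)])
        proj = points @ n
        for c in proj:  # candidate offsets taken from the projections
            count = int(np.sum(np.abs(proj - c) <= eps))
            if count > best_count:
                best_count, best_model = count, (n.copy(), float(c))
    return best_count, best_model

rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, 40)
inliers = np.stack([xs, 0.5 * xs + 0.2], axis=1)   # points on y = 0.5x + 0.2
outliers = rng.uniform(-5.0, 5.0, (20, 2))          # gross outliers
data = np.vstack([inliers, outliers])
count, (normal, offset) = max_consensus_line(data, eps=0.01)
```

Parameterizing n by a single angle is the 2D analogue of the compact unit-bounded domain: the unit-norm constraint is satisfied by construction instead of being imposed as a non-linear equation.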
An efficient dual-branch framework via implicit self-texture enhancement for arbitrary-scale histopathology image super-resolution
High-quality whole-slide scanning is expensive, complex, and time-consuming, limiting the acquisition and utilization of high-resolution histopathology images in daily clinical work. Deep learning-based single-image super-resolution (SISR) techniques provide an effective way to address this problem. However, existing SISR models applied to histopathology images can only work at fixed integer scaling factors, which decreases their applicability. Although methods based on implicit neural representation (INR) have shown promising results in arbitrary-scale super-resolution (SR) of natural images, applying them directly to histopathology images is inadequate because histopathology images have unique fine-grained textures that differ from those of natural images. To address this challenge, we propose an Implicit Self-Texture Enhancement-based dual-branch framework (ISTE) for arbitrary-scale SR of histopathology images. ISTE contains a feature aggregation branch and a texture learning branch: the former enhances the learning of local details for the SR images, while the latter enhances the learning of high-frequency texture details. We then design a two-stage texture enhancement strategy that fuses the features from the two branches to obtain the SR images. Experiments on publicly available datasets, including the TMA, HistoSR, and TCGA lung cancer datasets, demonstrate that ISTE outperforms existing fixed-scale and arbitrary-scale SR algorithms across various scaling factors. Additionally, extensive experiments show that the histopathology images reconstructed by ISTE are applicable to downstream pathology image analysis tasks.
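The continuous-coordinate querying that arbitrary-scale SR builds on can be sketched, without any learned components, as bilinear interpolation at non-integer coordinates (a minimal numpy illustration of why the output size need not be an integer multiple of the input; this is not the ISTE model):

```python
import numpy as np

def arbitrary_scale_upsample(img, scale):
    """Sample an image at an arbitrary (possibly non-integer) scale by
    querying continuous coordinates with bilinear interpolation."""
    h, w = img.shape
    H, W = int(round(h * scale)), int(round(w * scale))
    ys = (np.arange(H) + 0.5) / scale - 0.5   # continuous source coordinates
    xs = (np.arange(W) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    a = img[y0][:, x0]      # four surrounding samples
    b = img[y0][:, x0 + 1]
    c = img[y0 + 1][:, x0]
    d = img[y0 + 1][:, x0 + 1]
    return (1 - wy) * ((1 - wx) * a + wx * b) + wy * ((1 - wx) * c + wx * d)

lr = np.arange(16, dtype=float).reshape(4, 4)
sr = arbitrary_scale_upsample(lr, 1.5)   # non-integer scale -> 6x6 output
```

An INR-based SR model replaces the fixed bilinear kernel with a learned function of the query coordinate and local features, which is what allows it to beat interpolation while keeping the arbitrary-scale property.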
Efficient Similarity Point Set Registration by Transformation Decomposition
Point set registration is one of the basic problems in computer vision. When the overlap ratio between point sets is small or the relative transformation is large, local methods cannot guarantee accuracy. However, the time complexity of the branch-and-bound (BnB) optimization used in most existing global methods is exponential in the dimensionality of the parameter space, so the seven-degrees-of-freedom (7-DoF) similarity transformation is a big challenge for BnB. In this paper, a novel rotation- and scale-invariant feature is introduced to decouple the optimization of translation, rotation, and scale in similarity point set registration, so that BnB optimization can be performed in two lower-dimensional spaces. With this transformation decomposition, the translation is first estimated, the rotation is then optimized by maximizing a robust objective function defined on the consensus set, and the scale is finally estimated from the potential correspondences in the obtained consensus set. Experiments on synthetic and clinical data show that our method is approximately two orders of magnitude faster than the state-of-the-art global method and more accurate than a typical local method. Even when the outlier ratio with respect to the inliers is as high as 1.0, our method still achieves accurate registration.
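The decoupling idea can be illustrated for the scale component: ratios of pairwise distances are invariant to rotation and translation, so scale can be estimated independently of the other five parameters once correspondences are available (a minimal numpy sketch of the invariance, not the paper's feature or estimator):

```python
import numpy as np

def estimate_scale(src, dst):
    """Estimate the scale of a similarity transform from matched points.
    Pairwise distances are unchanged by rotation and translation, so the
    ratio of corresponding distances isolates the scale; the median adds
    some robustness to bad matches."""
    i, j = np.triu_indices(len(src), k=1)
    d_src = np.linalg.norm(src[i] - src[j], axis=1)
    d_dst = np.linalg.norm(dst[i] - dst[j], axis=1)
    return float(np.median(d_dst / d_src))

rng = np.random.default_rng(2)
src = rng.random((30, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random orthogonal matrix
dst = 2.5 * src @ Q.T + np.array([1.0, -2.0, 0.5])
s = estimate_scale(src, dst)                     # recovers the scale 2.5
```

Once one parameter block is fixed this way, the remaining search runs in a lower-dimensional space, which is what makes a BnB approach to the 7-DoF problem tractable.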
Preparation of AgInS2 quantum dots and their application for trypsin detection
Water-soluble AgInS2 (AIS) quantum dots (QDs) were prepared by the hot-injection method using glutathione (GSH) as the stabilizer. The obtained AIS QDs were characterized by X-ray diffraction, X-ray photoelectron spectroscopy, high-resolution transmission electron microscopy, photoluminescence spectroscopy, and dynamic light scattering. The results showed that AIS QDs prepared at a reaction-solution pH of 7.03 exhibited the strongest fluorescence and were about 2.8 nm in size. Using AIS QDs directly as the fluorescent probe, a method for the determination of trypsin content was established based on the fluorescence quenching effect of trypsin on AIS QDs. Under optimal conditions, good linear relationships between the fluorescence quenching efficiency of the AIS QDs and the trypsin concentration were obtained in the ranges of 0.0625–4 μg mL⁻¹, 10–320 μg mL⁻¹, and 0.2–1.6 mg mL⁻¹, with correlation coefficients of 0.994, 0.998, and 0.998, respectively. The detection limit was 0.4 μg mL⁻¹. The average recovery of trypsin in urine was between 96.9% and 105.9%. The possible fluorescence quenching mechanism is discussed in detail, and a dynamic quenching mechanism based on light-induced electron transfer is proposed.
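The calibration underlying such a quenching-based assay can be sketched as a linear fit of quenching efficiency against concentration, inverted to read off unknown samples (all numbers below are hypothetical placeholders, not the paper's data):

```python
import numpy as np

# Hypothetical calibration points: trypsin concentration (ug/mL) against
# the probe's fluorescence quenching efficiency (F0 - F) / F0.
conc = np.array([0.0625, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0])
eff = 0.05 + 0.12 * conc + np.random.default_rng(3).normal(0.0, 1e-3, conc.size)

slope, intercept = np.polyfit(conc, eff, 1)   # linear calibration line
r = np.corrcoef(conc, eff)[0, 1]              # correlation coefficient

def concentration_from_efficiency(e):
    """Invert the calibration line to convert a measured quenching
    efficiency into a concentration."""
    return (e - intercept) / slope

# Read back a sample whose true concentration is 1.5 ug/mL.
c_est = concentration_from_efficiency(0.05 + 0.12 * 1.5)
```

Reported correlation coefficients close to 1, as in the abstract, indicate that this kind of linear inversion is reliable within each working range.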
SS-Pro: a simplified siamese contrastive learning approach for protein surface representation
Conclusion: In this paper, we introduce a simple Siamese contrastive self-supervised learning framework for protein surface representation learning. The encoder in this framework can be adapted to various point cloud feature extraction backbone networks. Experiments show that the pre-trained networks consistently achieve performance improvements on two downstream tasks. In future work, we aim to explore more efficient protein surface feature extraction networks and investigate additional downstream tasks that better capture protein surface characteristics.
Prediction of cerebral aneurysm rupture using a point cloud neural network
Objective: Accurate prediction of cerebral aneurysm (CA) rupture is of great significance. We intended to evaluate the accuracy of the point cloud neural network (PC-NN) in predicting CA rupture using MR angiography (MRA) and CT angiography (CTA) data. Methods: 418 CAs in 411 consecutive patients confirmed by CTA (n=180) or MRA (n=238) in a single hospital were retrospectively analyzed. A PC-NN aneurysm model with/without parent artery involvement was used for CA rupture prediction and compared with ridge regression, support vector machine (SVM) and neural network (NN) models based on radiomics features. Furthermore, the performance of the trained PC-NN and radiomics-based models was prospectively evaluated in 258 CAs of 254 patients from five external centers. Results: In the internal test data, the area under the curve (AUC) of the PC-NN model trained with parent artery (AUC=0.913) was significantly higher than that of the PC-NN model trained without parent artery (AUC=0.851; p=0.041) and of the ridge regression (AUC=0.803; p=0.019), SVM (AUC=0.788; p=0.013) and NN (AUC=0.805; p=0.023) radiomics-based models. Additionally, the PC-NN model trained with MRA source data achieved a higher prediction accuracy (AUC=0.936) than that trained with CTA source data (AUC=0.824; p=0.043). In external data of prospective cohort patients, the AUC of PC-NN was 0.835, significantly higher than ridge regression (0.692; p<0.001), SVM (0.701; p<0.001) and NN (0.681; p<0.001) models. Conclusion: PC-NNs can achieve more accurate CA rupture prediction than traditional radiomics-based models. Furthermore, the performance of the PC-NN model trained with MRA data was superior to that trained with CTA data.
A new robust markerless method for automatic image-to-patient registration in image-guided neurosurgery system
Background: Compared with traditional point-based registration in image-guided neurosurgery systems, surface-based registration is preferable because it does not use fiducial markers before image scanning and does not require image acquisition dedicated to navigation purposes. However, most existing surface-based registration methods must include a manual step for coarse registration, which increases the registration time and introduces inconvenience and uncertainty. Methods: A new automatic surface-based registration method is proposed that applies a 3D surface feature description and matching algorithm to obtain point correspondences for coarse registration and uses the iterative closest point (ICP) algorithm in the last step to obtain the image-to-patient registration. Results: Both phantom and clinical data were used to perform automatic registrations, and the target registration error (TRE) was calculated to verify the practicality and robustness of the proposed method. In the phantom experiments, the registration accuracy was stable across different downsampling resolutions (18-26 mm) and different support radii (2-6 mm). In the clinical experiments, the mean TREs of two patients obtained by registering full head surfaces were 1.30 mm and 1.85 mm. Conclusion: This study introduced a new robust automatic surface-based registration method based on 3D feature matching. The method achieved sufficient registration accuracy with different real-world surface regions in phantom and clinical experiments.
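The ICP refinement used in the last step of such a pipeline can be sketched in a few lines (a minimal point-to-point ICP on synthetic data, assuming coarse feature-based alignment has already brought the surfaces close; this is not the authors' implementation):

```python
import numpy as np

def best_rigid(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping
    src onto dst for known correspondences."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=10):
    """Minimal point-to-point ICP: alternate brute-force nearest-neighbour
    matching with closed-form rigid alignment."""
    cur = src.copy()
    for _ in range(iters):
        nn = np.argmin(np.linalg.norm(cur[:, None] - dst[None, :], axis=-1), axis=1)
        R, t = best_rigid(cur, dst[nn])
        cur = cur @ R.T + t
    return cur

# Synthetic "surface": a coarse 3D grid, mildly rotated and translated.
g = np.linspace(0.0, 1.0, 4)
dst = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.03, -0.02, 0.01])
err = float(np.abs(icp(src, dst) - dst).max())   # residual after refinement
```

ICP converges only from a nearby starting pose, which is exactly why the coarse registration step (manual in prior work, automatic via 3D feature matching here) is needed before it.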