550 result(s) for "Huo, Xing"
Smart phosphorescence from solid to water through progressive assembly strategy based on dual phosphorescent sources
Developing smart room-temperature phosphorescence (RTP) materials through facile and efficient strategies has attracted increasing attention. Herein, tunable RTP materials with two phosphorescent sources and stepwise enhanced phosphorescence in water are obtained through an in-situ self-assembly strategy, based on the sensitization of phosphors by trimesic acid (TMA) through simple doping and the rigidification of phosphors by hydrogen-bonded organic frameworks (HOFs). As expected, the doped TMA+phosphor systems simultaneously promote the RTP emission of the phosphors and maintain the TMA phosphorescence. In-situ assembled HOF(MA-TMA)@phosphors facilitate smart RTP emission in water owing to the coexistence of the phosphorescent HOF(MA-TMA) host and the phosphor guests. Additionally, such RTP materials offer good processability and demonstrate application potential in information security, benefitting from their varied afterglow lifetimes and easy luminous recognition in the dark. This work will inspire the design of dual-phosphorescent-source RTP systems and provide new strategies for the development of smart RTP materials in water.
Solid‐state room‐temperature phosphorescence activated by the end‐capping strategy of cyano groups
Avoiding the tedious process of crystal cultivation and directly obtaining organic crystals with the desired phosphorescent performance is of great significance for studying their structure and properties. Herein, a set of benzophenone-cored phosphors with a bright green afterglow is obtained on a large scale through in-situ generation via an end-capping strategy that suppresses non-radiative triplet excitons and reinforces the intermolecular interactions. The ordered arrangement of phosphors, with alkyl-cyano groups as regulators, is crucial for the enhancement of room-temperature phosphorescence (RTP) emission, as further verified by the attenuated lifetimes in isolated states upon formation of inclusion complexes with pillar[5]arenes. Moreover, the hierarchical interactions of the phosphors, including hydrogen bonding, π-π stacking interactions, and van der Waals forces, are quantified from crystal structures and theoretical calculations to interpret the origins of the RTP emission in depth. This study provides a potential strategy for the direct acquisition of crystalline organic phosphors and the modulation of RTP. Customizable benzophenone-cored phosphors with afterglow properties were fabricated directly by a symmetrical end-capping strategy regulated by alkyl-cyano groups, showing tunable phosphorescence lifetimes, quantum yields, packing modes, and intermolecular interactions. Such a robust strategy sets the basis for the construction of organic room-temperature phosphorescence (RTP) materials and unlocks more possibilities for the design of RTP materials with well-defined supramolecular structures.
NOG1 increases grain production in rice
During rice domestication and improvement, increasing grain yield to meet human needs was the primary objective. Rice grain yield is a quantitative trait determined by multiple genes, but the molecular basis for increased grain yield is still unclear. Here, we show that NUMBER OF GRAINS 1 (NOG1), which encodes an enoyl-CoA hydratase/isomerase, increases the grain yield of rice by enhancing grain number per panicle without a negative effect on the number of panicles per plant or on grain weight. NOG1 can significantly increase the grain yield of commercial high-yield varieties: introduction of NOG1 increases grain yield by 25.8% in the NOG1-deficient rice cultivar Zhonghua 17, and overexpression of NOG1 further increases grain yield by 19.5% in the NOG1-containing variety Teqing. Interestingly, NOG1 plays a prominent role in increasing grain number but does not change heading date or seed-setting rate. Our findings suggest that NOG1 could be used to increase rice production. Rice grain yield is a quantitative trait determined by multiple genes. Here, the authors find that NOG1, which encodes an enoyl-CoA hydratase/isomerase in the fatty acid β-oxidation pathway, can increase grain yield by enhancing grain number per panicle without affecting the other yield-component traits.
A high-throughput neurohistological pipeline for brain-wide mesoscale connectivity mapping of the common marmoset
Understanding the connectivity architecture of entire vertebrate brains is a fundamental but difficult task. Here we present an integrated neurohistological pipeline as well as a grid-based tracer injection strategy for systematic mesoscale connectivity mapping in the common marmoset (Callithrix jacchus). Individual brains are sectioned into ~1700 20 µm sections using the tape transfer technique, permitting high-quality 3D reconstruction of a series of histochemical stains (Nissl, myelin) interleaved with tracer-labeled sections. Systematic in-vivo MRI of the individual animals facilitates injection placement into reference-atlas-defined anatomical compartments. Further, by combining the resulting 3D volumes, which contain informative cytoarchitectonic markers, with in-vivo and ex-vivo MRI, and using an integrated computational pipeline, we are able to accurately map individual brains into a common reference atlas despite significant individual variation. This approach will facilitate the systematic assembly of a mesoscale connectivity matrix, together with unprecedented 3D reconstructions of brain-wide projection patterns in a primate brain.
Solving the where problem and quantifying geometric variation in neuroanatomy using generative diffeomorphic mapping
A current focus in neuroscience is to map neuronal cell types in whole vertebrate brains using different imaging modalities. Mapping modern molecular and anatomical datasets into a common atlas includes challenges that existing workflows do not adequately address: multimodal signals, missing data or non-reference signals, and quantification of individual variation. Our solution implements a generative model describing the likelihood of the data given a sequence of transforms of an atlas, together with a maximum a posteriori estimation framework. Our approach allows composition of mappings across chains of datasets rather than only pairs, and computes metrics for geometric quantification. We study a range of datasets (in/ex-vivo MRI, STP and fMOST, 2D serial histology, snRNAseq-prepared tissue), quantifying cell density and geometric fluctuations across covariates, and reveal that individual variation is often greater than differences due to tissue processing techniques. We provide open-source code, dataset standards, and a web interface. This establishes a quantitative workflow for unifying multimodal whole-brain images in an atlas framework, validated using mouse datasets, enabling the large-scale integration of datasets essential to modern neuroscience. Challenges in mapping modern molecular and anatomical datasets into a common atlas are not fully addressed by existing workflows. Here, the authors present approaches to aligning multimodal neuroimaging data and quantifying geometric variability, and they provide open-source code, dataset standards, and a web interface, enabling the large-scale integration of datasets essential to modern neuroscience.
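A concrete detail from this abstract is the composition of mappings across chains of datasets rather than only pairs. As a rough illustration of that idea (not the authors' code, which composes diffeomorphic transforms; a hypothetical affine-only chain is used here), the sketch below maps points from a histology dataset through an intermediate MRI into an atlas by composing 4x4 affine matrices:

```python
# Illustrative sketch: composing transforms across a chain of datasets,
# e.g. histology -> ex-vivo MRI -> atlas. Real pipelines compose
# diffeomorphisms; each step here is a 4x4 affine for simplicity.
import numpy as np

def compose(*affines):
    """Compose 4x4 affine transforms; the first argument is applied first."""
    out = np.eye(4)
    for A in affines:
        out = A @ out
    return out

def apply_affine(A, points):
    """Apply a 4x4 affine to an (N, 3) array of points."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ A.T)[:, :3]

# Hypothetical chain with made-up transforms.
A_hist_to_mri = np.eye(4); A_hist_to_mri[:3, 3] = [1.0, 0.0, 0.0]  # shift
A_mri_to_atlas = np.eye(4); A_mri_to_atlas[:3, :3] *= 0.8          # scale

A_hist_to_atlas = compose(A_hist_to_mri, A_mri_to_atlas)
print(apply_affine(A_hist_to_atlas, np.array([[10.0, 5.0, 2.0]])))
```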
Automatic 3D pelvimetry framework in CT images and its validation
In the field of spinal pathology, the sagittal balance of the spine is usually judged from the spatial structure and morphology of the pelvis, which can be represented by pelvic parameters. Pelvic parameters, including pelvic incidence, pelvic tilt, and sacral slope, are therefore essential for the diagnosis and treatment of spinal disorders; however, measuring these parameters by traditional methods is a time-consuming and laborious procedure. In this paper, an automatic measurement framework for pelvic CT images was proposed to calculate three-dimensional (3D) pelvic parameters with the support of deep learning technology. Pelvic images were first preprocessed, and 3D reconstruction was then performed with the Visualization Toolkit to obtain a 3D pelvic model. DRINet was trained to segment the femoral head region in the pelvic images, and 3D sphere fitting was performed to locate the femoral heads. In addition, VGG16 was adopted to recognize images containing the superior sacral endplate, and a plane-growth algorithm was used to fit the plane so that the midpoint and normal vector of the superior sacral endplate could be obtained. Finally, 3D pelvic parameters were automatically calculated and compared with manual measurements for 15 patients. The proposed framework automatically generated 3D pelvic models and calculated two-dimensional (2D) and 3D pelvic parameters from continuous CT images. Experiments demonstrated that the framework can greatly speed up the calculation of pelvic parameters and that these parameters are accurate when compared with manual measurements. In conclusion, the proposed framework demonstrates good performance on automatic pelvimetry by incorporating deep learning technology and can replace traditional methods of pelvic parameter measurement.
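For readers unfamiliar with the three parameters, the sketch below shows how pelvic incidence (PI), pelvic tilt (PT), and sacral slope (SS) follow geometrically from the femoral head centers and the sacral endplate. This is a simplified 2D sagittal-plane illustration with made-up coordinates, not the paper's 3D pipeline; it relies on the standard definitions under which PI = PT + SS:

```python
# Simplified 2D sagittal-plane sketch (illustrative; the paper works in 3D).
# Coordinates: x = anterior, y = up. All inputs are hypothetical.
import numpy as np

def angle_deg(u, v):
    """Unsigned angle between two 2D vectors, in degrees."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def pelvic_parameters(plate_anterior, plate_posterior, hip_center):
    """plate_anterior/plate_posterior: sagittal endpoints of the superior
    sacral endplate; hip_center: midpoint of the two femoral head centers."""
    midpoint = (plate_anterior + plate_posterior) / 2.0
    plate = plate_posterior - plate_anterior
    # Sacral slope: inclination of the endplate line vs. horizontal.
    ss = angle_deg(plate, np.array([1.0, 0.0]))
    ss = min(ss, 180.0 - ss)                      # a line, not a vector
    # Pelvic tilt: hip-center-to-midpoint line vs. vertical.
    pt = angle_deg(midpoint - hip_center, np.array([0.0, 1.0]))
    # Pelvic incidence: same line vs. the upward endplate normal.
    normal = np.array([-plate[1], plate[0]])
    if normal[1] < 0:
        normal = -normal
    pi = angle_deg(midpoint - hip_center, normal)
    return pi, pt, ss

pi, pt, ss = pelvic_parameters(np.array([3.0, 10.0]),
                               np.array([-3.0, 12.0]),
                               np.array([5.0, 0.0]))
print(f"PI={pi:.1f}  PT={pt:.1f}  SS={ss:.1f}  (PI ~ PT + SS)")
```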
Quantitative separation of arterial and venous cerebral blood volume increases during voluntary locomotion
Voluntary locomotion is accompanied by large increases in cortical activity and localized increases in cerebral blood volume (CBV). We sought to quantitatively determine the spatial and temporal dynamics of voluntary locomotion-evoked cerebral hemodynamic changes. We measured single-vessel dilations using two-photon microscopy and cortex-wide changes in CBV-related signal using intrinsic optical signal (IOS) imaging in head-fixed mice freely locomoting on a spherical treadmill. During bouts of locomotion, arteries dilated rapidly, while veins distended slightly and recovered slowly. The dynamics of the diameter changes of both vessel types could be captured using a simple linear convolution model. Using these single-vessel measurements, we developed a novel analysis approach to separate out spatially and temporally distinct arterial and venous components of the location-specific hemodynamic response functions (HRFs) for IOS. The HRF of each pixel was well fit by a sum of a fast arterial component and a slow venous component. The HRFs of pixels in the limb representations of somatosensory cortex had a large arterial contribution, while in the frontal cortex the arterial contribution to the HRF was negligible. The venous contribution was much less localized and was substantial in the frontal cortex. The spatial pattern and amplitude of these HRFs in response to locomotion were robust across imaging sessions. Separating the more localized arterial component from the diffuse venous signals will be useful for dealing with the dynamic signals generated by naturalistic stimuli. Highlights:
• Arteries and veins dilate with distinct dynamics during voluntary locomotion in mice.
• Single-vessel responses to locomotion were fit with a linear convolution model (LCM).
• The intrinsic optical signal could be separated into arterial and venous components.
• Arterial responses were more spatially localized than venous responses.
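The pixel-wise fit described above, an HRF expressed as a weighted sum of a fast arterial and a slow venous component, is a linear least-squares problem once the two kernel shapes are fixed. The sketch below illustrates the decomposition on synthetic data; the kernel forms and time constants are assumptions for illustration, not the paper's measured kernels:

```python
# Illustrative sketch: decompose a pixel's hemodynamic response function
# (HRF) into a fast "arterial" and a slow "venous" component by linear
# least squares on two fixed kernels (kernel shapes are assumptions).
import numpy as np

t = np.arange(0, 20, 0.1)                     # seconds
arterial_kernel = t * np.exp(-t / 0.8)        # fast, assumed time constant
venous_kernel = t * np.exp(-t / 5.0)          # slow, assumed time constant
arterial_kernel /= arterial_kernel.max()
venous_kernel /= venous_kernel.max()

def fit_hrf(hrf):
    """Return (arterial weight, venous weight) for a measured HRF trace."""
    X = np.column_stack([arterial_kernel, venous_kernel])
    weights, *_ = np.linalg.lstsq(X, hrf, rcond=None)
    return weights

# Synthetic pixel HRF: strong arterial contribution plus noise.
rng = np.random.default_rng(0)
hrf = (0.9 * arterial_kernel + 0.2 * venous_kernel
       + 0.01 * rng.standard_normal(t.size))
w_art, w_ven = fit_hrf(hrf)
print(f"arterial={w_art:.2f}, venous={w_ven:.2f}")
```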
An improved CapsNet applied to recognition of 3D vertebral images
Deep learning is currently widely applied in medical image processing and has achieved good results. However, recognizing vertebrae via image processing remains a challenging problem due to their complex spatial structures. CapsNet is a newly proposed network whose characteristics compensate for some shortcomings of traditional CNNs, and it has been shown to perform well on many tasks, including medical image recognition. In this paper, we applied a modified CapsNet to recognize 3D vertebral images by introducing an RNN module into CapsNet to further enhance its learning ability. This new network, called RNNinCaps, achieves the highest recognition performance on 3D vertebral images (its average accuracy exceeds that of the original CapsNet by 46.2% and that of a traditional CNN by 12.6%), and it also performs better than several mainstream networks. RNNinCaps can promote CapsNet's application in the field of 3D medical image recognition.
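The paper's core architectural move is inserting an RNN module into CapsNet. A minimal PyTorch sketch of that idea is shown below: primary capsule vectors are fed to a GRU as a sequence, and the final hidden state drives the classifier. Layer sizes are assumptions, and the dynamic-routing stage of a full CapsNet is omitted for brevity:

```python
# Minimal PyTorch sketch of inserting an RNN between capsule stages
# (illustrative; layer sizes are assumptions, dynamic routing omitted).
import torch
import torch.nn as nn

class RNNCapsSketch(nn.Module):
    def __init__(self, num_classes=10, caps_dim=8):
        super().__init__()
        self.caps_dim = caps_dim
        self.conv = nn.Conv2d(1, 32, kernel_size=9, stride=2)
        # Primary capsules: channels grouped into vectors of length caps_dim.
        self.primary = nn.Conv2d(32, 8 * caps_dim, kernel_size=9, stride=2)
        # The GRU reads the sequence of capsule vectors as time steps.
        self.rnn = nn.GRU(input_size=caps_dim, hidden_size=64,
                          batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        h = torch.relu(self.conv(x))
        h = self.primary(h)                          # (B, 8*D, H, W)
        B, C, H, W = h.shape
        caps = h.view(B, C, H * W).transpose(1, 2)   # (B, HW, C)
        caps = caps.reshape(B, H * W * (C // self.caps_dim), self.caps_dim)
        _, last = self.rnn(caps)                     # final hidden state
        return self.head(last.squeeze(0))

model = RNNCapsSketch()
print(model(torch.randn(2, 1, 64, 64)).shape)        # torch.Size([2, 10])
```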
Infrared and Visible Image Fusion with Significant Target Enhancement
Existing fusion rules focus on retaining detailed information from the source images, but because the thermal radiation information in infrared images is mainly characterized by pixel intensity, these rules are likely to reduce the saliency of the target in the fused image. To address this problem, we propose an infrared and visible image fusion model based on significant target enhancement, aiming to inject thermal targets from infrared images into visible images to enhance target saliency while retaining the important details of the visible images. First, the source images are decomposed with multi-level Gaussian curvature filtering to obtain background information with high spatial resolution. Second, the large-scale layers are fused using ResNet50 and a maximum-weight rule based on the average operator to improve detail retention. Finally, the base layers are fused by incorporating a new salient-target detection method. Subjective and objective experimental results on the TNO and MSRS datasets demonstrate that our method achieves better results than other traditional and deep learning-based methods.
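The two-scale structure described above (a smoothed base layer carrying target intensity plus a detail layer fused by strength) can be illustrated in a few lines. In the sketch below, plain Gaussian smoothing stands in for the paper's multi-level Gaussian curvature filtering, a brightness map stands in for its salient-target detection, and a max-absolute rule replaces the ResNet50-based detail fusion, so this is only a structural analogue:

```python
# Structural sketch of two-scale infrared/visible fusion (illustrative only:
# Gaussian smoothing replaces Gaussian curvature filtering, a brightness map
# replaces the salient-target detector, max-abs replaces the ResNet50 stage).
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(ir, vis, sigma=5.0):
    """ir, vis: float images in [0, 1] with the same shape."""
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis
    # Base fusion: weight the infrared base by a crude saliency proxy so
    # bright (hot) infrared regions dominate locally.
    w = (base_ir - base_ir.min()) / (np.ptp(base_ir) + 1e-8)
    base = w * base_ir + (1.0 - w) * base_vis
    # Detail fusion: keep whichever source has the stronger local detail.
    detail = np.where(np.abs(detail_ir) > np.abs(detail_vis),
                      detail_ir, detail_vis)
    return np.clip(base + detail, 0.0, 1.0)

rng = np.random.default_rng(0)
fused = fuse(rng.random((128, 128)), rng.random((128, 128)))
print(fused.shape, float(fused.min()), float(fused.max()))
```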
Automatic Vertebral Rotation Angle Measurement of 3D Vertebrae Based on an Improved Transformer Network
The measurement of vertebral rotation angles serves as a crucial parameter in spinal assessments, particularly in understanding conditions such as idiopathic scoliosis. Historically, these angles were calculated from 2D CT images. However, such 2D techniques fail to comprehensively capture the intricate three-dimensional deformities inherent in spinal curvatures. To overcome the limitations of manual measurements and 2D imaging, we introduce an entirely automated approach for quantifying vertebral rotation angles using a three-dimensional vertebral model. Our method involves refining a point cloud segmentation network based on a transformer architecture. This enhanced network segments the three-dimensional vertebral point cloud, allowing for accurate measurement of vertebral rotation angles. In contrast to conventional network methodologies, our approach exhibits notable improvements in segmenting vertebral datasets. To validate our approach, we compare our automated measurements with angles derived from prevalent manual labeling techniques. The analysis, conducted through Bland–Altman plots and the corresponding intraclass correlation coefficient results, indicates significant agreement between our automated measurement method and manual measurements. The observed high intraclass correlation coefficients (ranging from 0.980 to 0.993) further underscore the reliability of our automated measurement process. Consequently, our proposed method demonstrates substantial potential for clinical applications, showcasing its capacity to provide accurate and efficient vertebral rotation angle measurements.
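To make the measurement step concrete: once a vertebra is segmented as a point cloud, an axial rotation angle can be estimated from the orientation of its principal axes. The sketch below uses PCA on synthetic data for illustration; the paper's transformer-based segmentation and its specific angle definition are not reproduced here:

```python
# Illustrative sketch: estimate an axial vertebral rotation angle from a
# segmented vertebral point cloud via PCA in the transverse plane.
import numpy as np

def axial_rotation_deg(points, sagittal=np.array([0.0, 1.0])):
    """points: (N, 3) vertebral point cloud with z = cranio-caudal axis.
    Returns the angle (degrees) between the dominant transverse PCA axis
    and the sagittal direction, projected into the transverse (x, y) plane."""
    xy = points[:, :2] - points[:, :2].mean(axis=0)
    cov = np.cov(xy.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]        # dominant in-plane axis
    cosang = abs(np.dot(axis, sagittal))         # PCA axis sign is ambiguous
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

# Synthetic vertebra-like cloud rotated by 15 degrees about z.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((2000, 3)) * np.array([4.0, 10.0, 3.0])
theta = np.radians(15.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
print(axial_rotation_deg(cloud @ R.T))           # ~15
```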