Catalogue Search | MBRL
39 result(s) for "Chen, Liangjian"
The YAP1/GPX4 axis alleviates osteoporosis by affecting ferroptosis in osteoblasts
by Luo, Jia; Chen, Liangjian; Tang, Zhipeng
in Adaptor Proteins, Signal Transducing - genetics; Adaptor Proteins, Signal Transducing - metabolism; Animals
2025
Background
Osteoporosis (OP) is a disease in which weakened bones increase the risk of fracture. Ferroptosis has been reported to accelerate the progression of OP; however, the underlying mechanism of ferroptosis in OP remains unclear.
Methods
Clinical samples from OP patients were collected, and an ovariectomized (OVX)-induced mouse model with GPX4 knockout was established. The expression of genes and proteins was determined by RT-qPCR, western blot, IHC and IF. Bone mineral density (BMD) of the lumbar vertebrae was evaluated using DXA. Pearson correlation analysis was used to assess the relationship between GPX4 expression and BMD. Femoral morphology was examined by HE staining. Images and relevant parameters of the femur were acquired using micro-CT. Ultrastructural changes in mitochondria were observed using TEM. MDA and GSH levels in mice and cells were examined using commercial kits. Lipid peroxidation was detected using a Bodipy-C11 fluorescent probe. ALP activity was measured using ALP staining, and calcified nodules were examined using ARS staining. The interaction between YAP1 and the GPX4 promoter was validated using ChIP and dual-luciferase reporter gene assays.
Results
GPX4 expression was downregulated in clinical samples of OP and positively correlated with BMD. GPX4 knockout exacerbated bone loss and promoted ferroptosis in OVX-induced mice. In addition, GPX4 overexpression inhibited ferroptosis and enhanced the osteogenic potential of osteoblasts. Moreover, YAP1 positively regulated GPX4 expression in osteoblasts by activating the transcriptional activity of the GPX4 promoter, and YAP1 overexpression suppressed ferroptosis and enhanced the osteogenic potential of osteoblasts by increasing GPX4 expression.
Conclusion
GPX4 was positively regulated by YAP1, which in turn inhibited ferroptosis and enhanced the osteogenic potential of osteoblasts, thereby alleviating OP progression.
Journal Article
Arecoline Enhances Phosphodiesterase 4A Activity to Promote Transforming Growth Factor-β-Induced Buccal Mucosal Fibroblast Activation via cAMP-Epac1 Signaling Pathway
2021
Chewing areca nut (betel quid) is strongly associated with oral submucous fibrosis (OSF), a pre-cancerous lesion. Among the areca alkaloids, arecoline is the main agent responsible for fibroblast proliferation; however, the specific molecular mechanism by which arecoline affects OSF remains unclear. The present study revealed that arecoline treatment significantly enhanced transforming growth factor-β (TGF-β)-induced buccal mucosal fibroblast (BMF) activation and fibrotic changes. Arecoline interacts with phosphodiesterase 4A (PDE4A) to exert its effects by modulating PDE4A activity but not PDE4A expression. PDE4A silencing reversed the effects of arecoline on TGF-β-induced BMF activation and fibrotic changes. Moreover, the exchange protein directly activated by cAMP 1 (Epac1)-selective cyclic adenosine 3′,5′-monophosphate (cAMP) analog (8-Me-cAMP), but not the protein kinase A (PKA)-selective cAMP analog (N6-cAMP), remarkably suppressed α-smooth muscle actin (α-SMA) and collagen type I alpha 1 chain (Col1A1) protein levels in response to TGF-β1 and arecoline co-treatment, indicating that cAMP-Epac1 but not cAMP-PKA signaling is involved in arecoline's effects on TGF-β1-induced BMF activation. In conclusion, arecoline promotes TGF-β1-induced BMF activation by enhancing PDE4A activity and the cAMP-Epac1 signaling pathway during OSF. This novel mechanism might provide more powerful strategies for OSF treatment, pending further in vivo and clinical investigation.
Journal Article
CircZNF367 promotes osteoclast differentiation and osteoporosis by interacting with FUS to maintain CRY2 mRNA stability
Background
Osteoporosis, characterized by reduced bone mass and deterioration of bone quality, is a significant health concern for postmenopausal women. Considering that the specific role of circRNAs in osteoporosis and osteoclast differentiation remains poorly understood, this study aims to shed light on their involvement in these processes to enhance our understanding and potentially contribute to improved treatment strategies for osteoporosis.
Methods
An osteoporotic model was constructed in vivo in ovariectomized mice. In vitro, we induced osteoclast formation in bone marrow-derived macrophages (BMDMs) using M-CSF + RANKL. To assess osteoporosis in mice, we performed HE staining. We used MTT and TRAP staining to measure cell viability and osteoclast formation, respectively, and also evaluated their mRNA and protein expression levels. In addition, RNA pull-down, RIP and luciferase reporter assays were performed to investigate interactions, and a ChIP assay was used to examine the impact of circZNF367 knockdown on the binding between FUS and CRY2.
Results
We observed increased expression of circZNF367, FUS and CRY2 in osteoporotic mice and M-CSF + RANKL-induced BMDMs. Functionally, knocking down circZNF367 inhibited osteoporosis in vivo. Furthermore, interference with circZNF367 suppressed osteoclast proliferation and the expression of TRAP, NFATc1, and c-FOS. Mechanistically, circZNF367 interacted with FUS to maintain CRY2 mRNA stability. Additionally, knocking down CRY2 rescued M-CSF + RANKL-induced osteoclast differentiation in BMDMs promoted by circZNF367 and FUS.
Conclusion
This study reveals that the circZNF367/FUS axis may accelerate osteoclast differentiation by upregulating CRY2 in osteoporosis, and suggests that targeting circZNF367 may have therapeutic potential for osteoporosis.
Journal Article
Microstructure, biodegradable behavior in different simulated body fluids, antibacterial effect on different bacteria and cytotoxicity of rolled Zn-Li-Ag alloy
2020
The rolled Zn-0.8Li-0.2Ag (wt%) alloy is a candidate biodegradable material. Its biodegradation behavior in different solutions (Ringer's, DMEM, SBF and DMEMp) was investigated, and its cytotoxicity and antibacterial properties against Staphylococcus aureus, Enterobacter faecalis and Candida albicans were evaluated. The results showed that the Zn-0.8Li-0.2Ag alloy consists of a zinc matrix and a LiZn4 secondary phase. The presence of Cl− causes localized corrosion of the Zn-0.8Li-0.2Ag alloy in Ringer's solution, and its corrosion resistance there is lower than in the other solutions containing CO32− and PO43−, where the alloy corrodes uniformly. The Zn-0.8Li-0.2Ag alloy is non-toxic and exhibits better antibacterial properties than the experimental reference group without silver.
Journal Article
Deep Learning in 3D Hand Pose and Mesh Estimation
2020
3D hand pose estimation is an important problem because of its wide range of potential applications, such as sign language translation, robotics, movement disorder detection and monitoring, and human-computer interaction (HCI). However, despite previous progress, it remains a challenging problem in computer vision due to the difficulty of acquiring high-quality hand pose annotations. In this dissertation, we develop a variety of approaches to this problem, aiming to achieve better estimation accuracy or provide an easier training environment. First, to bridge the image-quality gap between synthetic and real-world datasets, we propose TAGAN (Tonality-Aligned Generative Adversarial Networks) to produce more realistic hand pose images. Second, to relax the paired RGB and depth image requirement of most state-of-the-art 3D hand pose estimators, we propose DGGAN (Depth-image Guided Generative Adversarial Networks), which allows those estimators to be trained on RGB-only datasets. Third, since accurate 3D hand pose annotations are very difficult to acquire, we propose TASSN (Temporal-Aware Self-Supervised Network) with temporal consistency constraints, which learns 3D hand poses and meshes from videos with only 2D keypoint position annotations. Last but not least, since 3D hand pose estimation from a single image is intrinsically ill-posed, we build a multi-view hand mesh benchmark to tackle the problem from a multi-view perspective. We design a spin-match algorithm that enables a rigid mesh model to be matched with any target mesh ground truth. Based on this matching algorithm, we propose an efficient pipeline to generate a large-scale multi-view hand mesh (MVHM) dataset with accurate 3D hand mesh and joint labels.
Dissertation
Extending Context Window of Large Language Models via Positional Interpolation
by Chen, Liangjian; Tian, Yuandong; Wong, Sherman
in Context; Interpolation; Large language models
2023
We present Position Interpolation (PI), which extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 tokens with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language modeling, and long document summarization, from LLaMA 7B to 65B. Meanwhile, models extended by Position Interpolation preserve quality relatively well on tasks within their original context window. To achieve this goal, Position Interpolation linearly down-scales the input position indices to match the original context window size, rather than extrapolating beyond the trained context length, which may lead to catastrophically high attention scores that completely ruin the self-attention mechanism. Our theoretical study shows that the upper bound of interpolation is at least \(\sim 600\times\) smaller than that of extrapolation, further demonstrating its stability. Models extended via Position Interpolation retain their original architecture and can reuse most pre-existing optimizations and infrastructure.
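The linear down-scaling described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name and the 2048/8192 window sizes are assumptions chosen for the example:

```python
def interpolate_positions(position_ids, original_len, extended_len):
    """Linearly down-scale position indices so that an extended context
    window maps back into the index range the RoPE-based model was
    pretrained on, instead of extrapolating past it."""
    scale = original_len / extended_len  # < 1 when extending the window
    return [p * scale for p in position_ids]

# Hypothetical example: a model pretrained with a 2048-token window,
# extended to 8192 tokens.
scaled = interpolate_positions([0, 4096, 8191],
                               original_len=2048, extended_len=8192)
# every scaled index now lies within [0, 2048)
```

Because the scaled indices stay inside the trained range, the attention scores remain in the regime the model saw during pretraining, which is the stability argument the abstract makes.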
PPT: token-Pruned Pose Transformer for monocular and multi-view human pose estimation
2022
Recently, the vision transformer and its variants have played an increasingly important role in both monocular and multi-view human pose estimation. Considering image patches as tokens, transformers can model the global dependencies within the entire image or across images from other views. However, global attention is computationally expensive, and as a consequence it is difficult to scale these transformer-based methods up to high-resolution features and many views. In this paper, we propose the token-Pruned Pose Transformer (PPT) for 2D human pose estimation, which locates a rough human mask and performs self-attention only within the selected tokens. Furthermore, we extend our PPT to multi-view human pose estimation. Built upon PPT, we propose a new cross-view fusion strategy, called human area fusion, which considers all human foreground pixels as corresponding candidates. Experimental results on COCO and MPII demonstrate that our PPT can match the accuracy of previous pose transformer methods while reducing computation. Moreover, experiments on Human 3.6M and Ski-Pose demonstrate that our Multi-view PPT can efficiently fuse cues from multiple views and achieve new state-of-the-art results.
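The computational benefit of pruning can be illustrated with a minimal score-based token-selection sketch. The scores here are random placeholders; the paper derives its selection from a human mask, so everything below is an illustrative assumption, not the authors' method:

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio=0.3):
    """Keep only the highest-scoring tokens; self-attention then runs on
    this reduced set, so its quadratic cost shrinks with keep_ratio**2."""
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.argsort(scores)[-k:]   # indices of the k largest scores
    return tokens[np.sort(keep)]     # preserve the original token order

tokens = np.arange(100).reshape(100, 1)  # 100 dummy patch tokens
scores = np.random.rand(100)             # stand-in for mask-based scores
pruned = prune_tokens(tokens, scores, keep_ratio=0.3)
# attention over 30 tokens costs ~(0.3)^2 = 9% of full attention
```

Keeping 30% of the tokens reduces the pairwise-attention work to roughly 9% of the original, which is the scaling argument behind pruning before self-attention.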
Identity-Aware Hand Mesh Estimation and Personalization from RGB Images
by Chen, Liangjian; Yan, Xiangyi; Sun, Shanlin
in Color imagery; Finite element method; Image reconstruction
2022
Reconstructing 3D hand meshes from monocular RGB images has attracted an increasing amount of attention due to its enormous potential applications in AR/VR. Most state-of-the-art methods attempt to tackle this task in an anonymous manner. Specifically, the identity of the subject is ignored even though it is practically available in real applications, where the user is unchanged during a continuous recording session. In this paper, we propose an identity-aware hand mesh estimation model, which can incorporate the identity information represented by the intrinsic shape parameters of the subject. We demonstrate the importance of identity information by comparing the proposed identity-aware model to a baseline that treats the subject anonymously. Furthermore, to handle the use case where the test subject is unseen, we propose a novel personalization pipeline to calibrate the intrinsic shape parameters using only a few unlabeled RGB images of the subject. Experiments on two large-scale public datasets validate the state-of-the-art performance of our proposed method.
Temporal-Aware Self-Supervised Learning for 3D Hand Pose and Mesh Estimation in Videos
2020
Estimating 3D hand pose directly from RGB images is challenging but has gained steady progress recently by training deep models with annotated 3D poses. However, annotating 3D poses is difficult, and as such only a few 3D hand pose datasets are available, all with limited sample sizes. In this study, we propose a new framework for training 3D pose estimation models from RGB images without using explicit 3D annotations, i.e., trained with only 2D information. Our framework is motivated by two observations: 1) videos provide richer information for estimating 3D poses as opposed to static images; 2) estimated 3D poses ought to be consistent whether the videos are viewed in the forward order or reverse order. We leverage these two observations to develop a self-supervised learning model called temporal-aware self-supervised network (TASSN). By enforcing temporal consistency constraints, TASSN learns 3D hand poses and meshes from videos with only 2D keypoint position annotations. Experiments show that our model achieves surprisingly good results, with 3D estimation accuracy on par with the state-of-the-art models trained with 3D annotations, highlighting the benefit of temporal consistency in constraining 3D prediction models.
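The forward/reverse consistency observation can be sketched as a simple loss term. The function name, array shapes, and mean-squared form below are illustrative assumptions, not the paper's actual objective:

```python
import numpy as np

def temporal_consistency_loss(poses_forward, poses_reverse):
    """Penalize disagreement between 3D poses estimated from a clip played
    forward and from the same clip played in reverse: reversing the
    reverse-order estimates should reproduce the forward-order ones."""
    aligned = poses_reverse[::-1]  # undo the reversal to align frames
    return float(np.mean((poses_forward - aligned) ** 2))

# 5 frames, 21 hand joints, 3D coordinates (dummy estimates)
fwd = np.random.rand(5, 21, 3)
loss = temporal_consistency_loss(fwd, fwd[::-1])  # identical estimates -> 0
```

A model whose forward- and reverse-order predictions agree incurs zero penalty, so minimizing this term pushes the estimator toward temporally consistent 3D poses without any 3D labels.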
DGGAN: Depth-image Guided Generative Adversarial Networks for Disentangling RGB and Depth Images in 3D Hand Pose Estimation
2020
Estimating 3D hand poses from RGB images is essential to a wide range of potential applications, but is challenging owing to substantial ambiguity in the inference of depth information from RGB images. State-of-the-art estimators address this problem by regularizing 3D hand pose estimation models during training to enforce the consistency between the predicted 3D poses and the ground-truth depth maps. However, these estimators rely on both RGB images and the paired depth maps during training. In this study, we propose a conditional generative adversarial network (GAN) model, called Depth-image Guided GAN (DGGAN), to generate realistic depth maps conditioned on the input RGB image, and use the synthesized depth maps to regularize the 3D hand pose estimation model, thereby eliminating the need for ground-truth depth maps. Experimental results on multiple benchmark datasets show that the synthesized depth maps produced by DGGAN are quite effective in regularizing the pose estimation model, yielding new state-of-the-art results in estimation accuracy, notably reducing the mean 3D end-point errors (EPE) by 4.7%, 16.5%, and 6.8% on the RHD, STB and MHP datasets, respectively.