Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
14 result(s) for "Dou, Chenlong"
Genome-wide association study in Han Chinese identifies four new susceptibility loci for coronary artery disease
by Wang, Qianqian; Pu, Xiaodong; Liu, Depei
in 631/208/205/2138; 631/208/727/2000; 692/699/75/593/15
2012
Dongfeng Gu and colleagues report a genome-wide association study for coronary artery disease in Han Chinese individuals. They identify four loci newly associated with coronary artery disease.
We performed a meta-analysis of 2 genome-wide association studies of coronary artery disease comprising 1,515 cases and 5,019 controls followed by replication studies in 15,460 cases and 11,472 controls, all of Chinese Han ancestry. We identify four new loci for coronary artery disease that reached the threshold of genome-wide significance (P < 5 × 10⁻⁸). These loci mapped in or near TTC32-WDR35, GUCY1A3, C6orf10-BTNL2 and ATP2B1. We also replicated four loci previously identified in European populations (in or near PHACTR1, TCF21, CDKN2A-CDKN2B and C12orf51). These findings provide new insights into pathways contributing to the susceptibility for coronary artery disease in the Chinese Han population.
Journal Article
Homogeneously‐Dimensionalizing Perovskite Surface by Dual‐Mechano‐Chemical Regulation for Efficient Solar Cells
by Geng, Shengwei; Guo, Qiyao; Zhao, Yuanyuan
in charge transfer; Crystallization; dimensionality heterointerface
2025
Precise manipulation of surface dimensionality benefits the efficiency and stability of perovskite solar cells. However, surface heterogeneity, with substantial atomic-scale impurities and micro-wrinkles that serve as transformation templates, hinders the formation of a homogeneous heterointerface and thus weakens the healing efficacy. To address this issue, a dual-mechano-chemical strategy is proposed herein to homogenize the morphologic-compositional features of the perovskite surface by first polishing superficial nano-impurities with energetic nanoparticles and then in situ dimensionalizing the defect-free lattice to form a 2D/3D heterointerface with strengthened contact and homogeneous distribution. With the implementation of this strategy, the reconstructed heterointerface not only accelerates charge transfer with minimized interfacial non-radiative recombination losses, but also protects the perovskite lattice from external attack. Consequently, an all-air-processed carbon-based CsPbI2Br solar cell displays an enhanced efficiency of 15.29% and an elevated performance retention rate under dark storage over 1000 h, high temperature over 500 h, and persistent operation over 200 h. This work provides a multidimensional surface engineering strategy for high-efficiency, stable perovskite-based photoelectric devices, benefiting large-scale fabrication in the future. A homogeneous and strengthened 2D/3D perovskite heterointerface is realized by idealizing the perovskite surface to eliminate atomic-scale impurities and micro-wrinkles. Benefiting from reinforced charge transfer and lattice solidification, an all-air-processed carbon-based all-inorganic CsPbI2Br device achieves an enhanced efficiency of 15.29% with improved stability, offering deep insight into dimensionality engineering.
Journal Article
Predictive value of early kinetics of ctDNA combined with cfDNA and serum CEA for EGFR‐TKI treatment in advanced non‐small cell lung cancer
by Li, Jianghua; Li, Li; Zheng, Jie
in Antigens; Carcinoembryonic Antigen; Carcinoma, Non-Small-Cell Lung - drug therapy
2022
Background Circulating tumor DNA (ctDNA) has made a breakthrough as an early biomarker in operable early-stage cancer patients. However, the function of ctDNA combined with cell-free DNA (cfDNA) as a predictor in advanced non-small cell lung cancer (NSCLC) remains unknown. Here, we explored its potential as a biomarker for predicting the efficacy of epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) in patients with advanced NSCLC. Methods A retrospective analysis was undertaken. Plasma collected from 51 patients with advanced NSCLC prior to and serially after starting treatment with EGFR-TKIs was analyzed by next-generation sequencing (NGS). The performance of ctDNA, cfDNA, and the combination of ctDNA with cfDNA was evaluated for the ability to predict survival outcomes. Results Patients with early undetectable ctDNA and increasing cfDNA had markedly better progression-free survival (PFS) (p < 0.001) and overall survival (OS) (p = 0.001) than those with early detectable ctDNA and decreasing cfDNA. Patients with early ctDNA clearance were more likely to show persistent ctDNA clearance (p = 0.006). The early clearance rate of ctDNA in the normal carcinoembryonic antigen (CEA) group was significantly higher than in the low and high groups (p = 0.028). Patients with a greater CEA decline had a higher early clearance rate of ctDNA than those with minor CEA change (p = 0.016). Conclusions This study was based on ctDNA and cfDNA, explored their prognostic predictive ability, and combined them with CEA to monitor EGFR-TKI efficacy. It may provide new perspectives and insights into precise treatment strategies for NSCLC patients. The predictive model based on ctDNA and cfDNA showed four different types of prognosis. The best outcomes occurred in patients with early ctDNA clearance combined with increased cfDNA, and the worst in patients without early ctDNA clearance combined with decreased cfDNA.
Our findings confirmed that CEA correlates with ctDNA to a certain extent. Combined monitoring of CEA and ctDNA may provide a convenient and feasible way to observe drug effects in patients on drug holidays.
Journal Article
GLARE: Agentic Reasoning for Legal Judgment Prediction
2025
Legal judgment prediction (LJP) has become increasingly important in the legal field. In this paper, we identify that existing large language models (LLMs) have significant problems of insufficient reasoning due to a lack of legal knowledge. Therefore, we introduce GLARE, an agentic legal reasoning framework that dynamically acquires key legal knowledge by invoking different modules, thereby improving the breadth and depth of reasoning. Experiments conducted on the real-world dataset verify the effectiveness of our method. Furthermore, the reasoning chain generated during the analysis process can increase interpretability and provide the possibility for practical applications.
Learning Interpretable Legal Case Retrieval via Knowledge-Guided Case Reformulation
2024
Legal case retrieval for sourcing similar cases is critical in upholding judicial fairness. Different from general web search, legal case retrieval involves processing lengthy, complex, and highly specialized legal documents. Existing methods in this domain often overlook the incorporation of legal expert knowledge, which is crucial for accurately understanding and modeling legal cases, leading to unsatisfactory retrieval performance. This paper introduces KELLER, a legal knowledge-guided case reformulation approach based on large language models (LLMs) for effective and interpretable legal case retrieval. By incorporating professional legal knowledge about crimes and law articles, we enable large language models to accurately reformulate the original legal case into concise sub-facts of crimes, which contain the essential information of the case. Extensive experiments on two legal case retrieval benchmarks demonstrate the superior retrieval performance and robustness of KELLER over existing methods on complex legal case queries.
LawThinker: A Deep Research Legal Agent in Dynamic Environments
2026
Legal reasoning requires not only correct outcomes but also procedurally compliant reasoning processes. However, existing methods lack mechanisms to verify intermediate reasoning steps, allowing errors such as inapplicable statute citations to propagate undetected through the reasoning chain. To address this, we propose LawThinker, an autonomous legal research agent that adopts an Explore-Verify-Memorize strategy for dynamic judicial environments. The core idea is to enforce verification as an atomic operation after every knowledge exploration step. A DeepVerifier module examines each retrieval result along three dimensions of knowledge accuracy, fact-law relevance, and procedural compliance, with a memory module for cross-round knowledge reuse in long-horizon tasks. Experiments on the dynamic benchmark J1-EVAL show that LawThinker achieves a 24% improvement over direct reasoning and an 11% gain over workflow-based methods, with particularly strong improvements on process-oriented metrics. Evaluations on three static benchmarks further confirm its generalization capability. The code is available at https://github.com/yxy-919/LawThinker-agent.
Enabling Discriminative Reasoning in LLMs for Legal Judgment Prediction
2024
Legal judgment prediction is essential for enhancing judicial efficiency. In this work, we identify that existing large language models (LLMs) underperform in this domain due to challenges in understanding case complexities and distinguishing between similar charges. To adapt LLMs for effective legal judgment prediction, we introduce the Ask-Discriminate-Predict (ADAPT) reasoning framework inspired by human judicial reasoning. ADAPT involves decomposing case facts, discriminating among potential charges, and predicting the final judgment. We further enhance LLMs through fine-tuning with multi-task synthetic trajectories to improve legal judgment prediction accuracy and efficiency under our ADAPT framework. Extensive experiments conducted on two widely-used datasets demonstrate the superior performance of our framework in legal judgment prediction, particularly when dealing with complex and confusing charges.
A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression
2024
In this work, we provide a thorough investigation of gist-based context compression methods to improve long-context processing in large language models. We focus on two key questions: (1) How well can these methods replace full attention models? and (2) What potential failure patterns arise due to compression? Through extensive experiments, we show that while gist-based compression can achieve near-lossless performance on tasks like retrieval-augmented generation and long-document QA, it faces challenges in tasks like synthetic recall. Furthermore, we identify three key failure patterns: lost by the boundary, lost if surprise, and lost along the way. To mitigate these issues, we propose two effective strategies: fine-grained autoencoding, which enhances the reconstruction of original token information, and segment-wise token importance estimation, which adjusts optimization based on token dependencies. Our work provides valuable insights into the understanding of gist token-based context compression and offers practical strategies for improving compression capabilities.
UniGist: Towards General and Hardware-aligned Sequence-level Long Context Compression
by Fang, Tianqing; Zhang, Hongming; Zhang, Zhisong
in Compressive strength; Context; Large language models
2025
Large language models are increasingly capable of handling long-context inputs, but the memory overhead of key-value (KV) cache remains a major bottleneck for general-purpose deployment. While various compression strategies have been explored, sequence-level compression, which drops the full KV caches for certain tokens, is particularly challenging as it can lead to the loss of important contextual information. To address this, we introduce UniGist, a sequence-level long-context compression framework that efficiently preserves context information by replacing raw tokens with special compression tokens (gists) in a fine-grained manner. We adopt a chunk-free training strategy and design an efficient kernel with a gist shift trick, enabling optimized GPU training. Our scheme also supports flexible inference by allowing the actual removal of compressed tokens, resulting in real-time memory savings. Experiments across multiple long-context tasks demonstrate that UniGist significantly improves compression quality, with especially strong performance in detail-recalling tasks and long-range dependency modeling.