Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
3 result(s) for "Yu, Yinggui"
A novel missense mutation of FOXC1 in an Axenfeld–Rieger syndrome patient with a congenital atrial septal defect and sublingual cyst: a case report and literature review
by Tang, Min; Yu, Yinggui; Xu, Manhua
in Abnormalities; Anterior Eye Segment - abnormalities; Atrial septal defects
2021
Background
Axenfeld–Rieger syndrome (ARS) is a rare autosomal dominant hereditary disease characterized primarily by maldevelopment of the anterior segment of both eyes, accompanied by developmental glaucoma and other congenital anomalies. The FOXC1 and PITX2 genes play important roles in the development of ARS.
Case presentation
The present report describes a 7-year-old boy with iris dysplasia, displaced pupils, and congenital glaucoma in both eyes. The patient also presented with a congenital atrial septal defect and a sublingual cyst. The patient's family members have no clinical manifestations. Next-generation sequencing identified a pathogenic heterozygous missense variant in the FOXC1 gene (NM_001453: c.246C>A, p.S82R) in the patient. Sanger sequencing confirmed this result, and the mutation was not detected in the other three family members.
Conclusion
To the best of our knowledge, the results of our study reveal a novel mutation in the FOXC1 gene associated with ARS.
Journal Article
Receiving Routing Approach for Virtually Coupled Train Sets at a Railway Station
by Zhao, Minghui; Zhang, Yinggui; Xu, Qianying
in Communication; Cooperation; elitist and adaptive strategy
2023
Elaborated in several forms before being formally defined, virtually coupled train sets (VCTS) have become a focus for capacity increase, as they allow markedly shorter train intervals. Because the station organization strategy remains ambiguous owing to a lack of literature, the receiving routing problem for VCTS is studied in particular. First, the existing concept of VCTS is explained: trains are virtually connected through safe and reliable communication technology, allowing short-interval collaborative operation without physical coupling equipment. Subsequently, the operating characteristics and receiving requirements are analyzed. After summarizing the factors affecting receiving operations, a mathematical model is proposed with the objectives of minimizing operation duration and maximizing effectiveness; it is solved by an improved genetic algorithm (GA) with an elitist and adaptive strategy. Numerical tests are carried out 250 times based on a practical station and EMU parameters. The macro-level results show that the designed objectives are effectively pursued, with an average duration of 204.95 s and an efficiency of 91.76%. The micro-level evolution of an optimal scheme indicates that safety requirements are met while the process duration is only 35.83% of that under the original CTCS-3 mode.
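The abstract above mentions a GA with an elitist and adaptive strategy. As an illustration only (the function names, parameters, and toy sphere objective below are assumptions, not the authors' routing model), an elitist GA whose mutation rate adapts to population diversity can be sketched as:

```python
import random

def evolve(fitness, dim, pop_size=30, generations=100, elite_k=2, seed=0):
    """Minimal elitist + adaptive GA sketch (minimization).

    Elitism: the best elite_k individuals survive each generation unchanged.
    Adaptive strategy: the mutation rate shrinks as the population converges.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elites = scored[:elite_k]                        # elitism: carry best forward
        spread = fitness(scored[-1]) - fitness(scored[0])
        mut_rate = 0.05 + 0.25 * min(1.0, spread)        # adaptive: mutate more when diverse
        children = list(elites)
        while len(children) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)  # parents from the fitter half
            cut = rng.randrange(1, dim)                   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + rng.gauss(0, 0.3) if rng.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy objective: sphere function, optimum at the origin.
best = evolve(lambda x: sum(g * g for g in x), dim=3)
```

Because the elites are never mutated, the best fitness is non-increasing across generations; the adaptive rate trades exploration early for refinement late.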
Journal Article
A Fast, Performant, Secure Distributed Training Framework For Large Language Model
2024
The distributed (federated) LLM is an important method for co-training domain-specific LLMs on siloed data. However, malicious theft of model parameters and data from the server or client side has become an urgent problem. In this paper, we propose a secure distributed LLM based on model slicing. We deploy a Trusted Execution Environment (TEE) on both the client and server sides and place the fine-tuned structure (LoRA or the embedding of P-tuning v2) inside the TEE. Secure communication between the TEE and general environments is then carried out through lightweight encryption. To further reduce equipment cost while increasing model performance and accuracy, we propose a split fine-tuning scheme. In particular, we split the LLM by layers and place the latter layers in a server-side TEE, so the client does not need a TEE. We then combine the proposed Sparsification Parameter Fine-tuning (SPF) with the LoRA part to improve the accuracy of the downstream task. Extensive experiments show that our method guarantees accuracy while maintaining security.
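The layer-splitting idea in the abstract can be illustrated with a toy forward pass: the client runs the first layers locally and ships only the intermediate activation to the server, which runs the remaining layers (in the paper, inside a TEE). The scalar "layers" and names below are illustrative assumptions, not the authors' architecture:

```python
# Toy sketch of split execution: early layers on the client, latter layers
# on the server. Only the intermediate activation crosses the boundary.
def make_layer(w, b):
    def layer(xs):
        return [max(0.0, w * x + b) for x in xs]  # toy affine + ReLU "layer"
    return layer

layers = [make_layer(1.0, 0.1), make_layer(0.5, 0.0),
          make_layer(2.0, -0.1), make_layer(1.5, 0.2)]
split = 2  # first 2 layers client-side, last 2 server-side

def client_forward(xs):
    for f in layers[:split]:   # early layers stay on the client
        xs = f(xs)
    return xs                  # only this activation is sent to the server

def server_forward(xs):
    for f in layers[split:]:   # latter layers run server-side (TEE in the paper)
        xs = f(xs)
    return xs

out = server_forward(client_forward([1.0, -2.0, 0.5]))
```

The split pipeline computes exactly what the unsplit stack of layers would, which is what makes the partitioning transparent to the model while keeping the sensitive latter layers off the client.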