Search Results

3 result(s) for "Yu, Yinggui"
A novel missense mutation of FOXC1 in an Axenfeld–Rieger syndrome patient with a congenital atrial septal defect and sublingual cyst: a case report and literature review
Background Axenfeld–Rieger syndrome (ARS) is a rare autosomal dominant hereditary disease characterized primarily by maldevelopment of the anterior segment of both eyes, accompanied by developmental glaucoma and other congenital anomalies. The FOXC1 and PITX2 genes play important roles in the development of ARS. Case presentation The present report describes a 7-year-old boy with iris dysplasia, displaced pupils, and congenital glaucoma in both eyes. The patient also presented with a congenital atrial septal defect and a sublingual cyst. The patient’s family members show no clinical manifestations. Next-generation sequencing identified a pathogenic heterozygous missense variant in the FOXC1 gene (NM_001453:c.246C>A, p.S82R) in the patient. Sanger sequencing confirmed this result, and the mutation was not detected in the other three family members. Conclusion To the best of our knowledge, our study reveals a novel mutation in the FOXC1 gene associated with ARS.
Receiving Routing Approach for Virtually Coupled Train Sets at a Railway Station
Elaborated in several forms before being formally defined, virtually coupled train sets (VCTS) have become a promising approach to increasing capacity through markedly shorter train intervals. Because the station organization strategy remains ambiguous owing to a lack of literature, this work focuses on the receiving routing problem for VCTS. First, the existing concept of VCTS is explained: trains are virtually connected through safe and reliable communication technology, allowing short-interval collaborative operations without physical coupling equipment. Subsequently, the operating characteristics and receiving requirements are analyzed. After summarizing the factors affecting receiving operations, a mathematical model is proposed with the objectives of minimizing operation duration and maximizing effectiveness, solved by an improved genetic algorithm (GA) with an elitist and adaptive strategy. Numerical tests are carried out 250 times based on a practical station and EMU parameters. The macro-level results show that the designed objectives are achieved, with an average duration of 204.95 s and an efficiency of 91.76%. The micro-level evolution of an optimal scheme indicates that safety requirements are met while the process duration is only 35.83% of that under the original CTCS-3 mode.
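The "improved GA with an elitist and adaptive strategy" mentioned in the abstract can be sketched in general terms. This is a minimal illustration, not the paper's actual model: the bit-string encoding, the operators, and the toy OneMax objective are all assumptions introduced here. Elitism copies the best individuals unchanged into the next generation; the adaptive element is a mutation rate that decays over generations.

```python
import random

def evolve(fitness, genome_len, pop_size=20, generations=50, elite=2, seed=0):
    """Minimal GA sketch: elitist survival plus an adaptive mutation rate
    that decays from exploration (early) to exploitation (late)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        # Adaptive mutation probability: starts near 0.21, decays toward 0.01.
        p_mut = 0.2 * (1 - gen / generations) + 0.01
        next_pop = [ind[:] for ind in pop[:elite]]  # elitism: best survive intact
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # parents from the top half
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                  # one-point crossover
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy objective ("OneMax"): maximize the number of ones in the genome.
best = evolve(sum, genome_len=16)
```

In the paper's setting, the genome would instead encode route assignments for arriving train sets, and the fitness would combine operation duration and effectiveness.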
A Fast, Performant, Secure Distributed Training Framework For Large Language Model
Distributed (federated) LLM training is an important method for co-training domain-specific LLMs on siloed data. However, the malicious theft of model parameters and data from the server or client side has become an urgent problem. In this paper, we propose a secure distributed LLM based on model slicing. Specifically, we deploy a Trusted Execution Environment (TEE) on both the client and server sides and place the fine-tuned structure (LoRA, or the embedding of P-tuning v2) inside the TEE. Secure communication between the TEE and the general environment is then carried out through lightweight encryption. To further reduce equipment cost while improving model performance and accuracy, we propose a split fine-tuning scheme. In particular, we split the LLM by layers and place the latter layers in a server-side TEE (so the client does not need a TEE). We then combine the proposed Sparsification Parameter Fine-tuning (SPF) with the LoRA part to improve accuracy on downstream tasks. Extensive experiments show that our method guarantees accuracy while maintaining security.
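The layer-wise split described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: the fully connected layers are stand-ins for transformer blocks, and the TEE deployment and lightweight encryption on the client–server boundary are omitted. The key property shown is that only intermediate activations cross the split point, so the server-side slice never sees the raw input.

```python
def relu(x):
    return [max(0.0, v) for v in x]

def layer(weights, x):
    # Toy fully connected layer: out_i = relu(sum_j w_ij * x_j).
    return relu([sum(w * v for w, v in zip(row, x)) for row in weights])

def split_forward(layers, x, k):
    """Run layers [0, k) on the 'client' and layers [k, n) on the 'server'
    (in the paper, the server slice would execute inside a TEE)."""
    for w in layers[:k]:          # client-side front slice
        x = layer(w, x)
    activations = x               # only this crosses the client/server boundary
    for w in layers[k:]:          # server-side back slice
        activations = layer(w, activations)
    return activations

# Two toy layers; splitting at any point gives the same final output.
layers = [[[0.5, -0.2], [0.1, 0.3]],
          [[1.0, 0.0], [0.0, 1.0]]]
out = split_forward(layers, [1.0, 2.0], k=1)
```

Because the forward pass is strictly sequential, the split point `k` only changes where the computation happens, never the result, which is what makes layer slicing a transparent deployment choice.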