87 result(s) for "Cheng, Yuan-Ren"
Acid-sensing ion channels: dual function proteins for chemo-sensing and mechano-sensing
Background: Acid-sensing ion channels (ASICs) are a group of amiloride-sensitive, ligand-gated ion channels belonging to the degenerin/epithelial sodium channel family. ASICs are predominantly expressed in both the peripheral and central nervous systems and have been characterized as potent proton sensors that detect extracellular acidification in the periphery and brain. Main body: Here we review recent studies focusing on the physiological roles of ASICs in the nervous system. As the major acid-sensing membrane proteins in the nervous system, ASICs detect the tissue acidosis that occurs in tissue injury, inflammation, ischemia, stroke, and tumors, as well as in fatiguing muscle, to activate pain-sensing nerves in the periphery and transmit pain signals to the brain. Arachidonic acid and lysophosphatidylcholine have been identified as endogenous non-proton ligands that activate ASICs in a neutral-pH environment. ASICs are also involved in tether-mode mechanotransduction, in which the extracellular matrix and cytoplasmic cytoskeleton act as a gating spring that tethers the mechanically activated ion channels and thus transmits the stimulus force to the channels. Accordingly, accumulating evidence has shown that ASICs play important roles in the mechanotransduction of proprioceptors, mechanoreceptors, and nociceptors to monitor the homoeostatic status of muscle contraction, blood volume, and blood pressure, as well as pain stimuli. Conclusion: Together, ASICs are dual-function proteins for both chemosensation and mechanosensation, involved in monitoring physiological homoeostasis and pathological signals.
Evidence for the involvement of ASIC3 in sensory mechanotransduction in proprioceptors
Acid-sensing ion channel 3 (ASIC3) is involved in acid nociception, but its possible role in neurosensory mechanotransduction is disputed. We report here the generation of Asic3-knockout/eGFPf-knockin mice and subsequent characterization of the heterogeneous expression of ASIC3 in the dorsal root ganglion (DRG). ASIC3 is expressed in parvalbumin-positive (Pv+) proprioceptor axons innervating muscle spindles. We further generate a floxed allele of Asic3 (Asic3f/f) and probe the role of ASIC3 in mechanotransduction in neurite-bearing Pv+ DRG neurons through localized elastic matrix movements and electrophysiology. Targeted knockout of Asic3 disrupts spindle afferent sensitivity to dynamic stimuli and impairs mechanotransduction in Pv+ DRG neurons in response to substrate deformation-induced neurite stretching, but not to direct neurite indentation. In behavioural tasks, global knockout (Asic3−/−) and Pv-Cre::Asic3f/f mice produce similar deficits in grid and balance beam walking tasks. We conclude that, at least in mouse, ASIC3 is a molecular determinant contributing to dynamic mechanosensitivity in proprioceptors. Acid-sensing ion channel 3 (ASIC3) is known to play a role in nociception, but its role in low-threshold neurosensory mechanotransduction is unclear. Here, the authors target ASIC3 expression in dorsal root ganglion parvalbumin-positive neurons and find that ASIC3 contributes to dynamic proprioception responses.
Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database—HF_Lung_V1
A reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently required in many clinical scenarios—such as in monitoring disease progression of coronavirus disease 2019—to replace conventional auscultation with a handheld stethoscope. However, a robust computerized respiratory sound analysis algorithm for breath phase detection and adventitious sound detection at the recording level has not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchus labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests using long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also conducted a performance comparison between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results revealed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed, in terms of F1 scores and areas under the receiver operating characteristic curves, the LSTM-based models in most of the defined tasks. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, the addition of a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
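The LSTM- and GRU-based models benchmarked above all build on gated recurrent cells. A minimal NumPy sketch of a single GRU time step (toy dimensions and random weights for illustration only, not the paper's trained models) shows the gating mechanism these variants share:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate hidden state
    return (1.0 - z) * h + z * h_cand              # interpolated new state

# Toy dimensions: a 4-dim input frame (e.g. one spectrogram column), 3-dim hidden state
rng = np.random.default_rng(0)
n_in, n_h = 4, 3
params = [rng.standard_normal(s) * 0.1
          for s in [(n_h, n_in), (n_h, n_h), (n_h,)] * 3]  # Wz,Uz,bz,Wr,Ur,br,Wh,Uh,bh
h = np.zeros(n_h)
for t in range(5):                                  # run over a 5-frame sequence
    h = gru_step(rng.standard_normal(n_in), h, *params)
print(h.shape)  # (3,)
```

A bidirectional variant runs a second such recurrence over the reversed sequence and concatenates the two hidden states per frame, which is why the bidirectional models above see both past and future context.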
Probing the Effect of Acidosis on Tether-Mode Mechanotransduction of Proprioceptors
Proprioceptors are low-threshold mechanoreceptors involved in perceiving body position and strain bearing. However, the physiological response of proprioceptors to fatigue- and muscle-acidosis-related disturbances remains unknown. Here, we employed whole-cell patch-clamp recordings to probe the effect of mild acidosis on the mechanosensitivity of the proprioceptive neurons of dorsal root ganglia (DRG) in mice. We cultured neurite-bearing parvalbumin-positive (Pv+) DRG neurons on a laminin-coated elastic substrate and examined mechanically activated currents induced through substrate deformation-driven neurite stretch (SDNS). The SDNS-induced inward currents (ISDNS) were indentation depth-dependent and significantly inhibited by mild acidification (pH 7.2~6.8). The acid-inhibiting effect occurred in neurons with an ISDNS sensitive to APETx2 (an ASIC3-selective antagonist) inhibition, but not in those with an ISNDS resistant to APETx2. Detailed subgroup analyses revealed ISDNS was expressed in 59% (25/42) of Parvalbumin-positive (Pv+) DRG neurons, 90% of which were inhibited by APETx2. In contrast, an acid (pH 6.8)-induced current (IAcid) was expressed in 76% (32/42) of Pv+ DRG neurons, 59% (21/32) of which were inhibited by APETx2. Together, ASIC3-containing channels are highly heterogenous and differentially contribute to the ISNDS and IAcid among Pv+ proprioceptors. In conclusion, our findings highlight the importance of ASIC3-containing ion channels in the physiological response of proprioceptors to acidic environments.
A Progressively Expanded Database for Automated Lung Sound Analysis: An Update
We previously established an open-access lung sound database, HF_Lung_V1, and developed deep learning models for inhalation, exhalation, continuous adventitious sound (CAS), and discontinuous adventitious sound (DAS) detection. The amount of data used for training contributes to model accuracy. In this study, we collected larger quantities of data to further improve model performance and explored issues of noisy labels and overlapping sounds. HF_Lung_V1 was expanded to HF_Lung_V2 with a 1.43× increase in the number of audio files. Convolutional neural network–bidirectional gated recurrent unit network models were trained separately using the HF_Lung_V1 (V1_Train) and HF_Lung_V2 (V2_Train) training sets. These were tested using the HF_Lung_V1 (V1_Test) and HF_Lung_V2 (V2_Test) test sets, respectively. Segment and event detection performance was evaluated. Label quality was assessed. Overlap ratios were computed between inhalation, exhalation, CAS, and DAS labels. The model trained using V2_Train exhibited improved performance in inhalation, exhalation, CAS, and DAS detection on both V1_Test and V2_Test. Poor CAS detection was attributed to the quality of CAS labels. DAS detection was strongly influenced by the overlapping of DAS with inhalation and exhalation. In conclusion, collecting greater quantities of lung sound data is vital for developing more accurate lung sound analysis models.
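The overlap analysis described above reduces to comparing labeled time intervals. A small Python sketch (hypothetical interval data, not drawn from HF_Lung_V2) shows one way such an overlap ratio can be computed:

```python
def overlap_ratio(labels_a, labels_b):
    """Fraction of the total duration of labels_a that overlaps labels in labels_b.
    Each label is an (onset, offset) pair in seconds; assumes labels within each
    list are non-overlapping, so pairwise intersections can simply be summed."""
    total = sum(end - start for start, end in labels_a)
    if total == 0:
        return 0.0
    overlap = 0.0
    for a0, a1 in labels_a:
        for b0, b1 in labels_b:
            overlap += max(0.0, min(a1, b1) - max(a0, b0))  # intersection length
    return overlap / total

das = [(0.0, 1.0), (4.0, 5.0)]      # e.g. crackle (DAS) labels
inhale = [(0.5, 2.0)]               # e.g. inhalation labels
print(overlap_ratio(das, inhale))   # 0.25 (0.5 s of overlap out of 2.0 s of DAS)
```

A high ratio of DAS labels overlapping inhalation or exhalation labels is exactly the situation the study identifies as degrading DAS detection.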
A Survival Metadata Analysis Responsive Tool (SMART) for web-based analysis of patient survival and risk
Health information systems contain extensive amounts of patient data, including information relevant to both public health and individuals' medical histories. In clinical research, predicting patient survival rates and identifying prognostic factors are major challenges. To alleviate these difficulties, Metadata Utilities was developed to help researchers manage column definitions and perform tasks such as importing, querying, and generating Metadata files. These utilities also include an automatic update mechanism to ensure consistency between the data and the parameters of the batch produced in the conversion procedure. The Survival Metadata Analysis Responsive Tool (SMART) provides a comprehensive set of easy-to-understand statistical tests, including support for analyzing nominal, ordinal, interval, and ratio variables as means, standard deviations, maximum values, minimum values, and percentages. In this article, the development of a raw data source and transfer mechanism, Extract-Transform-Load (ETL), is described for data cleansing, extraction, transformation, and loading. We also built a handy method for data presentation, which can be customized to the trial design. As demonstrated here, SMART is useful for risk-adjusted baseline cohorts and randomized controlled trials.
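Web-based survival analysis of the kind described typically builds on the Kaplan-Meier product-limit estimator. A minimal pure-Python sketch (toy cohort for illustration; this is the standard estimator, not SMART's actual implementation) shows the calculation:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times:  observed follow-up times; events: 1 = event observed, 0 = censored.
    Assumes untied event times for simplicity."""
    data = sorted(zip(times, events))
    n = len(data)
    s = 1.0                                # running survival probability
    curve = []
    for i, (t, e) in enumerate(data):
        at_risk = n - i                    # subjects still under observation at t
        if e:                              # an event at time t
            s *= (at_risk - 1) / at_risk   # multiply by conditional survival
            curve.append((t, s))
        # censored subjects leave the risk set without a drop in the curve
    return curve

# Toy cohort: follow-up times in months, 0 marks a censored observation
print(kaplan_meier([3, 5, 7, 8, 10], [1, 1, 0, 1, 0]))
```

The survival probability drops only at observed event times (here 3, 5, and 8 months), while censored subjects still shrink the risk set; this is the curve a tool like SMART would render per cohort.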
Probing localized neural mechanotransduction through surface-modified elastomeric matrices and electrophysiology
Mechanotransduction of sensory neurons is of great interest to the scientific community, especially in areas such as pain, neurobiology, cardiovascular homeostasis and mechanobiology. We describe a method to investigate stretch-activated mechanotransduction in sensory nerves through subcellular stimulation. The method imposes localized mechanical stimulation through indentation of an elastomeric substrate and combines this mechanical stimulation with whole-cell patch clamp recording of the electrical response to single-nerve stretching. One significant advantage here is that the neurites are stretched with limited physical contact beyond their attachment to the polymer. When we imposed specific mechanical stimulation through the substrate, the stretched neurite fired and an action potential response was recorded. In addition, complementary protocols to control the molecules at the cell–substrate interface are presented. These techniques provide an opportunity to probe neurosensory mechanotransduction with a defined substrate, whose physical and molecular context can be modified to mimic physiologically relevant conditions. The entire process from fabrication to cellular recording takes 5 to 6 d.
Author Correction: A Survival Metadata Analysis Responsive Tool (SMART) for web-based analysis of patient survival and risk
A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has been fixed in the paper.
A Dual-Purpose Deep Learning Model for Auscultated Lung and Tracheal Sound Analysis Based on Mixed Set Training
Many deep learning-based computerized respiratory sound analysis methods have previously been developed. However, these studies focus on either lung sounds only or tracheal sounds only. The effectiveness of using a lung sound analysis algorithm on tracheal sounds, and vice versa, has never been investigated, and whether training a respiratory sound analysis model on lung and tracheal sounds together is beneficial remains unknown. In this study, we first constructed a tracheal sound database, HF_Tracheal_V1, containing 10,448 15-s tracheal sound recordings, 21,741 inhalation labels, 15,858 exhalation labels, and 6,414 continuous adventitious sound (CAS) labels. HF_Tracheal_V1 and our previously built lung sound database, HF_Lung_V2, were either combined (mixed set), used one after the other (domain adaptation), or used alone to train convolutional neural network-bidirectional gated recurrent unit models for inhalation, exhalation, and CAS detection in lung and tracheal sounds. The results revealed that models trained on lung sounds alone performed poorly in tracheal sound analysis and vice versa. However, mixed-set training or domain adaptation improved performance for (1) inhalation and exhalation detection in lung sounds and (2) inhalation, exhalation, and CAS detection in tracheal sounds compared with the positive controls (models trained on lung sounds alone and used for lung sound analysis, and vice versa). In particular, the model trained on the mixed set had the flexibility to serve two purposes, lung and tracheal sound analysis, at the same time.