Search Results

1,870 result(s) for "Q values"
Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach
The false discovery rate (FDR) is a multiple hypothesis testing quantity that describes the expected proportion of false positive results among all rejected null hypotheses. Benjamini and Hochberg introduced this quantity and proved that a particular step-up p-value method controls the FDR. Storey introduced a point estimate of the FDR for fixed significance regions. The former approach conservatively controls the FDR at a fixed predetermined level, and the latter provides a conservatively biased estimate of the FDR for a fixed predetermined significance region. In this work, we show in both finite sample and asymptotic settings that the goals of the two approaches are essentially equivalent. In particular, the FDR point estimates can be used to define valid FDR controlling procedures. In the asymptotic setting, we also show that the point estimates can be used to estimate the FDR conservatively over all significance regions simultaneously, which is equivalent to controlling the FDR at all levels simultaneously. The main tool that we use is to translate existing FDR methods into procedures involving empirical processes. This simplifies finite sample proofs, provides a framework for asymptotic results and proves that these procedures are valid even under certain forms of dependence.
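The step-up method referenced in this abstract is the Benjamini-Hochberg procedure. A minimal Python sketch of that generic procedure (an illustration, not code from the paper) is:

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    rejected null hypotheses at FDR level alpha (independent tests)."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices of p-values, ascending
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds             # step-up comparison p_(k) <= alpha*k/m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest k meeting the criterion
        reject[order[: k + 1]] = True          # reject the k smallest p-values
    return reject
```

The equivalence argued in the abstract is that thresholding FDR point estimates at a target level recovers a valid FDR-controlling rule of this kind.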
A direct approach to false discovery rates
Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate (FDR) traditionally involves intricate sequential p-value rejection methods based on the observed data. Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach: we fix the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate (pFDR) and FDR, and provide evidence for its benefits. It is shown that pFDR is probably the quantity of interest over FDR. Also discussed is the calculation of the q-value, the pFDR analogue of the p-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini-Hochberg FDR method.
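For illustration, a minimal q-value computation in the spirit described here (fix the rejection region, then estimate its error rate) can be sketched as follows; the single-λ estimate of the null proportion and the variable names are simplifying assumptions, not the paper's exact estimator.

```python
import numpy as np

def qvalues(pvalues, lam=0.5):
    """Minimal q-value sketch: estimate pi0 (the null proportion) from
    p-values above lam, then convert each p-value into a conservative
    FDR estimate for the rejection region [0, p]."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))   # estimated proportion of true nulls
    order = np.argsort(p)
    ranks = np.arange(1, m + 1)                      # number of rejections at each cutoff
    fdr = pi0 * m * p[order] / ranks                 # estimated FDR at each p-value cutoff
    # q-value = smallest estimated FDR over all cutoffs at least as large
    q_sorted = np.minimum.accumulate(fdr[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q
```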
Lateral Variations Across the Southern San Andreas Fault Zone Revealed From Analysis of Traffic Signals at a Dense Seismic Array
We image the shallow seismic structure across the Southern San Andreas Fault (SSAF) using signals from freight trains and trucks recorded by a dense nodal array, with a linear component perpendicular to SSAF and two 2D subarrays centered on the Banning Fault and Mission Creek Fault (MCF). Particle motion analysis in the frequency band 2–5 Hz shows that the examined traffic sources can be approximated as moving single‐ or multi‐point sources that primarily induce Rayleigh waves. Using several techniques, we resolve strong lateral variations of Rayleigh wave velocities and Q‐values across the SSAF, including 35% velocity reduction across MCF toward the northeast and strong attenuation around the two fault strands. We further resolve 10% mass density reduction and 45% shear modulus decrease across the MCF. These findings suggest that the MCF is currently the main strand of the SSAF in the area, with important implications for seismic hazard assessments.
Plain Language Summary: Imaging the internal structure of fault zones is essential for understanding earthquake properties and processes. Here we utilize seismic data generated by trains and trucks in the Coachella valley and recorded by a dense seismic array to image the subsurface structure of two main strands of the Southern San Andreas Fault (SSAF). Several types of analyses allow us to resolve seismic velocities, attenuation coefficients, and mass density across the entire San Andreas Fault zone. The results show a clear contrast in physical properties across the Mission Creek strand of the SSAF, highlighting the presence of a bimaterial fault interface and suggesting that it is the main active strand of SSAF. The research opens up possibilities for using common rail and road traffic signals to derive high resolution imaging results of subsurface seismic properties at other locations.
Key Points: We detect frequent seismic signals from rail and road traffic in a dense array across the southern San Andreas fault zone. We use the traffic signals to image shallow structural properties across the Banning and Mission Creek fault strands. The resolved velocity and density contrasts across the Mission Creek fault suggest it is the main active strand of the Southern San Andreas Fault in the area.
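The resolved quantities (shear-wave velocity, mass density, shear modulus) are tied together by the standard elastic relation below; this is textbook background rather than a formula quoted from the study.

```latex
\[
  v_s = \sqrt{\frac{\mu}{\rho}}
  \qquad\Longleftrightarrow\qquad
  \mu = \rho\, v_s^{2}
\]
```

A velocity contrast across the fault, combined with a density contrast, therefore translates into a proportionally larger contrast in shear modulus.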
A statistical method for the conservative adjustment of false discovery rate (q-value)
Background: The q-value is a widely used statistical method for estimating the false discovery rate (FDR), which is a conventional significance measure in the analysis of genome-wide expression data. The q-value is a random variable and it may underestimate the FDR in practice. An underestimated FDR can lead to unexpected false discoveries in the follow-up validation experiments. This issue has not been well addressed in the literature, especially in the situation when a permutation procedure is necessary for p-value calculation.
Results: We proposed a statistical method for the conservative adjustment of the q-value. In practice, it is usually necessary to calculate the p-value by a permutation procedure. This was also considered in our adjustment method. We used simulation data as well as experimental microarray or sequencing data to illustrate the usefulness of our method.
Conclusions: The conservativeness of our approach has been mathematically confirmed in this study. We have demonstrated the importance of conservative adjustment of the q-value, particularly in the situation that the proportion of differentially expressed genes is small or the overall differential expression signal is weak.
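The permutation step the abstract refers to can be illustrated with a generic two-sample sketch (not the authors' implementation); the +1 correction keeps the estimated p-value strictly positive, which matters when it later feeds a q-value calculation.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=10_000, rng=None):
    """Two-sample permutation p-value for a difference in means.
    Counts how often a permuted statistic is at least as extreme as
    the observed one; +1 in numerator and denominator avoids p = 0."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = abs(perm[: len(x)].mean() - perm[len(x):].mean())
        count += stat >= observed
    return (count + 1) / (n_perm + 1)
```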
ORAD: a new framework of offline Reinforcement Learning with Q-value regularization
Offline Reinforcement Learning (RL) defines a framework for learning from a previously collected static buffer. However, offline RL is prone to approximation errors caused by out-of-distribution (OOD) data and is particularly inefficient for pixel-based learning tasks compared with state-based input control methods. Several pioneering efforts have been made to solve this problem: some use pessimistic Q-value approximation for unseen observations, while others train a model of the environment from the previously collected data and use it to learn policies. However, these methods require accurate and time-consuming estimation of the Q-values or the environment models. Based on this observation, we present offline RL methods with augmented data (ORAD), a handy but non-trivial extension to offline RL algorithms. We show that simple data augmentations, e.g. random translation and random crop, significantly elevate the performance of state-of-the-art offline RL algorithms. In addition, we find that regularization of the Q-values can also enhance performance. Extensive experiments on pixel-based control in Atari demonstrate the superiority of ORAD over SOTA offline RL methods in terms of both performance and data efficiency, and reveal that ORAD is particularly effective for pixel-based control.
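As a rough illustration of the two ingredients named here (image augmentation plus a penalty on Q-values), the sketch below pairs a random-crop augmentation with a TD loss and a simple L2 penalty on the predicted Q-values. The penalty form, network interfaces, and hyperparameters are assumptions for illustration only, not the ORAD implementation.

```python
import torch
import torch.nn.functional as F

def random_crop(obs, pad=4):
    """Random-crop augmentation for image observations (B, C, H, W):
    pad by replication, then crop back to the original size at a random offset."""
    b, c, h, w = obs.shape
    padded = F.pad(obs, (pad, pad, pad, pad), mode="replicate")
    top = torch.randint(0, 2 * pad + 1, (1,)).item()
    left = torch.randint(0, 2 * pad + 1, (1,)).item()
    return padded[:, :, top:top + h, left:left + w]

def critic_loss(q_net, target_q, batch, gamma=0.99, reg_weight=1e-2):
    """TD loss on augmented observations plus a simple Q-value regularizer
    (an L2 penalty on predicted Q-values); the penalty form is an assumption.
    batch["action"] is a LongTensor of shape (B, 1); reward/done are (B, 1)."""
    obs = random_crop(batch["obs"])
    next_obs = random_crop(batch["next_obs"])
    q = q_net(obs).gather(1, batch["action"].long())            # Q(s, a)
    with torch.no_grad():
        next_q = target_q(next_obs).max(1, keepdim=True).values
        target = batch["reward"] + gamma * (1 - batch["done"]) * next_q
    td_loss = F.mse_loss(q, target)
    return td_loss + reg_weight * q.pow(2).mean()                # keep Q-values small
```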
The optimal discovery procedure: a new approach to simultaneous significance testing
The Neyman-Pearson lemma provides a simple procedure for optimally testing a single hypothesis when the null and alternative distributions are known. This result has played a major role in the development of significance testing strategies that are used in practice. Most of the work extending single-testing strategies to multiple tests has focused on formulating and estimating new types of significance measures, such as the false discovery rate. These methods tend to be based on p-values that are calculated from each test individually, ignoring information from the other tests. I show here that one can improve the overall performance of multiple significance tests by borrowing information across all the tests when assessing the relative significance of each one, rather than calculating p-values for each test individually. The 'optimal discovery procedure' is introduced, which shows how to maximize the number of expected true positive results for each fixed number of expected false positive results. The optimality that is achieved by this procedure is shown to be closely related to optimality in terms of the false discovery rate. The optimal discovery procedure motivates a new approach to testing multiple hypotheses, especially when the tests are related. As a simple example, a new simultaneous procedure for testing several normal means is defined; this is surprisingly demonstrated to outperform the optimal single-test procedure, showing that a method which is optimal for single tests may no longer be optimal for multiple tests. Connections to other concepts in statistics are discussed, including Stein's paradox, shrinkage estimation and the Bayesian approach to hypothesis testing.
Assessing the genetic impact of Enterococcus faecalis infection on gastric cell line MKN74
Purpose: Enterococcus faecalis (E. faecalis) is an important commensal member of the human gastrointestinal microbiota. Many studies have shown that E. faecalis infection rates increase significantly in gastric cancer. It has been scientifically established that some infections develop during the progression of cancer, but it is still unclear whether this infection is beneficial (reduction in metastasis) or harmful (increase in proliferation, invasion, and stem cell-like phenotype) to the host. These opposing data can contribute significantly to the understanding of cancer progression when analyzed in detail.
Methods: The gene expression data were retrieved from ArrayExpress (E-MEXP-3496). Variance, t-test and linear regression analysis, hierarchical clustering, network, and pathway analysis were performed.
Results: In this study, we identified altered genes involved in E. faecalis infection of the gastric cell line MKN74 and the relevant pathways, to understand whether the infection slows down cancer progression. Twelve genes corresponding to 15 probe sets were downregulated following live infection of gastric cancer cells with E. faecalis. We identified a network between these genes and the pathways they belong to. Pathway analysis showed that these genes are mostly associated with cancer cell proliferation.
Conclusion: NDC80, NCAPG, CENPA, KIF23, BUB1B, BUB1, CASC5, KIF2C, CENPF, SPC25, SMC4, and KIF20A genes were found to be associated with gastric cancer pathogenesis. Almost all of these genes are effective in the proliferation of cancer cells, especially during the infection process. Therefore, it is hypothesized that downregulation of these genes may affect gastric cancer pathogenesis by reducing cell proliferation, and it is predicted that E. faecalis infection may be an important factor in gastric cancer.
Integration of Radioactive Material with Microcalorimeter Detectors
Microcalorimeter detectors with embedded radioactive material offer many possibilities for new types of measurements and applications. We will discuss the designs and methods that we are developing for precise deposition of radioactive material and its encapsulation in the absorber of transition-edge sensor (TES) microcalorimeter detectors for two specific applications. The first application is total nuclear reaction energy (Q) spectroscopy for nuclear forensics measurements of trace actinide samples, where the goal is determination of ratios of isotopes with Q values in the range of 5–7 MeV. Simplified, rapid sample preparation and detector assembly are necessary for practical measurements, while maintaining good energy resolution. The second application is electron capture spectroscopy of isotopes with low Q values, such as ¹⁶³Ho, for measurement of neutrino mass. Detectors for electron capture spectroscopy are designed for measuring energies up to approximately 6 keV. Their smaller heat capacity and physical size present unique challenges. Both applications require precise deposition of radioactive material and encapsulation in an absorber with optimized thermal properties and coupling to the TES. We have made detectors for both applications with a variety of designs and assembly methods, and will present their development.
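For context, the Q value being measured is the standard mass-difference energy of a reaction or decay; the definition below is general background, not taken from the paper.

```latex
\[
  Q = \Bigl(\sum_{i\,\in\,\text{initial}} m_i \;-\; \sum_{f\,\in\,\text{final}} m_f\Bigr)\, c^{2}
\]
```

Actinide alpha decays have Q values of a few MeV, consistent with the 5–7 MeV range quoted above, while electron-capture Q values such as that of ¹⁶³Ho are in the keV range, which is why the two detector designs differ so strongly in heat capacity and size.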
An Experimental Study on Fabricating an Inverted Mesa-Type Quartz Crystal Resonator Using a Cheap Wet Etching Process
In this study, a miniaturized high fundamental frequency quartz crystal microbalance (QCM) is fabricated for sensor applications using a wet etching technique. The vibration area is reduced in the fabrication of the high frequency QCM with an inverted mesa structure. To reduce the complexity of the side wall profile that results from anisotropic quartz etching, a rectangular vibration area is used instead of the conventional circular structure. QCMs with high Q values exceeding 25,000 at 47 MHz, 27,000 at 60 MHz, 24,000 at 73 MHz and 25,000 at 84 MHz are fabricated on 4 × 4 mm² chips with small vibration areas of 1.2 × 1.4 mm². A PMMA-based flow cell is designed and manufactured to characterize the behavior of the fabricated QCM chip in a liquid. Q values as high as 1,006 at 47 MHz, 904 at 62 MHz, 867 at 71 MHz and 747 at 84 MHz are obtained when one side of the chip is exposed to pure water. These results show that fabricated QCM chips can be used for bio- and chemical sensor applications in liquids.
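For reference, the quality factor and the Sauerbrey mass-loading relation that govern QCM sensing can be summarized as follows; these are standard textbook expressions, not formulas from the paper.

```latex
\[
  Q = \frac{f_0}{\Delta f_{\mathrm{BW}}},
  \qquad
  \Delta f = -\,\frac{2 f_0^{2}}{A\sqrt{\rho_q\,\mu_q}}\;\Delta m
\]
```

Here f₀ is the resonance frequency, Δf_BW the half-power bandwidth, A the active electrode area, ρ_q and μ_q the density and shear stiffness of quartz, and Δm the attached mass; the f₀² factor is why the high fundamental frequency inverted-mesa devices described above gain mass sensitivity.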
Role of neutron rearrangement channels in sub-barrier fusion
Different factors influencing the sub-barrier fusion enhancement owing to neutron rearrangement with positive Q values are studied. It was found that, contrary to the previous opinion, the presence of positive Q values is necessary but not sufficient to observe enhancement of the sub-barrier fusion. “Rigidity” of colliding nuclei with respect to collective excitations plays a crucial role for the sub-barrier fusion enhancement due to neutron rearrangement. Neutron binding energy has a strong impact, but only in the case of fusion of light nuclei.