29,523 result(s) for "Error rates"
Synaptic circuits and their variations within different columns in the visual system of Drosophila
Circuit diagrams of brains are generally reported only as absolute or consensus networks; such diagrams fail to convey the accuracy of connections, for which multiple circuits of the same neurons must be documented. The modular composition of the Drosophila visual system, with many identified neuron classes, is ideal for this purpose. Using EM, we identified synaptic connections in the fly's second visual relay neuropil, or medulla, among the 20 neuron classes of a so-called "core connectome": those neurons present in each of seven neighboring columns behind the compound eye. These connections identify circuits for motion. These neurons embody some of the most stereotyped circuits in one of the most miniaturized of animal brains, and the reconstructions allow us, for the first time to our knowledge, to study variation between circuits in the medulla's neighboring columns. Such variation in the number of synapses and the types of synaptic partners has previously been little addressed, because methods that visualize multiple circuits have not resolved detailed connections, while existing connectomic studies, which can resolve such connections, have not so far examined multiple reconstructions of the same circuit. Here, we address that omission by comparing the circuits common to all seven columns to assess variation in their connection strengths and the resultant rates of several distinct types of connection error. The error rates reveal that, overall, <1% of contacts are not part of a consensus circuit; we classify those contacts as errors of commission, which supplement the consensus circuit (E+), or of omission, which are missing from it (E−).
Autapses, in which the same cell is both presynaptic and postsynaptic at the same synapse, are occasionally seen; two cells in particular, Dm9 and Mi1, form ≥20-fold more autapses than other neurons do. These results delimit the accuracy of the developmental events that establish and normally maintain synaptic circuits with such precision, and thereby bear on the operation of those circuits. They also establish a precedent for the error rates that will be required in the new science of connectomics.
p-Values for High-Dimensional Regression
Assigning significance in high-dimensional regression is challenging. Most computationally efficient selection algorithms cannot guard against the inclusion of noise variables, and asymptotically valid p-values are generally not available. An exception is a recent proposal by Wasserman and Roeder that splits the data into two parts: the number of variables is reduced to a manageable size using the first half, and classical variable selection techniques are then applied to the remaining variables using the data from the second half. This yields asymptotic error control under minimal conditions, but it relies on a one-time random split of the data. Results are sensitive to this arbitrary choice, which amounts to a "p-value lottery" and makes it difficult to reproduce results. Here we show that inference across multiple random splits can be aggregated while maintaining asymptotic control over the inclusion of noise variables. We show that the resulting p-values can be used to control both the family-wise error rate and the false discovery rate. In addition, the proposed aggregation is shown to improve power while substantially reducing the number of falsely selected variables.
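A minimal sketch of the multi-split idea described in the abstract. Simple correlation screening stands in for a full variable-selection algorithm, OLS p-values use a normal approximation, and all constants (number of splits, screening size) are illustrative, not values from the paper:

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(0)

def split_pvalues(X, y, n_splits=50, gamma=0.5, k=5):
    """Multi-split p-values: screen variables on one half of the data,
    test on the other half, aggregate across random splits."""
    n, p = X.shape
    pvals = np.ones((n_splits, p))  # unscreened variables keep p = 1
    for s in range(n_splits):
        idx = rng.permutation(n)
        a, b = idx[: n // 2], idx[n // 2 :]
        # screening stand-in: keep the k variables most correlated with y
        score = np.abs(X[a].T @ (y[a] - y[a].mean()))
        keep = np.argsort(score)[-k:]
        # OLS on the held-out half, restricted to the screened variables
        Xb = X[b][:, keep]
        beta, *_ = np.linalg.lstsq(Xb, y[b], rcond=None)
        resid = y[b] - Xb @ beta
        sigma2 = resid @ resid / (len(b) - k)
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xb.T @ Xb)))
        # two-sided p-values (normal approximation), Bonferroni-adjusted
        # over the screened set as in the single-split method
        p_raw = np.array([erfc(abs(t) / np.sqrt(2.0)) for t in beta / se])
        pvals[s, keep] = np.minimum(p_raw * k, 1.0)
    # quantile aggregation: P_j = min(1, quantile_gamma(p_j) / gamma)
    return np.minimum(np.quantile(pvals, gamma, axis=0) / gamma, 1.0)

# toy data: only the first two of ten variables carry signal
n, p = 200, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)
P = split_pvalues(X, y)
print(P.round(4))
```

Because the screening set changes from split to split, aggregating by a quantile rather than relying on one split is what removes the "p-value lottery".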
Multiple secondary outcome analyses: precise interpretation is important
Analysis of multiple secondary outcomes in a clinical trial leads to an increased probability of at least one false significant result among all secondary outcomes studied. In this paper, we question the notion that if no multiplicity adjustment has been applied to multiple secondary outcome analyses in a clinical trial, they must necessarily be regarded as exploratory. Instead, we argue that if individual secondary outcome results are interpreted carefully and precisely, there is no need to downgrade the interpretation to exploratory. This is because the probability of a false significant result for each individual comparison, the per-comparison-wise error rate, does not increase with multiple testing. Strong effects on secondary outcomes should always be taken seriously and must not be dismissed purely on the basis of multiplicity concerns.
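The distinction the abstract draws can be checked with a small simulation (all parameter values illustrative): under the null, the per-comparison error rate stays at the nominal alpha no matter how many outcomes are tested, while the probability of at least one false positive grows toward 1 − (1 − alpha)^m.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, m, trials = 0.05, 10, 20000
# p-values of m independent null comparisons are uniform on [0, 1]
pvals = rng.uniform(size=(trials, m))
reject = pvals < alpha

per_comparison_rate = reject.mean()          # false positives per comparison
familywise_rate = reject.any(axis=1).mean()  # >=1 false positive per trial

print(round(per_comparison_rate, 3))  # close to alpha = 0.05
print(round(familywise_rate, 3))      # close to 1 - 0.95**10 ~ 0.40
```

The first estimate is what the authors call the per-comparison-wise error rate; only the second, familywise quantity inflates with the number of secondary outcomes.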
Comparison of Group Testing Algorithms for Case Identification in the Presence of Test Error
We derive and compare the operating characteristics of hierarchical and square array-based testing algorithms for case identification in the presence of testing error. The operating characteristics investigated include efficiency (i.e., expected number of tests per specimen) and error rates (i.e., sensitivity, specificity, positive and negative predictive values, per-family error rate, and per-comparison error rate). The methodology is illustrated by comparing different pooling algorithms for the detection of individuals recently infected with HIV in North Carolina and Malawi.
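As a concrete instance of the efficiency measure named in the abstract (expected number of tests per specimen), here is the classic two-stage (Dorfman) hierarchical scheme with an imperfect assay. This is a textbook special case, not the paper's full array-based comparison, and the sensitivity/specificity values are illustrative:

```python
def dorfman_efficiency(p, k, se=1.0, sp=1.0):
    """Expected tests per specimen for two-stage (Dorfman) pooled
    testing: one pool test per k specimens, plus k individual tests
    whenever the pool tests positive. `se`/`sp` are the assay's
    sensitivity and specificity at prevalence p."""
    q = (1.0 - p) ** k                              # pool truly negative
    pool_positive = se * (1.0 - q) + (1.0 - sp) * q # true or false positive
    return 1.0 / k + pool_positive

# Perfect assay, 1% prevalence, pools of 10: ~0.2 tests per specimen
print(round(dorfman_efficiency(0.01, 10), 3))
# A slightly imperfect assay (se = sp = 0.99) shifts the cost only a little
print(round(dorfman_efficiency(0.01, 10, se=0.99, sp=0.99), 3))
```

At low prevalence the pooled design needs roughly a fifth as many tests as individual testing, which is why operating characteristics like these drive the choice of algorithm.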
Polar code construction by estimating noise using bald hawk optimized recurrent neural network model
Polar codes are making significant progress in error-correcting coding due to their ability to reach the Shannon capacity limit of communication channels. Decoding errors are common in real communication channels with noise. The main objective of this study is to develop a recurrent neural network decoder for robust polar code construction with the Bald Hawk Optimization (RNN-based Decoder with BHO) model that can estimate the error in information bits. This research presents a practical and significant innovation by combining recurrent neural networks (RNNs) for noise estimation in polar coding with a Bald Hawk optimization approach. This synthesis of RNN-based noise estimation with Bald Hawk optimization makes the polar coding system more flexible and adaptive, allowing for more accurate noise estimation during decoding. In terms of frame errors, the Bit Error Rate (BER), binary phase-shift keying BER (BPSK-BER), and Frame Error Rate (FER) achieve lowest error values of 0.0000087, 0.01519, and 0.000182, respectively. Similarly, at an SNR of 4 dB, the BER, BPSK-BER, and FER achieve values of 0.0000073, 0.02065, and 0.000108, respectively. The results show that the proposed RNN-based decoder with the BHO model outperforms existing decoders.
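For context on the BPSK-BER figures quoted above, the standard uncoded BPSK baseline over an AWGN channel is BER = Q(sqrt(2·Eb/N0)). This is the textbook reference curve against which coded systems like the one in the abstract are compared, not the paper's own model:

```python
import math

def bpsk_ber_awgn(ebn0_db):
    """Theoretical uncoded BPSK bit error rate over AWGN:
    BER = Q(sqrt(2 * Eb/N0)), with Q(x) = 0.5 * erfc(x / sqrt(2))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

# Uncoded baseline at Eb/N0 = 4 dB, around 1e-2
print(f"{bpsk_ber_awgn(4.0):.4f}")
```

An uncoded BER near 10^-2 at 4 dB shows how much of the reported 10^-6-range performance is contributed by the polar code and its decoder.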
A generalized Dunnett test for multi-arm multi-stage clinical studies with treatment selection
We generalize the Dunnett test to derive efficacy and futility boundaries for a flexible multi-arm multi-stage clinical trial for a normally distributed endpoint with known variance. We show that the boundaries control the familywise error rate in the strong sense. The method is applicable for any number of treatment arms, number of stages and number of patients per treatment per stage. It can be used for a wide variety of boundary types or rules derived from α-spending functions. Additionally, we show how sample size can be computed under a least favourable configuration power requirement and derive formulae for expected sample sizes.
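A sketch of the single-stage building block being generalized here. Because all arms share one control, the test statistics are equicorrelated with correlation 1/2, and the Dunnett critical value is a quantile of their maximum; a Monte Carlo estimate (sample size and seed illustrative) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(2)

def dunnett_critical(k, n_sims=200000, alpha=0.05):
    """Monte Carlo critical value for the single-stage, one-sided
    Dunnett test (k arms vs. a shared control, known variance):
    the (1 - alpha) quantile of max_i Z_i, where
    Z_i = (X_i - X_0) / sqrt(2) share the control arm X_0."""
    z0 = rng.standard_normal(n_sims)        # control arm
    zi = rng.standard_normal((n_sims, k))   # treatment arms
    stat = ((zi - z0[:, None]) / np.sqrt(2.0)).max(axis=1)
    return np.quantile(stat, 1.0 - alpha)

c = dunnett_critical(k=3)
# lies between the unadjusted 1.645 and the Bonferroni value 2.13
print(round(c, 2))
```

Using the maximum's quantile rather than a Bonferroni bound is what lets the multi-arm boundaries control the familywise error rate without being overly conservative.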
An efficient reconfigurable code rate cooperative low-density parity check codes for gigabits wide code encoder/decoder operations
Digital communication is now performed extensively. Maintaining proper authentication and communicating without overheads such as signal attenuation and code-rate fluctuations can be achieved by adopting parallel encoder and decoder operations. To overcome these drawbacks, a reconfigurable code rate cooperative (RCRC) low-density parity check (LDPC) method is proposed. The proposed RCRC-LDPC scheme can operate on gigabit-per-second data; it performs efficient linear encoding in dual-diagonal form, widens the range of supported code rates, and optimizes the degree distribution of the LDPC mother code. The proposed method optimizes the transmission rate and can operate at a code rate of 0.98, the highest upper-bounded code rate compared with existing methods. The implementation was carried out in MATLAB, and per the simulation results the proposed method reaches a throughput efficiency greater than 8.2 (1.9) gigabits per second at a clock frequency of 160 MHz.
End-to-End DAE–LDPC–OFDM Transceiver with Learned Belief Propagation Decoder for Robust and Power-Efficient Wireless Communication
This paper presents a Deep Autoencoder–LDPC–OFDM (DAE–LDPC–OFDM) transceiver architecture that integrates a learned belief propagation (BP) decoder to achieve robust, energy-efficient, and adaptive wireless communication. Unlike conventional modular systems that treat encoding, modulation, and decoding as independent stages, the proposed framework performs end-to-end joint optimization of all components, enabling dynamic adaptation to varying channel and noise conditions. The learned BP decoder introduces trainable parameters into the iterative message-passing process, allowing adaptive refinement of log-likelihood ratio (LLR) statistics and enhancing decoding accuracy across diverse SNR regimes. Extensive experimental results across multiple datasets and channel scenarios demonstrate the effectiveness of the proposed design. At 10 dB SNR, the DAE–LDPC–OFDM achieves a BER of 1.72% and BLER of 2.95%, outperforming state-of-the-art models such as Transformer–OFDM, CNN–OFDM, and GRU–OFDM by 25–30%, and surpassing traditional LDPC–OFDM systems by 38–42% across all tested datasets. The system also achieves a PAPR reduction of 26.6%, improving transmitter power amplifier efficiency, and maintains a low inference latency of 3.9 ms per frame, validating its suitability for real-time applications. Moreover, it maintains reliable performance under time-varying, interference-rich, and multipath fading channels, confirming its robustness in realistic wireless environments. The results establish the DAE–LDPC–OFDM as a high-performance, power-efficient, and scalable architecture capable of supporting the demands of 6G and beyond, delivering superior reliability, low-latency performance, and energy-efficient communication in next-generation intelligent networks.
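The PAPR figure quoted above refers to a well-known weakness of OFDM: summing many subcarriers produces occasional large peaks relative to the average power. A minimal illustration of how PAPR is measured for one random QPSK OFDM symbol (subcarrier count, oversampling factor, and seed are illustrative, and this is plain OFDM, not the paper's learned transceiver):

```python
import numpy as np

rng = np.random.default_rng(7)

def ofdm_papr_db(n_subcarriers=64, oversample=4):
    """Peak-to-average power ratio (dB) of one random QPSK OFDM
    symbol: the time-domain signal is the oversampled IFFT of the
    frequency-domain QPSK symbols."""
    bits = rng.integers(0, 2, size=(n_subcarriers, 2))
    qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    spectrum = np.zeros(n_subcarriers * oversample, dtype=complex)
    spectrum[:n_subcarriers] = qpsk  # zero-padding oversamples the IFFT
    x = np.fft.ifft(spectrum)
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

papr = ofdm_papr_db()
print(round(papr, 2), "dB")
```

Typical values for 64 subcarriers sit several dB above average power, which is why the reported 26.6% PAPR reduction translates directly into power-amplifier efficiency at the transmitter.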
MITS: A Quantum Sorcerer’s Stone for Designing Surface Codes
In the evolving field of quantum computing, optimizing Quantum Error Correction (QEC) parameters is crucial due to the varying types and amounts of physical noise across quantum computers. Traditional simulators use a forward paradigm to derive logical error rates from inputs like code distance and rounds, but this can lead to resource wastage. Adjusting QEC parameters manually with tools like STIM is often inefficient, especially given the daily fluctuations in quantum error rates. To address this, we introduce MITS, a reverse engineering tool for STIM that automatically determines optimal QEC settings based on a given quantum computer’s noise model and a target logical error rate. This approach minimizes qubit and gate usage by precisely matching the necessary logical error rate with the constraints of qubit numbers and gate fidelity. Our investigations into various heuristics and machine learning models for MITS show that XGBoost and Random Forest regressions, with Pearson correlation coefficients of 0.98 and 0.96, respectively, are highly effective in this context.
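The "reverse" direction described above can be illustrated with the common heuristic forward model for surface codes, p_L ≈ A·(p/p_th)^((d+1)/2), inverted to find the smallest code distance meeting a target logical error rate. The constants A and p_th below are illustrative placeholders, not fitted values from MITS:

```python
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Heuristic forward model for surface codes below threshold:
    p_L ~ A * (p / p_th) ** ((d + 1) / 2), for physical error rate p,
    threshold p_th, and odd code distance d."""
    return A * (p / p_th) ** ((d + 1) / 2)

def min_distance_for_target(p, target, p_th=0.01, A=0.1):
    """Invert the forward model: the smallest odd distance d whose
    predicted logical error rate meets the target."""
    d = 3
    while logical_error_rate(p, d, p_th, A) > target:
        d += 2
    return d

d = min_distance_for_target(p=0.001, target=5e-9)
print(d)
```

Each increment of d multiplies the qubit count roughly quadratically, so stopping at the smallest adequate distance is exactly the resource saving the abstract attributes to matching, rather than overshooting, the target logical error rate.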
Uplink and downlink performance analysis of a structured coded NOMA in Cognitive Radio Networks
This study examines uplink and downlink communication in a structured coded nonorthogonal multiple access (NOMA) scheme in the context of cognitive radio networks (CRNs). Due to the ever-increasing demand for spectrum-efficient communication systems, NOMA has emerged as an effective approach to enhance spectral efficiency by allowing multiple users to share the same frequency resources. Furthermore, CRNs also improve spectrum utilization by enabling dynamic spectrum access while primary users are present. This work presents a method that maximizes spectral efficiency by combining NOMA and CRN mechanisms. The suggested system is evaluated in terms of throughput, spectral efficiency, and bit error rate (BER). The collected results show that the proposed strategy performs better in reducing data errors when two users access the spectrum at different signal-to-noise ratios (SNRs), with a 7 dB improvement for the first user and a 2.5 dB improvement for the second user in the downlink scenario. Next, exact BER expressions for both coded and uncoded uplink NOMA systems are introduced. The proposed system demonstrates superior performance, needing only 11 dB to reach a BER of 1 × 10−6, whereas the uncoded system cannot operate in this harsh environment and its BER stays fixed at 0.25.
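The two-user sharing mechanism underlying abstracts like this one is power-domain NOMA with successive interference cancellation (SIC). A bare-bones uncoded BPSK sketch (power split, SNR, and seed are illustrative, and no coding or cognitive-radio logic is modeled):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two users superposed on one channel: the far user gets most of the
# power; the near user decodes the far user's signal first, subtracts
# it (SIC), then decodes its own.
n = 20000
p_far, p_near = 0.8, 0.2                     # power split (illustrative)
b_far = rng.integers(0, 2, n)
b_near = rng.integers(0, 2, n)
s = np.sqrt(p_far) * (2 * b_far - 1) + np.sqrt(p_near) * (2 * b_near - 1)

snr_db = 15.0
noise_std = np.sqrt(10.0 ** (-snr_db / 10.0))
y = s + noise_std * rng.standard_normal(n)

# Far user: decode directly, treating the near user's signal as noise
far_hat = (y > 0).astype(int)
# Near user: SIC -- re-modulate the far user's estimate and subtract it
y_sic = y - np.sqrt(p_far) * (2 * far_hat - 1)
near_hat = (y_sic > 0).astype(int)

ber_far = np.mean(far_hat != b_far)
ber_near = np.mean(near_hat != b_near)
print(ber_far, ber_near)
```

The asymmetric power split is why the two users in the abstract see different SNR gains: each user's effective decision distance, and hence BER, depends on its share of the superposed power.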