Search Results

36,917 result(s) for "detection errors"
Sensor data quality: a systematic review
Sensor data quality plays a vital role in Internet of Things (IoT) applications, which are rendered useless if the data quality is poor. This systematic review aims to provide an introduction and guide for researchers interested in quality-related issues of physical sensor data. The process and results of the systematic review are presented, aiming to answer the following research questions: what are the different types of physical sensor data errors, how can those errors be quantified or detected, how can they be corrected, and in what domains do the solutions lie. Out of 6970 publications obtained from three databases (ACM Digital Library, IEEE Xplore and ScienceDirect) using a search string refined via topic modelling, 57 publications were selected and examined. Results show that the sensor data errors addressed by those papers are mostly missing data and faults, e.g. outliers, bias and drift. The most common solutions for error detection are based on principal component analysis (PCA) and artificial neural networks (ANN), which together account for about 40% of all error detection papers found in the study. Similarly, for fault correction, PCA and ANN are among the most common, along with Bayesian networks. Missing values, on the other hand, are mostly imputed using association rule mining. Other techniques include hybrid solutions that combine several data science methods to detect and correct the errors. Through this systematic review, it is found that the methods proposed to solve physical sensor data errors cannot be directly compared, owing to the non-uniform evaluation process and the heavy use of non-publicly available datasets. Bayesian data analysis of the 57 selected publications also suggests that publications using publicly available datasets for method evaluation have higher citation rates.
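The review above identifies PCA-based detection as the most common approach among the surveyed papers. A minimal sketch of that general idea, assuming scikit-learn, synthetic three-channel sensor data, and a hypothetical 99th-percentile control limit (none of which come from the review itself):

```python
# PCA-based sensor fault detection sketch: fit PCA on clean readings, then
# flag samples whose squared prediction error (SPE) exceeds a control limit.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic "clean" multivariate sensor stream: 3 correlated channels.
t = np.linspace(0, 10, 500)
clean = np.column_stack([np.sin(t),
                         np.sin(t) + 0.1 * rng.normal(size=t.size),
                         np.cos(t)])

pca = PCA(n_components=2).fit(clean)

def spe(x):
    """Squared prediction error: squared distance to the PCA subspace."""
    recon = pca.inverse_transform(pca.transform(x))
    return np.sum((x - recon) ** 2, axis=1)

threshold = np.percentile(spe(clean), 99)  # hypothetical control limit

# Inject an outlier fault into one sample and test detection.
faulty = clean.copy()
faulty[250, 1] += 5.0
flags = spe(faulty) > threshold
print("flagged samples:", np.flatnonzero(flags))  # index 250, plus ~1% clean false alarms
```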
Lower complexity error location detection block of adjacent error correcting decoder for SRAMs
Multiple cell upsets (MCUs) caused by radiation are an important reliability issue for embedded static random access memories (SRAMs). Multiple random and adjacent error correcting codes have been extensively employed for several years to protect data stored in SRAMs against MCUs, and a compact, fast error-correcting codec is desirable in most of these applications. In this study, simplified expressions for the error location detection (ELD) block of single error correction-double error detection-double adjacent error correction (SEC-DED-DAEC) and single error correction-double error detection-triple adjacent error correction (SEC-DED-TAEC) decoders are obtained using Karnaugh maps. Conventional SEC-DED-DAEC and SEC-DED-TAEC decoders have been designed and implemented on both field-programmable gate array (FPGA) and ASIC platforms using these simplified ELD expressions. On the FPGA platform, the proposed SEC-DED-DAEC and SEC-DED-TAEC decoders achieve a 1.37-28.40% improvement in area and up to a 14.74% improvement in delay compared to existing designs, while the ASIC-based designs provide a 2.20-26.81% reduction in area and a 0.30-28.96% reduction in delay compared to existing related work. The proposed design can therefore be considered an efficient alternative to traditional adjacent error correcting decoders in resource-constrained applications.
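The ELD block the paper simplifies locates a flipped bit by matching the decoder syndrome against the columns of the parity-check matrix. The toy below illustrates that mechanism for a plain extended-Hamming SEC-DED code only; the H matrix and 8-bit word layout are assumptions, and the paper's DAEC/TAEC adjacent-error logic is not reproduced:

```python
# Syndrome-based error location detection for an extended Hamming(8,4)
# SEC-DED code: the syndrome is the binary index of the flipped bit.
import numpy as np

# Parity-check matrix for Hamming(7,4); column j is the binary encoding of j+1.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode(word):
    """Decode a 7-bit word plus overall parity bit; return (word, status)."""
    bits, overall = word[:7], word[7]
    syndrome = H @ bits % 2
    pos = int("".join(map(str, syndrome[::-1])), 2)  # located bit, 1-indexed
    parity_ok = (bits.sum() + overall) % 2 == 0
    if pos == 0 and parity_ok:
        return word, "no error"
    if not parity_ok:                 # odd error count: correctable single error
        word = word.copy()
        if pos:
            word[pos - 1] ^= 1        # flip the located data/check bit
        else:
            word[7] ^= 1              # the overall parity bit itself flipped
        return word, "single error corrected"
    return word, "double error detected"  # syndrome set but parity consistent

word = np.array([0, 1, 1, 0, 0, 1, 1, 0])  # a valid codeword
word[4] ^= 1                               # inject a single-bit error
print(decode(word))                        # -> corrected word, "single error corrected"
```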
Realizing repeated quantum error correction in a distance-three surface code
Quantum computers hold the promise of solving computational problems that are intractable using conventional methods [1]. For fault-tolerant operation, quantum computers must correct errors occurring owing to unavoidable decoherence and limited control accuracy [2]. Here we demonstrate quantum error correction using the surface code, which is known for its exceptionally high tolerance to errors [3-6]. Using 17 physical qubits in a superconducting circuit, we encode quantum information in a distance-three logical qubit, building on recent distance-two error-detection experiments [7-9]. In an error-correction cycle taking only 1.1 μs, we demonstrate the preservation of four cardinal states of the logical qubit. Repeatedly executing the cycle, we measure and decode both bit-flip and phase-flip error syndromes using a minimum-weight perfect-matching algorithm in an error-model-free approach and apply corrections in post-processing. We find a low logical error probability of 3% per cycle when rejecting experimental runs in which leakage is detected. The measured characteristics of our device agree well with a numerical model. Our demonstration of repeated, fast and high-performance quantum error-correction cycles, together with recent advances in ion traps [10], supports our understanding that fault-tolerant quantum computation will be practically realizable. By using 17 physical qubits in a superconducting circuit to encode quantum information in a surface-code logical qubit, fast (1.1 μs) and high-performance (logical error probability of 3% per cycle) quantum error-correction cycles are demonstrated.
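The minimum-weight perfect-matching decoding mentioned above pairs up syndrome "defects" so that the total weight of the implied error chains is minimized. A brute-force toy of that idea for a 1D repetition code, assuming an even defect count and ignoring boundary matching (real surface-code decoders use blossom algorithms, which this does not implement):

```python
# Brute-force toy of minimum-weight perfect matching (MWPM) on a 1D
# repetition code's syndrome.
def defects(syndrome):
    """Indices of parity checks that fired."""
    return [i for i, s in enumerate(syndrome) if s]

def min_weight_pairing(d):
    """Return (pairs, weight) minimizing total pairwise distance."""
    if not d:
        return [], 0
    best, best_w = None, float("inf")
    first, rest = d[0], d[1:]
    for i, partner in enumerate(rest):
        pairs, w = min_weight_pairing(rest[:i] + rest[i + 1:])
        w += abs(first - partner)
        if w < best_w:
            best, best_w = [(first, partner)] + pairs, w
    return best, best_w

# Hypothetical check outcomes: two short error chains fire checks at
# indices 1, 2 and 5, 6; MWPM should pair (1, 2) and (5, 6).
syndrome = [0, 1, 1, 0, 0, 1, 1, 0]
print(min_weight_pairing(defects(syndrome)))  # ([(1, 2), (5, 6)], 2)
```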
Exponential suppression of bit or phase errors with cyclic error correction
Realizing the potential of quantum computing requires sufficiently low logical error rates [1]. Many applications call for error rates as low as 10⁻¹⁵ [2-9], but state-of-the-art quantum platforms typically have physical error rates near 10⁻³ [10-14]. Quantum error correction [15-17] promises to bridge this divide by distributing quantum logical information across many physical qubits in such a way that errors can be detected and corrected. Errors on the encoded logical qubit state can be exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold and stable over the course of a computation. Here we implement one-dimensional repetition codes embedded in a two-dimensional grid of superconducting qubits that demonstrate exponential suppression of bit-flip or phase-flip errors, reducing logical error per round more than 100-fold when increasing the number of qubits from 5 to 21. Crucially, this error suppression is stable over 50 rounds of error correction. We also introduce a method for analysing error correlations with high precision, allowing us to characterize error locality while performing quantum error correction. Finally, we perform error detection with a small logical qubit using the 2D surface code on the same device [18,19] and show that the results from both one- and two-dimensional codes agree with numerical simulations that use a simple depolarizing error model. These experimental demonstrations provide a foundation for building a scalable fault-tolerant quantum computer with superconducting qubits. Repetition codes running many cycles of quantum error correction achieve exponential suppression of errors with increasing numbers of qubits.
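A rough Monte Carlo illustration of the exponential-suppression claim, using a classical bit-flip repetition code with majority-vote decoding; the error rate and qubit counts below are illustrative assumptions, not the paper's experimental parameters:

```python
# Logical error rate of an n-bit repetition code under i.i.d. bit flips,
# estimated by Monte Carlo: majority vote fails when > n/2 bits flip.
import numpy as np

rng = np.random.default_rng(1)
p = 0.05  # physical flip probability, assumed well below the 0.5 threshold

for n in (5, 9, 13, 17, 21):
    flips = rng.random((200_000, n)) < p
    logical_errors = flips.sum(axis=1) > n // 2
    # Rate drops roughly exponentially in n; large n may print 0 at this
    # sample size.
    print(f"n={n:2d}  logical error rate ≈ {logical_errors.mean():.2e}")
```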
Logical quantum processor based on reconfigurable atom arrays
Suppressing errors is the central challenge for useful quantum computing [1], requiring quantum error correction (QEC) [2-6] for large-scale processing. However, the overhead in the realization of error-corrected 'logical' qubits, in which information is encoded across many physical qubits for redundancy [2-4], poses substantial challenges to large-scale logical quantum computing. Here we report the realization of a programmable quantum processor based on encoded logical qubits operating with up to 280 physical qubits. Using logical-level control and a zoned architecture in reconfigurable neutral-atom arrays [7], our system combines high two-qubit gate fidelities [8], arbitrary connectivity [7,9], fully programmable single-qubit rotations and mid-circuit readout [10-15]. Operating this logical processor with various types of encoding, we demonstrate improvement of a two-qubit logic gate by scaling surface-code [6] distance from d = 3 to d = 7, preparation of colour-code qubits with break-even fidelities [5], fault-tolerant creation of logical Greenberger-Horne-Zeilinger (GHZ) states and feedforward entanglement teleportation, as well as operation of 40 colour-code qubits. Finally, using 3D [[8,3,2]] code blocks [16,17], we realize computationally complex sampling circuits [18] with up to 48 logical qubits entangled with hypercube connectivity [19], with 228 logical two-qubit gates and 48 logical CCZ gates [20]. We find that this logical encoding substantially improves algorithmic performance with error detection, outperforming physical-qubit fidelities at both cross-entropy benchmarking and quantum simulations of fast scrambling [21,22]. These results herald the advent of early error-corrected quantum computation and chart a path towards large-scale logical processors. A programmable quantum processor based on encoded logical qubits operating with up to 280 physical qubits is described, in which a variety of error-correction codes enable improved algorithmic performance.
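The reported gain from scaling distance d = 3 to d = 7 is consistent with the standard surface-code scaling heuristic ε_L ≈ A (p/p_th)^((d+1)/2). A quick evaluation of that formula with entirely hypothetical values of A, p and p_th (not the paper's numbers):

```python
# Surface-code scaling heuristic: logical error per round shrinks
# geometrically with code distance when p < p_th.
p, p_th, A = 0.005, 0.01, 0.1  # hypothetical physical rate, threshold, prefactor

for d in (3, 5, 7):
    eps_logical = A * (p / p_th) ** ((d + 1) / 2)
    print(f"d={d}: eps_L ≈ {eps_logical:.2e}")
```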
A Bidirectional LSTM Language Model for Code Evaluation and Repair
Programming is a vital skill in computer science and engineering-related disciplines, yet developing source code is an error-prone task. Logical errors in code are particularly hard to identify for both students and professionals, and even a single error can be unexpected to end-users. At present, conventional compilers have difficulty identifying many of the errors (especially logical errors) that can occur in code. To mitigate this problem, we propose a language model for evaluating source code using a bidirectional long short-term memory (BiLSTM) neural network. We trained the BiLSTM model on a large corpus of source code while tuning various hyperparameters. We then used the model to evaluate incorrect code, assessing its performance in three principal areas: source code error detection, suggestions for repairing incorrect code, and erroneous code classification. Experimental results showed that the proposed BiLSTM model achieved 50.88% correctness in identifying errors and providing suggestions. Moreover, the model achieved an F-score of approximately 97%, outperforming other state-of-the-art models (recurrent neural networks (RNNs) and long short-term memory (LSTM) networks).
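A minimal PyTorch sketch of the kind of model the paper describes: a bidirectional LSTM producing per-token predictions over tokenized source code. The vocabulary size, layer dimensions and two-label head below are illustrative assumptions, not the authors' architecture:

```python
# BiLSTM token tagger: embeds token ids, runs a bidirectional LSTM, and
# emits per-token ok/error logits.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden=128, n_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_labels)

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq, embed_dim)
        out, _ = self.lstm(x)       # (batch, seq, 2*hidden)
        return self.head(out)       # per-token logits

model = BiLSTMTagger()
tokens = torch.randint(0, 5000, (1, 20))  # one dummy tokenized snippet
print(model(tokens).shape)                # torch.Size([1, 20, 2])
```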
Exploring Metaheuristic Optimized Machine Learning for Software Defect Detection on Natural Language and Classical Datasets
Software is increasingly vital, with automated systems regulating critical functions. As development demands grow, manual code review becomes more challenging, often making testing more time-consuming than development itself. A promising approach to improving defect detection at the source code level is the use of artificial intelligence combined with natural language processing (NLP). Source code analysis, leveraging machine-readable instructions, is an effective method for enhancing defect detection and error prevention. This work explores source code analysis through NLP and machine learning, comparing classical and emerging error detection methods. To optimize classifier performance, metaheuristic optimizers are used, and algorithm modifications are introduced to meet the study's specific needs. The proposed two-tier framework uses a convolutional neural network (CNN) in the first layer to handle large feature spaces, with AdaBoost and XGBoost classifiers in the second layer to improve error identification. Additional experiments using term frequency-inverse document frequency (TF-IDF) encoding in the second layer demonstrate the framework's versatility. Across five experiments with public datasets, the accuracy of the CNN was 0.768799. The second layer, using AdaBoost and XGBoost, further improved these results to 0.772166 and 0.771044, respectively. Applying NLP techniques yielded exceptional accuracies of 0.979781 and 0.983893 with the AdaBoost and XGBoost classifiers.
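A stripped-down sketch of the TF-IDF second-layer experiment described above, assuming scikit-learn's AdaBoostClassifier and a toy labelled corpus in place of the paper's tuned CNN + AdaBoost/XGBoost pipeline and public datasets:

```python
# TF-IDF features over code snippets fed to a boosted classifier.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

snippets = ["for i in range(n): total += x[i]",
            "for i in range(n) total += x[i]",   # missing colon
            "if x == None: return",
            "if x is None: return"]
labels = [0, 1, 1, 0]  # 1 = defective; hypothetical labels for illustration

clf = make_pipeline(TfidfVectorizer(token_pattern=r"\S+"),
                    AdaBoostClassifier(n_estimators=50))
clf.fit(snippets, labels)
print(clf.predict(["while x > 0 x -= 1"]))  # classify an unseen snippet
```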
Repeated quantum error detection in a surface code
The realization of quantum error correction is an essential ingredient for reaching the full potential of fault-tolerant universal quantum computation. Using a range of different schemes, logical qubits that are resistant to errors can be redundantly encoded in a set of error-prone physical qubits. One such scalable approach is based on the surface code. Here we experimentally implement its smallest viable instance, capable of repeatedly detecting any single error using seven superconducting qubits—four data qubits and three ancilla qubits. Using high-fidelity ancilla-based stabilizer measurements, we initialize the cardinal states of the encoded logical qubit with an average logical fidelity of 96.1%. We then repeatedly check for errors using the stabilizer readout and observe that the logical quantum state is preserved with a lifetime and a coherence time longer than those of any of the constituent qubits when no errors are detected. Our demonstration of error detection with its resulting enhancement of the conditioned logical qubit coherence times is an important step, indicating a promising route towards the realization of quantum error correction in the surface code. In a surface code consisting of four data and three ancilla qubits, repeated error detection is demonstrated. The lifetime and coherence time of the logical qubit are enhanced over those of any of the constituent qubits when no errors are detected.
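A classical caricature of the repeated-detection loop: measure a fixed set of parity checks each round and flag the round when the syndrome changes. The three checks over four bits below stand in for the stabilizers (which in the real experiment also detect phase flips) and are assumptions for illustration:

```python
# Repeated parity-check measurement: any single bit flip on the four data
# bits changes at least one of the three checks.
import numpy as np

rng = np.random.default_rng(2)
checks = [(0, 1), (1, 2), (2, 3)]  # parity checks over 4 data bits

def measure(data):
    return [data[i] ^ data[j] for i, j in checks]

data = np.zeros(4, dtype=int)      # encoded "logical 0"
reference = measure(data)
for rnd in range(5):
    if rng.random() < 0.3:         # inject a random bit flip with prob 0.3
        data[rng.integers(4)] ^= 1
    syndrome = measure(data)
    detected = syndrome != reference
    print(f"round {rnd}: syndrome={syndrome} detected={detected}")
    reference = syndrome           # compare against the previous round
```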
Verb Form Recognition and Error Detection in English Articles Using Long Short-Term Memory and Grammar Checks
Error checking of verb forms in English articles is beneficial for learning English and for improving the fluency of English texts. In this study, long short-term memory (LSTM) networks were used to recognize the types of errors in verb forms. To maximize the use of textual context information, a bidirectional LSTM algorithm was employed. Simulation experiments were then conducted, and the algorithm was evaluated against a support vector machine (SVM) algorithm and a grammar-rule-based algorithm. The bidirectional LSTM method demonstrated higher accuracy in recognizing the parts of speech of words and the types of verb form errors in the text, and its accuracy was more stable across different types of verb form errors.
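A minimal sketch of the grammar-rule baseline this study compares against: a single subject-verb agreement rule over a tiny hand-made lexicon, which is an illustrative assumption rather than the paper's checker:

```python
# Rule-based verb-form check: flag a third-person singular subject followed
# by a base-form verb ("he go", "she walk").
THIRD_SINGULAR_SUBJECTS = {"he", "she", "it"}
BASE_FORMS = {"go", "walk", "write", "run", "eat"}  # toy verb lexicon

def check_verb_forms(sentence):
    """Return messages for 'he go'-style agreement errors."""
    words = sentence.lower().rstrip(".!?").split()
    errors = []
    for subj, verb in zip(words, words[1:]):
        if subj in THIRD_SINGULAR_SUBJECTS and verb in BASE_FORMS:
            errors.append(f"'{subj} {verb}': verb should be third-person singular")
    return errors

print(check_verb_forms("She walk to school and he go home."))
```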
Nonclassicality and Coherent Error Detection via Pseudo-Entropy
Pseudo-entropy is a complex-valued generalization of entanglement entropy defined on non-Hermitian transition operators and induced by post-selection. We present a simulation-based protocol for detecting nonclassicality and coherent errors in quantum circuits using this pseudo-entropy measure Š, focusing on its imaginary part Im Š as a diagnostic tool. Our method enables resource-efficient classification of phase-coherent errors, such as those from miscalibrated CNOT gates, even under realistic noise conditions. By quantifying the transition between classical-like and quantum-like behavior through threshold analysis, we provide theoretical benchmarks for error classification that can inform hardware calibration strategies. Numerical simulations demonstrate that 55% of the parameter space remains classified as classical-like (below classification thresholds) at hardware-calibrated sensitivity levels, with statistical significance confirmed through rigorous sensitivity analysis. Robustness to noise and a comparison with standard entropy-based methods are demonstrated in simulation. While hardware validation remains necessary, this work bridges theoretical concepts of nonclassicality with practical quantum error classification frameworks, providing a foundation for experimental quantum computing applications.
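A small numerical sketch of the quantity involved, assuming numpy and two arbitrarily chosen two-qubit states: build the normalized transition operator, trace out one qubit, and sum -λ log λ over its (generally complex) eigenvalues; a nonzero imaginary part is the signal the protocol exploits:

```python
# Pseudo-entropy of a reduced transition operator between two 2-qubit states.
import numpy as np

def pseudo_entropy(psi, phi):
    tau = np.outer(psi, phi.conj()) / (phi.conj() @ psi)      # transition op, Tr = 1
    tau_A = tau.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # trace out qubit B
    lam = np.linalg.eigvals(tau_A)
    lam = lam[np.abs(lam) > 1e-12]
    return -np.sum(lam * np.log(lam.astype(complex)))         # complex log

# Two arbitrary (illustrative) states chosen so the eigenvalues are complex.
psi = np.array([1, 0.2, 0.5, 1], dtype=complex)
psi /= np.linalg.norm(psi)
phi = np.array([1, 0.3j, 0, 0.9], dtype=complex)
phi /= np.linalg.norm(phi)

S = pseudo_entropy(psi, phi)
print(f"pseudo-entropy: {S:.4f}; Im part (coherence signal): {S.imag:+.4f}")
```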