18,026 result(s) for "Decoding"
The Science of Learning to Read Words
The author reviews theory and research by Ehri and her colleagues to document how a scientific approach has been applied over the years to conduct controlled studies whose findings reveal how beginners learn to read words in and out of text. Words may be read by decoding letters into blended sounds or by predicting words from context, but the way that contributes most to reading and comprehending text is reading words automatically from memory by sight. The evidence shows that words are read from memory when graphemes are connected to phonemes. This bonds spellings of individual words to their pronunciations along with their meanings in memory. Readers must know grapheme–phoneme relations and have decoding skill to form connections, and must read words in text to associate spellings with meanings. Readers move through four developmental phases as they acquire knowledge about the alphabetic writing system and apply it to read and write words and build their sight vocabularies. Grapheme–phoneme knowledge and phonemic segmentation are key foundational skills that launch development, followed later by knowledge of syllabic and morphemic spelling–sound units. Findings show that when spellings attach to pronunciations and meanings in memory, they enhance memory for vocabulary words. This research underscores the importance of systematic phonics instruction that teaches students the knowledge and skills that are essential in acquiring word-reading skill.
Implementation of turbo decoding for VDES signal data segments
The International Telecommunication Union (ITU) mandates the use of turbo coding for VDES signal data segments. Previous studies primarily focused on turbo decoding for a specific link in VDES, often neglecting applicability to all links. This paper proposes an FPGA-based implementation method for turbo code decoding, aiming to achieve fast and accurate turbo decoding of VDES signal data segments while ensuring compatibility with different links. The resulting turbo decoder can adapt to the different code rates and data segment lengths specified in the VDES standard. A comparison of the decoding speeds between DSP and FPGA confirms that the parallel computing capability of FPGA significantly improves the decoding speed. Finally, the main onboard resources required for the FPGA implementation of turbo decoding are reported.
Quasi-Optimal Path Convergence-Aided Automorphism Ensemble Decoding of Reed–Muller Codes
By exploiting the rich automorphisms of Reed–Muller (RM) codes, the recently developed automorphism ensemble (AE) successive cancellation (SC) decoder achieves a near-maximum-likelihood (ML) performance for short block lengths. However, the appealing performance of AE-SC decoding arises from the diversity gain that requires a list of SC decoding attempts, which results in a high decoding complexity. To address this issue, this paper proposes a novel quasi-optimal path convergence (QOPC)-aided early termination (ET) technique for AE-SC decoding. This technique detects strong convergence between the partial path metrics (PPMs) of SC constituent decoders to reliably identify the optimal decoding path at runtime. When the QOPC-based ET criterion is satisfied during the AE-SC decoding, only the identified path is allowed to proceed for a complete codeword estimate, while the remaining paths are terminated early. The numerical results demonstrated that for medium-to-high-rate RM codes in the short-length regime, the proposed QOPC-aided ET method incurred negligible performance loss when applied to fully parallel AE-SC decoding. Meanwhile, it achieved a complexity reduction that ranged from 35.9% to 47.4% at a target block error rate (BLER) of 10⁻³, where it consistently outperformed a state-of-the-art path metric threshold (PMT)-aided ET method. Additionally, under a partially parallel framework of AE-SC decoding, the proposed QOPC-aided ET method achieved a greater complexity reduction that ranged from 81.3% to 86.7% at a low BLER that approached 10⁻⁵ while maintaining a near-ML decoding performance.
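The abstract above describes terminating parallel SC decoding paths once their partial path metrics (PPMs) have converged on a clear winner. The exact QOPC criterion is not given here, so the sketch below illustrates the general idea with a hypothetical margin-based test; the `margin` parameter and the lower-is-better metric convention are assumptions, not the paper's rule:

```python
import numpy as np

def margin_based_early_termination(ppms, margin):
    """Toy convergence test over partial path metrics (PPMs).

    ppms: one current metric per parallel SC decoding path
          (lower = more likely, an assumed convention).
    Returns the index of the single path allowed to continue,
    or None if no path dominates yet.
    """
    order = np.argsort(ppms)
    best, runner_up = ppms[order[0]], ppms[order[1]]
    if runner_up - best >= margin:
        return int(order[0])  # terminate all other paths early
    return None

# One path clearly dominates -> decode only that path to completion.
print(margin_based_early_termination(np.array([7.2, 1.1, 6.5]), margin=3.0))
# No dominant path yet -> all paths keep decoding.
print(margin_based_early_termination(np.array([2.0, 1.1, 2.4]), margin=3.0))
```

In a real AE-SC decoder this test would be evaluated at intermediate decoding stages, trading a small risk of picking a suboptimal path against the complexity savings reported above.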
List Decoding of Arıkan’s PAC Codes
Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arıkan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance as compared to CRC-aided list decoding of conventional polar codes. PAC codes are based primarily upon the following main ideas: replacing CRC codes with convolutional precoding (under appropriate rate profiling) and replacing list decoding by sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L⩾128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insights into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low-weight codewords in PAC codes, and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes. We also consider random time-varying convolutional precoding for PAC codes, and observe that this scheme achieves the same superior performance with constraint length as low as ν=2.
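As a concrete illustration of the convolutional precoding step mentioned above, the following sketch convolves the rate-profiled data sequence with a binary generator over GF(2). The generator `c = [1, 1, 1]` is an arbitrary illustrative choice, not the generator Arıkan used for PAC codes:

```python
def conv_precode(v, c):
    """Binary convolution u = v * c over GF(2).

    v: data bits after rate profiling (list of 0/1)
    c: generator coefficients, c[0] = 1 by convention
       (constraint length = len(c))
    """
    u = []
    for i in range(len(v)):
        bit = 0
        for j, cj in enumerate(c):
            if cj and i - j >= 0:
                bit ^= v[i - j]  # XOR is addition over GF(2)
        u.append(bit)
    return u

# The impulse response reproduces the generator, as expected for convolution.
print(conv_precode([1, 0, 0, 0], [1, 1, 1]))  # [1, 1, 1, 0]
```

In a PAC encoder, the output `u` would then be fed to the polar transform; the time-varying variant in the abstract would draw fresh generator coefficients at each step.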
Unpicking the Developmental Relationship Between Oral Language Skills and Reading Comprehension: It's Simple, But Complex
Listening comprehension and word decoding are the two major determinants of the development of reading comprehension. The relative importance of different language skills for the development of listening and reading comprehension remains unclear. In this 5-year longitudinal study, starting at age 7.5 years (n = 198), it was found that the shared variance between vocabulary, grammar, verbal working memory, and inference skills was a powerful longitudinal predictor of variations in both listening and reading comprehension. In line with the simple view of reading, listening comprehension and word decoding, together with their interaction and curvilinear effects, explained almost all (96%) of the variation in early reading comprehension skills. Additionally, listening comprehension was a predictor of both the early and later growth of reading comprehension skills.
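The product form of the simple view (reading comprehension ≈ decoding × listening comprehension) can be made concrete by comparing a purely additive regression with one that includes the interaction term. Everything below — the synthetic data, coefficients, and noise level — is invented purely for illustration, not taken from the study above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
D = rng.uniform(0, 1, n)               # decoding skill (toy scale)
LC = rng.uniform(0, 1, n)              # listening comprehension (toy scale)
RC = D * LC + rng.normal(0, 0.02, n)   # product model plus a little noise

def r_squared(X, y):
    """Ordinary least squares fit, returning the training R^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones(n)
additive = r_squared(np.column_stack([ones, D, LC]), RC)
with_interaction = r_squared(np.column_stack([ones, D, LC, D * LC]), RC)
print(round(additive, 3), round(with_interaction, 3))
```

On data generated this way, adding the D × LC term recovers nearly all of the variance, while the additive model leaves a systematic residual — mirroring the role the interaction plays in the finding quoted above.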
The Science of Reading Progresses
The simple view of reading is commonly presented to educators in professional development about the science of reading. The simple view is a useful tool for conveying the undeniable importance—in fact, the necessity—of both decoding and linguistic comprehension for reading. Research in the 35 years since the theory was proposed has revealed additional understandings about reading. In this article, we synthesize research documenting three of these advances: (1) Reading difficulties have a number of causes, not all of which fall under decoding and/or listening comprehension as posited in the simple view; (2) rather than influencing reading solely independently, as conceived in the simple view, decoding and listening comprehension (or in terms more commonly used in reference to the simple view today, word recognition and language comprehension) overlap in important ways; and (3) there are many contributors to reading not named in the simple view, such as active, self-regulatory processes, that play a substantial role in reading. We point to research showing that instruction aligned with these advances can improve students’ reading. We present a theory, which we call the active view of reading, that is an expansion of the simple view and can be used to convey these important advances to current and future educators. We discuss the need to lift up updated theories and models to guide practitioners’ work in supporting students’ reading development in classrooms and interventions.
Deconstructing multivariate decoding for the study of brain function
Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function. 
• We highlight two sources of confusion that affect the interpretation of multivariate decoding results.
• Confusion 1: the dual use of multivariate decoding for real-world predictions and for interpretation in terms of brain function.
• Confusion 2: a mixture of the statistical and conceptual frameworks of classical univariate analysis and multivariate decoding.
• We show six differences between univariate analysis and multivariate decoding, and a different meaning of signal and noise.
• We use four illustrative examples to reveal these confusions and the assumptions of multivariate decoding for interpretation.
Decoding error probability of random parity-check matrix ensemble over the erasure channel
In this paper, we carry out an in-depth study of the average decoding error probability of the random parity-check matrix ensemble over the erasure channel under three decoding principles: unambiguous decoding, maximum-likelihood decoding, and list decoding. We obtain explicit formulas for the average decoding error probabilities of the ensemble under these three principles and compute the error exponents. Moreover, for unambiguous decoding, we compute the variance of the decoding error probability of the ensemble and the error exponent of the variance, which implies a strong concentration result: roughly speaking, the ratio of the decoding error probability of a random linear code in the ensemble to the ensemble average converges to 1 with high probability as the code length goes to infinity.
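On the erasure channel, unambiguous decoding fails exactly when the columns of the parity-check matrix indexed by the erased positions are linearly dependent over GF(2). That criterion makes the ensemble-average error probability easy to estimate by Monte Carlo. The sketch below does this for the random parity-check ensemble; the matrix sizes, erasure rate, and trial count are arbitrary illustrative choices:

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = (A % 2).astype(np.uint8).copy()
    rank = 0
    for col in range(A.shape[1]):
        pivots = np.nonzero(A[rank:, col])[0]
        if pivots.size == 0:
            continue
        pivot = rank + pivots[0]
        A[[rank, pivot]] = A[[pivot, rank]]      # move pivot row up
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                   # eliminate the column
        rank += 1
    return rank

def decoding_fails(H, erased):
    """Unambiguous decoding fails iff the erased columns are dependent."""
    return len(erased) > 0 and gf2_rank(H[:, erased]) < len(erased)

def avg_error_prob(m, n, eps, trials, seed=0):
    """Monte Carlo average over the random (m x n) parity-check ensemble."""
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(trials):
        H = rng.integers(0, 2, size=(m, n))       # fresh random code
        erased = np.nonzero(rng.random(n) < eps)[0]
        fails += decoding_fails(H, erased)
    return fails / trials

print(avg_error_prob(m=10, n=20, eps=0.3, trials=200))
```

Sweeping `eps` (or the code length at fixed rate) in this sketch gives an empirical view of the error exponents whose exact formulas the paper derives.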
Online binary decision decoding using functional near-infrared spectroscopy for the development of brain–computer interface
In this paper, a functional near-infrared spectroscopy (fNIRS)-based online binary decision decoding framework is developed. Fourteen healthy subjects are asked to mentally make "yes" or "no" decisions in answer to given questions. To signal a "yes" decision, the subjects perform a mental task that imposes a cognitive load on the prefrontal cortex; to signal a "no" decision, they relax. Signals from the prefrontal cortex are collected using continuous-wave near-infrared spectroscopy. Using linear discriminant analysis (LDA) and support vector machine (SVM) classification, it is verified that the cortical hemodynamic responses for making a "yes" decision are distinguishable from those for making a "no" decision. Using the mean changes in hemoglobin concentration as features, binary decisions are classified into two classes, "yes" and "no," with an average classification accuracy of 74.28% using LDA and 82.14% using SVM. These results demonstrate the feasibility of fNIRS for a brain–computer interface.
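The classification pipeline described above (mean hemodynamic changes as features, a linear classifier on two classes) can be sketched with a minimal two-class Fisher LDA. The synthetic feature values below stand in for real mean-concentration-change features and are purely illustrative; the class separation is invented, not taken from the study:

```python
import numpy as np

def fit_lda(X_yes, X_no):
    """Two-class LDA with a pooled covariance estimate."""
    mu1, mu0 = X_yes.mean(axis=0), X_no.mean(axis=0)
    n1, n0 = len(X_yes), len(X_no)
    pooled = ((n1 - 1) * np.cov(X_yes.T) + (n0 - 1) * np.cov(X_no.T)) / (n1 + n0 - 2)
    # Small ridge term keeps the solve stable if channels are correlated.
    w = np.linalg.solve(pooled + 1e-6 * np.eye(pooled.shape[0]), mu1 - mu0)
    b = -0.5 * w @ (mu1 + mu0)
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)  # 1 -> "yes", 0 -> "no"

# Synthetic "mean concentration change" features: 2 channels per trial.
rng = np.random.default_rng(1)
X_yes = rng.normal(loc=[1.0, 0.8], scale=0.3, size=(40, 2))  # mental task
X_no = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(40, 2))   # relaxation
w, b = fit_lda(X_yes, X_no)
preds = predict(w, b, np.vstack([X_yes, X_no]))
truth = np.array([1] * 40 + [0] * 40)
print("training accuracy:", (preds == truth).mean())
```

A real evaluation would of course use held-out trials (e.g. cross-validation) rather than training accuracy, and SVM classification as in the study would replace the decision rule while keeping the same features.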
Beyond the Simple View of Reading
The simple view of reading describes reading as the product of decoding (D) and listening comprehension (LC). However, the simple view has been challenged, and evidence has shown it to be too simple to explain the complexities of reading comprehension in the elementary school years. It has been hypothesized that malleable cognitive-linguistic factors underlie the common variance between D and LC, although there is no clarity at this point regarding what these are. We propose that one such group of malleable cognitive factors is executive function (EF) skills. Further, we posit that EF skills play equally strong roles in explaining reading comprehension variance in emergent bilinguals and English monolinguals. We used multigroup structural equation modeling to determine the contribution of these constructs (D, LC, and EF) to reading comprehension in 425 emergent bilinguals and 302 English monolinguals in grades 2–4. The shared variance between D and LC was explained by direct and indirect effects in the models tested, with strong indirect effects for the EFs of cognitive flexibility and working memory through D and LC, respectively, for both language groups. The indirect effect of cognitive flexibility through LC on reading comprehension was considerably larger for emergent bilinguals than for English monolinguals. Considerations for a more nuanced view of the simple view of reading and its implications for practice are discussed.