1,889 results for "Sampling (signal processing)"
A survey of Monte Carlo methods for parameter estimation
Statistical signal processing applications usually require the estimation of some parameters of interest given a set of observed data. These estimates are typically obtained either by solving a multi-variate optimization problem, as in the maximum likelihood (ML) or maximum a posteriori (MAP) estimators, or by performing a multi-dimensional integration, as in the minimum mean squared error (MMSE) estimators. Unfortunately, analytical expressions for these estimators cannot be found in most real-world applications, and the Monte Carlo (MC) methodology is one feasible approach. MC methods proceed by drawing random samples, either from the desired distribution or from a simpler one, and using them to compute consistent estimators. The most important families of MC algorithms are Markov chain MC (MCMC) and importance sampling (IS). On the one hand, MCMC methods draw candidate samples from a proposal density and accept or reject them as the new state of the chain, thereby building an ergodic Markov chain whose stationary distribution is the desired distribution. On the other hand, IS techniques draw samples from a simple proposal density and then assign them suitable weights that measure their quality in some appropriate way. In this paper, we perform a thorough review of MC methods for the estimation of static parameters in signal processing applications. A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm, as well as three sections describing many of the most relevant MCMC and IS algorithms and their combined use. Finally, five numerical examples (including the estimation of the parameters of a chaotic system, a localization problem in wireless sensor networks, and a spectral analysis application) are provided in order to demonstrate the performance of the described approaches.
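To make the accept/reject and weighting mechanisms described above concrete, here is a minimal Python sketch of Metropolis-Hastings (a basic MCMC scheme) and self-normalized importance sampling for a toy scalar-parameter posterior. The Gaussian model, flat prior, proposal widths, and sample counts are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: scalar parameter theta, 50 Gaussian observations with
# mean theta and unit variance, flat prior (all values illustrative).
data = rng.normal(1.5, 1.0, size=50)

def log_target(theta):
    # Unnormalized log-posterior; constants drop out of both methods.
    return -0.5 * np.sum((data - theta) ** 2)

def metropolis_hastings(n_iter=5000, step=0.5):
    """Basic MCMC: propose a random-walk candidate, accept or reject it
    so the chain's stationary distribution is the target."""
    chain, theta = np.empty(n_iter), 0.0
    for i in range(n_iter):
        cand = theta + step * rng.normal()
        if np.log(rng.uniform()) < log_target(cand) - log_target(theta):
            theta = cand                  # accept candidate
        chain[i] = theta                  # else keep the current state
    return chain

def importance_sampling(n=5000, mu=0.0, sigma=3.0):
    """Self-normalized IS: sample a broad Gaussian proposal, weight each
    sample by target/proposal, and average."""
    samples = rng.normal(mu, sigma, n)
    log_w = np.array([log_target(s) for s in samples]) \
            + 0.5 * ((samples - mu) / sigma) ** 2   # minus log-proposal
    w = np.exp(log_w - log_w.max())                 # stabilize, then normalize
    return np.sum(w * samples) / w.sum()

print("MCMC estimate of posterior mean:", metropolis_hastings()[1000:].mean())
print("IS   estimate of posterior mean:", importance_sampling())
```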
Mathematical theory of sampling analog signals revisited
This paper is an attempt to formulate a unified mathematical theory of the sampling of analog signals by connecting the fragments scattered throughout the literature into one whole. We believe there is a need for this, since many of these fragments appear to be inconsistent with one another, especially when they are misunderstood and misinterpreted. Our hope is that this paper will meet the expectations of the many people working in digital signal processing who want an ordered mathematical theory describing the signal sampling process.
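The classical core of that theory is the Whittaker-Shannon reconstruction formula: a band-limited signal sampled above the Nyquist rate is recovered exactly by sinc interpolation, x(t) = sum_n x[n] * sinc(fs*t - n). A minimal numerical sketch, with the tone frequency and rates chosen purely for illustration:

```python
import numpy as np

# Whittaker-Shannon: x(t) = sum_n x[n] * sinc(fs*t - n) recovers a
# band-limited signal sampled above the Nyquist rate (values illustrative).
f0, fs, T = 3.0, 10.0, 2.0              # 3 Hz tone, 10 Hz sampling > 2*f0
n = np.arange(int(T * fs))              # sample indices
samples = np.cos(2 * np.pi * f0 * n / fs)

t = np.linspace(0, T, 1000, endpoint=False)
recon = samples @ np.sinc(fs * t[None, :] - n[:, None])  # sinc interpolation

err = np.max(np.abs(recon - np.cos(2 * np.pi * f0 * t)))
print("max reconstruction error:", err)  # small away from the interval edges
```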
Compressed Sensing
Compressed sensing is an exciting, rapidly growing field, attracting considerable attention in electrical engineering, applied mathematics, statistics and computer science. This book provides the first detailed introduction to the subject, highlighting recent theoretical advances and a range of applications, as well as outlining numerous remaining research challenges. After a thorough review of the basic theory, many cutting-edge techniques are presented, including advanced signal modeling, sub-Nyquist sampling of analog signals, non-asymptotic analysis of random matrices, adaptive sensing, greedy algorithms and use of graphical models. All chapters are written by leading researchers in the field, and consistent style and notation are utilized throughout. Key background information and clear definitions make this an ideal resource for researchers, graduate students and practitioners wanting to join this exciting research area. It can also serve as a supplementary textbook for courses on computer vision, coding theory, signal processing, image processing and algorithms for efficient data processing.
Event-Based Sensing and Signal Processing in the Visual, Auditory, and Olfactory Domain: A Review
The nervous system converts the physical quantities sensed by its primary receptors into trains of events that are then processed in the brain. Its unmatched efficiency in information processing has long inspired engineers to seek brain-like approaches to sensing and signal processing. The key principle pursued in neuromorphic sensing is to shed the traditional approach of periodic sampling in favor of an event-driven scheme that mimics sampling as it occurs in the nervous system, where events are preferably emitted upon a change in the sensed stimulus. In this paper we highlight the advantages and challenges of event-based sensing and signal processing in the visual, auditory, and olfactory domains. We also provide a survey of the literature covering neuromorphic sensing and signal processing in all three modalities. Our aim is to facilitate research in event-based sensing and signal processing by providing a comprehensive overview of the research performed previously, as well as by highlighting conceptual advantages, current progress, and future challenges in the field.
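A common minimal model of such change-driven sampling is the send-on-delta scheme: emit a signed event whenever the stimulus drifts more than a threshold away from the last emitted level. The sketch below is an illustrative stand-in, not a method from the review; the test signal and threshold are assumptions.

```python
import numpy as np

def send_on_delta(signal, delta):
    """Emit a signed event each time the signal moves more than `delta`
    from the last emitted level (an illustrative stand-in for
    change-driven, event-based sampling)."""
    events, ref = [], signal[0]
    for i in range(1, len(signal)):
        if signal[i] - ref >= delta:
            events.append((i, +1)); ref += delta
        elif ref - signal[i] >= delta:
            events.append((i, -1)); ref -= delta
    return events

t = np.linspace(0, 1, 1000)
stimulus = np.sin(2 * np.pi * 2 * t) * np.exp(-2 * t)  # decaying oscillation
events = send_on_delta(stimulus, delta=0.05)
print(f"{len(events)} events instead of {len(t)} periodic samples")
```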
Anti-interrupted-sampling repeater jamming method based on frequency agility waveform and sparse recovery
Interrupted-sampling repeater jamming (ISRJ) is a type of intra-pulse coherent jamming that poses a significant threat to radar detection and tracking of targets. This paper proposes an ISRJ suppression method based on a frequency-agile waveform and sparse recovery, exploiting the temporal discontinuity and modulation characteristics of ISRJ. The method is particularly suitable for scenarios with a high jamming duty ratio (JDR) and a high jammer sampling duty ratio (SDR). By dividing the transmitted waveform into sub-pulses with different carrier frequencies and applying a two-round block sparse algorithm, the method accurately recovers three parameters of the ISRJ, achieving effective jamming identification, reconstruction, and cancellation. Additionally, a target detection technique based on robust sparse recovery is proposed, significantly improving the stability and accuracy of target detection. Comparative experimental results in three scenarios confirm the effectiveness and superiority of the method under high-JDR and high-SDR conditions.
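As a rough illustration of the waveform-side idea only (sub-pulses on hopped carriers), here is a sketch that generates a frequency-agile pulse train; every parameter (sample rate, sub-pulse length, carrier bank, hop count) is invented for the example, and the paper's two-round block sparse recovery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frequency-agile pulse: M contiguous sub-pulses, each on a carrier
# hopped over a candidate bank (all parameters invented for illustration).
fs, Tsub, M = 200e6, 2e-6, 8                 # sample rate, sub-pulse length
f_bank = 5e6 + 2.5e6 * np.arange(16)         # candidate carriers, Hz

hops = rng.choice(f_bank, size=M, replace=False)
t = np.arange(int(Tsub * fs)) / fs           # one sub-pulse time axis
waveform = np.concatenate([np.exp(2j * np.pi * f * t) for f in hops])
print("hop sequence (MHz):", hops / 1e6)
print("waveform samples:", waveform.size)
```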
Optimizing the frequency of ecological momentary assessments using signal processing
Ecological momentary assessment (EMA) is increasingly recognized as a vital tool for tracking the fluctuating nature of mental states and symptoms in psychiatric research. However, determining the optimal sampling rate, that is, deciding how often participants should be queried to report their symptoms, remains a significant challenge. To address this issue, our study applies the Nyquist-Shannon theorem from signal processing, which establishes that any sampling rate greater than twice the highest frequency component of a signal is adequate. We applied the theorem to two EMA datasets on depressive symptoms, encompassing a combined total of 35,452 data points collected over periods ranging from 30 to 90 days per individual. Our analysis of both datasets suggests that the most effective sampling strategy involves measurements at least every other week. We find that measurements at higher frequencies provide valuable and consistent information across both datasets, with significant peaks at weekly and daily intervals. The ideal measurement frequency remains largely consistent regardless of the specific symptoms used to estimate depression severity. For conditions in which abrupt or transient symptom dynamics are expected, such as during treatment, more frequent data collection is recommended; for regular monitoring, weekly assessments of depressive symptoms may be sufficient. We discuss the implications of our findings for EMA study optimization, address our study's limitations, and outline directions for future research.
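The Nyquist-style reasoning can be illustrated end to end: estimate the highest frequency carrying meaningful power in a symptom time series, then require a sampling interval shorter than half that component's period. The series, power threshold, and seed below are synthetic illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily severity scores: a 14-day cycle, a weekly component,
# and noise (illustrative only, not the study's data).
days = np.arange(90)
scores = (np.sin(2 * np.pi * days / 14)
          + 0.5 * np.sin(2 * np.pi * days / 7)
          + 0.2 * rng.normal(size=days.size))

spec = np.abs(np.fft.rfft(scores - scores.mean()))
freqs = np.fft.rfftfreq(days.size, d=1.0)      # cycles per day

# Highest frequency with meaningful power (threshold is an assumption).
f_max = freqs[spec > 0.2 * spec.max()].max()
print(f"highest significant component: {1 / f_max:.1f}-day period")
print(f"Nyquist-adequate spacing: at most every {1 / (2 * f_max):.1f} days")
```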
Guest editorial: Deep learning‐based point cloud processing, compression and analysis
Point cloud data is a large collection of high-dimensional 3D points with coordinates and attributes, and it has become one of the mainstream representations for emerging 3D applications such as virtual reality, autonomous vehicles, and robotics. Due to the large-scale, unstructured, high-dimensional nature of point clouds, their processing, transmission, and analysis have been challenging issues in multimedia signal processing and communication. Deep learning is a powerful tool for learning statistical knowledge from massive data, and advances in artificial intelligence, especially deep learning models, are offering new opportunities for point cloud processing, compression, and analysis. This special issue aims at promoting cutting-edge research on deep learning-based point cloud processing, including object detection, segmentation, registration, compression, and visual quality assessment.
Trends in Compressive Sensing for EEG Signal Processing Applications
The tremendous progress of big data acquisition and processing in the field of neural engineering has enabled a better understanding of patients' brain disorders and their neural rehabilitation, restoration, detection, and diagnosis. The integration of compressive sensing (CS) and neural engineering is emerging as a new research area, aiming to handle large volumes of neurological data for fast, long-term, and energy-saving operation. Furthermore, electroencephalography (EEG) signals for brain–computer interfaces (BCIs) have been shown to be very promising, with diverse neuroscience applications. In this review, we focus on EEG-based approaches that have benefited from CS in achieving fast and energy-saving solutions. In particular, we examine the current practices, scientific opportunities, and challenges of CS in the growing field of BCIs. We emphasize summarizing the major CS reconstruction algorithms and the sparse bases and measurement matrices used to process EEG signals. This literature review suggests that the selection of a suitable reconstruction algorithm, sparse basis, and measurement matrix can help improve the performance of current CS-based EEG studies. We also provide an overview of the reconstruction-free CS approach and the related literature in the field. Finally, we discuss the opportunities and challenges that arise from pushing the integration of the CS framework into BCI applications.
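To fix ideas about the three ingredients the review emphasizes (measurement matrix, sparse basis, reconstruction algorithm), here is a generic compressive sensing sketch with a random Gaussian measurement matrix, a DCT sparsity basis, and orthogonal matching pursuit (OMP) for recovery. The dimensions and basis choice are illustrative assumptions, not an EEG pipeline from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, K = 256, 64, 5        # signal length, measurements, sparsity (assumed)

# Sparse basis Psi: orthonormal DCT-II columns (an illustrative choice).
k = np.arange(N)
Psi = np.sqrt(2 / N) * np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / N)
Psi[:, 0] /= np.sqrt(2)

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then refit by least squares."""
    resid, idx = y.copy(), []
    for _ in range(sparsity):
        idx.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        resid = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

x_true = np.zeros(N)                             # K-sparse coefficients
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)       # random measurement matrix
y = Phi @ (Psi @ x_true)                         # compressive measurements

x_hat = omp(Phi @ Psi, y, K)
rel = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative recovery error:", rel)
```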
Multi-tier dynamic sampling weak RF signal estimation theory
This paper presents a theoretical analysis in discrete time of a multi-tier weak radio-frequency (RF) signal estimation process with N simultaneous signals. Discrete-time dynamic sampling is introduced and is shown to extract signal parameter values with greater accuracy than the estimates obtained in prior work, advancing phase-measurement approaches to weak-signal parameter estimation. For N=2 simultaneous signals, with a strong signal at 850 MHz and a weak signal at 855 MHz, the results show that dynamically sampling the instantaneous frequency at 24 times the Nyquist rate yields weak-signal frequency estimates within 1.7×10⁻⁵ of the actual weak-signal frequency and weak-signal amplitude estimates within 428 ppm of the actual weak-signal amplitude. Results are also presented for N=2 simultaneous 5G signals: in one case the strong signal is at 3950 MHz and the weak signal at 3955 MHz; in the other, the strong signal is at 5950 MHz and the weak signal at 5955 MHz. In both cases, the estimates obtained with dynamic sampling are more accurate than those obtained using a single sample rate of 65 MSPS. This work has promising applications for weak-signal parameter estimation using instantaneous frequency measurements.
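For intuition about why instantaneous-frequency measurements carry weak-signal information, here is a sketch that forms the analytic signal with an FFT-based Hilbert transform and differentiates its phase: the weak tone shows up as a ripple on the instantaneous frequency at the beat frequency, with an excursion set by the amplitude ratio. The frequencies and sample rate are scaled-down assumptions, and the paper's multi-tier dynamic sampling scheme is not reproduced here.

```python
import numpy as np

fs = 200e6                           # sample rate (scaled-down assumption)
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * 50e6 * t) + 0.01 * np.cos(2 * np.pi * 55e6 * t)

# Analytic signal via an FFT-based Hilbert transform.
X = np.fft.fft(x)
X[len(X) // 2 + 1:] = 0              # drop negative frequencies
X[1:len(X) // 2] *= 2                # double positive frequencies
analytic = np.fft.ifft(X)

# Instantaneous frequency = derivative of the unwrapped phase.
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

# The weak tone appears as a ripple around the strong carrier at the
# 5 MHz beat rate; its size scales with the 0.01 amplitude ratio.
print(f"mean IF: {inst_freq.mean() / 1e6:.2f} MHz, peak-to-peak ripple: "
      f"{(inst_freq.max() - inst_freq.min()) / 1e3:.0f} kHz")
```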