710 result(s) for "coding dictionaries"
The Deep Learning Solutions on Lossless Compression Methods for Alleviating Data Load on IoT Nodes in Smart Cities
Networking is crucial for smart city projects, as it offers an environment where people and things are connected. This paper presents a chronology of factors in the development of smart cities, including IoT technologies as network infrastructure. Growth in the number of IoT nodes increases data flow, a potential source of failure for IoT networks; their biggest challenge is that nodes may have insufficient memory to handle all transaction data within the network. In this paper we aim to propose a potential compression method for reducing IoT network data traffic. We therefore investigate various lossless compression algorithms, such as entropy- and dictionary-based algorithms, as well as general compression methods, to determine which adhere to IoT specifications. Furthermore, this study conducts compression experiments using entropy coders (Huffman, adaptive Huffman) and dictionary coders (LZ77, LZ78) on five different types of IoT traffic datasets. Although all of the above algorithms can alleviate IoT data traffic, adaptive Huffman gave the best compression. We therefore propose a conceptual compression method for IoT data traffic that improves adaptive Huffman using deep learning concepts: weights, pruning, and pooling in the neural network. The proposed algorithm is expected to obtain a better compression ratio. We also discuss the challenges of applying the proposed algorithm to IoT data compression given the limitations of IoT memory and processors, so that it can later be implemented in IoT networks.
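The dictionary-based coders surveyed in the abstract above can be illustrated with a minimal LZ78 sketch: each unseen phrase is emitted as a (dictionary index, next character) pair and added to the dictionary. This is a generic textbook rendition, not the paper's implementation, and the function names are ours.

```python
def lz78_compress(data: str):
    """Compress a string into (dictionary index, next char) pairs (LZ78)."""
    dictionary = {"": 0}          # phrase -> index; the empty phrase is index 0
    output = []
    phrase = ""
    for ch in data:
        if phrase + ch in dictionary:
            phrase += ch          # keep extending the current match
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a trailing match as one final pair
        output.append((dictionary[phrase[:-1]], phrase[-1]))
    return output


def lz78_decompress(pairs):
    """Rebuild the string by replaying the dictionary construction."""
    phrases = [""]
    out = []
    for index, ch in pairs:
        phrase = phrases[index] + ch
        phrases.append(phrase)
        out.append(phrase)
    return "".join(out)
```

On repetitive data the pair stream grows much more slowly than the input, which is why dictionary coders suit the periodic sensor readings typical of IoT traffic.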
Challenges Associated with the Safety Signal Detection Process for Medical Devices
Previous safety issues involving medical devices have stressed the need for better safety signal detection. Various European Union (EU) national competent authorities have started to focus on strengthening the analysis of vigilance data. Consequently, Article 90 of the new EU regulation states that the European Commission shall put in place systems and processes to actively monitor medical device safety signals. A systematic literature review was conducted to synthesize the current state of knowledge and investigate the tools presently used for medical device safety signal detection. An electronic literature search was performed in Embase, Medline, Cochrane, Web of Science, and Google Scholar from inception until January 2017. Articles that included terms related to medical devices and terms associated with safety were selected; a further selection was based on abstract review, and a full review of the remaining articles determined which were ultimately relevant. Completeness was assessed based on the content of the articles. Our search returned a total of 20,819 articles, of which 24 met the inclusion criteria and were subject to data extraction and completeness scoring. A wide range of data sources, especially spontaneous reporting systems and registries, used for the detection and assessment of product problems and patient harms associated with the use of medical devices, were studied. Coding is remarkably heterogeneous, no agreement exists on the preferred methods for signal detection, and no gold standard for signal detection has been established thus far. Data source harmonization, the development of gold-standard signal detection methodologies and the standardization of coding dictionaries are among the recommendations to support the implementation of a new proactive approach to signal detection.
The new safety surveillance system will be able to use real-world evidence to support regulatory decision-making across all jurisdictions.
Robust l2,1 Norm-Based Sparse Dictionary Coding Regularization of Homogenous and Heterogenous Graph Embeddings for Image Classifications
In the field of manifold learning, Marginal Fisher Analysis (MFA), Discriminant Neighborhood Embedding (DNE) and Double Adjacency Graph-based DNE (DAG-DNE) construct the graph embedding for homogeneous and heterogeneous k-nearest neighbors (i.e., double adjacency) before feature extraction. All of them have two shortcomings: (1) they are vulnerable to noise; (2) the number of feature dimensions is fixed and likely very large. Taking advantage of the sparsity effect and de-noising property of a sparse dictionary, we add an l2,1 norm-based sparse dictionary coding regularization term to the double-adjacency graph embedding, forming an objective function that seeks a small number of significant dictionary atoms for feature extraction. Since our initial objective function has no closed-form solution, we construct an auxiliary function instead. Theoretically, the auxiliary function has a closed-form solution w.r.t. dictionary atoms and sparse coding coefficients in each iterative step, and its monotonically decreasing value pulls down the initial objective function value. Extensive experiments on a synthetic dataset, the Yale face dataset, the UMIST face dataset and a terrain cover dataset demonstrate that our proposed algorithm can push the separability among heterogeneous classes onto far fewer dimensions and is robust to noise.
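The l2,1 norm used as the regularizer above is simply the sum of the Euclidean norms of a matrix's rows; penalizing it drives whole rows (dictionary atoms) to zero, which is how such a term selects a small set of atoms. A minimal pure-Python sketch (the helper name is ours):

```python
from math import sqrt


def l21_norm(matrix):
    """l2,1 norm: the sum over rows of each row's Euclidean (l2) norm.

    A row of all zeros contributes nothing, so minimizing this term
    encourages row-wise (atom-wise) sparsity rather than entry-wise sparsity.
    """
    return sum(sqrt(sum(x * x for x in row)) for row in matrix)
```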
An Optimal Lempel Ziv Markov Based Microarray Image Compression Algorithm
In recent years, microarray technology has gained attention for the concurrent monitoring of numerous microarray images. It remains a major challenge to process, store and transmit such huge volumes of microarray images, so image compression techniques are used to reduce the number of bits, allowing the images to be stored and shared easily. Various techniques have been proposed in the past, with applications in different domains. The current paper presents a novel image compression technique, optimized Linde–Buzo–Gray (OLBG) with Lempel–Ziv–Markov chain algorithm (LZMA) coding, called OLBG-LZMA, for compressing microarray images without any loss of quality. The LBG model is generally used to design a locally optimal codebook for image compression. Codebook construction is treated as an optimization problem and resolved with the help of the Grey Wolf Optimization (GWO) algorithm. Once the codebook is constructed by the LBG-GWO algorithm, LZMA is employed to compress the index table and further raise compression efficiency. Experiments were performed on a high-resolution Tissue Microarray (TMA) image dataset of 50 prostate tissue samples collected from prostate cancer patients. The compression performance of the proposed coding was compared with recently proposed techniques. The simulation results show that OLBG-LZMA achieved significant compression performance compared to other techniques.
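The LBG codebook design mentioned above alternates nearest-neighbour assignment with centroid update, much like k-means. A one-dimensional sketch of that refinement loop, under our own simplifications (scalar samples, fixed iteration count, no GWO step):

```python
def lbg_codebook(samples, codebook, iterations=10):
    """One-dimensional Linde-Buzo-Gray refinement.

    Repeatedly assign each sample to its nearest codeword, then move each
    codeword to the centroid of its assigned cell (empty cells keep their
    old codeword)."""
    codebook = list(codebook)
    for _ in range(iterations):
        cells = [[] for _ in codebook]
        for s in samples:
            nearest = min(range(len(codebook)), key=lambda i: abs(s - codebook[i]))
            cells[nearest].append(s)
        codebook = [sum(c) / len(c) if c else codebook[i] for i, c in enumerate(cells)]
    return codebook
```

In the paper's pipeline the resulting index table (which codeword each image block maps to) is what LZMA then compresses.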
High-Payload Data-Hiding Method for AMBTC Decompressed Images
Data hiding is the art of embedding data into a cover image without any perceptual distortion of the cover image. It is a crucial research topic in information security because it can be used in various applications. In this study, we propose a high-capacity data-hiding scheme for absolute moment block truncation coding (AMBTC) decompressed images. We statistically analyzed the composition of the secret data string and developed a unique encoding and decoding dictionary search for adjusting pixel values; the dictionary is used in both the embedding and extraction stages, and it provides high data-hiding capacity because the secret data is compressed using dictionary-based coding. The experimental results reveal that the proposed scheme outperforms existing schemes with respect to data-hiding capacity and visual quality.
Whats and Hows? The Practice-Based Typology of Narrative Analyses
The essence of qualitative research practice is its multi-paradigmatic character, which gives rise to the coexistence of different methodological approaches to analyzing and studying human experience in the world of everyday life. This diversity is particularly visible in the field of narrative data research and analysis. The aim of this article is a methodological reflection on the construction of typologies of narrative analyses, together with a proposal for a new way of typologizing analytical approaches, based on combining corpus linguistics and natural language processing with CAQDAS procedures, content analysis and text mining. The typology is grounded in an analysis of narrative research practices reflected in the language of English-language articles published in five internationally recognized qualitative methodology journals between 2002 and 2016. In the article I use a dictionary method to code the articles, and hierarchical clustering and topic modeling to discover different types of narrative analysis in these publications and to study the semantic relations between them. At the same time, I confront Riessman's heuristic typology with an approach based on linguistics and data mining in order to develop a coherent picture of narrative analysis methodology in the contemporary field of qualitative research. Finally, I present a new model of thinking about narrative analysis.
United coding method for compound image compression
This paper proposes a compound image coding method named united coding (UC). In UC, several lossless coding tools, such as dictionary-entropy coders, run-length encoding (RLE), Hextile, and a few filters used in the portable network graphics (PNG) format, are united into H.264-like intraframe hybrid video coding. The basic coding unit (BCU) typically has a size between 16 × 16 and 64 × 64 pixels. All coders in UC are used to code each BCU; the lossless coder that generates the minimum bit-rate (R) is then chosen as the optimal lossless coder. Finally, the overall optimal coder is chosen from the lossy intraframe hybrid coder and the optimal lossless coder using an R-D cost-based optimization criterion. Moreover, the data coded by one lossless coder can be used as the dictionary of other lossless coders. Experimental results demonstrate that, compared with H.264, UC achieves up to 20 dB PSNR improvement and better visual quality for compound images with mixed text, graphics and natural pictures. Compared with lossless coders such as gzip and PNG, UC can achieve a 2–5 times higher compression ratio with only a minor loss, while keeping partial-lossless picture quality. The partial-lossless nature of UC is indispensable for typical applications such as cloud computing and rendering, cloudlet-screen computing and remote desktop, where lossless coding of partial image regions is demanded. On the other hand, the implementation complexity and cost increment of UC are moderate, typically less than 25% of a traditional hybrid coder such as H.264.
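The "code each block with every tool and keep the cheapest output" idea behind UC can be sketched with stdlib stand-ins for its coders (zlib for the dictionary-entropy coder and a toy RLE; both choices are ours, not the paper's):

```python
import zlib


def rle_encode(data: bytes) -> bytes:
    """Toy run-length encoder: (count, byte) pairs, counts capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and j - i < 255 and data[j] == data[i]:
            j += 1
        out += bytes([j - i, data[i]])
        i = j
    return bytes(out)


def best_coder(block: bytes):
    """Code one block with every candidate and keep the smallest output,
    mirroring UC's per-BCU minimum-bit-rate selection (without the lossy
    branch or R-D weighting)."""
    candidates = {
        "raw": block,
        "zlib": zlib.compress(block),
        "rle": rle_encode(block),
    }
    name = min(candidates, key=lambda k: len(candidates[k]))
    return name, candidates[name]
```

Flat text or graphics regions tend to pick the run-length or dictionary coder, while noisy natural-image blocks fall back to the raw (or, in UC, the lossy hybrid) path.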
Dynamic sparse coding for sparse time-series modeling via first-order smooth optimization
Sparse coding, often called dictionary learning, has received significant attention in the fields of statistical machine learning and signal processing. However, most approaches assume an i.i.d. data setup, which is easily violated when the data retains statistical structure, such as sequences whose samples are temporally correlated. In this paper we formulate a novel dynamic sparse coding problem and propose an efficient algorithm that enforces smooth dynamics for the latent state vectors (codes) within a linear dynamic model while imposing sparseness on the state vectors. We overcome the added computational overhead of the smooth dynamic constraints by adopting a recent first-order smooth optimization technique, adjusted for our problem instance. We demonstrate the improved prediction performance of our approach over conventional sparse coding on several interesting real-world problems, including financial asset return forecasting and human motion estimation from silhouette videos.
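The sparseness constraint in sparse coding is typically an l1 penalty, and first-order methods of the kind mentioned above handle it through its proximal operator, elementwise soft-thresholding. A minimal sketch of that core step (a generic ISTA ingredient, not the paper's full algorithm):

```python
def soft_threshold(x: float, lam: float) -> float:
    """Proximal operator of lam * |x|: shrink toward zero, zeroing small values.

    Applied elementwise to a gradient-updated code vector, this is the step
    that produces exact zeros, i.e. sparse codes, in ISTA-style solvers.
    """
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```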
Learning a common dictionary for subject-transfer decoding with resting calibration
Brain signals measured over a series of experiments have inherent variability because of different physical and mental conditions among multiple subjects and sessions. Such variability complicates the analysis of data from multiple subjects and sessions in a consistent way, and degrades the performance of subject-transfer decoding in a brain–machine interface (BMI). To accommodate the variability in brain signals, we propose 1) a method for extracting spatial bases (or a dictionary) shared by multiple subjects, by employing a signal-processing technique of dictionary learning modified to compensate for variations between subjects and sessions, and 2) an approach to subject-transfer decoding that uses the resting-state activity of a previously unseen target subject as calibration data for compensating for variations, eliminating the need for a standard calibration based on task sessions. Applying our methodology to a dataset of electroencephalography (EEG) recordings during a selective visual–spatial attention task from multiple subjects and sessions, where the variability compensation was essential for reducing the redundancy of the dictionary, we found that the extracted common brain activities were reasonable in the light of neuroscience knowledge. The applicability to subject-transfer decoding was confirmed by improved performance over existing decoding methods. These results suggest that analyzing multisubject brain activities on common bases by the proposed method enables information sharing across subjects with low-burden resting calibration, and is effective for practical use of BMI in variable environments. 
  • Novel method for extracting spatial bases of brain signals shared by multiple subjects.
  • Subject-transfer decoding using activities on the common spatial bases.
  • Calibration of the decoders for target subjects using resting-state recordings.
  • Robust EEG analysis results based on a dataset of more than forty subjects.
  • Better subject-transfer decoding performance than existing methods.
Discovering and characterizing dynamic functional brain networks in task FMRI
Many existing studies of functional brain network mapping impose an implicit assumption that the networks' spatial distributions are constant over time. However, the latest research reveals that functional brain networks are dynamic and have time-varying spatial patterns. Furthermore, how these functional networks evolve over time has not yet been elaborated in sufficient detail. In this paper, we aim to discover and characterize the dynamics of functional brain networks via a windowed, group-wise dictionary learning and sparse coding approach. First, we aggregated the sampled subjects' fMRI signals into one big data matrix and learned a common dictionary for all individuals via a group-wise dictionary learning step. Second, we obtained the dynamic time-varying functional networks using the windowed time-varying sparse coding approach. Experimental results demonstrated that our windowed group-wise dictionary learning and sparse coding method can effectively detect task-evoked networks and characterize how they evolve over time. This work provides novel insights into the dynamics of functional brain networks.