Search Results
14,469 result(s) for "Histograms"
Dense Trajectories and Motion Boundary Descriptors for Action Recognition
This paper introduces a video representation based on dense trajectories and motion boundary descriptors. Trajectories capture the local motion information of the video. A dense representation guarantees good coverage of foreground motion as well as of the surrounding context. A state-of-the-art optical flow algorithm enables robust and efficient extraction of dense trajectories. As descriptors we extract features aligned with the trajectories to characterize shape (point coordinates), appearance (histograms of oriented gradients) and motion (histograms of optical flow). Additionally, we introduce a descriptor based on motion boundary histograms (MBH) which rely on differential optical flow. The MBH descriptor is shown to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion. We evaluate our video representation in the context of action classification on nine datasets, namely KTH, YouTube, Hollywood2, UCF sports, IXMAS, UIUC, Olympic Sports, UCF50 and HMDB51. On all datasets our approach outperforms current state-of-the-art results.
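The HOG/HOF descriptors mentioned above reduce to one core operation: binning gradient (or flow) orientations into a magnitude-weighted histogram per cell. A minimal NumPy sketch of that operation (`hog_cell` is a hypothetical helper for illustration, not the authors' code):

```python
import numpy as np

def hog_cell(patch, n_bins=8):
    """Quantize the gradient orientations of an image patch into an
    orientation histogram weighted by gradient magnitude (the core
    operation behind HOG; HOF does the same on optical-flow vectors)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # orientation in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    s = hist.sum()
    return hist / s if s > 0 else hist            # L1-normalize

# A horizontal intensity ramp has all its gradient energy pointing
# along +x, so the first orientation bin collects all the weight.
ramp = np.tile(np.arange(8.0), (8, 1))
h = hog_cell(ramp)
```

In the full descriptor, such per-cell histograms are computed over a spatio-temporal grid along each trajectory and concatenated.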
Image sub-division and quadruple clipped adaptive histogram equalization (ISQCAHE) for low exposure image enhancement
In this paper, a novel image sub-division and quadruple clipped adaptive histogram equalization (ISQCAHE) technique is proposed for the enhancement of low exposure images. The proposed method involves computation of the histogram using a new image sub-division approach, an enhancement-controlling mechanism, modification of the probability density function (PDF), and histogram equalization (HE). The original histogram is segmented into sub-histograms based on an exposure threshold and the mean, to preserve brightness and entropy. Then each sub-histogram is clipped separately to control the enhancement rate. To enhance visual quality, HE is applied to each sub-histogram using the modified PDF. The experimental results show that the proposed ISQCAHE method avoids unpleasant artifacts effectively and gives a natural appearance to the enhanced image. It is simple, adaptive and performs better than other techniques in terms of visual quality, absolute mean brightness error, entropy, natural image quality evaluation, brightness preservation, structural similarity index measure and feature similarity index measure.
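The clipping step described above bounds how many pixels any intensity bin may hold, which in turn bounds the slope of the equalization mapping and so limits over-enhancement. A simplified single-histogram sketch of the clipping idea (ISQCAHE additionally sub-divides the image and histogram; the `clip_limit` parameter here is an assumed illustration, not the paper's quadruple scheme):

```python
import numpy as np

def clipped_equalize(img, clip_limit=0.02):
    """Histogram equalization with clipping: bin counts above the clip
    limit are trimmed and the excess is redistributed uniformly over
    all bins before the cumulative mapping is built."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    limit = clip_limit * img.size
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / 256   # redistribute excess
    cdf = np.cumsum(hist) / hist.sum()
    lut = np.round(cdf * 255).astype(np.uint8)      # intensity mapping
    return lut[img]
```

Because clipped bins contribute less to the CDF, large flat regions no longer drag the mapping into extreme stretching.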
PSNR vs SSIM: imperceptibility quality assessment for image steganography
Peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are two measures widely used in image quality assessment. In steganographic images in particular, these two measures are used to assess imperceptibility. PSNR predates SSIM, is easy to compute, has been widely used in digital image measurement, and is considered tested and valid. SSIM is a newer measure designed around three factors, i.e. luminance, contrast, and structure, to better match the workings of the human visual system. Some research has discussed the correlation and comparison of these two measures, but no research explicitly discusses and suggests which is more suitable for steganography. This study aims to review, prove, and analyze the results of PSNR and SSIM measurements on three spatial-domain image steganography methods, i.e. LSB, PVD, and CRT. Color images were chosen as container images because human vision is more sensitive to color changes than to grayscale changes. The test results reveal several conflicting findings: LSB scores highest on PSNR, while PVD scores highest on SSIM. Additionally, histogram changes are more noticeable in LSB and CRT than in PVD. Other analyses, such as the RS attack, also show results more in line with SSIM measurements than with PSNR. Based on the testing and analysis, this research concludes that SSIM is a better measure of imperceptibility in all aspects, and future steganographic research should use at least SSIM.
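The two metrics compared above are easy to state concretely. PSNR is a log-scaled mean squared error, while SSIM combines luminance, contrast, and structure statistics. A sketch of PSNR and a single-window (global) SSIM for 8-bit images; note that standard SSIM averages the statistic over local windows, so this global variant is a simplification:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10 * np.log10(peak ** 2 / mse)

def ssim_global(a, b, peak=255.0):
    """SSIM computed over a single whole-image window, combining
    luminance (means), contrast (variances) and structure (covariance)."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilizing constants
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

PSNR depends only on pixel-wise error, which is why a method can win on PSNR yet lose on SSIM, as the abstract reports for LSB versus PVD.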
Histogram equalization using a selective filter
Many popular modern image processing software packages implement a naïve form of histogram equalization. This implementation is known to produce histograms that are not truly uniform. While exact histogram equalization techniques exist, these may produce undesirable artifacts in some scenarios. In this paper we consider the link between the established continuous theory for global histogram equalization and its discrete implementation, and we formulate a novel histogram equalization technique that builds upon and considerably improves the naïve approach. We show that we can linearly interpolate the cumulative distribution of a low-bit image by approximately dequantizing its intensities using a selective box filter. This helps to distribute the intensities more evenly. The proposed algorithm is subsequently evaluated and compared with existing works in the literature. We find that the method is capable of producing an equalized histogram that has a high entropy, while distances between similar intensities are preserved. The described approach has implications for several related image processing problems, e.g., edge detection.
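The "naïve" baseline the paper improves upon is the standard discrete equalization: map each intensity through the normalized cumulative histogram. Because many pixels share each quantized intensity, whole bins move together and the output histogram stays lumpy rather than uniform, which is the gap the paper's dequantization step targets. A sketch of that baseline:

```python
import numpy as np

def naive_equalize(img):
    """Naive global histogram equalization for an 8-bit image: build a
    lookup table from the normalized cumulative histogram. All pixels
    sharing an intensity map to the same output value, so the result
    is generally far from a uniform histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    lut = np.round(cdf / cdf[-1] * 255).astype(np.uint8)
    return lut[img]
```

The paper's approach effectively breaks these ties by approximately dequantizing intensities before building the cumulative distribution.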
Low illumination image enhancement based on improved CLAHE algorithm
In low-light conditions, target features are weakened or even disappear, which degrades target detection based on aerial images. To improve image quality, an improved CLAHE algorithm is designed. During enhancement, the CLAHE algorithm is adapted according to image pixel size and histogram characteristics, which improves its smoothness over uniformly colored regions. The experimental results show that the improved CLAHE algorithm outperforms other histogram equalization algorithms, and the method has low cost and practical application value.
Vectorized Adaptive Histograms for Sparse Oblique Forests
Classification using sparse oblique random forests provides guarantees on uncertainty and confidence while controlling for specific error types. However, these forests use more data and compute than other tree ensembles because they create deep trees and need to sort or histogram linear combinations of data at runtime. We provide a method for dynamically switching between histograms and sorting to find the best split. We further optimize histogram construction using vector intrinsics. Evaluating this on large datasets, our optimizations speed up training by 1.7-2.5x compared to existing oblique forests and 1.5-2x compared to standard random forests. We also provide a GPU and hybrid CPU-GPU implementation.
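The histogram-versus-sort trade-off above is about split finding: instead of sorting the projected feature values (O(n log n)), the values are binned once and candidate thresholds are scanned at bin edges. A sketch of histogram-based split selection by Gini impurity for binary labels (a simplified illustration of the technique, not the paper's implementation; `n_bins` and the impurity criterion are assumptions):

```python
import numpy as np

def best_split_by_histogram(values, labels, n_bins=32):
    """Pick a split threshold by histogramming the (projected) feature
    values per class, then scanning bin edges for the lowest weighted
    Gini impurity. Cost is O(n + n_bins) rather than O(n log n)."""
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    left_pos = np.cumsum(np.histogram(values[labels == 1], bins=edges)[0])
    left_neg = np.cumsum(np.histogram(values[labels == 0], bins=edges)[0])
    total_pos, total_neg = left_pos[-1], left_neg[-1]
    best_gini, best_edge = 1.0, edges[0]
    for i in range(n_bins - 1):              # candidate split after bin i
        lp, ln = left_pos[i], left_neg[i]
        rp, rn = total_pos - lp, total_neg - ln
        gini = 0.0
        for p, n in ((lp, ln), (rp, rn)):    # impurity of each side
            if p + n:
                gini += (p + n) * (1 - (p / (p + n)) ** 2 - (n / (p + n)) ** 2)
        gini /= total_pos + total_neg
        if gini < best_gini:
            best_gini, best_edge = gini, edges[i + 1]
    return best_edge, best_gini
```

Candidate thresholds are limited to bin edges, which trades a little split resolution for a large constant-factor and asymptotic win; the paper's further step is vectorizing the binning itself with SIMD intrinsics.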
Land-Use and Land-Cover Classification Using a Human Group-Based Particle Swarm Optimization Algorithm with an LSTM Classifier on Hybrid Pre-Processing Remote-Sensing Images
Land-use and land-cover (LULC) classification using remote sensing imagery plays a vital role in many environmental modeling and land-use inventories. In this study, a hybrid feature optimization algorithm along with a deep learning classifier is proposed to improve the performance of LULC classification, helping to predict wildlife habitat, deteriorating environmental quality, haphazard elements, etc. LULC classification is assessed using the Sat 4, Sat 6 and Eurosat datasets. After the selection of remote-sensing images, normalization and histogram equalization methods are used to improve the quality of the images. Then, a hybrid optimization is accomplished by using the local Gabor binary pattern histogram sequence (LGBPHS), the histogram of oriented gradients (HOG) and Haralick texture features for feature extraction from the selected images. The benefits of this hybrid optimization are a high discriminative power and invariance to color and grayscale images. Next, a human group-based particle swarm optimization (PSO) algorithm is applied to select the optimal features, whose benefits are a fast convergence rate and ease of implementation. After selecting the optimal feature values, a long short-term memory (LSTM) network is utilized to classify the LULC classes. Experimental results showed that the human group-based PSO algorithm with an LSTM classifier effectively differentiates the LULC classes in terms of classification accuracy, recall and precision. A maximum improvement of 6.03% on Sat 4 and 7.17% on Sat 6 in LULC classification is reached when the proposed human group-based PSO with LSTM is compared to individual LSTM, PSO with LSTM, and Human Group Optimization (HGO) with LSTM. Moreover, an improvement of 2.56% in accuracy is achieved compared to the existing models GoogleNet, Visual Geometry Group (VGG), AlexNet and ConvNet when the proposed method is applied.
Understanding atom probe’s analytical performance for iron oxides using correlation histograms and ab initio calculations
Field evaporation from ionic or covalently bonded materials often leads to the emission of molecular ions. The metastability of these molecular ions, particularly under the influence of the intense electrostatic field (10¹⁰ V m⁻¹), makes them prone to dissociation with or without an exchange of energy amongst them. These processes can affect the analytical performance of atom probe tomography (APT). For instance, neutral molecules formed through dissociation may not be detected at all, or may be detected with a time of flight no longer related to their mass, causing their loss from the analysis. Here, we evaluated the changes in the measured composition of FeO, Fe₂O₃ and Fe₃O₄ across a wide range of analysis conditions. Possible dissociation reactions are predicted by density-functional theory calculations considering the spin states of the molecules. The energetically favoured reactions are traced onto the multi-hit ion correlation histograms to confirm their existence within experiments, using an automated Python-based routine. The detected reactions are carefully analyzed to reflect upon the influence of these neutrals from dissociation reactions on the performance of APT for analysing iron oxides.
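A multi-hit ion correlation histogram, as used above, is a 2D histogram over pairs of ions detected in the same evaporation event: each pair contributes its ordered (m/z₁, m/z₂) coordinates, and dissociation reactions show up as features off the diagonal. A minimal sketch of the accumulation step (`hit_groups`, `bins` and `m_max` are hypothetical names, not the authors' routine):

```python
import numpy as np

def correlation_histogram(hit_groups, bins=100, m_max=80.0):
    """Accumulate every ordered pair of mass-to-charge values from each
    multi-hit event into a 2D histogram. hit_groups is a list of
    per-event arrays of m/z values; single-hit events contribute nothing."""
    m1, m2 = [], []
    for event in hit_groups:
        e = np.sort(np.asarray(event, dtype=float))
        for i in range(len(e)):
            for j in range(i + 1, len(e)):
                m1.append(e[i])
                m2.append(e[j])
    H, _, _ = np.histogram2d(m1, m2, bins=bins,
                             range=[[0, m_max], [0, m_max]])
    return H
```

In practice, predicted dissociation tracks (from the DFT calculations) are overlaid on this histogram to identify which reactions actually occur.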
Mathematical analysis of histogram equalization techniques for medical image enhancement: a tutorial from the perspective of data loss
This tutorial demonstrates a novel mathematical analysis of histogram equalization techniques and its application in medical image enhancement. In this paper, conventional Global Histogram Equalization (GHE), Contrast Limited Adaptive Histogram Equalization (CLAHE), Histogram Specification (HS) and Brightness Preserving Dynamic Histogram Equalization (BPDHE) are re-investigated through a novel mathematical analysis. All these HE methods are widely employed by researchers in image processing and medical image diagnosis; however, it has been observed that they share a significant limitation of data loss. In this paper, a mathematical proof is given that any histogram equalization method inevitably incurs data loss, because any HE method is non-linear. All these histogram equalization methods are implemented on two different datasets: a brain tumor MRI image dataset and a colorectal cancer H&E-stained histopathology image dataset. The Pearson Correlation Coefficient (PCC) and Structural Similarity Index Measure (SSIM) are both found in the range of 0.6-0.95 across all HE methods. Moreover, those results are compared with the Reinhard method, which is a linear contrast enhancement method. The experimental results suggest that the Reinhard method outperforms all HE methods for medical image enhancement. Furthermore, a popular CNN model, VGG-16, is implemented on the MRI dataset to show that there is a direct correlation between reduced accuracy and data loss.
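The PCC metric used above is a natural data-loss indicator for this argument: a linear (positive affine) enhancement such as the Reinhard method leaves PCC at exactly 1, while a non-linear HE mapping can only lower it. A sketch of the computation:

```python
import numpy as np

def pearson_cc(a, b):
    """Pearson correlation coefficient between two images. A positive
    affine mapping b = s*a + t (s > 0) gives exactly 1; non-linear
    intensity mappings generally give a value below 1."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom else 1.0
```

Comparing PCC between the original and enhanced image thus separates linear contrast stretching from the information-destroying non-linear remappings the proof concerns.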
A Privacy-Preserving Image Retrieval Based on AC-Coefficients and Color Histograms in Cloud Environment
Content-based image retrieval (CBIR) techniques have been widely deployed in many applications to mine the abundant information contained in images. Due to the large storage and computational requirements of CBIR, outsourcing image search to a cloud provider becomes a very attractive option for many owners with small devices. However, owing to the private content contained in images, directly outsourcing retrieval work to the cloud provider raises obvious privacy problems, so the images should be protected carefully before outsourcing. This paper presents a secure retrieval scheme for encrypted images in the YUV color space. In this scheme, the discrete cosine transform (DCT) is performed on the Y component. The resulting DC coefficients are encrypted with a stream cipher, and the resulting AC coefficients, as well as the other two color components, are encrypted with value permutation and position scrambling. The image owner then transmits the encrypted images to the cloud server. Upon receiving a query trapdoor from a query user, the server extracts an AC-coefficient histogram from the encrypted Y component and two color histograms from the other two color components. The similarity between the query trapdoor and a database image is measured by the Manhattan distance between their respective histograms. Finally, the encrypted images closest to the query image are returned to the query user.
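The server-side comparison described above is just an L1 (Manhattan) distance between histograms; value permutation and position scrambling preserve histograms, which is what makes the comparison possible on encrypted data. A sketch of the scoring step on plain intensity histograms (`bins` is an assumed parameter; the paper's scheme uses AC-coefficient and color histograms):

```python
import numpy as np

def manhattan_distance(img_a, img_b, bins=16):
    """Score two 8-bit images by the Manhattan (L1) distance between
    their normalized intensity histograms; smaller means more similar."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.abs(ha - hb).sum())
```

The server ranks database images by this distance and returns the closest encrypted images, never seeing the plaintext content.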