Catalogue Search | MBRL
Explore the vast range of titles available.
15,087 result(s) for "Image retrieval"
A Benchmark Dataset for Performance Evaluation of Multi-Label Remote Sensing Image Retrieval
by Weixun Zhou, Zhenfeng Shao, Ke Yang
in convolutional neural networks, handcrafted features, multi-label benchmark dataset
2018
Benchmark datasets are essential for developing and evaluating remote sensing image retrieval (RSIR) approaches. However, most of the existing datasets are single-labeled, with each image annotated by a single label representing its most significant semantic content. This is sufficient for simple problems, such as distinguishing between a building and a beach, but multiple labels are required for more complex problems, such as RSIR. This motivated us to present a new benchmark dataset, termed "MLRSIR", relabeled from an existing single-labeled remote sensing archive. MLRSIR contains a total of 17 classes, and each image carries at least one of the 17 pre-defined labels. We evaluated the performance of RSIR methods ranging from traditional handcrafted-feature-based methods to deep-learning-based ones on MLRSIR. More specifically, we compared the performance of RSIR methods from both single-label and multi-label perspectives. The results demonstrate the advantages of multiple labels over single labels for interpreting complex remote sensing images and serve as a baseline for future research on multi-label RSIR.
Journal Article
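As a rough illustration of what multi-label relevance can mean in this setting, the sketch below scores retrieved images by whether they share any of the query's labels; the label vectors are toy values, not the MLRSIR annotation or the paper's evaluation protocol.

```python
# Hypothetical sketch (not the authors' code): scoring retrieved images by
# label overlap with the query, as a simple multi-label relevance measure.
import numpy as np

def label_overlap_precision(query_labels, retrieved_labels):
    """Fraction of retrieved images sharing at least one label with the query.

    query_labels:     binary vector of shape (num_labels,)
    retrieved_labels: binary matrix of shape (num_retrieved, num_labels)
    """
    shared = retrieved_labels @ query_labels      # label-intersection counts
    return float(np.mean(shared > 0))

# Toy example with 4 pre-defined labels and 3 retrieved images.
query = np.array([1, 0, 1, 0])
retrieved = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]])
print(label_overlap_precision(query, retrieved))  # 2 of 3 share a label -> ~0.667
```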
Content-based image retrieval: A review of recent trends
by Abdulhussain, Sadiq H.; Hameed, Ibtihaal M.; Mahmmod, Basheera M.
in Algorithms, content-based image retrieval, Digital imaging
2021
With the availability of internet technology and the low cost of digital image sensors, enormous numbers of image databases have been created for different kinds of applications. These image databases increase the demand for efficient image retrieval methods that meet user requirements. Great attention and effort have been devoted to improving content-based image retrieval (CBIR) methods, with a particular focus on reducing the semantic gap between low-level features and human visual perception. Given the increasing research in this field, this paper surveys, analyses and compares the state-of-the-art methodologies of the last six years in the CBIR field. It also provides an overview of the CBIR framework, recent low-level feature extraction methods, machine learning algorithms, similarity measures, and performance evaluation, to inspire further research efforts.
Journal Article
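To make the surveyed extract-then-rank CBIR framework concrete, here is a minimal retrieval sketch using a simple grayscale-histogram feature and cosine similarity; the surveyed systems use far richer low-level and learned descriptors, so treat this only as a skeleton of the pipeline.

```python
# A minimal CBIR sketch under generic assumptions: grayscale-histogram features
# and cosine similarity ranking over a list of image paths.
import numpy as np
from PIL import Image

def histogram_feature(path, bins=64):
    """Normalized grayscale intensity histogram as a low-level feature."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    return hist / (hist.sum() + 1e-12)

def rank_database(query_path, database_paths):
    """Return database paths sorted by cosine similarity to the query."""
    q = histogram_feature(query_path)
    feats = np.stack([histogram_feature(p) for p in database_paths])
    sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)
    return [database_paths[i] for i in order]
```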
Coverless image steganography using partial-duplicate image retrieval
by Zhou, Zhili; Mu, Yan; Wu, Q. M. Jonathan
in Artificial Intelligence, Color imagery, Computational Intelligence
2019
Most of the existing image steganographic approaches embed secret information imperceptibly into a cover image by slightly modifying its content. However, the modification traces cause some distortion in the stego-image, especially when embedding color image data that usually contain thousands of bits, which makes successful steganalysis possible. In this paper, we propose a novel coverless steganographic approach, without any modification, for transmitting a secret color image. In our approach, instead of modifying a cover image to generate the stego-image, steganography is realized by using a set of proper partial duplicates of a given secret image as stego-images, which are retrieved from a natural image database. More specifically, after dividing each database image into a number of non-overlapping patches and indexing those images based on the features extracted from these patches, we search for the partial duplicates of the secret image in the database to obtain the stego-images, each of which shares one or several visually similar patches with the secret image. At the receiver end, by using the patches of the stego-images, our approach can approximately recover the secret image. Since the stego-images are natural images without any modification traces, our approach can resist all of the existing steganalysis tools. Experimental results and analysis show that our approach not only has strong resistance to steganalysis, but also offers desirable security and high hiding capability.
Journal Article
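The core retrieval step described above can be sketched roughly as follows: each database image is split into non-overlapping patches, each patch gets a feature, and a database image counts as a partial duplicate if any of its patches is close to a patch of the secret image. The mean-colour feature, patch size and distance threshold below are simplifying assumptions, not the paper's descriptor or indexing scheme.

```python
# Simplified partial-duplicate search: non-overlapping patches described by
# their mean colour, matched against the secret image's patches.
import numpy as np

def patch_features(image, patch=8):
    """Mean-colour feature per non-overlapping patch; image is an HxWx3 array."""
    h, w, _ = image.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            feats.append(image[y:y + patch, x:x + patch].mean(axis=(0, 1)))
    return np.array(feats)

def partial_duplicates(secret, database, patch=8, threshold=10.0):
    """Return indices of database images sharing at least one similar patch."""
    secret_feats = patch_features(secret, patch)
    hits = []
    for idx, img in enumerate(database):
        db_feats = patch_features(img, patch)
        dists = np.linalg.norm(secret_feats[:, None, :] - db_feats[None, :, :], axis=2)
        if (dists < threshold).any():
            hits.append(idx)
    return hits
```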
A Model of Semantic-Based Image Retrieval Using C-Tree and Neighbor Graph
by Le, Thanh Manh; Nhi, Nguyen Thi Uyen; Thanh The Van
in Datasets, K-nearest neighbors algorithm, Retrieval
2022
The problems of image mining and semantic image retrieval play an important role in many areas of life. In this paper, a semantic-based image retrieval system is proposed that relies on the combination of the C-Tree, which was built in our previous work, and a neighbor graph (called Graph-CTree) to improve accuracy. The k-Nearest Neighbor (k-NN) algorithm is used to classify the set of similar images retrieved on Graph-CTree and to create a set of visual words. An ontology framework for images is created semi-automatically. A SPARQL query is automatically generated from the visual words and executed on the ontology to retrieve image semantics. The experiments were performed on image datasets such as COREL, WANG, ImageCLEF, and Stanford Dogs, with precision values of 0.888473, 0.766473, 0.839814, and 0.826416, respectively. These results are compared with related works on the same image datasets, showing the effectiveness of the proposed methods.
Journal Article
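Of the pipeline above, only the query-generation step is sketched here: turning a set of visual words into a SPARQL query. The ontology prefix and the hasConcept property are hypothetical placeholders; the actual schema, the C-Tree and the neighbor-graph retrieval are defined in the paper.

```python
# Sketch of SPARQL generation from visual words, using a made-up ontology
# (example.org prefix, img:hasConcept property) purely for illustration.
def sparql_from_visual_words(visual_words):
    """Build a SPARQL query selecting images annotated with any visual word."""
    values = " ".join(f'"{w}"' for w in visual_words)
    return (
        "PREFIX img: <http://example.org/image-ontology#>\n"
        "SELECT DISTINCT ?image WHERE {\n"
        "  ?image img:hasConcept ?concept .\n"
        f"  VALUES ?concept {{ {values} }}\n"
        "}"
    )

print(sparql_from_visual_words(["beach", "palm_tree", "sea"]))
```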
Deep ensemble architectures with heterogeneous approach for an efficient content-based image retrieval
2024
In the field of digital image processing, content-based image retrieval (CBIR) has become essential for searching images based on visual content characteristics like color, shape, and texture, rather than relying on text-based annotations. To address the increasing demands for efficiency and precision in CBIR systems, we introduce the HybridEnsembleNet methodology. HybridEnsembleNet combines deep learning algorithms with an asymmetric retrieval framework to optimize feature extraction and comparison in extensive image databases. This novel approach, custom-made specifically for CBIR, employs a lightweight query structure capable of handling large-scale data in resource-constrained environments. The experiments were performed on the ROxford and RParis datasets. The deep learning component of HybridEnsembleNet significantly refines the accuracy of image matching and retrieval. On the ROxford dataset, in the medium and hard difficulty benchmarks, it demonstrates enhancements of 5.53% and 10.44%, respectively. Similarly, on the RParis dataset, under the medium and hard benchmarks, it exhibits improvements of 3.01% and 5.83%, showcasing superior performance compared to existing models. By overcoming the traditional limitations of CBIR systems in mean average precision (mAP) metrics, HybridEnsembleNet provides a scalable, efficient, and more accurate solution for retrieving relevant images from vast digital libraries.
Journal Article
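Since the reported gains are in mean average precision, the following is a standard mAP computation for ranked retrieval results, included only to make the metric concrete; the relevance flags in the example are toy values, not results from the paper.

```python
# Standard mAP over ranked retrieval lists: average precision per query,
# then the mean across queries.
import numpy as np

def average_precision(relevant_flags):
    """AP for one query; relevant_flags[i] is 1 if the i-th ranked item is relevant."""
    flags = np.asarray(relevant_flags, dtype=float)
    if flags.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(flags) / (np.arange(len(flags)) + 1)
    return float((precision_at_k * flags).sum() / flags.sum())

def mean_average_precision(per_query_flags):
    return float(np.mean([average_precision(f) for f in per_query_flags]))

print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1, 0]]))  # toy queries
```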
Leaf disease image retrieval with object detection and deep metric learning
2022
Rapid identification of plant diseases is essential for effective mitigation and control of their influence on plants. For automatic plant disease identification, classification of plant leaf images with deep learning algorithms is currently the most accurate and popular method. Existing methods rely on the collection of large amounts of annotated image data and cannot flexibly adjust the recognition categories, whereas we develop a new image retrieval system for automated detection, localization, and identification of individual leaf diseases in an open setting, that is, one where newly added disease types can be identified without retraining. In this paper, we first optimize the YOLOv5 algorithm, enhancing its ability to recognize small objects, which helps to extract leaf objects more accurately; secondly, we integrate classification with metric learning, jointly learning image categorization and similarity measurement, thus capitalizing on the prediction ability of available image classification models; and finally, we construct an efficient and nimble image retrieval system to quickly determine the leaf disease type. We report detailed experimental results on three publicly available leaf disease datasets and demonstrate the effectiveness of our system. This work lays the groundwork for promoting disease surveillance of plants applicable to intelligent agriculture and to crop research such as nutrition diagnosis, health status surveillance, and more.
Journal Article
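The open-set behaviour described above can be sketched as nearest-neighbour search over an embedding gallery: adding a new disease type just means inserting its reference embeddings, with no retraining. The embedding function is assumed to exist (in the paper it is the detector plus the metric-learning network); the vectors below are toy values.

```python
# Sketch of retrieval-based, open-set disease identification over a gallery
# of reference embeddings; embeddings here are toy 2-D vectors.
import numpy as np

class LeafDiseaseGallery:
    def __init__(self):
        self.embeddings, self.labels = [], []

    def add(self, embedding, label):
        """Register a reference embedding for a (possibly new) disease type."""
        self.embeddings.append(np.asarray(embedding, dtype=float))
        self.labels.append(label)

    def identify(self, query_embedding):
        """Return the label of the closest gallery embedding."""
        gallery = np.stack(self.embeddings)
        dists = np.linalg.norm(gallery - np.asarray(query_embedding, dtype=float), axis=1)
        return self.labels[int(np.argmin(dists))]

gallery = LeafDiseaseGallery()
gallery.add([0.9, 0.1], "leaf_rust")
gallery.add([0.1, 0.8], "powdery_mildew")   # new class added without retraining
print(gallery.identify([0.2, 0.7]))         # -> "powdery_mildew"
```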
Distribution Consistency Loss for Large-Scale Remote Sensing Image Retrieval
2020
Remote sensing images are characterized by massiveness, diversity and complexity. These characteristics place higher requirements on the speed and accuracy of remote sensing image retrieval. The feature extraction method plays a key role in retrieving remote sensing images. Deep metric learning (DML) captures semantic similarity between data points by learning embeddings in a vector space. However, due to the uneven distribution of samples in remote sensing image datasets, the pair-based losses currently used in DML are not suitable. To address this, we propose a novel distribution consistency loss. First, we define a new way to mine samples by selecting five in-class hard samples and five inter-class hard samples to form an informative set. This method allows the network to extract more useful information in a short time. Secondly, to avoid inaccurate feature extraction due to sample imbalance, we assign dynamic weights to the positive samples according to the ratio of hard samples to easy samples in the class, and call the resulting loss over the positive samples the sample balance loss. We combine the sample balance of the positive samples with the ranking consistency of the negative samples to form our distribution consistency loss. Finally, we build an end-to-end fine-tuning network suitable for remote sensing image retrieval. We present comprehensive experimental results on three publicly available remote sensing image datasets and show that our method achieves state-of-the-art performance.
Journal Article
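The informative-set mining step can be sketched as follows: for a given anchor, take the five in-class samples that are furthest away and the five inter-class samples that are closest. The distances and labels below are random toy data; the dynamic weighting and the full distribution consistency loss are as defined in the paper and not reproduced here.

```python
# Sketch of hard-sample mining for one anchor: k hardest positives
# (largest distance) and k hardest negatives (smallest distance).
import numpy as np

def mine_informative_set(dists, labels, anchor_label, k=5):
    """dists: distances from the anchor to all other samples (1-D array)."""
    labels = np.asarray(labels)
    pos = np.where(labels == anchor_label)[0]
    neg = np.where(labels != anchor_label)[0]
    hard_pos = pos[np.argsort(-dists[pos])[:k]]   # furthest same-class samples
    hard_neg = neg[np.argsort(dists[neg])[:k]]    # closest other-class samples
    return hard_pos, hard_neg

rng = np.random.default_rng(0)
dists = rng.random(20)
labels = rng.integers(0, 4, size=20)
print(mine_informative_set(dists, labels, anchor_label=0))
```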
Innovative local texture descriptor in joint of human-based color features for content-based image retrieval
by Ghattaei, Mohammad; Kelishadrokhi, Morteza Karimian; Fekri-Ershad, Shervan
in Color, Computer Imaging, Computer Science
2023
Image retrieval is one of the hot research topics in computer vision and has received much attention from researchers in the last decade. Image retrieval refers to retrieving the images most similar to a query from a huge image database. It is widely used in different domains such as medicine and search engines. Texture and color information play an important role in image content recognition, so in this paper an innovative approach is proposed based on a combination of color and texture features. In this respect, an extended version of local neighborhood difference patterns (ELNDP) is proposed for the first time to achieve discriminative features. The ELNDP exploits the advantages of the LBP and LNDP texture descriptors. Also, for global feature extraction, optimized color histogram features in the HSV color space are used to extract color features. Finally, an extended Canberra distance metric, which unlike the classic Canberra distance is not sensitive to low values, is used to retrieve more relevant images. The performance of the proposed approach is evaluated on five benchmark datasets: Corel 1K, 5K, 10K, STex and Colored Brodatz. The results are evaluated in terms of average precision rate (APR) and average recall rate (ARR). The experimental results show that the proposed approach provides higher retrieval performance than state-of-the-art methods in this area, including machine learning-based and deep learning-based approaches.
Journal Article
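Of the components described above, only the generic parts are easy to sketch: an HSV colour histogram and the classic Canberra distance. The ELNDP texture descriptor and the extended Canberra variant are specific to the paper and are not reproduced here.

```python
# Sketch of the colour branch (HSV histogram) and the classic Canberra
# distance; bin counts are illustrative assumptions.
import numpy as np
from PIL import Image

def hsv_histogram(path, bins=(8, 4, 4)):
    """Concatenated, normalized H/S/V histograms as a global colour feature."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=np.float64)
    feats = [np.histogram(hsv[..., c], bins=b, range=(0, 255))[0]
             for c, b in enumerate(bins)]
    feat = np.concatenate(feats).astype(float)
    return feat / (feat.sum() + 1e-12)

def canberra(x, y):
    """Canberra distance: sum of |x - y| / (|x| + |y|) over non-zero terms."""
    denom = np.abs(x) + np.abs(y)
    mask = denom > 0
    return float(np.sum(np.abs(x - y)[mask] / denom[mask]))
```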
A deep neural network model for content-based medical image retrieval with multi-view classification
by Kamath, S. Sowmya; Karthik, K.
in Annotations, Artificial Intelligence, Artificial neural networks
2021
In medical applications, retrieving similar images from repositories is essential for supporting diagnostic imaging-based clinical analysis and decision support systems. However, this is a challenging task due to the multi-modal and multi-dimensional nature of medical images. In practical scenarios, the availability of large and balanced datasets that can be used to develop intelligent systems for efficient medical image management is quite limited. Traditional models often fail to capture the latent characteristics of images and have achieved limited accuracy when applied to medical images. To address these issues, a deep neural network-based approach for view classification and content-based image retrieval is proposed, and its application to efficient medical image retrieval is demonstrated. We also designed an approach for assigning body-part orientation view classification labels, intended to reduce the variance that occurs across different types of scans. The learned features are first used to predict class labels and later used to model the feature space for similarity computation in the retrieval task. The outcome of this approach is measured in terms of an error score. When benchmarked against 12 state-of-the-art works, the model achieved the lowest error score of 132.45, a 9.62–63.14% improvement over the other works, highlighting its suitability for real-world applications.
Journal Article
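The two-stage idea, classify the view first and retrieve within it, can be sketched as follows; the features and view labels are assumed to come from an already trained network, and the function name is illustrative rather than the authors' API.

```python
# Sketch of view-constrained retrieval: only gallery images whose predicted
# view matches the query's view are ranked by feature distance.
import numpy as np

def retrieve_same_view(query_feat, query_view, gallery_feats, gallery_views, top_k=5):
    """Return indices of the top_k closest gallery images with a matching view."""
    gallery_feats = np.asarray(gallery_feats, dtype=float)
    candidates = [i for i, v in enumerate(gallery_views) if v == query_view]
    dists = np.linalg.norm(
        gallery_feats[candidates] - np.asarray(query_feat, dtype=float), axis=1)
    ranked = [candidates[i] for i in np.argsort(dists)]
    return ranked[:top_k]
```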
Effective features in content-based image retrieval from a combination of low-level features and deep Boltzmann machine
2023
Image retrieval is a convenient way to browse and search for a set of similar images. The main challenge of content-based image retrieval (CBIR) systems is to extract an appropriate feature vector for image description. In this research, a content-based image retrieval model focusing on extracting effective features is introduced. The introduced feature vector is a combination of low-level and mid-level image features. The extraction of low-level image features, including color, shape, and texture, is performed using the auto-correlogram, the Gabor wavelet transform, and multi-level fractal dimension analysis. The mid-level image features are extracted using a Deep Boltzmann Machine, which learns from the low-level image features and the relationships between them. The resulting feature vector of the proposed image retrieval model based on the combination of low-level features and a deep Boltzmann machine (LB-CBIR) is tuned on the Corel 1K dataset, and the performance of the model is measured on the Corel 1K-illumination, Corel 1K-Scale, Corel 5K, Corel 10K, Oxford Buildings and Caltech-256 datasets. The best evaluated results on these datasets, reported as average precision, are 99.4% for Corel 1K, 94.2% for Corel 1K-Scale, 82.05% for Corel 1K-illumination, 98.2% for Corel 5K, 90.2% for Corel 10K, 64.1% for Oxford Buildings, and 32.12% for Caltech-256. The explainability of the feature vector and the contribution of the extracted features in the proposed model are also interpreted by calculating Shapley values.
Journal Article
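The fusion step, combining low-level and mid-level features into a single descriptor, can be sketched as a normalize-then-concatenate operation; the deep Boltzmann machine that produces the mid-level part is not reproduced here, and the vectors in the example are toy values.

```python
# Sketch of feature fusion: normalize each part separately so neither
# dominates, then concatenate into one retrieval descriptor.
import numpy as np

def fuse_features(low_level, mid_level):
    """L2-normalize the low-level and mid-level vectors, then concatenate."""
    low = np.asarray(low_level, dtype=float)
    mid = np.asarray(mid_level, dtype=float)
    low = low / (np.linalg.norm(low) + 1e-12)
    mid = mid / (np.linalg.norm(mid) + 1e-12)
    return np.concatenate([low, mid])

print(fuse_features([1.0, 2.0, 2.0], [3.0, 4.0]))  # toy vectors
```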