905 result(s) for "RGB image"
Deep-Learning-Based Multispectral Image Reconstruction from Single Natural Color RGB Image—Enhancing UAV-Based Phenotyping
Multispectral images (MSIs) are valuable for precision agriculture due to the extra spectral information acquired compared to natural color RGB (ncRGB) images. In this paper, we thus aim to generate MSIs with high spatial resolution through a robust, deep-learning-based reconstruction method using ncRGB images. Using data from an agronomic research trial for maize and a breeding research trial for rice, we first reproduced ncRGB images from MSIs through a rendering model, Model-True to natural color image (Model-TN), which was built using a benchmark hyperspectral image dataset. Subsequently, an MSI reconstruction model, Model-Natural color to Multispectral image (Model-NM), was trained on prepared ncRGB (ncRGB-Con) images and MSI pairs, ensuring the model can use widely available ncRGB images as input. The integrated loss function of mean relative absolute error (MRAEloss) and spectral information divergence (SIDloss) was the most effective during the building of both models, while models using the MRAEloss function were more robust towards variability between growing seasons and species. The reliability of the reconstructed MSIs was demonstrated by high coefficients of determination compared to ground-truth values, using the Normalized Difference Vegetation Index (NDVI) as an example. The advantages of using “reconstructed” NDVI over the Triangular Greenness Index (TGI), as calculated directly from RGB images, were illustrated by its higher capability to differentiate three levels of irrigation treatment on maize plants. This study emphasizes that the performance of MSI reconstruction models can benefit from an optimized loss function and the intermediate step of ncRGB image preparation. The ability of the developed models to reconstruct high-quality MSIs from low-cost ncRGB images will, in particular, promote applications in plant phenotyping for precision agriculture.
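The two indices compared in the abstract have standard closed forms: NDVI needs a near-infrared band (hence the multispectral reconstruction), while TGI is computed directly from RGB. A minimal sketch, using the textbook definitions and hypothetical reflectance values in [0, 1]:

```python
# NDVI requires a NIR band; TGI (simplified RGB form) does not.
# Band values below are illustrative reflectances, not data from the study.

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def tgi(red: float, green: float, blue: float) -> float:
    """Triangular Greenness Index, simplified form: G - 0.39*R - 0.61*B."""
    return green - 0.39 * red - 0.61 * blue

# Healthy vegetation reflects strongly in NIR and green, weakly in red and blue.
print(round(ndvi(nir=0.50, red=0.08), 3))           # -> 0.724
print(round(tgi(red=0.08, green=0.20, blue=0.05), 3))  # -> 0.138
```

Because NDVI exploits the strong NIR reflectance of vegetation, it tends to separate stress levels more cleanly than the RGB-only TGI, which is the advantage the study quantifies.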
Practicality and Robustness of Tree Species Identification Using UAV RGB Image and Deep Learning in Temperate Forest in Japan
Identifying tree species from the air has long been desired for forest management. Recently, the combination of UAV RGB images and deep learning has shown high performance for tree identification under limited conditions. In this study, we evaluated the practicality and robustness of a tree identification system using UAVs and deep learning. We sampled training and test data from three sites in temperate forests in Japan. The objective tree species ranged across 56 species, including dead trees and gaps. The model yielded Kappa scores of 0.97 and 0.72, respectively, when evaluated on a dataset obtained at the same time and from the same tree crowns as the training dataset, and on a dataset obtained at the same time but from different tree crowns. When we evaluated a dataset obtained at different times and sites from the training dataset, which matches the practical use case, the Kappa score decreased to 0.47. Though coniferous trees and representative species of stands showed reasonably stable identification performance, some misclassifications occurred between: (1) phylogenetically close species, (2) tree species with similar leaf shapes, and (3) tree species that prefer the same environment. Furthermore, tree types such as coniferous versus broadleaved, or evergreen versus deciduous, do not always guarantee common features among the trees belonging to each type. Our findings promote the practical adoption of identification systems using UAV RGB images and deep learning.
Object Pose Estimation Using Edge Images Synthesized from Shape Information
This paper presents a method for estimating the six Degrees of Freedom (6DoF) pose of texture-less objects from a monocular image by using edge information. Deep-learning-based pose estimation methods need a large dataset containing pairs of an image and the ground-truth pose of objects. To alleviate the cost of collecting a dataset, we focus on methods using datasets made by computer graphics (CG). This simulation-based approach prepares thousands of images by rendering the computer-aided design (CAD) data of the object and trains a deep-learning model. At the inference stage, a monocular RGB image is entered into the model, and the object’s pose is estimated. The representative simulation-based method, Pose Interpreter Networks, uses silhouette images as the input, thereby enabling common feature (contour) extraction from RGB and CG images. However, its estimation of rotation parameters is less accurate. To overcome this problem, we propose a method that uses edge information extracted from the object’s ridgelines to train the deep-learning model. Since the edge distribution changes largely according to the pose, the estimation of rotation parameters becomes more robust. Through an experiment with simulation data, we quantitatively demonstrated the accuracy improvement over the previous method (under a certain condition, the error rate decreased by 22.9% for translation and 43.4% for rotation). Moreover, through an experiment with physical data, we clarified the issues of this method and proposed an effective solution via fine-tuning (under a certain condition, the error rate decreased by 20.1% for translation and 57.7% for rotation).
Combining Canopy Coverage and Plant Height from UAV-Based RGB Images to Estimate Spraying Volume on Potato
Canopy coverage and plant height are the main crop canopy parameters, which clearly reflect the growth status of crops in the field. The ability to identify canopy coverage and plant height quickly is critical for farmers and breeders when arranging their working schedules. In precision agriculture, choosing the timing and amount of farm inputs is the critical part, which improves yield and decreases cost. The potato canopy coverage and plant height were quickly extracted and used to estimate the spraying volume with an evaluation model obtained from indoor tests. A vegetation index approach was used to extract potato canopy coverage, and a color point cloud method at different height rates was formed to estimate potato plant height at different growth stages. The original data were collected using a low-cost UAV carrying a high-resolution RGB camera. Then, the Structure from Motion (SFM) algorithm was used to extract a 3D point cloud from ordered images, which could form a digital orthophoto model (DOM) and a sparse point cloud. The results show that the vegetation-index-based method could accurately estimate canopy coverage. Among EXG, EXR, RGBVI, GLI, and CIVE, EXG achieved the best adaptability in different test plots. Point cloud data could be used to estimate plant height, but when the potato coverage rate was low, the potato canopy point cloud underwent rarefaction; in the vigorous growth period, the estimated value was strongly correlated with the measured value (R2 = 0.94). The relationship between the spray coverage area on the potato canopy and canopy coverage was measured indoors to form the model. The results revealed that the model could estimate the dose accurately (R2 = 0.878). Therefore, combining agronomic factors with data extracted from UAV RGB images makes it possible to predict the field spraying volume.
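Of the indices the abstract compares, EXG (Excess Green) is the simplest: on chromatic coordinates it is 2g − r − b, and canopy coverage is the fraction of pixels above a greenness threshold. A minimal sketch; the 0.1 threshold and the toy 2×2 "image" are illustrative assumptions, not values from the study:

```python
# Canopy-coverage estimation via the Excess Green index on chromatic coordinates.

def exg(r: int, g: int, b: int) -> float:
    """Excess Green: 2g - r - b on channel fractions (chromatic coordinates)."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

def canopy_coverage(pixels, threshold=0.1):
    """Fraction of pixels classified as vegetation (EXG above threshold)."""
    veg = sum(1 for (r, g, b) in pixels if exg(r, g, b) > threshold)
    return veg / len(pixels)

# Two green-dominant pixels, two gray-ish soil pixels -> coverage 0.5.
image = [(60, 140, 50), (70, 150, 60), (120, 110, 100), (130, 120, 110)]
print(canopy_coverage(image))  # -> 0.5
```

In practice the threshold is usually chosen automatically (e.g., by Otsu's method) rather than fixed, but the per-pixel index computation is exactly this simple.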
In Vivo Evaluation of Cerebral Hemodynamics and Tissue Morphology in Rats during Changing Fraction of Inspired Oxygen Based on Spectrocolorimetric Imaging Technique
During surgical treatment for cerebrovascular diseases, cortical hemodynamics are often controlled by bypass graft surgery, temporary occlusion of arteries, and surgical removal of veins. Since the brain is vulnerable to hypoxemia and ischemia, interruption of cerebral blood flow reduces the oxygen supply to tissues and induces irreversible damage to cells and tissues. Monitoring of cerebral hemodynamics and alteration of cellular structure during neurosurgery is thus crucial. Sequential recordings of red-green-blue (RGB) images of in vivo exposed rat brains were made during hyperoxia, normoxia, hypoxia, and anoxia. Monte Carlo simulation of light transport in brain tissue was used to specify relationships among RGB-values and oxygenated hemoglobin concentration (CHbO), deoxygenated hemoglobin concentration (CHbR), total hemoglobin concentration (CHbT), hemoglobin oxygen saturation (StO2), and scattering power b. Temporal courses of CHbO, CHbR, CHbT, and StO2 indicated physiological responses to reduced oxygen delivery to cerebral tissue. A rapid decrease in light scattering power b was observed after respiratory arrest, similar to the negative deflection of the extracellular direct current (DC) potential in so-called anoxic depolarization. These results suggest the potential of this method for evaluating pathophysiological conditions and loss of tissue viability.
A novel color image security scheme based on SPN over the residue classes of quaternion integers H(ℤ)_π
The exponential growth of multimedia data transmission has intensified the demand for advanced image encryption systems capable of resisting contemporary cryptanalytic attacks while maintaining computational efficiency. Conventional encryption schemes often fail to provide sufficient confusion and diffusion when applied to high-dimensional color images. To overcome these challenges, this paper proposes a novel Substitution–Permutation Network (SPN)-based RGB image encryption algorithm constructed over the residue classes of quaternion integers (RQCIs) H(ℤ)_π. The method specifically addresses the problem of limited nonlinearity (NL) and weak algebraic complexity in existing S-box designs by introducing quaternion residue-based nonlinear substitution boxes (S-boxes) that exploit the four-dimensional nature of quaternion algebra (QA). The construction begins with quaternion prime (QP) selection and residue class formation, followed by affine mapping and coefficient decoupling to generate bijective and highly nonlinear S-boxes with strong avalanche characteristics. These S-boxes are then integrated into an SPN framework comprising substitution, permutation, and XOR diffusion layers applied independently to the red, green, and blue channels of an image. The use of quaternion arithmetic increases key sensitivity, expands the transformation space, and enhances resistance against differential, linear, and statistical attacks. Experimental evaluations demonstrate superior quantitative performance, with entropy approaching the ideal value, a Number of Pixel Change Rate (NPCR) exceeding 99.6%, a Unified Average Changing Intensity (UACI) around 33.4%, and negligible correlation among adjacent pixels. Comparative results confirm that the proposed scheme achieves greater security and efficiency than existing SPN-based image ciphers.
Hence, integrating quaternion residue class algebra with SPN architecture offers a mathematically grounded and practically efficient framework for robust color image encryption suitable for secure digital communication systems.
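The NPCR and UACI figures quoted above are standard differential-attack metrics with simple definitions: NPCR is the percentage of pixel positions that change between two ciphertexts, and UACI is the average normalized absolute intensity difference. A minimal sketch for one 8-bit channel, using toy flattened ciphertexts (the values are illustrative, not from the paper):

```python
# NPCR and UACI for a single 8-bit channel, flattened to a list of pixels.

def npcr(c1, c2):
    """Number of Pixel Change Rate, in percent: fraction of differing positions."""
    diff = sum(1 for a, b in zip(c1, c2) if a != b)
    return 100.0 * diff / len(c1)

def uaci(c1, c2):
    """Unified Average Changing Intensity, in percent, for 8-bit pixels."""
    return 100.0 * sum(abs(a - b) for a, b in zip(c1, c2)) / (255 * len(c1))

cipher1 = [12, 200, 33, 97]   # ciphertext of the original image (toy values)
cipher2 = [240, 200, 141, 5]  # ciphertext after a one-pixel plaintext change
print(npcr(cipher1, cipher2))           # -> 75.0 (three of four positions changed)
print(round(uaci(cipher1, cipher2), 2))  # -> 41.96
```

For an ideal cipher the expected values on random 8-bit images are roughly 99.6% (NPCR) and 33.46% (UACI), which is why the abstract benchmarks against those numbers.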
New color image encryption using hybrid optimization algorithm and Krawtchouk fractional transformations
This paper proposes a new method for encryption of RGB color images by combining two encryption approaches: the spatial approach and the transformation approach. The proposed method uses the 3D fractional modified Henon map (3D FrMHM) and the discrete fractional Krawtchouk moments (FrDKM). We have also proposed a new hybrid optimization algorithm (H-SSAOA) to optimize the parameters of the proposed Henon map and the parameters of the Krawtchouk fractional moments. This algorithm is based on the hybridization of two metaheuristic algorithms: the "Salp Swarm Algorithm" (SSA) and the "Arithmetic Optimization Algorithm" (AOA). The simulation results reveal the optimization efficiency of the proposed hybrid algorithm H-SSAOA compared to other metaheuristic algorithms and the efficiency of the suggested encryption method for encrypting RGB color images in terms of sensitivity to the security key and resistance to different attacks.
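The general idea behind Henon-map-based ciphers is to iterate a chaotic map from a secret initial state and quantize the trajectory into a keystream. As a hedged illustration only: the sketch below uses the classic 2D Hénon map (x' = 1 − ax² + y, y' = bx), not the paper's 3D fractional modified variant, and the quantization rule is an assumption:

```python
# Keystream from the classic 2D Henon map (a=1.4, b=0.3 is the standard
# chaotic regime). The paper's 3D fractional modified map is more involved;
# this shows only the generic chaos-based XOR-cipher pattern.

def henon_keystream(x, y, n, a=1.4, b=0.3, burn_in=100):
    """Return n pseudo-random bytes quantized from Henon-map trajectories."""
    stream = []
    for i in range(burn_in + n):          # burn-in discards transient states
        x, y = 1 - a * x * x + y, b * x
        if i >= burn_in:
            stream.append(int(abs(x) * 1e6) % 256)  # quantize state to a byte
    return stream

key = henon_keystream(x=0.1, y=0.1, n=4)   # (x, y) plays the role of the secret key
pixels = [10, 20, 30, 40]
encrypted = [p ^ k for p, k in zip(pixels, key)]      # XOR diffusion
decrypted = [c ^ k for c, k in zip(encrypted, key)]   # same keystream inverts it
print(decrypted == pixels)  # -> True
```

Sensitivity to the initial state (x, y) is what gives such schemes their large effective key space: a tiny perturbation yields a completely different keystream after the burn-in.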
A novel chaos-based image encryption using DNA sequence operation and Secure Hash Algorithm SHA-2
In this paper, we propose a novel image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, the Secure Hash Algorithm SHA-2, and the Lorenz system. Our study uses DNA sequences and operations together with the chaotic Lorenz system to strengthen the cryptosystem. The significant advantages of this approach are improving the information entropy, which is the most important feature of randomness, resisting various typical attacks, and obtaining good experimental results. The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext, has a large key space and high key sensitivity, and is able to resist statistical and exhaustive attacks.
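Two of the named ingredients have compact, standard forms: DNA encoding maps each 2-bit pair of a pixel to a base (one of eight Watson-Crick-complementary rules), and the Lorenz system is a 3D ODE that can be numerically integrated to produce chaotic values. A toy sketch; the step size, initial state, and quantization are illustrative assumptions, not the paper's parameters:

```python
# Toy versions of DNA pixel encoding and a Lorenz-based chaotic byte stream.

DNA_RULE1 = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}  # one of 8 valid rules

def dna_encode(byte):
    """Encode one 8-bit pixel as four DNA bases (most-significant pair first)."""
    return "".join(DNA_RULE1[(byte >> s) & 0b11] for s in (6, 4, 2, 0))

def lorenz_bytes(n, x=1.0, y=1.0, z=1.0, sigma=10.0, rho=28.0, beta=8/3, dt=0.01):
    """Euler-integrate the Lorenz system and quantize the x-coordinate to bytes."""
    out = []
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out.append(int(abs(x) * 1e4) % 256)
    return out

print(dna_encode(0b00011011))   # pairs 00,01,10,11 -> "ACGT"
print(len(lorenz_bytes(8)))     # -> 8
```

In a full cryptosystem a SHA-2 digest of the plaintext typically seeds the Lorenz initial conditions, tying the keystream to the image content; that wiring is omitted here.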
Cotton Yield Estimation Based on Vegetation Indices and Texture Features Derived From RGB Image
Yield monitoring is an important parameter to evaluate cotton productivity during cotton harvest. Nondestructive and accurate yield monitoring is of great significance to cotton production. Unmanned aerial vehicle (UAV) remote sensing offers fast and repeatable data acquisition. Visible vegetation indices have the advantages of low cost, a small amount of calculation, and high resolution. The combination of UAVs and visible vegetation indices has been applied more and more to crop yield monitoring. However, there are shortcomings in estimating cotton yield from visible vegetation indices alone, as the similarity between cotton and mulch film makes them difficult to differentiate, and vegetation-index-based estimates may saturate near harvest. Texture features are another important kind of remote sensing information that can provide geometric information on ground objects and enlarge the spatial information identifiable beyond the original image brightness. In this study, RGB images of the cotton canopy were acquired before harvest by a UAV carrying RGB sensors. Visible vegetation indices and texture features were extracted from the RGB images for cotton yield monitoring. Feature parameters were selected using different methods after extracting the information. Linear and nonlinear methods were used to build cotton yield monitoring models based on visible vegetation indices, texture features, and their combinations. The results show that (1) vegetation indices and texture features extracted from the ultra-high-resolution RGB images obtained by UAVs were significantly correlated with cotton yield; and (2) the best model was the RF_ELM model combining vegetation indices and texture features, with a validation-set R2 of 0.9109, an RMSE of 0.91277 t·ha−1, and an rRMSE of 29.34%. In conclusion, the research results prove that a UAV carrying an RGB sensor has real potential for cotton yield monitoring, which can provide a theoretical basis and technical support for field cotton production evaluation.
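A classic example of the texture features the abstract pairs with vegetation indices is contrast derived from a gray-level co-occurrence matrix (GLCM). A pure-Python sketch for horizontally adjacent pixels; the 4-level toy images are illustrative assumptions:

```python
# GLCM contrast for offset (0, 1): sum over (i, j) of P(i, j) * (i - j)^2,
# where P is the normalized co-occurrence frequency of gray levels i and j.

def glcm_contrast(image):
    counts = {}
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):              # horizontally adjacent pairs
            counts[(a, b)] = counts.get((a, b), 0) + 1
            total += 1
    return sum(c / total * (i - j) ** 2 for (i, j), c in counts.items())

smooth = [[1, 1, 1, 1], [2, 2, 2, 2]]       # uniform rows: neighbors always equal
textured = [[0, 3, 0, 3], [3, 0, 3, 0]]     # alternating extremes of a 4-level image
print(glcm_contrast(smooth))    # -> 0.0
print(glcm_contrast(textured))  # -> 9.0
```

Unlike brightness-based indices, such statistics respond to spatial arrangement, which is why they help separate cotton bolls from mulch film with similar color.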
Fault Detection in Power Equipment via an Unmanned Aerial System Using Multi-Modal Data
Power transmission lines are the link between power plants and the points of consumption, through substations. Most importantly, the assessment of damaged aerial power lines and rusted conductors is of extreme importance for public safety; hence, power lines and associated components must be periodically inspected to ensure a continuous supply and to identify any fault or defect. To achieve these objectives, Unmanned Aerial Vehicles (UAVs) have recently been widely used; in fact, they provide a safe way to bring sensors close to the power transmission lines and their associated components without halting the equipment during the inspection, while reducing operational cost and risk. In this work, a drone equipped with multi-modal sensors captures images in the visible and infrared domains and transmits them to the ground station. We used state-of-the-art computer vision methods to highlight expected faults (i.e., hot spots) or damaged components of the electrical infrastructure (i.e., damaged insulators). Infrared imaging, which is invariant to large scale and illumination changes in the real operating environment, supported the identification of faults in power transmission lines, while a neural network was adapted and trained to detect and classify insulators from an optical video stream. We demonstrate our approach on data captured by a drone in Parma, Italy.