862 result(s) for "RGB images"
Deep-Learning-Based Multispectral Image Reconstruction from Single Natural Color RGB Image—Enhancing UAV-Based Phenotyping
Multispectral images (MSIs) are valuable for precision agriculture because they provide extra spectral information compared to natural color RGB (ncRGB) images. In this paper, we therefore aim to generate high-spatial-resolution MSIs through a robust, deep-learning-based reconstruction method using ncRGB images. Using data from an agronomic research trial on maize and a breeding research trial on rice, we first reproduced ncRGB images from MSIs through a rendering model, Model-True to natural color image (Model-TN), built on a benchmark hyperspectral image dataset. Subsequently, an MSI reconstruction model, Model-Natural color to Multispectral image (Model-NM), was trained on pairs of prepared ncRGB (ncRGB-Con) images and MSIs, ensuring that the model can use widely available ncRGB images as input. An integrated loss function combining mean relative absolute error (MRAE) and spectral information divergence (SID) was most effective during the building of both models, while models using the MRAE loss alone were more robust to variability between growing seasons and species. The reliability of the reconstructed MSIs was demonstrated by high coefficients of determination against ground-truth values, using the Normalized Difference Vegetation Index (NDVI) as an example. The advantage of using "reconstructed" NDVI over the Triangular Greenness Index (TGI), calculated directly from RGB images, was illustrated by its greater ability to differentiate three levels of irrigation treatment on maize plants. This study emphasizes that the performance of MSI reconstruction models can benefit from an optimized loss function and from the intermediate step of ncRGB image preparation. The ability of the developed models to reconstruct high-quality MSIs from low-cost ncRGB images will, in particular, promote their application to plant phenotyping in precision agriculture.
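For reference, the two loss terms named above have standard definitions in the spectral-reconstruction literature; the following is a minimal numpy sketch of those definitions (the paper's exact formulation and weighting are not given in the abstract, so treat this as an illustration only):

```python
import numpy as np

def mrae_loss(rec: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Mean relative absolute error between reconstructed and ground-truth spectra."""
    return float(np.mean(np.abs(rec - gt) / (gt + eps)))

def sid_loss(rec: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Spectral information divergence: symmetric KL divergence between
    the two band distributions (each spectrum normalized to sum to 1)."""
    p = rec / (rec.sum() + eps) + eps
    q = gt / (gt.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Toy example on a single five-band pixel spectrum.
gt = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
rec = gt * 1.05  # 5% overestimate in every band
print(mrae_loss(rec, gt))  # ~0.05: penalizes per-band relative error
print(sid_loss(rec, gt))   # ~0: spectral shape unchanged, only overall scale
```

The toy example shows why the two terms are complementary: MRAE penalizes per-band magnitude errors, while SID penalizes distortions of spectral shape.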
Combining Canopy Coverage and Plant Height from UAV-Based RGB Images to Estimate Spraying Volume on Potato
Canopy coverage and plant height are the main crop canopy parameters and directly reflect the growth status of field crops. The ability to obtain canopy coverage and plant height quickly is therefore critical for farmers and breeders when arranging their work schedules. In precision agriculture, choosing the timing and amount of farm inputs is critical to improving yield and reducing cost. In this study, potato canopy coverage and plant height were rapidly extracted and used to estimate spraying volume with an evaluation model obtained from indoor tests. A vegetation-index approach was used to extract potato canopy coverage, and a color point cloud method based on different height ratios was developed to estimate potato plant height at different growth stages. The original data were collected using a low-cost UAV carrying a high-resolution RGB camera. The Structure from Motion (SFM) algorithm was then used to extract a 3D point cloud from the ordered images, from which a digital orthophoto model (DOM) and a sparse point cloud were generated. The results show that the vegetation-index-based method could accurately estimate canopy coverage. Among EXG, EXR, RGBVI, GLI, and CIVE, EXG achieved the best adaptability across the test plots. Point cloud data could be used to estimate plant height, but when potato coverage was low, the canopy point cloud became sparse; in the vigorous growth period, the estimated values agreed closely with the measured values (R² = 0.94). The relationship between the sprayed area on the potato canopy and canopy coverage was measured indoors to build the estimation model. The results revealed that the model could estimate the dose accurately (R² = 0.878). Therefore, combining agronomic factors with data extracted from UAV RGB images makes it possible to predict field spraying volume.
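As a rough illustration of the vegetation-index step (one common recipe, not necessarily the authors' exact pipeline), canopy coverage can be estimated by thresholding the excess green index (EXG) computed over normalized chromatic coordinates; the Otsu threshold here is an assumption:

```python
import numpy as np
from skimage.filters import threshold_otsu

def canopy_coverage_exg(rgb: np.ndarray) -> float:
    """Estimate canopy coverage from an RGB image via the excess green index.

    rgb: float array of shape (H, W, 3) with values in [0, 1]. Otsu
    thresholding is one common segmentation choice; the paper's exact
    rule is not stated in the abstract.
    """
    total = rgb.sum(axis=2) + 1e-8
    r, g, b = (rgb[..., i] / total for i in range(3))  # chromatic coordinates
    exg = 2 * g - r - b                                # EXG = 2g - r - b
    mask = exg > threshold_otsu(exg)                   # vegetation pixels
    return float(mask.mean())                          # fraction of canopy pixels
```

The returned value is the fraction of image pixels classified as vegetation, which is the canopy coverage of the imaged plot.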
Practicality and Robustness of Tree Species Identification Using UAV RGB Image and Deep Learning in Temperate Forest in Japan
Identifying tree species from the air has long been desired for forest management. Recently, the combination of UAV RGB images and deep learning has shown high performance for tree identification under limited conditions. In this study, we evaluated the practicality and robustness of a tree identification system using UAVs and deep learning. We sampled training and test data from three sites in temperate forests in Japan. The target classes spanned 56 species, including dead trees and gaps. When the model was evaluated on a dataset obtained at the same time and from the same tree crowns as the training dataset, it yielded a Kappa score of 0.97; on a dataset obtained at the same time but from different tree crowns, the score was 0.72. When the model was evaluated on datasets obtained at different times and sites from the training dataset, which matches practical operating conditions, the Kappa score decreased to 0.47. Although coniferous trees and the representative species of each stand were identified with reasonably stable performance, misclassifications occurred between (1) phylogenetically close species, (2) species with similar leaf shapes, and (3) species that prefer the same environment. Furthermore, broad tree types such as coniferous versus broadleaved, or evergreen versus deciduous, do not guarantee common visual features among the trees belonging to each type. Our findings support the practical deployment of identification systems using UAV RGB images and deep learning.
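For orientation, the Kappa scores reported above are Cohen's kappa, which discounts chance agreement from raw accuracy:

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

where p_o is the observed agreement between predictions and reference labels and p_e is the agreement expected by chance; a kappa of 1 indicates perfect agreement and 0 indicates chance-level performance, which is why the drop from 0.97 to 0.47 across sites and dates is substantial.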
Object Pose Estimation Using Edge Images Synthesized from Shape Information
This paper presents a method for estimating the six-degrees-of-freedom (6DoF) pose of texture-less objects from a monocular image by using edge information. Deep-learning-based pose estimation methods need a large dataset containing pairs of images and ground-truth object poses. To reduce the cost of collecting such a dataset, we focus on methods that use datasets made by computer graphics (CG). This simulation-based approach prepares thousands of images by rendering the computer-aided design (CAD) data of the object and trains a deep-learning model on them. At the inference stage, a monocular RGB image is fed into the model, and the object's pose is estimated. The representative simulation-based method, Pose Interpreter Networks, uses silhouette images as input, enabling a common feature (the contour) to be extracted from both RGB and CG images; however, its estimation of rotation parameters is less accurate. To overcome this problem, we propose using edge information extracted from the object's ridgelines to train the deep-learning model. Since the edge distribution changes substantially with pose, the estimation of rotation parameters becomes more robust. Through an experiment with simulation data, we quantitatively demonstrated the accuracy improvement over the previous method (under a certain condition, the error rate decreases by 22.9% for translation and 43.4% for rotation). Moreover, through an experiment with physical data, we clarified the remaining issues of the method and proposed an effective solution based on fine-tuning (under a certain condition, the error rate decreases by 20.1% for translation and 57.7% for rotation).
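The training images in this line of work are edge maps derived from rendered CAD views; as a rough stand-in for that step (not the authors' exact procedure), a minimal sketch of extracting an edge image from a rendered view with OpenCV's Canny detector, where the file path and thresholds are illustrative assumptions:

```python
import cv2

# Load a rendered view of the CAD model as grayscale; the path is illustrative.
rendered = cv2.imread("rendered_view.png", cv2.IMREAD_GRAYSCALE)

# A light blur suppresses aliasing artifacts from rendering before edge detection.
blurred = cv2.GaussianBlur(rendered, (5, 5), 0)

# Canny hysteresis thresholds (50, 150) are illustrative, not the paper's settings.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("edge_image.png", edges)
```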
In Vivo Evaluation of Cerebral Hemodynamics and Tissue Morphology in Rats during Changing Fraction of Inspired Oxygen Based on Spectrocolorimetric Imaging Technique
During surgical treatment for cerebrovascular diseases, cortical hemodynamics are often controlled by bypass graft surgery, temporary occlusion of arteries, and surgical removal of veins. Since the brain is vulnerable to hypoxemia and ischemia, interruption of cerebral blood flow reduces the oxygen supply to tissues and induces irreversible damage to cells and tissues. Monitoring cerebral hemodynamics and alterations of cellular structure during neurosurgery is thus crucial. Sequential recordings of red-green-blue (RGB) images of in vivo exposed rat brains were made during hyperoxia, normoxia, hypoxia, and anoxia. Monte Carlo simulation of light transport in brain tissue was used to specify the relationships among RGB values and oxygenated hemoglobin concentration (CHbO), deoxygenated hemoglobin concentration (CHbR), total hemoglobin concentration (CHbT), hemoglobin oxygen saturation (StO2), and the scattering power b. The temporal courses of CHbO, CHbR, CHbT, and StO2 indicated physiological responses to reduced oxygen delivery to cerebral tissue. A rapid decrease in the light scattering power b was observed after respiratory arrest, similar to the negative deflection of the extracellular direct current (DC) potential in so-called anoxic depolarization. These results suggest the potential of this method for evaluating pathophysiological conditions and loss of tissue viability.
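For context, the hemoglobin quantities above are related by standard definitions (these relations are general, not specific to this paper):

```latex
C_{\mathrm{HbT}} = C_{\mathrm{HbO}} + C_{\mathrm{HbR}},
\qquad
\mathrm{StO}_2 = \frac{C_{\mathrm{HbO}}}{C_{\mathrm{HbO}} + C_{\mathrm{HbR}}} \times 100\%
```

so only two of the three concentrations are independent, and StO2 expresses the oxygenated fraction of total hemoglobin.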
Fault Detection in Power Equipment via an Unmanned Aerial System Using Multi Modal Data
Power transmission lines link power plants, through substations, to the points of consumption. Assessing damaged aerial power lines and rusted conductors is of particular importance for public safety; hence, power lines and their associated components must be inspected periodically to ensure a continuous supply and to identify faults and defects. To achieve these objectives, Unmanned Aerial Vehicles (UAVs) have recently been widely used: they provide a safe way to bring sensors close to the power transmission lines and their associated components without halting the equipment during inspection, while reducing operational cost and risk. In this work, a drone equipped with multi-modal sensors captures images in the visible and infrared domains and transmits them to a ground station. We used state-of-the-art computer vision methods to highlight suspected faults (i.e., hot spots) and damaged components of the electrical infrastructure (i.e., damaged insulators). Infrared imaging, which is invariant to the large scale and illumination changes of the real operating environment, supported the identification of faults in power transmission lines, while a neural network was adapted and trained to detect and classify insulators from an optical video stream. We demonstrate our approach on data captured by a drone in Parma, Italy.
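As a rough illustration of the hot-spot idea (a generic thresholding scheme, not the authors' pipeline), the sketch below flags pixels in a radiometric thermal frame that exceed a temperature threshold; the threshold value and array layout are assumptions:

```python
import numpy as np

def detect_hot_spots(thermal_frame: np.ndarray, threshold_c: float = 80.0) -> np.ndarray:
    """Return a boolean mask of pixels hotter than threshold_c.

    thermal_frame: 2D array of per-pixel temperatures in degrees Celsius,
    assuming a radiometrically calibrated camera; the 80 C threshold is
    illustrative, not a value from the paper.
    """
    return thermal_frame > threshold_c

# Example on synthetic data: a 100x100 frame at 30 C with one 5x5 hot region.
frame = np.full((100, 100), 30.0)
frame[40:45, 60:65] = 95.0
hot = detect_hot_spots(frame)
print("hot pixels:", int(hot.sum()))  # -> 25
```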
An Efficient Biomass Estimation Model for Large-Scale Olea europaea L. by Integrating UAV-RGB and U²-Net with Allometric Equations
What are the main findings?
* First successful biomass estimation in Olea europaea L. using integrated UAV-RGB imagery and U²-Net.
* U²-Net combined with UAV-RGB images accurately extracted Olea europaea L. canopy area (CA).
What are the implications of the main findings?
* This study developed a high-accuracy biomass estimation model for Olea europaea L., providing critical technical support for the cultivation management and carbon sequestration assessment of this economically important species.
* By integrating UAV imagery with the U²-Net deep learning method, efficient and automated canopy extraction and biomass monitoring were achieved, demonstrating significant potential for broad application.
Olea europaea L. is an economically and ecologically significant species, for which accurate biomass estimation provides critical insights for artificial propagation, yield forecasting, and carbon sequestration assessment. Research on biomass estimation for Olea europaea L. remains scarce, and efficient, accurate, and scalable technical solutions are lacking. To address this gap, this study achieved, for the first time, non-destructive estimation of Olea europaea L. biomass from the individual-tree to the plot scale by integrating UAV-RGB (Unmanned Aerial Vehicle Red-Green-Blue) imagery with the U²-Net model. The study first developed allometric models for W-D-H, CA-D, and CA-H in Olea europaea L. (where W = biomass, D = ground diameter, H = tree height, and CA = canopy area). A single-parameter CA-based whole-plant biomass model was then built from the optimal models. Finally, a whole-plant biomass estimation model (UAV-RGB U²-Net Total Biomass, UUTB) combining UAV-RGB imagery with U²-Net at the sample-plot level was developed and assessed. The results revealed the following: (1) The model for Olea europaea L. aboveground biomass (AGB) was W_A = 0.0025 D^1.943 H^0.690 (R² = 0.912); the model for belowground biomass (BGB) was W_B = 0.012 D^1.231 H^0.525 (R² = 0.693); the CA-D model was D = 4.31427 C^0.513 (R² = 0.751); and the CA-H model was H = 226.51939 C^0.268 (R² = 0.500). (2) The optimal single-parameter CA-based AGB model was W_A = 1.80901 C^1.181 (R² = 0.845), and the corresponding BGB model was W_B = 1.25043 C^0.772 (R² = 0.741). (3) The R² of Olea europaea L. biomass, as estimated from the CA derived by U²-Net and the UUTB model, was 0.855. This study presents the first integration of UAV-RGB imagery and the U²-Net model for biomass estimation in Olea europaea L.; it addresses the research gap in species-specific allometric modeling and overcomes the limitations of traditional manual measurement methods. The proposed approach provides a reliable technical foundation for accurately assessing both economic yield and ecological carbon sequestration capacity.
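Because the fitted single-parameter models above are plain power laws, applying them is straightforward; a minimal sketch using the coefficients reported in the abstract (the units of W and C follow the paper's variables, which the abstract does not spell out):

```python
def aboveground_biomass(canopy_area: float) -> float:
    # W_A = 1.80901 * C^1.181, the paper's optimal CA-based AGB model (R^2 = 0.845)
    return 1.80901 * canopy_area ** 1.181

def belowground_biomass(canopy_area: float) -> float:
    # W_B = 1.25043 * C^0.772, the paper's CA-based BGB model (R^2 = 0.741)
    return 1.25043 * canopy_area ** 0.772

# Example: a tree with canopy area C = 10 in the paper's units.
c = 10.0
print(aboveground_biomass(c), belowground_biomass(c))
```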
Estimation of potato above-ground biomass based on unmanned aerial vehicle red-green-blue images with different texture features and crop height
Obtaining crop above-ground biomass (AGB) information quickly and accurately benefits farmland production management and the optimization of planting patterns. Many studies have confirmed that, due to canopy spectral saturation, AGB is underestimated over multiple crop growth periods when only optical vegetation indices are used. To address this problem, this study obtains textures and crop height directly from ultrahigh-resolution red-green-blue (RGB) images at several ground sample distances (GDS) to estimate potato AGB in three key growth periods. The textures include gray-level co-occurrence matrix (GLCM) textures and Gabor wavelet textures. GLCM-based textures were extracted from RGB images at seven GDS values (1, 5, 10, 30, 40, 50, and 60 cm). Gabor-based textures were obtained from magnitude images at five scales (scales 1–5, labeled S1–S5, respectively). Potato crop height was extracted from the generated crop height model. Finally, to estimate potato AGB, we used (i) GLCM-based textures from different GDS values and their combinations, (ii) Gabor-based textures from different scales and their combinations, (iii) all GLCM-based textures combined with crop height, (iv) all Gabor-based textures combined with crop height, and (v) the two types of textures combined with crop height, applying least-squares support vector machine (LSSVM), extreme learning machine, and partial least squares regression techniques. The results show that (i) potato crop height and AGB first increase and then decrease over the growth period; (ii) GDS and scale mainly affect the correlation of GLCM- and Gabor-based textures with AGB; (iii) for AGB estimation, GLCM-based textures at GDS1 and GDS30 work best when the GDS is between 1 and 5 cm and between 10 and 60 cm, respectively, whereas estimation based on Gabor textures gradually deteriorates as the Gabor convolution kernel scale increases; (iv) estimation based on a single texture type is not as good as estimation based on multi-resolution GLCM textures or multiscale Gabor textures (the latter being the best); and (v) combining the different forms of textures with crop height using the LSSVM technique improved the normalized root mean square error by 22.97%, 14.63%, 9.74%, and 8.18% compared with using only all GLCM-based textures, only all Gabor-based textures, the former combined with crop height, and the latter combined with crop height, respectively. Therefore, texture features obtained from UAV RGB images, combined with crop height, improve the accuracy of potato AGB estimates under high canopy coverage.
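As a hedged illustration of the GLCM step (standard co-occurrence statistics computed with scikit-image; the abstract does not list the exact GLCM features the authors used), one might compute:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Illustrative 8-bit grayscale canopy patch (a random stand-in for a real crop).
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrix at a 1-pixel offset for four orientations.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Common GLCM statistics used as texture features, averaged over orientations.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```

Resampling the source image to each GDS before extracting these statistics reproduces the multi-resolution feature sets compared in the study.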
A novel color image security scheme based on SPN over the residue classes of quaternion integers $H(\mathbb{Z})_{\pi}$
Abstract The exponential growth of multimedia data transmission has intensified the demand for advanced image encryption systems capable of resisting contemporary cryptanalytic attacks while maintaining computational efficiency. Conventional encryption schemes often fail to provide sufficient confusion and diffusion when applied to high-dimensional color images. To overcome these challenges, this paper proposes a novel Substitution-Permutation Network (SPN)-based RGB image encryption algorithm constructed over the residue classes of quaternion integers (RQCIs) $H(\mathbb{Z})_{\pi}$. The method specifically addresses the limited nonlinearity (NL) and weak algebraic complexity of existing S-box designs by introducing quaternion-residue-based nonlinear substitution boxes (S-boxes) that exploit the four-dimensional nature of quaternion algebra (QA). The construction begins with quaternion prime (QP) selection and residue class formation, followed by affine mapping and coefficient decoupling to generate bijective, highly nonlinear S-boxes with strong avalanche characteristics. These S-boxes are then integrated into an SPN framework comprising substitution, permutation, and XOR diffusion layers applied independently to the red, green, and blue channels of an image. The use of quaternion arithmetic increases key sensitivity, expands the transformation space, and enhances resistance against differential, linear, and statistical attacks. Experimental evaluations demonstrate superior quantitative performance, with entropy approaching the ideal value, a Number of Pixel Change Rate (NPCR) exceeding 99.6%, a Unified Average Changing Intensity (UACI) around 33.4%, and negligible correlation among adjacent pixels. Comparative results confirm that the proposed scheme achieves greater security and efficiency than existing SPN-based image ciphers. Hence, integrating quaternion residue class algebra with the SPN architecture offers a mathematically grounded and practically efficient framework for robust color image encryption suitable for secure digital communication systems.
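For context, NPCR and UACI are the standard differential-attack metrics; their usual definitions (general conventions, not specific to this paper) for two cipher images C1 and C2 of size W x H whose plaintexts differ in a single pixel are:

```latex
D(i,j) =
\begin{cases}
1, & C^{1}(i,j) \neq C^{2}(i,j) \\
0, & \text{otherwise}
\end{cases}
\qquad
\mathrm{NPCR} = \frac{1}{WH}\sum_{i,j} D(i,j) \times 100\%
\qquad
\mathrm{UACI} = \frac{1}{WH}\sum_{i,j} \frac{\left|C^{1}(i,j) - C^{2}(i,j)\right|}{255} \times 100\%
```

For 8-bit images, the ideal values are roughly 99.6% for NPCR and 33.46% for UACI, which is why the reported figures indicate strong diffusion.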
A Two-Mode Underwater Smart Sensor Object for Precision Aquaculture Based on AIoT Technology
Monitoring the status of cultured fish is an essential task in precision aquaculture, and a smart underwater imaging device offers a non-intrusive way to monitor freely swimming fish even in turbid or low-ambient-light water. This paper develops a two-mode underwater surveillance camera system consisting of a sonar imaging device and a stereo camera. The sonar imaging device has two cloud-based Artificial Intelligence (AI) functions that estimate the quantity of fish and the distribution of fish length and weight in a crowded fish school. Because sonar images can be noisy and the fish instances in an overcrowded school often overlap, machine learning technologies such as Mask R-CNN, Gaussian mixture models, convolutional neural networks, and semantic segmentation networks were employed to address the difficulty of analyzing fish in sonar images. Furthermore, the sonar and stereo RGB images were aligned in 3D space, offering an additional AI function for fish annotation based on RGB images. The proposed two-mode surveillance camera was tested by collecting data from aquaculture tanks and offshore net cages using a cloud-based AIoT system. The accuracy of the proposed AI functions was tested against human-annotated fish metric datasets to verify the feasibility and suitability of the smart camera for estimating underwater fish metrics remotely.
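The stereo-camera half of such a system typically recovers 3D positions by standard triangulation (the generic pinhole-stereo relation, not a detail stated in this abstract): with focal length f, baseline B, and disparity d, the depth of a matched point is

```latex
Z = \frac{f\,B}{d}
```

from which a fish's length can be estimated, for example, as the Euclidean distance between the triangulated 3D positions of its snout and tail.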