Catalogue Search | MBRL
Explore the vast range of titles available.
1,049 result(s) for "image-based"
Review: Application of Artificial Intelligence in Phenomics
by Kim, Moon S.; Baek, Insuck; Nabwire, Shona
in artificial intelligence, deep learning, field phenotyping
2021
Plant phenomics has been rapidly advancing over the past few years. This advancement is attributed to the increased innovation and availability of new technologies which can enable the high-throughput phenotyping of complex plant traits. The application of artificial intelligence in various domains of science has also grown exponentially in recent years. Notably, the computer vision, machine learning, and deep learning aspects of artificial intelligence have been successfully integrated into non-invasive imaging techniques. This integration is gradually improving the efficiency of data collection and analysis through the application of machine and deep learning for robust image analysis. In addition, artificial intelligence has fostered the development of software and tools applied in field phenotyping for data collection and management. These include open-source devices and tools which are enabling community-driven research and data sharing, thereby making available the large amounts of data required for the accurate study of phenotypes. This paper reviews more than one hundred current state-of-the-art papers concerning AI-applied plant phenotyping published between 2010 and 2020. It provides an overview of current phenotyping technologies and the ongoing integration of artificial intelligence into plant phenotyping. Lastly, the limitations of the current approaches/methods and future directions are discussed.
Journal Article
High-resolution synchrotron imaging shows that root hairs influence rhizosphere soil structure formation
by
Keith R. Daly
,
Anthony G. Bengough
,
Timothy S. George
in
Barley
,
Bulk density
,
Computed tomography
2017
In this paper, we provide direct evidence of the importance of root hairs on pore structure development at the root–soil interface during the early stage of crop establishment.
This was achieved by use of high-resolution (c. 5 μm) synchrotron radiation computed tomography (SRCT) to visualise both the structure of root hairs and the soil pore structure in plant–soil microcosms. Two contrasting genotypes of barley (Hordeum vulgare), with and without root hairs, were grown for 8 d in microcosms packed with sandy loam soil at 1.2 g cm⁻³ dry bulk density. Root hairs were visualised within air-filled pore spaces, but not in the fine-textured soil regions.
We found that the genotype with root hairs significantly altered the porosity and connectivity of the detectable pore space (> 5 μm) in the rhizosphere, as compared with the no-hair mutants. Both genotypes showed decreasing pore space between 0.8 and 0.1 mm from the root surface. Interestingly, the root-hair-bearing genotype had a significantly greater soil pore volume fraction at the root–soil interface.
Effects of pore structure on diffusion and permeability were estimated to be functionally insignificant under saturated conditions when simulated using image-based modelling.
Journal Article
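The root-hair study above quantifies rhizosphere porosity from segmented CT volumes. As a purely illustrative aside (not code from the paper), the two quantities it reports — bulk porosity and a porosity profile with distance from the root — reduce to voxel averages over a binarized volume; a minimal NumPy sketch on synthetic data:

```python
import numpy as np

# Hypothetical illustration: voxels equal 1 for pore space, 0 for solid,
# as produced by thresholding a segmented CT scan. The volume here is a
# random synthetic stand-in, not real SRCT data.
rng = np.random.default_rng(0)
volume = (rng.random((50, 50, 50)) > 0.7).astype(np.uint8)

def porosity(binary_volume: np.ndarray) -> float:
    """Fraction of pore voxels in the segmented volume."""
    return float(binary_volume.mean())

def radial_porosity(binary_volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Porosity profile along one axis, e.g. distance from the root surface."""
    other_axes = tuple(a for a in range(binary_volume.ndim) if a != axis)
    return binary_volume.mean(axis=other_axes)

print(f"bulk porosity: {porosity(volume):.3f}")
print(f"profile (first 5 slices): {radial_porosity(volume)[:5]}")
```

A real analysis would additionally need pore-connectivity labelling (e.g. connected-component analysis) to reproduce the paper's connectivity results, which this sketch omits.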
Deep learning and computer vision will transform entomology
2021
Most animal species on Earth are insects, and recent reports suggest that their abundance is in drastic decline. Although these reports come from a wide range of insect taxa and regions, the evidence to assess the extent of the phenomenon is sparse. Insect populations are challenging to study, and most monitoring methods are labor intensive and inefficient. Advances in computer vision and deep learning provide potential new solutions to this global challenge. Cameras and other sensors can effectively, continuously, and noninvasively perform entomological observations throughout diurnal and seasonal cycles. The physical appearance of specimens can also be captured by automated imaging in the laboratory. When trained on these data, deep learning models can provide estimates of insect abundance, biomass, and diversity. Further, deep learning models can quantify variation in phenotypic traits, behavior, and interactions. Here, we connect recent developments in deep learning and computer vision to the urgent demand for more cost-efficient monitoring of insects and other invertebrates. We present examples of sensor-based monitoring of insects. We show how deep learning tools can be applied to exceptionally large datasets to derive ecological information and discuss the challenges that lie ahead for the implementation of such solutions in entomology. We identify four focal areas, which will facilitate this transformation: 1) validation of image-based taxonomic identification; 2) generation of sufficient training data; 3) development of public, curated reference databases; and 4) solutions to integrate deep learning and molecular tools.
Journal Article
Ligand-Enhanced Negative Images Optimized for Docking Rescoring
2022
Despite the pivotal role of molecular docking in modern drug discovery, the default docking scoring functions often fail to recognize active ligands in virtual screening campaigns. Negative image-based rescoring improves docking enrichment by comparing the shape/electrostatic potential (ESP) of the flexible docking poses against the target protein’s inverted cavity volume. By optimizing these negative image-based (NIB) models using a greedy search, the docking rescoring yield can be improved massively and consistently. Here, a fundamental modification is implemented to this shape-focused pharmacophore modelling approach—actual ligand 3D coordinates are incorporated into the NIB models for the optimization. This hybrid approach, labelled as ligand-enhanced brute-force negative image-based optimization (LBR-NiB), takes the best from both worlds, i.e., the all-roundedness of the NIB models and the difficult to emulate atomic arrangements of actual protein-bound small-molecule ligands. Thorough benchmarking, focused on proinflammatory targets, shows that the LBR-NiB routinely improves the docking enrichment over prior iterations of the R-NiB methodology. This boost can be massive, if the added ligand information provides truly essential binding information that was lacking or completely missing from the cavity-based NIB model. On a practical level, the results indicate that the LBR-NiB typically works well when the added ligand 3D data originates from a high-quality source, such as X-ray crystallography, and, yet, the NIB model compositions can also sometimes be improved by fusing into them, for example, with flexibly docked solvent molecules. In short, the study demonstrates that the protein-bound ligands can be used to improve the shape/ESP features of the negative images for effective docking rescoring use in virtual screening.
Journal Article
Image-based robotic-arm assisted unicompartmental knee arthroplasty provides high survival and good-to-excellent clinical outcomes at minimum 10 years follow-up
by
Malatesta, Alessandro
,
Barbo, Giovanni
,
Seracchioli, Stefano
in
Arthroplasty (knee)
,
Bone implants
,
Clinical outcomes
2023
Purpose
The purpose of the present study was to determine the incidence of revision and report on clinical outcomes at a minimum of 10 years follow-up in patients who had received a medial unicompartmental knee arthroplasty (UKA) with a three-dimensional image-based robotic system.
Methods
A total of 239 patients (247 knees), who underwent medial robotic-arm assisted (RA)-UKA at a single center between April 2011 and June 2013, were assessed. The mean age at surgery was 67.0 years (SD 8.4). Post-operatively, patients were administered the Forgotten Joint Score-12 (FJS-12) and asked about their satisfaction (from 1 to 5). Post-operative complications were recorded. Failure mechanisms, revisions and reoperations were collected. Kaplan–Meier survival curves were calculated, considering revision as the event of interest.
Results
A total of 188 patients (196 knees) were assessed at a mean follow-up of 11.1 years (SD 0.5, range 10.0–11.9), resulting in a 79.4% follow-up rate. Seven RA-UKAs underwent revision, resulting in a survivorship rate of 96.4% (CI 94.6%–99.2%). Causes of revision included aseptic loosening (2 cases), infection (1 case), post-traumatic (1 case), and unexplained pain (3 cases). The mean FJS-12 and satisfaction were 82.2 (SD 23.9) and 4.4 (SD 0.9), respectively. The majority of cases (174/196, 88.8%) attained the Patient Acceptable Symptom State (PASS, FJS-12 > 40.63). Male subjects had a higher probability of attaining a "forgotten joint" (p < 0.001) and high satisfaction (equal to 5, p < 0.05) when compared to females.
Conclusions
Three-dimensional image-based RA-UKA demonstrated high implant survivorship and good-to-excellent clinical outcomes at minimum 10 years follow-up. Pain of unknown origin represented the most common reason for RA-UKA revision.
Level of evidence
III.
Journal Article
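The UKA study above reports implant survivorship from Kaplan–Meier curves with revision as the event of interest. As an illustrative sketch only (the follow-up times and event flags below are invented, not the study's data), the estimator multiplies, at each revision time, the fraction of at-risk knees that survive:

```python
import numpy as np

def kaplan_meier(times, events):
    """Minimal Kaplan-Meier estimator.

    times  : follow-up time per knee (years)
    events : 1 if revised at that time, 0 if censored (lost or study end)
    Returns (event_times, survival_probabilities).
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times):
        d = int(events[times == t].sum())   # revisions at time t
        n = int((times >= t).sum())         # knees still at risk at time t
        if d:
            surv *= (1 - d / n)             # step down only at event times
            out_t.append(float(t))
            out_s.append(surv)
    return out_t, out_s

# Toy cohort: 10 knees, 2 revisions, the rest censored
t = [1.2, 3.4, 5.0, 6.1, 7.3, 8.8, 10.0, 10.5, 11.0, 11.2]
e = [0,   1,   0,   0,   1,   0,   0,    0,    0,    0  ]
times_out, surv_out = kaplan_meier(t, e)
print(times_out, surv_out)
```

Censored observations reduce the at-risk count at later times without forcing a step down, which is what distinguishes this from a naive revised/total ratio.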
A Review of Recent Developments in Driver Drowsiness Detection Systems
by
Albadawi, Yaman
,
Awad, Mohammed
,
Takruri, Maen
in
Algorithms
,
Artificial Intelligence
,
Automobile Driving
2022
Continuous advancements in computing technology and artificial intelligence in the past decade have led to improvements in driver monitoring systems. Numerous experimental studies have collected real driver drowsiness data and applied various artificial intelligence algorithms and feature combinations with the goal of significantly enhancing the performance of these systems in real time. This paper presents an up-to-date review of the driver drowsiness detection systems implemented over the last decade. The paper illustrates and reviews recent systems using different measures to track and detect drowsiness. Each system falls under one of four possible categories, based on the information used. Each system presented in this paper is associated with a detailed description of the features, classification algorithms, and datasets used. In addition, an evaluation of these systems is presented, in terms of the final classification accuracy, sensitivity, and precision. Furthermore, the paper highlights the recent challenges in the area of driver drowsiness detection, discusses the practicality and reliability of each of the four system types, and presents some of the future trends in the field.
Journal Article
Image-Based Sexual Abuse
2017
Advances in technology have transformed and expanded the ways in which sexual violence can be perpetrated. One new manifestation of such violence is the non-consensual creation and/or distribution of private sexual images: what we conceptualise as ‘image-based sexual abuse’. This article delineates the scope of this new concept and identifies the individual and collective harms it engenders. We argue that the individual harms of physical and mental illness, together with the loss of dignity, privacy and sexual autonomy, combine to constitute a form of cultural harm that impacts directly on individuals, as well as on society as a whole. While recognising the limits of law, we conclude by considering the options for redress and the role of law, seeking to justify the deployment of the expressive and coercive powers of criminal and civil law as a means of encouraging cultural change.
Journal Article
Quantifying and Reducing the Operator Effect in LSPIV Discharge Measurements
2024
Operator choices, both in acquiring the video and data and in processing them, can be a prominent source of error in image‐based velocimetry methods applied to river discharge measurements. Large Scale Particle Image Velocimetry (LSPIV) is known to be sensitive to the parameters and computation choices set by the user, but no systematic comparisons with discharge references or intercomparisons have been conducted yet to evaluate this operator effect in LSPIV. In this paper, an analysis of a video gauging intercomparison, the Video Globe Challenge 2020, is proposed to evaluate this operator effect. The analysis is based on the gauging reports of the 15 to 23 participants using the Fudaa‐LSPIV software and intends to identify the most sensitive parameters for the eight videos. The analysis highlighted the significant impact of the time interval, the grid points and the filters on the LSPIV discharge measurements. These parameters are often inter‐dependent and should be correctly set together to strongly reduce the discharge errors. Based on the results, several automated tools were proposed to reduce the operator effect. These tools consist of several parameter assistants to automatically set the orthorectification resolution, the grid and the time interval, and of a sequence of systematic and automatic filters to ensure reliable velocity measurements used for discharge estimation. The application of the assisted LSPIV workflow using the proposed tools leads to significant improvements of the discharge measurements with strong reductions of the inter‐participant variability. On the eight videos, the mean interquartile range of the discharge errors is reduced from 17% to 5% and the mean discharge bias is reduced from −9% to 1% with the assisted LSPIV workflow. The remaining inter‐participant variability is mainly due to the user‐defined surface velocity coefficient α.
Key Points
Video‐based river discharge measurements are sensitive to both measuring conditions and user‐defined parameters and options
The sensitivity of Large Scale Particle Image Velocimetry discharge computations to operator choices is quantified through a video streamgauging intercomparison
Proposed automatic settings and spurious velocity filters efficiently reduce discharge biases and inter‐operator variability
Journal Article
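The LSPIV entry above turns on operator-chosen parameters (interrogation window, time interval, filters) for a cross-correlation step. As a hypothetical sketch of that core step (not the Fudaa‐LSPIV implementation), surface displacement can be estimated by locating the peak of the normalized cross-correlation between a window in one frame and shifted windows in the next:

```python
import numpy as np

def displacement(frame_a, frame_b, win=8, search=4):
    """Return (dy, dx) pixel shift of the central interrogation window."""
    cy, cx = frame_a.shape[0] // 2, frame_a.shape[1] // 2
    a = frame_a[cy - win:cy + win, cx - win:cx + win].astype(float)
    a -= a.mean()
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            b = frame_b[cy - win + dy:cy + win + dy,
                        cx - win + dx:cx + win + dx].astype(float)
            b -= b.mean()
            # zero-mean normalized cross-correlation score
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Synthetic tracer texture shifted 2 px "downstream" between frames
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=2, axis=1)
print(displacement(frame_a, frame_b))
```

The resulting shift times the ground sampling distance, divided by the frame interval, gives a surface velocity, which is then scaled by the user-defined coefficient α mentioned in the abstract to estimate depth-averaged velocity for discharge computation.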
Synthetic Image Generation Using the Finite Element Method and Blender Graphics Program for Modeling of Vision-Based Measurement Systems
2021
Computer vision is a frequently used approach in static and dynamic measurements of various mechanical structures. Sometimes, however, conducting a large number of experiments is time-consuming and may require significant financial and human resources. Instead, the authors propose a simulation approach that generates vision data synthetically. Synthetic images of mechanical structures subjected to loads are generated in the following way: the finite element method is adopted to compute deformations of the studied structure, and then the Blender graphics program is used to render images presenting that structure. As a result of the proposed approach, it is possible to obtain synthetic images that reliably reflect static and dynamic experiments. This paper presents the results of the application of the proposed approach in the analysis of a complex-shaped structure for which experimental validation was carried out. In addition, a second example, the process of 3D reconstruction of the examined structure in a multicamera system, is provided. The results for a structure with damage (a cantilever beam) are also presented. The obtained results allow us to conclude that the proposed approach reliably imitates the images captured during real experiments. In addition, the method can become a tool supporting the vision-system configuration process before conducting final experimental research.
Journal Article
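The synthetic-image entry above couples FE-computed deformations with rendered camera views. As an illustrative sketch under assumed parameters (the focal length and principal point below are invented), the geometric core of such a pipeline is projecting deformed 3D node positions through a pinhole camera model to see the image-plane motion a vision system would measure:

```python
import numpy as np

def project(points_3d, focal=1000.0, cx=320.0, cy=240.0):
    """Project Nx3 camera-frame points (metres) to Nx2 pixel coordinates."""
    pts = np.asarray(points_3d, dtype=float)
    u = focal * pts[:, 0] / pts[:, 2] + cx
    v = focal * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)

# Two FE nodes 2 m from the camera, displaced vertically by 10 and 20 mm
nodes = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])
disp = np.array([[0.0, 0.01, 0.0], [0.0, 0.02, 0.0]])
before, after = project(nodes), project(nodes + disp)
print(after - before)  # image-plane motion induced by the deformation
```

A full renderer such as Blender adds lighting, texture, and lens effects on top of this projection, which is what makes the synthetic images usable as stand-ins for experimental footage.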