Catalogue Search | MBRL
2,024 result(s) for "automated image recognition"
Applying deep learning to right whale photo identification
2019
Photo identification is an important tool for estimating abundance and monitoring population trends over time. However, manually matching photographs to known individuals is time-consuming. Motivated by recent developments in image recognition, we hosted a data science challenge on the crowdsourcing platform Kaggle to automate the identification of endangered North Atlantic right whales (Eubalaena glacialis). The winning solution identified individual whales with 87% accuracy using a series of convolutional neural networks to locate the region of interest in an image; to rotate, crop, and standardize it into passport-like photographs of uniform size and orientation; and then to identify the correct individual whale from these photographs. Recent advances in deep learning coupled with this fully automated workflow have yielded impressive results and have the potential to revolutionize traditional methods for collecting data on the abundance and distribution of wild populations. Presenting these results to a broad audience should further bridge the gap between the data science and conservation science communities.
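The staged workflow the abstract describes (locate the region of interest, normalize to a passport-like photo, then identify the individual) can be sketched as a simple pipeline. The stage functions and the catalogue ID below are invented stand-ins for the trained CNNs, not the winning solution's code:

```python
from typing import Callable, Dict, List

def run_pipeline(image: Dict, stages: List[Callable[[Dict], Dict]]) -> Dict:
    """Feed each stage's output to the next, as in the staged whale-ID workflow."""
    for stage in stages:
        image = stage(image)
    return image

# Stand-in stages; in the actual solution each step is a trained CNN.
def locate_roi(img):
    # Stage 1: localize the region of interest (the whale's head).
    return {**img, "roi": "head_region"}

def align_and_crop(img):
    # Stage 2: rotate and crop into a photo of uniform size and orientation.
    return {**img, "passport": f"standardized({img['roi']})"}

def identify(img):
    # Stage 3: match the standardized photo to a known individual (ID invented).
    return {**img, "whale_id": "EG-1234"}

result = run_pipeline({"raw": "aerial_photo.jpg"},
                      [locate_roi, align_and_crop, identify])
```

The point of the composition is that each CNN only has to solve one narrow subproblem, which is what made the fully automated workflow tractable.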
Journal Article
Automated identification of Monogeneans using digital image processing and K-nearest neighbour approaches
by Tan, Wooi Boon; Town, Christopher; Dhillon, Sarinder Kaur
in Algorithms, Animals, Bioinformatics
2016
Background
Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks and reproductive parts (male and female copulatory organs), as well as on soft anatomical parts. The complex structure of these diagnostic organs, and their overlapping in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666: 256–263, 2012; ISDA: 457–462, 2011; J Zoolog Syst Evol Res 52(2): 95–99, 2013). In this study, images of hard parts of the haptoral organs such as bars and anchors are used to develop a fully automated technique for monogenean species identification by applying image processing and machine learning methods.
Results
Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used to develop the automated identification technique. K-nearest neighbour (KNN) was applied to classify the monogenean specimens based on the extracted features. 50% of the dataset was used for training and the other 50% for testing in the system evaluation. Our approach demonstrated an overall classification accuracy of 90%. Leave One Out (LOO) cross-validation was used to validate the system, yielding an accuracy of 91.25%.
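The classification scheme described above, KNN with Leave One Out cross-validation, can be sketched in a few lines. The two-dimensional "haptoral measurement" features and species assignments below are illustrative toy data, not the study's extracted features:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

def leave_one_out_accuracy(data, k=3):
    """LOO cross-validation: hold out each specimen once, train on the rest."""
    correct = sum(
        knn_predict(data[:i] + data[i + 1:], x, k) == y
        for i, (x, y) in enumerate(data)
    )
    return correct / len(data)

# Invented (anchor length, bar width) measurements for two of the four species.
data = [
    ((1.0, 0.9), "T. pahangensis"), ((1.1, 1.0), "T. pahangensis"),
    ((0.9, 1.1), "T. pahangensis"),
    ((3.0, 2.9), "M. mizellei"), ((3.1, 3.0), "M. mizellei"),
    ((2.9, 3.1), "M. mizellei"),
]
acc = leave_one_out_accuracy(data, k=3)
```

On well-separated toy clusters like these, LOO accuracy is perfect; the study's 91.25% reflects the much harder real feature space.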
Conclusions
The methods presented in this study facilitate fast and accurate fully automated classification of monogeneans at the species level. In future studies more classes will be included in the model, the time to capture the monogenean images will be reduced and improvements in extraction and selection of features will be implemented.
Journal Article
Automated identification of copepods using digital image processing and artificial neural network
2015
Background
Copepods are planktonic organisms that play a major role in the marine food chain. Studying the community structure and abundance of copepods in relation to the environment is essential to evaluate their contribution to mangrove trophodynamics and coastal fisheries. The routine identification of copepods can be very technical, requiring taxonomic expertise, experience and much effort which can be very time-consuming. Hence, there is an urgent need to introduce novel methods and approaches to automate identification and classification of copepod specimens. This study aims to apply digital image processing and machine learning methods to build an automated identification and classification technique.
Results
We developed an automated technique to extract morphological features of copepod specimens from captured images using digital image processing techniques. An Artificial Neural Network (ANN) was used to classify the copepod specimens from the species Acartia spinicauda, Bestiolina similis, Oithona aruensis, Oithona dissimilis, Oithona simplex, Parvocalanus crassirostris, Tortanus barbatus and Tortanus forcipatus based on the extracted features. 60% of the dataset was used for training a two-layer feed-forward network and the remaining 40% was used as the testing dataset for system evaluation. Our approach demonstrated an overall classification accuracy of 93.13% (100% for A. spinicauda, B. similis and O. aruensis; 95% for T. barbatus; 90% for O. dissimilis and P. crassirostris; 85% for O. simplex and T. forcipatus).
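The forward pass of a two-layer feed-forward network of the kind the study trained can be sketched as follows. The weights, biases, and two-species setup below are illustrative values chosen by hand, not the trained network:

```python
import math

def forward(x, w1, b1, w2, b2):
    """Two-layer feed-forward pass: a sigmoid hidden layer followed by a
    linear output layer that yields one score per species."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

# Two input features, two hidden units, two species scores (toy weights).
scores = forward([1.0, 0.0],
                 w1=[[2.0, 0.0], [0.0, 2.0]], b1=[0.0, 0.0],
                 w2=[[1.0, 0.0], [0.0, 1.0]], b2=[0.0, 0.0])
predicted = scores.index(max(scores))  # argmax over scores picks the species
```

In training, the weights would be fitted to the extracted morphological features by backpropagation on the 60% training split described above.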
Conclusions
The methods presented in this study enable fast classification of copepods to the species level. Future studies should include more classes in the model, improve the selection of features, and reduce the time needed to capture the copepod images.
Journal Article
How automated image analysis techniques help scientists in species identification and classification?
2018
Identification of taxa at a specific level is time-consuming and reliant upon expert ecologists. Hence the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts on species identification include processing specimen images, extracting identifying features, and then classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, mainly to categorise and evaluate their methods. We reviewed and compared the different methods used at each step of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques for building such systems for biodiversity studies. (Folia Morphol 2018; 77, 2: 179–193).
Journal Article
On How Crowdsourced Data and Landscape Organisation Metrics Can Facilitate the Mapping of Cultural Ecosystem Services: An Estonian Case Study
by Heremans, Stien; Külvik, Mart; Chervanyov, Igor
in Algorithms, Application programming interface, automated image recognition
2020
Social media continues to grow, permanently capturing our digital footprint in the form of texts, photographs, and videos, thereby reflecting our daily lives. Therefore, recent studies are increasingly recognising passively crowdsourced geotagged photographs retrieved from location-based social media as suitable data for quantitative mapping and assessment of cultural ecosystem service (CES) flow. In this study, we attempt to improve CES mapping from geotagged photographs by combining natural language processing (topic modelling) with automated machine learning classification. Our study focuses on three main groups of CESs that are abundant in outdoor social media data: landscape watching, active outdoor recreation, and wildlife watching. Moreover, by means of a comparative viewshed analysis, we compare geographic information system- and remote sensing-based landscape organisation metrics related to landscape coherence and colour harmony. We observed the spatial distribution of CESs in Estonia and confirmed that colour harmony indices are more strongly associated with landscape watching and outdoor recreation, while landscape coherence is more associated with wildlife watching. Both CES use and the values of landscape organisation indices are land cover-specific. The suggested methodology can significantly improve the state of the art in CES mapping from geotagged photographs, and it is therefore particularly relevant for monitoring landscape sustainability.
Journal Article
Training 4.6-Bit Convolutional Neural Networks with a HardTanh Activation Function
2025
Low-bit quantization of neural networks is of great practical importance since it helps to significantly reduce the memory footprint and power consumption as well as increase the computational speed, which is especially important for mobile devices. Acting as a compromise between accurate 8-bit quantization and computationally efficient 4-bit quantization, 4.6-bit quantization is one of the promising methods. In this work, theoretical and practical aspects of training 4.6-bit convolutional neural networks with a HardTanh-type activation function are studied. Applying such activations, one can combine quantization and activation operations, simplifying the training procedure and reducing the computational cost of execution. Combinations of three different initialization strategies and four quantization algorithms are theoretically and experimentally studied for the class of neural networks involved. Special attention is paid to quantizing blocks with residual connections. According to the results, the best accuracy can be achieved by using initialization that balances the variance of gradients in the neural network with a layer-by-layer quantization algorithm that calibrates the weights of the neural network layers similar to the AdaQuant algorithm. For quantization of a block with a residual connection, an extended HardTanh-type activation function should be used, which is subsequently combined with the quantization operation.
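Combining the HardTanh activation with the quantization step, as the abstract suggests, can be sketched as a single fused operation. The uniform 24-level grid below is an illustrative reading of "4.6 bits" (2^4.6 ≈ 24), not the paper's exact scheme:

```python
def quantized_hardtanh(x, levels=24, lo=-1.0, hi=1.0):
    """Fuse HardTanh clipping and uniform quantization into one operation.

    HardTanh clips x to [lo, hi]; the clipped value is then snapped to the
    nearest of `levels` evenly spaced grid points. `levels=24` is an
    illustrative stand-in for 4.6-bit precision (2**4.6 is roughly 24).
    """
    clipped = max(lo, min(hi, x))            # HardTanh: clip to [lo, hi]
    step = (hi - lo) / (levels - 1)          # width of one quantization bin
    return lo + round((clipped - lo) / step) * step  # snap to nearest level

q_hi = quantized_hardtanh(5.0)    # saturates at the upper clip value
q_lo = quantized_hardtanh(-5.0)   # saturates at the lower clip value
q_mid = quantized_hardtanh(0.3)   # lands on the nearest grid point
```

Because the activation already bounds its output, the quantizer's range is known in advance, which is why fusing the two simplifies training and inference.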
Journal Article
Digital Excavation of Mediatized Urban Heritage: Automated Recognition of Buildings in Image Sources
by Mager, Tino; Hein, Carola
in Acknowledgment, Artificial intelligence, Artificial neural networks
2020
Digital technologies provide novel ways of visualizing cities and buildings. They also facilitate new methods of analyzing the built environment, ranging from artificial intelligence (AI) to crowdsourced citizen participation. Digital representations of cities have become so refined that they challenge our perception of the real. However, computers are not yet able to detect and analyze the visible features of built structures depicted in photographs or other media. Recent scientific advances mean that it is possible for this new field of computer vision to serve as a critical aid to research. Neural networks now meet the challenge of identifying and analyzing building elements, buildings and urban landscapes. The development and refinement of these technologies requires more attention; simultaneously, investigation is needed into the use and meaning of these methods for historical research. For example, the use of AI raises questions about the ways in which computer-based image recognition reproduces biases of contemporary practice. It also invites reflection on how mixed methods, integrating quantitative and qualitative approaches, can be established and used in research in the humanities. Finally, it opens new perspectives on the role of crowdsourcing in both knowledge dissemination and shared research. Attempts to analyze historical big data with the latest methods of deep learning, to involve many people, laymen and experts alike, in research via crowdsourcing, and to deal with partly unknown visual material have provided a better understanding of what is possible. The article presents findings from the ongoing research project ArchiMediaL, which is at the forefront of the analysis of historical mediatizations of the built environment. It demonstrates how the combination of crowdsourcing, historical big data and deep learning simultaneously raises questions and provides solutions in the field of architectural and urban planning history.
Journal Article
Application of local fully Convolutional Neural Network combined with YOLO v5 algorithm in small target detection of remote sensing image
2021
This study primarily aims to jointly apply a local FCN (fully convolutional neural network) and YOLO-v5 (You Only Look Once v5) to the detection of small targets in remote sensing images. Firstly, the application effects of R-CNN (Region-Convolutional Neural Network), FRCN (Fast Region-Convolutional Neural Network), and R-FCN (Region-Based Fully Convolutional Network) in image feature extraction are analyzed after introducing the relevant region proposal network. Secondly, the YOLO-v5 algorithm is built on the basis of the YOLO algorithm. In addition, the multi-scale anchor mechanism of Faster R-CNN is utilized to improve the detection ability of the YOLO-v5 algorithm for small targets and to make the algorithm highly adaptable to images of different sizes. Finally, the proposed YOLO-v5 + R-FCN detection method is compared with other algorithms on the NWPU VHR-10 and Vaihingen data sets. The experimental results show that the YOLO-v5 + R-FCN method has the best detection ability among the compared algorithms, especially for small targets in remote sensing images such as tennis courts, vehicles, and storage tanks, and it achieves high recall rates for different types of small targets. Furthermore, due to its deeper network architecture, the YOLO-v5 + R-FCN method has a stronger ability to extract the characteristics of image targets and achieves more accurate feature recognition and detection performance for densely arranged targets in remote sensing images. This research can provide a reference for the application of remote sensing technology in China and promote the use of satellites for target detection tasks in related fields.
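The multi-scale anchor mechanism borrowed from Faster R-CNN amounts to generating one candidate box per (scale, aspect ratio) pair at each feature-map cell. The base size, scales, and ratios below are conventional illustrative values, not the paper's settings:

```python
def make_anchors(base=16, scales=(1, 2, 4), ratios=(0.5, 1.0, 2.0)):
    """Generate multi-scale, multi-aspect anchor boxes as (w, h) pairs
    centred on one feature-map cell, in the spirit of Faster R-CNN.

    Each anchor keeps (approximately) the area of a scaled base box while
    varying its aspect ratio r = h / w, so small targets can be matched
    by the small-scale anchors and large targets by the large-scale ones.
    """
    anchors = []
    for s in scales:
        area = (base * s) ** 2          # target area at this scale
        for r in ratios:
            w = round((area / r) ** 0.5)  # solve w*h = area with h = r*w
            h = round(w * r)
            anchors.append((w, h))
    return anchors

anchors = make_anchors()  # 3 scales x 3 ratios = 9 anchors per cell
```

The detector then regresses offsets from whichever anchor best overlaps a ground-truth box, which is what gives small targets a fighting chance against coarse feature maps.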
Journal Article
Automated plant species identification—Trends and future directions
by Mäder, Patrick; Rzanny, Michael; Seeland, Marco
in Artificial Intelligence, Automation, Biodiversity
2018
Current rates of species loss have triggered numerous attempts to protect and conserve biodiversity. Species conservation, however, requires species identification skills, a competence obtained through intensive training and experience. Field researchers, land managers, educators, civil servants, and the interested public would greatly benefit from accessible, up-to-date tools automating the process of species identification. Currently, relevant technologies, such as digital cameras, mobile devices, and remote access to databases, are ubiquitously available, accompanied by significant advances in image processing and pattern recognition. The idea of automated species identification is approaching reality. We review the technical status quo of computer vision approaches for plant species identification, highlight the main research challenges to overcome in providing applicable tools, and conclude with a discussion of open and future research thrusts.
Journal Article
Automated segmentation by deep learning of loose connective tissue fibers to define safe dissection planes in robot-assisted gastrectomy
2021
The prediction of anatomical structures within the surgical field by artificial intelligence (AI) is expected to support surgeons’ experience and cognitive skills. We aimed to develop a deep-learning model to automatically segment loose connective tissue fibers (LCTFs) that define a safe dissection plane. The annotation was performed on video frames capturing a robot-assisted gastrectomy performed by trained surgeons. A deep-learning model based on U-net was developed to output segmentation results. Twenty randomly sampled frames were provided to evaluate model performance by comparing Recall and F1/Dice scores with a ground truth and with a two-item questionnaire on sensitivity and misrecognition that was completed by 20 surgeons. The model produced high Recall scores (mean 0.606, maximum 0.861). Mean F1/Dice scores reached 0.549 (range 0.335–0.691), showing acceptable spatial overlap of the objects. Surgeon evaluators gave a mean sensitivity score of 3.52 (with 88.0% assigning the highest score of 4; range 2.45–3.95). The mean misrecognition score was a low 0.14 (range 0–0.7), indicating very few acknowledged over-detection failures. Thus, AI can be trained to predict fine, difficult-to-discern anatomical structures at a level convincing to expert surgeons. This technology may help reduce adverse events by determining safe dissection planes.
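The Recall and F1/Dice scores used to evaluate the segmentation model can be computed pixel-wise from a predicted mask and the ground truth. A minimal sketch on flattened 0/1 masks (the four-pixel masks below are toy inputs):

```python
def recall_and_dice(pred, truth):
    """Pixel-wise Recall and F1/Dice between a predicted binary mask and the
    ground truth. `pred` and `truth` are equal-length sequences of 0/1."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)          # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)      # false positives
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)      # false negatives
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Dice = 2*TP / (2*TP + FP + FN); for binary masks this equals F1.
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
    return recall, dice

half = recall_and_dice([1, 1, 0, 0], [1, 0, 1, 0])   # one hit, one miss, one extra
exact = recall_and_dice([1, 0, 1], [1, 0, 1])        # perfect overlap
```

Dice rewards spatial overlap symmetrically in false positives and false negatives, which is why the study reports it alongside Recall rather than Recall alone.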
Journal Article