194 result(s) for "Bueno, Gloria"
Fast Fight Detection
Action recognition has become a hot topic within computer vision. However, the action recognition community has focused mainly on relatively simple actions like clapping, walking, jogging, etc. The detection of specific events with direct practical use, such as fights or aggressive behavior in general, has been comparatively less studied. Such capability may be extremely useful in video surveillance scenarios like prisons and psychiatric centers, or even embedded in camera phones. As a consequence, there is growing interest in developing violence detection algorithms. Recent work considered the well-known Bag-of-Words framework for the specific problem of fight detection. Under this framework, spatio-temporal features are extracted from the video sequences and used for classification. Despite encouraging results in which high accuracy rates were achieved, the computational cost of extracting such features is prohibitive for practical applications. This work proposes a novel method to detect violent sequences. Features extracted from motion blobs are used to discriminate between fight and non-fight sequences. Although the method is outperformed in accuracy by the state of the art, its computation time is significantly lower, making it amenable to real-time applications.
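The motion-blob idea above can be sketched in a few lines: difference two consecutive grayscale frames, threshold the result, and collect the areas of the 4-connected components as cheap features. This is a minimal pure-Python illustration, not the paper's actual feature set or classifier.

```python
from collections import deque

def motion_blobs(prev, curr, thresh=30):
    """Binary motion mask via absolute frame differencing, then
    4-connected components as 'motion blobs'; returns the blob areas.
    prev/curr are grayscale frames given as 2-D lists of ints."""
    h, w = len(curr), len(curr[0])
    mask = [[abs(curr[y][x] - prev[y][x]) > thresh for x in range(w)]
            for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill over the motion mask.
                size, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    size += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(size)
    return areas
```

Statistics of the returned areas (count, largest blob, etc.) would then feed a lightweight classifier, which is what keeps the approach fast compared with spatio-temporal feature extraction.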
Data augmentation via warping transforms for modeling natural variability in the corneal endothelium enhances semi-supervised segmentation
Image segmentation of the corneal endothelium with deep convolutional neural networks (CNN) is challenging due to the scarcity of expert-annotated data. This work proposes a data augmentation technique via warping to enhance the performance of semi-supervised training of CNNs for accurate segmentation. We use a unique augmentation process for images and masks involving keypoint extraction, Delaunay triangulation, local affine transformations, and mask refinement. This approach accurately captures the natural variability of the corneal endothelium, enriching the dataset with realistic and diverse images. The proposed method achieved increases in the mean intersection over union (mIoU) and Dice coefficient (DC) metrics of 17.2% and 4.8%, respectively, for the segmentation task in corneal endothelial images on multiple CNN architectures. Our data augmentation strategy successfully models the natural variability in corneal endothelial images, thereby enhancing the performance and generalization capabilities of semi-supervised CNNs in medical image cell segmentation tasks.
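The per-triangle step of such a warping pipeline can be illustrated with a small sketch: for each Delaunay triangle, solve for the affine map that carries the source vertices onto their displaced targets. The keypoint extraction, triangulation, and mask-refinement stages are omitted, and the function names are illustrative, not the authors' code.

```python
def solve3(m, r):
    """Solve the 3x3 linear system m @ x = r by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    return [det([[r[i] if k == j else m[i][k] for k in range(3)]
                 for i in range(3)]) / d for j in range(3)]

def triangle_affine(src, dst):
    """Affine map taking triangle src onto triangle dst.
    Returns ((a, b, c), (d, e, f)) with u = a*x + b*y + c, v = d*x + e*y + f."""
    m = [[x, y, 1.0] for x, y in src]
    return (tuple(solve3(m, [u for u, _ in dst])),
            tuple(solve3(m, [v for _, v in dst])))

def apply_affine(params, pt):
    """Map a point through the affine parameters from triangle_affine."""
    (a, b, c), (d, e, f) = params
    x, y = pt
    return a * x + b * y + c, d * x + e * y + f
```

Warping every pixel of a triangle through its own affine map, with identical transforms applied to image and mask, is what keeps the augmented pairs consistent.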
BUS-UCLM: Breast ultrasound lesion segmentation dataset
This dataset comprises 38 breast ultrasound scans from patients, encompassing a total of 683 images. The scans were conducted using a Siemens ACUSON S2000TM Ultrasound System from 2022 to 2023. The dataset is specifically created for the purpose of segmenting breast lesions, with the goal of identifying the area and contour of the lesion, as well as classifying it as either benign or malignant. The images can be classified into three categories based on their findings: 419 are normal, 174 are benign, and 90 are malignant. The ground truth is given as RGB segmentation masks in individual files, with black indicating normal breast tissue and green and red indicating benign and malignant lesions, respectively. This dataset enables researchers to construct and evaluate machine learning models for distinguishing between benign and malignant tumours in authentic breast ultrasound images. The segmentation annotations provided by expert radiologists enable accurate model training and evaluation, making this dataset a valuable asset in the field of computer vision and public health.
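Reading the colour-coded ground truth described above might look like the following sketch, which assumes pure black/green/red pixel values; the actual mask files may need tolerance for compression artefacts.

```python
def decode_mask(rgb_mask):
    """Turn a BUS-UCLM-style RGB segmentation mask into class ids:
    0 = normal tissue (black), 1 = benign lesion (green),
    2 = malignant lesion (red). rgb_mask is a 2-D list of (r, g, b)
    tuples; pure colours are assumed."""
    classes = {(0, 0, 0): 0, (0, 255, 0): 1, (255, 0, 0): 2}
    return [[classes[px] for px in row] for row in rgb_mask]

def lesion_area(label_mask, cls):
    """Pixel count of one lesion class in a decoded mask."""
    return sum(row.count(cls) for row in label_mask)
```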
Deep Neural Networks for HER2 Grading of Whole Slide Images with Subclasses Levels
HER2 overexpression is a prognostic and predictive factor observed in about 15% to 20% of breast cancer cases. The assessment of its expression directly affects the selection of treatment and prognosis. HER2 status is measured by an expert pathologist who assigns a score of 0, 1+, 2+, or 3+ based on the gene expression. There is a high probability of interobserver variability in this evaluation, especially for class 2+. This is reasonable, as the primary cause of error in multiclass classification problems typically arises in the intermediate classes. This work proposes a novel approach that expands the decision limit by dividing it into two additional classes, that is, 1.5+ and 2.5+. This subdivision facilitates both feature learning and pathology assessment. The method was evaluated using various neural network models capable of performing patch-wise grading of HER2 whole slide images (WSI). The outcomes of the 7-class classification were then merged back into 5 classes in accordance with the pathologists' criteria, so that the results could be compared with the initial 5-class model. Optimal outcomes were achieved by employing colour transfer for data augmentation and the ResNet-101 architecture with 7 classes. A sensitivity of 0.91 was achieved for class 2+ and 0.97 for 3+. Furthermore, this model offers the highest level of confidence, ranging from 92% to 94% for 2+ and 96% to 97% for 3+. In contrast, a dataset containing only 5 classes demonstrates a sensitivity that is 5% lower for the same network.
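The 7-to-5-class merge described above amounts to a fixed lookup over the predicted labels. The folding rule below (each half-step subclass absorbed into 2+) is a hypothetical example; the pathologist-defined mapping used in the paper may differ.

```python
# Hypothetical merge rule -- the paper's pathologist-defined mapping
# of the intermediate subclasses may differ from this example.
MERGE_7_TO_5 = {"0": "0", "1+": "1+", "1.5+": "2+",
                "2+": "2+", "2.5+": "2+", "3+": "3+"}

def merge_predictions(patch_scores):
    """Map patch-wise 7-class HER2 scores back to the 5 clinical classes."""
    return [MERGE_7_TO_5[s] for s in patch_scores]
```

Training on 7 classes but evaluating on the merged 5 lets the network learn finer decision boundaries around 2+ without changing the clinical reporting scale.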
Glomerulus Classification and Detection Based on Convolutional Neural Networks
Glomerulus classification and detection in kidney tissue segments are key processes in nephropathology, used for the correct diagnosis of kidney diseases. In this paper, we deal with the challenge of automating glomerulus classification and detection from digitized kidney slide segments using a deep learning framework. The proposed method applies Convolutional Neural Networks (CNNs) to discriminate between two classes, Glomerulus and Non-Glomerulus, in order to detect the image segments belonging to glomerulus regions. We configure the CNN with the public pre-trained AlexNet model and adapt it to our system by learning from Glomerulus and Non-Glomerulus regions extracted from training slides. Once the model is trained, labeling is performed by applying the CNN classification to the image blocks under analysis. The results indicate that this technique is suitable for correct glomerulus detection in Whole Slide Images (WSI), showing robustness while reducing false positive and false negative detections.
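The block-wise labeling step can be sketched as a sliding window that hands each block to a classifier callable (here a stand-in for the trained CNN, which is out of scope for this snippet):

```python
def label_patches(image, patch, stride, classify):
    """Slide a patch x patch window over a 2-D grayscale image (list of
    lists) and label every block with classify(block), e.g. a CNN
    returning 'Glomerulus' or 'Non-Glomerulus'."""
    h, w = len(image), len(image[0])
    labels = []
    for y in range(0, h - patch + 1, stride):
        row = []
        for x in range(0, w - patch + 1, stride):
            block = [r[x:x + patch] for r in image[y:y + patch]]
            row.append(classify(block))
        labels.append(row)
    return labels
```

The grid of labels then localizes glomerulus regions within the slide segment.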
Approaching Adversarial Example Classification with Chaos Theory
Adversarial examples are one of the most intriguing topics in modern deep learning: imperceptible perturbations to the input can fool robust models. In relation to this problem, attack and defense methods are being developed almost on a daily basis. In parallel, efforts are being made to simply point out when an input image is an adversarial example. This can help prevent potential issues, as the failure cases are easily recognizable by humans. This work proposes to study how chaos theory methods can help distinguish adversarial examples from regular images. Our work is based on the assumption that deep networks behave as chaotic systems, and adversarial examples are the main manifestation of this (in the sense that a slight input variation produces a totally different output). In our experiments, we show that the Lyapunov exponents (an established measure of chaoticity), which have recently been proposed for the classification of adversarial examples, are not robust to image processing transformations that alter image entropy. Furthermore, we show that entropy can complement Lyapunov exponents in such a way that the discriminating power is significantly enhanced. The proposed method achieves 65% to 100% accuracy detecting adversarial examples across a wide range of attacks (for example, CW, PGD, Spatial, HopSkip) on the MNIST dataset, with similar results when entropy-changing image processing methods (such as equalization, speckle and Gaussian noise) are applied. This is also corroborated on two other datasets, Fashion-MNIST and CIFAR-10. These results indicate that classifiers can enhance their robustness against the adversarial phenomenon and be applied in a wide variety of conditions that potentially match real-world cases as well as other threatening scenarios.
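The entropy feature the abstract pairs with Lyapunov exponents is just the Shannon entropy of the grayscale histogram; a minimal sketch follows (the Lyapunov exponent estimation itself is more involved and omitted here).

```python
import math

def image_entropy(pixels):
    """Shannon entropy (in bits) of an image's grayscale histogram.
    pixels is a 2-D list of 8-bit intensities."""
    flat = [p for row in pixels for p in row]
    n = len(flat)
    hist = {}
    for p in flat:
        hist[p] = hist.get(p, 0) + 1
    # H = -sum p_i * log2(p_i) over the observed intensity levels.
    return -sum((c / n) * math.log2(c / n) for c in hist.values())
```

A constant image has entropy 0, while a balanced two-level image has entropy 1 bit; transformations that shift this value are exactly the ones the abstract reports as breaking Lyapunov-only detection.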
Automated Diatom Classification (Part B): A Deep Learning Approach
Diatoms, a kind of algal microorganism comprising many species, are quite useful for water quality determination, one of the hottest topics in applied biology nowadays. At the same time, deep learning and convolutional neural networks (CNNs) are becoming an extensively used technique for image classification in a variety of problems. This paper approaches diatom classification with this technique in order to determine whether it is suitable for solving the classification problem. An extensive dataset was specifically collected for this study (80 types, 100 samples/type). The dataset covers different illumination conditions and was computationally augmented to more than 160,000 samples. After that, CNNs were applied over datasets pre-processed with different image processing techniques. An overall accuracy of 99% is obtained for the 80-class problem and different kinds of images (brightfield, normalized). Results were compared to previously presented classification techniques with different numbers of samples. As far as the authors know, this is the first time that CNNs have been applied to diatom classification.
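Computational augmentation of the kind mentioned above can be illustrated with the eight flip/rotation variants of a square image; the paper's actual augmentation recipe likely includes further transforms (e.g. illumination changes), so treat this as a minimal stand-in.

```python
def dihedral_augment(img):
    """Return the 8 flip/rotation (dihedral) variants of a square image
    given as a 2-D list, starting with the identity."""
    def rot90(m):
        # 90-degree clockwise rotation.
        return [list(r) for r in zip(*m[::-1])]
    out, cur = [], [list(r) for r in img]
    for _ in range(4):
        out.append(cur)
        out.append([row[::-1] for row in cur])  # horizontal flip
        cur = rot90(cur)
    return out
```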
Diatom identification including life cycle stages through morphological and texture descriptors
Diatoms are unicellular algae present almost wherever there is water. Diatom identification has many applications in different fields of study, such as ecology and forensic science. In environmental studies, algae can be used as a natural water quality indicator. The diatom life cycle consists of the set of stages that the successive generations of each species pass through, from the initial to the senescent cells. Life cycle modeling is a complex process, since in general the distribution of the parameter vectors that represent the variations occurring in this process is non-linear and of high dimensionality. In this paper, we propose to characterize the diatom life cycle by the main features that change during the algae life cycle, mainly the contour shape and the texture. Elliptical Fourier Descriptors (EFD) are used to describe the diatom contour, while phase congruency and Gabor filters describe the inner ornamentation of the algae. The proposed method has been tested with a small algae dataset (eight different classes and more than 50 samples per type) using supervised and unsupervised classification techniques, obtaining accuracy results of up to 99% and 98%, respectively.
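The contour-description idea can be sketched with plain complex Fourier descriptors, a simplified stand-in for the Elliptical Fourier Descriptors used in the paper: harmonic magnitudes normalised by the first harmonic give translation-, scale- and rotation-invariant shape features.

```python
import cmath, math

def contour_descriptors(points, k=8):
    """Low-order Fourier descriptors of a closed contour given as
    (x, y) points, returned as harmonic magnitudes normalised by the
    first harmonic (a simplified alternative to full EFD)."""
    z = [complex(x, y) for x, y in points]
    n = len(z)
    mags = []
    for f in range(1, k + 1):
        # Discrete Fourier coefficient of the complex contour at harmonic f.
        c = sum(z[i] * cmath.exp(-2j * math.pi * f * i / n)
                for i in range(n)) / n
        mags.append(abs(c))
    return [m / mags[0] for m in mags]
```

For a shape close to an ellipse the first harmonic dominates, and higher harmonics capture the finer contour variations that change along the life cycle.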
A Low-Cost Automated Digital Microscopy Platform for Automatic Identification of Diatoms
Currently, microalgae (i.e., diatoms) constitute a generally accepted bioindicator of water quality and therefore provide an index of the status of biological ecosystems. Diatom detection for specimen counting and sample classification are two difficult, time-consuming tasks for the few existing expert diatomists. To mitigate this challenge, in this work we propose a fully operative low-cost automated microscope, integrating algorithms for: (1) stage and focus control, (2) image acquisition (slide scanning, stitching, contrast enhancement), and (3) diatom detection and prospective specimen classification (among 80 taxa). Deep learning algorithms have been applied to overcome the difficult selection of image descriptors imposed by classical machine learning strategies. With respect to those strategies, the best results were obtained by deep neural networks, with a maximum precision of 86% for detection (with the YOLO network) and 99.51% for classification among 80 different species (with the AlexNet network). All the developed operational modules are integrated and controlled by the user from the developed graphical user interface running on the main controller. The resulting operative platform thus provides a quite useful toolbox for phycologists in their daily challenging task of identifying and classifying diatoms.
Height‐based equations as screening tools for high blood pressure in pediatric practice, the GENOBOX study
Due to the absence of easily applicable cut-off points to determine high blood pressure or hypertension in children, as exist for the adult population, blood pressure is rarely measured in the pediatrician's clinical routine. This has led to an underdiagnosis of high blood pressure and hypertension in children. For this reason, the present study evaluates the utility of five equations for the screening of high blood pressure in children: the blood pressure to height ratio, the modified blood pressure to height ratio, the new modified blood pressure to height ratio, the new simple formula, and the height-based equations. The authors evaluated 1599 children aged between 5 and 18 years. The performance of the five equations was analyzed using receiver-operating characteristic curves for identifying blood pressure above the 90th percentile according to the American Academy of Pediatrics Clinical Practice Guideline 2017. All equations showed an area under the curve above 0.882. The new modified blood pressure to height ratio revealed a high sensitivity, whereas the height-based equations showed the best performance, with a positive predictive value (PPV) above 88.2%. All equations showed higher positive predictive values in children with overweight or obesity; the height-based equations obtained the highest PPVs, above 71.1% in children with normal weight and above 90.2% in children with overweight or obesity. In conclusion, the authors recommend the use of the height-based equations because they showed the best positive predictive values for identifying children with elevated blood pressure, independently of sex, pubertal status, and weight status.
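The positive predictive value used to compare the screening equations above is simply TP / (TP + FP): of the children an equation flags, the fraction that truly have high blood pressure. A minimal sketch (the equations themselves are not reproduced here):

```python
def ppv(predicted, actual):
    """Positive predictive value TP / (TP + FP).
    predicted/actual are parallel lists of booleans, where predicted
    is the equation's flag and actual is the guideline-based label."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    return tp / (tp + fp) if tp + fp else float("nan")
```

A high PPV matters for a screening tool because each flagged child triggers follow-up measurement, so false positives translate directly into clinical workload.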