52,585 result(s) for "machine vision"
Melanoma Classification Using a Novel Deep Convolutional Neural Network with Dermoscopic Images
Automatic melanoma detection from dermoscopic skin samples is a very challenging task. However, a deep learning approach used as a machine vision tool can overcome some of these challenges. This research proposes an automated melanoma classifier based on a deep convolutional neural network (DCNN) to accurately classify malignant vs. benign melanoma. The structure of the DCNN is carefully designed by organizing many layers that are responsible for extracting low- to high-level features of the skin images in a unique fashion. Other vital criteria in the design of the DCNN are the selection of multiple filters and their sizes, employing proper deep learning layers, choosing the depth of the network, and optimizing hyperparameters. The primary objective is to propose a lightweight and less complex DCNN than other state-of-the-art methods to classify melanoma skin cancer with high efficiency. For this study, dermoscopic images containing different cancer samples were obtained from the International Skin Imaging Collaboration datastores (ISIC 2016, ISIC 2017, and ISIC 2020). We evaluated the model based on accuracy, precision, recall, specificity, and F1-score. The proposed DCNN classifier achieved accuracies of 81.41%, 88.23%, and 90.42% on the ISIC 2016, 2017, and 2020 datasets, respectively, demonstrating high performance compared with other state-of-the-art networks. Therefore, this proposed approach could provide a less complex and advanced framework for automating the melanoma diagnostic process and expediting identification, to save lives.
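The abstract evaluates the classifier with accuracy, precision, recall, specificity, and F1-score. As a minimal sketch of how those metrics are computed from a binary (malignant vs. benign) confusion matrix — the counts below are illustrative, not results from the paper:

```python
# Sketch of the binary-classification metrics listed in the abstract
# (accuracy, precision, recall, specificity, F1), computed from a
# confusion matrix. The counts used below are hypothetical.

def classification_metrics(tp, fp, tn, fn):
    """Compute standard metrics from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0          # a.k.a. sensitivity
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Hypothetical counts, treating malignant as the positive class:
m = classification_metrics(tp=80, fp=10, tn=95, fn=15)
print({k: round(v, 4) for k, v in m.items()})
```

The single accuracy figures quoted in the abstract (81.41%, 88.23%, 90.42%) correspond to the `accuracy` entry here; F1 balances the precision/recall trade-off that accuracy alone hides on imbalanced dermoscopic datasets.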
Understanding machine learning : from foundations to algorithms
"Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering" -- Provided by publisher.
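Among the algorithmic paradigms the blurb lists is stochastic gradient descent. A minimal sketch of the idea — update a parameter using the gradient on one randomly drawn example per step — fitting a one-parameter least-squares model; the data and step size are illustrative, not from the book:

```python
import random

# Minimal stochastic gradient descent sketch: fit y ≈ w*x by minimizing
# squared error, drawing one random example per update step.
# Data, learning rate, and step count are illustrative.

def sgd_fit(xs, ys, lr=0.01, steps=2000, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        i = rng.randrange(len(xs))
        grad = 2 * (w * xs[i] - ys[i]) * xs[i]   # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]   # generated by y = 3x, so w should approach 3
print(sgd_fit(xs, ys))
```

Because each update only touches one example, the per-step cost is independent of the dataset size — the property that makes SGD central to large-scale learning.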
Deep Neural Networks to Detect Weeds from Crops in Agricultural Environments in Real-Time: A Review
Automation, including machine learning technologies, is becoming increasingly crucial in agriculture to increase productivity. Machine vision is one of the most popular applications of machine learning and has been widely used where advanced automation and control are required. The trend has shifted from classical image processing and machine learning techniques to modern artificial intelligence (AI) and deep learning (DL) methods. Based on large training datasets and pre-trained models, DL-based methods have proven to be more accurate than traditional techniques. Machine vision has wide applications in agriculture, including the detection of weeds and pests in crops. Variation in lighting conditions, failures in transfer learning, and object occlusion constitute key challenges in this domain. Recently, DL has gained much attention due to its advantages in object detection, classification, and feature extraction. DL algorithms can automatically extract information from the large amounts of data used to model complex problems and are, therefore, suitable for detecting and classifying weeds and crops. We present a systematic review of AI-based systems to detect weeds, emphasizing recent trends in DL. Various DL methods are discussed to clarify their overall potential, usefulness, and performance. This study indicates that several limitations obstruct the widespread adoption of AI/DL in commercial applications. Recommendations for overcoming these challenges are summarized.
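The review contrasts classical image processing with modern DL methods. A representative classical step in weed/crop imaging is the excess-green vegetation index, ExG = 2g − r − b on chromaticity-normalized RGB, thresholded to separate vegetation from soil. A minimal sketch with hypothetical pixel values and threshold:

```python
# Sketch of a classical (pre-deep-learning) vegetation-segmentation step
# the review contrasts with DL: the excess-green index ExG = 2g - r - b
# on chromaticity-normalized RGB, thresholded to flag plant pixels.
# Pixel values and the threshold are illustrative.

def excess_green(r, g, b):
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

def is_vegetation(pixel, threshold=0.1):
    return excess_green(*pixel) > threshold

soil = (120, 90, 60)    # brownish pixel -> low ExG
leaf = (60, 140, 50)    # green pixel   -> high ExG
print(is_vegetation(soil), is_vegetation(leaf))
```

Such index-based segmentation is sensitive to exactly the lighting variation the review names as a key challenge, which is part of why learned features have displaced it.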
Handbook of research on computer vision and image processing in the deep learning era
"This book explores traditional and new areas of computer vision, machine learning, and deep learning, combined to solve a range of problems, with the objective of integrating the knowledge of the growing international community of researchers working on the application of machine learning and deep learning methods in vision and robotics" -- Provided by publisher.
Deep learning in visual computing and signal processing
"This new volume, Deep Learning in Visual Computing and Signal Processing, covers the fundamentals and advanced topics in designing and deploying techniques using deep architectures and their application in visual computing and signal processing. The volume first lays out the fundamentals of deep learning as well as deep learning architectures and frameworks. It goes on to discuss deep learning in neural networks and deep learning for object recognition and detection models. It looks at the various specific applications of deep learning in visual and signal processing, such as in biorobotics, for automated brain tumor segmentation in MRI images, in neural networks for use in seizure classification, for digital forensic investigation based on deep learning, and more. Key features: covers both the fundamentals and the latest concepts in deep learning, presents some of the diverse applications of deep learning in visual computing and signal processing, and includes over 90 figures and tables to elucidate the text. An enlightening amalgamation of deep learning concepts with visual computing and signal processing applications, this valuable resource will serve as a guide for researchers, engineers, and students who want to have a quick start on learning and/or building deep learning systems. It provides a good theoretical and practical understanding and complete information and knowledge required to understand and build deep learning models from scratch" -- Provided by publisher.
Next‐generation machine vision systems incorporating two‐dimensional materials: Progress and perspectives
Machine vision systems (MVSs) are an important component of intelligent systems, such as autonomous vehicles and robots. However, with the continuous increase in data and new application scenarios, new requirements are being placed on the next generation of MVSs. There is an urgent need to find new material systems to complement existing semiconductor technology based on thin‐film materials, and new architectures must be explored to improve efficiency. Because of their unique physical properties, two‐dimensional (2D) materials have received extensive attention for use in MVSs, especially biomimetic ones: the human visual system, which can process complex visual information with low power consumption, provides a model for next‐generation MVSs. This review summarizes the progress and challenges of applying 2D‐material photodetectors to sense‐memory‐compute integration and biomimetic image sensors for machine vision. A machine vision system simulates human visual functions to observe and recognize the objective world. New application scenarios and ever-increasing data impose new requirements on MVSs, including faster speed and lower energy consumption. This review surveys a variety of attempts to apply 2D‐material photodetectors to MVSs, including vertical integration with silicon‐based readout circuits and integrated sensing, memory, and computing architectures. Finally, 2D materials still face many challenges in the transition from the laboratory to industry.
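The "integrated sensing and computing" idea the review describes can be illustrated numerically: if each photodetector's responsivity is tunable, the summed photocurrent of a detector row is a dot product between incident intensities and responsivity weights, so part of a neural-network layer is computed in the sensor itself. This is a conceptual sketch only — the numbers are hypothetical and no specific device from the review is modeled:

```python
# Conceptual sketch of in-sensor computing with tunable photodetectors:
# the summed photocurrent of one detector row, I = sum_j P_j * R_j,
# implements a dot product between incident optical powers P_j and
# responsivity "weights" R_j. All values here are hypothetical.

def row_photocurrent(intensities, responsivities):
    """Summed current of one detector row: I = sum_j P_j * R_j."""
    return sum(p * r for p, r in zip(intensities, responsivities))

pixels = [0.2, 0.8, 0.5, 0.1]          # incident optical power per pixel
weights = [[1.0, -0.5, 0.0, 2.0],      # per-row responsivity settings
           [-1.0, 1.0, 1.0, -1.0]]
outputs = [row_photocurrent(pixels, w) for w in weights]
print(outputs)
```

Each row of `weights` plays the role of one output neuron, which is why such arrays can evaluate a small linear classifier at the speed and energy cost of the photodetection itself.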
Deep learning approach for natural language processing, speech, and computer vision : techniques and use cases
"Deep Learning Approach for Natural Language Processing, Speech, and Computer Vision provides an overview of general deep learning methodology and its applications to natural language processing (NLP), speech, and computer vision tasks. It simplifies and presents the concepts of deep learning in a comprehensive manner, with suitable, full-fledged examples of deep learning models, with the aim of bridging the gap between theory and applications using case studies with code, experiments, and supporting analysis. Features: covers the latest developments in deep learning techniques as applied to audio analysis, computer vision, and natural language processing; introduces contemporary applications of deep learning techniques to audio, textual, and visual processing; discovers deep learning frameworks and libraries for NLP, speech, and computer vision in Python; gives insights into using these tools and libraries in Python for real-world applications; provides easily accessible tutorials and real-world case studies with code for hands-on experience. This book is aimed at researchers and graduate students in computer engineering, image, speech, and text processing" -- Provided by publisher.
Learning Robust Multi-scale Representation for Neural Radiance Fields from Unposed Images
We introduce an improved solution to the neural image-based rendering problem in computer vision. Given a set of images taken from a freely moving camera at train time, the proposed approach can synthesize a realistic image of the scene from a novel viewpoint at test time. The key ideas presented in this paper are (i) recovering accurate camera parameters via a robust pipeline from unposed day-to-day images is crucial to the neural novel-view synthesis problem; and (ii) it is more practical to model an object's content at different resolutions, since dramatic camera motion is highly likely in day-to-day unposed images. To incorporate these ideas, we leverage the fundamentals of scene rigidity, multi-scale neural scene representation, and single-image depth prediction. Concretely, the proposed approach makes the camera parameters learnable in a neural-fields-based modeling framework. Assuming that per-view depth prediction is given up to scale, we constrain the relative pose between successive frames. From the relative poses, absolute camera pose estimation is modeled via graph-neural-network-based multiple motion averaging within the multi-scale neural-fields network, leading to a single loss function. Optimizing the introduced loss function provides camera intrinsics, extrinsics, and image rendering from unposed images. We demonstrate, with examples, that for a unified framework to accurately model multi-scale neural scene representation from day-to-day acquired unposed multi-view images, it is equally essential to have precise camera-pose estimates within the scene representation framework. Without robustness measures in the camera pose estimation pipeline, modeling for multi-scale aliasing artifacts can be counterproductive. We present extensive experiments on several benchmark datasets to demonstrate the suitability of our approach.
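The step from relative poses between successive frames to absolute camera poses can be sketched in a simplified 2-D setting. The paper does this in 3-D with graph-neural-network-based motion averaging; the sketch below only chains SE(2) poses (heading, x, y) along a single camera trajectory, the trivial single-chain case, with illustrative numbers:

```python
import math

# Simplified 2-D sketch of recovering absolute camera poses by composing
# relative poses between successive frames. Poses are SE(2) triples
# (theta, x, y); the paper's actual method is 3-D motion averaging over
# a pose graph, so this is only the trivial single-chain case.

def compose(abs_pose, rel_pose):
    """Compose an absolute SE(2) pose with a frame-to-frame relative pose."""
    th, x, y = abs_pose
    dth, dx, dy = rel_pose
    return (th + dth,
            x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy)

def chain(rel_poses, start=(0.0, 0.0, 0.0)):
    """Accumulate relative poses into absolute poses, starting at `start`."""
    poses = [start]
    for rel in rel_poses:
        poses.append(compose(poses[-1], rel))
    return poses

# Illustrative trajectory: the camera steps one unit forward, then turns
# 90 degrees left, four times -- it should return to the origin.
rels = [(math.pi / 2, 1.0, 0.0)] * 4
for p in chain(rels):
    print(tuple(round(v, 3) for v in p))
```

Chaining like this accumulates drift from every noisy relative estimate, which is precisely why the paper averages over a graph of relative motions instead of a single chain.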