Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
23,599 result(s) for "Deep learning applications"
Theoretical Understanding of Convolutional Neural Network: Concepts, Architectures, Applications, Future Directions
2023
Convolutional neural networks (CNNs) are one of the main types of neural networks used for image recognition and classification. CNNs have several uses, including object recognition, image processing, computer vision, and face recognition. CNNs take images as input and automatically learn a hierarchy of features that can then be used for classification, as opposed to relying on manually engineered features. To achieve this, a hierarchy of feature maps is constructed by iteratively convolving the input image with learned filters. Because of this hierarchical approach, higher layers can learn more intricate features that are also distortion- and translation-invariant. The main goals of this study are to help academics understand where research gaps exist and to discuss in depth CNN building blocks, their roles, and other vital issues.
Journal Article
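The hierarchical feature learning this abstract describes is easy to make concrete. Below is a minimal PyTorch sketch (not from the paper; the layer sizes and input shape are illustrative assumptions) in which stacked convolution-and-pooling blocks build the hierarchy of feature maps that a classifier head then consumes.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal CNN: each conv block learns a higher-level feature map."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Early layers learn low-level features (edges, textures).
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            # Deeper layers combine them into more intricate patterns that
            # tolerate translation and distortion, as the abstract describes.
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)            # hierarchy of learned feature maps
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # e.g. one CIFAR-size image
```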
Review of deep learning: concepts, CNN architectures, challenges, applications, future directions
by Fadhel, Mohammed A.; Zhang, Jinglan; Santamaría, J.
in Application; Artificial neural networks; Big Data
2021
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks and matching or even surpassing human performance. One of the benefits of DL is its ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been used successfully to address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each has tackled only one aspect of the field, leaving an overall lack of a complete picture. Therefore, in this contribution, we take a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). We then present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with an evolution matrix, benchmark datasets, and a summary and conclusion.
Journal Article
Deep Learning applications for COVID-19
by Furht, Borko; Shorten, Connor; Khoshgoftaar, Taghi M.
in Ambient intelligence; Application; Big Data
2021
This survey explores how Deep Learning has battled the COVID-19 pandemic and provides directions for future research on COVID-19. We cover Deep Learning applications in Natural Language Processing, Computer Vision, Life Sciences, and Epidemiology. We describe how each of these applications varies with the availability of big data and how learning tasks are constructed. We begin by evaluating the current state of Deep Learning and conclude with key limitations of Deep Learning for COVID-19 applications. These limitations include Interpretability, Generalization Metrics, Learning from Limited Labeled Data, and Data Privacy. Natural Language Processing applications include mining COVID-19 research for Information Retrieval and Question Answering, as well as Misinformation Detection and Public Sentiment Analysis. Computer Vision applications cover Medical Image Analysis, Ambient Intelligence, and Vision-based Robotics. Within Life Sciences, our survey looks at how Deep Learning can be applied to Precision Diagnostics, Protein Structure Prediction, and Drug Repurposing. Deep Learning has additionally been utilized in Spread Forecasting for Epidemiology. Our literature review has found many examples of Deep Learning systems to fight COVID-19. We hope that this survey will help accelerate the use of Deep Learning for COVID-19 research.
Journal Article
A review of deep learning applications in human genomics using next-generation sequencing data
2022
Genomics is advancing towards data-driven science. With the advent of high-throughput data-generating technologies in human genomics, we are overwhelmed with a deluge of genomic data. To extract knowledge and patterns from these data, artificial intelligence, especially deep learning methods, has been instrumental. In the current review, we address the development and application of deep learning methods/models in different subareas of human genomics. We assess the areas of genomics that are over- and under-charted by deep learning techniques. The deep learning algorithms underlying the genomic tools are discussed briefly in the later part of this review. Finally, we briefly discuss recent applications of deep learning tools in genomics. In conclusion, this review is timely for biotechnology and genomics scientists, guiding them on why, when, and how to use deep learning methods to analyse human genomic data.
Journal Article
Memory-assisted reinforcement learning for diverse molecular de novo design
by Engkvist, Ola; Bajorath, Jürgen; Blaschke, Thomas
in Analysis; Artificial neural networks; Big Data in Chemistry
2020
In de novo molecular design, recurrent neural networks (RNNs) have been shown to be effective methods for sampling and generating novel chemical structures. Using a technique called reinforcement learning (RL), an RNN can be tuned to target a particular section of chemical space with optimized desirable properties using a scoring function. However, ligands generated by current RL methods tend to have relatively low diversity, and sometimes even result in duplicate structures when optimizing towards desired properties. Here, we propose a new method to address the low-diversity issue in RL for molecular design. Memory-assisted RL is an extension of standard RL that introduces a so-called memory unit. As proof of concept, we applied our method to generate structures with a desired AlogP value. In a second case study, we applied our method to design ligands for the dopamine type 2 receptor and the 5-hydroxytryptamine type 1A receptor. For both receptors, a machine learning model was developed to predict whether generated molecules were active for the receptor. In both case studies, memory-assisted RL led to the generation of more compounds predicted to be active and with higher chemical diversity, thus achieving better coverage of the chemical space of known ligands compared to established RL methods.
Journal Article
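The abstract does not spell out the memory unit's internals, so the following is only a hedged sketch of the general idea: a count-based memory that stops rewarding scaffolds the agent has already generated many times, nudging the RL policy toward more diverse chemistry. The class name, the count threshold, and the zero-reward penalty are all assumptions for illustration, not the paper's exact scheme.

```python
from collections import Counter

class DiversityMemory:
    """Hypothetical memory unit: damp the RL reward when a scaffold has
    already been generated too often (the paper's actual unit may differ)."""
    def __init__(self, max_hits: int = 25):
        self.counts = Counter()
        self.max_hits = max_hits

    def adjusted_reward(self, scaffold: str, raw_reward: float) -> float:
        self.counts[scaffold] += 1
        # Past max_hits the region is "used up": zero reward pushes the
        # policy to explore other parts of chemical space.
        return 0.0 if self.counts[scaffold] > self.max_hits else raw_reward

memory = DiversityMemory(max_hits=2)
for _ in range(3):
    # "c1ccccc1" stands in for a scaffold extracted from a generated
    # SMILES string (e.g., via RDKit's Murcko-scaffold utilities).
    print(memory.adjusted_reward("c1ccccc1", raw_reward=0.8))
# -> 0.8, 0.8, 0.0
```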
Perception consistency ultrasound image super-resolution via self-supervised CycleGAN
by Liu, Jianyong; Tao, Tao; Han, Jungong
in Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering
2023
Due to the limitations of sensors, the transmission medium, and the intrinsic properties of ultrasound, the quality of ultrasound imaging is often not ideal, especially its low spatial resolution. To remedy this situation, deep learning networks have recently been developed for ultrasound image super-resolution (SR) because of their powerful approximation capability. However, most current supervised SR methods are not suitable for ultrasound medical images because medical image samples are rare and, in reality, there are usually no low-resolution (LR) and high-resolution (HR) training pairs. In this work, based on self-supervision and the cycle generative adversarial network, we propose a new perception-consistency ultrasound image SR method that requires only LR ultrasound data and ensures that the re-degraded version of the generated SR image is consistent with the original LR image, and vice versa. We first generate the HR fathers and the LR sons of the test ultrasound LR image through image enhancement, and then make full use of the LR–SR–LR and HR–LR–SR cycle losses and the adversarial characteristics of the discriminator to drive the generator to produce perceptually consistent SR results. The evaluation of PSNR/IFC/SSIM, inference efficiency, and visual effects on the benchmark CCA-US and CCA-US datasets illustrates that our proposed approach is effective and superior to other state-of-the-art methods.
Journal Article
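The two cycle terms named in the abstract (LR–SR–LR and HR–LR–SR) can be written down compactly. The sketch below assumes two generator networks, `G_sr` (upscaling) and `G_degrade` (downscaling); these names, the L1 distance, and the equal weighting are illustrative assumptions rather than the paper's exact losses, and the adversarial term is omitted.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(lr, hr, G_sr, G_degrade):
    """Illustrative cycle terms for self-supervised SR.
    LR -> SR -> LR: re-degrading the SR output should recover the LR input.
    HR -> LR -> SR: degrading HR then super-resolving should recover HR."""
    sr = G_sr(lr)
    loss_lr_cycle = F.l1_loss(G_degrade(sr), lr)
    loss_hr_cycle = F.l1_loss(G_sr(G_degrade(hr)), hr)
    return loss_lr_cycle + loss_hr_cycle

# Toy stand-ins for the generators, just to show the shapes involved:
G_sr = lambda x: F.interpolate(x, scale_factor=2, mode="bilinear")
G_degrade = lambda x: F.interpolate(x, scale_factor=0.5, mode="bilinear")
loss = cycle_consistency_loss(torch.randn(1, 1, 64, 64),
                              torch.randn(1, 1, 128, 128), G_sr, G_degrade)
```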
A CNN–LSTM model for gold price time-series forecasting
by Livieris, Ioannis E.; Pintelas, Emmanuel; Pintelas, Panagiotis
in Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering
2020
Gold price volatility has a significant impact on many financial activities around the world. The development of a reliable prediction model could offer insights into gold price fluctuations, behavior, and dynamics, and ultimately could provide the opportunity to gain significant profits. In this work, we propose a new deep learning forecasting model for the accurate prediction of gold price and movement. The proposed model exploits the ability of convolutional layers to extract useful knowledge and learn the internal representation of time-series data, as well as the effectiveness of long short-term memory (LSTM) layers for identifying short-term and long-term dependencies. We conducted a series of experiments and evaluated the proposed model against state-of-the-art deep learning and machine learning models. The preliminary experimental analysis showed that using LSTM layers along with additional convolutional layers could provide a significant boost in forecasting performance.
Journal Article
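The architecture the abstract outlines, convolutional feature extraction feeding an LSTM, is simple to sketch. Below is a minimal PyTorch version; the window length, channel counts, and single-step regression head are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM forecaster: convolutions extract local
    patterns from the price window; the LSTM models short- and
    long-term dependencies across them."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)  # next-step price (or movement logit)

    def forward(self, x):                # x: (batch, window, 1)
        feats = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h, _) = self.lstm(feats)     # last hidden state summarizes window
        return self.head(h[-1])

pred = CNNLSTM()(torch.randn(8, 30, 1))  # batch of 8 thirty-step windows
```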
Optimization of deep learning architecture based on multi-path convolutional neural network algorithm
2025
Current multi-stream convolutional neural networks (MSCNNs) exhibit notable limitations in path cooperation, feature fusion, and resource utilization when handling complex tasks. To enhance MSCNNs' feature extraction ability, computational efficiency, and model robustness, this study conducts an in-depth investigation of these architectural deficiencies and proposes corresponding improvements. At present, multi-path architectures suffer from problems such as isolated information among paths, inefficient feature-fusion mechanisms, and high computational complexity. These issues lead to insufficient performance on robustness indicators such as noise resistance, occlusion sensitivity, and resistance to sample attacks. The architecture also faces challenges in data scalability efficiency and resource scalability requirements. Therefore, this study proposes an optimized model based on a dynamic path-cooperation mechanism and lightweight design, innovatively introducing a path attention mechanism and a feature-sharing module to enhance information interaction between paths. A self-attention fusion method is adopted to improve the efficiency of feature fusion. At the same time, by combining path selection and model pruning techniques, an effective balance between model performance and computational resource demands is achieved. The study employs three datasets, Canadian Institute for Advanced Research-10 (CIFAR-10), ImageNet, and a custom dataset, for performance comparison and simulation. The results show that the proposed optimized model is superior to current mainstream models on many indicators. For example, on the Medical Images dataset, the optimized model's noise robustness, occlusion sensitivity, and sample-attack resistance are 0.931, 0.950, and 0.709, respectively. On E-commerce Data, the optimized model's data scalability efficiency reaches 0.969, and its resource scalability requirement is only 0.735, showing excellent task adaptability and resource utilization efficiency. The study therefore provides a critical reference for the optimization and practical application of MSCNNs, contributing to applied research on deep learning in complex tasks.
Journal Article
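The abstract does not describe the path attention mechanism's internals, so the following is only a hedged guess at the general idea: learn a scalar attention weight per parallel path and fuse path features as a weighted sum rather than a fixed concatenation.

```python
import torch
import torch.nn as nn

class PathAttention(nn.Module):
    """Hypothetical path-attention fusion (not the paper's exact design):
    score each path's feature vector, softmax across paths, and return
    the attention-weighted sum."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, paths):            # list of (batch, channels) tensors
        stacked = torch.stack(paths, dim=1)                   # (B, P, C)
        weights = torch.softmax(self.score(stacked), dim=1)   # (B, P, 1)
        return (weights * stacked).sum(dim=1)                 # fused (B, C)

fuse = PathAttention(channels=64)
fused = fuse([torch.randn(4, 64) for _ in range(3)])  # three paths -> (4, 64)
```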
Hierarchical multi-scale vision transformer model for accurate detection and classification of brain tumors in MRI-based medical imaging
2025
Automated brain tumor detection represents a fundamental challenge in contemporary medical imaging, demanding both precision and computational feasibility for practical implementation. This research introduces a novel Vision Transformer (ViT) framework that incorporates an innovative Hierarchical Multi-Scale Attention (HMSA) methodology for automated detection and classification of brain tumors across four distinct categories: glioma, meningioma, pituitary adenoma, and healthy brain tissue. Our methodology presents several key innovations: (1) multi-resolution patch embedding strategy enabling feature extraction across different spatial scales (8×8, 16×16, and 32×32 patches), (2) computationally optimized transformer architecture achieving 35% reduction in training duration compared to conventional ViT implementations, and (3) probabilistic calibration mechanism enhancing prediction confidence for decision-making applications. Experimental validation was conducted using a comprehensive MRI dataset comprising 7023 T1-weighted contrast-enhanced images sourced from the publicly accessible Brain Tumor MRI Dataset. Our approach achieved superior classification performance with 98.7% accuracy while demonstrating significant improvements over conventional machine learning methodologies (Random Forest: 91.2%, Support Vector Machine: 89.8%, XGBoost: 92.5%), state-of-the-art CNN architectures (EfficientNet-B0: 96.5%, ResNet-50: 95.8%), standard transformers (ViT: 96.8%, Swin Transformer: 97.2%), and hybrid CNN-Transformer approaches (TransBTS: 96.9%, Swin-UNet: 96.6%). The model demonstrates excellent performance with precision of 0.986, recall of 0.988, F1-score of 0.987, and superior calibration quality (Expected Calibration Error: 0.023). The proposed framework establishes a computationally efficient approach for accurate brain tumor classification.
Journal Article
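Innovation (1), the multi-resolution patch embedding, can be sketched directly from the patch sizes the abstract gives (8×8, 16×16, 32×32). Everything else below, the embedding dimension, the strided-convolution embedding, and the token concatenation, is an illustrative assumption rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiScalePatchEmbed(nn.Module):
    """Illustrative multi-resolution patch embedding: embed the image at
    several patch sizes and concatenate the token sets so attention can
    mix features across spatial scales."""
    def __init__(self, in_ch: int = 1, dim: int = 128):
        super().__init__()
        # One strided conv per patch size acts as a patchify-and-project step.
        self.embeds = nn.ModuleList(
            nn.Conv2d(in_ch, dim, kernel_size=p, stride=p)
            for p in (8, 16, 32))

    def forward(self, x):  # x: (B, C, H, W) with H, W divisible by 32
        tokens = [e(x).flatten(2).transpose(1, 2) for e in self.embeds]
        return torch.cat(tokens, dim=1)  # (B, total_tokens, dim)

tokens = MultiScalePatchEmbed()(torch.randn(2, 1, 224, 224))
```

Concatenating the three token sets lets subsequent attention layers mix fine local detail with coarser context in a single sequence.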