Catalogue Search | MBRL
Explore the vast range of titles available.
27,343 result(s) for "deep learning features"
Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists
by Alhaisoni, Majed; Bukhari, Syed Ahmad Chan; Rehman, Amjad
in Accuracy, Automation, Brain cancer
2020
Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is employed using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed: using transfer learning, two pre-trained convolutional neural network (CNN) models, VGG16 and VGG19, extract features. In the third step, a correntropy-based joint learning approach is implemented along with an extreme learning machine (ELM) to select the best features. In the fourth step, the partial least squares (PLS)-based robust covariant features are fused into one matrix. Finally, the combined matrix is fed to the ELM for classification. The proposed method was validated on the BraTS datasets and achieved accuracies of 97.8%, 96.9%, and 92.5% for BraTS2015, BraTS2017, and BraTS2018, respectively.
Journal Article
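To make the feature-extraction step above concrete, here is a minimal sketch assuming Keras/TensorFlow; plain concatenation stands in for the paper's correntropy-based selection and PLS fusion, and the function name and shapes are illustrative, not the authors' code.

```python
# Sketch: VGG16/VGG19 transfer-learning feature extraction (assumes Keras/TF).
import numpy as np
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications.vgg16 import preprocess_input

def extract_deep_features(images: np.ndarray) -> np.ndarray:
    """images: (n, 224, 224, 3) pre-processed MRI slices replicated to 3 channels."""
    vgg16 = VGG16(weights="imagenet", include_top=False, pooling="avg")
    vgg19 = VGG19(weights="imagenet", include_top=False, pooling="avg")
    x = preprocess_input(images.astype("float32"))  # same mean-subtraction for both VGGs
    f16 = vgg16.predict(x)                          # (n, 512) pooled VGG16 features
    f19 = vgg19.predict(x)                          # (n, 512) pooled VGG19 features
    return np.concatenate([f16, f19], axis=1)       # fused (n, 1024) feature matrix
```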
A Survey of Vision-Based Human Action Evaluation Methods
by Lei, Qing; Du, Ji-Xiang; Chen, Duan-Sheng
in action evaluation dataset, action quality assessment, Algorithms
2019
The field of human activity analysis has recently begun to diversify. Many researchers have taken interest in developing action recognition and action prediction methods. Research on human action evaluation differs in that it aims to design computational models and evaluation approaches for automatically assessing the quality of human actions. This line of study has become popular because of its rapidly emerging real-world applications, such as physical rehabilitation, assisted living for elderly people, skill training on self-learning platforms, and sports activity scoring. This paper presents a comprehensive survey of approaches and techniques in action evaluation research, including motion detection and preprocessing using skeleton data, handcrafted feature representation methods, and deep learning-based feature representation methods. The benchmark datasets from this research field and the evaluation criteria used to validate the algorithms' performance are introduced. Finally, the authors present several promising directions for further study.
Journal Article
Diabetic Retinopathy Detection from Fundus Images of the Eye Using Hybrid Deep Learning Features
by Abdelhamid, Sherif E.; Alghazo, Runna; Butt, Muhammad Mohsin
in Accuracy, Aneurysms, Automation
2022
Diabetic Retinopathy (DR) is a medical condition found in patients suffering from long-term diabetes. If it is not diagnosed at an early stage, it can lead to vision impairment. High blood sugar in diabetic patients is the main source of DR and affects the blood vessels within the retina. Manual detection of DR is difficult because the disease causes structural changes in the retina such as microaneurysms (MAs), exudates (EXs), hemorrhages (HMs), and extra blood vessel growth. In this work, a hybrid technique for the detection and classification of DR in fundus images of the eye is proposed. Transfer learning (TL) is applied to pre-trained Convolutional Neural Network (CNN) models to extract features, which are combined to generate a hybrid feature vector. This feature vector is passed to various classifiers for binary and multiclass classification of fundus images. System performance is measured using various metrics, and the results are compared with recent approaches to DR detection. The proposed method provides a significant performance improvement in DR detection for fundus images, achieving the highest accuracy of 97.8% for binary classification and 89.29% for multiclass classification.
Journal Article
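A minimal sketch of the hybrid-feature classification step described above, assuming scikit-learn and precomputed CNN features; feats_a and feats_b are hypothetical inputs (features from two pre-trained backbones), and the SVM stands in for the "various classifiers" the abstract mentions.

```python
# Sketch: hybrid feature vector -> SVM classifier (assumes scikit-learn).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def classify_hybrid(feats_a, feats_b, labels):
    X = np.hstack([feats_a, feats_b])  # hybrid feature vector per fundus image
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)       # held-out accuracy (binary or multiclass)
```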
Deep versus Handcrafted Tensor Radiomics Features: Prediction of Survival in Head and Neck Cancer Using Machine Learning and Fusion Techniques
by Rahmim, Arman; Rezaeijo, Seyed Masoud; Hosseinzadeh, Mahdi
in Algorithms, Biomarkers, Cancer patients
2023
Background: Although handcrafted radiomics features (RFs) are commonly extracted via radiomics software, employing deep features (DFs) extracted by deep learning (DL) algorithms merits significant investigation. Moreover, a "tensor" radiomics paradigm, in which various flavours of a given feature are generated and explored, can provide added value. We aimed to employ conventional and tensor DFs and compare their outcome-prediction performance to conventional and tensor RFs. Methods: 408 patients with head and neck cancer were selected from TCIA. PET images were first registered to CT, enhanced, normalized, and cropped. We employed 15 image-level fusion techniques (e.g., the dual-tree complex wavelet transform (DTCWT)) to combine PET and CT images. Subsequently, 215 RFs were extracted from each tumor in 17 images (or flavours), including CT only, PET only, and 15 fused PET-CT images, through the standardized SERA radiomics software. Furthermore, a 3-dimensional autoencoder was used to extract DFs. To predict the binary progression-free-survival outcome, an end-to-end CNN algorithm was first employed. Subsequently, we applied conventional and tensor DFs vs. RFs as extracted from each image to three standalone classifiers, namely a multilayer perceptron (MLP), random forest, and logistic regression (LR), linked with dimension-reduction algorithms. Results: DTCWT fusion linked with the CNN resulted in accuracies of 75.6 ± 7.0% and 63.4 ± 6.7% in five-fold cross-validation and external nested testing, respectively. For the tensor RF framework, polynomial transform algorithms + the analysis-of-variance feature selector (ANOVA) + LR achieved 76.67 ± 3.3% and 70.6 ± 6.7% in the same tests. For the tensor DF framework, PCA + ANOVA + MLP achieved 87.0 ± 3.5% and 85.3 ± 5.2%. Conclusions: This study showed that tensor DFs combined with proper machine learning approaches enhanced survival-prediction performance compared to conventional DFs, tensor and conventional RFs, and end-to-end CNN frameworks.
Journal Article
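A minimal sketch of the best-performing pipeline named in the abstract (PCA + ANOVA feature selection + MLP), assuming scikit-learn; hyperparameters are illustrative, not the authors' settings.

```python
# Sketch: PCA + ANOVA + MLP pipeline with five-fold CV (assumes scikit-learn).
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tensor_df_cv(X, y):
    """X: deep features per patient; y: binary progression-free-survival outcome."""
    pipe = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.95),        # keep components explaining 95% of variance
        SelectKBest(f_classif, k=20),  # ANOVA selector; assumes >= 20 components remain
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    )
    return cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
```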
A new pairwise deep learning feature for environmental microorganism image analysis
by Li, Chen; Zhao, Xin; Shirahama, Kimiaki
in Aquatic Pollution, Atmospheric Protection/Air Quality Control/Air Pollution, Deep learning
2022
Environmental microorganisms (EMs) offer a highly efficient, harmless, and low-cost solution to environmental pollution. They are used in sanitation, monitoring, and decomposition of environmental pollutants. However, this depends on the proper identification of suitable microorganisms. In order to speed up identification, lower its cost, and increase its consistency and accuracy, we propose novel pairwise deep learning features (PDLFs) for analyzing microorganisms. The PDLF technique combines the capabilities of handcrafted and deep learning features. In this technique, we leverage Shi-Tomasi interest points by extracting deep learning features from patches centered at the interest points' locations. Then, to increase the number of potential features that have intermediate spatial characteristics between nearby interest points, we use the Delaunay triangulation theorem and the straight-line geometric theorem to pair nearby deep learning features. The potential of pairwise features is demonstrated on the classification of EMs using SVMs, linear discriminant analysis, logistic regression, XGBoost, and random forest classifiers. The pairwise features obtain outstanding results of 99.17%, 91.34%, 91.32%, 91.48%, and 99.56%, an increase of about 5.95%, 62.40%, 62.37%, 61.84%, and 3.23% in accuracy, F1-score, recall, precision, and specificity, respectively, compared to non-paired deep learning features.
Journal Article
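A minimal sketch of pairing interest-point features over Delaunay edges, assuming OpenCV and SciPy; patch_features is a hypothetical (n, d) array of deep features, one per detected corner, so only the pairing geometry of the PDLF idea is shown here, not the authors' full pipeline.

```python
# Sketch: Shi-Tomasi corners + Delaunay edge pairing (assumes OpenCV, SciPy).
import cv2
import numpy as np
from scipy.spatial import Delaunay

def pair_features(gray_image, patch_features):
    corners = cv2.goodFeaturesToTrack(gray_image, maxCorners=100,
                                      qualityLevel=0.01, minDistance=10)
    pts = corners.reshape(-1, 2)                   # Shi-Tomasi interest points
    tri = Delaunay(pts)
    edges = set()
    for s in tri.simplices:                        # unique edges of the triangulation
        for i in range(3):
            edges.add(tuple(sorted((s[i], s[(i + 1) % 3]))))
    # one pairwise feature per edge: concatenated endpoint descriptors
    return np.array([np.concatenate([patch_features[a], patch_features[b]])
                     for a, b in edges])
```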
A survey: facial micro-expression recognition
2018
Facial expression recognition plays a crucial role in a wide range of applications in psychotherapy, security systems, marketing, commerce, and much more. Detecting a macro-expression, which is a direct representation of an 'emotion,' is a relatively straightforward task. Though they play a role as pivotal as macro-expressions, micro-expressions are more accurate indicators of a train of thought or even subtle, passive, or involuntary thoughts. Compared to macro-expressions, identifying micro-expressions is a much more challenging research question because their time spans are narrowed down to a fraction of a second and they can only be defined using a broader classification scale. This paper is an all-inclusive survey and analysis of the various micro-expression recognition techniques. We analyze the general framework for a micro-expression recognition system by decomposing the pipeline into fundamental components, namely face detection, pre-processing, facial feature detection and extraction, datasets, and classification. We discuss the role of these elements and highlight the models and new trends followed in their design. Moreover, we provide an extensive analysis of micro-expression recognition systems by comparing their performance. We also discuss the new deep learning features that may, in the near future, replace hand-crafted features for facial micro-expression recognition. This survey focuses on the methodologies applied, the databases used, and performance in terms of recognition accuracy, comparing these to distill gaps in efficiency, future scope, and research potential. Through this survey, we intend to examine this problem and develop a comprehensive and efficient recognition scheme. This study allows us to identify open issues and determine future directions for designing real-world micro-expression recognition systems.
Journal Article
Pathology-based deep learning features for predicting basal and luminal subtypes in bladder cancer
2025
Background
Bladder cancer (BLCA) exhibits profound molecular diversity, with basal and luminal subtypes having different prognostic and therapeutic outcomes. Traditional methods for molecular subtyping are often time-consuming and resource-intensive. This study aims to develop machine learning models that use deep learning features from hematoxylin and eosin (H&E)-stained whole-slide images (WSIs) to predict basal and luminal subtypes in BLCA.
Methods
RNA sequencing data and clinical outcomes were downloaded from seven public BLCA databases, including TCGA, GEO datasets, and the IMvigor210C cohort, to assess the prognostic value of BLCA molecular subtypes. WSIs from TCGA were used to construct and validate the machine learning models, while WSIs from Shanghai Tenth People's Hospital (STPH) and The Affiliated Guangdong Second Provincial General Hospital of Jinan University (GD2H) were used for external validation. Deep learning models were trained to obtain tumor patches within WSIs. WSI-level deep learning features were extracted from tumor patches with the RetCCL model. Support vector machine (SVM), random forest (RF), and logistic regression (LR) classifiers were developed using these features to classify basal and luminal subtypes.
Results
Kaplan-Meier survival and prognostic meta-analyses showed that basal BLCA patients had significantly worse overall survival than luminal BLCA patients (hazard ratio = 1.47, 95% confidence interval: 1.25–1.73, P < 0.001). The LR model based on tumor patch features selected by a ResNet50 model demonstrated superior performance, achieving an area under the curve (AUC) of 0.88 in the internal validation set, and 0.81 and 0.64 in the external validation sets from STPH and GD2H, respectively. This model outperformed both junior and senior pathologists in differentiating basal and luminal subtypes (AUC: 0.85, accuracy: 74%, sensitivity: 66%, specificity: 82%).
Conclusions
This study showed the efficacy of machine learning models in predicting the basal and luminal subtypes of BLCA from deep learning features extracted from tumor patches in H&E-stained WSIs. The performance of the LR model suggests that integrating AI tools into the diagnostic process could significantly enhance the accuracy of molecular subtyping, thereby potentially informing personalized treatment strategies for BLCA patients.
Journal Article
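A minimal sketch of the LR subtype classifier on WSI-level deep features, assuming scikit-learn; the RetCCL feature extraction is out of scope here, and X and y are hypothetical inputs (one feature vector per slide, 1 = basal, 0 = luminal).

```python
# Sketch: logistic regression on WSI-level features with AUC (assumes scikit-learn).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def train_subtype_lr(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])  # cf. reported internal AUC 0.88
    return lr, auc
```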
Blended Emotion in-the-Wild: Multi-label Facial Expression Recognition Using Crowdsourced Annotations and Deep Locality Feature Learning
2019
Comprehending different categories of facial expressions plays a great role in designing computational models that analyze human perceived and affective states. Authoritative studies have revealed that facial expressions in daily life often reflect multiple or co-occurring mental states. However, due to the lack of valid datasets, most previous studies are still restricted to basic emotions with a single label. In this paper, we present a novel multi-label facial expression database, RAF-ML, along with a new deep learning algorithm, to address this problem. Specifically, a crowdsourced annotation of 1.2 million labels from 315 participants was implemented to identify the multi-label expressions collected from social networks, and then an EM algorithm was designed to filter out unreliable labels. To the best of our knowledge, RAF-ML is the first in-the-wild database that provides crowdsourced annotations for multi-label expressions. Focusing on the ambiguity and continuity of blended expressions, we propose a new deep manifold learning network, called Deep Bi-Manifold CNN, to learn discriminative features for multi-label expressions by jointly preserving the local affinity of deep features and the manifold structures of emotion labels. Furthermore, a deep domain adaptation method is leveraged to extend the deep manifold features learned from RAF-ML to other expression databases under various imaging conditions and cultures. Extensive experiments on RAF-ML and other diverse databases (JAFFE, CK+, SFEW, and MMI) show that the deep manifold feature is not only superior in multi-label expression recognition in the wild, but also captures elemental and generic components that are effective for a wide range of expression recognition tasks.
Journal Article
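A minimal sketch of the multi-label setup the abstract describes, with independent sigmoid outputs per emotion label, assuming PyTorch; the linear head and label count stand in for the paper's Deep Bi-Manifold CNN and are purely illustrative.

```python
# Sketch: multi-label expression head with per-label sigmoid/BCE (assumes PyTorch).
import torch
import torch.nn as nn

head = nn.Linear(512, 6)                        # 6 hypothetical emotion labels
feats = torch.randn(8, 512)                     # batch of deep features
targets = torch.randint(0, 2, (8, 6)).float()   # multi-hot (co-occurring) labels
loss = nn.BCEWithLogitsLoss()(head(feats), targets)  # one sigmoid/BCE per label
loss.backward()                                 # each label is predicted independently
```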
QoS Prediction for Service Recommendation with Deep Feature Learning in Edge Computing Environment
2020
With the popularity of intelligent services and mobile services, service recommendation has become a key task, especially recommendation based on quality of service (QoS) in edge computing environments. Most existing service recommendation methods have serious defects and cannot be directly adopted in an edge computing environment. For example, most existing methods cannot learn deep features of users or services, yet in an edge computing environment there is a variety of devices with different configurations and functions, and it is necessary to learn the deep features behind those complex devices. In order to fully utilize hidden features, this paper proposes a new matrix factorization (MF) model with deep feature learning that integrates a convolutional neural network (CNN). The proposed model is named Joint CNN-MF (JCM). JCM is capable of using the learned deep latent features of neighbors to infer the features of a user or a service. Meanwhile, to improve the accuracy of neighbor selection, the proposed model contains a novel similarity computation method. The CNN learns the neighbors' features, forms a feature matrix, and infers the features of the target user or target service. We conducted experiments on a real-world service dataset under a range of data densities to reflect the complex invocation cases in an edge computing environment. The experimental results verify that, compared to counterpart methods, our method consistently achieves higher QoS prediction accuracy.
Journal Article
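A minimal sketch of plain matrix factorization for QoS prediction, assuming NumPy; the paper's JCM model additionally couples a CNN over neighbor features, which is omitted here.

```python
# Sketch: matrix factorization on a partially observed QoS matrix (assumes NumPy).
import numpy as np

def mf_predict(R, mask, k=8, lr=0.01, reg=0.1, epochs=200, seed=0):
    """R: user-service QoS matrix; mask: 1 where a QoS value was observed."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], k))   # user latent factors
    V = 0.1 * rng.standard_normal((R.shape[1], k))   # service latent factors
    for _ in range(epochs):
        err = mask * (R - U @ V.T)        # error on observed entries only
        U += lr * (err @ V - reg * U)     # gradient step with L2 regularization
        V += lr * (err.T @ U - reg * V)
    return U @ V.T                        # predicted QoS for all user-service pairs
```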
A framework for efficient brain tumor classification using MRI images
2021
A brain tumor is an abnormal growth of cells inside the head that reduces the patient's survival chances if it is not diagnosed at an early stage. Brain tumors vary in size and type, are irregular in shape, and require distinct therapies for different patients. Manual diagnosis of brain tumors is inefficient, error-prone, and time-consuming. Moreover, it is a strenuous task that depends on the radiologist's experience and proficiency. Therefore, a modern, efficient, automated computer-assisted diagnosis (CAD) system that addresses these problems with high accuracy is needed. Aiming to enhance performance and minimise human effort, in this manuscript, the brain MRI image is first pre-processed to improve its visual quality, and the sample images are augmented to avoid over-fitting in the network. Second, tumor proposals (locations) are obtained with an agglomerative clustering-based method. Third, the image proposals and the enhanced input image are passed to a backbone architecture for feature extraction. Fourth, high-quality proposals are retained by a refinement network and the rest are discarded. Next, the refined proposals are aligned to the same size and, finally, passed to a head network for the classification task. The proposed method is a potent tumor-grading tool assessed on a publicly available brain tumor dataset. Extensive experimental results show that the proposed method outperformed existing approaches evaluated on the same dataset, achieving an overall classification accuracy of 98.04%. The model yielded accuracies of 98.17%, 98.66%, and 99.24%; sensitivities (recall) of 96.89%, 97.82%, and 99.24%; and specificities of 98.55%, 99.38%, and 99.25% for the meningioma, glioma, and pituitary classes, respectively.
Journal Article
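A minimal sketch of generating tumor region proposals by agglomerative clustering of bright-pixel coordinates and intensities, assuming scikit-learn and a pre-processed MRI slice; this is one plausible reading of the proposal step above, not the authors' implementation.

```python
# Sketch: agglomerative clustering of foreground pixels into proposal boxes
# (assumes scikit-learn).
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def tumor_proposals(img, n_clusters=5):
    """img: 2-D pre-processed MRI slice; returns (x0, y0, x1, y1) boxes."""
    ys, xs = np.nonzero(img > img.mean())   # candidate foreground pixels
    feats = np.column_stack([xs, ys, img[ys, xs]])
    # note: hierarchical clustering is O(n^2); subsample pixels for large slices
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(feats)
    return [(xs[labels == c].min(), ys[labels == c].min(),
             xs[labels == c].max(), ys[labels == c].max())
            for c in range(n_clusters)]
```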