Catalogue Search | MBRL
Explore the vast range of titles available.
12,868 result(s) for "Benchmark analysis"
Advances in Computational Methodologies for Classification and Sub-Cellular Locality Prediction of Non-Coding RNAs
by Dengel, Andreas; Ibrahim, Muhammad Ali; Asim, Muhammad Nabeel
in Biomarkers, Classification, Datasets
2021
Apart from protein-coding Ribonucleic acids (RNAs), there exists a variety of non-coding RNAs (ncRNAs) which regulate complex cellular and molecular processes. High-throughput sequencing technologies and bioinformatics approaches have largely promoted the exploration of ncRNAs, revealing their crucial roles in gene regulation, miRNA binding, protein interactions, and splicing. Furthermore, ncRNAs are involved in the development of complicated diseases like cancer. Categorization of ncRNAs is essential to understand the mechanisms of diseases and to develop effective treatments. Sub-cellular localization information of ncRNAs demystifies their diverse functionalities. To date, several computational methodologies have been proposed to precisely identify the class as well as sub-cellular localization patterns of RNAs. This paper discusses different types of ncRNAs, reviews computational approaches proposed in the last 10 years to distinguish coding RNA from ncRNA, to identify sub-types of ncRNAs such as piwi-associated RNA, micro RNA, long ncRNA, and circular RNA, and to determine the sub-cellular localization of distinct ncRNAs and RNAs. Furthermore, it summarizes diverse ncRNA classification and sub-cellular localization determination datasets along with benchmark performance to aid the development and evaluation of novel computational methodologies. It identifies research gaps, heterogeneity, and challenges in the development of computational approaches for RNA sequence analysis. We expect that our expert analysis will assist Artificial Intelligence researchers in assessing state-of-the-art performance, selecting models for various tasks on one platform, identifying the dominantly used sequence descriptors and neural architectures, and interpreting inter-species and intra-species performance deviation.
Journal Article
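The survey above highlights "dominantly used sequence descriptors" for RNA classification. One of the most common such descriptors is k-mer composition; a minimal sketch (the function name and normalization choice are illustrative, not taken from the paper):

```python
from collections import Counter
from itertools import product

def kmer_composition(seq, k=2):
    """Normalized k-mer frequency vector for an RNA sequence over A/C/G/U."""
    alphabet = "ACGU"
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]  # all 4^k k-mers
    windows = max(len(seq) - k + 1, 1)
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return {km: counts[km] / windows for km in kmers}

# fixed-length feature vector regardless of sequence length
vec = kmer_composition("ACGUACGUACG", k=2)
```

Because the vector length depends only on k, sequences of different lengths become comparable inputs for downstream classifiers.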
A Comprehensive Benchmark Analysis of Single Image Deraining: Current Challenges and Future Perspectives
2021
The capability of image deraining is a highly desirable component of intelligent decision-making in autonomous driving and outdoor surveillance systems. Image deraining aims to restore the clean scene from a degraded image captured on a rainy day. Although numerous single image deraining algorithms have been recently proposed, these algorithms are mainly evaluated using certain types of synthetic images, assuming a specific rain model, plus a few real images. It remains unclear how these algorithms would perform on rainy images acquired “in the wild” and how we could gauge the progress in the field. This paper aims to bridge this gap. We present a comprehensive study and evaluation of existing single image deraining algorithms, using a new large-scale benchmark consisting of both synthetic and real-world rainy images of various rain types. This dataset highlights diverse rain models (rain streak, rain drop, rain and mist), as well as a rich variety of evaluation criteria (full- and no-reference objective, subjective, and task-specific). We further provide a comprehensive suite of criteria for deraining algorithm evaluation, including full- and no-reference metrics, subjective evaluation, and the novel task-driven evaluation. The proposed benchmark is accompanied by extensive experimental results that facilitate the assessment of the state of the art on a quantitative basis. Our evaluation and analysis indicate the gap between the achievable performance on synthetic rainy images and the practical demand on real-world images. We show that, despite many advances, image deraining is still a largely open problem. The paper concludes by summarizing our general observations, identifying open research challenges and pointing out future directions. Our code and dataset are publicly available at http://uee.me/ddQsw.
Journal Article
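The benchmark above uses full-reference objective metrics to compare a derained image against its clean ground truth. PSNR is the standard example of such a metric; a minimal sketch (the paper does not specify its exact metric implementation):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio (dB): a full-reference quality metric
    comparing a restored image against its clean ground truth."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

No-reference and task-driven criteria are needed precisely because real-world rainy images have no clean reference image for a metric like this.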
MisRoBÆRTa: Transformers versus Misinformation
2022
Misinformation is considered a threat to our democratic values and principles. The spread of such content on social media polarizes society and undermines public discourse by distorting public perceptions and generating social unrest while lacking the rigor of traditional journalism. Transformers and transfer learning proved to be state-of-the-art methods for multiple well-known natural language processing tasks. In this paper, we propose MisRoBÆRTa, a novel transformer-based deep neural ensemble architecture for misinformation detection. MisRoBÆRTa takes advantage of two state-of-the-art transformers, i.e., BART and RoBERTa, to improve the performance of discriminating between real news and different types of fake news. We also benchmarked and evaluated the performance of multiple transformers on the task of misinformation detection. For training and testing, we used a large real-world news articles dataset (i.e., 100,000 records) labeled with 10 classes, thus addressing two shortcomings in the current research: (1) increasing the size of the dataset from small to large, and (2) moving the focus of fake news detection from binary classification to multi-class classification. For this dataset, we manually verified the content of the news articles to ensure that they were correctly labeled. The experimental results show that the accuracy of transformers on the misinformation detection problem was significantly influenced by the method employed to learn the context, dataset size, and vocabulary dimension. We observe empirically that the best accuracy performance among the classification models that use only one transformer is obtained by BART, while DistilRoBERTa obtains the best accuracy in the least amount of time required for fine-tuning and training. However, the proposed MisRoBÆRTa outperforms the other transformer models in the task of misinformation detection.
To arrive at this conclusion, we performed ample ablation and sensitivity testing with MisRoBÆRTa on two datasets.
Journal Article
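The abstract describes an ensemble of two transformers (BART and RoBERTa) without detailing how their predictions are combined. A common, generic way to combine two classifiers is soft voting over their class-probability vectors; a minimal sketch under that assumption (the actual MisRoBÆRTa architecture may combine models differently):

```python
import numpy as np

def soft_vote(prob_a, prob_b, weights=(0.5, 0.5)):
    """Weighted average of two models' class-probability vectors;
    returns the winning class index and the combined distribution."""
    combined = weights[0] * np.asarray(prob_a) + weights[1] * np.asarray(prob_b)
    return int(np.argmax(combined)), combined

# model A leans toward class 0, model B strongly toward class 1
label, _ = soft_vote([0.6, 0.4], [0.2, 0.8])
```

The weights let one model dominate when it is known to be more reliable on the task.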
Benchmarking of Contactless Heart Rate Measurement Systems in ARM-Based Embedded Platforms
2023
Heart rate monitoring is especially important for aging individuals because it is associated with longevity and cardiovascular risk. Typically, this vital parameter can be measured using wearable sensors, which are widely available commercially. However, wearable sensors have some disadvantages in terms of acceptability, especially when used by elderly people. Thus, contactless solutions have increasingly attracted the scientific community in recent years. Camera-based photoplethysmography (also known as remote photoplethysmography) is an emerging method of contactless heart rate monitoring that uses a camera and a processing unit on the hardware side, and appropriate image processing methodologies on the software side. This paper describes the design and implementation of a novel pipeline for heart rate estimation using a commercial and low-cost camera as the input device. The pipeline’s performance was tested and compared on a desktop PC, a laptop, and three different ARM-based embedded platforms (Raspberry Pi 4, Odroid N2+, and Jetson Nano). The results showed that the designed and implemented pipeline achieved an average accuracy of about 96.7% for heart rate estimation, with very low variance (between 1.5% and 2.5%) across processing platforms, user distances from the camera, and frame resolutions. Furthermore, benchmark analysis showed that the Odroid N2+ platform was the most convenient in terms of CPU load, RAM usage, and average execution time of the algorithmic pipeline.
Journal Article
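Camera-based photoplethysmography, as described above, ultimately reduces to finding the dominant cardiac frequency in a pulse signal extracted from the video. A minimal sketch of that final step, assuming the per-frame signal has already been extracted (the 0.7–4 Hz band and the synthetic input are illustrative, not from the paper):

```python
import numpy as np

def estimate_bpm(signal, fps, lo=0.7, hi=4.0):
    """Estimate heart rate (BPM) as the dominant spectral peak
    within the plausible cardiac frequency band [lo, hi] Hz."""
    signal = np.asarray(signal, dtype=np.float64)
    signal = signal - signal.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# synthetic 1.2 Hz pulse sampled at 30 fps for 10 s, i.e. about 72 BPM
t = np.arange(0, 10, 1.0 / 30)
bpm = estimate_bpm(np.sin(2 * np.pi * 1.2 * t), fps=30)
```

In a real pipeline the input signal would come from face detection and skin-region color averaging per frame, which is where most of the per-platform CPU cost measured in the benchmark arises.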
How to approach machine learning-based prediction of drug/compound–target interactions
2023
The identification of drug/compound–target interactions (DTIs) constitutes the basis of drug discovery, for which computational predictive approaches have been developed. As a relatively new data-driven paradigm, proteochemometric (PCM) modeling utilizes both protein and compound properties as a pair at the input level and processes them via statistical/machine learning. The representation of input samples (i.e., proteins and their ligands) in the form of quantitative feature vectors is crucial for the extraction of interaction-related properties during the artificial learning and subsequent prediction of DTIs. Lately, the representation learning approach, in which input samples are automatically featurized via training and applying a machine/deep learning model, has been utilized in biomedical sciences. In this study, we performed a comprehensive investigation of different computational approaches/techniques for protein featurization (including both conventional approaches and the novel learned embeddings), data preparation and exploration, machine learning-based modeling, and performance evaluation with the aim of achieving better data representations and more successful learning in DTI prediction. For this, we first constructed realistic and challenging benchmark datasets on small, medium, and large scales to be used as reliable gold standards for specific DTI modeling tasks. We developed and applied a network analysis-based splitting strategy to divide datasets into structurally different training and test folds. Using these datasets together with various featurization methods, we trained and tested DTI prediction models and evaluated their performance from different angles. 
Our main findings can be summarized under 3 items: (i) random splitting of datasets into train and test folds leads to near-complete data memorization and produces highly over-optimistic results, and should therefore be avoided; (ii) learned protein sequence embeddings work well in DTI prediction and offer high potential, even though interaction-related properties (e.g., structures) of proteins are unused during their self-supervised model training; and (iii) during the learning process, PCM models tend to rely heavily on compound features while partially ignoring protein features, primarily due to the inherent bias in DTI data, indicating the requirement for new and unbiased datasets. We hope this study will aid researchers in designing robust and high-performing data-driven DTI prediction systems that have real-world translational value in drug discovery.
Journal Article
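Finding (i) above warns that random train/test splitting lets models memorize near-duplicate protein–compound pairs across folds. The paper's network-analysis-based splitting addresses this by keeping structurally similar samples on the same side; a simplified group-aware split illustrates the principle (the cluster labels are hypothetical, and the real method derives groups from similarity-network analysis):

```python
def group_split(samples, groups, test_groups):
    """Split samples so that every group lies entirely in either train
    or test, preventing near-duplicate leakage across the two folds."""
    train = [s for s, g in zip(samples, groups) if g not in test_groups]
    test = [s for s, g in zip(samples, groups) if g in test_groups]
    return train, test

samples = ["pair1", "pair2", "pair3", "pair4"]
clusters = ["kinaseA", "kinaseA", "gpcrB", "gpcrB"]  # hypothetical protein clusters
train, test = group_split(samples, clusters, test_groups={"gpcrB"})
```

A random split of the same four pairs could place "pair3" in train and its near-duplicate "pair4" in test, which is exactly the memorization the paper measures.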
Model definitions to identify appropriate benchmarks in judiciary
2022
In this manuscript we present a comparative analysis of benchmarks based on technical efficiency scores computed using Data Envelopment Analysis with two different model specifications. In one case, we adopt the number of settled cases as output and human resources as input; in the other case, we adopt the same model definition but with judicial expenditure as additional key input. Our findings show that the model specification containing both judicial expenditure and human resources is more appropriate than the model based only on human resources. Moreover, we show that, without considering the additional variable costs generated within the production process, those courts incorrectly identified as benchmarks might mislead the policy makers dealing with the reform process.
Journal Article
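The study above computes technical efficiency scores with Data Envelopment Analysis. A minimal sketch of the standard input-oriented CCR envelopment model, solved as a linear program (scipy is assumed to be available; the two-court data are hypothetical, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency score (theta) for one decision-making unit.

    Variables [theta, lambda_1..lambda_n]; minimize theta subject to
      sum_j lambda_j * x_j <= theta * x_unit   (each input)
      sum_j lambda_j * y_j >= y_unit           (each output)
      lambda_j >= 0
    """
    X = np.atleast_2d(np.asarray(inputs, dtype=float))   # shape (m inputs, n units)
    Y = np.atleast_2d(np.asarray(outputs, dtype=float))  # shape (s outputs, n units)
    n = X.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                           # objective: minimize theta
    A_in = np.hstack([-X[:, [unit]], X])                 # inputs: lam.x - theta*x0 <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # outputs: -lam.y <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, unit]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# hypothetical courts: input = judges employed, output = settled cases
judges = [[10.0, 20.0]]
settled = [[100.0, 100.0]]
```

With these numbers the first court is efficient (theta = 1) and the second, using twice the input for the same output, scores 0.5; adding judicial expenditure as a second input row is exactly the model-specification change the paper investigates.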
Ruthenium(II) Polypyridyl Complexes for Antimicrobial Photodynamic Therapy: Prospects for Application in Cystic Fibrosis Lung Airways
by Nasir, Adeel; Müller, Mareike; Youf, Raphaëlle
in Antibiotics, Antimicrobial agents, antimicrobial photodynamic therapy
2022
Antimicrobial photodynamic therapy (aPDT) depends on a variety of parameters, notably the photosensitizers used, the pathogens targeted, and the environment in which it operates. In a previous study using a series of Ruthenium(II) polypyridyl ([Ru(II)]) complexes, we reported the importance of the chemical structure on both their photo-physical/physico-chemical properties and their efficacy for aPDT. By employing standard in vitro conditions, effective [Ru(II)]-mediated aPDT was demonstrated against planktonic cultures of Pseudomonas aeruginosa and Staphylococcus aureus strains, notably isolated from the airways of Cystic Fibrosis (CF) patients. CF lung disease is characterized by many pathophysiological disorders that can compromise the effectiveness of antimicrobials. Taking this into account, the present study is an extension of our previous work, with the aim of further investigating [Ru(II)]-mediated aPDT under in vitro experimental settings approaching the conditions of infected airways in CF patients. Thus, we herein studied the isolated influence of a series of parameters (including increased osmotic strength, acidic pH, lower oxygen availability, artificial sputum medium and biofilm formation) on the properties of two selected [Ru(II)] complexes. Furthermore, these compounds were used to evaluate the possibility of photoinactivating P. aeruginosa while preserving an underlying epithelium of human bronchial epithelial cells. Altogether, our results provide substantial evidence for the relevance of [Ru(II)]-based aPDT in CF lung airways. Besides optimized nano-complexes, this study also highlights the various needs for translating such a challenging perspective into clinical practice.
Journal Article
MisRoBÆRTa
by Truica, Ciprian-Octavian; Apostol, Elena-Simona
in benchmark analysis, large dataset, misinformation detection
2022
Misinformation is considered a threat to our democratic values and principles. The spread of such content on social media polarizes society and undermines public discourse by distorting public perceptions and generating social unrest while lacking the rigor of traditional journalism. Transformers and transfer learning proved to be state-of-the-art methods for multiple well-known natural language processing tasks. In this paper, we propose MisRoBÆRTa, a novel transformer-based deep neural ensemble architecture for misinformation detection. MisRoBÆRTa takes advantage of two state-of-the-art transformers, i.e., BART and RoBERTa, to improve the performance of discriminating between real news and different types of fake news. We also benchmarked and evaluated the performance of multiple transformers on the task of misinformation detection. For training and testing, we used a large real-world news articles dataset (i.e., 100,000 records) labeled with 10 classes, thus addressing two shortcomings in the current research: (1) increasing the size of the dataset from small to large, and (2) moving the focus of fake news detection from binary classification to multi-class classification. For this dataset, we manually verified the content of the news articles to ensure that they were correctly labeled. The experimental results show that the accuracy of transformers on the misinformation detection problem was significantly influenced by the method employed to learn the context, dataset size, and vocabulary dimension. We observe empirically that the best accuracy performance among the classification models that use only one transformer is obtained by BART, while DistilRoBERTa obtains the best accuracy in the least amount of time required for fine-tuning and training. However, the proposed MisRoBÆRTa outperforms the other transformer models in the task of misinformation detection.
To arrive at this conclusion, we performed ample ablation and sensitivity testing with MisRoBÆRTa on two datasets.
Journal Article
Derivation of metabolic point of departure using high-throughput in vitro metabolomics: investigating the importance of sampling time points on benchmark concentration values in the HepaRG cell line
by Sund, Jukka; Palosaari, Taina; Weber, Ralf J. M.
in adverse outcome pathways, Aflatoxin B1, Aflatoxins
2023
Amongst omics technologies, metabolomics should have particular value in regulatory toxicology as the measurement of the molecular phenotype is the closest to traditional apical endpoints, whilst offering mechanistic insights into the biological perturbations. Despite this, the application of untargeted metabolomics for point-of-departure (POD) derivation via benchmark concentration (BMC) modelling is still a relatively unexplored area. In this study, a high-throughput workflow was applied to derive PODs associated with a chemical exposure by measuring the intracellular metabolome of the HepaRG cell line following treatment with one of four chemicals (aflatoxin B1, benzo[a]pyrene, cyclosporin A, or rotenone), each at seven concentrations (aflatoxin B1, benzo[a]pyrene, cyclosporin A: from 0.2048 μM to 50 μM; rotenone: from 0.04096 to 10 μM) and five sampling time points (2, 6, 12, 24 and 48 h). The study explored three approaches to derive PODs using benchmark concentration modelling applied to single features in the metabolomics datasets or annotated metabolites or lipids: (1) the 1st rank-ordered unannotated feature, (2) the 1st rank-ordered putatively annotated feature (using a recently developed HepaRG-specific library of polar metabolites and lipids), and (3) the 25th rank-ordered feature, demonstrating that for three out of four chemical datasets all of these approaches led to relatively consistent BMC values, varying less than tenfold across the methods. In addition, using the 1st rank-ordered unannotated feature it was possible to investigate temporal trends in the datasets, which were shown to be chemical specific. Furthermore, a possible integration of metabolomics-driven POD derivation with the liver steatosis adverse outcome pathway (AOP) was demonstrated. The study highlights that advances in technologies enable application of in vitro metabolomics at scale; however, greater confidence in metabolite identification is required to ensure PODs are mechanistically anchored.
Journal Article
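Benchmark concentration (BMC) modelling, as used above, finds the concentration at which a fitted concentration–response curve departs from the control response by a chosen benchmark response (BMR). A minimal sketch with an assumed Hill-type curve and a bisection solver (the curve parameters and 10% BMR are illustrative, not the paper's fitted values):

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Bisection root finder; assumes f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def benchmark_concentration(response, control, bmr, lo=1e-9, hi=1e3):
    """Concentration where |response(c) - control| equals bmr * control."""
    return bisect_root(lambda c: abs(response(c) - control) - bmr * control, lo, hi)

# hypothetical fitted Hill-type decrease from a control response of 100
hill = lambda c: 100.0 - 50.0 * c / (1.0 + c)
bmc = benchmark_concentration(hill, control=100.0, bmr=0.10)
```

With these assumed parameters the 10% departure from control is reached at a concentration of 0.25; rank-ordering such per-feature BMC values across thousands of metabolomic features is what yields the paper's 1st- and 25th-ranked PODs.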
An experimental study of fog and cloud computing in CEP-based Real-Time IoT applications
by Castillo-Cara, Manuel; Caminero Blanca; Tenorio-Trigoso Alonso
in Cloud computing, Computer architecture, Cost analysis
2021
Internet of Things (IoT) has posed new requirements to the underlying processing architecture, especially for real-time applications such as event-detection services. Complex Event Processing (CEP) engines provide a powerful tool to implement these services. Fog computing has emerged as a solution to support IoT real-time applications, in contrast to the Cloud-based approach. This work is aimed at analysing a CEP-based Fog architecture for real-time IoT applications that uses a publish-subscribe protocol. A testbed has been developed with low-cost and local resources to verify the suitability of CEP engines for low-cost computing resources. To assess performance we have analysed the effectiveness and cost of the proposal in terms of latency and resource usage, respectively. Results show that the fog computing architecture reduces event-detection latencies by up to 35%, while the available computing resources are used more efficiently, when compared to a Cloud deployment. Performance evaluation also identifies the communication between the CEP engine and the final users as the most time-consuming component of latency. Moreover, the latency analysis concludes that the time required by the CEP engine is related to the compute resources, but depends nonlinearly on the number of connected things.
Journal Article
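The event-detection services above are built on a CEP engine, which matches patterns over a stream of incoming readings. A minimal sketch of one such pattern, a sliding-window average crossing a threshold (the class name, window size and threshold are illustrative; real CEP engines express such rules declaratively):

```python
from collections import deque

class WindowAverageDetector:
    """Emit an event when the moving average of the last `size` readings
    exceeds `threshold` -- a minimal CEP-style pattern over a stream."""

    def __init__(self, size, threshold):
        self.window = deque(maxlen=size)
        self.threshold = threshold

    def push(self, reading):
        """Feed one reading; return True when the pattern fires."""
        self.window.append(reading)
        full = len(self.window) == self.window.maxlen
        return full and sum(self.window) / len(self.window) > self.threshold

detector = WindowAverageDetector(size=3, threshold=30.0)
events = [detector.push(v) for v in [10, 20, 30, 40, 50]]
```

In the paper's architecture such detection logic runs either on a fog node near the sensors or in the cloud, and the placement drives the latency differences it measures.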