Catalogue Search | MBRL
73 result(s) for "Biswas, Sujit"
Nasal delivery of an IgM offers broad protection from SARS-CoV-2 variants
2021
Resistance represents a major challenge for antibody-based therapy for COVID-19 [1–4]. Here we engineered an immunoglobulin M (IgM) neutralizing antibody (IgM-14) to overcome the resistance encountered by immunoglobulin G (IgG)-based therapeutics. IgM-14 is over 230-fold more potent than its parental IgG-14 in neutralizing SARS-CoV-2. IgM-14 potently neutralizes the resistant virus raised by its corresponding IgG-14, three variants of concern—B.1.1.7 (Alpha, which first emerged in the UK), P.1 (Gamma, which first emerged in Brazil) and B.1.351 (Beta, which first emerged in South Africa)—and 21 other receptor-binding domain mutants, many of which are resistant to the IgG antibodies that have been authorized for emergency use. Although engineering IgG into IgM enhances antibody potency in general, selection of an optimal epitope is critical for identifying the most effective IgM that can overcome resistance. In mice, a single intranasal dose of IgM-14 at 0.044 mg per kg body weight confers prophylactic efficacy and a single dose at 0.4 mg per kg confers therapeutic efficacy against SARS-CoV-2. IgM-14, but not IgG-14, also confers potent therapeutic protection against the P.1 and B.1.351 variants. IgM-14 exhibits desirable pharmacokinetics and safety profiles when administered intranasally in rodents. Our results show that intranasal administration of an engineered IgM can improve efficacy, reduce resistance and simplify the prophylactic and therapeutic treatment of COVID-19.
An engineered IgM antibody administered intranasally in mice shows high prophylactic efficacy and therapeutic efficacy against SARS-CoV-2, and is also effective against multiple variants of concern that are resistant to IgG-based therapeutics.
Journal Article
Blockchain Empowered Federated Learning Ecosystem for Securing Consumer IoT Features Analysis
by
Shorfuzzaman, Mohammad
,
Alsufyani, Nawal
,
Alyami, Sultan
in
Access control
,
Artificial intelligence
,
Big Data
2022
Resource-constrained Consumer Internet of Things (CIoT) devices are controlled through gateway devices (e.g., smartphones, computers) connected to Mobile Edge Computing (MEC) servers or a cloud regulated by a third party. Machine Learning (ML) has recently been widely used in automation, consumer behavior analysis, device quality improvement, and similar tasks. Typical ML makes predictions by analyzing customers’ raw data in a centralized system, which raises security and privacy issues such as data leakage, privacy violations, and a single point of failure. To overcome these problems, Federated Learning (FL) emerged as an initial solution for providing services without sharing personal data. In FL, a centralized aggregator averages clients’ local updates into a global model used for the next round of training. However, the centralized aggregator raises similar issues: a single point of control can leak the updated model and interrupt the entire process. Additionally, research has shown that data can be recovered from model parameters. Beyond that, since the Gateway (GW) device has full access to the raw data, it can also threaten the entire ecosystem. This research contributes a blockchain-controlled, edge-intelligence federated learning framework that provides a distributed learning platform for CIoT. The federated learning platform enables collaborative learning without sharing users’ raw data, and the blockchain network replaces the centralized aggregator and ensures secure participation of gateway devices in the ecosystem. Furthermore, blockchain is trustless, immutable, and anonymous, encouraging CIoT end users to participate. We evaluated the framework and federated learning outcomes using the well-known Stanford Cars dataset. Experimental results demonstrate the effectiveness of the proposed framework.
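The aggregation step the abstract describes — a central server averaging clients' local updates into a global model — can be sketched as plain federated averaging. A minimal illustration in Python; all names are hypothetical and not from the paper:

```python
# Minimal sketch of federated averaging (FedAvg-style): the centralized
# aggregation step that the blockchain network replaces in this framework.

def federated_average(client_updates, client_sizes):
    """Average client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    global_model = [0.0] * n_params
    for update, size in zip(client_updates, client_sizes):
        for i, w in enumerate(update):
            global_model[i] += w * (size / total)
    return global_model

# Example: two gateway devices report 2-parameter local models;
# the second device holds three times as much data, so it dominates.
updated = federated_average([[1.0, 2.0], [3.0, 4.0]], [10, 30])
```

In a blockchain-controlled variant, the same weighted average would be computed by (or verified against) ledger transactions rather than a single trusted server.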
Journal Article
MULTICAUSENET temporal attention for multimodal emotion cause pair extraction
2025
In the realm of emotion recognition, understanding the intricate relationships between emotions and their underlying causes remains a significant challenge. This paper presents MultiCauseNet, a novel framework designed to effectively extract emotion-cause pairs by leveraging multimodal data, including text, audio, and video. The proposed approach integrates advanced multimodal feature extraction techniques with attention mechanisms to enhance the understanding of emotional contexts. The key text, audio, and video features are extracted using BERT, Wav2Vec, and Vision Transformers (ViTs), and are then used to construct a comprehensive multimodal graph. The graph encodes the relationships between emotions and potential causes, and Graph Attention Networks (GATs) are used to weigh and prioritize relevant features across the modalities. To further improve performance, Transformers are employed to model intra-modal and inter-modal dependencies through self-attention and cross-attention mechanisms, enabling more robust multimodal information fusion that captures the global context of emotional interactions. This dynamic attention mechanism enables MultiCauseNet to capture complex interactions between emotional triggers and causes, improving extraction accuracy. Experiments on emotion benchmark datasets, including IEMOCAP and MELD, achieve weighted F1 (WF1) scores of 73.02 and 53.67, respectively. The results for cause-pair analysis are evaluated on ECF and ConvECPE, with cause recognition F1 scores of 65.12 and 84.51 and pair extraction F1 scores of 55.12 and 51.34, respectively.
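The self- and cross-attention the abstract relies on both reduce to the standard scaled dot-product attention formula, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A dependency-free Python sketch (illustrative only, not the paper's implementation):

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, on lists of vectors.

    Self-attention uses Q, K, V from one modality; cross-attention draws Q
    from one modality and K, V from another.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

GAT layers apply the same softmax-weighted aggregation idea, but restricted to a node's graph neighbours.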
Journal Article
A Machine Learning-Based Anomaly Prediction Service for Software-Defined Networks
2022
Software-defined networking (SDN) has seen tremendous growth and can be exploited in different network scenarios, from data centers to wide-area 5G networks. It shifts control logic from the devices to a centralized entity (a programmable controller) for efficient traffic monitoring and flow management. A software-based controller enforces rules and policies on the requests sent by forwarding elements; however, it cannot detect anomalous patterns in the network traffic. As a result, the controller may install flow rules for anomalous traffic, reducing overall network performance. These anomalies may indicate threats to the network and degrade its performance and security. Machine learning (ML) approaches can identify such traffic flow patterns and predict impending threats to the system. In this work, we propose an ML-based service to predict traffic anomalies for software-defined networks. We first create a large network-traffic dataset by modeling a programmable data center with a signature-based intrusion-detection system. A feature vector is constructed and pre-processed for each flow request sent by a forwarding element. Each request’s feature vector is then input to a machine learning classifier, which is trained to predict anomalies. Finally, we use the holdout cross-validation technique to evaluate the proposed approach. The evaluation results show that the proposed approach is highly accurate. Compared with baseline approaches (random prediction and zero rule), the performance improvement of the proposed approach in average accuracy, precision, recall, and f-measure is (54.14%, 65.30%, 81.63%, and 73.70%) and (4.61%, 11.13%, 9.45%, and 10.29%), respectively.
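The evaluation pipeline the abstract names — a holdout split plus a zero-rule baseline for comparison — is standard and can be sketched in a few lines of Python (function names are illustrative, not from the paper):

```python
def holdout_split(X, y, test_frac=0.25):
    """Holdout cross-validation: reserve the last fraction as the test set."""
    cut = int(len(X) * (1 - test_frac))
    return X[:cut], X[cut:], y[:cut], y[cut:]

def zero_rule_predict(y_train, n):
    """Zero-rule baseline: always predict the majority class from training."""
    majority = max(set(y_train), key=y_train.count)
    return [majority] * n

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

A real classifier must beat the zero-rule baseline to demonstrate it has learned anything beyond the class distribution, which is why the abstract reports improvements over both random prediction and zero rule.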
Journal Article
Enhancing machine learning-based forecasting of chronic renal disease with explainable AI
by
Singamsetty, Sanjana
,
Ghanta, Swetha
,
Biswas, Sujit
in
Artificial intelligence
,
Care and treatment
,
Chronic kidney disease
2024
Chronic renal disease (CRD) is a significant concern in healthcare, highlighting the crucial need for early and accurate prediction in order to provide prompt treatment and enhance patient outcomes. This article presents an end-to-end predictive model for the binary classification of CRD in healthcare. Through hyperparameter optimization using GridSearchCV, we significantly improve model performance. Leveraging a range of machine learning (ML) techniques, our approach achieves a high predictive accuracy of 99.07% with random forest, extra trees, logistic regression with an L2 penalty, and artificial neural networks (ANN). Through rigorous evaluation, logistic regression with an L2 penalty emerges as the top performer, demonstrating the most consistent performance. Moreover, the integration of Explainable Artificial Intelligence (XAI) techniques, such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), enhances interpretability and reveals insights into model decision-making. By emphasizing an end-to-end model development process, from data collection to deployment, our system enables real-time predictions and informed healthcare decisions. This comprehensive approach underscores the potential of predictive modeling in healthcare to optimize clinical decision-making and improve patient care outcomes.
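The hyperparameter optimization step the abstract mentions is, at its core, an exhaustive search over a parameter grid. A dependency-free sketch of that idea (parameter names and the score function are hypothetical, not the paper's):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Score every combination in the grid; return the best one.

    param_grid maps each hyperparameter name to a list of candidate values;
    score_fn takes a dict of parameters and returns a score to maximize
    (e.g., mean cross-validated accuracy).
    """
    keys = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy example: a score function that peaks at C = 1.0.
best, score = grid_search({"C": [0.1, 1.0, 10.0], "penalty": ["l2"]},
                          lambda p: -abs(p["C"] - 1.0))
```

scikit-learn's GridSearchCV wraps this same loop with cross-validation folds and refits the estimator on the best combination.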
Journal Article
An intranasal nanoparticle STING agonist protects against respiratory viruses in animal models
2024
Respiratory viral infections cause morbidity and mortality worldwide. Despite the success of vaccines, vaccination efficacy is weakened by the rapid emergence of viral variants with immunoevasive properties. The development of an off-the-shelf, effective, and safe therapy against respiratory viral infections is thus desirable. Here, we develop NanoSTING, a nanoparticle formulation of the endogenous STING agonist, 2′−3′ cGAMP, to function as an immune activator and demonstrate its safety in mice and rats. A single intranasal dose of NanoSTING protects against pathogenic strains of SARS-CoV-2 (alpha and delta VOC) in hamsters. In transmission experiments, NanoSTING reduces the transmission of SARS-CoV-2 Omicron VOC to naïve hamsters. NanoSTING also protects against oseltamivir-sensitive and oseltamivir-resistant strains of influenza in mice. Mechanistically, NanoSTING upregulates locoregional interferon-dependent and interferon-independent pathways in mice, hamsters, as well as non-human primates. Our results thus implicate NanoSTING as a broad-spectrum immune activator for controlling respiratory virus infection.
Respiratory viral infection causes fast onset of pathology, and is often compounded by vaccination-resistant variants. Here, the authors show that a STING agonist nanoparticle, termed NanoSTING, helps protect against SARS-CoV-2 in hamsters and influenza in mice, thereby implicating NanoSTING as a broad-spectrum treatment for respiratory viral infections.
Journal Article
Blockchain controlled trustworthy federated learning platform for smart homes
by
Latif, Zohaib
,
Alenazi, Mohammed J. F.
,
Bairagi, Anupam Kumar
in
blockchain
,
computer network security
,
federated learning
2024
Smart device manufacturers rely on insights from smart home (SH) data to update their devices, and service providers similarly use it for predictive maintenance. In terms of data security and privacy, combining distributed federated learning (FL) with blockchain technology is being considered to prevent single points of failure and model poisoning attacks. However, adding blockchain to an FL environment can worsen blockchain's scaling issues and create regular service interruptions at SHs. This article presents a scalable Blockchain-based Privacy-preserving Federated Learning (BPFL) architecture for an SH ecosystem that integrates blockchain and FL. BPFL can automate SHs' services and distribute machine learning (ML) operations to update IoT manufacturer models and scale service provider services. The architecture uses a local peer as a gateway to connect SHs to the blockchain network and safeguard user data, transactions, and ML operations. Blockchain facilitates ecosystem access management and learning. The Stanford Cars dataset and an IoT dataset have been used in test-bed experiments, taking into account the nature of the data (i.e., images and numeric data). The experiments show that ledger optimisation can boost scalability by 40–60% in the blockchain network by reducing transaction overhead by 60%, while increasing learning capacity by 10% compared to baseline FL techniques. The figure presents a novel blockchain-based federated learning architecture in which a gateway peer enhances scalability.
Journal Article
Groundwater Level Prediction Using a Multiple Objective Genetic Algorithm-Grey Relational Analysis Based Weighted Ensemble of ANFIS Models
by
El-Shafei, Ahmed
,
Roy, Dilip
,
Mattar, Mohamed
in
algorithms
,
Aquifers
,
Artificial intelligence
2021
Predicting groundwater levels is critical for ensuring sustainable use of an aquifer’s limited groundwater reserves and developing a useful groundwater abstraction management strategy. The purpose of this study was to assess the predictive accuracy and estimation capability of various models based on the Adaptive Neuro Fuzzy Inference System (ANFIS). These models included Differential Evolution-ANFIS (DE-ANFIS), Particle Swarm Optimization-ANFIS (PSO-ANFIS), and traditional Hybrid Algorithm tuned ANFIS (HA-ANFIS) for the one- and multi-week forward forecast of groundwater levels at three observation wells. Model-independent partial autocorrelation functions followed by frequentist lasso regression-based feature selection approaches were used to recognize appropriate input variables for the prediction models. The performances of the ANFIS models were evaluated using various statistical performance evaluation indexes. The results revealed that the optimized ANFIS models performed equally well in predicting one-week-ahead groundwater levels at the observation wells when a set of various performance evaluation indexes were used. For improving prediction accuracy, a weighted-average ensemble of ANFIS models was proposed, in which weights for the individual ANFIS models were calculated using a Multiple Objective Genetic Algorithm (MOGA). The MOGA accounts for a set of benefits (higher values indicate better model performance) and cost (smaller values indicate better model performance) performance indexes calculated on the test dataset. Grey relational analysis was used to select the best solution from a set of feasible solutions produced by a MOGA. A MOGA-based individual model ranking revealed the superiority of DE-ANFIS (weight = 0.827), HA-ANFIS (weight = 0.524), and HA-ANFIS (weight = 0.697) at observation wells GT8194046, GT8194048, and GT8194049, respectively. 
Shannon’s entropy-based decision theory was utilized to rank the ensemble and individual ANFIS models using a set of performance indexes. The ranking indicated that the ensemble model outperformed all individual models at all observation wells (ranking values = 0.987, 0.985, and 0.995 at observation wells GT8194046, GT8194048, and GT8194049, respectively). The worst performers were PSO-ANFIS (ranking value = 0.845), PSO-ANFIS (ranking value = 0.819), and DE-ANFIS (ranking value = 0.900) at observation wells GT8194046, GT8194048, and GT8194049, respectively. The generalization capability of the proposed ensemble modelling approach was evaluated by forecasting 2-, 4-, 6-, and 8-week-ahead groundwater levels using data from GT8194046. The evaluation results confirmed the usability of the ensemble modelling for forecasting groundwater levels at longer forecasting horizons. The study demonstrated that the ensemble approach may be successfully used to predict multi-week-ahead groundwater levels, utilizing lagged groundwater levels as inputs.
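The combination step behind the proposed ensemble — a weighted average of individual model forecasts using the MOGA-derived weights — can be sketched in a few lines of Python (illustrative only; the weight values would come from the MOGA optimization described above):

```python
def weighted_ensemble(predictions, weights):
    """Combine per-model forecast series with normalized weights.

    predictions: one forecast list per individual model (e.g., per ANFIS
    variant); weights: one non-negative weight per model, normalized here
    so they sum to 1 before averaging.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(w * p[i] for w, p in zip(norm, predictions))
            for i in range(len(predictions[0]))]

# Toy example: two models forecast one time step; the second model
# carries three times the weight, so the ensemble leans toward it.
combined = weighted_ensemble([[10.0], [20.0]], [1.0, 3.0])
```

Because the weights are normalized, the ensemble forecast always lies within the range spanned by the individual model forecasts.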
Journal Article
ALL-Net: integrating CNN and explainable-AI for enhanced diagnosis and interpretation of acute lymphoblastic leukemia
by
Ghanta, Swetha
,
Thiriveedhi, Abhiram
,
Biswas, Sujit
in
Acute lymphoblastic leukemia
,
Acute lymphocytic leukemia
,
Algorithms
2025
This article presents a new model, ALL-Net, for the detection of acute lymphoblastic leukemia (ALL) using a custom convolutional neural network (CNN) architecture and explainable Artificial Intelligence (XAI). A dataset of 3,256 peripheral blood smear (PBS) images belonging to four classes—benign (hematogones) and three ALL subtypes (Early B, Pre-B, and Pro-B)—is utilized for training and evaluation. The ALL-Net CNN is initially designed and trained on the PBS image dataset, achieving an impressive test accuracy of 97.85%. Data augmentation techniques are then applied to augment the benign class and address the class imbalance challenge. The augmented dataset is used to retrain ALL-Net, resulting in a notable improvement in test accuracy, reaching 99.32%. Beyond accuracy, we have considered other evaluation metrics, and the results illustrate the potential of ALL-Net with an average precision of 99.35%, recall of 99.33%, and F1 score of 99.58%. Additionally, XAI techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) algorithm, are employed to interpret the model’s predictions, providing insights into the decision-making process of the ALL-Net CNN. These findings highlight the effectiveness of CNNs in accurately detecting ALL from PBS images and emphasize the importance of addressing data imbalance through appropriate preprocessing techniques, while also demonstrating how XAI mitigates the black-box nature of deep learning models. The proposed ALL-Net outperformed EfficientNet, MobileNetV3, VGG-19, Xception, InceptionV3, ResNet50V2, VGG-16, and NASNetLarge, and trailed only DenseNet201, by a slight margin of 0.5%. Nevertheless, the ALL-Net model is much less complex than DenseNet201, allowing it to deliver faster results. This highlights the value of a more customized and streamlined model, such as ALL-Net, specifically designed for ALL classification.
The entire source code of our proposed CNN is publicly available at https://github.com/Abhiram014/ALL-Net-Detection-of-ALL-using-CNN-and-XAI .
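The class-imbalance fix the abstract describes — augmenting the under-represented benign class before retraining — follows the same logic as simple random oversampling. A minimal sketch in Python (names are illustrative; the paper uses image augmentation transforms rather than plain duplication):

```python
import random

def oversample_minority(X, y, target_label):
    """Duplicate minority-class samples until it matches the majority count.

    In the ALL-Net setting, each duplicate would additionally pass through
    an augmentation transform (rotation, flip, etc.) instead of being an
    exact copy.
    """
    minority = [x for x, lbl in zip(X, y) if lbl == target_label]
    majority_count = sum(lbl != target_label for lbl in y)
    rng = random.Random(0)  # fixed seed for reproducibility
    X_bal, y_bal = list(X), list(y)
    while sum(lbl == target_label for lbl in y_bal) < majority_count:
        X_bal.append(rng.choice(minority))
        y_bal.append(target_label)
    return X_bal, y_bal
```

Balancing the classes this way keeps the loss from being dominated by the majority (ALL subtype) images, which is consistent with the accuracy gain the abstract reports after retraining on the augmented dataset.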
Journal Article
Exploring the fusion of lattice‐based quantum key distribution for secure Internet of Things communications
by
Hemant Kumar Reddy, K.
,
Ahmed, Mohammed Altaf
,
Goswami, Rajat S.
in
Algorithms
,
Communication
,
Computers
2024
The integration of lattice‐based cryptography principles with Quantum Key Distribution (QKD) protocols is explored to enhance security in the context of Internet of Things (IoT) ecosystems. With the advent of quantum computing, traditional cryptographic methods are increasingly susceptible to attacks, necessitating the development of quantum‐resistant approaches. By leveraging the inherent resilience of lattice‐based cryptography, a synergistic fusion with QKD is proposed to establish secure and robust communication channels among IoT devices. Through comprehensive Qiskit simulations and theoretical analysis, the feasibility, security guarantees, and performance implications of this novel hybrid approach are thoroughly investigated. The findings not only demonstrate the efficacy of lattice‐based QKD in mitigating quantum threats, but also highlight its potential to fortify IoT communications against emerging security challenges. Moreover, the authors provide valuable insights into the practical implementation considerations and scalability aspects of this fusion approach. This research contributes to advancing the understanding of quantum‐resistant cryptography for IoT applications and paves the way for further exploration and development in this critical domain.
Journal Article