Catalogue Search | MBRL
22 result(s) for "Energy-efficient AI"
Comparative analysis of model compression techniques for achieving carbon efficient AI
2025
The growing computational demands of models, such as BERT, have raised concerns about their environmental impact. This study addresses the pressing need for sustainable Artificial Intelligence practices by investigating the efficiency of model compression techniques in reducing the energy consumption and carbon emissions of transformer-based models without compromising performance. Specifically, we applied pruning, knowledge distillation, and quantization to transformer-based models (BERT, DistilBERT, ALBERT, and ELECTRA) using the Amazon Polarity Dataset for sentiment analysis. We also compared the energy efficiency of these compressed models against inherently carbon-efficient transformer models, such as TinyBERT and MobileBERT. To evaluate each model’s energy consumption and carbon emissions, we utilized the open-source tool CodeCarbon. Our findings indicate that applying model compression techniques resulted in a reduction in energy consumption of 32.097% for BERT with pruning and distillation, % for DistilBERT with pruning, 7.12% for ALBERT with quantization, and 23.934% for ELECTRA with pruning and distillation, while maintaining performance metrics within a range of 95.871–99.062% accuracy, precision, recall, F1 score, and ROC AUC except for ALBERT with quantization. Specifically, BERT with pruning and distillation achieved 95.90% accuracy, 95.90% precision, 95.90% recall, 95.90% F1-score, and 98.87% ROC AUC; DistilBERT with pruning achieved 95.87% accuracy, 95.87% precision, 95.87% recall, 95.87% F1-score, and 99.06% ROC AUC; ELECTRA with pruning and distillation achieved 95.92% accuracy, 95.92% precision, 95.92% recall, 95.92% F1-score, and 99.30% ROC AUC; and ALBERT with quantization achieved 65.44% accuracy, 67.82% precision, 65.44% recall, 63.46% F1-score, and 72.31% ROC AUC, indicating significant performance degradation due to quantization sensitivity in its already compressed architecture. Overall, this demonstrates the potential for sustainable Artificial Intelligence practices using model compression.
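The abstract names pruning as one of its compression techniques but does not show the mechanism. As a rough illustrative sketch only (the weight values, sparsity level, and pure-Python representation are assumptions, not the paper's actual pipeline), magnitude pruning zeroes the smallest-magnitude weights so the model can be stored sparsely:

```python
# Illustrative sketch of magnitude pruning. Values and sparsity level
# are made up for demonstration; real pipelines operate on framework
# tensors layer by layer.

def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest |w|."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Eight hypothetical weights; pruning at 50% sparsity zeroes the
# four with the smallest magnitudes.
weights = [0.8, -0.05, 0.3, -0.9, 0.01, 0.45, -0.2, 0.02]
pruned = prune_by_magnitude(weights, sparsity=0.5)
```

The large-magnitude weights (0.8, -0.9, 0.3, 0.45) survive; the rest become zero, which is what lets a sparse storage format shrink the model.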
Journal Article
Carbon efficient quantum AI: an empirical study of ansätz design trade-offs in QNN and QLSTM models
by
Soni, Jayesh
,
Tripathi, Sarvapriya
,
Upadhyay, Himanshu
in
639/705/1042
,
639/705/258
,
639/705/794
2025
The rising environmental cost of deep learning has placed Green AI, which promotes focus on reducing the carbon footprint of AI, at the forefront of sustainable computing. In this study, we investigate Quantum Machine Learning (QML) as a novel and energy-efficient alternative by benchmarking two quantum models, the Quantum Neural Network (QNN) and Quantum Long Short-Term Memory (QLSTM), on the N-BaIoT anomaly detection dataset. Our first phase of experiments compares the QNN and QLSTM models using ten distinct quantum circuit designs (ansätze A1–A10). We systematically compare trade-offs between classification performance, model complexity, training time, and energy consumption. The results indicate that simpler QNN ansätze can achieve accuracy comparable to more complex ones while consuming significantly less energy and converging faster. In particular, QNN with ansatz A4 provided the optimal balance between performance and energy efficiency, consistently outperforming QLSTM across most metrics. A detailed energy breakdown confirmed GPU usage as the dominant source of power consumption, underscoring the importance of circuit-efficient quantum design. To contextualize QML’s viability, we conducted a second phase of experiments comparing quantum models with three benchmark classical machine learning models: Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), and CatBoost. We find that the classical models demonstrated faster training times and lower energy consumption, highlighting and contrasting the maturity of algorithmic development that classical ML algorithms have already seen. Finally, we examined the energy implications of developing quantum models on actual quantum hardware. This third phase of experiments compared training on IBM Qiskit’s emulation environment (running on GPU servers) versus execution on real IBM Quantum hardware. Highlighting the significant differences in execution time and energy footprint, extrapolated results indicate that quantum hardware still incurs higher energy costs. This suggests that further hardware-aware ansatz optimization and improvements in quantum infrastructure are essential to realizing carbon-efficient QML at scale.
Journal Article
Neural Network Implementation for Fire Detection in Critical Infrastructures: A Comparative Analysis on Embedded Edge Devices
by
Aramendia, Jon
,
Cabrera, Andrea
,
Martín, Jon
in
Accuracy
,
Artificial intelligence
,
Artificial neural networks
2025
This paper explores the application of artificial intelligence on edge devices to enhance security in critical infrastructures, with a specific focus on the use case of a battery-powered mobile system for fire detection in tunnels. The study leverages the YOLOv5 convolutional neural network (CNN) for real-time detection, focusing on a comparative analysis across three low-power platforms (NXP i.MX93, Xilinx Kria KV260, and NVIDIA Jetson Orin Nano) and evaluating their performance in terms of detection accuracy (mAP), inference time, and energy consumption. The paper also presents a methodology for implementing neural networks on various platforms, aiming to provide a scalable approach to edge artificial intelligence (AI) deployment. The findings offer valuable insights into the trade-offs between computational efficiency and power consumption, guiding the selection of edge computing solutions in security-critical applications.
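The trade-off this abstract evaluates, inference time versus energy on battery power, reduces to a simple energy-per-inference calculation. A minimal sketch, with entirely hypothetical platform names and operating points (the paper's measured values are not given in the abstract):

```python
def energy_per_inference_mj(power_w, latency_ms):
    """Energy per inference in millijoules (W x ms = mJ)."""
    return power_w * latency_ms

# Hypothetical operating points, NOT the paper's measurements:
# a low-power board may draw less but take longer per frame than
# a faster accelerator, so neither axis alone decides the choice.
platforms = {
    "slow_low_power": energy_per_inference_mj(5.0, 40.0),   # 200 mJ
    "fast_high_power": energy_per_inference_mj(15.0, 8.0),  # 120 mJ
}
```

Under these assumed numbers the faster, higher-draw device actually costs less energy per frame, which is exactly why the abstract argues for measuring both axes rather than picking the lowest-wattage board.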
Journal Article
Lightweight and Low-Parametric Network for Hardware Inference of Obstructive Sleep Apnea
by
Mosa, Abu Saleh Mohammad
,
McCrae, Christina S.
,
Paul, Tanmoy
in
apnea
,
Artificial intelligence
,
Datasets
2024
Background: Obstructive sleep apnea is a sleep disorder that is linked to many health complications and can even be lethal in its severe form. Overnight polysomnography is the gold standard for diagnosing apnea, which is expensive, time-consuming, and requires manual analysis by a sleep expert. Artificial intelligence (AI)-embedded wearable device as a portable and less intrusive monitoring system is a highly desired alternative to polysomnography. However, AI models often require substantial storage capacity and computational power for edge inference which makes it a challenging task to implement the models in hardware with memory and power constraints. Methods: This study demonstrates the implementation of depth-wise separable convolution (DSC) as a resource-efficient alternative to spatial convolution (SC) for real-time detection of apneic activity. Single lead electrocardiogram (ECG) and oxygen saturation (SpO2) signals were acquired from the PhysioNet databank. Using each type of convolution, three different models were developed using ECG, SpO2, and model fusion. For both types of convolutions, the fusion models outperformed the models built on individual signals across all the performance metrics. Results: Although the SC-based fusion model performed the best, the DSC-based fusion model was 9.4, 1.85, and 11.3 times more energy efficient than SC-based ECG, SpO2, and fusion models, respectively. Furthermore, the accuracy, precision, and specificity yielded by the DSC-based fusion model were comparable to those of the SC-based individual models (~95%, ~94%, and ~94%, respectively). Conclusions: DSC is commonly used in mobile vision tasks, but its potential in clinical applications for 1-D signals remains unexplored. While SC-based models outperform DSC in accuracy, the DSC-based model offers a more energy-efficient solution with acceptable performance, making it suitable for AI-embedded apnea detection systems.
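The efficiency gap the abstract reports between spatial convolution (SC) and depthwise separable convolution (DSC) follows from the standard parameter-count argument, shown here for the 1-D signals the paper uses. The kernel size and channel counts below are illustrative assumptions, not the paper's architecture, and biases are omitted:

```python
def conv1d_params(k, c_in, c_out):
    """Parameters in a standard 1-D convolution: one k-wide kernel
    per (input channel, output channel) pair."""
    return k * c_in * c_out

def dsc1d_params(k, c_in, c_out):
    """Depthwise separable 1-D convolution: a depthwise stage
    (k * c_in) followed by a 1x1 pointwise stage (c_in * c_out)."""
    return k * c_in + c_in * c_out

# Illustrative layer shape (not from the paper):
k, c_in, c_out = 9, 32, 64
sc = conv1d_params(k, c_in, c_out)   # 18432 parameters
dsc = dsc1d_params(k, c_in, c_out)   # 2336 parameters
```

Factoring the convolution this way cuts parameters (and multiply-accumulates) by roughly 1/c_out + 1/k, here close to an 8x reduction, which is the source of the energy savings the study measures.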
Journal Article
A Survey on Optimization Techniques for Edge Artificial Intelligence (AI)
by
Hewage, Chaminda
,
Lawrence, John Jeyasekaran
,
Chelliah, Pethuru Raj
in
AI model optimization
,
Airports
,
Algorithms
2023
Artificial Intelligence (AI) models are being produced and used to solve a variety of current and future business and technical problems. AI model engineering processes, platforms, and products are therefore acquiring special significance across industry verticals. To achieve deeper automation, highly promising and productive AI models are generated from numerous data features, and the resulting models are bulky. Such heavyweight models consume a lot of computation, storage, networking, and energy resources. At the same time, AI models are increasingly being deployed in IoT devices to ensure real-time knowledge discovery and dissemination. Real-time insights are of paramount importance in producing and releasing real-time and intelligent services and applications. Thus, edge intelligence through on-device data processing has laid a stimulating foundation for real-time intelligent enterprises and environments. With these emerging requirements, the focus has turned towards competent and cognitive techniques for maximally compressing huge AI models without sacrificing model performance, and AI researchers have developed a number of powerful optimization techniques and tools accordingly. This paper digs deep into and describes model optimization at different levels and layers, and, having surveyed the optimization methods, highlights the importance of an enabling AI model optimization framework.
Journal Article
The Role of Artificial Intelligence in Developing the Tall Buildings of Tomorrow
by
Aboulnaga, Mohsen
,
Emad, Samaa
,
Wanas, Ayman
in
AI and energy-efficient renovation
,
Architectural design
,
Architecture
2025
The application of artificial intelligence (AI) in tall buildings’ development provides transformative opportunities for facing population growth pressures and sustainability challenges in cities. This study presents a comprehensive review of both the current literature and the theoretical framework of AI and its role in construction, specifically analyzing the convergence of AI and skyscraper development. The research methodology combines scholarly sources, AI image generation techniques, an analytical approach, and a comparative analysis of traditional versus AI-enhanced approaches. This study identifies key domains where AI significantly impacts skyscraper evolution, including design optimization, energy management, construction processes, and operational efficiencies. It highlights short-term benefits like enhanced architectural design through rapid generative design iterations and material optimization, alongside long-term implications involving adaptive building technologies and sustainability enhancements. Additionally, it addresses the advantages and challenges of adopting AI in architecture, considering various factors (e.g., sustainability, security, and occupant well-being), as well as the impact of different climates on AI in architecture and construction. It also explores transformative applications across diverse skyscraper functions and how AI can bridge different cultures and technologies. The findings reveal AI’s substantial potential in tall buildings’ (TBs’) design and management (i.e., structural optimization, energy saving, safety protocols, and operational efficiency) by leveraging innovative technologies such as machine learning, computer vision, and predictive modeling. In conclusion, the study reflects AI’s dual role as both a revolutionary tool that enhances traditional architectural methods and a catalyst for new design paradigms prioritizing sustainability and resilience. Ultimately, this research underscores the importance of balancing AI innovation with established architectural principles to foster a favorable urban future that embraces both technological advancement and foundational design values. This study serves as a base for future research in the AI field.
Journal Article
Modeling cognition through adaptive neural synchronization: a multimodal framework using EEG, fMRI, and reinforcement learning
by
Hall, Rashad
,
Crogman, Horace T.
,
Maleki, Maryam
in
cognitive modeling
,
EEG-fMRI integration
,
energy-efficient computation
2025
Understanding the cognitive process of thinking as a neural phenomenon remains a central challenge in neuroscience and computational modeling. This study addresses this challenge by presenting a biologically grounded framework that simulates adaptive decision making across cognitive states.
The model integrates neuronal synchronization, metabolic energy consumption, and reinforcement learning. Neural synchronization is simulated using Kuramoto oscillators, while energy dynamics are constrained by multimodal activity profiles. Reinforcement learning agents, Q-learning and a Deep Q-Network (DQN), modulate external inputs to maintain optimal synchrony with minimal energy cost. The model is validated using real EEG and fMRI data, comparing simulated and empirical outputs across spectral power, phase synchrony, and BOLD activity.
The DQN agent achieved rapid convergence, stabilizing cumulative rewards within 200 episodes and reducing mean synchronization error by over 40%, outperforming Q-learning in speed and generalization. The model successfully reproduced canonical brain states: focused attention, multitasking, and rest. Simulated EEG showed dominant alpha-band power (3.2 × 10 a.u.), while real EEG exhibited beta-dominance (3.2 × 10 a.u.), indicating accurate modeling of resting states and tunability for active tasks. Phase Locking Value (PLV) ranged from 0.9806 to 0.9926, with the focused condition yielding the lowest circular variance (0.0456) and a near-significant phase shift compared to rest (t = -2.15, p = 0.075). Cross-modal validation revealed moderate correlation between simulated and real BOLD signals (r = 0.30, resting condition), with delayed inputs improving temporal alignment. General Linear Model (GLM) analysis of simulated BOLD data showed high region-specific prediction accuracy (R² = 0.973–0.993, p < 0.001), particularly in prefrontal, parietal, and anterior cingulate cortices. Voxel-wise correlation and ICA decomposition confirmed structured network dynamics.
These findings demonstrate that the framework captures both electrophysiological and spatial aspects of brain activity, respects neuroenergetic constraints, and adaptively regulates brain-like states through reinforcement learning. The model offers a scalable platform for simulating cognition and developing biologically inspired neuroadaptive systems.
This work provides a novel and testable approach to modeling thinking as a biologically constrained control problem and lays the groundwork for future applications in cognitive modeling and brain-computer interfaces.
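The Kuramoto synchronization at the core of the framework above can be sketched in a few lines: each oscillator's phase drifts at its natural frequency and is pulled toward the population mean phase, and the order parameter r in [0, 1] measures coherence. The oscillator count, coupling strengths, and integration settings below are illustrative assumptions, not the paper's configuration:

```python
import cmath
import math
import random

def kuramoto(n=20, coupling=2.0, dt=0.01, steps=2000, seed=0):
    """Euler-integrate the Kuramoto model and return the final
    phase-coherence order parameter r = |mean(exp(i*theta))|."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(n)]         # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        mean_field = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(mean_field), cmath.phase(mean_field)
        # Mean-field form: each phase is pulled toward psi with strength K*r.
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

# Stronger coupling drives the population toward synchrony:
weak = kuramoto(coupling=0.2)
strong = kuramoto(coupling=4.0)
```

An external controller, such as the abstract's DQN agent, would in effect be tuning inputs like the coupling here, trading synchrony gains against the energy cost of driving the system.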
Journal Article
Inspiring from Galaxies to Green AI in Earth: Benchmarking Energy-Efficient Models for Galaxy Morphology Classification
by
Gkouvrikos, Emmanouil V.
,
Georgousis, Ilias
,
Alevizos, Vasileios
in
Accuracy
,
Algorithms
,
Artificial intelligence
2025
Recent advancements in space exploration have significantly increased the volume of astronomical data, heightening the demand for efficient analytical methods. Concurrently, the considerable energy consumption of machine learning (ML) has fostered the emergence of Green AI, emphasizing sustainable, energy-efficient computational practices. We introduce the first large-scale Green AI benchmark for galaxy morphology classification, evaluating over 30 machine learning architectures (classical, ensemble, deep, and hybrid) on CPU and GPU platforms using a balanced subset of the Galaxy Zoo dataset. Beyond traditional metrics (precision, recall, and F1-score), we quantify inference latency, energy consumption, and carbon-equivalent emissions to derive an integrated EcoScore that captures the trade-off between predictive performance and environmental impact. Our results reveal that a GPU-optimized multilayer perceptron achieves state-of-the-art accuracy of 98% while emitting 20× less CO2 than ensemble forests, which—despite comparable accuracy—incur substantially higher energy costs. We demonstrate that hardware–algorithm co-design, model sparsification, and careful hyperparameter tuning can reduce carbon footprints by over 90% with negligible loss in classification quality. These findings provide actionable guidelines for deploying energy-efficient, high-fidelity models in both ground-based data centers and onboard space observatories, paving the way for truly sustainable, large-scale astronomical data analysis.
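The abstract does not give the EcoScore formula, so the following is purely an assumed form, shown only to make the performance-versus-emissions trade-off concrete: a weighted blend of a predictive metric and a normalized emissions term. The weighting, reference emission level, and example values are all hypothetical:

```python
def eco_score(f1, co2_g, alpha=0.5, co2_ref=100.0):
    """Hypothetical EcoScore (NOT the paper's definition): blend a
    predictive metric with emissions savings normalized against a
    reference footprint co2_ref (grams CO2-equivalent)."""
    eco = max(0.0, 1.0 - co2_g / co2_ref)   # 1.0 = zero emissions
    return alpha * f1 + (1.0 - alpha) * eco

# Two hypothetical models with equal F1 but very different footprints:
efficient = eco_score(f1=0.98, co2_g=5.0)
heavy = eco_score(f1=0.98, co2_g=100.0)
```

Any score of this shape rewards the abstract's headline result, a model matching ensemble-level accuracy at a fraction of the emissions, over an equally accurate but far more carbon-intensive one.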
Journal Article
Blockchain Technology in Logistics and Supply Chain Management: A Bibliometric and Co-Citation Analysis
by
Govindaraj, Manoj
,
Vigneshwaran, K.S.
,
Seelaboyina, Radha
in
ai-enhanced smart contracts
,
Bibliometrics
,
Blockchain
2025
Existing studies point out meagre adoption, high computational cost, low interoperability, privacy concerns, and energy inefficiency as major challenges. Therefore, this study proposes a next-generation blockchain framework to overcome existing limitations by employing hybrid blockchain models, privacy-preserving techniques, energy-efficient consensus mechanisms, and adaptive smart contracts. Additionally, the research presents a cross-platform interoperability model to integrate blockchain with enterprise resource planning (ERP) systems, Internet of Things (IoT) networks, and cloud logistics infrastructure in a seamless manner. Similarly, AI-assisted smart contracts and a scalable blockchain architecture are suggested to improve supply chain management with real-time transaction capabilities. Through an examination of successful and failed cases in the blockchain domain, the study suggests a risk-mitigation framework, followed by a strategic roadmap for organizations to navigate the bottlenecks towards adoption. The proposed solutions support economic feasibility, meet international standards, and allow for a more resilient supply chain, making blockchain a viable answer to some of the pressures modern logistics faces.
Journal Article