Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
44 result(s) for "Kuznetsov, Oleksandr"
Integrating Non-Positional Numbering Systems into E-Commerce Platforms: A Novel Approach to Enhance System Fault Tolerance
by Krasnobayev, Victor; Kuznetsov, Oleksandr
in Chinese remainder theorem; Data integrity; e-commerce platform architecture
2023
In the dynamic landscape of electronic commerce, the robustness of platforms is a critical determinant of operational continuity and trustworthiness, necessitating innovative approaches to fault tolerance. This study pioneers an advanced strategy for enhancing fault tolerance in e-commerce systems, utilizing non-positional numbering systems (NPNS) inspired by the mathematical robustness of the Chinese Remainder Theorem (CRT). Traditional systems rely heavily on positional numbering, which, despite its ubiquity, harbors limitations in flexibility and resilience against computational errors and system faults. In contrast, NPNS, characterized by their independence, equitability, and residue independence, introduce a transformative potential for system architecture, significantly increasing resistance to disruptions and computational inaccuracies. Our discourse extends beyond theoretical implications, delving into practical applications within contemporary e-commerce platforms. We introduce and elaborate on new terminologies, concepts, and a sophisticated classification system for fault-tolerance mechanisms within the framework of NPNS. This nuanced approach not only consolidates understanding but also identifies underexplored pathways for resilience in digital commerce infrastructure. Furthermore, this research highlights the empirical significance of adopting NPNS, offering a methodologically sound and innovative avenue to safeguard against system vulnerabilities. By integrating NPNS, platforms can achieve enhanced levels of redundancy and fault tolerance, essential for maintaining operational integrity in the face of unforeseen system failures. This integration signals a paradigm shift, emphasizing proactive fault mitigation strategies over reactive measures. In conclusion, this study serves as a seminal reference point for subsequent scholarly endeavors, advocating for a shift towards NPNS in e-commerce platforms.
The practical adaptations suggested herein are poised to redefine stakeholders’ approach to system reliability, instigating a new era of confidence in e-commerce engagements.
Journal Article
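The residue (non-positional) representation behind this abstract can be sketched with the Chinese Remainder Theorem. The moduli, the single redundant modulus, and the range-based fault check below are illustrative assumptions for a minimal sketch, not the authors' implementation:

```python
from math import prod

def encode(value, moduli):
    """Represent value non-positionally as its residues modulo each base."""
    return [value % m for m in moduli]

def crt_reconstruct(residues, moduli):
    """Recombine residues into an integer via the Chinese Remainder Theorem."""
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return total % M

def detect_fault(residues, moduli, k):
    """With k information moduli plus redundant ones, any legal value must
    reconstruct inside the information range; a value outside it signals
    that some residue was corrupted."""
    return crt_reconstruct(residues, moduli) >= prod(moduli[:k])

moduli = [3, 5, 7, 11]            # three information moduli + one redundant
code = encode(52, moduli)         # -> [1, 2, 3, 8]
assert not detect_fault(code, moduli, 3)
code[1] = (code[1] + 1) % 5       # simulate a single corrupted residue
assert detect_fault(code, moduli, 3)
```

Because each residue is computed independently, a fault in one channel never propagates to the others — the redundancy that makes the representation fault-tolerant.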
Enhancing Steganography Detection with AI: Fine-Tuning a Deep Residual Network for Spread Spectrum Image Steganography
by Kuznetsova, Kateryna; Frontoni, Emanuele; Kuznetsov, Oleksandr
in Adaptability; Algorithms; Artificial intelligence
2024
This paper presents an extensive investigation into the application of artificial intelligence, specifically Convolutional Neural Networks (CNNs), in image steganography detection. We initially evaluated the state-of-the-art steganalysis model, SRNet, on various image steganography techniques, including WOW, HILL, S-UNIWARD, and the innovative Spread Spectrum Image Steganography (SSIS). We found SRNet’s performance on SSIS detection to be lower compared to other methods, prompting us to fine-tune the model using SSIS datasets. Subsequent experiments showed significant improvement in SSIS detection, albeit at the cost of minor performance degradation on other techniques. Our findings underscore the potential and adaptability of AI-based steganalysis models. However, they also highlight the need for a delicate balance in model adaptation to maintain effectiveness across various steganography techniques. We suggest future research directions, including multi-task learning strategies and other machine learning techniques, to further improve the robustness and versatility of steganalysis models.
Journal Article
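The spread-spectrum embedding that SSIS is built on can be sketched in a few lines: one bit is spread over the whole signal with a key-seeded ±1 pseudo-noise sequence. All parameters here are illustrative, and real SSIS detection is blind (it estimates the cover via image restoration), whereas this sketch uses an informed detector for brevity:

```python
import random

def embed_bit(pixels, bit, key, gain=4):
    """Spread a single message bit across all pixels using a key-seeded
    ±1 pseudo-noise (PN) sequence — the spread-spectrum idea behind SSIS."""
    rng = random.Random(key)
    chips = [rng.choice((-1, 1)) for _ in pixels]
    sign = 1 if bit else -1
    return [p + sign * gain * c for p, c in zip(pixels, chips)]

def extract_bit(stego, cover, key):
    """Correlate the embedding residual with the same PN sequence; the
    sign of the correlation recovers the bit. (An informed detector —
    one that is given the cover — keeps the sketch short.)"""
    rng = random.Random(key)
    chips = [rng.choice((-1, 1)) for _ in stego]
    return sum((s - p) * c for s, p, c in zip(stego, cover, chips)) > 0

rng = random.Random(0)
cover = [rng.randrange(256) for _ in range(1024)]   # stand-in "image"
assert extract_bit(embed_bit(cover, 1, key=42), cover, key=42) is True
assert extract_bit(embed_bit(cover, 0, key=42), cover, key=42) is False
```

The low per-pixel gain is what makes SSIS statistically subtle and hard for generic steganalysis models to detect — the weakness that motivated fine-tuning SRNet on SSIS data.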
Enhancing Smart Communication Security: A Novel Cost Function for Efficient S-Box Generation in Symmetric Key Cryptography
by Frontoni, Emanuele; Kuznetsov, Oleksandr; Poluyanenko, Nikolay
in Algebra; Algorithms; Benchmarks
2024
In the realm of smart communication systems, where the ubiquity of 5G/6G networks and IoT applications demands robust data confidentiality, the cryptographic integrity of block and stream cipher mechanisms plays a pivotal role. This paper focuses on the enhancement of cryptographic strength in these systems through an innovative approach to generating substitution boxes (S-boxes), which are integral in achieving confusion and diffusion properties in substitution–permutation networks. These properties are critical in thwarting statistical, differential, linear, and other forms of cryptanalysis, and are equally vital in pseudorandom number generation and cryptographic hashing algorithms. The paper addresses the challenge of rapidly producing random S-boxes with desired cryptographic attributes, a task notably arduous given the complexity of existing generation algorithms. We delve into the hill climbing algorithm, exploring various cost functions and their impact on computational complexity for generating S-boxes with a target nonlinearity of 104. Our contribution lies in proposing a new cost function that markedly reduces the generation complexity, bringing down the iteration count to under 50,000 for achieving the desired S-box. This advancement is particularly significant in the context of smart communication environments, where the balance between security and performance is paramount.
Journal Article
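The hill-climbing loop the abstract describes can be sketched on a toy 4-bit S-box (the paper targets 8-bit S-boxes with nonlinearity 104). The surrogate cost function below is an assumption standing in for the paper's proposed one, which is not reproduced here:

```python
import random

N = 4  # S-box bit width, kept small so the sketch runs instantly

def walsh_spectrum(sbox):
    """Walsh coefficients of every nonzero component function b·S(x)."""
    return [sum((-1) ** (bin(a & x).count("1") + bin(b & sbox[x]).count("1"))
                for x in range(1 << N))
            for b in range(1, 1 << N) for a in range(1 << N)]

def nonlinearity(sbox):
    """Distance to the nearest affine function: 2^(N-1) - max|W|/2."""
    return (1 << (N - 1)) - max(abs(w) for w in walsh_spectrum(sbox)) // 2

def cost(sbox):
    # Smooth surrogate rewarding a flat Walsh spectrum — a stand-in for
    # the paper's proposed cost function.
    return sum(w ** 4 for w in walsh_spectrum(sbox))

def hill_climb(iters=300, seed=7):
    """Hill climbing over output swaps (swaps keep the S-box bijective)."""
    rng = random.Random(seed)
    sbox = list(range(1 << N))
    rng.shuffle(sbox)
    best = cost(sbox)
    for _ in range(iters):
        i, j = rng.sample(range(1 << N), 2)
        sbox[i], sbox[j] = sbox[j], sbox[i]
        c = cost(sbox)
        if c <= best:
            best = c                              # keep improving swap
        else:
            sbox[i], sbox[j] = sbox[j], sbox[i]   # revert worsening swap
    return sbox

PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
assert nonlinearity(PRESENT) == 4                 # known-optimal 4-bit S-box
assert sorted(hill_climb()) == list(range(16))    # result stays a permutation
```

The choice of cost function is exactly what the paper optimizes: a smoother cost gives the climber a gradient toward flat spectra, which is what drives the iteration count down.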
Implementation of Kolmogorov–Arnold Networks for Efficient Image Processing in Resource-Constrained Internet of Things Devices
by Nurpeisova, Ardak; Kuznetsov, Oleksandr; Shaushenova, Anargul
in Accuracy; Approximation; Architecture
2025
This research investigates the implementation of Kolmogorov–Arnold networks (KANs) for image processing in resource-constrained IoT devices. KANs represent a novel neural network architecture that offers significant advantages over traditional deep learning approaches, particularly in applications where computational resources are limited. Our study demonstrates the efficiency of KAN-based solutions for image analysis tasks in IoT environments, providing comparative performance metrics against conventional convolutional neural networks. The experimental results indicate substantial improvements in processing speed and memory utilization while maintaining competitive accuracy. This work contributes to the advancement of AI-driven IoT applications by proposing optimized KAN-based implementations suitable for edge computing scenarios. The findings have important implications for IoT deployment in smart infrastructure, environmental monitoring, and industrial automation where efficient image processing is critical.
Journal Article
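The structural idea of a KAN — learnable univariate functions on the edges instead of scalar weights — can be sketched minimally. The piecewise-linear interpolation below is an assumed cheap stand-in for the B-spline activations used in actual KAN implementations:

```python
import bisect

def phi(grid, values, x):
    """A learnable univariate edge function stored as values at grid
    knots, evaluated by linear interpolation between them."""
    x = min(max(x, grid[0]), grid[-1])               # clamp into the grid
    k = min(bisect.bisect_right(grid, x), len(grid) - 1)
    t = (x - grid[k - 1]) / (grid[k] - grid[k - 1])
    return values[k - 1] * (1 - t) + values[k] * t

def kan_layer(x, edges, grid):
    """One KAN layer: each output sums its own univariate function of
    each input — the activations live on edges, not on nodes."""
    return [sum(phi(grid, edges[i][j], xj) for j, xj in enumerate(x))
            for i in range(len(edges))]

grid = [-1.0, 0.0, 1.0]
# One output neuron over two inputs: phi_1 ≈ |x|, phi_2(x) = x.
edges = [[[1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]]]
y = kan_layer([-0.5, 0.25], edges, grid)
assert abs(y[0] - 0.75) < 1e-9    # |-0.5| + 0.25
```

For edge devices the appeal is that each edge is a small lookup-and-interpolate table, so a forward pass needs no large dense matrix multiplications.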
Dataset Dependency in CNN-Based Copy-Move Forgery Detection: A Multi-Dataset Comparative Analysis
by Arnesano, Marco; Dell’Olmo, Potito Valle; Randieri, Cristian
in Accuracy; Algorithms; Artificial intelligence
2025
Convolutional neural networks (CNNs) have established themselves over time as a fundamental tool in the field of copy-move forgery detection due to their ability to effectively identify and analyze manipulated images. Unfortunately, copy-move forgeries remain a persistent challenge in digital image forensics, underlining the importance of ensuring the integrity of digital visual content. In this study, we present a systematic evaluation of the performance of a convolutional neural network (CNN) specifically designed for copy-move manipulation detection, applied to three datasets widely used in the literature in the context of digital forensics: CoMoFoD, Coverage, and CASIA v2. Our experimental analysis highlighted a significant variability of the results, with an accuracy ranging from 95.90% on CoMoFoD to 27.50% on Coverage. This inhomogeneity has been attributed to specific structural factors of the datasets used, such as the sample size, the degree of imbalance between classes, and the intrinsic complexity of the manipulations. We also investigated different regularization techniques and data augmentation strategies to understand their impact on the network performance, finding that adopting the L2 penalty and reducing the learning rate led to an accuracy increase of up to 2.5% for CASIA v2, while on CoMoFoD we recorded a much more modest impact (1.3%). Similarly, we observed that data augmentation was able to improve performance on large datasets but was ineffective on smaller ones. Our results challenge the idea of universal generalizability of CNN architectures in the context of copy-move forgery detection, highlighting instead how performance is strictly dependent on the intrinsic characteristics of the dataset under consideration.
Finally, we propose a series of operational recommendations for optimizing the training process, the choice of the dataset, and the definition of robust evaluation protocols aimed at guiding the development of detection systems that are more reliable and generalizable.
Journal Article
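For contrast with the CNN approach the paper evaluates, the classic block-matching baseline for copy-move detection fits in a few lines: index every pixel block by its content and report duplicated locations. The toy image and block size are illustrative:

```python
from collections import defaultdict

def copy_move_regions(img, block=2):
    """Classic (non-CNN) copy-move baseline: index every block of pixels
    by its content and report coordinate groups that occur more than once."""
    h, w = len(img), len(img[0])
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = tuple(tuple(img[y + dy][x + dx] for dx in range(block))
                          for dy in range(block))
            seen[patch].append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]

# Toy 4x6 "image": the 2x2 patch [[9, 8], [7, 6]] is pasted at two places.
img = [[ 9,  8,  2,  3,  4,  5],
       [ 7,  6, 10, 11, 12, 13],
       [14, 15, 16, 17,  9,  8],
       [18, 19, 20, 21,  7,  6]]
assert copy_move_regions(img) == [[(0, 0), (2, 4)]]
```

Exact matching breaks as soon as the pasted region is rescaled, rotated, or recompressed — precisely the robustness gap that motivates learned CNN detectors, and, per this paper, makes their dataset dependency worth measuring.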
Machine Learning Analytics for Blockchain-Based Financial Markets: A Confidence-Threshold Framework for Cryptocurrency Price Direction Prediction
by Kostenko, Oleksii; Kuznetsov, Oleksandr; Klymenko, Kateryna
in Accuracy; algorithmic trading; Blockchain
2025
Blockchain-based cryptocurrency markets present unique analytical challenges due to their decentralized nature, continuous operation, and extreme volatility. Traditional price prediction models often struggle with the binary trade execution problem in these markets. This study introduces a confidence-based classification framework that separates directional prediction from execution decisions in cryptocurrency trading. We develop a neural network system that processes multi-scale market data, combining daily macroeconomic indicators with a high-frequency order book microstructure. The model trains exclusively on directional movements (up versus down) and uses prediction confidence levels to determine trade execution. We evaluate the framework across 11 major cryptocurrency pairs over 12 months. Experimental results demonstrate 82.68% direction accuracy on executed trades with 151.11-basis point average net profit per trade at 11.99% market coverage. Order book features dominate predictive importance (81.3% of selected features), validating the critical role of blockchain microstructure data for short-term price prediction. The confidence-based execution strategy achieves superior risk-adjusted returns compared to traditional classification approaches while providing natural risk management capabilities through selective trade execution. These findings contribute to blockchain technology applications in financial markets by demonstrating how a decentralized market microstructure can be leveraged for systematic trading strategies. The methodology offers practical implementation guidelines for cryptocurrency algorithmic trading while advancing the understanding of machine learning applications in blockchain-based financial systems.
Journal Article
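The core of the confidence-threshold framework — separating the directional prediction from the decision to execute — can be sketched directly. The probabilities, outcomes, and threshold below are toy values, not the paper's data:

```python
def evaluate(probs, outcomes, tau=0.8):
    """Execute a trade only when directional confidence max(p, 1-p)
    clears the threshold tau; report coverage and accuracy on the
    executed subset — the confidence-gated execution idea."""
    executed = [(p, y) for p, y in zip(probs, outcomes) if max(p, 1 - p) >= tau]
    if not executed:
        return 0.0, None
    hits = sum((p >= 0.5) == y for p, y in executed)
    return len(executed) / len(probs), hits / len(executed)

probs    = [0.95, 0.60, 0.10, 0.55, 0.85, 0.40]    # P(price goes up)
outcomes = [True, False, False, True, True, False]  # realized direction
cov, acc = evaluate(probs, outcomes, tau=0.8)
assert cov == 0.5   # 3 of 6 signals were confident enough to trade
assert acc == 1.0   # and all 3 executed trades were directionally correct
```

Raising tau trades market coverage for accuracy on the trades actually taken — the same coverage/accuracy trade-off the paper reports (11.99% coverage at 82.68% direction accuracy).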
Information Diffusion Modeling in Social Networks: A Comparative Analysis of Delay Mechanisms Using Population Dynamics
by Artyshchuk, Iryna; Bakenova, Kamila; Kuznetsov, Oleksandr
in Comparative analysis; Data analysis; Diffusion models
2025
This study presents a comprehensive analysis of information diffusion in social networks with time delay mechanisms. We first analyze real Reddit thread data, identifying limitations in the sample size. To overcome this, we develop synthetic network models with varied structural properties. Our approach tests three delay types (constant, uniform, exponential) across different network structures, using machine learning models to identify key factors influencing information coverage. The results show that spread probability consistently impacts diffusion across all datasets. Gradient Boosting models achieve R² = 0.847 on synthetic data. Random networks with a constant delay mechanism and high spread probability (0.4) maximize coverage. When verified against test data, peak speed time emerges as the strongest predictor (r = 0.995, p < 0.001). Our findings provide practical recommendations for optimizing information spread in social networks and demonstrate the value of integrating real and synthetic data in diffusion modeling.
Journal Article
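A constant-delay diffusion mechanism of the kind the study compares can be simulated in a few lines. The graph, probabilities, and step counts are illustrative, not the paper's synthetic networks:

```python
import random

def simulate(adj, seed_node, p, delay, steps, rng):
    """Discrete-time diffusion with a constant-delay mechanism: a node
    starts transmitting only `delay` steps after it becomes informed,
    then each step informs each uninformed neighbour with probability p."""
    informed_at = {seed_node: 0}
    for t in range(1, steps + 1):
        newly = {}
        for u, t0 in informed_at.items():
            if t - t0 < delay:
                continue          # still silent: delay has not elapsed
            for v in adj[u]:
                if v not in informed_at and v not in newly and rng.random() < p:
                    newly[v] = t
        informed_at.update(newly)
    return len(informed_at) / len(adj)   # final information coverage

# Line graph 0-1-2: with p=1 everything is reached; with p=0, only the seed.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
assert simulate(adj, 0, p=1.0, delay=1, steps=10, rng=random.Random(1)) == 1.0
assert simulate(adj, 0, p=0.0, delay=1, steps=10, rng=random.Random(1)) == 1 / 3
```

Swapping the constant delay for a uniform or exponential draw per node is a one-line change, which is what makes the three mechanisms directly comparable under identical network structures.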
Optimizing Merkle Proof Size Through Path Length Analysis: A Probabilistic Framework for Efficient Blockchain State Verification
by Arnesano, Marco; Kuznetsova, Kateryna; Frontoni, Emanuele
in Algorithms; Blockchain; blockchain scalability
2025
This study addresses a critical challenge in modern blockchain systems: the excessive size of Merkle proofs in state verification, which significantly impacts scalability and efficiency. As highlighted by Ethereum’s founder, Vitalik Buterin, current Merkle Patricia Tries (MPTs) are highly inefficient for stateless clients, with worst-case proofs reaching approximately 300 MB. We present a comprehensive probabilistic analysis of path length distributions in MPTs to optimize proof size while maintaining security guarantees. Our novel mathematical model characterizes the distribution of path lengths in tries containing random blockchain addresses and validates it through extensive computational experiments. The findings reveal logarithmic scaling of average path lengths with respect to the number of addresses, with unprecedented precision in predicting structural properties across scales from 100 to 300 million addresses. The research demonstrates remarkable accuracy, with discrepancies between theoretical and experimental results not exceeding 0.01 across all tested scales. By identifying and verifying the right-skewed nature of path length distributions, we provide critical insights for optimizing Merkle proof generation and size reduction. Our practical implementation guidelines demonstrate potential proof size reductions of up to 70% through optimized path structuring and node layout. This work bridges the gap between theoretical computer science and practical blockchain engineering, offering immediate applications for blockchain client optimization and efficient state-proof generation.
Journal Article
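The logarithmic path-length scaling the study analyzes is easy to observe on a plain binary Merkle tree (a simplification of the Merkle Patricia Tries the paper targets); proof size grows with tree depth, roughly log2 of the number of leaves:

```python
import hashlib
from math import ceil, log2

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Bottom-up binary Merkle tree; an unpaired node is promoted as-is."""
    levels = [[H(l) for l in leaves]]
    while len(levels[-1]) > 1:
        cur, nxt = levels[-1], []
        for i in range(0, len(cur), 2):
            nxt.append(H(cur[i] + cur[i + 1]) if i + 1 < len(cur) else cur[i])
        levels.append(nxt)
    return levels

def prove(levels, idx):
    """Sibling hashes from leaf to root — the Merkle proof."""
    proof = []
    for level in levels[:-1]:
        sib = idx ^ 1
        if sib < len(level):
            proof.append((sib < idx, level[sib]))   # (sibling_is_left, hash)
        idx //= 2
    return proof

def verify(leaf, proof, root):
    h = H(leaf)
    for is_left, sib in proof:
        h = H(sib + h) if is_left else H(h + sib)
    return h == root

leaves = [str(i).encode() for i in range(1000)]
levels = build_tree(leaves)
proof = prove(levels, 123)
assert verify(leaves[123], proof, levels[-1][0])
assert len(proof) == ceil(log2(len(leaves)))   # 10 hashes for 1000 leaves
```

In an MPT each proof node additionally carries up to 16 child hashes plus stored values, which is why worst-case Ethereum proofs balloon to the ~300 MB figure cited and why shortening average paths cuts proof size so effectively.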
Interpretable Predictive Modeling for Educational Equity: A Workload-Aware Decision Support System for Early Identification of At-Risk Students
by Iklassova, Kainizhamal; Tokkuliyeva, Aizhan; Kuznetsov, Oleksandr
in Accountability; accountability in AI; Accuracy
2025
Educational equity and access to quality learning opportunities represent fundamental pillars of sustainable societal development, directly aligned with the United Nations Sustainable Development Goal 4 (Quality Education). Student retention remains a critical challenge in higher education, with early disengagement strongly predicting eventual failure and limiting opportunities for social mobility. While machine learning models have demonstrated impressive predictive accuracy for identifying at-risk students, most systems prioritize performance metrics over practical deployment constraints, creating a gap between research demonstrations and real-world impact for social good. We present an accountable and interpretable decision support system that balances three competing objectives essential for responsible AI deployment: ultra-early prediction timing (day 14 of semester), manageable instructor workload (flagging 15% of students), and model transparency (multiple explanation mechanisms). Using the Open University Learning Analytics Dataset (OULAD) containing 22,437 students across seven modules, we develop predictive models from activity patterns, assessment performance, and demographics observable within two weeks. We compare threshold-based rules, logistic regression (interpretable linear modeling), and gradient boosting (ensemble modeling) using temporal validation where early course presentations train models tested on later cohorts. Results show gradient boosting achieves AUC (Area Under the ROC Curve, measuring discrimination ability) of 0.789 and average precision of 0.722, with logistic regression performing nearly identically (AUC 0.783, AP 0.713), revealing that linear modeling captures most predictive signal and makes interpretability essentially free. 
At our recommended threshold of 0.607, the predictive model flags 15% of students with 84% precision and 35% recall, creating actionable alert lists instructors can manage within normal teaching duties while maintaining accountability for false positives. Calibration analysis confirms that predicted probabilities match observed failure rates, ensuring trustworthy risk estimates. Feature importance modeling reveals that assessment completion and activity patterns dominate demographic factors, providing transparent evidence that behavioral engagement matters more than student background. We implement a complete decision support system generating instructor reports, explainable natural language justifications for each alert, and personalized intervention templates. Our contribution advances responsible AI for social good by demonstrating that interpretable predictive modeling can support equitable educational outcomes when designed with explicit attention to timing, workload, and transparency—core principles of accountable artificial intelligence.
Journal Article
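The workload-aware flagging step — sizing the alert list to a fixed fraction of students rather than a fixed score cutoff — can be sketched directly. The risk scores and outcomes below are hypothetical toy values:

```python
def flag_at_risk(scores, budget=0.15):
    """Flag the top `budget` fraction of students by predicted risk — an
    alert list sized to what instructors can act on, the workload-aware
    idea behind the paper's 15% flagging rate."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    k = max(1, round(budget * len(ranked)))
    return {sid for sid, _ in ranked[:k]}

def precision_recall(flagged, failed):
    tp = len(flagged & failed)
    return tp / len(flagged), tp / len(failed)

# Hypothetical risk scores for 20 students, and the ones who actually failed.
scores = {f"s{i}": p for i, p in enumerate(
    [0.95, 0.91, 0.88, 0.72, 0.40, 0.35, 0.30, 0.22, 0.18, 0.15,
     0.12, 0.10, 0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03, 0.02])}
failed = {"s0", "s1", "s3", "s7"}
flagged = flag_at_risk(scores, budget=0.15)   # 3 of 20 students flagged
prec, rec = precision_recall(flagged, failed)
assert flagged == {"s0", "s1", "s2"}
assert (prec, rec) == (2 / 3, 0.5)
```

Fixing the budget rather than the threshold keeps instructor workload constant across cohorts; the score cutoff that achieves the budget (0.607 in the paper) then falls out of the score distribution.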
Evaluating the Security of Merkle Trees: An Analysis of Data Falsification Probabilities
by Kanonik, Dzianis; Kuznetsova, Kateryna; Kuznetsov, Oleksandr
in Approximation; Blockchain; Communication
2024
Addressing the critical challenge of ensuring data integrity in decentralized systems, this paper delves into the underexplored area of data falsification probabilities within Merkle Trees, which are pivotal in blockchain and Internet of Things (IoT) technologies. Despite their widespread use, a comprehensive understanding of the probabilistic aspects of data security in these structures remains a gap in current research. Our study aims to bridge this gap by developing a theoretical framework to calculate the probability of data falsification, taking into account various scenarios based on the length of the Merkle path and hash length. The research progresses from the derivation of an exact formula for falsification probability to an approximation suitable for cases with significantly large hash lengths. Empirical experiments validate the theoretical models, exploring simulations with diverse hash lengths and Merkle path lengths. The findings reveal a decrease in falsification probability with increasing hash length and an inverse relationship with longer Merkle paths. A numerical analysis quantifies the discrepancy between exact and approximate probabilities, underscoring the conditions for the effective application of the approximation. This work offers crucial insights into optimizing Merkle Tree structures for bolstering security in blockchain and IoT systems, achieving a balance between computational efficiency and data integrity.
Journal Article
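The exact-versus-approximate comparison the abstract describes can be illustrated with a toy independence model (an assumption here, not the paper's derived formulas): if each of the L nodes on a Merkle path independently admits a hash collision with probability 2^-b, the exact probability and its first-order approximation diverge only when the hash is short:

```python
def exact_p(b, L):
    """Toy model: P(at least one of L path nodes collides), with each
    node colliding independently with probability 2^-b."""
    return 1 - (1 - 2.0 ** -b) ** L

def approx_p(b, L):
    """First-order (union-bound) approximation L * 2^-b, suitable when
    2^-b is tiny, i.e. for significantly large hash lengths."""
    return L * 2.0 ** -b

# The discrepancy shrinks rapidly as the hash length b grows...
gaps = [abs(exact_p(b, 10) - approx_p(b, 10)) for b in (4, 8, 16, 32)]
assert gaps == sorted(gaps, reverse=True)
# ...and longer hashes drive the falsification probability itself down.
assert exact_p(16, 10) < exact_p(8, 10)
```

At realistic hash lengths (b = 256) the two expressions are numerically indistinguishable, which is the regime where the paper's approximation applies.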