Catalogue Search | MBRL
Explore the vast range of titles available.
808 result(s) for "Intelligence service Computer network resources."
A comprehensive survey on machine learning for networking: evolution, applications and research opportunities
by Shahriar, Nashid; Caicedo, Oscar M.; Salahuddin, Mohammad A.
in Artificial intelligence; Computer Applications; Computer Communication Networks
2018
Machine Learning (ML) has been enjoying an unprecedented surge in applications that solve problems and enable automation in diverse domains. Primarily, this is due to the explosion in the availability of data, significant improvements in ML techniques, and advances in computing capabilities. Undoubtedly, ML has been applied to various mundane and complex problems arising in network operation and management. There are various surveys on ML for specific areas in networking or for specific network technologies. This survey is original in that it jointly presents the application of diverse ML techniques in various key areas of networking across different network technologies. In this way, readers will benefit from a comprehensive discussion of the different learning paradigms and ML techniques applied to fundamental problems in networking, including traffic prediction, routing and classification, congestion control, resource and fault management, QoS and QoE management, and network security. Furthermore, this survey delineates the limitations, gives insights into research challenges, and identifies future opportunities for advancing ML in networking. It is therefore a timely contribution on the implications of ML for networking, a field that is pushing the boundaries of autonomic network operation and management.
Journal Article
Time series-based workload prediction using the statistical hybrid model for the cloud environment
Resource management is addressed using infrastructure as a service: on demand, the resource management module effectively manages the available resources. Resource management in cloud resource provisioning is aided by prediction of central processing unit (CPU) and memory utilization. Using a hybrid ARIMA–ANN model, this study forecasts future CPU and memory utilization. The range of values discovered is used to make predictions, which is useful for resource management. In the cloud traces, the ARIMA model detects linear components in the CPU and memory utilization patterns. To recognize and magnify nonlinear components in the traces, an artificial neural network (ANN) leverages the residuals derived from the ARIMA model. The resource utilization patterns are predicted using a combination of the linear and nonlinear components. From the predicted values and previous history, the Savitzky–Golay filter derives a range of forecast values. Point-value forecasting may not be the best method for predicting multi-step resource utilization in a cloud setting; the forecasting error can be reduced by introducing a range of values, and we employ the over-estimation rate (OER) and under-estimation rate (UER), as reported by Engelbrecht HA and van Greunen M (in: Network and Service Management (CNSM), 2015 11th International Conference, 2015), to cope with the error produced by over- or under-estimation of CPU and memory utilization. The prediction accuracy is evaluated using statistical analysis on Google's 29-day trace and BitBrain (BB).
Journal Article
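As a rough, hedged illustration of the hybrid scheme the abstract above describes, the Python sketch below fits an ARIMA model to a synthetic CPU-utilization trace, trains a small neural network on the ARIMA residuals to capture nonlinear structure, adds the two forecasts together, and derives a forecast range with a Savitzky–Golay filter. The synthetic trace, model orders, lag length, window sizes, and the way the range is widened are illustrative assumptions rather than the paper's exact method.

```python
# Illustrative hybrid ARIMA + ANN forecaster with a Savitzky-Golay smoothed range.
# Dataset, model orders, lag and window sizes are assumptions for this sketch.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
cpu = 50 + 10 * np.sin(np.arange(300) / 12.0) + rng.normal(0, 2, 300)  # synthetic CPU-usage trace

# 1) ARIMA captures the linear component of the utilization pattern.
arima = ARIMA(cpu, order=(2, 1, 2)).fit()
residuals = arima.resid  # what the linear model could not explain

# 2) An ANN (MLP) learns the nonlinear structure left in the residuals
#    from a short sliding window of past residuals.
lag = 8
X = np.array([residuals[i:i + lag] for i in range(len(residuals) - lag)])
y = residuals[lag:]
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)

# 3) Combined point forecast = ARIMA forecast + ANN residual correction.
steps = 12
linear_fc = arima.forecast(steps=steps)
window = residuals[-lag:].tolist()
nonlinear_fc = []
for _ in range(steps):
    r_hat = ann.predict(np.array(window[-lag:]).reshape(1, -1))[0]
    nonlinear_fc.append(r_hat)
    window.append(r_hat)
point_fc = linear_fc + np.array(nonlinear_fc)

# 4) Turn point forecasts into a range using a Savitzky-Golay smoothed trend,
#    so over-/under-estimation of utilization can be judged against the band.
history = np.concatenate([cpu[-3 * steps:], point_fc])
trend = savgol_filter(history, window_length=11, polyorder=2)[-steps:]
spread = np.std(point_fc - trend)
lower, upper = point_fc - spread, point_fc + spread
print(np.round(point_fc, 1), np.round(lower, 1), np.round(upper, 1), sep="\n")
```

Comparing the actual utilization against the [lower, upper] band is what makes over- and under-estimation rates (OER/UER) straightforward to compute.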
Smart libraries: an emerging and innovative technological habitat of 21st century
2019
Purpose
The purpose of this paper is to discuss the emerging and innovative technologies which integrate together to form smart libraries. Smart libraries are the new generation libraries, which work with the amalgamation of smart technologies, smart users and smart services.
Design/methodology/approach
An extensive review of the literature on "smart libraries" was carried out to identify the emerging technologies in the smart library domain. Clarivate Analytics' Web of Science and Sciverse Scopus were explored initially to ascertain the extent of literature published on smart libraries and their varied aspects. Literature was searched against various keywords such as smart libraries, smart technologies, Internet of Things (IoT), electronic resource management (ERM), data mining, artificial intelligence (AI), ambient intelligence, blockchain technology and augmented reality. Later on, the works citing the literature on smart libraries were also explored to visualize a broad spectrum of emerging concepts about this growing trend in libraries.
Findings
The study confirms that smart libraries are becoming smarter with emerging smart technologies, which enhance their working capabilities and satisfy the users associated with them. Implementing smart technologies in libraries has bridged the gap between the services libraries offer and the rapidly changing and competing needs of their users.
Practical implications
The paper highlights the emerging smart technologies in smart libraries and how they influence the efficiency of libraries in terms of users, services and technological integration.
Originality/value
The paper highlights the current technologies in smart library set-ups that support their efficient working.
Journal Article
Optimizing network bandwidth slicing identification: NADAM-enhanced CNN and VAE data preprocessing for enhanced interpretability
by Alam, Md. Golam Rabiul; Mansoor, Nafees; Hossain, Shahriar
in Accuracy; Algorithms; Artificial Intelligence
2025
Communication networks of the future will rely heavily on network slicing (NS), a technology that enables the creation of distinct virtual networks within a shared physical infrastructure. This capability is critical for meeting the diverse quality of service (QoS) requirements of various applications, from ultra-reliable low-latency communications to massive IoT deployments. To achieve efficient network slicing, intelligent algorithms are essential for optimizing network resources and ensuring QoS. Artificial Intelligence (AI) models, particularly deep learning techniques, have emerged as powerful tools for automating and enhancing network slicing processes. These models are increasingly applied in next-generation mobile and wireless networks, including 5G, IoT infrastructure, and software-defined networking (SDN), to allocate resources and manage network slices dynamically. In this paper, we propose an Interpretable Network Bandwidth Slicing Identification (INBSI) system that leverages a modified Convolutional Neural Network (CNN) architecture with Nesterov-accelerated Adaptive Moment Estimation (NADAM) optimization. Additionally, we use a Variational Autoencoder (VAE) to preprocess the initial data, along with reconstructed data for assessing data validity. The proposed model outperforms the alternatives, reaching a peak accuracy of 84% in the system environment, compared with the k-nearest neighbors algorithm (KNN) at 76%, Random Forest at 69%, BaggingClassifier at 70%, and Gaussian Naive Bayes (GaussianNB) at 55%. The accuracy of additional methods, including Decision Trees, AdaBoost, Deep Neural Forest (DNF), and Multilayer Perceptrons (MLPs), also varies. We utilize two eXplainable Artificial Intelligence (XAI) approaches, Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), to provide insight into the impact of individual input characteristics on the network slicing process. Our work highlights the potential of AI-driven solutions in network slicing, offering insights for operators to optimize resource allocation and enhance future network management.
Journal Article
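The sketch below is only a loose impression of the pipeline the INBSI abstract describes: a small plain autoencoder stands in for the VAE preprocessing and data-validity check, and a 1D CNN classifier is trained with the Nadam optimizer to identify the bandwidth slice. The feature count, the three slice classes, the layer sizes, and the synthetic data are all assumptions made for illustration.

```python
# Hedged sketch: Nadam-optimized 1D CNN slice classifier with an autoencoder
# standing in for the paper's VAE preprocessing; data and shapes are assumed.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_samples, n_features, n_slices = 2000, 16, 3   # assumed: 3 slice classes
rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, n_features)).astype("float32")
y = rng.integers(0, n_slices, size=n_samples)

# Stand-in for the VAE preprocessing: a small autoencoder whose reconstruction
# error serves as a per-sample data-validity score before classification.
ae = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(n_features),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=3, batch_size=64, verbose=0)
recon_error = np.mean((ae.predict(X, verbose=0) - X) ** 2, axis=1)

# 1D CNN over the feature vector, optimized with Nadam as the abstract describes.
cnn = keras.Sequential([
    keras.Input(shape=(n_features, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_slices, activation="softmax"),
])
cnn.compile(optimizer=keras.optimizers.Nadam(learning_rate=1e-3),
            loss="sparse_categorical_crossentropy", metrics=["accuracy"])
cnn.fit(X[..., np.newaxis], y, validation_split=0.2, epochs=3, batch_size=64, verbose=0)
```

Model-agnostic XAI tools such as SHAP or LIME could then be applied to `cnn` to attribute individual predictions to input features, in the spirit of the interpretability analysis the paper reports.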
QoS-Aware Task Offloading in Fog Environment Using Multi-agent Deep Reinforcement Learning
2023
With the surge of intelligent devices, applications of the Internet of Things (IoT) are growing at a rapid pace. As a result, a massive amount of raw data is generated, which must be processed and stored. Standalone IoT devices are not capable of handling such large amounts of data. Hence, to improve performance, users have started to push some jobs to distant cloud data centers, which leads to further complications such as high bandwidth usage, service latency, and energy consumption. Fog computing emerges as a key enabling technology that brings cloud services closer to the end user. However, owing to the unpredictability of tasks and the Quality of Service (QoS) requirements of users, efficient task scheduling and resource allocation mechanisms are needed to balance the demand. To handle the problem efficiently, we have formulated the task offloading problem as a Markov Decision Process (MDP) by considering various user QoS factors, including end-to-end latency, energy consumption, task deadline, and priority. Three different model-free off-policy Deep Reinforcement Learning (DRL) based solutions are outlined to maximize the reward in terms of resource utilization. Finally, extensive experimentation is conducted to validate and compare the efficiency and effectiveness of the proposed mechanisms. Results show that with the proposed method, on average 96.23% of tasks satisfy their deadlines, an 8.25% increase.
Journal Article
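To make the MDP framing above concrete, the toy sketch below treats the offloading target (local device, fog node, or cloud) as the action and folds end-to-end latency, energy consumption, deadline violation, and task priority into a single reward. Every constant, the task model, and the random baseline policy are assumptions; the paper's three model-free off-policy DRL agents would replace the random policy and learn from such rewards.

```python
# Toy MDP for QoS-aware task offloading; all constants and weights are assumed.
import random

ACTIONS = ["local", "fog", "cloud"]                      # where to execute the task
CPU_RATE = {"local": 1.0, "fog": 4.0, "cloud": 10.0}     # relative processing speed
NET_DELAY = {"local": 0.0, "fog": 0.05, "cloud": 0.4}    # offloading/network latency (s)
ENERGY = {"local": 1.0, "fog": 0.3, "cloud": 0.2}        # device-side energy per unit work

def sample_task():
    """A task = (work units, deadline in seconds, priority weight)."""
    return (random.uniform(0.5, 3.0), random.uniform(0.3, 1.5), random.choice([1, 2, 3]))

def step(task, action, w_lat=1.0, w_energy=0.5, w_miss=5.0):
    """One MDP transition: reward for executing `task` at `action`."""
    work, deadline, priority = task
    latency = NET_DELAY[action] + work / CPU_RATE[action]
    energy = ENERGY[action] * work
    missed = latency > deadline
    reward = -(w_lat * latency + w_energy * energy) - (w_miss * priority if missed else 0.0)
    return reward, latency, missed

# Rollout with a naive random policy as a baseline; a DRL agent would instead
# learn to pick the action that maximizes expected reward from the task features.
random.seed(0)
total, misses = 0.0, 0
for _ in range(1000):
    task = sample_task()
    reward, latency, missed = step(task, random.choice(ACTIONS))
    total += reward
    misses += missed
print(f"avg reward {total / 1000:.2f}, deadline misses {misses / 10:.1f}%")
```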
A survey on application of machine learning for Internet of Things
2018
The Internet of Things (IoT) has become an important network paradigm, and many smart devices are now connected through IoT. IoT systems produce massive amounts of data, and thus more and more IoT applications and services are emerging. Machine learning, another important area, has achieved great success in research fields such as computer vision, computer graphics, natural language processing, speech recognition, decision-making, and intelligent control. It has also been introduced into networking research: many studies examine how to utilize machine learning to solve networking problems, including routing, traffic engineering, resource allocation, and security. Recently, there has been a rising trend of employing machine learning to improve IoT applications and provide IoT services such as traffic engineering, network management, security, Internet traffic classification, and quality of service optimization. This survey paper focuses on providing an overview of the application of machine learning in the domain of IoT. We provide a comprehensive survey highlighting recent progress in machine learning techniques for IoT and describe various IoT applications. The application of machine learning to IoT enables users to obtain deep analytics and develop efficient, intelligent IoT applications. This paper differs from previously published survey papers in focus, scope, and breadth; specifically, it emphasizes the application of machine learning to IoT and covers the most recent advances. The paper covers the major applications of machine learning for IoT and the relevant techniques, including traffic profiling, IoT device identification, security, edge computing infrastructure, network management and typical IoT applications. We also discuss research challenges and open issues.
Journal Article
ADAPTIVE6G: Adaptive Resource Management for Network Slicing Architectures in Current 5G and Future 6G Systems
by Beard, Cory; Thantharate, Anurag
in 5G mobile communication; 6G mobile communication; Adaptive learning
2023
Future intelligent wireless networks demand an adaptive learning approach towards a shared learning model to allow collaboration between data generated by network elements and virtualized functions. Current wireless network learning approaches have focused on traditional machine learning (ML) algorithms, which centralize the training data and perform sequential model learning over a large data set. However, training on a large dataset is inefficient; it is time-consuming and neither energy- nor resource-efficient. Transfer Learning (TL) effectively addresses some of these challenges by training on a small data set using pre-trained models for similar problems, without impacting neural network model performance. TL is a technique that applies the knowledge (features, weights) gained from a previously trained ML model to a different but related problem. This work proposes an adaptive learning framework, 'ADAPTIVE6G', a novel network slicing architecture for resource management and load prediction in data-driven Beyond 5G (B5G) and 6G wireless systems that draws on knowledge learned through TL techniques. We evaluated ADAPTIVE6G on complex network load estimation problems to promote a fairer and more uniform distribution of network resources. We demonstrate that the ADAPTIVE6G model can reduce the Mean Squared Error (MSE) by more than 30% and improve the correlation coefficient 'R' by close to 6% while reducing under-provisioned resources.
Journal Article
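The sketch below illustrates the general transfer-learning idea the abstract relies on, not the ADAPTIVE6G framework itself: a load-prediction network is trained on a data-rich slice, its feature layers are then frozen and reused, and only a new output head is fitted on a small dataset from another slice. The synthetic load traces, window length, and layer sizes are assumptions.

```python
# Hedged transfer-learning sketch for per-slice load prediction; data are synthetic.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
window = 24  # past load samples used to predict the next value

def make_load(n):
    """Synthetic hourly network-load series turned into (window -> next value) pairs."""
    t = np.arange(n + window)
    load = 0.5 + 0.4 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.05, n + window)
    X = np.stack([load[i:i + window] for i in range(n)]).astype("float32")
    return X, load[window:window + n].astype("float32")

# Source slice: plenty of data, train a base load predictor from scratch.
Xs, ys = make_load(5000)
base = keras.Sequential([
    keras.Input(shape=(window,)),
    layers.Dense(64, activation="relu", name="shared1"),
    layers.Dense(32, activation="relu", name="shared2"),
    layers.Dense(1, name="head"),
])
base.compile(optimizer="adam", loss="mse")
base.fit(Xs, ys, epochs=5, batch_size=64, verbose=0)

# Target slice: only a small dataset. Freeze the shared feature layers
# (transferring their weights) and fine-tune just a fresh output head.
Xt, yt = make_load(300)
base.get_layer("shared1").trainable = False
base.get_layer("shared2").trainable = False
inputs = keras.Input(shape=(window,))
x = base.get_layer("shared2")(base.get_layer("shared1")(inputs))
outputs = layers.Dense(1)(x)
transfer = keras.Model(inputs, outputs)
transfer.compile(optimizer="adam", loss="mse")
transfer.fit(Xt, yt, epochs=10, batch_size=32, verbose=0)
print("target-slice MSE:", float(transfer.evaluate(Xt, yt, verbose=0)))
```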
A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems
by Hameed, Abdul; Balaji, Pavan; Jayaraman, Prem Prakash
in Allocations; Analysis; Artificial Intelligence
2016
In a cloud computing paradigm, energy-efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge, there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify the open challenges associated with energy-efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, the techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.
Journal Article
Highly Accurate and Reliable Wireless Network Slicing in 5th Generation Networks: A Hybrid Deep Learning Approach
by Mumtaz Shahid; Ullah Zahid; Khan, Sulaiman
in 5G mobile communication; 6G mobile communication; Algorithms
2022
In the current era, next-generation networks such as 5th generation (5G) and 6th generation (6G) networks require high security, low latency, and highly reliable standards and capacity. In these networks, reconfigurable wireless network slicing is considered one of the key elements for 5G and 6G. Reconfigurable slicing allows operators to run various instances of the network on a single infrastructure to deliver better quality of service (QoS). The QoS can be achieved by reconfiguring and optimizing these networks using artificial intelligence and machine learning algorithms. To develop a smart decision-making mechanism for network management and to limit network slice failures, machine learning-enabled reconfigurable wireless network solutions are required. In this paper, we propose a hybrid deep learning model that consists of a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The CNN performs resource allocation, network reconfiguration, and slice selection, while the LSTM captures statistical information (load balancing, error rate, etc.) regarding the network slices. The applicability of the proposed model is validated using multiple unknown devices, slice failure, and overloading conditions. The proposed model achieves an overall accuracy of 95.17%, which reflects its applicability.
Journal Article
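As a minimal, hedged sketch of a hybrid CNN plus LSTM classifier of the kind the abstract describes, the model below runs a Conv1D feature extractor over a window of per-slice KPIs and an LSTM over the resulting sequence before predicting a slice. The KPI window length, the number of slice classes, and the synthetic data are assumptions, and the toy model does not reproduce the paper's split of responsibilities between the CNN and LSTM stages.

```python
# Hedged CNN + LSTM sketch for slice selection from a window of per-slice KPIs.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_kpis, n_slices = 20, 6, 3   # assumed: 20-step KPI window, 3 candidate slices
rng = np.random.default_rng(0)
X = rng.normal(size=(1500, timesteps, n_kpis)).astype("float32")
y = rng.integers(0, n_slices, size=1500)

model = keras.Sequential([
    keras.Input(shape=(timesteps, n_kpis)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),  # feature extraction
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(32),                                                      # temporal slice statistics
    layers.Dense(32, activation="relu"),
    layers.Dense(n_slices, activation="softmax"),                         # slice selection
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=3, batch_size=64, verbose=0)
print(model.predict(X[:1], verbose=0))   # per-slice probabilities for one KPI window
```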