182 results for "Chao, Han‐Chieh"
Big data analytics: a survey
The age of big data has arrived, but traditional data analytics may not be able to handle such large quantities of data. The questions that now arise are how to develop a high-performance platform to analyze big data efficiently and how to design appropriate mining algorithms to extract useful information from it. To examine these questions in depth, this paper begins with a brief introduction to data analytics, followed by a discussion of big data analytics. Important open issues and further research directions for the next steps of big data analytics are also presented.
Condensation of Data and Knowledge for Network Traffic Classification: Techniques, Applications, and Open Issues
The accurate and efficient classification of network traffic, including malicious traffic, is essential for effective network management, cybersecurity, and resource optimization. However, traffic classification methods in modern, complex, and dynamic networks face significant challenges, particularly at the network edge, where resources are limited and issues such as privacy concerns and concept drift arise. Condensation techniques offer a solution by reducing the data size, simplifying complex models, and transferring knowledge from traffic data. This paper explores data and knowledge condensation methods—such as coreset selection, data compression, knowledge distillation, and dataset distillation—within the context of traffic classification tasks. It clarifies the relationship between these techniques and network traffic classification, introducing each method and its typical applications. This paper also outlines potential scenarios for applying each condensation technique, highlighting the associated challenges and open research issues. To the best of our knowledge, this is the first comprehensive summary of condensation techniques specifically tailored for network traffic classification tasks.
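Of the condensation techniques this abstract lists, knowledge distillation is the easiest to illustrate compactly. The sketch below is a generic distillation loss, not anything taken from the paper: it mixes a hard-label cross-entropy with a softened teacher-matching term, and the temperature `T` and mixing weight `alpha` are arbitrary illustrative choices.

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hard cross-entropy plus a softened KL term against the teacher.

    T flattens both distributions so the student sees the teacher's
    relative confidences; alpha balances the two terms.
    """
    p_student = softmax(student_logits)
    hard = -np.log(p_student[np.arange(len(labels)), labels]).mean()
    ps_T = softmax(student_logits, T)
    pt_T = softmax(teacher_logits, T)
    soft = (pt_T * (np.log(pt_T) - np.log(ps_T))).sum(axis=-1).mean() * T * T
    return alpha * hard + (1 - alpha) * soft

# Toy batch of two samples, two classes.
logits_s = np.array([[2.0, 0.5], [0.2, 1.5]])
logits_t = np.array([[3.0, 0.0], [0.0, 2.0]])
print(distillation_loss(logits_s, logits_t, np.array([0, 1])))
```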
Context-Aware Trust and Reputation Routing Protocol for Opportunistic IoT Networks
In opportunistic IoT (OppIoT) networks, non-cooperative nodes present a significant challenge to the data forwarding process, leading to increased packet loss and communication delays. This paper proposes a novel Context-Aware Trust and Reputation Routing (CATR) protocol for opportunistic IoT networks, which leverages the probability density function of the beta distribution, together with contextual factors, to dynamically compute the trust and reputation values of nodes. This enables efficient data dissemination in which malicious nodes are effectively identified and bypassed. Simulation experiments using the ONE simulator show that CATR outperforms the Epidemic protocol, the beta-based trust and reputation evaluation system (BTRES), and the secure and privacy-preserving structure in opportunistic networks (PPHB+), achieving improvements of 22%, 15%, and 9% in average latency, number of messages dropped, and average hop count, respectively, under varying numbers of nodes, buffer sizes, times to live, and message generation intervals.
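The abstract does not spell out the trust computation, but beta-distribution trust schemes of the kind CATR builds on typically score a node by the mean of a Beta posterior over its observed forwarding behavior. The following sketch is a hedged illustration of that general idea; the function name, the uniform prior, and the single `context_weight` factor standing in for CATR's contextual factors are all assumptions.

```python
def beta_trust(successes: int, failures: int, context_weight: float = 1.0) -> float:
    """Expected value of a Beta(alpha, beta) distribution used as a trust score.

    alpha and beta start at 1 (a uniform prior) and grow with observed good
    and bad forwarding behavior; context_weight is a hypothetical stand-in
    for contextual factors such as buffer occupancy or encounter frequency.
    """
    alpha = successes + 1
    beta = failures + 1
    return context_weight * alpha / (alpha + beta)

# A node seen forwarding 8 of 10 messages scores high ...
print(beta_trust(8, 2))   # 0.75
# ... while a mostly dropping node is flagged for bypassing.
print(beta_trust(1, 9))   # ~0.17
```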
Explainable Learning-Based Timeout Optimization for Accurate and Efficient Elephant Flow Prediction in SDNs
Accurately and efficiently predicting elephant flows (elephants) is crucial for optimizing network performance and resource utilization. Current prediction approaches for software-defined networks (SDNs) typically rely on complete traffic and statistics being moved from switches to controllers, which occupies extra control-channel bandwidth and adds network delay. To address this issue, this paper proposes a prediction strategy based on incomplete traffic sampled by the timeouts for the installation or reactivation of flow entries. The strategy assigns a very short hard timeout (T_initial) to flow entries and then increases it at a rate r until flows are identified as elephants or reach the end of their lifespans. Predicted elephants are switched to an idle timeout of 5 s. Logistic regression is used to model elephants on a complete dataset, and Bayesian optimization is then used to tune T_initial and r of the trained model over the incomplete dataset. The process of feature selection, model learning, and optimization is explained. An extensive evaluation shows that the proposed approach achieves over 90% generalization accuracy on 7 different datasets, including campus, backbone, and Internet of Things (IoT) traffic. Elephants can be correctly predicted for about half of their lifetime. The proposed approach can significantly reduce controller–switch interaction in campus and IoT networks, although packet completion approaches may need to be applied in networks with a short mean packet inter-arrival time.
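The timeout escalation itself is simple enough to sketch. Below is a hedged reading of the strategy described above: each expiry of a flow entry yields a feature sample, the hard timeout grows geometrically at rate r from T_initial, and a positive prediction switches the flow to the 5 s idle timeout. The default parameter values and the toy predictor are illustrative assumptions, not values from the paper.

```python
def schedule_timeouts(features_per_expiry, predict_elephant,
                      t_initial=0.1, r=2.0, idle_timeout=5.0):
    """Return the sequence of timeouts assigned to one flow entry.

    Starts with a short hard timeout t_initial and multiplies it by r each
    time the entry expires, until predict_elephant (the trained model) flags
    the flow; the flagged elephant then gets the idle timeout.
    """
    timeouts, hard = [], t_initial
    for features in features_per_expiry:       # one feature sample per expiry
        if predict_elephant(features):
            timeouts.append(("idle", idle_timeout))
            break
        timeouts.append(("hard", hard))
        hard *= r                              # escalate the sampling window
    return timeouts

# Toy run: bytes seen per expiry, flagged as an elephant on the third sample.
samples = [10, 40, 900]
print(schedule_timeouts(samples, lambda s: s > 500))
# [('hard', 0.1), ('hard', 0.2), ('idle', 5.0)]
```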
Collaborative Sensing-Aware Task Offloading and Resource Allocation for Integrated Sensing-Communication- and Computation-Enabled Internet of Vehicles (IoV)
Integrated Sensing, Communication, and Computation (ISCC) has become a key technology driving the development of the Internet of Vehicles (IoV) by enabling real-time environmental sensing, low-latency communication, and collaborative computing. However, the growing volume of sensing data in the IoV demands fast data transmission under limited communication resources. To address this issue, we propose a Collaborative Sensing-Aware Task Offloading (CSTO) mechanism for ISCC that reduces the transmission delay of sensing tasks. We formulate a joint task offloading and communication resource allocation optimization problem to minimize the total processing delay of all vehicular sensing tasks. To solve this mixed-integer nonlinear programming (MINLP) problem, we design a two-stage iterative optimization algorithm that decomposes the original problem into a task offloading subproblem and a resource allocation subproblem, which are solved iteratively. In the first stage, a deep reinforcement learning algorithm determines task offloading decisions based on the initial setting. In the second stage, a convex optimization algorithm allocates communication bandwidth according to the current task offloading decisions. We conduct simulation experiments varying several crucial parameters, and the results demonstrate the superiority of our scheme over other benchmark schemes.
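As a minimal sketch of the two-stage alternation described above, with the DRL policy, the convex bandwidth solver, and the delay objective passed in as placeholder callables, and with the initialization and stopping rule being our own assumptions:

```python
def csto_loop(tasks, drl_policy, solve_bandwidth, total_delay,
              max_iters=50, tol=1e-3):
    """Alternate offloading decisions (DRL) and bandwidth allocation (convex).

    drl_policy(tasks, bw)       -> offloading decision per task
    solve_bandwidth(tasks, off) -> bandwidth share per task
    total_delay(tasks, off, bw) -> scalar objective being minimized
    """
    bw = [1.0 / len(tasks)] * len(tasks)        # equal initial shares (assumed)
    prev, off = float("inf"), None
    for _ in range(max_iters):
        off = drl_policy(tasks, bw)             # stage 1: task offloading
        bw = solve_bandwidth(tasks, off)        # stage 2: resource allocation
        delay = total_delay(tasks, off, bw)
        if prev - delay < tol:                  # stop when improvement stalls
            break
        prev = delay
    return off, bw
```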
Emotion classification based on brain wave: a survey
Brain wave emotion analysis is currently the most novel method of emotion analysis. With the progress of brain science, it has been established that human emotions are produced by the brain, and many brain-wave-based emotion applications have appeared as a result. However, the complexity of human emotion makes brain wave emotion analysis difficult. Many researchers have applied different classification methods and proposed new ones for classifying brain wave emotions. In this paper, we survey the existing methods of brain wave emotion classification and describe the various classification methods.
Kernel mixture model for probability density estimation in Bayesian classifiers
Estimating reliable class-conditional probabilities is a prerequisite for implementing Bayesian classifiers, and how to estimate probability density functions (PDFs) is also a fundamental problem for other probabilistic induction algorithms. The finite mixture model (FMM) can represent arbitrarily complex PDFs by using a mixture of multimodal distributions, but it assumes that the mixture components follow a given distribution, which may not hold for real-world data. This paper presents a non-parametric kernel mixture model (KMM) based probability density estimation approach, in which the data sample of a class is assumed to be drawn from several unknown, independent hidden subclasses. Unlike traditional FMM schemes, we simply use the k-means clustering algorithm to partition the data sample into several independent components, and the regional density diversities of the components are combined using Bayes' theorem. On the basis of the proposed kernel mixture model, we present a three-step Bayesian classifier comprising partitioning, structure learning, and PDF estimation. Experimental results show that KMM improves the quality of estimated PDFs over the conventional kernel density estimation (KDE) method, and that KMM-based Bayesian classifiers outperform existing Gaussian, GMM, and KDE-based Bayesian classifiers.
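The partition-then-estimate idea is straightforward to sketch. The snippet below uses scikit-learn's KMeans and KernelDensity for convenience; the fixed bandwidth, the choice of component count, and the weighting of each component by its cluster size are simplifying assumptions, and the paper's actual estimator and combination rule may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

def fit_kmm(X, n_components=3, bandwidth=0.5):
    """Fit a kernel mixture model: k-means partition, then per-component KDE."""
    labels = KMeans(n_clusters=n_components, n_init=10).fit_predict(X)
    return [(np.mean(labels == k),                      # component weight
             KernelDensity(bandwidth=bandwidth).fit(X[labels == k]))
            for k in range(n_components)]

def kmm_density(parts, X):
    """Mixture density: weighted sum of component KDE densities."""
    return sum(w * np.exp(kde.score_samples(X)) for w, kde in parts)

# Toy usage: two well-separated blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
parts = fit_kmm(X, n_components=2)
print(kmm_density(parts, np.array([[0.0, 0.0], [5.0, 5.0]])))
```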
Personalized Federated Learning Algorithm with Adaptive Clustering for Non-IID IoT Data Incorporating Multi-Task Learning and Neural Network Model Characteristics
The proliferation of IoT devices has led to an unprecedented integration of machine learning techniques, raising concerns about data privacy. Federated learning has been introduced to address these concerns, but practical implementations face challenges including communication costs, data and device heterogeneity, and privacy and security. This paper proposes a personalized federated learning algorithm for non-IID IoT data that incorporates multi-task learning principles and leverages neural network model characteristics. To overcome data heterogeneity, we present a novel clustering algorithm designed specifically for federated learning. Unlike conventional methods that require a predetermined number of clusters, our approach clusters clients automatically, eliminating the need for a fixed cluster specification. Extensive experimentation demonstrates the exceptional performance of the proposed algorithm, particularly in scenarios with specific client distributions. By significantly improving the accuracy of trained models, our approach not only addresses data heterogeneity but also strengthens privacy preservation in federated learning. In sum, we offer a robust solution to the practical challenges of federated learning in IoT environments by combining personalized federated learning, automatic clustering, and neural network model characteristics for more effective and privacy-conscious machine learning on non-IID IoT data.
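The abstract does not describe the clustering rule itself. As a hedged illustration of clustering clients without a predetermined cluster count, the sketch below greedily groups clients whose normalized model updates exceed a cosine-similarity threshold and opens a new cluster otherwise; this threshold rule is our own stand-in, not the paper's algorithm.

```python
import numpy as np

def auto_cluster(client_updates, sim_threshold=0.9):
    """Group clients by similarity of their flattened model updates,
    without fixing the number of clusters in advance."""
    clusters, centroids = [], []           # member indices / mean updates
    for i, u in enumerate(client_updates):
        u = u / (np.linalg.norm(u) + 1e-12)
        best, best_sim = None, sim_threshold
        for c, centroid in enumerate(centroids):
            sim = float(u @ centroid / (np.linalg.norm(centroid) + 1e-12))
            if sim > best_sim:             # most similar cluster above threshold
                best, best_sim = c, sim
        if best is None:                   # no match: open a new cluster
            clusters.append([i])
            centroids.append(u.copy())
        else:                              # match: join and update running mean
            clusters[best].append(i)
            n = len(clusters[best])
            centroids[best] = (centroids[best] * (n - 1) + u) / n
    return clusters

ups = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
print(auto_cluster(ups))   # [[0, 1], [2]]
```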
Asynchronous Federated Learning for Elephant Flow Detection in Software Defined Networking Systems
This paper introduces an Asynchronous Federated Learning (AFL) approach to train an elephant flow model over Software Defined Networking (SDN) systems with distributed controllers. AFL addresses the issues of data privacy and communication overhead in collecting network statistics over large-scale SDN systems. It allows each local controller to train a local model on its own statistics using a decision tree and to upload the local model to the root asynchronously, so that the root controller can aggregate each local model into the global model as soon as it is received, improving time efficiency. AFL weights each local model by its performance when forming the global model. An evaluation based on 5 real packet traces demonstrates that AFL is more accurate than any of the local models and than two classical federated learning approaches.
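A minimal sketch of asynchronous, performance-weighted aggregation under our own assumptions follows: local models arrive one at a time with a reported validation accuracy, and the global model is an accuracy-weighted vote over whatever has arrived so far. The class name and the use of accuracy as the weight are illustrative, and an sklearn-style `predict` is assumed for the local decision trees.

```python
class AsyncAggregator:
    """Root controller that folds in local models as they arrive."""

    def __init__(self):
        self.models = []                     # (model, weight) pairs

    def receive(self, model, accuracy):
        """Called asynchronously whenever a local controller uploads;
        no synchronization barrier is needed across controllers."""
        self.models.append((model, accuracy))

    def predict(self, features):
        """Accuracy-weighted majority vote over the local models seen so far."""
        votes = {}
        for model, w in self.models:
            label = model.predict([features])[0]
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)
```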
RL-BMAC: An RL-Based MAC Protocol for Performance Optimization in Wireless Sensor Networks
Applications of wireless sensor networks have increased significantly in the modern era. These networks operate on a limited power supply in the form of batteries, which are normally difficult to replace on a frequent basis. Sensor nodes alternate between sleep and active states to conserve energy, and duty cycling is among the most commonly used methods. However, duty cycling suffers from problems such as unnecessary idle listening, extra energy consumption, and packet drops. A deep reinforcement learning-based B-MAC protocol (RL-BMAC) is proposed to address these issues. The proposed protocol deploys a deep reinforcement learning agent with fixed hyperparameters to optimize the duty cycling of the nodes. The agent monitors essential parameters such as energy level, packet drop rate, neighboring nodes' status, and preamble sampling, stores this information as a representative state, and adjusts the duty cycling of all nodes. The performance of RL-BMAC is compared to that of conventional B-MAC through extensive simulations. The results indicate that RL-BMAC outperforms B-MAC in throughput by 58.5%, packet drop rate by 44.8%, energy efficiency by 35%, and latency by 26.93%.
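The abstract names the quantities the agent monitors but not its internals. As a hedged sketch, the snippet below shows how those quantities could form the agent's state and how an action nudges the duty cycle; the action set, the duty-cycle bounds, and the reward shape are our assumptions, and the deep network that would choose the action is omitted.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    # Quantities the abstract says the agent monitors.
    energy_level: float        # remaining battery, 0..1
    packet_drop_rate: float    # fraction of packets dropped
    neighbors_awake: int       # status of neighboring nodes
    preamble_samples: int      # preamble sampling activity

ACTIONS = (-0.05, 0.0, +0.05)  # decrease / keep / increase duty cycle (assumed)

def step(duty_cycle: float, state: NodeState, action: float) -> tuple[float, float]:
    """Apply an action and compute an assumed reward that trades off
    energy saving against packet loss and time spent awake."""
    duty_cycle = min(1.0, max(0.05, duty_cycle + action))
    reward = state.energy_level - state.packet_drop_rate - 0.1 * duty_cycle
    return duty_cycle, reward

s = NodeState(energy_level=0.8, packet_drop_rate=0.1,
              neighbors_awake=3, preamble_samples=12)
print(step(0.2, s, ACTIONS[2]))   # (0.25, 0.675)
```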