1,444 results for "Computer Appl. in Administrative Data Processing"
Multi-input CNN-GRU based human activity recognition using wearable sensors
Human Activity Recognition (HAR) has attracted much attention from researchers in the recent past. The intensification of research into HAR lies in the motive to understand human behaviour and inherently anticipate human intentions. Human activity data obtained via wearable sensors such as gyroscopes and accelerometers takes the form of time series data, as each reading has a timestamp associated with it. For HAR, it is important to extract the relevant temporal features from the raw sensor data. Most approaches to HAR involve a substantial amount of feature engineering and data pre-processing, which in turn requires domain expertise. Such approaches are time-consuming and application-specific. In this work, a deep neural network model combining a Convolutional Neural Network and a Gated Recurrent Unit is proposed as an end-to-end model that performs both automatic feature extraction and classification of the activities. The experiments in this work were carried out on raw data obtained from wearable sensors with only nominal pre-processing and do not involve any handcrafted feature extraction techniques. The accuracies obtained on the UCI-HAR, WISDM, and PAMAP2 datasets are 96.20%, 97.21%, and 95.27%, respectively. The results of the experiments establish that the proposed model achieves superior classification performance compared with other similar architectures.
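As a rough illustration of the kind of end-to-end architecture described above, the following is a minimal sketch of per-sensor CNN branches feeding a GRU over windowed sensor data, written with Keras. The window length, channel counts, layer sizes, and six-class output are illustrative assumptions, not the authors' actual configuration.

    # Minimal multi-input CNN-GRU sketch (assumption-based): one CNN branch per
    # sensor, per-timestep features concatenated, then a GRU and a softmax head.
    from tensorflow.keras import layers, models

    def cnn_branch(n_channels, window=128, name="sensor"):
        inp = layers.Input(shape=(window, n_channels), name=name)
        x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inp)
        x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(x)
        return inp, x

    acc_in, acc_feat = cnn_branch(3, name="accelerometer")   # 3-axis accelerometer
    gyr_in, gyr_feat = cnn_branch(3, name="gyroscope")       # 3-axis gyroscope

    merged = layers.Concatenate(axis=-1)([acc_feat, gyr_feat])  # join per-timestep features
    h = layers.GRU(128)(merged)                                  # temporal summary of the window
    out = layers.Dense(6, activation="softmax")(h)               # e.g. the 6 UCI-HAR activities

    model = models.Model(inputs=[acc_in, gyr_in], outputs=out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
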
Customer churn prediction system: a machine learning approach
Customer churn prediction (CCP) is one of the challenging problems in the telecom industry. With advances in machine learning and artificial intelligence, the possibilities for predicting customer churn have increased significantly. Our proposed methodology consists of six phases. In the first two phases, data pre-processing and feature analysis are performed. In the third phase, feature selection is carried out using the gravitational search algorithm. Next, the data is split into training and test sets in a ratio of 80% to 20%. In the prediction process, the most popular predictive models are applied, namely logistic regression, naive Bayes, support vector machine, random forest, and decision trees, on the training set, and boosting and ensemble techniques are applied to observe their effect on model accuracy. In addition, K-fold cross-validation is used on the training set for hyperparameter tuning and to prevent overfitting of the models. Finally, the results obtained on the test set are evaluated using the confusion matrix and the AUC curve. It was found that the AdaBoost and XGBoost classifiers give the highest accuracies of 81.71% and 80.8%, respectively. The highest AUC score of 84% is achieved by both the AdaBoost and XGBoost classifiers, which outperform the other models.
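A minimal scikit-learn sketch of this kind of pipeline follows, assuming a hypothetical numeric dataset file and a "churn" label column. The gravitational search algorithm used for feature selection in the paper is not available in scikit-learn, so a generic univariate selector stands in for it here; the boosting stage is shown with AdaBoost only.

    # Sketch of the described pipeline: split, feature selection stand-in,
    # boosting classifier, K-fold CV on the training set, then test evaluation.
    import pandas as pd
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.metrics import confusion_matrix, roc_auc_score
    from sklearn.pipeline import Pipeline

    df = pd.read_csv("telecom_churn.csv")            # hypothetical path; numeric features assumed
    X, y = df.drop(columns=["churn"]), df["churn"]

    # 80/20 train/test split, as described in the abstract
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

    clf = Pipeline([
        ("select", SelectKBest(f_classif, k=10)),    # stand-in for GSA feature selection
        ("model", AdaBoostClassifier(n_estimators=200)),
    ])

    # K-fold cross-validation on the training set
    print(cross_val_score(clf, X_tr, y_tr, cv=5, scoring="roc_auc").mean())

    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    print(confusion_matrix(y_te, clf.predict(X_te)))
    print("AUC:", roc_auc_score(y_te, proba))
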
Future directions for chatbot research: an interdisciplinary research agenda
Chatbots are increasingly becoming important gateways to digital services and information—taken up within domains such as customer service, health, education, and work support. However, there is only limited knowledge concerning the impact of chatbots at the individual, group, and societal level. Furthermore, a number of challenges remain to be resolved before the potential of chatbots can be fully realized. In response, chatbots have emerged as a substantial research area in recent years. To help advance knowledge in this emerging research area, we propose a research agenda in the form of future directions and challenges to be addressed by chatbot research. This proposal consolidates years of discussions at the CONVERSATIONS workshop series on chatbot research. Following a deliberative research analysis process among the workshop participants, we explore future directions within six topics of interest: (a) users and implications, (b) user experience and design, (c) frameworks and platforms, (d) chatbots for collaboration, (e) democratizing chatbots, and (f) ethics and privacy. For each of these topics, we provide a brief overview of the state of the art, discuss key research challenges, and suggest promising directions for future research. The six topics are detailed with a 5-year perspective in mind and are to be considered items of an interdisciplinary research agenda produced collaboratively by avid researchers in the field.
Artificial intelligence to automate the systematic review of scientific literature
Artificial intelligence (AI) has acquired notable relevance in modern computing as it effectively solves complex tasks traditionally done by humans. AI provides methods to represent and infer knowledge, efficiently manipulate texts, and learn from vast amounts of data. These characteristics are applicable to many activities that humans find laborious or repetitive, as is the case with the analysis of scientific literature. Manually preparing and writing a systematic literature review (SLR) takes considerable time and effort, since it requires planning a strategy, conducting the literature search and analysis, and reporting the findings. Depending on the area under study, the number of papers retrieved can be in the hundreds or thousands, meaning that filtering the relevant ones and extracting the key information becomes a costly and error-prone process. However, some of the tasks involved are repetitive and therefore subject to automation by means of AI. In this paper, we present a survey of AI techniques proposed in the last 15 years to help researchers conduct systematic analyses of scientific literature. We describe the tasks currently supported, the types of algorithms applied, and the available tools proposed in 34 primary studies. This survey also provides a historical perspective on the evolution of the field and the role that humans can play in an increasingly automated SLR process.
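One of the repetitive SLR tasks the survey mentions, screening retrieved papers for relevance, can be illustrated with a simple TF-IDF ranking. This is a generic, assumption-based sketch with made-up abstracts, not one of the 34 surveyed tools, which typically rely on richer features and learning.

    # Rank candidate abstracts by TF-IDF cosine similarity to a review query.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    query = "machine learning for automated systematic literature review screening"
    abstracts = [
        "We apply active learning to prioritise citations for screening ...",
        "A study of coral reef bleaching under ocean temperature rise ...",
    ]
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform([query] + abstracts)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    for score, text in sorted(zip(scores, abstracts), reverse=True):
        print(f"{score:.3f}  {text[:60]}")
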
Nature inspired meta heuristic algorithms for optimization problems
Optimization and decision-making problems in various fields of engineering have a major impact in the current era. Processing time and memory usage are very high for currently available data, owing to its size and the need to scale from zettabytes to yottabytes. Some problems require a solution to be found from scratch, while others require improving the current best solution. Modelling and implementing a new heuristic algorithm may be time-consuming, but it has a strong primary motivation: even a minimal improvement in the solution can reduce the computational cost, and the solution thus obtained is better. In both situations, designing heuristic and meta-heuristic algorithms has proved its worth. Hyper-heuristic approaches are needed to compute solutions with much better time and space complexity; they create a solution by combining heuristics, generating an automated search space from which generalized solutions can be tuned. This paper provides in-depth knowledge of nature-inspired computing models, meta-heuristic models, hybrid meta-heuristic models, and hyper-heuristic models. This work's major contribution is in building a hyper-heuristic approach from a meta-heuristic algorithm for a general problem domain. Various traditional algorithms and new-generation meta-heuristic algorithms are also explained to give readers a better understanding.
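To make the idea of a selection hyper-heuristic concrete, here is a minimal sketch that repeatedly picks one of several low-level move heuristics at random and keeps non-worsening moves. The bit-flip heuristics and the toy "one-max" objective are illustrative assumptions, not the paper's actual problem domain.

    # Minimal selection hyper-heuristic: choose a low-level heuristic at random,
    # apply it, and accept the candidate only if the objective does not get worse.
    import random

    def objective(bits):                      # toy "one-max" objective: count the 1s
        return sum(bits)

    def flip_one(bits):                       # low-level heuristic 1: flip a random bit
        s = bits[:]
        i = random.randrange(len(s))
        s[i] ^= 1
        return s

    def flip_two(bits):                       # low-level heuristic 2: flip two random bits
        return flip_one(flip_one(bits))

    def swap_pair(bits):                      # low-level heuristic 3: swap two positions
        s = bits[:]
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    heuristics = [flip_one, flip_two, swap_pair]
    solution = [random.randint(0, 1) for _ in range(32)]
    for _ in range(1000):
        candidate = random.choice(heuristics)(solution)
        if objective(candidate) >= objective(solution):
            solution = candidate
    print(objective(solution), solution)
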
A survey of word embeddings based on deep learning
Word embeddings are the representational basis for downstream natural language processing tasks; they capture lexical semantics in numerical form to handle the abstract semantic concept of words. Recently, word embedding approaches based on deep learning have attracted extensive attention and are widely used in many tasks, such as text classification, knowledge mining, question answering, smart Internet of Things systems, and so on. These neural-network-based models rest on the distributional hypothesis, whereby the semantic association between words can be efficiently calculated in a low-dimensional space. However, the semantics expressed by most models are constrained by the context distribution of each word in the corpus, while logic and common knowledge are not well utilized. Therefore, how to use massive multi-source data to better represent natural language and world knowledge still needs to be explored. In this paper, we introduce recent advances in neural-network-based word embeddings together with their technical features, summarize the key challenges and existing solutions, and give an outlook on future research and applications.
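The claim that semantic association can be computed efficiently in a low-dimensional space usually amounts to a cosine similarity between embedding vectors. A minimal numpy illustration with made-up (untrained) toy vectors follows.

    # Cosine similarity between toy low-dimensional word vectors.
    import numpy as np

    embeddings = {                     # made-up 4-dimensional vectors, not trained ones
        "king":  np.array([0.8, 0.6, 0.1, 0.0]),
        "queen": np.array([0.7, 0.7, 0.1, 0.1]),
        "apple": np.array([0.0, 0.1, 0.9, 0.4]),
    }

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine(embeddings["king"], embeddings["queen"]))   # high: semantically related
    print(cosine(embeddings["king"], embeddings["apple"]))   # low: unrelated
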
Trust-driven reinforcement selection strategy for federated learning on IoT devices
Federated learning is a distributed machine learning approach that enables a large number of edge/end devices to perform on-device training for a single machine learning model, without having to share their own raw data. We consider in this paper a federated learning scenario wherein the local training is carried out on IoT devices and the global aggregation is done at the level of an edge server. One essential challenge in this emerging approach is IoT device selection (also called scheduling), i.e., how to select the IoT devices to participate in the distributed training process. Existing approaches suggest basing the scheduling decision on the resource characteristics of the devices to guarantee that the selected devices would have enough resources to carry out the training. In this work, we argue that trust should be an integral part of the decision-making process and therefore design a trust establishment mechanism between the edge server and IoT devices. The trust mechanism aims to detect those IoT devices that over-utilize or under-utilize their resources during the local training. Thereafter, we introduce DDQN-Trust, a double deep Q-learning-based selection algorithm that takes into account the trust scores and energy levels of the IoT devices to make appropriate scheduling decisions. Finally, we integrate our solution into four federated learning aggregation approaches, namely FedAvg, FedProx, FedShare and FedSGD. Experiments conducted using a real-world dataset show that our DDQN-Trust solution always achieves better performance compared to two main benchmarks: the DQN and random scheduling algorithms. The results also reveal that FedProx outperforms the competitor aggregation models in terms of accuracy when integrated into our DDQN-Trust solution.
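The core of double deep Q-learning is that the online network selects the next action while the target network evaluates it. The following PyTorch sketch shows only that target computation, with the state assumed (for illustration) to concatenate per-device trust scores and energy levels, the action simplified to picking a single device, and network sizes and rewards invented; it is not the authors' DDQN-Trust implementation.

    # Double-DQN target: the online net picks the action, the target net evaluates it.
    import torch
    import torch.nn as nn

    n_devices = 8                                   # assumed number of IoT devices
    state_dim = 2 * n_devices                       # trust score + energy level per device

    def make_net():
        return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                             nn.Linear(64, n_devices))   # one Q-value per candidate device

    online, target = make_net(), make_net()
    target.load_state_dict(online.state_dict())

    state = torch.rand(32, state_dim)               # batch of states (trust, energy)
    next_state = torch.rand(32, state_dim)
    action = torch.randint(0, n_devices, (32, 1))   # actions taken (from a replay buffer)
    reward = torch.rand(32, 1)                      # e.g. accuracy gain after a round
    gamma = 0.99

    with torch.no_grad():
        best_action = online(next_state).argmax(dim=1, keepdim=True)   # online net selects
        next_q = target(next_state).gather(1, best_action)             # target net evaluates
        td_target = reward + gamma * next_q

    loss = nn.functional.mse_loss(online(state).gather(1, action), td_target)
    loss.backward()   # in practice: optimizer step, and periodic sync of the target net
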
Application of deep reinforcement learning in stock trading strategies and stock forecasting
The role of the stock market within the overall financial market is indispensable. How to acquire practical trading signals during the transaction process so as to maximize returns is a problem that has been studied for a long time. This paper puts forward a deep reinforcement learning approach to stock trading decisions and stock price prediction; the reliability and availability of the model are demonstrated with experimental data, and the model is compared with a traditional model to show its advantages. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the feasibility of deep reinforcement learning in financial markets and the credibility and advantages of strategic decision-making.
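The abstract does not specify its environment, but deep RL trading setups typically frame the problem as a state (recent prices and the current position), a discrete action (sell, hold, buy), and a reward (the one-step return earned by the chosen position). A minimal numpy sketch of such an environment, with synthetic prices, follows; it is a generic illustration, not the paper's model.

    # Minimal single-asset trading environment: actions are 0=sell/flat, 1=hold, 2=buy;
    # the reward is the next-step price return earned while holding a long position.
    import numpy as np

    class TradingEnv:
        def __init__(self, prices, window=10):
            self.prices = np.asarray(prices, dtype=float)
            self.window = window
            self.t = window
            self.position = 0                       # 0 = flat, 1 = long

        def state(self):
            recent = self.prices[self.t - self.window:self.t]
            return np.append(recent / recent[-1], self.position)

        def step(self, action):
            if action == 2:
                self.position = 1                   # buy -> go long
            elif action == 0:
                self.position = 0                   # sell -> go flat
            ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
            reward = ret * self.position            # earn the return only while long
            self.t += 1
            done = self.t >= len(self.prices) - 1
            return self.state(), reward, done

    env = TradingEnv(np.cumsum(np.random.randn(200)) + 100.0)   # synthetic price path
    s, r, done = env.step(2)                                    # buy and observe the reward
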
A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems
In the cloud computing paradigm, energy-efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) with competing allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge, there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy-efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, the techniques already presented in the literature are summarized based on an energy-efficient research-dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research-dimension taxonomy, namely: resource adaptation policy, objective function, allocation method, allocation operation, and interoperability.
Edge computing: current trends, research challenges and future directions
The edge computing (EC) paradigm brings computation and storage to the edge of the network, where data is both consumed and produced. This shift is necessary to cope with the increasing number of network-connected devices and the volume of data transmitted, which the launch of new 5G networks will further expand. The aim is to avoid the high latency and traffic bottlenecks associated with the use of cloud computing in networks where many devices both access and generate high volumes of data. EC also improves network support for mobility, security, and privacy. This paper provides a discussion of EC and summarizes the definition and fundamental properties of the EC architectures proposed in the literature (Multi-access Edge Computing, Fog Computing, Cloudlet Computing, and Mobile Cloud Computing). Subsequently, the paper examines significant use cases for each EC architecture and discusses some promising future research directions.