Catalogue Search | MBRL
Explore the vast range of titles available.
49 result(s) for "Yutaka Watanobe"
ChatGPT for Education and Research: Opportunities, Threats, and Strategies
by Rahman, Md. Mostafizer; Watanobe, Yutaka
in Artificial intelligence; ChatGPT; Computational linguistics
2023
In recent years, the rise of advanced artificial intelligence technologies has had a profound impact on many fields, including education and research. One such technology is ChatGPT, a powerful large language model developed by OpenAI. It offers exciting opportunities for students and educators, including personalized feedback, increased accessibility, interactive conversations, lesson preparation, evaluation, and new ways to teach complex concepts. However, ChatGPT also poses threats to the traditional education and research system, including the possibility of cheating on online exams, human-like text generation, diminished critical-thinking skills, and difficulty in evaluating the information it generates. This study explores the potential opportunities and threats that ChatGPT poses to education as a whole, from the perspective of both students and educators. For programming learning in particular, we explore how ChatGPT helps students improve their programming skills. To demonstrate this, we conducted several coding-related experiments with ChatGPT, including code generation from problem descriptions, pseudocode generation of algorithms from text, and code correction. The generated code is validated with an online judge system to evaluate its accuracy. In addition, we conducted several surveys of students and teachers to find out how ChatGPT supports programming learning and teaching. Finally, we present the survey results and analysis.
Journal Article
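The abstract above validates ChatGPT-generated code with an online judge system. As an illustration only, a minimal judge-style harness might look like the following; the function name, verdict codes, and example problem are assumptions, not the system used in the paper:

```python
import subprocess
import sys

def judge(source: str, test_cases: list[tuple[str, str]]) -> str:
    """Run a candidate program against (stdin, expected_stdout) pairs.

    Returns an online-judge style verdict: "AC" (accepted), "WA" (wrong
    answer), "RE" (runtime error), or "TLE" (time limit exceeded).
    """
    for stdin_data, expected in test_cases:
        try:
            proc = subprocess.run(
                [sys.executable, "-c", source],
                input=stdin_data,
                capture_output=True,
                text=True,
                timeout=5,  # guard against infinite loops
            )
        except subprocess.TimeoutExpired:
            return "TLE"
        if proc.returncode != 0:
            return "RE"
        if proc.stdout.strip() != expected.strip():
            return "WA"
    return "AC"

# Example: a "generated" solution for "print the sum of two integers"
solution = "a, b = map(int, input().split())\nprint(a + b)"
verdict = judge(solution, [("1 2", "3"), ("10 -4", "6")])
```

Running each case in a subprocess isolates the candidate code and makes a wall-clock timeout straightforward.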
Brain-Computer Interface: Advancement and Challenges
by Muhammad Mohsin Kabir; Aklima Akter Lima; Sujoy Chandra Das
in Algorithms; biomedical sensors; Brain
2021
Brain-Computer Interface (BCI) is an advanced, multidisciplinary, and active research domain based on neuroscience, signal processing, biomedical sensors, hardware, and related fields. Over the last few decades, considerable groundbreaking research has been conducted in this domain, yet no review has covered the BCI domain comprehensively. Hence, a comprehensive overview of the BCI domain is presented in this study. The study covers several applications of BCI and upholds the significance of the domain. Each element of a BCI system, including techniques, datasets, feature extraction methods, evaluation metrics, existing BCI algorithms, and classifiers, is then explained concisely. In addition, a brief overview of the technologies and hardware, mostly sensors, used in BCI is appended. Finally, the paper investigates several unsolved challenges of BCI and discusses possible solutions.
Journal Article
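The survey above covers feature extraction methods for BCI signals. A common baseline (not taken from the paper) is band-power extraction from an EEG-like trace via a simple periodogram; the band definitions and synthetic signal below are illustrative assumptions:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, band: tuple[float, float]) -> float:
    """Average spectral power of `signal` inside the frequency `band` (Hz),
    estimated from the magnitude-squared FFT (a simple periodogram)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(psd[mask].mean())

# Synthetic 1-second "EEG" trace dominated by 10 Hz alpha activity
fs = 256.0
t = np.arange(0, 1, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

features = {
    "theta": band_power(eeg, fs, (4, 8)),
    "alpha": band_power(eeg, fs, (8, 13)),
    "beta": band_power(eeg, fs, (13, 30)),
}
```

The resulting per-band powers form a small feature vector of the kind a BCI classifier might consume.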
Enhanced Robot Motion Block of A-Star Algorithm for Robotic Path Planning
by Watanobe, Yutaka; Islam, Md Rashedul; Naruse, Keitaro
in A* algorithm; adaptive cost function; Algorithms
2024
An optimized robot path-planning algorithm is required for various aspects of robot movement in applications. The efficacy of a robot path-planning model is sensitive to the number of search nodes, the path cost, and the time complexity. The conventional A-star (A*) algorithm outperforms other grid-based algorithms because of its heuristic approach. However, the performance of the conventional A* algorithm is suboptimal in time, space, and number of search nodes, depending on the robot motion block (RMB). To address these challenges, this paper proposes an optimal RMB with an adaptive cost function to improve performance. The proposed adaptive cost function keeps track of the goal node and adaptively calculates the movement costs for quickly arriving at the goal node. Incorporating the adaptive cost function with a selected optimal RMB significantly reduces the searching of less impactful and redundant nodes, which improves the performance of the A* algorithm in terms of the number of search nodes and time complexity. To validate the performance and robustness of the proposed model, an extensive experiment was conducted. In the experiment, an open-source dataset featuring various types of grid maps was customized to incorporate multiple map sizes and sets of source-to-destination nodes. According to the experiments, the proposed method demonstrated a remarkable improvement of 93.98% in the number of search nodes and 98.94% in time complexity compared to the conventional A* algorithm. The proposed model outperforms other state-of-the-art algorithms while keeping the path cost largely comparable. Additionally, a ROS experiment using a robot and lidar sensor data shows the improvement of the proposed method in a simulated laboratory environment.
Journal Article
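The paper's exact adaptive cost function and RMB are not reproduced here; as a minimal sketch of the baseline being improved, the following is plain grid A* with a goal-tracking weighted heuristic (`w` is an illustrative stand-in for an adaptive weight, not the paper's formulation):

```python
import heapq

def a_star(grid, start, goal, w=1.0):
    """A* on a 4-connected grid (0 = free, 1 = obstacle).

    `w` scales the Manhattan-distance heuristic toward the goal; with
    w > 1 the search becomes greedier (fewer expansions, possibly longer
    paths), a crude stand-in for an adaptive, goal-tracking cost.
    """
    def h(node):
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(w * h(start), 0, start, [start])]
    g_best = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1  # unit move cost per grid step
                if ng < g_best.get((r, c), float("inf")):
                    g_best[(r, c)] = ng
                    heapq.heappush(
                        open_heap,
                        (ng + w * h((r, c)), ng, (r, c), path + [(r, c)]),
                    )
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

With `w=1` the Manhattan heuristic is admissible, so the returned path is optimal (seven nodes around the obstacle row in this toy map).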
Learning Path Recommendation System for Programming Education Based on Neural Networks
2020
Programming education has recently received increased attention due to growing demand for programming and information technology skills. However, a lack of teaching materials and human resources presents a major challenge to meeting this demand. One way to compensate for a shortage of trained teachers is to use machine learning techniques to assist learners. This article proposes a learning path recommendation system that applies a recurrent neural network to a learner's ability chart, which displays the learner's scores. In brief, a learning path is constructed from a learner's submission history using a trial-and-error process, and the learner's ability chart is used as an indicator of their current knowledge. An approach for constructing a learning path recommendation system using ability charts is presented, together with an implementation based on a sequential prediction model and a recurrent neural network. An experimental evaluation is conducted with data from an e-learning system.
Journal Article
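The paper above uses a recurrent neural network over ability charts; as a much simpler illustrative baseline (not the paper's method), a first-order Markov model over submission histories can already recommend a next problem. The problem IDs below are hypothetical:

```python
from collections import Counter, defaultdict

def build_transitions(histories):
    """Count problem-to-problem transitions across learners' submission histories."""
    trans = defaultdict(Counter)
    for history in histories:
        for cur, nxt in zip(history, history[1:]):
            trans[cur][nxt] += 1
    return trans

def recommend_next(trans, current, solved=()):
    """Recommend the most frequent successor of `current` not yet solved."""
    for candidate, _ in trans[current].most_common():
        if candidate not in solved:
            return candidate
    return None

histories = [
    ["ITP1_1", "ITP1_2", "ITP1_3"],
    ["ITP1_1", "ITP1_2", "ALDS1_1"],
    ["ITP1_1", "ITP1_2", "ITP1_3"],
]
next_step = recommend_next(build_transitions(histories), "ITP1_2", solved=("ITP1_1",))
```

A sequential neural model generalizes this idea by conditioning on the whole history and the ability chart rather than only the last problem.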
A Robust Deep Feature Extraction Method for Human Activity Recognition Using a Wavelet Based Spectral Visualisation Technique
by Numan, Md Obaydullah Al; Islam, Md Rashedul; Watanobe, Yutaka
in Activities of Daily Living; Algorithms; ambient assisted living
2024
Human Activity Recognition (HAR) and Ambient Assisted Living (AAL) are integral components of smart homes, sports, surveillance, and investigation activities. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies compromise the privacy of the elderly, a fundamental right of every human. However, it is challenging to extract potential features from 1D multi-sensor data. Thus, this research focuses on extracting distinguishable patterns and deep features from spectral images obtained by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope data, serve as input signals for different daily activities and provide potential information through time-frequency analysis. This time-series information is mapped into spectral images through the use of 'scalograms', derived from the continuous wavelet transform. The deep activity features are extracted from the activity image using deep learning models such as CNN, MobileNetV3, ResNet, and GoogleNet, and subsequently classified using a conventional classifier. To validate the proposed model, the SisFall and PAMAP2 benchmark datasets are used. Based on the experimental results, the proposed model shows optimal performance for activity recognition, obtaining an accuracy of 98.4% for SisFall and 98.1% for PAMAP2 using Morlet as the mother wavelet with ResNet-101 and a softmax classifier, and it outperforms state-of-the-art algorithms.
Journal Article
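The scalogram step described above can be sketched directly with NumPy: convolve the 1D sensor signal with a Morlet wavelet at several scales and stack the magnitudes into a 2D time-frequency image. The scale range and synthetic signal are illustrative assumptions, and a real pipeline would use a dedicated CWT library:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Real Morlet mother wavelet, stretched by `scale` and L2-ish normalised."""
    x = t / scale
    return np.cos(w0 * x) * np.exp(-0.5 * x * x) / np.sqrt(scale)

def scalogram(signal, fs, scales):
    """|CWT| of `signal`: one row of wavelet-response magnitudes per scale,
    forming the 2-D image fed to CNN feature extractors."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs  # wavelet support centred on zero
    return np.array([
        np.abs(np.convolve(signal, morlet(t, s), mode="same"))
        for s in scales
    ])

fs = 100.0
t = np.arange(0, 2, 1 / fs)
accel = np.sin(2 * np.pi * 2 * t)  # a 2 Hz "walking"-like accelerometer component
image = scalogram(accel, fs, scales=np.linspace(0.02, 0.5, 32))
```

Each row of `image` responds to a different frequency band, so periodic activities leave characteristic bright ridges that a CNN can learn to classify.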
Hierarchical Aggregation of Local Explanations for Student Adaptability
by Watanobe, Yutaka; Nnadi, Leonard Chukwualuka
in Adaptability; Artificial intelligence; Clinical decision making
2026
In this study, we present Hierarchical Local Interpretable Model-agnostic Explanations (H-LIME), an innovative extension of the LIME technique that provides interpretable machine learning insights across multiple levels of data hierarchy. While traditional local explanation methods focus on instance-level attributions, they often overlook systemic patterns embedded within educational structures. To address this limitation, H-LIME aggregates local explanations across hierarchical layers (Institution Type, Location, and Educational Level), thereby linking individual predictions to broader, policy-relevant trends. We evaluate H-LIME on a student adaptability dataset using a Random Forest model chosen for its superior explanation stability (approximately 4.5 times more stable than Decision Trees). The framework uncovers consistent global predictors of adaptability, such as education level and class duration, while revealing subgroup-specific factors, including network type and financial condition, whose influence varies across hierarchical contexts. This work demonstrates the effectiveness of H-LIME at uncovering multi-level patterns in educational data and its potential for supporting targeted interventions, strategic planning, and evidence-based decision-making. Beyond education, the hierarchical approach offers a scalable solution for enhancing interpretability in domains where structured data relationships are essential.
Journal Article
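The aggregation step described above can be sketched simply: average per-instance LIME feature weights within each group of a hierarchy level. The feature names, groups, and weights below are hypothetical, and the real H-LIME framework may aggregate differently:

```python
from collections import defaultdict

def aggregate_explanations(explanations, level):
    """Average per-instance LIME feature weights within each group of one
    hierarchy level, turning local attributions into subgroup-level trends."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for ex in explanations:
        group = ex[level]
        counts[group] += 1
        for feature, weight in ex["weights"].items():
            sums[group][feature] += weight
    return {
        group: {f: w / counts[group] for f, w in feats.items()}
        for group, feats in sums.items()
    }

# Hypothetical instance-level explanations (feature -> LIME weight)
explanations = [
    {"institution": "Public", "weights": {"class_duration": 0.4, "network_type": 0.1}},
    {"institution": "Public", "weights": {"class_duration": 0.2, "network_type": 0.3}},
    {"institution": "Private", "weights": {"class_duration": 0.6, "network_type": -0.2}},
]
by_institution = aggregate_explanations(explanations, "institution")
```

Repeating the aggregation at each hierarchy level (institution type, location, educational level) yields the multi-level view the paper describes.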
Frugal Self-Optimization Mechanisms for Edge–Cloud Continuum
2025
The increasing complexity of the Edge–Cloud Continuum (ECC), driven by the rapid expansion of the Internet of Things (IoT) and data-intensive applications, necessitates innovative methods for automated and efficient system management. In this context, recent studies have focused on self-* capabilities that can enhance system autonomy and increase operational proactiveness. Separately, anomaly detection and adaptive sampling techniques have been explored to optimize data transmission and improve system reliability. The integration of those techniques within a single, lightweight, and extendable self-optimization module is the main subject of this contribution. The module was designed to be well suited for distributed systems composed of highly resource-constrained operational devices (e.g., wearable health monitors, IoT sensors in vehicles), where it can be used to self-adjust data monitoring and enhance the resilience of critical processes. The focus is on the implementation of two core mechanisms derived from the current state of the art: (1) density-based anomaly detection in real-time resource-utilization data streams, and (2) a dynamic adaptive sampling technique that employs a Probabilistic Exponential Weighted Moving Average. The performance of the proposed module was validated using both synthetic and real-world datasets, including a sample collected from the target infrastructure. The main goal of the experiments was to showcase the effectiveness of the implemented techniques in different, close-to-real-life scenarios. Moreover, the results were compared with other state-of-the-art algorithms to examine their advantages and inherent limitations. With the emphasis on frugality and real-time operation, this contribution offers a novel perspective on resource-aware autonomic optimization for next-generation ECC.
Journal Article
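A Probabilistic Exponential Weighted Moving Average (PEWMA), as named in the abstract above, adapts its smoothing weight to how probable each sample is, so outliers barely shift the running estimate. The sketch below is a generic PEWMA anomaly detector with illustrative parameters, not the module's actual implementation:

```python
import math

def pewma(stream, alpha=0.97, beta=1.0, threshold=3.0):
    """Probabilistic EWMA anomaly detection over a data stream.

    Maintains running mean/variance estimates; the effective smoothing
    weight shrinks for probable samples and stays high for improbable
    ones. Returns indices of samples more than `threshold` standard
    deviations from the current mean.
    """
    anomalies = []
    mean, var = stream[0], 1.0
    for i, x in enumerate(stream[1:], start=1):
        std = math.sqrt(var) if var > 0 else 1e-9
        z = (x - mean) / std
        p = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # Gaussian likelihood
        if abs(z) > threshold:
            anomalies.append(i)
        a = alpha * (1 - beta * p)  # improbable samples keep more weight on history
        mean = a * mean + (1 - a) * x
        var = a * var + (1 - a) * (x - mean) ** 2
    return anomalies

# Synthetic CPU-utilisation stream with one spike
cpu = [50.0, 51.0, 49.5, 50.5, 95.0, 50.2, 49.8]
flagged = pewma(cpu)
```

Because the update rule down-weights the spike itself, the detector recovers quickly after the anomaly, which suits resource-constrained streaming settings.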
Correction: Ahmed et al. A Robust Deep Feature Extraction Method for Human Activity Recognition Using a Wavelet Based Spectral Visualisation Technique. Sensors 2024, 24, 4343
2025
There was an error in the original publication [...]
Journal Article
A hybrid transformer and attention based recurrent neural network for robust and interpretable sentiment analysis of tweets
by Islam, Md Rashedul; Watanobe, Yutaka; Shovon, Md Sakib Hossain
in 639/705/1042; 639/705/117; Attention
2024
Sentiment analysis is a pivotal tool in understanding public opinion, consumer behavior, and social trends, underpinning applications ranging from market research to political analysis. However, existing sentiment analysis models frequently encounter challenges related to linguistic diversity, model generalizability, explainability, and limited availability of labeled datasets. To address these shortcomings, we propose the Transformer and Attention-based Bidirectional LSTM for Sentiment Analysis (TRABSA) model, a novel hybrid sentiment analysis framework that integrates transformer-based architecture, attention mechanism, and recurrent neural networks like BiLSTM. The TRABSA model leverages the powerful RoBERTa-based transformer model for initial feature extraction, capturing complex linguistic nuances from a vast corpus of tweets. This is followed by an attention mechanism that highlights the most informative parts of the text, enhancing the model’s focus on critical sentiment-bearing elements. Finally, the BiLSTM networks process these refined features, capturing temporal dependencies and improving the overall sentiment classification into positive, neutral, and negative classes. Leveraging the latest RoBERTa-based transformer model trained on a vast corpus of 124M tweets, our research bridges existing gaps in sentiment analysis benchmarks, ensuring state-of-the-art accuracy and relevance. Furthermore, we contribute to data diversity by augmenting existing datasets with 411,885 tweets from 32 English-speaking countries and 7,500 tweets from various US states. This study also compares six word-embedding techniques, identifying the most robust preprocessing and embedding methodologies crucial for accurate sentiment analysis and model performance. We meticulously label tweets into positive, neutral, and negative classes using three distinct lexicon-based approaches and select the best one, ensuring optimal sentiment analysis outcomes and model efficacy. 
Here, we demonstrate that the TRABSA model outperforms seven traditional machine learning models, four stacking models, and four hybrid deep learning models, yielding notable gains in accuracy (94%) and effectiveness, with a macro-average precision of 94%, recall of 93%, and F1-score of 94%. Our further evaluation involves two extended and four external datasets, demonstrating the model's consistent superiority, robustness, and generalizability across diverse contexts and datasets. Finally, by conducting a thorough study with the SHAP and LIME explainable visualization approaches, we offer insights into the interpretability of the TRABSA model, improving comprehension of and confidence in its predictions. Integrated into a decision-support system, our results make it easier to analyze how citizens respond to resources and events during pandemics. Applications of this system provide essential assistance for efficient pandemic management, such as resource planning, crowd control, policy formation, vaccination tactics, and rapid-response programs.
Journal Article
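The attention mechanism the abstract describes, which highlights the most informative parts of the text before the BiLSTM output is pooled, can be sketched in isolation. The sketch below shows generic softmax attention pooling over hidden states with random stand-in weights; it omits the RoBERTa and BiLSTM stages and is not the TRABSA implementation:

```python
import numpy as np

def attention_pool(hidden, score_w):
    """Score each timestep's hidden state with a learned vector, softmax
    the scores over timesteps, and return the weighted sum as a single
    sentence representation (plus the attention weights)."""
    scores = hidden @ score_w                 # (T,) one score per timestep
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ hidden, weights          # (D,) pooled vector, (T,) weights

rng = np.random.default_rng(42)
hidden = rng.standard_normal((5, 8))  # 5 timesteps of 8-dim BiLSTM-like states
score_w = rng.standard_normal(8)      # scoring vector (learned in practice, random here)
sentence_vec, weights = attention_pool(hidden, score_w)
```

The weights form a distribution over timesteps, which is also what makes attention-based models partially inspectable: high-weight tokens are the ones driving the sentiment decision.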
A Bidirectional LSTM Language Model for Code Evaluation and Repair
by Rahman, Md. Mostafizer; Watanobe, Yutaka; Nakamura, Keita
in Application programming interface; Automation; Computer aided software engineering
2021
Programming is a vital skill in computer science and engineering-related disciplines. However, developing source code is an error-prone task. Logical errors in code are particularly hard to identify for both students and professionals, and even a single error can have unexpected consequences for end-users. At present, conventional compilers have difficulty identifying many of the errors (especially logical errors) that can occur in code. To mitigate this problem, we propose a language model for evaluating source code using a bidirectional long short-term memory (BiLSTM) neural network. We trained the BiLSTM model on a large number of source codes while tuning various hyperparameters. We then used the model to evaluate incorrect code and assessed its performance in three principal areas: source code error detection, suggestions for incorrect code repair, and erroneous code classification. Experimental results showed that the proposed BiLSTM model achieved 50.88% correctness in identifying errors and providing suggestions. Moreover, the model achieved an F-score of approximately 97%, outperforming other state-of-the-art models such as recurrent neural networks (RNNs) and long short-term memory (LSTM) networks.
Journal Article
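The core idea of the abstract above is that context from both directions helps localize an erroneous token. As a deliberately tiny stand-in for the paper's BiLSTM (not its method), a bigram model can let the left and right neighbours of each token "vote" on how well supported it is by a reference corpus; the token sequences below are hypothetical:

```python
from collections import Counter

def train_bigrams(corpus):
    """Count adjacent-token (bigram) occurrences over tokenised reference programs."""
    counts = Counter()
    for tokens in corpus:
        counts.update(zip(tokens, tokens[1:]))
    return counts

def suspicious_token(tokens, bigrams):
    """Score each token by how often its left AND right neighbours predict
    it in the reference corpus; return the least-supported index.
    (A toy stand-in for a BiLSTM: both directions of context contribute.)"""
    scores = []
    for i, tok in enumerate(tokens):
        left = bigrams[(tokens[i - 1], tok)] if i > 0 else 1
        right = bigrams[(tok, tokens[i + 1])] if i + 1 < len(tokens) else 1
        scores.append(left + right)
    return min(range(len(tokens)), key=scores.__getitem__)

corpus = [
    ["for", "i", "in", "range", "(", "n", ")", ":"],
    ["for", "j", "in", "range", "(", "n", ")", ":"],
]
buggy = ["for", "i", "on", "range", "(", "n", ")", ":"]
idx = suspicious_token(buggy, train_bigrams(corpus))
```

Here the corrupted token `"on"` is supported by neither its left nor its right neighbour, so it scores lowest; a trained BiLSTM generalizes this to long-range, learned context.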