16 result(s) for "Hsieh, Yi-Zeng"
A Q-learning-based swarm optimization algorithm for economic dispatch problem
In this paper, we treat optimization problems as a kind of reinforcement learning problem, regarding an optimization procedure that searches for an optimal solution as a reinforcement learning procedure that finds the best policy to maximize the expected rewards. This viewpoint motivated us to propose a Q-learning-based swarm optimization (QSO) algorithm. The proposed QSO algorithm is a population-based optimization algorithm which integrates the essential properties of Q-learning and particle swarm optimization. The optimization procedure of the QSO algorithm proceeds as each individual imitates the behavior of the global best individual in the swarm. The best individual is chosen based on its accumulated performance rather than its momentary performance at each evaluation. Two data sets, a set of benchmark functions and a real-world problem (the economic dispatch (ED) problem for power systems), were used to test the performance of the proposed QSO algorithm. The simulation results on the benchmark functions show that the proposed QSO algorithm is comparable to or even outperforms several existing optimization algorithms. As for the ED problem, the proposed QSO algorithm has found solutions better than all previously found solutions.
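The core QSO idea in this abstract (individuals imitating a global best chosen by accumulated rather than momentary performance) can be sketched roughly as follows. This is a minimal illustration, not the paper's algorithm: the population size, learning rate `alpha`, discount `gamma`, and noise scale are all assumed values.

```python
import random

def qso_sketch(obj, dim=2, pop=10, iters=200, seed=0):
    """Toy Q-learning-flavored swarm optimizer (illustrative only)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    q = [0.0] * pop            # accumulated performance per individual
    alpha, gamma = 0.5, 0.9    # assumed learning rate / discount factor
    for _ in range(iters):
        rewards = [-obj(x) for x in xs]  # higher reward = lower cost
        best_q = max(q)
        # Q-style update: accumulated, discounted performance
        for i in range(pop):
            q[i] = (1 - alpha) * q[i] + alpha * (rewards[i] + gamma * best_q)
        # the "global best" is the individual with the best accumulated score
        best = xs[max(range(pop), key=lambda i: q[i])]
        # every individual imitates the global best, with small exploration noise
        xs = [[xi + rng.random() * (bi - xi) + rng.gauss(0, 0.05)
               for xi, bi in zip(x, best)] for x in xs]
    return min(xs, key=obj)

sphere = lambda x: sum(v * v for v in x)
sol = qso_sketch(sphere)
```

On a toy sphere function the swarm contracts around the accumulated-performance leader and drifts toward the minimum; a momentary-performance leader would make the attractor jump around with evaluation noise.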
Cerebral Small Vessel Disease Biomarkers Detection on MRI-Sensor-Based Image and Deep Learning
Magnetic resonance imaging (MRI) offers the most detailed brain structure images available today; it can identify tiny lesions or cerebral cortical abnormalities. The primary purpose of the procedure is to confirm whether there is a structural variation that causes epilepsy, such as hippocampal sclerosis, focal cerebral cortical dysplasia, or cavernous hemangioma. Cerebrovascular disease is the second most common cause of death in the world and the fourth leading cause of death in Taiwan, with stroke being its most common presentation. Among the most common causes are large-vessel atherosclerotic lesions, small-vessel lesions, and cardiac emboli. The purpose of this thesis is to establish a computer-aided diagnosis system for cerebral small vessel lesions in MRI images, using convolutional neural networks and deep learning to detect blocked cerebral vessels in brain MRI images; the detected blocks can help clinicians more quickly determine the probability and severity of stroke in patients. We analyzed MRI data from 50 patients, including 30 patients with stroke, 17 patients with occlusion but no stroke, and 3 patients with dementia. This system mainly helps doctors determine whether there are cerebral small vessel lesions in the brain MRI images and outputs the findings as labeled images. The labels include the position coordinates of the small-vessel blockage, the block range, the area size, and whether it may cause a stroke. Finally, all the MRI images of the patient are synthesized into a 3D display of the small blood vessels in the brain to assist the doctor in making a diagnosis or to provide an accurate lesion location for the patient.
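The core operation of the CNN pipeline described here, 2-D convolution over an MRI slice, can be sketched in plain Python; in a real system a framework would be used and the kernel values would be learned rather than chosen by hand.

```python
def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation, the basic CNN building block."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

# a 2x2 summing kernel over a 2x2 image collapses it to one response
response = conv2d([[1, 1], [1, 1]], [[1, 1], [1, 1]])
```

Stacks of such filters, interleaved with nonlinearities and pooling, are what let the network localize candidate blockages in a slice.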
ARCS-Assisted Teaching Robots Based on Anticipatory Computing and Emotional Big Data for Improving Sustainable Learning Efficiency and Motivation
Under the vigorous development of global anticipatory computing in recent years, there have been numerous applications of artificial intelligence (AI) in people’s daily lives. Learning analytics on big data can assist students, teachers, and school administrators to gain new knowledge and estimate learning information; in turn, the enhanced education contributes to the rapid development of science and technology. Education is sustainable lifelong learning, as well as the most important promoter of science and technology worldwide. In recent years, a large number of AI-based anticipatory computing applications have promoted the training of professional AI talent. As a result, this study aims to design an interactive robot-assisted teaching system for the classroom to help students overcome academic difficulties. Teachers, students, and robots in the classroom can interact with each other through the ARCS motivation model in programming. The proposed method can help students develop motivation, relevance, and confidence in learning, thus enhancing their learning effectiveness. The robot, like a teaching assistant, can help students solve problems in the classroom by answering questions and evaluating students’ answers in natural and responsive interactions. The natural interactive responses of the robot are achieved through a database of emotional big data (the Google facial expression comparison dataset). The robot is loaded with an emotion recognition system to assess the moods of the students through their expressions and voices, and then offer corresponding emotional responses. The robot is able to communicate naturally with the students, thereby attracting their attention, triggering their learning motivation, and improving their learning effectiveness.
The Estimation Life Cycle of Lithium-Ion Battery Based on Deep Learning Network and Genetic Algorithm
This study uses deep learning to model the discharge characteristic curve of a lithium-ion battery. A battery measurement instrument was used to charge and discharge the battery to establish the discharge characteristic curve. A parametric method was used to fit the discharge characteristic curve and was improved with an MLP (multilayer perceptron), an RNN (recurrent neural network), an LSTM (long short-term memory) network, and a GRU (gated recurrent unit); these methods produced the fitted curves. We then used a genetic algorithm (GA) to obtain the parameters of the discharge characteristic curve equation.
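As a concrete illustration of the GA step, the sketch below fits the parameters of an assumed discharge-curve form V(t) = a - b*t - c*exp(-d*t) to synthetic data. The equation form, parameter ranges, and GA settings are hypothetical stand-ins, not the paper's.

```python
import math
import random

def model(p, t):
    """Assumed discharge-curve form V(t) = a - b*t - c*exp(-d*t)."""
    a, b, c, d = p
    return a - b * t - c * math.exp(-d * t)

def sse(p, ts, vs):
    """Sum of squared errors of the model against measured points."""
    return sum((model(p, t) - v) ** 2 for t, v in zip(ts, vs))

def ga_fit(ts, vs, gens=150, pop=40, seed=1):
    """Minimal GA: elitism, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    popn = [[rng.uniform(0, 5) for _ in range(4)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda p: sse(p, ts, vs))
        nxt = popn[:pop // 4]                  # keep the best quarter
        while len(nxt) < pop:
            p1, p2 = rng.sample(popn[:pop // 4], 2)
            nxt.append([(x + y) / 2 + rng.gauss(0, 0.05)
                        for x, y in zip(p1, p2)])
        popn = nxt
    return min(popn, key=lambda p: sse(p, ts, vs))

# synthetic "measured" discharge data from known parameters
true_p = (4.2, 0.01, 0.3, 2.0)
ts = [0.1 * i for i in range(50)]
vs = [model(true_p, t) for t in ts]
fit = ga_fit(ts, vs)
```

Because the best quarter survives each generation unchanged, the best fitting error is monotonically non-increasing, which is the property that makes the GA a safe final parameter-extraction stage after the neural models produce the curve.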
The Application and Improvement of Deep Neural Networks in Environmental Sound Recognition
Neural networks have achieved great results in sound recognition, and many different kinds of acoustic features have been tried as the training input for the network. However, there is still doubt about whether a neural network can efficiently extract features from the raw audio signal input. This study improved the raw-signal-input network from other research by using deeper network architectures. The raw signals could be better analyzed in the proposed network. We also present a discussion of several kinds of network settings, and with the spectrogram-like conversion, our network could reach an accuracy of 73.55% on the open audio dataset “Dataset for Environmental Sound Classification 50” (ESC50). This study also proposed a network architecture that can combine different kinds of network feeds with different features. With the help of global pooling, a flexible fusion scheme was integrated into the network. Our experiment successfully combined two different networks with different audio feature inputs (a raw audio signal and the log-mel spectrum). Using the above settings, the proposed ParallelNet finally reached an accuracy of 81.55% on ESC50, which also reaches the recognition level of human beings.
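The global-pooling fusion mentioned above can be sketched in miniature: each branch's feature map is collapsed to one value per channel, so descriptors from branches with different time lengths can simply be concatenated. This toy version (plain lists standing in for tensors) is an assumption-level illustration, not ParallelNet itself.

```python
def global_avg_pool(feature_map):
    """Collapse a (channels x time) feature map to one value per channel."""
    return [sum(ch) / len(ch) for ch in feature_map]

def fuse(branch_a, branch_b):
    """Concatenate pooled descriptors from two branches, e.g. a raw-signal
    branch and a log-mel branch; their time axes need not match."""
    return global_avg_pool(branch_a) + global_avg_pool(branch_b)

# branches with different time lengths still fuse cleanly
fused = fuse([[1, 2, 3], [4, 5, 6]], [[10, 10]])
```

This is why global pooling makes the fusion "flexible": the classifier after the concatenation never sees the mismatched temporal resolutions of the two feature types.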
The Real-Time Depth Estimation for an Occluded Person Based on a Single Image and OpenPose Method
In recent years, breakthroughs in neural networks and the rise of deep learning have led to the advancement of machine vision, which is now commonly used in the practical application of image recognition. Automobiles, drones, portable devices, behavior recognition, indoor positioning and many other industries also rely on this integrated application and require the support of deep learning and machine vision. These technologies demand high accuracy in the recognition of people or objects. The recognition of human figures is also a research goal that has drawn great attention in various fields. However, recognition of a person is affected by factors such as height, weight, posture, viewing angle, and occlusion, all of which affect its accuracy. This paper applies deep learning to people in different poses and from different angles, in particular to estimating the actual distance of an occluded person from a single lens (depth estimation), so that it can be used for automatic control of drones in the future. Traditional methods for calculating depth from images are mainly divided into three types: single-lens estimation, two-lens estimation, and optical-band estimation. Since both the second and third categories require relatively large and expensive equipment to perform distance calculations effectively, numerous methods for calculating distance with a single lens have recently been produced. However, whether it is the traditional “distance-measurement unit calibration”, “defocus distance measurement”, or the “three-dimensional grid space message distance measurement method”, all of these face corresponding difficulties and problems, and they additionally have to deal with outside disturbances and process occluded images. Therefore, building on OpenPose, a recent method proposed by Carnegie Mellon University, this paper proposes a depth algorithm for a single-lens occluded portrait to estimate the actual distance of a person under different poses, viewing angles and occlusions.
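One common single-lens baseline that keypoint methods like OpenPose enable is a pinhole-model estimate from the pixel length of a known body segment. The focal length and the assumed real torso length below are hypothetical values, and the paper's actual algorithm additionally handles occlusion, which this sketch does not.

```python
def depth_from_torso(y_neck, y_hip, focal_px, torso_m=0.5):
    """Pinhole model: distance Z = f * H_real / h_pixels, using the
    neck-to-hip keypoints as the measured segment."""
    h_px = abs(y_hip - y_neck)
    if h_px == 0:
        raise ValueError("degenerate keypoints")
    return focal_px * torso_m / h_px

# e.g. a 100 px torso seen with a 1000 px focal length -> 5 m
z = depth_from_torso(y_neck=100, y_hip=200, focal_px=1000)
```

The appeal of keypoint-based segments over bounding boxes is that a partially occluded person can still expose enough joints to measure, which is exactly the situation the paper targets.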
An Eye-Tracking System based on Inner Corner-Pupil Center Vector and Deep Neural Network
The human eye is a vital sensory organ that provides us with visual information about the world around us. It can also convey information such as our emotional state to people with whom we interact. Eye tracking has recently become a hot research topic, and a growing number of eye-tracking devices have been widely applied in fields such as psychology, medicine, education, and virtual reality. However, most commercially available eye trackers are prohibitively expensive and require that the user’s head remain completely stationary in order to accurately estimate the direction of the gaze. To address these drawbacks, this paper proposes an inner corner-pupil center vector (ICPCV) eye-tracking system based on a deep neural network, which requires neither a stationary head nor expensive hardware. The performance of the proposed system is compared with those of other currently available eye-tracking estimation algorithms, and the results show that it outperforms them.
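The ICPCV feature itself is simple to state: the pupil center's position relative to the inner eye corner, a landmark that stays comparatively stable under small head movements. A minimal sketch, assuming 2-D pixel coordinates:

```python
def icpcv(inner_corner, pupil_center):
    """Inner corner-pupil center vector: the gaze feature that would be
    fed (with other cues) into the gaze-estimation network."""
    return (pupil_center[0] - inner_corner[0],
            pupil_center[1] - inner_corner[1])

# pupil 12 px to the right of and 3 px above the inner corner
v = icpcv((100, 50), (112, 47))
```

Using a relative vector rather than raw pupil coordinates is what relaxes the stationary-head requirement: a rigid head translation shifts both points equally and leaves the vector unchanged.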
Wind Technologies for Wake Effect Performance in Windfarm Layout Based on Population-Based Optimization Algorithm
This study was conducted under the auspices of China Steel Corporation, Taiwan, in carrying out the national energy policy of the 2025 Non-Nuclear Home. Under this policy, an estimated 600 offshore wind turbines will be installed by 2025. In order to carry out the wind energy project effectively, a preliminary study must be conducted. In this article, we investigated the influence of the wake effect on the efficiency of the turbines’ layout in a windfarm. A distributed genetic algorithm is deployed to optimize the wind turbines’ layout in order to alleviate the detrimental wake effect. At the current stage of this research, historical weather data from weather stations near the site of the 29th windfarm, Taiwan, were collected by Academia Sinica. Our wake-effect-resilient optimized windfarm showed superior performance over that of the conventional windfarm. Additionally, an operation cost minimization process is also demonstrated and implemented using an ant colony optimization algorithm to minimize the total length of the power-carrying interconnecting cables for the turbines inside the optimized windfarm.
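The wake effect that the layout optimization fights can be made concrete with a one-dimensional Jensen-style deficit model. The decay constant, rotor radius, and induction factor below are assumed illustrative values, not the study's; a genetic algorithm would then search turbine positions `xs` to maximize `farm_power`.

```python
def wake_deficit(x_up, x_down, k=0.05, r0=40.0):
    """Jensen-style velocity deficit a downwind turbine sees (1-D sketch)."""
    d = x_down - x_up
    if d <= 0:
        return 0.0                       # no wake on upwind turbines
    a = 1.0 / 3.0                        # assumed axial induction factor
    return 2 * a / (1 + k * d / r0) ** 2

def farm_power(xs, u0=12.0):
    """Total power ~ sum of u^3, each turbine seeing its worst upstream wake."""
    total = 0.0
    for xi in xs:
        deficit = max((wake_deficit(xj, xi) for xj in xs if xj < xi),
                      default=0.0)
        total += (u0 * (1 - deficit)) ** 3
    return total
```

Because power scales with the cube of wind speed, even a modest wake deficit at a downwind turbine costs a disproportionate share of output, which is why spacing found by the optimizer matters so much.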
Applying Artificial Intelligence (AI) Techniques to Implement a Practical Smart Cage Aquaculture Management System
Purpose: This paper presents our team’s results in establishing an AIoT smart cage culture management system. Methods: In the built system, information from the farmed field is transmitted to the data platform of Ocean Cloud, and all collected data and analysis results can be applied to the cage culture field after big data analysis. Results: This management system successfully integrates AI and IoT technologies and is applied in cage culture. Using underwater biological image analysis and AI feeding as examples, this paper explains how the system integrates AI and IoT into a feasible framework that can constantly acquire information about the health status of the fish, the survival rate of the fish, and the feed residuals. Conclusion: The results of our research enable aquaculture operators or owners to efficiently reduce feed residuals, monitor the growth of the fish, and increase the fish survival rate, thereby increasing the feed conversion rate.
The Wearable Physical Fitness Training Device Based on Fuzzy Theory
Mobile Edge Computing and Communication (MECC) can be deployed in close proximity to sensing devices and act as middleware between the cloud and local networks. The health and fitness movement has become extremely popular recently. Endurance activities such as marathons, triathlons, and cycling have also grown in popularity. However, with more people participating in these activities, more accidents and injuries occur, ranging from heat stroke to heart attacks, shock, or hypoxia. All physical training activities include a risk of injury and accidents. Therefore, any research that offers a means of reducing injury risk will significantly contribute to the personal fitness field. Moreover, with the growing popularity of wearable devices and the rise of MECC, the development and application of wearable devices that can connect to the MECC has become widespread, producing many new innovations. Although many wearable devices, such as wrist straps and smart watches, are available and able to detect individual physiological data, they cannot monitor the human body in a state of motion. Therefore, this study proposes a set of monitoring parameters for a novel MECC-connected wearable device based on fitness management, to assist fitness trainers in delivering effective, promptly guided strength training and to offer timely warnings in the event of an injury risk. The monitoring device applies fuzzy theory to the collected data, which include the risk factor, body temperature, heart rate, and blood oxygen concentration. The proposed system can display the wearer’s current physiological state in real time. The introduction of this device will hopefully enable trainers to immediately and effectively control and monitor the intensity of a training session while increasing training safety, and offer crucial and immediate diagnostic information so that the correct treatment can be applied without delay in the event of injury.
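The fuzzy-theory step can be illustrated with a two-rule Mamdani-style sketch mapping heart rate and body temperature to a risk score. The membership breakpoints, rules, and output levels below are assumptions for illustration, not the device's actual parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_level(heart_rate, temp_c):
    """Two-rule sketch:
    R1: IF heart rate high AND temperature high THEN risk high
    R2: IF heart rate normal THEN risk low
    """
    hr_high = tri(heart_rate, 120, 170, 220)
    hr_norm = tri(heart_rate, 40, 75, 130)
    t_high = tri(temp_c, 37.5, 39.5, 41.5)
    w_high = min(hr_high, t_high)   # AND = min (rule firing strengths)
    w_low = hr_norm
    if w_high + w_low == 0:
        return 0.5                  # no rule fires: indeterminate
    # weighted-average defuzzification over singleton outputs 1.0 / 0.1
    return (w_high * 1.0 + w_low * 0.1) / (w_high + w_low)
```

The appeal of fuzzy rules here is graceful degradation: a reading between "normal" and "high" partially fires both rules, so the risk score changes smoothly rather than flipping at a hard threshold.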