Asset Details
Adaptive Control of Nonlinear Non-Minimum Phase Systems Using Actor–Critic Reinforcement Learning
by Jouili, Khalil; Charfeddine, Monia; Ben Moussa, Mongi
Subjects: Accuracy / Adaptive control / Analysis / Canonical forms / Cascade control / Control engineering / Control systems / Controllers / Coordinate transformations / Data mining / Decoupling / Feedback linearization / Learning / Machine learning / Mathematical optimization / Multilayer perceptrons / Nested loops / Nonlinear control / Nonlinear systems / Output feedback / Reference signals / Symmetry / Time varying control / Tracking
2025
Journal Article
Overview
This study introduces a novel control strategy tailored to nonlinear systems with non-minimum phase (NMP) characteristics. The framework leverages reinforcement learning within a cascade control architecture that integrates an Actor–Critic structure. Controlling NMP systems poses significant challenges due to the inherent instability of their internal dynamics, which hinders effective output tracking. To address this, the system is reformulated using the Byrnes–Isidori normal form, allowing the decoupling of the input–output pathway from the internal system behavior. The proposed control architecture consists of two nested loops: an inner loop that applies input–output feedback linearization to ensure accurate tracking performance, and an outer loop that constructs reference signals to stabilize the internal dynamics. A key innovation in this design lies in the incorporation of symmetry principles observed in both system behavior and control objectives. By identifying and utilizing these symmetrical structures, the learning algorithm can be guided toward more efficient and generalized policy solutions, enhancing robustness. Rather than relying on classical static optimization techniques, the method employs a learning-based strategy inspired by previous gradient-based approaches. In this setup, the Actor—modeled as a multilayer perceptron (MLP)—learns a time-varying control policy for generating intermediate reference signals, while the Critic evaluates the policy’s performance using Temporal Difference (TD) learning. The proposed methodology is validated through simulations on the well-known Inverted Pendulum system. The results demonstrate significant improvements in tracking accuracy, smoother control signals, and enhanced internal stability compared to conventional methods. These findings highlight the potential of Actor–Critic reinforcement learning, especially when symmetry is exploited, to enable intelligent and adaptive control of complex nonlinear systems.
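The nested-loop scheme described above can be illustrated in code. The following is a minimal sketch, not the authors' implementation: the scalar stand-in dynamics, cost terms, network size, learning rates, and the value-gradient style actor update are all assumptions; the paper works with the Byrnes–Isidori normal form of an inverted pendulum and does not specify these details here. The sketch only shows the shape of the idea: an MLP Actor generates a reference signal for the internal dynamics, and a TD(0) Critic evaluates the resulting cost-to-go.

```python
import numpy as np

# Illustrative sketch only: a scalar unstable system stands in for the
# internal dynamics; all constants below are assumptions, not the paper's.
rng = np.random.default_rng(0)

A, B = 1.05, 0.5            # open-loop unstable internal dynamics (|A| > 1)

def internal_step(eta, ref):
    """One step of the toy internal dynamics driven by the reference."""
    return float(np.clip(A * eta + B * ref, -3.0, 3.0))  # keep the toy state bounded

# Actor: one-hidden-layer MLP mapping internal state -> reference signal
H = 8
W1 = rng.normal(scale=0.3, size=(H, 1)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.3, size=(1, H)); b2 = np.zeros(1)

def actor(eta):
    h = np.tanh(W1 @ np.array([eta]) + b1)
    return float((W2 @ h + b2)[0]), h

# Critic: linear value (cost-to-go) approximation over quadratic features
w_c = np.zeros(3)
feat = lambda eta: np.array([1.0, eta, eta * eta])

alpha_a, alpha_c, gamma = 1e-3, 5e-3, 0.95

for episode in range(300):
    eta = rng.uniform(-0.5, 0.5)
    for t in range(20):
        ref, h = actor(eta)
        eta_next = internal_step(eta, ref)
        cost = eta_next ** 2 + 0.1 * ref ** 2          # drift + effort penalty
        # TD(0) error evaluates the current reference-generating policy
        delta = cost + gamma * (w_c @ feat(eta_next)) - w_c @ feat(eta)
        w_c += alpha_c * delta * feat(eta)             # critic update
        # Actor: descend the one-step cost-to-go w.r.t. the reference,
        # backpropagated through the MLP (a value-gradient style update;
        # the paper's exact update rule may differ)
        dV = w_c[1] + 2.0 * w_c[2] * eta_next          # dV/d(eta_next)
        g_ref = 2.0 * eta_next * B + 0.2 * ref + gamma * dV * B
        dh = (W2.ravel() * g_ref) * (1.0 - h ** 2)     # backprop through tanh
        W2 -= alpha_a * g_ref * h[None, :]
        b2 -= alpha_a * g_ref
        W1 -= alpha_a * np.outer(dh, [eta])
        b1 -= alpha_a * dh
        eta = eta_next
```

In the full architecture this outer loop would sit above an inner input–output feedback-linearizing controller tracking `ref`; here that inner loop is abstracted away into the toy dynamics.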