Robust and Stochastic Receding Horizon Control
Dissertation

2021
Overview
Chance constraints, unlike robust constraints, allow constraint violation up to some predefined level and arise in numerous applications. They are often imposed in a pointwise-in-time fashion in control problems. This thesis considers a class of chance constraints imposed in an average-in-time fashion, to focus on aggregate behaviours, and discounted to achieve trade-offs between short-term and long-term performance in the model predictive control (MPC) framework. The thesis designs an MPC law for chance-constrained stochastic systems with discrete-time linear dynamics and possibly unbounded additive disturbances. The chance constraint is defined as a discounted sum of violation probabilities over an infinite horizon. By penalising violation probabilities close to the initial time and assigning vanishingly small weights to violation probabilities in the far future, this form of constraint allows for an MPC law with guarantees of recursive feasibility, via an online constraint-tightening technique, without an assumption of boundedness of the disturbance. We employ Chebyshev's inequality for constraint handling and formulate a computationally simple MPC optimisation problem. To mitigate the conservativeness of Chebyshev's inequality, a dynamic feedback gain is incorporated into the MPC law. This gain is selected online from a set of candidates generated by Pareto optimal solutions of a multiobjective optimisation problem. The closed-loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition. With dynamic feedback gain selection, the closed-loop cost is reduced and a larger set of feasible initial conditions is obtained.

This thesis also considers an application of stochastic MPC in networked control systems, where constrained linear systems are subject to stochastic additive disturbances and noisy measurements transmitted over a lossy communication channel. An MPC controller is designed to minimise a discounted cost subject to a discounted expectation constraint. Sensor data are assumed to be lost with a known probability. Data losses are accounted for by expressing the predicted control policy as an affine function of future observations, which results in a convex optimal control problem. Recursive feasibility of the online optimisation problems and constraint satisfaction are ensured similarly via the constraint-tightening technique. We show that the discounted cost evaluated along trajectories of the closed-loop system is bounded. Under certain conditions, the averaged undiscounted closed-loop cost accumulated over an infinite horizon also remains bounded.
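The two building blocks named in the abstract, Chebyshev-based constraint handling and a discounted sum of violation probabilities, can be sketched in a few lines. This is an illustrative sketch of the generic inequalities, not the thesis's actual formulation; the function names and the scalar setting are assumptions for clarity.

```python
import numpy as np

def cantelli_tightened_bound(mean, var, eps):
    """One-sided Chebyshev (Cantelli) inequality:
    P(X >= mean + t) <= var / (var + t**2) for t > 0.
    Hence P(X >= b) <= eps is guaranteed whenever
    mean + sqrt(var * (1 - eps) / eps) <= b, so the value
    returned here acts as a deterministic tightened bound
    that implies the scalar chance constraint."""
    return mean + np.sqrt(var * (1.0 - eps) / eps)

def discounted_violation_sum(probs, gamma):
    """Discounted sum of per-step violation probabilities,
    sum_k gamma**k * p_k, the average-in-time quantity that a
    discounted chance constraint bounds over the horizon."""
    p = np.asarray(probs, dtype=float)
    return float((gamma ** np.arange(p.size)) @ p)
```

For example, with `eps = 0.2` a zero-mean, unit-variance quantity needs `b >= 2.0`, since `sqrt(0.8 / 0.2) = 2`. A constant per-step violation probability `p` with discount `gamma < 1` gives a discounted sum converging to `p / (1 - gamma)`, which stays finite over an infinite horizon even though the undiscounted sum diverges.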
Publisher
ProQuest Dissertations & Theses