Asset Details
Rate Analysis of Coupled Distributed Stochastic Approximation for Misspecified Optimization
by Yang, Yaqun; Lei, Jinlong
in Algorithms / Approximation / Convergence / Cost function / Learning / Optimization / Parameters
2024
Overview
We consider an \(n\)-agent distributed optimization problem with imperfect information characterized in a parametric sense, where the unknown parameter can be obtained by solving a distinct distributed parameter learning problem. Though each agent only has access to its local parameter learning and computational problem, the agents aim to collaboratively minimize the average of their local cost functions. To address this special optimization problem, we propose a coupled distributed stochastic approximation algorithm, in which every agent updates its current beliefs of the unknown parameter and the decision variable by a stochastic approximation method, and then averages the beliefs and decision variables of its neighbors over the network through a consensus protocol. Our interest lies in the convergence analysis of this algorithm. We quantitatively characterize the factors that affect the algorithm's performance, and prove that the mean-squared error of the decision variable is bounded by \(O\big(\tfrac{1}{nk}\big) + O\big(\tfrac{1}{\sqrt{n}(1-\rho_w)}\big)\tfrac{1}{k^{1.5}} + O\big(\tfrac{1}{(1-\rho_w)^2}\big)\tfrac{1}{k^{2}}\), where \(k\) is the iteration count and \(1-\rho_w\) is the spectral gap of the network's weighted adjacency matrix. This reveals that the network connectivity, characterized by \(1-\rho_w\), influences only the higher-order terms of the convergence rate, while the dominant rate is the same as that of the centralized algorithm. In addition, we show that the number of transient iterations needed to reach the dominant rate \(O(\tfrac{1}{nk})\) is \(O(\tfrac{n}{(1-\rho_w)^2})\). Numerical experiments, carried out with different CPUs acting as agents, a setting closer to real-world distributed scenarios, demonstrate the theoretical results.
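The update scheme the abstract describes — a local stochastic approximation step for both the parameter belief and the decision variable, followed by consensus averaging with neighbors — can be illustrated with a minimal toy sketch. This is not the paper's exact setup: it assumes scalar variables, a common unknown parameter observed through noisy samples, quadratic local costs \(f_i(x;\theta) = \tfrac{1}{2}(x - \theta - b_i)^2\), a ring network with hand-picked doubly stochastic weights, and a \(1/k\) step size; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # number of agents
theta_true = 1.5                       # common unknown parameter (not known to agents)
b = rng.normal(size=n)                 # agent-specific cost offsets
x_opt = theta_true + b.mean()          # minimizer of the average local cost

# Doubly stochastic weight matrix for a ring network (hand-picked weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

theta = np.zeros(n)   # each agent's belief of the unknown parameter
x = np.zeros(n)       # each agent's decision variable

for k in range(1, 20001):
    a_k = 1.0 / k     # diminishing step size

    # Stochastic approximation steps with noisy observations/gradients:
    sample = theta_true + rng.normal(scale=0.1, size=n)   # noisy parameter sample
    theta_sa = theta + a_k * (sample - theta)             # parameter learning step
    grad = (x - theta - b) + rng.normal(scale=0.1, size=n)  # noisy local gradient,
    x_sa = x - a_k * grad                                   # evaluated at the belief

    # Consensus step: average beliefs and decisions with neighbors.
    theta = W @ theta_sa
    x = W @ x_sa

# All agents' decisions should approach the minimizer of the average cost.
print(float(np.max(np.abs(x - x_opt))))
```

Note the coupling: the gradient step for `x` is evaluated at the *current belief* `theta`, not the true parameter, which is why the parameter learning error feeds into the decision error the paper's rate bound quantifies.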
Publisher
Cornell University Library, arXiv.org