Distributed Optimization Algorithms in Large-Scale Directed Networks
Xi, Chenguang.
2017
Abstract: In today's interconnected world, distributed computation and optimization over
large-scale multi-agent networks are ubiquitous. Applications can be found in fields
ranging from machine learning and signal processing to computational finance. The
increasing interest in distributed optimization is further motivated by the emergence of
big data applications. On the one hand, large datasets push centralized computation to its
limits, and the need for distributed algorithms arises quite naturally. On the other hand,
transmitting data collected in a distributed manner to a central location is often too
costly or violates privacy. In this thesis, we aim to solve the distributed optimization
problem over multi-agent networks, where each agent has local private information and
objective functions. The goal is to have agents collaborate with each other to optimize
the sum of these local objective functions. Existing algorithms mostly deal with the
corresponding problems under the assumption that the underlying multi-agent network is
strongly-connected and undirected, i.e., if agent $i$ sends information to agent $j$,
then agent $j$ also sends information to agent $i$. The contribution of this work lies
in the relaxation of such assumptions on the network topology. In particular, we assume
the communication between the agents is described by a \emph{directed} graph. We mainly
propose four algorithms, Directed-Distributed Subgradient Descent (D-DSD),
Directed-Distributed Projection Subgradient (D-DPS), DEXTRA, and ADD-OPT. D-DSD and
D-DPS focus on the case when the objective functions are convex, but not necessarily
differentiable. Both of the proposed algorithms achieve a convergence rate of
$O(\frac{\ln k}{\sqrt{k}})$, where $k$ is the number of iterations. D-DPS generalizes
D-DSD from the unconstrained case to the constrained case. When the objective
functions are additionally assumed to be smooth, i.e., convex as well as
differentiable, we propose DEXTRA and ADD-OPT, which accelerate the convergence to a
linear rate $O(\tau^{k})$ for some $0<\tau<1$. Moreover, ADD-OPT supports a wider and
more realistic range of step-sizes than DEXTRA. All four algorithms achieve the best
known rate of convergence for this class of problems under the appropriate
assumptions.
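
For reference, the problem described above can be written in the standard form of
distributed optimization; the symbols $n$ (number of agents) and $f_i$ (agent $i$'s
local objective) are notation supplied here for clarity rather than quoted from the
thesis:

$\min_{x \in \mathbb{R}^p} \; f(x) = \sum_{i=1}^{n} f_i(x)$,

where $f_i$ is known only to agent $i$ and information is exchanged only along the
edges of a \emph{directed}, strongly-connected communication graph.

To make the setting concrete, the sketch below runs a generic subgradient-push-style
iteration (column-stochastic mixing with push-sum scaling) on a small directed graph.
It illustrates the problem class only; it is not an implementation of D-DSD, D-DPS,
DEXTRA, or ADD-OPT, and the graph, weights, and local objectives are hypothetical.

```python
import numpy as np

# Generic subgradient-push-style sketch over a directed graph.
# NOT the thesis's D-DSD / D-DPS / DEXTRA / ADD-OPT updates;
# all numerical values below are illustrative assumptions.

n = 4
# Column-stochastic weight matrix A: A[i, j] is the weight agent i places
# on the message received from agent j. The underlying directed graph
# (0->1, 0->2, 1->2, 2->3, 3->0, plus self-loops) is strongly connected
# but not symmetric.
A = np.array([
    [1/3, 0.0, 0.0, 0.5],
    [1/3, 0.5, 0.0, 0.0],
    [1/3, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
])

targets = np.array([1.0, 2.0, 3.0, 4.0])  # f_i(x) = 0.5 * (x - targets[i])**2

def grad(i, x):
    # Gradient of the local objective f_i at x.
    return x - targets[i]

x = np.zeros(n)  # each agent's variable (scalar problem for brevity)
y = np.ones(n)   # push-sum scaling variables that correct the directed imbalance
z = x.copy()

for k in range(1, 5001):
    alpha = 1.0 / np.sqrt(k)   # diminishing step-size
    w = A @ x                  # mix variables along directed edges
    y = A @ y                  # mix scaling variables
    z = w / y                  # bias-corrected local estimates
    x = w - alpha * np.array([grad(i, z[i]) for i in range(n)])

# All agents' estimates approach the global minimizer mean(targets) = 2.5.
print(z)
```

Because a doubly-stochastic weight matrix is typically unavailable on a directed
graph, column-stochastic weights together with the scaling variables $y$ are used
here; handling this asymmetry is what separates the directed setting above from the
undirected setting assumed by most existing algorithms.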
Thesis (Ph.D.)--Tufts University, 2017.
Submitted to the Dept. of Electrical Engineering.
Advisor: Usman Khan.
Committee: Shuchin Aeron, Jason Rife, and Jose Bento.
Keyword: Mathematics.
ID: zg64tz61r
Component ID: tufts:20678