Citation: Xiao-qing ZHOU, Jue-you LI. Distributed Push-Sum Dual Averaging for Convex Optimization over Time-Varying Directed Graphs [J]. Journal of Southwest China Normal University (Natural Science Edition), 2019, 44(11): 11-17. doi: 10.13718/j.cnki.xsxb.2019.11.002

Distributed Push-Sum Dual Averaging for Convex Optimization over Time-Varying Directed Graphs

More Information
  • Corresponding author: Jue-you LI
  • Received Date: 04/05/2018
    Available Online: 20/11/2019
  • MSC: O224;O236

  • Abstract: A distributed convex optimization problem with simple constraints over time-varying directed graphs is investigated in this paper. By combining the push-sum scheme with the distributed dual averaging method, a distributed dual averaging algorithm with a push-sum protocol is first proposed. The convergence of the proposed method is then established, and an explicit convergence rate of order O($\frac{1}{{\sqrt T }}$) is obtained. Finally, a numerical example on a linear regression problem is used to demonstrate the effectiveness of the proposed algorithm. Compared with existing works, our algorithm does not require double stochasticity of the communication weight matrices, and it can be applied to a large class of constrained convex optimization problems.
References

    [1] HONG Y G, ZHANG Y Q. Distributed Optimization: Algorithm Design and Convergence Analysis [J]. Control Theory & Applications, 2014, 31(7): 850-857.

    [2] YI P, HONG Y G. Distributed Cooperative Optimization and Its Applications [J]. Scientia Sinica Mathematica, 2016, 46(10): 1547-1564.

    [3] NEDIC A, OZDAGLAR A. Distributed Subgradient Methods for Multi-Agent Optimization [J]. IEEE Transactions on Automatic Control, 2009, 54(1): 48-61. doi: 10.1109/TAC.2008.2009515

    [4] DUCHI J C, AGARWAL A, WAINWRIGHT M J. Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling [J]. IEEE Transactions on Automatic Control, 2012, 57(3): 592-606. doi: 10.1109/TAC.2011.2161027

    [5] ZHU M, MARTINEZ S. On Distributed Convex Optimization under Inequality and Equality Constraints [J]. IEEE Transactions on Automatic Control, 2012, 57(1): 151-164. doi: 10.1109/TAC.2011.2167817

    [6] LI J Y, WU C Z, WU Z Y, et al. Gradient-Free Method for Nonsmooth Distributed Optimization [J]. Journal of Global Optimization, 2015, 61(2): 325-340. doi: 10.1007/s10898-014-0174-2

    [7] YAN B. Design of an Automated Library Management System Based on a Distributed Database [J]. Journal of Southwest China Normal University (Natural Science Edition), 2016, 41(2): 147-153.

    [8] ZHANG H, HAN Y Y, LYU Q G, et al. Optimization-Based Distributed Dispatch Response in Smart Grids over Time-Varying Network Topologies [J]. Journal of Southwest University (Natural Science Edition), 2018, 40(7): 177-180.

    [9] ZHANG D X. A Cluster Routing Algorithm of Wireless Sensor Networks for Smart Grids [J]. Journal of Guizhou Normal University (Natural Sciences), 2016, 34(5): 93-97. doi: 10.3969/j.issn.1004-5570.2016.05.017

    [10] ZHAO G. A Grid Workflow Management System Based on a Distributed Multi-Engine Architecture [J]. Journal of Southwest University (Natural Science Edition), 2012, 34(11): 100-107.

    [11] WU Q L, FANG Z, ZHANG Z J, et al. A Survey of Resource Optimization Strategies for Cooperative Multi-Flow Self-Organizing Networks [J]. Journal of Chongqing Technology and Business University (Natural Science Edition), 2017, 34(3): 39-48.

    [12] HUANG Q D, YAN Q Q, SUN Q. A Distributed Adaptive Clustering Algorithm Based on the Fiedler Vector [J]. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2017, 29(3): 301-306.

    [13] TSIANOS K I, LAWLOR S, RABBAT M G. Push-Sum Distributed Dual Averaging for Convex Optimization [C] // 51st IEEE Conference on Decision and Control. New York: IEEE Press, 2012.

    [14] NEDIC A, OLSHEVSKY A. Distributed Optimization over Time-Varying Directed Graphs [J]. IEEE Transactions on Automatic Control, 2015, 60(3): 601-615. doi: 10.1109/TAC.2014.2364096

  • Distributed optimization theory and its applications have become one of the important research directions in systems and control science. In the study of optimization theory, algorithm design, convergence analysis and complexity analysis are the key issues [1]. Many researchers at home and abroad have investigated the theory and algorithms of distributed optimization [2-6]. Based on subgradient information, existing distributed optimization algorithms fall roughly into three classes: primal, dual and primal-dual distributed algorithms. Reference [3] first proposed a distributed subgradient algorithm for unconstrained distributed optimization problems and gave its convergence analysis. Subsequently, reference [4] proposed a distributed dual averaging subgradient algorithm over fixed undirected communication networks, provided a convergence analysis, and showed that the convergence rate depends on the network size and topology. Based on Lagrangian duality, reference [5] proposed a primal-dual distributed subgradient algorithm for a class of distributed convex optimization problems with equality or inequality constraints. Distributed methods are now widely applied in many fields [7-12].

    Most existing distributed algorithms are designed for undirected communication networks, whereas in many practical problems the communication networks are directed, such as wireless sensor networks and mobile communication networks [13]. Moreover, over undirected networks the design of distributed algorithms usually relies on a rather strong condition, namely that the communication weight matrices be doubly stochastic, a requirement that generally cannot be met over directed communication networks [14]. Recently, by introducing the push-sum cooperation mechanism, reference [13] was the first to consider distributed optimization over time-invariant directed communication networks, assuming that the communication weight matrices are only column stochastic; a push-sum dual averaging subgradient algorithm was proposed there together with its convergence analysis, but the communication network considered is time-invariant. Subsequently, reference [14] studied distributed optimization over time-varying directed networks and derived the convergence rate of the algorithm, but only the unconstrained problem was considered. It is therefore particularly necessary to study constrained distributed problems over time-varying directed networks.

    Motivated by [13-14], this paper proposes a push-sum distributed dual averaging subgradient algorithm over time-varying directed networks, analyzes its convergence, and establishes a convergence rate of O($\frac{1}{{\sqrt T }}$). Numerical results verify the effectiveness of the proposed algorithm. This work extends the time-invariant directed networks of [13] to time-varying directed networks, and the proposed method can handle constrained distributed optimization problems.

1.   Problem Description
  • In a multi-agent network, all agents often need to cooperate to minimize the sum of several objective functions, where each agent knows only its own objective function and exchanges information with the other agents so as to jointly reach the global optimal value [14]. This paper mainly considers the following distributed constrained optimization problem [1]:

    where agent i knows only its own objective function fi(x), i=1, …, n, and all agents are subject to the common constraint set $X \subset {{\mathbb{R}}^d}$. Let X* denote the optimal solution set of problem (1).
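    For concreteness, problem (1) presumably takes the standard constrained consensus-optimization form below; the exact normalization of the objective (for instance an extra $\frac{1}{n}$ factor, as used in [4]) is a modelling choice and is assumed here:

    $\min\limits_{\boldsymbol{x} \in X} F(\boldsymbol{x}): = \sum\limits_{i = 1}^n f_i(\boldsymbol{x})$    (1)

    where each $f_i: X \to \mathbb{R}$ is convex and known only to agent $i$.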

2.   Push-Sum Distributed Dual Averaging Algorithm
  • Assume the graph G(t)=(V, E(t)) under consideration is directed, where V={1, 2, …, n} is the set of nodes (agents) and E(t)⊂V×V is the set of edges. For node i, let $N_i^{\rm in}(t)=\{j\mid(j, i)\in E(t)\}\cup\{i\}$ denote its set of in-neighbors, $N_i^{\rm out}(t)=\{j\mid(i, j)\in E(t)\}\cup\{i\}$ its set of out-neighbors, and $d_i(t)=|N_i^{\rm out}(t)|$ its out-degree. Assume that at each time t every agent i knows only its own out-degree di(t), and define the communication weight matrix A(t) with entries

    In general, A(t) is a time-varying column-stochastic matrix [14]. To solve problem (1), the following push-sum distributed dual averaging algorithm, abbreviated PS-DDA, is proposed:

    Step 0: initialize wi(0)=1, zi(0)=0, gi(0)=0, t=0;

    Step 1: for each agent i, i=1, …, n, update

    Step 2: set t=t+1 and return to Step 1.

    Here the step-size sequence $\{a(t)\}_{t=0}^{\infty}$ is nonnegative and nonincreasing; gi(t) is a subgradient of fi(x) at x=xi(t); and ${\mathit{\Pi} }_X^\psi \left( {{\mathit{\pmb{z}}}, a\left( t \right)} \right): = \mathop {\arg \min }\limits_{x \in {\mathit{\pmb{X}}}} \left\{ {\langle {\mathit{\pmb{z}}}, {\mathit{\pmb{x}}}\rangle + \frac{{\psi ({\mathit{\pmb{x}}})}}{{a(t)}}} \right\}$ is a non-Euclidean projection operator, where $\psi :{{\mathbb{R}}^d} \to {\mathbb{R}}$ is a 1-strongly convex function [4] satisfying ψ(x)≥0 and ψ(0)=0.
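    To make Steps 0-2 concrete, the following Python/NumPy sketch carries out one PS-DDA-style iteration. It is an illustration only: the projection is specialized to the Euclidean case ψ(x)=½‖x‖² with X taken as the ball {x : ‖x‖ ≤ D}, the column-stochastic, out-degree-based mixing of (zi, wi) follows the standard push-sum construction of [13-14] and is an assumption here, and all identifiers are illustrative.

    import numpy as np

    def projection(z, a, D):
        """Pi_X^psi(z, a) specialized to psi(x) = 0.5*||x||^2 and X = {x: ||x|| <= D}:
        argmin_{x in X} {<z, x> + psi(x)/a} is the point -a*z clipped onto the ball."""
        x = -a * z
        norm = np.linalg.norm(x)
        return x if norm <= D else (D / norm) * x

    def ps_dda_step(z, w, x, subgrad, in_neighbors, out_degree, a_t, D):
        """One PS-DDA-style iteration (push-sum weights assumed as in [13-14]).
        z, w, x       per-agent states of shape (n, d), (n,), (n, d)
        subgrad(i, x) returns a subgradient of f_i at x
        in_neighbors  in_neighbors[i] lists all j with (j, i) in E(t), plus i itself
        out_degree    out_degree[j] = |N_j^out(t)|, the only quantity agent j needs
        a_t           current step size a(t)"""
        n, d = z.shape
        g = np.array([subgrad(i, x[i]) for i in range(n)])        # g_i(t)
        z_new, w_new = np.zeros_like(z), np.zeros_like(w)
        for i in range(n):
            for j in in_neighbors[i]:
                # column-stochastic mixing: sender j splits its mass equally
                # among its d_j(t) out-neighbors
                z_new[i] += (z[j] + g[j]) / out_degree[j]
                w_new[i] += w[j] / out_degree[j]
        # de-biased dual variable z_i/w_i, followed by the projection step
        x_new = np.array([projection(z_new[i] / w_new[i], a_t, D) for i in range(n)])
        return z_new, w_new, x_new

    Iterating ps_dda_step with a step size a(t)=p/√t and averaging the iterates xi(t) over t=1, …, T gives the running averages x̂i(T) analyzed in Section 3.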

3.   Convergence Analysis
  • The convergence analysis of algorithm PS-DDA is presented below.

    Assumption 1  (a) The sequence of directed graphs {G(t)} is B-strongly connected, i.e., there exists a positive integer B such that for every k≥0 the graph with edge set $E_B(k)=\bigcup\limits_{i = kB}^{(k+1)B-1} E(i)$ is strongly connected [11].

    (b) For i=1, …, n, fi(x): X→$\mathbb{R}$ is convex and its subgradients gi(x) are bounded, i.e., ‖gi(x)‖*≤L for all x∈X, where gi(x)∈∂fi(x) and ‖·‖* is the dual norm of ‖·‖.

    Lemma 1[4]  Suppose the step-size sequence $\{a(t)\}_{t=0}^{\infty}$ is nonnegative and nonincreasing. Then for any x*∈X*,

    where $\hat {\mathit{\pmb{g}}}(t) = \frac{1}{n}\sum\limits_{i = 1}^n {{{\mathit{\pmb{g}}}_i}} (t)$ and ${\mathit{\pmb{x}}}(t + 1) = {\mathit{{\Pi}}} _X^\psi \left( {\sum\limits_{r = 1}^t {\hat {\mathit{\pmb{g}}}} (r), a(t)} \right)$.

    Lemma 2[4]  For any two vectors u, v∈${{\mathbb{R}}^d}$, $\|\mathit{\Pi}_X^\psi(\boldsymbol{u}, a)-\mathit{\Pi}_X^\psi(\boldsymbol{v}, a)\| \le a\|\boldsymbol{u}-\boldsymbol{v}\|_*$.

    Lemma 3[14]  Suppose the graph sequence {G(t)} is uniformly strongly connected. Then:

    (a) there exists a sequence of stochastic vectors $\{\boldsymbol{\phi}(t)\}_{t=1}^{\infty}$ such that for all t≥s≥0,

    where A(t:s):=A(t)…A(s) for t≥s≥0, C>0 is a constant and λ∈(0, 1);

    (b) let $\delta = \mathop{\inf}\limits_{t = 0, 1, \cdots}\ \mathop{\min}\limits_{1 \le i \le n} {[A(t) \cdots A(0){\bf{1}}]_i}$; then $\delta \ge \frac{1}{{{n^{nB}}}}$, where 1 is the n-dimensional all-ones vector.

    Introduce the following notation: ${\widehat {\mathit{\pmb{x}}}_i}(T) = \frac{1}{T}\sum\limits_{t = 1}^T {{{\mathit{\pmb{x}}}_i}} (t)$, $\bar {\mathit{\pmb{z}}}(t) = \frac{1}{n}\sum\limits_{i = 1}^n {{{\mathit{\pmb{z}}}_i}} (t)$, ${\mathit{\pmb{y}}}(t) = {\mathit{\Pi}} _X^\psi (\bar {\mathit{\pmb{z}}}(t), a(t-1))$ and $\hat {\mathit{\pmb{y}}}(T) = \frac{1}{T}\sum\limits_{t = 1}^T {\mathit{\pmb{y}}} (t)$. To establish the convergence of algorithm PS-DDA, we first prove the following key lemma.

    Lemma 4  Suppose Assumption 1 holds and the sequence $\{{\mathit{\pmb{z}}}_i(t)\}_{t=1}^{\infty}$ is generated by algorithm PS-DDA. Then for all t≥1,

    where $\frac{1}{{{n^{nB}}}} \le \delta < 1$ and $0 < \lambda \le {\left( {1 - \frac{1}{{{n^{nB}}}}} \right)^{\frac{1}{B}}}$.

    Proof  Recursing on (2) gives

    Recursing on (3) gives

    Further, combining the above relations yields

    Putting these together and noting that zi(0)=0, for t≥0 we have

    By Corollary 2 of [14], $[A(t-1:0)]_{il} \le \mathop{\max}\limits_{1 \le l \le n} [A(t-1:0)]_{il}$ for i=1, 2, …, n, and hence (6) gives

    By Lemma 3, we have

    This completes the proof.

    We now state the main results of this paper.

    Theorem 1  Suppose Assumption 1 holds and the step-size sequence $\{a(t)\}_{t=0}^{\infty}$ is nonnegative and nonincreasing. Then for any T≥1 and x*∈X*,

    Proof  By the convexity of F(x) and Assumption 1(b),

    Considering the second term in (6) and applying Lemma 1, we obtain

    Then, by Cauchy's inequality and Lemma 2,

    We now estimate the third term in (6):

    Combining (7), (8) and (9), we obtain

    Finally, applying Lemma 4 establishes the conclusion of Theorem 1. This completes the proof.

    Theorem 2  Suppose the conditions of Theorem 1 hold. Let $\psi(\boldsymbol{x}^*)\le R^2$ and choose the step size a(t)=$\frac{p}{{\sqrt t }}$ (t≥1), where p is a positive constant. Then for any T≥1 and x*∈X*,

    Proof  Using the result of Theorem 1 and $\psi(\boldsymbol{x}^*)\le R^2$, we have

    Noting that a(t)=$\frac{p}{{\sqrt t }}$ (t≥1), we have $\sum\limits_{t = 1}^T {\frac{1}{{\sqrt t }}} \le 2\sqrt T$. The conclusion then follows from the above results. This completes the proof.
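    The bound $\sum\limits_{t = 1}^T \frac{1}{\sqrt t} \le 2\sqrt T$ used in this step is the standard integral comparison: since $\frac{1}{\sqrt t} \le \int_{t-1}^{t} \frac{\mathrm{d}s}{\sqrt s}$ for t≥2,

    $\sum\limits_{t = 1}^T \frac{1}{\sqrt t} \le 1 + \int_1^T \frac{\mathrm{d}s}{\sqrt s} = 2\sqrt T - 1 \le 2\sqrt T$.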

    The first two terms in Theorem 2 constitute the optimization error, while the last term is the network error; here λ characterizes the connectivity of the network and δ its degree of balance. Moreover, Theorem 2 shows that

    This shows that F(${{\hat {\mathit{\pmb{x}}}}_j}$(T)) converges to the optimal value F(x*) at the rate O($\frac{1}{{\sqrt T }}$), where the constant depends on the parameters λ and δ.
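    Written schematically, this convergence statement reads $F\left(\hat{\boldsymbol{x}}_j(T)\right) - F(\boldsymbol{x}^*) \le \frac{C(\lambda, \delta, L, R, n)}{\sqrt T}$, where $C(\lambda, \delta, L, R, n)$ is shorthand (introduced only for this restatement) for the problem- and network-dependent constant produced by Theorem 2.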

4.   Numerical Experiments
  • Consider an l1-regression problem [6]: given n data pairs (ai, bi)∈${{\mathbb{R}}^d}$×$\mathbb{R}$, estimate the unknown parameter x∈${{\mathbb{R}}^d}$ such that $\boldsymbol{a}_i^{\rm T}\boldsymbol{x} \approx b_i$, i.e., solve the following optimization problem:

    Clearly F(x) is convex and its subgradients are bounded, with bound L=$\max\limits_i \|\boldsymbol{a}_i\|$.

    i=1,…,n,假设点对(aibi)是随机产生的,且服从标准正态分布.取步长a(t)=$\frac{{0.1}}{{\sqrt t }}$D=10.考虑最大误差$\max {\rm{ error }} = \begin{array}{*{20}{c}} {\max }\\ {i \in V} \end{array}\left[ {F\left( {{{\hat {\mathit{\pmb{x}}}}_i}(T)} \right) - F\left( {{{\mathit{\pmb{x}}}^*}} \right)} \right]$.

    Figures 1(a) and 1(b) show the convergence curves of the maximum error for Node=100 agents with dimensions Dim=2 and Dim=4, respectively. They indicate that the algorithm converges, and a comparison of the two panels shows that the convergence error grows as the problem dimension increases.

    The proposed PS-DDA algorithm is then compared with the distributed push-sum subgradient (PS-SG) algorithm of [11]. Figures 2(a) and 2(b) show the convergence curves of the maximum error for Node=200 and Node=400 agents, respectively, with dimension Dim=2. In both cases the accuracy achieved by PS-DDA is slightly better than that of PS-SG.
