
Solving Robust Controllers with Reinforcement Learning

Problem Background

Robust control of uncertain dynamical systems has received considerable attention from the control community in recent years [1]. In many applications, such as chemical processes, power systems, robotics, and aerospace engineering, an accurate mathematical model of the plant is often unavailable or the plant itself is uncertain. Guaranteeing robustness is therefore essential for precise control of such systems.

Robust stabilization is closely related to optimal controller design [2]: under certain conditions, finding a robust controller can be transformed into solving an optimal control problem. For discrete-time linear systems, the optimal controller is obtained by solving an algebraic Riccati equation (ARE); for nonlinear systems, it requires solving the HJB equation. For general nonlinear systems, however, an analytical solution of the HJB equation may not exist, so iterative algorithms such as approximate dynamic programming (ADP) are commonly used.
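As a small illustration of the linear-quadratic case, the discrete-time ARE can be solved directly when the model is known. The sketch below uses SciPy with hypothetical system matrices; they are placeholders, not tied to any specific plant discussed here.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical nominal model and weights (for illustration only).
A = np.array([[0.98, 0.1],
              [0.0,  0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the discrete-time ARE: P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q
P = solve_discrete_are(A, B, Q, R)

# Optimal state-feedback gain for u_k = K x_k
K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(K)
```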

In recent years, reinforcement learning has achieved great success in decision-making under uncertain environments [3]. RL algorithms are usually classified as on-policy or off-policy: an on-policy algorithm applies the policy obtained after each iteration to the plant, whereas in an off-policy algorithm the policy being optimized and the behavior policy interacting with the environment need not be the same, so the policy update can take place after multiple steps of interaction.

For the robust control problem of uncertain discrete-time linear systems, several groups have already applied reinforcement learning, using adaptive dynamic programming to solve the Bellman equation when the system dynamics are completely or partially unknown.

This article surveys the use of reinforcement learning to solve robust controllers for uncertain discrete-time linear systems.

Problem Description

A discrete-time linear system with uncertainty can be written as:

\[x_{k+1}=[A+\Delta(p)] x_{k}+B u_{k}\]

where the system state is \(x_{k} \in \mathbb{R}^{n}\), the control input is \(u_{k} \in \mathbb{R}^{m}\), the drift dynamics are \(A+\Delta \in \mathbb{R}^{n \times n}\), and the input dynamics are \(B \in \mathbb{R}^{n \times m}\). The uncertainty parameter \(p\) is restricted to a bounded set \(\Omega\). The system matrix consists of a nominal part and an uncertain part, and the corresponding nominal system is

\[x_{k+1}=A x_{k}+B u_{k}\]

Let the control law be \(u_{k}=K x_{k}\). The robust control problem can then be stated as follows:

For the closed-loop system \(x_{k+1}=(A+B K) x_{k}+\Delta x_{k}\), find a state-feedback control law \(u_{k}=K x_{k}\) such that the closed loop is asymptotically stable for every \(p \in \Omega\).

The feedback gain \(K\) can be designed via an ARE by introducing the auxiliary system:

\[x_{k+1}=A x_{k}+B u_{k}+D v_{k}\]

With this auxiliary system, the robust control problem can be transformed into an optimal control problem: find state-feedback controllers \(u_k=K^{\star}x_k\) and \(v_k=L^{\star}x_k\) that minimize the following cost function of the auxiliary system:

\[V(x_k)=\frac{1}{2}\sum_{j=k}^{\infty}(x_j^TQx_j+x_j^TFx_j+\beta^2x_j^Tx_j+u_j^TR_1u_j+v_j^TR_2v_j)\]

For notational convenience, the \(k\)-th term of the cost (dropping the constant factor \(1/2\)) is written as:

\[\begin{aligned} r\left(x_{k}, u_{k}, v_{k}\right)=x_{k}^{\mathrm{T}} Q x_{k}+x_{k}^{\mathrm{T}} F x_{k}+\beta^{2} x_{k}^{\mathrm{T}} x_{k} +u_{k}^{\mathrm{T}} R_{1} u_{k}+v_{k}^{\mathrm{T}} R_{2} v_{k} \end{aligned}\]
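As a concrete illustration of this stage cost, here is a minimal Python sketch; the weights \(Q\), \(F\), \(R_1\), \(R_2\) and the scalar \(\beta\) are hypothetical placeholders chosen only to show the structure of \(r(x_k,u_k,v_k)\).

```python
import numpy as np

def stage_cost(x, u, v, Q, F, beta, R1, R2):
    """r(x, u, v) = x'Qx + x'Fx + beta^2 x'x + u'R1 u + v'R2 v."""
    return (x @ Q @ x + x @ F @ x + beta**2 * (x @ x)
            + u @ R1 @ u + v @ R2 @ v)

# Hypothetical weights for a 2-state, 1-input, 1-disturbance example.
Q, F = np.eye(2), 0.5 * np.eye(2)
R1, R2 = np.array([[1.0]]), np.array([[5.0]])
beta = 0.3

x, u, v = np.array([1.0, -0.5]), np.array([0.2]), np.array([0.0])
print(stage_cost(x, u, v, Q, F, beta, R1, R2))
```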

Solving the Robust Controller via the ARE

Assume there exists a positive definite matrix \(P>0\) that satisfies the following ARE: \[\begin{aligned} 0=-\left[\begin{array}{c} B^{\mathrm{T}} P A \\ D^{\mathrm{T}} P A \end{array}\right]^{\mathrm{T}}\left[\begin{array}{cc} R_{1}+B^{\mathrm{T}} P B & B^{\mathrm{T}} P D \\ D^{\mathrm{T}} P B & R_{2}+D^{\mathrm{T}} P D \end{array}\right]^{-1}\left[\begin{array}{c} B^{\mathrm{T}} P A \\ D^{\mathrm{T}} P A \end{array}\right]+A^{\mathrm{T}} P A-P+\bar{Q} \end{aligned}\]

where \(\overline{Q}=Q+F+\beta^2I\). The Bellman equation takes the recursive form:

\[V(x_k)=V(x_{k+1})+r(x_k,u_k,v_k)\]

Define the Hamiltonian:

\[H(x_k, u_k,v_k)=x_k^TQx_k+x_k^TFx_k+\beta^2x_k^Tx_k+u_k^TR_1u_k+v_k^TR_2v_k+V(x_{k+1})-V(x_k)\]

The value function \(V(x_k)\) is taken to be quadratic:

\[V(x_k)=x_k^TPx_k\]

Following reference [4]:

The necessary conditions for optimality are:

\[\frac{\partial H\left(x_{k}, u_{k}, v_{k}\right)}{\partial u_{k}}=0, \quad \frac{\partial H\left(x_{k}, u_{k}, v_{k}\right)}{\partial v_{k}}=0\]

Substituting the quadratic value function into the Hamiltonian, the above conditions can be written as:

\[\left[\begin{array}{cc} \left(R_{1}+B^{\mathrm{T}} P B\right) & B^{\mathrm{T}} P D \\ D^{\mathrm{T}} P B & \left(R_{2}+D^{\mathrm{T}} P D\right) \end{array}\right]\left[\begin{array}{l} u_{k}^{*} \\ v_{k}^{*} \end{array}\right]=-\left[\begin{array}{c} B^{\mathrm{T}} P A \\ D^{\mathrm{T}} P A \end{array}\right] x_{k}\]

Let:

\[\begin{aligned} \mathcal{E} &=B^{\mathrm{T}} P A \\ \mathcal{G} &=D^{\mathrm{T}} P A \\ \mathcal{M} &=\left[\begin{array}{cc} \mathcal{M}_{11} & \mathcal{M}_{12} \\ \mathcal{M}_{21} & \mathcal{M}_{22} \end{array}\right] =\left[\begin{array}{cc} \left(R_{1}+B^{\mathrm{T}} P B\right) & B^{\mathrm{T}} P D \\ D^{\mathrm{T}} P B & \left(R_{2}+D^{\mathrm{T}} P D\right) \end{array}\right] \end{aligned}\]

Then \(u_k^{\star}\) and \(v_k^{\star}\) can be expressed as:

\[\left[\begin{array}{l} u_{k}^{*} \\ v_{k}^{*} \end{array}\right]=-\mathcal{M}^{-1}\left[\begin{array}{l} \mathcal{E} \\ \mathcal{G} \end{array}\right] x_{k}\]

\(u_k^{\star}\) and \(v_k^{\star}\) satisfy:

\[\begin{aligned} 0=& \min _{u_{k}, v_{k}} H\left(x_{k}, u_{k}, v_{k}\right)=H\left(x_{k}, u_{k}^{*}, v_{k}^{*}\right) \\ =&\left[\begin{array}{c} u_{k}^{*} \\ v_{k}^{*} \end{array}\right]^{\mathrm{T}}\left[\begin{array}{cc} R_{1}+B^{\mathrm{T}} P B & B^{\mathrm{T}} P D \\ D^{\mathrm{T}} P B & R_{2}+D^{\mathrm{T}} P D \end{array}\right]\left[\begin{array}{c} u_{k}^{*} \\ v_{k}^{*} \end{array}\right] +\left[\begin{array}{c} u_{k}^{*} \\ v_{k}^{*} \end{array}\right]^{\mathrm{T}}\left[\begin{array}{c} B^{\mathrm{T}} P A \\ D^{\mathrm{T}} P A \end{array}\right] x_{k}+x_{k}^{\mathrm{T}}\left(A^{\mathrm{T}} P A-P\right) x_{k} +x_{k}^{\mathrm{T}}\left[\begin{array}{ll} A^{\mathrm{T}} P B & A^{\mathrm{T}} P D \end{array}\right]\left[\begin{array}{c} u_{k}^{*} \\ v_{k}^{*} \end{array}\right]+x_{k}^{\mathrm{T}} \bar{Q} x_{k} \end{aligned}\]

which yields:

\[K^{*}=-\left[R_{1}+B^{\mathrm{T}} P B-B^{\mathrm{T}} P D\left(R_{2}+D^{\mathrm{T}} P D\right)^{-1} D^{\mathrm{T}} P B\right]^{-1}\left[B^{\mathrm{T}} P A-B^{\mathrm{T}} P D\left(R_{2}+D^{\mathrm{T}} P D\right)^{-1} D^{\mathrm{T}} P A\right]\]

\[L^{*}=-\left[R_{2}+D^{\mathrm{T}} P D-D^{\mathrm{T}} P B\left(R_{1}+B^{\mathrm{T}} P B\right)^{-1} B^{\mathrm{T}} P D\right]^{-1}\left[D^{\mathrm{T}} P A-D^{\mathrm{T}} P B\left(R_{1}+B^{\mathrm{T}} P B\right)^{-1} B^{\mathrm{T}} P A\right]\]
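When the matrices \((A,B,D)\) and the weights are known, the ARE above can be solved numerically, for example by the fixed-point iteration \(P \leftarrow A^{\mathrm{T}}PA - S^{\mathrm{T}}\mathcal{M}^{-1}S + \bar{Q}\), where \(S\) stacks \(B^{\mathrm{T}}PA\) and \(D^{\mathrm{T}}PA\). The sketch below uses hypothetical data, assumes the iteration converges for that data, and then recovers \(K^{*}\) and \(L^{*}\) from the \(\mathcal{M}^{-1}\) form above.

```python
import numpy as np

# Hypothetical auxiliary-system data (for illustration only).
A = np.array([[0.98, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
D = np.array([[0.05], [0.0]])
Q, F, beta = np.eye(2), 0.5 * np.eye(2), 0.3
Qbar = Q + F + beta**2 * np.eye(2)
R1, R2 = np.array([[1.0]]), np.array([[5.0]])

G = np.hstack([B, D])                      # combined input matrix [B D]
R = np.block([[R1, np.zeros((1, 1))],
              [np.zeros((1, 1)), R2]])

# Fixed-point iteration on the ARE.
P = np.eye(2)
for _ in range(1000):
    M = R + G.T @ P @ G
    S = G.T @ P @ A                        # stacks B'PA and D'PA
    P_next = A.T @ P @ A - S.T @ np.linalg.solve(M, S) + Qbar
    if np.linalg.norm(P_next - P) < 1e-12:
        P = P_next
        break
    P = P_next

# Optimal gains: [K*; L*] = -M^{-1} [B'PA; D'PA]
KL = -np.linalg.solve(R + G.T @ P @ G, G.T @ P @ A)
K_star, L_star = KL[:1, :], KL[1:, :]
print(K_star, L_star)
```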

Based on reference [5]:

The solution of the ARE can also be expressed in an alternative form:

\[P=A^{\mathrm{T}}\left(P^{-1}+B R_{1}^{-1} B^{\mathrm{T}}+D R_{2}^{-1} D^{\mathrm{T}}\right)^{-1} A+\bar{Q}\]

\[K^{*}=-R_{1}^{-1} B^{\mathrm{T}}\left(P^{-1}+B R_{1}^{-1} B^{\mathrm{T}}+D R_{2}^{-1} D^{\mathrm{T}}\right)^{-1} A\]

\[L^{*}=-R_{2}^{-1} D^{\mathrm{T}}\left(P^{-1}+B R_{1}^{-1} B^{\mathrm{T}}+D R_{2}^{-1} D^{\mathrm{T}}\right)^{-1} A\]
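The equivalence between this alternative form and the \(\mathcal{M}^{-1}\) form above follows from the matrix inversion (Woodbury) lemma and holds for any \(P>0\), not only the ARE solution. A quick numerical sanity check with random hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 3, 2, 1
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
D = rng.standard_normal((n, q))
R1, R2 = np.eye(m), 2.0 * np.eye(q)
T = rng.standard_normal((n, n))
P = T @ T.T + n * np.eye(n)                 # random positive definite P

# Gains from the block form: [K; L] = -(R + G'PG)^{-1} G'PA
G = np.hstack([B, D])
R = np.block([[R1, np.zeros((m, q))], [np.zeros((q, m)), R2]])
KL = -np.linalg.solve(R + G.T @ P @ G, G.T @ P @ A)

# Gains from the alternative form: -R1^{-1} B' (P^{-1} + B R1^{-1} B' + D R2^{-1} D')^{-1} A
W = np.linalg.inv(np.linalg.inv(P) + B @ np.linalg.solve(R1, B.T) + D @ np.linalg.solve(R2, D.T))
K_alt = -np.linalg.solve(R1, B.T) @ W @ A
L_alt = -np.linalg.solve(R2, D.T) @ W @ A

print(np.allclose(KL[:m], K_alt), np.allclose(KL[m:], L_alt))   # expected: True True
```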

The proof of closed-loop stability is given in [6]. Note that this ARE-based design requires the system dynamics to be known.

Model-Based On-Policy Reinforcement Learning Algorithm

Without Probing Noise

On-policy policy iteration starts from initial admissible policies \(u^{0}\left(x_{k}\right)\) and \(v^{0}\left(x_{k}\right)\). At the \(i\)-th iteration, the value function associated with the policies \(u^{i}\left(x_{k}\right)\) and \(v^{i}\left(x_{k}\right)\) is obtained by solving the following Bellman equation:

\[\begin{aligned} V^{i}\left(x_{k}\right) &=r\left(x_{k}, u_{k}^{i}, v_{k}^{i}\right)+V^{i}\left(x_{k+1}\right) \\ &=r\left(x_{k}, u_{k}^{i}, v_{k}^{i}\right)+V^{i}\left(A x_{k}+B u^{i}\left(x_{k}\right)+D v^{i}\left(x_{k}\right)\right) \end{aligned}\]

with boundary condition \(V^{i}(0)=0\), where \(x_{k+1}=A x_{k}+B u^{i}\left(x_{k}\right)+D v^{i}\left(x_{k}\right)\). The control laws are then updated iteratively by:

\[\begin{array}{l} \left\{u^{i+1}\left(x_{k}\right), v^{i+1}\left(x_{k}\right)\right\} \\ =\underset{u_{k}, v_{k}}{\arg \min }\left\{r\left(x_{k}, u_{k}, v_{k}\right)+V^{i}\left(A x_{k}+B u_{k}+D v_{k}\right)\right\} \end{array}\]

In terms of \(K\) and \(L\), the update is:

\[\begin{aligned} K^{i+1} =&-\left[R_{1}+B^{\mathrm{T}} P^{i} B-B^{\mathrm{T}} P^{i} D\left(R_{2}+D^{\mathrm{T}} P^{i} D\right)^{-1} D^{\mathrm{T}} P^{i} B\right]^{-1} \\ & \times\left[B^{\mathrm{T}} P^{i} A-B^{\mathrm{T}} P^{i} D\left(R_{2}+D^{\mathrm{T}} P^{i} D\right)^{-1} D^{\mathrm{T}} P^{i} A\right] \end{aligned}\]

\[\begin{aligned} L^{i+1}=&-\left[R_{2}+D^{\mathrm{T}} P^{i} D-D^{\mathrm{T}} P^{i} B\left(R_{1}+B^{\mathrm{T}} P^{i} B\right)^{-1} B^{\mathrm{T}} P^{i} D\right]^{-1} \\ & \times\left[D^{\mathrm{T}} P^{i} A-D^{\mathrm{T}} P^{i} B\left(R_{1}+B^{\mathrm{T}} P^{i} B\right)^{-1} B^{\mathrm{T}} P^{i} A\right] \end{aligned}\]

\(i \rightarrow \infty\) 时,算法可以保证稳定,此时,\(V^{i}\left(x_{k}\right) \rightarrow V^{*}\left(x_{k}\right)\)\(u^{i}\left(x_{k}\right) \rightarrow u^{*}\left(x_{k}\right)\)\(v^{i}\left(x_{k}\right) \rightarrow v^{*}\left(x_{k}\right)\),证明过程可以参考文献[]7]。

\(u^{i}\left(x_{k}\right)\)\(v^{i}\left(x_{k}\right)\)可以看作是\(u^{*}\left(x_{k}\right) \text { and } v^{*}\left(x_{k}\right)\)\(i\)次迭代的近似。而$u^{i+1}(x_{k}) \(和\)v{i+1}(x_{k})\(是由\)V{i}(x_{k})\(获得,其是\)u{i}(x_{k})\(和\)v{i}(x_{k})\(的代价函数,因此,在每次迭代时,更新的控制策略需要应用到系统中,然后价值函数\)V^{i}(x_{k})$才会有变化,这就是on-policy的体现。

With Probing Noise

"Exploration" and "exploitation" are a fundamental trade-off in reinforcement learning and strongly affect algorithm performance. The notion of persistent excitation is closely related to "exploration" in ADP and guarantees that the learned parameters converge to their optimal values.

In the policy evaluation step of Algorithm 1, the on-policy Bellman equation can be written as:

\[\left(x_{k}^{\mathrm{T}} \otimes x_{k}^{\mathrm{T}}-x_{k+1}^{\mathrm{T}} \otimes x_{k+1}^{\mathrm{T}}\right) \operatorname{vec}\left(P^{i}\right)=r\left(x_{k}, u_{k}^{i}, v_{k}^{i}\right)\]

where the symmetric matrix \(P^{i}\) has \(n(n+1)/2\) independent entries, so the above defines a least-squares problem over the collected data. To ensure that this problem admits a solution at every iteration, persistent excitation must be introduced. Following the definition in [8]:

A bounded signal vector \(\eta_{i} \in \mathbb{R}^{q}, q>1\), is said to be persistently exciting if there exist \(L>0\) and \(\alpha_{0}>0\) such that \(\sum_{i=k}^{k+L} \eta_{i} \eta_{i}^{\mathrm{T}} \geq \alpha_{0} I\) for all \(k \geq i_{0}\).
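A small sketch of how this condition can be checked on recorded data: slide a window of length \(L\) over the signal and verify that the smallest eigenvalue of \(\sum_{i=k}^{k+L}\eta_{i}\eta_{i}^{\mathrm{T}}\) stays above some \(\alpha_{0}>0\). The window length and threshold below are hypothetical.

```python
import numpy as np

def is_persistently_exciting(eta, L, alpha0):
    """eta: array of shape (T, q); check sum_{i=k}^{k+L} eta_i eta_i' >= alpha0 * I for all k."""
    T = eta.shape[0]
    for k in range(T - L):
        S = sum(np.outer(eta[i], eta[i]) for i in range(k, k + L + 1))
        if np.linalg.eigvalsh(S).min() < alpha0:
            return False
    return True

rng = np.random.default_rng(1)
eta = rng.standard_normal((200, 3))          # e.g. a regressor built from measured (x_k, u_k, v_k)
print(is_persistently_exciting(eta, L=10, alpha0=0.1))
```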

To satisfy the persistent excitation condition, a probing noise \(e_k\) is added to the control input. At the \(i\)-th iteration, the control signal actually applied to the system is therefore:

\[\bar{u}_{k}^{i}=u_{k}^{i}+e_{k}\]

which yields the on-policy Bellman equation with probing noise:

\[\begin{aligned} x_{k}^{\mathrm{T}} \bar{P}^{i} x_{k}=& r\left(x_{k}, \bar{u}_{k}^{i}, v_{k}^{i}\right)+x_{k+1}^{\mathrm{T}} \bar{P}^{i} x_{k+1} \\ =& x_{k}^{\mathrm{T}} \bar{Q} x_{k}+\left(\bar{u}_{k}^{i}\right)^{\mathrm{T}} R_{1} \bar{u}_{k}^{i}+\left(v_{k}^{i}\right)^{\mathrm{T}} R_{2} v_{k}^{i} \\ &+\left(A x_{k}+B u_{k}^{i}+B e_{k}+D v_{k}^{i}\right)^{\mathrm{T}} \bar{P}^{i} \\ & \times\left(A x_{k}+B u_{k}^{i}+B e_{k}+D v_{k}^{i}\right) \end{aligned}\]
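A sketch of how \(\bar{P}^{i}\) could be estimated from data under fixed gains \(K^{i}, L^{i}\) with probing noise, using the least-squares form \((x_k^{\mathrm{T}}\otimes x_k^{\mathrm{T}} - x_{k+1}^{\mathrm{T}}\otimes x_{k+1}^{\mathrm{T}})\operatorname{vec}(\bar{P}^{i}) = r(x_k,\bar{u}_k^{i},v_k^{i})\). All matrices and gains are hypothetical; comparing against the noise-free Lyapunov solution illustrates the bias discussed next.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(2)

# Hypothetical model, weights, and fixed gains K^i, L^i.
A = np.array([[0.98, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
D = np.array([[0.05], [0.0]])
Qbar, R1, R2 = 1.59 * np.eye(2), np.array([[1.0]]), np.array([[5.0]])
K, L = np.array([[-0.5, -1.0]]), np.array([[0.1, 0.0]])

# Noise-free policy evaluation (Lyapunov equation) for reference.
Ac = A + B @ K + D @ L
P_ref = solve_discrete_lyapunov(Ac.T, Qbar + K.T @ R1 @ K + L.T @ R2 @ L)

# Collect data under u = Kx + e (probing noise) and regress vec(P_bar).
x = np.array([1.0, -1.0])
rows, rhs = [], []
for _ in range(300):
    e = 0.1 * rng.standard_normal(1)
    u, v = K @ x + e, L @ x
    x_next = A @ x + B @ u + D @ v
    rows.append(np.kron(x, x) - np.kron(x_next, x_next))
    rhs.append(x @ Qbar @ x + u @ R1 @ u + v @ R2 @ v)
    x = x_next
P_bar = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0].reshape(2, 2)

print(np.round(P_ref, 3))
print(np.round(P_bar, 3))   # generally differs from P_ref: the probing noise biases the estimate
```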

Unlike Algorithm 1, Algorithm 2 does not produce the same solution: the probing noise enters the learned value function, so \(\bar{P}^{i}\) is biased away from \(P^{i}\). This sensitivity to probing noise is a drawback that limits the "exploration" available to on-policy reinforcement learning.

When using Algorithm 1 or Algorithm 2, the updated control policy must be applied to the system in order to update the value function \(V^{i}\left(x_{k}\right)\), so the on-policy scheme is effectively an offline algorithm. Moreover, the policy iteration requires the system dynamics \((A, B, D)\), so the on-policy scheme is also model-based. To avoid the inefficiency of this offline computation, off-policy algorithms are introduced next.

Model-Based Off-Policy Reinforcement Learning Algorithm

Without Probing Noise

Suppose admissible policies \(u_{k}=u\left(x_{k}\right)\) and \(v_{k}=v\left(x_{k}\right)\) are applied to the system. The system can then be rewritten as:

\[x_{k+1}=A^{i} x_{k}+B\left(u_{k}-K^{i} x_{k}\right)+D\left(v_{k}-L^{i} x_{k}\right)\]

where \(A^{i}=A+B K^{i}+D L^{i}, u_{k}^{i}=K^{i} x_{k}, v_{k}^{i}=L^{i} x_{k}\). Here \(u(\cdot)\) and \(v(\cdot)\) are the behavior policies applied to the system, while \(u_{k}^{i}=K^{i} x_{k}\) and \(v_{k}^{i}=L^{i} x_{k}\) are the target policies used for iterative learning.

Consider the value function \(V^{i}\left(x_{k}\right)=x_{k}^{\mathrm{T}} P^{i} x_{k}\). Expanding this quadratic value function (via a Taylor expansion, which is exact for quadratics):

\[\begin{aligned} V^{i}\left(x_{k}\right) &-V^{i}\left(x_{k+1}\right) =2 x_{k+1}^{\mathrm{T}} P^{i}\left(x_{k}-x_{k+1}\right)+\left(x_{k}-x_{k+1}\right)^{\mathrm{T}} P^{i}\left(x_{k}-x_{k+1}\right) \end{aligned}\]

Substituting the system dynamics into the above expression gives:

\[\begin{array}{l} V^{i}\left(x_{k}\right)-V^{i}\left(x_{k+1}\right) \\ =x_{k}^{\mathrm{T}} P^{i} x_{k}-x_{k}^{\mathrm{T}}\left(A^{i}\right)^{\mathrm{T}} P^{i} A^{i} x_{k} \\ \quad-\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i} x_{k+1}-\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i} A^{i} x_{k} \\ \quad-\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i} x_{k+1}-\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i} A^{i} x_{k} \end{array}\]

\(P^{i}\) also satisfies the discrete-time Lyapunov equation:

\[P^{i}=\bar{Q}+\left(K^{i}\right)^{\mathrm{T}} R_{1} K^{i}+\left(L^{i}\right)^{\mathrm{T}} R_{2} L^{i}+\left(A^{i}\right)^{\mathrm{T}} P^{i} A^{i}\]

where \(\bar{Q}=Q+F+\beta^{2} I\). Substituting this into the expression above yields the off-policy Bellman equation:

\[\begin{array}{l} x_{k}^{\mathrm{T}} P^{i} x_{k}-x_{k+1}^{\mathrm{T}} P^{i} x_{k+1} \\ =x_{k}^{\mathrm{T}} \bar{Q} x_{k}+x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} R_{1} K^{i} x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} R_{2} L^{i} x_{k} \\ \quad-\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i} x_{k+1}-\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i} A^{i} x_{k} \\ \quad-\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i} x_{k+1}-\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i} A^{i} x_{k} \end{array}\]

\(A^{i}=A+B K^{i}+D L^{i}\)代入off-policy贝尔曼方程可得:

\[\begin{array}{l} x_{k}^{\mathrm{T}} P^{i} x_{k}-\left(A x_{k}+B u_{k}+D v_{k}\right)^{\mathrm{T}} P^{i}\left(A x_{k}+B u_{k}+D v_{k}\right) \\ \quad=x_{k}^{\mathrm{T}} \bar{Q} x_{k}+x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} R_{1} K^{i} x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} R_{2} L^{i} x_{k} \\ \quad-\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i}\left(A x_{k}+B u_{k}+D v_{k}\right) \\ \quad-\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i}\left(A+B K^{i}+D L^{i}\right) x_{k} \\ \quad-\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i}\left(A x_{k}+B u_{k}+D v_{k}\right) \\ \quad-\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i}\left(A+B K^{i}+D L^{i}\right) x_{k} \end{array}\]

Simplifying yields:

\[\begin{array}{l} x_{k}^{\mathrm{T}} P^{i} x_{k}-x_{k}^{\mathrm{T}} A^{\mathrm{T}} P^{i} A x_{k} \\ \quad=x_{k}^{\mathrm{T}} \bar{Q} x_{k}+x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} R_{1} K^{i} x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} R_{2} L^{i} x_{k} \\ \quad+x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i} B K^{i} x_{k}+x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i} D L^{i} x_{k} \\ \quad+2 x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i} A x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i} B K^{i} x_{k} \\ \quad+2 x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} B^{\mathrm{T}} P^{i} A x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} D^{\mathrm{T}} P^{i} D L^{i} x_{k} \end{array}\]

Further simplification recovers the same form as in Algorithm 1:

\[\begin{aligned} 0=& x_{k}^{\mathrm{T}} \bar{Q} x_{k}+x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} R_{1} K^{i} x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} R_{2} L^{i} x_{k} \\ &+x_{k}^{\mathrm{T}}\left(A+B K^{i}+D L^{i}\right)^{\mathrm{T}} P^{i}\left(A+B K^{i}+D L^{i}\right) x_{k} \\ &-x_{k}^{\mathrm{T}} P^{i} x_{k} \end{aligned}\]

In this sense, Algorithm 1 and Algorithm 3 are equivalent.
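A quick numerical check of this equivalence with hypothetical data: once \(P^{i}\) is obtained from the Lyapunov equation for fixed \(K^{i}, L^{i}\), the off-policy Bellman identity above holds for arbitrary behavior inputs \(u_k, v_k\), not only for \(u_k^{i}, v_k^{i}\).

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(3)
A = np.array([[0.98, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
D = np.array([[0.05], [0.0]])
Qbar, R1, R2 = 1.59 * np.eye(2), np.array([[1.0]]), np.array([[5.0]])
K, L = np.array([[-0.5, -1.0]]), np.array([[0.1, 0.0]])

Ai = A + B @ K + D @ L
P = solve_discrete_lyapunov(Ai.T, Qbar + K.T @ R1 @ K + L.T @ R2 @ L)

# Arbitrary behavior inputs u_k, v_k (not the target policies).
x = np.array([1.0, -2.0])
u, v = rng.standard_normal(1), rng.standard_normal(1)
x_next = A @ x + B @ u + D @ v

lhs = x @ P @ x - x_next @ P @ x_next
rhs = (x @ Qbar @ x + x @ K.T @ R1 @ K @ x + x @ L.T @ R2 @ L @ x
       - (v - L @ x) @ D.T @ P @ x_next - (v - L @ x) @ D.T @ P @ Ai @ x
       - (u - K @ x) @ B.T @ P @ x_next - (u - K @ x) @ B.T @ P @ Ai @ x)
print(np.isclose(lhs, rhs))   # expected: True for any u, v
```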

With Probing Noise

Add a probing noise to the control policy:

\[\hat{u}_{k}=u_{k}+e_{k}\]

The off-policy Bellman equation with probing noise can then be written as:

\[\begin{array}{l} x_{k}^{\mathrm{T}} \hat{P}^{i} x_{k}-\left[x_{k+1}+B e_{k}\right]^{\mathrm{T}} \hat{P}^{i}\left[x_{k+1}+B e_{k}\right] \\ =x_{k}^{\mathrm{T}} \bar{Q} x_{k}+x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} R_{1} K^{i} x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} R_{2} L^{i} x_{k} \\ \quad-\left(u_{k}+e_{k}-K^{i} x_{k}\right)^{\mathrm{T}} B^{\mathrm{T}} \hat{P}^{i} A^{i} x_{k}-\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} D^{\mathrm{T}} \hat{P}^{i} A^{i} x_{k} \\ \quad-\left(u_{k}+e_{k}-K^{i} x_{k}\right)^{\mathrm{T}} B^{\mathrm{T}} \hat{P}^{i}\left[x_{k+1}+B e_{k}\right] \\ \quad-\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} D^{\mathrm{T}} \hat{P}^{i}\left[x_{k+1}+B e_{k}\right] \end{array}\]

Model-Free Off-Policy Reinforcement Learning Algorithm

Using the Kronecker product, the off-policy Bellman equation can be rewritten as:

\[\begin{aligned} \left(x_{k}^{\mathrm{T}} \otimes\right.&\left.x_{k}^{\mathrm{T}}\right) \operatorname{vec}\left(P^{i}\right)-\left(x_{k+1}^{\mathrm{T}} \otimes x_{k+1}^{\mathrm{T}}\right) \operatorname{vec}\left(P^{i}\right) \\ &+2\left[\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} \otimes x_{k}^{\mathrm{T}}\right] \operatorname{vec}\left(D^{\mathrm{T}} P^{i} A\right) \\ &+\left[\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} \otimes\left(u_{k}+K^{i} x_{k}\right)^{\mathrm{T}}\right] \operatorname{vec}\left(D^{\mathrm{T}} P^{i} B\right) \\ &+\left[\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} \otimes\left(v_{k}+L^{i} x_{k}\right)^{\mathrm{T}}\right] \operatorname{vec}\left(D^{\mathrm{T}} P^{i} D\right) \\ &+2\left[\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} \otimes x_{k}^{\mathrm{T}}\right] \operatorname{vec}\left(B^{\mathrm{T}} P^{i} A\right) \\ &+\left[\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} \otimes\left(u_{k}+K^{i} x_{k}\right)^{\mathrm{T}}\right] \operatorname{vec}\left(B^{\mathrm{T}} P^{i} B\right) \\ &+\left[\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} \otimes\left(v_{k}+L^{i} x_{k}\right)^{\mathrm{T}}\right] \operatorname{vec}\left(B^{\mathrm{T}} P^{i} D\right) \\ =& x_{k}^{\mathrm{T}} \bar{Q} x_{k}+x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} R_{1} K^{i} x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} R_{2} L^{i} x_{k} \end{aligned}\]

This equation can be solved by least squares, as follows.

Let:

\[X^{i}=\left[\left(X_{1}^{i}\right)^{\mathrm{T}}\left(X_{2}^{i}\right)^{\mathrm{T}}\left(X_{3}^{i}\right)^{\mathrm{T}}\left(X_{4}^{i}\right)^{\mathrm{T}}\left(X_{5}^{i}\right)^{\mathrm{T}}\left(X_{6}^{i}\right)^{\mathrm{T}}\left(X_{7}^{i}\right)^{\mathrm{T}}\right]^{\mathrm{T}}\]

where \(\begin{array}{l} X_{1}^{i}=\operatorname{vec}\left(P^{i}\right), X_{2}^{i}=\operatorname{vec}\left(D^{\mathrm{T}} P^{i} A\right), \quad X_{3}^{i}=\operatorname{vec}\left(D^{\mathrm{T}} P^{i} B\right) \\ X_{4}^{i}=\operatorname{vec}\left(D^{\mathrm{T}} P^{i} D\right), \quad X_{5}^{i}=\operatorname{vec}\left(B^{\mathrm{T}} P^{i} A\right) \\ X_{6}^{i}=\operatorname{vec}\left(B^{\mathrm{T}} P^{i} B\right), \quad X_{7}^{i}=\operatorname{vec}\left(B^{\mathrm{T}} P^{i} D\right) \end{array}\)

The data collected online while the system is running are arranged as:

\[H_{k}^{i}=\left[\begin{array}{lllllll} H_{x x}^{i k} & H_{v x}^{i k} & H_{v u}^{i k} & H_{v v}^{i k} & H_{u x}^{i k} & H_{u u}^{i k} & H_{u v}^{i k} \end{array}\right]\]

where

\[\begin{aligned} H_{x x}^{i k} &=\left(x_{k}^{\mathrm{T}} \otimes x_{k}^{\mathrm{T}}\right)-\left(x_{k+1}^{\mathrm{T}} \otimes x_{k+1}^{\mathrm{T}}\right) \\ H_{v x}^{i k} &=2\left[\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} \otimes x_{k}^{\mathrm{T}}\right] \\ H_{v u}^{i k} &=\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} \otimes\left(u_{k}+K^{i} x_{k}\right)^{\mathrm{T}} \\ H_{v v}^{i k} &=\left(v_{k}-L^{i} x_{k}\right)^{\mathrm{T}} \otimes\left(v_{k}+L^{i} x_{k}\right)^{\mathrm{T}} \\ H_{u x}^{i k} &=2\left[\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} \otimes x_{k}^{\mathrm{T}}\right] \\ H_{u u}^{i k} &=\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} \otimes\left(u_{k}+K^{i} x_{k}\right)^{\mathrm{T}} \\ H_{u v}^{i k} &=\left(u_{k}-K^{i} x_{k}\right)^{\mathrm{T}} \otimes\left(v_{k}+L^{i} x_{k}\right)^{\mathrm{T}} \end{aligned}\]

The \(k\)-th stage cost can likewise be expressed in terms of measured data as:

\[\begin{aligned} r_{k}^{i}=x_{k}^{\mathrm{T}} Q x_{k}+x_{k}^{\mathrm{T}} F x_{k} &+\beta^{2} x_{k}^{\mathrm{T}} x_{k} +x_{k}^{\mathrm{T}}\left(K^{i}\right)^{\mathrm{T}} R_{1} K^{i} x_{k}+x_{k}^{\mathrm{T}}\left(L^{i}\right)^{\mathrm{T}} R_{2} L^{i} x_{k} \end{aligned}\]

Finally, the Kronecker-product form of the Bellman equation can be written compactly as:

\[H_{k}^{i} X^{i}=r_{k}^{i}\]
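A sketch of how one data row \(H_{k}^{i}\) and its target \(r_{k}^{i}\) could be assembled from a measured transition \((x_k, u_k, v_k, x_{k+1})\) and the current gains. The vec ordering is assumed row-major here, so that `np.kron(a, b) @ M.flatten()` equals \(a^{\mathrm{T}} M b\); the helper name and dimensions are illustrative.

```python
import numpy as np

def data_row(x, u, v, x_next, K, L, Qbar, R1, R2):
    """Build one row H_k^i and target r_k^i from a measured transition (x, u, v, x_next).

    The unknown X^i stacks the (row-major) flattened blocks
    [P, D'PA, D'PB, D'PD, B'PA, B'PB, B'PD], so no knowledge of (A, B, D) is needed here.
    """
    du, dv = u - K @ x, v - L @ x                  # deviations from the target policies
    su, sv = u + K @ x, v + L @ x
    H = np.concatenate([
        np.kron(x, x) - np.kron(x_next, x_next),   # multiplies vec(P)
        2 * np.kron(dv, x),                        # multiplies vec(D'PA)
        np.kron(dv, su),                           # multiplies vec(D'PB)
        np.kron(dv, sv),                           # multiplies vec(D'PD)
        2 * np.kron(du, x),                        # multiplies vec(B'PA)
        np.kron(du, su),                           # multiplies vec(B'PB)
        np.kron(du, sv),                           # multiplies vec(B'PD)
    ])
    r = x @ Qbar @ x + x @ K.T @ R1 @ K @ x + x @ L.T @ R2 @ L @ x
    return H, r
```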

The least-squares problem requires at least \(N=n^{2}+m^{2}+r^{2}+2 m r+n r+m n\) data points. Suppose \(N_{1} \geq N\) distinct data samples are collected from the system; then:

\[H_{1: N_{1}} X^{i}=\left[\begin{array}{c} H_{1}^{i} \\ H_{2}^{i} \\ \vdots \\ H_{N_{1}}^{i} \end{array}\right] X^{i}=\left[\begin{array}{c} r_{1}^{i} \\ r_{2}^{i} \\ \vdots \\ r_{N_{1}}^{i} \end{array}\right]=r_{1: N_{1}}\] The least-squares solution is then:

\[\hat{X}^{i}=\left(H_{1: N_{1}}^{\mathrm{T}} H_{1: N_{1}}\right)^{-1} H_{1: N_{1}}^{\mathrm{T}} r_{1: N_{1}}\]

The feedback gains \(K^{i}\) and \(L^{i}\) can then be updated as follows, where \(\hat{X}_{j}^{i}\) denotes the matrix recovered by reshaping the \(j\)-th block of the least-squares estimate \(\hat{X}^{i}\):

\[\begin{aligned} K^{i+1}=&-\left[R_{1}+\hat{X}_{6}^{i}-\hat{X}_{7}^{i}\left(R_{2}+\hat{X}_{4}^{i}\right)^{-1} \hat{X}_{3}^{i}\right]^{-1} \\ & \times\left[\hat{X}_{5}^{i}-\hat{X}_{7}^{i}\left(R_{2}+\hat{X}_{4}^{i}\right)^{-1} \hat{X}_{2}^{i}\right] \end{aligned}\]

\[\begin{aligned} L^{i+1}=&-\left[R_{2}+\hat{X}_{4}^{i}-\hat{X}_{3}^{i}\left(R_{1}+\hat{X}_{6}^{i}\right)^{-1} \hat{X}_{7}^{i}\right]^{-1} \\ & \times\left[\hat{X}_{2}^{i}-\hat{X}_{3}^{i}\left(R_{1}+\hat{X}_{6}^{i}\right)^{-1} \hat{X}_{5}^{i}\right] \end{aligned}\]
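Putting the pieces together, one iteration of the model-free off-policy update could look like the sketch below: collect transitions under persistently exciting behavior inputs, solve the least-squares problem, reshape the estimated blocks, and update the gains with the formulas above. It reuses the hypothetical `data_row` helper from the previous sketch, with hypothetical dimensions `n`, `m`, `q` for \(x\), \(u\), \(v\).

```python
import numpy as np

def offpolicy_update(data, K, L, Qbar, R1, R2, n, m, q):
    """One iteration: data is a list of (x, u, v, x_next) transitions collected under
    arbitrary (persistently exciting) behavior inputs; returns P^i, K^{i+1}, L^{i+1}."""
    rows = [data_row(x, u, v, xn, K, L, Qbar, R1, R2) for x, u, v, xn in data]
    H = np.array([h for h, _ in rows])
    r = np.array([ri for _, ri in rows])
    X = np.linalg.lstsq(H, r, rcond=None)[0]

    # Split X into its blocks and reshape them back into matrices (row-major).
    sizes = [n * n, q * n, q * m, q * q, m * n, m * m, m * q]
    P, DPA, DPB, DPD, BPA, BPB, BPD = [
        block.reshape(shape) for block, shape in zip(
            np.split(X, np.cumsum(sizes)[:-1]),
            [(n, n), (q, n), (q, m), (q, q), (m, n), (m, m), (m, q)])]

    # Gain updates written with the estimated blocks instead of (A, B, D, P).
    K_new = -np.linalg.solve(R1 + BPB - BPD @ np.linalg.solve(R2 + DPD, DPB),
                             BPA - BPD @ np.linalg.solve(R2 + DPD, DPA))
    L_new = -np.linalg.solve(R2 + DPD - DPB @ np.linalg.solve(R1 + BPB, BPD),
                             DPA - DPB @ np.linalg.solve(R1 + BPB, BPA))
    return P, K_new, L_new
```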

Summary

For the problem of solving robust controllers for discrete-time linear systems, the research community has produced a substantial body of results, most of which combine adaptive dynamic programming with Bellman's principle of optimality and approximate the optimal solution using neural networks or least-squares identification. Early work relied mostly on on-policy iterative schemes such as policy iteration and value iteration; because of their drawbacks, off-policy methods have appeared in recent years, overcoming the inefficiency of the offline computation that on-policy schemes require. Many new reinforcement learning algorithms have also emerged recently, and applying them to robust controller synthesis deserves further study.

References

[1] Wang, Ding, Haibo He, and Derong Liu. "Adaptive critic nonlinear robust control: A survey." IEEE Transactions on Cybernetics 47.10 (2017): 3429-3451.

[2] Lin, Feng. Robust control design: an optimal control approach. Vol. 18. John Wiley & Sons, 2007.

[3] Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. MIT press, 2018.

[4] Lewis, Frank L., Draguna Vrabie, and Vassilis L. Syrmos. Optimal control. John Wiley & Sons, 2012.

[5] Tripathy, Niladri Sekhar, I. N. Kar, and Kolin Paul. "Stabilization of uncertain discrete-time linear system with limited communication." IEEE Transactions on Automatic Control 62.9 (2016): 4727-4733.

[6] Yang, Yongliang, et al. "Data-driven robust control of discrete-time uncertain linear systems via off-policy reinforcement learning." IEEE Transactions on Neural Networks and Learning Systems 30.12 (2019): 3735-3747.

[7] Liu, Derong, and Qinglai Wei. "Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems." IEEE Transactions on Neural Networks and Learning Systems 25.3 (2013): 621-634.

[8] Tao, Gang. Adaptive control design and analysis. Vol. 37. John Wiley & Sons, 2003.
