Multi-Agent Reinforcement Learning: QMIX
Paper: https://arxiv.org/pdf/1803.11485.pdf
1 Introduction
First, a brief introduction to VDN (Value Decomposition Networks). As the name suggests, VDN decomposes the value function: each agent's individual value function is combined into a joint action-value function.
For simplicity, consider two agents (o: observations, a: actions, Q: action-value function).
If each agent observes its own goal, but not necessarily its teammate's, the joint value can be decomposed as shown below.
When $(o^i, a^i)$ is not sufficient to fully model $\bar{Q}_i^{\pi}(\mathbf{s}, \mathbf{a})$, an LSTM over the history of observations is used to provide the missing information (for example, if target A is seen at time $t$ but occluded at time $t+5$, the observation at $t+5$ alone cannot locate target A; the historical observations can):
$$Q^{\pi}(\mathbf{s}, \mathbf{a}) =: \bar{Q}_{1}^{\pi}(\mathbf{s}, \mathbf{a}) + \bar{Q}_{2}^{\pi}(\mathbf{s}, \mathbf{a}) \approx \tilde{Q}_{1}^{\pi}\left(h^{1}, a^{1}\right) + \tilde{Q}_{2}^{\pi}\left(h^{2}, a^{2}\right)$$
Value decomposition networks aim to learn a joint action-value function $Q_{tot}(\boldsymbol{\tau}, \mathbf{u})$, where $\boldsymbol{\tau} \in \mathbf{T} \equiv \mathcal{T}^{n}$ is a joint action-observation history and $\mathbf{u}$ is a joint action. Each agent $i$ independently computes its own value function $Q_{i}\left(\tau^{i}, u^{i}; \theta^{i}\right)$, and these are summed to obtain the joint value:
$$Q_{tot} = \sum_{i=1}^{n} Q_{i}\left(\tau^{i}, u^{i}; \theta^{i}\right)$$
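For intuition, here is a minimal sketch of this additive VDN-style mixer in PyTorch (the class name and tensor shapes are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class VDNMixer(nn.Module):
    """Sums per-agent Q-values into Q_tot; no learned weights and no state input."""
    def forward(self, agent_qs, states=None):
        # agent_qs: (batch, time, n_agents) -> Q_tot: (batch, time, 1)
        return torch.sum(agent_qs, dim=2, keepdim=True)
```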
For details, see the original paper: https://arxiv.org/pdf/1706.05296.pdf
QMIX, like VDN, is a value-based method that can train decentralised policies in a centralised, end-to-end fashion. QMIX uses a network that estimates the joint action-value as a complex nonlinear combination of the per-agent values (whereas VDN uses a linear sum), with each per-agent value conditioned only on local observations. Structural constraints make the joint action-value function monotonic in each agent's action-value function, which guarantees consistency between the centralised and decentralised policies.
IGM (Individual-Global-Max):
$$\underset{\mathbf{u}}{\operatorname{argmax}}\ Q_{tot}(\boldsymbol{\tau}, \mathbf{u}) = \begin{pmatrix} \underset{u^{1}}{\operatorname{argmax}}\ Q_{1}\left(\tau^{1}, u^{1}\right) \\ \vdots \\ \underset{u^{n}}{\operatorname{argmax}}\ Q_{n}\left(\tau^{n}, u^{n}\right) \end{pmatrix}$$
where $Q_{tot}$ is the joint Q-function and $Q_i$ is the action-value function of agent $i$.
IGM means that $\operatorname{argmax}(Q_{tot})$ and the per-agent $\operatorname{argmax}(Q_i)$ select the same actions, i.e. the individually optimal actions are also the globally optimal joint action. To guarantee this condition, QMIX imposes the following monotonicity constraint:

$$\frac{\partial Q_{tot}}{\partial Q_{i}} \geq 0, \quad \forall i \in \{1, \ldots, n\}$$
2 QMIX Algorithm Framework
The framework has three main parts:
- (a) The mixing network structure (in red, the hypernetworks that produce the weights and biases for the blue mixing-network layers)
- (b) The overall QMIX architecture
- (c) The agent network structure
Each part is analysed in detail below.
2.1 Agent network
Input: agent $a$'s observation $o_t^a$ at time $t$ and agent $a$'s action $u_{t-1}^a$ at time $t-1$
Output: agent $a$'s value function $Q_{a}\left(\tau^{a}, u_t^{a}\right)$ at time $t$
The agent network is implemented as a DRQN. Depending on the task, the agents' networks can be trained separately or share parameters. DRQN replaces the fully connected layer in DQN with a GRU; here the recurrent layer is a GRU with a 64-dimensional hidden state. A recurrent network copes better when the quality of observations varies over time. As shown in the figure, the network has three layers: an input layer (MLP), a recurrent layer (GRU), and an output layer (MLP).
The implementation is as follows.
Agent network configuration:
agent: "rnn"
rnn_hidden_dim: 64
obs_agent_id: True
obs_last_action: True
The RNN agent network:
```python
import torch.nn as nn
import torch.nn.functional as F

class RNNAgent(nn.Module):
    def __init__(self, input_shape, args):
        super(RNNAgent, self).__init__()
        self.args = args
        # MLP input layer -> GRU recurrent layer -> MLP output layer (one Q-value per action)
        self.fc1 = nn.Linear(input_shape, args.rnn_hidden_dim)
        self.rnn = nn.GRUCell(args.rnn_hidden_dim, args.rnn_hidden_dim)
        self.fc2 = nn.Linear(args.rnn_hidden_dim, args.n_actions)

    def init_hidden(self):
        # Zero-initialise the hidden state on the same device/dtype as fc1's weights
        return self.fc1.weight.new(1, self.args.rnn_hidden_dim).zero_()

    def forward(self, inputs, hidden_state):
        x = F.relu(self.fc1(inputs))
        h_in = hidden_state.reshape(-1, self.args.rnn_hidden_dim)
        h = self.rnn(x, h_in)   # update the GRU hidden state
        q = self.fc2(h)         # Q-values for all actions
        return q, h
```
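A quick usage sketch of the agent network above (the `args` values and the observation size are assumptions for illustration only):

```python
from types import SimpleNamespace
import torch

args = SimpleNamespace(rnn_hidden_dim=64, n_actions=9)   # assumed settings
agent = RNNAgent(input_shape=30, args=args)               # assumed observation size

obs = torch.randn(1, 30)            # one agent, one timestep
h = agent.init_hidden()
q, h = agent.forward(obs, h)        # q: (1, n_actions)
action = q.argmax(dim=1)            # greedy action
```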
2.2 Mixing network
Input: each agent $a$'s value function $Q_{a}\left(\tau^{a}, u_t^{a}\right)$ at time $t$ and the global state $s$ at time $t$
Output: the joint action-value function $Q_{tot}\left(\tau, u\right)$ at time $t$
The mixing network is a feed-forward network that takes the agent networks' outputs as input and mixes them monotonically to produce $Q_{tot}$, as shown in the figure. To enforce the monotonicity constraint, the mixing network's weights are restricted to be non-negative (the biases may be negative). With this restriction the mixing network can still approximate any monotonic function.
The mixing network's weights are produced by separate hypernetworks. Each hypernetwork takes the global state $s$ as input and generates the weights of one layer of the mixing network. Each of these hypernetworks consists of a single linear layer followed by an absolute-value activation, which guarantees that the mixing-network weights are non-negative. The biases are produced in the same way, but without the absolute-value activation. The final bias is produced by a two-layer hypernetwork with a ReLU non-linearity.
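Putting the pieces together, the mixing computation can be summarised as follows (the notation here is mine and mirrors the code below, not the paper's exact symbols):

$$Q_{tot}(\boldsymbol{\tau}, \mathbf{u}, s) = \operatorname{ELU}\!\left(\mathbf{q}\, W_{1}(s) + \mathbf{b}_{1}(s)\right)\, \mathbf{w}_{2}(s) + V(s)$$

where $\mathbf{q} = \left(Q_{1}(\tau^{1}, u^{1}), \ldots, Q_{n}(\tau^{n}, u^{n})\right)$ is the row vector of per-agent values, $W_{1}(s)$ and $\mathbf{w}_{2}(s)$ are the absolute values of the hypernetwork outputs (hence non-negative), $\mathbf{b}_{1}(s)$ is the first-layer bias, and $V(s)$ is the final state-dependent bias.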
The implementation is as follows. Mixing network configuration:
mixer: "qmix"
mixing_embed_dim: 32
hypernet_layers: 2
hypernet_embed: 64
```python
import numpy as np
import torch as th
import torch.nn as nn
import torch.nn.functional as F

class QMixer(nn.Module):
    def __init__(self, args):
        super(QMixer, self).__init__()
        self.args = args
        self.n_agents = args.n_agents
        self.state_dim = int(np.prod(args.state_shape))
        self.embed_dim = args.mixing_embed_dim

        # Hypernetworks: map the global state to the mixing-network weights
        if getattr(args, "hypernet_layers", 1) == 1:
            self.hyper_w_1 = nn.Linear(self.state_dim, self.embed_dim * self.n_agents)
            self.hyper_w_final = nn.Linear(self.state_dim, self.embed_dim)
        elif getattr(args, "hypernet_layers", 1) == 2:
            hypernet_embed = self.args.hypernet_embed
            self.hyper_w_1 = nn.Sequential(nn.Linear(self.state_dim, hypernet_embed),
                                           nn.ReLU(),
                                           nn.Linear(hypernet_embed, self.embed_dim * self.n_agents))
            self.hyper_w_final = nn.Sequential(nn.Linear(self.state_dim, hypernet_embed),
                                               nn.ReLU(),
                                               nn.Linear(hypernet_embed, self.embed_dim))
        elif getattr(args, "hypernet_layers", 1) > 2:
            raise Exception("Sorry >2 hypernet layers is not implemented!")
        else:
            raise Exception("Error setting number of hypernet layers.")

        # State-dependent bias of the first mixing layer
        self.hyper_b_1 = nn.Linear(self.state_dim, self.embed_dim)
        # Final bias V(s), produced by a 2-layer hypernetwork with ReLU
        self.V = nn.Sequential(nn.Linear(self.state_dim, self.embed_dim),
                               nn.ReLU(),
                               nn.Linear(self.embed_dim, 1))

    def forward(self, agent_qs, states):
        bs = agent_qs.size(0)
        states = states.reshape(-1, self.state_dim)
        agent_qs = agent_qs.view(-1, 1, self.n_agents)
        # First layer: abs() keeps the weights non-negative (monotonicity constraint)
        w1 = th.abs(self.hyper_w_1(states))
        b1 = self.hyper_b_1(states)
        w1 = w1.view(-1, self.n_agents, self.embed_dim)
        b1 = b1.view(-1, 1, self.embed_dim)
        hidden = F.elu(th.bmm(agent_qs, w1) + b1)
        # Second layer: non-negative weights plus the state-dependent bias V(s)
        w_final = th.abs(self.hyper_w_final(states))
        w_final = w_final.view(-1, self.embed_dim, 1)
        v = self.V(states).view(-1, 1, 1)
        y = th.bmm(hidden, w_final) + v
        q_tot = y.view(bs, -1, 1)
        return q_tot
```
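A quick usage sketch of the mixer (the batch size, episode length, number of agents, and state dimension below are assumptions for illustration only):

```python
from types import SimpleNamespace
import torch as th

# Assumed settings for illustration
args = SimpleNamespace(n_agents=3, state_shape=48, mixing_embed_dim=32,
                       hypernet_layers=2, hypernet_embed=64)
mixer = QMixer(args)

agent_qs = th.randn(4, 20, 3)    # (batch, timesteps, n_agents) chosen-action Q-values
states = th.randn(4, 20, 48)     # (batch, timesteps, state_dim) global states
q_tot = mixer(agent_qs, states)  # (4, 20, 1) joint action-values
```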
2.3 Algorithm update procedure
Loss function:
$$\mathcal{L}(\theta)=\sum_{i=1}^{b}\left[\left(y_{i}^{tot}-Q_{tot}(\boldsymbol{\tau}, \mathbf{u}, s ; \theta)\right)^{2}\right]$$
where $b$ is the number of samples drawn from the replay buffer, $y^{tot}=r+\gamma \max_{\mathbf{u}^{\prime}} Q_{tot}\left(\boldsymbol{\tau}^{\prime}, \mathbf{u}^{\prime}, s^{\prime} ; \theta^{-}\right)$, and $\theta^{-}$ are the parameters of the target network.
The temporal-difference (TD) error can therefore be written as:
$$TD_{error}=\left(r+\gamma\, Q_{tot}(\text{target})\right)-Q_{tot}(\text{evaluate})$$
$Q_{tot}(\text{target})$: the maximum value of $Q_{tot}$ over all actions in the next state $s^{\prime}$. By the IGM condition, its inputs are each agent's maximum action-value in that state.
$Q_{tot}(\text{evaluate})$: the value of $Q_{tot}$ obtained in state $s$ under the current network's policy.
The implementation is as follows.
Configuration:
```yaml
action_selector: "epsilon_greedy"
epsilon_start: 1.0
epsilon_finish: 0.05
epsilon_anneal_time: 50000
runner: "episode"
buffer_size: 5000
target_update_interval: 200
```
Action selection (ε-greedy):
```python
import torch as th
from torch.distributions import Categorical
# DecayThenFlatSchedule comes from the same codebase: a schedule that decays, then stays constant

class EpsilonGreedyActionSelector():
    def __init__(self, args):
        self.args = args
        self.schedule = DecayThenFlatSchedule(args.epsilon_start, args.epsilon_finish,
                                              args.epsilon_anneal_time, decay="linear")
        self.epsilon = self.schedule.eval(0)

    def select_action(self, agent_inputs, avail_actions, t_env, test_mode=False):
        # Anneal epsilon with the number of environment steps
        self.epsilon = self.schedule.eval(t_env)
        if test_mode:
            self.epsilon = 0.0   # act greedily at test time
        # Mask out unavailable actions so they can never be picked greedily
        masked_q_values = agent_inputs.clone()
        masked_q_values[avail_actions == 0.0] = -float("inf")
        # With probability epsilon pick a random available action, otherwise the greedy one
        random_numbers = th.rand_like(agent_inputs[:, :, 0])
        pick_random = (random_numbers < self.epsilon).long()
        random_actions = Categorical(avail_actions.float()).sample().long()
        picked_actions = pick_random * random_actions + (1 - pick_random) * masked_q_values.max(dim=2)[1]
        return picked_actions
```
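`DecayThenFlatSchedule` is not shown above; the following is a minimal sketch of a linear decay-then-flat schedule with the same interface, inferred from how it is called here (an assumption, not the repository's exact code):

```python
class DecayThenFlatSchedule:
    """Linearly anneal from `start` to `finish` over `time_length` steps, then stay flat."""
    def __init__(self, start, finish, time_length, decay="linear"):
        self.start = start
        self.finish = finish
        self.time_length = time_length
        self.delta = (start - finish) / time_length

    def eval(self, t):
        # Value at environment step t, clipped so it never drops below `finish`
        return max(self.finish, self.start - self.delta * t)
```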
Compute each agent's estimated Q-values (the snippets below come from the learner's training step; `batch`, `actions`, `avail_actions`, `rewards`, `mask`, and `terminated` are taken from a sampled batch of episodes):
```python
# Roll the agent network forward over the whole episode to get Q-values for every timestep
mac_out = []
self.mac.init_hidden(batch.batch_size)
for t in range(batch.max_seq_length):
    agent_outs = self.mac.forward(batch, t=t)
    mac_out.append(agent_outs)
mac_out = th.stack(mac_out, dim=1)  # (batch, time, n_agents, n_actions)

# Q-values of the actions that were actually taken
chosen_action_qvals = th.gather(mac_out[:, :-1], dim=3, index=actions).squeeze(3)

# Greedy actions under the current network (unavailable actions masked out)
x_mac_out = mac_out.clone().detach()
x_mac_out[avail_actions == 0] = -9999999
max_action_qvals, max_action_index = x_mac_out[:, :-1].max(dim=3)
max_action_index = max_action_index.detach().unsqueeze(3)
is_max_action = (max_action_index == actions).int().float()
```
Compute each agent's target Q-values:
```python
# Target-network rollout (the first timestep is dropped so targets align with next states)
target_mac_out = []
self.target_mac.init_hidden(batch.batch_size)
for t in range(batch.max_seq_length):
    target_agent_outs = self.target_mac.forward(batch, t=t)
    target_mac_out.append(target_agent_outs)
target_mac_out = th.stack(target_mac_out[1:], dim=1)

if self.args.double_q:
    # Double Q-learning: pick next actions with the online network, evaluate them with the target network
    mac_out_detach = mac_out.clone().detach()
    mac_out_detach[avail_actions == 0] = -9999999
    cur_max_actions = mac_out_detach[:, 1:].max(dim=3, keepdim=True)[1]
    target_max_qvals = th.gather(target_mac_out, 3, cur_max_actions).squeeze(3)
else:
    target_max_qvals = target_mac_out.max(dim=3)[0]
```
Compute the loss and backpropagate:
```python
# Mix per-agent values into Q_tot for both the online and the target network
if self.mixer is not None:
    chosen_action_qvals = self.mixer(chosen_action_qvals, batch["state"][:, :-1])
    target_max_qvals = self.target_mixer(target_max_qvals, batch["state"][:, 1:])

# One-step TD target: r + gamma * Q_tot(target), zeroed at terminal steps
targets = rewards + self.args.gamma * (1 - terminated) * target_max_qvals
td_error = (chosen_action_qvals - targets.detach())

# Mask out padded timesteps before averaging the squared TD error
mask = mask.expand_as(td_error)
masked_td_error = td_error * mask
loss = (masked_td_error ** 2).sum() / mask.sum()

self.optimiser.zero_grad()
loss.backward()
grad_norm = th.nn.utils.clip_grad_norm_(self.params, self.args.grad_norm_clip)
self.optimiser.step()

# Periodically copy the online parameters to the target networks
if (episode_num - self.last_target_update_episode) / self.args.target_update_interval >= 1.0:
    self._update_targets()
    self.last_target_update_episode = episode_num
```
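`_update_targets` is not shown in the snippet above; the following is a minimal sketch of a hard target-network update using the attribute names that appear in the snippet (the exact helper in the repository may differ):

```python
# Inside the learner class (sketch): hard-copy the online parameters into the target networks
def _update_targets(self):
    self.target_mac.load_state(self.mac)   # copy the agent-network parameters (API assumed)
    if self.mixer is not None:
        self.target_mixer.load_state_dict(self.mixer.state_dict())
```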
3 Experimental results
Win rates of IQL, VDN, and QMIX on six different StarCraft II combat maps; the performance of the heuristic-based algorithm is shown as a dashed line.
References:
Blog: [QMIX] A value-based multi-agent algorithm
Introduction to multi-agent reinforcement learning (5): an analysis of the QMIX algorithm
Getting started with multi-agent reinforcement learning: QMIX
Code: https://github.com/wjh720/QPLEX
Original: https://blog.csdn.net/weixin_42985452/article/details/124048070
Author: Spgroc
Title: Multi-Agent Reinforcement Learning: QMIX