A Brief Introduction to Diffusion Models + Code Demo

Like GANs, flow models, and VAEs, the diffusion model is a kind of generative model.


Probability facts we will need:

  1. Conditional probability
  2. The transition formula for Markov chains
  3. The KL divergence between two 1-D Gaussians $P$ and $Q$:
    $KL(P,Q)=\log\frac{\sigma_2}{\sigma_1}+\frac{\sigma_1^2+(\mu_1-\mu_2)^2}{2\sigma_2^2}-\frac12$
  4. The reparameterization trick: sampling from a non-standard Gaussian is not differentiable, so the sample is rewritten as a constant $z$ drawn from the standard normal $N(0,1)$, then mapped to the target Gaussian via $\mu$ and $\sigma$ (see the sketch after this list).
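
As a quick check of facts 3 and 4, here is a minimal PyTorch sketch (the values of mu and sigma are arbitrary toy choices, not from the post): it draws reparameterized samples and compares the hand-written Gaussian KL formula against torch.distributions.

import torch
from torch.distributions import Normal, kl_divergence

mu, sigma = torch.tensor(1.5), torch.tensor(0.8)  # toy target Gaussian (assumed values)

# Reparameterization trick: z ~ N(0,1) is a constant w.r.t. the parameters,
# so gradients can flow through mu and sigma
z = torch.randn(10000)
samples = mu + sigma * z
print(samples.mean(), samples.std())  # close to 1.5 and 0.8

# Gaussian KL formula from fact 3, with P = N(mu1, sigma1^2), Q = N(mu2, sigma2^2)
def gaussian_kl(mu1, sigma1, mu2, sigma2):
    return torch.log(sigma2/sigma1) + (sigma1**2 + (mu1-mu2)**2)/(2*sigma2**2) - 0.5

print(gaussian_kl(mu, sigma, torch.tensor(0.), torch.tensor(1.)))  # manual formula
print(kl_divergence(Normal(mu, sigma), Normal(0., 1.)))            # library reference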


Diffusion

Notation:

  • $p(\cdot)$: a probability distribution
  • $X_T$: an isotropic Gaussian, $N(X_T;0,I)$
  • $X_0$: the training dataset (its distribution)

  • $\Leftarrow$: the (forward) diffusion process, which gradually adds Gaussian noise, going from order to disorder (an entropy-increasing process).
    $q(x_{1:T}|x_0):=\prod_{t=1}^T q(x_t|x_{t-1})$, where $q(x_t|x_{t-1}):=N(x_t;\sqrt{1-\beta_t}\,x_{t-1},\,\beta_t I)$
    $\beta_t \in (0,1)$ can be learned via reparameterization (as in reference [33] of the DDPM paper) or simply fixed, like a learning rate, as hyperparameters; so the forward process contains no trainable parameters.

A notable property of the forward process is that it allows sampling $x_t$ at an arbitrary timestep $t$ in closed form:
Let $a_t = 1-\beta_t$ and $\bar a_t=\prod_{s=1}^t a_s$; then $q(x_t|x_0)=N(x_t;\sqrt{\bar a_t}\,x_0,(1-\bar a_t)I)$

Derivation of the formula above:

$x_t=\sqrt{a_t}\,x_{t-1}+\sqrt{1-a_t}\,z_{t-1}$   (given $x_{t-1}$, the randomness of $x_t$ comes from the standard normal $z_{t-1}$)

$=\sqrt{a_t}(\sqrt{a_{t-1}}\,x_{t-2}+\sqrt{1-a_{t-1}}\,z_{t-2})+\sqrt{1-a_t}\,z_{t-1}$   (unroll one step of the Markov chain)

$=\sqrt{a_t a_{t-1}}\,x_{t-2}+(\sqrt{a_t}\sqrt{1-a_{t-1}}\,z_{t-2}+\sqrt{1-a_t}\,z_{t-1})$

$=\sqrt{a_t a_{t-1}}\,x_{t-2}+\sqrt{(\sqrt{a_t}\sqrt{1-a_{t-1}})^2+(\sqrt{1-a_t})^2}\,z$   (variances of independent Gaussians add)

$=\sqrt{a_t a_{t-1}}\,x_{t-2}+\sqrt{1-a_t a_{t-1}}\,\bar z_{t-2}$   ($\bar z$ merges the two Gaussians)

$=\cdots=\sqrt{\bar a_t}\,x_0+\sqrt{1-\bar a_t}\,z$

hence $q(x_t|x_0)=N(x_t;\sqrt{\bar a_t}\,x_0,(1-\bar a_t)I)$.
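
To make the closed-form result concrete, here is a minimal numerical check (the linear beta schedule below is a toy assumption, not the one used in the demo): noising $x_0$ step by step should match the statistics of the one-shot formula $q(x_t|x_0)$.

import torch

T = 100
betas = torch.linspace(1e-4, 2e-2, T)  # toy schedule (assumption)
alphas = 1 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

x0 = torch.randn(100000)  # toy 1-D "data"
t = 50

# Iterative noising: x_s = sqrt(a_s) * x_{s-1} + sqrt(1 - a_s) * z, for s = 0..t
x = x0.clone()
for s in range(t + 1):
    x = alphas[s].sqrt() * x + (1 - alphas[s]).sqrt() * torch.randn_like(x)

# One-shot sampling from q(x_t | x_0)
x_direct = alphas_bar[t].sqrt() * x0 + (1 - alphas_bar[t]).sqrt() * torch.randn_like(x0)

print(x.mean().item(), x.std().item())                # should closely match...
print(x_direct.mean().item(), x_direct.std().item())  # ...these statistics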

Reverse diffusion

  • $\Rightarrow$: the reverse diffusion (sampling) process, going from disorder to order (an entropy-decreasing process).
    Joint distribution: $p_{\theta}(x_{0:T}):=p(x_T)\prod_{t=1}^T p_{\theta}(x_{t-1}|x_t)$, where $p_{\theta}(x_{t-1}|x_t):=N(x_{t-1};\mu_{\theta}(x_t,t),\Sigma_{\theta}(x_t,t))$; that is, $p_{\theta}(x_{t-1}|x_t)$ is assumed to be Gaussian as well, and a network fits its parameters $\mu_{\theta}(x_t,t)$ and $\Sigma_{\theta}(x_t,t)$.
  • Given the forward-process distributions, we can probe the reverse process, e.g. determine the mean and standard deviation of $q(x_{t-1}|x_t,x_0)$ (see the sketch after the derivation below).

By Bayes' theorem (converting between $P(A|B)$ and $P(B|A)$):

$q(x_{t-1}|x_t,x_0)=q(x_t|x_{t-1},x_0)\,\frac{q(x_{t-1}|x_0)}{q(x_t|x_0)}$

$\propto \exp\!\left(-\frac12\left(\frac{(x_t-\sqrt{a_t}x_{t-1})^2}{\beta_t}+\frac{(x_{t-1}-\sqrt{\bar a_{t-1}}x_0)^2}{1-\bar a_{t-1}}-\frac{(x_t-\sqrt{\bar a_t}x_0)^2}{1-\bar a_t}\right)\right)$

$=\exp\!\left(-\frac12\left(\left(\frac{a_t}{\beta_t}+\frac{1}{1-\bar a_{t-1}}\right)x_{t-1}^2-\left(\frac{2\sqrt{a_t}}{\beta_t}x_t+\frac{2\sqrt{\bar a_{t-1}}}{1-\bar a_{t-1}}x_0\right)x_{t-1}+C(x_t,x_0)\right)\right)$

Reading off the quadratic in $x_{t-1}$ (for an exponent $-\frac12(Ax^2-Bx+\dots)$, the variance is $\frac1A$ and the mean is $\frac{B}{2A}$) gives:

variance: $\bar\beta_t=\left(\frac{a_t}{\beta_t}+\frac{1}{1-\bar a_{t-1}}\right)^{-1}=\frac{1-\bar a_{t-1}}{1-\bar a_t}\cdot\beta_t$

mean: $\bar\mu_t(x_t,x_0)=\left(\frac{\sqrt{a_t}}{\beta_t}x_t+\frac{\sqrt{\bar a_{t-1}}}{1-\bar a_{t-1}}x_0\right)\Big/\left(\frac{a_t}{\beta_t}+\frac{1}{1-\bar a_{t-1}}\right)=\frac{\sqrt{a_t}(1-\bar a_{t-1})}{1-\bar a_t}x_t+\frac{\sqrt{\bar a_{t-1}}\,\beta_t}{1-\bar a_t}x_0$

Finally, using the reparameterized form $x_t=\sqrt{\bar a_t}\,x_0+\sqrt{1-\bar a_t}\,z$ to eliminate $x_0$:

$\bar\mu_t=\frac{1}{\sqrt{a_t}}\left(x_t-\frac{\beta_t}{\sqrt{1-\bar a_t}}z_t\right)$
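
A small sketch (reusing the toy schedule from the previous snippet; all names are illustrative) that evaluates the posterior variance and mean coefficients, and checks the two equivalent forms of the mean against each other:

import torch

T = 100
betas = torch.linspace(1e-4, 2e-2, T)  # toy schedule (assumption)
alphas = 1 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

t = 50
a_t, abar_t, abar_prev = alphas[t], alphas_bar[t], alphas_bar[t-1]

# Posterior variance: beta_bar_t = (1 - abar_{t-1}) / (1 - abar_t) * beta_t
beta_bar = (1 - abar_prev) / (1 - abar_t) * betas[t]

# Posterior mean written as a combination of x_t and x_0
coef_xt = a_t.sqrt() * (1 - abar_prev) / (1 - abar_t)
coef_x0 = abar_prev.sqrt() * betas[t] / (1 - abar_t)

# Generate a consistent (x_0, z, x_t) triple and compare the two mean forms
x0 = torch.tensor(0.7)
z = torch.randn(())
x_t = abar_t.sqrt() * x0 + (1 - abar_t).sqrt() * z

mu_from_x0 = coef_xt * x_t + coef_x0 * x0
mu_from_z = (x_t - betas[t] / (1 - abar_t).sqrt() * z) / a_t.sqrt()
print(beta_bar.item(), mu_from_x0.item(), mu_from_z.item())  # the two means agree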

  • The difference is called the drift term.

Loss function

$-\log p_{\theta}(x_0) \leq -\log p_{\theta}(x_0)+D_{KL}(q(x_{1:T}|x_0)\,\|\,p_{\theta}(x_{1:T}|x_0))$   (since $D_{KL}\geq 0$)

$=-\log p_{\theta}(x_0)+E_{x_{1:T}\sim q(x_{1:T}|x_0)}\!\left[\log\frac{q(x_{1:T}|x_0)}{p_{\theta}(x_{1:T}|x_0)}\right]$   (the KL expands to the expectation of $\log\frac{q}{p}$ under $q$; here $q$ is the fixed forward process and $p_{\theta}$ the model)

$=-\log p_{\theta}(x_0)+E_q\!\left[\log\frac{q(x_{1:T}|x_0)}{p_{\theta}(x_{0:T})/p_{\theta}(x_0)}\right]$

$=-\log p_{\theta}(x_0)+E_q\!\left[\log\frac{q(x_{1:T}|x_0)}{p_{\theta}(x_{0:T})}\right]+\log p_{\theta}(x_0)$   ($\log p_{\theta}(x_0)$ does not depend on the variables weighted by $q$, so it moves out of the expectation)

$=E_q\!\left[\log\frac{q(x_{1:T}|x_0)}{p_{\theta}(x_{0:T})}\right]$

This is an upper bound on the negative log-likelihood.
Next, write $-\log p_{\theta}(x_0)$ as a cross-entropy and expand:

$L=E_{q(x_0)}[-\log p_{\theta}(x_0)]\leq E_{q(x_{0:T})}\!\left[\log\frac{q(x_{1:T}|x_0)}{p_{\theta}(x_{0:T})}\right]$   (plugging in the bound just derived)

$=E_{q(x_{0:T})}\!\left[\log\frac{\prod_{t=1}^T q(x_t|x_{t-1})}{p_{\theta}(x_T)\prod_{t=1}^T p_{\theta}(x_{t-1}|x_t)}\right]$   (the numerator is the forward chain $q$, the denominator the reverse chain $p_{\theta}$)

$=E_{q(x_{0:T})}\!\left[-\log p_{\theta}(x_T)+\sum_{t=1}^T\log\frac{q(x_t|x_{t-1})}{p_{\theta}(x_{t-1}|x_t)}\right]$

$=E_{q(x_{0:T})}\!\left[-\log p_{\theta}(x_T)+\sum_{t=2}^T\log\frac{q(x_t|x_{t-1})}{p_{\theta}(x_{t-1}|x_t)}+\log\frac{q(x_1|x_0)}{p_{\theta}(x_0|x_1)}\right]$   (split off the $t=1$ term)

By the Markov property and Bayes' rule, $q(x_t|x_{t-1})=q(x_t|x_{t-1},x_0)=\frac{q(x_{t-1}|x_t,x_0)\,q(x_t|x_0)}{q(x_{t-1}|x_0)}$, so

$=E_{q(x_{0:T})}\!\left[-\log p_{\theta}(x_T)+\sum_{t=2}^T\log\frac{q(x_{t-1}|x_t,x_0)}{p_{\theta}(x_{t-1}|x_t)}+\sum_{t=2}^T\log\frac{q(x_t|x_0)}{q(x_{t-1}|x_0)}+\log\frac{q(x_1|x_0)}{p_{\theta}(x_0|x_1)}\right]$

$=E_{q(x_{0:T})}\!\left[-\log p_{\theta}(x_T)+\sum_{t=2}^T\log\frac{q(x_{t-1}|x_t,x_0)}{p_{\theta}(x_{t-1}|x_t)}+\log\frac{q(x_T|x_0)}{q(x_1|x_0)}+\log\frac{q(x_1|x_0)}{p_{\theta}(x_0|x_1)}\right]$   (the second sum telescopes)

$=E_{q(x_{0:T})}\!\left[\log\frac{q(x_T|x_0)}{p_{\theta}(x_T)}+\sum_{t=2}^T\log\frac{q(x_{t-1}|x_t,x_0)}{p_{\theta}(x_{t-1}|x_t)}-\log p_{\theta}(x_0|x_1)\right]$

$=E_q\!\left[D_{KL}(q(x_T|x_0)\,\|\,p_{\theta}(x_T))+\sum_{t=2}^T D_{KL}(q(x_{t-1}|x_t,x_0)\,\|\,p_{\theta}(x_{t-1}|x_t))-\log p_{\theta}(x_0|x_1)\right]$

$=L_T+\sum_{t=2}^T L_{t-1}+L_0$

Here $L_T$ is a constant (the forward process has no trainable parameters and $x_T$ is essentially Gaussian noise), the $L_{t-1}$ terms compare the true posterior with the learned reverse step, and $L_0=-\log p_{\theta}(x_0|x_1)$ is the reconstruction term.
The paper assumes the variance of $p_{\theta}(x_{t-1}|x_t)$ is a constant determined by $\beta$, so the only trainable quantity is the mean; we therefore focus on the $L_{t-1}$ terms.
Applying the Gaussian KL formula $KL(P,Q)=\log\frac{\sigma_2}{\sigma_1}+\frac{\sigma_1^2+(\mu_1-\mu_2)^2}{2\sigma_2^2}-\frac12$ with both variances fixed to $\sigma_t^2$, only the mean term depends on $\theta$:

$L_{t-1}=E_q\!\left[\frac{(\bar\mu_t(x_t,x_0)-\mu_{\theta}(x_t,t))^2}{2\sigma_t^2}\right]+C$

where $\bar\mu_t$ is the reverse-process (posterior) mean derived earlier.
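
The step from $L_{t-1}$ to the final objective (following the DDPM paper): parameterize the model mean in the same functional form as $\bar\mu_t$, with a network $\varepsilon_{\theta}$ standing in for the true noise,

$\mu_{\theta}(x_t,t)=\frac{1}{\sqrt{a_t}}\left(x_t-\frac{\beta_t}{\sqrt{1-\bar a_t}}\,\varepsilon_{\theta}(x_t,t)\right)$

so the squared difference of means collapses to a reweighted noise-matching term:

$L_{t-1}-C=E_{x_0,\varepsilon}\!\left[\frac{\beta_t^2}{2\sigma_t^2\,a_t(1-\bar a_t)}\,\|\varepsilon-\varepsilon_{\theta}(x_t,t)\|^2\right]$

DDPM then drops the per-timestep weight, which simplifies training and empirically improves sample quality.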

This gives the final simplified objective:

$L(\theta):=E_{t,x_0,\varepsilon}\!\left[\|\varepsilon-\varepsilon_{\theta}(\sqrt{\bar a_t}\,x_0+\sqrt{1-\bar a_t}\,\varepsilon,\;t)\|^2\right]$

that is, the network $\varepsilon_{\theta}$ (the model in the code below) receives the noised sample $\sqrt{\bar a_t}\,x_0+\sqrt{1-\bar a_t}\,\varepsilon$ and the timestep $t$, and is trained to predict the noise $\varepsilon$.

Code

import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_s_curve
import torch

# 10^4 points on an S-shaped curve; keep the (x, z) coordinates and rescale
s_curve, _ = make_s_curve(10**4, noise=0.1)
s_curve = s_curve[:, [0, 2]] / 10.0

print("shape of s_curve:", np.shape(s_curve))

data = s_curve.T

fig,ax = plt.subplots()
ax.scatter(*data ,color='red',edgecolor='white')
ax.axis('off')
plt.show()
dataset = torch.Tensor(s_curve).float()

num_steps = 100

# Sigmoid beta schedule: beta_t rises smoothly from 1e-5 to 0.5e-2
betas = torch.linspace(-6, 6, num_steps)
betas = torch.sigmoid(betas) * (0.5e-2 - 1e-5) + 1e-5

# a_t = 1 - beta_t, the cumulative products abar_t, and derived quantities
alphas = 1 - betas
alphas_prod = torch.cumprod(alphas, dim=0)
alphas_prod_p = torch.cat([torch.tensor([1]).float(), alphas_prod[:-1]], 0)  # abar_{t-1}
alphas_bar_sqrt = torch.sqrt(alphas_prod)
one_minus_alphas_bar_log = torch.log(1 - alphas_prod)
one_minus_alphas_bar_sqrt = torch.sqrt(1 - alphas_prod)

assert betas.shape == alphas.shape == alphas_prod.shape == alphas_prod_p.shape \
        == alphas_bar_sqrt.shape == one_minus_alphas_bar_log.shape \
        == one_minus_alphas_bar_sqrt.shape
print("all the same shape:", betas.shape)

def q_x(x_0, t):
    '''Sample from q(x_t | x_0) in closed form: sqrt(abar_t)*x_0 + sqrt(1-abar_t)*noise'''
    noise = torch.randn_like(x_0)
    alphas_t = alphas_bar_sqrt[t]
    alphas_l_m_t = one_minus_alphas_bar_sqrt[t]

    return (alphas_t * x_0 + alphas_l_m_t * noise)

num_shows = 20
fig , axs = plt.subplots(2,10,figsize=(28,3))
plt.rc('text',color='blue')

for i in range(num_shows):
    j = i // 10
    k = i % 10
    t = i*num_steps//num_shows
    q_i = q_x(dataset ,torch.tensor( [t] ))
    axs[j,k].scatter(q_i[:,0],q_i[:,1],color='red',edgecolor='white')

    axs[j,k].set_axis_off()
    axs[j,k].set_title(r'$q(\mathbf{x}_{'+str(i*num_steps//num_shows)+'})$')
plt.show()

import torch
import torch.nn as nn
class MLPDiffusion(nn.Module):
    def __init__(self,n_steps,num_units=128):
        super(MLPDiffusion,self).__init__()
        self.linears = nn.ModuleList([
            nn.Linear(2,num_units),
            nn.ReLU(),
            nn.Linear(num_units,num_units),
            nn.ReLU(),
            nn.Linear(num_units, num_units),
            nn.ReLU(),
            nn.Linear(num_units, 2),]

        )
        self.step_embeddings = nn.ModuleList([
            nn.Embedding(n_steps,num_units),
            nn.Embedding(n_steps, num_units),
            nn.Embedding(n_steps, num_units)
        ])
    def forward(self, x, t):
        # three Linear -> (+ timestep embedding) -> ReLU blocks, then a final Linear
        for idx, embedding_layer in enumerate(self.step_embeddings):
            t_embedding = embedding_layer(t)
            x = self.linears[2*idx](x)
            x += t_embedding
            x = self.linears[2*idx +1](x)

        x = self.linears[-1](x)
        return x

def diffusion_loss_fn(model, x_0, alphas_bar_sqrt, one_minus_alphas_bar_sqrt, n_steps):
    '''Sample a random timestep t for each example and compute the noise-prediction loss'''
    batch_size = x_0.shape[0]

    # Antithetic sampling: draw t for half the batch and use n_steps-1-t for the
    # other half (assumes an even batch size), covering the timesteps more evenly
    t = torch.randint(0, n_steps, size=(batch_size//2,))
    t = torch.cat([t, n_steps-1-t], dim=0)
    t = t.unsqueeze(-1)

    # Coefficients of x_0 and of the noise at timestep t
    a = alphas_bar_sqrt[t]
    aml = one_minus_alphas_bar_sqrt[t]

    # Noised sample x_t = sqrt(abar_t)*x_0 + sqrt(1-abar_t)*e
    e = torch.randn_like(x_0)
    x = x_0 * a + e * aml

    # The model predicts the noise e from (x_t, t)
    output = model(x, t.squeeze(-1))

    return (e - output).square().mean()

def p_sample_loop(model ,shape ,n_steps,betas ,one_minus_alphas_bar_sqrt):
    '''Recover x[T-1], x[T-2], ..., x[0] from x[T]'''

    cur_x = torch.randn(shape)
    x_seq = [cur_x]
    for i in reversed(range(n_steps)):
        cur_x = p_sample(model,cur_x, i ,betas,one_minus_alphas_bar_sqrt)
        x_seq.append(cur_x)
    return x_seq

def p_sample(model, x, t, betas, one_minus_alphas_bar_sqrt):
    '''Sample the reconstructed value at timestep t (one reverse step)'''
    t = torch.tensor(t)
    coeff = betas[t] / one_minus_alphas_bar_sqrt[t]
    eps_theta = model(x, t)
    # Posterior mean: (1/sqrt(a_t)) * (x - beta_t/sqrt(1-abar_t) * eps_theta),
    # where a_t = 1 - beta_t
    mean = (1 / (1 - betas[t]).sqrt()) * (x - (coeff * eps_theta))
    z = torch.randn_like(x)
    sigma_t = betas[t].sqrt()  # fixed variance choice sigma_t^2 = beta_t
    sample = mean + sigma_t * z
    return (sample)

seed = 1234
class EMA():
    '''A simple exponential-moving-average (EMA) parameter smoother'''
    def __init__(self, mu=0.01):
        self.mu =mu
        self.shadow = {}
    def register(self,name,val):
        self.shadow[name] = val.clone()

    def __call__(self, name, x):
        assert name in self.shadow
        new_average = self.mu * x +(1.0 -self.mu) * self.shadow[name]
        self.shadow[name] = new_average.clone()
        return new_average
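
Note that the EMA class is defined here but never wired into the training loop below; a minimal sketch of one way it could be used (illustrative only, not part of the original demo):

ema = EMA(mu=0.01)
for name, param in model.named_parameters():
    ema.register(name, param.data)

# ... then, after each optimizer.step(), blend the current weights into the running average:
for name, param in model.named_parameters():
    param.data = ema(name, param.data)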

print('Training model ......')
batch_size = 128
dataloader = torch.utils.data.DataLoader(dataset,batch_size=batch_size,shuffle = True)
num_epoch = 4000
plt.rc('text',color='blue')

model = MLPDiffusion(num_steps)
optimizer = torch.optim.Adam(model.parameters(),lr = 1e-3)

for t in range(num_epoch):
    for idx,batch_x in enumerate(dataloader):
        loss = diffusion_loss_fn(model,batch_x,alphas_bar_sqrt,one_minus_alphas_bar_sqrt,num_steps)
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(),1.)
        optimizer.step()

    if (t% 100 == 0):
        print(loss)
        x_seq = p_sample_loop(model,dataset.shape,num_steps,betas,one_minus_alphas_bar_sqrt)

        fig ,axs = plt.subplots(1,10,figsize=(28,3))
        for i in range(1, 11):
            cur_x = x_seq[i*10].detach()
            axs[i-1].scatter(cur_x[:,0], cur_x[:,1], color='red', edgecolor='white')
            axs[i-1].set_axis_off()
            axs[i-1].set_title(r'$q(\mathbf{x}_{'+str(i*10)+'})$')
        plt.show()

References and further reading

2020: Denoising Diffusion Probabilistic Models
2015: Deep Unsupervised Learning using Nonequilibrium Thermodynamics
Video walkthrough
DDPM code
https://github.com/openai/glide-text2im

An audio generation model based on the diffusion probabilistic model:
https://www.jianshu.com/p/8b120d1881c1
(An alternative path: Denoising Diffusion Probabilistic models, a model that peels images/audio out of noise)

Paper: Diffusion Models Beat GANs on Image Synthesis

disco-diffusion

disco-diffusion repo: https://github.com/alembics/disco-diffusion
OpenAI guided diffusion: https://github.com/openai/guided-diffusion

Video of running it on Colab
GitHub / Docker: disco-diffusion

Docker version for running locally:
https://github.com/MohamadZeina/Disco_Diffusion_Local
The implementation first uses CLIP to connect text and images.

https://github.com/afiaka87/clip-guided-diffusion

Experimental results

(Result figure)
  • Hmm, it doesn't quite seem to have made it back to the S shape.
(Result figure)

Original: https://blog.csdn.net/ResumeProject/article/details/125017980
Author: FakeOccupational
Title: A Brief Introduction to Diffusion Models + Code Demo
