Forward Diffusion Process
$$
\text{Goal: gradually add Gaussian noise, then learn to reverse the process}
$$
$$
x_0 \sim q(x) \quad \text{(original data)}
$$
$$
\{\beta_t\in(0,1)\}_{t=1}^T \text{ is a hyperparameter (noise schedule)}
$$
$$
q(x_t|x_{t-1})=\mathcal N(x_t;\sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I)
$$
$$
q(x_{1:T}|x_0)=\prod_{t=1}^T q(x_t|x_{t-1})
$$
- $q$ is a Markov chain
- With a well-chosen schedule, $\lim_{T\rightarrow\infty} x_T$ follows an isotropic Gaussian distribution
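As a concrete illustration, here is a minimal NumPy sketch of the forward step $q(x_t|x_{t-1})$. The linear schedule from $10^{-4}$ to $0.02$ is an assumption for illustration; the notes above only require $\beta_t\in(0,1)$.

```python
import numpy as np

T = 1000
# Assumed linear schedule (not fixed by the notes above); the values
# 1e-4 -> 0.02 are a common DDPM choice, used here only for illustration.
betas = np.linspace(1e-4, 0.02, T)

def forward_step(x_prev, t, rng):
    # x_t ~ N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I)
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - betas[t]) * x_prev + np.sqrt(betas[t]) * noise

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # toy "data" x_0
for t in range(T):           # after many steps, x_T is close to N(0, I)
    x = forward_step(x, t, rng)
```

Running the full chain step by step like this is exactly what the reparameterization trick below lets us avoid.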
Reparameterization Trick
$$
q(x_t|x_{t-1})=\mathcal N(x_t;\sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I)
$$
$$
\alpha_t=1-\beta_t,\qquad \bar\alpha_t=\prod_{i=1}^t\alpha_i
$$
$$
\begin{aligned}
x_t &= \sqrt{\alpha_t}\,x_{t-1}+\sqrt{1-\alpha_t}\,z_{t-1} \qquad (z_{t-1}\sim\mathcal N(0, I)) \\
&= \sqrt{\alpha_t\alpha_{t-1}}\,x_{t-2} + \sqrt{\alpha_t(1-\alpha_{t-1})}\,z_{t-2} + \sqrt{1-\alpha_t}\,z_{t-1} \\
&= \sqrt{\alpha_t\alpha_{t-1}}\,x_{t-2} + \sqrt{1-\alpha_t\alpha_{t-1}}\,\bar z_{t-2} \qquad (\text{merge the two Gaussians}) \\
&= \cdots = \sqrt{\bar\alpha_t}\,x_0+\sqrt{1-\bar\alpha_t}\,z
\end{aligned}
$$
$$
q(x_t|x_0)=\mathcal N(x_t;\sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)I)
$$
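The closed form $q(x_t|x_0)$ can be sampled in a single step. A minimal sketch, again assuming a linear schedule for illustration:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # bar{alpha}_t = prod_{i<=t} alpha_i

def q_sample(x0, t, rng):
    # One-shot sample x_t ~ q(x_t | x_0) via the reparameterization trick:
    # x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * z,  z ~ N(0, I)
    z = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * z

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)
xt = q_sample(x0, 500, rng)  # jump straight to t = 500, no 500-step loop
```

This one-shot sampling is what makes training efficient: any timestep can be drawn independently.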
Reverse Diffusion Process
- It would be ideal to reproduce the true reverse process, but this is intractable in practice, because it would require observing the entire dataset.
$$
p_\theta(x_{0:T})=p(x_T)\prod_{t=1}^T p_\theta(x_{t-1}|x_t)
$$
$$
p_\theta(x_{t-1}|x_t)=\mathcal N(x_{t-1};\mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t))
$$
- The mean and variance of each Gaussian transition are parametrized and predicted by a network.
- Since the true reverse process is unknown, we additionally condition on $x_0$; this conditional is tractable.
$$
q(x_{t-1}|x_t, x_0)=\mathcal N(x_{t-1};\tilde\mu_t(x_t, x_0),\ \tilde\beta_t I)
$$
$$
\begin{aligned}
q(x_{t-1}|x_t, x_0) &= q(x_t|x_{t-1}, x_0)\frac{q(x_{t-1}|x_0)}{q(x_t|x_0)} \\
&\propto \exp\left(-\frac{1}{2}\left(\frac{(x_t-\sqrt{\alpha_t}\,x_{t-1})^2}{\beta_t} + \frac{(x_{t-1}-\sqrt{\bar\alpha_{t-1}}\,x_0)^2}{1-\bar\alpha_{t-1}} - \frac{(x_t-\sqrt{\bar\alpha_t}\,x_0)^2}{1-\bar\alpha_t}\right)\right) \\
&= \exp\left(-\frac{1}{2}\left(\left(\frac{\alpha_t}{\beta_t}+\frac{1}{1-\bar\alpha_{t-1}}\right)x_{t-1}^2 - 2\left(\frac{\sqrt{\alpha_t}}{\beta_t}x_t + \frac{\sqrt{\bar\alpha_{t-1}}}{1-\bar\alpha_{t-1}}x_0\right)x_{t-1} + C(x_t, x_0)\right)\right)
\end{aligned}
$$
$$
\tilde\beta_t = \frac{1-\bar\alpha_{t-1}}{1-\bar\alpha_t}\,\beta_t,\qquad
\tilde\mu_t(x_t, x_0) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\bar\alpha_t}}\,z_t\right)
$$
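The posterior mean $\tilde\mu_t$ can be written either in terms of $x_0$ (directly from the completed square) or in terms of the noise $z_t$, as in the last line. A small NumPy sketch, assuming a linear schedule, that checks the two forms agree numerically:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def posterior_mean_from_x0(xt, x0, t):
    # tilde_mu(x_t, x_0) written directly in terms of x_0
    ab_prev = alpha_bars[t - 1]
    return (np.sqrt(ab_prev) * betas[t] * x0
            + np.sqrt(alphas[t]) * (1.0 - ab_prev) * xt) / (1.0 - alpha_bars[t])

def posterior_mean_from_noise(xt, z, t):
    # Equivalent form: (1/sqrt(alpha_t)) (x_t - beta_t/sqrt(1-abar_t) * z)
    return (xt - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * z) / np.sqrt(alphas[t])

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)
t = 500
z = rng.standard_normal(8)
xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * z
mu_a = posterior_mean_from_x0(xt, x0, t)
mu_b = posterior_mean_from_noise(xt, z, t)   # the two forms agree
```

The noise form is the one the network parametrizes, since $z$ is what $z_\theta(x_t, t)$ learns to predict.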
We can use the variational lower bound (VLB) as our loss.
ELBO
$$
\begin{aligned}
L &= -\int q(x_0)\log p_\theta(x_0)\,dx_0 = -E_{q(x_0)}[\log p_\theta(x_0)] \\
&= -E_{q(x_0)}\left[\log\frac{p_\theta(x_{0:T})}{p_\theta(x_{1:T}|x_0)}\right] \\
&= -E_{q(x_0)}\left[\log\left(\frac{p_\theta(x_{0:T})}{p_\theta(x_{1:T}|x_0)}\cdot\frac{q(x_{1:T}|x_0)}{q(x_{1:T}|x_0)}\right)\right] \\
&\leq -E_{q(x_{0:T})}\left[\log\frac{p_\theta(x_{0:T})}{q(x_{1:T}|x_0)}\right] = L_{VLB}
\end{aligned}
$$

- The inequality holds because the dropped term is $E_{q(x_0)}\left[D_{KL}\big(q(x_{1:T}|x_0)\,\|\,p_\theta(x_{1:T}|x_0)\big)\right]\geq 0$.
$$
\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t-\frac{\beta_t}{\sqrt{1-\bar\alpha_t}}\,z_\theta(x_t, t)\right)
$$
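With this parametrization, one reverse sampling step can be sketched as follows. This is a minimal illustration, assuming a linear schedule and fixing the variance to $\tilde\beta_t$ (one common DDPM choice); the placeholder `z_theta` stands in for a trained noise-prediction network.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def p_sample(xt, t, z_theta, rng):
    # One reverse step x_{t-1} ~ p_theta(x_{t-1} | x_t), with the mean
    # parametrized through the predicted noise z_theta(x_t, t) and the
    # variance fixed to tilde_beta_t.
    mu = (xt - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * z_theta(xt, t)) \
         / np.sqrt(alphas[t])
    if t == 0:
        return mu  # no noise is added at the final step
    beta_tilde = (1.0 - alpha_bars[t - 1]) / (1.0 - alpha_bars[t]) * betas[t]
    return mu + np.sqrt(beta_tilde) * rng.standard_normal(xt.shape)

# Toy usage with a placeholder "network" that always predicts zero noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)           # start from x_T ~ N(0, I)
for t in reversed(range(T)):
    x = p_sample(x, t, lambda xt, t: np.zeros_like(xt), rng)
```

In a real sampler, `z_theta` would be the trained model $z_\theta(x_t, t)$; everything else here follows the formulas above.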