Phil Wang | 2705e7c9b0 | attention-based upsampling claims unsupported by local experiments, removing | 2022-04-27 07:51:04 -07:00
Phil Wang | 77141882c8 | complete vit-vqgan from https://arxiv.org/abs/2110.04627 | 2022-04-26 17:20:47 -07:00
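For reference, a minimal sketch of the piece ViT-VQGAN (arXiv 2110.04627) changes relative to the convolutional VQGAN: a factorized, l2-normalized codebook lookup with a straight-through estimator. The ViT patch encoder/decoder are omitted, and all names here (`FactorizedVQ`, `to_code`, `from_code`) are illustrative, not this repo's actual API.

```python
import torch
import torch.nn.functional as F
from torch import nn

class FactorizedVQ(nn.Module):
    # Sketch of ViT-VQGAN-style quantization: project to a low-dimensional
    # code space before lookup (factorization), l2-normalize both codes and
    # codebook so the match is by cosine similarity.
    def __init__(self, dim, codebook_size = 8192, code_dim = 32):
        super().__init__()
        self.to_code = nn.Linear(dim, code_dim)    # factorized: down-project before lookup
        self.from_code = nn.Linear(code_dim, dim)
        self.codebook = nn.Embedding(codebook_size, code_dim)

    def forward(self, x):                          # x: (batch, tokens, dim)
        z = F.normalize(self.to_code(x), dim = -1)               # l2-normalize codes ...
        c = F.normalize(self.codebook.weight, dim = -1)          # ... and codebook entries
        indices = (z @ c.t()).argmax(dim = -1)                   # nearest code by cosine similarity
        quantized = c[indices]
        commit_loss = F.mse_loss(z, quantized.detach())          # commitment loss
        quantized = z + (quantized - z).detach()                 # straight-through gradient
        return self.from_code(quantized), indices, commit_loss
```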
Phil Wang | 0b28ee0d01 | revert to old upsampling, the paper's method does not work | 2022-04-26 07:39:04 -07:00
Phil Wang | f75d49c781 | start a file for all attention-related modules, use attention-based upsampling in the unets in dalle-2 | 2022-04-25 18:59:10 -07:00
Phil Wang | 3b520dfa85 | bring in attention-based upsampling to strengthen vqgan-vae, seems to work as advertised in initial GAN experiments | 2022-04-25 17:27:45 -07:00
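One plausible reading of "attention-based upsampling", sketched below: queries come from a naively upsampled feature map, keys and values from the original low-resolution map, so each output position mixes low-res features by attention rather than a fixed kernel. This is an illustrative interpretation, not the paper's or the repo's exact module, and per the later commits above the technique was removed after local experiments failed to support the claims.

```python
import torch
import torch.nn.functional as F
from torch import nn

class AttentionUpsample(nn.Module):
    # Sketch: cross-attention from an upsampled query grid back to the
    # low-resolution feature map. Global attention here for simplicity.
    def __init__(self, dim, scale = 2):
        super().__init__()
        self.scale = scale
        self.to_q = nn.Conv2d(dim, dim, 1)
        self.to_kv = nn.Conv2d(dim, dim * 2, 1)

    def forward(self, x):                                   # x: (b, c, h, w)
        b, c, h, w = x.shape
        up = F.interpolate(x, scale_factor = self.scale, mode = 'nearest')
        q = self.to_q(up).flatten(2).transpose(1, 2)        # (b, HW, c) queries from upsampled grid
        k, v = self.to_kv(x).chunk(2, dim = 1)
        k = k.flatten(2).transpose(1, 2)                    # (b, hw, c) keys from low-res map
        v = v.flatten(2).transpose(1, 2)
        attn = (q @ k.transpose(1, 2) * c ** -0.5).softmax(dim = -1)
        out = (attn @ v).transpose(1, 2)
        return out.reshape(b, c, h * self.scale, w * self.scale)
```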
Phil Wang | f82917e1fd | prepare for turning off gradient penalty; as shown in the GAN literature, GP need only be applied 1 out of every 4 iterations | 2022-04-23 07:52:10 -07:00
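The "1 out of every 4 iterations" schedule matches lazy regularization as popularized by StyleGAN2. Below is a minimal sketch of an R1 gradient penalty on real images applied only every 4th discriminator step; `discriminator_step`, `opt`, and the weight constant are illustrative placeholders, not this repo's training loop.

```python
import torch
import torch.nn.functional as F

APPLY_GP_EVERY = 4     # lazy regularization: penalize 1 of every 4 steps
GP_WEIGHT = 10.

def discriminator_step(discriminator, opt, real, fake, step):
    real = real.requires_grad_()                   # needed for the R1 gradient
    real_logits = discriminator(real)
    fake_logits = discriminator(fake.detach())

    # non-saturating GAN loss for the discriminator
    loss = F.softplus(-real_logits).mean() + F.softplus(fake_logits).mean()

    if step % APPLY_GP_EVERY == 0:                 # gradient penalty only on every 4th step
        grad, = torch.autograd.grad(
            outputs = real_logits.sum(), inputs = real, create_graph = True)
        gp = grad.flatten(1).pow(2).sum(dim = 1).mean()
        loss = loss + GP_WEIGHT * gp

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```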
Phil Wang | 05b74be69a | use null container pattern to clean up some conditionals, save more cleanup for next week | 2022-04-22 15:23:18 -07:00
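A minimal sketch of the null container (null object) pattern the commit refers to: a do-nothing stand-in that satisfies the same interface as the real object, so call sites can drop their `if x is not None` branches. Class and method names below are illustrative, not necessarily the repo's.

```python
from torch import nn

class NullVAE(nn.Module):
    """Identity stand-in used when no latent space is wanted."""
    def encode(self, images):
        return images                    # pixels pass through untouched

    def decode(self, latents):
        return latents

def make_decoder_step(vae):
    vae = vae if vae is not None else NullVAE()

    def step(images, denoise_fn):
        latents = vae.encode(images)     # no conditional: the null case is a no-op
        latents = denoise_fn(latents)
        return vae.decode(latents)

    return step
```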
Phil Wang | 76b32f18b3 | first pass at complete DALL-E2 + Latent Diffusion integration, with latent diffusion on any layer(s) of the cascading ddpm in the decoder | 2022-04-22 13:53:13 -07:00
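A minimal sketch of what per-stage latent diffusion in a cascading DDPM could look like: each unet in the cascade is paired with a VQGAN-VAE and diffuses in that VAE's latent space, with a `NullVAE` stage (see the sketch above) reducing to ordinary pixel-space diffusion. Resolution handling and conditioning on the previous stage are omitted, and `LatentCascade` and `diffusion_loss_fn` are hypothetical names, not the repo's API.

```python
import torch
from torch import nn

class LatentCascade(nn.Module):
    # Sketch: one optional VAE per cascade stage; pass NullVAE for a
    # stage that should diffuse directly in pixel space.
    def __init__(self, unets, vaes):
        super().__init__()
        assert len(unets) == len(vaes)
        self.unets = nn.ModuleList(unets)
        self.vaes = nn.ModuleList(vaes)

    def training_losses(self, images, diffusion_loss_fn):
        losses = []
        for unet, vae in zip(self.unets, self.vaes):
            with torch.no_grad():
                latents = vae.encode(images)   # VAE stays frozen while training the DDPM
            losses.append(diffusion_loss_fn(unet, latents))
        return sum(losses)
```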
Phil Wang | 461347c171 | fix vqgan-vae for latent diffusion | 2022-04-22 11:38:57 -07:00
Phil Wang | ad17c69ab6 | prepare for latent diffusion in the first DDPM of the cascade in the Decoder | 2022-04-21 17:54:31 -07:00