Kumar R | 8647cb5e76
Val loss changes, with quite a few other changes. This is in place of the earlier PR (https://github.com/lucidrains/DALLE2-pytorch/pull/67) (#77)
* Val_loss changes - not rebased with lucidrains' master.
* Val Loss changes - now rebased with lucidrains' master
* train_diffusion_prior.py updates
* dalle2_pytorch.py updates
* __init__.py changes
* Update train_diffusion_prior.py
* Update dalle2_pytorch.py
* Update train_diffusion_prior.py
* Update train_diffusion_prior.py
* Update dalle2_pytorch.py
* Update train_diffusion_prior.py
* Update train_diffusion_prior.py
* Update train_diffusion_prior.py
* Update train_diffusion_prior.py
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
2022-05-09 08:53:29 -07:00

Phil Wang | 98df1ba51e
add diffusion prior trainer, which automatically takes care of the exponential moving average (training and sampling), as well as mixed precision and gradient clipping
2022-05-06 08:11:09 -07:00

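For readers following along, a minimal sketch of how such a trainer is used, following the pattern documented in the repository README; the toy CLIP, hyperparameters, and tensor shapes below are illustrative, not taken from this commit:

```python
import torch
from dalle2_pytorch import CLIP, DiffusionPriorNetwork, DiffusionPrior, DiffusionPriorTrainer

# toy CLIP so the sketch is self-contained (sizes are illustrative)
clip = CLIP(
    dim_text = 512, dim_image = 512, dim_latent = 512,
    num_text_tokens = 49408, text_enc_depth = 1, text_seq_len = 256, text_heads = 8,
    visual_enc_depth = 1, visual_image_size = 256, visual_patch_size = 32, visual_heads = 8
)

prior_network = DiffusionPriorNetwork(dim = 512, depth = 2, dim_head = 64, heads = 8)

diffusion_prior = DiffusionPrior(
    net = prior_network,
    clip = clip,
    timesteps = 100,
    cond_drop_prob = 0.2
)

# the trainer owns the optimizer and an EMA copy of the prior, and takes care of
# mixed precision and gradient clipping internally
trainer = DiffusionPriorTrainer(
    diffusion_prior,
    lr = 3e-4,
    wd = 1e-2,
    ema_beta = 0.99,
    ema_update_every = 10
)

# mock data
text   = torch.randint(0, 49408, (4, 256))
images = torch.randn(4, 3, 256, 256)

loss = trainer(text, images)   # per the README usage, no explicit loss.backward() is shown for the trainer
trainer.update()               # optimizer step plus exponential moving average update

image_embeds = trainer.sample(text)  # sampling draws from the EMA copy of the prior
```
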
Phil Wang | a9421f49ec
simplify Decoder training for the public
2022-04-30 11:45:18 -07:00

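As a point of reference, decoder training after this simplification looks roughly like the single-unet example in the README; the toy CLIP, unet sizes, and image shapes below are illustrative assumptions:

```python
import torch
from dalle2_pytorch import CLIP, Unet, Decoder

# toy CLIP so the sketch is self-contained (sizes are illustrative)
clip = CLIP(
    dim_text = 512, dim_image = 512, dim_latent = 512,
    num_text_tokens = 49408, text_enc_depth = 1, text_seq_len = 256, text_heads = 8,
    visual_enc_depth = 1, visual_image_size = 256, visual_patch_size = 32, visual_heads = 8
)

unet = Unet(
    dim = 128,
    image_embed_dim = 512,
    cond_dim = 128,
    channels = 3,
    dim_mults = (1, 2, 4, 8)
)

decoder = Decoder(
    unet = unet,
    clip = clip,
    timesteps = 100,
    image_cond_drop_prob = 0.1,
    text_cond_drop_prob = 0.5
)

images = torch.randn(4, 3, 256, 256)

# the decoder forward returns the diffusion loss directly
loss = decoder(images)
loss.backward()
```
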
Phil Wang | 5063d192b6
now completely OpenAI CLIP compatible for training
* just take care of the logic for AdamW and transformers
* used namedtuples for clip adapter embedding outputs
2022-04-29 13:05:01 -07:00

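For context, a pretrained OpenAI CLIP is swapped in through the adapter class; a minimal sketch following the README (this assumes OpenAI's `clip` package is installed, and the 512-dim prior network matches ViT-B/32 embeddings):

```python
from dalle2_pytorch import OpenAIClipAdapter, DiffusionPriorNetwork, DiffusionPrior

# wraps a pretrained OpenAI CLIP behind the same embedding interface as the toy CLIP
clip = OpenAIClipAdapter()

prior_network = DiffusionPriorNetwork(dim = 512, depth = 6, dim_head = 64, heads = 8)

diffusion_prior = DiffusionPrior(
    net = prior_network,
    clip = clip,
    timesteps = 100,
    cond_drop_prob = 0.2
)

# training then proceeds exactly as with the built-in CLIP
```
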
Phil Wang | 2c6c91829d
refactor blurring training augmentation to be taken care of by the decoder, with the option to downsample to the previous resolution before upsampling (cascading ddpm). this opens up the possibility of cascading latent ddpm
2022-04-22 11:09:17 -07:00

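To illustrate the cascading setup this refactor enables, a two-unet decoder can be configured roughly as in the README's cascading example; the `image_sizes` stages and unet sizes below are illustrative assumptions, the point being that the decoder itself downsamples (and blurs) the target to the previous stage's resolution:

```python
import torch
from dalle2_pytorch import CLIP, Unet, Decoder

# toy CLIP so the sketch is self-contained (sizes are illustrative)
clip = CLIP(
    dim_text = 512, dim_image = 512, dim_latent = 512,
    num_text_tokens = 49408, text_enc_depth = 1, text_seq_len = 256, text_heads = 8,
    visual_enc_depth = 1, visual_image_size = 256, visual_patch_size = 32, visual_heads = 8
)

unet1 = Unet(dim = 32, image_embed_dim = 512, cond_dim = 128, channels = 3, dim_mults = (1, 2, 4, 8))
unet2 = Unet(dim = 32, image_embed_dim = 512, cond_dim = 128, channels = 3, dim_mults = (1, 2, 4, 8, 16))

decoder = Decoder(
    clip = clip,
    unet = (unet1, unet2),
    image_sizes = (128, 256),       # each successive unet upsamples from the previous resolution
    timesteps = 100,
    image_cond_drop_prob = 0.1,
    text_cond_drop_prob = 0.5
)

images = torch.randn(4, 3, 256, 256)

loss = decoder(images, unet_number = 1)   # base 128px stage
loss.backward()

loss = decoder(images, unet_number = 2)   # 256px stage; the low-res conditioning is prepared by the decoder
loss.backward()
```
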
Phil Wang | a1a8a78f21
fix everything and make sure it runs end to end, document everything in readme for public
2022-04-13 18:05:25 -07:00

Phil Wang | f283bf25be
scaffold
2022-04-07 07:29:34 -07:00