This commit is contained in:
Phil Wang
2022-04-20 12:24:23 -07:00
committed by GitHub
parent f027b82e38
commit 0b4ec34efb


@@ -415,6 +415,7 @@ Offer training wrappers
- [ ] build out latent diffusion architecture and make it completely optional (an additional autoencoder plus KL and VQ regularizations); figure out whether latent diffusion and cascading ddpm can be used in conjunction
- [ ] extend the diffusion head to use diffusion-gan (potentially using lightweight-gan) to speed up inference
- [ ] become an expert with unets, clean up the unet code, make it fully configurable, and port all learnings over to https://github.com/lucidrains/x-unet
- [ ] copy the cascading ddpm code to a separate repo (perhaps https://github.com/lucidrains/denoising-diffusion-pytorch), since the main contribution of dalle2 is really just the prior network
- [ ] train on a toy task, offer it in colab
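The latent diffusion item above can be sketched roughly as follows: a small autoencoder maps images into a compact latent space, a regularizer keeps the latents well-behaved (standing in for the KL and VQ regularizations mentioned), and the diffusion process then operates on those latents instead of raw pixels. All module names, shapes, and the choice of a simple L2 latent penalty here are illustrative assumptions, not the repo's actual API.

```python
import torch
from torch import nn

class ToyAutoencoder(nn.Module):
    # hypothetical minimal autoencoder for the latent diffusion sketch;
    # not the actual architecture used in this repository
    def __init__(self, channels = 3, latent_dim = 4):
        super().__init__()
        # downsample 2x into a low-dimensional latent, then upsample back
        self.encoder = nn.Conv2d(channels, latent_dim, 4, stride = 2, padding = 1)
        self.decoder = nn.ConvTranspose2d(latent_dim, channels, 4, stride = 2, padding = 1)

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z)
        # simple L2 penalty on the latents, standing in for kl / vq regs
        reg = z.pow(2).mean()
        return recon, z, reg

ae = ToyAutoencoder()
images = torch.randn(2, 3, 32, 32)
recon, latents, reg = ae(images)

# diffusion would run on `latents` (shape 2, 4, 16, 16) rather than pixels;
# a single toy noising step, in place of a full ddpm forward process:
noised = latents + torch.randn_like(latents) * 0.1
```

Running diffusion in the smaller latent space is what makes the autoencoder optional in the design above: with it disabled, the same diffusion code would simply receive pixel tensors instead of latents.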
## Citations