From 8b9bbec7d130564de9015cc095f1bc983f1a0df3 Mon Sep 17 00:00:00 2001
From: Phil Wang
Date: Sun, 1 May 2022 09:32:57 -0700
Subject: [PATCH] project management

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 18c7986..6c6e8b6 100644
--- a/README.md
+++ b/README.md
@@ -820,13 +820,13 @@ Once built, images will be saved to the same directory the command is invoked
 
 - [x] take care of mixed precision as well as gradient accumulation within decoder trainer
 - [x] just take care of the training for the decoder in a wrapper class, as each unet in the cascade will need its own optimizer
 - [x] bring in tools to train vqgan-vae
+- [x] add convnext backbone for vqgan-vae (in addition to vit [vit-vqgan] + resnet)
 - [ ] become an expert with unets, cleanup unet code, make it fully configurable, port all learnings over to https://github.com/lucidrains/x-unet
 - [ ] copy the cascading ddpm code to a separate repo (perhaps https://github.com/lucidrains/denoising-diffusion-pytorch) as the main contribution of dalle2 really is just the prior network
 - [ ] transcribe code to Jax, which lowers the activation energy for distributed training, given access to TPUs
 - [ ] train on a toy task, offer in colab
 - [ ] think about how best to design a declarative training config that handles preencoding for prior and training of multiple networks in decoder
 - [ ] extend diffusion head to use diffusion-gan (potentially using lightweight-gan) to speed up inference
-- [ ] add convnext backbone for vqgan-vae (in addition to vit [vit-vqgan] + resnet)
 
 ## Citations