From c30f3806896f140ca38da887542b4bb39311ad40 Mon Sep 17 00:00:00 2001
From: Phil Wang
Date: Tue, 3 May 2022 08:18:53 -0700
Subject: [PATCH] final reminder

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7511ada..2a5ab42 100644
--- a/README.md
+++ b/README.md
@@ -821,7 +821,7 @@ Once built, images will be saved to the same directory the command is invoked
 - [x] just take care of the training for the decoder in a wrapper class, as each unet in the cascade will need its own optimizer
 - [x] bring in tools to train vqgan-vae
 - [x] add convnext backbone for vqgan-vae (in addition to vit [vit-vqgan] + resnet)
-- [ ] become an expert with unets, cleanup unet code, make it fully configurable, port all learnings over to https://github.com/lucidrains/x-unet
+- [ ] become an expert with unets, cleanup unet code, make it fully configurable, port all learnings over to https://github.com/lucidrains/x-unet (test out unet² in ddpm repo)
 - [ ] copy the cascading ddpm code to a separate repo (perhaps https://github.com/lucidrains/denoising-diffusion-pytorch) as the main contribution of dalle2 really is just the prior network
 - [ ] transcribe code to Jax, which lowers the activation energy for distributed training, given access to TPUs
 - [ ] pull logic for training diffusion prior into a class DiffusionPriorTrainer, for eventual script based + CLI based training