From 4e49373fc5bf6fe68c1d0904dbc3e07a58b857d6 Mon Sep 17 00:00:00 2001
From: Phil Wang
Date: Sun, 22 May 2022 15:27:40 -0700
Subject: [PATCH] project management

---
 README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 350839a..787e9b8 100644
--- a/README.md
+++ b/README.md
@@ -1078,6 +1078,7 @@ This library would not have gotten to this working state without the help of
 - [x] use an experimental tracker agnostic setup, as done here
 - [x] use pydantic for config drive training
 - [x] for both diffusion prior and decoder, all exponential moving averaged models needs to be saved and restored as well (as well as the step number)
+- [x] offer save / load methods on the trainer classes to automatically take care of state dicts for scalers / optimizers / saving versions and checking for breaking changes
 - [ ] become an expert with unets, cleanup unet code, make it fully configurable, port all learnings over to https://github.com/lucidrains/x-unet (test out unet² in ddpm repo) - consider https://github.com/lucidrains/uformer-pytorch attention-based unet
 - [ ] transcribe code to Jax, which lowers the activation energy for distributed training, given access to TPUs
 - [ ] train on a toy task, offer in colab
@@ -1087,11 +1088,9 @@ This library would not have gotten to this working state without the help of
 - [ ] test out grid attention in cascading ddpm locally, decide whether to keep or remove
 - [ ] interface out the vqgan-vae so a pretrained one can be pulled off the shelf to validate latent diffusion + DALL-E2
 - [ ] make sure FILIP works with DALL-E2 from x-clip https://arxiv.org/abs/2111.07783
-- [ ] offer save / load methods on the trainer classes to automatically take care of state dicts for scalers / optimizers / saving versions and checking for breaking changes
 - [ ] bring in skip-layer excitatons (from lightweight gan paper) to see if it helps for either decoder of unet or vqgan-vae training
 - [ ] decoder needs one day worth of refactor for tech debt
 - [ ] allow for unet to be able to condition non-cross attention style as well
-- [ ] for all model classes with hyperparameters that changes the network architecture, make it requirement that they must expose a config property, and write a simple function that asserts that it restores the object correctly
 - [ ] read the paper, figure it out, and build it https://github.com/lucidrains/DALLE2-pytorch/issues/89
 
 ## Citations
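For context, the item this patch moves to done describes unified checkpointing on the trainer classes: one save / load pair that bundles the model, EMA copy, optimizer, and grad-scaler state dicts together with a version for breaking-change checks. Below is a minimal sketch of what that could look like; all names here (`TrainerCheckpointMixin`, `CHECKPOINT_VERSION`, and the attributes it assumes on `self`) are hypothetical illustrations, not the actual DALLE2-pytorch API.

```python
# Hypothetical sketch, not DALLE2-pytorch's real trainer API.
# Assumes the trainer exposes: self.step, self.model, self.ema_model,
# self.optimizer, and self.scaler (a torch.cuda.amp.GradScaler).

import torch

CHECKPOINT_VERSION = 1  # bump whenever the checkpoint layout changes incompatibly

class TrainerCheckpointMixin:
    def save(self, path):
        # bundle every piece of training state into a single file,
        # so resuming restores optimization exactly where it left off
        torch.save({
            'version':   CHECKPOINT_VERSION,
            'step':      self.step,
            'model':     self.model.state_dict(),
            'ema_model': self.ema_model.state_dict(),
            'optimizer': self.optimizer.state_dict(),
            'scaler':    self.scaler.state_dict(),
        }, path)

    def load(self, path, strict = True):
        ckpt = torch.load(path, map_location = 'cpu')

        # refuse checkpoints written under an incompatible layout,
        # instead of failing obscurely deep inside load_state_dict
        if ckpt.get('version') != CHECKPOINT_VERSION:
            raise RuntimeError(
                f"checkpoint version {ckpt.get('version')} does not match "
                f"current version {CHECKPOINT_VERSION}"
            )

        self.step = ckpt['step']
        self.model.load_state_dict(ckpt['model'], strict = strict)
        self.ema_model.load_state_dict(ckpt['ema_model'], strict = strict)
        self.optimizer.load_state_dict(ckpt['optimizer'])
        self.scaler.load_state_dict(ckpt['scaler'])
```

Storing an explicit version alongside the state dicts is what makes the "checking for breaking changes" part of the todo item workable: an old checkpoint can be rejected (or migrated) up front rather than producing a confusing key-mismatch error during restore.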