Commit Graph

  • 680dfc4d93 yet more pydantic v2 stuff main lucidrains 2023-10-19 07:40:57 -07:00
  • b6fecae91a fix another pydantic 2 migration error lucidrains 2023-10-18 21:07:47 -07:00
  • dab2f74650 fix self_attn type on unetconfig 1.15.6 lucidrains 2023-10-18 21:02:50 -07:00
  • 1e173f4c66 more fixes to config 1.15.5 lucidrains 2023-10-18 20:27:32 -07:00
  • 410a6144e1 new einops is torch compile friendly 1.15.4 lucidrains 2023-10-18 15:45:09 -07:00
  • c6c3882dc1 fix all optional types in train config 1.15.3 lucidrains 2023-10-07 11:34:34 -07:00
  • 512b52bd78 1.15.2 1.15.2 Phil Wang 2023-10-04 09:38:46 -07:00
  • 147c156c8a Make TrackerLoadConfig optional (#306) Neil Kim Nielsen 2023-10-04 18:38:30 +02:00
  • 40843bcc21 pydantic 2 1.15.1 Phil Wang 2023-07-15 09:32:44 -07:00
  • c56336a104 pydantic 2 1.14.3 Phil Wang 2023-07-15 09:08:39 -07:00
  • 00e07b7d61 force einops 0.6.1 or greater and call allow_ops_in_compiled_graph 1.14.2 Phil Wang 2023-04-20 14:08:52 -07:00
  • deda18fb24 force einops 0.6.1 or greater and call allow_ops_in_compiled_graph 1.14.1 Phil Wang 2023-04-20 10:05:39 -07:00
  • 0069857cf8 remove einops exts for better pytorch 2.0 compile compatibility 1.14.0 Phil Wang 2023-04-20 07:05:29 -07:00
  • 580274be79 use .to(device) to avoid copy, within one_unet_in_gpu context 1.12.4 Phil Wang 2023-03-07 12:41:55 -08:00
  • 848e8a480a always rederive the predicted noise from the clipped x0 for ddim + predict noise objective 1.12.3 Phil Wang 2023-03-05 10:45:44 -08:00
  • cc58f75474 bump to newer package of clip-anytorch that allows for text encodings < maximum context length 1.12.2 Phil Wang 2023-03-04 09:37:25 -08:00
  • 3b2cf7b0bc fix for self conditioning in diffusion prior network https://github.com/lucidrains/DALLE2-pytorch/issues/273 1.12.1 Phil Wang 2023-02-11 17:18:40 -08:00
  • 984d62a373 default ddim sampling eta to 0 1.12.0 Phil Wang 2022-12-23 13:23:09 -08:00
  • 683dd98b96 extra insurance in case eos id is not there 1.11.4 Phil Wang 2022-12-15 10:54:21 -08:00
  • 067ac323da address https://github.com/lucidrains/DALLE2-pytorch/issues/266 1.11.2 Phil Wang 2022-11-23 08:41:18 -08:00
  • 91c8d1ca13 bug fix cosine annealing optimizer in prior trainer (#262) zion 2022-11-11 12:15:13 -08:00
  • 08238a7200 depend on open-clip-torch (#261) zion 2022-11-07 16:19:08 -08:00
  • 7166ad6711 add open clip to train_config (#260) zion 2022-11-07 15:44:36 -08:00
  • fbba0f9aaf bring in prediction of v objective, combining the findings from the progressive distillation paper and imagen-video, toward the eventual extension of dalle2 to make-a-video 1.11.1 Phil Wang 2022-10-28 18:21:07 -07:00
  • 1892f1ac1d bring in prediction of v objective, combining the findings from the progressive distillation paper and imagen-video, toward the eventual extension of dalle2 to make-a-video 1.11.0 Phil Wang 2022-10-28 18:09:34 -07:00
  • 9f37705d87 Add static graph param (#226) Romain Beaumont 2022-10-25 19:31:29 +02:00
  • c3df46e374 fix openclipadapter to be able to use latest open sourced sota model 1.10.9 Phil Wang 2022-10-23 15:12:09 -07:00
  • 41fabf2922 fix a dtype conversion issue for the diffusion timesteps in the diffusion prior, thanks to @JiaHeng-DLUT 1.10.8 Phil Wang 2022-10-19 09:26:00 -07:00
  • 5975e8222b Fix assert message (#253) Heng Jia 2022-10-18 23:50:59 +08:00
  • c18c080128 fix for use with larger openai clip models by extracting dimension of last layernorm in clip 1.10.7 Phil Wang 2022-09-29 09:09:41 -07:00
  • b39653cf96 fix readme dataloader example Phil Wang 2022-09-20 08:39:52 -07:00
  • 39f8b6cf16 show example of using SOTA open sourced open clip Phil Wang 2022-09-19 10:45:20 -07:00
  • d0c11b30b0 handle open clip adapter image size being a tuple 1.10.6 Phil Wang 2022-09-19 10:27:09 -07:00
  • 86e2d5ba84 Minor Decoder Train Script Fixes (#242) zion 2022-09-15 17:21:48 -07:00
  • 0d82dff9c5 in ddim, noise should be predicted after x0 is maybe clipped, thanks to @lukovnikov for pointing this out in another repository 1.10.5 Phil Wang 2022-09-01 09:40:47 -07:00
  • 8bbc956ff1 fix bug with misnamed variable in diffusion prior network 1.10.4 Phil Wang 2022-08-31 17:18:58 -07:00
  • 22019fddeb todo Phil Wang 2022-08-31 13:36:05 -07:00
  • 6fb7e91343 fix ddim to use alpha_cumprod 1.10.3 Phil Wang 2022-08-31 07:40:46 -07:00
  • 6520d17215 fix ddim to use alpha_cumprod 1.10.2 Phil Wang 2022-08-30 20:35:08 -07:00
  • ba58ae0bf2 add two asserts to diffusion prior to ensure matching image embedding dimensions for clip, diffusion prior network, and what was set on diffusion prior 1.10.1 Phil Wang 2022-08-28 10:11:37 -07:00
  • 1cc5d0afa7 upgrade to best downsample 1.10.0 Phil Wang 2022-08-25 10:37:02 -07:00
  • 59fa101c4d fix classifier free guidance for diffusion prior, thanks to @jaykim9870 for spotting the issue 1.9.0 Phil Wang 2022-08-23 08:28:54 -07:00
  • 916ece164c Merge pull request #234 from Veldrovive/deepspeed_fp16 Aidan Dempster 2022-08-20 19:01:43 -04:00
  • cbaadb6931 Fixed issues with clip and deepspeed fp16 Aidan 2022-07-29 16:57:27 +00:00
  • 083508ff8e cast attention matrix back to original dtype pre-softmax in attention 1.8.4 Phil Wang 2022-08-20 10:56:01 -07:00
  • 7762edd0ff make it work for @ethancohen123 Phil Wang 2022-08-19 11:28:58 -07:00
  • 3df86acc8b make it work for @ethancohen123 1.8.3 Phil Wang 2022-08-19 11:25:28 -07:00
  • de5e628773 cite einops Phil Wang 2022-08-17 08:58:41 -07:00
  • 1b4046b039 gratitude Phil Wang 2022-08-17 08:57:33 -07:00
  • 27f19ba7fa make sure diffusion prior trainer can operate with no warmup 1.8.2 Phil Wang 2022-08-15 14:27:40 -07:00
  • 8f38339c2b give diffusion prior trainer cosine annealing lr too 1.8.1 Phil Wang 2022-08-15 07:38:01 -07:00
  • 6b9b4b9e5e add cosine annealing lr schedule 1.8.0 Phil Wang 2022-08-15 07:29:56 -07:00
  • 44e09d5a4d add weight standardization behind feature flag, which may potentially work well with group norm Phil Wang 2022-08-14 11:34:45 -07:00
  • f8b005510c add weight standardization behind feature flag, which may potentially work well with group norm 1.7.0 Phil Wang 2022-08-14 11:33:18 -07:00
  • 34806663e3 make it so diffusion prior p_sample_loop returns unnormalized image embeddings 1.6.5 Phil Wang 2022-08-13 10:03:40 -07:00
  • dc816b1b6e dry up some code around handling unet outputs with learned variance 1.6.4 Phil Wang 2022-08-12 15:25:03 -07:00
  • 05192ffac4 fix self conditioning shape in diffusion prior 1.6.3 Phil Wang 2022-08-12 12:30:03 -07:00
  • 301a97197f fix self conditioning shape in diffusion prior 1.6.2 Phil Wang 2022-08-12 12:29:25 -07:00
  • 9440411954 make self conditioning technique work with diffusion prior 1.6.1 Phil Wang 2022-08-12 12:20:51 -07:00
  • 981d407792 comment Phil Wang 2022-08-12 11:41:23 -07:00
  • 7c5477b26d bet on the new self-conditioning technique out of geoffrey hinton's group 1.6.0 Phil Wang 2022-08-12 11:36:08 -07:00
  • be3bb868bf add gradient checkpointing for all resnet blocks 1.5.0 Phil Wang 2022-08-02 19:21:44 -07:00
  • 451de34871 enforce clip anytorch version 1.4.6 Phil Wang 2022-07-30 10:07:55 -07:00
  • f22e8c8741 make open clip available for use with dalle2 pytorch 1.4.5 Phil Wang 2022-07-30 09:02:31 -07:00
  • 87432e93ad quick fix for linear attention 1.4.4 Phil Wang 2022-07-29 13:17:12 -07:00
  • d167378401 add cosine sim for self attention as well, as a setting 1.4.3 Phil Wang 2022-07-29 12:48:20 -07:00
  • 2d67d5821e change up epsilon in layernorm in the case of using fp16, thanks to @Veldrovive for figuring out this stabilizes training 1.4.2 Phil Wang 2022-07-29 12:41:02 -07:00
  • dbb52cea9c change up epsilon in layernorm in the case of using fp16, thanks to @Veldrovive for figuring out this stabilizes training 1.4.1 Phil Wang 2022-07-29 12:39:56 -07:00
  • 748c7fe7af allow for cosine sim cross attention, modify linear attention in an attempt to resolve issue on fp16 1.4.0 Phil Wang 2022-07-29 11:12:18 -07:00
  • 80046334ad make sure entire readme runs without errors 1.2.2 Phil Wang 2022-07-28 10:17:43 -07:00
  • 36fb46a95e fix readme and a small bug in DALLE2 class 1.2.1 Phil Wang 2022-07-28 08:33:51 -07:00
  • 07abfcf45b rescale values in linear attention to mitigate overflows in fp16 setting 1.2.0 Phil Wang 2022-07-27 12:27:32 -07:00
  • 2e35a9967d product management Phil Wang 2022-07-26 11:10:16 -07:00
  • 406e75043f add upsample combiner feature for the unets 1.1.0 Phil Wang 2022-07-26 10:46:04 -07:00
  • 9646dfc0e6 fix path_or_state bug 1.0.6 Phil Wang 2022-07-26 09:47:54 -07:00
  • 62043acb2f fix repaint 1.0.5 Phil Wang 2022-07-24 15:29:06 -07:00
  • 9008531d62 fix repaint 1.0.4 Phil Wang 2022-07-24 15:22:59 -07:00
  • 417ff808e6 1.0.3 1.0.3 Phil Wang 2022-07-22 13:16:57 -07:00
  • f3d7e226ba Changed types to be generic instead of functions (#215) Aidan Dempster 2022-07-22 16:16:29 -04:00
  • 48a1302428 1.0.2 Phil Wang 2022-07-20 23:01:51 -07:00
  • ccaa46b81b Re-introduced change that was accidentally rolled back (#212) Aidan Dempster 2022-07-21 02:01:19 -04:00
  • 76d08498cc diffusion prior training updates from @nousr 1.0.1 Phil Wang 2022-07-20 18:05:27 -07:00
  • f9423d308b Prior updates (#211) zion 2022-07-20 18:04:26 -07:00
  • 06c65b60d2 1.0.0 1.0.0 Phil Wang 2022-07-19 19:08:17 -07:00
  • 4145474bab Improved upsampler training (#181) Aidan Dempster 2022-07-19 22:07:50 -04:00
  • 4b912a38c6 0.26.2 0.26.2 Phil Wang 2022-07-19 17:50:36 -07:00
  • f97e55ec6b Quality of life improvements for tracker savers (#210) Aidan Dempster 2022-07-19 20:50:18 -04:00
  • 291377bb9c @jacobwjs reports dynamic thresholding works very well and 0.95 is a better value v0.26.1 Phil Wang 2022-07-19 11:31:56 -07:00
  • 7f120a8b56 cleanup, CLI no longer necessary since Zion + Aidan have https://github.com/LAION-AI/dalle2-laion and colab notebook going Phil Wang 2022-07-19 09:47:44 -07:00
  • 8c003ab1e1 readme and citation Phil Wang 2022-07-19 09:36:45 -07:00
  • 723bf0abba complete inpainting ability using inpaint_image and inpaint_mask passed into sample function for decoder v0.26.0 Phil Wang 2022-07-19 09:26:55 -07:00
  • d88c7ba56c fix a bug with ddim and predict x0 objective v0.25.2 Phil Wang 2022-07-18 19:04:21 -07:00
  • 3676a8ce78 comments Phil Wang 2022-07-18 15:02:04 -07:00
  • da8e99ada0 fix sample bug v0.25.1 Phil Wang 2022-07-18 13:50:22 -07:00
  • 6afb886cf4 complete imagen-like noise level conditioning v0.25.0 Phil Wang 2022-07-18 13:43:57 -07:00
  • c7fe4f2f44 project management Phil Wang 2022-07-17 17:27:44 -07:00
  • a2ee3fa3cc offer way to turn off initial cross embed convolutional module, for debugging upsampler artifacts v0.24.3 Phil Wang 2022-07-15 17:29:10 -07:00
  • a58a370d75 takes care of a grad strides error at https://github.com/lucidrains/DALLE2-pytorch/issues/196 thanks to @YUHANG-Ma v0.24.2 Phil Wang 2022-07-14 15:28:34 -07:00
  • 1662bbf226 protect against random cropping for base unet v0.24.1 Phil Wang 2022-07-14 12:49:26 -07:00
  • 5be1f57448 update Phil Wang 2022-07-14 12:03:42 -07:00