Phil Wang
c18c080128
fix for use with larger openai clip models by extracting dimension of last layernorm in clip
1.10.7
2022-09-29 09:09:47 -07:00
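A minimal sketch of the idea in the commit above, assuming the OpenAI `clip` package: instead of hard-coding the embedding width, read it off the final LayerNorm of the loaded model so larger variants such as ViT-L/14 report 768 automatically. The helper name is illustrative, not the repo's actual code.

```python
import clip  # OpenAI CLIP package
import torch.nn as nn

def latent_dim_from_last_layernorm(clip_model: nn.Module) -> int:
    # take the width of the last LayerNorm in the module tree (ln_final for
    # OpenAI CLIP models) rather than hard-coding the embedding dimension
    layernorms = [m for m in clip_model.modules() if isinstance(m, nn.LayerNorm)]
    return layernorms[-1].weight.shape[0]

model, _ = clip.load('ViT-L/14', device='cpu')
print(latent_dim_from_last_layernorm(model))  # 768 for ViT-L/14, 512 for ViT-B/32
```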
Phil Wang
b39653cf96
fix readme dataloader example
2022-09-20 08:39:52 -07:00
Phil Wang
39f8b6cf16
show example of using SOTA open sourced open clip
2022-09-19 10:45:20 -07:00
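A hedged sketch of what such a README example might look like: `OpenClipAdapter` wraps an open_clip checkpoint so it can be dropped in wherever the OpenAI adapter was used. The model name string and dimensions below are assumptions; the `DiffusionPriorNetwork` / `DiffusionPrior` arguments follow the README's existing examples.

```python
from dalle2_pytorch import OpenClipAdapter, DiffusionPrior, DiffusionPriorNetwork

# assumed model name; pick whichever open_clip checkpoint you have available
clip = OpenClipAdapter('ViT-H/14')

prior_network = DiffusionPriorNetwork(dim=1024, depth=6, dim_head=64, heads=8)

diffusion_prior = DiffusionPrior(
    net=prior_network,
    clip=clip,
    timesteps=100,
    cond_drop_prob=0.2
)
```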
Phil Wang
d0c11b30b0
handle open clip adapter image size being a tuple
1.10.6
2022-09-19 10:27:14 -07:00
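The guard being described is roughly the following, with made-up names: open_clip models report `image_size` either as an int or as an (h, w) tuple, so unwrap it before use.

```python
def normalize_image_size(image_size):
    # open_clip may expose image_size as an int or as an (h, w) tuple;
    # keep a single int for square images
    if isinstance(image_size, (tuple, list)):
        image_size = image_size[0]
    return image_size

assert normalize_image_size(224) == 224
assert normalize_image_size((224, 224)) == 224
```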
zion
86e2d5ba84
Minor Decoder Train Script Fixes (#242)
* ensure tokenized text is on proper device
* fix lpips image distribution
2022-09-15 17:21:48 -07:00
Phil Wang
0d82dff9c5
in ddim, noise should be predicted after x0 is maybe clipped, thanks to @lukovnikov for pointing this out in another repository
1.10.5
2022-09-01 09:40:47 -07:00
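A minimal sketch of the DDIM step the commit above refers to, with assumed tensor names: x0 is reconstructed from the predicted noise and optionally clamped, and the noise used for the update is then re-derived from the clamped x0 rather than reusing the network's raw prediction.

```python
import torch

def ddim_step(x_t, pred_noise, alpha_cumprod_t, alpha_cumprod_prev, clip_x0=True):
    # reconstruct x0 from the current sample and the predicted noise
    x0 = (x_t - (1. - alpha_cumprod_t).sqrt() * pred_noise) / alpha_cumprod_t.sqrt()
    if clip_x0:
        x0 = x0.clamp(-1., 1.)
    # the fix: derive the noise from the (possibly clamped) x0, not from pred_noise
    noise = (x_t - alpha_cumprod_t.sqrt() * x0) / (1. - alpha_cumprod_t).sqrt()
    # deterministic DDIM update (eta = 0)
    return alpha_cumprod_prev.sqrt() * x0 + (1. - alpha_cumprod_prev).sqrt() * noise
```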
Phil Wang
8bbc956ff1
fix bug with misnamed variable in diffusion prior network
1.10.4
2022-08-31 17:19:05 -07:00
Phil Wang
22019fddeb
todo
2022-08-31 13:36:05 -07:00
Phil Wang
6fb7e91343
fix ddim to use alpha_cumprod
1.10.3
2022-08-31 07:40:46 -07:00
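For context on the fix above: the DDIM formulas are written in terms of the cumulative products of the alphas, not the per-step values. Standard schedule bookkeeping looks like this (the beta schedule below is illustrative):

```python
import torch

betas = torch.linspace(1e-4, 2e-2, 1000)       # linear beta schedule (illustrative)
alphas = 1. - betas                             # per-step alpha_t
alphas_cumprod = torch.cumprod(alphas, dim=0)   # cumulative product: what DDIM's update uses
```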
Phil Wang
ba58ae0bf2
add two asserts to diffusion prior to ensure matching image embedding dimensions for clip, diffusion prior network, and what was set on diffusion prior
1.10.1
2022-08-28 10:11:37 -07:00
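Roughly the shape of those assertions, with illustrative attribute names (the repo's actual names may differ):

```python
def check_embed_dims(clip_dim_latent, prior_network_dim, prior_image_embed_dim):
    # all three configured image-embedding dimensions must agree up front,
    # rather than failing deep inside a forward pass
    assert prior_network_dim == clip_dim_latent, \
        f'diffusion prior network dim {prior_network_dim} != clip latent dim {clip_dim_latent}'
    assert prior_image_embed_dim == clip_dim_latent, \
        f'diffusion prior image_embed_dim {prior_image_embed_dim} != clip latent dim {clip_dim_latent}'

check_embed_dims(768, 768, 768)   # passes; any mismatch raises immediately
```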
Phil Wang
1cc5d0afa7
upgrade to best downsample
1.10.0
2022-08-25 10:37:02 -07:00
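"Best downsample" here is presumably the space-to-depth style downsample used across lucidrains' diffusion repos; a sketch under that assumption:

```python
import torch.nn as nn
from einops.layers.torch import Rearrange

def Downsample(dim, dim_out=None):
    # fold each 2x2 spatial neighborhood into channels, then mix with a 1x1
    # conv; unlike strided convolution or pooling, no pixels are discarded
    dim_out = dim_out if dim_out is not None else dim
    return nn.Sequential(
        Rearrange('b c (h s1) (w s2) -> b (c s1 s2) h w', s1=2, s2=2),
        nn.Conv2d(dim * 4, dim_out, 1)
    )
```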
Phil Wang
59fa101c4d
fix classifier free guidance for diffusion prior, thanks to @jaykim9870 for spotting the issue
1.9.0
2022-08-23 08:29:01 -07:00
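The combination being fixed is the standard classifier-free guidance formula; a sketch of that formula, not the repo's exact code:

```python
def guided_prediction(cond_pred, null_pred, cond_scale):
    # classifier-free guidance: start from the unconditional prediction and
    # move along the (conditional - unconditional) direction, scaled
    return null_pred + (cond_pred - null_pred) * cond_scale
```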
Aidan Dempster
916ece164c
Merge pull request #234 from Veldrovive/deepspeed_fp16
Fixed issues with clip and deepspeed fp16
2022-08-20 19:01:43 -04:00
Aidan
cbaadb6931
Fixed issues with clip and deepspeed fp16
...
Also more general compatibility fixes
2022-08-20 17:58:32 +00:00
Phil Wang
083508ff8e
cast attention matrix back to original dtype pre-softmax in attention
1.8.4
2022-08-20 10:56:01 -07:00
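A sketch of the pattern described, assuming the similarity logits are computed in float32 for headroom and cast back to the input dtype just before the softmax:

```python
import torch

def attention_probs(q, k, scale):
    dtype = q.dtype
    # compute the similarity matrix in float32 for numerical headroom ...
    sim = torch.einsum('b h i d, b h j d -> b h i j', q.float(), k.float()) * scale
    # ... then cast back to the original dtype before the softmax
    return sim.to(dtype).softmax(dim=-1)
```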
Phil Wang
7762edd0ff
make it work for @ethancohen123
2022-08-19 11:28:58 -07:00
Phil Wang
de5e628773
cite einops
2022-08-17 08:58:41 -07:00
Phil Wang
1b4046b039
gratitude
2022-08-17 08:57:33 -07:00
Phil Wang
27f19ba7fa
make sure diffusion prior trainer can operate with no warmup
1.8.2
2022-08-15 14:27:40 -07:00
Phil Wang
8f38339c2b
give diffusion prior trainer cosine annealing lr too
1.8.1
2022-08-15 07:38:01 -07:00
Phil Wang
6b9b4b9e5e
add cosine annealing lr schedule
1.8.0
2022-08-15 07:29:56 -07:00
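A minimal sketch of wiring in cosine annealing with PyTorch's built-in scheduler; the stand-in model, optimizer settings, and `T_max` are assumptions rather than the trainer's actual configuration:

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(512, 512)                        # stand-in for the prior / decoder
optimizer = Adam(model.parameters(), lr=3e-4)
scheduler = CosineAnnealingLR(optimizer, T_max=10_000)   # T_max = total training steps

for step in range(10):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 512)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()   # learning rate follows a cosine curve down over T_max steps
```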
Phil Wang
44e09d5a4d
add weight standardization behind feature flag, which may potentially work well with group norm
2022-08-14 11:34:45 -07:00
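Weight standardization (Qiao et al.) normalizes each conv filter to zero mean and unit variance before the convolution, and tends to pair well with GroupNorm since neither depends on batch statistics. A sketch; the dtype-dependent epsilon is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightStandardizedConv2d(nn.Conv2d):
    def forward(self, x):
        eps = 1e-5 if x.dtype == torch.float32 else 1e-3
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), unbiased=False, keepdim=True)
        w = (w - mean) / (var + eps).sqrt()   # standardize each output filter
        return F.conv2d(x, w, self.bias, self.stride, self.padding, self.dilation, self.groups)

conv = WeightStandardizedConv2d(64, 64, 3, padding=1)
out = conv(torch.randn(1, 64, 32, 32))
```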
Phil Wang
34806663e3
make it so diffusion prior p_sample_loop returns unnormalized image embeddings
1.6.5
2022-08-13 10:03:40 -07:00
Phil Wang
dc816b1b6e
dry up some code around handling unet outputs with learned variance
1.6.4
2022-08-12 15:25:03 -07:00
Phil Wang
05192ffac4
fix self conditioning shape in diffusion prior
1.6.3
2022-08-12 12:30:03 -07:00
Phil Wang
9440411954
make self conditioning technique work with diffusion prior
1.6.1
2022-08-12 12:20:51 -07:00
Phil Wang
981d407792
comment
2022-08-12 11:41:23 -07:00
Phil Wang
7c5477b26d
bet on the new self-conditioning technique out of geoffrey hinton's group
1.6.0
2022-08-12 11:36:08 -07:00
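Self-conditioning (from Chen et al.'s "Analog Bits" work out of Hinton's group) feeds the model's own previous x0 estimate back in as an extra input on roughly half of training steps. A toy sketch with an invented stand-in network, not the repo's diffusion prior:

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    # toy stand-in network: the self-conditioning signal is concatenated onto the input
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Linear(dim * 2, dim)

    def forward(self, x_t, self_cond=None):
        self_cond = torch.zeros_like(x_t) if self_cond is None else self_cond
        return self.net(torch.cat((x_t, self_cond), dim=-1))

model = TinyDenoiser()
x_t = torch.randn(4, 16)

# half the time, make a first prediction without feedback, detach it,
# and condition the real forward pass on it
self_cond = None
if torch.rand(1).item() < 0.5:
    with torch.no_grad():
        self_cond = model(x_t).detach()

pred_x0 = model(x_t, self_cond=self_cond)
```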
Phil Wang
be3bb868bf
add gradient checkpointing for all resnet blocks
1.5.0
2022-08-02 19:21:44 -07:00
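A minimal sketch of checkpointing a resnet block with `torch.utils.checkpoint`; the block definition is illustrative, not the repo's:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class ResnetBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.GroupNorm(8, dim), nn.SiLU(), nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, x):
        return self.net(x) + x

block = ResnetBlock(64)
x = torch.randn(1, 64, 32, 32, requires_grad=True)
out = checkpoint(block, x)   # recompute activations during backward instead of storing them
out.mean().backward()
```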
Phil Wang
451de34871
enforce clip anytorch version
1.4.6
2022-07-30 10:07:55 -07:00
Phil Wang
f22e8c8741
make open clip available for use with dalle2 pytorch
1.4.5
2022-07-30 09:02:31 -07:00
Phil Wang
87432e93ad
quick fix for linear attention
1.4.4
2022-07-29 13:17:12 -07:00
Phil Wang
d167378401
add cosine sim for self attention as well, as a setting
1.4.3
2022-07-29 12:48:20 -07:00
Phil Wang
2d67d5821e
change up epsilon in layernorm in the case of using fp16, thanks to @Veldrovive for figuring out this stabilizes training
1.4.2
2022-07-29 12:41:02 -07:00
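A sketch of the stabilization: use a larger epsilon when the activations are in half precision so the variance term cannot underflow. The exact epsilon values are assumptions:

```python
import torch
import torch.nn as nn

class LayerNorm(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.g = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # larger epsilon under fp16 keeps (var + eps) from underflowing to zero
        eps = 1e-5 if x.dtype == torch.float32 else 1e-3
        var = torch.var(x, dim=-1, unbiased=False, keepdim=True)
        mean = torch.mean(x, dim=-1, keepdim=True)
        return (x - mean) / (var + eps).sqrt() * self.g
```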
Phil Wang
748c7fe7af
allow for cosine sim cross attention, modify linear attention in attempt to resolve issue on fp16
1.4.0
2022-07-29 11:12:18 -07:00
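Cosine-similarity attention l2-normalizes queries and keys so the pre-softmax logits are bounded, which is far friendlier to fp16 than raw dot products. A sketch with an assumed fixed temperature in place of the usual 1/sqrt(d) scaling:

```python
import torch
import torch.nn.functional as F

def cosine_sim_attention(q, k, v, scale=10.0):
    # l2-normalize q and k so each logit lies in [-scale, scale]
    q, k = map(lambda t: F.normalize(t, dim=-1), (q, k))
    sim = torch.einsum('b h i d, b h j d -> b h i j', q, k) * scale
    attn = sim.softmax(dim=-1)
    return torch.einsum('b h i j, b h j d -> b h i d', attn, v)
```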
Phil Wang
80046334ad
make sure entire readme runs without errors
1.2.2
2022-07-28 10:17:43 -07:00
Phil Wang
36fb46a95e
fix readme and a small bug in DALLE2 class
1.2.1
2022-07-28 08:33:51 -07:00
Phil Wang
07abfcf45b
rescale values in linear attention to mitigate overflows in fp16 setting
1.2.0
2022-07-27 12:27:38 -07:00
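One illustrative way to rescale values in linear attention for fp16: divide v by its largest magnitude before the einsums and undo the scaling afterwards, so intermediate products stay within half-precision range. The repo's exact rescaling scheme may differ; this is a sketch of the idea only.

```python
import torch

def rescaled_linear_attention(q, k, v):
    # efficient-attention style: softmax q over features and k over the sequence,
    # so both factors are bounded before the matmuls
    q = q.softmax(dim=-1) * (q.shape[-1] ** -0.5)
    k = k.softmax(dim=-2)
    # rescale values down, then restore the scale on the output (assumed detail)
    v_scale = v.abs().amax().clamp(min=1e-3)
    v = v / v_scale
    context = torch.einsum('b h n d, b h n e -> b h d e', k, v)
    out = torch.einsum('b h d e, b h n d -> b h n e', context, q)
    return out * v_scale
```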
Phil Wang
2e35a9967d
product management
2022-07-26 11:10:16 -07:00
Phil Wang
406e75043f
add upsample combiner feature for the unets
1.1.0
2022-07-26 10:46:04 -07:00
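A sketch of what an upsample combiner does, with assumed module and argument names: project the feature map from each decoder resolution, resize it to the final resolution, and concatenate everything onto the last feature map so the output block sees multi-scale features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleCombiner(nn.Module):
    def __init__(self, dims_in, dim_out=32):
        super().__init__()
        # one 1x1 projection per decoder resolution
        self.convs = nn.ModuleList([nn.Conv2d(d, dim_out, 1) for d in dims_in])

    def forward(self, final_fmap, hiddens):
        size = final_fmap.shape[-2:]
        outs = [F.interpolate(conv(h), size=size, mode='bilinear', align_corners=False)
                for conv, h in zip(self.convs, hiddens)]
        return torch.cat([final_fmap, *outs], dim=1)
```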
Phil Wang
9646dfc0e6
fix path_or_state bug
1.0.6
2022-07-26 09:47:54 -07:00
Phil Wang
62043acb2f
fix repaint
1.0.5
2022-07-24 15:29:06 -07:00
Phil Wang
417ff808e6
1.0.3
1.0.3
2022-07-22 13:16:57 -07:00
Aidan Dempster
f3d7e226ba
Changed types to be generic instead of functions (#215)
This allows pylance to do proper type hinting and makes developing
extensions to the package much easier
2022-07-22 13:16:29 -07:00
Phil Wang
48a1302428
1.0.2
2022-07-20 23:01:51 -07:00
Aidan Dempster
ccaa46b81b
Re-introduced change that was accidentally rolled back (#212)
2022-07-20 23:01:19 -07:00
Phil Wang
76d08498cc
diffusion prior training updates from @nousr
1.0.1
2022-07-20 18:05:27 -07:00
zion
f9423d308b
Prior updates (#211)
* update configs for prior
add prior warmup to config
update example prior config
* update prior trainer & script
add deepspeed amp & warmup
adopt full accelerator support
reload at sample point
finish epoch resume code
* update tracker save method for prior
* helper functions for prior_loader
2022-07-20 18:04:26 -07:00
Phil Wang
06c65b60d2
1.0.0
1.0.0
2022-07-19 19:08:17 -07:00
Aidan Dempster
4145474bab
Improved upsampler training (#181)
Sampling is now possible without the first decoder unet
Non-training unets are deleted in the decoder trainer since they are never used and it is harder to merge the models if they have keys in this state dict
Fixed a mistake where clip was not re-added after saving
2022-07-19 19:07:50 -07:00