Phil Wang
0f4edff214
derived value for image preprocessing belongs to the data config class
0.4.7
2022-05-22 18:42:40 -07:00
Phil Wang
501a8c7c46
small cleanup
0.4.6
2022-05-22 15:39:38 -07:00
Phil Wang
4e49373fc5
project management
2022-05-22 15:27:40 -07:00
Phil Wang
49de72040c
fix decoder trainer optimizer loading (since there are multiple optimizers, one per unet), also save and load the step number correctly
0.4.5
2022-05-22 15:21:00 -07:00
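A rough sketch of what per-unet optimizer checkpointing with a step counter might look like; the attribute names `optimizers` and `step` are assumptions for illustration, not the library's actual API:

    import torch

    # hypothetical trainer checkpointing: one optimizer state per unet,
    # plus the global step, saved and restored together
    def save_trainer(trainer, path):
        torch.save({
            'step': trainer.step,
            'optimizers': [opt.state_dict() for opt in trainer.optimizers],
        }, path)

    def load_trainer(trainer, path):
        ckpt = torch.load(path)
        trainer.step = ckpt['step']
        for opt, state in zip(trainer.optimizers, ckpt['optimizers']):
            opt.load_state_dict(state)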
Phil Wang
271a376eaf
0.4.3
0.4.3
2022-05-22 15:10:28 -07:00
Phil Wang
e527002472
take care of saving and loading functions on the diffusion prior and decoder training classes
2022-05-22 15:10:15 -07:00
Phil Wang
c12e067178
let the pydantic config base model take care of loading configuration from json path
0.4.2
2022-05-22 14:47:23 -07:00
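A minimal sketch of the pattern, assuming a shared base class; the class and method names here are illustrative:

    import json
    from pathlib import Path
    from pydantic import BaseModel

    # every config subclass inherits JSON-file loading from this base
    class BaseConfig(BaseModel):
        @classmethod
        def from_json_path(cls, json_path):
            config = json.loads(Path(json_path).read_text())
            return cls(**config)  # pydantic validates on construction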
Phil Wang
c6629c431a
make training splits into their own pydantic base model, validate they sum to 1, make decoder script cleaner
0.4.1
2022-05-22 14:43:22 -07:00
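A minimal sketch of a splits model that validates its fields sum to 1 (field names, defaults, and tolerance are assumptions, written against pydantic v1):

    from pydantic import BaseModel, root_validator

    class TrainSplitConfig(BaseModel):
        train: float = 0.9
        val: float = 0.05
        test: float = 0.05

        @root_validator
        def check_sums_to_one(cls, values):
            total = values['train'] + values['val'] + values['test']
            if abs(total - 1.0) > 1e-6:
                raise ValueError(f'splits must sum to 1, got {total}')
            return values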
Phil Wang
7ac2fc79f2
add renamed train decoder json file
2022-05-22 14:32:50 -07:00
Phil Wang
a1ef023193
use pydantic to manage decoder training configs + defaults and refactor training script
0.4.0
2022-05-22 14:27:40 -07:00
Phil Wang
d49eca62fa
dep
2022-05-21 11:27:52 -07:00
Phil Wang
8aab69b91e
final thought
2022-05-21 10:47:45 -07:00
Phil Wang
b432df2f7b
final cleanup to decoder script
2022-05-21 10:42:16 -07:00
Phil Wang
ebaa0d28c2
product management
2022-05-21 10:30:52 -07:00
Phil Wang
8b0d459b25
move config parsing logic to its own file, consider whether to find an off-the-shelf solution at a future date
2022-05-21 10:30:10 -07:00
Phil Wang
0064661729
small cleanup of decoder train script
2022-05-21 10:17:13 -07:00
Phil Wang
b895f52843
appreciation section
2022-05-21 08:32:12 -07:00
Phil Wang
80497e9839
accept unets as list for decoder
0.3.7
2022-05-20 20:31:26 -07:00
Phil Wang
f526f14d7c
bump
0.3.6
2022-05-20 20:20:40 -07:00
Phil Wang
8997f178d6
small cleanup with timer
2022-05-20 20:05:01 -07:00
Aidan Dempster
022c94e443
Added single GPU training script for decoder (#108)
Added config files for training
Changed example image generation to be more efficient
Added configuration description to README
Removed unused import
2022-05-20 19:46:19 -07:00
Phil Wang
430961cb97
it was correct the first time, my bad
0.3.5
2022-05-20 18:05:15 -07:00
Phil Wang
721f9687c1
fix wandb logging in tracker, and do some cleanup
2022-05-20 17:27:43 -07:00
Aidan Dempster
e0524a6aff
Implemented the wandb tracker (#106)
Added a base_path parameter to all trackers for storing any local information they need
2022-05-20 16:39:23 -07:00
Aidan Dempster
c85e0d5c35
Update decoder dataloader (#105)
* Updated the decoder dataloader
Removed unnecessary logging for required packages
Switched to using index width instead of shard width
Added the ability to select extra keys to return from the webdataset
* Added README for decoder loader
2022-05-20 16:38:55 -07:00
Phil Wang
db0642c4cd
quick fix for @marunine
0.3.3
2022-05-18 20:22:52 -07:00
Phil Wang
bb86ab2404
update sample, and set default gradient clipping value for decoder training
0.3.2
2022-05-16 17:38:30 -07:00
Phil Wang
ae056dd67c
samples
2022-05-16 13:46:35 -07:00
Phil Wang
033d6b0ce8
last update
2022-05-16 13:38:33 -07:00
Phil Wang
c7ea8748db
default decoder learning rate to what was in the paper
0.3.1
2022-05-16 13:33:54 -07:00
Phil Wang
13382885d9
final update to dalle2 repository for a while: sampling from prior in chunks automatically when the max_batch_size keyword is given
0.3.0
2022-05-16 12:57:31 -07:00
Phil Wang
c3d4a7ffe4
update working unconditional decoder example
2022-05-16 12:50:07 -07:00
Phil Wang
164d9be444
use a decorator and take care of sampling in chunks (max_batch_size keyword), in case one is sampling a huge grid of images
0.2.46
2022-05-16 12:34:28 -07:00
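One plausible shape for such a decorator, sketched here rather than taken from the library: split the input along the batch dimension, run each chunk through the wrapped sampler, and concatenate the outputs:

    import torch
    from functools import wraps

    def chunked_sampling(fn):
        @wraps(fn)
        def inner(x, *args, max_batch_size=None, **kwargs):
            if max_batch_size is None:
                return fn(x, *args, **kwargs)
            # run the sampler on slices of at most max_batch_size
            outs = [fn(chunk, *args, **kwargs) for chunk in x.split(max_batch_size, dim=0)]
            return torch.cat(outs, dim=0)
        return inner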
Phil Wang
5562ec6be2
status updates
2022-05-16 12:01:54 -07:00
Phil Wang
89ff04cfe2
final tweak to EMA class
0.2.44
2022-05-16 11:54:34 -07:00
Phil Wang
f4016f6302
allow for overriding use of EMA during sampling in decoder trainer with use_non_ema keyword, also fix some issues with automatic normalization of images and low res conditioning image if latent diffusion is in play
0.2.43
2022-05-16 11:18:30 -07:00
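For context, the usual EMA pattern the trainer relies on, as a hedged sketch (attribute and method names are illustrative, not the library's API): keep a moving-average copy of the weights, update it each step, and sample from it unless use_non_ema is set:

    import copy
    import torch

    class EMA:
        def __init__(self, model, beta=0.99):
            self.beta = beta
            self.online_model = model
            self.ema_model = copy.deepcopy(model)

        @torch.no_grad()
        def update(self):
            for p, ema_p in zip(self.online_model.parameters(), self.ema_model.parameters()):
                ema_p.lerp_(p, 1. - self.beta)  # ema = beta * ema + (1 - beta) * online

        def sampling_model(self, use_non_ema=False):
            return self.online_model if use_non_ema else self.ema_model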
Phil Wang
1212f7058d
allow text encodings and text mask to be passed in on forward and sampling for Decoder class
0.2.42
2022-05-16 10:40:32 -07:00
Phil Wang
dab106d4e5
back to no_grad for now, also keep track of and restore unet devices in the one_unet_in_gpu contextmanager
0.2.40
2022-05-16 09:36:14 -07:00
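A hedged sketch of what a device-restoring one_unet_in_gpu context manager can look like; the real signature may differ:

    import torch
    from contextlib import contextmanager

    @contextmanager
    def one_unet_in_gpu(unets, index):
        # remember where each unet currently lives
        devices = [next(unet.parameters()).device for unet in unets]
        for unet in unets:
            unet.cpu()
        unets[index].cuda()
        try:
            yield unets[index]
        finally:
            # restore every unet to its original device
            for unet, device in zip(unets, devices):
                unet.to(device)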
Phil Wang
bb151ca6b1
unet_number on decoder trainer only needs to be passed in if there is more than one unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally)
0.2.39
2022-05-16 09:17:17 -07:00
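The defaulting logic this implies, as a small illustrative helper (not the library's code):

    def resolve_unet_number(num_unets, unet_number=None):
        # a single unet needs no explicit index; multiple unets do
        if unet_number is None:
            assert num_unets == 1, 'unet_number must be given when there is more than one unet'
            unet_number = 1
        return unet_number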
zion
4a59dea4cf
Migrate to text-conditioned prior training (#95)
* migrate to conditioned prior
* unify reader logic with a wrapper (#1)
* separate out reader logic
* support both training methods
* Update train prior to use embedding wrapper (#3)
* Support Both Methods
* bug fixes
* small bug fixes
* embedding only wrapper bug
* use smaller val perc
* final bug fix for embedding-only
Co-authored-by: nousr <>
2022-05-15 20:16:38 -07:00
Phil Wang
ecf9e8027d
make sure classifier-free guidance is used only if conditional dropout is present on the DiffusionPrior and Decoder classes. also make sure the prior can have a different conditional scale than the decoder
0.2.38
2022-05-15 19:09:38 -07:00
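For reference, classifier-free guidance combines a conditional and an unconditional prediction; a cond_scale of 1 reduces to the plain conditional output, which is why guidance is only meaningful when the network was trained with conditional dropout (cond_drop_prob > 0):

    # standard classifier-free guidance combination
    def guided(cond_pred, uncond_pred, cond_scale=1.0):
        return uncond_pred + cond_scale * (cond_pred - uncond_pred)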
Phil Wang
36c5079bd7
LazyLinear is not mature, make users pass in text_embed_dim if text conditioning is turned on
0.2.37
2022-05-15 18:56:52 -07:00
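The change amounts to preferring an explicit input dimension over lazy shape inference, roughly:

    import torch.nn as nn

    text_embed_dim = 512  # example value, must now be supplied by the user

    to_cond = nn.Linear(text_embed_dim, 1024)  # explicit, known up front
    # instead of: to_cond = nn.LazyLinear(1024), which infers the input
    # dimension on the first forward pass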
Phil Wang
4a4c7ac9e6
cond drop prob for diffusion prior network should default to 0
0.2.36
2022-05-15 18:47:45 -07:00
Phil Wang
fad7481479
todo
2022-05-15 17:00:25 -07:00
Phil Wang
123658d082
cite Ho et al., since cascading ddpm is now trainable
2022-05-15 16:56:53 -07:00
Phil Wang
11d4e11f10
allow for training unconditional ddpm or cascading ddpms
0.2.35
2022-05-15 16:54:56 -07:00
Phil Wang
99778e12de
trainer classes now take care of auto-casting numpy to torch tensors, and setting the correct device based on model parameter devices
0.2.34
2022-05-15 15:25:45 -07:00
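A plausible sketch of that coercion (the helper name is an assumption): infer the device from the model's parameters, then cast numpy arrays and move tensors accordingly:

    import numpy as np
    import torch

    def cast_to_model_device(model, *args):
        device = next(model.parameters()).device
        return tuple(
            torch.from_numpy(a).to(device) if isinstance(a, np.ndarray)
            else a.to(device) if torch.is_tensor(a)
            else a
            for a in args
        )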
Phil Wang
0f0011caf0
todo
2022-05-15 14:28:35 -07:00
Phil Wang
7b7a62044a
use eval vs training mode to determine whether to call backprop on trainer forward
0.2.32
2022-05-15 14:20:59 -07:00
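Illustratively (not the library's exact code), the trainer's forward can key off nn.Module's built-in training flag, assuming the wrapped model returns a loss:

    import torch.nn as nn

    class TrainerSketch(nn.Module):
        def __init__(self, model):
            super().__init__()
            self.model = model

        def forward(self, *args, **kwargs):
            loss = self.model(*args, **kwargs)
            if self.training:        # trainer.train() -> backprop
                loss.backward()
            return loss              # trainer.eval() -> loss only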
Phil Wang
156fe5ed9f
final cleanup for the day
2022-05-15 12:38:41 -07:00