Commit Graph

460 Commits

Author SHA1 Message Date
Phil Wang
1e4bb2bafb cast long as float before deriving sinusoidal pos emb v0.16.8 2022-07-05 18:01:22 -07:00
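A minimal sketch of the pattern this commit describes (assumed names, not the repository's exact module): cast the integer timestep tensor to float before deriving the sinusoidal embedding, so the frequency math is not done in integer arithmetic.

```python
import math
import torch

def sinusoidal_pos_emb(t: torch.Tensor, dim: int) -> torch.Tensor:
    t = t.float()  # timesteps arrive as a long tensor; cast before any float math
    half_dim = dim // 2
    freqs = torch.exp(torch.arange(half_dim, device=t.device) * -(math.log(10000.0) / (half_dim - 1)))
    angles = t[:, None] * freqs[None, :]
    return torch.cat((angles.sin(), angles.cos()), dim=-1)
```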
Phil Wang
ee75515c7d remove forcing of softmax in f32, in case it is interfering with deepspeed v0.16.7 2022-07-05 16:53:58 -07:00
Phil Wang
ec68243479 set ability to do warmup steps for each unet during training v0.16.6 2022-07-05 16:24:16 -07:00
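A hedged sketch of per-unet warmup (the helper and step counts below are illustrative, not the trainer's API): each unet keeps its own step counter and linearly ramps its learning rate until its warmup completes.

```python
def warmup_scale(step: int, warmup_steps: int) -> float:
    # linear warmup: ramp from ~0 to the full learning rate over warmup_steps
    if warmup_steps <= 0:
        return 1.0
    return min(1.0, (step + 1) / warmup_steps)

# one scheduler per unet, each with its own warmup length (values are illustrative):
# schedulers = [
#     torch.optim.lr_scheduler.LambdaLR(opt, lambda s, w=w: warmup_scale(s, w))
#     for opt, w in zip(optimizers, (1000, 500, 500))
# ]
```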
Phil Wang
3afdcdfe86 need to keep track of training steps separately for each unet in decoder trainer v0.16.3 2022-07-05 15:17:59 -07:00
Phil Wang
b9a908ff75 bring in two tricks from the cogview paper for reducing the chances of overflow, for attention and layernorm v0.16.2 2022-07-05 14:27:04 -07:00
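Roughly, the two CogView tricks are PB-relax for attention (shift the logits so their max is zero before softmax) and dividing activations by their max before layernorm, so fp16 never sees values large enough to overflow. A sketch under those assumptions; the repository's exact formulation may differ.

```python
import torch

def stable_softmax(sim: torch.Tensor) -> torch.Tensor:
    sim = sim - sim.amax(dim=-1, keepdim=True).detach()  # max logit becomes 0
    return sim.softmax(dim=-1)

def stable_pre_norm(x: torch.Tensor) -> torch.Tensor:
    # bound the magnitude of the input before the subsequent layernorm
    return x / x.abs().amax(dim=-1, keepdim=True).detach().clamp(min=1e-5)
```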
Phil Wang
e1fe3089df do bias-less layernorm manually v0.16.0 2022-07-05 13:09:58 -07:00
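A minimal sketch of a manually computed, bias-free LayerNorm (learned gain only, no beta); eps handling and parameter shapes in the actual module may differ.

```python
import torch
from torch import nn

class LayerNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.g = nn.Parameter(torch.ones(dim))  # gain only, no bias term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        var = torch.var(x, dim=-1, unbiased=False, keepdim=True)
        mean = torch.mean(x, dim=-1, keepdim=True)
        return (x - mean) * (var + self.eps).rsqrt() * self.g
```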
Phil Wang
6d477d7654 link to dalle2 laion 2022-07-05 11:43:07 -07:00
Phil Wang
531fe4b62f status 2022-07-05 10:46:55 -07:00
Phil Wang
ec5a77fc55 0.15.4 v0.15.4 2022-07-02 08:56:34 -07:00
Aidan Dempster
fac63c61bc Fixed variable naming issue (#183) 2022-07-02 08:56:03 -07:00
Phil Wang
3d23ba4aa5 add ability to specify full self attention on specific stages in the unet v0.15.3 2022-07-01 10:22:07 -07:00
Phil Wang
282c35930f 0.15.2 v0.15.2 2022-07-01 09:40:11 -07:00
Aidan Dempster
27b0f7ca0d Overhauled the tracker system (#172)
* Overhauled the tracker system
Separated the logging and saving capabilities
Changed creation to be consistent and initialization behavior to be defined by a class initializer instead of in the training script
Added class separation between different types of loaders and savers to make the system more verbose

* Changed the saver system to only save the checkpoint once

* Added better error handling for saving checkpoints

* Fixed an issue where wandb would error when passed arbitrary kwargs

* Fixed variable naming issues for improved saver
Added more logging during long pauses

* Fixed which methods need to be dummies that immediately return
Added the ability to set whether to find unused parameters

* Added more logging for when a wandb loader fails
2022-07-01 09:39:40 -07:00
Phil Wang
7b0edf9e42 allow for returning low resolution conditioning image on forward through decoder with return_lowres_cond_image flag v0.15.1 2022-07-01 09:35:39 -07:00
Phil Wang
a922a539de bring back convtranspose2d upsampling, allow for nearest upsample with hyperparam, change kernel size of last conv to 1, make configurable, cleanup v0.15.0 2022-07-01 09:21:47 -07:00
Phil Wang
8f2466f1cd blur sigma for upsampling training was 0.6 in the paper, make that the default value v0.14.1 2022-06-30 17:03:16 -07:00
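For reference, the augmentation this default applies to is blurring the low-resolution conditioning image during upsampler training; a sketch with torchvision (the kernel size is an assumption):

```python
import torchvision.transforms.functional as TF

def corrupt_lowres_cond(lowres_image, blur_sigma: float = 0.6, kernel_size: int = 3):
    # 0.6 matches the sigma reported in the paper; kernel size is illustrative
    return TF.gaussian_blur(lowres_image, kernel_size=kernel_size, sigma=blur_sigma)
```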
Phil Wang
908ab83799 add skip connections for all intermediate resnet blocks, also add an extra resnet block for memory efficient version of unet, time condition for both initial resnet block and last one before output v0.14.0 2022-06-29 08:16:58 -07:00
Phil Wang
46a2558d53 bug in pydantic decoder config class v0.12.4 2022-06-29 07:17:35 -07:00
yytdfc
86109646e3 fix a bug of name error (#179) 2022-06-29 07:16:44 -07:00
Phil Wang
6a11b9678b bring in the skip connection scaling factor, used by imagen in their unets, cite original paper using it v0.12.3 2022-06-26 21:59:55 -07:00
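The scaling factor in question is 1/sqrt(2), applied to the skip branch so the combined features keep roughly constant variance; where exactly it is applied inside the unet blocks is an assumption in this sketch.

```python
import torch

def add_skip(x: torch.Tensor, skip: torch.Tensor, scale: float = 2 ** -0.5) -> torch.Tensor:
    # damp the skip connection by 1 / sqrt(2) before merging it back in
    return torch.cat((x, skip * scale), dim=1)
```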
Phil Wang
b90364695d fix remaining issues with deriving cond_on_text_encodings from child unet settings v0.12.2 2022-06-26 21:07:42 -07:00
zion
868c001199 bug fixes for text conditioning update (#175) 2022-06-26 16:12:32 -07:00
Phil Wang
032e83b0e0 nevermind, do not enforce text encodings on first unet v0.12.1 2022-06-26 12:45:05 -07:00
Phil Wang
2e85e736f3 remove unnecessary decoder setting, and if not unconditional, always make sure the first unet is condition-able on text v0.12.0 2022-06-26 12:32:17 -07:00
Aidan Dempster
f5760bdb92 Add data flexibility to decoder trainer (#165)
* Added the ability to train decoder with text embeddings

* Added the ability to train using embeddings generated on the fly with clip

* Clip now generates embeddings for whatever is not precomputed
2022-06-25 19:05:20 -07:00
zion
c453f468b1 autoswitch tqdm for notebooks (#171)
avoids printing the `tqdm` progress bar to a newline in notebooks when detected
2022-06-25 16:37:06 -07:00
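The usual way to get this autoswitching is to import from `tqdm.auto`, which picks the notebook widget bar inside Jupyter and the plain text bar elsewhere; a sketch of the pattern rather than the PR's diff:

```python
from tqdm.auto import tqdm  # notebook-aware progress bar

for _ in tqdm(range(1000), desc='sampling'):
    pass  # loop body elided
```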
zion
98f0c17759 add samples-seen and ema decay (#166) 2022-06-24 15:12:09 -07:00
Phil Wang
a5b9fd6ca8 product management 2022-06-24 08:15:05 -07:00
Phil Wang
4b994601ae just make sure decoder learning rate is reasonable and help out budding researchers v0.11.5 2022-06-23 11:29:28 -07:00
zion
fddf66e91e fix params in decoder (#162) 2022-06-22 14:45:01 -07:00
Phil Wang
c8422ffd5d fix EMA updating buffers with non-float tensors 0.11.4 2022-06-22 07:16:39 -07:00
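A sketch of the kind of fix the message describes (not the repository's exact EMA class): only floating-point tensors are exponentially averaged, while integer buffers are copied directly, since `lerp_` on long tensors raises an error.

```python
import torch

@torch.no_grad()
def ema_update(ema_model, model, beta: float = 0.995):
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.lerp_(p, 1.0 - beta)
    for ema_b, b in zip(ema_model.buffers(), model.buffers()):
        if ema_b.dtype.is_floating_point:
            ema_b.lerp_(b, 1.0 - beta)
        else:
            ema_b.copy_(b)  # non-float buffers (e.g. step counters) are copied, not averaged
```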
Conight
2aadc23c7c Fix train decoder config example (#160) 2022-06-21 22:17:06 -07:00
Phil Wang
c098f57e09 EMA for vqgan vae comes from ema_pytorch now 2022-06-20 15:29:08 -07:00
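The external package referenced here is lucidrains' ema-pytorch; a typical usage sketch (the stand-in model and hyperparameters are illustrative, not necessarily what the VQGAN-VAE trainer uses):

```python
from torch import nn
from ema_pytorch import EMA

vae = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))  # stand-in for the VQGAN-VAE
ema_vae = EMA(vae, beta=0.995, update_every=10)

# inside the training loop, after each optimizer step:
ema_vae.update()
```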
Phil Wang
0021535c26 move ema to external repo v0.11.3 2022-06-20 11:48:32 -07:00
Phil Wang
56883910fb cleanup 2022-06-20 11:14:55 -07:00
Phil Wang
893f270012 project management 2022-06-20 10:00:22 -07:00
Phil Wang
f545ce18f4 be able to turn off p2 loss reweighting for upsamplers v0.11.2 2022-06-20 09:43:31 -07:00
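P2 loss reweighting (Choi et al.) weights each timestep's loss by 1 / (k + SNR)^gamma; setting gamma to 0 gives uniform weights, which is effectively what turning it off for the upsamplers amounts to. How the decoder exposes the switch is not shown here.

```python
import torch

def p2_loss_weight(snr: torch.Tensor, gamma: float = 1.0, k: float = 1.0) -> torch.Tensor:
    # gamma = 0 -> weights of 1 everywhere, i.e. reweighting disabled
    return (k + snr) ** -gamma
```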
Phil Wang
fc7abf624d in paper, blur sigma was 0.6 v0.11.1 2022-06-20 09:05:08 -07:00
Phil Wang
67f0740777 small cleanup 2022-06-20 08:59:51 -07:00
Phil Wang
138079ca83 allow for setting beta schedules of unets differently in the decoder, as what was used in the paper was cosine, cosine, linear v0.11.0 2022-06-20 08:56:37 -07:00
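The two schedules being mixed per unet are the standard linear and cosine beta schedules from the DDPM / improved-DDPM papers; a sketch of both (endpoint values follow the usual defaults):

```python
import math
import torch

def linear_beta_schedule(timesteps: int) -> torch.Tensor:
    return torch.linspace(1e-4, 0.02, timesteps)

def cosine_beta_schedule(timesteps: int, s: float = 0.008) -> torch.Tensor:
    steps = torch.arange(timesteps + 1, dtype=torch.float64) / timesteps
    alphas_cumprod = torch.cos((steps + s) / (1 + s) * math.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return betas.clamp(0, 0.999)
```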
zion
f5a906f5d3 prior train script bug fixes (#153) 2022-06-19 15:55:15 -07:00
Phil Wang
0215237fc6 update status 2022-06-19 09:42:24 -07:00
Phil Wang
461b91c5c1 also merge distributed training code for decoder, thanks to @Veldrovive v0.10.1 2022-06-19 09:26:44 -07:00
Aidan Dempster
58892135d9 Distributed Training of the Decoder (#121)
* Converted decoder trainer to use accelerate

* Fixed issue where metric evaluation would hang in distributed mode

* Implemented functional saving
Loading still fails due to some issue with the optimizer

* Fixed issue with loading decoders

* Fixed issue with tracker config

* Fixed issue with amp
Updated logging to be more logical

* Saving checkpoint now saves position in training as well
Fixed an issue with running out of gpu space due to loading weights into the gpu twice

* Fixed ema for distributed training

* Fixed issue where get_pkg_version was reintroduced

* Changed decoder trainer to upload config as a file

Fixed issue where loading the best checkpoint would error
2022-06-19 09:25:54 -07:00
Phil Wang
e37072a48c 0.10.0 v0.10.0 2022-06-19 08:50:53 -07:00
Phil Wang
41ca896413 depend on huggingface accelerate, move appreciation thread up for visibility 2022-06-19 08:50:35 -07:00
zion
fe19b508ca Distributed Training of the Prior (#112)
* distributed prior trainer

better EMA support

update load and save methods of prior

* update prior training script

add test evaluation & ema validation

add more tracking metrics

small cleanup
2022-06-19 08:46:14 -07:00
Phil Wang
6651eafa93 one more residual, after seeing good results on unconditional generation locally v0.9.2 2022-06-16 11:18:02 -07:00
Phil Wang
e6bb75e5ab fix missing residual for highest resolution of the unet 2022-06-15 20:09:43 -07:00
Giorgos Zachariadis
b4c3e5b854 changed str in order to avoid confusion and collisions with Python (#147) 2022-06-15 13:41:16 -07:00