Phil Wang
e37072a48c
0.10.0
v0.10.0
2022-06-19 08:50:53 -07:00
Phil Wang
41ca896413
depend on huggingface accelerate, move appreciation thread up for visibility
2022-06-19 08:50:35 -07:00
zion
fe19b508ca
Distributed Training of the Prior (#112)
* distributed prior trainer
better EMA support
update load and save methods of prior
* update prior training script
add test evaluation & ema validation
add more tracking metrics
small cleanup
2022-06-19 08:46:14 -07:00
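For context, a minimal sketch of what distributed training with huggingface accelerate typically looks like; the model, data, and hyperparameters below are illustrative placeholders, not the repository's actual prior trainer:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(512, 512)  # stand-in for the diffusion prior
optimizer = torch.optim.Adam(model.parameters(), lr = 3e-4)
data = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(1024, 512), torch.randn(1024, 512)),
    batch_size = 64,
)

# accelerate wraps model / optimizer / dataloader for DDP, mixed precision, etc.
model, optimizer, data = accelerator.prepare(model, optimizer, data)

for text_embed, image_embed in data:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(text_embed), image_embed)
    accelerator.backward(loss)  # replaces loss.backward() for distributed runs
    optimizer.step()
```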
Phil Wang
6651eafa93
one more residual, after seeing good results on unconditional generation locally
v0.9.2
2022-06-16 11:18:02 -07:00
Phil Wang
e6bb75e5ab
fix missing residual for highest resolution of the unet
2022-06-15 20:09:43 -07:00
Giorgos Zachariadis
b4c3e5b854
changed str in order to avoid confusion and collisions with Python (#147)
2022-06-15 13:41:16 -07:00
Phil Wang
b7f9607258
make memory efficient unet design from imagen toggle-able
v0.8.1
2022-06-15 13:40:26 -07:00
Phil Wang
2219348a6e
adopt similar unet architecture as imagen
v0.8.0
2022-06-15 12:18:21 -07:00
Phil Wang
9eea9b9862
add p2 loss reweighting for decoder training as an option
v0.7.1
2022-06-14 10:58:57 -07:00
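For reference, p2 reweighting (Choi et al., "Perception Prioritized Training of Diffusion Models") scales the per-timestep loss by 1 / (k + SNR)^gamma. A hedged sketch with illustrative defaults:

```python
import torch

def p2_loss_weight(alphas_cumprod, gamma = 1.0, k = 1.0):
    # SNR of the diffused sample at each timestep
    snr = alphas_cumprod / (1 - alphas_cumprod)
    # high-SNR (nearly clean) timesteps get down-weighted
    return (k + snr) ** -gamma

betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1 - betas, dim = 0)
weights = p2_loss_weight(alphas_cumprod)  # multiply the per-timestep MSE by these
```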
Phil Wang
5d958713c0
fix classifier-free guidance for image hiddens summed to time hiddens, thanks to @xvjiarui for finding this bug
v0.7.0
2022-06-13 21:01:50 -07:00
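The arithmetic of classifier-free guidance, for context: run the unet with and without the conditioning and extrapolate between the two predictions. A standalone sketch; the null-conditioning mechanism shown here (a zeroed embedding) is an illustrative assumption:

```python
import torch

def guided_pred(unet, x, t, image_embed, cond_scale = 3.0):
    cond = unet(x, t, image_embed)                    # conditioned prediction
    null = unet(x, t, torch.zeros_like(image_embed))  # unconditioned (null embed)
    return null + cond_scale * (cond - null)          # extrapolate toward cond
```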
Phil Wang
0f31980362
cleanup
2022-06-07 17:31:38 -07:00
Phil Wang
bee5bf3815
fix for https://github.com/lucidrains/DALLE2-pytorch/issues/143
2022-06-07 09:03:48 -07:00
Phil Wang
350a3d6045
0.6.16
v0.6.16
2022-06-06 08:45:46 -07:00
Kashif Rasul
1a81670718
fix quadratic_beta_schedule (#141)
2022-06-06 08:45:14 -07:00
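The conventional form of a quadratic beta schedule interpolates linearly in sqrt-beta space and then squares, so the fixed behavior likely resembles this sketch (defaults are the usual DDPM values):

```python
import torch

def quadratic_beta_schedule(timesteps, beta_start = 1e-4, beta_end = 0.02):
    # linear in sqrt(beta), squared back to beta
    return torch.linspace(beta_start ** 0.5, beta_end ** 0.5, timesteps) ** 2
```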
Phil Wang
934c9728dc
some cleanup
v0.6.15
2022-06-04 16:54:15 -07:00
Phil Wang
ce4b0107c1
0.6.13
v0.6.13
2022-06-04 13:26:57 -07:00
zion
64c2f9c4eb
implement ema warmup from @crowsonkb (#140)
2022-06-04 13:26:34 -07:00
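A sketch of the EMA warmup schedule credited to @crowsonkb: the decay ramps from 0 toward a ceiling so early, noisy weights do not dominate the moving average. Parameter names follow the common inv_gamma/power convention; the defaults are illustrative:

```python
def ema_decay(step, inv_gamma = 1.0, power = 2 / 3, max_decay = 0.9999):
    # decay starts at 0 and approaches max_decay as step grows
    value = 1 - (1 + step / inv_gamma) ** -power
    return min(max_decay, max(0.0, value))

assert ema_decay(0) == 0.0  # the very first update copies the online weights
```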
Phil Wang
22cc613278
ema fix from @nousr
v0.6.12
2022-06-03 19:44:36 -07:00
zion
83517849e5
ema module fixes (#139)
2022-06-03 19:43:51 -07:00
Phil Wang
708809ed6c
lower beta2 for adam down to 0.99, based on https://openreview.net/forum?id=2LdBqxc1Yv
v0.6.11
2022-06-03 10:26:28 -07:00
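In code, the change amounts to the betas passed to Adam; a minimal sketch with a placeholder module:

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the unet / prior
# beta2 lowered from the default 0.999 to 0.99, so the second-moment
# estimate adapts faster, per the linked OpenReview discussion
optimizer = torch.optim.Adam(model.parameters(), lr = 1e-4, betas = (0.9, 0.99))
```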
Phil Wang
9cc475f6e7
fix update_every within EMA
v0.6.10
2022-06-03 10:21:05 -07:00
Phil Wang
ffd342e9d0
allow for an option to constrain the variance interpolation fraction output by the unet when learned variance is turned on
v0.6.9
2022-06-03 09:34:57 -07:00
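For context, learned variance (Improved DDPM) interpolates in log space between beta_t and the posterior variance beta_tilde_t using a fraction the unet outputs; this sketch uses a sigmoid to constrain that fraction to (0, 1), which is an assumption about how the option works:

```python
import torch

def interpolated_variance(raw_frac, log_beta, log_beta_tilde, constrain = True):
    # optionally squash the unet's raw output into (0, 1)
    frac = torch.sigmoid(raw_frac) if constrain else raw_frac
    # log-space interpolation between the two variance extremes
    return torch.exp(frac * log_beta + (1 - frac) * log_beta_tilde)
```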
Phil Wang
f8bfd3493a
make datum destructuring length-agnostic when validating in the decoder training script, for @YUHANG-Ma
2022-06-02 13:54:57 -07:00
Phil Wang
9025345e29
take a stab at fixing generate_grid_samples when real images have a greater image size than the generated ones
2022-06-02 11:33:15 -07:00
Phil Wang
8cc278447e
just cast blur sigma and kernel size augs to the right types
v0.6.8
2022-06-02 11:21:58 -07:00
Phil Wang
38cd62010c
allow for random blur sigma and kernel size augmentations on low res conditioning (need to reread paper to see if the augmentation value needs to be fed into the unet for conditioning as well)
v0.6.7
2022-06-02 11:11:25 -07:00
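A sketch of what random blur augmentation on the low-res conditioning image might look like, using torchvision's gaussian_blur; the kernel sizes and sigma range here are illustrative choices, not the repository's values:

```python
import random
import torch
from torchvision.transforms.functional import gaussian_blur

def random_blur(lowres_image, kernel_sizes = (3, 5, 7), sigma_range = (0.1, 2.0)):
    k = random.choice(kernel_sizes)       # random odd kernel size
    sigma = random.uniform(*sigma_range)  # random blur strength
    return gaussian_blur(lowres_image, kernel_size = k, sigma = sigma)

blurred = random_blur(torch.randn(3, 64, 64))  # e.g. a low-res conditioning image
```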
Ryan Russell
1cc288af39
Improve Readability (#133)
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-01 13:28:02 -07:00
Phil Wang
a851168633
make youtokentome an optional package, due to reported installation difficulties
v0.6.6
2022-06-01 09:25:35 -07:00
Phil Wang
1ffeecd0ca
lower default ema beta value
v0.6.5
2022-05-31 11:55:21 -07:00
Phil Wang
3df899f7a4
patch
v0.6.4
2022-05-31 09:03:43 -07:00
Aidan Dempster
09534119a1
Fixed non-deterministic optimizer creation (#130)
2022-05-31 09:03:20 -07:00
Phil Wang
6f8b90d4d7
add packaging package
v0.6.3
2022-05-30 11:45:00 -07:00
Phil Wang
b588286288
fix version
v0.6.2
2022-05-30 11:06:34 -07:00
Phil Wang
b693e0be03
default number of resnet blocks per layer in unet to 2 (in imagen it was 3 for base 64x64)
v0.6.1
2022-05-30 10:06:48 -07:00
Phil Wang
a0bed30a84
additional conditioning on image embedding by summing to time embeddings (for FiLM-like conditioning in subsequent layers), from a passage in the paper, found by @mhh0318
v0.6.0
2022-05-30 09:26:51 -07:00
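Concretely, summing into the time embedding means every downstream block that consumes time conditioning also sees the image embedding. A sketch with illustrative dimensions and a hypothetical projection layer:

```python
import torch
from torch import nn

time_dim, image_embed_dim = 512, 768
to_time_hiddens = nn.Linear(image_embed_dim, time_dim)  # hypothetical projection

time_emb = torch.randn(4, time_dim)             # from the sinusoidal time MLP
image_embed = torch.randn(4, image_embed_dim)   # from CLIP / the prior
cond = time_emb + to_time_hiddens(image_embed)  # combined conditioning signal
```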
zion
387c5bf774
quick patch for new prior loader (#123)
2022-05-29 16:25:53 -07:00
Phil Wang
a13d2d89c5
0.5.7
v0.5.7
2022-05-29 07:40:25 -07:00
zion
44d4b1bba9
overhaul prior dataloader (#122)
add readme for loader
2022-05-29 07:39:59 -07:00
Phil Wang
f12a7589c5
commit to trying out grid attention
2022-05-26 12:56:10 -07:00
Phil Wang
b8af2210df
make sure diffusion prior can be instantiated from pydantic class without clip
v0.5.6
2022-05-26 08:47:30 -07:00
Phil Wang
f4fe6c570d
allow for full customization of number of resnet blocks per down or upsampling layers in unet, as in imagen
v0.5.5
2022-05-26 08:33:31 -07:00
Phil Wang
645e207441
credit assignment
2022-05-26 08:16:03 -07:00
Phil Wang
00743b3a0b
update
2022-05-26 08:12:25 -07:00
Phil Wang
01589aff6a
cite maxvit properly
2022-05-26 07:12:25 -07:00
Phil Wang
7ecfd76cc0
fix evaluation config splat in training decoder script
2022-05-26 07:11:31 -07:00
Phil Wang
6161b61c55
0.5.4
v0.5.4
2022-05-25 09:32:17 -07:00
zion
1ed0f9d80b
use deterministic optimizer params (#116)
2022-05-25 09:31:43 -07:00
Phil Wang
f326a95e26
0.5.3
v0.5.3
2022-05-25 09:07:28 -07:00
zion
d7a0a2ce4b
add more support for configuring prior (#113)
2022-05-25 09:06:50 -07:00
Phil Wang
f23fab7ef7
switch over to scale-shift conditioning, as it seems Imagen and GLIDE used it and it may be important
v0.5.2
2022-05-24 21:46:12 -07:00
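For reference, scale-shift conditioning projects the embedding to a per-channel scale and shift instead of simply adding it to the feature map. A minimal sketch with illustrative shapes:

```python
import torch
from torch import nn

channels, cond_dim = 64, 512
to_scale_shift = nn.Linear(cond_dim, channels * 2)  # hypothetical projection

x = torch.randn(2, channels, 32, 32)  # resnet block hidden states
cond = torch.randn(2, cond_dim)       # time (+ image) conditioning embedding

scale, shift = to_scale_shift(cond).chunk(2, dim = 1)
x = x * (1 + scale[..., None, None]) + shift[..., None, None]
```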