commit | author | date | message [| tag]
64c2f9c4eb | zion | 2022-06-04 13:26:34 -07:00 | implement ema warmup from @crowsonkb (#140)
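A minimal sketch of what an EMA wrapper with warmup (the idea credited to @crowsonkb above) can look like: the decay ramps from 0 toward `beta` as `1 - (1 + step / inv_gamma) ** -power`, so early steps track the online weights closely instead of averaging against random initialization, and weights are only averaged every `update_every` steps. Parameter names and default values are illustrative assumptions, not necessarily the repository's exact EMA module.

    # Sketch, not the repository's exact code.
    import copy
    import torch

    class EMA:
        def __init__(self, model, beta=0.99, inv_gamma=1.0, power=2/3, update_every=10):
            self.model = model
            self.ema_model = copy.deepcopy(model)
            self.ema_model.requires_grad_(False)
            self.beta = beta
            self.inv_gamma = inv_gamma
            self.power = power
            self.update_every = update_every
            self.step = 0

        def current_decay(self):
            # warmup: decay grows smoothly from 0 toward beta
            value = 1 - (1 + self.step / self.inv_gamma) ** -self.power
            return min(max(value, 0.0), self.beta)

        @torch.no_grad()
        def update(self):
            self.step += 1
            if self.step % self.update_every != 0:   # only average every `update_every` steps
                return
            decay = self.current_decay()
            for ema_p, p in zip(self.ema_model.parameters(), self.model.parameters()):
                ema_p.lerp_(p, 1 - decay)            # ema = decay * ema + (1 - decay) * online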
22cc613278 | Phil Wang | 2022-06-03 19:44:36 -07:00 | ema fix from @nousr | v0.6.12
83517849e5 | zion | 2022-06-03 19:43:51 -07:00 | ema module fixes (#139)
708809ed6c | Phil Wang | 2022-06-03 10:26:28 -07:00 | lower beta2 for adam down to 0.99, based on https://openreview.net/forum?id=2LdBqxc1Yv | v0.6.11
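For concreteness, the beta2 change above amounts to lowering the second Adam moment coefficient from the PyTorch default of 0.999 to 0.99; beta1 = 0.9 and the learning rate below are assumptions.

    import torch
    from torch import nn

    model = nn.Linear(512, 512)   # stand-in for the actual unet / prior
    optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.99))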
9cc475f6e7 | Phil Wang | 2022-06-03 10:21:05 -07:00 | fix update_every within EMA | v0.6.10
ffd342e9d0 | Phil Wang | 2022-06-03 09:34:57 -07:00 | allow for an option to constrain the variance interpolation fraction coming out from the unet for learned variance, if it is turned on | v0.6.9
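The learned-variance scheme referenced above follows Improved DDPM: the unet emits a per-pixel fraction that interpolates, in log space, between beta_t and the posterior variance beta_tilde_t. A minimal sketch of constraining that fraction is below; squashing it with a sigmoid is one plausible reading of "constrain", an assumption rather than the repository's implementation.

    import torch

    def model_variance(raw_fraction, log_beta_t, log_beta_tilde_t, constrain=True):
        # raw_fraction: unet output; constrained into [0, 1] when the option is on,
        # otherwise mapped from the usual [-1, 1] convention
        frac = torch.sigmoid(raw_fraction) if constrain else (raw_fraction + 1) / 2
        log_var = frac * log_beta_t + (1 - frac) * log_beta_tilde_t
        return log_var.exp()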
f8bfd3493a | Phil Wang | 2022-06-02 13:54:57 -07:00 | make destructuring datum length agnostic when validating in training decoder script, for @YUHANG-Ma
9025345e29 | Phil Wang | 2022-06-02 11:33:15 -07:00 | take a stab at fixing generate_grid_samples when real images have a greater image size than generated
8cc278447e | Phil Wang | 2022-06-02 11:21:58 -07:00 | just cast to right types for blur sigma and kernel size augs | v0.6.8
38cd62010c | Phil Wang | 2022-06-02 11:11:25 -07:00 | allow for random blur sigma and kernel size augmentations on low res conditioning (need to reread paper to see if the augmentation value needs to be fed into the unet for conditioning as well) | v0.6.7
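A sketch of the low-res conditioning augmentation described above: during training, the image fed to the super-resolution unet as conditioning is blurred with a randomly drawn sigma and an odd integer kernel size (the type casts mentioned in the v0.6.8 entry). The ranges below are illustrative, and, as the commit itself notes, the paper may also require feeding the augmentation strength back into the unet as conditioning.

    import random
    import torchvision.transforms.functional as TF

    def augment_lowres_cond(lowres_image, sigma_range=(0.1, 0.6), kernel_sizes=(3, 5, 7)):
        sigma = random.uniform(*sigma_range)       # sigma must be a float
        kernel_size = random.choice(kernel_sizes)  # kernel size must be an odd int
        return TF.gaussian_blur(lowres_image, kernel_size=kernel_size, sigma=sigma)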
1cc288af39 | Ryan Russell | 2022-06-01 13:28:02 -07:00 | Improve Readability (#133); Signed-off-by: Ryan Russell <git@ryanrussell.org>
a851168633 | Phil Wang | 2022-06-01 09:25:35 -07:00 | make youtokentome optional package, due to reported installation difficulties | v0.6.6
1ffeecd0ca | Phil Wang | 2022-05-31 11:55:21 -07:00 | lower default ema beta value | v0.6.5
3df899f7a4 | Phil Wang | 2022-05-31 09:03:43 -07:00 | patch | v0.6.4
09534119a1 | Aidan Dempster | 2022-05-31 09:03:20 -07:00 | Fixed non deterministic optimizer creation (#130)
6f8b90d4d7 | Phil Wang | 2022-05-30 11:45:00 -07:00 | add packaging package | v0.6.3
b588286288 | Phil Wang | 2022-05-30 11:06:34 -07:00 | fix version | v0.6.2
b693e0be03 | Phil Wang | 2022-05-30 10:06:48 -07:00 | default number of resnet blocks per layer in unet to 2 (in imagen it was 3 for base 64x64) | v0.6.1
a0bed30a84 | Phil Wang | 2022-05-30 09:26:51 -07:00 | additional conditioning on image embedding by summing to time embeddings (for FiLM like conditioning in subsequent layers), from passage found in paper by @mhh0318 | v0.6.0
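The v0.6.0 entry above describes projecting the CLIP image embedding into the time-embedding dimension and summing it in, so every layer that is FiLM-conditioned on time becomes implicitly conditioned on the image embedding as well. A minimal sketch under assumed dimensions and layer names:

    import torch
    from torch import nn

    time_cond_dim = 512
    image_embed_dim = 768

    # project the image embedding into the time-conditioning space
    to_time_cond = nn.Sequential(nn.Linear(image_embed_dim, time_cond_dim), nn.SiLU())

    def condition(time_emb, image_embed):
        # time_emb: (batch, time_cond_dim), image_embed: (batch, image_embed_dim)
        return time_emb + to_time_cond(image_embed)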
387c5bf774 | zion | 2022-05-29 16:25:53 -07:00 | quick patch for new prior loader (#123)
a13d2d89c5 | Phil Wang | 2022-05-29 07:40:25 -07:00 | 0.5.7 | v0.5.7
44d4b1bba9 | zion | 2022-05-29 07:39:59 -07:00 | overhaul prior dataloader (#122); add readme for loader
f12a7589c5 | Phil Wang | 2022-05-26 12:56:10 -07:00 | commit to trying out grid attention
b8af2210df | Phil Wang | 2022-05-26 08:47:30 -07:00 | make sure diffusion prior can be instantiated from pydantic class without clip | v0.5.6
f4fe6c570d | Phil Wang | 2022-05-26 08:33:31 -07:00 | allow for full customization of number of resnet blocks per down or upsampling layers in unet, as in imagen | v0.5.5
645e207441 | Phil Wang | 2022-05-26 08:16:03 -07:00 | credit assignment
00743b3a0b | Phil Wang | 2022-05-26 08:12:25 -07:00 | update
01589aff6a | Phil Wang | 2022-05-26 07:12:25 -07:00 | cite maxvit properly
7ecfd76cc0 | Phil Wang | 2022-05-26 07:11:31 -07:00 | fix evaluation config splat in training decoder script
6161b61c55 | Phil Wang | 2022-05-25 09:32:17 -07:00 | 0.5.4 | v0.5.4
1ed0f9d80b | zion | 2022-05-25 09:31:43 -07:00 | use deterministic optimizer params (#116)
f326a95e26 | Phil Wang | 2022-05-25 09:07:28 -07:00 | 0.5.3 | v0.5.3
d7a0a2ce4b | zion | 2022-05-25 09:06:50 -07:00 | add more support for configuring prior (#113)
f23fab7ef7 | Phil Wang | 2022-05-24 21:46:12 -07:00 | switch over to scale shift conditioning, as it seems like Imagen and Glide used it and it may be important | v0.5.2
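Scale-shift conditioning, adopted in the v0.5.2 entry above, applies the conditioning embedding as a per-channel (scale, shift) pair after normalization inside a resnet block, rather than simply adding it to the feature map. A minimal sketch; layer sizes, the group count, and activation choices are assumptions.

    import torch
    from torch import nn

    class ScaleShiftBlock(nn.Module):
        def __init__(self, dim, cond_dim):
            super().__init__()
            self.norm = nn.GroupNorm(8, dim)              # dim assumed divisible by 8
            self.to_scale_shift = nn.Linear(cond_dim, dim * 2)
            self.conv = nn.Conv2d(dim, dim, 3, padding=1)
            self.act = nn.SiLU()

        def forward(self, x, cond):
            # cond: (batch, cond_dim) -> per-channel scale and shift
            scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
            scale = scale[..., None, None]                # broadcast over H, W
            shift = shift[..., None, None]
            h = self.norm(x) * (scale + 1) + shift        # scale-shift (FiLM-style) conditioning
            return self.conv(self.act(h))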
857b9fbf1e | Phil Wang | 2022-05-24 21:42:32 -07:00 | allow for one to stop grouping out weight decayable parameters, to debug optimizer state dict problem | v0.5.1
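The parameter grouping that the v0.5.1 entry makes optional typically looks like the sketch below: weight decay is applied only to weight-like parameters, while biases and norm gains go into a no-decay group; disabling the grouping puts everything into a single group, which sidesteps the optimizer state-dict mismatch being debugged. The "ndim >= 2" heuristic and the flag name are assumptions.

    import torch
    from torch import nn

    def get_param_groups(model: nn.Module, weight_decay=1e-2, group_wd_params=True):
        if not group_wd_params:
            # single group: simplest possible optimizer state, useful for debugging
            return [{'params': list(model.parameters()), 'weight_decay': weight_decay}]
        decay, no_decay = [], []
        for p in model.parameters():
            (decay if p.ndim >= 2 else no_decay).append(p)
        return [
            {'params': decay, 'weight_decay': weight_decay},
            {'params': no_decay, 'weight_decay': 0.0},
        ]

    # usage: torch.optim.Adam(get_param_groups(model), lr=3e-4, betas=(0.9, 0.99))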
8864fd0aa7 | Phil Wang | 2022-05-24 18:15:14 -07:00 | bring in the dynamic thresholding technique from the Imagen paper, which purportedly improves classifier free guidance for the cascading ddpm | 0.5.0a
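Dynamic thresholding, as described in the Imagen paper referenced above: at each sampling step the predicted x0 is clamped to [-s, s] and rescaled by s, where s is a high percentile of |x0| per sample with a floor of 1, so well-behaved predictions are left untouched. A minimal sketch; the 99.5th percentile is an assumed default.

    import torch

    def dynamic_threshold(x0, percentile=0.995):
        # x0: (batch, channels, height, width), nominally in [-1, 1]
        s = torch.quantile(x0.abs().flatten(1), percentile, dim=1)   # per-sample scale
        s = s.clamp(min=1.0).view(-1, 1, 1, 1)
        return x0.clamp(-s, s) / s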
72bf159331 | Phil Wang | 2022-05-24 08:25:40 -07:00 | update | v0.5.0
e5e47cfecb | Phil Wang | 2022-05-23 12:41:46 -07:00 | link to aidan's test run
fa533962bd | Phil Wang | 2022-05-22 22:43:14 -07:00 | just use an assert to make sure clip image channels is never different than the channels of the diffusion prior and decoder, if clip is given | v0.4.14
276abf337b | Phil Wang | 2022-05-22 22:28:45 -07:00 | fix and cleanup image size determination logic in decoder | 0.4.11
ae42d03006 | Phil Wang | 2022-05-22 22:14:25 -07:00 | allow for saving of additional fields on save method in trainers, and return loaded objects from the load method | 0.4.10
4d346e98d9 | Phil Wang | 2022-05-22 20:36:20 -07:00 | allow for config driven creation of clip-less diffusion prior
2b1fd1ad2e | Phil Wang | 2022-05-22 19:23:40 -07:00 | product management
82a2ef37d9 | zion | 2022-05-22 19:22:30 -07:00 | Update README.md (#109); block in a section that links to available pre-trained models for those who are interested
5c397c9d66 | Phil Wang | 2022-05-22 19:18:18 -07:00 | move neural network creations off the configuration file into the pydantic classes | 0.4.8
0f4edff214 | Phil Wang | 2022-05-22 18:42:40 -07:00 | derived value for image preprocessing belongs to the data config class | 0.4.7
501a8c7c46 | Phil Wang | 2022-05-22 15:39:38 -07:00 | small cleanup | 0.4.6
4e49373fc5 | Phil Wang | 2022-05-22 15:27:40 -07:00 | project management
49de72040c | Phil Wang | 2022-05-22 15:21:00 -07:00 | fix decoder trainer optimizer loading (since there are multiple for each unet), also save and load step number correctly | 0.4.5
271a376eaf | Phil Wang | 2022-05-22 15:10:28 -07:00 | 0.4.3 | 0.4.3