8cc278447e | Phil Wang | 2022-06-02 11:21:58 -07:00 | v0.6.8
  just cast to right types for blur sigma and kernel size augs

38cd62010c | Phil Wang | 2022-06-02 11:11:25 -07:00 | v0.6.7
  allow for random blur sigma and kernel size augmentations on low res conditioning (need to reread paper to see if the augmentation value needs to be fed into the unet for conditioning as well)
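
The random blur augmentation from v0.6.7, and the type casts from v0.6.8, might be sketched as follows. This is a hedged illustration, not the repo's actual code: the function name, sigma range, and kernel size choices are assumptions.

```python
import random

def sample_blur_params(sigma_range=(0.4, 0.6), kernel_sizes=(3, 5, 7)):
    """Sample random gaussian blur parameters for the low-res conditioning image.

    Blur kernels require an odd integer size and a float sigma, hence the
    explicit casts (the point of the "cast to right types" follow-up commit).
    All names and ranges here are hypothetical.
    """
    sigma = float(random.uniform(*sigma_range))     # must be a float
    kernel_size = int(random.choice(kernel_sizes))  # must be an odd int
    return sigma, kernel_size
```

The sampled pair would then parameterize a gaussian blur applied to the low-resolution conditioning image before it is fed to the super-resolution unet.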
1cc288af39 | Ryan Russell | 2022-06-01 13:28:02 -07:00
  Improve Readability (#133)
  Signed-off-by: Ryan Russell <git@ryanrussell.org>

a851168633 | Phil Wang | 2022-06-01 09:25:35 -07:00 | v0.6.6
  make youtokentome optional package, due to reported installation difficulties

1ffeecd0ca | Phil Wang | 2022-05-31 11:55:21 -07:00 | v0.6.5
  lower default ema beta value
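
For context on the v0.6.5 change, the EMA beta is the decay rate of the exponential moving average kept over model weights; a minimal scalar sketch (plain floats standing in for weight tensors, class name hypothetical):

```python
class EMA:
    """Exponential moving average: new_avg = beta * old_avg + (1 - beta) * value.

    A lower beta (e.g. 0.99 instead of 0.9999) makes the averaged weights
    track the online weights faster, trading away some smoothing.
    """
    def __init__(self, beta=0.99):
        self.beta = beta
        self.avg = None

    def update(self, value):
        if self.avg is None:
            self.avg = value  # initialize on first update
        else:
            self.avg = self.beta * self.avg + (1 - self.beta) * value
        return self.avg
```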
3df899f7a4 | Phil Wang | 2022-05-31 09:03:43 -07:00 | v0.6.4
  patch

09534119a1 | Aidan Dempster | 2022-05-31 09:03:20 -07:00
  Fixed non deterministic optimizer creation (#130)

6f8b90d4d7 | Phil Wang | 2022-05-30 11:45:00 -07:00 | v0.6.3
  add packaging package

b588286288 | Phil Wang | 2022-05-30 11:06:34 -07:00 | v0.6.2
  fix version

b693e0be03 | Phil Wang | 2022-05-30 10:06:48 -07:00 | v0.6.1
  default number of resnet blocks per layer in unet to 2 (in imagen it was 3 for base 64x64)

a0bed30a84 | Phil Wang | 2022-05-30 09:26:51 -07:00 | v0.6.0
  additional conditioning on image embedding by summing to time embeddings (for FiLM like conditioning in subsequent layers), from passage found in paper by @mhh0318
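
The v0.6.0 conditioning change amounts to summing the image embedding into the time embedding, so every downstream layer that already consumes the time embedding (e.g. to derive FiLM-style modulation) also sees the image conditioning. A plain-list sketch, with lists standing in for tensors and the function name hypothetical:

```python
def condition_on_image_embed(time_emb, image_emb):
    """Fold image conditioning into the time embedding by elementwise sum.

    Downstream resnet blocks that project the time embedding to per-channel
    modulation then receive the image signal "for free". Sketch only.
    """
    assert len(time_emb) == len(image_emb), 'embeddings must share a dimension'
    return [t + i for t, i in zip(time_emb, image_emb)]
```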
387c5bf774 | zion | 2022-05-29 16:25:53 -07:00
  quick patch for new prior loader (#123)

a13d2d89c5 | Phil Wang | 2022-05-29 07:40:25 -07:00 | v0.5.7
  0.5.7

44d4b1bba9 | zion | 2022-05-29 07:39:59 -07:00
  overhaul prior dataloader (#122)
  add readme for loader

f12a7589c5 | Phil Wang | 2022-05-26 12:56:10 -07:00
  commit to trying out grid attention

b8af2210df | Phil Wang | 2022-05-26 08:47:30 -07:00 | v0.5.6
  make sure diffusion prior can be instantiated from pydantic class without clip

f4fe6c570d | Phil Wang | 2022-05-26 08:33:31 -07:00 | v0.5.5
  allow for full customization of number of resnet blocks per down or upsampling layers in unet, as in imagen

645e207441 | Phil Wang | 2022-05-26 08:16:03 -07:00
  credit assignment

00743b3a0b | Phil Wang | 2022-05-26 08:12:25 -07:00
  update

01589aff6a | Phil Wang | 2022-05-26 07:12:25 -07:00
  cite maxvit properly

7ecfd76cc0 | Phil Wang | 2022-05-26 07:11:31 -07:00
  fix evaluation config splat in training decoder script

6161b61c55 | Phil Wang | 2022-05-25 09:32:17 -07:00 | v0.5.4
  0.5.4

1ed0f9d80b | zion | 2022-05-25 09:31:43 -07:00
  use deterministic optimizer params (#116)

f326a95e26 | Phil Wang | 2022-05-25 09:07:28 -07:00 | v0.5.3
  0.5.3

d7a0a2ce4b | zion | 2022-05-25 09:06:50 -07:00
  add more support for configuring prior (#113)

f23fab7ef7 | Phil Wang | 2022-05-24 21:46:12 -07:00 | v0.5.2
  switch over to scale shift conditioning, as it seems like Imagen and Glide used it and it may be important
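
Scale-shift conditioning, as adopted in v0.5.2, applies the conditioning embedding as a multiplicative scale and additive shift rather than a plain sum. A plain-list sketch of the rule; the projection that produces the concatenated (scale, shift) vector is assumed to have happened upstream:

```python
def scale_shift(x, cond_emb):
    """FiLM-style scale-shift conditioning: x * (1 + scale) + shift.

    cond_emb is assumed to be the output of a learned projection, split
    down the middle into a scale half and a shift half. Sketch only.
    """
    half = len(cond_emb) // 2
    scale, shift = cond_emb[:half], cond_emb[half:]
    return [xi * (1 + s) + b for xi, s, b in zip(x, scale, shift)]
```

The `1 +` keeps the operation near-identity when the projected scale is small, which tends to stabilize early training compared with raw multiplication.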
857b9fbf1e | Phil Wang | 2022-05-24 21:42:32 -07:00 | v0.5.1
  allow for one to stop grouping out weight decayable parameters, to debug optimizer state dict problem
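
The parameter grouping that v0.5.1 makes optional typically splits weights into a decayable group and a non-decayable group (biases, norm scales). A hypothetical sketch of the toggle; the name-based decay heuristic and function signature are assumptions, not the repo's actual logic:

```python
def get_param_groups(named_params, wd=0.01, group_wd_params=True):
    """Build optimizer param groups, optionally skipping wd grouping.

    With group_wd_params=False everything lands in one group, which keeps
    the optimizer state dict layout flat and easier to debug.
    """
    if not group_wd_params:
        return [{'params': [p for _, p in named_params], 'weight_decay': wd}]
    decay, no_decay = [], []
    for name, p in named_params:
        # biases and normalization scales conventionally get no weight decay
        (no_decay if ('bias' in name or 'norm' in name) else decay).append(p)
    return [
        {'params': decay, 'weight_decay': wd},
        {'params': no_decay, 'weight_decay': 0.0},
    ]
```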
8864fd0aa7 | Phil Wang | 2022-05-24 18:15:14 -07:00 | 0.5.0a
  bring in the dynamic thresholding technique from the Imagen paper, which purportedly improves classifier free guidance for the cascading ddpm
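
Dynamic thresholding, as described in the Imagen paper, picks a threshold s from a high percentile of the absolute predicted pixel values, then clamps to [-s, s] and rescales by s (when s > 1), so heavily guided samples stay in range without uniformly clipping detail. A plain-list sketch of the per-sample rule:

```python
def dynamic_threshold(x, percentile=0.9):
    """Imagen-style dynamic thresholding over a flat list of values.

    s is the given percentile of |x|, floored at 1 so unsaturated samples
    pass through unchanged; saturated ones are clamped and rescaled.
    """
    abs_sorted = sorted(abs(v) for v in x)
    idx = min(int(percentile * len(abs_sorted)), len(abs_sorted) - 1)
    s = max(abs_sorted[idx], 1.0)
    return [max(-s, min(s, v)) / s for v in x]
```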
72bf159331 | Phil Wang | 2022-05-24 08:25:40 -07:00 | v0.5.0
  update

e5e47cfecb | Phil Wang | 2022-05-23 12:41:46 -07:00
  link to aidan's test run

fa533962bd | Phil Wang | 2022-05-22 22:43:14 -07:00 | v0.4.14
  just use an assert to make sure clip image channels is never different than the channels of the diffusion prior and decoder, if clip is given

276abf337b | Phil Wang | 2022-05-22 22:28:45 -07:00 | 0.4.11
  fix and cleanup image size determination logic in decoder

ae42d03006 | Phil Wang | 2022-05-22 22:14:25 -07:00 | 0.4.10
  allow for saving of additional fields on save method in trainers, and return loaded objects from the load method

4d346e98d9 | Phil Wang | 2022-05-22 20:36:20 -07:00
  allow for config driven creation of clip-less diffusion prior

2b1fd1ad2e | Phil Wang | 2022-05-22 19:23:40 -07:00
  product management

82a2ef37d9 | zion | 2022-05-22 19:22:30 -07:00
  Update README.md (#109)
  block in a section that links to available pre-trained models for those who are interested

5c397c9d66 | Phil Wang | 2022-05-22 19:18:18 -07:00 | 0.4.8
  move neural network creations off the configuration file into the pydantic classes

0f4edff214 | Phil Wang | 2022-05-22 18:42:40 -07:00 | 0.4.7
  derived value for image preprocessing belongs to the data config class

501a8c7c46 | Phil Wang | 2022-05-22 15:39:38 -07:00 | 0.4.6
  small cleanup

4e49373fc5 | Phil Wang | 2022-05-22 15:27:40 -07:00
  project management

49de72040c | Phil Wang | 2022-05-22 15:21:00 -07:00 | 0.4.5
  fix decoder trainer optimizer loading (since there are multiple for each unet), also save and load step number correctly

271a376eaf | Phil Wang | 2022-05-22 15:10:28 -07:00 | 0.4.3
  0.4.3

e527002472 | Phil Wang | 2022-05-22 15:10:15 -07:00
  take care of saving and loading functions on the diffusion prior and decoder training classes

c12e067178 | Phil Wang | 2022-05-22 14:47:23 -07:00 | 0.4.2
  let the pydantic config base model take care of loading configuration from json path

c6629c431a | Phil Wang | 2022-05-22 14:43:22 -07:00 | 0.4.1
  make training splits into its own pydantic base model, validate it sums to 1, make decoder script cleaner
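
The 0.4.1 split validation enforces that the train/val/test fractions sum to 1. A plain-function stand-in for that check (the actual repo does this with a pydantic model validator; the name and tolerance here are assumptions):

```python
def validate_splits(train, val, test, tol=1e-6):
    """Reject split fractions that do not sum to 1 (within float tolerance).

    Mirrors what a pydantic validator on a training-split config would do;
    hypothetical sketch, not the repo's actual class.
    """
    total = train + val + test
    if abs(total - 1.0) > tol:
        raise ValueError(f'splits must sum to 1, got {total}')
    return train, val, test
```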
7ac2fc79f2 | Phil Wang | 2022-05-22 14:32:50 -07:00
  add renamed train decoder json file

a1ef023193 | Phil Wang | 2022-05-22 14:27:40 -07:00 | 0.4.0
  use pydantic to manage decoder training configs + defaults and refactor training script

d49eca62fa | Phil Wang | 2022-05-21 11:27:52 -07:00
  dep

8aab69b91e | Phil Wang | 2022-05-21 10:47:45 -07:00
  final thought

b432df2f7b | Phil Wang | 2022-05-21 10:42:16 -07:00
  final cleanup to decoder script