zion
98f0c17759
add samples-seen and EMA decay (#166)
2022-06-24 15:12:09 -07:00
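The samples-seen counter and EMA decay in #166 roughly correspond to the pattern below; the class, field names, and the fixed decay value are illustrative assumptions, not the repository's implementation.

```python
import copy
import torch

class EMA:
    """Exponential moving average of model weights (illustrative sketch, not the #166 code)."""

    def __init__(self, model, decay=0.9999):
        self.decay = decay
        self.samples_seen = 0                        # hypothetical counter name
        self.ema_model = copy.deepcopy(model).eval()
        for p in self.ema_model.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model, batch_size):
        self.samples_seen += batch_size
        for ema_p, p in zip(self.ema_model.parameters(), model.parameters()):
            # ema = decay * ema + (1 - decay) * param
            ema_p.lerp_(p, 1.0 - self.decay)
```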
zion
fddf66e91e
fix params in decoder (#162)
2022-06-22 14:45:01 -07:00
Aidan Dempster
58892135d9
Distributed Training of the Decoder (#121)
* Converted decoder trainer to use accelerate
* Fixed issue where metric evaluation would hang on distributed mode
* Implemented functional saving
Loading still fails due to some issue with the optimizer
* Fixed issue with loading decoders
* Fixed issue with tracker config
* Fixed issue with amp
Updated logging to be more logical
* Saving checkpoint now saves position in training as well
Fixed an issue with running out of GPU memory due to loading weights onto the GPU twice
* Fixed EMA for distributed training
* Fixed issue where get_pkg_version was reintroduced
* Changed decoder trainer to upload config as a file
Fixed issue where loading the best checkpoint would error
2022-06-19 09:25:54 -07:00
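The Accelerate conversion, main-process-only saving, and checkpointing of the training position described in #121 follow a pattern roughly like this sketch; the forward signature, checkpoint keys, and save interval are all assumptions rather than the actual trainer code.

```python
from accelerate import Accelerator

def train(decoder, optimizer, dataloader, num_steps, ckpt_path="decoder.pt"):
    accelerator = Accelerator()                       # handles DDP, device placement, AMP
    decoder, optimizer, dataloader = accelerator.prepare(decoder, optimizer, dataloader)

    step = 0
    for images, image_embeds in dataloader:
        loss = decoder(images, image_embed=image_embeds)   # assumed forward signature
        accelerator.backward(loss)                    # replaces loss.backward() under distributed/AMP
        optimizer.step()
        optimizer.zero_grad()
        step += 1

        if accelerator.is_main_process and step % 1000 == 0:
            # save the position in training alongside the weights, as #121 describes
            accelerator.save({
                "step": step,
                "model": accelerator.unwrap_model(decoder).state_dict(),
                "optimizer": optimizer.state_dict(),
            }, ckpt_path)

        if step >= num_steps:
            return
```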
Phil Wang
bee5bf3815
fix for https://github.com/lucidrains/DALLE2-pytorch/issues/143
2022-06-07 09:03:48 -07:00
Phil Wang
f8bfd3493a
make datum destructuring length-agnostic when validating in the training decoder script, for @YUHANG-Ma
2022-06-02 13:54:57 -07:00
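Length-agnostic destructuring of a datum typically relies on star-unpacking; the tuple layout shown here is assumed for illustration only.

```python
def split_datum(datum):
    """Take the leading fields and ignore any trailing ones (illustrative only)."""
    image, embed, *rest = datum            # star-unpacking makes the datum length irrelevant
    return image, embed

img, emb = split_datum(("img", "emb", "caption", "url"))   # extra fields are simply dropped
```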
Phil Wang
9025345e29
take a stab at fixing generate_grid_samples when real images have a greater image size than the generated ones
2022-06-02 11:33:15 -07:00
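A plausible shape of the fix is resizing the real images to the generated resolution before building the grid; the function name and arguments below are assumptions, not the repo's generate_grid_samples signature.

```python
import torch
import torch.nn.functional as F
from torchvision.utils import make_grid

def grid_of_samples(real, generated):
    # real, generated: (B, C, H, W) tensors; resize real images if their resolution differs
    if real.shape[-2:] != generated.shape[-2:]:
        real = F.interpolate(real, size=generated.shape[-2:], mode="bilinear", align_corners=False)
    return make_grid(torch.cat([real, generated]), nrow=generated.shape[0])
```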
Phil Wang
7ecfd76cc0
fix evaluation config splat in training decoder script
2022-05-26 07:11:31 -07:00
Phil Wang
5c397c9d66
move neural network creations off the configuration file into the pydantic classes
2022-05-22 19:18:18 -07:00
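Moving network construction into the pydantic classes usually means the config object grows a factory method; the field names and the create() method below are assumptions, with nn.Identity standing in for the real network.

```python
from typing import Tuple
from pydantic import BaseModel
import torch.nn as nn

class UnetConfig(BaseModel):
    dim: int = 128
    dim_mults: Tuple[int, ...] = (1, 2, 4, 8)

    def create(self) -> nn.Module:
        # the real config would build a Unet from these fields; nn.Identity stands in here
        return nn.Identity()
```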
Phil Wang
0f4edff214
derived value for image preprocessing belongs to the data config class
2022-05-22 18:42:40 -07:00
Phil Wang
501a8c7c46
small cleanup
2022-05-22 15:39:38 -07:00
Phil Wang
c12e067178
let the pydantic config base model take care of loading configuration from json path
2022-05-22 14:47:23 -07:00
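A base model that loads itself from a JSON path could look like this sketch; the class and method names are assumptions.

```python
import json
from pydantic import BaseModel

class ConfigBase(BaseModel):
    @classmethod
    def from_json_path(cls, path):
        with open(path) as f:
            return cls(**json.load(f))

# usage (assuming the concrete config classes inherit from ConfigBase):
# config = TrainDecoderConfig.from_json_path("configs/train_decoder.json")
```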
Phil Wang
c6629c431a
make training splits their own pydantic base model, validate they sum to 1, make the decoder script cleaner
2022-05-22 14:43:22 -07:00
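A minimal sketch of a splits model whose validator enforces that the fractions sum to 1 (pydantic v1 style; the field names are assumptions).

```python
from pydantic import BaseModel, root_validator   # pydantic v1 style

class TrainSplitConfig(BaseModel):
    train: float = 0.75
    val: float = 0.15
    test: float = 0.10

    @root_validator
    def splits_sum_to_one(cls, values):
        total = values["train"] + values["val"] + values["test"]
        if abs(total - 1.0) > 1e-6:
            raise ValueError(f"train/val/test splits must sum to 1, got {total}")
        return values
```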
Phil Wang
a1ef023193
use pydantic to manage decoder training configs + defaults and refactor training script
2022-05-22 14:27:40 -07:00
Phil Wang
b432df2f7b
final cleanup to decoder script
2022-05-21 10:42:16 -07:00
Phil Wang
8b0d459b25
move config parsing logic to own file, consider whether to find an off-the-shelf solution at future date
2022-05-21 10:30:10 -07:00
Phil Wang
0064661729
small cleanup of decoder train script
2022-05-21 10:17:13 -07:00
Phil Wang
80497e9839
accept unets as a list for the decoder
2022-05-20 20:31:26 -07:00
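Accepting either a single unet or a list usually amounts to normalizing the argument in the constructor, roughly as below; this is a generic sketch, not the repository's Decoder class.

```python
import torch.nn as nn

class DecoderSketch(nn.Module):
    def __init__(self, unet):
        super().__init__()
        # accept a single unet or a list/tuple of unets, one per stage of the cascade
        unets = unet if isinstance(unet, (list, tuple)) else [unet]
        self.unets = nn.ModuleList(unets)
```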
Phil Wang
8997f178d6
small cleanup with timer
2022-05-20 20:05:01 -07:00
Aidan Dempster
022c94e443
Added single-GPU training script for decoder (#108)
Added config files for training
Changed example image generation to be more efficient
Added configuration description to README
Removed unused import
2022-05-20 19:46:19 -07:00
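A bare-bones single-GPU training loop for a decoder of this kind might look like the following; the forward signature and all names are illustrative assumptions rather than the #108 script.

```python
def train_one_epoch(decoder, dataloader, optimizer, device="cuda"):
    decoder.to(device).train()
    for images, image_embeds in dataloader:
        images, image_embeds = images.to(device), image_embeds.to(device)
        loss = decoder(images, image_embed=image_embeds)   # assumed forward signature
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```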