aljaz / DALLE2-pytorch
Mirror of https://github.com/lucidrains/DALLE2-pytorch.git, synced 2025-12-19 17:54:20 +01:00
DALLE2-pytorch / dalle2_pytorch  (at commit d5318aef4fdd0b17132ca34c8b65a75945c38b41)
Latest commit f82917e1fd by Phil Wang (2022-04-23 07:52:10 -07:00):
    prepare for turning off the gradient penalty; as shown in the GAN literature, GP needs to be applied only 1 out of every 4 iterations
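The commit above refers to the "lazy regularization" idea from the GAN literature: the gradient penalty is expensive, so it is computed only every few discriminator steps and its weight is scaled up to compensate. The sketch below is a hypothetical illustration of that scheduling logic, not the repo's actual code; the function and parameter names are invented, and a constant stands in for the real gradient-penalty term.

```python
# Illustrative sketch of lazy gradient-penalty scheduling: the penalty is
# added to the discriminator loss only every `apply_every` steps, and is
# scaled by `apply_every` so its time-averaged contribution is unchanged.
# All names here are assumptions for illustration, not the repo's API.

def discriminator_loss(step, base_loss, gp_fn, apply_every=4, gp_weight=10.0):
    """Add the gradient penalty only on every `apply_every`-th step."""
    if step % apply_every == 0:
        # Scale up so the average penalty over a full cycle stays the same.
        return base_loss + gp_weight * apply_every * gp_fn()
    return base_loss

# Toy usage: a constant 0.01 stands in for the real gradient-penalty value.
losses = [discriminator_loss(s, 1.0, lambda: 0.01) for s in range(8)]
```

Only steps 0 and 4 pay the penalty; the other steps return the base loss untouched.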
data/  (2022-04-12 09:23:17 -07:00)
    bring in the simple tokenizer released by OpenAI, while leaving room for a custom tokenizer via YTTM
__init__.py  (2022-04-22 11:09:17 -07:00)
    refactor the blurring training augmentation to be handled by the decoder, with an option to downsample to the previous resolution before upsampling (cascading DDPM); this opens up the possibility of a cascading latent DDPM
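The __init__.py commit describes blurring a training image by round-tripping it through the previous cascade resolution: downsample, then upsample back. As a rough, dependency-free illustration (not the repo's implementation, which operates on tensors), the sketch below does a 2x box-filter downsample followed by a nearest-neighbor upsample on a square grayscale grid; all function names are invented for this example.

```python
# Hypothetical sketch of "downsample to the previous resolution, then
# upsample": the round trip discards high-frequency detail, producing the
# blurred view a lower cascade stage would see.

def downsample_2x(img):
    """Average non-overlapping 2x2 blocks of a square grayscale image."""
    n = len(img)
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(n // 2)] for i in range(n // 2)]

def upsample_2x(img):
    """Repeat each pixel into a 2x2 block (nearest-neighbor upsample)."""
    return [[img[i // 2][j // 2] for j in range(2 * len(img))]
            for i in range(2 * len(img))]

def blur_to_previous_resolution(img):
    return upsample_2x(downsample_2x(img))

# A 4x4 checkerboard collapses to its local means after the round trip.
img = [[1.0, 0.0, 1.0, 0.0],
       [0.0, 1.0, 0.0, 1.0],
       [1.0, 0.0, 1.0, 0.0],
       [0.0, 1.0, 0.0, 1.0]]
blurred = blur_to_previous_resolution(img)
```

In practice this would be done with a tensor resize (e.g. bilinear interpolation) inside the decoder, but the information loss is the same idea.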
cli.py  (2022-04-20 10:04:47 -07:00)
    give time tokens a surface area of 2 tokens by default, and let researchers customize which U-Net is conditioned on image embeddings and/or text encodings
dalle2_pytorch.py  (2022-04-22 15:23:18 -07:00)
    use the null container pattern to clean up some conditionals; save further cleanup for next week
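The null container pattern mentioned in the dalle2_pytorch.py commit is the null-object idiom: instead of branching on `if module is not None:` at every call site, a do-nothing stand-in is substituted so the calling code stays branch-free. A minimal sketch, with illustrative names (`Identity`, `maybe`) that are assumptions rather than the repo's actual helpers:

```python
# Null-container (null-object) pattern: optional components are replaced by
# a pass-through object, so call sites never branch on None.

class Identity:
    """Null object: callable like a module, returns its input unchanged."""
    def __call__(self, x):
        return x

def maybe(module):
    """Return the module if present, else the do-nothing stand-in."""
    return module if module is not None else Identity()

# Call sites apply the (possibly absent) component unconditionally.
double = lambda x: 2 * x
applied = maybe(double)(3)   # real component runs
skipped = maybe(None)(3)     # null container passes input through
```

This collapses scattered `if cond_fn is not None: x = cond_fn(x)` conditionals into a single uniform call.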
tokenizer.py  (2022-04-12 09:23:17 -07:00)
    bring in the simple tokenizer released by OpenAI, while leaving room for a custom tokenizer via YTTM
train.py  (2022-04-12 09:54:50 -07:00)
    get ready for all the training-related classes and functions
vqgan_vae.py  (2022-04-23 07:52:10 -07:00)
    prepare for turning off the gradient penalty; as shown in the GAN literature, GP needs to be applied only 1 out of every 4 iterations