aljaz / DALLE2-pytorch
Mirror of https://github.com/lucidrains/DALLE2-pytorch.git, last synced 2025-12-19 09:44:19 +01:00
DALLE2-pytorch / dalle2_pytorch at commit 59b1a77d4dabe39dc88180bd85295c1eff8f9b36
Latest commit 59b1a77d4d by Phil Wang, 2022-04-22 11:14:54 -07:00: be a bit more conservative and stick with LayerNorm (without bias) for now, given @borisdayma's results (https://twitter.com/borisdayma/status/1517227191477571585)
data | bring in the simple tokenizer released by OpenAI, while leaving room for a custom tokenizer with YTTM | 2022-04-12 09:23:17 -07:00
__init__.py | refactor the blurring training augmentation to be handled by the decoder, with an option to downsample to the previous resolution before upsampling (cascading DDPM); this opens up the possibility of cascading latent DDPM | 2022-04-22 11:09:17 -07:00
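The commit above has each stage of the cascade optionally downsample its conditioning image back to the previous stage's resolution before upsampling, so the current unet must regenerate the lost detail. A minimal, dependency-free sketch of that resize round-trip (function names are hypothetical, not taken from the repo; the real code operates on PyTorch tensors):

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of a 2D list-of-lists image."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

def condition_for_stage(img, prev_res, target_res):
    """Cascading-DDPM-style conditioning: downsample to the previous
    stage's resolution, then upsample to the current target resolution.
    Detail destroyed by the round-trip is what this stage must add back."""
    low = resize_nearest(img, prev_res, prev_res)
    return resize_nearest(low, target_res, target_res)
```

The same round-trip works whether the images are pixels or latents, which is why the commit notes it "opens up the possibility of cascading latent DDPM".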
cli.py | give time tokens a surface area of 2 tokens by default, and let the researcher customize which unets are conditioned on image embeddings and/or text encodings | 2022-04-20 10:04:47 -07:00
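"Surface area of 2 tokens" here means the diffusion timestep embedding is exposed to cross-attention as two conditioning tokens rather than one. A hedged, dependency-free sketch of that split (the function name is illustrative, not the repo's API):

```python
def time_embedding_to_tokens(time_emb, num_time_tokens=2):
    """Split a flat time embedding of length num_time_tokens * dim
    into num_time_tokens separate conditioning tokens, each of size dim,
    so attention layers can attend to the timestep over multiple slots."""
    assert len(time_emb) % num_time_tokens == 0
    dim = len(time_emb) // num_time_tokens
    return [time_emb[i * dim:(i + 1) * dim] for i in range(num_time_tokens)]
```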
dalle2_pytorch.py | be a bit more conservative and stick with LayerNorm (without bias) for now, given @borisdayma's results (https://twitter.com/borisdayma/status/1517227191477571585) | 2022-04-22 11:14:54 -07:00
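The head commit keeps layer normalization but drops the learned bias term. A dependency-free sketch of just the math (the repo's implementation uses PyTorch; this illustrates what "without bias" changes):

```python
import math

def layer_norm_no_bias(x, gamma, eps=1e-5):
    """LayerNorm with a learned scale (gamma) but no bias (beta):
    normalize to zero mean / unit variance, then scale only.
    A standard LayerNorm would add a learned beta after scaling."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [g * (v - mean) / math.sqrt(var + eps) for g, v in zip(gamma, x)]
```

Without beta the output always has zero mean per normalized vector, which removes one degree of freedom the referenced experiments found unnecessary or harmful.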
tokenizer.py | bring in the simple tokenizer released by OpenAI, while leaving room for a custom tokenizer with YTTM | 2022-04-12 09:23:17 -07:00
train.py | get ready for all the training-related classes and functions | 2022-04-12 09:54:50 -07:00
vqgan_vae.py | prepare for latent diffusion in the first DDPM of the cascade in the Decoder | 2022-04-21 17:54:31 -07:00