aljaz/DALLE2-pytorch
Mirror of https://github.com/lucidrains/DALLE2-pytorch.git, synced 2025-12-19 17:54:20 +01:00
DALLE2-pytorch/dalle2_pytorch
Files at commit b0cd5f24b67fe7dda6bc5771ec9073a8556e7271
Latest commit: Phil Wang (b0cd5f24b6): handle gradient accumulation automatically when a max_batch_size is passed to the decoder or diffusion prior trainer forward (2022-05-14 17:04:09 -07:00)
data/: bring in the simple tokenizer released by OpenAI, but also leave room for a custom tokenizer with YTTM (2022-04-12 09:23:17 -07:00)
dataloaders/: add a dataloader for training the decoder (#57) (2022-05-05 07:08:45 -07:00)
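Since the decoder learns to invert CLIP image embeddings, its dataloader pairs each image with a precomputed embedding. A minimal sketch of that pairing follows; it is not the repository's actual loader, and all names and shapes here are illustrative:

```python
# Hedged sketch of a decoder-training dataloader: each batch pairs an image
# tensor with its precomputed CLIP image embedding. The toy tensors below
# stand in for a real dataset.
import torch
from torch.utils.data import Dataset, DataLoader

class DecoderDataset(Dataset):
    def __init__(self, image_tensors, image_embeddings):
        assert len(image_tensors) == len(image_embeddings)
        self.images = image_tensors    # (N, 3, H, W) float tensor
        self.embeds = image_embeddings # (N, 512) CLIP image embeddings

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.embeds[idx]

images = torch.randn(128, 3, 256, 256)
embeds = torch.randn(128, 512)
loader = DataLoader(DecoderDataset(images, embeds), batch_size=16, shuffle=True)
```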
__init__.py: some cleanup (2022-05-09 16:50:21 -07:00)
cli.py: add missing import (#56) (2022-05-04 07:42:20 -07:00)
dalle2_pytorch.py: normalize conditioning tokens outside of the cross attention blocks (2022-05-14 14:23:52 -07:00)
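The change this message describes is a normalization refactor: instead of every cross attention block re-normalizing the text conditioning tokens, the context is normalized once, outside the blocks. A hedged sketch of the pattern, with illustrative module names and shapes rather than the repository's exact code:

```python
# Normalize the conditioning context once, up front, instead of inside each
# cross-attention layer. Dimensions are arbitrary for illustration.
import torch
from torch import nn

class CrossAttention(nn.Module):
    def __init__(self, dim, context_dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)  # normalizes the queries only
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(context_dim, dim * 2, bias=False)
        self.to_out = nn.Linear(dim, dim, bias=False)

    def forward(self, x, context):
        x = self.norm(x)
        q = self.to_q(x)
        k, v = self.to_kv(context).chunk(2, dim=-1)
        attn = (q @ k.transpose(-2, -1) * q.shape[-1] ** -0.5).softmax(dim=-1)
        return self.to_out(attn @ v)

dim, context_dim = 64, 77
context_norm = nn.LayerNorm(context_dim)  # applied once, outside the blocks
blocks = nn.ModuleList([CrossAttention(dim, context_dim) for _ in range(2)])

x = torch.randn(1, 16, dim)
context = context_norm(torch.randn(1, 8, context_dim))  # normalized before any block
for block in blocks:
    x = x + block(x, context)
```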
optimizer.py: allow customizing the Adam eps (2022-05-14 13:55:04 -07:00)
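A hedged sketch of what a customizable Adam eps looks like in an optimizer factory; get_optimizer and its defaults here are illustrative, not the file's exact signature:

```python
# Optimizer factory with a tunable Adam epsilon. eps guards the denominator
# of Adam's update; raising it (e.g. to 1e-6) can stabilize mixed-precision
# training at the cost of slightly less adaptive steps.
import torch
from torch.optim import Adam

def get_optimizer(params, lr=1e-4, wd=1e-2, betas=(0.9, 0.99), eps=1e-8):
    return Adam(params, lr=lr, weight_decay=wd, betas=betas, eps=eps)

model = torch.nn.Linear(10, 10)
opt = get_optimizer(model.parameters(), eps=1e-6)
```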
tokenizer.py: bring in the simple tokenizer released by OpenAI, but also leave room for a custom tokenizer with YTTM (2022-04-12 09:23:17 -07:00)
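A usage sketch, assuming this module mirrors the SimpleTokenizer interface from lucidrains' DALLE-pytorch (a BPE tokenizer built on OpenAI's released vocabulary); the exact method names and defaults are assumptions:

```python
# Assumed interface: a module-level `tokenizer` instance whose tokenize()
# encodes a batch of captions into fixed-length LongTensors of BPE token
# ids, padded (and optionally truncated) to the text context length.
from dalle2_pytorch.tokenizer import tokenizer

tokens = tokenizer.tokenize(
    ['a puppy chasing a butterfly', 'cityscape at dusk'],
    context_length = 256,
    truncate_text = True
)
print(tokens.shape)  # e.g. (2, 256)
```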
train_vqgan_vae.py: make sure the VQGAN-VAE trainer supports mixed precision (2022-05-06 10:44:16 -07:00)
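Mixed precision support here means the standard torch.cuda.amp pattern: run the forward pass under autocast and scale the loss with GradScaler so fp16 gradients do not underflow. A self-contained sketch of that pattern, not the trainer's code; the model and data are stand-ins and a CUDA device is assumed:

```python
# Standard AMP training step: autocast the forward pass, scale the loss,
# then let the scaler unscale gradients and skip the step on inf/nan.
import torch
from torch.cuda.amp import autocast, GradScaler

model = torch.nn.Conv2d(3, 3, 3, padding=1).cuda()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
scaler = GradScaler()

images = torch.randn(8, 3, 64, 64, device='cuda')

with autocast():                  # forward pass runs in float16 where safe
    recon = model(images)
    loss = torch.nn.functional.mse_loss(recon, images)

scaler.scale(loss).backward()     # scale loss to avoid fp16 gradient underflow
scaler.step(opt)                  # unscales grads; skips the step if inf/nan found
scaler.update()
```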
train.py: handle gradient accumulation automatically when a max_batch_size is passed to the decoder or diffusion prior trainer forward (2022-05-14 17:04:09 -07:00)
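The idea behind this commit: when the trainer forward receives a max_batch_size, it splits the incoming batch into chunks of at most that size and accumulates gradients across them, so peak memory drops while the effective batch size is unchanged. A sketch of the mechanism under those assumptions, not the trainer's exact code:

```python
# Chunked forward/backward with gradient accumulation. Each chunk's mean
# loss is weighted by its share of the batch so the summed gradients match
# a single full-batch backward pass.
import torch

def accumulating_forward(model, loss_fn, batch, max_batch_size=None):
    if max_batch_size is None:
        loss = loss_fn(model(batch), batch)
        loss.backward()
        return loss.item()

    total = 0.0
    for chunk in batch.split(max_batch_size, dim=0):
        loss = loss_fn(model(chunk), chunk) * (chunk.shape[0] / batch.shape[0])
        loss.backward()  # gradients accumulate across chunks
        total += loss.item()
    return total         # equals the unchunked mean loss

model = torch.nn.Linear(16, 16)
batch = torch.randn(12, 16)
loss = accumulating_forward(model, torch.nn.functional.mse_loss, batch, max_batch_size=4)
```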
vqgan_vae.py: remove the ConvNeXt blocks; they are ill-suited for generative work, as validated by early experimental results at https://github.com/lucidrains/video-diffusion-pytorch (2022-05-05 07:07:21 -07:00)