aljaz / DALLE2-pytorch
Mirror of https://github.com/lucidrains/DALLE2-pytorch.git, last synced 2025-12-19 17:54:20 +01:00.
Files in DALLE2-pytorch/dalle2_pytorch at commit 13382885d94c45cb6b4fee0c788d3d6f99a1086d.
Latest commit 13382885d9 by Phil Wang, 2022-05-16 12:57:31 -07:00: "final update to the dalle2 repository for a while; sampling from the prior in chunks automatically when the max_batch_size keyword is given" (see the sketch below).
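The max_batch_size behavior is easy to picture with a small sketch. This is not the repository's actual implementation; sample_in_chunks and sample_fn are illustrative names, assuming only that sampling maps a batch of conditioning tensors to a batch of outputs.

```python
import torch

def sample_in_chunks(sample_fn, cond, max_batch_size = None):
    # No cap given: sample the whole batch in one forward pass.
    if max_batch_size is None:
        return sample_fn(cond)

    # Otherwise split the batch along dim 0 into chunks of at most
    # max_batch_size, sample each chunk, and stitch the results back together.
    outputs = [sample_fn(chunk) for chunk in cond.split(max_batch_size, dim = 0)]
    return torch.cat(outputs, dim = 0)

# e.g. image_embeds = sample_in_chunks(prior.sample, text_embeds, max_batch_size = 4)
# (prior.sample is only a placeholder for whatever sampling callable is used)
```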
| Name | Last commit | Date |
| --- | --- | --- |
| data | bring in the simple tokenizer released by openai, but also plan on leaving room for a custom tokenizer with yttm | 2022-04-12 09:23:17 -07:00 |
| dataloaders | allow overriding the use of EMA during sampling in the decoder trainer with the use_non_ema keyword (sketch below); also fix some issues with automatic normalization of images and the low-resolution conditioning image when latent diffusion is in play | 2022-05-16 11:18:30 -07:00 |
| __init__.py | unet_number on the decoder trainer only needs to be passed in if there is more than one unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally; sketch below) | 2022-05-16 09:17:17 -07:00 |
| cli.py | add missing import (#56) | 2022-05-04 07:42:20 -07:00 |
| dalle2_pytorch.py | allow overriding the use of EMA during sampling in the decoder trainer with the use_non_ema keyword (sketch below); also fix some issues with automatic normalization of images and the low-resolution conditioning image when latent diffusion is in play | 2022-05-16 11:18:30 -07:00 |
| optimizer.py | be able to customize Adam's eps (sketch below) | 2022-05-14 13:55:04 -07:00 |
| tokenizer.py | bring in the simple tokenizer released by openai, but also plan on leaving room for a custom tokenizer with yttm (sketch below) | 2022-04-12 09:23:17 -07:00 |
| trackers.py | make experiment tracking agnostic to the tracker backend (sketch below) | 2022-05-15 09:56:40 -07:00 |
| trainer.py | final update to the dalle2 repository for a while; sampling from the prior in chunks automatically when the max_batch_size keyword is given (sketch above) | 2022-05-16 12:57:31 -07:00 |
| vqgan_vae_trainer.py | unet_number on the decoder trainer only needs to be passed in if there is more than one unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally) | 2022-05-16 09:17:17 -07:00 |
| vqgan_vae.py | remove convnext blocks; they are ill-suited for generative work, validated by early experimental results at https://github.com/lucidrains/video-diffusion-pytorch | 2022-05-05 07:07:21 -07:00 |
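The use_non_ema keyword mentioned for dataloaders and dalle2_pytorch.py above is easiest to picture as a switch between two sets of weights: sampling normally uses the exponential-moving-average copy, but can be told to use the online (training) weights instead. A minimal sketch under assumed names (EMAWrapper and its methods are illustrative, not the repository's actual API; the only assumption is that the wrapped model exposes a sample method):

```python
import copy
import torch

class EMAWrapper(torch.nn.Module):
    # Holds a frozen moving-average copy of a model alongside the online one.
    def __init__(self, model, beta = 0.995):
        super().__init__()
        self.online_model = model
        self.ema_model = copy.deepcopy(model)
        self.ema_model.requires_grad_(False)
        self.beta = beta

    @torch.no_grad()
    def update(self):
        # Nudge each EMA parameter a small step toward its online counterpart.
        for ema_p, p in zip(self.ema_model.parameters(), self.online_model.parameters()):
            ema_p.lerp_(p, 1. - self.beta)

    def sample(self, *args, use_non_ema = False, **kwargs):
        # use_non_ema = True bypasses the averaged weights entirely
        # (assumes the wrapped model defines its own sample method).
        model = self.online_model if use_non_ema else self.ema_model
        return model.sample(*args, **kwargs)
```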
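The unet_number change referenced for __init__.py and vqgan_vae_trainer.py is argument defaulting: with a single unet there is nothing to select, so the trainer can infer the value itself. A hypothetical sketch of that check (resolve_unet_number is an assumed name):

```python
def resolve_unet_number(unet_number, num_unets):
    # With one unet the choice is unambiguous, so the caller may omit it;
    # with several unets in a cascade, require an explicit index.
    if unet_number is None:
        assert num_unets == 1, 'unet_number must be given when training more than one unet'
        unet_number = 1
    assert 1 <= unet_number <= num_unets
    return unet_number
```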
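Customizing Adam's eps, as the optimizer.py commit describes, amounts to exposing a single knob. A sketch of an optimizer factory (the function name and default values are assumptions, not the repository's actual ones):

```python
from torch.optim import Adam

def get_optimizer(params, lr = 1e-4, betas = (0.9, 0.99), eps = 1e-8, weight_decay = 0.0):
    # eps sits inside Adam's update denominator; raising it (e.g. to 1e-6)
    # can help numerical stability under mixed-precision training.
    return Adam(params, lr = lr, betas = betas, eps = eps, weight_decay = weight_decay)
```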
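"Leaving room for a custom tokenizer" typically means coding against a small interface rather than one concrete tokenizer; openai's simple BPE tokenizer and a youtokentome (yttm) model could then both sit behind the same seam. A hypothetical sketch of such an interface:

```python
from abc import ABC, abstractmethod

class Tokenizer(ABC):
    # The minimal contract a trainer needs: text to token ids, and back.
    @abstractmethod
    def encode(self, text: str) -> list[int]:
        ...

    @abstractmethod
    def decode(self, token_ids: list[int]) -> str:
        ...
```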
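"Experiment tracker agnostic" suggests the same pattern for logging: trainers call a small base class, and concrete backends (wandb, tensorboard, a plain console logger) subclass it. A hypothetical sketch, not the trackers.py API:

```python
class BaseTracker:
    # Trainers call log(); each backend decides where the metrics go.
    def log(self, metrics: dict, step: int) -> None:
        raise NotImplementedError

class ConsoleTracker(BaseTracker):
    # The simplest possible backend: print metrics to stdout.
    def log(self, metrics: dict, step: int) -> None:
        formatted = ', '.join(f'{name}: {value}' for name, value in metrics.items())
        print(f'[step {step}] {formatted}')
```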