aljaz / DALLE2-pytorch
Mirror of https://github.com/lucidrains/DALLE2-pytorch.git, synced 2025-12-19 09:44:19 +01:00
DALLE2-pytorch / dalle2_pytorch (tree at commit bb151ca6b16b17264250f5eeb275903dbbbe86ae)

Latest commit bb151ca6b1 by Phil Wang, 2022-05-16 09:17:17 -07:00:
unet_number on decoder trainer only needs to be passed in if there is greater than 1 unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally)
Name                    Last commit                   Last commit message
data/                   2022-04-12 09:23:17 -07:00    bring in the simple tokenizer released by openai, but also plan on leaving room for custom tokenizer with yttm
dataloaders/            2022-05-15 20:16:38 -07:00    Migrate to text-conditioned prior training (#95)
__init__.py             2022-05-16 09:17:17 -07:00    unet_number on decoder trainer only needs to be passed in if there is greater than 1 unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally)
cli.py                  2022-05-04 07:42:20 -07:00    add missing import (#56)
dalle2_pytorch.py       2022-05-15 19:09:38 -07:00    make sure classifier free guidance is used only if conditional dropout is present on the DiffusionPrior and Decoder classes. also make sure prior can have a different conditional scale than decoder
optimizer.py            2022-05-14 13:55:04 -07:00    be able to customize adam eps
tokenizer.py            2022-04-12 09:23:17 -07:00    bring in the simple tokenizer released by openai, but also plan on leaving room for custom tokenizer with yttm
trackers.py             2022-05-15 09:56:40 -07:00    experiment tracker agnostic
trainer.py              2022-05-16 09:17:17 -07:00    unet_number on decoder trainer only needs to be passed in if there is greater than 1 unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally)
vqgan_vae_trainer.py    2022-05-16 09:17:17 -07:00    unet_number on decoder trainer only needs to be passed in if there is greater than 1 unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally)
vqgan_vae.py            2022-05-05 07:07:21 -07:00    remove convnext blocks, they are ill-suited for generative work, validated by early experimental results at https://github.com/lucidrains/video-diffusion-pytorch
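
For context on the commit at the head of this tree (it touches __init__.py, trainer.py and vqgan_vae_trainer.py): when a Decoder wraps only one unet, the DecoderTrainer no longer requires a unet_number argument, the motivation being to make unconditional training of a single ddpm seamless. Below is a minimal sketch of that usage, modeled on the repository README; the constructor arguments are illustrative, and the exact DecoderTrainer call/update flow at this revision has not been verified against trainer.py.

```python
import torch
from dalle2_pytorch import CLIP, Unet, Decoder
from dalle2_pytorch.trainer import DecoderTrainer

# a (pre)trained CLIP, constructed as in the repository README
clip = CLIP(
    dim_text = 512, dim_image = 512, dim_latent = 512,
    num_text_tokens = 49408,
    text_enc_depth = 6, text_seq_len = 256, text_heads = 8,
    visual_enc_depth = 6, visual_image_size = 256, visual_patch_size = 32, visual_heads = 8
)

# a single unet -> a single-stage ddpm decoder
unet = Unet(
    dim = 128,
    image_embed_dim = 512,
    cond_dim = 128,
    dim_mults = (1, 2, 4, 8)
)

decoder = Decoder(
    unet = unet,        # only one unet registered on the decoder
    clip = clip,
    timesteps = 100
)

trainer = DecoderTrainer(decoder, lr = 3e-4, wd = 1e-2)

images = torch.randn(4, 3, 256, 256)

# with a single unet, unet_number can be omitted (the change in commit bb151ca6b1);
# with a cascade of unets it must be passed, e.g. trainer(images, unet_number = 1)
loss = trainer(images)
# note: depending on the revision, an explicit loss.backward() may be required here
trainer.update()        # step the optimizer and the EMA copy of that single unet
```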
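
Likewise, the dalle2_pytorch.py commit above ties classifier-free guidance to conditional dropout and allows the prior to use a different guidance scale than the decoder. A hedged sketch of how that surfaces at sampling time, again modeled on the README; the prior_cond_scale keyword in particular is an assumption based on later revisions and is not verified against this tree.

```python
from dalle2_pytorch import CLIP, Unet, Decoder, DiffusionPriorNetwork, DiffusionPrior, DALLE2

clip = CLIP(
    dim_text = 512, dim_image = 512, dim_latent = 512,
    num_text_tokens = 49408,
    text_enc_depth = 6, text_seq_len = 256, text_heads = 8,
    visual_enc_depth = 6, visual_image_size = 256, visual_patch_size = 32, visual_heads = 8
)

# classifier-free guidance only takes effect if the networks were trained with
# conditional dropout, i.e. a non-zero *_cond_drop_prob below
prior_network = DiffusionPriorNetwork(dim = 512, depth = 6, dim_head = 64, heads = 8)

diffusion_prior = DiffusionPrior(
    net = prior_network,
    clip = clip,
    timesteps = 100,
    cond_drop_prob = 0.2        # enables classifier-free guidance for the prior
)

unet = Unet(dim = 128, image_embed_dim = 512, cond_dim = 128, dim_mults = (1, 2, 4, 8))

decoder = Decoder(
    unet = unet,
    clip = clip,
    timesteps = 100,
    image_cond_drop_prob = 0.1, # enables classifier-free guidance for the decoder
    text_cond_drop_prob = 0.5
)

dalle2 = DALLE2(prior = diffusion_prior, decoder = decoder)

images = dalle2(
    ['cute puppy chasing after a squirrel'],
    cond_scale = 2.,            # guidance scale applied by the decoder
    prior_cond_scale = 1.       # assumed keyword: a separate guidance scale for the prior
)
```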