aljaz / DALLE2-pytorch
Mirror of https://github.com/lucidrains/DALLE2-pytorch.git, synced 2025-12-20 02:04:19 +01:00
DALLE2-pytorch / dalle2_pytorch at commit bb86ab2404ab8df149b9e083dcc05fe1068416b1

Latest commit by Phil Wang (bb86ab2404): update sample, and set default gradient clipping value for decoder training (2022-05-16 17:38:30 -07:00)
data: bring in the simple tokenizer released by openai, but also plan on leaving room for a custom tokenizer with yttm (2022-04-12 09:23:17 -07:00)

dataloaders: allow for overriding use of EMA during sampling in the decoder trainer with the use_non_ema keyword; also fix some issues with automatic normalization of images and of the low-res conditioning image when latent diffusion is in play (2022-05-16 11:18:30 -07:00)

__init__.py: unet_number on the decoder trainer only needs to be passed in if there is more than one unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally) (2022-05-16 09:17:17 -07:00)

cli.py: add missing import (#56) (2022-05-04 07:42:20 -07:00)

dalle2_pytorch.py: allow for overriding use of EMA during sampling in the decoder trainer with the use_non_ema keyword; also fix some issues with automatic normalization of images and of the low-res conditioning image when latent diffusion is in play (2022-05-16 11:18:30 -07:00)

optimizer.py: default the decoder learning rate to what was in the paper (2022-05-16 13:33:54 -07:00)

tokenizer.py: bring in the simple tokenizer released by openai, but also plan on leaving room for a custom tokenizer with yttm (2022-04-12 09:23:17 -07:00)

trackers.py: make experiment tracking tracker-agnostic (2022-05-15 09:56:40 -07:00)

trainer.py: update sample, and set default gradient clipping value for decoder training (2022-05-16 17:38:30 -07:00)

vqgan_vae_trainer.py: unet_number on the decoder trainer only needs to be passed in if there is more than one unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally) (2022-05-16 09:17:17 -07:00)

vqgan_vae.py: remove convnext blocks; they are ill-suited for generative work, validated by early experimental results at https://github.com/lucidrains/video-diffusion-pytorch (2022-05-05 07:07:21 -07:00)