DALLE2-pytorch
mirror of https://github.com/lucidrains/DALLE2-pytorch.git

Files in dalle2_pytorch/ at commit 164d9be444ddbf6f61203e3e1090947351da638f

Latest commit 164d9be444 by Phil Wang (2022-05-16 12:34:28 -07:00):
use a decorator and take care of sampling in chunks (max_batch_size keyword), in case one is sampling a huge grid of images

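The chunking this message describes can be sketched as a decorator that splits an oversized batch, runs the wrapped sampling function on each piece, and concatenates the results. This is a minimal illustration, not the repository's actual helper; the name chunked and the argument layout are assumptions:

    import torch
    from functools import wraps

    def chunked(fn):
        # split the first tensor argument into chunks of at most
        # max_batch_size rows, run fn on each chunk, and concatenate,
        # so a huge grid of images never has to fit in memory at once
        @wraps(fn)
        def inner(self, x, *args, max_batch_size = None, **kwargs):
            if max_batch_size is None:
                return fn(self, x, *args, **kwargs)
            outs = [fn(self, chunk, *args, **kwargs) for chunk in x.split(max_batch_size, dim = 0)]
            return torch.cat(outs, dim = 0)
        return inner

    class Sampler:
        @chunked
        def sample(self, noise):
            return noise * 2  # stand-in for an expensive denoising loop

    out = Sampler().sample(torch.randn(1000, 3), max_batch_size = 64)  # runs in 16 chunks
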
data/ (last commit 2022-04-12 09:23:17 -07:00)
    bring in the simple tokenizer released by openai, but also plan on leaving room for custom tokenizer with yttm

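The simple tokenizer referenced here is OpenAI's BPE tokenizer from CLIP. Below is a minimal sketch of the fixed-length padding step such a wrapper typically performs; the function name and the token ids are illustrative assumptions, not taken from the file:

    import torch

    def pad_tokens(token_ids_list, context_length = 256):
        # left-align variable-length BPE id sequences into one
        # zero-padded LongTensor of shape (batch, context_length)
        out = torch.zeros(len(token_ids_list), context_length, dtype = torch.long)
        for i, ids in enumerate(token_ids_list):
            assert len(ids) <= context_length, 'text too long for context length'
            out[i, :len(ids)] = torch.tensor(ids)
        return out

    batch = pad_tokens([[49406, 320, 1125, 539, 320, 1929, 49407]])  # placeholder BPE ids
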
dataloaders/ (last commit 2022-05-16 11:18:30 -07:00)
    allow for overriding use of EMA during sampling in decoder trainer with use_non_ema keyword, also fix some issues with automatic normalization of images and low res conditioning image if latent diffusion is in play

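A minimal sketch of what a use_non_ema override could look like inside a trainer, with attribute names chosen for illustration rather than taken from the file:

    import copy
    import torch.nn as nn

    class TrainerSketch:
        def __init__(self, unet: nn.Module):
            self.unet = unet                     # online weights, updated every step
            self.ema_unet = copy.deepcopy(unet)  # smoothed copy, updated elsewhere

        def sampling_unet(self, use_non_ema = False):
            # sample with the EMA weights by default; use_non_ema = True
            # forces the raw online weights instead
            return self.unet if use_non_ema else self.ema_unet
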
__init__.py (last commit 2022-05-16 09:17:17 -07:00)
    unet_number on decoder trainer only needs to be passed in if there is greater than 1 unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally)

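The defaulting logic this message describes might look like the sketch below; the helper name is hypothetical:

    def resolve_unet_number(num_unets, unet_number = None):
        # with a single unet the index may be omitted, so unconditional
        # training of one ddpm needs no extra argument
        if unet_number is None:
            assert num_unets == 1, 'unet_number is required when there is more than one unet'
            unet_number = 1
        return unet_number

    resolve_unet_number(1)                   # -> 1
    resolve_unet_number(3, unet_number = 2)  # -> 2
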
cli.py (last commit 2022-05-04 07:42:20 -07:00)
    add missing import (#56)

dalle2_pytorch.py (last commit 2022-05-16 11:18:30 -07:00)
    allow for overriding use of EMA during sampling in decoder trainer with use_non_ema keyword, also fix some issues with automatic normalization of images and low res conditioning image if latent diffusion is in play

optimizer.py (last commit 2022-05-14 13:55:04 -07:00)
    be able to customize adam eps

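A sketch of an optimizer factory that exposes eps; the signature is an assumption for illustration, not the file's actual one:

    from torch.optim import Adam, AdamW

    def get_optimizer(params, lr = 1e-4, wd = 0.0, betas = (0.9, 0.99), eps = 1e-8):
        # eps is exposed so it can be raised (e.g. to 1e-6) when training
        # in mixed precision, where the 1e-8 default can underflow in fp16
        if wd == 0:
            return Adam(params, lr = lr, betas = betas, eps = eps)
        return AdamW(params, lr = lr, weight_decay = wd, betas = betas, eps = eps)
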
tokenizer.py (last commit 2022-04-12 09:23:17 -07:00)
    bring in the simple tokenizer released by openai, but also plan on leaving room for custom tokenizer with yttm

trackers.py (last commit 2022-05-15 09:56:40 -07:00)
    experiment tracker agnostic

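"Tracker agnostic" usually means the training loop logs through an abstract interface rather than calling one tracker directly. A minimal sketch of that pattern, with hypothetical class names:

    from abc import ABC, abstractmethod

    class BaseTracker(ABC):
        # the training loop talks only to this interface, so wandb,
        # tensorboard, or a plain console logger can be swapped in freely
        @abstractmethod
        def log(self, metrics: dict, step: int): ...

    class ConsoleTracker(BaseTracker):
        def log(self, metrics, step):
            print(f'step {step}: {metrics}')

    tracker: BaseTracker = ConsoleTracker()
    tracker.log({'loss': 0.42}, step = 100)
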
trainer.py (last commit 2022-05-16 12:34:28 -07:00)
    use a decorator and take care of sampling in chunks (max_batch_size keyword), in case one is sampling a huge grid of images

vqgan_vae_trainer.py (last commit 2022-05-16 09:17:17 -07:00)
    unet_number on decoder trainer only needs to be passed in if there is greater than 1 unet, so that unconditional training of a single ddpm is seamless (experiment in progress locally)

vqgan_vae.py (last commit 2022-05-05 07:07:21 -07:00)
    remove convnext blocks, they are ill-suited for generative work, validated by early experimental results at https://github.com/lucidrains/video-diffusion-pytorch

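With the ConvNeXt blocks removed, the likely remaining building block is a plain residual block; the sketch below is an assumption about that style of block, not a quote from this file:

    import torch.nn as nn

    class ResnetBlock(nn.Module):
        # plain residual block of the kind typically used in VQGAN
        # encoders/decoders once the convnext variants are dropped
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.GroupNorm(1, dim),
                nn.SiLU(),
                nn.Conv2d(dim, dim, 3, padding = 1),
                nn.GroupNorm(1, dim),
                nn.SiLU(),
                nn.Conv2d(dim, dim, 3, padding = 1),
            )

        def forward(self, x):
            return self.net(x) + x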