use @clip-anytorch, thanks to @rom1504

Phil Wang
2022-04-30 06:40:54 -07:00
parent 0d1c07c803
commit e2f9615afa
3 changed files with 4 additions and 10 deletions

README.md

@@ -499,9 +499,7 @@ loss.backward()
Although there is the possibility they are using an unreleased, more powerful CLIP, you can use one of the released ones, if you do not wish to train your own CLIP from scratch. This will also allow the community to more quickly validate the conclusions of the paper.
-First you'll need to install <a href="https://github.com/openai/CLIP#usage">the prerequisites</a>
-Then to use a pretrained OpenAI CLIP, simply import `OpenAIClipAdapter` and pass it into the `DiffusionPrior` or `Decoder` like so
+To use a pretrained OpenAI CLIP, simply import `OpenAIClipAdapter` and pass it into the `DiffusionPrior` or `Decoder` like so
```python
import torch
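# -- below is a hedged sketch of how this (truncated) README snippet might continue,
# -- following the prior-training example earlier in the README; the DiffusionPriorNetwork /
# -- DiffusionPrior hyperparameters are illustrative only and are not part of this commit,
# -- and the mock inputs are sized for the ViT-B/32 checkpoint (77 tokens, 224px images)

from dalle2_pytorch import OpenAIClipAdapter, DiffusionPriorNetwork, DiffusionPrior

clip = OpenAIClipAdapter()  # defaults to the released 'ViT-B/32' weights

# mock text tokens and images
text = torch.randint(0, 49408, (4, 77))
images = torch.randn(4, 3, 224, 224)

prior_network = DiffusionPriorNetwork(
    dim = 512,        # matches the 512-dim embeddings of ViT-B/32
    depth = 6,
    dim_head = 64,
    heads = 8
)

diffusion_prior = DiffusionPrior(
    net = prior_network,
    clip = clip,          # the pretrained CLIP adapter is passed in here
    timesteps = 100,
    cond_drop_prob = 0.2
)

loss = diffusion_prior(text, images)
loss.backward()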

dalle2_pytorch/dalle2_pytorch.py

@@ -172,11 +172,7 @@ class OpenAIClipAdapter(BaseClipAdapter):
        self,
        name = 'ViT-B/32'
    ):
-        try:
-            import clip
-        except ImportError:
-            print('you must install openai clip in order to use this adapter - `pip install git+https://github.com/openai/CLIP.git` - more instructions at https://github.com/openai/CLIP#usage')
+        import clip
        openai_clip, _ = clip.load(name)
        super().__init__(openai_clip)
@@ -1636,4 +1632,3 @@ class DALLE2(nn.Module):
return images[0]
return images
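For context on the adapter change above: `clip-anytorch` (from @rom1504) is a pip-installable packaging of OpenAI's CLIP that exposes the same `clip` module, which is why the adapter can now do a bare `import clip` once the new dependency is installed. A minimal sketch of the interface the adapter relies on, with an illustrative prompt (not part of this commit):

```python
import torch
import clip  # provided by the clip-anytorch package (or openai/CLIP installed from git)

# load a released checkpoint; returns the model and its image preprocessing transform
model, preprocess = clip.load('ViT-B/32', device = 'cpu')

text = clip.tokenize(['a corgi wearing a party hat'])

with torch.no_grad():
    text_embed = model.encode_text(text)  # (1, 512) for ViT-B/32

print(text_embed.shape)
```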

setup.py

@@ -10,7 +10,7 @@ setup(
'dream = dalle2_pytorch.cli:dream'
],
},
-version = '0.0.72',
+version = '0.0.73',
license='MIT',
description = 'DALL-E 2',
author = 'Phil Wang',
@@ -23,6 +23,7 @@ setup(
],
install_requires=[
'click',
+'clip-anytorch',
'einops>=0.4',
'einops-exts>=0.0.3',
'kornia>=0.5.4',
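
With `clip-anytorch` declared in `install_requires`, installing the package should pull in an importable `clip` module automatically, without the manual `pip install git+https://github.com/openai/CLIP.git` step the old adapter message pointed to. A quick, hypothetical sanity check after install (not part of this commit):

```python
# hypothetical post-install sanity check that the pip-provided clip module resolves
import clip

# should list the released checkpoints, including 'ViT-B/32', the adapter's default
print(clip.available_models())
```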