remove text masking altogether in favor of deriving it from the text encodings (padded positions in the text encodings must use a pad value of 0)

This commit is contained in:
Phil Wang
2022-07-12 15:40:31 -07:00
parent bb3ff0ac67
commit e76e89f9eb
4 changed files with 28 additions and 41 deletions

@@ -421,7 +421,7 @@ For the layperson, no worries, training will all be automated into a CLI tool, a
 ## Training on Preprocessed CLIP Embeddings
-It is likely, when scaling up, that you would first preprocess your images and text into corresponding embeddings before training the prior network. You can do so easily by simply passing in `image_embed`, `text_embed`, and optionally `text_encodings` and `text_mask`
+It is likely, when scaling up, that you would first preprocess your images and text into corresponding embeddings before training the prior network. You can do so easily by simply passing in `image_embed`, `text_embed`, and optionally `text_encodings`
 Working example below