Merge pull request #1719 from Ding3LI/master

Add missing clarifications and method usages
Richard Beales (committed by GitHub)
2023-04-16 07:01:20 +01:00
2 changed files with 8 additions and 5 deletions

@@ -50,7 +50,10 @@ SMART_TOKEN_LIMIT=8000
 ### MEMORY
 ################################################################################
-# MEMORY_BACKEND - Memory backend type (Default: local)
+### MEMORY_BACKEND - Memory backend type
+# local - Default
+# pinecone - Pinecone (if configured)
+# redis - Redis (if configured)
 MEMORY_BACKEND=local
 ### PINECONE
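
For context on the env-template change above: a switch like `MEMORY_BACKEND` is typically read once at startup, falling back to the documented default. The sketch below is illustrative only; the helper name is hypothetical and this is not Auto-GPT's actual config code:

```python
import os

# Hypothetical helper: reads the MEMORY_BACKEND switch documented in the
# template above, defaulting to "local" when the variable is unset.
def get_memory_backend() -> str:
    backend = os.getenv("MEMORY_BACKEND", "local")
    if backend not in ("local", "pinecone", "redis"):
        raise ValueError(f"unsupported MEMORY_BACKEND: {backend!r}")
    return backend

if __name__ == "__main__":
    print(get_memory_backend())  # prints "local" unless MEMORY_BACKEND is set
```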

@@ -132,8 +132,8 @@ pip install -r requirements.txt
 - `smart_llm_model_deployment_id` - your gpt-4 deployment ID
 - `embedding_model_deployment_id` - your text-embedding-ada-002 v2 deployment ID
 - Please specify all of these values as double-quoted strings
+> Replace string in angled brackets (<>) to your own ID
 ```yaml
-# Replace string in angled brackets (<>) to your own ID
 azure_model_map:
     fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
     ...
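
The YAML in this hunk is a map of deployment IDs. For readers unfamiliar with how such a map would be consumed, here is a minimal sketch using PyYAML on an inline document shaped like the snippet above; it is not Auto-GPT's actual loader:

```python
import yaml  # PyYAML: pip install pyyaml

# Illustrative only: parse a document shaped like the azure_model_map
# block in the diff above, then look up one deployment ID by key.
doc = """
azure_model_map:
    fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
    smart_llm_model_deployment_id: "<my-smart-llm-deployment-id>"
    embedding_model_deployment_id: "<my-embedding-deployment-id>"
"""
config = yaml.safe_load(doc)
fast_id = config["azure_model_map"]["fast_llm_model_deployment_id"]
print(fast_id)  # "<my-fast-llm-deployment-id>" until replaced with a real ID
```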
@@ -344,9 +344,9 @@ By default, Auto-GPT is going to use LocalCache instead of redis or Pinecone.
 To switch to either, change the `MEMORY_BACKEND` env variable to the value that you want:
-`local` (default) uses a local JSON cache file
-`pinecone` uses the Pinecone.io account you configured in your ENV settings
-`redis` will use the redis cache that you configured
+* `local` (default) uses a local JSON cache file
+* `pinecone` uses the Pinecone.io account you configured in your ENV settings
+* `redis` will use the redis cache that you configured
 ## View Memory Usage
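
A backend list like the one above is usually wired up through a simple name-to-class lookup. A minimal sketch of that dispatch, assuming placeholder classes that are not Auto-GPT's real memory implementations:

```python
# Placeholder backend classes; in a real system each would implement
# the same storage interface (add, get, clear, ...).
class LocalCache: ...
class PineconeMemory: ...
class RedisMemory: ...

BACKENDS = {
    "local": LocalCache,         # JSON cache file on disk
    "pinecone": PineconeMemory,  # requires Pinecone credentials in the env
    "redis": RedisMemory,        # requires a running redis instance
}

def make_memory(backend: str = "local"):
    # Resolve the backend name set via MEMORY_BACKEND to a class and
    # instantiate it; unknown names fail loudly.
    try:
        return BACKENDS[backend]()
    except KeyError:
        raise ValueError(f"unknown memory backend: {backend!r}") from None
```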