mirror of
https://github.com/aljazceru/Auto-GPT.git
synced 2026-02-21 14:14:40 +01:00
README fix, update Table Of Contents, fix and better memory backend setup guide
This commit is contained in:
54
README.md
@@ -48,19 +48,20 @@ Your support is greatly appreciated

- [Docker](#docker)
- [Command Line Arguments](#command-line-arguments)
- [🗣️ Speech Mode](#️-speech-mode)
- [List of IDs with names from eleven labs, you can use the name or ID:](#list-of-ids-with-names-from-eleven-labs-you-can-use-the-name-or-id)
- [OpenAI API Keys Configuration](#openai-api-keys-configuration)
- [🔍 Google API Keys Configuration](#-google-api-keys-configuration)
- [Setting up environment variables](#setting-up-environment-variables)
- [Memory Backend Setup](#memory-backend-setup)
- [Redis Setup](#redis-setup)
- [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup)
- [Milvus Setup](#milvus-setup)
- [Setting up environment variables](#setting-up-environment-variables-1)
- [Setting Your Cache Type](#setting-your-cache-type)
- [View Memory Usage](#view-memory-usage)
- [🧠 Memory pre-seeding](#-memory-pre-seeding)
- [💀 Continuous Mode ⚠️](#-continuous-mode-️)
- [GPT3.5 ONLY Mode](#gpt35-only-mode)
- [🖼 Image Generation](#-image-generation)
- [Selenium](#selenium)
- [⚠️ Limitations](#️-limitations)
- [🛡 Disclaimer](#-disclaimer)
- [🐦 Connect with Us on Twitter](#-connect-with-us-on-twitter)
@@ -262,7 +263,18 @@ export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"

```shell
export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
```

## Redis Setup
## Memory Backend Setup

By default, Auto-GPT uses LocalCache.
To switch to a different backend, change the `MEMORY_BACKEND` env variable to the value you want:

- `local` (default) uses a local JSON cache file
- `pinecone` uses the Pinecone.io account you configured in your ENV settings
- `redis` will use the Redis cache that you configured
- `milvus` will use the Milvus database that you configured
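How Auto-GPT maps that variable to a backend is not shown in this diff; purely as illustration, the dispatch can be sketched as follows (the registry and the placeholder class names here are hypothetical, not Auto-GPT's real ones):

```python
import os

# Hypothetical registry for the four MEMORY_BACKEND values listed above;
# the factories just return placeholder names in this sketch.
BACKENDS = {
    "local": lambda: "LocalCache",
    "redis": lambda: "RedisMemory",
    "pinecone": lambda: "PineconeMemory",
    "milvus": lambda: "MilvusMemory",
}

def get_memory_backend():
    """Resolve MEMORY_BACKEND from the environment, defaulting to `local`."""
    name = os.getenv("MEMORY_BACKEND", "local")
    if name not in BACKENDS:
        raise ValueError(f"unknown MEMORY_BACKEND: {name!r}")
    return BACKENDS[name]()
```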

### Redis Setup

> _**CAUTION**_ \
This is not intended to be publicly accessible and lacks security measures. Therefore, avoid exposing your Redis instance to the internet without a password, or at all.

1. Install Docker Desktop.
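The remaining Redis steps are cut off by this hunk. For orientation, a typical `.env` fragment for this backend looks like the sketch below; only `MEMORY_BACKEND` appears in this diff, and the `REDIS_*` names are assumptions about Auto-GPT's config, so verify them against your `.env.template`:

```shell
# .env sketch for the Redis backend (REDIS_* names are assumed, verify locally)
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
```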
@@ -301,20 +313,6 @@ Pinecone enables the storage of vast amounts of vector-based memory, allowing fo

2. Choose the `Starter` plan to avoid being charged.
3. Find your API key and region under the default project in the left sidebar.

### Milvus Setup

[Milvus](https://milvus.io/) is an open-source, highly scalable vector database for storing huge amounts of vector-based memory and providing fast relevant search.

- Set up a Milvus database. Keep your pymilvus version and Milvus version the same to avoid compatibility issues.
  - Set it up with open source: [Install Milvus](https://milvus.io/docs/install_standalone-operator.md)
  - Or set it up with [Zilliz Cloud](https://zilliz.com/cloud)
- Set `MILVUS_ADDR` in `.env` to your Milvus address, `host:port`.
- Set `MEMORY_BACKEND` in `.env` to `milvus` to enable Milvus as the backend.
- Optional:
  - Set `MILVUS_COLLECTION` in `.env` to change the Milvus collection name if you want; `autogpt` is the default name.

### Setting up environment variables

In the `.env` file set:
- `PINECONE_API_KEY`
- `PINECONE_ENV` (example: _"us-east4-gcp"_)
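A small sketch of gathering those two variables before initializing the client (the `pinecone_settings` helper is illustrative; the commented `pinecone.init` call reflects the v2-era client API and is an assumption, so check your installed client version):

```python
import os

def pinecone_settings():
    """Collect the Pinecone variables named above from the environment."""
    return {
        "api_key": os.environ["PINECONE_API_KEY"],
        "environment": os.environ.get("PINECONE_ENV", "us-east4-gcp"),
    }

# With the pinecone client installed, initialization would then be roughly:
#   import pinecone
#   pinecone.init(**pinecone_settings())
```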
@@ -338,15 +336,17 @@ export PINECONE_ENV="<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"

```shell
export MEMORY_BACKEND="pinecone"
```

## Setting Your Cache Type
### Milvus Setup

By default, Auto-GPT uses LocalCache instead of Redis or Pinecone.
[Milvus](https://milvus.io/) is an open-source, highly scalable vector database for storing huge amounts of vector-based memory and providing fast relevant search.

To switch to a different backend, change the `MEMORY_BACKEND` env variable to the value you want:

* `local` (default) uses a local JSON cache file
* `pinecone` uses the Pinecone.io account you configured in your ENV settings
* `redis` will use the Redis cache that you configured

- Set up a Milvus database. Keep your pymilvus version and Milvus version the same to avoid compatibility issues.
  - Set it up with open source: [Install Milvus](https://milvus.io/docs/install_standalone-operator.md)
  - Or set it up with [Zilliz Cloud](https://zilliz.com/cloud)
- Set `MILVUS_ADDR` in `.env` to your Milvus address, `host:port`.
- Set `MEMORY_BACKEND` in `.env` to `milvus` to enable Milvus as the backend.
- Optional:
  - Set `MILVUS_COLLECTION` in `.env` to change the Milvus collection name if you want; `autogpt` is the default name.
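Since `MILVUS_ADDR` is a single `host:port` string, the connecting code has to split it first; a minimal sketch of that step (the helper name and the 19530 default port are illustrative assumptions, not from this diff):

```python
import os

def milvus_conn_params(addr=None):
    """Split a MILVUS_ADDR value like 'localhost:19530' into host/port kwargs."""
    addr = addr or os.getenv("MILVUS_ADDR", "localhost:19530")
    host, _, port = addr.partition(":")
    return {"host": host, "port": port or "19530"}

# With pymilvus installed you would then connect roughly like:
#   from pymilvus import connections
#   connections.connect(**milvus_conn_params())
```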
## View Memory Usage
@@ -355,7 +355,8 @@ To switch to either, change the `MEMORY_BACKEND` env variable to the value that

## 🧠 Memory pre-seeding

```shell
# python autogpt/data_ingestion.py -h
python autogpt/data_ingestion.py -h
```

```
usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]
```

Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.

@@ -368,7 +369,8 @@ options:

```
  --overlap OVERLAP        The overlap size between chunks when ingesting files (default: 200)
  --max_length MAX_LENGTH  The max_length of each chunk when ingesting files (default: 4000)
```
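The `--overlap` and `--max_length` options describe a sliding-window chunker; as a rough illustration of that idea (this helper is a sketch, not the actual code in `autogpt/data_ingestion.py`):

```python
def split_text(text, max_length=4000, overlap=200):
    """Chunk `text` into pieces of at most max_length characters, repeating
    `overlap` characters between consecutive chunks (the defaults above)."""
    if overlap >= max_length:
        raise ValueError("overlap must be smaller than max_length")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_length])
        if start + max_length >= len(text):
            break
        start += max_length - overlap
    return chunks
```

The overlap keeps context that straddles a chunk boundary available in both neighboring chunks, at the cost of some duplicated storage.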
```shell
# python autogpt/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
python autogpt/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
```

This script, located at `autogpt/data_ingestion.py`, allows you to ingest files into memory and pre-seed it before running Auto-GPT.

Memory pre-seeding is a technique that involves ingesting relevant documents or data into the AI's memory so that it can use this information to generate more informed and accurate responses.