From 34bedec0440f3a43739c6b989ee8506379d4c13c Mon Sep 17 00:00:00 2001
From: Toran Bruce Richards
Date: Sun, 16 Apr 2023 18:54:56 +1200
Subject: [PATCH] Updates sponsors list
---
README.md | 71 +++++++++++++++++++++++++++++--------------------------
1 file changed, 37 insertions(+), 34 deletions(-)
diff --git a/README.md b/README.md
index cad5699f..ab1a1593 100644
--- a/README.md
+++ b/README.md
@@ -19,20 +19,25 @@ https://user-images.githubusercontent.com/22963551/228855501-2f5777cf-755b-4407-
💖 Help Fund Auto-GPT's Development 💖
-If you can spare a coffee, you can help to cover the API costs of developing Auto-GPT and help push the boundaries of fully autonomous AI!
-A full day of development can easily cost as much as $20 in API costs, which for a free project is quite limiting.
+If you can spare a coffee, you can help cover the costs of developing Auto-GPT and push the boundaries of fully autonomous AI!
Your support is greatly appreciated
+Development of this free, open-source project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below, click here.
-
- Development of this free, open-source project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below click here.
-
Individual Sponsors
+Enterprise Sponsors
+
+
+
+Monthly Sponsors
-
+
+
+
+
## Table of Contents
@@ -48,20 +53,19 @@ Your support is greatly appreciated
- [Docker](#docker)
- [Command Line Arguments](#command-line-arguments)
- [🗣️ Speech Mode](#️-speech-mode)
- - [List of IDs with names from eleven labs, you can use the name or ID:](#list-of-ids-with-names-from-eleven-labs-you-can-use-the-name-or-id)
- - [OpenAI API Keys Configuration](#openai-api-keys-configuration)
- [🔍 Google API Keys Configuration](#-google-api-keys-configuration)
- [Setting up environment variables](#setting-up-environment-variables)
- [Memory Backend Setup](#memory-backend-setup)
- [Redis Setup](#redis-setup)
- [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup)
- [Milvus Setup](#milvus-setup)
+ - [Setting up environment variables](#setting-up-environment-variables-1)
+ - [Setting Your Cache Type](#setting-your-cache-type)
- [View Memory Usage](#view-memory-usage)
- [🧠 Memory pre-seeding](#-memory-pre-seeding)
- [💀 Continuous Mode ⚠️](#-continuous-mode-️)
- [GPT3.5 ONLY Mode](#gpt35-only-mode)
- [🖼 Image Generation](#-image-generation)
- - [Selenium](#selenium)
- [⚠️ Limitations](#️-limitations)
- [🛡 Disclaimer](#-disclaimer)
- [🐦 Connect with Us on Twitter](#-connect-with-us-on-twitter)
@@ -263,18 +267,7 @@ export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
```
-## Memory Backend Setup
-
-By default, Auto-GPT is going to use LocalCache.
-To switch to either, change the `MEMORY_BACKEND` env variable to the value that you want:
-
-- `local` (default) uses a local JSON cache file
-- `pinecone` uses the Pinecone.io account you configured in your ENV settings
-- `redis` will use the redis cache that you configured
-- `milvus` will use the milvus that you configured
-
-### Redis Setup
-
+## Redis Setup
> _**CAUTION**_ \
This is not intended to be publicly accessible and lacks security measures. Therefore, avoid exposing Redis to the internet without a password or at all
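+
+Once Docker is installed (step 1 below), one way to honor the caution above is to require a password even for local use; a minimal sketch (the container name and password are illustrative, not from this repo):
+
+```
+# run Redis locally, refusing unauthenticated connections
+docker run -d --name autogpt-redis -p 6379:6379 redis redis-server --requirepass "change-me"
+```
+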
1. Install docker desktop
@@ -313,6 +306,20 @@ Pinecone enables the storage of vast amounts of vector-based memory, allowing fo
2. Choose the `Starter` plan to avoid being charged.
3. Find your API key and region under the default project in the left sidebar.
+### Milvus Setup
+
+[Milvus](https://milvus.io/) is an open-source, highly scalable vector database for storing huge amounts of vector-based memory and providing fast relevant search.
+
+- Set up a Milvus database, keeping your pymilvus version and your Milvus version the same to avoid compatibility issues:
+  - set it up yourself with the open-source [Install Milvus](https://milvus.io/docs/install_standalone-operator.md) guide,
+  - or use [Zilliz Cloud](https://zilliz.com/cloud).
+- Set `MILVUS_ADDR` in `.env` to your Milvus address, `host:port` (see the sketch after this list).
+- Set `MEMORY_BACKEND` in `.env` to `milvus` to enable Milvus as the backend.
+- Optional:
+  - set `MILVUS_COLLECTION` in `.env` to change the Milvus collection name; `autogpt` is the default.
+
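+A minimal `.env` sketch for the Milvus backend (the address and collection name are illustrative; a standalone Milvus listens on port 19530 by default):
+
+```
+# illustrative values; point MILVUS_ADDR at your own host:port
+MILVUS_ADDR=localhost:19530
+MILVUS_COLLECTION=autogpt
+MEMORY_BACKEND=milvus
+```
+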
+### Setting up environment variables
+
In the `.env` file set:
- `PINECONE_API_KEY`
- `PINECONE_ENV` (example: _"us-east4-gcp"_)
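+
+For example, the corresponding `.env` entries might look like this (a sketch; the key value is a placeholder):
+
+```
+# illustrative values; replace with your own key and region
+PINECONE_API_KEY=your-pinecone-api-key
+PINECONE_ENV=us-east4-gcp
+```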
@@ -336,17 +343,15 @@ export PINECONE_ENV="" # e.g: "us-east4-gcp"
export MEMORY_BACKEND="pinecone"
```
-### Milvus Setup
+## Setting Your Cache Type
-[Milvus](https://milvus.io/) is a open-source, high scalable vector database to storage huge amount of vector-based memory and provide fast relevant search.
+By default, Auto-GPT uses LocalCache instead of Redis or Pinecone.
-- setup milvus database, keep your pymilvus version and milvus version same to avoid compatible issues.
- - setup by open source [Install Milvus](https://milvus.io/docs/install_standalone-operator.md)
- - or setup by [Zilliz Cloud](https://zilliz.com/cloud)
-- set `MILVUS_ADDR` in `.env` to your milvus address `host:ip`.
-- set `MEMORY_BACKEND` in `.env` to `milvus` to enable milvus as backend.
-- optional
- - set `MILVUS_COLLECTION` in `.env` to change milvus collection name as you want, `autogpt` is the default name.
+To switch to a different backend, change the `MEMORY_BACKEND` env variable to the value that you want, as in the sketch below:
+
+* `local` (default) uses a local JSON cache file
+* `pinecone` uses the Pinecone.io account you configured in your ENV settings
+* `redis` uses the Redis cache that you configured
+* `milvus` uses the Milvus instance that you configured
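+
+For instance, to switch to the Redis backend (a minimal sketch, mirroring the export style used above):
+
+```
+export MEMORY_BACKEND="redis"
+```
+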
## View Memory Usage
@@ -355,8 +360,7 @@ export MEMORY_BACKEND="pinecone"
## 🧠 Memory pre-seeding
- python autogpt/data_ingestion.py -h
-
+# python autogpt/data_ingestion.py -h
usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]
Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.
@@ -369,8 +373,7 @@ options:
--overlap OVERLAP The overlap size between chunks when ingesting files (default: 200)
--max_length MAX_LENGTH The max_length of each chunk when ingesting files (default: 4000)
- python autogpt/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
-
+# python autogpt/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
This script, located at autogpt/data_ingestion.py, allows you to ingest files into memory and pre-seed it before running Auto-GPT.
Memory pre-seeding is a technique that involves ingesting relevant documents or data into the AI's memory so that it can use this information to generate more informed and accurate responses.
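+
+For instance, to ingest a single file rather than a directory (a sketch based on the usage string above; the path is hypothetical):
+
+```
+# (seed_data/notes.txt is a hypothetical example path)
+# python autogpt/data_ingestion.py --file seed_data/notes.txt --init
+```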