diff --git a/README.md b/README.md
index f60aa9ff..07a088e3 100644
--- a/README.md
+++ b/README.md
@@ -1,15 +1,19 @@
# Auto-GPT: An Autonomous GPT-4 Experiment
+[CI](https://github.com/Significant-Gravitas/Auto-GPT/actions/workflows/ci.yml)
+[Discord](https://discord.gg/autogpt)
+[GitHub stars](https://github.com/Significant-Gravitas/Auto-GPT/stargazers)
+[Twitter](https://twitter.com/SigGravitas)
+
### 🔴 🔴 🔴  Urgent: USE `stable` not `master` 🔴 🔴 🔴
-> This information takes precedence over any other information provided and should be followed accordingly.
+**Download the latest `stable` release from here: https://github.com/Significant-Gravitas/Auto-GPT/releases/latest.**
+The `master` branch may often be in a **broken** state.
-Our workflow has been improved, but please note that `master` branch may often be in a **broken** state.
-Please download the latest `stable` release from here: https://github.com/Torantulino/Auto-GPT/releases/latest.
+
-
-[Twitter](https://twitter.com/SigGravitas)
-[Discord](https://discord.gg/autogpt)
-[CI](https://github.com/Significant-Gravitas/Auto-GPT/actions/workflows/ci.yml)
Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM "thoughts" to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.
@@ -37,42 +41,6 @@ Development of this free, open-source project is made possible by all the
-
-
-## Table of Contents
-
-- [Auto-GPT: An Autonomous GPT-4 Experiment](#auto-gpt-an-autonomous-gpt-4-experiment)
-  - [🔴 🔴 🔴  Urgent: USE `stable` not `master` 🔴 🔴 🔴](#----urgent-use-stable-not-master----)
-  - [Demo (30/03/2023):](#demo-30032023)
-  - [Table of Contents](#table-of-contents)
-  - [🚀 Features](#-features)
-  - [📋 Requirements](#-requirements)
-  - [💾 Installation](#-installation)
-  - [🔧 Usage](#-usage)
-    - [Logs](#logs)
-    - [Docker](#docker)
-    - [Command Line Arguments](#command-line-arguments)
-  - [🗣️ Speech Mode](#️-speech-mode)
-  - [🔍 Google API Keys Configuration](#-google-api-keys-configuration)
-    - [Setting up environment variables](#setting-up-environment-variables)
-  - [Memory Backend Setup](#memory-backend-setup)
-    - [Redis Setup](#redis-setup)
-    - [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup)
-    - [Milvus Setup](#milvus-setup)
-    - [Weaviate Setup](#weaviate-setup)
-      - [Setting up environment variables](#setting-up-environment-variables-1)
-    - [Setting Your Cache Type](#setting-your-cache-type)
-    - [View Memory Usage](#view-memory-usage)
-  - [🧠 Memory pre-seeding](#-memory-pre-seeding)
-  - [💀 Continuous Mode ⚠️](#-continuous-mode-️)
-    - [GPT3.5 ONLY Mode](#gpt35-only-mode)
-  - [🖼 Image Generation](#-image-generation)
-  - [⚠️ Limitations](#️-limitations)
-  - [🛡 Disclaimer](#-disclaimer)
-  - [🐦 Connect with Us on Twitter](#-connect-with-us-on-twitter)
-  - [Run tests](#run-tests)
-  - [Run linter](#run-linter)
-
## 🚀 Features

- 🌐 Internet access for searches and information gathering
@@ -83,16 +51,17 @@ Development of this free, open-source project is made possible by all the
-    # Replace string in angled brackets (<>) to your own ID
-    azure_model_map:
-        fast_llm_model_deployment_id: "<your-deployment-id>"
-        ...
-    ```
- - Details can be found here: https://pypi.org/project/openai/ in the `Microsoft Azure Endpoints` section and here: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line for the embedding model.
+ - See [OpenAI API Keys Configuration](#openai-api-keys-configuration) to obtain your OpenAI API key.
+ - Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
+ - If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and then follow these steps:
+ - Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all the deployment IDs for the relevant models in the `azure_model_map` section:
+ - `fast_llm_model_deployment_id` - your gpt-3.5-turbo or gpt-4 deployment ID
+ - `smart_llm_model_deployment_id` - your gpt-4 deployment ID
+ - `embedding_model_deployment_id` - your text-embedding-ada-002 v2 deployment ID
+ - Please specify all of these values as double-quoted strings
+    ```yaml
+    # Replace the string in angled brackets (<>) with your own ID
+    azure_model_map:
+        fast_llm_model_deployment_id: "<your-deployment-id>"
+        ...
+    ```
+ - Details can be found here: https://pypi.org/project/openai/ in the `Microsoft Azure Endpoints` section and here: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line for the embedding model.
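+  - For example, on Linux/macOS you can create `azure.yaml` from the template like this (on Windows, rename the file manually):
+    ```bash
+    cp azure.yaml.template azure.yaml   # then fill in your deployment IDs
+    ```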
## 🔧 Usage
1. Run the `autogpt` Python module in your terminal:
-```
-python -m autogpt
-```
+ ```
+ python -m autogpt
+ ```
2. After each action, choose from options to authorize command(s),
exit the program, or provide feedback to the AI.
@@ -175,30 +147,40 @@ python -m autogpt --debug
You can also build this into a Docker image and run it:
-```
+```bash
docker build -t autogpt .
docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/app/auto_gpt_workspace autogpt
```
-You can pass extra arguments, for instance, running with `--gpt3only` and `--continuous` mode:
+Or if you have `docker-compose`:
+```bash
+docker-compose run --build --rm auto-gpt
```
+
+You can pass extra arguments; for instance, to run with `--gpt3only` and `--continuous`:
+```bash
docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/app/auto_gpt_workspace autogpt --gpt3only --continuous
```
+Or, using `docker-compose`:
+```bash
+docker-compose run --build --rm auto-gpt --gpt3only --continuous
+```
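+
+Any of the command line arguments described below can be appended the same way; for example, to list all available flags from inside the container:
+```bash
+docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/app/auto_gpt_workspace autogpt --help
+```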
+
### Command Line Arguments
Here are some common arguments you can use when running Auto-GPT:
> Replace anything in angled brackets (<>) with a value you want to specify
+
* View all available command line arguments
-```bash
-python -m autogpt --help
-```
+ ```bash
+ python -m autogpt --help
+ ```
* Run Auto-GPT with a different AI Settings file
-```bash
-python -m autogpt --ai-settings <filename>
-```
+   ```bash
+   python -m autogpt --ai-settings <filename>
+   ```
* Specify a memory backend
-```bash
-python -m autogpt --use-memory <memory-backend>
-```
+   ```bash
+   python -m autogpt --use-memory <memory-backend>
+   ```
> **NOTE**: There are shorthands for some of these flags, for example `-m` for `--use-memory`. Use `python -m autogpt --help` for more information
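+
+For example, these two commands are equivalent:
+```bash
+python -m autogpt --use-memory redis
+python -m autogpt -m redis
+```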
@@ -285,30 +267,24 @@ To switch to either, change the `MEMORY_BACKEND` env variable to the value that
### Redis Setup
> _**CAUTION**_ \
This setup is not intended to be publicly accessible and lacks security measures. Avoid exposing Redis to the internet at all; at the very least, never without a password.
-1. Install docker desktop
-```bash
-docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
-```
-> See https://hub.docker.com/r/redis/redis-stack-server for setting a password and additional configuration.
+1. Install Docker (or Docker Desktop on Windows)
+2. Launch Redis container
+ ```bash
+ docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
+ ```
+ > See https://hub.docker.com/r/redis/redis-stack-server for setting a password and additional configuration.
+3. Configure the following settings in `.env`
+   > Replace **PASSWORD** in angled brackets (<>)
+   ```bash
+   MEMORY_BACKEND=redis
+   REDIS_HOST=localhost
+   REDIS_PORT=6379
+   REDIS_PASSWORD=<PASSWORD>
+   ```
-2. Set the following environment variables
-> Replace **PASSWORD** in angled brackets (<>)
-```bash
-MEMORY_BACKEND=redis
-REDIS_HOST=localhost
-REDIS_PORT=6379
-REDIS_PASSWORD=<PASSWORD>
-```
-You can optionally set
-
-```bash
-WIPE_REDIS_ON_START=False
-```
-
-To persist memory stored in Redis
+ You can optionally set `WIPE_REDIS_ON_START=False` to persist memory stored in Redis.
You can specify the memory index for Redis using the following:
-
```bash
MEMORY_INDEX=<WHATEVER_YOU_WANT>
```
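+
+Putting it together, a minimal Redis section of your `.env` might look like this (illustrative values; substitute your own password and index name):
+```bash
+MEMORY_BACKEND=redis
+REDIS_HOST=localhost
+REDIS_PORT=6379
+REDIS_PASSWORD=my-redis-password
+WIPE_REDIS_ON_START=False
+MEMORY_INDEX=autogpt
+```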
@@ -353,8 +329,9 @@ export MEMORY_BACKEND="pinecone"
- or set up with [Zilliz Cloud](https://zilliz.com/cloud)
- set `MILVUS_ADDR` in `.env` to your Milvus address `host:port`.
- set `MEMORY_BACKEND` in `.env` to `milvus` to enable Milvus as the backend.
-- optional
- - set `MILVUS_COLLECTION` in `.env` to change milvus collection name as you want, `autogpt` is the default name.
+
+**Optional:**
+- set `MILVUS_COLLECTION` in `.env` to change the Milvus collection name; `autogpt` is the default.
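+
+For example, a Milvus-backed `.env` might contain (illustrative values, assuming Milvus's default port 19530):
+```bash
+MEMORY_BACKEND=milvus
+MILVUS_ADDR=localhost:19530
+MILVUS_COLLECTION=autogpt
+```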
### Weaviate Setup
@@ -380,7 +357,7 @@ MEMORY_INDEX="Autogpt" # name of the index to create for the application
## View Memory Usage
-1. View memory usage by using the `--debug` flag :)
+View memory usage by using the `--debug` flag :)
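+
+For example:
+```bash
+python -m autogpt --debug
+```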
## 🧠 Memory pre-seeding
@@ -415,7 +392,7 @@ You can adjust the `max_length` and overlap parameters to fine-tune the way the
Memory pre-seeding is a technique for improving AI accuracy by ingesting relevant data into its memory. Chunks of data are split and added to memory, allowing the AI to access them quickly and generate more accurate responses. It's useful for large datasets or when specific information needs to be accessed quickly. Examples include ingesting API or GitHub documentation before running Auto-GPT.
-⚠️ If you use Redis as your memory, make sure to run Auto-GPT with the `WIPE_REDIS_ON_START` set to `False` in your `.env` file.
+⚠️ If you use Redis as your memory, make sure to run Auto-GPT with `WIPE_REDIS_ON_START=False` in your `.env` file.
⚠️ For other memory backends, we currently forcefully wipe the memory when starting Auto-GPT. To ingest data with those memory backends, you can call the `data_ingestion.py` script anytime during an Auto-GPT run.
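+
+For example, ingesting a documentation file might look like this (a sketch; the `--file` flag and exact option names are assumptions, so check `python data_ingestion.py --help` for the real interface):
+```bash
+python data_ingestion.py --file seed_data.txt --max_length 2000 --overlap 200
+```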
@@ -430,9 +407,9 @@ Use at your own risk.
1. Run the `autogpt` python module in your terminal:
-```bash
-python -m autogpt --speak --continuous
-```
+ ```bash
+ python -m autogpt --speak --continuous
+ ```
2. To exit the program, press Ctrl + C