* Fix docs

* Add short section about testing to contribution guide

* Add back note for voice configuration

* Remove LICENSE symlink from docs/

* Fix site_url in mkdocs.yml
Reinier van der Leer
2023-04-26 20:14:14 +02:00
committed by GitHub
parent 109fa04c7c
commit 76df14b831
14 changed files with 522 additions and 381 deletions

CODE_OF_CONDUCT.md

@@ -1,4 +1,4 @@
# Code of Conduct for Auto-GPT

## 1. Purpose

@@ -37,4 +37,3 @@ This Code of Conduct is adapted from the [Contributor Covenant](https://www.cont

## 6. Contact

If you have any questions or concerns, please contact the project maintainers.

CONTRIBUTING.md

@@ -4,21 +4,11 @@ First of all, thank you for considering contributing to our project! We apprecia
This document provides guidelines and best practices to help you contribute effectively.
## Table of Contents
- [Code of Conduct](#code-of-conduct)
- [Getting Started](#getting-started)
- [How to Contribute](#how-to-contribute)
- [Reporting Bugs](#reporting-bugs)
- [Suggesting Enhancements](#suggesting-enhancements)
- [Submitting Pull Requests](#submitting-pull-requests)
- [Style Guidelines](#style-guidelines)
- [Code Formatting](#code-formatting)
- [Pre-Commit Hooks](#pre-commit-hooks)
## Code of Conduct

By participating in this project, you agree to abide by our [Code of Conduct]. Please read it to understand the expectations we have for everyone who contributes to this project.

[Code of Conduct]: https://significant-gravitas.github.io/Auto-GPT/code-of-conduct.md

## 📢 A Quick Word

Right now we are not accepting any contributions that add non-essential commands to Auto-GPT.
@@ -84,6 +74,7 @@ isort .
```

### Pre-Commit Hooks

We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps:

Install the pre-commit package using pip:
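A typical sequence looks like this (a sketch, assuming the repository ships a `.pre-commit-config.yaml`):

``` shell
pip install pre-commit        # install the tool
pre-commit install            # register the git hook
pre-commit run --all-files    # optionally run all hooks once
```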
@@ -103,7 +94,14 @@ If you encounter any issues or have questions, feel free to reach out to the mai
Happy coding, and once again, thank you for your contributions!

Maintainers will look at PRs that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here:
https://github.com/Significant-Gravitas/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-label%3Aconflicts

## Testing your changes

If you add or change code, make sure the updated code is covered by tests.
To increase coverage if necessary, [write tests using `pytest`].
For more info on running tests, please refer to ["Running tests"](https://significant-gravitas.github.io/Auto-GPT/testing/).
[write tests using `pytest`]: https://realpython.com/pytest-python-testing/

README.md

@@ -89,28 +89,20 @@ Your support is greatly appreciated. Development of this free, open-source proje
- 🗃️ File storage and summarization with GPT-3.5
- 🔌 Extensibility with Plugins
## 📋 Requirements
Choose an environment to run Auto-GPT in (pick one):
- [Docker](https://docs.docker.com/get-docker/) (*recommended*)
- Python 3.10 or later (instructions: [for Windows](https://www.tutorialspoint.com/how-to-install-python-in-windows))
- [VSCode + devcontainer](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
## Quickstart

1. Get an OpenAI [API Key](https://platform.openai.com/account/api-keys)
2. Download the [latest release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest)
3. Follow the [installation instructions][docs/setup]
4. Configure any additional features you want, or install some [plugins][docs/plugins]
5. [Run][docs/usage] the app

Please see the [documentation][docs] for full setup instructions and configuration options.

[docs]: https://significant-gravitas.github.io/Auto-GPT/

## 📖 Documentation

* [⚙️ Setup][docs/setup]
* [💻 Usage][docs/usage]
* [🔌 Plugins][docs/plugins]
* Configuration

@@ -119,7 +111,7 @@ Please see the [documentation][docs] linked below for full setup instructions an

  * [🗣️ Voice (TTS)](https://significant-gravitas.github.io/Auto-GPT/configuration/voice/)
  * [🖼️ Image Generation](https://significant-gravitas.github.io/Auto-GPT/configuration/imagegen/)

[docs/setup]: https://significant-gravitas.github.io/Auto-GPT/setup/
[docs/usage]: https://significant-gravitas.github.io/Auto-GPT/usage/
[docs/plugins]: https://significant-gravitas.github.io/Auto-GPT/plugins/

docs/LICENSE (deleted symlink)

@@ -1 +0,0 @@
../LICENSE

docs/configuration/imagegen.md

@@ -1,14 +1,58 @@
# 🖼 Image Generation configuration

| Config variable  | Values                          |                      |
| ---------------- | ------------------------------- | -------------------- |
| `IMAGE_PROVIDER` | `dalle` `huggingface` `sdwebui` | **default: `dalle`** |

## DALL-e
In `.env`, make sure `IMAGE_PROVIDER` is commented (or set to `dalle`):
``` ini
# IMAGE_PROVIDER=dalle # this is the default
```
Further optional configuration:
| Config variable | Values | |
| ---------------- | ------------------ | -------------- |
| `IMAGE_SIZE` | `256` `512` `1024` | default: `256` |
## Hugging Face
To use text-to-image models from Hugging Face, you need a Hugging Face API token.
Link to the appropriate settings page: [Hugging Face > Settings > Tokens](https://huggingface.co/settings/tokens)
Once you have an API token, uncomment and adjust these variables in your `.env`:
``` ini
IMAGE_PROVIDER=huggingface
HUGGINGFACE_API_TOKEN=your-huggingface-api-token
```
Further optional configuration:
| Config variable | Values | |
| ------------------------- | ---------------------- | ---------------------------------------- |
| `HUGGINGFACE_IMAGE_MODEL` | see [available models] | default: `CompVis/stable-diffusion-v1-4` |
[available models]: https://huggingface.co/models?pipeline_tag=text-to-image
## Stable Diffusion WebUI
It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT:
``` ini
IMAGE_PROVIDER=sdwebui
```
!!! note
    Make sure you are running WebUI with `--api` enabled.
Further optional configuration:
| Config variable | Values | |
| --------------- | ----------------------- | -------------------------------- |
| `SD_WEBUI_URL` | URL to your WebUI | default: `http://127.0.0.1:7860` |
| `SD_WEBUI_AUTH` | `{username}:{password}` | *Note: do not copy the braces!* |
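For example, if you self-host [AUTOMATIC1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui), a launch command might look like this (a sketch; these flags belong to that project, not to Auto-GPT):

``` shell
./webui.sh --api --listen --port 7860
```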
## Selenium

``` shell
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>

docs/configuration/memory.md

@@ -1,10 +1,12 @@
## Setting Your Cache Type

By default, an Auto-GPT instance set up with Docker Compose will use Redis as its memory backend.
Otherwise, the default is LocalCache (which stores memory in a JSON file).

To switch to a different backend, change the `MEMORY_BACKEND` in `.env`
to the value that you want:

* `local` uses a local JSON cache file
* `pinecone` uses the Pinecone.io account you configured in your ENV settings
* `redis` will use the redis cache that you configured
* `milvus` will use the milvus cache that you configured
@@ -20,32 +22,39 @@ Links to memory backends
- [Weaviate](https://weaviate.io)

### Redis Setup
> _**CAUTION**_ \
This is not intended to be publicly accessible and lacks security measures. Therefore, avoid exposing Redis to the internet without a password or at all
1. Install docker (or Docker Desktop on Windows).
2. Launch Redis container.
!!! important
    If you have set up Auto-GPT using Docker Compose, then Redis is included, no further
    setup needed.

!!! caution
    This setup is not intended to be publicly accessible and lacks security measures.
    Avoid exposing Redis to the internet without a password or at all!

1. Launch Redis container

        :::shell
        docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest

2. Set the following settings in `.env`

        :::ini
        MEMORY_BACKEND=redis
        REDIS_HOST=localhost
        REDIS_PORT=6379
        REDIS_PASSWORD=<PASSWORD>

    Replace `<PASSWORD>` by your password, omitting the angled brackets (<>).

    Optional configuration:

    - `WIPE_REDIS_ON_START=False` to persist memory stored in Redis between runs.
    - `MEMORY_INDEX=<WHATEVER>` to specify a name for the memory index in Redis.
      The default is `auto-gpt`.

!!! info
    See [redis-stack-server](https://hub.docker.com/r/redis/redis-stack-server) for
    setting a password and additional configuration.
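As a sketch, a password can be supplied through the image's `REDIS_ARGS` environment variable (per the redis-stack-server image docs; adjust to your setup):

``` shell
docker run -d --name redis-stack-server -p 6379:6379 \
    -e REDIS_ARGS="--requirepass <PASSWORD>" redis/redis-stack-server:latest
```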
### 🌲 Pinecone API Key Setup
@@ -56,65 +65,57 @@ Pinecone lets you store vast amounts of vector-based memory, allowing the agent
3. Find your API key and region under the default project in the left sidebar.

In the `.env` file set:

- `PINECONE_API_KEY`
- `PINECONE_ENV` (example: `us-east4-gcp`)
- `MEMORY_BACKEND=pinecone`
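Concretely, that part of `.env` might look like this (placeholder values):

``` ini
MEMORY_BACKEND=pinecone
PINECONE_API_KEY=<your-pinecone-api-key>
PINECONE_ENV=us-east4-gcp
```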
Alternatively, you can set them from the command line (advanced):
For Windows Users:
``` shell
setx PINECONE_API_KEY "<YOUR_PINECONE_API_KEY>"
setx PINECONE_ENV "<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
setx MEMORY_BACKEND "pinecone"
```
For macOS and Linux users:
``` shell
export PINECONE_API_KEY="<YOUR_PINECONE_API_KEY>"
export PINECONE_ENV="<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
export MEMORY_BACKEND="pinecone"
```
### Milvus Setup

[Milvus](https://milvus.io/) is an open-source, highly scalable vector database to store
huge amounts of vector-based memory and provide fast relevant search. It can be quickly
deployed with docker, or as a cloud service provided by [Zilliz Cloud](https://zilliz.com/).

1. Deploy your Milvus service, either locally using docker or with a managed Zilliz Cloud database:
    - [Install and deploy Milvus locally](https://milvus.io/docs/install_standalone-operator.md)
    - Set up a managed Zilliz Cloud database
        1. Go to [Zilliz Cloud](https://zilliz.com/) and sign up if you don't already have an account.
        2. In the *Databases* tab, create a new database.
            - Remember your username and password
            - Wait until the database status is changed to RUNNING.
        3. In the *Database detail* tab of the database you have created, find the public cloud endpoint, such as:
            `https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443`.

2. Run `pip3 install pymilvus` to install the required client library.
    Make sure your PyMilvus version and Milvus version are [compatible](https://github.com/milvus-io/pymilvus#compatibility)
    to avoid issues.
    See also the [PyMilvus installation instructions](https://github.com/milvus-io/pymilvus#installation).

3. Update `.env`:
    - `MEMORY_BACKEND=milvus`
    - One of:
        - `MILVUS_ADDR=host:ip` (for local instance)
        - `MILVUS_ADDR=https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443` (for Zilliz Cloud)

    The following settings are **optional**:

    - `MILVUS_USERNAME='username-of-your-milvus-instance'`
    - `MILVUS_PASSWORD='password-of-your-milvus-instance'`
    - `MILVUS_SECURE=True` to use a secure connection.
      Only use if your Milvus instance has TLS enabled.
      *Note: setting `MILVUS_ADDR` to a `https://` URL will override this setting.*
    - `MILVUS_COLLECTION` to change the collection name to use in Milvus.
      Defaults to `autogpt`.
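Put together, a minimal local configuration might look like this (the address assumes Milvus standalone's default port; adjust to your deployment):

``` ini
MEMORY_BACKEND=milvus
MILVUS_ADDR=localhost:19530
MILVUS_COLLECTION=autogpt
```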
### Weaviate Setup

[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store
data objects and vector embeddings from ML models, and scales seamlessly to billions of
data objects. To set up a Weaviate database, check out their [Quickstart Tutorial](https://weaviate.io/developers/weaviate/quickstart).

Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded)
is supported which allows the Auto-GPT process itself to start a Weaviate instance.
To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `pip install "weaviate-client>=3.15.4"`.
#### Install the Weaviate client
@@ -128,7 +129,7 @@ $ pip install weaviate-client
In your `.env` file set the following:

``` ini
MEMORY_BACKEND=weaviate
WEAVIATE_HOST="127.0.0.1" # the IP or domain of the running Weaviate instance
WEAVIATE_PORT="8080"
@@ -150,7 +151,7 @@ View memory usage by using the `--debug` flag :)
Memory pre-seeding allows you to ingest files into memory and pre-seed it before running Auto-GPT.

``` shell
$ python data_ingestion.py -h
usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]

Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.
@@ -172,15 +173,32 @@ Note that you can also use the `--file` argument to ingest a single file into me
The DIR path is relative to the auto_gpt_workspace directory, so `python data_ingestion.py --dir . --init` will ingest everything in the `auto_gpt_workspace` directory.
You can adjust the `max_length` and `overlap` parameters to fine-tune the way the
documents are presented to the AI when it "recalls" that memory:

- Adjusting the overlap value allows the AI to access more contextual information
  from each chunk when recalling information, but will result in more chunks being
  created and therefore increase memory backend usage and OpenAI API requests.
- Reducing the `max_length` value will create more chunks, which can save prompt
  tokens by allowing for more message history in the context, but will also
  increase the number of chunks.
- Increasing the `max_length` value will provide the AI with more contextual
  information from each chunk, reducing the number of chunks created and saving on
  OpenAI API requests. However, this may also use more prompt tokens and decrease
  the overall context available to the AI.
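For example, to ingest a single document with larger chunks and more overlap (file name and values are illustrative):

``` shell
python data_ingestion.py --file DataFolder/api_docs.md --max_length 4000 --overlap 200
```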
Memory pre-seeding is a technique for improving AI accuracy by ingesting relevant data
into its memory. Chunks of data are split and added to memory, allowing the AI to access
them quickly and generate more accurate responses. It's useful for large datasets or when
specific information needs to be accessed quickly. Examples include ingesting API or
GitHub documentation before running Auto-GPT.

!!! attention
    If you use Redis for memory, make sure to run Auto-GPT with `WIPE_REDIS_ON_START=False`

    For other memory backends, we currently forcefully wipe the memory when starting
    Auto-GPT. To ingest data with those memory backends, you can call the
    `data_ingestion.py` script anytime during an Auto-GPT run.

Memories will be available to the AI immediately as they are ingested, even if ingested
while Auto-GPT is running.

docs/configuration/search.md

@@ -1,49 +1,37 @@
## 🔍 Google API Keys Configuration

!!! note
    This section is optional. Use the official Google API if search attempts return
    error 429. To use the `google_official_search` command, you need to set up your
    Google API key in your environment variables.

Create your project:

1. Go to the [Google Cloud Console](https://console.cloud.google.com/).
2. If you don't already have an account, create one and log in
3. Create a new project by clicking on the *Select a Project* dropdown at the top of the
    page and clicking *New Project*
4. Give it a name and click *Create*

Set up a custom search API and add to your .env file:

5. Go to the [APIs & Services Dashboard](https://console.cloud.google.com/apis/dashboard)
6. Click *Enable APIs and Services*
7. Search for *Custom Search API* and click on it
8. Click *Enable*
9. Go to the [Credentials](https://console.cloud.google.com/apis/credentials) page
10. Click *Create Credentials*
11. Choose *API Key*
12. Copy the API key
13. Set it as the `GOOGLE_API_KEY` in your `.env` file
14. [Enable](https://console.developers.google.com/apis/api/customsearch.googleapis.com)
    the Custom Search API on your project. (Might need to wait a few minutes to propagate.)

Set up a custom search engine and add to your .env file:

15. Go to the [Custom Search Engine](https://cse.google.com/cse/all) page
16. Click *Add*
17. Set up your search engine by following the prompts.
    You can choose to search the entire web or specific sites
18. Once you've created your search engine, click on *Control Panel*
19. Click *Basics*
20. Copy the *Search engine ID*
21. Set it as the `CUSTOM_SEARCH_ENGINE_ID` in your `.env` file

_Remember that your free daily custom search quota allows only up to 100 searches. To increase this limit, you need to assign a billing account to the project to profit from up to 10K daily searches._
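The resulting entries in `.env` look like this (placeholder values):

``` ini
GOOGLE_API_KEY=<your-google-api-key>
CUSTOM_SEARCH_ENGINE_ID=<your-custom-search-engine-id>
```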
### Setting up environment variables
For Windows Users:
```
setx GOOGLE_API_KEY "YOUR_GOOGLE_API_KEY"
setx CUSTOM_SEARCH_ENGINE_ID "YOUR_CUSTOM_SEARCH_ENGINE_ID"
```
For macOS and Linux users:
```
export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
```

docs/configuration/voice.md

@@ -1,4 +1,4 @@
# Text to Speech

Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
@@ -6,24 +6,32 @@ Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
python -m autogpt --speak
```
Eleven Labs provides voice technologies such as voice design, speech synthesis, and
premade voices that Auto-GPT can use for speech.

1. Go to [ElevenLabs](https://beta.elevenlabs.io/) and make an account if you don't
    already have one.
2. Choose and set up the `Starter` plan.
3. Click the top right icon and find "Profile" to locate your API Key.

In the `.env` file set:

- `ELEVENLABS_API_KEY`
- `ELEVENLABS_VOICE_1_ID` (example: _"premade/Adam"_)
### List of available voices

!!! note
    You can use either the name or the voice ID to configure a voice

| Name   | Voice ID               |
| ------ | ---------------------- |
| Rachel | `21m00Tcm4TlvDq8ikWAM` |
| Domi   | `AZnzlk1XvdvUeBnXmlld` |
| Bella  | `EXAVITQu4vr4xnSDxMaL` |
| Antoni | `ErXwobaYiN019PkySvjV` |
| Elli   | `MF3mGyEYCl7XYWbV9V6O` |
| Josh   | `TxGEqnHWrfWFTfGW9XjX` |
| Arnold | `VR6AewLTigWG4xSOukaG` |
| Adam   | `pNInz6obpgDQGcFmaJgB` |
| Sam    | `yoZ06aMxZJJ28mfd3POQ` |
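For example, in `.env` (illustrative values; either form works per the note above):

``` ini
ELEVENLABS_API_KEY=<your-elevenlabs-api-key>
ELEVENLABS_VOICE_1_ID=premade/Adam
```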

docs/installation.md (deleted; superseded by docs/setup.md)

@@ -1,115 +0,0 @@
# 💾 Installation
## ⚠️ OpenAI API Keys Configuration
Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).
To use OpenAI API key for Auto-GPT, you **NEED** to have billing set up (AKA paid account).
You can set up paid account at [https://platform.openai.com/account/billing/overview](https://platform.openai.com/account/billing/overview).
Important: It's highly recommended that you track your usage on [the Usage page](https://platform.openai.com/account/usage).
You can also set limits on how much you spend on [the Usage limits page](https://platform.openai.com/account/billing/limits).
![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./imgs/openai-api-key-billing-paid-account.png)
**PLEASE ENSURE YOU HAVE DONE THIS STEP BEFORE PROCEEDING. OTHERWISE, NOTHING WILL WORK!**
## General setup
1. Make sure you have one of the environments listed under [**requirements**](https://github.com/Significant-Gravitas/Auto-GPT#-requirements) set up.
_To execute the following commands, open a CMD, Bash, or Powershell window by navigating to a folder on your computer and typing `CMD` in the folder path at the top, then press enter. Make sure you have [Git](https://git-scm.com/downloads) installed for your O/S._
2. Clone the repository using Git, or download the [latest stable release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest) (`Source code (zip)`, at the bottom of the page).
``` shell
git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
```
3. Navigate to the directory where you downloaded the repository.
``` shell
cd Auto-GPT
```
4. Configure Auto-GPT:
1. Find the file named `.env.template` in the main `Auto-GPT` folder. This file may be hidden by default in some operating systems due to the dot prefix. To reveal hidden files, follow the instructions for your specific operating system (e.g., in Windows, click on the "View" tab in File Explorer and check the "Hidden items" box; in macOS, press Cmd + Shift + .).
2. Create a copy of this file and call it `.env` by removing the `template` extension. The easiest way is to do this in a command prompt/terminal window `cp .env.template .env`.
3. Open the `.env` file in a text editor.
4. Find the line that says `OPENAI_API_KEY=`.
5. After the `"="`, enter your unique OpenAI API Key (without any quotes or spaces).
6. Enter any other API keys or Tokens for services you would like to use. To activate and adjust a setting, remove the `# ` prefix.
7. Save and close the `.env` file.
You have now configured Auto-GPT.
Notes:
- See [OpenAI API Keys Configuration](#openai-api-keys-configuration) to get your OpenAI API key.
- Get your ElevenLabs API key from: [ElevenLabs](https://elevenlabs.io). You can view your xi-api-key using the "Profile" tab on the website.
- If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and then follow these steps:
- Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all the deployment IDs for the relevant models in the `azure_model_map` section:
- `fast_llm_model_deployment_id` - your gpt-3.5-turbo or gpt-4 deployment ID
- `smart_llm_model_deployment_id` - your gpt-4 deployment ID
- `embedding_model_deployment_id` - your text-embedding-ada-002 v2 deployment ID
``` shell
# Please specify all of these values as double-quoted strings
# Replace string in angled brackets (<>) to your own ID
azure_model_map:
fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
...
```
Details can be found here: [https://pypi.org/project/openai/](https://pypi.org/project/openai/) in the `Microsoft Azure Endpoints` section and here: [learn.microsoft.com](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line) for the embedding model.
If you're on Windows you may need to install [msvc-170](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
5. Follow the further instructions for running Auto-GPT with [Docker](#run-with-docker) (*recommended*), or [Docker-less](#run-without-docker)
### Run with Docker
Easiest is to run with `docker-compose`:
``` shell
docker-compose build auto-gpt
docker-compose run --rm auto-gpt
```
By default, this will also start and attach a Redis memory backend.
For related settings, see [Memory > Redis setup](./configuration/memory.md#redis-setup).
You can also build and run it with "vanilla" docker commands:
``` shell
docker build -t auto-gpt .
docker run -it --env-file=.env -v $PWD:/app auto-gpt
```
You can pass extra arguments, for instance, running with `--gpt3only` and `--continuous` mode:
``` shell
docker-compose run --rm auto-gpt --gpt3only --continuous
```
``` shell
docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
```
Alternatively, you can pull the latest release directly from [Docker Hub](https://hub.docker.com/r/significantgravitas/auto-gpt) and run that:
``` shell
docker run -it --env OPENAI_API_KEY='your-key-here' --rm significantgravitas/auto-gpt
```
Or with `ai_settings.yml` presets mounted:
``` shell
docker run -it --env OPENAI_API_KEY='your-key-here' -v $PWD/ai_settings.yaml:/app/ai_settings.yaml --rm significantgravitas/auto-gpt
```
### Run without Docker
Simply run `./run.sh` (Linux/macOS) or `.\run.bat` (Windows) in your terminal. This will install any necessary Python packages and launch Auto-GPT.
### Run with Dev Container
1. Install the [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension in VS Code.
2. Open command palette and type in Dev Containers: Open Folder in Container.
3. Run `./run.sh`.

docs/setup.md (new file, 210 lines)

@@ -0,0 +1,210 @@
# Setting up Auto-GPT
## 📋 Requirements
Choose an environment to run Auto-GPT in (pick one):
- [Docker](https://docs.docker.com/get-docker/) (*recommended*)
- Python 3.10 or later (instructions: [for Windows](https://www.tutorialspoint.com/how-to-install-python-in-windows))
- [VSCode + devcontainer](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
## 🗝️ Getting an API key
Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).
!!! attention
    To use the OpenAI API with Auto-GPT, we strongly recommend **setting up billing**
    (AKA paid account). Free accounts are [limited][openai/api limits] to 3 API calls per
    minute, which can cause the application to crash.

    You can set up a paid account at [Manage account > Billing > Overview](https://platform.openai.com/account/billing/overview).
[openai/api limits]: https://platform.openai.com/docs/guides/rate-limits/overview#:~:text=Free%20trial%20users,RPM%0A40%2C000%20TPM
!!! important
    It's highly recommended that you keep track of your API costs on [the Usage page](https://platform.openai.com/account/usage).
    You can also set limits on how much you spend on [the Usage limits page](https://platform.openai.com/account/billing/limits).
![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./imgs/openai-api-key-billing-paid-account.png)
## Setting up Auto-GPT
### Set up with Docker
1. Make sure you have Docker installed, see [requirements](#requirements)
2. Pull the latest image from [Docker Hub]
        :::shell
        docker pull significantgravitas/auto-gpt
3. Create a folder for Auto-GPT
4. In the folder, create a file called `docker-compose.yml` with the following contents:
        :::yaml
        version: "3.9"
        services:
          auto-gpt:
            image: significantgravitas/auto-gpt
            depends_on:
              - redis
            env_file:
              - .env
            environment:
              MEMORY_BACKEND: ${MEMORY_BACKEND:-redis}
              REDIS_HOST: ${REDIS_HOST:-redis}
            volumes:
              - ./:/app
            profiles: ["exclude-from-up"]
          redis:
            image: "redis/redis-stack-server:latest"
5. Create the necessary [configuration](#configuration) files. If needed, you can find
    templates in the [repository], or fetch them directly as sketched below.
6. Continue to [Run with Docker](#run-with-docker)
[Docker Hub]: https://hub.docker.com/r/significantgravitas/auto-gpt
[repository]: https://github.com/Significant-Gravitas/Auto-GPT
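For example, you can pull the `.env` template straight from the repository (a sketch assuming the `stable` branch layout):

``` shell
curl -L -o .env https://raw.githubusercontent.com/Significant-Gravitas/Auto-GPT/stable/.env.template
```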
### Set up with Git
!!! important
    Make sure you have [Git](https://git-scm.com/downloads) installed for your OS.

!!! info
    To execute the given commands, open a CMD, Bash, or Powershell window.
    On Windows: press ++win+x++ and pick *Terminal*, or ++win+r++ and enter `cmd`
1. Clone the repository

        :::shell
        git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git

2. Navigate to the directory where you downloaded the repository

        :::shell
        cd Auto-GPT
### Set up without Git/Docker
!!! warning
    We recommend using Git or Docker to make updating easier.
1. Download `Source code (zip)` from the [latest stable release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest)
2. Extract the zip-file into a folder
### Configuration
1. Find the file named `.env.template` in the main `Auto-GPT` folder. This file may
be hidden by default in some operating systems due to the dot prefix. To reveal
hidden files, follow the instructions for your specific operating system:
[Windows][show hidden files/Windows], [macOS][show hidden files/macOS].
2. Create a copy of `.env.template` and call it `.env`;
if you're already in a command prompt/terminal window: `cp .env.template .env`.
3. Open the `.env` file in a text editor.
4. Find the line that says `OPENAI_API_KEY=`.
5. After the `=`, enter your unique OpenAI API Key *without any quotes or spaces*.
6. Enter any other API keys or tokens for services you would like to use.
    !!! note
        To activate and adjust a setting, remove the `# ` prefix.
7. Save and close the `.env` file.
!!! info
    Get your ElevenLabs API key from: [ElevenLabs](https://elevenlabs.io). You can view your xi-api-key using the "Profile" tab on the website.

!!! info "Using a GPT Azure-instance"
    If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and
    make an Azure configuration file:

    - Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all the deployment IDs for the relevant models in the `azure_model_map` section:
        - `fast_llm_model_deployment_id`: your gpt-3.5-turbo or gpt-4 deployment ID
        - `smart_llm_model_deployment_id`: your gpt-4 deployment ID
        - `embedding_model_deployment_id`: your text-embedding-ada-002 v2 deployment ID
    Example:

        :::yaml
        # Please specify all of these values as double-quoted strings
        # Replace string in angled brackets (<>) with your own ID
        azure_model_map:
            fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
            ...
    Details can be found in the [openai-python docs], and in the [Azure OpenAI docs] for the embedding model.

    If you're on Windows you may need to install an [MSVC library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
[show hidden files/Windows]: https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-97fbc472-c603-9d90-91d0-1166d1d9f4b5
[show hidden files/macOS]: https://www.pcmag.com/how-to/how-to-access-your-macs-hidden-files
[openai-python docs]: https://github.com/openai/openai-python#microsoft-azure-endpoints
[Azure OpenAI docs]: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line
## Running Auto-GPT
### Run with Docker
The easiest way is to use `docker-compose`. Run the commands below in your Auto-GPT folder.
1. Build the image. If you have pulled the image from Docker Hub, skip this step.

        :::shell
        docker-compose build auto-gpt

2. Run Auto-GPT

        :::shell
        docker-compose run --rm auto-gpt
By default, this will also start and attach a Redis memory backend. If you do not
want this, comment or remove the `depends_on: - redis` and `redis:` sections from
`docker-compose.yml`.
For related settings, see [Memory > Redis setup](./configuration/memory.md#redis-setup).
You can pass extra arguments, e.g. running with `--gpt3only` and `--continuous`:
``` shell
docker-compose run --rm auto-gpt --gpt3only --continuous
```
If you dare, you can also build and run it with "vanilla" docker commands:
``` shell
docker build -t auto-gpt .
docker run -it --env-file=.env -v $PWD:/app auto-gpt
docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
```
[docker-compose file]: https://github.com/Significant-Gravitas/Auto-GPT/blob/stable/docker-compose.yml
### Run with Dev Container
1. Install the [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension in VS Code.
2. Open command palette with ++f1++ and type `Dev Containers: Open Folder in Container`.
3. Run `./run.sh`.
### Run without Docker
Simply run the startup script in your terminal. This will install any necessary Python
packages and launch Auto-GPT.
- On Linux/MacOS:

        :::shell
        ./run.sh

- On Windows:

        :::shell
        .\run.bat
If this gives errors, make sure you have a compatible Python version installed. See also
the [requirements](#requirements).
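To check which Python your shell resolves to (3.10 or later is required):

``` shell
python --version
```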

docs/testing.md

@@ -1,39 +1,46 @@
# Running tests

To run all tests, use the following command:

``` shell
pytest
```

If `pytest` is not found:

``` shell
python -m pytest
```

### Running specific test suites

- To run without integration tests:

        :::shell
        pytest --without-integration

- To run without *slow* integration tests:

        :::shell
        pytest --without-slow-integration

- To run tests and see coverage:

        :::shell
        pytest --cov=autogpt --without-integration --without-slow-integration

## Running the linter

This project uses [flake8](https://flake8.pycqa.org/en/latest/) for linting.
We currently use the following rules: `E303,W293,W291,W292,E305,E231,E302`.
See the [flake8 rules](https://www.flake8rules.com/) for more information.

To run the linter:

``` shell
flake8 .
```

Or:

``` shell
python -m flake8 .
```
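To iterate on a single test file or test case, you can pass a path and a `-k` filter (names are illustrative):

``` shell
pytest tests/unit/test_my_feature.py -k "test_happy_path" -v
```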

docs/usage.md

@@ -1,67 +1,48 @@
# Usage

Open a terminal and run the startup script:
- On Linux/MacOS:
``` shell
./run.sh
```
- On Windows:
``` shell
.\run.bat
```
- Using Docker:
``` shell
docker-compose run --rm auto-gpt
```
After each response from Auto-GPT, choose from the options to authorize command(s),
exit the program, or provide feedback to the AI:

1. Authorize a single command by entering `y`
2. Authorize a series of _N_ continuous commands by entering `y -N`. For example, entering `y -10` would run 10 automatic iterations.
3. Enter any free text to give feedback to Auto-GPT.
4. Exit the program by entering `n`

## Command Line Arguments

Running with `--help` lists all the possible command line arguments you can pass:

``` shell
./run.sh --help     # on Linux / macOS
.\run.bat --help    # on Windows
```

!!! info
    For use with Docker, replace the script in the examples with
    `docker-compose run --rm auto-gpt`:

        :::shell
        docker-compose run --rm auto-gpt --help
        docker-compose run --rm auto-gpt --ai-settings <filename>

!!! note
    Replace anything in angled brackets (<>) with a value you want to specify

Here are some common arguments you can use when running Auto-GPT:

* Run Auto-GPT with a different AI Settings file

        :::shell
        ./run.sh --ai-settings <filename>

* Specify a memory backend

        :::shell
        ./run.sh --use-memory <memory-backend>

!!! note
    There are shorthands for some of these flags, for example `-m` for `--use-memory`.
    Use `./run.sh --help` for more information.

### Speak Mode

Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT

``` shell
./run.sh --speak
```
### 💀 Continuous Mode ⚠️
@@ -71,34 +52,38 @@ Continuous mode is NOT recommended.
It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorize.
Use at your own risk.

``` shell
./run.sh --continuous
```

To exit the program, press ++ctrl+c++
### ♻️ Self-Feedback Mode ⚠️

Running Self-Feedback will **INCREASE** token use and thus cost more. This feature enables the agent to provide self-feedback by verifying its own actions and checking if they align with its current goals. If not, it will provide better feedback for the next loop. To enable this feature for the current loop, input `S` into the input field.
### GPT-3.5 ONLY Mode

If you don't have access to GPT-4, this mode allows you to use Auto-GPT!

``` shell
./run.sh --gpt3only
```

You can achieve the same by setting `SMART_LLM_MODEL` in `.env` to `gpt-3.5-turbo`.

### GPT-4 ONLY Mode

If you have access to GPT-4, this mode allows you to use Auto-GPT solely with GPT-4.
This may give your bot increased intelligence.

``` shell
./run.sh --gpt4only
```

!!! warning
    Since GPT-4 is more expensive to use, running Auto-GPT in GPT-4-only mode will
    increase your API costs.
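Flags can be combined; for example, to run continuously with GPT-3.5 and a custom settings file (file name illustrative):

``` shell
./run.sh --gpt3only --continuous --ai-settings my_settings.yaml
```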
## Logs

Activity and error logs are located in the `./output/logs`
@@ -106,5 +91,5 @@ Activity and error logs are located in the `./output/logs`
To print out debug logs:

``` shell
./run.sh --debug
```

mkdocs.yml

@@ -1,20 +1,27 @@
site_name: Auto-GPT
site_url: https://significantgravitas.github.io/Auto-GPT/
repo_url: https://github.com/Significant-Gravitas/Auto-GPT

nav:
  - Home: index.md
  - Setup: setup.md
  - Usage: usage.md
  - Plugins: plugins.md
  - Configuration:
    - Search: configuration/search.md
    - Memory: configuration/memory.md
    - Voice: configuration/voice.md
    - Image Generation: configuration/imagegen.md
  - Contributing:
    - Contribution guide: contributing.md
    - Running tests: testing.md
    - Code of Conduct: code-of-conduct.md
  - License: https://github.com/Significant-Gravitas/Auto-GPT/blob/master/LICENSE

theme: readthedocs

markdown_extensions:
  admonition:
  codehilite:
  pymdownx.keys:

requirements.txt

@@ -34,6 +34,7 @@ isort
gitpython==3.1.31
auto-gpt-plugin-template
mkdocs
pymdown-extensions

# OpenAI and Generic plugins import