Rename Auto-GPT to AutoGPT (#5301)

* Rename to AutoGPT

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Update autogpts/autogpt/BULLETIN.md

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* Update BULLETIN.md

* Update docker-compose.yml

* Update autogpts/forge/tutorials/001_getting_started.md

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* Update autogpts/autogpt/tests/unit/test_logs.py

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update introduction.md

* Update plugins.md

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Commit 8f41dbe27d (parent bb627442d4)
Author: merwanehamadi, 2023-09-22 15:49:29 -07:00
Committed by: GitHub
70 changed files with 242 additions and 243 deletions


@@ -40,7 +40,7 @@ Further optional configuration:
## Stable Diffusion WebUI
-It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT:
+It is possible to use your own self-hosted Stable Diffusion WebUI with AutoGPT:
```ini
IMAGE_PROVIDER=sdwebui
```
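For reference, a fuller `.env` sketch for this setup, combining the provider switch with the `SD_WEBUI_URL` and `SD_WEBUI_AUTH` variables documented in the configuration section below (values are placeholders):

```ini
# Generate images with a self-hosted Stable Diffusion WebUI
IMAGE_PROVIDER=sdwebui
# URL of the WebUI instance (default shown)
SD_WEBUI_URL=http://localhost:7860
# Optional username:password pair, only if the WebUI requires authentication
SD_WEBUI_AUTH=user:password
```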


@@ -2,11 +2,11 @@
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
## Setting Your Cache Type
-By default, Auto-GPT set up with Docker Compose will use Redis as its memory backend.
+By default, AutoGPT set up with Docker Compose will use Redis as its memory backend.
Otherwise, the default is LocalCache (which stores memory in a JSON file).
To switch to a different backend, change the `MEMORY_BACKEND` in `.env`
@@ -22,7 +22,7 @@ to the value that you want:
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
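As a minimal illustration of the `MEMORY_BACKEND` switch described above (Redis shown as one option; the other backends are covered in the sections below):

```ini
# Switch from the default local cache to the Redis memory backend
MEMORY_BACKEND=redis
```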
## Memory Backend Setup
@@ -37,12 +37,12 @@ Links to memory backends
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
### Redis Setup
!!! important
-If you have set up Auto-GPT using Docker Compose, then Redis is included, no further
+If you have set up AutoGPT using Docker Compose, then Redis is included, no further
setup needed.
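If you run Redis outside of Docker Compose instead, the connection variables from the configuration section below apply; a sketch with their default values:

```ini
MEMORY_BACKEND=redis
# Connection settings (defaults shown; the password is optional)
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
```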
!!! caution
@@ -80,7 +80,7 @@ Links to memory backends
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
### 🌲 Pinecone API Key Setup
@@ -100,7 +100,7 @@ In the `.env` file set:
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
### Milvus Setup
@@ -144,7 +144,7 @@ deployed with docker, or as a cloud service provided by [Zilliz Cloud](https://z
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
### Weaviate Setup
[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store
@@ -152,7 +152,7 @@ data objects and vector embeddings from ML-models and scales seamlessly to billi
data objects. To set up a Weaviate database, check out their [Quickstart Tutorial](https://weaviate.io/developers/weaviate/quickstart).
Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded)
-is supported which allows the Auto-GPT process itself to start a Weaviate instance.
+is supported which allows the AutoGPT process itself to start a Weaviate instance.
To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `poetry add weaviate-client@^3.15.4`.
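Putting those two steps together, an illustrative setup (the `weaviate` value for `MEMORY_BACKEND` is an assumption based on the backend's name):

```shell
# Install the pinned Weaviate client
poetry add weaviate-client@^3.15.4
```

```ini
MEMORY_BACKEND=weaviate
# Let the AutoGPT process start its own embedded Weaviate instance
USE_WEAVIATE_EMBEDDED=True
```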
#### Install the Weaviate client
@@ -189,13 +189,13 @@ View memory usage by using the `--debug` flag :)
!!! warning
Data ingestion is broken in v0.4.7 and possibly earlier versions. This is a known issue that will be addressed in future releases. Follow these issues for updates.
-[Issue 4435](https://github.com/Significant-Gravitas/Auto-GPT/issues/4435)
-[Issue 4024](https://github.com/Significant-Gravitas/Auto-GPT/issues/4024)
-[Issue 2076](https://github.com/Significant-Gravitas/Auto-GPT/issues/2076)
+[Issue 4435](https://github.com/Significant-Gravitas/AutoGPT/issues/4435)
+[Issue 4024](https://github.com/Significant-Gravitas/AutoGPT/issues/4024)
+[Issue 2076](https://github.com/Significant-Gravitas/AutoGPT/issues/2076)
-Memory pre-seeding allows you to ingest files into memory and pre-seed it before running Auto-GPT.
+Memory pre-seeding allows you to ingest files into memory and pre-seed it before running AutoGPT.
```shell
$ python data_ingestion.py -h
@@ -214,7 +214,7 @@ options:
# python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
```
-In the example above, the script initializes the memory, ingests all files within the `Auto-Gpt/auto_gpt_workspace/DataFolder` directory into memory with an overlap between chunks of 100 and a maximum length of each chunk of 2000.
+In the example above, the script initializes the memory, ingests all files within the `AutoGPT/auto_gpt_workspace/DataFolder` directory into memory with an overlap between chunks of 100 and a maximum length of each chunk of 2000.
Note that you can also use the `--file` argument to ingest a single file into memory and that data_ingestion.py will only ingest files within the `/auto_gpt_workspace` directory.
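For instance, ingesting a single file with the same chunking parameters as the example above might look like this (the file path is a placeholder and must sit inside `/auto_gpt_workspace`):

```shell
python data_ingestion.py --file DataFolder/notes.txt --overlap 100 --max_length 2000
```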
@@ -238,14 +238,14 @@ Memory pre-seeding is a technique for improving AI accuracy by ingesting relevan
into its memory. Chunks of data are split and added to memory, allowing the AI to access
them quickly and generate more accurate responses. It's useful for large datasets or when
specific information needs to be accessed quickly. Examples include ingesting API or
-GitHub documentation before running Auto-GPT.
+GitHub documentation before running AutoGPT.
!!! attention
-If you use Redis for memory, make sure to run Auto-GPT with `WIPE_REDIS_ON_START=False`
+If you use Redis for memory, make sure to run AutoGPT with `WIPE_REDIS_ON_START=False`
For other memory backends, we currently forcefully wipe the memory when starting
-Auto-GPT. To ingest data with those memory backends, you can call the
-`data_ingestion.py` script anytime during an Auto-GPT run.
+AutoGPT. To ingest data with those memory backends, you can call the
+`data_ingestion.py` script anytime during an AutoGPT run.
Memories will be available to the AI immediately as they are ingested, even if ingested
-while Auto-GPT is running.
+while AutoGPT is running.
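A concrete sketch of the Redis-backed pre-seeding setup described in this note:

```ini
MEMORY_BACKEND=redis
# Keep ingested data across restarts so pre-seeded memories survive
WIPE_REDIS_ON_START=False
```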


@@ -1,13 +1,13 @@
# Configuration
-Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `Auto-GPT` folder and name it `.env`.
+Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `AutoGPT` folder and name it `.env`.
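For example, from the AutoGPT folder:

```shell
# Create a local configuration file from the template
cp .env.template .env
```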
## Environment Variables
-- `AI_SETTINGS_FILE`: Location of the AI Settings file relative to the Auto-GPT root directory. Default: ai_settings.yaml
+- `AI_SETTINGS_FILE`: Location of the AI Settings file relative to the AutoGPT root directory. Default: ai_settings.yaml
- `AUDIO_TO_TEXT_PROVIDER`: Audio To Text Provider. Only option currently is `huggingface`. Default: huggingface
- `AUTHORISE_COMMAND_KEY`: Key response accepted when authorising commands. Default: y
-- `AZURE_CONFIG_FILE`: Location of the Azure Config file relative to the Auto-GPT root directory. Default: azure.yaml
+- `AZURE_CONFIG_FILE`: Location of the Azure Config file relative to the AutoGPT root directory. Default: azure.yaml
- `BROWSE_CHUNK_MAX_LENGTH`: When browsing website, define the length of chunks to summarize. Default: 3000
- `BROWSE_SPACY_LANGUAGE_MODEL`: [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. Default: en_core_web_sm
- `CHAT_MESSAGES_ENABLED`: Enable chat messages. Optional
@@ -22,7 +22,7 @@ Configuration is controlled through the `Config` object. You can set configurati
- `GITHUB_USERNAME`: GitHub Username. Optional.
- `GOOGLE_API_KEY`: Google API key. Optional.
- `GOOGLE_CUSTOM_SEARCH_ENGINE_ID`: [Google custom search engine ID](https://programmablesearchengine.google.com/controlpanel/all). Optional.
-- `HEADLESS_BROWSER`: Use a headless browser while Auto-GPT uses a web browser. Setting to `False` will allow you to see Auto-GPT operate the browser. Default: True
+- `HEADLESS_BROWSER`: Use a headless browser while AutoGPT uses a web browser. Setting to `False` will allow you to see AutoGPT operate the browser. Default: True
- `HUGGINGFACE_API_TOKEN`: HuggingFace API, to be used for both image generation and audio to text. Optional.
- `HUGGINGFACE_AUDIO_TO_TEXT_MODEL`: HuggingFace audio to text model. Default: CompVis/stable-diffusion-v1-4
- `HUGGINGFACE_IMAGE_MODEL`: HuggingFace model to use for image generation. Default: CompVis/stable-diffusion-v1-4
@@ -33,17 +33,17 @@ Configuration is controlled through the `Config` object. You can set configurati
- `OPENAI_API_KEY`: *REQUIRED*- Your [OpenAI API Key](https://platform.openai.com/account/api-keys).
- `OPENAI_ORGANIZATION`: Organization ID in OpenAI. Optional.
- `PLAIN_OUTPUT`: Plain output, which disables the spinner. Default: False
-- `PLUGINS_CONFIG_FILE`: Path of the Plugins Config file relative to the Auto-GPT root directory. Default: plugins_config.yaml
-- `PROMPT_SETTINGS_FILE`: Location of the Prompt Settings file relative to the Auto-GPT root directory. Default: prompt_settings.yaml
+- `PLUGINS_CONFIG_FILE`: Path of the Plugins Config file relative to the AutoGPT root directory. Default: plugins_config.yaml
+- `PROMPT_SETTINGS_FILE`: Location of the Prompt Settings file relative to the AutoGPT root directory. Default: prompt_settings.yaml
- `REDIS_HOST`: Redis Host. Default: localhost
- `REDIS_PASSWORD`: Redis Password. Optional. Default:
- `REDIS_PORT`: Redis Port. Default: 6379
- `RESTRICT_TO_WORKSPACE`: Restrict file reading and writing to the workspace directory. Default: True
- `SD_WEBUI_AUTH`: Stable Diffusion Web UI username:password pair. Optional.
- `SD_WEBUI_URL`: Stable Diffusion Web UI URL. Default: http://localhost:7860
-- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
+- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by AutoGPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
- `SHELL_COMMAND_CONTROL`: Whether to use `allowlist` or `denylist` to determine what shell commands can be executed (Default: denylist)
-- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
+- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by AutoGPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
- `SMART_LLM`: LLM Model to use for "smart" tasks. Default: gpt-4
- `STREAMELEMENTS_VOICE`: StreamElements voice to use. Default: Brian
- `TEMPERATURE`: Value of temperature given to OpenAI. Value from 0 to 2. Lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
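To illustrate, a small `.env` excerpt combining several of the variables above (values are examples only; the API key is a placeholder):

```ini
OPENAI_API_KEY=sk-...
SMART_LLM=gpt-4
# 0 to 2; lower is more deterministic
TEMPERATURE=0
RESTRICT_TO_WORKSPACE=True
PLAIN_OUTPUT=False
```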


@@ -1,13 +1,13 @@
# Text to Speech
-Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
+Enter this command to use TTS _(Text-to-Speech)_ for AutoGPT
```shell
python -m autogpt --speak
```
Eleven Labs provides voice technologies such as voice design, speech synthesis, and
-premade voices that Auto-GPT can use for speech.
+premade voices that AutoGPT can use for speech.
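A hedged configuration sketch for this (the variable names are assumptions, not taken from this page; check `.env.template` for the exact keys):

```ini
# Assumed keys for ElevenLabs TTS
TEXT_TO_SPEECH_PROVIDER=elevenlabs
ELEVENLABS_API_KEY=your-api-key
```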
1. Go to [ElevenLabs](https://beta.elevenlabs.io/) and make an account if you don't
already have one.