Mirror of https://github.com/aljazceru/Auto-GPT.git, synced 2025-12-18 14:34:23 +01:00

Rename Auto-GPT to AutoGPT (#5301)

* Rename to AutoGPT
  Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
* Update autogpts/autogpt/BULLETIN.md
  Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* Update BULLETIN.md
* Update docker-compose.yml
* Update autogpts/forge/tutorials/001_getting_started.md
  Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* Update autogpts/autogpt/tests/unit/test_logs.py
  Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update introduction.md
* Update plugins.md

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
@@ -1,8 +1,8 @@
-# Creating Challenges for Auto-GPT
+# Creating Challenges for AutoGPT

🏹 We're on the hunt for talented Challenge Creators! 🎯

-Join us in shaping the future of Auto-GPT by designing challenges that test its limits. Your input will be invaluable in guiding our progress and ensuring that we're on the right track. We're seeking individuals with a diverse skill set, including:
+Join us in shaping the future of AutoGPT by designing challenges that test its limits. Your input will be invaluable in guiding our progress and ensuring that we're on the right track. We're seeking individuals with a diverse skill set, including:

🎨 UX Design: Your expertise will enhance the user experience for those attempting to conquer our challenges. With your help, we'll develop a dedicated section in our wiki, and potentially even launch a standalone website.

@@ -10,11 +10,11 @@ Join us in shaping the future of Auto-GPT by designing challenges that test its

⚙️ DevOps Skills: Experience with CI pipelines in GitHub and possibly Google Cloud Platform will be instrumental in streamlining our operations.

-Are you ready to play a pivotal role in Auto-GPT's journey? Apply now to become a Challenge Creator by opening a PR! 🚀
+Are you ready to play a pivotal role in AutoGPT's journey? Apply now to become a Challenge Creator by opening a PR! 🚀


# Getting Started

-Clone the original Auto-GPT repo and check out the master branch
+Clone the original AutoGPT repo and check out the master branch


The challenges are not written using a specific framework. They try to be very agnostic
@@ -27,7 +27,7 @@ Output => Artifact (files, image, code, etc, etc...)

## Defining your Agent

-Go to https://github.com/Significant-Gravitas/Auto-GPT/blob/master/tests/integration/agent_factory.py
+Go to https://github.com/Significant-Gravitas/AutoGPT/blob/master/tests/integration/agent_factory.py

Create your agent fixture.
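To illustrate the shape of such a fixture, here is a sketch only: the real fixtures in `tests/integration/agent_factory.py` are decorated with `@pytest.fixture` and construct the actual `Agent` class, while the class and field names below are illustrative stand-ins.

```python
# Illustrative sketch of a challenge agent fixture.
# The real fixtures live in tests/integration/agent_factory.py, use
# @pytest.fixture, and build the real Agent class; DummyAgent and its
# fields are hypothetical stand-ins for illustration only.
from dataclasses import dataclass, field

@dataclass
class DummyAgent:
    ai_name: str
    ai_role: str
    ai_goals: list = field(default_factory=list)

def memory_management_agent():
    # In the real file, this function would be a pytest fixture.
    return DummyAgent(
        ai_name="Follow-Instructions-GPT",
        ai_role="an AI designed to read and follow instructions from text files",
        ai_goals=["Use the command read_file to read instructions_1.txt"],
    )
```

The challenge's test then requests this fixture by name and runs the agent against the challenge's input artifacts.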

@@ -1,3 +1,3 @@
# Information Retrieval

-Information retrieval challenges are designed to evaluate the proficiency of an AI agent, such as Auto-GPT, in searching, extracting, and presenting relevant information from a vast array of sources. These challenges often encompass tasks such as interpreting user queries, browsing the web, and filtering through unstructured data.
+Information retrieval challenges are designed to evaluate the proficiency of an AI agent, such as AutoGPT, in searching, extracting, and presenting relevant information from a vast array of sources. These challenges often encompass tasks such as interpreting user queries, browsing the web, and filtering through unstructured data.
introduction.md
@@ -1,30 +1,30 @@
# Introduction to Challenges

-Welcome to the Auto-GPT Challenges page! This is a space where we encourage community members to collaborate and contribute towards improving Auto-GPT by identifying and solving challenges that Auto-GPT is not yet able to achieve.
+Welcome to the AutoGPT Challenges page! This is a space where we encourage community members to collaborate and contribute towards improving AutoGPT by identifying and solving challenges that AutoGPT is not yet able to achieve.

## What are challenges?

-Challenges are tasks or problems that Auto-GPT has difficulty solving or has not yet been able to accomplish. These may include improving specific functionalities, enhancing the model's understanding of specific domains, or even developing new features that the current version of Auto-GPT lacks.
+Challenges are tasks or problems that AutoGPT has difficulty solving or has not yet been able to accomplish. These may include improving specific functionalities, enhancing the model's understanding of specific domains, or even developing new features that the current version of AutoGPT lacks.

## Why are challenges important?

-Addressing challenges helps us improve Auto-GPT's performance, usability, and versatility. By working together to tackle these challenges, we can create a more powerful and efficient tool for everyone. It also allows the community to actively contribute to the project, making it a true open-source effort.
+Addressing challenges helps us improve AutoGPT's performance, usability, and versatility. By working together to tackle these challenges, we can create a more powerful and efficient tool for everyone. It also allows the community to actively contribute to the project, making it a true open-source effort.

## How can you participate?

There are two main ways to get involved with challenges:

-1. **Submit a Challenge**: If you have identified a task that Auto-GPT struggles with, you can submit it as a challenge. This allows others to see the issue and collaborate on finding a solution.
+1. **Submit a Challenge**: If you have identified a task that AutoGPT struggles with, you can submit it as a challenge. This allows others to see the issue and collaborate on finding a solution.
2. **Beat a Challenge**: If you have a solution or idea to tackle an existing challenge, you can contribute by working on the challenge and submitting your solution.

To learn more about submitting and beating challenges, please visit the [List of Challenges](list.md), [Submit a Challenge](submit.md), and [Beat a Challenge](beat.md) pages.

-We look forward to your contributions and the exciting solutions that the community will develop together to make Auto-GPT even better!
+We look forward to your contributions and the exciting solutions that the community will develop together to make AutoGPT even better!

!!! warning

-We're slowly transitioning to agbenchmark. agbenchmark is a simpler way to improve Auto-GPT. Simply run:
+We're slowly transitioning to agbenchmark. agbenchmark is a simpler way to improve AutoGPT. Simply run:

```
agbenchmark
@@ -1,5 +1,5 @@
# List of Challenges

-This page contains a curated list of challenges that Auto-GPT currently faces. If you think you have a solution or idea to tackle any of these challenges, feel free to dive in and start working on them! New challenges can also be submitted by following the guidelines on the [Submit a Challenge](challenges/submit.md) page.
+This page contains a curated list of challenges that AutoGPT currently faces. If you think you have a solution or idea to tackle any of these challenges, feel free to dive in and start working on them! New challenges can also be submitted by following the guidelines on the [Submit a Challenge](challenges/submit.md) page.

Memory Challenges: [List of Challenges](memory/introduction.md)

@@ -1,5 +1,5 @@
# Memory Challenges

-Memory challenges are designed to test the ability of an AI agent, like Auto-GPT, to remember and use information throughout a series of tasks. These challenges often involve following instructions, processing text files, and keeping track of important data.
+Memory challenges are designed to test the ability of an AI agent, like AutoGPT, to remember and use information throughout a series of tasks. These challenges often involve following instructions, processing text files, and keeping track of important data.

-The goal of memory challenges is to improve an agent's performance in tasks that require remembering and using information over time. By addressing these challenges, we can enhance Auto-GPT's capabilities and make it more useful in real-world applications.
+The goal of memory challenges is to improve an agent's performance in tasks that require remembering and using information over time. By addressing these challenges, we can enhance AutoGPT's capabilities and make it more useful in real-world applications.

@@ -1,10 +1,10 @@
# Submit a Challenge

-If you have identified a task or problem that Auto-GPT struggles with, you can submit it as a challenge for the community to tackle. Here's how you can submit a new challenge:
+If you have identified a task or problem that AutoGPT struggles with, you can submit it as a challenge for the community to tackle. Here's how you can submit a new challenge:

## How to Submit a Challenge

-1. Create a new `.md` file in the `challenges` directory in the Auto-GPT GitHub repository. Make sure to pick the right category.
+1. Create a new `.md` file in the `challenges` directory in the AutoGPT GitHub repository. Make sure to pick the right category.
2. Name the file with a descriptive title for the challenge, using hyphens instead of spaces (e.g., `improve-context-understanding.md`).
3. In the file, follow the [challenge_template.md](challenge_template.md) to describe the problem, define the scope, and evaluate success.
4. Commit the file and create a pull request.

@@ -40,7 +40,7 @@ Further optional configuration:

## Stable Diffusion WebUI

-It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT:
+It is possible to use your own self-hosted Stable Diffusion WebUI with AutoGPT:

```ini
IMAGE_PROVIDER=sdwebui
@@ -2,11 +2,11 @@
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280

## Setting Your Cache Type

-By default, Auto-GPT set up with Docker Compose will use Redis as its memory backend.
+By default, AutoGPT set up with Docker Compose will use Redis as its memory backend.
Otherwise, the default is LocalCache (which stores memory in a JSON file).

To switch to a different backend, change the `MEMORY_BACKEND` in `.env`
@@ -22,7 +22,7 @@ to the value that you want:
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
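As an illustration, switching backends is a one-line `.env` edit; the backend name below is an assumed example, not an exhaustive list of valid values:

```ini
# Illustrative .env edit - replace the value with the backend you want
MEMORY_BACKEND=json_file
```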

## Memory Backend Setup

@@ -37,12 +37,12 @@ Links to memory backends
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280

### Redis Setup

!!! important
-If you have set up Auto-GPT using Docker Compose, then Redis is included, no further
+If you have set up AutoGPT using Docker Compose, then Redis is included, no further
setup needed.

!!! caution
@@ -80,7 +80,7 @@ Links to memory backends
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280

### 🌲 Pinecone API Key Setup

@@ -100,7 +100,7 @@ In the `.env` file set:
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280

### Milvus Setup

@@ -144,7 +144,7 @@ deployed with docker, or as a cloud service provided by [Zilliz Cloud](https://z
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion,
-feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280

### Weaviate Setup
[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store
@@ -152,7 +152,7 @@ data objects and vector embeddings from ML-models and scales seamlessly to billi
data objects. To set up a Weaviate database, check out their [Quickstart Tutorial](https://weaviate.io/developers/weaviate/quickstart).

Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded)
-is supported which allows the Auto-GPT process itself to start a Weaviate instance.
+is supported which allows the AutoGPT process itself to start a Weaviate instance.
To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `poetry add weaviate-client@^3.15.4`.

#### Install the Weaviate client
@@ -189,13 +189,13 @@ View memory usage by using the `--debug` flag :)

!!! warning
Data ingestion is broken in v0.4.7 and possibly earlier versions. This is a known issue that will be addressed in future releases. Follow these issues for updates.
-[Issue 4435](https://github.com/Significant-Gravitas/Auto-GPT/issues/4435)
-[Issue 4024](https://github.com/Significant-Gravitas/Auto-GPT/issues/4024)
-[Issue 2076](https://github.com/Significant-Gravitas/Auto-GPT/issues/2076)
+[Issue 4435](https://github.com/Significant-Gravitas/AutoGPT/issues/4435)
+[Issue 4024](https://github.com/Significant-Gravitas/AutoGPT/issues/4024)
+[Issue 2076](https://github.com/Significant-Gravitas/AutoGPT/issues/2076)


-Memory pre-seeding allows you to ingest files into memory and pre-seed it before running Auto-GPT.
+Memory pre-seeding allows you to ingest files into memory and pre-seed it before running AutoGPT.

```shell
$ python data_ingestion.py -h
@@ -214,7 +214,7 @@ options:
# python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
```

-In the example above, the script initializes the memory, ingests all files within the `Auto-Gpt/auto_gpt_workspace/DataFolder` directory into memory with an overlap between chunks of 100 and a maximum length of each chunk of 2000.
+In the example above, the script initializes the memory, ingests all files within the `AutoGPT/auto_gpt_workspace/DataFolder` directory into memory with an overlap between chunks of 100 and a maximum length of each chunk of 2000.

Note that you can also use the `--file` argument to ingest a single file into memory and that data_ingestion.py will only ingest files within the `/auto_gpt_workspace` directory.
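The overlap-based splitting described above can be sketched as follows. This is a simplified illustration of chunking with overlap, not the actual `data_ingestion.py` implementation:

```python
# Simplified sketch of overlap-based chunking (not the real data_ingestion.py code).
def split_text(text: str, max_length: int = 2000, overlap: int = 100):
    """Split text into chunks of up to max_length characters,
    where consecutive chunks share `overlap` characters."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_length])
        if start + max_length >= len(text):
            break
        start += max_length - overlap
    return chunks

chunks = split_text("a" * 4500, max_length=2000, overlap=100)
# 3 chunks of lengths 2000, 2000, and 700
```

The overlap ensures that a sentence cut at a chunk boundary still appears whole in the neighboring chunk.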

@@ -238,14 +238,14 @@ Memory pre-seeding is a technique for improving AI accuracy by ingesting relevan
into its memory. Chunks of data are split and added to memory, allowing the AI to access
them quickly and generate more accurate responses. It's useful for large datasets or when
specific information needs to be accessed quickly. Examples include ingesting API or
-GitHub documentation before running Auto-GPT.
+GitHub documentation before running AutoGPT.

!!! attention
-If you use Redis for memory, make sure to run Auto-GPT with `WIPE_REDIS_ON_START=False`
+If you use Redis for memory, make sure to run AutoGPT with `WIPE_REDIS_ON_START=False`

For other memory backends, we currently forcefully wipe the memory when starting
-Auto-GPT. To ingest data with those memory backends, you can call the
-`data_ingestion.py` script anytime during an Auto-GPT run.
+AutoGPT. To ingest data with those memory backends, you can call the
+`data_ingestion.py` script anytime during an AutoGPT run.

Memories will be available to the AI immediately as they are ingested, even if ingested
-while Auto-GPT is running.
+while AutoGPT is running.
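For the Redis case in the note above, the relevant `.env` lines might look like this; host and port are the documented defaults, and the fragment is a sketch rather than a complete configuration:

```ini
MEMORY_BACKEND=redis
WIPE_REDIS_ON_START=False
REDIS_HOST=localhost
REDIS_PORT=6379
```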

@@ -1,13 +1,13 @@
# Configuration

-Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `Auto-GPT` folder and name it `.env`.
+Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `AutoGPT` folder and name it `.env`.

## Environment Variables

-- `AI_SETTINGS_FILE`: Location of the AI Settings file relative to the Auto-GPT root directory. Default: ai_settings.yaml
+- `AI_SETTINGS_FILE`: Location of the AI Settings file relative to the AutoGPT root directory. Default: ai_settings.yaml
- `AUDIO_TO_TEXT_PROVIDER`: Audio To Text Provider. Only option currently is `huggingface`. Default: huggingface
- `AUTHORISE_COMMAND_KEY`: Key response accepted when authorising commands. Default: y
-- `AZURE_CONFIG_FILE`: Location of the Azure Config file relative to the Auto-GPT root directory. Default: azure.yaml
+- `AZURE_CONFIG_FILE`: Location of the Azure Config file relative to the AutoGPT root directory. Default: azure.yaml
- `BROWSE_CHUNK_MAX_LENGTH`: When browsing a website, define the length of chunks to summarize. Default: 3000
- `BROWSE_SPACY_LANGUAGE_MODEL`: [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. Default: en_core_web_sm
- `CHAT_MESSAGES_ENABLED`: Enable chat messages. Optional
@@ -22,7 +22,7 @@ Configuration is controlled through the `Config` object. You can set configurati
- `GITHUB_USERNAME`: GitHub Username. Optional.
- `GOOGLE_API_KEY`: Google API key. Optional.
- `GOOGLE_CUSTOM_SEARCH_ENGINE_ID`: [Google custom search engine ID](https://programmablesearchengine.google.com/controlpanel/all). Optional.
-- `HEADLESS_BROWSER`: Use a headless browser while Auto-GPT uses a web browser. Setting to `False` will allow you to see Auto-GPT operate the browser. Default: True
+- `HEADLESS_BROWSER`: Use a headless browser while AutoGPT uses a web browser. Setting to `False` will allow you to see AutoGPT operate the browser. Default: True
- `HUGGINGFACE_API_TOKEN`: HuggingFace API token, to be used for both image generation and audio to text. Optional.
- `HUGGINGFACE_AUDIO_TO_TEXT_MODEL`: HuggingFace audio to text model. Default: CompVis/stable-diffusion-v1-4
- `HUGGINGFACE_IMAGE_MODEL`: HuggingFace model to use for image generation. Default: CompVis/stable-diffusion-v1-4
@@ -33,17 +33,17 @@ Configuration is controlled through the `Config` object. You can set configurati
- `OPENAI_API_KEY`: *REQUIRED*- Your [OpenAI API Key](https://platform.openai.com/account/api-keys).
- `OPENAI_ORGANIZATION`: Organization ID in OpenAI. Optional.
- `PLAIN_OUTPUT`: Plain output, which disables the spinner. Default: False
-- `PLUGINS_CONFIG_FILE`: Path of the Plugins Config file relative to the Auto-GPT root directory. Default: plugins_config.yaml
-- `PROMPT_SETTINGS_FILE`: Location of the Prompt Settings file relative to the Auto-GPT root directory. Default: prompt_settings.yaml
+- `PLUGINS_CONFIG_FILE`: Path of the Plugins Config file relative to the AutoGPT root directory. Default: plugins_config.yaml
+- `PROMPT_SETTINGS_FILE`: Location of the Prompt Settings file relative to the AutoGPT root directory. Default: prompt_settings.yaml
- `REDIS_HOST`: Redis Host. Default: localhost
- `REDIS_PASSWORD`: Redis Password. Optional. Default:
- `REDIS_PORT`: Redis Port. Default: 6379
- `RESTRICT_TO_WORKSPACE`: Restrict file reading and writing to the workspace directory. Default: True
- `SD_WEBUI_AUTH`: Stable Diffusion Web UI username:password pair. Optional.
- `SD_WEBUI_URL`: Stable Diffusion Web UI URL. Default: http://localhost:7860
-- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
+- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by AutoGPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
- `SHELL_COMMAND_CONTROL`: Whether to use `allowlist` or `denylist` to determine what shell commands can be executed (Default: denylist)
-- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
+- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by AutoGPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
- `SMART_LLM`: LLM Model to use for "smart" tasks. Default: gpt-4
- `STREAMELEMENTS_VOICE`: StreamElements voice to use. Default: Brian
- `TEMPERATURE`: Value of temperature given to OpenAI. Value from 0 to 2. Lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
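Pulling a few of these together, a minimal `.env` fragment built only from the defaults documented above might look like:

```ini
# Sketch assembled from the documented defaults above
AI_SETTINGS_FILE=ai_settings.yaml
SMART_LLM=gpt-4
HEADLESS_BROWSER=True
SHELL_COMMAND_CONTROL=denylist
SHELL_DENYLIST=sudo,su
RESTRICT_TO_WORKSPACE=True
```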

@@ -1,13 +1,13 @@
# Text to Speech

-Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
+Enter this command to use TTS _(Text-to-Speech)_ for AutoGPT

```shell
python -m autogpt --speak
```

Eleven Labs provides voice technologies such as voice design, speech synthesis, and
-premade voices that Auto-GPT can use for speech.
+premade voices that AutoGPT can use for speech.

1. Go to [ElevenLabs](https://beta.elevenlabs.io/) and make an account if you don't
already have one.

@@ -3,6 +3,6 @@
Welcome to AutoGPT. Please follow the [Installation](/setup/) guide to get started.

!!! note
-It is recommended to use a virtual machine/container (docker) for tasks that require high security measures to prevent any potential harm to the main computer's system and data. If you are considering using Auto-GPT outside a virtualized/containerized environment, you are *strongly* advised to use a separate user account just for running Auto-GPT. This is even more important if you are going to allow Auto-GPT to write/execute scripts and run shell commands!
+It is recommended to use a virtual machine/container (docker) for tasks that require high security measures to prevent any potential harm to the main computer's system and data. If you are considering using AutoGPT outside a virtualized/containerized environment, you are *strongly* advised to use a separate user account just for running AutoGPT. This is even more important if you are going to allow AutoGPT to write/execute scripts and run shell commands!

It is for these reasons that executing python scripts is explicitly disabled when running outside a container environment.

@@ -2,7 +2,7 @@

⚠️💀 **WARNING** 💀⚠️: Review the code of any plugin you use thoroughly, as plugins can execute any Python code, potentially leading to malicious activities, such as stealing your API keys.

-To configure plugins, you can create or edit the `plugins_config.yaml` file in the root directory of Auto-GPT. This file allows you to enable or disable plugins as desired. For specific configuration instructions, please refer to the documentation provided for each plugin. The file should be formatted in YAML. Here is an example for your reference:
+To configure plugins, you can create or edit the `plugins_config.yaml` file in the root directory of AutoGPT. This file allows you to enable or disable plugins as desired. For specific configuration instructions, please refer to the documentation provided for each plugin. The file should be formatted in YAML. Here is an example for your reference:

```yaml
plugin_a:
@@ -16,5 +16,5 @@ plugin_b:

See our [Plugins Repo](https://github.com/Significant-Gravitas/Auto-GPT-Plugins) for more info on how to install all the amazing plugins the community has built!

-Alternatively, developers can use the [Auto-GPT Plugin Template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template) as a starting point for creating their own plugins.
+Alternatively, developers can use the [AutoGPT Plugin Template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template) as a starting point for creating their own plugins.


@@ -1,8 +1,8 @@
-# Setting up Auto-GPT
+# Setting up AutoGPT

## 📋 Requirements

-Choose an environment to run Auto-GPT in (pick one):
+Choose an environment to run AutoGPT in (pick one):

- [Docker](https://docs.docker.com/get-docker/) (*recommended*)
- Python 3.10 or later (instructions: [for Windows](https://www.tutorialspoint.com/how-to-install-python-in-windows))
@@ -14,7 +14,7 @@ Choose an environment to run Auto-GPT in (pick one):
Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).

!!! attention
-To use the OpenAI API with Auto-GPT, we strongly recommend **setting up billing**
+To use the OpenAI API with AutoGPT, we strongly recommend **setting up billing**
(AKA paid account). Free accounts are [limited][openai/api limits] to 3 API calls per
minute, which can cause the application to crash.

@@ -29,16 +29,16 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
![Entering API key into github](./imgs/openai-api-key-billing-paid-account.png)


-## Setting up Auto-GPT
+## Setting up AutoGPT

### Set up with Docker

1. Make sure you have Docker installed, see [requirements](#requirements)
-2. Create a project directory for Auto-GPT
+2. Create a project directory for AutoGPT

```shell
-mkdir Auto-GPT
-cd Auto-GPT
+mkdir AutoGPT
+cd AutoGPT
```

3. In the project directory, create a file called `docker-compose.yml` with the following contents:
@@ -77,11 +77,11 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
6. Continue to [Run with Docker](#run-with-docker)

!!! note "Docker only supports headless browsing"
-Auto-GPT uses a browser in headless mode by default: `HEADLESS_BROWSER=True`.
-Please do not change this setting in combination with Docker, or Auto-GPT will crash.
+AutoGPT uses a browser in headless mode by default: `HEADLESS_BROWSER=True`.
+Please do not change this setting in combination with Docker, or AutoGPT will crash.

[Docker Hub]: https://hub.docker.com/r/significantgravitas/auto-gpt
-[repository]: https://github.com/Significant-Gravitas/Auto-GPT
+[repository]: https://github.com/Significant-Gravitas/AutoGPT


### Set up with Git
@@ -96,13 +96,13 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
1. Clone the repository

```shell
-git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
+git clone -b stable https://github.com/Significant-Gravitas/AutoGPT.git
```

2. Navigate to the directory where you downloaded the repository

```shell
-cd Auto-GPT/autogpts/autogpt
+cd AutoGPT/autogpts/autogpt
```

### Set up without Git/Docker
@@ -110,7 +110,7 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
!!! warning
We recommend using Git or Docker to make updating easier. Also note that some features such as Python execution will only work inside docker for security reasons.

-1. Download `Source code (zip)` from the [latest stable release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest)
+1. Download `Source code (zip)` from the [latest stable release](https://github.com/Significant-Gravitas/AutoGPT/releases/latest)
2. Extract the zip-file into a folder


@@ -160,7 +160,7 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
[Azure OpenAI docs]: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line


-## Running Auto-GPT
+## Running AutoGPT

### Run with Docker

@@ -177,7 +177,7 @@ This will display the version of Docker Compose that is currently installed on y

If you need to upgrade Docker Compose to a newer version, you can follow the installation instructions in the Docker documentation: https://docs.docker.com/compose/install/

-Once you have a recent version of Docker Compose, run the commands below in your Auto-GPT folder.
+Once you have a recent version of Docker Compose, run the commands below in your AutoGPT folder.

1. Build the image. If you have pulled the image from Docker Hub, skip this step (NOTE: You *will* need to do this if you are modifying requirements.txt to add/remove dependencies like Python libs/frameworks)

@@ -185,7 +185,7 @@ Once you have a recent version of Docker Compose, run the commands below in your
docker compose build auto-gpt
```

-2. Run Auto-GPT
+2. Run AutoGPT

```shell
docker compose run --rm auto-gpt
@@ -211,7 +211,7 @@ docker run -it --env-file=.env -v $PWD:/app auto-gpt
docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
```

-[Docker Compose file]: https://github.com/Significant-Gravitas/Auto-GPT/blob/stable/docker-compose.yml
+[Docker Compose file]: https://github.com/Significant-Gravitas/AutoGPT/blob/stable/docker-compose.yml


### Run with Dev Container
@@ -239,7 +239,7 @@ pip3 install --upgrade pip
Due to security reasons, certain features (like Python execution) will by default be disabled when running without docker. So, even if you want to run the program outside a docker container, you currently still need docker to actually run scripts.

Simply run the startup script in your terminal. This will install any necessary Python
-packages and launch Auto-GPT.
+packages and launch AutoGPT.

- On Linux/MacOS:


@@ -1,4 +1,4 @@
-## Share your logs with us to help improve Auto-GPT
+## Share your logs with us to help improve AutoGPT

Do you notice weird behavior with your agent? Do you have an interesting use case? Do you have a bug you want to report?
Follow the steps below to enable your logs and upload them. You can include these logs when making an issue report or discussing an issue with us.

@@ -21,15 +21,15 @@ Running with `--help` lists all the possible command line arguments you can pass
!!! note
Replace anything in angled brackets (<>) with a value you want to specify

-Here are some common arguments you can use when running Auto-GPT:
+Here are some common arguments you can use when running AutoGPT:

-* Run Auto-GPT with a different AI Settings file
+* Run AutoGPT with a different AI Settings file

```shell
./run.sh --ai-settings <filename>
```

-* Run Auto-GPT with a different Prompt Settings file
+* Run AutoGPT with a different Prompt Settings file

```shell
./run.sh --prompt-settings <filename>
@@ -47,7 +47,7 @@ Here are some common arguments you can use when running Auto-GPT:

### Speak Mode

-Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
+Enter this command to use TTS _(Text-to-Speech)_ for AutoGPT

```shell
./run.sh --speak
@@ -72,7 +72,7 @@ Running Self-Feedback will **INCREASE** token use and thus cost more. This featu

### GPT-3.5 ONLY Mode

-If you don't have access to GPT-4, this mode allows you to use Auto-GPT!
+If you don't have access to GPT-4, this mode allows you to use AutoGPT!

```shell
./run.sh --gpt3only
@@ -82,7 +82,7 @@ You can achieve the same by setting `SMART_LLM` in `.env` to `gpt-3.5-turbo`.

### GPT-4 ONLY Mode

-If you have access to GPT-4, this mode allows you to use Auto-GPT solely with GPT-4.
+If you have access to GPT-4, this mode allows you to use AutoGPT solely with GPT-4.
This may give your bot increased intelligence.

```shell
@@ -90,7 +90,7 @@ This may give your bot increased intelligence.
```

!!! warning
-Since GPT-4 is more expensive to use, running Auto-GPT in GPT-4-only mode will
+Since GPT-4 is more expensive to use, running AutoGPT in GPT-4-only mode will
increase your API costs.

## Logs
