mirror of
https://github.com/aljazceru/Auto-GPT.git
synced 2025-12-18 14:34:23 +01:00
* feat: Refactor config loading and initialization to be modular and decentralized
- Refactored the `ConfigBuilder` class to support modular loading and initialization of the configuration from environment variables.
- Implemented recursive loading and initialization of nested config objects.
- Introduced the `SystemConfiguration` base class to provide common functionality for all system settings.
- Added the `from_env` attribute to the `UserConfigurable` decorator to provide environment variable mappings.
- Updated the `Config` class and its related classes to inherit from `SystemConfiguration` and use the `UserConfigurable` decorator.
- Updated `LoggingConfig` and `TTSConfig` to use the `UserConfigurable` decorator for their fields.
- Modified the implementation of the `build_config_from_env` method in `ConfigBuilder` to utilize the new modular and recursive loading and initialization logic.
- Updated applicable test cases to reflect the changes in the config loading and initialization logic.
This refactor improves the flexibility and maintainability of the configuration loading process by introducing modular and recursive behavior, allowing for easier extension and customization through environment variables.
* refactor: Move OpenAI credentials into `OpenAICredentials` sub-config
- Move OpenAI API key and other OpenAI credentials from the global config to a new sub-config called `OpenAICredentials`.
- Update the necessary code to use the new `OpenAICredentials` sub-config instead of the global config when accessing OpenAI credentials.
- (Hopefully) unbreak Azure support.
- Update azure.yaml.template.
- Enable validation of assignment operations on SystemConfiguration and SystemSettings objects.
* feat: Update AutoGPT configuration options and setup instructions
- Added new configuration options for logging and OpenAI usage to .env.template
- Removed deprecated configuration options in config/config.py
- Updated setup instructions in Docker and general setup documentation to include information on using Azure's OpenAI services
* fix: Fix image generation with Dall-E
- Fix issue with image generation with Dall-E API
Additional user context: This commit fixes an issue with image generation using the Dall-E API. The code now correctly retrieves the API key from the agent's legacy configuration.
* refactor(agent/core): Refactor `autogpt.core.configuration.schema` and update docstrings
- Refactored the `schema.py` file in the `autogpt.core.configuration` module.
- Added a docstring to `SystemConfiguration.from_env()`.
- Updated docstrings for the functions `_get_user_config_values`, `_get_non_default_user_config_values`, `_recursive_init_model`, `_recurse_user_config_fields`, and `_recurse_user_config_values`.
198 lines
7.1 KiB
Markdown
# AutoGPT + Docker guide

!!! important
    Docker Compose version 1.29.0 or later is required to use version 3.9 of the Compose file format.

    You can check the version of Docker Compose installed on your system by running the following command:

    ```shell
    docker compose version
    ```

    This will display the version of Docker Compose that is currently installed on your system.

    If you need to upgrade Docker Compose to a newer version, you can follow the installation instructions in the Docker documentation: https://docs.docker.com/compose/install/
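
To make the version requirement above concrete, the check can also be scripted. The sketch below compares the installed version against 1.29.0 using version-aware sort; it assumes Compose v2's `docker compose version --short` flag is available and falls back gracefully when Docker is not installed, so treat it as illustrative rather than part of the official setup:

```shell
# Illustrative sketch: compare the installed Compose version against the
# required minimum using version-aware sort (`sort -V`).
required="1.29.0"
# `docker compose version --short` prints just the version string on Compose v2;
# fall back to "0" if the command is unavailable.
current="$(docker compose version --short 2>/dev/null || echo 0)"
oldest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n 1)"
if [ "$oldest" = "$required" ]; then
    echo "Docker Compose $current satisfies the >= $required requirement"
else
    echo "Docker Compose $current is too old; please upgrade" >&2
fi
```

The trick here is that `sort -V` orders version strings numerically per component, so the required version sorting first means the installed version is at least as new.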
## Basic Setup

1. Make sure you have Docker installed, see [requirements](#requirements)
2. Create a project directory for AutoGPT

    ```shell
    mkdir AutoGPT
    cd AutoGPT
    ```

3. In the project directory, create a file called `docker-compose.yml`:

    <details>
    <summary>
      <code>docker-compose.yml</code> for <= v0.4.7
    </summary>

    ```yaml
    version: "3.9"

    services:
      auto-gpt:
        image: significantgravitas/auto-gpt
        env_file:
          - .env
        profiles: ["exclude-from-up"]
        volumes:
          - ./auto_gpt_workspace:/app/auto_gpt_workspace
          - ./data:/app/data
          ## allow auto-gpt to write logs to disk
          - ./logs:/app/logs
          ## uncomment following lines if you want to make use of these files
          ## you must have them existing in the same folder as this docker-compose.yml
          #- type: bind
          #  source: ./azure.yaml
          #  target: /app/azure.yaml
          #- type: bind
          #  source: ./ai_settings.yaml
          #  target: /app/ai_settings.yaml
          #- type: bind
          #  source: ./prompt_settings.yaml
          #  target: /app/prompt_settings.yaml
    ```

    </details>

    <details>
    <summary>
      <code>docker-compose.yml</code> for > v0.4.7 (including <code>master</code>)
    </summary>

    ```yaml
    version: "3.9"

    services:
      auto-gpt:
        image: significantgravitas/auto-gpt
        env_file:
          - .env
        ports:
          - "8000:8000"  # remove this if you just want to run a single agent in TTY mode
        profiles: ["exclude-from-up"]
        volumes:
          - ./data:/app/data
          ## allow auto-gpt to write logs to disk
          - ./logs:/app/logs
          ## uncomment following lines if you want to make use of these files
          ## you must have them existing in the same folder as this docker-compose.yml
          #- type: bind
          #  source: ./ai_settings.yaml
          #  target: /app/ai_settings.yaml
          #- type: bind
          #  source: ./prompt_settings.yaml
          #  target: /app/prompt_settings.yaml
    ```

    </details>

4. Download [`.env.template`][.env.template] and save it as `.env` in the AutoGPT folder.
5. Follow the [configuration](#configuration) steps.
6. Pull the latest image from [Docker Hub]:

    ```shell
    docker pull significantgravitas/auto-gpt
    ```

!!! note "Docker only supports headless browsing"
    AutoGPT uses a browser in headless mode by default: `HEADLESS_BROWSER=True`.
    Please do not change this setting in combination with Docker, or AutoGPT will crash.

[.env.template]: https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpts/autogpt/.env.template
[Docker Hub]: https://hub.docker.com/r/significantgravitas/auto-gpt

## Configuration

1. Open the `.env` file in a text editor. This file may be hidden by default in some
    operating systems due to the dot prefix. To reveal hidden files, follow the
    instructions for your specific operating system:
    [Windows][show hidden files/Windows], [macOS][show hidden files/macOS].
2. Find the line that says `OPENAI_API_KEY=`.
3. After the `=`, enter your unique OpenAI API Key *without any quotes or spaces*.
4. Enter any other API keys or tokens for services you would like to use.

    !!! note
        To activate and adjust a setting, remove the `# ` prefix.

5. Save and close the `.env` file.
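
The `# ` prefix rule from the note above can also be applied non-interactively. The sketch below uncomments a single setting with `sed`; `SMART_LLM` is only a stand-in for whichever setting you want to enable, and the file is rewritten via a temporary copy:

```shell
# Sketch: activate one commented-out setting in .env by stripping its "# "
# prefix. SMART_LLM is a stand-in for whichever setting you want to enable.
setting="SMART_LLM"
if [ -f .env ]; then
    # only lines that begin with "# SMART_LLM=" lose the "# " prefix;
    # every other line is left untouched
    sed "s/^# ${setting}=/${setting}=/" .env > .env.tmp && mv .env.tmp .env
fi
```

Anchoring the pattern with `^` keeps the substitution from touching explanatory comments that merely mention the setting name mid-line.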

Templates for the optional extra configuration files (e.g. `prompt_settings.yaml`) can be
found in the [repository].

!!! info "Using a GPT Azure-instance"
    If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and
    make an Azure configuration file:

    - Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`,
      `azure_api_version` and all the deployment IDs for the relevant models in the
      `azure_model_map` section:
        - `fast_llm_deployment_id`: your gpt-3.5-turbo or gpt-4 deployment ID
        - `smart_llm_deployment_id`: your gpt-4 deployment ID
        - `embedding_model_deployment_id`: your text-embedding-ada-002 v2 deployment ID

    Example:

    ```yaml
    # Please specify all of these values as double-quoted strings
    # Replace each string in angled brackets (<>) with your own deployment name
    azure_model_map:
        fast_llm_deployment_id: "<auto-gpt-deployment>"
        ...
    ```

    Details can be found in the [openai-python docs], and in the [Azure OpenAI docs] for the embedding model.
    If you're on Windows you may need to install an [MSVC library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).

    **Note:** Azure support has been dropped in `master`, so these instructions will only work with v0.4.7 (or earlier).

[repository]: https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpts/autogpt
[show hidden files/Windows]: https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-97fbc472-c603-9d90-91d0-1166d1d9f4b5
[show hidden files/macOS]: https://www.pcmag.com/how-to/how-to-access-your-macs-hidden-files
[openai-python docs]: https://github.com/openai/openai-python#microsoft-azure-endpoints
[Azure OpenAI docs]: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line

## Developer Setup

!!! tip
    Use this setup if you have cloned the repository and have made (or want to make)
    changes to the codebase.

1. Copy `.env.template` to `.env`.
2. Follow the standard [configuration](#configuration) steps above.

## Running AutoGPT with Docker

After following the setup instructions above, you can run AutoGPT with the following command:

```shell
docker compose run --rm auto-gpt
```

This creates and starts an AutoGPT container, and removes it after the application stops.
This does not mean your data will be lost: data generated by the application is stored
in the `data` folder.

Subcommands and arguments work the same as described in the [user guide]:

* Run AutoGPT:

    ```shell
    docker compose run --rm auto-gpt serve
    ```

* Run AutoGPT in TTY mode, with continuous mode:

    ```shell
    docker compose run --rm auto-gpt run --continuous
    ```

* Run AutoGPT in TTY mode and install dependencies for all active plugins:

    ```shell
    docker compose run --rm auto-gpt run --install-plugin-deps
    ```
If you dare, you can also build and run it with "vanilla" docker commands:
|
|
|
|
```shell
|
|
docker build -t autogpt .
|
|
docker run -it --env-file=.env -v $PWD:/app autogpt
|
|
docker run -it --env-file=.env -v $PWD:/app --rm autogpt --gpt3only --continuous
|
|
```
|
|
|
|
[user guide]: /autogpt/usage/#command-line-interface
|