Auto-GPT/autogpts/autogpt/docker-compose.yml
Reinier van der Leer 1f40d72081 feat(agent/workspace): Add GCS and S3 FileWorkspace providers (#6485)
* refactor: Rename FileWorkspace to LocalFileWorkspace and create FileWorkspace abstract class
  - Rename `FileWorkspace` to `LocalFileWorkspace` to give the class that implements a workspace on the local filesystem a more descriptive name.
  - Create a new base class `FileWorkspace` to serve as the parent class for `LocalFileWorkspace`. This allows for easier extension and customization of file workspaces in the future.
  - Update import statements and references to `FileWorkspace` throughout the codebase to use the new naming conventions.
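
A minimal sketch of the described split, assuming a small read/write interface; only the two class names come from the commit, the methods are illustrative:

```python
from abc import ABC, abstractmethod
from pathlib import Path


class FileWorkspace(ABC):
    """Abstract interface for an agent's file workspace."""

    @abstractmethod
    def read_file(self, path: str) -> str: ...

    @abstractmethod
    def write_file(self, path: str, content: str) -> None: ...


class LocalFileWorkspace(FileWorkspace):
    """Workspace backed by a directory on the local filesystem."""

    def __init__(self, root: Path):
        self.root = root

    def read_file(self, path: str) -> str:
        return (self.root / path).read_text()

    def write_file(self, path: str, content: str) -> None:
        (self.root / path).write_text(content)
```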

* feat: Add S3FileWorkspace + tests + test setups for CI and Docker
  - Added the `S3FileWorkspace` class, which implements the file workspace interface on top of an S3 bucket.
  - Updated pyproject.toml to include dependencies for boto3 and boto3-stubs.
  - Implemented unit tests for S3FileWorkspace.
  - Added MinIO service to Docker CI to allow testing S3 features in CI.
  - Added autogpt-test service config to docker-compose.yml for local testing with MinIO.
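
A sketch of what the S3-backed variant might look like; the boto3 calls are real, but the constructor and method signatures beyond the class name are assumptions:

```python
import os
from pathlib import Path

import boto3


class S3FileWorkspace(FileWorkspace):
    def __init__(self, root: Path, bucket_name: str):
        self.root = root
        # A custom endpoint (e.g. the MinIO service below) can be injected via env.
        self._s3 = boto3.resource("s3", endpoint_url=os.getenv("S3_ENDPOINT_URL"))
        self._bucket = self._s3.Bucket(bucket_name)

    def read_file(self, path: str) -> str:
        return self._bucket.Object(str(self.root / path)).get()["Body"].read().decode()

    def write_file(self, path: str, content: str) -> None:
        self._bucket.Object(str(self.root / path)).put(Body=content.encode())
```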

* ci(docker): tee test output instead of capturing

* fix: Improve error handling in S3FileWorkspace.initialize()
  - Do not tolerate every `botocore.exceptions.ClientError`
  - Re-raise the exception if the error is not "NoSuchBucket"
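
In terms of the sketch above, `initialize()` could implement this as follows, assuming `head_bucket` is used for the existence check (boto3 reports a missing bucket as a `ClientError`):

```python
import botocore.exceptions


def initialize(self) -> None:
    try:
        self._s3.meta.client.head_bucket(Bucket=self._bucket.name)
    except botocore.exceptions.ClientError as e:
        # Only tolerate the missing-bucket case and create the bucket then;
        # any other ClientError (auth, networking, ...) propagates.
        if e.response["Error"]["Code"] not in ("404", "NoSuchBucket"):
            raise
        self._s3.create_bucket(Bucket=self._bucket.name)
```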

* feat: Add S3 workspace backend support and S3Credentials
  - Added support for S3 workspace backend in the Autogpt configuration
  - Added a new sub-config `S3Credentials` to store S3 credentials
  - Modified the `.env.template` file to include variables related to S3 credentials
  - Added a new `s3_credentials` attribute on the `Config` class to store S3 credentials
  - Moved the `unmasked` method from `ModelProviderCredentials` to the parent `ProviderCredentials` class to handle unmasking for S3 credentials
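
One way the shared `unmasked` method could look on the parent class, assuming pydantic `SecretStr` fields (a sketch, not the project's exact implementation):

```python
from pydantic import BaseModel, SecretStr


class ProviderCredentials(BaseModel):
    """Shared parent for credential sub-configs (model providers, S3, ...)."""

    def unmasked(self) -> dict:
        # Reveal SecretStr fields; plain values pass through unchanged.
        return {
            name: value.get_secret_value() if isinstance(value, SecretStr) else value
            for name, value in self
        }
```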

* fix(agent/tests): Fix S3FileWorkspace initialization in test_s3_file_workspace.py
  - Update the `S3FileWorkspace` initialization in test_s3_file_workspace.py to include the required `S3Credentials`.

* refactor: Remove S3Credentials and add get_workspace function
  - Remove `S3Credentials`, since boto3 fetches credentials from the environment by itself
  - Add `get_workspace` function in `autogpt.file_workspace` module
  - Update `.env.template` and tests to reflect the changes
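
This works because boto3 resolves credentials from the environment on its own; a quick illustration (reading the endpoint from `S3_ENDPOINT_URL` matches the test service config below):

```python
import os

import boto3

# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (among other sources)
# automatically, so no dedicated credentials sub-config is needed. Only a
# custom endpoint (e.g. MinIO) has to be passed in explicitly.
s3 = boto3.resource("s3", endpoint_url=os.getenv("S3_ENDPOINT_URL"))
```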

* feat(agent/workspace): Make agent workspace backend configurable
  - Modified the `autogpt.file_workspace.get_workspace` function to take either a workspace `id` or a `root_path`.
  - Modified `FileWorkspaceMixin` to use the `get_workspace` function to set up the workspace.
  - Updated the type hints and imports accordingly.
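
A sketch of how such a factory could dispatch on the configured backend; the `id`/`root_path` parameters come from the commit, everything else is assumed (the GCS branch arrives with the next commit):

```python
from pathlib import Path


def get_workspace(backend: str, *, id: str = "", root_path: Path | None = None) -> FileWorkspace:
    # Either an explicit root_path or a workspace id must be supplied;
    # deriving a root from the id like this is an assumption.
    if root_path is None:
        root_path = Path("/workspaces") / id
    if backend == "local":
        return LocalFileWorkspace(root_path)
    if backend == "s3":
        return S3FileWorkspace(root_path, bucket_name="autogpt-workspace")
    raise ValueError(f"Unknown workspace backend: {backend}")
```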

* feat(agent/workspace): Add GCSFileWorkspace for Google Cloud Storage
  - Added support for Google Cloud Storage as a storage backend option in the workspace.
  - Created the `GCSFileWorkspace` class to interface with a file workspace backed by a Google Cloud Storage bucket.
  - Implemented the `GCSFileWorkspaceConfiguration` class to handle the configuration for Google Cloud Storage workspaces.
  - Updated the `get_workspace` function to include the option to use Google Cloud Storage as a workspace backend.
  - Added unit tests for the new `GCSFileWorkspace` class.
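
The GCS variant might mirror the S3 sketch above, using the `google-cloud-storage` client; method names and signatures beyond the class name are assumptions:

```python
from pathlib import Path

from google.cloud import storage


class GCSFileWorkspace(FileWorkspace):
    def __init__(self, root: Path, bucket_name: str):
        self.root = root
        # Uses GOOGLE_APPLICATION_CREDENTIALS etc. from the environment.
        self._gcs = storage.Client()
        self._bucket = self._gcs.bucket(bucket_name)

    def read_file(self, path: str) -> str:
        return self._bucket.blob(str(self.root / path)).download_as_text()

    def write_file(self, path: str, content: str) -> None:
        self._bucket.blob(str(self.root / path)).upload_from_string(content)
```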

* fix: Unbreak use of non-local workspaces in AgentProtocolServer
  - Modify the `_get_task_agent_file_workspace` method to handle both local and non-local workspaces correctly
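
A hypothetical sketch of that fix: a local backend is rooted in the agent's on-disk directory, while remote backends are addressed by id only. The attribute and helper names here are assumptions:

```python
def _get_task_agent_file_workspace(self, task_id: str) -> FileWorkspace:
    backend = self.app_config.workspace_backend
    agent_id = task_agent_id(task_id)  # assumed helper mapping task -> agent
    if backend == "local":
        root = self.agent_manager.get_agent_dir(agent_id) / "workspace"
        workspace = get_workspace(backend, root_path=root)
    else:
        workspace = get_workspace(backend, id=agent_id)
    workspace.initialize()
    return workspace
```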
2023-12-07 14:46:08 +01:00


# To boot the app run the following:
# docker compose run auto-gpt
# NOTE: Version 3.9 requires at least Docker Compose version 2 and Docker Engine version 20.10.13!
version: "3.9"

services:
  auto-gpt:
    build: ./
    env_file:
      - .env
    ports:
      - "8000:8000"
    volumes:
      - ./:/app
      - ./docker-compose.yml:/app/docker-compose.yml:ro
      - ./Dockerfile:/app/Dockerfile:ro
    profiles: ["exclude-from-up"]

  # Only for TESTING purposes. Run with: docker compose run --build --rm autogpt-test
  autogpt-test:
    build: ./
    env_file:
      - .env
    environment:
      S3_ENDPOINT_URL: http://minio:9000
      AWS_ACCESS_KEY_ID: minio
      AWS_SECRET_ACCESS_KEY: minio123
    entrypoint: ["poetry", "run"]
    command: ["pytest", "-v"]
    volumes:
      - ./autogpt:/app/autogpt
      - ./tests:/app/tests
    depends_on:
      - minio
    profiles: ["exclude-from-up"]

  minio:
    image: minio/minio
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    ports:
      - 9000:9000
    volumes:
      - minio-data:/data
    command: server /data
    profiles: ["exclude-from-up"]

volumes:
  minio-data: