refactor(agent/config): Modularize Config and revive Azure support (#6497)

* feat: Refactor config loading and initialization to be modular and decentralized

   - Refactored the `ConfigBuilder` class to support modular loading and initialization of the configuration from environment variables.
   - Implemented recursive loading and initialization of nested config objects.
   - Introduced the `SystemConfiguration` base class to provide common functionality for all system settings.
   - Added the `from_env` attribute to the `UserConfigurable` decorator to provide environment variable mappings.
   - Updated the `Config` class and its related classes to inherit from `SystemConfiguration` and use the `UserConfigurable` decorator.
   - Updated `LoggingConfig` and `TTSConfig` to use the `UserConfigurable` decorator for their fields.
   - Modified the implementation of the `build_config_from_env` method in `ConfigBuilder` to utilize the new modular and recursive loading and initialization logic.
   - Updated applicable test cases to reflect the changes in the config loading and initialization logic.

   This refactor improves the flexibility and maintainability of the configuration loading process by introducing modular and recursive behavior, allowing for easier extension and customization through environment variables.
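The field-level `from_env` pattern described above can be sketched in miniature. This is an illustrative stand-in, not the actual AutoGPT implementation (in the real code, `UserConfigurable` is a pydantic `Field` wrapper and the loading lives in `autogpt.core.configuration.schema`); it only shows how a default can be paired with an environment lookup, where `from_env` is either an env-var name or a callable:

```python
import os

def user_configurable(default, from_env):
    """Pair a default value with an env lookup (var name or callable)."""
    getter = from_env if callable(from_env) else (lambda: os.getenv(from_env))
    return (default, getter)

# Two example fields, mirroring fields that appear in the diff below.
FIELDS = {
    "authorise_key": user_configurable("y", "AUTHORISE_COMMAND_KEY"),
    "temperature": user_configurable(
        0.0, lambda: float(v) if (v := os.getenv("TEMPERATURE")) else None
    ),
}

def load_config() -> dict:
    """Build a config dict, preferring env values over defaults."""
    return {
        name: (env_val if (env_val := getter()) is not None else default)
        for name, (default, getter) in FIELDS.items()
    }
```

Because each field carries its own env mapping, `ConfigBuilder` no longer needs one giant dict of `os.getenv` calls; the loader simply walks the fields.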

* refactor: Move OpenAI credentials into `OpenAICredentials` sub-config

   - Moved the OpenAI API key and other OpenAI credentials from the global config into a new `OpenAICredentials` sub-config.
   - Updated call sites to access OpenAI credentials through the new sub-config instead of the global config.
   - (Hopefully) unbreak Azure support.
      - Update azure.yaml.template.
   - Enable validation of assignment operations on SystemConfiguration and SystemSettings objects.
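The rough shape of such a credentials sub-config might look like the following. This is an illustrative sketch only — the real `OpenAICredentials` lives in `autogpt.core.resource.model_providers.openai`, stores secrets as pydantic `SecretStr` fields, and its `get_api_access_kwargs` also resolves Azure deployment IDs per model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OpenAICredentials:
    # Field names mirror the corresponding env vars (OPENAI_API_KEY,
    # OPENAI_API_BASE_URL, OPENAI_API_TYPE, OPENAI_API_VERSION).
    api_key: str
    api_base: Optional[str] = None
    api_type: Optional[str] = None      # e.g. "azure"
    api_version: Optional[str] = None   # required by the Azure API

    def get_api_access_kwargs(self) -> dict:
        """Collect only the fields that are actually set."""
        return {k: v for k, v in vars(self).items() if v is not None}
```

Grouping these fields in one object means Azure-specific settings travel with the key, instead of being scattered across the global config.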

* feat: Update AutoGPT configuration options and setup instructions

   - Added new configuration options for logging and OpenAI usage to .env.template
   - Removed deprecated configuration options in config/config.py
   - Updated setup instructions in Docker and general setup documentation to include information on using Azure's OpenAI services

* fix: Fix image generation with Dall-E

   - Fixed an issue with image generation via the DALL-E API: the code now correctly retrieves the API key from the agent's legacy configuration.
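The reason the retrieval changed is that the key is now stored as a secret type, so call sites must unwrap it explicitly rather than reading a plain string. A tiny stand-in for pydantic's `SecretStr` (illustrative only, with a hypothetical `call_image_api` call site) shows the pattern:

```python
class Secret:
    """Tiny stand-in for pydantic's SecretStr: the raw value never
    appears in repr()/str() and must be unwrapped explicitly."""

    def __init__(self, value: str):
        self._value = value

    def __repr__(self) -> str:
        return "Secret('**********')"

    __str__ = __repr__

    def get_secret_value(self) -> str:
        return self._value

def call_image_api(api_key: Secret) -> str:
    # A call site unwraps deliberately, as the DALL-E fix does with
    # openai_credentials.api_key.get_secret_value().
    return f"authorized with {api_key.get_secret_value()[:3]}..."
```

Passing the `Secret` object itself into a request would fail loudly instead of silently leaking the key into logs, which is the point of the wrapper.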

* refactor(agent/core): Refactor `autogpt.core.configuration.schema` and update docstrings

   - Refactored the `schema.py` file in the `autogpt.core.configuration` module.
   - Added a docstring to `SystemConfiguration.from_env()`.
   - Updated docstrings for `_get_user_config_values`, `_get_non_default_user_config_values`, `_recursive_init_model`, `_recurse_user_config_fields`, and `_recurse_user_config_values`.
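The recursive initialization these helpers perform can be sketched as follows. This is a simplified, hypothetical version using stdlib dataclasses rather than the real pydantic-based schema: a loader walks a config class's fields, recursing into nested sub-configs and letting env vars override defaults at any depth.

```python
import os
from dataclasses import dataclass, field, fields, is_dataclass

@dataclass
class LoggingConfig:
    level: str = "INFO"

@dataclass
class AppConfig:
    authorise_key: str = "y"
    logging: LoggingConfig = field(default_factory=LoggingConfig)

# Field-name -> env-var mapping; in the real schema this is declared
# per field via UserConfigurable(from_env=...).
ENV_MAP = {
    "authorise_key": "AUTHORISE_COMMAND_KEY",
    "level": "LOG_LEVEL",
}

def from_env(cls):
    """Recursively build a config, letting env vars override defaults."""
    kwargs = {}
    for f in fields(cls):
        if is_dataclass(f.type):
            kwargs[f.name] = from_env(f.type)  # recurse into sub-configs
        elif (var := ENV_MAP.get(f.name)) and (val := os.getenv(var)):
            kwargs[f.name] = val
    return cls(**kwargs)
```

The recursion is what lets `Config`, `LoggingConfig`, and `TTSConfig` each own their env mappings while still being built by one call.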
Author: Reinier van der Leer
Date: 2023-12-05 16:28:23 +01:00
Committed by: GitHub
Parent: 03eb921ca6
Commit: 7b05245286
17 changed files with 666 additions and 401 deletions


@@ -1,5 +1,3 @@
-# For further descriptions of these settings see docs/configuration/options.md or go to docs.agpt.co
-
 ################################################################################
 ### AutoGPT - GENERAL SETTINGS
 ################################################################################
@@ -25,14 +23,6 @@ OPENAI_API_KEY=your-openai-api-key
 ## PROMPT_SETTINGS_FILE - Specifies which Prompt Settings file to use, relative to the AutoGPT root directory. (defaults to prompt_settings.yaml)
 # PROMPT_SETTINGS_FILE=prompt_settings.yaml

-## OPENAI_API_BASE_URL - Custom url for the OpenAI API, useful for connecting to custom backends. No effect if USE_AZURE is true, leave blank to keep the default url
-# the following is an example:
-# OPENAI_API_BASE_URL=http://localhost:443/v1
-
-## OPENAI_FUNCTIONS - Enables OpenAI functions: https://platform.openai.com/docs/guides/gpt/function-calling
-## WARNING: this feature is only supported by OpenAI's newest models. Until these models become the default on 27 June, add a '-0613' suffix to the model of your choosing.
-# OPENAI_FUNCTIONS=False
-
 ## AUTHORISE COMMAND KEY - Key to authorise commands
 # AUTHORISE_COMMAND_KEY=y
@@ -52,6 +42,17 @@ OPENAI_API_KEY=your-openai-api-key
 ## TEMPERATURE - Sets temperature in OpenAI (Default: 0)
 # TEMPERATURE=0

+## OPENAI_API_BASE_URL - Custom url for the OpenAI API, useful for connecting to custom backends. No effect if USE_AZURE is true, leave blank to keep the default url
+# the following is an example:
+# OPENAI_API_BASE_URL=http://localhost:443/v1
+# OPENAI_API_TYPE=
+# OPENAI_API_VERSION=
+
+## OPENAI_FUNCTIONS - Enables OpenAI functions: https://platform.openai.com/docs/guides/gpt/function-calling
+## Note: this feature is only supported by OpenAI's newer models.
+# OPENAI_FUNCTIONS=False
+
 ## OPENAI_ORGANIZATION - Your OpenAI Organization key (Default: None)
 # OPENAI_ORGANIZATION=
@@ -90,32 +91,6 @@ OPENAI_API_KEY=your-openai-api-key
 ## SHELL_ALLOWLIST - List of shell commands that ARE allowed to be executed by AutoGPT (Default: None)
 # SHELL_ALLOWLIST=

-################################################################################
-### MEMORY
-################################################################################
-
-### General
-## MEMORY_BACKEND - Memory backend type
-# MEMORY_BACKEND=json_file
-
-## MEMORY_INDEX - Value used in the Memory backend for scoping, naming, or indexing (Default: auto-gpt)
-# MEMORY_INDEX=auto-gpt
-
-### Redis
-## REDIS_HOST - Redis host (Default: localhost, use "redis" for docker-compose)
-# REDIS_HOST=localhost
-
-## REDIS_PORT - Redis port (Default: 6379)
-# REDIS_PORT=6379
-
-## REDIS_PASSWORD - Redis password (Default: "")
-# REDIS_PASSWORD=
-
-## WIPE_REDIS_ON_START - Wipes data / index on start (Default: True)
-# WIPE_REDIS_ON_START=True
-
 ################################################################################
 ### IMAGE GENERATION PROVIDER
 ################################################################################
@@ -191,13 +166,12 @@ OPENAI_API_KEY=your-openai-api-key
 ################################################################################

 ## TEXT_TO_SPEECH_PROVIDER - Which Text to Speech provider to use (Default: gtts)
+## Options: gtts, streamelements, elevenlabs, macos
 # TEXT_TO_SPEECH_PROVIDER=gtts

+### Only if TEXT_TO_SPEECH_PROVIDER=streamelements
 ## STREAMELEMENTS_VOICE - Voice to use for StreamElements (Default: Brian)
 # STREAMELEMENTS_VOICE=Brian

+### Only if TEXT_TO_SPEECH_PROVIDER=elevenlabs
 ## ELEVENLABS_API_KEY - Eleven Labs API key (Default: None)
 # ELEVENLABS_API_KEY=
@@ -210,3 +184,22 @@ OPENAI_API_KEY=your-openai-api-key
 ## CHAT_MESSAGES_ENABLED - Enable chat messages (Default: False)
 # CHAT_MESSAGES_ENABLED=False
+
+################################################################################
+### LOGGING
+################################################################################
+
+## LOG_LEVEL - Set the minimum level to filter log output by. Setting this to DEBUG implies LOG_FORMAT=debug, unless LOG_FORMAT is set explicitly.
+## Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
+# LOG_LEVEL=INFO
+
+## LOG_FORMAT - The format in which to log messages to the console (and log files).
+## Options: simple, debug, structured_google_cloud
+# LOG_FORMAT=simple
+
+## LOG_FILE_FORMAT - Normally follows the LOG_FORMAT setting, but can be set separately.
+## Note: Log file output is disabled if LOG_FORMAT=structured_google_cloud.
+# LOG_FILE_FORMAT=simple
+
+## PLAIN_OUTPUT - Disables animated typing in the console output.
+# PLAIN_OUTPUT=False


@@ -3,7 +3,7 @@ from __future__ import annotations

 import logging
 from pathlib import Path
-from typing import Literal, Optional
+from typing import TYPE_CHECKING, Literal, Optional

 import click
 from colorama import Back, Fore, Style
@@ -16,6 +16,9 @@ from autogpt.logs.config import LogFormatName
 from autogpt.logs.helpers import print_attribute, request_user_double_check
 from autogpt.memory.vector import get_supported_memory_backends

+if TYPE_CHECKING:
+    from autogpt.core.resource.model_providers.openai import OpenAICredentials
+
 logger = logging.getLogger(__name__)
@@ -103,7 +106,11 @@ def apply_overrides_to_config(
         config.smart_llm = GPT_3_MODEL
     elif (
         gpt4only
-        and check_model(GPT_4_MODEL, model_type="smart_llm", config=config)
+        and check_model(
+            GPT_4_MODEL,
+            model_type="smart_llm",
+            api_credentials=config.openai_credentials,
+        )
         == GPT_4_MODEL
     ):
         print_attribute("GPT4 Only Mode", "ENABLED")
@@ -111,8 +118,12 @@ def apply_overrides_to_config(
         config.fast_llm = GPT_4_MODEL
         config.smart_llm = GPT_4_MODEL
     else:
-        config.fast_llm = check_model(config.fast_llm, "fast_llm", config=config)
-        config.smart_llm = check_model(config.smart_llm, "smart_llm", config=config)
+        config.fast_llm = check_model(
+            config.fast_llm, "fast_llm", api_credentials=config.openai_credentials
+        )
+        config.smart_llm = check_model(
+            config.smart_llm, "smart_llm", api_credentials=config.openai_credentials
+        )

     if memory_type:
         supported_memory = get_supported_memory_backends()
@@ -187,12 +198,11 @@ def apply_overrides_to_config(
 def check_model(
     model_name: str,
     model_type: Literal["smart_llm", "fast_llm"],
-    config: Config,
+    api_credentials: OpenAICredentials,
 ) -> str:
     """Check if model is available for use. If not, return gpt-3.5-turbo."""
-    openai_credentials = config.get_openai_credentials(model_name)
     api_manager = ApiManager()
-    models = api_manager.get_models(**openai_credentials)
+    models = api_manager.get_models(**api_credentials.get_api_access_kwargs(model_name))

     if any(model_name in m["id"] for m in models):
         return model_name


@@ -14,7 +14,6 @@ from typing import TYPE_CHECKING, Optional
 from colorama import Fore, Style
 from forge.sdk.db import AgentDB
-from pydantic import SecretStr

 if TYPE_CHECKING:
     from autogpt.agents.agent import Agent
@@ -31,7 +30,6 @@ from autogpt.config import (
     ConfigBuilder,
     assert_config_has_openai_api_key,
 )
-from autogpt.core.resource.model_providers import ModelProviderCredentials
 from autogpt.core.resource.model_providers.openai import OpenAIProvider
 from autogpt.core.runner.client_lib.utils import coroutine
 from autogpt.logs.config import configure_chat_plugins, configure_logging
@@ -364,19 +362,11 @@ def _configure_openai_provider(config: Config) -> OpenAIProvider:
     Returns:
         A configured OpenAIProvider object.
     """
-    if config.openai_api_key is None:
+    if config.openai_credentials is None:
         raise RuntimeError("OpenAI key is not configured")

     openai_settings = OpenAIProvider.default_settings.copy(deep=True)
-    openai_settings.credentials = ModelProviderCredentials(
-        api_key=SecretStr(config.openai_api_key),
-        # TODO: support OpenAI Azure credentials
-        api_base=SecretStr(config.openai_api_base) if config.openai_api_base else None,
-        api_type=SecretStr(config.openai_api_type) if config.openai_api_type else None,
-        api_version=SecretStr(config.openai_api_version)
-        if config.openai_api_version
-        else None,
-    )
+    openai_settings.credentials = config.openai_credentials
     return OpenAIProvider(
         settings=openai_settings,
         logger=logging.getLogger("OpenAIProvider"),


@@ -147,7 +147,7 @@ def generate_image_with_dalle(
         n=1,
         size=f"{size}x{size}",
         response_format="b64_json",
-        api_key=agent.legacy_config.openai_api_key,
+        api_key=agent.legacy_config.openai_credentials.api_key.get_secret_value(),
     )

     logger.info(f"Image Generated for prompt:{prompt}")


@@ -1,22 +1,26 @@
 """Configuration class to store the state of bools for different scripts access."""
 from __future__ import annotations

-import contextlib
-import logging
 import os
 import re
 from pathlib import Path
-from typing import Any, Dict, Optional, Union
+from typing import Any, Optional, Union

-import yaml
 from auto_gpt_plugin_template import AutoGPTPluginTemplate
 from colorama import Fore
-from pydantic import Field, validator
+from pydantic import Field, SecretStr, validator

 import autogpt
-from autogpt.core.configuration.schema import Configurable, SystemSettings
-from autogpt.core.resource.model_providers.openai import OPEN_AI_CHAT_MODELS
-from autogpt.logs.config import LogFormatName, LoggingConfig
+from autogpt.core.configuration.schema import (
+    Configurable,
+    SystemSettings,
+    UserConfigurable,
+)
+from autogpt.core.resource.model_providers.openai import (
+    OPEN_AI_CHAT_MODELS,
+    OpenAICredentials,
+)
+from autogpt.logs.config import LoggingConfig
 from autogpt.plugins.plugins_config import PluginsConfig
 from autogpt.speech import TTSConfig
@@ -33,6 +37,7 @@ GPT_3_MODEL = "gpt-3.5-turbo"
 class Config(SystemSettings, arbitrary_types_allowed=True):
     name: str = "Auto-GPT configuration"
     description: str = "Default configuration for the Auto-GPT application."
+
     ########################
     # Application Settings #
     ########################
@@ -40,10 +45,12 @@ class Config(SystemSettings, arbitrary_types_allowed=True):
     app_data_dir: Path = project_root / "data"
     skip_news: bool = False
     skip_reprompt: bool = False
-    authorise_key: str = "y"
-    exit_key: str = "n"
+    authorise_key: str = UserConfigurable(default="y", from_env="AUTHORISE_COMMAND_KEY")
+    exit_key: str = UserConfigurable(default="n", from_env="EXIT_KEY")
     noninteractive_mode: bool = False
-    chat_messages_enabled: bool = True
+    chat_messages_enabled: bool = UserConfigurable(
+        default=True, from_env=lambda: os.getenv("CHAT_MESSAGES_ENABLED") == "True"
+    )

     # TTS configuration
     tts_config: TTSConfig = TTSConfig()
     logging: LoggingConfig = LoggingConfig()
@@ -52,15 +59,38 @@ class Config(SystemSettings, arbitrary_types_allowed=True):
     # Agent Control Settings #
     ##########################
     # Paths
-    ai_settings_file: Path = project_root / AI_SETTINGS_FILE
-    prompt_settings_file: Path = project_root / PROMPT_SETTINGS_FILE
+    ai_settings_file: Path = UserConfigurable(
+        default=AI_SETTINGS_FILE,
+        from_env=lambda: Path(f) if (f := os.getenv("AI_SETTINGS_FILE")) else None,
+    )
+    prompt_settings_file: Path = UserConfigurable(
+        default=PROMPT_SETTINGS_FILE,
+        from_env=lambda: Path(f) if (f := os.getenv("PROMPT_SETTINGS_FILE")) else None,
+    )

     # Model configuration
-    fast_llm: str = "gpt-3.5-turbo-16k"
-    smart_llm: str = "gpt-4"
-    temperature: float = 0
-    openai_functions: bool = False
-    embedding_model: str = "text-embedding-ada-002"
-    browse_spacy_language_model: str = "en_core_web_sm"
+    fast_llm: str = UserConfigurable(
+        default="gpt-3.5-turbo-16k",
+        from_env=lambda: os.getenv("FAST_LLM"),
+    )
+    smart_llm: str = UserConfigurable(
+        default="gpt-4",
+        from_env=lambda: os.getenv("SMART_LLM"),
+    )
+    temperature: float = UserConfigurable(
+        default=0,
+        from_env=lambda: float(v) if (v := os.getenv("TEMPERATURE")) else None,
+    )
+    openai_functions: bool = UserConfigurable(
+        default=False, from_env=lambda: os.getenv("OPENAI_FUNCTIONS", "False") == "True"
+    )
+    embedding_model: str = UserConfigurable(
+        default="text-embedding-ada-002", from_env="EMBEDDING_MODEL"
+    )
+    browse_spacy_language_model: str = UserConfigurable(
+        default="en_core_web_sm", from_env="BROWSE_SPACY_LANGUAGE_MODEL"
+    )

     # Run loop configuration
     continuous_mode: bool = False
     continuous_limit: int = 0
@@ -68,74 +98,138 @@ class Config(SystemSettings, arbitrary_types_allowed=True):
     ##########
     # Memory #
     ##########
-    memory_backend: str = "json_file"
-    memory_index: str = "auto-gpt-memory"
-    redis_host: str = "localhost"
-    redis_port: int = 6379
-    redis_password: str = ""
-    wipe_redis_on_start: bool = True
+    memory_backend: str = UserConfigurable("json_file", from_env="MEMORY_BACKEND")
+    memory_index: str = UserConfigurable("auto-gpt-memory", from_env="MEMORY_INDEX")
+    redis_host: str = UserConfigurable("localhost", from_env="REDIS_HOST")
+    redis_port: int = UserConfigurable(
+        default=6379,
+        from_env=lambda: int(v) if (v := os.getenv("REDIS_PORT")) else None,
+    )
+    redis_password: str = UserConfigurable("", from_env="REDIS_PASSWORD")
+    wipe_redis_on_start: bool = UserConfigurable(
+        default=True,
+        from_env=lambda: os.getenv("WIPE_REDIS_ON_START", "True") == "True",
+    )

     ############
     # Commands #
     ############
     # General
-    disabled_command_categories: list[str] = Field(default_factory=list)
+    disabled_command_categories: list[str] = UserConfigurable(
+        default_factory=list,
+        from_env=lambda: _safe_split(os.getenv("DISABLED_COMMAND_CATEGORIES")),
+    )

     # File ops
-    restrict_to_workspace: bool = True
+    restrict_to_workspace: bool = UserConfigurable(
+        default=True,
+        from_env=lambda: os.getenv("RESTRICT_TO_WORKSPACE", "True") == "True",
+    )
     allow_downloads: bool = False

     # Shell commands
-    shell_command_control: str = "denylist"
-    execute_local_commands: bool = False
-    shell_denylist: list[str] = Field(default_factory=lambda: ["sudo", "su"])
-    shell_allowlist: list[str] = Field(default_factory=list)
+    shell_command_control: str = UserConfigurable(
+        default="denylist", from_env="SHELL_COMMAND_CONTROL"
+    )
+    execute_local_commands: bool = UserConfigurable(
+        default=False,
+        from_env=lambda: os.getenv("EXECUTE_LOCAL_COMMANDS", "False") == "True",
+    )
+    shell_denylist: list[str] = UserConfigurable(
+        default_factory=lambda: ["sudo", "su"],
+        from_env=lambda: _safe_split(
+            os.getenv("SHELL_DENYLIST", os.getenv("DENY_COMMANDS"))
+        ),
+    )
+    shell_allowlist: list[str] = UserConfigurable(
+        default_factory=list,
+        from_env=lambda: _safe_split(
+            os.getenv("SHELL_ALLOWLIST", os.getenv("ALLOW_COMMANDS"))
+        ),
+    )

     # Text to image
-    image_provider: Optional[str] = None
-    huggingface_image_model: str = "CompVis/stable-diffusion-v1-4"
-    sd_webui_url: Optional[str] = "http://localhost:7860"
-    image_size: int = 256
+    image_provider: Optional[str] = UserConfigurable(from_env="IMAGE_PROVIDER")
+    huggingface_image_model: str = UserConfigurable(
+        default="CompVis/stable-diffusion-v1-4", from_env="HUGGINGFACE_IMAGE_MODEL"
+    )
+    sd_webui_url: Optional[str] = UserConfigurable(
+        default="http://localhost:7860", from_env="SD_WEBUI_URL"
+    )
+    image_size: int = UserConfigurable(
+        default=256,
+        from_env=lambda: int(v) if (v := os.getenv("IMAGE_SIZE")) else None,
+    )

     # Audio to text
-    audio_to_text_provider: str = "huggingface"
-    huggingface_audio_to_text_model: Optional[str] = None
+    audio_to_text_provider: str = UserConfigurable(
+        default="huggingface", from_env="AUDIO_TO_TEXT_PROVIDER"
+    )
+    huggingface_audio_to_text_model: Optional[str] = UserConfigurable(
+        from_env="HUGGINGFACE_AUDIO_TO_TEXT_MODEL"
+    )

     # Web browsing
-    selenium_web_browser: str = "chrome"
-    selenium_headless: bool = True
-    user_agent: str = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"  # noqa: E501
+    selenium_web_browser: str = UserConfigurable("chrome", from_env="USE_WEB_BROWSER")
+    selenium_headless: bool = UserConfigurable(
+        default=True, from_env=lambda: os.getenv("HEADLESS_BROWSER", "True") == "True"
+    )
+    user_agent: str = UserConfigurable(
+        default="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36",  # noqa: E501
+        from_env="USER_AGENT",
+    )

     ###################
     # Plugin Settings #
     ###################
-    plugins_dir: str = "plugins"
-    plugins_config_file: Path = project_root / PLUGINS_CONFIG_FILE
+    plugins_dir: str = UserConfigurable("plugins", from_env="PLUGINS_DIR")
+    plugins_config_file: Path = UserConfigurable(
+        default=PLUGINS_CONFIG_FILE,
+        from_env=lambda: Path(f) if (f := os.getenv("PLUGINS_CONFIG_FILE")) else None,
+    )
     plugins_config: PluginsConfig = Field(
         default_factory=lambda: PluginsConfig(plugins={})
     )
     plugins: list[AutoGPTPluginTemplate] = Field(default_factory=list, exclude=True)
-    plugins_allowlist: list[str] = Field(default_factory=list)
-    plugins_denylist: list[str] = Field(default_factory=list)
-    plugins_openai: list[str] = Field(default_factory=list)
+    plugins_allowlist: list[str] = UserConfigurable(
+        default_factory=list,
+        from_env=lambda: _safe_split(os.getenv("ALLOWLISTED_PLUGINS")),
+    )
+    plugins_denylist: list[str] = UserConfigurable(
+        default_factory=list,
+        from_env=lambda: _safe_split(os.getenv("DENYLISTED_PLUGINS")),
+    )
+    plugins_openai: list[str] = UserConfigurable(
+        default_factory=list, from_env=lambda: _safe_split(os.getenv("OPENAI_PLUGINS"))
+    )

     ###############
     # Credentials #
     ###############
     # OpenAI
-    openai_api_key: Optional[str] = None
-    openai_api_type: Optional[str] = None
-    openai_api_base: Optional[str] = None
-    openai_api_version: Optional[str] = None
-    openai_organization: Optional[str] = None
-    use_azure: bool = False
-    azure_config_file: Optional[Path] = project_root / AZURE_CONFIG_FILE
-    azure_model_to_deployment_id_map: Optional[Dict[str, str]] = None
+    openai_credentials: Optional[OpenAICredentials] = None
+    azure_config_file: Optional[Path] = UserConfigurable(
+        default=AZURE_CONFIG_FILE,
+        from_env=lambda: Path(f) if (f := os.getenv("AZURE_CONFIG_FILE")) else None,
+    )

     # Github
-    github_api_key: Optional[str] = None
-    github_username: Optional[str] = None
+    github_api_key: Optional[str] = UserConfigurable(from_env="GITHUB_API_KEY")
+    github_username: Optional[str] = UserConfigurable(from_env="GITHUB_USERNAME")

     # Google
-    google_api_key: Optional[str] = None
-    google_custom_search_engine_id: Optional[str] = None
+    google_api_key: Optional[str] = UserConfigurable(from_env="GOOGLE_API_KEY")
+    google_custom_search_engine_id: Optional[str] = UserConfigurable(
+        from_env=lambda: os.getenv("GOOGLE_CUSTOM_SEARCH_ENGINE_ID"),
+    )

     # Huggingface
-    huggingface_api_token: Optional[str] = None
+    huggingface_api_token: Optional[str] = UserConfigurable(
+        from_env="HUGGINGFACE_API_TOKEN"
+    )

     # Stable Diffusion
-    sd_webui_auth: Optional[str] = None
+    sd_webui_auth: Optional[str] = UserConfigurable(from_env="SD_WEBUI_AUTH")

     @validator("plugins", each_item=True)
     def validate_plugins(cls, p: AutoGPTPluginTemplate | Any):
@@ -157,67 +251,6 @@ class Config(SystemSettings, arbitrary_types_allowed=True):
             )
         return v

-    def get_openai_credentials(self, model: str) -> dict[str, str]:
-        credentials = {
-            "api_key": self.openai_api_key,
-            "api_base": self.openai_api_base,
-            "organization": self.openai_organization,
-        }
-        if self.use_azure:
-            azure_credentials = self.get_azure_credentials(model)
-            credentials.update(azure_credentials)
-        return credentials
-
-    def get_azure_credentials(self, model: str) -> dict[str, str]:
-        """Get the kwargs for the Azure API."""
-
-        # Fix --gpt3only and --gpt4only in combination with Azure
-        fast_llm = (
-            self.fast_llm
-            if not (
-                self.fast_llm == self.smart_llm
-                and self.fast_llm.startswith(GPT_4_MODEL)
-            )
-            else f"not_{self.fast_llm}"
-        )
-        smart_llm = (
-            self.smart_llm
-            if not (
-                self.smart_llm == self.fast_llm
-                and self.smart_llm.startswith(GPT_3_MODEL)
-            )
-            else f"not_{self.smart_llm}"
-        )
-
-        deployment_id = {
-            fast_llm: self.azure_model_to_deployment_id_map.get(
-                "fast_llm_deployment_id",
-                self.azure_model_to_deployment_id_map.get(
-                    "fast_llm_model_deployment_id"  # backwards compatibility
-                ),
-            ),
-            smart_llm: self.azure_model_to_deployment_id_map.get(
-                "smart_llm_deployment_id",
-                self.azure_model_to_deployment_id_map.get(
-                    "smart_llm_model_deployment_id"  # backwards compatibility
-                ),
-            ),
-            self.embedding_model: self.azure_model_to_deployment_id_map.get(
-                "embedding_model_deployment_id"
-            ),
-        }.get(model, None)
-
-        kwargs = {
-            "api_type": self.openai_api_type,
-            "api_base": self.openai_api_base,
-            "api_version": self.openai_api_version,
-        }
-        if model == self.embedding_model:
-            kwargs["engine"] = deployment_id
-        else:
-            kwargs["deployment_id"] = deployment_id
-        return kwargs
-

 class ConfigBuilder(Configurable[Config]):
     default_settings = Config()
@@ -225,131 +258,25 @@ class ConfigBuilder(Configurable[Config]):
     @classmethod
     def build_config_from_env(cls, project_root: Path = PROJECT_ROOT) -> Config:
         """Initialize the Config class"""
-        config_dict = {
-            "project_root": project_root,
-            "logging": {
-                "level": logging.getLevelName(os.getenv("LOG_LEVEL", "INFO")),
-                "log_format": LogFormatName(os.getenv("LOG_FORMAT", "simple")),
-                "log_file_format": LogFormatName(
-                    os.getenv("LOG_FILE_FORMAT", os.getenv("LOG_FORMAT", "simple"))
-                ),
-                "plain_console_output": os.getenv("PLAIN_OUTPUT", "False") == "True",
-            },
-            "authorise_key": os.getenv("AUTHORISE_COMMAND_KEY"),
-            "exit_key": os.getenv("EXIT_KEY"),
-            "shell_command_control": os.getenv("SHELL_COMMAND_CONTROL"),
-            "ai_settings_file": project_root
-            / Path(os.getenv("AI_SETTINGS_FILE", AI_SETTINGS_FILE)),
-            "prompt_settings_file": project_root
-            / Path(os.getenv("PROMPT_SETTINGS_FILE", PROMPT_SETTINGS_FILE)),
-            "fast_llm": os.getenv("FAST_LLM", os.getenv("FAST_LLM_MODEL")),
-            "smart_llm": os.getenv("SMART_LLM", os.getenv("SMART_LLM_MODEL")),
-            "embedding_model": os.getenv("EMBEDDING_MODEL"),
-            "browse_spacy_language_model": os.getenv("BROWSE_SPACY_LANGUAGE_MODEL"),
-            "openai_api_key": os.getenv("OPENAI_API_KEY"),
-            "use_azure": os.getenv("USE_AZURE") == "True",
-            "azure_config_file": project_root
-            / Path(os.getenv("AZURE_CONFIG_FILE", AZURE_CONFIG_FILE)),
-            "execute_local_commands": os.getenv("EXECUTE_LOCAL_COMMANDS", "False")
-            == "True",
-            "restrict_to_workspace": os.getenv("RESTRICT_TO_WORKSPACE", "True")
-            == "True",
-            "openai_functions": os.getenv("OPENAI_FUNCTIONS", "False") == "True",
-            "tts_config": {
-                "provider": os.getenv("TEXT_TO_SPEECH_PROVIDER"),
-            },
-            "github_api_key": os.getenv("GITHUB_API_KEY"),
-            "github_username": os.getenv("GITHUB_USERNAME"),
-            "google_api_key": os.getenv("GOOGLE_API_KEY"),
-            "image_provider": os.getenv("IMAGE_PROVIDER"),
-            "huggingface_api_token": os.getenv("HUGGINGFACE_API_TOKEN"),
-            "huggingface_image_model": os.getenv("HUGGINGFACE_IMAGE_MODEL"),
-            "audio_to_text_provider": os.getenv("AUDIO_TO_TEXT_PROVIDER"),
-            "huggingface_audio_to_text_model": os.getenv(
-                "HUGGINGFACE_AUDIO_TO_TEXT_MODEL"
-            ),
-            "sd_webui_url": os.getenv("SD_WEBUI_URL"),
-            "sd_webui_auth": os.getenv("SD_WEBUI_AUTH"),
-            "selenium_web_browser": os.getenv("USE_WEB_BROWSER"),
-            "selenium_headless": os.getenv("HEADLESS_BROWSER", "True") == "True",
-            "user_agent": os.getenv("USER_AGENT"),
-            "memory_backend": os.getenv("MEMORY_BACKEND"),
-            "memory_index": os.getenv("MEMORY_INDEX"),
-            "redis_host": os.getenv("REDIS_HOST"),
-            "redis_password": os.getenv("REDIS_PASSWORD"),
-            "wipe_redis_on_start": os.getenv("WIPE_REDIS_ON_START", "True") == "True",
-            "plugins_dir": os.getenv("PLUGINS_DIR"),
-            "plugins_config_file": project_root
-            / Path(os.getenv("PLUGINS_CONFIG_FILE", PLUGINS_CONFIG_FILE)),
-            "chat_messages_enabled": os.getenv("CHAT_MESSAGES_ENABLED") == "True",
-        }
-
-        config_dict["disabled_command_categories"] = _safe_split(
-            os.getenv("DISABLED_COMMAND_CATEGORIES")
-        )
-
-        config_dict["shell_denylist"] = _safe_split(
-            os.getenv("SHELL_DENYLIST", os.getenv("DENY_COMMANDS"))
-        )
-        config_dict["shell_allowlist"] = _safe_split(
-            os.getenv("SHELL_ALLOWLIST", os.getenv("ALLOW_COMMANDS"))
-        )
-
-        config_dict["google_custom_search_engine_id"] = os.getenv(
-            "GOOGLE_CUSTOM_SEARCH_ENGINE_ID", os.getenv("CUSTOM_SEARCH_ENGINE_ID")
-        )
-
-        if os.getenv("ELEVENLABS_API_KEY"):
-            config_dict["tts_config"]["elevenlabs"] = {
-                "api_key": os.getenv("ELEVENLABS_API_KEY"),
-                "voice_id": os.getenv("ELEVENLABS_VOICE_ID", ""),
-            }
-        if os.getenv("STREAMELEMENTS_VOICE"):
-            config_dict["tts_config"]["streamelements"] = {
-                "voice": os.getenv("STREAMELEMENTS_VOICE"),
-            }
-
-        if not config_dict["tts_config"]["provider"]:
-            if os.getenv("USE_MAC_OS_TTS"):
-                default_tts_provider = "macos"
-            elif "elevenlabs" in config_dict["tts_config"]:
-                default_tts_provider = "elevenlabs"
-            elif os.getenv("USE_BRIAN_TTS"):
-                default_tts_provider = "streamelements"
-            else:
-                default_tts_provider = "gtts"
-            config_dict["tts_config"]["provider"] = default_tts_provider
-
-        config_dict["plugins_allowlist"] = _safe_split(os.getenv("ALLOWLISTED_PLUGINS"))
-        config_dict["plugins_denylist"] = _safe_split(os.getenv("DENYLISTED_PLUGINS"))
-
-        with contextlib.suppress(TypeError):
-            config_dict["image_size"] = int(os.getenv("IMAGE_SIZE"))
-        with contextlib.suppress(TypeError):
-            config_dict["redis_port"] = int(os.getenv("REDIS_PORT"))
-        with contextlib.suppress(TypeError):
-            config_dict["temperature"] = float(os.getenv("TEMPERATURE"))
-
-        if config_dict["use_azure"]:
-            azure_config = cls.load_azure_config(
-                project_root / config_dict["azure_config_file"]
-            )
-            config_dict.update(azure_config)
-        elif os.getenv("OPENAI_API_BASE_URL"):
-            config_dict["openai_api_base"] = os.getenv("OPENAI_API_BASE_URL")
-
-        openai_organization = os.getenv("OPENAI_ORGANIZATION")
-        if openai_organization is not None:
-            config_dict["openai_organization"] = openai_organization
-
-        config_dict_without_none_values = {
-            k: v for k, v in config_dict.items() if v is not None
-        }
-
-        config = cls.build_agent_configuration(config_dict_without_none_values)
-
-        # Set secondary config variables (that depend on other config variables)
+        config = cls.build_agent_configuration()
+        config.project_root = project_root
+
+        # Make relative paths absolute
+        for k in {
+            "ai_settings_file",  # TODO: deprecate or repurpose
+            "prompt_settings_file",  # TODO: deprecate or repurpose
+            "plugins_config_file",  # TODO: move from project root
+            "azure_config_file",  # TODO: move from project root
+        }:
+            setattr(config, k, project_root / getattr(config, k))
+
+        if (
+            config.openai_credentials
+            and config.openai_credentials.api_type == "azure"
+            and (config_file := config.azure_config_file)
+        ):
+            config.openai_credentials.load_azure_config(config_file)

         config.plugins_config = PluginsConfig.load_config(
             config.plugins_config_file,
@@ -359,36 +286,10 @@ class ConfigBuilder(Configurable[Config]):
return config return config
@classmethod
def load_azure_config(cls, config_file: Path) -> Dict[str, str]:
"""
Loads the configuration parameters for Azure hosting from the specified file
path as a yaml file.
Parameters:
config_file (Path): The path to the config yaml file.
Returns:
Dict
"""
with open(config_file) as file:
config_params = yaml.load(file, Loader=yaml.FullLoader) or {}
return {
"openai_api_type": config_params.get("azure_api_type", "azure"),
"openai_api_base": config_params.get("azure_api_base", ""),
"openai_api_version": config_params.get(
"azure_api_version", "2023-03-15-preview"
),
"azure_model_to_deployment_id_map": config_params.get(
"azure_model_map", {}
),
}
def assert_config_has_openai_api_key(config: Config) -> None: def assert_config_has_openai_api_key(config: Config) -> None:
"""Check if the OpenAI API key is set in config.py or as an environment variable.""" """Check if the OpenAI API key is set in config.py or as an environment variable."""
if not config.openai_api_key: if not config.openai_credentials:
print( print(
Fore.RED Fore.RED
+ "Please set your OpenAI API key in .env or as an environment variable." + "Please set your OpenAI API key in .env or as an environment variable."
@@ -402,7 +303,9 @@ def assert_config_has_openai_api_key(config: Config) -> None:
openai_api_key = openai_api_key.strip() openai_api_key = openai_api_key.strip()
if re.search(key_pattern, openai_api_key): if re.search(key_pattern, openai_api_key):
os.environ["OPENAI_API_KEY"] = openai_api_key os.environ["OPENAI_API_KEY"] = openai_api_key
config.openai_api_key = openai_api_key config.openai_credentials = OpenAICredentials(
api_key=SecretStr(openai_api_key)
)
print( print(
Fore.GREEN Fore.GREEN
+ "OpenAI API key successfully set!\n" + "OpenAI API key successfully set!\n"
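As an aside for readers skimming the diff: the path-normalization loop in the new `build_config_from_env` can be sketched standalone. The `StubConfig` class and paths below are illustrative, not part of AutoGPT:

```python
from pathlib import Path

class StubConfig:
    """Illustrative stand-in for the real Config object."""
    def __init__(self):
        # Relative defaults, as they come out of build_agent_configuration()
        self.ai_settings_file = Path("ai_settings.yaml")
        self.plugins_config_file = Path("plugins_config.yaml")

project_root = Path("/opt/project")
config = StubConfig()
# Same idea as the loop above: anchor relative config paths at the project root
for k in ("ai_settings_file", "plugins_config_file"):
    setattr(config, k, project_root / getattr(config, k))

assert config.ai_settings_file == Path("/opt/project/ai_settings.yaml")
assert config.plugins_config_file == Path("/opt/project/plugins_config.yaml")
```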

View File

@@ -1,24 +1,73 @@
 import abc
-import functools
+import os
 import typing
-from typing import Any, Generic, TypeVar
+from typing import Any, Callable, Generic, Optional, Type, TypeVar, get_args

-from pydantic import BaseModel, Field
+from pydantic import BaseModel, Field, ValidationError
+from pydantic.fields import ModelField, Undefined, UndefinedType
+from pydantic.main import ModelMetaclass

 T = TypeVar("T")
+M = TypeVar("M", bound=BaseModel)


-@functools.wraps(Field)
-def UserConfigurable(*args, **kwargs):
-    return Field(*args, **kwargs, user_configurable=True)
+def UserConfigurable(
+    default: T | UndefinedType = Undefined,
+    *args,
+    default_factory: Optional[Callable[[], T]] = None,
+    from_env: Optional[str | Callable[[], T | None]] = None,
+    description: str = "",
+    **kwargs,
+) -> T:
     # TODO: use this to auto-generate docs for the application configuration
+    return Field(
+        default,
+        *args,
+        default_factory=default_factory,
+        from_env=from_env,
+        description=description,
+        **kwargs,
+        user_configurable=True,
+    )
 class SystemConfiguration(BaseModel):
     def get_user_config(self) -> dict[str, Any]:
-        return _get_user_config_fields(self)
+        return _recurse_user_config_values(self)
+
+    @classmethod
+    def from_env(cls):
+        """
+        Initializes the config object from environment variables.
+
+        Environment variables are mapped to UserConfigurable fields using the
+        from_env attribute that can be passed to UserConfigurable.
+        """
+
+        def infer_field_value(field: ModelField):
+            field_info = field.field_info
+            default_value = (
+                field.default
+                if field.default not in (None, Undefined)
+                else (field.default_factory() if field.default_factory else Undefined)
+            )
+            if from_env := field_info.extra.get("from_env"):
+                val_from_env = (
+                    os.getenv(from_env) if type(from_env) is str else from_env()
+                )
+                if val_from_env is not None:
+                    return val_from_env
+            return default_value
+
+        return _recursive_init_model(cls, infer_field_value)

     class Config:
         extra = "forbid"
         use_enum_values = True
+        validate_assignment = True
+
+
+SC = TypeVar("SC", bound=SystemConfiguration)
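The `from_env` mapping above accepts either an environment variable name or a zero-argument callable, and an env value only wins over the field default. That resolution order can be illustrated in isolation (the `resolve` helper and variable names here are illustrative, not part of the codebase):

```python
import os

def resolve(from_env, default):
    """Resolve a field value the way from_env() does: env beats the default."""
    val = os.getenv(from_env) if isinstance(from_env, str) else from_env()
    return val if val is not None else default

os.environ["MY_APP_LOG_LEVEL"] = "DEBUG"
os.environ.pop("USE_AZURE", None)

assert resolve("MY_APP_LOG_LEVEL", "INFO") == "DEBUG"  # env var set
assert resolve("MY_APP_UNSET_VAR", "INFO") == "INFO"   # falls back to default
# Callable form, like the api_type field uses:
assert resolve(lambda: "azure" if os.getenv("USE_AZURE") == "True" else None, "") == ""
```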
 class SystemSettings(BaseModel):
@@ -30,6 +79,7 @@ class SystemSettings(BaseModel):
     class Config:
         extra = "forbid"
         use_enum_values = True
+        validate_assignment = True


 S = TypeVar("S", bound=SystemSettings)
@@ -43,55 +93,238 @@ class Configurable(abc.ABC, Generic[S]):
     @classmethod
     def get_user_config(cls) -> dict[str, Any]:
-        return _get_user_config_fields(cls.default_settings)
+        return _recurse_user_config_values(cls.default_settings)

     @classmethod
-    def build_agent_configuration(cls, configuration: dict) -> S:
+    def build_agent_configuration(cls, overrides: dict = {}) -> S:
         """Process the configuration for this object."""
-        defaults = cls.default_settings.dict()
-        final_configuration = deep_update(defaults, configuration)
+        base_config = _update_user_config_from_env(cls.default_settings)
+        final_configuration = deep_update(base_config, overrides)
         return cls.default_settings.__class__.parse_obj(final_configuration)


-def _get_user_config_fields(instance: BaseModel) -> dict[str, Any]:
+def _update_user_config_from_env(instance: BaseModel) -> dict[str, Any]:
     """
-    Get the user config fields of a Pydantic model instance.
+    Update config fields of a Pydantic model instance from environment variables.

-    Args:
+    Precedence:
+    1. Non-default value already on the instance
+    2. Value returned by `from_env()`
+    3. Default value for the field
+
+    Params:
         instance: The Pydantic model instance.

     Returns:
         The user config fields of the instance.
     """

+    def infer_field_value(field: ModelField, value):
+        field_info = field.field_info
+        default_value = (
+            field.default
+            if field.default not in (None, Undefined)
+            else (field.default_factory() if field.default_factory else None)
+        )
+        if value == default_value and (from_env := field_info.extra.get("from_env")):
+            val_from_env = os.getenv(from_env) if type(from_env) is str else from_env()
+            if val_from_env is not None:
+                return val_from_env
+        return value
+
+    def init_sub_config(model: Type[SC]) -> SC | None:
+        try:
+            return model.from_env()
+        except ValidationError as e:
+            # Gracefully handle missing fields
+            if all(e["type"] == "value_error.missing" for e in e.errors()):
+                return None
+            raise
+
+    return _recurse_user_config_fields(instance, infer_field_value, init_sub_config)
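The three-level precedence documented above can be checked with a small standalone sketch (the `choose` helper is illustrative, not part of the codebase):

```python
import os

def choose(value, default, env_val):
    # Precedence mirrored from _update_user_config_from_env:
    # 1) explicit non-default value on the instance
    # 2) value from the environment
    # 3) the field default
    if value != default:
        return value
    if env_val is not None:
        return env_val
    return default

os.environ["TEMPERATURE"] = "0.7"
env_val = os.getenv("TEMPERATURE")

assert choose(0.0, 0.0, env_val) == "0.7"  # instance still at default -> env wins
assert choose(0.3, 0.0, env_val) == 0.3    # explicit override beats env
assert choose(0.0, 0.0, None) == 0.0       # nothing set -> default survives
```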
+def _recursive_init_model(
+    model: Type[M],
+    infer_field_value: Callable[[ModelField], Any],
+) -> M:
+    """
+    Recursively initialize the user configuration fields of a Pydantic model.
+
+    Parameters:
+        model: The Pydantic model type.
+        infer_field_value: A callback function to infer the value of each field.
+            Parameters:
+                ModelField: The Pydantic ModelField object describing the field.
+
+    Returns:
+        BaseModel: An instance of the model with the initialized configuration.
+    """
+    user_config_fields = {}
+    for name, field in model.__fields__.items():
+        if "user_configurable" in field.field_info.extra:
+            user_config_fields[name] = infer_field_value(field)
+        elif type(field.outer_type_) is ModelMetaclass and issubclass(
+            field.outer_type_, SystemConfiguration
+        ):
+            try:
+                user_config_fields[name] = _recursive_init_model(
+                    model=field.outer_type_,
+                    infer_field_value=infer_field_value,
+                )
+            except ValidationError as e:
+                # Gracefully handle missing fields
+                if all(e["type"] == "value_error.missing" for e in e.errors()):
+                    user_config_fields[name] = None
+                else:
+                    raise
+
+    user_config_fields = remove_none_items(user_config_fields)
+
+    return model.parse_obj(user_config_fields)
+def _recurse_user_config_fields(
+    model: BaseModel,
+    infer_field_value: Callable[[ModelField, Any], Any],
+    init_sub_config: Optional[
+        Callable[[Type[SystemConfiguration]], SystemConfiguration | None]
+    ] = None,
+) -> dict[str, Any]:
+    """
+    Recursively process the user configuration fields of a Pydantic model instance.
+
+    Params:
+        model: The Pydantic model to iterate over.
+        infer_field_value: A callback function to process each field.
+            Params:
+                ModelField: The Pydantic ModelField object describing the field.
+                Any: The current value of the field.
+        init_sub_config: An optional callback function to initialize a sub-config.
+            Params:
+                Type[SystemConfiguration]: The type of the sub-config to initialize.
+
+    Returns:
+        dict[str, Any]: The processed user configuration fields of the instance.
+    """
     user_config_fields = {}

-    for name, value in instance.__dict__.items():
-        field_info = instance.__fields__[name]
-        if "user_configurable" in field_info.field_info.extra:
-            user_config_fields[name] = value
+    for name, field in model.__fields__.items():
+        value = getattr(model, name)
+
+        # Handle individual field
+        if "user_configurable" in field.field_info.extra:
+            user_config_fields[name] = infer_field_value(field, value)
+
+        # Recurse into nested config object
         elif isinstance(value, SystemConfiguration):
-            user_config_fields[name] = value.get_user_config()
+            user_config_fields[name] = _recurse_user_config_fields(
+                model=value,
+                infer_field_value=infer_field_value,
+                init_sub_config=init_sub_config,
+            )
+
+        # Recurse into optional nested config object
+        elif value is None and init_sub_config:
+            field_type = get_args(field.annotation)[0]  # Optional[T] -> T
+            if type(field_type) is ModelMetaclass and issubclass(
+                field_type, SystemConfiguration
+            ):
+                sub_config = init_sub_config(field_type)
+                if sub_config:
+                    user_config_fields[name] = _recurse_user_config_fields(
+                        model=sub_config,
+                        infer_field_value=infer_field_value,
+                        init_sub_config=init_sub_config,
+                    )
+
         elif isinstance(value, list) and all(
             isinstance(i, SystemConfiguration) for i in value
         ):
-            user_config_fields[name] = [i.get_user_config() for i in value]
+            user_config_fields[name] = [
+                _recurse_user_config_fields(i, infer_field_value, init_sub_config)
+                for i in value
+            ]
         elif isinstance(value, dict) and all(
             isinstance(i, SystemConfiguration) for i in value.values()
         ):
             user_config_fields[name] = {
-                k: v.get_user_config() for k, v in value.items()
+                k: _recurse_user_config_fields(v, infer_field_value, init_sub_config)
+                for k, v in value.items()
             }

     return user_config_fields
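The recursion pattern these helpers share (walk the fields, descend into nested config objects, emit a plain nested dict) can be shown with plain dataclasses instead of pydantic models; every name below is illustrative:

```python
from dataclasses import dataclass, fields, is_dataclass
from typing import Any, Optional

def collect(instance) -> dict[str, Any]:
    # Toy analogue of _recurse_user_config_values: walk fields, recurse into
    # nested config objects, return a plain nested dict
    out = {}
    for f in fields(instance):
        value = getattr(instance, f.name)
        out[f.name] = collect(value) if is_dataclass(value) else value
    return out

@dataclass
class TTS:
    provider: str = "gtts"

@dataclass
class AppConfig:
    debug: bool = False
    tts: Optional[TTS] = None

cfg = AppConfig(tts=TTS(provider="macos"))
assert collect(cfg) == {"debug": False, "tts": {"provider": "macos"}}
```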
+def _recurse_user_config_values(
+    instance: BaseModel,
+    get_field_value: Callable[[ModelField, T], T] = lambda _, v: v,
+) -> dict[str, Any]:
+    """
+    This function recursively traverses the user configuration values in a Pydantic
+    model instance.
+
+    Params:
+        instance: A Pydantic model instance.
+        get_field_value: A callback function to process each field.
+            Params:
+                ModelField: The Pydantic ModelField object that describes the field.
+                Any: The current value of the field.
+
+    Returns:
+        A dictionary containing the processed user configuration fields of the instance.
+    """
+    user_config_values = {}
+
+    for name, value in instance.__dict__.items():
+        field = instance.__fields__[name]
+        if "user_configurable" in field.field_info.extra:
+            user_config_values[name] = get_field_value(field, value)
+        elif isinstance(value, SystemConfiguration):
+            user_config_values[name] = _recurse_user_config_values(
+                instance=value, get_field_value=get_field_value
+            )
+        elif isinstance(value, list) and all(
+            isinstance(i, SystemConfiguration) for i in value
+        ):
+            user_config_values[name] = [
+                _recurse_user_config_values(i, get_field_value) for i in value
+            ]
+        elif isinstance(value, dict) and all(
+            isinstance(i, SystemConfiguration) for i in value.values()
+        ):
+            user_config_values[name] = {
+                k: _recurse_user_config_values(v, get_field_value)
+                for k, v in value.items()
+            }
+
+    return user_config_values
+
+
+def _get_non_default_user_config_values(instance: BaseModel) -> dict[str, Any]:
+    """
+    Get the non-default user config fields of a Pydantic model instance.
+
+    Params:
+        instance: The Pydantic model instance.
+
+    Returns:
+        dict[str, Any]: The non-default user config values on the instance.
+    """
+
+    def get_field_value(field: ModelField, value):
+        default = field.default_factory() if field.default_factory else field.default
+        if value != default:
+            return value
+
+    return remove_none_items(_recurse_user_config_values(instance, get_field_value))
 def deep_update(original_dict: dict, update_dict: dict) -> dict:
     """
     Recursively update a dictionary.

-    Args:
+    Params:
         original_dict (dict): The dictionary to be updated.
         update_dict (dict): The dictionary to update with.

@@ -108,3 +341,11 @@ def deep_update(original_dict: dict, update_dict: dict) -> dict:
         else:
             original_dict[key] = value
     return original_dict
+def remove_none_items(d):
+    if isinstance(d, dict):
+        return {
+            k: remove_none_items(v) for k, v in d.items() if v not in (None, Undefined)
+        }
+    return d
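`deep_update` merges nested dicts rather than replacing them wholesale, which is what lets an override touch one key of a sub-config without clobbering its siblings. A minimal re-implementation for illustration:

```python
def deep_update(original: dict, update: dict) -> dict:
    # Mirrors the deep_update helper: nested dicts are merged, scalars overwritten
    for key, value in update.items():
        if key in original and isinstance(original[key], dict) and isinstance(value, dict):
            deep_update(original[key], value)
        else:
            original[key] = value
    return original

base = {"tts_config": {"provider": "gtts", "speak_mode": False}}
deep_update(base, {"tts_config": {"provider": "macos"}})
# speak_mode survives; only provider changed
assert base == {"tts_config": {"provider": "macos", "speak_mode": False}}
```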

View File

@@ -2,12 +2,16 @@ import enum
 import functools
 import logging
 import math
+import os
 import time
+from pathlib import Path
 from typing import Callable, Optional, ParamSpec, TypeVar

 import openai
 import tiktoken
+import yaml
 from openai.error import APIError, RateLimitError
+from pydantic import SecretStr

 from autogpt.core.configuration import (
     Configurable,
@@ -167,6 +171,68 @@ class OpenAIConfiguration(SystemConfiguration):
     retries_per_request: int = UserConfigurable()
+class OpenAICredentials(ModelProviderCredentials):
+    """Credentials for OpenAI."""
+
+    api_key: SecretStr = UserConfigurable(from_env="OPENAI_API_KEY")
+    api_base: Optional[SecretStr] = UserConfigurable(
+        default=None, from_env="OPENAI_API_BASE_URL"
+    )
+    organization: Optional[SecretStr] = UserConfigurable(from_env="OPENAI_ORGANIZATION")
+
+    api_type: str = UserConfigurable(
+        default="",
+        from_env=lambda: (
+            "azure"
+            if os.getenv("USE_AZURE") == "True"
+            else os.getenv("OPENAI_API_TYPE")
+        ),
+    )
+    api_version: str = UserConfigurable("", from_env="OPENAI_API_VERSION")
+    azure_model_to_deploy_id_map: Optional[dict[str, str]] = None
+
+    def get_api_access_kwargs(self, model: str = "") -> dict[str, str]:
+        credentials = {k: v for k, v in self.unmasked().items() if type(v) is str}
+        if self.api_type == "azure" and model:
+            azure_credentials = self._get_azure_access_kwargs(model)
+            credentials.update(azure_credentials)
+        return credentials
+
+    def load_azure_config(self, config_file: Path) -> None:
+        with open(config_file) as file:
+            config_params = yaml.load(file, Loader=yaml.FullLoader) or {}
+
+        try:
+            assert (
+                azure_api_base := config_params.get("azure_api_base", "")
+            ) != "", "Azure API base URL not set"
+            assert config_params.get(
+                "azure_model_map", {}
+            ), "Azure model->deployment_id map is empty"
+        except AssertionError as e:
+            raise ValueError(*e.args)
+
+        self.api_base = SecretStr(azure_api_base)
+        self.api_type = config_params.get("azure_api_type", "azure")
+        self.api_version = config_params.get("azure_api_version", "")
+        self.azure_model_to_deploy_id_map = config_params.get("azure_model_map")
+
+    def _get_azure_access_kwargs(self, model: str) -> dict[str, str]:
+        """Get the kwargs for the Azure API."""
+        if not self.azure_model_to_deploy_id_map:
+            raise ValueError("Azure model deployment map not configured")
+
+        if model not in self.azure_model_to_deploy_id_map:
+            raise ValueError(f"No Azure deployment ID configured for model '{model}'")
+
+        deployment_id = self.azure_model_to_deploy_id_map[model]
+        if model in OPEN_AI_EMBEDDING_MODELS:
+            return {"engine": deployment_id}
+        else:
+            return {"deployment_id": deployment_id}
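The `engine` vs `deployment_id` split in `_get_azure_access_kwargs` reflects how the legacy `openai` 0.x SDK addressed Azure deployments for embeddings vs chat models. A standalone sketch of the lookup (the helper and map names are illustrative):

```python
def azure_access_kwargs(model, model_map, embedding_models=("text-embedding-ada-002",)):
    # Mirrors _get_azure_access_kwargs: embedding models use "engine",
    # chat/completion models use "deployment_id"
    if model not in model_map:
        raise ValueError(f"No Azure deployment ID configured for model '{model}'")
    deployment_id = model_map[model]
    if model in embedding_models:
        return {"engine": deployment_id}
    return {"deployment_id": deployment_id}

m = {"gpt-4": "gpt4-dep", "text-embedding-ada-002": "embed-dep"}
assert azure_access_kwargs("gpt-4", m) == {"deployment_id": "gpt4-dep"}
assert azure_access_kwargs("text-embedding-ada-002", m) == {"engine": "embed-dep"}
```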
 class OpenAIModelProviderBudget(ModelProviderBudget):
     graceful_shutdown_threshold: float = UserConfigurable()
     warning_threshold: float = UserConfigurable()

@@ -174,7 +240,7 @@ class OpenAIModelProviderBudget(ModelProviderBudget):
 class OpenAISettings(ModelProviderSettings):
     configuration: OpenAIConfiguration
-    credentials: ModelProviderCredentials
+    credentials: Optional[OpenAICredentials]
     budget: OpenAIModelProviderBudget
@@ -187,7 +253,7 @@ class OpenAIProvider(
         configuration=OpenAIConfiguration(
             retries_per_request=10,
         ),
-        credentials=ModelProviderCredentials(),
+        credentials=None,
         budget=OpenAIModelProviderBudget(
             total_budget=math.inf,
             total_cost=0.0,

@@ -207,6 +273,7 @@ class OpenAIProvider(
         settings: OpenAISettings,
         logger: logging.Logger,
     ):
+        assert settings.credentials, "Cannot create OpenAIProvider without credentials"
         self._configuration = settings.configuration
         self._credentials = settings.credentials
         self._budget = settings.budget
@@ -362,7 +429,7 @@ class OpenAIProvider(
         completion_kwargs = {
             "model": model_name,
             **kwargs,
-            **self._credentials.unmasked(),
+            **self._credentials.get_api_access_kwargs(model_name),
         }

         if functions:

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
 import enum
 import logging
+import os
 import sys
 from pathlib import Path
 from typing import TYPE_CHECKING, Optional

@@ -14,7 +15,7 @@ if TYPE_CHECKING:
     from autogpt.config import Config
     from autogpt.speech import TTSConfig

-from autogpt.core.configuration import SystemConfiguration
+from autogpt.core.configuration import SystemConfiguration, UserConfigurable
 from autogpt.core.runner.client_lib.logging import BelowLevelFilter

 from .formatters import AutoGptFormatter, StructuredLoggingFormatter
@@ -49,15 +50,29 @@ TEXT_LOG_FORMAT_MAP = {
 class LoggingConfig(SystemConfiguration):
-    level: int = logging.INFO
+    level: int = UserConfigurable(
+        default=logging.INFO,
+        from_env=lambda: logging.getLevelName(os.getenv("LOG_LEVEL", "INFO")),
+    )

     # Console output
-    log_format: LogFormatName = LogFormatName.SIMPLE
-    plain_console_output: bool = False
+    log_format: LogFormatName = UserConfigurable(
+        default=LogFormatName.SIMPLE,
+        from_env=lambda: LogFormatName(os.getenv("LOG_FORMAT", "simple")),
+    )
+    plain_console_output: bool = UserConfigurable(
+        default=False,
+        from_env=lambda: os.getenv("PLAIN_OUTPUT", "False") == "True",
+    )

     # File output
     log_dir: Path = LOG_DIR
-    log_file_format: Optional[LogFormatName] = LogFormatName.SIMPLE
+    log_file_format: Optional[LogFormatName] = UserConfigurable(
+        default=LogFormatName.SIMPLE,
+        from_env=lambda: LogFormatName(
+            os.getenv("LOG_FILE_FORMAT", os.getenv("LOG_FORMAT", "simple"))
+        ),
+    )


 def configure_logging(
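One subtlety in the `level` field above: `logging.getLevelName` maps in both directions, so passing a name like `"DEBUG"` returns the numeric level, which is what the `from_env` lambda relies on. A quick check:

```python
import logging
import os

os.environ["LOG_LEVEL"] = "DEBUG"
level = logging.getLevelName(os.getenv("LOG_LEVEL", "INFO"))
assert level == logging.DEBUG  # "DEBUG" -> 10

# Caveat: an unknown name does not raise; it returns a string like "Level VERBOSE",
# so an invalid LOG_LEVEL value would yield a non-int level here.
assert isinstance(logging.getLevelName("VERBOSE"), str)
```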

View File

@@ -17,8 +17,8 @@ PLACEHOLDERS = {"your-voice-id"}
 class ElevenLabsConfig(SystemConfiguration):
-    api_key: str = UserConfigurable()
-    voice_id: str = UserConfigurable()
+    api_key: str = UserConfigurable(from_env="ELEVENLABS_API_KEY")
+    voice_id: str = UserConfigurable(from_env="ELEVENLABS_VOICE_ID")


 class ElevenLabsSpeech(VoiceBase):

View File

@@ -1,6 +1,7 @@
 """ Text to speech module """
 from __future__ import annotations

+import os
 import threading
 from threading import Semaphore
 from typing import Literal, Optional

@@ -20,11 +21,23 @@ _QUEUE_SEMAPHORE = Semaphore(
 class TTSConfig(SystemConfiguration):
     speak_mode: bool = False
-    provider: Literal[
-        "elevenlabs", "gtts", "macos", "streamelements"
-    ] = UserConfigurable(default="gtts")
     elevenlabs: Optional[ElevenLabsConfig] = None
     streamelements: Optional[StreamElementsConfig] = None
+    provider: Literal[
+        "elevenlabs", "gtts", "macos", "streamelements"
+    ] = UserConfigurable(
+        default="gtts",
+        from_env=lambda: os.getenv("TEXT_TO_SPEECH_PROVIDER")
+        or (
+            "macos"
+            if os.getenv("USE_MAC_OS_TTS")
+            else "elevenlabs"
+            if os.getenv("ELEVENLABS_API_KEY")
+            else "streamelements"
+            if os.getenv("USE_BRIAN_TTS")
+            else "gtts"
+        ),
+    )  # type: ignore


 class TextToSpeechProvider:
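The provider-selection lambda keeps backwards compatibility with the old `USE_MAC_OS_TTS`/`USE_BRIAN_TTS` flags while preferring the new `TEXT_TO_SPEECH_PROVIDER` variable. Its fallback chain, extracted as a plain function for illustration:

```python
import os

def pick_tts_provider() -> str:
    # Same fallback chain as TTSConfig.provider's from_env lambda
    return os.getenv("TEXT_TO_SPEECH_PROVIDER") or (
        "macos" if os.getenv("USE_MAC_OS_TTS")
        else "elevenlabs" if os.getenv("ELEVENLABS_API_KEY")
        else "streamelements" if os.getenv("USE_BRIAN_TTS")
        else "gtts"
    )

for var in ("TEXT_TO_SPEECH_PROVIDER", "USE_MAC_OS_TTS",
            "ELEVENLABS_API_KEY", "USE_BRIAN_TTS"):
    os.environ.pop(var, None)

assert pick_tts_provider() == "gtts"            # nothing set -> default
os.environ["ELEVENLABS_API_KEY"] = "el-dummy"
assert pick_tts_provider() == "elevenlabs"      # legacy flag respected
os.environ["TEXT_TO_SPEECH_PROVIDER"] = "streamelements"
assert pick_tts_provider() == "streamelements"  # explicit setting wins
```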

View File

@@ -13,7 +13,7 @@ logger = logging.getLogger(__name__)
 class StreamElementsConfig(SystemConfiguration):
-    voice: str = UserConfigurable(default="Brian")
+    voice: str = UserConfigurable(default="Brian", from_env="STREAMELEMENTS_VOICE")


 class StreamElementsSpeech(VoiceBase):

View File

@@ -2,6 +2,6 @@ azure_api_type: azure
 azure_api_base: your-base-url-for-azure
 azure_api_version: api-version-for-azure
 azure_model_map:
-  fast_llm_deployment_id: gpt35-deployment-id-for-azure
-  smart_llm_deployment_id: gpt4-deployment-id-for-azure
-  embedding_model_deployment_id: embedding-deployment-id-for-azure
+  gpt-3.5-turbo: gpt35-deployment-id-for-azure
+  gpt-4: gpt4-deployment-id-for-azure
+  text-embedding-ada-002: embedding-deployment-id-for-azure

View File

@@ -30,7 +30,9 @@ def tmp_project_root(tmp_path: Path) -> Path:
 @pytest.fixture()
 def app_data_dir(tmp_project_root: Path) -> Path:
-    return tmp_project_root / "data"
+    dir = tmp_project_root / "data"
+    dir.mkdir(parents=True, exist_ok=True)
+    return dir


 @pytest.fixture()
@@ -71,9 +73,9 @@ def config(
     app_data_dir: Path,
     mocker: MockerFixture,
 ):
-    config = ConfigBuilder.build_config_from_env(project_root=tmp_project_root)
     if not os.environ.get("OPENAI_API_KEY"):
         os.environ["OPENAI_API_KEY"] = "sk-dummy"
+    config = ConfigBuilder.build_config_from_env(project_root=tmp_project_root)

     config.app_data_dir = app_data_dir

View File

@@ -99,7 +99,8 @@ def generate_and_validate(
 ):
     """Generate an image and validate the output."""
     agent.legacy_config.image_provider = image_provider
-    agent.legacy_config.huggingface_image_model = hugging_face_image_model
+    if hugging_face_image_model:
+        agent.legacy_config.huggingface_image_model = hugging_face_image_model
     prompt = "astronaut riding a horse"
     image_path = lst(generate_image(prompt, agent, image_size, **kwargs))

View File

@@ -8,10 +8,10 @@ from unittest import mock
 from unittest.mock import patch

 import pytest
+from pydantic import SecretStr

 from autogpt.app.configurator import GPT_3_MODEL, GPT_4_MODEL, apply_overrides_to_config
 from autogpt.config import Config, ConfigBuilder
-from autogpt.file_workspace import FileWorkspace


 def test_initial_values(config: Config) -> None:
@@ -107,73 +107,81 @@ def test_smart_and_fast_llms_set_to_gpt4(mock_list_models: Any, config: Config)
     config.smart_llm = smart_llm


-def test_missing_azure_config(workspace: FileWorkspace) -> None:
-    config_file = workspace.get_path("azure_config.yaml")
+def test_missing_azure_config(config: Config) -> None:
+    assert config.openai_credentials is not None
+
+    config_file = config.app_data_dir / "azure_config.yaml"
     with pytest.raises(FileNotFoundError):
-        ConfigBuilder.load_azure_config(config_file)
+        config.openai_credentials.load_azure_config(config_file)

     config_file.write_text("")
-    azure_config = ConfigBuilder.load_azure_config(config_file)
+    with pytest.raises(ValueError):
+        config.openai_credentials.load_azure_config(config_file)

-    assert azure_config["openai_api_type"] == "azure"
-    assert azure_config["openai_api_base"] == ""
-    assert azure_config["openai_api_version"] == "2023-03-15-preview"
-    assert azure_config["azure_model_to_deployment_id_map"] == {}
+    assert config.openai_credentials.api_type != "azure"
+    assert config.openai_credentials.api_version == ""
+    assert config.openai_credentials.azure_model_to_deploy_id_map is None


-def test_azure_config(config: Config, workspace: FileWorkspace) -> None:
-    config_file = workspace.get_path("azure_config.yaml")
-    yaml_content = """
+def test_azure_config(config: Config) -> None:
+    config_file = config.app_data_dir / "azure_config.yaml"
+    config_file.write_text(
+        f"""
 azure_api_type: azure
 azure_api_base: https://dummy.openai.azure.com
 azure_api_version: 2023-06-01-preview
 azure_model_map:
-    fast_llm_deployment_id: FAST-LLM_ID
-    smart_llm_deployment_id: SMART-LLM_ID
-    embedding_model_deployment_id: embedding-deployment-id-for-azure
+    {config.fast_llm}: FAST-LLM_ID
+    {config.smart_llm}: SMART-LLM_ID
+    {config.embedding_model}: embedding-deployment-id-for-azure
 """
-    config_file.write_text(yaml_content)
+    )

     os.environ["USE_AZURE"] = "True"
     os.environ["AZURE_CONFIG_FILE"] = str(config_file)
-    config = ConfigBuilder.build_config_from_env(project_root=workspace.root.parent)
+    config = ConfigBuilder.build_config_from_env(project_root=config.project_root)

-    assert config.openai_api_type == "azure"
-    assert config.openai_api_base == "https://dummy.openai.azure.com"
-    assert config.openai_api_version == "2023-06-01-preview"
-    assert config.azure_model_to_deployment_id_map == {
-        "fast_llm_deployment_id": "FAST-LLM_ID",
-        "smart_llm_deployment_id": "SMART-LLM_ID",
-        "embedding_model_deployment_id": "embedding-deployment-id-for-azure",
-    }
+    assert (credentials := config.openai_credentials) is not None
+    assert credentials.api_type == "azure"
+    assert credentials.api_base == SecretStr("https://dummy.openai.azure.com")
+    assert credentials.api_version == "2023-06-01-preview"
+    assert credentials.azure_model_to_deploy_id_map == {
+        config.fast_llm: "FAST-LLM_ID",
+        config.smart_llm: "SMART-LLM_ID",
+        config.embedding_model: "embedding-deployment-id-for-azure",
+    }

     fast_llm = config.fast_llm
     smart_llm = config.smart_llm
     assert (
-        config.get_azure_credentials(config.fast_llm)["deployment_id"] == "FAST-LLM_ID"
+        credentials.get_api_access_kwargs(config.fast_llm)["deployment_id"]
+        == "FAST-LLM_ID"
     )
     assert (
-        config.get_azure_credentials(config.smart_llm)["deployment_id"]
+        credentials.get_api_access_kwargs(config.smart_llm)["deployment_id"]
         == "SMART-LLM_ID"
     )

     # Emulate --gpt4only
     config.fast_llm = smart_llm
     assert (
-        config.get_azure_credentials(config.fast_llm)["deployment_id"] == "SMART-LLM_ID"
+        credentials.get_api_access_kwargs(config.fast_llm)["deployment_id"]
+        == "SMART-LLM_ID"
     )
     assert (
-        config.get_azure_credentials(config.smart_llm)["deployment_id"]
+        credentials.get_api_access_kwargs(config.smart_llm)["deployment_id"]
         == "SMART-LLM_ID"
     )

     # Emulate --gpt3only
     config.fast_llm = config.smart_llm = fast_llm
     assert (
-        config.get_azure_credentials(config.fast_llm)["deployment_id"] == "FAST-LLM_ID"
+        credentials.get_api_access_kwargs(config.fast_llm)["deployment_id"]
+        == "FAST-LLM_ID"
     )
     assert (
-        config.get_azure_credentials(config.smart_llm)["deployment_id"] == "FAST-LLM_ID"
+        credentials.get_api_access_kwargs(config.smart_llm)["deployment_id"]
+        == "FAST-LLM_ID"
     )

     del os.environ["USE_AZURE"]

View File

@@ -144,7 +144,7 @@ found in the [repository].
 **Note:** Azure support has been dropped in `master`, so these instructions will only work with v0.4.7 (or earlier).

-[repository]: https://github.com/Significant-Gravitas/AutoGPT/autogpts/autogpt
+[repository]: https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpts/autogpt
 [show hidden files/Windows]: https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-97fbc472-c603-9d90-91d0-1166d1d9f4b5
 [show hidden files/macOS]: https://www.pcmag.com/how-to/how-to-access-your-macs-hidden-files
 [openai-python docs]: https://github.com/openai/openai-python#microsoft-azure-endpoints

View File

@@ -88,6 +88,28 @@ Once you have cloned or downloaded the project, you can find the AutoGPT Agent i
     ```yaml
     OPENAI_API_KEY=sk-qwertykeys123456
     ```
+
+    !!! info "Using a GPT Azure-instance"
+        If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and
+        make an Azure configuration file.
+
+        Rename `azure.yaml.template` to `azure.yaml` and provide the relevant
+        `azure_api_base`, `azure_api_version` and deployment IDs for the models that you
+        want to use.
+
+        E.g. if you want to use `gpt-3.5-turbo-16k` and `gpt-4-0314`:
+
+        ```yaml
+        # Please specify all of these values as double-quoted strings
+        # Replace string in angled brackets (<>) to your own deployment Name
+        azure_model_map:
+            gpt-3.5-turbo-16k: "<auto-gpt-deployment>"
+            ...
+        ```
+
+        Details can be found in the [openai-python docs], and in the [Azure OpenAI docs] for the embedding model.
+        If you're on Windows you may need to install an [MSVC library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
 6. Enter any other API keys or tokens for services you would like to use.

     !!! note