Auto-GPT/autogpts/autogpt/tests/conftest.py
Reinier van der Leer 39c46ef6be feat(agent/core): Add Anthropic Claude 3 support (#7085)
- feat(agent/core): Add `AnthropicProvider`
  - Add `ANTHROPIC_API_KEY` to .env.template and docs

  Notable differences in logic compared to `OpenAIProvider` (a simplified sketch of this preprocessing follows below):
  - Merges consecutive user messages in `AnthropicProvider._get_chat_completion_args`
  - Merges all system messages and extracts them into the `system` parameter in `AnthropicProvider._get_chat_completion_args`
  - Supports prefill; merges prefill content (if any) into the generated response
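
  A minimal sketch of that preprocessing, assuming a plain list-of-dicts message format (the real logic operates on `ChatMessage` objects inside `_get_chat_completion_args`):

  ```python
  def _prepare_messages(messages: list[dict]) -> tuple[str, list[dict]]:
      """Illustrative only: split out system messages and merge consecutive
      same-role messages, as Anthropic's API requires."""
      system_parts: list[str] = []
      merged: list[dict] = []
      for message in messages:
          if message["role"] == "system":
              # Anthropic accepts only a single top-level `system` parameter.
              system_parts.append(message["content"])
          elif merged and merged[-1]["role"] == message["role"]:
              # Anthropic rejects consecutive same-role messages, so fold
              # this message into the previous one.
              merged[-1]["content"] += "\n\n" + message["content"]
          else:
              merged.append(dict(message))
      return "\n\n".join(system_parts), merged
  ```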

- Prompt changes to improve compatibility with `AnthropicProvider`
  Anthropic's API differs slightly from OpenAI's and applies much stricter input validation. For example, Anthropic supports only a single `system` prompt, whereas OpenAI allows multiple `system` messages. Anthropic also forbids sequences of multiple `user` or `assistant` messages and requires that messages alternate between the two roles.
  - Move response format instruction from separate message into main system prompt
  - Fix clock message format
  - Add pre-fill to `OneShot` generated prompt (see the prefill example below)
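
  Pre-filling works by appending a partial `assistant` message to the request; the model continues from that prefix and returns only the continuation, which is why the provider merges the prefill back into the generated response. A minimal standalone example using the public `anthropic` SDK (model name and prompts are illustrative):

  ```python
  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
  prefill = '{"thoughts": {'  # steer the model toward the expected JSON shape

  response = client.messages.create(
      model="claude-3-opus-20240229",
      max_tokens=1024,
      system="Respond with a JSON object only.",
      messages=[
          {"role": "user", "content": "Report your current task status."},
          # A trailing assistant message acts as a response prefix (prefill).
          {"role": "assistant", "content": prefill},
      ],
  )
  # The API returns only the continuation, so merge the prefill back in:
  full_text = prefill + response.content[0].text
  ```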

- refactor(agent/core): Tweak `model_providers.schema`
  - Simplify `ModelProviderUsage`
     - Remove attribute `total_tokens` as it is always equal to `prompt_tokens + completion_tokens`
     - Modify signature of `update_usage(..)`; no longer requires a full `ModelResponse` object as input
  - Improve `ModelProviderBudget`
     - Change type of attribute `usage` to `defaultdict[str, ModelProviderUsage]` to allow per-model usage tracking (see the sketch after this list)
     - Modify signature of `update_usage_and_cost(..)`; no longer requires a full `ModelResponse` object as input
     - Allow `ModelProviderBudget` zero-argument instantiation
  - Fix type of `AssistantChatMessage.role` to match `ChatMessage.role` (str -> `ChatMessage.Role`)
  - Add shared attributes and constructor to `ModelProvider` base class
  - Add `max_output_tokens` parameter to `create_chat_completion` interface
  - Add pre-filling as a global feature
    - Add `prefill_response` field to `ChatPrompt` model
    - Add `prefill_response` parameter to `create_chat_completion` interface
  - Add `ChatModelProvider.get_available_models()` and remove `ApiManager`
  - Remove unused `OpenAIChatParser` typedef in openai.py
  - Remove redundant `budget` attribute definition on `OpenAISettings`
  - Remove unnecessary `usage` in `OpenAIProvider` > `default_settings` > `budget`
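
  A rough sketch of the resulting shapes, using plain dataclasses for brevity (the real classes are Pydantic models in `model_providers.schema`; the method signatures shown here are assumptions):

  ```python
  from collections import defaultdict
  from dataclasses import dataclass, field


  @dataclass
  class ModelProviderUsage:
      # `total_tokens` was dropped: it is always prompt + completion tokens.
      prompt_tokens: int = 0
      completion_tokens: int = 0

      def update_usage(self, input_tokens: int, output_tokens: int) -> None:
          # Raw token counts suffice; no full `ModelResponse` needed.
          self.prompt_tokens += input_tokens
          self.completion_tokens += output_tokens


  @dataclass
  class ModelProviderBudget:
      # Every field has a default, so `ModelProviderBudget()` is valid.
      total_budget: float = float("inf")
      total_cost: float = 0.0
      remaining_budget: float = float("inf")
      # Keyed by model name -> per-model usage tracking.
      usage: defaultdict[str, ModelProviderUsage] = field(
          default_factory=lambda: defaultdict(ModelProviderUsage)
      )

      def update_usage_and_cost(
          self, model_name: str, input_tokens: int, output_tokens: int, cost: float
      ) -> None:
          self.usage[model_name].update_usage(input_tokens, output_tokens)
          self.total_cost += cost
          self.remaining_budget -= cost
  ```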

- feat(agent): Allow use of any available LLM provider through `MultiProvider` (routing idea sketched below)
  - Add `MultiProvider` (`model_providers.multi`)
  - Replace all references to / uses of `OpenAIProvider` with `MultiProvider`
  - Change type of `Config.smart_llm` and `Config.fast_llm` from `str` to `ModelName`
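
  A hypothetical sketch of the routing idea (the real `MultiProvider` in `model_providers.multi` differs in detail, and the name-based routing rule below is an assumption):

  ```python
  class MultiProvider:
      """Illustrative only: dispatch chat completions to a provider by model name."""

      def __init__(self, openai_provider, anthropic_provider):
          self._providers = {
              "openai": openai_provider,
              "anthropic": anthropic_provider,
          }

      def _provider_for(self, model_name: str):
          # Assumed convention: Anthropic model names start with "claude".
          key = "anthropic" if model_name.startswith("claude") else "openai"
          return self._providers[key]

      async def create_chat_completion(self, prompt_messages, model_name: str, **kwargs):
          provider = self._provider_for(model_name)
          return await provider.create_chat_completion(prompt_messages, model_name, **kwargs)
  ```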

- feat(agent/core): Validate function call arguments in `create_chat_completion` (simplified sketch below)
  - Add `validate_call` method to `CompletionModelFunction` in `model_providers.schema`
  - Add `validate_tool_calls` utility function in `model_providers.utils`
  - Add tool call validation step to `create_chat_completion` in `OpenAIProvider` and `AnthropicProvider`
  - Remove (now redundant) command argument validation logic in agent.py and models/command.py
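
  A simplified, dict-based sketch of the validation step (the actual implementation validates against JSON-schema parameter specs via `CompletionModelFunction.validate_call`):

  ```python
  def validate_tool_calls(tool_calls: list[dict], functions: list[dict]) -> list[str]:
      """Illustrative only: return error messages; an empty list means all valid."""
      errors: list[str] = []
      specs = {function["name"]: function for function in functions}
      for call in tool_calls:
          spec = specs.get(call["name"])
          if spec is None:
              errors.append(f"Unknown function: {call['name']!r}")
              continue
          missing = set(spec.get("required", [])) - set(call["arguments"])
          if missing:
              errors.append(f"{call['name']}: missing argument(s) {sorted(missing)}")
      return errors
  ```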

- refactor(agent): Rename `get_openai_command_specs` to `function_specs_from_commands`
2024-05-04 20:33:25 +02:00


from __future__ import annotations

import os
import uuid
from pathlib import Path

import pytest
from pytest_mock import MockerFixture

from autogpt.agents.agent import Agent, AgentConfiguration, AgentSettings
from autogpt.app.main import _configure_llm_provider
from autogpt.config import AIProfile, Config, ConfigBuilder
from autogpt.core.resource.model_providers import ChatModelProvider
from autogpt.file_storage.local import (
    FileStorage,
    FileStorageConfiguration,
    LocalFileStorage,
)
from autogpt.logs.config import configure_logging

pytest_plugins = [
    "tests.integration.agent_factory",
    "tests.integration.memory.utils",
    "tests.vcr",
]


@pytest.fixture()
def tmp_project_root(tmp_path: Path) -> Path:
    return tmp_path


@pytest.fixture()
def app_data_dir(tmp_project_root: Path) -> Path:
    dir = tmp_project_root / "data"
    dir.mkdir(parents=True, exist_ok=True)
    return dir


@pytest.fixture()
def storage(app_data_dir: Path) -> FileStorage:
    storage = LocalFileStorage(
        FileStorageConfiguration(root=app_data_dir, restrict_to_root=False)
    )
    storage.initialize()
    return storage


@pytest.fixture(scope="function")
def config(
    tmp_project_root: Path,
    app_data_dir: Path,
    mocker: MockerFixture,
):
    # A real key is not needed for most tests; fall back to a dummy value.
    if not os.environ.get("OPENAI_API_KEY"):
        os.environ["OPENAI_API_KEY"] = "sk-dummy"
    config = ConfigBuilder.build_config_from_env(project_root=tmp_project_root)

    config.app_data_dir = app_data_dir
    config.noninteractive_mode = True
    yield config


@pytest.fixture(scope="session")
def setup_logger(config: Config):
    configure_logging(
        debug=True,
        log_dir=Path(__file__).parent / "logs",
        plain_console_output=True,
    )


@pytest.fixture
def llm_provider(config: Config) -> ChatModelProvider:
    return _configure_llm_provider(config)


@pytest.fixture
def agent(
    config: Config, llm_provider: ChatModelProvider, storage: FileStorage
) -> Agent:
    ai_profile = AIProfile(
        ai_name="Base",
        ai_role="A base AI",
        ai_goals=[],
    )

    agent_settings = AgentSettings(
        name=Agent.default_settings.name,
        description=Agent.default_settings.description,
        agent_id=f"AutoGPT-test-agent-{str(uuid.uuid4())[:8]}",
        ai_profile=ai_profile,
        config=AgentConfiguration(
            fast_llm=config.fast_llm,
            smart_llm=config.smart_llm,
            allow_fs_access=not config.restrict_to_workspace,
            use_functions_api=config.openai_functions,
        ),
        history=Agent.default_settings.history.copy(deep=True),
    )

    agent = Agent(
        settings=agent_settings,
        llm_provider=llm_provider,
        file_storage=storage,
        legacy_config=config,
    )
    return agent
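
A test module can consume these fixtures simply by naming them as parameters; pytest resolves the dependency chain (`agent` → `config`, `llm_provider`, `storage`) automatically. A minimal hypothetical example (the `llm_provider` attribute on `Agent` is assumed here):

```python
def test_agent_uses_configured_provider(agent, llm_provider):
    # The `agent` fixture is built with this same provider instance.
    assert agent.llm_provider is llm_provider
```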