Sync release v0.4.1 back into master (#4741)
.github/workflows/ci.yml (16 changed lines, vendored)
@@ -7,18 +7,18 @@ on:
       - 'tests/Auto-GPT-test-cassettes'
       - 'tests/challenges/current_score.json'
   pull_request:
-    branches: [ stable, master ]
+    branches: [ stable, master, release-* ]
   pull_request_target:
-    branches: [ master, ci-test* ]
+    branches: [ master, release-*, ci-test* ]
 
 concurrency:
-  group: ${{ format('ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
-  cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') && github.event.pull_request.head.repo.fork == (github.event_name == 'pull_request_target') }}
+  group: ${{ format('ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
+  cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}
 
 jobs:
   lint:
-    # eliminate duplicate runs on master
-    if: github.event_name == 'push' || github.base_ref != 'master' || (github.event.pull_request.head.repo.fork == (github.event_name == 'pull_request_target'))
+    # eliminate duplicate runs
+    if: github.event_name == 'push' || (github.event.pull_request.head.repo.fork == (github.event_name == 'pull_request_target'))
 
     runs-on: ubuntu-latest
     env:
@@ -73,8 +73,8 @@ jobs:
       $cmd --check || (echo "You have unused imports or pass statements, please run '${cmd} --in-place'" && exit 1)
 
   test:
-    # eliminate duplicate runs on master
-    if: github.event_name == 'push' || github.base_ref != 'master' || (github.event.pull_request.head.repo.fork == (github.event_name == 'pull_request_target'))
+    # eliminate duplicate runs
+    if: github.event_name == 'push' || (github.event.pull_request.head.repo.fork == (github.event_name == 'pull_request_target'))
 
     permissions:
       # Gives the action the necessary permissions for publishing new
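Note on the concurrency change above: the group key now includes the event name, so `pull_request` and `pull_request_target` runs for the same PR land in distinct groups and no longer cancel each other, while `cancel-in-progress` is simplified to apply to any pull-request-like event. In workflow expressions, `a && b || c` behaves like a ternary. A small Python rendering of the group logic for illustration (function name and test values are made up):

```python
def concurrency_group(head_ref: str | None, event_name: str, pr_number: int | None, sha: str) -> str:
    # mirrors: format('ci-{0}', github.head_ref && format('{0}-{1}', event_name, PR#) || github.sha)
    if head_ref:  # github.head_ref is set only for pull-request-triggered runs
        return f"ci-{event_name}-{pr_number}"
    return f"ci-{sha}"  # pushes fall back to the commit SHA

assert concurrency_group("fix/bug", "pull_request", 123, "deadbeef") == "ci-pull_request-123"
assert concurrency_group(None, "push", None, "deadbeef") == "ci-deadbeef"
```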
.github/workflows/docker-ci.yml (2 changed lines, vendored)
@@ -7,7 +7,7 @@ on:
       - 'tests/Auto-GPT-test-cassettes'
       - 'tests/challenges/current_score.json'
   pull_request:
-    branches: [ master, stable ]
+    branches: [ master, release-*, stable ]
 
 concurrency:
   group: ${{ format('docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
.github/workflows/pr-label.yml (2 changed lines, vendored)
@@ -3,7 +3,7 @@ name: "Pull Request auto-label"
 on:
   # So that PRs touching the same files as the push are updated
   push:
-    branches: [ master ]
+    branches: [ master, release-* ]
     paths-ignore:
       - 'tests/Auto-GPT-test-cassettes'
       - 'tests/challenges/current_score.json'
BULLETIN.md (52 changed lines)
@@ -3,45 +3,25 @@ Check out *https://agpt.co*, the official news & updates site for Auto-GPT!
 The documentation also has a place here, at *https://docs.agpt.co*
 
 # For contributors 👷🏼
-Since releasing v0.3.0, we are working on re-architecting the Auto-GPT core to make
-it more extensible and to make room for structural performance-oriented R&D.
-In the meantime, we have less time to process incoming pull requests and issues,
-so we focus on high-value contributions:
- * significant bugfixes
- * *major* improvements to existing functionality and/or docs (so no single-typo fixes)
- * contributions that help us with re-architecture and other roadmapped items
-We have to be somewhat selective in order to keep making progress, but this does not
-mean you can't contribute. Check out the contribution guide on our wiki:
+Since releasing v0.3.0, we have been working on re-architecting the Auto-GPT core to make it more extensible and make room for structural performance-oriented R&D.
+
+Check out the contribution guide on our wiki:
 https://github.com/Significant-Gravitas/Auto-GPT/wiki/Contributing
 
-# 🚀 v0.4.0 Release 🚀
-Two weeks and 76 pull requests have passed since v0.3.1, and we are happy to announce
-the release of v0.4.0!
+# 🚀 v0.4.1 Release 🚀
+Two weeks and 50+ pull requests have passed since v0.4.0, and we are happy to announce the release of v0.4.1!
 
-Highlights and notable changes since v0.3.0:
-
-## ⚠️ Command `send_tweet` is REMOVED
-Twitter functionality (and more) is now covered by plugins.
-
-## ⚠️ Memory backend deprecation 💾
-The Milvus, Pinecone and Weaviate memory backends were rendered incompatible
-by work on the memory system, and have been removed in `master`. The Redis
-memory store was also temporarily removed; we will merge a new implementation ASAP.
-Whether built-in support for the others will be added back in the future is subject to
-discussion, feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
-
-## Document support in `read_file` 📄
-Auto-GPT can now read text from document files, with support added for PDF, DOCX, CSV,
-HTML, TeX and more!
-
-## Managing Auto-GPT's access to commands ❌🔧
-You can now disable a set of built-in commands through the *DISABLED_COMMAND_CATEGORIES*
-variable in .env. Specific shell commands can also be disabled using *DENY_COMMANDS*,
-or selectively enabled using *ALLOW_COMMANDS*.
+Highlights and notable changes since v0.4.0:
+- The .env.template is more readable and better explains the purpose of each environment variable.
+- More dependable search
+  - The CUSTOM_SEARCH_ENGINE_ID variable has been replaced by GOOGLE_CUSTOM_SEARCH_ENGINE_ID; make sure you update it.
+- Better read_file
+- More reliable python code execution
+- Lots of JSON error fixes
+- Directory-based plugins
 
 ## Further fixes and changes 🛠️
-Other highlights include improvements to self-feedback mode and continuous mode,
-documentation, docker and devcontainer setups, and much more. Most of the improvements
-that were made are not yet visible to users, but will pay off in the long term.
-Take a look at the Release Notes on Github for the full changelog!
+Under the hood, we've done a bunch of work improving architectures and streamlining code. Most of that won't be user-visible.
+
+## Take a look at the Release Notes on Github for the full changelog!
 https://github.com/Significant-Gravitas/Auto-GPT/releases
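For context on the bulletin's *DENY_COMMANDS* / *ALLOW_COMMANDS* mechanism: shell commands are checked against the configured lists before execution by `validate_command`, whose docstring is touched further down in this diff. A simplified, hypothetical sketch of how such a deny/allow check can work; the parameter names here are illustrative and not the real signature:

```python
def validate_command(command: str, deny: list[str], allow: list[str]) -> bool:
    """Return True if the shell command may run (simplified sketch)."""
    if not command:
        return False
    first_word = command.split()[0]
    if allow:  # allowlist mode: only explicitly listed commands may run
        return first_word in allow
    return first_word not in deny  # denylist mode: everything not listed may run

assert validate_command("ls -la", deny=["sudo"], allow=[]) is True
assert validate_command("sudo rm -rf /", deny=["sudo"], allow=[]) is False
```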
@@ -18,7 +18,7 @@ DENYLIST_CONTROL = "denylist"
 
 @command(
     "execute_python_code",
-    "Create a Python file and execute it",
+    "Creates a Python file and executes it",
     {
         "code": {
             "type": "string",
@@ -63,7 +63,7 @@ def execute_python_code(code: str, name: str, agent: Agent) -> str:
 
 @command(
     "execute_python_file",
-    "Execute an existing Python file",
+    "Executes an existing Python file",
     {
         "filename": {
             "type": "string",
@@ -191,7 +191,7 @@ def validate_command(command: str, config: Config) -> bool:
 
 @command(
     "execute_shell",
-    "Execute Shell Command, non-interactive commands only",
+    "Executes a Shell Command, non-interactive commands only",
     {
         "command_line": {
             "type": "string",
@@ -237,7 +237,7 @@ def execute_shell(command_line: str, agent: Agent) -> str:
 
 @command(
     "execute_shell_popen",
-    "Execute Shell Command, non-interactive commands only",
+    "Executes a Shell Command, non-interactive commands only",
     {
         "query": {
             "type": "string",
@@ -11,6 +11,7 @@ from confection import Config
 from autogpt.agent.agent import Agent
 from autogpt.command_decorator import command
+from autogpt.commands.file_operations_utils import read_textual_file
 from autogpt.config import Config
 from autogpt.logs import logger
 from autogpt.memory.vector import MemoryItem, VectorMemory
@@ -175,7 +176,7 @@ def ingest_file(
 
 @command(
     "write_to_file",
-    "Write to file",
+    "Writes to a file",
     {
         "filename": {
             "type": "string",
@@ -215,7 +216,7 @@ def write_to_file(filename: str, text: str, agent: Agent) -> str:
 
 @command(
     "append_to_file",
-    "Append to file",
+    "Appends to a file",
     {
         "filename": {
             "type": "string",
@@ -260,7 +261,7 @@ def append_to_file(
 
 @command(
     "delete_file",
-    "Delete file",
+    "Deletes a file",
     {
         "filename": {
             "type": "string",
@@ -290,7 +291,7 @@ def delete_file(filename: str, agent: Agent) -> str:
 
 @command(
     "list_files",
-    "List Files in Directory",
+    "Lists Files in a Directory",
     {
         "directory": {
             "type": "string",
@@ -9,7 +9,7 @@ from autogpt.url_utils.validators import validate_url
 
 @command(
     "clone_repository",
-    "Clone Repository",
+    "Clones a Repository",
     {
         "url": {
             "type": "string",
@@ -16,7 +16,7 @@ from autogpt.logs import logger
 
 @command(
     "generate_image",
-    "Generate Image",
+    "Generates an Image",
     {
         "prompt": {
             "type": "string",
@@ -15,7 +15,7 @@ DUCKDUCKGO_MAX_ATTEMPTS = 3
 
 @command(
     "web_search",
-    "Search the web",
+    "Searches the web",
     {
         "query": {
             "type": "string",
@@ -41,7 +41,7 @@ FILE_DIR = Path(__file__).parent.parent
 
 @command(
     "browse_website",
-    "Browse Website",
+    "Browses a Website",
     {
         "url": {"type": "string", "description": "The URL to visit", "required": True},
         "question": {
@@ -5,7 +5,7 @@ from typing import List, Optional
 import openai
 from openai import Model
 
-from autogpt.llm.modelsinfo import COSTS
+from autogpt.llm.base import CompletionModelInfo
 from autogpt.logs import logger
 from autogpt.singleton import Singleton
@@ -35,14 +35,19 @@ class ApiManager(metaclass=Singleton):
             model (str): The model used for the API call.
         """
+        # the .model property in API responses can contain version suffixes like -v2
+        from autogpt.llm.providers.openai import OPEN_AI_MODELS
+
+        model = model[:-3] if model.endswith("-v2") else model
+        model_info = OPEN_AI_MODELS[model]
+
         self.total_prompt_tokens += prompt_tokens
         self.total_completion_tokens += completion_tokens
-        self.total_cost += (
-            prompt_tokens * COSTS[model]["prompt"]
-            + completion_tokens * COSTS[model]["completion"]
-        ) / 1000
+        self.total_cost += prompt_tokens * model_info.prompt_token_cost / 1000
+        if issubclass(type(model_info), CompletionModelInfo):
+            self.total_cost += (
+                completion_tokens * model_info.completion_token_cost / 1000
+            )
+
         logger.debug(f"Total running cost: ${self.total_cost:.3f}")
 
     def set_total_budget(self, total_budget):
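The cost update above swaps the flat COSTS dict for typed model-info structs: prompt tokens are always billed, completion tokens only for completion-capable models, and `-v2` version suffixes are stripped before lookup. A self-contained sketch of the same logic (simplified dataclasses and a made-up registry; the real structs live in the `ModelInfo` hierarchy changed in the next hunk):

```python
import math
from dataclasses import dataclass

@dataclass
class ModelInfo:
    name: str
    max_tokens: int
    prompt_token_cost: float  # $ per 1000 prompt tokens

@dataclass
class CompletionModelInfo(ModelInfo):
    completion_token_cost: float  # $ per 1000 completion tokens

MODELS: dict[str, ModelInfo] = {  # illustrative registry, not the real one
    "gpt-3.5-turbo-0613": CompletionModelInfo("gpt-3.5-turbo-0613", 4096, 0.0015, 0.002),
    "text-embedding-ada-002": ModelInfo("text-embedding-ada-002", 8191, 0.0001),
}

def update_cost(total: float, prompt_tokens: int, completion_tokens: int, model: str) -> float:
    model = model[:-3] if model.endswith("-v2") else model  # strip version suffix
    info = MODELS[model]
    total += prompt_tokens * info.prompt_token_cost / 1000
    if isinstance(info, CompletionModelInfo):  # embeddings have no completion cost
        total += completion_tokens * info.completion_token_cost / 1000
    return total

assert math.isclose(update_cost(0.0, 600, 1200, "gpt-3.5-turbo-0613"),
                    (600 * 0.0015 + 1200 * 0.002) / 1000)
assert math.isclose(update_cost(0.0, 1337, 0, "text-embedding-ada-002"),
                    1337 * 0.0001 / 1000)
```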
@@ -34,22 +34,27 @@ class ModelInfo:
     Would be lovely to eventually get this directly from APIs, but needs to be scraped from
     websites for now.
     """
 
     name: str
-    prompt_token_cost: float
-    completion_token_cost: float
     max_tokens: int
+    prompt_token_cost: float
 
 
 @dataclass
-class ChatModelInfo(ModelInfo):
+class CompletionModelInfo(ModelInfo):
+    """Struct for generic completion model information."""
+
+    completion_token_cost: float
+
+
+@dataclass
+class ChatModelInfo(CompletionModelInfo):
     """Struct for chat model information."""
 
 
 @dataclass
-class TextModelInfo(ModelInfo):
+class TextModelInfo(CompletionModelInfo):
     """Struct for text completion model information."""
 
@@ -1,11 +0,0 @@
-COSTS = {
-    "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
-    "gpt-3.5-turbo-0301": {"prompt": 0.002, "completion": 0.002},
-    "gpt-4-0314": {"prompt": 0.03, "completion": 0.06},
-    "gpt-4": {"prompt": 0.03, "completion": 0.06},
-    "gpt-4-0314": {"prompt": 0.03, "completion": 0.06},
-    "gpt-4-32k": {"prompt": 0.06, "completion": 0.12},
-    "gpt-4-32k-0314": {"prompt": 0.06, "completion": 0.12},
-    "text-embedding-ada-002": {"prompt": 0.0004, "completion": 0.0},
-    "text-davinci-003": {"prompt": 0.02, "completion": 0.02},
-}
@@ -9,7 +9,6 @@ from colorama import Fore, Style
 from openai.error import APIError, RateLimitError, Timeout
 from openai.openai_object import OpenAIObject
 
-from autogpt.llm.api_manager import ApiManager
 from autogpt.llm.base import (
     ChatModelInfo,
     EmbeddingModelInfo,
@@ -22,23 +21,23 @@ from autogpt.logs import logger
 OPEN_AI_CHAT_MODELS = {
     info.name: info
     for info in [
         ChatModelInfo(
-            name="gpt-3.5-turbo",
-            prompt_token_cost=0.002,
-            completion_token_cost=0.002,
-            max_tokens=4096,
-        ),
-        ChatModelInfo(
             name="gpt-3.5-turbo-0301",
-            prompt_token_cost=0.002,
+            prompt_token_cost=0.0015,
             completion_token_cost=0.002,
             max_tokens=4096,
         ),
         ChatModelInfo(
-            name="gpt-4",
-            prompt_token_cost=0.03,
-            completion_token_cost=0.06,
-            max_tokens=8192,
+            name="gpt-3.5-turbo-0613",
+            prompt_token_cost=0.0015,
+            completion_token_cost=0.002,
+            max_tokens=4096,
+        ),
+        ChatModelInfo(
+            name="gpt-3.5-turbo-16k-0613",
+            prompt_token_cost=0.003,
+            completion_token_cost=0.004,
+            max_tokens=16384,
         ),
         ChatModelInfo(
             name="gpt-4-0314",
@@ -47,10 +46,10 @@ OPEN_AI_CHAT_MODELS = {
             max_tokens=8192,
         ),
         ChatModelInfo(
-            name="gpt-4-32k",
-            prompt_token_cost=0.06,
-            completion_token_cost=0.12,
-            max_tokens=32768,
+            name="gpt-4-0613",
+            prompt_token_cost=0.03,
+            completion_token_cost=0.06,
+            max_tokens=8192,
         ),
         ChatModelInfo(
             name="gpt-4-32k-0314",
@@ -58,8 +57,25 @@ OPEN_AI_CHAT_MODELS = {
             completion_token_cost=0.12,
             max_tokens=32768,
         ),
+        ChatModelInfo(
+            name="gpt-4-32k-0613",
+            prompt_token_cost=0.06,
+            completion_token_cost=0.12,
+            max_tokens=32768,
+        ),
     ]
 }
+# Set aliases for rolling model IDs
+chat_model_mapping = {
+    "gpt-3.5-turbo": "gpt-3.5-turbo-0301",
+    "gpt-3.5-turbo-16k": "gpt-3.5-turbo-16k-0613",
+    "gpt-4": "gpt-4-0314",
+    "gpt-4-32k": "gpt-4-32k-0314",
+}
+for alias, target in chat_model_mapping.items():
+    alias_info = ChatModelInfo(**OPEN_AI_CHAT_MODELS[target].__dict__)
+    alias_info.name = alias
+    OPEN_AI_CHAT_MODELS[alias] = alias_info
 
 OPEN_AI_TEXT_MODELS = {
     info.name: info
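The alias block above copies a pinned snapshot's fields into a fresh `ChatModelInfo` and renames it, so a rolling ID like `gpt-4` resolves to the snapshot's pricing and limits under its own name. The same copy-and-rename trick in isolation (simplified struct with one made-up registry entry):

```python
from dataclasses import dataclass

@dataclass
class ChatModelInfo:
    name: str
    prompt_token_cost: float
    completion_token_cost: float
    max_tokens: int

CHAT_MODELS = {
    "gpt-4-0314": ChatModelInfo("gpt-4-0314", 0.03, 0.06, 8192),  # pinned snapshot
}

# copy the snapshot's fields into a new struct, then rename it
alias_info = ChatModelInfo(**CHAT_MODELS["gpt-4-0314"].__dict__)
alias_info.name = "gpt-4"
CHAT_MODELS["gpt-4"] = alias_info

assert CHAT_MODELS["gpt-4"].max_tokens == 8192          # alias shares the snapshot's limits
assert CHAT_MODELS["gpt-4-0314"].name == "gpt-4-0314"   # the original entry is untouched
```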
@@ -78,8 +94,7 @@ OPEN_AI_EMBEDDING_MODELS = {
     for info in [
         EmbeddingModelInfo(
             name="text-embedding-ada-002",
-            prompt_token_cost=0.0004,
-            completion_token_cost=0.0,
+            prompt_token_cost=0.0001,
             max_tokens=8191,
             embedding_dimensions=1536,
         ),
@@ -95,6 +110,8 @@ OPEN_AI_MODELS: dict[str, ChatModelInfo | EmbeddingModelInfo | TextModelInfo] =
 
 def meter_api(func):
     """Adds ApiManager metering to functions which make OpenAI API calls"""
+    from autogpt.llm.api_manager import ApiManager
+
     api_manager = ApiManager()
 
     openai_obj_processor = openai.util.convert_to_openai_object
@@ -24,32 +24,28 @@ def count_message_tokens(
     Returns:
         int: The number of tokens used by the list of messages.
     """
-    try:
-        encoding = tiktoken.encoding_for_model(model)
-    except KeyError:
-        logger.warn("Warning: model not found. Using cl100k_base encoding.")
-        encoding = tiktoken.get_encoding("cl100k_base")
-    if model == "gpt-3.5-turbo":
-        # !Note: gpt-3.5-turbo may change over time.
-        # Returning num tokens assuming gpt-3.5-turbo-0301.")
-        return count_message_tokens(messages, model="gpt-3.5-turbo-0301")
-    elif model == "gpt-4":
-        # !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.")
-        return count_message_tokens(messages, model="gpt-4-0314")
-    elif model == "gpt-3.5-turbo-0301":
+    if model.startswith("gpt-3.5-turbo"):
         tokens_per_message = (
             4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
         )
         tokens_per_name = -1  # if there's a name, the role is omitted
-    elif model == "gpt-4-0314":
+        encoding_model = "gpt-3.5-turbo"
+    elif model.startswith("gpt-4"):
         tokens_per_message = 3
         tokens_per_name = 1
+        encoding_model = "gpt-4"
     else:
         raise NotImplementedError(
-            f"num_tokens_from_messages() is not implemented for model {model}.\n"
+            f"count_message_tokens() is not implemented for model {model}.\n"
             " See https://github.com/openai/openai-python/blob/main/chatml.md for"
             " information on how messages are converted to tokens."
         )
+    try:
+        encoding = tiktoken.encoding_for_model(encoding_model)
+    except KeyError:
+        logger.warn("Warning: model not found. Using cl100k_base encoding.")
+        encoding = tiktoken.get_encoding("cl100k_base")
+
     num_tokens = 0
     for message in messages:
         num_tokens += tokens_per_message
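The rewrite above keys the per-message overhead off the model family prefix, then resolves the tiktoken encoding once instead of recursing per pinned snapshot. A self-contained sketch of the same counting scheme (requires `tiktoken`; the 4/-1 and 3/1 constants come from the diff, while the trailing +3 for the assistant reply primer follows the standard ChatML counting recipe and is an assumption here, since the tail of the function is outside this hunk):

```python
import tiktoken

def count_message_tokens(messages: list[dict], model: str = "gpt-3.5-turbo") -> int:
    # per-message and per-name overheads depend on the model family
    if model.startswith("gpt-3.5-turbo"):
        tokens_per_message, tokens_per_name = 4, -1
        encoding_model = "gpt-3.5-turbo"
    elif model.startswith("gpt-4"):
        tokens_per_message, tokens_per_name = 3, 1
        encoding_model = "gpt-4"
    else:
        raise NotImplementedError(f"count_message_tokens() is not implemented for {model}")

    try:
        encoding = tiktoken.encoding_for_model(encoding_model)
    except KeyError:
        encoding = tiktoken.get_encoding("cl100k_base")  # sane fallback

    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":  # if a name is present, the role is omitted
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|> (assumed)
    return num_tokens

print(count_message_tokens([{"role": "user", "content": "Hello!"}]))
```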
@@ -1,5 +1,6 @@
 """Base class for all voice classes."""
 import abc
+import re
 from threading import Lock
 
 from autogpt.config import Config
@@ -30,6 +31,11 @@ class VoiceBase(AbstractSingleton):
             text (str): The text to say.
             voice_index (int): The index of the voice to use.
         """
+        text = re.sub(
+            r"\b(?:https?://[-\w_.]+/?\w[-\w_.]*\.(?:[-\w_.]+/?\w[-\w_.]*\.)?[a-z]+(?:/[-\w_.%]+)*\b(?!\.))",
+            "",
+            text,
+        )
         with self._mutex:
             return self._speech(text, voice_index)
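The added `re.sub` strips URLs out of the text before it reaches the TTS backend, so the agent no longer reads links aloud character by character. A quick standalone check of the pattern (the sample sentence is invented):

```python
import re

URL_PATTERN = r"\b(?:https?://[-\w_.]+/?\w[-\w_.]*\.(?:[-\w_.]+/?\w[-\w_.]*\.)?[a-z]+(?:/[-\w_.%]+)*\b(?!\.))"

text = "See https://docs.agpt.co/setup for details."
print(re.sub(URL_PATTERN, "", text))
# -> "See  for details."  (the URL is removed before speech synthesis)
```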
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 
 [project]
 name = "agpt"
-version = "0.4.0"
+version = "0.4.1"
 authors = [
     { name="Torantulino", email="support@agpt.co" },
 ]
@@ -2,7 +2,7 @@ from unittest.mock import MagicMock, patch
 
 import pytest
 
-from autogpt.llm.api_manager import COSTS, ApiManager
+from autogpt.llm.api_manager import ApiManager
 from autogpt.llm.providers import openai
 
 api_manager = ApiManager()
@@ -14,19 +14,6 @@ def reset_api_manager():
     yield
 
 
-@pytest.fixture(autouse=True)
-def mock_costs():
-    with patch.dict(
-        COSTS,
-        {
-            "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
-            "text-embedding-ada-002": {"prompt": 0.0004, "completion": 0},
-        },
-        clear=True,
-    ):
-        yield
-
-
 class TestProviderOpenAI:
     @staticmethod
     def test_create_chat_completion_debug_mode(caplog):
@@ -1,8 +1,10 @@
 from unittest.mock import patch
 
 import pytest
+from pytest_mock import MockerFixture
 
-from autogpt.llm.api_manager import COSTS, ApiManager
+from autogpt.llm.api_manager import ApiManager
+from autogpt.llm.providers.openai import OPEN_AI_CHAT_MODELS, OPEN_AI_EMBEDDING_MODELS
 
 api_manager = ApiManager()
@@ -14,26 +16,27 @@ def reset_api_manager():
     yield
 
 
 @pytest.fixture(autouse=True)
-def mock_costs():
-    with patch.dict(
-        COSTS,
-        {
-            "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
-            "text-embedding-ada-002": {"prompt": 0.0004, "completion": 0},
-        },
-        clear=True,
-    ):
-        yield
+def mock_costs(mocker: MockerFixture):
+    mocker.patch.multiple(
+        OPEN_AI_CHAT_MODELS["gpt-3.5-turbo"],
+        prompt_token_cost=0.0013,
+        completion_token_cost=0.0025,
+    )
+    mocker.patch.multiple(
+        OPEN_AI_EMBEDDING_MODELS["text-embedding-ada-002"],
+        prompt_token_cost=0.0004,
+    )
+    yield
 
 
 class TestApiManager:
     def test_getter_methods(self):
         """Test the getter methods for total tokens, cost, and budget."""
-        api_manager.update_cost(60, 120, "gpt-3.5-turbo")
+        api_manager.update_cost(600, 1200, "gpt-3.5-turbo")
         api_manager.set_total_budget(10.0)
 
-        assert api_manager.get_total_prompt_tokens() == 60
-        assert api_manager.get_total_completion_tokens() == 120
-        assert api_manager.get_total_cost() == (60 * 0.002 + 120 * 0.002) / 1000
+        assert api_manager.get_total_prompt_tokens() == 600
+        assert api_manager.get_total_completion_tokens() == 1200
+        assert api_manager.get_total_cost() == (600 * 0.0013 + 1200 * 0.0025) / 1000
         assert api_manager.get_total_budget() == 10.0
 
     @staticmethod
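The new fixture patches attributes on the shared ChatModelInfo instances with `mocker.patch.multiple`, and pytest-mock restores them after each test, which is why it can safely mutate module-level state. A toy sketch of the same pattern (standalone, assuming pytest and pytest-mock are installed; the struct and registry are made up):

```python
from dataclasses import dataclass

import pytest
from pytest_mock import MockerFixture

@dataclass
class ModelInfo:
    name: str
    prompt_token_cost: float

REGISTRY = {"demo-model": ModelInfo("demo-model", 0.002)}

@pytest.fixture(autouse=True)
def mock_costs(mocker: MockerFixture):
    # patches the attribute in place; pytest-mock undoes it when the test ends
    mocker.patch.multiple(REGISTRY["demo-model"], prompt_token_cost=0.0013)
    yield

def test_patched_cost():
    assert REGISTRY["demo-model"].prompt_token_cost == 0.0013
```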
@@ -45,7 +48,7 @@ class TestApiManager:
         assert api_manager.get_total_budget() == total_budget
 
     @staticmethod
-    def test_update_cost():
+    def test_update_cost_completion_model():
         """Test if updating the cost works correctly."""
         prompt_tokens = 50
         completion_tokens = 100
@@ -53,9 +56,24 @@ class TestApiManager:
 
         api_manager.update_cost(prompt_tokens, completion_tokens, model)
 
-        assert api_manager.get_total_prompt_tokens() == 50
-        assert api_manager.get_total_completion_tokens() == 100
-        assert api_manager.get_total_cost() == (50 * 0.002 + 100 * 0.002) / 1000
+        assert api_manager.get_total_prompt_tokens() == prompt_tokens
+        assert api_manager.get_total_completion_tokens() == completion_tokens
+        assert (
+            api_manager.get_total_cost()
+            == (prompt_tokens * 0.0013 + completion_tokens * 0.0025) / 1000
+        )
+
+    @staticmethod
+    def test_update_cost_embedding_model():
+        """Test if updating the cost works correctly."""
+        prompt_tokens = 1337
+        model = "text-embedding-ada-002"
+
+        api_manager.update_cost(prompt_tokens, 0, model)
+
+        assert api_manager.get_total_prompt_tokens() == prompt_tokens
+        assert api_manager.get_total_completion_tokens() == 0
+        assert api_manager.get_total_cost() == (prompt_tokens * 0.0004) / 1000
 
     @staticmethod
     def test_get_models():