Mirror of https://github.com/aljazceru/Auto-GPT.git (synced 2026-02-19 13:14:26 +01:00)
* refactor(benchmark): Deduplicate configuration loading logic
- Move the configuration loading logic into a separate `load_agbenchmark_config` function in the `agbenchmark/config.py` module.
- Replace the duplicated loading logic in `conftest.py`, `generate_test.py`, `ReportManager.py`, `reports.py`, and `__main__.py` with calls to the `load_agbenchmark_config` function.
* fix(benchmark): Fix type errors, linting errors, and clean up CLI validation in __main__.py
- Fixed type errors and linting errors in `__main__.py`
- Improved the readability of CLI argument validation by introducing a separate function for it
* refactor(benchmark): Lint and typefix app.py
- Rearranged and cleaned up import statements
- Fixed type errors caused by improper use of `psutil` objects
- Simplified a number of `os.path` usages by converting to `pathlib`
- Use `Task` and `TaskRequestBody` classes from `agent_protocol_client` instead of `.schema`
* refactor(benchmark): Replace `.agent_protocol_client` with `agent-protocol-client`, clean up schema.py
- Remove `agbenchmark.agent_protocol_client` (an offline copy of `agent-protocol-client`).
- Add `agent-protocol-client` as a dependency and change imports to `agent_protocol_client`.
- Fix type annotation on `agent_api_interface.py::upload_artifacts` (`ApiClient` -> `AgentApi`).
- Remove all unused types from schema.py (= most of them).
* refactor(benchmark): Use pathlib in agent_interface.py and agent_api_interface.py
* refactor(benchmark): Improve typing, response validation, and readability in app.py
- Simplified response generation by leveraging FastAPI's type checking and conversion.
- Introduced use of `HTTPException` for error responses (see the sketch after this list).
- Improved naming, formatting, and typing in `app.py::create_evaluation`.
- Updated the docstring on `app.py::create_agent_task`.
- Fixed return type annotations of `create_single_test` and `create_challenge` in generate_test.py.
- Added default values to optional attributes on models in report_types_v2.py.
- Removed unused imports in `generate_test.py`.
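A minimal sketch of the `HTTPException` pattern described in this list; the endpoint, task store, and detail message are illustrative, not taken from the actual app.py:

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()
tasks: dict[str, dict] = {}  # hypothetical in-memory task store

@app.get("/agent/tasks/{task_id}")
async def get_agent_task(task_id: str) -> dict:
    task = tasks.get(task_id)
    if task is None:
        # FastAPI converts this into a JSON error response with status 404
        raise HTTPException(status_code=404, detail=f"Task {task_id} not found")
    return task  # FastAPI validates and serializes this per the annotation
```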
* refactor(benchmark): Clean up logging and print statements
- Introduced use of the `logging` library for unified logging and better readability.
- Converted most print statements to use `logger.debug`, `logger.warning`, and `logger.error`.
- Improved descriptiveness of log statements.
- Removed unnecessary print statements.
- Added log statements to `except` blocks that were previously silent or unspecific.
- Added `--debug` flag, which sets the log level to `DEBUG` and enables a more comprehensive log format.
- Added `.utils.logging` module with `configure_logging` function to easily configure the logging library (sketched after this list).
- Converted raw escape sequences in `.utils.challenge` to use `colorama`.
- Renamed `generate_test.py::generate_tests` to `load_challenges`.
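A minimal sketch of what the `configure_logging` helper could look like; the actual signature and log formats in `.utils.logging` may differ:

```python
import logging

def configure_logging(level: int = logging.INFO) -> None:
    # Use a more comprehensive format when debugging, as the --debug flag does
    if level == logging.DEBUG:
        fmt = "%(asctime)s %(levelname)s %(name)s:%(lineno)d  %(message)s"
    else:
        fmt = "%(levelname)s  %(message)s"
    logging.basicConfig(level=level, format=fmt)
```

The `--debug` flag would then simply call `configure_logging(logging.DEBUG)` at startup.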
* refactor(benchmark): Remove unused server.py and agent_interface.py::run_agent
- Remove unused server.py file
- Remove unused run_agent function from agent_interface.py
* refactor(benchmark): Clean up conftest.py
- Fix and add type annotations
- Rewrite docstrings
- Disable or remove unused code
- Fix definition of arguments and their types in `pytest_addoption`
* refactor(benchmark): Clean up generate_test.py file
- Refactored the `create_single_test` function for clarity and readability
- Removed unused variables
- Made creation of `Challenge` subclasses more straightforward
- Made bare `except` more specific
- Renamed `Challenge.setup_challenge` method to `run_challenge`
- Updated type hints and annotations
- Made minor code/readability improvements in `load_challenges`
- Added a helper function `_add_challenge_to_module` for attaching a Challenge class to the current module
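The helper presumably amounts to a `setattr` on the module object so that pytest can collect the generated class; a sketch, with the exact signature being an assumption:

```python
import sys

def _add_challenge_to_module(challenge: type) -> None:
    # Attach the dynamically created Challenge subclass to this module,
    # making it visible to pytest's test collection
    setattr(sys.modules[__name__], challenge.__name__, challenge)
```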
* fix(benchmark): Fix and add type annotations in execute_sub_process.py
* refactor(benchmark): Simplify const determination in agent_interface.py
- Simplify the logic that determines the value of `HELICONE_GRAPHQL_LOGS`
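The simplified determination is presumably a one-line boolean environment check along these lines (illustrative):

```python
import os

# Interpret the environment variable as a boolean flag
HELICONE_GRAPHQL_LOGS = os.getenv("HELICONE_GRAPHQL_LOGS", "").lower() == "true"
```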
* fix(benchmark): Register category markers to prevent warnings
- Use the `pytest_configure` hook to register the known challenge categories as markers; otherwise, pytest emits "unknown marker" warnings at runtime.
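Registering markers through the hook looks roughly like this; the category names here are illustrative:

```python
import pytest

def pytest_configure(config: pytest.Config) -> None:
    # Register each known challenge category as a marker so pytest
    # does not warn about unknown markers on collected challenges
    for category in ("code", "retrieval", "memory"):  # illustrative categories
        config.addinivalue_line("markers", f"{category}: challenge category")
```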
* refactor(benchmark/challenges): Fix indentation in 4_revenue_retrieval_2/data.json
* refactor(benchmark): Update agent_api_interface.py
- Add type annotations to `copy_agent_artifacts_into_temp_folder` function
- Add note about broken endpoint in the `agent_protocol_client` library
- Remove unused variable in `run_api_agent` function
- Improve readability and resolve linting error
* feat(benchmark): Improve and centralize pathfinding
- Search the path hierarchy for an applicable `agbenchmark_config`, rather than assuming it's in the current folder (see the sketch below).
- Create `agbenchmark.utils.path_manager` containing `AGBenchmarkPathManager`, and export a `PATH_MANAGER` constant.
- Replace path constants defined in `__main__.py` with usages of `PATH_MANAGER`.
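A sketch of the hierarchy-search idea using a hypothetical helper; the real `AGBenchmarkPathManager` implementation may differ:

```python
from pathlib import Path

def find_agbenchmark_config(start: Path | None = None) -> Path:
    """Walk up from `start` (default: cwd) to find an `agbenchmark_config` folder."""
    folder = (start or Path.cwd()).resolve()
    for candidate_dir in (folder, *folder.parents):
        candidate = candidate_dir / "agbenchmark_config"
        if candidate.is_dir():
            return candidate
    raise FileNotFoundError("No agbenchmark_config found in the path hierarchy")
```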
* feat(benchmark/cli): Clean up and improve CLI
- Updated commands, options, and their descriptions to be more intuitive and consistent
- Moved slow imports into the entrypoints that use them to speed up application startup
- Fixed type hints to match output types of Click options
- Hid deprecated `agbenchmark start` command
- Refactored code to improve readability and maintainability
- Moved main entrypoint into `run` subcommand
- Fixed `version` and `serve` subcommands
- Added the `click-default-group` package to allow using `run` implicitly, for backwards compatibility (see the sketch after this list)
- Renamed `--no_dep` to `--no-dep` for consistency
- Fixed string formatting issues in log statements
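Making `run` the implicit default with `click-default-group` looks roughly like this; the command bodies are illustrative:

```python
import click
from click_default_group import DefaultGroup

@click.group(cls=DefaultGroup, default="run", default_if_no_args=True)
def cli() -> None:
    """AGBenchmark CLI (sketch)."""

@cli.command()
def run() -> None:
    # `agbenchmark` with no subcommand now falls through to `run`,
    # preserving backwards compatibility
    click.echo("running benchmark...")

if __name__ == "__main__":
    cli()
```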
* refactor(benchmark/config): Move AgentBenchmarkConfig and related functions to config.py
- Move the `AgentBenchmarkConfig` class from `utils/data_types.py` to `config.py`.
- Extract the `calculate_info_test_path` function from `utils/data_types.py` and move it to `config.py` as a private helper function `_calculate_info_test_path`.
- Move `load_agent_benchmark_config()` to `AgentBenchmarkConfig.load()`.
- Changed simple getter methods on `AgentBenchmarkConfig` to calculated properties.
- Update all code references according to the changes mentioned above.
* refactor(benchmark): Fix ReportManager init parameter types and use pathlib
- Fix the type annotation of the `benchmark_start_time` parameter in `ReportManager.__init__`, which was mistyped as `str` instead of `datetime`.
- Change the type of the `filename` parameter in the `ReportManager.__init__` method from `str` to `Path`.
- Rename `self.filename` to `self.report_file` in `ReportManager`.
- Change the way the report file is created, opened and saved to use the `Path` object.
* refactor(benchmark): Improve typing surrounding ChallengeData and clean up its implementation
- Use `ChallengeData` objects instead of untyped `dict` in app.py, generate_test.py, reports.py.
- Remove unnecessary methods `serialize`, `get_data`, `get_json_from_path`, `deserialize` from `ChallengeData` class.
- Remove unused methods `challenge_from_datum` and `challenge_from_test_data` from the `ChallengeData` class.
- Update function signatures and annotations of `create_challenge` and `generate_single_test` functions in generate_test.py.
- Add types to function signatures of `generate_single_call_report` and `finalize_reports` in reports.py.
- Remove unnecessary `challenge_data` parameter (in generate_test.py) and fixture (in conftest.py).
* refactor(benchmark): Clean up generate_test.py, conftest.py and __main__.py
- Cleaned up generate_test.py and conftest.py
- Consolidated challenge creation logic in the `Challenge` class itself, most notably the new `Challenge.from_challenge_spec` method.
- Moved challenge selection logic from generate_test.py to the `pytest_collection_modifyitems` hook in conftest.py.
- Converted methods in the `Challenge` class to class methods where appropriate.
- Improved argument handling in the `run_benchmark` function in `__main__.py`.
* refactor(benchmark/config): Merge AGBenchmarkPathManager into AgentBenchmarkConfig and reduce fragmented/global state
- Merge the functionality of `AGBenchmarkPathManager` into `AgentBenchmarkConfig` to consolidate the configuration management.
- Remove the `.path_manager` module containing `AGBenchmarkPathManager`.
- Pass the `AgentBenchmarkConfig` and its attributes through function arguments to reduce global state and improve code clarity.
* feat(benchmark/serve): Configurable port for `serve` subcommand
- Added `--port` option to `serve` subcommand to allow for specifying the port to run the API on.
- If no `--port` option is provided, the port will default to the value specified in the `PORT` environment variable, or 8080 if not set.
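Click can express this precedence (flag, then `PORT` environment variable, then 8080) directly; a sketch of the option wiring only, with the command body as a placeholder:

```python
import click

@click.command()
@click.option(
    "--port",
    type=int,
    envvar="PORT",  # fall back to the PORT environment variable
    default=8080,   # final fallback if neither flag nor env var is set
    show_default=True,
    help="Port to run the API on.",
)
def serve(port: int) -> None:
    click.echo(f"Serving benchmark API on port {port}")  # placeholder for server startup
```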
* feat(benchmark/cli): Add `config` subcommand
- Added a new subcommand `config` to the AGBenchmark CLI, to display information about the present AGBenchmark config.
* fix(benchmark): Gracefully handle incompatible challenge spec files in app.py
- Added a check to skip deprecated challenges
- Added logging to allow debugging of the loading process
- Added handling of validation errors when parsing challenge spec files (sketched below)
- Added missing `spec_file` attribute to `ChallengeData`
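The graceful handling described above plausibly amounts to catching pydantic's `ValidationError` per spec file; a sketch under that assumption, where the loader function itself is hypothetical:

```python
import logging
from pathlib import Path

from pydantic import ValidationError

from agbenchmark.utils.data_types import ChallengeData

logger = logging.getLogger(__name__)

def load_challenge_specs(spec_files: list[Path]) -> list[ChallengeData]:
    challenges = []
    for spec_file in spec_files:
        logger.debug(f"Loading challenge spec: {spec_file}")
        try:
            challenges.append(ChallengeData.parse_file(spec_file))
        except ValidationError as e:
            # Skip incompatible spec files instead of aborting the whole run
            logger.warning(f"Skipping invalid challenge spec {spec_file}: {e}")
    return challenges
```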
* refactor(benchmark): Move `run_benchmark` entrypoint to main.py, use it in `/reports` endpoint
- Move `run_benchmark` and `validate_args` from __main__.py to main.py
- Replace agbenchmark subprocess in `app.py:run_single_test` with `run_benchmark`
- Move `get_unique_categories` from __main__.py to challenges/__init__.py
- Move `OPTIONAL_CATEGORIES` from __main__.py to challenge.py
- Reduce operations on updates.json (including `initialize_updates_file`) outside of API
* refactor(benchmark): Remove unused `/updates` endpoint and all related code
- Remove `updates_json_file` attribute from `AgentBenchmarkConfig`
- Remove `get_updates` and `_initialize_updates_file` in app.py
- Remove `append_updates_file` and `create_update_json` functions in agent_api_interface.py
- Remove call to `append_updates_file` in challenge.py
* refactor(benchmark/config): Clean up and update docstrings on `AgentBenchmarkConfig`
- Add and update docstrings
- Change base class from `BaseModel` to `BaseSettings`, allowing extras for backwards compatibility (see the sketch after this list)
- Make naming of path attributes on `AgentBenchmarkConfig` more consistent
- Remove unused `agent_home_directory` attribute
- Remove unused `workspace` attribute
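Assuming pydantic v1 (consistent with the `parse_file` usage in the code below), the `BaseSettings` change with extras allowed would look roughly like:

```python
from pydantic import BaseSettings

class AgentBenchmarkConfig(BaseSettings):
    """Sketch only; the real attribute names live in config.py."""

    host: str = "http://localhost:8000"  # illustrative attribute

    class Config:
        extra = "allow"  # tolerate unknown keys for backwards compatibility
```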
* fix(benchmark): Restore mechanism to select (optional) categories in agent benchmark config
* fix(benchmark): Update agent-protocol-client to v1.1.0
- Fixes issue with fetching task artifact listings
import glob
import json
import logging
import math
import os
import subprocess
import sys
from abc import ABC
from pathlib import Path
from typing import Any, ClassVar, List

import openai
import pytest
from colorama import Fore, Style

from agbenchmark.agent_api_interface import run_api_agent
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.utils.data_types import ChallengeData, Ground
from agbenchmark.utils.prompts import (
    END_PROMPT,
    FEW_SHOT_EXAMPLES,
    PROMPT_MAP,
    SCORING_MAP,
)

logger = logging.getLogger(__name__)

with open(
    Path(__file__).parent.parent / "challenges" / "optional_categories.json"
) as f:
    OPTIONAL_CATEGORIES: list[str] = json.load(f)["optional_categories"]


class Challenge(ABC):
    """The parent class to all specific challenges classes.
    Defines helper methods for running a challenge"""

    data: ChallengeData
    CHALLENGE_LOCATION: ClassVar[str]
    ARTIFACTS_LOCATION: ClassVar[str]
    scores: ClassVar[dict[str, Any]] = {}  # this is for suites

    @staticmethod
    def from_challenge_spec(spec_file: Path) -> type["Challenge"]:
        challenge_data = ChallengeData.parse_file(spec_file)

        challenge_class_name = f"Test{challenge_data.name}"
        logger.debug(f"Creating {challenge_class_name} from spec: {spec_file}")
        return type(
            challenge_class_name,
            (Challenge,),
            {
                "data": challenge_data,
                "CHALLENGE_LOCATION": str(spec_file),
                "ARTIFACTS_LOCATION": str(spec_file.resolve().parent),
            },
        )

    # Define test method within the dynamically created class
    @pytest.mark.asyncio
    async def test_method(
        self, config: AgentBenchmarkConfig, request: pytest.FixtureRequest
    ) -> None:
        # skip optional categories
        self.skip_optional_categories(config)

        if os.environ.get("HELICONE_API_KEY"):
            from helicone.lock import HeliconeLockManager

            HeliconeLockManager.write_custom_property("challenge", self.data.name)

        timeout = self.data.cutoff or 60

        if request.config.getoption("--nc"):
            timeout = 100000
        elif cutoff := request.config.getoption("--cutoff"):
            timeout = int(cutoff)

        await self.run_challenge(config, timeout)

        scores = self.get_scores(config.temp_folder)
        request.node.answers = (
            scores["answers"] if request.config.getoption("--keep-answers") else None
        )
        del scores["answers"]  # remove answers from scores
        request.node.scores = scores  # store scores in request.node
        is_score_100 = 1 in scores["values"]

        assert is_score_100

    async def run_challenge(self, config: AgentBenchmarkConfig, cutoff: int) -> None:
        from agbenchmark.agent_interface import copy_artifacts_into_temp_folder

        if not self.data.task:
            return

        print(
            f"{Fore.MAGENTA + Style.BRIGHT}{'='*24} "
            f"Starting {self.data.name} challenge"
            f" {'='*24}{Style.RESET_ALL}"
        )
        print(f"{Fore.BLACK}Task: {self.data.task}{Fore.RESET}")

        await run_api_agent(self.data, config, self.ARTIFACTS_LOCATION, cutoff)

        # hidden files are added after the agent runs. Hidden files can be python test files.
        # We copy them in the temporary folder to make it easy to import the code produced by the agent
        artifact_paths = [
            self.ARTIFACTS_LOCATION,
            str(Path(self.CHALLENGE_LOCATION).parent),
        ]
        for path in artifact_paths:
            copy_artifacts_into_temp_folder(config.temp_folder, "custom_python", path)

    @staticmethod
    def get_artifacts_out(
        workspace: str | Path | dict[str, str], ground: Ground
    ) -> List[str]:
        if isinstance(workspace, dict):
            workspace = workspace["output"]

        script_dir = workspace
        files_contents = []

        for file_pattern in ground.files:
            # Check if it is a file extension
            if file_pattern.startswith("."):
                # Find all files with the given extension in the workspace
                matching_files = glob.glob(os.path.join(script_dir, "*" + file_pattern))
            else:
                # Otherwise, it is a specific file
                matching_files = [os.path.join(script_dir, file_pattern)]

            for file_path in matching_files:
                if ground.eval.type == "python":
                    result = subprocess.run(
                        [sys.executable, file_path],
                        cwd=os.path.abspath(workspace),
                        capture_output=True,
                        text=True,
                    )
                    if "error" in result.stderr or result.returncode != 0:
                        print(result.stderr)
                        assert False, result.stderr
                    files_contents.append(f"Output: {result.stdout}\n")
                else:
                    with open(file_path, "r") as f:
                        files_contents.append(f.read())
        else:
            if ground.eval.type == "pytest":
                result = subprocess.run(
                    [sys.executable, "-m", "pytest"],
                    cwd=os.path.abspath(workspace),
                    capture_output=True,
                    text=True,
                )
                if "error" in result.stderr or result.returncode != 0:
                    print(result.stderr)
                    assert False, result.stderr
                files_contents.append(f"Output: {result.stdout}\n")

        return files_contents

    @staticmethod
    def scoring(content: str, ground: Ground) -> float:
        print(f"{Fore.BLUE}Scoring content:{Style.RESET_ALL}", content)
        if ground.should_contain:
            for should_contain_word in ground.should_contain:
                if not getattr(ground, "case_sensitive", True):
                    should_contain_word = should_contain_word.lower()
                    content = content.lower()
                print_content = (
                    f"{Fore.BLUE}Word that should exist{Style.RESET_ALL}"
                    f" - {should_contain_word}:"
                )
                if should_contain_word not in content:
                    print(print_content, "False")
                    return 0.0
                else:
                    print(print_content, "True")

        if ground.should_not_contain:
            for should_not_contain_word in ground.should_not_contain:
                if not getattr(ground, "case_sensitive", True):
                    should_not_contain_word = should_not_contain_word.lower()
                    content = content.lower()
                print_content = (
                    f"{Fore.BLUE}Word that should not exist{Style.RESET_ALL}"
                    f" - {should_not_contain_word}:"
                )
                if should_not_contain_word in content:
                    print(print_content, "False")
                    return 0.0
                else:
                    print(print_content, "True")

        return 1.0

    @classmethod
    def llm_eval(cls, content: str, ground: Ground) -> float:
        openai.api_key = os.getenv("OPENAI_API_KEY")
        if os.getenv("IS_MOCK"):
            return 1.0

        # the validation for this is done in the Eval BaseModel
        scoring = SCORING_MAP[ground.eval.scoring]  # type: ignore
        prompt = PROMPT_MAP[ground.eval.template].format(  # type: ignore
            task=cls.data.task, scoring=scoring, answer=ground.answer, response=content
        )

        if ground.eval.examples:
            prompt += FEW_SHOT_EXAMPLES.format(examples=ground.eval.examples)

        prompt += END_PROMPT

        answer = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": prompt},
            ],
        )

        return float(answer["choices"][0]["message"]["content"])  # type: ignore

    @classmethod
    def get_scores(cls, workspace: Path) -> dict[str, Any]:
        scores = []
        scores_dict: Any = {}
        percentage = None
        answers = {}
        try:
            if cls.data.task == "" and os.getenv("IS_MOCK"):
                scores = [1.0]
                answers = {"mock": "This is a mock answer"}
            elif isinstance(cls.data.ground, Ground):
                files_contents = cls.get_artifacts_out(workspace, cls.data.ground)
                answers = {"answer": files_contents}
                for file_content in files_contents:
                    score = cls.scoring(file_content, cls.data.ground)
                    print(f"{Fore.GREEN}Your score is:{Style.RESET_ALL}", score)
                    scores.append(score)

                if cls.data.ground.eval.type == "llm":
                    llm_eval = cls.llm_eval("\n".join(files_contents), cls.data.ground)
                    if cls.data.ground.eval.scoring == "percentage":
                        scores.append(math.ceil(llm_eval / 100))
                    elif cls.data.ground.eval.scoring == "scale":
                        scores.append(math.ceil(llm_eval / 10))
                    print(f"{Fore.GREEN}Your score is:{Style.RESET_ALL}", llm_eval)

                    scores.append(llm_eval)
        except Exception as e:
            print("Error getting scores", e)

        scores_data = {
            "values": scores,
            "scores_obj": scores_dict,
            "percentage": percentage,
            "answers": answers,
        }

        cls.scores[cls.__name__] = scores_data

        return scores_data

    def get_dummy_scores(self, test_name: str, scores: dict[str, Any]) -> int | None:
        return 1  # remove this once this works
        if 1 in scores.get("scores_obj", {}).get(test_name, []):
            return 1

        return None

    @classmethod
    def skip_optional_categories(cls, config: AgentBenchmarkConfig) -> None:
        challenge_categories = set(c.value for c in cls.data.category)
        challenge_optional_categories = challenge_categories & set(OPTIONAL_CATEGORIES)
        if challenge_optional_categories and not (
            config.categories
            and set(challenge_optional_categories).issubset(set(config.categories))
        ):
            pytest.skip(
                f"Category {', '.join(challenge_optional_categories)} is optional, "
                "and not explicitly selected in the benchmark config."
            )