Mirror of https://github.com/aljazceru/Auto-GPT.git, synced 2025-12-17 14:04:27 +01:00
* refactor(benchmark): Deduplicate configuration loading logic
- Move the configuration loading logic to a separate `load_agbenchmark_config` function in `agbenchmark/config.py` module.
- Replace the duplicate loading logic in `conftest.py`, `generate_test.py`, `ReportManager.py`, `reports.py`, and `__main__.py` with calls to `load_agbenchmark_config` function.
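In outline, the shared loader might look like this; the config file name, location, and return type are assumptions for illustration, not taken from the change itself:

```python
# agbenchmark/config.py -- hypothetical sketch of the shared loader
import json
from pathlib import Path


def load_agbenchmark_config() -> dict:
    """Load the benchmark config from the working directory (assumed location)."""
    config_file = Path.cwd() / "agbenchmark_config" / "config.json"
    return json.loads(config_file.read_text())
```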
* fix(benchmark): Fix type errors, linting errors, and clean up CLI validation in __main__.py
- Fixed type errors and linting errors in `__main__.py`
- Improved the readability of CLI argument validation by introducing a separate function for it
* refactor(benchmark): Lint and typefix app.py
- Rearranged and cleaned up import statements
- Fixed type errors caused by improper use of `psutil` objects
- Simplified a number of `os.path` usages by converting them to `pathlib` (see the sketch after this list)
- Use `Task` and `TaskRequestBody` classes from `agent_protocol_client` instead of `.schema`
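To illustrate the kind of `os.path`-to-`pathlib` conversion the third bullet refers to (the paths here are made up):

```python
import os.path
from pathlib import Path

# before: nested os.path calls building a string path
log_dir = os.path.join(os.path.dirname(__file__), "logs")

# after: the equivalent pathlib expression, yielding a Path object
log_dir = Path(__file__).parent / "logs"
```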
* refactor(benchmark): Replace `.agent_protocol_client` with `agent-protocol-client`, clean up schema.py
- Remove `agbenchmark.agent_protocol_client` (an offline copy of `agent-protocol-client`).
- Add `agent-protocol-client` as a dependency and change imports to `agent_protocol_client`.
- Fix type annotation on `agent_api_interface.py::upload_artifacts` (`ApiClient` -> `AgentApi`).
- Remove all unused types from schema.py (= most of them).
* refactor(benchmark): Use pathlib in agent_interface.py and agent_api_interface.py
* refactor(benchmark): Improve typing, response validation, and readability in app.py
- Simplified response generation by leveraging type checking and conversion by FastAPI.
- Introduced use of `HTTPException` for error responses (see the sketch after this list).
- Improved naming, formatting, and typing in `app.py::create_evaluation`.
- Updated the docstring on `app.py::create_agent_task`.
- Fixed return type annotations of `create_single_test` and `create_challenge` in generate_test.py.
- Added default values to optional attributes on models in report_types_v2.py.
- Removed unused imports in `generate_test.py`
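A small sketch of the response pattern described above, assuming a recent FastAPI version: the framework validates and serializes the annotated return type, and `HTTPException` handles error responses. The route, model, and store are illustrative, not the actual app.py code:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Task(BaseModel):
    task_id: str
    input: str


tasks: dict[str, Task] = {}  # placeholder in-memory store


@app.get("/agent/tasks/{task_id}")
async def get_agent_task(task_id: str) -> Task:
    # FastAPI validates and serializes the returned Task automatically
    if task_id not in tasks:
        raise HTTPException(status_code=404, detail=f"Task {task_id} not found")
    return tasks[task_id]
```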
* refactor(benchmark): Clean up logging and print statements
- Introduced use of the `logging` library for unified logging and better readability.
- Converted most print statements to use `logger.debug`, `logger.warning`, and `logger.error`.
- Improved descriptiveness of log statements.
- Removed unnecessary print statements.
- Added log statements to broad `except` blocks that previously failed silently or gave little information.
- Added `--debug` flag, which sets the log level to `DEBUG` and enables a more comprehensive log format.
- Added `.utils.logging` module with a `configure_logging` function to easily configure the logging library (sketched after this list).
- Converted raw escape sequences in `.utils.challenge` to use `colorama`.
- Renamed `generate_test.py::generate_tests` to `load_challenges`.
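A minimal sketch of what such a `configure_logging` helper might look like; the exact format strings are assumptions:

```python
import logging


def configure_logging(debug: bool = False) -> None:
    """Configure the logging library; debug mode enables a more verbose format."""
    logging.basicConfig(
        level=logging.DEBUG if debug else logging.INFO,
        format=(
            "%(asctime)s %(levelname)s %(name)s:%(lineno)d %(message)s"
            if debug
            else "%(levelname)s %(message)s"
        ),
    )
```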
* refactor(benchmark): Remove unused server.py and agent_interface.py::run_agent
- Remove unused server.py file
- Remove unused run_agent function from agent_interface.py
* refactor(benchmark): Clean up conftest.py
- Fix and add type annotations
- Rewrite docstrings
- Disable or remove unused code
- Fix definition of arguments and their types in `pytest_addoption`
* refactor(benchmark): Clean up generate_test.py file
- Refactored the `create_single_test` function for clarity and readability
- Removed unused variables
- Made creation of `Challenge` subclasses more straightforward
- Made bare `except` more specific
- Renamed `Challenge.setup_challenge` method to `run_challenge`
- Updated type hints and annotations
- Made minor code/readability improvements in `load_challenges`
- Added a helper function `_add_challenge_to_module` for attaching a Challenge class to the current module
* fix(benchmark): Fix and add type annotations in execute_sub_process.py
* refactor(benchmark): Simplify const determination in agent_interface.py
- Simplify the logic that determines the value of `HELICONE_GRAPHQL_LOGS`
* fix(benchmark): Register category markers to prevent warnings
- Use the `pytest_configure` hook to register the known challenge categories as markers. Otherwise, Pytest will raise "unknown marker" warnings at runtime.
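Registering markers through the `pytest_configure` hook looks roughly like this; the category names below are placeholders:

```python
# conftest.py
import pytest


def pytest_configure(config: pytest.Config) -> None:
    # register each known challenge category so pytest doesn't warn about it
    for category in ("code", "retrieval", "memory", "safety"):  # placeholder names
        config.addinivalue_line(
            "markers", f"{category}: challenge belongs to the '{category}' category"
        )
```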
* refactor(benchmark/challenges): Fix indentation in 4_revenue_retrieval_2/data.json
* refactor(benchmark): Update agent_api_interface.py
- Add type annotations to `copy_agent_artifacts_into_temp_folder` function
- Add note about broken endpoint in the `agent_protocol_client` library
- Remove unused variable in `run_api_agent` function
- Improve readability and resolve linting error
* feat(benchmark): Improve and centralize pathfinding
- Search the path hierarchy for an applicable `agbenchmark_config`, rather than assuming it's in the current folder (see the sketch after this list).
- Create `agbenchmark.utils.path_manager` with `AGBenchmarkPathManager` and export a `PATH_MANAGER` constant.
- Replace path constants defined in __main__.py with usages of `PATH_MANAGER`.
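The search described in the first bullet can be pictured as walking up from the working directory until an `agbenchmark_config` folder is found; a simplified sketch (the helper name is invented):

```python
from pathlib import Path


def find_agbenchmark_config(start: Path | None = None) -> Path:
    """Walk up the directory hierarchy looking for an agbenchmark_config folder."""
    current = start or Path.cwd()
    for folder in (current, *current.parents):
        candidate = folder / "agbenchmark_config"
        if candidate.is_dir():
            return candidate
    raise FileNotFoundError("No 'agbenchmark_config' found in the path hierarchy")
```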
* feat(benchmark/cli): Clean up and improve CLI
- Updated commands, options, and their descriptions to be more intuitive and consistent
- Moved slow imports into the entrypoints that use them to speed up application startup
- Fixed type hints to match output types of Click options
- Hid deprecated `agbenchmark start` command
- Refactored code to improve readability and maintainability
- Moved main entrypoint into `run` subcommand
- Fixed `version` and `serve` subcommands
- Added the `click-default-group` package to allow using `run` implicitly, for backwards compatibility (see the sketch after this list)
- Renamed `--no_dep` to `--no-dep` for consistency
- Fixed string formatting issues in log statements
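With `click-default-group`, invoking `agbenchmark` without a subcommand falls through to `run`; in outline (options abbreviated):

```python
import click
from click_default_group import DefaultGroup


@click.group(cls=DefaultGroup, default="run", default_if_no_args=True)
def cli() -> None:
    pass


@cli.command()
@click.option("--no-dep", is_flag=True, help="Run without checking dependencies.")
def run(no_dep: bool) -> None:
    ...  # main benchmark entrypoint
```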
* refactor(benchmark/config): Move AgentBenchmarkConfig and related functions to config.py
- Move the `AgentBenchmarkConfig` class from `utils/data_types.py` to `config.py`.
- Extract the `calculate_info_test_path` function from `utils/data_types.py` and move it to `config.py` as a private helper function `_calculate_info_test_path`.
- Move `load_agent_benchmark_config()` to `AgentBenchmarkConfig.load()`.
- Change simple getter methods on `AgentBenchmarkConfig` to calculated properties (sketched after this list).
- Update all code references according to the changes mentioned above.
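The getter-to-property change is the usual Python refactor; a schematic example with illustrative attribute names:

```python
from pathlib import Path


class AgentBenchmarkConfig:
    def __init__(self, agbenchmark_config_dir: Path):
        self.agbenchmark_config_dir = agbenchmark_config_dir

    # before: def get_reports_path(self) -> Path: ...
    @property
    def reports_folder(self) -> Path:  # illustrative name
        return self.agbenchmark_config_dir / "reports"
```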
* refactor(benchmark): Fix ReportManager init parameter types and use pathlib
- Fix the type annotation of the `benchmark_start_time` parameter in `ReportManager.__init__`, which was mistyped as `str` instead of `datetime`.
- Change the type of the `filename` parameter in the `ReportManager.__init__` method from `str` to `Path`.
- Rename `self.filename` to `self.report_file` in `ReportManager`.
- Change the way the report file is created, opened and saved to use the `Path` object.
* refactor(benchmark): Improve typing surrounding ChallengeData and clean up its implementation
- Use `ChallengeData` objects instead of untyped `dict` in app.py, generate_test.py, reports.py.
- Remove unnecessary methods `serialize`, `get_data`, `get_json_from_path`, `deserialize` from `ChallengeData` class.
- Remove unused methods `challenge_from_datum` and `challenge_from_test_data` from the `ChallengeData` class.
- Update function signatures and annotations of `create_challenge` and `generate_single_test` functions in generate_test.py.
- Add types to function signatures of `generate_single_call_report` and `finalize_reports` in reports.py.
- Remove unnecessary `challenge_data` parameter (in generate_test.py) and fixture (in conftest.py).
* refactor(benchmark): Clean up generate_test.py, conftest.py and __main__.py
- Cleaned up generate_test.py and conftest.py
- Consolidated challenge creation logic in the `Challenge` class itself, most notably the new `Challenge.from_challenge_spec` method.
- Moved challenge selection logic from generate_test.py to the `pytest_collection_modifyitems` hook in conftest.py.
- Converted methods in the `Challenge` class to class methods where appropriate.
- Improved argument handling in the `run_benchmark` function in `__main__.py`.
* refactor(benchmark/config): Merge AGBenchmarkPathManager into AgentBenchmarkConfig and reduce fragmented/global state
- Merge the functionality of `AGBenchmarkPathManager` into `AgentBenchmarkConfig` to consolidate the configuration management.
- Remove the `.path_manager` module containing `AGBenchmarkPathManager`.
- Pass the `AgentBenchmarkConfig` and its attributes through function arguments to reduce global state and improve code clarity.
* feat(benchmark/serve): Configurable port for `serve` subcommand
- Added `--port` option to `serve` subcommand to allow for specifying the port to run the API on.
- If no `--port` option is provided, the port will default to the value specified in the `PORT` environment variable, or 8080 if not set.
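Click covers the described precedence (explicit flag, then `PORT` environment variable, then 8080) natively via `envvar` and `default`; a sketch:

```python
import click


@click.command()
@click.option(
    "--port",
    type=int,
    envvar="PORT",  # fall back to the PORT env var if the flag is absent
    default=8080,
    show_default=True,
    help="Port to run the API on.",
)
def serve(port: int) -> None:
    ...  # start the API on the given port
```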
* feat(benchmark/cli): Add `config` subcommand
- Added a new subcommand `config` to the AGBenchmark CLI, to display information about the present AGBenchmark config.
* fix(benchmark): Gracefully handle incompatible challenge spec files in app.py
- Added a check to skip deprecated challenges
- Added logging to allow debugging of the loading process
- Added handling of validation errors when parsing challenge spec files
- Added missing `spec_file` attribute to `ChallengeData`
* refactor(benchmark): Move `run_benchmark` entrypoint to main.py, use it in `/reports` endpoint
- Move `run_benchmark` and `validate_args` from __main__.py to main.py
- Replace agbenchmark subprocess in `app.py::run_single_test` with `run_benchmark`
- Move `get_unique_categories` from __main__.py to challenges/__init__.py
- Move `OPTIONAL_CATEGORIES` from __main__.py to challenge.py
- Reduce operations on updates.json (including `initialize_updates_file`) outside of API
* refactor(benchmark): Remove unused `/updates` endpoint and all related code
- Remove `updates_json_file` attribute from `AgentBenchmarkConfig`
- Remove `get_updates` and `_initialize_updates_file` in app.py
- Remove `append_updates_file` and `create_update_json` functions in agent_api_interface.py
- Remove call to `append_updates_file` in challenge.py
* refactor(benchmark/config): Clean up and update docstrings on `AgentBenchmarkConfig`
- Add and update docstrings
- Change base class from `BaseModel` to `BaseSettings`, allowing extras for backwards compatibility (sketched after this list)
- Make naming of path attributes on `AgentBenchmarkConfig` more consistent
- Remove unused `agent_home_directory` attribute
- Remove unused `workspace` attribute
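Assuming pydantic v1 (where `BaseSettings` lives in the core package), the base-class change with extras allowed looks roughly like this; the field is a placeholder:

```python
from pathlib import Path

from pydantic import BaseSettings


class AgentBenchmarkConfig(BaseSettings):
    agbenchmark_config_dir: Path  # placeholder field

    class Config:
        extra = "allow"  # tolerate unknown keys for backwards compatibility
```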
* fix(benchmark): Restore mechanism to select (optional) categories in agent benchmark config
* fix(benchmark): Update agent-protocol-client to v1.1.0
- Fixes issue with fetching task artifact listings
157 lines · 5.2 KiB · Python
# radio charts, logs, helper functions for tests, anything else relevant.
import json
import logging
import os
import re
from pathlib import Path
from typing import Any, Optional

from dotenv import load_dotenv

from agbenchmark.utils.data_types import DIFFICULTY_MAP, DifficultyLevel

load_dotenv()

AGENT_NAME = os.getenv("AGENT_NAME")
REPORT_LOCATION = os.getenv("REPORT_LOCATION", None)

logger = logging.getLogger(__name__)

def replace_backslash(value: Any) -> Any:
    if isinstance(value, str):
        # replace one or more backslashes with a forward slash
        return re.sub(r"\\+", "/", value)
    elif isinstance(value, list):
        return [replace_backslash(i) for i in value]
    elif isinstance(value, dict):
        return {k: replace_backslash(v) for k, v in value.items()}
    else:
        return value

def calculate_success_percentage(results: list[bool]) -> float:
    # Take the last 10 results, or all of them if there are fewer than 10
    last_results = results[-10:] if len(results) > 10 else results
    success_count = last_results.count(True)
    total_count = len(last_results)
    if total_count == 0:
        return 0
    success_percentage = (success_count / total_count) * 100  # as a percentage
    return round(success_percentage, 2)

def get_test_path(json_file: str | Path) -> str:
    if isinstance(json_file, str):
        json_file = Path(json_file)

    # Find the index of "benchmark" in the path parts
    try:
        agbenchmark_index = json_file.parts.index("benchmark")
    except ValueError:
        raise ValueError("Invalid challenge location.")

    # Create the path from "benchmark" onwards
    challenge_location = Path(*json_file.parts[agbenchmark_index:])

    formatted_location = replace_backslash(str(challenge_location))
    if isinstance(formatted_location, str):
        return formatted_location
    else:
        return str(challenge_location)

def get_highest_success_difficulty(
    data: dict, just_string: Optional[bool] = None
) -> str:
    highest_difficulty = None
    highest_difficulty_level = 0

    for test_name, test_data in data.items():
        try:
            if test_data.get("tests", None):
                highest_difficulty_str = test_data["metrics"]["highest_difficulty"]
                try:
                    highest_difficulty = DifficultyLevel[highest_difficulty_str]
                    highest_difficulty_level = DIFFICULTY_MAP[highest_difficulty]
                except KeyError:
                    logger.warning(
                        f"Unexpected difficulty level '{highest_difficulty_str}' "
                        f"in test '{test_name}'"
                    )
                    continue
            else:
                if test_data["metrics"]["success"]:
                    difficulty_str = test_data["metrics"]["difficulty"]

                    try:
                        difficulty_enum = DifficultyLevel[difficulty_str.lower()]
                        difficulty_level = DIFFICULTY_MAP[difficulty_enum]

                        if difficulty_level > highest_difficulty_level:
                            highest_difficulty = difficulty_enum
                            highest_difficulty_level = difficulty_level
                    except KeyError:
                        logger.warning(
                            f"Unexpected difficulty level '{difficulty_str}' "
                            f"in test '{test_name}'"
                        )
                        continue
        except Exception as e:
            logger.warning(
                "An unexpected error [2] occurred while analyzing report [1]. "
                "Please notify a maintainer.\n"
                f"Report data [1]: {data}\n"
                f"Error [2]: {e}"
            )
            logger.warning(
                "Make sure you selected the right test; no reports were generated."
            )
            break

    if highest_difficulty is not None:
        highest_difficulty_str = highest_difficulty.name  # convert enum to string
    else:
        highest_difficulty_str = ""

    if highest_difficulty_level and not just_string:
        return f"{highest_difficulty_str}: {highest_difficulty_level}"
    elif highest_difficulty_str:
        return highest_difficulty_str

    return "No successful tests"

# def get_git_commit_sha(directory: Path) -> Optional[str]:
#     try:
#         repo = git.Repo(directory)
#         remote_url = repo.remotes.origin.url
#         if remote_url.endswith(".git"):
#             remote_url = remote_url[:-4]
#         git_commit_sha = f"{remote_url}/tree/{repo.head.commit.hexsha}"
#
#         # logger.debug(f"GIT_COMMIT_SHA: {git_commit_sha}")
#         return git_commit_sha
#     except Exception:
#         # logger.error(f"{directory} is not a git repository!")
#         return None

def write_pretty_json(data: Any, json_file: str | Path) -> None:
    sorted_data = deep_sort(data)
    json_graph = json.dumps(sorted_data, indent=4)
    with open(json_file, "w") as f:
        f.write(json_graph)
        f.write("\n")  # keep a trailing newline at end of file

def deep_sort(obj: Any) -> Any:
    """
    Recursively sort the keys in a JSON object
    """
    if isinstance(obj, dict):
        return {k: deep_sort(v) for k, v in sorted(obj.items())}
    if isinstance(obj, list):
        return [deep_sort(elem) for elem in obj]
    return obj
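For reference, a short usage example of the two helpers above: `deep_sort` normalizes key order recursively (list order is preserved), and `write_pretty_json` serializes the sorted result with 4-space indentation. The data and file name are made up:

```python
import json

data = {"b": [3, 1], "a": {"d": 2, "c": 1}}
print(json.dumps(deep_sort(data)))
# {"a": {"c": 1, "d": 2}, "b": [3, 1]}

write_pretty_json(data, "example.json")  # same content, indented, trailing newline
```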