* Removed `autogpt.llm.base` and `autogpt.llm.utils`
* `core` does things async, so `Agent.think()` and `Agent.execute()` are now also async
* Renamed `dump()` and `parse()` on `JSONSchema` to `to_dict()` and `from_dict()`
* Removed `MessageHistory`
* Also, some typo and linting fixes here and there
* Make .schema model names less pedantic
* Rename LanguageModel* objects to ChatModel* or CompletionModel* where appropriate
* Add `JSONSchema` utility class in `core.utils`
* Use `JSONSchema` instead of untyped dicts for `Ability` and `CompletionModelFunction` parameter specification
* Add token counting methods to `ModelProvider` interface and implementations
This commit integrates the `LeaderboardService` into `SkillTreeViewModel` to enable benchmark report submissions to the leaderboard. A `BenchmarkRun` object is created from the evaluation response and submitted using the `submitReport` method from `LeaderboardService`.
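For illustration, a minimal sketch of what that wiring could look like inside the view model; apart from `BenchmarkRun.fromJson` and `LeaderboardService.submitReport` (described in the commits below), the method and field names here are assumptions, not the actual frontend code:

```dart
// Illustrative sketch only: everything except BenchmarkRun and
// LeaderboardService.submitReport is an assumed name.
class SkillTreeViewModel {
  SkillTreeViewModel(this._leaderboardService);

  final LeaderboardService _leaderboardService;

  /// Builds a BenchmarkRun from the evaluation response and submits it.
  Future<void> submitBenchmarkReport(Map<String, dynamic> evaluationJson) async {
    final benchmarkRun = BenchmarkRun.fromJson(evaluationJson);
    await _leaderboardService.submitReport(benchmarkRun);
  }
}
```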
This commit extends the `RestApiUtility` class to include support for a new leaderboard base URL. A new `ApiType` enum value `ApiType.leaderboard` has been added, and the `_getEffectiveBaseUrl` method has been updated to handle this new type. The leaderboard base URL is "https://leaderboard.vercel.app/".
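Roughly, the change amounts to something like the sketch below; `ApiType.agent` stands in for the pre-existing enum values, and the constructor and field names are assumptions:

```dart
// Sketch: only ApiType.leaderboard and its base URL come from this commit;
// the other enum value and the class shape are assumptions.
enum ApiType { agent, leaderboard }

class RestApiUtility {
  RestApiUtility(this._agentBaseUrl);

  final String _agentBaseUrl;

  String _getEffectiveBaseUrl(ApiType apiType) {
    switch (apiType) {
      case ApiType.leaderboard:
        return 'https://leaderboard.vercel.app/';
      case ApiType.agent:
      default:
        return _agentBaseUrl;
    }
  }
}
```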
This commit adds a new `LeaderboardService` class featuring a `submitReport` method. This method allows for the submission of `BenchmarkRun` objects to the leaderboard via a POST request to the `/api/reports` endpoint. The new service uses the `ApiType.leaderboard` enum value.
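A standalone sketch of the request shape using `package:http`, assuming the `BenchmarkRun` model introduced in the commit below; the real service is expected to route through `RestApiUtility` with `ApiType.leaderboard`, and the error handling here is an assumption:

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

// Sketch: shows the POST to /api/reports; the actual service goes through
// RestApiUtility (ApiType.leaderboard) rather than hard-coding the URL.
class LeaderboardService {
  static const _leaderboardBaseUrl = 'https://leaderboard.vercel.app/';

  /// Submits a BenchmarkRun report to the leaderboard.
  Future<void> submitReport(BenchmarkRun benchmarkRun) async {
    final response = await http.post(
      Uri.parse('${_leaderboardBaseUrl}api/reports'),
      headers: {'Content-Type': 'application/json'},
      body: jsonEncode(benchmarkRun.toJson()),
    );
    if (response.statusCode != 200) {
      throw Exception('Report submission failed: ${response.statusCode}');
    }
  }
}
```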
This commit introduces the `BenchmarkRun` class, which models a complete benchmark run. It bundles all related data and sub-models into a single, centralized object.
The `BenchmarkRun` class includes the following sub-models:
- `RepositoryInfo`: Information about the repository and team.
- `RunDetails`: Specific details like the run identifier, command, and timings.
- `TaskInfo`: Information about the task being benchmarked.
- `Metrics`: Performance metrics for the benchmark run.
- `Config`: Configuration settings for the benchmark run.
A `reachedCutoff` field is also included to indicate whether the run's cutoff was reached during the benchmark.
Methods for serializing and deserializing the object to and from JSON are also provided.
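A condensed sketch of the model's shape; the sub-model classes are the ones described in the related commits, and the snake_case JSON key names used here are assumptions about the report format:

```dart
// Condensed sketch: RepositoryInfo, RunDetails, TaskInfo, Metrics and Config
// are described in their own commits; JSON key names are assumptions.
class BenchmarkRun {
  BenchmarkRun({
    required this.repositoryInfo,
    required this.runDetails,
    required this.taskInfo,
    required this.metrics,
    required this.reachedCutoff,
    required this.config,
  });

  final RepositoryInfo repositoryInfo;
  final RunDetails runDetails;
  final TaskInfo taskInfo;
  final Metrics metrics;
  final bool reachedCutoff;
  final Config config;

  factory BenchmarkRun.fromJson(Map<String, dynamic> json) => BenchmarkRun(
        repositoryInfo: RepositoryInfo.fromJson(json['repository_info']),
        runDetails: RunDetails.fromJson(json['run_details']),
        taskInfo: TaskInfo.fromJson(json['task_info']),
        metrics: Metrics.fromJson(json['metrics']),
        reachedCutoff: json['reached_cutoff'] as bool,
        config: Config.fromJson(json['config']),
      );

  Map<String, dynamic> toJson() => {
        'repository_info': repositoryInfo.toJson(),
        'run_details': runDetails.toJson(),
        'task_info': taskInfo.toJson(),
        'metrics': metrics.toJson(),
        'reached_cutoff': reachedCutoff,
        'config': config.toJson(),
      };
}
```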
This commit introduces the `RepositoryInfo` class, designed to encapsulate details about the repository and team associated with a benchmark run.
The class includes the following fields:
- `repoUrl`: The URL of the repository where the benchmark code resides.
- `teamName`: The name of the team responsible for the benchmark.
- `benchmarkGitCommitSha`: The Git commit SHA for the benchmark code.
- `agentGitCommitSha`: The Git commit SHA for the agent code.
The class supports JSON serialization and deserialization, making it easy to use with Flutter's JSON handling mechanisms.
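A minimal sketch of the class; the snake_case JSON key names are assumptions about the report format:

```dart
class RepositoryInfo {
  RepositoryInfo({
    required this.repoUrl,
    required this.teamName,
    required this.benchmarkGitCommitSha,
    required this.agentGitCommitSha,
  });

  final String repoUrl;
  final String teamName;
  final String benchmarkGitCommitSha;
  final String agentGitCommitSha;

  // JSON key names below are assumptions; adjust to the leaderboard's
  // actual report schema if it differs.
  factory RepositoryInfo.fromJson(Map<String, dynamic> json) => RepositoryInfo(
        repoUrl: json['repo_url'] as String,
        teamName: json['team_name'] as String,
        benchmarkGitCommitSha: json['benchmark_git_commit_sha'] as String,
        agentGitCommitSha: json['agent_git_commit_sha'] as String,
      );

  Map<String, dynamic> toJson() => {
        'repo_url': repoUrl,
        'team_name': teamName,
        'benchmark_git_commit_sha': benchmarkGitCommitSha,
        'agent_git_commit_sha': agentGitCommitSha,
      };
}
```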