Auto-GPT Benchmarks

Built to benchmark the performance of agents, regardless of how they are implemented.

Objectively measure how well your agent performs in categories like code, retrieval, memory, and safety.

Save time and money with smart challenge dependencies: challenges whose prerequisites failed are skipped automatically. The best part? It's all automated.
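A minimal sketch of the dependency-skipping idea, assuming a challenge knows its category and its prerequisite challenges (all names and the data model below are illustrative, not the benchmark's actual API):

```python
from dataclasses import dataclass, field
from typing import Callable


# Illustrative challenge model; the real benchmark's data model differs.
@dataclass
class Challenge:
    name: str
    category: str  # e.g. "code", "retrieval", "memory", "safety"
    dependencies: list[str] = field(default_factory=list)


def run_all(
    challenges: list[Challenge],
    run: Callable[[Challenge], bool],  # runs the agent against one challenge
) -> dict[str, bool]:
    """Run challenges in order, skipping any whose prerequisites failed.

    Skipping avoids spending time and API credits on challenges the
    agent cannot pass yet.
    """
    results: dict[str, bool] = {}
    for challenge in challenges:
        if all(results.get(dep, False) for dep in challenge.dependencies):
            results[challenge.name] = run(challenge)
        else:
            results[challenge.name] = False  # skipped: a prerequisite failed
    return results


# "read_file" depends on "write_file"; if writing fails, reading is skipped.
challenges = [
    Challenge("write_file", "code"),
    Challenge("read_file", "code", dependencies=["write_file"]),
]
print(run_all(challenges, run=lambda c: False))
# -> {'write_file': False, 'read_file': False}
```

Ordering challenges so that prerequisites run first means a single pass over the list is enough to decide what to skip.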

Scores:

[Screenshot: scores, 2023-07-25]

Ranking overall:

Detailed results:

[Screenshot: detailed results, 2023-07-25]

Click here to see the results and the raw data!!

More agents coming soon!