
Auto-GPT Benchmarks

Built to benchmark the performance of agents regardless of how they work.

Objectively measure how well your agent performs in categories like code, retrieval, memory, and safety.

Save time and money while doing it through smart dependencies between challenges. The best part? It's all automated.

Scores:


Ranking overall:

Detailed results:


Click here to see the results and the raw data!

More agents coming soon!