Auto-GPT Benchmarks
Built to benchmark the performance of agents regardless of how they are implemented.
Objectively measure how well your agent performs in categories like code, retrieval, memory, and safety.
Save time and money while doing it, thanks to smart dependencies between challenges. The best part? It's all automated.
Scores:
Overall ranking:
Detailed results:
Click here to see the results and the raw data!
More agents coming soon!