windows docs make workspace if not there (#25)
* windows docs make workspace if not there
* small fixes
parent ef5c4f8a11, commit b8c7c05dd5
README.md
@@ -13,54 +13,53 @@ documented.

Clone the repo with:

`git clone git@github.com:Significant-Gravitas/Auto-GPT-Benchmarks.git`

`cd Auto-GPT-Benchmarks`

Create a venv with:

`python3.9 -m venv venv`

On MacOS/Linux, activate it with

`source venv/bin/activate`

On Windows:

`venv/scripts/activate`
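
Depending on the Windows shell, the activation script path differs: Command Prompt typically uses `venv\Scripts\activate.bat` and PowerShell uses `venv\Scripts\Activate.ps1`. These exact paths are assumptions based on the standard `venv` layout, not taken from this repo's docs.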

Install the requirements with:

`pip install -r requirements.txt`

If you haven't already, clone the AutoGPT repo somewhere else on your machine.
DO NOT CLONE IT INTO A SUBDIR OF THIS REPO.

`cd somewhere/else`

`git clone git@github.com:Significant-Gravitas/Auto-GPT.git`

`cd Auto-GPT`

`git checkout stable  # Or the branch you want to benchmark`

You will need to update the .env file in the Auto-GPT repo to have your OpenAI API key. This should be copied from the .env.template as described in the Auto-GPT README.md. The file in question is at:

`Auto-GPT/.env`
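
As a rough sketch of that step (assuming `OPENAI_API_KEY` is the variable name your Auto-GPT version's `.env.template` expects; confirm against the template you copy):

```
cd Auto-GPT
cp .env.template .env
# then open .env and set your key, for example:
# OPENAI_API_KEY=<your key>
```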

Finally, we assume you have a docker container built from the Dockerfile in the Auto-GPT repo.

Build this with:

`cd Auto-GPT`

`docker build -t autogpt .`
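
As an optional check (not part of the original README), `docker images autogpt` should list the newly built image.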

If you want to run with redis as your memory system, you can stand up a redis image in the AutoGPT repo with

`docker compose up`

Then you will need to adjust some variables in your .env file to use the redis memory backend.
See the AutoGPT docs on how to do that.
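
The AutoGPT docs are the authoritative reference for those variables; as a hedged sketch of the kind of `.env` entries involved (names and values here are assumptions, verify against your Auto-GPT version):

```
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
```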

Run your first eval with:

`cd Auto-GPT-Benchmarks`

`python3 auto_gpt_benchmarking test-match --auto-gpt-path /your/path/to/Auto-GPT`

You should only need to use the --auto-gpt-path flag the first time you run it. Afterwards, that will be saved in

`auto_gpt_benchmarking/completion_fns/auto_gpt_completion_fn.yaml`
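
So later runs can omit the flag and rely on the saved path; something like the following (a sketch based on the sentence above, not an additional documented invocation):

```
cd Auto-GPT-Benchmarks
python3 auto_gpt_benchmarking test-match
```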

To see a full list of available flags you can use, run `python3 -m auto_gpt_benchmarking --help`.
Some of these are inherited from the OpenAI evals framework and do not work quite as intended, as they are not applicable.
@@ -108,6 +108,9 @@ class AutoGPTAgent:

    def __init__(self, prompt, auto_gpt_path: str):
        self.auto_gpt_path = Path(auto_gpt_path)
        self.auto_workspace = self.auto_gpt_path / "auto_gpt_workspace"
        # if the workspace doesn't exist, create it
        if not self.auto_workspace.exists():
            self.auto_workspace.mkdir()
        self.prompt_file = self.auto_workspace / "prompt.txt"
        self.output_file = self.auto_workspace / "output.txt"
        self.file_logger = self.auto_workspace / "file_logger.txt"
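
For context, the pathlib pattern this hunk adds can be exercised on its own; a minimal standalone sketch (the temporary directory below is a placeholder, not from the repo):

```python
from pathlib import Path
import tempfile

# stand-in for a real Auto-GPT checkout (placeholder, not from the repo)
auto_gpt_path = Path(tempfile.mkdtemp())
auto_workspace = auto_gpt_path / "auto_gpt_workspace"

# create the workspace only when it is missing, which is what the commit adds
if not auto_workspace.exists():
    auto_workspace.mkdir()

# an equivalent idempotent spelling, safe to call repeatedly
auto_workspace.mkdir(exist_ok=True)
print(auto_workspace.exists())  # True
```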