Mirror of https://github.com/aljazceru/gpt-engineer.git (synced 2025-12-17 20:55:09 +01:00)
Merge branch 'main' into add-makefile
.github/CONTRIBUTING.md (vendored, new file, 112 lines)
@@ -0,0 +1,112 @@
# Contributing to GPT Engineer

By participating in this project, you agree to abide by the [code of conduct](CODE_OF_CONDUCT.md).

## Getting Started

To get started with contributing, please follow these steps (a condensed example of the whole flow is shown after the list):

1. Fork the repository and clone it to your local machine.
2. Install any necessary dependencies.
3. Create a new branch for your changes: `git checkout -b my-branch-name`.
4. Make your desired changes or additions.
5. Run the tests to ensure everything is working as expected.
6. Commit your changes: `git commit -m "Descriptive commit message"`.
7. Push to the branch: `git push origin my-branch-name`.
8. Submit a pull request to the `main` branch of the original repository.
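
A condensed sketch of that flow is shown below; the fork URL and the `pytest` command are placeholders, so substitute whatever applies to your setup:

```bash
# Clone your fork and install the dependencies
git clone https://github.com/<your-username>/gpt-engineer.git
cd gpt-engineer
pip install -r requirements.txt

# Work on a dedicated branch
git checkout -b my-branch-name
# ... make your changes ...
pytest                      # run the test suite (placeholder; use the project's test runner)

# Commit and push, then open a pull request against `main` on GitHub
git commit -am "Descriptive commit message"
git push origin my-branch-name
```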

## Code Style

Please make sure to follow the established code style guidelines for this project. Consistent code style helps maintain readability and makes it easier for others to contribute to the project.

To enforce this we use [`pre-commit`](https://pre-commit.com/) to run [`black`](https://black.readthedocs.io/en/stable/index.html) and [`ruff`](https://beta.ruff.rs/docs/) on every commit.

`pre-commit` is part of our `requirements.txt` file, so you should already have it installed. If you don't, you can install the library via pip with:

```bash
$ pip install -r requirements.txt
```

And then install the `pre-commit` hooks with:

```bash
$ pre-commit install

pre-commit installed at .git/hooks/pre-commit
```

If you are not familiar with the concept of [git hooks](https://git-scm.com/docs/githooks) and/or [`pre-commit`](https://pre-commit.com/), please read the documentation to understand how they work.

As an introduction to the actual workflow, here is an example of the process you will encounter when you make a commit:

Let's add a file we have modified with some errors and see how the pre-commit hook that runs `black` fails.
`black` is set to automatically fix the issues it finds:

```bash
$ git add chat_to_files.py
$ git commit -m "commit message"
black....................................................................Failed
- hook id: black
- files were modified by this hook

reformatted chat_to_files.py

All done! ✨ 🍰 ✨
1 file reformatted.
```

You can see that `chat_to_files.py` is both staged and not staged for commit. This is because `black` has formatted it and now it is different from the version you have in your working directory. To fix this you can simply run `git add chat_to_files.py` again and now you can commit your changes.

```bash
$ git status
On branch pre-commit-setup
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
	modified:   chat_to_files.py

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   chat_to_files.py
```

Now let's add the file again to include the latest changes and see how `ruff` fails.

```bash
$ git add chat_to_files.py
$ git commit -m "commit message"
black....................................................................Passed
ruff.....................................................................Failed
- hook id: ruff
- exit code: 1
- files were modified by this hook

Found 2 errors (2 fixed, 0 remaining).
```

Same as before, you can see that `chat_to_files.py` is both staged and not staged for commit. This is because `ruff` has formatted it and now it is different from the version you have in your working directory. To fix this you can simply run `git add chat_to_files.py` again and now you can commit your changes.

```bash
$ git add chat_to_files.py
$ git commit -m "commit message"
black....................................................................Passed
ruff.....................................................................Passed
fix end of files.........................................................Passed
[pre-commit-setup f00c0ce] testing
1 file changed, 1 insertion(+), 1 deletion(-)
```

Now your file has been committed and you can push your changes.

At the beginning this might seem like a tedious process (having to add the file again after `black` and `ruff` have modified it), but it is actually very useful. It allows you to see what changes `black` and `ruff` have made to your files and make sure that they are correct before you commit them.
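
If you want to see what the hooks will do before you even attempt a commit, you can also run them manually over the whole repository at any time (an optional extra step):

```bash
$ pre-commit run --all-files
```
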
## Issue Tracker

If you encounter any bugs or issues, or have feature requests, please [create a new issue](https://github.com/AntonOsika/gpt-engineer/issues/new) on the project's GitHub repository. Provide a clear and descriptive title along with relevant details to help us address the problem or understand your request.

## Licensing

By contributing to GPT Engineer, you agree that your contributions will be licensed under the [LICENSE](../LICENSE) file of the project.

Thank you for your interest in contributing to GPT Engineer! We appreciate your support and look forward to your contributions.

.gitignore (vendored, 4 lines changed)
@@ -12,6 +12,7 @@ build/
*.egg

# Virtual environments
.env
venv/
ENV/

@@ -31,3 +32,6 @@ Thumbs.db

# this application's specific files
archive

# any log file
*log.txt

.pre-commit-config.yaml (new file, 27 lines)
@@ -0,0 +1,27 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
fail_fast: true

repos:
  - repo: https://github.com/psf/black
    rev: 23.3.0
    hooks:
      - id: black
        args: [--config, pyproject.toml]
        types: [python]
        exclude: .+/migrations((/.+)|(\.py))

  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: "v0.0.272"
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-toml
      - id: check-yaml
      - id: detect-private-key
      - id: end-of-file-fixer
      - id: trailing-whitespace

BENCHMARKS.md (new file, 16 lines)
@@ -0,0 +1,16 @@
# Benchmarks

```bash
$ python scripts/benchmark.py
```

**file_explorer** works with unit tests config

**image_resizer** almost works with unit tests config
- failure mode: undefined import

**timer_app** almost works with unit tests config
- failure mode: undefined import/conflicting names

**todo_list** doesn't really work with unit tests config
- failure mode: placeholder text
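
The "unit tests config" above refers to the `unit_tests` entry in the `STEPS` dict that this commit adds to `steps.py`, selected via the `steps_config` argument of `gpt_engineer.main`. A hedged sketch of running a single benchmark with it, assuming typer's default flag naming (`--steps-config`):

```bash
$ python -m gpt_engineer.main benchmark/file_explorer --steps-config unit_tests
```
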
README.md (20 lines changed)
@@ -1,8 +1,7 @@
# GPT Engineer
**Specify what you want it to build, the AI asks for clarification, and then builds it.**

GPT Engineer is made to be easy to adapt, extend, and make your agent learn how you want your code to look. It generates an entire codebase based on a prompt.
GPT Engineer is made to be easy to adapt, extend, and make your agent learn how you want your code to look. It generates an entire codebase based on a prompt.

## Project philosophy
- Simple to get value

@@ -14,18 +13,20 @@ GPT Engineer is made to be easy to adapt, extend, and make your agent learn how
- Simplicity, all computation is "resumable" and persisted to the filesystem



## Usage

**Setup**:
- `git clone https://github.com/AntonOsika/gpt-engineer.git`
- `cd gpt-engineer`
- `pip install .`
- `pip install -r requirements.txt`
- `export OPENAI_API_KEY=[your api key]` with a key that has GPT4 access

**Run**:
- Create a new empty folder with a `main_prompt` file (or copy the example folder `cp -r example/ my-new-project`)
- Fill in the `main_prompt` in your new folder
- Run `gpt-engineer my-new-project`
- Run `python -m gpt_engineer.main my-new-project`

**Results**:
- Check the generated files in my-new-project/workspace

@@ -43,11 +44,10 @@ Editing the identity, and evolving the main_prompt, is currently how you make th
Each step in steps.py will have its communication history with GPT4 stored in the logs folder, and can be rerun with scripts/rerun_edited_message_logs.py.
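
As a sketch of what that looks like on disk: each step writes a JSON-encoded list of chat messages to a file named after the step function (for example `clarify`). Assuming the logs folder is created inside your project folder (adjust the path to wherever it ends up on your machine), you can pretty-print one log with:

```bash
$ python -m json.tool my-new-project/logs/clarify
```
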

## Demo
## Contributing
If you want to contribute, please check out the [projects](https://github.com/AntonOsika/gpt-engineer/projects?query=is%3Aopen) or [issues tab](https://github.com/AntonOsika/gpt-engineer/issues) in the GitHub repo and please read the [contributing document](.github/CONTRIBUTING.md) on how to contribute.


## High resolution example

https://github.com/AntonOsika/gpt-engineer/assets/4467025/6e362e45-4a94-4b0d-973d-393a31d92d9b

benchmark/currency_converter/main_prompt (new file, 1 line)
@@ -0,0 +1 @@
Build a currency converter app using an API for exchange rates. Use HTML, CSS, and JavaScript for the frontend and Node.js for the backend. Allow users to convert between different currencies.

benchmark/file_explorer/main_prompt (new file, 2 lines)
@@ -0,0 +1,2 @@
Create a basic file explorer CLI tool in Python that allows users to navigate through directories, view file contents, and perform basic file operations (copy, move, delete).

benchmark/file_organizer/main_prompt (new file, 1 line)
@@ -0,0 +1 @@
Create a file organizer CLI tool in Python that sorts files in a directory based on their file types (e.g., images, documents, audio) and moves them into corresponding folders.

benchmark/image_resizer/main_prompt (new file, 2 lines)
@@ -0,0 +1,2 @@
Create a CLI tool in Python that allows users to resize images by specifying the desired width and height. Use the Pillow library for image manipulation.

benchmark/markdown_editor/main_prompt (new file, 1 line)
@@ -0,0 +1 @@
Build a simple markdown editor using HTML, CSS, and JavaScript. Allow users to input markdown text and display the formatted output in real-time.

benchmark/password_generator/main_prompt (new file, 1 line)
@@ -0,0 +1 @@
Create a password generator CLI tool in Python that generates strong, random passwords based on user-specified criteria, such as length and character types (letters, numbers, symbols).

benchmark/pomodoro_timer/main_prompt (new file, 1 line)
@@ -0,0 +1 @@
Develop a Pomodoro timer app using HTML, CSS, and JavaScript. Allow users to set work and break intervals and receive notifications when it's time to switch.

benchmark/timer_app/main_prompt (new file, 2 lines)
@@ -0,0 +1,2 @@
Create a simple timer app using HTML, CSS, and JavaScript that allows users to set a countdown timer and receive an alert when the time is up.

benchmark/todo_list/main_prompt (new file, 2 lines)
@@ -0,0 +1,2 @@
Create a simple to-do list app using HTML, CSS, and JavaScript. Store tasks in local storage and allow users to add, edit, and delete tasks.

benchmark/url_shortener/main_prompt (new file, 1 line)
@@ -0,0 +1 @@
Create a URL shortener app using HTML, CSS, JavaScript, and a backend language like Python or Node.js. Allow users to input a long URL and generate a shortened version that redirects to the original URL. Store the shortened URLs in a database.

benchmark/weather_app/main_prompt (new file, 1 line)
@@ -0,0 +1 @@
Develop a weather app using Python and a weather API. Display current weather conditions for a given location, including temperature, humidity, and weather description.

@@ -1,7 +1,9 @@
import re
from typing import List, Tuple

from gpt_engineer.db import DB


def parse_chat(chat): # -> List[Tuple[str, str]]:
def parse_chat(chat) -> List[Tuple[str, str]]:
    # Get all ``` blocks
    regex = r"```(.*?)```"

@@ -19,7 +21,7 @@ def parse_chat(chat): # -> List[Tuple[str, str]]:
    return files


def to_files(chat, workspace):
def to_files(chat: str, workspace: DB):
    workspace["all_output.txt"] = chat

    files = parse_chat(chat)

@@ -15,6 +15,8 @@ class DB:
        return f.read()

    def __setitem__(self, key, val):
        Path(self.path / key).absolute().parent.mkdir(parents=True, exist_ok=True)

        with open(self.path / key, 'w', encoding='utf-8') as f:
            f.write(val)

@@ -21,6 +21,7 @@ def chat(
    ),
    model: str = "gpt-4",
    temperature: float = 0.1,
    steps_config: str = "default",
):
    app_dir = pathlib.Path(os.path.curdir)
    input_path = project_path

@@ -40,14 +41,9 @@ def chat(
        identity=DB(app_dir / "identity"),
    )

    for step in STEPS:
    for step in STEPS[steps_config]:
        messages = step(ai, dbs)
        dbs.logs[step.__name__] = json.dumps(messages)


def execute():
    app()


if __name__ == "__main__":
    execute()
    app()

@@ -1,48 +1,50 @@
import json
import subprocess

from gpt_engineer.ai import AI
from gpt_engineer.chat_to_files import to_files
from gpt_engineer.db import DBs
from gpt_engineer.chat_to_files import parse_chat


def setup_sys_prompt(dbs):
    return dbs.identity['setup'] + '\nUseful to know:\n' + dbs.identity['philosophy']
    return dbs.identity["setup"] + "\nUseful to know:\n" + dbs.identity["philosophy"]


def run(ai: AI, dbs: DBs):
    '''Run the AI on the main prompt and save the results'''
    """Run the AI on the main prompt and save the results"""
    messages = ai.start(
        setup_sys_prompt(dbs),
        dbs.input['main_prompt'],
        dbs.input["main_prompt"],
    )
    to_files(messages[-1]['content'], dbs.workspace)
    to_files(messages[-1]["content"], dbs.workspace)
    return messages


def clarify(ai: AI, dbs: DBs):
    '''
    """
    Ask the user if they want to clarify anything and save the results to the workspace
    '''
    messages = [ai.fsystem(dbs.identity['qa'])]
    user = dbs.input['main_prompt']
    """
    messages = [ai.fsystem(dbs.identity["qa"])]
    user = dbs.input["main_prompt"]
    while True:
        messages = ai.next(messages, user)

        if messages[-1]['content'].strip().lower() == 'no':
        if messages[-1]['content'].strip().lower().startswith("no"):
            break

        print()
        user = input('(answer in text, or "q" to move on)\n')
        print()

        if not user or user == 'q':
        if not user or user == "q":
            break

        user += (
            '\n\n'
            'Is anything else unclear? If yes, only answer in the form:\n'
            '{remaining unclear areas} remaining questions.\n'
            '{Next question}\n'
            "\n\n"
            "Is anything else unclear? If yes, only answer in the form:\n"
            "{remaining unclear areas} remaining questions.\n"
            "{Next question}\n"
            'If everything is sufficiently clear, only answer "no".'
        )

@@ -50,22 +52,86 @@ def clarify(ai: AI, dbs: DBs):
    return messages


def run_clarified(ai: AI, dbs: DBs):
    # get the messages from previous step
    messages = json.loads(dbs.logs[clarify.__name__])
def gen_spec(ai: AI, dbs: DBs):
    '''
    Generate a spec from the main prompt + clarifications and save the results to the workspace
    '''
    messages = [ai.fsystem(setup_sys_prompt(dbs)), ai.fsystem(f"Main prompt: {dbs.input['main_prompt']}")]

    messages = ai.next(messages, dbs.identity['spec'])
    messages = ai.next(messages, dbs.identity['respec'])
    messages = ai.next(messages, dbs.identity['spec'])

    dbs.memory['specification'] = messages[-1]['content']

    return messages

def pre_unit_tests(ai: AI, dbs: DBs):
    '''
    Generate unit tests based on the specification, that should work.
    '''
    messages = [ai.fsystem(setup_sys_prompt(dbs)), ai.fuser(f"Instructions: {dbs.input['main_prompt']}"), ai.fuser(f"Specification:\n\n{dbs.memory['specification']}")]

    messages = ai.next(messages, dbs.identity['unit_tests'])

    dbs.memory['unit_tests'] = messages[-1]['content']
    to_files(dbs.memory['unit_tests'], dbs.workspace)

    messages = [
        ai.fsystem(setup_sys_prompt(dbs)),
    ] + messages[1:]
    messages = ai.next(messages, dbs.identity['use_qa'])
    to_files(messages[-1]['content'], dbs.workspace)
    return messages


STEPS = [clarify, run_clarified]
def run_clarified(ai: AI, dbs: DBs):
    # get the messages from previous step

    messages = [
        ai.fsystem(setup_sys_prompt(dbs)),
        ai.fuser(f"Instructions: {dbs.input['main_prompt']}"),
        ai.fuser(f"Specification:\n\n{dbs.memory['specification']}"),
        ai.fuser(f"Unit tests:\n\n{dbs.memory['unit_tests']}"),
    ]
    messages = ai.next(messages, dbs.identity["use_qa"])
    to_files(messages[-1]["content"], dbs.workspace)
    return messages


def execute_workspace(ai: AI, dbs: DBs):
    messages = ai.start(
        system=(
            f"You will get information about a codebase that is currently on disk in the folder {dbs.workspace.path}.\n"
            "From this you will answer with one code block that includes all the necessary macos terminal commands to "
            "a) install dependencies "
            "b) run the necessary parts of the codebase to try it.\n"
            "Do not explain the code, just give the commands.\n"
        ),
        user="Information about the codebase:\n\n" + dbs.workspace["all_output.txt"],
    )

    [[lang, command]] = parse_chat(messages[-1]['content'])
    assert lang in ['', 'bash']

    print('Do you want to execute this code?')
    print(command)
    print()
    print('If yes, press enter. If no, type "no"')
    print()
    if input() == 'no':
        print('Ok, not executing the code.')
        return messages
    print('Executing the code...')
    print()
    subprocess.run(command, shell=True)
    return messages


# Different configs of what steps to run
STEPS = {
    "default": [run, execute_workspace],
    'unit_tests': [gen_spec, pre_unit_tests, run_clarified],
    'clarify': [clarify, run_clarified],
}

# Future steps that can be added:
# improve_files,
# self_reflect_and_improve_files,
# add_tests
# run_tests_and_fix_files,
# improve_based_on_in_file_feedback_comments
# improve_based_on_in_file_feedback_comments

identity/respec (new file, 8 lines)
@@ -0,0 +1,8 @@
You are a pragmatic principal engineer at Google. You have been asked to review a specification for a new feature.

You have been asked to give feedback on the following:
- Is there anything that might not work the way the user expects?
- Is there anything missing for the program to fully work?
- Is there anything that can be simplified without decreasing quality?

You are asked to make educated assumptions for each unclear item. For each of these, communicate which assumptions you'll make when implementing the feature.

identity/spec (new file, 8 lines)
@@ -0,0 +1,8 @@
You are a GPT-Engineer, an AI developed to write programs. You have been asked to make a specification for a program.

Please generate a specification based on the given input. First, be super explicit about what the program should do, which features it should have and give details about anything that might be unclear. **Don't leave anything unclear or undefined.**

Second, lay out the names of the core classes, functions, methods that will be necessary, as well as a quick comment on their purpose.
Then write out which non-standard dependencies you'll have to use.

This specification will be used later as the basis for your implementation.

identity/unit_tests (new file, 3 lines)
@@ -0,0 +1,3 @@
You are a GPT-Engineer, an AI developed to use Test Driven Development to write tests according to a specification.

Please generate tests based on the above specification. The tests should be as simple as possible, but still cover all the functionality.

@@ -3,11 +3,11 @@ Please now remember the steps:
First lay out the names of the core classes, functions, methods that will be necessary, as well as a quick comment on their purpose.
Then output the content of each file, with syntax below.
(You will start with the "entrypoint" file, then go to the ones that are imported by that file, and so on.)
Make sure that files contain all imports, types etc. The code should be fully functional. Make sure that code in different files are compatible with each other.
Make sure that files contain all imports, types, variables etc. The code should be fully functional. If anything is unclear, just make assumptions. Make sure that code in different files are compatible with each other.
Before you finish, double check that all parts of the architecture is present in the files.

File syntax:

```file.py/ts/html
```filename.py/ts/html
[ADD YOUR CODE HERE]
```

@@ -15,7 +15,7 @@ dependencies = [
]

[project.scripts]
gpt-engineer = 'gpt_engineer.main:execute'
gpt-engineer = 'gpt_engineer.main:app'

[tool.setuptools]
packages = ["gpt_engineer"]

@@ -1,4 +1,5 @@
black==23.3.0
openai==0.27.8
pre-commit==3.3.3
ruff==0.0.272
typer==0.9.0

scripts/benchmark.py (new file, 46 lines)
@@ -0,0 +1,46 @@
# list all folders in benchmark folder
# for each folder, run the benchmark

import os
import sys
import subprocess
import time
import datetime
import shutil
import argparse
import json
from pathlib import Path
from typer import run
from itertools import islice


def main(
    n_benchmarks: int | None = None,
):
    processes = []
    files = []
    path = Path('benchmark')

    if n_benchmarks:
        benchmarks = islice(path.iterdir(), n_benchmarks)
    else:
        benchmarks = path.iterdir()  # no limit given: run all benchmark folders

    for folder in benchmarks:
        if os.path.isdir(folder):
            print('Running benchmark for {}'.format(folder))

            log_path = folder / 'log.txt'
            log_file = open(log_path, 'w')
            processes.append(subprocess.Popen(['python', '-m', 'gpt_engineer.main', folder], stdout=log_file, stderr=log_file, bufsize=0))
            files.append(log_file)

            print('You can stream the log file by running: tail -f {}'.format(log_path))

    for process, file in zip(processes, files):
        process.wait()
        print('process finished with code', process.returncode)
        file.close()


if __name__ == '__main__':
    run(main)

scripts/clean_benchmarks.py (new file, 38 lines)
@@ -0,0 +1,38 @@
# list all folders in benchmark folder
# for each folder, remove everything except the main_prompt

import os
import sys
import subprocess
import time
import datetime
import shutil
import argparse
import json
from pathlib import Path
from typer import run
from itertools import islice


def main(
):
    benchmarks = Path('benchmark')

    for benchmark in benchmarks.iterdir():
        if benchmark.is_dir():
            print(f'Cleaning {benchmark}')
            for path in benchmark.iterdir():
                if path.name == 'main_prompt':
                    continue

                # Get filename of Path object
                if path.is_dir():
                    # delete the entire directory
                    shutil.rmtree(path)
                else:
                    # delete the file
                    os.remove(path)


if __name__ == '__main__':
    run(main)