14 Commits

UmerHA
19a4c10b6e Langchain integration (#512)
* Added LangChain integration

* Fixed issue created by the git check-in process

* Added ':' to characters to remove from end of file path

* Tested initial migration to LangChain, removed comments and logging used for debugging

* Converted camelCase to snake_case

* Turns out we need the exception handling

* Testing Hugging Face Integrations via LangChain

* Added LangChain loadable models

* Renames "qa" prompt to "clarify", since it's used in the "clarify" step, asking for clarification

* Fixed loading model yaml files

* Fixed streaming

* Added modeldir CLI option

* Fixed typing

* Fixed interaction with token logging

* Fix spelling + dependency issues + typing

* Fix spelling + tests

* Removed unneeded logging which caused test to fail

* Cleaned up code

* Incorporated feedback

- deleted unnecessary functions & logger.info
- used LangChain ChatLLM instead of LLM to naturally communicate with gpt-4
- deleted loading model from yaml file, as LangChain doesn't offer this for ChatModels

* Update gpt_engineer/steps.py

Co-authored-by: Anton Osika <anton.osika@gmail.com>

* Incorporated feedback

- Fixed failing test
- Removed parsing complexity by using # type: ignore
- Replaced every occurrence of ai.last_message_content with its content

* Fixed test

* Update gpt_engineer/steps.py

---------

Co-authored-by: H <holden.robbins@gmail.com>
Co-authored-by: Anton Osika <anton.osika@gmail.com>
2023-07-23 23:30:09 +02:00
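
The commit above migrates gpt-engineer from direct OpenAI calls to LangChain chat models with streaming output. A minimal sketch of that pattern, assuming the 2023-era langchain API (ChatOpenAI, message schema, stdout streaming callback); the function name `ask` and its parameters are illustrative, not the project's actual code.

```python
# Minimal sketch of driving a chat model through LangChain with streaming,
# as the commit above describes. Names here are illustrative assumptions,
# not gpt-engineer's actual code.
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage


def ask(system: str, user: str, model: str = "gpt-4", temperature: float = 0.1) -> str:
    chat = ChatOpenAI(
        model=model,
        temperature=temperature,
        streaming=True,  # print tokens as they arrive
        callbacks=[StreamingStdOutCallbackHandler()],
    )
    messages = [SystemMessage(content=system), HumanMessage(content=user)]
    return chat(messages).content  # the call returns an AIMessage; .content is the text
```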
OriAshkenazi
a55265ddb4 changed fallback model to 3.5-turbo-16k 2023-07-05 13:18:12 +03:00
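
A fallback model choice like the one above is typically a capability check: ask the API whether the key can use gpt-4 and drop to gpt-3.5-turbo-16k if not. A hedged sketch assuming the pre-1.0 openai Python SDK; the project's actual helper may differ.

```python
# Sketch of a model fallback check, assuming the pre-1.0 openai Python SDK.
import openai


def fallback_model(model: str = "gpt-4") -> str:
    try:
        openai.Model.retrieve(model)  # raises if the key has no access to the model
        return model
    except openai.error.InvalidRequestError:
        return "gpt-3.5-turbo-16k"
```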
UmerHA
8fd315d264 Implemented logging token usage (solves #322) (#438)
* Implemented logging token usage

Token usage is now tracked and logged into memory/logs/token_usage

* Step names are now inferred from the function name

* Incorporated Anton's feedback

- Made LogUsage a dataclass
- For token logging, the step name is now inferred via the inspect module

* Formatted (black/ruff)

* Update gpt_engineer/ai.py

Co-authored-by: Anton Osika <anton.osika@gmail.com>

* formatting

---------

Co-authored-by: Anton Osika <anton.osika@gmail.com>
2023-07-03 21:28:34 +02:00
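
The commit describes tracking token usage per step in a dataclass, with the step name inferred through the inspect module and the result written to memory/logs/token_usage. A rough sketch of that idea; the names TokenUsage, TokenLog, and their methods are invented for illustration.

```python
# Rough sketch of per-step token accounting as described in the commit above.
# All class and method names are illustrative assumptions.
import inspect
from dataclasses import dataclass, field
from typing import List


@dataclass
class TokenUsage:
    step_name: str
    prompt_tokens: int
    completion_tokens: int

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens


@dataclass
class TokenLog:
    entries: List[TokenUsage] = field(default_factory=list)

    def update(self, prompt_tokens: int, completion_tokens: int) -> None:
        # Infer the step name from the calling function, as the commit describes.
        step_name = inspect.stack()[1].function
        self.entries.append(TokenUsage(step_name, prompt_tokens, completion_tokens))

    def format_log(self) -> str:
        # Something like this could be written to memory/logs/token_usage.
        return "\n".join(
            f"{e.step_name}: prompt={e.prompt_tokens} "
            f"completion={e.completion_tokens} total={e.total_tokens}"
            for e in self.entries
        )
```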
Nitaym
24fef650ba Python 3.8 support + Cleaning up some issues in extracting files out of the prompts (#355)
* Changed typing to fit python 3.8

* Made sure GPT won't add comments about the files, as these confuse the chat-to-files regex
2023-07-02 17:05:09 +02:00
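
Two things are at play in this commit: typing syntax that still runs on Python 3.8 (typing.List/Tuple rather than the built-in generics), and a regex that pulls file paths and fenced code blocks out of the chat transcript, which stray commentary about the files can break. A hedged illustration of both; the pattern and function below are not the project's actual chat_to_files code.

```python
# Illustrative only: 3.8-compatible typing plus a simple regex that extracts
# (path, code) pairs from fenced blocks preceded by a file name. The exact
# pattern gpt-engineer uses may differ.
import re
from typing import List, Tuple  # works on Python 3.8, unlike list[...] / tuple[...]

# `{3} matches a code fence of three backticks; DOTALL lets .*? span lines.
FILE_BLOCK = re.compile(r"(\S+)\n`{3}[^\n]*\n(.*?)`{3}", re.DOTALL)


def parse_chat(chat: str) -> List[Tuple[str, str]]:
    files = []
    for match in FILE_BLOCK.finditer(chat):
        path = match.group(1).strip(":`*")  # drop punctuation such as a trailing ':'
        files.append((path, match.group(2)))
    return files
```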
Anton Osika
20ea0c126a Simplify archiving process (#469)
* simplify args

* Fix tests

* Black format
2023-07-02 16:43:13 +02:00
Anton Osika
2a39cc4cd7 Bugfixes, store output logs 2023-06-25 10:56:56 +02:00
Mendie Uwemedimo
f2525405b0 enabled compatibility with Python versions <3.9 (#254) 2023-06-20 18:04:44 +02:00
LopeKinz
ac958df8d7 'Refactored by Sourcery' (#188) 2023-06-19 13:26:55 +02:00
Anton Osika
8180f0346c Fix the errors with parsing 2023-06-18 22:34:31 +02:00
Carl Thomé
b0ea7b0b41 kwargs -> temperature 2023-06-18 16:42:53 +02:00
Carl Thomé
38401b2f35 Add --verbose option for logging request/response 2023-06-18 16:21:16 +02:00
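
A --verbose flag like this usually just raises the logging level so request/response payloads logged at DEBUG become visible. A minimal sketch assuming a Typer-based CLI; the real option wiring in gpt-engineer may differ.

```python
# Minimal sketch of a --verbose flag that turns on DEBUG logging.
# Assumes a Typer CLI; the actual entry point may differ.
import logging

import typer

app = typer.Typer()


@app.command()
def main(verbose: bool = typer.Option(False, "--verbose", "-v")) -> None:
    logging.basicConfig(level=logging.DEBUG if verbose else logging.INFO)
    logging.debug("Requests and responses will be logged at DEBUG level.")


if __name__ == "__main__":
    app()
```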
Patilla Code
084ce1759b make pre-commit pass in the whole codebase (#149) 2023-06-18 14:46:03 +02:00
Anton Osika
c6b6ca1448 Fix not generating bash commands 2023-06-18 08:36:34 +02:00
Jack Eadie
6f8e976a42 Make gpt-engineer pip installable/runnable (#60) 2023-06-16 13:25:29 +02:00
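
Making a package pip-installable and runnable usually comes down to declaring a console-script entry point. A hypothetical setup.py-style sketch; the real project may declare this in pyproject.toml, and the module path used here is an assumption.

```python
# Hypothetical packaging sketch; the actual metadata may live in pyproject.toml,
# and gpt_engineer.main:app is an assumed module path.
from setuptools import find_packages, setup

setup(
    name="gpt-engineer",
    packages=find_packages(),
    install_requires=["openai", "typer"],
    entry_points={
        "console_scripts": [
            # `gpt-engineer` on the command line invokes gpt_engineer.main:app
            "gpt-engineer=gpt_engineer.main:app",
        ]
    },
)
```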