* Added LangChain integration
* Fixed an issue introduced by the git check-in process
* Added ':' to characters to remove from end of file path
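The path cleanup above can be sketched as follows. This is a minimal illustration, not the project's actual code; the helper name and the full set of stripped characters are assumptions.

```python
def clean_file_path(path: str, trailing_chars: str = ":,.") -> str:
    # Strip unwanted trailing characters from a file path, e.g. the
    # ':' left over from "gpt_engineer/steps.py:" style references.
    return path.rstrip(trailing_chars)
```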
* Tested initial migration to LangChain, removed comments and logging used for debugging
* Converted camelCase to snake_case
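A conversion like this is typically done with a small regex helper; a sketch of one plausible approach (the function name is illustrative, not from the codebase):

```python
import re

def to_snake_case(name: str) -> str:
    # Insert an underscore before each capital letter (except at the
    # start of the string), then lowercase the whole identifier.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
```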
* Turns out we need the exception handling
* Testing Hugging Face Integrations via LangChain
* Added LangChain loadable models
* Renamed the "qa" prompt to "clarify", since it is used in the "clarify" step to ask for clarification
* Fixed loading model yaml files
* Fixed streaming
* Added modeldir cli option
* Fixed typing
* Fixed interaction with token logging
* Fix spelling + dependency issues + typing
* Fix spelling + tests
* Removed unneeded logging which caused test to fail
* Cleaned up code
* Incorporated feedback
- deleted unnecessary functions & logger.info
- used LangChain ChatLLM instead of LLM to naturally communicate with gpt-4
- deleted loading model from yaml file, as LC doesn't offer this for ChatModels
* Update gpt_engineer/steps.py
Co-authored-by: Anton Osika <anton.osika@gmail.com>
* Incorporated feedback
- Fixed failing test
- Removed parsing complexity by using # type: ignore
- Replaced every occurrence of ai.last_message_content with its content
* Fixed test
* Update gpt_engineer/steps.py
---------
Co-authored-by: H <holden.robbins@gmail.com>
Co-authored-by: Anton Osika <anton.osika@gmail.com>
* Implemented logging token usage
Token usage is now tracked and logged into memory/logs/token_usage
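A minimal sketch of what such a usage record might look like; the class and field names here are hypothetical, not the project's actual definitions.

```python
from dataclasses import dataclass

@dataclass
class TokenUsageLog:
    # Hypothetical record for one step's token consumption; the real
    # implementation appends lines like this to memory/logs/token_usage.
    step_name: str
    prompt_tokens: int
    completion_tokens: int

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

    def format_line(self) -> str:
        # One human-readable log line per step.
        return (
            f"{self.step_name}: prompt={self.prompt_tokens} "
            f"completion={self.completion_tokens} total={self.total_tokens}"
        )
```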
* Step names are now inferred from function name
* Incorporated Anton's feedback
- Made LogUsage a dataclass
- For token logging, step name is now inferred via inspect module
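Inferring the step name from the caller can be done with the standard-library inspect module along these lines; this is a sketch of the general technique, and the helper and step names are illustrative.

```python
import inspect

def current_step_name() -> str:
    # Look one frame up the call stack to get the name of the calling
    # function, so token logging can label entries without an explicit
    # step-name argument.
    return inspect.stack()[1].function

def simple_gen():
    # A stand-in for a real step function; logging inside it would be
    # tagged "simple_gen" automatically.
    return current_step_name()
```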
* Formatted (black/ruff)
* Update gpt_engineer/ai.py
Co-authored-by: Anton Osika <anton.osika@gmail.com>
* Formatting
---------
Co-authored-by: Anton Osika <anton.osika@gmail.com>