Further changes:
* remove `init` param from `get_memory()`, replace usages with `memory.clear()`
* make token length calculation optional in `MemoryItem.dump()`
* Extract OpenAI API calls and retry at the lowest level (a hedged retry sketch follows this list)
* Forgot a test
* Fix local Docker config so pre-commit hooks can run
* fix: merge artifact
* Fix linting
* Update memory.vector.utils
* feat: make sure resp exists
* fix: raise error message if created
* feat: rename file
* fix: partial test fix
* fix: update comments
* fix: linting
* fix: remove broken test
* fix: require a model to exist
* fix: BaseError issue
* fix: runtime error
* Fix mock response in test_make_agent
* add 429 as errors to retry
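A minimal sketch, assuming a decorator-based design, of the low-level retry wrapper these items describe; the name `retry_api`, the backoff values, the retryable status set, and the `http_status` attribute are illustrative assumptions rather than the project's actual code. HTTP 429 (rate limiting) is included per the item above.

```python
import functools
import logging
import time

# Status codes treated as transient; 429 (rate limit) is included per the commit above.
RETRY_STATUS_CODES = {429, 502, 503}


def retry_api(max_attempts: int = 10, base_delay: float = 1.0):
    """Retry an API call on transient HTTP errors with exponential backoff (sketch)."""

    def decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as err:
                    status = getattr(err, "http_status", None)
                    if status not in RETRY_STATUS_CODES or attempt == max_attempts:
                        raise
                    delay = base_delay * 2 ** (attempt - 1)
                    logging.warning("API error %s, retrying in %.1fs", status, delay)
                    time.sleep(delay)

        return wrapped

    return decorator
```

Wrapping the single lowest-level call site, rather than every caller, keeps the retry policy in one place, which is presumably what "retry at the lowest level" refers to.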
---------
Co-authored-by: k-boikov <64261260+k-boikov@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Luke K (pr-0f3t) <2609441+lc0rp@users.noreply.github.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
* Correct and clean up JSON handling
* Use ast for message history too
* Lint
* Add comments explaining why we use literal_eval
* Add descriptions to llm_response_format schema
* Parse responses in code blocks
* Be more careful when parsing in code blocks (a hedged parsing sketch follows this list)
* Lint
* Implement Batch Running Summarization to avoid max token error (#4652); a hedged batching sketch also follows this list
* Fix extra space in prompt
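A minimal sketch of the parsing behaviour the JSON-handling items above describe: strip a fenced code block if the model wrapped its answer in one, try strict JSON first, then fall back to `ast.literal_eval` for Python-style quoting. The function name and regex are illustrative assumptions.

```python
import ast
import json
import re


def parse_llm_json(response: str) -> dict:
    """Best-effort parse of a JSON-ish LLM response (sketch)."""
    # Pull the payload out of a fenced code block if the model added one.
    fenced = re.search(r"`{3}(?:json)?\s*(.*?)`{3}", response, re.DOTALL)
    if fenced:
        response = fenced.group(1)
    response = response.strip()
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        # literal_eval only evaluates literals, so it is safe on untrusted input
        # and tolerates single-quoted keys that json.loads rejects.
        return ast.literal_eval(response)
```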
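The batch-summarization item above (#4652) suggests folding new events into the running summary in token-bounded batches instead of in one oversized call; here is a hedged sketch of that idea, with the function signature, helper callables, and batch limit all assumed rather than taken from the PR.

```python
from typing import Callable, List


def update_running_summary(
    summary: str,
    new_events: List[str],
    summarize: Callable[[str, List[str]], str],
    count_tokens: Callable[[str], int],
    max_batch_tokens: int = 3000,
) -> str:
    """Fold new events into the summary in batches that stay under a token budget."""
    batch: List[str] = []
    batch_tokens = 0
    for event in new_events:
        event_tokens = count_tokens(event)
        if batch and batch_tokens + event_tokens > max_batch_tokens:
            summary = summarize(summary, batch)  # absorb this batch into the summary
            batch, batch_tokens = [], 0
        batch.append(event)
        batch_tokens += event_tokens
    if batch:
        summary = summarize(summary, batch)
    return summary
```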
---------
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
* Add config as attribute to Agent, rename old config to ai_config
* Code review: Pass ai_config
---------
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
* Extract retry logic, unify embedding functions (a hedged embedding helper sketch follows this list)
* Add some docstrings
* Remove embedding creation from API manager
* Add test suite for retry handler
* Make api manager fixture
* Fix typing
* Streamline tests
* Collect all embedding code into a single module
* actually, llm_utils is a better place
* Oh, and remove the module now that we don't use it
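A hedged sketch of what collecting the embedding code into one helper (in `llm_utils`, per the last item) might look like; the model name, the pre-1.0 `openai.Embedding.create` call style, and the function name are assumptions.

```python
# llm_utils.py (sketch): a single shared place to create embeddings.
import openai

EMBEDDING_MODEL = "text-embedding-ada-002"  # assumed default model


def get_embedding(text: str, model: str = EMBEDDING_MODEL) -> list[float]:
    """Return the embedding vector for `text`."""
    text = text.replace("\n", " ")  # newlines can degrade embedding quality
    response = openai.Embedding.create(input=[text], model=model)
    return response["data"][0]["embedding"]
```

With one helper like this, the retry wrapper and cost accounting only have to hook a single call site instead of every memory backend.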
---------
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
* Implemented running cost counter for chat completions
This data is known to the AI as additional system context, and is printed out to the user
* Added comments to api_manager.py
* Added user-defined API budget.
The user is now prompted if they want to give the AI a budget for API calls. If they enter nothing, there is no monetary limit, but if they define a budget then the AI will be told to shut down gracefully once it has come within 1 cent of its limit, and to shut down immediately once it has exceeded its limit. If a budget is defined, Auto-GPT is always aware of how much it was given and how much remains to be spent.
* Chat completion calls are now done through api_manager; the total running cost is printed (a hedged sketch of this bookkeeping follows this list)
* Implemented api budget setting and tracking
The user can now configure a maximum API budget, and the AI is aware of it and of its remaining budget. The AI is instructed to shut down when it exceeds the budget.
* Update autogpt/api_manager.py
Change "per token" to "per 1000 tokens" in a comment on the api cost
Co-authored-by: Rob Luke <code@robertluke.net>
* Fixed lint errors
* Include embedding costs
* Add embedding completion cost
* lint
* Added 'requires_api_key' decorator to test_commands.py, switched to a valid chat completions model
* Refactor API manager, add debug mode, and add tests
- Extract model costs to to avoid duplication
- Add debug mode parameter to ApiManager class
- Move debug mode configuration to
- Log AI response and budget messages in debug mode
- Implement 'test_api_manager.py'
* Fixed test_setup failing: an extra user input is needed for the API budget
* Linting
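A hedged sketch of the cost and budget bookkeeping described above; the price table, class shape, and method names are illustrative and do not reproduce the project's actual figures.

```python
COSTS = {
    # USD per 1000 tokens (assumed placeholder values, not real prices)
    "gpt-3.5-turbo": {"prompt": 0.0015, "completion": 0.002},
}


class ApiManager:
    def __init__(self, budget: float | None = None):
        self.total_cost = 0.0
        self.total_budget = budget  # None means no monetary limit

    def update_cost(self, prompt_tokens: int, completion_tokens: int, model: str) -> None:
        """Accumulate the cost of one chat completion call."""
        price = COSTS[model]
        self.total_cost += (
            prompt_tokens / 1000 * price["prompt"]
            + completion_tokens / 1000 * price["completion"]
        )

    def remaining_budget(self) -> float | None:
        """Budget left to spend, or None if the user set no limit."""
        if self.total_budget is None:
            return None
        return self.total_budget - self.total_cost
```

In tests (`test_api_manager.py` above), creating a fresh instance per test keeps cost totals from leaking between cases.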
---------
Co-authored-by: Rob Luke <code@robertluke.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/weaviate/schema/crud_schema.py", line 708, in _create_class_with_primitives
raise UnexpectedStatusCodeException("Create class", response)
weaviate.exceptions.UnexpectedStatusCodeException: Create class! Unexpected status code: 422, with response body: {'error': [{'message': "'Auto-gpt' is not a valid class name"}]}.
GPT4:
The error message indicates that "Auto-gpt" is not a valid class name. In Weaviate, class names must start with a capital letter and can contain only alphanumeric characters.
Took the advice and code and applied it to weaviate.py to great result; the program now runs with no errors!
Unable to reproduce easily. Might be related to switching memory between Local and Weaviate? Either way, the proposed solution works on macOS using Docker + Weaviate.
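A hedged sketch of the kind of normalization that resolves the error above: derive a valid Weaviate class name (capital first letter, alphanumeric only, per the constraint GPT4 describes) from the configured memory index. The function name is illustrative; the actual change in weaviate.py may differ.

```python
import re


def format_weaviate_class_name(index: str) -> str:
    """Turn an arbitrary memory index name (e.g. 'Auto-gpt') into a valid
    Weaviate class name: alphanumeric characters only, starting with a capital."""
    cleaned = re.sub(r"[^0-9a-zA-Z]", "", index)
    if not cleaned:
        raise ValueError(f"Cannot derive a Weaviate class name from {index!r}")
    return cleaned[0].upper() + cleaned[1:]


# format_weaviate_class_name("Auto-gpt") -> "Autogpt", which Weaviate accepts.
```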