Reinier van der Leer
e8b6676b22
Restructure logs.py into a module; include log_cycle (#4921)
* Consolidate all logging stuff into one module
* Merge import statement for `logs` and `logs.log_cycle`
---------
Co-authored-by: James Collins <collijk@uw.edu>
2023-07-09 20:14:25 +02:00
Reinier van der Leer
1e1eff70bc
Rebase MessageHistory on ChatSequence (#4922)
* Rebase `MessageHistory` on `ChatSequence`
* Process feedback & make mypy happy
---------
Co-authored-by: James Collins <collijk@uw.edu>
2023-07-09 19:52:59 +02:00
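An illustrative sketch of what rebasing `MessageHistory` on `ChatSequence` could look like: the history class inherits the shared message-sequence behaviour instead of keeping its own list. Only the two class names come from the commit title; every field and method shown is an assumption.

```python
from dataclasses import dataclass, field


@dataclass
class Message:
    role: str      # "system" | "user" | "assistant"
    content: str


@dataclass
class ChatSequence:
    """An ordered sequence of chat messages (assumed shape)."""
    messages: list[Message] = field(default_factory=list)

    def append(self, message: Message) -> None:
        self.messages.append(message)


@dataclass
class MessageHistory(ChatSequence):
    """Agent message history reusing ChatSequence storage, plus a running summary."""
    summary: str = ""


history = MessageHistory()
history.append(Message(role="user", content="Determine the next command."))
```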
Reinier van der Leer
bde007e6f7
Use GPT-4 in Agent loop by default (#4899)
* Use GPT-4 as default smart LLM in Agent
* Rename (smart|fast)_llm_model to (smart|fast)_llm everywhere
* Fix test_config.py::test_initial_values
* Fix test_config.py::test_azure_config
* Fix Azure config backwards compatibility
2023-07-07 03:42:18 +02:00
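A minimal sketch of the renamed settings, assuming a dataclass-style config; the real Auto-GPT `Config` carries many more fields and env-var plumbing, so treat everything here beyond the `smart_llm`/`fast_llm` names as illustrative.

```python
from dataclasses import dataclass


@dataclass
class Config:
    # Formerly smart_llm_model / fast_llm_model; GPT-4 becomes the default smart LLM.
    smart_llm: str = "gpt-4"
    fast_llm: str = "gpt-3.5-turbo"


config = Config()
assert config.smart_llm == "gpt-4"
```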
Erik Peterson
857d26d101
Add OpenAI function call support (#4683)
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-06-22 04:52:44 +02:00
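For context, a sketch of the OpenAI function-calling interface this commit adopts, written against the pre-1.0 `openai` Python SDK in use at the time; the `get_weather` schema is a made-up example, not Auto-GPT's actual command schema.

```python
import json
import openai  # assumes OPENAI_API_KEY is set in the environment

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    functions=[
        {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    function_call="auto",
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the chosen function name and JSON-encoded arguments.
    name = message["function_call"]["name"]
    arguments = json.loads(message["function_call"]["arguments"])
```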
merwanehamadi
a7f805604c
Pass config everywhere in order to get rid of singleton (#4666)
Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
2023-06-18 19:05:41 -07:00
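A sketch, with made-up names, of the pattern change behind this commit: instead of modules importing a global `Config` singleton, one `Config` instance is constructed at startup and passed down explicitly, which also makes overrides in tests straightforward.

```python
from dataclasses import dataclass


@dataclass
class Config:
    workspace_path: str = "./auto_gpt_workspace"


# Before (singleton style): a module-level CFG = Config() that every
# function imports and reads implicitly.

# After: the dependency is explicit in the signature.
def execute_command(command_name: str, config: Config) -> str:
    return f"executing {command_name!r} in {config.workspace_path}"


config = Config()
print(execute_command("list_files", config))
```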
merwanehamadi
0b6fec4a28
Fix summarization happening in first cycle (#4719)
2023-06-16 18:17:47 -07:00
Erik Peterson
07d9b584f7
Correct and clean up JSON handling (#4655)
* Correct and clean up JSON handling
* Use ast for message history too
* Lint
* Add comments explaining why we use literal_eval
* Add descriptions to llm_response_format schema
* Parse responses in code blocks
* Be more careful when parsing in code blocks
* Lint
2023-06-13 09:54:50 -07:00
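A sketch of the parsing strategy the bullets above describe: strip a surrounding markdown code block, try strict JSON first, and fall back to `ast.literal_eval` for Python-literal output such as single-quoted dicts. The function name and regex are assumptions, not the repository's actual code.

```python
import ast
import json
import re

CODE_BLOCK_RE = re.compile(r"^```(?:json)?\s*(.*?)\s*```$", re.DOTALL)


def parse_llm_response(raw: str) -> dict:
    text = raw.strip()
    match = CODE_BLOCK_RE.match(text)
    if match:
        # Response was wrapped in a markdown code block; keep only its contents.
        text = match.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # literal_eval safely parses Python literals without executing code,
        # which rescues responses like {'command': {'name': 'browse_website'}}.
        return ast.literal_eval(text)
```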
Kinance
7bf39cbb72
Include the token length of the current summary (#4670)
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
2023-06-12 16:29:11 -07:00
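A sketch of the accounting fix, assuming `tiktoken`-based counting: the running summary's tokens count toward the context budget instead of being ignored. Names are illustrative.

```python
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")


def count_tokens(text: str) -> int:
    return len(encoding.encode(text))


def remaining_budget(token_limit: int, summary: str, messages: list[str]) -> int:
    # The current summary is part of the prompt, so its length must be counted too.
    used = count_tokens(summary) + sum(count_tokens(m) for m in messages)
    return token_limit - used
```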
Kinance
ff46c16805
Remove extra spaces in summarization prompt (#4660)
* Implement Batch Running Summarization to avoid max token error (#4652)
* Fix extra space in prompt
---------
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-06-12 02:13:47 +02:00
Kinance
bc5dbb6692
Implement Batch Summarization in the MessageHistory class to manage context length under the model's token limit (#4652)
* Implement Batch Running Summarization to avoid max token error
* Rename test func
2023-06-11 13:04:41 -07:00
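An illustrative sketch of batched running summarization: new messages are grouped into batches that fit a per-call token budget, and each batch is folded into the running summary so no single summarization prompt can exceed the model's limit. `count_tokens` and `summarize` are stand-ins for the real token counter and LLM call.

```python
from typing import Callable


def update_summary_in_batches(
    summary: str,
    new_messages: list[str],
    batch_token_budget: int,
    count_tokens: Callable[[str], int],
    summarize: Callable[[str, list[str]], str],
) -> str:
    batch: list[str] = []
    batch_tokens = 0
    for message in new_messages:
        tokens = count_tokens(message)
        if batch and batch_tokens + tokens > batch_token_budget:
            # Fold the full batch into the running summary before it overflows.
            summary = summarize(summary, batch)
            batch, batch_tokens = [], 0
        batch.append(message)
        batch_tokens += tokens
    if batch:
        summary = summarize(summary, batch)
    return summary
```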
Erik Peterson
0594ba33a2
Pass agent to commands instead of config (#4645)
* Add config as attribute to Agent, rename old config to ai_config
* Code review: Pass ai_config
* Pass agent to commands instead of config
* Lint
* Fix merge error
* Fix memory challenge a
---------
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
2023-06-10 15:48:50 -07:00
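A sketch of the signature change, with assumed class shapes: commands now receive the `Agent`, which carries both the runtime `config` and the `ai_config`, rather than a bare config object.

```python
from dataclasses import dataclass


@dataclass
class Config:
    workspace_path: str = "./auto_gpt_workspace"


@dataclass
class AIConfig:
    ai_name: str = "Auto-GPT"


@dataclass
class Agent:
    config: Config
    ai_config: AIConfig


def write_to_file(filename: str, text: str, agent: Agent) -> str:
    # Commands reach configuration through the agent they run under.
    path = f"{agent.config.workspace_path}/{filename}"
    return f"would write {len(text)} chars to {path}"
```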
Erik Peterson
6b9e3b21d3
Add config as attribute to Agent, rename old config to ai_config (#4638)
* Add config as attribute to Agent, rename old config to ai_config
* Code review: Pass ai_config
---------
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
2023-06-10 14:47:26 -07:00
Reinier van der Leer
bfbe613960
Vector memory revamp (part 1: refactoring) (#4208)
Additional changes:
* Improve typing
* Modularize message history memory & fix/refactor lots of things
* Fix summarization
* Move memory relevance calculation to MemoryItem & improve test
* Fix import warnings in web_selenium.py
* Remove `memory_add` ghost command
* Implement overlap in `split_text` (see the sketch after this commit)
* Move memory tests into subdirectory
* Remove deprecated `get_ada_embedding()` and helpers
* Fix used token calculation in `chat_with_ai`
* Replace Message TypedDict by dataclass
* Fix AgentManager singleton issues in tests
---------
Co-authored-by: Auto-GPT-Bot <github-bot@agpt.co>
2023-05-25 20:31:11 +02:00
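As referenced in the `split_text` bullet above, a minimal sketch of chunking with overlap: consecutive chunks share `overlap` tokens so content spanning a chunk boundary is not lost. Token-based splitting via `tiktoken` and the parameter defaults are assumptions, not the repository's actual implementation.

```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")


def split_text(text: str, max_tokens: int = 200, overlap: int = 20) -> list[str]:
    tokens = encoding.encode(text)
    step = max_tokens - overlap  # each new chunk starts `overlap` tokens early
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start : start + max_tokens]
        chunks.append(encoding.decode(window))
        if start + max_tokens >= len(tokens):
            break
    return chunks
```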