Release v0.4.1 (#4686)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: k-boikov <64261260+k-boikov@users.noreply.github.com>
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Luke K <2609441+lc0rp@users.noreply.github.com>
Co-authored-by: Luke K (pr-0f3t) <2609441+lc0rp@users.noreply.github.com>
Co-authored-by: Erik Peterson <e@eriklp.com>
Co-authored-by: Auto-GPT-Bot <github-bot@agpt.co>
Co-authored-by: Benny van der Lans <49377421+bfalans@users.noreply.github.com>
Co-authored-by: Jan <jan-github@phobia.de>
Co-authored-by: Robin Richtsfeld <robin.richtsfeld@gmail.com>
Co-authored-by: Marc Bornträger <marc.borntraeger@gmail.com>
Co-authored-by: Stefan Ayala <stefanayala3266@gmail.com>
Co-authored-by: javableu <45064273+javableu@users.noreply.github.com>
Co-authored-by: DGdev91 <DGdev91@users.noreply.github.com>
Co-authored-by: Kinance <kinance@gmail.com>
Co-authored-by: digger yu <digger-yu@outlook.com>
Co-authored-by: David <scenaristeur@gmail.com>
Co-authored-by: gravelBridge <john.tian31@gmail.com>

Fix Python CI "update cassettes" step (#4591)
fix CI (#4596)
Fix inverted logic for deny_command (#4563)
fix current_score.json generation (#4601)
Fix duckduckgo rate limiting (#4592)
Fix debug code challenge (#4632)
Fix issues with information retrieval challenge a (#4622)
fix issues with env configuration and .env.template (#4630)
Fix prompt issue causing 'No Command' issues and challenge to fail (#4623)
Fix benchmark logs (#4653)
Fix typo in docs/setup.md (#4613)
Fix run.sh shebang (#4561)
Fix autogpt docker image not working because missing prompt_settings (#4680)
Fix execute_command coming from plugins (#4730)
@@ -70,7 +70,7 @@ def kubernetes_agent(
```

## Creating your challenge

-Go to `tests/integration/challenges` and create a file called `test_your_test_description.py` and add it to the appropriate folder. If no category exists, you can create a new one.
+Go to `tests/challenges` and create a file called `test_your_test_description.py` and add it to the appropriate folder. If no category exists, you can create a new one.

Your test could look something like this:
@@ -84,7 +84,7 @@ import yaml
from autogpt.commands.file_operations import read_file, write_to_file
from tests.integration.agent_utils import run_interaction_loop
-from tests.integration.challenges.utils import run_multiple_times
+from tests.challenges.utils import run_multiple_times
from tests.utils import requires_api_key
@@ -111,7 +111,7 @@ def test_information_retrieval_challenge_a(kubernetes_agent, monkeypatch) -> Non
    """
    input_sequence = ["s", "s", "s", "s", "s", "EXIT"]
    gen = input_generator(input_sequence)
-    monkeypatch.setattr("builtins.input", lambda _: next(gen))
+    monkeypatch.setattr("autogpt.utils.session.prompt", lambda _: next(gen))

    with contextlib.suppress(SystemExit):
        run_interaction_loop(kubernetes_agent, None)
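
Pieced together, the updated example (using the new `tests/challenges` paths) might look roughly like the sketch below. The `input_generator` helper, the `kubernetes_agent` fixture, the decorator arguments, and the final assertion are assumptions based on the fragments shown above, not the exact contents of the test suite.

```
import contextlib

from autogpt.commands.file_operations import read_file
from tests.integration.agent_utils import run_interaction_loop
from tests.challenges.utils import run_multiple_times
from tests.utils import requires_api_key


def input_generator(input_sequence):
    """Yield the scripted user inputs one at a time."""
    yield from input_sequence


@requires_api_key("OPENAI_API_KEY")
@run_multiple_times(3)
def test_your_test_description(kubernetes_agent, monkeypatch) -> None:
    """Drive the agent with scripted input and check what it produced."""
    # Script the interaction: approve a few steps, then exit.
    input_sequence = ["s", "s", "s", "s", "s", "EXIT"]
    gen = input_generator(input_sequence)
    monkeypatch.setattr("autogpt.utils.session.prompt", lambda _: next(gen))

    # The interaction loop exits via SystemExit once the scripted input runs out.
    with contextlib.suppress(SystemExit):
        run_interaction_loop(kubernetes_agent, None)

    # Assert on whatever artifact your challenge expects the agent to create
    # (illustrative only; read_file's exact signature may differ in your version).
    content = read_file("output.txt")
    assert "expected result" in content
```
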
@@ -5,7 +5,7 @@
**Command to try**:

```
-pytest -s tests/integration/challenges/information_retrieval/test_information_retrieval_challenge_a.py --level=2
+pytest -s tests/challenges/information_retrieval/test_information_retrieval_challenge_a.py --level=2
```

## Description
@@ -5,7 +5,7 @@
**Command to try**:

```
-pytest -s tests/integration/challenges/information_retrieval/test_information_retrieval_challenge_b.py
+pytest -s tests/challenges/information_retrieval/test_information_retrieval_challenge_b.py
```

## Description
@@ -4,7 +4,7 @@
**Command to try**:

```
-pytest -s tests/integration/challenges/memory/test_memory_challenge_b.py --level=3
+pytest -s tests/challenges/memory/test_memory_challenge_b.py --level=3
```

## Description
@@ -41,4 +41,3 @@ Write all the task_ids into the file output.txt. The file has not been created y
## Objective

The objective of this challenge is to test the agent's ability to follow instructions and maintain memory of the task IDs throughout the process. The agent successfully completes this challenge if it writes the task IDs to a file.
@@ -4,7 +4,7 @@
**Command to try**:

```
-pytest -s tests/integration/challenges/memory/test_memory_challenge_c.py --level=2
+pytest -s tests/challenges/memory/test_memory_challenge_c.py --level=2
```

## Description
docs/challenges/memory/challenge_d.md (new file, 75 lines)
@@ -0,0 +1,75 @@
# Memory Challenge D

**Status**: Current level to beat: level 1

**Command to try**:

```
pytest -s tests/challenges/memory/test_memory_challenge_d.py --level=1
```

## Description
The provided code is a unit test designed to validate an AI's ability to track events and beliefs of characters in a story involving moving objects, specifically marbles. This scenario is an advanced form of the classic "Sally-Anne test", a psychological test used to measure a child's social cognitive ability to understand that others' perspectives and beliefs may differ from their own.

Here is an explanation of the challenge:

The AI is given a series of events involving the characters Sally, Anne, Bob, and Charlie, and the movements of different marbles. These events are designed as tests of increasing complexity.

For each level, the AI is expected to keep track of the events and of each character's resulting beliefs about the location of each marble. These beliefs depend on whether the character was inside or outside the room when the events occurred: characters inside the room are aware of the actions, while characters outside the room are not.

After the AI processes the events and generates each character's beliefs, it writes these beliefs to an output file in JSON format.

The `check_beliefs` function then checks the AI's beliefs against the expected beliefs for that level. The expected beliefs are predefined and represent the correct interpretation of the events for each level.

If the AI's beliefs match the expected beliefs, the AI has correctly interpreted the events and the perspectives of each character, and it passes the test for that level.

The test runs for levels up to the maximum level the AI has successfully beaten, or up to a user-selected level.
## Files

- `instructions_1.txt`

  "Sally has a marble (marble A) and she puts it in her basket (basket S), then leaves the room. Anne moves marble A from Sally's basket (basket S) to her own basket (basket A).",

- `instructions_2.txt`

  "Sally gives a new marble (marble B) to Bob who is outside with her. Bob goes into the room and places marble B into Anne's basket (basket A). Anne tells Bob to tell Sally that he lost the marble B. Bob leaves the room and speaks to Sally about the marble B. Meanwhile, after Bob left the room, Anne moves marble A into the green box, but tells Charlie to tell Sally that marble A is under the sofa. Charlie leaves the room and speaks to Sally about the marble A as instructed by Anne.",

- ...and so on.

- `instructions_n.txt`

The expected beliefs of each character are given in a dictionary:

```
expected_beliefs = {
    1: {
        'Sally': {
            'marble A': 'basket S',
        },
        'Anne': {
            'marble A': 'basket A',
        }
    },
    2: {
        'Sally': {
            'marble A': 'sofa',  # Because Charlie told her
        },
        'Anne': {
            'marble A': 'green box',  # Because she moved it there
            'marble B': 'basket A',  # Because Bob put it there and she was in the room
        },
        'Bob': {
            'B': 'basket A',  # Last place he put it
        },
        'Charlie': {
            'A': 'sofa',  # Because Anne told him to tell Sally so
        }
    },
    ...
```
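
The diff does not include `check_beliefs` itself. As a rough sketch of the comparison it describes (reading the JSON beliefs the agent wrote and checking them against the expected beliefs for a level), something like the following could work; the output file name and exact JSON layout are assumptions, not the actual code in `tests/challenges/memory/test_memory_challenge_d.py`.

```
import json


def check_beliefs(output_path: str, level: int, expected_beliefs: dict) -> None:
    """Compare the beliefs the agent wrote to disk with the expected beliefs.

    NOTE: the file name and JSON layout are illustrative assumptions; the real
    test may structure its output differently.
    """
    with open(output_path) as f:
        actual_beliefs = json.load(f)

    for character, beliefs in expected_beliefs[level].items():
        for marble, expected_location in beliefs.items():
            actual_location = actual_beliefs.get(character, {}).get(marble)
            assert actual_location == expected_location, (
                f"{character} should believe {marble} is in {expected_location}, "
                f"but the output says {actual_location!r}"
            )
```
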
## Objective

This test essentially checks if an AI can accurately model and track the beliefs of different characters based on their knowledge of events, which is a critical aspect of understanding and generating human-like narratives. This ability would be beneficial for tasks such as writing stories, dialogue systems, and more.
docs/configuration/options.md (new file, 53 lines)
@@ -0,0 +1,53 @@

# Configuration

Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `Auto-GPT` folder and name it `.env`.

## Environment Variables

- `AI_SETTINGS_FILE`: Location of the AI Settings file. Default: ai_settings.yaml
- `AUDIO_TO_TEXT_PROVIDER`: Audio-to-text provider. The only current option is `huggingface`. Default: huggingface
- `AUTHORISE_COMMAND_KEY`: Key response accepted when authorising commands. Default: y
- `BROWSE_CHUNK_MAX_LENGTH`: When browsing a website, the length of the chunks to summarize. Default: 3000
- `BROWSE_SPACY_LANGUAGE_MODEL`: [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. Default: en_core_web_sm
- `CHAT_MESSAGES_ENABLED`: Enable chat messages. Optional.
- `DISABLED_COMMAND_CATEGORIES`: Command categories to disable. Command categories are Python module names, e.g. `autogpt.commands.analyze_code`. See the directory `autogpt/commands` in the source for all command modules. Default: None
- `ELEVENLABS_API_KEY`: ElevenLabs API Key. Optional.
- `ELEVENLABS_VOICE_ID`: ElevenLabs Voice ID. Optional.
- `EMBEDDING_MODEL`: LLM model to use for embedding tasks. Default: text-embedding-ada-002
- `EXECUTE_LOCAL_COMMANDS`: Whether shell commands should be executed locally. Default: False
- `EXIT_KEY`: Key accepted to exit. Default: n
- `FAST_LLM_MODEL`: LLM model to use for most tasks. Default: gpt-3.5-turbo
- `GITHUB_API_KEY`: [GitHub API Key](https://github.com/settings/tokens). Optional.
- `GITHUB_USERNAME`: GitHub username. Optional.
- `GOOGLE_API_KEY`: Google API key. Optional.
- `GOOGLE_CUSTOM_SEARCH_ENGINE_ID`: [Google custom search engine ID](https://programmablesearchengine.google.com/controlpanel/all). Optional.
- `HEADLESS_BROWSER`: Use a headless browser when Auto-GPT uses a web browser. Setting this to `False` lets you watch Auto-GPT operate the browser. Default: True
- `HUGGINGFACE_API_TOKEN`: HuggingFace API token, used for both image generation and audio-to-text. Optional.
- `HUGGINGFACE_AUDIO_TO_TEXT_MODEL`: HuggingFace model to use for audio-to-text transcription. Optional.
- `HUGGINGFACE_IMAGE_MODEL`: HuggingFace model to use for image generation. Default: CompVis/stable-diffusion-v1-4
- `IMAGE_PROVIDER`: Image provider. Options are `dalle`, `huggingface`, and `sdwebui`. Default: dalle
- `IMAGE_SIZE`: Default size of image to generate. Default: 256
- `MEMORY_BACKEND`: Memory back-end to use. Currently `json_file` is the only supported and enabled backend. Default: json_file
- `MEMORY_INDEX`: Value used in the memory backend for scoping, naming, or indexing. Default: auto-gpt
- `OPENAI_API_KEY`: *REQUIRED* - Your [OpenAI API Key](https://platform.openai.com/account/api-keys).
- `OPENAI_ORGANIZATION`: Organization ID in OpenAI. Optional.
- `PLAIN_OUTPUT`: Plain output, which disables the spinner. Default: False
- `PLUGINS_CONFIG_FILE`: Path of the plugins_config.yaml file. Default: plugins_config.yaml
- `PROMPT_SETTINGS_FILE`: Location of the Prompt Settings file. Default: prompt_settings.yaml
- `REDIS_HOST`: Redis host. Default: localhost
- `REDIS_PASSWORD`: Redis password. Optional. Default: empty
- `REDIS_PORT`: Redis port. Default: 6379
- `RESTRICT_TO_WORKSPACE`: Restrict file reading and writing to the workspace directory. Default: True
- `SD_WEBUI_AUTH`: Stable Diffusion Web UI username:password pair. Optional.
- `SD_WEBUI_URL`: Stable Diffusion Web UI URL. Default: http://localhost:7860
- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
- `SHELL_COMMAND_CONTROL`: Whether to use an `allowlist` or a `denylist` to determine which shell commands can be executed. Default: denylist
- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
- `SMART_LLM_MODEL`: LLM model to use for "smart" tasks. Default: gpt-3.5-turbo
- `STREAMELEMENTS_VOICE`: StreamElements voice to use. Default: Brian
- `TEMPERATURE`: Temperature value passed to OpenAI. Ranges from 0 to 2; lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
- `TEXT_TO_SPEECH_PROVIDER`: Text-to-speech provider. Options are `gtts`, `macos`, `elevenlabs`, and `streamelements`. Default: gtts
- `USER_AGENT`: User-Agent header sent when browsing websites. Default: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
- `USE_AZURE`: Use Azure's LLM. Default: False
- `USE_WEB_BROWSER`: Which web browser to use. Options are `chrome`, `firefox`, `safari`, or `edge`. Default: chrome
- `WIPE_REDIS_ON_START`: Wipe data / index on start. Default: True
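
As a quick reference, a minimal `.env` might look like the sketch below. The API key is a placeholder, and the remaining values simply restate defaults listed above; adjust them to your needs.

```
OPENAI_API_KEY=your-openai-api-key
FAST_LLM_MODEL=gpt-3.5-turbo
SMART_LLM_MODEL=gpt-3.5-turbo
EXECUTE_LOCAL_COMMANDS=False
RESTRICT_TO_WORKSPACE=True
SHELL_COMMAND_CONTROL=denylist
SHELL_DENYLIST=sudo,su
MEMORY_BACKEND=json_file
```
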
@@ -2,6 +2,18 @@

⚠️💀 **WARNING** 💀⚠️: Review the code of any plugin you use thoroughly, as plugins can execute any Python code, potentially leading to malicious activities such as stealing your API keys.

To configure plugins, you can create or edit the `plugins_config.yaml` file in the root directory of Auto-GPT. This file allows you to enable or disable plugins as desired. For specific configuration instructions, please refer to the documentation provided for each plugin. The file should be formatted in YAML. Here is an example for your reference:

```yaml
plugin_a:
  config:
    api_key: my-api-key
  enabled: false
plugin_b:
  config: {}
  enabled: true
```

See our [Plugins Repo](https://github.com/Significant-Gravitas/Auto-GPT-Plugins) for more info on how to install all the amazing plugins the community has built!

Alternatively, developers can use the [Auto-GPT Plugin Template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template) as a starting point for creating their own plugins.
@@ -172,7 +172,7 @@ If you need to upgrade Docker Compose to a newer version, you can follow the ins

Once you have a recent version of docker-compose, run the commands below in your Auto-GPT folder.

-1. Build the image. If you have pulled the image from Docker Hub, skip this step (NOTE: You *will* need to do this if you are modifying requirements.txt to add/remove depedencies like Python libs/frameworks)
+1. Build the image. If you have pulled the image from Docker Hub, skip this step (NOTE: You *will* need to do this if you are modifying requirements.txt to add/remove dependencies like Python libs/frameworks)

        :::shell
        docker-compose build auto-gpt