diff --git a/.github/ISSUE_TEMPLATE/1.bug.yml b/.github/ISSUE_TEMPLATE/1.bug.yml
index 4877c411..c30a6105 100644
--- a/.github/ISSUE_TEMPLATE/1.bug.yml
+++ b/.github/ISSUE_TEMPLATE/1.bug.yml
@@ -5,12 +5,20 @@ body:
- type: markdown
attributes:
value: |
- ⚠️ ### Before you continue
- Before you create this issue please join the [discord](https://discord.gg/autogpt) to discuss with our incredible members
- about your issue. You are more than to specify your issue in the [tech-support](https://discord.com/channels/1092243196446249134/1092275629602394184) channel.
+ ### ⚠️ Before you continue
+ * Check out our [backlog] and [roadmap], and join our [discord] to discuss what's going on
+ * If you need help, you can ask in the [discussions] section or in [#tech-support]
+ * **Thoroughly search the [existing issues] before creating a new one**
+
+ [backlog]: https://github.com/orgs/Significant-Gravitas/projects/1
+ [roadmap]: https://github.com/orgs/Significant-Gravitas/projects/2
+ [discord]: https://discord.gg/autogpt
+ [discussions]: https://github.com/Significant-Gravitas/Auto-GPT/discussions
+ [#tech-support]: https://discord.com/channels/1092243196446249134/1092275629602394184
+ [existing issues]: https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue
- type: checkboxes
attributes:
- label: ⚠️ Search for existing issues first ⚠️
+ label: '### ⚠️ Search for existing issues first!'
description: >
Please [search the history](https://github.com/Torantulino/Auto-GPT/issues)
to see if an issue already exists for the same problem.
@@ -43,13 +51,16 @@ body:
- Windows
- Linux
- MacOS
+ - Docker
+ - Devcontainer / Codespace
+ - Windows Subsystem for Linux (WSL)
- Other (Please specify in your problem)
- type: dropdown
attributes:
- label: GPT-3 or GPT-4
+ label: GPT-3 or GPT-4?
description: >
If you are using Auto-GPT with `--gpt3only`, your problems may be caused by
- the limitations of GPT-3.5.
+ the [limitations](https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue+label%3A%22AI+model+limitation%22) of GPT-3.5.
options:
- GPT-3.5
- GPT-4
@@ -69,7 +80,7 @@ body:
- type: textarea
attributes:
label: Your prompt 📝
- description: |
+ description: >
If applicable please provide the prompt you are using. Your prompt is stored in your `ai_settings.yaml` file.
value: |
```yaml
@@ -79,9 +90,24 @@ body:
attributes:
label: Your Logs 📒
description: |
- If applicable please include your error logs. You can find them at `logs/error.log`.
+ Please include the log showing your error and the command that caused it, if applicable.
+ You can copy it from your terminal or from `logs/activity.log`.
This will help us understand your issue better!
+
+
+ Example:
+ ```log
+ INFO NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': 'some_command'}
+ INFO -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
+ Traceback (most recent call last):
+ File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
+ self._interpret_response_line(
+ File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
+ raise self.handle_error_response(
+ openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10982 tokens (10982 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
+ ```
+
value: |
```
- ```
\ No newline at end of file
+ ```