Rename Auto-GPT to AutoGPT (#5301)

* Rename to AutoGPT

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Update autogpts/autogpt/BULLETIN.md

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* Update BULLETIN.md

* Update docker-compose.yml

* Update autogpts/forge/tutorials/001_getting_started.md

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* Update autogpts/autogpt/tests/unit/test_logs.py

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update introduction.md

* Update plugins.md

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Author: merwanehamadi
Date: 2023-09-22 15:49:29 -07:00 (committed via GitHub)
Commit: 8f41dbe27d (parent: bb627442d4)
70 changed files with 242 additions and 243 deletions
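The bulk of this commit is a mechanical string rename across the repository, with a handful of hand-edited URL fixes (e.g. Torantulino and wiki links) visible in the hunks below. As a hedged illustration only — not the script actually used for this commit — a rename of this kind could be sketched in Python roughly as follows; the file-extension filter and function name are assumptions, and lowercase variants such as "auto-gpt" were clearly handled case by case in the real change, so they are deliberately left alone here.

from pathlib import Path

OLD, NEW = "Auto-GPT", "AutoGPT"
# Assumed set of text-like files to touch; adjust for the repo in question.
EXTENSIONS = {".md", ".yml", ".yaml", ".py", ".toml", ".json", ".cff", ".template"}

def rename_in_repo(root: str = ".") -> None:
    """Replace OLD with NEW in tracked text files under `root` (illustrative sketch)."""
    for path in Path(root).rglob("*"):
        # Skip directories and anything inside .git
        if not path.is_file() or ".git" in path.parts:
            continue
        if path.suffix not in EXTENSIONS and path.name not in {".envrc", ".env.template"}:
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        if OLD in text:
            path.write_text(text.replace(OLD, NEW), encoding="utf-8")
            print(f"updated {path}")

if __name__ == "__main__":
    rename_in_repo()

Manual review would still be needed for the URLs and lowercase identifiers shown in the diff below.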

View File

@@ -1,5 +1,5 @@
 name: Bug report 🐛
-description: Create a bug report for Auto-GPT.
+description: Create a bug report for AutoGPT.
 labels: ['status: needs triage']
 body:
 - type: markdown
@@ -13,16 +13,16 @@ body:
 [backlog]: https://github.com/orgs/Significant-Gravitas/projects/1
 [roadmap]: https://github.com/orgs/Significant-Gravitas/projects/2
 [discord]: https://discord.gg/autogpt
-[discussions]: https://github.com/Significant-Gravitas/Auto-GPT/discussions
+[discussions]: https://github.com/Significant-Gravitas/AutoGPT/discussions
 [#tech-support]: https://discord.com/channels/1092243196446249134/1092275629602394184
-[existing issues]: https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue
+[existing issues]: https://github.com/Significant-Gravitas/AutoGPT/issues?q=is%3Aissue
 [wiki page on Contributing]: https://github.com/Significant-Gravitas/Nexus/wiki/Contributing
 - type: checkboxes
 attributes:
 label: ⚠️ Search for existing issues first ⚠️
 description: >
-Please [search the history](https://github.com/Torantulino/Auto-GPT/issues)
+Please [search the history](https://github.com/Significant-Gravitas/AutoGPT/issues)
 to see if an issue already exists for the same problem.
 options:
 - label: I have searched the existing issues, and there is no existing issue for my problem
@@ -35,8 +35,8 @@ body:
 A good rule of thumb: What would you type if you were searching for the issue?
 For example:
-BAD - my auto-gpt keeps looping
+BAD - my AutoGPT keeps looping
-GOOD - After performing execute_python_file, auto-gpt goes into a loop where it keeps trying to execute the file.
+GOOD - After performing execute_python_file, AutoGPT goes into a loop where it keeps trying to execute the file.
 ⚠️ SUPER-busy repo, please help the volunteer maintainers.
 The less time we spend here, the more time we can spend building AutoGPT.
@@ -54,7 +54,7 @@ body:
 attributes:
 label: Which Operating System are you using?
 description: >
-Please select the operating system you were using to run Auto-GPT when this problem occurred.
+Please select the operating system you were using to run AutoGPT when this problem occurred.
 options:
 - Windows
 - Linux
@@ -73,12 +73,12 @@ body:
 - type: dropdown
 attributes:
-label: Which version of Auto-GPT are you using?
+label: Which version of AutoGPT are you using?
 description: |
-Please select which version of Auto-GPT you were using when this issue occurred.
+Please select which version of AutoGPT you were using when this issue occurred.
-If you downloaded the code from the [releases page](https://github.com/Significant-Gravitas/Auto-GPT/releases/) make sure you were using the latest code.
+If you downloaded the code from the [releases page](https://github.com/Significant-Gravitas/AutoGPT/releases/) make sure you were using the latest code.
-**If you weren't please try with the [latest code](https://github.com/Significant-Gravitas/Auto-GPT/releases/)**.
+**If you weren't please try with the [latest code](https://github.com/Significant-Gravitas/AutoGPT/releases/)**.
-If installed with git you can run `git branch` to see which version of Auto-GPT you are running.
+If installed with git you can run `git branch` to see which version of AutoGPT you are running.
 options:
 - Latest Release
 - Stable (branch)
@@ -90,8 +90,8 @@ body:
 attributes:
 label: Do you use OpenAI GPT-3 or GPT-4?
 description: >
-If you are using Auto-GPT with `--gpt3only`, your problems may be caused by
+If you are using AutoGPT with `--gpt3only`, your problems may be caused by
-the [limitations](https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue+label%3A%22AI+model+limitation%22) of GPT-3.5.
+the [limitations](https://github.com/Significant-Gravitas/AutoGPT/issues?q=is%3Aissue+label%3A%22AI+model+limitation%22) of GPT-3.5.
 options:
 - GPT-3.5
 - GPT-4
@@ -129,7 +129,7 @@ body:
 - type: textarea
 attributes:
 label: Describe your issue.
 description: Describe the problem you are experiencing. Try to describe only the issue and phrase it short but clear. ⚠️ Provide NO other data in this field
 validations:
 required: true
@@ -139,16 +139,16 @@ body:
 value: |
 The following is OPTIONAL, please keep in mind that the log files may contain personal information such as credentials.⚠️
-"The log files are located in the folder 'logs' inside the main auto-gpt folder."
+"The log files are located in the folder 'logs' inside the main AutoGPT folder."
 - type: textarea
 attributes:
 label: Upload Activity Log Content
 description: |
 Upload the activity log content, this can help us understand the issue better.
-To do this, go to the folder logs in your main auto-gpt folder, open activity.log and copy/paste the contents to this field.
+To do this, go to the folder logs in your main AutoGPT folder, open activity.log and copy/paste the contents to this field.
-⚠️ The activity log may contain personal data given to auto-gpt by you in prompt or input as well as
+⚠️ The activity log may contain personal data given to AutoGPT by you in prompt or input as well as
-any personal information that auto-gpt collected out of files during last run. Do not add the activity log if you are not comfortable with sharing it. ⚠️
+any personal information that AutoGPT collected out of files during last run. Do not add the activity log if you are not comfortable with sharing it. ⚠️
 validations:
 required: false
@@ -157,8 +157,8 @@ body:
 label: Upload Error Log Content
 description: |
 Upload the error log content, this will help us understand the issue better.
-To do this, go to the folder logs in your main auto-gpt folder, open error.log and copy/paste the contents to this field.
+To do this, go to the folder logs in your main AutoGPT folder, open error.log and copy/paste the contents to this field.
-⚠️ The error log may contain personal data given to auto-gpt by you in prompt or input as well as
+⚠️ The error log may contain personal data given to AutoGPT by you in prompt or input as well as
-any personal information that auto-gpt collected out of files during last run. Do not add the activity log if you are not comfortable with sharing it. ⚠️
+any personal information that AutoGPT collected out of files during last run. Do not add the activity log if you are not comfortable with sharing it. ⚠️
 validations:
 required: false

View File

@@ -1,5 +1,5 @@
 name: Feature request 🚀
-description: Suggest a new idea for Auto-GPT!
+description: Suggest a new idea for AutoGPT!
 labels: ['status: needs triage']
 body:
 - type: markdown
@@ -10,7 +10,7 @@ body:
 - type: checkboxes
 attributes:
 label: Duplicates
-description: Please [search the history](https://github.com/Torantulino/Auto-GPT/issues) to see if an issue already exists for the same problem.
+description: Please [search the history](https://github.com/Significant-Gravitas/AutoGPT/issues) to see if an issue already exists for the same problem.
 options:
 - label: I have searched the existing issues
 required: true

View File

@@ -27,5 +27,5 @@ https://github.com/Significant-Gravitas/Nexus/wiki/Contributing
 - [ ] Have you changed or added a feature? &ensp; `-4 pts`
 - [ ] Have you added/updated corresponding documentation? &ensp; `+4 pts`
 - [ ] Have you added/updated corresponding integration tests? &ensp; `+5 pts`
-- [ ] Have you changed the behavior of Auto-GPT? &ensp; `-5 pts`
+- [ ] Have you changed the behavior of AutoGPT? &ensp; `-5 pts`
 - [ ] Have you also run `agbenchmark` to verify that these changes do not regress performance? &ensp; `+10 pts`

View File

@@ -1,4 +1,4 @@
-name: Auto-GPT Python CI
+name: AutoGPT Python CI
 on:
 push:

View File

@@ -1,4 +1,4 @@
-name: Auto-GPT Docker CI
+name: AutoGPT Docker CI
 on:
 push:

View File

@@ -1,4 +1,4 @@
-name: Auto-GPT Docker Release
+name: AutoGPT Docker Release
 on:
 release:

View File

@@ -1,4 +1,4 @@
-# Auto-GPT Contribution Guide
+# AutoGPT Contribution Guide
 If you are reading this, you are probably looking for our **[contribution guide]**,
 which is part of our [knowledge base].
@@ -33,5 +33,5 @@ In fact, why not just look through the whole wiki (it's only a few pages) and
 hop on our Discord. See you there! :-)
 ❤️ & 🔆
-The team @ Auto-GPT
+The team @ AutoGPT
 https://discord.gg/autogpt

View File

@@ -1,6 +1,6 @@
 # 🌟 AutoGPT: the heart of the open-source agent ecosystem
-[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) [![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/auto-gpt?style=social)](https://github.com/Significant-Gravitas/Auto-GPT/stargazers) [![Twitter Follow](https://img.shields.io/twitter/follow/autogpt?style=social)](https://twitter.com/Auto_GPT) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) [![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/AutoGPT?style=social)](https://github.com/Significant-Gravitas/AutoGPT/stargazers) [![Twitter Follow](https://img.shields.io/twitter/follow/autogpt?style=social)](https://twitter.com/Auto_GPT) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 **AutoGPT** is your go-to toolkit for supercharging agents. With its modular and extensible framework, you're empowered to focus on:
@@ -33,7 +33,7 @@ Among our currently benchmarked agents, AutoGPT scores the best. This will chang
 ## 🌟 Quickstart
-- **Jumpstart your journey!** 🌠 To activate the best agent, follow the guide [here](https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpts/autogpt/README.md).
+- **Jumpstart your journey!** 🌠 To activate the best agent, follow the guide [here](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpts/autogpt/README.md).
 Want to build your own groundbreaking agent using AutoGPT? 🛠️ Fork this repository! Detailed guidance is on the way. There are three major components to focus on:
@@ -41,19 +41,19 @@ Want to build your own groundbreaking agent using AutoGPT? 🛠️ Fork this rep
 **Forge your future!** The `forge` is your innovation lab. All the boilerplate code is already handled, letting you channel all your creativity into building a revolutionary agent. It's more than a starting point, it's a launchpad 🚀 for your ideas.
-📘 [Learn More](https://github.com/Significant-Gravitas/Auto-GPT/tree/master/autogpts/forge)
+📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpts/forge)
 ### 🎯 the Benchmark
 **Test to impress!** The `benchmark` offers a stringent testing environment. Our framework allows for autonomous, objective performance evaluations, ensuring your agents are primed for real-world action.
-📘 [Learn More](https://github.com/Significant-Gravitas/Auto-GPT/blob/master/benchmark)
+📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/blob/master/benchmark)
 ### 🎮 the UI
 **Take Control!** The `frontend` is your personal command center. It gives you a user-friendly interface to control and monitor your agents, making it easier to bring your ideas to life.
-📘 [Learn More](https://github.com/Significant-Gravitas/Auto-GPT/tree/master/frontend)
+📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/frontend)
 ---
@@ -65,10 +65,10 @@ Want to build your own groundbreaking agent using AutoGPT? 🛠️ Fork this rep
 #### Get help - [Discord 💬](https://discord.gg/autogpt)
-To report a bug or request a feature, create a [GitHub Issue](https://github.com/Significant-Gravitas/Auto-GPT/issues/new/choose). Please ensure someone else hasnt created an issue for the same topic.
+To report a bug or request a feature, create a [GitHub Issue](https://github.com/Significant-Gravitas/AutoGPT/issues/new/choose). Please ensure someone else hasnt created an issue for the same topic.
 <p align="center">
-<a href="https://star-history.com/#Torantulino/auto-gpt&Date">
+<a href="https://star-history.com/#Significant-Gravitas/AutoGPT&Date">
-<img src="https://api.star-history.com/svg?repos=Torantulino/auto-gpt&type=Date" alt="Star History Chart">
+<img src="https://api.star-history.com/svg?repos=Significant-Gravitas/AutoGPT&type=Date" alt="Star History Chart">
 </a>
 </p>

View File

@@ -10,4 +10,4 @@ RUN apt-get update && apt-get install -y \
 RUN apt-get install -y curl jq wget git
 # Declare working directory
-WORKDIR /workspace/Auto-GPT
+WORKDIR /workspace/AutoGPT

View File

@@ -1,7 +1,7 @@
 {
 "dockerComposeFile": "./docker-compose.yml",
 "service": "auto-gpt",
-"workspaceFolder": "/workspace/Auto-GPT",
+"workspaceFolder": "/workspace/AutoGPT",
 "shutdownAction": "stopCompose",
 "features": {
 "ghcr.io/devcontainers/features/common-utils:2": {
@@ -52,5 +52,5 @@
 "remoteUser": "vscode",
 // Add the freshly containerized repo to the list of safe repositories
-"postCreateCommand": "git config --global --add safe.directory /workspace/Auto-GPT && poetry install"
+"postCreateCommand": "git config --global --add safe.directory /workspace/AutoGPT && poetry install"
 }

View File

@@ -9,4 +9,4 @@ services:
 context: ../
 tty: true
 volumes:
-- ../:/workspace/Auto-GPT
+- ../:/workspace/AutoGPT

View File

@@ -1,7 +1,7 @@
 # For further descriptions of these settings see docs/configuration/options.md or go to docs.agpt.co
 ################################################################################
-### AUTO-GPT - GENERAL SETTINGS
+### AutoGPT - GENERAL SETTINGS
 ################################################################################
 ## OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
@@ -16,13 +16,13 @@ OPENAI_API_KEY=your-openai-api-key
 ## USER_AGENT - Define the user-agent used by the requests library to browse website (string)
 # USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
-## AI_SETTINGS_FILE - Specifies which AI Settings file to use, relative to the Auto-GPT root directory. (defaults to ai_settings.yaml)
+## AI_SETTINGS_FILE - Specifies which AI Settings file to use, relative to the AutoGPT root directory. (defaults to ai_settings.yaml)
 # AI_SETTINGS_FILE=ai_settings.yaml
-## PLUGINS_CONFIG_FILE - The path to the plugins_config.yaml file, relative to the Auto-GPT root directory. (Default plugins_config.yaml)
+## PLUGINS_CONFIG_FILE - The path to the plugins_config.yaml file, relative to the AutoGPT root directory. (Default plugins_config.yaml)
 # PLUGINS_CONFIG_FILE=plugins_config.yaml
-## PROMPT_SETTINGS_FILE - Specifies which Prompt Settings file to use, relative to the Auto-GPT root directory. (defaults to prompt_settings.yaml)
+## PROMPT_SETTINGS_FILE - Specifies which Prompt Settings file to use, relative to the AutoGPT root directory. (defaults to prompt_settings.yaml)
 # PROMPT_SETTINGS_FILE=prompt_settings.yaml
 ## OPENAI_API_BASE_URL - Custom url for the OpenAI API, useful for connecting to custom backends. No effect if USE_AZURE is true, leave blank to keep the default url
@@ -36,7 +36,7 @@ OPENAI_API_KEY=your-openai-api-key
 ## AUTHORISE COMMAND KEY - Key to authorise commands
 # AUTHORISE_COMMAND_KEY=y
-## EXIT_KEY - Key to exit AUTO-GPT
+## EXIT_KEY - Key to exit AutoGPT
 # EXIT_KEY=n
 ## PLAIN_OUTPUT - Plain output, which disables the spinner (Default: False)
@@ -58,7 +58,7 @@ OPENAI_API_KEY=your-openai-api-key
 ## USE_AZURE - Use Azure OpenAI or not (Default: False)
 # USE_AZURE=False
-## AZURE_CONFIG_FILE - The path to the azure.yaml file, relative to the Auto-GPT root directory. (Default: azure.yaml)
+## AZURE_CONFIG_FILE - The path to the azure.yaml file, relative to the AutoGPT root directory. (Default: azure.yaml)
 # AZURE_CONFIG_FILE=azure.yaml
@@ -83,11 +83,11 @@ OPENAI_API_KEY=your-openai-api-key
 # SHELL_COMMAND_CONTROL=denylist
 ## ONLY if SHELL_COMMAND_CONTROL is set to denylist:
-## SHELL_DENYLIST - List of shell commands that ARE NOT allowed to be executed by Auto-GPT (Default: sudo,su)
+## SHELL_DENYLIST - List of shell commands that ARE NOT allowed to be executed by AutoGPT (Default: sudo,su)
 # SHELL_DENYLIST=sudo,su
 ## ONLY if SHELL_COMMAND_CONTROL is set to allowlist:
-## SHELL_ALLOWLIST - List of shell commands that ARE allowed to be executed by Auto-GPT (Default: None)
+## SHELL_ALLOWLIST - List of shell commands that ARE allowed to be executed by AutoGPT (Default: None)
 # SHELL_ALLOWLIST=
 ################################################################################

View File

@@ -1,4 +1,4 @@
 # Upon entering directory, direnv requests user permission once to automatically load project dependencies onwards.
-# Eliminating the need of running "nix develop github:superherointj/nix-auto-gpt" for Nix users to develop/use Auto-GPT.
+# Eliminating the need of running "nix develop github:superherointj/nix-auto-gpt" for Nix users to develop/use AutoGPT.
 [[ -z $IN_NIX_SHELL ]] && use flake github:superherointj/nix-auto-gpt

View File

@@ -2,7 +2,7 @@
 # --------------
 🌎 *Official Website*: https://agpt.co.
 📖 *User Guide*: https://docs.agpt.co.
-👩 *Contributors Wiki*: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Contributing.
+👩 *Contributors Wiki*: https://github.com/Significant-Gravitas/Nexus/wiki/Contributing.
 # v0.4.7 RELEASE HIGHLIGHTS! 🚀
 # -----------------------------
@@ -18,4 +18,4 @@ We've also moved our documentation to Material Theme, at https://docs.agpt.co.
 As usual, we've squashed a few bugs and made some under-the-hood improvements.
 Take a look at the Release Notes on Github for the full changelog:
-https://github.com/Significant-Gravitas/Auto-GPT/releases.
+https://github.com/Significant-Gravitas/AutoGPT/releases.

View File

@@ -2,7 +2,7 @@
 # Visit https://bit.ly/cffinit to generate yours today!
 cff-version: 1.2.0
-title: Auto-GPT
+title: AutoGPT
 message: >-
 If you use this software, please cite it using the
 metadata from this file.
@@ -10,7 +10,7 @@ type: software
 authors:
 - name: Significant Gravitas
 website: 'https://agpt.co'
-repository-code: 'https://github.com/Significant-Gravitas/Auto-GPT'
+repository-code: 'https://github.com/Significant-Gravitas/AutoGPT'
 url: 'https://agpt.co'
 abstract: >-
 An experimental open-source attempt to make GPT-4 fully

View File

@@ -1,21 +1,21 @@
-# Auto-GPT: An Autonomous GPT-4 Experiment
+# AutoGPT: An Autonomous GPT-4 Experiment
 [![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt)
-[![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/auto-gpt?style=social)](https://github.com/Significant-Gravitas/Auto-GPT/stargazers)
+[![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/AutoGPT?style=social)](https://github.com/Significant-Gravitas/AutoGPT/stargazers)
 [![Twitter Follow](https://img.shields.io/twitter/follow/siggravitas?style=social)](https://twitter.com/SigGravitas)
-## 💡 Get help - [Q&A](https://github.com/Significant-Gravitas/Auto-GPT/discussions/categories/q-a) or [Discord 💬](https://discord.gg/autogpt)
+## 💡 Get help - [Q&A](https://github.com/Significant-Gravitas/AutoGPT/discussions/categories/q-a) or [Discord 💬](https://discord.gg/autogpt)
 <hr/>
 ### 🔴 USE `stable` not `master` 🔴
-**Download the latest `stable` release from here: https://github.com/Significant-Gravitas/Auto-GPT/releases/latest.**
+**Download the latest `stable` release from here: https://github.com/Significant-Gravitas/AutoGPT/releases/latest.**
 The `master` branch is under heavy development and may often be in a **broken** state.
 <hr/>
-Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM "thoughts", to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.
+AutoGPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM "thoughts", to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, AutoGPT pushes the boundaries of what is possible with AI.
 <h2 align="center"> Demo April 16th 2023 </h2>
@@ -36,7 +36,7 @@ Demo made by <a href=https://twitter.com/BlakeWerlinger>Blake Werlinger</a>
 0. Check out the [wiki](https://github.com/Significant-Gravitas/Nexus/wiki)
 1. Get an OpenAI [API Key](https://platform.openai.com/account/api-keys)
-2. Download the [latest release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest)
+2. Download the [latest release](https://github.com/Significant-Gravitas/AutoGPT/releases/latest)
 3. Follow the [installation instructions][docs/setup]
 4. Configure any additional features you want, or install some [plugins][docs/plugins]
 5. [Run][docs/usage] the app
@@ -65,10 +65,10 @@ Please see the [documentation][docs] for full setup instructions and configurati
 2. Install all dependencies: `poetry install`
-<h2 align="center"> 💖 Help Fund Auto-GPT's Development 💖</h2>
+<h2 align="center"> 💖 Help Fund AutoGPT's Development 💖</h2>
 <p align="center">
-If you can spare a coffee, you can help to cover the costs of developing Auto-GPT and help to push the boundaries of fully autonomous AI!
+If you can spare a coffee, you can help to cover the costs of developing AutoGPT and help to push the boundaries of fully autonomous AI!
-Your support is greatly appreciated. Development of this free, open-source project is made possible by all the <a href="https://github.com/Significant-Gravitas/Auto-GPT/graphs/contributors">contributors</a> and <a href="https://github.com/sponsors/Torantulino">sponsors</a>. If you'd like to sponsor this project and have your avatar or company logo appear below <a href="https://github.com/sponsors/Torantulino">click here</a>.
+Your support is greatly appreciated. Development of this free, open-source project is made possible by all the <a href="https://github.com/Significant-Gravitas/AutoGPT/graphs/contributors">contributors</a> and <a href="https://github.com/sponsors/Torantulino">sponsors</a>. If you'd like to sponsor this project and have your avatar or company logo appear below <a href="https://github.com/sponsors/Torantulino">click here</a>.
 </p>
 <p align="center">
@@ -124,26 +124,26 @@ This experiment aims to showcase the potential of GPT-4 but comes with some limi
 ## 🛡 Disclaimer
-This project, Auto-GPT, is an experimental application and is provided "as-is" without any warranty, express or implied. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise.
+This project, AutoGPT, is an experimental application and is provided "as-is" without any warranty, express or implied. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise.
-The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by Auto-GPT.
+The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by AutoGPT.
 **Please note that the use of the GPT-4 language model can be expensive due to its token usage.** By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges.
-As an autonomous experiment, Auto-GPT may generate content or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.
+As an autonomous experiment, AutoGPT may generate content or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.
-By using Auto-GPT, you agree to indemnify, defend, and hold harmless the developers, contributors, and any affiliated parties from and against any and all claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising from your use of this software or your violation of these terms.
+By using AutoGPT, you agree to indemnify, defend, and hold harmless the developers, contributors, and any affiliated parties from and against any and all claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising from your use of this software or your violation of these terms.
 ## 🐦 Connect with Us on Twitter
-Stay up-to-date with the latest news, updates, and insights about Auto-GPT by following our Twitter accounts. Engage with the developer and the AI's own account for interesting discussions, project updates, and more.
+Stay up-to-date with the latest news, updates, and insights about AutoGPT by following our Twitter accounts. Engage with the developer and the AI's own account for interesting discussions, project updates, and more.
 - **Developer**: Follow [@siggravitas](https://twitter.com/siggravitas) for insights into the development process, project updates, and related topics from the creator of Entrepreneur-GPT.
-We look forward to connecting with you and hearing your thoughts, ideas, and experiences with Auto-GPT. Join us on Twitter and let's explore the future of AI together!
+We look forward to connecting with you and hearing your thoughts, ideas, and experiences with AutoGPT. Join us on Twitter and let's explore the future of AI together!
 <p align="center">
-<a href="https://star-history.com/#Torantulino/auto-gpt&Date">
+<a href="https://star-history.com/#Significant-Gravitas/AutoGPT&Date">
-<img src="https://api.star-history.com/svg?repos=Torantulino/auto-gpt&type=Date" alt="Star History Chart">
+<img src="https://api.star-history.com/svg?repos=Significant-Gravitas/AutoGPT&type=Date" alt="Star History Chart">
 </a>
 </p>

View File

@@ -37,7 +37,7 @@ def bootstrap_agent(task: str, continuous_mode: bool) -> Agent:
 command_registry = CommandRegistry.with_command_modules(COMMAND_CATEGORIES, config)
 ai_config = AIConfig(
-ai_name="Auto-GPT",
+ai_name="AutoGPT",
 ai_role="a multi-purpose AI assistant.",
 ai_goals=[task],
 )

View File

@@ -1,4 +1,4 @@
-"""Auto-GPT: A GPT powered AI Assistant"""
+"""AutoGPT: A GPT powered AI Assistant"""
 import autogpt.app.cli
 if __name__ == "__main__":

View File

@@ -38,7 +38,7 @@ logger = logging.getLogger(__name__)
 class PlanningAgent(ContextMixin, WorkspaceMixin, BaseAgent):
-"""Agent class for interacting with Auto-GPT."""
+"""Agent class for interacting with AutoGPT."""
 ThoughtProcessID = Literal["plan", "action", "evaluate"]

View File

@@ -17,7 +17,7 @@ import click
 "--ai-settings",
 "-C",
 help=(
-"Specifies which ai_settings.yaml file to use, relative to the Auto-GPT"
+"Specifies which ai_settings.yaml file to use, relative to the AutoGPT"
 " root directory. Will also automatically skip the re-prompt."
 ),
 )
@@ -51,7 +51,7 @@ import click
 @click.option(
 "--allow-downloads",
 is_flag=True,
-help="Dangerous: Allows Auto-GPT to download files natively.",
+help="Dangerous: Allows AutoGPT to download files natively.",
 )
 @click.option(
 "--skip-news",
@@ -112,7 +112,7 @@ def main(
 """
 Welcome to AutoGPT an experimental open-source application showcasing the capabilities of the GPT-4 pushing the boundaries of AI.
-Start an Auto-GPT assistant.
+Start an AutoGPT assistant.
 """
 # Put imports inside function to avoid importing everything when starting the CLI
 from autogpt.app.main import run_auto_gpt

View File

@@ -47,7 +47,7 @@ def create_config(
 gpt4only (bool): Whether to enable GPT4 only mode
 memory_type (str): The type of memory backend to use
 browser_name (str): The name of the browser to use when using selenium to scrape the web
-allow_downloads (bool): Whether to allow Auto-GPT to download files natively
+allow_downloads (bool): Whether to allow AutoGPT to download files natively
 skips_news (bool): Whether to suppress the output of latest news on startup
 """
 config.debug_mode = False
@@ -152,7 +152,7 @@ def create_config(
 if allow_downloads:
 print_attribute("Native Downloading", "ENABLED")
 logger.warn(
-msg=f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET}"
+msg=f"{Back.LIGHTYELLOW_EX}AutoGPT will now be able to download and save files to your machine.{Back.RESET}"
 " It is recommended that you monitor any files it downloads carefully.",
 )
 logger.warn(

View File

@@ -135,7 +135,7 @@ async def run_auto_gpt(
 logger.error(
 "WARNING: You are running on an older version of Python. "
 "Some people have observed problems with certain "
-"parts of Auto-GPT with this version. "
+"parts of AutoGPT with this version. "
 "Please consider upgrading to Python 3.10 or higher.",
 )
@@ -273,7 +273,7 @@ async def run_interaction_loop(
 def graceful_agent_interrupt(signum: int, frame: Optional[FrameType]) -> None:
 nonlocal cycle_budget, cycles_remaining, spinner
 if cycles_remaining in [0, 1]:
-logger.error("Interrupt signal received. Stopping Auto-GPT immediately.")
+logger.error("Interrupt signal received. Stopping AutoGPT immediately.")
 sys.exit()
 else:
 restart_spinner = spinner.running

View File

@@ -37,7 +37,7 @@ async def interactive_ai_config_setup(
 # Construct the prompt
 user_friendly_output(
-title="Welcome to Auto-GPT! ",
+title="Welcome to AutoGPT! ",
 message="run with '--help' for more information.",
 title_color=Fore.GREEN,
 )
@@ -60,7 +60,7 @@ async def interactive_ai_config_setup(
 )
 user_desire = await utils.clean_input(
-config, f"{Fore.LIGHTBLUE_EX}I want Auto-GPT to{Style.RESET_ALL}: "
+config, f"{Fore.LIGHTBLUE_EX}I want AutoGPT to{Style.RESET_ALL}: "
 )
 if user_desire.strip() == "":

View File

@@ -49,11 +49,11 @@ async def clean_input(config: Config, prompt: str = ""):
 # handle_sigint must be set to False, so the signal handler in the
 # autogpt/main.py could be employed properly. This referes to
-# https://github.com/Significant-Gravitas/Auto-GPT/pull/4799/files/3966cdfd694c2a80c0333823c3bc3da090f85ed3#r1264278776
+# https://github.com/Significant-Gravitas/AutoGPT/pull/4799/files/3966cdfd694c2a80c0333823c3bc3da090f85ed3#r1264278776
 answer = await session.prompt_async(ANSI(prompt), handle_sigint=False)
 return answer
 except KeyboardInterrupt:
-logger.info("You interrupted Auto-GPT")
+logger.info("You interrupted AutoGPT")
 logger.info("Quitting...")
 exit(0)
@@ -61,7 +61,7 @@ async def clean_input(config: Config, prompt: str = ""):
 def get_bulletin_from_web():
 try:
 response = requests.get(
-"https://raw.githubusercontent.com/Significant-Gravitas/Auto-GPT/master/autogpts/autogpt/BULLETIN.md"
+"https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/autogpts/autogpt/BULLETIN.md"
 )
 if response.status_code == 200:
 return response.text
@@ -90,12 +90,12 @@ def get_latest_bulletin() -> tuple[str, bool]:
 new_bulletin = get_bulletin_from_web()
 is_new_news = new_bulletin != "" and new_bulletin != current_bulletin
-news_header = Fore.YELLOW + "Welcome to Auto-GPT!\n"
+news_header = Fore.YELLOW + "Welcome to AutoGPT!\n"
 if new_bulletin or current_bulletin:
 news_header += (
-"Below you'll find the latest Auto-GPT News and updates regarding features!\n"
+"Below you'll find the latest AutoGPT News and updates regarding features!\n"
 "If you don't wish to see this message, you "
-"can run Auto-GPT with the *--skip-news* flag.\n"
+"can run AutoGPT with the *--skip-news* flag.\n"
 )
 if new_bulletin and is_new_news:

View File

@@ -9,7 +9,7 @@ if TYPE_CHECKING:
 from autogpt.models.command import Command, CommandOutput, CommandParameter
-# Unique identifier for auto-gpt commands
+# Unique identifier for AutoGPT commands
 AUTO_GPT_COMMAND_IDENTIFIER = "auto_gpt_command"

View File

@@ -118,7 +118,7 @@ def execute_python_file(
 if we_are_running_in_a_docker_container():
 logger.debug(
-f"Auto-GPT is running in a Docker container; executing {file_path} directly..."
+f"AutoGPT is running in a Docker container; executing {file_path} directly..."
 )
 result = subprocess.run(
 ["python", "-B", str(file_path)] + args,
@@ -131,7 +131,7 @@ def execute_python_file(
 else:
 raise CodeExecutionError(result.stderr)
-logger.debug("Auto-GPT is not running in a Docker container")
+logger.debug("AutoGPT is not running in a Docker container")
 try:
 client = docker.from_env()
 # You can replace this with the desired Python image/version

View File

@@ -8,7 +8,7 @@
 ## The Motivation
-The `master` branch of Auto-GPT is an organically grown amalgamation of many thoughts
+The `master` branch of AutoGPT is an organically grown amalgamation of many thoughts
 and ideas about agent-driven autonomous systems. It lacks clear abstraction boundaries,
 has issues of global state and poorly encapsulated state, and is generally just hard to
 make effective changes to. Mainly it's just a system that's hard to make changes to.
@@ -51,7 +51,7 @@ for the breaking version change. We justified this by saying:
 We want a lot of things from a configuration system. We lean heavily on it in the
 `master` branch to allow several parts of the system to communicate with each other.
-[Recent work](https://github.com/Significant-Gravitas/Auto-GPT/pull/4737) has made it
+[Recent work](https://github.com/Significant-Gravitas/AutoGPT/pull/4737) has made it
 so that the config is no longer a singleton object that is materialized from the import
 state, but it's still treated as a
 [god object](https://en.wikipedia.org/wiki/God_object) containing all information about
@@ -90,8 +90,8 @@ much agent state, that means a user can only work with one agent at a time.
 ## Memory
 The memory system has been under extremely active development.
-See [#3536](https://github.com/Significant-Gravitas/Auto-GPT/issues/3536) and
+See [#3536](https://github.com/Significant-Gravitas/AutoGPT/issues/3536) and
-[#4208](https://github.com/Significant-Gravitas/Auto-GPT/pull/4208) for discussion and
+[#4208](https://github.com/Significant-Gravitas/AutoGPT/pull/4208) for discussion and
 work in the `master` branch. The TL;DR is
 that we noticed a couple of months ago that the `Agent` performed **worse** with
 permanent memory than without it. Since then the knowledge storage and retrieval
@@ -231,7 +231,7 @@ different tools, we'd use a different paradigm.
 While many things are classes in the re-arch, they are not classes in the same way.
 There are three kinds of things (roughly) that are written as classes in the re-arch:
-1. **Configuration**: Auto-GPT has *a lot* of configuration. This configuration
+1. **Configuration**: AutoGPT has *a lot* of configuration. This configuration
 is *data* and we use **[Pydantic](https://docs.pydantic.dev/latest/)** to manage it as
 pydantic is basically industry standard for this stuff. It provides runtime validation
 for all the configuration and allows us to easily serialize configuration to both basic
@@ -241,7 +241,7 @@ There are three kinds of things (roughly) that are written as classes in the re-
 agent-to-agent communication. *These are essentially
 [structs](https://en.wikipedia.org/wiki/Struct_(C_programming_language)) rather than
 traditional classes.*
-2. **Internal Data**: Very similar to configuration, Auto-GPT passes around boatloads
+2. **Internal Data**: Very similar to configuration, AutoGPT passes around boatloads
 of internal data. We are interacting with language models and language model APIs
 which means we are handling lots of *structured* but *raw* text. Here we also
 leverage **pydantic** to both *parse* and *validate* the internal data and also to
@@ -257,8 +257,8 @@ There are three kinds of things (roughly) that are written as classes in the re-
 subclass of the interface. This should not be controversial.
 The approach is consistent with
-[prior](https://github.com/Significant-Gravitas/Auto-GPT/issues/2458)
+[prior](https://github.com/Significant-Gravitas/AutoGPT/issues/2458)
-[work](https://github.com/Significant-Gravitas/Auto-GPT/pull/2442) done by other
+[work](https://github.com/Significant-Gravitas/AutoGPT/pull/2442) done by other
 maintainers in this direction.
 From an organization standpoint, OO programming is by far the most popular programming
@@ -269,4 +269,3 @@ contributing.
 Finally, and importantly, we scoped the plan and initial design of the re-arch as a
 large group of maintainers and collaborators early on. This is consistent with the
 design we chose and no-one offered alternatives.

View File

@@ -1,15 +1,15 @@
-# Auto-GPT Core
+# AutoGPT Core
 This subpackage contains the ongoing work for the
-[Auto-GPT Re-arch](https://github.com/Significant-Gravitas/Auto-GPT/issues/4770). It is
+[AutoGPT Re-arch](https://github.com/Significant-Gravitas/AutoGPT/issues/4770). It is
 a work in progress and is not yet feature complete. In particular, it does not yet
-have many of the Auto-GPT commands implemented and is pending ongoing work to
+have many of the AutoGPT commands implemented and is pending ongoing work to
-[re-incorporate vector-based memory and knowledge retrieval](https://github.com/Significant-Gravitas/Auto-GPT/issues/3536).
+[re-incorporate vector-based memory and knowledge retrieval](https://github.com/Significant-Gravitas/AutoGPT/issues/3536).
 ## [Overview](ARCHITECTURE_NOTES.md)
-The Auto-GPT Re-arch is a re-implementation of the Auto-GPT agent that is designed to be more modular,
+The AutoGPT Re-arch is a re-implementation of the AutoGPT agent that is designed to be more modular,
-more extensible, and more maintainable than the original Auto-GPT agent. It is also designed to be
+more extensible, and more maintainable than the original AutoGPT agent. It is also designed to be
 more accessible to new developers and to be easier to contribute to. The re-arch is a work in progress
 and is not yet feature complete. It is also not yet ready for production use.
@@ -26,14 +26,14 @@ and is not yet feature complete. It is also not yet ready for production use.
 ## CLI Application
-There are two client applications for Auto-GPT included.
+There are two client applications for AutoGPT included.
 :star2: **This is the reference application I'm working with for now** :star2:
 The first app is a straight CLI application. I have not done anything yet to port all the friendly display stuff from the ~~`logger.typewriter_log`~~`user_friendly_output` logic.
-- [Entry Point](https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/core/runner/cli_app/cli.py)
+- [Entry Point](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt/core/runner/cli_app/cli.py)
-- [Client Application](https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/core/runner/cli_app/main.py)
+- [Client Application](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt/core/runner/cli_app/main.py)
 You'll then need a settings file. Run
@@ -53,7 +53,7 @@ depending on your operating system:
 At a bare minimum, you'll need to set `openai.credentials.api_key` to your OpenAI API Key to run
 the model.
-You can then run Auto-GPT with
+You can then run AutoGPT with
 ```
 poetry run cli run
@@ -71,9 +71,9 @@ The second app is still a CLI, but it sets up a local webserver that the client
 rather than invoking calls to the Agent library code directly. This application is essentially a sketch
 at this point as the folks who were driving it have had less time (and likely not enough clarity) to proceed.
-- [Entry Point](https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/core/runner/cli_web_app/cli.py)
+- [Entry Point](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt/core/runner/cli_web_app/cli.py)
-- [Client Application](https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/core/runner/cli_web_app/client/client.py)
+- [Client Application](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt/core/runner/cli_web_app/client/client.py)
-- [Server API](https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/core/runner/cli_web_app/server/api.py)
+- [Server API](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt/core/runner/cli_web_app/server/api.py)
 To run, you still need to generate a default configuration. You can do

View File

@@ -9,7 +9,7 @@
 USER_OBJECTIVE = (
 "Write a wikipedia style article about the project: "
-"https://github.com/significant-gravitas/Auto-GPT"
+"https://github.com/significant-gravitas/AutoGPT"
 )

View File

@@ -5,7 +5,7 @@ authors = ["Significant Gravitas <support@agpt.co>"]
 maintainers = ["Reinier van der Leer <reinier.vanderleer@agpt.co>"]
 description = "An open-source attempt at an autonomous generalist agent"
 readme = "README.md"
-repository = "https://github.com/Significant-Gravitas/Auto-GPT/tree/master/autogpts/agpt"
+repository = "https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpts/agpt"
 # documentation = "https://docs.agpt.co/autogpts/agpt" # TODO
 classifiers = [
 "Programming Language :: Python :: 3",
@@ -33,8 +33,8 @@ autogpt.add_command(make_settings)
) )
@coroutine @coroutine
async def run(settings_file: str, pdb: bool) -> None: async def run(settings_file: str, pdb: bool) -> None:
"""Run the Auto-GPT agent.""" """Run the AutoGPT agent."""
click.echo("Running Auto-GPT agent...") click.echo("Running AutoGPT agent...")
settings_file: Path = Path(settings_file) settings_file: Path = Path(settings_file)
settings = {} settings = {}
if settings_file.exists(): if settings_file.exists():
@@ -14,7 +14,7 @@ from autogpt.core.runner.client_lib.parser import (
async def run_auto_gpt(user_configuration: dict): async def run_auto_gpt(user_configuration: dict):
"""Run the Auto-GPT CLI client.""" """Run the AutoGPT CLI client."""
configure_root_logger() configure_root_logger()
@@ -38,7 +38,7 @@ async def run_auto_gpt(user_configuration: dict):
# Step 2. Get a name and goals for the agent. # Step 2. Get a name and goals for the agent.
# First we need to figure out what the user wants to do with the agent. # First we need to figure out what the user wants to do with the agent.
# We'll do this by asking the user for a prompt. # We'll do this by asking the user for a prompt.
user_objective = click.prompt("What do you want Auto-GPT to do?") user_objective = click.prompt("What do you want AutoGPT to do?")
# Ask a language model to determine a name and goals for a suitable agent. # Ask a language model to determine a name and goals for a suitable agent.
name_and_goals = await SimpleAgent.determine_agent_name_and_goals( name_and_goals = await SimpleAgent.determine_agent_name_and_goals(
user_objective, user_objective,
@@ -30,8 +30,8 @@ autogpt.add_command(make_settings)
type=click.INT, type=click.INT,
) )
def server(port: int) -> None: def server(port: int) -> None:
"""Run the Auto-GPT runner httpserver.""" """Run the AutoGPT runner httpserver."""
click.echo("Running Auto-GPT runner httpserver...") click.echo("Running AutoGPT runner httpserver...")
AgentProtocol.handle_task(task_handler).start(port) AgentProtocol.handle_task(task_handler).start(port)
@@ -43,7 +43,7 @@ def server(port: int) -> None:
) )
@coroutine @coroutine
async def client(settings_file) -> None: async def client(settings_file) -> None:
"""Run the Auto-GPT runner client.""" """Run the AutoGPT runner client."""
settings_file = pathlib.Path(settings_file) settings_file = pathlib.Path(settings_file)
settings = {} settings = {}
if settings_file.exists(): if settings_file.exists():
@@ -92,7 +92,7 @@ def bootstrap_agent(task, continuous_mode) -> Agent:
config.workspace_path = Workspace.init_workspace_directory(config) config.workspace_path = Workspace.init_workspace_directory(config)
config.file_logger_path = Workspace.build_file_logger_path(config.workspace_path) config.file_logger_path = Workspace.build_file_logger_path(config.workspace_path)
ai_config = AIConfig( ai_config = AIConfig(
ai_name="Auto-GPT", ai_name="AutoGPT",
ai_role="a multi-purpose AI assistant.", ai_role="a multi-purpose AI assistant.",
ai_goals=[task], ai_goals=[task],
) )
@@ -54,7 +54,7 @@ def request_user_double_check(additionalText: Optional[str] = None) -> None:
if not additionalText: if not additionalText:
additionalText = ( additionalText = (
"Please ensure you've setup and configured everything" "Please ensure you've setup and configured everything"
" correctly. Read https://github.com/Torantulino/Auto-GPT#readme to " " correctly. Read https://github.com/Significant-Gravitas/AutoGPT/autogpts/autogpt#readme to "
"double check. You can also create a github issue or join the discord" "double check. You can also create a github issue or join the discord"
" and ask there!" " and ask there!"
) )
@@ -67,7 +67,7 @@ def get_memory(config: Config) -> VectorMemory:
"The Pinecone memory backend has been rendered incompatible by work on " "The Pinecone memory backend has been rendered incompatible by work on "
"the memory system, and was removed. Whether support will be added back " "the memory system, and was removed. Whether support will be added back "
"in the future is subject to discussion, feel free to pitch in: " "in the future is subject to discussion, feel free to pitch in: "
"https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280" "https://github.com/Significant-Gravitas/AutoGPT/discussions/4280"
) )
# if not PineconeMemory: # if not PineconeMemory:
# logger.warn( # logger.warn(
@@ -97,7 +97,7 @@ def get_memory(config: Config) -> VectorMemory:
"The Weaviate memory backend has been rendered incompatible by work on " "The Weaviate memory backend has been rendered incompatible by work on "
"the memory system, and was removed. Whether support will be added back " "the memory system, and was removed. Whether support will be added back "
"in the future is subject to discussion, feel free to pitch in: " "in the future is subject to discussion, feel free to pitch in: "
"https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280" "https://github.com/Significant-Gravitas/AutoGPT/discussions/4280"
) )
# if not WeaviateMemory: # if not WeaviateMemory:
# logger.warn( # logger.warn(
@@ -112,7 +112,7 @@ def get_memory(config: Config) -> VectorMemory:
"The Milvus memory backend has been rendered incompatible by work on " "The Milvus memory backend has been rendered incompatible by work on "
"the memory system, and was removed. Whether support will be added back " "the memory system, and was removed. Whether support will be added back "
"in the future is subject to discussion, feel free to pitch in: " "in the future is subject to discussion, feel free to pitch in: "
"https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280" "https://github.com/Significant-Gravitas/AutoGPT/discussions/4280"
) )
# if not MilvusMemory: # if not MilvusMemory:
# logger.warn( # logger.warn(
@@ -13,7 +13,7 @@ class Message(TypedDict):
class BaseOpenAIPlugin(AutoGPTPluginTemplate): class BaseOpenAIPlugin(AutoGPTPluginTemplate):
""" """
This is a BaseOpenAIPlugin class for generating Auto-GPT plugins. This is a BaseOpenAIPlugin class for generating AutoGPT plugins.
""" """
def __init__(self, manifests_specs_clients: dict): def __init__(self, manifests_specs_clients: dict):
@@ -26,4 +26,4 @@ DEFAULT_TASK_PROMPT_AICONFIG_AUTOMATIC = (
"Respond only with the output in the exact format specified in the system prompt, with no explanation or conversation.\n" "Respond only with the output in the exact format specified in the system prompt, with no explanation or conversation.\n"
) )
DEFAULT_USER_DESIRE_PROMPT = "Write a wikipedia style article about the project: https://github.com/significant-gravitas/Auto-GPT" # Default prompt DEFAULT_USER_DESIRE_PROMPT = "Write a wikipedia style article about the project: https://github.com/significant-gravitas/AutoGPT" # Default prompt
@@ -273,7 +273,7 @@ tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pyte
[[package]] [[package]]
name = "auto_gpt_plugin_template" name = "auto_gpt_plugin_template"
version = "0.0.2" version = "0.0.2"
description = "The template plugin for Auto-GPT." description = "The template plugin for AutoGPT."
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [] files = []
@@ -284,7 +284,7 @@ abstract-singleton = "*"
[package.source] [package.source]
type = "git" type = "git"
url = "https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template" url = "https://github.com/Significant-Gravitas/AutoGPT-Plugin-Template"
reference = "0.1.0" reference = "0.1.0"
resolved_reference = "7612a14c629dc64ad870eee4d05850d60e1dd9ce" resolved_reference = "7612a14c629dc64ad870eee4d05850d60e1dd9ce"
@@ -6,7 +6,7 @@ authors = [
] ]
readme = "README.md" readme = "README.md"
description = "An open-source attempt to make GPT-4 autonomous" description = "An open-source attempt to make GPT-4 autonomous"
homepage = "https://github.com/Significant-Gravitas/Auto-GPT/tree/master/autogpts/autogpt" homepage = "https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpts/autogpt"
classifiers = [ classifiers = [
"Programming Language :: Python :: 3", "Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License", "License :: OSI Approved :: MIT License",
@@ -47,7 +47,7 @@ def test_invalid_json_leading_sentence_with_gpt():
"command": { "command": {
"name": "browse_website", "name": "browse_website",
"args":{ "args":{
"url": "https://github.com/Torantulino/Auto-GPT" "url": "https://github.com/Significant-Gravitas/AutoGPT"
} }
}, },
"thoughts": "thoughts":
@@ -62,7 +62,7 @@ def test_invalid_json_leading_sentence_with_gpt():
good_obj = { good_obj = {
"command": { "command": {
"name": "browse_website", "name": "browse_website",
"args": {"url": "https://github.com/Torantulino/Auto-GPT"}, "args": {"url": "https://github.com/Significant-Gravitas/AutoGPT"},
}, },
"thoughts": { "thoughts": {
"text": "I suggest we start browsing the repository to find any issues that we can fix.", "text": "I suggest we start browsing the repository to find any issues that we can fix.",
@@ -78,13 +78,13 @@ def test_invalid_json_leading_sentence_with_gpt():
def test_invalid_json_leading_sentence_with_gpt(self): def test_invalid_json_leading_sentence_with_gpt(self):
"""Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False.""" """Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False."""
json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this. json_str = """I will first need to browse the repository (https://github.com/Significant-Gravitas/AutoGPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this.
{ {
"command": { "command": {
"name": "browse_website", "name": "browse_website",
"args":{ "args":{
"url": "https://github.com/Torantulino/Auto-GPT" "url": "https://github.com/Significant-Gravitas/AutoGPT"
} }
}, },
"thoughts": "thoughts":
@@ -99,7 +99,7 @@ def test_invalid_json_leading_sentence_with_gpt(self):
good_obj = { good_obj = {
"command": { "command": {
"name": "browse_website", "name": "browse_website",
"args": {"url": "https://github.com/Torantulino/Auto-GPT"}, "args": {"url": "https://github.com/Significant-Gravitas/AutoGPT"},
}, },
"thoughts": { "thoughts": {
"text": "Browsing the repository to identify potential bugs", "text": "Browsing the repository to identify potential bugs",
@@ -1,4 +1,4 @@
"""This is the Test plugin for Auto-GPT.""" """This is the Test plugin for AutoGPT."""
from typing import Any, Dict, List, Optional, Tuple, TypeVar from typing import Any, Dict, List, Optional, Tuple, TypeVar
from auto_gpt_plugin_template import AutoGPTPluginTemplate from auto_gpt_plugin_template import AutoGPTPluginTemplate
@@ -8,12 +8,12 @@ PromptGenerator = TypeVar("PromptGenerator")
class AutoGPTGuanaco(AutoGPTPluginTemplate): class AutoGPTGuanaco(AutoGPTPluginTemplate):
""" """
This is plugin for Auto-GPT. This is plugin for AutoGPT.
""" """
def __init__(self): def __init__(self):
super().__init__() super().__init__()
self._name = "Auto-GPT-Guanaco" self._name = "AutoGPT-Guanaco"
self._version = "0.1.0" self._version = "0.1.0"
self._description = "This is a Guanaco local model plugin." self._description = "This is a Guanaco local model plugin."
@@ -11,8 +11,8 @@ from autogpt.logs.utils import remove_color_codes
"COMMAND = browse_website ARGUMENTS = {'url': 'https://www.google.com', 'question': 'What is the capital of France?'}", "COMMAND = browse_website ARGUMENTS = {'url': 'https://www.google.com', 'question': 'What is the capital of France?'}",
), ),
( (
"{'Schaue dir meine Projekte auf github () an, als auch meine Webseiten': 'https://github.com/Significant-Gravitas/Auto-GPT, https://discord.gg/autogpt und https://twitter.com/SigGravitas'}", "{'Schaue dir meine Projekte auf github () an, als auch meine Webseiten': 'https://github.com/Significant-Gravitas/AutoGPT, https://discord.gg/autogpt und https://twitter.com/Auto_GPT'}",
"{'Schaue dir meine Projekte auf github () an, als auch meine Webseiten': 'https://github.com/Significant-Gravitas/Auto-GPT, https://discord.gg/autogpt und https://twitter.com/SigGravitas'}", "{'Schaue dir meine Projekte auf github () an, als auch meine Webseiten': 'https://github.com/Significant-Gravitas/AutoGPT, https://discord.gg/autogpt und https://twitter.com/SigGravitas'}",
), ),
("", ""), ("", ""),
("hello", "hello"), ("hello", "hello"),
@@ -85,7 +85,7 @@ def test_get_bulletin_from_web_success(mock_get):
assert expected_content in bulletin assert expected_content in bulletin
mock_get.assert_called_with( mock_get.assert_called_with(
"https://raw.githubusercontent.com/Significant-Gravitas/Auto-GPT/master/autogpts/autogpt/BULLETIN.md" "https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/autogpts/autogpt/BULLETIN.md"
) )
@@ -1,12 +1,12 @@
# 🚀 **Auto-GPT-Forge**: Build Your Own Auto-GPT Agent! 🧠 # 🚀 **AutoGPT-Forge**: Build Your Own AutoGPT Agent! 🧠
### 🌌 Dive into the Universe of Auto-GPT Creation! 🌌 ### 🌌 Dive into the Universe of AutoGPT Creation! 🌌
Ever dreamt of becoming the genius behind an AI agent? Dive into the *Forge*, where **you** become the creator! Ever dreamt of becoming the genius behind an AI agent? Dive into the *Forge*, where **you** become the creator!
--- ---
### 🛠️ **Why Auto-GPT-Forge?** ### 🛠️ **Why AutoGPT-Forge?**
- 💤 **No More Boilerplate!** Don't let the mundane tasks stop you. Fork and build without the headache of starting from scratch! - 💤 **No More Boilerplate!** Don't let the mundane tasks stop you. Fork and build without the headache of starting from scratch!
- 🧠 **Brain-centric Development!** All the tools you need so you can spend 100% of your time on what matters - crafting the brain of your AI! - 🧠 **Brain-centric Development!** All the tools you need so you can spend 100% of your time on what matters - crafting the brain of your AI!
- 🛠️ **Tooling ecosystem!** We work with the best in class tools to bring you the best experience possible! - 🛠️ **Tooling ecosystem!** We work with the best in class tools to bring you the best experience possible!
@@ -25,4 +25,4 @@ Coming soon:
3. Interacting with and Benchmarking your Agent 3. Interacting with and Benchmarking your Agent
4. Abilities 4. Abilities
5. The Planning Loop 5. The Planning Loop
6. Memories 6. Memories
@@ -36,7 +36,7 @@ class Agent:
config = Config() config = Config()
config.bind = [f"localhost:{port}"] config.bind = [f"localhost:{port}"]
app = FastAPI( app = FastAPI(
title="Auto-GPT Forge", title="AutoGPT Forge",
description="Modified version of The Agent Protocol.", description="Modified version of The Agent Protocol.",
version="v0.4", version="v0.4",
) )
@@ -12,7 +12,7 @@ the ones that require special attention due to their complexity are:
2. `upload_agent_task_artifacts`: 2. `upload_agent_task_artifacts`:
This route allows for the upload of artifacts, supporting various URI types (e.g., s3, gcs, ftp, http). This route allows for the upload of artifacts, supporting various URI types (e.g., s3, gcs, ftp, http).
The support for different URI types makes it a bit more complex, and it's important to ensure that all The support for different URI types makes it a bit more complex, and it's important to ensure that all
supported URI types are correctly managed. NOTE: The Auto-GPT team will eventually handle the most common supported URI types are correctly managed. NOTE: The AutoGPT team will eventually handle the most common
uri types for you. uri types for you.
3. `create_agent_task`: 3. `create_agent_task`:
@@ -42,7 +42,7 @@ async def root():
""" """
Root endpoint that returns a welcome message. Root endpoint that returns a welcome message.
""" """
return Response(content="Welcome to the Auto-GPT Forge") return Response(content="Welcome to the AutoGPT Forge")
@base_router.get("/heartbeat", tags=["server"]) @base_router.get("/heartbeat", tags=["server"])
@@ -1,5 +1,5 @@
[tool.poetry] [tool.poetry]
name = "Auto-GPT-Forge" name = "AutoGPT-Forge"
version = "0.1.0" version = "0.1.0"
description = "" description = ""
authors = ["Craig Swift <craigswift13@gmail.com>"] authors = ["Craig Swift <craigswift13@gmail.com>"]
@@ -18,7 +18,7 @@ This project supports Linux (Debian based), Mac, and Windows Subsystem for Linux
## Section 2: Setting up the Forge Environment ## Section 2: Setting up the Forge Environment
To begin, you need to fork the [repository](https://github.com/Significant-Gravitas/Auto-GPT) by navigating to the main page of the repository and clicking "Fork" in the top-right corner. To begin, you need to fork the [repository](https://github.com/Significant-Gravitas/AutoGPT) by navigating to the main page of the repository and clicking "Fork" in the top-right corner.
![The Github repository](../../../docs/content/imgs/quickstart/001_repo.png) ![The Github repository](../../../docs/content/imgs/quickstart/001_repo.png)
@@ -31,7 +31,7 @@ Next, clone the repository to your local system. Ensure you have Git installed t
```bash ```bash
# replace the url with the one for your forked repo # replace the url with the one for your forked repo
git clone https://github.com/Significant-Gravitas/Auto-GPT.git git clone https://github.com/Significant-Gravitas/AutoGPT.git
``` ```
![Clone the Repository](../../../docs/content/imgs/quickstart/003_clone.png) ![Clone the Repository](../../../docs/content/imgs/quickstart/003_clone.png)
@@ -40,8 +40,8 @@ git clone https://github.com/Significant-Gravitas/Auto-GPT.git
Once you have cloned the project, change your directory to the newly cloned project: Once you have cloned the project, change your directory to the newly cloned project:
```bash ```bash
# The name of the directory will match the name you gave your fork. The defualt is Auto-GPT # The name of the directory will match the name you gave your fork. The default is AutoGPT
cd Auto-GPT cd AutoGPT
``` ```
To set up the project, utilize the `./run setup` command in the terminal. Follow the instructions to install necessary dependencies and set up your GitHub access token. To set up the project, utilize the `./run setup` command in the terminal. Follow the instructions to install necessary dependencies and set up your GitHub access token.
@@ -14,7 +14,7 @@ Save time and money while doing it through smart dependencies. The best part? It
- 1- [Beebot](https://github.com/AutoPackAI/beebot) - 1- [Beebot](https://github.com/AutoPackAI/beebot)
- 2- [mini-agi](https://github.com/muellerberndt/mini-agi) - 2- [mini-agi](https://github.com/muellerberndt/mini-agi)
- 3- [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) - 3- [Auto-GPT](https://github.com/Significant-Gravitas/AutoGPT)
## Detailed results: ## Detailed results:
@@ -1,6 +1,6 @@
{ {
"Auto-GPT": { "Auto-GPT": {
"url": "https://github.com/Significant-Gravitas/Auto-GPT", "url": "https://github.com/Significant-Gravitas/AutoGPT",
"branch": "master", "branch": "master",
"commit": "3a2d08fb415071cc94dd6fcee24cfbdd1fb487dd" "commit": "3a2d08fb415071cc94dd6fcee24cfbdd1fb487dd"
}, },
@@ -1,7 +1,7 @@
""" """
This is a minimal file intended to be run by users to help them manage the autogpt projects. This is a minimal file intended to be run by users to help them manage the autogpt projects.
If you want to contribute, please use only libraries that come as part of Python. If you want to contribute, please use only libraries that come as part of Python.
To ensure efficiency, add the imports to the functions so only what is needed is imported. To ensure efficiency, add the imports to the functions so only what is needed is imported.
""" """
try: try:
@@ -197,7 +197,7 @@ d88P 888 "Y88888 "Y888 "Y88P" "Y8888P88 888 888
if install_error: if install_error:
click.echo( click.echo(
click.style( click.style(
"\n\n🔴 If you need help, please raise a ticket on GitHub at https://github.com/Significant-Gravitas/Auto-GPT/issues\n\n", "\n\n🔴 If you need help, please raise a ticket on GitHub at https://github.com/Significant-Gravitas/AutoGPT/issues\n\n",
fg="magenta", fg="magenta",
bold=True, bold=True,
) )
@@ -1,8 +1,8 @@
# Creating Challenges for Auto-GPT # Creating Challenges for AutoGPT
🏹 We're on the hunt for talented Challenge Creators! 🎯 🏹 We're on the hunt for talented Challenge Creators! 🎯
Join us in shaping the future of Auto-GPT by designing challenges that test its limits. Your input will be invaluable in guiding our progress and ensuring that we're on the right track. We're seeking individuals with a diverse skill set, including: Join us in shaping the future of AutoGPT by designing challenges that test its limits. Your input will be invaluable in guiding our progress and ensuring that we're on the right track. We're seeking individuals with a diverse skill set, including:
🎨 UX Design: Your expertise will enhance the user experience for those attempting to conquer our challenges. With your help, we'll develop a dedicated section in our wiki, and potentially even launch a standalone website. 🎨 UX Design: Your expertise will enhance the user experience for those attempting to conquer our challenges. With your help, we'll develop a dedicated section in our wiki, and potentially even launch a standalone website.
@@ -10,11 +10,11 @@ Join us in shaping the future of Auto-GPT by designing challenges that test its
⚙️ DevOps Skills: Experience with CI pipelines in GitHub and possibly Google Cloud Platform will be instrumental in streamlining our operations. ⚙️ DevOps Skills: Experience with CI pipelines in GitHub and possibly Google Cloud Platform will be instrumental in streamlining our operations.
Are you ready to play a pivotal role in Auto-GPT's journey? Apply now to become a Challenge Creator by opening a PR! 🚀 Are you ready to play a pivotal role in AutoGPT's journey? Apply now to become a Challenge Creator by opening a PR! 🚀
# Getting Started # Getting Started
Clone the original Auto-GPT repo and checkout to master branch Clone the original AutoGPT repo and checkout to master branch
The challenges are not written using a specific framework. They try to be very agnostic The challenges are not written using a specific framework. They try to be very agnostic
@@ -27,7 +27,7 @@ Output => Artifact (files, image, code, etc, etc...)
## Defining your Agent ## Defining your Agent
Go to https://github.com/Significant-Gravitas/Auto-GPT/blob/master/tests/integration/agent_factory.py Go to https://github.com/Significant-Gravitas/AutoGPT/blob/master/tests/integration/agent_factory.py
Create your agent fixture. Create your agent fixture.
@@ -1,3 +1,3 @@
# Information Retrieval # Information Retrieval
Information retrieval challenges are designed to evaluate the proficiency of an AI agent, such as Auto-GPT, in searching, extracting, and presenting relevant information from a vast array of sources. These challenges often encompass tasks such as interpreting user queries, browsing the web, and filtering through unstructured data. Information retrieval challenges are designed to evaluate the proficiency of an AI agent, such as AutoGPT, in searching, extracting, and presenting relevant information from a vast array of sources. These challenges often encompass tasks such as interpreting user queries, browsing the web, and filtering through unstructured data.
@@ -1,30 +1,30 @@
introduction.md introduction.md
# Introduction to Challenges # Introduction to Challenges
Welcome to the Auto-GPT Challenges page! This is a space where we encourage community members to collaborate and contribute towards improving Auto-GPT by identifying and solving challenges that Auto-GPT is not yet able to achieve. Welcome to the AutoGPT Challenges page! This is a space where we encourage community members to collaborate and contribute towards improving AutoGPT by identifying and solving challenges that AutoGPT is not yet able to achieve.
## What are challenges? ## What are challenges?
Challenges are tasks or problems that Auto-GPT has difficulty solving or has not yet been able to accomplish. These may include improving specific functionalities, enhancing the model's understanding of specific domains, or even developing new features that the current version of Auto-GPT lacks. Challenges are tasks or problems that AutoGPT has difficulty solving or has not yet been able to accomplish. These may include improving specific functionalities, enhancing the model's understanding of specific domains, or even developing new features that the current version of AutoGPT lacks.
## Why are challenges important? ## Why are challenges important?
Addressing challenges helps us improve Auto-GPT's performance, usability, and versatility. By working together to tackle these challenges, we can create a more powerful and efficient tool for everyone. It also allows the community to actively contribute to the project, making it a true open-source effort. Addressing challenges helps us improve AutoGPT's performance, usability, and versatility. By working together to tackle these challenges, we can create a more powerful and efficient tool for everyone. It also allows the community to actively contribute to the project, making it a true open-source effort.
## How can you participate? ## How can you participate?
There are two main ways to get involved with challenges: There are two main ways to get involved with challenges:
1. **Submit a Challenge**: If you have identified a task that Auto-GPT struggles with, you can submit it as a challenge. This allows others to see the issue and collaborate on finding a solution. 1. **Submit a Challenge**: If you have identified a task that AutoGPT struggles with, you can submit it as a challenge. This allows others to see the issue and collaborate on finding a solution.
2. **Beat a Challenge**: If you have a solution or idea to tackle an existing challenge, you can contribute by working on the challenge and submitting your solution. 2. **Beat a Challenge**: If you have a solution or idea to tackle an existing challenge, you can contribute by working on the challenge and submitting your solution.
To learn more about submitting and beating challenges, please visit the [List of Challenges](list.md), [Submit a Challenge](submit.md), and [Beat a Challenge](beat.md) pages. To learn more about submitting and beating challenges, please visit the [List of Challenges](list.md), [Submit a Challenge](submit.md), and [Beat a Challenge](beat.md) pages.
We look forward to your contributions and the exciting solutions that the community will develop together to make Auto-GPT even better! We look forward to your contributions and the exciting solutions that the community will develop together to make AutoGPT even better!
!!! warning !!! warning
We're slowly transitioning to agbenchmark. agbenchmark is a simpler way to improve Auto-GPT. Simply run: We're slowly transitioning to agbenchmark. agbenchmark is a simpler way to improve AutoGPT. Simply run:
``` ```
agbenchmark agbenchmark
@@ -1,5 +1,5 @@
# List of Challenges # List of Challenges
This page contains a curated list of challenges that Auto-GPT currently faces. If you think you have a solution or idea to tackle any of these challenges, feel free to dive in and start working on them! New challenges can also be submitted by following the guidelines on the [Submit a Challenge](challenges/submit.md) page. This page contains a curated list of challenges that AutoGPT currently faces. If you think you have a solution or idea to tackle any of these challenges, feel free to dive in and start working on them! New challenges can also be submitted by following the guidelines on the [Submit a Challenge](challenges/submit.md) page.
Memory Challenges: [List of Challenges](memory/introduction.md) Memory Challenges: [List of Challenges](memory/introduction.md)
@@ -1,5 +1,5 @@
# Memory Challenges # Memory Challenges
Memory challenges are designed to test the ability of an AI agent, like Auto-GPT, to remember and use information throughout a series of tasks. These challenges often involve following instructions, processing text files, and keeping track of important data. Memory challenges are designed to test the ability of an AI agent, like AutoGPT, to remember and use information throughout a series of tasks. These challenges often involve following instructions, processing text files, and keeping track of important data.
The goal of memory challenges is to improve an agent's performance in tasks that require remembering and using information over time. By addressing these challenges, we can enhance Auto-GPT's capabilities and make it more useful in real-world applications. The goal of memory challenges is to improve an agent's performance in tasks that require remembering and using information over time. By addressing these challenges, we can enhance AutoGPT's capabilities and make it more useful in real-world applications.
@@ -1,10 +1,10 @@
# Submit a Challenge # Submit a Challenge
If you have identified a task or problem that Auto-GPT struggles with, you can submit it as a challenge for the community to tackle. Here's how you can submit a new challenge: If you have identified a task or problem that AutoGPT struggles with, you can submit it as a challenge for the community to tackle. Here's how you can submit a new challenge:
## How to Submit a Challenge ## How to Submit a Challenge
1. Create a new `.md` file in the `challenges` directory in the Auto-GPT GitHub repository. Make sure to pick the right category. 1. Create a new `.md` file in the `challenges` directory in the AutoGPT GitHub repository. Make sure to pick the right category.
2. Name the file with a descriptive title for the challenge, using hyphens instead of spaces (e.g., `improve-context-understanding.md`). 2. Name the file with a descriptive title for the challenge, using hyphens instead of spaces (e.g., `improve-context-understanding.md`).
3. In the file, follow the [challenge_template.md](challenge_template.md) to describe the problem, define the scope, and evaluate success. 3. In the file, follow the [challenge_template.md](challenge_template.md) to describe the problem, define the scope, and evaluate success.
4. Commit the file and create a pull request. 4. Commit the file and create a pull request.
@@ -40,7 +40,7 @@ Further optional configuration:
## Stable Diffusion WebUI ## Stable Diffusion WebUI
It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT: It is possible to use your own self-hosted Stable Diffusion WebUI with AutoGPT:
```ini ```ini
IMAGE_PROVIDER=sdwebui IMAGE_PROVIDER=sdwebui
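# (sketch) the lines below are assumptions based on the SD_WEBUI_* options
# listed in the configuration reference; adjust them to your own WebUI instance
SD_WEBUI_URL=http://localhost:7860
# SD_WEBUI_AUTH=<username:password>  # only if your WebUI requires authentication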
@@ -2,11 +2,11 @@
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed. by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion, Whether support will be added back in the future is subject to discussion,
feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280 feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
## Setting Your Cache Type ## Setting Your Cache Type
By default, Auto-GPT set up with Docker Compose will use Redis as its memory backend. By default, AutoGPT set up with Docker Compose will use Redis as its memory backend.
Otherwise, the default is LocalCache (which stores memory in a JSON file). Otherwise, the default is LocalCache (which stores memory in a JSON file).
To switch to a different backend, change the `MEMORY_BACKEND` in `.env` To switch to a different backend, change the `MEMORY_BACKEND` in `.env`
@@ -22,7 +22,7 @@ to the value that you want:
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed. by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion, Whether support will be added back in the future is subject to discussion,
feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280 feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
## Memory Backend Setup ## Memory Backend Setup
@@ -37,12 +37,12 @@ Links to memory backends
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed. by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion, Whether support will be added back in the future is subject to discussion,
feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280 feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
### Redis Setup ### Redis Setup
!!! important !!! important
If you have set up Auto-GPT using Docker Compose, then Redis is included, no further If you have set up AutoGPT using Docker Compose, then Redis is included, no further
setup needed. setup needed.
!!! caution !!! caution
@@ -80,7 +80,7 @@ Links to memory backends
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed. by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion, Whether support will be added back in the future is subject to discussion,
feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280 feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
### 🌲 Pinecone API Key Setup ### 🌲 Pinecone API Key Setup
@@ -100,7 +100,7 @@ In the `.env` file set:
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed. by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion, Whether support will be added back in the future is subject to discussion,
feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280 feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
### Milvus Setup ### Milvus Setup
@@ -144,7 +144,7 @@ deployed with docker, or as a cloud service provided by [Zilliz Cloud](https://z
The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed. by work on the memory system, and have been removed.
Whether support will be added back in the future is subject to discussion, Whether support will be added back in the future is subject to discussion,
feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280 feel free to pitch in: https://github.com/Significant-Gravitas/AutoGPT/discussions/4280
### Weaviate Setup ### Weaviate Setup
[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store [Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store
@@ -152,7 +152,7 @@ data objects and vector embeddings from ML-models and scales seamlessly to billi
data objects. To set up a Weaviate database, check out their [Quickstart Tutorial](https://weaviate.io/developers/weaviate/quickstart). data objects. To set up a Weaviate database, check out their [Quickstart Tutorial](https://weaviate.io/developers/weaviate/quickstart).
Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded) Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded)
is supported which allows the Auto-GPT process itself to start a Weaviate instance. is supported which allows the AutoGPT process itself to start a Weaviate instance.
To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `poetry add weaviate-client@^3.15.4`. To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `poetry add weaviate-client@^3.15.4`.
#### Install the Weaviate client #### Install the Weaviate client
@@ -189,13 +189,13 @@ View memory usage by using the `--debug` flag :)
!!! warning !!! warning
Data ingestion is broken in v0.4.7 and possibly earlier versions. This is a known issue that will be addressed in future releases. Follow these issues for updates. Data ingestion is broken in v0.4.7 and possibly earlier versions. This is a known issue that will be addressed in future releases. Follow these issues for updates.
[Issue 4435](https://github.com/Significant-Gravitas/Auto-GPT/issues/4435) [Issue 4435](https://github.com/Significant-Gravitas/AutoGPT/issues/4435)
[Issue 4024](https://github.com/Significant-Gravitas/Auto-GPT/issues/4024) [Issue 4024](https://github.com/Significant-Gravitas/AutoGPT/issues/4024)
[Issue 2076](https://github.com/Significant-Gravitas/Auto-GPT/issues/2076) [Issue 2076](https://github.com/Significant-Gravitas/AutoGPT/issues/2076)
Memory pre-seeding allows you to ingest files into memory and pre-seed it before running Auto-GPT. Memory pre-seeding allows you to ingest files into memory and pre-seed it before running AutoGPT.
```shell ```shell
$ python data_ingestion.py -h $ python data_ingestion.py -h
@@ -214,7 +214,7 @@ options:
# python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000 # python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
``` ```
In the example above, the script initializes the memory, ingests all files within the `Auto-Gpt/auto_gpt_workspace/DataFolder` directory into memory with an overlap between chunks of 100 and a maximum length of each chunk of 2000. In the example above, the script initializes the memory, ingests all files within the `AutoGPT/auto_gpt_workspace/DataFolder` directory into memory with an overlap between chunks of 100 and a maximum length of each chunk of 2000.
Note that you can also use the `--file` argument to ingest a single file into memory and that data_ingestion.py will only ingest files within the `/auto_gpt_workspace` directory. Note that you can also use the `--file` argument to ingest a single file into memory and that data_ingestion.py will only ingest files within the `/auto_gpt_workspace` directory.
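To make the `--overlap`/`--max_length` options above concrete, here is an independent illustration of that chunking idea (a sketch of the concept, not `data_ingestion.py`'s actual code):

```python
# Conceptual sketch of "max_length" chunking with "overlap" -- not the script's
# real implementation.
def chunk_text(text: str, max_length: int = 2000, overlap: int = 100) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_length, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # each new chunk re-reads the last `overlap` characters
    return chunks

print(len(chunk_text("x" * 5000)))  # 3 chunks for a 5000-character input
```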
@@ -238,14 +238,14 @@ Memory pre-seeding is a technique for improving AI accuracy by ingesting relevan
into its memory. Chunks of data are split and added to memory, allowing the AI to access into its memory. Chunks of data are split and added to memory, allowing the AI to access
them quickly and generate more accurate responses. It's useful for large datasets or when them quickly and generate more accurate responses. It's useful for large datasets or when
specific information needs to be accessed quickly. Examples include ingesting API or specific information needs to be accessed quickly. Examples include ingesting API or
GitHub documentation before running Auto-GPT. GitHub documentation before running AutoGPT.
!!! attention !!! attention
If you use Redis for memory, make sure to run Auto-GPT with `WIPE_REDIS_ON_START=False` If you use Redis for memory, make sure to run AutoGPT with `WIPE_REDIS_ON_START=False`
For other memory backends, we currently forcefully wipe the memory when starting For other memory backends, we currently forcefully wipe the memory when starting
Auto-GPT. To ingest data with those memory backends, you can call the AutoGPT. To ingest data with those memory backends, you can call the
`data_ingestion.py` script anytime during an Auto-GPT run. `data_ingestion.py` script anytime during an AutoGPT run.
Memories will be available to the AI immediately as they are ingested, even if ingested Memories will be available to the AI immediately as they are ingested, even if ingested
while Auto-GPT is running. while AutoGPT is running.

View File

@@ -1,13 +1,13 @@
# Configuration # Configuration
Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `Auto-GPT` folder and name it `.env`. Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `AutoGPT` folder and name it `.env`.
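As a small convenience, a sketch of that copy step (equivalent to copying the file by hand; the filenames are the ones named above):

```python
# Sketch: create .env from .env.template if it doesn't exist, run from the
# AutoGPT folder mentioned above.
import shutil
from pathlib import Path

env_file = Path(".env")
if not env_file.exists():
    shutil.copy(Path(".env.template"), env_file)
    print("Created .env -- remember to set the required OPENAI_API_KEY (see below).")
```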
## Environment Variables ## Environment Variables
- `AI_SETTINGS_FILE`: Location of the AI Settings file relative to the Auto-GPT root directory. Default: ai_settings.yaml - `AI_SETTINGS_FILE`: Location of the AI Settings file relative to the AutoGPT root directory. Default: ai_settings.yaml
- `AUDIO_TO_TEXT_PROVIDER`: Audio To Text Provider. Only option currently is `huggingface`. Default: huggingface - `AUDIO_TO_TEXT_PROVIDER`: Audio To Text Provider. Only option currently is `huggingface`. Default: huggingface
- `AUTHORISE_COMMAND_KEY`: Key response accepted when authorising commands. Default: y - `AUTHORISE_COMMAND_KEY`: Key response accepted when authorising commands. Default: y
- `AZURE_CONFIG_FILE`: Location of the Azure Config file relative to the Auto-GPT root directory. Default: azure.yaml - `AZURE_CONFIG_FILE`: Location of the Azure Config file relative to the AutoGPT root directory. Default: azure.yaml
- `BROWSE_CHUNK_MAX_LENGTH`: When browsing website, define the length of chunks to summarize. Default: 3000 - `BROWSE_CHUNK_MAX_LENGTH`: When browsing website, define the length of chunks to summarize. Default: 3000
- `BROWSE_SPACY_LANGUAGE_MODEL`: [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. Default: en_core_web_sm - `BROWSE_SPACY_LANGUAGE_MODEL`: [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. Default: en_core_web_sm
- `CHAT_MESSAGES_ENABLED`: Enable chat messages. Optional - `CHAT_MESSAGES_ENABLED`: Enable chat messages. Optional
@@ -22,7 +22,7 @@ Configuration is controlled through the `Config` object. You can set configurati
- `GITHUB_USERNAME`: GitHub Username. Optional. - `GITHUB_USERNAME`: GitHub Username. Optional.
- `GOOGLE_API_KEY`: Google API key. Optional. - `GOOGLE_API_KEY`: Google API key. Optional.
- `GOOGLE_CUSTOM_SEARCH_ENGINE_ID`: [Google custom search engine ID](https://programmablesearchengine.google.com/controlpanel/all). Optional. - `GOOGLE_CUSTOM_SEARCH_ENGINE_ID`: [Google custom search engine ID](https://programmablesearchengine.google.com/controlpanel/all). Optional.
- `HEADLESS_BROWSER`: Use a headless browser while Auto-GPT uses a web browser. Setting to `False` will allow you to see Auto-GPT operate the browser. Default: True - `HEADLESS_BROWSER`: Use a headless browser while AutoGPT uses a web browser. Setting to `False` will allow you to see AutoGPT operate the browser. Default: True
- `HUGGINGFACE_API_TOKEN`: HuggingFace API, to be used for both image generation and audio to text. Optional. - `HUGGINGFACE_API_TOKEN`: HuggingFace API, to be used for both image generation and audio to text. Optional.
- `HUGGINGFACE_AUDIO_TO_TEXT_MODEL`: HuggingFace audio to text model. Default: CompVis/stable-diffusion-v1-4 - `HUGGINGFACE_AUDIO_TO_TEXT_MODEL`: HuggingFace audio to text model. Default: CompVis/stable-diffusion-v1-4
- `HUGGINGFACE_IMAGE_MODEL`: HuggingFace model to use for image generation. Default: CompVis/stable-diffusion-v1-4 - `HUGGINGFACE_IMAGE_MODEL`: HuggingFace model to use for image generation. Default: CompVis/stable-diffusion-v1-4
@@ -33,17 +33,17 @@ Configuration is controlled through the `Config` object. You can set configurati
- `OPENAI_API_KEY`: *REQUIRED*- Your [OpenAI API Key](https://platform.openai.com/account/api-keys). - `OPENAI_API_KEY`: *REQUIRED*- Your [OpenAI API Key](https://platform.openai.com/account/api-keys).
- `OPENAI_ORGANIZATION`: Organization ID in OpenAI. Optional. - `OPENAI_ORGANIZATION`: Organization ID in OpenAI. Optional.
- `PLAIN_OUTPUT`: Plain output, which disables the spinner. Default: False - `PLAIN_OUTPUT`: Plain output, which disables the spinner. Default: False
- `PLUGINS_CONFIG_FILE`: Path of the Plugins Config file relative to the Auto-GPT root directory. Default: plugins_config.yaml - `PLUGINS_CONFIG_FILE`: Path of the Plugins Config file relative to the AutoGPT root directory. Default: plugins_config.yaml
- `PROMPT_SETTINGS_FILE`: Location of the Prompt Settings file relative to the Auto-GPT root directory. Default: prompt_settings.yaml - `PROMPT_SETTINGS_FILE`: Location of the Prompt Settings file relative to the AutoGPT root directory. Default: prompt_settings.yaml
- `REDIS_HOST`: Redis Host. Default: localhost - `REDIS_HOST`: Redis Host. Default: localhost
- `REDIS_PASSWORD`: Redis Password. Optional. Default: - `REDIS_PASSWORD`: Redis Password. Optional. Default:
- `REDIS_PORT`: Redis Port. Default: 6379 - `REDIS_PORT`: Redis Port. Default: 6379
- `RESTRICT_TO_WORKSPACE`: Restrict file reading and writing to the workspace directory. Default: True - `RESTRICT_TO_WORKSPACE`: Restrict file reading and writing to the workspace directory. Default: True
- `SD_WEBUI_AUTH`: Stable Diffusion Web UI username:password pair. Optional. - `SD_WEBUI_AUTH`: Stable Diffusion Web UI username:password pair. Optional.
- `SD_WEBUI_URL`: Stable Diffusion Web UI URL. Default: http://localhost:7860 - `SD_WEBUI_URL`: Stable Diffusion Web UI URL. Default: http://localhost:7860
- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None - `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by AutoGPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
- `SHELL_COMMAND_CONTROL`: Whether to use `allowlist` or `denylist` to determine what shell commands can be executed (Default: denylist) - `SHELL_COMMAND_CONTROL`: Whether to use `allowlist` or `denylist` to determine what shell commands can be executed (Default: denylist)
- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su - `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by AutoGPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
- `SMART_LLM`: LLM Model to use for "smart" tasks. Default: gpt-4 - `SMART_LLM`: LLM Model to use for "smart" tasks. Default: gpt-4
- `STREAMELEMENTS_VOICE`: StreamElements voice to use. Default: Brian - `STREAMELEMENTS_VOICE`: StreamElements voice to use. Default: Brian
- `TEMPERATURE`: Value of temperature given to OpenAI. Value from 0 to 2. Lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature - `TEMPERATURE`: Value of temperature given to OpenAI. Value from 0 to 2. Lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
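For clarity on how the three shell-related options above interact, a conceptual sketch (not AutoGPT's actual implementation):

```python
# Conceptual sketch only -- illustrates SHELL_COMMAND_CONTROL with
# SHELL_ALLOWLIST / SHELL_DENYLIST; not the project's real logic.
def shell_command_allowed(command: str, control: str,
                          allowlist: list[str], denylist: list[str]) -> bool:
    executable = command.split()[0]
    if control == "allowlist":
        return executable in allowlist   # only listed commands may run
    return executable not in denylist    # everything except listed commands may run
```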
@@ -1,13 +1,13 @@
# Text to Speech # Text to Speech
Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT Enter this command to use TTS _(Text-to-Speech)_ for AutoGPT
```shell ```shell
python -m autogpt --speak python -m autogpt --speak
``` ```
Eleven Labs provides voice technologies such as voice design, speech synthesis, and Eleven Labs provides voice technologies such as voice design, speech synthesis, and
premade voices that Auto-GPT can use for speech. premade voices that AutoGPT can use for speech.
1. Go to [ElevenLabs](https://beta.elevenlabs.io/) and make an account if you don't 1. Go to [ElevenLabs](https://beta.elevenlabs.io/) and make an account if you don't
already have one. already have one.
@@ -3,6 +3,6 @@
Welcome to AutoGPT. Please follow the [Installation](/setup/) guide to get started. Welcome to AutoGPT. Please follow the [Installation](/setup/) guide to get started.
!!! note !!! note
It is recommended to use a virtual machine/container (docker) for tasks that require high security measures to prevent any potential harm to the main computer's system and data. If you are considering using Auto-GPT outside a virtualized/containerized environment, you are *strongly* advised to use a separate user account just for running Auto-GPT. This is even more important if you are going to allow Auto-GPT to write/execute scripts and run shell commands! It is recommended to use a virtual machine/container (docker) for tasks that require high security measures to prevent any potential harm to the main computer's system and data. If you are considering using AutoGPT outside a virtualized/containerized environment, you are *strongly* advised to use a separate user account just for running AutoGPT. This is even more important if you are going to allow AutoGPT to write/execute scripts and run shell commands!
It is for these reasons that executing python scripts is explicitly disabled when running outside a container environment. It is for these reasons that executing python scripts is explicitly disabled when running outside a container environment.
@@ -2,7 +2,7 @@
⚠️💀 **WARNING** 💀⚠️: Review the code of any plugin you use thoroughly, as plugins can execute any Python code, potentially leading to malicious activities, such as stealing your API keys. ⚠️💀 **WARNING** 💀⚠️: Review the code of any plugin you use thoroughly, as plugins can execute any Python code, potentially leading to malicious activities, such as stealing your API keys.
To configure plugins, you can create or edit the `plugins_config.yaml` file in the root directory of Auto-GPT. This file allows you to enable or disable plugins as desired. For specific configuration instructions, please refer to the documentation provided for each plugin. The file should be formatted in YAML. Here is an example for your reference: To configure plugins, you can create or edit the `plugins_config.yaml` file in the root directory of AutoGPT. This file allows you to enable or disable plugins as desired. For specific configuration instructions, please refer to the documentation provided for each plugin. The file should be formatted in YAML. Here is an example for your reference:
```yaml ```yaml
plugin_a: plugin_a:
@@ -16,5 +16,5 @@ plugin_b:
See our [Plugins Repo](https://github.com/Significant-Gravitas/Auto-GPT-Plugins) for more info on how to install all the amazing plugins the community has built! See our [Plugins Repo](https://github.com/Significant-Gravitas/Auto-GPT-Plugins) for more info on how to install all the amazing plugins the community has built!
Alternatively, developers can use the [Auto-GPT Plugin Template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template) as a starting point for creating your own plugins. Alternatively, developers can use the [AutoGPT Plugin Template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template) as a starting point for creating your own plugins.
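As a quick sanity check of a `plugins_config.yaml` before starting AutoGPT, a hedged sketch; the per-plugin `enabled` key follows the example format referred to above and should be treated as an assumption:

```python
# Hedged sketch: list which plugins the YAML file enables.
import yaml

with open("plugins_config.yaml") as f:
    plugins_config = yaml.safe_load(f) or {}

for name, entry in plugins_config.items():
    state = "enabled" if (entry or {}).get("enabled") else "disabled"
    print(f"{name}: {state}")
```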
@@ -1,8 +1,8 @@
# Setting up Auto-GPT # Setting up AutoGPT
## 📋 Requirements ## 📋 Requirements
Choose an environment to run Auto-GPT in (pick one): Choose an environment to run AutoGPT in (pick one):
- [Docker](https://docs.docker.com/get-docker/) (*recommended*) - [Docker](https://docs.docker.com/get-docker/) (*recommended*)
- Python 3.10 or later (instructions: [for Windows](https://www.tutorialspoint.com/how-to-install-python-in-windows)) - Python 3.10 or later (instructions: [for Windows](https://www.tutorialspoint.com/how-to-install-python-in-windows))
@@ -14,7 +14,7 @@ Choose an environment to run Auto-GPT in (pick one):
Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys). Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).
!!! attention !!! attention
To use the OpenAI API with Auto-GPT, we strongly recommend **setting up billing** To use the OpenAI API with AutoGPT, we strongly recommend **setting up billing**
(AKA paid account). Free accounts are [limited][openai/api limits] to 3 API calls per (AKA paid account). Free accounts are [limited][openai/api limits] to 3 API calls per
minute, which can cause the application to crash. minute, which can cause the application to crash.
@@ -29,16 +29,16 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./imgs/openai-api-key-billing-paid-account.png) ![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./imgs/openai-api-key-billing-paid-account.png)
## Setting up Auto-GPT ## Setting up AutoGPT
### Set up with Docker ### Set up with Docker
1. Make sure you have Docker installed, see [requirements](#requirements) 1. Make sure you have Docker installed, see [requirements](#requirements)
2. Create a project directory for Auto-GPT 2. Create a project directory for AutoGPT
```shell ```shell
mkdir Auto-GPT mkdir AutoGPT
cd Auto-GPT cd AutoGPT
``` ```
3. In the project directory, create a file called `docker-compose.yml` with the following contents: 3. In the project directory, create a file called `docker-compose.yml` with the following contents:
@@ -77,11 +77,11 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
6. Continue to [Run with Docker](#run-with-docker) 6. Continue to [Run with Docker](#run-with-docker)
!!! note "Docker only supports headless browsing" !!! note "Docker only supports headless browsing"
Auto-GPT uses a browser in headless mode by default: `HEADLESS_BROWSER=True`. AutoGPT uses a browser in headless mode by default: `HEADLESS_BROWSER=True`.
Please do not change this setting in combination with Docker, or Auto-GPT will crash. Please do not change this setting in combination with Docker, or AutoGPT will crash.
[Docker Hub]: https://hub.docker.com/r/significantgravitas/auto-gpt [Docker Hub]: https://hub.docker.com/r/significantgravitas/auto-gpt
[repository]: https://github.com/Significant-Gravitas/Auto-GPT [repository]: https://github.com/Significant-Gravitas/AutoGPT
### Set up with Git ### Set up with Git
@@ -96,13 +96,13 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
1. Clone the repository 1. Clone the repository
```shell ```shell
git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git git clone -b stable https://github.com/Significant-Gravitas/AutoGPT.git
``` ```
2. Navigate to the directory where you downloaded the repository 2. Navigate to the directory where you downloaded the repository
```shell ```shell
cd Auto-GPT/autogpts/autogpt cd AutoGPT/autogpts/autogpt
``` ```
### Set up without Git/Docker ### Set up without Git/Docker
@@ -110,7 +110,7 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
!!! warning !!! warning
We recommend to use Git or Docker, to make updating easier. Also note that some features such as Python execution will only work inside docker for security reasons. We recommend to use Git or Docker, to make updating easier. Also note that some features such as Python execution will only work inside docker for security reasons.
1. Download `Source code (zip)` from the [latest stable release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest) 1. Download `Source code (zip)` from the [latest stable release](https://github.com/Significant-Gravitas/AutoGPT/releases/latest)
2. Extract the zip-file into a folder 2. Extract the zip-file into a folder
@@ -160,7 +160,7 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
[Azure OpenAI docs]: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line [Azure OpenAI docs]: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line
## Running Auto-GPT ## Running AutoGPT
### Run with Docker ### Run with Docker
@@ -177,7 +177,7 @@ This will display the version of Docker Compose that is currently installed on y
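The check described here is presumably Docker Compose's built-in version command; a minimal sketch:

```shell
docker compose version
```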
If you need to upgrade Docker Compose to a newer version, you can follow the installation instructions in the Docker documentation: https://docs.docker.com/compose/install/ If you need to upgrade Docker Compose to a newer version, you can follow the installation instructions in the Docker documentation: https://docs.docker.com/compose/install/
Once you have a recent version of Docker Compose, run the commands below in your Auto-GPT folder. Once you have a recent version of Docker Compose, run the commands below in your AutoGPT folder.
1. Build the image. If you have pulled the image from Docker Hub, skip this step (NOTE: You *will* need to do this if you are modifying requirements.txt to add/remove dependencies like Python libs/frameworks) 1. Build the image. If you have pulled the image from Docker Hub, skip this step (NOTE: You *will* need to do this if you are modifying requirements.txt to add/remove dependencies like Python libs/frameworks)
@@ -185,7 +185,7 @@ Once you have a recent version of Docker Compose, run the commands below in your
docker compose build auto-gpt docker compose build auto-gpt
``` ```
2. Run Auto-GPT 2. Run AutoGPT
```shell ```shell
docker compose run --rm auto-gpt docker compose run --rm auto-gpt
@@ -211,7 +211,7 @@ docker run -it --env-file=.env -v $PWD:/app auto-gpt
docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
``` ```
[Docker Compose file]: https://github.com/Significant-Gravitas/Auto-GPT/blob/stable/docker-compose.yml [Docker Compose file]: https://github.com/Significant-Gravitas/AutoGPT/blob/stable/docker-compose.yml
### Run with Dev Container ### Run with Dev Container
@@ -239,7 +239,7 @@ pip3 install --upgrade pip
For security reasons, certain features (like Python execution) are disabled by default when running without Docker. So even if you want to run the program outside a Docker container, you currently still need Docker to actually run scripts. For security reasons, certain features (like Python execution) are disabled by default when running without Docker. So even if you want to run the program outside a Docker container, you currently still need Docker to actually run scripts.
Simply run the startup script in your terminal. This will install any necessary Python Simply run the startup script in your terminal. This will install any necessary Python
packages and launch Auto-GPT. packages and launch AutoGPT.
- On Linux/MacOS: - On Linux/MacOS:
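Presumably this refers to the `./run.sh` script used throughout the usage guide; a sketch of the invocation:

```shell
# Install any needed Python packages and start AutoGPT
./run.sh
```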
View File
@@ -1,4 +1,4 @@
## Share your logs with us to help improve Auto-GPT ## Share your logs with us to help improve AutoGPT
Do you notice weird behavior with your agent? Do you have an interesting use case? Do you have a bug you want to report? Do you notice weird behavior with your agent? Do you have an interesting use case? Do you have a bug you want to report?
Follow the steps below to enable your logs and upload them. You can include these logs when making an issue report or discussing an issue with us. Follow the steps below to enable your logs and upload them. You can include these logs when making an issue report or discussing an issue with us.
View File
@@ -21,15 +21,15 @@ Running with `--help` lists all the possible command line arguments you can pass
!!! note !!! note
Replace anything in angled brackets (<>) with a value you want to specify Replace anything in angled brackets (<>) with a value you want to specify
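For example, to print that full list of options with the `run.sh` script used in the examples below (a minimal sketch):

```shell
./run.sh --help
```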
Here are some common arguments you can use when running Auto-GPT: Here are some common arguments you can use when running AutoGPT:
* Run Auto-GPT with a different AI Settings file * Run AutoGPT with a different AI Settings file
```shell ```shell
./run.sh --ai-settings <filename> ./run.sh --ai-settings <filename>
``` ```
* Run Auto-GPT with a different Prompt Settings file * Run AutoGPT with a different Prompt Settings file
```shell ```shell
./run.sh --prompt-settings <filename> ./run.sh --prompt-settings <filename>
@@ -47,7 +47,7 @@ Here are some common arguments you can use when running Auto-GPT:
### Speak Mode ### Speak Mode
Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT Enter this command to use TTS _(Text-to-Speech)_ for AutoGPT
```shell ```shell
./run.sh --speak ./run.sh --speak
@@ -72,7 +72,7 @@ Running Self-Feedback will **INCREASE** token use and thus cost more. This featu
### GPT-3.5 ONLY Mode ### GPT-3.5 ONLY Mode
If you don't have access to GPT-4, this mode allows you to use Auto-GPT! If you don't have access to GPT-4, this mode allows you to use AutoGPT!
```shell ```shell
./run.sh --gpt3only ./run.sh --gpt3only
@@ -82,7 +82,7 @@ You can achieve the same by setting `SMART_LLM` in `.env` to `gpt-3.5-turbo`.
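As a sketch, that `.env` entry would look like this (variable name and value taken from the sentence above):

```shell
# .env excerpt (illustrative): use GPT-3.5 as the "smart" model
SMART_LLM=gpt-3.5-turbo
```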
### GPT-4 ONLY Mode ### GPT-4 ONLY Mode
If you have access to GPT-4, this mode allows you to use Auto-GPT solely with GPT-4. If you have access to GPT-4, this mode allows you to use AutoGPT solely with GPT-4.
This may give your bot increased intelligence. This may give your bot increased intelligence.
```shell ```shell
@@ -90,7 +90,7 @@ This may give your bot increased intelligence.
``` ```
!!! warning !!! warning
Since GPT-4 is more expensive to use, running Auto-GPT in GPT-4-only mode will Since GPT-4 is more expensive to use, running AutoGPT in GPT-4-only mode will
increase your API costs. increase your API costs.
## Logs ## Logs
View File
@@ -1,6 +1,6 @@
site_name: AutoGPT Documentation site_name: AutoGPT Documentation
site_url: https://docs.agpt.co/ site_url: https://docs.agpt.co/
repo_url: https://github.com/Significant-Gravitas/Auto-GPT repo_url: https://github.com/Significant-Gravitas/AutoGPT
docs_dir: content docs_dir: content
nav: nav:
- Home: index.md - Home: index.md
@@ -14,7 +14,7 @@ nav:
- Voice: configuration/voice.md - Voice: configuration/voice.md
- Image Generation: configuration/imagegen.md - Image Generation: configuration/imagegen.md
- Help us improve Auto-GPT: - Help us improve AutoGPT:
- Share your debug logs with us: share-your-logs.md - Share your debug logs with us: share-your-logs.md
- Contribution guide: contributing.md - Contribution guide: contributing.md
- Running tests: testing.md - Running tests: testing.md
@@ -36,7 +36,7 @@ nav:
- Submit a Challenge: challenges/submit.md - Submit a Challenge: challenges/submit.md
- Beat a Challenge: challenges/beat.md - Beat a Challenge: challenges/beat.md
- License: https://github.com/Significant-Gravitas/Auto-GPT/blob/master/LICENSE - License: https://github.com/Significant-Gravitas/AutoGPT/blob/master/LICENSE
theme: theme:
name: material name: material
View File
@@ -1,4 +1,4 @@
# Netlify config for Auto-GPT docs # Netlify config for AutoGPT docs
[build] [build]
publish = "public/" publish = "public/"
View File
@@ -26,12 +26,12 @@ Flutter comes with Dart, to install Flutter, follow the instructions here: https
1. **Clone the repo:** 1. **Clone the repo:**
``` ```
git clone https://github.com/Significant-Gravitas/Auto-GPT.git git clone https://github.com/Significant-Gravitas/AutoGPT.git
``` ```
2. **Navigate to the project directory:** 2. **Navigate to the project directory:**
``` ```
cd Auto-GPT/frontend cd AutoGPT/frontend
``` ```
3. **Get Flutter packages:** 3. **Get Flutter packages:**
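The command for this step is not shown in this excerpt; presumably it is Flutter's standard dependency fetch, sketched here:

```shell
flutter pub get
```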