Merge branch 'master' into master

Slowly-Grokking · 2023-04-16 01:49:06 -05:00 · committed by GitHub
16 changed files with 208 additions and 41 deletions


@@ -50,7 +50,10 @@ SMART_TOKEN_LIMIT=8000
### MEMORY
################################################################################
-# MEMORY_BACKEND - Memory backend type (Default: local)
+### MEMORY_BACKEND - Memory backend type
+# local - Default
+# pinecone - Pinecone (if configured)
+# redis - Redis (if configured)
MEMORY_BACKEND=local
### PINECONE

.github/workflows/docker-image.yml (new file)

@@ -0,0 +1,18 @@
name: Docker Image CI

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the Docker image
        run: docker build . --file Dockerfile --tag autogpt:$(date +%s)


@@ -48,19 +48,20 @@ Your support is greatly appreciated
- [Docker](#docker)
- [Command Line Arguments](#command-line-arguments)
- [🗣️ Speech Mode](#-speech-mode)
  - [List of IDs with names from eleven labs, you can use the name or ID:](#list-of-ids-with-names-from-eleven-labs-you-can-use-the-name-or-id)
- [OpenAI API Keys Configuration](#openai-api-keys-configuration)
- [🔍 Google API Keys Configuration](#-google-api-keys-configuration)
  - [Setting up environment variables](#setting-up-environment-variables)
- [Memory Backend Setup](#memory-backend-setup)
  - [Redis Setup](#redis-setup)
  - [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup)
  - [Milvus Setup](#milvus-setup)
  - [Setting up environment variables](#setting-up-environment-variables-1)
- [Setting Your Cache Type](#setting-your-cache-type)
- [View Memory Usage](#view-memory-usage)
- [🧠 Memory pre-seeding](#-memory-pre-seeding)
- [💀 Continuous Mode ⚠️](#-continuous-mode-)
- [GPT3.5 ONLY Mode](#gpt35-only-mode)
- [🖼 Image Generation](#-image-generation)
- [Selenium](#selenium)
- [⚠️ Limitations](#-limitations)
- [🛡 Disclaimer](#-disclaimer)
- [🐦 Connect with Us on Twitter](#-connect-with-us-on-twitter)
@@ -115,7 +116,15 @@ cd Auto-GPT
pip install -r requirements.txt
```
-5. Rename `.env.template` to `.env` and fill in your `OPENAI_API_KEY`. If you plan to use Speech Mode, fill in your `ELEVENLABS_API_KEY` as well.
+5. Locate the file named `.env.template` in the main `/Auto-GPT` folder.
+   Create a copy of this file named `.env` by removing the `template` extension. The easiest way is to do this in a command prompt/terminal window: `cp .env.template .env`.
+   Open the `.env` file in a text editor. Note: files starting with a dot might be hidden by your operating system.
+   Find the line that says `OPENAI_API_KEY=`.
+   After the `=`, enter your unique OpenAI API key (without any quotes or spaces).
+   Enter any other API keys or tokens for services you would like to use.
+   Save and close the `.env` file.
+   By completing these steps, you have properly configured the API keys for your project (a quick verification sketch follows the Azure notes below).
- See [OpenAI API Keys Configuration](#openai-api-keys-configuration) to obtain your OpenAI API key.
- Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
- If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and then follow these steps:
@@ -124,8 +133,8 @@ pip install -r requirements.txt
- `smart_llm_model_deployment_id` - your gpt-4 deployment ID
- `embedding_model_deployment_id` - your text-embedding-ada-002 v2 deployment ID
- Please specify all of these values as double-quoted strings
-> Replace string in angled brackets (<>) to your own ID
```yaml
+# Replace the string in angled brackets (<>) with your own ID
azure_model_map:
fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
...
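After filling in `.env`, a quick way to verify from Python that the key is actually being picked up — a minimal sketch assuming only the `python-dotenv` package (which this diff already uses via `load_dotenv`) and the `OPENAI_API_KEY` variable named above:

```python
import os

from dotenv import load_dotenv

# Load variables from the .env file in the current working directory
load_dotenv()

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise SystemExit("OPENAI_API_KEY is missing - check your .env file.")
print(f"OPENAI_API_KEY loaded ({len(api_key)} chars).")
```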
@@ -254,7 +263,18 @@ export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
```
-## Redis Setup
+## Memory Backend Setup
+
+By default, Auto-GPT uses LocalCache.
+To switch to a different backend, change the `MEMORY_BACKEND` env variable to the value you want (a minimal selection sketch follows this list):
+
+- `local` (default) uses a local JSON cache file
+- `pinecone` uses the Pinecone.io account you configured in your ENV settings
+- `redis` will use the Redis cache that you configured
+- `milvus` will use the Milvus instance that you configured
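A minimal sketch of how a setting like this can drive backend selection. The class names below are hypothetical placeholders, not Auto-GPT's actual memory classes:

```python
import os

# Hypothetical stand-ins for the real backend implementations
class LocalCache: ...
class PineconeMemory: ...
class RedisMemory: ...
class MilvusMemory: ...

BACKENDS = {
    "local": LocalCache,        # local JSON cache file
    "pinecone": PineconeMemory, # needs PINECONE_API_KEY / PINECONE_ENV
    "redis": RedisMemory,       # needs a reachable Redis instance
    "milvus": MilvusMemory,     # needs MILVUS_ADDR
}

def get_memory_backend():
    """Instantiate the backend named by MEMORY_BACKEND (default: local)."""
    name = os.getenv("MEMORY_BACKEND", "local")
    try:
        return BACKENDS[name]()
    except KeyError:
        raise ValueError(f"Unknown MEMORY_BACKEND: {name}") from None
```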
+### Redis Setup
> _**CAUTION**_ \
This is not intended to be publicly accessible and lacks security measures. Therefore, avoid exposing Redis to the internet without a password or at all
1. Install docker desktop
@@ -293,20 +313,6 @@ Pinecone enables the storage of vast amounts of vector-based memory, allowing fo
2. Choose the `Starter` plan to avoid being charged.
3. Find your API key and region under the default project in the left sidebar.
-### Milvus Setup
-
-[Milvus](https://milvus.io/) is a open-source, high scalable vector database to storage huge amount of vector-based memory and provide fast relevant search.
-
-- setup milvus database, keep your pymilvus version and milvus version same to avoid compatible issues.
-  - setup by open source [Install Milvus](https://milvus.io/docs/install_standalone-operator.md)
-  - or setup by [Zilliz Cloud](https://zilliz.com/cloud)
-- set `MILVUS_ADDR` in `.env` to your milvus address `host:ip`.
-- set `MEMORY_BACKEND` in `.env` to `milvus` to enable milvus as backend.
-- optional
-  - set `MILVUS_COLLECTION` in `.env` to change milvus collection name as you want, `autogpt` is the default name.
### Setting up environment variables
In the `.env` file set:
- `PINECONE_API_KEY`
- `PINECONE_ENV` (example: _"us-east4-gcp"_)
@@ -330,15 +336,17 @@ export PINECONE_ENV="<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
export MEMORY_BACKEND="pinecone"
```
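With those variables exported, initializing the client is short. A hedged sketch against the pinecone-client v2 API of this era (`pinecone.init` and `pinecone.list_indexes` are real calls; everything else is illustrative):

```python
import os

import pinecone

# Initialize with the same variables the README sets above
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENV"],  # e.g. "us-east4-gcp"
)

# Sanity check: list the indexes visible to this key
print(pinecone.list_indexes())
```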
-## Setting Your Cache Type
-
-By default, Auto-GPT is going to use LocalCache instead of redis or Pinecone.
-
-To switch to either, change the `MEMORY_BACKEND` env variable to the value that you want:
-`local` (default) uses a local JSON cache file
-`pinecone` uses the Pinecone.io account you configured in your ENV settings
-`redis` will use the redis cache that you configured
+### Milvus Setup
+
+[Milvus](https://milvus.io/) is an open-source, highly scalable vector database that stores large amounts of vector-based memory and provides fast relevant search.
+
+- Set up the Milvus database; keep your pymilvus version and Milvus version the same to avoid compatibility issues.
+  - Set it up with the open-source guide: [Install Milvus](https://milvus.io/docs/install_standalone-operator.md),
+  - or use [Zilliz Cloud](https://zilliz.com/cloud).
+- Set `MILVUS_ADDR` in `.env` to your Milvus address `host:port` (a connection sketch follows this list).
+- Set `MEMORY_BACKEND` in `.env` to `milvus` to enable Milvus as the backend.
+- Optional:
+  - Set `MILVUS_COLLECTION` in `.env` to change the Milvus collection name as you want; `autogpt` is the default name.
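A minimal connectivity check for the settings above, assuming the `pymilvus` client; the `host:port` split mirrors the `MILVUS_ADDR` format described in this section:

```python
import os

from pymilvus import connections

# MILVUS_ADDR is expected in "host:port" form, per the instructions above
host, port = os.environ["MILVUS_ADDR"].split(":")
connections.connect(alias="default", host=host, port=port)
print(f"Connected to Milvus at {host}:{port}")
```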
## View Memory Usage


@@ -186,7 +186,7 @@ def execute_command(command_name: str, arguments):
    elif command_name == "generate_image":
        return generate_image(arguments["prompt"])
    elif command_name == "send_tweet":
-        return send_tweet(arguments['text'])
+        return send_tweet(arguments["text"])
    elif command_name == "do_nothing":
        return "No action performed."
    elif command_name == "task_complete":


@@ -23,7 +23,9 @@ def read_audio(audio):
headers = {"Authorization": f"Bearer {api_token}"}
if api_token is None:
raise ValueError("You need to set your Hugging Face API token in the config file.")
raise ValueError(
"You need to set your Hugging Face API token in the config file."
)
response = requests.post(
api_url,
@@ -31,5 +33,5 @@ def read_audio(audio):
data=audio,
)
text = json.loads(response.content.decode("utf-8"))['text']
text = json.loads(response.content.decode("utf-8"))["text"]
return "The audio says: " + text


@@ -5,15 +5,49 @@ from pathlib import Path
from typing import Generator, List
# Set a dedicated folder for file I/O
-WORKING_DIRECTORY = Path(__file__).parent.parent / "auto_gpt_workspace"
+WORKING_DIRECTORY = Path(os.getcwd()) / "auto_gpt_workspace"

# Create the directory if it doesn't exist
if not os.path.exists(WORKING_DIRECTORY):
    os.makedirs(WORKING_DIRECTORY)

+LOG_FILE = "file_logger.txt"
+LOG_FILE_PATH = WORKING_DIRECTORY / LOG_FILE
+
WORKING_DIRECTORY = str(WORKING_DIRECTORY)


+def check_duplicate_operation(operation: str, filename: str) -> bool:
+    """Check if the operation has already been performed on the given file
+
+    Args:
+        operation (str): The operation to check for
+        filename (str): The name of the file to check for
+
+    Returns:
+        bool: True if the operation has already been performed on the file
+    """
+    log_content = read_file(LOG_FILE)
+    log_entry = f"{operation}: {filename}\n"
+    return log_entry in log_content
+
+
+def log_operation(operation: str, filename: str) -> None:
+    """Log the file operation to the file_logger.txt
+
+    Args:
+        operation (str): The operation to log
+        filename (str): The name of the file the operation was performed on
+    """
+    log_entry = f"{operation}: {filename}\n"
+
+    # Create the log file if it doesn't exist
+    if not os.path.exists(LOG_FILE_PATH):
+        with open(LOG_FILE_PATH, "w", encoding="utf-8") as f:
+            f.write("File Operation Logger ")
+
+    append_to_file(LOG_FILE, log_entry)
+
+
def safe_join(base: str, *paths) -> str:
    """Join one or more path components intelligently.
@@ -122,6 +156,8 @@ def write_to_file(filename: str, text: str) -> str:
    Returns:
        str: A message indicating success or failure
    """
+    if check_duplicate_operation("write", filename):
+        return "Error: File has already been updated."
+
    try:
        filepath = safe_join(WORKING_DIRECTORY, filename)
        directory = os.path.dirname(filepath)
@@ -129,6 +165,7 @@ def write_to_file(filename: str, text: str) -> str:
            os.makedirs(directory)
        with open(filepath, "w", encoding="utf-8") as f:
            f.write(text)
+        log_operation("write", filename)
        return "File written to successfully."
    except Exception as e:
        return f"Error: {str(e)}"
@@ -148,6 +185,7 @@ def append_to_file(filename: str, text: str) -> str:
        filepath = safe_join(WORKING_DIRECTORY, filename)
        with open(filepath, "a") as f:
            f.write(text)
+        log_operation("append", filename)
        return "Text appended successfully."
    except Exception as e:
        return f"Error: {str(e)}"
@@ -162,9 +200,12 @@ def delete_file(filename: str) -> str:
    Returns:
        str: A message indicating success or failure
    """
+    if check_duplicate_operation("delete", filename):
+        return "Error: File has already been deleted."
+
    try:
        filepath = safe_join(WORKING_DIRECTORY, filename)
        os.remove(filepath)
+        log_operation("delete", filename)
        return "File deleted successfully."
    except Exception as e:
        return f"Error: {str(e)}"


@@ -16,5 +16,8 @@ def clone_repository(repo_url: str, clone_path: str) -> str:
        str: The result of the clone operation"""
    split_url = repo_url.split("//")
    auth_repo_url = f"//{CFG.github_username}:{CFG.github_api_key}@".join(split_url)
-    git.Repo.clone_from(auth_repo_url, clone_path)
-    return f"""Cloned {repo_url} to {clone_path}"""
+    try:
+        git.Repo.clone_from(auth_repo_url, clone_path)
+        return f"""Cloned {repo_url} to {clone_path}"""
+    except Exception as e:
+        return f"Error: {str(e)}"


@@ -7,9 +7,9 @@ load_dotenv()
def send_tweet(tweet_text):
    consumer_key = os.environ.get("TW_CONSUMER_KEY")
-    consumer_secret= os.environ.get("TW_CONSUMER_SECRET")
-    access_token= os.environ.get("TW_ACCESS_TOKEN")
-    access_token_secret= os.environ.get("TW_ACCESS_TOKEN_SECRET")
+    consumer_secret = os.environ.get("TW_CONSUMER_SECRET")
+    access_token = os.environ.get("TW_ACCESS_TOKEN")
+    access_token_secret = os.environ.get("TW_ACCESS_TOKEN_SECRET")
    # Authenticate to Twitter
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
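For reference, the hunk ends before the tweet is actually posted. With tweepy, the remainder of such a function plausibly looks like the following — a hedged sketch using tweepy's real `API.update_status` call, not Auto-GPT's confirmed continuation:

```python
import tweepy

def post_tweet(auth: tweepy.OAuthHandler, tweet_text: str) -> str:
    """Post tweet_text with an already-configured OAuth handler (illustrative)."""
    api = tweepy.API(auth)
    try:
        api.update_status(tweet_text)  # v1.1 status-update endpoint
        return "Tweet sent successfully!"
    except tweepy.errors.TweepyException as e:
        return f"Error sending tweet: {e}"
```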


@@ -100,7 +100,7 @@ class AIConfig:
    prompt_start = (
        "Your decisions must always be made independently without"
-        "seeking user assistance. Play to your strengths as an LLM and pursue"
+        " seeking user assistance. Play to your strengths as an LLM and pursue"
        " simple strategies with no legal complications."
        ""
    )


@@ -37,7 +37,7 @@ def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str):
    except (json.JSONDecodeError, ValueError):
        if CFG.debug_mode:
-            logger.error("Error: Invalid JSON: %s\n", json_string)
+            logger.error(f"Error: Invalid JSON: {json_string}\n")
        if CFG.speak_mode:
            say_text("Didn't work. I will have to ignore this response then.")
        logger.error("Error: Invalid JSON, setting it to empty JSON now.\n")


@@ -272,6 +272,8 @@ def print_assistant_thoughts(ai_name, assistant_reply):
        # Speak the assistant's thoughts
        if CFG.speak_mode and assistant_thoughts_speak:
            say_text(assistant_thoughts_speak)
+        else:
+            logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}")

        return assistant_reply_json
    except json.decoder.JSONDecodeError:


@@ -84,7 +84,6 @@ def get_prompt() -> str:
("Generate Image", "generate_image", {"prompt": "<prompt>"}),
("Convert Audio to text", "read_audio_from_file", {"file": "<file>"}),
("Send Tweet", "send_tweet", {"text": "<text>"}),
]
# Only add shell command to the prompt if the AI is allowed to execute it


@@ -23,3 +23,5 @@ numpy
pre-commit
black
isort
+gitpython==3.1.31
+tweepy


@@ -26,4 +26,5 @@ sourcery
isort
gitpython==3.1.31
pytest
-pytest-mock
\ No newline at end of file
+pytest-mock
+tweepy


@@ -50,7 +50,9 @@ class TestScrapeText:
    # Tests that the function returns an error message when an invalid or unreachable url is provided.
    def test_invalid_url(self, mocker):
        # Mock the requests.get() method to raise an exception
-        mocker.patch("requests.Session.get", side_effect=requests.exceptions.RequestException)
+        mocker.patch(
+            "requests.Session.get", side_effect=requests.exceptions.RequestException
+        )

        # Call the function with an invalid URL and assert that it returns an error message
        url = "http://www.invalidurl.com"

tests/unit/test_chat.py (new file)

@@ -0,0 +1,86 @@
# Generated by CodiumAI
import unittest
import time

from unittest.mock import patch

from autogpt.chat import create_chat_message, generate_context


class TestChat(unittest.TestCase):
    # Tests that the function returns a dictionary with the correct keys and values when valid strings are provided for role and content.
    def test_happy_path_role_content(self):
        result = create_chat_message("system", "Hello, world!")
        self.assertEqual(result, {"role": "system", "content": "Hello, world!"})

    # Tests that the function returns a dictionary with the correct keys and values when empty strings are provided for role and content.
    def test_empty_role_content(self):
        result = create_chat_message("", "")
        self.assertEqual(result, {"role": "", "content": ""})

    # Tests the behavior of the generate_context function when all input parameters are empty.
    @patch("time.strftime")
    def test_generate_context_empty_inputs(self, mock_strftime):
        # Mock the time.strftime function to return a fixed value
        mock_strftime.return_value = "Sat Apr 15 00:00:00 2023"
        # Arrange
        prompt = ""
        relevant_memory = ""
        full_message_history = []
        model = "gpt-3.5-turbo-0301"

        # Act
        result = generate_context(prompt, relevant_memory, full_message_history, model)

        # Assert
        expected_result = (
            -1,
            47,
            3,
            [
                {"role": "system", "content": ""},
                {
                    "role": "system",
                    "content": f"The current time and date is {time.strftime('%c')}",
                },
                {
                    "role": "system",
                    "content": f"This reminds you of these events from your past:\n\n\n",
                },
            ],
        )
        self.assertEqual(result, expected_result)

    # Tests that the function successfully generates a current_context given valid inputs.
    def test_generate_context_valid_inputs(self):
        # Given
        prompt = "What is your favorite color?"
        relevant_memory = "You once painted your room blue."
        full_message_history = [
            create_chat_message("user", "Hi there!"),
            create_chat_message("assistant", "Hello! How can I assist you today?"),
            create_chat_message("user", "Can you tell me a joke?"),
            create_chat_message(
                "assistant",
                "Why did the tomato turn red? Because it saw the salad dressing!",
            ),
            create_chat_message("user", "Haha, that's funny."),
        ]
        model = "gpt-3.5-turbo-0301"

        # When
        result = generate_context(prompt, relevant_memory, full_message_history, model)

        # Then
        self.assertIsInstance(result[0], int)
        self.assertIsInstance(result[1], int)
        self.assertIsInstance(result[2], int)
        self.assertIsInstance(result[3], list)
        self.assertGreaterEqual(result[0], 0)
        self.assertGreaterEqual(result[1], 0)
        self.assertGreaterEqual(result[2], 0)
        self.assertGreaterEqual(
            len(result[3]), 3
        )  # current_context should have at least 3 messages
        self.assertLessEqual(
            result[1], 2048
        )  # token limit for GPT-3.5-turbo-0301 is 2048 tokens