Restructure Repo (#5160)

Authored by merwanehamadi on 2023-09-05 18:39:48 -07:00; committed by GitHub
2283 changed files with 585814 additions and 44 deletions


@@ -1,49 +1,17 @@
<!-- ⚠️ At the moment any non-essential commands are not being merged.
If you want to add non-essential commands to Auto-GPT, please create a plugin instead.
We are expecting to ship plugin support within the week (PR #757).
Resources:
* https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template
-->
<!-- 📢 Announcement
We've recently noticed an increase in pull requests focusing on combining multiple changes. While the intentions behind these PRs are appreciated, it's essential to maintain a clean and manageable git history. To ensure the quality of our repository, we kindly ask you to adhere to the following guidelines when submitting PRs:
Focus on a single, specific change.
Do not include any unrelated or "extra" modifications.
Provide clear documentation and explanations of the changes made.
Ensure diffs are limited to the intended lines — no applying preferred formatting styles or line endings (unless that's what the PR is about).
For guidance on committing only the specific lines you have changed, refer to this helpful video: https://youtu.be/8-hSNHHbiZg
Check out our [wiki page on Contributing](https://github.com/Significant-Gravitas/Nexus/wiki/Contributing)
By following these guidelines, your PRs are more likely to be merged quickly after testing, as long as they align with the project's overall direction. -->
### Background
<!-- Provide a concise overview of the rationale behind this change. Include relevant context, prior discussions, or links to related issues. Ensure that the change aligns with the project's overall direction. -->
<!-- IF YOU MAKE A PR FROM A FORK, THE mini-agi TEST WON'T PASS, so ignore it.-->
### Changes
<!-- Describe the specific, focused change made in this pull request. Detail the modifications clearly and avoid any unrelated or "extra" changes. -->
### Documentation
<!-- Explain how your changes are documented, such as in-code comments or external documentation. Ensure that the documentation is clear, concise, and easy to understand. -->
### Test Plan
<!-- Describe how you tested this functionality. Include steps to reproduce, relevant test cases, and any other pertinent information. -->
### PR Quality Checklist
- [ ] My pull request is atomic and focuses on a single change.
- [ ] I have thoroughly tested my changes with multiple different prompts.
- [ ] I have considered potential risks and mitigations for my changes.
- [ ] I have documented my changes clearly and comprehensively.
- [ ] I have not snuck in any "extra" small tweaks or changes. <!-- Submit these as separate Pull Requests, they are the easiest to merge! -->
- [ ] I have run the following commands against my code to ensure it passes our linters:
```shell
black .
isort .
mypy
autoflake --remove-all-unused-imports --recursive --ignore-init-module-imports --ignore-pass-after-docstring autogpt tests --in-place
```
<!-- If you haven't added tests, please explain why. If you have, check the appropriate box. If you've ensured your PR is atomic and well-documented, check the corresponding boxes. -->
<!-- By submitting this, I agree that my pull request should be closed if I do not fill this out or follow the guidelines. -->
```shell
black . --exclude test.py
isort .
mypy .
autoflake --remove-all-unused-imports --recursive --ignore-init-module-imports --ignore-pass-after-docstring --in-place agbenchmark
```
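
The guidance above asks contributors to keep diffs limited to the intended lines and links a video on committing only specific changes. A minimal sketch of doing the same with plain git interactive staging; the file path and commit message below are hypothetical, not from this commit:

```shell
# Stage only the hunks you intend to change; answer y/n per hunk,
# or 'e' to trim a hunk down to individual lines.
git add --patch autogpt/commands/file_operations.py   # hypothetical path

# Review exactly what is staged before committing
git diff --cached

# Commit just the staged lines
git commit -m "Fix file path validation in file_operations"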

.github/workflows/benchmark_ci.yml (new file, 301 lines)

@@ -0,0 +1,301 @@
name: CI
defaults:
run:
working-directory: ./benchmark/
on:
workflow_dispatch:
branches: [master]
inputs:
agents:
description: 'Agents to run (comma-separated)'
required: false
default: 'gpt-engineer,smol-developer,Auto-GPT,mini-agi,beebot,BabyAGI,PolyGPT,Turbo' # Default agents if none are specified
schedule:
- cron: '0 8 * * *'
push:
branches: [master, ci-test*]
paths-ignore:
- 'reports/**'
pull_request:
branches: [stable, master, release-*, develop]
jobs:
lint:
runs-on: ubuntu-latest
env:
min-python-version: '3.10'
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
submodules: true
- name: Set up Python ${{ env.min-python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.min-python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Install Poetry
run: |
curl -sSL https://install.python-poetry.org | python -
- name: Install dependencies
run: |
export POETRY_VIRTUALENVS_IN_PROJECT=true
poetry install -vvv
- name: Lint with flake8
run: poetry run flake8
- name: Check black formatting
run: poetry run black . --exclude test.py --check
if: success() || failure()
- name: Check isort formatting
run: poetry run isort . --check
if: success() || failure()
- name: Check for unused imports and pass statements
run: |
cmd="poetry run autoflake --remove-all-unused-imports --recursive --ignore-init-module-imports --ignore-pass-after-docstring agbenchmark"
$cmd --check || (echo "You have unused imports or pass statements, please run '${cmd} --in-place'" && exit 1)
if: success() || failure()
matrix-setup:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
env-name: ${{ steps.set-matrix.outputs.env-name }}
steps:
- id: set-matrix
run: |
if [ "${{ github.event_name }}" == "schedule" ]; then
echo "::set-output name=env-name::production"
echo "::set-output name=matrix::[ 'gpt-engineer', 'smol-developer', 'Auto-GPT', 'mini-agi', 'beebot', 'BabyAGI', 'PolyGPT', 'Turbo' ]"
elif [ "${{ github.event_name }}" == "workflow_dispatch" ]; then
IFS=',' read -ra matrix_array <<< "${{ github.event.inputs.agents }}"
matrix_string="[ \"$(echo "${matrix_array[@]}" | sed 's/ /", "/g')\" ]"
echo "::set-output name=env-name::production"
echo "::set-output name=matrix::$matrix_string"
else
echo "::set-output name=env-name::testing"
echo "::set-output name=matrix::[ 'mini-agi' ]"
fi
tests:
environment:
name: '${{ needs.matrix-setup.outputs.env-name }}'
needs: matrix-setup
env:
GH_TOKEN: ${{ github.event_name == 'pull_request' && github.token || secrets.PAT }}
min-python-version: '3.10'
name: '${{ matrix.agent-name }}'
runs-on: ubuntu-latest
timeout-minutes: 50
strategy:
fail-fast: false
matrix:
agent-name: ${{fromJson(needs.matrix-setup.outputs.matrix)}}
steps:
- name: Print Environment Name
run: |
echo "Matrix Setup Environment Name: ${{ needs.matrix-setup.outputs.env-name }}"
- name: Checkout repository
uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
submodules: true
token: ${{ env.GH_TOKEN }}
- name: Setup Chrome and ChromeDriver
run: |
sudo apt-get update
sudo apt-get install -y wget
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb
sudo apt-get install -f
- name: Set up Python ${{ env.min-python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.min-python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Install Poetry
run: |
curl -sSL https://install.python-poetry.org | python -
- name: Install dependencies
run: |
poetry install -vvv
poetry build
- name: Run regression tests
run: |
cd agent/$AGENT_NAME
prefix=""
if [ "$AGENT_NAME" == "gpt-engineer" ]; then
make install
source venv/bin/activate
elif [ "$AGENT_NAME" == "Auto-GPT" ]; then
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip uninstall agbenchmark -y
elif [ "$AGENT_NAME" == "mini-agi" ]; then
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp .env_example .env
elif [ "$AGENT_NAME" == "smol-developer" ]; then
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
elif [ "$AGENT_NAME" == "BabyAGI" ]; then
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
elif [ "$AGENT_NAME" == "SuperAGI" ]; then
cp config_template.yaml config.yaml
sed -i 's/OPENAI_API_KEY:.*/OPENAI_API_KEY: "'"${{ secrets.OPENAI_API_KEY }}"'"/' config.yaml
docker-compose up -d --build
elif [ "$AGENT_NAME" == "beebot" ]; then
poetry install
poetry run playwright install
poetry run uvicorn beebot.initiator.api:create_app --factory --timeout-graceful-shutdown=1 &
prefix="poetry run "
elif [ "$AGENT_NAME" == "PolyGPT" ]; then
cp .env.template .env
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
export NVM_DIR=$HOME/.nvm
source $NVM_DIR/nvm.sh
nvm install && nvm use
yarn install
export NODE_TLS_REJECT_UNAUTHORIZED=0
elif [ "$AGENT_NAME" == "Turbo" ]; then
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp .env.template .env
sed -i 's/your-openai-api-key/${{ secrets.OPENAI_API_KEY }}/g' .env
else
echo "Unknown agent name: $AGENT_NAME"
exit 1
fi
pip install ../../dist/*.whl
bash -c "$(curl -fsSL https://raw.githubusercontent.com/merwanehamadi/helicone/b7ab4bc53e51d8ab29fff19ce5986ab7720970c6/mitmproxy.sh)" -s start
if [ "${GITHUB_EVENT_NAME}" == "pull_request" ] || [ "${{ github.event_name }}" == "push" ]; then
set +e # Ignore non-zero exit codes and continue execution
echo "Running the following command: ${prefix}agbenchmark start --maintain --mock"
${prefix}agbenchmark start --maintain --mock
EXIT_CODE=$?
set -e # Stop ignoring non-zero exit codes
# Check if the exit code was 5, and if so, exit with 0 instead
if [ $EXIT_CODE -eq 5 ]; then
echo "regression_tests.json is empty."
fi
echo "Running the following command: ${prefix}agbenchmark start --mock"
${prefix}agbenchmark start --mock
echo "Running the following command: ${prefix}agbenchmark start --mock --category=retrieval"
${prefix}agbenchmark start --mock --category=retrieval
echo "Running the following command: ${prefix}agbenchmark start --mock --category=interface"
${prefix}agbenchmark start --mock --category=interface
echo "Running the following command: ${prefix}agbenchmark start --mock --category=code"
${prefix}agbenchmark start --mock --category=code
echo "Running the following command: ${prefix}agbenchmark start --mock --category=memory"
${prefix}agbenchmark start --mock --category=memory
echo "Running the following command: ${prefix}agbenchmark start --mock --suite TestRevenueRetrieval"
${prefix}agbenchmark start --mock --suite TestRevenueRetrieval
echo "Running the following command: ${prefix}agbenchmark start --test=TestWriteFile"
${prefix}agbenchmark start --test=TestWriteFile
cd ../..
poetry install
poetry run uvicorn server:app --reload &
sleep 5
export AGENT_NAME=mini-agi
echo "poetry run agbenchmark start --mock --api_mode --host=http://localhost:8000"
poetry run agbenchmark start --mock --api_mode --host=http://localhost:8000
else
echo "${prefix}agbenchmark start"
${prefix}agbenchmark start || echo "This command will always return a non zero exit code unless all the challenges are solved."
fi
cd ../..
env:
GITHUB_EVENT_NAME: ${{ github.event_name }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AGENT_NAME: ${{ matrix.agent-name }}
PROMPT_USER: false # For mini-agi. TODO: Remove this and put it in benchmarks.py
HELICONE_API_KEY: ${{ secrets.HELICONE_API_KEY }}
BASERUN_API_KEY: ${{ secrets.BASERUN_API_KEY }}
REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
HELICONE_CACHE_ENABLED: false
HELICONE_PROPERTY_AGENT: ${{ matrix.agent-name }}
REPORT_LOCATION: ${{ format('../../reports/{0}', matrix.agent-name) }}
WOLFRAM_ALPHA_APPID: ${{ secrets.WOLFRAM_ALPHA_APPID }}
SERPER_API_KEY: ${{ secrets.SERPER_API_KEY }}
BING_SUBSCRIPTION_KEY: ${{ secrets.BING_SUBSCRIPTION_KEY }}
- name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.agent-name }}
path: reports/${{ matrix.agent-name }}
- name: Authenticate and Push to Branch
if: (success() || failure()) && (github.event_name == 'schedule' || github.event_name == 'workflow_dispatch')
run: |
git config --global user.email "github-bot@agpt.co"
git config --global user.name "Auto-GPT-Bot"
git add reports/* || echo "nothing to commit"
commit_message="${{ matrix.agent-name }}-$(date +'%Y%m%d%H%M%S')"
git commit -m "${commit_message}"
git stash
current_branch=${{ github.ref_name }}
attempts=0
max_attempts=3
while [ $attempts -lt $max_attempts ]; do
git fetch origin $current_branch
git rebase origin/$current_branch
if git push origin HEAD; then
echo "Success!"
poetry run python reports/send_to_googledrive.py || echo "Failed to upload to Google Drive"
exit 0
else
echo "Attempt $(($attempts + 1)) failed. Retrying..."
attempts=$(($attempts + 1))
fi
done
echo "Failed after $max_attempts attempts."
env:
GDRIVE_BASE64: ${{ secrets.GDRIVE_BASE64 }}
GITHUB_REF_NAME: ${{ github.ref_name }}
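
The matrix-setup job above turns the comma-separated `agents` input from `workflow_dispatch` into a JSON-style array string that `fromJson` expands into the test matrix. A minimal sketch of that same transformation, runnable locally to sanity-check the output; the input value is just an example:

```shell
#!/usr/bin/env bash
# Example input, mirroring the workflow_dispatch 'agents' field
agents="gpt-engineer,smol-developer,mini-agi"

# Split on commas into an array, then rebuild as [ "a", "b", ... ]
IFS=',' read -ra matrix_array <<< "$agents"
matrix_string="[ \"$(echo "${matrix_array[@]}" | sed 's/ /", "/g')\" ]"

echo "$matrix_string"   # -> [ "gpt-engineer", "smol-developer", "mini-agi" ]
```

Note that this splitting breaks if an agent name contains spaces, which is equally true of the workflow's own version of the snippet.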


@@ -0,0 +1,48 @@
name: Publish to PyPI
on:
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@v2
with:
submodules: true
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install Poetry
run: |
curl -sSL https://install.python-poetry.org | python3 -
echo "$HOME/.poetry/bin" >> $GITHUB_PATH
- name: Build project for distribution
run: poetry build
- name: Install dependencies
run: poetry install
- name: Check Version
id: check-version
run: |
echo version=$(poetry version --short) >> $GITHUB_OUTPUT
- name: Create Release
uses: ncipollo/release-action@v1
with:
artifacts: "dist/*"
token: ${{ secrets.GITHUB_TOKEN }}
draft: false
generateReleaseNotes: true
tag: v${{ steps.check-version.outputs.version }}
commit: master
- name: Build and publish
run: poetry publish -u __token__ -p ${{ secrets.PYPI_API_TOKEN }}
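The publish job tags the release from `poetry version --short` and uploads with `poetry publish`. A rough sketch of a local pre-flight one might run before dispatching the workflow; the `patch` bump and the dry-run step are assumptions about usage, not steps taken from this commit:

```shell
# Bump the version Poetry will report (patch/minor/major as appropriate)
poetry version patch

# This is the value the workflow writes to $GITHUB_OUTPUT and tags as v<version>
poetry version --short

# Build the sdist/wheel that the release step uploads as artifacts
poetry build

# Optional: check credentials and metadata without actually uploading
poetry publish --dry-run -u __token__ -p "$PYPI_API_TOKEN"
```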


@@ -0,0 +1,49 @@
<!-- ⚠️ At the moment any non-essential commands are not being merged.
If you want to add non-essential commands to Auto-GPT, please create a plugin instead.
We are expecting to ship plugin support within the week (PR #757).
Resources:
* https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template
-->
<!-- 📢 Announcement
We've recently noticed an increase in pull requests focusing on combining multiple changes. While the intentions behind these PRs are appreciated, it's essential to maintain a clean and manageable git history. To ensure the quality of our repository, we kindly ask you to adhere to the following guidelines when submitting PRs:
Focus on a single, specific change.
Do not include any unrelated or "extra" modifications.
Provide clear documentation and explanations of the changes made.
Ensure diffs are limited to the intended lines — no applying preferred formatting styles or line endings (unless that's what the PR is about).
For guidance on committing only the specific lines you have changed, refer to this helpful video: https://youtu.be/8-hSNHHbiZg
Check out our [wiki page on Contributing](https://github.com/Significant-Gravitas/Nexus/wiki/Contributing)
By following these guidelines, your PRs are more likely to be merged quickly after testing, as long as they align with the project's overall direction. -->
### Background
<!-- Provide a concise overview of the rationale behind this change. Include relevant context, prior discussions, or links to related issues. Ensure that the change aligns with the project's overall direction. -->
### Changes
<!-- Describe the specific, focused change made in this pull request. Detail the modifications clearly and avoid any unrelated or "extra" changes. -->
### Documentation
<!-- Explain how your changes are documented, such as in-code comments or external documentation. Ensure that the documentation is clear, concise, and easy to understand. -->
### Test Plan
<!-- Describe how you tested this functionality. Include steps to reproduce, relevant test cases, and any other pertinent information. -->
### PR Quality Checklist
- [ ] My pull request is atomic and focuses on a single change.
- [ ] I have thoroughly tested my changes with multiple different prompts.
- [ ] I have considered potential risks and mitigations for my changes.
- [ ] I have documented my changes clearly and comprehensively.
- [ ] I have not snuck in any "extra" small tweaks or changes. <!-- Submit these as separate Pull Requests, they are the easiest to merge! -->
- [ ] I have run the following commands against my code to ensure it passes our linters:
```shell
black .
isort .
mypy
autoflake --remove-all-unused-imports --recursive --ignore-init-module-imports --ignore-pass-after-docstring autogpt tests --in-place
```
<!-- If you haven't added tests, please explain why. If you have, check the appropriate box. If you've ensured your PR is atomic and well-documented, check the corresponding boxes. -->
<!-- By submitting this, I agree that my pull request should be closed if I do not fill this out or follow the guidelines. -->
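
The four linter commands from the checklist above can also be run in one pass. A small hypothetical wrapper (a `lint.sh` like this is not part of this commit), to be run from the relevant project root:

```shell
#!/usr/bin/env bash
# Hypothetical lint.sh: run the checklist's linters in sequence,
# stopping at the first command that errors.
set -e

black .
isort .
mypy
autoflake --remove-all-unused-imports --recursive \
    --ignore-init-module-imports --ignore-pass-after-docstring \
    autogpt tests --in-place
```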

autogpts/autogpt/.gitignore (new file, 169 lines)

@@ -0,0 +1,169 @@
## Original ignores
autogpt/keys.py
autogpt/*.json
auto_gpt_workspace/*
*.mpeg
.env
azure.yaml
ai_settings.yaml
last_run_ai_settings.yaml
.vscode
.idea/*
auto-gpt.json
log.txt
log-ingestion.txt
/logs
*.log
*.mp3
mem.sqlite3
venvAutoGPT
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
/plugins/
plugins_config.yaml
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
site/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.direnv/
.env
.venv
env/
venv*/
ENV/
env.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
llama-*
vicuna-*
# mac
.DS_Store
openai/
# news
CURRENT_BULLETIN.md
# AgBenchmark
agbenchmark/reports/
# Nodejs
package-lock.json
package.json
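
With a .gitignore this long, it is not always obvious which rule excludes a given path. A quick way to check with plain git; the paths below are chosen to match patterns in this file:

```shell
# Show which pattern (and which line of the .gitignore) matches a path
git check-ignore -v autogpt/keys.py
git check-ignore -v agbenchmark/reports/report.json

# List everything in the working tree that is currently ignored
git status --ignored --short
```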

autogpts/autogpt/README.md (new file, 148 lines; diff not shown because one or more lines are too long)

Some files were not shown because too many files have changed in this diff.