Auto-GPT/autogpt/core/planning/schema.py
James Collins b9f01330db Re-arch WIP (#3969)
Rough sketching out of a hello world using our refactored autogpt
library. See the tracking issue here: #4770.

# Run instructions

Two client applications for Auto-GPT are included.

## CLI Application

🌟 **This is the reference application I'm working with for now** 🌟

The first app is a straightforward CLI application. I have not yet done
anything to port the friendly display output from the
`logger.typewriter_log` logic.

- [Entry Point](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_app/cli.py)
- [Client Application](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_app/main.py)

To run, you first need a settings file. Run

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_app/cli.py make-settings
```

where `REPOSITORY_ROOT` is the root of the Auto-GPT repository on your machine. This writes a file containing all the user-modifiable configuration keys to `~/auto-gpt/default_agent_settings.yml` (and creates the `auto-gpt` directory in your home directory if it doesn't exist). At a bare minimum, you'll need to set `openai.credentials.api_key` to your OpenAI API key to run the model.
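The exact contents of the generated file depend on which systems are configured, but the portion you must edit looks roughly like the sketch below (everything except the `openai.credentials.api_key` path is illustrative):

```yaml
openai:
  credentials:
    api_key: "sk-..."  # replace with your OpenAI API key
```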

You can then run Auto-GPT with

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_app/cli.py run
```

to launch the interaction loop.

## CLI Web App

The second app is still a CLI, but it sets up a local webserver that the client application talks to, rather than invoking calls to the Agent library code directly. This application is essentially a sketch at this point, as the folks who were driving it have had less time (and likely not enough clarity) to proceed.

- [Entry Point](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/cli.py)
- [Client Application](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/client/client.py)
- [Server API](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/server/api.py)

To run, you still need to generate a default configuration. You can do

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_web_app/cli.py make-settings
```

It invokes the same command as the bare CLI app, so follow the instructions above about setting your API key.

To launch it, do

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_web_app/cli.py client
```

This will launch a webserver and then start the client CLI application to communicate with it.

⚠️ I am not actively developing this application. It is a very good place to contribute if you have web application design experience and are looking to get involved in the re-arch.

---------

Co-authored-by: David Wurtz <davidjwurtz@gmail.com>
Co-authored-by: Media <12145726+rihp@users.noreply.github.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Daryl Rodrigo <darylrodrigo@gmail.com>
Co-authored-by: Daryl Rodrigo <daryl@orkestro.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
2023-07-05 12:12:05 -07:00


```python
import enum

from pydantic import BaseModel, Field

from autogpt.core.ability.schema import AbilityResult
from autogpt.core.resource.model_providers.schema import (
    LanguageModelFunction,
    LanguageModelMessage,
    LanguageModelProviderModelResponse,
)


class LanguageModelClassification(str, enum.Enum):
    """The LanguageModelClassification is a functional description of the model.

    This is used to determine what kind of model to use for a given prompt.
    Sometimes we prefer a faster or cheaper model to accomplish a task when
    possible.
    """

    FAST_MODEL: str = "fast_model"
    SMART_MODEL: str = "smart_model"


class LanguageModelPrompt(BaseModel):
    messages: list[LanguageModelMessage]
    functions: list[LanguageModelFunction] = Field(default_factory=list)

    def __str__(self):
        return "\n\n".join([f"{m.role.value}: {m.content}" for m in self.messages])


class LanguageModelResponse(LanguageModelProviderModelResponse):
    """Standard response struct for a response from a language model."""


class TaskType(str, enum.Enum):
    RESEARCH: str = "research"
    WRITE: str = "write"
    EDIT: str = "edit"
    CODE: str = "code"
    DESIGN: str = "design"
    TEST: str = "test"
    PLAN: str = "plan"


class TaskStatus(str, enum.Enum):
    BACKLOG: str = "backlog"
    READY: str = "ready"
    IN_PROGRESS: str = "in_progress"
    DONE: str = "done"


class TaskContext(BaseModel):
    cycle_count: int = 0
    status: TaskStatus = TaskStatus.BACKLOG
    parent: "Task" = None
    prior_actions: list[AbilityResult] = Field(default_factory=list)
    memories: list = Field(default_factory=list)
    user_input: list[str] = Field(default_factory=list)
    supplementary_info: list[str] = Field(default_factory=list)
    enough_info: bool = False


class Task(BaseModel):
    objective: str
    type: str  # TaskType  FIXME: gpt does not obey the enum parameter in its schema
    priority: int
    ready_criteria: list[str]
    acceptance_criteria: list[str]
    context: TaskContext = Field(default_factory=TaskContext)


# Resolve the circular dependency between Task and TaskContext
# now that both models are defined.
TaskContext.update_forward_refs()
```
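The enums above subclass both `str` and `enum.Enum`, which is what lets pydantic validate raw strings against them and lets their members serialize as plain strings. A minimal, stdlib-only sketch of that behavior (the class below mirrors the schema's `TaskStatus` but stands alone):

```python
import enum


class TaskStatus(str, enum.Enum):
    """Mirrors the schema's TaskStatus: each member is also a plain str."""

    BACKLOG = "backlog"
    READY = "ready"
    IN_PROGRESS = "in_progress"
    DONE = "done"


# Members compare equal to their raw string values...
assert TaskStatus.BACKLOG == "backlog"

# ...and a raw string can be looked up back to the member, which is
# how validation of incoming data against the enum works.
assert TaskStatus("in_progress") is TaskStatus.IN_PROGRESS

# The underlying string is available via .value for serialization.
print(TaskStatus.DONE.value)  # -> done
```

This is also why `Task.type` is annotated `str` rather than `TaskType`: per the FIXME, the model does not reliably emit values from the enum, so the schema accepts any string.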