Auto-GPT/autogpt/core/planning/simple.py
James Collins b9f01330db Re-arch WIP (#3969)
Rough sketching out of a hello world using our refactored autogpt
library. See the tracking issue here: #4770.

# Run instructions

There are two client applications for Auto-GPT included. 

## CLI Application

🌟 **This is the reference application I'm working with for now** 🌟

The first app is a straight CLI application. I have not yet ported the friendly display logic built on `logger.typewriter_log`.

- [Entry
Point](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_app/cli.py)
- [Client
Application](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_app/main.py)

To run, you first need a settings file.  Run

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_app/cli.py make-settings
```

where `REPOSITORY_ROOT` is the root of the Auto-GPT repository on your machine. This will write a file called `default_agent_settings.yml` containing all the user-modifiable configuration keys to `~/auto-gpt/` (creating the `auto-gpt` directory in your home directory if it doesn't exist). At a bare minimum, you'll need to set `openai.credentials.api_key` to your OpenAI API key to run the model.
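For illustration, the relevant part of the generated settings file might look like the following. This is a hypothetical fragment (the real generated file contains many more sections); only the `openai.credentials.api_key` key path comes from the instructions above:

```yaml
openai:
  credentials:
    api_key: sk-your-key-here  # replace with your OpenAI API key
```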

You can then run Auto-GPT with

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_app/cli.py run
```

to launch the interaction loop.

## CLI Web App

The second app is still a CLI, but it sets up a local webserver that the client application talks to rather than invoking the agent library code directly. This application is essentially a sketch at this point, as the folks who were driving it have had less time (and likely not enough clarity) to proceed.

- [Entry Point](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/cli.py)
- [Client Application](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/client/client.py)
- [Server API](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/server/api.py)

To run it, you still need to generate a default configuration. You can do

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_web_app/cli.py make-settings
```

It invokes the same command as the bare CLI app, so follow the instructions above about setting your API key.

Then, to start the app, do

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_web_app/cli.py client
```

This will launch a webserver, then start the client CLI application to communicate with it.

⚠️ I am not actively developing this application. It is a very good place to start if you have web application design experience and are looking to get involved in the re-arch.

---------

Co-authored-by: David Wurtz <davidjwurtz@gmail.com>
Co-authored-by: Media <12145726+rihp@users.noreply.github.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Daryl Rodrigo <darylrodrigo@gmail.com>
Co-authored-by: Daryl Rodrigo <daryl@orkestro.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
2023-07-05 12:12:05 -07:00


```python
import logging
import platform
import time

import distro

from autogpt.core.configuration import (
    Configurable,
    SystemConfiguration,
    SystemSettings,
    UserConfigurable,
)
from autogpt.core.planning import strategies
from autogpt.core.planning.base import PromptStrategy
from autogpt.core.planning.schema import (
    LanguageModelClassification,
    LanguageModelResponse,
    Task,
)
from autogpt.core.resource.model_providers import (
    LanguageModelProvider,
    ModelProviderName,
    OpenAIModelName,
)
from autogpt.core.workspace import Workspace


class LanguageModelConfiguration(SystemConfiguration):
    """Struct for model configuration."""

    model_name: str = UserConfigurable()
    provider_name: ModelProviderName = UserConfigurable()
    temperature: float = UserConfigurable()


class PromptStrategiesConfiguration(SystemConfiguration):
    name_and_goals: strategies.NameAndGoalsConfiguration
    initial_plan: strategies.InitialPlanConfiguration
    next_ability: strategies.NextAbilityConfiguration


class PlannerConfiguration(SystemConfiguration):
    """Configuration for the Planner subsystem."""

    models: dict[LanguageModelClassification, LanguageModelConfiguration]
    prompt_strategies: PromptStrategiesConfiguration


class PlannerSettings(SystemSettings):
    """Settings for the Planner subsystem."""

    configuration: PlannerConfiguration


class SimplePlanner(Configurable):
    """Manages the agent's planning and goal-setting by constructing language model prompts."""

    default_settings = PlannerSettings(
        name="planner",
        description="Manages the agent's planning and goal-setting by constructing language model prompts.",
        configuration=PlannerConfiguration(
            models={
                LanguageModelClassification.FAST_MODEL: LanguageModelConfiguration(
                    model_name=OpenAIModelName.GPT3,
                    provider_name=ModelProviderName.OPENAI,
                    temperature=0.9,
                ),
                LanguageModelClassification.SMART_MODEL: LanguageModelConfiguration(
                    model_name=OpenAIModelName.GPT4,
                    provider_name=ModelProviderName.OPENAI,
                    temperature=0.9,
                ),
            },
            prompt_strategies=PromptStrategiesConfiguration(
                name_and_goals=strategies.NameAndGoals.default_configuration,
                initial_plan=strategies.InitialPlan.default_configuration,
                next_ability=strategies.NextAbility.default_configuration,
            ),
        ),
    )

    def __init__(
        self,
        settings: PlannerSettings,
        logger: logging.Logger,
        model_providers: dict[ModelProviderName, LanguageModelProvider],
        workspace: Workspace = None,  # Workspace is not available during bootstrapping.
    ) -> None:
        self._configuration = settings.configuration
        self._logger = logger
        self._workspace = workspace

        # Map each model classification to the provider that serves it.
        self._providers: dict[LanguageModelClassification, LanguageModelProvider] = {}
        for model, model_config in self._configuration.models.items():
            self._providers[model] = model_providers[model_config.provider_name]

        self._prompt_strategies = {
            "name_and_goals": strategies.NameAndGoals(
                **self._configuration.prompt_strategies.name_and_goals.dict()
            ),
            "initial_plan": strategies.InitialPlan(
                **self._configuration.prompt_strategies.initial_plan.dict()
            ),
            "next_ability": strategies.NextAbility(
                **self._configuration.prompt_strategies.next_ability.dict()
            ),
        }

    async def decide_name_and_goals(self, user_objective: str) -> LanguageModelResponse:
        return await self.chat_with_model(
            self._prompt_strategies["name_and_goals"],
            user_objective=user_objective,
        )

    async def make_initial_plan(
        self,
        agent_name: str,
        agent_role: str,
        agent_goals: list[str],
        abilities: list[str],
    ) -> LanguageModelResponse:
        return await self.chat_with_model(
            self._prompt_strategies["initial_plan"],
            agent_name=agent_name,
            agent_role=agent_role,
            agent_goals=agent_goals,
            abilities=abilities,
        )

    async def determine_next_ability(
        self,
        task: Task,
        ability_schema: list[dict],
    ) -> LanguageModelResponse:
        return await self.chat_with_model(
            self._prompt_strategies["next_ability"],
            task=task,
            ability_schema=ability_schema,
        )

    async def chat_with_model(
        self,
        prompt_strategy: PromptStrategy,
        **kwargs,
    ) -> LanguageModelResponse:
        model_classification = prompt_strategy.model_classification
        model_configuration = self._configuration.models[model_classification].dict()
        self._logger.debug(f"Using model configuration: {model_configuration}")
        # The provider is resolved separately, so its name is not a completion argument.
        del model_configuration["provider_name"]
        provider = self._providers[model_classification]

        # Merge ambient template kwargs with call-site kwargs; call-site values win.
        template_kwargs = self._make_template_kwargs_for_strategy(prompt_strategy)
        template_kwargs.update(kwargs)
        prompt = prompt_strategy.build_prompt(**template_kwargs)

        self._logger.debug(f"Using prompt:\n{prompt}\n\n")
        response = await provider.create_language_completion(
            model_prompt=prompt.messages,
            functions=prompt.functions,
            **model_configuration,
            completion_parser=prompt_strategy.parse_response_content,
        )
        return LanguageModelResponse.parse_obj(response.dict())

    def _make_template_kwargs_for_strategy(self, strategy: PromptStrategy):
        provider = self._providers[strategy.model_classification]
        template_kwargs = {
            "os_info": get_os_info(),
            "api_budget": provider.get_remaining_budget(),
            "current_time": time.strftime("%c"),
        }
        return template_kwargs


def get_os_info() -> str:
    os_name = platform.system()
    os_info = (
        platform.platform(terse=True)
        if os_name != "Linux"
        else distro.name(pretty=True)
    )
    return os_info
```
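The control flow of `chat_with_model` above — resolve a provider from the strategy's model classification, merge ambient template kwargs with call-site kwargs, build the prompt, then hand it to the provider with the classification's model config — can be sketched in a self-contained form. Everything here (`EchoProvider`, `GreetingStrategy`, the dict names) is a hypothetical stand-in for the real library objects:

```python
from enum import Enum


class ModelClassification(Enum):
    FAST = "fast"
    SMART = "smart"


class EchoProvider:
    """Stand-in for a LanguageModelProvider: just echoes its arguments."""

    def complete(self, prompt: str, temperature: float) -> str:
        return f"completed({prompt!r}, temperature={temperature})"


class GreetingStrategy:
    """Stand-in for a PromptStrategy: declares a classification, builds a prompt."""

    model_classification = ModelClassification.FAST

    def build_prompt(self, **kwargs) -> str:
        return f"Greet {kwargs['user']} (os: {kwargs['os_info']})"


# Registries keyed by classification, mirroring SimplePlanner's internals.
providers = {ModelClassification.FAST: EchoProvider()}
model_configs = {ModelClassification.FAST: {"temperature": 0.9}}


def chat_with_model(strategy, **kwargs) -> str:
    # 1. Resolve the provider and its model config from the classification.
    classification = strategy.model_classification
    provider = providers[classification]
    config = model_configs[classification]
    # 2. Merge ambient template kwargs with call-site kwargs; call-site wins.
    template_kwargs = {"os_info": "Linux"}
    template_kwargs.update(kwargs)
    # 3. Build the prompt and hand it to the provider.
    return provider.complete(strategy.build_prompt(**template_kwargs), **config)


print(chat_with_model(GreetingStrategy(), user="Ada"))
# → completed('Greet Ada (os: Linux)', temperature=0.9)
```

The same shape lets new strategies be added without touching the dispatch code: each strategy names its classification, and the planner picks the matching provider and config.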