Rough sketching out of a hello world using our refactored autogpt library. See the tracking issue here: #4770.

# Run instructions

There are two client applications for Auto-GPT included.

## CLI Application

🌟 **This is the reference application I'm working with for now** 🌟

The first app is a straight CLI application. I have not done anything yet to port all the friendly display stuff from the `logger.typewriter_log` logic.

- [Entry Point](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_app/cli.py)
- [Client Application](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_app/main.py)

To run, you first need a settings file. Run

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_app/cli.py make-settings
```

where `REPOSITORY_ROOT` is the root of the Auto-GPT repository on your machine. This will write a file called `default_agent_settings.yaml` with all the user-modifiable configuration keys to `~/auto-gpt/default_agent_settings.yml` (and make the `auto-gpt` directory in your user directory if it doesn't exist). At a bare minimum, you'll need to set `openai.credentials.api_key` to your OpenAI API key to run the model (a small sanity-check sketch for this appears after these run instructions).

You can then run Auto-GPT with

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_app/cli.py run
```

to launch the interaction loop.

## CLI Web App

The second app is still a CLI, but it sets up a local webserver that the client application talks to rather than invoking calls to the Agent library code directly. This application is essentially a sketch at this point, as the folks who were driving it have had less time (and likely not enough clarity) to proceed.

- [Entry Point](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/cli.py)
- [Client Application](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/client/client.py)
- [Server API](https://github.com/Significant-Gravitas/Auto-GPT/blob/re-arch/hello-world/autogpt/core/runner/cli_web_app/server/api.py)

To run, you still need to generate a default configuration. You can do

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_web_app/cli.py make-settings
```

This invokes the same command as the bare CLI app, so follow the instructions above about setting your API key.

To run, do

```
python REPOSITORY_ROOT/autogpt/core/runner/cli_web_app/cli.py client
```

This will launch a webserver and then start the client CLI application to communicate with it.

⚠️ I am not actively developing this application. It is a very good place to get involved if you have web application design experience and are looking to get involved in the re-arch.

---------

Co-authored-by: David Wurtz <davidjwurtz@gmail.com>
Co-authored-by: Media <12145726+rihp@users.noreply.github.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Daryl Rodrigo <darylrodrigo@gmail.com>
Co-authored-by: Daryl Rodrigo <daryl@orkestro.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
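Before launching the interaction loop, it can help to confirm the key was actually set. The sketch below is illustrative only: it assumes nothing beyond what the instructions above state (the file lives at `~/auto-gpt/default_agent_settings.yml` and the key sits at the dotted path `openai.credentials.api_key`), plus the availability of PyYAML; the rest of the file's layout is not assumed.

```
# Hypothetical sanity check for the generated settings file; the path and key
# path come from the run instructions above, everything else is illustrative.
from pathlib import Path

import yaml  # assumes PyYAML is installed in your environment

settings_path = Path.home() / "auto-gpt" / "default_agent_settings.yml"
settings = yaml.safe_load(settings_path.read_text()) or {}

# Walk the dotted path openai.credentials.api_key through the nested mapping.
api_key = settings.get("openai", {}).get("credentials", {}).get("api_key")
if not api_key:
    raise SystemExit(f"Set openai.credentials.api_key in {settings_path} first.")
print("OpenAI API key found; ready to run the interaction loop.")
```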
import abc

from autogpt.core.configuration import SystemConfiguration
from autogpt.core.planning.schema import (
    LanguageModelClassification,
    LanguageModelPrompt,
)


# class Planner(abc.ABC):
#     """Manages the agent's planning and goal-setting by constructing language model prompts."""
#
#     @staticmethod
#     @abc.abstractmethod
#     async def decide_name_and_goals(
#         user_objective: str,
#     ) -> LanguageModelResponse:
#         """Decide the name and goals of an Agent from a user-defined objective.
#
#         Args:
#             user_objective: The user-defined objective for the agent.
#
#         Returns:
#             The agent name and goals as a response from the language model.
#
#         """
#         ...
#
#     @abc.abstractmethod
#     async def plan(self, context: PlanningContext) -> LanguageModelResponse:
#         """Plan the next ability for the Agent.
#
#         Args:
#             context: A context object containing information about the agent's
#                 progress, result, memories, and feedback.
#
#         Returns:
#             The next ability the agent should take along with thoughts and reasoning.
#
#         """
#         ...
#
#     @abc.abstractmethod
#     def reflect(
#         self,
#         context: ReflectionContext,
#     ) -> LanguageModelResponse:
#         """Reflect on a planned ability and provide self-criticism.
#
#         Args:
#             context: A context object containing information about the agent's
#                 reasoning, plan, thoughts, and criticism.
#
#         Returns:
#             Self-criticism about the agent's plan.
#
#         """
#         ...


class PromptStrategy(abc.ABC):
    default_configuration: SystemConfiguration

    @property
    @abc.abstractmethod
    def model_classification(self) -> LanguageModelClassification:
        ...

    @abc.abstractmethod
    def build_prompt(self, *_, **kwargs) -> LanguageModelPrompt:
        ...

    @abc.abstractmethod
    def parse_response_content(self, response_content: dict) -> dict:
        ...
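To make the interface concrete, here is a minimal sketch of a strategy built on the `PromptStrategy` ABC above. Everything specific to it is an assumption made for illustration: the `NameAndGoalsStrategy` name, the prompt wording, the `autogpt.core.planning.base` import path, and the guesses that `LanguageModelClassification` exposes a `FAST_MODEL` member and that `LanguageModelPrompt` accepts a list of chat-style messages.

```
# A hypothetical PromptStrategy implementation; schema details are assumptions,
# not taken from the file above.
from autogpt.core.planning.base import PromptStrategy  # assumed module path for the ABC above
from autogpt.core.planning.schema import (
    LanguageModelClassification,
    LanguageModelPrompt,
)


class NameAndGoalsStrategy(PromptStrategy):
    """Illustrative strategy: ask the model to name an agent and draft its goals."""

    # A real strategy would likely also supply a default_configuration
    # (a SystemConfiguration instance); omitted here to keep the sketch minimal.

    @property
    def model_classification(self) -> LanguageModelClassification:
        # Assumes the schema enum defines a FAST_MODEL member for cheaper models.
        return LanguageModelClassification.FAST_MODEL

    def build_prompt(self, *_, user_objective: str = "", **kwargs) -> LanguageModelPrompt:
        # Assumes LanguageModelPrompt can be built from chat-style message dicts.
        return LanguageModelPrompt(
            messages=[
                {"role": "system", "content": "You name agents and write their goals."},
                {"role": "user", "content": user_objective},
            ],
        )

    def parse_response_content(self, response_content: dict) -> dict:
        # Pass the raw model output through; a real strategy would validate it and
        # reshape it into whatever fields the planner expects.
        return response_content
```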