update nav

This commit is contained in:
zachary62
2025-04-04 14:25:33 -04:00
parent 581aa6b08f
commit ca46798ce5
8 changed files with 57 additions and 1 deletion

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "MultiStepAgent"
parent: "SmolaAgents"
nav_order: 1
---
# Chapter 1: The MultiStepAgent - Your Task Orchestrator
Welcome to the SmolaAgents library! If you're looking to build smart AI agents that can tackle complex problems, you're in the right place.
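A quick note on the navigation keys this commit adds (the YAML below repeats the pattern verbatim from the diff; the explanation assumes a just-the-docs-style Jekyll theme, which the commit itself does not name):

```yaml
---
layout: default          # page template used to render the chapter
title: "MultiStepAgent"  # label shown in the sidebar
parent: "SmolaAgents"    # nests this page under the SmolaAgents section
nav_order: 1             # position among the section's children
---
```

All eight files below receive the same four keys, varying only `title` and `nav_order`, which is what yields an ordered chapter list in the sidebar.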

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Model Interface"
parent: "SmolaAgents"
nav_order: 2
---
# Chapter 2: Model Interface - Your Agent's Universal Translator
Welcome back! In [Chapter 1: The MultiStepAgent - Your Task Orchestrator](01_multistepagent.md), we met the `MultiStepAgent`, our AI project manager. We learned that it follows a "Think -> Act -> Observe" cycle to solve tasks. A crucial part of the "Think" phase is consulting its "brain": a Large Language Model (LLM).
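Since this recap leans on the "Think -> Act -> Observe" cycle, a minimal self-contained sketch may help. Every name in it (`Thought`, `call_llm`, `run_action`, `run_agent`) is hypothetical; this shows the shape of the loop, not the SmolaAgents API:

```python
from dataclasses import dataclass

@dataclass
class Thought:
    content: str
    is_final: bool = False

def call_llm(observations: list[str]) -> Thought:
    # Stub "brain": a real agent would send the observations to an LLM.
    if len(observations) >= 2:
        return Thought("Paris", is_final=True)
    return Thought("web_search: capital of France")

def run_action(action: str) -> str:
    # Stub "Act" step: a real agent would dispatch to an actual tool.
    return f"Observation: result of '{action}'"

def run_agent(task: str, max_steps: int = 5) -> str:
    observations = [f"Task: {task}"]
    for _ in range(max_steps):
        thought = call_llm(observations)  # Think
        if thought.is_final:
            return thought.content
        observations.append(run_action(thought.content))  # Act + Observe
    return "No answer within max_steps"

print(run_agent("What is the capital of France?"))  # -> Paris
```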

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "Tool"
parent: "SmolaAgents"
nav_order: 3
---
# Chapter 3: Tool - Giving Your Agent Superpowers
Welcome back! In [Chapter 2: Model Interface](02_model_interface.md), we learned how our `MultiStepAgent` uses a "universal remote" (the Model Interface) to talk to its LLM "brain". The LLM thinks and suggests what the agent should do next.
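"Universal remote" here means the agent codes against one small contract rather than any particular LLM provider. A hedged sketch of that idea (the `Model` protocol and `EchoModel` below are hypothetical, not the library's actual interface):

```python
from typing import Protocol

class Model(Protocol):
    def generate(self, messages: list[dict]) -> str: ...

class EchoModel:
    """Stand-in backend; a real one would call an LLM provider's API."""
    def generate(self, messages: list[dict]) -> str:
        return f"(stub reply to {len(messages)} message(s))"

def think(model: Model, messages: list[dict]) -> str:
    # The agent depends only on the `generate` contract, not the backend.
    return model.generate(messages)

print(think(EchoModel(), [{"role": "user", "content": "Hi"}]))
```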

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "AgentMemory"
parent: "SmolaAgents"
nav_order: 4
---
# Chapter 4: AgentMemory - The Agent's Notepad
Welcome back! In [Chapter 3: Tool](03_tool.md), we equipped our agent with "superpowers": tools like web search or calculators that let it interact with the world and perform actions. We saw how the agent's "brain" (the LLM) decides which tool to use, and the agent executes it.
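The division of labor in that recap (the LLM picks the tool, the agent runs it) can be sketched in a few lines. The tool names and signatures below are invented for illustration, not the library's `Tool` API:

```python
def calculator(expression: str) -> str:
    # Toy calculator for single "a op b" expressions.
    a, op, b = expression.split()
    results = {"+": float(a) + float(b), "*": float(a) * float(b)}
    return str(results[op])

def web_search(query: str) -> str:
    return f"(stub) top result for: {query}"

TOOLS = {"calculator": calculator, "web_search": web_search}

# Pretend the LLM's "Think" step produced this tool call:
tool_name, tool_input = "calculator", "6 * 7"
print(TOOLS[tool_name](tool_input))  # -> 42.0
```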

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "PromptTemplates"
parent: "SmolaAgents"
nav_order: 5
---
# Chapter 5: PromptTemplates - Crafting Your Agent's Script
Welcome back! In [Chapter 4: AgentMemory](04_agentmemory.md), we learned how our agent uses its "logbook" (`AgentMemory`) to remember the task, its past actions, and observations. This memory is crucial for deciding the next step.
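A hedged sketch of what such a "logbook" might hold (the `Step` and `Memory` classes below are hypothetical stand-ins, not the actual `AgentMemory` structure):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    thought: str
    action: str
    observation: str

@dataclass
class Memory:
    task: str
    steps: list[Step] = field(default_factory=list)

    def transcript(self) -> str:
        # Replay the log as text the LLM can reread before its next step.
        lines = [f"Task: {self.task}"]
        for i, s in enumerate(self.steps, 1):
            lines.append(f"Step {i}: {s.thought} | {s.action} -> {s.observation}")
        return "\n".join(lines)

memory = Memory("What is the capital of France?")
memory.steps.append(Step("need a lookup", "web_search('capital of France')", "Paris"))
print(memory.transcript())
```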

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "PythonExecutor"
parent: "SmolaAgents"
nav_order: 6
---
# Chapter 6: PythonExecutor - Running Code Safely
Welcome back! In [Chapter 5: PromptTemplates](05_prompttemplates.md), we saw how agents use templates to create clear instructions for their LLM brain. These instructions often involve asking the LLM to generate code, especially for agents like `CodeAgent`, which are designed to solve problems by writing and running Python.
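The core mechanic recapped here is string templating: a fixed scaffold with slots for the task and the available tools. The template text below is invented for illustration; the real templates are the subject of the chapter itself:

```python
SYSTEM_TEMPLATE = (
    "You are a helpful agent with these tools: {tool_names}.\n"
    "Solve the task step by step, writing Python code when needed.\n"
    "Task: {task}"
)

prompt = SYSTEM_TEMPLATE.format(
    tool_names="web_search, calculator",
    task="What is the capital of France?",
)
print(prompt)
```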

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "AgentType"
parent: "SmolaAgents"
nav_order: 7
---
# Chapter 7: AgentType - Handling More Than Just Text
Welcome back! In the previous chapters, especially when discussing [Tools](03_tool.md) and the [PythonExecutor](06_pythonexecutor.md), we saw how agents can perform actions and generate results. So far, we've mostly focused on text-based tasks and results.
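The "specialized containers" introduced in this chapter can be sketched as thin wrappers that offer both a raw view and a text view of non-text data. `AgentImage` below is a hypothetical stand-in, not the library's class:

```python
class AgentImage:
    """Wraps raw image bytes so code and text-only logs each get a useful view."""
    def __init__(self, data: bytes):
        self._data = data

    def to_raw(self) -> bytes:
        return self._data  # for code that needs the actual bytes

    def to_string(self) -> str:
        return f"<image: {len(self._data)} bytes>"  # for text-only output

img = AgentImage(b"\x89PNG...")
print(img.to_string())  # -> <image: 7 bytes>
```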

View File

@@ -1,3 +1,10 @@
---
layout: default
title: "AgentLogger & Monitor"
parent: "SmolaAgents"
nav_order: 8
---
# Chapter 8: AgentLogger & Monitor - Observing Your Agent in Action
Welcome to the final chapter of the SmolaAgents tutorial! In [Chapter 7: AgentType](07_agenttype.md), we saw how `SmolaAgents` handles different kinds of data like text, images, and audio using specialized containers. Now that our agent can perform complex tasks ([Chapter 1: MultiStepAgent](01_multistepagent.md)), use various [Tools](03_tool.md), remember its progress ([Chapter 4: AgentMemory](04_agentmemory.md)), and even handle diverse data types, a new question arises: **How do we actually see what the agent is doing?**
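As a taste of the logging discussed in this file: run banners like the simulated output further down can be produced with the `rich` library. This is a generic `rich` usage sketch, not the actual `AgentLogger` implementation:

```python
from rich.console import Console
from rich.panel import Panel

console = Console()
# Render a bordered panel, similar in spirit to the run banner shown below.
console.print(Panel("Task: Capital and Weather", title="New run - ToolCallingAgent"))
```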
@@ -49,7 +56,7 @@ When you create a `MultiStepAgent`, it automatically creates an `AgentLogger` in
**Example Output (Simulated)**
The `AgentLogger` uses `rich` to make the output colorful and easy to read. Here's a simplified idea of what you might see in your console for our "Capital and Weather" example:
```console
╭─[bold] New run ─ ToolCallingAgent [/bold]────────────────────────────────╮