docs: add managing tools section and tool-router topic (#3310)

Co-authored-by: Rizel Scarlett <rizel@squareup.com>
dianed-square
2025-07-09 07:47:53 -07:00
committed by GitHub
parent c8504b2a41
commit 2f36bac577
9 changed files with 281 additions and 95 deletions

View File

@@ -144,14 +144,14 @@ export GOOSE_PLANNER_CONTEXT_LIMIT=1000000
## Tool Configuration
These variables control how Goose handles [tool permissions](/docs/guides/managing-tools/tool-permissions) and their execution.
| Variable | Purpose | Values | Default |
|----------|---------|---------|---------|
| `GOOSE_MODE` | Controls how Goose handles tool execution | "auto", "approve", "chat", "smart_approve" | "smart_approve" |
| `GOOSE_TOOLSHIM` | Enables/disables tool call interpretation | "1", "true" (case insensitive) to enable | false |
| `GOOSE_TOOLSHIM_OLLAMA_MODEL` | Specifies the model for [tool call interpretation](/docs/guides/experimental-features/#ollama-tool-shim) | Model name (e.g. llama3.2, qwen2.5) | System default |
| `GOOSE_CLI_MIN_PRIORITY` | Controls verbosity of [tool output](/docs/guides/managing-tools/adjust-tool-output) | Float between 0.0 and 1.0 | 0.0 |
| `GOOSE_CLI_TOOL_PARAMS_TRUNCATION_MAX_LENGTH` | Maximum length for tool parameter values before truncation in CLI output (not in debug mode) | Integer | 40 |
**Examples**
@@ -196,6 +196,36 @@ export GOOSE_EDITOR_HOST="http://localhost:8000/v1"
export GOOSE_EDITOR_MODEL="your-model"
```
## Tool Selection Strategy
These variables configure the [tool selection strategy](/docs/guides/managing-tools/tool-router).
| Variable | Purpose | Values | Default |
|----------|---------|---------|--------|
| `GOOSE_ROUTER_TOOL_SELECTION_STRATEGY` | The tool selection strategy to use | "default", "vector", "llm" | "default" |
| `GOOSE_EMBEDDING_MODEL_PROVIDER` | The provider to use for generating embeddings for the "vector" strategy | [See available providers](/docs/getting-started/providers#available-providers) (must support embeddings) | "openai" |
| `GOOSE_EMBEDDING_MODEL` | The model to use for generating embeddings for the "vector" strategy | Model name (provider-specific) | "text-embedding-3-small" |
**Examples**
```bash
# Use vector-based tool selection with custom settings
export GOOSE_ROUTER_TOOL_SELECTION_STRATEGY=vector
export GOOSE_EMBEDDING_MODEL_PROVIDER=ollama
export GOOSE_EMBEDDING_MODEL=nomic-embed-text
# Or use LLM-based selection
export GOOSE_ROUTER_TOOL_SELECTION_STRATEGY=llm
```
**Embedding Provider Support**
The default embedding provider is OpenAI. If using a different provider:
- Ensure the provider supports embeddings
- Specify an appropriate embedding model for that provider
- Ensure the provider is properly configured with necessary credentials
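For example, here is a minimal sketch that keeps the default OpenAI embeddings, assuming the standard `OPENAI_API_KEY` credential read by the OpenAI provider (the key value is a placeholder):
```bash
# Default embedding setup: OpenAI provider with its usual API key variable
export OPENAI_API_KEY="sk-..."                        # placeholder credential
export GOOSE_EMBEDDING_MODEL_PROVIDER=openai          # provider must support embeddings
export GOOSE_EMBEDDING_MODEL=text-embedding-3-small   # default embedding model
```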
## Security Configuration
These variables control security-related features.

View File

@@ -0,0 +1,8 @@
{
"label": "Managing Tools",
"position": 2,
"link": {
"type": "doc",
"id": "guides/managing-tools/index"
}
}

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 2
title: Adjusting Tool Output Verbosity
sidebar_label: Adjust Tool Output
---

View File

@@ -0,0 +1,60 @@
---
title: Managing Tools
hide_title: true
description: Control and configure the tools and extensions that power your Goose workflows
---
import Card from '@site/src/components/Card';
import styles from '@site/src/components/Card/styles.module.css';
<h1 className={styles.pageTitle}>Managing Tools</h1>
<p className={styles.pageDescription}>
Tools are specific functions within <a href="/docs/getting-started/using-extensions">extensions</a> that give Goose its capabilities. Learn to control and customize how these tools work for you.
</p>
<div className={styles.categorySection}>
<h2 className={styles.categoryTitle}>📚 Documentation & Guides</h2>
<div className={styles.cardGrid}>
<Card
title="Tool Permissions"
description="Configure fine-grained permissions to control which tools Goose can use and when, ensuring secure and controlled automation."
link="/docs/guides/managing-tools/tool-permissions"
/>
<Card
title="Tool Selection Strategy"
description="Optimize tool selection with dynamic routing that loads only the tools you need, reducing context overhead and improving performance."
link="/docs/guides/managing-tools/tool-router"
/>
<Card
title="Adjust Tool Output"
description="Customize how tool interactions are displayed, from detailed verbose output to clean concise summaries."
link="/docs/guides/managing-tools/adjust-tool-output"
/>
<Card
title="Ollama Tool Shim"
description="Enable tool calling for models that don't natively support it using an experimental local interpreter model setup."
link="/docs/guides/experimental-features#ollama-tool-shim"
/>
</div>
</div>
<div className={styles.categorySection}>
<h2 className={styles.categoryTitle}>📝 Featured Blog Posts</h2>
<div className={styles.cardGrid}>
<Card
title="Agentic AI and the MCP Ecosystem"
description="A 101 introduction to AI agents, tool calling, and how tools work with LLMs to enable powerful automation."
link="/blog/2025/02/17/agentic-ai-mcp"
/>
<Card
title="A Visual Guide To MCP Ecosystem"
description="Visual breakdown of MCP: How your AI agent, tools, and models work together, explained with diagrams and analogies."
link="/blog/2025/04/10/visual-guide-mcp"
/>
<Card
title="Finetuning Toolshim Models for Tool Calling"
description="Technical deep-dive into the challenges of tool calling with open-source models and the research behind toolshim solutions."
link="/blog/2025/04/11/finetuning-toolshim"
/>
</div>
</div>

View File

@@ -1,6 +1,6 @@
---
title: Managing Tool Permissions
sidebar_position: 1
sidebar_label: Tool Permissions
---

View File

@@ -0,0 +1,162 @@
---
sidebar_position: 3
title: Tool Selection Strategy
sidebar_label: Tool Selection Strategy
description: Configure smart tool selection to load only relevant tools, improving performance with multiple extensions
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
:::info Preview Feature
The Tool Selection Strategy is currently in preview. The Vector selection strategy is currently limited to Claude models served on Databricks.
:::
When you enable an [extension](/docs/getting-started/using-extensions), you gain access to all of its tools. For example, the Google Drive extension provides tools for reading documents, updating permissions, managing comments, and more. By default, Goose loads all tools into context when interacting with the LLM.
Enabling multiple extensions gives you access to a wider range of tools, but loading a lot of tools into context can be inefficient and confusing for the LLM. It's like having every tool in your workshop spread out on your bench when you only need one or two.
Choosing an intelligent tool selection strategy helps avoid this problem. Instead of loading all tools for every interaction, it loads only the tools needed for your current task. Both vector and LLM-based strategies ensure that only the functionality you need is loaded into context, so you can keep more of your favorite extensions enabled. These strategies provide:
- Reduced token consumption
- Improved LLM performance
- Better context management
- More accurate and efficient tool selection
## Tool Selection Strategies
| Strategy | Speed | Best For | Example Query |
|----------|-------|----------|---------------|
| **Default** | Fastest | Few extensions, simple setups | Any query (loads all tools) |
| **Vector** | Fast | Keyword-based matching | "read pdf file" |
| **LLM-based** | Slower | Complex, ambiguous queries | "analyze document contents" |
### Default Strategy
The default strategy loads all tools from enabled extensions into context, which works well if you only have a few extensions enabled. Once you enable more than a few, switch to the vector or LLM-based strategy for intelligent tool selection.
**Best for:**
- Simple setups with few extensions
- When you want all tools available at all times
- Maximum tool availability without selection logic
### Vector Strategy
The vector strategy uses mathematical similarity between embeddings to find relevant tools, providing efficient matching based on vector similarity between your query and available tools.
**Best for:**
- Situations where fast response times are critical
- Queries with keywords that match tool names or descriptions
**Example:**
- Prompt: "read pdf file"
- Result: Quickly matches with PDF-related tools based on keyword similarity
:::info Embedding Model
The default embedding model is `text-embedding-3-small`. You can change it using [environment variables](/docs/guides/environment-variables#tool-selection-strategy).
:::
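As a quick sketch using the variables documented there, you could point the vector strategy at a different embedding model; the Ollama provider and model below are only illustrative values:
```bash
export GOOSE_ROUTER_TOOL_SELECTION_STRATEGY=vector
export GOOSE_EMBEDDING_MODEL_PROVIDER=ollama    # any provider that supports embeddings
export GOOSE_EMBEDDING_MODEL=nomic-embed-text   # embedding model served by that provider
```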
### LLM-based Strategy
The LLM-based strategy leverages natural language understanding to analyze tools and queries semantically, making selections based on the full meaning of your request.
**Best for:**
- Complex or ambiguous queries that require understanding context
- Cases where exact keyword matches might miss relevant tools
- Situations where nuanced tool selection is important
**Example:**
- Prompt: "help me analyze the contents of my document"
- Result: Understands context and might suggest both PDF readers and content analysis tools
## Configuration
<Tabs groupId="interface">
<TabItem value="ui" label="Goose Desktop" default>
1. Click the gear icon ⚙️ on the top toolbar
2. Click `Advanced settings`
3. Under `Tool Selection Strategy`, select your preferred strategy:
- `Default`
- `Vector`
- `LLM-based`
</TabItem>
<TabItem value="cli" label="Goose CLI">
1. Run the `configuration` command:
```sh
goose configure
```
2. Select `Goose Settings`:
```sh
┌ goose-configure
◆ What would you like to configure?
│ ○ Configure Providers
│ ○ Add Extension
│ ○ Toggle Extensions
│ ○ Remove Extension
// highlight-start
│ ● Goose Settings (Set the Goose Mode, Tool Output, Tool Permissions, Experiment, Goose recipe github repo and more)
// highlight-end
```
3. Select `Router Tool Selection Strategy`:
```sh
┌ goose-configure
◇ What would you like to configure?
│ Goose Settings
◆ What setting would you like to configure?
│ ○ Goose Mode
// highlight-start
│ ● Router Tool Selection Strategy (Configure the strategy for selecting tools to use)
// highlight-end
│ ○ Tool Permission
│ ○ Tool Output
│ ○ Toggle Experiment
│ ○ Goose recipe github repo
```
4. Select your preferred strategy:
```sh
┌ goose-configure
◇ What would you like to configure?
│ Goose Settings
◇ What setting would you like to configure?
│ Router Tool Selection Strategy
// highlight-start
◆ Which router strategy would you like to use?
│ ● Vector Strategy (Use vector-based similarity to select tools)
│ ○ Default Strategy
// highlight-end
```
:::info
Currently, the LLM-based strategy can't be configured using the CLI.
:::
This example output shows the `Vector Strategy` was selected:
```
┌ goose-configure
◇ What would you like to configure?
│ Goose Settings
◇ What setting would you like to configure?
│ Router Tool Selection Strategy
◇ Which router strategy would you like to use?
│ Vector Strategy
└ Set to Vector Strategy - using vector-based similarity for tool selection
```
The Goose CLI displays a message indicating when the vector or LLM-based strategy is in use.
</TabItem>
</Tabs>
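The strategy can also be set with the [environment variables](/docs/guides/environment-variables#tool-selection-strategy) documented in the environment variables guide, which is useful for scripted setups and is one way to select the LLM-based strategy outside the CLI menu; a minimal sketch:
```bash
# Accepted values per the environment variable reference: "default", "vector", "llm"
export GOOSE_ROUTER_TOOL_SELECTION_STRATEGY=llm
goose session   # start a session with the strategy applied
```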

View File

@@ -24,7 +24,7 @@ Your experience with Goose is shaped by your [choice of LLM](/blog/2025/03/31/go
LLMs have context windows, which are limits on how much conversation history they can retain. Once exceeded, they may forget earlier parts of the conversation. Monitor your token usage and [start new sessions](/docs/guides/managing-goose-sessions) as needed.
### Turn off unnecessary extensions or tools
Turning on too many extensions can degrade performance. Enable only essential [extensions and tools](/docs/guides/managing-tools/tool-permissions) to improve tool selection accuracy, save context window space, and stay within provider tool limits.
### Teach Goose your preferences
Help Goose remember how you like to work by using [`.goosehints`](/docs/guides/using-goosehints/) for permanent project preferences and the [Memory extension](/docs/mcp/memory-mcp) for things you want Goose to dynamically recall later. Both can help save valuable context window space while keeping your preferences available.

View File

@@ -1,90 +0,0 @@
---
draft: true
---
# Tool Router (preview)
## Overview
Tool Router is a powerful feature that addresses a common challenge in LLM-based development: the difficulty of selecting the right tool when multiple extensions are enabled. Traditional approaches feed an entire list of tools into the context during chat sessions, which not only consumes a significant number of tokens but also reduces the effectiveness of tool calling.
## The Problem
When you enable multiple extensions (like Slack), you get access to numerous tools such as:
- Reading threads
- Sending messages
- Creating channels
- And many more
However, you typically don't need all this functionality at once. Loading every available tool into the context can be inefficient and potentially confusing for the LLM.
## The Solution: Tool Router
Tool Router introduces a smarter way to handle tool selection through vector-based indexing. Instead of passing all tools back and forth, it:
1. Indexes all tools from your enabled extensions
2. Uses vector search to load only the relevant tools into context when needed
3. Ensures that only the functionality you actually need is available
## Configuration
To enable this feature, change the Tool Selection Strategy from default to vector.
#### CLI
To configure Tool Router in the CLI, follow these steps:
1. Run the configuration command:
```bash
./target/debug/goose configure
```
2. This will update your existing config file. Alternatively, you can edit it directly at:
```
/Users/wendytang/.config/goose/config.yaml
```
3. During configuration:
- Select "Goose Settings"
- Choose "Router Tool Selection Strategy"
- Select "Vector Strategy"
The configuration process will look like this:
```
┌ goose-configure
◇ What would you like to configure?
│ Goose Settings
◇ What setting would you like to configure?
│ Router Tool Selection Strategy
◇ Which router strategy would you like to use?
│ Vector Strategy
└ Set to Vector Strategy - using vector-based similarity for tool selection
```
#### UI
Click the settings button on the top right and head to 'Advanced Settings', then 'Tool Selection Strategy' at the bottom.
## Benefits
- Reduced token consumption
- More accurate tool selection
- Improved LLM performance
- Better context management
- More efficient use of available tools
## Notes
### Model Compatibility
Tool Router currently only works with Claude models served through Databricks. The embedding functionality uses OpenAI's `text-embedding-3-small` model by default.
### Feedback & Next Steps
We'd love to hear your thoughts on this feature! Please reach out in the Goose Discord channel to share your use case and experience.
Our roadmap includes:
- Expanding Tool Router support to OpenAI models
- Adding customization options for the `k` parameter that controls how many similar tools are returned during vector similarity search

View File

@@ -122,6 +122,22 @@ const config: Config = {
from: '/docs/guides/recipe-reference',
to: '/docs/guides/recipes/recipe-reference'
},
{
from: '/docs/guides/tool-permissions',
to: '/docs/guides/managing-tools/tool-permissions'
},
{
from: '/docs/guides/adjust-tool-output',
to: '/docs/guides/managing-tools/adjust-tool-output'
},
{
from: '/docs/guides/benchmarking',
to: '/docs/tutorials/benchmarking'
},
{
from: '/docs/guides/goose-in-docker',
to: '/docs/tutorials/goose-in-docker'
},
// MCP tutorial redirects - moved from /docs/tutorials/ to /docs/mcp/
{
from: '/docs/tutorials/agentql-mcp',