docs: update provider docs, fix rate limit link (#943)

Co-authored-by: angiejones <jones.angie@gmail.com>
Co-authored-by: Adewale Abati <acekyd01@gmail.com>
Salman Mohammed
2025-01-30 15:04:33 -05:00
committed by GitHub
parent ecbe92bb8a
commit 06a24647c1
6 changed files with 214 additions and 180 deletions

View File

@@ -5,10 +5,6 @@ on:
branches:
- main
pull_request:
paths:
- 'documentation/**'
jobs:
deploy:
runs-on: ubuntu-latest

View File

@@ -8,24 +8,26 @@ import TabItem from '@theme/TabItem';
# Supported LLM Providers
Goose is compatible with a wide range of LLM providers, allowing you to choose and integrate your preferred model.
:::tip Model Selection
Goose relies heavily on tool calling capabilities and currently works best with Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o (2024-11-20) model.
The [Berkeley Function-Calling Leaderboard][function-calling-leaderboard] can be a good guide for selecting models.
:::
## Available Providers
| Provider | Description | Parameters |
|-----------------------------------------------|-----------------------------------------------------|---------------------------------------|
| [Anthropic](https://www.anthropic.com/) | Offers Claude, an advanced AI model for natural language tasks. | `ANTHROPIC_API_KEY` |
| [Databricks](https://www.databricks.com/) | Unified data analytics and AI platform for building and deploying models. | `DATABRICKS_HOST`, `DATABRICKS_TOKEN` |
| [Gemini](https://ai.google.dev/gemini-api/docs) | Advanced LLMs by Google with multimodal capabilities (text, images). | `GOOGLE_API_KEY` |
| [Groq](https://groq.com/) | High-performance inference hardware and tools for LLMs. | `GROQ_API_KEY` |
| [Ollama](https://ollama.com/) | Local model runner supporting Qwen, Llama, DeepSeek, and other open-source models. **Because this provider runs locally, you must first [download and run a model](/docs/getting-started/providers#local-llms-ollama).** | `OLLAMA_HOST` |
| [OpenAI](https://platform.openai.com/api-keys) | Provides gpt-4o, o1, and other advanced language models. **o1-mini and o1-preview are not supported because Goose uses tool calling.** | `OPENAI_API_KEY` |
| [OpenRouter](https://openrouter.ai/) | API gateway for unified access to various models with features like rate-limiting management. | `OPENROUTER_API_KEY` |
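The parameters above are environment variables; you can either enter them when `goose configure` prompts you, or export them in your shell beforehand. A minimal sketch, assuming an OpenAI key (the value is a placeholder):

```sh
# Export the provider credential before configuring Goose (placeholder value shown).
export OPENAI_API_KEY="your-api-key-here"
goose configure
```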
## Configure Provider
@@ -92,9 +94,70 @@ To configure your chosen provider or see available options, run `goose configure
3. Click Edit, enter your API key, and click `Set as Active`.
</TabItem>
</Tabs>
## Using Goose for Free
Goose is a free and open source AI agent that you can start using right away, but not all supported [LLM Providers][providers] offer a free tier.
Below, we outline a couple of free options and how to get started with them.
:::warning Limitations
These free options are a great way to get started with Goose and explore its capabilities. However, you may need to upgrade your LLM for better performance.
:::
### Google Gemini
Google Gemini provides a free tier. To start using the Gemini API with Goose, you need an API key from [Google AI Studio](https://aistudio.google.com/app/apikey).
To set up Google Gemini with Goose, follow these steps:
<Tabs groupId="interface">
<TabItem value="cli" label="Goose CLI" default>
1. Run:
```sh
goose configure
```
2. Select `Configure Providers` from the menu.
3. Follow the prompts to choose `Google Gemini` as the provider.
4. Enter your API key when prompted.
5. Enter the Gemini model of your choice.
```
┌ goose-configure
◇ What would you like to configure?
│ Configure Providers
◇ Which model provider should we use?
│ Google Gemini
◇ Provider Google Gemini requires GOOGLE_API_KEY, please enter a value
│▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
◇ Enter a model from that provider:
│ gemini-2.0-flash-exp
◇ Hello! You're all set and ready to go, feel free to ask me anything!
└ Configuration saved successfully
```
</TabItem>
<TabItem value="ui" label="Goose Desktop">
**To update your LLM provider and API key:**
1. Click the three dots in the top-right corner.
2. Select `Provider Settings` from the menu.
3. Choose `Google Gemini` as the provider from the list.
4. Click Edit, enter your API key, and click `Set as Active`.
</TabItem>
</Tabs>
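Once the provider is saved, a quick sanity check is to start a session from the CLI and ask a question (a minimal sketch; `goose session` starts an interactive chat):

```sh
goose session
```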
### Local LLMs (Ollama)
Ollama provides local LLMs, which require a bit more setup before you can use them with Goose.
@@ -102,7 +165,7 @@ Ollama provides local LLMs, which requires a bit more set up before you can use
2. Run any [model supporting tool-calling](https://ollama.com/search?c=tools):
:::warning Limited support for models without tool calling
Goose extensively uses tool calling, so models without it (e.g. `DeepSeek-r1`) can only do chat completion. If using models without tool calling, all Goose [extensions must be disabled](/docs/getting-started/using-extensions#enablingdisabling-extensions). As an alternative, you can use a [custom DeepSeek-r1 model](/docs/getting-started/providers#deepseek-r1) we've made specifically for Goose.
:::
Example:
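(A minimal sketch; `qwen2.5` is illustrative, and any model from the tool-calling list above works.)

```sh
ollama run qwen2.5
```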
@@ -148,6 +211,23 @@ goose configure
```
5. Enter the host where your model is running
```
┌ goose-configure
◇ What would you like to configure?
│ Configure Providers
◇ Which model provider should we use?
│ Ollama
◆ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
```
6. Enter the model you have running
```
@@ -169,3 +249,112 @@ goose configure
└ Configuration saved successfully
```
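If your Ollama instance runs on a different machine or port than the default, point `OLLAMA_HOST` at it before configuring. A hedged sketch (the address is an example; substitute your own):

```sh
# Assumes Ollama is serving on another host on your network
export OLLAMA_HOST=http://192.168.1.50:11434
goose configure
```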
### DeepSeek-R1
Ollama provides open source LLMs, such as `DeepSeek-r1`, that you can install and run locally.
Note that the native `DeepSeek-r1` model doesn't support tool calling; however, we have a [custom model](https://ollama.com/michaelneale/deepseek-r1-goose) you can use with Goose.
:::warning
This is a 70B parameter model and requires a powerful device to run smoothly.
:::
1. Download and install Ollama from [ollama.com](https://ollama.com/download).
2. In a terminal window, run the following command to install the custom DeepSeek-r1 model:
```sh
ollama run michaelneale/deepseek-r1-goose
```
<Tabs groupId="interface">
<TabItem value="cli" label="Goose CLI" default>
3. In a separate terminal window, configure Goose:
```sh
goose configure
```
4. Choose `Configure Providers`
```
┌ goose-configure
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
```
5. Choose `Ollama` as the model provider
```
┌ goose-configure
◇ What would you like to configure?
│ Configure Providers
◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ● Ollama (Local open source models)
│ ○ OpenAI
│ ○ OpenRouter
```
6. Enter the host where your model is running
```
┌ goose-configure
◇ What would you like to configure?
│ Configure Providers
◇ Which model provider should we use?
│ Ollama
◆ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
```
7. Enter the model you installed above
```
┌ goose-configure
◇ What would you like to configure?
│ Configure Providers
◇ Which model provider should we use?
│ Ollama
◇ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
◇ Enter a model from that provider:
│ michaelneale/deepseek-r1-goose
◇ Welcome! You're all set to explore and utilize my capabilities. Let's get started on solving your problems together!
└ Configuration saved successfully
```
</TabItem>
<TabItem value="ui" label="Goose Desktop">
3. Click `...` in the top-right corner.
4. Navigate to `Settings` -> `Browse Models`, and select `Ollama` from the list.
5. Enter `michaelneale/deepseek-r1-goose` for the model name.
</TabItem>
</Tabs>
---
If you have any questions or need help with a specific provider, feel free to reach out to us on [Discord](https://discord.gg/block-opensource) or on the [Goose repo](https://github.com/block/goose).
[providers]: /docs/getting-started/providers
[function-calling-leaderboard]: https://gorilla.cs.berkeley.edu/leaderboard.html

View File

@@ -1,159 +0,0 @@
---
sidebar_position: 3
title: Using Goose for Free
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# Using Goose for Free
Goose is a free and open source developer AI agent that you can start using right away, but not all supported [LLM Providers][providers] offer a free tier.
Below, we outline a couple of free options and how to get started with them.
## Google Gemini
Google Gemini provides a free tier. To start using the Gemini API with Goose, you need an API key from [Google AI Studio](https://aistudio.google.com/app/apikey).
To set up Google Gemini with Goose, follow these steps:
<Tabs groupId="interface">
<TabItem value="cli" label="Goose CLI" default>
1. Run:
```sh
goose configure
```
2. Select `Configure Providers` from the menu.
3. Follow the prompts to choose `Google Gemini` as the provider.
4. Enter your API key when prompted.
5. Enter the Gemini model of your choice.
```
┌ goose-configure
◇ What would you like to configure?
│ Configure Providers
◇ Which model provider should we use?
│ Google Gemini
◇ Provider Google Gemini requires GOOGLE_API_KEY, please enter a value
│▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
◇ Enter a model from that provider:
│ gemini-2.0-flash-exp
◇ Hello! You're all set and ready to go, feel free to ask me anything!
└ Configuration saved successfully
```
</TabItem>
<TabItem value="ui" label="Goose Desktop">
**To update your LLM provider and API key:**
1. Click the three dots in the top-right corner.
2. Select `Provider Settings` from the menu.
3. Choose `Google Gemini` as the provider from the list.
4. Click Edit, enter your API key, and click `Set as Active`.
</TabItem>
</Tabs>
## DeepSeek-R1
:::warning
In our testing, we have found that Google Gemini performs better than DeepSeek models, likely
because Goose relies heavily on tool calling, which DeepSeek does not yet support natively.
When using DeepSeek, we currently recommend the 70B model, which requires
a powerful device to run smoothly.
:::
Ollama provides open source LLMs, such as `DeepSeek-r1`, that you can install and run locally.
Note that the native `DeepSeek-r1` model doesn't support tool calling; however, we have a [custom model](https://ollama.com/michaelneale/deepseek-r1-goose) you can use with Goose.
1. Download and install Ollama from [ollama.com](https://ollama.com/download).
2. In a terminal window, run the following command to install the custom DeepSeek-r1 model:
```sh
ollama run michaelneale/deepseek-r1-goose
```
<Tabs groupId="interface">
<TabItem value="cli" label="Goose CLI" default>
3. In a separate terminal window, configure Goose:
```sh
goose configure
```
4. Choose `Configure Providers`
```
┌ goose-configure
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
```
5. Choose `Ollama` as the model provider
```
┌ goose-configure
◇ What would you like to configure?
│ Configure Providers
◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ● Ollama (Local open source models)
│ ○ OpenAI
│ ○ OpenRouter
```
6. Enter the DeepSeek-r1 model you installed above
```
┌ goose-configure
◇ What would you like to configure?
│ Configure Providers
◇ Which model provider should we use?
│ Ollama
◇ Enter a model from that provider:
│ michaelneale/deepseek-r1-goose
◇ Welcome! You're all set to explore and utilize my capabilities. Let's get started on solving your problems together!
└ Configuration saved successfully
```
</TabItem>
<TabItem value="ui" label="Goose Desktop">
3. Click `...` in the top-right corner.
4. Navigate to `Settings` -> `Browse Models`, and select `Ollama` from the list.
5. Enter `michaelneale/deepseek-r1-goose` for the model name.
</TabItem>
</Tabs>
## Limitations
These free options are a great way to get started with Goose and explore its capabilities. However, if you need more advanced features or higher usage limits, you can upgrade to a paid plan with your LLM provider.
---
If you have any questions or need help with a specific provider, feel free to reach out to us on [Discord](https://discord.gg/block-opensource) or on the [Goose repo](https://github.com/block/goose).
[providers]: /docs/getting-started/providers

View File

@@ -82,6 +82,10 @@ Goose works with [supported LLM providers][providers]. When you install Goose, y
<RateLimits />
:::tip Model Selection
Goose relies heavily on tool calling capabilities and currently works best with Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o (2024-11-20) model.
:::
## Start Session
Sessions are single, continuous conversations between you and Goose. Let's start one.

View File

@@ -67,6 +67,10 @@ const config: Config = {
"@docusaurus/plugin-client-redirects",
{
redirects: [
{
to: '/docs/getting-started/providers#using-goose-for-free',
from: '/docs/getting-started/using-goose-free',
},
{
to: '/docs/getting-started/providers',
from: '/v1/docs/getting-started/providers',
@@ -82,7 +86,7 @@ const config: Config = {
{
to: '/',
from: '/v1/',
},
],
},
],

View File

@@ -18,7 +18,7 @@ const RateLimits = () => {
<br />
Some providers also have rate limits on API usage, which can affect your
experience. Check out our{" "}
<a href="/docs/guides/handling-llm-rate-limits" target="_blank">
<a href="/goose/docs/guides/handling-llm-rate-limits-with-goose" target="_blank">
Handling Rate Limits
</a>{" "}
guide to learn how to efficiently manage these limits while using Goose.