---
sidebar_position: 2
title: Configure LLM Provider
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
## Supported LLM Providers
Goose is compatible with a wide range of LLM providers, allowing you to choose and integrate your preferred model.
:::tip Model Selection
Goose relies heavily on tool calling capabilities and currently works best with Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o (2024-11-20) model. The Berkeley Function-Calling Leaderboard can be a good guide for selecting models.
:::
### Available Providers
| Provider | Description | Parameters |
|---|---|---|
| Anthropic | Offers Claude, an advanced AI model for natural language tasks. | `ANTHROPIC_API_KEY` |
| Databricks | Unified data analytics and AI platform for building and deploying models. | `DATABRICKS_HOST`, `DATABRICKS_TOKEN` |
| Gemini | Advanced LLMs by Google with multimodal capabilities (text, images). | `GOOGLE_API_KEY` |
| Groq | High-performance inference hardware and tools for LLMs. | `GROQ_API_KEY` |
| Ollama | Local model runner supporting Qwen, Llama, DeepSeek, and other open-source models. Because this provider runs locally, you must first download and run a model. | `OLLAMA_HOST` |
| OpenAI | Provides gpt-4o, o1, and other advanced language models. o1-mini and o1-preview are not supported because Goose uses tool calling. | `OPENAI_API_KEY` |
| OpenRouter | API gateway for unified access to various models with features like rate-limiting management. | `OPENROUTER_API_KEY` |
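
These parameters can typically also be supplied as environment variables before launching Goose, rather than entered interactively. A minimal sketch for two of the providers above (the key value is a placeholder; export the variable named in the table for your provider):

```sh
# Placeholder value: substitute your real Anthropic API key.
export ANTHROPIC_API_KEY=sk-ant-...

# Ollama takes a host rather than a key (default shown).
export OLLAMA_HOST=http://localhost:11434
```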
## Configure Provider
To configure your chosen provider or see available options, run `goose configure` in the CLI or visit the Provider Settings page in the Goose Desktop.

<Tabs groupId="interface">
<TabItem value="cli" label="Goose CLI" default>

1. Run:
```sh
goose configure
```
2. Select `Configure Providers` from the menu and press Enter.
```
┌ goose-configure
│
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
└
```
3. Choose a model provider and press Enter.
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◆ Which model provider should we use?
│ ● Anthropic (Claude and other models from Anthropic)
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ○ Ollama
│ ○ OpenAI
│ ○ OpenRouter
└
```
4. Enter your API key (and any other configuration details) when prompted.
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Anthropic
│
◆ Provider Anthropic requires ANTHROPIC_API_KEY, please enter a value
│
└
```
</TabItem>
<TabItem value="ui" label="Goose Desktop">
**To update your LLM provider and API key:**
1. Click the three dots (`...`) in the top-right corner.
2. Select `Provider Settings` from the menu.
3. Click Edit, enter your API key, and click `Set as Active`.
</TabItem>
</Tabs>
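
If you script your setup, you may be able to skip the interactive flow entirely. A sketch, assuming your build of Goose honors the `GOOSE_PROVIDER` and `GOOSE_MODEL` environment variables along with the provider's API key (all values below are examples):

```sh
# Example values; adjust provider, model, and key to your setup.
export GOOSE_PROVIDER=anthropic
export GOOSE_MODEL=claude-3-5-sonnet-latest
export ANTHROPIC_API_KEY=sk-ant-...   # placeholder key

# Start a session with the provider chosen above.
goose session
```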
## Using Goose for Free
Goose is a free and open source AI agent that you can start using right away, but not all supported [LLM Providers][providers] offer a free tier.
Below, we outline a couple of free options and how to get started with them.
:::warning Limitations
These free options are a great way to get started with Goose and explore its capabilities. However, you may need to upgrade your LLM for better performance.
:::
### Google Gemini
Google Gemini provides a free tier. To start using the Gemini API with Goose, you need an API key from [Google AI Studio](https://aistudio.google.com/app/apikey).
To set up Google Gemini with Goose, follow these steps:
<Tabs groupId="interface">
<TabItem value="cli" label="Goose CLI" default>
1. Run:
```sh
goose configure
```
2. Select `Configure Providers` from the menu.
3. Follow the prompts to choose `Google Gemini` as the provider.
4. Enter your API key when prompted.
5. Enter the Gemini model of your choice.
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Google Gemini
│
◇ Provider Google Gemini requires GOOGLE_API_KEY, please enter a value
│▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
│
◇ Enter a model from that provider:
│ gemini-2.0-flash-exp
│
◇ Hello! You're all set and ready to go, feel free to ask me anything!
│
└ Configuration saved successfully
```
</TabItem>
<TabItem value="ui" label="Goose Desktop">
**To update your LLM provider and API key:**
1. Click on the three dots in the top-right corner.
2. Select `Provider Settings` from the menu.
3. Choose `Google Gemini` as the provider from the list.
4. Click Edit, enter your API key, and click `Set as Active`.
</TabItem>
</Tabs>
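
To sanity-check the Gemini setup, start a session from the CLI; if the key or model name is wrong, the provider will reject the first request. As a sketch, the key can also be passed via the environment for a one-off run (placeholder value shown):

```sh
# One-off check: supply the key via the environment (placeholder value).
export GOOGLE_API_KEY=AIza...
goose session
```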
### Local LLMs (Ollama)
Ollama runs LLMs locally, which requires a bit more setup before you can use it with Goose.
1. [Download Ollama](https://ollama.com/download).
2. Run any [model supporting tool-calling](https://ollama.com/search?c=tools):
:::info Ollama Endpoint Construction
For Ollama, we default the host to `localhost` and the port to `11434` if you don't provide them. When constructing the URL, we prepend `http://` if the scheme is not http or https.
If you're running Ollama on port 80 or 443, you have to set `OLLAMA_HOST=http://host:port`.
:::
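
For example, if your Ollama server listens on port 80, set the full URL explicitly (the host name below is a placeholder):

```sh
# Scheme and port must be spelled out because 80 is not the default.
export OLLAMA_HOST=http://my-ollama-box:80
```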
:::warning Limited Support for models without tool calling
Goose extensively uses tool calling, so models without it (e.g. `DeepSeek-r1`) can only do chat completion. If using models without tool calling, all Goose [extensions must be disabled](/docs/getting-started/using-extensions#enablingdisabling-extensions). As an alternative, you can use a [custom DeepSeek-r1 model](/docs/getting-started/providers#deepseek-r1) we've made specifically for Goose.
:::
Example:
```sh
ollama run qwen2.5
```
3. In a separate terminal window, configure Goose:
```sh
goose configure
```
4. Choose `Configure Providers`
```
┌ goose-configure
│
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
└
```
5. Choose `Ollama` as the model provider
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ● Ollama (Local open source models)
│ ○ OpenAI
│ ○ OpenRouter
└
```
6. Enter the host where your model is running
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◆ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
└
```
7. Enter the model you have running
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◇ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
│
◇ Enter a model from that provider:
│ qwen2.5
│
◇ Welcome! You're all set to explore and utilize my capabilities. Let's get started on solving your problems together!
│
└ Configuration saved successfully
```
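
Once configuration is saved, you can confirm that the model is installed and that the server Goose talks to is reachable; a quick check against Ollama's standard commands and HTTP API (default host assumed):

```sh
# List the models installed locally.
ollama list

# Query the same HTTP endpoint Goose will use to reach the server.
curl http://localhost:11434/api/tags
```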
### DeepSeek-R1
Ollama provides open source LLMs, such as DeepSeek-r1, that you can install and run locally.
Note that the native DeepSeek-r1 model doesn't support tool calling; however, we have a custom model you can use with Goose.
:::warning
Note that this is a 70B model size and requires a powerful device to run smoothly.
:::
<Tabs groupId="interface">
<TabItem value="cli" label="Goose CLI" default>

1. Download and install Ollama from [ollama.com](https://ollama.com/download).
2. In a terminal window, run the following command to install the custom DeepSeek-R1 model:
```sh
ollama run michaelneale/deepseek-r1-goose
```
3. In a separate terminal window, configure Goose:
```sh
goose configure
```
4. Choose `Configure Providers`
```
┌ goose-configure
│
◆ What would you like to configure?
│ ● Configure Providers (Change provider or update credentials)
│ ○ Toggle Extensions
│ ○ Add Extension
└
```
5. Choose `Ollama` as the model provider
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◆ Which model provider should we use?
│ ○ Anthropic
│ ○ Databricks
│ ○ Google Gemini
│ ○ Groq
│ ● Ollama (Local open source models)
│ ○ OpenAI
│ ○ OpenRouter
└
```
6. Enter the host where your model is running
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◆ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
└
```
7. Enter the installed model from above
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◇ Provider Ollama requires OLLAMA_HOST, please enter a value
│ http://localhost:11434
│
◇ Enter a model from that provider:
│ michaelneale/deepseek-r1-goose
│
◇ Welcome! You're all set to explore and utilize my capabilities. Let's get started on solving your problems together!
│
└ Configuration saved successfully
```
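
Given the size warning above, it can be worth confirming the download completed and checking the model's details before your first session; `ollama show` prints information about an installed model:

```sh
# Inspect the installed custom model (name as installed above).
ollama show michaelneale/deepseek-r1-goose
```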
</TabItem>
<TabItem value="ui" label="Goose Desktop">

1. Download and install Ollama from [ollama.com](https://ollama.com/download).
2. In a terminal window, run `ollama run michaelneale/deepseek-r1-goose` to install the custom DeepSeek-R1 model.
3. Click `...` in the top-right corner.
4. Navigate to `Settings` -> `Browse Models`, and select `Ollama` from the list.
5. Enter `michaelneale/deepseek-r1-goose` for the model name.
</TabItem>
</Tabs>
If you have any questions or need help with a specific provider, feel free to reach out to us on Discord or on the Goose repo.