docs: more info on configuring Ollama (#804)

This commit is contained in:
Angie Jones
2025-01-26 19:09:02 -06:00
committed by GitHub
parent c91c136934
commit b3d2378bdd
2 changed files with 91 additions and 4 deletions

@@ -3,6 +3,7 @@ title: Troubleshooting
---
# Troubleshooting
Goose, like any system, may run into occasional issues. This guide provides solutions for common problems.
### Goose Edits Files
@@ -28,12 +29,24 @@ For particularly large or complex tasks, consider breaking them into smaller ses
:::
---
### Context Length Exceeded Error
This error occurs when the input provided to Goose exceeds the maximum token limit of the LLM being used. To resolve this, try breaking down your input into smaller parts. You can also use `.goosehints` as a way to provide Goose with detailed context. Refer to the [Using Goosehints Guide][goosehints] for more information.
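When trimming input to fit a model's context window, a rough rule of thumb for English text is about four characters per token (an approximation only; actual tokenizers vary by model, so treat this as a sketch rather than an exact count):

```python
def approx_tokens(text: str, chars_per_token: int = 4) -> int:
    """Rough token estimate: ~4 characters per token for English text.

    Real tokenizers (tiktoken, SentencePiece, etc.) vary by model; use
    this only to ballpark whether input is near a context limit.
    """
    return max(1, len(text) // chars_per_token)
```

For example, a 32k-token context window corresponds to very roughly 128k characters of plain English input.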
---
### Using Ollama Provider
Ollama provides local LLMs, which means you must first [download Ollama and run a model](/docs/getting-started/providers#local-llms-ollama) before attempting to use this provider with Goose. If you do not have the model downloaded, you'll run into the following error:
> ExecutionError("error sending request for url (http://localhost:11434/v1/chat/completions)")
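Before pointing Goose at Ollama, a quick script can confirm the local server is actually reachable (a minimal sketch; Ollama's default port `11434` is assumed):

```python
import urllib.error
import urllib.request


def ollama_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if a local Ollama server responds at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: Ollama is not running here.
        return False
```

If this returns `False`, start Ollama and pull a model before retrying with Goose.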
Another thing to note is that the DeepSeek models do not support tool calling, so all Goose [extensions must be disabled](/docs/getting-started/using-extensions#enablingdisabling-extensions) to use one of these models. Unfortunately, without tool use, there is not much Goose can do autonomously when running DeepSeek. However, Ollama's other models, such as `qwen2.5`, do support tool calling and can be used with Goose extensions.
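Pulling a tool-calling-capable model and pointing Goose at it might look like the following (a sketch only; the `GOOSE_PROVIDER` and `GOOSE_MODEL` variable names are assumptions drawn from common Goose configuration, so verify them against the provider docs):

```shell
# Download a model that supports tool calling (assumed example model).
ollama pull qwen2.5

# Hypothetical environment-based configuration; confirm variable names
# against the Goose provider documentation before relying on them.
export GOOSE_PROVIDER="ollama"
export GOOSE_MODEL="qwen2.5"
```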
---
### Handling Rate Limit Errors
Goose may encounter a `429 error` (rate limit exceeded) when interacting with LLM providers. The recommended solution is to use OpenRouter. See [Handling LLM Rate Limits][handling-rate-limits] for more information.
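If switching providers is not an option, retrying with exponential backoff is a common client-side mitigation for `429` responses. This is a generic sketch, not a built-in Goose feature:

```python
import random


def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 60.0):
    """Yield jittered exponential backoff delays (in seconds).

    Each retry roughly doubles the wait (base, 2*base, 4*base, ...),
    capped at `cap`, with random jitter to avoid synchronized retries.
    """
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        yield delay * random.uniform(0.5, 1.0)
```

A caller would sleep for each yielded delay between request attempts, giving up after `max_retries` consecutive `429` responses.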