mirror of
https://github.com/aljazceru/goose.git
synced 2025-12-18 14:44:21 +01:00
moving ollama host note (#1004)
@@ -164,12 +164,6 @@ Ollama provides local LLMs, which requires a bit more set up before you can use
1. [Download Ollama](https://ollama.com/download).
2. Run any [model supporting tool-calling](https://ollama.com/search?c=tools):
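   For example (the model name below is only an illustration; any model tagged with tool support on the linked page works):

   ```shell
   # Pull and start a tool-calling-capable model (model choice is illustrative)
   ollama run qwen2.5
   ```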
:::info Ollama Endpoint Construction
For Ollama, we default the host to `localhost` and the port to `11434` if you don't provide them. When constructing the URL, we prepend `http://` if the scheme is not `http` or `https`.
If you're running Ollama on port 80 or 443, you have to set `OLLAMA_HOST=http://host:port`.
:::
:::warning Limited Support for models without tool calling
Goose extensively uses tool calling, so models without it (e.g. `DeepSeek-r1`) can only do chat completion. If using models without tool calling, all Goose [extensions must be disabled](/docs/getting-started/using-extensions#enablingdisabling-extensions). As an alternative, you can use a [custom DeepSeek-r1 model](/docs/getting-started/providers#deepseek-r1) we've made specifically for Goose.
:::
@@ -219,6 +213,10 @@ goose configure
5. Enter the host where your model is running.
:::info Endpoint
For Ollama, if you don't provide a host, we set it to `localhost:11434`. When constructing the URL, we prepend `http://` if the scheme is not `http` or `https`. If you're running Ollama on port 80 or 443, you'll have to set `OLLAMA_HOST=http://host:{port}`.
:::
```
┌ goose-configure
│