vibeline/.env.example
Latest commit: f3cd99a471 by hzrd149, Docker compose (#2), 2025-04-07

* Add dockerfile
* Add example docker compose
* Add docker build action
* Update extract.sh to use faster-whisper
* Update src/extract.py to check and download ollama models (see the sketch below)
* Remove multi-platform build
* Add vibeline-ui to docker compose
* Fix VOICE_MEMOS_DIR variable being ignored in some files
* Remove the requirement for faster-whisper, since it comes with whisper-ctranslate2
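One commit item notes that src/extract.py checks for and downloads Ollama models. A minimal sketch of how such a check could work against Ollama's HTTP API (GET /api/tags to list installed models, POST /api/pull to fetch a missing one), assuming the requests library; this is illustrative only, not the repository's actual implementation:

import os

import requests

# Illustrative sketch only (not the repository's actual code): make sure an
# Ollama model is present locally, pulling it through the HTTP API if missing.
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")


def ensure_model(model: str) -> None:
    # /api/tags lists the models the Ollama server already has downloaded
    tags = requests.get(f"{OLLAMA_HOST}/api/tags", timeout=10).json()
    installed = {m["name"] for m in tags.get("models", [])}
    # Installed names usually carry a tag, e.g. "llama2:latest"
    if model in installed or f"{model}:latest" in installed:
        return
    # /api/pull downloads the model; stream=False returns once it finishes
    resp = requests.post(
        f"{OLLAMA_HOST}/api/pull",
        json={"name": model, "stream": False},
        timeout=None,
    )
    resp.raise_for_status()


ensure_model(os.getenv("OLLAMA_EXTRACT_MODEL", "llama2"))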


# Ollama model configuration
# You can use different models for different tasks
OLLAMA_EXTRACT_MODEL=llama2
OLLAMA_SUMMARIZE_MODEL=llama2
OLLAMA_DEFAULT_MODEL=llama2

# Ollama connection configuration (optional)
# OLLAMA_HOST=http://localhost:11434

# Path configuration
VOICE_MEMOS_DIR=VoiceMemos

# Whisper model configuration
WHISPER_MODEL=base.en
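A copy of this file saved as .env would typically be loaded at runtime. A minimal sketch of that loading step, assuming the python-dotenv package (the variable names and defaults match the example above; the loading code itself is illustrative, not the project's actual code):

import os

from dotenv import load_dotenv  # assumes the python-dotenv package is installed

# Read key=value pairs from a .env file (copied from this .env.example)
# into the process environment, then pick them up with sensible fallbacks.
load_dotenv()

default_model = os.getenv("OLLAMA_DEFAULT_MODEL", "llama2")
extract_model = os.getenv("OLLAMA_EXTRACT_MODEL", default_model)
summarize_model = os.getenv("OLLAMA_SUMMARIZE_MODEL", default_model)
ollama_host = os.getenv("OLLAMA_HOST", "http://localhost:11434")
voice_memos_dir = os.getenv("VOICE_MEMOS_DIR", "VoiceMemos")
whisper_model = os.getenv("WHISPER_MODEL", "base.en")

print(f"Transcribing memos from {voice_memos_dir} with Whisper model {whisper_model}")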