# VibeLine
VibeLine is a voice memo processing system that uses AI to extract meaningful content from your recordings. A flexible plugin system generates various types of content, such as summaries, blog posts, app ideas, and action items.
## Features
- 🎙️ Automatic voice memo transcription
- 🔌 Flexible plugin system for content extraction
- 🤖 AI-powered content generation using Ollama
- 📝 Built-in plugins for:
  - Summaries
  - Blog posts
  - App ideas
  - Action items/TODOs
- 🎯 Smart plugin matching based on transcript content
- 📁 Organized output directory structure
## Installation
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/vibeline.git
   cd vibeline
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv vibenv
   source vibenv/bin/activate  # On Windows: vibenv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Copy the example environment file and edit it with your settings:

   ```bash
   cp .env.example .env
   ```

5. Make sure you have Ollama installed and running.
## Usage

### Basic Usage
1. Place your voice memo (`.m4a` file) in the `VoiceMemos` directory
2. Process the voice memo:

   ```bash
   ./process.sh VoiceMemos/your_memo.m4a
   ```
This will:
- Transcribe the audio
- Extract content based on active plugins
- Save outputs in organized directories
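The transcribe → extract → save flow above can be sketched in Python. This is an illustrative sketch only, not VibeLine's actual implementation: `process_memo`, and the lambda stand-ins for the transcriber and a plugin, are hypothetical names invented for this example.

```python
import tempfile
from pathlib import Path

def process_memo(audio_path, transcribe, plugins, out_root="VoiceMemos"):
    """Illustrative pipeline: transcribe audio, run each plugin, save its output."""
    transcript = transcribe(audio_path)
    stem = Path(audio_path).stem
    outputs = {}
    for name, generate in plugins.items():
        out_dir = Path(out_root) / name
        out_dir.mkdir(parents=True, exist_ok=True)  # mirrors the organized directory layout
        out_file = out_dir / f"{stem}.txt"
        out_file.write_text(generate(transcript))
        outputs[name] = str(out_file)
    return outputs

# Toy stand-ins so the sketch runs end to end (the real tool calls a
# transcription model and Ollama-backed plugins here)
demo_root = tempfile.mkdtemp()
result = process_memo(
    "VoiceMemos/your_memo.m4a",
    transcribe=lambda path: "hello from the memo",
    plugins={"summaries": lambda t: t.upper()},
    out_root=demo_root,
)
print(Path(result["summaries"]).read_text())  # HELLO FROM THE MEMO
```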
### Directory Structure

```
VoiceMemos/
├── transcripts/    # Voice memo transcripts
├── summaries/      # Generated summaries
├── blog_posts/     # Generated blog posts
├── app_ideas/      # Generated app ideas
└── action_items/   # Generated action items
```
## Plugin System
VibeLine uses a flexible YAML-based plugin system for content extraction. Each plugin is defined in a YAML file with the following structure:
```yaml
name: plugin_name
description: What the plugin does
model: llama2           # Optional, falls back to ENV default
type: or                # Comparison type: 'and' or 'or'
run: matching           # When to run: 'always' or 'matching'
output_extension: .txt  # Optional, defaults to .txt
prompt: |
  Your prompt template here.
  Use {transcript} for the transcript content.
  Use {summary} for the summary content.
```
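The `{transcript}` and `{summary}` placeholders look like standard Python format fields, so a plugin runner can fill them with `str.format`. The `render_prompt` helper below is a hypothetical sketch under that assumption, not the project's actual loader:

```python
def render_prompt(template, transcript, summary=""):
    """Fill a plugin's placeholder fields before sending the prompt to the model."""
    return template.format(transcript=transcript, summary=summary)

# Example: a one-line template using only the {transcript} field
prompt = render_prompt("Summarize this memo: {transcript}", "We discussed the Q3 roadmap.")
print(prompt)  # Summarize this memo: We discussed the Q3 roadmap.
```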
### Plugin Types

- **Run Types:**
  - `always`: Plugin runs for every transcript
  - `matching`: Plugin runs only when keywords match
- **Comparison Types:**
  - `or`: Runs if any keyword matches
  - `and`: Runs only if all keywords match
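The run and comparison semantics map naturally onto Python's `any`/`all`. The function below is an illustrative sketch of that matching rule; the name `plugin_should_run` and the substring-based keyword check are assumptions, not VibeLine's actual code:

```python
def plugin_should_run(run, comparison, keywords, transcript):
    """Decide whether a plugin fires for a transcript, per its run/comparison type."""
    if run == "always":
        return True  # 'always' plugins ignore keywords entirely
    text = transcript.lower()
    hits = [kw.lower() in text for kw in keywords]
    # 'or' fires on any keyword hit; 'and' requires every keyword to appear.
    return any(hits) if comparison == "or" else all(hits)

print(plugin_should_run("matching", "or", ["blog", "post"], "An idea for a blog today"))  # True
print(plugin_should_run("matching", "and", ["blog", "post"], "An idea for a blog today"))  # False
```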
### Creating a New Plugin

1. Create a new YAML file in the `plugins` directory
2. Define the plugin configuration
3. Add your prompt template
4. The plugin will be automatically loaded on the next run
### Example Plugin

```yaml
name: blog_post
description: Generate draft blog posts from transcripts
model: llama2
type: or
run: matching
output_extension: .md
prompt: |
  Based on the following transcript, create a blog post...

  Transcript:
  {transcript}
```
## Environment Variables

- `OLLAMA_EXTRACT_MODEL`: Default model for content extraction
- `OLLAMA_SUMMARIZE_MODEL`: Default model for summarization
- `VOICE_MEMOS_DIR`: Directory for voice memos (default: `VoiceMemos`)
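These settings can be read with `os.getenv`, falling back to a default when a variable is unset. In this sketch, only the `VoiceMemos` default is documented above; the `llama2` fallbacks for the model variables are assumptions for illustration:

```python
import os

def load_config():
    """Read VibeLine settings from the environment, with fallbacks."""
    return {
        "extract_model": os.getenv("OLLAMA_EXTRACT_MODEL", "llama2"),      # fallback name assumed
        "summarize_model": os.getenv("OLLAMA_SUMMARIZE_MODEL", "llama2"),  # fallback name assumed
        "voice_memos_dir": os.getenv("VOICE_MEMOS_DIR", "VoiceMemos"),     # documented default
    }

print(sorted(load_config().keys()))
```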
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.