diff --git a/.python-version b/.python-version
new file mode 100644
index 0000000..b3cc253
--- /dev/null
+++ b/.python-version
@@ -0,0 +1 @@
+3.11.2
\ No newline at end of file
diff --git a/README.md b/README.md
index 033bb60..a211116 100644
--- a/README.md
+++ b/README.md
@@ -23,110 +23,14 @@
 git clone https://github.com/yourusername/vibeline.git
 cd vibeline
 ```
-2. Create and activate a virtual environment:
+2. Make sure you have Python 3.11.2 installed. The project uses `.python-version` to specify the required Python version.
+
+3. Run the setup script:
 ```bash
-python -m venv vibenv
-[ -d "vibenv" ] && source vibenv/bin/activate # On Windows: vibenv\Scripts\activate
-```
-
-3. Install dependencies:
-```bash
-pip install -r requirements.txt
-```
-
-4. Copy the example environment file and edit it with your settings:
-```bash
-cp .env.example .env
-```
-
-5. Make sure you have [Ollama](https://ollama.ai) installed and running.
-
-## Usage
-
-### Basic Usage
-
-1. Place your voice memo (`.m4a` file) in the `VoiceMemos` directory
-2. Process the voice memo:
-```bash
-./process.sh VoiceMemos/your_memo.m4a
+./setup.sh
 ```
 
 This will:
-1. Transcribe the audio
-2. Extract content based on active plugins
-3. Save outputs in organized directories
-
-### Directory Structure
-
-```
-VoiceMemos/
-├── transcripts/   # Voice memo transcripts
-├── summaries/     # Generated summaries
-├── blog_posts/    # Generated blog posts
-├── app_ideas/     # Generated app ideas
-└── action_items/  # Generated action items
-```
-
-## Plugin System
-
-VibeLine uses a flexible YAML-based plugin system for content extraction. Each plugin is defined in a YAML file with the following structure:
-
-```yaml
-name: plugin_name
-description: What the plugin does
-model: llama2 # Optional, falls back to ENV default
-type: or # Comparison type: 'and' or 'or'
-run: matching # When to run: 'always' or 'matching'
-output_extension: .txt # Optional, defaults to .txt
-prompt: |
-  Your prompt template here.
-  Use {transcript} for the transcript content.
-  Use {summary} for the summary content.
-```
-
-### Plugin Types
-
-- **Run Types**:
-  - `always`: Plugin runs for every transcript
-  - `matching`: Plugin runs only when keywords match
-
-- **Comparison Types**:
-  - `or`: Runs if any keyword matches
-  - `and`: Runs only if all keywords match
-
-### Creating a New Plugin
-
-1. Create a new YAML file in the `plugins` directory
-2. Define the plugin configuration
-3. Add your prompt template
-4. The plugin will be automatically loaded on the next run
-
-### Example Plugin
-
-```yaml
-name: blog_post
-description: Generate draft blog posts from transcripts
-model: llama2
-type: or
-run: matching
-output_extension: .md
-prompt: |
-  Based on the following transcript, create a blog post...
-
-  Transcript:
-  {transcript}
-```
-
-## Environment Variables
-
-- `OLLAMA_EXTRACT_MODEL`: Default model for content extraction
-- `OLLAMA_SUMMARIZE_MODEL`: Default model for summarization
-- `VOICE_MEMOS_DIR`: Directory for voice memos (default: "VoiceMemos")
-
-## Contributing
-
-Contributions are welcome! Please feel free to submit a Pull Request.
-
-## License
-
-This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
+- Create a virtual environment named `vibenv`
+- Activate the virtual environment
+- Install all required dependencies
\ No newline at end of file
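The patch references a `setup.sh` whose contents are not shown here. A minimal sketch of such a script, assuming it performs exactly the three steps the new README lists (the actual script in the repository may differ):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of setup.sh, based only on the steps described in the
# README diff: create the vibenv virtual environment, activate it, and
# install dependencies.
set -eu

python3 -m venv vibenv              # create a virtual environment named vibenv
. vibenv/bin/activate               # activate it for the rest of this script
if [ -f requirements.txt ]; then
  pip install -r requirements.txt   # install all required dependencies
fi
```

Note that activation inside a script only affects the script's own child process; to keep working inside the environment afterward, run `source vibenv/bin/activate` in your interactive shell.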