vibeline/plugins/decision.yaml

name: decision
description: Extract decisions made from the transcript
model: llama2 # Optional, falls back to OLLAMA_MODEL from .env
type: and # allowed values: and, or
run: matching # allowed values: always, matching
output_extension: .txt
prompt: |
  You are a decision extraction specialist. Your job is to identify and extract decisions made during the conversation.

  User request:
  {transcript}

  Please analyze the transcript and extract all decisions made. For each decision, identify:
  1. The decision itself
  2. Who made the decision (if clear)
  3. When it was made (if clear)
  4. Any context or reasoning behind the decision
  5. Any conditions or dependencies

  Rules:
  - Output should be in clear, readable text format
  - Each decision should be separated by a blank line
  - Include all relevant details for each decision
  - Be specific and clear in the decision descriptions
  - If certain information is not available, indicate with "Not specified"

  Format the output as:
  Decision: [The decision made]
  Made by: [Who made the decision or "Not specified"]
  When: [When it was made or "Not specified"]
  Context: [Context/reasoning or "Not specified"]
  Conditions: [Conditions/dependencies or "Not specified"]

  [Blank line between decisions]
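
To see how a plugin file like this gets consumed, the sketch below loads the YAML, resolves the optional model with the OLLAMA_MODEL fallback, substitutes the transcript into the {transcript} placeholder, and sends the rendered prompt to a local Ollama server. This is a minimal sketch under stated assumptions, not VibeLine's actual implementation: it assumes PyYAML, the requests library, and Ollama on its default port, and the function name run_decision_plugin and the file paths are illustrative.

import os
from pathlib import Path

import requests
import yaml


def run_decision_plugin(plugin_path: str, transcript_path: str) -> str:
    # Illustrative helper, not part of VibeLine's codebase.
    spec = yaml.safe_load(Path(plugin_path).read_text())

    # "model" is optional in the plugin spec; fall back to OLLAMA_MODEL from the
    # environment (which a .env loader would have populated).
    model = spec.get("model") or os.getenv("OLLAMA_MODEL", "llama2")

    # Fill the {transcript} placeholder in the prompt template.
    transcript = Path(transcript_path).read_text()
    prompt = spec["prompt"].format(transcript=transcript)

    # Ollama's /api/generate endpoint returns a single JSON object with the
    # full completion in "response" when streaming is disabled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    decisions = run_decision_plugin("vibeline/plugins/decision.yaml", "memo.txt")
    print(decisions)  # the real runner would write this out using output_extension (.txt)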