Expected Resource Usage
| Scenario | RAM Usage | CPU |
|---|---|---|
| Idle (no model loaded) | ~200-300 MB | Near 0% |
| Small model loaded | ~400-600 MB | Near 0% (idle) |
| Large v3 model loaded | ~3-4 GB | Near 0% (idle) |
| During transcription | +50-200 MB | 30-100% (brief) |
RAM usage depends on which model is loaded. Models are loaded on first transcription and stay in memory for fast subsequent use.
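If you want to verify these numbers on your own machine, a quick terminal one-liner can report Vowen's resident memory. This is a sketch that assumes the process is named "Vowen"; check Activity Monitor (macOS) or Task Manager (Windows) for the exact name on your system.

```shell
# Sum the resident memory (RSS) of all processes whose name contains
# "Vowen" and report it in MB. Prints "0 MB" if the app is not running.
# ("Vowen" is an assumed process name; adjust to match yours.)
ps -axo rss,comm | awk '/[V]owen/ { total += $1 } END { printf "%.0f MB\n", total/1024 }'
```

With a small model loaded, the result should land roughly in the range shown in the table above.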
Transcription Is Slow
On Windows (No GPU)
Local models can be slow on Windows without GPU acceleration. Options:
- Use a cloud model — Groq offers fast, free transcription
- Enable GPU acceleration — Settings > Models > Download GPU module (NVIDIA only)
- Use a smaller model — Tiny or Base instead of Medium/Large
- Enable Resource Efficient Mode — Settings > General (at bottom)
On macOS
Local models are generally fast on Apple Silicon. If you’re experiencing slowness:
- First transcription is always slower — the model needs to load into memory
- Large v3 is naturally slower — try Large v3 Turbo for better speed
- Parakeet is very fast on macOS — try switching to it
- Check Activity Monitor for other apps consuming CPU
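As a command-line alternative to Activity Monitor, you can list the heaviest CPU consumers directly from Terminal:

```shell
# Show the five processes using the most CPU right now.
# Anything pegged near 100% may be competing with transcription.
ps -axo %cpu,comm | sort -rn | head -5
```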
With Cloud Models
If cloud transcription is slow:
- Check your internet connection
- The provider may be experiencing load — try a different one
- Gemini Flash Lite has been reported as occasionally slower during peak times
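To separate a slow provider from a slow connection, you can time a request to the provider's endpoint with curl. The URL below is illustrative; substitute the API host you actually use.

```shell
# Print DNS, connect, and total round-trip times for a request.
# https://api.groq.com is a placeholder host used for illustration.
curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' \
  https://api.groq.com
```

If the total time is high while the connect time is low, the delay is likely on the provider's side rather than in your network.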
High Memory Usage
Why Does Vowen Use So Much RAM?
The transcription model stays loaded in memory between uses for faster response times. This is by design.
Reducing Memory Usage
- Switch to a smaller model — Base.en uses ~200 MB vs Large v3 at ~3 GB
- Enable Resource Efficient Mode — loads the model only when needed (slower but less RAM)
- Use cloud models — no local model needs to stay in memory
MacBook Running Warm
Normal during active transcription — the model is using CPU/GPU intensively. This stops immediately when transcription is complete. Between recordings, Vowen should be idle. If your MacBook runs warm even when not transcribing:
- Check Activity Monitor for Vowen’s CPU usage
- Restart Vowen to reset any stuck processes
- Report the issue on Discord with your model configuration
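To confirm whether Vowen itself is the culprit before restarting it, you can check its CPU usage from Terminal. This is a sketch assuming the process is named "Vowen":

```shell
# Show Vowen's CPU usage; expect values near 0% between recordings.
# A persistently high value suggests a stuck process worth restarting.
# ("Vowen" is an assumed process name; adjust to match yours.)
ps -axo %cpu,comm | grep -i '[v]owen' || echo "Vowen is not running"
```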
Battery Impact
Vowen has minimal battery impact when idle. During transcription, it briefly uses significant CPU, but recordings typically last only seconds. Tips for laptop users:
- Use cloud models to avoid CPU-intensive local processing
- Enable Resource Efficient Mode to unload models when not in use
- Smaller models use less power per transcription