

Supported Hardware

Hardware       | Status
---------------|---------------------------------------
Windows x64    | Fully supported
Windows ARM64  | Experimental (slower due to emulation)
NVIDIA GPU     | Optional, enables acceleration

Key Differences from macOS

Feature           | macOS                        | Windows
------------------|------------------------------|-------------
Default shortcut  | Fn                           | Ctrl + Shift
AI shortcut       | Alt + Shift                  | Alt + Shift
“Option” key      | Option                       | Alt
“Command” key     | Command                      | Windows key
Fn key support    | Yes                          | No
Permissions       | Manual (Accessibility, Mic)  | Automatic
Code signing      | Yes (notarized)              | Not yet
GPU acceleration  | Not needed (Neural Engine)   | NVIDIA CUDA

Windows-Specific Features

GPU Acceleration (NVIDIA)

Speed up local model transcription dramatically:
  1. Go to Settings > Models
  2. Scroll to “GPU Acceleration” section
  3. Click Download to get CUDA modules
  4. Restart Vowen (or restart system if needed)
With GPU acceleration, Large v3 Turbo responds in 1-2 seconds.
Only NVIDIA GPUs with CUDA support are currently compatible. AMD and Intel GPUs are not supported.
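Before downloading the CUDA modules, you can verify that an NVIDIA driver is installed. A minimal sketch (not part of Vowen itself): the `nvidia-smi` utility ships with NVIDIA's driver, so its presence on PATH is a reasonable proxy for a CUDA-capable GPU.

```python
import shutil
import subprocess
from typing import Optional

def nvidia_gpu_available() -> bool:
    """True if the NVIDIA driver's nvidia-smi tool is on PATH."""
    return shutil.which("nvidia-smi") is not None

def gpu_name() -> Optional[str]:
    """Return the GPU model name via nvidia-smi, or None if unavailable."""
    if not nvidia_gpu_available():
        return None
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or None
```

If `gpu_name()` returns None, stick with a cloud model or a small local model rather than downloading the CUDA modules.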

Start at Login

Enable automatic startup: Settings > General > Open at Login

Minimize to Tray

Start the app minimized: Settings > General > Start minimized to tray

System Tray

Right-click the Vowen icon in the system tray (bottom-right) for:
  • Model and microphone switching
  • Meeting notes controls
  • Copy last transcription
  • Open settings
  • Quit

Installation Notes

Windows Defender Warning

Windows shows a “Windows protected your PC” warning because the app isn’t code-signed with an EV certificate. This is normal for independent software. To proceed, click “More info”, then “Run anyway”.

Antivirus False Positives

Some antivirus software (QuickHeal, Kaspersky, Norton) may falsely flag Vowen. If this happens, add the Vowen installation folder to your antivirus whitelist (exclusion list).
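If Microsoft Defender is the product doing the flagging, the exclusion can also be added from an elevated PowerShell prompt using the built-in `Add-MpPreference` cmdlet. The path below is an assumption about the default install location; substitute wherever you installed Vowen.

```powershell
# Run as Administrator. The path is an example; adjust to your install location.
Add-MpPreference -ExclusionPath "C:\Program Files\Vowen"
```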

Performance Tips

Local models are slower on Windows than macOS (no Neural Engine). To get fast transcription:
  1. Best: Use a cloud model (Groq — free and fast)
  2. Good: Enable GPU acceleration (NVIDIA only)
  3. OK: Use Tiny or Base model (fast but less accurate)
Without GPU acceleration, the Medium and Large models can take 5-10+ seconds to transcribe on Windows. Cloud models are strongly recommended for Windows users without NVIDIA GPUs.

Have a Windows-specific question? See the Windows FAQ.