# Documentation Index

Fetch the complete documentation index at: https://vowen.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.
## What is Diarization?
Speaker diarization identifies and labels different speakers in a recording. Instead of a single block of text, your transcript shows which person said each part.

## Supported Models

Diarization is available with these cloud transcription models:

| Model | Provider |
|---|---|
| Nova 2 / Nova 3 | Deepgram |
| Scribe v2 | ElevenLabs |
| Universal | AssemblyAI |
| Voxtral Mini | Mistral |
| STT Realtime | Soniox |
| Aurora | xAI |
| Speechmatics | Speechmatics |
Diarization is not available with local models (Whisper, Parakeet) or Groq. See Transcription Models for the full model list.
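Providers that support diarization typically attach a speaker label to each transcript segment, and consecutive segments from the same speaker are merged into turns. The sketch below illustrates that idea only; the segment shape (`speaker`, `text` keys) is a generic assumption, not any specific provider's response schema or this app's internals.

```python
# Sketch: merge consecutive diarized segments into speaker turns.
# The segment shape is a generic assumption, not a provider schema.

def group_turns(segments):
    """Merge consecutive segments from the same speaker into turns."""
    turns = []
    for seg in segments:
        if turns and turns[-1]["speaker"] == seg["speaker"]:
            # Same speaker as the previous turn: extend it.
            turns[-1]["text"] += " " + seg["text"]
        else:
            # Speaker changed: start a new turn.
            turns.append({"speaker": seg["speaker"], "text": seg["text"]})
    return turns

segments = [
    {"speaker": 0, "text": "Let's get started."},
    {"speaker": 0, "text": "First item is the roadmap."},
    {"speaker": 1, "text": "Sounds good."},
]
print(group_turns(segments))
```

This is why overlapping speech is hard: segments with ambiguous speaker labels break the clean alternation of turns.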
## Enabling Diarization
- Select a cloud model that supports diarization (see table above)
- When starting Meeting Notes, you’ll see a diarization toggle
- Enable it before starting the recording
## Mapping Speaker Names

After transcription, speakers are labeled generically (Speaker 1, Speaker 2, etc.). You can map these to real names:

- Open the completed meeting note
- In the transcript section, look for the speaker mapping panel in the sidebar
- Assign names to each speaker label
- The transcript updates to show the real names
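Conceptually, the mapping step above is a label-to-name substitution across the transcript. A minimal sketch, assuming a simple list-of-lines transcript structure (hypothetical, not the app's actual data model):

```python
# Sketch: replace generic speaker labels with user-assigned names.
# The transcript structure here is a hypothetical illustration.

def apply_speaker_names(transcript, name_map):
    """Return transcript lines with generic labels swapped for real names.

    Labels without a mapping are left unchanged.
    """
    return [
        {"speaker": name_map.get(line["speaker"], line["speaker"]),
         "text": line["text"]}
        for line in transcript
    ]

transcript = [
    {"speaker": "Speaker 1", "text": "Welcome, everyone."},
    {"speaker": "Speaker 2", "text": "Thanks for having me."},
]
names = {"Speaker 1": "Alice", "Speaker 2": "Bob"}
print(apply_speaker_names(transcript, names))
```

Keeping unmapped labels as-is means a partially completed mapping still produces a readable transcript.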
## Tips for Better Speaker Separation
- Use a good microphone: clear audio helps the model distinguish speakers
- Avoid talking over each other: overlapping speech is harder to separate
- Vendors that ship diarization as a first-class feature (Deepgram, AssemblyAI, Speechmatics) tend to handle hard cases better than ones where it’s bolted on
- Longer meetings give the model more data to distinguish speakers accurately