AI Moderated Interviews
Overview
Yazi's AI Interview is a hybrid methodology that combines a structured survey section with an AI-moderated qualitative interview — all delivered within a single WhatsApp conversation. The AI interviewer adapts its follow-up questions in real time based on what the participant has already said, creating a conversational experience that sits between a traditional survey and a human-led depth interview.
The result is qualitative richness at quantitative scale — without needing a team of human moderators.
At a Glance
Best for: Understanding motivations, attitudes, experiences, and emotions
Typical sample size: 12–100 participants
Typical structure: 8–12 survey questions plus 5–15 AI follow-ups
Participant effort: 8–20 minutes
Best output: Structured data plus rich verbatims in one flow
Use AI Interviews when you need both measurement and depth in the same study.
How It Works
Participant Experience
The transition from survey to interview is seamless. To the participant, it feels like a single conversation that starts structured and becomes more open and conversational.

Example Conversation
Survey question: "Have you switched banks in the last 5 years?" → Participant selects "Yes"
AI follow-up: "Since you switched banks in the last 5 years, what made you leave your old bank?"
Participant responds: "The fees were too high and the app kept crashing"
AI follow-up: "That sounds frustrating. When the app crashed, how did that affect your day-to-day banking? Did you find workarounds or just stop using it?"
The AI draws on the full context of the conversation — not just the most recent answer — to ask relevant, non-repetitive follow-up questions.
When to Use AI Interviews vs. Surveys
Use a Survey when you need quantitative measurement, tracking, or benchmarking. Surveys are built for scale (250+ participants), deliver coded and countable data, and take participants 3–8 minutes to complete with 22–28 structured questions.
Use an AI Interview when you need to explore motivations, attitudes, experiences, or emotions. AI Interviews are designed for 12–100 participants, combine 8–12 survey questions with 5–15 AI-generated follow-ups, and take 8–20 minutes. You get rich verbatims, stories, and nuanced explanations alongside your structured data.
Rule of thumb: If you need numbers, use a survey. If you need understanding, use an AI interview. If you need both, use an AI interview — the survey section gives you the numbers, and the AI section gives you the depth.
The Interview Builder
Two Sections
The AI interview builder has two distinct sections: a structured survey section and an AI interview configuration section.
For the full survey builder, see Surveys in Yazi.

Interview Configuration Fields
The AI moderator is controlled through nine specific configuration fields, giving you granular control over the conversation:
Research Objective — The high-level outcome the research is trying to achieve. Example: "Understand why customers cancel their subscriptions within the first 3 months."
Target Audience — Who the participants are, providing context for tone and relevance. Example: "Working mothers aged 25–40 in urban South Africa who use meal delivery services."
Key Topic Areas — The themes the AI should cover, ensuring balanced question allocation across topics. Example: "Price sensitivity, delivery experience, menu variety, competitor usage."
Style & Tone — How the AI should communicate. Example: "Warm and conversational, not clinical. Use simple language."
Probing Intensity — How deeply the AI should follow up on responses. Example: "Probe deeply on emotional responses. Ask 'why' at least twice on key topics."
Media Requirements — When and how to request multimedia responses. Example: "Ask for a voice note when the participant describes a frustrating experience. Request a photo if they mention their workspace."
Safety Considerations — Topics to avoid or handle sensitively. Example: "Do not ask about specific medical diagnoses. If participant mentions mental health struggles, respond empathetically and move on."
Brand/Entity Insertion — Specific brands or products the AI should explore. Example: "Compare perceptions of Discovery Vitality vs Momentum Multiply."
Required Questions — Specific questions the AI must ask at some point during the interview. Example: "At some point ask: 'If you could change one thing about [product], what would it be?'"
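Taken together, the nine fields form a single configuration object. A minimal sketch in Python, using the examples above (the class and field names are illustrative, not Yazi's actual API):

```python
from dataclasses import dataclass

@dataclass
class InterviewConfig:
    """Hypothetical container for the nine AI-moderator fields."""
    research_objective: str
    target_audience: str
    key_topic_areas: list        # themes the AI must cover in balance
    style_and_tone: str
    probing_intensity: str
    media_requirements: str
    safety_considerations: str
    brand_entity_insertion: str
    required_questions: list     # questions the AI must ask at some point

config = InterviewConfig(
    research_objective="Understand why customers cancel within 3 months",
    target_audience="Working mothers aged 25-40 in urban South Africa",
    key_topic_areas=["price sensitivity", "delivery experience",
                     "menu variety", "competitor usage"],
    style_and_tone="Warm and conversational, not clinical",
    probing_intensity="Probe deeply on emotional responses",
    media_requirements="Voice note when describing a frustrating experience",
    safety_considerations="Do not ask about specific medical diagnoses",
    brand_entity_insertion="Discovery Vitality vs Momentum Multiply",
    required_questions=["If you could change one thing, what would it be?"],
)
```

Splitting the configuration this way is what lets each concern (topics, tone, safety, media) be inspected and tuned independently.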
Why nine fields instead of one?
A single broad prompt often causes the AI to over-focus on the first topic mentioned. Splitting the configuration into specific fields creates better topic balance and more reliable probing.
AI Behaviour & Controls
How the AI Decides What to Ask
The AI moderator reads three things before generating each follow-up question: all survey responses the participant has already provided, the full conversation history within the interview, and your nine configuration fields. It then generates the next most relevant question, avoiding repetition and ensuring coverage across your specified topic areas.
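Conceptually, the context the moderator sees before each follow-up is just those three inputs stitched together. A simplified sketch (the function and prompt layout are assumptions, not Yazi's implementation):

```python
def build_followup_prompt(survey_responses, conversation_history, config_fields):
    """Assemble the context the moderator reads before each follow-up.

    All three inputs are included on every turn, which is what lets the
    model avoid repeating itself and balance coverage across topics.
    """
    lines = ["[Configuration]"]
    lines += [f"{name}: {value}" for name, value in config_fields.items()]
    lines.append("[Survey responses]")
    lines += [f"Q: {q} / A: {a}" for q, a in survey_responses]
    lines.append("[Conversation so far]")
    lines += [f"{speaker}: {text}" for speaker, text in conversation_history]
    lines.append("Ask the single most relevant, non-repetitive next question.")
    return "\n".join(lines)

prompt = build_followup_prompt(
    survey_responses=[("Switched banks in the last 5 years?", "Yes")],
    conversation_history=[("AI", "What made you leave your old bank?"),
                          ("Participant", "Fees were too high")],
    config_fields={"Research Objective": "Understand bank switching"},
)
```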
Conversation Flow
The AI asks one question at a time and waits for the participant's response. It acknowledges what the participant said before asking the next question. It probes deeper when responses are vague or surface-level, moves on when a topic has been sufficiently explored, and naturally transitions between topic areas.
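The probe-or-move-on behaviour can be pictured as a simple decision rule. The thresholds below are illustrative assumptions, not Yazi's actual logic:

```python
def next_action(response_text, topic_turns, max_turns_per_topic=3):
    """Decide whether to probe deeper or transition to the next topic.

    Very short answers trigger a probe; once a topic has had enough
    turns, the moderator transitions instead of probing further.
    """
    if topic_turns >= max_turns_per_topic:
        return "transition"              # topic sufficiently explored
    if len(response_text.split()) < 8:
        return "probe"                   # brief, surface-level answer
    return "acknowledge_and_continue"    # substantive answer, keep flowing
```

In practice the real moderator weighs meaning rather than word count, but the shape of the decision is the same.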
Response Timing
The AI takes approximately 8–12 seconds to process a response and generate the next question. This is generally perceived as natural conversational pacing within WhatsApp. For older or slower-typing participants, you can extend the response timing in the settings.
Question Limit
Set a maximum question count to control interview length. The recommended range is 8–12 AI-generated questions per interview. The AI will naturally wrap up the conversation as it approaches the limit, ensuring a clean ending rather than an abrupt cutoff.
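The wind-down near the limit can be sketched as a three-phase check (the two-question wrap-up window is an assumption for illustration):

```python
def question_phase(asked, max_questions, wrapup_window=2):
    """Classify where the interview sits relative to its question limit."""
    if asked >= max_questions:
        return "end"      # thank the participant and close cleanly
    if asked >= max_questions - wrapup_window:
        return "wrap_up"  # start steering toward a natural close
    return "explore"      # normal probing and topic coverage
```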
Handling Unexpected Participant Behaviour
If a participant goes off-topic, the AI acknowledges their response and gently steers back to relevant themes.
If a participant sends multiple short messages in quick succession, the AI waits for a pause before responding.
If a participant responds with very brief answers, the AI probes for more detail.
The AI will not reference random topics mentioned in passing unless they are relevant to your configured topic areas.
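The "wait for a pause" behaviour above is essentially a debounce: rapid messages are buffered into one burst and answered together. A minimal sketch (the 4-second pause threshold is an assumption):

```python
def batch_messages(timestamps_and_texts, pause_seconds=4.0):
    """Group rapid-fire messages into bursts separated by pauses.

    The moderator replies once per burst, not once per message.
    """
    bursts, current, last_t = [], [], None
    for t, text in timestamps_and_texts:
        if last_t is not None and t - last_t > pause_seconds:
            bursts.append(" ".join(current))  # pause detected: close the burst
            current = []
        current.append(text)
        last_t = t
    if current:
        bursts.append(" ".join(current))
    return bursts

# Three quick messages, a pause, then one more -> two bursts
msgs = [(0.0, "the app"), (1.0, "kept crashing"), (2.0, "every day"),
        (10.0, "also fees")]
bursts = batch_messages(msgs)
# -> ["the app kept crashing every day", "also fees"]
```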
What the AI Won't Do
It will not make promises on your behalf (e.g., "Someone will call you back").
It will not provide advice, diagnoses, or recommendations to participants.
It will not share information about other participants.
It will not deviate from your configured research scope.
AI Interviews are strong within a defined research scope. They are not a substitute for human judgement in sensitive, regulated, or high-risk topics.
Agent Takeover
At any point during an AI interview, a human moderator can take over the conversation.
How It Works
Open the conversation in the Yazi dashboard.
Activate agent takeover.
The AI pauses and the moderator messages the participant directly.
The participant sees no difference — messages continue in the same chat.
When the moderator is done, they hand back to the AI, which continues from where it left off.
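The steps above amount to a small state machine over who is moderating. A hypothetical sketch (not Yazi's API):

```python
class Conversation:
    """Tracks who is currently moderating a participant's chat."""

    def __init__(self):
        self.moderator = "ai"
        self.transcript = []  # (sender, text) pairs, AI and human alike

    def takeover(self):
        self.moderator = "human"  # AI pauses; dashboard user replies directly

    def handback(self):
        self.moderator = "ai"     # AI resumes with the full transcript as context

    def send(self, text):
        # The participant sees one continuous chat regardless of sender.
        self.transcript.append((self.moderator, text))

convo = Conversation()
convo.send("What made you switch banks?")    # asked by the AI
convo.takeover()
convo.send("Tell me more about those fees")  # asked by the human moderator
convo.handback()
```

Because both senders write to the same transcript, the AI can pick up exactly where the human left off.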
When to Use Agent Takeover
A participant shares something particularly interesting that deserves deeper human follow-up.
The AI is not probing in the direction you want.
You want to ask a very specific unscripted question.
The participant is confused and needs human clarification.
Your editorial or research team wants to conduct a live WhatsApp interview with selected participants.
The full transcript — both AI and human moderator messages — is captured in the results.
The ideal workflow: Use AI interviews at scale, review transcripts as they come in, and activate agent takeover for the 5–10 participants whose responses deserve deeper human exploration.
Multimedia in AI Interviews
The AI can request and receive multimedia responses during the interview:
Voice notes — Capturing emotional responses, storytelling, and participants who prefer speaking to typing.
Videos — Product demonstrations, environment documentation, and testimonials.
Images — Workspace photos, product usage, and screenshots of relevant content.
Location — Understanding where participants are when they engage in specific behaviours.
Configure media requirements in the interview settings to control when the AI requests specific media types. For example: "Request a voice note when the participant describes an emotional experience" or "Ask for a photo when the participant mentions their daily routine."
All media files are automatically transcribed (voice notes and videos) and available in the results alongside the text conversation.
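Media requirements like the examples above can be thought of as trigger → request rules. A simplified sketch, where plain keyword matching stands in for however the moderator actually detects a trigger:

```python
MEDIA_RULES = [
    # (trigger keywords, media type to request) -- illustrative values only
    ({"frustrating", "frustrated", "annoying"}, "voice_note"),
    ({"workspace", "desk", "office"}, "image"),
]

def media_request_for(response_text):
    """Return the media type to request for a response, if any rule fires."""
    words = set(response_text.lower().split())
    for keywords, media_type in MEDIA_RULES:
        if words & keywords:
            return media_type
    return None  # no rule fired; continue with a text question
```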
Multi-Language Support
AI interviews support the same multi-language functionality as surveys:
Configure your survey questions and interview fields in your primary language.
Auto-translate into target languages with manual editing.
Participants select their preferred language at the start.
The AI conducts the entire interview in the participant's chosen language.
All responses are translated back to your primary language in the results.
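The round trip described in the steps above can be sketched as follows, with `translate` as a placeholder for whatever translation backend is actually used:

```python
def run_turn(ai_question_en, participant_language, participant_reply, translate):
    """One interview turn with per-participant language handling.

    The AI's question is delivered in the participant's chosen language,
    and the reply is translated back to the researcher's primary language
    for the results view.
    """
    delivered = translate(ai_question_en, target=participant_language)
    reply_en = translate(participant_reply, target="en")
    return delivered, reply_en

# Stub translator so the sketch runs without any real translation service.
def fake_translate(text, target):
    return f"[{target}] {text}"

q, a = run_turn("Why did you switch banks?", "zu",
                "Imali ebizayo", fake_translate)
# q -> "[zu] Why did you switch banks?", a -> "[en] Imali ebizayo"
```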
The AI naturally adapts to the participant's language style — if a participant uses informal terms or slang in their responses, the AI mirrors that register in its follow-up questions.
AI Interview Design Best Practices
Survey Section Design
Keep the survey section to 8–12 questions — its purpose is to provide context for the AI, not to be a full quantitative study.
Include key classification and behavioural questions that the AI can reference during the interview.
Rating scales and single-select questions give the AI clear data points to probe on.
Avoid long open-text questions in the survey section — save qualitative depth for the AI interview.
Configuration Tips
Be specific in your topic areas. "Customer experience" is too broad. "Checkout experience, delivery tracking, returns process, customer support interactions" gives the AI clear territory to cover.
Include required questions sparingly. 2–3 mandatory questions are fine. More than 5 makes the conversation feel scripted.
Set the right probing intensity. For exploratory research, probe deeply. For validation studies, keep probing moderate.
Describe your audience. The more the AI knows about who it's talking to, the better it calibrates tone and complexity.
Specify what to avoid. If there are sensitive topics, competitor names to skip, or behaviours to discourage (like promising callbacks), state them explicitly.
Testing Protocol
Important: Test your AI interview configuration thoroughly before launch. Role-play different participant scenarios to ensure the AI handles each appropriately. Previous projects have required 2–3 rounds of prompt refinement.
AI Interview vs. Human Interview
Best combined: Use AI for scale, then hand selected participants to a human moderator when deeper follow-up is needed.
Known Limitations
AI response delay — 8–12 seconds per response. This is usually acceptable but can feel slow for fast texters.
No guaranteed question order — The AI chooses sequence based on conversation flow.
Context window — Very long interviews can reduce recall of earlier details.
Language nuance — Very local slang or dialect may still need manual review.
Sandbox Testing
Build and test your full AI interview in the sandbox before going live:
Test the survey section and the AI interview section end to end.
All configuration and content carries over when you launch with your dedicated number.
Sandbox data exports are limited to 10 rows.
Always use the test link when testing AI interviews. The live link preserves conversation history, so if you've already reached the maximum question count, the AI won't ask further questions on subsequent attempts.
Setup & Launch Timeline
Survey section configuration: 1–2 hours
Interview configuration: 30–60 minutes
Testing and refinement: 2–3 rounds over 1–2 days
Total time to launch: 2–4 days