AI Moderated Interviews

Overview

Yazi's AI Interview is a hybrid methodology that combines a structured survey section with an AI-moderated qualitative interview — all delivered within a single WhatsApp conversation. The AI interviewer adapts its follow-up questions in real time based on what the participant has already said, creating a conversational experience that sits between a traditional survey and a human-led depth interview.

The result is qualitative richness at quantitative scale — without needing a team of human moderators.


At a Glance

  • Best for: Understanding motivations, attitudes, experiences, and emotions

  • Typical sample size: 12–100 participants

  • Typical structure: 8–12 survey questions plus 5–15 AI follow-ups

  • Participant effort: 8–20 minutes

  • Best output: Structured data plus rich verbatims in one flow


Use AI Interviews when you need both measurement and depth in the same study.


How It Works

Participant Experience


Step 1: Participant starts in WhatsApp

The participant receives a WhatsApp invitation and taps the button to begin.


Step 2: Survey section runs first

The conversation starts with structured questions such as single select, multi-select, ratings, and media uploads.


Step 3: AI interview begins

The AI uses the participant's earlier answers to generate personalised follow-up questions in real time.


Step 4: Conversation adapts naturally

The AI probes, acknowledges what was said, and moves between topics based on the participant's responses.


Step 5: Study closes cleanly

The participant reaches the configured question limit and receives a custom closing message.

The transition from survey to interview is seamless. To the participant, it feels like a single conversation that starts structured and becomes more open and conversational.

Example Conversation

Survey question: "Have you switched banks in the last 5 years?" → Participant selects "Yes"

AI follow-up: "Since you switched banks in the last 5 years, what made you leave your old bank?"

Participant responds: "The fees were too high and the app kept crashing"

AI follow-up: "That sounds frustrating. When the app crashed, how did that affect your day-to-day banking? Did you find workarounds or just stop using it?"

The AI draws on the full context of the conversation — not just the most recent answer — to ask relevant, non-repetitive follow-up questions.


When to Use AI Interviews vs. Surveys

Use a Survey when you need quantitative measurement, tracking, or benchmarking. Surveys are built for scale (250+ participants), deliver coded and countable data, and take participants 3–8 minutes to complete with 22–28 structured questions.

Use an AI Interview when you need to explore motivations, attitudes, experiences, or emotions. AI Interviews are designed for 12–100 participants, combine 8–12 survey questions with 5–15 AI-generated follow-ups, and take 8–20 minutes. You get rich verbatims, stories, and nuanced explanations alongside your structured data.

Rule of thumb: If you need numbers, use a survey. If you need understanding, use an AI interview. If you need both, use an AI interview — the survey section gives you the numbers, and the AI section gives you the depth.

Surveys in Yazi

The Interview Builder

Two Sections

The AI interview builder has two distinct sections:

Survey section

Built using the same builder as a standard survey.

  • All normal question types are supported

  • Routing logic still applies

  • Randomisation still applies

  • Formatting tools still apply

Interview configuration

This controls how the AI behaves during the conversational part.

  • What to explore

  • How to probe

  • What tone to use

  • What to avoid

For the full survey builder, see Surveys in Yazi.

Interview Configuration Fields

The AI moderator is controlled through nine specific configuration fields, giving you granular control over the conversation:

Research Objective — The high-level outcome the research is trying to achieve. Example: "Understand why customers cancel their subscriptions within the first 3 months."

Target Audience — Who the participants are, providing context for tone and relevance. Example: "Working mothers aged 25–40 in urban South Africa who use meal delivery services."

Key Topic Areas — The themes the AI should cover, ensuring balanced question allocation across topics. Example: "Price sensitivity, delivery experience, menu variety, competitor usage."

Style & Tone — How the AI should communicate. Example: "Warm and conversational, not clinical. Use simple language."

Probing Intensity — How deeply the AI should follow up on responses. Example: "Probe deeply on emotional responses. Ask 'why' at least twice on key topics."

Media Requirements — When and how to request multimedia responses. Example: "Ask for a voice note when the participant describes a frustrating experience. Request a photo if they mention their workspace."

Safety Considerations — Topics to avoid or handle sensitively. Example: "Do not ask about specific medical diagnoses. If participant mentions mental health struggles, respond empathetically and move on."

Brand/Entity Insertion — Specific brands or products the AI should explore. Example: "Compare perceptions of Discovery Vitality vs Momentum Multiply."

Required Questions — Specific questions the AI must ask at some point during the interview. Example: "At some point ask: 'If you could change one thing about [product], what would it be?'"
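Taken together, the nine fields behave like a single configuration object. The sketch below models them as a Python dataclass purely for illustration; Yazi does not expose a public API, and the field names and example values here are assumptions drawn from the examples above, not real identifiers.

```python
from dataclasses import dataclass

@dataclass
class InterviewConfig:
    """Illustrative container for the nine interview configuration fields."""
    research_objective: str        # high-level outcome of the research
    target_audience: str           # who the participants are
    key_topic_areas: list          # themes the AI should balance across
    style_and_tone: str            # how the AI should communicate
    probing_intensity: str         # how deeply to follow up
    media_requirements: str        # when to request voice notes, photos, etc.
    safety_considerations: str     # topics to avoid or handle sensitively
    brand_entity_insertion: str    # specific brands or products to explore
    required_questions: list       # questions the AI must ask at some point

# Example instance using the sample values from this section
config = InterviewConfig(
    research_objective="Understand why customers cancel within the first 3 months.",
    target_audience="Working mothers aged 25-40 in urban South Africa.",
    key_topic_areas=["price sensitivity", "delivery experience", "menu variety"],
    style_and_tone="Warm and conversational, not clinical.",
    probing_intensity="Probe deeply on emotional responses.",
    media_requirements="Ask for a voice note on frustrating experiences.",
    safety_considerations="Do not ask about specific medical diagnoses.",
    brand_entity_insertion="Compare Discovery Vitality vs Momentum Multiply.",
    required_questions=["If you could change one thing, what would it be?"],
)
```

Thinking of the configuration this way makes the callout below concrete: each field is a separate, explicit input rather than one free-text prompt.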


Why nine fields instead of one?

A single broad prompt often causes the AI to over-focus on the first topic mentioned. Splitting the configuration into specific fields creates better topic balance and more reliable probing.


AI Behaviour & Controls

How the AI Decides What to Ask

The AI moderator reads three things before generating each follow-up question: all survey responses the participant has already provided, the full conversation history within the interview, and your nine configuration fields. It then generates the next most relevant question, avoiding repetition and ensuring coverage across your specified topic areas.
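Conceptually, each follow-up is generated from one assembled context built from those three inputs. The sketch below shows that assembly step only; it is a hypothetical illustration of the idea, not Yazi's actual implementation, and every structure and name in it is assumed.

```python
def build_moderator_context(survey_responses, conversation_history, config_fields):
    """Assemble the three inputs the moderator reads before each follow-up.

    All structures here are illustrative; Yazi's internal format is not public.
    """
    parts = []
    # 1. Structured survey answers give the AI hard data points to probe on.
    parts.append("Survey answers:")
    parts.extend(f"- {q}: {a}" for q, a in survey_responses.items())
    # 2. The full interview history is what prevents repetitive questions.
    parts.append("Conversation so far:")
    parts.extend(f"{speaker}: {text}" for speaker, text in conversation_history)
    # 3. The nine configuration fields steer topic coverage and tone.
    parts.append("Moderator instructions:")
    parts.extend(f"- {name}: {value}" for name, value in config_fields.items())
    return "\n".join(parts)

context = build_moderator_context(
    {"Switched banks in last 5 years?": "Yes"},
    [("AI", "What made you leave your old bank?"),
     ("Participant", "The fees were too high")],
    {"Style & Tone": "Warm and conversational"},
)
```

Because the whole history is included each time, the model can acknowledge earlier answers (the bank-switching example above) without re-asking them.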

Conversation Flow

The AI asks one question at a time and waits for the participant's response. It acknowledges what the participant said before asking the next question. It probes deeper when responses are vague or surface-level, moves on when a topic has been sufficiently explored, and naturally transitions between topic areas.

Response Timing

The AI takes approximately 8–12 seconds to process a response and generate the next question. This is generally perceived as natural conversational pacing within WhatsApp. For older or slower-typing participants, you can extend the response timing in the settings.

Question Limit

Set a maximum question count to control interview length. The recommended range is 8–12 AI-generated questions per interview. The AI will naturally wrap up the conversation as it approaches the limit, ensuring a clean ending rather than an abrupt cutoff.
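The wrap-up behaviour can be pictured as a simple three-state decision based on how many questions remain. This is a minimal sketch of that idea, assuming a wrap-up window of two questions; the window size and function are illustrative, not documented Yazi settings.

```python
def next_action(questions_asked, question_limit, wrap_up_window=2):
    """Decide whether to keep probing, start wrapping up, or close.

    wrap_up_window is an assumed tuning value, not a documented Yazi setting.
    """
    remaining = question_limit - questions_asked
    if remaining <= 0:
        return "close"      # send the custom closing message
    if remaining <= wrap_up_window:
        return "wrap_up"    # ask broader, concluding questions
    return "probe"          # continue normal topic exploration

# Mid-interview the AI keeps probing; near the limit it winds down.
print(next_action(5, 10))   # probe
print(next_action(9, 10))   # wrap_up
print(next_action(10, 10))  # close
```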

Handling Unexpected Participant Behaviour

  • If a participant goes off-topic, the AI acknowledges their response and gently steers back to relevant themes.

  • If a participant sends multiple short messages in quick succession, the AI waits for a pause before responding.

  • If a participant responds with very brief answers, the AI probes for more detail.

  • The AI will not reference random topics mentioned in passing unless they are relevant to your configured topic areas.
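The second behaviour above, waiting out a burst of rapid messages, is essentially a debounce. A minimal sketch of that pattern, assuming a pause threshold of a few seconds (the actual wait Yazi uses is not documented):

```python
def should_respond(message_timestamps, now, pause_seconds=4.0):
    """Return True once the participant has paused after a burst of messages.

    pause_seconds is an assumed threshold, not a documented Yazi value.
    """
    if not message_timestamps:
        return False  # nothing to respond to yet
    # Respond only when the most recent message is old enough to
    # suggest the participant has finished their thought.
    return (now - message_timestamps[-1]) >= pause_seconds

# Two messages arrive at t=10s and t=11s: at t=12s the AI keeps waiting,
# by t=16s the pause is long enough to reply.
print(should_respond([10.0, 11.0], 12.0))  # False
print(should_respond([10.0, 11.0], 16.0))  # True
```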

What the AI Won't Do

  • It will not make promises on your behalf (e.g., "Someone will call you back").

  • It will not provide advice, diagnoses, or recommendations to participants.

  • It will not share information about other participants.

  • It will not deviate from your configured research scope.


Agent Takeover

At any point during an AI interview, a human moderator can take over the conversation.

How It Works

  1. Open the conversation in the Yazi dashboard.

  2. Activate agent takeover.

  3. The AI pauses and the moderator messages the participant directly.

  4. The participant sees no difference — messages continue in the same chat.

  5. When the moderator is done, they hand back to the AI, which continues from where it left off.

When to Use Agent Takeover

  • A participant shares something particularly interesting that deserves deeper human follow-up.

  • The AI is not probing in the direction you want.

  • You want to ask a very specific unscripted question.

  • The participant is confused and needs human clarification.

  • Your editorial or research team wants to conduct a live WhatsApp interview with selected participants.

The full transcript — both AI and human moderator messages — is captured in the results.

The ideal workflow: Use AI interviews at scale, review transcripts as they come in, and activate agent takeover for the 5–10 participants whose responses deserve deeper human exploration.

Agent Takeover (Human Moderation)

Multimedia in AI Interviews

The AI can request and receive multimedia responses during the interview:

  • Voice notes — Capturing emotional responses, storytelling, and participants who prefer speaking to typing.

  • Videos — Product demonstrations, environment documentation, and testimonials.

  • Images — Workspace photos, product usage, and screenshots of relevant content.

  • Location — Understanding where participants are when they engage in specific behaviours.

Configure media requirements in the interview settings to control when the AI requests specific media types. For example: "Request a voice note when the participant describes an emotional experience" or "Ask for a photo when the participant mentions their daily routine."
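One way to picture how a media requirement like those examples might be evaluated is as a set of trigger-word rules matched against each answer. This is a hypothetical sketch of the concept only; the rule format, keywords, and function are assumptions, not how Yazi actually implements media requests.

```python
MEDIA_RULES = [
    # (trigger words in the participant's answer, media type to request)
    ({"frustrating", "annoyed", "angry"}, "voice_note"),
    ({"routine", "workspace", "desk"}, "photo"),
]

def media_request_for(answer):
    """Return the media type to request, if any rule matches the answer."""
    words = set(answer.lower().split())
    for keywords, media_type in MEDIA_RULES:
        if words & keywords:
            return media_type
    return None  # no media request for this answer

print(media_request_for("That was really frustrating"))  # voice_note
print(media_request_for("Here is my desk"))              # photo
print(media_request_for("I love the menu"))              # None
```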

All media files are automatically transcribed (voice notes and videos) and available in the results alongside the text conversation.


Multi-Language Support

AI interviews support the same multi-language functionality as surveys:

  1. Configure your survey questions and interview fields in your primary language.

  2. Auto-translate into target languages with manual editing.

  3. Participants select their preferred language at the start.

  4. The AI conducts the entire interview in the participant's chosen language.

  5. All responses are translated back to your primary language in the results.

The AI naturally adapts to the participant's language style — if a participant uses informal terms or slang in their responses, the AI mirrors that register in its follow-up questions.

Language Translation

AI Interview Design Best Practices

Survey Section Design

  • Keep the survey section to 8–12 questions — its purpose is to provide context for the AI, not to be a full quantitative study.

  • Include key classification and behavioural questions that the AI can reference during the interview.

  • Rating scales and single-select questions give the AI clear data points to probe on.

  • Avoid long open-text questions in the survey section — save qualitative depth for the AI interview.

Configuration Tips

  • Be specific in your topic areas. "Customer experience" is too broad. "Checkout experience, delivery tracking, returns process, customer support interactions" gives the AI clear territory to cover.

  • Include required questions sparingly. 2–3 mandatory questions are fine. More than 5 makes the conversation feel scripted.

  • Set the right probing intensity. For exploratory research, probe deeply. For validation studies, keep probing moderate.

  • Describe your audience. The more the AI knows about who it's talking to, the better it calibrates tone and complexity.

  • Specify what to avoid. If there are sensitive topics, competitor names to skip, or behaviours to discourage (like promising callbacks), state them explicitly.

Testing Protocol

Step 1: Run a cooperative test

Complete the interview yourself as an engaged participant.

Step 2: Run a low-effort test

Answer with very short or vague responses to see how the AI probes.

Step 3: Run an off-topic test

Go deliberately off-topic and check whether the AI brings the conversation back on track.

Step 4: Review the transcript

Check for repetition, missed topics, weak probing, and awkward transitions.

Step 5: Refine and retest

Update the configuration and repeat the test cycle. Plan for at least 2–3 rounds.

Important: Test your AI interview configuration thoroughly before launch. Role-play different participant scenarios to ensure the AI handles each appropriately. Previous projects have required 2–3 rounds of prompt refinement.


AI Interview vs. Human Interview

AI interview strengths

  • Runs many conversations at once

  • Delivers consistent topic coverage

  • Available 24/7

  • Lower cost at scale

  • Strong for structured exploration

Human interview strengths

  • Better at handling tangents

  • Better with emotional nuance

  • Better for live clarification

  • Better for unexpected depth

  • Best for small high-value samples

Best combined: Use AI for scale, then hand selected participants to a human moderator when deeper follow-up is needed.


Known Limitations


Sandbox Testing

Build and test your full AI interview in the sandbox before going live:

  • Test the survey section and the AI interview section end to end.

  • All configuration and content carries over when you launch with your dedicated number.

  • Sandbox data exports are limited to 10 rows.

Always use the test link when testing AI interviews. The AI remembers previous conversations on the live link — if you've already reached the maximum question count, it won't ask further questions on subsequent attempts.


Always use the test link for repeat testing. The live link preserves conversation history.


Setup & Launch Timeline

  • Survey section configuration: 1–2 hours

  • Interview configuration: 30–60 minutes

  • Testing and refinement: 2–3 rounds over 1–2 days

  • Total time to launch: 2–4 days
