The Scenario

You need qualitative positioning feedback from five distinct audience segments — but you don’t have three weeks for recruitment, incentive budgets, or a moderator. With Mavera’s Speak surface, you can hold live voice conversations with AI personas, probe their reactions in real time, and extract structured insights. Five personas, five conversations, one afternoon.
Mavera-only workflow. No recruitment platforms, no incentive payments, no scheduling tools. Just Mavera’s Personas, Speak, and Chat surfaces.

When to Use This

  • Early-stage positioning validation with 2–3 hypotheses that need gut-checks.
  • Founder-led discovery — practice your pitch with synthetic personas before real prospects.
  • Message hierarchy testing — which proof points resonate first with each segment?
  • Pre-launch readiness — talk to 5 ICP segments and capture objections.

Architecture

Mavera Surface             | Role in Pipeline
---------------------------|------------------------------------------------
Personas (POST /personas)  | Create 5 ICP personas with distinct priorities
Speak (POST /speak)        | Open voice conversation sessions
Chat (OpenAI-compatible)   | Extract structured insights from transcripts

What You Need

Requirement                  | Details
-----------------------------|------------------------------------------------------
Mavera API key               | Starts with mvra_live_. Get one at Developer Settings.
Python 3.8+ or Node.js 18+   | requests/openai for Python; native fetch for Node.
Credits                      | ~115–265 total. See Credits Estimate.

MAVERA_API_KEY=mvra_live_your_key_here
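Before spending any credits, a quick sanity check on the key avoids confusing 401s later. This is a small sketch; the only assumption beyond the table above is that a malformed key should fail fast:

```python
import os

def check_api_key():
    """Fail fast if the Mavera key is missing or malformed."""
    key = os.environ.get("MAVERA_API_KEY", "")
    if not key:
        raise RuntimeError("MAVERA_API_KEY is not set")
    if not key.startswith("mvra_live_"):
        raise RuntimeError("MAVERA_API_KEY should start with 'mvra_live_'")
    return key
```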

Step 1 — Create 5 ICP Personas

import os, time, json, requests

API_KEY = os.environ["MAVERA_API_KEY"]
BASE = "https://app.mavera.io/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

PERSONAS = [
    {"name": "Elena — SaaS Marketing Director", "role": "Marketing Director, 200-person B2B SaaS",
     "description": "Manages team of 8. Spending $40K/quarter on research agencies. Frustrated by slow turnaround. Needs HubSpot integration and 90-day ROI.",
     "traits": ["ROI-focused", "time-pressed", "integration-demanding", "data-driven"]},
    {"name": "Raj — E-commerce Founder", "role": "Founder & CEO, DTC brand ($5M ARR)",
     "description": "Bootstrapped, writes ad copy personally. Audience is millennial women 25–34. Relies on gut instinct and A/B testing. Budget-conscious but invests for ROAS.",
     "traits": ["scrappy", "gut-instinct-driven", "ROAS-obsessed", "hands-on"]},
    {"name": "Dr. Amara — Healthcare Comms Lead", "role": "Director of Communications, hospital network",
     "description": "Navigates HIPAA compliance and health literacy. Content for diverse populations with varying reading levels. Multiple compliance review cycles.",
     "traits": ["compliance-aware", "patient-first", "health-literacy-focused", "risk-averse"]},
    {"name": "Marcus — Agency Creative Director", "role": "Creative Director, mid-size ad agency",
     "description": "Oversees 12 clients in CPG, fintech, retail. Team of 15. Constantly pitching new business. AI-curious but skeptical about quality vs human creativity.",
     "traits": ["quality-obsessed", "client-facing", "AI-curious-but-skeptical", "pitch-driven"]},
    {"name": "Keiko — Enterprise Product Manager", "role": "Senior PM, Fortune 500 tech company",
     "description": "Platform with 10K+ enterprise customers. Validates feature positioning before engineering invests. Currently uses slow customer advisory boards and lagging NPS surveys.",
     "traits": ["analytical", "consensus-builder", "evidence-driven", "launch-velocity-focused"]},
]

def create_persona(p):
    resp = requests.post(f"{BASE}/personas", headers=HEADERS, json={
        "name": p["name"], "role": p["role"], "description": p["description"], "traits": p["traits"]})
    resp.raise_for_status()
    data = resp.json()
    print(f"  Created: {data['name']} → {data['id']}")
    return data["id"]

persona_ids = {}
for p in PERSONAS:
    persona_ids[p["name"].split(" — ")[0]] = create_persona(p)
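Persona creation is a one-time cost, so it's worth guarding against transient network failures. Here is a generic retry helper (not Mavera-specific; the attempt and backoff values are arbitrary) that wraps the loop above:

```python
import time

def with_retries(fn, attempts=3, backoff=1.0):
    """Call fn(); on exception, retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries, surface the error
            time.sleep(backoff * (2 ** attempt))

# Usage with the loop above (lambda default binds p per iteration):
# for p in PERSONAS:
#     persona_ids[p["name"].split(" — ")[0]] = with_retries(lambda p=p: create_persona(p))
```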

Step 2 — Define Interview Script

Seven phases from opening reaction through pricing to a closing suggestion. Consistent across all 5 interviews.

POSITIONING = """
Mavera is an AI-powered audience research platform that lets marketing teams
test messaging, positioning, and creative with synthetic personas in minutes
instead of weeks. No recruitment, no scheduling, no incentive budgets.
"""

INTERVIEW_SCRIPT = [
    {"phase": "Opening", "prompt": f"React to this positioning:\n\n{POSITIONING}\n\nGut reaction?"},
    {"phase": "Comprehension", "prompt": "In your own words, what does this product do? Who is it for?"},
    {"phase": "Relevance", "prompt": "How relevant is this to your daily work? Give a specific recent example."},
    {"phase": "Differentiation", "prompt": "What alternatives do you use today? How does this compare?"},
    {"phase": "Objections", "prompt": "If someone pitched this to you, what's your first concern?"},
    {"phase": "Pricing", "prompt": "What would you expect to pay? What feels fair vs too expensive?"},
    {"phase": "Closing", "prompt": "One thing you'd change about how this product is described?"},
]

Step 3 — Run Speak Sessions

Start a Speak session for each persona and run through the interview script turn by turn.

def run_interview(name, pid):
    print(f"\n{'─' * 50}\nINTERVIEW: {name}\n{'─' * 50}")

    session = requests.post(f"{BASE}/speak", headers=HEADERS, json={
        "persona_id": pid,
        "context": "You're in a product positioning interview. Answer honestly from your professional perspective.",
    }).json()

    transcript = []
    for step in INTERVIEW_SCRIPT:
        response = requests.post(f"{BASE}/speak/{session['id']}/messages",
                                 headers=HEADERS, json={"message": step["prompt"]}).json()
        answer = response.get("transcript", response.get("content", ""))
        print(f"  [{step['phase']}] {answer[:100]}...")
        transcript.append({"phase": step["phase"], "question": step["prompt"],
                           "answer": answer, "persona": name})
    return transcript

all_transcripts = {}
for name, pid in persona_ids.items():
    all_transcripts[name] = run_interview(name, pid)

Speak sessions are sequential — each turn waits for the response. Budget ~2–3 minutes per interview, ~10–15 minutes for all 5.
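If the API permits concurrent Speak sessions (an assumption — check your plan's rate limits first), the five interviews are independent and can run in parallel with a thread pool, cutting the wall-clock time roughly to that of a single interview:

```python
from concurrent.futures import ThreadPoolExecutor

def run_all_interviews(persona_ids, interview_fn, max_workers=5):
    """Run one interview per persona concurrently; returns {name: transcript}."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(interview_fn, name, pid)
                   for name, pid in persona_ids.items()}
        # .result() re-raises any exception from the worker thread
        return {name: fut.result() for name, fut in futures.items()}

# all_transcripts = run_all_interviews(persona_ids, run_interview)
```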

Step 4 — Extract Structured Insights

Feed each transcript into Chat with a structured output schema.

from openai import OpenAI

mavera = OpenAI(api_key=API_KEY, base_url="https://app.mavera.io/api/v1")

INSIGHT_SCHEMA = {"type": "json_schema", "json_schema": {"name": "interview_insights", "strict": True,
    "schema": {"type": "object", "properties": {
        "positioning_clarity": {"type": "number"}, "relevance_score": {"type": "number"},
        "primary_objection": {"type": "string"},
        "price_expectation_low": {"type": "number"}, "price_expectation_high": {"type": "number"},
        "current_alternatives": {"type": "array", "items": {"type": "string"}},
        "key_quote": {"type": "string"}, "suggested_change": {"type": "string"},
        "would_buy": {"type": "string", "enum": ["yes", "maybe", "no"]},
        "top_3_insights": {"type": "array", "items": {"type": "string"}},
    }, "required": ["positioning_clarity", "relevance_score", "primary_objection",
        "price_expectation_low", "price_expectation_high", "current_alternatives",
        "key_quote", "suggested_change", "would_buy", "top_3_insights"]}}}

def extract_insights(name, transcript):
    formatted = "\n\n".join(f"[{t['phase']}]\nQ: {t['question']}\nA: {t['answer']}" for t in transcript)
    resp = mavera.responses.create(model="mavera-1", input=[
        {"role": "system", "content": "Extract structured insights from this positioning interview. Use exact quotes."},
        {"role": "user", "content": f"Interview with {name}:\n\n{formatted}"},
    ], extra_body={"response_format": INSIGHT_SCHEMA})
    return json.loads(resp.output[0].content[0].text)

all_insights = {}
for name, transcript in all_transcripts.items():
    ins = extract_insights(name, transcript)
    all_insights[name] = ins
    print(f"  {name}: clarity={ins['positioning_clarity']}/10, "
          f"relevance={ins['relevance_score']}/10, would_buy={ins['would_buy']}")
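Even with strict structured output, it's cheap to validate each record before reporting on it. This is a defensive sketch; the required keys and the would_buy enum mirror INSIGHT_SCHEMA above:

```python
REQUIRED_KEYS = {
    "positioning_clarity", "relevance_score", "primary_objection",
    "price_expectation_low", "price_expectation_high", "current_alternatives",
    "key_quote", "suggested_change", "would_buy", "top_3_insights",
}

def validate_insights(ins):
    """Raise ValueError if an insight record is missing keys or out of range."""
    missing = REQUIRED_KEYS - ins.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if ins["would_buy"] not in ("yes", "maybe", "no"):
        raise ValueError(f"bad would_buy: {ins['would_buy']!r}")
    if not 0 <= ins["positioning_clarity"] <= 10:
        raise ValueError("positioning_clarity out of 0-10 range")
    return ins
```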

Step 5 — Marathon Summary Report

def print_marathon_report(insights):
    print("\n" + "=" * 70)
    print("PERSONA INTERVIEW MARATHON — SUMMARY")
    print("=" * 70)
    print(f"\n{'Persona':<20} {'Clarity':>8} {'Relevance':>10} {'Buy':>6} {'Price Range':>15}")
    print("─" * 60)
    for name, d in insights.items():
        pr = f"${d['price_expectation_low']}–${d['price_expectation_high']}"
        print(f"{name:<20} {d['positioning_clarity']:>7}/10 {d['relevance_score']:>9}/10 "
              f"{d['would_buy']:>6} {pr:>15}")

    print(f"\n{'─' * 60}\nOBJECTIONS\n{'─' * 60}")
    for name, d in insights.items():
        print(f"  {name}: {d['primary_objection']}")

    print(f"\n{'─' * 60}\nKEY QUOTES\n{'─' * 60}")
    for name, d in insights.items():
        print(f"  {name}: \"{d['key_quote']}\"")

    # Alternatives frequency
    alts = {}
    for d in insights.values():
        for a in d.get("current_alternatives", []):
            alts[a] = alts.get(a, 0) + 1
    print(f"\n{'─' * 60}\nALTERNATIVES MENTIONED\n{'─' * 60}")
    for a, c in sorted(alts.items(), key=lambda x: -x[1]):
        print(f"  {a}: {c} persona(s)")

print_marathon_report(all_insights)
with open("interview_marathon.json", "w") as f:
    json.dump(all_insights, f, indent=2)
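A few aggregate numbers help the report travel beyond the raw per-persona table. This helper (the function name is ours; the field names follow the schema above) computes mean scores and buy-intent counts:

```python
def aggregate(insights):
    """Mean clarity/relevance and buy-intent counts across all interviews."""
    n = len(insights)
    clarity = sum(d["positioning_clarity"] for d in insights.values()) / n
    relevance = sum(d["relevance_score"] for d in insights.values()) / n
    buys = {}
    for d in insights.values():
        buys[d["would_buy"]] = buys.get(d["would_buy"], 0) + 1
    return {"avg_clarity": round(clarity, 1),
            "avg_relevance": round(relevance, 1),
            "buy_counts": buys}

# print(aggregate(all_insights))
```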

Example Output

PERSONA INTERVIEW MARATHON — SUMMARY
══════════════════════════════════════════════════════════════════════════
Persona              Clarity  Relevance    Buy     Price Range
────────────────────────────────────────────────────────────────
Elena                   8/10      9/10    yes     $300–$800/mo
Raj                     7/10      6/10  maybe     $100–$300/mo
Dr. Amara               6/10      5/10  maybe     $200–$500/mo
Marcus                  9/10      8/10    yes     $500–$1500/mo
Keiko                   8/10      7/10  maybe     $400–$1000/mo

KEY QUOTES
────────────────────────────────────────────────────────────────
  Elena: "If this replaces one agency cycle per quarter, it pays for itself."
  Raj: "Cool tech, but I need to see it beat my gut on actual ROAS."
  Dr. Amara: "Can I trust synthetic personas for health messaging?"
  Marcus: "I'd use this for pitch prep in a heartbeat."
  Keiko: "Show me a case study where this predicted real user behavior."

Variations

Generate a follow-up question based on the persona’s answer:
def adaptive_probe(session_id, previous_answer, phase):
    probe = mavera.responses.create(model="mavera-1", input=[
        {"role": "system", "content": "Generate one follow-up question probing deeper."},
        {"role": "user", "content": f"Phase: {phase}\nAnswer: {previous_answer}"},
    ])
    return requests.post(f"{BASE}/speak/{session_id}/messages",
        headers=HEADERS, json={"message": probe.output[0].content[0].text}).json()
Test two positioning statements against the same personas (POSITIONING_A and POSITIONING_B are alternative drafts you define):

for i, pos in enumerate([POSITIONING_A, POSITIONING_B]):
    INTERVIEW_SCRIPT[0]["prompt"] = f"React to this:\n\n{pos}\n\nGut reaction?"
    for name, pid in persona_ids.items():
        run_interview(f"{name}_v{i}", pid)
Quantify the objections you heard by feeding them into a Focus Group run:

objections = [ins["primary_objection"] for ins in all_insights.values()]
fg_questions = [{"question": f"How concerning is: '{obj}'", "type": "NPS", "order": i+1}
                for i, obj in enumerate(objections)]
fg = requests.post(f"{BASE}/focus-groups", headers=HEADERS, json={
    "name": "Objection Validation", "persona_ids": list(persona_ids.values()),
    "sample_size": 25, "questions": fg_questions}).json()
Export the full transcripts to Markdown for sharing:

with open("interviews.md", "w") as f:
    for name, transcript in all_transcripts.items():
        f.write(f"# {name}\n\n")
        for t in transcript:
            f.write(f"## {t['phase']}\n**Q:** {t['question']}\n**A:** {t['answer']}\n\n---\n\n")

Credits Estimate

Operation                          | Typical Cost | Notes
-----------------------------------|--------------|----------------------------------
Persona creation (×5)              | 0–25         | One-time; reuse across runs
Speak sessions (×5, 7 turns each)  | 100–200      | Primary cost driver
Chat insight extraction (×5)       | 15–40        | Structured output per transcript
Total                              | ~115–265     |
Keep interviews to 5–7 questions for the best depth/cost balance. Add follow-up probes selectively rather than for every question.
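Before scaling beyond five personas, a rough estimator helps budget the run. The per-unit rates below are purely illustrative midpoints derived from the ranges in the table above, not published pricing:

```python
def estimate_credits(num_personas, turns_per_interview,
                     persona_cost=2.5, turn_cost=4.0, extraction_cost=5.5):
    """Rough credit estimate; per-unit costs are illustrative midpoints."""
    creation = num_personas * persona_cost
    speak = num_personas * turns_per_interview * turn_cost
    extraction = num_personas * extraction_cost
    return round(creation + speak + extraction)
```

For the default 5-persona, 7-turn marathon this lands around 180 credits, inside the ~115–265 range above.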

What’s Next

  • Industry Panel Simulation — switch from interviews to a structured Focus Group with 10 buying-committee personas.
  • Message Testing Matrix — quantify interview findings with a systematic message × persona grid.
  • Persona Debate — pit opposing buyer types against each other for pricing insights.
  • Generational Content Testing — test across age demographics instead of role-based personas.
  • Persona Selection Guide — choose the right persona types for your research goal.
  • Credits & Budget — pre-flight checks and usage tracking.