Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Mavera Surfaces Used

| Surface | Role |
| --- | --- |
| Personas (POST /personas, GET /personas) | Create 3 persona segments: customers, prospects, and churned users |
| Focus Groups (POST /focus-groups) | Run NPS + open-ended questions per segment for quantitative and qualitative data |
| Chat + response_format | Compare segment results and synthesize a 360-degree brand perception report |
A real brand perception audit interviews existing customers, potential customers, and people who left. This playbook simulates all three segments with Mavera personas, producing quantitative NPS scores and qualitative perception data — side-by-side — in a single session.

What Value Does Mavera Add?

| Value | How |
| --- | --- |
| Insurance | Catch brand perception gaps before they become retention problems. Compare how customers, prospects, and churned users see you. |
| Opening new doors | Run perception audits quarterly instead of annually. Track perception trends over time with consistent methodology. |
| Saving time | A traditional 3-segment perception study takes 4-6 weeks and $20K+. This runs in under an hour. |

When to Use This

  • You suspect a gap between how you see your brand and how the market sees it.
  • You’re losing deals or customers and want to understand whether brand perception is a factor.
  • You’re preparing for a rebrand and need a baseline measurement.
  • You want to compare perception across segments (customers love you but prospects don’t know you).
  • You’re presenting brand health metrics to leadership and need structured data.

What You Need

| Requirement | Details |
| --- | --- |
| Mavera API key | Starts with mvra_live_. Get one at Developer Settings. |
| Workspace ID | From your dashboard URL (ws_...). |
| Brand context | Your brand name, category, value proposition, and key competitors. |
| Segment definitions | Characteristics of your customers, prospects, and churned users. |
| Credits | ~300–700 total. See Credits Estimate. |
| Python 3.9+ or Node.js 18+ | requests / openai for Python (3.9+ for the built-in generic type hints used below); native fetch for Node. |
MAVERA_API_KEY=mvra_live_your_key_here
MAVERA_WORKSPACE_ID=ws_your_workspace_id
BRAND_NAME=Acme
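Before spending credits, it can help to sanity-check the configuration against the documented formats. The mvra_live_ and ws_ prefixes come from the table above; the check_config helper itself is an illustrative sketch, not part of the Mavera SDK:

```python
import os

def check_config() -> list[str]:
    """Return a list of configuration problems (empty list means ready to run)."""
    problems = []
    key = os.environ.get("MAVERA_API_KEY", "")
    workspace = os.environ.get("MAVERA_WORKSPACE_ID", "")
    if not key.startswith("mvra_live_"):
        problems.append("MAVERA_API_KEY should start with mvra_live_")
    if not workspace.startswith("ws_"):
        problems.append("MAVERA_WORKSPACE_ID should start with ws_")
    if not os.environ.get("BRAND_NAME"):
        problems.append("BRAND_NAME is unset; the script will fall back to 'Acme'")
    return problems
```

Run it once at startup and abort early if it returns anything, rather than failing mid-audit after personas have already been created.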

The Audit Framework

Three focus groups run in parallel — one per segment. Each uses the same questions for comparability.

Question Battery

Every segment answers the same 6 questions:
| # | Question | Type | What It Measures |
| --- | --- | --- | --- |
| 1 | "How likely are you to recommend {brand} to a colleague?" | NPS | Overall brand advocacy |
| 2 | "What 3 words come to mind when you think of {brand}?" | Open-Ended | Brand associations |
| 3 | "Rate {brand} on: Untrustworthy ←→ Trustworthy" | Semantic Differential | Trust perception |
| 4 | "Rate {brand} on: Outdated ←→ Innovative" | Semantic Differential | Innovation perception |
| 5 | "How well does {brand} deliver on its promise of {value prop}?" | Likert (1-10) | Promise-delivery gap |
| 6 | "What is the biggest risk of choosing {brand} over alternatives?" | Open-Ended | Perceived weaknesses |
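Question 1 is scored the standard NPS way: respondents answering 9-10 are promoters, 7-8 are passives, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. The focus-group results return nps_score precomputed, so this helper is purely illustrative:

```python
def nps_from_scores(scores: list[int]) -> int:
    """Standard NPS: % promoters (9-10) minus % detractors (0-6), rounded."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))
```

For example, nps_from_scores([10, 10, 9, 7]) is 75, since three of four respondents are promoters and none are detractors. The score ranges from -100 (all detractors) to +100 (all promoters).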

The Flow

1. Define brand context. Set your brand name, category, value proposition, and key competitors. This context is included in every focus group.
2. Create 3 persona segments. Build 2-3 personas per segment: satisfied customers, evaluating prospects, and recently churned users. Each persona has unique motivations and context.
3. Run 3 focus groups in parallel. Launch one focus group per segment. All use the same 6-question battery for cross-segment comparability.
4. Poll for completion. Monitor all three groups. They run independently and may complete at different times.
5. Compare across segments. Compare NPS, semantic differentials, and open-ended themes across all three segments.
6. Generate the 360-degree report. Use Chat with structured output to synthesize a comprehensive brand perception report with gap analysis.

Code: Full Brand Perception Audit

Setup and Configuration

import os
import json
import time
import requests
from openai import OpenAI

MAVERA_API_KEY = os.environ["MAVERA_API_KEY"]
WORKSPACE_ID = os.environ["MAVERA_WORKSPACE_ID"]
BRAND_NAME = os.environ.get("BRAND_NAME", "Acme")
BASE = "https://app.mavera.io/api/v1"
HEADERS = {
    "Authorization": f"Bearer {MAVERA_API_KEY}",
    "Content-Type": "application/json",
}
mavera = OpenAI(api_key=MAVERA_API_KEY, base_url=BASE)

BRAND_CONTEXT = {
    "name": BRAND_NAME,
    "category": "AI-powered market research platform",
    "value_prop": "replacing guesswork with persona-validated audience insights",
    "competitors": ["Pollfish", "UserTesting", "Wynter", "SurveyMonkey"],
}

SEGMENTS = {
    "customers": {
        "label": "Current Customers",
        "personas": [
            {
                "name": f"{BRAND_NAME} Power User — Growth Marketer",
                "description": (
                    f"Active {BRAND_NAME} user for 6+ months. Uses focus groups and chat weekly. "
                    "Growth marketer at a Series B startup. Loves the speed but occasionally "
                    "questions depth compared to real research. NPS likely 7-9."
                ),
            },
            {
                "name": f"{BRAND_NAME} Casual User — Content Manager",
                "description": (
                    f"Uses {BRAND_NAME} 2-3 times a month for content scoring. "
                    "Content manager at a mid-market company. Finds it useful but hasn't "
                    "explored beyond basic chat. Might not renew if price increases. NPS likely 6-7."
                ),
            },
            {
                "name": f"{BRAND_NAME} Champion — VP Marketing",
                "description": (
                    f"Internally champions {BRAND_NAME}. VP Marketing who got the team adopted. "
                    "Uses it for positioning workshops and campaign validation. "
                    "Would recommend to peers. NPS likely 9-10."
                ),
            },
        ],
    },
    "prospects": {
        "label": "Prospects (Evaluating)",
        "personas": [
            {
                "name": f"{BRAND_NAME} Prospect — Skeptical PMM",
                "description": (
                    f"Aware of {BRAND_NAME} but hasn't tried it. Product marketer at an enterprise "
                    "company. Skeptical about synthetic audiences replacing real research. "
                    "Currently uses agencies and UserTesting. Needs proof of accuracy."
                ),
            },
            {
                "name": f"{BRAND_NAME} Prospect — Budget-Conscious Founder",
                "description": (
                    f"Heard about {BRAND_NAME} from a peer. Founder at a seed-stage startup. "
                    "Interested in the concept but worried about cost and whether AI research "
                    "is credible enough for investor presentations."
                ),
            },
        ],
    },
    "churned": {
        "label": "Churned Users",
        "personas": [
            {
                "name": f"Former {BRAND_NAME} User — Price Churned",
                "description": (
                    f"Used {BRAND_NAME} for 3 months, then cancelled citing cost. "
                    "Marketing coordinator at a small company. Found the tool useful but "
                    "couldn't justify the monthly cost for occasional use. Went back to "
                    "Google Forms and informal feedback."
                ),
            },
            {
                "name": f"Former {BRAND_NAME} User — Depth Churned",
                "description": (
                    f"Used {BRAND_NAME} for 6 months at an enterprise company. Cancelled because "
                    "leadership didn't trust synthetic research for high-stakes decisions. "
                    "Felt the personas lacked nuance compared to real interviews. "
                    "Switched to a traditional research agency."
                ),
            },
        ],
    },
}

Stage 1 — Create Segment Personas

def create_segment_personas() -> dict[str, list[str]]:
    """Create personas for all 3 segments. Returns {segment_key: [persona_ids]}."""
    segment_ids = {}

    for segment_key, segment in SEGMENTS.items():
        ids = []
        for persona in segment["personas"]:
            resp = requests.post(
                f"{BASE}/personas",
                headers=HEADERS,
                json={
                    "name": persona["name"],
                    "description": persona["description"],
                    "workspace_id": WORKSPACE_ID,
                },
            ).json()

            if "error" in resp:
                raise Exception(f"Failed: {persona['name']}: {resp['error']['message']}")

            ids.append(resp["id"])
            print(f"✓ [{segment['label']}] {persona['name']} ({resp['id']})")

        segment_ids[segment_key] = ids

    total = sum(len(v) for v in segment_ids.values())
    print(f"\nCreated {total} personas across {len(segment_ids)} segments")
    return segment_ids

Stage 2 — Run 3 Focus Groups

Each segment gets its own focus group with the same 6-question battery.

def build_question_battery() -> list[dict]:
    """Build the standard 6-question perception battery."""
    return [
        {
            "question": (
                f"How likely are you to recommend {BRAND_NAME} to a colleague or peer? "
                f"(0 = not at all likely, 10 = extremely likely)"
            ),
            "type": "NPS",
            "order": 1,
        },
        {
            "question": (
                f"What 3 words or phrases come to mind when you think of {BRAND_NAME}? "
                f"Explain why each word applies."
            ),
            "type": "OPEN_ENDED",
            "order": 2,
        },
        {
            "question": f"Rate {BRAND_NAME} on trustworthiness:",
            "type": "SEMANTIC_DIFFERENTIAL",
            "left_anchor": "Untrustworthy / unreliable",
            "right_anchor": "Trustworthy / dependable",
            "scale": 7,
            "order": 3,
        },
        {
            "question": f"Rate {BRAND_NAME} on innovation:",
            "type": "SEMANTIC_DIFFERENTIAL",
            "left_anchor": "Outdated / behind the curve",
            "right_anchor": "Innovative / cutting-edge",
            "scale": 7,
            "order": 4,
        },
        {
            "question": (
                f"How well does {BRAND_NAME} deliver on its promise of "
                f"'{BRAND_CONTEXT['value_prop']}'? Rate 1-10."
            ),
            "type": "LIKERT",
            "scale": 10,
            "order": 5,
        },
        {
            "question": (
                f"What is the biggest risk of choosing {BRAND_NAME} over alternatives "
                f"like {', '.join(BRAND_CONTEXT['competitors'][:3])}? "
                f"What concerns would make you hesitate?"
            ),
            "type": "OPEN_ENDED",
            "order": 6,
        },
    ]


def run_segment_focus_groups(segment_ids: dict) -> dict:
    """Launch one focus group per segment. Returns {segment_key: fg_response}."""
    questions = build_question_battery()
    focus_groups = {}

    for segment_key, persona_ids in segment_ids.items():
        label = SEGMENTS[segment_key]["label"]

        payload = {
            "name": f"Brand Perception Audit — {label}",
            "sample_size": 25,
            "persona_ids": persona_ids,
            "workspace_id": WORKSPACE_ID,
            "questions": questions,
        }

        resp = requests.post(
            f"{BASE}/focus-groups",
            headers=HEADERS,
            json=payload,
        ).json()

        if "error" in resp:
            raise Exception(f"Failed [{label}]: {resp['error']['message']}")

        focus_groups[segment_key] = resp
        print(f"✓ Focus group launched: {label} ({resp['id']})")

    return focus_groups


def poll_all_focus_groups(focus_groups: dict, timeout_min: int = 15) -> dict:
    """Poll all focus groups until they complete."""
    results = {}
    pending = dict(focus_groups)

    for attempt in range(timeout_min * 6):
        for segment_key, fg in list(pending.items()):
            resp = requests.get(
                f"{BASE}/focus-groups/{fg['id']}",
                headers=HEADERS,
            ).json()

            if "error" in resp:
                raise Exception(resp["error"]["message"])

            if resp.get("status") == "COMPLETED":
                results[segment_key] = resp
                del pending[segment_key]
                label = SEGMENTS[segment_key]["label"]
                print(f"✓ {label} completed")

            elif resp.get("status") == "FAILED":
                raise Exception(f"Focus group {fg['id']} failed")

        if not pending:
            print(f"\nAll {len(results)} focus groups completed")
            return results

        time.sleep(10)

    raise TimeoutError(f"{len(pending)} focus groups did not complete")

Stage 3 — Cross-Segment Comparison

def compare_segments(all_results: dict) -> dict:
    """Extract and compare key metrics across segments."""
    comparison = {}

    for segment_key, fg_results in all_results.items():
        label = SEGMENTS[segment_key]["label"]
        segment_data = {
            "label": label,
            "nps": None,
            "trust": None,
            "innovation": None,
            "promise_delivery": None,
            "word_associations": [],
            "perceived_risks": [],
        }

        for result in fg_results.get("results", []):
            q_type = result.get("type")

            if q_type == "NPS":
                segment_data["nps"] = {
                    "score": result.get("nps_score"),
                    "promoters": result.get("promoter_count", 0),
                    "passives": result.get("passive_count", 0),
                    "detractors": result.get("detractor_count", 0),
                    "summary": result.get("summary", ""),
                }

            elif q_type == "SEMANTIC_DIFFERENTIAL":
                question = result.get("question", "")
                if "trustworthiness" in question.lower():
                    segment_data["trust"] = result.get("mean_score", 0)
                elif "innovation" in question.lower():
                    segment_data["innovation"] = result.get("mean_score", 0)

            elif q_type == "LIKERT":
                segment_data["promise_delivery"] = result.get("mean_score", 0)

            elif q_type == "OPEN_ENDED":
                question = result.get("question", "")
                if "3 words" in question.lower():
                    segment_data["word_associations"] = result.get("themes", [])
                elif "risk" in question.lower():
                    segment_data["perceived_risks"] = result.get("themes", [])

        comparison[segment_key] = segment_data

    return comparison


def print_comparison(comparison: dict):
    """Print a comparison table."""
    print("\n" + "=" * 70)
    print("CROSS-SEGMENT BRAND PERCEPTION COMPARISON")
    print("=" * 70)

    header = f"{'Metric':<25}"
    for data in comparison.values():
        header += f"{data['label']:<20}"
    print(header)
    print("-" * 70)

    # NPS
    row = f"{'NPS Score':<25}"
    for data in comparison.values():
        nps = data["nps"]["score"] if data["nps"] else "N/A"
        row += f"{nps:<20}"
    print(row)

    # Trust
    row = f"{'Trust (1-7)':<25}"
    for data in comparison.values():
        row += f"{data['trust'] or 'N/A':<20}"
    print(row)

    # Innovation
    row = f"{'Innovation (1-7)':<25}"
    for data in comparison.values():
        row += f"{data['innovation'] or 'N/A':<20}"
    print(row)

    # Promise delivery
    row = f"{'Promise Delivery (1-10)':<25}"
    for data in comparison.values():
        row += f"{data['promise_delivery'] or 'N/A':<20}"
    print(row)

    # Word associations
    print(f"\n{'Word Associations:':<25}")
    for data in comparison.values():
        words = ", ".join(data["word_associations"][:5]) if data["word_associations"] else "N/A"
        print(f"  {data['label']}: {words}")

    # Perceived risks
    print(f"\n{'Perceived Risks:':<25}")
    for data in comparison.values():
        risks = data["perceived_risks"][:3] if data["perceived_risks"] else ["N/A"]
        print(f"  {data['label']}:")
        for risk in risks:
            print(f"    • {risk}")

Stage 4 — Generate 360-Degree Report

REPORT_SCHEMA = {"type": "json_schema", "json_schema": {
    "name": "brand_perception_report", "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "executive_summary": {"type": "string"},
            "overall_brand_health": {"type": "string", "description": "Strong, Moderate, Weak, or Critical"},
            "nps_comparison": {
                "type": "object",
                "properties": {
                    "customers": {"type": "number"},
                    "prospects": {"type": "number"},
                    "churned": {"type": "number"},
                    "gap_analysis": {"type": "string"},
                },
                "required": ["customers", "prospects", "churned", "gap_analysis"],
            },
            "perception_gaps": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "dimension": {"type": "string"},
                        "gap_description": {"type": "string"},
                        "severity": {"type": "string"},
                        "recommendation": {"type": "string"},
                    },
                    "required": ["dimension", "gap_description", "severity", "recommendation"],
                },
            },
            "brand_strengths": {"type": "array", "items": {"type": "string"}},
            "brand_weaknesses": {"type": "array", "items": {"type": "string"}},
            "churn_drivers": {"type": "array", "items": {"type": "string"}},
            "prospect_barriers": {"type": "array", "items": {"type": "string"}},
            "priority_actions": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "action": {"type": "string"},
                        "impact": {"type": "string"},
                        "effort": {"type": "string"},
                        "timeline": {"type": "string"},
                    },
                    "required": ["action", "impact", "effort", "timeline"],
                },
            },
        },
        "required": [
            "executive_summary", "overall_brand_health", "nps_comparison",
            "perception_gaps", "brand_strengths", "brand_weaknesses",
            "churn_drivers", "prospect_barriers", "priority_actions",
        ],
    },
}}


def generate_perception_report(comparison: dict) -> dict:
    """Synthesize a 360-degree brand perception report."""
    prompt = (
        f"You are a brand strategist. Analyze this 360-degree brand perception data "
        f"for {BRAND_NAME} ({BRAND_CONTEXT['category']}) and produce a comprehensive report.\n\n"
        f"Value proposition: '{BRAND_CONTEXT['value_prop']}'\n"
        f"Competitors: {', '.join(BRAND_CONTEXT['competitors'])}\n\n"
        f"## Cross-Segment Data\n{json.dumps(comparison, indent=2)}\n\n"
        "Focus on:\n"
        "1. How perception differs across customers, prospects, and churned users\n"
        "2. The biggest perception gaps and their business impact\n"
        "3. What's driving churn from a brand perception standpoint\n"
        "4. What's blocking prospect conversion\n"
        "5. Prioritized actions to improve brand health"
    )

    resp = mavera.responses.create(
        model="mavera-1",
        input=[{"role": "user", "content": prompt}],
        extra_body={"response_format": REPORT_SCHEMA},
    )

    return json.loads(resp.output[0].content[0].text)

Running the Full Audit

def run_brand_audit():
    print("=" * 60)
    print(f"BRAND PERCEPTION AUDIT — {BRAND_NAME}")
    print("=" * 60)

    # Stage 1: Create personas
    print("\n--- Stage 1: Creating Segment Personas ---")
    segment_ids = create_segment_personas()

    # Stage 2: Run focus groups
    print("\n--- Stage 2: Running 3 Focus Groups ---")
    focus_groups = run_segment_focus_groups(segment_ids)
    all_results = poll_all_focus_groups(focus_groups)

    # Stage 3: Compare
    print("\n--- Stage 3: Cross-Segment Comparison ---")
    comparison = compare_segments(all_results)
    print_comparison(comparison)

    # Stage 4: Report
    print("\n--- Stage 4: Generating 360° Report ---")
    report = generate_perception_report(comparison)

    print(f"\nBrand Health: {report['overall_brand_health']}")
    print(f"NPS — Customers: {report['nps_comparison']['customers']}, "
          f"Prospects: {report['nps_comparison']['prospects']}, "
          f"Churned: {report['nps_comparison']['churned']}")

    print(f"\nStrengths:")
    for s in report["brand_strengths"]:
        print(f"  + {s}")
    print(f"\nWeaknesses:")
    for w in report["brand_weaknesses"]:
        print(f"  - {w}")

    print(f"\nPriority Actions:")
    for action in report["priority_actions"]:
        print(f"  [{action['impact']} impact, {action['effort']} effort] {action['action']}")

    # Save
    output = {"comparison": comparison, "report": report}
    with open("brand_perception_audit.json", "w") as f:
        json.dump(output, f, indent=2)

    print("\n✓ Saved brand_perception_audit.json")
    return report


if __name__ == "__main__":
    run_brand_audit()

Example Output

{
  "executive_summary": "Acme has strong brand health among active customers (NPS +42) but significant perception gaps with prospects (NPS -5) and churned users (NPS -28). Trust and innovation scores are high with customers but drop sharply in the churned segment. The primary churn driver is perceived lack of depth for high-stakes decisions. The primary prospect barrier is skepticism about synthetic vs. real research.",
  "overall_brand_health": "Moderate",
  "nps_comparison": {
    "customers": 42,
    "prospects": -5,
    "churned": -28,
    "gap_analysis": "A 70-point spread between customers and churned users signals a post-purchase experience problem, not an acquisition problem. Prospects are nearly neutral — they need proof, not persuasion."
  },
  "perception_gaps": [
    {
      "dimension": "Research Credibility",
      "gap_description": "Customers rate trust at 5.8/7, but churned users rate it at 2.9/7. Churned users question whether synthetic research is rigorous enough for executive decisions.",
      "severity": "High",
      "recommendation": "Publish validation studies comparing Mavera results to traditional research outcomes. Add confidence intervals to focus group outputs."
    }
  ],
  "brand_strengths": [
    "Speed — all segments acknowledge faster time-to-insight",
    "Innovation — perceived as cutting-edge by customers and prospects",
    "Ease of use — customers cite low learning curve"
  ],
  "brand_weaknesses": [
    "Research depth questioned for high-stakes decisions",
    "Price-value perception weak for occasional users",
    "Limited brand awareness among prospects"
  ],
  "churn_drivers": [
    "Leadership doesn't trust synthetic research for board-level decisions",
    "Cost not justified for teams using it less than weekly",
    "Missing integrations with existing research workflows"
  ],
  "prospect_barriers": [
    "Skepticism about AI-generated audience data accuracy",
    "No peer case studies or social proof from their industry",
    "Unclear how results compare to real focus groups"
  ],
  "priority_actions": [
    {
      "action": "Create 3 case studies showing Mavera vs traditional research correlation",
      "impact": "High",
      "effort": "Medium",
      "timeline": "6 weeks"
    },
    {
      "action": "Launch a free tier or trial to reduce prospect risk perception",
      "impact": "High",
      "effort": "Low",
      "timeline": "2 weeks"
    }
  ]
}
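The 70-point spread cited in gap_analysis is simply the customer NPS minus the churned NPS. Once the audit is saved, the spread can be recomputed from the report (field names match the example output above; the nps_spread helper is ours, not part of the API):

```python
def nps_spread(report: dict) -> int:
    """Customer-vs-churned NPS spread from a saved perception report."""
    nps = report["nps_comparison"]
    return nps["customers"] - nps["churned"]

example = {"nps_comparison": {"customers": 42, "prospects": -5, "churned": -28}}
print(nps_spread(example))  # 70
```

A large spread with a high customer NPS points at a post-purchase problem; a small spread with a low customer NPS points at a product or positioning problem.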

Variations

Run the same audit quarterly. Store results and diff:
with open("audits/q1.json") as f:
    q1 = json.load(f)
with open("audits/q2.json") as f:
    q2 = json.load(f)

for segment in ["customers", "prospects", "churned"]:
    delta_nps = q2["comparison"][segment]["nps"]["score"] - q1["comparison"][segment]["nps"]["score"]
    print(f"{segment}: NPS {delta_nps:+}")
Run the same battery for a competitor’s brand using your personas:
COMPETITOR_NAME = "CompetitorX"
# Replace BRAND_NAME in question battery
competitor_questions = build_question_battery()
for q in competitor_questions:
    q["question"] = q["question"].replace(BRAND_NAME, COMPETITOR_NAME)
Instead of customer/prospect/churned, segment by industry:
SEGMENTS = {
    "saas": {"label": "SaaS Companies", "personas": [...]},
    "agency": {"label": "Marketing Agencies", "personas": [...]},
    "enterprise": {"label": "Enterprise", "personas": [...]},
}
Add an internal segment to see if employees and customers are aligned:
SEGMENTS["employees"] = {
    "label": "Internal Team",
    "personas": [
        {"name": "Sales Rep", "description": "Frontline seller..."},
        {"name": "Customer Success", "description": "Manages renewals..."},
        {"name": "Product Manager", "description": "Builds the roadmap..."},
    ],
}

Credits Estimate

| Stage | Typical Cost | Notes |
| --- | --- | --- |
| Create 7 personas | 0 | Persona creation is free |
| Focus Group — Customers (N=25, 6 Qs) | 100–200 credits | |
| Focus Group — Prospects (N=25, 6 Qs) | 100–200 credits | |
| Focus Group — Churned (N=25, 6 Qs) | 100–200 credits | |
| Perception report (1 chat call) | 5–15 credits | Single structured output |
| Total | ~305–615 credits | |
The three focus groups run in parallel on the server — total wall-clock time is the longest single group, not the sum. Expect 5-10 minutes for all three.

See Also

  • Focus Groups: NPS, Semantic Differential, and all question types
  • Positioning Workshop: Validate positioning after measuring current perception
  • Pricing Research: Test pricing alongside perception
  • News-Triggered Research: Monitor how external events shift brand perception
  • Generational Content Testing: Test perception across age cohorts
  • Annual Planning Kickoff: Feed perception data into annual planning