You tell Mavera the exact shape you want. It guarantees the response matches. Every field, every type, every time. No validation code. No retries. No “I hope this parses.” You define a schema, and the API returns JSON that conforms to it — or tells you why it can’t.
Structured Outputs works with the same OpenAI SDK you already use. If you’ve used structured outputs with OpenAI, you already know how this works. Just point your client at Mavera.
You pass a schema via text.format. Mavera constrains the model’s output to match that schema exactly. The response comes back as valid JSON that you can parse directly into your typed objects.

There are two ways to get structured output from the API:
| Method | How | Best for |
| --- | --- | --- |
| text.format | Pass a schema in the Responses API request | Extracting data, scoring, classification, any time you want typed JSON back |
| Function calling | Define tools with JSON Schema parameters | When the model should decide which action to take, then return structured arguments |
Score a product review. Get back a number, a sentiment label, and an explanation. Every time.
```python
import os

from pydantic import BaseModel
from openai import OpenAI


class ReviewScore(BaseModel):
    score: int
    sentiment: str
    explanation: str


client = OpenAI(
    api_key=os.environ["MAVERA_API_KEY"],
    base_url="https://app.mavera.io/api/v1",
)

completion = client.responses.parse(
    model="mavera-1",
    input=[
        {"role": "system", "content": "Score this product review."},
        {"role": "user", "content": "The battery life is incredible but the screen is dim."},
    ],
    text_format=ReviewScore,
    extra_body={"persona_id": os.environ.get("PERSONA_ID")},
)

result = completion.output_parsed
print(f"Score: {result.score}, Sentiment: {result.sentiment}")
print(f"Explanation: {result.explanation}")
```
Example output:
{ "score": 7, "sentiment": "mixed", "explanation": "Strong praise for battery life offset by a notable complaint about screen brightness. Overall positive but with a clear reservation."}
The .parse() method automatically converts the JSON response into your Pydantic model (Python) or Zod schema (JavaScript). You get a fully typed object — not a string, not a dict.
When using raw JSON Schema with strict: true, you must set "additionalProperties": false on every object in your schema — including nested ones. This is a JSON Schema requirement for strict mode.
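For example, a strict-mode schema for the review shape above, extended with an illustrative nested details object, needs the flag at both levels:

```json
{
  "type": "object",
  "properties": {
    "score": { "type": "integer" },
    "sentiment": { "type": "string" },
    "details": {
      "type": "object",
      "properties": {
        "pros": { "type": "array", "items": { "type": "string" } },
        "cons": { "type": "array", "items": { "type": "string" } }
      },
      "required": ["pros", "cons"],
      "additionalProperties": false
    }
  },
  "required": ["score", "sentiment", "details"],
  "additionalProperties": false
}
```

Miss it on the nested details object and the request is rejected, even though the root object has it.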
Chain of Thought

Make the model show its work. Useful when you need to audit reasoning or debug unexpected results.
```python
from pydantic import BaseModel, Field


class Step(BaseModel):
    reasoning: str = Field(description="One step of reasoning")
    evidence: str = Field(description="What supports this step")


class ChainOfThought(BaseModel):
    steps: list[Step] = Field(description="Reasoning steps, in order")
    final_answer: str
    confidence: float = Field(description="0-1 confidence in the final answer")


# client is the same OpenAI client configured in the first example
completion = client.responses.parse(
    model="mavera-1",
    input=[
        {"role": "system", "content": "Think step by step. Show your reasoning."},
        {"role": "user", "content": "Should a DTC brand expand into wholesale retail in 2025?"},
    ],
    text_format=ChainOfThought,
    extra_body={"persona_id": os.environ.get("PERSONA_ID")},
)

result = completion.output_parsed
for i, step in enumerate(result.steps, 1):
    print(f"Step {i}: {step.reasoning}")
    print(f"  Evidence: {step.evidence}")
print(f"\nAnswer: {result.final_answer} (confidence: {result.confidence})")
```
Multi-label Classification
Classify content into multiple categories, each with a confidence score. Great for content tagging, support ticket routing, or lead scoring.
```python
class Category(BaseModel):
    label: str = Field(description="Category name")
    confidence: float = Field(description="0-1 confidence this category applies")
    reason: str = Field(description="Why this category was assigned")


class Classification(BaseModel):
    categories: list[Category]
    primary_category: str = Field(description="The single most relevant category")
    content_summary: str


completion = client.responses.parse(
    model="mavera-1",
    input=[
        {
            "role": "system",
            "content": "Classify this support ticket. Categories: billing, technical, account, feature_request, bug_report."
        },
        {
            "role": "user",
            "content": "I was charged twice for my subscription and now I can't log in to my account."
        },
    ],
    text_format=Classification,
    extra_body={"persona_id": os.environ.get("PERSONA_ID")},
)

result = completion.output_parsed
print(f"Primary: {result.primary_category}")
for cat in result.categories:
    print(f"  {cat.label}: {cat.confidence:.0%} — {cat.reason}")
```
Nested Objects — Structured Report
Extract a full structured report with sections, each containing multiple findings. Perfect for research analysis, audit reports, or content reviews.
```python
class Finding(BaseModel):
    title: str
    detail: str
    severity: str = Field(description="low, medium, or high")
    recommendation: str


class Section(BaseModel):
    heading: str
    summary: str
    findings: list[Finding]


class Report(BaseModel):
    title: str
    executive_summary: str
    sections: list[Section]
    overall_risk_level: str = Field(description="low, medium, high, or critical")
    next_steps: list[str]


completion = client.responses.parse(
    model="mavera-1",
    input=[
        {"role": "system", "content": "Produce a structured brand health report."},
        {"role": "user", "content": "Analyze brand perception for a fintech startup targeting millennials. Recent news: launched budgeting feature, had a 2-hour outage last week."},
    ],
    text_format=Report,
    extra_body={"persona_id": os.environ.get("PERSONA_ID")},
)

report = completion.output_parsed
print(f"# {report.title}")
print(f"\n{report.executive_summary}")
for section in report.sections:
    print(f"\n## {section.heading}")
    for finding in section.findings:
        print(f"  [{finding.severity.upper()}] {finding.title}: {finding.recommendation}")
```
Enum Constraints
Force the model to pick from a fixed set of options. Eliminates the “creative” responses you didn’t ask for.
```python
from enum import Enum


class Priority(str, Enum):
    critical = "critical"
    high = "high"
    medium = "medium"
    low = "low"


class Impact(str, Enum):
    revenue = "revenue"
    user_experience = "user_experience"
    security = "security"
    operations = "operations"


class TriageResult(BaseModel):
    priority: Priority
    impact_area: Impact
    estimated_effort_hours: int
    should_escalate: bool
    reasoning: str


completion = client.responses.parse(
    model="mavera-1",
    input=[
        {"role": "system", "content": "Triage this engineering issue."},
        {"role": "user", "content": "Users in the EU are seeing 500 errors on the checkout page since the last deploy."},
    ],
    text_format=TriageResult,
    extra_body={"persona_id": os.environ.get("PERSONA_ID")},
)

result = completion.output_parsed
print(f"Priority: {result.priority.value}")
print(f"Impact: {result.impact_area.value}")
print(f"Escalate: {result.should_escalate}")
```
This is where Mavera shines. Structured outputs give you typed data. Personas give you perspective. Combine them and you get quantified audience insights you can pipe directly into your code.

Here’s a concrete example: score an ad headline from a Gen Z persona’s perspective.
```python
import os

from pydantic import BaseModel, Field
from openai import OpenAI


class HeadlineScore(BaseModel):
    relevance_score: int = Field(description="1-10 how relevant this feels to the persona")
    would_click: bool = Field(description="Would this persona actually click?")
    reasoning: str = Field(description="Why or why not, in the persona's voice")
    suggested_edit: str = Field(description="How the persona would improve it")


client = OpenAI(
    api_key=os.environ["MAVERA_API_KEY"],
    base_url="https://app.mavera.io/api/v1",
)

completion = client.responses.parse(
    model="mavera-1",
    input=[
        {
            "role": "system",
            "content": "You are evaluating an ad headline. Score it honestly from your perspective."
        },
        {
            "role": "user",
            "content": "Headline: 'Invest in Your Future — Open a Savings Account Today'"
        },
    ],
    text_format=HeadlineScore,
    extra_body={"persona_id": os.environ["GEN_Z_PERSONA_ID"]},
)

result = completion.output_parsed
print(f"Relevance: {result.relevance_score}/10")
print(f"Would click: {result.would_click}")
print(f"Reasoning: {result.reasoning}")
print(f"Suggested edit: {result.suggested_edit}")
```
Example output from a Gen Z persona:
{ "relevance_score": 3, "would_click": false, "reasoning": "This feels like something my parents would say. 'Invest in your future' is vague and preachy. I'd scroll right past it.", "suggested_edit": "Start Saving Without Thinking About It — Auto-Round-Up Is Here"}
Run the same prompt across multiple personas to compare how different demographics react. Loop through persona IDs and collect the results into a comparison table.
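A minimal sketch of that loop, reusing the HeadlineScore model and client from above; the extra persona ID environment variables are hypothetical placeholders for your own IDs:

```python
# Hypothetical persona IDs -- swap in your own from the Mavera dashboard.
persona_ids = {
    "gen_z": os.environ["GEN_Z_PERSONA_ID"],
    "millennial": os.environ["MILLENNIAL_PERSONA_ID"],
    "gen_x": os.environ["GEN_X_PERSONA_ID"],
}

results = {}
for name, persona_id in persona_ids.items():
    completion = client.responses.parse(
        model="mavera-1",
        input=[
            {
                "role": "system",
                "content": "You are evaluating an ad headline. Score it honestly from your perspective."
            },
            {
                "role": "user",
                "content": "Headline: 'Invest in Your Future — Open a Savings Account Today'"
            },
        ],
        text_format=HeadlineScore,
        extra_body={"persona_id": persona_id},
    )
    results[name] = completion.output_parsed

# Print a simple comparison table
for name, r in results.items():
    print(f"{name:<12} {r.relevance_score}/10  would_click={r.would_click}")
```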
When in doubt, use Structured Outputs. JSON mode ({"type": "json_object"}) gives you valid JSON but no schema guarantees — the model might return any shape. Structured Outputs guarantee the shape matches your schema.
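For reference, JSON mode is just a format switch with no schema attached. A sketch, assuming Mavera mirrors OpenAI’s text.format parameter here too:

```python
# JSON mode: guarantees valid JSON, not any particular shape.
# The prompt must still ask for JSON explicitly.
response = client.responses.create(
    model="mavera-1",
    input=[{"role": "user", "content": "Score this review and reply in JSON."}],
    text={"format": {"type": "json_object"}},
    extra_body={"persona_id": os.environ.get("PERSONA_ID")},
)
```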
Deeply nested schemas work, but simpler schemas produce better results. If you can flatten a three-level object into two levels, do it. The model has an easier time conforming to simple structures, and your parsing code stays clean.
```python
class SimpleReview(BaseModel):
    score: int
    sentiment: str
    pros: list[str]
    cons: list[str]
```
This is almost always better than wrapping pros and cons inside a nested analysis object.
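For contrast, the nested version would bury the same fields one level deeper (illustrative class names):

```python
# Avoid: an extra nesting level the model has to get right.
class Analysis(BaseModel):
    pros: list[str]
    cons: list[str]


class NestedReview(BaseModel):
    score: int
    sentiment: str
    analysis: Analysis  # same data, harder to conform to
```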
Use descriptions on every field
The description parameter on your schema fields acts like a mini-prompt for each field. It tells the model what you expect, without cluttering your system prompt.
```python
class Score(BaseModel):
    relevance: int = Field(description="1-10, where 10 means perfectly relevant to the target audience")
    tone_match: int = Field(description="1-10, where 10 means the tone perfectly matches brand guidelines")
```
Without descriptions, the model guesses what “relevance” and “tone_match” mean. With descriptions, it knows.
Handle refusals
Sometimes the model can’t — or shouldn’t — produce output matching your schema. Maybe the input is harmful, or the request doesn’t make sense. When this happens, output_parsed will be None and refusal will contain an explanation.

Always check for refusals before accessing .output_parsed:
```python
if completion.refusal:
    print(f"Model refused: {completion.refusal}")
else:
    print(f"Score: {completion.output_parsed.score}")
```
```javascript
if (completion.refusal) {
  console.log(`Model refused: ${completion.refusal}`);
} else {
  console.log(`Score: ${completion.output_parsed.score}`);
}
```
Use enums to constrain choices
If a field should only have a few possible values, use an enum. This eliminates creative-but-wrong answers like “kinda positive” when you wanted "positive", "negative", or "neutral".
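For the sentiment example, that can be as small as this (hypothetical model name):

```python
from enum import Enum


class Sentiment(str, Enum):
    positive = "positive"
    negative = "negative"
    neutral = "neutral"


class Verdict(BaseModel):
    sentiment: Sentiment  # "kinda positive" is no longer possible
    explanation: str
```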
Rarely, the model might not be able to produce output matching your schema — usually because the schema contradicts the prompt, or the input is too ambiguous. When this happens, .parse() fails rather than returning a valid object.
If the response is cut off because it hit max_tokens, the JSON will be incomplete and parsing will fail. To avoid this:

- Increase max_tokens for complex schemas
- Simplify your schema to produce shorter output
- Check response.status before parsing
```python
if completion.status != "completed":
    print("Response was truncated. Increase max_tokens or simplify the schema.")
elif completion.output_parsed:
    print(f"Result: {completion.output_parsed}")
```
text.format and analysis_mode cannot be used together. Use text.format for your own custom schemas, or analysis_mode for Mavera’s built-in analysis structure. If you need both, run two separate requests.
- No streaming with .parse() — the .parse() convenience method waits for the full response. If you need streaming, use .create() with a raw JSON Schema in text.format and parse the complete response yourself (a sketch follows this list).
- Schema size — extremely large schemas (50+ fields, deep nesting) may increase latency. Break large schemas into smaller, focused ones when possible.
- additionalProperties: false — required on every object level in strict mode. This is the most common schema error.
- No regex patterns — use description to explain expected formats (e.g., “ISO 8601 date string”) instead of JSON Schema pattern.
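Here’s what that raw-schema path can look like: a minimal sketch, assuming Mavera mirrors the OpenAI Responses API’s text.format shape for raw JSON Schema.

```python
import json
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MAVERA_API_KEY"],
    base_url="https://app.mavera.io/api/v1",
)

# .create() with a raw JSON Schema in text.format; parse the text yourself.
response = client.responses.create(
    model="mavera-1",
    input=[
        {"role": "system", "content": "Score this product review."},
        {"role": "user", "content": "The battery life is incredible but the screen is dim."},
    ],
    text={
        "format": {
            "type": "json_schema",
            "name": "review_score",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "score": {"type": "integer"},
                    "sentiment": {"type": "string"},
                    "explanation": {"type": "string"},
                },
                "required": ["score", "sentiment", "explanation"],
                "additionalProperties": False,
            },
        }
    },
    extra_body={"persona_id": os.environ.get("PERSONA_ID")},
)

result = json.loads(response.output_text)  # plain dict, not a typed object
print(result["score"], result["sentiment"])
```

With streaming enabled, you would accumulate the text deltas as they arrive and run json.loads on the complete string at the end.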