

Scenario

You need a 5-part blog series on a complex topic — say, “Building a Data-Driven Marketing Stack.” Writing each part from scratch risks inconsistency: the voice drifts, key points get repeated, the narrative arc loses coherence. This playbook generates part 1 with Mavera’s Generate API, then feeds the output back as context for part 2. Each subsequent part receives the full chain of prior outputs, so the AI maintains consistent terminology, avoids repeating itself, and builds on the narrative thread from earlier installments.
Mavera-only. No external CMS, no editorial calendar tool. Just Generate + Chat + your brand voice. The chain-of-context pattern works with any generation app.

Architecture

(Diagram: series plan → Generate loop with chained context → Chat for the series introduction → export with manifest.)

What You Need

| Requirement | Details |
| --- | --- |
| Mavera API key | Starts with `mvra_live_`. Get one at Developer Settings. |
| Workspace ID | From your dashboard URL (`ws_...`). |
| Brand voice ID (optional) | For consistent voice across all parts. Create one via Brand Voice. |
| Series outline | Topics for each part, ordered by narrative logic. |
| Credits | ~150–400 total. See Credits Estimate. |
| Python 3.8+ or Node.js 18+ | `requests` + `openai` SDK for Python; native `fetch` for Node. |
MAVERA_API_KEY=mvra_live_your_key_here
MAVERA_WORKSPACE_ID=ws_your_workspace_id
BRAND_VOICE_ID=bv_optional_voice_id
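
Before running the pipeline, it can help to fail fast on missing or malformed configuration. This is an illustrative helper, not part of any Mavera SDK; the key prefixes come from the table above:

```python
import os

def validate_config():
    """Raise early if required environment variables are missing or malformed."""
    api_key = os.environ.get("MAVERA_API_KEY", "")
    workspace = os.environ.get("MAVERA_WORKSPACE_ID", "")
    problems = []
    if not api_key.startswith("mvra_live_"):
        problems.append("MAVERA_API_KEY should start with 'mvra_live_'")
    if not workspace.startswith("ws_"):
        problems.append("MAVERA_WORKSPACE_ID should start with 'ws_'")
    if problems:
        raise EnvironmentError("; ".join(problems))
```

Call `validate_config()` once at startup so a typo in `.env` surfaces before any credits are spent.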

The Flow

1. **Define the series plan.** Create a structured outline: series title, number of parts, and each part’s topic, key points, and desired length. The plan drives the generation loop.
2. **Generate part 1.** Call POST /generations with the first topic. No prior context needed — this sets the foundation.
3. **Chain subsequent parts.** For parts 2–N, include all prior outputs as context in the input. Use Chat to build a continuity prompt that instructs the AI: “Continue this series. Here’s what we’ve covered so far.”
4. **Generate a series introduction.** After all parts are complete, use Chat to create a series introduction and linking summary that ties all parts together.
5. **Export the series.** Save each part plus the intro as separate files. Include a manifest with part order, credits, and word counts.

Stage 1 — Define the Series Plan

import os
import time
import json
import requests

API_KEY = os.environ["MAVERA_API_KEY"]
WORKSPACE_ID = os.environ["MAVERA_WORKSPACE_ID"]
BRAND_VOICE_ID = os.environ.get("BRAND_VOICE_ID")
BASE = "https://app.mavera.io/api/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

SERIES = {
    "title": "Building a Data-Driven Marketing Stack",
    "audience": "Marketing leaders at growth-stage B2B companies",
    "parts": [
        {
            "part": 1,
            "topic": "Why Most Marketing Stacks Fail",
            "key_points": [
                "Common anti-patterns: tool sprawl, data silos, vanity metrics",
                "The cost of not having a unified data layer",
                "What a 'data-driven' stack actually looks like",
            ],
            "length": "1000 words",
        },
        {
            "part": 2,
            "topic": "Choosing Your Foundation: CDP, CRM, or Data Warehouse",
            "key_points": [
                "Tradeoffs between CDP-first, CRM-first, and warehouse-first approaches",
                "Decision framework based on team size and data maturity",
                "Real-world examples of each path",
            ],
            "length": "1200 words",
        },
        {
            "part": 3,
            "topic": "Instrumenting Your Funnel: Events, Attribution, and Metrics That Matter",
            "key_points": [
                "Event taxonomy design principles",
                "Multi-touch attribution models and when to use each",
                "The 5 metrics your board actually cares about",
            ],
            "length": "1200 words",
        },
        {
            "part": 4,
            "topic": "Activation: From Insights to Automated Campaigns",
            "key_points": [
                "Turning data into audience segments",
                "Trigger-based campaign architecture",
                "Personalization at scale without a team of 50",
            ],
            "length": "1000 words",
        },
        {
            "part": 5,
            "topic": "Measuring What Matters: Dashboards, Reviews, and Iteration",
            "key_points": [
                "Dashboard design for different stakeholders",
                "Weekly/monthly review cadence",
                "How to iterate on your stack without breaking everything",
            ],
            "length": "1000 words",
        },
    ],
}

Stage 2 — Generate with Chained Context

Each part receives a continuity prompt built from all prior outputs. Part 1 has no prior context; part 5 sees excerpts (the first 500 characters) of parts 1–4.
from openai import OpenAI

mavera = OpenAI(api_key=API_KEY, base_url=BASE)


def wait_for_generation(gen_id, max_wait=300):
    """Poll every 10s until the generation finishes or max_wait elapses."""
    for _ in range(max_wait // 10):
        resp = requests.get(f"{BASE}/generations/{gen_id}", headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        status = data.get("status")
        if status == "COMPLETED":
            return data
        if status == "FAILED":  # terminal error state; check the Generations API reference
            raise RuntimeError(f"Generation {gen_id} failed: {data.get('error')}")
        time.sleep(10)
    raise TimeoutError(f"Generation {gen_id} timed out after {max_wait}s")


def build_context_prompt(prior_parts):
    """Build a continuity instruction from all prior parts."""
    if not prior_parts:
        return ""

    summary_lines = []
    for p in prior_parts:
        summary_lines.append(
            f"Part {p['part']}: \"{p['topic']}\"\n"
            f"Opening excerpt: {p['output'][:500]}...\n"
        )

    return (
        "IMPORTANT CONTINUITY CONTEXT — this is a multi-part series. "
        "Below are the previous parts. Continue the narrative thread. "
        "Do NOT repeat points already covered. Reference earlier parts naturally "
        "(e.g. 'As we discussed in Part 2...'). Maintain the same voice and terminology.\n\n"
        + "\n---\n".join(summary_lines)
    )


series_outputs = []
total_credits = 0

for part_plan in SERIES["parts"]:
    context = build_context_prompt(series_outputs)

    input_data = {
        "topic": f"Part {part_plan['part']}: {part_plan['topic']} "
                 f"(from the series '{SERIES['title']}')",
        "target_audience": SERIES["audience"],
        "length": part_plan["length"],
        "key_points": part_plan["key_points"],
    }

    if context:
        input_data["additional_context"] = context

    payload = {
        "app_id": "blog_post_generator",
        "title": f"{SERIES['title']} — Part {part_plan['part']}",
        "input_data": input_data,
        "workspace_id": WORKSPACE_ID,
    }
    if BRAND_VOICE_ID:
        payload["brand_voice_id"] = BRAND_VOICE_ID

    print(f"\nGenerating Part {part_plan['part']}: {part_plan['topic']}...")
    resp = requests.post(f"{BASE}/generations", headers=HEADERS, json=payload)
    resp.raise_for_status()
    gen = resp.json()

    if gen.get("status") in ("PENDING", "RUNNING"):
        gen = wait_for_generation(gen["id"])

    credits = gen.get("usage", {}).get("credits_used", 0)
    total_credits += credits

    series_outputs.append({
        "part": part_plan["part"],
        "topic": part_plan["topic"],
        "output": gen.get("output", ""),
        "credits": credits,
    })

    word_count = len(gen.get("output", "").split())
    print(f"  ✓ {credits} credits | {word_count} words")

print(f"\n{'='*50}")
print(f"Series complete: {len(series_outputs)} parts | {total_credits} total credits")
Context length grows with each part. For very long series (8+ parts), truncate prior outputs to summaries instead of full text. Use Chat to generate a 200-word summary of each prior part before feeding it as context.
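
If you want a guard before switching to full summarization, one crude alternative is a character budget on the combined context. This helper is a sketch (the 8,000-character budget is an arbitrary assumption, not a Mavera limit):

```python
def cap_context(context, max_chars=8000):
    """Keep the most recent portion of the context if it exceeds a rough budget."""
    if len(context) <= max_chars:
        return context
    # Favor recent parts, which the next installment most directly builds on.
    return "[earlier parts truncated]\n" + context[-max_chars:]
```

Apply it as `input_data["additional_context"] = cap_context(context)`; for real continuity across 8+ parts, prefer the summarization approach shown under Variations.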

Stage 3 — Generate Series Introduction and Navigation

After all parts exist, use Chat to produce a series introduction and per-part summaries that help readers navigate.
toc_entries = "\n".join(
    f"- Part {p['part']}: {p['topic']} ({len(p['output'].split())} words)"
    for p in series_outputs
)

intro_resp = mavera.responses.create(
    model="mavera-1",
    input=[
        {
            "role": "user",
            "content": (
                f"Write a compelling series introduction for '{SERIES['title']}'. "
                f"Target audience: {SERIES['audience']}.\n\n"
                f"The series has {len(series_outputs)} parts:\n{toc_entries}\n\n"
                "Write:\n"
                "1. A 150-word hook paragraph explaining why this series matters\n"
                "2. A brief summary of what each part covers (2-3 sentences each)\n"
                "3. A 'Who should read this' section\n"
                "4. Estimated total reading time\n\n"
                "Tone: authoritative but accessible. No fluff."
            ),
        },
    ],
)
series_intro = intro_resp.output[0].content[0].text
print("Series introduction generated")
print(series_intro[:300] + "...")
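
Stage 4 — Export the Series

The Flow’s final step (export) is not shown above. A minimal sketch, assuming plain Markdown files plus a JSON manifest; the filenames and layout here are illustrative:

```python
import json
from pathlib import Path

def export_series(series_outputs, series_intro, out_dir="series_export"):
    """Write the intro and each part as Markdown files, plus a JSON manifest
    recording part order, credits, and word counts."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "00-introduction.md").write_text(series_intro, encoding="utf-8")
    manifest = {"parts": []}
    for p in series_outputs:
        filename = f"{p['part']:02d}-part.md"
        (out / filename).write_text(p["output"], encoding="utf-8")
        manifest["parts"].append({
            "part": p["part"],
            "topic": p["topic"],
            "file": filename,
            "credits": p["credits"],
            "word_count": len(p["output"].split()),
        })
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2), encoding="utf-8")
    return manifest
```

Run `export_series(series_outputs, series_intro)` after Stage 3; the manifest preserves reading order for whatever CMS eventually publishes the series.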


Variations

For series with 8+ parts, summarize each prior part instead of including full text:
def summarize_part(part_output):
    resp = mavera.responses.create(
        model="mavera-1",
        input=[{
            "role": "user",
            "content": f"Summarize this blog post in 200 words. Preserve key terms and conclusions:\n\n{part_output}",
        }],
    )
    return resp.output[0].content[0].text

def build_context_summarized(prior_parts):
    summaries = [f"Part {p['part']}: {summarize_part(p['output'])}" for p in prior_parts]
    return "Previous parts summary:\n\n" + "\n---\n".join(summaries)
Mix content types within the series — a blog post for part 1, a case study for part 3, an interview format for part 5:
APP_MAP = {1: "blog_post_generator", 2: "blog_post_generator", 3: "case_study_generator",
           4: "blog_post_generator", 5: "interview_format_generator"}
app_id = APP_MAP.get(part_plan["part"], "blog_post_generator")
After generating each part, run it past a target persona via Chat to check relevance:
review = mavera.responses.create(
    model="mavera-1",
    input=[{
        "role": "user",
        "content": f"Rate this content's relevance to you (1-10) and suggest improvements:\n\n{part_output[:2000]}",
    }],
    extra_body={"persona_id": "persona_marketing_leader"},
)
Use Chat with response_format to generate the full series outline (topic, key points, length per part) from a single topic before starting the generation loop, asking the model to order the parts so each builds on the last.
After all parts are generated, append navigation links (← Part N / Part N+2 →) to each output for a connected reading experience.
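The navigation step can be sketched as a small post-processing pass over the finished outputs; the link format below is illustrative and should be adapted to however your CMS renders links:

```python
def add_navigation(series_outputs):
    """Append prev/next links to each part's output, in reading order."""
    parts = sorted(series_outputs, key=lambda p: p["part"])
    for i, p in enumerate(parts):
        links = []
        if i > 0:
            links.append(f"← Part {parts[i - 1]['part']}: {parts[i - 1]['topic']}")
        if i < len(parts) - 1:
            links.append(f"Part {parts[i + 1]['part']}: {parts[i + 1]['topic']} →")
        if links:
            p["output"] += "\n\n---\n" + " | ".join(links)
    return parts
```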

Credits Estimate

| Operation | Typical Cost | Notes |
| --- | --- | --- |
| Blog post generation (×5 parts) | 75–150 credits | 15–30 per part depending on length |
| Series introduction (Chat) | 1–5 credits | Single chat call |
| Context summaries (if used) | 5–25 credits | 1–5 per summary, only for long series |
| **Total (5-part series)** | ~80–180 credits | |
| **Total (10-part series with summaries)** | ~180–400 credits | |
The chain-of-context approach costs slightly more per part than independent generation (the context tokens add ~10% to each call). The narrative coherence is worth it — readers notice when parts contradict each other.
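
As a back-of-envelope check, the ~10% overhead compounds linearly with part count. A rough estimator (the 20-credit base cost per part is an assumption drawn from the 15–30 range in the table above):

```python
def estimate_series_credits(num_parts, base_per_part=20, context_overhead=0.10):
    """Rough estimate: part 1 carries no context; later parts pay ~10% extra."""
    total = base_per_part  # part 1, no prior context
    total += (num_parts - 1) * base_per_part * (1 + context_overhead)
    return round(total)
```

For a 5-part series this lands around 108 credits, comfortably inside the ~80–180 range above; treat it as a budgeting aid, not a quote.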

What’s Next

- **Brand Voice Content Library**: Create a full content library in one sitting
- **Content Repurposing Pipeline**: Turn each series part into social, email, and ad formats
- **Content Localization**: Adapt the series for different regional audiences
- **Content Generation**: Full API reference for generation apps
- **Responses API**: response_format, personas, and analysis_mode
- **Credits & Budget**: Track and manage credit usage