Mave is Mavera’s AI-powered research agent. Unlike the Responses API, which answers questions from a single persona’s perspective, Mave conducts research — searching the web, pulling news articles, querying SEO data, and synthesizing findings into cited, validated reports. Every Mave response includes sources with URLs, a confidence score, and a hallucination risk assessment. It’s designed for questions where you need facts, not just opinions.
Mave uses a 5-phase orchestration process: triage, planning, research, execution, and validation. This is why responses take longer (5–30 seconds) but are significantly more grounded than standard chat.

How It Works

1. Triage: Mave analyzes your query complexity and determines if clarification is needed. Queries are classified as Simple, Moderate, Complex, or Strategic — each triggering a different depth of research.
2. Planning: Mave creates an execution strategy: which personas to consult, what data sources to query, what search terms to use, and how to structure the final output.
3. Research: Mave executes tool calls in parallel — web search, news search, SEO data, and knowledge base queries — to gather comprehensive information from multiple angles.
4. Execution: Mave synthesizes all research into a coherent response, incorporating persona perspectives and inline citations to source material.
5. Validation: Mave performs a reality check: assessing accuracy, flagging unsupported claims, evaluating hallucination risk, and producing a confidence score.
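The five phases run server-side and are not exposed by the API, but the flow can be sketched conceptually. Every function below is a hypothetical stub that only illustrates what each phase consumes and produces:

```python
def triage(query: str) -> str:
    # Phase 1: classify query depth (the real classifier is far richer).
    return "Complex" if len(query.split()) > 8 else "Simple"

def plan(query: str, complexity: str) -> dict:
    # Phase 2: decide personas, data sources, and search terms.
    return {"query": query, "complexity": complexity,
            "data_sources": ["web", "news", "seo", "knowledge_base"]}

def research(strategy: dict) -> list:
    # Phase 3: the real service fans these tool calls out in parallel.
    return [{"source_type": s} for s in strategy["data_sources"]]

def execute(strategy: dict, findings: list) -> dict:
    # Phase 4: synthesize findings into a cited draft.
    return {"content": f"Report on: {strategy['query']}", "sources": findings}

def validate(draft: dict) -> dict:
    # Phase 5: reality check -- attach confidence score and hallucination risk.
    draft["validation"] = {"passed": True, "confidence_score": 0.9,
                           "hallucination_risk": "low"}
    return draft

query = "Analyze the EV market in Europe"
strategy = plan(query, triage(query))
report = validate(execute(strategy, research(strategy)))
```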

Basic Usage

import requests

response = requests.post(
    "https://app.mavera.io/api/v1/mave/chat",
    headers={
        "Authorization": "Bearer mvra_live_your_key_here",
        "Content-Type": "application/json"
    },
    json={
        "message": "Analyze the competitive landscape for electric vehicles in Europe"
    },
    timeout=60  # Mave responses can take 5-30 seconds
)

result = response.json()
print(f"Thread ID: {result['thread_id']}")
print(f"\n{result['content']}")

print("\n--- Sources ---")
for source in result["sources"]:
    print(f"  [{source['title']}]({source['url']})")

print(f"\nConfidence: {result['validation']['confidence_score']}")
print(f"Hallucination risk: {result['validation']['hallucination_risk']}")
print(f"Credits used: {result['usage']['credits_used']}")

Multi-Turn Threads

Every Mave response returns a thread_id. Pass it back in subsequent requests to maintain conversation context. Mave remembers the full research history within a thread, so follow-up questions can reference earlier findings without re-doing the research.
headers = {
    "Authorization": "Bearer mvra_live_your_key_here",
    "Content-Type": "application/json"
}

response1 = requests.post(
    "https://app.mavera.io/api/v1/mave/chat",
    headers=headers,
    json={"message": "Analyze the EV market in Europe"}
)
thread_id = response1.json()["thread_id"]
print(response1.json()["content"])

response2 = requests.post(
    "https://app.mavera.io/api/v1/mave/chat",
    headers=headers,
    json={
        "message": "What about Tesla's market share specifically?",
        "thread_id": thread_id
    }
)
print(response2.json()["content"])

response3 = requests.post(
    "https://app.mavera.io/api/v1/mave/chat",
    headers=headers,
    json={
        "message": "Compare that with BYD's growth trajectory",
        "thread_id": thread_id
    }
)
print(response3.json()["content"])
Follow-up messages in the same thread are cheaper because Mave can reuse earlier research context. Start new threads only when changing topics entirely.
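The thread bookkeeping above can be wrapped in a small helper that stores the thread_id from the first reply and reuses it automatically on follow-ups (an illustrative sketch, not an official client):

```python
import requests

class MaveThread:
    """Keep a Mave conversation's thread_id and reuse it on follow-ups."""

    URL = "https://app.mavera.io/api/v1/mave/chat"

    def __init__(self, api_key: str):
        self.headers = {"Authorization": f"Bearer {api_key}",
                        "Content-Type": "application/json"}
        self.thread_id = None

    def ask(self, message: str) -> dict:
        payload = {"message": message}
        if self.thread_id:
            # Reuse context so follow-ups cost less and skip redundant research
            payload["thread_id"] = self.thread_id
        resp = requests.post(self.URL, headers=self.headers,
                             json=payload, timeout=60)
        resp.raise_for_status()
        data = resp.json()
        self.thread_id = data["thread_id"]
        return data
```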

Streaming

Enable streaming for real-time display of Mave’s response as it generates.
import httpx
import json

with httpx.stream(
    "POST",
    "https://app.mavera.io/api/v1/mave/chat",
    headers={"Authorization": "Bearer mvra_live_your_key_here"},
    json={"message": "What are the latest AI trends in healthcare?", "stream": True},
    timeout=60.0  # allow for Mave's 5-30 second research latency
) as response:
    for line in response.iter_lines():
        if line.startswith("data: "):
            data = json.loads(line[6:])
            if data.get("type") == "content":
                print(data["content"], end="", flush=True)
            elif data.get("type") == "sources":
                print("\n\n--- Sources ---")
                for source in data["sources"]:
                    print(f"  {source['title']}: {source['url']}")
            elif data.get("type") == "validation":
                print(f"\nConfidence: {data['confidence_score']}")
            elif data.get("type") == "done":
                print(f"\nCredits used: {data['usage']['credits_used']}")
SSE event types:
| Event Type | Content |
|---|---|
| content | Text chunk of the response |
| sources | Array of source objects with title, URL, and snippet |
| validation | Confidence score and hallucination risk |
| done | Final event with usage data |
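When you need the events outside a display loop, the `data:` lines can be parsed into dicts with a small generator. This is a sketch; the real stream may also interleave comments or keep-alive lines, which it skips:

```python
import json

def parse_sse_events(lines):
    """Yield one event dict per 'data: ...' line of Mave's SSE stream."""
    for line in lines:
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

# Captured lines for illustration; real code would pass response.iter_lines().
sample = [
    'data: {"type": "content", "content": "EV adoption in Europe..."}',
    ': keep-alive',
    'data: {"type": "done", "usage": {"credits_used": 12}}',
]
events = list(parse_sse_events(sample))
```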

Sources and Citations

Every Mave response includes a sources array with the materials it used during research.
{
  "sources": [
    {
      "title": "European EV Sales Report 2024",
      "url": "https://example.com/report",
      "snippet": "EV adoption in Europe grew 25% year-over-year, with Norway leading at 82% market share...",
      "source_type": "web"
    },
    {
      "title": "Tesla Q4 2024 Earnings Call",
      "url": "https://example.com/earnings",
      "snippet": "European deliveries increased 18% compared to Q3...",
      "source_type": "news"
    }
  ]
}
Source types include web (general web search), news (news articles), seo (SEO/domain data), and knowledge_base (your custom knowledge).
Always verify critical citations by visiting the source URLs. While Mave’s validation phase reduces hallucination, no AI system produces perfectly accurate output 100% of the time.
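To review internal and public citations separately, the sources array can be bucketed by source_type. A short example (the sample data here is made up):

```python
from collections import defaultdict

def group_sources(sources: list) -> dict:
    """Bucket a Mave response's sources by source_type
    (web, news, seo, knowledge_base)."""
    grouped = defaultdict(list)
    for source in sources:
        grouped[source["source_type"]].append(source)
    return grouped

sources = [
    {"title": "EV Sales Report", "url": "https://example.com/report", "source_type": "web"},
    {"title": "Tesla Earnings", "url": "https://example.com/earnings", "source_type": "news"},
    {"title": "Battery Costs", "url": "https://example.com/battery", "source_type": "web"},
]
by_type = group_sources(sources)
```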

CircleMind Knowledge Base

CircleMind is Mavera’s knowledge base layer. When configured, Mave automatically searches your custom knowledge alongside public web data, grounding responses in your proprietary information.

How It Works

  1. Upload documents to your CircleMind knowledge base (PDFs, docs, text files)
  2. Mave automatically queries your knowledge base during the Research phase
  3. Results are blended with web and news data, with source attribution

Querying with Knowledge Base

response = requests.post(
    "https://app.mavera.io/api/v1/mave/chat",
    headers={"Authorization": "Bearer mvra_live_your_key_here"},
    json={
        "message": "What does our internal research say about customer churn in Q3?",
        "use_knowledge_base": True
    }
)

for source in response.json()["sources"]:
    if source["source_type"] == "knowledge_base":
        print(f"Internal source: {source['title']}")
CircleMind knowledge base queries are included in Mave’s credit cost — no additional charge per knowledge base lookup. See CircleMind docs for setup instructions.

Data Sources

Mave can access multiple data sources during the Research phase:
| Source | Provider | Data | When Used |
|---|---|---|---|
| Web Search | Tavily | Real-time web pages, articles, documentation | Most queries |
| News Search | Perigon | Recent news articles, press releases | Current events, market updates |
| SEO Data | SEMrush | Domain metrics, traffic, keyword rankings | Competitive analysis |
| Knowledge Base | CircleMind | Your uploaded documents | When use_knowledge_base: true |

Thread Management

List Threads

response = requests.get(
    "https://app.mavera.io/api/v1/mave/threads",
    headers={"Authorization": "Bearer mvra_live_your_key_here"}
)

for thread in response.json()["data"]:
    print(f"{thread['id']}: {thread['title']} ({thread['message_count']} messages)")

Get Thread Details

curl https://app.mavera.io/api/v1/mave/threads/mave_thread_abc123 \
  -H "Authorization: Bearer mvra_live_your_key_here"

Delete Thread

curl -X DELETE https://app.mavera.io/api/v1/mave/threads/mave_thread_abc123 \
  -H "Authorization: Bearer mvra_live_your_key_here"
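The same two operations in Python, as a direct translation of the curl commands above (the thread ID is a placeholder):

```python
import requests

BASE_URL = "https://app.mavera.io/api/v1/mave/threads"
HEADERS = {"Authorization": "Bearer mvra_live_your_key_here"}

def get_thread(thread_id: str) -> dict:
    """GET one thread's details (mirrors the curl command above)."""
    resp = requests.get(f"{BASE_URL}/{thread_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

def delete_thread(thread_id: str) -> None:
    """DELETE a thread when its research context is no longer needed."""
    resp = requests.delete(f"{BASE_URL}/{thread_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
```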

Response Format

{
  "thread_id": "mave_thread_abc123",
  "message_id": "msg_xyz789",
  "content": "## Electric Vehicle Market Analysis\n\nBased on my research across multiple sources...",
  "sources": [
    {
      "title": "European EV Sales Report 2024",
      "url": "https://example.com/report",
      "snippet": "EV adoption in Europe grew 25% year-over-year...",
      "source_type": "web"
    }
  ],
  "personas_used": [
    {
      "id": "persona_123",
      "name": "Industry Analyst"
    }
  ],
  "validation": {
    "passed": true,
    "confidence_score": 0.89,
    "hallucination_risk": "low",
    "unsupported_claims": []
  },
  "usage": {
    "credits_used": 35
  }
}
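If you want typed access rather than raw dicts, the fields shown above map onto simple dataclasses. This is an illustrative sketch that assumes exactly the fields documented here; extra fields the API may add later would need to be filtered first:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Source:
    title: str
    url: str
    snippet: str
    source_type: str

@dataclass
class Validation:
    passed: bool
    confidence_score: float
    hallucination_risk: str
    unsupported_claims: List[str] = field(default_factory=list)

def parse_mave_response(raw: dict) -> Tuple[List[Source], Validation]:
    """Extract typed sources and validation from a raw Mave response dict."""
    sources = [Source(**s) for s in raw["sources"]]
    validation = Validation(**raw["validation"])
    return sources, validation
```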

Validation Object

The validation field provides transparency into Mave’s self-assessment:
| Field | Type | Description |
|---|---|---|
| passed | boolean | Whether the response passed the reality check |
| confidence_score | number (0–1) | Overall confidence in the accuracy of the response |
| hallucination_risk | string | "low", "medium", or "high" |
| unsupported_claims | array | Statements that couldn't be verified by sources |
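These fields make it straightforward to gate automated use of Mave output. A minimal check might look like this; the 0.7 threshold mirrors the guidance under Best Practices and should be tuned to your own risk tolerance:

```python
def needs_review(validation: dict, min_confidence: float = 0.7) -> bool:
    """Return True when a Mave response warrants human review before use."""
    return (
        not validation["passed"]
        or validation["confidence_score"] < min_confidence
        or validation["hallucination_risk"] != "low"
        or bool(validation["unsupported_claims"])
    )

# Example validation objects (made-up values for illustration):
ok = {"passed": True, "confidence_score": 0.89,
      "hallucination_risk": "low", "unsupported_claims": []}
shaky = {"passed": True, "confidence_score": 0.55,
         "hallucination_risk": "medium", "unsupported_claims": ["claim X"]}
```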

When to Use Mave vs Chat

| Need | Use Chat | Use Mave |
|---|---|---|
| Persona perspective on a topic | Yes | |
| Real-time web data with citations | | Yes |
| Quick conversational interaction | Yes | |
| Multi-source research report | | Yes |
| Structured JSON output | Yes | |
| Fact-checked analysis | | Yes |
| Tool calling / function calling | Yes | |
| Knowledge base integration | | Yes |
| Cost per query | 1–5 credits | 10–75 credits |
| Response time | < 3 seconds | 5–30 seconds |
Combine both: Use Mave to research a topic, then use Chat with a specific persona to explore how that audience would react to the findings. This gives you both factual grounding and audience perspective.

Credit Usage

Mave queries are more expensive than Chat because they involve multiple research phases and data source calls.
| Query Complexity | Typical Cost | Example |
|---|---|---|
| Simple | 10–15 credits | "What is the population of France?" |
| Moderate | 20–30 credits | "Summarize recent AI regulation in the EU" |
| Complex | 30–50 credits | "Compare market strategies of top 5 EV brands in Europe" |
| Strategic | 40–75 credits | "Full competitive landscape analysis for fintech in Southeast Asia" |
Follow-up messages in the same thread typically cost 30–50% less than the initial query.
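Because costs vary with complexity, long-running jobs may want a simple budget guard that sums credits_used from each response. This is an illustrative sketch; the budget figure is an arbitrary example, and 75 credits is the documented upper bound for a strategic query:

```python
class CreditTracker:
    """Sum credits_used across Mave responses and stop before a budget is hit."""

    def __init__(self, budget: int):
        self.budget = budget
        self.spent = 0

    def record(self, response: dict) -> None:
        # Each Mave response reports its cost in the usage object.
        self.spent += response["usage"]["credits_used"]

    def can_afford(self, worst_case: int = 75) -> bool:
        # Assume the next query could cost the strategic-tier maximum.
        return self.spent + worst_case <= self.budget

tracker = CreditTracker(budget=120)
tracker.record({"usage": {"credits_used": 35}})
```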

Best Practices

- Clear, specific questions lead to better research. "Analyze Tesla's market position in Germany in 2024" produces a more focused, useful report than "Tell me about Tesla."
- Continue conversations in the same thread to maintain context and reduce redundant research. Mave can reference earlier findings in follow-up responses.
- Review validation.confidence_score and validation.hallucination_risk before relying on the output for important decisions. Scores below 0.7 warrant additional verification.
- Complex and strategic queries can take 15–30 seconds. Streaming lets you display progress to users in real time instead of showing a loading spinner.
- When your question involves internal data, enable CircleMind to blend your proprietary knowledge with public web research.
- Don't use Mave when you just need a persona's perspective — that's what Chat is for. Use Mave when you need researched, cited, fact-checked information.

Next Steps

- Quickstart: Mave (first Mave query in 5 minutes)
- Mave Research Report (build a full report from a Mave session)
- CircleMind (set up your knowledge base)
- API Reference (full API specification)