Installation

pip install openai
Requires Python 3.8 or later. The OpenAI SDK works with Mavera's Responses API via a base URL override.

Install Troubleshooting

Ensure you install the official OpenAI package: pip install openai. Avoid similarly named packages like openai-api or openai-python. Use pip show openai to verify.
If you’re behind a corporate proxy, set REQUESTS_CA_BUNDLE or SSL_CERT_FILE. For local development, you can use OPENAI_BASE_URL to point at a proxy.
Pin a known-good version: pip install "openai>=1.0.0,<2" (quote the spec so your shell doesn't interpret the < and > characters). Check PyPI for the latest.
Recommended: python -m venv venv then source venv/bin/activate (or venv\Scripts\activate on Windows) before pip install openai.

Migrating from OpenAI

If you’re switching from OpenAI to Mavera, change only these:
  1. Base URL — Set base_url="https://app.mavera.io/api/v1"
  2. API key — Use your Mavera key (starts with mvra_live_)
  3. Model — Use mavera-1
  4. Persona — Add extra_body={"persona_id": "..."} to every responses.create() call
Your existing streaming, tools, and structured outputs work with minor adjustments. See Migrate OpenAI to Mavera for the full guide.
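The checklist above can be collected into a small helper so the Mavera-specific arguments live in one place. This is a sketch only; MAVERA_BASE_URL and mavera_kwargs are illustrative names, not symbols from either SDK:

```python
# The migration changes, as placeholder constants and a helper.
MAVERA_BASE_URL = "https://app.mavera.io/api/v1"

def mavera_kwargs(persona_id: str) -> dict:
    """Arguments that change on every responses.create() call after migrating."""
    return {
        "model": "mavera-1",                       # was e.g. an OpenAI model name
        "extra_body": {"persona_id": persona_id},  # new: required by Mavera
    }
```

You can then splat these into each call, e.g. client.responses.create(input=question, **mavera_kwargs(persona_id)).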

Configuration

from openai import OpenAI

client = OpenAI(
    api_key="mvra_live_your_key_here",
    base_url="https://app.mavera.io/api/v1",
)
Store your API key in an environment variable: MAVERA_API_KEY
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MAVERA_API_KEY"],
    base_url="https://app.mavera.io/api/v1",
)
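Reading the key at startup and failing fast gives a clearer error than an authentication failure on the first request. A minimal sketch; load_api_key is an illustrative helper, not part of the SDK:

```python
import os

def load_api_key() -> str:
    """Read the Mavera key from the environment, failing fast with a clear message."""
    key = os.environ.get("MAVERA_API_KEY")
    if not key:
        raise RuntimeError("Set the MAVERA_API_KEY environment variable")
    if not key.startswith("mvra_live_"):
        raise RuntimeError("MAVERA_API_KEY does not look like a live key (mvra_live_...)")
    return key
```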

Responses API

Basic Usage

response = client.responses.create(
    model="mavera-1",
    input="What are the latest trends in AI?",
    instructions="You are a helpful assistant.",
    extra_body={"persona_id": "YOUR_PERSONA_ID"},
)

print(response.output[0].content[0].text)
print(f"Credits used: {response.usage.credits_used}")

Streaming

with client.responses.stream(
    model="mavera-1",
    input="Write a short story",
    extra_body={"persona_id": "YOUR_PERSONA_ID"},
) as stream:
    for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
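If you also need the complete text after streaming (for logging or storage), you can accumulate the deltas as they arrive. A sketch that works on any iterable of events shaped like the SDK's streaming events; collect_stream_text is an illustrative name:

```python
def collect_stream_text(events) -> str:
    """Print text deltas incrementally while also accumulating the full output."""
    chunks = []
    for event in events:
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
            chunks.append(event.delta)
    return "".join(chunks)
```

Pass the stream object from the with-block in place of events.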

With Analysis Mode

response = client.responses.create(
    model="mavera-1",
    input="How do millennials feel about remote work?",
    extra_body={
        "persona_id": "YOUR_PERSONA_ID",
        "analysis_mode": True,
        "reasoning_effort": "high",
    },
)

analysis = response.analysis
print(f"Confidence: {analysis['confidence']}/10")

REST API Endpoints

For endpoints outside the Responses API, use requests or httpx:
import requests

headers = {"Authorization": "Bearer mvra_live_your_key_here"}
base_url = "https://app.mavera.io/api/v1"

# List personas
response = requests.get(f"{base_url}/personas", headers=headers)
personas = response.json()["data"]

# Mave Agent
response = requests.post(
    f"{base_url}/mave/chat",
    headers=headers,
    json={"message": "Analyze the EV market"}
)
result = response.json()

# Create Focus Group
response = requests.post(
    f"{base_url}/focus-groups",
    headers=headers,
    json={
        "name": "Product Feedback",
        "sample_size": 50,
        "persona_ids": ["persona_1", "persona_2"],
        "questions": [...]
    }
)
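The raw calls above repeat the auth header and never check the HTTP status, so a failed request would be parsed as if it succeeded. A small wrapper keeps both in one place; auth_headers and mavera_get are illustrative names, not part of any SDK:

```python
import requests

BASE_URL = "https://app.mavera.io/api/v1"

def auth_headers(api_key: str) -> dict:
    """Build the bearer-token header every Mavera REST call needs."""
    return {"Authorization": f"Bearer {api_key}"}

def mavera_get(path: str, api_key: str, **params):
    """GET a REST endpoint, raising on HTTP errors instead of silently parsing them."""
    resp = requests.get(
        f"{BASE_URL}{path}",
        headers=auth_headers(api_key),
        params=params,
        timeout=30,
    )
    resp.raise_for_status()  # surfaces 401/402/429 as exceptions
    return resp.json()
```

For example: personas = mavera_get("/personas", api_key)["data"].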

Async Support

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="mvra_live_your_key_here",
    base_url="https://app.mavera.io/api/v1",
)

async def main():
    response = await client.responses.create(
        model="mavera-1",
        input="Hello!",
        extra_body={"persona_id": "YOUR_PERSONA_ID"},
    )
    print(response.output[0].content[0].text)

asyncio.run(main())
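The main payoff of the async client is running several requests concurrently, for example one per persona. A sketch of the fan-out pattern; ask_all is an illustrative helper, and ask would wrap client.responses.create for a given persona ID:

```python
import asyncio

async def ask_all(ask, persona_ids):
    """Fan out one request per persona concurrently; results return in input order."""
    return await asyncio.gather(*(ask(pid) for pid in persona_ids))
```

In practice ask might be: async def ask(pid): return await client.responses.create(model="mavera-1", input=question, extra_body={"persona_id": pid}).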

Error Handling

from openai import APIError, AuthenticationError, RateLimitError

try:
    response = client.responses.create(...)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError:
    print("Rate limited - implement backoff")
except APIError as e:
    if e.status_code == 402:
        print("Insufficient credits")
    else:
        print(f"API error: {e}")
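For RateLimitError, the usual fix is exponential backoff with jitter. A generic sketch, with the exception type and sleep function injectable so it can wrap any call; with_backoff is an illustrative name:

```python
import random
import time

def with_backoff(call, retries=5, base_delay=1.0, retry_on=(Exception,), sleep=time.sleep):
    """Invoke `call`, retrying with exponential backoff plus jitter on the given exceptions."""
    for attempt in range(retries):
        try:
            return call()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries: let the caller see the error
            sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

In practice: with_backoff(lambda: client.responses.create(...), retry_on=(RateLimitError,)).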

Type Hints

The OpenAI SDK ships with full type hints, so editors and type checkers can autocomplete client methods and response attributes:
from openai import OpenAI

client = OpenAI(
    api_key="mvra_live_your_key_here",
    base_url="https://app.mavera.io/api/v1",
)

response = client.responses.create(
    model="mavera-1",
    input="Hello",
    extra_body={"persona_id": "YOUR_PERSONA_ID"},
)

Full Example Script

A complete script you can run from the command line — lists personas, picks one, sends a chat, and prints the response:
# mavera_chat.py — run with: python mavera_chat.py "Your question here"
# Requires: pip install openai requests
import os
import sys
import requests
from openai import OpenAI

def main():
    api_key = os.environ.get("MAVERA_API_KEY")
    if not api_key:
        print("Set MAVERA_API_KEY environment variable")
        sys.exit(1)

    client = OpenAI(api_key=api_key, base_url="https://app.mavera.io/api/v1")

    # List personas and pick first
    resp = requests.get(
        "https://app.mavera.io/api/v1/personas",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    personas = resp.json().get("data", [])
    if not personas:
        print("No personas found")
        sys.exit(1)
    persona_id = personas[0]["id"]
    print(f"Using persona: {personas[0]['name']}\n")

    # Chat
    question = sys.argv[1] if len(sys.argv) > 1 else "What matters most to you when choosing a product?"
    response = client.responses.create(
        model="mavera-1",
        input=question,
        extra_body={"persona_id": persona_id},
    )

    print(response.output[0].content[0].text)
    print(f"\nCredits used: {response.usage.credits_used}")

if __name__ == "__main__":
    main()
Run: MAVERA_API_KEY=mvra_live_xxx python mavera_chat.py "How do you feel about subscription pricing?"

Next Steps

Quickstart: Chat — First chat in 5 minutes
Migrate OpenAI → Mavera — Switching from OpenAI
Responses API — Full feature reference
API Reference — Responses API spec