Quick Start
The Hypereal API provides access to powerful media generation models. All requests require Bearer authentication with your API key.
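In Python, those two required headers can be wrapped in a small helper (a sketch; the key shown is a placeholder):

```python
def auth_headers(api_key: str) -> dict:
    """Build the headers every Hypereal API request needs."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Example:
headers = auth_headers("YOUR_API_KEY")
# headers["Authorization"] == "Bearer YOUR_API_KEY"
```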
curl -X POST https://api.hypereal.cloud/v1/images/generate \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a sunset over the ocean",
"model": "gpt-image-2",
"mode": "fast"
}'

curl -X POST https://api.hypereal.cloud/v1/videos/generate \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "wan-2-6-i2v",
"mode": "fast",
"input": {
"prompt": "a cat walking",
"image": "https://example.com/cat.jpg"
}
}'

/api/v1/chat
LLM Chat
Powerful LLM chat API with streaming support. Ideal for building chatbots, content generation, code assistance, and any conversational AI application.
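Streamed responses arrive as server-sent events. Assuming the standard OpenAI-style wire format of `data: <json>` lines terminated by `data: [DONE]` (an assumption; this page does not spell out the framing), a minimal parser looks like:

```python
import json

def parse_sse_lines(lines):
    """Yield parsed JSON payloads from OpenAI-style SSE lines.

    Assumes each event is a single 'data: <json>' line and the stream
    ends with 'data: [DONE]' (an assumption, not confirmed by these docs).
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blanks and keep-alive comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)

# Example with two content chunks:
chunks = list(parse_sse_lines([
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]))
text = "".join(c["choices"][0]["delta"].get("content", "") for c in chunks)
# text == "Hello"
```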
Available Models
Request Body
messages: an array of message objects, each with role and content fields.

Pricing
Pricing varies by model. See the model list above for per-model input/output rates. Credits scale with token usage. See full pricing.
curl -X POST https://api.hypereal.cloud/v1/chat \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"messages": [
{"role": "user", "content": "Hello, how are you?"}
],
"stream": false
}'

Response:

{
"id": "chatcmpl-abc123",
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! I'm doing great..."
},
"finish_reason": "stop"
}],
"creditsUsed": 2
}

const response = await fetch('https://api.hypereal.cloud/v1/chat', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': 'Bearer YOUR_API_KEY'
},
body: JSON.stringify({
messages: [{ role: 'user', content: 'Hi!' }],
stream: true
})
});
const decoder = new TextDecoder();
const reader = response.body.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
// Decode the bytes and process the SSE lines they contain
console.log(decoder.decode(value, { stream: true }));
}

/api/v1/messages
Anthropic Messages API
Anthropic-compatible /v1/messages endpoint for Claude Code, OpenCode, and any Anthropic SDK client. Supports streaming, tools, and extended thinking.
Billed per token from your credit balance. Not included in subscription plans.
Supported Models
Claude Code Setup
Set these environment variables and launch Claude Code:
export ANTHROPIC_BASE_URL=https://api.hypereal.cloud/api
export ANTHROPIC_API_KEY=ck_YOUR_API_KEY
claude
curl -X POST https://api.hypereal.cloud/api/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: ck_YOUR_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"stream": true,
"messages": [
{"role": "user", "content": "Hello!"}
]
}'

import anthropic
client = anthropic.Anthropic(
base_url="https://api.hypereal.cloud/api",
api_key="ck_YOUR_API_KEY",
)
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)

/api/v1/chat/completions
Chat Completions API
OpenAI-compatible /v1/chat/completions endpoint for Codex CLI, Cursor, Continue, and any OpenAI SDK client. Full support for all GPT-5 and Codex models.
Billed per token from your credit balance. Not included in subscription plans.
Supported Models (selection)
Codex Models – optimized for coding
o-series – reasoning models
GPT-5 family – general purpose
56 models are supported, including all GPT-3.5, GPT-4, GPT-4o, and GPT-5 variants. Send a GET request to /api/v1/chat/completions for the full model list and pricing.
Codex CLI Setup
Codex CLI speaks OpenAI's Responses API by default. Configure it as a custom provider pointing at /api/v1/responses.
Step 1 – Install
npm install -g @openai/codex # or: brew install codex
Step 2 – Generate an API key at manage-api-keys (must start with ck_)
Step 3 – Write ~/.codex/config.toml
model = "gpt-5-codex"
model_provider = "hypereal"

[model_providers.hypereal]
name = "Hypereal"
base_url = "https://hypereal.cloud/api/v1"
env_key = "HYPEREAL_API_KEY"
wire_api = "responses"
Note: base_url stops at /api/v1, and wire_api = "responses" is required.
Step 4 – Export the key (name must match env_key)
export HYPEREAL_API_KEY="ck_your_key_here"
# Persist across shells:
echo 'export HYPEREAL_API_KEY="ck_your_key_here"' >> ~/.zshrc
Step 5 – Run
codex
# or one-shot:
codex exec "refactor this function"
Verify
curl https://hypereal.cloud/api/v1/responses
Returns JSON with supported models. If this fails, Codex CLI also cannot connect.
Troubleshooting
- 401 unauthorized: key missing the ck_ prefix, or env var empty (echo $HYPEREAL_API_KEY to verify).
- missing env HYPEREAL_API_KEY: export it in the shell where you run codex.
- stuck on "connecting": wire_api missing or wrong.
- MCP client failed to start: harmless warning from a pre-existing MCP config; silence it with mcp_servers = {}.
- 402 insufficient credits: top up at settings (min 200 credits).
curl -X POST https://api.hypereal.cloud/api/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ck_YOUR_API_KEY" \
-d '{
"model": "gpt-5",
"stream": true,
"messages": [
{"role": "user", "content": "Hello!"}
]
}'

from openai import OpenAI
client = OpenAI(
base_url="https://api.hypereal.cloud/api/v1",
api_key="ck_YOUR_API_KEY",
)
response = client.chat.completions.create(
model="gpt-5",
messages=[{"role": "user", "content": "Hello!"}],
stream=True,
)
for chunk in response:
print(chunk.choices[0].delta.content or "", end="")

/api/v1/images/generate
Generate Images
Generate images using state-of-the-art AI models. This endpoint processes requests synchronously and returns the generated image URL directly in the response.
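The synchronous flow can be sketched in Python using only the standard library; the function names here are illustrative, not part of an official SDK:

```python
import json
import urllib.request

API_URL = "https://api.hypereal.cloud/v1/images/generate"

def build_image_request(api_key, prompt, model="gpt-image-2", mode="auto",
                        aspect_ratio=None):
    """Assemble headers and JSON body for a synchronous image request."""
    body = {"prompt": prompt, "model": model, "mode": mode}
    if aspect_ratio:
        body["aspect_ratio"] = aspect_ratio
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return headers, body

def generate_image(api_key, prompt, **kwargs):
    """POST the request and return the generated image URL."""
    headers, body = build_image_request(api_key, prompt, **kwargs)
    req = urllib.request.Request(API_URL, data=json.dumps(body).encode(),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req, timeout=120) as resp:
        data = json.load(resp)
    return data["data"][0]["url"]  # URL is returned directly (synchronous)
```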
Request Body
model: the model ID (e.g. gpt-image-2).
mode: auto (default) or fast. Auto selects the best provider. Fast uses Wavespeed direct (no queue, surcharge applies).

Response Fields
Each entry in data contains url and model.

curl -X POST https://api.hypereal.cloud/v1/images/generate \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"prompt": "A futuristic cityscape at sunset with flying cars",
"model": "gpt-image-2",
"mode": "fast",
"aspect_ratio": "16:9"
}'

Response:

{
"created": 1766841456,
"data": [
{
"url": "https://pub-xxx.r2.dev/generated/images/xxx.png",
"model": "gpt-image-2"
}
],
"resultId": "res_abc123456",
"creditsUsed": 4
}

/api/v1/videos/generate
Generate Videos
Generate videos using state-of-the-art AI models. This endpoint supports both text-to-video and image-to-video generation with webhook callbacks for async delivery.
Request Body
model: the model ID (e.g. veo-3-1-i2v).
mode: auto (default) or fast. Auto selects the best provider. Fast uses Wavespeed direct (no queue, surcharge applies).

Response Fields
curl -X POST https://api.hypereal.cloud/v1/videos/generate \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "veo-3-1-i2v",
"mode": "fast",
"input": {
"prompt": "Camera slowly pans across the scene",
"image": "https://hypereal.tech/demo-girl.webp",
"duration": 5
},
"webhook_url": "https://your-server.com/webhook"
}'

Response:

{
"jobId": "job_abc123456",
"status": "processing",
"message": "Generation started. Result will be sent to your webhook URL.",
"creditsUsed": 69
}

/api/v1/audio/generate
Generate Audio
Text-to-speech, voice cloning, and speech recognition. Supports 64+ emotional expressions for natural-sounding speech synthesis.
Available Models
Emotion Syntax
Wrap emotions in parentheses at the start of text:
(happy) Hello! • (sad) I miss you • (excited)(laughing) Amazing!

curl -X POST https://api.hypereal.cloud/api/v1/audio/generate \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "audio-tts",
"input": {
"text": "(happy) Welcome to Hypereal!",
"format": "mp3"
}
}'

Voice cloning request body:

{
"model": "audio-clone",
"input": {
"text": "Clone my voice!",
"audio": "https://example.com/voice.mp3"
}
}

Speech recognition response:

{
"success": true,
"text": "Hello, welcome to Hypereal.",
"duration": 2.5,
"segments": [
{"text": "Hello,", "start": 0.0, "end": 0.8},
{"text": "welcome to Hypereal.", "start": 0.9, "end": 2.5}
]
}

/api/v1/3d/generate
Generate 3D Models
Convert images or text to 3D models using state-of-the-art AI. Generate high-quality GLB models ready for use in games, AR/VR, and web applications.
Available Models
Output Format
All 3D models are returned in GLB format, which is widely supported by 3D viewers, game engines, and web frameworks.
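Since every result is GLB, a quick client-side sanity check against the glTF 2.0 binary header (ASCII magic "glTF" followed by a uint32 version) can catch truncated downloads. This validator is illustrative, not part of the Hypereal API:

```python
import struct

def is_glb(data: bytes) -> bool:
    """Check the 12-byte GLB header: magic 'glTF', uint32 version, uint32 length."""
    if len(data) < 12:
        return False
    magic, version, _length = struct.unpack("<4sII", data[:12])
    return magic == b"glTF" and version == 2

# Example: a minimal well-formed header for glTF 2.0, total length 12
header = struct.pack("<4sII", b"glTF", 2, 12)
# is_glb(header) -> True
```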
curl -X POST https://api.hypereal.cloud/api/v1/3d/generate \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "hunyuan3d-v2-base",
"input": {
"image": "https://example.com/object.png"
}
}'

Multi-view request body:

{
"model": "tripo3d-multiview-to-3d",
"input": {
"front_image_url": "https://example.com/front.png",
"back_image_url": "https://example.com/back.png",
"left_image_url": "https://example.com/left.png",
"right_image_url": "https://example.com/right.png"
}
}

Response:

{
"success": true,
"outputUrl": "https://cdn.hypereal.tech/3d/model.glb",
"creditsUsed": 45
}

Processing Modes
Synchronous (Images)
The /api/v1/images/generate endpoint processes requests synchronously. The generated image URL is returned directly in the response.
Asynchronous (Videos)
The /api/v1/videos/generate endpoint processes requests asynchronously. Provide a webhook_url to receive results when complete, or poll the job status endpoint.
Webhook Delivery
Provide a webhook_url in your request to receive the result when generation completes. We'll POST the result directly to your server.
Webhook Payload
{
"status": "completed",
"outputUrl": "https://cdn.hypereal.tech/output/video.mp4",
"jobId": "job_abc123456",
"type": "video",
"model": "veo-3-1-i2v",
"creditsUsed": 69
}

Example: Node.js Webhook Handler
app.post('/webhook/hypereal', express.json(), (req, res) => {
const { status, outputUrl, jobId, error } = req.body;
if (status === 'completed') {
console.log(`Job ${jobId} completed: ${outputUrl}`);
// Save to database, notify user, etc.
} else if (status === 'failed') {
console.error(`Job ${jobId} failed: ${error}`);
}
// Always return 200 to acknowledge receipt
res.status(200).json({ received: true });
});

Return 200 immediately
Process asynchronously if needed, but respond quickly to avoid timeouts.
Handle idempotency
Use jobId to prevent duplicate processing.
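A minimal sketch of that deduplication, using an in-memory set keyed by jobId (a real service would persist this, e.g. in the database the Node example alludes to):

```python
processed_jobs = set()

def handle_webhook(payload: dict) -> bool:
    """Process a webhook payload once per jobId.

    Returns True if processed, False if it was a duplicate delivery.
    Either way the HTTP handler should still respond 200.
    """
    job_id = payload.get("jobId")
    if job_id in processed_jobs:
        return False  # duplicate: acknowledge, but skip side effects
    processed_jobs.add(job_id)
    if payload.get("status") == "completed":
        pass  # save payload["outputUrl"], notify the user, etc.
    return True
```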
/api/v1/jobs/{id}
Job Polling
If you can't receive webhooks, use the pollUrl returned in the initial response to check job status until complete.
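The polling loop can be sketched like this; fetch_status is a hypothetical stand-in for a GET against the pollUrl, and the status values mirror those shown elsewhere on this page ("processing", "completed", "failed"):

```python
import time

def poll_until_done(fetch_status, interval=5.0, timeout=600.0):
    """Poll a job until it leaves the 'processing' state.

    fetch_status: zero-arg callable returning the job-status JSON as a dict
                  (e.g. a GET against the pollUrl from the initial response).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job.get("status") != "processing":
            return job  # "completed" (with outputUrl) or "failed"
        time.sleep(interval)
    raise TimeoutError("job did not finish within the timeout")
```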
Query Parameters
model: the model used for the job.
type: video or image.

GET /api/v1/jobs/job_abc123?model=veo-3-1-i2v&type=video
{
"status": "completed",
"outputUrl": "https://cdn.hypereal.tech/output/video.mp4",
"jobId": "job_abc123"
}

Supported Models
All available models with their parameters and pricing.
Video Generation (62 models)
Image Generation (32 models)
Image Editing (15 models)
Audio Generation (11 models)
3D Model Generation (9 models)
Error Responses
All endpoints return standard HTTP status codes with error details in JSON format.
Unauthorized
{
"error": "Unauthorized. Please log in..."
}

Bad Request
{
"error": "Model is required"
}

Insufficient Credits
{
"error": "Insufficient credits",
"required": 69,
"available": 10
}

Not Found
{
"error": "Job not found"
}

Rate Limits
Rate limits are enforced per API key on a rolling 1-hour window.
Each API key has a default rate limit of 1,000 requests per hour. Exceeding it returns HTTP 429.
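A client-side retry sketch for handling 429s with exponential backoff (the retry policy is a common pattern, not something these docs mandate; RateLimitError is a hypothetical wrapper for a 429 response from your HTTP client):

```python
import time

class RateLimitError(Exception):
    """Raised when the API responds with HTTP 429."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on RateLimitError, doubling the delay each attempt.

    `call` is any zero-arg function performing one API request.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))
```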
Credits
Credits are deducted when you submit a request (before processing). If the generation fails or returns no output, credits are automatically refunded to your account.
LLM Chat
2+ credits per message
Video Generation
20-150 credits per video
Image Generation
4-25 credits per image
Image Editing
4-14 credits per edit
Audio Generation
107 credits per voice clone
3D Model Generation
45 credits per 3D model
