Drop-in recipes for the official SDKs.
Point the official openai or @anthropic-ai/sdk package at https://hypereal.cloud/api/v1 with your ck_ key. Streaming, tool calling, and structured outputs work with the same code you would write against the upstream APIs — no shims, no wrappers.
Streaming chat completions
SSE chunks pass through unmodified — same delta / finish_reason / data: [DONE] terminator the OpenAI SDK already knows how to parse.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.HYPEREAL_API_KEY, // ck_...
  baseURL: "https://hypereal.cloud/api/v1",
});

const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku about caching." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["HYPEREAL_API_KEY"],  # ck_...
    base_url="https://hypereal.cloud/api/v1",
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about caching."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices:  # some chunks (e.g. usage) carry an empty choices list
        print(chunk.choices[0].delta.content or "", end="", flush=True)
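If you ever consume the stream without an SDK, the raw SSE framing is simple to parse by hand. A minimal sketch in plain Python (no network, operating on example `data:` lines of the shape described above) that reassembles the text deltas and stops at the `[DONE]` sentinel:

```python
import json

def collect_stream_text(sse_lines):
    """Assemble assistant text from raw Chat Completions SSE lines.

    Each event line looks like `data: {...chunk JSON...}`; the stream
    ends with the literal `data: [DONE]` terminator.
    """
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and comment lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                parts.append(delta["content"])
    return "".join(parts)

lines = [
    'data: {"choices":[{"delta":{"role":"assistant","content":""}}]}',
    'data: {"choices":[{"delta":{"content":"Hot "}}]}',
    'data: {"choices":[{"delta":{"content":"cache"}}]}',
    'data: [DONE]',
]
print(collect_stream_text(lines))  # → Hot cache
```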
Tool / function calling
tools, tool_choice, and parallel_tool_calls are forwarded verbatim. tool_calls in the response (and stream deltas) come back unchanged.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.HYPEREAL_API_KEY,
  baseURL: "https://hypereal.cloud/api/v1",
});

const res = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
  tool_choice: "auto",
});

console.log(res.choices[0].message.tool_calls);
// → [{ id: "call_...", function: { name: "get_weather", arguments: '{"city":"Tokyo"}' } }]
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["HYPEREAL_API_KEY"],
    base_url="https://hypereal.cloud/api/v1",
)

res = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    tool_choice="auto",
)

print(res.choices[0].message.tool_calls)
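The returned tool_calls then need to be executed locally and their results sent back as `role: "tool"` messages. A sketch of that dispatch loop, shown on plain dicts with the same field names (the SDK returns typed objects accessed by attribute); the `get_weather` implementation here is a hypothetical stand-in:

```python
import json

# Hypothetical local implementation of the get_weather tool.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21, "conditions": "clear"}

TOOLS = {"get_weather": get_weather}

def run_tool_calls(tool_calls):
    """Execute each tool call and build the `tool` messages to send back."""
    messages = []
    for call in tool_calls:
        fn = TOOLS[call["function"]["name"]]
        # `arguments` arrives as a JSON string, not a parsed object.
        args = json.loads(call["function"]["arguments"])
        result = fn(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        })
    return messages

# Shape mirrors res.choices[0].message.tool_calls from the example above.
calls = [{
    "id": "call_123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'},
}]
print(run_tool_calls(calls))
```

Append these messages (after the assistant message containing the tool_calls) and call `create` again to let the model produce its final answer.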
Structured outputs
Pass response_format with type: json_schema and a json_schema object carrying name, strict, and schema. Schemas are enforced upstream — the model returns a string you can JSON.parse with confidence.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.HYPEREAL_API_KEY,
  baseURL: "https://hypereal.cloud/api/v1",
});

const res = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "Extract: Bill ordered 3 cappuccinos." },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "order",
      strict: true,
      schema: {
        type: "object",
        properties: {
          customer: { type: "string" },
          item: { type: "string" },
          quantity: { type: "integer" },
        },
        required: ["customer", "item", "quantity"],
        additionalProperties: false,
      },
    },
  },
});

const order = JSON.parse(res.choices[0].message.content!);
// → { customer: "Bill", item: "cappuccino", quantity: 3 }
import json
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["HYPEREAL_API_KEY"],
    base_url="https://hypereal.cloud/api/v1",
)

res = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Extract: Bill ordered 3 cappuccinos."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "order",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "customer": {"type": "string"},
                    "item": {"type": "string"},
                    "quantity": {"type": "integer"},
                },
                "required": ["customer", "item", "quantity"],
                "additionalProperties": False,
            },
        },
    },
)

order = json.loads(res.choices[0].message.content)
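Even with strict mode enforced upstream, a cheap local sanity check at the parse boundary keeps type errors out of downstream code. A minimal sketch (our own helper, checking just the required keys and types of the order schema above):

```python
import json

# Required keys and Python types mirroring the `order` schema above.
ORDER_SCHEMA = {
    "customer": str,
    "item": str,
    "quantity": int,
}

def parse_order(raw: str) -> dict:
    """Parse a strict-schema response body and sanity-check its shape."""
    order = json.loads(raw)
    for key, typ in ORDER_SCHEMA.items():
        if not isinstance(order.get(key), typ):
            raise ValueError(f"field {key!r} missing or not {typ.__name__}")
    return order

raw = '{"customer": "Bill", "item": "cappuccino", "quantity": 3}'
print(parse_order(raw))  # → {'customer': 'Bill', 'item': 'cappuccino', 'quantity': 3}
```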
Anthropic Messages SDK
The Anthropic SDK works against /v1/messages with the same ck_ key. Extended thinking, tool use, and streaming are all supported.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: process.env.HYPEREAL_API_KEY, // ck_...
  baseURL: "https://hypereal.cloud/api/v1",
});

const res = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Explain MapReduce in one tweet." }],
});

console.log(res.content[0].type === "text" ? res.content[0].text : "");
import os

from anthropic import Anthropic

client = Anthropic(
    api_key=os.environ["HYPEREAL_API_KEY"],
    base_url="https://hypereal.cloud/api/v1",
)

res = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain MapReduce in one tweet."}],
)

print(res.content[0].text)
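With extended thinking or tool use enabled, `res.content` is a list of mixed block types, so indexing `content[0]` blindly can land on a thinking block. A small helper that keeps only the text blocks — sketched here on plain dicts with the same field names (the SDK returns typed block objects accessed by attribute):

```python
def text_of(content_blocks):
    """Concatenate only the `text` blocks from a Messages response,
    skipping `thinking` and `tool_use` blocks."""
    return "".join(
        block["text"] for block in content_blocks if block["type"] == "text"
    )

blocks = [
    {"type": "thinking", "thinking": "Outline the tweet..."},
    {"type": "text", "text": "MapReduce: map everywhere, reduce somewhere."},
]
print(text_of(blocks))  # → MapReduce: map everywhere, reduce somewhere.
```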
Legacy JSON mode
If you don't have a schema yet, response_format: { type: 'json_object' } still works for older code paths.
// Legacy mode — no schema, just "valid JSON".
const res = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "Respond ONLY with valid JSON." },
    { role: "user", content: "Give me a recipe for pasta carbonara." },
  ],
  response_format: { type: "json_object" },
});
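json_object mode guarantees syntactically valid JSON, but older code paths that predate it often received the payload wrapped in markdown fences. A defensive parser for those paths — a sketch, with a helper name of our own invention:

```python
import json

def parse_json_loose(text: str):
    """Parse model output as JSON, tolerating markdown code fences
    that pre-json_object prompts sometimes elicited."""
    text = text.strip()
    if text.startswith("```"):
        # Drop the opening fence (with optional language tag)...
        text = text.split("\n", 1)[1]
        # ...and the closing fence.
        text = text.rsplit("```", 1)[0]
    return json.loads(text)

print(parse_json_loose('```json\n{"dish": "carbonara"}\n```'))  # → {'dish': 'carbonara'}
```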
Other passthrough fields
The gateway forwards seed, n, stop, logprobs, top_logprobs, presence_penalty, frequency_penalty, parallel_tool_calls, service_tier, and metadata verbatim — anything the OpenAI Chat Completions spec accepts, you can send.
https://hypereal.cloud/api/v1 · Auth: Authorization: Bearer ck_... or x-api-key: ck_... · Billing: pay-as-you-go credits, 100 credits = $1 USD. See full reference for model IDs and per-MTok pricing.
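The two auth header forms and the credit arithmetic above can be captured in a couple of helpers — a sketch, with helper names of our own invention:

```python
def auth_headers(key: str, style: str = "bearer") -> dict:
    """Either header form is accepted by the gateway."""
    if style == "bearer":
        return {"Authorization": f"Bearer {key}"}
    return {"x-api-key": key}

def credits_to_usd(credits: float) -> float:
    """100 credits = $1 USD."""
    return credits / 100

print(auth_headers("ck_example"))   # → {'Authorization': 'Bearer ck_example'}
print(credits_to_usd(250))          # → 2.5
```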