OpenAI SDK Compatibility
Use Halfred as a drop-in replacement for OpenAI with existing OpenAI SDK libraries. Simply point your client to Halfred's endpoint.
Why Use OpenAI SDK with Halfred?
Ready-to-use SDKs: Access Halfred through battle-tested OpenAI SDKs available in most common programming languages (Go, PHP, Ruby, Java, .NET, Rust, and more)
Drop-in replacement: Reuse existing OpenAI code; only the base URL, API key, and model name change
Intelligent model routing: Benefit from Halfred's profile-based model selection
Cost optimization: Automatic selection of the best model for your use case
Multi-provider access: Access models from OpenAI, Anthropic, Google, and more through a single interface
Enhanced reliability: Built-in failover and load balancing
💡 Tip: For the most up-to-date OpenAI SDK documentation and library references for your programming language, visit the official OpenAI SDK libraries page. All official and community-maintained OpenAI SDKs listed there are compatible with Halfred.
Quick Setup
Base URL Configuration
Instead of using OpenAI's default endpoint, configure your client to use Halfred's API:
Base URL: https://api.halfred.ai/v1/
API Key: Your Halfred API key (starts with "halfred_")
Language-Specific Examples
JavaScript/TypeScript (Node.js)
Installation
npm install openai
Configuration
import OpenAI from "openai";
const openai = new OpenAI({
  apiKey: "halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  baseURL: "https://api.halfred.ai/v1/",
});

// Use exactly like the OpenAI SDK
async function main() {
  const completion = await openai.chat.completions.create({
    model: "standard", // Use Halfred profiles: lite, standard, deepthink, dev
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "What is the capital of France?" },
    ],
    temperature: 0.7,
  });

  console.log(completion.choices[0].message.content);
}

main();
Environment Variables
# .env file
OPENAI_API_KEY=halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx
OPENAI_BASE_URL=https://api.halfred.ai/v1/
import OpenAI from "openai";
// Automatically uses OPENAI_API_KEY and OPENAI_BASE_URL from environment
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  model: "lite",
  messages: [{ role: "user", content: "Hello!" }],
});
Python
Installation
pip install openai
Configuration
from openai import OpenAI
client = OpenAI(
    api_key="halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    base_url="https://api.halfred.ai/v1/"
)

# Use exactly like the OpenAI client
completion = client.chat.completions.create(
    model="lite",  # Use Halfred profiles: lite, standard, deepthink, dev
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ],
    temperature=0.7
)

print(completion.choices[0].message.content)
Environment Variables
# .env file or environment
export OPENAI_API_KEY="halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export OPENAI_BASE_URL="https://api.halfred.ai/v1/"
from openai import OpenAI
# Automatically uses OPENAI_API_KEY and OPENAI_BASE_URL from environment
client = OpenAI()
completion = client.chat.completions.create(
    model="lite",
    messages=[{"role": "user", "content": "Hello!"}]
)
Go
Installation
go get github.com/sashabaranov/go-openai
Configuration
package main
import (
	"context"
	"fmt"

	"github.com/sashabaranov/go-openai"
)

func main() {
	config := openai.DefaultConfig("halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx")
	config.BaseURL = "https://api.halfred.ai/v1/"
	client := openai.NewClientWithConfig(config)

	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: "standard", // Use Halfred profiles
			Messages: []openai.ChatCompletionMessage{
				{
					Role:    openai.ChatMessageRoleSystem,
					Content: "You are a helpful assistant.",
				},
				{
					Role:    openai.ChatMessageRoleUser,
					Content: "What is the capital of France?",
				},
			},
			Temperature: 0.7,
		},
	)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}

	fmt.Println(resp.Choices[0].Message.Content)
}
PHP
Installation
composer require openai-php/client
Configuration
<?php
require_once 'vendor/autoload.php';
use OpenAI;
$client = OpenAI::factory()
    ->withApiKey('halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    ->withBaseUri('https://api.halfred.ai/v1/')
    ->make();

$response = $client->chat()->create([
    'model' => 'standard', // Use Halfred profiles
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'What is the capital of France?'],
    ],
    'temperature' => 0.7,
]);

echo $response->choices[0]->message->content;
Ruby
Installation
gem install ruby-openai
Configuration
require 'openai'
client = OpenAI::Client.new(
  access_token: 'halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx',
  uri_base: 'https://api.halfred.ai/v1/'
)

response = client.chat(
  parameters: {
    model: 'lite', # Use Halfred profiles
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is the capital of France?' }
    ],
    temperature: 0.7
  }
)

puts response.dig('choices', 0, 'message', 'content')
Model Selection
When using the OpenAI SDK with Halfred, use Halfred's profile names instead of specific model names:
Available Models
lite: Fast and cost-effective for simple tasks
standard: Balanced performance for most applications
deepthink: Advanced reasoning for complex tasks
dev: Free tier for development and testing
Alternative Model Names
You can also use the prefixed format:
halfred-lite, halfred-standard, halfred-deepthink, halfred-dev
Both formats are equivalent and will work identically.
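Because both forms resolve to the same profile, client code can accept either. A minimal sketch of a normalization helper (`normalizeProfile` is illustrative, not part of the Halfred API or the OpenAI SDK):

```javascript
// Illustrative helper: accept either "standard" or "halfred-standard"
// and return the bare profile name.
const PROFILES = ["lite", "standard", "deepthink", "dev"];

function normalizeProfile(model) {
  // Strip the optional "halfred-" prefix.
  const name = model.startsWith("halfred-")
    ? model.slice("halfred-".length)
    : model;
  if (!PROFILES.includes(name)) {
    throw new Error(`Unknown Halfred profile: ${model}`);
  }
  return name;
}

console.log(normalizeProfile("halfred-standard")); // "standard"
console.log(normalizeProfile("lite")); // "lite"
```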
Supported Features
✅ Fully Supported
Chat Completions: Full support for conversational AI
Message Roles: system, user, assistant roles
Temperature: Control randomness (0.0 to 2.0)
Response Format: JSON mode and structured outputs
Token Usage: Accurate token counting and usage statistics
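For JSON mode, the request uses the standard OpenAI `response_format` field. A sketch of the request payload (shown as plain data before it is sent; with a configured client it would go through `openai.chat.completions.create(jsonModeRequest)`):

```javascript
// Sketch: a JSON-mode request body in the standard OpenAI shape.
// Field names follow the OpenAI Chat Completions API; the prompt text
// here is an illustrative example.
const jsonModeRequest = {
  model: "standard",
  response_format: { type: "json_object" }, // ask for a JSON-only reply
  messages: [
    { role: "system", content: "Reply with a JSON object." },
    { role: "user", content: 'List three EU capitals as {"capitals": [...]}.' },
  ],
};

// With a configured client:
// const completion = await openai.chat.completions.create(jsonModeRequest);
```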
Response Format
Halfred returns responses in the exact same format as OpenAI, with additional metadata:
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "provider": "openai",
  "profile": "standard",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 8,
    "total_tokens": 20
  }
}
Additional Fields
Halfred adds these extra fields to the standard OpenAI response:
provider: The underlying AI provider used (e.g., "openai", "anthropic")
profile: The Halfred profile that was selected
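The extra fields ride along on the normal response object. A sketch using a trimmed copy of the sample response above; with the SDK you read them off the completion object the same way (TypeScript users may need a cast, since OpenAI's type definitions do not declare Halfred's extra fields):

```javascript
// Parse a trimmed sample response and read Halfred's extra fields.
const response = JSON.parse(`{
  "id": "chatcmpl-abc123",
  "model": "gpt-4o",
  "provider": "openai",
  "profile": "standard",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "The capital of France is Paris." },
      "finish_reason": "stop"
    }
  ]
}`);

console.log(response.provider); // "openai"
console.log(response.profile);  // "standard"
```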
Migration from OpenAI
Step 1: Update Configuration
Replace your OpenAI configuration:
// Before (OpenAI)
const openai = new OpenAI({
  apiKey: "sk-your-openai-key",
});

// After (Halfred)
const openai = new OpenAI({
  apiKey: "halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  baseURL: "https://api.halfred.ai/v1/",
});
Step 2: Update Model Names
Replace specific OpenAI model names with Halfred profiles:
// Before (OpenAI)
model: "gpt-4o";
model: "gpt-3.5-turbo";
// After (Halfred)
model: "standard"; // Automatically selects best model
model: "lite"; // For cost-effective tasks
Step 3: Test Your Integration
Your existing code should work without any other changes. Test with a simple completion:
const completion = await openai.chat.completions.create({
  model: "dev",
  messages: [{ role: "user", content: "Hello, world!" }],
});

console.log(completion.choices[0].message.content);
Best Practices
Model Selection Strategy
// For simple tasks (cost-effective)
const quickResponse = await openai.chat.completions.create({
  model: "lite",
  messages: [{ role: "user", content: "Summarize this in one sentence." }],
});

// For most applications (balanced)
const standardResponse = await openai.chat.completions.create({
  model: "standard",
  messages: [{ role: "user", content: "Explain quantum computing." }],
});

// For complex reasoning (high-quality)
const deepResponse = await openai.chat.completions.create({
  model: "deepthink",
  messages: [{ role: "user", content: "Analyze this complex business scenario..." }],
});

// For development/testing (free)
const testResponse = await openai.chat.completions.create({
  model: "dev",
  messages: [{ role: "user", content: "Test message" }],
});
Error Handling
try {
  const completion = await openai.chat.completions.create({
    model: "standard",
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(completion.choices[0].message.content);
} catch (error) {
  if (error.status === 401) {
    console.error("Invalid API key");
  } else if (error.status === 429) {
    console.error("Rate limit exceeded or insufficient credits");
  } else {
    console.error("API error:", error.message);
  }
}
Troubleshooting
Common Issues
Authentication Error
Error: 401 Unauthorized
Solution: Ensure you're using a valid Halfred API key (format: halfred_xxxx...).
Invalid Model Error
Error: Model 'gpt-4' not found
Solution: Use Halfred profile names (lite, standard, deepthink, dev) instead of specific model names.
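To catch this before a request leaves your application, a pre-flight guard can reject non-profile model names. A sketch (`assertHalfredModel` is an illustrative helper, not part of the SDK):

```javascript
// Illustrative pre-flight check: Halfred accepts only its profile names
// (bare or "halfred-"-prefixed), so fail fast on anything else.
const HALFRED_MODELS = new Set([
  "lite", "standard", "deepthink", "dev",
  "halfred-lite", "halfred-standard", "halfred-deepthink", "halfred-dev",
]);

function assertHalfredModel(model) {
  if (!HALFRED_MODELS.has(model)) {
    throw new Error(
      `"${model}" is not a Halfred profile; use lite, standard, deepthink, or dev`
    );
  }
}

assertHalfredModel("standard"); // ok
// assertHalfredModel("gpt-4"); // would throw
```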
Base URL Not Set
Error: Connection refused
Solution: Make sure you've set the baseURL to https://api.halfred.ai/v1/.
Timeout Configuration
const openai = new OpenAI({
  apiKey: "halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  baseURL: "https://api.halfred.ai/v1/",
  timeout: 30000, // 30 seconds
});
Retry Logic
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: "halfred_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
baseURL: "https://api.halfred.ai/v1/",
maxRetries: 3,
});Support
For OpenAI SDK compatibility issues:
Check our API Reference for detailed endpoint documentation
Contact support at [email protected]
Join our community Discord