Getting Started
Get up and running with Pnyx in under 5 minutes. This guide will walk you through setting up your API key, making your first request, and understanding the core concepts.
Prerequisites
- A Pnyx account (sign up at pnyx.ai)
- Basic familiarity with REST APIs
- Your preferred programming language or cURL
Step 1: Get Your API Key
- Log in to your Pnyx dashboard
- Navigate to Settings > API Keys
- Click Generate New Key
- Copy your API key and store it somewhere safe
⚠️ Important: Keep your API key secure and never expose it in client-side code or public repositories.
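One simple way to follow this advice is to load the key from an environment variable rather than hard-coding it. This is only a sketch; the variable name PNYX_API_KEY is an arbitrary choice, not something the API requires.

import os

# Illustrative only: read the key from an environment variable (here called
# PNYX_API_KEY) instead of embedding it in source code or client-side bundles.
api_key = os.environ["PNYX_API_KEY"]
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}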
Step 2: Make Your First Request
Using cURL
curl -X POST "https://api.pnyx.ai/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [
      {"role": "user", "content": "Hello, world!"}
    ]
  }'
Using Python
import requests

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

data = {
    "model": "auto",
    "messages": [
        {"role": "user", "content": "Hello, world!"}
    ]
}

response = requests.post(
    "https://api.pnyx.ai/v1/chat/completions",
    headers=headers,
    json=data
)

print(response.json())
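Optionally, you can make the script fail loudly on HTTP errors instead of printing an error body. This is a small addition on top of the example above, not something the API requires.

# Raise an exception for 4xx/5xx status codes before using the body.
response.raise_for_status()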
Using JavaScript
const response = await fetch('https://api.pnyx.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'auto',
    messages: [
      { role: 'user', content: 'Hello, world!' }
    ]
  })
});

const data = await response.json();
console.log(data);
Step 3: Understanding the Response
A successful request returns a response like the one below. Because the example request used "model": "auto", the model field shows the model Pnyx actually routed the request to:
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-3.5-turbo",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
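Continuing the Python example from Step 2, the assistant's reply and the token counts can be read straight out of this structure. This is just a sketch of working with the response shown above.

result = response.json()

# The reply text lives in the first choice's message.
reply = result["choices"][0]["message"]["content"]
print(reply)

# Token usage is handy for tracking cost per request.
print(result["usage"]["total_tokens"])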
Key Concepts
Intelligent Routing
When you use "model": "auto", Pnyx automatically selects the best model for your request based on:
- Cost efficiency - Balance performance with cost
- Response speed - Optimize for latency when needed
- Model capabilities - Route complex tasks to more capable models
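Because the response's model field reports which model was actually used (as in the sample response in Step 3), you can log the routing decision from the same Python example. A minimal sketch:

# With "model": "auto" in the request, this prints the model Pnyx routed to,
# e.g. "gpt-3.5-turbo" in the sample response above.
print(response.json()["model"])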
Model Selection
You can also specify a particular model:
{
  "model": "gpt-4",
  "messages": [...]
}
Available models include:
- gpt-4 - OpenAI's most capable model
- gpt-3.5-turbo - Fast and cost-effective
- claude-3-opus - Anthropic's most capable model
- claude-3-sonnet - Balanced performance and speed
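As a sketch building on the Python example from Step 2, pinning the request to one of these models only changes the model field:

# Same endpoint and headers as before, but with an explicit model instead of "auto".
data = {
    "model": "claude-3-sonnet",
    "messages": [
        {"role": "user", "content": "Hello, world!"}
    ]
}

response = requests.post(
    "https://api.pnyx.ai/v1/chat/completions",
    headers=headers,
    json=data
)

print(response.json())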
Next Steps
Now that you've made your first request, explore these topics:
- API Reference - Complete API documentation
- Best Practices - Optimize your usage
- Examples - Real-world implementation examples
- SDKs - Official client libraries
Need Help?
- Join our Discord community
- Email us at [email protected]
- Check out our GitHub repository