Available Models
Here’s a breakdown of the currently supported models:

| Model ID | Context Length | Max Output | Tool Calling | Vision | Description |
|---|---|---|---|---|---|
| `neuroa/m1-preview` | 128,000 tokens | 8,000 tokens | ✅ Supported | ❌ No | Preview release of our flagship model, optimized for chat, reasoning, and tool use |
Example Usage
To use a specific model, simply set the `model` field in your chat completion request:
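Below is a minimal sketch, assuming the service exposes an OpenAI-compatible chat completions endpoint; the base URL and the `NEUROA_API_KEY` environment variable are placeholders, not official values:

```python
import os

from openai import OpenAI

# Placeholder base URL -- substitute the actual endpoint for your deployment.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key=os.environ["NEUROA_API_KEY"],  # your Bearer token
)

response = client.chat.completions.create(
    model="neuroa/m1-preview",  # select the model via the `model` field
    messages=[
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
    ],
)

print(response.choices[0].message.content)
```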
Features
- Long Context Window: Handles up to 128k tokens, ideal for summarization, retrieval, and multi-part chat sessions.
- Tool Calling Support: Integrates with external functions via OpenAI-style tool/function calling (see the sketch after this list).
- High Output Capacity: Generates up to 8k tokens in a single response.
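The sketch below illustrates OpenAI-style tool calling against this model, under the same assumptions as the earlier example (OpenAI-compatible endpoint, placeholder base URL); the `get_weather` tool is purely illustrative and not part of the API:

```python
import json
import os

from openai import OpenAI

# Placeholder base URL; get_weather is a hypothetical tool used only for illustration.
client = OpenAI(base_url="https://api.example.com/v1", api_key=os.environ["NEUROA_API_KEY"])

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="neuroa/m1-preview",
    messages=[{"role": "user", "content": "What's the weather in Lisbon right now?"}],
    tools=tools,
)

# If the model decided to call a tool, the arguments arrive as a JSON string.
for tool_call in response.choices[0].message.tool_calls or []:
    args = json.loads(tool_call.function.arguments)
    print(tool_call.function.name, args)
```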
Authentication
As with all endpoints, be sure to include your Bearer token in the `Authorization` header of every request; a minimal sketch appears below.

Looking for more models? Stay tuned for updates and additional releases coming soon!
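For reference, here is a sketch of an authenticated request made over raw HTTP, again assuming an OpenAI-compatible `/chat/completions` route and a placeholder base URL:

```python
import os

import requests

# Placeholder base URL; the Bearer token goes in the Authorization header.
url = "https://api.example.com/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ['NEUROA_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "model": "neuroa/m1-preview",
    "messages": [{"role": "user", "content": "Hello!"}],
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```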
