Get to know the available models you can use with the Neuroa AI API. Each model is designed for specific use cases, ranging from natural conversation to advanced tool use and long-context reasoning.

Documentation Index
Fetch the complete documentation index at: https://docs.neuroa.xyz/llms.txt
Use this file to discover all available pages before exploring further.
Available Models
Here’s a breakdown of the models currently supported:

| Model ID | Context Length | Max Output | Tool Calling | Vision | Description |
|---|---|---|---|---|---|
| neuroa/m1-preview | 128,000 tokens | 8,000 tokens | ✅ Supported | ❌ No | Preview release of our flagship model, optimized for chat, reasoning, and tool use |
Example Usage
To use a specific model, set the `model` field in your chat completion request.
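A minimal sketch of such a request, using only the Python standard library. The endpoint path is an assumption modeled on OpenAI-style chat completion APIs (only the model ID `neuroa/m1-preview` and the docs URL come from this page), so check the documentation index for the real route:

```python
import json
import os
import urllib.request

# Hypothetical endpoint path: an assumption modeled on OpenAI-style
# chat completion APIs, not confirmed by this page.
API_URL = "https://api.neuroa.xyz/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "neuroa/m1-preview") -> urllib.request.Request:
    """Build a chat completion request with the `model` field set."""
    payload = {
        "model": model,  # selects a model ID from the table above
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,  # well under the model's 8k output cap
    }
    headers = {
        "Content-Type": "application/json",
        # Bearer token auth, required on all endpoints
        "Authorization": f"Bearer {os.environ.get('NEUROA_API_KEY', '')}",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )

req = build_chat_request("Summarize this document in one sentence.")
# To actually send it: urllib.request.urlopen(req) -- requires a valid API key.
```

Swapping the `model` argument is all it takes to target a different model once more are released.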
Features
- Long Context Window: Handles up to 128k tokens, ideal for summarization, retrieval, and multi-part chat sessions.
- Tool Calling Support: Integrates with external functions via OpenAI-style tool/function calling.
- High Output Capacity: Generate up to 8k tokens in one response.
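Since the docs describe tool calling as OpenAI-style, a tool definition would plausibly look like the following sketch. The function name, its parameters, and the exact request shape are hypothetical illustrations, not confirmed by this page:

```python
# Sketch of an OpenAI-style tool definition, as referenced under
# "Tool Calling Support". The function name and schema are hypothetical.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function exposed to the model
        "description": "Look up the current weather for a city.",
        "parameters": {  # JSON Schema describing the arguments
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Passed alongside the messages in a chat completion request body:
request_body = {
    "model": "neuroa/m1-preview",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [get_weather_tool],
}
```

The model can then respond with a tool call naming `get_weather` and its arguments, which your application executes before continuing the conversation.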
Authentication
As with all endpoints, be sure to include your Bearer token in the `Authorization` header.

Looking for more models? Stay tuned for updates and additional releases coming soon!
