Get to know the available models you can use with the Neuroa AI API. Each model is designed for specific use cases ranging from natural conversation to advanced tool use and long-context reasoning.

Available Models

Here’s a breakdown of the currently supported models:
  • Model ID: neuroa/m1-preview
  • Context Length: 128,000 tokens
  • Max Output: 8,000 tokens
  • Tool Calling: ✅ Supported
  • Vision: ❌ Not supported
  • Description: Preview release of our flagship model, optimized for chat, reasoning, and tool use

Example Usage

To use a specific model, simply set the model field in your chat completion request:
{
  "model": "neuroa/m1-preview",
  "messages": [
    {
      "role": "user",
      "content": "Summarize the key ideas from a long technical article."
    }
  ],
  "tool_choice": "auto"
}
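
For reference, here is a minimal Python sketch of sending the request above with the requests library. The endpoint URL and the NEUROA_API_KEY environment variable are placeholders, not part of the documented API; substitute the values from your own account and API reference.

import os
import requests

# Placeholder endpoint URL: check the API reference for the actual chat completions path.
API_URL = "https://api.neuroa.example/v1/chat/completions"

headers = {
    # Bearer token authentication (see the Authentication section below).
    "Authorization": f"Bearer {os.environ.get('NEUROA_API_KEY', 'YOUR_API_KEY')}",
    "Content-Type": "application/json",
}

payload = {
    "model": "neuroa/m1-preview",
    "messages": [
        {
            "role": "user",
            "content": "Summarize the key ideas from a long technical article.",
        }
    ],
    "tool_choice": "auto",
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json())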

Features

  • Long Context Window: Handles up to 128k tokens, ideal for summarization, retrieval, and multi-part chat sessions.
  • Tool Calling Support: Integrates with external functions via OpenAI-style tool/function calling (see the sketch after this list).
  • High Output Capacity: Generate up to 8k tokens in one response.
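
To illustrate the tool calling support mentioned above, the payload below sketches how a function definition might be attached to a request. The get_weather function and its parameter schema are invented for this example, and the exact tools format is an assumption based on the OpenAI-style convention; consult the tool calling reference before relying on it.

# Hypothetical tool definition in the OpenAI-style function-calling format.
# The get_weather function and its schema are illustrative only.
payload_with_tools = {
    "model": "neuroa/m1-preview",
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin right now?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
    # "auto" lets the model decide whether to call the tool.
    "tool_choice": "auto",
}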

Authentication

As with all endpoints, be sure to include your Bearer token in the Authorization header:
Authorization: Bearer YOUR_API_KEY

Looking for more models? Stay tuned for updates and additional releases coming soon!