Overview

deepseek-r1 is the right choice when answer quality depends on deep reasoning more than speed. It is slower than the default models, but it performs better on logic-heavy work, difficult analysis, and complex planning.

Specs

Field               Value
Model ID            deepseek-r1
Best for            Reasoning, analysis, math, hard planning
Context window      64K tokens
Max output tokens   32K tokens
Input modalities    Text
Output modalities   Text
Tool calling        Yes
Structured outputs  Yes
Prompt caching      No
Speed               Slow
Cost band           Balanced
Release stage       Stable
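The table lists tool calling and structured outputs as supported. Assuming the endpoint follows the OpenAI-compatible chat schema (the function-tool format and `response_format` field below are assumptions based on that convention, not confirmed by this page, and the tool itself is hypothetical), a request enabling both might be assembled like this:

```python
# Sketch of a request payload for deepseek-r1 that declares one tool
# and asks for JSON-only output. The tool name, description, and
# parameters are hypothetical examples.
payload = {
    "model": "deepseek-r1",
    "messages": [
        {"role": "user", "content": "Check the queue depth and summarize the risk."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_queue_depth",  # hypothetical tool
                "description": "Return the current depth of a named queue.",
                "parameters": {
                    "type": "object",
                    "properties": {"queue": {"type": "string"}},
                    "required": ["queue"],
                },
            },
        }
    ],
    "response_format": {"type": "json_object"},
}
```

This dict is what a client library would serialize as the request body; with the OpenAI SDK it corresponds to keyword arguments to `client.chat.completions.create(...)`.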

Use this when

  • You are solving difficult logic or analysis tasks.
  • You want the strongest reasoning path in the catalog.
  • You can tolerate slower responses in exchange for better thinking quality.

Pick something else when

  • You want a better default for day-to-day work: use qwen-3.6-plus.
  • You need tool-heavy autonomous coding: use kimi-k2.5.
  • You need low latency or large throughput: use qwen-3-32b or gemini-2.5-flash.

Example

from openai import OpenAI

client = OpenAI(base_url="https://kymaapi.com/v1", api_key="ky-...")

response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[
        {"role": "user", "content": "Find the failure modes in this distributed queue design."}
    ]
)

print(response.choices[0].message.content)

Agent query example

curl "https://kymaapi.com/v1/models?reasoning=true&recommended_for=reasoning&release_stage=stable"
Alias      Resolves to
reasoning  deepseek-r1
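The alias row above means clients can request `reasoning` and be routed to deepseek-r1. A minimal client-side sketch, assuming the query parameters from the curl example and the base URL from the Python example (the `resolve_model` helper is illustrative, not part of any SDK):

```python
from urllib.parse import urlencode

# Build the model-catalog query shown in the curl example. Parameter
# names are taken from that example; the base URL from the earlier
# Python snippet.
base = "https://kymaapi.com/v1/models"
params = {
    "reasoning": "true",
    "recommended_for": "reasoning",
    "release_stage": "stable",
}
url = f"{base}?{urlencode(params)}"

# Local copy of the alias table: "reasoning" resolves to deepseek-r1.
ALIASES = {"reasoning": "deepseek-r1"}

def resolve_model(name: str) -> str:
    """Map a catalog alias to its concrete model ID, or pass it through."""
    return ALIASES.get(name, name)

print(url)
print(resolve_model("reasoning"))
```

Passing a concrete model ID through `resolve_model` unchanged keeps the helper safe to apply to every request.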