DEV Community

Super Jarvis

Posted on • Originally published at deepseekv4.space

DeepSeek V4 API: Model IDs, Base URL, Thinking, and Tools

DeepSeek V4 is exposed through the DeepSeek OpenAI-compatible API. The current pricing page lists two V4 model IDs:

  • deepseek-v4-pro
  • deepseek-v4-flash

The base URL is:

https://api.deepseek.com

Source: DeepSeek API pricing.

DeepSeek V4 API request pipeline

API integration is mostly about choosing the right model ID, keeping the request shape compatible, and deciding when tools or Thinking should be enabled.

Minimal request shape

Use the chat completions API with one of the V4 model IDs:

{
  "model": "deepseek-v4-flash",
  "messages": [
    {
      "role": "user",
      "content": "Explain DeepSeek V4 Flash pricing."
    }
  ]
}
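That JSON body can be sent with any HTTP client. A stdlib-only Python sketch, assuming the standard OpenAI-compatible `/chat/completions` path (verify the exact path against DeepSeek's API reference; the API key below is a placeholder):

```python
import json
import urllib.request

BASE_URL = "https://api.deepseek.com"

def build_request(model: str, messages: list, api_key: str) -> urllib.request.Request:
    """Build a chat-completions request against the OpenAI-compatible endpoint."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(
    "deepseek-v4-flash",
    [{"role": "user", "content": "Explain DeepSeek V4 Flash pricing."}],
    api_key="sk-...",  # placeholder; use your real key
)
# Send with: urllib.request.urlopen(req)
```

Because the surface is OpenAI-compatible, the OpenAI SDK pointed at this base URL should also work without a custom client.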

Thinking mode

DeepSeek documents Thinking as a per-request option that can be enabled or disabled, with a configurable reasoning effort. Enable Thinking when you want the model to spend more reasoning budget on difficult tasks.

In product terms:

  • Disable Thinking for fast answers and low-cost paths.
  • Enable Thinking for code repair, planning, math, and long analysis.
  • Use Pro when the answer quality ceiling matters more than cost.
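The enable/disable decision above can live in one payload builder. A minimal sketch; note that the field names `thinking` and `reasoning_effort` are assumptions based on the post's description, so check DeepSeek's API reference for the exact schema:

```python
def build_body(model: str, prompt: str, thinking: bool, effort: str = "medium") -> dict:
    """Build a chat-completions body, toggling Thinking per request.

    NOTE: `thinking` and `reasoning_effort` are assumed field names;
    verify against the official DeepSeek API docs before shipping.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if thinking:
        body["thinking"] = {"type": "enabled"}
        body["reasoning_effort"] = effort
    else:
        body["thinking"] = {"type": "disabled"}
    return body

# Fast, low-cost path: Flash without Thinking.
fast = build_body("deepseek-v4-flash", "Summarize this.", thinking=False)
# Quality-ceiling path: Pro with high reasoning effort.
deep = build_body("deepseek-v4-pro", "Fix this failing test.", thinking=True, effort="high")
```

Routing between the two bodies on request type (summarize vs. code repair) keeps the cheap path as the default.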

Tools and web search

DeepSeek V4 can be used behind a tool-enabled chat route. On this site, web search is implemented as a server-side search_web tool whose results are fed back into the model's response. That means web search depends on the site's search provider configuration, not only on DeepSeek itself.
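In an OpenAI-compatible API, such a tool is declared in the request's `tools` array. A sketch of what the site's server-side setup might look like; the `search_web` name mirrors the post, while the parameter schema is an illustrative assumption:

```python
# Tool declaration in the standard OpenAI-compatible function-calling format.
# The model never runs the search itself; it emits a tool call that the
# server executes before continuing the conversation.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web and return the top results as text.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query."},
            },
            "required": ["query"],
        },
    },
}

body = {
    "model": "deepseek-v4-pro",
    "messages": [{"role": "user", "content": "What changed in the V4 pricing?"}],
    "tools": [SEARCH_TOOL],
}
# If the response contains a tool call, run the search server-side, append
# the result as a `tool`-role message, and call the API again.
```

This is why search quality and availability are properties of the server's search provider, not of the model.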

Image upload

The site supports image attachment upload and passes public image references into the chat. The current V4 API documentation primarily describes text, Thinking, tools, JSON output, and FIM surfaces, so verify direct image understanding in your own runtime before promising vision behavior.


