DEV Community

Wanda

Posted on • Originally published at apidog.com

How to use DeepSeek V4: web interface, API setup, and first coding tasks

TL;DR

DeepSeek V4 is available via a web chat interface and an OpenAI-compatible API. For API use, generate an API key, use Bearer token authentication, and send requests to the chat completions endpoint. Set temperature to 0.2 for code/spec outputs and 0.5 for creative tasks. Break complex coding tasks into smaller, sequential prompts. Always test your API integration with Apidog before full implementation.


Introduction

DeepSeek V4 performs well for coding, reasoning, and technical writing. It follows instructions accurately at low temperature, generates clean code with minimal extra output, and responds precisely to constraints in prompts.

This guide explains how to get started with the web UI, configure API access, and leverage the model for practical coding workflows.

Starting with the web interface

The web interface is the quickest way to evaluate V4 before building API integrations.

Getting access:

  1. Visit chat.deepseek.com
  2. Log in with your account.
  3. Select "V4" from the model list in the sidebar.

Prompting tips:

V4 works best with concise, explicit prompts. Define requirements directly:

  • Use: “Write a Python function that…” instead of “Can you help me with…”
  • Specify limits: “Implementation under 100 lines”
  • Restrict output: “Output only the code, no explanation”
  • Ask for assumptions: “List any assumptions you’re making”

Temperature settings:

The web UI doesn't expose the temperature parameter. When using the API, set:

  • 0.2 – Code generation, specs, structured output
  • 0.5 – Exploring alternatives, generating variations
  • 0.7+ – Creative writing, brainstorming

Handling long conversations:

Context accumulates across interactions. If responses become vague or drift, start a new thread for better results. Fresh, focused context improves output.
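The same principle applies over the API: you can keep a conversation focused by trimming older turns before each call. Below is a minimal sketch; the `MAX_TURNS` cutoff is an illustrative value, not a documented DeepSeek limit.

```python
# Sketch: trim conversation history before each API call so stale context
# doesn't accumulate. MAX_TURNS is an arbitrary example value.
MAX_TURNS = 10

def trim_history(messages):
    """Keep the system prompt plus only the most recent turns."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-MAX_TURNS:]

history = [{"role": "system", "content": "You write clean, minimal Python."}]
for i in range(15):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history)  # system prompt + last 10 turns
```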

API setup

Step 1: Create an API key

  1. Go to platform.deepseek.com
  2. Navigate to API Keys.
  3. Create a new key and copy it immediately (it’s shown only once).
  4. Store the key as an environment variable:
   export DEEPSEEK_API_KEY="your-api-key-here"

Step 2: Test with curl

DeepSeek V4 uses an OpenAI-compatible endpoint:

curl https://api.deepseek.com/v1/chat/completions \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v4",
    "messages": [{"role": "user", "content": "Write a Python function that sorts a list of dictionaries by a specified key."}],
    "temperature": 0.2
  }'

Step 3: Python integration

import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # set in step 1
    base_url="https://api.deepseek.com/v1"
)

response = client.chat.completions.create(
    model="deepseek-v4",
    messages=[
        {"role": "system", "content": "You write clean, minimal Python. No explanatory prose unless asked."},
        {"role": "user", "content": "Write a function that renames screenshot files based on their creation timestamp."}
    ],
    temperature=0.2
)

print(response.choices[0].message.content)

Because the endpoints are OpenAI-compatible, the official OpenAI Python client works with DeepSeek's API unchanged; only the base URL and API key differ.

Testing with Apidog

Testing your DeepSeek API calls in Apidog helps identify response format issues early.

Environment setup:

  1. Open Apidog and create a new project.
  2. Go to Environments; create one called “DeepSeek Production.”
  3. Add variable:
    • Name: DEEPSEEK_API_KEY
    • Type: Secret
    • Value: your key

Create a test request:

POST https://api.deepseek.com/v1/chat/completions
Authorization: Bearer {{DEEPSEEK_API_KEY}}
Content-Type: application/json

{
  "model": "deepseek-v4",
  "messages": [
    {
      "role": "system",
      "content": "You are a coding assistant. Respond only with code unless asked for explanation."
    },
    {
      "role": "user",
      "content": "{{user_prompt}}"
    }
  ],
  "temperature": 0.2,
  "max_tokens": 2000
}

Add assertions:

Status code is 200
Response body has field choices
Response body, field choices[0].message.content is not empty
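The same checks translate into a reusable Python validator for use in scripts or CI. This is a sketch; `sample` below is a hand-written payload in the chat-completions response shape, standing in for a live API response.

```python
# Sketch: the Apidog assertions above, expressed as a Python validator.
def validate_chat_response(status_code, body):
    assert status_code == 200, f"unexpected status {status_code}"
    assert "choices" in body, "response missing 'choices'"
    content = body["choices"][0]["message"]["content"]
    assert content, "empty completion content"
    return content

# Hand-written sample in the chat-completions response shape.
sample = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "def add(a, b):\n    return a + b"}}
    ]
}
code = validate_chat_response(200, sample)
```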

Test streaming mode:

To test real-time streaming responses, set "stream": true:

{
  "model": "deepseek-v4",
  "messages": [...],
  "stream": true,
  "temperature": 0.2
}

Apidog can handle streaming; ensure the final content assembles correctly.
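In code, the streamed chunks need to be assembled into the final text. The sketch below isolates that assembly logic; with the OpenAI client you would iterate the result of `client.chat.completions.create(..., stream=True)`, while here `chunks` is a stand-in for that iterator so the logic runs offline.

```python
# Sketch: assembling a streamed chat completion into full text.
from types import SimpleNamespace

def assemble_stream(chunks):
    """Concatenate the delta content fields of streamed chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # some chunks (e.g. the final one) carry no content
            parts.append(delta.content)
    return "".join(parts)

def fake_chunk(text):
    """Stand-in for a streamed chunk in the OpenAI response shape."""
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=text))]
    )

chunks = [fake_chunk("def "), fake_chunk("add"), fake_chunk(None),
          fake_chunk("(a, b): ...")]
full = assemble_stream(chunks)
```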


First coding task: the automation workflow

A practical way to evaluate V4 is by asking it to generate a file automation script. This will test:

  • Whether the model identifies implicit requirements
  • Its ability to handle file operations (a common source of bugs)
  • Whether it asks clarifying questions or makes assumptions

Prompt structure for coding tasks:

Break your request into these phases for optimal results:

Phase 1: Risk assessment

I want to write a Python script that renames files in a folder based on their creation date. 
Before you write any code, list the risks and edge cases I should handle.

Phase 2: Implementation plan

Now write a step-by-step implementation plan. Don't write code yet.

Phase 3: Code

Write the Python script. Requirements:
- Under 120 lines
- Handle the edge cases you listed
- Add a --dry-run flag that shows what would be renamed without making changes
- No external dependencies beyond the standard library

Phase 4: Tests

Write pytest tests for the main renaming logic. Mock the file system.

This stepwise approach yields cleaner, more robust outputs than a single, large prompt.
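The four phases can also be driven programmatically by feeding each response back into a single growing conversation, so each phase sees the output of the previous one. This is a sketch: `ask` is a placeholder for a real `client.chat.completions.create` call, stubbed here so the flow runs offline.

```python
# Sketch: running the four phases as one growing conversation.
PHASES = [
    "List the risks and edge cases I should handle. No code yet.",
    "Now write a step-by-step implementation plan. Don't write code yet.",
    "Write the Python script. Under 120 lines, stdlib only, with a --dry-run flag.",
    "Write pytest tests for the main renaming logic. Mock the file system.",
]

def run_phases(ask, task):
    messages = [{"role": "user", "content": task}]
    outputs = []
    for phase in PHASES:
        messages.append({"role": "user", "content": phase})
        reply = ask(messages)  # one API call per phase
        messages.append({"role": "assistant", "content": reply})
        outputs.append(reply)
    return outputs

# Stub standing in for the API so the flow can be exercised without a key.
outputs = run_phases(lambda msgs: f"reply to {len(msgs)} messages",
                     "Rename files in a folder by creation date.")
```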


Model strengths and limitations

Strengths:

  • Follows format and requirements reliably at low temperature
  • Handles direct, concise instructions without extra preamble
  • Surfaces edge cases when prompted
  • Produces minimal, boilerplate-free code

Considerations:

  • Always review generated code; V4 is not a replacement for code review.
  • Break complex scripts into smaller, sequential tasks.
  • For large multi-file refactoring, consider alternatives like Claude Opus 4.6 or GPT-5 for more predictable results.
  • Higher temperature settings may introduce errors; validate outputs at low temperature.

Rate limits and pricing

Check current rate limits at platform.deepseek.com. DeepSeek’s pricing is competitive with other providers. For batch workflows, DeepSeek V4 offers strong cost efficiency.

For production use, always implement:

  • Retry logic with exponential backoff for HTTP 429 errors (rate limits)
  • Request logging to track token consumption
  • Output validation before deploying generated code
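The retry point above can be sketched as a small wrapper. The delay schedule and attempt count are illustrative defaults, and `send` is any callable that performs the request and raises on a 429.

```python
# Sketch: exponential backoff for rate-limited requests. The delays
# (1s, 2s, 4s, ...) and attempt count are illustrative defaults.
import time

class RateLimitError(Exception):
    """Raised by `send` when the API returns HTTP 429."""

def with_backoff(send, max_attempts=5, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return send()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))
```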

FAQ

Is DeepSeek V4 OpenAI-compatible?

Yes. The chat completions endpoint uses the OpenAI API format. To switch from OpenAI to DeepSeek, update the base URL and API key.

What’s the context window?

DeepSeek V4 supports a large context window suitable for repository-scale code review. Refer to the latest documentation for exact limits.

Can I use DeepSeek V4 for non-coding tasks?

Yes. The model is strong at writing, analysis, and research tasks. Its strengths in structured output and instruction following apply beyond code.

How does V4 compare to Claude Opus 4.6 for coding?

Claude Opus 4.6 leads on SWE-bench at 80.9%. DeepSeek V4 is strong on cost efficiency and large-context inputs, though for complex multi-file refactoring Claude tends to be more predictable. For most dev workflows both are capable; the choice hinges on cost and edge-case handling.

Does the API support function calling?

Yes. DeepSeek V4 supports function calling in the OpenAI format, making it compatible with tool-use workflows in the OpenAI SDK.
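A minimal sketch of a tool definition in the OpenAI format. The `get_weather` function and its schema are illustrative, not part of DeepSeek's API; you pass `tools` to `client.chat.completions.create` alongside the model and messages.

```python
# Sketch: an OpenAI-format tool definition (illustrative example).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# With the client from the Python integration section:
# response = client.chat.completions.create(
#     model="deepseek-v4",
#     messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#     tools=tools,
# )
# If the model chooses the tool, response.choices[0].message.tool_calls
# carries the function name and JSON arguments to execute locally.
```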
