DEV Community

Apollo
How to integrate DeepSeek R1 into your React app

Integrating DeepSeek R1 into Your React Application: A Comprehensive Technical Guide

DeepSeek R1 is a powerful AI model that brings advanced natural language processing capabilities to applications. This tutorial will walk you through the complete process of integrating DeepSeek R1 into a React application, covering everything from initial setup to advanced implementation patterns.

Prerequisites

Before beginning, ensure you have:

  • Node.js (v16 or later) installed
  • A React project (create-react-app or similar)
  • A DeepSeek API key (obtainable from DeepSeek's developer portal)
  • Basic familiarity with React hooks and async/await

Step 1: Setting Up the Project

First, install the required dependencies:

npm install axios @tanstack/react-query

We'll use Axios for HTTP requests and React Query to manage request state. Note that React Query's hooks require a QueryClientProvider wrapping your component tree, so set one up in your app's entry point if you haven't already.

Step 2: Creating the DeepSeek Service Layer

Create a new file src/services/deepseek.js:

import axios from 'axios';

// DeepSeek exposes an OpenAI-compatible chat completions endpoint;
// R1 is served as the "deepseek-reasoner" model.
const DEEPSEEK_API_URL = 'https://api.deepseek.com/chat/completions';
const API_KEY = process.env.REACT_APP_DEEPSEEK_API_KEY;

const deepseekClient = axios.create({
  baseURL: DEEPSEEK_API_URL,
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${API_KEY}`
  }
});

export const generateCompletion = async (prompt, options = {}) => {
  const defaultOptions = {
    model: 'deepseek-reasoner',
    max_tokens: 150,
    temperature: 0.7,
    ...options
  };

  try {
    const response = await deepseekClient.post('', {
      messages: [{ role: 'user', content: prompt }],
      ...defaultOptions
    });
    return response.data.choices[0].message.content;
  } catch (error) {
    console.error('DeepSeek API error:', error);
    throw error;
  }
};

Store your API key in .env.local:

REACT_APP_DEEPSEEK_API_KEY=your_api_key_here
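Because the key is baked in at build time, a missing or placeholder value fails silently at runtime. A small guard (using a hypothetical assertApiKey helper, not part of the code above) fails fast instead:

```javascript
// Hypothetical helper: fail fast when the key is missing or still the placeholder.
const assertApiKey = (key) => {
  if (!key || key === 'your_api_key_here') {
    throw new Error('REACT_APP_DEEPSEEK_API_KEY is not set');
  }
  return key;
};

// Example: assertApiKey(process.env.REACT_APP_DEEPSEEK_API_KEY);
```

Calling this once at service-module load turns a confusing 401 into an obvious configuration error.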

Step 3: Creating a Custom Hook for DeepSeek

Create src/hooks/useDeepSeek.js:

import { useMutation } from '@tanstack/react-query';
import { generateCompletion } from '../services/deepseek';

export const useDeepSeek = () => {
  const mutation = useMutation({
    mutationFn: ({ prompt, options }) => generateCompletion(prompt, options),
    onError: (error) => {
      console.error('DeepSeek generation failed:', error);
    }
  });

  return {
    generate: mutation.mutate,
    isLoading: mutation.isPending,
    isError: mutation.isError,
    data: mutation.data,
    error: mutation.error
  };
};

Step 4: Building the UI Component

Create a new component src/components/DeepSeekInterface.jsx:

import { useState } from 'react';
import { useDeepSeek } from '../hooks/useDeepSeek';

const DeepSeekInterface = () => {
  const [prompt, setPrompt] = useState('');
  const [temperature, setTemperature] = useState(0.7);
  const [maxTokens, setMaxTokens] = useState(150);
  const { generate, isLoading, isError, data } = useDeepSeek();

  const handleSubmit = (e) => {
    e.preventDefault();
    generate({
      prompt,
      options: {
        temperature: parseFloat(temperature),
        max_tokens: parseInt(maxTokens, 10)
      }
    });
  };

  return (
    <div className="deepseek-container">
      <form onSubmit={handleSubmit}>
        <div className="form-group">
          <label htmlFor="prompt">Prompt:</label>
          <textarea
            id="prompt"
            value={prompt}
            onChange={(e) => setPrompt(e.target.value)}
            rows={5}
            disabled={isLoading}
          />
        </div>

        <div className="params-group">
          <div className="param">
            <label htmlFor="temperature">Temperature (0-1):</label>
            <input
              type="number"
              id="temperature"
              min="0"
              max="1"
              step="0.1"
              value={temperature}
              onChange={(e) => setTemperature(e.target.value)}
            />
          </div>

          <div className="param">
            <label htmlFor="maxTokens">Max Tokens:</label>
            <input
              type="number"
              id="maxTokens"
              min="10"
              max="2048"
              value={maxTokens}
              onChange={(e) => setMaxTokens(e.target.value)}
            />
          </div>
        </div>

        <button type="submit" disabled={isLoading}>
          {isLoading ? 'Generating...' : 'Generate'}
        </button>
      </form>

      {isError && (
        <div className="error">
          An error occurred while generating the response.
        </div>
      )}

      {data && (
        <div className="response">
          <h3>Response:</h3>
          <div className="response-content">{data}</div>
        </div>
      )}
    </div>
  );
};

export default DeepSeekInterface;
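The component trusts the raw input strings; a sanitizer (a hypothetical sanitizeParams helper, not part of the component above) that clamps values to the form's own min/max bounds is a safer way to build the options object:

```javascript
// Hypothetical sanitizeParams: clamp raw form values to the form's own bounds
// (temperature 0-1, max tokens 10-2048), falling back to safe defaults on bad input.
const sanitizeParams = ({ temperature, maxTokens }) => ({
  temperature: Math.min(1, Math.max(0, parseFloat(temperature) || 0)),
  max_tokens: Math.min(2048, Math.max(10, parseInt(maxTokens, 10) || 10))
});

// Example: generate({ prompt, options: sanitizeParams({ temperature, maxTokens }) });
```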

Step 5: Advanced Features - Streaming Responses

For real-time streaming of responses, modify the service layer:

export const generateStreamingCompletion = async (prompt, options = {}, onData) => {
  const defaultOptions = {
    model: 'deepseek-reasoner',
    max_tokens: 150,
    stream: true,
    ...options
  };

  try {
    const response = await fetch(DEEPSEEK_API_URL, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${API_KEY}`
      },
      body: JSON.stringify({
        messages: [{ role: 'user', content: prompt }],
        ...defaultOptions
      })
    });

    if (!response.ok) {
      throw new Error(`DeepSeek API error: ${response.status}`);
    }

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let fullResponse = '';

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      // stream: true lets decode() handle multi-byte characters split across chunks
      const chunk = decoder.decode(value, { stream: true });
      const lines = chunk.split('\n').filter(line => line.trim() !== '');

      for (const line of lines) {
        if (line.startsWith('data: ')) {
          const data = line.replace('data: ', '');
          if (data === '[DONE]') continue;

          try {
            const parsed = JSON.parse(data);
            // streamed chat chunks carry incremental text in choices[0].delta
            const text = parsed.choices[0]?.delta?.content || '';
            fullResponse += text;
            onData(fullResponse);
          } catch (err) {
            console.error('Error parsing stream data:', err);
          }
        }
      }
    }

    return fullResponse;
  } catch (error) {
    console.error('DeepSeek streaming error:', error);
    throw error;
  }
};
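The inner parsing loop can be factored into a pure function, which makes it easy to unit test. This sketch (a hypothetical parseSSEChunk helper, assuming OpenAI-style delta payloads, which DeepSeek's streaming API follows) extracts the text from one decoded chunk:

```javascript
// Hypothetical parseSSEChunk: pull the text deltas out of one decoded SSE chunk.
const parseSSEChunk = (chunk) => {
  let text = '';
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const data = line.slice('data: '.length).trim();
    if (data === '[DONE]') continue;
    try {
      const parsed = JSON.parse(data);
      text += parsed.choices?.[0]?.delta?.content || '';
    } catch (err) {
      // a JSON object split across network chunks will fail to parse; skip it
    }
  }
  return text;
};
```

Note the catch branch: a single JSON object can arrive split across two network chunks, so a production parser should buffer incomplete lines rather than drop them.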

Update the hook to support streaming:

// In src/hooks/useDeepSeekStream.js
import { useState } from 'react';
import { generateStreamingCompletion } from '../services/deepseek';

export const useDeepSeekStream = () => {
  const [streamingResponse, setStreamingResponse] = useState('');
  const [isStreaming, setIsStreaming] = useState(false);

  const generateStream = async ({ prompt, options }) => {
    setStreamingResponse('');
    setIsStreaming(true);

    try {
      await generateStreamingCompletion(prompt, options, (data) => {
        setStreamingResponse(data);
      });
    } finally {
      setIsStreaming(false);
    }
  };

  return {
    generateStream,
    streamingResponse,
    isStreaming,
    resetStream: () => setStreamingResponse('')
  };
};

Step 6: Error Handling and Rate Limiting

Implement a robust error handling system:

// In deepseek.js
const handleRateLimit = async (error) => {
  if (error.response?.status === 429) {
    // Retry-After may be absent; fall back to one second
    const retryAfter = parseInt(error.response.headers['retry-after'], 10) || 1;
    console.log(`Rate limited. Retrying after ${retryAfter} seconds...`);
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
    return true;
  }
  return false;
};

export const generateCompletion = async (prompt, options = {}, retries = 3) => {
  const defaultOptions = {
    model: 'deepseek-reasoner',
    max_tokens: 150,
    ...options
  };

  try {
    const response = await deepseekClient.post('', {
      messages: [{ role: 'user', content: prompt }],
      ...defaultOptions
    });
    return response.data.choices[0].message.content;
  } catch (error) {
    if (await handleRateLimit(error) && retries > 0) {
      return generateCompletion(prompt, options, retries - 1);
    }
    throw error;
  }
};
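The retry above waits a fixed Retry-After interval. An exponential backoff grows the wait with each failed attempt instead; this sketch (a hypothetical backoffDelay helper, not part of the code above) computes the delay:

```javascript
// Hypothetical backoffDelay: exponential backoff with a cap, in milliseconds.
// attempt 0 -> 500ms, 1 -> 1000ms, 2 -> 2000ms, ... capped at 8s.
const backoffDelay = (attempt, baseMs = 500, capMs = 8000) =>
  Math.min(capMs, baseMs * 2 ** attempt);

// Example: await new Promise(r => setTimeout(r, backoffDelay(3 - retries)));
```

Adding random jitter to the delay is also common, so that many clients rate-limited at once don't all retry in lockstep.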

Step 7: Optimizing Performance with Caching

Enhance the custom hook with caching:

// In useDeepSeek.js
import { useCallback } from 'react';
import { useMutation, useQueryClient } from '@tanstack/react-query';
import { generateCompletion } from '../services/deepseek';

export const useDeepSeek = () => {
  const queryClient = useQueryClient();

  const mutation = useMutation({
    mutationFn: ({ prompt, options }) => generateCompletion(prompt, options),
    onSuccess: (data, variables) => {
      // Cache the response
      queryClient.setQueryData(
        ['deepseek', variables.prompt, variables.options],
        data
      );
    },
    onError: (error) => {
      console.error('DeepSeek generation failed:', error);
    }
  });

  const cachedGenerate = useCallback(
    async ({ prompt, options }) => {
      const queryKey = ['deepseek', prompt, options];
      const cached = queryClient.getQueryData(queryKey);

      if (cached) {
        return cached;
      }

      return mutation.mutateAsync({ prompt, options });
    },
    [mutation, queryClient]
  );

  return {
    generate: mutation.mutate,
    generateCached: cachedGenerate,
    isLoading: mutation.isPending,
    isError: mutation.isError,
    data: mutation.data,
    error: mutation.error
  };
};
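The cache key above includes the options object. React Query hashes query keys deterministically (object key order does not matter), which a hand-rolled version makes explicit; this sketch (a hypothetical stableHash helper mirroring that behavior) serializes with sorted keys:

```javascript
// Hypothetical stableHash: serialize with sorted object keys, mirroring how
// React Query hashes query keys so option order cannot cause cache misses.
const stableHash = (queryKey) =>
  JSON.stringify(queryKey, (_, value) =>
    value && typeof value === 'object' && !Array.isArray(value)
      ? Object.keys(value).sort().reduce((acc, k) => ({ ...acc, [k]: value[k] }), {})
      : value
  );
```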

Step 8: Implementing a Chat Interface

For a conversational interface, call the service layer directly: mutation.mutate does not return the completion, so awaiting it would leave the assistant reply undefined:

import { useState } from 'react';
import { generateCompletion } from '../services/deepseek';

const ChatInterface = () => {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  const handleSend = async () => {
    if (!input.trim()) return;

    const userMessage = { role: 'user', content: input };
    const history = [...messages, userMessage];
    setMessages(history);
    setInput('');
    setIsLoading(true);

    try {
      // Fold earlier turns into the prompt so the model sees the conversation
      const prompt = history.map(m => `${m.role}: ${m.content}`).join('\n');
      const response = await generateCompletion(prompt);
      setMessages(prev => [...prev, { role: 'assistant', content: response }]);
    } catch (error) {
      console.error('Chat generation failed:', error);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map((msg, i) => (
          <div key={i} className={`message ${msg.role}`}>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        ))}
        {isLoading && <div className="message assistant">Thinking...</div>}
      </div>

      <div className="input-area">
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={isLoading}
          onKeyDown={(e) => e.key === 'Enter' && handleSend()}
        />
        <button onClick={handleSend} disabled={isLoading}>
          Send
        </button>
      </div>
    </div>
  );
};
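Long conversations will eventually exceed the model's context window. A simple trimming helper (a hypothetical trimHistory function, using a character budget as a crude stand-in for token counting) keeps only the most recent turns:

```javascript
// Hypothetical trimHistory: drop the oldest turns until the history fits the budget.
const trimHistory = (messages, maxChars = 4000) => {
  const kept = [];
  let total = 0;
  // Walk from the newest message backwards, keeping turns while under budget
  for (let i = messages.length - 1; i >= 0; i--) {
    const len = messages[i].content.length;
    if (total + len > maxChars) break;
    kept.unshift(messages[i]);
    total += len;
  }
  return kept;
};

// Example: const prompt = trimHistory(history).map(m => `${m.role}: ${m.content}`).join('\n');
```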

Best Practices and Considerations

  1. Rate Limiting: The DeepSeek API enforces rate limits. Implement exponential backoff for retries.
  2. Token Management: Monitor token usage to avoid unexpected costs.
  3. Error Handling: Provide user-friendly error messages for API failures.
  4. Loading States: Clearly indicate when the AI is processing a request.
  5. Security: Never expose your API key in client-side code. For production apps, consider using a backend proxy.
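For point 2, a rough estimate is enough to warn users before a request; this sketch (a hypothetical estimateTokens helper, a heuristic rather than a real tokenizer) approximates usage:

```javascript
// Hypothetical estimateTokens: rough heuristic of ~4 characters per token for
// English text; use a real tokenizer when billing accuracy actually matters.
const estimateTokens = (text) => Math.ceil(text.length / 4);
```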

Conclusion

This comprehensive guide has walked you through integrating DeepSeek R1 into a React application, covering everything from basic implementation to advanced features like streaming responses and caching. By following these patterns, you can build powerful AI-enhanced applications with React and DeepSeek R1.

Remember to consult the official DeepSeek documentation for the most up-to-date API specifications and best practices.


🚀 Stop Writing Boilerplate Prompts

If you want to skip the setup and code 10x faster with complete AI architecture patterns, grab my Senior React Developer AI Cookbook ($19). It includes Server Action prompt libraries, UI component generation loops, and hydration debugging strategies.

Browse all 10+ developer products at the Apollo AI Store | Or snipe Solana tokens free via @ApolloSniper_Bot.
