Luis Fernando De Pombo for backmesh

Originally published at backmesh.com

3 common mistakes when integrating the OpenAI API with your web or mobile app

A key concern when building a web or mobile application that uses OpenAI’s language models is protecting your private API key. If you want to keep complexity low and release features quickly, setting up a fully fledged backend just to safeguard an API key can be time-consuming. In this article, you’ll learn how to avoid the most common security mistakes when integrating the OpenAI API into a client-side app without sacrificing simplicity or safety. Overlooking these pitfalls can leave the door open to OpenAI API misuse, which can:

  • Lead to unexpected or excessive charges on your OpenAI account
  • Violate OpenAI’s Acceptable Use Policy and Terms of Service
  • Potentially expose your project to spam or abusive behavior

Mistake #1: Storing the OpenAI Key in Frontend Code

Putting your API key directly in client-side code, whatever web or mobile framework you use, as shown in the sample frontend code below, is tempting but very risky.

import OpenAI from "openai";

const client = new OpenAI({
  dangerouslyAllowBrowser: true,
  apiKey: 'sk-proj-YOURSECRETKEY', // shipped to every user's browser or device
});

const completion = await client.chat.completions.create(...);

Anyone can inspect your code (even if it is minified or obfuscated), extract your key, and abuse it. Don't let this happen to you. The OpenAI SDK requires the dangerouslyAllowBrowser flag precisely to warn you about this risk: you need a layer of protection that keeps the secret key away from the end user’s browser or device.
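
To see just how exposed the key is, here is a minimal sketch (the bundle URL is hypothetical): any user can download the JavaScript your app ships and search it for OpenAI-style keys, which start with sk-.

// Hypothetical bundle URL: any user can fetch your deployed JavaScript
// and scan it for OpenAI-style keys, which start with "sk-".
const bundle = await (await fetch("https://yourapp.example/assets/index.js")).text();
const leaked = bundle.match(/sk-[A-Za-z0-9_-]{20,}/g);
console.log(leaked); // your "secret" key, in plain text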

Mistake #2: Using an Unauthenticated Proxy

Putting your OpenAI API key as a secret in a hosted server or cloud function and calling that endpoint from your client is much better than exposing it directly, but it is still unsafe. Let's take the following client-side application code:


const opair = await fetch("https://functionroute.yourcloud.com/v1/chat/completions", {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({...})
});


Alongside the cloud function OpenAI API proxy it is calling:

import route from "yourcloud";

route.post('/', async (req, res) => {
  const opair = await fetch(`https://api.openai.com${req.path}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${env.OPENAI_SECRET_KEY}`,
    },
    body: JSON.stringify(req.body),
  });
  return res.send(await opair.text());
})


Since this proxy is publicly accessible, an attacker can bypass your app entirely and use your OpenAI account through https://functionroute.yourcloud.com/ without ever needing your private key.
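
For example, a script like the following, run from anywhere, spends your OpenAI credits without ever touching your key (the model and prompt are placeholders):

// An attacker never needs your key; the unauthenticated proxy attaches it for them.
const stolen = await fetch("https://functionroute.yourcloud.com/v1/chat/completions", {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4o', // placeholder model
    messages: [{ role: 'user', content: 'Generate spam, billed to someone else' }],
  }),
});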

Mistake #3: Using an Authenticated Proxy without Permissions

Putting your OpenAI API key as a secret in a hosted server or cloud function and adding authentication so only your users can call the proxy endpoint is quite common and still much better than the setups laid out thus far. However, it is still unsafe. Let's take the following frontend code:

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(...);

const { data: { session } } = await supabase.auth.getSession();
const jwt = session.access_token;

const opair = await fetch("https://functionroute.yourcloud.com/v1/chat/completions", {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${jwt}`
  },
  body: JSON.stringify({...}),
});


Alongside the authenticated cloud function OpenAI API proxy it is calling:

import route from "yourcloud";
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(...);

route.post('/', async (req, res) => {
  try {
    // 1. Get the token from the Authorization header
    const authHeader = req.headers['authorization'] || ''
    const jwt = authHeader.split(' ')[1] // Bearer <token>

    if (!jwt) {
      return res.status(401).json({ error: 'No token provided' })
    }

    // 2. Verify the token with Supabase
    const { data: { user }, error } = await supabase.auth.getUser(jwt);
    if (error || !user) {
      return res.status(401).json({ error: 'Unauthorized' })
    }

    // 3. The token is valid, so forward the request with the secret key
    const opair = await fetch(`https://api.openai.com${req.path}`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${env.OPENAI_SECRET_KEY}`,
      },
      body: JSON.stringify(req.body),
    });
    return res.send(await opair.text());

  } catch (error) {
    console.error(error)
    return res.status(401).json({ error: 'Invalid token' })
  }
})


This backend integration for the OpenAI API looks fine at first glance and provides some safety. However, there are two problems with it:

  1. There are no limits on how many times a given user can call your OpenAI API proxy. By simply grabbing their JWT from your client-side code, any user can hammer the proxy at https://functionroute.yourcloud.com/v1/ up to the default account rate limits of the OpenAI API key behind it. JWTs, or authentication tokens, typically expire within an hour, so someone can generate a token, (mis)use it until it expires, then rinse and repeat. A per-user rate-limit sketch appears after this list.
  2. OpenAI and other LLM APIs have endpoints for uploading files for augmented analysis and for creating threads with chat history for autonomous assistants. This backend provides no permissions or access control over which user uploaded or created which private resource in the OpenAI API. Any user with just their JWT can exploit this vulnerability to see every file and conversation ever created with your OpenAI API key, violating the data privacy of your end users. The following frontend code calls the OpenAI API list files endpoint, which returns every file ever uploaded, regardless of which user uploaded it:
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(...);

const { data: { session } } = await supabase.auth.getSession();
const jwt = session.access_token;

const opair = await fetch("https://functionroute.yourcloud.com/v1/files", {
  method: 'GET',
  headers: {
    'Authorization': `Bearer ${jwt}`
  },
});

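Problem 1 can be mitigated by rate limiting each user inside the proxy. Below is a minimal in-memory sketch, assuming the same hypothetical yourcloud route and a single long-lived process; a production setup would back this with a shared store such as Redis.

// Minimal in-memory per-user rate limiter for the hypothetical proxy above.
const WINDOW_MS = 60_000; // 1 minute
const MAX_REQUESTS = 5;   // per user per window
const hits = new Map();   // user id -> timestamps of recent requests

function allowRequest(userId) {
  const now = Date.now();
  const recent = (hits.get(userId) || []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) return false;
  recent.push(now);
  hits.set(userId, recent);
  return true;
}

// Inside the authenticated route, after verifying the JWT:
// if (!allowRequest(user.id)) {
//   return res.status(429).json({ error: 'Rate limit exceeded' });
// }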

Building a safe integration

To prevent the vulnerabilities laid out thus far, you should implement the following safeguards in your backend:

  • Implement Authentication: Require valid credentials (e.g., token-based auth) so you know exactly who’s accessing your services.
  • Track Usage and Limits per User: Prevent users from racking up high API costs by restricting the number of requests per minute per user.
  • Enforce Secure Permissions: Ensure only the user who created a resource can view or modify it to protect against unauthorized access or data leaks; a minimal ownership-tracking sketch follows this list.
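
As a sketch of that last safeguard, again assuming the hypothetical proxy from above, the backend can record which user created each OpenAI resource and refuse requests for anything owned by someone else. The Map here is a stand-in for a real database table.

// Sketch: track which user created each OpenAI resource so users can only
// read their own files and threads. The Map stands in for a database table.
const fileOwners = new Map(); // OpenAI file id -> user id

// Call this after proxying a successful file upload:
function recordOwner(fileId, userId) {
  fileOwners.set(fileId, userId);
}

// Call this before proxying e.g. GET /v1/files/{fileId}:
function canAccess(fileId, userId) {
  return fileOwners.get(fileId) === userId;
}

// if (!canAccess(requestedFileId, user.id)) {
//   return res.status(403).json({ error: 'Forbidden' });
// }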

Building a simple integration

Implementing proper integration with the OpenAI API in a backend can be time-consuming. A Backend-as-a-Service (BaaS) like Backmesh abstracts this complexity, handling authentication, rate limits and permission enforcement for you, without requiring you to build or maintain a backend. The idea is straightforward:

  1. Store your private OpenAI API key securely in a protected configuration that users cannot see.
  2. Process requests with permissions, rate limiting and authentication:
     • Your front end sends a request to the Backmesh endpoint on behalf of your user.
     • The endpoint verifies that the request belongs to one of your users, that the user hasn't gone past their limits (e.g. 5 requests per minute) and that the user is not requesting resources it did not previously create. Only if all of this is true does Backmesh attach your secret API key from the secure store to the request.
     • The request goes to OpenAI, and Backmesh returns the OpenAI API response to your front end with streaming support.

The resulting client-side code, taken from one of our tutorials, looks like this:

import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

const BACKMESH_PROXY_URL =
  "https://edge.backmesh.com/v1/proxy/gbBbHCDBxqb8zwMk6dCio63jhOP2/wjlwRswvSXp4FBXwYLZ1/v1";

const supabase = createClient(...);
const { data: { session } } = await supabase.auth.getSession();

const client = new OpenAI({
  baseURL: BACKMESH_PROXY_URL, // requests go to Backmesh instead of api.openai.com
  dangerouslyAllowBrowser: true,
  apiKey: session.access_token, // the user's JWT, not your OpenAI secret key
});

From your perspective, there’s “no backend” to manage: no servers, no DevOps, no complicated configurations. But under the hood, Backmesh acts as a secure middleman between your front end and the OpenAI API. Read more about Backmesh's security considerations here or check out the open source code on GitHub.
