Build A Transcription App with Strapi, ChatGPT, & Whisper: Part 3

Welcome to the third and final installment in this series. In Part 2, we created and connected the backend with Strapi to help save our meetings and transcriptions. In this part of the series, we will use ChatGPT with Strapi to gain insights into the transcribed text at the click of a button. We will also look at testing and how to deploy the application to Strapi Cloud.

Outline

You can find the outline for this series below:

Part 1: Build the transcription frontend with Next.js and the Whisper API.
Part 2: Create and connect the Strapi backend to save meetings and transcriptions.
Part 3 (this article): Use ChatGPT with Strapi to gain insights, add testing, and deploy the application.

Create Custom API Endpoints in Strapi

We will need custom endpoints in Strapi CMS to connect with ChatGPT. Navigate to the terminal, change the directory to strapi-transcribe-api, and run the command below:

yarn strapi generate

Doing this will begin the process of generating our custom API. Choose the API option, give it the name transcribe-insight-gpt, and select "no" when it asks us if this is for a plugin.
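
For reference, the interactive prompts look roughly like this (the exact wording can vary between Strapi versions):

$ yarn strapi generate
? Strapi Generators api - Generate a basic API
? Api name transcribe-insight-gpt
? Is this api for a plugin? No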

Make Strapi API Public

If we check the api directory inside the src directory in our code editor, we should see the newly created API for transcribe-insight-gpt with its routes, controllers, and services directories.
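
The generated structure should look something like this:

src/
└── api/
    └── transcribe-insight-gpt/
        ├── routes/
        │   └── transcribe-insight-gpt.js
        ├── controllers/
        │   └── transcribe-insight-gpt.js
        └── services/
            └── transcribe-insight-gpt.js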

Let's check if it works by uncommenting the code in each file, restarting the server, and navigating to the admin dashboard. We will want to make access to this route public, so click Settings > Users & permissions plugin > Roles > Public, then scroll down to Select all on the transcribe-insight-gpt API to make the permissions public, and click save in the top right.

If we enter the following URL into our browser and press Enter, we should get an "ok" message.

http://localhost:1337/api/transcribe-insight-gpt

Using ChatGPT with Strapi

Now that we have confirmed the API endpoint is working, let's connect it to OpenAI. First, install the OpenAI package: navigate to the root directory of the Strapi project and run the command below in the terminal:

yarn add openai

Then, in the .env file, add the API key to the OPENAI environment variable:

OPENAI=<OpenAI api key here>

Now, under the transcribe-insight-gpt directory, change the code in the routes directory to the following:

module.exports = {
  routes: [
    {
      method: "POST",
      path: "/transcribe-insight-gpt/exampleAction",
      handler: "transcribe-insight-gpt.exampleAction",
      config: {
        policies: [],
        middlewares: [],
      },
    },
  ],
};

Change the code in the controller directory to the following:

"use strict";

module.exports = {
  exampleAction: async (ctx) => {
    try {
      const response = await strapi
        .service("api::transcribe-insight-gpt.transcribe-insight-gpt")
        .insightService(ctx);

      ctx.body = { data: response };
    } catch (err) {
      console.log(err.message);
      throw new Error(err.message);
    }
  },
};

And the code in the services directory to the following:

"use strict";
const { OpenAI } = require("openai");
const openai = new OpenAI({
  apiKey: process.env.OPENAI,
});

/**
 * transcribe-insight-gpt service
 */

module.exports = ({ strapi }) => ({
  insightService: async (ctx) => {
    try {
      const input = ctx.request.body.data?.input;
      const operation = ctx.request.body.data?.operation;

      if (operation === "analysis") {
        const analysisResult = await gptAnalysis(input);

        return {
          message: analysisResult,
        };
      } else if (operation === "answer") {
        const answerResult = await gptAnswer(input);

        return {
          message: answerResult,
        };
      } else {
        return { error: "Invalid operation specified" };
      }
    } catch (err) {
      ctx.body = err;
    }
  },
});

async function gptAnalysis(input) {
  const analysisPrompt =
    "Analyse the following text and give me a brief overview of what it means:";
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: `${analysisPrompt} ${input}` }],
    model: "gpt-3.5-turbo",
  });

  const analysis = completion.choices[0].message.content;

  return analysis;
}

async function gptAnswer(input) {
  const answerPrompt =
    "Analyse the following text and give me an answer to the question posed: ";
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: `${answerPrompt} ${input}` }],
    model: "gpt-3.5-turbo",
  });

  const answer = completion.choices[0].message.content;

  return answer;
}

Here, we pass two parameters to our API: the input text, which will be our transcriptions, and the operation, which will be either analysis or answer depending on what operation we want it to perform. Each operation will have a different prompt for ChatGPT.

Confirm That ChatGPT Works

We can check the connection to our POST route by pasting the command below into our terminal:

curl -X POST \
  http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "Comparatively, four-dimensional space has an extra coordinate axis, orthogonal to the other three, which is usually labeled w. To describe the two additional cardinal directions",
        "operation": "analysis"
    }
}'

And to check the answer operation, you can use the below command:

curl -X POST \
  http://localhost:1337/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
        "operation": "answer"
    }
}'
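
If everything is wired up correctly, both commands should return a JSON body shaped like the sketch below; the message text itself will vary because it is generated by ChatGPT:

{
  "data": {
    "message": "<ChatGPT's analysis or answer text>"
  }
}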

That's great. Now that we have our analysis and answer capabilities within a Strapi API route, we need to connect this to our front-end code and ensure we can save this information for our meetings and transcriptions.

Connect Custom API for Analysis in Next.js

To maintain a clear separation of concerns, let's create a separate API file for our app's analysis functionality.

Get Analysis from ChatGPT

In transcribe-frontend under the api directory, create a new file called analysis.js and paste in the following code:

const baseUrl = 'http://localhost:1337';
const url = `${baseUrl}/api/transcribe-insight-gpt/exampleAction`;

export async function callInsightGpt(operation, input) {
  console.log('operation - ', operation);
  const payload = {
    data: {
      input: input,
      operation: operation,
    },
  };
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    const data = await response.json();
    return data;
  } catch (error) {
    console.error('Error:', error);
  }
}

The code above sends a POST request to the insight API and returns the analysis or answer from ChatGPT.
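
As a quick sanity check, here's how the helper might be called from a component or hook; the input string is just an illustrative placeholder:

// Hypothetical usage of callInsightGpt (illustrative input text)
import { callInsightGpt } from '../api/analysis';

async function logAnalysisExample() {
  const { data } = await callInsightGpt(
    'analysis',
    'We agreed to ship the beta next Friday and to review the pricing page copy.'
  );
  // data.message holds the text returned by our Strapi insight endpoint
  console.log(data.message);
}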

Update Transcriptions with Analysis and Answers

Let's add a way to update our transcriptions with analysis and answers. Paste the following code into the transcriptions.js file.

export async function updateTranscription(
  updatedTranscription,
  transcriptionId
) {
  const updateURL = `${url}/${transcriptionId}`;
  const payload = {
    data: updatedTranscription,
  };

  try {
    const res = await fetch(updateURL, {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(payload),
    });

    return await res.json();
  } catch (error) {
    console.error('Error updating meeting:', error);
    throw error;
  }
}

The code above sends a PUT request to update the analysis or answer field on a transcription.

Create a Custom Hook to Handle the Overview and Analysis of Transcriptions

Now, let's create a hook where we can use this method. Create a file named useInsightGpt.js under the hooks directory and paste in the following code:

import { useState } from 'react';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

export const useInsightGpt = () => {
  const [loadingAnalysis, setLoading] = useState(false);
  const [transcriptionIdLoading, setTranscriptionIdLoading] = useState('');
  const [analysisError, setError] = useState(null);

  const getAndSaveTranscriptionAnalysis = async (
    operation,
    input,
    transcriptionId
  ) => {
    try {
      setTranscriptionIdLoading(transcriptionId);
      // Get insight analysis / answer
      const { data } = await callInsightGpt(operation, input);
      // Use transcriptionId to save it to the transcription
      const updateTranscriptionDetails =
        operation === 'analysis'
          ? { analysis: data.message }
          : { answer: data.message };
      await updateTranscription(updateTranscriptionDetails, transcriptionId);
      setTranscriptionIdLoading('');
    } catch (e) {
      setTranscriptionIdLoading('');
      setError('Error getting analysis', e);
    }
  };

  const getAndSaveOverviewAnalysis = async (operation, input, meetingId) => {
    try {
      setLoading(true);
      // Get overview insight
      const {
        data: { message },
      } = await callInsightGpt(operation, input);
      // Use meetingId to save it to the meeting
      const updateMeetingDetails = { overview: message };
      await updateMeeting(updateMeetingDetails, meetingId);
      setLoading(false);
    } catch (e) {
      setLoading(false);
      setError('Error getting overview', e);
    }
  };

  return {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  };
};


This hook handles the logic to get and save the overview for our meeting when it has ended. It also handles getting the analysis or answers to our transcriptions and saving them, too. It keeps track of which transcription we are requesting analysis for so we can show specific loading states.

Display Analysis of a Transcription

Import the functionality above into TranscribeContainer and use it. Paste the following updated code into TranscribeContainer.jsx:

import React, { useState, useEffect } from "react";
import styles from "../styles/Transcribe.module.css";
import { useAudioRecorder } from "../hooks/useAudioRecorder";
import RecordingControls from "../components/transcription/RecordingControls";
import TranscribedText from "../components/transcription/TranscribedText";
import { useRouter } from "next/router";
import { useMeetings } from "../hooks/useMeetings";
import { useInsightGpt } from "../hooks/useInsightGpt";
import { createNewTranscription } from "../api/transcriptions";

const TranscribeContainer = ({ streaming = true, timeSlice = 1000 }) => {
  const router = useRouter();
  const [meetingId, setMeetingId] = useState(null);
  const [meetingTitle, setMeetingTitle] = useState("");
  const {
    getMeetingDetails,
    saveTranscriptionToMeeting,
    updateMeetingDetails,
    loading,
    error,
    meetingDetails,
  } = useMeetings();
  const {
    loadingAnalysis,
    transcriptionIdLoading,
    analysisError,
    getAndSaveTranscriptionAnalysis,
    getAndSaveOverviewAnalysis,
  } = useInsightGpt();
  const apiKey = process.env.NEXT_PUBLIC_OPENAI_API_KEY;
  const whisperApiEndpoint = "https://api.openai.com/v1/audio/";
  const {
    recording,
    transcribed,
    handleStartRecording,
    handleStopRecording,
    setTranscribed,
  } = useAudioRecorder(streaming, timeSlice, apiKey, whisperApiEndpoint);

  const { ended } = meetingDetails;
  const transcribedHistory = meetingDetails?.transcribed_chunks?.data;

  useEffect(() => {
    const fetchDetails = async () => {
      if (router.isReady) {
        const { meetingId } = router.query;
        if (meetingId) {
          try {
            await getMeetingDetails(meetingId);
            setMeetingId(meetingId);
          } catch (err) {
            console.log("Error getting meeting details - ", err);
          }
        }
      }
    };

    fetchDetails();
  }, [router.isReady, router.query]);

  useEffect(() => {
    setMeetingTitle(meetingDetails.title);
  }, [meetingDetails]);

  const handleGetAnalysis = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("analysis", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleGetAnswer = async (input, transcriptionId) => {
    await getAndSaveTranscriptionAnalysis("answer", input, transcriptionId);
    // re-fetch meeting details
    await getMeetingDetails(meetingId);
  };

  const handleStopMeeting = async () => {
    // provide meeting overview and save it
    // getMeetingOverview(transcribed_chunks)
    await updateMeetingDetails(
      {
        title: meetingTitle,
        ended: true,
      },
      meetingId,
    );

    // re-fetch meeting details
    await getMeetingDetails(meetingId);
    setTranscribed("");
  };

  const stopAndSaveTranscription = async () => {
    // save transcription first
    let {
      data: { id: transcriptionId },
    } = await createNewTranscription(transcribed);

    // make a call to save the transcription chunk here
    await saveTranscriptionToMeeting(meetingId, meetingTitle, transcriptionId);
    // re-fetch current meeting which should have updated transcriptions
    await getMeetingDetails(meetingId);
    // Stop and clear the current transcription as it's now saved
    await handleStopRecording();
  };

  const handleGoBack = () => {
    router.back();
  };

  if (loading) return <p>Loading...</p>;

  return (
    <div style={{ margin: "20px" }}>
      {ended && (
        <button onClick={handleGoBack} className={styles.goBackButton}>
          Go Back
        </button>
      )}
      {!ended && (
        <button
          className={styles["end-meeting-button"]}
          onClick={handleStopMeeting}
        >
          End Meeting
        </button>
      )}
      {ended ? (
        <p className={styles.title}>{meetingTitle}</p>
      ) : (
        <input
          onChange={(e) => setMeetingTitle(e.target.value)}
          value={meetingTitle}
          type="text"
          placeholder="Meeting title here..."
          className={styles["custom-input"]}
        />
      )}
      <div>
        {!ended && (
          <div>
            <RecordingControls
              handleStartRecording={handleStartRecording}
              handleStopRecording={stopAndSaveTranscription}
            />
            {recording ? (
              <p className={styles["primary-text"]}>Recording</p>
            ) : (
              <p>Not recording</p>
            )}
          </div>
        )}

        {/*Current transcription*/}
        {transcribed && <h1>Current transcription</h1>}
        <TranscribedText transcribed={transcribed} current={true} />

        {/*Transcribed history*/}
        <h1>History</h1>
        {transcribedHistory
          ?.slice()
          .reverse()
          .map((val, i) => {
            const transcribedChunk = val.attributes;
            const text = transcribedChunk.text;
            const transcriptionId = val.id;
            return (
              <TranscribedText
                key={transcriptionId}
                transcribed={text}
                answer={transcribedChunk.answer}
                analysis={transcribedChunk.analysis}
                handleGetAnalysis={() =>
                  handleGetAnalysis(text, transcriptionId)
                }
                handleGetAnswer={() => handleGetAnswer(text, transcriptionId)}
                loading={transcriptionIdLoading === transcriptionId}
              />
            );
          })}
      </div>
    </div>
  );
};

export default TranscribeContainer;

Here, we use the useInsightGpt hook to request either an analysis or an answer, depending on what we need, and we display a loading indicator beside the transcribed text while the request is in progress.

Display Answers and Analysis of Transcriptions in Real Time

Paste the following code into TranscribedText.jsx to update the UI accordingly.

import styles from '../../styles/Transcribe.module.css';

function TranscribedText({
  transcribed,
  answer,
  analysis,
  handleGetAnalysis,
  handleGetAnswer,
  loading,
  current,
}) {
  return (
    <div className={styles['transcribed-text-container']}>
      <div className={styles['speech-bubble-container']}>
        {transcribed && (
          <div className={styles['speech-bubble']}>
            <div className={styles['speech-pointer']}></div>
            <div className={styles['speech-text-question']}>{transcribed}</div>
            {!current && (
              <div className={styles['button-container']}>
                <button
                  className={styles['primary-button-analysis']}
                  onClick={handleGetAnalysis}
                >
                  Get analysis
                </button>
                <button
                  className={styles['primary-button-answer']}
                  onClick={handleGetAnswer}
                >
                  Get answer
                </button>
              </div>
            )}
          </div>
        )}
      </div>
      <div>
        <div className={styles['speech-bubble-container']}>
          {loading && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <div className={styles['speech-text-answer']}>Loading...</div>
            </div>
          )}
          {analysis && (
            <div className={styles['analysis-bubble']}>
              <div className={styles['analysis-pointer']}></div>
              <p style={{ margin: 0 }}>Analysis</p>
              <div className={styles['speech-text-answer']}>{analysis}</div>
            </div>
          )}
        </div>
        <div className={styles['speech-bubble-container']}>
          {answer && (
            <div className={styles['speech-bubble-right']}>
              <div className={styles['speech-pointer-right']}></div>
              <p style={{ margin: 0 }}>Answer</p>
              <div className={styles['speech-text-answer']}>{answer}</div>
            </div>
          )}
        </div>
      </div>
    </div>
  );
}

export default TranscribedText;

We can now request analysis and get answers to questions in real-time straight after they have been transcribed.

Analysis Demo.gif

Implement Meeting Overview Functionality

When the user ends the meeting, we want to provide an overview of everything discussed. Let's add this functionality to the TranscribeContainer component.

In the handleStopMeeting function, we can use the getAndSaveOverviewAnalysis method from the useInsightGpt hook:

const handleStopMeeting = async () => {
    // provide meeting overview and save it
    const transcribedHistoryText = transcribedHistory
      .map((val) => `transcribed_chunk: ${val.attributes.text}`)
      .join(', ');

    await getAndSaveOverviewAnalysis(
      'analysis',
      transcribedHistoryText,
      meetingId
    );

    await updateMeetingDetails(
      {
        title: meetingTitle,
        ended: true,
      },
      meetingId
    );

    // re-fetch meeting details
    await getMeetingDetails(meetingId);
    setTranscribed('');
  };

Here, we are joining all of the transcribed chunks from the meeting and then sending them to our ChatGPT API for analysis, where they will be saved for our meeting.

Now, let's display the overview once it has been loaded. Add the following code above the RecordingControls:

{loadingAnalysis && <p>Loading Overview...</p>}

{overview && (
  <div>
    <h1>Overview</h1>
    <p>{overview}</p>
  </div>
)}

Then, destructure the overview from the meeting details by adding the following line below our hook declarations:

const { ended, overview } = meetingDetails;

To summarise, we listen to the loading state from useInsightGpt and check whether the meeting has an overview; if it does, we display it.

Overview Demo.gif

Error Handling in Next.js

We have a couple of errors that could be caused by one of our hooks; let's create a component to handle them.

Create a file called ErrorToast.js under the components directory:

import { useEffect, useState } from 'react';

const ErrorToast = ({ message, duration }) => {
  const [visible, setVisible] = useState(true);

  useEffect(() => {
    const timer = setTimeout(() => {
      setVisible(false);
    }, duration);

    return () => clearTimeout(timer);
  }, [duration]);

  if (!visible) return null;

  return <div className="toast">{message}</div>;
};

export default ErrorToast;

And add the following CSS to globals.css under the styles directory:

.toast {
  position: fixed;
  top: 20px;
  left: 50%;
  transform: translateX(-50%);
  background-color: rgba(255, 0, 0, 0.8);
  color: white;
  padding: 16px;
  border-radius: 8px;
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
  z-index: 1000;
  transition: opacity 0.5s ease-out;
  opacity: 1;
  display: flex;
  align-items: center;
  justify-content: center;
  text-align: center;
}

.toast-hide {
  opacity: 0;
}

Now, we can use this error component in TranscribeContainer. Whenever we encounter an unexpected error from the API, we will briefly show this toast to notify the user that something went wrong.

Import the ErrorToast at the top of the file and then paste the following code above the Go Back button in the return statement of our component:

{error || analysisError ? (
  <ErrorToast message={error || analysisError} duration={5000} />
) : null}

Testing with Next.js using Jest

Now, let's add a test to ensure our hooks are working as we expect them to and to alert us to any breaking changes that might be introduced later. First, add the packages below so we can use Jest in our project:

yarn add -D jest jest-environment-jsdom @testing-library/react @testing-library/jest-dom @testing-library/react-hooks

Then create a jest.config.js file in the root of the frontend project and add the following code:

const nextJest = require('next/jest');
const createJestConfig = nextJest({
  dir: './',
});
const customJestConfig = {
  moduleDirectories: ['node_modules', '<rootDir>/'],
  testEnvironment: 'jest-environment-jsdom',
};
module.exports = createJestConfig(customJestConfig);

This just sets up Jest ready to be used in Next.js.

Create a test directory and an index.test.js file with the following code:

import { renderHook, act } from '@testing-library/react-hooks';
import { useInsightGpt } from '../hooks/useInsightGpt';
import { callInsightGpt } from '../api/analysis';
import { updateMeeting } from '../api/meetings';
import { updateTranscription } from '../api/transcriptions';

jest.mock('../api/analysis');
jest.mock('../api/meetings');
jest.mock('../api/transcriptions');

describe('useInsightGpt', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('should handle transcription analysis successfully', async () => {
    const mockData = { data: { message: 'Test analysis message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateTranscription.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('analysis', 'input');
    expect(updateTranscription).toHaveBeenCalledWith(
      { analysis: 'Test analysis message' },
      'transcriptionId'
    );
    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle overview analysis successfully', async () => {
    const mockData = { data: { message: 'Test overview message' } };
    callInsightGpt.mockResolvedValueOnce(mockData);
    updateMeeting.mockResolvedValueOnce({});

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(callInsightGpt).toHaveBeenCalledWith('overview', 'input');
    expect(updateMeeting).toHaveBeenCalledWith(
      { overview: 'Test overview message' },
      'meetingId'
    );
    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe(null);
  });

  it('should handle errors in transcription analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveTranscriptionAnalysis(
        'analysis',
        'input',
        'transcriptionId'
      );
    });

    expect(result.current.transcriptionIdLoading).toBe('');
    expect(result.current.analysisError).toBe('Error getting analysis');
  });

  it('should handle errors in overview analysis', async () => {
    const mockError = new Error('Test error');
    callInsightGpt.mockRejectedValueOnce(mockError);

    const { result } = renderHook(() => useInsightGpt());

    await act(async () => {
      await result.current.getAndSaveOverviewAnalysis(
        'overview',
        'input',
        'meetingId'
      );
    });

    expect(result.current.loadingAnalysis).toBe(false);
    expect(result.current.analysisError).toBe('Error getting overview');
  });
});

Because the hooks use our Strapi API, we need a way to replace the data we're getting back from the API calls. We're using jest.mock to intercept the APIs and send back mock data. This way, we can test our hooks' internal logic without calling the API.

In the first two tests, we mock the API call and return some data, then render our hook and call the correct function. We then check if the correct functions have been called with the correct data from inside the hook. The last two tests just test that errors are handled correctly.

Add the following under scripts in the package.json file:

"test": "jest --watch"
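In context, the scripts section of package.json will look something like this (the other entries are the standard Next.js defaults and may differ in your project):

"scripts": {
  "dev": "next dev",
  "build": "next build",
  "start": "next start",
  "lint": "next lint",
  "test": "jest --watch"
}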

Now open the terminal, navigate to the root directory of the frontend project, and run the following command to check if the tests are passing:

yarn test

You should see a success message like the one below:

jest test successful.png

As an optional challenge, see if you can apply what we did when testing useInsightGpt to the other hooks.

Application Demo

Here is what our application looks like.

Demo of Build Transcription App with Strapi and ChatGPT

Deployment with Strapi Cloud

Finally, we have the finished application up and running correctly, with some tests. The time has come to deploy our project to Strapi Cloud.

First, navigate to Strapi and click on "cloud" at the top right.

Connect with GitHub.

strapi cloud connect with Github.png

From the dashboard, click on Create project.

strapi cloud create project.png

Choose your GitHub account and the correct repo, fill out the display name, and choose the region.

strapi cloud import git repository.png

Now, if you have the same file structure as me (which you should if you've been following along), you will just need to add the base directory. Click on Show advanced settings and enter a base directory of /strapi-transcribe-api. Then add all of the environment variables found in the .env file in the root of the Strapi project.
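
For reference, a freshly generated Strapi project's .env contains roughly the variables below (your generated values will differ), plus the OPENAI key we added earlier; these are what you need to copy into Strapi Cloud:

HOST=0.0.0.0
PORT=1337
APP_KEYS=<generated app keys>
API_TOKEN_SALT=<generated salt>
ADMIN_JWT_SECRET=<generated secret>
TRANSFER_TOKEN_SALT=<generated salt>
JWT_SECRET=<generated secret>
OPENAI=<OpenAI api key here>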

Once you have added all of these, click on "create project." This will bring you to a loading screen, and then you will be redirected to the build logs; here, you can just wait for the build to finish.

Once it has finished building, you can click on Overview from the top left. This should direct you to the dashboard, where you will find the details of your deployment and the app URL under Overview on the right.

strapi cloud projects overview.png

First, click on your app URL, which will open a new tab and direct you to the welcome page of your Strapi app. Then, create a new admin user, which will log you into the dashboard.

This is a new deployment, so it won't have any of the data we saved locally, and it won't have carried across the public permissions we set on the API. Click Settings > Users & Permissions plugin > Roles > Public, expand and Select all on Meeting, Transcribe-insight-gpt, and Transcribed-chunk, then click save in the top right.

Once again, let's check that our deployment was successful by running the command below in the terminal. Replace https://yourDeployedUrlHere.com with the URL shown in the Strapi Cloud dashboard.

curl -X POST \
  https://yourDeployedUrlHere.com/api/transcribe-insight-gpt/exampleAction \
  -H 'Content-Type: application/json' \
  -d '{
    "data": {
        "input": "I speak without a mouth and hear without ears. I have no body, but I come alive with the wind. What am I?",
        "operation": "answer"
    }
}'

Deploying Next.js with Vercel

Now that we have the API deployed and ready to use, let's deploy our frontend with Vercel.

First, we will need to change the baseUrl in our API files to point to our newly deployed Strapi instance.

Add the following variable to .env.local:

NEXT_PUBLIC_STRAPI_URL="your strapi cloud url"

Now go ahead and replace the current value of baseUrl with the following in all three API files:

const baseUrl =
  process.env.NODE_ENV == 'production'
    ? process.env.NEXT_PUBLIC_STRAPI_URL
    : 'http://localhost:1337';

This checks whether the app is running in production. If so, it uses our deployed Strapi instance; if not, it falls back to localhost. Make sure to push these changes to GitHub.

Now navigate to Vercel and sign up if you don't already have an account.

Now, let's create a new project by continuing with GitHub.

vercel create new project.png

Once you have verified your account, import the correct GitHub repo.

vercel connect Github Repo.png

Now we will fill out some configuration details: give the project a name, change the framework preset to Next.js, change the root directory to 'transcribe-frontend', and add the two environment variables from the .env.local file in the Next.js project.
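
The two variables in question are the OpenAI key used for Whisper transcription and the Strapi URL we added above, so .env.local should look something like this:

NEXT_PUBLIC_OPENAI_API_KEY=<OpenAI api key here>
NEXT_PUBLIC_STRAPI_URL=<your Strapi Cloud url>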

vercel configure project.png

Now click deploy and wait for it to finish. Once deployed, it should redirect you to a success page with a preview of the app.

vercel deployment successful.png

Now click continue to the dashboard, where you can find information about the app, such as the domain and the deployment logs.

vercel information about app.png

From here, you can click visit to be directed to the app's frontend deployment.

Conclusion

So there you have it! You have now built your transcription app from start to finish. We have covered how to achieve this with several cutting-edge technologies. We used Strapi for the backend CMS and a custom ChatGPT integration, demonstrating how quickly and easily these technologies let us build complex web apps. We also covered some architectural patterns for error handling and testing in Next.js, and finally, we deployed the backend to Strapi Cloud and the frontend to Vercel. I hope you have found this series eye-opening and that it encourages you to bring your own ideas to life.
