Synopsis
Day to day, users are turning to Bard and ChatGPT rather than traditional Google search, because they get accurate answers and the right information with ease.
In this article, I am going to explain my journey of how I combined OpenAI's ChatGPT and Google Bard, and the challenges I encountered along the way.
Tech stack - React, TypeScript and FastAPI.
Initially, I took the API details from ChatGPT and easily configured them in the frontend. Those API details effectively acted as the backend: they returned information whenever a user made a request from the frontend UI.
Challenge 1 - Integrating Google Bard
I faced a lot of challenges and difficulties getting the API details for Bard, because at the time only a limited number of people who had requested access through MakerSuite could get a Bard API key.
If you can't get API access from the Bard MakerSuite portal, there are a few unofficial APIs you can work with and integrate instead.
Unofficial Reference - https://github.com/ra83205/google-bard-api
Challenge 2 - Hiding API keys on the frontend
Many newbie developers leave API keys in a .env file on the frontend. They may think no one can access those dotfiles, but the hidden fact is that anyone can access any file shipped to the frontend.
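To see why, remember that frontend build tools inline environment variables into the JavaScript bundle at build time. A small simulation (the file name and key below are made up for the demo) shows how anyone can pull such a key back out of a deployed bundle:

```shell
# Simulate a production bundle where the build inlined an API key
mkdir -p build/static/js
echo 'const apiKey="sk-demo-1234";fetch("https://api.openai.com/v1/chat/completions")' > build/static/js/main.3f2a.js

# Anyone who downloads the bundle can grep the key straight out of it:
grep -o 'sk-[A-Za-z0-9-]*' build/static/js/main.3f2a.js
```

The grep prints `sk-demo-1234` - the "hidden" key, recovered from the public bundle in one command.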
Any piece of code you place in any frontend file can be read by anyone. So if you have to hide API details, keys, and so on, you need to create a backend middleware.
Even if you cleverly place the .env values in your hosting provider's admin portal, it is still not safe as long as the frontend reads those values: they end up in the delivered code all the same.
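The safe pattern is for the browser to call your own backend, and for only the backend to read the key from a server-side environment variable. A minimal sketch of the idea follows, using Python's standard library; in the article's stack this logic would live inside a FastAPI route, and the endpoint, model name, and key value are illustrative:

```python
import json
import os
import urllib.request

def build_upstream_request(prompt: str) -> urllib.request.Request:
    """Build the server-side request to OpenAI. The key is read from the
    server's environment, so it never appears in any file shipped to the
    browser."""
    api_key = os.environ["OPENAI_API_KEY"]  # set on the server only, never in frontend code
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Demo only: in production the key is set by the hosting environment, not in code.
os.environ["OPENAI_API_KEY"] = "sk-demo"
req = build_upstream_request("Hello")
print(req.get_header("Authorization"))  # -> Bearer sk-demo
```

The frontend only ever sees the final answer that the backend forwards back; the `Authorization` header stays inside the server process.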
Challenge 3 - Sending the same response as ChatGPT does
After creating the backend middleware, I stored all my API details there and exposed new API endpoints of my own on top of ChatGPT and Bard. When a user makes a request from the frontend, the request goes to the backend middleware, which calls the ChatGPT and Bard APIs and returns their responses.
If you observe carefully, ChatGPT sends its response sentence by sentence from the backend server to the frontend: basically, it streams the response instead of sending the whole thing at once.
In our case, the backend middleware fetches the responses from the ChatGPT and Bard APIs when the user makes a request from the frontend. To pass those responses on from the backend middleware to the frontend, you need to set up streaming, especially for the ChatGPT responses.
How should the stream functionality work?
Whenever the backend middleware fetches one sentence from the ChatGPT API, it must pass that same sentence straight on to the frontend.
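Concretely, the ChatGPT API streams Server-Sent Events: each line looks like `data: {json}`, with the next text fragment in `choices[0].delta.content`, and a final `data: [DONE]` marker. A sketch of the relay logic (in FastAPI the generator would be wrapped in a `StreamingResponse`; the sample chunks below are simulated):

```python
import json
from typing import Iterable, Iterator

def delta_text(sse_line: str) -> str:
    """Pull the text fragment out of one OpenAI stream line, if any."""
    if not sse_line.startswith("data: ") or sse_line == "data: [DONE]":
        return ""
    payload = json.loads(sse_line[len("data: "):])
    return payload["choices"][0]["delta"].get("content", "")

def relay(upstream: Iterable[str]) -> Iterator[str]:
    """Forward each fragment to the frontend as soon as it arrives,
    never buffering the whole answer before responding."""
    for line in upstream:
        text = delta_text(line)
        if text:
            yield text

# Simulated upstream chunks, shaped like OpenAI's SSE stream:
chunks = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " world"}}]}',
    "data: [DONE]",
]
print("".join(relay(chunks)))  # -> Hello world
```

Because `relay` is a generator, each fragment leaves the backend the moment it is parsed, which is exactly the sentence-by-sentence behavior the frontend needs.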
Challenge 4 - Hosting in the cloud
When I hosted the whole backend in the cloud, the ChatGPT response stopped arriving piece by piece from the backend middleware. I spent two days hunting for where the mistake was. If responses don't stream to the frontend one by one, the user has to wait until the entire response has loaded in the backend middleware, which costs a lot of time.
After researching, I found the problem was Nginx: its default settings buffer the upstream response, which blocks streaming. I fixed it by changing those buffering settings in the Nginx configuration.
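For reference, the relevant directive is `proxy_buffering`, which is on by default and makes Nginx collect the upstream response before sending it on. A sketch of the location block that worked for this kind of setup (the port and path are assumptions; adjust to your deployment):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:8000;   # the FastAPI backend middleware
    proxy_buffering off;                # forward upstream chunks immediately
    proxy_cache off;
    proxy_http_version 1.1;             # keep the connection open for the stream
    proxy_set_header Connection "";
}
```

With buffering off, each chunk the FastAPI app yields passes through Nginx to the browser as soon as it is produced.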
What is Nginx ?
If you host your application in the cloud, you usually can't expose the application's URL directly to the outside world; if you try to access it, you will just see a "not working, retry" error.
To reach the application, requests have to go through a reverse proxy such as Nginx or Apache.
If you enjoyed this, please check out the preview - knowr.co
