This post explains how to create a Lambda function that calls the ChatGPT API. It provides the instructions and the Python code to create and deploy the function as an API.
Lambda is a serverless computing platform provided by Amazon Web Services (AWS). Using Lambda, developers can create applications without the need to manage their infrastructure. Lambda also enables applications to auto-scale according to demand, making the platform cost-effective (pay only for what you use).
ChatGPT is a product developed by OpenAI to enable users to talk with an advanced language model. Using ChatGPT, developers can create applications that automatically answer questions, generate and complete text, summarize long articles, and translate text.
Important note: This post is intended for learning purposes and creates a public API without protecting the API secret key. In a production environment, it is advisable to enable authentication and protect the API secret key.
Creating the Lambda function
The first step is to create a Lambda function. On the AWS dashboard, open the Lambda service and create a new function from scratch with the following parameters:
- Author from scratch
- Basic Information:
  - Function name: chatGPT
  - Runtime: Python 3.9 (or greater)
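If you prefer to script this step instead of using the console, the function can also be created with boto3, the AWS SDK for Python. The sketch below is only illustrative: the role ARN and the lambda_function.zip deployment package are placeholders that must match your own account and packaging.

import boto3

# Placeholder: a zip file containing lambda_function.py with a lambda_handler function.
with open('lambda_function.zip', 'rb') as f:
    zipped_code = f.read()

client = boto3.client('lambda')
client.create_function(
    FunctionName='chatGPT',
    Runtime='python3.9',
    # Placeholder: an existing IAM role with basic Lambda execution permissions.
    Role='arn:aws:iam::123456789012:role/lambda-basic-execution',
    Handler='lambda_function.lambda_handler',
    Code={'ZipFile': zipped_code})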
The Lambda function chatGPT is now created. The console provides a Code tab where we can insert the code, but since the function is still empty, it does nothing yet.
Calling the OpenAI API
To call the OpenAI API, we need an API key. First, navigate to the OpenAI site and create an account. There is a View API Keys link on the user profile page, where we can create a new secret key to call the API.
Once the API key is created, you can write the Lambda function code. The code below implements a Lambda function that calls the OpenAI API. In a production environment, it would be safer to store the API key in a vault or at least in an environment variable.
import openai

def lambda_handler(event, context):
    # For learning purposes only: replace 'API-KEY' with your OpenAI secret key.
    openai.api_key = 'API-KEY'

    # The question is passed in the invocation event.
    question = event['question']

    # Ask the completion endpoint to answer the question.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=question,
        temperature=1,
        max_tokens=100)

    return {
        'statusCode': 200,
        'body': response['choices'][0]['text']
    }
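A small improvement over hard-coding the key is to read it from an environment variable configured for the function (Configuration > Environment variables). The sketch below assumes a variable named OPENAI_API_KEY; the rest of the handler is unchanged.

import os
import openai

def lambda_handler(event, context):
    # Assumption: an environment variable named OPENAI_API_KEY is configured for the function.
    openai.api_key = os.environ['OPENAI_API_KEY']

    question = event['question']
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=question,
        temperature=1,
        max_tokens=100)

    return {
        'statusCode': 200,
        'body': response['choices'][0]['text']
    }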
This code depends on the openai library, which you need to install for the Lambda function to run. There are some tutorials that explain how to install Python dependencies in a Lambda environment.
Moreover, the OpenAI service may take some time to respond, so we need to increase the Lambda function timeout. In the Configuration tab, navigate to General configuration and change the Timeout to 10 seconds.
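If you prefer to script this configuration change, the timeout can also be updated with boto3; this is a sketch that assumes the function name chatGPT used above.

import boto3

# Increase the timeout of the chatGPT function to 10 seconds.
client = boto3.client('lambda')
client.update_function_configuration(
    FunctionName='chatGPT',
    Timeout=10)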
Testing the Lambda Function
The code is ready to run, and now we need to test it. In the Test tab, we can create a new test event with the following Event JSON.
{
  "question": "Provide a brief description of aws lambda"
}
After running the test, we expect a message telling us that the execution succeeded. We can expand the result to check the OpenAI response.
{
  "statusCode": 200,
  "body": "\n\nAWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). Lambda allows developers to run code without managing any infrastructure. Lambda executes code in response to certain events, such as an HTTP request or an upload to an Amazon S3 bucket. Lambda is compatible with a variety of programming languages, including Node.js, Java, Python and C#."
}
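The same test can also be run outside the console by invoking the function with boto3; the sketch below assumes the function name chatGPT and the test event used above.

import json
import boto3

# Invoke the chatGPT function with the same test event used in the console.
client = boto3.client('lambda')
response = client.invoke(
    FunctionName='chatGPT',
    Payload=json.dumps({'question': 'Provide a brief description of aws lambda'}))

# The payload contains the dictionary returned by lambda_handler.
result = json.loads(response['Payload'].read())
print(result['body'])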
Deploying the Lambda function
To publish the function, we need to create a trigger that calls it. Since we want to create an API, we will use the API Gateway trigger. Click the Add trigger button, choose API Gateway, and provide the following parameters:
- API Gateway
  - Intent: Create a new API
  - API Type: REST API
  - Security: Open
Once created, the trigger will provide a URL to call the API.
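With that URL you can call the API over HTTP. The sketch below uses the requests library and a placeholder URL; replace it with the one shown in the trigger details. Note that, depending on the integration, API Gateway may deliver the JSON payload as a string in event['body'], in which case the handler needs to parse it with json.loads(event['body']) instead of reading event['question'] directly.

import requests

# Placeholder: replace with the URL provided by the API Gateway trigger.
url = 'https://abc123.execute-api.us-east-1.amazonaws.com/default/chatGPT'

response = requests.post(url, json={'question': 'Provide a brief description of aws lambda'})
print(response.status_code)
print(response.text)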
Top comments (2)
I followed your excellent instructions and am implementing this in a Foundry VTT module, but I can't seem to get the requests to work with CORS. I have tried so many things. Here is the error I'm getting. Any suggestions?
Hi,
I don't know about Foundry VTT module development, but in general, CORS errors are solved on the server side by setting the appropriate Access-Control-Allow-Origin header in the response, as in the sketch below.
Regards,
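For reference, here is a minimal sketch of adding that header to the Lambda response. It assumes the API Gateway trigger uses a Lambda proxy integration; with a non-proxy REST API integration, the header would be configured in API Gateway instead. The answer value is a placeholder for the OpenAI response text.

import json

def lambda_handler(event, context):
    # ... call the OpenAI API as in the post and build the answer ...
    answer = "example answer"  # placeholder for the OpenAI response text

    return {
        'statusCode': 200,
        'headers': {
            # Allow any origin for learning purposes; restrict this in production.
            'Access-Control-Allow-Origin': '*'
        },
        'body': json.dumps(answer)
    }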