Ever wished you could simply type a prompt and have an AI whip up an engaging poll for your team? Well, buckle up! In this article, we're diving headfirst into the world of Structured Outputs with OpenAI, and we're doing it with style—by building a poll generator.
At Kollabe, we've integrated this exact functionality to help teams create engaging polls instantly. The best part? The implementation is surprisingly straightforward, and I'll show you exactly how we did it.
What Are Structured Outputs?
When you work with AI, you might be used to getting back long paragraphs of text. But what if you need machine-readable, consistent JSON data, something that won't leave you questioning whether "Option A" means "Yes" or "Definitely, yes"? Enter Structured Outputs.
This feature is a game-changer because it guarantees that the AI's response will match your specified format. No more hoping the AI will format things correctly or writing complex parsing logic—you define a schema, and OpenAI ensures the output matches it exactly. This not only simplifies downstream processing but also allows you to use libraries like Zod to validate and enforce the schema of your output.
If you're already familiar with using JSON data with OpenAI, you might be wondering: "What's the difference between Structured Outputs and OpenAI's JSON mode?"
Structured Outputs is the evolution of JSON mode. While both ensure valid JSON is produced, only Structured Outputs ensures schema adherence.
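To make that difference concrete, here's a minimal sketch using the official `openai` Node SDK (the model name, prompts, and schema here are illustrative, and it assumes an `OPENAI_API_KEY` in your environment): JSON mode only promises syntactically valid JSON, while Structured Outputs also pins down the shape.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// JSON mode: valid JSON, but the keys and types are up to the model.
// Note: JSON mode expects the prompt to mention JSON somewhere.
const jsonMode = await openai.chat.completions.create({
  model: "gpt-4o-2024-08-06",
  messages: [{ role: "user", content: "Create a lunch poll. Respond in JSON." }],
  response_format: { type: "json_object" },
});

// Structured Outputs: the JSON is guaranteed to match this schema
const structured = await openai.chat.completions.create({
  model: "gpt-4o-2024-08-06",
  messages: [{ role: "user", content: "Create a lunch poll." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "Poll",
      schema: {
        type: "object",
        properties: {
          question: { type: "string" },
          options: { type: "array", items: { type: "string" } },
        },
        required: ["question", "options"],
        additionalProperties: false,
      },
      strict: true,
    },
  },
});

console.log(jsonMode.choices[0]?.message.content);   // some JSON, some shape
console.log(structured.choices[0]?.message.content); // JSON matching the schema
```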
Supported Models
At the time of writing (February 2025), structured outputs are available on the following OpenAI models:
- `o3-mini-2025-01-31` and later
- `o1-2024-12-17` and later
- `gpt-4o-mini-2024-07-18` and later
- `gpt-4o-2024-08-06` and later
Make sure to check OpenAI's documentation for the most up-to-date list of supported models, as new ones are added regularly. Also confirm that your account has access to these models, since availability can vary across OpenAI's developer tiers.
The Poll Generator 🙋‍♂️
Whether you're gathering feedback from your team, running a community survey, or just trying to decide where to go for lunch, a well-crafted poll can make all the difference. Our goal is simple: the user provides a prompt (maybe something like "What's the best time for our weekly team sync?") and the AI returns a poll with a clear, concise question and a set of engaging options.
Before structured outputs, implementations relied on hoping the AI would format things correctly or using complex prompt engineering. Now, with structured outputs, we can streamline our code and let OpenAI handle the heavy lifting, all while ensuring the output is exactly what we expect.
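For contrast, here's a rough sketch of that older pattern (purely illustrative, not Kollabe's actual code): describe the shape you want in the prompt, then parse defensively and hope.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // assumes OPENAI_API_KEY is set

// Prompt engineering: describe the shape in prose and hope the model complies
const completion = await openai.chat.completions.create({
  model: "gpt-4o-2024-08-06",
  messages: [
    {
      role: "user",
      content:
        'Create a poll about weekly team sync times. Respond ONLY with JSON shaped like {"question": "...", "options": ["..."]}.',
    },
  ],
});

let poll: { question: string; options: string[] };
try {
  // The reply might arrive wrapped in markdown fences, prefixed with commentary,
  // or with renamed keys, so parsing is where things usually fell apart.
  poll = JSON.parse(completion.choices[0]?.message.content ?? "");
} catch {
  // Typical fallbacks: retry the request, strip fences with a regex, or give up.
  throw new Error("Model did not return parseable JSON");
}
```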
Let's Get Our Hands Dirty (with TypeScript) ✍️
Below is the clean, modern, and fully TypeScript-powered code for our poll generator using structured outputs. Notice how we define a JSON schema for our poll, use Zod to validate the response, and then call the OpenAI API with the `response_format` parameter.
import type OpenAI from "openai";
import type { ChatCompletionCreateParamsBase } from "openai/resources/chat/completions";
import { z } from "zod";

import { languageMap, openai } from "@/lib/openai/index";

/**
 * Define the JSON schema for our poll using OpenAI's structured output format.
 * This schema tells the AI exactly what we expect—a question and an array of options.
 */
const pollSchema: OpenAI.ResponseFormatJSONSchema["json_schema"] = {
  name: "CreateAIPoll",
  description:
    "Create a high-quality poll that engages participants and generates meaningful discussion.",
  schema: {
    type: "object",
    properties: {
      question: {
        type: "string",
        description:
          "A clear and concise question that can be answered by selecting one of the provided options.",
      },
      options: {
        type: "array",
        description:
          "A list of possible answers for the poll. Each option should be distinct and cover a range of likely responses. Optionally, start each option with an emoji to make it more engaging.",
        items: {
          type: "string",
          description:
            "A possible answer to the poll question. Ensure each option is concise and unambiguous.",
        },
      },
    },
    required: ["question", "options"],
    additionalProperties: false,
  },
  strict: true,
};

/**
 * Create a Zod schema to validate the API response.
 */
const aiPoll = z.object({
  question: z.string(),
  options: z.array(z.string()),
});

export type AIPoll = z.infer<typeof aiPoll>;

/**
 * getAIPoll calls OpenAI's chat completion API using structured outputs.
 *
 * @param prompt - The prompt provided by the user to guide poll creation.
 * @param languageCode - The language in which the poll should be generated.
 * @param params - Additional parameters for the OpenAI API.
 * @returns A promise that resolves to a validated poll object.
 */
export const getAIPoll = async (
  prompt: string,
  languageCode: string,
  params: Partial<ChatCompletionCreateParamsBase> = {}
): Promise<AIPoll> => {
  const resp = await openai.beta.chat.completions.parse({
    ...params,
    stream: false,
    model: "gpt-4o-2024-11-20",
    messages: [
      {
        role: "system",
        content:
          "You are a poll generator AI. Your task is to generate engaging polls that spark meaningful discussions. Adapt your tone and style based on the context of the user's prompt.",
      },
      {
        role: "user",
        content: `The prompt for the poll is: ${prompt}. Generate the response in ${languageMap[languageCode]} (${languageCode}).`,
      },
    ],
    response_format: {
      type: "json_schema",
      json_schema: pollSchema,
    },
  });

  // Parse and validate the response using Zod
  return aiPoll.parse(resp.choices[0]?.message.parsed);
};
Code Walkthrough 🥾
- Defining the Schema:
  - We begin by declaring a JSON schema (`pollSchema`) that precisely outlines what our poll should include. The schema specifies two properties:
    - `question`: A string that represents the poll's question
    - `options`: An array of strings, each representing a possible answer
  - The `additionalProperties: false` flag ensures that no extra data sneaks into our response
  - Most importantly, OpenAI guarantees that the response will match this schema exactly
  - Including `strict`, `required`, and `additionalProperties` is required when using `json_schema`
- Setting Up Zod:
  - Next, we create a Zod schema (`aiPoll`) to validate the response from the AI
  - This acts as a safety net—if somehow the response doesn't match our schema (which shouldn't happen with structured outputs), our code will throw an error rather than processing invalid data
  - It also gives us great TypeScript types through type inference (see the sketch after this list)
- Calling the API:
  - In the `getAIPoll` function, we call `openai.beta.chat.completions.parse` with our prompt and our structured `response_format`
  - The `messages` array contains two roles:
    - System: Sets the behavior of the AI as a poll generator
    - User: Passes along the specific prompt and desired language
  - The `response_format` parameter tells OpenAI to strictly adhere to our schema
- Validation and Return:
  - Finally, we parse the returned message using our Zod schema
  - Since OpenAI guarantees the format, this validation step is more about TypeScript type safety than runtime checking
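As noted in the Zod step above, here's a small standalone sketch of what that validation and type inference give you. It uses Zod's `safeParse` for illustration; the article's `getAIPoll` uses `parse`, which throws instead.

```typescript
import { z } from "zod";

const aiPoll = z.object({
  question: z.string(),
  options: z.array(z.string()),
});

// Inferred as { question: string; options: string[] }
type AIPoll = z.infer<typeof aiPoll>;

// safeParse returns a result object instead of throwing
const result = aiPoll.safeParse({
  question: "Where should we go for lunch?",
  options: ["🍕 Pizza", "🍣 Sushi", "🥗 Salad"],
});

if (result.success) {
  const poll: AIPoll = result.data; // fully typed from here on
  console.log(`${poll.question} (${poll.options.length} options)`);
} else {
  console.error(result.error.issues); // exactly which fields didn't match
}
```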
Why This Approach Rocks 🪨
Using structured outputs with OpenAI has several major benefits:
- Guaranteed Format: The AI's output is guaranteed to follow your schema. No more "I expected an array, but got a string" mishaps
- Type Safety: Combined with Zod, you get end-to-end type safety from the AI response to your application code
- Simplified Processing: No need for complex parsing or error handling—the data comes back exactly as you expect
- Multilingual Support: The same structure works across any language, making it easy to create polls for global teams (see the sketch after this list)
- Clean Integration: The code integrates smoothly with any modern TypeScript/JavaScript application
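The multilingual point above is mostly a property of the prompt, not the schema. Here's a hypothetical sketch of how a `languageMap` helper might look and how the same `getAIPoll` function handles different locales; the real map lives in `@/lib/openai/index` and isn't shown in this article, and the import path below is made up for illustration.

```typescript
import { getAIPoll } from "./getAIPoll"; // hypothetical path to the function above

// A guess at the shape of languageMap; Kollabe's real implementation is internal
const languageMap: Record<string, string> = {
  en: "English",
  fr: "French",
  de: "German",
  ja: "Japanese",
};

// Same schema, same code path, different languages
const frenchPoll = await getAIPoll("Où allons-nous déjeuner vendredi ?", "fr");
const japanesePoll = await getAIPoll("次のチームイベントは何にしますか？", "ja");

console.log(frenchPoll.question, japanesePoll.question);
```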
At Kollabe, this approach has allowed us to focus on creating great user experiences instead of wrestling with AI response parsing. Our users can generate polls in seconds, knowing they'll always get well-structured, usable results.
Real-World Applications 🌎
The applications of structured outputs extend far beyond simple polls, but here's a concrete example of how a team might use our generator to gather feedback after a project launch:
const prompt = "What aspects of our recent project launch would you like to discuss?";
const poll = await getAIPoll(prompt, "en");
console.log(poll);
// Output:
// {
// question: "Which aspect of our recent project launch had the biggest impact on our success?",
// options: [
// "🚀 The pre-launch preparation and planning",
// "👥 Team collaboration and communication",
// "🛠️ Technical execution and problem-solving",
// "🎯 Marketing strategy and customer engagement",
// "📈 Post-launch support and improvements"
// ]
// }
The End 👋
In this article, we explored how to use OpenAI's structured outputs to build a poll generator that is both functional and fun. We walked through the key elements:
- Defining a JSON schema that tells the AI exactly what output is expected
- Validating the output using Zod to catch any discrepancies
- Calling the OpenAI API with a clear set of instructions
- Most importantly, getting guaranteed structured responses every time
If you're interested in seeing this in action, check out Kollabe. We've implemented this exact approach to help teams create engaging polls instantly, and we're constantly amazed by how well it works.
Whether you're building your own implementation or looking for a ready-to-use solution, structured outputs are a game-changer for working with AI. They transform the sometimes unpredictable world of AI responses into something you can rely on, letting you focus on building great features instead of parsing responses.
Happy polling!
Top comments (4)
Amazing article!
Thank you for sharing!
Turns out that other AI giants offer similar functionality, e.g. Gemini's structured output API, and while they differ a bit, it should be possible to implement "bring your own AI" solutions 🤔
Good callout! I believe Amazon also allows this with their foundation models.
Thanks for sharing 😃 I'm wondering how you're handling OpenAI refusing to answer because you triggered the content filter or something. It will return valid JSON with blank values, or if you required a field it will populate it with random things like "invalid".
It's not consistent and I don't know how to handle these cases.
Hey @clawfire
If you check out the docs, they show that you can handle these edge cases using the "finish_reason" value on the message; the docs list all of the possible cases.
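Roughly, the handling looks like this. It's a sketch that reuses `openai`, `pollSchema`, `aiPoll`, and `AIPoll` from the article, and the thrown errors are just placeholders; adapt them to your own error strategy.

```typescript
export const getAIPollSafe = async (prompt: string): Promise<AIPoll> => {
  const resp = await openai.beta.chat.completions.parse({
    model: "gpt-4o-2024-11-20",
    messages: [{ role: "user", content: `The prompt for the poll is: ${prompt}.` }],
    response_format: { type: "json_schema", json_schema: pollSchema },
  });

  const choice = resp.choices[0];
  if (!choice) throw new Error("No completion returned");

  // The model declined to answer; refusal is populated instead of parsed
  if (choice.message.refusal) {
    throw new Error(`Model refused: ${choice.message.refusal}`);
  }

  // The output hit the token limit before the JSON was complete
  if (choice.finish_reason === "length") {
    throw new Error("Response truncated; raise max_tokens or shorten the prompt");
  }

  // The content filter blocked the output
  if (choice.finish_reason === "content_filter") {
    throw new Error("Response was flagged by the content filter");
  }

  // Only now is it safe to validate and return the parsed poll
  return aiPoll.parse(choice.message.parsed);
};
```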