What OpenAI’s latest updates mean for AI builders

Do we still need embeddings?

GPT-4 Turbo now has a 128K context window, or "more than 300 pages of text in a single prompt". And you can now upload files via the API. For most "chat with your document" use cases, this means you can upload the file wholesale and just ask questions.

Or you can hand your data to the new Assistants API and let it handle retrieval. You give it your file and it runs its own vector search over it when required. It also remembers the full chat history, so you don't need to store that yourself either.

This also simplifies your architecture.
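
Here's a minimal sketch of that flow, assuming the v1 openai Python SDK and the Assistants beta as announced. The file name, model name, and question are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the document once; no chunking or embedding pipeline needed.
file = client.files.create(
    file=open("report.pdf", "rb"),
    purpose="assistants",
)

# With the retrieval tool, the assistant runs its own vector search over the file.
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="Answer questions using the attached document.",
    tools=[{"type": "retrieval"}],
    file_ids=[file.id],
)

# Threads keep the chat history server-side, so there's nothing to save yourself.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What are the key findings in this report?",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```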

Better math

The Assistants API can now run sandboxed Python via the Code Interpreter tool. Math hasn't been GPT's strong point until now.

Combine this with retrieval, and you have a really powerful data analysis tool.
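
A rough sketch of wiring up Code Interpreter, under the same SDK assumptions as above; the prompt is just an example:

```python
from openai import OpenAI

client = OpenAI()

# code_interpreter gives the assistant a sandboxed Python runtime,
# so arithmetic is actually computed rather than guessed at by the model.
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="Use Python to compute answers precisely.",
    tools=[{"type": "code_interpreter"}],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the compound interest on $10,000 at 4.5% over 7 years?",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```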

Multimodal

You can start building truly multimodal apps with the new Vision, DALL-E 3, and text-to-speech endpoints.

Read a table and output a chart. Look at an image and read a summary of it out loud. Brainstorm design ideas verbally and have it iterate on a design.
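
A sketch of chaining two of those endpoints, Vision into text-to-speech. The image URL is a placeholder and the model names assume the preview releases:

```python
from openai import OpenAI

client = OpenAI()

# Vision: summarise an image passed in by URL.
description = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarise this chart in two sentences."},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
    max_tokens=200,
).choices[0].message.content

# Text-to-speech: read the summary out loud.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=description,
)
speech.stream_to_file("summary.mp3")
```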

Cheaper

GPT-3.5 and GPT-4 are now 3x cheaper, and fine-tuning is 4x cheaper.

If you're a SaaS competing on price, I think this is the time to reconsider your business model.

But this is great news for bespoke builds. The ongoing maintenance cost for your clients will be much more attractive.

Faster

Calling multiple functions in a single message means fewer round trips to the API. This is great for the user experience.
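
A sketch of what that looks like, with a hypothetical get_weather function defined for illustration; a single response can now carry several tool calls:

```python
import json
from openai import OpenAI

client = OpenAI()

# get_weather is a made-up function schema, just to show the shape.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Compare the weather in London and Tokyo."}],
    tools=tools,
)

# One response can include multiple tool_calls, so both lookups
# come back after a single round trip instead of two.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(call.function.name, args)
```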

Rate limits have also doubled for GPT-4.

More accurate, more reproducible

There's a new "JSON mode" that instructs the API to always return valid JSON. This is really useful for returning structured output in a chat session. JSON mode is available for GPT-3.5 too.
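
A minimal example; note the API expects the word "JSON" to appear somewhere in your messages when this mode is on:

```python
from openai import OpenAI

client = OpenAI()

# response_format forces syntactically valid JSON in the reply.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Extract the name and email as JSON."},
        {"role": "user", "content": "You can reach me at jane@example.com. Best, Jane"},
    ],
)
print(response.choices[0].message.content)
# e.g. {"name": "Jane", "email": "jane@example.com"}
```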

There is also a new "seed" parameter to make outputs reproducible. Useful if you're integrating the model into a workflow that requires a high level of repeatability.
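
A quick sketch; the returned system_fingerprint tells you when a backend change might affect determinism:

```python
from openai import OpenAI

client = OpenAI()

# The same seed plus the same parameters should yield (mostly) the same output.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    seed=42,
    temperature=0,
    messages=[{"role": "user", "content": "Name three prime numbers."}],
)
print(response.system_fingerprint)
print(response.choices[0].message.content)
```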


What this means for our own AI app, bunni.ai

  • It's harder to build and grow general-purpose AI tools now. We've started to focus on a niche (researchers) and a specific use case ("summarisation").
  • We'll continue to innovate on pricing and UX.
  • The new OpenAI API updates give us much more to work with, especially for our custom-build clients.

Follow me on X for comments on AI, development and SaaS: https://twitter.com/farez
