Tech Tim (@TechTim42)

What LLMs Don't Help With in SaaS

LLMs, such as OpenAI's GPT models behind ChatGPT, have become a prominent topic in natural language processing (NLP) and artificial intelligence (AI). These models can generate human-like text responses, making them valuable for a wide range of applications, including SaaS products.

Here are some examples:

  • engshell: use natural language in your shell to run commands on your computer.
  • konjer: talk to your books.
  • gptduck: learn about a code repository through an LLM.

It has been nearly half a year since the release of ChatGPT. Tons of SaaS companies have integrated, or want to integrate, OpenAI's GPT-3.5 or GPT-4 into their products. However, before implementing or even planning such an integration, there are two challenges I have noticed.

The Cost of LLM API Rates

Photo by 金 运 on Unsplash

These costs can vary depending on the type of LLM being used, the size of the input, and the number of requests made. For example, OpenAI's GPT-4 model charges $0.03 per 1,000 prompt tokens for the 8K-context version, while the GPT-3.5-Turbo model charges $0.002 per 1,000 tokens. SaaS providers need to account carefully for these costs and communicate them transparently to their users to avoid surprises or unexpected charges.
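To put those numbers in perspective, here is a back-of-the-envelope estimate in Python. The per-1K-token prices are the ones quoted above; the request volume and tokens per request are made-up illustrative figures.

```python
# Rough monthly cost estimate for an LLM-backed SaaS feature.
# Prices are the per-1K-token rates quoted above; usage numbers are invented.

PRICE_PER_1K_TOKENS = {
    "gpt-4-8k": 0.03,        # USD per 1K prompt tokens (8K context)
    "gpt-3.5-turbo": 0.002,  # USD per 1K tokens
}

def monthly_cost(model: str, requests_per_month: int, tokens_per_request: int) -> float:
    """Estimate the monthly API bill for a given model and usage pattern."""
    total_tokens = requests_per_month * tokens_per_request
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS[model]

# Example: 100,000 requests a month, ~1,500 tokens each.
for model in PRICE_PER_1K_TOKENS:
    print(f"{model}: ${monthly_cost(model, 100_000, 1_500):,.2f}/month")
```

At that volume the gap is stark: roughly $4,500/month on GPT-4 versus $300/month on GPT-3.5-Turbo.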

Bringing a Low SLA into Your SaaS

Photo by Isaac Smith on Unsplash

In addition to cost considerations, there is also the issue of the SLA for LLM APIs. OpenAI does not provide a guaranteed SLA for the availability of its APIs, which means that SaaS providers cannot guarantee the availability of their products when relying on these APIs. For example, if a SaaS provider has an original SLA of 99.5%, introducing the GPT-4 API into their product could potentially reduce this SLA due to the lack of a guaranteed SLA for the API.

For the SLA calculation: if a SaaS product is built from two serial tiers, and each tier has an SLA of 90%, the SLA of the whole product is 90% × 90% = 81%. By the same logic, if OpenAI's SLA is unknown and OpenAI is integrated into the core of the product, there is no availability figure the provider can guarantee; contractually the new SLA is effectively 0%, which means the agreement will be broken.
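A minimal sketch of that arithmetic: serially dependent tiers multiply, and an unguaranteed dependency is treated, conservatively, as 0%.

```python
# Composite availability of serially dependent tiers: the product is up
# only when every tier is up, so the per-tier SLAs multiply.

def composite_sla(*tier_slas: float) -> float:
    """Multiply per-tier SLAs (as fractions) into a product-level SLA."""
    result = 1.0
    for sla in tier_slas:
        result *= sla
    return result

print(composite_sla(0.90, 0.90))   # 0.81 -> the 81% from the example above
print(composite_sla(0.995, 0.0))   # 0.0  -> one unguaranteed tier drags it to 0%
```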

How to Overcome These Two Potential Issues?

Photo by John Schnobrich on Unsplash

How to Save on the Cost of Integrating an LLM in SaaS?

There is no perfect way to solve these issues. For the pricing problem, one potential solution is to let users bring their own OpenAI API key. This shifts the cost away from the SaaS provider, since usage is charged to the end users. However, if the keys are stored and managed by the SaaS provider, it needs to ensure that proper security measures are in place to protect the API keys and the data processed by the LLM APIs.
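A minimal sketch of the bring-your-own-key pattern, assuming the official `openai` Python client (v1-style interface); `decrypt` is a hypothetical stand-in for your KMS or secrets manager:

```python
from openai import OpenAI

def decrypt(blob: bytes) -> str:
    # Placeholder: decrypt the stored key via your KMS or secrets manager.
    raise NotImplementedError

def chat_for_user(encrypted_user_key: bytes, prompt: str) -> str:
    api_key = decrypt(encrypted_user_key)  # never log or persist the plaintext key
    client = OpenAI(api_key=api_key)       # usage is billed to the user's account
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The key point is that the user's key is decrypted just in time, scoped to a per-user client for a single request, and never logged or written anywhere.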

There is another issue for products that ask clients or consumers for an API key: it is not really in the SaaS style. A SaaS product becomes popular because it gives users a cross-platform, all-in-one experience (data persistence, computing, and an easy-to-use UI/UX), and users are charged on a single bill. Asking users to bring their own API key largely defeats the purpose of building a SaaS product.

How to Increase the SLA of an LLM App?

Self-host + Replication

One potential solution to these challenges is to self-host LLMs and replicate them across different regions to ensure availability and reduce latency. However, this approach can also be expensive, particularly from a time perspective, as it requires significant resources to set up and maintain the infrastructure needed to host and replicate LLMs.
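One way to picture this is a thin failover layer in front of several regional replicas. Everything below is an assumption for illustration: the endpoint URLs, the request body, and the `text` field in the response are not a real API.

```python
import requests

# Hypothetical self-hosted LLM replicas in different regions.
REPLICAS = [
    "https://llm.us-east.example.com/v1/generate",
    "https://llm.eu-west.example.com/v1/generate",
    "https://llm.ap-south.example.com/v1/generate",
]

def query_llm(prompt: str, timeout: float = 10.0) -> str:
    """Try each replica in turn; the feature stays up if any region is up."""
    last_error = None
    for url in REPLICAS:
        try:
            resp = requests.post(url, json={"prompt": prompt}, timeout=timeout)
            resp.raise_for_status()
            return resp.json()["text"]
        except requests.RequestException as err:
            last_error = err  # fall through to the next region
    raise RuntimeError("all LLM replicas unavailable") from last_error
```

If each replica independently achieves 90% availability, the chance that at least one is up is 1 - 0.1^3, roughly 99.9%, which is the availability gain replication buys you.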

Not Integrate LLM into Core Product

This approach works for SaaS products that do not require the LLM as a core service.

For example, one approach for a Facebook Marketplace-like web application would be to avoid relying on an LLM to extract information from sale listings, and instead utilize the LLM as an optional chat assistant. This would allow the core product functionality to remain independent of the low-SLA LLM, while still enabling users to benefit from LLM capabilities.
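Here is a sketch of what "optional, not core" looks like in code. Both `save_listing` and `call_llm` are hypothetical placeholders for your datastore write and LLM call:

```python
def save_listing(listing: dict) -> None:
    ...  # placeholder: write the listing to your datastore

def call_llm(prompt: str) -> str:
    ...  # placeholder: the optional LLM API call

def create_listing(title: str, price: float, description: str) -> dict:
    # Core flow: no LLM in the critical path, so its SLA cannot take this down.
    listing = {"title": title, "price": price, "description": description}
    save_listing(listing)

    # Optional enhancement: degrade gracefully if the LLM is slow or down.
    try:
        listing["assistant_tip"] = call_llm(f"Suggest a better title: {title}")
    except Exception:
        pass  # the listing is already saved; the assistant is a bonus feature
    return listing
```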

In Conclusion

Integrating LLM APIs into SaaS products can offer significant benefits in functionality and user experience. However, SaaS providers need to carefully weigh the cost, security, and availability considerations that come with these APIs. Letting users bring their own API key can shift costs away from the provider, but it requires proper security measures and weakens the all-in-one SaaS experience. Self-hosting LLMs can provide greater availability and lower latency, but requires significant resources to set up and maintain.
