<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chris Hefley</title>
    <description>The latest articles on DEV Community by Chris Hefley (@chrishefley).</description>
    <link>https://dev.to/chrishefley</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1787069%2F39a3e82b-20a3-4a4f-95f8-18856151bcc4.png</url>
      <title>DEV Community: Chris Hefley</title>
      <link>https://dev.to/chrishefley</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chrishefley"/>
    <language>en</language>
    <item>
      <title>How are you dealing with OpenAI "outages" in your applications?</title>
      <dc:creator>Chris Hefley</dc:creator>
      <pubDate>Mon, 15 Jul 2024 20:44:42 +0000</pubDate>
      <link>https://dev.to/chrishefley/how-are-you-dealing-with-openai-outages-in-your-applications-4glk</link>
      <guid>https://dev.to/chrishefley/how-are-you-dealing-with-openai-outages-in-your-applications-4glk</guid>
      <description>&lt;p&gt;We wanted to add some cool AI chat features to our product – and getting it to work with code examples was easy – but reliably scaling it and doing it securely was a lot harder.&lt;/p&gt;

&lt;p&gt;Even at moderate scale, keeping it reliable and available was a major headache. Provider outages caused our product to fail at the worst times (like product demos).&lt;/p&gt;

&lt;p&gt;What does it take to get our AI features Production-Ready?&lt;/p&gt;

&lt;p&gt;We realized we needed multiple LLM providers so that we could gracefully fail over to another when, not if, one of them had an outage.&lt;/p&gt;

&lt;p&gt;Different providers had different rate limits, so we added the ability to retry a request with a different provider whenever we hit a rate limit.&lt;/p&gt;
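&lt;p&gt;A minimal sketch of that failover-plus-retry loop (the names here, like &lt;code&gt;call_provider&lt;/code&gt;, are hypothetical stand-ins, not our actual API):&lt;/p&gt;

```python
import time

# Hypothetical error types; real code would map each vendor's
# rate-limit (e.g. HTTP 429) and outage responses onto these.
class RateLimited(Exception):
    pass

class ProviderDown(Exception):
    pass

def call_provider(name, prompt):
    # Placeholder stub: imagine this wraps the provider's chat API.
    # Here "primary" simulates an outage so failover is exercised.
    if name == "primary":
        raise ProviderDown("simulated outage")
    return f"{name} answered: {prompt}"

def complete_with_failover(prompt, providers, backoff_seconds=0.0):
    """Try each provider in order; fail over on outages and rate limits."""
    last_error = None
    for name in providers:
        try:
            return call_provider(name, prompt)
        except (RateLimited, ProviderDown) as err:
            last_error = err
            if backoff_seconds:
                # Brief pause before moving on to the next provider.
                time.sleep(backoff_seconds)
    raise RuntimeError(f"all providers failed: {last_error}")

print(complete_with_failover("hello", ["primary", "secondary"]))
```

&lt;p&gt;In production the stub would wrap each vendor's SDK and distinguish rate limits from hard outages, but the ordered-fallback logic stays the same.&lt;/p&gt;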

&lt;p&gt;And let’s not forget EU customers. Without data tenancy settings to route AI chat requests to LLM providers in the EU, they wouldn’t be able to use our software.&lt;/p&gt;

&lt;p&gt;We added response caching, a developer control panel, customer token allowances, secure API key storage, load shedding, and PII data redaction, too. &lt;/p&gt;

&lt;p&gt;And now we’ve packaged up everything we’ve learned for you to use in your applications. &lt;/p&gt;

&lt;p&gt;Head on over to &lt;a href="https://llmasaservice.io/" rel="noopener noreferrer"&gt;https://llmasaservice.io/&lt;/a&gt; and check it out. We're looking for application developers to pilot it. Get in touch, and let's get your AI features into production!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>saas</category>
    </item>
  </channel>
</rss>
