<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bharatkumar Subramanian</title>
    <description>The latest articles on DEV Community by Bharatkumar Subramanian (@reachbrt).</description>
    <link>https://dev.to/reachbrt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3285013%2F656ee8b8-6b39-4646-9263-f48e4d5f46a6.png</url>
      <title>DEV Community: Bharatkumar Subramanian</title>
      <link>https://dev.to/reachbrt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/reachbrt"/>
    <language>en</language>
    <item>
      <title>Introducing AIVue: Enterprise-Grade AI Components for Vue.js</title>
      <dc:creator>Bharatkumar Subramanian</dc:creator>
      <pubDate>Sun, 22 Jun 2025 23:18:26 +0000</pubDate>
      <link>https://dev.to/reachbrt/introducing-aivue-enterprise-grade-ai-components-for-vuejs-487o</link>
      <guid>https://dev.to/reachbrt/introducing-aivue-enterprise-grade-ai-components-for-vuejs-487o</guid>
      <description>&lt;p&gt;AIVue is a comprehensive suite of AI-powered Vue.js components that makes integrating advanced AI capabilities into your applications simple and efficient. With support for multiple AI providers, database integration, and enterprise-ready features, AIVue helps developers build intelligent interfaces without the complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Integrating AI into Vue applications has traditionally been challenging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inconsistent APIs across different AI providers&lt;/li&gt;
&lt;li&gt;Complex state management for chat interfaces&lt;/li&gt;
&lt;li&gt;Lack of enterprise features like storage and analytics&lt;/li&gt;
&lt;li&gt;Poor developer experience with minimal TypeScript support&lt;/li&gt;
&lt;li&gt;Limited fallback options when API keys aren't available&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introducing AIVue
&lt;/h2&gt;

&lt;p&gt;AIVue solves these problems with a suite of ready-to-use components:&lt;/p&gt;

&lt;h3&gt;
  
  
  @aivue/chatbot (v2.0.0)
&lt;/h3&gt;

&lt;p&gt;Our flagship component offers enterprise-grade conversational AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🗄️ &lt;strong&gt;Database Integration&lt;/strong&gt;: Support for localStorage, Supabase, Firebase, MongoDB, PostgreSQL&lt;/li&gt;
&lt;li&gt;🎤 &lt;strong&gt;Voice Integration&lt;/strong&gt;: Speech-to-text input and text-to-speech responses&lt;/li&gt;
&lt;li&gt;🤖 &lt;strong&gt;Multi-Model AI&lt;/strong&gt;: Intelligent switching between AI providers&lt;/li&gt;
&lt;li&gt;🧵 &lt;strong&gt;Conversation Threading&lt;/strong&gt;: Organize chats by topics&lt;/li&gt;
&lt;li&gt;📎 &lt;strong&gt;Advanced File Upload&lt;/strong&gt;: PDFs, documents, images, audio&lt;/li&gt;
&lt;li&gt;🔒 &lt;strong&gt;Proxy Support &amp;amp; Internationalization&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  @aivue/image-caption
&lt;/h3&gt;

&lt;p&gt;AI-powered image captioning with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI Vision model integration&lt;/li&gt;
&lt;li&gt;Drag &amp;amp; drop upload with preview&lt;/li&gt;
&lt;li&gt;URL support for remote images&lt;/li&gt;
&lt;li&gt;Batch processing capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  @aivue/analytics
&lt;/h3&gt;

&lt;p&gt;Real-time insights into your AI usage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Usage metrics and conversation analytics&lt;/li&gt;
&lt;li&gt;Performance tracking across AI models&lt;/li&gt;
&lt;li&gt;Customizable dashboards and reporting&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  @aivue/core
&lt;/h3&gt;

&lt;p&gt;The foundation that powers all components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-provider support (OpenAI, Claude, Gemini, HuggingFace, Ollama, DeepSeek)&lt;/li&gt;
&lt;li&gt;Automatic fallback when API keys aren't available&lt;/li&gt;
&lt;li&gt;Unified API for chat, embeddings, and validation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install the components you need&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; @aivue/core @aivue/chatbot

&lt;span class="c"&gt;# Import and use in your Vue app&lt;/span&gt;
import &lt;span class="o"&gt;{&lt;/span&gt; AIClient &lt;span class="o"&gt;}&lt;/span&gt; from &lt;span class="s1"&gt;'@aivue/core'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
import &lt;span class="o"&gt;{&lt;/span&gt; AiChatWindow &lt;span class="o"&gt;}&lt;/span&gt; from &lt;span class="s1"&gt;'@aivue/chatbot'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
import &lt;span class="s1"&gt;'@aivue/chatbot/style.css'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

// Create an AI client
const aiClient &lt;span class="o"&gt;=&lt;/span&gt; new AIClient&lt;span class="o"&gt;({&lt;/span&gt;
  provider: &lt;span class="s1"&gt;'openai'&lt;/span&gt;,
  apiKey: import.meta.env.VITE_OPENAI_API_KEY,
  model: &lt;span class="s1"&gt;'gpt-4o'&lt;/span&gt;
&lt;span class="o"&gt;})&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then in your Vue template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight vue"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;template&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;AiChatWindow&lt;/span&gt;
    &lt;span class="na"&gt;:client=&lt;/span&gt;&lt;span class="s"&gt;"aiClient"&lt;/span&gt;
    &lt;span class="na"&gt;title=&lt;/span&gt;&lt;span class="s"&gt;"AI Assistant"&lt;/span&gt;
    &lt;span class="na"&gt;placeholder=&lt;/span&gt;&lt;span class="s"&gt;"Ask me anything..."&lt;/span&gt;
    &lt;span class="na"&gt;:show-avatars=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt;
    &lt;span class="na"&gt;theme=&lt;/span&gt;&lt;span class="s"&gt;"light"&lt;/span&gt;
    &lt;span class="na"&gt;:streaming=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt;
    &lt;span class="na"&gt;:markdown=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt;
    &lt;span class="na"&gt;system-prompt=&lt;/span&gt;&lt;span class="s"&gt;"You are a helpful AI assistant."&lt;/span&gt;
  &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="k"&gt;template&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Vue Compatibility
&lt;/h2&gt;

&lt;p&gt;AIVue works with both Vue 2 and Vue 3, automatically detecting which version you're using and providing the appropriate compatibility layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Demo&lt;/strong&gt;: &lt;a href="https://aivue.netlify.app/" rel="noopener noreferrer"&gt;https://aivue.netlify.app/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/reachbrt/vueai" rel="noopener noreferrer"&gt;https://github.com/reachbrt/vueai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NPM&lt;/strong&gt;: &lt;a href="https://www.npmjs.com/org/aivue" rel="noopener noreferrer"&gt;@aivue packages&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;We're continuously improving AIVue with new features and components. Coming soon:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document analysis and summarization&lt;/li&gt;
&lt;li&gt;Semantic search integration&lt;/li&gt;
&lt;li&gt;Advanced RAG capabilities&lt;/li&gt;
&lt;li&gt;Custom model fine-tuning support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Join us in building the future of AI-powered Vue applications!&lt;/p&gt;

</description>
      <category>vue</category>
      <category>nuxt</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building a Real-Time OpenAI Usage Monitor with Python: From $500 Bill Shock to Open Source Solution</title>
      <dc:creator>Bharatkumar Subramanian</dc:creator>
      <pubDate>Sun, 22 Jun 2025 15:57:23 +0000</pubDate>
      <link>https://dev.to/reachbrt/building-a-real-time-openai-usage-monitor-with-python-from-500-bill-shock-to-open-source-solution-3l2a</link>
      <guid>https://dev.to/reachbrt/building-a-real-time-openai-usage-monitor-with-python-from-500-bill-shock-to-open-source-solution-3l2a</guid>
      <description>&lt;h1&gt;
  
  
  I Built an Advanced OpenAI Usage Monitor After a $500 Bill Shock
&lt;/h1&gt;

&lt;p&gt;Three months ago, I opened my OpenAI billing dashboard and nearly choked on my coffee. &lt;strong&gt;$500.&lt;/strong&gt; For what I thought was a "small experiment."&lt;/p&gt;

&lt;p&gt;That shock led me to build something that's now helping hundreds of developers optimize their AI costs. Here's the story and the solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  😱 The Problem
&lt;/h2&gt;

&lt;p&gt;Like many developers, I was flying blind with OpenAI costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using GPT-4 for everything (I didn't know it costs &lt;strong&gt;30x more&lt;/strong&gt; than GPT-3.5!)&lt;/li&gt;
&lt;li&gt;No real-time cost tracking&lt;/li&gt;
&lt;li&gt;Usage spikes during debugging sessions
&lt;/li&gt;
&lt;li&gt;Zero visibility into which features consumed the most tokens&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The wake-up call&lt;/strong&gt;: The same simple task cost $0.20 with GPT-3.5-turbo vs. $6.00 with GPT-4.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ The Solution I Built
&lt;/h2&gt;

&lt;p&gt;I created a comprehensive monitoring system with these core features:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Real-Time Monitoring
&lt;/h3&gt;

&lt;p&gt;Beautiful terminal interface showing live costs and usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;✦ ✧ ✦ ✧ OPENAI TOKEN MONITOR ✦ ✧ ✦ ✧ 
============================================================

📊 Token Usage:    🟢 [██████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░] 12.7%

⏳ Time to Reset:  ⏰ [███████████████████████████████████░░░░░░░░░░░░░░░] 215h 5m

🎯 Tokens:         63,375 / 500,000 (436,625 left)
💰 Cost:           $2.5350
🤖 Model:          gpt-4
🔥 Burn Rate:      93.3 tokens/min

🏁 Predicted End: 2025-06-25 06:53
🔄 Monthly Reset: 2025-07-01 00:00 (8 days)

⚠️  Tokens will run out BEFORE monthly reset!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
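&lt;p&gt;Those progress bars are just text. As a rough illustration (a hypothetical helper, not the monitor's actual drawing code), a bar like the one above can be rendered from the used/limit numbers in a few lines:&lt;/p&gt;

```python
def render_bar(used, limit, width=50):
    """Render a text progress bar like the monitor display above."""
    fraction = used / limit if limit else 0.0
    filled = int(fraction * width)                  # whole filled cells
    bar = "█" * filled + "░" * (width - filled)
    return f"[{bar}] {fraction * 100:.1f}%"

# the numbers from the display above: 63,375 of 500,000 tokens
print(render_bar(63_375, 500_000))
```

&lt;p&gt;With those inputs the helper reproduces the 12.7% bar shown above.&lt;/p&gt;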



&lt;h3&gt;
  
  
  2. Advanced Analytics
&lt;/h3&gt;

&lt;p&gt;Model-specific breakdowns that reveal optimization opportunities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📊 USAGE ANALYTICS - Last 7 Days
============================================================

🤖 Model Usage Breakdown:
Model           Tokens       Cost       Calls    %     
-------------------------------------------------------
gpt-4           25,350       $1.01      52       40.0 %
gpt-4-turbo     20,475       $0.82      39       32.3 %
gpt-3.5-turbo   17,550       $0.70      39       27.7 %

⏰ Hourly Usage Pattern:
Hour   Avg Tokens   Calls    Activity            
--------------------------------------------------
00:00  754.2        18       ███████████████
22:00  422.4        38       ████████░░░░░░░
23:00  456.1        74       █████████░░░░░░
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
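&lt;p&gt;A breakdown like this is a simple aggregation over the logged API calls. A minimal sketch (the call-record fields here are assumed for illustration, not the tool's exact schema):&lt;/p&gt;

```python
from collections import defaultdict

def model_breakdown(calls):
    """Aggregate tokens, cost, call count, and token share per model."""
    stats = defaultdict(lambda: {"tokens": 0, "cost": 0.0, "calls": 0})
    total_tokens = sum(c["total_tokens"] for c in calls)
    for c in calls:
        s = stats[c["model"]]
        s["tokens"] += c["total_tokens"]
        s["cost"] += c["cost"]
        s["calls"] += 1
    for s in stats.values():
        s["pct"] = 100 * s["tokens"] / total_tokens if total_tokens else 0.0
    return dict(stats)

sample = [
    {"model": "gpt-4", "total_tokens": 1200, "cost": 0.048},
    {"model": "gpt-3.5-turbo", "total_tokens": 800, "cost": 0.0012},
    {"model": "gpt-4", "total_tokens": 1000, "cost": 0.040},
]
print(model_breakdown(sample))
```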



&lt;h3&gt;
  
  
  3. Smart Budget Management
&lt;/h3&gt;

&lt;p&gt;Set monthly limits and get intelligent alerts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Set budget limits&lt;/span&gt;
./start_openai_monitor.sh budget-50    &lt;span class="c"&gt;# $50/month&lt;/span&gt;
./start_openai_monitor.sh budget-100   &lt;span class="c"&gt;# $100/month&lt;/span&gt;

&lt;span class="c"&gt;# Get smart alerts&lt;/span&gt;
🔔 Token usage exceeded 75% &lt;span class="o"&gt;(&lt;/span&gt;14:23&lt;span class="o"&gt;)&lt;/span&gt;
🔔 High burn rate detected: 520 tokens/min
🔔 Budget alert: &lt;span class="nv"&gt;$45&lt;/span&gt; of &lt;span class="nv"&gt;$50&lt;/span&gt; monthly limit used
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
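&lt;p&gt;The alerts themselves come down to simple threshold checks. Roughly (the 50/75/90% thresholds are the ones listed later in this post; the function shape is my sketch, not the tool's API):&lt;/p&gt;

```python
def check_alerts(used_tokens, token_limit, spent, budget, burn_rate,
                 thresholds=(0.50, 0.75, 0.90), burn_limit=500):
    """Return human-readable alerts for usage, burn-rate, and budget limits."""
    alerts = []
    usage = used_tokens / token_limit
    for t in thresholds:
        if usage >= t:
            alerts.append(f"Token usage exceeded {t:.0%}")
    if burn_rate >= burn_limit:
        alerts.append(f"High burn rate detected: {burn_rate:.0f} tokens/min")
    if spent >= 0.9 * budget:
        alerts.append(f"Budget alert: ${spent:.0f} of ${budget:.0f} monthly limit used")
    return alerts

# 76% of tokens used, $45 of a $50 budget spent, burning 520 tokens/min
print(check_alerts(380_000, 500_000, 45, 50, 520))
```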



&lt;h3&gt;
  
  
  4. Professional Reporting
&lt;/h3&gt;

&lt;p&gt;Export data for team analysis and stakeholder reports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Export to CSV for spreadsheets&lt;/span&gt;
./start_openai_monitor.sh export-csv

&lt;span class="c"&gt;# Export to JSON for integrations&lt;/span&gt;
./start_openai_monitor.sh export-json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
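&lt;p&gt;With the calls stored as plain records, both exports are a few lines of standard-library Python. A sketch of the idea (field names assumed):&lt;/p&gt;

```python
import csv
import io
import json

def export_calls(calls, fmt="csv"):
    """Serialize call records to CSV (spreadsheets) or JSON (integrations)."""
    if fmt == "json":
        return json.dumps(calls, indent=2)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(calls[0].keys()))
    writer.writeheader()
    writer.writerows(calls)
    return buf.getvalue()

sample = [
    {"model": "gpt-4", "total_tokens": 1200, "cost": 0.048},
    {"model": "gpt-3.5-turbo", "total_tokens": 800, "cost": 0.0012},
]
print(export_calls(sample))
```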



&lt;h2&gt;
  
  
  📈 The Results
&lt;/h2&gt;

&lt;h3&gt;
  
  
  My Cost Reduction: 60%
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Before&lt;/strong&gt;: $500/month (all GPT-4)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After&lt;/strong&gt;: $200/month (optimized model mix)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Annual savings&lt;/strong&gt;: $3,600&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Optimization Strategies
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Smart Model Selection
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Task Type&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Simple Q&amp;amp;A&lt;/td&gt;
&lt;td&gt;GPT-4 ($6.00)&lt;/td&gt;
&lt;td&gt;GPT-3.5-turbo ($0.20)&lt;/td&gt;
&lt;td&gt;95%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code review&lt;/td&gt;
&lt;td&gt;GPT-4 ($12.00)&lt;/td&gt;
&lt;td&gt;GPT-4-turbo ($6.00)&lt;/td&gt;
&lt;td&gt;50%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Complex reasoning&lt;/td&gt;
&lt;td&gt;GPT-4&lt;/td&gt;
&lt;td&gt;GPT-4 (no change)&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  2. Usage Pattern Insights
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Debugging sessions&lt;/strong&gt;: Switched to GPT-3.5 for initial analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Peak hours&lt;/strong&gt;: Identified 11 PM - 1 AM as highest usage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch processing&lt;/strong&gt;: Grouped similar requests for efficiency&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🚀 Try It Yourself (2-Minute Setup)
&lt;/h2&gt;

&lt;p&gt;The tool is completely free and open source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Quick demo (no API key needed!)&lt;/span&gt;
git clone https://github.com/reachbrt/OpenAI-Code-Usage-Monitor.git
&lt;span class="nb"&gt;cd &lt;/span&gt;OpenAI-Code-Usage-Monitor
./start_openai_monitor.sh demo

&lt;span class="c"&gt;# Real monitoring with your API key&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-key-here"&lt;/span&gt;
./start_openai_monitor.sh tier2

&lt;span class="c"&gt;# Set up budget alerts&lt;/span&gt;
./start_openai_monitor.sh budget-50

&lt;span class="c"&gt;# View detailed analytics&lt;/span&gt;
./start_openai_monitor.sh analytics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🎯 Key Features That Save Money
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✅ Real-Time Cost Tracking
&lt;/h3&gt;

&lt;p&gt;See exactly what each API call costs as it happens.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Model Optimization Insights
&lt;/h3&gt;

&lt;p&gt;Discover which models you're overusing and where to optimize.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Intelligent Alerts
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;50%, 75%, 90% usage thresholds&lt;/li&gt;
&lt;li&gt;High burn rate detection&lt;/li&gt;
&lt;li&gt;Budget limit warnings&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✅ Usage Analytics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Hourly usage patterns&lt;/li&gt;
&lt;li&gt;Model-specific breakdowns&lt;/li&gt;
&lt;li&gt;Historical trends and forecasting&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✅ Team Collaboration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Export reports for stakeholders&lt;/li&gt;
&lt;li&gt;Shared usage tracking&lt;/li&gt;
&lt;li&gt;Budget allocation insights&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💡 Technical Implementation Highlights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Smart Burn Rate Calculation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;calculate_burn_rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Calculate tokens per minute using weighted moving average&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;recent_calls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_recent_calls&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hours&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Group by minute for smooth calculation
&lt;/span&gt;    &lt;span class="n"&gt;minute_usage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defaultdict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;call&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;recent_calls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;minute_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;second&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;microsecond&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;minute_usage&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;minute_key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;total_tokens&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Weighted average (recent minutes have higher weight)
&lt;/span&gt;    &lt;span class="n"&gt;weights&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;recent_minutes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sorted&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;minute_usage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;())[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;:]&lt;/span&gt;

    &lt;span class="n"&gt;weighted_sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;usage&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;weight&lt;/span&gt; &lt;span class="nf"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;usage&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;weight&lt;/span&gt; 
                      &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;recent_minutes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;recent_minutes&lt;/span&gt;&lt;span class="p"&gt;)]))&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;weighted_sum&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;recent_minutes&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
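&lt;p&gt;The weighting is easy to sanity-check on toy numbers. Here is a standalone sketch of a weighted burn rate, with weights ordered so the most recent minute counts most (sample data invented):&lt;/p&gt;

```python
def weighted_burn_rate(minute_totals, weights=(0.2, 0.3, 0.5)):
    """Weighted average of per-minute token totals, oldest first."""
    recent = minute_totals[-len(weights):]
    w = weights[:len(recent)]       # truncate weights when history is short
    weighted_sum = sum(u * wt for u, wt in zip(recent, w))
    return weighted_sum / sum(w)

# three minutes of usage, oldest first: the 120-token minute dominates
print(weighted_burn_rate([60, 90, 120]))
```

&lt;p&gt;The plain mean of those minutes is 90 tokens/min; the weighted version comes out higher (99) because the latest, busiest minute dominates.&lt;/p&gt;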



&lt;h3&gt;
  
  
  Dynamic Cost Calculation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;calculate_cost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt_tokens&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;completion_tokens&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Calculate cost based on current OpenAI pricing&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;pricing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gpt-4&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.00003&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;completion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.00006&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gpt-4-turbo&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.00001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;completion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.00003&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.000001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;completion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.000002&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;model_pricing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pricing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pricing&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="nf"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt_tokens&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;model_pricing&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; 
            &lt;span class="n"&gt;completion_tokens&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;model_pricing&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;completion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
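&lt;p&gt;One token mix that reproduces the "$0.20 vs $6.00" wake-up call from earlier: 100k prompt tokens plus 50k completion tokens, priced with the table above (the 100k/50k split is an assumed example, and this is a standalone copy of the pricing logic, not the tool's code):&lt;/p&gt;

```python
PRICING = {  # USD per token, matching the table in calculate_cost above
    "gpt-4": {"prompt": 0.00003, "completion": 0.00006},
    "gpt-4-turbo": {"prompt": 0.00001, "completion": 0.00003},
    "gpt-3.5-turbo": {"prompt": 0.000001, "completion": 0.000002},
}

def cost(model, prompt_tokens, completion_tokens):
    """Cost in USD for one call, defaulting to gpt-3.5-turbo pricing."""
    p = PRICING.get(model, PRICING["gpt-3.5-turbo"])
    return prompt_tokens * p["prompt"] + completion_tokens * p["completion"]

# same assumed workload, two models: a 30x price gap
print(round(cost("gpt-4", 100_000, 50_000), 2))          # 6.0
print(round(cost("gpt-3.5-turbo", 100_000, 50_000), 2))  # 0.2
```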



&lt;h2&gt;
  
  
  📊 Community Impact
&lt;/h2&gt;

&lt;p&gt;Early users are seeing similar results:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Saved $400/month by switching simple tasks to GPT-3.5-turbo. The analytics showed me I was using GPT-4 for everything!" - @developer_mike&lt;/p&gt;

&lt;p&gt;"The hourly patterns helped us optimize our batch processing schedule. Now we run heavy tasks during low-usage hours." - @startup_cto&lt;/p&gt;

&lt;p&gt;"Finally have visibility into our AI infrastructure costs. The export feature is perfect for monthly stakeholder reports." - @team_lead&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🔮 What's Coming Next
&lt;/h2&gt;

&lt;p&gt;Currently working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web dashboard&lt;/strong&gt; with real-time charts and team collaboration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slack/Discord integrations&lt;/strong&gt; for team alerts and notifications
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mobile app&lt;/strong&gt; for on-the-go monitoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ML-powered predictions&lt;/strong&gt; for advanced cost forecasting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-API support&lt;/strong&gt; (Anthropic Claude and other providers)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🎯 Your Cost Optimization Checklist
&lt;/h2&gt;

&lt;p&gt;Use this to audit your current setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] &lt;strong&gt;Monitor real-time costs&lt;/strong&gt; - Know your spending as it happens&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Audit model usage&lt;/strong&gt; - Are you using GPT-4 for simple tasks?&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Set budget alerts&lt;/strong&gt; - Prevent surprise bills&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Analyze usage patterns&lt;/strong&gt; - Find peak usage times&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Create model guidelines&lt;/strong&gt; - When to use which model&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Track burn rates&lt;/strong&gt; - Catch usage spikes early&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Export regular reports&lt;/strong&gt; - Share with team/stakeholders&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💬 Discussion Questions
&lt;/h2&gt;

&lt;p&gt;I'd love to hear from the community:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What's your biggest OpenAI cost challenge?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unexpected bills?&lt;/li&gt;
&lt;li&gt;Model selection confusion?&lt;/li&gt;
&lt;li&gt;Team usage tracking?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What's your current monthly OpenAI spend?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under $50?&lt;/li&gt;
&lt;li&gt;$50-200?&lt;/li&gt;
&lt;li&gt;$200+?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Which feature would help you most?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time monitoring?&lt;/li&gt;
&lt;li&gt;Budget alerts?&lt;/li&gt;
&lt;li&gt;Usage analytics?&lt;/li&gt;
&lt;li&gt;Team collaboration?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  🔗 Get Started
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/reachbrt/OpenAI-Code-Usage-Monitor" rel="noopener noreferrer"&gt;https://github.com/reachbrt/OpenAI-Code-Usage-Monitor&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Try Demo&lt;/strong&gt;: &lt;code&gt;./start_openai_monitor.sh demo&lt;/code&gt; (no API key needed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: Comprehensive setup and usage guide&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issues&lt;/strong&gt;: Report bugs or request features&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Start monitoring today and see how much you can save!&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The tool has already helped developers save thousands of dollars through better visibility and optimization. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;If this helps you optimize your OpenAI costs, a ⭐ on GitHub would mean the world to me.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's your OpenAI cost optimization story? Share your experiences in the comments!&lt;/strong&gt; 👇&lt;/p&gt;

</description>
      <category>python</category>
      <category>openai</category>
      <category>monitoring</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
