<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jonathan Flower</title>
    <description>The latest articles on DEV Community by Jonathan Flower (@jfbloom22).</description>
    <link>https://dev.to/jfbloom22</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F590625%2F9709b546-b312-4f93-a823-f2eace36185f.jpeg</url>
      <title>DEV Community: Jonathan Flower</title>
      <link>https://dev.to/jfbloom22</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jfbloom22"/>
    <language>en</language>
    <item>
      <title>How to not Lose $500k to a Malicious Cursor Extension</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Thu, 24 Jul 2025 10:12:00 +0000</pubDate>
      <link>https://dev.to/jfbloom22/how-to-not-loose-500k-to-a-malicious-cursor-extension-50h6</link>
      <guid>https://dev.to/jfbloom22/how-to-not-loose-500k-to-a-malicious-cursor-extension-50h6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxb5uaawarbbsjnc2wtfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxb5uaawarbbsjnc2wtfg.png" alt=" " width="800" height="610"&gt;&lt;/a&gt;&lt;br&gt;
Cursor’s open plugin marketplace allowed a malicious extension to steal $500,000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Issue with VS Code Plugins in Cursor&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cursor is an AI-powered code editor forked from Visual Studio Code, but unlike the official VS Code, it cannot use Microsoft’s proprietary extension marketplace. Instead, Cursor relies on the Open VSX registry—a more open, community-driven alternative with less strict security controls and review processes.&lt;/p&gt;

&lt;p&gt;This openness allowed attackers to upload a fake “Solidity Language” extension that appeared legitimate (with a copied description and inflated download numbers). When installed, the extension executed malicious code, granting attackers remote access to the developer’s machine and ultimately leading to the theft of $500,000 in cryptocurrency. The attack exploited the fact that IDE extensions have deep system access, and the Open VSX marketplace’s ranking algorithm could be manipulated to make malicious extensions appear more trustworthy than legitimate ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtxre92u3u0vt8m7g4vf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtxre92u3u0vt8m7g4vf.png" alt="Search results" width="614" height="674"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advice to Protect Against This Vulnerability&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install Extensions from Trusted Sources&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Prefer Microsoft’s official marketplace when possible. If you use Cursor or another VS Code fork, first install and test extensions in the official VS Code, then migrate them to Cursor. This reduces the risk of installing a malicious lookalike.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be Wary of Non-Functional or New Extensions&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;If an extension doesn’t work as expected, uninstall it immediately. New extensions are riskier—let them mature and gain community trust before adopting.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scrutinize Publisher Details&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Check the publisher’s profile, history, and other published extensions. Attackers often use subtle name changes (like a capital “I” instead of a lowercase “l”) to impersonate trusted publishers.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Watch for Inflated Download Counts&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Don’t rely solely on download numbers or ratings; these can be faked.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compartmentalize Sensitive Work&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Use minimal extensions in environments handling sensitive data (like crypto wallets). Consider separate systems for high-value activities.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stay Informed and Use Security Tools&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Keep up with security advisories and use reputable antivirus or endpoint protection to detect suspicious activity.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify Extension Code When Possible&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Open-source does not guarantee safety. The distributed package may differ from the public code. If you’re highly security-conscious, build extensions from source or verify cryptographic signatures.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
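
&lt;p&gt;For point 7, one low-effort check before installing is to unpack the extension and skim its manifest: a .vsix file is just a ZIP archive. The commands and fields below are an illustrative sketch (the file and extension names are hypothetical, and the annotations are not part of real JSON):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# A .vsix is a ZIP archive; unpack it and read the manifest
unzip some-extension.vsix -d unpacked
cat unpacked/extension/package.json

# Fields worth skimming (annotated, illustrative):
{
  "name": "solidity-language",
  "publisher": "soliditylabs",      &lt;- compare against the publisher you expect
  "main": "./out/extension.js",     &lt;- entry point: look for obfuscated code or network calls
  "activationEvents": ["*"]         &lt;- activates on every editor start; broad for a language extension
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;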

&lt;p&gt;By following these practices, you can significantly reduce your risk when using VS Code plugins in Cursor or any similar open marketplace.&lt;/p&gt;

&lt;p&gt;This is largely from a &lt;a href="https://www.youtube.com/watch?v=CqKZhYsjw6M" rel="noopener noreferrer"&gt;Java Brains YouTube video&lt;/a&gt; about the issue.&lt;/p&gt;

&lt;p&gt;Other sources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kaspersky.com/about/press-releases/kaspersky-uncovers-500k-crypto-heist-through-malicious-packages-targeting-cursor-developers" rel="noopener noreferrer"&gt;​⁠https://www.kaspersky.com/about/press-releases/kaspersky-uncovers-500k-crypto-heist-through-malicious-packages-targeting-cursor-developers&lt;/a&gt; &lt;a href="https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/" rel="noopener noreferrer"&gt;​⁠https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/&lt;/a&gt; &lt;a href="https://www.webasha.com/blog/what-is-the-real-risk-behind-malicious-vscode-extensions-like-the-cursor-ide-incident" rel="noopener noreferrer"&gt;​⁠https://www.webasha.com/blog/what-is-the-real-risk-behind-malicious-vscode-extensions-like-the-cursor-ide-incident&lt;/a&gt;&lt;/p&gt;

</description>
      <category>artificialintelligen</category>
      <category>softwaredevelopment</category>
      <category>ai</category>
      <category>codingtools</category>
    </item>
    <item>
      <title>Build a GPT That Talks to Your Database in One Day</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Thu, 27 Jun 2024 01:52:13 +0000</pubDate>
      <link>https://dev.to/jfbloom22/build-a-gpt-that-talks-to-your-database-in-one-day-1kf0</link>
      <guid>https://dev.to/jfbloom22/build-a-gpt-that-talks-to-your-database-in-one-day-1kf0</guid>
      <description>&lt;p&gt;Have you ever wondered how challenging it is to create a Custom GPT with user authentication and database access?   I found the lack of examples of this disheartening. So, I create a comprehensive guide myself and am pleased to say, with a small amount of coding skills you can build your own in day.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/jfbloom22/custom-gpt-api-oauth"&gt;GitHub Repository: Custom GPT API OAuth&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chatgpt.com/g/g-oXRAoqOK8-my-pizza-dough-oauth-demo"&gt;Demo: My Pizza Dough (OAuth Demo)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;p&gt;To achieve this, I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clerk.com: for authentication&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://vercel.com"&gt;Vercel&lt;/a&gt;: for hosting&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.prisma.io/"&gt;Prisma&lt;/a&gt;: for great database management UX&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://neon.tech/"&gt;Neon&lt;/a&gt;: for serverless postgres&lt;/li&gt;
&lt;li&gt;Typescript&lt;/li&gt;
&lt;li&gt;Express.js&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Who is this for?
&lt;/h3&gt;

&lt;p&gt;This guide is perfect for developers looking to create an AI Agent or a Custom GPT that supports user authentication and database access with ease.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why it matters
&lt;/h3&gt;

&lt;p&gt;An AI Agent with user authentication unlocks numerous possibilities, enabling applications that require secure user data access and personalized experiences.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before diving in, you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Familiarity with DNS and configuring a Custom Domain Name&lt;/li&gt;
&lt;li&gt;Familiarity with creating a REST API&lt;/li&gt;
&lt;li&gt;A paid subscription to ChatGPT&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Background Story
&lt;/h3&gt;

&lt;p&gt;In my quest to build GPTs with authentication, I found the lack of examples disheartening. Despite extensive searches on Perplexity, Arc Search, and ChatGPT, the closest resources were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://blog.logto.io/gpt-action-oauth/"&gt;Authenticate users in GPT actions: Build a personal agenda assistant&lt;/a&gt; - Too closely tied to Logto.io&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://agilemerchants.medium.com/how-to-add-oauth-authorization-for-custom-gpts-d1eaf32ee730"&gt;How to add OAuth authorization for custom GPTs&lt;/a&gt; - Using PHP, Laravel, and looks overly complicated&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=nA4FtyAKhfA"&gt;GPT Action with Google OAuth)&lt;/a&gt; - Limited to Google APIs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Anil-matcha/GPT-Actions"&gt;GPT-Actions: GPT Auth&lt;/a&gt; - Not using OAuth2
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since I couldn’t find what I needed, I decided to create an example myself, hoping it would help others in similar situations.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;After getting the basics working and configuring the GPT, I faced a frustratingly generic error when trying to save:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobckpnhetfe9w8jh5yiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobckpnhetfe9w8jh5yiq.png" alt="Image description" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To troubleshoot, I built another Custom GPT using OAuth2 to access my Google Calendar, confirming the issue was specific to my project. The community shared similar frustrations:&lt;br&gt;
&lt;a href="https://community.openai.com/t/error-saving-draft-when-creating-an-authenticated-action-in-a-gpt/490733/14"&gt;"Error saving draft" when creating an authenticated action in a GPT&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;The breakthrough came from an unusual requirement in the OpenAI docs: the OAuth2 server must share the same domain as the API server, except for Google, Microsoft, and Adobe OAuth domains.&lt;/p&gt;

&lt;p&gt;Once I configured a custom domain on Clerk, everything worked beautifully! The result is a template project that any developer can fork, customize, and deploy in a single day.&lt;br&gt;
&lt;a href="https://github.com/jfbloom22/custom-gpt-api-oauth"&gt;GitHub Repository: Custom GPT API OAuth&lt;/a&gt;&lt;/p&gt;
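
&lt;p&gt;To make the domain requirement concrete, here is a sketch of a configuration that fails versus one that works. The hostnames are illustrative, not taken from the repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Fails to save: OAuth server on a different domain than the API
API base URL:       https://api.example.com
Authorization URL:  https://your-app.accounts.dev/oauth/authorize

# Works: Clerk behind a custom domain that matches the API's domain
API base URL:       https://api.example.com
Authorization URL:  https://clerk.example.com/oauth/authorize
Token URL:          https://clerk.example.com/oauth/token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;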

&lt;h3&gt;
  
  
  Demo
&lt;/h3&gt;

&lt;p&gt;Check out the demo for a hands-on experience. Feel free to explore, and even spam the database - restoring it is easy with backups on Neon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://chatgpt.com/g/g-oXRAoqOK8-my-pizza-dough-oauth-demo"&gt;Demo: My Pizza Dough - OAuth Demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out the rest of my series on GPTs: &lt;a href="https://blog.jonathanflower.com/uncategorized/who-cares-about-custom-gpts/"&gt;Who Cares About Custom GPTs?&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>oauth2</category>
      <category>openai</category>
      <category>chatgpt</category>
      <category>node</category>
    </item>
    <item>
      <title>How do I start to incorporate AI into my business?</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Tue, 04 Jun 2024 13:51:58 +0000</pubDate>
      <link>https://dev.to/jfbloom22/how-do-i-start-to-incorporate-ai-into-my-business-51fl</link>
      <guid>https://dev.to/jfbloom22/how-do-i-start-to-incorporate-ai-into-my-business-51fl</guid>
      <description>&lt;h2&gt;
  
  
  1. Spend at least 10 minutes playing with ChatGPT
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Start chatting without needing to sign up: &lt;a href="https://chat.openai.com/"&gt;https://chat.openai.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Type whatever comes to mind. A few ideas to get the conversation started:

&lt;ul&gt;
&lt;li&gt;What were the last few things you Googled? Try typing those into ChatGPT.&lt;/li&gt;
&lt;li&gt;What is a topic you are curious about? Ask ChatGPT to tell you about it.&lt;/li&gt;
&lt;li&gt;Try to stump it by asking a hard question in your field of expertise.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;There are a ton of AI tools out there. I suggest starting with ChatGPT because it is free and one of the best for many tasks.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  2. Is it magic or an idiot?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Your impression largely depends on what you asked and how you phrased it. The real questions are: what kinds of questions can it answer, and how do I ask mine in a way that helps it respond well?&lt;/li&gt;
&lt;li&gt;The simplest advice is to imagine ChatGPT as a high school intern who has memorized the internet. Technically I don’t agree with personifying ChatGPT, but I have not found a simpler concept to guide people. Imagine giving a high school intern very brief instructions: who knows what you will get back! If you instead take the time to explain the goal, the steps, and any other relevant details or expectations, you will usually receive something you can use, or at least give feedback on so they can improve.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  3. Practical uses
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Now that you have some basic concepts, it is time to explore how it might be able to speed up or enhance your work. You are on your way to becoming AI-empowered.&lt;/li&gt;
&lt;li&gt;We started this post with a question: “How do I start to incorporate AI into my business?” This is a perfect question to ask ChatGPT.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   I am in the [insert your industry] industry and my role is a [insert your role]. How do I start to incorporate AI in my business?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Ask ChatGPT to give you a lot of ideas for prompts:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   I am a [insert your profession]. Give me 50 ChatGPT prompts that can help me be more productive in my job.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Privacy
&lt;/h2&gt;

&lt;p&gt;As you start to use ChatGPT, be cautious about uploading documents or entering text that contains sensitive information: the free version of ChatGPT uses your input to train its models, so do not enter anything sensitive. Here is a guide on how to protect your private information: &lt;a href="https://www.linkedin.com/posts/jonathan-flower_ai-chatgpt-codingtools-activity-7201957443331919876-6-0m?utm_source=share&amp;amp;utm_medium=member_desktop"&gt;How to Protect Your Data with ChatGPT | Jonathan Flower posted on the topic | LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>aiempowerment</category>
    </item>
    <item>
      <title>Automatically Close Apps That Drain your Battery</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Tue, 04 Jun 2024 11:51:48 +0000</pubDate>
      <link>https://dev.to/jfbloom22/automatically-close-apps-that-drain-your-battery-493i</link>
      <guid>https://dev.to/jfbloom22/automatically-close-apps-that-drain-your-battery-493i</guid>
      <description>&lt;p&gt;Ever feel the need to close certain apps that drain your battery and then later on forget to relaunch them when connected to power?&lt;/p&gt;

&lt;p&gt;I use an app called Rewind that records my screen, much like the new &lt;a href="https://support.microsoft.com/en-us/windows/retrace-your-steps-with-recall-aa03f8a0-a78b-4b3e-b0a1-2eb8ac48701c"&gt;Microsoft Copilot Recall&lt;/a&gt;. Unfortunately, Rewind has been discontinued, but it still works great. The only issue is that it uses a fair amount of power and runs down my battery, so I turn it off when on battery power. I also have an app called Wave Link that connects to my Elgato Wave:3 Studio Mic. It runs audio filtering whether I am actively using the mic or not, and it too runs down my battery.&lt;/p&gt;

&lt;p&gt;Thus, I wrote an AppleScript app that runs every minute and automatically launches or quits these apps. Hope this helps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Automator&lt;/li&gt;
&lt;li&gt;Create a New Application&lt;/li&gt;
&lt;li&gt;Search and add Run AppleScript&lt;/li&gt;
&lt;li&gt;Update the script below to fit your needs
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Get current battery status
set batteryInfo to do shell script "pmset -g batt"
-- List of apps
set appsToClose to {"Rewind", "WaveLink"}
set appsToOpen to {"Rewind", "WaveLink"}

-- Check if on battery power
if batteryInfo contains "Battery Power" then
 -- Loop through apps and quit them
 repeat with appName in appsToClose
  tell application "System Events"
   -- Check if the app is running by its name

   if exists (processes whose name is appName) then
    tell application "System Events" to set appRunning to true
   else
    tell application "System Events" to set appRunning to false
   end if
  end tell
  if appRunning then
   tell application appName to quit
  end if
 end repeat
else
 -- Loop through apps and open them
 repeat with appName in appsToOpen
  tell application "System Events"
   -- Check if the app is running by its name
   if exists (processes whose name is appName) then
    if (appName is "WaveLink") then
     set appName to "Elgato Wave Link"
    end if
    tell application "System Events" to set appRunning to true
   else
    tell application "System Events" to set appRunning to false
   end if
  end tell
  if not appRunning then
   tell application appName to activate
  end if
 end repeat
end if
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Plist file&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"&amp;gt;
&amp;lt;plist version="1.0"&amp;gt;
&amp;lt;dict&amp;gt;
        &amp;lt;key&amp;gt;Label&amp;lt;/key&amp;gt;
        &amp;lt;string&amp;gt;com.jflow.closeapp&amp;lt;/string&amp;gt;
        &amp;lt;key&amp;gt;ProgramArguments&amp;lt;/key&amp;gt;
        &amp;lt;array&amp;gt;
                &amp;lt;string&amp;gt;/usr/bin/open&amp;lt;/string&amp;gt;
                &amp;lt;string&amp;gt;/Users/jflowerhome/Documents/CloseOnBatteryV2.app&amp;lt;/string&amp;gt;
                &amp;lt;string&amp;gt;--args&amp;lt;/string&amp;gt;
                &amp;lt;string&amp;gt;--run-in-background&amp;lt;/string&amp;gt;
        &amp;lt;/array&amp;gt;
        &amp;lt;key&amp;gt;StartInterval&amp;lt;/key&amp;gt;
        &amp;lt;integer&amp;gt;60&amp;lt;/integer&amp;gt;
        &amp;lt;key&amp;gt;RunAtLoad&amp;lt;/key&amp;gt;
        &amp;lt;true/&amp;gt;
&amp;lt;/dict&amp;gt;
&amp;lt;/plist&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Load the plist file so that macOS runs the app regularly
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;launchctl load ~/Library/LaunchAgents/com.jflow.closeapp.plist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Tips:
&lt;/h3&gt;

&lt;p&gt;One key was to avoid a relative file path to the AppleScript app.&lt;br&gt;&lt;br&gt;
One strange issue I ran into is that WaveLink has a different launch name than its running name.&lt;/p&gt;

&lt;p&gt;When updating the plist file unload and then load it again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;launchctl unload ~/Library/LaunchAgents/com.jflow.closeapp.plist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Troubleshooting
&lt;/h3&gt;

&lt;p&gt;List the job with launchctl and note the second column, which is the last exit status. If it is non-zero (for example “1”), the job errored out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;launchctl list | grep com.jflow
- 0 com.jflow.closeapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>softwaredevelopment</category>
      <category>codingtools</category>
    </item>
    <item>
      <title>Next Gen User Experiences – Vercel Ship 2024</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Mon, 03 Jun 2024 13:17:46 +0000</pubDate>
      <link>https://dev.to/jfbloom22/next-gen-user-experiences-vercel-ship-2024-1928</link>
      <guid>https://dev.to/jfbloom22/next-gen-user-experiences-vercel-ship-2024-1928</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftklb3o6uzbnk5hvs8vkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftklb3o6uzbnk5hvs8vkf.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://x.com/jaredpalmer"&gt;Jared Palmer&lt;/a&gt; presented on:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Next Gen User Experiences&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The idea is to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Give chatbots rich component based interfaces with what we’re calling Generative UI&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is supported by the power of &lt;a href="https://sdk.vercel.ai/docs/introduction"&gt;Vercel’s AI SDK&lt;/a&gt; and &lt;a href="https://react.dev/reference/rsc/server-components"&gt;React Server Components&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See it in action:&lt;br&gt;&lt;br&gt;
&lt;a href="https://youtube.com/clip/UgkxqLkH7fNdoVbZPTqV15o2vCeETk8y7ZRh?si=_EPT3KRfPs6Qn9KW"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N60mMF-4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://s.w.org/images/core/emoji/15.0.3/72x72/2702.png" alt="✂" width="72" height="72"&gt; Generative UI – interactive schedule component fully wired into the chat experience&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the future because it greatly simplifies the interaction. Rather than having a different AI bot integrated into each app, my favorite apps are integrated into a single chat interface. This way the AI has the context it needs to act on requests such as “book a meeting with Lee @2pm today to talk about the movie ‘Her’”.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>codingtools</category>
      <category>genai</category>
    </item>
    <item>
      <title>AI and Sensitive Data: A Guide to Protect your Data</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Thu, 30 May 2024 14:46:27 +0000</pubDate>
      <link>https://dev.to/jfbloom22/ai-and-sensitive-data-a-guide-to-protect-your-data-25kj</link>
      <guid>https://dev.to/jfbloom22/ai-and-sensitive-data-a-guide-to-protect-your-data-25kj</guid>
      <description>&lt;p&gt;Privacy and security come at the cost of convenience. Here is a guide to help you safeguard your data appropriately.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 1: ChatGPT Temporary Chat
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq7zvnijtocm7hgkpfni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq7zvnijtocm7hgkpfni.png" alt="Image description" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Promise:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This chat won’t appear in history, use or create memories, or be used to train our models. For safety purposes, we may keep a copy for up to 30 days.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Why It Matters:&lt;/strong&gt; Temporary chats provide a layer of privacy by ensuring your conversations are not stored long-term or used for model training. This reduces the risk of data breaches and enhances your control over personal information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Level 2: ChatGPT for Teams
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://openai.com/chatgpt/team"&gt;https://openai.com/chatgpt/team&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;The Promise:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We never train on your data or conversations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Why It Matters:&lt;/strong&gt; For collaborative environments, ChatGPT for Teams offers a secure solution where your data remains private and is not utilized for training AI models. This ensures confidential business information stays within your organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 3: Azure AI
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy"&gt;Data, privacy, and security for Azure OpenAI Service – Azure AI services&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why It Matters:&lt;/strong&gt; Microsoft’s Azure AI provides extensive controls and guarantees around security and privacy, offering enterprise-grade protection. With robust compliance certifications, Azure ensures that your data is handled with the highest standards of security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 4: Local AI Models
&lt;/h3&gt;

&lt;p&gt;Running AI locally can significantly enhance your privacy. Here are some excellent options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://privatellm.app/en"&gt;https://privatellm.app/en&lt;/a&gt; – Local AI chat for Mac and iOS&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.continue.dev"&gt;https://www.continue.dev&lt;/a&gt; – GitHub Copilot, but open source and using a local AI modal&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pinokio.computer%20"&gt;https://pinokio.computer&lt;/a&gt;– Easiest way to discover and install a huge collection of local AI tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why It Matters:&lt;/strong&gt; Using local models ensures your data never leaves your device, reducing the risk of interception or misuse. This layer is particularly beneficial for those handling highly sensitive information.&lt;br&gt;&lt;br&gt;
Additionally, Microsoft is innovating with &lt;a href="https://www.microsoft.com/en-us/windows/copilot-plus-pcs"&gt;Copilot+ PCs&lt;/a&gt;, betting big on running more AI locally to enhance privacy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 5: Hacker Proof
&lt;/h3&gt;

&lt;p&gt;While local processing minimizes external threats, securing your hardware is paramount. If your device is compromised, no software solution can fully protect your data.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;The Simplest Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dedicated AI Computer:&lt;/strong&gt; Use a separate computer exclusively for AI tasks, disconnected from the internet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faraday Cage:&lt;/strong&gt; For ultimate security, place the device in a Faraday cage to block any wireless signals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Transfer:&lt;/strong&gt; Use a USB drive to transfer data to and from this secure computer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why It Matters:&lt;/strong&gt; This setup ensures that your AI processing environment is isolated from network-based threats, offering the highest level of security for your sensitive operations.&lt;/p&gt;




&lt;p&gt;It has been my pleasure to guide you and empower you to work confidently and securely with AI.&lt;/p&gt;

</description>
      <category>artificialintelligen</category>
      <category>softwaredevelopment</category>
      <category>ai</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>GPTs that have clearly received a lot of love</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Wed, 29 May 2024 20:16:11 +0000</pubDate>
      <link>https://dev.to/jfbloom22/gpts-that-have-clearly-received-a-lot-of-love-2edn</link>
      <guid>https://dev.to/jfbloom22/gpts-that-have-clearly-received-a-lot-of-love-2edn</guid>
      <description>&lt;p&gt;I hate wasting my time with AI tools that promise the world and end up being a complete disappointment. I love finding AI tools that actually solve problems better than more conventional tools.&lt;/p&gt;

&lt;p&gt;Before we get into these excellent GPTs, a caveat: even powerful GPTs will fail horribly when given a weak prompt. My favorite guiding principle is to think of a GPT as a high school intern. If you do not provide detail, who knows what you are going to get! It helps a lot to communicate the goal and provide steps to follow whenever possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Great GPTs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://chatgpt.com/g/g-n7Rs0IK86-grimoire"&gt;Grimoire&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;When starting a new coding project, I typically start here. Grimoire helps me think through architectural decisions and evaluate which technologies will be the best fit. The way Grimoire collaborates with me on the solution is a clear step above generic ChatGPT.&lt;/p&gt;

&lt;p&gt;From the creator, Nick Dobos:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How is Grimoire different from vanilla GPT?&lt;br&gt;&lt;br&gt;
-Coding focused system prompts to help you build anything.&lt;br&gt;&lt;br&gt;
Combining the best tricks I’ve learned to pull correct &amp;amp; bug free code out from GPT with minimal prompting effort&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://chatgpt.com/g/g-bo0FiWLY7-consensus"&gt;Consensus&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Consensus can “search and synthesize information from over 200 million academic papers.” For example, I asked it whether intermittent fasting is good for heart health and received a detailed response with links to research papers supporting each point.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkd41lairzatonshc8lv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkd41lairzatonshc8lv.png" alt="Image description" width="800" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://chatgpt.com/g/g-GbLbctpPz-universal-primer"&gt;Universal Primer&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;I love learning, and this GPT is my go-to when I want to dive deeper into a concept. It breaks concepts down into easily digestible chunks and includes plenty of examples.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are your favorite GPTs?
&lt;/h3&gt;

&lt;p&gt;This article is part of a series: &lt;a href="https://blog.jonathanflower.com/uncategorized/who-cares-about-custom-gpts/"&gt;Who Cares About GPTs?&lt;/a&gt;&lt;/p&gt;

</description>
      <category>artificialintelligen</category>
      <category>softwaredevelopment</category>
      <category>ai</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>Personalize Your AI Experience: Reasons to Create a Private GPT</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Wed, 29 May 2024 19:41:01 +0000</pubDate>
      <link>https://dev.to/jfbloom22/personalize-your-ai-experience-reasons-to-create-a-private-gpt-7o4</link>
      <guid>https://dev.to/jfbloom22/personalize-your-ai-experience-reasons-to-create-a-private-gpt-7o4</guid>
      <description>&lt;p&gt;Do you struggle to keep track of your favorite prompts? Despite saving them in my note-taking app, Bear, retrieving the right prompt when I need it and adding my personal information and documents remains a hassle. Surely, there must be a better way!&lt;/p&gt;

&lt;p&gt;One of my favorite prompts is one I use to help draft cover letters.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Cover Letter Process:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Search my chat history for an existing chat about cover letters; often I don’t find one quickly enough and abandon the search.&lt;/li&gt;
&lt;li&gt;Switch to searching Bear for the right prompt&lt;/li&gt;
&lt;li&gt;Copy paste the prompt&lt;/li&gt;
&lt;li&gt;Upload my CV&lt;/li&gt;
&lt;li&gt;Wait a few seconds (at this point I often get distracted, 10 minutes later I remember I was supposed to be working on my cover letter. Sound familiar?)&lt;/li&gt;
&lt;li&gt;Paste in the job description&lt;/li&gt;
&lt;li&gt;Copy and paste the draft cover letter, revise it, and send it over.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While this still saved me time and improved the quality of my cover letters, it was clunky.&lt;/p&gt;

&lt;h3&gt;
  
  
  Private GPT
&lt;/h3&gt;

&lt;p&gt;That’s when I stumbled upon the game-changing concept of private GPTs. How had I missed this? I can create a dedicated AI assistant that crafts personalized cover letters from a single prompt. No more distractions, just efficiency:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsrsdbzimvmz4zlwhee2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsrsdbzimvmz4zlwhee2.png" alt="Image description" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How To Create a GPT
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://help.openai.com/en/articles/8554397-creating-a-gpt"&gt;Creating a GPT | OpenAI Help Center&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  How to make it private:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;When you click Create, make sure to select “Only me”.&lt;/li&gt;
&lt;li&gt;In Additional Settings, uncheck “Use conversation data in your GPT to improve our models” for more privacy.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Favorite Private GPTs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Private Cover Letter GPT
&lt;/h3&gt;

&lt;p&gt;I uploaded my resume and chatted with the configuration tool, telling it to draft a cover letter every time I provide a job description. Now I can open my GPT, paste in a job description, and it immediately starts drafting a cover letter. So convenient and fast! The best part is that over time I have continued to “train” my GPT to better write in my voice and to provide an ATS rating so that I can quickly determine whether a job is a good fit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv1zqhticao7ye2tenni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv1zqhticao7ye2tenni.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Private Chef GPT
&lt;/h3&gt;

&lt;p&gt;I have another private GPT that helps with meal planning. I uploaded our family cookbook so that it can suggest our favorite meals. My wife will take a picture of the pantry and fridge and let it suggest what to cook for dinner. The meals have been excellent!&lt;/p&gt;

&lt;p&gt;When meal planning, it started off by outlining complicated three-course meals. This is where a private GPT is far better than a collection of favorite prompts: we simply talked to the configuration tool and told it we preferred budget-friendly, easy-to-cook meals. Now when we start a chat with our Private Chef GPT, it knows how big my family is, our favorite family recipes, and how we prefer to meal plan.&lt;/p&gt;

&lt;p&gt;Here is what it looks like when editing the GPT. You literally have a conversation with the configuration tool, and it programs the GPT for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak5j5p1uv130y3f1t7gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak5j5p1uv130y3f1t7gw.png" alt="Image description" width="800" height="836"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What Private GPT should I create next?
&lt;/h3&gt;

&lt;p&gt;(featured image credit Dalle3)&lt;/p&gt;

</description>
      <category>artificialintelligen</category>
      <category>softwaredevelopment</category>
      <category>jobsearch</category>
      <category>openai</category>
    </item>
    <item>
      <title>Metacognition is susceptible to stochasticity</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Mon, 13 May 2024 16:34:42 +0000</pubDate>
      <link>https://dev.to/jfbloom22/metacognition-is-susceptible-to-stochasticity-10fc</link>
      <guid>https://dev.to/jfbloom22/metacognition-is-susceptible-to-stochasticity-10fc</guid>
      <description>&lt;p&gt;Excellent advice from Sam Schillace, Deputy CTO of Microsoft, on building with AI. What does it mean?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;metacognition is susceptible to stochasticity&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words, LLMs are easily confused. Their outputs can be unpredictable. It’s better to offload the planning aspect to code, which can provide more structured and deterministic guidance for the LLM.&lt;/p&gt;

&lt;p&gt;Sam Schillace goes on to explain:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;the model’s good at thinking, but it’s not good at planning. So you do planning in code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What excellent guidance on how best to create value with AI models.&lt;/p&gt;

&lt;p&gt;In a recent client project, we wondered what the right balance was between relying on the LLM’s reasoning ability versus using code to guide it. When we relied more on the LLM, the AI agent handled unusual questions better; however, its responses were less deterministic. The version that relied on code and predefined prompts, selected according to the user’s objective, was much more predictable.&lt;/p&gt;

&lt;p&gt;For instance, we wanted the AI Agent to ask questions about the user’s preferences before recommending a product. The model would randomly recommend a product earlier in the conversation than we wanted. What worked best was using code to wait to add the prompt about recommending products until specific conditions were met (such as how many questions the user had answered).&lt;/p&gt;
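&lt;p&gt;Our actual client code is proprietary, but the gating pattern can be sketched in a few lines of Python. The prompt strings, threshold, and &lt;code&gt;build_system_prompt&lt;/code&gt; helper below are hypothetical, for illustration only:&lt;/p&gt;

```python
# Hypothetical sketch: gate an instruction behind a code-level condition
# instead of trusting the model to decide when to recommend.
BASE_PROMPT = (
    "You are a shopping assistant. Ask the user one question at a time "
    "about their preferences."
)
RECOMMEND_PROMPT = "You now have enough context: recommend a product."

MIN_ANSWERS = 3  # hypothetical threshold


def build_system_prompt(answered_questions: int) -> str:
    """Only append the recommendation instruction once the user has
    answered enough preference questions."""
    if answered_questions < MIN_ANSWERS:
        return BASE_PROMPT
    return BASE_PROMPT + "\n" + RECOMMEND_PROMPT
```

&lt;p&gt;Because the instruction simply is not in the context early on, the model cannot recommend too soon, which is far more deterministic than asking it to hold back.&lt;/p&gt;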

&lt;p&gt;Here is a link to the episode and more of my favorite quotes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.latent.space/p/worlds-fair-2024"&gt;Presenting the AI Engineer World’s Fair — with Sam Schillace, Deputy CTO of Microsoft&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sam Schillace:&lt;/strong&gt;  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is a little bit of an anthropomorphism and an illusion that we’re having. So like when we look at these models, we think there’s something continuous there.&lt;br&gt;&lt;br&gt;
We’re having a conversation with chat GPT or whatever with Azure open air or like, like what’s really happened. It’s a little bit like watching claymation, right? Like when you watch claymation, you don’t think that the model is actually the clay model is actually really alive. You know, that there’s like a bunch of still disconnected slot screens that your mind is connecting into a continuous experience.&lt;br&gt;&lt;br&gt;
But what happens is when you’re doing plans and you’re doing these longer running things that you’re talking about, that second level, the metacognition is very vulnerable to that stochastic noise, which is like, I totally want to put this on a bumper sticker that like metacognition is susceptible to stochasticity would be like the great bumper sticker.&lt;/p&gt;

&lt;p&gt;So what, these things are very vulnerable to feedback loops when they’re trying to do autonomy, and they’re very vulnerable to getting lost.&lt;/p&gt;

&lt;p&gt;So what we’ve learned to answer your question of how you put all this stuff together is You have to, the model’s good at thinking, but it’s not good at planning. So you do planning in code. So you have to describe the larger process of what you’re doing in code somehow.&lt;/p&gt;

&lt;p&gt;Having that like code exoskeleton wrapped around the model is really helpful, like it keeps the model from drifting off and then you don’t have as many of these vulnerabilities around memory that you would normally have.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Image credit: Dalle3 (I ran with Sam’s suggestion that this would look great as a bumper sticker. I selected the best out of 8 tries. Pretty hilarious how poorly it spells words like Metacognition and stochasticity. I was going to avoid including it in the post, but then I realized it illustrates Sam’s point perfectly.)&lt;/p&gt;

</description>
      <category>artificialintelligen</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Retrieval Unleashed with OpenAI’s new Assistants API</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Wed, 08 May 2024 12:41:35 +0000</pubDate>
      <link>https://dev.to/jfbloom22/retrieval-unleashed-with-openais-new-assistants-api-4mam</link>
      <guid>https://dev.to/jfbloom22/retrieval-unleashed-with-openais-new-assistants-api-4mam</guid>
      <description>&lt;p&gt;OpenAI released a new version of their Assistants API and made some really neat upgrades to retrieval.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://platform.openai.com/docs/assistants/whats-new"&gt;https://platform.openai.com/docs/assistants/whats-new&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can ingest up to 10,000 files per assistant. Is that enough for you?&lt;/p&gt;

&lt;p&gt;My favorite part is where they explain how the new file_search tool works:&lt;/p&gt;

&lt;h2&gt;
  
  
  How File Search Works
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Rewrites user queries to optimize them for search.&lt;/li&gt;
&lt;li&gt;Breaks down complex user queries into multiple searches it can run in parallel.&lt;/li&gt;
&lt;li&gt;Runs both keyword and semantic searches across both assistant and thread vector stores.&lt;/li&gt;
&lt;li&gt;Reranks search results to pick the most relevant ones before generating the final response.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, the file_search tool uses the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chunk size: 800 tokens&lt;/li&gt;
&lt;li&gt;Chunk overlap: 400 tokens&lt;/li&gt;
&lt;li&gt;Embedding model: text-embedding-3-large at 256 dimensions&lt;/li&gt;
&lt;li&gt;Maximum number of chunks added to context: 20 (could be fewer)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;cite&gt;&lt;a href="https://platform.openai.com/docs/assistants/tools/file-search/how-it-works"&gt;https://platform.openai.com/docs/assistants/tools/file-search/how-it-works&lt;/a&gt;&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;
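&lt;p&gt;The four steps OpenAI lists can be mimicked in miniature. The toy corpus, scoring functions, and score fusion below are my own illustrative stand-ins, not OpenAI’s implementation:&lt;/p&gt;

```python
# Toy sketch of the file_search pipeline: rewrite the query, run keyword
# and "semantic" searches, then rerank by a fused score. Illustrative only.
DOCS = [
    "Chunk size defaults to 800 tokens with 400 token overlap.",
    "Embeddings use text-embedding-3-large at 256 dimensions.",
    "At most 20 chunks are added to the model context.",
]


def rewrite_query(query: str) -> str:
    # Step 1: normalize the query (a real system would use an LLM here).
    return query.lower().strip("?")


def keyword_score(q: str, doc: str) -> float:
    # Step 2a: keyword search, scored by the number of shared words.
    return len(set(q.split()) & set(doc.lower().replace(".", " ").split()))


def semantic_score(q: str, doc: str) -> float:
    # Step 2b: crude stand-in for embedding similarity (character Jaccard).
    q_chars, d_chars = set(q), set(doc.lower())
    return len(q_chars & d_chars) / len(q_chars | d_chars)


def search(query: str, top_k: int = 2) -> list[str]:
    q = rewrite_query(query)
    # Step 3: fuse both scores and rerank before returning the top chunks.
    ranked = sorted(
        DOCS,
        key=lambda d: keyword_score(q, d) + semantic_score(q, d),
        reverse=True,
    )
    return ranked[:top_k]
```

&lt;p&gt;The real pipeline also splits complex queries into parallel searches and uses a learned reranker, but the shape, rewrite then retrieve then rerank, is the same.&lt;/p&gt;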

&lt;p&gt;This is stinking awesome, because the AI agent I architected a month ago for a client is satisfyingly similar. What took me over 120 hours and 600+ lines of LangChain, you get for free when you build your solution with the Assistants API.&lt;/p&gt;

&lt;p&gt;A few differences I find interesting: the number of chunks added to context is much higher, and so is the overlap. I was targeting around 5 chunks and only a 20% overlap. I suspect this is for two reasons: 1) I have not added reranking yet, and 2) if I add up the chunks across the parallel RAG pipelines, I get close to 20 total chunks. OpenAI did not specify whether they mean per pipeline or in total.&lt;/p&gt;

&lt;p&gt;The large overlap in the chunks likely helps this solution work for a broader set of use cases. Which is exactly what OpenAI is targeting here. If you are building this all custom, you can tune each piece of this to your specific use case.&lt;/p&gt;
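&lt;p&gt;To make the defaults concrete, here is a minimal sliding-window chunker. I am using a pre-split token list as input; OpenAI chunks with its own tokenizer, so the real boundaries will differ:&lt;/p&gt;

```python
def chunk(tokens: list[str], size: int = 800, overlap: int = 400) -> list[list[str]]:
    """Sliding-window chunking: each chunk starts `size - overlap` tokens
    after the previous one, so consecutive chunks share `overlap` tokens
    (OpenAI's stated defaults: size 800, overlap 400)."""
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks
```

&lt;p&gt;With a 50% overlap, almost every token appears in two chunks. That roughly doubles vector-store size, but it makes retrieval robust to facts that straddle a chunk boundary, which fits OpenAI’s goal of working acceptably across many use cases.&lt;/p&gt;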

&lt;p&gt;One clear limitation: I chose to create separate vector databases because my client had two sets of data serving very different purposes; however, you can only configure an assistant with a single vector store.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom LangChain or Assistants API?
&lt;/h2&gt;

&lt;p&gt;One aspect of building with OpenAI tools that fascinates me is the tradeoff between:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Custom LangChain&lt;/strong&gt;: customization, dependability, privacy&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Assistants API&lt;/strong&gt;: limited customization, faster development, and rising with the tide.&lt;/p&gt;

&lt;p&gt;What do I mean by rising with the tide? As OpenAI improves the Assistants API, your solution gets better for free. You might need to update a few things to take advantage of the latest and greatest, but that is a lot easier than trying to replicate each new feature in a custom solution. When, and for whom, does this tradeoff make sense?&lt;/p&gt;

</description>
      <category>artificialintelligen</category>
      <category>softwaredevelopment</category>
      <category>langchain</category>
      <category>openai</category>
    </item>
    <item>
      <title>Gmail and ChatGPT in your Dock</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Thu, 02 May 2024 20:06:17 +0000</pubDate>
      <link>https://dev.to/jfbloom22/gmail-and-chatgpt-in-your-dock-261f</link>
      <guid>https://dev.to/jfbloom22/gmail-and-chatgpt-in-your-dock-261f</guid>
      <description>&lt;p&gt;Did you know you can add web apps like ChatGPT to your dock in Sonoma?&lt;/p&gt;

&lt;p&gt;I absolutely love this feature. Hands down, my favorite feature of Sonoma.&lt;/p&gt;

&lt;p&gt;This means I can Command + Tab to select the Gmail app!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcn0eo9dmfyp19doasr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcn0eo9dmfyp19doasr4.png" alt="Gmail, ChatGPT, and Claude in Mac Dock" width="664" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81rc66ix58dp0jonj28m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81rc66ix58dp0jonj28m.png" alt="Gmail macos app" width="794" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to add a web app to your dock
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open Safari&lt;/li&gt;
&lt;li&gt;Navigate to the website you want to add to your dock&lt;/li&gt;
&lt;li&gt;Click: File -&amp;gt; Add to Dock…&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fec4yu7hzrsghzbxehnbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fec4yu7hzrsghzbxehnbs.png" alt="add to dock" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>chatgpt</category>
      <category>macos</category>
    </item>
    <item>
      <title>React Server Components: A New Era of Web Development?</title>
      <dc:creator>Jonathan Flower</dc:creator>
      <pubDate>Fri, 19 Apr 2024 14:21:02 +0000</pubDate>
      <link>https://dev.to/jfbloom22/react-server-components-a-new-era-of-web-development-36ph</link>
      <guid>https://dev.to/jfbloom22/react-server-components-a-new-era-of-web-development-36ph</guid>
      <description>&lt;p&gt;Here is a great podcast that captures some of the history from Dan Abramov:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://changelog.com/jsparty/311#t=4361"&gt;React Server Components with Dan Abramov (JS Party #311)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Part of what I found interesting is the misconception that RSC is all about server-side rendering. There is even some discussion about whether a better name exists.&lt;/p&gt;

&lt;p&gt;I like how Josh Comeau explains it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;React Server Components is not a replacement for Server Side Rendering. You shouldn’t think of React Server Components as “SSR version 2.0”.&lt;br&gt;&lt;br&gt;
Instead, I like to think of it as two separate puzzle pieces that snap together perfectly, two flavors that complement each other.&lt;br&gt;&lt;br&gt;
We still rely on Server Side Rendering to generate the initial HTML. React Server Components builds on top of that, allowing us to omit certain components from the client-side JavaScript bundle, ensuring they only run on the server.&lt;/p&gt;

&lt;p&gt;&lt;cite&gt;&lt;a href="https://www.joshwcomeau.com/react/server-components/"&gt;Making Sense of React Server Components&lt;/a&gt; – Josh Comeau&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And here is another great podcast about RSC: &lt;a href="https://portal.gitnation.org/contents/simplifying-server-components"&gt;Simplifying Server Components by Mark Dalgleish&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am still on the fence about RSC. Clarifying that this is not simply SSR 2.0 is helping me get over some of my resistance. However, I am failing to see enough added value here for most web apps. An approach like this is complicated and presents its own set of challenges, and it requires adopting a novel mental model of how the app works, which will make it harder for developers to adopt and troubleshoot effectively. If your solution can be delivered as an SPA or with SSR, the developer experience and performance are pretty great these days. I wonder at what point it starts to make sense to add the complexity of RSC.&lt;/p&gt;

&lt;p&gt;The other area I am researching is how to build offline-first with RSC. I am a huge fan of offline-first and optimistic-UI patterns because of their massive performance and scalability advantages. It is not clear to me how to implement these patterns with RSC, or whether you would want to.&lt;/p&gt;

&lt;p&gt;One very cool application of RSC is in AI agents. Since the AI models run server side, it makes sense to render components server side too. The ability to dynamically render UI for each AI agent response opens up some very cool use cases. Check out what Vercel is doing with what they are calling Generative UI: &lt;a href="https://vercel.com/blog/ai-sdk-3-generative-ui"&gt;Introducing AI SDK 3.0 with Generative UI support&lt;/a&gt;&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>optimisticui</category>
      <category>react</category>
      <category>rsc</category>
    </item>
  </channel>
</rss>
