<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Zaynul Abedin Miah</title>
    <description>The latest articles on DEV Community by Zaynul Abedin Miah (@azaynul10).</description>
    <link>https://dev.to/azaynul10</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F977563%2F970e6014-e02d-4638-a8f0-5066903217d1.jpg</url>
      <title>DEV Community: Zaynul Abedin Miah</title>
      <link>https://dev.to/azaynul10</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/azaynul10"/>
    <language>en</language>
    <item>
      <title>Building a Voice-Controlled Web Agent for the Gemini Hackathon (And How I Beat the API Rate Limits)</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Mon, 16 Mar 2026 18:56:02 +0000</pubDate>
      <link>https://dev.to/azaynul10/building-a-voice-controlled-web-agent-for-the-gemini-hackathon-and-how-i-beat-the-api-rate-limits-13m4</link>
      <guid>https://dev.to/azaynul10/building-a-voice-controlled-web-agent-for-the-gemini-hackathon-and-how-i-beat-the-api-rate-limits-13m4</guid>
      <description>&lt;p&gt;I created this piece of content for the purposes of entering the Gemini Live Agent Challenge hackathon.&lt;/p&gt;

&lt;p&gt;It’s currently 1:00 AM in Dhaka. My terminal is a wall of green and red logs, my coffee is cold, and I am about to submit my project for the Google Gemini Live Agent Challenge.&lt;/p&gt;

&lt;p&gt;Over the last few days, I’ve been building IAN (Intelligent Accessibility Navigator). It’s a multimodal AI Agent designed to browse the internet for you using just your voice.&lt;/p&gt;

&lt;p&gt;If you are breaking into tech, or if you are one of the hackathon judges reading this, I want to take you behind the scenes of how I built this, the late-night architecture pivots, and how I managed to stop my headless browsers from crashing my server.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Broken Web: Why We Need a New Approach to Web Accessibility ♿
&lt;/h2&gt;

&lt;p&gt;If you have ever tried using a traditional screen reader on a modern e-commerce site, you know it’s a nightmare.&lt;/p&gt;

&lt;p&gt;Traditional screen readers rely entirely on parsing the Document Object Model (DOM). But today’s web is incredibly messy. It’s filled with dynamically injected elements masquerading as buttons, missing ARIA labels, and complex Single Page Applications (SPAs). When a visually impaired user tries to navigate a notoriously clunky real-world website, the screen reader just yells a wall of meaningless code at them.&lt;/p&gt;

&lt;p&gt;Web accessibility shouldn't rely on perfect HTML. That led me to a realization: humans don't parse the DOM to use a website. We just look at the screen. With the new multimodal Gemini models, I wondered: could I build an AI that does exactly the same thing?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fig520onekgkeqq0eprod.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fig520onekgkeqq0eprod.PNG" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter IAN: A Voice-Controlled AI Agent 🎙️
&lt;/h2&gt;

&lt;p&gt;IAN is a Next-Gen Agent that acts as a digital equalizer. Instead of navigating with a keyboard or relying on HTML tags, you use a high-contrast, Neo-Brutalist React dashboard. You hold down a button, speak naturally ("Hey, go to Amazon and search for running shoes"), and the AI physically takes over the browser for you.&lt;/p&gt;

&lt;p&gt;To build this at "startup speed," I leaned heavily on the Google Agent Development Kit (ADK) and some rapid "vibe coding" for the frontend using Google's experimental agentic IDEs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dual-Model Architecture (And Bypassing the DOM) 🧠
&lt;/h2&gt;

&lt;p&gt;Building AI Agents that run in real-time is tricky. If the agent is busy taking a screenshot, it can't listen to your voice. To fix this, I split IAN's brain into two distinct pieces.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9wvfgmcf42w86bbvddt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9wvfgmcf42w86bbvddt.png" alt=" " width="640" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Voice Intent (Gemini 2.5 Native Audio)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you speak into the React frontend, the raw audio is streamed via WebSockets to my backend. Using the ADK's InMemorySessionService, I pass this audio directly into gemini-2.5-flash-native-audio.&lt;/p&gt;

&lt;p&gt;This model is incredibly fast. It acts as my "Audio Orchestrator." Its only job is to perform Voice Activity Detection (VAD), figure out what the user wants, and output a strict, silent tag like [NAVIGATE: search amazon for shoes].&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Visual Action (Playwright Automation)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once my backend intercepts that [NAVIGATE] tag, it spins up a background thread running Playwright automation.&lt;/p&gt;

&lt;p&gt;A headless Chromium browser opens the website, takes a screenshot, and sends it to gemini-2.5-flash (the vision model). The AI looks at the pixels, calculates the exact (X, Y) coordinates of the search bar, and tells Playwright to physically click and type. It completely ignores the messy DOM!&lt;/p&gt;
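&lt;p&gt;To make the hand-off concrete, here is a hypothetical sketch of the coordinate step. It assumes the vision model is prompted to reply with JSON pixel coordinates relative to the screenshot it was shown; scaling those back onto the live viewport is the only real logic:&lt;/p&gt;

```python
import json

def parse_click_target(model_reply: str, shot_size: tuple[int, int],
                       viewport: tuple[int, int]) -> tuple[int, int]:
    """Map the vision model's screenshot coordinates onto the live viewport.

    Assumes the model answers with JSON like '{"x": 512, "y": 96}',
    measured against the screenshot resolution it received.
    """
    point = json.loads(model_reply)
    sx, sy = shot_size
    vx, vy = viewport
    return round(point["x"] * vx / sx), round(point["y"] * vy / sy)

# A 1280x800 screenshot downscaled from a 2560x1600 viewport:
print(parse_click_target('{"x": 512, "y": 96}', (1280, 800), (2560, 1600)))
# -> (1024, 192)
```

&lt;p&gt;The resulting pair is what gets handed to Playwright's mouse click, so the agent never has to find a selector in the DOM at all.&lt;/p&gt;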

&lt;h2&gt;
  
  
  Surviving the Hackathon: Beating Rate Limits &amp;amp; Bugs 🐛
&lt;/h2&gt;

&lt;p&gt;Of course, it wasn't all smooth sailing. Here are the two massive roadblocks I hit and how I solved them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Taming the "Concurrency Explosion" on Google Cloud Run ☁️&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I deployed my FastAPI backend to Google Cloud Run to ensure it could scale. However, headless Playwright browsers consume a lot of memory.&lt;/p&gt;

&lt;p&gt;During testing, if I gave the agent two commands too quickly, it would spawn multiple browsers, eat all my RAM, and instantly hit Gemini's 429 RESOURCE_EXHAUSTED API rate limit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Fix:&lt;/strong&gt; I implemented a strict asyncio.Lock() in Python to prevent ghost-browsers from spawning. I also built a 3.5-second "pacemaker" into the visual reasoning loop. This paced the agent to roughly 15 actions per minute, keeping me safely within the free-tier API quotas without dropping the WebSocket connection!&lt;/p&gt;
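&lt;p&gt;A minimal sketch of that pattern (the class and parameter names are mine; the production version wraps the Playwright calls) combines the lock and the pacemaker in one helper:&lt;/p&gt;

```python
import asyncio
import time

class Pacemaker:
    """Serialize actions behind a lock and enforce a minimum gap between them."""

    def __init__(self, min_interval: float = 3.5):
        self._lock = asyncio.Lock()        # one browser session at a time
        self._min_interval = min_interval
        self._last = float("-inf")

    async def run(self, action):
        async with self._lock:             # prevents ghost-browsers
            wait = self._min_interval - (time.monotonic() - self._last)
            if wait > 0:
                await asyncio.sleep(wait)  # the "pacemaker" beat
            try:
                return await action()
            finally:
                self._last = time.monotonic()

async def demo() -> float:
    pacer = Pacemaker(min_interval=0.05)   # shortened interval for the demo
    start = time.monotonic()
    for _ in range(3):
        await pacer.run(lambda: asyncio.sleep(0))
    return time.monotonic() - start

elapsed = asyncio.run(demo())  # three actions, two enforced 0.05 s gaps
```

&lt;p&gt;Because the gap is enforced inside the lock, a burst of voice commands queues up politely instead of spawning parallel browsers and burning through the quota.&lt;/p&gt;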

&lt;p&gt;&lt;strong&gt;The Float32 Audio Nightmare 🎧&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Browsers capture microphone audio in Float32 format at 44.1kHz. But the Gemini Native Audio model strictly requires 16kHz, 16-bit PCM audio. For hours, the model was effectively listening to static and hallucinating responses to it.&lt;/p&gt;

&lt;p&gt;I had to dive deep into the Web Audio API and write a custom JavaScript processor to manually downsample and clamp the audio chunks on the fly before sending them over the WebSocket.&lt;/p&gt;
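&lt;p&gt;The real processor is JavaScript inside a Web Audio API worklet, but the conversion math fits in a few lines of Python. Treat this as a hypothetical port: the nearest-sample decimation here is the simplest possible strategy, and the actual worklet operates on streaming chunks:&lt;/p&gt;

```python
import struct

def float32_to_pcm16(samples: list[float], src_rate: int = 44100,
                     dst_rate: int = 16000) -> bytes:
    """Downsample Float32 [-1.0, 1.0] audio to 16 kHz, 16-bit little-endian PCM."""
    ratio = src_rate / dst_rate                    # 2.75625 for 44.1k -> 16k
    out = []
    i = 0.0
    while i < len(samples):
        s = max(-1.0, min(1.0, samples[int(i)]))   # clamp to avoid wrap-around
        out.append(int(s * 32767))
        i += ratio                                 # nearest-sample decimation
    return struct.pack(f"<{len(out)}h", *out)

chunk = [0.0, 0.5, 1.2, -1.5] * 100                # 400 samples at 44.1 kHz
pcm = float32_to_pcm16(chunk)
# 400 / 2.75625 rounds up to 146 output samples, 2 bytes each
```

&lt;p&gt;Without the clamp, a sample like 1.2 overflows the int16 range and wraps into a loud click, which is exactly the kind of garbage the model was choking on.&lt;/p&gt;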

&lt;h2&gt;
  
  
  Advice for Developers Breaking Into Tech
&lt;/h2&gt;

&lt;p&gt;If you are just starting out and looking at complex architectures like this, don't be intimidated.&lt;/p&gt;

&lt;p&gt;A week ago, I had never used the Google ADK, and my WebSockets kept crashing. Hackathons are the ultimate forcing function. You learn by breaking things, reading the docs, and fixing them at 2:00 AM. If you want to break into tech, stop doing tutorials and go enter a hackathon. The pressure makes you a better developer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Final Demo &amp;amp; What's Next 🔮
&lt;/h2&gt;

&lt;p&gt;This hackathon was an incredible experience, but it's just a proof of concept. The next step is migrating this logic from a Cloud Run proxy into a local Chrome Extension, allowing IAN to drive your actual local browser securely.&lt;/p&gt;

&lt;p&gt;The era of struggling with inaccessible HTML is over. If you can see it on the screen, AI can click it for you.&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://devpost.com/software/ian-intelligent-accessibility-navigator" rel="noopener noreferrer"&gt;Devpost Submission Link&lt;/a&gt;&lt;br&gt;
📺 Watch the live demo here: &lt;a href="https://youtu.be/dLxiK394WOc?si=0sxj4HrlZ3XD-p0a" rel="noopener noreferrer"&gt;YouTube Demo Link&lt;/a&gt;&lt;br&gt;
💻 Check out the Open-Source Code: &lt;a href="https://github.com/azaynul10/ian-accessibility-agent" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wish me luck with the judges!&lt;/p&gt;

</description>
      <category>wecoded</category>
      <category>geminiliveagentchallenge</category>
      <category>googlecloud</category>
      <category>ai</category>
    </item>
    <item>
      <title>Forging a Digital Shield Against Climate Chaos – The Birth of CarbonPro AI</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Sun, 27 Jul 2025 13:39:47 +0000</pubDate>
      <link>https://dev.to/azaynul10/forging-a-digital-shield-against-climate-chaos-the-birth-of-carbonpro-ai-56mi</link>
      <guid>https://dev.to/azaynul10/forging-a-digital-shield-against-climate-chaos-the-birth-of-carbonpro-ai-56mi</guid>
      <description>&lt;h1&gt;
  
  
  Building with Bolt:
&lt;/h1&gt;

&lt;p&gt;In the sweltering heat of a planet on the brink, where wildfires rage and ice caps weep, I found my spark. The World's Largest Hackathon wasn't just a coding marathon; it was my call to arms. As a solo builder juggling three hackathons in a single frantic week during semester break, I dove headfirst into creating CarbonPro AI – a real-time carbon credit trading platform that's not just another marketplace, but a proactive fortress against emissions. Picture this: instead of mopping up the mess after the flood, we're building dams before the storm hits. That's the magic I unleashed with Bolt.new, compressing what could have been a month's work into a tool that could reshape our fight for a breathable future.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffoizg0dr4sv3i4lxtq3j.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffoizg0dr4sv3i4lxtq3j.PNG" alt=" " width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Vision: From Reactive Regret to Proactive Revolution
&lt;/h2&gt;

&lt;p&gt;The carbon credit market is a behemoth in waiting. According to recent projections from CarbonCredits.com and CFA Institute reports, it's set to balloon from $2 billion in 2022 to a staggering $250 billion by 2050 – if we can iron out the wrinkles of integrity concerns, supply shortages, and regulatory hurdles. But here's the rub: most systems are stuck in the past, trading credits &lt;em&gt;after&lt;/em&gt; the damage is done. Businesses pollute first, then scramble for offsets, often at inflated costs amid scandals over "phantom credits" that do little real good.&lt;/p&gt;

&lt;p&gt;My vision for CarbonPro AI was to flip the script. I wanted a platform that democratizes carbon markets, making them accessible not just to corporations but to everyday warriors – from eco-conscious individuals to small businesses. Key pillars:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Predictive Power&lt;/strong&gt;: Use AI to forecast emissions months ahead, turning guesswork into precision strikes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparent Trading&lt;/strong&gt;: Real-time exchanges on blockchain, where every transaction is etched in digital stone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intuitive Interface&lt;/strong&gt;: A sleek, responsive design that feels like trading stocks, but for the planet's survival.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact-First Design&lt;/strong&gt;: Personalized recommendations to slash footprints, blending finance with genuine environmental wins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Bolt.new became my secret weapon, accelerating what could have been a grueling setup into a seamless launchpad. No more wrestling with boilerplate code – I described my dream, and AI handed me the blueprint.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Odyssey: Code as Catalyst for Change
&lt;/h2&gt;

&lt;p&gt;Building CarbonPro AI felt like conducting an orchestra in a thunderstorm – exhilarating, chaotic, and ultimately harmonious. Bolt.new's AI didn't just assist; it co-piloted, suggesting optimizations and debugging knots I hadn't even spotted. Let's break down the journey.&lt;/p&gt;

&lt;h3&gt;
  
  
  Igniting the Stack with Bolt.new
&lt;/h3&gt;

&lt;p&gt;From day one, Bolt.new's intelligent prompts helped me architect the perfect stack. I prompted for a "high-performance, sustainable trading platform with real-time features and blockchain integration," and it delivered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: React.js with TypeScript for robust, type-safe components that scale effortlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Styling&lt;/strong&gt;: Tailwind CSS for lightning-fast prototyping of eco-inspired designs – think deep forest greens fading to sky blues, symbolizing earth's recovery.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: Supabase for real-time PostgreSQL magic, handling user auth and data sync with integrated WebSockets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blockchain&lt;/strong&gt;: Algorand, the eco-warrior of chains. Research from Algorand's site highlights its carbon-negative operations (through offsets and efficient PoS consensus), blistering 1,000+ TPS speed, and smart contracts via the Algorand Virtual Machine (AVM). Perfect for tokenizing credits as Standard Assets, with Pera Wallet integration for seamless mobile trades.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visualization&lt;/strong&gt;: Recharts for dynamic graphs that pulse with market life.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This stack wasn't random; Bolt.new analyzed my needs and refined it, saving me weeks of trial-and-error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu2ipkbsw5yyhfg20rg9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu2ipkbsw5yyhfg20rg9.jpg" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Crafting the Real-Time Trading Heartbeat
&lt;/h3&gt;

&lt;p&gt;The core thrill? The trading engine – a beast that matches orders in milliseconds, like a digital auction house for planetary salvation. Challenges abounded: handling high-volume matches without lag, ensuring atomic swaps on Algorand for fraud-proof trades.&lt;/p&gt;

&lt;p&gt;Bolt.new's AI refined my initial prompt ("Build an efficient order matching system for carbon credits"), suggesting this optimized snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Optimized order matching with priority queue for O(log n) operations&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OrderMatcher&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buyHeap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MaxHeap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;  &lt;span class="c1"&gt;// Max-heap for buys (highest price first)&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sellHeap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MinHeap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// Min-heap for sells (lowest price first)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;matchOrder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newOrder&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;oppositeHeap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;newOrder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;buy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sellHeap&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buyHeap&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;remaining&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;newOrder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;remaining&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;oppositeHeap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isEmpty&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;top&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;oppositeHeap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;peek&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newOrder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;top&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;matchQty&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;remaining&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;top&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;executeTrade&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newOrder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;top&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;matchQty&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="nx"&gt;remaining&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="nx"&gt;matchQty&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nx"&gt;top&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="nx"&gt;matchQty&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;top&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;oppositeHeap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;remaining&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newOrder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;buy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buyHeap&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sellHeap&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newOrder&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;isMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;opposite&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;buy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;price&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="nx"&gt;opposite&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;price&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;price&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="nx"&gt;opposite&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This heap-based approach, inspired by financial exchanges, cut processing time by 40%. Bolt.new even simulated load tests, revealing bottlenecks I fixed before they bit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Weaving AI Magic for Emissions Prophecy
&lt;/h3&gt;

&lt;p&gt;Drawing from real-world AI applications – like Google's DeepMind optimizing data center energy (reducing cooling by 40%, per MIT News) or IBM's Watson forecasting urban emissions – I built an AI engine that sips data from Supabase and spits out futures. The model processes historical trades, satellite-derived environmental data (via APIs like NASA's), and user inputs for 85%+ accurate predictions.&lt;/p&gt;

&lt;p&gt;A favorite Bolt.new-generated snippet for the prediction core:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// AI-driven emission forecasting with temporal fusion&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EmissionForecaster&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;forecast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;timeframe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;quarter&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;historical&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetchHistoricalEmissions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;externalFactors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;integrateExternalData&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// e.g., weather APIs, market volatility&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;modelInput&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;preprocess&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;consumption&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;historical&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;externalFactors&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;timeframe&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prediction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mlModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;modelInput&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// TensorFlow.js or Scikit integration via worker&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;total&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aggregate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;breakdown&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;categories&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;confidence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;calculateConfidence&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;variance&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="na"&gt;recommendations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateActions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;generateActions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pred&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;pred&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;categories&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;reductionPotential&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// AI-optimized suggestions&lt;/span&gt;
      &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Switch to renewable energy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Optimize supply chain&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Real-world parallels? Tools like Microsoft's Azure Carbon Optimizer or startups using ML for farm-level predictions (from AIMultiple research) inspired this, but Bolt.new helped tailor it for carbon credits, adding confidence scoring to build user trust.&lt;/p&gt;

&lt;h3&gt;
  
  
  Securing Trades on Algorand's Green Ledger
&lt;/h3&gt;

&lt;p&gt;Algorand's allure? It's not just fast – with 1,000+ TPS and finality in seconds – it's sustainable. The network offsets its entire carbon footprint through partnerships (making it carbon-negative), ideal for a climate-focused app. Smart contracts via AVM handle atomic swaps, ensuring no partial trades.&lt;/p&gt;

&lt;p&gt;Integration code, polished by Bolt.new:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Secure atomic swap on Algorand with Pera Wallet&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;executeSwap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;buyer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;seller&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;creditAsset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;algorandClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getTransactionParams&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;txn1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;algosdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;makeAssetTransferTxnWithSuggestedParams&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;buyer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;seller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;price&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;assetIndex&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// ALGO&lt;/span&gt;
      &lt;span class="na"&gt;suggestedParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;txn2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;algosdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;makeAssetTransferTxnWithSuggestedParams&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;seller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;buyer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;assetIndex&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;creditAsset&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;suggestedParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;groupID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;algosdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;computeGroupID&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;txn1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;txn2&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="nx"&gt;txn1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;groupID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;txn2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;groupID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;signed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;peraWallet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;signTransaction&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;txn1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;txn2&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;algorandClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sendRawTransaction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;signed&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;txId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;signed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;txId&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Swap failed:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pera Wallet's mobile-first design made on-the-go trades a breeze, syncing seamlessly with Supabase for hybrid web3 experiences.&lt;/p&gt;

&lt;h3&gt;
  
  
  Painting Data with Purpose: Real-Time Visualization
&lt;/h3&gt;

&lt;p&gt;No trading platform shines without visuals that tell stories. Recharts brought data to life, with gradients evoking clean skies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Interactive price prediction chart with environmental overlays&lt;/span&gt;
&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ResponsiveContainer&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;100%&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;LineChart&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;predictionData&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;margin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="na"&gt;top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;right&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;bottom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;CartesianGrid&lt;/span&gt; &lt;span class="nx"&gt;strokeDasharray&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;3 3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stroke&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#00374C&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;opacity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;0.4&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;XAxis&lt;/span&gt; &lt;span class="nx"&gt;dataKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;date&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stroke&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#F5F5F5&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;tickFormatter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;formatDate&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;YAxis&lt;/span&gt; &lt;span class="nx"&gt;yAxisId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;left&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stroke&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#22BFFD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;domain&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;auto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;auto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;YAxis&lt;/span&gt; &lt;span class="nx"&gt;yAxisId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;right&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;orientation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;right&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stroke&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#00FF00&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;domain&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;auto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;auto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt; &lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Emission Reduction Potential&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;angle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Tooltip&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;CustomTooltip&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Legend&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Line&lt;/span&gt; &lt;span class="nx"&gt;yAxisId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;left&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;monotone&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;dataKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;predictedPrice&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stroke&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#22BFFD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;strokeWidth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;dot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Line&lt;/span&gt; &lt;span class="nx"&gt;yAxisId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;right&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;monotone&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;dataKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;reductionPotential&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stroke&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#00FF00&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;strokeWidth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;dot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Area&lt;/span&gt; &lt;span class="nx"&gt;yAxisId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;left&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;monotone&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;dataKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;confidenceLower&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stackId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stroke&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;none&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;fill&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#22BFFD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;fillOpacity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Area&lt;/span&gt; &lt;span class="nx"&gt;yAxisId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;left&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;monotone&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;dataKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;confidenceUpper&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stackId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;stroke&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;none&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;fill&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#22BFFD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;fillOpacity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/LineChart&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/ResponsiveContainer&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bolt.new's suggestions added interactive tooltips showing reduction tips, turning charts into educational tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bolt.new: My AI Co-Creator in the Climate Crusade
&lt;/h2&gt;

&lt;p&gt;Bolt.new wasn't just a tool; it was a thought partner, reshaping my solo dev flow.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rapid Prototyping Symphony&lt;/strong&gt;: From vague ideas to polished prototypes in hours. I prompted "Design an eco-themed trading dashboard with real-time charts," and got a fully styled React component – complete with dark mode for energy-saving vibes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code as Poetry&lt;/strong&gt;: AI enforced best practices, like strict TypeScript interfaces for order types, preventing bugs that could "pollute" the system. It even suggested eco-efficient code, minimizing computations to echo the platform's green ethos.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimization Alchemy&lt;/strong&gt;: When latency crept in, Bolt.new proposed web workers for AI predictions, slashing load times. Result? Sub-500ms responses, as if the app breathed in sync with the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing Fortress&lt;/strong&gt;: AI-generated tests with 95% coverage, simulating market crashes and wallet failures. One prompt – "Write unit tests for blockchain swaps" – yielded comprehensive suites that caught edge cases early.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
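
&lt;p&gt;To make the "strict TypeScript interfaces" point concrete, here's the shape of typing Bolt.new pushed me toward (a minimal sketch – the &lt;code&gt;Order&lt;/code&gt; and &lt;code&gt;OrderSide&lt;/code&gt; names are illustrative, not the platform's actual types):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative strict typing: a union type rules out invalid sides at compile time
type OrderSide = 'buy' | 'sell';

interface Order {
  id: string;
  side: OrderSide;
  creditAssetId: number; // Algorand ASA id of the carbon credit
  quantity: number;      // credits to trade
  price: number;         // microAlgos per credit
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With types like these, a mistyped side or a missing field fails at compile time instead of "polluting" a live order book.&lt;/p&gt;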

&lt;h2&gt;
  
  
  Treasured Code Gems and Prompt Wizardry
&lt;/h2&gt;

&lt;p&gt;Bolt.new's prompts were my spells. Favorite: "Optimize order book for 10k concurrent users with low latency." It birthed this gem for subscription management:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Efficient real-time pub/sub with memory leak prevention&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PubSub&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;events&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxListeners&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;has&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;listeners&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;listeners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxListeners&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Max listeners exceeded&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;listeners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;listeners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;listeners&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;listeners&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;listeners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;listener&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another favorite: the AI forecast wrapper, blending SciPy-like smoothing logic in JavaScript for emissions curves.&lt;/p&gt;
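
&lt;p&gt;That wrapper amounts to simple exponential smoothing over historical emissions readings (a rough sketch – the function and parameter names here are my own, not the exact code):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Exponential smoothing: alpha weights recent readings over older ones
function smoothEmissions(values: number[], alpha = 0.3): number[] {
  const out: number[] = [];
  let prev = values[0];
  for (const v of values) {
    prev = alpha * v + (1 - alpha) * prev;
    out.push(prev);
  }
  return out;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Feeding the smoothed series into the prediction chart keeps the emissions curve readable without hiding the trend.&lt;/p&gt;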

&lt;h2&gt;
  
  
  AI: From Tool to Transformation
&lt;/h2&gt;

&lt;p&gt;AI-powered dev didn't just speed me up; it evolved me.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dream Bigger&lt;/strong&gt;: Tackled blockchain and AI that daunted me before – now, they're allies in sustainability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Iterate Like Wind&lt;/strong&gt;: Feedback loops shrank from days to minutes, letting me pivot from basic trading to full predictive analytics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Architect with Wisdom&lt;/strong&gt;: Bolt.new suggested modular designs, like separating the UI from the engine, making scaling feel as natural as tree growth.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learn Exponentially&lt;/strong&gt;: Dove into Algorand's AVM and Supabase's edge functions, with AI as tutor, emerging wiser about green tech.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Living Platform: CarbonPro AI Unleashed
&lt;/h2&gt;

&lt;p&gt;Behold the harvest: A platform where users forecast footprints, trade with confidence, and watch impacts bloom. Features sing of sustainability – AI accuracy at 94.2%, blockchain immutability, responsive design for global access. Deployed on Netlify, it's live, breathing, ready to offset tons.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Etched in Code and Carbon
&lt;/h2&gt;

&lt;p&gt;Bolt.new taught me AI is the wind beneath innovation's wings – not replacing human spark, but fanning it into infernos. I learned resilience in solo sprints, the poetry of clean code, and that tech can heal our world if wielded with purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Horizon Bound: From Hack to Global Guardian
&lt;/h2&gt;

&lt;p&gt;CarbonPro AI isn't done; it's awakening. I'll evolve it into a startup, integrating more registries, IoT for real-time data, perhaps even VR trading floors. This hackathon wasn't about prizes; it was my launch into building for tomorrow's blue skies.&lt;/p&gt;

&lt;p&gt;Fellow builders, if Bolt.new ignited your fire, share your tales. Let's code a cooler planet together. Check CarbonPro AI at &lt;a href="https://endearing-cendol-18b63d.netlify.app/" rel="noopener noreferrer"&gt;endearing-cendol-18b63d.netlify.app&lt;/a&gt; – your feedback could be the next commit!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In the code of creation, we find our world's redemption. One prompt, one platform, one planet at a time.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>wlhchallenge</category>
      <category>career</category>
      <category>entrepreneurship</category>
    </item>
    <item>
      <title>How I Built a Snake Game Using Amazon Q CLI: A Step-by-Step Tutorial</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Thu, 12 Jun 2025 14:33:33 +0000</pubDate>
      <link>https://dev.to/azaynul10/how-i-built-a-snake-game-using-amazon-q-cli-a-step-by-step-tutorial-3f62</link>
      <guid>https://dev.to/azaynul10/how-i-built-a-snake-game-using-amazon-q-cli-a-step-by-step-tutorial-3f62</guid>
      <description>&lt;p&gt;The classic Snake game brings back memories of guiding a pixelated snake to gobble up food while dodging collisions. I wanted to recreate this nostalgic game with modern twists—multiple difficulty levels, power-ups, obstacles, visual effects, and sound—all built from scratch using Amazon Q CLI, an AI-powered command-line tool that generates code, debugs issues, and automates tasks through natural language prompts. In this tutorial, I’ll share how I developed an Enhanced Snake Game in Python using Pygame, with Amazon Q CLI as my coding partner. Whether you’re a beginner or a seasoned coder, this guide will show you how to leverage AI to create a fun, feature-rich game.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background and Context
&lt;/h2&gt;

&lt;p&gt;The Enhanced Snake Game is a modern take on the classic arcade game, built with Python and Pygame in a WSL2 environment. The project leveraged Amazon Q CLI, a generative AI-powered command-line tool that supports natural language code generation, debugging, and automation. My journey involved initiating Amazon Q CLI, generating the base game, iteratively adding features, and resolving technical challenges, resulting in a polished game with advanced mechanics.&lt;/p&gt;

&lt;p&gt;The Enhanced Snake Game that we are going to build using Amazon Q CLI includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiple Difficulty Levels: Easy, Medium, Hard, and Extreme, with varying speeds and wall collision options.&lt;/li&gt;
&lt;li&gt;Special Food and Power-Ups: Bonus food for extra points, plus speed boosts, invincibility, and double-score effects.&lt;/li&gt;
&lt;li&gt;Obstacles: Randomly placed blocks that challenge navigation.&lt;/li&gt;
&lt;li&gt;Visual Effects: Particle explosions when collecting items and pulsating effects for special food and power-ups.&lt;/li&gt;
&lt;li&gt;Sound Effects: Audio cues for eating, power-ups, and game over, plus background music.&lt;/li&gt;
&lt;li&gt;Persistent High Scores: Saved in a JSON file for each difficulty level.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end, you’ll have a polished game and a clear understanding of how Amazon Q CLI can accelerate development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before diving in, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Q CLI: installed on your system (instructions below).&lt;/li&gt;
&lt;li&gt;Python 3: installed, along with the Pygame library (pip install pygame).&lt;/li&gt;
&lt;li&gt;AWS Builder ID: required to use &lt;a href="https://community.aws/builderid?trk=b085178b-f0cb-447b-b32d-bd0641720467&amp;amp;sc_channel=el" rel="noopener noreferrer"&gt;Amazon Q CLI&lt;/a&gt; for free, or a Pro license if you have one.&lt;/li&gt;
&lt;li&gt;WSL2: for Windows users, or a Linux/macOS environment.&lt;/li&gt;
&lt;li&gt;Basic Python knowledge (though Amazon Q CLI simplifies much of the coding).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Setting Up Amazon Q CLI&lt;/strong&gt;&lt;br&gt;
Since I’m on Windows, I set up Amazon Q CLI in my Windows Subsystem for Linux (WSL2) environment, which provides a Linux-like setup for running the tool. Here’s how to do it:&lt;/p&gt;

&lt;p&gt;Open a WSL Terminal:&lt;/p&gt;

&lt;p&gt;Launch WSL by running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsl -d Ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;in a Windows Command Prompt or PowerShell.&lt;/p&gt;

&lt;p&gt;Install Dependencies:&lt;/p&gt;

&lt;p&gt;Ensure curl and unzip are installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install curl unzip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download and Install Amazon Q CLI:&lt;/p&gt;

&lt;p&gt;Download the zip file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --proto '=https' --tlsv1.2 -sSf https://desktop-release.q.us-east-1.amazonaws.com/latest/q-x86_64-linux.zip -o q.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unzip and install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unzip q.zip
./q/install.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log In with AWS Builder ID:&lt;/p&gt;

&lt;p&gt;Run: q login&lt;/p&gt;

&lt;p&gt;Select “Use for Free with Builder ID,” follow the browser prompts to sign in, and complete the authentication.&lt;/p&gt;

&lt;p&gt;Verify Installation:&lt;/p&gt;

&lt;p&gt;Check that Amazon Q CLI is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;q doctor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it outputs “Everything looks good!” you’re ready to go.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymy06lkqfpdqghxhmc6n.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymy06lkqfpdqghxhmc6n.PNG" alt="Image description" width="774" height="775"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Amazon Q CLI set up, I was ready to start coding.&lt;br&gt;
&lt;strong&gt;Step 2: Creating the Base Snake Game&lt;/strong&gt;&lt;br&gt;
I began by creating a basic Snake game using Pygame. In the WSL terminal, I started Amazon Q CLI’s chat feature:&lt;br&gt;
q chat&lt;/p&gt;

&lt;p&gt;I entered this prompt:&lt;br&gt;
Create a basic Snake game in Python using Pygame. The snake should move with arrow keys, grow when it eats food, and end if it hits the wall or itself.&lt;/p&gt;

&lt;p&gt;Amazon Q CLI generated a working game, which I saved as simple_snake_game.py. You can check out the repository here for the code it generated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9tx90mjvupymmruqr9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9tx90mjvupymmruqr9d.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This code created a simple Snake game where the snake moves, eats food to grow, and ends on wall or self-collision. I tested it by running:&lt;br&gt;
python3 simple_snake_game.py&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Adding Difficulty Levels&lt;/strong&gt;&lt;br&gt;
To make the game more engaging, I added difficulty levels. I prompted Amazon Q CLI:&lt;br&gt;
Add difficulty levels to the Snake game: Easy (slow speed, screen wrap), Medium (medium speed, screen wrap), Hard (fast speed, wall collision), and Extreme (very fast, wall collision). Include a menu to select the difficulty.&lt;/p&gt;

&lt;p&gt;Amazon Q CLI generated code with a difficulty dictionary and a menu system. Here’s a snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DIFFICULTY_LEVELS = {
    "Easy": {"speed": 8, "color": (0, 255, 0), "wall_collision": False},
    "Medium": {"speed": 12, "color": (0, 0, 255), "wall_collision": False},
    "Hard": {"speed": 16, "color": (255, 0, 0), "wall_collision": True},
    "Extreme": {"speed": 20, "color": (255, 255, 0), "wall_collision": True}
}

def show_difficulty_menu():
    selected = 0
    options = list(DIFFICULTY_LEVELS.keys())
    while True:
        for event in pygame.event.get():
            if event.type == pygame.KEYDOWN:
                if event.key == pygame.K_UP:
                    selected = (selected - 1) % len(options)
                elif event.key == pygame.K_DOWN:
                    selected = (selected + 1) % len(options)
                elif event.key == pygame.K_RETURN:
                    return options[selected]
        screen.fill((0, 0, 0))
        for i, option in enumerate(options):
            color = DIFFICULTY_LEVELS[option]["color"] if i == selected else (255, 255, 255)
            text = font.render(option, True, color)
            screen.blit(text, (WIDTH // 2 - text.get_width() // 2, 200 + i * 50))
        pygame.display.flip()
        clock.tick(10)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This added a menu where players select difficulty, and the game adjusts speed and wall behavior accordingly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp431k3afpgstj0d7v7xy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp431k3afpgstj0d7v7xy.png" alt="Image description" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;
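&lt;p&gt;To see how the wall_collision flag actually changes gameplay, here’s a minimal, pygame-free sketch (my own illustration, not code generated by Amazon Q CLI) of how the head position can wrap on easy modes or die at the edges on hard modes:&lt;/p&gt;

```python
# Hypothetical sketch: how the wall_collision flag changes movement rules.
GRID_WIDTH, GRID_HEIGHT = 30, 20

def step_head(head, direction, wall_collision):
    """Advance the snake head one cell; wrap on easy modes, signal death on hard."""
    x, y = head[0] + direction[0], head[1] + direction[1]
    if wall_collision:
        # Hard/Extreme: leaving the grid ends the game.
        dead = not (0 <= x < GRID_WIDTH and 0 <= y < GRID_HEIGHT)
        return (x, y), dead
    # Easy/Medium: the snake wraps around the screen edges.
    return (x % GRID_WIDTH, y % GRID_HEIGHT), False

assert step_head((29, 5), (1, 0), wall_collision=False) == ((0, 5), False)
assert step_head((29, 5), (1, 0), wall_collision=True)[1] is True
```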

&lt;p&gt;&lt;strong&gt;Step 4: Adding Special Food and Power-Ups&lt;/strong&gt;&lt;br&gt;
To increase excitement, I added special food and power-ups. My prompt:&lt;br&gt;
Add special food that gives bonus points and disappears after a few seconds. Include power-ups like speed boost, invincibility, and double score that spawn randomly.&lt;/p&gt;

&lt;p&gt;Amazon Q CLI created Food and PowerUp classes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Food:
    def __init__(self):
        self.position = (0, 0)
        self.type = "normal"  # normal, bonus, or special
        self.color = RED
        self.points = 1
        self.spawn_time = pygame.time.get_ticks()
        self.lifespan = None  # None means permanent
        self.randomize_position()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
class PowerUp:
    def __init__(self):
        self.position = (0, 0)
        self.active = False
        self.type = None
        self.spawn_time = 0
        self.lifespan = 10000  # 10 seconds

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
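&lt;p&gt;The spawn_time and lifespan fields above drive a simple expiry check. Here’s a pygame-free sketch of the idea (the function name is my own, not from the generated game):&lt;/p&gt;

```python
def is_expired(spawn_time_ms, lifespan_ms, now_ms):
    """Return True once a timed item has outlived its lifespan.

    A lifespan of None marks the item as permanent, matching the Food
    class above where lifespan = None means the food never despawns.
    """
    if lifespan_ms is None:
        return False
    return now_ms - spawn_time_ms >= lifespan_ms

# A power-up with a 10-second lifespan is still alive after 5 seconds...
assert is_expired(0, 10000, 5000) is False
# ...but gone after 10 seconds.
assert is_expired(0, 10000, 10000) is True
# Normal food (lifespan None) never expires.
assert is_expired(0, None, 999999) is False
```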



&lt;p&gt;These classes enabled dynamic food types and power-ups with timed effects, enhancing gameplay strategy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Introducing Obstacles&lt;/strong&gt;&lt;br&gt;
To add challenge, I included obstacles. I prompted:&lt;br&gt;
Add randomly placed obstacles that the snake must avoid unless it’s invincible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgej2ynlz206ui1rnyws.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgej2ynlz206ui1rnyws.PNG" alt="Image description" width="800" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Q CLI generated an Obstacle class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Obstacle:
    def __init__(self):
        self.positions = []
        self.generate()

    def generate(self):
        self.positions = []
        # Create 5-10 random obstacles
        num_obstacles = random.randint(5, 10)
        for _ in range(num_obstacles):
            pos = (random.randint(2, GRID_WIDTH - 3), random.randint(2, GRID_HEIGHT - 3))
            # Make sure obstacles aren't too close to the center where the snake starts
            if abs(pos[0] - GRID_WIDTH // 2) &amp;gt; 3 or abs(pos[1] - GRID_HEIGHT // 2) &amp;gt; 3:
                self.positions.append(pos)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
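&lt;p&gt;Paired with the invincibility power-up, the collision rule reduces to a one-liner. A small illustrative sketch (names are mine, not from the generated code):&lt;/p&gt;

```python
def hits_obstacle(head, obstacle_positions, invincible=False):
    """The snake dies on an obstacle cell unless invincibility is active."""
    return not invincible and head in obstacle_positions

obstacles = [(5, 5), (10, 3)]
assert hits_obstacle((5, 5), obstacles) is True
assert hits_obstacle((5, 5), obstacles, invincible=True) is False
assert hits_obstacle((0, 0), obstacles) is False
```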



&lt;p&gt;This added random blocks, making navigation trickier unless the snake was invincible.&lt;br&gt;
&lt;strong&gt;Step 6: Enhancing Visual Effects&lt;/strong&gt;&lt;br&gt;
For a polished look, I added visual effects. My prompt:&lt;br&gt;
Add particle effects when the snake eats food or collects a power-up. Make special food and power-ups pulsate.&lt;/p&gt;

&lt;p&gt;Amazon Q CLI provided a Particle class and pulsating effects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Particle effect class
class Particle:
    def __init__(self, x, y, color):
        self.x = x
        self.y = y
        self.color = color
        self.size = random.randint(2, 5)
        self.life = 30
        self.vx = random.uniform(-1, 1)
        self.vy = random.uniform(-1, 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This created engaging visuals, like explosions and pulsating items.&lt;/p&gt;
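&lt;p&gt;The Particle class above only stores state; the per-frame update that makes the explosion fade out can be sketched like this (pygame-free, my own illustration of the technique):&lt;/p&gt;

```python
import random

class Particle:
    """Mirror of the generated Particle class, without the pygame drawing."""
    def __init__(self, x, y, color):
        self.x = x
        self.y = y
        self.color = color
        self.size = random.randint(2, 5)
        self.life = 30            # frames remaining
        self.vx = random.uniform(-1, 1)
        self.vy = random.uniform(-1, 1)

def update_particles(particles):
    """Move each particle by its velocity, age it, and drop dead ones."""
    for p in particles:
        p.x += p.vx
        p.y += p.vy
        p.life -= 1
    return [p for p in particles if p.life > 0]

burst = [Particle(100, 100, (255, 0, 0)) for _ in range(20)]
for _ in range(30):               # after 30 frames every particle has expired
    burst = update_particles(burst)
assert burst == []
```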

&lt;p&gt;&lt;strong&gt;Step 7: Adding Sound Effects&lt;/strong&gt;&lt;br&gt;
To enhance immersion, I added sound. My prompt:&lt;br&gt;
Add sound effects for eating food, collecting power-ups, and game over. Include background music.&lt;/p&gt;

&lt;p&gt;Amazon Q CLI generated code to load audio files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    eat_sound = try_load_sound("eat.wav")
    if eat_sound is None:
        eat_sound = try_load_sound("eat.mp3")

    # Load game over sound
    game_over_sound = try_load_sound("game_over.wav")
    if game_over_sound is None:
        game_over_sound = try_load_sound("game_over.mp3")

    # Load powerup sound
    powerup_sound = try_load_sound("powerup.wav")
    if powerup_sound is None:
        powerup_sound = try_load_sound("powerup.mp3")

    # Set maximum volume for all sounds
    if eat_sound:
        eat_sound.set_volume(1.0)
        print("Eat sound volume set to maximum")
    if game_over_sound:
        game_over_sound.set_volume(1.0)
        print("Game over sound volume set to maximum")
    if powerup_sound:
        powerup_sound.set_volume(1.0)
        print("Power-up sound volume set to maximum")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I downloaded eat.wav, game_over.wav, powerup.wav, and background.mp3 from Pixabay and placed them in /home/q/snake_game_assets/sounds.&lt;/p&gt;
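&lt;p&gt;The .wav-then-.mp3 fallback in the generated code is really just “use the first file that exists”. Stripped of pygame, the pattern looks like this (the helper name is my own, not the generated try_load_sound):&lt;/p&gt;

```python
import os

def first_available(paths):
    """Return the first path that exists on disk, or None if none do."""
    for path in paths:
        if os.path.exists(path):
            return path
    return None

# Try the .wav variant first, then fall back to .mp3 — the same shape
# as the eat_sound / game_over_sound / powerup_sound loading above.
chosen = first_available(["eat.wav", "eat.mp3"])
```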

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2xrxcuh29q7nrhpsdtb.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2xrxcuh29q7nrhpsdtb.PNG" alt="Image description" width="800" height="414"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 8: High Score Tracking&lt;/strong&gt;&lt;br&gt;
To keep players engaged, I added persistent high scores. My prompt:&lt;br&gt;
Add high score tracking that saves to a JSON file for each difficulty.&lt;/p&gt;

&lt;p&gt;Amazon Q CLI implemented:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    def load_high_scores(self):
        try:
            highscore_file = os.path.join(DEFAULT_ASSETS_DIR, "highscores.json")
            if os.path.exists(highscore_file):
                with open(highscore_file, 'r') as f:
                    return json.load(f)
            return {"Easy": 0, "Medium": 0, "Hard": 0, "Extreme": 0}
        except:
            print("Error loading high scores")
            return {"Easy": 0, "Medium": 0, "Hard": 0, "Extreme": 0}

    def save_high_scores(self):
        try:
            highscore_file = os.path.join(DEFAULT_ASSETS_DIR, "highscores.json")
            os.makedirs(os.path.dirname(highscore_file), exist_ok=True)
            with open(highscore_file, 'w') as f:
                json.dump(self.high_scores, f)
        except:
            print("Error saving high scores")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This saved scores to highscores.json in the assets directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qyrjgyiy5ptl39op8fv.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qyrjgyiy5ptl39op8fv.PNG" alt="Image description" width="800" height="674"&gt;&lt;/a&gt;&lt;/p&gt;
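&lt;p&gt;The missing piece between load and save is the per-difficulty comparison. A minimal sketch (the function name is my own, not from the generated code):&lt;/p&gt;

```python
def update_high_score(high_scores, difficulty, score):
    """Record a new high score for this difficulty; return True if it improved."""
    if score > high_scores.get(difficulty, 0):
        high_scores[difficulty] = score
        return True
    return False

scores = {"Easy": 0, "Medium": 0, "Hard": 0, "Extreme": 0}
assert update_high_score(scores, "Easy", 12) is True
assert update_high_score(scores, "Easy", 7) is False   # lower score is ignored
assert scores["Easy"] == 12
```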

&lt;p&gt;&lt;strong&gt;Step 9: Polishing and Debugging&lt;/strong&gt;&lt;br&gt;
I polished the game with features like pause (P key), wall toggle (W key), and dynamic difficulty switching (1-4 keys). I faced challenges like:&lt;/p&gt;

&lt;p&gt;Sound Loading Issues: Files were initially in the wrong directory (/home/q/sounds). I moved them to /home/q/snake_game_assets/sounds.&lt;br&gt;
WSL Path Issues: Fixed by using os.path for cross-platform compatibility.&lt;br&gt;
Double File Extensions: Renamed files like eat.wav.wav to eat.wav.&lt;/p&gt;
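&lt;p&gt;For the double-extension problem, a tiny helper along these lines can detect and repair names like eat.wav.wav (my own sketch; I actually renamed the files by hand):&lt;/p&gt;

```python
import os

def fix_double_extension(filename):
    """Collapse a duplicated extension, e.g. 'eat.wav.wav' -> 'eat.wav'."""
    root, ext = os.path.splitext(filename)
    if ext and root.endswith(ext):
        return root
    return filename

assert fix_double_extension("eat.wav.wav") == "eat.wav"
assert fix_double_extension("background.mp3") == "background.mp3"
```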

&lt;p&gt;Amazon Q CLI helped debug these by suggesting path fixes and creating a sound test script.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Using Amazon Q CLI, I transformed a simple idea into a feature-rich Enhanced Snake Game in just a few hours. The AI’s ability to generate code, debug issues, and explain concepts made development fast and educational. Try it yourself at AWS Builder ID and start building your own games!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmemkbiofm8ahb0a2a59.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmemkbiofm8ahb0a2a59.PNG" alt="Image description" width="800" height="649"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>awschallenge</category>
      <category>gamechallenge</category>
    </item>
    <item>
      <title>Getting started with AWS</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Tue, 16 Jan 2024 20:32:09 +0000</pubDate>
      <link>https://dev.to/azaynul10/getting-started-with-aws-46cp</link>
      <guid>https://dev.to/azaynul10/getting-started-with-aws-46cp</guid>
      <description>&lt;p&gt;Amazon Web Services (AWS) has become the most popular cloud platform in the world since it officially launched back in 2006. It provides a wide range of services, including platform as a service (PaaS), software as a service (SaaS), and infrastructure as a service (IaaS), and has helped industry titans like Netflix, Dropbox, and Reddit grow. The platform lets businesses quickly build and deploy applications in the cloud. Now let's dive into what AWS actually is and the types of services it provides.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding AWS Cloud&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS allows users to rent virtual computers on which they can run their own applications, a capability known as compute power and one of the fundamental services AWS provides. AWS offers over 200 cloud services, including computing, storage, networking, databases, analytics, application services, deployment, and management tools that help businesses scale and grow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some key benefits of using AWS are:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flexible pay-as-you-go model - Pay only for what you use.&lt;/li&gt;
&lt;li&gt;Scalability - Instantly spin up or down resources as your requirements change.&lt;/li&gt;
&lt;li&gt;Global infrastructure - Deploy apps across geographic regions.&lt;/li&gt;
&lt;li&gt;Security - Helps you protect your data, accounts, and workloads from unauthorized access, with DDoS protection and encryption.&lt;/li&gt;
&lt;li&gt;Frequent updates - New features and services are released regularly, so you can stay up to date with the latest tech.&lt;/li&gt;
&lt;li&gt;No upfront costs or long-term commitments required. &lt;/li&gt;
&lt;li&gt;Savings Plans and Reserved Instances let you purchase capacity at a discount. &lt;/li&gt;
&lt;li&gt;AWS also provides 24x7 customer and technical support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Major AWS Cloud Services&lt;/strong&gt;&lt;br&gt;
Some of the major AWS cloud services include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Compute - This is the backbone of AWS which provides processing power to run applications. The primary compute service like EC2 provides scalable virtual servers to deploy apps. Auto Scaling automatically adjusts capacity based on your demand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Storage -  AWS provides reliable storage services like Amazon Simple Storage Service (S3) for storing large amounts of data. S3 offers unlimited object storage with high durability. EBS provides block storage volumes for EC2. Storage Gateway connects on-prem storage to cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database - Amazon Relational Database Service (RDS) is a managed service that makes it easy to set up, operate, and scale a relational database. A financial services firm could use Amazon RDS to store customers' transaction and finance data, then use SQL queries to analyze customer behaviour and make data-driven decisions. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Networking - VPC for isolating resources in a virtual network. Route 53 for DNS management. Direct Connect for private connectivity between on-prem and cloud. A company could use VPC to isolate an application from the public network, creating a private, secure network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analytics - EMR for big data processing with Spark, Hadoop, and HBase. QuickSight for business intelligence and visualization. Amazon Redshift is a fully managed data warehouse that can store and analyze extremely large volumes of data, typically on the order of petabytes. A healthcare research institute could use Amazon Redshift to analyze large datasets of patient information to identify trends and improve patient care.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security - Identity and Access Management (IAM) for access control. KMS for encryption key management. CloudTrail for logging and auditing. A business could use IAM to control which employees have access to certain AWS resources, providing better security for their applications and data. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Management - CloudFormation for infrastructure provisioning using code. CloudWatch for monitoring and alerting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developer Tools - CodeCommit, CodeBuild, CodeDeploy, and CodePipeline for CI/CD. X-Ray for analyzing app performance. AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS, giving software developers the flexibility to take a project from code to a finished application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquvqv1s5wde72hvapyg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquvqv1s5wde72hvapyg9.png" alt="Image description" width="602" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;You Can Also Get Started&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS offers a free tier for new users to experiment with certain services without incurring costs. The AWS free tier offers limited usage of key services like EC2, S3, RDS for 12 months. Create an AWS account, explore the management console and launch instances to gain hands-on experience with core AWS infrastructure and capabilities.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>cloudstorage</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Demystifying the Quantum Computing Stack</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Sun, 12 Nov 2023 06:47:09 +0000</pubDate>
      <link>https://dev.to/azaynul10/demystifying-the-quantum-computing-stack-a-beginners-guide-2e0e</link>
      <guid>https://dev.to/azaynul10/demystifying-the-quantum-computing-stack-a-beginners-guide-2e0e</guid>
      <description>&lt;p&gt;Most people don’t fully understand how computers work from circuits to apps. Hardware and software specialists understand their domain, but few see the full picture. &lt;/p&gt;

&lt;p&gt;Let's take typing an email as an example. You input text via the keyboard (input). The processor encodes into bits (computation). The bits go through gates/circuits to render letters on screen (processing). The email text appears (output).&lt;/p&gt;

&lt;p&gt;Behind the scenes, the computer manipulates bits with circuits and gates programmed with code. As the user, you don't need these details. We'll peel back the layers, to appreciate how the electronics and hardware run the software. This will make it easier to understand qubits for quantum computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Classical Computing Stack
&lt;/h2&gt;

&lt;p&gt;This explains the layers of classical computing, from physical bits, to logic gates, circuits, algorithms and finally applications. It shows how each layer builds on the previous one. &lt;/p&gt;

&lt;p&gt;Let's imagine you want to stream a movie on Netflix. This involves several layers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Physical Bits&lt;/strong&gt; - This is the lowest level of the stack. Bits are physically implemented as transistors and electronic switches in the hardware. Transistors act like switches that are either on or off, representing 1 or 0. Modern CPUs have billions of transistors packed extremely densely using nanotechnology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logic Gates&lt;/strong&gt; - The basic building blocks of logical operations. Common gates include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NOT gate - Flips the input bit, 0 becomes 1, 1 becomes 0.&lt;/li&gt;
&lt;li&gt;AND gate - Outputs 1 only if both input bits are 1, otherwise outputs 0.&lt;/li&gt;
&lt;li&gt;OR gate - Outputs 1 if either input bit is 1, otherwise 0.&lt;/li&gt;
&lt;li&gt;XOR (exclusive OR) gate - Outputs 1 if the input bits are different, 0 if they are the same.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Gates are combined to create components like adders, flip-flops, and registers that are used to build circuits.&lt;/p&gt;
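&lt;p&gt;These four gates are easy to play with in code. A quick Python sketch of their truth tables:&lt;/p&gt;

```python
def NOT(a):     return 1 - a     # flips the bit
def AND(a, b):  return a & b     # 1 only if both inputs are 1
def OR(a, b):   return a | b     # 1 if either input is 1
def XOR(a, b):  return a ^ b     # 1 if the inputs differ

assert NOT(0) == 1 and NOT(1) == 0
assert [AND(a, b) for a, b in [(0,0), (0,1), (1,0), (1,1)]] == [0, 0, 0, 1]
assert [OR(a, b)  for a, b in [(0,0), (0,1), (1,0), (1,1)]] == [0, 1, 1, 1]
assert [XOR(a, b) for a, b in [(0,0), (0,1), (1,0), (1,1)]] == [0, 1, 1, 0]
```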

&lt;p&gt;&lt;strong&gt;Circuits&lt;/strong&gt; - Gates are connected together into circuits that perform operations like adding two numbers. More complex circuits include ALUs (arithmetic logic units) and control units in CPUs. They are typically represented using digital logic diagrams. For streaming, circuits encode and decode video data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Algorithms&lt;/strong&gt;- Step-by-step sequences of instructions or operations to perform tasks like sorting data, encrypting information, compressing files. Circuits are designed to implement algorithms. For Netflix, the compression algorithm squeezes the movie file size, the encryption algorithm scrambles data for security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Protocols&lt;/strong&gt; - Common sets of steps, like the internet protocols that tell data how to route across the network. Common ones include TCP/IP, HTTP, and FTP. This allows your computer to request the movie data from Netflix's servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application&lt;/strong&gt; - The top layer that ties everything together into an application like the Netflix app. It uses the lower protocols, algorithms, circuits down to the physical bits to store and play your movie.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsv5tko3jj9sfcdwsy81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsv5tko3jj9sfcdwsy81.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Programming?
&lt;/h2&gt;

&lt;p&gt;Programming involves writing instructions for a computer to execute and transform inputs to outputs. &lt;/p&gt;

&lt;p&gt;At its core, programming requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Inputs - data needed for the task &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Program Logic - step-by-step instructions to manipulate inputs to calculate desired outputs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Outputs - results produced after processing inputs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, a program to calculate circle area would take radius as input. The logic would:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Get radius value &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Calculate area using formula πr^2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Display area &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The output would be the numerical area printed. &lt;/p&gt;

&lt;p&gt;The logic contains unambiguous instructions in a language like Python executed sequentially. &lt;/p&gt;
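&lt;p&gt;The three steps above translate almost line for line into Python:&lt;/p&gt;

```python
import math

def circle_area(radius):
    """Step 2: calculate the area using the formula pi * r^2."""
    return math.pi * radius ** 2

radius = 2.0                      # Step 1: get the radius value
area = circle_area(radius)
print(f"Area: {area:.2f}")        # Step 3: display the area — prints "Area: 12.57"
```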

&lt;p&gt;Other examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Banking app accepting deposits &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accounting program generating reports&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Online store fetching and processing data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real-world programming requires analyzing requirements, designing logic, coding, and testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python as a programming language
&lt;/h2&gt;

&lt;p&gt;Python is a high-level, general purpose programming language used to build diverse applications. &lt;/p&gt;

&lt;p&gt;Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Readability - Python code is designed to be easy to understand, almost like English. For example: print("Hello World!").&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versatility - Used for web, data science, AI, scientific computing, and more. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rich Ecosystem - Many high-quality libraries like NumPy, TensorFlow, Django/Flask. Allows reusing code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easy to Learn - Intuitive syntax compared to Java/C++. Good first language.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Web apps like Dropbox, Instagram  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data science/AI - NumPy, Pandas, Scikit-Learn&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automation scripts &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Python libraries are pre-written code packages that extend capabilities. They provide common implementations so developers don't rewrite everything.&lt;/p&gt;

&lt;p&gt;Key Python libraries for quantum:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;NumPy - Scientific computing tools&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cirq - Quantum computing/circuits library &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;TensorFlow Quantum - Quantum machine learning &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;PyQuil - Quantum programs for Rigetti chips&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Qiskit - IBM's quantum computing framework&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So Python plus quantum libraries provide a full stack for developing/simulating quantum algorithms and running them on quantum hardware.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddxi7asyt2sxhemfzwh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddxi7asyt2sxhemfzwh5.png" alt="Image description" width="752" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>quantum</category>
      <category>cirq</category>
      <category>programming</category>
      <category>hardware</category>
    </item>
    <item>
      <title>Introduction to Quantum Computing</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Fri, 20 Oct 2023 15:48:52 +0000</pubDate>
      <link>https://dev.to/azaynul10/introduction-to-quantum-computing-159a</link>
      <guid>https://dev.to/azaynul10/introduction-to-quantum-computing-159a</guid>
<description>&lt;p&gt;Quantum computing is a branch of computer science that employs the concepts of quantum theory to execute complicated calculations more effectively than classical computers (like our laptops and phones). Quantum computers do computations using the strange properties of quantum mechanics that govern very small particles like photons and electrons. The goal is to perform certain tasks, such as optimization, simulation, and cryptography, much faster than normal computers allow.&lt;/p&gt;

&lt;p&gt;Regular computers use bits, which can be either 1 or 0. But in quantum computing, the basic unit of information is called a qubit, which can be 1 and 0 at the same time! &lt;/p&gt;

&lt;p&gt;For example, a quantum algorithm could help find the best path for a delivery driver to take through multiple towns, using superposition to explore many route options at once. The quantum computer produces a final answer by measuring the qubits, causing each to settle on just 0 or 1.&lt;/p&gt;

&lt;h2&gt;
  
  
  Four Principles of Qubits:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Superposition&lt;/strong&gt;&lt;br&gt;
Qubits can exist in a superposition of the 0 and 1 states at the same time prior to measurement. This allows them to store much more information than binary bits. &lt;/p&gt;

&lt;p&gt;For example, a basketball can only be in your hands or on the floor, not both places at once. But in quantum mechanics, tiny objects like electrons can be in different states or places at the same time. This is called superposition. It's like having a basketball that's both in your hands and on the floor at the exact same time!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interference&lt;/strong&gt; &lt;br&gt;
The quantum states a qubit can occupy can constructively or destructively interfere with each other, amplifying or canceling out states. This is like waves colliding, and it is what allows qubits to explore multiple answers at once when running algorithms. For example, it's like if throwing multiple basketballs at a hoop could make the balls either more likely or less likely to go in, based on how the different basketball routes interact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entanglement&lt;/strong&gt;&lt;br&gt;
When quantum particles interact, they become linked so that what happens to one particle affects the other, even if they are very far apart. It's like having two basketballs that always spin in perfect opposite directions - if you see one spinning clockwise, you know the other far away basketball is spinning counterclockwise. This link between quantum particles is kept even over long distances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Measurement&lt;/strong&gt;&lt;br&gt;
In quantum mechanics, the act of measurement forces particles to pick specific states, breaking superpositions. This causes qubits to yield definite classical values that we can read out. It's like how opening the box to look at Schrödinger's cat forces it to be either dead or alive. Similarly, measuring a quantum particle's position forces it to pick a definite position, even if before measuring it was in multiple states at once. This step is how algorithms extract their final answers from qubits.&lt;/p&gt;

&lt;p&gt;So, quantum mechanics allows for strange behaviors not seen in our everyday world. Quantum computing taps into these traits to make powerful calculations in new ways not possible with regular computers.&lt;/p&gt;
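&lt;p&gt;The superposition and measurement principles above can be sketched in a few lines of plain Python: a qubit's state is a pair of amplitudes, and measuring it samples 0 or 1 with probability equal to each amplitude squared. This is only an illustrative classical simulation, not real quantum behavior, and the &lt;code&gt;measure&lt;/code&gt; helper is a made-up name for this sketch.&lt;/p&gt;

```python
import random

# An equal superposition of |0> and |1>: amplitudes 1/sqrt(2) each
amp0 = amp1 = 2 ** -0.5

def measure(amp0, amp1):
    """Collapse the superposition: return 0 or 1 with
    probability |amp0|^2 and |amp1|^2 respectively."""
    return 0 if random.random() < abs(amp0) ** 2 else 1

# Measure many identically prepared qubits
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(amp0, amp1)] += 1
print(counts)  # roughly a 50/50 split between 0 and 1
```

&lt;p&gt;Each individual measurement gives a definite 0 or 1, just as the Schrödinger's cat analogy says; only the statistics over many runs reveal the underlying superposition.&lt;/p&gt;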

&lt;h2&gt;
  
  
  Core Concepts:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Qubit&lt;/strong&gt; - The basic unit of information in a quantum computer. Analogous to a classical bit, but can exist in a superposition of 0 and 1.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quantum Gates&lt;/strong&gt; - Operations that manipulate qubits. They are the basic logic gates, analogous to AND, OR, NOT gates in classical computing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quantum Circuits&lt;/strong&gt; - A series of interconnected quantum gates that perform a computation on the qubits. The quantum equivalent of classical circuits or programs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quantum Algorithms&lt;/strong&gt; - Step-by-step procedures to solve a problem on a quantum computer. Algorithms like Shor's and Grover's take advantage of quantum principles to gain speedups over classical algorithms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quantum Protocols&lt;/strong&gt; - Procedures that use quantum effects like entanglement to enable secure communication. For example, quantum key distribution protocols for cryptography.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quantum Hardware&lt;/strong&gt; - The physical implementation of quantum computers, including the qubits and infrastructure to manipulate them. Current hardware includes superconducting qubits, ion traps, and photonics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quantum Error Correction&lt;/strong&gt; - Techniques to detect and account for errors in noisy quantum systems to make reliable, scalable quantum computers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Future of Quantum Computing:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Google's Quantum AI team has developed Cirq, an open-source framework for writing quantum programs in Python. They are a major player in quantum computing research and development.&lt;/li&gt;
&lt;li&gt;Microsoft, IBM, Intel, AWS, Honeywell, and other big tech companies also have major quantum computing programs and are racing to build functional quantum computers.&lt;/li&gt;
&lt;li&gt;Banks like Goldman Sachs, JP Morgan, and Barclays are collaborating with quantum startups to explore using quantum computing for risk analysis, trading, and portfolio optimization.&lt;/li&gt;
&lt;li&gt;Aerospace companies like Airbus, Lockheed Martin, and Boeing are partnering with quantum companies to apply quantum computing to complex aerodynamic simulations and aircraft design challenges.&lt;/li&gt;
&lt;li&gt;Biotech and pharmaceutical companies are considering using quantum computing to speed up drug discovery and molecular simulations.&lt;/li&gt;
&lt;li&gt;Volkswagen and Daimler are working on using quantum computers for optimizing traffic flow and developing new battery materials for electric vehicles.&lt;/li&gt;
&lt;li&gt;The UK, EU, China, and other countries have major quantum computing research initiatives and are investing heavily in developing quantum technologies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, in summary, big companies across many industries, as well as governments worldwide, see the potential of quantum computing and are actively collaborating to turn that potential into reality in the coming years.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t6aydvbp72jftd7mm26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t6aydvbp72jftd7mm26.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>quantum</category>
      <category>computerscience</category>
      <category>beginners</category>
      <category>python</category>
    </item>
    <item>
      <title>Embarking on the DevOps Journey</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Sat, 16 Sep 2023 13:05:43 +0000</pubDate>
      <link>https://dev.to/azaynul10/embarking-on-the-devops-journey-58de</link>
      <guid>https://dev.to/azaynul10/embarking-on-the-devops-journey-58de</guid>
      <description>&lt;h2&gt;
  
  
  The Journey to DevOps and Cloud Adoption
&lt;/h2&gt;

&lt;p&gt;DevOps and cloud computing are among the most talked-about topics in tech today. DevOps is a software development approach that emphasizes collaboration and communication between development and operations teams to improve the speed and quality of software delivery. However, many people struggle to understand what DevOps truly entails and how to effectively adopt cloud technologies. In this post, I will walk through the key concepts and challenges around implementing DevOps and cloud solutions. &lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Local and Cloud Environments
&lt;/h2&gt;

&lt;p&gt;I know that setting up robust development and testing environments can be a major pain point for many students and instructors. Cloud computing is a key enabler of DevOps, providing on-demand access to computing resources and enabling teams to quickly provision and scale infrastructure. DevOps itself involves automating the software development lifecycle, from building and testing to deployment and maintenance, using various tools and technologies. A common early question when setting up a lab environment is whether to build it on a local machine or in the cloud, typically on a Linux operating system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Navigating the Linux CLI
&lt;/h2&gt;

&lt;p&gt;The Linux command line is notoriously challenging to learn for beginners. But it's a critical skill for DevOps, since Linux is the go-to OS for the majority of production servers. Students and instructors alike struggle to become fluent in Linux administration solely from books or lectures. The CLI is best learned by troubleshooting real issues through trial and error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnimpsevq0g9sn8jn5p5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnimpsevq0g9sn8jn5p5l.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Networking and Connectivity Headaches
&lt;/h2&gt;

&lt;p&gt;Connecting applications and services together can produce all sorts of headaches. Developers must choose the right IP addresses, ports, DNS settings, and more. Virtual machines and containers add network layers that can make connectivity erratic. These networking gremlins can derail projects and damage productivity. Networking can be challenging, especially when it comes to getting VMs to communicate with each other or establish an internet connection. DNS-related problems can also be a hurdle. &lt;/p&gt;

&lt;h2&gt;
  
  
  Working with Diverse Languages and Platforms
&lt;/h2&gt;

&lt;p&gt;Troubleshooting applications developed on various platforms such as Java, Python, and Node.js can be daunting, and students often need help with this. Setting up web and application servers requires dealing with database connectivity errors and troubleshooting, among other things. Developers must configure each platform correctly and integrate components built with different tech stacks. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuv9iic90fn05tibf1t8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuv9iic90fn05tibf1t8.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting YAML, JSON, and Config Files
&lt;/h2&gt;

&lt;p&gt;Infrastructure-as-code tools like Kubernetes, Docker, and Terraform rely heavily on YAML, JSON, and domain-specific language files. But if you're not familiar with these formats, it's easy to bungle the syntax and introduce bugs. Developers then waste precious time troubleshooting config files rather than building features.&lt;/p&gt;

&lt;p&gt;When working with Kubernetes, you may encounter many YAML and JSON files for the first time. If someone directs you to download a GitHub repository, you may be unsure how to install or configure a programming language on your system. To create and maintain their labs, many students require assistance with troubleshooting and networking with Linux. Before transitioning to DevOps, you must first understand what Build entails. Beginners may face difficulties accessing web servers, such as determining which IP addresses and ports to utilize.&lt;/p&gt;
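&lt;p&gt;One low-tech way to stop losing time to malformed config files is to validate them with a short script before applying them. Here is a sketch using only Python's standard-library &lt;code&gt;json&lt;/code&gt; module (the helper name &lt;code&gt;check_json_config&lt;/code&gt; is made up for this example; YAML would need the third-party PyYAML package, so it is only mentioned in a comment):&lt;/p&gt;

```python
import json

def check_json_config(text):
    """Return (True, parsed data) if text is valid JSON,
    else (False, an error message with line/column info)."""
    try:
        return True, json.loads(text)
    except json.JSONDecodeError as err:
        return False, f"line {err.lineno}, column {err.colno}: {err.msg}"

good = '{"image": "nginx:1.25", "replicas": 3}'
bad = '{"image": "nginx:1.25", "replicas": }'   # missing value

print(check_json_config(good))   # (True, {'image': ..., 'replicas': 3})
print(check_json_config(bad))    # (False, 'line 1, column ...')
# For YAML files, yaml.safe_load from PyYAML raises yaml.YAMLError similarly.
```

&lt;p&gt;Running a check like this (or a linter such as yamllint) before &lt;code&gt;kubectl apply&lt;/code&gt; turns a mysterious deployment failure into a precise line-and-column error.&lt;/p&gt;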

&lt;h2&gt;
  
  
  Collaboration with Git and GitHub
&lt;/h2&gt;

&lt;p&gt;DevOps emphasizes collaboration and communication between development and operations teams, as well as other stakeholders like business analysts and quality assurance teams, to ensure everyone is aligned and working towards the same goals.&lt;/p&gt;

&lt;p&gt;Imagine you have an idea that could change the world. You start coding, but your code is only accessible at HTTP localhost 8080. You try to share it using your laptop, but when you shut it down, no one can access it. To make it accessible 24/7, you need to host it on a server. However, you can't just copy the code and run it - you need to configure the system and have the necessary programming languages or runtimes in place. Your laptop becomes your development environment and the server hosts your application. &lt;/p&gt;

&lt;p&gt;When multiple people work on the same code, it can create conflicts. Git solves this problem by allowing everyone to work on the same application at the same time and collaborate efficiently. All users must install and configure Git. A central hub, a cloud-based platform, then acts as a single home for all the code. GitHub is a popular publicly hosted Git-based central repository that lets you configure project organizations and define different access levels for users. Other platforms like GitLab and Bitbucket are also available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Automation
&lt;/h2&gt;

&lt;p&gt;The process of compiling code into executable programs is known as the "build". The build operation is typically offloaded to a dedicated build server, which obtains the most recent version of the code and compiles it before it is deployed to production. Automating builds improves efficiency and reduces mistakes from manual processes. Build tools like Maven, Gradle, and Make take source code and package it for deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing and Staging Environments
&lt;/h2&gt;

&lt;p&gt;However, deploying a new build to the production environment carries the risk of introducing bugs or breaking previously functioning components. Before releasing to production, code must be rigorously tested. Test environments replicate the quirks of production without impacting real users. Staging environments mirror the final production configuration to catch integration issues. Neglecting testing is asking for trouble.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Integration/Continuous Delivery (CI/CD)
&lt;/h2&gt;

&lt;p&gt;As the code base and feature set grow, manual deployment can stretch into a week-long ordeal. That's where CI/CD comes in: continuous integration and continuous delivery, supported by tools like Jenkins, GitHub Actions, and GitLab CI/CD. These tools automate the manual tasks and build a pipeline for you. &lt;/p&gt;

&lt;h2&gt;
  
  
  Container Orchestration with Kubernetes
&lt;/h2&gt;

&lt;p&gt;As more users join, spikes in load can disrupt production. Container platforms like Docker allow packaging applications into lightweight containers. This is where container orchestration platforms come in: platforms like Kubernetes help manage and deploy containerized applications, ensuring they run consistently and reliably across different environments and scaling them up or down as load changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure Provisioning with Terraform
&lt;/h2&gt;

&lt;p&gt;Spinning up servers, networks and cloud resources is slow, manual and prone to human error. Infrastructure-as-code tools like Terraform automate provisioning by treating infrastructure like software. Terraform codifies and versions infrastructure configurations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiaf5sx8k1ddih9qzrpb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiaf5sx8k1ddih9qzrpb.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration Management with Ansible
&lt;/h2&gt;

&lt;p&gt;Once infrastructure is provisioned, it must be configured and kept up-to-date. Ansible is a popular automation tool for handling administrative tasks like installing software packages, changing configs, and updating dependencies across servers. Terraform is primarily used for infrastructure provisioning, while Ansible is used for post-configuration tasks like software installation and server configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvettnxaq0jrvu7vmtgjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvettnxaq0jrvu7vmtgjc.png" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing the DevOps Infinity Loop
&lt;/h2&gt;

&lt;p&gt;Organizations can implement the DevOps Infinity Loop to achieve faster, higher-quality, and more reliable software application delivery. The phases of planning, development, testing, deployment, and monitoring can be executed in a continuous feedback loop. Collaboration, communication, and automation between software development and IT operations teams should also be emphasized to facilitate continuous improvement. The DevOps Infinity Loop is an effective way to achieve faster, higher-quality, and more reliable software application delivery. It can be used as a guide for your organization's software development process, with an emphasis on continuous improvement. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu8s24n0g38ipt5izjf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu8s24n0g38ipt5izjf8.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The combination of tools and techniques above comprise the modern DevOps philosophy. Adopting DevOps requires rethinking processes, culture and technology. But those able to navigate the journey are rewarded with software delivery that is robust, rapid, and resilient.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
      <category>python</category>
      <category>github</category>
    </item>
    <item>
      <title>Big O Notation</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Thu, 03 Aug 2023 19:04:41 +0000</pubDate>
      <link>https://dev.to/azaynul10/big-o-notation-1734</link>
      <guid>https://dev.to/azaynul10/big-o-notation-1734</guid>
      <description>&lt;h2&gt;
  
  
  What is Big O Notation?
&lt;/h2&gt;

&lt;p&gt;Big O notation shows how an algorithm's performance scales with input size. Time complexity, the related concept, describes how the runtime of a function grows as its input grows. Big O matters in technical interviews and when comparing code, as it measures the number of operations performed and, separately, the space consumed. Understanding it helps you build faster and more efficient applications.&lt;/p&gt;

&lt;p&gt;Let's look at some of the notations used to describe runtime algorithms like Big O Notation.&lt;/p&gt;

&lt;p&gt;For example: Suppose we want to buy a car and want to know how many liters it takes to drive 100 miles. Depending on various factors, the performance of a car can vary. For instance, if you drive on a highway, it may consume 10 liters of fuel, while in city traffic it might consume 20 liters, and under mixed conditions, it may consume 15 liters to cover a distance of 100 miles.&lt;/p&gt;

&lt;p&gt;Algorithm performance is measured using best, worst, and average case scenarios. The notations Omega (best case), Theta (average case), and Big O (worst case) are used to represent these. Big O, the worst-case bound, is the one most commonly used in industry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Big O - O(1)
&lt;/h2&gt;

&lt;p&gt;When an algorithm has a "constant time complexity," it means that it takes the same amount of time to run, regardless of the input size. This is determined by analyzing how the number of operations changes with input size. For instance, if a function is designed to "multiply numbers," and it only performs one operation, it will have constant time complexity. One way to analyze it is by using a deck of cards. Removing the first card at random is a task that takes constant time, as it doesn't require searching through the entire deck. The graph for O(1) time complexity is a flat line across the bottom, making it the most efficient big O time complexity. This means that no matter how many elements there are, the number of operations remains constant.&lt;/p&gt;
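&lt;p&gt;A minimal sketch of constant time in Python: grabbing the first card from a deck (a list index) costs the same single operation no matter how big the deck is.&lt;/p&gt;

```python
def draw_first_card(deck):
    # One indexing operation regardless of len(deck): O(1)
    return deck[0]

print(draw_first_card(["10♥", "A♠", "7♦"]))        # 10♥
print(draw_first_card(list(range(1_000_000))))     # 0 (same cost for a huge input)
```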

&lt;h2&gt;
  
  
  Big O - O(n)
&lt;/h2&gt;

&lt;p&gt;O(n) time complexity means that a function's running time grows in direct proportion to the size of the input. A simple function that loops from zero to n has linear time complexity: when we pass it the number n, it performs the operation n times. This is what O(n) represents.&lt;/p&gt;

&lt;p&gt;To illustrate, let's return to the deck of cards. Suppose we want to find a specific card, say the ten of hearts. We would need to go through each card until we locate it. It could be the first card, but that is unlikely; and if the deck were padded with hundreds of other cards, none of them the ten of hearts, our search time would grow in direct proportion to the size of the deck. That is linear time, O(n). On a graph of operations against input size, O(1) is a flat line while O(n) is a straight diagonal line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcgdq77ufx54ubknwsp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcgdq77ufx54ubknwsp1.png" alt="Image description" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Drop Constants
&lt;/h2&gt;

&lt;p&gt;Dropping constants means ignoring specific numbers in analysis because they don't matter as input size increases.&lt;/p&gt;

&lt;p&gt;Let me give you an example. Imagine we have an algorithm that takes n units of time to run. If we double the input size, it will take roughly 2n units of time. Triple the input size, and it will take about 3n units of time. Notice how the constant (2 or 3) doesn't change the overall pattern of the algorithm's growth.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def print_items(n):
    for i in range(n):
        print(i)
    for j in range(n):
        print(j) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Big O notation is a method of analyzing algorithms that focuses on the variable n, which is the most significant factor. By dropping constants, we can compare various algorithms and determine how their efficiency changes with input size.&lt;/p&gt;

&lt;h2&gt;
  
  
  Big O - O(n^2)
&lt;/h2&gt;

&lt;p&gt;One way to compare every number in a list with every other number is nested loops. Begin by comparing the first number with all the others, then move on to the second number and repeat the process until you reach the last number. Each time two numbers are compared, an operation is performed.&lt;/p&gt;

&lt;p&gt;The number of operations needed for a list of n numbers is n^2: the outer loop runs n times, and for each of those iterations the inner loop also runs n times. The operation count therefore grows rapidly as the list gets bigger. Nested loops whose iteration counts depend on the input size have a time complexity of O(n^2), which is generally considered inefficient.&lt;/p&gt;
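&lt;p&gt;The pairwise comparison described above looks like this in Python; the counter makes the n * n growth visible. The function name is made up for this sketch.&lt;/p&gt;

```python
def count_comparisons(numbers):
    """Compare every element against every element with nested
    loops: the outer loop runs n times and the inner loop n
    times per outer iteration, so n * n operations in total."""
    comparisons = 0
    n = len(numbers)
    for i in range(n):
        for j in range(n):
            _ = numbers[i] > numbers[j]   # one comparison operation
            comparisons += 1
    return comparisons

print(count_comparisons([3, 1, 4, 1, 5]))  # 25 comparisons for n = 5
```

&lt;p&gt;Doubling the list from 5 to 10 elements quadruples the count from 25 to 100, which is exactly the O(n^2) growth pattern.&lt;/p&gt;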

&lt;h2&gt;
  
  
  Dropping Non-Dominant Terms
&lt;/h2&gt;

&lt;p&gt;When a function contains both an O(n^2) part and an O(n) part, its total time complexity is O(n^2 + n). As the input grows, the n^2 term dominates and the extra n becomes insignificant by comparison. So we drop the non-dominant term and simplify to O(n^2), meaning the running time grows in proportion to the square of the number of elements. Always drop non-dominant terms when simplifying.&lt;/p&gt;

&lt;h2&gt;
  
  
  Big O - O(log n)
&lt;/h2&gt;

&lt;p&gt;To search for a number in a sorted array, use divide and conquer. It's faster than linear search, with a time complexity of O(log n).&lt;/p&gt;

&lt;p&gt;A more efficient approach to finding a target in an array of numbers is the divide-and-conquer method. This method only requires log2(n) steps, with n representing the size of the array.&lt;/p&gt;

&lt;p&gt;A real-world example of a logarithmic search is finding a specific card in an ordered deck. Suppose we are looking for the ten of hearts in a deck ordered by suit: diamonds, clubs, hearts, and spades. To find the target card, we first cut the deck in half to narrow the search to the portion containing hearts. We then halve that portion again to narrow it down to just the hearts suit, and keep halving the remaining pile until the ten of hearts is found. This divide-and-conquer approach finds the target card in about log n steps, compared to a linear search through every card.&lt;/p&gt;
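&lt;p&gt;The card-deck halving described above is exactly binary search. Here is a sketch on a sorted list that also counts its steps, to show how few it needs; the step counter is added purely for illustration.&lt;/p&gt;

```python
def binary_search(sorted_items, target):
    """Divide and conquer on a sorted list: O(log n) steps."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # halve the search range each step
        if sorted_items[mid] == target:
            return mid, steps
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps                  # target not present

index, steps = binary_search(list(range(1_000_000)), 765_432)
print(index, steps)  # found in at most ~20 steps, not 765,433
```

&lt;p&gt;A linear search through a million items could need up to a million comparisons; binary search never needs more than about log2(1,000,000) ≈ 20.&lt;/p&gt;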

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsitbp9l5opgefo6f912i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsitbp9l5opgefo6f912i.png" alt="Image description" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Space Complexity
&lt;/h2&gt;

&lt;p&gt;Calculating the space complexity of an algorithm means determining how much additional memory it needs. Recursive functions often use O(n) space (one stack frame per call), but not all functions need that much: a pair-sum function can run in O(1) space. Understanding space complexity helps optimise memory usage.&lt;/p&gt;

&lt;p&gt;As a final note on time, a logarithmic algorithm is more efficient than a linear one: its graph is much flatter, which makes it better for searching for specific items in large datasets.&lt;/p&gt;

&lt;p&gt;When analyzing time complexity with multiple inputs, it's crucial to recognize that the inputs may differ in size. For sequential loops, time complexities are added; for nested loops, they are multiplied.&lt;br&gt;
A nested loop over two inputs takes O(a*b) time.&lt;br&gt;
The pattern is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do this, then do that → add time complexities&lt;/li&gt;
&lt;li&gt;Do this FOR EACH time you do that → multiply time complexities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Interview questions often test whether you understand time complexity with multiple inputs.&lt;/p&gt;
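&lt;p&gt;The add-versus-multiply pattern above in code: the sequential version does a + b operations, O(a + b), while the nested version does a * b operations, O(a * b). The function names are made up for this sketch.&lt;/p&gt;

```python
def sequential(a, b):
    # "Do this, then do that": add the costs -> O(a + b)
    ops = 0
    for _ in range(a):
        ops += 1
    for _ in range(b):
        ops += 1
    return ops

def nested(a, b):
    # "Do this FOR EACH time you do that": multiply -> O(a * b)
    ops = 0
    for _ in range(a):
        for _ in range(b):
            ops += 1
    return ops

print(sequential(100, 50))  # 150 operations
print(nested(100, 50))      # 5000 operations
```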

&lt;p&gt;There are standard rules to follow when calculating Big O for code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assignments and if statements are O(1)&lt;/li&gt;
&lt;li&gt;Simple loops are O(n)&lt;/li&gt;
&lt;li&gt;Nested loops are O(n^2)&lt;/li&gt;
&lt;li&gt;Dividing loop counter by 2 is O(log n)&lt;/li&gt;
&lt;li&gt;Finally, when dealing with multiple statements, we need to add them together. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkvoio6o869iow4g1k11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkvoio6o869iow4g1k11.png" alt="Image description" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can employ an iterative algorithm to find the largest number in an array. The function begins by assigning the first element to a variable and then iterates through the rest of the array.&lt;br&gt;
Inside the loop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check if the current element is bigger than the largest so far&lt;/li&gt;
&lt;li&gt;If so, assign the current element to the largest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the loop is complete, the program outputs the largest number in the array. This is a straightforward approach for identifying the maximum value. The question is how to analyze its time complexity using Big O:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The initial assignment is O(1)&lt;/li&gt;
&lt;li&gt;The loop is O(n)&lt;/li&gt;
&lt;li&gt;The if statement inside the loop is O(1)&lt;/li&gt;
&lt;li&gt;The assignment in the loop is O(1)&lt;/li&gt;
&lt;li&gt;The print is O(1)&lt;/li&gt;
&lt;li&gt;Sum: O(1) + O(n) + O(1)&lt;/li&gt;
&lt;li&gt;Drop the non-dominant O(1) terms&lt;/li&gt;
&lt;li&gt;Final complexity: O(n)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These rules help you analyze time complexity systematically.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def findBIggestNumber(sampleArray):
    biggestNumber = sampleArray[0]
    for index in range(1,len(sampleArray)):
        if sampleArray[index] &amp;gt; biggestNumber:
            biggestNumber = sampleArray[index]
    print(biggestNumber) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can follow this repository; it shows how these time complexities appear in code: &lt;a href="https://github.com/azaynul10/The-Complete-Data-Structures-and-Algorithms-Course-in-Python/blob/main/timeComplexities.py" rel="noopener noreferrer"&gt;https://github.com/azaynul10/The-Complete-Data-Structures-and-Algorithms-Course-in-Python/blob/main/timeComplexities.py&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dsa</category>
      <category>beginners</category>
      <category>programming</category>
      <category>python</category>
    </item>
    <item>
      <title>Introduction To Data Structure And Algorithms</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Sun, 16 Jul 2023 06:02:22 +0000</pubDate>
      <link>https://dev.to/azaynul10/introduction-to-data-structure-and-algorithms-2j7k</link>
      <guid>https://dev.to/azaynul10/introduction-to-data-structure-and-algorithms-2j7k</guid>
      <description>&lt;h2&gt;
  
  
  Data Structure
&lt;/h2&gt;

&lt;p&gt;Data structures are different ways of organising and storing data on a computer. Organised data improves efficiency and performance, so the choice of data structure directly affects how well a piece of software or an application performs. Big companies like Google, Apple, Amazon, and Facebook ask candidates about data structures and algorithms during their interview processes, and a professional software developer must know which data structure to choose for a particular app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Some Examples of Data Structure
&lt;/h2&gt;

&lt;p&gt;Suppose you were given a pile of wood and wanted to pick out the black pieces. If the wood is not organised, finding the black pieces is difficult and time-consuming. But if we arrange the wood in a sequence, choosing the black pieces becomes easier and quicker. &lt;/p&gt;

&lt;p&gt;Let's look at another example: suppose a crowd of people wants tickets for a concert. Selling tickets becomes almost impossible if the people aren't organised into a line, where the first to arrive is the first served. This is exactly the queue data structure.&lt;/p&gt;
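
&lt;p&gt;The ticket line maps directly onto Python's built-in &lt;code&gt;collections.deque&lt;/code&gt;; this minimal sketch (the names are illustrative) shows the first person to join being the first served:&lt;/p&gt;

```python
from collections import deque

ticket_queue = deque()            # FIFO: first in line is served first
for person in ["Ava", "Bob", "Chen"]:
    ticket_queue.append(person)   # enqueue at the back of the line

served = ticket_queue.popleft()   # dequeue from the front
print(served)                     # Ava
```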

&lt;p&gt;And finally, suppose there is a pile of books on the table that you want to return to the library. If the books are scattered about, they are hard to carry; if we stack them neatly, they become easy to carry and to take off one at a time from the top. This is the idea behind the stack data structure. There are many types of data structures, each with its own advantages and disadvantages.&lt;/p&gt;
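
&lt;p&gt;The pile of books behaves like a stack, which a plain Python list already models with &lt;code&gt;append&lt;/code&gt; and &lt;code&gt;pop&lt;/code&gt;; a minimal sketch (illustrative book titles):&lt;/p&gt;

```python
book_stack = []                 # LIFO: the last book placed is the first removed
for book in ["Maths", "Physics", "History"]:
    book_stack.append(book)     # push on top of the stack

top = book_stack.pop()          # pop from the top
print(top)                      # History
```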

&lt;h2&gt;
  
  
  Algorithm
&lt;/h2&gt;

&lt;p&gt;An algorithm is a set of instructions that must be carried out in a specific order to produce a desired result. Good algorithms solve problems correctly and efficiently, and the efficiency and performance of a program depend on the choice of algorithm. Algorithms are used in everyday life as well as in computer science: famous companies use them to transmit live video, find the shortest path on a map, and arrange solar panels on the International Space Station. Learning them lets us write time- and memory-efficient programs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example of Algorithm
&lt;/h2&gt;

&lt;p&gt;Let's return to the earlier wood example and suppose we want to use that wood for flooring. We must accomplish several steps. First, we choose the flooring and the colour we will use. Then we purchase it and bring it home. Third, we prepare the subfloor. Next, we determine the layout of the flooring in the space. Finally, we trim the door casing, and our floor is ready. To complete the task we performed a sequence of steps, and this set of instructions is called an algorithm.&lt;/p&gt;

&lt;h2&gt;
  
  
  Importance of Data Structure and Algorithm
&lt;/h2&gt;

&lt;p&gt;The efficiency and performance of a program depend on the choice of data structure and algorithm. Companies ask questions about data structures and algorithms during interviews to evaluate a candidate's problem-solving ability and grasp of fundamental programming concepts. Learning about data structures and algorithms allows us to write time- and memory-efficient programs.&lt;/p&gt;

&lt;p&gt;In the third data structure example we looked at books: it is nearly impossible to find the book we need if the collection isn't organised. A librarian manages books in a particular arrangement precisely so that tasks can be performed on them efficiently. Here the books are the data, arranging them is the data structure, and finding the book we need is an algorithm. The two concepts always work together. &lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Data Structure
&lt;/h2&gt;

&lt;p&gt;There are different types of data structures in Python, which can be classified as &lt;strong&gt;primitive&lt;/strong&gt; or &lt;strong&gt;non-primitive&lt;/strong&gt;. Primitive data structures include integers, floats, strings, and Booleans, while non-primitive data structures include lists, tuples, arrays, dictionaries, sets, linked lists, stacks, queues, trees, graphs, and hashmaps. Linear data structures are arranged in a sequential order, while non-linear data structures are not. Built-in data structures are included in Python, while user-defined data structures are created by users. The choice of data structure depends on the specific task and its efficiency and performance.&lt;/p&gt;
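
&lt;p&gt;A quick sketch of some of the built-in Python structures mentioned above (the values are purely illustrative):&lt;/p&gt;

```python
# Built-in Python data structures (illustrative values)
scores = [70, 85, 90]               # list: ordered and mutable
point = (3, 4)                      # tuple: ordered and immutable
ages = {"Ana": 30, "Ben": 25}       # dictionary: key-value lookup
tags = {"python", "dsa", "python"}  # set: keeps unique elements only

print(len(tags))  # 2, because the duplicate "python" collapses
```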

&lt;h2&gt;
  
  
  &lt;strong&gt;Types Of Algorithms&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are different types of algorithms in Python, including sorting, searching, graph, dynamic programming, divide-and-conquer, and recursive algorithms.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sorting algorithms&lt;/strong&gt; arrange data in ascending or descending order.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Searching algorithms&lt;/strong&gt; find a specific value in a data set.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graph algorithms&lt;/strong&gt; work with data represented as a graph.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic programming&lt;/strong&gt; algorithms solve problems by breaking them down into overlapping subproblems and reusing their solutions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Divide and conquer algorithms&lt;/strong&gt; solve problems by breaking them down into smaller subproblems and combining the results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recursive algorithms&lt;/strong&gt; solve problems by breaking them down into smaller subproblems of a similar nature.&lt;/li&gt;
&lt;li&gt;Different types of algorithms have unique properties that work efficiently in different circumstances.&lt;/li&gt;
&lt;/ul&gt;
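
&lt;p&gt;As a concrete taste of a searching algorithm that also divides and conquers, here is a minimal binary search sketch (my own example, assuming the input list is already sorted):&lt;/p&gt;

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2        # split the search range in half
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1              # discard the lower half
        else:
            high = mid - 1             # discard the upper half
    return -1

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # 3
```

Each step halves the remaining range, so the search takes O(log n) comparisons instead of the O(n) a linear scan would need.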

</description>
      <category>dsa</category>
      <category>python</category>
      <category>career</category>
      <category>algorithms</category>
    </item>
    <item>
      <title>Fine-tuning LLMs with instruction</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Thu, 13 Jul 2023 13:40:12 +0000</pubDate>
      <link>https://dev.to/azaynul10/fine-tuning-llms-with-instruction-3foo</link>
      <guid>https://dev.to/azaynul10/fine-tuning-llms-with-instruction-3foo</guid>
      <description>&lt;p&gt;Fine-tuning is a way to improve a pre-trained language model (LLM) by using labelled examples. It helps the model generate better completions for a specific task. This process helps the model improve its behaviour for specific tasks, like instruction fine-tuning. Instruction fine-tuning trains the model using examples that show how it should respond to a specific instruction. The process creates a new version of the model with updated weights, known as an instruct model, that is more suitable for the tasks you want to do.&lt;/p&gt;

&lt;p&gt;You can use prompt template libraries to turn your current datasets into instruction prompt datasets for fine-tuning. The most common method to fine-tune LLMs is instruction fine-tuning. It involves creating a dataset of prompt-completion pairs and dividing it into training, validation, and test sets. The model's weights are then updated using backpropagation and cross-entropy loss. Fine-tuning improves the base model to make it better for the tasks we want to do. Fine-tuning on one task can cause catastrophic forgetting, where the model forgets how to perform other tasks. To prevent this problem, you can fine-tune across multiple kinds of instructions or use parameter-efficient fine-tuning (PEFT) techniques. PEFT tunes models for specific tasks with minimal memory usage: it keeps the original model's weights frozen and only trains a few adapter layers and parameters for each task. This method is more resistant to catastrophic forgetting and is an area of active research.&lt;/p&gt;

&lt;p&gt;LoRA is a technique that uses low-rank matrices to achieve good performance while using less computational power and memory. Efficient fine-tuning techniques improve performance when prompting reaches its limit.&lt;/p&gt;
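
&lt;p&gt;The low-rank idea behind LoRA can be illustrated with a toy sketch (plain Python with illustrative numbers, not the actual LoRA implementation): the original weight matrix W stays frozen, and only two small matrices B and A are trained, whose product forms the weight update:&lt;/p&gt;

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, r = 4, 1                      # full dimension vs. low rank (r much smaller than d)
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen weights
B = [[0.5] for _ in range(d)]    # trainable d x r matrix
A = [[0.1, 0.2, 0.3, 0.4]]       # trainable r x d matrix

delta = matmul(B, A)             # rank-r update built from the two small matrices
W_eff = [[w + dw for w, dw in zip(w_row, d_row)]
         for w_row, d_row in zip(W, delta)]

# Full fine-tuning would train d*d = 16 values; here we train d*r + r*d = 8.
trainable = d * r + r * d
print(trainable)  # 8
```

With realistic dimensions (d in the thousands, r around 8), the trainable-parameter saving is far more dramatic than in this toy case.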

&lt;p&gt;Evaluation steps involve measuring the model's performance by using validation and test datasets to determine its accuracy. Using multitask fine-tuning on multiple tasks can help the model stay versatile, but it needs more data and computing resources. Prompt templates are used to give general instructions for different tasks. We can make the model perform better on specific tasks by using domain-specific datasets. Evaluation metrics and benchmarks are used to measure the quality of the model's completions and compare the fine-tuned version with the base model.&lt;/p&gt;

&lt;p&gt;To understand the capabilities of language models, we need evaluation metrics. Researchers use existing datasets and benchmarks to measure and compare the models. Selecting the appropriate evaluation dataset is extremely important. It is essential to consider the specific skills of the model and any potential risks involved. Assessing LLM performance on new data gives a more accurate evaluation. GLUE, SuperGLUE, HELM, and BIG-bench are benchmarks that measure and compare model performance for various tasks and scenarios. Leaderboards and results pages help track progress in LLM. MMLU and BIG-bench evaluate LLMs on tasks in various fields like law, software development, and biology.&lt;/p&gt;

&lt;p&gt;HELM benchmarks use a multimetric approach. They measure seven metrics across 16 core scenarios, including fairness, bias, and toxicity. Evaluators can check the HELM results page to find LLMs that have been evaluated based on specific scenarios and metrics that are relevant to their requirements.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://arxiv.org/abs/2210.11416" rel="noopener noreferrer"&gt;paper&lt;/a&gt; presents FLAN, a method for fine-tuning instructions, and explains how it can be used. FLAN improves generalisation, human usability, and zero-shot reasoning by fine-tuning the 540B PaLM model on 1836 tasks and incorporating Chain-of-Thought Reasoning data. The study evaluates each aspect.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>llm</category>
      <category>deeplearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>LLM pre training and scaling law</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Fri, 07 Jul 2023 06:12:34 +0000</pubDate>
      <link>https://dev.to/azaynul10/llm-pre-training-and-scaling-law-8ee</link>
      <guid>https://dev.to/azaynul10/llm-pre-training-and-scaling-law-8ee</guid>
<description>&lt;p&gt;This blog covers the first module of the Coursera course "Generative AI with Large Language Models" by the AWS team. When choosing a model for generative AI applications, there are two options: selecting an existing model or creating one from scratch. Open-source models and model hubs like Hugging Face and PyTorch can be helpful tools for AI developers. Pre-training is essential to help large language models learn to recognise patterns and structures in human language. Autoencoding models, such as BERT and RoBERTa, look at text in both directions and can be used for tasks like sentiment analysis and named entity recognition. Autoregressive models, like GPT and BLOOM, generate text by looking at the words that came before it. Sequence-to-sequence models, such as T5 and BART, use both an encoder and a decoder to perform tasks like translation and summarisation.&lt;/p&gt;

&lt;p&gt;The size of a model affects how well it can perform, but it is difficult and costly to train larger models. Frameworks like Hugging Face and PyTorch have model hubs that offer resources and model cards, providing information about use cases, training methods, and limitations of the models. Different types of transformer models, such as encoder-only, decoder-only, and sequence-to-sequence models, are used for different tasks depending on what they are trained to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Efficient multi-GPU compute strategies
&lt;/h2&gt;

&lt;p&gt;Large language models (LLMs) need a lot of GPU RAM to store their parameters, and the requirement grows even higher during training because of extra components. Quantization saves memory by reducing the precision of the model weights, using data types like FP16, Bfloat16, or INT8. With billions or hundreds of billions of parameters, LLMs cannot be trained on a single GPU, so distributed computing techniques are needed. Fine-tuning, which comes after pre-training, keeps all the training parameters in memory and may also require distributed computing.&lt;/p&gt;
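
&lt;p&gt;The idea behind INT8 quantization can be sketched in a few lines (a simplified symmetric, per-tensor scheme of my own; real frameworks are considerably more involved):&lt;/p&gt;

```python
def quantize_int8(weights):
    """Map float weights onto integer levels -127..127, plus the scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the integers and the scale."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.05, 0.31]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Each weight now fits in 1 byte instead of 4 (FP32), at a small precision cost.
print(q)  # [82, -127, 5, 31]
```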

&lt;p&gt;Using more than one GPU is important for training large models and can also be beneficial for smaller models. Techniques like Distributed Data-Parallel (DDP) for small models and Fully Sharded Data-Parallel (FSDP) for large models can help balance memory usage and communication volume between GPUs. Researchers are studying how to improve the performance of smaller models because training them on multiple GPUs is expensive and technically complex.&lt;/p&gt;

&lt;p&gt;During pre-training, the main aim is to make the model perform as well as possible on its learning task, which means minimising the loss when predicting tokens. There are two ways to improve performance: making the dataset bigger and adding more parameters to the model. However, the compute budget is a restricting factor. A petaFLOP/s-day is a unit often used to measure the compute required for training. The Chinchilla paper suggests that many 100-billion-parameter large language models may be overparameterised and undertrained, so they would benefit from additional training data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Domain Adaption
&lt;/h2&gt;

&lt;p&gt;Domain adaptation is necessary for specialised domains like law and medicine, whose vocabulary and language structures are not commonly used in day-to-day language. In such cases, pretraining a model from scratch on domain data may be needed to achieve good performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefgpv1som1kuu2meg2bf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefgpv1som1kuu2meg2bf.png" alt="Image description" width="617" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  BloombergGPT
&lt;/h2&gt;

&lt;p&gt;BloombergGPT, a financial-focused large language model created by Bloomberg, demonstrates how pre-training can build in domain specificity. The model uses a mix of finance and general-purpose data to understand finance and generate finance-related text. The recommended training dataset size is typically about 20 times the number of parameters in the model, but the team's dataset was smaller because of the limited availability of financial domain data: they wanted 1.4 trillion tokens to train their 50-billion-parameter model, yet could only gather about 700 billion tokens, fewer than the compute-optimal amount. Pretraining can improve domain specificity, but such constraints may force trade-offs between compute-optimal configurations and what is practically achievable.&lt;/p&gt;
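
&lt;p&gt;The arithmetic behind these numbers is easy to check; this sketch uses the figures from the paragraph above (the variable names are my own):&lt;/p&gt;

```python
# Figures from the text: 50B parameters, 1.4T tokens desired, 700B tokens found.
n_parameters = 50e9
tokens_desired = 1.4e12
tokens_found = 700e9

rule_of_thumb = 20 * n_parameters            # ~20 tokens/parameter -> 1e12 tokens
ratio_targeted = tokens_desired / n_parameters  # 28 tokens per parameter targeted
shortfall = tokens_found / tokens_desired       # only half the desired data

print(ratio_targeted, shortfall)  # 28.0 0.5
```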

&lt;p&gt;&lt;strong&gt;Additional resources&lt;/strong&gt;&lt;br&gt;
Here are some of the resource papers you can read:&lt;br&gt;
&lt;a href="https://arxiv.org/abs/2211.05100" rel="noopener noreferrer"&gt;BLOOM&lt;/a&gt; is an open-source language model with 176 billion parameters, similar to GPT-4. It has been trained in a transparent and open manner. The authors discuss the dataset and training process in detail in this paper. You can also view a summary of the model: &lt;a href="https://bigscience.notion.site/BLOOM-BigScience-176B-Model-ad073ca07cdf479398d5f95d88e218c4" rel="noopener noreferrer"&gt;https://bigscience.notion.site/BLOOM-BigScience-176B-Model-ad073ca07cdf479398d5f95d88e218c4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DeepLearning.AI's Natural Language Processing specialisation covers the fundamentals of vector space models and how they are used in language modelling: &lt;a href="https://www.coursera.org/learn/classification-vector-spaces-in-nlp/home/module/3" rel="noopener noreferrer"&gt;https://www.coursera.org/learn/classification-vector-spaces-in-nlp/home/module/3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenAI researchers study scaling laws for large language models: &lt;a href="https://arxiv.org/abs/2001.08361" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2001.08361&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Which language model architecture and pretraining objective are most effective for zero-shot generalisation? The paper looks at different modelling choices in big pre-trained language models and finds the best way to achieve zero-shot generalisation:&lt;br&gt;
&lt;a href="https://arxiv.org/pdf/2204.05832.pdf" rel="noopener noreferrer"&gt;https://arxiv.org/pdf/2204.05832.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/tasks" rel="noopener noreferrer"&gt;HuggingFace&lt;/a&gt; Tasks and &lt;a href="https://huggingface.co/models" rel="noopener noreferrer"&gt;Model Hub&lt;/a&gt; provide resources for machine learning tasks.&lt;/p&gt;

&lt;p&gt;Meta AI's LLaMA paper proposes efficient LLMs; its 13B-parameter model outperforms the 175B-parameter GPT-3 on most benchmarks: &lt;a href="https://arxiv.org/pdf/2302.13971.pdf" rel="noopener noreferrer"&gt;https://arxiv.org/pdf/2302.13971.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This paper explores few-shot learning in LLMs: &lt;a href="https://arxiv.org/pdf/2005.14165.pdf" rel="noopener noreferrer"&gt;https://arxiv.org/pdf/2005.14165.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DeepMind's "Chinchilla Paper" evaluates optimal model size and token count for LLM training: &lt;a href="https://arxiv.org/pdf/2203.15556.pdf" rel="noopener noreferrer"&gt;https://arxiv.org/pdf/2203.15556.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BloombergGPT is a finance-specific LLM trained following the Chinchilla scaling laws, providing a powerful real-world example: &lt;a href="https://arxiv.org/pdf/2303.17564.pdf" rel="noopener noreferrer"&gt;https://arxiv.org/pdf/2303.17564.pdf&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>llm</category>
      <category>ai</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Introduction to LLMs and the generative AI project lifecycle Summary</title>
      <dc:creator>Zaynul Abedin Miah</dc:creator>
      <pubDate>Tue, 04 Jul 2023 18:27:03 +0000</pubDate>
      <link>https://dev.to/azaynul10/introduction-to-llms-and-the-generative-ai-project-lifecycle-summary-1b2p</link>
      <guid>https://dev.to/azaynul10/introduction-to-llms-and-the-generative-ai-project-lifecycle-summary-1b2p</guid>
<description>&lt;p&gt;This blog summarises the first module of the Coursera course "Generative AI with Large Language Models", conducted by the AWS team. Large Language Models (LLMs) are a versatile and widely used technology that can greatly reduce the time needed to develop machine learning and AI applications, and their many uses across industries make them valuable for job opportunities. To follow along, you should be familiar with Python programming, have a basic understanding of data science, and know fundamental machine learning concepts; prior experience with TensorFlow also helps.&lt;/p&gt;

&lt;p&gt;The transformer architecture is a widely used and highly advanced model in many applications. It has been around for a while and is considered state-of-the-art. It is used as a basis for vision transformers and other types of data. During the project lifecycle, you have to make decisions. One decision is whether to use pre-trained models or train custom models. Another decision is figuring out the right model size. Smaller models can still do important tasks well, while larger models are superior for general knowledge.&lt;/p&gt;

&lt;p&gt;LLMs, or Large Language Models, are trained on large datasets to imitate human abilities and show emergent properties. Generative AI tools can match or come close to human performance on tasks such as chatbots, generating images, and developing code. Language models help generate human-like text and are used to solve various tasks in business and social contexts. When you interact with an LLM, you use natural language prompts; the result is called a completion, which combines the prompt and the generated text.&lt;/p&gt;

&lt;p&gt;LLMs are not limited to chat tasks. The same next-word prediction mechanism powers many other uses, such as creating simple chatbots, writing essays, summarising conversations, and translating text.&lt;/p&gt;

&lt;p&gt;By connecting LLMs to external data sources and APIs, they can access more information and interact with the real world. The size of foundation models impacts how well they understand language and perform tasks. You can adjust smaller models to work better for specific tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7rvssb4sxfca34o9nnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7rvssb4sxfca34o9nnp.png" alt="Image description" width="672" height="916"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Transformers Architecture
&lt;/h2&gt;

&lt;p&gt;In 2017, the Transformer model changed the way we process natural language. It uses self-attention to compute representations of input sequences, making it good at capturing long-term dependencies and fast to compute. On multiple machine translation tasks it outperforms earlier models built on RNNs or CNNs, achieving state-of-the-art performance, and it scales well on multi-core GPUs, processing larger training datasets in parallel.&lt;/p&gt;

&lt;p&gt;Tokenization is an important step that converts words into numbers so that they can be processed in the model. The embedding layer is used to represent tokens as vectors in a space with high-dimension. This helps to encode the meaning and context of the tokens. Positional encoding is a way to keep track of the order of words in a sequence. The self-attention layer examines how tokens are related to each other in order to understand their contextual dependencies. To capture different aspects of language, we learn multiple sets of self-attention weights called attention heads.&lt;/p&gt;
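
&lt;p&gt;The self-attention computation described above can be sketched in plain Python (a toy single-head example with illustrative embeddings, not a real model): each token's query is scored against every key, the scores become weights via softmax, and the output is a weighted mix of the values:&lt;/p&gt;

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention for one head (toy dimensions)."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        # How strongly this token attends to every token in the sequence
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        # Blend the value vectors according to the attention weights
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Three tokens, each embedded in 2 dimensions (toy numbers),
# used directly as queries, keys and values.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(x, x, x)
print(len(out), len(out[0]))  # 3 2
```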

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w8ios78i60vbu0gicot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w8ios78i60vbu0gicot.png" alt="Image description" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating text with transformers
&lt;/h2&gt;

&lt;p&gt;Transformer models are versatile and can be used for different tasks, such as classification and generating text. There are different types of transformer architectures, such as encoder-only, encoder-decoder, and decoder-only models. These models have different uses. It is important to have a good understanding of prompt engineering in order to effectively interact with transformer models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompting and prompt engineering
&lt;/h2&gt;

&lt;p&gt;The Transformer model showed impressive abilities in NLP tasks and set the stage for future improvements in language models. Two key techniques for improving a model's outputs are prompt engineering and in-context learning. Prompt engineering is the process of modifying the prompt language to influence how the model behaves, while in-context learning helps the model by providing examples or extra data in the prompt. Zero-shot inference, by contrast, asks a model to perform a task with no examples at all, which larger models handle well. Model size matters for doing many tasks well, and configuration parameters determine how language models generate their output during inference.&lt;/p&gt;

&lt;p&gt;These parameters control things like the maximum number of tokens generated and the level of creativity in the output. The "Max new tokens" option limits how many tokens can be generated. Greedy decoding always chooses the word with the highest probability, which can result in repeated words or sequences, while random sampling introduces variability and reduces repetition. Top-k sampling restricts the choice to the k most probable tokens, and top-p sampling restricts it to the smallest set of tokens whose combined probability exceeds a threshold p. The temperature parameter determines how random the model's output will be.&lt;/p&gt;
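
&lt;p&gt;These decoding controls can be sketched in a few lines of Python (my own illustration, not any particular library's API): temperature rescales the logits before the softmax, and top-k keeps only the k most likely tokens before sampling:&lt;/p&gt;

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, rng=random):
    """Pick a token id from raw logits using temperature and optional top-k."""
    ids = list(range(len(logits)))
    if top_k is not None:
        # Keep only the k highest-scoring token ids
        ids = sorted(ids, key=lambda i: logits[i], reverse=True)[:top_k]
    scaled = [logits[i] / temperature for i in ids]   # temperature rescaling
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]          # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(ids, weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1, -1.0]
greedy = max(range(len(logits)), key=lambda i: logits[i])  # always token 0
sampled = sample_next_token(logits, temperature=0.7, top_k=2)
print(greedy, sampled in (0, 1))  # 0 True
```

Lower temperatures sharpen the distribution towards the greedy choice; higher temperatures flatten it, producing more varied (and more random) output.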

&lt;h2&gt;
  
  
  Generative AI &amp;amp; LLMs
&lt;/h2&gt;

&lt;p&gt;The generative AI project life cycle is a framework for creating and launching an application that uses LLM technology. It is important to accurately define the project scope, taking into account the capabilities and requirements of the model. Deciding between training a model from scratch or using an existing base model is very important. Improving the model's capabilities can be done by assessing its performance and considering prompt engineering or fine-tuning. Using reinforcement learning with human feedback helps ensure that the model behaves appropriately when it is being used.&lt;/p&gt;

&lt;p&gt;Evaluation metrics and benchmarks are used to assess how well a model performs and whether it meets the desired criteria. When we optimise the model for deployment, it helps us use resources efficiently and provides a better experience for users.&lt;/p&gt;

&lt;p&gt;Using an existing model is common practice, but there are certain situations where training from scratch is required. Prompt engineering and fine-tuning help improve the performance of the model, and reinforcement learning adds extra control. Advanced techniques are necessary to overcome limitations such as inventing information or failing at complex reasoning. The generative AI project life cycle is a structured approach that guides the development and deployment process.&lt;/p&gt;

&lt;p&gt;Link to the lab exercise that I completed on LLM: &lt;a href="https://github.com/azaynul10/Generative-AI-with-Large-Language-Models/blob/02a380343845fe205f7a3dae9bf2f7bb86e258dd/Lab_1_summarize_dialogue%20.ipynb" rel="noopener noreferrer"&gt;https://github.com/azaynul10/Generative-AI-with-Large-Language-Models/blob/02a380343845fe205f7a3dae9bf2f7bb86e258dd/Lab_1_summarize_dialogue%20.ipynb&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can read the Transformers paper: &lt;a href="https://arxiv.org/abs/1706.03762" rel="noopener noreferrer"&gt;https://arxiv.org/abs/1706.03762&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>aws</category>
      <category>ai</category>
      <category>deeplearning</category>
    </item>
  </channel>
</rss>
