<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Monday Romelfanger</title>
    <description>The latest articles on DEV Community by Monday Romelfanger (@monday-sun).</description>
    <link>https://dev.to/monday-sun</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3021556%2F85664f06-327c-40d6-a34d-5bd4f384e626.png</url>
      <title>DEV Community: Monday Romelfanger</title>
      <link>https://dev.to/monday-sun</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/monday-sun"/>
    <language>en</language>
    <item>
      <title>The Future of Working with AI — A Human-Centered Collaboration Model</title>
      <dc:creator>Monday Romelfanger</dc:creator>
      <pubDate>Sun, 04 May 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/monday-sun/the-future-of-working-with-ai-a-human-centered-collaboration-model-5727</link>
      <guid>https://dev.to/monday-sun/the-future-of-working-with-ai-a-human-centered-collaboration-model-5727</guid>
      <description>&lt;h2&gt;
  
  
  The Friction of Current AI Tools
&lt;/h2&gt;

&lt;p&gt;At the start of 2025, I worked as a software engineer at a tiny AI data analytics startup. I was fighting with AI on two fronts: how to provide valuable features for our customers and how to use AI tools in my own work. Both seemed intractable. Our customers weren't valuing the insights we were generating, and while relying on AI tools, I was introducing all kinds of defects I would previously have caught.&lt;/p&gt;

&lt;p&gt;I left that company in February and spent the next few months deep-diving into what was going wrong. For myself, I adopted a no-code-writing rule and worked on several projects. I talked with people across various disciplines, such as development, product management, marketing, and DevOps, about how they use AI tools and where they feel frustrated.&lt;/p&gt;

&lt;p&gt;Along the way, I found some common themes: people feel less engaged and less in control of their work when using standard LLM-based tools. Some people are learning to work around this; others avoid the tools altogether or use them minimally. That pattern is not the path to realizing the full potential of AI for work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pillars for AI-Human Interaction
&lt;/h2&gt;

&lt;p&gt;Reflecting on the conversations, my own experience, and research around human collaboration, I've developed these five pillars:&lt;/p&gt;

&lt;h3&gt;
  
  
  Human‑First Purpose
&lt;/h3&gt;

&lt;p&gt;Every feature starts with a concrete human need and ends with greater human agency. AI success is measured by how well it amplifies people, not how many tokens it generates or how long it works without human intervention.&lt;/p&gt;

&lt;h3&gt;
  
  
  Native to Human Communication
&lt;/h3&gt;

&lt;p&gt;Humans think and collaborate through language, persistent artifacts, and iterative feedback. AI must operate in those same channels—clear text, editable docs, code diffs, sketches—and stay conversational rather than opaque.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow Respect &amp;amp; Augmentation
&lt;/h3&gt;

&lt;p&gt;Human workflow loops, like brainstorm → draft → review → ship, remain intact. AI plugs into each stage to accelerate or lighten the load; it never skips or removes a step humans rely on for shared understanding. Even as these workflows evolve and AI takes on more work, they still need to optimize for sharing intent.&lt;/p&gt;

&lt;h3&gt;
  
  
  Granular, Trust‑First Collaboration
&lt;/h3&gt;

&lt;p&gt;Like a good teammate, AI delivers work in small, reviewable chunks, carries forward the shared project memory, and adapts quickly to human feedback. Total transparency isn't required—legible, context‑aware interaction is.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feedback‑Ready by Design
&lt;/h3&gt;

&lt;p&gt;Every interaction offers a low‑friction path for humans to correct, refine, or redirect the AI, turning each cycle into a learning moment for both sides and preventing silent drift from intent.&lt;/p&gt;

&lt;p&gt;These pillars aren't speculative; they're already emerging in how thoughtful teams and builders work with AI today. They hold regardless of how agentic AI becomes. The future of AI at work isn't purely agentic or autonomous; it's &lt;strong&gt;collaborative, contextual, and conversational&lt;/strong&gt;. That's the path to real amplification, not just automation.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Vision for the Future
&lt;/h3&gt;

&lt;p&gt;Based on these pillars, I envision a collaborative AI future. Some ways this might look:&lt;/p&gt;

&lt;h4&gt;
  
  
  Artifacts as Memory
&lt;/h4&gt;

&lt;p&gt;If it's not written down, it's forgotten. If it's not human-readable, it's not shared memory. LLMs must use artifacts rather than opaque, internally generated memory structures. We write things down in documents, code, comments, and plans. &lt;em&gt;Those&lt;/em&gt; are the artifacts humans already trust, because they convey information in a format and at a pace that's legible to us. Artifacts also give users the tools to trace where information came from.&lt;/p&gt;

&lt;p&gt;Using documents and other artifacts as memory gives users a way to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inspect and adjust what the AI "remembers"&lt;/li&gt;
&lt;li&gt;Trace why the agent took an action, or discover missing directives&lt;/li&gt;
&lt;li&gt;Update or correct the AI's understanding without having to engineer a perfect prompt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We see the start of this already with various types of RAG tools: ChatGPT doing web searches, Cursor doing searches within a codebase. Still, we lack tools that explicitly drive toward creating artifacts as a collaborative exercise in building shared context and assisting users in adjusting those artifacts to meet their needs.&lt;/p&gt;
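&lt;p&gt;As a rough illustration (not any existing tool's API; the function names here are hypothetical), artifact-backed memory can be sketched in a few lines of Python: what the assistant "remembers" is just human-readable text a person can open, inspect, and edit.&lt;/p&gt;

```python
def recall(artifacts, keyword):
    """Names of the artifacts that mention a keyword.

    `artifacts` maps a filename to its human-readable text, so a user
    can open any entry and see exactly what the AI will draw on.
    """
    return [name for name, text in artifacts.items()
            if keyword.lower() in text.lower()]


def build_prompt(artifacts, keyword, question):
    """Assemble a prompt from the matching artifacts plus the question.

    Correcting the AI's 'memory' then means editing a document,
    not engineering a perfect prompt.
    """
    names = recall(artifacts, keyword)
    context = "\n\n".join(f"## {n}\n{artifacts[n]}" for n in names)
    return context + "\n\n" + question
```

&lt;p&gt;Because the memory is just documents, the user can inspect it, trace an answer back to its source artifact, and fix a misunderstanding by editing the file itself.&lt;/p&gt;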

&lt;h4&gt;
  
  
  Feedback-Driven Workflows
&lt;/h4&gt;

&lt;p&gt;We don't need AI to replace our processes; we need it to &lt;strong&gt;participate in them&lt;/strong&gt;. We design our processes to produce a series of artifacts that help specify and solve the target problem. Each artifact is a point where stakeholders and contributors can give feedback to ensure we maintain a path to the desired outcome.&lt;/p&gt;

&lt;p&gt;Let's look at some example workflows across multiple domains:&lt;/p&gt;

&lt;h5&gt;
  
  
  Software Engineering
&lt;/h5&gt;

&lt;p&gt;Workflow: Spec → Code → PR → Review → Merge → Changelog&lt;/p&gt;

&lt;p&gt;Artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Feature specs (markdown, tickets)&lt;/li&gt;
&lt;li&gt;Code diffs&lt;/li&gt;
&lt;li&gt;Pull requests with review threads&lt;/li&gt;
&lt;li&gt;CI logs, test results&lt;/li&gt;
&lt;li&gt;Final changelog or release note&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each step produces reviewable, structured outputs. AI can draft, refine, test, and comment, but developers can also participate in the review process to maintain context and orient the work to meet their needs.&lt;/p&gt;

&lt;h5&gt;
  
  
  Hardware Product Manufacturing
&lt;/h5&gt;

&lt;p&gt;Workflow: CAD Design → 3D-Printed Prototype → Tooling → Test Run → QA Feedback → Mass Production&lt;/p&gt;

&lt;p&gt;Artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3D-printed prototype (early physical test of form/function)&lt;/li&gt;
&lt;li&gt;Injection mold or custom jig (tooling as a reusable artifact)&lt;/li&gt;
&lt;li&gt;Small batch from test run&lt;/li&gt;
&lt;li&gt;QA reports (digital, but based on physical inspection)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every phase produces &lt;strong&gt;something you can hold, test, or measure&lt;/strong&gt;, and each artifact invites feedback before committing resources. A prototype might reveal ergonomic flaws. A test run might uncover material warping.&lt;/p&gt;

&lt;h5&gt;
  
  
  Technical Documentation
&lt;/h5&gt;

&lt;p&gt;Workflow: Outline → Draft → Review → Publish → Update&lt;/p&gt;

&lt;p&gt;Artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Doc outline&lt;/li&gt;
&lt;li&gt;Drafts with comments&lt;/li&gt;
&lt;li&gt;Final published documentation&lt;/li&gt;
&lt;li&gt;Change tracking/reasoning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feedback happens at multiple fidelity levels. AI can co-author, but clarity and accuracy come through human collaboration.&lt;/p&gt;

&lt;p&gt;While these workflows are common, the way each person navigates them is deeply individual. Some people will want more iterations of feedback, and some will care about specific artifacts while disregarding others. Individuals and teams need to be able to express what they want from the process, have the AI help design that process, accept feedback from multiple human collaborators throughout, and adjust the process as needed.&lt;/p&gt;

&lt;h4&gt;
  
  
  Curated AI Personas and Multi-AI Collaboration
&lt;/h4&gt;

&lt;p&gt;An AI that knows what I know can act as a stand-in for my experience and perspective. Individuals working alongside AI assistants build a knowledge base representing their collective experience and perspective—an expert diagnostician, developer, or civil rights activist. An AI agent can then participate in other workflows and act on their behalf to make expertise more accessible.&lt;/p&gt;

&lt;p&gt;As each person develops and evolves their persona, they can contribute meaningfully to more conversations that they otherwise wouldn't have the bandwidth to participate in. AI personas can be especially helpful in ensuring diverse perspectives are available when garnering feedback. Multiple AI perspectives can converse and seek input from their human counterparts to develop the best solutions for the problems we're working to solve.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Future Worth Collaborating On
&lt;/h2&gt;

&lt;p&gt;Work aims to provide humans with what they need to survive and thrive. AI has the potential to unlock new types of creativity in problem solving, increase productivity, and scale perspectives, but AI can't replace lived human experience. People still experience the world through their bodies, with internal lives and needs that no model can fully replicate. This human-centered collaboration model strives to respect those needs and shape a future where AI boosts humanity instead of hindering it.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Refactor-First, Feature-Last: Conversing with Code</title>
      <dc:creator>Monday Romelfanger</dc:creator>
      <pubDate>Fri, 02 May 2025 20:27:08 +0000</pubDate>
      <link>https://dev.to/monday-sun/refactor-first-feature-last-conversing-with-code-237i</link>
      <guid>https://dev.to/monday-sun/refactor-first-feature-last-conversing-with-code-237i</guid>
      <description>&lt;p&gt;Two months into my no-code-writing development experiment, I've started to reflect on what I've gained and lost in the process.&lt;/p&gt;

&lt;p&gt;There are clear wins: faster prototyping, more exploratory design conversations before touching code, and almost no actual coding (which is a huge relief; I'm more of an organize-ideas person than a coder).&lt;/p&gt;

&lt;p&gt;I followed the tools' lead intentionally, curious to see what they had to offer. But in doing so, I slipped back into early-career patterns, prioritizing feature work and not handling code smells up front. I lost my Monday-ness.&lt;/p&gt;

&lt;p&gt;I'm a code whisperer. A wild west code wrangler. The one who could reshape entire systems by refactoring in their sleep.&lt;/p&gt;

&lt;p&gt;In this post, I want to return to the process that made that kind of work possible. The one that helped me grow from an intrepid puzzle-solver into a slayer of ye olde tech debt. It's not flashy. But it's deliberate, repeatable, and it's how I've quietly transformed codebases, again and again.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Philosophy
&lt;/h2&gt;

&lt;p&gt;Building software is a cycle of failing and learning. You'll never write spotless code, and you can't fully predict the future. Whatever you did yesterday is wrong for today.&lt;/p&gt;

&lt;p&gt;That's why I treat building software as a conversation. All code talks back. It resists change, reveals design tensions, and asks for structure. A refactor-first, feature-last workflow gives me the space to &lt;em&gt;listen&lt;/em&gt; before I act and lay the groundwork for successful feature building.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Workflow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhumans.build.software%2Fassets%2Ffiles%2F2025-05-02%2Fworkflow_light.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhumans.build.software%2Fassets%2Ffiles%2F2025-05-02%2Fworkflow_light.png" alt="Diagram illustrating the Refactor-First, Feature-Last workflow" width="800" height="1036"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram walks through all the stages I go through to develop a feature. You'll notice it has loops; I sometimes go through them a few times for a single story.&lt;/p&gt;

&lt;h3&gt;
  
  
  Identify Feature Scope
&lt;/h3&gt;

&lt;p&gt;Usually, you've done this before starting the work, but it's a helpful place to begin. Your feature might change scope or be broken into multiple features as you go.&lt;/p&gt;

&lt;h3&gt;
  
  
  Attempt Loop
&lt;/h3&gt;

&lt;p&gt;I rarely do a feature in one shot. I take at least two passes through here, often three or four. However, part of this process is that each subsequent iteration should be easier than the last. This loop has these stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Try implementing the feature&lt;/strong&gt;: The first iteration should be messy. See what you can do; see what you need.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluate friction and code smells&lt;/strong&gt;: As you go, you'll get a sense of where the code isn't working and what you need for this feature. It might be missing test coverage, a missing interface or abstraction, code duplication, or &lt;a href="https://refactoring.guru/refactoring/smells" rel="noopener noreferrer"&gt;code smells&lt;/a&gt;. Keep a list of things you notice. You don't have to completely implement the feature before moving on; you just need to understand the space.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Decide&lt;/strong&gt;: Can I add this feature cleanly while touching as little existing code as possible? This question is my primary evaluation criterion for whether or not I can move on to feature development. I typically prioritize reducing the risk of human error. Human errors are reduced when I can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cleanly add feature flags to toggle new behaviors on/off&lt;/li&gt;
&lt;li&gt;Implement new code in focused isolation from the existing system&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/SOLID" rel="noopener noreferrer"&gt;SOLID principles&lt;/a&gt; are some of my primary benchmarks for evaluating this. These principles help me focus new work into smaller building blocks and help me find the missing ones.&lt;/p&gt;
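&lt;p&gt;The feature-flag criterion above can be sketched minimally in Python (the flag store and all names here are hypothetical; real projects typically use a config file or a flag service): the new code path lives in isolation, and a single toggle controls which behavior runs.&lt;/p&gt;

```python
# A hypothetical in-process flag store, for illustration only.
FLAGS = {"new_pricing": False}

def is_enabled(name):
    return FLAGS.get(name, False)

def legacy_price(cart_total):
    # Existing behavior stays untouched.
    return cart_total

def new_price(cart_total):
    # New code, implemented in focused isolation.
    return cart_total * 0.9

def price(cart_total):
    # One toggle decides which path runs, so the change is easy
    # to review and easy to turn off.
    if is_enabled("new_pricing"):
        return new_price(cart_total)
    return legacy_price(cart_total)
```

&lt;p&gt;When the feature can be added this cleanly, the risk of human error drops: turning the flag off restores yesterday's behavior exactly.&lt;/p&gt;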

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Branches&lt;/strong&gt;: If your answer to the Decide question is:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Yes&lt;/strong&gt;: Sweet, go to the feature phase&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No&lt;/strong&gt;: Go to the refactor phase. Throw away this attempt. Hard reset, delete the branch, or do whatever feels useful.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Too Big&lt;/strong&gt;: The feature scope is too big if you're having difficulty keeping track of all the parts. Go back and re-evaluate the feature scope.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Refactor Phase
&lt;/h3&gt;

&lt;p&gt;For the refactoring phase, you'll go through the top two or three friction points you identified earlier. For each friction point:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a new branch&lt;/strong&gt;: do each refactoring in isolation. Isolation keeps code reviews fast and, in the worst case, makes it easier to identify which change introduced a defect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add/update tests&lt;/strong&gt;: ensure good test coverage before making changes. Tests help protect against errors even while refactoring. For some changes, if you aren't changing behavior, you might even count the compiler as test coverage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Refactor&lt;/strong&gt;: make your changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open a Pull Request, Code Review, Run Validation&lt;/strong&gt;: Like any other code change, do your typical quality checks. If you kept it small, these should be quick. It's much easier to get a little bit of someone's time to review a small PR than it is to get them to review a big one.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once all the friction points you wanted to address are complete, return to the Feature Attempt Loop.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feature Phase
&lt;/h3&gt;

&lt;p&gt;By the time you get here, you've probably done most of the hard work. You should know how you want to build this feature, and building it should be relatively straightforward.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a new branch&lt;/strong&gt;: create the branch from your main branch with your completed refactoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement your feature&lt;/strong&gt;: write new code, add new tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open a Pull Request, Code Review, Run Validation&lt;/strong&gt;: Get feedback, but remember, it doesn't have to be perfect – you'll fix it next time.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Approved&lt;/strong&gt;: Merge it; you're done!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New Issues Found&lt;/strong&gt;: go back to the refactor loop. You could keep this branch for rebasing or cherry-picking later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Too Big&lt;/strong&gt;: Getting here and deciding your PR is too big is still possible. A big PR will be hard to review. Go back to the beginning and evaluate your feature scope.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why This Works
&lt;/h2&gt;

&lt;p&gt;This workflow helps me adapt without fear, because I know that whatever I ship today, I've already built the space to improve it tomorrow. Learning, making mistakes, and improving are built into the workflow.&lt;/p&gt;

&lt;p&gt;Each phase is relatively quick – I'll often do an attempt in the morning, do multiple refactorings in the afternoon, and then the feature the next day. Each PR reduces the cognitive load for you and your reviewers because you took the time to make small changes and set yourself up for success. Smaller PRs also increase your ability to be agile and react to new information as you go.&lt;/p&gt;

&lt;p&gt;Paired with &lt;a href="https://humans.build.software/blog/micro-commits" rel="noopener noreferrer"&gt;micro-commits&lt;/a&gt;, this gives me a safe and unstoppable workflow. I've used this flow for deep re-architectures, for migrations from old libraries to new ones, and for everyday feature work. It empowers me to continuously evaluate code-change risk, build trust and design awareness with my teammates, and find strategies to ensure I'm providing a high-quality experience for customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Barriers to Adoption
&lt;/h2&gt;

&lt;p&gt;I've seen a lot of developers struggle with adopting this pattern. Here are the top barriers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initial slowdown&lt;/strong&gt;: As with any process change, you slow down when you implement it. You have to be disciplined in following it until it becomes second nature.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pattern recognition&lt;/strong&gt;: I've always relied on a gut feeling for detecting code smells, and not everyone has that. There's an art to writing good code: code is communication, and not everyone is a strong communicator. But it's a skill you can learn over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External pressure&lt;/strong&gt;: Even once you develop the muscle for this pattern, external pressure to 'get things done' can still override what you know needs to happen. In my experience, this workflow is more resilient to external pressure because I can't do the feature until I'm ready to, but in a very early startup environment I've still fallen back into old habits (feature-first work, too-big PRs, not throwing away code when I should).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This workflow is key to my fearlessness when facing even the oldest tech debt, and is flexible enough to carry me through any tech stack. It's freeing to orient my work around learning from my mistakes and the mistakes of the previous authors.&lt;/p&gt;

&lt;p&gt;I wanted to reflect on this workflow because I'm considering how it can apply when the 'conversation' includes AI tools. Can this flow be used to improve AI coding outcomes? Could it help reduce the context needed for any individual task? LLMs can do these tasks, but current tools require prompting each step individually, which is exceptionally tedious.&lt;/p&gt;

&lt;p&gt;Some of my upcoming experiments will attempt to automate this workflow using LLMs.&lt;/p&gt;

&lt;p&gt;What about you? Have you tried adopting a similar workflow? How do you talk to code? What barriers have you faced in trying to improve your workflows?&lt;/p&gt;

</description>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>ai</category>
    </item>
    <item>
      <title>I thought you’d be more skeptical</title>
      <dc:creator>Monday Romelfanger</dc:creator>
      <pubDate>Fri, 25 Apr 2025 21:23:21 +0000</pubDate>
      <link>https://dev.to/monday-sun/i-thought-youd-be-more-skeptical-36g</link>
      <guid>https://dev.to/monday-sun/i-thought-youd-be-more-skeptical-36g</guid>
      <description>&lt;p&gt;A couple of weeks ago, at a weekly family dinner, I was discussing my thoughts on AI with someone when they turned and said, “I’m really interested to see where you go. If anyone was super skeptical of AI, I thought it would be you.” I’ve been thinking about this and wanted to share some framing about how I think about AI, because I am skeptical. As far as I can tell, the hype that artificial superintelligence is &lt;a href="https://venturebeat.com/ai/2027-agi-forecast-maps-a-24-month-sprint-to-human-level-ai/" rel="noopener noreferrer"&gt;three years away&lt;/a&gt; is still hype. But I also want to view these tools practically in terms of what they &lt;em&gt;can&lt;/em&gt; offer and do.&lt;/p&gt;

&lt;h2&gt;
  
  
  LLMs sound human because that’s what we asked for
&lt;/h2&gt;

&lt;p&gt;LLMs are statistical machines. Many people oversimplify and say they’re “next word predictors,” which is valid to an extent. You give the model a string of words, and it gives you a stream of words back.&lt;/p&gt;

&lt;p&gt;In 2012, I took classes for a certificate in Natural Language Processing from UW, and that’s essentially what we learned: the next-word prediction model. We trained models on a big set of annotated text, which produced a giant matrix of weights for words based on the words preceding them, and the model would guess one word at a time. You couldn’t ask it to generate more than one sentence at a time without losing coherence. Here are some papers from project work in one of those classes (&lt;a href="https://humans.build.software/assets/files/2025-04-25/Project1-2-report.pdf" rel="noopener noreferrer"&gt;1&lt;/a&gt;, &lt;a href="https://humans.build.software/assets/files/2025-04-25/Project2-report.pdf" rel="noopener noreferrer"&gt;2&lt;/a&gt;; side note: “For our classifier we chose to take a largely different path than a majority of the class,” of course I did 😂). Even if you wiggle through the technical jargon, you can see we weren’t doing overly complicated tasks: using keywords to indicate bias, single-sentence generation. The models were so small that we could train them overnight on our laptops.&lt;/p&gt;

&lt;p&gt;The most popular LLMs are built on these foundations, but researchers hit a roadblock in defining which language features are essential. So, we gave machines a vast amount of data and resources (time, GPUs, energy, water) and let them decide how to define and create the data matrices that evaluate input and generate output. We give them feedback, they iterate, and the process repeats until results meet some benchmark of success. They output by streaming text because that’s the interface we gave them. We wanted them to feel responsive and conversational. So they do.&lt;/p&gt;
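&lt;p&gt;For intuition, here's a toy version of that 2012-era next-word predictor in Python. It's a sketch, not any production model: it counts which word follows which and greedily picks the most frequent follower, whereas a modern LLM samples from a learned distribution over a vastly richer context.&lt;/p&gt;

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions: a tiny 'matrix of weights'."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Guess one word at a time, greedily taking the most common follower."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]
```

&lt;p&gt;Chain the predictions and you get roughly one sentence of coherence before the output drifts, which matches the classroom experience described above.&lt;/p&gt;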

&lt;h2&gt;
  
  
  Simulated Thought and Short-Term Memory
&lt;/h2&gt;

&lt;p&gt;Last month, Anthropic released a &lt;a href="https://www.anthropic.com/research/tracing-thoughts-language-model" rel="noopener noreferrer"&gt;post&lt;/a&gt; about recent research into how LLMs think. One of the downsides of training machines is that we lose insight into how they process the inputs and create outputs. My takeaway from that post is that LLMs “think” similarly to me. They take text, map it to concepts, fill in the next conceptual step, and then translate back to text. This process is how they simulate understanding. The problem with this is twofold: 1) language and images are inefficient for conveying conceptual meaning, and 2) context windows are limited, so an LLM can only remember a small number of concepts at a time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inefficiency of Language
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.science.org/doi/10.1126/sciadv.aaw2594" rel="noopener noreferrer"&gt;Human language is low-bandwidth&lt;/a&gt; by our standards of technology. We’re not built for rapid communication, and our language reflects that. Even looking at the last sentence, wouldn’t it be more efficient to say, “Humans not optimal communicators”? Now I really sound like a robot, and your understanding of the idea is worse. We take time to chew on ideas; we insert less-informative words to create space for comprehension between them. We also have a limited vocabulary because we have limited memory. Not every word in every language is directly translatable: vocabulary comes from the cultural context of a language and is unique to each. So we do our best to find shared meaning, and the machines try to follow along.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Magical Number Seven, Plus or Minus Two
&lt;/h3&gt;

&lt;p&gt;I’ve referenced &lt;a href="https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two" rel="noopener noreferrer"&gt;this idea&lt;/a&gt; &lt;a href="https://humans.build.software/blog/micro-commits" rel="noopener noreferrer"&gt;before&lt;/a&gt;, but our brains can only hold onto a few ideas at a time. LLMs, currently, are worse. They’ve mapped language to concepts, but anecdotally, they only seem able to keep track of two things at a time, maybe three. They have no real long-term memory to pull from other than what’s baked into the model. Working memory is where AI still struggles the most. If it could hold on to more concepts at once, these tools would feel more intuitive for people to work with (and maybe I’d buy fewer keyboards?).&lt;/p&gt;

&lt;h2&gt;
  
  
  Working Within the Flaws
&lt;/h2&gt;

&lt;p&gt;LLMs are great because of their flaws. They are a way for people to operate machines in natural language, at speeds that make sense to us. That lowers the technical barrier to using technology and will help reduce the need for human-machine translators (coders, programmers). They also push us to be clear about our needs and shift the focus to communication, while being a partner in creating that shared understanding. You don’t code with an LLM. You explore ideas, generate specifications, and ask for feedback. When I think about the future of software engineering, I see more software engineering and less coding, which, if you know me, addresses one of my biggest complaints about the software industry.&lt;/p&gt;

&lt;p&gt;LLMs are a translation tool. They can help translate your thoughts into natural language; they can translate natural language to code, and back again; they can translate one natural language into another. They aren’t perfect, and no translation ever is, but they are often good enough. And when they aren’t, you can give them feedback until they meet your bar for success.&lt;/p&gt;

&lt;p&gt;I like LLMs because they reflect something human, flaws and all. However, they can never experience the world as we do, as fleshy bags of weird biology scurrying to keep ourselves alive. To say I’m skeptical isn’t far enough; I’m certain AI won’t solve everything and will create new problems too. But I hope it’ll help us see our own humanity and empower us to build a better future.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
    </item>
    <item>
      <title>How LLMs 'Know' Stuff</title>
      <dc:creator>Monday Romelfanger</dc:creator>
      <pubDate>Fri, 18 Apr 2025 19:05:43 +0000</pubDate>
      <link>https://dev.to/monday-sun/how-llms-know-stuff-3fgi</link>
      <guid>https://dev.to/monday-sun/how-llms-know-stuff-3fgi</guid>
      <description>&lt;p&gt;One of the more quietly frustrating things about using large language models is realizing they don't &lt;em&gt;actually&lt;/em&gt; know what you're talking about.&lt;/p&gt;

&lt;p&gt;They sound like they do. They generate confident, fluent text that gives the impression of comprehension. But behind the curtain, it's a bunch of math trying to guess which words come next based on patterns it has seen before.&lt;/p&gt;

&lt;p&gt;But even if the model doesn't "know" anything in the human sense, it still produces helpful (sometimes even excellent) output. So, where does that output come from? What's shaping it?&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;1. Latent Knowledge&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;What the model learned before you showed up.&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Every model is trained on a massive pile of text: books, websites, public forums, codebases, documentation, Wikipedia, and way too many Reddit arguments. I tend to think of it as 'the computers read the internet.' That training process creates what's called &lt;strong&gt;latent knowledge&lt;/strong&gt;—the model's internal sense of how language works, and what tends to go with what.&lt;/p&gt;

&lt;p&gt;It doesn't "remember" specific documents. It has no file system. What it &lt;em&gt;has&lt;/em&gt; is a set of statistical associations baked into its weights. It knows that the phrase "how to boil an egg" is likely followed by steps. It knows that Python code looks a certain way. It knows that banana bread is a thing.&lt;/p&gt;

&lt;p&gt;This baked-in knowledge is broad and surprisingly flexible. You can ask a well-formed question and often get a decent answer without giving it any context at all. But it's also static. It can't learn new things in real time. It doesn't know what happened yesterday unless yesterday happened in its training data, or it has a web search tool.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. Initial Context&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;What you tell it upfront.&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;When you open a new chat with an LLM, it waits for you to set the direction and tone of the conversation. When you send a prompt, you're shaping what the model sees and which statistical paths it follows to generate a response. That prompt, plus any setup or documents you include or attach, is the &lt;strong&gt;initial context&lt;/strong&gt;. It's the only thing the model has to work with besides its training.&lt;/p&gt;

&lt;p&gt;If your initial context is clear, well-structured, and includes relevant constraints, you give the model more to work with than its internal hunches. But if your prompt is vague or underspecified, the model will fill in the blanks using its latent knowledge. Might be good; might be based on a Spirk fanfic from 1976.&lt;/p&gt;

&lt;p&gt;This is where people &lt;em&gt;overestimate&lt;/em&gt; the model's awareness. You might assume that if you paste in a document or mention something once in paragraph four, the model "knows" it. Not always. Not unless it's recent, repeated, or phrased clearly enough to stick.&lt;/p&gt;
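&lt;p&gt;To make "initial context" concrete, here's a minimal sketch of how everything you provide up front gets flattened into one list the model reads in order. The shape follows the common chat-completions convention; the function and field names are illustrative, not any particular vendor's API.&lt;/p&gt;

```python
def build_initial_context(setup: str, attachments: list[str], prompt: str) -> list[dict]:
    # Everything the model sees on turn one, in the order it reads it.
    messages = [{"role": "system", "content": setup}]
    for doc in attachments:
        # Attached documents share the same context window as the prompt itself.
        messages.append({"role": "user", "content": f"Reference document:\n{doc}"})
    messages.append({"role": "user", "content": prompt})
    return messages
```

&lt;p&gt;If the attachments list is empty and the prompt is vague, that one system line plus latent knowledge is all the model has to go on.&lt;/p&gt;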




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Conversational Context&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;What gets built over time.&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;If you're working in a chat interface, every message you send becomes part of the conversation history. That's &lt;strong&gt;conversational context&lt;/strong&gt;—an evolving thread of instructions, clarifications, feedback, and bad follow-up questions. (We've all done it.)&lt;/p&gt;

&lt;p&gt;This ongoing exchange &lt;em&gt;feels&lt;/em&gt; collaborative. And it can be. But here's the thing people miss:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;LLMs don't track conversation like a human. They reread the most recent portions of the thread every time and try to predict what comes next.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unless you explicitly tell it, it doesn't "know" that it made a mistake two messages ago. And over the course of the conversation, the meaning of your words gets subtly reinforced or drifts into ambiguity. If the conversation gets long, the early stuff starts to fade—not because it's forgotten, but because it's buried.&lt;/p&gt;
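&lt;p&gt;One way to see the "rereads the thread every time" point: in a typical chat loop, the client resends the entire accumulated history on every turn. This sketch uses a stub in place of a real model call; the names are mine, purely illustrative.&lt;/p&gt;

```python
history: list[dict] = []

def fake_model(messages: list[dict]) -> str:
    # Stand-in for a real completion call; only here to make the sketch runnable.
    return f"(reply after reading {len(messages)} messages)"

def send(user_message: str) -> str:
    # One chat turn: the ENTIRE history is resent, not just the new message.
    history.append({"role": "user", "content": user_message})
    reply = fake_model(history)  # the model rereads the whole thread, every time
    history.append({"role": "assistant", "content": reply})
    return reply
```

&lt;p&gt;The model has no memory between calls; continuity exists only because the full history rides along with each request.&lt;/p&gt;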

&lt;p&gt;Here's how all of this flows together:&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblk9pcfhx0gfxu6lc4o5.png" alt="A diagram showing how different types of information flow into a large language model’s output. Three input sources are shown: Latent Knowledge (Pretraining), Initial Prompt (Setup &amp;amp; Instructions), and Conversational Context (Message History). These all flow into a shared middle layer labeled Context Window (Working Memory). From there, an arrow leads to Model Output. The diagram emphasizes that all input sources shape the model's output, but more recent conversational context may have stronger influence due to proximity within the context window." width="577" height="367"&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Proximity Bias&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Proximity bias plays a big role in the model's output. The model pays attention to &lt;em&gt;everything&lt;/em&gt; in the context window, but the &lt;strong&gt;closer something is to the end of the conversation, the more influence it tends to have&lt;/strong&gt;. In plain terms, the most recent messages often shape the output the most.&lt;/p&gt;

&lt;p&gt;This is why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A correction at the end of a long thread can override your original setup&lt;/li&gt;
&lt;li&gt;A prompt tweak halfway through a conversation can shift tone or strategy&lt;/li&gt;
&lt;li&gt;Important details buried early in the thread can get ignored entirely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model isn't malicious. It's simple. It reacts to what's in front of it, but &lt;em&gt;mainly to the most recent words&lt;/em&gt;.&lt;/p&gt;
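&lt;p&gt;A minimal sketch of one mechanical reason early details get buried or dropped entirely: many runtimes fit a long thread to a token budget by keeping the most recent messages and discarding the oldest. Real systems vary (some summarize or compress instead of dropping), and the word-count "tokenizer" here is a toy stand-in.&lt;/p&gt;

```python
def fit_to_window(messages: list[dict], budget: int) -> list[dict]:
    # Toy tokenizer: one "token" per whitespace-separated word.
    def cost(msg: dict) -> int:
        return len(msg["content"].split())

    kept: list[dict] = []
    used = 0
    # Walk backwards from the newest message: recency wins.
    for msg in reversed(messages):
        if used + cost(msg) > budget:
            break  # everything older than this point is silently dropped
        kept.append(msg)
        used += cost(msg)
    return list(reversed(kept))
```

&lt;p&gt;Once the budget runs out, an instruction you gave in message one simply isn't in front of the model anymore.&lt;/p&gt;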




&lt;h2&gt;
  
  
  &lt;strong&gt;Where This Leaves Us&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When an LLM gives you a weird answer, it's tempting to blame the model. But more often, it's reacting to the context you gave it (or didn't).&lt;/p&gt;

&lt;p&gt;Its "knowledge" comes from three sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What the model learned during training (latent knowledge)&lt;/li&gt;
&lt;li&gt;What you gave up front (initial context)&lt;/li&gt;
&lt;li&gt;What's accumulated in the chat (conversational context)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And those don't carry equal weight. As your conversation unfolds, recent messages tend to dominate. Earlier instructions fade. Meaning drifts.&lt;/p&gt;

&lt;p&gt;So if you're aiming for clarity or consistency, don't tack a final question onto a long chat and hope for the best. Pause. Ask it to summarize what you need as a new prompt, and validate that it looks correct. Start fresh with a structured prompt that reflects what you need.&lt;/p&gt;
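&lt;p&gt;That "pause, summarize, restart" move can be sketched as a tiny helper. The function and prompt wording here are mine and purely illustrative; the important part is the human checkpoint before the fresh start.&lt;/p&gt;

```python
def restart_with_summary(history: list[dict], generate) -> list[dict]:
    # Ask the model to compress the whole thread into one standalone prompt.
    request = history + [{
        "role": "user",
        "content": "Summarize everything I need as a single, self-contained prompt.",
    }]
    new_prompt = generate(request)
    # Human checkpoint: read new_prompt and confirm it's correct before reusing it.
    return [{"role": "user", "content": new_prompt}]
```

&lt;p&gt;The returned list is a brand-new conversation: no buried corrections, no drifted meanings, just the distilled intent.&lt;/p&gt;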

&lt;p&gt;LLMs don't know what you mean—but they'll try to guess. The more clearly you understand and express your own intent, the more likely they'll get it right.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>promptengineering</category>
      <category>development</category>
    </item>
    <item>
      <title>Monday’s Weekly Notes: April 14, 2025</title>
      <dc:creator>Monday Romelfanger</dc:creator>
      <pubDate>Mon, 14 Apr 2025 07:00:00 +0000</pubDate>
      <link>https://dev.to/monday-sun/mondays-weekly-notes-april-14-2025-312f</link>
      <guid>https://dev.to/monday-sun/mondays-weekly-notes-april-14-2025-312f</guid>
      <description>&lt;p&gt;&lt;em&gt;Weekly Notes are here to inspire others experimenting right now, give me a framework for reflection, and open up discussion around these workflows.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Highlights
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Made a lot of progress on &lt;a href="https://github.com/pandallel/quilt" rel="noopener noreferrer"&gt;Quilt&lt;/a&gt;: the modular actor system clicked, SQLite persistence landed, and the first full path through the cutting and swatching pipeline works.&lt;/li&gt;
&lt;li&gt;Started writing as a companion to the dev work — two blog posts published, and the reflection is helpful.&lt;/li&gt;
&lt;li&gt;The Cursor workflow is starting to prove itself, even if it’s still rough around the edges.&lt;/li&gt;
&lt;li&gt;LLM-based workflows are fast but cognitively dense — I’m moving quickly, but it feels like carrying a lot at once.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  In-Progress / Work Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Built and integrated Quilt’s actor system: 

&lt;ul&gt;
&lt;li&gt;The actor model was suggested by an LLM but made real when I connected it to how I already think — code as cooperating systems.&lt;/li&gt;
&lt;li&gt;Formalizing this helped break the work into intuitive, modular pieces.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Connected key components: 

&lt;ul&gt;
&lt;li&gt;Discovery actor ↔ scanner + material registry&lt;/li&gt;
&lt;li&gt;Cutting actor ↔ persistence backend&lt;/li&gt;
&lt;li&gt;SQLite integrated for early persistence&lt;/li&gt;
&lt;li&gt;Laid groundwork for swatching infrastructure&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Cursor code review caught real issues: 

&lt;ul&gt;
&lt;li&gt;Missing error reporting if Cut persistence failed&lt;/li&gt;
&lt;li&gt;Architectural gaps in dealing with backpressure&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Tooling insights: 

&lt;ul&gt;
&lt;li&gt;Plan → Act → Plan loop is effective at keeping the Cursor Agent from running away&lt;/li&gt;
&lt;li&gt;Memory bank too rigid for evolving work, not team-friendly yet&lt;/li&gt;
&lt;li&gt;Local review flow works but not yet integrated with GitHub &lt;/li&gt;
&lt;li&gt;Tried &lt;a href="https://github.com/marketplace/actions/ai-code-review-action" rel="noopener noreferrer"&gt;&lt;code&gt;ai-code-review-action&lt;/code&gt;&lt;/a&gt; for GitHub PR reviews 

&lt;ul&gt;
&lt;li&gt;It surfaced useful feedback, but added too many granular comments&lt;/li&gt;
&lt;li&gt;I had to manually re-feed its notes into Cursor, which broke the flow&lt;/li&gt;
&lt;li&gt;Still looking for a smoother way to bridge GitHub and Cursor-based reviews&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Considering GitHub MCP Server for bridging GitHub reviews with Cursor + LLM context&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Thinking &amp;amp; Reflection
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;For Quilt, contextual “spreads” are still a conceptual knot: 

&lt;ul&gt;
&lt;li&gt;The metaphor is solid, but implementation will need trial and error&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Starting to see my work as cyclical: build → reflect → teach&lt;/li&gt;

&lt;li&gt;Even with good velocity, I feel behind in documentation and reflection. Hopefully these weekly notes will help figure out what’s important to say.&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Cursor Rules
&lt;/h2&gt;

&lt;p&gt;These are my current Cursor rules. I can feed it commands to facilitate a largely automated workflow: &lt;code&gt;what's next&lt;/code&gt;, &lt;code&gt;do you have any questions&lt;/code&gt;, &lt;code&gt;act&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, &lt;code&gt;code review&lt;/code&gt;, then &lt;code&gt;update memory bank&lt;/code&gt;, and it follows these rules to respond with consistent structure and behavior. As long as I have a clear implementation plan in place, this works fairly smoothly. If it fails, we update the implementation plan with more detail and learnings from the attempt.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;📄 &lt;a href="https://humans.build.software/assets/files/2025-04-14/implementation_planning.mdc" rel="noopener noreferrer"&gt;&lt;code&gt;implementation_planning.mdc&lt;/code&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
This powers commands like &lt;code&gt;plan this feature&lt;/code&gt; or &lt;code&gt;what's next&lt;/code&gt;. It guides Cursor to produce milestone-driven, end-user-focused implementation plans with learning goals and time estimates.&lt;br&gt;&lt;br&gt;
It reinforces incrementalism and discourages diving straight into code.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Pros:&lt;/strong&gt; Keeps the LLM from jumping ahead. Forces it to model intent before execution. Helps me think too.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Cons:&lt;/strong&gt; My implementation plan doc is a giant backlog. I really wish I had a nicer task-tracking tool to use here. This is another thing I want to try the GitHub MCP server for.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📄 &lt;a href="https://humans.build.software/assets/files/2025-04-14/modes_of_operation.mdc" rel="noopener noreferrer"&gt;&lt;code&gt;modes_of_operation.mdc&lt;/code&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
This governs the Cursor event loop. It starts in PLAN mode and doesn’t switch to ACT until I explicitly tell it to &lt;code&gt;ACT&lt;/code&gt;. Every response starts with &lt;code&gt;# Mode: PLAN&lt;/code&gt; or &lt;code&gt;# Mode: ACT&lt;/code&gt; so I know what state it’s in.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Pros:&lt;/strong&gt; Helps prevent the LLM from just going; Claude in Cursor is very eager to please with completed work. Helps the LLM frame the work before executing. Gemini is much better at planning and following the memory bank rules; Claude is better at acting because it’s fine-tuned to the tools.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Cons:&lt;/strong&gt; Claude is not very good at sticking to these rules anyway, and it’s less fluid and ‘natural’ feeling.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://www.reddit.com/r/cursor/comments/1jqvqjx/thanks_to_the_memory_system_post_productivity/" rel="noopener noreferrer"&gt;reddit post&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📄 &lt;a href="https://humans.build.software/assets/files/2025-04-14/memory_bank.mdc" rel="noopener noreferrer"&gt;&lt;code&gt;memory_bank.mdc&lt;/code&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
This defines how Cursor treats memory. It assumes total amnesia between sessions and reads the memory bank files on every task. The structure is layered — project briefs, system patterns, current focus — and acts as the persistent state for the project.&lt;br&gt;&lt;br&gt;
Cursor reads from this when initializing, and I can trigger updates with &lt;code&gt;update memory bank&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Pros:&lt;/strong&gt; Gives Cursor continuity. I don’t have to restate context every time. Especially powerful for long-form or paused work.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Cons:&lt;/strong&gt; Not ideal for teams and it’s weird to maintain docs just for the LLM to use. &lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://www.reddit.com/r/cursor/comments/1jqvqjx/thanks_to_the_memory_system_post_productivity/" rel="noopener noreferrer"&gt;reddit post&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📄 &lt;a href="https://humans.build.software/assets/files/2025-04-14/code_review.mdc" rel="noopener noreferrer"&gt;&lt;code&gt;code_review.mdc&lt;/code&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
When I type &lt;code&gt;code review&lt;/code&gt;, Cursor kicks off a structured review using this doc. It automatically diffs against &lt;code&gt;main&lt;/code&gt;, scopes the changes, and walks through architecture, error handling, async, test coverage, performance, naming, and Rust-specific concerns.&lt;br&gt;&lt;br&gt;
It also checks that the memory bank and &lt;code&gt;@implementation-plan.md&lt;/code&gt; are up to date.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Pros:&lt;/strong&gt; Catches bugs and gaps with zero prompting. I can trust the review will be thorough without micromanaging it.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Cons:&lt;/strong&gt; I wish I could do this on GitHub and transfer the feedback back to my local Cursor session.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Overall Use Case:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
These rules make Cursor act like a junior dev or co-pilot with strong instincts and a short memory. I drive the flow by giving high-level commands, and it uses the rule set to respond consistently — whether I’m asking for a review, a next step, or a rewrite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Biggest Wins:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I don’t need to remember what to ask — I just say what I need (&lt;code&gt;code review&lt;/code&gt;, &lt;code&gt;plan&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, etc.) and Cursor knows what to do.&lt;/li&gt;
&lt;li&gt;The structure helps me scale — I can pause work, come back, and pick up from where the memory left off.&lt;/li&gt;
&lt;li&gt;It feels less like prompting, more like pairing with a teammate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Biggest Gaps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missing integration with other tools&lt;/li&gt;
&lt;li&gt;Not good for team environments. Maybe memory-bank could be git ignored and unique to the project/individual? Is it necessary with otherwise good documentation?&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Overall Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They reduce cognitive load and bring structure to async LLM dev work.&lt;/li&gt;
&lt;li&gt;They’re a long-term bet on trustable rituals — I can feel the system getting sharper.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Overall Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Still not integrated into GitHub or fully LLM-aware — lots of manual context syncing.&lt;/li&gt;
&lt;li&gt;Too rigid in places (memory bank) and too reliant on me remembering to invoke them.&lt;/li&gt;
&lt;li&gt;They need tooling, or at least automation hooks, to scale or share effectively.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Links &amp;amp; Publishing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;📝 Blog posts: 

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://humans.build.software/posts/what-the-heck-am-I-supposed-to-be-doing/" rel="noopener noreferrer"&gt;What the heck am I supposed to be doing?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://humans.build.software/posts/z-crystal-ball/" rel="noopener noreferrer"&gt;Transcript: Vibe Conding a Crystal Ball&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;🌐 Site migration: 

&lt;ul&gt;
&lt;li&gt;Moved blog to &lt;a href="https://humans.build.software" rel="noopener noreferrer"&gt;humans.build.software&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Redirect from GitHub Pages still pending&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;🎓 Teaching idea emerging: 

&lt;ul&gt;
&lt;li&gt;Thinking about running a class on LLM workflows — from philosophy to rituals&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;🔍 Exploring: 

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/eyaltoledano/claude-task-master" rel="noopener noreferrer"&gt;Claude Task Master&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/github/github-mcp-server" rel="noopener noreferrer"&gt;GitHub MCP Server&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Life &amp;amp; Grounding
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Playing &lt;em&gt;Spiritfarer&lt;/em&gt;. It’s steady, gentle, and hitting the right rhythm.&lt;/li&gt;
&lt;li&gt;Mid-day Monday is a lovely time for grocery shopping. It’s nice to feel free of the corporate schedule.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🐈 Cat Pic of the Week
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01quq14sgy1otgpwtmcb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01quq14sgy1otgpwtmcb.jpeg" alt="An orange tabby cat sits on a high white ledge near the ceiling, looking down. The background includes a wooden ceiling and a blue wall below." width="768" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Meta &amp;amp; Cadence
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This review is more like a week and a half, but it’s all good stuff.&lt;/li&gt;
&lt;li&gt;Working on a post about overall thoughts on an LLM-based workflow&lt;/li&gt;
&lt;li&gt;First weekly notes!!&lt;/li&gt;
&lt;li&gt;For non-cursor LLM chats if you’re having markdown rendering problems: &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“Please produce a Markdown document that is entirely wrapped in a single code block with outer backticks (using four backticks for the outer fence) so that inner formatting is preserved.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Join me on &lt;a href="https://discord.gg/TXhqgKkr" rel="noopener noreferrer"&gt;VibeColab Discord&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  💬 Open Threads
&lt;/h2&gt;

&lt;p&gt;Curious how others are shaping their LLM workflows. Are you trying something similar with Cursor or memory-based systems?&lt;/p&gt;

&lt;p&gt;This week I’m especially thinking about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to make memory banks more team-friendly&lt;/li&gt;
&lt;li&gt;Integrating LLM review flows with GitHub&lt;/li&gt;
&lt;li&gt;Finding a better bridge between GitHub and Cursor-based reviews 

&lt;ul&gt;
&lt;li&gt;I tried &lt;a href="https://github.com/marketplace/actions/ai-code-review-action" rel="noopener noreferrer"&gt;&lt;code&gt;ai-code-review-action&lt;/code&gt;&lt;/a&gt;, but it overwhelmed me with low-priority comments that I had to manually feed back into Cursor. It broke the rhythm more than it helped.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;If you’re experimenting in this space, I’d love to hear what’s working (or not).&lt;/p&gt;

</description>
      <category>weeklynotes</category>
      <category>llms</category>
      <category>developmentprocess</category>
    </item>
    <item>
      <title>Transcript: Vibe Coding a Crystal Ball</title>
      <dc:creator>Monday Romelfanger</dc:creator>
      <pubDate>Tue, 08 Apr 2025 07:00:00 +0000</pubDate>
      <link>https://dev.to/monday-sun/transcript-vibe-coding-a-crystal-ball-2k31</link>
      <guid>https://dev.to/monday-sun/transcript-vibe-coding-a-crystal-ball-2k31</guid>
      <description>&lt;p&gt;&lt;em&gt;Transcripts are meant to be a reflection on a specific LLM interaction, exploring the context, learnings, and providing a concrete example of making something with AI&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Made
&lt;/h2&gt;

&lt;p&gt;Riding the high of my coming back post, I came across this &lt;a href="https://www.reddit.com/r/BenignExistence/comments/1jrw823/im_a_very_really_real_and_definitely_not_fake/" rel="noopener noreferrer"&gt;reddit post&lt;/a&gt;, and I had the goofy idea to make a website to do this. I dunno why, just cause. So I did pure vibe coding in less than an hour. Introducing: &lt;a href="https://crystal-ball.pandallel.ai/" rel="noopener noreferrer"&gt;Crystal Ball&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcj4wip2a13n3imangow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcj4wip2a13n3imangow.png" alt="Screenshot of the Crystal Ball website showing a glowing purple orb with the text " width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pandallel/crystal-ball" rel="noopener noreferrer"&gt;Github Repo&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  A Tangent on Vibe Coding
&lt;/h3&gt;

&lt;p&gt;If you’re new to the phrase vibe coding:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Vibe coding (also written as vibecoding) is a recently-coined term for the practice of writing code, making web pages, or creating apps, by just telling an AI program what you want, and letting it create the product for you. In vibe coding the coder does not need to understand how or why the code works, and often will have to accept that a certain number of bugs and glitches will be present. The verb form of the word is vibe code.&lt;/p&gt;

&lt;p&gt;— &lt;cite&gt;&lt;a href="https://www.merriam-webster.com/slang/vibe-coding" rel="noopener noreferrer"&gt;Merriam-Webster Dictionary of Slang&lt;/a&gt;&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We use ‘vibe coding’ in a derogatory/derisive way, but it’s specific to this discovery period. We’re all playing around with these tools, trying to reinvent how we work, and many of us are failing and making mistakes as we go. There have been leaks of OpenAI API keys from vibe coders who had probably never dealt with integrating a third-party tool (which is not that uncommon) or who failed to properly review the work done by the LLM (which is easy to do if you don’t have effective workflows in place).&lt;/p&gt;

&lt;p&gt;Just this week, one of my friends called me a vibe coder when I mentioned I hadn’t written any code in weeks. I felt defensive because I wanted to think I was working with more structure than &lt;em&gt;other&lt;/em&gt; vibe coders. But everyone labeled a ‘vibe coder’ is a person failing, publicly, to use a tool that we collectively don’t know how to use yet, and that threatens to change the fabric of our work. Cheers to vibe coding in public 🥂&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Whimsy
&lt;/h3&gt;

&lt;p&gt;It’s much easier for me to follow my whimsy. I’ve always enjoyed the silly things people make just cause they want to, but building a simple website has always been challenging for me. There are a lot of details to create even something simple, and I tend to think more about the shapes, structures, and systems of things rather than the focused details. Even in college, I was the integrator. The one who got all the pieces to work together after everyone else wrote them. Using an LLM meant I could build something whimsical by just expressing the shape of what I wanted to make.&lt;/p&gt;

&lt;h3&gt;
  
  
  Letting go of control
&lt;/h3&gt;

&lt;p&gt;Part of developing with an LLM is growing an internal sense of what you should try to control and what you shouldn’t. In this example, if I had built this thing alone, I might have used Next.js, even though it’s overkill, because it’s familiar to me and easy to set up. But by letting go of the architecture details and seeing what the LLM did, I realized we could do without it and just set up a single Vercel function for the backend (Why Vercel? I already have an account there for other projects). Then, I had to fight with it to keep Next.js out, because most Vercel documentation is Next.js-based. You win some, you lose some.&lt;/p&gt;

&lt;p&gt;Letting go of control is a core part of my workflow with LLMs. You can experiment more quickly, so it’s okay to let the LLM agent try before you have all the answers. You’ll learn something in the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Crystal Ball Project Cursor Transcript (Generated by Claude 3.7)
&lt;/h2&gt;

&lt;p&gt;The following transcript was generated by Cursor (v0.48.7) in Agent Mode with Claude 3.7 (thinking), by going through each chat window and requesting a transcript. I didn’t plan or prompt much—I just followed the vibes and let the assistant help me build something goofy, step by step.&lt;/p&gt;




&lt;h3&gt;
  
  
  Initial Request and Setup
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; I want to build a silly website that will ‘tell you your future’ like this reddit post @&lt;a href="https://www.reddit.com/r/BenignExistence/comments/1jrw823/im%5C_a%5C_very%5C_really%5C_real%5C_and%5C_definitely%5C_not%5C_fake/" rel="noopener noreferrer"&gt;https://www.reddit.com/r/BenignExistence/comments/1jrw823/im\_a\_very\_really\_real\_and\_definitely\_not\_fake/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant created initial website with:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTML structure with a crystal ball interface&lt;/li&gt;
&lt;li&gt;CSS styling with animations&lt;/li&gt;
&lt;li&gt;JavaScript with hardcoded funny fortunes&lt;/li&gt;
&lt;li&gt;Basic README file&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; Cool, but I want to use an LLM to generate the fortunes, and probably host it on Vercel&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant updated the project to:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created API endpoint using OpenAI&lt;/li&gt;
&lt;li&gt;Modified JavaScript to fetch from the API with fallbacks&lt;/li&gt;
&lt;li&gt;Added Vercel configuration&lt;/li&gt;
&lt;li&gt;Updated documentation for deployment&lt;/li&gt;
&lt;li&gt;Added environment variables for API keys&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; I’m getting 404s from the API. The page could not be found. NOT_FOUND&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant restructured project to fix API routing:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created Next.js compatible directory structure&lt;/li&gt;
&lt;li&gt;Moved API file to pages/api directory&lt;/li&gt;
&lt;li&gt;Moved static files to public directory&lt;/li&gt;
&lt;li&gt;Updated Vercel configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; According to Vercel docs &lt;a href="https://vercel.com/docs/functions" rel="noopener noreferrer"&gt;https://vercel.com/docs/functions&lt;/a&gt; it was in the right place before&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant acknowledged the API file can work in multiple locations per Vercel docs:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kept API in original location&lt;/li&gt;
&lt;li&gt;Updated project structure to ensure proper routing&lt;/li&gt;
&lt;li&gt;Cleaned up duplicate files&lt;/li&gt;
&lt;li&gt;Restarted development server&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Improvements Made During Development
&lt;/h4&gt;

&lt;p&gt;During our conversation, we also:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Added responsive design for mobile devices&lt;/li&gt;
&lt;li&gt;Implemented fallback fortunes if the API call fails&lt;/li&gt;
&lt;li&gt;Added loading animations for better UX&lt;/li&gt;
&lt;li&gt;Improved crystal ball visual effects&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The project now features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A visually pleasing crystal ball with animations&lt;/li&gt;
&lt;li&gt;OpenAI integration for generating unique fortunes&lt;/li&gt;
&lt;li&gt;Vercel deployment configuration&lt;/li&gt;
&lt;li&gt;Proper fallbacks for error handling&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  New Chat w/ Rollover: Dependency and Package Updates
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; I’m still getting 404s and we have two copies of the backend code&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant fixed the duplicate backend code:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identified two copies of API files in /api and /pages/api&lt;/li&gt;
&lt;li&gt;Removed the duplicate API file&lt;/li&gt;
&lt;li&gt;Updated Vercel configuration for proper routing&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; [Outside chat]: (I felt like the vercel config and next config it had created were making it confused, so I deleted them)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; According to Vercel docs &lt;a href="https://vercel.com/docs/functions" rel="noopener noreferrer"&gt;https://vercel.com/docs/functions&lt;/a&gt; /api was the correct location&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant restored the correct setup:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Put API in original location&lt;/li&gt;
&lt;li&gt;Updated Vercel configuration for standalone functions&lt;/li&gt;
&lt;li&gt;Removed Next.js from configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; We don’t have Next.js in package.json&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant updated package.json:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removed Next.js dependencies and scripts&lt;/li&gt;
&lt;li&gt;Updated to use Vercel’s standalone functions approach&lt;/li&gt;
&lt;li&gt;Configured scripts for Vercel development&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; Can we get the latest packages? @Web&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant updated dependencies to latest versions:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updated OpenAI from 4.30.0 to 4.91.1&lt;/li&gt;
&lt;li&gt;Updated Vercel from 33.2.0 to 41.4.1&lt;/li&gt;
&lt;li&gt;Updated Node.js requirement from &amp;gt;=18.0.0 to &amp;gt;=20.0.0&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The crystal ball project is now properly configured with updated dependencies and the correct API structure for Vercel deployment.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; Can you add an animation to make the ball cloudy while it’s loading?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant added a cloudy animation effect for the loading state:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created a new CSS class and animation for the cloudy effect&lt;/li&gt;
&lt;li&gt;Added SVG-based cloud patterns inside the crystal ball&lt;/li&gt;
&lt;li&gt;Implemented JavaScript to toggle the cloudy state during loading&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; Can you make it a little smoother? It like turns on then off.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant improved the animation smoothness:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Added CSS transitions for gradual changes between states&lt;/li&gt;
&lt;li&gt;Created additional animations for subtle cloud movement&lt;/li&gt;
&lt;li&gt;Adjusted JavaScript timing for a more natural transition&lt;/li&gt;
&lt;li&gt;Extended animation duration for a more magical experience&lt;/li&gt;
&lt;/ul&gt;
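&lt;p&gt;The smoothing fix above amounts to animating opacity with a CSS transition instead of toggling a class on and off. A minimal sketch, with hypothetical class names (&lt;code&gt;.crystal-ball&lt;/code&gt;, &lt;code&gt;.cloud-layer&lt;/code&gt;), since the post doesn’t show the generated styles:&lt;/p&gt;

```css
/* Hypothetical class names, not the generated code verbatim.
   Fading opacity via a transition means the clouds ease in and
   out instead of popping on and off when loading starts/ends. */
.cloud-layer {
  opacity: 0;
  transition: opacity 1.5s ease-in-out;
}
.crystal-ball.loading .cloud-layer {
  opacity: 1;
  animation: drift 6s ease-in-out infinite alternate;
}
@keyframes drift {
  from { transform: translateX(-4px); }
  to   { transform: translateX(4px); }
}
```

&lt;p&gt;JavaScript then only adds or removes the &lt;code&gt;loading&lt;/code&gt; class; the browser handles the gradual change.&lt;/p&gt;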

&lt;p&gt;&lt;em&gt;(Side note: it’s still janky, but a little better after this)&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  New Chat w/ Rollover: User Engagement Features
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; Can we make it so they can only ask for one per day?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant implemented a daily fortune limit using localStorage:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Added functions to track when users last received a fortune&lt;/li&gt;
&lt;li&gt;Modified the UI to show “Return Tomorrow” when the daily limit is reached&lt;/li&gt;
&lt;li&gt;Created a countdown showing hours and minutes until the next available fortune&lt;/li&gt;
&lt;li&gt;Added persistence across page refreshes and browser sessions&lt;/li&gt;
&lt;/ul&gt;
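&lt;p&gt;The gating and countdown logic can be sketched as a pair of pure helpers (hypothetical names, not the generated code verbatim); the real version reads and writes the last-fortune timestamp from &lt;code&gt;localStorage&lt;/code&gt;:&lt;/p&gt;

```javascript
// Sketch of the daily-limit rule: one fortune per rolling 24 hours.
const DAY_MS = 24 * 60 * 60 * 1000;

// Has a full day passed since the last fortune? (null = never asked)
function canAskFortune(lastMs, nowMs) {
  return lastMs === null || nowMs - lastMs >= DAY_MS;
}

// Hours and minutes remaining until the next fortune unlocks,
// used for the "Return Tomorrow" countdown.
function timeUntilNext(lastMs, nowMs) {
  const msLeft = Math.max(0, DAY_MS - (nowMs - lastMs));
  const hours = Math.floor(msLeft / (60 * 60 * 1000));
  const minutes = Math.floor((msLeft % (60 * 60 * 1000)) / (60 * 1000));
  return { hours, minutes };
}
```

&lt;p&gt;In the browser, &lt;code&gt;lastMs&lt;/code&gt; would come from something like &lt;code&gt;Number(localStorage.getItem('lastFortuneAt'))&lt;/code&gt; and be written back after each fortune.&lt;/p&gt;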

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; How are you saving across refreshes?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant explained the localStorage implementation:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used browser’s localStorage API to persist data between sessions&lt;/li&gt;
&lt;li&gt;Stored timestamp of last fortune request&lt;/li&gt;
&lt;li&gt;Added date comparison logic to determine if it’s a new day&lt;/li&gt;
&lt;li&gt;Highlighted that localStorage is widely supported and perfect for this use case&lt;/li&gt;
&lt;/ul&gt;
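&lt;p&gt;The persistence piece is straightforward: &lt;code&gt;localStorage&lt;/code&gt; survives page refreshes and browser restarts for the same origin, and the “is it a new day” check compares calendar dates rather than raw elapsed time. A sketch with hypothetical names:&lt;/p&gt;

```javascript
// Calendar-date comparison: a fortune drawn late in the day unlocks
// again at midnight. UTC accessors used here for determinism; a real
// build would likely use getFullYear/getMonth/getDate so the reset
// happens at the user's local midnight.
function isNewDay(lastMs, nowMs) {
  const last = new Date(lastMs);
  const now = new Date(nowMs);
  return (
    last.getUTCFullYear() !== now.getUTCFullYear() ||
    last.getUTCMonth() !== now.getUTCMonth() ||
    last.getUTCDate() !== now.getUTCDate()
  );
}

// localStorage stores strings only, so the timestamp is saved and
// parsed as a number. (Key name is illustrative.)
function recordFortune(nowMs) {
  localStorage.setItem('lastFortuneAt', String(nowMs));
}
function lastFortuneAt() {
  const raw = localStorage.getItem('lastFortuneAt');
  return raw === null ? null : Number(raw);
}
```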

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; Can we add an inspired by link to @&lt;a href="https://www.reddit.com/r/BenignExistence/comments/1jrw823/im%5C_a%5C_very%5C_really%5C_real%5C_and%5C_definitely%5C_not%5C_fake/" rel="noopener noreferrer"&gt;https://www.reddit.com/r/BenignExistence/comments/1jrw823/im\_a\_very\_really\_real\_and\_definitely\_not\_fake/&lt;/a&gt;?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant added attribution to the Reddit post:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Added a styled attribution link in the HTML&lt;/li&gt;
&lt;li&gt;Created CSS styling for the link to match the site’s aesthetic&lt;/li&gt;
&lt;li&gt;Set the link to open in a new tab&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; This AI appears to be obsessed with socks given the system prompt.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Assistant identified and fixed the sock obsession in the generated fortunes:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Examined the API prompt and found it was generating too many sock-related fortunes&lt;/li&gt;
&lt;li&gt;Modified the system prompt to explicitly avoid mentions of socks&lt;/li&gt;
&lt;li&gt;Added instructions to use a wider variety of mundane topics&lt;/li&gt;
&lt;li&gt;Increased the temperature parameter for more randomness&lt;/li&gt;
&lt;/ul&gt;
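&lt;p&gt;The post doesn’t show the real prompt, so here is a hedged guess at the shape of the fix: an explicit “no socks” instruction in the system message plus a higher temperature. The model name and wording are assumptions, not what the assistant actually generated:&lt;/p&gt;

```javascript
// Illustrative chat-completions request payload; the prompt text
// and model are guesses at the shape of the fix, not the real code.
function buildFortuneRequest() {
  return {
    model: 'gpt-4o', // assumed model, not stated in the post
    temperature: 1.0, // raised for more varied fortunes
    messages: [
      {
        role: 'system',
        content:
          'You are a mystical crystal ball that delivers short, ' +
          'benign, mundane fortunes. Draw from a wide variety of ' +
          'everyday topics. Do not mention socks.',
      },
      { role: 'user', content: 'Tell me my fortune.' },
    ],
  };
}
```

&lt;p&gt;With the &lt;code&gt;openai&lt;/code&gt; v4 client, a payload like this would be passed to &lt;code&gt;client.chat.completions.create(...)&lt;/code&gt;.&lt;/p&gt;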

&lt;p&gt;The crystal ball project now features smoother animations, daily fortune limits, proper attribution, and more varied fortunes without the sock obsession.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Had to Do Myself:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Set up the project in Vercel&lt;/li&gt;
&lt;li&gt;Got a new API key and set the env var in Vercel&lt;/li&gt;
&lt;li&gt;Added a CNAME record and linked it to my Vercel project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s it. Vibe coded in under an hour. Just an afternoon whimsy.&lt;/p&gt;

</description>
      <category>projects</category>
      <category>llms</category>
      <category>developmentprocess</category>
      <category>cursor</category>
    </item>
    <item>
      <title>When LLMs meet Software Engineering</title>
      <dc:creator>Monday Romelfanger</dc:creator>
      <pubDate>Sat, 05 Apr 2025 07:00:00 +0000</pubDate>
      <link>https://dev.to/monday-sun/when-llms-meet-software-engineering-55h2</link>
      <guid>https://dev.to/monday-sun/when-llms-meet-software-engineering-55h2</guid>
      <description>&lt;p&gt;Hey, I’m Monday. I’m a software engineer exploring what it means to build software alongside LLMs, with a strong focus on process, reflection, and being human in the loop. This is my first post here on Dev.to—adapted from my personal blog—and it’s about where I’m starting from.&lt;/p&gt;

&lt;p&gt;Last year, I made the Seattle -&amp;gt; Bay Area move and started at a young AI startup, which I left in February due to… let’s say, personality and value differences. As the political landscape has shifted, many people in my life have had their lives disrupted. I'm trans, as are many people in my social circle. One friend has left the country, another has discussed with her wife whether they should get divorced so they can both stay present in their child’s life, and yet another is stretching themselves thin trying to support everyone around them. We’re all more stressed and more unsettled than we were before.&lt;/p&gt;

&lt;p&gt;At the same time, tech is playing the oligarch game. &lt;a href="https://apnews.com/article/trump-inauguration-tech-billionaires-zuckerberg-musk-wealth-0896bfc3f50d941d62cebc3074267ecd" rel="noopener noreferrer"&gt;CEOs at the inauguration&lt;/a&gt;, Zuckerberg’s &lt;a href="https://www.youtube.com/watch?v=MAw9Tpi4B_U" rel="noopener noreferrer"&gt;masculine energy&lt;/a&gt; (GPT-4o suggested “gladiator era,” and I’m dying 😂), &lt;a href="https://transparency.meta.com/policies/community-standards/hateful-conduct/" rel="noopener noreferrer"&gt;Meta’s Jan. 7 hateful conduct update&lt;/a&gt;, the &lt;a href="https://www.forbes.com/sites/conormurray/2025/04/02/european-countries-push-back-against-trumps-anti-dei-crusade-here-are-all-the-companies-rolling-back-dei-programs/" rel="noopener noreferrer"&gt;rollback and de-emphasis of DEI programs&lt;/a&gt;, and whatever the hell Elon Musk is doing today. The mood in tech has shifted. To many of us, our leaders and our funders no longer feel trustworthy—not that they necessarily were before, but more people are aware of their flaws. And with LLMs, the very fabric of our work is shifting beneath us.&lt;/p&gt;

&lt;p&gt;When I left my last job, I didn’t know what I was going to do, and I still don’t know exactly. But after a bit of time away playing with some project ideas, I had two thoughts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I’m using the LLM wrong for coding, but I’m not sure how, and&lt;/li&gt;
&lt;li&gt;&lt;em&gt;if LLMs are going to do all the coding in the future, what the heck am I supposed to be doing?&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And that comes to what I want to do with this blog/notes/experiments thing. When I initially started &lt;em&gt;Humans Build Software&lt;/em&gt; last year, I knew something was changing. The name is a mantra as much as a mission: humans—people, with needs, bodies, context, and passions—should drive software development. &lt;em&gt;How&lt;/em&gt; we build and &lt;em&gt;what&lt;/em&gt; we build are in the middle of a vast, industry-changing upheaval.&lt;/p&gt;

&lt;p&gt;I’ve already collected a few threads worth sharing, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;That time I discovered a much better UX through LLM simulation of a group conversation&lt;/li&gt;
&lt;li&gt;LLMs are a mirror you can talk to&lt;/li&gt;
&lt;li&gt;When Claude did my (Staff) job for me&lt;/li&gt;
&lt;li&gt;How ChatGPT helped me navigate a situation where I was being gaslit&lt;/li&gt;
&lt;li&gt;How using an LLM changes my workflow&lt;/li&gt;
&lt;li&gt;🌶️ LLMs will make software engineering “women’s work” again 🌶️&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As well as sharing notes from experiments I’m trying (some in partnership with &lt;a href="https://www.linkedin.com/in/galluzzo/" rel="noopener noreferrer"&gt;Nic Galluzzo&lt;/a&gt;) –&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scaffold, a task management app for AuDHD&lt;/li&gt;
&lt;li&gt;Quilt, a local, document-based context provider for LLMs&lt;/li&gt;
&lt;li&gt;Collaborative AI, a team-oriented chat + document writing experience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ll finish out with a bit of call-to-action. What do you want to know about development with LLMs? What do you wish you had time to try out? How are your current tools and workflows failing you? How have LLMs changed your workflow? Let’s define a new world of software engineering that works for &lt;em&gt;us&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>developmentprocess</category>
      <category>llms</category>
      <category>softwareengineering</category>
    </item>
  </channel>
</rss>
