<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: mikkergimenez</title>
    <description>The latest articles on DEV Community by mikkergimenez (@mikkergimenez).</description>
    <link>https://dev.to/mikkergimenez</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F856589%2Fafd89cc5-ff55-4e19-bc34-599bc802571e.png</url>
      <title>DEV Community: mikkergimenez</title>
      <link>https://dev.to/mikkergimenez</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mikkergimenez"/>
    <language>en</language>
    <item>
      <title>You're not prompting it wrong.</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Thu, 19 Mar 2026 23:43:44 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/youre-not-prompting-it-wrong-3ckn</link>
      <guid>https://dev.to/mikkergimenez/youre-not-prompting-it-wrong-3ckn</guid>
      <description>&lt;p&gt;I was listening to Grady Booch on The third golden age of software engineering episode of The Pragmatic engineer, and at one point during the episode he mentioned a website called Victorian Engineering Connections — an interactive diagram of how Victorian engineers knew and influenced each other.&lt;/p&gt;

&lt;p&gt;I was in the middle of a session with Claude, so I asked Claude to remind me of the name, and it pointed me to sixdegreesoffrancisbacon.com.  I then mentioned that I had heard Grady Booch talk about it on a podcast.  Still, it gave me three plausible-sounding wrong answers (not quite hallucinations; the links worked).  When I typed the actual name, it found it immediately.&lt;/p&gt;

&lt;p&gt;I went and searched Google for victorianengineeringconnections.net, and the first link was a blog post reviewing the website, so I knew that Claude had references to the website in its training data. &lt;/p&gt;

&lt;p&gt;It made me think of two things: The Library of Babel, and "You're prompting it wrong."&lt;/p&gt;

&lt;p&gt;There are virtually infinite strings of text one could use to prompt an LLM, and each will produce a different response.  So in a sense, an LLM is a giant Library of Babel (with some randomness) where each person has their own unique index.&lt;/p&gt;

&lt;p&gt;One of the reasons LLMs seem amazing is that they can answer almost any question we have.  They feel right there next to you the whole time.  And that's because they are.  People have often said, "LLMs don't work for you because you're prompting them wrong."  But what if LLMs only work for people who are already subject matter experts?  Because their unique keys - the way they form their strings of text - are the only ones that lead to the correct answers.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Pulp Fiction, Star Wars, and Feedback</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Wed, 07 Jan 2026 13:57:26 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/pulp-fiction-star-wars-and-feedback-5b25</link>
      <guid>https://dev.to/mikkergimenez/pulp-fiction-star-wars-and-feedback-5b25</guid>
      <description>&lt;p&gt;&lt;a href="https://youtu.be/1_i67Vb5ftU" rel="noopener noreferrer"&gt;https://youtu.be/1_i67Vb5ftU&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you didn’t know, there’s a story about Star Wars (in the behind-the-scenes on the DVD, and I’m sure elsewhere) that after the first screening of the film for George Lucas’ director friends, it was almost universally agreed that Lucas had wasted his time on a movie that would be a failure (only Steven Spielberg felt differently; apparently that first screening had no soundtrack either, which makes it easier to imagine them feeling that way).&lt;/p&gt;

&lt;p&gt;Apparently the same thing happened with Pulp Fiction.  Only Kathryn Bigelow saw the potential; one of the directors in that first screening was lamenting the heart-to-heart he would have to have with Quentin, until Pulp Fiction won the Palme d’Or at Cannes before he got the chance.&lt;/p&gt;

&lt;p&gt;Which leaves me with a lot of questions:&lt;/p&gt;

&lt;p&gt;What is the point of feedback at this stage, when the movie is basically done?  I guess you could re-edit a few things.  I’m not suggesting people lie, but why tell someone their movie is crap weeks before release if there is nothing to be done?  To make the response easier to stomach?&lt;/p&gt;

&lt;p&gt;How did these directors approach feedback?  I can sort of see why no one liked Star Wars: it was a radical departure from what was happening in the ’70s, and probably a radical departure from the kind of movies these directors thought should be made.&lt;/p&gt;

&lt;p&gt;Pulp Fiction is also a radical departure, but it feels like a director’s movie.  Was no one into the sort of ’70s exploitation films that inspired it?  Or, and this is where it gets weird, were they trying to be “objective”?&lt;/p&gt;

&lt;p&gt;Maybe the feedback really was, hey I really liked this but audiences are going to think you’re nuts.&lt;/p&gt;

&lt;p&gt;This reminds me of an HBR article I read that basically said there are two types of valuable feedback:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I’m an expert and you got this fact/detail wrong.  This can happen all the time in all sorts of different ways; in cinema I imagine there are details about how you combine lenses and lights to get certain effects that you could get wrong.&lt;/li&gt;
&lt;li&gt;This is my personal, subjective opinion.  This can’t be wrong, but it should be worded as such.  Ultimately, as a commercial artist, you’re trying to get in touch with people’s subjective opinions.  Maybe there are a few different ways to film a scene, and everyone tells you one way evokes the emotion you’re trying to evoke better than the others.  It may not be universally true, but it could be a good signal if you’re genuinely unsure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It’s an interesting thing about commercial art especially, and it probably applies to product development and anything else that creates a curated artifact people interact with.  There’s a balance between what you personally might like and what a general audience might like, and I imagine the best directors understand how to strike that balance almost intuitively (I also get the impression that over time it can be easy to lose this intuition as the pressure to make a certain amount of money grows).  Tarantino was worried Pulp Fiction was too Tarantino, a valid concern, but Tarantino was probably so in tune with what makes a movie entertaining that doing it the Tarantino way was doing it the audience way; he just didn’t know it yet.&lt;/p&gt;

&lt;p&gt;Listening to only one strong signal in product development can be tough (though it can depend on where the signal is coming from), but a “strong hire” (or “strong no-hire”) signal, I think, is one to be listened to.  Especially if there’s an imbalance in the strength of the reasons.&lt;/p&gt;

</description>
      <category>leadership</category>
    </item>
    <item>
      <title>IaC should be flat (and small)</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Tue, 06 Jan 2026 14:48:04 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/iac-should-be-flat-and-small-2d2m</link>
      <guid>https://dev.to/mikkergimenez/iac-should-be-flat-and-small-2d2m</guid>
      <description>&lt;p&gt;There's another question I want to explore, which is essentially - is IaC an API?  Or is it the implementation that uses that API?  If I'm running a platform team, should I be telling folks how to build their IaC repos or should their IaC repos be in their own opinionated style? (Lots to unpack there, but as in all things the answer is "both")&lt;/p&gt;

&lt;p&gt;In either case - whether I'm advising a team on how to build their own IaC, or building out an IaC repo for another team to use - I think IaC should be as simple, as flat, and as small as possible.&lt;/p&gt;

&lt;p&gt;By simple and flat, I mean using as few abstractions or indirections as possible.  If you're, say, creating the ability to generate a bunch of S3 buckets, then the top level of that should just be a list of S3 buckets.  If an application requires three S3 buckets, it might be OK to have IaC that creates a list of those applications.  What is probably not OK is a module that creates those three S3 buckets, added to your list of S3 buckets.  Those three buckets should be added on their own.&lt;br&gt;&lt;br&gt;
The most controversial thing I might be saying is this: if you need three different environments, you shouldn't use Terraform workspaces to generate them, and maybe you shouldn't even use a module to generate each of the three environments.  Each environment should have its resources listed out on their own.&lt;/p&gt;
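
&lt;p&gt;To make that concrete, here's a minimal Terraform sketch of the flat style versus the indirect style (the bucket and module names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Flat: every bucket the application needs is declared on its own,
# so reading the file tells you exactly what exists.
resource "aws_s3_bucket" "myapp_uploads" {
  bucket = "myapp-uploads"
}

resource "aws_s3_bucket" "myapp_exports" {
  bucket = "myapp-exports"
}

resource "aws_s3_bucket" "myapp_logs" {
  bucket = "myapp-logs"
}

# Indirect: the same three buckets hidden behind a module.  Now you
# have to open the module to learn what actually gets created.
# module "myapp_buckets" {
#   source = "./modules/app-buckets"
#   app    = "myapp"
# }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;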

&lt;p&gt;By small, I mean that each IaC workspace should do just one thing.  Sometimes that one thing is fairly large; managing a DNS zone, say, or networking can get pretty hairy.  But infrastructure that doesn't change together or depend on each other definitely should not be in the same workspace.&lt;/p&gt;
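
&lt;p&gt;As an illustration of "small," a repo layout along these lines (the directory names are hypothetical) keeps each workspace scoped to one thing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infra/
  dns-zone/      # one workspace: the zone and its records
  networking/    # one workspace: VPC, subnets, routing (hairy, but one thing)
  myapp/         # one workspace: only myapp's resources
  reporting/     # one workspace: only the reporting service's resources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;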

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;IaC needs to be easy to read.  IaC should not be like a managed codebase where people take time to become accustomed to it before they become experts in adding to it.  The point of declarative code is that you are using it to declare what is being created.  The more indirection you have, the more the 'how' gets mixed into the 'what'.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan and apply jobs take too long to run.&lt;/strong&gt;  Since Terraform has to check every resource, the more resources, the longer the job takes.  Speed is of the essence when creating new resources.  I think IaC should strive for the ideal that creating a new resource via IaC is as easy and fast as creating it with ClickOps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Too many conflicts.&lt;/strong&gt;  The thing about IaC is that long-lived resources rarely move together elegantly.  You &lt;em&gt;want&lt;/em&gt; your dev environment to be different from your stage environment, and your stage environment to be different from your prod environment.  Maybe permanently, like the size of the instances; maybe temporarily, like version updates.  You might run a test in your dev environment that you don't promote to your prod environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exceptions
&lt;/h2&gt;

&lt;p&gt;There are probably a hundred exceptions here.  If you run a big, mature platform team and are getting use out of your big, complicated repos, more power to you.  I've never managed Terraform at a Fortune 500.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Well-defined, long-lived, replicated environments.&lt;/strong&gt;  This has often been mentioned as a benefit of IaC, and it's often in the documentation for how one might use, say, CloudFormation, but I've never been in an environment where this was really useful.  I suspect, like a lot of documentation examples, this is an enterprise thing.  If you have a hundred dev environments you need to spin up, go forth and modularize!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Most IaC needs to be bulletproof: solid, and usable by people who hate IaC.  To do that, minimize cleverness.  Make IaC as small and declarative as possible.  An IaC workspace that supports resources for an application should only support resources for that application.  And don't be afraid to declare bankruptcy, because you don't have to do a full migration to start over; you can extract pieces one at a time.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>DRAFT - Dependency Hierarchies.</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Thu, 22 May 2025 15:15:49 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/draft-dependency-hierarchies-36kl</link>
      <guid>https://dev.to/mikkergimenez/draft-dependency-hierarchies-36kl</guid>
      <description>&lt;p&gt;I think lots have been written about, and everyone intuitively understands the idea of a service dependency hierarchy.  Some services are critical lower tiered services, that everything depends on, and some services are less critical higher tiered services.  So, like the network is a lower tier service that everything depends on, and DNS depends on the network, and your service depends on DNS to communicate with other services like your database.  Your database is more critical then your service, as maybe multiple services talk to the same database, and so your database shouldn't have a dependency on your service, and thus go down when your service goes down, likewise, your network shouldn't be dependent on any of your services, especially if those services are considered non-critical, or low SLA, because then your network adopts the same SLA as a non-critical service.&lt;/p&gt;

&lt;p&gt;I think there's more to unpack here as we think about DevOps and automation.  ArgoCD is a relatively heavyweight service, at least compared to a command-line tool like Terraform.  Should critical services like Karpenter or Istio be deployed using, and thus have a dependency on, a less critical service like ArgoCD?  You could define ArgoCD as a more critical service, but then you've increased the load on the teams that have to manage it.  And maybe ArgoCD isn't even architected to be a tier-0 service.  There are some vulnerabilities in the way CRDs are managed that make it such that uninstalling ArgoCD might uninstall your services or make them unmanageable.  &lt;/p&gt;

&lt;p&gt;The specific question that got me going down this route was around automatically creating alerts for critical EKS services.  Wouldn't it be great if installing a critical service automatically created the alerts that tell you when it is down?  I have a Helm chart that does this, but what if we could abstract this functionality behind a platform-engineering workflow tool like Koreo or KPT?&lt;br&gt;
Well, now we've taken a fairly critical function, core service alerting, and made it dependent on a possibly much less critical service, Koreo.  If Koreo goes down, then we're not creating core service alerts, possibly including the alert that tells us when Koreo goes down!&lt;/p&gt;

&lt;p&gt;The point being that there's a lot of fun magic in automating Kubernetes and platform engineering, but some thought has to be given to the dependency hierarchy.  Maybe you only want one deploy tool -- Crossplane or ArgoCD -- but then who watches the watchmen?  There should always be a critical tier layer, with as few dependencies as possible, for bootstrapping and later operating the core features of the infrastructure, say the Kubernetes environment.&lt;/p&gt;

&lt;p&gt;For sure, this advice is variable.  I use Kubernetes core services as an example, but if you have automation to create and manage the lifecycle of Kubernetes clusters, then maybe even core services like CoreDNS aren't top, tier-0 services, because the management of CoreDNS on a leaf Kubernetes cluster isn't dependent on anything about that cluster itself; it's dependent on something upstream functioning that may itself be tier-0.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A really great use-case for AI Coding</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Tue, 20 May 2025 15:34:39 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/a-really-great-use-case-for-ai-coding-23c7</link>
      <guid>https://dev.to/mikkergimenez/a-really-great-use-case-for-ai-coding-23c7</guid>
      <description>&lt;p&gt;I like many devs, have been relatively underwhelmed by AI coding tools.  I mean on the one hand, they're amazing!  What an amazing capability that seemed like science fiction just a few years ago.  On the other hand, for any reasonably complex application, the amount of context you have to provide, provides too much room for error, and if you let the LLM run wild, you can end up doing more harm than good.&lt;/p&gt;

&lt;p&gt;But there's a class of applications that I think LLMs could vastly accelerate, especially for the solo developer: any application that's modular.&lt;/p&gt;

&lt;p&gt;Two examples come to mind, one I've actively tested and am using, the other that I haven't gotten back to yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dashboards&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I have this dashboarding app at work I periodically update to try and unify all of the different information sources I'm interested in.  Some practical like hacker news and a todo list, some nerdy, like I have a vintage photo app slideshow and a Synthwave music player.  It's probably 98% vibe coded, and the specs are markdown file in the "specs" folder:

Here is my layout.md file:

I want the layout of the three columns in the app to be as follows. The order of the components in a column matches the order in the below list.

Header - Contains Global Settings, including light/dark selector.

Left Column - MusicPlayer, LinearComponent, ToDoListApp
CenterColumn - VintageImages, OnThisDay
RightColumn - FlashCard Component, HackerNewsComponent

## Theme.

- The app should have a selectable light and dark theme.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is the spec file for a single component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I want a music player component that has the following features:

- Select a playlist (The playlists are the list of folders in the music folder, the music is stored in said folder. The music folder is in the root of the project.)
- Play a random song from the selected playlist.
- Remember the last selected playlist.
- Show the currently playing song.
- Show play/pause/skip controls.
- Auto-play the next random song when one finishes.
- There should be a volume control
- The Playlist selector text should be darker or include the name of the playlist.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's not perfect.  &lt;/p&gt;

&lt;p&gt;Here is my todolist component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I want to add a todolist component. The rules of the todo list are as follows.

10 Items on the todo list
A Todolist Item has text and an optional link (This is creatable and editable, there's an edit button)
A Todolist Item has a delete button
There can be more than ten items on the todo list, but you can only see the 10 most recent items.
Store the items in a local json file for now, this should be modular so we can put in an sql backend if needed.
Right now page.tsx is getting big, let's ensure the todolist frontend items and todolist backend items are in their own separate files.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I explicitly say in this component spec that I want to store the data in a local JSON file, but the LLM for some reason interpreted "local json file" (which would have persisted on the hard disk) as "local storage," which does not persist between browsers.&lt;/p&gt;

&lt;p&gt;But still: a thing that might have taken me quite a bit of work to code, I can get 80% of the way there with a few minutes of speccing and prompting.&lt;/p&gt;

&lt;h2&gt;
  
  
  DAW
&lt;/h2&gt;

&lt;p&gt;A while ago I had this idea for a modular in-browser DAW.  The idea was that the different components - the sound source, the sequencer, the effects - would be modular in pretty predictable ways.  Rather than having, say, complex sequencers, you might literally have a sequencer that's just a 4/4 kick drum with an off-beat kick every 4 measures, or a 16-note hi-hat with a knob to vary the timbre of every second hat.  The noise makers would be similarly modular: instead of one big subtractive synth, you might have a bass synth, a bell synth, a lead synth, etc.  The goal being that for certain kinds of music there are fairly predictable patterns that you could pull up in natural language, and maybe improvise a track, and expect each component to be musical and compatible by nature.  There would be global timings and keys that could synchronize all the instruments.  &lt;/p&gt;

&lt;p&gt;It's an experiment; you lose some flexibility but gain a sort of casual generative music machine in your browser.  The point is that when I first started this, I had to code each of these components individually.  I look forward to getting back to it and structuring it in such a way that the context for each component is limited and can be written from a spec.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion.
&lt;/h3&gt;

&lt;p&gt;I don't know that any of the above makes giant strides toward true innovation.  But lots of products probably have a similar design pattern.  For instance, a tool like Zapier lives and dies on all the integrations it has.  And if you avoided approaching a problem because, even though the core logic was fairly simple, the integrations or modules would be fairly time-consuming, AI may help lower that barrier to entry. &lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>What is SuperIntelligence?</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Mon, 19 May 2025 14:25:19 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/what-is-superintelligence-3akg</link>
      <guid>https://dev.to/mikkergimenez/what-is-superintelligence-3akg</guid>
      <description>&lt;p&gt;I don't know if I would call it an insight, but in the process of listening to some thought leaders on the future of AI/LLM's, I'm having some realizations as to maybe what PHD level LLM, and "solve any problem" mean.  Like what gets the big companies excited about AI, so excited they put hundreds of billions of dollars into it, and why it doesn't seem to align with lay-engineers experience of it.&lt;/p&gt;

&lt;p&gt;I worked at a FAANG company, and one thing that surprised me was how many manual tasks there were.  How often the solution was "ssh into this server and run a shell script."  I suspect, perhaps obviously, that the problem is just that the surface area grows much faster than the number of ops folks you have.  So an LLM that can maintain all those scripts, with interfaces to hundreds of applications, maybe building libraries translated into ten different languages, is exactly what you want.  Those kinds of problems will benefit hugely from AI, because it's code that may already be clearly defined in an API or existing script and just needs its surface area expanded to all sorts of different use cases.  &lt;/p&gt;

&lt;p&gt;But above and beyond that, there's a certain kind of problem that needs raw intelligence thrown at it.  I think some of the excitement at BigCo is just that they have &lt;a href="https://news.ycombinator.com/item?id=43985489" rel="noopener noreferrer"&gt;problems&lt;/a&gt; that can be solved by throwing millions of GPUs and an LLM at a giant matrix of problem spaces.  No small company would spend millions training an LLM on advanced algorithms to improve efficiency by 1%.  But 1% at a FAANG?  And the ability to make those improvements over and over again?  Superpower.&lt;/p&gt;

&lt;p&gt;Does this mean that software developers will be out of a job in the short term?  I don't think so.  All the things lay software engineers talk about - translating the customer's requirements into working software, including the cross-org and cross-functional conversations, making the right tradeoffs in cost and efficiency, the endless meetings - are not what LLMs will be good at.  This is why people will always need to be in the path: &lt;a href="https://news.ycombinator.com/item?id=44024548" rel="noopener noreferrer"&gt;defining agents&lt;/a&gt; and workflows will still be a relatively human-centered exercise, needing the specific business context a particular human developer has.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Simple Math of why big tech needs AI</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Thu, 15 May 2025 15:08:51 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/the-simple-math-of-why-big-tech-needs-ai-15kb</link>
      <guid>https://dev.to/mikkergimenez/the-simple-math-of-why-big-tech-needs-ai-15kb</guid>
      <description>&lt;p&gt;I worked for a FAANG company for a bit, and one thing that surprised me was just how many tasks were manual.  How noticeably patchy the tooling and automation coverage was.  But also very obviously their tooling and automation was state of the art and much more sophisticated than small tech.&lt;/p&gt;

&lt;p&gt;But I think there's basic math at play.  If I work at a small company with 100 things, and 50% are automated, then I need, say, 50 engineers to handle the workload.  But if I work at big tech, and there are 100,000 things, and 75% are automated, then I need 25,000 engineers to handle the workload.  Even with a much higher automation rate, the absolute amount of manual work is enormous.  &lt;/p&gt;

&lt;p&gt;There's also algorithmic optimization.  I think algorithmic optimizations will trickle down, and that's good.  For &lt;a href="https://news.ycombinator.com/item?id=43985489" rel="noopener noreferrer"&gt;AlphaEvolve&lt;/a&gt; to produce a 1% speedup at most small-to-medium-sized businesses?  Fine, but not worth spending a lot of time on.  But for AlphaEvolve to produce a 1% speedup at Google?  That may save millions of dollars.&lt;/p&gt;

&lt;p&gt;The point of which is to say: there are probably lots of ways this will trickle down to everyday users, and software engineers across the board may see more and more of their tasks automated.  But a continuous truism in the industry is that what works for big tech often doesn't work for your medium-sized MSP.  &lt;/p&gt;

&lt;p&gt;This might explain the difference between why Google says AI can write 30% of its code and why medium-sized businesses struggle to get it to work.  There's just a lot less boilerplate to write at medium-sized businesses.  At Google, being able to automatically upgrade a Node.js dependency may literally impact thousands of microservices, saving thousands of engineering hours.  At your company it might save a few hours, if getting it going doesn't take longer than that in the first place.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>AI Innovation...Limitations?</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Thu, 15 May 2025 15:00:00 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/ai-innovationlimitations-2d3d</link>
      <guid>https://dev.to/mikkergimenez/ai-innovationlimitations-2d3d</guid>
      <description>&lt;p&gt;I have this idea for a web-based music app, basically an in-browser DAW.  &lt;/p&gt;

&lt;p&gt;The question of whether or not "AI is Creative" is somewhat unknowable, but I think there are certain sort of mathematical / combinatorial observations one can make about innovation.&lt;/p&gt;

&lt;p&gt;The Library of Babel is a short story by Jorge Luis Borges that presents a universe in the form of a vast library containing all possible books of a certain format.&lt;/p&gt;

&lt;p&gt;The library contains every possible arrangement of letters, spaces, and punctuation marks, meaning it holds not just every book ever written, but every book that could ever be written - including all possible variations with typos, nonsense, and meaningless combinations of characters.&lt;/p&gt;

&lt;p&gt;The library is described as having an infinite number of hexagonal rooms, each containing four walls of bookshelves. The inhabitants of this universe are librarians who search endlessly through this vast collection, seeking meaning.&lt;/p&gt;

&lt;p&gt;The central philosophical conceit is that while the library contains all possible knowledge (including perfect books explaining the library itself, prophecies of the future, and biographies of every person), these meaningful works are lost among an overwhelming mass of gibberish. The searchers cannot distinguish between profound wisdom and random characters.&lt;/p&gt;

&lt;p&gt;The story explores themes of information overload, the search for meaning amid chaos, and the paradox that complete access to all possible information actually results in meaninglessness rather than enlightenment. The librarians become obsessed, depressed, or fanatical in their searches, highlighting how an infinity of possibilities without guidance or context becomes as useless as having no information at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FWIW Here's the prompt I gave Claude.AI on this question:&lt;/strong&gt;&lt;br&gt;
There's a concept for a certain kind of innovation or thinking I'm trying to conceptualize and I wonder if this concept exists already.  People say that LLM's are creative, but it seems like one limitation in that creativity is that an LLM doesn't have a sort of life experience with which to generate the emotional passion behind an idea.  So, sort of in the style of the library of babel, you could have a few LLM's that come up with ideas, or you could probably have infinite LLM's that may be able to come up with all ideas, but part of innovation is the passion to build relationships and exploit life experience to launch an idea.&lt;/p&gt;

&lt;p&gt;I have an idea for a web-based music application, and it's not that I don't think that an LLM could come up with a similar idea, but that there's no way to get over the sort of signal to noise ratio of ideas, especially ideas that may be counter-intuitive or like avant-garde.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here was Claude's Response:&lt;/strong&gt;&lt;br&gt;
You're touching on something profound about the nature of innovation and creativity. While LLMs can generate countless ideas through recombination and pattern recognition in their training data, they lack what we might call "embodied creativity" - the kind that emerges from lived experience, emotional investment, and personal meaning.&lt;br&gt;
This reminds me of the concept of "tacit knowledge" developed by philosopher Michael Polanyi - knowledge that cannot be fully articulated or transferred symbolically. A jazz musician's feel for improvisation or a chef's intuition for flavor combinations comes from years of embodied practice and emotional connection to their craft.&lt;br&gt;
What you're describing might be considered a form of "experiential innovation" - where the value comes not just from the idea itself but from the contextual understanding and emotional drive behind it. Your music application idea likely stems from personal experiences with music, frustrations with existing tools, and a passion to create something meaningful to you.&lt;br&gt;
The signal-to-noise issue you mention is crucial. LLMs can generate endless variations, but they lack the embodied filter that says "this matters" or "this resonates" based on lived experience. They have no stake in the outcome, no emotional investment that drives persistence through obstacles.&lt;br&gt;
Would you be willing to share more about your music application idea? I'm curious how your personal experiences and passions have shaped this particular innovation.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>AI Innovation...Limitations?</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Wed, 14 May 2025 16:12:51 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/ai-innovationlimitations-1ag1</link>
      <guid>https://dev.to/mikkergimenez/ai-innovationlimitations-1ag1</guid>
      <description>&lt;p&gt;I have this idea for a web-based music app, basically an in-browser DAW.  The idea is to take some concepts from modular synthesis and apply them to a generative DAW, but even more specifically, for the most part, those modules will be simple and specific for sort of casual music generation in the background.  A chord sequencer with limited options for simple chord progressions may lead into an arpeggiator, that leads into a musical instrument that optimizes for a "bell", "pluck" or "pad" experience, including only the knobs and options one would need for that instrument.  There would be a global key setting to keep the music broadly in key.  Basically giving you lots of options for generative music but trying to remove some of the cognitive load of keeping things in sync or tuning a broad instrument.&lt;/p&gt;

&lt;p&gt;And I don't think AI would come up with this idea, which is to say, broadly, I think there are lots of ideas AI wouldn't come up with.  Though I'm not really saying it &lt;em&gt;couldn't&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The question of whether or not "AI is creative" is probably unknowable.  In a practical sense, I think you could say definitely yes: it combines things it learned about into new creations that didn't exist before.  But the devil is in the details: whether there's something spiritual about creativity that is not captured by AI, and, more specifically, whether humans have some input that modifies the way they come up with solutions to problems that AI doesn't have, at least in its current form.  &lt;/p&gt;

&lt;p&gt;But I do think there are certain sort of mathematical / combinatorial observations one can make about innovation.&lt;/p&gt;

&lt;p&gt;The Library of Babel is a short story by Jorge Luis Borges that presents a universe in the form of a vast library containing all possible books of a certain format.&lt;/p&gt;

&lt;p&gt;The library contains every possible arrangement of letters, spaces, and punctuation marks, meaning it holds not just every book ever written, but every book that could ever be written - including all possible variations with typos, nonsense, and meaningless combinations of characters.&lt;/p&gt;

&lt;p&gt;The library is described as having an infinite number of hexagonal rooms, each containing four walls of bookshelves. The inhabitants of this universe are librarians who search endlessly through this vast collection, seeking meaning.&lt;/p&gt;

&lt;p&gt;The central philosophical conceit is that while the library contains all possible knowledge (including perfect books explaining the library itself, prophecies of the future, and biographies of every person), these meaningful works are lost among an overwhelming mass of gibberish. The searchers cannot distinguish between profound wisdom and random characters.&lt;/p&gt;

&lt;p&gt;The story explores themes of information overload, the search for meaning amid chaos, and the paradox that complete access to all possible information actually results in meaninglessness rather than enlightenment. The librarians become obsessed, depressed, or fanatical in their searches, highlighting how an infinity of possibilities without guidance or context becomes as useless as having no information at all.&lt;/p&gt;

&lt;p&gt;I think The Library of Babel can be applied to the world we live in with AI, and it may explain why AI will never actually replace humans in terms of creativity, innovation, or work -- or at least not for a while.  Lots of problems actually require a specific, imperfect, non-optimal perspective, and the selling of that perspective to a specific community.  They require the experience of working with specific people or kinds of people.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FWIW Here's the prompt I gave Claude.AI on this question:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There's a concept for a certain kind of innovation or thinking I'm trying to conceptualize, and I wonder if this concept exists already.  People say that LLMs are creative, but it seems like one limitation on that creativity is that an LLM doesn't have a sort of life experience with which to generate the emotional passion behind an idea.  So, sort of in the style of the Library of Babel, you could have a few LLMs that come up with ideas, or you could probably have infinite LLMs that may be able to come up with all ideas, but part of innovation is the passion to build relationships and exploit life experience to launch an idea.&lt;/p&gt;

&lt;p&gt;I have an idea for a web-based music application, and it's not that I don't think that an LLM could come up with a similar idea, but that there's no way to get over the sort of signal to noise ratio of ideas, especially ideas that may be counter-intuitive or like avant-garde.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here was Claude's Response:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You're touching on something profound &lt;em&gt;(Thanks, Claude!)&lt;/em&gt; about the nature of innovation and creativity. While LLMs can generate countless ideas through recombination and pattern recognition in their training data, they lack what we might call "embodied creativity" - the kind that emerges from lived experience, emotional investment, and personal meaning.&lt;br&gt;
This reminds me of the concept of "tacit knowledge" developed by philosopher Michael Polanyi - knowledge that cannot be fully articulated or transferred symbolically. A jazz musician's feel for improvisation or a chef's intuition for flavor combinations comes from years of embodied practice and emotional connection to their craft.&lt;br&gt;
What you're describing might be considered a form of "experiential innovation" - where the value comes not just from the idea itself but from the contextual understanding and emotional drive behind it. Your music application idea likely stems from personal experiences with music, frustrations with existing tools, and a passion to create something meaningful to you.&lt;br&gt;
The signal-to-noise issue you mention is crucial. LLMs can generate endless variations, but they lack the embodied filter that says "this matters" or "this resonates" based on lived experience. They have no stake in the outcome, no emotional investment that drives persistence through obstacles.&lt;br&gt;
Would you be willing to share more about your music application idea? I'm curious how your personal experiences and passions have shaped this particular innovation.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Every piece of advice is based on optimizing away from an impulse.</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Mon, 12 May 2025 15:42:00 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/every-piece-of-advice-is-based-on-optimizing-away-from-a-given-impulse-65p</link>
      <guid>https://dev.to/mikkergimenez/every-piece-of-advice-is-based-on-optimizing-away-from-a-given-impulse-65p</guid>
      <description>&lt;p&gt;The reason influencers focus on the grind; fitness, entrepreneurship and the like, is that for most of us it's an unnatural impulse.  I don't want to speak for everyone, but my most natural impulse is towards laziness.  Definetely towards not doing.  But the grind isn't "good".  Not in an unqualified way.  For the first part, if what you're doing is not working, you probably need to grind less, step back and assess your strategy.  &lt;/p&gt;

&lt;p&gt;The entirety of the Google SRE model exists because people optimize too much towards feature delivery.  But there are two equally competing facts -- no one will use your e-mail platform without key features like spam filtering, and &lt;a href="https://www.youtube.com/watch?v=U53wC2A75Is" rel="noopener noreferrer"&gt;"the most important feature of any system is its reliability"&lt;/a&gt;.  These are both true.  Google SRE exists to pull people towards this reality, but it's not the only true reality. &lt;/p&gt;

&lt;p&gt;"Don’t find customers for your products, find products for your customers." -- Seth Godin&lt;/p&gt;

&lt;p&gt;"A product is only ever as good as its UX. You can have the most innovative technology, but if it isn’t user-friendly, it will struggle to reach mainstream adoption" -- Blake Ross&lt;/p&gt;

&lt;p&gt;Sure, a product with 0 uptime has no UX, but I think most early stage companies would say that actually focusing on reliability is probably the wrong task, because you don't have enough users to realize that your product is unreliable.  A little downtime is ok to speed up the learning that comes with developing and testing UX/features quickly to find product/market fit.&lt;/p&gt;

&lt;p&gt;You could say that Ben Treynor isn't wrong in the last scenario, and he's not, but very clearly that piece of advice is not useful because it's not trying to optimize away from an unhealthy impulse.  Focusing on UX and feature testing for product/market fit is the correct impulse.  All other pieces of advice would be irrelevant if not downright harmful.&lt;/p&gt;

&lt;p&gt;Why do I think this is important?  It's important to know that when a blog post states "Well, actually..." It's not really demonizing the impulse it's speaking against -- or, you shouldn't interpret it that way.  Concerns about &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC9869993/" rel="noopener noreferrer"&gt;Exercise Addiction&lt;/a&gt; are useful to a very small subset of the population, and this population probably doesn't even include ultra-marathoners who run 100 mile races.&lt;/p&gt;

&lt;p&gt;Almost every post on agile fits into this category.  It's only useful if your company fits into one of the pathologies discussed in the post.  Agile in general is a reaction to absurd levels of planning and a refusal to get started until everything fits into a perfectly planned waterfall chart.  That doesn't mean adding traditional project management techniques to your company will slow things down and recreate the pathological behaviors Agile was railing against.  "Agile means we don't plan" is facially ridiculous.  &lt;/p&gt;

&lt;p&gt;I think the place this becomes most important is when it's not obvious.  Sometimes I hear a piece of advice and immediately put it into this category: "I generally obsess in this area, so this piece of advice is not specifically for me; I already optimize in that direction."  But what pieces of life advice go unquestioned?  &lt;/p&gt;

&lt;h2&gt;
  
  
  Internal Development Platforms.
&lt;/h2&gt;

&lt;p&gt;One of my favorite things to question lately is the idea that an internal platform should be entirely optional -- that if people aren't using it, it's because you've built the wrong thing, and that more product focus is the answer.  I could probably write multiple articles on this, but let's start by saying: this is mostly the correct impulse.  Most platform / SRE / devops teams are optimized in the opposite direction; most would rather just mandate their platform and move on with their lives. &lt;/p&gt;

&lt;p&gt;And maybe the response is more internal sales and marketing over a top-down mandate.  But I think the point of this dichotomy is to imply a push-pull between an internal, self-critical focus and an external, figure-out-what-the-problem-really-is focus.  Even if you've built the right thing, and developers acknowledge that you've built the right thing, there can be other things getting in the way of platform adoption.  So get out there and figure out what that is, and whether a team just needs time to adopt -- sure, make it easier -- but don't assume you've built the wrong thing just because that, in many ways, is what is easiest to focus on and most in your control.  Once you start the journey towards platform engineering, you may find that talking to your fellow developers and building the right thing were the easy part.&lt;/p&gt;

&lt;p&gt;I thought I'd close by linking this inc.com article on "The 7 best pieces of advice for living a happy life."  How does this principle apply to that article?  The subheading is "This applies to everyone."  (Hint: it doesn't.)&lt;/p&gt;

&lt;p&gt;inc.com -- &lt;a href="https://www.inc.com/nicolas-cole/the-7-best-pieces-of-advice-for-living-a-happy-life.html" rel="noopener noreferrer"&gt;The 7 Best Pieces of Advice for Living a Happy Life&lt;/a&gt; &lt;em&gt;This applies to everyone.&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stay true to yourself.&lt;/li&gt;
&lt;li&gt;Do what you love–not what you’re told to love.&lt;/li&gt;
&lt;li&gt;Create the environment that’s right for you.&lt;/li&gt;
&lt;li&gt;Choose your friends wisely.&lt;/li&gt;
&lt;li&gt;Develop positive habits.&lt;/li&gt;
&lt;li&gt;Create certainty and leave room for uncertainty.&lt;/li&gt;
&lt;li&gt;Be vulnerable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Posts that fit into this category:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://paulgraham.com/ds.html" rel="noopener noreferrer"&gt;Do things that don't scale&lt;/a&gt; Paul Graham&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>platformengineering</category>
      <category>devops</category>
    </item>
    <item>
      <title>Devops is a Job Title</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Mon, 05 May 2025 15:23:50 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/devops-is-a-job-title-284i</link>
      <guid>https://dev.to/mikkergimenez/devops-is-a-job-title-284i</guid>
      <description>&lt;p&gt;I've wanted to write this article for a while now.  Because I often see the saying "Devops is not a role" &lt;a href="https://medium.com/@jeromedecinco/the-myth-of-the-devops-engineer-role-a-non-existing-job-title-e0c2628695de" rel="noopener noreferrer"&gt;all&lt;/a&gt; &lt;a href="https://www.linkedin.com/pulse/devops-cultural-shift-job-title-nathan-rasch-mba-pmp-safe-sm-csm/" rel="noopener noreferrer"&gt;over&lt;/a&gt; &lt;a href="https://cto.ai/blog/common-devops-misconceptions-its-not-a-role-but-a-culture/" rel="noopener noreferrer"&gt;the&lt;/a&gt; &lt;a href="https://dev.to/pooyan/devops-is-a-culture-not-a-role-14m"&gt;place&lt;/a&gt;, and every time I see it I have a visceral reaction. &lt;/p&gt;

&lt;p&gt;Ironically, I'm reading the book "Team Topologies", which is the ultimate "Devops is not a role" book.  But I want to unpack my thoughts for a second.  I've often noticed that when I'm presented with a new idea, my impulse is to argue against it, to play devil's advocate.  Partially this is for the same reason most people do it: I'm afraid of change; don't move my cheese.  And that reaction is usually not helpful -- playing devil's advocate without a strong purpose in mind is an anti-pattern.&lt;/p&gt;

&lt;p&gt;But I do have a strong purpose in mind.  I find it most useful to approach problems this way: start with the no case.  We don't want to change.  Then start to open up the pieces bit by bit and ask: what would it look like if we did?  What might we do first?  What's the smallest step we could take to get there?  What am I most wrong about?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The thing we're doing now works.  Why change it?  No really.  &lt;em&gt;Why&lt;/em&gt; change it?  What are the reasons?  What will this new thing do better?  What do we want to make sure we're preserving from our current patterns?  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second is similar, maybe complementary: What are the reasons for this new thing?  We can't understand why we do this new thing unless we understand it in contrast to what it's replacing, both the good and the bad.  There's a reason you've been doing things the way you've been doing them, and you're likely to lose something.  What are you going to lose?  And how do you sell this to people without understanding what benefits they're getting from the current thing?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So in that vein, the root of this question is: if devops is not a job title but a culture, why does this need to be argued so hard?  That is, why are people treating devops like a job title instead of a culture, and what are they getting out of it?&lt;/p&gt;

&lt;h3&gt;
  
  
  Ops + IaC is good, actually.
&lt;/h3&gt;

&lt;p&gt;I think one of the core things people add when they say devops is not a role is that just adding IaC to ops doesn't make you devops.  But if you take a team that was doing most of their changes manually, say by clicking in the console, and get them to start making their changes using Terraform, that's good!  Act as if.  Start to get a feel for what that pattern is like.  Start to understand the problems of working that way. &lt;/p&gt;

&lt;h3&gt;
  
  
  You can't make culture change if you don't have the skills to get there.
&lt;/h3&gt;

&lt;p&gt;If you take a bunch of operations engineers who've never touched IaC or a code base before and tell them to break down barriers and embed themselves with software engineering teams, that is destined to fail.  To add on to the above: develop the skills in IaC so that when you do embed with a team, or try to make the culture change, you have the skills in the tools needed to respond quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Devops is a constant effort.
&lt;/h3&gt;

&lt;p&gt;I think devops culture is the fudge of the sundae -- it's not just the cherry on top; it's more deeply integrated than that.  But most of the work your teams do will be done by "devops engineers" on an "ops team".  Ops will deploy and manage infrastructure, somewhat tangentially to your developers' needs.  &lt;/p&gt;

&lt;h3&gt;
  
  
  "Devops is not a role" is not really actionable.
&lt;/h3&gt;

&lt;p&gt;"Devops is not a role, it's a culture" is directionally correct.  It's the right way to view the problem, and if you get deep into it, there's a lot of value there.  But if you're just getting started, "go change the culture" is not really useful advice, and lots of short LinkedIn or blog posts amount to summarizing the writer's frustration, not real, deep advice. &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;What would I suggest people take away from this?  And how does this impact the way that I work? &lt;br&gt;
"Devops is a culture" is a good long-term vision to have, but per above, it may not be actionable.  Go into it with some skepticism.  Be honest about where you are on your journey, and accept that certain organizational patterns may prevent you from adopting the culture whole hog.  To do devops right, you need buy-in from pretty high in the organization, and if you don't have it, there are still ways to take advantage of some of the learnings of devops.  Don't let perfect be the enemy of good.&lt;/p&gt;

&lt;p&gt;If you're just looking for small things to do, you can hire someone with IaC experience or introduce Terraform, and that's an ok step to take; you're not failing at running an ops team.  You can hire some folks, call them "devops", and then slowly find ways to build bridges between your team and dev, rather than solely petitioning the SVP of engineering to enable "culture change".  In fact, you may not be doing things all that much differently from teams that say they're doing "devops" for real.  Then I'd say use your "devops engineers" to create interest, and move towards an enablement team that gets your customers pushing for the cultural changes that devops inspires, rather than pushing for them yourselves.&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>Dynamic Programming</title>
      <dc:creator>mikkergimenez</dc:creator>
      <pubDate>Thu, 23 Jan 2025 03:29:51 +0000</pubDate>
      <link>https://dev.to/mikkergimenez/dynamic-programming-1d50</link>
      <guid>https://dev.to/mikkergimenez/dynamic-programming-1d50</guid>
      <description>&lt;p&gt;Every time I've approached learning algorithms, I've come across the phrase "Dynamic Programming". Which sounds complicated, but  it's interesting the book I'm reading just basically says "Dynamic Programming is mostly just a matter of taking a recursive algorithm and finding the overlapping subproblems, then you can cache those results for future recursive calls)&lt;/p&gt;

&lt;p&gt;Which makes it sound pretty easy.  I mean, I know what recursion is, though -- maybe because I didn't learn it until much later in my programming journey -- it seems much harder to read and much less intuitive to me than just using loops.  If I squint I can see why recursion might be intuitive for some, just not for me.  The most common recursive example, the Fibonacci sequence, always seemed bananas to me.  Why do all that work?  Just add the numbers.&lt;/p&gt;
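&lt;p&gt;For what it's worth, the caching trick the book describes is easiest to see on the Fibonacci example itself -- a minimal sketch:&lt;/p&gt;

```python
from functools import lru_cache

def fib_naive(n):
    # Plain recursion: the same subproblems are recomputed over and over,
    # which is why this takes exponential time.
    if n in (0, 1):
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Identical recursion, but each result is cached after its first call,
    # so every subproblem is solved exactly once (linear time).
    if n in (0, 1):
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

&lt;p&gt;That's the whole idea: the same algorithm, plus a cache over the overlapping subproblems.&lt;/p&gt;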

&lt;p&gt;Maybe this is why Question 55 on LeetCode, Jump Game, seems bananas to address as a dynamic programming problem.  Sure, you can go to all that work, but my observation is that you can solve it with a single loop by treating each index as a gas station.  Start with the first number as your fuel and burn one unit at each step.  If you reach an index whose number is bigger than your current fuel, refuel to that number.  See if you can make it to the end.&lt;/p&gt;
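&lt;p&gt;Here's a sketch of that gas-station loop (my reading of the approach above, not LeetCode's official solution):&lt;/p&gt;

```python
def can_jump(nums):
    # Treat each index as a gas station: fuel is how much farther we can
    # travel from where we currently stand.
    fuel = nums[0]
    for i in range(1, len(nums)):
        if fuel == 0:
            return False    # stranded before reaching index i
        fuel -= 1           # moving one step burns one unit
        if nums[i] > fuel:
            fuel = nums[i]  # this station offers more range: refuel
    return True
```

&lt;p&gt;One pass, constant extra space -- no memo table needed.&lt;/p&gt;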

&lt;p&gt;But I'll play the game.  It seems like the base case for Jump Game is that you check every spot against every other spot to see if that spot can make it to the end.  Then the real optimization is that you can memoize certain spots, because in the naive version you're checking the same spot multiple times.  This reduces O(2^N) to O(N^2).  (Apparently the editorial for Question 55, as of Jan 22, 2025, has the wrong example in "Approach 3: Dynamic Programming Bottom-up" -- it looks like it comes from the stock trading example.)&lt;/p&gt;
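&lt;p&gt;Played straight, the memoized top-down version might look like this (a sketch, using Python's built-in cache rather than a hand-rolled memo table):&lt;/p&gt;

```python
from functools import lru_cache

def can_jump_memo(nums):
    @lru_cache(maxsize=None)
    def can_reach_end(i):
        # Without the cache this branches exponentially; with it, each index
        # is solved once and each solve scans at most N successors: O(N^2).
        if i == len(nums) - 1:
            return True
        furthest = min(i + nums[i], len(nums) - 1)
        return any(can_reach_end(j) for j in range(i + 1, furthest + 1))
    return can_reach_end(0)
```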

&lt;p&gt;Jump Game 2 seems similar: if you work backwards, you can resolve the question pretty easily by calculating, at each step, the minimum number of jumps required to get from that step to the end, starting at 0 for the last step.  The step before is either 1 (if its number is over 0) or -1 if that step can never get there.&lt;/p&gt;
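&lt;p&gt;That backwards pass might look like this sketch -- O(N^2), but it makes the idea concrete:&lt;/p&gt;

```python
def min_jumps_backwards(nums):
    # jumps[i] = fewest jumps from index i to the last index,
    # filled right to left; -1 marks spots that can never get there.
    n = len(nums)
    jumps = [-1] * n
    jumps[n - 1] = 0
    for i in range(n - 2, -1, -1):
        reachable = [jumps[j]
                     for j in range(i + 1, min(i + nums[i], n - 1) + 1)
                     if jumps[j] != -1]
        if reachable:
            jumps[i] = 1 + min(reachable)
    return jumps[0]
```

&lt;p&gt;Each index just asks: of everything I can jump to, which already knows the cheapest way home?&lt;/p&gt;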

&lt;p&gt;Ultimately, I think the problem with learning dynamic programming, at least as an amateur, is that so many problems listed on LeetCode as dynamic programming problems actually aren't.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
