<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Solomon Aboyeji</title>
    <description>The latest articles on DEV Community by Solomon Aboyeji (@solomonaboyeji).</description>
    <link>https://dev.to/solomonaboyeji</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3534425%2F0586ea9b-1f3e-4354-b4bd-4f1d3426c207.jpeg</url>
      <title>DEV Community: Solomon Aboyeji</title>
      <link>https://dev.to/solomonaboyeji</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/solomonaboyeji"/>
    <language>en</language>
    <item>
      <title>If LLMs Were ATMs, Would You Still Count Your Money?</title>
      <dc:creator>Solomon Aboyeji</dc:creator>
      <pubDate>Mon, 06 Apr 2026 08:09:43 +0000</pubDate>
      <link>https://dev.to/solomonaboyeji/if-llms-were-atms-would-you-still-count-your-money-53oe</link>
      <guid>https://dev.to/solomonaboyeji/if-llms-were-atms-would-you-still-count-your-money-53oe</guid>
      <description>&lt;p&gt;There is this joke about some Nigerians still counting their money at the ATM. Even with an automated system, people in my home country, Nigeria in West Africa, perform this common practice of counting their cash the moment it comes out of the machine. I have never personally known anyone who got a different amount than what they asked for, but the habit carries on. And I think that habit has something to teach us about how we should be treating LLMs right now.&lt;/p&gt;

&lt;p&gt;The buzzword everywhere you turn is AI, and many companies want to get on the trend quickly. As we rush to embrace this tool to become more productive and make our businesses more efficient, it falls on us as engineers to check what this technology actually outputs. Do not vibe code to the point that you no longer control what is being written. We should do some extra checks, the same way people still count their notes at the ATM even when the machine has never shortchanged them. Trust is earned, and verification is cheap insurance.&lt;/p&gt;

&lt;p&gt;So what does "counting your money" look like when the machine is an LLM?&lt;br&gt;
First, consider that an agent is essentially a loop: prompt an LLM, feed its output back to itself or to a different, more capable model as the next prompt, and interleave other actions such as criticism and review. Once you see it that way, it becomes obvious that every step in that chain is a place where things can quietly go wrong. You need checks at each step, not just at the end.&lt;/p&gt;
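&lt;p&gt;In code, you might picture that loop like this minimal sketch. Every name here is mine for illustration, not any real framework's API: &lt;code&gt;call_llm&lt;/code&gt; stands in for any model call, and &lt;code&gt;check_step&lt;/code&gt; is a hypothetical per-step validator.&lt;/p&gt;

```python
# Minimal sketch of an agent as a prompt loop. `call_llm` is a stand-in
# for any real LLM API; `check_step` is a hypothetical validator.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"draft answer for: {prompt}"

def check_step(output: str) -> bool:
    """Reject obviously bad output: empty, or suspiciously long."""
    return len(output) > 0 and 2000 >= len(output)

def run_agent(task: str, steps: int = 3) -> str:
    output = task
    for step in range(steps):
        output = call_llm(output)   # the model produces the next draft
        if not check_step(output):  # verify EVERY step, not just the last
            raise ValueError(f"step {step} failed validation")
        # review pass: the previous output goes back in as a new prompt
        output = call_llm(f"criticise and improve: {output}")
        if not check_step(output):
            raise ValueError(f"review at step {step} failed validation")
    return output
```

&lt;p&gt;The point is not the specific checks; it is that each hop through the model gets its own verification.&lt;/p&gt;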

&lt;p&gt;Second, LLMs are still unreliable in some vital areas. We need to put a system and structure in place, with a set of hard rules that the LLM's output is run against. These are not prompt rules; I mean a programmatic set of rules enforced in code. Take access checks as an example. Do not just prompt, "answer this question if the user is an admin". The LLM is vulnerable to prompt injection attacks, where a user slips instructions into their input telling the model to ignore the admin check entirely. Access control belongs in your code, not in your prompt.&lt;/p&gt;
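&lt;p&gt;Here is a minimal sketch of what that looks like in practice. The &lt;code&gt;User&lt;/code&gt; class and &lt;code&gt;answer_question&lt;/code&gt; are illustrative names, and &lt;code&gt;call_llm&lt;/code&gt; again stands in for any real LLM API.&lt;/p&gt;

```python
# Sketch: access control enforced in code, not in the prompt.
# `User`, `answer_question`, and `call_llm` are illustrative names.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    is_admin: bool

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"answer: {prompt}"

def answer_question(user: User, question: str) -> str:
    # The check runs before the model ever sees the input, so a
    # prompt-injected "ignore the admin check" has nothing to bypass.
    if not user.is_admin:
        raise PermissionError("admin access required")
    return call_llm(question)
```

&lt;p&gt;No wording in the user's question can get past a check the model never controls.&lt;/p&gt;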

&lt;p&gt;Third, evaluate the model's response regularly. When you change models or change any parameters in your pipeline, run the same checks each time to ensure the model has not degraded or drifted from expected or acceptable answers. This can be done with different evals.&lt;/p&gt;
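&lt;p&gt;Even a tiny, fixed golden set rerun on every model or parameter change will catch drift early. A minimal sketch follows; the questions, expected answers, and the &lt;code&gt;call_llm&lt;/code&gt; stand-in are all illustrative.&lt;/p&gt;

```python
# Minimal regression-eval sketch: a fixed golden set run against the
# model on every change. `call_llm` stands in for the real model.

def call_llm(prompt: str) -> str:
    """Stand-in returning canned answers so the sketch is runnable."""
    canned = {"capital of France?": "Paris", "2 plus 2?": "4"}
    return canned.get(prompt, "unknown")

GOLDEN_SET = [
    ("capital of France?", "Paris"),
    ("2 plus 2?", "4"),
]

def run_evals() -> float:
    """Return the pass rate; gate the deploy if it drops below a threshold."""
    passed = sum(1 for q, expected in GOLDEN_SET if expected in call_llm(q))
    return passed / len(GOLDEN_SET)
```

&lt;p&gt;Run this in CI and compare the pass rate against the previous run; a sudden drop is your signal that the model or pipeline has drifted.&lt;/p&gt;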

&lt;p&gt;And finally, if you vibe code a product, please get a Software Engineer to review it before you hand it over to customers, whether paying or non-paying. Many of the topics I mentioned above can be very technical, and you need someone who knows how code should behave to reliably implement them. Businesses should not be too eager to replace engineers.&lt;/p&gt;

&lt;p&gt;The ATM in Nigeria has been around for years, and it still gives people the right amount almost every single time. And still, we count. That instinct is not paranoia, it is discipline. LLMs are newer, less predictable, and far more capable of being wrong in ways you will not notice at a glance. So, count the money, review the code!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Developer Identity Crisis No One Is Talking About</title>
      <dc:creator>Solomon Aboyeji</dc:creator>
      <pubDate>Sat, 28 Feb 2026 15:09:54 +0000</pubDate>
      <link>https://dev.to/solomonaboyeji/the-developer-identity-crisis-no-one-is-talking-about-117o</link>
      <guid>https://dev.to/solomonaboyeji/the-developer-identity-crisis-no-one-is-talking-about-117o</guid>
      <description>&lt;p&gt;We have moved from actively writing code by hand, as this was arguably the fun thing to do in programming: the feeling that we were crafting things from scratch and watching it power engines and support communities around the globe.&lt;/p&gt;

&lt;p&gt;However, in this transition, where code is cheap, the goalposts have shifted to actively testing what your AI agents built: quality assurance, translating users' problems into well-defined tickets, and more time spent reviewing code. Most of these are things the average engineer dislikes.&lt;/p&gt;

&lt;p&gt;Writing code by hand isn't going away entirely, as we still need engineers who understand code to debug and, importantly, architect systems, and you can only architect what you know how to build. This craft isn't dead, but evolving.&lt;/p&gt;

&lt;p&gt;We now need to ensure that what we build with our agents is reliable, maintainable even for human readers, and does not degrade performance. To be fair, AI itself can help us run many of these checks and keep things in the right places.&lt;/p&gt;

&lt;p&gt;The developers who only identify with writing code will struggle to adapt. The skills that matter are shifting; figure out what they are and learn them.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>code</category>
      <category>career</category>
    </item>
    <item>
      <title>Going Local? 2,000+ Attackers Are Already Waiting</title>
      <dc:creator>Solomon Aboyeji</dc:creator>
      <pubDate>Tue, 20 Jan 2026 04:55:34 +0000</pubDate>
      <link>https://dev.to/solomonaboyeji/going-local-2000-attackers-are-already-waiting-4gm7</link>
      <guid>https://dev.to/solomonaboyeji/going-local-2000-attackers-are-already-waiting-4gm7</guid>
      <description>&lt;p&gt;Going local means hosting your own infrastructure instead of relying on managed cloud services. In this post, I talk about 2,331 login attempts in less than 30 Days, the reality of running your own VPS.&lt;/p&gt;

&lt;p&gt;You resonate with the idea of having your own setup and deployment. It gives you control and, let's face it, it's cheap. You install Docker, set up UFW, open some ports to the web, block others. You think to yourself: now we're good.&lt;/p&gt;

&lt;p&gt;Well, it takes one curious attacker to find the gaps in your checklist.&lt;/p&gt;

&lt;p&gt;In less than a month, there have been 2,331 failed SSH login attempts and 235 banned IPs, all trying to gain access to the VPS, with bans occurring every 30 to 60 minutes, around the clock. These are automated, coordinated attempts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3i1svj7cn459opnu2nn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3i1svj7cn459opnu2nn.png" alt="fail2ban status" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently spun up a VPS and discovered that some ports were exposed to the public even with UFW configured. A MongoDB that was exposed on port 27017 for "internal use only"? It's been public this whole time. Turns out Docker bypasses UFW entirely. This isn't a bug. It's how Docker works, and it has been known for years; however, many developers either don't know or choose to ignore it because of the &lt;a href="https://dev.to/solomonaboyeji/the-hidden-cost-of-abstraction-27kk"&gt;convenience of abstraction&lt;/a&gt; Docker provides.&lt;/p&gt;

&lt;h2&gt;What are they looking for?&lt;/h2&gt;

&lt;p&gt;The failed logins reveal what attackers expect to find on cheap VPS servers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3259u7iwo04hf1txd5jr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3259u7iwo04hf1txd5jr.png" alt="Failed Logins" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It seems they're not guessing randomly. They know what people run on $5 servers, and here it seems crypto infrastructure with hot wallets is a prime target.&lt;/p&gt;

&lt;p&gt;Now that you are local, are you secure? In 2026, this is the question we all need to be asking. It's about more than just having a presence online; it's about how secure your setup actually is, whether that means a $3-5 VPS on Hetzner or an old laptop in your garage running your side projects.&lt;/p&gt;

&lt;p&gt;If you're running Docker on an unmanaged VPS, you probably need to fix this. I put together a script that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prevents Docker from bypassing your firewall&lt;/li&gt;
&lt;li&gt;Blocks all ports by default, exposing only what you explicitly allow&lt;/li&gt;
&lt;li&gt;Hardens SSH to key-based authentication only&lt;/li&gt;
&lt;li&gt;Installs fail2ban to stop brute-force attempts&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;How it works&lt;/h2&gt;

&lt;p&gt;The script does four things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Fixes the Docker/UFW bypass&lt;/strong&gt;&lt;br&gt;
It installs ufw-docker, which modifies the iptables chain order so UFW rules are checked before Docker's. Without this, Docker punches holes through your firewall whenever you expose a port.&lt;/p&gt;
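&lt;p&gt;For reference, these are configuration commands of the kind the script runs for this step, based on the &lt;code&gt;chaifeng/ufw-docker&lt;/code&gt; project (the container name &lt;code&gt;my-nginx&lt;/code&gt; is a placeholder; as with the script itself, try this on a throwaway server first):&lt;/p&gt;

```shell
# Install ufw-docker and reorder the iptables chains so UFW is
# consulted before Docker's own rules (run as root on the VPS).
sudo wget -O /usr/local/bin/ufw-docker \
  https://github.com/chaifeng/ufw-docker/raw/master/ufw-docker
sudo chmod +x /usr/local/bin/ufw-docker
sudo ufw-docker install

# After that, container ports must be opened explicitly:
sudo ufw-docker allow my-nginx 80
```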

&lt;p&gt;&lt;strong&gt;2. Blocks everything by default&lt;/strong&gt;&lt;br&gt;
UFW is configured to deny all incoming traffic. Only ports you explicitly specify (default: 22, 80, 443, 3000) are accessible from the internet. Your containers can still talk to each other internally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Hardens SSH&lt;/strong&gt;&lt;br&gt;
Disables password authentication (key-only), disables root login, and removes cloud-init overrides that re-enable password auth. (Be careful with this: you might lock yourself out entirely if your SSH key is missing.)&lt;/p&gt;
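&lt;p&gt;The hardened settings correspond roughly to this &lt;code&gt;sshd_config&lt;/code&gt; fragment (illustrative file path and values; confirm key-based login works from a second session before closing your current one):&lt;/p&gt;

```
# /etc/ssh/sshd_config.d/99-hardening.conf (illustrative)
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```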

&lt;p&gt;&lt;strong&gt;4. Installs fail2ban&lt;/strong&gt;&lt;br&gt;
Three failed SSH attempts result in a 24-hour ban. This stops brute-force attacks from hammering your server indefinitely. (You can adjust the ban duration to your needs.)&lt;/p&gt;
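&lt;p&gt;In fail2ban terms, that policy is a small &lt;code&gt;jail.local&lt;/code&gt; fragment like this (values illustrative, matching the numbers above):&lt;/p&gt;

```
# /etc/fail2ban/jail.local (illustrative values)
[sshd]
enabled  = true
maxretry = 3
bantime  = 24h
findtime = 10m
```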

&lt;p&gt;Here is the link to the GitHub repo; I would advise running it on a throwaway server before using it in a more serious project: &lt;a href="https://github.com/solomonaboyeji/secure-vps" rel="noopener noreferrer"&gt;https://github.com/solomonaboyeji/secure-vps&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>linux</category>
      <category>webdev</category>
      <category>docker</category>
    </item>
    <item>
      <title>Engineers who explore build better AI products</title>
      <dc:creator>Solomon Aboyeji</dc:creator>
      <pubDate>Sat, 20 Dec 2025 17:11:10 +0000</pubDate>
      <link>https://dev.to/solomonaboyeji/engineers-who-explore-build-better-ai-products-1k9m</link>
      <guid>https://dev.to/solomonaboyeji/engineers-who-explore-build-better-ai-products-1k9m</guid>
      <description>&lt;p&gt;&lt;strong&gt;Data is king,&lt;/strong&gt; but not knowing what data you have, its nature and characteristics is what keeps businesses at a loss. I implemented a RAG system to answer questions over a lengthy PDF document, but before feeding it to the model, I spent most of my time doing Exploratory Data Analysis (EDA). &lt;/p&gt;

&lt;p&gt;This helped me go back into my data and clean it better, because I now know where most of the confidence scores lie, the areas I need to work on, and, most importantly, where humans need to spend their time.&lt;/p&gt;

&lt;p&gt;I believe in saving cost and time by ensuring we focus on the most important tasks and let computers handle the ones without nuances. How would you know what those nuances are if you don't dig into your data and uncover its relationships, attributes, strengths, and anomalies? In a world where everyone is rushing to "AI," do they really have an in-depth understanding of their data and how to leverage it to thrive?&lt;/p&gt;

&lt;p&gt;Your data is trying to tell you something. Go in and listen to it, ask it questions, and you'll gain insights.&lt;/p&gt;

&lt;p&gt;Here are the statistics of the chunking process (explained in the paragraphs below).&lt;/p&gt;

&lt;p&gt;I enriched the original chunks with more data:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1ivfmm9rvyoc2kt9gi5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1ivfmm9rvyoc2kt9gi5.png" alt="Enriched Chunks" width="702" height="1454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Chunk Classification Report&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zuknwrmrznpz9r8ik7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zuknwrmrznpz9r8ik7s.png" alt="Enrichment Report" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking at the enrichment report:&lt;/p&gt;

&lt;p&gt;I broke the document into 434 pieces. First, I ran rule-based classification. For chunks where the rules weren't confident, they were flagged for LLM processing. More than half (58.8%) fell into this category, requiring AI calls to provide their own confidence scores. Now imagine a business or individual with needs at 50x this scale. That's over 12,000 AI calls flagged before even running them. Even with batching, you're processing significantly more tokens through the LLM, and costs add up fast.&lt;/p&gt;

&lt;p&gt;Without EDA, I might have fed all 434 chunks through the LLM, paying for processing that my rules already handled confidently. The 41.2% success rate from simple rules showed me what was already working, that's nearly half the workload I could have unnecessarily sent to AI processing. At 50x scale, that's avoiding 9,000+ wasteful LLM calls.&lt;/p&gt;
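&lt;p&gt;The rules-first, LLM-fallback flow above might be sketched like this. The rule patterns, the 0.8 threshold, and &lt;code&gt;call_llm&lt;/code&gt; are all illustrative stand-ins, not my production pipeline.&lt;/p&gt;

```python
# Sketch: classify chunks with cheap rules first; only low-confidence
# chunks fall through to the (expensive) LLM. Names, patterns, and the
# 0.8 threshold are illustrative; `call_llm` stands in for a real model.

def rule_classify(chunk: str):
    """Return (label, confidence) from simple keyword rules."""
    if chunk.strip().startswith("|"):
        return "spec_table_row", 0.95
    if "system" in chunk.lower():
        return "system_section", 0.9
    return "unknown", 0.2

def call_llm(chunk: str) -> str:
    return "llm_label"  # stand-in for a real classification call

def classify(chunks, threshold=0.8):
    """Return the labels plus a count of how many LLM calls were spent."""
    results, llm_calls = [], 0
    for chunk in chunks:
        label, conf = rule_classify(chunk)
        if conf >= threshold:
            results.append(label)  # rules were confident; no AI cost
        else:
            results.append(call_llm(chunk))  # fallback only when unsure
            llm_calls += 1
    return results, llm_calls
```

&lt;p&gt;Tracking &lt;code&gt;llm_calls&lt;/code&gt; alongside the labels is exactly the kind of number EDA surfaces: it tells you how much of the workload your rules already handle before a single token is billed.&lt;/p&gt;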

&lt;p&gt;Here is another insight: two categories (system_section and spec_table_row) made up 51% of everything. With 41.2% already automated through rules, it is worth analysing why these types trigger the LLM fallback and building better rules to handle them confidently. However, if that is difficult or genuinely wasteful effort, let the LLM classify them and focus human review on the low-confidence results, where human oversight catches nuances better.&lt;/p&gt;

&lt;p&gt;Not all problems require LLM calls. Some genuinely just need better engineering. With EDA, we understand what data we have, what it can do, and deliver more value to users and stakeholders.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
      <category>agents</category>
    </item>
    <item>
      <title>The Hidden Cost of Abstraction - Making an Informed Decision</title>
      <dc:creator>Solomon Aboyeji</dc:creator>
      <pubDate>Sun, 23 Nov 2025 22:31:19 +0000</pubDate>
      <link>https://dev.to/solomonaboyeji/the-hidden-cost-of-abstraction-27kk</link>
      <guid>https://dev.to/solomonaboyeji/the-hidden-cost-of-abstraction-27kk</guid>
      <description>&lt;p&gt;Abstraction promises convenience, but it often comes with hidden costs that only reveal themselves as time goes on.&lt;/p&gt;

&lt;p&gt;Remember when you could download music to your phone and truly own it? You could buy CDs or DVDs, and they were yours: no subscriptions, no monthly fees. You had to deal with some issues, though. You had to manage storage, worry about the physical media (discs) breaking, and manually organise everything into folders: jazz, afro, and so on. Today, the likes of Spotify, YouTube, Apple Music, and Netflix abstract all those concerns away. In exchange, you pay a subscription fee, or you pay with your attention by watching ads.&lt;/p&gt;

&lt;p&gt;This same pattern plays out in software development. LangChain helps developers build LLM-based applications by abstracting away the manual orchestration of pipelines and replacing them with simple calls called chains. They released LangSmith, which helps you trace what each chain is doing and allows you to debug quickly. The first tier is free, but you can easily go over its limits and will then need to move to the paid tiers. Truthfully, you can write your own handler to do the tracing, but that comes at the cost of sitting down and building the logic, time you could spend on more productive work. Thankfully there is Langfuse, which is open source and has a more generous free tier on its cloud offering (or you can self-host it entirely).&lt;/p&gt;
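&lt;p&gt;A hand-rolled trace handler does not have to be elaborate. Here is a framework-agnostic sketch in plain Python; the names are mine, not LangChain's or Langfuse's API, and &lt;code&gt;summarise&lt;/code&gt; stands in for any step in a chain.&lt;/p&gt;

```python
# Sketch of hand-rolled tracing: a decorator that records each step's
# name, duration, and output size. Framework-agnostic and illustrative.
import time
from functools import wraps

TRACES = []  # in-memory trace log; swap for a file or DB in practice

def traced(fn):
    """Record each call's name, duration, and output size."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "step": fn.__name__,
            "seconds": time.perf_counter() - start,
            "output_chars": len(str(result)),
        })
        return result
    return wrapper

@traced
def summarise(text: str) -> str:
    return text[:20]  # stand-in for a chained LLM call
```

&lt;p&gt;That is the whole trade-off in miniature: twenty lines buys you basic visibility, while the managed products buy you dashboards, retention, and search, at their price.&lt;/p&gt;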

&lt;p&gt;The thing is, once you are on board with an abstraction, you are tied to whatever terms the provider sets in the future. This is what we call vendor lock-in: you cannot go elsewhere because you are tied to the provider, whether through better service, greater convenience, incompatibility with other providers in the same space, or legal terms, as with the Adobe subscription.&lt;/p&gt;

&lt;p&gt;I have enjoyed many abstractions and not all abstractions are exploitative. Sometimes the convenience you or your team gets genuinely justifies the cost, and providers deliver real value. The key is being aware of the trade-off you're making.&lt;/p&gt;

&lt;p&gt;So yes, abstraction brings convenience, but be ready to pay for it, whether immediately or when the time comes; someone bears the cost of making it convenient for you. With that in mind, what abstractions do you enjoy using?&lt;/p&gt;

</description>
      <category>langchain</category>
      <category>ai</category>
      <category>cloud</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Call Them Customers: A Mindset Shift for Engineers in the AI Coding Era</title>
      <dc:creator>Solomon Aboyeji</dc:creator>
      <pubDate>Sun, 05 Oct 2025 12:57:26 +0000</pubDate>
      <link>https://dev.to/solomonaboyeji/call-them-customers-a-mindset-shift-for-engineers-in-the-ai-coding-era-2ln2</link>
      <guid>https://dev.to/solomonaboyeji/call-them-customers-a-mindset-shift-for-engineers-in-the-ai-coding-era-2ln2</guid>
      <description>&lt;p&gt;In 2025, when building products for paying customers, embracing AI-assisted coding can be a no-brainer because the productivity gains are undeniable.&lt;/p&gt;

&lt;p&gt;With this power comes great responsibility. Engineers can now do so much in little time: automatic documentation generated just by looking at the existing code, better unit tests (when they are actually written), and code to build features or fix bugs churned out in minutes, if not seconds.&lt;/p&gt;

&lt;p&gt;While all of this is helpful, it is very easy for engineers to lose focus on quality. You tend to ship bugs faster while also shipping products faster. We become lazier (not in the &lt;a href="https://www.entrepreneur.com/leadership/bill-gates-says-lazy-people-make-the-best-employees/376746" rel="noopener noreferrer"&gt;Bill Gates&lt;/a&gt; sense of laziness).&lt;/p&gt;

&lt;p&gt;A subtle yet powerful mindset shift can resolve this issue to a much greater extent. Start calling your users "customers"; the effect will be psychological. This will help you ensure you always provide quality work for them. Because every line of code, whether written by you or AI, can frustrate your customers. Take more time to review, document, and write unit test cases.&lt;/p&gt;

&lt;p&gt;When you start adopting this, you should start seeing more quality work, as you wouldn't want customers to be frustrated. You also tend to communicate expectations more effectively, keeping your tech lead or product managers informed, who are then responsible for letting customers know what to expect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action Items&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Review all code, whether written by AI or by you.&lt;/li&gt;
&lt;li&gt;Write documentation on how things are built and some architecture decisions.&lt;/li&gt;
&lt;li&gt;Communication is needed in this AI era; let other team members know what you are doing or what to expect from you.&lt;/li&gt;
&lt;li&gt;Use AI to enhance quality, not just speed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Remember, don't sacrifice quality for speed; your customers will thank you.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
