<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anthony Barbieri</title>
    <description>The latest articles on DEV Community by Anthony Barbieri (@prince_of_pasta).</description>
    <link>https://dev.to/prince_of_pasta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F421908%2F0fd2a5c3-6fb2-4878-90d0-c44a20925079.jpeg</url>
      <title>DEV Community: Anthony Barbieri</title>
      <link>https://dev.to/prince_of_pasta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prince_of_pasta"/>
    <language>en</language>
    <item>
      <title>What Do You Depend On? When the Chain of Trust Breaks</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Tue, 31 Mar 2026 00:26:06 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/what-do-you-depend-on-when-the-chain-of-trust-breaks-df</link>
      <guid>https://dev.to/prince_of_pasta/what-do-you-depend-on-when-the-chain-of-trust-breaks-df</guid>
      <description>&lt;p&gt;Most teams rely on more than just their application code to ship software. What happens when one of those tools falls victim to an attack? We recently got a demonstration with the popular security scanning tools &lt;a href="https://www.wiz.io/blog/trivy-compromised-teampcp-supply-chain-attack" rel="noopener noreferrer"&gt;Trivy and KICS&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The attackers leveraged the compromised tooling (distributed as GitHub Actions) in a supply chain attack to harvest credentials from any consuming repository. With those additional credentials, they could expand their reach until they achieved the foothold they were looking for.&lt;/p&gt;

&lt;p&gt;This risk is not unique to security scanners. While teams commonly consider the libraries their applications directly depend on, there is considerable surface area in what Node calls "devDependencies" and in the additional tooling your CI/CD pipeline pulls in during execution. This might include a test framework, formatter, or linter.&lt;/p&gt;

&lt;p&gt;Pipelines are high value targets with the secrets and systems they have access to, and teams must be more intentional about the steps they take to protect them. Just because a system does not accept requests from the internet does not mean it can't have a huge impact on your company's data. I've seen many teams adopt lock files for their application dependencies, but less rigor is applied to this additional tooling.&lt;/p&gt;

&lt;p&gt;For example, the overwhelming majority of consumers of a given GitHub Action reference a major version tag (uses: actions/setup-go@v6). These tags are mutable, which means anyone with write access to the action's repository can change the backing code without any change on the consumer's side. This increases the blast radius an attack can have.&lt;/p&gt;

&lt;p&gt;We can learn from the approaches teams already take for their application dependencies. As with pinned versions in a dependency file, teams can reference a more specific minor or patch tag so that new releases aren't consumed automatically. This does not resolve the mutability challenge, but it avoids picking up new versions without review.&lt;/p&gt;

&lt;p&gt;Some programming languages (like Golang via go.sum) take it a step further, verifying that the code for a version matches a recorded checksum and informing the user if it has changed. Actions can likewise be referenced by their commit SHA, which prevents the code from shifting beneath a consumer's feet. This approach is not without cost, as automation is needed to make regular updates. However, I'm coming around to the idea that upgrading the moment a new release is available is not always best.&lt;/p&gt;
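
&lt;p&gt;For illustration, here is how the same step looks at each level of pinning. This is a sketch: the patch tag and the SHA below are placeholders, not real &lt;code&gt;actions/setup-go&lt;/code&gt; releases.&lt;/p&gt;

```yaml
# Hypothetical workflow steps showing three levels of pinning for one action.
steps:
  # Mutable major tag: silently follows every release; widest blast radius.
  - uses: actions/setup-go@v6
  # Specific patch tag: new releases aren't picked up automatically,
  # but the tag itself can still be moved by anyone with write access.
  - uses: actions/setup-go@v6.0.1
  # Full commit SHA: immutable reference; the trailing comment records
  # the human-readable version for reviewers.
  - uses: actions/setup-go@1111111111111111111111111111111111111111 # v6.0.1
```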

&lt;p&gt;A growing number of package management systems now support "cooldowns". With this configuration, tools like &lt;a href="https://docs.github.com/en/code-security/reference/supply-chain-security/dependabot-options-reference#cooldown-" rel="noopener noreferrer"&gt;dependabot&lt;/a&gt;, &lt;a href="https://docs.astral.sh/uv/concepts/resolution/#dependency-cooldowns" rel="noopener noreferrer"&gt;uv&lt;/a&gt;, or &lt;a href="https://pnpm.io/settings#minimumreleaseage" rel="noopener noreferrer"&gt;pnpm&lt;/a&gt; will not suggest upgrading to a newer version until a configured time period has passed. If an attack is resolved within a few hours or days, consumers with these configurations avoid the impact altogether. A cooldown of a couple of days or a week is a reasonable tradeoff that doesn't leave security vulnerabilities unpatched for too long.&lt;/p&gt;
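
&lt;p&gt;As a sketch, a Dependabot cooldown might be configured like this (field names follow the Dependabot options reference linked above; the seven-day window is an arbitrary choice):&lt;/p&gt;

```yaml
# .github/dependabot.yml (sketch): hold back suggested upgrades for 7 days.
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7
```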

&lt;p&gt;Another approach to consider is eliminating the impact by removing unneeded dependencies. Is a given third-party GitHub action needed, or is it really just two commands wrapped in the action? Focus on the minimal required set, rather than letting the list grow endlessly.&lt;/p&gt;
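
&lt;p&gt;As a hypothetical example, a wrapper action that only runs two commands can usually be replaced with a plain &lt;code&gt;run&lt;/code&gt; step, removing the third-party dependency entirely (the action name and commands here are made up):&lt;/p&gt;

```yaml
steps:
  # Before: a third-party wrapper whose entire value is two shell commands.
  # - uses: some-org/lint-and-test-action@v1
  # After: run the commands directly; nothing external to compromise.
  - run: |
      npm run lint
      npm test
```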

&lt;p&gt;You can also consider vendoring. With vendoring, you take a snapshot of the code and store it locally, using it in place of dynamically fetching from upstream. This removes the ability for new releases to directly impact the codebase, but requires additional management to update and make the vendored packages available to the build. &lt;/p&gt;

&lt;p&gt;With custom development, a team can choose to write their own version of a given tool versus putting their trust in a dependency. While it creates an additional burden on the team, the tool can be customized to include only the functions that are truly needed.&lt;/p&gt;

&lt;p&gt;Platform and security teams must do the work to enable teams to navigate these options. As I've &lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-designing-for-leverage-4c62"&gt;previously explored&lt;/a&gt;, defaults in templates and tooling around upgrades can help protect development teams at scale. Additional controls such as an allow list of actions, or defaulting to read permissions can also reduce the impact of an attack. When I implemented the allow list approach, we eliminated the potential for action typosquatting attacks, while also providing evaluation criteria to teams considering new actions.&lt;/p&gt;

&lt;p&gt;As these attacks have demonstrated, it's no longer enough to only focus on the code your application uses. What tooling do your pipelines depend on? What might happen if one of those tools was compromised?&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>cybersecurity</category>
      <category>github</category>
      <category>security</category>
    </item>
    <item>
      <title>Dispatch From the Other Side: Don't Let the Gap Grow</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Tue, 10 Mar 2026 00:22:44 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/dispatch-from-the-other-side-dont-let-the-gap-grow-1ji</link>
      <guid>https://dev.to/prince_of_pasta/dispatch-from-the-other-side-dont-let-the-gap-grow-1ji</guid>
      <description>&lt;p&gt;This is the fourth and final post of the Dispatch From the Other Side series.&lt;br&gt;
Links to previous posts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-from-scripts-to-software-2md8"&gt;From Scripts to Software&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-designing-for-leverage-4c62"&gt;Designing For Leverage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-aligned-incentives-5bb1"&gt;Aligned Incentives&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This week, one of the teams I work with released an internal API which provided an inventory of provisioned tenants for one of our platforms. I provided the OpenAPI spec to a coding agent. Within 10 minutes, I had a CLI and an agent skill so I could simply ask an LLM "Who owns that resource?" for operational or security response scenarios. Development teams are shipping code at that same pace. Can your security tooling keep up?&lt;/p&gt;

&lt;p&gt;As I explored in my &lt;a href="https://dev.to/prince_of_pasta/it-depends-modernizing-dependency-management-in-the-age-of-ai-fic"&gt;It Depends&lt;/a&gt; post, security practitioners can intervene earlier than ever before. Security expertise can be directly provided to coding agents, addressing issues before they're even committed to a codebase. This benefits both security teams and development teams by driving the cost to fix a security issue to near zero.&lt;/p&gt;

&lt;p&gt;This is the promise shift left was always chasing. It lets security teams scale their impact. Rather than chasing an issue already in production, security guidance delivered directly to coding agents is high leverage by design.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-designing-for-leverage-4c62"&gt;Designing for Leverage&lt;/a&gt; post, we explored the example of using a minimal base image for containers instead of one with unnecessary packages that create extra work for development teams. A simple instruction to an LLM can enforce that standard at the point of code generation: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When a base image is needed for Python, use the one available at company.registry.com/minimal-python:version&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Security teams already analyze vulnerability and misconfiguration data to identify patterns. Those patterns often turn into documentation, tickets, or training. Now they can become agent instructions instead.&lt;/p&gt;

&lt;p&gt;A couple of years ago, most conversations about LLMs focused on hallucinations. Today development teams are already shipping code with them. I saw something similar during the transition to cloud computing. Teams that weren’t curious enough about the shift were left behind. I don't want to see the same thing happen with Generative AI.&lt;/p&gt;

&lt;p&gt;I've found that it's best to approach the space with curiosity. I'm building my understanding of the new surface area that capabilities like Model Context Protocol and code execution introduce. I'm also exploring how to defend against attacks like prompt injection. Navigating this change alongside developers builds the kind of trust we discussed in the &lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-aligned-incentives-5bb1"&gt;Aligned Incentives&lt;/a&gt; post.&lt;/p&gt;

&lt;p&gt;The growth in my career has come from following a simple playbook. Learning how a system works, finding points of leverage, and using them to scale my impact. Security teams will always be outnumbered by development teams, and any process that relies on a smaller team reviewing a larger team's work manually won't scale. Figure out how to remove yourself from the equation while ensuring a control stays in place. The technologies will keep changing. The playbook won't.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>api</category>
      <category>llm</category>
    </item>
    <item>
      <title>Dispatch From the Other Side: Aligned Incentives</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Tue, 03 Mar 2026 00:31:56 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/dispatch-from-the-other-side-aligned-incentives-5bb1</link>
      <guid>https://dev.to/prince_of_pasta/dispatch-from-the-other-side-aligned-incentives-5bb1</guid>
      <description>&lt;p&gt;This is post three of the Dispatch From the Other Side series.&lt;br&gt;
Links to previous posts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-from-scripts-to-software-2md8"&gt;From Scripts to Software&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-designing-for-leverage-4c62"&gt;Designing For Leverage&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From the first time I heard it, Charlie Munger's quote on incentives has stuck with me. "Show me the incentive and I'll show you the outcome." It explained something I kept running into. Development teams weren't ignoring security findings because they didn't care. They were responding rationally to how they were measured.&lt;/p&gt;

&lt;p&gt;This is what makes the leveraged approaches we explored in the last post so valuable. Rather than asking teams to change how they work, you reduce the cost of being compliant. When the common case is easy, most teams will choose it.&lt;/p&gt;

&lt;p&gt;Let's compare this to another approach. A vulnerability management team I worked with reached out to a development team about a low-risk finding that had no available fix, then blocked the release from going to production. The security team was measured on eliminating known vulnerabilities. The development team was measured on shipping working software. Leaving that tension unresolved eroded trust faster than any missed vulnerability ever could.&lt;/p&gt;

&lt;p&gt;Development teams have limited time to address security findings. I've seen more success with slowly raising the bar rather than letting perfect be the enemy of good. Before rolling out a new cloud misconfiguration detection as part of the official rule set, I asked development teams for feedback. This gave them time to prepare and built buy-in before enforcement.&lt;/p&gt;

&lt;p&gt;Sometimes the number of people finding issues far exceeds the number empowered to fix them. I've submitted pull requests to fix security configuration issues rather than putting the work on the development team. After a few of those, teams started looping me in earlier to figure out how we could remove the insecure option outright. Approaches like this turn it into a partnership and show that you’re invested in their success.&lt;/p&gt;

&lt;p&gt;I’ve learned that security is one dimension of overall quality. Teams are managing operational risk alongside security risk. A security patch can impact the availability of a system if it is not well tested or implemented in a rush. Security debt competes with feature debt, operational debt, and architectural debt. It does not exist in isolation. I've seen many different stakeholder groups ask development teams to take corrective action on an issue without realizing how many other groups are asking them to do the same.&lt;/p&gt;

&lt;p&gt;When incentives align, trust follows. That shift moves security from an adversarial relationship to a partnership. It also increases impact for the issues that can’t be solved with tooling improvements alone.&lt;/p&gt;

&lt;p&gt;As generative AI accelerates how software gets built, those incentive gaps could widen. The question is whether security evolves with them. That’s what I’ll explore in the final piece.&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>management</category>
      <category>security</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Dispatch From the Other Side: Designing for Leverage</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Mon, 23 Feb 2026 23:45:28 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/dispatch-from-the-other-side-designing-for-leverage-4c62</link>
      <guid>https://dev.to/prince_of_pasta/dispatch-from-the-other-side-designing-for-leverage-4c62</guid>
      <description>&lt;p&gt;This is Part 2 of the series. You can read part 1 &lt;a href="https://dev.to/prince_of_pasta/dispatch-from-the-other-side-from-scripts-to-software-2md8"&gt;here&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;When I first started in the industry, security reviews were mostly still manual. Security was the department of "no" and could hold up a quarterly release until they were satisfied all required controls were in place. It was before the DevOps movement had hit large enterprises and cycle times to make any change in production could span months. &lt;/p&gt;

&lt;p&gt;As I saw the adoption of code pipelines grow within my company, I also saw the security industry try to modernize alongside it. DevSecOps and "shifting left" became all the rage. While this allowed security practitioners to catch issues earlier and reduce the cost to fix them, breaking the build too often created friction.&lt;/p&gt;

&lt;p&gt;In a CI/CD world, every push triggers a pipeline. If that pipeline takes 10–15 minutes, that delay scales across the organization. Multiplied by hundreds or thousands of engineers, those minutes become a real drag on delivery.&lt;/p&gt;

&lt;p&gt;Legacy code scanning tools were usually limited to ad-hoc testing rather than being part of the pipelines because of how long they took to run. While improvements have been made since that time, having to context switch to patch a new vulnerability in the middle of trying to do something else doesn't create fans of security tooling.&lt;/p&gt;

&lt;p&gt;Code pipelines can also be the means of responding to operational incidents. I have witnessed a security scan break the build as a fix was heading for production, blocking the deployment and extending recovery time.&lt;/p&gt;

&lt;p&gt;While the "shift left" concept promised a lot, we stopped short of going as far as we could. High-quality, fast scanners can be added to pipelines, but it's even better to address issues before they're ever introduced into a codebase in the first place.&lt;/p&gt;

&lt;p&gt;Rather than detecting and adding to an ever-growing pile of misconfigurations and vulnerabilities, security teams can partner with internal shared service teams to reduce the surface area altogether. Encryption at rest can be enabled by default, and not a parameter a development team has to remember to enable. No new detection to triage in the pipeline, but all future instances of the problem disappear. This is where "paved roads" start to demonstrate their value.&lt;/p&gt;

&lt;p&gt;Similarly, with the growth in adoption of containers, a well maintained minimal base image eliminates vulnerabilities that have nothing to do with the application. Instead of patching OS packages the application never uses, these shared base images provide only what's needed to run it. Addressing the problem systematically drives vulnerability counts down far more than automating responses to individual detections.&lt;/p&gt;

&lt;p&gt;When some flexibility beyond a silent and secure default is needed, I've found that registering the opt-out explicitly helps with auditability. When encryption is on by default, forcing a deliberate configuration like "encryption_disabled: true" is easier to detect than a missing configuration line.&lt;/p&gt;
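
&lt;p&gt;A sketch of what that registered opt-out might look like in a paved-road configuration (the resource and field names here are hypothetical):&lt;/p&gt;

```yaml
# Hypothetical storage module input: encryption is on by default, so this
# key only ever appears when a team deliberately opts out.
storage_bucket:
  name: team-a-artifacts
  encryption_disabled: true  # explicit, greppable, easy for a detection to flag
```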

&lt;p&gt;For these controls and systematic approaches to be effective, security practitioners must strive to understand the platforms in use and how developers consume them. While not everyone needs to be a subject matter expert, building key partnerships and applying systems thinking will get you further than chasing individual findings.&lt;/p&gt;

&lt;p&gt;Identifying the source of a problem and solving it there, rather than treating the symptoms, is what separates reactive security from effective security. With code generation only getting faster with generative AI, security teams need high-leverage approaches to reduce risk in the environment. In our next post we'll explore ways of working, incentive structures, and earning credibility.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>devops</category>
      <category>security</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Dispatch From the Other Side: From Scripts to Software</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Tue, 17 Feb 2026 01:16:09 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/dispatch-from-the-other-side-from-scripts-to-software-2md8</link>
      <guid>https://dev.to/prince_of_pasta/dispatch-from-the-other-side-from-scripts-to-software-2md8</guid>
      <description>&lt;p&gt;At the start of my career as a security engineer, I built an allow list management system for our web gateway within the security operations center (SOC) I worked in. Beyond just a script, this was a live system that a core security component relied on. Someone once blocked a /3 vs a /32 IP range, and access to a third of the internet broke for all 40,000 employees. I knew the system I created had to prevent something like that from occurring again. The manager of the SOC analysts would have regular discussions with myself and my manager on any issues that arose, which helped me realize the impact my code had on someone else’s work. That was the first time I felt responsible for software, not just automation.&lt;/p&gt;

&lt;p&gt;This series explores my lessons as I crossed into platform engineering from security engineering. What I might tell my past self, given the chance, and how I now approach problem solving with a software and platform engineer's perspective. If you're a security practitioner wondering what's on the other side, this is for you. If you're a developer who works with security teams, this might help explain why we think the way we do.&lt;/p&gt;

&lt;p&gt;While in the security org, I started to notice that the number of vulnerabilities kept growing faster than we could fix them, given how often we patched. As cloud platforms came onto the scene, we also started to produce misconfiguration findings. Hoping to avoid the same outcome there, I took a different approach.&lt;/p&gt;

&lt;p&gt;Rather than creating more findings to track, I identified platform controls that were acceptable to all parties. With stories of public S3 buckets causing data leakage left and right, I implemented a preventative control that disallowed public buckets. This eliminated those types of issues outright for both the development team and the vulnerability management team.&lt;/p&gt;

&lt;p&gt;While the security team does not own and operate all systems in an enterprise, working with one platform or infrastructure team vs 50-100 can greatly reduce remediation time. However, there were a few times where we had to roll back a control as it negatively impacted teams’ ability to operate their own systems. This taught me the value of progressive rollouts, a practice the rest of software engineering already relied on. Similarly, using policy-as-code we could move back to an older version of the policy in minutes.&lt;/p&gt;

&lt;p&gt;Many skillsets in security are more portable than I realized. Conducting security reviews prepared me well for system design discussions and operational troubleshooting when a system might be down. From tracking an attacker's footsteps during incident response, I could debug a running system, stitching together the timeline with logs and traces. &lt;/p&gt;

&lt;p&gt;The most effective controls I've built have been from understanding how development teams work and finding ways to remove risk without increasing friction. While not everyone needs to become a software engineer, understanding the core concepts and ways of working helps find solutions that don't slow teams down. In the next post, we'll explore CI/CD and how to balance effective controls, defaults and exceptions.&lt;/p&gt;

</description>
      <category>career</category>
      <category>devjournal</category>
      <category>security</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Trimming Toil: Automating Repetitive Development Tasks</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Sun, 11 Jan 2026 22:37:20 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/trimming-toil-automating-repetitive-development-tasks-59nk</link>
      <guid>https://dev.to/prince_of_pasta/trimming-toil-automating-repetitive-development-tasks-59nk</guid>
      <description>&lt;p&gt;I maintain a small number of open-source tools that require regular patching. As I got more comfortable with coding agents, I finally addressed two minor annoyances I'd let linger for too long.&lt;/p&gt;

&lt;p&gt;The first was having a coding agent produce a &lt;a href="https://github.com/princespaghetti/setup-conftest/blob/main/.github/workflows/update-major-tag.yml" rel="noopener noreferrer"&gt;GitHub Workflow&lt;/a&gt; for moving forward a major version tag (v1) any time I created a new minor version. Major version tracking is a common approach for GitHub Actions where consumers want to reduce the number of updates to their workflow files.&lt;/p&gt;

&lt;p&gt;Previously, this involved a couple of extra commands to map the major version to the new commit of the latest release. While it was not "moving a mountain", I'd put off automating it for far too long. I described what I wanted to Claude Code, and the workflow worked on its first use.&lt;/p&gt;
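
&lt;p&gt;For context, that manual sequence amounted to moving a mutable major tag to the commit of the latest release. A minimal sketch in a throwaway repository (the version number is made up, and the force-push is commented out since there is no remote here):&lt;/p&gt;

```shell
set -e
# Throwaway repo standing in for the action's repository.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "release v1.2.0"
git tag v1.2.0            # the newly cut release
git tag -f v1 v1.2.0      # point the mutable major tag at the same commit
# git push -f origin v1   # in the real repo, publish the moved tag
echo "v1 -> $(git rev-parse --short v1)"
```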

&lt;p&gt;Another situation I found myself in regularly: dependabot would update a library, and the transpiled JavaScript in the &lt;code&gt;dist&lt;/code&gt; folder would no longer match the source. The PR workflow catches this mismatch, which meant I had to check out the branch, rebuild, and push the updated dist. I tolerated running this sequence of commands 5-10 times over the past year.&lt;/p&gt;

&lt;p&gt;Instead of a workflow, I addressed this with an &lt;a href="https://github.com/princespaghetti/setup-conftest/blob/main/.claude/skills/rebuild-dist-for-pr/SKILL.md" rel="noopener noreferrer"&gt;agent skill&lt;/a&gt;. Skills are now an &lt;a href="https://agentskills.io/home" rel="noopener noreferrer"&gt;open standard&lt;/a&gt; that provides agents with a scalable way to automate common tasks. Rather than running a series of commands myself, I can simply tell my coding agent "rebuild dist on PR 123".&lt;/p&gt;

&lt;p&gt;Both examples share something: they're deterministic by design. The workflow runs the same way every time. The skill gives the agent explicit steps rather than asking it to reason about dependabot PRs from scratch. Coding agents are powerful because they can improvise. But for tasks where I already know the right sequence, encoding that sequence is faster, cheaper, and more predictable.&lt;/p&gt;

&lt;p&gt;Smaller iterative improvements like these add up. This is where I see coding agents actually providing value, not generating entire applications from a prompt, but chipping away at the friction that accumulates around the work. As these improvements pile up, I spend less time on maintenance and more on the tools themselves. I'm curious how these compound at scale when teams start chaining these patterns together.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>devops</category>
      <category>github</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Systems, Stories, and Skills: A 2025 Year in Review</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Mon, 29 Dec 2025 00:46:13 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/systems-stories-and-skills-a-2025-year-in-review-3jln</link>
      <guid>https://dev.to/prince_of_pasta/systems-stories-and-skills-a-2025-year-in-review-3jln</guid>
      <description>&lt;p&gt;After a four-year hiatus, 2025 was the year I finally returned to consistent blogging. It was a year defined by transition, both for the industry and for my own career. In late 2024, I stepped into the Enablement and Platform Engineering space, marking my first role outside of a security organization since graduating college. It has been a fascinating vantage point from which to watch the Generative AI landscape shift from simple chat interfaces to the rise of autonomous agents, and standardized tools and skills. &lt;/p&gt;

&lt;h2&gt;Beyond the Hype: The Rise of Utility&lt;/h2&gt;

&lt;p&gt;When ChatGPT was initially released, I was not impressed. Every week there was another news article about an LLM hallucinating. Without the ability to interact with external data sources or drive change in external systems, I saw limited value. In 2025, there was a shift to standardized tool calling via Model Context Protocol (MCP). &lt;/p&gt;

&lt;p&gt;This year the industry moved beyond single-turn chat responses and smart code-completion. Agentic loops, with the ability to verify implementations with bash commands or controlling the browser, demonstrated the power of coding agents. Rather than be an "all knowing" LLM, the focus shifted to usefulness and ability to use tools (via MCP).&lt;/p&gt;

&lt;h2&gt;From Experimentation to Execution&lt;/h2&gt;

&lt;p&gt;Claude Code emerged early this year as a practical coding agent, with a number of challengers following as the space matured. These agents helped me get back into creating personal projects following a similar hiatus to what I mention above. They helped greatly reduce the cost of experimentation and quickly move past errors that might have sent me down a rabbit hole previously.&lt;/p&gt;

&lt;p&gt;One of these personal projects was &lt;a href="https://dev.to/prince_of_pasta/feeling-the-vibes-with-verifi-152c"&gt;Verifi&lt;/a&gt;, which makes it easier for developers to manage their certificates across programming language ecosystems. Rather than getting bogged down in syntax, I found myself co-developing a well-defined plan with the agent to drive toward the outcomes and non-functional requirements I cared about. Building Verifi made it clear to me that as agent adoption grows, the ability to articulate and evolve a plan will matter more than the mechanics of implementation.&lt;/p&gt;

&lt;h2&gt;The New Frontier: Context Engineering&lt;/h2&gt;

&lt;p&gt;Early on, I assumed that giving an agent access to more MCP servers would make it more capable. As I added too many tools, performance degraded, reinforcing that agent failures are often the result of system design rather than model limitations. While Prompt Engineering worked well enough for single-turn prompts, this tension pushed Context Engineering to the forefront in 2025. This work focused on ensuring the context window was not overwhelmed by tool definitions, system prompts, and custom instructions layered on top of the conversation itself.&lt;/p&gt;

&lt;p&gt;This evolution raised my expectations for what was achievable by an agent and changed how I evaluated agent quality. Rather than judging agents by the best-case output of a single interaction, I started evaluating whether their behavior remained understandable and predictable across a longer workflow.&lt;/p&gt;

&lt;p&gt;Various strategies emerged to help manage context, including optimizing MCP tool definitions, custom/sub agents, and progressive disclosure. The Agent Skills standard (originally created by Anthropic) helped ensure consistency across coding agents so the right context could be pulled in at the right time, using progressive disclosure techniques. I created some example skills, including one around &lt;a href="https://dev.to/prince_of_pasta/it-depends-modernizing-dependency-management-in-the-age-of-ai-fic"&gt;dependency management&lt;/a&gt;. As I outline in the post, an agent can load dependency-management skills only when a package file changes, rather than receiving every instruction upfront.&lt;/p&gt;

&lt;h2&gt;Looking Ahead: Skill Management in 2026&lt;/h2&gt;

&lt;p&gt;Similar to discussions this year around MCP and how to manage the tools they provide at scale, I suspect we will see a similar conversation around managing skills as their adoption grows next year. To cap off the year, I used some downtime between the holidays to prototype a potential solution. If skills become as prevalent as I expect, we'll need tooling to manage them. I built &lt;a href="https://github.com/princespaghetti/skset" rel="noopener noreferrer"&gt;skset&lt;/a&gt; (short for "Skill Sets") to explore what that category might look like, starting with the basic problem of skills scattered across different agent directories.&lt;/p&gt;

&lt;p&gt;In 2026, it will be interesting to see if a standardized marketplace for sharing skills publicly emerges beyond the vendor-specific ecosystems we see today. We are already seeing numerous discussions on how coding agents will fundamentally reshape software engineering and how that shift will impact individual engineers across their entire career journeys.&lt;/p&gt;

&lt;p&gt;While this year brought massive improvements to base model performance, those capabilities are just the foundation. As we head into next year, the "downstream" impact on areas like automated code review, social coding, and quality governance at scale will become clear.&lt;/p&gt;

&lt;p&gt;For me, 2025 was the definitive turning point for AI. As I wrote in &lt;a href="https://dev.to/prince_of_pasta/cloudy-with-a-chance-of-context-navigating-change-with-context-engineering-2hf5"&gt;Cloudy With a Chance of Context&lt;/a&gt;, this era feels remarkably similar to the shift I experienced with cloud in 2018. During that time, cloud computing moved from emerging to general best practice while challenging governance teams to keep up. &lt;/p&gt;

&lt;p&gt;2026 will be the year we see if these autonomous workflows can truly meet the high bar of production-grade engineering. Based on what I've observed building skset and working with context management, the challenge won't be any single capability but orchestrating skills, sub-agents, MCP servers, and other capabilities together effectively. Teams will need to solve discovery without overwhelming context budgets, configure logical groupings per project, and understand how these features interact rather than just layer them on. The organizations that treat agent capabilities as a system to be composed, rather than features to be accumulated, will be the ones that unlock real productivity without sacrificing quality.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>It Depends: Modernizing Dependency Management in the age of AI</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Tue, 25 Nov 2025 04:40:00 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/it-depends-modernizing-dependency-management-in-the-age-of-ai-fic</link>
      <guid>https://dev.to/prince_of_pasta/it-depends-modernizing-dependency-management-in-the-age-of-ai-fic</guid>
      <description>&lt;p&gt;With generative AI and coding agents, we're producing code at unprecedented speed while accumulating dependencies to match. Evidence now shows AI doesn't just suggest dependencies: it &lt;a href="https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/" rel="noopener noreferrer"&gt;hallucinates ones that don't exist&lt;/a&gt;. In this case, a package called "huggingface-cli" was downloaded thousands of times, including by teams at Alibaba, before anyone realized it was completely fictitious. Each hallucinated package name becomes a supply chain vulnerability waiting to be exploited, with attackers racing to register them before developers discover the mistake.&lt;/p&gt;

&lt;p&gt;But hallucinations are just one risk in this acceleration of code generation. Even with legitimate packages, the speed creates problems. In the past, platform and security teams could influence dependency choices through documentation, approved lists, or code reviews. With AI generating suggestions instantly, those touchpoints disappear. The productivity gains are undeniable, but we're now accumulating unmaintained dependencies, vulnerable transitive packages, and license incompatibilities with minimal oversight, far faster than security teams can audit them after the fact.&lt;/p&gt;

&lt;p&gt;As I discussed in my last &lt;a href="https://dev.to/prince_of_pasta/cloudy-with-a-chance-of-context-navigating-change-with-context-engineering-2hf5"&gt;post&lt;/a&gt;, governance teams can scale by making machine-readable guidance available to AI agents. Instead of catching dependency problems in CI/CD after code is written, we can validate packages at the moment they're suggested. The dependency-evaluator skill I built implements this approach. It embeds systematic dependency evaluation directly into Claude Code's workflow, creating that critical checkpoint between AI suggestion and developer acceptance. While I created this for Claude, the pattern can easily be applied to other competing tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Skill Works: Automatic and In-Flow
&lt;/h2&gt;

&lt;p&gt;Traditional dependency evaluation requires developers to remember commands like &lt;code&gt;npm audit&lt;/code&gt;, manual research, and security checks. The dependency-evaluator skill activates automatically when Claude detects dependency-related questions, keeping developers in flow. This works through Claude Code's &lt;a href="https://code.claude.com/docs/en/skills" rel="noopener noreferrer"&gt;skills feature&lt;/a&gt;. The skill's description tells Claude when to activate based on conversation context:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Evaluates whether a programming language dependency should be used by analyzing maintenance activity, security posture, community health, documentation quality, dependency footprint, production adoption, license compatibility, API stability, and funding sustainability. Use when users are considering adding a new dependency, evaluating an existing dependency, or asking about package/library recommendations.&lt;/p&gt;
&lt;/blockquote&gt;
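&lt;p&gt;In Claude Code, that description lives in the YAML frontmatter of the skill's SKILL.md file, which is all the agent loads upfront. A minimal sketch (the description is abbreviated here, and only the documented &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; fields are shown):&lt;/p&gt;

```yaml
---
name: dependency-evaluator
description: Evaluates whether a programming language dependency should be
  used by analyzing maintenance, security, community health, and more. Use
  when users are considering adding or evaluating a dependency.
---

# Dependency Evaluator

The full evaluation instructions live below the frontmatter and are only
loaded into context when the skill activates.
```

&lt;p&gt;This is what keeps activation cheap: the model only ever sees the short description until the skill is actually needed.&lt;/p&gt;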

&lt;p&gt;When a developer asks, "Should I use axios for HTTP requests?" or "Add JWT authentication," Claude recognizes the context and automatically runs the necessary evaluation. No commands to remember, no workflow interruption—just systematic analysis at the moment it's needed.&lt;/p&gt;

&lt;p&gt;Once invoked, the skill evaluates the package and returns a structured assessment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Dependency Evaluation: &amp;lt;package-name&amp;gt;

### Summary
[2-3 sentence overall assessment with recommendation]

**Recommendation**: [ADOPT / EVALUATE FURTHER / AVOID]
**Risk Level**: [Low / Medium / High]
**Blockers Found**: [Yes/No]

### Blockers (if any)
[List any dealbreaker issues - these override all scores]
- ⛔ [Blocker description with specific evidence]

### Evaluation Scores

| Signal | Score | Weight | Notes |
|--------|-------|--------|-------|
| Maintenance | X/5 | [H/M/L] | [specific evidence with dates/versions] |
| Security | X/5 | [H/M/L] | [specific evidence] |
| Community | X/5 | [H/M/L] | [specific evidence] |
| Documentation | X/5 | [H/M/L] | [specific evidence] |
| Dependency Footprint | X/5 | [H/M/L] | [specific evidence] |
| Production Adoption | X/5 | [H/M/L] | [specific evidence] |
| License | X/5 | [H/M/L] | [specific evidence] |
| API Stability | X/5 | [H/M/L] | [specific evidence] |
| Funding/Sustainability | X/5 | [H/M/L] | [specific evidence] |
| Ecosystem Momentum | X/5 | [H/M/L] | [specific evidence] |

**Weighted Score**: X/50 (adjusted for dependency criticality)

### Key Findings

#### Strengths
- [Specific strength with evidence]
- [Specific strength with evidence]

#### Concerns
- [Specific concern with evidence]
- [Specific concern with evidence]

### Alternatives Considered
[If applicable, mention alternatives worth evaluating]

### Recommendation Details
[Detailed reasoning for the recommendation with specific evidence]

### If You Proceed (for ADOPT recommendations)
[Specific advice tailored to risks found]
- Version pinning strategy
- Monitoring recommendations
- Specific precautions based on identified concerns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The skill provides this analysis in under a minute, offering systematic evaluation at the moment developers need it, not after problems reach production. &lt;a href="https://github.com/princespaghetti/claude-marketplace/blob/main/learnfrompast/skills/dependency-evaluator/SKILL.md" rel="noopener noreferrer"&gt;View the full skill implementation&lt;/a&gt; and &lt;a href="https://github.com/princespaghetti/claude-marketplace/blob/main/learnfrompast/skills/dependency-evaluator/EXAMPLES.md" rel="noopener noreferrer"&gt;example evaluations&lt;/a&gt; showing how the skill assesses packages across different risk scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond Dependencies: A Pattern for Embedding Expertise
&lt;/h2&gt;

&lt;p&gt;Skills like this give security teams an actionable way to influence development decisions in the age of generative AI. With economic headwinds and calls to do more with existing headcount, machine-readable guidance lets even a single security engineer embed expertise directly into AI-assisted workflows at scale.&lt;/p&gt;

&lt;p&gt;The dependency-evaluator skill is one example, but this pattern extends to any domain where you need to guide AI suggestions with organizational knowledge: API design standards, cloud resource configuration, authentication patterns, or infrastructure choices. What will you build to embed your expertise into AI-assisted development?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>security</category>
    </item>
    <item>
      <title>Cloudy With a Chance of Context: Navigating Change with Context Engineering</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Tue, 18 Nov 2025 01:34:19 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/cloudy-with-a-chance-of-context-navigating-change-with-context-engineering-2hf5</link>
      <guid>https://dev.to/prince_of_pasta/cloudy-with-a-chance-of-context-navigating-change-with-context-engineering-2hf5</guid>
      <description>&lt;p&gt;Simply writing code was never the bottleneck for software engineering. Software engineering is much broader than programming and involves the end-to-end delivery of value. While shifts like the DevOps movement have pushed teams toward greater ownership, there is still an ever-increasing number of responsibilities overwhelming development teams. Something had to give.&lt;/p&gt;

&lt;p&gt;To understand the scale of the change we are now facing, we must look back at the radical shift that occurred in infrastructure engineering with the adoption of the public cloud. The historical bottlenecks of physical specification sheets and manual capacity planning were causing unacceptable delivery delays. Cloud computing fundamentally raised the abstraction line, moving an engineer's focus away from physical assets like servers and toward logical, API-driven services. Concepts like Infrastructure-as-Code (IaC) became the norm, transforming the role from hardware maintenance to architectural optimization and ultimately driving a focus on higher value activities.&lt;/p&gt;

&lt;p&gt;The public cloud did not come without its own risks. While it allowed teams to move with incredible speed, it tested whether safety guardrails could keep pace. Early on, you couldn't go long without seeing a breaking news headline about another public S3 bucket that had exposed sensitive data. This highlighted the need to clearly understand the implications of the various knobs and switches these cloud services made available.&lt;/p&gt;

&lt;p&gt;Platform engineering teams entered the scene to help development teams navigate the complexities of the cloud while allowing them to focus on their functional requirements. However, development teams always outnumber platform teams, and enablement teams can only scale so far. This leaves a gap: teams pioneering on their own, unable to easily get the answers they need to follow best practices. &lt;/p&gt;

&lt;p&gt;Having worked in the security and platform engineering spaces, I am genuinely excited about some of the potential of generative AI, specifically agentic coding agents. Rather than catching something in a CI/CD pipeline or after it has already been deployed, a well-configured agent can inject guidance and corrections during code generation. This helps address some of the scaling issues plaguing platform teams and other teams with governance responsibilities.&lt;/p&gt;

&lt;p&gt;If you ask five different developers to build something from a one-sentence description, you'll almost certainly get five different answers. Coding agents are no different. Without the necessary context, they may completely miss the target outcome or fail to follow internal best practices.&lt;/p&gt;

&lt;p&gt;Security policies, style guides, and internal opinions all become critical context to provide the agent. The challenge becomes finding the right mix of these sources at the right time; as of late 2025, the industry calls this "context engineering." Owners of these sources need to meet developers where they are by making them available for agents to consume. Once that context is available, a coding agent can be guided to ensure an API call is made securely over HTTPS, discouraging a developer from using &lt;code&gt;verify=False&lt;/code&gt; to skip certificate validation. Or it might auto-apply parameters to a database configuration to ensure it is encrypted at rest.&lt;/p&gt;
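&lt;p&gt;As a concrete (and purely illustrative) sketch of the kind of rule such guidance might encode, here is a tiny check that flags &lt;code&gt;verify=False&lt;/code&gt; in generated Python. The function name and message are invented, not any particular tool's API:&lt;/p&gt;

```python
import re

# Illustrative rule: requests calls that disable TLS certificate validation.
INSECURE_TLS = re.compile(r"verify\s*=\s*False")

def review_snippet(code: str) -> list[str]:
    """Return guidance messages for insecure patterns in generated code."""
    findings = []
    if INSECURE_TLS.search(code):
        findings.append(
            "verify=False disables certificate validation; "
            "point the tool at the internal CA bundle instead."
        )
    return findings

print(review_snippet('requests.get("https://internal.example", verify=False)'))
```

&lt;p&gt;In practice this guidance would be context the agent consumes during generation rather than a standalone linter, but the rule itself is the same.&lt;/p&gt;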

&lt;p&gt;Beyond improving overall code quality, there is also the potential for developers to be more informed on the why behind certain configurations or approaches. When working in larger companies, it can be hard to understand the history or wider architecture that might drive a certain approach. With GenAI, a developer can interrogate and re-frame concepts to ensure they get the necessary learning at the right time.&lt;/p&gt;

&lt;p&gt;Between the improved adherence to best practices and ability to quickly ramp up on a topic, developers can be more focused on delivering value for their business domain. They can finally experience a similar shift to what infrastructure engineers went through during the cloud transition. &lt;/p&gt;

&lt;p&gt;Most definitions of software engineering highlight that the ultimate outcome is to build solutions that meet user needs. The means of achieving that value, however, is undergoing a fundamental shift. Skill sets must change to match the moment, moving more of our focus from coding to clear specification and context engineering. This will help ensure we don't repeat the security and consistency mistakes we made during the early cloud transition. Governance and platform teams can lead this change by making machine-readable guidance available to agents. I encourage everyone to embrace this change with curiosity and see what's possible!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Feeling the Vibes with Verifi</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Sun, 02 Nov 2025 15:19:35 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/feeling-the-vibes-with-verifi-152c</link>
      <guid>https://dev.to/prince_of_pasta/feeling-the-vibes-with-verifi-152c</guid>
      <description>&lt;p&gt;Imagine this. You've landed yourself a role at BIG CORP and it's day one. You're excited to dive in and make that first commit all the way to production. You run npm install and suddenly &lt;code&gt;npm ERR! RequestError: unable to get local issuer certificate&lt;/code&gt; appears. No one likes certificate errors, but disabling the checks altogether is not a secure choice. What's a developer to do?&lt;/p&gt;

&lt;p&gt;I've created a utility command line tool to help ease this all-too-frequent experience. &lt;a href="https://github.com/princespaghetti/verifi" rel="noopener noreferrer"&gt;Verifi&lt;/a&gt; is a command-line tool that simplifies certificate configuration across your entire development environment. Instead of configuring certificates separately for npm, pip, git, curl, and dozens of other tools, Verifi does it once, centrally.&lt;/p&gt;

&lt;p&gt;While some applications use the operating system's certificate store, many development tools (npm, pip, git, curl) prefer their own certificate bundle files. This becomes critical in corporate environments where web proxies inspect traffic and internal certificate authorities sign private resources—both of which require trusting additional certificates.&lt;/p&gt;

&lt;p&gt;Verifi tackles the problem by making it simple to add and remove certificates from the bundle and to generate the shell configuration that points your tools at it. It is offline by default, avoiding the very problem it's meant to solve. See the &lt;a href="https://github.com/princespaghetti/verifi/blob/main/README.md" rel="noopener noreferrer"&gt;README&lt;/a&gt; to get started.&lt;/p&gt;
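&lt;p&gt;For a sense of what that shell configuration amounts to, here is a sketch of the environment variables a central bundle can feed. The bundle path is hypothetical (see the README for what Verifi actually writes), but the variables are the standard ones the respective tools honor:&lt;/p&gt;

```shell
# Hypothetical central bundle location; Verifi manages the real path.
BUNDLE="$HOME/.verifi/ca-bundle.pem"

export SSL_CERT_FILE="$BUNDLE"        # OpenSSL-based tools, curl
export REQUESTS_CA_BUNDLE="$BUNDLE"   # Python requests
export PIP_CERT="$BUNDLE"             # pip
export NODE_EXTRA_CA_CERTS="$BUNDLE"  # Node.js (and modern npm)
export GIT_SSL_CAINFO="$BUNDLE"       # git over HTTPS
```

&lt;p&gt;Maintaining one bundle and one block like this beats reconfiguring each tool individually every time the corporate CA rotates.&lt;/p&gt;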

&lt;h2&gt;
  
  
  Vibe Coding
&lt;/h2&gt;

&lt;p&gt;This project was developed using two coding agents over a couple of evenings. I primarily used Claude Code and complemented it with Amp from Sourcegraph. Before I began, I had Claude do some "market research" on whether there was a need for such a tool; that can be found &lt;a href="https://github.com/princespaghetti/verifi/blob/main/market_research.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;. I also used Claude as a sounding board for a couple of different options for the name. This chat also led to the creation of &lt;a href="https://github.com/princespaghetti/verifi/blob/main/PLAN.md" rel="noopener noreferrer"&gt;PLAN.md&lt;/a&gt;, which helped me guide the agents throughout the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Development Loop
&lt;/h2&gt;

&lt;p&gt;For each phase, I would check whether any additional clarifications were needed using Claude Code's plan mode. This helped identify requirements that were too vague before proceeding with implementation. Once that was complete, I would let Claude Code run the commands necessary to achieve the goals of the phase. &lt;a href="https://github.com/princespaghetti/verifi/blob/main/CLAUDE.md" rel="noopener noreferrer"&gt;CLAUDE.md&lt;/a&gt; complemented the phased plan with additional guidance for the agent on overall structure, approach, and examples. Over time, my allow list of commands for the agent grew, and it needed less interaction from me. &lt;/p&gt;

&lt;p&gt;I used Amp mainly to validate Claude's work and occasionally when I hit limits with Claude Pro's plan. By having clear validation criteria for each phase, both agents were able to easily validate functionality.&lt;/p&gt;

&lt;p&gt;Once the core functionality was complete, I also used the agents to generate the necessary CI/CD definitions. As is all too common with CI/CD, the first run did not succeed. The agents were helpful in working through the various errors that occurred. One of the Amp threads from that troubleshooting can be seen &lt;a href="https://ampcode.com/threads/T-d1401ca6-4ea5-4511-822b-b111b7d3a51f" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The agents were also helpful in making the tooling available through Homebrew, which was my first attempt at that distribution channel. By providing GoReleaser's documentation to the agents, I got a working config in a couple of tries and under an hour.&lt;/p&gt;

&lt;p&gt;As evidenced above, this was not a zero-shot or one-shot prompt. The terms "spec-driven development" and "AI-assisted engineering" have been growing in popularity to describe this approach, in contrast to the simple proofs-of-concept produced by one-off prompts. The approach helped significantly for this project, ensuring consistency across my coding sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Whether you're a developer fighting certificate errors at a new company or a DevOps engineer looking to streamline onboarding, Verifi aims to make certificate management painless. Check out the &lt;a href="https://github.com/princespaghetti/verifi" rel="noopener noreferrer"&gt;repository&lt;/a&gt;, give it a try, and feel free to contribute or open issues with feedback!&lt;/p&gt;

</description>
      <category>cli</category>
      <category>security</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Enterprise Networks Unveiled: A Software Engineer's Guide to the Basics (Wrap Up)</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Tue, 29 Apr 2025 12:04:07 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-wrap-up-8g3</link>
      <guid>https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-wrap-up-8g3</guid>
      <description>&lt;p&gt;In this blog series we've explored the core networking concepts you will encounter as a software engineer. I hope this has helped demystify networking concepts that you may encounter in your day-to-day work. &lt;/p&gt;

&lt;p&gt;Please feel free to bookmark this post as an easy way to access the various posts summarized below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-1-5com"&gt;part 1&lt;/a&gt; we explored why enterprise networking matters and how critical it is to something like an API call from a laptop&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-2-42ni"&gt;part 2&lt;/a&gt; we walked up and down the TCP/IP stack, understanding how data is encapsulated before being sent across the network&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-3-1pj7"&gt;part 3&lt;/a&gt; we followed a packet across a number of networking devices and explored the functionality of each device type&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-4-3024"&gt;part 4&lt;/a&gt; we explored enterprise network architecture and how multi-site deployments impact network design&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-5-5d2b"&gt;part 5&lt;/a&gt; we took a closer look at IP addresses, reserved ranges, and subnetting. We also reviewed network address translation (NAT) and how it's commonly used&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-6-556e"&gt;part 6&lt;/a&gt; we reviewed how virtualization impacted network design with the creation of virtual machines and container technologies&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-7-5d9"&gt;part 7&lt;/a&gt; we covered how cloud computing has impacted network design and the importance of understanding the shared responsibility model to create a scalable hybrid network architecture&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-8-2p9h"&gt;part 8&lt;/a&gt; we reviewed how network design has changed to account for a more distributed world where the data center is not the central component. We also explored the growing role of identity in networking, and how modern solutions help enable the concept of "zero trust"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While we've covered a lot of ground, there is plenty more to explore about networking out there. As you've seen throughout the posts, the same core concepts hold, but implementations have shifted as new technology waves impacted the industry. Workloads are more distributed, and a solid understanding of networking will make you a more effective software engineer. Whether it's troubleshooting an application problem during an incident or understanding why an API call might be failing to reach a target service during development, the network is a critical component. &lt;/p&gt;

&lt;p&gt;I was inspired to write this blog series because most software engineers starting at larger companies may not have taken a formal course in networking and usually haven't encountered something as complex as an enterprise network. Not everyone needs to become a network engineer, but a foundation of knowledge in the space will serve you well. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Enterprise Networks Unveiled: A Software Engineer's Guide to the Basics (Part 8)</title>
      <dc:creator>Anthony Barbieri</dc:creator>
      <pubDate>Wed, 16 Apr 2025 14:28:35 +0000</pubDate>
      <link>https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-8-2p9h</link>
      <guid>https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-8-2p9h</guid>
      <description>&lt;p&gt;We've come a long way in just a few posts from learning about the &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-2-42ni"&gt;TCP/IP Stack&lt;/a&gt; to cloud networking constructs in the last &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-7-5d9"&gt;post&lt;/a&gt;. Building on all of this knowledge, let's now take a look at how network security has evolved as the world has gone more distributed. In this post we'll explore modern approaches to remote access, what capabilities modern network security solutions provide, what the buzzword phrase "zero trust" is all about, and how identity's role in network controls has taken center stage.&lt;/p&gt;

&lt;p&gt;Although there have been some recent pushes to return to office, we live in a very distributed world. Salespeople might need to connect with different enterprise services while they're on the road, and many companies have adopted hybrid work policies where their employees can work from home. &lt;/p&gt;

&lt;p&gt;Historically, we've relied on remote access VPNs to tackle this issue. We briefly covered this functionality in &lt;a href="https://dev.to/prince_of_pasta/enterprise-networks-unveiled-a-software-engineers-guide-to-the-basics-part-4-3024"&gt;part 4&lt;/a&gt; of the series. As a reminder, a client VPN builds a tunnel across the internet to a termination point, which typically resides in a data center. The client device (usually a laptop) is assigned an IP address from a pool (using the subnetting techniques we've previously discussed) reserved for VPN users. It then interacts with other devices on the network as if it were at one of the sites (as shown below in an &lt;a href="https://www.paloaltonetworks.com/cyberpedia/what-is-a-remote-access-vpn" rel="noopener noreferrer"&gt;overview&lt;/a&gt; of the technology by Palo Alto). If the network uses 10.0.0.0/8 as in our previous examples, the VPN will assign an address subnetted from that range.&lt;/p&gt;
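&lt;p&gt;That carving is easy to illustrate with Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module; the /22 pool size below is just an example:&lt;/p&gt;

```python
import ipaddress

# The corporate network from the earlier examples.
corp = ipaddress.ip_network("10.0.0.0/8")

# Carve /22 pools out of it and reserve the first for VPN users.
vpn_pool = next(corp.subnets(new_prefix=22))
print(vpn_pool)                # 10.0.0.0/22
print(vpn_pool.num_addresses)  # 1024 addresses in the pool
```

&lt;p&gt;A laptop handed 10.0.1.57 from this pool is indistinguishable, routing-wise, from a device sitting in the office.&lt;/p&gt;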

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl0844o63c8bp6vv3bey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl0844o63c8bp6vv3bey.png" alt="VPN diagram" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a "full tunnel" configuration, VPNs will send all network traffic across the tunnel rather than using the local internet provider as an ingress point for users working from home. If a user is geographically far away from the termination point, this can cause a significant impact to performance. Packets can't break the laws of physics. In a "split tunnel" configuration where some traffic uses the local internet connection instead, security teams will lose visibility of the traffic not sent down the tunnel, which can increase the complexity of responding to an incident.&lt;/p&gt;

&lt;p&gt;The management overhead of a remote access VPN at scale can also be significant if there are requirements to logically separate different populations of users. This means creating more pools of addresses and managing which users are assigned to which pools. Firewall rules for VPNs typically use IP address ranges as a proxy for grouping users who should have the same level of access. This can also increase complexity for the user, who may need multiple profiles to connect to all their required systems.&lt;/p&gt;

&lt;p&gt;Cloud services only added to the complexity. While users may still backhaul to their data centers, more and more services are being modernized into SaaS or IaaS offerings and moved out. The "data gravity" is leaving the data center and becoming much more distributed. This has led network security vendors to reconsider their approaches to the problem. &lt;/p&gt;

&lt;p&gt;Learning from the success of the cloud, services previously provided by physical devices became virtualized. Rather than connecting to a data center owned by your company, your VPN client can connect to a closer termination point and use that provider's private network to route where needed. Another way to think about this connectivity model is as an inverted Content Delivery Network (CDN): remote users find their closest point of presence and use it to communicate. Many vendors add additional services once traffic enters their network. For example, Zscaler, a popular vendor in the space, provides numerous connectivity points across the globe (shown &lt;a href="https://trust.zscaler.com/zscaler.net/data-center-map" rel="noopener noreferrer"&gt;here&lt;/a&gt;) and offers a number of different capabilities. Other vendors in the space include Palo Alto Networks, Netskope, and Cato Networks.&lt;/p&gt;

&lt;p&gt;The combined stack of capabilities goes by the industry acronym SASE, or Secure Access Service Edge. The term emerged after a number of point solutions tackled different pieces of the puzzle. The stack includes the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secure Web Gateway: inspecting outbound internet traffic for threats and policy violations&lt;/li&gt;
&lt;li&gt;Cloud Access Security Broker: inspecting cloud-bound (typically SaaS) requests for improper data usage and identifying "shadow IT" usage of unsanctioned services&lt;/li&gt;
&lt;li&gt;Firewall-as-a-Service (FWaaS): scalable firewall services including threat detection and other advanced capabilities (e.g., intrusion detection) beyond IP address-based rules&lt;/li&gt;
&lt;li&gt;Software-Defined Wide Area Network (SD-WAN): as the variety of connection types between sites grows (MPLS, cloud connections such as Direct Connect/ExpressRoute, direct internet access), this technology helps manage connectivity across the different links&lt;/li&gt;
&lt;li&gt;Zero Trust Network Access (ZTNA): Uses authentication and authorization based on a user's identity, groups, and attributes about the connection to verify if a request should be allowed to be sent to the target application. This also holds true on the corporate network, where trust is not given to another device just because it is on the same 10.x.x.x network. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The image below (credit to Aruba Networks) captures the transformation from a data-center-centric model to the data center being just another site, like an office or branch. The outer circle in the "now" section provides the various services discussed above as users and services communicate across cloud and on-premise networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosf4nqfh3tvtwug9sdq0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosf4nqfh3tvtwug9sdq0.jpg" alt="DC centric vs SASE" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While exact implementations vary by vendor, this model allows internet- and cloud-bound traffic to be processed efficiently at the edge, while still permitting necessary connections to reach back into the data center. In addition, these systems can perform host posture validation to ensure endpoint security software is running on a laptop. Posture validation might also check whether disk encryption is enabled or a certain OS patch level has been reached. These systems also use identity-based rules to limit who can access which systems. For example, you might allow only the human resources organization to access the payroll system's management interface, whether hosted in the data center or as a particular SaaS application. This identity-based approach helps avoid the complexity described above for traditional remote access VPNs.&lt;/p&gt;
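&lt;p&gt;To make the identity- and posture-based model concrete, here is a minimal sketch in Python of how such an access decision might combine who the user is with the state of their device. The attribute and field names are hypothetical, not any vendor's actual schema.&lt;/p&gt;

```python
# Hypothetical ZTNA-style policy check: identity, group membership,
# and device posture must all pass before a request is forwarded.
# All field names here are illustrative, not a real vendor schema.

MIN_OS_PATCH = 7  # assumed minimum patch level for a "healthy" device

def is_request_allowed(user, device, target_app):
    # Identity: only members of the app's allowed group may reach it.
    if target_app["allowed_group"] not in user["groups"]:
        return False
    # Posture: disk encryption and endpoint security must be active...
    if not (device["disk_encrypted"] and device["edr_running"]):
        return False
    # ...and the OS must be at or above the required patch level.
    if device["os_patch_level"] >= MIN_OS_PATCH:
        return True
    return False

hr_user = {"id": "alice", "groups": {"human-resources"}}
healthy_laptop = {"disk_encrypted": True, "edr_running": True, "os_patch_level": 9}
payroll = {"name": "payroll-admin", "allowed_group": "human-resources"}

print(is_request_allowed(hr_user, healthy_laptop, payroll))  # True
stale_laptop = dict(healthy_laptop, os_patch_level=3)
print(is_request_allowed(hr_user, stale_laptop, payroll))    # False
```

&lt;p&gt;A real ZTNA broker evaluates far richer signals (location, time of day, risk scores) and re-evaluates them continuously, but the shape of the decision is the same: identity and posture both have to pass.&lt;/p&gt;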

&lt;p&gt;The shift to the above model can take time, especially if the corporate network previously had limited segmentation and more of a "castle and moat" posture (high levels of trust once on the private network). Moving too quickly from allow-by-default to deny-by-default (with explicit allows) would likely break a business.&lt;/p&gt;
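&lt;p&gt;One common pattern for managing that transition (sketched below in Python as a general pattern, not a specific product feature) is to run new deny-by-default rules in a log-only "monitor" mode first, so traffic that would be blocked surfaces as alerts rather than outages:&lt;/p&gt;

```python
# Sketch of a staged rollout: the same deny-by-default policy can run in
# "monitor" mode (log what would be blocked) before "enforce" mode.
# Rule fields and group/app names are illustrative only.

ALLOW_RULES = [
    {"group": "human-resources", "app": "payroll-admin"},
    {"group": "engineering", "app": "ci-dashboard"},
]

def evaluate(group, app, mode="monitor"):
    allowed = any(r["group"] == group and r["app"] == app for r in ALLOW_RULES)
    if allowed:
        return "allow"
    if mode == "monitor":
        # Surface the gap to the security team without breaking users.
        print(f"WOULD BLOCK: {group} to {app}")
        return "allow"
    return "deny"

print(evaluate("finance", "payroll-admin", mode="monitor"))  # allow (but logged)
print(evaluate("finance", "payroll-admin", mode="enforce"))  # deny
```

&lt;p&gt;Reviewing the "would block" log before flipping to enforce mode is how teams find the legitimate flows nobody remembered to write an allow rule for.&lt;/p&gt;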

&lt;p&gt;Similar to the benefit for endpoints, the SD-WAN component can process internet-bound traffic from remote sites more efficiently at the edge rather than relying on the data center(s) to handle all of it. This matches the more distributed shape modern networks are taking.&lt;/p&gt;
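&lt;p&gt;The edge-breakout idea can be sketched the same way. The destination categories and link names below are illustrative, not a real SD-WAN configuration: trusted SaaS traffic exits the local internet link, internal applications are reached over the private links, and everything else is steered through the security service edge:&lt;/p&gt;

```python
# Sketch of an SD-WAN steering decision: send trusted SaaS traffic out the
# local internet link, backhaul internal traffic to the data center, and
# route the rest via the SASE point of presence. Names are illustrative.

TRUSTED_SAAS = {"office365.example", "crm.example"}
INTERNAL_APPS = {"payroll.corp.example", "erp.corp.example"}

def pick_link(destination):
    if destination in INTERNAL_APPS:
        return "mpls-to-datacenter"       # keep internal traffic on private links
    if destination in TRUSTED_SAAS:
        return "local-internet-breakout"  # avoid hairpinning through the DC
    return "security-service-edge"        # unknown traffic goes via the SASE POP

print(pick_link("crm.example"))           # local-internet-breakout
print(pick_link("payroll.corp.example"))  # mpls-to-datacenter
```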

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;As we wrap up our exploration of network security evolution, here are the essential points to remember:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remote Access Has Evolved Beyond Traditional VPNs - While VPNs served us well for years, they were designed for a centralized world where resources primarily lived in corporate data centers.&lt;/li&gt;
&lt;li&gt;Data Gravity Has Shifted - With the proliferation of SaaS and IaaS offerings, our resources are now distributed across multiple environments, necessitating new security approaches.&lt;/li&gt;
&lt;li&gt;SASE Provides a Comprehensive Solution - The Secure Access Service Edge framework combines multiple capabilities (SWG, CASB, FWaaS, SD-WAN, ZTNA) to address the challenges of securing a distributed workforce accessing distributed resources.&lt;/li&gt;
&lt;li&gt;Identity Has Replaced IP - Modern security approaches use identity attributes rather than network location to determine access privileges, reducing management complexity and improving security posture.&lt;/li&gt;
&lt;li&gt;Edge Processing Improves Performance - By processing traffic at distributed points of presence rather than backhauling everything to central data centers, modern solutions provide better user experiences.&lt;/li&gt;
&lt;li&gt;Zero Trust Is More Than a Buzzword - The principle of "never trust, always verify" applies not just to remote access but also to internal networks, addressing the limitations of traditional "castle and moat" security models.&lt;/li&gt;
&lt;li&gt;Posture Assessment Adds Context - Beyond just who you are, modern solutions consider the security state of your device when making access decisions, adding an important layer of protection and enabling real-time adjustments.&lt;/li&gt;
&lt;li&gt;Transition Requires Planning - Moving from traditional perimeter-based security to a zero-trust model involves careful implementation to avoid disrupting business operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As networks continue to evolve with containerization, microservices, and increasingly distributed architectures, these modern security approaches will continue to adapt. The future likely holds even tighter integration between identity, network controls, and application security—further blurring the lines between traditional security domains.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
