<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lekshmi Chandra</title>
    <description>The latest articles on DEV Community by Lekshmi Chandra (@lek890).</description>
    <link>https://dev.to/lek890</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F262377%2Fbec267f9-e6ce-452a-a1e2-e696540f17f9.png</url>
      <title>DEV Community: Lekshmi Chandra</title>
      <link>https://dev.to/lek890</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lek890"/>
    <language>en</language>
    <item>
      <title>The Art of Predictable Estimations (Part 2): Staying Aligned Through Delivery</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Tue, 10 Mar 2026 08:12:05 +0000</pubDate>
      <link>https://dev.to/lek890/the-art-of-predictable-estimations-part-2-staying-aligned-through-delivery-1l25</link>
      <guid>https://dev.to/lek890/the-art-of-predictable-estimations-part-2-staying-aligned-through-delivery-1l25</guid>
      <description>&lt;p&gt;In Part 1, we looked at how to build predictability into your initial estimates. But creating reliable plans isn’t a one-time exercise—it’s a continuous practice of staying aligned with your team, anticipating change, and protecting your delivery schedule. Here are the next steps to keep your estimates dependable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Re-Evaluate and Communicate Early&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Plans drift. People fall sick, requirements change, or unexpected events happen. Regularly check if your estimates still hold. When you spot risks, communicate early—transparency builds trust and gives you room to negotiate timelines before the pressure hits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always Include a Buffer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Last-minute surprises are inevitable. Protect your team by adding a buffer sprint. If nothing comes up, you finish early and look good. If something unexpected arises, you have breathing room to absorb it without burning out your team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lock in Cross-Team Deadlines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Dependencies can derail even the best plans. If your release is on September 27, make September 19 the cut-off for feedback from other teams. Managing upstream deadlines shields your team from last-minute chaos and keeps focus where it belongs—on delivering quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run Estimates by Your Team for Feedback&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before committing to stakeholders, validate your estimates with the team doing the work. Walk through the high-level plan and ask for risks. Revisit this every few weeks—collect challenges, restate the launch plan, and nudge the team to think about what could go wrong. These conversations surface risks early and strengthen team ownership.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Visibility and Transparency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use every tool available to keep your plans visible. Create graphs, charts, and dashboards that track progress accurately—double-check that your data comes from the right filters. Continuously analyze bottlenecks and highlight progress in your communications. Metrics aren’t just for reporting—they help the team see where they stand and where to adjust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway Nugget&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Predictable estimations thrive on transparency—when your team and stakeholders can see the plan, everyone can trust the plan and give their best.&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>management</category>
      <category>productivity</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>The Art of Predictable Estimations (Part 1): Turning Uncertainty into Clarity</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Tue, 10 Mar 2026 08:10:59 +0000</pubDate>
      <link>https://dev.to/lek890/the-art-of-predictable-estimations-part-1-turning-uncertainty-into-clarity-2d8o</link>
      <guid>https://dev.to/lek890/the-art-of-predictable-estimations-part-1-turning-uncertainty-into-clarity-2d8o</guid>
      <description>&lt;p&gt;In software delivery, predictability matters more than precision. The goal is not to guess the future— it is to navigate uncertainty without surprises and reach the strategic goal. Reliable estimates create trust, align teams, and protect focus. Here are five ways to make your estimates truly dependable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decode all possible unknowns&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most estimations fail because we overlook the predictable distractions—vacations, training, team events, or other likely absences. None of these are “surprises”; they’re just unplanned. Collect them early, keep a living list, and bake them into your estimates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build visibility into the timeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Is your delivery overlapping with Christmas or the summer holiday rush? Make it visible to everyone. If the engineering team is working during the holidays while the product team is on vacation, prepare more feature refinements ahead of time so the work stays on track. A simple timeline showing these “impact zones” lets the team prepare and helps you adjust estimates before deadlines become unmanageable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create sprint breakdowns&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On a tight schedule, a high-level sprint breakdown can reveal gaps early. Even rough outlines help identify risks, enabling product teams to swap or cut features before timelines crumble.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prioritize ruthlessly: Must-Have vs. Should-Have&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not every feature belongs in your MVP. Keep asking, “Is this launch-critical, or can it follow right after?” Clarity here is the difference between meeting a delivery date and slipping into endless scope creep.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Estimate in ranges, not dates&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Avoid committing to single-point dates like “September 15.” Instead, offer ranges:&lt;/p&gt;

&lt;p&gt;Early: “Second half of September”&lt;/p&gt;

&lt;p&gt;Midway: “On track for late September”&lt;/p&gt;

&lt;p&gt;Near completion: “Tentatively September 26, unless minor issues push to the 28th.”&lt;/p&gt;

&lt;p&gt;Ranges give flexibility, keep communication honest, and help manage expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway nugget&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Predictability isn’t about guessing right—it’s about preparing your team and stakeholders for what’s ahead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s next?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Part 2, we’ll explore the habits that keep your estimates reliable over time: re-evaluating progress, building buffers, and managing cross-team dependencies to avoid last-minute surprises.&lt;/p&gt;

</description>
      <category>management</category>
      <category>productivity</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Engineering Management Nugget #7: Undercommit and Overdeliver (On Purpose)</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Sun, 01 Feb 2026 05:05:39 +0000</pubDate>
      <link>https://dev.to/lek890/engineering-management-nugget-7-undercommit-and-overdeliver-on-purpose-g7c</link>
      <guid>https://dev.to/lek890/engineering-management-nugget-7-undercommit-and-overdeliver-on-purpose-g7c</guid>
      <description>&lt;p&gt;One of the fastest ways teams lose trust is through overcommitment. Ambitious plans may look good on paper, but they often result in missed deadlines, stress, and reactive delivery.&lt;/p&gt;

&lt;p&gt;For an EM, planning is not about optimism. It is about reliability. Overcommitment usually comes from optimistic estimates, unaccounted dependencies, constant context switching, and pressure from multiple stakeholders. When teams consistently overcommit, they spend more time explaining delays than delivering outcomes. Over time, this also leads to burnout.&lt;/p&gt;

&lt;p&gt;Undercommitting does not mean lowering standards. It means planning with real constraints in mind — team capacity, focus time, and uncertainty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works in practice:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In practice, this shows up as planning for less than theoretical capacity, making risks explicit, and treating stretch goals as optional rather than expected. Recommitment should happen only when progress is clearly closer to the finish line — and only if needed. Stakeholders value predictability far more than ambition when it comes to delivery. &lt;/p&gt;

&lt;p&gt;Practically, it is important not to accept every timeline suggestion from stakeholders. Before committing, consider team capacity: existing priorities, vacations and dependency resolutions. Draft a plan and review it with the team as a confidence check. Always add a buffer of at least a sprint or a week, because unexpected issues are inevitable. The buffer creates breathing room and prevents corners from being cut. When this is communicated transparently, stakeholders better understand why timelines look the way they do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When done intentionally, undercommitting and overdelivering supports realistic planning and more effective delivery.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>leadership</category>
      <category>engineeringmanagement</category>
    </item>
    <item>
      <title>Engineering Management Nugget #6: From Firefighting to Architecting</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Sun, 01 Feb 2026 04:44:04 +0000</pubDate>
      <link>https://dev.to/lek890/engineering-management-nugget-6-from-firefighting-to-architecting-2pme</link>
      <guid>https://dev.to/lek890/engineering-management-nugget-6-from-firefighting-to-architecting-2pme</guid>
      <description>&lt;p&gt;The proportion of time a team spends firefighting — responding to incidents, fixing urgent issues, and reacting to last-minute escalations — is an important signal for an EM. While some level of firefighting is inevitable, living in this mode usually points to deeper system gaps.&lt;/p&gt;

&lt;p&gt;Firefighting often feels productive because problems get solved quickly and it feels like we saved the day. However, firefighting pulls effort away from planned work and slowly builds a heroic culture. Over time, constant firefighting hides root causes such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;unclear requirements and expectations&lt;/li&gt;
&lt;li&gt;weak ownership&lt;/li&gt;
&lt;li&gt;lack of in-depth system knowledge&lt;/li&gt;
&lt;li&gt;missing or fragile processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The cost eventually shows up as burnout, fragile delivery, and repeated issues.&lt;/p&gt;

&lt;p&gt;An EM can influence this by architecting the system to keep firefighting at a minimum.&lt;br&gt;
This is not about designing code — it’s about designing how work flows through the team.&lt;/p&gt;

&lt;p&gt;This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clear ownership and responsibility boundaries&lt;/li&gt;
&lt;li&gt;predictable planning and review cycles&lt;/li&gt;
&lt;li&gt;explicit decision-making frameworks&lt;/li&gt;
&lt;li&gt;shared definitions of “done” and quality&lt;/li&gt;
&lt;li&gt;guardrails that prevent common failure modes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is not to eliminate problems, but to make them visible earlier and easier to handle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How This Works in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track recurring incidents and ask why they repeat&lt;/li&gt;
&lt;li&gt;Invest time in preventing the next failure, not just fixing the current one&lt;/li&gt;
&lt;li&gt;Use retrospectives to adjust systems, not assign blame&lt;/li&gt;
&lt;li&gt;Gradually shift time from urgent tasks to structural improvements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Firefighting solves today’s problem.&lt;br&gt;
As systems improve through thoughtful architecting, emergencies reduce — and when they do happen, time and effort are used more effectively.&lt;/p&gt;

</description>
      <category>engineeringmanagement</category>
      <category>leadership</category>
      <category>teamhealth</category>
    </item>
    <item>
      <title>Engineering Management Nugget #5: Measuring Without Damaging Culture</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Sun, 01 Feb 2026 04:31:49 +0000</pubDate>
      <link>https://dev.to/lek890/engineering-management-nugget-5-measuring-without-damaging-culture-2ie3</link>
      <guid>https://dev.to/lek890/engineering-management-nugget-5-measuring-without-damaging-culture-2ie3</guid>
      <description>&lt;p&gt;Metrics, when used well, create clarity and alignment.&lt;br&gt;
Used poorly, they drive fear and shallow optimization.&lt;br&gt;
An EM’s job is not just to measure, but to measure without breaking trust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Is Hard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Engineering work is complex and often non-linear. When metrics are treated as targets rather than signals, teams may optimize superficially just to move the number.&lt;/p&gt;

&lt;p&gt;Common failure modes include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;measuring individuals instead of systems&lt;/li&gt;
&lt;li&gt;using metrics to evaluate performance rather than to surface issues&lt;/li&gt;
&lt;li&gt;reacting to short-term fluctuations instead of long-term trends&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Metrics are not a scorecard. They are early warning signals of system and team health.&lt;/p&gt;

&lt;p&gt;Good metrics help answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where are we slowing down?&lt;/li&gt;
&lt;li&gt;Where is quality degrading?&lt;/li&gt;
&lt;li&gt;Where is work getting blocked?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How This Works in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Measure systems, not people. Focus on team-level flow, reliability, and quality.&lt;/p&gt;

&lt;p&gt;For example, a poor metric is the number of commits per developer — people have different coding and committing styles.&lt;br&gt;
Similarly, measuring the number or size of merge requests can be misleading. A one-line change can sometimes be more impactful than a 30-file refactor, such as a large renaming change.&lt;/p&gt;

&lt;p&gt;Instead, measuring DORA metrics is useful. They surface trends around lead time, review time, deployment frequency, and reliability.&lt;/p&gt;

&lt;p&gt;The most important signal is often the simplest:&lt;br&gt;
Is the team on track for the desired outcome within the planned timeframe? If not, that should trigger a conversation with the team.&lt;br&gt;
Transparency builds trust when metrics are not weaponized.&lt;/p&gt;

&lt;p&gt;Looking at trends rather than snapshots is critical. One bad week or month is usually noise. Long-running patterns are signals. Metrics should always be paired with context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As an EM, you set the tone that metrics are used to improve the system — not to rank, pressure, or compare individuals.&lt;br&gt;
When teams feel safe around metrics, they surface problems earlier — and that is the real win.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>leadership</category>
      <category>engineeringmanagement</category>
    </item>
    <item>
      <title>Engineering Management Nugget #4: Guardrails Over Control</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Sun, 01 Feb 2026 04:09:55 +0000</pubDate>
      <link>https://dev.to/lek890/engineering-management-nugget-4-guardrails-over-control-3j0f</link>
      <guid>https://dev.to/lek890/engineering-management-nugget-4-guardrails-over-control-3j0f</guid>
      <description>&lt;p&gt;From my perspective, building accountability in a team starts with understanding control.&lt;br&gt;
The real leverage of an engineering manager is not tighter control, but well-designed guardrails.&lt;/p&gt;

&lt;p&gt;Control often shows up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;frequent check-ins&lt;/li&gt;
&lt;li&gt;approval gates for small decisions&lt;/li&gt;
&lt;li&gt;defining implementation details&lt;/li&gt;
&lt;li&gt;monitoring activity instead of outcomes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Guardrails, on the other hand, look like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clear goals and success criteria&lt;/li&gt;
&lt;li&gt;explicit constraints around time, scope, and quality&lt;/li&gt;
&lt;li&gt;shared standards and principles&lt;/li&gt;
&lt;li&gt;predictable review and feedback loops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How This Works in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before work starts, make the why explicit.&lt;/p&gt;

&lt;p&gt;Then align on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;priorities&lt;/li&gt;
&lt;li&gt;non-negotiables&lt;/li&gt;
&lt;li&gt;acceptable trade-offs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use metrics and checkpoints as signals, not enforcement mechanisms.&lt;br&gt;
Step in only when guardrails are crossed — not before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An effective EM does not control execution.&lt;br&gt;
They design guardrails that allow teams to move fast, safely, and independently.&lt;/p&gt;

&lt;p&gt;Guardrails require trust and patience, but they scale far better than control.&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>engineeringmanagement</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Engineering Management Nugget #3: Stepping Back</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Sat, 31 Jan 2026 04:43:39 +0000</pubDate>
      <link>https://dev.to/lek890/engineering-management-nugget-3-stepping-back-455e</link>
      <guid>https://dev.to/lek890/engineering-management-nugget-3-stepping-back-455e</guid>
      <description>&lt;p&gt;One of the core struggles of engineering management is balancing guidance with ownership.&lt;br&gt;
Intervening too early harms autonomy; intervening too late creates avoidable risks. Knowing when to step back and when to step in is a hard but essential skill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Stepping Back Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The primary goal of an EM is to build a team that can consistently deliver high-quality outcomes.&lt;br&gt;
That only happens when teams learn to solve problems independently.&lt;/p&gt;

&lt;p&gt;Over-involvement reduces accountability and increases dependency on the manager.&lt;br&gt;
Stepping back is not neglect — it creates space for learning, ownership, and resilience.&lt;/p&gt;

&lt;p&gt;For EMs with a technical background, this is especially challenging. You will often have strong opinions on how something should be built. But in the EM role, your responsibility shifts from designing solutions to setting guardrails — clarity on goals, constraints, and risks — not the implementation itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How This Works in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Trust the team’s capability&lt;br&gt;
Engineers usually know how to tackle tasks. Your role is to remove blockers and watch for major deviations in effort or scope.&lt;/p&gt;

&lt;p&gt;Adjust based on experience&lt;br&gt;
If a task is new to a developer, explicitly account for the learning curve. This investment compounds into future expertise.&lt;br&gt;
For experienced engineers, step back further — let them lead and intervene mainly through questions that surface risks early.&lt;/p&gt;

&lt;p&gt;Ask, don’t override&lt;br&gt;
The right questions often do more than direct answers, while keeping ownership with the team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When stepping into the EM role, the focus shifts from building features to building the team and systems that deliver them.&lt;br&gt;
Your job is to make the why clear — and allow the team to own the how.&lt;/p&gt;

</description>
      <category>engineeringmanagement</category>
      <category>leadership</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Engineering Management Nugget #2: Managing tech debt</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Fri, 30 Jan 2026 04:39:51 +0000</pubDate>
      <link>https://dev.to/lek890/engineering-management-nugget-2-managing-tech-debt-2f6c</link>
      <guid>https://dev.to/lek890/engineering-management-nugget-2-managing-tech-debt-2f6c</guid>
      <description>&lt;p&gt;“Legacy” code is often treated as a problem to avoid.&lt;br&gt;
In reality, every long-lived codebase becomes legacy. The question is not whether tech debt exists, but how deliberately it is managed.&lt;/p&gt;

&lt;p&gt;Tech debt accumulates when codebases are not treated as systems that require ongoing maintenance. At the implementation level, this typically shows up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;outdated dependencies&lt;/li&gt;
&lt;li&gt;missing or brittle tests&lt;/li&gt;
&lt;li&gt;poor design patterns&lt;/li&gt;
&lt;li&gt;poor observability and health checks&lt;/li&gt;
&lt;li&gt;delayed upgrades to supported platforms or frameworks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Left unattended, these reduce development speed and increase delivery risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Tech Debt Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A simple analogy works here:&lt;br&gt;
you can cut a tree faster with a sharp axe than with a blunt one.&lt;/p&gt;

&lt;p&gt;Teams working on poorly maintained systems spend more time compensating for friction — slower builds, fragile deployments, unexpected regressions. Over time, this directly impacts predictability and morale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Engineering Managers Influence Tech Debt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The EM’s role is not to fix tech debt personally, but to shape how the team approaches maintenance.&lt;/p&gt;

&lt;p&gt;One effective approach is to build a culture of continuous maintenance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reserve a small portion of each sprint for improvements&lt;/li&gt;
&lt;li&gt;treat maintenance work as planned, scoped tasks&lt;/li&gt;
&lt;li&gt;prioritize it after sprint goals, not instead of them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is similar to gardening: small, regular effort prevents large cleanups later. Attempting to “fix everything” in one dedicated window often fails due to unclear scope and competing priorities.&lt;/p&gt;

&lt;p&gt;A continuous approach works better because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;maintenance tasks are refined and bounded&lt;/li&gt;
&lt;li&gt;engineers understand limits before starting&lt;/li&gt;
&lt;li&gt;work gets completed instead of deferred&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where This Shows Up in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate what you can: dependency update bots, CI checks, static analysis&lt;/li&gt;
&lt;li&gt;Keep tech debt visible in planning and refinement&lt;/li&gt;
&lt;li&gt;Use EM and Tech Lead collaboration: EMs provide outside perspective, Tech Leads drive execution&lt;/li&gt;
&lt;li&gt;Monitor the maintenance-to-feature ratio — enough to stay healthy, not so much that delivery stalls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tech debt management is like going to the gym: consistency matters more than intensity. The goal is system health, not perfection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tech debt is unavoidable. Neglect is optional.&lt;/p&gt;

&lt;p&gt;Approaching tech debt as continuous maintenance keeps systems healthy and delivery predictable — without slowing the business down.&lt;/p&gt;

</description>
      <category>codequality</category>
      <category>leadership</category>
      <category>management</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Engineering Management Nugget #1: Managing expectations</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Fri, 30 Jan 2026 04:11:52 +0000</pubDate>
      <link>https://dev.to/lek890/engineering-management-nugget-1-managing-expectations-4k09</link>
      <guid>https://dev.to/lek890/engineering-management-nugget-1-managing-expectations-4k09</guid>
      <description>&lt;p&gt;Missed deadlines and overtime are rarely caused by poor execution.&lt;br&gt;
They are usually the result of misaligned expectations early in the delivery cycle.&lt;/p&gt;

&lt;p&gt;When scope, capacity, or timelines are unclear, teams compensate late. Over time, this creates reactive delivery and reduces predictability. Managing this alignment is a core responsibility of an engineering manager.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Expectation Management Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Engineering managers operate between multiple stakeholders, but the primary responsibility is toward the engineering team. Sustainable delivery depends on it.&lt;/p&gt;

&lt;p&gt;Delivery risk typically accumulates through small, compounding factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;optimistic estimates&lt;/li&gt;
&lt;li&gt;incremental change requests&lt;/li&gt;
&lt;li&gt;unplanned dependencies&lt;/li&gt;
&lt;li&gt;fluctuating team availability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individually, these are manageable. Together, they distort timelines. The EM’s role is to surface these constraints early and realign expectations before they compound.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make Reality Visible&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Expectation management becomes easier when reality is visible.&lt;/p&gt;

&lt;p&gt;Clear views of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;team capacity&lt;/li&gt;
&lt;li&gt;delivery progress&lt;/li&gt;
&lt;li&gt;known risks and dependencies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;shift discussions from assumptions to facts. Transparency reduces surprise, and reduced surprise improves trust. When stakeholders understand constraints, collaboration improves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where This Shows Up in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the daily level, ambiguity is already a risk.&lt;br&gt;
If a developer has two high-priority tasks, clarifying sequence and trade-offs early prevents downstream delays. Over time, teams learn to surface these conflicts themselves.&lt;/p&gt;

&lt;p&gt;At the release level, expectation management is continuous.&lt;br&gt;
Feasibility checks, progress updates, and early risk signaling prevent last-minute corrections and delivery heroics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consistent delivery is the result of well-managed expectations, not increased pressure.&lt;/p&gt;

&lt;p&gt;Strong engineering management aligns reality early — for the team and for the product.&lt;/p&gt;

</description>
      <category>engineeringmanagement</category>
      <category>leadership</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Debugging AWS lambda locally</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Thu, 08 Jul 2021 12:37:45 +0000</pubDate>
      <link>https://dev.to/lek890/debugging-aws-lambda-in-local-17l6</link>
      <guid>https://dev.to/lek890/debugging-aws-lambda-in-local-17l6</guid>
      <description>&lt;p&gt;This post explains how to debug a lambda on a developers machine using &lt;code&gt;sam&lt;/code&gt; (serverless application model).&lt;/p&gt;

&lt;p&gt;This invokes the Lambda function locally in a Docker container and exits after the invocation completes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prepare the dependencies:
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. Template file
&lt;/h3&gt;

&lt;p&gt;As in any sam application, we need a template file that specifies the serverless application. It has to be in the root of the application; if it is on another path, it can be specified using the &lt;code&gt;--template&lt;/code&gt; parameter.&lt;/p&gt;
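
&lt;p&gt;A minimal template might look like the following (an illustrative sketch only; the resource name, CodeUri, handler, and runtime will depend on your application):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# template.yml (illustrative sketch)
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31

Resources:
  Publish:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: dist/
      Handler: index.handler
      Runtime: nodejs14.x
      Timeout: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;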

&lt;h3&gt;
  
  
  2. Docker
&lt;/h3&gt;

&lt;p&gt;Keep Docker running so that the Lambda can be deployed in a container.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Prepare incoming data
&lt;/h3&gt;

&lt;p&gt;Most Lambdas wake up on an event, and that event is passed as a parameter to the function, along with any request data. Since we are running the Lambda locally, we need to provide this data ourselves. Create a JSON file and put some sample data in it.&lt;/p&gt;

&lt;p&gt;For example, I would create a localTesting folder and keep a lambda-params.json in it. The data to be passed can be configured as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//localTesting/lambda-params.json

{
  "body": "{\"url\": \"/test-page\"}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will be passed as the &lt;code&gt;--event&lt;/code&gt; param.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Prepare env variables
&lt;/h3&gt;

&lt;p&gt;In addition to the incoming data, the Lambda will look for some environment variables during execution. We need to provide those too. Create another file called local-env.json in the localTesting folder with the required values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Publish": {
    "ENVIRONMENT_NAME": "staging",
    "S3_BUCKET_NAME": "staging-s3",
    "AWS_SDK_LOAD_CONFIG": 1
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variables for other entry points can be added as additional top-level keys in this file.&lt;/p&gt;

&lt;p&gt;This file can be specified in the &lt;code&gt;--env-vars&lt;/code&gt; parameter.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. CLI installations
&lt;/h3&gt;

&lt;p&gt;Time to install the SAM CLI. On macOS, the installation commands using Homebrew are the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew tap aws/tap
brew install aws-sam-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More help on installation: &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-mac.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-mac.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Grab the entry point name
&lt;/h3&gt;

&lt;p&gt;Almost there. Grab the name of the entry point we are going to trigger. It is the logical name of the Lambda resource in the template file. In my case, it is &lt;code&gt;Publish&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Find CodeUri path
&lt;/h3&gt;

&lt;p&gt;Go to the template file and find the CodeUri in your Lambda specification. This folder, which contains the Lambda logic, must be generated before running the Lambda.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Build
&lt;/h3&gt;

&lt;p&gt;If you are using Node, go to package.json and find the command that builds the code; running it should generate the CodeUri path from the previous step.&lt;/p&gt;
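
&lt;p&gt;For example, the build scripts in package.json might look like this (the script names and the use of tsc are assumptions; use whatever your project defines):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//package.json (illustrative)

"scripts": {
  "build": "tsc --outDir dist",
  "build:watch": "tsc --outDir dist --watch"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;npm run build&lt;/code&gt; should then produce the CodeUri folder.&lt;/p&gt;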

&lt;h3&gt;
  
  
  Run it
&lt;/h3&gt;

&lt;p&gt;Now that all the necessary dependencies are in place, we just have to run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam local invoke --env-vars localTesting/local-env.json --event localTesting/lambda-params.json "Publish"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This expects the template to be at the repo root. If it lives elsewhere, specify its path as well, using the &lt;code&gt;--template&lt;/code&gt; parameter.&lt;/p&gt;

&lt;p&gt;For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam local invoke --template src/template.yml --env-vars localTesting/local-env.json --event localTesting/lambda-params.json "Publish"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will print the lambda response on the terminal. You can now add logs to debug, or even build new features easily.&lt;/p&gt;

&lt;p&gt;Note: remember to rebuild after every change. If you keep the build running in watch mode, you don't have to rebuild manually.&lt;/p&gt;

&lt;p&gt;Bonus tip:&lt;/p&gt;

&lt;p&gt;If you want to test only the application logic of a lambda, you can invoke the built lambda function directly with the necessary params, without SAM.&lt;/p&gt;
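&lt;p&gt;A minimal sketch of that idea, with a toy handler inlined so the snippet stands alone. In a real project you would require() the built handler from your CodeUri path and pass it the contents of your params file; all names below are assumptions:&lt;/p&gt;

```javascript
// Toy stand-in for the built lambda handler; in practice something like:
// const { handler } = require('./dist/publish');
const handler = async (event) => ({
  statusCode: 200,
  body: JSON.stringify({ received: event.action }),
});

// Stand-in for the event payload (e.g. localTesting/lambda-params.json)
const event = { action: 'publish' };

// Invoke the application logic directly, no SAM involved
handler(event).then((response) => console.log(response.statusCode, response.body));
```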

</description>
    </item>
    <item>
      <title>Two ways to keep gitlab CI files maintainable</title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Mon, 12 Apr 2021 15:07:54 +0000</pubDate>
      <link>https://dev.to/lek890/two-ways-to-keep-gitlab-ci-files-maintainable-26de</link>
      <guid>https://dev.to/lek890/two-ways-to-keep-gitlab-ci-files-maintainable-26de</guid>
      <description>&lt;p&gt;Once we had a gitlab CI file. It was short and sweet. One year later, it grew 350 lines long. &lt;/p&gt;

&lt;p&gt;There were these problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Too much content - too much scrolling, hard to visualize and work on.&lt;/li&gt;
&lt;li&gt;Hard to disable some jobs temporarily - mostly when debugging infra or the test environment, or for an emergency deployment (may that never happen!).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's try to solve these with some GitLab features.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Leverage templating
&lt;/h3&gt;

&lt;p&gt;GitLab CI supports using templates within the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. &lt;/p&gt;

&lt;p&gt;Consider a sample &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file as follows&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
   - setup
   - soft-qa //lint and unit tests
   - build
   - hard-qa //e2e's
   - deploy-storybook
   - pack
   - notify-devs-staging-can-be-deployed
   - deploy-staging
   - notify-devs-prod-can-be-deployed
   - deploy-production
   - suggest-release-notes

variables: 
    var1: '1'
    // .. and so on

conditions:
    only_master: 
       // configs
    branches:
       // configs
    // and more...

## cache related configs

## setup related configs

## jobs for each for the stages start
.
.
.
. // 200 lines later
.
## jobs end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I will still keep the stages in the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; untouched so that I can get a preview of all the stages right at the start of the file.&lt;/p&gt;

&lt;p&gt;Let's now split this into smaller templates. &lt;/p&gt;

&lt;p&gt;For the templates, I will create a folder in the repo root called &lt;code&gt;ci-templates&lt;/code&gt;. Now let's extract one job out of the main file and place it in a template in this folder. Note: all these files have to be YAML to be included into a &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ci-templates/.soft-qa.yml

soft-qa:
  image: node:14.5
  stage: soft-qa
  &amp;lt;&amp;lt;: *npm_cache_pull
  allow_failure: false
  script:
    - yarn lint
    - yarn test:unit:ci
  artifacts:
    paths:
      - coverage/lcov.info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time to use the template. Go to &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; and include this file like below. I will place it at the position of the replaced job.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include: '/ci-templates/.soft-qa.yml'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use this syntax - with a relative path - because the file lives in the same repo. You can also keep the file in another repo on the same GitLab instance, or even in a public remote repository, and use it! Just update the syntax accordingly, like below:&lt;/p&gt;

&lt;p&gt;For another repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include:
  - project: 'my-space/my-another-project'
    file: '/templates/.build-template.yml'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For remote:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include:
  - remote: 'https://somewhere-else.com/example-project/-/raw/master/.build-template.yml'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a similar way, extract the other jobs and remove them from the .gitlab-ci file. I have abstracted the notify stages into a file called &lt;code&gt;.notifications.yml&lt;/code&gt; and the deployment-related jobs into &lt;code&gt;.deploy.yml&lt;/code&gt;, thus separating the concerns out of one single file. Now, include becomes a list like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include: 
   - '/ci-templates/.soft-qa.yml'
   - '/ci-templates/.build.yml'
   - '/ci-templates/.hard-qa.yml'
   - '/ci-templates/.deploy-storybook.yml'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the pipeline starts, all included files are evaluated and deep-merged into the .gitlab-ci file. &lt;/p&gt;

&lt;h3&gt;
  
  
  Catch, Catch, Catch
&lt;/h3&gt;

&lt;p&gt;Things are getting interesting. I have certain conditions, like only_tag and only_branches, in these jobs. How would I provide them to these files without duplicating them in every file?&lt;/p&gt;

&lt;p&gt;Enter &lt;code&gt;extends&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;I will consolidate my conditions in a file, say, &lt;code&gt;.conditions.yml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// /ci-templates/.conditions.yml

.only_tag:
  only:
    - /^v\d+\.\d+\.\d+$/
  // and more...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use it in a template, include it first in the .gitlab-ci file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include: 
   - '/ci-templates/.conditions.yml'
   - '/ci-templates/.soft-qa.yml'
   - '/ci-templates/.build.yml'
   - '/ci-templates/.hard-qa.yml'
   - '/ci-templates/.deploy-storybook.yml'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, in the template file, add &lt;code&gt;extends&lt;/code&gt;. &lt;code&gt;extends&lt;/code&gt; is a way of reusing configuration across files. YAML anchors can also be used, but only within the same file, so &lt;code&gt;extends&lt;/code&gt; is the way to reuse jobs from another template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy_staging:
  extends: .only_tag
  &amp;lt;&amp;lt;: *deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, deploy_staging will be created only for tags. &lt;/p&gt;

&lt;p&gt;Finally, the .gitlab-ci file will look something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
   - setup
   - soft-qa //lint and unit tests
   - build
   - hard-qa //e2e's
   - deploy-storybook
   - pack
   - notify-devs-staging-can-be-deployed
   - deploy-staging
   - notify-devs-prod-can-be-deployed
   - deploy-production
   - suggest-release-notes

variables: 
    var1: '1'
    // .. and so on

## cache related configs

## setup related configs

include: 
   - '/ci-templates/.conditions.yml'
   - '/ci-templates/.soft-qa.yml'
   - '/ci-templates/.build.yml'
   - '/ci-templates/.hard-qa.yml'
   - '/ci-templates/.deploy-storybook.yml'
   - '/ci-templates/.pack.yml'
   - '/ci-templates/.notifications.yml'
   - '/ci-templates/.deploy.yml'
   - '/ci-templates/.suggest-release-notes.yml'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's go back to the initial problems and check whether they are fixed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Too much content to scroll - now we can view the whole content in the IDE window - Fixed&lt;/li&gt;
&lt;li&gt;Hard to disable jobs - now we just have to comment out the template in the include list - Fixed&lt;/li&gt;
&lt;/ol&gt;
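&lt;p&gt;For instance, disabling one job temporarily becomes a one-line comment in the include list:&lt;/p&gt;

```yaml
include: 
   - '/ci-templates/.soft-qa.yml'
   - '/ci-templates/.build.yml'
   # - '/ci-templates/.hard-qa.yml'   # temporarily disabled while debugging infra
   - '/ci-templates/.deploy-storybook.yml'
```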

&lt;p&gt;How will I know if I commented out a required job? Say, by mistake, I commented out the pack job. Packed resources are necessary for deployment, but we would only find out much later, when the deployment runs and fails. To solve this, there is a way to mark dependencies on previous jobs. That is explained in the next step.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Use &lt;code&gt;needs&lt;/code&gt; or &lt;code&gt;dependencies&lt;/code&gt; as applicable
&lt;/h3&gt;

&lt;p&gt;One way to ensure that the dependent artifacts are available before starting a job is the &lt;code&gt;dependencies&lt;/code&gt; or &lt;code&gt;needs&lt;/code&gt; keyword.&lt;/p&gt;

&lt;p&gt;By default, all artifacts from previous stages are passed to each job. However, you can use the dependencies keyword to define a limited list of jobs to fetch artifacts from.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build_frontend:
   stage: build
   script: yarn build

deploy:
   stage: deployment
   dependencies: 
      - build_frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, if &lt;code&gt;build_frontend&lt;/code&gt; is not available while the templates are merged, the pipeline will report an error saying so. Easy to understand.&lt;/p&gt;

&lt;p&gt;Another utility is &lt;code&gt;needs&lt;/code&gt;. It is mainly used when you run jobs out of order and still want to ensure that the dependency job has completed before a job starts. The difference between &lt;code&gt;needs&lt;/code&gt; and &lt;code&gt;dependencies&lt;/code&gt; is that with &lt;code&gt;needs&lt;/code&gt;, a job no longer downloads all artifacts from previous stages; artifacts are only fetched from the jobs listed under &lt;code&gt;needs&lt;/code&gt;, controlled with the &lt;code&gt;artifacts&lt;/code&gt; flag like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy:
  needs:
   - job: build_1
     artifacts: true
   - job: build_2
     artifacts: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helped me reduce the complexity of the CI file. Hope it helps you too. &lt;/p&gt;

&lt;p&gt;Peace ✌️&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>betterprogramming</category>
      <category>cleancode</category>
    </item>
    <item>
      <title>Honeypot for bots implemented in alpine-nginx docker </title>
      <dc:creator>Lekshmi Chandra</dc:creator>
      <pubDate>Tue, 07 Jul 2020 17:08:19 +0000</pubDate>
      <link>https://dev.to/lek890/honeypot-for-bot-trap-in-alpine-nginx-docker-4387</link>
      <guid>https://dev.to/lek890/honeypot-for-bot-trap-in-alpine-nginx-docker-4387</guid>
      <description>&lt;p&gt;We can try to block the scraping bots which exhaust server resources by setting up a honeypot trap in nginx and block those unwanted IP's. This article is for opensource nginx on alpine linux base image. Nginx plus already has a bot trapping feature.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps:
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. Creating the trap
&lt;/h3&gt;

&lt;p&gt;We will add some invisible links to the web page. Human users won't click on them; only bots that scrape the page will access these specific hidden hrefs: &lt;code&gt;&amp;lt;a href="/one-trap-here"&amp;gt;&amp;lt;/a&amp;gt;&lt;/code&gt;. Trap ready.&lt;/p&gt;
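&lt;p&gt;One possible way to keep such a link present in the markup but invisible and unreachable for humans (the styling attributes here are assumptions; sophisticated bots may skip links hidden this way):&lt;/p&gt;

```html
&lt;!-- Invisible to humans, still followed by naive scrapers --&gt;
&lt;a href="/one-trap-here" style="display: none" tabindex="-1" aria-hidden="true"&gt;&lt;/a&gt;
```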

&lt;h3&gt;
  
  
  2. Add rule in nginx to handle the traps
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   location /one-trap-here {
      include honeytrap.conf;
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the trap is visited, nginx includes the honeytrap conf.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Writing honeytrap.conf
&lt;/h3&gt;

&lt;p&gt;This is another nginx configuration file, which handles executing the script that blocks the IP that accessed the page.&lt;/p&gt;

&lt;p&gt;Nginx cannot execute scripts on its own; the Common Gateway Interface (&lt;code&gt;CGI&lt;/code&gt;) is used for this purpose. &lt;code&gt;FastCGI&lt;/code&gt; manages script execution efficiently for a large number of incoming requests. &lt;/p&gt;

&lt;p&gt;Since we only need to run a simple script that blocks the IP, we can use &lt;code&gt;fcgiwrap&lt;/code&gt;, a lightweight FastCGI wrapper.&lt;/p&gt;

&lt;p&gt;So, we can install &lt;code&gt;fcgiwrap&lt;/code&gt;,&lt;/p&gt;

&lt;p&gt;and &lt;code&gt;honeytrap.conf&lt;/code&gt; will look something like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fastcgi_intercept_errors off;
fastcgi_pass unix:/run/fcgiwrap-unix.sock;  
include fastcgi_params;
root /usr/local/libexec;
fastcgi_param SCRIPT_FILENAME $document_root/block-ip.cgi;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fcgiwrap process communicates with nginx through a socket file, which we have to create in &lt;code&gt;/run/&lt;/code&gt;, owned by the same user that nginx runs as.&lt;/p&gt;

&lt;p&gt;Once fcgiwrap is installed, the following starts it listening on a socket file: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;/usr/bin/fcgiwrap -s unix:/run/fcgiwrap-unix.sock &amp;amp;&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Add CGI script
&lt;/h3&gt;

&lt;p&gt;The content of /usr/local/libexec/block-ip.cgi can execute a shell script that does the actual blocking, and return the HTTP status code applicable in this scenario.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh

echo "Status: 410 Gone"
echo "Content-type: text/plain"
echo

echo "Get lost, $REMOTE_ADDR!"
/usr/local/bin/block-ip.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't forget to make the script executable.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Add shell script
&lt;/h3&gt;

&lt;p&gt;Basic firewalling in Linux is handled by netfilter, and we can add rules to iptables to drop any incoming request from a specific IP.&lt;/p&gt;

&lt;p&gt;iptables has chains for INPUT, FORWARD and OUTPUT packets, and you can specify what action to take when a specific address makes a request, like below:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/sbin/iptables -A trap1 -s ${REMOTE_ADDR} -j DROP&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This appends a rule to the trap1 chain that drops all requests from this remote address. Further, the trap1 chain needs to be hooked into the INPUT, FORWARD and OUTPUT chains.&lt;/p&gt;

&lt;p&gt;A cleaner approach is to use ipset. With ipset, we can create sets for IPv4 addresses, IPv6 addresses, host names, etc. - basically, you categorize the items and then reference the sets from iptables.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ipset -A trap1ipset ${REMOTE_ADDR}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To disconnect any keep-alive connections to nginx, we can also use &lt;code&gt;conntrack-tools&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The first step is to create ipsets for IPv4 and IPv6 addresses and add them to iptables. Note that the separate ip6tables package is required for IPv6 address handling. This has to be done when the docker image boots up, possibly in the startup scripts.&lt;br&gt;
A sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ipset -N ipv4trap iphash family inet
ipset -N ipv6trap iphash family inet6
iptables -A INPUT -m set --match-set ipv4trap src -j DROP
ip6tables -A INPUT -m set --match-set ipv6trap src -j DROP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the anatomy of the shell script is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
IPT=/sbin/iptables

if [[ -z ${REMOTE_ADDR} ]]; then
    if [[ -z "$1" ]]; then
        echo "REMOTE_ADDR not set!"
        exit 1
    else
        REMOTE_ADDR=$1
    fi
fi

if [[ "$REMOTE_ADDR" != "${1#*[0-9].[0-9]}" ]]; then
  ipset -A ipv4trap ${REMOTE_ADDR}
  /usr/sbin/conntrack -D -s ${REMOTE_ADDR}
elif [[ "$REMOTE_ADDR" != "${1#*:[0-9a-fA-F]}" ]]; then
  ipset -A ipv6trap ${REMOTE_ADDR}
  /usr/sbin/conntrack -D -s ${REMOTE_ADDR}
else
  echo "Unrecognized IP format '$1'"
fi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Important: to access the network utilities, docker needs added privileges at startup. &lt;/p&gt;

&lt;p&gt;Either the &lt;code&gt;--cap-add NET_ADMIN&lt;/code&gt; flag has to be passed to the docker run command&lt;br&gt;
or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cap-add 
  - NET_ADMIN 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;has to be added in the docker compose file.&lt;/p&gt;

&lt;p&gt;Now, this can be tested by accessing one of the trap URLs. Look for the address getting added to the ipset referenced from the INPUT chain. The next attempt to access the container will be dropped.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;iptables --list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ipset list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;can be used to view details.&lt;/p&gt;

&lt;p&gt;This solution could be made persistent, with the IPs saved and loaded back into the ipsets when the container boots up.&lt;/p&gt;

&lt;p&gt;For that, &lt;code&gt;ipset save&lt;/code&gt; and &lt;code&gt;ipset restore&lt;/code&gt; come in handy - they write the in-memory IPs to a file and restore them.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ipset save bad-ips -f ipset-bad-ips.backup&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;and&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ipset restore -! &amp;lt; ipset-bad-ips.backup&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Adaptations for docker:&lt;/p&gt;

&lt;p&gt;Copying the scripts and making them executable can be done in the Dockerfile as needed.&lt;/p&gt;

&lt;p&gt;Creating the socket file can be done in a startup script or via the startup CMD in the Dockerfile.&lt;/p&gt;
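&lt;p&gt;A sketch of the Dockerfile additions, assuming the paths used above (the package names follow Alpine's repositories; adjust to your base image and file layout):&lt;/p&gt;

```dockerfile
FROM nginx:alpine

# fcgiwrap for CGI execution, ipset/iptables/ip6tables for blocking,
# conntrack-tools for dropping keep-alive connections
RUN apk add --no-cache fcgiwrap ipset iptables ip6tables conntrack-tools

COPY honeytrap.conf /etc/nginx/honeytrap.conf
COPY block-ip.cgi   /usr/local/libexec/block-ip.cgi
COPY block-ip.sh    /usr/local/bin/block-ip.sh
RUN chmod +x /usr/local/libexec/block-ip.cgi /usr/local/bin/block-ip.sh
```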

</description>
      <category>nginx</category>
      <category>bottrap</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
