<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tim Abell</title>
    <description>The latest articles on DEV Community by Tim Abell (@timabell).</description>
    <link>https://dev.to/timabell</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F101208%2F9922f2e8-09ff-4e05-81ab-87de249a79fa.jpeg</url>
      <title>DEV Community: Tim Abell</title>
      <link>https://dev.to/timabell</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/timabell"/>
    <language>en</language>
    <item>
      <title>[Boost]</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Fri, 13 Mar 2026 22:29:56 +0000</pubDate>
      <link>https://dev.to/timabell/-3ink</link>
      <guid>https://dev.to/timabell/-3ink</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/david_whitney" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F370904%2F17c75e16-5ae8-4b0a-9a82-219408a144f2.jpg" alt="david_whitney"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/david_whitney/existential-dread-and-the-end-of-programming-39kp" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Existential Dread and the End of Programming&lt;/h2&gt;
      &lt;h3&gt;David Whitney ・ Feb 18&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#architecture&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#programming&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>Should you create tickets for tech tasks?</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Mon, 07 Oct 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/should-you-create-tickets-for-tech-tasks-1ki4</link>
      <guid>https://dev.to/timabell/should-you-create-tickets-for-tech-tasks-1ki4</guid>
      <description>&lt;p&gt;In the manner of choosing the colour to paint the bikeshed, the decision of whether to create a ticket for every single tiny commit, no matter whether it’s a giant feature or the tiniest whitespace cleanup in the readme file continues to consume countless hours. This question is not really that important when it comes to delivering software, yet it still comes up from time to time and there is a right and wrong answer, which should be modified only with consideration for the current situation.&lt;/p&gt;

&lt;h2&gt;
  
  
  In short
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Correct: create tickets only for features and larger technical work. Allow engineers to commit smaller improvements ad-hoc.&lt;/li&gt;
&lt;li&gt;Incorrect: create an atmosphere of stifling bureaucracy where all attempts to keep the work area tidy require sign-off from the CFO and approval from a product owner who has never written a line of code (some have, but that’s a rare wonder).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Who cares?
&lt;/h2&gt;

&lt;p&gt;If anyone wonders why engineers get &lt;em&gt;so&lt;/em&gt; bent out of shape about this seemingly harmless request, imagine if someone asked you to fill out a form and submit it for every single piece of your work that you currently don’t have to. Want to pick up the phone?… create a ticket. Want to send an email?… create a ticket. Can you imagine that this might make you a) miserable, b) less productive, and c) give up trying to do more than the bare minimum? By requiring tickets for everything you are introducing friction directly into the path of a programmer’s most important output - the codebase. We instinctively know that fast iteration and short cycle times lead to better delivery of value to users, and instinctively dislike anything that gets in the way. Note that “auditing” and “why it changed” are already covered by source control (git) history and pull requests, so adding tickets is redundant toil.&lt;/p&gt;
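&lt;p&gt;To illustrate that last point: the who/when/what of every change is already in git, and the commit message carries the “why”. For example (file path illustrative):&lt;/p&gt;

```shell
# show the audit trail for one file: short hash, author, date, message
git log --format='%h %an %ad %s' -- README.md
```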

&lt;h2&gt;
  
  
  Turning to GPT for explanations
&lt;/h2&gt;

&lt;p&gt;And now, given that there can be reasons for adding a ticket link to everything (for example generating change logs from git), and in the modern tradition, we turn to ChatGPT for a properly argued discussion of both sides.&lt;/p&gt;

&lt;p&gt;The following is an edited result of my discussion with ChatGPT on the subject. Should you find yourself in the situation where someone has decided that every brush stroke during the painting of the bikeshed should require a JIRA ticket then this article will provide a handy reference to all the arguments that can be made around this oh-so-trivial subject.&lt;/p&gt;

&lt;p&gt;I particularly like the fact you can ask GPT to write in the styles of different personas, so first we have the curt “lead dev” version that gets to the point and is devoid of superfluous wording, empathy, and any other affordances to anyone who thinks in any other way than the tech lead; and then we have the more wordy default style of GPT that explains the points a bit more fully for the benefit of those who need a bit more explanation around the subject. Both are useful depending on the audience, so take your pick.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech-lead-GPT Says…
&lt;/h2&gt;

&lt;p&gt;Following the usual wordy explanation I fed GPT this follow-up prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“write it in the style of a busy lead dev engineer - still too fancy and formatted. plain ascii, missing capitalisation in places, nothing more than the bare facts”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And got this straight-to-the-point explanation, lazy capitalization and all:&lt;/p&gt;

&lt;p&gt;requiring a ticket for every tech improvement pr has upsides but will slow us down in key areas. here’s why:&lt;/p&gt;

&lt;p&gt;pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;prioritization: a ticket makes sure we focus on impactful work. it forces us to ask if this improvement is really needed now or if it can wait.&lt;/li&gt;
&lt;li&gt;consistency: everyone follows the same process. good for cross-team work or when other stakeholders need visibility.&lt;/li&gt;
&lt;li&gt;accountability: tickets clarify why a change is being made, what the risks are, and who owns it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;overhead: creating tickets for minor improvements (e.g. small refactors or cleanup) adds unnecessary steps. this slows devs down and eats up time for actual coding.&lt;/li&gt;
&lt;li&gt;discourages quick fixes: devs spot opportunities to fix things as they go. forcing them to stop and make a ticket breaks their flow and makes it less likely they’ll bother with small but important changes.&lt;/li&gt;
&lt;li&gt;ticket bloat: if every tiny improvement needs a ticket, we’ll end up with a cluttered backlog full of minor stuff, making it harder to see what’s actually important.&lt;/li&gt;
&lt;li&gt;developer autonomy: devs should be trusted to make small improvements without red tape. too many rules can kill morale and make them feel micromanaged.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;suggestion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use tickets for significant changes (major refactors, cross-team impact, etc.), not small things like code cleanup or minor improvements.&lt;/li&gt;
&lt;li&gt;use pr descriptions to document and discuss small technical changes. keeps things lightweight without losing track of what’s been done.&lt;/li&gt;
&lt;li&gt;let devs decide when a ticket is needed vs when a quick pr will do. they’ll know best.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;bottom line: tickets are useful for big changes but requiring them for every single tech improvement will slow us down more than it helps. let’s keep the process flexible where it makes sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  Default friendly GPT explains fully
&lt;/h2&gt;

&lt;p&gt;This part is in GPT’s default style, with some editing of what points to include. It’s pretty much as it was generated, and I think does a good job of explaining the trade-offs for discussion:&lt;/p&gt;

&lt;p&gt;Requiring a ticket for every technical improvement pull request is a policy with both advantages and drawbacks, and it’s important to weigh these carefully to ensure the team’s workflow is as efficient and effective as possible.&lt;/p&gt;

&lt;p&gt;Pros of Requiring Tickets for Technical Improvements&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Improved Prioritization and Planning Tickets create a formal process for prioritizing work. When technical improvements require a ticket, it forces a discussion on whether the improvement is truly necessary or how urgent it is. This helps teams focus on the most impactful changes and avoid unnecessary or lower-priority adjustments, allowing better alignment with the overall project goals.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consistency Across Teams In larger teams or organizations, requiring tickets for all work (including technical improvements) helps ensure consistency. Everyone follows the same process, which makes it easier to manage cross-team dependencies, ensure the necessary approvals, and communicate changes to stakeholders.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clearer Communication and Accountability Tickets provide a clear space for discussing the reasoning behind technical improvements, including potential risks, trade-offs, or alternatives. This promotes better communication among developers, product managers, and other stakeholders, and helps ensure that everyone is on the same page. It also adds accountability, as each improvement is documented and has a clear owner.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons of Requiring Tickets for Technical Improvements&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Increased Administrative Overhead One of the biggest downsides is the added overhead. Many technical improvements are small, straightforward fixes or optimizations that don’t warrant the time and effort needed to create, manage, and track a ticket. Forcing developers to do this extra work can slow down progress and take time away from actual coding. Over time, this could lead to frustration or a slowdown in the pace of incremental improvements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Discourages Quick, Organic Improvements Developers often spot opportunities for small technical improvements while they’re working on other tasks. Having to create a ticket for each of these can disrupt their workflow, making them less likely to act on these improvements when they notice them. This can lead to missed opportunities to clean up code, reduce technical debt, or optimize performance in small but meaningful ways.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Potential for Ticket Bloat In a fast-moving development environment, the requirement for tickets on small improvements could create a flood of minor issues in the ticketing system. This can make it harder to manage more critical work and obscure visibility into the progress of higher-priority features or bugs. The sheer volume of tickets can become difficult to sort through, adding cognitive load to project management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Less Developer Autonomy Developers often thrive in environments where they can take initiative to improve the codebase. Requiring tickets for every small change might reduce their sense of ownership, as it introduces a layer of bureaucracy between identifying an issue and resolving it. This could dampen their motivation to take care of small, yet important, aspects of code quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A Balanced Approach&lt;/p&gt;

&lt;p&gt;Given the pros and cons, there may be a middle ground that allows the team to capture the benefits of ticketing while avoiding its downsides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ticketing for Significant Changes: Tickets could be required for larger technical improvements that have broader implications, such as major refactors, changes that affect multiple teams, or improvements that touch critical parts of the system. For smaller changes, like simple code cleanup, minor refactoring, or bug fixes, developers could be allowed to open a pull request without the overhead of a ticket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Pull Requests for Tracking: The pull request itself can serve as a lightweight way to document and discuss technical improvements. Detailed PR descriptions, along with the review process, can provide enough visibility and communication for smaller improvements without the need for a formal ticket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developer Discretion: Allow developers to decide whether a ticket is needed based on the scope of the improvement. They are often best positioned to know whether a change is minor enough to be handled via a simple PR or whether it requires more discussion and prioritization through a ticket.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Requiring tickets for every technical improvement has clear advantages in terms of prioritization, consistency, and accountability, particularly in larger teams or complex projects. However, the added overhead, potential for slowing down minor improvements, and impact on developer autonomy are significant downsides to consider. A balanced approach that applies ticketing requirements more selectively, while still ensuring visibility and discussion through PRs, could be a better fit for fostering both productivity and quality in the development process.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why user stories should be shippable increments</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Thu, 12 Sep 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/why-user-stories-should-be-shippable-increments-46na</link>
      <guid>https://dev.to/timabell/why-user-stories-should-be-shippable-increments-46na</guid>
      <description>&lt;p&gt;Sometimes I see people create work items / jira tickets / user- stories that if they were implemented and merged in to the codebase would leave the system in a half-finished state, either broken or inconsistent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with non-shippable tickets
&lt;/h2&gt;

&lt;p&gt;This is instinctively a “bad thing” to me. However, as not everyone has grokked this important fundamental of team software delivery, I turned to ChatGPT for an explanation of why it matters, and I have to say it did a stellar job of clearly explaining the things that I have absorbed over my decades of commercial software development. So here you go, here’s why non-shippable tickets are a problem:&lt;/p&gt;

&lt;h3&gt;
  
  
  Incomplete Features Impact User Experience
&lt;/h3&gt;

&lt;p&gt;A half-finished feature can confuse users or negatively impact their experience. If the feature is exposed to users before it’s fully functional, it may appear broken, resulting in dissatisfaction or mistrust in the system’s reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Difficult to Test and Validate
&lt;/h3&gt;

&lt;p&gt;Testing half-finished work is challenging, as it’s not in a stable or coherent state. Quality Assurance (QA) teams need a fully functional piece of work to validate it against the requirements. When a ticket is only partially completed, it’s hard to determine if it meets the intended functionality or introduces bugs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complicates Future Development
&lt;/h3&gt;

&lt;p&gt;Leaving an unfinished task can create dependencies and conflicts with future work. If another developer later works on a related feature, they might have to deal with the incomplete state, which could introduce confusion or lead to mistakes in integrating related functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Increases Technical Debt
&lt;/h3&gt;

&lt;p&gt;Incomplete features often accumulate as technical debt, which means extra work will be needed later to finish, refactor, or fix them. This can slow down future progress, as more time will need to be spent revisiting and cleaning up the half-finished work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Blocks Related Development
&lt;/h3&gt;

&lt;p&gt;A half-finished ticket can block progress on other features that depend on it. For example, if a partially completed feature introduces bugs or changes a shared system in an untested way, it could delay the development of other tickets or cause cascading issues across the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduces Team Efficiency
&lt;/h3&gt;

&lt;p&gt;When a ticket is left in a half-finished state, developers may need to pick it up again later or pass it to another developer, leading to context-switching. This results in inefficiency, as time is wasted on re-clarifying the problem, re-understanding the code, and figuring out where the previous developer left off.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unclear Ownership and Accountability
&lt;/h3&gt;

&lt;p&gt;If a ticket is left unfinished, ownership and accountability become unclear. Other team members may not know who is responsible for completing it or what still needs to be done, which can cause confusion and reduce productivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inconsistent System Behavior
&lt;/h3&gt;

&lt;p&gt;Partial implementation can cause the system to behave inconsistently. For example, a feature that’s half-completed might trigger errors, create discrepancies in data, or cause parts of the system to function incorrectly or unpredictably.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hinders Deployment and Releases
&lt;/h3&gt;

&lt;p&gt;Half-finished tickets often disrupt deployment cycles, as incomplete features cannot be included in production. This results in unnecessary delays or the need to work around unfinished tickets, making releases more complex and error-prone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wasted Effort and Resources
&lt;/h3&gt;

&lt;p&gt;When a ticket is not fully completed, there’s a risk that the work may be scrapped or need significant changes by the time it’s revisited. This can result in wasted development effort, resources, and time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unmet Stakeholder Expectations
&lt;/h3&gt;

&lt;p&gt;Leaving a ticket incomplete prevents stakeholders from seeing tangible progress, undermining confidence in the team’s ability to deliver. Stakeholders expect finished, usable deliverables that they can review and assess, not half-completed tasks that require further work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Difficult to Manage in Agile Processes
&lt;/h3&gt;

&lt;p&gt;In Agile and Scrum methodologies, the goal is to deliver shippable increments of work at the end of each sprint. Leaving a ticket half-finished breaks this cycle, making it hard to close sprints successfully and creating spillover tasks that carry into future iterations.&lt;/p&gt;

&lt;h3&gt;
  
  
  In short
&lt;/h3&gt;

&lt;p&gt;Avoiding half-finished tickets ensures that the system remains stable, progress is clear, and each ticket represents a functional, complete deliverable. This promotes better workflow, higher quality, and more reliable software.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create small shippable batches
&lt;/h2&gt;

&lt;p&gt;That’s all very well you say, but I don’t want to end up with giant, big-bang stories where engineering vanish for a month or six until every last thing works (if it ever does), so what to do?&lt;/p&gt;

&lt;p&gt;Again I’ve turned to GPT as a starting point, and it’s not done a bad job. I’ve done a bit more editing to this set and added some of my own thoughts:&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Vertical Slicing
&lt;/h3&gt;

&lt;p&gt;Vertical slicing means breaking down features into small, independent pieces that cut through all layers of the stack (UI, backend, database, etc.) and provide end-to-end functionality. Each slice delivers a fully functional piece of the feature, even if it’s a smaller or simpler version of the final product. Example: Instead of building a complete “user profile” page, deliver a ticket for “adding a profile picture” with all the components needed (UI, API, storage).&lt;/p&gt;

&lt;h3&gt;
  
  
  Define Clear Acceptance Criteria
&lt;/h3&gt;

&lt;p&gt;Each ticket should have clear, well-defined acceptance criteria that detail exactly what constitutes a “done” state. These criteria ensure that the ticket can be completed independently and is deliverable as a unit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Focus on the minimum viable product
&lt;/h3&gt;

&lt;p&gt;Like the Minimum Viable Product (MVP) concept, you can apply the same thinking to individual product increments.&lt;/p&gt;

&lt;p&gt;When designing features, focus on delivering the smallest possible version that is still valuable to users. This forces the team to identify the core functionality that is immediately useful, allowing for iterative improvement later.&lt;/p&gt;

&lt;p&gt;Example: Instead of delivering a fully-featured analytics dashboard, start with a basic version that shows just the key metrics, and expand later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implement Feature Flags
&lt;/h3&gt;

&lt;p&gt;Use feature flags to allow partially completed features to be merged into the main codebase without affecting the user experience. This ensures that work can be released incrementally but hidden or disabled until it is complete.&lt;/p&gt;

&lt;p&gt;Example: A half-built “comments section” can be developed over several tickets but hidden from users until all parts are ready.&lt;/p&gt;
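&lt;p&gt;As a minimal sketch of the idea (flag name and mechanism are hypothetical - real products usually wire this through config or a feature-flag service), an environment variable is enough to show the shape:&lt;/p&gt;

```shell
# hypothetical flag: the half-built comments section ships dark until
# ENABLE_COMMENTS is flipped to "true" in the deploy environment
comments_enabled() {
  [ "${ENABLE_COMMENTS:-false}" = "true" ]
}

if comments_enabled; then
  echo "rendering comments section"
else
  echo "comments section hidden behind flag"
fi
```

The incomplete code merges and deploys, but users only see it once the flag is flipped.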

&lt;h3&gt;
  
  
  Break Down Complex Features
&lt;/h3&gt;

&lt;p&gt;Complex features should be split into smaller sub-features that can each be delivered independently. Ensure that each sub-feature is fully functional, even if it doesn’t have all the final functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Avoid Over-Engineering
&lt;/h3&gt;

&lt;p&gt;Start with the simplest solution to a problem, and iterate.&lt;/p&gt;

&lt;p&gt;Avoid adding extra complexity upfront by sticking to what’s necessary to fulfill the ticket’s requirements for delivery.&lt;/p&gt;

&lt;p&gt;This includes “gold plating” and “future proofing” when implementing the ticket. Just do the minimum high-quality code that you &lt;em&gt;know&lt;/em&gt; you need now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make Use of Stubs and Mocks
&lt;/h3&gt;

&lt;p&gt;To avoid leaving a ticket half-finished when other dependencies aren’t ready, use stubs or mocks for parts of the system that are still under development. This allows you to complete and deliver your ticket while waiting for those dependencies.&lt;/p&gt;

&lt;p&gt;Example: If the backend API isn’t ready, a front-end ticket can use mock data to simulate responses, allowing the UI to be built and delivered without waiting.&lt;/p&gt;

&lt;p&gt;This is more appropriate for very early product development rather than for modifying an established system.&lt;/p&gt;
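&lt;p&gt;A tiny sketch of the mock idea (the endpoint and payload are invented for illustration): a stand-in that returns a canned response lets the dependent work complete and ship:&lt;/p&gt;

```shell
# hypothetical stand-in for a backend endpoint that isn't built yet;
# the front-end ticket codes against this canned payload and ships
mock_fetch_profile() {
  cat <<'JSON'
{"id": 1, "name": "Test User", "avatar": "placeholder.png"}
JSON
}

mock_fetch_profile
```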

&lt;h3&gt;
  
  
  Communicate Dependencies Clearly
&lt;/h3&gt;

&lt;p&gt;If a feature is dependent on other parts of the system, make sure the dependencies are identified and managed.&lt;/p&gt;

&lt;p&gt;Sometimes it makes sense to coordinate related tickets so they’re developed together or in the right sequence.&lt;/p&gt;

&lt;p&gt;Example: If a new API endpoint is needed for a UI feature, ensure the API ticket is prioritized so the front-end team isn’t blocked, or vice versa.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collaborate in Cross-Functional Teams
&lt;/h3&gt;

&lt;p&gt;Ensure cross-functional teams collaborate on the entire stack of a feature (UI, backend, database, etc.) within the same sprint or ticket. This ensures no part of the feature is half-done or stuck waiting for another team to finish their part.&lt;/p&gt;

&lt;p&gt;Example: A ticket to create a new form should include front-end developers, back-end developers, and testers working together to ensure it’s functional from end to end.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-functional brainstorming of story creation
&lt;/h3&gt;

&lt;p&gt;There’s nothing like all members of a team, including product, engineering, UX etc all sitting round a table and throwing around ideas to see if they can come up with a creative way to break down a problem into smaller but still shippable pieces.&lt;/p&gt;

&lt;p&gt;I’ve repeated this exercise with many clients and the flashes of inspiration that come out of it cannot be replicated by thinking harder on one’s own while staring at Jira.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;By structuring work this way, teams can maintain a balance between small, manageable batches of work and ensuring that each piece is fully deliverable and functional on its own.&lt;/p&gt;

&lt;p&gt;This approach maximizes delivery velocity while maintaining system integrity and delivering value incrementally.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Running a beefy virtualbox dev server</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Fri, 07 Jun 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/running-a-beefy-virtualbox-dev-server-3h5g</link>
      <guid>https://dev.to/timabell/running-a-beefy-virtualbox-dev-server-3h5g</guid>
      <description>&lt;h2&gt;
  
  
  Why
&lt;/h2&gt;

&lt;p&gt;Sometimes I have to work on hairy old sprawling legacy code bases in ye-olde c#, which means, unfortunately, no shiny new dotnet-core with Rider on linux and all the speed gains that brings.&lt;/p&gt;

&lt;p&gt;So, having fallen over another codebase that takes unbearably long to build and work on, combined with the need to run visual studio in a windows virtualbox vm on top of my linux box, I’ve shelled out for a desktop pc with a top-end cpu to use as a dev box for those trickier builds. But who wants to give up the joy of a portable laptop? So here I am with the challenge of connecting to virtualbox from another machine. Turns out there are a few non-obvious things and options that may or may not work for you, so here’s what worked for me.&lt;/p&gt;

&lt;h3&gt;
  
  
  GNU’s Not Unix and neither is Windows
&lt;/h3&gt;

&lt;p&gt;Aside - windows will never again have ownership of my computer, I relegated it to a VM for emergencies (i.e. clients paying money), where it can stay within those steely confines unable to bother me the rest of the time. But that’s another blog post really.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic setup
&lt;/h2&gt;

&lt;p&gt;This is pretty simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install &lt;a href="https://www.linuxmint.com/" rel="noopener noreferrer"&gt;Linux Mint&lt;/a&gt;, because home servers are a lot easier to deal with when they’re just normal desktop operating systems that happen to be switched on more of the time. 

&lt;ol&gt;
&lt;li&gt;Enable full disk encryption during setup (advanced &amp;gt; lvm &amp;gt; encrypt) (this is a bit of a nuisance for remote management but important for security if the machine is stolen)&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Turn on automatic updates&lt;/li&gt;

&lt;li&gt;Install virtualbox and openssh-server&lt;/li&gt;

&lt;li&gt;Copy the “Virtualbox VMs” folder across by opening &lt;code&gt;sftp://ur-server-here/&lt;/code&gt; in nemo (the file browser) on the source machine. It’s &lt;a href="https://unix.stackexchange.com/questions/48399/fast-way-to-copy-a-large-file-on-a-lan" rel="noopener noreferrer"&gt;not the quickest&lt;/a&gt; but it did 200gb in 5 hours, and I have other things to do in my life so that’s fine.&lt;/li&gt;

&lt;li&gt;Fire it up on the desktop with the virtualbox gui just to check it’s fine&lt;/li&gt;

&lt;/ol&gt;
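&lt;p&gt;If you’d rather not babysit a multi-hour sftp copy, rsync is a resumable alternative for step 4 (server name as used above; flags: archive mode, verbose, progress/partial):&lt;/p&gt;

```shell
# resumable copy of the vm folder to the server; re-run to pick up
# where an interrupted transfer left off
rsync -avP "VirtualBox VMs/" ur-server-here:"VirtualBox VMs/"
```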

&lt;h2&gt;
  
  
  Remote access - linux
&lt;/h2&gt;

&lt;p&gt;To get some remote control of the machine simple &lt;code&gt;ssh&lt;/code&gt; and &lt;code&gt;ssh -X&lt;/code&gt; are enough for cli actions and some gui actions respectively (&lt;code&gt;-X&lt;/code&gt; tunnels x-windows over ssh, meaning you can launch graphical programs on the remote machine and they pop up on the local display. Magic.)&lt;/p&gt;
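&lt;p&gt;For example (server name as used above):&lt;/p&gt;

```shell
# plain shell session on the server
ssh ur-server-here

# launch the virtualbox gui on the server, displayed on the laptop
# via x11 forwarding
ssh -X ur-server-here virtualbox
```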

&lt;h2&gt;
  
  
  Remote access windows vm
&lt;/h2&gt;

&lt;p&gt;This proved to be less obvious.&lt;/p&gt;

&lt;p&gt;There are three possible layers to remote in:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Remote access to the host machine UI&lt;/li&gt;
&lt;li&gt;Remote access to the virtualbox process&lt;/li&gt;
&lt;li&gt;Remote access to the windows vm&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Only number 3, directly connecting to the guest windows vm with RDP worked for me.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Host connection
&lt;/h3&gt;

&lt;p&gt;Remote desktop in linux is still a bit of a mess.&lt;/p&gt;

&lt;p&gt;VNC is old and clunky, I don’t even know if it’s still secure, and it’s not available out of the box. I didn’t even try.&lt;/p&gt;

&lt;p&gt;There are some more or less proprietary ones like nx (nomachine?), &lt;a href="https://rustdesk.com/" rel="noopener noreferrer"&gt;rustdesk&lt;/a&gt;, go-to-my-pc etc, which I either haven’t had any luck with previously or haven’t tried / don’t trust.&lt;/p&gt;

&lt;p&gt;I tried running the virtualbox gui over ssh, but it had two problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It was too laggy / slow updating the screen to be usable for intense visual studio coding work&lt;/li&gt;
&lt;li&gt;Disconnecting killed the virtualbox process and terminated the vm without proper shutdown&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I quickly gave up on this and tried #2…&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Remote access to virtualbox
&lt;/h3&gt;

&lt;p&gt;Tantalizingly &lt;a href="https://www.virtualbox.org/manual/ch07.html" rel="noopener noreferrer"&gt;virtualbox has RDP support available&lt;/a&gt;… but the punchline is that it’s not part of the open source project, and is a proprietary “extension” that requires agreement to &lt;a href="https://superuser.com/questions/146398/virtualbox-puel-interpretation/1315219#1315219" rel="noopener noreferrer"&gt;a personal/evaluation license that explicitly forbids commercial use&lt;/a&gt;, so that was out. Darn. If you’re only doing personal things you could try this, but I’m not so can’t.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Remote access to windows vm
&lt;/h3&gt;

&lt;p&gt;This was the winner in the end but requires a few steps to get set up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Change the virtualbox vm network settings to “bridged” instead of the default “NAT” so that the vm is available directly on the network.&lt;/li&gt;
&lt;li&gt;Grab the generated ipv6 address for the machine (or give it a static ipv4 lease)&lt;/li&gt;
&lt;li&gt;In the windows guest: 

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-allow-access" rel="noopener noreferrer"&gt;Enable remote desktop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Give the user account a password if it doesn’t already have one (you can’t connect remotely with a passwordless user).&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Start the vm with the virtualbox cli instead of the gui (avoids it being terminated when the shell session ends): 

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;vboxmanage startvm WinDev2404Eval --type headless&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;On the client laptop install the &lt;a href="https://remmina.org/" rel="noopener noreferrer"&gt;remmina rdp client&lt;/a&gt; with &lt;code&gt;apt&lt;/code&gt;
&lt;/li&gt;

&lt;li&gt;Connect to the VM in Remmina using the IPv6 address of the guest machine (&lt;em&gt;not&lt;/em&gt; the host desktop).&lt;/li&gt;

&lt;li&gt;Enter the Windows user’s username &amp;amp; password when prompted.&lt;/li&gt;

&lt;/ol&gt;
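
&lt;p&gt;For the record, the VirtualBox side of the above can be done entirely over ssh. A sketch (the VM name is the one from this post; &lt;code&gt;eth0&lt;/code&gt; is a stand-in for your host’s actual network interface, and the IP lookup assumes the guest additions are installed):&lt;/p&gt;

```shell
# Switch the VM's first network adapter from NAT to bridged
# (run while the VM is powered off; replace eth0 with your host interface)
vboxmanage modifyvm "WinDev2404Eval" --nic1 bridged --bridgeadapter1 eth0

# Boot it headless so it isn't tied to the current shell session
vboxmanage startvm "WinDev2404Eval" --type headless

# List the guest properties and pick out the reported IP addresses
vboxmanage guestproperty enumerate "WinDev2404Eval" | grep IP
```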

&lt;h2&gt;
  
  
  Remote management tips
&lt;/h2&gt;

&lt;p&gt;Here are some more useful commands for managing the server machine remotely over ssh:&lt;/p&gt;

&lt;p&gt;List virtual machines (useful for getting the name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vboxmanage list vms

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start a virtual machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vboxmanage startvm WinDev2404Eval --type headless

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Shut down the host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo shutdown -h now

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(see &lt;a href="https://explainshell.com/explain?cmd=shutdown+-h+now" rel="noopener noreferrer"&gt;https://explainshell.com/explain?cmd=shutdown+-h+now&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Suspend the host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl suspend

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(ref &lt;a href="https://askubuntu.com/questions/1792/how-can-i-suspend-hibernate-from-command-line/1795#1795" rel="noopener noreferrer"&gt;https://askubuntu.com/questions/1792/how-can-i-suspend-hibernate-from-command-line/1795#1795&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Was it worth it?
&lt;/h2&gt;

&lt;p&gt;Build time of a gnarly project on the laptop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;========== Rebuild completed at 02:44 and took 20:08.230 minutes ==========

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the same VM running the same full rebuild, remoted into the new desktop (nearly a 5× speed-up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;========== Rebuild completed at 02:28 and took 04:15.142 minutes ==========

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can even copy-paste text like this straight out of the RDP’d VM, which will be immensely useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  The end
&lt;/h2&gt;

&lt;p&gt;That’s all for this one. An obscure technical thing that was useful to me, and just tricky enough to be worth documenting for future-me and the rest of the interwebs in case anyone else finds it useful. Hurrah for blogs and the zero cost of replication.&lt;/p&gt;

&lt;p&gt;Till next time. 👋&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Trunk based (mainline) development is (mostly) wrong</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Thu, 18 Apr 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/trunk-based-mainline-development-is-mostly-wrong-1bgp</link>
      <guid>https://dev.to/timabell/trunk-based-mainline-development-is-mostly-wrong-1bgp</guid>
      <description>&lt;h2&gt;
  
  
  Mainlining
&lt;/h2&gt;

&lt;p&gt;Some very experienced developers, some of whom I’ve heard it from in-person, strongly advocate what is often called “mainline” or “trunk-based” development, meaning that the git history is a series of commits directly to the main branch, with no pull requests or merge commits in sight. This is often held up as the panacea for achieving fast, high-quality delivery.&lt;/p&gt;

&lt;p&gt;I. Do. Not. Agree. The absolute rule of “no merge commits on main” (aka straight-line history), sometimes enforced by github configuration, is absolute balderdash. This assertion is typical of the black-and-white presentations of topics that seem oh-so-appealing when presented in the conference talk circuit where there is no room for nuance and actual trade-offs made in the trenches, and where big extreme statements win the game of attention.&lt;/p&gt;

&lt;p&gt;There is a place for thoughtful, individual mainline commits, so don’t take this as advocating the opposite extreme of “only merge commits / PRs on main”. A small-to-medium internally consistent patch (say 1 to 200 lines of diff at most) can be quite a reasonable thing to just push to main.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does pairing remove the need for PRs?
&lt;/h2&gt;

&lt;p&gt;For anything more than trivial changes in your production code and test code you probably want two pairs of eyes on it, and pairing is a great way to achieve that in real-time. But does pairing remove the need for a PR entirely? Some say yes, but I say it depends on the actual patch …&lt;/p&gt;

&lt;p&gt;The question of pairing and the question of “mainline vs pull request” are almost entirely orthogonal. Pairing fixes the async-review problem, and certainly reduces the need for PRs to achieve peer-review, but that doesn’t mean we should reflexively throw out feature branches, merge commits and PRs with the proverbial bath water.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why branches and merge-commits matter
&lt;/h2&gt;

&lt;p&gt;Unconditionally mainlining everything loses some important capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ability to group a series of related commits (e.g. reformat, refactor, add feature), into one coherent group by way of merging them to main in one hit. (With or without github and its PRs.) A future reader can use this to look at history at two levels of detail - using &lt;code&gt;git log --first-parent&lt;/code&gt; to look first at the merges and commits to main, and then &lt;em&gt;only if it’s interesting&lt;/em&gt; look at the series of patches in a feature branch that was merged in. And conversely it allows breaking down a patch that needs to go into main in one hit (to avoid breaking things by shipping half-finished work) into a series of easier to understand patches with meaningful commit messages.&lt;/li&gt;
&lt;li&gt;The ability to check your changes with CI in github before merging to main to be sure you don’t break main for everyone else (plus the permanent record of another “Fix main build, oopsie” commit).&lt;/li&gt;
&lt;li&gt;The ability to offload regression checks to github actions instead of having to always run all the tests locally.&lt;/li&gt;
&lt;li&gt;The ability to create a series of commits, discover they are wrong several hours later, and throw them out or rewrite them before they ever hit main.&lt;/li&gt;
&lt;/ul&gt;
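
&lt;p&gt;The two-level view of history is easy to demo in a throwaway repo (branch and commit names here are purely illustrative):&lt;/p&gt;

```shell
# Build a tiny repo with one feature branch merged in via a merge commit,
# then compare the coarse and detailed views of the history.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "Initial commit"
git checkout -q -B main
git checkout -q -b feature
git commit -q --allow-empty -m "Refactor: extract helper"
git commit -q --allow-empty -m "Add widget feature"
git checkout -q main
git merge -q --no-ff -m "Merge branch 'feature'" feature
git log --oneline --first-parent   # coarse: just the initial commit and the merge
git log --oneline                  # detailed: includes the branch's commits too
```

&lt;p&gt;A future reader starts with the first-parent view and only drills into the merged branch if it looks relevant.&lt;/p&gt;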

&lt;p&gt;If you think mainlining generates a clean history, how often have you made a series of commits before realising that the shape of the code isn’t right, leaving you with another series of commits to get it into a different shape? With mainline development that completely irrelevant first attempt is now in history to confuse future readers forevermore. With branch-based development you can throw it out and pretend it never happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to mainline, branch or PR
&lt;/h2&gt;

&lt;p&gt;It’s a bit of an art, so here are some rules of thumb to guide you through the mainline / branch / PR decisions as you write code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pair on everything you can, eliminating the need for a PR for peer-review. Only peel off to solo work when it’s painfully obvious that there’s zero value / knowledge-transfer / alignment to be gained by pairing (e.g. library upgrades).&lt;/li&gt;
&lt;li&gt;Trivial, uncontroversial changes can be mainlined solo (if your org allows any solo pushes to main), especially if you run the tests locally before pushing. Watch out for linters and formatters in CI. Pre-commit / pre-push checks can help you avoid breaking the build with silly mistakes.&lt;/li&gt;
&lt;li&gt;Slightly bigger changes can be pushed directly to main if you are pairing, but they should still be restricted to smaller, simple to understand patches; not entire large features.&lt;/li&gt;
&lt;li&gt;When working on a larger feature, try and peel off as many unrelated pieces as you can and mainline or PR them separately as you go instead of lumping them in to your feature patch/branch, rebasing your feature patch/branch onto the updated main.&lt;/li&gt;
&lt;li&gt;If a piece of work requires a few logically coherent steps to make the change, that are related or build on each other, then group them together in a feature branch. If you are pairing then you can run your tests &amp;amp; lint locally, create a merge commit on main that merges your branch in, and push that to main. No PR needed as it’s already been reviewed.&lt;/li&gt;
&lt;li&gt;You may opt to open a PR in any of the above circumstances anyway in order to: 

&lt;ol&gt;
&lt;li&gt;Get a run of CI&lt;/li&gt;
&lt;li&gt;Get some async input from the broader dev team or across team boundaries.&lt;/li&gt;
&lt;li&gt;Put a novel idea/technique in public and give it time to be considered by yourself and others. The draft PR feature of github can be useful for this. It’s also useful for “spikes” (experiments in code to learn something or try something out that aren’t necessarily production quality or ready to merge).&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Beware of long-running branches with many commits, even if they are lots of well crafted commits. It’s a sign your deliverable increment is too large (at feature branch and/or story level), and you should take a step back and break it down into smaller increments; maybe even ditching the branch and starting again from main.&lt;/li&gt;

&lt;/ol&gt;
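
&lt;p&gt;On the pre-commit / pre-push checks mentioned in rule 2: a minimal pre-push hook is just a script at &lt;code&gt;.git/hooks/pre-push&lt;/code&gt; that exits non-zero to abort the push. A sketch (&lt;code&gt;make check&lt;/code&gt; is a placeholder for whatever runs your project’s lint and tests):&lt;/p&gt;

```shell
#!/bin/sh
# .git/hooks/pre-push (remember to make it executable with chmod +x)
# Abort the push if the fast local checks fail.
if ! make check; then
  echo "pre-push: checks failed, aborting push"
  exit 1
fi
```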

&lt;p&gt;Note that you need to be constantly considering these rules of thumb as you make every change to the code because in the fluid development flow it’s normal to range across all sorts of types of changes in one coding session (e.g. cleanup, refactoring, bug fixes, feature changes, quality improvements, ci fixes, editor config changes, massive file/folder renames, and the actual features, etc etc), and each of these might require a separate approach to generating a high-quality history.&lt;/p&gt;

&lt;h2&gt;
  
  
  Split up commits &lt;em&gt;before&lt;/em&gt; you write more code
&lt;/h2&gt;

&lt;p&gt;Flipping to main and making a separate commit when you realise you want/need a change that is unrelated to your feature branch is a great habit/skill to build. It does however take discipline and practice.&lt;/p&gt;
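
&lt;p&gt;Mechanically, flipping to main can be as cheap as a stash either side. A throwaway-repo sketch (branch names and the “unrelated change” are illustrative):&lt;/p&gt;

```shell
# Park in-progress feature work, commit an unrelated fix straight to main,
# then pick the feature work back up where it was left off.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "Initial commit"
git checkout -q -B main
git checkout -q -b my-feature
echo "half-finished feature work" > feature.txt
git stash push -q --include-untracked   # park the feature work
git checkout -q main
echo "fixed the typo" > README.md       # the unrelated change
git add README.md
git commit -q -m "Fix typo in README"
git checkout -q my-feature
git rebase -q main                      # bring the fix into the feature branch
git stash pop -q                        # resume the feature work
```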

&lt;p&gt;I see many developers who default to branch-based development and just pile any old thing they need into their current branch, ending up with mammoth PRs that change far too much in one go. They often then complain that splitting it back up is too hard so they should just squash it. As the old saying goes, “if I wanted to get there I wouldn’t have started from here”; it’s far better to split into commits as you go rather than struggling with git’s tricky-but-powerful &lt;code&gt;git rebase --interactive&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I’ve had devs say to me that “it’s not worth the time” to create a series of atomic coherent and well described commits. And for evidence they often point to their massive commit that does too many things, or their pile of incoherent “wip” commits to a branch, combined with the effort it takes to split and recombine commits into something better. In commercial projects I often relent because at that point the horse has bolted, and they are right that it isn’t worth spending that much client/employer time rescuing their history aside from the opportunity to demonstrate rebase-interactive to them. (Besides, it’s their name on that commit forever, not mine). That doesn’t however make it good enough, or the correct conclusion. The correct answer is that you should work on getting the series of patches right as you go, and the cost of interactive rebases largely goes away.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature flags DO NOT replace feature branches
&lt;/h2&gt;

&lt;p&gt;Some claim that feature flags (the ability to toggle capabilities on and off at runtime) are the reason you no longer need branches.&lt;/p&gt;

&lt;p&gt;To be sure, they are a useful thing, and an answer to long-lived feature branches that can’t be merged because the feature isn’t ready to ship yet.&lt;/p&gt;

&lt;p&gt;But just because you know how to use feature flags, still doesn’t mean you should throw out the richness and tools available to you with merge commits to main and PRs. The fact that some teams have painted themselves into a corner of PR-hell doesn’t mean you should throw out PRs entirely. You can have feature flags &lt;em&gt;and&lt;/em&gt; the option to use branches, merge commits and PRs when it is the right tool for the job.&lt;/p&gt;

&lt;h2&gt;
  
  
  “But no-one cares about the history”
&lt;/h2&gt;

&lt;p&gt;Someone said this to me today. They aren’t the first person to say it to me either. This is empirically wrong. A friend of mine pejoratively calls people with this mindset “BSBITS devs” - “Big Save Button In The Sky Developers” - meaning they think that the only thing that matters is the current production code, and that source control is a fancy save/share system.&lt;/p&gt;

&lt;p&gt;Here are some of the ways that I have seen first-hand many, many developers, including myself, “care” about history:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understanding why the current code is how it is in order to know how/whether to change it.&lt;/li&gt;
&lt;li&gt;Understanding the changes another developer made in order to replicate those changes in a different microservice in a different team.&lt;/li&gt;
&lt;li&gt;Reviewing a large PR with 1000+ lines of code changed, where some of it is refactoring and reformatting and some is functional changes - looking at the commits of the PR/branch allows you to see the intended functional change (sometimes just a few lines) separately from large rote refactors etc.&lt;/li&gt;
&lt;li&gt;To judge the capabilities and productivity of a developer with a view to hiring/firing. Yes this actually happens, I’ve seen it. And guess what, that developer doesn’t get to explain their poor quality history, &lt;strong&gt;and they can be judged harshly for it&lt;/strong&gt;. Are you really sure no-one cares?&lt;/li&gt;
&lt;li&gt;To get a feel for the capabilities of a co-worker.&lt;/li&gt;
&lt;li&gt;To keep up to date with changes to a project that you are either involved in directly or indirectly, perhaps a repo belonging to another team.&lt;/li&gt;
&lt;li&gt;Principal developers that operate cross-team looking for best/worst practices that they need to take action on. (They often have significant HR influence; do you want them judging your poor history badly because “no-one cares”?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ironically I’ve even seen some of the people who’ve said to me “no-one cares about history” then go on at a later date to look at the git history in order to figure something out while I’m on a screenshare with them. It’s all I can do not to point out the hypocrisy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build that habit
&lt;/h2&gt;

&lt;p&gt;In the end, a lot of the arguments I see come from people who haven’t mastered the skill of creating high-quality history with commits, branches and thoughtful merges.&lt;/p&gt;

&lt;p&gt;The answer in my eyes is not to declare half of the tooling we have off-limits and claim that the generated history is unimportant; it is instead to get really good at creating good history as you go so that it becomes effortless.&lt;/p&gt;

&lt;p&gt;Treat generation of history just like generation of code. It’s the most visible and permanent record of what you do as a developer of software, and you should treat it with the same pride and diligence as you do the code that runs in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fin
&lt;/h2&gt;

&lt;p&gt;Coding is not just computers running things, it’s inter-person communications. Quality history, and the richness that branches and merges give us is part of that tapestry of communication with persons past, present and future.&lt;/p&gt;

&lt;p&gt;Become good at it and the objections melt away.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why do automated tests matter?</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Wed, 27 Mar 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/why-do-automated-tests-matter-19ei</link>
      <guid>https://dev.to/timabell/why-do-automated-tests-matter-19ei</guid>
      <description>&lt;p&gt;It might seem a bit odd to write a post on software tests after so many years and so much content, yet to this day I see well meaning developers writing software without adequate test coverage. In fact I will share that I myself have been &lt;em&gt;very&lt;/em&gt; late to enlightenment on this front. Sure I’ve been “writing tests” for well over a decade, but I was missing the mental framework that would make those efforts coherent, complete and effective.&lt;/p&gt;

&lt;p&gt;In my view the mental model of why we would write tests is more important than the detailed “how” of writing unit/integration tests, because without that it can only be at best haphazard whether the testing really serves its true purpose.&lt;/p&gt;

&lt;p&gt;I’ll share with you here the keystone ideas that make it all sit together properly, just like the keystone in a stone archway. Then from there, all the tests you write will sit in their true and proper place within that framework, or be obviously either waste or missing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd2go4clz2x2kkdeuwlf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd2go4clz2x2kkdeuwlf.jpg" alt="Photo: the shard through a brick archway" width="752" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yes I’m aware this particular archway in the photo doesn’t actually have a keystone, but I took the pic and I rather like it so it’s staying.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Before Time: TDD &amp;amp; Unit Testing
&lt;/h2&gt;

&lt;p&gt;I started my career in software development way back in 2000, when only a few very forward-thinking people were doing high quality automated testing and it really hadn’t reached the broader consciousness of the software development community.&lt;/p&gt;

&lt;p&gt;Like a large number of software devs of that era, I became aware of “software testing” through the “TDD” movement (test-driven development). I believe a lot of the noise about this came out of the Ruby on Rails community (no coincidence, there’s no compile step so there’s more reliance on good tests in ruby/rails than in compiled languages). My experience of this was that it was largely talked about and taught as a bottom-up approach to testing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;“Here’s how you write a unit test for a class.”&lt;/li&gt;
&lt;li&gt;“Here’s how you do assertions.”&lt;/li&gt;
&lt;li&gt;“Here’s why you should write the test before the code (red-green-refactor).”&lt;/li&gt;
&lt;li&gt;And in C# land “here’s how you do dependency injection”.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is all good and important to know but is not enough in itself.&lt;/p&gt;

&lt;p&gt;This was also a time when SOLID was big in the OO consciousness, so classes, isolation, encapsulation and all the small pieces of the machine you were building were the focus, and testing those pieces was often enough to claim you were “doing testing”. Interviews were often more about “do you TDD” than “will your system break for end users”.&lt;/p&gt;

&lt;p&gt;“Integration Testing” (i.e. testing more than one piece together) was certainly in my awareness, but mostly as a TDD++, and was usually missing any kind of coherent “why”. You might test some code with a real database instead of a mocked persistence layer, or check that all your pure code classes didn’t blow up when they were wired back together. In C# world a lot of mental overhead was created by trying to work out how to test code that used Microsoft’s not-very-test-friendly .NET Framework standard library code.&lt;/p&gt;

&lt;p&gt;So I bumbled along, more or less successfully writing tests for my software, but always feeling like I didn’t have a really solid argument for the big picture. For me it was mostly “aim for 100% test coverage” and you’ll catch/prevent lots of bugs, plus it was good at driving software architecture in the minutiae by preventing things being too coupled together.&lt;/p&gt;

&lt;p&gt;More recently I’ve had the pleasure of working with people who are super-keen on “outside-in” testing and considerably less keen on acres of unit tests, and it finally dawned on me what I’ve been missing all these years. What follows is the missing piece:&lt;/p&gt;

&lt;h2&gt;
  
  
  The “Why” of Testing
&lt;/h2&gt;

&lt;p&gt;So let’s take a step back for a moment. Why do we write tests at all? What is the big goal that makes this all worthwhile?&lt;/p&gt;

&lt;p&gt;There have long been discussions of the “cost/benefit” of tests, and there is research showing that &lt;a href="https://en.wikipedia.org/wiki/Test-driven_development#Benefits" rel="noopener noreferrer"&gt;teams that write tests are more productive&lt;/a&gt;, which is good, but a bit abstract when it comes to what we actually need to write to be effective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preventing Regressions
&lt;/h3&gt;

&lt;p&gt;The goal of writing any software is working software for users and businesses; and they aren’t going to be too happy when something that worked fine on Monday is now broken on Tuesday.&lt;/p&gt;

&lt;p&gt;I don’t think it’s news to anyone that the goal of software tests is to prevent regressions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Axioms
&lt;/h3&gt;

&lt;p&gt;There are two pieces of the testing puzzle that “preventing regressions” alone didn’t make immediately clear to me, and these are the key reasons I’m taking the time to write this post at all. They drive a subtle but foundational shift in what our writing of tests actually looks like and the magnitude of their effectiveness.&lt;/p&gt;

&lt;p&gt;They may seem trivial and obvious at first sight, but do not ignore them so quickly as they are the axioms upon which the whole approach rests. They are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What matters is not whether your class has “tests” but whether the software as perceived by the user does the job they need.&lt;/li&gt;
&lt;li&gt;When you add your &lt;em&gt;first&lt;/em&gt; feature it’s easy to manually verify its behaviour. When you write your &lt;em&gt;ninety-ninth&lt;/em&gt; feature you still want to be &lt;em&gt;certain&lt;/em&gt; that the other ninety-eight features are &lt;em&gt;all&lt;/em&gt; still intact.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Most systems have many thousands of “features”, especially if you include “non-functional” cross-cutting requirements such as acceptable performance for each capability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Full, Automated Coverage
&lt;/h2&gt;

&lt;p&gt;Every time you touch your software, there is a non-zero risk that something that used to work will no longer work. Anyone who’s written software for any length of time will laugh at the idea that “a change in component A has no possibility of breaking something unrelated in component B”. And as a more rigorous colleague pointed out, we have the problem of “emergence” in complex systems, which makes it increasingly hard to predict behaviour. A good measure of your tests is how afraid you are of upgrading third-party dependencies, running your tests and immediately shipping the result without further manual verification.&lt;/p&gt;

&lt;p&gt;The only sustainable solution to the presented axioms combined with the need to add more features over time is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure &lt;em&gt;every&lt;/em&gt; feature the user cares about has a test.&lt;/li&gt;
&lt;li&gt;The software is tested from the perspective of the user.&lt;/li&gt;
&lt;li&gt;The software is tested after every change.&lt;/li&gt;
&lt;li&gt;This testing is fully automated.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Insufficient Test Automation
&lt;/h3&gt;

&lt;p&gt;If you write insufficient automated tests there are only two things that can happen, both bad:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You spend exponentially more time manually verifying all expected behaviour before any change is passed on for the user to use; bugs get through anyway. Or&lt;/li&gt;
&lt;li&gt;You give up trying to test what was supposed to work, and some of it stops working.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Full automated test coverage of &lt;em&gt;all&lt;/em&gt; delivered features to date is the &lt;em&gt;only&lt;/em&gt; solution to this problem. Any shortcoming in this coverage is a subclass of the generalized “&lt;a href="https://charmconsulting.co.uk/2020/11/27/leaders-guide-to-technical-debt/" rel="noopener noreferrer"&gt;technical debt&lt;/a&gt;” problem, which quickly results in a catastrophic drop-off in the ability to deliver anything at all if it is allowed to grow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://charmconsulting.co.uk/2020/11/27/leaders-guide-to-technical-debt/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63rgpddj0pvb7pxjh6s4.png" alt="Graph delivery speed plummeting as tech debt piles up" width="672" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Kind of Tests
&lt;/h2&gt;

&lt;p&gt;So, if you accept all of the above, what does that mean when you actually fire up your editor and wonder what test to write?&lt;/p&gt;

&lt;p&gt;You already write unit tests, that’s enough, right? … Bzzzzt, Wrong! That fails the “from the user perspective” need.&lt;/p&gt;

&lt;p&gt;If the only thing that matters is that a feature works &lt;em&gt;from the user’s perspective&lt;/em&gt; then the only automated test that matters is one that tests a feature from the user’s perspective. In practice that means things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;End to end tests&lt;/li&gt;
&lt;li&gt;Browser automation&lt;/li&gt;
&lt;li&gt;Smoke tests&lt;/li&gt;
&lt;li&gt;Platform tests&lt;/li&gt;
&lt;li&gt;Performance tests&lt;/li&gt;
&lt;li&gt;Outside-in tests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And what doesn’t matter a hoot is things like unit tests, integration tests and component tests, i.e. the very things that we were taught in “TDD school” before being left to figure out “the real world” on our own. (I’m not bitter, I just wish I’d figured this out 20 years earlier.)&lt;/p&gt;
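
&lt;p&gt;At its simplest, a test “from the user’s perspective” can be a short smoke-test script that exercises the deployed system the way a user would. A sketch only (the URL and expected strings are placeholders for your own system):&lt;/p&gt;

```shell
# Smoke test: hit the running app the way a user's browser would,
# and assert on what actually comes back.
set -e
base_url="https://app.example.com"   # placeholder for your deployment

# -f makes curl fail on HTTP error statuses; -sS keeps it quiet but shows errors
body=$(curl -fsS "$base_url/")
echo "$body" | grep -q "Welcome"     # the landing page renders its headline

curl -fsS "$base_url/api/health" | grep -q '"status":"ok"'
echo "smoke test passed"
```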

&lt;h3&gt;
  
  
  Pushing Tests Down the Pyramid
&lt;/h3&gt;

&lt;p&gt;In an ideal world these fully-integrated end-to-end tests of entire features from the user’s perspective would run instantly and reliably on the developer’s machine the moment the code was written to disk, and we could have vast numbers of them at no extra cost of effort or speed. Here in the real world, as we know from bitter experience, the most valuable tests are also the most troublesome. They are the slowest, most fragile, most prone to flakiness, they have the most difficult dependencies to unpick, and are the most subject to the combinatorial explosion in numbers as soon as there are a few possible code paths to take or inputs to handle.&lt;/p&gt;

&lt;p&gt;Therefore as a matter of pragmatism we are forced to push some of our testing down the testing pyramid towards component, integration and unit tests.&lt;/p&gt;

&lt;p&gt;If we do not lose sight of the highest goal of software testing, then this practically turns out to be fine, and we can continuously tune the balance between the layers.&lt;/p&gt;

&lt;p&gt;If however we do lose sight, and retreat into the lower levels of testing because it’s hard or slow to create the necessary full system coverage, then we start to slide up the technical debt scale, and will pay the price sooner than we’d like.&lt;/p&gt;

&lt;h3&gt;
  
  
  On Behaviour Driven Development (BDD)
&lt;/h3&gt;

&lt;p&gt;This is one that went off the rails as a concept. I see so many teams entirely miss the point of this movement, which is conceptually a good thing, but: IT’S NOT ABOUT GHERKIN SYNTAX, AND IT’S NOT ABOUT BROWSER AUTOMATION. Teams constantly get the technology confused with the intent and cargo-cult their way to an unholy mess of low-quality, high-volume waste.&lt;/p&gt;

&lt;p&gt;It’s actually about writing down what the users expect of your software, in terms they’d understand, and making sure that those expectations are never broken unintentionally, which is actually a very good thing.&lt;/p&gt;

&lt;p&gt;BDD ends up using gherkin and browser automation because gherkin allows plain English explanations that can be turned into executable tests, and users often interact with software via a browser these days.&lt;/p&gt;

&lt;p&gt;Gherkin (specflow etc) and browser automation test frameworks are tools for achieving BDD, not the definition of BDD.&lt;/p&gt;

&lt;p&gt;Sadly by its nature the gherkin tools end up requiring maintenance of “step definitions” which is a hard cost to bear unless you are very careful what you use it for.&lt;/p&gt;

&lt;h3&gt;
  
  
  Outside-In Testing
&lt;/h3&gt;

&lt;p&gt;Outside-In testing is a term and concept that I came across recently that really aligns with what I have laid out here. It emphasises what the user experiences (even if it’s an API user rather than a web user).&lt;/p&gt;

&lt;p&gt;I think once the concepts in this blog post are internalised, then outside-in is a good shorthand for a good approach to achieving the regression testing goals that have been laid out in this post.&lt;/p&gt;

&lt;p&gt;These are a couple of good posts on the concept:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://thoughtbot.com/blog/testing-from-the-outsidein" rel="noopener noreferrer"&gt;https://thoughtbot.com/blog/testing-from-the-outsidein&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.obeythetestinggoat.com/book/chapter_outside_in.html" rel="noopener noreferrer"&gt;https://www.obeythetestinggoat.com/book/chapter_outside_in.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Mindset and Culture
&lt;/h2&gt;

&lt;p&gt;In the end, whether the right sort of automated regression tests are written comes down to the core beliefs of your individual engineers writing the systems and the teams they operate in.&lt;/p&gt;

&lt;p&gt;If there is &lt;em&gt;any&lt;/em&gt; part of them that isn’t 110% on-board with what I’ve written here, then test coverage ends up being haphazard and incomplete, and over time gets steadily less able to prevent regressions.&lt;/p&gt;

&lt;p&gt;If there is not a full belief in the need for high quality automated testing, then even a small friction in the way (e.g. a difficult 3rd party dependency, or the need for a complex multi-team multi-service regression test) will lead teams to quietly give up on full feature coverage from the user’s perspective and satisfy themselves with unit or component level tests. Unhelpfully, to the untrained eye a large quantity of tests looks similar to “good coverage”, and it’s less obvious whether it actually tests anything users care about. We must be constantly vigilant for this as the message on this is still relatively weak in the industry, and often confused by the (valuable) talk of detailed test approaches.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hero culture
&lt;/h3&gt;

&lt;p&gt;If you are particularly unlucky, or have set up perverse incentives, you risk a harder problem to resolve: the embedding of a self-perpetuating “&lt;a href="https://scalablehuman.com/2023/10/19/the-dangers-of-hero-culture-in-development-teams/" rel="noopener noreferrer"&gt;hero culture&lt;/a&gt;” whereby engineers take the fastest path to delivering &lt;em&gt;perceived&lt;/em&gt; value (whilst incurring significant technical debt and unseen bugs), and then take credit again by rushing to highly visibly fix the problems that they themselves created.&lt;/p&gt;

&lt;p&gt;Good regression coverage, like much good engineering, takes time to build in the short term for a payoff of peace, reliability and sustained velocity in the long term. Which is incompatible with hero-ing it.&lt;/p&gt;

&lt;p&gt;Beware the “rock star” programmer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Programmer Excuses
&lt;/h3&gt;

&lt;h4&gt;
  
  
  “I don’t have time”
&lt;/h4&gt;

&lt;p&gt;The client/boss is paying for your time, and the boss/client would like the long term benefits, mkay?&lt;/p&gt;

&lt;h4&gt;
  
  
  “The boss/client/manager won’t let me”
&lt;/h4&gt;

&lt;p&gt;The “should I do a good job” question, as I like to call it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Which is quicker? Do that one.”&lt;br&gt;&lt;br&gt;
~ Your client / boss&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I often see all but the most experienced software engineers falling into the trap of confusing authority with expertise.&lt;/p&gt;

&lt;p&gt;I often hear engineers complaining that “they aren’t given the time” to write tests, or do a proper job on some aspect of the software they are writing.&lt;/p&gt;

&lt;p&gt;In reality what has always happened is the engineer has presented two options to a client/employer who doesn’t know or care about how software is built:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I can do this feature properly with all the tests and great architecture etc etc (client glazes over) it’ll be amazing, or&lt;/li&gt;
&lt;li&gt;I can just do the feature without any of that proper stuff. Which one do you want?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The client / boss / project manager responds [whilst thinking “I have no idea what you are talking about, or why you are asking”] “Which is quicker? Do that one.”&lt;/p&gt;

&lt;p&gt;Then the engineer gets the hump that they can’t do a good job.&lt;/p&gt;

&lt;p&gt;Or worse, the engineer assumed the answer for them without even asking.&lt;/p&gt;

&lt;p&gt;Put it this way: if a plumber came to a house, should they ask the homeowner whether to earth the pipes, or should they just do it as part of the cost of the job? The homeowner might not understand why that’s part of the job, but they probably also don’t want to be electrocuted by the radiator.&lt;/p&gt;

&lt;p&gt;It is perfectly reasonable for an experienced software engineer, who is the expert in &lt;em&gt;their&lt;/em&gt; trade, to just include automated regression tests as part of the job of feature delivery. To &lt;strong&gt;not even bring it up in conversation&lt;/strong&gt;. The client/employer NEVER wants to pay for a feature, think it’s done, only to have it break again four features later. From their point of view, and quite rightly, that’s just shoddy software.&lt;/p&gt;

&lt;p&gt;This is not to say I’m secretive - I work in the open and am open to people asking why things are done the way they are, and am happy to explain to the client why it’s in their best interest. But whether it happens is not up for discussion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hiring a QA team - Just Don’t
&lt;/h2&gt;

&lt;p&gt;Some organisations seem to think the answer is to take the regression test problem off developers’ plates by hiring less skilled individuals to do this apparently mundane work. Some even hire SDETs to write automated tests for the developers.&lt;/p&gt;

&lt;p&gt;This is a fundamentally flawed approach, in the same way that hiring an Operations team as a separate function came to be accepted as a bad idea and was replaced with an integrated DevOps product team. (Well, apart from the anti-pattern of a “DevOps” role, but that’s another post).&lt;/p&gt;

&lt;p&gt;It’s flawed in at least the following ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It embeds the “manual testing is okay” approach in the culture, but adds some extra brute force people-power in the hope that this will prevent the eventual arrival of the 99th-feature problem. (The number just gets bigger, it’s still there). This is pretty much like trying to outrun your own shadow.&lt;/li&gt;
&lt;li&gt;It encourages developers to consider testing (automated or otherwise) “that other team/role’s problem”.&lt;/li&gt;
&lt;li&gt;It adds significant delays and an additional silo between writing code and delivering value to users and getting valuable feedback, lengthening feedback cycles. This goes against &lt;em&gt;everything&lt;/em&gt; we have learned from Toyota/kanban/lean etc.&lt;/li&gt;
&lt;li&gt;QA people &lt;em&gt;cannot&lt;/em&gt; actually improve “quality” - they are not the developers working on the actual code, and can at best catch the worst errors.&lt;/li&gt;
&lt;li&gt;The people who could improve quality (developers who actually write the code) lose extremely important feedback signals on how their software behaves when tested and used.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I have also seen QAs used as a “minimum quality gate” that “allows” an organisation to hire an army of poor-to-mediocre programmers. Fairly quickly that goes spectacularly badly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;So the important take-aways here are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Tests must test what your users actually care about.&lt;/li&gt;
&lt;li&gt;Those tests must be automated to sustain your velocity.&lt;/li&gt;
&lt;li&gt;The testing pyramid is good and valid.&lt;/li&gt;
&lt;li&gt;Start from the outside with your testing and work in only as much as you must.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And the reasons we do it that way are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The features for users are more important than the internal workings.&lt;/li&gt;
&lt;li&gt;Without fully-automated coverage we cannot continue to confidently add features.&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Templated repos with dotnet new</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Wed, 06 Mar 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/templated-repos-with-dotnet-new-3c4a</link>
      <guid>https://dev.to/timabell/templated-repos-with-dotnet-new-3c4a</guid>
      <description>&lt;p&gt;I’ve been digging in to making &lt;code&gt;dotnet new&lt;/code&gt; templates and it turns out to be a remarkably capable bit of tooling.&lt;/p&gt;

&lt;p&gt;It’s particularly useful when you want to build a load of similar microservices with their own git repos.&lt;/p&gt;

&lt;p&gt;It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rename files.&lt;/li&gt;
&lt;li&gt;Rename strings (variables, class names etc).&lt;/li&gt;
&lt;li&gt;Preserve case style of renamed strings (through “derived” replacements).&lt;/li&gt;
&lt;li&gt;Create named command line arguments for your template string replacements, e.g. &lt;code&gt;dotnet new mytemplate --myswitch MyReplacementValue&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Exclude entire blocks of code and files based on switches that the user can provide.&lt;/li&gt;
&lt;li&gt;Be installed and tested from a local folder with &lt;code&gt;dotnet new install &amp;lt;path&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Be built into a nuget package and published on private or public feeds.&lt;/li&gt;
&lt;/ul&gt;
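&lt;p&gt;To give a flavour of how those features are wired up, here’s a minimal sketch of a &lt;code&gt;.template.config/template.json&lt;/code&gt; - the identity, short name and symbol names here are made-up placeholders rather than a real template:&lt;/p&gt;

```json
{
  "$schema": "http://json.schemastore.org/template",
  "identity": "MyOrg.MyMicroservice",
  "name": "My microservice template",
  "shortName": "mytemplate",
  "sourceName": "MyTemplate.Service",
  "preferNameDirectory": true,
  "symbols": {
    "myswitch": {
      "type": "parameter",
      "datatype": "string",
      "replaces": "MyReplacementValue",
      "fileRename": "MyReplacementValue"
    },
    "includeDatabase": {
      "type": "parameter",
      "datatype": "bool",
      "defaultValue": "false"
    }
  }
}
```

&lt;p&gt;The &lt;code&gt;sourceName&lt;/code&gt; value is what gets search-and-replaced throughout the content (with case-preserving derived forms), string symbols become CLI switches, and bool symbols can drive the conditional blocks and file exclusions mentioned above.&lt;/p&gt;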

&lt;p&gt;Importantly, there’s nothing in the way that it works that stops you from making your template code build/test/run.&lt;/p&gt;

&lt;p&gt;And because it’s a CLI tool, if you want to update an existing generated repo with a new version of a template you can just run it again with &lt;code&gt;--force&lt;/code&gt;, and use git to pick through which changes you want to take into the generated repo.&lt;/p&gt;

&lt;p&gt;This really takes a lot of the toil out of the copy-paste-modify you would have to do otherwise.&lt;/p&gt;

&lt;p&gt;Learn more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/core/tools/custom-templates"&gt;https://learn.microsoft.com/en-us/dotnet/core/tools/custom-templates&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devblogs.microsoft.com/dotnet/how-to-create-your-own-templates-for-dotnet-new/"&gt;https://devblogs.microsoft.com/dotnet/how-to-create-your-own-templates-for-dotnet-new/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;See also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://copier.readthedocs.io/"&gt;https://copier.readthedocs.io/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>New tool: sln-items-sync for Visual Studio solution folders</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Sat, 13 Jan 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/new-tool-sln-items-sync-for-visual-studio-solution-folders-4h6l</link>
      <guid>https://dev.to/timabell/new-tool-sln-items-sync-for-visual-studio-solution-folders-4h6l</guid>
      <description>&lt;p&gt;How and why I created &lt;code&gt;sln-items-sync&lt;/code&gt; - a &lt;code&gt;dotnet tool&lt;/code&gt; to generate SolutionItems from filesystem folders.&lt;/p&gt;

&lt;p&gt;If you want to skip the backstory head over: &lt;a href="https://github.com/timabell/sln-items-sync"&gt;https://github.com/timabell/sln-items-sync&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  15 years of minor irritation
&lt;/h2&gt;

&lt;p&gt;Faced with another set of microservice repos written in dotnet-core, with &lt;code&gt;.sln&lt;/code&gt; files in various states of tidiness, I found myself for the 1000th time in 15+ years manually pointy-clicky adding fake solution-items folders and subfolders and then toiling away adding files &lt;strong&gt;just&lt;/strong&gt; so I could search them, click them and view them from within Visual Studio or Rider.&lt;/p&gt;

&lt;p&gt;There must be a better way by now, I thought, so I went hunting.&lt;/p&gt;

&lt;p&gt;All I turned up was a lot of people asking the same thing and some dead tooling from years ago. Here’s the stackoverflow from 2008 with 90k views and 180 upvotes: &lt;a href="https://stackoverflow.com/questions/267200/visual-studio-solutions-folder-as-real-folders"&gt;https://stackoverflow.com/questions/267200/visual-studio-solutions-folder-as-real-folders&lt;/a&gt;, which didn’t really help in spite of having 23 answers. Not to mention the slew of linked questions where people are asking the same thing with different words.&lt;/p&gt;

&lt;p&gt;(Solution folders aren’t to be confused with adding files to a &lt;em&gt;project&lt;/em&gt; which used to be an equal nightmare before Microsoft saw sense and just included &lt;em&gt;what’s on the filesystem&lt;/em&gt;. There are many old stackoverflow questions on that too from frustrated devs around the world.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2zFVEla4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://timwise.co.uk/images/blog/sln-items.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2zFVEla4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://timwise.co.uk/images/blog/sln-items.png" alt="screenshot of example solution items folder in Rider" width="426" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So with the programmer war cry of “how hard can it be, I’ll knock this out in a couple of evenings…” I set about on what turned out to be a significant exercise in yak-shaving in order to sort it out myself once and for all.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“How hard can it be, I’ll knock this out in a couple of evenings…”&lt;/p&gt;

&lt;p&gt;~ Me. Again.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What to build?
&lt;/h2&gt;

&lt;p&gt;I did briefly look at writing an IntelliJ (aka Rider) plugin but that turned out quickly to be a daunting thing so I put that idea down sharpish.&lt;/p&gt;

&lt;p&gt;I use Rider in preference to Visual Studio and VSCode for C# so didn’t even look at that side. VSCode didn’t even bother with .sln files last I checked.&lt;/p&gt;

&lt;p&gt;Next step was to write a CLI (command-line interface, aka terminal) tool to do it. (sln + filesystem in, mutated sln out, easy…)&lt;/p&gt;

&lt;p&gt;I have recently written command line tools in both GoLang and Rust, but given this is a tool that would only be useful to Microsoft developers I figured I’d do this one in C#. I do actually like C# as a language for all my interest in other things, and thanks to dotnet-core and Rider I can actually write the whole thing on Linux Mint Cinnamon where I like to be.&lt;/p&gt;

&lt;h2&gt;
  
  
  Parsing and writing .sln files
&lt;/h2&gt;

&lt;p&gt;I then hunted around for any nuget packages that might do the grunt work of reading/writing the sln format. Surely after 20-something years there must be something, right? Well, kinda. The VS parsing code is locked away in some Windows DLL nastiness, probably in C++ and COM or something evil. The format even predates XML, never mind JSON.&lt;/p&gt;

&lt;p&gt;What I did find was the &lt;a href="https://www.nuget.org/packages/SlnParser"&gt;SlnParser nuget package&lt;/a&gt;, which someone had kindly written and open-sourced, and after a quick test I could see it did a decent job of turning .sln files into an in-memory C# object model (a &lt;code&gt;Solution&lt;/code&gt; class, with lists of things as properties).&lt;/p&gt;

&lt;p&gt;So major yak number one was to fork SlnParser and turn it into a two-way tool. This I did with a lot of hackery and created &lt;a href="https://www.nuget.org/packages/SlnEditor/"&gt;SlnEditor nuget package&lt;/a&gt; which I published on nuget and github with the same Unlicense licensing as the original. Perhaps others will find this gift to the world useful in its own right.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating sln-items-sync
&lt;/h2&gt;

&lt;p&gt;Finally with that working I was able to create the CLI tool I wanted, which I named &lt;a href="https://github.com/timabell/sln-items-sync"&gt;sln-items-sync&lt;/a&gt;. This was more work than I expected, but I got a first cut working reasonably quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tests
&lt;/h2&gt;

&lt;p&gt;I put a good amount of effort into good end to end test coverage on both the parser and the tool itself because I am now a true believer that &lt;strong&gt;without tests&lt;/strong&gt; you will be &lt;strong&gt;unable to make future changes and dependency upgrades with speed and confidence&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I.e. lack of tests is the epitome of technical debt.&lt;/p&gt;

&lt;p&gt;In fact let me give that a block quote because it’s such an important point:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Lack of tests is the epitome of technical debt.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(p.s. why is Epitome spelled that way but pronounced epitomy. English. Sigh.)&lt;/p&gt;

&lt;p&gt;This has paid off in spades as the amount of work to get it satisfactorily “done” grew and grew the closer I got to finished.&lt;/p&gt;

&lt;p&gt;The tests in both projects focus on “outside-in” testing rather than mockist unit testing. As such you can see at a glance the overall behaviour, spot any unexpected/unwanted output, and easily write new tests for new desired behaviour, being able to eyeball them easily for correctness. I won’t include one here as they are a bit lengthy, but you can go and look at the source repos on github.&lt;/p&gt;

&lt;p&gt;This is made a bit easier on this tool because the only interfaces to the world are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a text format (easy to string-compare expected versus actual)&lt;/li&gt;
&lt;li&gt;a filesystem (I went for creating real file trees in tests which worked well and gives even more confidence)&lt;/li&gt;
&lt;li&gt;the command line interface (for the sync tool)&lt;/li&gt;
&lt;li&gt;the API (for the parser lib)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  First contact with real .sln files
&lt;/h2&gt;

&lt;p&gt;As I am currently doing some work for a C# contracting client, I was able to try it out on gnarly real solution files, with a view to submitting some small cleanup pull requests that could be created really quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stable ordering
&lt;/h3&gt;

&lt;p&gt;The first attempt was a complete failure because the generated patch re-wrote the entire sln file in a completely different order, resulting in a sea of red/green lines full of GUIDs and other cryptic changes in the git diff. While the solution items were updated as intended and could be seen in Rider etc., this was not a patch that could be submitted to the team, or that I would put my name to.&lt;/p&gt;

&lt;p&gt;Getting stable ordering between parsing and writing turned out to be a huge amount of work and refactoring, largely in the SlnEditor lib.&lt;/p&gt;

&lt;p&gt;The key to making stable-ordering work was to add an &lt;code&gt;int sourceLine&lt;/code&gt; property to almost everything when parsing, and to sort by that before rendering back out again. This had the desired effect of keeping everything in the original order no matter how it was mutated, while new items are added to the end (by replacing the default &lt;code&gt;0&lt;/code&gt; with &lt;code&gt;int.MaxValue&lt;/code&gt; before sorting).&lt;/p&gt;

&lt;p&gt;Phew, another yak shaved (lost count now), but I’ve got more xmas hols, so keep going…!&lt;/p&gt;

&lt;h2&gt;
  
  
  Many bugs and gaps
&lt;/h2&gt;

&lt;p&gt;It surprised me a bit just how many &lt;a href="https://github.com/timabell/sln-items-sync/issues?q=is%3Aissue+is%3Aclosed"&gt;little niggles, edge cases, and small omissions&lt;/a&gt; there were that had to be sorted out before I could use it to submit quality patches to client .sln files for real. Even the ever-present byte-order mark (BOM) was causing unwanted diffs because I hadn’t included it in the render, but .sln files seem to have them.&lt;/p&gt;

&lt;p&gt;Pleasingly I’ve resolved everything I came across, apart from making the parent/child guid mapping order stable which didn’t seem to be worth the effort seeing as they are completely incomprehensible anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making a dotnet-tool
&lt;/h2&gt;

&lt;p&gt;Once it was working, it was only a couple of rounds of building and copying the exe to &lt;code&gt;bin/&lt;/code&gt; before I got fed up with that approach to distribution.&lt;/p&gt;

&lt;p&gt;Amazingly it turns out to be pretty simple to build and publish tools to the &lt;code&gt;dotnet tool&lt;/code&gt; ecosystem, they are actually just slightly special nuget packages, and you only have to add a couple of properties to the &lt;code&gt;.csproj&lt;/code&gt; file.&lt;/p&gt;
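&lt;p&gt;For the record, the couple of &lt;code&gt;.csproj&lt;/code&gt; properties in question look something like this (a sketch from memory - check the dotnet tool docs for the full set):&lt;/p&gt;

```xml
<PropertyGroup>
  <!-- tells `dotnet pack` to produce a dotnet-tool package rather than a library -->
  <PackAsTool>true</PackAsTool>
  <!-- the command name users get after `dotnet tool install` -->
  <ToolCommandName>sln-items-sync</ToolCommandName>
  <PackageOutputPath>./nupkg</PackageOutputPath>
</PropertyGroup>
```

&lt;p&gt;Then &lt;code&gt;dotnet pack&lt;/code&gt; produces the package, and &lt;code&gt;dotnet tool install --global --add-source ./nupkg sln-items-sync&lt;/code&gt; lets you try it locally before publishing.&lt;/p&gt;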

&lt;p&gt;Making a dotnet-tool worked well, and it’s a great user experience for installing and running the tool. It even does updates for no extra effort!&lt;/p&gt;

&lt;h2&gt;
  
  
  Github-actions
&lt;/h2&gt;

&lt;p&gt;To make both of these tools even easier to work on and maintain longer term, I wanted to have a good github action (aka CI) to build and run the tests.&lt;/p&gt;

&lt;p&gt;Build and test is trivial, you can pretty much click the default workflow button for .net in an empty github actions page and it just works.&lt;/p&gt;

&lt;p&gt;I wanted to also automate the nuget publishing of both from github-actions, as although I had a sh file to upload them from my machine that’s a faff and tends to stop working after a few machine rebuilds. Amazingly the author of SlnParser has taken an interest and provided a &lt;a href="https://github.com/timabell/sln-items-sync/pull/15"&gt;PR that gave me a ready-made github-action to push to nuget&lt;/a&gt; for every release tag! So that’s now in place, and to release a new version I can just &lt;code&gt;git tag v1.2.3 &amp;amp;&amp;amp; git push --tags&lt;/code&gt; and github does the rest.&lt;/p&gt;
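&lt;p&gt;The shape of that workflow is roughly as follows (an illustrative sketch, not the actual file from the PR - the secret name is an assumption):&lt;/p&gt;

```yaml
name: release
on:
  push:
    tags: [ "v*" ]            # fires on `git tag v1.2.3 && git push --tags`
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
      - run: dotnet pack --configuration Release --output nupkg
      - run: >
          dotnet nuget push "nupkg/*.nupkg"
          --source https://api.nuget.org/v3/index.json
          --api-key ${{ secrets.NUGET_API_KEY }}
```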

&lt;h2&gt;
  
  
  The end, I need a lie down
&lt;/h2&gt;

&lt;p&gt;So after all that, I’m not sure it was all worth it, but it’s done and I’m justifying it as a holiday hobby project and a gift to the dotnet developers of the world. I will certainly enjoy it every time I find an out of sync &lt;code&gt;SolutionItems&lt;/code&gt; folder in future and run my tool so that I can ship a patch for it in seconds flat. I also learned a few things and got kata-like practice on shipping quality things at speed.&lt;/p&gt;

&lt;p&gt;So with that, Merry Xmas and a happy new 2024. May all your solution folders be tidy and complete.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>git - what do ‘base’ ‘local’ ‘remote’ mean?</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Fri, 20 Oct 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/git-what-do-base-local-remote-mean-5aa8</link>
      <guid>https://dev.to/timabell/git-what-do-base-local-remote-mean-5aa8</guid>
      <description>&lt;p&gt;The terminology for 3-way git merge, rebase and cherry-pick conflict files is very confusing, particularly because they flip direction between rebase and merge.&lt;/p&gt;

&lt;p&gt;When you run &lt;code&gt;git mergetool&lt;/code&gt; it will spit out 4 files that look like this, and then pass them as arguments to your merge tool of choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gs 
## HEAD (no branch)
UU src/gitopolis.rs
?? src/gitopolis_BACKUP_1585963.rs
?? src/gitopolis_BASE_1585963.rs
?? src/gitopolis_LOCAL_1585963.rs
?? src/gitopolis_REMOTE_1585963.rs

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In kdiff3 (by far the best 3-way merge algorithm out there) it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jaMxJ1Qb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://timwise.co.uk/images/blog/git-kdiff-3way-merge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jaMxJ1Qb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://timwise.co.uk/images/blog/git-kdiff-3way-merge.png" alt="kdiff 3-way merge screenshot" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Here’s how I think of it
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Merge
&lt;/h3&gt;

&lt;p&gt;You are on the target branch (local), and the patches are coming from the branch you are merging in (remote), kinda like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout local-branch
git merge remote-branch

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cherry-pick
&lt;/h3&gt;

&lt;p&gt;Same direction as merge.&lt;/p&gt;

&lt;p&gt;You are on the target branch (local), and the patch is coming from the commit you are cherry-picking (remote), kinda like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout local-branch
git cherry-pick some-remote-commit-ref

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Rebase
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Opposite&lt;/em&gt; direction to merge.&lt;/p&gt;

&lt;p&gt;You start on your own branch that you want to rebase, but…&lt;/p&gt;

&lt;p&gt;When you start the rebase, git temporarily checks out the branch you are rebasing onto (often &lt;code&gt;origin/main&lt;/code&gt;) as a “detached HEAD” while the rebase runs. So: you are on the target branch (local), and the commits to rebase are coming from the branch you are rebasing (remote), kinda like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout your-remote-branch
git rebase target-local-branch

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terminology
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;“ &lt;strong&gt;BASE&lt;/strong&gt; ”: before anyone changed it (in all cases)&lt;/li&gt;
&lt;li&gt;When &lt;strong&gt;merging&lt;/strong&gt; (other branch coming to me): 

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LOCAL&lt;/strong&gt; : branch I’m on&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;REMOTE&lt;/strong&gt; : branch I’m merging in&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;When &lt;strong&gt;cherry-picking&lt;/strong&gt; (other commit coming to me): 

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LOCAL&lt;/strong&gt; : branch I’m on&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;REMOTE&lt;/strong&gt; : commit I’m merging in&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;When &lt;strong&gt;rebasing&lt;/strong&gt; (my own branch coming to me): 

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LOCAL&lt;/strong&gt; : branch I’m rebasing onto (checked out as detached HEAD mid-rebase)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;REMOTE&lt;/strong&gt; : my commits on branch I’m rebasing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
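&lt;p&gt;If you want to see all three versions without firing up a mergetool, git’s &lt;code&gt;diff3&lt;/code&gt; conflict style labels them inline in the conflicted file: HEAD is LOCAL, the middle section is BASE, and the incoming branch is REMOTE. A throwaway-repo sketch (branch names are illustrative):&lt;/p&gt;

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you
git config merge.conflictstyle diff3    # include BASE between the conflict markers
echo "original" > file.txt
git add file.txt
git commit -qm "common ancestor (BASE)"
git checkout -qb other-branch           # the branch we'll merge in (REMOTE)
echo "their change" > file.txt
git commit -qam "their side"
git checkout -q -                       # back to the branch being merged into (LOCAL)
echo "my change" > file.txt
git commit -qam "my side"
git merge other-branch || true          # conflicts, as intended
cat file.txt                            # shows <<<<<<< HEAD, ||||||| (BASE), =======, >>>>>>> other-branch
```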

&lt;h2&gt;
  
  
  Refs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/20381677/in-a-git-merge-conflict-what-are-the-backup-base-local-and-remote-files-that"&gt;https://stackoverflow.com/questions/20381677/in-a-git-merge-conflict-what-are-the-backup-base-local-and-remote-files-that&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Enabling modern app security</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Wed, 14 Jun 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/enabling-modern-app-security-7kh</link>
      <guid>https://dev.to/timabell/enabling-modern-app-security-7kh</guid>
      <description>&lt;p&gt;A broad-view of improving security in any organisation.&lt;/p&gt;

&lt;h2&gt;
  
  
  An inspirational panel discussion
&lt;/h2&gt;

&lt;p&gt;Yesterday I went to a panel discussion hosted by &lt;a href="https://esynergy.co.uk/"&gt;eSynergy&lt;/a&gt;, &lt;a href="https://esynergy.co.uk/event/security-excellence-in-engineering/"&gt;“Innovation at its safest: Excellence in Software Engineering through Integrated Security Best Practices”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--clw4CtKY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://timwise.co.uk/images/blog/esynergy-security-event-IMG_20230613_175534.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--clw4CtKY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://timwise.co.uk/images/blog/esynergy-security-event-IMG_20230613_175534.jpg" alt="photo of panel discussion" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The whole event was live-streamed, &lt;a href="https://www.youtube.com/watch?v=FH5kyUwRZ5Q"&gt;watch the panel discussion recording here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For me who lives in developer-land, it was a useful broadening of perspectives around app security. What follows are some bits that I took away from the discussions, which I think provide a useful starting point for anyone tasked with running any modern software systems in this increasingly hostile security environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Who’s who at the event
&lt;/h3&gt;

&lt;p&gt;The speakers at the event were as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Intro from &lt;a href="https://www.linkedin.com/in/ulrikeeder/"&gt;Ulrike Eder (eSynergy)&lt;/a&gt; [00:03:47]&lt;/li&gt;
&lt;li&gt;“Beyond OWASP Top 10” from &lt;a href="https://www.linkedin.com/in/rewtd/"&gt;Grant Ongers&lt;/a&gt; from OWASP and &lt;a href="https://securedelivery.io/"&gt;secure delivery&lt;/a&gt; &lt;a href="https://defcon.social/@rewtd"&gt;@rewtd@defcon.social&lt;/a&gt; / &lt;a href="https://twitter.com/rewtd"&gt;@rewtd&lt;/a&gt; [00:06:37]&lt;/li&gt;
&lt;li&gt;Grant was then joined for the panel discussion by: [00:19:40] 

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.linkedin.com/in/salman-iqbal-a6a5b026"&gt;Salman Iqbal&lt;/a&gt;, Principal Consultant, DevOps and ML Security at esynergy (hosting the panel)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.linkedin.com/in/yayiwu/"&gt;Teresa Wu&lt;/a&gt; VP Engineer at J.P. Morgan&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.linkedin.com/in/ben-burdsall-6ba2bb"&gt;Ben Burdsall&lt;/a&gt;, Chief Technology Officer at dunnhumby, non-exec at eSynergy&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.linkedin.com/in/tomtechharris/"&gt;Tom Harris&lt;/a&gt; Chief Technology Officer at ClearBank, BuildCircle, ex-JustEat&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;p&gt;There’s a bewildering array of things you can / should / must do for the security of your systems, users and company.&lt;/p&gt;

&lt;p&gt;Within this article you’ll find some starting points for your onward security journey.&lt;/p&gt;

&lt;h3&gt;
  
  
  Levels
&lt;/h3&gt;

&lt;p&gt;There are two reasons for thinking of security in layers or levels:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Your business needs, risks, regulatory environment and finances&lt;/li&gt;
&lt;li&gt;Your security maturity&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some businesses, such as banks, are more at risk and thus need (and can afford) a more significant investment in security measures (such as multi-layered cloud infrastructure defenses), whereas others have less budget and less risk and so can operate at the simpler levels of security.&lt;/p&gt;

&lt;p&gt;If you are currently very poor on security then there’s little point sprinkling some advanced things on top, it’s important to properly address each layer of security capability on the way up.&lt;/p&gt;

&lt;p&gt;Regardless of your business needs, perfect security is always an unattainable ideal, but a worthy target nonetheless. The Unicorn project calls this kind of never-quite-attainable perfection an “Ideal”. &lt;a href="https://www.infoq.com/articles/unicorn-project/"&gt;The Unicorn Project and the Five Ideals: Interview with Gene Kim&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Whose job is security anyway?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The directors are “accountable”.&lt;/li&gt;
&lt;li&gt;The developers, product etc. are “responsible”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While developers can and should write “secure code” (SQL injection is still on the Top 10 list), it’s important that everyone plays their part.&lt;/p&gt;

&lt;p&gt;Notably the product function (“product owners”), as they are the decision makers for balancing the competing demands placed on delivery/development teams, including how much to invest in security defenses. (Much nodding in the audience at this one!)&lt;/p&gt;

&lt;h4&gt;
  
  
  Developers
&lt;/h4&gt;

&lt;p&gt;Tools help but the developers need to understand what is required.&lt;/p&gt;

&lt;p&gt;At ClearBank there is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Lunch and learn” sessions from AppSec team.&lt;/li&gt;
&lt;li&gt;Training with “&lt;a href="https://www.hacksplaining.com/"&gt;HackSplaining&lt;/a&gt;”.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How ClearBank leveled-up app dev security
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Added security training&lt;/li&gt;
&lt;li&gt;Required pull-request approval from someone with security training&lt;/li&gt;
&lt;li&gt;This created a temporary bottleneck, which encouraged everyone to do the security training&lt;/li&gt;
&lt;li&gt;Incubated an AppSec team to “reduce the cognitive load of security” in collaboration with the CISO and CTO (Tom) 

&lt;ol&gt;
&lt;li&gt;Enthusiastic internal devs&lt;/li&gt;
&lt;li&gt;Additional external resource&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Collaboration at the top then filters down to all the teams&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Justifying security investment
&lt;/h3&gt;

&lt;p&gt;The board of directors now face &lt;strong&gt;criminal&lt;/strong&gt; penalties (i.e. jail time) if they don’t properly approach security. It used to be just financial penalties but that wasn’t enough as they could just be absorbed as a “cost of doing business”.&lt;/p&gt;

&lt;p&gt;If you need the C-suite or board to take security sufficiently seriously you can remind them of the legal penalties and costs!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do security right because it’s the right thing to do and you care about your customer and their data.&lt;/li&gt;
&lt;li&gt;There’s the “Daily Mail test” - how would we feel if there was a breach and it hit the papers?&lt;/li&gt;
&lt;li&gt;Put a cost on breaches, e.g. probability of breach multiplied by cost of breach.&lt;/li&gt;
&lt;li&gt;Use the “house fire” analogy. No-one thinks that insuring your house against fire is a bad investment. The same is true for investing in security before you have an incident or breach.&lt;/li&gt;
&lt;/ul&gt;
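&lt;p&gt;Putting a cost on breaches is just an expected-value calculation; the figures below are made-up placeholders, not real industry numbers:&lt;/p&gt;

```python
# Expected annual loss: probability of a breach in a year times its cost.
# Both numbers are illustrative estimates - plug in your own.
p_breach = 0.05             # estimated chance of a breach this year
cost_of_breach = 2_000_000  # estimated total cost (fines, cleanup, churn)

expected_loss = p_breach * cost_of_breach
print(expected_loss)  # 100000.0
```

Comparing that expected loss against the price of preventative security work gives the board a number they can reason about.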

&lt;h3&gt;
  
  
  Lead from the top
&lt;/h3&gt;

&lt;p&gt;Leaders should do the training too; no-one is too important. It sets the tone and culture, encouraging everyone, right down to the devs, to do the training as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shift left
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Shift Left: “take a task that’s traditionally done at a later stage of the process and perform that task at earlier stages”&lt;br&gt;&lt;br&gt;
~ &lt;a href="https://devopedia.org/shift-left"&gt;https://devopedia.org/shift-left&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Move security left. Nuff said. Dev+Security not Dev versus Security.&lt;/p&gt;

&lt;p&gt;Check security and licenses at build time. This gives customers assurance of security.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Top 10 is not enough
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://owasp.org/Top10/"&gt;OWASP Top 10&lt;/a&gt; is a good tool for awareness and generating conversations, but addressing these is only the lowest “level” of security.&lt;/p&gt;

&lt;p&gt;A much broader view of security is provided by the &lt;a href="https://owasp.org/www-project-application-security-verification-standard/"&gt;OWASP Application Security Verification Standard&lt;/a&gt; (ASVS). It is also broken down into levels to allow you to start at the bottom and work up as your security capabilities mature, and decide what level your business needs to attain based on the relevant risks and regulations. Banks for example would go all the way to level 3.&lt;/p&gt;

&lt;p&gt;There are also per-environment lists. E.g. &lt;a href="https://mas.owasp.org/"&gt;OWASP Mobile Application Security&lt;/a&gt; for mobile app development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pen tests
&lt;/h3&gt;

&lt;p&gt;Don’t just tick off “pen test”, ask your pen test providers how they work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do they just cover the OWASP Top 10?&lt;/li&gt;
&lt;li&gt;Do they just cover the SAMM Top 20?&lt;/li&gt;
&lt;li&gt;Do they go deeper than the Top-n?&lt;/li&gt;
&lt;li&gt;Do they look at ASVS?&lt;/li&gt;
&lt;li&gt;What tools do they use?&lt;/li&gt;
&lt;li&gt;Do the tools report against the ASVS? (If not talk to the tool provider!)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Threat modelling
&lt;/h3&gt;

&lt;p&gt;Use threat modelling, assess and then defend against that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Red/blue teams
&lt;/h3&gt;

&lt;p&gt;Can be effective, but also very expensive. Do the basics first (e.g. SQL injection training!).&lt;/p&gt;

&lt;h3&gt;
  
  
  Tools &amp;amp; resources to level-up security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Training and assessments from &lt;a href="https://securedelivery.io/"&gt;Secure Delivery&lt;/a&gt;. They provide security training and assessments for everyone in the business, not just developers.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://owaspsamm.org/"&gt;OWASP Software Assurance Maturity Model&lt;/a&gt; (SAMM) A “measurable way for all types of organizations to analyze and improve their software security posture”&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/slimtoolkit/slim"&gt;Slim Toolkit&lt;/a&gt; - had a massive impact in reducing vulnerabilities at dunnhumby.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.hacksplaining.com/"&gt;HackSplaining&lt;/a&gt; Security training, “Learn to Hack”. In use at ClearBank.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://snyk.io/"&gt;Snyk&lt;/a&gt; (pronounced “sneak”) - security integrated with CI pipelines&lt;/li&gt;
&lt;li&gt;Bug Bounties - good bang for buck, often find privilege escalation at the app level, even for as little as £3k per found vulnerability.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://owasp.org/www-project-cornucopia/"&gt;OWASP Cornucopia physical card game&lt;/a&gt; (also available online - &lt;a href="https://cornucopia.dotnetlab.eu/"&gt;cornucopia online&lt;/a&gt;, &lt;a href="https://github.com/OWASP/cornucopia"&gt;cornucopia game source code&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.meetup.com/OWASP-London/"&gt;OWASP London Chapter Meetups&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AI, LLMs &amp;amp; ChatGPT
&lt;/h3&gt;

&lt;p&gt;There are new threats and risks with the new AI tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers incorrectly using information provided by the LLMs&lt;/li&gt;
&lt;li&gt;ChatGPT allows attackers to accelerate, particularly social engineering. E.g. asking ChatGPT for an org chart instead of having to trawl for data manually, then using that in social engineering attacks. This might make it quicker for an attacker to use someone’s manager’s name to lend authority.&lt;/li&gt;
&lt;li&gt;Developers (and others) accidentally exfiltrating sensitive data such as private keys and passwords by providing it as input to an LLM such as ChatGPT, which then integrates the data into its model in a way that allows extraction by a malicious third party.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Currently use of ChatGPT is blocked at some big companies.&lt;/p&gt;

&lt;p&gt;The change to the security landscape is a bit like asking “how did the creation of the internet change theft”.&lt;/p&gt;

&lt;p&gt;OWASP has used LLM technology to help make it easier for clients to decide which of the 150 tools they have are most appropriate.&lt;/p&gt;

</description>
      <category>security</category>
    </item>
    <item>
      <title>Text-based tools - the ultimate format for everything</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Thu, 01 Jun 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/text-based-tools-the-ultimate-format-for-everything-1ca1</link>
      <guid>https://dev.to/timabell/text-based-tools-the-ultimate-format-for-everything-1ca1</guid>
      <description>&lt;p&gt;Having lived in the world of technology for two to three decades now, I’ve come to a fundamental truth: text formats are &lt;strong&gt;the ultimate&lt;/strong&gt; format.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“text formats are &lt;strong&gt;the ultimate&lt;/strong&gt; format”&lt;/p&gt;

&lt;p&gt;~ Me, just now&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s funny really because for everything we’ve invented, of every level of complexity, usability, shininess etc, when it comes down to it, text is still king, just like it was in 1980 when I was still learning to talk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Properties of text formats
&lt;/h2&gt;

&lt;p&gt;Things that make text inevitably superior to all other more complicated formats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple - &lt;strong&gt;nothing&lt;/strong&gt; to go wrong.&lt;/li&gt;
&lt;li&gt;Use any text editor you like - vim, &lt;a href="https://marketplace.visualstudio.com/items?itemName=yzhang.markdown-all-in-one"&gt;vscode+vim&lt;/a&gt;, &lt;a href="https://plugins.jetbrains.com/plugin/164-ideavim"&gt;intellij+vim&lt;/a&gt; are my gotos, but there are soooo many.&lt;/li&gt;
&lt;li&gt;Sync, backup and restore are trivial - try as they might, nothing beats a folder-tree of text files.&lt;/li&gt;
&lt;li&gt;They are ultimately portable - no change in technology (windows to linux, desktop to cloud, laptop to mobile) requires you to change anything, text is text, just copy them across and carry on, the ultimate defense against the ever-present pernicious vendor-lockin.&lt;/li&gt;
&lt;li&gt;Conflict resolution is always possible - edited two out of sync copies? No problem, there’s a plethora of tools (&lt;a href="https://kdiff3.sourceforge.net/"&gt;kdiff3&lt;/a&gt; is my favourite), or you can just do it manually if you wish.&lt;/li&gt;
&lt;li&gt;Version control supported - text files are trivially versionable in tools like git, everything understands it and can show diffs etc.&lt;/li&gt;
&lt;li&gt;Simple conventions like markdown, yaml, toml, and even slightly more complicated things like json don’t fundamentally break any of the above.&lt;/li&gt;
&lt;li&gt;With some lightweight processing and structure (notably markdown), the same basic format can be automatically converted to a plethora of rich and beautiful forms, and with so many tools understanding formats like markdown you are spoilt for choice.&lt;/li&gt;
&lt;li&gt;Supports emoji - this one is more modern, but its usefulness is not to be underestimated, and thanks to utf-8 and unicode the plain-old-text-file can have rich emotions and symbols too.&lt;/li&gt;
&lt;li&gt;You can use all sorts of interesting tools to process text files, many from the linux cli stack such as &lt;code&gt;sed&lt;/code&gt;, &lt;code&gt;grep&lt;/code&gt; (or &lt;code&gt;ag&lt;/code&gt;), plus full-on shell scripting to automate repetitive tasks &lt;a href="https://github.com/timabell/timwise.co.uk/blob/eff17d609f862a14275c4fa0bd8319d13d59574e/new"&gt;such as making a new blog post&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
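&lt;p&gt;As a hypothetical sketch of that last point (in Python rather than shell, with a made-up file layout and front matter), the kind of “new blog post” automation linked above boils down to something like:&lt;/p&gt;

```python
from datetime import date
from pathlib import Path

def new_post(title, posts_dir="_posts"):
    """Create a dated Jekyll-style markdown file with minimal front matter."""
    slug = "-".join(title.lower().split())
    path = Path(posts_dir) / f"{date.today().isoformat()}-{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"---\ntitle: \"{title}\"\n---\n\n")
    return path

post = new_post("Text Based Tools", posts_dir="/tmp/posts")
print(post.name)  # e.g. 2023-06-01-text-based-tools.md
```

Because everything is plain text, a ten-line script like this replaces an entire CMS workflow.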

&lt;h2&gt;
  
  
  Amazing things you can do with text files
&lt;/h2&gt;

&lt;p&gt;The below are all things I personally swear by and use daily. I wish more things were like this.&lt;/p&gt;

&lt;p&gt;Markdown is by far my favourite text format, and it’s incredibly versatile. Having been repeatedly burnt by fancy binary formats (&lt;code&gt;.doc&lt;/code&gt; anyone?), I’m on a crusade to convert basically everything to plain text / markdown files. GraphViz (“dot” format) is also a notably powerful text-based system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Blogging
&lt;/h3&gt;

&lt;p&gt;As per this blog, see &lt;a href="https://dev.to/2019/06/24/setting-up-a-jekyll-blog/"&gt;“Setting up a static website/blog with jekyll”&lt;/a&gt; from 2019. No regrets there. Writing this in vim in a terminal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Slide decks
&lt;/h3&gt;

&lt;p&gt;reveal.js can parse markdown files with a sprinkling of html &amp;amp; css allowed inline (very handy) and turn them into stunning modern presentations with slick animations and multi-step reveals, amazing.&lt;/p&gt;

&lt;p&gt;I was trying to create some slides in google-slides, thinking that would be the quick way, when I ran into some bizarre formatting limitation and went hunting for alternatives. I haven’t looked back, at least for things I don’t need real-time collaboration on.&lt;/p&gt;

&lt;p&gt;You can see what I managed to do with &lt;a href="https://rustworkshop.github.io/slide-decks/"&gt;reveal.js for the Rust Workshop&lt;/a&gt; - here’s one of the &lt;a href="https://github.com/rustworkshop/slide-decks/blob/7eb002bfc1431025b47de97fd20e163456b5d7e5/decks/rust-workshop-master/slides.md?plain=1"&gt;source slide markdown files&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Note taking
&lt;/h3&gt;

&lt;p&gt;Markdown, VSCode with some markdown plugins, maybe even a &lt;a href="https://marketplace.visualstudio.com/items?itemName=kortina.vscode-markdown-notes"&gt;markdown-wiki&lt;/a&gt; tool. &lt;a href="https://f-droid.org/packages/net.gsantner.markor/"&gt;Markor&lt;/a&gt; on android. &lt;a href="https://syncthing.net/"&gt;Syncthing&lt;/a&gt; to keep them in sync across devices. Works for me, and any conflicts due to editing files out of sync are easier to deal with than &lt;a href="https://wiki.gnome.org/Apps/Tomboy"&gt;tomboy&lt;/a&gt;’s nasty XML format (yes I know XML is text but it’s still naaaasty).&lt;/p&gt;

&lt;h3&gt;
  
  
  Coding
&lt;/h3&gt;

&lt;p&gt;This entry is only half tongue-in-cheek. It’s worth pointing out that programmers have, after flirting with &lt;em&gt;many&lt;/em&gt; other approaches, settled on plain old ASCII as the one-true-format for explaining to a computer (and to other programmers) what the computer is supposed to be doing. Pay attention to what programmers have learnt; there is much depth here on managing vast amounts of precise information in text form, especially if you are not a programmer or not used to text tools.&lt;/p&gt;

&lt;p&gt;You might think programmers are odd creatures that thrive on unnecessary complexity; nothing could be further from the truth. They (we) are &lt;em&gt;obsessive&lt;/em&gt; about solving problems once and for all and being ruthlessly efficient in all things. The fact that programmer practices are seen as odd by the general public is more a sign of just how far programmers have optimised their lives away from the unthinking defaults of the masses than it is of any peculiarity of whim or culture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Graphs &amp;amp; flowcharts
&lt;/h3&gt;

&lt;p&gt;The GraphViz dot format is amazing, it takes a bit of getting used to, but once you’ve got it then you can rearrange your flow chart with vim in a few keypresses and have the whole thing rearranged in milliseconds. Amazing.&lt;/p&gt;
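&lt;p&gt;Because dot is plain text, you can even generate it programmatically; here’s a minimal Python sketch (edge names are made up) that emits a dot graph ready to paste into any renderer:&lt;/p&gt;

```python
# Build a GraphViz "dot" flow chart as plain text; any dot renderer
# (the graphviz CLI, or an online renderer) can turn it into an image.
edges = [("start", "validate"), ("validate", "save"), ("save", "done")]

lines = ["digraph flow {"]
lines += [f'  "{a}" -> "{b}";' for a, b in edges]
lines.append("}")
dot_source = "\n".join(lines)
print(dot_source)
```

Rearranging the flow chart really is just editing that list of pairs.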

&lt;p&gt;There’s even some neat web based real-time renderers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dreampuf.github.io/GraphvizOnline/"&gt;https://dreampuf.github.io/GraphvizOnline/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sketchviz.com/"&gt;https://sketchviz.com/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The yucky bits
&lt;/h2&gt;

&lt;p&gt;The almost-rans:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email’s mbox format is kinda text, but due to the way it’s set up is &lt;em&gt;horrible&lt;/em&gt; for sync&lt;/li&gt;
&lt;li&gt;vcf for contacts, what happened there then?!&lt;/li&gt;
&lt;li&gt;ical for calendars, what a disaster, so close but yet never works, shame&lt;/li&gt;
&lt;li&gt;XML - nice try, turned out to be horrible in hindsight, but not before we’d written almost all software to use it (&lt;code&gt;.docx&lt;/code&gt; anyone?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The text world is a bit short on collaborative real-time editing - google-docs is still king on that one, though it would be perfectly possible for equivalent tools to be created for the above text formats and tools. Watch this space.&lt;/p&gt;

&lt;p&gt;Crappy half-arsed implementations of markdown - looking at you Jira/Confluence/Slack. (Not really a problem with text itself, more a case of being almost there and then crappy WYSIWYG implementations wrecking it.)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Maintaining software - a bare minimum</title>
      <dc:creator>Tim Abell</dc:creator>
      <pubDate>Mon, 22 May 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/timabell/maintaining-software-a-bare-minimum-2aoi</link>
      <guid>https://dev.to/timabell/maintaining-software-a-bare-minimum-2aoi</guid>
      <description>&lt;p&gt;All the press goes to new features, but there’s a lot that has to happen just to stand still in software development.&lt;/p&gt;

&lt;p&gt;None of the following results in “shiny new feature that everyone is excited about”. It’s the ongoing work that anyone who’s not in day-to-day software development might not appreciate, sometimes questioning where the time is going.&lt;/p&gt;

&lt;p&gt;Here’s a catalog of things that eat engineering time, but that are eventually unavoidable if you don’t want to grind to a halt under a mountain of &lt;a href="https://timwise.co.uk/2020/07/09/approaches-to-refactoring-and-technical-debt/"&gt;tech debt&lt;/a&gt;:&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-feature work
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) Bugs
&lt;/h3&gt;

&lt;p&gt;Customers (or your monitoring) notice something that’s not working:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Investigate and ship a fix,&lt;/li&gt;
&lt;li&gt;or worse, spend time investigating only to discover it can’t / won’t be changed or fixed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2) Minor dependency upgrades
&lt;/h3&gt;

&lt;p&gt;e.g. upgrading &lt;a href="https://www.nuget.org/packages/xunit"&gt;xUnit&lt;/a&gt; from &lt;code&gt;v2.4.0&lt;/code&gt; to &lt;code&gt;v2.4.2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These are usually trivial if your tests are good and the authors respect &lt;a href="https://semver.org/"&gt;Semantic Versioning&lt;/a&gt;. They still need to be done regularly to keep the impact small.&lt;/p&gt;
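&lt;p&gt;A rough sketch of how semver lets you classify an upgrade before taking it (this handles plain MAJOR.MINOR.PATCH strings only; real version schemes have pre-release tags and other edge cases):&lt;/p&gt;

```python
def upgrade_kind(current, target):
    """Classify a semver upgrade as 'major', 'minor' or 'patch'.

    Assumes plain MAJOR.MINOR.PATCH strings, no pre-release suffixes.
    """
    cur = [int(n) for n in current.split(".")]
    new = [int(n) for n in target.split(".")]
    for label, a, b in zip(("major", "minor", "patch"), cur, new):
        if a != b:
            return label
    return "none"

print(upgrade_kind("2.4.0", "2.4.2"))   # patch - safe to take routinely
print(upgrade_kind("9.0.0", "10.0.0"))  # major - expect breaking changes
```

Tooling like Dependabot makes essentially this distinction when deciding how loudly to warn you about an update.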

&lt;h3&gt;
  
  
  3) Major dependency upgrades
&lt;/h3&gt;

&lt;p&gt;e.g. &lt;a href="https://github.com/jbogard/MediatR/wiki/Migration-Guide-9.x-to-10.0"&gt;upgrading MediatR from v9.x to v10.0.0&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“This release includes the following breaking changes in the API …”&lt;br&gt;&lt;br&gt;
~ MediatR release notes&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  4) Platform upgrades
&lt;/h3&gt;

&lt;p&gt;e.g.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.fastruby.io/blog/rails/upgrades/upgrade-rails-from-5-2-to-6-0.html"&gt;Upgrading Rails from 5.2 to 6.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-functions/migrate-version-3-version-4?tabs=net6-in-proc%2Cazure-cli%2Clinux&amp;amp;pivots=programming-language-csharp"&gt;Migrating apps from Azure Functions version 3.x to version 4.x&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are often significant changes, including removal or changing (sometimes called “breaking” or “breaking changes”) of things that your code relies on.&lt;/p&gt;

&lt;p&gt;You might be tempted to put these off. Don’t. The longer you leave it, the worse your problem becomes, eventually becoming insurmountable.&lt;/p&gt;

&lt;h3&gt;
  
  
  5) Fundamental shifts
&lt;/h3&gt;

&lt;p&gt;Sometimes there’s an enormous shift in technology, e.g.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On-premise compute to cloud compute.&lt;/li&gt;
&lt;li&gt;Desktop to mobile.&lt;/li&gt;
&lt;li&gt;Server-rendered web to API + Single Page Applications (SPAs).&lt;/li&gt;
&lt;li&gt;More recently, the shift from servers to serverless.&lt;/li&gt;
&lt;li&gt;Data storage (SQL vs NoSql, vs Graph databases).&lt;/li&gt;
&lt;li&gt;New hosting and technology platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don’t keep up to date then you find it increasingly hard to operate what you have (no engineers want to work with the old tech, the online world no longer supports you with information and tooling, etc). And your customers’ expectations start to demand things that your outdated approaches are just unable to support.&lt;/p&gt;

&lt;p&gt;Have a plan for regularly considering these and taking action. You might spin up new teams to try them out, or give people “Friday time” to explore new things. The only thing you mustn’t do is “nothing”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is keeping on top of upgrades important?
&lt;/h2&gt;

&lt;p&gt;Why not just ignore the upgrades till you need them?&lt;/p&gt;

&lt;p&gt;Two reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security fixes&lt;/li&gt;
&lt;li&gt;The longer you let it pile up, the harder it gets (exponentially so).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Keeping changes small
&lt;/h2&gt;

&lt;p&gt;If you allow upgrades to pile up for a month or so, you’ll have one big patch that upgrades many things. If something breaks (even with good test coverage) it can be a lengthy process to figure out which upgrade broke it and what to do about it.&lt;/p&gt;

&lt;p&gt;If you do this regularly (weekly at least), then you’ll only be upgrading a few minor versions at a time, and it will be immediately obvious where to start looking if something breaks (i.e. roll back, then upgrade the 5 dependencies one at a time, and look at the changelog of the one that breaks it.)&lt;/p&gt;
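&lt;p&gt;The “upgrade one at a time” approach can be sketched as a simple search; here’s a toy Python version where the test runner is a stand-in for your real suite and the package names are illustrative:&lt;/p&gt;

```python
def find_breaking_upgrade(upgrades, tests_pass):
    """Apply pending upgrades one at a time; return the first that fails.

    `upgrades` is a list of (package, version) pairs and `tests_pass`
    stands in for running your test suite against the applied set.
    """
    applied = []
    for upgrade in upgrades:
        applied.append(upgrade)
        if not tests_pass(applied):
            return upgrade  # check this package's changelog first
    return None

pending = [("xunit", "2.4.2"), ("mediatr", "10.0.0"), ("serilog", "2.12.0")]
# Pretend the MediatR major bump is the one that breaks the build.
broken = find_breaking_upgrade(
    pending, lambda applied: ("mediatr", "10.0.0") not in applied
)
print(broken)  # ('mediatr', '10.0.0')
```

With only a handful of pending upgrades this linear scan is quick; with a month’s backlog it becomes a slog, which is the whole argument for upgrading weekly.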

&lt;h2&gt;
  
  
  Test coverage
&lt;/h2&gt;

&lt;p&gt;Upgrades are a key reason that good test coverage (at the functionality level) is so important. Without it you will have a significant manual testing effort for every upgrade. Relying on manual testing results in upgrades being put off for longer, and breakages making it to production unnoticed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring
&lt;/h2&gt;

&lt;p&gt;Good exception monitoring and telemetry in production will improve your ability to catch any oddities that slip through your test coverage.&lt;/p&gt;
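&lt;p&gt;A toy sketch of the idea (real telemetry tooling does far more than this): wrap entry points so unhandled exceptions are logged with context before propagating, rather than disappearing silently:&lt;/p&gt;

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("telemetry")

def monitored(fn):
    """Log any unhandled exception with context before re-raising,
    so production telemetry sees it even if the caller swallows it."""
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception:
            log.exception("unhandled error in %s", fn.__name__)
            raise
    return wrapper

@monitored
def risky(x):
    return 10 / x

print(risky(5))  # 2.0 - normal calls pass straight through
```

In practice you’d ship these logs to a monitoring service and alert on new exception types appearing after a deploy.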

</description>
    </item>
  </channel>
</rss>
