<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jonathan Eccker</title>
    <description>The latest articles on DEV Community by Jonathan Eccker (@drmurloc).</description>
    <link>https://dev.to/drmurloc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F226520%2F01c0e985-d5f0-4a37-aa90-51e9a125da0f.png</url>
      <title>DEV Community: Jonathan Eccker</title>
      <link>https://dev.to/drmurloc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/drmurloc"/>
    <language>en</language>
    <item>
      <title>Why Agile Frameworks Fail - A Psychological Analysis</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Thu, 26 Mar 2026 16:30:10 +0000</pubDate>
      <link>https://dev.to/dealeron/why-agile-frameworks-fail-a-psychological-analysis-28mp</link>
      <guid>https://dev.to/dealeron/why-agile-frameworks-fail-a-psychological-analysis-28mp</guid>
      <description>&lt;p&gt;I would like to paint a picture. If it's familiar, keep reading. If it sounds outlandish and not relatable, this is not the article for you.&lt;/p&gt;

&lt;p&gt;You have a small company.&lt;/p&gt;

&lt;p&gt;You are winging it from day to day, adapting and iterating with no, or minimal, process.&lt;/p&gt;

&lt;p&gt;Everyone talks openly of process, of predictability, of structure they will eventually implement. But your small team pushes forward with late hours and big over-the-weekend features that come with incremental wins, and while everyone is exhausted, the sentiment is "at least we don't have big company culture". The entire team is vastly intelligent, capable of finding patterns and angles on problem spaces that many others can't see.&lt;/p&gt;

&lt;p&gt;Innovation is built in at every step. It's a core value, as are similar mantras like "have fun" and "employees first" (even though not a single person on the team knows how to maintain "work-life" balance).&lt;/p&gt;

&lt;p&gt;You become successful.&lt;/p&gt;

&lt;p&gt;You either get a big client or you reach a tipping point on income that allows you to grow ambitious and plan further than one month ahead.&lt;/p&gt;

&lt;p&gt;So you grow.&lt;/p&gt;

&lt;p&gt;You adopt a framework, the structure you always dreamed of. The word "Agile" got used somewhere in the description, so you call the framework "Agile". "We are doing Agile now."&lt;/p&gt;

&lt;p&gt;And you slow.&lt;/p&gt;

&lt;p&gt;Some of the former powerhouses shift into leadership. Some remain attached to fast-moving projects that "must happen". One or two others move on. The most vocal quit, either attached to "small company culture" or feeling generally unable to provide the value they used to. It's a huge loss, as they were specialists in a technology or a specific vertical.&lt;/p&gt;

&lt;p&gt;But that's part of the cost of growth, right?&lt;/p&gt;

&lt;p&gt;You keep growing.&lt;/p&gt;

&lt;p&gt;You go through a V2 of your "Agile" process that &lt;em&gt;really&lt;/em&gt; pins down your process to make sure "quality is baked in at every step".&lt;/p&gt;

&lt;p&gt;To some extent, the "Agile" framework helps. You are able to sustain multiple teams, and as you start measuring velocity, you get an understanding of effort-to-value ratios.&lt;/p&gt;

&lt;p&gt;But most of the engineers and some of the management feel like they are focusing on metrics rather than deliverables.&lt;/p&gt;

&lt;p&gt;Why are they so caught up with the numbers? They are meant to just be validation, not the product.&lt;/p&gt;

&lt;p&gt;Everyone in leadership says that, but when it comes to getting things done, it's always about the metrics.&lt;/p&gt;

&lt;p&gt;Do the numbers themselves look right? You aren't sure.&lt;/p&gt;

&lt;p&gt;You most certainly don't feel like you're getting as much done as you had before.&lt;/p&gt;

&lt;h2&gt;What the Hell Happened?&lt;/h2&gt;

&lt;p&gt;You switched the motivational framework.&lt;/p&gt;

&lt;p&gt;In an unstructured environment, especially in Engineering, motivations come from unsolved, complex problems, novelty, urgency, and personal interest. Responsibility is a byproduct or guiding factor, not a driver.&lt;/p&gt;

&lt;p&gt;Throw those motivation factors into a Google search.&lt;/p&gt;

&lt;p&gt;I'll wait.&lt;/p&gt;

&lt;p&gt;Additionally, in small companies, there is a lot of agency and freedom to tackle problems that require reducing unrefined requirements to logical, repeatable patterns within a vastly complex system.&lt;/p&gt;

&lt;p&gt;Throw "individuals with advanced pattern recognition and rules extrapolation and a pathological need for agency" into that search.&lt;/p&gt;

&lt;h2&gt;Wait, my Team Isn't ADHD or Autistic&lt;/h2&gt;

&lt;p&gt;It's entirely possible.&lt;/p&gt;

&lt;p&gt;But the line for the two diagnoses revolves around "disability", which, by the social model, is scoped to the relationship between the individual and the environment, not the individual alone. In the right environment, this means symptoms of the disability &lt;em&gt;can&lt;/em&gt; (depending on the severity or type of disability) become completely transparent or seem like "just part of the job".&lt;/p&gt;

&lt;p&gt;It's basically accidental masking by being successful, and it is fairly common.&lt;/p&gt;

&lt;p&gt;I have, personally, become fairly convinced that there is an alarmingly high amount of undiagnosed ADHD and Autistic leadership in Engineering, unintentionally masking simply by having found the right environment for their motivational framework. This is partly informed by many tech luminaries speculating that they would likely have been diagnosed as Autistic when younger (see: Bill Gates).&lt;/p&gt;

&lt;p&gt;On that point, it's worth taking a moment to note that this article exists in the space of "Lived Experience" (community discussions/personal experience) and "Emerging Theory" (professional speculation). "Institutional Knowledge" (DSM-5) is far behind the active discussion space (by design).&lt;/p&gt;

&lt;p&gt;To give an idea of just how far behind institutional knowledge is, it was just a bit more than a decade ago that the DSM recognized that Autism and ADHD could both be diagnosable in the same individual, and there's now an estimated ~50% co-diagnosis rate.&lt;/p&gt;

&lt;p&gt;The point being that discussions on neurodivergence are moving fast.&lt;/p&gt;

&lt;p&gt;What I can confidently say is:&lt;br&gt;
1) Almost everyone already knew the Engineering world had a high ratio of neurodivergent representation. There are already studies on this.&lt;br&gt;
2) Society is going through an unprecedented wave of demasking and diagnosis. (If you're successful in Engineering, I suggest taking this test &lt;a href="https://raads-r.net/" rel="noopener noreferrer"&gt;https://raads-r.net/&lt;/a&gt; if you have some time to kill)&lt;br&gt;
3) Even for those not severe enough to be diagnosed, Autistic/ADHD traits are present across "neurotypical" society. Those traits are going to be very common in engineering, which requires non-linear and highly cognitive processing. (search: Autistic traits continuous in general population).&lt;/p&gt;

&lt;h2&gt;How Does this Relate to Agile?&lt;/h2&gt;

&lt;p&gt;Great guiding question, me!&lt;/p&gt;

&lt;p&gt;I would like to call attention to the Agile Manifesto. Not an Agile framework. The core philosophy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://agilemanifesto.org/principles.html" rel="noopener noreferrer"&gt;https://agilemanifesto.org/principles.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In particular:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Build projects around motivated individuals.&lt;br&gt;
Give them the environment and support they need,&lt;br&gt;
and trust them to get the job done.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is, coincidentally, in line with the current directive for most neurodivergent therapy. Integration, not assimilation. In HR terms, we call it equity.&lt;/p&gt;

&lt;p&gt;When you had low process and high agency, with constant stimulation driven by urgency, novelty, and personal passion, you had an environment that appealed to a neurotype that is &lt;em&gt;very&lt;/em&gt; powerful when motivated.&lt;/p&gt;

&lt;p&gt;This is a neurotype that will often put in 60 hours a week and like it (assuming they are actually motivated). This is a neurotype that will solve complex problems over a weekend that might take others months to even understand.&lt;/p&gt;

&lt;p&gt;The misguided lessons society has taught itself about "work-life balance" have led us to adopt the 9-5 workweek, based on the assertion that it would work for everyone and that it's the box you must fit into to be successful.&lt;/p&gt;

&lt;p&gt;Many companies then built a bunch of fancy metrics around this philosophy in a misguided attempt to show &lt;em&gt;just&lt;/em&gt; how productive this 9-5 workweek was at driving motivation.&lt;/p&gt;

&lt;p&gt;And neurodivergents, in response, invented the mouse wiggler.&lt;/p&gt;

&lt;p&gt;In some jobs, time worked &lt;em&gt;is&lt;/em&gt; the measurable deliverable.&lt;/p&gt;

&lt;p&gt;Engineering is not one of those jobs. You deliver software, not time.&lt;/p&gt;

&lt;h2&gt;You're Suggesting 60-hour Weeks are GOOD?&lt;/h2&gt;

&lt;p&gt;Sustainably? Every week? Absolutely not.&lt;/p&gt;

&lt;p&gt;I've personally had 60-hour weeks and 10-hour weeks.&lt;/p&gt;

&lt;p&gt;I've solved months of value in 2 hours, and gone down the wrong rabbit hole for weeks.&lt;/p&gt;

&lt;p&gt;AuDHD (Autism + ADHD, that's me) in particular often comes with a drive to solve problems. I'm either solving a problem or looking for a new problem to solve. Responsibility is a guiding factor, not the engine. Burnout stems from either working on long-term projects out of obligation (and not motivation) or from too little going on ("boreout").&lt;/p&gt;

&lt;p&gt;There's a bit of nature vs. nurture in the discussion of where that "eternal antsyness" originates. I'd highly recommend doing some research on that if the question "&lt;em&gt;should&lt;/em&gt; AuDHD be that way" popped into your head. It's a fascinating rabbit hole that will show you the bigger picture of just how much society shoots itself in the foot with misleading messaging and expectations.&lt;/p&gt;

&lt;h2&gt;You Just Implied Velocity Isn't Constant&lt;/h2&gt;

&lt;p&gt;I'm not implying.&lt;/p&gt;

&lt;p&gt;I'm stating.&lt;/p&gt;

&lt;p&gt;The illusion of constant velocity is one of the most harmful lies society tells itself.&lt;/p&gt;

&lt;p&gt;Across a team and over a sufficiently long timespan, velocity can approach consistency. But &lt;em&gt;not&lt;/em&gt; at a granular, individual level.&lt;/p&gt;

&lt;p&gt;There's an important entry in the manifesto.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Agile processes promote sustainable development.&lt;br&gt;
The sponsors, developers, and users should be able&lt;br&gt;
to maintain a constant pace indefinitely.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And a second one.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Deliver working software frequently, from a&lt;br&gt;
couple of weeks to a couple of months, with a&lt;br&gt;
preference to the shorter timescale.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Preference is for shorter time periods, but the extreme of "day to day velocity" becomes micromanaging (and risks encroaching on the neurodivergent need for agency, which in turn risks causing burnout and turnover. Search: Pathological Demand Avoidance).&lt;/p&gt;

&lt;p&gt;Two-week sprints are generally a good time frame for most teams to average out to a "constant pace", so those tend to be what get used. I do believe it's important for a team to have an established contract on their iteration length, especially since time-boxing is such an important concept for ADHD (which often comes with perfectionism).&lt;/p&gt;

&lt;p&gt;But ultimately, at an individual level, the smaller the time frame you consider, the harder it is to say "spontaneous bursts of innovation and complex problem solving" and "predictable delivery speed" in the same breath.&lt;/p&gt;

&lt;p&gt;There's a balance.&lt;/p&gt;

&lt;p&gt;As a side note, search for "ADHD time blindness"; it will likely explain why you struggle with overcommitting or undercommitting in almost every sprint. Many neurodivergent employees try to plan around their highest velocity rather than their expected velocity. Combine that with a bunch of biology around dopamine dysregulation that I'm not going to get into here, and you get powerhouses that are incapable of predicting just how powerful they will be.&lt;/p&gt;

&lt;h2&gt;Wouldn't that Show up in the Metrics?&lt;/h2&gt;

&lt;p&gt;Enter Autism.&lt;/p&gt;

&lt;p&gt;Pattern and rules extraction are our entire thing. (That's a reductionist statement for the sake of making a point, we have a lot of things, don't @ me)&lt;/p&gt;

&lt;p&gt;Intuition is not intuitive for us. Big-picture thinking (holistic intuition) is often low, while expertise and wisdom (pattern-based intuition) are often higher, but take time to build.&lt;/p&gt;

&lt;p&gt;This is by design in Engineering.&lt;/p&gt;

&lt;p&gt;You want employees who are cognitive-forward; employees who are missing the caching layer of intuition. It means the problems get properly analyzed (cognition) with very few assumptions made (intuition), and a sustainable solution is implemented. Engineers don't just write code; they pattern-match and extract rules from requirements.&lt;/p&gt;

&lt;p&gt;Intuition would tell you, "Don't re-invent the wheel."&lt;br&gt;
Cognition would tell you, "But the wheels we have aren't built for this terrain. In a few weeks, we can invent the plane."&lt;br&gt;
(And Intuition might then question, "Why are we even talking about wheels? We build houses.")&lt;/p&gt;

&lt;p&gt;But when we haven't had time to iterate on the same problem space for long periods (building expertise), we default to repeatable processes and numbers to reduce the problem space to bite-sized, actionable information.&lt;/p&gt;

&lt;p&gt;More often than not, this leads to gamifying the numbers. Mouse wigglers, not pulling in extra work when work is done faster, over-pointing efforts, under-pointing efforts, skipping process completely for certain efforts, etc.&lt;/p&gt;

&lt;p&gt;In the absence of more concrete directives on deliverables, making sure those numbers look good, min/maxing them, is inevitable. It's not generally an active decision; it's just a piece of the puzzle we incorporate into the planning process when everything else is open-ended and doesn't match a pattern we are familiar with.&lt;/p&gt;

&lt;h2&gt;This is Still about Agile, Right?&lt;/h2&gt;

&lt;p&gt;Yes!&lt;/p&gt;

&lt;p&gt;The core moment you went wrong, the single "what should we have done differently?" decision point, was picking a singular Agile Framework to prescribe to a full department.&lt;/p&gt;

&lt;p&gt;These frameworks often prescribe, or are interpreted as prescribing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low, or no, rollover&lt;/li&gt;
&lt;li&gt;No mid-sprint changes&lt;/li&gt;
&lt;li&gt;Predictable velocity at a granular scale&lt;/li&gt;
&lt;li&gt;All work to be completely scoped before committing&lt;/li&gt;
&lt;li&gt;Time commits on work to be predictable and exact&lt;/li&gt;
&lt;li&gt;Turning Engineers into code monkeys, rather than problem solvers, moving decision-making entirely into other teams (Architects, Product, UX, QA, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you look at the core 12 principles of the Agile manifesto, every single one of those bullet points comes back to a misinterpretation of the Agile philosophy.&lt;/p&gt;

&lt;p&gt;And most importantly, every single one of those bullet points is incompatible with the neurotypes you have built your team out of.&lt;/p&gt;

&lt;p&gt;You have not created an environment for motivated individuals; you have created one for the "neurotypical" mentality, which so few people thrive in. (If you want another fun rabbit hole, search "does the 'neurotypical' person exist at the biological level". Spoilers: It's a sociopolitical term, not medical. Depending on how you draw a few lines in definitions in 'normal', it's not even a majority.)&lt;/p&gt;

&lt;h2&gt;Assuming You're Right, What Do We Do About it?&lt;/h2&gt;

&lt;p&gt;Be Agile, don't do Agile.&lt;/p&gt;

&lt;p&gt;Provide the right environment.&lt;/p&gt;

&lt;p&gt;The manifesto is the important part, not the frameworks.&lt;/p&gt;

&lt;p&gt;You need different processes for different types of teams, different types of projects, and different budgets.&lt;/p&gt;

&lt;p&gt;I want to call attention to an important entry in the manifesto:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Simplicity--the art of maximizing the amount&lt;br&gt;
of work not done--is essential.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And another one:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The best architectures, requirements, and designs&lt;br&gt;
emerge from self-organizing teams.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One-size-fits-all process is the problem, not the existence of process. Process is important, process enables, process protects. But doing less, quite literally, translates to doing more.&lt;/p&gt;

&lt;p&gt;I can personally tell you that when I am mid flow-state on a big project, even transitioning a Jira ticket that carries no perceived value can be enough to break momentum. It's unbearable. I can physically feel the frustration from it in my veins.&lt;/p&gt;

&lt;p&gt;Those without ADHD are likely thinking this is melodramatic. If you know someone with ADHD, ask them. This is a major reason why "context switching" is viewed as a cardinal sin among engineers. Got into the flow in the morning, but was forced into a meeting on a completely unrelated topic? It's a crapshoot whether I get into that flow again.&lt;/p&gt;

&lt;p&gt;Reduce process to only the process that enables development or protects a system.&lt;/p&gt;

&lt;p&gt;Highly critical services in maintenance mode require high process. Less critical systems require less.&lt;/p&gt;

&lt;p&gt;"Must do", non open-ended work (if you have buckets of work catered to specific important clients, for instance) is good in high process to self-regulate on commits.&lt;/p&gt;

&lt;p&gt;If something is greenfield and doesn't affect production, does it need a full regression test suite and a Jira step for Product approval in a lower environment? Can you just demo on your local machine?&lt;/p&gt;

&lt;p&gt;For the other Software Architects out there: Design systems to enable greenfield development to occur out of band from those critical services. This is one of the subtle but most valuable benefits of microservices. It lets you right-size the process.&lt;/p&gt;

&lt;p&gt;Another quote:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Working software is the primary measure of progress.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Reduce metrics to those that drive SMART goals. SMART goals are the most empowering weapon for neurodivergent employees, who often crave explicitly defined deliverables (just don't overprescribe how they get TO those deliverables; see: Pathological Demand Avoidance again).&lt;/p&gt;

&lt;h2&gt;A Couple Other Principles Viewed through the Neurodivergent Lens&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Welcome changing requirements, even late in&lt;br&gt;
development. Agile processes harness change for&lt;br&gt;
the customer's competitive advantage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Pattern-based and cognitive-forward decision-making takes time. Couple this with the low implicit social cognition that comes with Autism (difficulty intuitively keeping up with an ongoing discussion), and you get employees who need longer than your 1-hour meeting to think or process.&lt;/p&gt;

&lt;p&gt;Sure, you did sprint planning. That was important. You did due diligence on attempting to plan.&lt;/p&gt;

&lt;p&gt;Plans are nothing; planning is everything.&lt;/p&gt;

&lt;p&gt;An Engineer realizing a flaw in a plan a few days into a sprint is part of the process. Encourage text-based communication for asynchronous planning, which is &lt;em&gt;very&lt;/em&gt; neurodivergent-friendly (giving time to right-size communication and process).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Business people and developers must work&lt;br&gt;
together daily throughout the project.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I mentioned Pathological Demand Avoidance a few times.&lt;/p&gt;

&lt;p&gt;This is the best way to avoid the problem. Make engineers feel involved in decisions. They are specialists on the system you are building; &lt;em&gt;use&lt;/em&gt; that, don't silo it. Studies already demonstrate the performance gains from engineer engagement, if you need evidence. See: &lt;a href="https://link.springer.com/article/10.1007/s10664-023-10293-z" rel="noopener noreferrer"&gt;https://link.springer.com/article/10.1007/s10664-023-10293-z&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The most efficient and effective method of&lt;br&gt;
conveying information to and within a development&lt;br&gt;
team is face-to-face conversation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This might sound counterintuitive, because many people think of "anti-social" when they hear Autism. If you tried to initiate a discussion about the weather, I might give an awkward smile and do my best to extract myself from the physical pain that is "small talk".&lt;/p&gt;

&lt;p&gt;But when it comes to an area of interest, you can't shut us up.&lt;/p&gt;

&lt;p&gt;The literalness of "face to face" leaves much open to interpretation, but at the end of the day, vocal communication, at the very least (even if just Zoom), is significantly more efficient and effective time-wise. Just define your deliverables going into the meeting and organize the conversations with live note-taking to avoid endless tangents and rabbit-holing.&lt;/p&gt;

&lt;p&gt;And finally:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At regular intervals, the team reflects on how&lt;br&gt;
to become more effective, then tunes and adjusts&lt;br&gt;
its behavior accordingly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is, coincidentally, how many people with AuDHD simply live their lives. Iterate -&amp;gt; Analyze -&amp;gt; Consult (ideally) -&amp;gt; Iterate. An endless series of "focus hard, then reflect".&lt;/p&gt;

&lt;p&gt;That, at its core, is what you're aiming for with being Agile.&lt;/p&gt;

&lt;h2&gt;Well, I'm Neurodivergent and I Figured It Out; They Can Learn to Push Through It Just Like I Did&lt;/h2&gt;

&lt;p&gt;I want to call this out since I've actually heard it said, specifically in the context of prescriptive Agile frameworks.&lt;/p&gt;

&lt;p&gt;This is the equivalent of telling someone with severe depression, "just smile like the rest of us". It's a form of generational trauma, has ableist undertones, and risks violations of ADA directives on handling ADHD and Autism as a disability.&lt;/p&gt;

&lt;p&gt;The world's take on "undue hardship" and equity in ADHD/Autism is shifting. There are already successful lawsuits on underplaying the severity of mental disabilities in the workplace (including the usage of derogatory words like "lazy" or "disorganized").&lt;/p&gt;

&lt;p&gt;Do some searching. Educate yourself. Become powered by neurodiversity, not disabled by it.&lt;/p&gt;

</description>
      <category>management</category>
      <category>psychology</category>
      <category>softwareengineering</category>
      <category>startup</category>
    </item>
    <item>
      <title>Modern Software Architecture is a Misnomer (sort of)</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Wed, 06 Apr 2022 14:30:53 +0000</pubDate>
      <link>https://dev.to/dealeron/modern-software-architecture-is-a-misnomer-sort-of-1d4k</link>
      <guid>https://dev.to/dealeron/modern-software-architecture-is-a-misnomer-sort-of-1d4k</guid>
      <description>&lt;p&gt;When I first was introduced to the term "software architecture", my mind immediately filled in a picture of someone bent over a massive flow chart, sculpting it into a shape that made sense. Finding the ideal shape that would solve everyone's problems in the most efficient and effective way.&lt;/p&gt;

&lt;p&gt;I likened it to an actual Architect, outlining the shape and details of a skyscraper that would sustain for millennia.&lt;/p&gt;

&lt;p&gt;And that's sort of the only actual parallel: building a solution that will sustain for the foreseeable future.&lt;/p&gt;

&lt;h2&gt;Let's define the industries&lt;/h2&gt;

&lt;p&gt;The building industry:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Has been around for 12000 or so years (&lt;a href="https://en.wikipedia.org/wiki/List_of_oldest_known_surviving_buildings"&gt;https://en.wikipedia.org/wiki/List_of_oldest_known_surviving_buildings&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Has a fairly (pardon the pun) concrete and well developed list of fundamental infrastructure materials&lt;/li&gt;
&lt;li&gt;Has stakeholders (landlords, tenants, etc.) that have a very finite list of requirements (working, living, retailing, tenant count, etc.)&lt;/li&gt;
&lt;li&gt;Has generally well defined resource limitations (space, height restrictions, budget)&lt;/li&gt;
&lt;li&gt;Has employees that have a pretty common foundation of knowledge and experience with materials and tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The software industry by comparison:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Has been around for less than 80 years (&lt;a href="https://en.wikipedia.org/wiki/History_of_software"&gt;https://en.wikipedia.org/wiki/History_of_software&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Has a growing list of relatively volatile infrastructure materials (see: JavaScript frameworks)&lt;/li&gt;
&lt;li&gt;In many cases has stakeholders that are still figuring out what they even are trying to solve and who they are trying to solve it for&lt;/li&gt;
&lt;li&gt;Is still in the process of defining its resource limitations (look up any pricing for any cloud hosting service and you'll generally come out with more acronyms to google than answers on pricing)&lt;/li&gt;
&lt;li&gt;Has employees that will have vastly different levels and types of experience with various tooling (partly due to the large list of relatively volatile infrastructure)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Very different industries, yes, the point?&lt;/h2&gt;

&lt;p&gt;The point is that the building industry can make a lot of assumptions and have concrete answers on what it's trying to solve and how it expects to solve it. The end result is large, monolithic buildings expected to last hundreds, if not thousands, of years.&lt;/p&gt;

&lt;p&gt;In the software industry, you can make very few assumptions, and there often are no concrete answers, just educated (and sometimes uneducated) guesses. This is why modern Software Architects are inclined to focus on two concepts: standardization (increasing assume-ability) and modularity (the ease of switching educated guesses in and out).&lt;/p&gt;

&lt;h2&gt;If we built Software in the Building Industry&lt;/h2&gt;

&lt;p&gt;You've likely seen Software designed as if there were very concrete answers built on thousands of years of experience. It's a legacy monolith codebase. I like imagining every company has one, some large piece of PHP or classic ASP sitting in the corner gathering dust.&lt;/p&gt;

&lt;p&gt;When you have confidence that you're solving the right problems with the right technology, you can mold all of your solutions into an all-in-one self-documenting codebase, and expect it to chug away for eternity.&lt;/p&gt;

&lt;p&gt;Because everyone in the industry has some level of standardized experience with the tooling and infrastructure you used, you can hire pretty much anyone to maintain it! It'll last forever!&lt;/p&gt;

&lt;h2&gt;If we built Buildings in the Software Industry&lt;/h2&gt;

&lt;p&gt;You're handed the requirements for creating living space for people:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We don't know how many, if any, people will use your solution&lt;/li&gt;
&lt;li&gt;We're pretty sure that a Bed, Microwave Oven, and Toilet for every person is MVP, but we may end up needing a gas oven.&lt;/li&gt;
&lt;li&gt;There are definitely more appliances we'll need to add in; we can speculate at what those are but won't really know until we have people living there&lt;/li&gt;
&lt;li&gt;We think it needs to interact with this other company's building, they support mail and radio (but only AM).&lt;/li&gt;
&lt;li&gt;We hired one person who knows how to work with brick, and three people who know drywall but are very eager to learn brick.&lt;/li&gt;
&lt;li&gt;We also don't have a crazy budget for this until people start using it, so make it as close to free as possible.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's easy to think "those requirements should really be cleaned up", but the fact is that there aren't many years of experience that could tell us what will or won't work in the industry. We're all literally building the airplane while it's flying (&lt;a href="https://www.youtube.com/watch?v=S_dgWl83cTM"&gt;https://www.youtube.com/watch?v=S_dgWl83cTM&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;There are a few options here:&lt;/p&gt;

&lt;h3&gt;Build a Skyscraper&lt;/h3&gt;

&lt;p&gt;You proactively build one giant building that could fit any possible ask that comes up.&lt;/p&gt;

&lt;p&gt;You added support for gas in anticipation of the gas oven, but the request never came. Now you have to spend resources keeping gas pipes up to standard and in good repair.&lt;/p&gt;

&lt;p&gt;Several appliances you guessed would come up did in fact get requested! Value! Some appliances you had not anticipated, like a sauna room, did not really fit into your design, though, so they had to be hacked in as part of the bathrooms.&lt;/p&gt;

&lt;p&gt;With the delivery industry being so successful, the entire Kitchen was dropped as a supported product. Actually removing all of those kitchens would be a ton of overhead, so you remove the appliances that aren't bolted to the wall or floor and leave the rest, hoping that someone will be able to use them.&lt;/p&gt;

&lt;p&gt;Halfway through the project you realized that Brick 2.0 came out, which has breaking changes with how you implemented the bathrooms. Brick 3.0 is already being talked about in the industry and hiring skilled employees to maintain your skyscraper built on Brick 1.0 is becoming difficult.&lt;/p&gt;

&lt;p&gt;You implemented a mailbox, radio, cable, and a Batman signal for external communication, but since you only utilize the mailbox, all of the other systems stop working over time as renovations happen around them. There's speculation that someone may be stealing your cable, but there's not enough interest to look into it.&lt;/p&gt;

&lt;p&gt;As the skyscraper becomes more popular, there's plenty of room for the inhabitants, but they never utilize more than a fraction of the allocated space.&lt;/p&gt;

&lt;h3&gt;Build a House and plan for Renovations&lt;/h3&gt;

&lt;p&gt;You build a mid-sized house that achieves the MVP of the request, and make sure you leave room for features you think are coming up.&lt;/p&gt;

&lt;p&gt;The request for gas ovens never comes; it just means you have extra space in the walls.&lt;/p&gt;

&lt;p&gt;You were able to do some renovations to the house to account for the sauna. As more requests come in, you're noticing that the house is starting to bloat in size, and maintaining the electric wiring and Brick upgrades is starting to take more time and interfere with other work in the house.&lt;/p&gt;

&lt;p&gt;Dropping the kitchen as a product was difficult, but doable with some planned renovations. It did require stopping work on other portions of the house for a bit though.&lt;/p&gt;

&lt;p&gt;Eventually a request comes in for a [working] fireplace. But because of the positioning of the solar panels from a few requests earlier, you don't have a good place to put a chimney.&lt;/p&gt;

&lt;p&gt;As the house becomes more and more popular, space begins to be a problem. You now have to figure out how to build an exact copy of your house, multiple times, to accommodate more people.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build a series of Huts
&lt;/h3&gt;

&lt;p&gt;You break down the requirements to identify what particular problems are being solved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Providing a sleeping space&lt;/li&gt;
&lt;li&gt;Providing a way to cook food&lt;/li&gt;
&lt;li&gt;Providing a private area for personal needs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You build three huts, one for each ask. You found that Drywall was actually better suited for the sleeping-space hut, Brick for the kitchen hut, and a newer technology, Tile, for the restroom hut.&lt;/p&gt;

&lt;p&gt;Several other smaller appliances got added to existing huts, but nothing that significantly increased their size.&lt;/p&gt;

&lt;p&gt;You considered adding the Sauna as an expansion of the restroom hut, but realized it solved problems for a different set of people, so you gave it its own hut. This way renovations to your sauna won't break your toilet.&lt;/p&gt;

&lt;p&gt;Dropping support for the kitchen was super easy (barely an inconvenience). You just took a wrecking ball to it.&lt;/p&gt;

&lt;p&gt;The Brick upgrade only really needed to be applied to a few huts. It did interfere with work in those huts, but not with any work in other huts, so it was scheduled for a time when the Brick huts would not be worked on.&lt;/p&gt;

&lt;p&gt;As more people are drawn to your solution, you can create many more bedroom huts for virtually no effort, while not needing to create as many of the less-used Saunas.&lt;/p&gt;

&lt;h2&gt;
  
  
  Back to software
&lt;/h2&gt;

&lt;p&gt;This last example, the Huts, is what microservices are. They are the centerpiece of modern Software Architecture, and are in fact heading in the direction of becoming even more micro (serverless: think of a Hut designed just to hold a sink, because you may need more sinks than toilets).&lt;/p&gt;

&lt;p&gt;The big point is that the end goal of modern Software Architecture is not about creating codebases that survive until the end of time the way Building Architecture does.&lt;/p&gt;

&lt;p&gt;Modern Software Architecture is about creating small, modular solutions that can be spun up and thrown out on a whim, a reflection of parent industries that are still practically in their infancy.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>microservices</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Quality is a Measure of the Problems You Don't Solve</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Thu, 17 Mar 2022 14:42:35 +0000</pubDate>
      <link>https://dev.to/dealeron/quality-is-a-measure-of-the-problems-you-dont-solve-1340</link>
      <guid>https://dev.to/dealeron/quality-is-a-measure-of-the-problems-you-dont-solve-1340</guid>
<description>&lt;p&gt;We live in an era of infinite problems. Unfortunately, our resources do not scale at the same rate as our problems. The big takeaway: we can't solve everything.&lt;/p&gt;

&lt;p&gt;An important assertion to have in a world with finite resources is that our usage of those resources should be efficient. This is what quality is, maintaining a high confidence in resource efficiency (whatever the resources may be).&lt;/p&gt;

&lt;p&gt;We spend an insane amount of resources to maintain that high confidence in resource efficiency. The entirety of the Software Architect role exists because it's accepted that problem spaces evolve faster than our solutions.&lt;/p&gt;

&lt;p&gt;There's an extremely resource-efficient solution that has always been right in front of us, though, one that requires no additional layers of resource consumption to maximize quality: solve fewer problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Not Solving Problems
&lt;/h2&gt;

&lt;p&gt;It almost feels silly giving an example of how to not solve problems. There are billions of examples; I actually find it very fun to take careful note of successful products and identify the problems they intentionally don't solve.&lt;/p&gt;

&lt;p&gt;But for the sake of this, let's talk Uber.&lt;/p&gt;

&lt;p&gt;It has come a very long way, and solves infinitely more problems than anyone would have ever expected it to solve. But let's go back to their roots.&lt;/p&gt;

&lt;p&gt;Uber was a black car service. They solved the problem space of on demand black cars with trained drivers giving you a ride from point A to point B.&lt;/p&gt;

&lt;p&gt;A list of SOME of what they did not solve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tipping&lt;/li&gt;
&lt;li&gt;Multiple stops&lt;/li&gt;
&lt;li&gt;Transporting anything but people&lt;/li&gt;
&lt;li&gt;Non trained drivers&lt;/li&gt;
&lt;li&gt;Curbside taxi style service (think traditional yellow cab)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am very confident that the technology was there to support those from the start. It would have been "trivial" to add a radio button for "Pickup target: Takeout, Person", slap in an entire new User Story, and make a few extra bucks. But Uber took its time adding features.&lt;/p&gt;

&lt;p&gt;The end result is a very high quality product that solves exactly the problems it set out to solve, no more, no less.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Not?
&lt;/h2&gt;

&lt;p&gt;The question "Why not?" gets used a lot more often than it should. It is by nature dismissive of the nuance of a problem space, and is an attempt to bypass decision making or discussion.&lt;/p&gt;

&lt;p&gt;When it comes to adding functionality to a product, it is extremely easy to think "Why Not?" and slip in extra low hanging fruit as functionalities.&lt;/p&gt;

&lt;p&gt;What you will find is that "features" added this way have a high risk of being buggy or misused. They weren't added as part of a planned feature that solves a very specific problem, and will not get the support that other intentionally planned solutions have. This is a very active detriment to quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  What problems should you solve?
&lt;/h2&gt;

&lt;p&gt;There are a thousand decision making frameworks for prioritizing problems. I think the most relevant one to the software industry that also carries technical direction is Domain Driven Design. Microsoft does a much better job giving an overview of this than I could: &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/microservices/model/domain-analysis"&gt;https://docs.microsoft.com/en-us/azure/architecture/microservices/model/domain-analysis&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For those less technically oriented or those who aren't looking to pick up an entire architecture analysis framework right now, there are two primary questions to answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why do consumers spend resources (time, money, etc.) using your product instead of alternative solutions?&lt;/li&gt;
&lt;li&gt;What domain expertise do you have that sets you apart?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Answering those questions is critical. Products that aren't in line with providing value to a consumer or aren't being built with domain experience backing them are very likely to be low quality. These are excellent candidates to start opting out of solving (sunsetting their functionality, outsourcing to a team with more relevant experience, or switching to a third party system to provide required functionality).&lt;/p&gt;

&lt;h2&gt;
  
  
  It's not Lazy, it's Energy Efficient
&lt;/h2&gt;

&lt;p&gt;To wrap this up, I want to make a cultural observation. We've grown up in a society that has an absolute phobia of laziness. It's understandable, the world has achieved an inconceivable amount in the last hundred, thousand, million years. No one wants to be the cog not turning in the machine.&lt;/p&gt;

&lt;p&gt;What many have failed to notice though, is that this phobia has made us feel obligated to tackle every single problem that we come across. To quote Bo Burnham, "apathy is tragedy and boredom is a crime".&lt;/p&gt;

&lt;p&gt;Going back to my first assertion, resources are limited. On some level then, we need to make sure those are being used efficiently. YOU are an extremely limited resource. Go ahead and be energy efficient.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>productivity</category>
      <category>design</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Server To Server Google Api Credentials Without a Json File in .Net</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Thu, 17 Jun 2021 14:44:11 +0000</pubDate>
      <link>https://dev.to/dealeron/server-to-server-google-api-credentials-without-a-json-file-in-net-5a4o</link>
      <guid>https://dev.to/dealeron/server-to-server-google-api-credentials-without-a-json-file-in-net-5a4o</guid>
<description>&lt;p&gt;All Google documentation indicates that you need a special JSON file for configuring Google API calls. This breaks the standard configuration pattern that .Net uses (IConfiguration abstraction/layers, IOptions injection, etc.).&lt;/p&gt;

&lt;p&gt;After poking around at the classes available in the Nuget Package, I was able to hook this all up for server side calls in a very simple way.&lt;/p&gt;

&lt;p&gt;To cut to the chase, here's the code. Note that this is for interacting with the Calendar API, but it'll be similar for other API services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public sealed class CalendarEventRepository : ICalendarEventRepository
{
  private readonly GoogleConfiguration _configuration;
  public CalendarEventRepository(IOptions&amp;lt;GoogleConfiguration&amp;gt; options)
  {
    _configuration = options.Value;
  }
  public async Task&amp;lt;IEnumerable&amp;lt;CalendarEvent&amp;gt;&amp;gt; GetUpcomingCalendarEvents(CancellationToken cancellationToken)
  {
    var credentialJson = new
    {
      type = "service_account",
      project_id = _configuration.ProjectId,
      private_key_id = _configuration.PrivateKeyId,
      private_key = _configuration.PrivateKey.Replace("\\n","\n"),
      client_email = _configuration.ClientEmail,
      client_id = _configuration.ClientId,
      auth_uri = "https://accounts.google.com/o/oauth2/auth",
      token_uri = "https://oauth2.googleapis.com/token",
      auth_provider_x509_cert_url = "https://www.googleapis.com/oauth2/v1/certs",
      client_x509_cert_url = "https://www.googleapis.com/robot/v1/metadata/x509/" + Uri.EscapeUriString(_configuration.ClientEmail)
    };
    var credentials = GoogleCredential.FromJson(JsonConvert.SerializeObject(credentialJson)).CreateScoped(new[] { CalendarService.Scope.Calendar });

    var service = new CalendarService(new BaseClientService.Initializer()
    {
      ApplicationName = "&amp;lt;Your Application Name&amp;gt;",
      HttpClientInitializer = credentials
    });
    var eventRequest = service.Events.List(_configuration.CalendarId);
    eventRequest.MaxResults = 10;
    var result = await eventRequest.ExecuteAsync();
    return result.Items.Select(i =&amp;gt; new CalendarEvent(HashToGuid(i.Id), i.Summary, DateTimeOffset.Parse(i.Start.Date), DateTimeOffset.Parse(i.End.Date))).ToArray();
  }

  private Guid HashToGuid(string input)
  {
    using (MD5 md5 = MD5.Create())
    {
      byte[] hash = md5.ComputeHash(Encoding.Default.GetBytes(input));
      return new Guid(hash);
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important points to note are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can use &lt;code&gt;GoogleCredential.FromJson&lt;/code&gt; and assign the result to &lt;code&gt;HttpClientInitializer&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;I did a &lt;code&gt;.Replace("\\n","\n")&lt;/code&gt; on the private key because many configuration providers don't support new lines&lt;/li&gt;
&lt;li&gt;The variables you'll need to add into configuration are: &lt;code&gt;ProjectId&lt;/code&gt;, &lt;code&gt;PrivateKeyId&lt;/code&gt;, &lt;code&gt;PrivateKey&lt;/code&gt;, &lt;code&gt;ClientId&lt;/code&gt;, &lt;code&gt;ClientEmail&lt;/code&gt; (and &lt;code&gt;CalendarId&lt;/code&gt; if you're using calendars)&lt;/li&gt;
&lt;li&gt;The Google Nuget Package I'm using here is &lt;code&gt;Google.Apis.Calendar.v3&lt;/code&gt;, version &lt;code&gt;1.52.0.2312&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
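&lt;p&gt;For reference, the configuration backing &lt;code&gt;GoogleConfiguration&lt;/code&gt; could look something like the sketch below. The &lt;code&gt;"Google"&lt;/code&gt; section name and every value are hypothetical placeholders; you'd bind it with something like &lt;code&gt;services.Configure&amp;lt;GoogleConfiguration&amp;gt;(configuration.GetSection("Google"))&lt;/code&gt;. Note the private key is stored with escaped &lt;code&gt;\n&lt;/code&gt; sequences, matching the &lt;code&gt;.Replace&lt;/code&gt; call in the code.&lt;/p&gt;

```json
{
  "Google": {
    "ProjectId": "my-project-id",
    "PrivateKeyId": "0123456789abcdef",
    "PrivateKey": "-----BEGIN PRIVATE KEY-----\\nMIIEvQIB...\\n-----END PRIVATE KEY-----\\n",
    "ClientEmail": "my-service-account@my-project-id.iam.gserviceaccount.com",
    "ClientId": "123456789012345678901",
    "CalendarId": "primary"
  }
}
```

&lt;p&gt;All of these values come straight out of the JSON key file Google gives you for the service account, so populating this section is just a copy-paste exercise (or, better, secrets-manager entries).&lt;/p&gt;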

</description>
    </item>
    <item>
      <title>Rolling Back SSDT (And Why You Can't)</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Thu, 17 Jun 2021 13:55:06 +0000</pubDate>
      <link>https://dev.to/dealeron/rolling-back-ssdt-and-why-you-can-t-19b5</link>
      <guid>https://dev.to/dealeron/rolling-back-ssdt-and-why-you-can-t-19b5</guid>
      <description>&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-ver15"&gt;SSDT&lt;/a&gt; is a tool for source controlling SQL Schema.&lt;/p&gt;

&lt;h3&gt;
  
  
  How SSDT works
&lt;/h3&gt;

&lt;p&gt;It works very similar to application builds. SSDT compiles "code" (CREATE scripts) into a "build" (a &lt;a href="https://docs.microsoft.com/en-us/sql/relational-databases/data-tier-applications/data-tier-applications?view=sql-server-ver15"&gt;Dacpac&lt;/a&gt;), and you can deploy it (generally using &lt;a href="https://docs.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver15"&gt;SqlPackage&lt;/a&gt;) to a database.&lt;/p&gt;

&lt;p&gt;You can even publish it from Visual Studio the same way you might an application. There are a ton of parallels in their functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Catch
&lt;/h3&gt;

&lt;p&gt;In most applications, when you deploy a bad build and a fix is not available in a timely manner, you can typically just deploy the previous build to "roll it back".&lt;/p&gt;

&lt;p&gt;This does not work with dacpac deploys. Dacpacs are designed to only deploy in a forward, progressive manner.&lt;/p&gt;

&lt;p&gt;There's a number of reasons for this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SqlPackage and the database will not know how to revert refactors that have been run (this is the big one)&lt;/li&gt;
&lt;li&gt;Columns that were added to populated tables cannot be deleted without disabling &lt;code&gt;BlockOnPossibleDataLoss&lt;/code&gt; (very high risk)&lt;/li&gt;
&lt;li&gt;Newly created objects will not be dropped unless you enable &lt;code&gt;DropObjectsNotInSource&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are some other, smaller issues you can run into in edge cases as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;It's generally pretty simple: to fix a bad dacpac/build, you simply create a new one with whatever refactorlog changes or table changes are needed to make the fix.&lt;/p&gt;

&lt;p&gt;For higher risk changes (changing column names, for example), it can be wise to have a "just in case" dacpac ready that would undo the one you just deployed, in the event you need to urgently "roll it back".&lt;/p&gt;

&lt;p&gt;The philosophy, much like many CI/CD philosophies, is to keep builds moving forward, never taking steps back.&lt;/p&gt;

&lt;p&gt;It might also be worth noting there are mechanisms like &lt;a href="https://docs.microsoft.com/en-us/sql/relational-databases/synonyms/synonyms-database-engine?view=sql-server-ver15"&gt;synonyms&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/en-us/sql/relational-databases/tables/specify-computed-columns-in-a-table?view=sql-server-ver15"&gt;computed columns&lt;/a&gt; for minimizing risk of changing names.&lt;/p&gt;
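&lt;p&gt;As a minimal T-SQL sketch of those mechanisms (using the &lt;code&gt;Person&lt;/code&gt; table from the example in this post; the renamed table name is a hypothetical example), a computed column can keep an old column name readable, and a synonym can do the same for a renamed table:&lt;/p&gt;

```sql
-- Illustrative sketch: after renaming [Name] to [FullName], re-add the old
-- name as a read-only computed alias so not-yet-updated readers keep working.
ALTER TABLE [dbo].[Person] ADD [Name] AS [FullName];

-- A synonym plays the same role for a renamed table (here, a hypothetical
-- rename of [Person] to [People]):
CREATE SYNONYM [dbo].[Person] FOR [dbo].[People];
```

&lt;p&gt;Both are stopgaps for the transition window, to be dropped in a later build once every consumer has moved to the new names.&lt;/p&gt;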

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;I have a very simple SSDT example project with a single &lt;code&gt;Person&lt;/code&gt; table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE [dbo].[Person]
(
  [Id] INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  [Name] NVARCHAR(64)
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I've created &lt;em&gt;Dacpac 1&lt;/em&gt; for this, and deployed it.&lt;/p&gt;

&lt;p&gt;Let's roll out a change to the name column, using the Visual Studio refactor tool to refactor the column Name to FullName:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE [dbo].[Person]
(
  [Id] INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  [FullName] NVARCHAR(64)
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This also creates a refactor log entry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Operation Name="Rename Refactor" Key="813ae462-b83f-4dc8-ab48-12c56b09137b" ChangeDateTime="06/10/2021 17:37:09"&amp;gt;
  &amp;lt;Property Name="ElementName" Value="[dbo].[Person].[Name]" /&amp;gt;
  &amp;lt;Property Name="ElementType" Value="SqlSimpleColumn" /&amp;gt;
  &amp;lt;Property Name="ParentElementName" Value="[dbo].[Person]" /&amp;gt;
  &amp;lt;Property Name="ParentElementType" Value="SqlTable" /&amp;gt;
  &amp;lt;Property Name="NewName" Value="[FullName]" /&amp;gt;
&amp;lt;/Operation&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I created &lt;em&gt;Dacpac 2&lt;/em&gt; for this, and deployed it. The script looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PRINT N'The following operation was generated from a refactoring log file 813ae462-b83f-4dc8-ab48-12c56b09137b';
PRINT N'Rename [dbo].[Person].[Name] to FullName';
GO
EXECUTE sp_rename @objname = N'[dbo].[Person].[Name]', @newname = N'FullName', @objtype = N'COLUMN';
GO
-- Refactoring step to update target server with deployed transaction logs
IF OBJECT_ID(N'dbo.__RefactorLog') IS NULL
BEGIN
    CREATE TABLE [dbo].[__RefactorLog] (OperationKey UNIQUEIDENTIFIER NOT NULL PRIMARY KEY)
    EXEC sp_addextendedproperty N'microsoft_database_tools_support', N'refactoring log', N'schema', N'dbo', N'table', N'__RefactorLog'
END
GO
IF NOT EXISTS (SELECT OperationKey FROM [dbo].[__RefactorLog] WHERE OperationKey = '813ae462-b83f-4dc8-ab48-12c56b09137b')
INSERT INTO [dbo].[__RefactorLog] (OperationKey) values ('813ae462-b83f-4dc8-ab48-12c56b09137b')
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But oh no! Everything broke because we forgot to update the application first.&lt;/p&gt;

&lt;p&gt;Trying to redeploy &lt;em&gt;Dacpac 1&lt;/em&gt;, the script ends up looking like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/*
The column [dbo].[Person].[FullName] is being dropped, data loss could occur.
*/
IF EXISTS (select top 1 1 from [dbo].[Person])
    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
GO
PRINT N'Altering [dbo].[Person]...';
GO
ALTER TABLE [dbo].[Person] DROP COLUMN [FullName];
GO
ALTER TABLE [dbo].[Person]
    ADD [Name] NVARCHAR (64) NULL;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That doesn't actually undo the rename. It drops &lt;code&gt;FullName&lt;/code&gt; and creates &lt;code&gt;Name&lt;/code&gt;. Or, more accurately, it fails because there is data in the table.&lt;/p&gt;

&lt;p&gt;So the resolution is to create a new build. I use the Visual Studio refactor tool to rename the column FullName to Name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE [dbo].[Person]
(
  [Id] INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  [Name] NVARCHAR(64)
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the refactor log has &lt;em&gt;2 entries&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Operation Name="Rename Refactor" Key="813ae462-b83f-4dc8-ab48-12c56b09137b" ChangeDateTime="06/10/2021 17:37:09"&amp;gt;
  &amp;lt;Property Name="ElementName" Value="[dbo].[Person].[Name]" /&amp;gt;
  &amp;lt;Property Name="ElementType" Value="SqlSimpleColumn" /&amp;gt;
  &amp;lt;Property Name="ParentElementName" Value="[dbo].[Person]" /&amp;gt;
  &amp;lt;Property Name="ParentElementType" Value="SqlTable" /&amp;gt;
  &amp;lt;Property Name="NewName" Value="[FullName]" /&amp;gt;
&amp;lt;/Operation&amp;gt;
&amp;lt;Operation Name="Rename Refactor" Key="7c29f405-8585-41ed-8ae0-47ffb0ec8128" ChangeDateTime="06/10/2021 17:43:44"&amp;gt;
  &amp;lt;Property Name="ElementName" Value="[dbo].[Person].[FullName]" /&amp;gt;
  &amp;lt;Property Name="ElementType" Value="SqlSimpleColumn" /&amp;gt;
  &amp;lt;Property Name="ParentElementName" Value="[dbo].[Person]" /&amp;gt;
  &amp;lt;Property Name="ParentElementType" Value="SqlTable" /&amp;gt;
  &amp;lt;Property Name="NewName" Value="[Name]" /&amp;gt;
&amp;lt;/Operation&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generating this as the build &lt;em&gt;Dacpac 3&lt;/em&gt;, the script comes out looking like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PRINT N'The following operation was generated from a refactoring log file 7c29f405-8585-41ed-8ae0-47ffb0ec8128';
PRINT N'Rename [dbo].[Person].[FullName] to Name';
GO
EXECUTE sp_rename @objname = N'[dbo].[Person].[FullName]', @newname = N'Name', @objtype = N'COLUMN';
GO
-- Refactoring step to update target server with deployed transaction logs
IF NOT EXISTS (SELECT OperationKey FROM [dbo].[__RefactorLog] WHERE OperationKey = '7c29f405-8585-41ed-8ae0-47ffb0ec8128')
INSERT INTO [dbo].[__RefactorLog] (OperationKey) values ('7c29f405-8585-41ed-8ae0-47ffb0ec8128')
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which successfully gets us back to a working application.&lt;/p&gt;

&lt;p&gt;For science, let's look at what happens when this is deployed to a database that has not had either &lt;em&gt;Dacpac 2&lt;/em&gt; or &lt;em&gt;Dacpac 3&lt;/em&gt; deployed to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Refactoring step to update target server with deployed transaction logs
IF NOT EXISTS (SELECT OperationKey FROM [dbo].[__RefactorLog] WHERE OperationKey = '813ae462-b83f-4dc8-ab48-12c56b09137b')
INSERT INTO [dbo].[__RefactorLog] (OperationKey) values ('813ae462-b83f-4dc8-ab48-12c56b09137b')
IF NOT EXISTS (SELECT OperationKey FROM [dbo].[__RefactorLog] WHERE OperationKey = '7c29f405-8585-41ed-8ae0-47ffb0ec8128')
INSERT INTO [dbo].[__RefactorLog] (OperationKey) values ('7c29f405-8585-41ed-8ae0-47ffb0ec8128')
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's smart enough to know it does not need to change the column Name to FullName and then back to Name. It just inserts the refactor logs, and does nothing else.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manual Rollbacks
&lt;/h3&gt;

&lt;p&gt;I'm going to call this out as a capability, but would really like to emphasize that it can introduce a lot of risk and is not suggested except for the most dire and urgent of situations.&lt;/p&gt;

&lt;p&gt;If you were to manually undo a refactor, that is, execute an "ALTER TABLE..." script via non-dacpac-generated SQL and delete the entry from "__RefactorLog", you can effectively put the database into a state as if the refactor had never happened.&lt;/p&gt;

&lt;p&gt;This introduces a lot of complex, risky variables. Especially if you are using database versioning, you could have a database that thinks it's on one version but is actually missing a piece of it. You could have one environment that has undone a refactor while other environments haven't, so you get inconsistent behavior on subsequent deploys. And if you botch the manual rollback, you're going to end up in a weird state where further dacpac deploys do weirder things that can make your database state even worse.&lt;/p&gt;

&lt;p&gt;So to summarize: except in very, very rare edge cases (that we should design process to avoid), rollbacks for dacpacs don't really exist. Keep things progressing forward.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Dropping Objects with SSDT</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Tue, 15 Jun 2021 13:55:57 +0000</pubDate>
      <link>https://dev.to/dealeron/dropping-objects-with-ssdt-33g2</link>
      <guid>https://dev.to/dealeron/dropping-objects-with-ssdt-33g2</guid>
      <description>&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-ver15"&gt;SSDT&lt;/a&gt; is a tool for source controlling SQL Schema.&lt;/p&gt;

&lt;h3&gt;
  
  
  How SSDT works
&lt;/h3&gt;

&lt;p&gt;It works ALMOST exactly like you would expect it to work: you add CREATE scripts to define objects. You publish this script utilizing a &lt;a href="https://docs.microsoft.com/en-us/sql/relational-databases/data-tier-applications/data-tier-applications?view=sql-server-ver15"&gt;Dacpac&lt;/a&gt; which is essentially a build. &lt;a href="https://docs.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver15"&gt;SqlPackage&lt;/a&gt; uses this dacpac to diff what's in your build versus what's in the target database, and creates scripts to resolve differences.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Catch
&lt;/h3&gt;

&lt;p&gt;Accounting for human error means accepting the possibility that a typo can lead to an entire table being dropped. Simply renaming a column or table and not checking in a refactor log could, without safeguards, lose irrecoverable data.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;SqlPackage has two major mechanisms to help avoid this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Dacpac deploys will &lt;em&gt;NOT&lt;/em&gt; by default drop objects it finds in the target database that do not exist in the source. This means just deleting a table in SSDT will not try to drop it on a deploy. This does not apply to columns.&lt;/li&gt;
&lt;li&gt;Before a dacpac deploy tries to delete any object that can hold data (i.e., a column or table), it does a select to check whether there's data that would be deleted. If it finds data, it aborts the operation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are options in SSDT that turn off these layers of protection. It is HIGHLY recommended that you do not enable them except when you have time to pay extra attention to a deployment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Drop Objects In Target But Not In Source
&lt;/h4&gt;

&lt;p&gt;The SqlPackage parameter is &lt;code&gt;DropObjectsNotInSource&lt;/code&gt;; in Visual Studio you can find this under &lt;code&gt;Advanced&lt;/code&gt; -&amp;gt; &lt;code&gt;Drop&lt;/code&gt; -&amp;gt; &lt;code&gt;Drop objects in target but not in source&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This does NOT disable the second layer of protection (preventing data loss), but it will cause the deploy to ATTEMPT to drop tables/stored procedures/etc. that do not exist in the dacpac.&lt;/p&gt;

&lt;p&gt;It is important to note that you can specify what types of objects to NOT drop when using this. For instance, if you don't use dacpac deploys to manage Users/Permissions/etc. you can set &lt;code&gt;DoNotDropObjectTypes=Users,Permissions&lt;/code&gt; to NOT drop those.&lt;/p&gt;
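&lt;p&gt;As a command-line sketch of a publish using both properties (the action and property names are real SqlPackage parameters, but the server, database, and dacpac names are placeholders, and you should check your SqlPackage version's documentation for the exact property list):&lt;/p&gt;

```shell
# Illustrative only: every name below is a placeholder.
SqlPackage /Action:Publish \
  /SourceFile:MyDatabase.dacpac \
  /TargetServerName:localhost \
  /TargetDatabaseName:MyDatabase \
  /p:DropObjectsNotInSource=True \
  /p:DoNotDropObjectTypes=Users,Permissions
```

&lt;p&gt;Running the same command with &lt;code&gt;/Action:Script&lt;/code&gt; instead of &lt;code&gt;/Action:Publish&lt;/code&gt; writes the deployment script out for review rather than executing it, which pairs well with the advice later in this series about inspecting generated scripts before risky deploys.&lt;/p&gt;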

&lt;h4&gt;
  
  
  Block Incremental Deployment If Data Loss Might Occur
&lt;/h4&gt;

&lt;p&gt;The SqlPackage parameter is &lt;code&gt;BlockOnPossibleDataLoss&lt;/code&gt;; in Visual Studio you can find this under &lt;code&gt;Advanced&lt;/code&gt; -&amp;gt; &lt;code&gt;Block incremental deployment if data loss might occur&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is the big scary one you should be extremely careful with. Disabling it causes any drops that would occur to happen without checking for data first.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;I have a quick example SSDT project with a single &lt;code&gt;Person&lt;/code&gt; table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE [dbo].[Person]
(
  [Id] INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  [Name] NVARCHAR(64)
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If I delete &lt;code&gt;Person.sql&lt;/code&gt; and generate a publish script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE [$(DatabaseName)];
GO
PRINT N'Update complete.';
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing happens.&lt;/p&gt;

&lt;p&gt;Now, if we enable &lt;code&gt;DropObjectsNotInSource&lt;/code&gt;, and generate the script again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE [$(DatabaseName)];
GO
/*
Table [dbo].[Person] is being dropped.  Deployment will halt if the table contains data.
*/
IF EXISTS (select top 1 1 from [dbo].[Person])
    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
GO
PRINT N'Dropping [dbo].[Person]...';
GO
DROP TABLE [dbo].[Person];
GO
PRINT N'Update complete.';
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And once more, with both &lt;code&gt;DropObjectsNotInSource&lt;/code&gt; enabled AND &lt;code&gt;BlockOnPossibleDataLoss&lt;/code&gt; DISABLED:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE [$(DatabaseName)];
GO
PRINT N'Dropping [dbo].[Person]...';
GO
DROP TABLE [dbo].[Person];
GO
PRINT N'Update complete.';
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Recommended Processes For Deleting Objects
&lt;/h3&gt;

&lt;p&gt;I highly recommend avoiding disabling &lt;code&gt;BlockOnPossibleDataLoss&lt;/code&gt; as much as possible. From experience, it is VERY easy for a small oversight or misunderstanding of a dacpac/build to lead to losing some critical data.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Simple Process
&lt;/h4&gt;

&lt;p&gt;The simplest course of action for dropping tables is to manually delete all rows in the table once you're 100% confident it is not used; then the table will be dropped with no complaints by a normal deploy with &lt;code&gt;DropObjectsNotInSource&lt;/code&gt; enabled.&lt;/p&gt;

&lt;h4&gt;
  
  
  A More Controllable Process
&lt;/h4&gt;

&lt;p&gt;A process that takes a bit more overhead but provides more confidence before dropping objects is to deprecate tables/columns/etc. before dropping them.&lt;/p&gt;

&lt;p&gt;You can utilize a &lt;code&gt;deprecated&lt;/code&gt; schema (with schema-specific permissions to keep applications from accidentally using objects in it) that you move tables/etc. into to mark them for deletion. Or append &lt;code&gt;_deprecated&lt;/code&gt; to column names to mark them as dead. This also gives you time to roll back the deprecation if you do find something still using the object.&lt;/p&gt;
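&lt;p&gt;The deprecation step could be sketched in T-SQL like this (the &lt;code&gt;deprecated&lt;/code&gt; schema matches the convention above, while &lt;code&gt;AppUser&lt;/code&gt;, &lt;code&gt;OldLookup&lt;/code&gt;, and the &lt;code&gt;Fax&lt;/code&gt; column are hypothetical examples):&lt;/p&gt;

```sql
-- A schema to quarantine objects marked for deletion; deny application
-- access so nothing can quietly keep depending on them.
CREATE SCHEMA [deprecated] AUTHORIZATION [dbo];
GO
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::[deprecated] TO [AppUser];
GO

-- Move a table out of active use instead of dropping it immediately.
ALTER SCHEMA [deprecated] TRANSFER [dbo].[OldLookup];
GO

-- Flag a dead column by appending _deprecated (do this via the SSDT refactor
-- tool so the rename lands in the refactor log).
EXEC sp_rename '[dbo].[Person].[Fax]', 'Fax_deprecated', 'COLUMN';
```

&lt;p&gt;Once a deprecated object has sat unused for an agreed-upon window, dropping it in the next cleanup pass is low-drama.&lt;/p&gt;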

&lt;p&gt;Then, on some regular basis, you can go through and delete all deprecated objects at once. When you do this you will want to pay extra attention to the generated script BEFORE you deploy it. Make sure it's only going to drop what you expect it to drop.&lt;/p&gt;

&lt;p&gt;You can do this in Visual Studio by selecting &lt;code&gt;Generate Publish Script&lt;/code&gt; instead of &lt;code&gt;Publish&lt;/code&gt; in the Publish window (I wish those weren't right next to each other). This is actually a good habit to get into every now and then, to gain insight into what deploys actually do.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The SSDT Refactor Log</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Fri, 11 Jun 2021 18:02:17 +0000</pubDate>
      <link>https://dev.to/dealeron/the-ssdt-refactor-log-4jk6</link>
      <guid>https://dev.to/dealeron/the-ssdt-refactor-log-4jk6</guid>
      <description>&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-ver15" rel="noopener noreferrer"&gt;SSDT&lt;/a&gt; is a tool for source controlling SQL Schema.&lt;/p&gt;

&lt;h3&gt;
  
  
  How SSDT works
&lt;/h3&gt;

&lt;p&gt;It works ALMOST exactly like you would expect it to: you add CREATE scripts to define objects. You publish this schema utilizing a &lt;a href="https://docs.microsoft.com/en-us/sql/relational-databases/data-tier-applications/data-tier-applications?view=sql-server-ver15" rel="noopener noreferrer"&gt;Dacpac&lt;/a&gt;, which is essentially a build. &lt;a href="https://docs.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver15" rel="noopener noreferrer"&gt;SqlPackage&lt;/a&gt; uses this dacpac to diff what's in your build against what's in the target database, and creates scripts to resolve the differences.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Catch
&lt;/h3&gt;

&lt;p&gt;There's a somewhat big catch. The above process relies on being able to match objects in the dacpac to objects in the target database. If you rename a table, how would it know that [RenamedTableName] is the same table as [TableName]? From SqlPackage's perspective, it would think that [TableName] got dropped and [RenamedTableName] was created. (It won't actually drop it unless you tell it to, but that's a blog for another time)&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;The solution is to be explicit about when certain refactors happen. The dacpac carries instructions on refactors that SqlPackage would not be able to intuit on its own.&lt;/p&gt;

&lt;p&gt;There are two mechanisms that enable this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;&amp;lt;projectName&amp;gt;.refactorlog&lt;/code&gt; file that gets checked in that lets you record refactors&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;__RefactorLog&lt;/code&gt; table that keeps track of which refactors have already been run on a database&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  .RefactorLog File
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;.refactorlog&lt;/code&gt; file consists of XML, mainly just a list of &lt;code&gt;&amp;lt;Operation&amp;gt;&lt;/code&gt; elements. Each of those represents an individual refactor.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;Operation Name="Rename Refactor" Key="3a7b06a1-5e83-47f9-95d7-2a2fbaf2a20e" ChangeDateTime="06/10/2021 15:16:10"&amp;gt;
  &amp;lt;Property Name="ElementName" Value="[dbo].[Country]" /&amp;gt;
  &amp;lt;Property Name="ElementType" Value="SqlTable" /&amp;gt;
  &amp;lt;Property Name="ParentElementName" Value="[dbo]" /&amp;gt;
  &amp;lt;Property Name="ParentElementType" Value="SqlSchema" /&amp;gt;
  &amp;lt;Property Name="NewName" Value="[CountryInfo]" /&amp;gt;
&amp;lt;/Operation&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above is an example of a refactor that changes a table &lt;code&gt;Country&lt;/code&gt;'s name to &lt;code&gt;CountryInfo&lt;/code&gt;. I wouldn't worry too much about learning the syntax for these entries; if you are using SSDT in Visual Studio it creates them for you.&lt;/p&gt;

&lt;h4&gt;
  
  
  __RefactorLog Table
&lt;/h4&gt;

&lt;p&gt;This is a VERY simple table that just keeps track of which refactors have already been run. Note that it automatically gets created when you deploy a dacpac to your database.&lt;/p&gt;

&lt;p&gt;It only has one column, &lt;code&gt;OperationKey&lt;/code&gt;, which just holds a list of GUIDs. Those are the GUIDs you can see in the &lt;code&gt;Key&lt;/code&gt; field in the Operation entry shown above.&lt;/p&gt;

&lt;p&gt;When you deploy a dacpac, it will fill this table with every refactor that got run, so it doesn't try to run the same refactor every single time (which would obviously fail in most cases).&lt;/p&gt;
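
&lt;p&gt;Since it is just a normal table, you can peek at which refactors a given database has already received:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
SELECT [OperationKey]
FROM [dbo].[__RefactorLog];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;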

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;Here's a brief example of changing a column name. I've started with a very simple SSDT project with just one table in it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

CREATE TABLE [dbo].[Person]
(
  [Id] INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  [Name] NVARCHAR(64)
)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, to test it WITHOUT the refactor log, let's just change the column &lt;code&gt;Name&lt;/code&gt; to &lt;code&gt;FullName&lt;/code&gt; directly in the CREATE script.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

CREATE TABLE [dbo].[Person]
(
  [Id] INT NOT NULL PRIMARY KEY IDENTITY(1,1),
  [FullName] NVARCHAR(64)
)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Generating a script for what this would do (doable from the publish window in Visual Studio), we get:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

/*
The column [dbo].[Person].[Name] is being dropped, data loss could occur.
*/
IF EXISTS (select top 1 1 from [dbo].[Person])
    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
GO
PRINT N'Altering [dbo].[Person]...';
GO
ALTER TABLE [dbo].[Person] DROP COLUMN [Name];
GO
ALTER TABLE [dbo].[Person]
    ADD [FullName] NVARCHAR (64) NULL;
GO


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Obviously this is not what we were going for.&lt;/p&gt;

&lt;p&gt;Undoing the manual change, and utilizing the Visual Studio Refactor option instead (I HIGHLY suggest using this as opposed to manually writing refactor log entries; it will also automatically update stored procedures, functions, views, etc. that reference the renamed table/column for you):&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxhwypigc4prei4opaj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxhwypigc4prei4opaj7.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It has now created an &lt;code&gt;ExampleDatabase.refactorlog&lt;/code&gt; file (because this was the first refactor), with a single Operation entry:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;Operation Name="Rename Refactor" Key="0517c5b8-8ae1-4642-ba65-9465fa2daf3c" ChangeDateTime="06/10/2021 15:36:05"&amp;gt;
  &amp;lt;Property Name="ElementName" Value="[dbo].[Person].[Name]" /&amp;gt;
  &amp;lt;Property Name="ElementType" Value="SqlSimpleColumn" /&amp;gt;
  &amp;lt;Property Name="ParentElementName" Value="[dbo].[Person]" /&amp;gt;
  &amp;lt;Property Name="ParentElementType" Value="SqlTable" /&amp;gt;
  &amp;lt;Property Name="NewName" Value="[FullName]" /&amp;gt;
&amp;lt;/Operation&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Generating the script again, we get:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

GO
PRINT N'The following operation was generated from a refactoring log file 0517c5b8-8ae1-4642-ba65-9465fa2daf3c';
PRINT N'Rename [dbo].[Person].[Name] to FullName';
GO
EXECUTE sp_rename @objname = N'[dbo].[Person].[Name]', @newname = N'FullName', @objtype = N'COLUMN';
GO
-- Refactoring step to update target server with deployed transaction logs
IF OBJECT_ID(N'dbo.__RefactorLog') IS NULL
BEGIN
    CREATE TABLE [dbo].[__RefactorLog] (OperationKey UNIQUEIDENTIFIER NOT NULL PRIMARY KEY)
    EXEC sp_addextendedproperty N'microsoft_database_tools_support', N'refactoring log', N'schema', N'dbo', N'table', N'__RefactorLog'
END
GO
IF NOT EXISTS (SELECT OperationKey FROM [dbo].[__RefactorLog] WHERE OperationKey = '0517c5b8-8ae1-4642-ba65-9465fa2daf3c')
INSERT INTO [dbo].[__RefactorLog] (OperationKey) values ('0517c5b8-8ae1-4642-ba65-9465fa2daf3c')
GO


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Which is the intended result.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rolling Back Refactors
&lt;/h3&gt;

&lt;p&gt;At some point I'll have an entire article on rolling back Dacpac deployments, or more accurately how you CAN'T roll back Dacpac deployments.&lt;/p&gt;

&lt;p&gt;I think it's worth mentioning here, however, that there isn't really a mechanism for undoing refactor log changes. You can't simply deploy an older dacpac and have it undo refactors - it wouldn't even know what to undo.&lt;/p&gt;

&lt;p&gt;The best (safest, most structured, controllable) mechanism to undo a refactor log is to create a new refactor that undoes it, and deploy that via a newer dacpac. SSDT is about always having builds move forward, even if "moving forward" is changes that undo the previous "move forward".&lt;/p&gt;
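
&lt;p&gt;As a sketch, "undoing" the earlier &lt;code&gt;Name&lt;/code&gt;-to-&lt;code&gt;FullName&lt;/code&gt; rename would just be a second rename operation committed on top of the first (the &lt;code&gt;Key&lt;/code&gt; GUID here is made up; in practice you'd let Visual Studio's Refactor option generate the entry):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&amp;lt;Operation Name="Rename Refactor" Key="7f1c2d9e-0b34-4c1d-9a8f-5e6a7b8c9d0e" ChangeDateTime="06/11/2021 10:00:00"&amp;gt;
  &amp;lt;Property Name="ElementName" Value="[dbo].[Person].[FullName]" /&amp;gt;
  &amp;lt;Property Name="ElementType" Value="SqlSimpleColumn" /&amp;gt;
  &amp;lt;Property Name="ParentElementName" Value="[dbo].[Person]" /&amp;gt;
  &amp;lt;Property Name="ParentElementType" Value="SqlTable" /&amp;gt;
  &amp;lt;Property Name="NewName" Value="[Name]" /&amp;gt;
&amp;lt;/Operation&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;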

&lt;p&gt;As someone who has manually deleted rows from the &lt;code&gt;__RefactorLog&lt;/code&gt; table and undone schema changes by hand in the midst of urgent, super hot situations, I can definitely say that the complications manual management of the &lt;code&gt;__RefactorLog&lt;/code&gt; can introduce are almost never a worthwhile risk compared to committing an "undo" refactor and moving forward.&lt;/p&gt;

</description>
      <category>sql</category>
    </item>
    <item>
      <title>Journey Before Destination: A Discussion of Goals</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Fri, 05 Feb 2021 15:27:11 +0000</pubDate>
      <link>https://dev.to/dealeron/journey-before-destination-a-discussion-of-goals-oeb</link>
      <guid>https://dev.to/dealeron/journey-before-destination-a-discussion-of-goals-oeb</guid>
      <description>&lt;p&gt;I've adopted the mantra of &lt;a href="https://www.brandonsanderson.com/the-stormlight-archive-series/#THEWAYOFKINGS"&gt;Brandon Sanderson's "Way of Kings"&lt;/a&gt; for a little more than a year now. "Journey Before Destination" has been at the core of a lot of decisions I have made.&lt;/p&gt;

&lt;p&gt;Truth be told I've tried to write several articles on it, but they always feel somewhat hollow when written for other people. I think finding out what the mantra means to you is a very important part of the journey, and that's not something that can be conveyed via an article.&lt;/p&gt;

&lt;p&gt;What I WOULD like to talk about, however, is how it relates to Goals.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Goal?
&lt;/h2&gt;

&lt;p&gt;A quick google gives you "the object of a person's ambition or effort; an aim or desired result". The term "result" is key to how most people treat goals: a goal is a destination. It's viewed as some final state we can reach that provides an absolute measure of success.&lt;/p&gt;

&lt;p&gt;In theory, goals are helpful. They are intended to help drive us forward, give us some ideal future state to look forward to, and help guide us when making difficult decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  There wouldn't be an article without a but...
&lt;/h2&gt;

&lt;p&gt;But. That's all in theory. We, as humans, have a habit of distorting goals into something that loses most of the original value.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Three States
&lt;/h3&gt;

&lt;p&gt;When we talk about goals, there are really only three states we like to categorize them in: In Progress, Succeeded, and Failed.&lt;/p&gt;

&lt;p&gt;This oversimplification means that if you are no longer pursuing a goal, and you did not succeed in it, it defaults to being viewed as a failure. This viewpoint is a massive driver of the negative relationship that a large portion of Americans have with college. Once you've started down the college route, any divergence from that route is viewed as a failure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rigidity
&lt;/h3&gt;

&lt;p&gt;Goals have a habit of being set in stone.&lt;/p&gt;

&lt;p&gt;Never mind that we are human, that we learn and change, and that the world learns and changes around us.&lt;/p&gt;

&lt;p&gt;This is very common in people who want to lose weight. Generally, when you set out down the path of losing weight, you set some unrealistic target weight (or BMI) based on a number the internet spit out, make some progress, and eventually peter out toward some asymptote defined by how much you changed your lifestyle (likely still far off from your target).&lt;/p&gt;

&lt;p&gt;With our tendency to view goals as rigid, the routes you will often see taken here are:&lt;br&gt;
1) Maintaining the current lifestyle and stressing endlessly about not meeting the goal (likely causing more damage from the stress than from the weight)&lt;br&gt;
2) Massively changing the lifestyle in a frantic attempt to reach the goal (often undoing the lifestyle change the second it's reached)&lt;/p&gt;

&lt;h3&gt;
  
  
  Idealization
&lt;/h3&gt;

&lt;p&gt;Once you reach that goal, life will be different.&lt;/p&gt;

&lt;p&gt;We have the habit of picturing ourselves with a degree, at our target weight, married, with a million dollars, whatever that goal is. We're fed all the imagery we need to glamorize it.&lt;/p&gt;

&lt;p&gt;Going back to the college degree, there's sort of this mindset of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to College&lt;/li&gt;
&lt;li&gt;Get Degree&lt;/li&gt;
&lt;li&gt;???&lt;/li&gt;
&lt;li&gt;Profit!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I can think of very few things in this world (that I've encountered) which, once reached, are like flipping a switch of "and now life is better".&lt;/p&gt;

&lt;h3&gt;
  
  
  The lie we grew up with
&lt;/h3&gt;

&lt;p&gt;There's this saying; I would be very surprised if anyone hasn't heard it: "You can achieve anything you want if you try hard enough". We do ourselves a disservice by living by that saying.&lt;/p&gt;

&lt;p&gt;It puts an unreasonable responsibility on us as individuals, and implies that if we don't succeed then it's all our fault because we didn't try hard enough. It completely ignores the fact that there are millions of variables that are out of our control and there are quite simply some things in the universe that are impossible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ok so we should give up and never try to achieve anything. Thanks.
&lt;/h2&gt;

&lt;p&gt;No!&lt;/p&gt;

&lt;p&gt;The point of this is to empower you to make the right decisions, to follow a good, positive path.&lt;/p&gt;

&lt;p&gt;Life isn't made up of big, eventful, goal-reachings.&lt;/p&gt;

&lt;p&gt;Life is made up of a series of steps. A series of decisions.&lt;/p&gt;

&lt;p&gt;The most you can ever do, the most anyone (including yourself) can ever expect from you, is that &lt;a href="https://dev.to/dealeron/the-creed-a3d"&gt;you make the best decisions you can&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For me this means following aspirations as opposed to goals: more flexible priorities that help me make everyday decisions (think "eat healthier" versus "lose weight"). The more I go down this route, the more I realize the Sims got it right with whims and aspirations.&lt;/p&gt;

&lt;p&gt;No matter what all of this means to you, the most important thing to remember is that life is happening now. Live your journey.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>motivation</category>
    </item>
    <item>
      <title>Escape the Blame Culture</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Fri, 31 Jul 2020 18:17:12 +0000</pubDate>
      <link>https://dev.to/dealeron/escape-the-blame-culture-49hp</link>
      <guid>https://dev.to/dealeron/escape-the-blame-culture-49hp</guid>
      <description>&lt;p&gt;In the modern era of information, it is generally very apparent when things don't go the way we intend them to. We have a habit of chalking these non-ideal outcomes as a failure. These non ideal outcomes include all sorts of symptoms: a system failing to do what was expected of it, a time line not being reached, the wrong expectation having been set.&lt;/p&gt;

&lt;p&gt;From what I've seen, the product development world has largely accepted that these "failures" are a part of general development life-cycle. "We learn from our failures more than our successes" is a pretty common phrase I have heard in relation to development, even going back to high school.&lt;/p&gt;

&lt;p&gt;So if we all agree that these "failures" are an important part of the process it makes sense to embrace failure and do everything we can to maximize the benefits from it, right?&lt;/p&gt;

&lt;p&gt;Unfortunately we are surrounded by a culture that does not seem to want to embrace failure. The answer to failure is very often to find some sort of scapegoat. This is especially true when someone is reacting to (what they view as) a failure in a product or system that they have no immediate, direct control over.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Blame Game
&lt;/h1&gt;

&lt;p&gt;Let's say a piece of code gets deployed that, given some special condition, drops the entirety of your users table. Let's play the blame game.&lt;/p&gt;

&lt;p&gt;It was QA's fault for not testing 100% of every single possible outcome. It was the developer's fault for writing bad SQL. It was the reviewer's fault for not catching the bad SQL. It was the project owner's fault for not properly scoping the ticket to include test cases that account for that edge case. It was the architect's fault for not having better, more testable design patterns implemented that could have caught this. It was the team lead's fault for not thoroughly checking every single one of the PRs that came their way. It was dev-ops' fault for deploying the bad code.&lt;/p&gt;

&lt;p&gt;Cool, great, we did it. We blamed all the things. Some of those probably sound pretty ridiculous, too. Now we can move on, we have fixed the problem. Wait, no, we didn't. We solved nothing. This thing is going to occur again in a week because SQL mistakes are just inherently difficult to catch.&lt;/p&gt;

&lt;h1&gt;
  
  
  Everything has a Risk
&lt;/h1&gt;

&lt;p&gt;Nothing we ever do will be completely foolproof. There is no perfect state where I can click a button and be 100% confident that the button will do the same exact thing it has always done. In fact, Chaos Engineering is a really cool style of software design that actively introduces unpredictability to force us to design around it.&lt;/p&gt;

&lt;p&gt;To complicate the problem even more, not every risk is equal. Using Chaos Engineering as a fairly meta example, the impact of an overlooked risk is often not high enough to warrant maintaining something like Chaos Monkey, which has massive overheads and implications for development life cycles.&lt;/p&gt;

&lt;p&gt;Suffice it to say that, if we can't be expected to accommodate every risk, we should not be blamed when one makes it through the cracks. This is not to say that we have no responsibility, or shouldn't be held accountable for improving some portion of the system or process. I'm actually saying the opposite.&lt;/p&gt;

&lt;h1&gt;
  
  
  Empower Your Failures
&lt;/h1&gt;

&lt;p&gt;So stepping back, we have our failure. Something went wrong, expectations were not met. Do we roll everything back to when everything was failure-free and hide in a cave, never making a change again, because obviously the way we had it was perfect?&lt;/p&gt;

&lt;p&gt;No! We mitigate the damages and use the "failure" as leverage to implement change to improve process or the system. One of the wisest people I know taught me that some of the greatest changes are born in chaos. No one wants to make process or design changes when everything is working.&lt;/p&gt;

&lt;p&gt;If you have no means to provide an improvement, and this is super important, don't feel obligated to have an opinion. It's very easy to find where a piece of process could be improved, especially in hindsight. We have an entire internet full of people waiting to tell you what went wrong.&lt;/p&gt;

&lt;p&gt;The valuable part, the reason people make tons of money doing jobs that require decades of experience having learned from these failures time and time again, is improving the system or process to avoid having repeat mistakes. Throwing out blame about what the actual problem was or who caused the problem is not helping, and will often become a distraction to the people attempting to solve it.&lt;/p&gt;

&lt;h1&gt;
  
  
  It's About Trust
&lt;/h1&gt;

&lt;p&gt;We love to categorize people. Stereotyping and generalizing are human tools for avoiding the fact that we can't actually comprehend everything that goes on in a person's mind without having lived their entire life. When someone makes a decision we don't like, or breaks something, it's easy to chalk it up to them being incompetent and move on with your life.&lt;/p&gt;

&lt;p&gt;I would challenge you to default to trusting everyone until they have really proven they don't deserve it. I'm sure ten people just popped into your head who aren't trustworthy. Flip that script: there are other people whose heads you probably popped into as untrustworthy. They are wrong, right?&lt;/p&gt;

&lt;p&gt;This is a hard exercise: actually believing that people are by default good, well intending, and aren't intentionally causing problems just because they're some movie villain that likes disrupting your life.&lt;/p&gt;

&lt;p&gt;With this trust, however, you will find that instead of blaming someone, you will start looking for context you were missing: something that drove decisions made outside the scope of what you know. And with that knowledge comes not only the ability to improve the system or product, but the ability to improve yourself.&lt;/p&gt;

</description>
      <category>career</category>
    </item>
    <item>
      <title>Adventures in SQL Source Control with SSDT</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Fri, 07 Feb 2020 11:05:57 +0000</pubDate>
      <link>https://dev.to/dealeron/adventures-in-sql-source-control-with-ssdt-13i4</link>
      <guid>https://dev.to/dealeron/adventures-in-sql-source-control-with-ssdt-13i4</guid>
      <description>&lt;p&gt;SQL Source control in our stack was introduced because of a number of pain points. We had to do full database refreshes any time we spun up a new environment, we were fairly limited in options to run databases locally, we had no clean way to properly version databases, and SQL scripts often were manual operations. We had come up with a couple processes to alleviate these pain points, but they often weren't sustainable or flexible.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is SQL Source Control?
&lt;/h2&gt;

&lt;p&gt;SQL Source control is a way to treat your SQL database as a proper application (a Data Tier Application, to be precise). Changes made to schema and data get the same attention a standard application would. This allows you to publish a database to a target, updating it to match the schema you provided.&lt;/p&gt;

&lt;p&gt;There are a number of SQL Source control providers; we went with SSDT because it fits into our stack very nicely, and is free.&lt;/p&gt;

&lt;h2&gt;
  
  
  SSDT
&lt;/h2&gt;

&lt;p&gt;SSDT, or SQL Server Data Tools, is managed very similarly to a standard .NET application. The solution and project layout is very similar, and you can msbuild it or build and publish it in Visual Studio, all in a way that'll feel familiar to any .NET developer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting It Up
&lt;/h2&gt;

&lt;p&gt;The first step is creating a new &lt;code&gt;SQL Server Database Project&lt;/code&gt; in Visual Studio (under the &lt;code&gt;SQL Server&lt;/code&gt; section).&lt;/p&gt;

&lt;p&gt;Visual Studio then provides a very handy schema import tool. If you right click on the project, and highlight &lt;code&gt;Import&lt;/code&gt;, you can select to import the schema from a database. We opted for the &lt;code&gt;Schema\Object Type&lt;/code&gt; folder structure as it would assist in providing some better boundaries between different schemata.&lt;/p&gt;

&lt;p&gt;Once a connection is selected (ours pointed at a QA server), you click Start and it'll do the rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Git Ignore
&lt;/h2&gt;

&lt;p&gt;For the initial commit, it became apparent there was a lot of excess stuff that probably shouldn't go into source control. After playing around a bit the &lt;code&gt;.gitignore&lt;/code&gt; came out looking like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.vs
bin
obj
*.user
*.dbmdl
*.jfm
*.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When making changes you will occasionally see files called &lt;code&gt;.refactorlog&lt;/code&gt;, &lt;strong&gt;&lt;em&gt;do not gitignore these&lt;/em&gt;&lt;/strong&gt;, they are required for properly doing refactors like changing column names, and need to be checked in.&lt;/p&gt;

&lt;h2&gt;
  
  
  It was all broken
&lt;/h2&gt;

&lt;p&gt;Over the last few years I've put a lot of effort into pruning our database. We inherited a rather gross setup with over 1000 stored procedures, half of which were not used anywhere anymore, and hundreds of dead tables. I've gotten a number of these deleted (see &lt;a href="https://dev.to/dealeron/using-graphdbs-to-visualize-code-sql-dependencies-3370"&gt;https://dev.to/dealeron/using-graphdbs-to-visualize-code-sql-dependencies-3370&lt;/a&gt;), but there was still a lot of clutter.&lt;/p&gt;

&lt;p&gt;Lo and behold, there were a couple dozen stored procedures and views that simply would not build, as well as a few foreign keys. This provided immediate value, identifying a ton of dead weight that could be removed.&lt;/p&gt;

&lt;p&gt;The foreign keys were broken because they were set up in such a way that did not create a hierarchical cascade delete, and had the possibility of spinning up multiple delete paths. They were set up on an older version of SQL Server and managed to port over to a newer version of SQL Server that does not allow this to occur.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deleting the Broken Stuff
&lt;/h2&gt;

&lt;p&gt;The goal then became to delete all the broken stuff. I had assumed that just deleting them from source control and republishing would delete the objects. This proved not to be true unless I selected the "Delete Objects in target not in source control" option. Even that did not work, because of a failsafe that prevents you from accidentally deleting tables that still have rows in them.&lt;/p&gt;

&lt;p&gt;The solution was to provide &lt;code&gt;DROP IF EXISTS&lt;/code&gt; scripts in a data file (see next section). This adds a bit of long-lived bloat, but deleting tables is typically uncommon, so it does not create too much of a pain point.&lt;/p&gt;
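
&lt;p&gt;A sketch of what one of those scripts might look like (object names hypothetical; &lt;code&gt;DROP ... IF EXISTS&lt;/code&gt; requires SQL Server 2016 or later):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- ManualDrops/BrokenObjects.sql, pulled in by the post deployment script
-- Re-runnable: each drop becomes a no-op once the object is gone
DROP VIEW IF EXISTS [dbo].[vw_DeadView];
DROP PROCEDURE IF EXISTS [dbo].[usp_DeadProcedure];
DROP TABLE IF EXISTS [dbo].[DeadTable];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On older versions, the equivalent pattern is &lt;code&gt;IF OBJECT_ID(N'dbo.DeadTable') IS NOT NULL DROP TABLE [dbo].[DeadTable];&lt;/code&gt;.&lt;/p&gt;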

&lt;h2&gt;
  
  
  The Data
&lt;/h2&gt;

&lt;p&gt;It's pretty common to have table data that is considered part of schema. Lookup tables for referential integrity or required for applications are the most common example. To make sure that these get included and maintained via deployments/publishes, you need to add a post deployment script.&lt;/p&gt;

&lt;p&gt;I created a data folder and added a &lt;code&gt;LookupTables&lt;/code&gt; folder into it to accommodate all tables that are considered lookup tables. As of writing this, I'm up to 4 folders: &lt;code&gt;LookupTables&lt;/code&gt;, &lt;code&gt;ConfigurationTables&lt;/code&gt;, &lt;code&gt;DataChanges&lt;/code&gt;, and &lt;code&gt;ManualDrops&lt;/code&gt;. &lt;strong&gt;ConfigurationTables&lt;/strong&gt; mostly consists of data that &lt;em&gt;should&lt;/em&gt; be initialized by the supporting applications, but is not yet. &lt;strong&gt;DataChanges&lt;/strong&gt; is for scripts used to change the shape of data or migrate data.&lt;/p&gt;

&lt;p&gt;I then added a &lt;code&gt;Post-Deployment Script&lt;/code&gt; (under &lt;code&gt;User Scripts&lt;/code&gt; in &lt;code&gt;Add Item&lt;/code&gt; when you right click on the data folder). You can only have one post deployment script per project, but can make that script pull in scripts from other files.&lt;/p&gt;

&lt;p&gt;This post deployment script ends up looking like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:r ./LookupTables/SomeLookupTable.sql
:r ./LookupTables/AnotherLookupTable.sql
:r ./ConfigurationTables/AConfigurationTable.sql
:r ./DataChanges/Ticket#.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;:r&lt;/code&gt; is a SQLCMD command that pulls in a SQL script from another file. Make sure those other files are not included in the build, or else the compiler will try to treat them as tables/stored procedures/views/etc., and will give you compile time errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Catch With Data
&lt;/h2&gt;

&lt;p&gt;When deploying a dacpac (the "build" of a Data Tier Application), &lt;em&gt;ALL&lt;/em&gt; data scripts will run &lt;em&gt;EVERY&lt;/em&gt; time you deploy. This means you need to make sure everything is re-runnable, and you may need to clear out scripts that are no longer runnable after a schema change.&lt;/p&gt;

&lt;p&gt;So far I haven't seen this become too much of a concern. There have been a few occasionally-run scripts that we identified would need to stop being SQL scripts and become functionality living in an application, but other than that, re-runnability is solved with simple &lt;code&gt;IF NOT EXISTS(SELECT * FROM TableName)&lt;/code&gt; wrappers.&lt;/p&gt;
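
&lt;p&gt;A sketch of what one of those re-runnable lookup table scripts looks like (table and values hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- LookupTables/OrderStatus.sql
-- Only seeds when the table is empty, so it's safe to run on every deploy
IF NOT EXISTS (SELECT * FROM [dbo].[OrderStatus])
BEGIN
    INSERT INTO [dbo].[OrderStatus] ([Id], [Name])
    VALUES (1, N'Pending'),
           (2, N'Shipped'),
           (3, N'Cancelled');
END
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;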

&lt;h2&gt;
  
  
  Mad Gains
&lt;/h2&gt;

&lt;p&gt;I've been beta testing SSDT for our company for a few months now. It's become apparent that SSDT, dacpac deploys, and SQL Source Control solve many more problems than I had initially set out to address.&lt;/p&gt;

&lt;h3&gt;
  
  
  LocalDbs
&lt;/h3&gt;

&lt;p&gt;It's now extremely simple to spin up a super lightweight database on a local machine. Simply spin up a new SQL Server (docker, localdb, sql server express, whatever), and either do a dacpac deploy or publish to it from Visual Studio, and you have a functional database.&lt;/p&gt;

&lt;p&gt;New environments in general are much easier to spin up, and have much lower maintenance cost since we can start them with small databases and build them up instead of doing a full pull of production data with a data scrub on sensitive information.&lt;/p&gt;

&lt;h3&gt;
  
  
  No Manual Executions
&lt;/h3&gt;

&lt;p&gt;There doesn't need to be communication between development and other teams (dev ops, QA, etc.) about what scripts to execute alongside other tickets. Just like an application, as long as they deploy a specific build, it'll all be there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Breaking Up Database by Domain
&lt;/h3&gt;

&lt;p&gt;Our database was a bit of a monolith. Since SSDT by default doesn't drop tables that are missing from the project, we are able to virtually break up the database into multiple projects that all just happen to deploy to the same instance of SQL Server.&lt;/p&gt;

&lt;p&gt;This makes it super trivial to migrate tables related to a specific team/domain into their own micro database (with a bit of migration plan). It essentially decouples tables from the database that's hosting them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Visual Studio Writes Your SQL
&lt;/h3&gt;

&lt;p&gt;Visual Studio provides a ton of helper functionality for changing schema. Typically, writing a SQL script to change the name of a column might be a bit gross, but in Visual Studio you can just go into the designer view, change the name, and it'll create a refactor log entry to make the change for you without you needing to write any SQL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compile Time Schema Errors
&lt;/h3&gt;

&lt;p&gt;As stated earlier, it's now very apparent when something about the database doesn't line up. Deleting a table that's required by a view won't even make it past the build step, let alone make it as far as running against a live database.&lt;/p&gt;
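
&lt;p&gt;For example, a project containing a view like this will fail to build the moment the table it references is removed from the project (the names here are invented; the failure shows up as an unresolved-reference build error, not a runtime one):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE VIEW dbo.OpenOrders
AS
SELECT OrderId, CustomerId
FROM dbo.Orders          -- delete dbo.Orders from the project and
WHERE Status = 'Open';   -- this view turns into a compile-time error
&lt;/code&gt;&lt;/pre&gt;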

&lt;h3&gt;
  
  
  Merge Conflicts
&lt;/h3&gt;

&lt;p&gt;I know it sounds odd, but merge conflicts are genuinely a good thing. They are a strong indicator that a piece of functionality is possibly being modified to do two different things. Previously there was always the possibility that two people making changes to the same stored procedure, function, or view would each overwrite the other's changes, and no one would be the wiser until the bug reports started coming in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tests
&lt;/h3&gt;

&lt;p&gt;I haven't played around with tests much yet, but being able to put tests around triggers, functions, views, constraints, and stored procedures is really powerful. Putting logic in the database has always seemed iffy because of how opaque it can be. I still stand by keeping most logic at the application layer, but having a testable database massively eases those concerns when we want or need to put logic in the database for performance reasons.&lt;/p&gt;
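
&lt;p&gt;For a sense of what this could look like, a framework like tSQLt (a separate open-source tool, not SSDT's built-in test projects) expresses a test as a stored procedure; every object name below is invented, and the view under test is assumed to exist in the project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;EXEC tSQLt.NewTestClass 'OrderTests';
GO
CREATE PROCEDURE OrderTests.[test OpenOrders excludes closed orders]
AS
BEGIN
    -- swap the real table for an empty fake so the test controls all data
    EXEC tSQLt.FakeTable 'dbo.Orders';
    INSERT INTO dbo.Orders (OrderId, Status) VALUES (1, 'Open'), (2, 'Closed');

    SELECT OrderId INTO #Actual FROM dbo.OpenOrders;

    SELECT OrderId INTO #Expected FROM (VALUES (1)) AS v(OrderId);
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;
&lt;/code&gt;&lt;/pre&gt;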

&lt;h2&gt;
  
  
  Not so Mad Pains
&lt;/h2&gt;

&lt;p&gt;There's definitely some issues that come with using SSDT.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Bloat
&lt;/h3&gt;

&lt;p&gt;As you add more and more DataChanges through SQL source control, you may occasionally need to go in and trim the DataChanges that were only ever meant to run once. Ideally you'd do this only after verifying that every database is on the latest version, so that they all already contain the data change you are deleting. The same cleanup is needed as RefactorLogs start to pile up.&lt;/p&gt;
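
&lt;p&gt;Until a one-time DataChange gets trimmed, the usual way to keep it safe in a post-deployment script is to guard it so re-running is a no-op; a sketch of that pattern, with the tracking table and key name invented:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;IF NOT EXISTS (SELECT 1 FROM dbo.AppliedDataChanges
               WHERE Name = '2019-11-backfill-order-status')
BEGIN
    -- the one-time change itself
    UPDATE dbo.Orders SET Status = 'Open' WHERE Status IS NULL;

    -- record that it ran, so redeploys skip it
    INSERT INTO dbo.AppliedDataChanges (Name)
    VALUES ('2019-11-backfill-order-status');
END
&lt;/code&gt;&lt;/pre&gt;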

&lt;h3&gt;
  
  
  Gross Merge Conflicts
&lt;/h3&gt;

&lt;p&gt;While merge conflicts in stored procedures and the like can be helpful, I've seen a lot of merge conflicts come out of the post-deployment script, as well as the &lt;code&gt;.sqlproj&lt;/code&gt; file. These can cause a bit of a headache and create some uncertainty, but most have been manageable with a take-right-then-left resolution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Overall SSDT has been a fantastic experience. I've really enjoyed learning it, and it has proven to be a very powerful tool. I would love it if it had a better pattern for handling data changes, but that's really the only pain point.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>database</category>
    </item>
    <item>
      <title>The GraphQl Monolith: 1 Year Later</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Fri, 22 Nov 2019 20:29:24 +0000</pubDate>
      <link>https://dev.to/dealeron/the-graphql-monolith-1-year-later-4i7b</link>
      <guid>https://dev.to/dealeron/the-graphql-monolith-1-year-later-4i7b</guid>
      <description>&lt;h1&gt;
  
  
  The Journey So Far
&lt;/h1&gt;

&lt;p&gt;A year ago our development team was going through several initiatives. The first was a migration of some super-legacy code from form submissions to Web API backed pages; the second was creating a search page that let our company's internal users run audits that would previously have required a ticket for development to write a SQL query. These of course were not the ONLY initiatives going on, just the ones I was most involved in.&lt;/p&gt;

&lt;p&gt;While doing some R&amp;amp;D for the search page I happened across this concept of GraphQl. It seemed extremely fitting, as it lets the front end shape the query it needs. Columns can be selected dynamically for the results, so if there's a piece of information you aren't showing in the UI, you don't need to query for it. The tooling available for GraphQl also made our front end developers happy, so it stuck.&lt;/p&gt;
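
&lt;p&gt;As a sketch of what that shaping looks like, a page asks only for the fields it will render (the schema and field names below are invented for illustration, not our actual schema):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;query SearchPage {
  vehicles(search: "sedan", first: 20) {
    vin
    price
    # mileage and photos exist in the schema, but this page
    # doesn't show them, so it simply doesn't ask for them
  }
}
&lt;/code&gt;&lt;/pre&gt;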

&lt;p&gt;I then got more involved with the legacy code cleanup. It had run into a lot of issues because there were a lot of pages that needed similar-but-not-exactly-the-same data to each other, pushing traditional REST APIs into the familiar battle between making 5 API calls per page load or building APIs catered to specific pages, reducing reusability.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"But wait"&lt;/em&gt;, I thought, &lt;em&gt;"I already made this information available in the Search Page I built"&lt;/em&gt;. With extremely minimal effort, I could make these pages read from that data, add in any extra we need, cut down on duplicated code used for querying and standardize the way we retrieve information across the board. We all agreed early on not to utilize GraphQl for writing/commands. GraphQl mutations don't feel like they offer too much of an advantage over REST.&lt;/p&gt;

&lt;p&gt;So we built a vision, kind of a half-baked &lt;a href="https://martinfowler.com/bliki/CQRS.html"&gt;CQRS&lt;/a&gt; stack, where we could have micro-services writing into and maintaining databases, and one big GraphQl application acting as the de-facto query application across the board.&lt;/p&gt;

&lt;p&gt;We never achieved that vision, but we did get a decent chunk of everything reading from GraphQl, and had a lot of plans in place to make everything read from it.&lt;/p&gt;

&lt;h1&gt;
  
  
  It Was Great
&lt;/h1&gt;

&lt;p&gt;I feel like a lot of the naysayers haven't felt how great it is to write a GraphQl query and get back any piece of information you need, without worrying about all of the efficiency and permissions complications you've already solved in the back end. There were at least 3 cases where I was able to ship critical fixes with only a few one-line changes that would otherwise have required changes to the REST API.&lt;/p&gt;

&lt;p&gt;Being able to traverse between two domains (Websites and Inventory, in this case) within one query is just plain awesome, something that had historically been a pain point for us. Sure, it was a bit odd translating the way we represented Inventory in Elasticsearch to fit cleanly with how the rest of the system worked, but once it worked, it worked great.&lt;/p&gt;
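
&lt;p&gt;A cross-domain traversal like the one described might look roughly like this (the types and fields are hypothetical stand-ins for our actual schema):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;query WebsiteWithInventory {
  website(id: "123") {
    name
    inventory {   # hops from the Websites domain into Inventory
      vin
      price
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;One round trip, one response shape, no client-side stitching of two APIs.&lt;/p&gt;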

&lt;p&gt;It was also nice that these GraphQl queries usually spat the information out in a shape that was directly usable by the UI. There was no need for us to massage the results of multiple REST API calls into the shape the UI needed; it pretty much just worked.&lt;/p&gt;

&lt;h1&gt;
  
  
  There's Always a But...
&lt;/h1&gt;

&lt;p&gt;The astute probably already picked up on it when I mentioned it earlier. I didn't notice it until it was too late. The inventory database that GraphQl sits over often holds duplicate entries for the same product. This created some awkward traversals and aggregations that didn't fit with the way the rest of the GraphQl monolith acted as an authoritative database. Products didn't have a unique identifier, because they simply were not unique.&lt;/p&gt;

&lt;p&gt;If we wanted to build a search engine for products specifically backed by this GraphQl application, you could not do it without getting duplicate results.&lt;/p&gt;

&lt;p&gt;This is kind of the crux of the problem. Monolithic GraphQl applications are GREAT when they sit over authoritative databases. If you have, stored somewhere, &lt;strong&gt;THE&lt;/strong&gt; product, &lt;strong&gt;THE&lt;/strong&gt; website, &lt;strong&gt;THE&lt;/strong&gt; user, then it's just a matter of figuring out a way to draw a relationship between those objects and you have a super friendly API.&lt;/p&gt;

&lt;p&gt;Having these specially designed query databases is very common in companies that embrace CQRS: databases that hold information in a non-authoritative way just for the sake of being super fast, super searchable, or super aggregate-able. But because they are designed to solve one particular problem, they can often be hard to relate to other, more authoritative, data.&lt;/p&gt;

&lt;h1&gt;
  
  
  Looking to the Future
&lt;/h1&gt;

&lt;p&gt;It's probably worth mentioning that I will still argue that GraphQl is vastly superior to REST for querying data in most cases. I will always push to support it on as many applications as possible.&lt;/p&gt;

&lt;p&gt;There has been a lot of discussion around moving away from centralized authoritative databases. There are a lot of advantages to this, chiefly reducing the amount of coordination two teams need to do around maintaining the same database. The advantages are for another article; I highly recommend looking up &lt;a href="https://dddcommunity.org/learning-ddd/what_is_ddd/"&gt;Domain Driven Design&lt;/a&gt; and CQRS. But as we get further down that road, it's going to get harder and harder to maintain a GraphQl application that's capable of bridging every database and every domain in a coherent, consistent way.&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>architecture</category>
      <category>cqrs</category>
    </item>
    <item>
      <title>The Creed</title>
      <dc:creator>Jonathan Eccker</dc:creator>
      <pubDate>Fri, 15 Nov 2019 14:56:19 +0000</pubDate>
      <link>https://dev.to/dealeron/the-creed-a3d</link>
      <guid>https://dev.to/dealeron/the-creed-a3d</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;We understand and truly believe that everyone did the best job they could given what they knew at the time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the creed we read out at the beginning of every retro at DealerOn. They are really not hard words to say or to agree with. I doubt there are many people in the software world who would disagree with the message that lies on the surface.&lt;/p&gt;

&lt;p&gt;I've found myself applying and reciting the creed in many aspects of life recently. Not just within the tech stack, or work in general, but in life. And the more I've said these words, the more I've reflected on their meaning. I realized it goes much deeper than I had originally thought.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Surface
&lt;/h1&gt;

&lt;p&gt;The immediate message is simple. Don't place blame. When something goes wrong, there is no need to blame anyone because you can have faith that they were doing the best they could with the knowledge they had.&lt;/p&gt;

&lt;p&gt;I think this can actually be extended to much more than just coworkers. Call me an optimist, but I typically like to assume that people are well-intentioned. Generally when someone makes a decision that seems outright stupid in hindsight, I believe it's because they were stuck with the limitations of the knowledge they had at the time. Or even more common, maybe there's a piece of information I'm missing that they acted off of.&lt;/p&gt;

&lt;p&gt;With this assumption in hand, when something goes wrong it becomes a matter of identifying what knowledge was missing, so that you have it the next time you face a similar decision.&lt;/p&gt;

&lt;h1&gt;
  
  
  Journey Before Destination
&lt;/h1&gt;

&lt;p&gt;If you dig in just a little more, you run into a common theme from many "how to architect/code" guidelines: make the best decisions you can using the knowledge you have now, not hypothetical knowledge you don't have.&lt;/p&gt;

&lt;p&gt;Picture yourself in a year looking back at the action you are weighing now. Are you putting days of effort into making your new code generic because you know there's a use case for it, or because you don't know whether there is one? Make the best decision based on what you know, not on what you don't know.&lt;/p&gt;

&lt;p&gt;This philosophy of "Journey Before Destination" (stolen from The Way of Kings) is another pretty big one for me. I could write an entire post on it alone (and likely will). We, as developers, have a tendency to over-engineer things to solve some magical end-goal state instead of the current problem of the journey. That is making decisions based entirely on what you don't know, and it often leads to a lot of wasted time when goals shift (welcome to the world of software design).&lt;/p&gt;

&lt;p&gt;Instead focus on solving your current problem with your current knowledge, and maximizing learning from the solutions you implement to best guide your actions in the future.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Good Stress
&lt;/h1&gt;

&lt;p&gt;If you live by the creed, you can wear it as a shield from distress (the bad stress).&lt;/p&gt;

&lt;p&gt;Literally everyone has screwed up at some point. I have forgotten my share of WHERE clauses in DELETE commands. The fact that a &lt;code&gt;=&lt;/code&gt; instead of a &lt;code&gt;==&lt;/code&gt; can make an entire application crash and burn (and I've been there too) means that we live in a super volatile, crazy industry.&lt;/p&gt;

&lt;p&gt;When this happens, it's really easy to focus on everything you did wrong. This causes distress. Some of the knee-jerk bug fixes I've seen come out of this mentality have caused almost as many problems as the initial bug. Focus on learning instead; you learn more from mistakes than from successes.&lt;/p&gt;

&lt;p&gt;Trust in yourself that you did the best you could, and trust that in the future you will do even better armed with your newfound knowledge. This is difficult to do and takes a lot of mental training, but when you finally truly embrace it you will find your distress replaced with eustress (the good stress). Instead of inducing anxiety and depression, it will leave you motivated to improve.&lt;/p&gt;

&lt;h1&gt;
  
  
  It's About Trust
&lt;/h1&gt;

&lt;p&gt;At the end of the day, it all really boils down to trust. And not just earned trust, but universally applied trust: trust, and then verify. It's a hard ask; we are surrounded on a regular basis by news stories spun to make it seem like people have no idea what they are doing.&lt;/p&gt;

&lt;p&gt;This trust that your coworkers are doing the best they can is how companies prosper. It's the foundation of this new age of crazy concepts like unlimited vacation, setting your own hours, and setting your own pay.&lt;/p&gt;

&lt;h1&gt;
  
  
  Closing Thoughts
&lt;/h1&gt;

&lt;p&gt;It took me a year or so of listening to the creed on a monthly basis before it really sank in. It's easy to latch onto the "don't point fingers" part, but I think almost more important is having trust in yourself. Trust that you are doing the best you can now, and that you will continue doing the best you can.&lt;/p&gt;

</description>
      <category>career</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
