<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sean Drew</title>
    <description>The latest articles on DEV Community by Sean Drew (@seanmdrew).</description>
    <link>https://dev.to/seanmdrew</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2541428%2F40e2cd5c-2ecb-482c-97ce-029004569735.png</url>
      <title>DEV Community: Sean Drew</title>
      <link>https://dev.to/seanmdrew</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/seanmdrew"/>
    <language>en</language>
    <item>
      <title>Writing C# Code Like a Klingon</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Tue, 19 Aug 2025 14:54:40 +0000</pubDate>
      <link>https://dev.to/seanmdrew/writing-c-code-like-a-klingon-1bgh</link>
      <guid>https://dev.to/seanmdrew/writing-c-code-like-a-klingon-1bgh</guid>
      <description>&lt;p&gt;Developers aim for clean, maintainable, and readable code, but sometimes readability drifts into verbosity and methods start sounding like bureaucratic procedures instead of executable instructions.&lt;/p&gt;

&lt;p&gt;What if, instead, you wrote code like a Klingon warrior? Klingons value strength, honor, and directness and their programming style would be the same: no wasted words, no unnecessary ceremony, and no tolerance for weakness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Philosophy of Klingon Code&lt;/strong&gt;&lt;br&gt;
&lt;u&gt;Klingon code is:&lt;/u&gt;&lt;br&gt;
• Aggressive: Methods act as commands, not suggestions.&lt;br&gt;
• Direct: No fluff, no politeness, just action.&lt;br&gt;
• Efficient: Do what must be done, nothing more.&lt;br&gt;
• Honorable: Fail loudly, succeed gloriously.&lt;br&gt;
A Klingon programmer does not write "PerformBattleAgainstEnemy"; they write "HIv", the Klingon word for "attack."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Standard C# vs. Klingon C#&lt;/strong&gt;&lt;br&gt;
Federation style: namby-pamby standard C# error handling. Softly check inputs, wrap dangerous operations in try/catch blocks, never let the system explode, and log issues politely.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ==========================
// Federation Style: Gentle cautious error handling
// ==========================
public class Warrior
{
  // Private fields for warrior's name and honor points
  private readonly string _name;
  private int _honor;

  // Constructor initializes warrior with a name
  // If the name is null or empty, assigns a safe default
  public Warrior(string name)
  {
    if (string.IsNullOrWhiteSpace(name))
    {
      Console.WriteLine("Warning: Name is empty. Assigning 'Unknown Warrior'.");
      _name = "Unknown Warrior";
    }
    else
    {
      _name = name;
    }

    _honor = 100; // Start honor at 100 points
  }

  // Battle method attempts to fight an enemy
  // Wraps execution in try/catch to prevent crashes
  public void Battle(string enemy)
  {
    try
    {
      // Check for invalid or empty enemy
      if (string.IsNullOrWhiteSpace(enemy))
      {
        Console.WriteLine("Warning: Cannot battle a void enemy.");
        return; // Skip this battle safely
      }

      // Perform the battle and increase honor
      Console.WriteLine($"{_name} battles {enemy}!");
      _honor += 10;
    }
    catch (Exception ex)
    {
      // Polite error reporting
      Console.WriteLine($"An unexpected error occurred during battle: {ex.Message}");
    }
  }

  // ShowHonor method displays current honor points
  public void ShowHonor()
  {
    try
    {
      Console.WriteLine($"{_name} has {_honor} honor points.");
    }
    catch (Exception ex)
    {
      // Polite error reporting
      Console.WriteLine($"Unable to display honor: {ex.Message}");
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Klingon-style error handling is direct and decisive. Klingons do not wrap operations in polite try/catch; they confront errors head-on. If something is wrong, the program stops, leaving a clear battle scar (an exception) for all to see. The code does not politely show honor. It declares it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ==========================
// Klingon Style: Direct, decisive error handling
// ==========================
public class tlhIngan
{
  // Fields for name and honor points
  string name;
  int honor = 100;

  // Constructor requires a valid name; throws exception if missing
  // Klingons do not tolerate cowardice or indecision
  public tlhIngan(string n)
  {
    if (string.IsNullOrWhiteSpace(n))
      throw new ArgumentException("A tlhIngan must have a name! Cowardice is not tolerated.");

    name = n;
  }

  // HIv method performs an attack on a target (jagh)
  // Throws immediately if the target is invalid
  public void HIv(string jagh)
  {
    if (string.IsNullOrWhiteSpace(jagh))
      throw new ArgumentException($"{name} refuses to HIv an empty target. Dishonor is unacceptable!");

    // Perform the attack and increase honor
    Console.WriteLine($"{name} HIv {jagh}!");
    honor += 10;
  }

  // quv method displays current honor
  public void quv()
  {
    Console.WriteLine($"{name} quv {honor}");
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Federation vs Klingon Coding Conventions&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Error Handling
• Federation: Gentle—uses warnings and try/catch to avoid crashes.
• Klingon: Aggressive—throws immediately; failures are loud and visible.&lt;/li&gt;
&lt;li&gt;Input Validation
• Federation: Substitutes safe defaults if input is missing or invalid.
• Klingon: Requires explicit, valid input; invalid input triggers immediate exceptions.&lt;/li&gt;
&lt;li&gt;Method Logic
• Federation: Safe, layered, cautious execution.
• Klingon: Direct, decisive; actions happen immediately.&lt;/li&gt;
&lt;li&gt;Variable and Method Naming
• Federation: Descriptive, friendly, easy to read.
• Klingon: Action-oriented and battle-themed (e.g., HIv, quv).&lt;/li&gt;
&lt;li&gt;Honor / Points Management
• Federation: Updated carefully; dishonor is handled politely.
• Klingon: Central to all actions; any dishonor halts execution.&lt;/li&gt;
&lt;li&gt;Output Messages
• Federation: Polite, informative, and explanatory.
• Klingon: Bold, concise, dramatic; announces every action clearly.&lt;/li&gt;
&lt;li&gt;Try/Catch Usage
• Federation: Extensive, for safety and stability.
• Klingon: Minimal; only at the top level to observe battle results.&lt;/li&gt;
&lt;li&gt;Initialization
• Federation: Allows defaults or safe substitutions.
• Klingon: Must be explicit; cowardice or missing data is unacceptable.&lt;/li&gt;
&lt;li&gt;Philosophy
• Federation: Preserve harmony, avoid crashing, be forgiving.
• Klingon: Confront errors, act boldly, and leave evidence of every battle.&lt;/li&gt;
&lt;/ol&gt;
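
&lt;p&gt;The try/catch convention deserves a concrete illustration. Under Klingon rules, at most one catch block exists, at the very top of the program, and only to display the battle scar before rethrowing. A minimal sketch using the tlhIngan class above (the warrior and target names are invented for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ==========================
// Top-level battle report: the ONLY try/catch a Klingon tolerates
// ==========================
try
{
  var warrior = new tlhIngan("Worf");
  warrior.HIv("Duras");  // a glorious attack; honor increases
  warrior.HIv("");       // an empty target; the exception flies
}
catch (ArgumentException ex)
{
  // The failure is shown for all to see, then rethrown so the scar remains
  Console.WriteLine($"Battle report: {ex.Message}");
  throw;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;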

&lt;p&gt;&lt;strong&gt;Klingon Developer’s Guide&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fail Loudly
• Principle: A Klingon program never hides dishonor.
• Implementation: Throw exceptions immediately on invalid input or actions.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (string.IsNullOrWhiteSpace(name))
    throw new ArgumentException("A tlhIngan must have a name! Cowardice is not tolerated.");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Direct, Decisive Logic
• Principle: Do not dance around problems. Code must act immediately.
• Implementation: Perform actions and enforce rules in a single, straightforward method.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void HIv(string jagh)
{
  if (string.IsNullOrWhiteSpace(jagh))
    throw new ArgumentException($"{name} refuses to HIv an empty target!");

  honor += 10; // Honor increases immediately
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;No Safety Wrappers
• Principle: Try/catch is cowardly. Let failure be obvious.
• Implementation: Allow exceptions to propagate; they leave evidence of the battle.&lt;/li&gt;
&lt;li&gt;Aggressive Naming
• Principle: Names reflect the action and the honor system.
o HIv → Attack enemy
o quv → Display honor
o honor → Track warrior honor points&lt;/li&gt;
&lt;li&gt;Honor is Central
• Principle: Every action affects honor. Dishonor stops execution.
• Implementation: Increment or decrement honor in every meaningful method.&lt;/li&gt;
&lt;li&gt;Explicit Initialization
• Principle: No defaults. Every "tlhIngan" must have a valid name and starting honor.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string name;
int honor = 100;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="7"&gt;
&lt;li&gt;Clear Concise Output
• Principle: Output is bold, descriptive, and battle-themed.
&lt;code&gt;Console.WriteLine($"{name} HIv {jagh}!"); // Clearly announces the attack&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Optional: Battle Logging
• Principle: Keep records of success and failure; each battle is a story.
• Implementation can include logging to console or file; nothing is hidden.&lt;/li&gt;
&lt;/ol&gt;
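
&lt;p&gt;The optional battle logging in the last point could be as small as one static class. This sketch is hypothetical (the BattleLog class and "battles.log" file name are not part of the original code, and it assumes using System and System.IO):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical battle log: every action is recorded; nothing is hidden
public static class BattleLog
{
  const string LogFile = "battles.log"; // assumed log location

  public static void Record(string entry)
  {
    string line = $"{DateTime.UtcNow:O} {entry}";
    Console.WriteLine(line);                                 // announce it now
    File.AppendAllText(LogFile, line + Environment.NewLine); // preserve the story
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;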

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Federation code preserves stability, avoids harm, and stays soft on errors. Klingon code confronts dishonor, punishes mistakes, and leaves battle scars. Klingon coding is all about clarity, honor, and decisiveness. Fail loudly, act boldly, and let your code reflect the warrior ethos. Writing C# like a Klingon is not about replacing English with Klingon vocabulary; it is about embracing the philosophy of strength and clarity. Code should not whisper its intent. It should shout it with honor - Qapla'!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j5jhoadl8hb6cskfale.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j5jhoadl8hb6cskfale.png" alt=" " width="517" height="776"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>csharp</category>
    </item>
    <item>
      <title>Designing a SOLID, Layered ASP.NET Core Solution for Patient Lookup</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Wed, 30 Jul 2025 12:45:42 +0000</pubDate>
      <link>https://dev.to/seanmdrew/designing-a-solid-layered-aspnet-core-solution-for-patient-lookup-33ji</link>
      <guid>https://dev.to/seanmdrew/designing-a-solid-layered-aspnet-core-solution-for-patient-lookup-33ji</guid>
      <description>&lt;p&gt;A simple patient lookup might not sound exciting, but it is a perfect chance to show how clean code and solid architecture make all the difference. This example uses ASP.NET Core and stored procedures to do just that. While the use case is straightforward, the architecture adheres to the SOLID principles, maintains separation of concerns, and follows clean code and enterprise-level structuring. This makes the project a practical example of how even simple features benefit from strong software design.&lt;/p&gt;

&lt;p&gt;Software development is not only about delivering functionality; it is about delivering maintainable, scalable, and testable solutions. Even in small applications or single-feature projects like a patient lookup, poor design decisions can lead to tight coupling, duplication, and fragile code that is difficult to evolve.&lt;/p&gt;

&lt;p&gt;The goal is to demonstrate how to take a simple use case, retrieving a patient by ID from a SQL Server database, and wrap it in a SOLID-compliant architecture using ASP.NET Core. The point is that enterprise-quality design is not just for complex systems. It brings clarity, reliability, and extensibility to any codebase, regardless of size.&lt;/p&gt;

&lt;p&gt;By isolating responsibilities into Models, Repositories, Services, and Web UI layers, you can ensure the system remains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy to understand and reason about&lt;/li&gt;
&lt;li&gt;Simple to test and mock&lt;/li&gt;
&lt;li&gt;Safe to extend or refactor&lt;/li&gt;
&lt;li&gt;Well-suited to real-world business logic growth&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even if the system starts small, this approach sets a strong foundation that avoids rewrites and future technical debt. It is not about overengineering, but about applying clean architectural principles from the beginning when they are cheapest and most effective to implement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goals&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean separation of concerns using N-Tier Architecture&lt;/li&gt;
&lt;li&gt;Full adherence to the SOLID principles&lt;/li&gt;
&lt;li&gt;Use of Dependency Injection, async patterns, and stored procedures&lt;/li&gt;
&lt;li&gt;A minimal but functional Web API and Razor Page frontend&lt;/li&gt;
&lt;li&gt;Centralized configuration and clean routing&lt;/li&gt;
&lt;li&gt;A project structure that scales with growth&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Recommended Solution Structure&lt;/strong&gt;&lt;br&gt;
This solution is organized into the following projects:&lt;/p&gt;

&lt;p&gt;PatientLookupSolution&lt;br&gt;
├── PatientLookup.Models        // Plain old CLR objects (POCOs)&lt;br&gt;
├── PatientLookup.Repositories  // SQL logic and ADO.NET (SqlClient)&lt;br&gt;
├── PatientLookup.Services      // Business logic&lt;br&gt;
└── PatientLookup.Web           // API and Razor UI&lt;/p&gt;

&lt;p&gt;Each layer only depends on the one below it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web depends on Services&lt;/li&gt;
&lt;li&gt;Services depends on Repositories&lt;/li&gt;
&lt;li&gt;Repositories depends on Models&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture ensures modularity, testability, and scalability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SOLID Breakdown&lt;/strong&gt;&lt;br&gt;
&lt;u&gt;Single Responsibility&lt;/u&gt;: Each class serves a single purpose: controller, service, repository, etc.&lt;br&gt;
&lt;u&gt;Open/Closed&lt;/u&gt;: You can extend behaviors (e.g., new services) without modifying existing code.&lt;br&gt;
&lt;u&gt;Liskov Substitution&lt;/u&gt;: Interfaces are respected; substituting mock services or test repositories is safe.&lt;br&gt;
&lt;u&gt;Interface Segregation&lt;/u&gt;: Interfaces are focused (e.g., IPatientService only exposes needed methods).&lt;br&gt;
&lt;u&gt;Dependency Inversion&lt;/u&gt;: Higher layers depend on abstractions (IPatientRepository), not implementations.&lt;/p&gt;
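
&lt;p&gt;To make the Open/Closed and Dependency Inversion points concrete: because callers depend on IPatientService rather than a concrete class, behavior can be extended by wrapping the abstraction instead of editing it. The CachedPatientService below is a hypothetical decorator, not part of the solution, and is deliberately simplified (it is not thread-safe and never expires entries; a production version would likely use IMemoryCache):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical caching decorator: adds behavior without modifying PatientService
public class CachedPatientService : IPatientService
{
  private readonly IPatientService _inner;
  private readonly Dictionary&amp;lt;int, Patient?&amp;gt; _cache = new();

  public CachedPatientService(IPatientService inner)
  {
    _inner = inner;
  }

  public async Task&amp;lt;Patient?&amp;gt; GetPatientByIdAsync(int id)
  {
    if (_cache.TryGetValue(id, out var cached))
      return cached; // cache hit: no database round trip

    var patient = await _inner.GetPatientByIdAsync(id);
    _cache[id] = patient; // note: caches misses (null) as well
    return patient;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Swapping the decorator in is a dependency injection registration change; no existing class needs to be modified.&lt;/p&gt;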

&lt;p&gt;&lt;strong&gt;Code Walkthrough&lt;/strong&gt;&lt;br&gt;
&lt;u&gt;Patient.cs (Model Layer)&lt;/u&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace PatientLookup.Models
{
  public class Patient
  {
    public int Id { get; set; }
    public string FirstName { get; set; } = string.Empty;
    public string LastName { get; set; } = string.Empty;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Just a plain class, with no attributes describing infrastructure concerns or other responsibilities.&lt;/li&gt;
&lt;li&gt;No data annotations, keeping the model agnostic of the UI and DB layers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;IPatientRepository&lt;/u&gt;&lt;br&gt;
This is the part of the application responsible for talking directly to the database in a controlled and consistent way. It defines the contract for how the rest of the system can ask for patient data without needing to know how that data is actually retrieved.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface IPatientRepository
{
  Task&amp;lt;Patient?&amp;gt; GetByIdAsync(int id);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Basically, it tells the system: "If you give me a patient ID, I will return the matching Patient object asynchronously."&lt;/p&gt;

&lt;p&gt;But it does not say how it does that. The actual logic of running the stored procedure and mapping the SQL result to a model is handled in the class that implements the IPatientRepository interface (PatientRepository).&lt;/p&gt;

&lt;p&gt;IPatientRepository defines a clear boundary between the application and the data layer. It is a small piece with a big impact when it comes to keeping the code clean, testable, and future-proof.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;PatientRepository (Repository Layer)&lt;/u&gt;&lt;br&gt;
PatientRepository is the implementation of IPatientRepository. It contains the actual logic for accessing the database by calling a stored procedure to fetch patient data. It focuses only on fetching data. No business logic. No UI code. This keeps the data layer clean, testable, and easy to change without breaking the rest of the system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class PatientRepository : IPatientRepository
{
  private readonly string _connectionString;

  public PatientRepository(IConfiguration config)
  {
    _connectionString = config.GetConnectionString("DefaultConnection")
      ?? throw new InvalidOperationException("Missing DefaultConnection string.");
  }

  public async Task&amp;lt;Patient?&amp;gt; GetByIdAsync(int id)
  {
    using var conn = new SqlConnection(_connectionString);
    using var cmd = new SqlCommand("cp_GetPatientById", conn)
    {
      CommandType = CommandType.StoredProcedure
    };
    cmd.Parameters.AddWithValue("@patid", id);
    await conn.OpenAsync();

    using var reader = await cmd.ExecuteReaderAsync();
    if (await reader.ReadAsync())
    {
      return new Patient
      {
        Id = reader.GetInt32(reader.GetOrdinal("Id")),
        FirstName = reader.GetString(reader.GetOrdinal("FirstName")),
        LastName = reader.GetString(reader.GetOrdinal("LastName"))
      };
    }

    return null;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Uses ADO.NET for performance and transparency.&lt;/li&gt;
&lt;li&gt;Clean separation from business logic.&lt;/li&gt;
&lt;li&gt;Uses a stored procedure for optimized and secure DB access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;IPatientService&lt;/u&gt;&lt;br&gt;
IPatientService defines the business-facing contract for how the application can retrieve patient information. It sits above the repository and lets higher layers (like the controller) request patient data without knowing anything about the data source. This abstraction makes the service layer testable, swappable, and cleanly separated from the database layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface IPatientService
{
  Task&amp;lt;Patient?&amp;gt; GetPatientByIdAsync(int id);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Acts as the middleman between the controller and the data layer, handling patient-related business logic.&lt;/li&gt;
&lt;li&gt;Offers a simple async method "GetPatientByIdAsync" to fetch a patient by their ID.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;PatientService (Service Layer)&lt;/u&gt;&lt;br&gt;
PatientService is the implementation of IPatientService. It calls the repository to fetch data and can apply business rules, validation, logging, or transformations before returning the result. Basically, it acts as a bridge between the controller and the repository, cleanly separating concerns and centralizing any logic beyond raw data access.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class PatientService : IPatientService
{
  private readonly IPatientRepository _repository;

  public PatientService(IPatientRepository repository)
  {
    _repository = repository;
  }

  public Task&amp;lt;Patient?&amp;gt; GetPatientByIdAsync(int id)
  {
    return _repository.GetByIdAsync(id);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;PatientController (Web API Layer)&lt;/u&gt;&lt;br&gt;
PatientController is the entry point for API requests. It handles incoming HTTP requests, delegates work to the service layer, and returns appropriate responses. In this case, it returns JSON (or a 404), while the Razor page below consumes the service directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Route("api/[controller]")]
[ApiController]
public class PatientController : ControllerBase
{
  private readonly IPatientService _service;

  public PatientController(IPatientService service)
  {
    _service = service;
  }

  [HttpGet("{id}")]
  public async Task&amp;lt;IActionResult&amp;gt; GetPatient(int id)
  {
    var patient = await _service.GetPatientByIdAsync(id);
    if (patient == null)
      return NotFound();

    return Ok(patient);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Validates incoming requests at a high level (e.g., route-level).&lt;/li&gt;
&lt;li&gt;Delegates the actual logic to the service layer.&lt;/li&gt;
&lt;li&gt;Returns a clean HTTP response (200 OK, 404 Not Found, etc.).&lt;/li&gt;
&lt;li&gt;It does not contain business or data logic.&lt;/li&gt;
&lt;li&gt;Minimal and clean controller.&lt;/li&gt;
&lt;li&gt;Relies only on the service interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Razor Page: PatientView.cshtml&lt;/u&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@page
@model PatientLookup.Web.Pages.PatientViewModel
@{
  ViewData["Title"] = "Patient Details";
}
&amp;lt;h2&amp;gt;Patient Details&amp;lt;/h2&amp;gt;
@if (Model.Patient != null)
{
  &amp;lt;div&amp;gt;
    &amp;lt;strong&amp;gt;ID:&amp;lt;/strong&amp;gt; @Model.Patient.Id&amp;lt;br /&amp;gt;
    &amp;lt;strong&amp;gt;Name:&amp;lt;/strong&amp;gt; @Model.Patient.FirstName @Model.Patient.LastName
  &amp;lt;/div&amp;gt;
}
else
{
  &amp;lt;div&amp;gt;Patient not found.&amp;lt;/div&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;Code-Behind: PatientView.cshtml.cs&lt;/u&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class PatientViewModel : PageModel
{
  private readonly IPatientService _service;

  public PatientViewModel(IPatientService service)
  {
    _service = service;
  }

  [BindProperty(SupportsGet = true)]
  public int Id { get; set; }

  public Patient? Patient { get; set; }

  public async Task&amp;lt;IActionResult&amp;gt; OnGetAsync()
  {
    Patient = await _service.GetPatientByIdAsync(Id);
    return Patient == null ? NotFound() : Page();
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Clean and simple frontend view.&lt;/li&gt;
&lt;li&gt;Demonstrates reuse of business logic via dependency injection in Razor Pages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Program.cs&lt;/u&gt;&lt;br&gt;
Program.cs is the glue that connects and configures all the architectural layers at startup. It is the application entry point in ASP.NET Core and configures the web host, registers services for dependency injection, defines how the application will run, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using PatientLookup.Repositories;
using PatientLookup.Services;
using System.Data;
using System.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);

// Load configuration
builder.Configuration.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);

// Register controllers, Razor Pages, and framework services
builder.Services.AddControllers();
builder.Services.AddRazorPages();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// Register custom services and repositories
builder.Services.AddScoped&amp;lt;IPatientRepository, PatientRepository&amp;gt;();
builder.Services.AddScoped&amp;lt;IPatientService, PatientService&amp;gt;();

// Register SQL connection for dependency injection
builder.Services.AddScoped&amp;lt;IDbConnection&amp;gt;(sp =&amp;gt;
  new SqlConnection(builder.Configuration.GetConnectionString("DefaultConnection")));

var app = builder.Build();

app.UseHttpsRedirection();
app.UseAuthorization();

app.MapControllers();
app.MapRazorPages();
app.Run();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;appsettings.json&lt;/u&gt; (environment based configuration)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ConnectionStrings": {
    "DefaultConnection": "Server=SQLServer;Database=YourDb;Trusted_Connection=True;"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Best Practice Highlights&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Stored Procedures: More secure, better for performance.&lt;/li&gt;
&lt;li&gt;Separation of Concerns: Data, business, and presentation logic are isolated.&lt;/li&gt;
&lt;li&gt;Thin Controllers: Controllers delegate logic to services.&lt;/li&gt;
&lt;li&gt;Interface-Driven Design: Easily testable and extensible.&lt;/li&gt;
&lt;li&gt;Async Everywhere: Improves scalability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;How all the Layers Work Together&lt;/u&gt;&lt;br&gt;
This architecture separates responsibilities across four key layers, each with a clear purpose:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Controller Layer&lt;br&gt;
Responsibility: Handles HTTP requests and returns responses.&lt;br&gt;
Example: PatientController&lt;br&gt;
Accepts GET /api/patient/123&lt;br&gt;
Calls IPatientService.GetPatientByIdAsync(123)&lt;br&gt;
Returns 200 OK with the patient object or 404 Not Found&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service Layer&lt;br&gt;
Responsibility: Contains business logic and orchestrates data access.&lt;br&gt;
Example: PatientService&lt;br&gt;
Validates input (e.g., patientId &amp;gt; 0)&lt;br&gt;
Calls repository: IPatientRepository.GetByIdAsync()&lt;br&gt;
Applies any business rules before returning data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repository Layer&lt;br&gt;
Responsibility: Communicates with the database (via stored procedures or queries).&lt;br&gt;
Example: PatientRepository&lt;br&gt;
Executes cp_GetPatientById&lt;br&gt;
Maps SqlDataReader result to a Patient model&lt;br&gt;
Returns the data to the service layer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Model Layer&lt;br&gt;
Responsibility: Defines the shape of your data objects.&lt;br&gt;
Example: Patient class&lt;br&gt;
Represents patient data across layers&lt;br&gt;
Keeps your code strongly typed and structured&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;u&gt;Separation of Concerns&lt;/u&gt;: Each layer does one thing and does it well&lt;br&gt;
&lt;u&gt;Testability&lt;/u&gt;: You can mock or stub any layer during unit testing&lt;br&gt;
&lt;u&gt;Maintainability&lt;/u&gt;: Changes in one layer (e.g., database schema) do not break the rest&lt;br&gt;
&lt;u&gt;Scalability&lt;/u&gt;: Easily extend logic (e.g., add caching or logging) without disrupting the design&lt;/p&gt;
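
&lt;p&gt;To make the testability claim concrete, here is a sketch of an xUnit-style unit test. The stub class and test names are invented for illustration; the point is that PatientService can be exercised with no database, controller, or UI involved:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical xUnit tests: the repository layer is stubbed out entirely
public class StubPatientRepository : IPatientRepository
{
  public Task&amp;lt;Patient?&amp;gt; GetByIdAsync(int id) =&amp;gt;
    Task.FromResult&amp;lt;Patient?&amp;gt;(id == 1
      ? new Patient { Id = 1, FirstName = "Test", LastName = "Patient" }
      : null);
}

public class PatientServiceTests
{
  [Fact]
  public async Task Returns_patient_when_id_exists()
  {
    var service = new PatientService(new StubPatientRepository());
    var patient = await service.GetPatientByIdAsync(1);
    Assert.Equal("Test", patient?.FirstName);
  }

  [Fact]
  public async Task Returns_null_when_id_is_unknown()
  {
    var service = new PatientService(new StubPatientRepository());
    Assert.Null(await service.GetPatientByIdAsync(42));
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;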

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This example illustrates how clean, scalable architecture is not limited to large enterprise projects. By applying the SOLID principles, structuring with layered boundaries, and embracing clean code practices, you create software that is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy to maintain&lt;/li&gt;
&lt;li&gt;Easy to extend&lt;/li&gt;
&lt;li&gt;Easier to onboard new developers&lt;/li&gt;
&lt;li&gt;Ready for scaling into production&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>csharp</category>
    </item>
    <item>
      <title>Refactoring Repetitive Model Validation in ASP.NET Core</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Wed, 16 Jul 2025 14:55:23 +0000</pubDate>
      <link>https://dev.to/seanmdrew/refactoring-repetitive-model-validation-in-aspnet-core-2hb7</link>
      <guid>https://dev.to/seanmdrew/refactoring-repetitive-model-validation-in-aspnet-core-2hb7</guid>
      <description>&lt;p&gt;In one of my ASP.NET Core APIs, I was working with [FromBody] models that had a number of required fields, some were nullable integers, others were strings that needed to be non-empty. Initially, I handled validation directly in the controller action with a series of repetitive if statements like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (!model.patientId!.HasValue || (int)model.patientId! == 0)
{ return new ContentResult() { Content = "patientId is a required parameter", StatusCode = 400 }; }

if (string.IsNullOrEmpty(model.ndcnumber))
{ return new ContentResult() { Content = "ndcnumber is a required parameter", StatusCode = 400 }; }

if (!model.qty!.HasValue || (int)model.qty! &amp;lt;= 0) 
{ return new ContentResult() { Content = "qty is a required parameter", StatusCode = 400 }; }

// ... and so on for every field
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach worked well for what I was doing at the time, but it became messy very quickly, especially as I added more fields and repeated the same pattern in multiple actions and across different controllers. It was hard to read, error-prone to update, and encouraged copy/paste over clean reuse.&lt;/p&gt;

&lt;p&gt;To clean this up, I decided to refactor the common validation checks into a dedicated "ValidationHelper" class. This utility would let me validate nullable value types and strings in a reusable and centralized way.&lt;/p&gt;

&lt;p&gt;In this post, I show a simple refactor I made to clean up these types of repetitive validation "if" statements in my controller actions by introducing a lightweight ValidationHelper. The result is clearer code, consistent error handling, and much better maintainability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refactoring with a Validation Helper&lt;/strong&gt;&lt;br&gt;
Basically, a helper class is a fancy way to describe a separate CS file where I put reusable code like small utility functions that I can call from my controllers (or anywhere else). It keeps things clean and avoids repeating the same logic in every method. So, instead of cluttering my controllers with repetitive checks (like verifying required fields), I just put those checks in a helper class and call them as needed. It makes the code easier to read and maintain.&lt;/p&gt;

&lt;p&gt;This helper lives in its own file, "/Helpers/ValidationHelper.cs", and I keep it in the shared "MyAPI.Helpers" namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// simple helper method for validating required model fields in ASP.NET Core controllers.
public static class ValidationHelper
{
  // Validate a nullable value type is not null and not its default value (e.g., 0 for int, false for bool).
  // &amp;lt;typeparam name="T"&amp;gt;A struct type (e.g., int, bool, DateTime).&amp;lt;/typeparam&amp;gt;
  // &amp;lt;param name="value"&amp;gt;The value to validate.&amp;lt;/param&amp;gt;
  // &amp;lt;param name="name"&amp;gt;The name of the parameter (used in the error message).&amp;lt;/param&amp;gt;
  // Returns ContentResult with a 400 status if the value is missing or default; otherwise, null.
  public static ContentResult? Required&amp;lt;T&amp;gt;(T? value, string name) where T : struct
  {
    // Check if the value is null or equals the default value for its type (e.g., 0 for int).
    if (!value.HasValue || EqualityComparer&amp;lt;T&amp;gt;.Default.Equals(value.Value, default))
    {
      return new ContentResult
      {
        Content = $"{name} is a required parameter",
        StatusCode = 400
      };
    }

    // If valid, return null to indicate no validation error.
    return null;
  }

  // Validate a string is not null, empty, or whitespace.
  // &amp;lt;param name="value"&amp;gt;The string value to validate.&amp;lt;/param&amp;gt;
  // &amp;lt;param name="name"&amp;gt;The name of the parameter (used in the error message).&amp;lt;/param&amp;gt;
  // Returns ContentResult with a 400 status if the string is null or empty; otherwise, null.
  public static ContentResult? Required(string? value, string name)
  {
    // Check if the string is null, empty, or whitespace-only.
    if (string.IsNullOrWhiteSpace(value))
    {
      return new ContentResult
      {
        Content = $"{name} is a required parameter",
        StatusCode = 400
      };
    }

    // If valid, return null to indicate no validation error.
    return null;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;A Sample of My Model&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class MySampleModel
{
  [Required] public int? patientId { get; set; }
  [Required] public string? ndcnumber { get; set; } = string.Empty;
  [Required] public int? qty { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sample Usage in my Controller&lt;/strong&gt;&lt;br&gt;
Now, instead of repeating lots of "if" validation logic in every action, I simply do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
[Route("[controller]")]
public class MyController : ControllerBase
{
  private ContentResult? _validationResult;

  [HttpPost("DoSomethingEndpoint")]
  public IActionResult DoSomethingEndpoint([FromBody] MySampleModel model)
  {
    // Use ValidationHelper to check required fields
    if ((_validationResult = ValidationHelper.Required(model.patientId, "patientId")) != null) return _validationResult;
    if ((_validationResult = ValidationHelper.Required(model.ndcnumber, "ndcnumber")) != null) return _validationResult;
    if ((_validationResult = ValidationHelper.Required(model.qty, "qty")) != null) return _validationResult;

    // if all fields are valid, proceed with processing
    return Ok("Request accepted");
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Controller-Wide Result Field&lt;/strong&gt;&lt;br&gt;
To avoid declaring the same "ContentResult? result;" variable in every action, I made it a private field on the controller, like this:&lt;br&gt;
&lt;code&gt;private ContentResult? _validationResult;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then I reused it across all of my methods.&lt;br&gt;
&lt;code&gt;if ((_validationResult = ValidationHelper.Required(model.qty, "qty")) != null) return _validationResult;&lt;/code&gt;&lt;br&gt;
Because ASP.NET Core creates a new instance of the controller per request, this pattern is safe and avoids unnecessary duplication while keeping the logic tight and readable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Is Better&lt;/strong&gt;&lt;br&gt;
Switching to a helper-based approach gave me several benefits:&lt;br&gt;
• Cleaner, reusable validation logic: the core checks live in one place and are reused everywhere.&lt;br&gt;
• Consistent error formatting: validation lines are short, clear, and uniform.&lt;br&gt;
• Easier maintenance: changes to the validation rules only require updates in one spot, and the pattern is easy to extend to other models.&lt;br&gt;
• Cleaner controller actions: no external validation libraries required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This refactor started as a way to clean up clutter in a single action, but it quickly became a general pattern I now apply across my API projects.&lt;/p&gt;

&lt;p&gt;Also, for more complex validation rules, like conditional logic (e.g., field B is required if field A is X), I can extend this pattern or supplement it with FluentValidation. If you find yourself writing the same validation statements over and over, give a helper like this a try. It might be all you need to make your codebase more elegant and maintainable.&lt;/p&gt;
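&lt;p&gt;As a rough illustration of that extension point, a conditional overload could be added to the helper. This is only a sketch: the "RequiredIf" name and the condition parameter are hypothetical, not part of the helper shown above.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical extension: validate a string only when a condition holds
// (e.g., field B is required only when field A has a value).
public static ContentResult? RequiredIf(bool condition, string? value, string name)
{
  // When the condition is false, the field is optional and always passes.
  if (!condition) return null;

  // Otherwise reuse the standard required-string check.
  return Required(value, name);
}

// Sample usage in an action:
// if ((_validationResult = ValidationHelper.RequiredIf(model.qty.HasValue, model.ndcnumber, "ndcnumber")) != null)
//   return _validationResult;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;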

</description>
    </item>
    <item>
      <title>Transitioning to JSON: Supporting Both Body and Query Parameters in ASP.NET Core</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Mon, 14 Jul 2025 12:41:59 +0000</pubDate>
      <link>https://dev.to/seanmdrew/transitioning-to-json-supporting-both-body-and-query-parameters-in-aspnet-core-374b</link>
      <guid>https://dev.to/seanmdrew/transitioning-to-json-supporting-both-body-and-query-parameters-in-aspnet-core-374b</guid>
      <description>&lt;p&gt;&lt;strong&gt;How to Support Both JSON Body and Query Parameters in ASP.NET Core APIs Without the Headaches&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Recently, I was tasked with updating older POST endpoints to support [FromBody] input instead of the traditional [FromQuery] model. As part of this transition, I needed to support both [FromQuery] and [FromBody] inputs for backward compatibility until all clients can be updated. This approach allows existing clients to continue functioning without interruption, while new clients benefit from cleaner, structured JSON.&lt;/p&gt;

&lt;p&gt;While this pattern is helpful for POST or PUT endpoints where the client is sending data, it is not intended for GET requests. In GET requests the input should remain as route or query parameters and not the request body.&lt;/p&gt;

&lt;p&gt;Sounds simple, right? But in practice, ASP.NET Core’s model binding and validation pipeline is not designed to easily accept both [FromBody] and [FromQuery] inputs for the same model. You typically need to choose one source of input (not both).&lt;/p&gt;

&lt;p&gt;In this write-up I go through a (relatively) reusable way to support both input models without introducing unnecessary complexity or breaking existing client functionality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;br&gt;
Allow my ASP.NET Web API POST/PUT endpoints to accept parameters either through:&lt;br&gt;
• JSON body (using [FromBody])&lt;br&gt;
• Query string (using [FromQuery])&lt;br&gt;
The logic is:&lt;br&gt;
• If the body exists and is valid then use it instead of query parameters.&lt;br&gt;
• If the body is null or invalid then look at query parameters.&lt;br&gt;
• If the query parameters are null or invalid then re-check body as fallback.&lt;br&gt;
• If neither exist or are invalid then return a validation error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;&lt;br&gt;
• If I decorate my action parameters with just [FromBody], then clients must send JSON. If they do not, they will see errors like “Unsupported Media Type” or model validation failures because the body is missing. I am doing this to help with migration so existing clients do not have to change right away and to support a smooth transition period.&lt;br&gt;
• Mixing [FromBody] and [FromQuery] on the same action parameters does not work particularly well and could throw unexpected validation failures.&lt;br&gt;
• I need to move to a JSON body input but fallback gracefully to query parameters if the body is missing or invalid, especially for existing clients.&lt;br&gt;
• I want validation to “just work” with either input method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Solution is an Extension Method That Takes Care of Both Input Types&lt;/strong&gt;&lt;br&gt;
Here is how I approached it:&lt;/p&gt;

&lt;p&gt;• I wrote a helper method that first tries to read and validate my model from the JSON body. A helper method is a fancy way of saying code (a method) that lives in a class file, which helps to simplify and centralize logic that I might otherwise duplicate across multiple places.&lt;/p&gt;

&lt;p&gt;• If that does not work (maybe because the client sent no body or the JSON is invalid), it then tries to build and validate the model from query parameters instead.&lt;/p&gt;

&lt;p&gt;• The plan is to end up with one valid model object but it is also possible that neither input works.&lt;/p&gt;

&lt;p&gt;• I call this helper from my controller action. If it returns a valid model, preferably from the JSON body (which is the goal here), I continue with my business logic. If it does not, I return a validation error to the caller.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Does That Look Like?&lt;/strong&gt;&lt;br&gt;
I added a new class file named HttpRequestExtensions.cs to my project.&lt;/p&gt;

&lt;p&gt;The file contains an extension method for my ASP.NET controllers that helps seamlessly retrieve and validate a model from either the JSON request body ([FromBody]) or the URL query parameters ([FromQuery]). In a nutshell:&lt;br&gt;
• It reads and parses the JSON body first, validating the model if present.&lt;br&gt;
• If the body is missing or invalid, it then attempts to build and validate the model from the query string parameters.&lt;br&gt;
• It returns the first valid model it finds, or null if neither source is valid.&lt;/p&gt;

&lt;p&gt;This simplifies my controller code by centralizing the logic for supporting both input styles, making migration smoother and keeping the endpoint implementation clean and consistent.&lt;/p&gt;

&lt;p&gt;Here is what the HttpRequestExtensions.cs file extension method looks like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// This static class provides an extension method for Controller to support input binding
// from either JSON body ([FromBody]) or query string ([FromQuery]).
public static class HttpRequestExtensions
{
  // Generic method to try binding a model of type T from the body or query.
  // T must be a class with a parameterless constructor.
  public static async Task&amp;lt;T?&amp;gt; TryGetModelFromBodyOrQueryAsync&amp;lt;T&amp;gt;(this Controller controller) where T : class, new()
  {
    var request = controller.HttpContext.Request;

    // --- STEP 1: Try to bind from JSON body if content exists and is of type application/json ---
    if (request.ContentLength &amp;gt; 0 &amp;amp;&amp;amp; 
        request.ContentType?.Contains("application/json", StringComparison.OrdinalIgnoreCase) == true)
    {
      try
      {
        // Attempt to deserialize the JSON body into the model
        var bodyModel = await request.ReadFromJsonAsync&amp;lt;T&amp;gt;();

        // If deserialization is successful and model passes validation, return it
        if (bodyModel != null &amp;amp;&amp;amp; controller.TryValidateModel(bodyModel)) 
          return bodyModel;
      }
      catch
      {
        // Ignore JSON parsing errors and fall through to try query string
      }
    }

    // --- STEP 2: Fallback to building the model from query string parameters ---
    var queryModel = new T(); // Create a new instance of T
    var props = typeof(T).GetProperties(); // Reflectively get all public properties

    foreach (var prop in props)
    {
      var key = prop.Name;

      // Skip if the query string does not contain the key
      if (!request.Query.ContainsKey(key)) continue;

      var value = request.Query[key].ToString();

      // Skip if value is empty or whitespace
      if (string.IsNullOrWhiteSpace(value)) continue;

      try
      {
        // Determine the correct type, handling nullable types
        var targetType = Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType;

        // Convert the string query value to the correct property type
        var convertedValue = Convert.ChangeType(value, targetType, CultureInfo.InvariantCulture);

        // Assign the converted value to the property
        prop.SetValue(queryModel, convertedValue);
      }
      catch
      {
        // Ignore conversion errors (e.g., invalid format); leave default value
      }
    }

    // Return the query-bound model if it passes validation; otherwise return null
    return controller.TryValidateModel(queryModel) ? queryModel : null;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How I use it in my controller&lt;/strong&gt;&lt;br&gt;
I replaced the existing model-binding code in my controller with this for each of the actions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var model = await this.TryGetModelFromBodyOrQueryAsync&amp;lt;MyModelType&amp;gt;();
if (model == null)
    return ValidationProblem(ModelState);

// Proceed with my normal business logic using the valid model
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why Is This Awesome?&lt;/strong&gt;&lt;br&gt;
• Clean &amp;amp; simple controller actions with no messy manual parameter parsing.&lt;br&gt;
• Supports both old clients (query string) and new clients (JSON body).&lt;br&gt;
• Graceful fallback logic. JSON body is preferred but will fall back to query parameters (or error out).&lt;br&gt;
• It is reusable and generic: it works with any model class that has public properties and a parameterless constructor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is This Method a Good Approach?&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;It depends.&lt;/strong&gt; Supporting both [FromBody] and [FromQuery] is not ideal for long-term API design, but it is a practical and effective short-term solution when transitioning existing endpoints from query parameters to JSON body input without breaking existing clients. It is a good fit if you:&lt;br&gt;
• Have clients relying on query parameters.&lt;br&gt;
• Want a clean and reusable way to handle dual input methods (JSON and query parameters).&lt;br&gt;
• Want validation to work seamlessly for either input method.&lt;br&gt;
• Want to avoid complex conditional code scattered across your controllers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros of This Approach&lt;/strong&gt;&lt;br&gt;
• Backwards compatibility: Existing clients keep working while new clients use JSON.&lt;br&gt;
• Future-proof: Existing clients can switch to JSON body without changing server logic.&lt;br&gt;
• Fewer Breaking Changes: Avoids forcing clients to rewrite request logic.&lt;br&gt;
• Clean controller code: Single line to get your model, no manual parameter parsing.&lt;br&gt;
• Generic and reusable: Can be used with any defined model class.&lt;br&gt;
• Graceful fallback: Prefers JSON body but falls back to query parameters if body is missing or invalid.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cons of This Approach&lt;/strong&gt;&lt;br&gt;
• Increased complexity: The API endpoint is no longer truly RESTful in a pure sense because it supports two very different input mechanisms simultaneously.&lt;br&gt;
• Not idiomatic: ASP.NET Core expects one clear model-binding path (e.g., [FromBody] or [FromQuery]). Supporting both circumvents the framework’s intended design.&lt;br&gt;
• Potential maintenance burden: you are essentially supporting two versions of input in one endpoint.&lt;br&gt;
• Validation complexity: you now have to validate two sources and resolve conflicts cleanly.&lt;br&gt;
• Performance overhead: minor cost from parsing and validating twice per request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is There a Better Way?&lt;/strong&gt;&lt;br&gt;
Of course, there always is.&lt;br&gt;
• Separate endpoints: Keep old endpoints accepting query parameters for legacy, and new ones accepting JSON body, rather than mixing both in one action. Create one endpoint that supports query parameters (/v1/endpoint) and another that only accepts a JSON body (/v2/endpoint). This is clean, versioned, and explicit.&lt;br&gt;
• Custom model binders: Implement a custom model binder that can elegantly handle both query and body input, but this requires more advanced ASP.NET Core expertise.&lt;br&gt;
• API Gateway or Middleware: Use an API gateway or middleware that transforms query parameters into JSON bodies before the request reaches your controller.&lt;br&gt;
• Deprecate Gradually: Add support for JSON body now, log usage, and phase out query parameter support over time and eventually simplify the codebase to expect only one model-binding source (JSON).&lt;/p&gt;
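&lt;p&gt;For the separate-endpoints option, a minimal sketch of the versioned split might look like this (the route names and "MyModelType" are placeholders, not my actual endpoints):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
public class MyController : ControllerBase
{
  // v1: legacy clients keep sending query parameters.
  [HttpPost("v1/endpoint")]
  public IActionResult EndpointV1([FromQuery] MyModelType model)
  {
    if (!ModelState.IsValid) return ValidationProblem(ModelState);
    return Ok("Request accepted");
  }

  // v2: new clients send a JSON body only.
  [HttpPost("v2/endpoint")]
  public IActionResult EndpointV2([FromBody] MyModelType model)
  {
    if (!ModelState.IsValid) return ValidationProblem(ModelState);
    return Ok("Request accepted");
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;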

&lt;p&gt;&lt;strong&gt;Is Supporting Both JSON Body and Query Parameters a NoNo?&lt;/strong&gt;&lt;br&gt;
• Although not a strict “NoNo,” it is generally discouraged for clarity and maintainability reasons.&lt;br&gt;
• APIs should ideally be explicit and consistent in how they accept input; mixing sources can confuse both clients and maintainers.&lt;br&gt;
• During transitions or migrations supporting both input styles is a reasonable and often necessary compromise.&lt;br&gt;
• Keep the logic simple and well documented if supporting both input methods.&lt;br&gt;
• Ultimately, the best practice is to have a clear contract (either query parameters or JSON body).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I Chose to Do This&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Transition Period During API Modernization
I am modernizing older API endpoints that originally only accepted query parameters, but now need to support more expressive JSON payloads.
Instead of requiring all clients to switch immediately, I am allowing both input methods temporarily to maintain backward compatibility.
This approach ensures that existing clients keep working without forcing major changes on them.&lt;/li&gt;
&lt;li&gt;Convenience for Testing and Prototyping
When I am testing endpoints in Postman it is easier to pass a few parameters through the URL for quick tests. This is especially helpful during development when the payload is small or simple.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;When to Avoid&lt;/strong&gt;&lt;br&gt;
• Building a new API from scratch.&lt;br&gt;
• For clean consistent POST/PUT API contracts.&lt;br&gt;
• Prioritize long-term maintainability over short-term flexibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How easy is it to implement?
• This approach takes a bit of setup since it uses a helper method.
• Other options might be trickier, like writing a custom model binder.&lt;/li&gt;
&lt;li&gt;How good is it for backwards compatibility?
• This is one of the biggest strengths of this method: it lets old clients keep working without changes.
• Other approaches might need separate versioned endpoints.&lt;/li&gt;
&lt;li&gt;How clear and maintainable is the code?
• Mixing query and body input adds a little complexity.
• Alternatives that use one clear input model are easier to read and maintain.&lt;/li&gt;
&lt;li&gt;Does validation still work well?
• Yep. FluentValidation works fine with this method.
• Same goes for most alternatives.&lt;/li&gt;
&lt;li&gt;Is this how ASP.NET Core is “supposed” to be used?
• Not exactly. It bends the usual model binding rules a bit.
• Alternatives tend to stick closer to how the framework was designed to work.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
This approach helps me keep the API backwards compatible without cluttering up the controller logic. It is a practical middle ground while I migrate endpoints, especially since I still need to support some legacy clients for now.&lt;/p&gt;

&lt;p&gt;Once I am ready to drop query parameters entirely, I can just update the action signatures and remove the fallback helper with no major refactoring needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom Line&lt;/strong&gt;&lt;br&gt;
• In the short term, I am okay supporting both [FromQuery] and [FromBody], as I am doing this intentionally with clear fallback rules and proper validation.&lt;br&gt;
• For the long term, the plan is to eventually fully migrate to [FromBody] because it is cleaner and aligns better with modern, RESTful API design. Eventually I will deprecate [FromQuery] for POST and PUT requests.&lt;br&gt;
• As a transition strategy my goal is to move each existing endpoint to JSON body input when the time is right.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>SQL Server Stored Procedure Design for Flexible Record Lookup</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Thu, 10 Jul 2025 20:49:15 +0000</pubDate>
      <link>https://dev.to/seanmdrew/sql-server-stored-procedure-design-for-flexible-record-lookup-2h97</link>
      <guid>https://dev.to/seanmdrew/sql-server-stored-procedure-design-for-flexible-record-lookup-2h97</guid>
      <description>&lt;p&gt;&lt;strong&gt;Using CTEs and JSON to Support Multiple Matching Strategies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When working with data I often run into situations where the available information varies. Sometimes I get a clean patientid, which makes the lookup easy. Other times, all I have is a date of birth and ZIP code or even just an invoice number from billing.&lt;/p&gt;

&lt;p&gt;Instead of creating a separate stored procedure for each case or writing messy dynamic SQL, I prefer a cleaner approach. I design one flexible stored procedure that can handle all of these lookup paths while keeping my logic organized which makes the code easier to maintain and helps my system adapt to whatever input it receives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What This Procedure Supports&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lookups by @patientid&lt;/li&gt;
&lt;li&gt;Lookups by @dob and @zip&lt;/li&gt;
&lt;li&gt;Lookups by @invoice_no&lt;/li&gt;
&lt;li&gt;Optional parameters with fallback matching&lt;/li&gt;
&lt;li&gt;Clean modular structure using Common Table Expressions (CTEs)&lt;/li&gt;
&lt;li&gt;Output in structured JSON, ideal for API responses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why Flexible Lookup Matters&lt;/strong&gt;&lt;br&gt;
I rarely get the same kind of data twice. One part of the system might send me a patientid while another might only have a date of birth and ZIP code or just an invoice number.&lt;/p&gt;

&lt;p&gt;Here is how it usually breaks down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: API call with internal ID. &lt;u&gt;Inputs I Get&lt;/u&gt;: @patientid&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: Registration. &lt;u&gt;Inputs I Get&lt;/u&gt;: @dob, @zip&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: Billing or payment inquiry. &lt;u&gt;Inputs I Get&lt;/u&gt;: @invoice_no&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That kind of variety means I need a lookup approach that can adapt. Building the flexible matching logic into one stored procedure, instead of writing a separate procedure for each case, keeps things consistent, avoids duplication, and handles more situations with less code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How It Works&lt;/strong&gt;&lt;br&gt;
The stored procedure follows a clear order of evaluation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: @patientid provided. &lt;u&gt;What Happens&lt;/u&gt;: Direct match&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: @dob + @zip provided. &lt;u&gt;What Happens&lt;/u&gt;: Match via demographics&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: @invoice_no provided. &lt;u&gt;What Happens&lt;/u&gt;: Match via invoice number&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: Multiple match types provided. &lt;u&gt;What Happens&lt;/u&gt;: Uses the first match in priority order&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: No match. &lt;u&gt;What Happens&lt;/u&gt;: Returns empty JSON&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Scenario&lt;/u&gt;: Any error. &lt;u&gt;What Happens&lt;/u&gt;: Returns a structured JSON error&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why Use CTEs?&lt;/strong&gt;&lt;br&gt;
Common Table Expressions (CTEs) help keep the matching logic clean, modular, and easy to extend. Each match strategy goes into its own named block:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each strategy is easy to isolate and test&lt;/li&gt;
&lt;li&gt;Logic is easier to read and debug&lt;/li&gt;
&lt;li&gt;Maintenance is simpler when business rules change&lt;/li&gt;
&lt;li&gt;Match types can be reordered or extended without affecting the others&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stored Procedure Walkthrough&lt;/strong&gt;&lt;br&gt;
This is a simplified version of the procedure that covers all three match types:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use [thedb]
go

drop procedure if exists GetPatientSumInfo
go

set ansi_nulls on
go

set quoted_identifier on
go

create procedure GetPatientSumInfo
  @patientid int = null,
  @dob date = null,
  @zip varchar(20) = null,
  @invoice_no varchar(50) = null
as
begin
  set nocount on;

  begin try

    ;with patidmatch as (
      select patientid
      from patinfo
      where patientid = @patientid
    ),
    demographicmatch as (
      select p.patientid
      from patinfo as p
      join patinfo_addr as pa on p.patientid = pa.patientid
      where
        @patientid is null and
        @dob is not null and
        @zip is not null and
        p.dob = @dob and
        pa.zip = @zip
    ),
    invoicematch as (
      select distinct p.patientid
      from orderinfo as o
      join patinfo as p on o.patientid = p.patientid
      where
        @patientid is null and
        @invoice_no is not null and
        o.invnum = @invoice_no
    ),
    allmatches as (
      select patientid, 1 as matchpriority from patidmatch
      union
      select patientid, 2 from demographicmatch
      union
      select patientid, 3 from invoicematch
    ),
    matchedpatient as (
      -- prefer direct id matches, then demographics, then invoice
      select top 1 patientid
      from allmatches
      order by matchpriority, patientid
    )
    select
      p.patientid,
      p.first_name,
      p.last_name,
      p.dob,
      (
        select
          pa.addr_id,
          pa.street,
          pa.city,
          pa.state,
          pa.zip
        from patinfo_addr as pa
        where pa.patientid = p.patientid
        for json path
      ) as address
    from matchedpatient as m
    join patinfo p on p.patientid = m.patientid
    for json path, without_array_wrapper;

  end try
  begin catch
    select
      error_number() as error_number,
      error_message() as error_message,
      error_line() as error_line,
      error_procedure() as error_procedure
    for json path, without_array_wrapper;
  end catch
end

go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sample Usage&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;exec GetPatientSumInfo @dob = '1980-05-01', @zip = '12345'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Output&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "patientid": 123,
  "first_name": "John",
  "last_name": "Doe",
  "dob": "1980-05-01",
  "address": [
    {
      "addr_id": 789,
      "street": "123 Main St",
      "city": "Springfield",
      "state": "IL",
      "zip": "12345"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Error Handling&lt;/strong&gt;&lt;br&gt;
If any unexpected error happens (e.g., bad join, null issue, query failure, etc.), the CATCH block will return a JSON response like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "error_number": 208,
  "error_message": "Invalid object name 'patinfo'.",
  "error_line": 7,
  "error_procedure": "GetPatientSumInfo"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is great for logging or for passing through to an API that displays a friendly error message to the caller.&lt;/p&gt;
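&lt;p&gt;For context, here is a sketch of how an API caller might execute the procedure and pass the JSON through. It assumes Microsoft.Data.SqlClient, and the method name and connection string are placeholders, not part of my actual codebase.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical caller: run GetPatientSumInfo and return its JSON output as-is.
public static async Task&amp;lt;string?&amp;gt; GetPatientSumInfoJsonAsync(string connectionString, DateTime dob, string zip)
{
  await using var conn = new SqlConnection(connectionString);
  await using var cmd = new SqlCommand("GetPatientSumInfo", conn)
  {
    CommandType = CommandType.StoredProcedure
  };
  cmd.Parameters.AddWithValue("@dob", dob);
  cmd.Parameters.AddWithValue("@zip", zip);

  await conn.OpenAsync();

  // FOR JSON returns its result as a single-column rowset that may be split
  // into chunks, so concatenate all rows before returning.
  var sb = new StringBuilder();
  await using var reader = await cmd.ExecuteReaderAsync();
  while (await reader.ReadAsync())
    sb.Append(reader.GetString(0));

  return sb.Length &amp;gt; 0 ? sb.ToString() : null;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;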

&lt;p&gt;&lt;strong&gt;Performance Comparison&lt;/strong&gt;&lt;br&gt;
I like using the CTEs + UNION approach because it keeps the code clean and easy to maintain, but it is not always the fastest option, especially in systems with a lot of traffic or big datasets.&lt;/p&gt;

&lt;p&gt;Depending on the situation, I sometimes consider other approaches. Here is a quick comparison I use when deciding what fits best:&lt;/p&gt;

&lt;p&gt;&lt;u&gt;CTEs + UNION&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance: Medium&lt;/li&gt;
&lt;li&gt;Maintainability: Excellent&lt;/li&gt;
&lt;li&gt;Best Use: Clean modular logic with de-duplication&lt;/li&gt;
&lt;li&gt;Notes: Extra cost for de-dupe&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;CTEs + UNION ALL&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance: Good&lt;/li&gt;
&lt;li&gt;Maintainability: Excellent&lt;/li&gt;
&lt;li&gt;Best Use: Clean logic when overlap is impossible&lt;/li&gt;
&lt;li&gt;Notes: Slightly faster if safe to use&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Single SELECT + OR conditions&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance: Fastest&lt;/li&gt;
&lt;li&gt;Maintainability: Moderate&lt;/li&gt;
&lt;li&gt;Best Use: High-performance environments&lt;/li&gt;
&lt;li&gt;Notes: Harder to maintain and scale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Temp table / Table variable&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance: Good&lt;/li&gt;
&lt;li&gt;Maintainability: Moderate&lt;/li&gt;
&lt;li&gt;Best Use: Multi-step logic or complex joins&lt;/li&gt;
&lt;li&gt;Notes: Good for control, indexing flexibility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Indexed view&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance: Very Fast&lt;/li&gt;
&lt;li&gt;Maintainability: Hard to manage&lt;/li&gt;
&lt;li&gt;Best Use: High-volume repeat queries&lt;/li&gt;
&lt;li&gt;Notes: Overhead on inserts, complex setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
This kind of flexible stored procedure has worked really well for me in situations where I need to match records using different types of information. By combining optional parameters, CTEs, and JSON output, I get something that is clean, modern, and easy to connect to an API.&lt;/p&gt;

&lt;p&gt;If performance ever becomes a concern, there are some good ways to optimize without throwing out the whole design. I can switch from UNION to UNION ALL if I know the match paths are mutually exclusive. I can also cache commonly used reference data in the application layer, or preload smaller datasets into temp tables inside the procedure. And in heavier systems, I might use indexed views to speed up the matching process by querying a flattened, pre-joined structure.&lt;/p&gt;

&lt;p&gt;But for most systems I have worked on, starting with this kind of clean, modular, CTE-based design provides a solid foundation. It is easy to read, easy to extend, and flexible enough to support all kinds of input combinations.&lt;/p&gt;

</description>
      <category>sql</category>
    </item>
    <item>
      <title>Dynamic API Dispatching in C#</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Tue, 24 Jun 2025 16:06:56 +0000</pubDate>
      <link>https://dev.to/seanmdrew/dynamic-api-dispatching-in-c-1a38</link>
      <guid>https://dev.to/seanmdrew/dynamic-api-dispatching-in-c-1a38</guid>
      <description>&lt;p&gt;&lt;strong&gt;A flexible way to handle changing JSON inputs without messy "if" or "switch" statements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When building an API, one of the first things you need to figure out is how to route incoming requests to the right piece of code. This is usually done by checking which fields are present in the request and deciding which method to run. For simple APIs, this is often handled with a few if or switch statements. But once your inputs start to vary and grow more complex, this approach becomes hard to manage.&lt;/p&gt;

&lt;p&gt;In my case, I needed to build an API endpoint that could handle multiple types of patient-related data. Sometimes a request had a PatientIdNum, other times it had just a date of birth and zip code, or maybe an invoice number. I wanted a way to let the API decide which method to call based on what was sent, without having to write a long list of conditions in code. So I went with a rule-based approach using a feature in C# called reflection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Dispatching?&lt;/strong&gt;&lt;br&gt;
"Dispatching" simply means sending something to the right place. In APIs, this usually means figuring out which method should handle a request. A "dispatcher" is just a piece of code that makes this decision.&lt;/p&gt;

&lt;p&gt;In my case, I wrote a dispatcher that checks the incoming JSON request, matches it to a rule in a JSON file, and then calls the matching method by name. This keeps the routing logic separate from the rest of the business logic and makes it easier to change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem with Hardcoded Logic&lt;/strong&gt;&lt;br&gt;
If you've ever written code like this, you know how quickly it can grow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (request.PatientIdNum != null)
  return HandleByPatientIdNum(request);
else if (request.PatInfo?.Dob != null &amp;amp;&amp;amp; request.PatInfo?.Zip != null)
  return HandleByDemographics(request);
else if (request.OmInfo?.FirstOrDefault()?.InvoiceNo != null)
  return HandleByInvoice(request);
else
  return BadRequest("Insufficient identifiers");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This kind of logic is fine at first, but over time it:&lt;br&gt;
• Gets harder to read and maintain&lt;br&gt;
• Tangles routing and business logic together&lt;br&gt;
• Requires code changes every time the JSON input and rules change&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Simpler Way: Rule-Based Routing&lt;/strong&gt;&lt;br&gt;
With rule-based routing, you define your routing rules outside of your code, usually in a file. For me, that file is "rules.json". Each rule lists the fields required to trigger a handler method. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "name": "HandleByPatientIdNum",
    "match": ["PatientIdNum"]
  },
  {
    "name": "HandleByDemographics",
    "match": ["patinfo.dob", "patinfo.zip"]
  },
  {
    "name": "HandleByInvoice",
    "match": ["ominvoice.invoice_no"]
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Basically, each rule says: If these fields are present and not null, then run the method with this "name".&lt;/p&gt;
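&lt;p&gt;In code, each rule can be represented by a small model and loaded once at startup. This is a sketch; the class name is mine, but the Name/Match shape mirrors rules.json:&lt;/p&gt;

```csharp
using System.Text.Json;

// Load and deserialize the rules once at startup.
var rules = JsonSerializer.Deserialize<List<DispatchRule>>(
    File.ReadAllText("rules.json"),
    new JsonSerializerOptions { PropertyNameCaseInsensitive = true });

// Mirrors one entry in rules.json: "name" -> Name, "match" -> Match.
public class DispatchRule
{
    public string Name { get; set; } = "";
    public List<string> Match { get; set; } = new();
}
```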

&lt;p&gt;&lt;strong&gt;How Reflection Helps&lt;/strong&gt;&lt;br&gt;
Reflection is a .NET feature that lets your code examine itself while it's running. That means you can find a method by name and call it even if you didn't hardcode the call. So, if the rule says to call "HandleByPatientIdNum", and that method exists in your controller, reflection lets you invoke it by name.&lt;/p&gt;

&lt;p&gt;This makes the system very flexible. To add new functionality, you just write a new method and add a new rule to the JSON file (rules.json).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reflection Scope Note:&lt;/strong&gt;&lt;br&gt;
For safety and clarity, I restrict the reflection lookup to non-public instance methods only using BindingFlags.NonPublic | BindingFlags.Instance. This avoids exposing public framework or utility methods unintentionally and keeps the dynamic invocation limited to the intended handler methods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How It Works&lt;/strong&gt;&lt;br&gt;
The dispatcher loads the rules from the JSON file.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Rule Validation:&lt;/u&gt;&lt;br&gt;
Before evaluating incoming requests, I run a simple preflight validator to check for common mistakes in the "rules.json" file. This helps catch issues early and avoid runtime surprises. The validator:&lt;br&gt;
• Confirms all name entries refer to actual methods (using GetMethod(...))&lt;br&gt;
• Ensures match arrays are not empty&lt;br&gt;
• Optionally checks for duplicate or overly generic rules (e.g., rules that match any input)&lt;/p&gt;

&lt;p&gt;This step is run at app startup and helps ensure all routing rules are both valid and intentional.&lt;/p&gt;
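&lt;p&gt;A minimal sketch of that preflight validator, assuming a rule type exposing Name and Match (mirroring the "name" and "match" fields in rules.json); the type and method names here are illustrative:&lt;/p&gt;

```csharp
using System.Reflection;

public static class RuleValidator
{
    // Illustrative rule shape mirroring rules.json ("name", "match").
    public record Rule(string Name, List<string> Match);

    public static void Validate(IEnumerable<Rule> rules, Type controllerType)
    {
        foreach (var rule in rules)
        {
            // Ensure the match array is not empty.
            if (rule.Match == null || rule.Match.Count == 0)
                throw new InvalidOperationException(
                    $"Rule '{rule.Name}' has an empty match array.");

            // Confirm the name refers to an actual handler method
            // (same binding flags the dispatcher uses).
            var method = controllerType.GetMethod(rule.Name,
                BindingFlags.NonPublic | BindingFlags.Instance);
            if (method == null)
                throw new InvalidOperationException(
                    $"Rule '{rule.Name}' has no matching handler method.");
        }
    }
}
```

Running this once at app startup fails fast on a typo in rules.json instead of surfacing the problem on a live request.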

&lt;p&gt;Then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The request comes in as nested JSON&lt;/li&gt;
&lt;li&gt;The JSON is flattened into a dictionary with keys like patinfo.dob or ominvoice.invoice_no&lt;/li&gt;
&lt;li&gt;The dispatcher loads the validated rules from rules.json&lt;/li&gt;
&lt;li&gt;It checks each rule to see if all required fields are present and not null&lt;/li&gt;
&lt;li&gt;It picks the most specific matching rule&lt;/li&gt;
&lt;li&gt;It uses reflection to call the method named in the rule ("name")&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Flattening the JSON input&lt;/strong&gt;&lt;br&gt;
You cannot easily check for field presence like "patinfo.dob" or "ominvoice.invoice_no" with a simple dictionary lookup because those values are nested inside JSON objects.&lt;/p&gt;

&lt;p&gt;Flattening turns nested structures into a flat dictionary where the keys represent the full JSON path using dot notation, which makes things much easier.&lt;/p&gt;

&lt;p&gt;Here is an example of a nested JSON request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "patinfo": {
    "dob": "1980-05-01",
    "address": {
      "zip": "12345"
    }
  },
  "ominvoice": {
    "invoice_no": "INV123"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the flattened dictionary representation of the JSON input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "patinfo.dob": "1980-05-01",
  "patinfo.address.zip": "12345",
  "ominvoice.invoice_no": "INV123"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each key is a string path pointing to the original value in the nested JSON, which is much easier to check than writing complex null checks or traversal code.&lt;/p&gt;
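&lt;p&gt;Here is a sketch of what such a flattener might look like using System.Text.Json; this helper is illustrative (arrays are omitted for brevity), not the exact code from my project:&lt;/p&gt;

```csharp
using System.Text.Json;

public static class JsonFlattener
{
    // Recursively walk a JsonElement and record each leaf value under its
    // dot-notation path, e.g. "patinfo.address.zip" -> "12345".
    public static Dictionary<string, string?> Flatten(JsonElement element, string prefix = "")
    {
        var result = new Dictionary<string, string?>(StringComparer.OrdinalIgnoreCase);
        foreach (var prop in element.EnumerateObject())
        {
            var key = string.IsNullOrEmpty(prefix) ? prop.Name : $"{prefix}.{prop.Name}";
            if (prop.Value.ValueKind == JsonValueKind.Object)
            {
                // Recurse into nested objects and merge their paths.
                foreach (var child in Flatten(prop.Value, key))
                    result[child.Key] = child.Value;
            }
            else
            {
                // Store nulls as null so rules can require "present and not null".
                result[key] = prop.Value.ValueKind == JsonValueKind.Null
                    ? null
                    : prop.Value.ToString();
            }
        }
        return result;
    }
}
```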

&lt;p&gt;&lt;u&gt;Example Dispatcher Code&lt;/u&gt;&lt;br&gt;
This sample code uses the flattened dictionary to check against the rules in "rules.json" and then calls the right method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Find the first rule that matches ALL of its required fields
var matchedRule = rules
  // Keep only the rules where every required field in the rule's `Match` array:
  //   - Exists in the flattened input dictionary
  //   - Is not null
  .Where(r =&amp;gt; r.Match.All(field =&amp;gt;
      flattenedInput.ContainsKey(field) &amp;amp;&amp;amp; flattenedInput[field] != null))

  // If multiple rules match, prefer the one with the most required fields
  // (i.e., the most specific match)
  .OrderByDescending(r =&amp;gt; r.Match.Count)

  // Return the first matching rule, or null if none match
  .FirstOrDefault();

// If no rule matched, return a 400 Bad Request
if (matchedRule == null)
  return BadRequest("No matching handler found");

// Use reflection to locate a method on this controller with the same name
// as the rule's `name` field. Assume all handlers are private instance methods.
var method = this.GetType().GetMethod(
    matchedRule.Name,
    BindingFlags.NonPublic | BindingFlags.Instance
);

// If the method doesn't exist (e.g., typo in rules.json), return error
if (method == null)
  return BadRequest($"Handler method '{matchedRule.Name}' not found");

// Dynamically invoke the matched handler method and pass the `request` object
// This assumes all handler methods follow the signature: 
//   Task&amp;lt;IActionResult&amp;gt; MethodName(PatientRequest request)
var result = await (Task&amp;lt;IActionResult&amp;gt;)method.Invoke(
    this,
    new object[] { request }
);

// Return the handler’s result
return result;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the matching handler methods look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private async Task&amp;lt;IActionResult&amp;gt; HandleByPatientIdNum(PatientRequest request)
{
  // code here
}

private async Task&amp;lt;IActionResult&amp;gt; HandleByDemographics(PatientRequest request)
{
  // code here
}

private async Task&amp;lt;IActionResult&amp;gt; HandleByInvoice(PatientRequest request)
{
  // code here
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why This Is Useful&lt;/strong&gt;&lt;br&gt;
• You can change the rules without touching the code.&lt;br&gt;
• Helps keep routing logic out of the controller methods.&lt;br&gt;
• Avoids giant if-else/switch blocks.&lt;br&gt;
• Easy to scale by just adding new methods and rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
This rule-based dispatch system has made it much easier for me to handle flexible JSON inputs without burying myself in if-then/switch conditions. By flattening the input to a dictionary and matching fields to rules, I let my code decide what to do based on the data. It's clean, adaptable, and easy to extend. If your API needs to handle many different input patterns, this could be a great way to go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
While externalizing (rules.json) routing rules offers flexibility, the rules themselves must be managed carefully to avoid becoming a new source of complexity.&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>api</category>
    </item>
    <item>
      <title>Identifying Stored Procedures Created or Modified Within a Date Range in SQL Server</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Wed, 18 Jun 2025 20:39:07 +0000</pubDate>
      <link>https://dev.to/seanmdrew/identifying-stored-procedures-created-or-modified-within-a-date-range-in-sql-server-1246</link>
      <guid>https://dev.to/seanmdrew/identifying-stored-procedures-created-or-modified-within-a-date-range-in-sql-server-1246</guid>
      <description>&lt;p&gt;As part of rewriting the documentation for my custom API, I needed to test each endpoint to ensure it behaved as expected, including validating both input and output. During this process, I discovered that several existing stored procedures required modification. Unfortunately, I did not keep track of which procedures were changed. Once the documentation was complete, I had to go back and identify the stored procedures that had been altered.&lt;/p&gt;

&lt;p&gt;This experience led me to think more seriously about how to track changes to stored procedures; something that would be valuable for auditing, deployment reviews, and debugging. Having visibility into when a stored procedure was created or last modified can help both developers and database administrators identify recent stored procedure changes.&lt;/p&gt;

&lt;p&gt;To address this, I looked at SQL Server's system catalog views and settled on the sys.procedures view, which contains metadata such as creation and modification dates for stored procedures. This brief write-up outlines how to use T-SQL to retrieve a list of stored procedures that were either created or modified within a specified date range, including the script I wrote to accomplish this task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The TSQL Query&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use [yourdatabase]
go

-- select SPs that were created or modified within a specific date range
select
  schema_name(schema_id) as schema_name, -- retrieve schema name (e.g., 'dbo') for the procedure
  name as procedure_name, -- the name of the stored procedure
  create_date, -- the date the procedure was originally created
  modify_date -- the date the procedure was last altered
from sys.procedures -- contains metadata for user-defined procedures
where create_date between '2025-06-01' and '2025-06-18' or modify_date between '2025-06-01' and '2025-06-18'
order by modify_date desc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What This Query Does&lt;/strong&gt;&lt;br&gt;
It retrieves a list of stored procedures from your selected SQL database that were either created or altered between June 1, 2025 and June 18, 2025.&lt;/p&gt;

&lt;p&gt;Each row in the result set includes:&lt;br&gt;
• The schema name (such as dbo)&lt;br&gt;
• The procedure name&lt;br&gt;
• The creation date&lt;br&gt;
• The last modification date&lt;/p&gt;

&lt;p&gt;The results are sorted in descending order by modify_date, so the most recently modified procedures appear at the top.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional Notes&lt;/strong&gt;&lt;br&gt;
Each database has its own "sys.procedures" and "sys.objects" catalog views.&lt;/p&gt;

&lt;p&gt;If you want to find other objects besides stored procedures, you can use "sys.objects" instead of "sys.procedures". For example, specify type = 'P' with sys.objects to limit the results to stored procedures.&lt;/p&gt;
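&lt;p&gt;For example, the same date-range check can be run against sys.objects, limited to stored procedures with type = 'P':&lt;/p&gt;

```sql
-- Equivalent query using sys.objects instead of sys.procedures
select
  schema_name(schema_id) as schema_name,
  name as object_name,
  create_date,
  modify_date
from sys.objects
where type = 'P' -- SQL_STORED_PROCEDURE
  and (create_date between '2025-06-01' and '2025-06-18'
       or modify_date between '2025-06-01' and '2025-06-18')
order by modify_date desc
```

Swap the type filter (e.g., 'TR' for triggers, 'V' for views) to audit other object types the same way.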

&lt;p&gt;Here is a query to find other object types such as triggers, user tables, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select
distinct
type,
type_desc
from sys.objects
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sample output (type, type_desc):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TT: TYPE_TABLE&lt;/li&gt;
&lt;li&gt;FN: SQL_SCALAR_FUNCTION&lt;/li&gt;
&lt;li&gt;UQ: UNIQUE_CONSTRAINT&lt;/li&gt;
&lt;li&gt;SQ: SERVICE_QUEUE&lt;/li&gt;
&lt;li&gt;F: FOREIGN_KEY_CONSTRAINT&lt;/li&gt;
&lt;li&gt;U: USER_TABLE&lt;/li&gt;
&lt;li&gt;D: DEFAULT_CONSTRAINT&lt;/li&gt;
&lt;li&gt;PK: PRIMARY_KEY_CONSTRAINT&lt;/li&gt;
&lt;li&gt;V: VIEW&lt;/li&gt;
&lt;li&gt;S: SYSTEM_TABLE&lt;/li&gt;
&lt;li&gt;IT: INTERNAL_TABLE&lt;/li&gt;
&lt;li&gt;P: SQL_STORED_PROCEDURE&lt;/li&gt;
&lt;li&gt;TR: SQL_TRIGGER&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
SQL Server makes it easy to find out which stored procedures have been added or changed recently. Using the built-in system catalog views, you can quickly get a list of procedures based on when they were created or last modified. This is helpful for auditing recent changes, reviewing what went out in a deployment, or troubleshooting something that suddenly stopped working. It is also a convenient way to keep track of changes for things like documentation.&lt;/p&gt;

</description>
      <category>sqlserver</category>
      <category>sql</category>
    </item>
    <item>
      <title>Customizing Output Caching in ASP.NET Web Forms and C# APIs</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Fri, 30 May 2025 13:55:51 +0000</pubDate>
      <link>https://dev.to/seanmdrew/customizing-output-caching-in-aspnet-web-forms-and-c-apis-5cob</link>
      <guid>https://dev.to/seanmdrew/customizing-output-caching-in-aspnet-web-forms-and-c-apis-5cob</guid>
      <description>&lt;p&gt;Efficient performance is essential for responsive, scalable web applications and one way to achieve that and boost performance is through caching. By avoiding repetitive data processing and rendering, caching allows applications to serve content more efficiently. In ASP.NET Web Forms and C# API-based applications, output caching and data caching can help improve load times and reduce server resource usage when applied appropriately.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Good Fit for Caching&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data is read often (rarely written)&lt;/li&gt;
&lt;li&gt;Content is shared across users&lt;/li&gt;
&lt;li&gt;Calls are slow or expensive&lt;/li&gt;
&lt;li&gt;Data freshness is not business-critical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Not a Good Fit for Caching&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data is updated frequently or in real-time&lt;/li&gt;
&lt;li&gt;Data is personalized or user-specific&lt;/li&gt;
&lt;li&gt;Data is already fast or trivial to retrieve&lt;/li&gt;
&lt;li&gt;Data freshness is business-critical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Some Benefits of Using Caching&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduce server load. By serving cached responses, the server uses fewer CPU, disk, and memory resources.&lt;/li&gt;
&lt;li&gt;Improve User Experience. Serving cached responses will make response time faster, smoother, and more responsive.&lt;/li&gt;
&lt;li&gt;Scales Better Under Load. Serving cached responses will help the system handle high volumes of traffic without performing full processing for each incoming request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Caching is a valuable performance tool, but it is not the only option. Alternatives such as asynchronous processing, batching, and queuing can also improve responsiveness and reduce system load, especially in situations where caching is not appropriate because data changes frequently or tasks require the most current data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Caching to Boost Performance?&lt;/strong&gt;&lt;br&gt;
Caching reduces the need to repeatedly perform expensive operations for things like database queries, page rendering, data serialization, and network and disk I/O.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Querying a database, especially with complex joins or large datasets can be time-consuming. Caching the result set means subsequent requests can bypass the database entirely and deliver the same data as a cached response.&lt;/li&gt;
&lt;li&gt;Calling an external API or service (weather, stock prices, third-party data over HTTP).&lt;/li&gt;
&lt;li&gt;Database access over the network (querying a SQL database hosted on another machine or cloud service).&lt;/li&gt;
&lt;li&gt;Accessing a microservice (sending requests between services in a distributed system).&lt;/li&gt;
&lt;li&gt;Retrieving remote files (documents, images, PDFs, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Computation or Business Logic&lt;/u&gt;&lt;br&gt;
If a response depends on heavy processing (aggregating reports, business rules, etc.) caching can help avoid repeating that work for each request.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Rendering&lt;/u&gt;&lt;br&gt;
Rendering typically applies to ASP.NET Web Forms applications and is the process of generating the HTML (from server controls or other content) that is sent to the browser. In ASP.NET Web Forms, this means taking server-side controls (such as GridView, Label, etc.) and converting them into HTML markup.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Serialization&lt;/u&gt;&lt;br&gt;
Serialization typically applies to APIs and covers work such as serializing return data into JSON or XML. Caching the final JSON or XML result (output caching) effectively skips the serialization step for repeated requests with the same inputs and outputs.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;External Resource Access&lt;/u&gt;&lt;br&gt;
Fetching data from third-party services or external APIs (weather, stock info, etc.) could introduce latency and maybe even rate limits. Caching these results reduces dependency on the third-party service and external API.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Network and Disk I/O&lt;/u&gt;&lt;br&gt;
Reading static files or loading configuration from disk or across the network can also add delays. Caching them in memory will allow for quicker access to that information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output Caching&lt;/strong&gt;&lt;br&gt;
&lt;u&gt;Output Caching in ASP.NET Web Forms&lt;/u&gt;&lt;br&gt;
Output caching stores the final rendered HTML of a page and reuses it for subsequent requests. This is particularly effective for pages whose content is static, or changes rarely enough to be served as static.&lt;/p&gt;

&lt;p&gt;To implement output caching in a Web Forms application, you can use the OutputCache directive:&lt;br&gt;
&lt;code&gt;&amp;lt;%@ OutputCache Duration="60" VaryByParam="none" %&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Duration defines how long (in seconds) the page is cached.&lt;/li&gt;
&lt;li&gt;VaryByParam allows different versions of the cache based on query string or form parameters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, to cache different versions of a product page by productId for 120 seconds:&lt;br&gt;
&lt;code&gt;&amp;lt;%@ OutputCache Duration="120" VaryByParam="productId" %&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This ensures that each unique productId results in a distinct cached page.&lt;/p&gt;

&lt;p&gt;You can also configure output caching programmatically in the page code-behind, which provides more control over caching behavior.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(2));
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.VaryByParams["productId"] = true;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;Output Caching in C# APIs&lt;/u&gt;&lt;br&gt;
ASP.NET Web API does not include built-in output caching the way Web Forms does, but basic HTTP response caching can be simulated by using a custom action filter. This filter sets caching headers on the response, allowing browsers or intermediary proxies to cache the result for a specified duration.&lt;/p&gt;

&lt;p&gt;Here is a simple example defining a custom action filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class OutputCacheAttribute : ActionFilterAttribute
{
  private readonly int _durationSeconds;

  public OutputCacheAttribute(int durationSeconds)
  {
      _durationSeconds = durationSeconds;
  }

  public override void OnActionExecuted(HttpActionExecutedContext context)
  {
    var response = context.Response;
    if (response != null)
    {
      response.Headers.CacheControl = new CacheControlHeaderValue
      {
        Public = true,
        MaxAge = TimeSpan.FromSeconds(_durationSeconds)
      };
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You would then apply the custom action filter to a controller action like this. This caches the response for 120 seconds and reduces the load on the backend for repeated requests with the same parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[OutputCache(120)]
public IHttpActionResult GetProduct(int id)
{
  var product = _productService.GetProductById(id);
  return Ok(product);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Data Caching&lt;/strong&gt;&lt;br&gt;
In addition to output caching, data caching offers greater flexibility and can be used effectively in both Web Forms and APIs. Data caching is the storing of data temporarily in memory so that it can be retrieved faster the next time it is needed. This reduces expensive operations such as repeated database calls.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Data Caching for ASP.NET Web Forms&lt;/u&gt;&lt;br&gt;
In an ASP.NET Web Forms application, data caching is implemented using the "System.Web.Caching.Cache" class. The example below gets a list of products, caches it for 5 minutes, and then reuses it to avoid repeated "database" access.&lt;/p&gt;

&lt;p&gt;ProductRepository.cs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ProductRepository
{
  public List&amp;lt;string&amp;gt; GetProducts()
  {
    return new List&amp;lt;string&amp;gt; { "Apple", "Banana", "Cherry" };
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Default.aspx.cs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public partial class _Default : System.Web.UI.Page
{
  protected void Page_Load(object sender, EventArgs e)
  {
    if (!IsPostBack)
    {
      var products = GetProductsFromCache();
      ListBox1.DataSource = products;
      ListBox1.DataBind();
    }
  }

  private List&amp;lt;string&amp;gt; GetProductsFromCache()
  {
    var products = Cache["ProductList"] as List&amp;lt;string&amp;gt;;
    if (products == null)
    {
      var repo = new ProductRepository();
      products = repo.GetProducts();
      Cache.Insert("ProductList", products, null, DateTime.Now.AddMinutes(5), System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return products;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Default.aspx&lt;br&gt;
&lt;code&gt;&amp;lt;asp:ListBox ID="ListBox1" runat="server" Width="200"&amp;gt;&amp;lt;/asp:ListBox&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Data Caching in an ASP.NET Core API&lt;/u&gt;&lt;br&gt;
In ASP.NET Core APIs, data caching is commonly implemented using the built-in IMemoryCache service. This allows you to store frequently requested data (such as database query results) in memory to reduce redundant database calls and improve performance. The example below gets a list of products, caches it for 5 minutes, and then reuses it to avoid repeated "database" access.&lt;/p&gt;

&lt;p&gt;ProductRepository.cs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ProductRepository
{
  public List&amp;lt;string&amp;gt; GetProducts()
  {
    return new List&amp;lt;string&amp;gt; { "Apple", "Banana", "Cherry" };
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ProductController.cs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

[ApiController]
[Route("products")]
public class ProductsController : ControllerBase
{
  private readonly IMemoryCache _cache;
  private readonly ProductRepository _repo = new ProductRepository();

  public ProductsController(IMemoryCache cache)
  {
    _cache = cache;
  }

  [HttpGet]
  public List&amp;lt;string&amp;gt; Get()
  {
    if (!_cache.TryGetValue("ProductList", out List&amp;lt;string&amp;gt; products))
    {
      products = _repo.GetProducts();
      _cache.Set("ProductList", products, TimeSpan.FromMinutes(5));
    }

    return products;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Program.cs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var builder = WebApplication.CreateBuilder(args);
builder.Services.AddMemoryCache();
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Caching is a good way to boost ASP.NET Web Forms and C# API application performance. &lt;/p&gt;

&lt;p&gt;Output caching is ideal for quickly serving complete pages or responses.&lt;br&gt;
Data caching improves performance at the business logic and data access layer, and supports reusability and responsiveness there.&lt;/p&gt;

&lt;p&gt;Also, it is handy that you can control how long things stay in the cache and even change what gets cached based on things like query strings or user roles. Basically, you can cache what makes sense when it makes sense and also refresh the cache when required.&lt;/p&gt;
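&lt;p&gt;For instance, refreshing the cached product list from the ASP.NET Core example is just a matter of removing the entry so the next request rebuilds it. A minimal sketch; the class name is mine:&lt;/p&gt;

```csharp
using Microsoft.Extensions.Caching.Memory;

public class ProductCacheRefresher
{
    private readonly IMemoryCache _cache;

    public ProductCacheRefresher(IMemoryCache cache) => _cache = cache;

    // Remove the entry; the next GET will repopulate it from the repository.
    public void Refresh() => _cache.Remove("ProductList");
}
```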

</description>
      <category>csharp</category>
      <category>c</category>
      <category>aspnet</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Filtering and Exporting SQL Stored Procedures with PowerShell</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Tue, 25 Mar 2025 13:50:07 +0000</pubDate>
      <link>https://dev.to/seanmdrew/filtering-and-exporting-sql-stored-procedures-with-powershell-16g1</link>
      <guid>https://dev.to/seanmdrew/filtering-and-exporting-sql-stored-procedures-with-powershell-16g1</guid>
      <description>&lt;p&gt;When managing SQL Server stored procedures, there are times when I need to quickly generate scripts for them, usually for backups, migrations, a specific client or functionality, or documentation. In this document, I show how I use T-SQL and PowerShell to filter stored procedures based on naming patterns and export them into individual .sql files. Each file contains the full DROP IF EXISTS and CREATE script which allows for the easy recreation of the stored procedures.&lt;/p&gt;

&lt;p&gt;Running the script: the user account executing this script requires permissions to connect to the target database and read from the "sys.procedures" and "sys.sql_modules" system catalog views. Additionally, the account must have write permissions for the designated output file directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Script&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Define the SQL Server connection string
$connectionString = "Server=SQLServerName;Database=DBName;Integrated Security=True"

# Define the output folder for stored procedure scripts
$outputFolder = "C:\API_SPs" # Can also be a network UNC
$logFile = "$outputFolder\ErrorLog.txt" # Log file to store errors
$errorsOccurred = $false  # Flag to track if any errors occurred
$scriptsWritten = $false  # Flag to track if any scripts were actually written

# Ensure output folder exists, create it if necessary
if (!(Test-Path $outputFolder))
{
  New-Item -ItemType Directory -Path $outputFolder -Force | Out-Null
}

# Define SQL query to retrieve stored procedure definitions
$query = @"
SELECT 
 p.name AS ProcedureName,
 s.name AS SchemaName,
 'DROP PROCEDURE IF EXISTS [' + s.name + '].[' + p.name + '];' + CHAR(13) + CHAR(10) +
 'GO' + CHAR(13) + CHAR(10) +
 sm.definition + CHAR(13) + CHAR(10) +
 'GO' AS ScriptContent
FROM sys.procedures p
JOIN sys.schemas s ON p.schema_id = s.schema_id
JOIN sys.sql_modules sm ON p.object_id = sm.object_id
WHERE 
(
 p.name LIKE 'XXX_YYY%'
 AND p.name NOT LIKE '%BACKUP%' -- NOT IN does not do wildcard matching; NOT LIKE is required
 AND p.name NOT LIKE '%OLD%'
)
"@

# Function to log errors to a file and display in the console
function Log-Error
{
  param ([string]$message)
  $timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
  $logEntry = "$timestamp - ERROR: $message"
  Add-Content -Path $logFile -Value $logEntry  # Append error to log file

  # Print error directly to console
  Write-Host "ERROR: $message" -ForegroundColor Red  

  # Mark that an error has occurred
  $global:errorsOccurred = $true
}

# Initialize SQL connection and execute query
$sqlConnection = $null  # Variable to hold the connection object
$procedures = @{}  # Dictionary to store procedure names and their scripts

try
{
  # Create and open SQL connection
  $sqlConnection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
  $sqlConnection.Open()

  # Check if connection is successful
  if ($sqlConnection.State -ne 'Open')
  {
    throw "Failed to open SQL connection."
  }

  # Execute the query
  $sqlCommand = $sqlConnection.CreateCommand()
  $sqlCommand.CommandText = $query
  $reader = $sqlCommand.ExecuteReader()

  # Read query results
  while ($reader.Read())
  {
    # Ensure valid file names by replacing invalid characters
    $procedureName = $reader["ProcedureName"] -replace '[\\\/:*?"&amp;lt;&amp;gt;|]', '_' # replace \\\/:*?"&amp;lt;&amp;gt;| with underscore
    $scriptContent = $reader["ScriptContent"]
    $procedures[$procedureName] = $scriptContent  # Store in PS hashtable/dictionary
  }

  $reader.Close()  # Close the SQL data reader
}
catch
{
  Log-Error "Database error: $_"
}
finally
{
  # Ensure SQL connection is closed to avoid resource leaks
  if ($sqlConnection -ne $null -and $sqlConnection.State -eq 'Open')
  {
    $sqlConnection.Close()
  }
}

# Write stored procedure scripts to files only if there are procedures to save
if ($procedures.Count -gt 0)
{
  foreach ($procedureName in $procedures.Keys) # $procedures is a PS hashtable/dictionary and .Keys retrieves all keys from the hashtable
  {
    try
    {
      $filePath = "$outputFolder\$procedureName.sql"
      # Write the script content to the file using UTF-8 encoding
      [System.IO.File]::WriteAllText($filePath, $procedures[$procedureName], [System.Text.Encoding]::UTF8)
      $scriptsWritten = $true  # Mark that at least one script was successfully written
    }
    catch
    {
      Log-Error "Failed to write file: $filePath - $_"
    }
  }
}

# Final status message
if ($errorsOccurred -and -not $scriptsWritten)
{
    Write-Host "No stored procedure scripts were saved. Errors occurred. See $logFile for details." -ForegroundColor Yellow
}
elseif ($errorsOccurred -and $scriptsWritten)
{
    Write-Host "Stored procedure scripts have been saved to: $outputFolder, but some errors occurred. See $logFile for details." -ForegroundColor Yellow
}
elseif ($scriptsWritten)
{
    Write-Host "Stored procedure scripts have been successfully saved to: $outputFolder" -ForegroundColor Green
}
else
{
    Write-Host "No stored procedures matched the criteria. No scripts were generated." -ForegroundColor Cyan
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Steps Performed by the Script&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect to SQL Server: The script establishes a connection to the SQL Server using the provided credentials.&lt;/li&gt;
&lt;li&gt;Query the Database: It executes a SQL query to retrieve a list of stored procedures that match the specified criteria (e.g., names starting with "XXX_YYY").&lt;/li&gt;
&lt;li&gt;Generate Drop and Create Scripts: For each stored procedure, the script constructs a SQL script containing the DROP PROCEDURE IF EXISTS and CREATE PROCEDURE statements, including the procedure's full definition.&lt;/li&gt;
&lt;li&gt;Save to Files: The script generates an individual .sql file for each stored procedure and saves it to the designated directory (e.g., C:\API_SPs).&lt;/li&gt;
&lt;li&gt;Handle Exceptions: Any issues encountered during execution (such as permission problems, missing procedures, or failure to connect to SQL Server) are handled by the script, and error messages are written to ErrorLog.txt.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This PowerShell script provides an automated and efficient way to mass generate DROP and CREATE scripts for stored procedures in SQL Server, saving them as individual files for easy deployment or backup. By customizing the SQL query criteria, you can target specific stored procedures, such as those beginning with a particular prefix or excluding certain procedures from the output.&lt;/p&gt;

&lt;p&gt;Using this script, database administrators or developers can quickly export the definitions of stored procedures for version control, migrations, or disaster recovery planning. This solution simplifies the process of managing stored procedure scripts, making it a valuable tool for SQL Server environments.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>sqlserver</category>
      <category>powershell</category>
    </item>
    <item>
      <title>Some Random SQL Error Handling Techniques for Reliability and Transactional Integrity</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Mon, 24 Feb 2025 20:21:16 +0000</pubDate>
      <link>https://dev.to/seanmdrew/some-random-sql-error-handling-techniques-for-reliability-and-transactional-integrity-14ei</link>
      <guid>https://dev.to/seanmdrew/some-random-sql-error-handling-techniques-for-reliability-and-transactional-integrity-14ei</guid>
      <description>&lt;p&gt;Error handling is an important part of writing SQL code. Whether working with stored procedures, triggers, or transactions, proper error handling ensures that database operations remain reliable, maintainable, and capable of handling unexpected situations. In this write-up, I put together some very basic samples of SQL error handling, including TRY...CATCH blocks, logging, transaction management, as well as SAVEPOINT, XACT_ABORT, and stored procedure return codes.&lt;/p&gt;

&lt;p&gt;For simplicity, the T-SQL code snippets in this write-up are provided for illustrative purposes only and do not necessarily reflect best coding practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Fundamentals of SQL Error Handling&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;TRY...CATCH Block&lt;/strong&gt;&lt;br&gt;
The TRY...CATCH block is the primary mechanism for handling errors in SQL Server. It allows capturing exceptions and responding appropriately. If an error occurs inside the TRY block, the CATCH block executes, returning error details in a select statement. If you need execution to stop, you will need to re-throw the error instead of simply doing a select statement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;begin try
  -- your main sql code here
  -- your main sql code here
end try
begin catch
  select
  error_message() as ErrorMessage,
  error_number() as ErrorNumber,
  error_severity() as ErrorSeverity,
  error_state() as ErrorState
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;THROW and RAISERROR&lt;/strong&gt;&lt;br&gt;
RAISERROR allows you to generate custom error messages with specified severity levels.&lt;/p&gt;

&lt;p&gt;THROW is used to re-throw captured errors and halt execution.&lt;/p&gt;

&lt;p&gt;The point of this code is to demonstrate conditional logic before raising an error in SQL Server using RAISERROR with TRY...CATCH error handling.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- example of conditional logic before raising an error
declare @somecondition int = 1 -- this will cause the error to be thrown

begin try
  -- conditionally raise an error
  if @somecondition = 1 -- if @somecondition &amp;lt;&amp;gt; 1 then error would not be thrown
  begin
    raiserror('my custom error occurred due to some condition.', 16, 1) -- raise error
  end

  -- your main sql code here
  select 'hello there'
  select 'no error occurred'
end try
begin catch
  -- capture the error and re-throw it
  throw  -- re-throw the error
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;Breaking Down the Concept of "Rethrow"&lt;/u&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An error occurs in the TRY block.&lt;/li&gt;
&lt;li&gt;If @somecondition = 1, the RAISERROR (or THROW) statement inside TRY raises an error.&lt;/li&gt;
&lt;li&gt;Execution immediately jumps to the CATCH block.&lt;/li&gt;
&lt;li&gt;The CATCH block captures the error.&lt;/li&gt;
&lt;li&gt;SQL Server automatically provides access to error details (ERROR_MESSAGE(), ERROR_NUMBER(), etc.).&lt;/li&gt;
&lt;li&gt;Just handling the error in CATCH does not automatically stop execution as it needs to be re-raised if necessary.&lt;/li&gt;
&lt;li&gt;Using THROW in CATCH (Re-throwing the error)&lt;/li&gt;
&lt;li&gt;Instead of logging the error or handling it silently, THROW raises the exact same error again.&lt;/li&gt;
&lt;li&gt;This allows the error to propagate up to the caller (e.g., another stored procedure or application).&lt;/li&gt;
&lt;li&gt;No need to pass error details manually because SQL Server remembers the error context.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;u&gt;Why "Re-throw" Instead of Just "Throw"?&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The term "throw" generally refers to raising a new error.&lt;/li&gt;
&lt;li&gt;The term "rethrow" is used when you catch an existing error and raise it again.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;When to Use Rethrow?&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you don’t want to change the original error.&lt;/li&gt;
&lt;li&gt;When you want the error to propagate up the call stack as if the TRY...CATCH was not there.&lt;/li&gt;
&lt;li&gt;When handling errors in nested stored procedures or transaction rollback scenarios.&lt;/li&gt;
&lt;/ul&gt;
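
&lt;p&gt;As a small, hedged illustration of this propagation (the procedure name below is invented for the example), an inner stored procedure can re-throw so the caller receives the original error unchanged:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;create procedure dbo.InnerProc
as
begin
  begin try
    throw 50001, 'failure inside InnerProc', 1
  end try
  begin catch
    -- optionally log here, then let the error bubble up to the caller
    throw
  end catch
end
go

begin try
  exec dbo.InnerProc
end try
begin catch
  -- the caller sees the original error number and message
  select error_number() as ErrorNumber, error_message() as ErrorMessage
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;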

&lt;p&gt;&lt;strong&gt;An alternative version of the SQL code using just THROW without RAISERROR&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;declare @somecondition int = 1 -- this will cause the error to be thrown

begin try
  -- conditionally throw an error
  if @somecondition = 1 -- if @somecondition &amp;lt;&amp;gt; 1 then the error would not be thrown
  begin
    throw 50001, 'my custom error occurred due to some condition.', 1
  end

  -- your main sql code here
  select 'hello there'
  select 'no error occurred'
end try
begin catch
  -- capture the error and re-throw it
  throw  -- re-throw the error
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;Key Differences Between THROW and RAISERROR&lt;/u&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Simpler Syntax:&lt;/li&gt;
&lt;li&gt;THROW 50001, 'My custom error occurred due to some condition.', 1&lt;/li&gt;
&lt;li&gt;No need to specify severity (16) like in RAISERROR.&lt;/li&gt;
&lt;li&gt;Built-in THROW for Re-Raising Errors:&lt;/li&gt;
&lt;li&gt;THROW inside CATCH automatically re-throws the caught error without needing parameters.&lt;/li&gt;
&lt;li&gt;RAISERROR requires capturing and re-raising the original error manually.&lt;/li&gt;
&lt;li&gt;Fixed Severity and Custom Error Numbers:&lt;/li&gt;
&lt;li&gt;With THROW, the error number must be 50000 or higher (the custom error range for SQL Server), and the severity is always 16.&lt;/li&gt;
&lt;li&gt;RAISERROR allows you to specify other severity levels.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;u&gt;When to Use THROW Instead of RAISERROR?&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you need simpler and cleaner error handling.&lt;/li&gt;
&lt;li&gt;When you don't need advanced formatting. RAISERROR supports message substitution using placeholders.&lt;/li&gt;
&lt;li&gt;When using TRY...CATCH blocks, THROW is the preferred way to re-raise exceptions.&lt;/li&gt;
&lt;/ul&gt;
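
&lt;p&gt;As a minimal sketch of the message substitution that RAISERROR supports (the values below are invented for the example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;declare @tablename sysname = 'orders'
declare @rowcount int = 0

-- %s is replaced by the string argument, %d by the integer argument
raiserror('table %s returned %d rows.', 16, 1, @tablename, @rowcount)

-- THROW has no placeholder support; the message string must be built beforehand
-- throw 50001, 'table orders returned 0 rows.', 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;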

&lt;p&gt;&lt;strong&gt;2. Transaction Management and Atomicity&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Using TRY...CATCH with Transactions&lt;/strong&gt;&lt;br&gt;
Transactions in SQL Server guarantee atomicity, meaning that all operations within a transaction either succeed as a group or fail as a group. If any part of the process fails, the system rolls back to its original state, preventing partial updates and ensuring database integrity.&lt;/p&gt;

&lt;p&gt;Using transactions is crucial when executing multiple SQL statements that must either fully complete or not execute at all to maintain consistency.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Basic Transaction with Error Handling&lt;/u&gt;&lt;br&gt;
This sample code shows how a transaction ensures atomicity. If a specific condition is met, an error is thrown which causes the transaction to roll back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;declare @somecondition int = 1 -- this will cause the error to be thrown

begin transaction
begin try
  if @somecondition = 1 -- if @somecondition &amp;lt;&amp;gt; 1 then the error would not be thrown
  begin
    throw 50001, 'my custom error occurred due to some condition.', 1
  end

  -- sql operations (insert, update, delete, etc.)
  commit transaction
end try
begin catch
  rollback transaction
  throw  -- re-throw the error
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If any SQL statement inside the TRY block fails, SQL Server immediately jumps to the CATCH block. The CATCH block rolls back the transaction, undoing any changes made since BEGIN TRANSACTION, thereby preventing partial updates.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Logging Errors in Transactions&lt;/u&gt;&lt;br&gt;
Adding error logging to the CATCH block can help track failures and diagnose issues effectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;begin transaction
begin try
  -- some sql operations (insert, update, delete, etc.)
  commit transaction
end try
begin catch
    rollback transaction

    -- log error details to a custom audit table
    insert into errorlog (errormessage, errorprocedure, errorline, errortime)
    values (ERROR_MESSAGE(), ERROR_PROCEDURE(), ERROR_LINE(), GETDATE())

    select ERROR_MESSAGE() as errormessage
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Transactions are essential for atomicity when performing multiple dependent SQL operations: they must all succeed together or fail together. ROLLBACK maintains consistency by reverting changes in case of failure, while the TRY…CATCH block handles errors effectively, allows for error logging, and prevents partial updates. Logging errors also provides valuable debugging information, improving system reliability.&lt;/p&gt;

&lt;p&gt;Transactions are essential in any SQL operation where data integrity is critical, ensuring that changes to the database remain consistent and reliable even in the event of failures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using SAVEPOINT for Partial Transaction Rollbacks&lt;/strong&gt;&lt;br&gt;
SAVEPOINT allows rolling back specific parts of a transaction while keeping others intact. In SQL Server, SAVEPOINT provides finer control over transactions by allowing partial rollbacks. Unlike a full ROLLBACK TRANSACTION, which undoes all changes since BEGIN TRANSACTION, SAVEPOINT enables rolling back only a portion of the transaction while keeping the rest intact. This is useful when handling multiple operations where only certain parts should be undone in case of an error.&lt;/p&gt;

&lt;p&gt;SAVEPOINT can help preserve successful operations so if an error occurs, you can roll back to the last SAVEPOINT rather than discarding the entire transaction. This can allow for finer control when handling failures within a large transaction which can be helpful when dealing with dependent operations where certain steps should remain committed even if a later step fails and is rolled back.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;begin transaction
begin try
  -- insert new record into the orders table
  insert into orders (orderid, customername, orderdate)
  values (101, 'John Doe', getdate())

  -- define savepoint before inserting a new
  -- record into the orderdetails table
  save transaction OrderDetailsSavepoint

  -- insert new record into the orderdetails table
  insert into orderdetails (orderid, productid, quantity)
  values (101, 1, 5)

  -- simulate an error
  throw 50001, 'simulated error occurred after OrderDetails insertion.', 1

  commit transaction
end try
begin catch
  select 'error. rolling back to OrderDetailsSavepoint savepoint.'

  -- rollback only the changes made after the savepoint
  rollback transaction OrderDetailsSavepoint

  select 'continuing with remaining transaction...'

  -- you can still commit other successful operations
  commit transaction
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a transaction is fully rolled back (full ROLLBACK TRANSACTION), all SAVEPOINTS within it are also discarded. Using SAVEPOINT does not release locks until the full transaction is committed or fully rolled back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using XACT_ABORT for Automatic Rollbacks&lt;/strong&gt;&lt;br&gt;
XACT_ABORT is a session-level setting in SQL Server that automatically rolls back the entire transaction if a runtime error occurs. This eliminates the need for explicit error handling with ROLLBACK TRANSACTION, which can help ensure data integrity with minimal code. It is very useful for bulk inserts, bulk updates, and other "batch" type operations, as it prevents partial updates in large batch operations.&lt;/p&gt;

&lt;p&gt;XACT_ABORT ensures full rollback on failure. If any statement inside a transaction fails, the entire transaction is automatically rolled back. You do not need explicit BEGIN CATCH and ROLLBACK TRANSACTION with every transaction (reduces manual error handling).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set xact_abort on -- turn it on
-- do not need to turn off xact_abort
-- because it applies to the current session or batch
-- after the session ends or a new connection is made
-- xact_abort resets to its default state (off).

begin transaction
begin try
  -- insert new record into the orders table
  insert into orders (orderid, customername, orderdate)
  values (102, 'Jane Doe', getdate())

  -- simulate an error
    insert into orderdetails (orderid, productid, quantity)
    values (102, NULL, 5)  -- NULL is not allowed for ProductID

    commit transaction  -- this will not run if an error occurs
end try
begin catch
    select 'error. xact_abort automatically rolled back the transaction.'
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Error Logging and Reporting&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Logging Errors to a Table&lt;/strong&gt;&lt;br&gt;
Storing error details in a log table helps with troubleshooting and debugging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;begin catch
  insert into errorlog (errormessage, errornumber, errorseverity, errorstate)
  values (ERROR_MESSAGE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE())
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It couldn't hurt to index the errorlog table for better query performance and to add other logging information such as SESSION_USER, APP_NAME(), HOST_NAME(), etc. to help make debugging easier and to add more context. If the error occurs in a stored procedure, then adding ERROR_PROCEDURE() and ERROR_LINE() for good measure might also be helpful.&lt;/p&gt;
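
&lt;p&gt;As a hedged sketch of the richer logging described above (the errorlog column names are assumed for illustration, not taken from an existing schema):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;begin catch
  insert into errorlog
    (errormessage, errornumber, errorseverity, errorstate,
     errorprocedure, errorline, sessionuser, appname, hostname, errortime)
  values
    (ERROR_MESSAGE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(),
     ERROR_PROCEDURE(), ERROR_LINE(), SESSION_USER, APP_NAME(), HOST_NAME(), GETDATE())
end catch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;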

&lt;p&gt;&lt;u&gt;Using Output Parameters for Error Reporting&lt;/u&gt;&lt;br&gt;
Stored Procedures can return error messages via an output parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;drop procedure if exists dbo.MyProcedure
go

create procedure dbo.MyProcedure @ErrorMessage nvarchar(4000) output
as
begin
  begin try
    -- sql logic
  end try
  begin catch
    set @errormessage = error_message() -- return the error message to the calling code
  end catch
end
go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
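
&lt;p&gt;A caller can retrieve the output parameter like this (a minimal sketch using the procedure above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;declare @msg nvarchar(4000)

exec dbo.MyProcedure @ErrorMessage = @msg output

if @msg is not null
  select @msg as ErrorMessage -- the procedure hit an error
else
  select 'procedure completed without errors'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;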



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Using SQL error handling techniques ensures database reliability and maintainability. Using TRY...CATCH, transaction management, and logging to a table can help improve system resilience, handle unexpected situations effectively and maintain system integrity.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>sqlserver</category>
    </item>
    <item>
      <title>Centralizing SQL Connection Handling with a DatabaseHelper Class</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Tue, 11 Feb 2025 20:18:01 +0000</pubDate>
      <link>https://dev.to/seanmdrew/centralizing-sql-connection-handling-with-a-databasehelper-class-1mob</link>
      <guid>https://dev.to/seanmdrew/centralizing-sql-connection-handling-with-a-databasehelper-class-1mob</guid>
      <description>&lt;p&gt;For applications that rely on SQL Server connectivity for real-time data processing and high-volume data operations, transient connection failures can occur due to network issues, server load, timeouts, etc. Implementing a custom SQL connection retry mechanism ensures that these temporary disruptions do not compromise application stability.&lt;/p&gt;

&lt;p&gt;In this write-up, I look at a custom GetResilientConnection method in ADO.NET, used when connecting to SQL Server via SqlClient, to ensure my application handles connection disruptions gracefully through automated retry logic that attempts reconnections with controlled delays and limits.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Transient Connection Failures&lt;/u&gt;&lt;br&gt;
These faults are temporary connectivity issues, often due to network instability or database server connectivity issues. While these errors are typically resolved once the network or server recovers, they can disrupt an application's flow unless handled properly. Some general transient connection examples can include:&lt;/p&gt;

&lt;p&gt;• Database temporarily unavailable (e.g., Azure SQL error code 40613)&lt;br&gt;
• Resource or session limits reached (e.g., Azure SQL error code 10928)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing Custom Retry Logic in ADO.NET&lt;/strong&gt;&lt;br&gt;
A simple yet effective way to enhance resilience is by implementing retry logic. When a transient error occurs, the application should wait for a brief period before attempting to reconnect. This code defines a class named DatabaseHelper, which provides a method called GetResilientConnection. This method is responsible for handling transient database connection failures by implementing a retry mechanism.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Sample Retry Pattern Code Example&lt;/u&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using System.Net.Sockets;
using System.Threading;

public class DatabaseHelper
{
  public static SqlConnection GetResilientConnection(string connectionString)
  {
    int maxRetries = 3;  // maximum retry attempts
    int delay = 2000;    // initial delay (2 seconds)

    for (int i = 0; i &amp;lt; maxRetries; i++)
    {
      try
      {
        // attempt to establish a SQL connection
        var connection = new SqlConnection(connectionString);
        connection.Open(); // open the connection
        return connection; // successfully connected, return the connection
      }
      catch (Exception ex) when (IsTransientError(ex)) // catch broader transient errors
      {
        Console.WriteLine($"Transient error encountered: {ex.Message}");
        Console.WriteLine($"Retrying in {delay / 1000} seconds...");

        Thread.Sleep(delay); // wait before retrying (exponential backoff)
        delay *= 2; // double the delay for the next retry
      }
    }

    // if all retries fail, throw an exception indicating failure
    throw new Exception("Max retry attempts exceeded.");
  }

  private static bool IsTransientError(Exception ex)
  {
    // some common SQL server transient error codes
    int[] transientErrors = { 40613, 10928, 40197, 40501, 49918, 1205, 233, -2 };

    if (ex is SqlException sqlEx)
    {
      // check if any of the SQL error numbers match the transientErrors errors array
      if (sqlEx.Errors.Cast&amp;lt;SqlError&amp;gt;().Any(e =&amp;gt; transientErrors.Contains(e.Number)))
      {
        return true;
      }
    }

    // none of the error codes in the transientErrors array matched,
    // so fall back to checking these additional transient exception types
    return ex is TimeoutException  // SQL command timeout
      || ex is IOException       // general network failure
      || ex is InvalidOperationException // connection unexpectedly closed
      || ex is SocketException; // low-level socket failure
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;This GetResilientConnection example method does the following:&lt;/u&gt;&lt;br&gt;
• Attempts to establish a SQL Server connection using the provided connection string.&lt;br&gt;
• Retries up to 3 times if a transient failure occurs, waiting 2s → 4s → 8s between attempts, which gives the server a chance to recover before the method gives up.&lt;br&gt;
• Uses IsTransientError to detect common transient errors, including SQL timeouts, deadlocks, network failures, and connection drops.&lt;br&gt;
• Logs the failure and waits before retrying, using exponential backoff (doubling the delay each time).&lt;br&gt;
• Throws an exception if all retry attempts fail.&lt;br&gt;
• Is encapsulated in a reusable helper class and can be called from multiple parts of the application for consistent connection handling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example of GetResilientConnection Usage&lt;/strong&gt;&lt;br&gt;
Instead of rewriting SQL connectivity retry logic everywhere in my application where I need to make a SQL connection, I can simply call DatabaseHelper.GetResilientConnection. When another part of my application references DatabaseHelper, it is using this class as a utility to establish SQL connections with built-in resilience.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using (var connection = DatabaseHelper.GetResilientConnection(connectionString))
{
  Console.WriteLine("Connected successfully!");

  // use the connection here
  // and do your data processing stuff
  // in the context of the connection
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach promotes code maintainability and follows the Single Responsibility Principle (SRP) and ensures consistent handling of connection failures across my application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Encapsulation Part&lt;/strong&gt;&lt;br&gt;
I created the custom DatabaseHelper class to centralize database logic and avoid repeating connection code throughout my application. By keeping (encapsulating) this logic in one place, I improve code maintainability and make it easier to update connection handling in the future (only one place if I want to add functionality or fix bugs).&lt;/p&gt;

&lt;p&gt;Doing this also ensures that database connection logic remains separate from the rest of my application logic, which helps with making the codebase cleaner and more modular. &lt;/p&gt;

&lt;p&gt;Within this class, the IsTransientError method is marked private, meaning transient error detection is only accessible inside DatabaseHelper, which ensures that SQL connectivity error handling is consistent and can only be leveraged by referencing the DatabaseHelper class.&lt;/p&gt;

&lt;p&gt;Basically - since DatabaseHelper.GetResilientConnection(connectionString) is public and static, any part of my application that needs a resilient SQL connection can simply call it. This prevents duplicate code and ensures consistent retry logic across the application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use It&lt;/strong&gt;&lt;br&gt;
• Improves database connection reliability in case of temporary issues.&lt;br&gt;
• Ensures automatic retry instead of failing immediately.&lt;br&gt;
• Centralized connection handling reduces redundant code across my application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
By creating and using GetResilientConnection, my application can gracefully handle transient failures and improve reliability without requiring manual intervention and ensures that application database connectivity remains stable(ish). Proper logging and exception handling within the retry mechanism further help diagnose persistent issues, making it a valuable strategy for robust database interactions.&lt;/p&gt;

</description>
      <category>aspnet</category>
      <category>csharp</category>
      <category>sqlserver</category>
    </item>
    <item>
      <title>Launching Executables and Switching Focus in Delphi</title>
      <dc:creator>Sean Drew</dc:creator>
      <pubDate>Tue, 21 Jan 2025 16:39:14 +0000</pubDate>
      <link>https://dev.to/seanmdrew/launching-executables-and-switching-focus-in-delphi-5aef</link>
      <guid>https://dev.to/seanmdrew/launching-executables-and-switching-focus-in-delphi-5aef</guid>
      <description>&lt;p&gt;I am currently using Delphi to provide support, implement enhancements, and resolve bugs in legacy monolithic Windows applications. One key enhancement involved creating a launch pad to manage multiple pre-existing external executables, requiring seamless interaction with external programs. Whether it’s launching secondary tools like an image importer or bringing an already running application to the foreground, Delphi offers a powerful suite of tools to facilitate these tasks. By leveraging the Windows API and Delphi’s advanced process management capabilities, I can ensure smooth and efficient integration with external applications.&lt;/p&gt;

&lt;p&gt;In this write up, I show how to achieve this functionality using Windows API and Delphi code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Steps&lt;/strong&gt;&lt;br&gt;
&lt;u&gt;Check if the Process is Running&lt;/u&gt;&lt;br&gt;
Use the "IsProcessRunning" function to determine if the target application is already running. This function scans the active processes and matches their executable names with the desired process name.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Find the Window Associated with the Process&lt;/u&gt;&lt;br&gt;
If the process is running, use the "FindWindowByProcess" function to locate the window handle of the application.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Bring the Application to the Foreground&lt;/u&gt;&lt;br&gt;
Once the window handle is retrieved, the "SetForegroundWindow" API is used to bring the application window to the front. If minimized, the "ShowWindow" API restores it.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Launch the Application if it is Not Running&lt;/u&gt;&lt;br&gt;
If the application is not running, the "ShellExecuteEx" API launches it. After launching, the program waits for the application window to appear and then brings it to the foreground.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Code&lt;/strong&gt;&lt;br&gt;
The following code implements the steps for a button click event in my Delphi application. This serves as the core logic and relies on two supporting functions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;procedure TForm1.btnMyButtonClientEvent(Sender: TObject);
var
  ExecInfo: TShellExecuteInfo; // structure to define execution parameters
  Handle: HWND;               // handle to the window of the external application
begin
  // check if the process "TheExternalProgram.exe" is already running
  if IsProcessRunning('TheExternalProgram.exe') then
  begin
    // if the process is running, find the handle of its main window
    Handle := FindWindowByProcess('TheExternalProgram.exe');
    if Handle &amp;lt;&amp;gt; 0 then
    begin
      // if the window is found, restore it if minimized
      ShowWindow(Handle, SW_RESTORE);
      // bring the application window to the foreground
      SetForegroundWindow(Handle);
    end;
  end
  else
  begin
    // if the external executable is not running then prepare to launch it
    ZeroMemory(@ExecInfo, SizeOf(ExecInfo)); // clear the structure
    ExecInfo.cbSize := SizeOf(ExecInfo);  // set the size of the structure
    ExecInfo.fMask := SEE_MASK_NOCLOSEPROCESS; // ensure process handle remains open after launch
    ExecInfo.lpFile := 'c:\thepath\TheExternalProgram.exe'; // path to the executable
    ExecInfo.nShow := SW_SHOWNORMAL; // show the application normally

    // attempt to launch the executable
    if ShellExecuteEx(@ExecInfo) then
    begin
      // wait for the external exe main window to appear
      repeat
        Sleep(100); // pause for 100ms to allow the external exe to initialize
        Handle := FindWindowByProcess('TheExternalProgram.exe');
      until (Handle &amp;lt;&amp;gt; 0); // exit loop once the window handle is found

      // restore the external exe window if minimized and bring it to the foreground
      ShowWindow(Handle, SW_RESTORE);
      SetForegroundWindow(Handle);
      CloseHandle(ExecInfo.hProcess); // close the process handle opened by SEE_MASK_NOCLOSEPROCESS
    end
    else
    begin
      // show an error message if launching the external exe fails
      MessageDlg('Failed to launch TheExternalProgram.exe', mtError, [mbOk], 0);
    end;
  end;
end;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Supporting Functions&lt;/strong&gt;&lt;br&gt;
&lt;u&gt;1. Checking if the Process is Running&lt;/u&gt;&lt;br&gt;
This supporting code is the "IsProcessRunning" function, which checks whether a specific process, identified by its executable name, is currently running on the system. It takes a snapshot of all running processes and compares each entry’s executable name to the target (case-insensitively). If it finds a match, the function returns True, indicating the process is running; otherwise, it returns False.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function IsProcessRunning(const AProcessName: string): Boolean;
var
  SnapShot: THandle; // handle to the process snapshot
  ProcessEntry: TProcessEntry32; // structure to store process information
begin
  Result := False; // default to process not running
  SnapShot := CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); // take a snapshot of all processes
  if SnapShot = INVALID_HANDLE_VALUE then Exit;

  ProcessEntry.dwSize := SizeOf(TProcessEntry32); // initialize the structure size

  if Process32First(SnapShot, ProcessEntry) then // iterate through the processes in the snapshot
  begin
    repeat
      // compare each process name with the target process name (case-insensitive)
      if SameText(ProcessEntry.szExeFile, AProcessName) then
      begin
        Result := True; // process is running
        Break;
      end;
    until not Process32Next(SnapShot, ProcessEntry); // move to the next process
  end;

  CloseHandle(SnapShot); // release the snapshot handle
end;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;2. Finding the Window Associated with a Process&lt;/u&gt;&lt;br&gt;
This supporting code is for the "FindWindowByProcess" function. This function searches for a window that is associated with a running process by matching the process name. It iterates through all processes and their windows, comparing the process ID of each window to the target process. If a match is found, it returns the handle of the corresponding window.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function FindWindowByProcess(const ProcessName: string): HWND;
var
  Snapshot: THandle; // handle to the process snapshot
  ProcessEntry: TProcessEntry32; // Structure to store process information
  Handle: HWND; // handle to the window
begin
  Result := 0; // default to no window found
  Snapshot := CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); // take a snapshot of all processes
  if Snapshot = INVALID_HANDLE_VALUE then Exit;

  ProcessEntry.dwSize := SizeOf(ProcessEntry); // initialize the structure size
  // iterate through the processes in the snapshot
  if Process32First(Snapshot, ProcessEntry) then
  begin
    repeat
      // check if the process name matches the target name (case-insensitive)
      if AnsiCompareText(ProcessEntry.szExeFile, ProcessName) = 0 then
      begin
        // iterate through all top-level windows
        Handle := FindWindow(nil, nil);
        while Handle &amp;lt;&amp;gt; 0 do
        begin
          // check if the window belongs to the target process
          if GetWindowThreadProcessId(Handle, nil) = ProcessEntry.th32ProcessID then
          begin
            Result := Handle; // found the window handle
            Break;
          end;
          Handle := GetWindow(Handle, GW_HWNDNEXT); // move to the next window
        end;
        Break;
      end;
    until not Process32Next(Snapshot, ProcessEntry); // move to the next process
  end;

  CloseHandle(Snapshot); // release the snapshot handle
end;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;A Brief Note About the Order of Things&lt;/u&gt;&lt;br&gt;
In Delphi, the "IsProcessRunning" and "FindWindowByProcess" functions must be declared before the "btnMyButtonClick" event handler can call them. Defining them earlier in the unit, as I do here, satisfies this requirement; they could alternatively be declared in the unit’s interface section or with a forward directive. Either way, the compiler must see a declaration before the call site, so the button click handler can successfully call them to check if the process is running and to find the associated window.&lt;/p&gt;
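
&lt;p&gt;As a minimal sketch (reusing the routine names above), a forward directive lets the definitions appear in any order within the implementation section:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// declare the routines up front so earlier code can call them
function IsProcessRunning(const AProcessName: string): Boolean; forward;
function FindWindowByProcess(const ProcessName: string): HWND; forward;

procedure TForm1.btnMyButtonClick(Sender: TObject);
begin
  // both functions are callable here even though their bodies appear later in the unit
  if IsProcessRunning('TheExternalProgram.exe') then
    SetForegroundWindow(FindWindowByProcess('TheExternalProgram.exe'));
end;

// the full definitions of IsProcessRunning and FindWindowByProcess follow later in the unit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;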

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Incorporating external executables and managing their interactions within a Delphi application is essential when working with legacy systems. By leveraging Delphi’s integration with the Windows API and utilizing key API functions such as "ShellExecuteEx", "ShowWindow", and "SetForegroundWindow", I can easily check if the external program is running, bring its window to the foreground if necessary, and launch it if it’s not already active.&lt;/p&gt;

</description>
      <category>delphi</category>
      <category>pascal</category>
    </item>
  </channel>
</rss>
