When you’re a small team—or even a solo dev—the friction of managing CI/CD pipelines across multiple repos adds up fast.
A missing permission, a stale PAT, or a malformed GUID can break your release.
That’s what pushed me to think differently: treat pipeline configuration like infrastructure you version, maintain, and compose.
In this post, I’ll walk through how I built HoneyDrunk.Pipelines, a centralized, reusable template system for Azure DevOps, and the lessons that came out of it.
🧠 The Problem: Pipeline Sprawl & Runtime Breakage
Before building a unified system, here’s what I was dealing with:
- Every repo had its own `azure-pipelines.yml`, each slightly different
- Copy-pasted feed GUIDs, mismatched naming, and inconsistent versioning
- Permission and service connection issues that only surfaced at runtime
- Changing SDK versions or build logic meant updating several repos manually
A single pipeline failure captured the pain perfectly:
- The build service identity didn’t have Contributor on the feed
- The service connection used a stale PAT
- The feed URL (with GUID) was mis-typed
Each of these was an independent problem that only appeared after I pushed.
That’s when I decided: pipelines can’t be snowflakes anymore.
⚙️ The Vision: Pipelines as Reusable Infrastructure
I wrote down a few core principles before rebuilding:
- Single source of truth. One repo for templates, many consumers
- Versioned logic. Pipeline changes are reviewed and auditable
- Composable building blocks. Stages, jobs, and steps can be mixed and matched
- Separation of concerns. Projects define what to build; templates define how to build
🗂️ Repository Layout (Simplified)
```
HoneyDrunk.Pipelines/
├── stages/
├── jobs/
├── steps/
└── README.md

ConsumerRepo/
└── azure-pipelines.yml   # references templates from HoneyDrunk.Pipelines
```
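Zooming in one level, the template files referenced throughout this post live under those folders. The names below mirror the examples that follow:

```
HoneyDrunk.Pipelines/
├── stages/
│   ├── pr-validation.stage.yaml
│   └── dotnet-publish.stage.yaml
├── jobs/
├── steps/
│   ├── install-dotnet-sdk.step.yaml
│   ├── dotnet-restore-build.step.yaml
│   ├── dotnet-build.step.yaml
│   ├── dotnet-test.step.yaml
│   └── dotnet-pack.step.yaml
└── README.md
```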
Consumer repos reference the template repo via Azure DevOps resources:
```yaml
resources:
  repositories:
    - repository: pipelines
      type: git
      name: HoneyDrunk/HoneyDrunk.Pipelines
      ref: refs/heads/main

stages:
  - template: stages/pr-validation.stage.yaml@pipelines
    parameters:
      projectPath: 'src/MyLib/MyLib.csproj'
      runTests: true

  - template: stages/dotnet-publish.stage.yaml@pipelines
    parameters:
      dependsOn: PR_Validation
      condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
      packagePath: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
      feedName: 'HoneyDrunk-Internal'
```
That’s about 10 lines of YAML per project — the rest lives in the infrastructure layer.
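Because the templates are versioned (one of the core principles above), the `ref` doesn't have to track `main`. A consumer can pin to a tag of the template repo and upgrade deliberately; the tag name here is just an example:

```yaml
resources:
  repositories:
    - repository: pipelines
      type: git
      name: HoneyDrunk/HoneyDrunk.Pipelines
      ref: refs/tags/v1.2.0   # pin to a released template version (example tag)
```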
🧩 Core Template Mechanics
PR Validation Stage
```yaml
parameters:
  - name: projectPath
    type: string
  - name: configuration
    type: string
    default: 'Release'
  - name: runTests
    type: boolean
    default: true
  - name: dotnetVersion
    type: string
    default: '9.x'

stages:
  - stage: PR_Validation
    displayName: 'Build, Test, Validate'
    jobs:
      - job: Build
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - template: ../steps/install-dotnet-sdk.step.yaml
            parameters:
              dotnetVersion: ${{ parameters.dotnetVersion }}

          - template: ../steps/dotnet-restore-build.step.yaml

          - template: ../steps/dotnet-build.step.yaml
            parameters:
              projectPath: ${{ parameters.projectPath }}
              configuration: ${{ parameters.configuration }}

          - ${{ if eq(parameters.runTests, true) }}:
              - template: ../steps/dotnet-test.step.yaml

          - template: ../steps/dotnet-pack.step.yaml
            parameters:
              projectPath: ${{ parameters.projectPath }}
              configuration: ${{ parameters.configuration }}
```
Each step is modular, reusable, and easy to evolve.
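The post doesn't show the individual step files, but to make the composition concrete, here's a minimal sketch of what `steps/install-dotnet-sdk.step.yaml` might look like. The parameter name matches the stage template above; the `UseDotNet@2` task is the standard way to install an SDK on the agent:

```yaml
parameters:
  - name: dotnetVersion
    type: string
    default: '9.x'

steps:
  # Install the requested .NET SDK before restore/build/test/pack run.
  - task: UseDotNet@2
    displayName: 'Install .NET SDK ${{ parameters.dotnetVersion }}'
    inputs:
      packageType: 'sdk'
      version: ${{ parameters.dotnetVersion }}
```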
Package Publish Step
```yaml
parameters:
  - name: packagePath
    type: string
  - name: feedName
    type: string
  - name: serviceConnection
    type: string
    default: 'HoneyDrunk-AzureDevOps'

steps:
  - task: NuGetAuthenticate@1
    displayName: 'Authenticate with Azure Artifacts'
    inputs:
      nuGetServiceConnections: ${{ parameters.serviceConnection }}

  - task: DotNetCoreCLI@2
    displayName: 'Push to ${{ parameters.feedName }}'
    inputs:
      command: 'push'
      packagesToPush: ${{ parameters.packagePath }}
      nuGetFeedType: 'internal'
      publishVstsFeed: 'HoneyDrunk/${{ parameters.feedName }}'
      allowPackageConflicts: false
```
And to avoid brittle GUIDs, this script dynamically builds the feed URL:
```yaml
- script: |
    # Strip the https://dev.azure.com/ prefix (and trailing slash) from the
    # collection URI to get the organization name, then build the feed URL.
    ORG=$(echo "$(System.TeamFoundationCollectionUri)" | sed -e 's|https://dev.azure.com/||' -e 's|/$||')
    FEED_URL="https://pkgs.dev.azure.com/${ORG}/_packaging/${{ parameters.feedName }}/nuget/v3/index.json"
    echo "##vso[task.setvariable variable=FeedUrl]$FEED_URL"
  displayName: 'Compute Feed URL'
```
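The computed `$(FeedUrl)` can then be consumed by a later step. If you push with the .NET CLI instead of the `DotNetCoreCLI@2` task, a sketch might look like this; it assumes `NuGetAuthenticate@1` already ran in the same job and that packed packages sit in the artifact staging directory:

```yaml
- script: |
    # Push every packed .nupkg to the computed feed URL.
    # "az" is a placeholder API key; real auth comes from the credential
    # provider that NuGetAuthenticate@1 configured earlier in the job.
    for pkg in $(Build.ArtifactStagingDirectory)/*.nupkg; do
      dotnet nuget push "$pkg" --source "$(FeedUrl)" --api-key az --skip-duplicate
    done
  displayName: 'Push packages to $(FeedUrl)'
```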
No more copy-paste errors, no more mismatched feed identifiers.
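The post doesn't show `stages/dotnet-publish.stage.yaml` itself, so here's a rough sketch of how it might wire the consumer-facing parameters (including `dependsOn` and `condition`) through to the publish step above. The parameter names come from the consumer example; the stage name and the step file name are assumptions:

```yaml
parameters:
  - name: dependsOn
    type: string
    default: ''
  - name: condition
    type: string
    default: 'succeeded()'
  - name: packagePath
    type: string
  - name: feedName
    type: string

stages:
  - stage: Publish
    displayName: 'Pack & Publish'
    dependsOn: ${{ parameters.dependsOn }}
    condition: ${{ parameters.condition }}
    jobs:
      - job: Publish
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          # (pack / artifact-download steps omitted for brevity)
          - template: ../steps/package-publish.step.yaml   # hypothetical file name
            parameters:
              packagePath: ${{ parameters.packagePath }}
              feedName: ${{ parameters.feedName }}
```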
🧰 Troubleshooting & Lessons Learned
Permission Setup
Ensure your Azure Pipelines build identity (Project Build Service) has Contributor access to the feed.
Without it, pushes will silently fail.
Service Connections
Use NuGetAuthenticate@1 instead of personal PATs wherever possible — it leverages the build service identity.
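For a feed inside the same Azure DevOps organization, `NuGetAuthenticate@1` can even run with no inputs at all; it authenticates as the pipeline's build identity, and the service connection input only comes into play for feeds outside the organization:

```yaml
steps:
  # Same-organization feed: no service connection or PAT required.
  - task: NuGetAuthenticate@1
    displayName: 'Authenticate with Azure Artifacts (build identity)'
```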
Parameterization
Anything that can change — paths, feed names, versions — should be a parameter.
Hardcoded values are time bombs.
🚀 The Impact
✅ Shared templates mean all projects follow the same structure
✅ Versioning and naming stay consistent
✅ Permission errors are practically gone
✅ Onboarding a new project takes minutes
⚖️ Before vs After
| Before | After |
| --- | --- |
| Copy-paste massive YAML files | Reference templates |
| Search & replace project names | Set parameters |
| Debug GUIDs and PATs by trial and error | Push → it just works |
🐝 Why It Matters for Small Teams
For indie devs or small studios like mine:
- Time is precious. Every failed build costs momentum
- Context switching is brutal. You shouldn’t be debugging YAML daily
- Future-you will forget. Document it once, reuse forever
Treating pipelines as infrastructure is how you scale solo work.
🔮 Next Steps
- Auto-generate release notes from commits
- Add semantic versioning via conventional commits (see the sketch after this list)
- Extend templates to handle multiple languages and deployment targets
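For the semantic-versioning item, the likely shape is another shared step template built around GitVersion. This is a rough sketch that assumes the GitTools marketplace extension is installed; task names and version specs may differ in your setup, and conventional-commit bump rules are configured separately in GitVersion.yml:

```yaml
steps:
  # GitVersion derives a SemVer from repo history and exposes it
  # to later steps as $(GitVersion.SemVer).
  - task: gitversion/setup@0
    displayName: 'Install GitVersion'
    inputs:
      versionSpec: '5.x'

  - task: gitversion/execute@0
    displayName: 'Calculate semantic version'
```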
Build once. Reuse forever.
That’s how HoneyDrunk Studios automates the grid.
🧱 Originally published on TattedDev.com