
Niklas Engberg

Originally published at hackernoon.com

My way of working with internal NuGet packages


The company I’m working at is big enough to span multiple development teams. To share a common way of working and to keep us from reinventing the wheel over and over, we rely on shared NuGet packages.

I am the author and main contributor of many of these packages, and that requires me to spend some time maintaining them. To be able to do this efficiently I need a good way of working.

In this post I’m going to share my way of doing this, step by step.

Version control

To version control the packages, git is used. Git is a de facto standard and needs no further description :)

Naming

Decide on a good naming convention that will work for you. We prefix our packages with the company name, followed by dot-separated segments that narrow down the scope. For instance, CompanyA.AspNetCore.Mvc would be our own package with custom classes for ASP.NET Core MVC.

Project structure

To stay consistent, a project structure should be used. I base mine on this Gist by David Fowler. It is not followed strictly, but used as inspiration. For example, the test project is not separated into its own test folder; instead it is included in the src folder. This setup is done once; when I introduce additional packages I just copy the structure from another project. This structure could easily be extracted into a dotnet template.
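As a rough sketch, a package repository laid out this way might look something like the listing below; the folder and project names are placeholders, and the test project lives under src rather than in a separate tests folder.

CompanyA.AspNetCore.Mvc/
├── README.md
└── src/
    ├── CompanyA.AspNetCore.Mvc/
    │   ├── CompanyA.AspNetCore.Mvc.csproj
    │   └── CompanyA.AspNetCore.Mvc.nuspec
    └── CompanyA.AspNetCore.Mvc.Tests/
        └── CompanyA.AspNetCore.Mvc.Tests.csproj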

Nuspec file

Follow this reference when adding your nuspec file.

Tests

Make sure you cover your packages with unit tests. That will give you confidence that your code works as you expect it to work and will also help other contributors. I’m using xUnit and in some projects AutoFixture to maximize maintainability and reduce code in the Arrange phase.
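As an illustration, a test written with xUnit and AutoFixture could look roughly like the sketch below; StringExtensions.Truncate is a made-up package member used only for the example.

using AutoFixture;
using Xunit;

public class StringExtensionsTests
{
    private readonly Fixture _fixture = new Fixture();

    [Fact]
    public void Truncate_Returns_At_Most_MaxLength_Characters()
    {
        // Arrange: AutoFixture generates an arbitrary input string
        var input = _fixture.Create<string>();

        // Act: call the (hypothetical) extension method under test
        var result = input.Truncate(maxLength: 5);

        // Assert
        Assert.True(result.Length <= 5);
    }
}

AutoFixture keeps the Arrange phase down to a single line here; the exact test data is irrelevant, only the length constraint matters.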

Documentation

Add a README.md to the project that explains how to use the package. More on the README.md below in the section about Crafting the package. I am considering looking into Read the Docs as a complement.

How to introduce changes?

The workflow we use is Git Flow, which means that we do not push changes directly to the master branch, but rather to feature branches that are integrated into the develop branch. I am not going to go into detail about Git Flow here.

When a feature is done, a pull request is opened and stakeholders are invited to review it. If all goes well, the feature is merged back to develop and is ready for the next release. The git tag should reflect the version to be released. I explain my process for doing this in the section Crafting the package.

Breaking changes or not?

So, what is a breaking change?

A change in one part of a software system that potentially causes other components to fail; occurs most often in shared libraries of code used by multiple applications — Wikipedia

If you alter the definition of an existing method that you expose to consumers, or remove properties from an object, that is a breaking change. Make sure you have a consuming test client targeting your packages so that you can verify that the changes you’ve made behave as intended.

When it’s time to update an existing package you need to take your consumers into consideration. Ask yourself whether the change you are introducing is breaking or not. If it isn’t, you’re fine; the change should only bump the minor and/or patch version.

If you are about to introduce a breaking change you need to keep support for the deprecated feature for at least one version before removing it. I tend to do it this way:

  1. Add the new feature
  2. Deprecate the old feature by adding an ObsoleteAttribute and describe how the consumer should migrate.
  3. Bump the minor version and release a new version

In the next major release, remove the obsoleted feature. In this way the consumer is given some time to migrate their existing code to use the new version.
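A minimal sketch of steps 1 and 2, assuming a made-up PriceCalculator class in one of the packages: the new overload is added, and the old one is kept but marked with an ObsoleteAttribute whose message tells the consumer how to migrate.

using System;

public static class PriceCalculator
{
    // 1. The new feature that consumers should migrate to.
    public static decimal CalculatePrice(decimal amount, string currencyCode)
    {
        // ...actual calculation omitted; simplified for the sketch
        return amount;
    }

    // 2. The old feature is kept for at least one more version,
    //    but deprecated with a hint on how to migrate.
    [Obsolete("Use CalculatePrice(decimal, string) instead. This overload will be removed in the next major version.")]
    public static decimal CalculatePrice(decimal amount)
    {
        return CalculatePrice(amount, "SEK");
    }
}

Consumers who still call the old overload get a compiler warning with the migration hint instead of a hard break, and the member can then be deleted in the next major release.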

Crafting the package

Before publishing a new package I make sure that:

  1. The dependencies in the nuspec file are updated and the supported target frameworks are in place
  2. The README.md is updated with new functionality and examples
  3. Versioning is correct

Dependencies in nuspec

There is a section in the nuspec manifest that lets you specify the dependencies that the package has. Even if a package only supports one target framework, its dependencies are declared in a dependency group. That makes it easy to add additional target frameworks in the future.
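For illustration, such a dependencies section might look something like this; the package id, version and target framework are placeholders:

<dependencies>
  <group targetFramework=".NETStandard2.0">
    <dependency id="CompanyA.Core" version="1.0.0" />
  </group>
</dependencies>

Adding support for another target framework later is then just a matter of adding a second group element.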

Updating the README.md

To make sure the README.md stays easy to follow, I use Markdown Live Preview when editing it. That gives me a visual preview of how the file will be rendered. I then copy the content into my README.md and commit it.

Versioning the package

I’m using semantic versioning (MAJOR.MINOR.PATCH). To version the package, the version number must be consistent in *.csproj, .nuspec and the git repository. For this I’m utilizing custom-made PowerShell scripts.

The assembly version is bumped by running this script from the src folder:

.\bump-assembly-version.ps1 -Version 0.0.1

The nuspec version is bumped by running this script, also from the src folder:

.\bump-nuspec-version.ps1 -Version 0.0.1

I then commit those changes:

git commit -am "Incremented project version"

Before pushing anything, I make sure that my repository is tagged with the same version, so that we have a tag that corresponds to the release:

git tag -a 0.0.1 -m "Release 0.0.1"

(All of the above could easily be built into one single script file. I still do it separately but might consolidate in the future :))

And then push the tag:

git push origin 0.0.1

And the code:

git push

Publishing the package

There are different ways you can do this: either locally, using the NuGet CLI to push the package to your feed, or by letting your CI service do the work.
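Doing it locally roughly boils down to packing from the nuspec and pushing the resulting package with the NuGet CLI; the package name, feed URL and API key below are placeholders for your own internal feed.

nuget pack CompanyA.AspNetCore.Mvc.nuspec

nuget push CompanyA.AspNetCore.Mvc.0.0.1.nupkg -Source https://nuget.example.com/nuget/internal/ -ApiKey <your-api-key>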

I am letting Azure DevOps and Azure Pipelines do this. The build process consists of standard .NET build and test tasks. If you are interested in how that might look, you can find examples here.

Finally, the NuGet package is created based on the nuspec file and then pushed to our internal NuGet feed. For this to work seamlessly, a service connection to our NuGet feed is added in Azure DevOps. The package manager we use is ProGet.

When the package is published I usually notify all the developers on Slack that new packages are available and take any further discussion from there.

This is a process that works for me. If you have any opinions or other ways of doing this, I’d like to hear them. What features should be extracted into NuGet packages really depends on your company and how you share code between teams. I prefer to have a consistent way of doing things, and I’ve noticed that we save a lot of time in our projects when we can reuse good things that others have already built.

If you liked the post, you know what to do! 👏


Top comments (5)

Rafal Pienkowski

Nice article.

Could you tell me how many NuGet packages you have in your company? I’d just like to know a rough number (dozens, hundreds, thousands).

My second question is how you manage the dependencies between NuGet packages. I mean, what do you do in the situation where package A depends on package X version 1.0, package B depends on package X version 2.0, and your project uses both packages A and B? What is your strategy in that situation?

Niklas Engberg

Thank you Rafal!

We have a dozen packages at the moment.

Good question about managing dependencies. I actually ran into an issue related to this earlier this week. What I did in that situation was update package A to depend on the same version as package B and release a new version. I think that is fine, since it is a major release and breaking changes are expected between 1.0 and 2.0. I always strive to communicate what the breaking changes are so that the consumer is well aware of what has been removed or changed.

It would be interesting to hear how you've solved it. :)

/Niklas

Rafal Pienkowski

Your solution is perfectly fine for me. I'd do the same and align the versions of the dependent packages. In most cases it was a quite straightforward operation. In the long run, though, it became quite a big pain in the neck because the number of dependencies between NuGet packages grew rapidly.
We decided to rearrange our packages in a way that reduces their number. Some of them I consolidated; some of them we put into a simple library (depending on our needs). The number of NuGet packages decreased by 60%. If the problem still occurs, we use binding redirects in app.config files, like in the example below:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
    </startup>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="log4net" publicKeyToken="669e0ddf0bb1aa2a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-5.0.0.0" newVersion="1.2.11" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

My lessons learned:

  • Use NuGet packages only for independent/generic parts of the solution. Referring to DDD: the shared kernel and generic contexts.
  • Check whether creating a NuGet package is really required or whether it could be replaced with a simple DLL library. Example question: is this library needed in more than one repository?
  • Use NuGet packages for stable projects. It's annoying when a NuGet package version is updated multiple times a day.

To sum up, I'm not saying that NuGet packages are evil, but it's easy to end up in NuGet hell in your project. A pragmatic approach is essential.

Of course, these are only my thoughts based on my experience. I hope they somehow help you too.

Niklas Engberg

Yeah this is definitely a learning process. I’m pretty sure the way we are doing it will in some way change in the future. Time will tell. :)

Most of the packages are library functions that we do not want to write over and over again. If a bug fix or update is necessary we only have to do it in one place. It is also a motivator to get people more involved.

Thanks for sharing your experience!

Best
Niklas

Rafal Pienkowski

That's true. As I mentioned earlier, I replaced NuGet packages with plain libraries in cases where they weren't needed in separate repositories. My motivation was to remove abstraction that wasn't used (by me). Maybe in the future I'll be able to utilize the power of NuGet 100%. Fingers crossed.

Cheers!