<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: martinjt</title>
    <description>The latest articles on DEV Community by martinjt (@martinjt).</description>
    <link>https://dev.to/martinjt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F645509%2F49e2a414-02b1-4ed4-af14-6a43274d7bc3.png</url>
      <title>DEV Community: martinjt</title>
      <link>https://dev.to/martinjt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/martinjt"/>
    <language>en</language>
    <item>
      <title>Evil Monkeypatching in C# with Roslyn Source Generators</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Tue, 03 May 2022 15:48:16 +0000</pubDate>
      <link>https://dev.to/martinjt/evil-monkeypatching-in-c-with-rosyln-source-generators-4g6f</link>
      <guid>https://dev.to/martinjt/evil-monkeypatching-in-c-with-rosyln-source-generators-4g6f</guid>
      <description>&lt;p&gt;I’ve been working on an OSS project recently where I wanted to seamlessly redirect a call that a developer thinks they’re using to do some additional bits. I couldn’t find any real documentation on this, so I thought I’d investigate some ways to do it. Special Thanks to &lt;a href="https://twitter.com/markrendle"&gt;Mark Rendle&lt;/a&gt; for helping me with being evil.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Now I have to fully qualify my classes right back to global:: because you’re evil&lt;/p&gt;

&lt;p&gt;&lt;cite&gt;Mark Rendle&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;What is Monkeypatching?&lt;/h2&gt;

&lt;p&gt;If you’re unfamiliar with the term &lt;a href="https://en.wikipedia.org/wiki/Monkey_patch"&gt;Monkey patching&lt;/a&gt;, it’s the process of changing some code “on-the-fly”, where the author of the code didn’t necessarily intend that behaviour. It isn’t, however, something you can generally do in C# (or .NET). There are some libraries like &lt;a href="https://github.com/pardeike/Harmony"&gt;Harmony&lt;/a&gt; that can do it partially, but they’re based around your code running and patching the libraries at runtime. The general premise is that you want to redirect a call from the method the code thought it was going to call to some other method. This can be incredibly useful if you want to change or fix the functionality of something you don’t control. Khalid (&lt;a href="https://twitter.com/buhakmeh"&gt;@buhakmeh&lt;/a&gt;) has a post on some of these approaches &lt;a href="https://khalidabuhakmeh.com/fix-dotnet-dependencies-with-monkey-patching"&gt;here&lt;/a&gt;, and we’re going to look at an alternative that has a very narrow use case.&lt;/p&gt;

&lt;h2&gt;What are Roslyn Source generators?&lt;/h2&gt;

&lt;p&gt;If you’re not familiar with source generators, they’re a relatively new piece of .NET functionality that allows you to generate source code at build time, without the developer seeing it. They can be used in a few different ways; in particular, they’re really useful for letting users apply attributes to generate additional code. However, that’s not what we’re going to do here, as we don’t want the developer to have to change anything.&lt;/p&gt;

&lt;h2&gt;The Code&lt;/h2&gt;

&lt;p&gt;There is sample code &lt;a href="https://github.com/martinjt/monkeypatch-example"&gt;here&lt;/a&gt;, and each commit shows the different phases. In this basic example, we’ll redirect our own code that is using &lt;code&gt;System.Console&lt;/code&gt; to our new &lt;code&gt;PrefixConsole&lt;/code&gt;.  Our new class prepends “WithPrefix: ” to all our console &lt;code&gt;WriteLine&lt;/code&gt; calls.&lt;/p&gt;

&lt;h3&gt;Phase 1 – We hate ourselves&lt;/h3&gt;

&lt;p&gt;This is obviously not a real-world example of what someone would do (unless they’re REALLY evil and hate themselves).  What we’re proving here is that we can redirect a call in our code from one type (in this case the &lt;code&gt;Console&lt;/code&gt; class from the Framework) to another type (&lt;code&gt;PrefixConsole&lt;/code&gt;).  We’ll come onto something a little more interesting soon.&lt;/p&gt;

&lt;p&gt;First, we’ll create a clean new console app&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet new console &amp;amp;&amp;amp; dotnet run

Hello World!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next is our new console prefixer class; you’ll see it’s a normal C# class, nothing special.  Internally, it calls the &lt;code&gt;System.Console&lt;/code&gt; class.  The full namespace is important here, otherwise you’ll end up in an infinite loop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
namespace monkeypatch_test;

public static class PrefixConsole
{
    public static void WriteLine(string text)
    {
        // Fully qualifying System.Console avoids the alias, and with it
        // an infinite loop back into this method.
        System.Console.WriteLine("WithPrefix: " + text);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we’ll add a using alias in our class that will override the usage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
using Console = monkeypatch_test.PrefixConsole;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, that’s not bad.  We’re pointing our own class at our own replacement, so we can see where it’s happening, and it’s pretty obvious where we’re sending the calls.  When we use our IDE to go to the definition, it will take us to our &lt;code&gt;PrefixConsole&lt;/code&gt; class, so there’s nothing dodgy here, just a little indirection that we probably didn’t need…&lt;/p&gt;

&lt;p&gt;Now, let’s take this a step further to annoy the rest of our team.&lt;/p&gt;

&lt;h3&gt;Phase 2 – We hate our team mates (Global &lt;code&gt;using&lt;/code&gt;)&lt;/h3&gt;

&lt;p&gt;So now, let’s move that using statement, as it’s too obvious what we’re doing.  Also, we want ALL the usages of &lt;code&gt;Console&lt;/code&gt; to be prefixed, and we’re too lazy to go into every class file and do it.  So let’s use a global using.&lt;/p&gt;

&lt;p&gt;Global using statements came in with C# 10.  They allow you to really cut down on the bloat in your files.  If you’ve used Razor templates before, you’ll have done something similar when you added all the using statements to &lt;code&gt;_ViewImports.cshtml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We’ll add this in a file called &lt;code&gt;Globals.cs&lt;/code&gt;, though the name doesn’t matter.  Then we’ll remove the using statement from the original file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
global using Console = monkeypatch_test.PrefixConsole;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will point all references to &lt;code&gt;Console&lt;/code&gt; (that aren’t fully qualified) to our new &lt;code&gt;PrefixConsole&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So, that’s getting bad now.  It’s not obvious from looking at the code of your class that you’re being redirected.  At least, however, your IDE will navigate you to the right class.&lt;/p&gt;

&lt;p&gt;So, now, let’s make things even worse.&lt;/p&gt;

&lt;h3&gt;Phase 3 – We hate everyone (Source generators)&lt;/h3&gt;

&lt;p&gt;In the previous 2 phases, we’ve been using our class in our project.  That’s annoying, but not that bad.  It’s useful if you want to provide some consistency, and even if that class is in a NuGet package, it’s not terrible.&lt;/p&gt;

&lt;p&gt;If it’s in a NuGet package though, people need to add a pesky line of code to their solution to do that redirection.  That’s pretty bad, why would we want people who use our library to write MORE lines of code?&lt;/p&gt;

&lt;p&gt;So let’s use a source generator to do that redirection; that will REALLY mess people up and make us popular everywhere.&lt;/p&gt;

&lt;p&gt;First, let’s move that class into a new class library called &lt;code&gt;EvilConsolePrefixer&lt;/code&gt; (because this is where we get a little evil).  We can also remove our &lt;code&gt;Globals.cs&lt;/code&gt; now.&lt;/p&gt;

&lt;p&gt;We can then add the SourceGenerator packages to our new library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package Microsoft.CodeAnalysis.Analyzers
dotnet add package Microsoft.CodeAnalysis.CSharp.Workspaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
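&lt;p&gt;For reference, the generator library’s project file needs to target &lt;code&gt;netstandard2.0&lt;/code&gt;, since generators are loaded into the compiler itself. A minimal sketch (the package versions here are illustrative, not taken from the sample repo):&lt;/p&gt;

```xml
&lt;Project Sdk="Microsoft.NET.Sdk"&gt;
  &lt;PropertyGroup&gt;
    &lt;!-- Source generators must target netstandard2.0 to load into the compiler --&gt;
    &lt;TargetFramework&gt;netstandard2.0&lt;/TargetFramework&gt;
    &lt;LangVersion&gt;latest&lt;/LangVersion&gt;
  &lt;/PropertyGroup&gt;
  &lt;ItemGroup&gt;
    &lt;PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.3" PrivateAssets="all" /&gt;
    &lt;PackageReference Include="Microsoft.CodeAnalysis.CSharp.Workspaces" Version="4.1.0" /&gt;
  &lt;/ItemGroup&gt;
&lt;/Project&gt;
```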



&lt;p&gt;Now we can add the really evil part.  We’re going to tell the generator to add the equivalent of our &lt;code&gt;Globals.cs&lt;/code&gt; to the main project at compile time, without the developer knowing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
using Microsoft.CodeAnalysis;

[Generator]
public class EvilConsolePrefixerGenerator : ISourceGenerator
{
    public void Execute(GeneratorExecutionContext context)
    {
        // Adds a file to the consuming project's compilation that
        // contains only the global using alias.
        context.AddSource("Globals", "global using Console = EvilConsolePrefixer.PrefixConsole;");
    }

    public void Initialize(GeneratorInitializationContext context)
    {
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will add a source file into the compile pipeline with a single line that will do our redirection.  Why do it this way, and not just add a global using inside the &lt;code&gt;EvilConsolePrefixer&lt;/code&gt; project? Global usings (and usings in general) are scoped to the project, so they wouldn’t carry over into the calling project. Using a source generator like this means that our global using will be added to the main project as if it were code that the developer wrote.&lt;/p&gt;

&lt;p&gt;All that remains is to add the EvilConsolePrefixer project as a reference to our main project.  As we’re doing this locally (i.e. not through NuGet), we’ll need to add an additional attribute to the import.  This isn’t required if we use a NuGet package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  &amp;lt;ItemGroup&amp;gt;
    &amp;lt;ProjectReference Include="..\EvilConsolePrefixer\EvilConsolePrefixer.csproj" OutputItemType="Analyzer"/&amp;gt;
  &amp;lt;/ItemGroup&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The additional attribute is &lt;code&gt;OutputItemType="Analyzer"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What makes this evil you might ask?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Mark Rendle said it’s Evil and I shouldn’t do it.&lt;/li&gt;
&lt;li&gt;We shouldn’t redirect calls, it makes code hard to reason about as your context isn’t correct.&lt;/li&gt;
&lt;li&gt;Redirecting at compile time like that could break your user’s compilation as your new code may not have all the methods and properties that the original class did.&lt;/li&gt;
&lt;li&gt;IDEs will navigate the user to the original class, not the one being injected at compile time.&lt;/li&gt;
&lt;li&gt;Decompiling the solution will look like the developers directly referenced your code, which isn’t actually correct.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;So why would you do this, Martin?&lt;/h2&gt;

&lt;p&gt;I came across this as I was trying to find a way to add a shim onto a sealed class from the Microsoft BCL.  The goal was to provide a package that allowed people using that class to get a wrapper very easily, without having to change their code.&lt;/p&gt;

&lt;p&gt;Unfortunately, extension methods can’t override normal methods, and because this class (&lt;code&gt;ActivitySource&lt;/code&gt;) doesn’t come from any kind of factory, and was &lt;code&gt;sealed&lt;/code&gt;, I couldn’t inherit and override.&lt;/p&gt;

&lt;p&gt;In addition, what I wanted to use in my new code was &lt;code&gt;[CallerLineNumber]&lt;/code&gt; and &lt;code&gt;[CallerFilePath]&lt;/code&gt;, which need to be resolved at build time, as that’s the only time the information is available (without PDBs).  You couldn’t, for instance, use an existing interface without those parameters and simply add them to your class that implements the interface.&lt;/p&gt;
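&lt;p&gt;To show what I mean by build time, here’s a hypothetical wrapper (&lt;code&gt;TracedConsole&lt;/code&gt; is purely an illustration, not the actual library code). The compiler substitutes the caller info at each call site when the caller is compiled, which is why the wrapper has to be in the compilation path:&lt;/p&gt;

```csharp
using System.Runtime.CompilerServices;

public static class TracedConsole
{
    // [CallerFilePath] and [CallerLineNumber] are filled in by the
    // compiler at the call site, which is why this can't be hidden
    // behind an interface defined without these parameters.
    public static void WriteLine(
        string text,
        [CallerFilePath] string filePath = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        System.Console.WriteLine($"{filePath}:{lineNumber} - {text}");
    }
}
```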

&lt;p&gt;This solution works for that use case, and there’ll be a blog post about the library I’m creating soon.  Despite it being evil, it serves a very real benefit.  I just hope that this isn’t misused, and removed at some point in the future.&lt;/p&gt;

</description>
      <category>net</category>
      <category>development</category>
      <category>sourcegenerators</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Estimates are not metrics</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Sun, 06 Jun 2021 14:44:30 +0000</pubDate>
      <link>https://dev.to/martinjt/estimates-are-not-metrics-51hd</link>
      <guid>https://dev.to/martinjt/estimates-are-not-metrics-51hd</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post has been inspired by the countless &lt;del&gt;arguments&lt;/del&gt; conversations I have with my good friend David Whitney who has produced his own post about estimation that should also read: &lt;a href="https://www.davidwhitney.co.uk/Blog/2021/02/10/agile_software_estimation_for_everyone"&gt;https://www.davidwhitney.co.uk/Blog/2021/02/10/agile_software_estimation_for_everyone&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Estimation is an intrinsic part of software development; it is, however, often used wrongly in my opinion. Within a small iteration of a development team, the act of estimation itself is an important tool to understand complexity, perform slicing, and share understanding; the “number” that may be produced as part of that is less important. In contrast, when you’re trying to establish a time to market, or a budget, the rolled-up “number” is important. In this post I’m going to discuss what is wrong, and how to frame this with your teams, managers, and leaders.&lt;/p&gt;

&lt;p&gt;I’ll state this so it’s not ambiguous: I HATE story points. I believe the only unique value they provide is a stick to beat the team with when velocity changes, or they get the number wrong. Every other perceived value can be achieved much more easily, and with more advantages, using other means.&lt;/p&gt;

&lt;p&gt;Further clarification: if you arrived here expecting that I’d give you a roadmap to remove estimation from your company, move to a “NoEstimates” culture, or provide a reference for “Martin said estimation is bad”, you’re in the wrong place. NoEstimates doesn’t mean we don’t estimate; it’s just a hashtag that Woody created to start a conversation with like-minded people about the use (and misuse) of estimates in software development, and how we can do better.&lt;/p&gt;

&lt;h2&gt;What is “Estimation”?&lt;/h2&gt;

&lt;p&gt;So what exactly are we referring to when it comes to “Estimation” in software development?&lt;/p&gt;

&lt;p&gt;Let’s start with estimation as a term in general vocabulary. Estimation isn’t a software development concept; we perform all sorts of estimation in our daily lives, from estimating the impact of rush hour on our travel to work, to how much we’ll need to save each month. We do all of this without really thinking about it, and without complaining, and the reason is that we know “why” we’re doing it.&lt;/p&gt;

&lt;p&gt;According to Dictionary.com:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;to form an approximate judgment or opinion regarding the worth, amount, size, weight, etc., of; calculate approximately&lt;br&gt;
&lt;cite&gt;&lt;a href="https://www.dictionary.com/browse/estimate"&gt;Estimate | Definition of Estimate at Dictionary.com&lt;/a&gt;&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Essentially an estimate is a “guess” at a specific measure of something based on information you’ve been given. Let that sink in, it’s a GUESS. It’s sometimes influenced by analysis, or investigation on the subject, however, it’s still a guess.&lt;/p&gt;

&lt;p&gt;Most importantly, an estimate isn’t just one thing; it has “dimensions” such as size, weight, duration, value, etc. Therefore, in order to know which dimension we should use, we need to know how our estimate is going to be used. Why do we estimate how long it will take to get to the office in rush hour? It’s to work out when we need to set off; estimating the number of miles we’ll travel around the back streets of London to get around the traffic won’t necessarily help.&lt;/p&gt;

&lt;p&gt;Estimating in software development is no different. Asking a team to come up with arbitrary numbers (story points, t-shirt sizes, etc.) is pointless if they don’t know why, and how, they’re going to be used. Further, story points and t-shirt sizes hold up better at larger abstraction levels than smaller ones.&lt;/p&gt;

&lt;h2&gt;Bad estimates&lt;/h2&gt;

&lt;p&gt;The things I would call “bad estimates” can normally be categorised as estimates that are subsequently used as “metrics” to be monitored. David Whitney described estimates as “like a shotgun, accurate at close range, and useless at long range”, and this rings true when you start to think about estimates as a measure. If they aren’t accurate beyond the next few iterations, they can’t (and shouldn’t) be monitored beyond that.&lt;/p&gt;

&lt;p&gt;In short, don’t take those story points, multiply by 1.5, put that in a Gantt chart, and beat the team when they’re not at the right milestones based on those guesses they made. This practice has, over time, forced developers to be over-cautious, non-committal, and to push against the estimation process in general, resulting in everything from overly conservative estimates (“It’ll take 2 years at least”) to a complete refusal to engage.&lt;/p&gt;

&lt;h2&gt;Estimation is a discovery task&lt;/h2&gt;

&lt;p&gt;One thing that David and I agree on is that the act of performing estimation at various levels is an important part of discovery. If this is done with the team doing the work, and facilitated in the right way, you can get accurate ranges at high levels of abstraction.&lt;/p&gt;

&lt;p&gt;Performing the estimation work will help the teams understand lots of things about the work being proposed including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complexity of the ask (what specialisms might be needed, Automation, performance, compliance)&lt;/li&gt;
&lt;li&gt;Slicing (how can we split this up and deliver in smaller chunks)&lt;/li&gt;
&lt;li&gt;Dependency (internal or external things that will dictate when it can be completed)&lt;/li&gt;
&lt;li&gt;Spikes/Investigations (do we need to run smaller discoveries to prove something)&lt;/li&gt;
&lt;li&gt;External factors affecting delivery (Certification, procurement, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these are incredibly valuable to the software development process, and are an important by-product of estimation. They can help with the mental wellbeing of the team by ensuring that small victories can be celebrated, and ensure that the team doesn’t over-commit in smaller cycles.&lt;/p&gt;

&lt;h2&gt;Outcomes not Outputs&lt;/h2&gt;

&lt;p&gt;When estimates become metrics, you’re actually measuring the output of your team, and not the outcome. That is to say, you’re not measuring the “ask” of the team, but rather metrics that are by-products of that, in the hope that they’re indicative of the outcome you wanted.&lt;/p&gt;

&lt;p&gt;When you move into a product business with product teams, the focus is no longer on the output of the teams; teams are autonomous and measured by the outcomes they’re asked to produce. In these teams, it’s simply not acceptable to say “but we have a predictable velocity”, or “we did all these story points though”; these just aren’t how the teams are measured anymore. The team will test, learn, adapt, and focus on constant feedback with regular value delivery.&lt;/p&gt;

&lt;p&gt;In these sorts of teams, the value of an overall estimate based on time is at best irrelevant, at worst detrimental to the team’s focus and goals. This is where it becomes important to think about the reason for estimation, and establish what the end goal of the estimation is.&lt;/p&gt;

&lt;h2&gt;Making Estimation work&lt;/h2&gt;

&lt;p&gt;Different types of estimation WILL work for different use cases. There is never a one-size-fits-all approach to providing any kind of estimate.&lt;/p&gt;

&lt;p&gt;Estimation can work in Software Development, and the key is knowing “Why” you’re either making estimates, or providing estimates to others. This is what allows us to provide the right “kind” of estimate, at the right level, and also inject the right amount of effort into providing the estimate in the first place. These are the sorts of questions you should be asking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What will it be used to inform? (Project Budgets, Resourcing, Customer commitments)&lt;/li&gt;
&lt;li&gt;What’s the risk of it being too long/short/high/low/expensive/cheap? (Project green lights, regulatory, reputation)&lt;/li&gt;
&lt;li&gt;What does success look like for the estimate? (Delivery time, Cost, Profit)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What we’re trying to establish is the “value” of the estimate we’re giving, so we can apply the appropriate amount of time and effort to producing it.&lt;/p&gt;

&lt;h2&gt;Monitoring the estimate&lt;/h2&gt;

&lt;p&gt;Once you’ve provided an estimate on a large project, you’ll often find that people then want to use that to “track” progress against that “guess”. This is where estimates become toxic. This is specific to Large projects that span multiple months/business cycles.&lt;/p&gt;

&lt;p&gt;Estimating the cost/timeline provides an endpoint for the project. This is the definition of success (or failure) of the project. It doesn’t mean that you can use it to monitor whether the project is on track on a regular basis.&lt;/p&gt;

&lt;p&gt;When you start to question the stakeholders about the reason for the monitoring, what I’ve found is that what they really want to know is “Are we on track to deliver the project based on what was projected?”. I’ve never found that monitoring at a granular level can give the right level of confidence to the stakeholders.&lt;/p&gt;

&lt;p&gt;So what’s the answer? Milestones. A milestone is a point in time that you can use to check whether you’ve completed the required things that will get you towards that outcome. These don’t need to be “finished” things; they shouldn’t be a “number of stories” or an “amount of points”. They should, however, give you an indication that you’re on the right road, that you’re heading in the right direction, at the right pace. I would also encourage you to see these not as a date (i.e. 12:36pm on March 25th), but more as a vague range like “End of March” or “Before the summit”.&lt;/p&gt;

&lt;p&gt;Breaking up a larger project into a series of milestones that provide tangible outputs is BY FAR the best way to give stakeholders the confidence that the project is heading in the right direction. These milestones should have defined outcomes that are tangible, demoable, and show progression towards the overall outcome of the project.&lt;/p&gt;

&lt;p&gt;Defining the milestones is something that you should do as a collaborative task with the WHOLE team, and include the stakeholders. If the stakeholders don’t want to be involved in that, they likely only care about the end goal, and therefore the monitoring isn’t really important.&lt;/p&gt;

&lt;p&gt;There is literally no better way to give the stakeholders the confidence that you know what you’re doing, and that you’re on time/budget. There is no amount of numbers, spreadsheets, Gantt charts or (god forbid) powerpoints that can replace actually talking to people regularly, and showing them the things they asked for.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;To summarise my stance on this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large project estimation is important, and relatively easy to achieve with a degree of variance. &lt;/li&gt;
&lt;li&gt;Monitoring of the progress of Large projects should use different information than estimation.&lt;/li&gt;
&lt;li&gt;Estimation shouldn’t be used to produce a number that is tracked.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In general, moving to outcome tracking on smaller iterations is where I want all my teams to be. This is all about empowerment, and trusting the teams to deliver value rather than work to metrics that don’t map to business value.&lt;/p&gt;

</description>
      <category>estimation</category>
      <category>noestimates</category>
      <category>teammonitoring</category>
      <category>agile</category>
    </item>
    <item>
      <title>Deploying .NET 5 Azure functions with Pulumi and GitHub Actions</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Mon, 03 May 2021 21:08:30 +0000</pubDate>
      <link>https://dev.to/martinjt/deploying-net-5-azure-functions-with-pulumi-and-github-actions-2dbp</link>
      <guid>https://dev.to/martinjt/deploying-net-5-azure-functions-with-pulumi-and-github-actions-2dbp</guid>
      <description>&lt;p&gt;In this post I’ll show you how to deploy the .NET 5 “Out of process” azure functions using Pulumi. We’ll be using a GitHub action to build the code, which will also create the infrastructure too, then deploy the function to that infrastructure. In this example, we’ll be using a Azure Blob Storage to store the state of our Pulumi stack.&lt;/p&gt;

&lt;p&gt;If you’d just like to view the solution, you can find the code here: &lt;a href="https://github.com/martinjt/pulumi-dotnet5"&gt;martinjt/pulumi-dotnet5 (github.com)&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What is Pulumi?&lt;/h2&gt;

&lt;p&gt;Pulumi is an Infrastructure as Code framework that allows you to declare your infrastructure in the same language you write your code in. In this case, we’ll be writing it in C# to match the code of our function.&lt;/p&gt;
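&lt;p&gt;As a flavour of what that looks like, here’s a minimal sketch assuming the &lt;code&gt;Pulumi&lt;/code&gt; and &lt;code&gt;Pulumi.AzureNative&lt;/code&gt; packages. The names here are illustrative, not the exact code from the sample repo:&lt;/p&gt;

```csharp
using System.Threading.Tasks;
using Pulumi;
using Pulumi.AzureNative.Resources;

class MyStack : Stack
{
    public MyStack()
    {
        // Declaring a resource is just constructing an object;
        // `pulumi up` works out what to create, update or delete.
        var resourceGroup = new ResourceGroup("functions-rg");
    }
}

class Program
{
    static Task&lt;int&gt; Main() =&gt; Deployment.RunAsync&lt;MyStack&gt;();
}
```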

&lt;h2&gt;What are GitHub Actions?&lt;/h2&gt;

&lt;p&gt;GitHub Actions are free build runners provided by GitHub that run against your GitHub repo. You currently get 2,000 free build minutes on a personal account. You can use them to do anything, and we’ll be using them to build and deploy our code to Azure for free!&lt;/p&gt;
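&lt;p&gt;A workflow is just a YAML file under &lt;code&gt;.github/workflows&lt;/code&gt; in the repo. As a rough sketch of the shape (the real workflow in the sample repo has more steps than this):&lt;/p&gt;

```yaml
name: build-and-deploy
on: workflow_dispatch   # run manually from the Actions tab

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the function app
        run: dotnet publish -c Release
```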

&lt;h2&gt;Pre-requisites&lt;/h2&gt;

&lt;p&gt;For this tutorial, you’ll need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An Azure Subscription to deploy to
(You’ll need Contributor rights for Storage Accounts, Blobs, AppService/Function apps)&lt;/li&gt;
&lt;li&gt;An Azure Storage blob for storing the state&lt;/li&gt;
&lt;li&gt;Access to (or the ability to create) a service principal.&lt;/li&gt;
&lt;li&gt;Fork of the repository at &lt;a href="https://github.com/martinjt/pulumi-dotnet5"&gt;https://github.com/martinjt/pulumi-dotnet5&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Step 1 – Creating a Service Principal&lt;/h2&gt;

&lt;p&gt;You’ll need to have a service principal that Pulumi can use to create the resources in Azure. Behind the scenes, Pulumi is hitting the Azure REST API, and for that it needs credentials.&lt;/p&gt;

&lt;p&gt;If you have a service principal, then you can skip this part.&lt;/p&gt;

&lt;p&gt;The easiest way I’ve found to do this is using the Azure CLI. Note that, currently, the CLI creates a Service Principal with Contributor permissions (which is what you’ll need here), however there is a message saying this will not always be the case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
az login
az account set -s &amp;lt;subscriptionId&amp;gt;
az ad sp create-for-rbac --name pulumi-tests

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above commands, you’ll notice that I’m setting the subscription as default. This is something that I would recommend, but you could equally just pass the &lt;code&gt;subscriptionId&lt;/code&gt; on the command to create the principal.&lt;/p&gt;

&lt;p&gt;This should return you a JSON blob that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "appId": "",
  "displayName": "pulumi-tests",
  "name": "http://pulumi-tests",
  "password": "",
  "tenant": ""
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep hold of this as we’ll need it in a subsequent step.&lt;/p&gt;

&lt;h2&gt;Step 2 – Hosted State file setup&lt;/h2&gt;

&lt;p&gt;We’ll be storing our Pulumi state file in Azure Blob Storage, so that will need creating manually.&lt;/p&gt;

&lt;p&gt;Pulumi maintains an “internal” dictionary of all the resources it’s created as part of the stack, and how those map to the things it needs to create. As GitHub Actions runners don’t have a shared place to keep these things, we need a persistent store for them. We’ll be using Azure Blob Storage for this.&lt;/p&gt;

&lt;p&gt;You’ll need a storage account (using an existing one is completely fine). I’d recommend storing this in the same subscription as the infrastructure it’s managing (i.e. if you have a subscription for Dev, test and prod, put the state files for each environment in those).&lt;/p&gt;

&lt;p&gt;The only recommendation I’d make when creating it is to disable public access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/05/create-storage-account-disable-public.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IB4n8npm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/05/create-storage-account-disable-public.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Screenshot of the “Enable blob public access” checkbox for blob storage&lt;/p&gt;

&lt;p&gt;Once you have a storage account, you’ll need to create a blob within it. Note down the names of both the storage account and the blob, as these will be needed in the subsequent steps.&lt;/p&gt;

&lt;h2&gt;Step 3 – GitHub Secrets&lt;/h2&gt;

&lt;p&gt;For the pipeline to run, you’ll need to add a couple of secrets that will grant access to the Azure Blob storage, the Infrastructure, and also encrypt the state created by Pulumi. You’ll also need the variables for the Storage account name, and the blob name.&lt;/p&gt;

&lt;p&gt;Within your fork, set up the following secrets in Settings =&amp;gt; Secrets =&amp;gt; New Repository Secret.&lt;/p&gt;

&lt;p&gt;Note: This works equally well with Organisation or Environment secrets if you’re using them.&lt;/p&gt;

&lt;h3&gt;Service Principal&lt;/h3&gt;

&lt;p&gt;The Github Action workflow is setup to use a secret called &lt;code&gt;AZURE_PRINCIPAL&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_PRINCIPAL }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This comes from the Action provided by Azure, and expects the following JSON format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{  
  "clientId": "&amp;lt;appId&amp;gt;",
  "clientSecret": "&amp;lt;password&amp;gt;",
  "tenantId": "&amp;lt;tenant&amp;gt;",
  "subscriptionId": "&amp;lt;subscriptionId&amp;gt;"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These should all be available from the principal that was setup in Step 1.&lt;/p&gt;

&lt;h3&gt;Pulumi Secret&lt;/h3&gt;

&lt;p&gt;You’ll need to provide Pulumi with a “key” which it will use to encrypt the state of your stack. This is set as &lt;code&gt;PULUMI_PASSPHRASE&lt;/code&gt; and can be any string.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage Account
&lt;/h3&gt;

&lt;p&gt;Next you’ll need to provide a valid storage account (that the principal supplied has access to). This is done with the &lt;code&gt;STATE_STORAGE_ACCOUNT&lt;/code&gt; secret.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage Blob
&lt;/h3&gt;

&lt;p&gt;This is the name of the blob used for the Pulumi State file, and is set as &lt;code&gt;STATE_STORAGE_BLOB&lt;/code&gt;. You need to ensure that the Service Principal supplied has Read, Write and List permissions to the blob.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4 – Run the workflow
&lt;/h2&gt;

&lt;p&gt;That’s it, you should now be able to run the workflow from the menus.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/05/image.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MabEhUot--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/05/image.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Screenshot of the Pipeline&lt;/p&gt;

&lt;h2&gt;
  
  
  The important bits
&lt;/h2&gt;

&lt;p&gt;There are a few important things to note when creating Azure Functions, and .NET 5 Azure Functions in particular, with Pulumi.&lt;/p&gt;

&lt;h3&gt;
  
  
  Functions Runtime
&lt;/h3&gt;

&lt;p&gt;When you’re deploying an Azure Function with .NET 5, you’ll need to make sure that you set the AppSetting &lt;code&gt;FUNCTIONS_WORKER_RUNTIME&lt;/code&gt; to &lt;code&gt;dotnet-isolated&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Changing Blobs on build
&lt;/h3&gt;

&lt;p&gt;The solution I’ve provided uses the &lt;code&gt;WEBSITE_RUN_FROM_PACKAGE&lt;/code&gt; AppSetting that points to a blob in Blob storage. This provides a nice separation of the code and the function. However, due to the way that Pulumi works, the &lt;code&gt;Blob&lt;/code&gt; resource is not marked as updated, and therefore the URL does not change. As the URL to the blob doesn’t change, the Function App will not pick up the new code.&lt;/p&gt;

&lt;p&gt;To work around this, I’ve added a DateTime to the blob’s name, so it’s updated on every build. I would recommend having this use the commit hash instead.&lt;/p&gt;
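&lt;p&gt;The naming scheme can be sketched as follows; &lt;code&gt;blob_name&lt;/code&gt; is a hypothetical helper showing the commit-hash approach with a timestamp fallback:&lt;/p&gt;

```python
from datetime import datetime, timezone

def blob_name(prefix, commit_sha=None):
    """Build a unique blob name so the WEBSITE_RUN_FROM_PACKAGE URL changes
    on every deployment; prefer the commit SHA as it is stable per build."""
    if commit_sha:
        suffix = commit_sha[:12]
    else:
        suffix = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{prefix}-{suffix}.zip"

print(blob_name("functionapp", "a1b2c3d4e5f67890"))  # functionapp-a1b2c3d4e5f6.zip
```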

&lt;p&gt;This does still have limitations, in that old Function App instances could start erroring as the old blob is deleted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Don’t publish your Infrastructure code
&lt;/h3&gt;

&lt;p&gt;When you do a &lt;code&gt;dotnet publish&lt;/code&gt; in your application, make sure that you target the application’s project, and not the solution. If your solution file contains both the application and the infrastructure, publishing the solution will result in around 200MB of extra libraries that you don’t need. This quickly fills up your GitHub data allowance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deploying an Azure Function with Pulumi is pretty easy, and adding in the complexity of Github Actions/Workflows is actually pretty easy too. There are a few incantations that you need in order for the Action to access Azure, but other than that, it was fairly smooth sailing.&lt;/p&gt;

&lt;p&gt;I hope that forking and deploying this repo makes it easier to understand what’s needed.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>github</category>
      <category>dotnet</category>
      <category>pulumi</category>
    </item>
    <item>
      <title>Grafana On Azure – AzureAD Authentication</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Sat, 10 Apr 2021 20:37:46 +0000</pubDate>
      <link>https://dev.to/martinjt/grafana-on-azure-azuread-authentication-3492</link>
      <guid>https://dev.to/martinjt/grafana-on-azure-azuread-authentication-3492</guid>
      <description>&lt;p&gt;This is part of a multi-part series on how to deploy and host Grafana safely, and cheaply on Azure, and how to get some decent visbility from Azure Monitor/App Insights through it. Hopefully parts of this will be useful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-1-hosting-configuration/"&gt;Part 1 – Hosting/Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-2-azure-mysql-storage/"&gt;Part 2 – Azure MySQL storage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-3-enabling-ssl-with-letsencrypt/"&gt;Part 3 – SSL with LetsEncrypt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 4 – Azure AD Login (this post)&lt;/li&gt;
&lt;li&gt;Part 5 – Azure Monitor Datasource (Coming Soon)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;If you’ve followed the previous 3 parts, you’ll have everything set up correctly. Otherwise, you’ll need the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Grafana instance (obviously)&lt;/li&gt;
&lt;li&gt;Access to the &lt;code&gt;grafana.ini&lt;/code&gt; file on that instance&lt;/li&gt;
&lt;li&gt;Grafana using SSL (this is a requirement for AzureAD’s response/callback URLs)&lt;/li&gt;
&lt;li&gt;AzureAD instance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Under both scenarios, you’ll need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Access to create App Registrations in the Azure Portal.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In this post, we’ll be looking at adding Azure Active Directory (AzureAD) support to a Grafana instance. This is what I would advise if you’re hosting on Azure, as you’re already likely to have all of your potential Grafana users set up in Active Directory, whether that’s AzureAD native or passwords synced from a standard Active Directory instance.&lt;/p&gt;

&lt;p&gt;You will still be able to have local users, as well as AzureAD, and I’d recommend keeping the &lt;code&gt;admin&lt;/code&gt; user with a very strong password for maintenance.&lt;/p&gt;

&lt;p&gt;Using AzureAD as your authentication system for Grafana also allows you to have Two-Factor Authentication (2FA) for Grafana by enabling this within AzureAD.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AzureAD?
&lt;/h2&gt;

&lt;p&gt;This is the cloud-based authentication system used to access the Azure portal. If you’re using Azure, you likely already have one. It’s the next generation of Active Directory, Microsoft’s centralised IAM system.&lt;/p&gt;

&lt;p&gt;It provides interfaces for common authentication protocols like OIDC (OpenIdConnect) and SAML2. This is what Grafana will use to verify the identity of your users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 – Create the Azure App
&lt;/h2&gt;

&lt;p&gt;The first step is to create an Azure AD “Application” that Grafana will use to communicate with Azure. For this step, the application will be used to identify user information. We’ll be breaking the Application creation into 2 steps: the first will allow you to use the application, then the second will allow you to map Azure AD groups to Grafana roles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/04/azure-ad-appregistration.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ofzKFZo6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/04/azure-ad-appregistration.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;The Name is a friendly name that you users will see the first time they try to login. Use something recognisable to your user base, and also descriptive to ensure that you users trust the login.&lt;/p&gt;

&lt;p&gt;The Redirect URI is required for this Grafana integration. You’ll need your domain here, and the value should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;domain&amp;gt;/login/azuread
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s important that this is a domain and not an IP as you’ll need to use HTTPS and have a valid certificate.&lt;/p&gt;
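&lt;p&gt;The callback path is fixed by Grafana, so only the domain varies. A minimal sketch of building it (&lt;code&gt;azuread_redirect_uri&lt;/code&gt; is a hypothetical helper):&lt;/p&gt;

```python
def azuread_redirect_uri(domain: str) -> str:
    """Grafana's fixed AzureAD callback path appended to your HTTPS domain."""
    return f"https://{domain}/login/azuread"

print(azuread_redirect_uri("grafana.example.com"))
# https://grafana.example.com/login/azuread
```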

&lt;p&gt;Once the app is created, you’ll need to record 2 details: the TenantId and the ClientId.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/04/azure-ad-app-information-page.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---pvjB1-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/04/azure-ad-app-information-page.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These will be needed for the grafana config in the next steps.&lt;/p&gt;

&lt;p&gt;Next you’ll need to create a “Client Secret”, which is how Azure can know that it’s your Grafana instance, rather than someone else’s.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/04/azure-ad-app-secrets.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OqLmKg3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/04/azure-ad-app-secrets.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the “New client secret” link, then give the secret a descriptive name. The maximum expiration is 2 years; however, I’d recommend using 6 months and scheduling a reminder to update it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/04/azure-ad-app-add-client-secret-1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GdHv2Mxh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/04/azure-ad-app-add-client-secret-1.png%3Fw%3D732" alt=""&gt;&lt;/a&gt;Once you’ve added the secret, you’ll need to copy this out as it will be required in the next steps. You’ll only be able to copy this secret at this stage, so it’s important that you copy it out. before leaving the page.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2 – Grafana Config
&lt;/h2&gt;

&lt;p&gt;Next you’ll need to tell Grafana about the config from the Azure AD Application. There is a section specifically for this in the &lt;code&gt;grafana.ini&lt;/code&gt; file called &lt;code&gt;[auth.azuread]&lt;/code&gt;. The important settings are:&lt;/p&gt;

&lt;p&gt;name = a friendly name, it’s not really used anywhere&lt;br&gt;&lt;br&gt;
enabled = set this to true&lt;br&gt;&lt;br&gt;
client_id = the value you copied from the main Azure AD app screen&lt;br&gt;&lt;br&gt;
client_secret = the secret value you copied when creating the client secret&lt;br&gt;&lt;br&gt;
scopes = &lt;code&gt;openid email profile&lt;/code&gt;&lt;br&gt;&lt;br&gt;
auth_url = &lt;code&gt;https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize&lt;/code&gt; replacing &lt;code&gt;{tenant}&lt;/code&gt; with the tenant ID from the main Azure AD App screen&lt;br&gt;&lt;br&gt;
token_url = &lt;code&gt;https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token&lt;/code&gt; replacing &lt;code&gt;{tenant}&lt;/code&gt; with the tenant ID from the main Azure AD App screen&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[auth.azuread]
name = Azure AD
enabled = true
;allow_sign_up = true
client_id = 
client_secret = 
scopes = openid email profile
auth_url = https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
token_url = https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
;allowed_domains =
;allowed_groups =
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
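&lt;p&gt;The two URLs differ only by the tenant ID and the final path segment; a small sketch (&lt;code&gt;azuread_endpoints&lt;/code&gt; is a hypothetical helper) that derives them both from the tenant ID:&lt;/p&gt;

```python
def azuread_endpoints(tenant_id: str) -> dict:
    """Derive the auth_url and token_url values for grafana.ini from a tenant ID."""
    base = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0"
    return {"auth_url": f"{base}/authorize", "token_url": f"{base}/token"}

print(azuread_endpoints("my-tenant-id")["token_url"])
# https://login.microsoftonline.com/my-tenant-id/oauth2/v2.0/token
```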



&lt;p&gt;Restart the service and you should now be able to login with your Azure AD credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart grafana.server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/04/grafana-azuread-login-button.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--INevFeTB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/04/grafana-azuread-login-button.png%3Fw%3D723" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post you’ve seen just how easy it is to enable AzureAD authentication. There is more that you can do, like enabling groups for the users, and removing the ability to have a local login form. Those are all for another post.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll look at using this Azure AD application to enable access to Azure Monitor, and Azure Log Analytics.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>observability</category>
      <category>azureactivedirectory</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Grafana on Azure – Enabling SSL with LetsEncrypt</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Sat, 13 Mar 2021 22:02:59 +0000</pubDate>
      <link>https://dev.to/martinjt/grafana-on-azure-enabling-ssl-with-letsencrypt-1ega</link>
      <guid>https://dev.to/martinjt/grafana-on-azure-enabling-ssl-with-letsencrypt-1ega</guid>
      <description>&lt;p&gt;This is part of a series of posts about running Grafana on Azure. Checkout the others&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-1-hosting-configuration/"&gt;Part 1 – Hosting/Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-2-azure-mysql-storage/"&gt;Part 2 – Azure MySQL Storage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 3 – Enabling SSL with LetsEncrypt (this post)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/04/10/grafana-on-azure-azuread-authentication/"&gt;Part 4 – Azure AD Login&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 5 – Azure Monitor Datasource (coming soon)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is LetsEncrypt?
&lt;/h2&gt;

&lt;p&gt;LetsEncrypt.org is an initiative to promote sites using SSL. Regardless of whether there is any data you feel is critical, SSL should still be enabled. LetsEncrypt provides completely FREE SSL certificates that you can use on any domain. The certificates are valid for 90 days, which means that you need to regenerate them regularly. However, there is a utility called “certbot” which will automate this for us.&lt;/p&gt;

&lt;p&gt;There are multiple ways you can generate the certificates, but for this example, we’ll be using what is referred to as the HTTP Challenge method. It’s generally a little more secure to run the DNS Challenge method, as it doesn’t rely on you opening up additional ports; however, we won’t be covering that method here as it’s specific to the DNS provider you’re using.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 – Setup your domain name
&lt;/h2&gt;

&lt;p&gt;In order to get a valid certificate, we’ll need a domain name; this won’t work with an IP address.&lt;/p&gt;

&lt;p&gt;Set up your domain name to point to the IP address of your Grafana instance. To make sure that you’re not going to hit issues, I’d recommend checking with &lt;code&gt;dig&lt;/code&gt; or &lt;code&gt;nslookup&lt;/code&gt; to ensure that it’s correctly pointed. Also, try hitting Grafana using that domain on port 3000 (i.e. &lt;code&gt;http://{domain}:3000/&lt;/code&gt;).&lt;/p&gt;
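&lt;p&gt;If you’d rather script the check, here’s a small sketch using Python’s resolver (assumes your DNS change has already propagated; &lt;code&gt;resolves_to&lt;/code&gt; is a hypothetical helper):&lt;/p&gt;

```python
import socket

def resolves_to(domain: str, expected_ip: str) -> bool:
    """Check the domain's A record points at your VM before requesting a certificate."""
    return socket.gethostbyname(domain) == expected_ip

# "localhost" stands in for your real domain and VM IP here.
print(resolves_to("localhost", "127.0.0.1"))
```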

&lt;h2&gt;
  
  
  Step 2 – Open the required ports
&lt;/h2&gt;

&lt;p&gt;You’ll need to open up ports in your Network Security Group so that LetsEncrypt can communicate on Port 80 (http) for the HTTP Challenge, and your users can communicate on Port 443 (https).&lt;/p&gt;

&lt;p&gt;Navigate to your Virtual Machine in the Azure Portal, Click “Networking” then “Add Inbound Port Rule”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/03/networking-rules-http.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y4lezwnJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/03/networking-rules-http.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For “Destination Port Ranges” enter both 80 and 443 separated by a comma e.g. &lt;code&gt;80,443&lt;/code&gt;, and click Save.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3 – Let Grafana access the certificates on the filesystem
&lt;/h2&gt;

&lt;p&gt;The Grafana process will use the certificates directly, as it has its own inbuilt HTTP server. Therefore, the process will need access to the location where the certificates are generated.&lt;/p&gt;

&lt;p&gt;First, we’ll create a group that will provide access to our ssl certificates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo groupadd sslcerts

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we’ll create the directories for the certificates, change the ownership to our newly created group, and modify some permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo mkdir /etc/letsencrypt
sudo mkdir /etc/letsencrypt/archive
sudo mkdir /etc/letsencrypt/live
sudo chown -R root:sslcerts /etc/letsencrypt/
sudo chmod 755 /etc/letsencrypt/archive
sudo chmod 755 /etc/letsencrypt/live

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we’ll add grafana’s process user to our newly created group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo usermod -G sslcerts -a grafana

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4 – Install and run LetsEncrypt Certbot
&lt;/h2&gt;

&lt;p&gt;Next we’ll need to install certbot, which is the application that will communicate with LetsEncrypt. There’s a package in APT, so that’s pretty easy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo apt install -y certbot

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can run the tool in standalone mode.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo certbot certonly --standalone

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll need to give LetsEncrypt an email address, and agree to their terms, then add the domain you want to generate the certificate for. This will be the one you set up in Step 1, without the &lt;code&gt;https://&lt;/code&gt; prefix.&lt;/p&gt;

&lt;p&gt;Certbot will then setup a temporary http server running on port 80 that will allow LetsEncrypt’s servers to verify that you actually own the domain by literally just hitting the url.&lt;/p&gt;

&lt;p&gt;Once that’s done, you’ll find that there is now a certificate file in the folders we created in step 3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;azureuser@grafana-azure:~$ sudo ls -hal /etc/letsencrypt/live
total 16K
drwx------ 3 root root 4.0K Mar 13 21:08 .
drwxr-xr-x 9 root sslcerts 4.0K Mar 13 21:08 ..
-rw-r--r-- 1 root root 740 Mar 13 21:08 README
drwxr-xr-x 2 root root 4.0K Mar 13 21:08 grafanablog.martinjt.me
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll notice that certbot hasn’t honoured our changes to the directories’ groups, so we’ll need to rectify that. If we change the group on the files, certbot will honour it on renewal.&lt;/p&gt;
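&lt;p&gt;The fix is the same chown/chmod pattern as Step 3, applied to the generated directories. Here’s a sketch of the permission change demonstrated on a temporary directory (the real fix targets &lt;code&gt;/etc/letsencrypt&lt;/code&gt; and needs sudo):&lt;/p&gt;

```python
import os
import stat
import tempfile

def open_directory(path):
    """Give group and others read/execute (0o755) so the grafana user,
    via the sslcerts group, can traverse into the certificate directory."""
    os.chmod(path, 0o755)

demo_dir = tempfile.mkdtemp()  # stand-in for /etc/letsencrypt/live
open_directory(demo_dir)
print(oct(stat.S_IMODE(os.stat(demo_dir).st_mode)))  # 0o755
```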

&lt;h2&gt;
  
  
  Step 5 – Configure Grafana to use SSL
&lt;/h2&gt;

&lt;p&gt;Open the grafana config for editing&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo nano /etc/grafana/grafana.ini

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the following settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protocol = https
domain = &amp;lt;your-domain&amp;gt;
enforce_domain = true
root_url = https://&amp;lt;your-domain&amp;gt;
cert_file = /etc/letsencrypt/live/&amp;lt;your-domain&amp;gt;/fullchain.pem
cert_key = /etc/letsencrypt/live/&amp;lt;your-domain&amp;gt;/privkey.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now restart the grafana service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo systemctl restart grafana-server.service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now be able to access your service using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;your-domain&amp;gt;:3000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Using port 443 for https
&lt;/h5&gt;

&lt;p&gt;By default, Grafana won’t be able to listen on port 443 due to restrictions in Linux (ports below 1024 require elevated privileges). You’ll need to enable this using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/grafana-server

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can edit our &lt;code&gt;grafana.ini&lt;/code&gt; again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http_port = 443 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then restart our service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo systemctl restart grafana-server.service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you should be able to access your grafana instance without the port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;your-domain&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Additional Links
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://blog.hackzenwerk.org/2019/05/13/setup-grafana-on-ubuntu-18-04-with-letsencrypt/"&gt;Setup Grafana on Ubuntu 18.04 with LetsEncrypt&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>azure</category>
      <category>observability</category>
      <category>grafana</category>
      <category>letsencrypt</category>
    </item>
    <item>
      <title>Grafana on Azure – Azure MySQL Storage</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Sat, 13 Mar 2021 22:01:34 +0000</pubDate>
      <link>https://dev.to/martinjt/grafana-on-azure-azure-mysql-storage-7dg</link>
      <guid>https://dev.to/martinjt/grafana-on-azure-azure-mysql-storage-7dg</guid>
      <description>&lt;p&gt;In this multi-part series you’ll learn how to host Grafana safely, and cheaply on Azure, and how to get some decent visbility from Azure Monitor/App Insights through it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-1-hosting-configuration/"&gt;Part 1 – Hosting/Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-2-azure-mysql-storage/"&gt;Part 2 – Azure MySQL Storage (this post)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-3-enabling-ssl-with-letsencrypt/"&gt;Part 3 – SSL with LetsEncrypt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/04/10/grafana-on-azure-azuread-authentication/"&gt;Part 4 – Azure AD Login&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 5 – Azure Monitor Datasource (Coming Soon)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the last post, you set up a basic Grafana VM on Azure that is using a database on the VM’s disk as its datastore.&lt;/p&gt;

&lt;p&gt;In this post, we’ll move that database to Azure PaaS.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Azure Database for MySQL?
&lt;/h2&gt;

&lt;p&gt;This is Azure’s managed version of MySQL. It’s cheap (~£25/month) and supports everything you’d expect an Azure PaaS platform to do. There’s no patch management, backups, etc. to worry about; just an endpoint for you to hit. What’s not to love?&lt;/p&gt;

&lt;p&gt;This makes it a great database for Grafana, as the point of a monitoring system is to always be up, and you don’t want to be monitoring your monitoring system with your monitoring system. Having a single VM with the database, the frontend, etc. as a single point of failure just isn’t great. Add to that backing up all your data regularly, and suddenly the quick and easy Grafana instance is a management overhead in itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 – Azure MySQL Setup
&lt;/h2&gt;

&lt;p&gt;Grafana stores a very small amount of data, which can live on a shared server if you already have one. We’ll be setting up a server from scratch for this, but most of the steps will work on an existing server.&lt;/p&gt;

&lt;p&gt;You can get away with the smallest possible instance, which is a B1 at ~£20/month, with the minimum base storage (5GB). This will give you backups, SSL, and DB maintenance. I’m going to guess that this is likely nothing in comparison to the other things you’re running, and also nothing in terms of the ~1 day/month it would take you to verify all backups, etc. if you ran your own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/03/create-mysql-server.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dLluxMN4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/03/create-mysql-server.png%3Fw%3D812" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll need to create a database and a user for Grafana. Later in this post, we’ll look at locking down the Azure MySQL instance; for now, you’ll need to manually add your IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/03/mysql-personal-ip-connection-security.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m5Q-1UCH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/03/mysql-personal-ip-connection-security.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the link “Add current client IP Address”, and then click save. This will add your public IP Address to the SQL Server.&lt;/p&gt;

&lt;p&gt;Now connect to the MySQL instance from the command line (if you don’t want to install mysql, running the docker image is a nice alternative).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -h {servername}.mysql.database.azure.com -u {adminuser}@{servername} -p
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your user will need full privileges to manage the database, as Grafana has built-in database migrations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER '&amp;lt;username&amp;gt;'@'%' IDENTIFIED BY '&amp;lt;password&amp;gt;';
CREATE DATABASE grafana_data;
GRANT ALL ON grafana_data.* TO '&amp;lt;username&amp;gt;'@'%';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it for MySQL. You’ve created the database, and it’s set up so you can access it. Next we’ll make Grafana use it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 – Configuring Grafana’s DB
&lt;/h3&gt;

&lt;p&gt;The first thing we’ll need to do is allow Grafana to access the database. For now, we’ll enable the “Allow access to Azure Services” setting; this enables access from Azure IPs, which will work for now, and we can secure this later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/03/mysql-allow-access-to-azure-services.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0rZVvDw1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/03/mysql-allow-access-to-azure-services.png%3Fw%3D817" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we’ll need to tell grafana where to find our database, and how to log in. SSH to your VM and open up the grafana config file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/grafana/grafana.ini
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In there, find the section labelled &lt;code&gt;[database]&lt;/code&gt; and set the following values, leaving all the others at their defaults.&lt;/p&gt;

&lt;p&gt;Note: the ini file uses &lt;code&gt;;&lt;/code&gt; to comment out the settings, remove it to set them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type = mysql
host = &amp;lt;servername&amp;gt;.mysql.database.azure.com:3306
name = &amp;lt;database-name&amp;gt;
user = &amp;lt;username&amp;gt;@&amp;lt;server-name&amp;gt;
ssl_mode = skip-verify
ca_cert_path = /etc/ssl/certs/ca-certificates.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
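&lt;p&gt;The host and user formats are the parts that usually trip people up on Azure; a small sketch (&lt;code&gt;azure_mysql_settings&lt;/code&gt; is a hypothetical helper) assembling them:&lt;/p&gt;

```python
def azure_mysql_settings(server: str, database: str, user: str) -> dict:
    """Assemble the grafana.ini [database] values for Azure Database for MySQL.
    Note the user must be qualified as user@server on Azure."""
    return {
        "type": "mysql",
        "host": f"{server}.mysql.database.azure.com:3306",
        "name": database,
        "user": f"{user}@{server}",
    }

print(azure_mysql_settings("myserver", "grafana_data", "grafana")["user"])
# grafana@myserver
```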

&lt;p&gt;Now restart Grafana, and you’ll need to log in again with the default credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart grafana-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you end up with your url timing out, check the logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tail -f /var/log/grafana/grafana.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A common error is failing to access the MySQL instance. This will manifest as the log tail above stalling at “Starting DB Migrations”. Likely causes are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Username doesn’t include &lt;code&gt;@&amp;lt;server-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ssl_mode&lt;/code&gt; and &lt;code&gt;ca_cert_path&lt;/code&gt; aren’t set&lt;/li&gt;
&lt;li&gt;MySQL firewall rules haven’t been setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If all has gone well, you’ll be presented with the Grafana logon screen, and be asked to input the default admin password and reset it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/03/grafana-homepage.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BSPQZnAj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/03/grafana-homepage.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 – Securing MySQL connection
&lt;/h3&gt;

&lt;p&gt;Right now the MySQL instance has been set up so that your public IP, and literally anything that runs on Azure, can access it. This is not a desirable situation, so we’ll need to secure it further.&lt;/p&gt;

&lt;p&gt;There are a few things you can do to secure the MySQL instance.&lt;/p&gt;

&lt;h4&gt;
  
  
  Option 1: IP restrictions
&lt;/h4&gt;

&lt;p&gt;You can restrict your MySQL instance to just the IP of your VM. This is the simplest approach, but it isn’t really scalable: if you kill the VM, you’ll need to update the IP, and it doesn’t work well if you bring on multiple instances or scaling.&lt;/p&gt;

&lt;p&gt;That said, to get started, this is a completely viable approach. You can set this by going to your MySQL Server &amp;gt; Connection Security, adding your VM’s public IP, and then disabling “Allow access to Azure services”.&lt;/p&gt;

&lt;h4&gt;
  
  
  Option 2: VNet Security
&lt;/h4&gt;

&lt;p&gt;This is generally my preferred option. It allows you to make your MySQL Server accessible only from within a specific VNET’s subnet. Unfortunately, this is only available on Standard edition MySQL servers, which come in at ~£50/month.&lt;/p&gt;

&lt;h4&gt;
  
  
  Rant on “Allow Access to Azure Services”
&lt;/h4&gt;

&lt;p&gt;A quick gripe with this setting. It can be tempting to just enable it so that Azure managed services can access your MySQL instance. However, this is NOT limited to the services in your subscription: anyone running their app in Azure can access your MySQL instance. Given the number of people running in Azure, this doesn’t really provide much security. Use with caution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post we’ve connected our Grafana instance to an Azure MySQL instance so that we get backups, security and a persistent database. This means that should our Grafana VM become corrupted, or otherwise compromised, we can just terminate it and bring up a new one from scratch.&lt;/p&gt;

&lt;p&gt;In addition, you can now set up as many VMs for Grafana as you want, add load balancers, host it in containers, anything.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll look at securing the frontend using SSL, and generating certificates using &lt;a href="https://letsencrypt.org/"&gt;LetsEncrypt&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>observability</category>
      <category>azuremysql</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Grafana on Azure – Hosting/Configuration</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Sat, 13 Mar 2021 21:59:50 +0000</pubDate>
      <link>https://dev.to/martinjt/grafana-on-azure-hosting-configuration-550n</link>
      <guid>https://dev.to/martinjt/grafana-on-azure-hosting-configuration-550n</guid>
<description>&lt;p&gt;This is intended to be a multi-part series on how to host Grafana safely and cheaply on Azure, and how to get some decent visibility from Azure Monitor/App Insights through it. Hopefully parts of this will be useful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-1-hosting-configuration/"&gt;Part 1 – Hosting/Configuration (this post)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-2-azure-mysql-storage/"&gt;Part 2 – Azure MySQL storage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/03/13/grafana-on-azure-part-3-enabling-ssl-with-letsencrypt/"&gt;Part 3 – SSL with LetsEncrypt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinjt.me/2021/04/10/grafana-on-azure-azuread-authentication/"&gt;Part 4 – Azure AD Login&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 5 – Azure Monitor Datasource (Coming Soon)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;In this first post, we’ll look at building a Grafana Instance with a managed MySQL instance using Azure MySQL.&lt;/p&gt;

&lt;p&gt;The aims are simplicity, speed of deployment, reduced management overhead and cost.&lt;/p&gt;

&lt;p&gt;There is a balancing act to be done here. Speed and simplicity could be achieved using a database on the machine that’s then backed up, at the expense of management overhead. Reduced management could be achieved using ACI or App Service, at the expense of cost (unless you already have a container infrastructure). The solution here, I believe, strikes a fair balance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Grafana
&lt;/h3&gt;

&lt;p&gt;Grafana has been my go-to tool for a long time, and is therefore where I’m comfortable. I’m sure that some or all of what I’m doing here could be achieved with Azure Dashboards, but I haven’t learnt those yet.&lt;/p&gt;

&lt;p&gt;Grafana is a graphing tool that allows you to represent data from multiple sources. In modern development, we don’t use just a single provider for our platforms, so visualisation needs to be able to draw in multiple sources.&lt;/p&gt;

&lt;h4&gt;
  
  
  Grafana Cloud
&lt;/h4&gt;

&lt;p&gt;A quick note on Grafana Cloud. This is Grafana’s managed service offering and provides an easy way to get up and running without hosting it yourself. For 10 users, at $49/month, it’s a steal, and I’d recommend that route. However, the pricing gets a little expensive once you reach 20–30 users, which the installation I’m proposing can handle; if you want to start pushing these dashboards out to business stakeholders and every developer, costs can mount quickly.&lt;/p&gt;

&lt;p&gt;There is also now an AWS Managed version, which is even more expensive at $9/month/user, &lt;a href="https://aws.amazon.com/grafana/pricing/"&gt;https://aws.amazon.com/grafana/pricing/&lt;/a&gt;. This seems very expensive to me, especially as that doesn’t include the enterprise plugins like Timestream.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: for Grafana staff, you should look at adding unlimited “View Only” users, or a 4:1 ratio of free to paid users, just for viewing dashboards for stakeholders and wallboards.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 – Create the Azure Virtual Machine
&lt;/h3&gt;

&lt;p&gt;Grafana requires very few resources unless you’re running it at scale. This is because, for the most part, it’s just proxying queries from the user’s browser to the analytics backend (e.g. InfluxDB, AppInsights, CloudWatch, etc.). This changes as you add support for alerting, but it’s still not a massive amount, and as we’ll be using Azure analytics, most of the alerting will come from there.&lt;/p&gt;

&lt;p&gt;For this tutorial, I’ve successfully run this on a B1ls VM, which is currently ~£5/month. We’ll be using Ubuntu 20.04, so all the commands and paths will be based on that.&lt;/p&gt;

&lt;p&gt;Create a standard B1ls virtual machine with Ubuntu 20.04; you’ll need to open up port 22 and have access to the SSH key. A full walkthrough of creating VMs is out of scope for this post.&lt;/p&gt;
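&lt;p&gt;As a rough sketch, the VM can be created with the Azure CLI something like this (the resource group, VM name and region are illustrative):&lt;/p&gt;

```shell
# illustrative names; pick your own resource group, VM name and region
az group create --name grafana-rg --location uksouth
az vm create \
  --resource-group grafana-rg \
  --name grafana-vm \
  --image Canonical:0001-com-ubuntu-server-focal:20_04-lts:latest \
  --size Standard_B1ls \
  --admin-username azureuser \
  --generate-ssh-keys
```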

&lt;h3&gt;
  
  
  Step 2 – Install Grafana
&lt;/h3&gt;

&lt;p&gt;SSH onto the machine and create the install script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
nano install-grafana.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then paste this into the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
#!/bin/sh
sudo apt install -y software-properties-common

#add grafana to list of allowed software
sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

sudo apt update &amp;amp;&amp;amp; sudo apt install -y grafana
sudo systemctl enable grafana-server.service
sudo systemctl start grafana-server

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll notice from the above that it’s adding a repository. This is Grafana’s package repository, which takes precedence over the default Ubuntu packages, meaning you’ll be getting the very latest version of Grafana.&lt;/p&gt;

&lt;p&gt;Save the file (Ctrl+X), then make it executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
chmod +x install-grafana.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the script will be installing packages, you’ll need to run it with superuser permissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo ./install-grafana.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That will get you a running Grafana instance with a local SQLite database on the machine, meaning that if the machine dies and you haven’t used persistent disks, you’ll lose all of the data. Obviously we could stop here and use persistent disks, but that would mean losing the ability to scale out, and also the ability to re-image the machine in case of corruption. You’d be stuck with “fixing” the machine’s image, which isn’t the cloud way.&lt;/p&gt;

&lt;p&gt;Once you’ve run the above, you’ll need to open up Grafana’s default port (3000).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://martinthwaites.files.wordpress.com/2021/03/networking-rules.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9WHzVodu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2021/03/networking-rules.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once that port has been opened up, you should be able to access the Grafana instance using the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;ip&amp;gt;:3000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default credentials are admin/admin, and once you’ve put those in, you’ll need to set a new password for admin. In a later step we’ll add Azure AD credentials for login, but for now, this is what you’ll use to log in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This post shows the basics of getting a Grafana instance working on an Azure VM. In the next posts we’ll work on making this more resilient and secure.&lt;/p&gt;

&lt;p&gt;You can stop at this point and have a fully functioning Grafana instance that you can configure and use. You can mitigate outages by making the disk persistent so it can be restored onto another VM. However, in the next post, you’ll see how we can leverage Azure MySQL PaaS to host the database, so the VM becomes largely irrelevant.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>observability</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Pulumi Multiple projects with Custom Backends</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Thu, 04 Mar 2021 22:19:04 +0000</pubDate>
      <link>https://dev.to/martinjt/pulumi-multiple-projects-with-custom-backends-4odh</link>
      <guid>https://dev.to/martinjt/pulumi-multiple-projects-with-custom-backends-4odh</guid>
<description>&lt;p&gt;I’ve been working with &lt;a href="https://www.pulumi.com"&gt;Pulumi&lt;/a&gt; to create a reference architecture for a client that provides individual team autonomy while providing some shared resources. In addition, the client also wants to use a custom backend in Azure Blob storage. This presented some issues, as the documentation around projects, stacks and stack references is sparse for custom backends. Hopefully this post clears up some of those patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a StackReference?
&lt;/h2&gt;

&lt;p&gt;A stack reference in Pulumi is a way for your current project to access the outputs of another project. This is really useful when you’re sharing things like a Load Balancer, API Gateway, or AppService Plan between multiple services.&lt;/p&gt;

&lt;p&gt;In Terraform this would be a “Remote State” that you’d bring in as a datasource.&lt;/p&gt;

&lt;p&gt;More information on Stack References: &lt;a href="https://www.pulumi.com/docs/intro/concepts/stack/#stackreferences"&gt;https://www.pulumi.com/docs/intro/concepts/stack/#stackreferences&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Multiple Projects?
&lt;/h2&gt;

&lt;p&gt;When you’re working with multiple teams of people, you will inevitably hit a situation where you’ll want to have multiple, independently versioned and iterated infrastructure projects. Some of the time, there will be outputs from these projects that other projects depend on.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Custom Backend?
&lt;/h2&gt;

&lt;p&gt;Pulumi offers an awesome service for managing the state at &lt;a href="https://app.pulumi.com"&gt;https://app.pulumi.com&lt;/a&gt;. This provides an intuitive UI, along with some cool features like deployment histories and integration into CI/CD pipelines. This is my go-to, and if you can justify the cost, it’s something I would highly recommend. However, it isn’t mandatory, as you can also host the backend yourself, as you would with Terraform.&lt;/p&gt;

&lt;p&gt;You can choose to store that recorded state in your own storage engine, such as S3 or Azure Blob Storage. Full information on the storage options is here: &lt;a href="https://www.pulumi.com/docs/intro/concepts/state/#logging-into-a-self-managed-backend"&gt;https://www.pulumi.com/docs/intro/concepts/state/#logging-into-a-self-managed-backend&lt;/a&gt;&lt;/p&gt;
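&lt;p&gt;As a quick sketch, logging into an Azure Blob backend looks like this (the container and account names are illustrative; the CLI reads the storage account details from environment variables):&lt;/p&gt;

```shell
# illustrative: "pulumi-state" is a blob container you've already created
export AZURE_STORAGE_ACCOUNT=mystateaccount
export AZURE_STORAGE_KEY="<storage-account-key>"
pulumi login azblob://pulumi-state
```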

&lt;h2&gt;
  
  
  Projects and Custom Backends
&lt;/h2&gt;

&lt;p&gt;When using &lt;code&gt;app.pulumi.com&lt;/code&gt; for hosting your backend, the providers automatically take into account your project name, as well as the stack name. Therefore if you run this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
pulumi stack init dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a project called &lt;code&gt;demo&lt;/code&gt;, you’ll have a stack called &lt;code&gt;my-awesome-org/demo/dev&lt;/code&gt;. However, when it comes to using a custom backend, the default syntax doesn’t provide this separation.&lt;/p&gt;

&lt;p&gt;When using Azure Blob Storage, that syntax will result in a stack simply called &lt;code&gt;dev&lt;/code&gt;, so you will need to namespace the stack manually.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
pulumi stack init demo.dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll note that I’ve used periods rather than slashes; this is because the CLI doesn’t allow slashes in stack names. I’ve also not added the organisation name, as that’s specific to the Pulumi service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Referencing using a StackReference
&lt;/h2&gt;

&lt;p&gt;Now that we have two stacks, we’ll need to reference the first from the second. We can do this in C# using the following code, referencing the now-qualified stack name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var stackReference = new StackReference("shared", new StackReferenceArgs { Name = "demo.dev" });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;This doesn’t provide the ability to reference a stack in a separate blob storage container; as far as I can tell, that isn’t supported right now.&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>pulumi</category>
      <category>custombackend</category>
      <category>stackreference</category>
    </item>
    <item>
      <title>Speed up legacy ASP.NET applications with HttpContext.Items caching</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Sun, 21 Feb 2021 16:06:33 +0000</pubDate>
      <link>https://dev.to/martinjt/speed-up-legacy-asp-net-applications-with-httpcontext-items-caching-iek</link>
      <guid>https://dev.to/martinjt/speed-up-legacy-asp-net-applications-with-httpcontext-items-caching-iek</guid>
<description>&lt;p&gt;One of the goals of most websites is to return pages to users as quickly as possible. Caching frequently accessed data is a really easy way to do this; however, in legacy applications there can be a lot of context-heavy code, making it hard to understand what can be cached and what can’t.&lt;/p&gt;

&lt;p&gt;What I’ve seen over and over again is code that runs the same function multiple times, using the current user’s information within that function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
        public double GetUserAge()
        {
            // ClaimsPrincipal.FindFirst returns the first claim of the given type
            var userId = ((ClaimsPrincipal)HttpContext.Current.User).FindFirst("UserId")?.Value;
            var user = dbContext.Users.FirstOrDefault(u =&amp;gt; u.Id == userId);
            return user.Age;
        }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above block, we’re extracting the UserId from the claims, then getting the whole user from the database and returning the age. It’s an example I’ve seen a number of times, and it’s easy to see why: it was written with the best of intentions, and when it was first used, a single database hit was absolutely fine. Over time, though, people have gradually added calls to this function in other places, and those functions are now used in lots of places too.&lt;/p&gt;

&lt;p&gt;The biggest example of this is when you start to add multiple components/views to a web page; it’s even worse when it’s a CMS with functional components that aren’t thought about ahead of time. That means that for a single web request for a page, the function could be hit 5–10 times, each resulting in a database hit (putting aside that there will be caching if you’re using Entity Framework or another ORM that supports in-memory entity caching).&lt;/p&gt;

&lt;p&gt;So how do we improve this code, which has now become a critical performance bottleneck?&lt;/p&gt;

&lt;h2&gt;
  
  
  Options
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Option 1: Rewrite EVERYTHING
&lt;/h3&gt;

&lt;p&gt;Perfectly valid approach, a little nuclear, but if you’ve got this problem, it’s probably in lots of places. That said, this is likely going to be a long-term goal, and not a quick win.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 2: Cache all the users
&lt;/h3&gt;

&lt;p&gt;This is normally the solution I see implemented. There will be some kind of global cache, or a shared cache like Redis, that stores all the users. Alternatively, there’ll be an in-memory list of the users.&lt;br&gt;&lt;br&gt;
This is, again, a perfectly valid way of doing it, but you’re still adding latency, and possibly serialization costs. On top of that, you’ve now got a global cache to maintain. It does have the benefit of reducing the database impact across multiple requests, though.&lt;/p&gt;
&lt;h3&gt;
  
  
  Option 3: Just cache it for the current request.
&lt;/h3&gt;

&lt;p&gt;This is my preferred option, as it takes only a few lines of code and results in a single database request per context-based execution (e.g. per web request).&lt;/p&gt;
&lt;h2&gt;
  
  
  How?
&lt;/h2&gt;

&lt;p&gt;HttpContext should be familiar to most people in the ASP.NET world. It’s been around since… well, I can’t remember, so “a while”. In the Full Framework world, it was a static you could reference that had all the properties of the current request, like the URL, the cookies, etc.&lt;/p&gt;

&lt;p&gt;One of the under-used parts of the HttpContext object is “Items”, a dictionary that lives only for the duration of the current request. This opens up some safe caching options that are also fairly memory efficient. Here’s the code above, but with an Items cache added.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
        public double GetUserAge()
        {
            if (HttpContext.Current.Items.Contains("UserAge"))
                return (double)HttpContext.Current.Items["UserAge"];

            var userId = ((ClaimsPrincipal)HttpContext.Current.User).FindFirst("UserId")?.Value;
            var user = dbContext.Users.FirstOrDefault(u =&amp;gt; u.Id == userId);

            HttpContext.Current.Items["UserAge"] = user.Age;

            return user.Age;
        }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In normal operation there will likely be no performance degradation, even on a single hit, as it’s just adding an item to a dictionary. However, if the function is hit multiple times inside a single request, you’ve just reduced the database calls to one per request.&lt;/p&gt;

</description>
      <category>net</category>
      <category>development</category>
      <category>aspnet</category>
      <category>performance</category>
    </item>
    <item>
      <title>Infrastructure Autonomy using DNS Delegation and internal Top Level Domains</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Sat, 14 Nov 2020 18:06:58 +0000</pubDate>
      <link>https://dev.to/martinjt/infrastructure-autonomy-using-dns-delegation-and-internal-top-level-domains-4k2d</link>
      <guid>https://dev.to/martinjt/infrastructure-autonomy-using-dns-delegation-and-internal-top-level-domains-4k2d</guid>
<description>&lt;p&gt;In this post we’ll talk about using a specific top-level domain to separate your internal application infrastructure addresses from what your users see, and about how to use DNS delegation to give teams autonomy with a predictable naming strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;One of the big issues with DNS management is the security element of allowing people to add what they want to your prized possession, your front-facing domain. I’ve seen this process create teams that simply manage the DNS, which is not really very cost effective. On the opposite side, I’ve seen organisations where everyone has access to change any DNS entry, or even transfer the domain ownership.&lt;/p&gt;

&lt;p&gt;When working with infrastructure as code and creating things like AWS ALBs or Azure Load Balancers, their names are… less than predictable. Further, when you treat these environments as something that can be torn down and spun up, those services within Azure or AWS will get new, random names each time. This means that any other teams relying on these services will have to constantly change their configurations.&lt;/p&gt;

&lt;p&gt;Providing teams with the autonomy to manage as much of their stack as possible is hard when they have to submit a request, signed by the CEO, COO, Chairman, etc., that takes a week to action, just to point their subdomain at the new things they’ve created.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;Providing a predictable, static name for all your internal resources allows things outside the immediate team to be “slow moving”, while still allowing the team the autonomy to iterate at its own pace.&lt;/p&gt;

&lt;p&gt;I also like to use predictable DNS entries to allow people to navigate and identify the purpose, team etc. of the service. This provides a lot of consistency when it comes to cross team working, and further allows the “underlying” resource to change, without external notification.&lt;/p&gt;

&lt;p&gt;What I’ve done at every business I’ve been into recently is create a “delegation” infrastructure.&lt;/p&gt;

&lt;p&gt;I’ve spent quite a lot of time defining and using an approach to combat this in both Azure and AWS. It’s not exactly “news” or “complicated”; what I would say is that this has been a stumbling block in cloud understanding and adoption. Understanding how to move away from “old school” static servers with known IP addresses, towards transient services with IP addresses you don’t control, is where a lot of long-serving developers and engineers have struggled.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Idea
&lt;/h2&gt;

&lt;p&gt;The basic idea starts with separating how you address your “internal” addressing of services with what the end users see.&lt;/p&gt;

&lt;p&gt;So, address your services using an internal domain name, and then link them together using CNAME entries.&lt;/p&gt;

&lt;p&gt;E.g. if your domain is &lt;a href="http://www.martinjt.me"&gt;http://www.martinjt.me&lt;/a&gt;, use a new domain, internal-martinjt.me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uqhXhL1J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2020/06/dns-delegation.jpg%3Fw%3D381" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uqhXhL1J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://martinthwaites.files.wordpress.com/2020/06/dns-delegation.jpg%3Fw%3D381" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, our users use &lt;a href="http://www.martinjt.me"&gt;http://www.martinjt.me&lt;/a&gt;. However, our server lives on an ephemeral IP inside AWS, which is currently 50.50.50.50. The team developing the website DO NOT have control over martinjt.me entries, as these are managed centrally by a shared team for “security and/or compliance” reasons. They do have access to add/delete/amend records on the internal domain, though, which they use to set up a consistent record that is updated every time they change their backend service. Finally, the central/shared team has set up a CNAME entry pointing to that internal domain, which won’t change.&lt;/p&gt;
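&lt;p&gt;Put as DNS records, the setup above looks roughly like this (the internal record name is illustrative):&lt;/p&gt;

```
; martinjt.me zone -- managed by the central/shared team, rarely changes
www.martinjt.me.                  CNAME  webserver.internal-martinjt.me.

; internal-martinjt.me zone -- managed by the product team,
; updated every time the backend service is recreated
webserver.internal-martinjt.me.   A      50.50.50.50
```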

&lt;p&gt;The result in this example is that the team is able to change the destination of &lt;a href="http://www.martinjt.me"&gt;http://www.martinjt.me&lt;/a&gt; without having direct access to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Subdomain Delegation
&lt;/h2&gt;

&lt;p&gt;You can take this a step further, so that you don’t need to set up lots of top-level domains.&lt;/p&gt;

&lt;p&gt;Both Azure and AWS have the ability for you to set up DNS zones for a subdomain (e.g. team1.internal-martinjt.me, team2.internal-martinjt.me, etc.). These DNS zones can then be placed under the team’s control, and they can create more entries. Personally, I prefer a structure like this:&lt;/p&gt;

&lt;p&gt;{application|service}.{env}.{team}.internal-martinjt.me&lt;/p&gt;

&lt;p&gt;e.g.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://webserver.test.booking.internal-martinjt.me"&gt;https://webserver.test.booking.internal-martinjt.me&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, “booking.internal-martinjt.me” is delegated to the “Booking” team, which manages a series of applications; they then create DNS entries for each of their applications, in each environment.&lt;/p&gt;
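&lt;p&gt;In Azure, this delegation can be sketched with the CLI; &lt;code&gt;--parent-name&lt;/code&gt; adds the NS records to the parent zone when both zones are in the same subscription. The resource group and record values here are placeholders:&lt;/p&gt;

```shell
# create the child zone and delegate it from the parent zone
az network dns zone create \
  --resource-group booking-team-rg \
  --name booking.internal-martinjt.me \
  --parent-name internal-martinjt.me

# the team can then point their stable name at whatever exists today
az network dns record-set cname set-record \
  --resource-group booking-team-rg \
  --zone-name booking.internal-martinjt.me \
  --record-set-name webserver.test \
  --cname my-alb-1234.eu-west-1.elb.amazonaws.com
```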

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Granting access to your frontend domain is risky, and requires a lot of trust. With delegation, however, providing teams with autonomy no longer requires granting them that access.&lt;/p&gt;

&lt;p&gt;All the cloud providers are able to provide this ability, and further, you’re also able to use a DNS provider like GoDaddy, etc. that you can give the teams access to.&lt;/p&gt;

&lt;p&gt;This doesn’t, however, mitigate the security risk of the new domain being used to redirect the main site. It does stop a breach from becoming a permanent issue, though.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>azure</category>
      <category>development</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Manage Cross team terraform and azure-cli versions with docker</title>
      <dc:creator>martinjt</dc:creator>
      <pubDate>Mon, 07 Sep 2020 18:51:28 +0000</pubDate>
      <link>https://dev.to/martinjt/manage-cross-team-terraform-and-azure-cli-versions-with-docker-c1f</link>
      <guid>https://dev.to/martinjt/manage-cross-team-terraform-and-azure-cli-versions-with-docker-c1f</guid>
      <description>&lt;p&gt;One of the issues with having multiple teams, and pushing for autonomy to choose everything from infrastructure to languages, is making sure that you have the right versions of everything installed.&lt;/p&gt;

&lt;p&gt;As developers, we want to be on the bleeding edge, playing with new things, new versions, etc. However, that can have an impact beyond our machine, and can introduce weird issues with cross-team work and even within the team.&lt;/p&gt;

&lt;p&gt;I’ve been working on an approach to isolate that, and streamline the workflow with getting new team members up to speed quickly. Essentially, this boils down to using docker for just about everything. In this post, I’ll walk through doing this for Terraform and the Azure-CLI.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Caveat: I will admit that there are generally images in the wild already built with this combination. However, this approach allows you to customise the image at a repository level. It allows the primary team to provide other teams’ developers with an isolated environment in which to enact change in a frictionless way.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Dockerfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
FROM mcr.microsoft.com/azure-cli

ENV TERRAFORM_VERSION 0.12.28

RUN apk add --update wget ca-certificates unzip git bash &amp;amp;&amp;amp; \
    wget -q -O /terraform.zip "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" &amp;amp;&amp;amp; \
    unzip /terraform.zip -d /bin &amp;amp;&amp;amp; \
    apk del --purge wget ca-certificates unzip &amp;amp;&amp;amp; \
    rm -rf /var/cache/apk/* /terraform.zip

VOLUME ["/data"]
WORKDIR /data

RUN printf "\\nalias tf='terraform'" &amp;gt;&amp;gt; /root/.bashrc

ENTRYPOINT ["/bin/bash"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s have a look at what it’s doing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
FROM mcr.microsoft.com/azure-cli

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means we’re going to pull the latest version of the Azure CLI (or use the version we’ve already pulled). If we want to be a little more specific, we could add a “:” tag to the end.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
ENV TERRAFORM_VERSION 0.12.28

RUN apk add --update wget ca-certificates unzip git bash &amp;amp;&amp;amp; \
    wget -q -O /terraform.zip "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" &amp;amp;&amp;amp; \
    unzip /terraform.zip -d /bin &amp;amp;&amp;amp; \
    apk del --purge wget ca-certificates unzip &amp;amp;&amp;amp; \
    rm -rf /var/cache/apk/* /terraform.zip

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install the specific version of terraform, then clean itself up afterwards.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
VOLUME ["/data"]
WORKDIR /data

RUN printf "\\nalias tf='terraform'" &amp;gt;&amp;gt; /root/.bashrc

ENTRYPOINT ["/bin/bash"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we add a volume, which is where our Terraform code will be mounted, and add an alias, as typing “terraform” is way too many keystrokes when you can just write “tf”.&lt;/p&gt;

&lt;p&gt;We can then build the image&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker build -t local/tf:12-28 .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The additional “:12-28” means that we’re keeping it versioned.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running
&lt;/h3&gt;

&lt;p&gt;Then run the image&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker run -it --rm -v ${PWD}:/data -v azure-cli:/root/.azure --name tf-inf local/tf:12-28

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The secret sauce here is the name and the volumes. Mounting ${PWD} means your current directory becomes the source for the container.&lt;/p&gt;

&lt;p&gt;The azure-cli volume is what persists your authentication, so it’s not lost if/when you kill the container, and it also allows the login to be shared with other containers.&lt;/p&gt;

&lt;p&gt;Note: the &lt;code&gt;--rm&lt;/code&gt; flag will remove the container on exit. This is very useful due to a WSL2/Docker bug that occasionally deletes the contents of the mount after a restart.&lt;/p&gt;
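&lt;p&gt;To save the team from remembering that incantation, you could wrap it in a small function in your host’s &lt;code&gt;~/.bashrc&lt;/code&gt; (the function name and default tag here are hypothetical):&lt;/p&gt;

```shell
# hypothetical helper: wraps the docker run invocation above;
# an optional argument selects the image tag (defaults to 12-28)
tf-shell() {
  docker run -it --rm \
    -v "${PWD}:/data" \
    -v azure-cli:/root/.azure \
    --name tf-inf \
    "local/tf:${1:-12-28}"
}
```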

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Managing cross-team work is hard, and if you’re an infrastructure consultant, or someone working in any way across multiple autonomous teams, Docker is definitely your friend!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>docker</category>
      <category>terraform</category>
      <category>iac</category>
    </item>
  </channel>
</rss>
