<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Laura Santamaria</title>
    <description>The latest articles on DEV Community by Laura Santamaria (@nimbinatus).</description>
    <link>https://dev.to/nimbinatus</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F233405%2F54ee58bd-9470-4c6f-973a-2a57396fbaac.jpg</url>
      <title>DEV Community: Laura Santamaria</title>
      <link>https://dev.to/nimbinatus</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nimbinatus"/>
    <language>en</language>
    <item>
      <title>DevRel Strategy</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Thu, 06 Oct 2022 05:00:00 +0000</pubDate>
      <link>https://dev.to/nimbinatus/devrel-strategy-5ebc</link>
      <guid>https://dev.to/nimbinatus/devrel-strategy-5ebc</guid>
      <description>&lt;p&gt;Here's a quick view of my take on developer relations strategy.&lt;/p&gt;

&lt;p&gt;I was chatting about DevRel and community strategy with some folks at &lt;a href="https://devopsdays.org/events/2022-houston/welcome/"&gt;DevOpsDays Houston&lt;/a&gt; after my keynote earlier this week, and I promised I'd write a bit about how I view strategy. The key thing about strategy, to me, is that it provides a framework for evaluating tactics.&lt;/p&gt;

&lt;p&gt;Strategy is a definition of risk valuation and reward opportunities across multiple focus areas. If you're defining strategy simply by naming a bunch of tactics, you're missing out on the creativity boost a good strategy framework can provide for a devrel team. Developer relations opportunities can appear and disappear quickly, like a collaboration proposed during a hallway discussion at a conference or a last-minute invitation to be on a livestream or podcast. You need to empower your devrel team to make decisions quickly and decisively. Strategy provides the framework to make swift decisions that benefit the company and the community as a whole.&lt;/p&gt;

&lt;p&gt;A good devrel strategy explains what risk your company can handle for different focus areas. For example, a speaking engagement requiring expensive travel is high risk. To make it worthwhile, the reward for the company and the community should be equally high or higher. Maybe that means you get to reach a lot of people. Maybe that means you're reaching a new, very receptive community with a talk that will teach that community a lot. You'll need to define that reward level for your company and communities. Conversely, a low-risk opportunity may also be low reward and not worth the time. A generic blog post that isn't saying anything new might be low risk, but it also does nothing for the community (and so never benefits the company, either). If you can define the levels of risk your company can take with the reward levels you want to reach, you don't need to map out a bunch of tactics (the actions you'll take to reach a goal). Instead, you can map out a few and hunt for unique opportunities along the way.&lt;/p&gt;

&lt;p&gt;By using a true strategy framework instead of defining a bunch of tactics, you open up the door for creative ideas—the kinds of things that catapult your devrel program to new heights—to flourish. So the next time you find yourself in a planning session, ask: Are you defining strategy or tactics? You'll surprise yourself with what you can come up with when you have the mental space to innovate and evaluate those innovations (and prove value quickly to your leadership).&lt;/p&gt;

&lt;p&gt;(I originally put this up as a thread on Twitter, in case you want to &lt;a href="https://go.nimbinatus.com/devrel-strategy"&gt;explore the original&lt;/a&gt;.)&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>avocados</category>
      <category>devadvocate</category>
      <category>strategy</category>
    </item>
    <item>
      <title>An Analogy about DevRel</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Sat, 19 Mar 2022 05:00:00 +0000</pubDate>
      <link>https://dev.to/nimbinatus/an-analogy-about-devrel-4mip</link>
      <guid>https://dev.to/nimbinatus/an-analogy-about-devrel-4mip</guid>
      <description>&lt;p&gt;So I’ve had a few folks who apparently liked my developer relations (DevRel) analogy that JJ Asghar outed for me at our panel on community building for GAN and Orbit for SXSW this year. So, since folks are asking, I'm replicating it here:&lt;/p&gt;

&lt;p&gt;Imagine your company is a bed and breakfast in a quaint little town. Your engineering team and product team are all the people keeping things running, from cooking to cleaning to fixing. They work their tails off to make the bed and breakfast a great experience. The sales team are the folks who greet you at the door. They help you find your room, or they seat you at your table. They’re working with your business ops folks to ensure the money is gathered and counted so everyone can keep doing what they’re doing. The marketing team, meanwhile, are the folks making the front porch the most welcoming place to be. They’re maintaining the lawn, keeping the paint sparkling (and shiny compared to the neighbors), and ensuring that anyone who sees the building is drawn to it.&lt;/p&gt;

&lt;p&gt;So where’s DevRel in all this? Well, they’re out in the town square five or six blocks away. They’re helping folks find where they’re going, they’re explaining the town’s history to passersby, they’re recommending restaurants to tourists. Oh, and they’re wearing the bed and breakfast’s logo and recommending it as a place to stay or eat with a wink and an “Of course, I’m a bit biased as they ensure I can be out here to help you.” People respond well; they go find the bed and breakfast that this nice, helpful person talked about.&lt;/p&gt;

&lt;p&gt;DevRel professionals are out in front of your company, introducing the company to new communities by being the helpers and the teachers and driving a positive association so it’s easier for sales to call out and encourage someone to come in for a glass of sweet tea. People like me are trying to be 2-5 years ahead of where the company is now, laying the groundwork so that when it’s time to introduce the company to a new community, that community is already interested in listening. We're making sure everyone has a great experience, one person at a time. Because that's how you win communities' hearts (and their minds will follow when they experience your product, hopefully). As some smarter folks than me said, "It's up to us to ensure they know we're here."&lt;/p&gt;

&lt;p&gt;(I originally put this up as a thread on Twitter, in case you want to &lt;a href="https://go.nimbinatus.com/devrel-analogy"&gt;explore the original&lt;/a&gt;.)&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>avocados</category>
      <category>devadvocate</category>
      <category>analogy</category>
    </item>
    <item>
      <title>Upgrade Strategies: An Introduction for IaC</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Mon, 07 Mar 2022 15:59:56 +0000</pubDate>
      <link>https://dev.to/pulumi/upgrade-strategies-an-introduction-for-iac-5cja</link>
      <guid>https://dev.to/pulumi/upgrade-strategies-an-introduction-for-iac-5cja</guid>
      <description>&lt;p&gt;When you're working with infrastructure, you're inevitably going to need to upgrade or update that infrastructure. Whether it's an operating system update or a desire to get CPU or memory upgrades, you will need the ability to pick resources and change them as necessary. In the past, this kind of upgrade would be done on the basis of individual resources, with each one being updated and checked either by hand or programmatically before moving onto the next resource. If you've ever done a database migration or if you ever did the recommended way of upgrading your computer's operating system including all of the backup steps, you're familiar with this process. Stand up the new resource. Check everything works. Move over the data. Check again. Tear down the old infrastructure. In a cloud computing environment, though, you're often dealing with hundreds or thousands of resources, and doing one-by-one replacement is a nightmare that takes ages. However, there are other options, many borrowed from the application deployment world, that we have available to us because we write infrastructure as code.&lt;/p&gt;

&lt;p&gt;Generally, there are a few strategies for replacing something in a cloud computing environment. You'll often hear about them as &lt;em&gt;deployment strategies&lt;/em&gt;. These strategies generally differ based on the order of operations: Do you create a new resource before you delete the old one? Do you replace some and pause to gather data? Do you YOLO and just toss everything out and build new? All of these strategies are worth considering depending on the needs of your situation. A production system likely needs to maintain uptime, or the amount of time a system is available to the end user, to meet &lt;a href="https://cloud.google.com/blog/products/devops-sre/sre-fundamentals-sli-vs-slo-vs-sla"&gt;service-level agreements (SLAs)&lt;/a&gt;, which might include an availability promise such as a promise of &lt;a href="https://www.atlassian.com/blog/statuspage/high-availability"&gt;5 nines (99.999%) of uptime&lt;/a&gt;. In that case, a YOLO strategy won't be very acceptable, will it?&lt;/p&gt;
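&lt;p&gt;To make that availability promise concrete, here's a quick back-of-the-envelope calculation (a sketch I'm adding for illustration; the numbers aren't tied to any particular provider's SLA) of how little downtime each level of "nines" actually buys you per year:&lt;/p&gt;

```python
# Rough downtime budgets implied by common availability promises.
# Illustrative numbers only, not tied to any specific provider's SLA.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_budget_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

for label, availability in [("three nines", 0.999), ("five nines", 0.99999)]:
    print(f"{label}: {downtime_budget_minutes(availability):.1f} min/year")
```

&lt;p&gt;Five nines works out to roughly five minutes of downtime per year, which is why a tear-it-all-down approach is a non-starter for systems with that kind of SLA.&lt;/p&gt;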

&lt;p&gt;Because many of these deployment strategies came from the application world, we can't use them exactly as they appear for an application; there's a bit more going on under the hood. We need to consider what we're building and where we are on our stack. If we have multiple instances of our application running on identical containers, for example, we can treat the containers a bit differently than, say, the single load balancer in front of all of the containers or the node underneath a single-node Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;To illustrate all of the following deployment scenarios, let's imagine we have a system where an application is deployed on some infrastructure. What the application is doesn't really matter; we're going to focus on the infrastructure here. We'll pick a simple situation: There's a security update for one of your pieces of infrastructure.&lt;/p&gt;

&lt;h2&gt;Big Bang&lt;/h2&gt;

&lt;p&gt;Let's say the infrastructure you're working with is a large number of virtual machines (VMs) running an outdated, insecure version of an operating system. If you weren't concerned about uptime or the data stored on those systems, you could certainly just tear them down and stand up new ones—perhaps you're working with an ephemeral development environment that isn't in use right now, or you don't have any load just yet. Some folks refer to this kind of upgrade strategy as a &lt;em&gt;"big bang" strategy&lt;/em&gt;, though folks who run data centers would likely say that the big bang strategy still at least required testing and careful consideration of data transitions, and that you would rarely completely wipe the original system before replacing it with another.&lt;/p&gt;

&lt;p&gt;If you're wondering where this upgrade strategy came from, it was originally used in non-cloud-native systems where you likely didn't have the hardware available to run other deployment strategies. In those cases, you would have maintenance windows and change processes, and the whole upgrade was mapped out well ahead of time. This kind of fast switchover, especially with little to no testing, really is not a good idea for a cloud-native or cloud-based production environment, and it's frowned upon if you're working with a cloud-based environment that others are also using. There are so many better options than scheduling maintenance windows and ripping apart systems when you have the capability to stand up parallel virtual hardware.&lt;/p&gt;

&lt;h2&gt;Blue-Green Deployments&lt;/h2&gt;

&lt;p&gt;Now, let's say that instead of tearing down and then standing up new VMs, you create an almost identical environment with the new operating system, test it under load, and then transition your traffic over and wait for success before tearing down the old version. This is known as a &lt;a href="https://martinfowler.com/bliki/BlueGreenDeployment.html"&gt;&lt;em&gt;blue-green deployment&lt;/em&gt;&lt;/a&gt;, and it's a fairly popular strategy. A blue-green deployment is, in short, a create-check-delete process. You may have heard this strategy called "red/black deployment" or "a/b deployment." The &lt;a href="https://gitlab.com/-/snippets/1846041"&gt;rationale&lt;/a&gt; behind using "blue" and "green" as names comes from needing easy names that didn't carry any connotation of one group of systems being "better" than the other (e.g., "red" is the same color as alert lights, so don't you want to keep the "black" deployment?). You can name it whatever you'd like in your company; the concept is the same.&lt;/p&gt;

&lt;p&gt;The "blue" environment is the one currently running. To upgrade, you stand up an almost identical environment with the upgrade you want to perform; that environment is your "green" environment. Then, you run every test you can against the green environment to ensure it's ready. Finally, you switch your traffic over to the green system and monitor for any issues. Once you're sure the environment is stable and can handle the load of your normal traffic (typically measured in hours to days, depending on when you made the switch and what your traffic patterns demonstrate), you tear down the blue environment.&lt;/p&gt;

&lt;p&gt;Note that, while a blue-green deployment traditionally stands up and tears down an entire system, you can work with a subsystem instead, so long as that subsystem is self-contained. The idea behind the rollout strategy is the same: You use a load balancer to transition traffic from one complete system to another. Since we're talking infrastructure, the more traditional version needs to be modified by thinking in subsystems. For example, if you're replacing the load balancer as well, you would stand up the green load balancer, point it at the rest of the blue deployment, switch the traffic to the green load balancer, and then complete the move from the rest of the blue deployment to the green deployment. Then, finally, the blue load balancer and the rest of the blue deployment can be decommissioned.&lt;/p&gt;
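&lt;p&gt;The create-check-delete flow can be sketched in a few lines of Python. This is a minimal illustration I'm adding, not Pulumi code; &lt;code&gt;provision&lt;/code&gt;, &lt;code&gt;healthy&lt;/code&gt;, &lt;code&gt;point_traffic_at&lt;/code&gt;, and &lt;code&gt;teardown&lt;/code&gt; are hypothetical stand-ins for whatever your platform actually provides:&lt;/p&gt;

```python
# A minimal sketch of the blue-green create-check-delete flow.
# provision(), healthy(), point_traffic_at(), and teardown() are
# hypothetical stand-ins for your platform's real operations.

def blue_green_upgrade(provision, healthy, point_traffic_at, teardown, blue):
    green = provision()              # stand up the upgraded environment
    if not healthy(green):           # run your tests against green first
        teardown(green)              # never touch traffic if green fails
        raise RuntimeError("green environment failed checks; blue untouched")
    point_traffic_at(green)          # the actual cutover
    if not healthy(green):           # monitor under real load
        point_traffic_at(blue)       # roll back by switching traffic back
        teardown(green)
        raise RuntimeError("green failed under load; rolled back to blue")
    teardown(blue)                   # only now decommission the old system
    return green
```

&lt;p&gt;The important property is the ordering: blue is never touched until green has passed checks and taken real traffic, so rollback is just switching traffic back.&lt;/p&gt;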

&lt;h2&gt;Canary Deployments&lt;/h2&gt;

&lt;p&gt;Now, a blue-green deployment isn't the only upgrade strategy, and it's not always the best one for a cloud-native system. There's also a strategy called a &lt;a href="https://martinfowler.com/bliki/CanaryRelease.html"&gt;&lt;em&gt;canary deployment&lt;/em&gt;&lt;/a&gt;. In a canary deployment, you stand up new infrastructure and move traffic over in small increments, such as 5% of overall traffic. That small slice of traffic is considered a "canary," an indicator of failure or success of such a move. While it's often used in application deployment as a way to gather user feedback on an application change, this strategy is also useful for infrastructure, as it highlights issues that only occur under true, random load before they become a problem for the full deployment. This strategy is fairly popular in microservices architectures and on platforms like Kubernetes, where it's much easier to implement than in more traditional, VM-based systems.&lt;/p&gt;
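&lt;p&gt;The incremental traffic shift at the heart of a canary rollout looks roughly like this sketch (&lt;code&gt;set_traffic_split&lt;/code&gt; and &lt;code&gt;canary_healthy&lt;/code&gt; are hypothetical stand-ins, and the 5/25/50/100 schedule is just one possible choice):&lt;/p&gt;

```python
# A minimal sketch of a canary rollout: shift traffic to the new
# infrastructure in small increments, checking health at each step.
# set_traffic_split() and canary_healthy() are hypothetical stand-ins.

def canary_rollout(set_traffic_split, canary_healthy,
                   steps=(5, 25, 50, 100)):
    """Move traffic to the canary in increments; back out on failure."""
    for percent in steps:
        set_traffic_split(canary_percent=percent)
        if not canary_healthy():
            set_traffic_split(canary_percent=0)   # send everything back
            return False                          # canary failed
    return True                                   # canary now takes 100%
```

&lt;p&gt;The early increments are cheap insurance: a failure at 5% affects a sliver of traffic and the rollback is a single traffic-split change.&lt;/p&gt;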

&lt;h2&gt;Rolling Deployments&lt;/h2&gt;

&lt;p&gt;The final upgrade strategy we'll consider is called a &lt;em&gt;rolling deployment&lt;/em&gt;. In a rolling deployment, each element of a system is replaced one at a time, with each new instance being checked before decommissioning and removing the old instance. That check is called a health check on some platforms, and the basic idea is that the deployment tool sends a request to the new element and waits for a response that indicates the system is functional and responsive as expected. Once the health check on the new element clears, the old one is removed, and the deployment tool "rolls" to the next element in the system being updated. We find this one in many Kubernetes-based applications, and it can work well for cloud-native infrastructure, too.&lt;/p&gt;
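&lt;p&gt;Here's a minimal sketch of the roll itself, with hypothetical &lt;code&gt;create&lt;/code&gt;, &lt;code&gt;health_check&lt;/code&gt;, and &lt;code&gt;destroy&lt;/code&gt; operations standing in for what a platform like Kubernetes does for you:&lt;/p&gt;

```python
# A minimal sketch of a rolling deployment: replace instances one at a
# time, waiting for a health check on each new instance before removing
# its predecessor. create(), health_check(), destroy() are hypothetical.

def rolling_update(old_instances, create, health_check, destroy):
    new_instances = []
    for old in old_instances:
        new = create()                 # stand up the replacement
        if not health_check(new):      # wait for it to report healthy
            destroy(new)               # failed: stop the roll, keep the rest
            raise RuntimeError(f"replacement for {old} failed health check")
        destroy(old)                   # only now remove the old instance
        new_instances.append(new)      # "roll" to the next element
    return new_instances
```

&lt;p&gt;Because only one element is in flight at a time, a failure mid-roll leaves the rest of the old system serving traffic untouched.&lt;/p&gt;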

&lt;p&gt;In the next article in this series, we'll explore how these three strategies are similar and different for infrastructure as code, and why you would use one over the others. Stay tuned!&lt;/p&gt;




&lt;p&gt;In the next parts of the series, we'll try these deployment strategies with Pulumi, exploring how to use code to define each kind with a test system. Watch this space!&lt;/p&gt;

&lt;p&gt;Meanwhile, while you're waiting, we did a few videos on this topic over at &lt;a href="https://www.youtube.com/c/PulumiTV/featured"&gt;PulumiTV&lt;/a&gt;, like &lt;a href="https://www.youtube.com/watch?v=vviIVCloMKQ&amp;amp;t=1s"&gt;this one on blue-green deployments with Pulumi and Python on GCP&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>iac</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
    <item>
      <title>2021 December Hackathon: Introduction</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Mon, 14 Feb 2022 16:48:25 +0000</pubDate>
      <link>https://dev.to/pulumi/2021-december-hackathon-introduction-1pgh</link>
      <guid>https://dev.to/pulumi/2021-december-hackathon-introduction-1pgh</guid>
      <description>&lt;p&gt;Pulumi's &lt;a href="https://www.pulumi.com/blog/multi-lang-hackathon"&gt;hackathon tradition&lt;/a&gt; continued in the last weeks of 2021 with our 2021 December hackathon. For one solid week, we had teams from across the company focus on improvements across the Pulumi ecosystem, and we brought in people from outside the engineering org to get perspectives on different needs. While there were some projects that were focused on internal work, there were still quite a few open-source projects that we can talk about publicly. We'll get more details from some of those teams over a few more posts. In this post, however, we're going to explore a bit about how we worked.&lt;/p&gt;

&lt;h2&gt;Setting up for success&lt;/h2&gt;

&lt;p&gt;We set up the teams for success before the hackathon began by starting asynchronous discussions around project ideas in a Slack channel. In in-person hackathons, teams often decide on projects the day the hackathon starts. By starting discussions early, we let folks decide on projects they'd be interested in contributing to and drove early architectural discussions so teams could hit the ground running. People also started to get excited about working on the various projects, and they were able to start connecting with folks outside of the engineering team to find time on their calendars early.&lt;/p&gt;

&lt;p&gt;Using a survey, we gathered what sorts of projects or areas of the ecosystem people were interested in working on, their timezone, and how they would like to collaborate. By using this data to sort people into teams, we ensured that people who were interested in learning about new areas of the ecosystem could find the right projects, that people who needed to work in a more synchronous environment could be paired with others in their timezone, and that people who worked similarly could find people who worked like them. It was an interesting experiment.&lt;/p&gt;

&lt;h2&gt;Working remotely&lt;/h2&gt;

&lt;p&gt;One cool thing we tracked this round is all of the different ways we collaborated across the different projects. In general, Pulumi is a remote workplace, and none of the teams were co-located for this hackathon. As a result, we had to consider how teams might work across timezones and outside of the classic physical co-located team dynamic. Each team got to pick the best ways for them to collaborate. Our teams' preferences ranged from daily standups over Zoom to collaboration solely on Slack to extensive pair programming and Zoom hangouts. In all, each team used multiple ways to communicate and engage depending on which parts of their projects they were working on. Teams exploring possible solutions often found that remote pair programming was a great way to brainstorm, for example. Once teams had a solid sense of what needed to be done to get to the next part of a project, many still attended a daily hackathon-wide Zoom hangout to have someone to hang out with and discuss any issues that arose. The whole org also had the opportunity to get to know one another a little better, without the feeling of lost time or forced socialization that weekly virtual watercoolers can take on after a couple of meetings, as many of us have found during this period of forced virtual interaction.&lt;/p&gt;

&lt;h2&gt;Showing off what we did&lt;/h2&gt;

&lt;p&gt;During the demo at the end of the hackathon, we asked teams to self-report which strategies they found the most helpful, what they learned, and, of course, what the results of their hackathon projects were. We found that everyone settled into how their team chose to work, and that pairing people based on location and work preferences did a lot to help teams succeed.&lt;/p&gt;




&lt;p&gt;We'll have more from each team soon! Keep watch on this space for new posts about hackathon projects.&lt;/p&gt;

&lt;p&gt;If you like our way of working and are interested in a new position, we have a lot of positions open right now and will have more in the future! Please head over to our &lt;a href="https://www.pulumi.com/careers"&gt;careers page&lt;/a&gt; to find out how to apply.&lt;/p&gt;

</description>
      <category>culture</category>
      <category>teamwork</category>
      <category>pulumi</category>
    </item>
    <item>
      <title>Understanding State</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Mon, 14 Feb 2022 16:40:27 +0000</pubDate>
      <link>https://dev.to/pulumi/understanding-state-3ahg</link>
      <guid>https://dev.to/pulumi/understanding-state-3ahg</guid>
      <description>&lt;p&gt;Let's talk about state, shall we? State is the collective properties of the system from one point in time. Think of it effectively as a snapshot of a system. State in computer science is actually a lot like state in physics, so let's start with something that's a bit easier to understand.&lt;/p&gt;

&lt;p&gt;We're going to examine a physical system: A ball dropping from my hand to the ground one meter (1m) below. The ball starts out at one point in time where it is at rest in my hand. It has no velocity, no motion. It has properties like color, texture, etc. that do not and will not change. The &lt;em&gt;state&lt;/em&gt; of the ball can be thought of as a position of 1m off the ground, with a color, texture, etc., and no velocity. Each of these variables has a specific value at that point in time.&lt;/p&gt;

&lt;p&gt;Now I open my hand. At the instant the ball leaves my hand, the ball has moved some distance to the ground, and its velocity has increased. Its state, therefore, has changed. If we imagine capturing the ball's motion with a slow-motion camera, we see a single frame for each position of the ball. Each frame is a single state of the system, and the difference between the frames is a change in state. When our state changes, one or more variables change. In this case, the variables of speed and direction (combined as velocity) and distance from the ground all change in each snapshot of time, or each state. We can use this knowledge to predict how the ball's state will change, allowing us to identify patterns.&lt;/p&gt;
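&lt;p&gt;If you like, you can treat each camera frame as a little dictionary of variables. This sketch (my own illustration, ignoring air resistance) samples the ball's state at a few points in time; note that only height and velocity change between frames, while the unchanging properties stay fixed:&lt;/p&gt;

```python
# Snapshots of the falling ball's state, sampled like frames from a
# slow-motion camera. Only height and velocity change between frames;
# properties like color stay constant. Air resistance is ignored.
G = 9.81  # gravitational acceleration, m/s^2

def ball_state(t, start_height=1.0):
    """State of the ball t seconds after release."""
    return {
        "color": "red",                               # unchanging property
        "height_m": max(start_height - 0.5 * G * t**2, 0.0),
        "velocity_mps": G * t,                        # downward speed grows
    }

frames = [ball_state(t / 100) for t in range(0, 50, 10)]  # every 0.1 s
```

&lt;p&gt;The list of frames is exactly the sequence of states; the difference between any two adjacent frames is a change in state.&lt;/p&gt;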

&lt;p&gt;When we think about our infrastructure that we manage with systems like Pulumi, we're thinking about states of the infrastructure system. How we move from one state to another, which variables change from state to state, and what the starting state and ending state are would all be considered and tracked. Most infrastructure-as-code systems track state in some fashion, though most rely on you, the user, to manage that state tracking with state files or other systems that you have to manage and choose. For this deep dive, though, I'm going to focus on how the hosted Pulumi service manages state.&lt;/p&gt;

&lt;p&gt;When we consider the state of our infrastructure over time, we need to think about how the infrastructure's state transitions between one point in time and another. Our program for any infrastructure-as-code platform defines the ideal, final state of the system. As the code executes, the infrastructure goes through a sequence of states, which we call the behavior of the system. For each tick of processing of the code, there is a defined state, so during execution we see transitions in state. That state change needs to be tracked so that, at any point in time, we know how the behavior of the system changed. That matters when multiple programs try to execute at once, when debugging system changes, and for other considerations that come with working in teams across a remote, cloud-based environment. In short, it's good to know what changed! When you use Pulumi, you have access to that change information through &lt;a href="https://www.pulumi.com/docs/intro/pulumi-service/audit-logs/"&gt;audit logging&lt;/a&gt; and can use &lt;a href="https://www.pulumi.com/docs/intro/pulumi-service/webhooks/"&gt;webhooks&lt;/a&gt; to feed those changes into other systems for observation, like a shared monitoring system with your security team or a distributed team that can't look over your shoulder as something deploys.&lt;/p&gt;

&lt;p&gt;Now, code execution doesn't always happen &lt;em&gt;exactly&lt;/em&gt; as we want it to due to all kinds of environmental factors from different chipsets to varying network connectivity and more. If you &lt;em&gt;really&lt;/em&gt; want to go down the rabbit hole here, I'm going to point you to formal methods, especially TLA+. Formal methods are a great way to model state for distributed, concurrent systems to identify race conditions, poor assumptions, and other common flaws in temporal logic. For now, though, we're going to keep talking about state in the more abstract sense.&lt;/p&gt;

&lt;p&gt;Putting all of the states together with their allowed transitions, so that we have clean pathways from initial states to next states, we get what's called a state machine. When working with concurrent distributed systems—systems where multiple things can happen simultaneously, spread out over many machines; basically, any cloud system ever created—knowing the various states, changes, and combinations thereof is extremely important to ensuring that the one path we &lt;em&gt;want&lt;/em&gt; the system to take to a final desired state is the one that is taken.&lt;/p&gt;
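&lt;p&gt;A state machine can be as simple as a table of which states may follow which. This toy example (the states and transitions are illustrative, not how Pulumi models anything internally) rejects any pathway we didn't define:&lt;/p&gt;

```python
# A toy state machine for a piece of infrastructure: each state lists
# the states it may transition to, and step() rejects any transition
# that isn't on a defined pathway. States here are illustrative only.
TRANSITIONS = {
    "declared":     {"provisioning"},
    "provisioning": {"running", "failed"},
    "running":      {"updating", "destroyed"},
    "updating":     {"running", "failed"},
    "failed":       {"provisioning"},   # retry pathway
    "destroyed":    set(),              # terminal state
}

def step(current, nxt):
    """Advance the machine, refusing any undefined transition."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt
```

&lt;p&gt;Everything the system is allowed to do is visible in one table, which is exactly why state machines are so useful for reasoning about concurrent systems.&lt;/p&gt;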

&lt;p&gt;When using Pulumi, you don't have to worry about the state machine. The Pulumi Service tracks all of those states for you once the infrastructure's initial state is declared by importing the infrastructure to or creating it with Pulumi. You declare the desired state in code in the language of your choosing, and then that code tells the Pulumi CLI what you want. The CLI does all of the state computation, requesting and defining the pathway to the infrastructure final state defined in the program, and the Pulumi Service stores the state at each moment in time. The Pulumi dashboard, by extension, is your window into the Pulumi Service where you can see current state, desired state, and the behaviors of the system.&lt;/p&gt;

&lt;p&gt;I hope this short introduction to how state works, especially with infrastructure-as-code platforms, helps get you on your way! If you want to read more about state with Pulumi (and get some nifty diagrams), head to &lt;a href="https://www.pulumi.com/docs/intro/concepts/state"&gt;State and Backends&lt;/a&gt;. Until next time!&lt;/p&gt;




&lt;p&gt;Leslie Lamport has some fantastic, free resources and videos about the formal specifications in TLA+, which he created, at &lt;a href="http://lamport.azurewebsites.net/tla/tla.html"&gt;his site on TLA+&lt;/a&gt;. I'm a huge fan.&lt;/p&gt;

&lt;p&gt;Also, if you want to watch a short video on state to get a better sense of the physics example, head on over to &lt;a href="https://www.youtube.com/c/PulumiTV/videos"&gt;PulumiTV&lt;/a&gt; for &lt;a href="https://youtu.be/u2C71uF0rdM"&gt;episode 3&lt;/a&gt; of our &lt;a href="https://youtube.com/playlist?list=PLyy8Vx2ZoWlohOiedbaQqT5xYRkcDsm10"&gt;Quick Bites of Cloud Engineering series&lt;/a&gt; all about state.&lt;/p&gt;

</description>
      <category>iac</category>
      <category>infrastructure</category>
      <category>devops</category>
      <category>pulumi</category>
    </item>
    <item>
      <title>Serverless Logging Performance, Part 2</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Fri, 26 Jun 2020 16:33:39 +0000</pubDate>
      <link>https://dev.to/logdna/serverless-logging-performance-part-2-laj</link>
      <guid>https://dev.to/logdna/serverless-logging-performance-part-2-laj</guid>
      <description>&lt;p&gt;&lt;em&gt;When thinking about serverless applications, one thing that comes to mind immediately is efficiency. Running code that gets the job done as swiftly and efficiently as possible means you spend less money, which means good coding practices suddenly directly impact your bottom line. How does logging play into this, though? Every logging action your application takes is within the scope of that same performance evaluation. Logging processes can also be optimized just like for any process your code spins up. In this series of posts, let’s dive into how you can think about logging in a serverless world.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In part 1, we examined the concerns around cold starts and message construction. Now, we’ll talk about how logging objects (structured logs) instead of text can affect the cost of your serverless architecture.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Part 2: Performance, Episode 2&lt;/h2&gt;

&lt;p&gt;Best practices for logging currently focus on structured log objects rather than text-based messages, mainly because machines, rather than humans, are the primary audience for logs in our world of automation. Serverless logging is no different, and structure is even more important there than in other systems, considering the sheer volume of data a serverless architecture generates in a similar amount of time and the fact that the majority of serverless management tools rely on parsing log data to function. Alerts, monitoring, performance metrics, and even triggers for other actions all need to check logs and work off of them to help you manage your serverless systems.&lt;/p&gt;

&lt;h3&gt;Structured Logs&lt;/h3&gt;

&lt;p&gt;First, what are structured logs? Structured logs are data-enriched objects that enable machines to parse out relevant information without relying on regular expressions or other text processing. Think of a structured log as a text-based message (I’ve been calling them strings, but I know that the term can mean a lot of different things based on programming language) with associated metadata attached. The text-based message is for the human poking at the logs. The metadata, though, is really for the machine, and having that metadata ensures a machine will accurately identify a type of log line 100% of the time assuming the incoming data is accurate.&lt;/p&gt;

&lt;p&gt;If you use only text-based logging, you run the risk of two messages being so similar that a regex-based parsing solution flags them both as the same type when they are completely different. In addition, the extra processing time needed to parse a text-based message increases the cost associated with a serverless system. We’ve already explored that the time needed to run a serverless call correlates pretty well with the cost of running that call, so we know that time is money here. However, is a structured log really more performant, and therefore more cost-effective, in a serverless world? To answer that, let’s revisit the Python example from the previous article.&lt;/p&gt;

&lt;h3&gt;
  Construction, Revisited
&lt;/h3&gt;

&lt;p&gt;For structured logging with Python, there are plenty of small libraries on PyPI we could explore. I chose to work with the standard logging library and a library called structlog. I ran a benchmark test similar to the one from last round, using a few different configurations of structured logging:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K-j3v6tl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vgmbgs45fnksbe3zt8y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K-j3v6tl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vgmbgs45fnksbe3zt8y0.png" alt="Screenshot of terminal showing the performance run; numbers are duplicated in the article text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;With 1 Variable (microsec)&lt;/th&gt;
&lt;th&gt;With 2 Variables (microsec)&lt;/th&gt;
&lt;th&gt;With Multiple Variables (microsec)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;text, fastest string&lt;/td&gt;
&lt;td&gt;23.2711&lt;/td&gt;
&lt;td&gt;23.6432&lt;/td&gt;
&lt;td&gt;25.7583&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;structured, built-in&lt;/td&gt;
&lt;td&gt;31.3718&lt;/td&gt;
&lt;td&gt;33.7994&lt;/td&gt;
&lt;td&gt;49.9785&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;structured, structlog&lt;/td&gt;
&lt;td&gt;36.3576&lt;/td&gt;
&lt;td&gt;40.0466&lt;/td&gt;
&lt;td&gt;60.9309&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;structured, structlog performance run&lt;/td&gt;
&lt;td&gt;12.8722&lt;/td&gt;
&lt;td&gt;13.1769&lt;/td&gt;
&lt;td&gt;20.8687&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time Context&lt;/th&gt;
&lt;th&gt;Time Ran&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;real&lt;/td&gt;
&lt;td&gt;3m23.373s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;user&lt;/td&gt;
&lt;td&gt;2m55.168s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sys&lt;/td&gt;
&lt;td&gt;0m12.598s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In this case, the various methods are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;text, fastest string&lt;/code&gt; to provide a sort of control. This method was the %-formatting direct call from last round.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;structured, built-in&lt;/code&gt; is using the built-in standard logging library and the method found in the logging cookbook for generating a structured logging setup&lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;sup&gt;,&lt;/sup&gt;&lt;sup id="fnref2"&gt;2&lt;/sup&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;structured, structlog&lt;/code&gt; is using the structlog library with a default setup.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;structured, structlog performance run&lt;/code&gt; is using the structlog library with a performance tune based on the structlog docs.&lt;sup id="fnref3"&gt;3&lt;/sup&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m adding in the time to run the full script so you can see how long I ran the benchmark; the timing was generated by running &lt;code&gt;time&lt;/code&gt; at the terminal. This addition is to ensure I’m running the script over a long enough period of time to account for system differences (e.g., silicon and hardware differences, I/O differences in tuning, OS processes).&lt;/p&gt;

&lt;p&gt;You’ll notice that, believe it or not, the performance tune of structlog is significantly faster than even the fastest text-based logging method we had. That in and of itself is remarkable, especially considering the library still calls the standard library in its configuration. The likely explanation for this performance is twofold. First, the structlog library has a configuration option called &lt;code&gt;cache_logger_on_first_use&lt;/code&gt; that caches the bound logger on first use rather than rebuilding it on every call. This choice is very similar to the performance boost you receive when you reduce your cold start times for the overall application. Second, the JSON serializer in the standard library is not all that fast compared to other libraries out there.&lt;sup id="fnref4"&gt;4&lt;/sup&gt; The structlog docs encourage using a different JSON serializer. As I’m running this test on 3.7.4, I have access to RapidJSON’s port to Python from C++ and therefore switched the serializer to RapidJSON. Together, these two boosts should explain the significantly faster run time.&lt;/p&gt;
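&lt;p&gt;For the curious, the performance run’s configuration was along these lines. This is a sketch based on the structlog performance documentation, not my exact benchmark code, and it assumes the &lt;code&gt;structlog&lt;/code&gt; and &lt;code&gt;python-rapidjson&lt;/code&gt; packages are installed:&lt;/p&gt;

```python
import rapidjson  # python-rapidjson, a C++-backed serializer
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        # Swap the stdlib json serializer for RapidJSON's dumps.
        structlog.processors.JSONRenderer(serializer=rapidjson.dumps),
    ],
    # Build the bound logger once on first use and reuse it afterwards.
    cache_logger_on_first_use=True,
)

log = structlog.get_logger()
log.info("request_complete", duration_ms=12)
```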

&lt;h3&gt;
  The Meaning of Speed
&lt;/h3&gt;

&lt;p&gt;To go back to why this matters specifically for serverless systems, we need to consider why structured logs are preferred there. Automation triggered by events captured in logs needs the consistency of structured logging. Most folks assume that generating a string is faster than generating an entire object that includes metadata, though, and therefore prefer generating the string. As we’ve seen here, generating structured logs in a specifically tuned manner can still give you the functionality you need with the cost savings that you want.&lt;/p&gt;

&lt;p&gt;So far in this series, however, we’ve only been considering speed. Another part of the equation that we haven’t considered is networking throughput. When we start talking about the differences between text-based logs and structured logs, we have to consider how serverless providers charge for network traffic. In the next few parts of the series, we’ll examine that angle and look at how much memory is required to build logging for your serverless application.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://docs.python.org/3.7/howto/logging-cookbook.html#implementing-structured-logging"&gt;https://docs.python.org/3.7/howto/logging-cookbook.html#implementing-structured-logging&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Note that I used a slightly different call than the one in the most current docs as I’m running 3.7.4 for this series. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;&lt;a href="https://www.structlog.org/en/stable/performance.html"&gt;https://www.structlog.org/en/stable/performance.html&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;&lt;a href="https://python-rapidjson.readthedocs.io/en/latest/benchmarks.html#serialization"&gt;https://python-rapidjson.readthedocs.io/en/latest/benchmarks.html#serialization&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>serverless</category>
      <category>logging</category>
      <category>devops</category>
    </item>
    <item>
      <title>Serverless Logging Performance, Part 1</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Mon, 15 Jun 2020 15:11:28 +0000</pubDate>
      <link>https://dev.to/logdna/serverless-logging-performance-part-1-3ifp</link>
      <guid>https://dev.to/logdna/serverless-logging-performance-part-1-3ifp</guid>
      <description>&lt;p&gt;&lt;em&gt;When thinking about serverless applications, one thing that comes to mind immediately is efficiency. Running code that gets the job done as swiftly and efficiently as possible means you spend less money, which means good coding practices suddenly directly impact your bottom line. How does logging play into this, though? Every logging action your application takes is within the scope of that same performance evaluation. Logging processes can be optimized just like any other process your code spins up. In this series of posts, let's dive into how you can think about logging in a serverless world.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  Part 1: Performance, Episode 1
&lt;/h2&gt;

&lt;p&gt;I know that one of the benefits of serverless architectures is not managing your own hardware, but you still need to consider the hardware your code requires when thinking about managing cost. Most serverless providers charge by some combination of the amount of memory and unit of time needed by your function or application. To take a significantly simplified pricing model for two offerings from each provider, just for argument's sake: AWS charges per request and byte-second of memory (Lambda) or per vCPU byte-second and byte-second of memory (Fargate), whereas GCP charges per vCPU byte-second (Cloud Run) or per invocation (Cloud Functions). There's a lot more involved in pricing structures for these as-a-Service systems, including the cost for network ingress or egress, storage, data stores, and analytics. However, we're just going to examine the actual compute power here for this deep dive on logging.&lt;/p&gt;

&lt;p&gt;Knowing how much compute power you'll need for your code drives the decision for different tiers, and therefore performance equals money. Your choices around how to log data can affect those performance metrics. Let's start with examining the first time your code spins up.&lt;/p&gt;

&lt;h3&gt;
  Cold Start Considerations
&lt;/h3&gt;

&lt;p&gt;One of the biggest concerns with serverless performance and efficiency is the cold start, or the first spin-up of an instance when there's no warm (recent) instance available. During the start of a serverless system, the service brings a container online, and then the container starts running your specific system. At the point the container starts running your unique instance, the meter starts running. The first thing the container does on a cold start is a bootstrapping process where it installs any dependencies you expect to need. If you've ever watched a Docker container image build, you know that this process can run on the scale of minutes. Then the container's environment is set, and your code begins to run. On a warm start, on the other hand, the environment comes preset, and your code begins running immediately.&lt;/p&gt;

&lt;p&gt;As you might guess, the bootstrapping process in a cold start can be expensive. This need for efficient setup is really where dependency management becomes a not-so-secret weapon for lowering the cost of a serverless system. Your logging library of choice is no exception. Since setting up dependencies does take compute time, you have to factor in that time in your performance calculations. As a result, the ideal state for logging in a serverless architecture is to use built-in methods wherever possible. That means using Python's built-in logging library, for example, to reduce calls to PyPI. If you can't use a built-in library, such as if you're running a NodeJS function and need more than just &lt;code&gt;console.log()&lt;/code&gt;, the best logging library for you is the one that balances the features you want with the least number of dependencies needed to make those features happen. &lt;/p&gt;
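&lt;p&gt;To make that concrete, here's a minimal sketch of the pattern with Python's built-in logging library (the handler name and event shape are invented for illustration): configure the logger once at module import so warm starts reuse it and the cold-start bootstrap pulls nothing extra from PyPI.&lt;/p&gt;

```python
import logging

# Module-level setup runs once per container, on the cold start only;
# warm starts reuse this state, so no setup cost is paid per invocation.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
logger = logging.getLogger("handler")

def handler(event, context=None):
    # Hypothetical serverless entry point; only the stdlib is imported,
    # so there are no extra dependencies to install during bootstrap.
    logger.info("received event id=%s", event.get("id"))
    return {"status": "ok"}
```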

&lt;h3&gt;
  Construction
&lt;/h3&gt;

&lt;p&gt;You should examine how you are constructing logging messages and objects. If there is upfront processing going on when constructing a message or an object, consider offloading that processing time to see if you can reduce usage. To go back to Python as an example, consider this simple benchmark set&lt;sup id="fnref1"&gt;1&lt;/sup&gt;, run from Python 3.7.4:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;With 1 String (microsec)&lt;/th&gt;
&lt;th&gt;With String and Integer (microsec)&lt;/th&gt;
&lt;th&gt;With Multiple Inputs (microsec)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;%-format&lt;/td&gt;
&lt;td&gt;19.376829&lt;/td&gt;
&lt;td&gt;19.629766&lt;/td&gt;
&lt;td&gt;21.335113&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;%-format, direct call&lt;/td&gt;
&lt;td&gt;20.347689&lt;/td&gt;
&lt;td&gt;19.808525&lt;/td&gt;
&lt;td&gt;20.626333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;str.format()&lt;/td&gt;
&lt;td&gt;20.324141&lt;/td&gt;
&lt;td&gt;20.329610&lt;/td&gt;
&lt;td&gt;21.905827&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;f-string&lt;/td&gt;
&lt;td&gt;19.461603&lt;/td&gt;
&lt;td&gt;19.875766&lt;/td&gt;
&lt;td&gt;20.073513&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;concatenation&lt;/td&gt;
&lt;td&gt;18.930094&lt;/td&gt;
&lt;td&gt;19.837633&lt;/td&gt;
&lt;td&gt;24.505294&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;direct log, as variable&lt;/td&gt;
&lt;td&gt;21.637833&lt;/td&gt;
&lt;td&gt;--&lt;/td&gt;
&lt;td&gt;--&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;direct log, direct insertion&lt;/td&gt;
&lt;td&gt;21.816895&lt;/td&gt;
&lt;td&gt;--&lt;/td&gt;
&lt;td&gt;--&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;direct log, as list&lt;/td&gt;
&lt;td&gt;--&lt;/td&gt;
&lt;td&gt;22.872546&lt;/td&gt;
&lt;td&gt;23.445436&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These &lt;code&gt;timeit&lt;/code&gt; results came from using different string generation methods and putting them into a logger to drop a human-readable message into the logs, adding complexity as we went. The methods in the benchmark are&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the original %-formatting method generating a string and then passing that to the logger,&lt;/li&gt;
&lt;li&gt;a direct method passing a format string with %-formatting and then the variables to the logger,&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;str.format()&lt;/code&gt; method generating a string and then passing that to the logger,&lt;/li&gt;
&lt;li&gt;the f-string formatting method generating a string and then passing that to the logger,&lt;/li&gt;
&lt;li&gt;a generic string concatenation method generating a string and then passing that to the logger,&lt;/li&gt;
&lt;li&gt;direct pass of a variable pointing to a string to the logger to generate the message internally,&lt;/li&gt;
&lt;li&gt;direct pass of a string to the logger to generate the message internally, and&lt;/li&gt;
&lt;li&gt;direct pass of a list to the logger to add to the message at the end.&lt;/li&gt;
&lt;/ul&gt;
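&lt;p&gt;If you want a feel for the shape of these measurements without pulling the repo, a minimal &lt;code&gt;timeit&lt;/code&gt; sketch of three of the methods might look like the following. This is not the exact benchmark code, and absolute numbers will vary by machine:&lt;/p&gt;

```python
import logging
import timeit

# Discard output so we time message construction, not I/O.
log = logging.getLogger("bench")
log.addHandler(logging.NullHandler())
log.propagate = False
log.setLevel(logging.INFO)

count, word = 404, "not found"
n = 10_000

# Build the string with %-formatting first, then pass it to the logger.
pct_format = timeit.timeit(
    lambda: log.info("status %s: %s" % (count, word)), number=n)

# Build the string with an f-string first, then pass it to the logger.
f_string = timeit.timeit(
    lambda: log.info(f"status {count}: {word}"), number=n)

# Pass the format string and variables directly; the logger formats lazily.
direct = timeit.timeit(
    lambda: log.info("status %s: %s", count, word), number=n)

print(pct_format, f_string, direct)  # seconds for n calls each
```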

&lt;p&gt;As we add more integers and strings, the complexity of each call rises, and we start to encounter the different costs associated with each string conversion call. The concatenation method, for example, requires a call to &lt;code&gt;str()&lt;/code&gt; to convert the integer into a string so it can be concatenated. You’ll notice that, depending on need, different methods are more efficient than others. Under the hood, Python’s logging library uses %-formatting, the oldest method, to ensure backwards compatibility.&lt;/p&gt;

&lt;p&gt;We’ll spend some more time in a different series digging into Python’s logging library to understand better benchmarks, but this example should give you a fairly good sense of why you need to consider how you construct your logging messages if you’re going text-based. These numbers may not seem like much, but, in reality, you probably are making hundreds of calls to your logger on a larger instance than this. The difference between the fastest method and the slowest method when you have multiple inputs is on the order of 4.5 microseconds, and this message generation example is extremely simple in comparison to what you typically need for logging in a serverless application. When milliseconds count, this kind of consideration is a priority.&lt;/p&gt;

&lt;h2&gt;
  Next time
&lt;/h2&gt;

&lt;p&gt;Once you've considered how to improve the performance of startup and basic text-based messages in your logging setup for serverless, you need to start thinking about crafting logging objects that can be transmitted over the network in the smallest possible package as fast as possible. In the next post, we're going to dive into logging objects versus strings, and in subsequent posts, we'll start thinking about memory allocation, concurrency and state, and security.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;If you want to test these benchmarks yourself on your own system, you can find the code at &lt;a href="https://github.com/nimbinatus/benchmarking-logs"&gt;https://github.com/nimbinatus/benchmarking-logs&lt;/a&gt;. The repo is a work in progress, so this code may be adjusted when you read this. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>serverless</category>
      <category>logging</category>
      <category>devops</category>
    </item>
    <item>
      <title>Adding Logs to Legacy Applications</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Wed, 27 May 2020 15:02:29 +0000</pubDate>
      <link>https://dev.to/logdna/adding-logs-to-legacy-applications-3e5l</link>
      <guid>https://dev.to/logdna/adding-logs-to-legacy-applications-3e5l</guid>
      <description>&lt;p&gt;As the final interactive in my mini-workshop at DeveloperWeek Austin 2019, I posed the following scenario to the audience:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You have a legacy application that has not been updated in 5 years. The system is running Python 2, which is sunsetting in January 2020. The system recently had its first incident in nearly 4 years, and your team was among the group that had to bring it back up. The logs that you received were not very helpful, and bringing the production instance back up ended up being a lot of trial and error.&lt;/p&gt;

&lt;p&gt;Management has decided all applications must be on Python 3 by the end of code freeze in January 2020. Your team has been tasked with updating the application to use Python 3. It's the ideal time to add proper logging. How would you go about planning and executing that logging update?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The scenario generated a lot of discussion, and people had some very good answers. However, someone on Twitter (thanks for the question, &lt;a href="https://twitter.com/russellyazbeck/status/1192526502073683973?s=20"&gt;@russellyazbeck&lt;/a&gt;!) pointed out to me that I completely forgot to provide my take on the same scenario. Whoops! I did respond in a thread on Twitter, but I'd like to lay it out and expand on it a bit here as it's definitely easier to find.&lt;/p&gt;

&lt;p&gt;First and foremost, the members of the audience who started with bringing everyone together were spot on. While you certainly have the data your immediate team gathered during the incident, reaching out to the rest of the teams that responded and gathering their data is also important. Why? They have different perspectives. It's very easy to get lost in your own context and stop noticing quirks or issues unique to your programming language of choice, to a platform you're somewhat familiar with, or even to a team that you have worked with in the past. Ask anyone who has ever taught a concept to someone else successfully, and nine times out of ten they'll mention how surprised they were at where someone got lost, or how hard it was to avoid jargon that needed further explanation. So make sure you get data from everyone who was involved in that incident so you can start understanding what other people might need to understand the same issue.&lt;/p&gt;

&lt;p&gt;Once you've gotten everyone together, start talking about the ideal incident response and the ideal data that you would have gathered. What should those logs have shown? What data did you actually need? What data was just noise? Was there any data that duplicated information? You can also use this time to discuss which log levels would be useful for each type of data. Log levels help reduce or structure the noise coming from a logging system. A good production system lets you tune the logs based on which environment you're in (dev, test/QA, staging, prod, or some combination thereof). Any team doing ops work on that system will also thank you if they don't have to guess whether a log raised by your system is a deprecation warning (WARN), something that isn't acting right but won't take down the whole house of cards (ERROR), or something that took down everything including your databases and networking (CRITICAL). Coming to a consensus on which logging levels are necessary and how to define them is important both now and in the future.&lt;/p&gt;
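&lt;p&gt;With Python's built-in logging library, that per-environment tuning can be as simple as reading the threshold from the environment (the &lt;code&gt;LOG_LEVEL&lt;/code&gt; variable name here is just an assumption for illustration):&lt;/p&gt;

```python
import logging
import os

# Dev can export LOG_LEVEL=DEBUG while prod stays at WARNING,
# with no code changes between environments.
level_name = os.environ.get("LOG_LEVEL", "WARNING")
logging.basicConfig(level=getattr(logging, level_name, logging.WARNING))

log = logging.getLogger("legacy-app")
log.warning("deprecated endpoint called")    # WARN: works now, won't forever
log.error("upstream call failed, retrying")  # ERROR: broken, but house stands
log.critical("database unreachable")         # CRITICAL: the whole house of cards
```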

&lt;p&gt;Now that you have a much better idea of what kind of data you actually needed, you can pick a library or logging structure that provides it. In a scenario like this one, where there's a legacy application and multiple teams likely working on multiple projects, I'd definitely look to a structured logging library like &lt;a href="http://www.structlog.org/en/stable/"&gt;structlog&lt;/a&gt;. While I could roll my own on top of the standard logging library, my guess is the rest of the team (and future team members) would find a library with good docs and standardized usage much easier for maintaining good logs in the long run. An opinionated logging library would likely be best to ensure everyone logs well. Personally, I wouldn't use only text logs for this sort of situation, even if there's only one application that your company owns. Start as you intend to continue so that it's a lot easier down the line to ensure future systems are easier for others to onboard onto, with similar features, common style, and other familiar elements. Keep in mind, though, a point that I made in the workshop: the audience of structured logs isn't really a human or a set of humans, but rather many machines parsing the data for you.&lt;/p&gt;

&lt;p&gt;By the way, I want to point something out. I chose the Python 2 to Python 3 conversion scenario because it's one of those moments that's an ideal time to add logs. You're already in the codebase digging around and touching everything. You're getting to know what's there, so you're unlikely to skip anything major (well, assuming you're not using &lt;a href="https://pypi.org/project/six/"&gt;six&lt;/a&gt; or the built-in &lt;a href="https://docs.python.org/3/library/2to3.html"&gt;2to3&lt;/a&gt;). It's also the ideal time, as noted by a few folks in the audience, to add in a deprecation warning for anything that relied on the Python 2 conventions for hitting the application. However, it is a bit of a red herring. You can add these kinds of logs to any system at any time. Legacy systems are often viewed as the most dreaded to work with, hence the scenario, and incidents are one of the ideal times to take a step back and understand what data is flowing through your system. However, you can use this same thought process for a modern application, an application that hasn't had an incident, or even an application that's brand new. Walk through the scenario as if your application just had that moment happen (or knock it over deliberately in dev or staging when those environments are not in use for anything critical), and see what comes out of the brainstorming exercise with the various teams that would theoretically be involved. Then add logs and monitor the outcome.&lt;/p&gt;

&lt;p&gt;How else would you respond to this scenario I laid out here?&lt;/p&gt;

</description>
      <category>logging</category>
      <category>legacy</category>
      <category>devops</category>
    </item>
    <item>
      <title>LogDNA Engineering</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Wed, 20 May 2020 18:27:10 +0000</pubDate>
      <link>https://dev.to/logdna/logdna-engineering-2cp4</link>
      <guid>https://dev.to/logdna/logdna-engineering-2cp4</guid>
      <description>&lt;p&gt;Let’s get this blog started!&lt;/p&gt;

&lt;h2&gt;
  What’s this blog all about?
&lt;/h2&gt;

&lt;p&gt;We’re going to focus on technical discussions around logging and discuss some of the challenges and failures that we meet along the way. You’ll hear from me (Laura) a lot, but you’ll also hear from members of our engineering teams.&lt;/p&gt;

&lt;p&gt;You won't see product pitches or other vendor-y posts here. This is a technical blog, not a sales blog. We will likely mention LogDNA sometimes—we can't help but talk about the company when we're discussing unique challenges we've faced along the way. However, the focus will always be on the technical side of things.&lt;/p&gt;

&lt;p&gt;If there are any topics around logging you'd love to hear about, let us know!&lt;/p&gt;

</description>
      <category>logging</category>
      <category>devops</category>
      <category>engineering</category>
      <category>startup</category>
    </item>
    <item>
      <title>Nevertheless, Laura Coded (and admin'd and ops'd and...)</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Fri, 06 Mar 2020 15:26:16 +0000</pubDate>
      <link>https://dev.to/nimbinatus/nevertheless-laura-coded-and-admin-d-and-ops-d-and-jad</link>
      <guid>https://dev.to/nimbinatus/nevertheless-laura-coded-and-admin-d-and-ops-d-and-jad</guid>
      <description>&lt;p&gt;I'm what a dev looks like. I'm a DevOps practitioner, a self-taught coder, a self-taught admin, and a beginner-safe speaker.&lt;/p&gt;

&lt;p&gt;I'm also an example of what a developer looks like when they've changed careers and come to the industry as an adult. I've been a literacy researcher, a science museum educator, a standardized test editor, and a technical writer. I was the kid who read the manuals and watched as others played video games (though I got to play sometimes, too). I got into hardware as a hobbyist, learned to put Linux on old boxes because I needed a space to tinker with an operating system I had heard about and couldn't afford to mess up the computer that I needed for school, taught myself to solder so I could teach kids at the museum, and picked up Python by myself--and then taught other people how to code from there. When I had an opportunity to prove my coding skills in the real world, I grabbed on with both hands. Little did I know that I was going to be taking on a production environment for a legacy system.&lt;/p&gt;

&lt;p&gt;Nevertheless, I coded, admin'd, ops'd, owned prod, knocked over prod, rebuilt, and showed the world what a DevOps practitioner can look like.&lt;/p&gt;

&lt;p&gt;I go out now and speak in front of people along with coding and running systems, and I delight in helping people new to tech understand how things work. If I'm going to take people on a deep dive into a topic, I try my hardest to ensure everyone can keep up with me at every step of the way.&lt;/p&gt;

&lt;p&gt;For International Women's Day, here's my message to all of you (a little early, yes, but I want y'all to get this): Dream big. Fight hard. Show the world what you can do. I believe in you because I've walked this road with you already. You are not alone.&lt;/p&gt;

</description>
      <category>wecoded</category>
    </item>
    <item>
      <title>Working Within Imposter Syndrome</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Mon, 04 Nov 2019 06:00:00 +0000</pubDate>
      <link>https://dev.to/nimbinatus/working-within-imposter-syndrome-54f6</link>
      <guid>https://dev.to/nimbinatus/working-within-imposter-syndrome-54f6</guid>
      <description>&lt;p&gt;Imposter syndrome is a huge topic to cover in tech right now. I constantly see people on Twitter bringing up their imposter syndrome when talking about their everyday jobs, their interviews, their life history. I especially see it now during the ramp up to the next conference season as CFPs are opening and closing. I'm not immune, either. I nearly posted my own version of this frustration today while sitting here staring at my screen like my next CFP response would write itself. Writer's block and imposter syndrome is a very brutal combination. As I started on my own coping strategies, I realized I've never written down the tactics I've used to combat my own imposter syndrome. I've shared it during talks, coaching sessions, and informal gatherings, but I've never really written some of them down. So how about we talk about how to work within imposter syndrome to get a positive result, instead?&lt;/p&gt;

&lt;h2&gt;
  What it looks like for me
&lt;/h2&gt;

&lt;p&gt;For a bit of history, I actually come to the development world through a fairly circuitous path. I'm a self-taught dev. While I was surrounded by tech growing up, I almost always ended up watching or reading the manuals instead of doing any tech myself. I loved to tinker with hardware, though, from doing FIRST Robotics in high school to messing with old laptops in college. I installed a lot of Linux distros on the old laptops just because I could afford it and I wanted to see what my older brother was talking about. In college, I took one programming course, and I picked up the rudiments of a few other languages while I went through my degree in earth and atmospheric sciences. My love of hardware brought me to Arduinos and some C work and then to Python, but I was a hobbyist at most. Even when I ended up starting to teach Python to beginners, I never felt that I was a real developer. I was just an editor that did programming for fun. It wasn't until I joined Rackspace that I was given the chance to take development on as a profession, and I grabbed the opportunity with both hands.&lt;/p&gt;

&lt;p&gt;As you might imagine, this checkered history means imposter syndrome for me comes out in a lot of ways. When I sit down to respond to a CFP, for example, it hits when I start looking at featured speakers or at how many attendees they expect. It hits whenever I sit down to write on this blog about anything remotely technical. It can even hit in the middle of a talk if I see someone getting up to leave—or someone coming in after I start to sit down and listen. The manifestation should be fairly familiar to many: Who am I to talk about this? How can my small experiences even compare to the history that person has on this topic? Why are they here listening to me?&lt;/p&gt;

&lt;h2&gt;
  Strategies from the writing world
&lt;/h2&gt;

&lt;p&gt;When I end up facing my imposter syndrome periodically like when I need to respond to a CFP or when I know I want to get a blog post out, I start treating it a bit like I would treat writer's block. One thing that always worked well for me for writing was to change how I was writing. If I was staring at a screen, I picked up a pen and grabbed a notebook. I actually did that to start writing this article. Once I got moving, I could switch back to the medium I intended to write from and keep going. Just that change of venue helped tremendously. My theory is that it works for two reasons. First, my brain gets stuck in a loop staring at a screen thinking that I have nothing to write. By switching to a different medium, that thought process falls away and allows my brain to start a new train of thought. Second, the tactile stimulus of a pen and a piece of paper sparks different neural pathways, prompting the ability to think creatively about something new. Not the most scientific rationale, but if it works, it works!&lt;/p&gt;

&lt;p&gt;Another strategy when I'm dealing with imposter syndrome in the middle of something is to get up and walk away. If I'm writing from a place of feeling inferior or like a fraud, my writing will reflect that. I know that, even if I'm still moving, the momentum I have isn't going to get me to the end of the task, because I'll either need to rewrite the piece later or be fretting about it for hours afterward. So I physically leave my desk, often to go make a cup of tea. Leaving the task to go do something else, especially something as process-driven and calming as making a cup of tea, resets my brain. I stop thinking about how I don't know what I'm doing when I get a whiff of my favorite black tea or a good herbal blend.&lt;/p&gt;

&lt;p&gt;Now, sometimes I can't physically walk away. Maybe I'm in the middle of helping with an incident, to go completely outside the world of writing. The problem with imposter syndrome for me is that it's often caught up in a charged situation. High stress, high nerves, strong reactions. In short, a situation where I care a lot about what's going on and about getting things right for one reason or another. Even in those cases, though, I can take a moment to sip some water or to turn to a slightly different facet of the problem. It's a forced disconnection from the situation at hand, even for a moment. Something just long enough to let myself release that white-knuckle grip, let go of whatever thoughts of insecurity I have, and remember that I'm sitting where I am for a reason, even if I can't believe it at the time. These kinds of highly charged situations, though, are more often solved for me with strategies I learned as a performer.&lt;/p&gt;

&lt;h2&gt;
  Strategies from the performance world
&lt;/h2&gt;

&lt;p&gt;Right before I got into editing, I was a science museum educator. I had to give shows and demos all day long. Despite my degree, you better believe that I had imposter syndrome come up daily! I learned a lot of coping strategies to help myself get through every talk.&lt;/p&gt;

&lt;p&gt;Have you ever heard the phrase "fake it until you make it"? Well, that attitude applies to imposter syndrome for me when it comes to being up on stage. The illusion of confidence and of knowing what I'm talking about goes a long way toward actually making me confident enough to stand up on stage, and believing that I am knowledgeable enough to answer questions helps a lot. I get there by keeping a confidence booster collection that I read before I go onstage. It's a collection of messages, comments, testimonials, anything really, from other people telling me I did a good job. If you use Slack a lot like I do, starred messages work wonderfully for this. When I was at Rackspace, I starred every message that noted something I did well, from solving a production problem to positive messages about hosting our internal conference. Whenever I started really spiraling into imposter syndrome before a talk, I would flip through those messages to remind myself that others respected my voice. Back when I was at the museum, I would remind myself of where I came from by periodically writing the name of my degree or alma mater on the inside of my wrist under my watch in tiny letters. Whenever I needed that confidence boost, I would look at that message to myself and take a moment to remember. It's amazing how a small reminder that you've done harder things can help you fake that confidence in the moment until you actually believe yourself.&lt;/p&gt;

&lt;p&gt;Another thing I had to learn was how to handle mistakes, missteps, or failures, including demo failures, during live shows. For me, the fear of exposure as a fraud that comes with imposter syndrome is a lot like the fear of failing at something while on stage. Both situations, as I mentioned before, are highly charged. So I learned the art of acknowledging a failure live. Depending on the situation, I would turn the failure into a joke ("I guess the volcano is still tired from daylight savings time") or just forge onward ("Well, that dataset isn't the one I thought it was. However, we can still talk about this one."). Sometimes, none of my coping strategies work and I still feel like a fraud. That's okay. I force myself to change the lyrics live. Rather than "Who am I to talk about this?", I respond, "While I might not be the one to talk about this, I won't know until I try." Or instead of "Why are they listening to me?", I make myself think, "Well, they're here, so let's make it fun anyway." Acknowledging that fear and then changing the focus is a strong strategy that I use whenever I can't seem to shake the feeling of inadequacy. It takes practice, just like handling situations live, but it's well worth the time it takes to learn.&lt;/p&gt;

&lt;h2&gt;
  IRL
&lt;/h2&gt;

&lt;p&gt;I'll be using these a lot this week, as I'm going to be presenting at DeveloperWeek Austin. I always get nerves and imposter syndrome before I go on stage, no matter how many times I've done it. If you happen to be there and look really closely before I go up on stage, you'll likely see some tiny letters on the inside of my wrist, waiting for me to need that boost, or me flipping through my phone reading my confidence collection. In addition to the conference, I have a lot more CFPs to respond to this month. I'll be using all of these strategies to get through the stressful process of getting words on a page.&lt;/p&gt;

&lt;p&gt;How do you work from within your imposter syndrome? Come find me on &lt;a href="https://twitter.com/nimbinatus/status/1191485763474599936"&gt;Twitter&lt;/a&gt; and let me know.&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>imposter</category>
    </item>
    <item>
      <title>Pelican and GitHub Pages</title>
      <dc:creator>Laura Santamaria</dc:creator>
      <pubDate>Sat, 28 Sep 2019 05:00:00 +0000</pubDate>
      <link>https://dev.to/nimbinatus/pelican-and-github-pages-1m2i</link>
      <guid>https://dev.to/nimbinatus/pelican-and-github-pages-1m2i</guid>
      <description>&lt;p&gt;I thought I'd throw together a quick post on using Pelican with GitHub Pages and GitHub Actions.&lt;/p&gt;

&lt;p&gt;I decided to use a Python-based static-site generator instead of Jekyll because (a) I love Python and (b) Ruby and I aren't always on the best of terms. In addition, I wanted a system that I could really dig into and throw plugins together for whenever I wanted. In another job, I managed a custom Jekyll system with plugins and hooks that I built myself, but I know doing that sort of thing in Ruby would take me a lot longer than just knocking it together in Python. Perhaps in the future I'll build the site with a different system depending on how complex it gets, but a static-site generator really has a lot of appeal for something this simple. I chose Pelican mainly because I wanted to figure out how it worked; nothing Earth-shattering there.&lt;/p&gt;

&lt;p&gt;The biggest challenge I had was automating publication of the site to GitHub Pages. I wanted to be able to write a post on my phone on a plane and publish it from my phone, without the ability to build the site locally. I certainly could have done that with a more traditional content management system (CMS) like Ghost or (&lt;em&gt;shudder&lt;/em&gt;) WordPress. However, there was an appealing challenge in making it work, and I admit to jumping at the chance to try GitHub Actions. So I started getting the site in order locally, and then I joined the beta for Actions to get started.&lt;/p&gt;

&lt;p&gt;First, I needed to trigger a build when a push goes to the right branch. I decided to keep the source code of the site on a separate branch, since user pages on GitHub require publishing from master in a specially named repo. So my workflow had to trigger from a different branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
      - source

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I decided on an Ubuntu image to build on since I run Ubuntu at home, so the build environment should be fairly close to my local builder. Everything from here on is a job step.&lt;/p&gt;
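
&lt;p&gt;For reference, those steps all hang off a job definition that looks roughly like this (a sketch; I'm assuming the generic &lt;code&gt;ubuntu-latest&lt;/code&gt; runner label here, and the job name is arbitrary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # each of the snippets below is one entry in this list

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;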

&lt;p&gt;Next, I had to check out that specific branch. I borrowed an Action to make that happen:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - uses: actions/checkout@v1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, the workflow needed to set up the overall environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: Set up Python 3.7
      uses: actions/setup-python@v1
      with:
        python-version: 3.7
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, a make command! The Makefile ensures that I don't have to remember all of the options I run, and that means a more consistent build when I &lt;em&gt;do&lt;/em&gt; run it on my local system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: Build the prod site
      run: |
        make publish

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
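


&lt;p&gt;For context, the &lt;code&gt;publish&lt;/code&gt; target comes from the Makefile that &lt;code&gt;pelican-quickstart&lt;/code&gt; generates. Trimmed down, it looks roughly like this; your paths and variable names may differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PELICAN ?= pelican
INPUTDIR = content
OUTPUTDIR = output
PUBLISHCONF = publishconf.py

publish:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(PUBLISHCONF)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The key difference from a plain &lt;code&gt;make html&lt;/code&gt; build is the settings file: &lt;code&gt;publishconf.py&lt;/code&gt; layers production settings, like the real site URL, on top of the development config.&lt;/p&gt;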



&lt;p&gt;This next bit is really, really important for publishing to GitHub Pages, and it's a bit of a silly step, really. You need to stop Jekyll from trying (and failing) to parse your new raw HTML files. The way to do that on GitHub Pages is an empty file named &lt;code&gt;.nojekyll&lt;/code&gt;. So, a quick touch command in the designated output directory for the publish build before moving it to the master branch, and we're ready for publication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: Add nojekyll
      run: |
        touch ./output/.nojekyll

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, deployment time! We just need to send the raw HTML from the output directory to the master branch. However, I had to find a good Action to do it because I have a custom domain. All of the other Actions I tried broke my custom domain, so I settled on this one, which re-adds the custom domain designation on every deploy. The creator (see the &lt;code&gt;uses&lt;/code&gt; value) wrote some clear docs for the Action, too, which makes me very happy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: Deploy to GitHub Pages
      uses: JamesIves/github-pages-deploy-action@master
      if: success()
      env:
        ACCESS_TOKEN: ${{ secrets.GH_PAT }}
        BASE_BRANCH: source # The branch the action should deploy from.
        BRANCH: master # The branch the action should deploy to.
        CNAME: nimbinatus.com
        FOLDER: output # The folder the action should deploy.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So there you have it. I just push changes like this post to my source branch, and GitHub Actions publishes the whole thing for me. And all on the back of Python, which makes me happy.&lt;/p&gt;

&lt;p&gt;Check out the full workflow in &lt;a href="https://github.com/nimbinatus/nimbinatus.github.io/blob/source/.github/workflows/pelican.yml"&gt;my repo&lt;/a&gt;. Questions? Feel free to reach out to me on &lt;a href="https://twitter.com/nimbinatus"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>metastuff</category>
      <category>blog</category>
      <category>meta</category>
      <category>pelican</category>
    </item>
  </channel>
</rss>
