<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Caseywrble</title>
    <description>The latest articles on DEV Community by Caseywrble (@caseywrble).</description>
    <link>https://dev.to/caseywrble</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F653922%2F703e8abb-8719-4172-b6d7-1d8dbfb27343.jpg</url>
      <title>DEV Community: Caseywrble</title>
      <link>https://dev.to/caseywrble</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/caseywrble"/>
    <language>en</language>
    <item>
      <title>Help Your Future Self: Bare Necessities for All Log Lines</title>
      <dc:creator>Caseywrble</dc:creator>
      <pubDate>Mon, 26 Jul 2021 14:11:50 +0000</pubDate>
      <link>https://dev.to/caseywrble/help-your-future-self-bare-necessities-for-all-log-lines-1peh</link>
      <guid>https://dev.to/caseywrble/help-your-future-self-bare-necessities-for-all-log-lines-1peh</guid>
      <description>&lt;p&gt;Context is Everything&lt;/p&gt;

&lt;p&gt;This is the second article in my "Help Your Future Self" series on how to do logging well. Check out the first article here: &lt;br&gt;
&lt;a href="https://bit.ly/3BKEYwg"&gt;https://bit.ly/3BKEYwg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you read my previous post, you know that logs are about laying out enough information that the reader can understand what an application has done in its past, in service of one or more specific objectives. Each of these objectives is different, and the key to achieving them is having enough context to understand the relevant actions taken by the application. In a future post, we will explore some specific use cases, but before we do, there are some bits of context that are essential to pretty much every imaginable use case for consuming application logs.&lt;/p&gt;

&lt;p&gt;Timestamp&lt;/p&gt;

&lt;p&gt;There’s a reason the timestamp is usually the first field in any log message. In most cases, it’s not enough to know that something happened; you also need to know when it happened. There are a few fine points that you want to get right.&lt;/p&gt;

&lt;p&gt;Date/Time Format&lt;/p&gt;

&lt;p&gt;There are a ton of date/time formats out there, but you should probably just use RFC 3339. If you aren’t familiar with RFC 3339 and find yourself thinking, &quot;what’s wrong with good old ISO 8601?&quot;, put down your fountain pen. There’s no need to draft a formal protest letter. RFC 3339 is just a profile of ISO 8601 that makes a few decisions about things ISO 8601 leaves as available alternatives. It’s considered the standard date/time format of Internet protocols, and while you are probably not writing one of those, you can still take advantage of the interoperability it provides.&lt;/p&gt;

&lt;p&gt;If RFC 3339 or another ISO 8601 profile doesn’t do it for you, there are plenty of other formats out there. Still, try to ensure whatever you use has at least the following properties:&lt;/p&gt;

&lt;p&gt;formatted date/time strings sort chronologically&lt;br&gt;
time zone information is included&lt;/p&gt;
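
&lt;p&gt;As a quick sketch in Python (one language among many), a timezone-aware datetime gives you both properties for free, since isoformat() emits an RFC 3339-compatible string:&lt;/p&gt;

```python
from datetime import datetime, timezone

# A timezone-aware datetime includes its UTC offset when formatted,
# so isoformat() yields an RFC 3339-compatible string,
# e.g. 2021-04-30T12:34:56.789012+00:00
stamp = datetime.now(timezone.utc).isoformat()
print(stamp)
```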

&lt;p&gt;Time Zone&lt;/p&gt;

&lt;p&gt;A common bit of guidance you’ll see is to use UTC everywhere. This isn’t bad advice, but in keeping with the idea that logs are often read by humans, it can help your readers a great deal to have timestamps in their local time. If you’ve picked a good date format, the time zone information should be part of the format, so you can always use a tool to reformat the timestamp in another time zone for convenient reading. If, for some reason, however, you can’t use that tool, and you know all of your readers will be consuming the log from a single specific time zone, it may make sense to log in that time zone. But... probably just use UTC.&lt;/p&gt;
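
&lt;p&gt;For example, here's a sketch of that reformatting step in Python using the standard zoneinfo module; the America/New_York zone is just an illustration:&lt;/p&gt;

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Parse a timestamp that was logged in UTC...
utc_stamp = datetime.fromisoformat("2021-04-30T12:34:56.789000+00:00")

# ...and re-render it in a reader's local zone for convenient reading.
local_stamp = utc_stamp.astimezone(ZoneInfo("America/New_York"))
print(local_stamp.isoformat())  # 2021-04-30T08:34:56.789000-04:00
```

&lt;p&gt;Because the offset was part of the stored format, nothing is lost in the conversion.&lt;/p&gt;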

&lt;p&gt;Granularity&lt;/p&gt;

&lt;p&gt;Another problem you can run into occurs when your timestamp’s granularity is too coarse. Ideally, no two log messages should share a timestamp (at least not from the same log source). If they do, tools which sort log messages according to timestamp may paint a confusing picture by presenting events in a different order than that in which they occurred. To avoid this, you can either ensure that your timestamp includes a sufficiently high-precision fractional second or add a sequence number to each log message.&lt;/p&gt;

&lt;p&gt;Log Level&lt;/p&gt;

&lt;p&gt;While not strictly necessary, a coarse log level is a very common and helpful field to include in your log messages. Typically this looks something like the following:&lt;/p&gt;

&lt;p&gt;FATAL / PANIC / CRITICAL / EMERGENCY — a severe event, very likely indicating the failure of a significant component or even the entire application&lt;/p&gt;

&lt;p&gt;ERROR — an application-level or system-level failure not severe enough to impede continued application or system function&lt;br&gt;
WARN / WARNING — a noteworthy event that, while not in itself a failure condition, could indicate degraded application function or could contribute to a failure condition given compounding events&lt;br&gt;
NOTICE — a significant event that may require special attention or intervention&lt;/p&gt;

&lt;p&gt;INFO — a normal event, notable enough to keep a durable record of having occurred&lt;/p&gt;

&lt;p&gt;DEBUG — an event or detail regarding an event which is useful for investigating the correctness or performance of some part of an application and is not generally needed to understand what has occurred during normal operation of the application&lt;br&gt;
TRACE — similar to debug, but typically with much more detail and/or higher granularity and/or frequency&lt;/p&gt;

&lt;p&gt;The levels above are ordered to facilitate filtering either when the log event is being produced or when the log event is being consumed (e.g. “only show me warnings or worse”), but that may not be the dimension you care the most about. Sometimes a specific error can be more important to you than a component panic. If you do use some sort of logging framework to control whether log messages at or beyond a certain threshold get logged, pay very close attention to both the threshold and the level of each particular message. You don’t want to miss a serious problem by accidentally throwing its log message into the void.&lt;/p&gt;
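
&lt;p&gt;For instance, with Python's standard logging module the threshold looks like this; note how the DEBUG message is silently dropped once the threshold is WARNING:&lt;/p&gt;

```python
import logging

# Everything below WARNING is filtered out at production time.
logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("example")

log.debug("cache miss for key abc")      # below threshold: dropped
log.warning("disk usage at 85 percent")  # at threshold: emitted
```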

&lt;p&gt;Category / Component&lt;/p&gt;

&lt;p&gt;The most common expression of a log category is found in the Java ecosystem. In a typical Java application using Log4J, for example, log messages will be prefixed with either the fully-qualified or a truncated class name (e.g. com.wrble.logging.Logger or logging.Logger). Especially in a large system, this allows log consumers to narrow their focus to one or a few of the specific components they are working on without being overwhelmed by the noise of the complete system. Using a hierarchical category like a class name makes it easy to dial in just the right level of granularity, from the debug logs of a very specific component up to everything emitted by an entire package.&lt;/p&gt;

&lt;p&gt;Not all logging frameworks have built-in support for a category field, but that doesn’t mean it shouldn’t be considered a best practice. If your logging facility does not have this concept, consider rolling your own analogue. If you organize your application into modules of any kind, logging the module name as the category is simple and effective.&lt;/p&gt;
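
&lt;p&gt;If you're in Python, the standard logging module has this hierarchy built in: logger names are dotted paths, and configuring a parent affects all of its children. A minimal sketch (the myapp names are made up):&lt;/p&gt;

```python
import logging

# Dotted logger names form a hierarchy, much like Java class names.
root = logging.getLogger("myapp")
db = logging.getLogger("myapp.storage.db")

# Dial the whole "myapp" tree to INFO, but turn up just one component.
root.setLevel(logging.INFO)
db.setLevel(logging.DEBUG)
```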

&lt;p&gt;Source&lt;/p&gt;

&lt;p&gt;The concept of a log source isn’t applicable to all applications, but it is absolutely critical for some, like a distributed application or one where you are managing a large number of remote clients. Assuming that every source has some sort of unique identifier, you should include it with every log message originating from that endpoint, along with a prefix of some kind that makes it easy to filter aggregated logs down to just those from a particular endpoint. For example:&lt;/p&gt;

&lt;p&gt;2021-04-30T12:34:56.789Z - client=314159 - app launched&lt;/p&gt;

&lt;p&gt;If you don’t have a unique identifier, other options might be the remote IP address or perhaps a session identifier. Avoid sensitive data like name, email address, etc. (more on this in a future post on privacy). Also, don’t mix and match identifiers — if you have two valid ways to identify the same log source (say, a user ID and a username), pick one and stick with it. Don’t force consumers of the log to read one, query the other, and build a log search query that returns all events matching both identifiers.&lt;/p&gt;
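
&lt;p&gt;One low-effort way to attach a source identifier to every message without repeating it at each call site is Python's logging.LoggerAdapter; the client field name here is just an illustration:&lt;/p&gt;

```python
import logging

logging.basicConfig(format="%(asctime)s - client=%(client)s - %(message)s")
base = logging.getLogger("app")

# The adapter injects the same identifier into every record it emits,
# so no call site can forget to include it.
log = logging.LoggerAdapter(base, {"client": "314159"})

log.warning("app launched")
# e.g. 2021-04-30 12:34:56,789 - client=314159 - app launched
```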

&lt;p&gt;The Event&lt;/p&gt;

&lt;p&gt;This should go without saying, but don’t forget to log the actual log message itself.&lt;/p&gt;

&lt;p&gt;Wrapping Up&lt;/p&gt;

&lt;p&gt;As stated up top, the recommendations in this post are fairly general and apply broadly to logging for most applications. In the next few posts, we’ll take a look at different scenarios in which one may find oneself as a log consumer and describe some specific practices to help you, as the log producer, simplify the downstream task at hand.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>cloud</category>
      <category>showdev</category>
    </item>
    <item>
      <title>One Sentence for Every AWS Instance Type (Cheat sheet) </title>
      <dc:creator>Caseywrble</dc:creator>
      <pubDate>Mon, 12 Jul 2021 12:09:01 +0000</pubDate>
      <link>https://dev.to/caseywrble/one-sentence-for-every-aws-instance-type-cheat-sheet-man</link>
      <guid>https://dev.to/caseywrble/one-sentence-for-every-aws-instance-type-cheat-sheet-man</guid>
      <description>&lt;p&gt;On cache invalidation and naming things - AWS has certainly had a hard time with the latter. Between the variety of instance types they offer and how long they're been around, there’s no consistency.&lt;/p&gt;

&lt;p&gt;Whether starting a new service to run internally or reviewing something we already have going, it’s always a struggle to find the right instance type for our needs. For example, there are three families (r, x, z) that optimize RAM in various ways and combinations, yet I always forget about the x and z variants.&lt;/p&gt;

&lt;p&gt;We’ve been using this 'cheat sheet' internally and thought we'd share it here for anyone else to use; pull requests welcome for updates.&lt;/p&gt;

&lt;p&gt;Check it out: &lt;a href="https://github.com/wrble/public/blob/main/aws-instance-types.md"&gt;https://github.com/wrble/public/blob/main/aws-instance-types.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Did I miss anything?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Solarwinds' Loggly and Papertrail Default to Sending Unencrypted Logs</title>
      <dc:creator>Caseywrble</dc:creator>
      <pubDate>Tue, 06 Jul 2021 10:57:41 +0000</pubDate>
      <link>https://dev.to/caseywrble/solarwinds-loggly-and-papertrail-default-to-sending-unencrypted-logs-4ci5</link>
      <guid>https://dev.to/caseywrble/solarwinds-loggly-and-papertrail-default-to-sending-unencrypted-logs-4ci5</guid>
      <description>&lt;p&gt;Logs often contain sensitive data; are you sure they're encrypted?&lt;/p&gt;

&lt;p&gt;Scrutiny of Solarwinds is well-earned these days. Despite being a security company by nature and owning two logging companies - Loggly &amp;amp; Papertrail - I can’t help but notice they aren't taking care of their own business.&lt;/p&gt;

&lt;p&gt;For example, if you follow the Loggly or Papertrail default instructions, your syslog will be configured to send everything over the internet as plain text. Of course, they support TLS and have advanced/optional instructions for configuring it, but after decades of experience in software and devops we know the danger of defaults against a motivated attacker. At Wrble we offer unencrypted channels if you must, but TLS is the default and what we strongly recommend - saving you time while strengthening your security posture.&lt;/p&gt;

&lt;p&gt;This is critical: many customers who otherwise require vendors to secure SOC 2 accreditation or fill out endless security questionnaires are relying on companies that don't take security seriously in their own products' day-to-day operation.&lt;/p&gt;

&lt;p&gt;How to conduct a self-audit of your logs&lt;br&gt;
First off, do you know what you’re using for logs right now? You may not be sure what tool you're using, or someone other than you (who has maybe moved on to another project, department, or company!) may have originally configured Syslog on all your servers. Considering what’s at stake, it’s definitely worth a look. We recommend that everyone who sends logs, especially on older systems, review how they're sending and make sure it’s as secure as can be. Here are some tips on how to check:&lt;/p&gt;

&lt;p&gt;Whether you're configuring a syslog daemon, using a custom library, or running a vendor's custom daemon - there are two main ways to get logs off a system: Syslog and HTTP. Since there are secure and insecure versions of each, we'll run through them and talk about how you can make sure they're sending over TLS:&lt;/p&gt;

&lt;p&gt;Syslog TCP/UDP&lt;br&gt;
Verify Configuration&lt;/p&gt;

&lt;p&gt;Unencrypted Syslog traffic generally flows over port 514 on either TCP or UDP.&lt;/p&gt;

&lt;p&gt;If it's encrypted, you should see lines like the following in your conf file (shown here in rsyslog's legacy directive syntax; the exact directives vary by setup, and newer installs may use the equivalent action() syntax):&lt;/p&gt;

&lt;p&gt;$DefaultNetstreamDriver gtls&lt;br&gt;
$ActionSendStreamDriverMode 1&lt;br&gt;
*.* @@logs.example.com:6514&lt;/p&gt;

&lt;p&gt;This config file will probably be a ".conf" file in the /etc/rsyslog.d/ directory, but could also be /etc/rsyslog.conf itself. If you're running inside Docker, it might be configured inside the image or on the host system, so it's good to check both places.&lt;/p&gt;

&lt;p&gt;Check Traffic&lt;/p&gt;

&lt;p&gt;Checking configuration is great, but it can be complicated, and it can be hard to tell which config file is authoritative. Here we'll go over how to check for both unencrypted and encrypted traffic originating from Syslog so you can make sure it's configured correctly:&lt;/p&gt;

&lt;p&gt;First, let's check for unencrypted traffic on 514. We'll be using a tool called tcpdump that prints out traffic going over a specific port. You can install it via yum: yum install tcpdump or apt-get: apt-get install tcpdump (you may need to update your package cache with apt-get update).&lt;/p&gt;

&lt;p&gt;OK, now that we have that installed here's how to print all unencrypted Syslog traffic on the TCP port:&lt;/p&gt;

&lt;p&gt;tcpdump -i any -nn -v port 514&lt;/p&gt;

&lt;p&gt;Once that's running, it will print out any packets coming or going over TCP 514. Trigger something in your application that causes a log line to be sent and see if it shows up there. If the output is empty you should be all good, but there are a few more things to check. Here's how to look at 514 for UDP traffic:&lt;/p&gt;

&lt;p&gt;tcpdump -i any -nn -v udp port 514&lt;/p&gt;

&lt;p&gt;Trigger log traffic again and see if anything shows up. If nothing does, you're in good shape. Now we've verified the absence of unencrypted traffic, but let's see if we can prove logs are transiting out of the system securely by watching for encrypted traffic:&lt;/p&gt;

&lt;p&gt;TCP: tcpdump -i any -nn -v port 6514&lt;/p&gt;

&lt;p&gt;UDP: tcpdump -i any -nn -v udp port 6514&lt;/p&gt;

&lt;p&gt;Hopefully at this point you'll see some lines come across as logs are triggered. They will be all garbled, as you're seeing the TLS packets themselves, but their existence and garbled nature are a great sign... congratulations!&lt;/p&gt;

&lt;p&gt;HTTP&lt;br&gt;
If you aren't using Syslog, your logs are probably going over HTTP(S) even if it's wrapped up in several layers of vendor libraries or daemons. It's really hard to list all the configurations to check, but we can still audit the traffic with tcpdump.&lt;/p&gt;

&lt;p&gt;HTTP(S) traffic can be pretty noisy on many hosts, so we'll limit our check to traffic originating from your host. NOTE: change "10.0.0.10" to your host's IP address in all the tcpdump commands below.&lt;/p&gt;

&lt;p&gt;tcpdump -i any src host 10.0.0.10 and port 80&lt;/p&gt;

&lt;p&gt;Trigger log sending and see if any traffic shows up. If there's nothing, or just other outgoing HTTP connections from your box, you're in good shape. Let's also check HTTPS:&lt;/p&gt;

&lt;p&gt;tcpdump -i any src host 10.0.0.10 and port 443&lt;/p&gt;

&lt;p&gt;Traffic here can be hard to discern from other outbound connections, but look for your logging vendor's hostname as a target of one of the packets. If your logs are showing up here, you're in good shape!&lt;/p&gt;

&lt;p&gt;Insecure HTTPS&lt;br&gt;
If a vendor controls the HTTPS send via a library or daemon, it could still be possible for them to "break" the encryption by bypassing TLS security checks when configuring their HTTPS library. This is especially common on old or embedded systems where TLS certs are hard to configure. The most common bypasses are skipping the hostname check and skipping certificate validation; here are instructions to verify both.&lt;/p&gt;

&lt;p&gt;This is a little harder to verify but worth your time if you're sending critical data to your logging provider.&lt;/p&gt;

&lt;p&gt;First we'll need to set up an "HTTP echo server" on a separate machine like this:&lt;/p&gt;

&lt;p&gt;docker run -d -p 443:443 danthegoodman/https-echoserver&lt;/p&gt;

&lt;p&gt;Note that it's listening on 443 and will terminate TLS, but it is not a secure server: the TLS certificate is self-signed. This is exactly what we want in this rare case!&lt;/p&gt;

&lt;p&gt;Now configure the box that your application is running on by editing /etc/hosts like this:&lt;/p&gt;

&lt;p&gt;10.0.0.1 ingest.loggingprovider.com&lt;/p&gt;

&lt;p&gt;but replace 10.0.0.1 with the IP of the machine running your echo server and ingest.loggingprovider.com with the hostname your logging provider uses to receive logs.&lt;/p&gt;

&lt;p&gt;If you now trigger log lines to be sent and they show up in the echo server, you'll know the vendor has disabled hostname and/or certificate checking, since the echo server doesn't have a valid certificate for the provider's domain! Report this to your provider, as that’s one big, glaring security hole.&lt;/p&gt;

&lt;p&gt;Whew, that was a lot of work!&lt;br&gt;
We know security is hard and it's a confusing world out there (Syslog was first released in the 1980s, when security was an afterthought at best!). But we hope these tips made it a little less painful and a bit quicker. Reach out directly if I can help walk you through securing your systems, or to learn how Wrble can reduce your logging spend while improving security.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloudnative</category>
      <category>opensource</category>
      <category>logs</category>
    </item>
    <item>
      <title>Help Your Future Self: Consider Your Logging Audience</title>
      <dc:creator>Caseywrble</dc:creator>
      <pubDate>Mon, 28 Jun 2021 08:26:54 +0000</pubDate>
      <link>https://dev.to/caseywrble/help-your-future-self-consider-your-logging-audience-14el</link>
      <guid>https://dev.to/caseywrble/help-your-future-self-consider-your-logging-audience-14el</guid>
      <description>&lt;p&gt;Why should you log and how to think about what to log.&lt;/p&gt;

&lt;p&gt;An application's logs are the transcript of its life story, chronicling the salient moments and various turning points of its journey through time. Each application has its own unique story to tell, whether it is a mobile app, a distributed cloud service, or an ETL batch job. And, like all stories, some are better than others. One might be an interminable long-winded slog, and the next a joyless chronology of tedious minutiae. Some stories might even turn out to be both! But whereas readers may be forgiven for deciding they just don't care what ends up happening to Leopold Bloom, when it comes to your application's story, someone is probably going to have to buckle down and get through it at some point. That someone may be you!&lt;/p&gt;

&lt;p&gt;So, as you consider how your application should write its life story, consider the needs of those who will be reading it down the road. Take into account their knowledge, skills, assumptions, and what they will want to get out of the experience. As they say, know your audience. We’ll go over a few of the different scenarios.&lt;/p&gt;

&lt;p&gt;Logging for the Author (You)&lt;/p&gt;

&lt;p&gt;The easiest requirements to gather are your own. Remember, however, the audience isn't exactly you—it's future you. So, how can we help future you to be more productive? Let's look at some reasons why future you might be reading your application's logs.&lt;/p&gt;

&lt;p&gt;Investigating a Bug Report&lt;/p&gt;

&lt;p&gt;If you're a developer, this is probably your primary use case. It's also the hardest to reason about. By its very nature, a bug is something you didn't expect to happen, so both the problem and its solution fall into the category of unknown unknowns. The best way to proactively assist yourself in debugging with logs is to ask yourself what information you would need in order to know which lines of code would be executed for a given input, and what their side effects would be. This is easier said than done, of course, but we’ll look at some specific examples in our post on debugging.&lt;/p&gt;

&lt;p&gt;Improving Performance&lt;/p&gt;

&lt;p&gt;Logs are not always the best tool for measuring performance, but they are a convenient first resort. We'll get further into this in our post on structured vs. unstructured logging, but one of the best things about an application log is the ease with which you can add a new log statement without worrying too much about breaking something. If you are using any kind of continuous deployment to deliver your application, you can answer simple performance questions with minimal effort and risk. One classic performance bottleneck is where your application needs to make a synchronous request to a database of some sort in order to complete a task. If the remote resource goes down or becomes unavailable, the problem is often discovered immediately. If the problem is more subtle, however, like a database query whose execution time grows slowly as your data set grows, it can result in steadily degrading application performance. One very simple approach to guard against this sort of performance problem getting out of hand is to log any database operation that takes more than a few seconds (or milliseconds, perhaps, depending on the composition and requirements of the system).&lt;/p&gt;
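
&lt;p&gt;As a sketch of that guard in Python (the names and the one-second threshold are illustrative, not prescriptive):&lt;/p&gt;

```python
import logging
import time

log = logging.getLogger("myapp.storage.db")
SLOW_QUERY_SECONDS = 1.0  # tune to your system's requirements

def timed_query(run_query, description):
    start = time.monotonic()
    result = run_query()
    elapsed = time.monotonic() - start
    if elapsed >= SLOW_QUERY_SECONDS:
        log.warning("slow query (%.2fs): %s", elapsed, description)
    return result

# Usage: wrap any database call.
rows = timed_query(lambda: [1, 2, 3], "select recent orders")
```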

&lt;p&gt;Estimating Impact&lt;/p&gt;

&lt;p&gt;Imagine you have found, by inspection, a worrisome defect in a critical component. For the sake of this discussion, it doesn’t matter exactly what it is, but let’s say it would be “really bad” if a particular set of circumstances were to occur in production, and worse, the fix is not immediately clear. The first thing you’d want to know is whether, in fact, this issue is already occurring in production. You should add a check for these circumstances, log when they occur, and redeploy ASAP. With the right search in your logging dashboard (and alerts), you should know very quickly whether this is a theoretical problem or an actual one. You will still probably want to address the issue, but with this information you can better strategize your way forward.&lt;/p&gt;

&lt;p&gt;Preparing for a Change&lt;/p&gt;

&lt;p&gt;Another thing we do all the time as developers is make changes to existing systems. Whether it’s a new feature, a bug fix, or an engineering refactor, sometimes a little extra information from your production environment can make a planned change go more smoothly. We make implementation decisions based on our own assumptions of how data and state flows through systems all the time, so why wouldn’t we want to check those assumptions before moving forward? If you’re ever at all uncertain as to whether real-world inputs to a function are a certain size, or contain a particular distribution of characters, for example, a new log statement can be a great way to quickly find out.&lt;/p&gt;
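
&lt;p&gt;For instance, here's a throwaway sketch in Python of checking an assumption about real-world input sizes; the function, field, and 120-character limit are all made up for illustration:&lt;/p&gt;

```python
import logging

log = logging.getLogger("myapp.ingest")

def normalize_title(title):
    # Temporary: check our assumption that titles stay short in
    # production before committing to a fixed-size field in the refactor.
    if len(title) > 120:
        log.info("normalize_title: unexpectedly long input (%d chars)", len(title))
    return title.strip().lower()
```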

&lt;p&gt;Logging for Other Developers (Not You)&lt;/p&gt;

&lt;p&gt;The scenarios in which developers who are not you are sent scurrying to view your application’s logs are very much the same as your own (debugging, etc.). The main difference between them and you is that they are not you, and they don’t necessarily know everything you know. This may seem obvious to you now, but it will be even more infuriatingly obvious to the poor programmer who has to figure out what went wrong when they see a log message like this:&lt;/p&gt;

&lt;p&gt;2021-04-01 01:23:45,678 - oops! something went wrong.&lt;/p&gt;

&lt;p&gt;Don’t do this to your coworkers. Help them to understand how your code works as well as you do by giving them the context you already carry around in your head. For example:&lt;/p&gt;

&lt;p&gt;2021-04-01 01:23:45,678 - Server.start(): failed to start because the server configuration file could not be found&lt;/p&gt;

&lt;p&gt;This is only a very simple example, but note that the log message explains what component encountered the error, where it happened, and what the error was.&lt;/p&gt;

&lt;p&gt;Non-Developers&lt;/p&gt;

&lt;p&gt;Sometimes you will need to use your logs to answer questions for people from parts of the business outside of development. Very commonly this will include DevOps, who need to be able to run and scale your application, making decisions about production infrastructure without access to developers at all hours of the day. You probably also have Support staff, who need to be able to investigate customer issues and respond to end users without constantly escalating to engineering. You may even have a Legal department that needs to be able to document whether certain events have or have not occurred. A few well-placed log messages can take care of these non-development needs as they arise.&lt;/p&gt;

&lt;p&gt;Product Management / Sales&lt;/p&gt;

&lt;p&gt;While logging is probably not the best tool for product and business analytics, sometimes it can do in a pinch. A few well-placed log messages can answer product and user-behavior questions, and it is an option worth considering, especially if you do not or cannot use an analytics package for some reason. As an example, you could keep track of REST API usage almost trivially by logging the request URI in combination with a user or session identifier of some sort. For another, you could log mobile application launches (and whether each launch was initiated from a push notification) and configure your logging dashboard to produce a simple user engagement report with very little effort.&lt;/p&gt;
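
&lt;p&gt;The REST API example is almost a one-liner; a sketch in Python, assuming your web framework hands you the request URI and a session identifier (all names here are illustrative):&lt;/p&gt;

```python
import logging

log = logging.getLogger("myapp.api.usage")

def usage_line(request_uri, session_id):
    # One line per request is enough for a logging dashboard to count
    # and group usage by endpoint and by session.
    return f"api_usage uri={request_uri} session={session_id}"

log.info(usage_line("/v1/orders", "sess-8675309"))
```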

&lt;p&gt;Tools&lt;/p&gt;

&lt;p&gt;Sometimes your logs aren’t consumed by a human at all, but by some kind of automated tool. In this case, your audience is both the user of the tool and the tool itself, which will often have its own very particular requirements. This is usually very specific to your own environment, so it's best to check internally about which tools might be used.&lt;/p&gt;

&lt;p&gt;Wrapping Up&lt;br&gt;
The above are just a few types of users and use cases. There are likely many more varied and unique ones in your own organization.&lt;/p&gt;

&lt;p&gt;If you are struggling to find new insight into how your application logs might be used by others, think of your logs as just another interface to your application, and the consumers of those logs as another user. With that mindset, you can address the question of what to log through the same product development processes you're already familiar with. Without prescribing any particular methodology, consider what it would look like to involve the consumers of your logs in that process, whether through formal requirements gathering, conducting interviews, modeling use cases, writing user stories, or some other ritual.&lt;/p&gt;

&lt;p&gt;Hopefully this gives you a good overview of who will be reading your logs and why. In the next post, we’ll go into more detail as to exactly what you should log with each and every message.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>serverless</category>
      <category>cloud</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
