<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Prathamesh Sonpatki</title>
    <description>The latest articles on DEV Community by Prathamesh Sonpatki (@prathamesh).</description>
    <link>https://dev.to/prathamesh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F86793%2F7ff242cc-86a1-42f2-a7ad-8ca3f4e38c9c.jpg</url>
      <title>DEV Community: Prathamesh Sonpatki</title>
      <link>https://dev.to/prathamesh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prathamesh"/>
    <language>en</language>
    <item>
      <title>Starting o11y.wiki</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Tue, 07 Mar 2023 15:22:20 +0000</pubDate>
      <link>https://dev.to/prathamesh/starting-o11ywiki-1mlm</link>
      <guid>https://dev.to/prathamesh/starting-o11ywiki-1mlm</guid>
      <description>&lt;p&gt;I have started a project for maintaining glossary of all terms and definitions related to Observability. &lt;/p&gt;

&lt;p&gt;It is called &lt;a href="https://o11y.wiki"&gt;o11y.wiki&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The GitHub repo can be found here &lt;a href="https://github.com/prathamesh-sonpatki/o11y-wiki"&gt;https://github.com/prathamesh-sonpatki/o11y-wiki&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The contents are available at &lt;a href="https://o11y.wiki/"&gt;https://o11y.wiki/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is an open source project aimed at beginners like me who don’t know a lot about observability and reliability. The idea is to learn these terms and note them down in one place, so others can benefit from them.&lt;/p&gt;

&lt;p&gt;Here are a few terms that could be added next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MTTR&lt;/li&gt;
&lt;li&gt;MTTD&lt;/li&gt;
&lt;li&gt;MTBI&lt;/li&gt;
&lt;li&gt;Incident&lt;/li&gt;
&lt;li&gt;Alert&lt;/li&gt;
&lt;li&gt;Metric&lt;/li&gt;
&lt;li&gt;Samples&lt;/li&gt;
&lt;li&gt;Cardinality&lt;/li&gt;
&lt;li&gt;Log&lt;/li&gt;
&lt;li&gt;Span&lt;/li&gt;
&lt;li&gt;Event&lt;/li&gt;
&lt;li&gt;Exceptions&lt;/li&gt;
&lt;li&gt;Serverless&lt;/li&gt;
&lt;li&gt;SRE&lt;/li&gt;
&lt;li&gt;Platform engineering&lt;/li&gt;
&lt;li&gt;PromQL&lt;/li&gt;
&lt;li&gt;Service Discovery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am looking for help to make this project better and more solid for everyone! 🏗️ ❤️&lt;/p&gt;

</description>
      <category>observability</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Prometheus Alternatives</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Tue, 07 Feb 2023 11:29:16 +0000</pubDate>
      <link>https://dev.to/last9/prometheus-alternatives-3j7b</link>
      <guid>https://dev.to/last9/prometheus-alternatives-3j7b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczoaoav9153n8wgj5rc4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczoaoav9153n8wgj5rc4.jpg" alt="Prometheus Alternatives" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus is a popular open-source platform for metrics and alerting created by SoundCloud in 2012 and officially released as open-source in 2015. Designed for both dynamic service-oriented architectures and system monitoring, Prometheus focuses on reliability, multidimensional data collection, and data visualization.&lt;/p&gt;

&lt;p&gt;While &lt;a href="https://last9.io/blog/prometheus-monitoring" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; is an excellent option for tracking metrics, other open-source and SaaS alternatives in the ecosystem might better suit your needs.&lt;/p&gt;

&lt;p&gt;This article compares Prometheus with InfluxDB, Zabbix, Datadog, Graphite, and Grafana based on their data model and storage, architecture, APIs and access methods, partitioning, compatible operating systems, pricing, visualization, alerting, supported programming languages, use cases, and supported workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prometheus Alternatives&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The following is an overview of each tool compared in this article.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is Prometheus?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As mentioned above, &lt;a href="https://github.com/prometheus/prometheus" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Prometheus&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; is a monitoring and alerting system that helps developers manage applications, tools, databases, and even network monitoring. It has a comprehensive set of built-in features for collecting metric data and acts as a full-stack observability and monitoring system for microservices and &lt;a href="https://dev.to/prathamesh/kubernetes-monitoring-with-prometheus-and-grafana-2ic3-temp-slug-4793421"&gt;cloud-native applications&lt;/a&gt;. It joined the &lt;a href="https://www.cncf.io/projects/prometheus/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Cloud Native Computing Foundation (CNCF)&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; in 2016 as the second hosted project after &lt;a href="https://www.cncf.io/projects/kubernetes/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Kubernetes&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;. While Prometheus is an excellent tool for DevOps and SRE teams, it can run into scalability issues where tools such as &lt;a href="https://dev.to/prathamesh/thanos-vs-cortex-2hh5-temp-slug-2519808"&gt;Thanos&lt;/a&gt;, &lt;a href="https://dev.to/prathamesh/thanos-vs-cortex-2hh5-temp-slug-2519808"&gt;Cortex&lt;/a&gt;, and &lt;a href="https://last9.io/products/levitate/" rel="noopener noreferrer"&gt;Levitate&lt;/a&gt; can help.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;InfluxDB&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;InfluxDB&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; is a leading time series database that comes in three editions: an open-source version called InfluxDB and two commercial versions called InfluxDB Cloud and InfluxDB Enterprise. It provides a complete set of data tools for ingesting, processing, and manipulating multiple data points. It includes the InfluxDB user interface (InfluxDB UI) and Flux, a functional scripting and query language.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Zabbix&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.zabbix.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Zabbix&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; is a scalable, accessible, open-source monitoring solution used for both small environments and enterprise-level distributed systems with millions of metrics.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Datadog&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.datadoghq.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Datadog&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; is a monitoring and analytics platform used for event monitoring and measuring the performance of cloud applications and infrastructure. It combines real-time metrics from disparate sources such as applications, servers, databases, and containers with end-to-end tracing to deliver alerts and visualizations. It can collect data from various data sources with its built-in integrations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Graphite&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Created by Chris Davis at Orbitz in 2006 and released as open source in 2008, &lt;a href="https://graphiteapp.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Graphite&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; is a monitoring solution that collects time series data from applications, servers, infrastructure, and networks. It focuses on storing passive time series data and analyzing it through the Graphite web UI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Grafana
&lt;/h2&gt;

&lt;p&gt;Grafana is a data visualization tool developed by Grafana Labs. It is available as open source, managed (Grafana Cloud), or enterprise edition. Grafana can combine data from many data sources into a single dashboard. It solves the problem of visualization of time series data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is Grafana the same as Prometheus?
&lt;/h3&gt;

&lt;p&gt;We keep seeing this common question. While Prometheus is a time series database, Grafana is a data visualization tool that supports Prometheus, Graphite, and InfluxDB (among others) as data sources. They are not the same, but they work well together: Grafana is the de facto standard for visualizing Prometheus data.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prometheus Alternatives in action&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This section compares Prometheus to InfluxDB, Zabbix, Datadog, and Graphite using the following criteria:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data model and storage&lt;/li&gt;
&lt;li&gt;Architecture&lt;/li&gt;
&lt;li&gt;APIs and access methods&lt;/li&gt;
&lt;li&gt;Partitioning&lt;/li&gt;
&lt;li&gt;Compatible operating systems&lt;/li&gt;
&lt;li&gt;Supported programming languages&lt;/li&gt;
&lt;li&gt;Open Source vs. Proprietary&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Data Model and Storage&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus captures and accumulates metric data as time series and stores it in a local database. Each time series is uniquely identified by its metric name and an optional set of key-value pairs called labels.&lt;/p&gt;

&lt;p&gt;Data can be queried in real-time using the &lt;a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Prometheus Query Language&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; (PromQL) and presented in tabular or graphical form.&lt;/p&gt;

&lt;p&gt;Prometheus supports the float64 data type, with limited support for strings, and millisecond-resolution timestamps. It also supports shipping data to long-term storage layers via the &lt;a href="https://dev.to/prathamesh/how-to-improve-prometheus-remote-write-performance-at-scale-34c6-temp-slug-8212458"&gt;Prometheus remote write&lt;/a&gt; protocol and can run in an agent mode.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;InfluxDB: Data Model and Storage&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;InfluxDB maintains a time series database optimized for time-stamped data, much like Prometheus. Data elements comprise a unique combination of timestamps, tags, fields, and measurements. Tags are indexed key-value pairs used as labels, while fields are unindexed key-value pairs that hold the actual measured values.&lt;/p&gt;

&lt;p&gt;InfluxDB uses a proprietary query language similar to SQL called &lt;a href="https://docs.influxdata.com/influxdb/v1.7/query_language/spec/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;InfluxQL&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt; and supports timestamp, float64, int64, string, and bool data types.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Zabbix: Data Model and Storage&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Zabbix uses an external database to store the collected data and configuration information. It integrates with leading relational database management system (RDBMS) engines such as MySQL, MariaDB, Oracle, PostgreSQL, IBM Db2, and SQLite, which allows Zabbix to store more complex data types, such as system logs. Zabbix stores raw data collected from hosts in history tables, while trends tables store consolidated hourly data.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Datadog: Data Model and Storage&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Datadog uses Kafka to process incoming data points and a mix of Redis, Cassandra, and S3 to store and query time series. It also uses Elasticsearch to store and query events (such as alerts and deployments) that are not represented as a time series and uses PostgreSQL for metadata.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Graphite: Data Model and Storage&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Like Prometheus, Graphite stores time series data using its specialized database, but data collection is passive. Data is collected from collection daemons or other monitoring tools (including Prometheus) and sent to Graphite's Carbon component.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://dev.to/prathamesh/prometheus-vs-influxdb-23do-temp-slug-126429"&gt;InfluxDB&lt;/a&gt; and Graphite both use time series databases similar to Prometheus. Graphite, however, doesn't store raw data as Prometheus does. InfluxDB offers full support for strings and timestamps as well as int64 and bool data types, while Prometheus only provides full support for float64. Zabbix integrates with more familiar RDBMS database engines and is suitable for storing historical data. At the same time, Datadog uses several data models and storage types to store both time-series and non-time-series data.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Architecture&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus servers are standalone and run independently of each other. They rely on local on-disk storage rather than network or remote storage services for the core functionality of scraping, rule processing, and alerting. Data is stored locally for fifteen days by default, but Prometheus can be integrated with remote solutions such as &lt;a href="https://last9.io/products/levitate/" rel="noopener noreferrer"&gt;Levitate&lt;/a&gt; for long-term storage.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;InfluxDB: Architecture&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Like Prometheus, open-source InfluxDB servers are standalone and use local storage for scraping, alerting, and rule processing. Commercial InfluxDB versions come with distributed storage by default that allows queries and storage to be managed by many nodes simultaneously, making it easier to perform horizontal scaling.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Zabbix: Architecture&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Zabbix architecture comprises servers that store statistical, operational, and configuration data and agents installed on the machines that collect the data. Agents monitor and report data collected from local resources and applications to Zabbix servers.&lt;/p&gt;

&lt;p&gt;Agents and servers support passive checks, where the server requests a value from the agent, and active checks, where the agent periodically sends results to the server.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Datadog: Architecture&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Datadog uses Kafka to feed its independent storage systems, with Kafka acting as a durable buffer in front of the persistent storage and query layer. Kafka is an open-source, distributed, partitioned, replicated log service developed at LinkedIn as a unified platform for handling large-scale, real-time data feeds.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Graphite: Architecture&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Graphite architecture is made up of three components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Carbon, the primary backend daemon that listens for time series data sent to Graphite and stores it in Whisper, the backend database&lt;/li&gt;
&lt;li&gt;Whisper, a fast, file-based local time series database that creates one file per stored metric&lt;/li&gt;
&lt;li&gt;The Graphite web UI, the frontend UI for the backend storage system that renders graphs on demand&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;While InfluxDB and Prometheus both use standalone servers, commercial versions of InfluxDB offer distributed storage to support horizontal scaling. The Zabbix architectural model uses servers with agents, which allows for both passive and active data checks. Datadog's use of Kafka for its persistent data storage layer enables it to store large amounts of real-time data. Graphite's architecture includes a web app, which is a good choice if you want to render graphs on demand.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;APIs and Access Methods&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus uses RESTful HTTP endpoints with responses in JSON.&lt;/p&gt;
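To make this concrete, here is a small Python sketch of working with that API. It builds an instant-query URL for a PromQL expression (the server address is a placeholder; Prometheus listens on port 9090 by default) and parses a JSON body in the shape Prometheus returns, without needing a live server.

```python
import json
from urllib.parse import urlencode

# Hypothetical server address; adjust for your deployment.
PROM_URL = "http://localhost:9090/api/v1/query"

def build_query_url(promql):
    """Build the instant-query URL for a PromQL expression."""
    return PROM_URL + "?" + urlencode({"query": promql})

# A sample body in the shape Prometheus returns for an instant vector query.
sample_response = json.loads("""
{"status": "success",
 "data": {"resultType": "vector",
          "result": [{"metric": {"__name__": "up", "job": "api-server"},
                      "value": [1678190540.0, "1"]}]}}
""")

def extract_values(body):
    """Map each series (metric name, job) to its sampled value."""
    out = {}
    for series in body["data"]["result"]:
        labels = series["metric"]
        out[(labels.get("__name__"), labels.get("job"))] = float(series["value"][1])
    return out

print(build_query_url("up"))
print(extract_values(sample_response))
```

The `value` field pairs a timestamp with a string-encoded sample, which is why the sketch converts it to a float before use.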

&lt;h4&gt;
  
  
  &lt;strong&gt;InfluxDB: APIs and Access Methods&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The InfluxDB API provides a set of HTTP endpoints for accessing and managing system information, security and access control, resource access, data I/O, and other resources and returns JSON-formatted responses. The Enterprise version also provides support for TCP and UDP ports.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Zabbix: APIs and Access Methods&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Zabbix uses the JSON-RPC 2.0 protocol. Requests and responses between clients and the API are encoded using JSON.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Datadog: APIs and Access Methods&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Datadog uses the HTTP REST API. Resource-oriented URLs are used to call the API, with JSON being returned from all requests.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Graphite: APIs and Access Methods&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Graphite data is queried over HTTP via its Metrics API or the Render URL API. The Graphite API is an alternative to the Graphite web UI that retrieves metrics from a time series database and renders graphs or generates JSON data based on these time series.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;All tools provide support for HTTP requests and JSON-formatted responses.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Partitioning&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus supports sharding. You can scale horizontally by splitting scrape targets into shards across multiple Prometheus servers, creating smaller instances.&lt;/p&gt;
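Prometheus implements this split with hashmod relabeling, which deterministically assigns each scrape target to a shard. The following Python sketch illustrates the idea only; the exact hash Prometheus uses differs, and the target names here are made up.

```python
import zlib

def shard_for(target, num_shards):
    """Deterministically assign a scrape target to one of num_shards shards,
    mimicking the idea behind Prometheus hashmod relabeling."""
    return zlib.crc32(target.encode()) % num_shards

# Hypothetical scrape targets split across two Prometheus servers.
targets = ["app-1:9100", "app-2:9100", "db-1:9100", "cache-1:9100"]
for t in targets:
    print(t, "scraped by shard", shard_for(t, 2))
```

Because the assignment is a pure function of the target name, every Prometheus server computes the same split and scrapes only its own subset.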

&lt;h4&gt;
  
  
  &lt;strong&gt;InfluxDB: Partitioning&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;InfluxDB organizes data into shards to create a highly scalable approach that increases throughput and maintains performance as the data grows. Shards are placed into shard groups containing encoded and compressed time series data for a specific time range. The shard group duration defines the period for each shard group, and each group has a corresponding retention policy that applies to all the shards within the group.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Zabbix: Partitioning&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Partitioning with Zabbix depends on the database being used. MySQL, PostgreSQL, IBM Db2, and MariaDB (with the Spider storage engine) offer sharding capabilities.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Datadog: Partitioning&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Datadog uses Kafka partitions to scale by customer, metric, and tag set. You can isolate by the customer or scale concurrently by metric. Sharding is implemented as a group of Kafka partitions.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Graphite: Partitioning&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Graphite does not support partitioning.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;All tools except Graphite offer some form of support for partitioning. Prometheus, InfluxDB, and Datadog provide sharding and horizontal scaling features, while Zabbix's support depends on your chosen external database.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Compatible Operating Systems&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus supports the Linux and Windows operating systems.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;InfluxDB: Compatible Operating Systems&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;InfluxDB supports Linux, Windows, and macOS.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Zabbix: Compatible Operating Systems&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Zabbix supports Linux, Windows, macOS, IBM AIX, Solaris, and HP-UX operating systems.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Datadog: Compatible Operating Systems&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Datadog supports Windows, Linux, and macOS operating systems and cloud service providers, including Google Cloud, AWS, Red Hat OpenShift, and Microsoft Azure.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Graphite: Compatible Operating Systems&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Graphite supports Linux and Unix operating systems.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;All tools except Graphite support the Windows and Linux operating systems; Graphite only supports Linux and Unix. InfluxDB, Zabbix, and Datadog also support macOS, with Datadog providing additional support for cloud service providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Supported Programming Languages&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus provides several official and unofficial client libraries for .NET, C++, Go, Haskell, Java, JavaScript (Node.js), Python, and Ruby. It also supports &lt;a href="https://dev.to/prathamesh/best-practices-using-and-writing-prometheus-exporters-34lb-temp-slug-6306814"&gt;Prometheus Exporters&lt;/a&gt; to collect data from systems that do not directly have client libraries.&lt;/p&gt;
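Whether they come from a client library or an exporter, metrics ultimately reach Prometheus as the plain-text exposition format served over HTTP. As an illustration, here is a minimal, dependency-free Python sketch that renders that format by hand; the metric name and values are made up, and a real exporter should use an official client library instead.

```python
def render_exposition(metric, help_text, mtype, samples):
    """Render samples in the Prometheus text exposition format.
    samples is a list of (labels_dict, value) pairs."""
    lines = [f"# HELP {metric} {help_text}", f"# TYPE {metric} {mtype}"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        if labels:
            lines.append(f"{metric}{{{label_str}}} {value}")
        else:
            lines.append(f"{metric} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical counter with two label combinations.
page = render_exposition(
    "http_requests_total", "Total HTTP requests served.", "counter",
    [({"method": "get", "code": "200"}, 1027),
     ({"method": "post", "code": "200"}, 3)],
)
print(page)
```

A Prometheus server scraping an endpoint that serves this text would ingest one time series per distinct label combination.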

&lt;h4&gt;
  
  
  &lt;strong&gt;InfluxDB: Supported Programming Languages&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;InfluxDB supports client libraries for C++, Java, JavaScript, .NET, Perl, PHP, and Python. It can also be used directly through its REST API.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Zabbix: Supported Programming Languages&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Zabbix supports Java, JavaScript, .NET, Perl, PHP, Python, R, Ruby, Elixir, Go, and Rust.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Datadog: Supported Programming Languages&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Client libraries are available in C#/.NET, Java, Python, PHP, Go, Node.js, Ruby, and Swift, along with many integrations.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Graphite: Supported Programming Languages&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Graphite has client libraries in Python and JavaScript (Node.js) programming languages.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Prometheus, InfluxDB, Zabbix, and Datadog all support the major programming languages. Graphite, however, only provides support for Python and JavaScript.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Comparison summary&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Prometheus&lt;/th&gt;
&lt;th&gt;InfluxDB&lt;/th&gt;
&lt;th&gt;Zabbix&lt;/th&gt;
&lt;th&gt;Datadog&lt;/th&gt;
&lt;th&gt;Graphite&lt;/th&gt;
&lt;th&gt;Levitate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Data Model and Storage&lt;/td&gt;
&lt;td&gt;Multi-dimensional data model with Time series data&lt;/td&gt;
&lt;td&gt;Time series data&lt;/td&gt;
&lt;td&gt;External database stores including RDBMS&lt;/td&gt;
&lt;td&gt;Both time series and non time series data&lt;/td&gt;
&lt;td&gt;Time series data&lt;/td&gt;
&lt;td&gt;PromQL compatible time series data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API and Access methods&lt;/td&gt;
&lt;td&gt;HTTP API&lt;/td&gt;
&lt;td&gt;HTTP API&lt;/td&gt;
&lt;td&gt;HTTP API&lt;/td&gt;
&lt;td&gt;HTTP API&lt;/td&gt;
&lt;td&gt;HTTP API&lt;/td&gt;
&lt;td&gt;HTTP API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Partitioning&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Supported, depends on RDBMS of choice&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Not supported&lt;/td&gt;
&lt;td&gt;Managed TSDB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open Source&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes. Proprietary also available.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No. Proprietary&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No. Proprietary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Programming languages&lt;/td&gt;
&lt;td&gt;Tons of client libraries and exporters&lt;/td&gt;
&lt;td&gt;C++, Java, JavaScript, .NET, Perl, PHP, and Python.&lt;/td&gt;
&lt;td&gt;Java, JavaScript, .NET, Perl, PHP, Python, R, Ruby, Elixir, Go, and Rust&lt;/td&gt;
&lt;td&gt;Tons of integrations&lt;/td&gt;
&lt;td&gt;Python and JavaScript (Node.js)&lt;/td&gt;
&lt;td&gt;Any language, via the REST API&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Prometheus's strengths lie in its support for multidimensional data collection. It has a powerful query language that can be used for both dynamic service-oriented architectures and machine-centric monitoring. It's a good choice when you primarily want to record numeric time series.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/prathamesh/prometheus-vs-influxdb-23do-temp-slug-126429"&gt;InfluxDB and Prometheus&lt;/a&gt; use similar data compression techniques and support multidimensional data using key-value data stores; InfluxDB is better for event logging. Its commercial versions are the best option if you need to process large amounts of data, as their default configuration scales horizontally.&lt;/p&gt;

&lt;p&gt;Zabbix focuses on hardware and device management and monitoring. It's a better option than Prometheus if you are more familiar with RDBMS database engines and need to store many historical and varied data types. However, the use of an external database can slow down performance.&lt;/p&gt;

&lt;p&gt;Prometheus's internal time series database provides faster access to data but is not suitable for storing data types like text or event logs. Since Prometheus keeps data for only fifteen days by default, it's also not a good option if you need to store historical data (unless configured for remote storage).&lt;/p&gt;

&lt;p&gt;Datadog and Prometheus can both be used for application performance monitoring (APM). However, Datadog has more application monitoring capabilities than Prometheus and is geared toward monitoring infrastructure at scale. Datadog is best for monitoring infrastructure and apps and visualizing data from disparate sources in mid- to large-scale environments.&lt;/p&gt;

&lt;p&gt;Graphite runs well on all hardware and cloud infrastructure, making it suitable for small businesses with limited resources and large-scale production environments. Choose Graphite when you need a solution focused on storing and analyzing historical data and fast retrieval.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Prometheus is a popular option for tracking metrics and alerting, but one of the four alternatives mentioned above might suit your needs depending on your requirements.&lt;/p&gt;

&lt;p&gt;For processing large amounts of data, choose a commercial version of InfluxDB, but if you want the familiarity of an RDBMS engine, then go with Zabbix. Datadog's wide range of monitoring features makes it the go-to choice for monitoring infrastructure in larger environments. Still, if you operate on a smaller scale, Graphite can get the job done with whatever hardware and resources you have.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://last9.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Last9&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;, a site reliability engineering (SRE) platform. We remove the guesswork in improving the reliability of your distributed systems. Last9's &lt;a href="https://last9.io/products/levitate" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;u&gt;Levitate&lt;/u&gt;&lt;/strong&gt;&lt;/a&gt;, a managed time series database(TSDB), helps you understand, track, and improve your organization's system dependencies to reduce the challenges of time series database management.&lt;/p&gt;

&lt;p&gt;Access the intelligence you need to deliver reliable software with Last9's reliability platform.&lt;/p&gt;




&lt;p&gt;This post was originally published on &lt;a href="https://last9.io/blog/prometheus-alternatives/" rel="noopener noreferrer"&gt;Last9 Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>influxdb</category>
      <category>grafana</category>
      <category>timeseries</category>
    </item>
    <item>
      <title>A practical guide for implementing SLO</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Thu, 12 Jan 2023 05:30:00 +0000</pubDate>
      <link>https://dev.to/last9/a-practical-guide-for-implementing-slo-1pej</link>
      <guid>https://dev.to/last9/a-practical-guide-for-implementing-slo-1pej</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8z94m9u5yy8de9o985m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8z94m9u5yy8de9o985m.jpg" alt="A practical guide for implementing SLO" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a mini guide to the SLO process that SREs and DevOps teams can use as a rule of thumb. It does not necessarily automate the SLO process, but it gives a direction in which one can go to use SLOs effectively.&lt;/p&gt;

&lt;p&gt;The process essentially involves three steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify the level of the Service&lt;/li&gt;
&lt;li&gt;Identify the right type of SLO&lt;/li&gt;
&lt;li&gt;Set the SLO Targets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before diving deep into it, let’s understand a few terminologies in the Site Reliability Engineering and Observability world.&lt;/p&gt;

&lt;h2&gt;
  
  
  SLO Terminologies
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Service Level Indicator(SLI)
&lt;/h3&gt;

&lt;p&gt;A Service Level Indicator (&lt;strong&gt;SLI&lt;/strong&gt;) is a measure of the service level provided by a service provider to a customer. It is a quantitative measure that captures key metrics, such as the percentage of successful requests or the percentage of requests completed within 200 milliseconds.&lt;/p&gt;
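As a rough illustration, here is how those two example SLIs could be computed in Python from a batch of (success, latency) observations. The sample data is made up.

```python
# Each request is recorded as (succeeded, latency_ms). Hypothetical data.
requests = [(True, 120), (True, 180), (False, 950), (True, 40), (True, 210)]

def availability_sli(reqs):
    """Fraction of requests that succeeded."""
    return sum(1 for ok, _ in reqs if ok) / len(reqs)

def latency_sli(reqs, threshold_ms=200):
    """Fraction of requests completed within the latency threshold."""
    slow = sum(1 for _, lat in reqs if lat > threshold_ms)
    return 1 - slow / len(reqs)

print(availability_sli(requests))   # 4 of 5 succeeded
print(latency_sli(requests))        # 3 of 5 within 200 ms
```

In practice these ratios are computed continuously over a rolling window from your metrics backend rather than from an in-memory list.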

&lt;h3&gt;
  
  
  Service Level Objective(SLO)
&lt;/h3&gt;

&lt;p&gt;A &lt;a href="https://sre.google/sre-book/service-level-objectives/" rel="noopener noreferrer"&gt;Service Level objective&lt;/a&gt; is a codified way to define a goal for service behaviour using a Service Level indicator within a compliance target.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Level Agreement(SLA)
&lt;/h3&gt;

&lt;p&gt;A &lt;a href="https://www.gartner.com/en/information-technology/glossary/sla-service-level-agreement" rel="noopener noreferrer"&gt;service level agreement&lt;/a&gt; defines the level of service expected by users in terms of customer experience. They also include penalties in case of agreement violation.&lt;/p&gt;

&lt;p&gt;Let’s go through the SLO process now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identify the level of Service
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Customer-Facing Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Services running HTTP API, app, or gRPC workloads where the caller expects an immediate response to the request it submits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stateful Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Services like a database. In a microservices environment where multiple services call the same database, it is common not to think of the database as a service in its own right. Try answering this straightforward question the next time you are unable to decide:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Does my service HAVE a database, or does my service CALL a database?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Asynchronous Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Any service that does not respond with the result of the request, but instead queues it to be processed later. The only immediate response is an acknowledgement of whether the service accepted the task; the actual result becomes available later, once the service processes it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Operational Services are usually internal to an organization and deal with jobs like reconciliation, infrastructure bring-up, tear-down, etc. These jobs are typically asynchronous, but with a greater focus on accuracy than on throughput: the job may run late, but it must be as correct as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identify the right type of SLO
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Request-Based SLO
&lt;/h3&gt;

&lt;p&gt;Request-based SLOs perform &lt;strong&gt;&lt;em&gt;some&lt;/em&gt;&lt;/strong&gt; aggregation of good &lt;strong&gt;requests&lt;/strong&gt; vs. the total number of &lt;strong&gt;requests&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, there is the notion of a &lt;strong&gt;request&lt;/strong&gt;: in generic terms, a single operation on a component that either succeeds or fails.&lt;/li&gt;
&lt;li&gt;Secondly, the SLIs must not be pre-aggregated, because request SLOs perform an aggregation over a period of time. One can’t use pre-aggregated metrics (e.g. CloudWatch / Stackdriver, which directly return P99 latency rather than total requests and per-request latency) for request SLOs.&lt;/li&gt;
&lt;li&gt;Additionally, for low-traffic services, request SLOs can be noisy: they can keep flapping even when a very small percentage of requests fail. E.g. if your service receives only 10 requests in a day, setting a 99% compliance target does not make sense, because a single failed request brings compliance down to 90%, depleting the error budget.&lt;/li&gt;
&lt;/ul&gt;
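&lt;p&gt;As a rough illustration (my own sketch with made-up numbers, not from any SLO tool), the request-based calculation and its low-traffic noise look like this:&lt;/p&gt;

```python
# Sketch: request-based SLO compliance = good requests / total requests.
def request_compliance(good: int, total: int) -> float:
    """Return compliance as a percentage of good requests."""
    return 100.0 * good / total

# High-traffic service: 10 failures out of 100,000 barely move the needle.
print(request_compliance(99_990, 100_000))  # 99.99

# Low-traffic service: a single failure out of 10 daily requests
# drops compliance straight through a 99% target.
print(request_compliance(9, 10))  # 90.0
```

&lt;p&gt;This is why the compliance target has to be picked relative to the traffic the service actually receives.&lt;/p&gt;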

&lt;h3&gt;
  
  
  Window-Based SLO
&lt;/h3&gt;

&lt;p&gt;A window-based SLO is a ratio of good &lt;strong&gt;time intervals&lt;/strong&gt; vs. total &lt;strong&gt;time intervals&lt;/strong&gt;. It is useful for sources where individual requests are not available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For example,&lt;/strong&gt; in the case of a Kubernetes cluster, availability can be measured as the percentage of pods allocated vs. pods requested. Sometimes you may also not want to calculate the SLO as the overall performance of the service over a period of time.&lt;/p&gt;

&lt;p&gt;E.g. in the case of a payment service, even 2% of requests failing in a 5-minute window is unacceptable, because it is a critical service for the business. Overall performance may not have degraded, but for that 2% of requests none of the payments succeeded. Window-based SLOs are useful in such cases.&lt;/p&gt;
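&lt;p&gt;To make this concrete, here is a small sketch (illustrative only, with invented numbers) of a window-based SLO that marks a 5-minute window bad when its failure rate crosses a threshold:&lt;/p&gt;

```python
# Sketch: window-based SLO = good time windows / total time windows.
def window_compliance(failure_rates, max_failure_rate):
    """Percentage of windows whose failure rate stays within the threshold."""
    good = sum(1 for rate in failure_rates if rate <= max_failure_rate)
    return 100.0 * good / len(failure_rates)

# Twelve 5-minute windows in an hour; one window saw 2% of payments fail.
rates = [0.0] * 11 + [0.02]

# The overall hourly failure rate is tiny, but with a 1% per-window
# threshold that one bad window is still counted against the SLO.
print(window_compliance(rates, max_failure_rate=0.01))
```

&lt;p&gt;The per-window threshold catches short, sharp degradations that an overall request ratio would smooth over.&lt;/p&gt;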

&lt;blockquote&gt;
&lt;p&gt;Using the above guidelines, we can create a rough flowchart to decide which type of SLO to choose depending on certain decision points.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpjwob5n6zpcvrj6lr53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpjwob5n6zpcvrj6lr53.png" alt="A practical guide for implementing SLO" width="800" height="444"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;SLO Process&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set the SLO Targets
&lt;/h2&gt;

&lt;p&gt;When you start thinking about setting objectives, some questions will arise:&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Should I set 99.999% from the start or be conservative?&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start conservatively. Look at historical numbers to calculate your 9s, or dive right in with the lowest 9, such as 90%.&lt;/li&gt;
&lt;li&gt;The baseline of the service, or historical data about the customer experience, can be helpful here.&lt;/li&gt;
&lt;li&gt;Keep your systems running against this objective for a period of time and check whether the error budget is being depleted.&lt;/li&gt;
&lt;li&gt;If it is, improve your system’s stability. If it isn’t, move up the ladder of service reliability: from 90% go to 95%, then 99%, and so on.&lt;/li&gt;
&lt;li&gt;Keep in mind the Service Level Agreements (SLAs) that you may have with customers, or with third-party upstream services you depend on. You can’t promise a higher compliance target than the SLA a third-party service gives you.&lt;/li&gt;
&lt;/ul&gt;
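&lt;p&gt;One way to build intuition for this ladder (a back-of-the-envelope sketch, not a formal definition) is to see how quickly the error budget shrinks as you add 9s:&lt;/p&gt;

```python
# Sketch: error budget implied by a compliance target over a window.
def error_budget_minutes(target_pct: float, window_days: int) -> float:
    """Minutes of allowed unreliability in the compliance window."""
    return (1 - target_pct / 100.0) * window_days * 24 * 60

# Climbing from 90% to 99.9% over a 30-day window:
for target in (90.0, 95.0, 99.0, 99.9):
    print(f"{target}% -> {error_budget_minutes(target, 30):.0f} minutes of budget")
```

&lt;p&gt;Each rung roughly halves or decimates the budget, which is why jumping straight to 99.999% is rarely realistic.&lt;/p&gt;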

&lt;p&gt;&lt;u&gt;What should be the compliance window?&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generally, this is 2x your sprint window, so that you measure the service’s performance over a large enough duration to make an informed decision in the next sprint cycle on whether to focus on new features or on maintenance.&lt;/li&gt;
&lt;li&gt;If you are not sure, start with a day and expand to a week. Remember that the longer your window, the longer the effects of a broken / recovered SLO last.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;How many ms should I set for latency?&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It depends. What kind of user experience are you aiming for? Is your application a payment gateway? Is it a batch processing system where real-time feedback isn’t important?&lt;/li&gt;
&lt;li&gt;To start out, measure your P50 and P99 latencies, give yourself some headroom initially, and set your SLOs against the P99 latency. Depending on the stability of your systems, use the same ladder-based approach as above and iterate.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Service Level Objectives are not a silver bullet
&lt;/h2&gt;

&lt;p&gt;Let us take a simple scenario:&lt;/p&gt;

&lt;p&gt;A user makes a request to a web application hosted on Kubernetes served via a load balancer. The request flow is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftshbm6p5qy4eoxej5mvi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftshbm6p5qy4eoxej5mvi.png" alt="A practical guide for implementing SLO" width="800" height="111"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Request Flow&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Instead of setting a blind SLO on the load balancer and calling it a day, ask yourself the following questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where should I set the SLO: ALB, Ambassador, K8s, or all of them? Typically, SLOs are best set closest to the user, or on whatever best represents the end user’s experience. In the above example, one might want to set an SLO on the ALB, but if the same ALB serves multiple backends, it might be a good idea to set the SLO on the next hop: Ambassador.&lt;/li&gt;
&lt;li&gt;If I set a latency SLO, what should the right latency value be? Look at baseline percentile numbers. Do you want to catch degradations of the P50 customer experience, the P95 customer experience, or against a static number?&lt;/li&gt;
&lt;li&gt;Do I have the metrics I need to construct an SLI expression? AWS CloudWatch reports latency as pre-calculated P99 values, so if you want a request-based SLO built from raw requests, you can’t have one, because the data is pre-aggregated. You can only use window-based SLOs.&lt;/li&gt;
&lt;li&gt;Suppose you set an availability SLO on Ambassador with the expression &lt;code&gt;availability = 1 - (5xx / throughput)&lt;/code&gt;. What happens if the Ambassador pod crashes on K8s and emits no &lt;code&gt;5xx&lt;/code&gt; or &lt;code&gt;throughput&lt;/code&gt; signal at all? Does the expression become &lt;code&gt;availability = 1 - 0 / 0&lt;/code&gt;, or &lt;code&gt;availability = undefined&lt;/code&gt;?&lt;/li&gt;
&lt;li&gt;For a payment processing application, there might be a lag between the time at which a transaction was initiated and the time at which it completed. How does &lt;code&gt;availability = 1 - (5xx / throughput)&lt;/code&gt; work now? How do I know whether a &lt;code&gt;5xx&lt;/code&gt; I got was for a request counted in the current throughput, or a retry of an earlier request that failed?&lt;/li&gt;
&lt;/ul&gt;
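&lt;p&gt;The no-data edge case above can be made explicit in code. This is a hypothetical sketch of how an SLI evaluator might guard the &lt;code&gt;availability = 1 - (5xx / throughput)&lt;/code&gt; expression; the function and its behaviour are my own invention, not from any specific tool:&lt;/p&gt;

```python
from typing import Optional

# Sketch: availability = 1 - (5xx / throughput), with the no-data case
# reported as "unknown" instead of silently healthy.
def availability(errors_5xx: Optional[float],
                 throughput: Optional[float]) -> Optional[float]:
    if not throughput:      # crashed pod: no 5xx and no throughput signal
        return None         # 1 - 0/0 is undefined; alert on no-data separately
    return 1 - (errors_5xx or 0) / throughput

print(availability(2, 100))      # 0.98
print(availability(None, None))  # None, rather than "100% available"
```

&lt;p&gt;Treating missing signals as &lt;code&gt;None&lt;/code&gt; rather than as perfect availability is exactly the gap the uptime / no-data alerts below are meant to cover.&lt;/p&gt;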

&lt;p&gt;This is not an exhaustive list of questions. Real-world scenarios will be more complicated, and that is what makes setting achievable reliability targets, across multiple stakeholders and critical user journeys, tricky.&lt;/p&gt;

&lt;h3&gt;
  
  
  So does this mean all hope is &lt;em&gt;SLOst&lt;/em&gt;?
&lt;/h3&gt;

&lt;p&gt;Of course not! SLOs are a way to gauge your system’s health and customer experience over a time period. But they are not the &lt;strong&gt;only&lt;/strong&gt; way. In the above scenario, one could:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set a request-based SLO on the Ambassador.&lt;/li&gt;
&lt;li&gt;Set an uptime window SLO, or an alert that checks for no-data situations on signals that are always ≥ 0, e.g. Ambassador throughput.&lt;/li&gt;
&lt;li&gt;Set relevant alerts to catch pod crashes of the application.&lt;/li&gt;
&lt;li&gt;Set alerts on load balancer 5xx to catch scenarios where ALB had an issue and the request was not forwarded to the Ambassador backend.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Want to know more about Last9 and how we make using SLOs dead simple? Check out &lt;a href="http://last9.io/" rel="noopener noreferrer"&gt;last9.io&lt;/a&gt;; we're building SRE tools to make running systems at scale fun and &lt;strong&gt;embarrassingly easy&lt;/strong&gt; &lt;strong&gt;🟢&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>slo</category>
      <category>deepdives</category>
      <category>last9engineering</category>
      <category>observability</category>
    </item>
    <item>
      <title>Changing default git branch to main locally</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Mon, 24 May 2021 11:55:19 +0000</pubDate>
      <link>https://dev.to/prathamesh/changing-default-git-branch-to-main-locally-2bpo</link>
      <guid>https://dev.to/prathamesh/changing-default-git-branch-to-main-locally-2bpo</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rC0bhXJW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44c2a87fl571uka9lb8c.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rC0bhXJW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44c2a87fl571uka9lb8c.jpeg" alt="git main branch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub has &lt;a href="https://github.blog/changelog/2020-10-01-the-default-branch-for-newly-created-repositories-is-now-main/"&gt;shifted to the &lt;code&gt;main&lt;/code&gt; branch&lt;/a&gt; for new projects, but creating a new project locally with &lt;code&gt;git init&lt;/code&gt; still creates a master branch. In this post, we will see how to create a &lt;code&gt;main&lt;/code&gt; branch locally by default.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;git init&lt;/code&gt; command accepts the name of the initial branch via the &lt;code&gt;-b&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git init &lt;span class="nt"&gt;-b&lt;/span&gt; main
Initialized empty Git repository &lt;span class="k"&gt;in&lt;/span&gt; /private/tmp/food/.git/
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;tmp.txt
&lt;span class="nv"&gt;$ &lt;/span&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;main &lt;span class="o"&gt;(&lt;/span&gt;root-commit&lt;span class="o"&gt;)&lt;/span&gt; 0b3550b] main
 1 file changed, 0 insertions&lt;span class="o"&gt;(&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;, 0 deletions&lt;span class="o"&gt;(&lt;/span&gt;-&lt;span class="o"&gt;)&lt;/span&gt;
 create mode 100644 tmp.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This is possible only from Git 2.28 onwards.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We can also configure the default branch globally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; init.defaultBranch main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For older Git versions, we can rename the branch to &lt;code&gt;main&lt;/code&gt; before committing anything, because the branch doesn't really exist until something is committed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;sport
git init
git branch &lt;span class="nt"&gt;-m&lt;/span&gt; main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My git version was &lt;code&gt;2.31.1&lt;/code&gt; while writing this post.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git &lt;span class="nt"&gt;--version&lt;/span&gt;
git version 2.31.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>github</category>
      <category>git</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Installing GNU grep and find on OS X</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Mon, 24 May 2021 10:07:47 +0000</pubDate>
      <link>https://dev.to/prathamesh/installing-gnu-grep-and-find-on-os-x-2l4b</link>
      <guid>https://dev.to/prathamesh/installing-gnu-grep-and-find-on-os-x-2l4b</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install grep
&lt;/span&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;findutils
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, update your &lt;code&gt;.bashrc&lt;/code&gt; or &lt;code&gt;.zshrc&lt;/code&gt; with the following to set up the paths correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# .bashrc/.zshrc&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/usr/local/opt/findutils/libexec/gnubin:&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/usr/local/opt/grep/libexec/gnubin:&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's all!&lt;/p&gt;

</description>
      <category>systems</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Adding jemalloc to Rails apps on Heroku</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Sun, 23 May 2021 01:46:23 +0000</pubDate>
      <link>https://dev.to/prathamesh/adding-jemalloc-to-rails-apps-on-heroku-3i1l</link>
      <guid>https://dev.to/prathamesh/adding-jemalloc-to-rails-apps-on-heroku-3i1l</guid>
      <description>&lt;p&gt;&lt;code&gt;jemalloc&lt;/code&gt; is a malloc implementation developed by Jason Evans which is known to improve memory consumption of Rails apps, &lt;a href="https://devcenter.heroku.com/articles/ruby-memory-use#excess-memory-use-due-to-malloc-in-a-multi-threaded-environment"&gt;especially on Heroku&lt;/a&gt;. By default Ruby uses &lt;code&gt;malloc&lt;/code&gt; from C to manage memory but it can run into memory fragmentation issues.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;jemalloc&lt;/code&gt;, on the other hand, describes itself as a malloc implementation that tries to avoid memory fragmentation. A &lt;a href="https://dev.to/devteam/how-we-decreased-our-memory-usage-with-jemalloc-4d5n"&gt;lot&lt;/a&gt; of &lt;a href="https://pawelurbanek.com/2018/01/15/limit-rails-memory-usage-fix-R14-and-save-money-on-heroku/"&gt;people&lt;/a&gt; have tested jemalloc in production apps deployed on Heroku and verified that it reduces memory usage compared to the default memory management that ships with Ruby.&lt;/p&gt;

&lt;p&gt;Enabling it in a Rails app on Heroku consists of the following steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add jemalloc buildpack
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;heroku buildpacks:add --index 1 https://github.com/gaffneyc/heroku-buildpack-jemalloc.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Enable jemalloc
&lt;/h3&gt;

&lt;p&gt;This can be done in two ways. Either we can set the environment variable &lt;code&gt;JEMALLOC_ENABLED&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;heroku config:set JEMALLOC_ENABLED=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, we can add &lt;code&gt;jemalloc.sh&lt;/code&gt; prefix to the processes listed in the Procfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Procfile

web: jemalloc.sh bin/puma -C config/puma.rb
worker: jemalloc.sh bundle exec sidekiq -q default -q mailers -c ${SIDEKIQ_CONCURRENCY:-5}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note that setting the JEMALLOC_ENABLED environment variable will enable jemalloc for all processes of your app, whereas adding the &lt;code&gt;jemalloc.sh&lt;/code&gt; prefix in the Procfile gives you control over which processes you enable it for.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After this is done, deploying the app on Heroku will enable &lt;code&gt;jemalloc&lt;/code&gt; and we can monitor the memory consumption.&lt;/p&gt;

&lt;h3&gt;
  
  
  jemalloc version
&lt;/h3&gt;

&lt;p&gt;We can choose the &lt;code&gt;jemalloc&lt;/code&gt; version by setting &lt;code&gt;JEMALLOC_VERSION&lt;/code&gt; to a version number from this &lt;a href="https://github.com/gaffneyc/heroku-buildpack-jemalloc#jemalloc_version"&gt;list&lt;/a&gt;. By default, the buildpack chooses the most recent version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Just by enabling jemalloc, we can see a significant drop in memory usage on Heroku without any code changes. So I highly recommend enabling it for production apps deployed on Heroku.&lt;/p&gt;

</description>
      <category>heroku</category>
      <category>rails</category>
      <category>bestpractices</category>
    </item>
    <item>
      <title>Bundler 2.2.3+ and deployment of Ruby apps</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Sun, 18 Apr 2021 14:06:49 +0000</pubDate>
      <link>https://dev.to/prathamesh/bundler-2-2-3-and-deployment-of-ruby-apps-2661</link>
      <guid>https://dev.to/prathamesh/bundler-2-2-3-and-deployment-of-ruby-apps-2661</guid>
      <description>&lt;p&gt;While deploying a new Rails 6.1.3.1 application built with Bundler 2.2.16 on Heroku, I ran into this error in Heroku build log.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;-----&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Ruby app detected
&lt;span class="nt"&gt;-----&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Installing bundler 2.2.15
&lt;span class="nt"&gt;-----&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Removing BUNDLED WITH version &lt;span class="k"&gt;in &lt;/span&gt;the Gemfile.lock
&lt;span class="nt"&gt;-----&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Compiling Ruby/Rails
&lt;span class="nt"&gt;-----&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Using Ruby version: ruby-3.0.1
&lt;span class="nt"&gt;-----&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Installing dependencies using bundler 2.2.15
       Running: &lt;span class="nv"&gt;BUNDLE_WITHOUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'development:test'&lt;/span&gt; &lt;span class="nv"&gt;BUNDLE_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vendor/bundle &lt;span class="nv"&gt;BUNDLE_BIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vendor/bundle/bin &lt;span class="nv"&gt;BUNDLE_DEPLOYMENT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 bundle &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-j4&lt;/span&gt;
       Your bundle only supports platforms &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"x86_64-darwin-20"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; but your &lt;span class="nb"&gt;local &lt;/span&gt;platform
       is x86_64-linux. Add the current platform to the lockfile with &lt;span class="sb"&gt;`&lt;/span&gt;bundle lock
       &lt;span class="nt"&gt;--add-platform&lt;/span&gt; x86_64-linux&lt;span class="sb"&gt;`&lt;/span&gt; and try again.
       Bundler Output: Your bundle only supports platforms &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"x86_64-darwin-20"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; but your &lt;span class="nb"&gt;local &lt;/span&gt;platform
       is x86_64-linux. Add the current platform to the lockfile with &lt;span class="sb"&gt;`&lt;/span&gt;bundle lock
       &lt;span class="nt"&gt;--add-platform&lt;/span&gt; x86_64-linux&lt;span class="sb"&gt;`&lt;/span&gt; and try again.
 &lt;span class="o"&gt;!&lt;/span&gt;
 &lt;span class="o"&gt;!&lt;/span&gt; Failed to &lt;span class="nb"&gt;install &lt;/span&gt;gems via Bundler.
 &lt;span class="o"&gt;!&lt;/span&gt;
 &lt;span class="o"&gt;!&lt;/span&gt; Push rejected, failed to compile Ruby app.
 &lt;span class="o"&gt;!&lt;/span&gt; Push failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;If you are using Bundler 1.x you will not run into this error. It only happens with Bundler 2.2.3 and above. My bundler version is 2.2.16.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The app was built on Mac OS X. The &lt;code&gt;Gemfile.lock&lt;/code&gt; contained the following lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;PLATFORMS&lt;/span&gt;
  &lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;darwin&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As per the warning, my &lt;code&gt;Gemfile.lock&lt;/code&gt; was generated on Mac OS X but I am deploying to Linux, and Bundler raised a flag. Kudos to the Heroku buildpack for raising a user-friendly error message!&lt;/p&gt;

&lt;p&gt;A lot of gems, such as &lt;code&gt;nokogiri&lt;/code&gt;, ship platform-specific releases. In previous versions of Bundler, the approach for detecting the platform-specific version was error-prone, as per &lt;a href="https://github.com/rubygems/rubygems/issues/4269#issuecomment-758564690"&gt;this comment&lt;/a&gt;. To mitigate such errors, Bundler now locks &lt;code&gt;Gemfile.lock&lt;/code&gt; to the platform on which it was generated.&lt;/p&gt;

&lt;p&gt;When deploying to Heroku, the Heroku Ruby buildpack runs bundle install in &lt;a href="https://bundler.io/man/bundle-install.1.html#DEPLOYMENT-MODE"&gt;deployment mode&lt;/a&gt;. It expects the &lt;code&gt;Gemfile.lock&lt;/code&gt; to be frozen and already compatible with the platform on which it is being run. In our case, the &lt;code&gt;Gemfile.lock&lt;/code&gt; was generated on a Mac, so it is not compatible with Linux, the platform Heroku deploys to.&lt;/p&gt;

&lt;p&gt;If we do not use deployment mode, then Bundler &lt;strong&gt;will resolve the gems in real time for the current platform&lt;/strong&gt;, and this problem will not happen.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Though you should not do this, as dev-prod parity breaks down: in production, Bundler may resolve to gems which you have not tested in development or test mode.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To fix this, as the warning recommended, I added the Linux platform using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bundle lock &lt;span class="nt"&gt;--add-platform&lt;/span&gt; x86_64-linux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This generated the following diff in &lt;code&gt;Gemfile.lock&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;diff &lt;span class="nt"&gt;--git&lt;/span&gt; a/Gemfile.lock b/Gemfile.lock
index 9f4c3ad..ab20d37 100644
&lt;span class="nt"&gt;---&lt;/span&gt; a/Gemfile.lock
+++ b/Gemfile.lock
@@ &lt;span class="nt"&gt;-102&lt;/span&gt;,6 +102,8 @@ GEM
     nio4r &lt;span class="o"&gt;(&lt;/span&gt;2.5.7&lt;span class="o"&gt;)&lt;/span&gt;
     nokogiri &lt;span class="o"&gt;(&lt;/span&gt;1.11.3-x86_64-darwin&lt;span class="o"&gt;)&lt;/span&gt;
       racc &lt;span class="o"&gt;(&lt;/span&gt;~&amp;gt; 1.4&lt;span class="o"&gt;)&lt;/span&gt;
+ nokogiri &lt;span class="o"&gt;(&lt;/span&gt;1.11.3-x86_64-linux&lt;span class="o"&gt;)&lt;/span&gt;
+ racc &lt;span class="o"&gt;(&lt;/span&gt;~&amp;gt; 1.4&lt;span class="o"&gt;)&lt;/span&gt;
     pg &lt;span class="o"&gt;(&lt;/span&gt;1.2.3&lt;span class="o"&gt;)&lt;/span&gt;
     public_suffix &lt;span class="o"&gt;(&lt;/span&gt;4.0.6&lt;span class="o"&gt;)&lt;/span&gt;
     puma &lt;span class="o"&gt;(&lt;/span&gt;5.2.2&lt;span class="o"&gt;)&lt;/span&gt;
@@ &lt;span class="nt"&gt;-198&lt;/span&gt;,6 +200,7 @@ GEM

 PLATFORMS
   x86_64-darwin-20
+ x86_64-linux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After pushing this change to Heroku, the deployment went through.&lt;/p&gt;

&lt;h3&gt;
  
  
  Non-Heroku deployments
&lt;/h3&gt;

&lt;p&gt;This is not specific to Heroku deployments. It can happen on any other deployment platform where we develop on Mac and deploy to Linux. The fix is the same: add the platform to &lt;code&gt;Gemfile.lock&lt;/code&gt; and redeploy.&lt;/p&gt;




</description>
      <category>bundler</category>
      <category>deployment</category>
      <category>rails</category>
    </item>
    <item>
      <title>Puma installation issue due to missing ctype.h on Mac OS X</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Sun, 04 Oct 2020 10:01:39 +0000</pubDate>
      <link>https://dev.to/prathamesh/puma-installation-issue-due-to-missing-ctype-h-on-mac-os-x-1f7o</link>
      <guid>https://dev.to/prathamesh/puma-installation-issue-due-to-missing-ctype-h-on-mac-os-x-1f7o</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;puma_http11.c:203:22: note: include the header &amp;lt;ctype.h&amp;gt; or explicitly provide a declaration for 'isspace'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Are you facing this error recently while trying to install puma gem 4.3.5 on Mac OS X?&lt;/p&gt;

&lt;p&gt;This issue was reported on the &lt;a href="https://github.com/puma/puma/issues/2304"&gt;Puma issue tracker&lt;/a&gt; and is fixed in version 4.3.6 as well as in the latest 5.0.0 release.&lt;/p&gt;

&lt;p&gt;But if you want to fix it without updating the Puma gem from 4.3.5, you can run the following command to update your Bundler configuration.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bundle config build.puma --with-cflags="-Wno-error=implicit-function-declaration"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;bundle install&lt;/code&gt; will run successfully after this.&lt;/p&gt;

&lt;p&gt;Even better is to just update Puma to the latest version, which fixes this issue.&lt;/p&gt;

</description>
      <category>puma</category>
      <category>osx</category>
    </item>
    <item>
      <title>OR query with multiple conditions on same column using Sequel</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Sun, 16 Aug 2020 13:40:50 +0000</pubDate>
      <link>https://dev.to/prathamesh/or-query-with-multiple-conditions-on-same-column-using-sequel-3la1</link>
      <guid>https://dev.to/prathamesh/or-query-with-multiple-conditions-on-same-column-using-sequel-3la1</guid>
      <description>&lt;p&gt;I recently started using &lt;a href="http://sequel.jeremyevans.net/"&gt;Sequel&lt;/a&gt; to manipulate the PostgreSQL database in our Rails application at &lt;a href="https://last9.io"&gt;Last9&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I had to write an SQL query as follows.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'inactive'&lt;/span&gt; &lt;span class="k"&gt;OR&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'deleted'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sequel provides an &lt;code&gt;or&lt;/code&gt; function that can be used to construct &lt;code&gt;OR&lt;/code&gt; expressions.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;exp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Sequel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;or&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;x: &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;y: &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="no"&gt;DB&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:users&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This results in the following SQL.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="nv"&gt;"SELECT * FROM &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;users&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt; WHERE ((&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt; = 1) OR (&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;y&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt; = 2))"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's try the same technique to write SQL for our use case: we want to select all users whose status is either &lt;code&gt;inactive&lt;/code&gt; or &lt;code&gt;deleted&lt;/code&gt;.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;DB&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:users&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Sequel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;or&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;status: &lt;/span&gt;&lt;span class="s1"&gt;'deleted'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;status: &lt;/span&gt;&lt;span class="s1"&gt;'inactive'&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pry&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="ss"&gt;warning: &lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="ss"&gt;:status&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;duplicated&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;overwritten&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;
&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"SELECT * FROM &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;users&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; WHERE (&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;status&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; = 'inactive')"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But as you can see, if we use the same key, which is &lt;code&gt;status&lt;/code&gt; in this case, Ruby keeps only the last value, so the generated query contains just the &lt;code&gt;inactive&lt;/code&gt; condition.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ruby discards duplicate keys in a hash, keeping only the last value, so beware when building hashes with repeated keys.&lt;/p&gt;
&lt;/blockquote&gt;
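&lt;p&gt;Here is a quick plain-Ruby sketch of that pitfall, using hypothetical values (no Sequel involved):&lt;/p&gt;

```ruby
# Duplicate keys in a hash literal: Ruby warns at parse time and
# keeps only the last value for the repeated key.
conditions = { status: 'deleted', status: 'inactive' }
# warning: key :status is duplicated and overwritten

puts conditions.size     # the hash ends up with a single entry
puts conditions[:status] # holding the last value, "inactive"
```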

&lt;p&gt;So how do we generate the &lt;code&gt;OR&lt;/code&gt; query on the same column using Sequel?&lt;/p&gt;

&lt;p&gt;We can pass the arguments to &lt;code&gt;Sequel.or&lt;/code&gt; as an array of pairs instead of a hash.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;DB&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:users&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Sequel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;or&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
                              &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"inactive"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; 
                              &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"deleted"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
                             &lt;span class="p"&gt;])&lt;/span&gt;
                   &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"SELECT * FROM &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;users&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; WHERE (('status' = 'inactive') 
    OR ('status' = 'deleted'))"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This change makes sure that the &lt;code&gt;OR&lt;/code&gt; query on the same &lt;code&gt;status&lt;/code&gt; column is generated correctly.&lt;/p&gt;




&lt;p&gt;Subscribe to my &lt;a href="https://prathamesh.tech/mailing-list"&gt;newsletter&lt;/a&gt; or follow me on &lt;a href="https://twitter.com/_cha1tanya"&gt;Twitter&lt;/a&gt; to learn more about how to use Sequel with Rails apps.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>sequel</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Managing infra code ⚙️🛠🧰</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Wed, 12 Aug 2020 16:52:58 +0000</pubDate>
      <link>https://dev.to/last9/managing-infra-code-43bp</link>
      <guid>https://dev.to/last9/managing-infra-code-43bp</guid>
      <description>&lt;p&gt;Do you care about the quality of your infra code?&lt;/p&gt;

&lt;p&gt;A. As much as product code&lt;br&gt;
B. Somewhat but mostly no&lt;br&gt;
C. We create infra via UI&lt;/p&gt;

&lt;p&gt;Let's discuss how you manage infra code! Feel free to share your thoughts in the comments section.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>sre</category>
      <category>devops</category>
      <category>poll</category>
    </item>
    <item>
      <title>Creating unlogged (PostgreSQL) tables in Rails</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Mon, 10 Aug 2020 16:52:30 +0000</pubDate>
      <link>https://dev.to/prathamesh/creating-unlogged-postgresql-tables-in-rails-365a</link>
      <guid>https://dev.to/prathamesh/creating-unlogged-postgresql-tables-in-rails-365a</guid>
      <description>&lt;p&gt;One of the most important aspects of a relational database is durability. The database has to make certain guarantees which add overhead to the database system. But what if you want to give up on the durability aspect and increase the speed instead?&lt;/p&gt;

&lt;p&gt;This is especially useful in the test environment, where one may not care about durability and wants tests to run faster. PostgreSQL supports &lt;a href="https://www.postgresql.org/docs/current/non-durability.html"&gt;multiple settings&lt;/a&gt; for non-durability that forgo data integrity guarantees in exchange for performance. One such feature is &lt;strong&gt;unlogged tables&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Data written to unlogged tables is not written to the write-ahead log, which makes them considerably faster than ordinary tables. But there is a catch: &lt;strong&gt;these tables are not crash-safe.&lt;/strong&gt; Whenever there is a crash or unclean shutdown, such tables are truncated.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But they can be used in the test environment, where we don't really care about durability. They can also be used for temporary tables that can be recreated even if they are wiped out. We can create an unlogged table as follows.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;prathamesh@/tmp:prathamesh&amp;gt; create unlogged table users (name varchar, email varchar);
CREATE TABLE
Time: 0.031s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Unlogged tables and Rails
&lt;/h3&gt;

&lt;p&gt;Rails allows creating unlogged tables with the PostgreSQL adapter from Rails 6 onwards. We can either create unlogged tables in a migration or set a global flag so that all tables are created as unlogged.&lt;/p&gt;

&lt;h4&gt;
  
  
  Creating unlogged table in a migration
&lt;/h4&gt;

&lt;p&gt;Rails provides &lt;code&gt;create_unlogged_table&lt;/code&gt;, similar to &lt;code&gt;create_table&lt;/code&gt;, which creates an unlogged table.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CreateUsers&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveRecord&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Migration&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;6.0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;change&lt;/span&gt;
    &lt;span class="n"&gt;create_unlogged_table&lt;/span&gt; &lt;span class="ss"&gt;:users&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;id: :uuid&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt; &lt;span class="ss"&gt;:name&lt;/span&gt;
      &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt; &lt;span class="ss"&gt;:email&lt;/span&gt;

      &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;timestamps&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Creating all tables as unlogged tables
&lt;/h4&gt;

&lt;p&gt;We can set &lt;code&gt;ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; to create all tables as unlogged. This can be set for the test environment as follows.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/environments/test.rb&lt;/span&gt;

&lt;span class="c1"&gt;# Create unlogged tables in test environment to speed up build&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_prepare&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;ActiveRecord&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;ConnectionAdapters&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;PostgreSQLAdapter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_unlogged_tables&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. All tables created in the test environment are now unlogged by default.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This setting is set to false by default in all environments.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Caution!
&lt;/h4&gt;

&lt;p&gt;Unlogged tables are not crash-safe and should not be used in the production environment unless durability is genuinely not a concern. Don't enable them blindly.&lt;/p&gt;




&lt;p&gt;Interested in knowing more about Rails and PostgreSQL? Subscribe to my &lt;a href="https://prathamesh.tech/mailing-list"&gt;newsletter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Effective enqueuing of background jobs</title>
      <dc:creator>Prathamesh Sonpatki</dc:creator>
      <pubDate>Wed, 05 Aug 2020 18:39:50 +0000</pubDate>
      <link>https://dev.to/prathamesh/effective-enqueuing-of-background-jobs-254i</link>
      <guid>https://dev.to/prathamesh/effective-enqueuing-of-background-jobs-254i</guid>
      <description>&lt;h4&gt;
  
  
  TLDR;
&lt;/h4&gt;

&lt;p&gt;A lot of the time we can avoid enqueuing jobs to the background queue if they are going to be discarded immediately upon execution.&lt;/p&gt;




&lt;p&gt;Let's say we have a background job to send a webhook notification to Slack.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# app/services/events_service.rb

class EventsService
  def process
    @event = create_event
    SlackJob.perform_later(@org, @event)
  end
end

# app/jobs/slack_job.rb

class SlackJob &amp;lt; ApplicationJob
  def perform(org, payload)
    if org.slack_enabled?
      SendSlackNotification.new(org, payload).process
    end
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The job is enqueued to send the notification, and inside the job we check whether the organization has Slack enabled. Only if Slack is enabled will the job do its work. But the job still gets enqueued, and the worker process has to pick it up and start executing it before realizing that the job does not satisfy the required conditions and discarding it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;This means the SlackJob gets enqueued and picked up for execution by the worker process every single time, regardless of whether Slack is enabled.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A better approach is to enqueue only the jobs that are actually going to be &lt;strong&gt;&lt;em&gt;executed&lt;/em&gt;&lt;/strong&gt;. This avoids unnecessarily enqueuing jobs that will be discarded immediately.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# app/services/events_service.rb

class EventsService
  def process
    @event = create_event
    if @org.slack_enabled?
      SlackJob.perform_later(@org, @event)
    end
  end
end

# app/jobs/slack_job.rb

class SlackJob &amp;lt; ApplicationJob
  def perform(org, payload)  
    SendSlackNotification.new(org, payload).process
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern should be used in cases where the decision of whether to execute the job is not too complex and does not depend on too many entities. If the job has to decide based on complex logic, that can be handled in the job itself instead of before enqueuing. But for simple conditionals, we can check them beforehand and avoid enqueuing a job only to discard it immediately.&lt;/p&gt;




&lt;p&gt;Interested in knowing more about my thoughts on web programming using Ruby on Rails? Subscribe &lt;a href="https://prathamesh.tech/mailing-list"&gt;here&lt;/a&gt; or follow me on &lt;a href="https://twitter.com/_cha1tanya"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
