<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Scorpil</title>
    <description>The latest articles on DEV Community by Scorpil (@scorpil).</description>
    <link>https://dev.to/scorpil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F360061%2F7bfde6a9-2916-40f7-9f5b-299374115869.png</url>
      <title>DEV Community: Scorpil</title>
      <link>https://dev.to/scorpil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/scorpil"/>
    <language>en</language>
    <item>
      <title>You (probably) don't need DateTime</title>
      <dc:creator>Scorpil</dc:creator>
      <pubDate>Wed, 13 Dec 2023 16:52:46 +0000</pubDate>
      <link>https://dev.to/scorpil/you-probably-dont-need-datetime-1p60</link>
      <guid>https://dev.to/scorpil/you-probably-dont-need-datetime-1p60</guid>
<description>&lt;p&gt;It's easy to make mistakes when dealing with date and time information. The way we measure time is deeply irregular, and it's challenging for programmers to be rigorous enough when describing it. It doesn't help that time-related bugs often lead to issues that stay hidden until some exceptional date, like the 29th of February, or during a &lt;a href="https://en.wikipedia.org/wiki/Leap_second" rel="noopener noreferrer"&gt;leap second&lt;/a&gt;. Some of these issues can be avoided by choosing the correct format for temporal data representation.&lt;/p&gt;

&lt;p&gt;While most programming languages and databases support timezone-aware DateTime data types out of the box, it is quite common to see date/time information stored in a Unix epoch timestamp format. Even experienced software engineers sometimes struggle to choose the best format for the use case. This article aims to help make that decision easier and to clarify common misconceptions about managing time in geographically distributed systems.&lt;/p&gt;

&lt;p&gt;When talking about calendars, dates, and times, it's easy to get disoriented without clear terminology. After all, most people don't usually think about things like time zones in their daily life. Even if you travel often, or work in a distributed team, your intuition is still likely local-first.&lt;/p&gt;

&lt;p&gt;So let's establish some ground rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the &lt;strong&gt;date&lt;/strong&gt; is dictated by your calendar. What date is considered current for you depends on your physical location. That's why the New Year's celebration travels around the globe every year instead of happening everywhere at once.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;date/time&lt;/strong&gt; is, as one would expect, a date coupled with wall-clock time. As with the date, your date/time is a local concept. The correct time is determined by the timezone your location is assigned to.&lt;/li&gt;
&lt;li&gt;yet time is the same for everyone (relativistic effects, while fascinating, are out of the scope of this article). A single event occurs at a single &lt;strong&gt;point-in-time&lt;/strong&gt;, which is mapped to different dates and times across timezones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of the time when you use &lt;strong&gt;date/time&lt;/strong&gt; in daily life, you implicitly assume the &lt;strong&gt;local timezone&lt;/strong&gt;. Without this crucial piece of information, &lt;strong&gt;date/time&lt;/strong&gt; cannot be mapped to a &lt;strong&gt;point in time&lt;/strong&gt;. So is a &lt;strong&gt;date/time&lt;/strong&gt; without a timezone useful at all?&lt;/p&gt;

&lt;p&gt;Intuitively, we use date/time without timezone for things where precision is not important. When you move to a different country, you do not recalculate your birthday, even though, technically, the moment of your birth might fall on a different date in the new timezone. Birthdays and other holidays are examples of &lt;strong&gt;date&lt;/strong&gt; information that inherently has no timezone, so storing them in a timezone-aware format requires some application-level handling.&lt;/p&gt;
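
&lt;p&gt;As an illustrative sketch in Python (the birthday below is made up), a plain calendar date carries no timezone at all, which is exactly what birthday-like data wants:&lt;/p&gt;

```python
from datetime import date

# A birthday is pure calendar data: a plain date, not a timezone-aware
# datetime, models it naturally. The date itself is a hypothetical example.
birthday = date(1990, 2, 14)

# A date object carries no timezone information whatsoever.
assert not hasattr(birthday, "tzinfo")
assert birthday.isoformat() == "1990-02-14"
```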

&lt;p&gt;With software systems, however, precision is often crucial. When a server aggregates events from around the globe, it usually needs the ability to order them based on when they happened in the stream of time; humanity's arbitrary time-perception rules are not important here. So the time information encapsulated in the incoming event needs to mark &lt;strong&gt;a point-in-time&lt;/strong&gt;, and not &lt;em&gt;just&lt;/em&gt; the local &lt;strong&gt;date and time&lt;/strong&gt; of when the event happened.&lt;/p&gt;

&lt;p&gt;There are a few ways of defining a point in time. Recording date and time &lt;strong&gt;with timezone&lt;/strong&gt; is probably the most obvious way of doing it, yet it has certain drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the format is not inherently sortable (without complex logic, or a conversion to another timezone/format)&lt;/li&gt;
&lt;li&gt;it's not possible to calculate the time difference between two events directly (again, without first converting them to the same timezone)&lt;/li&gt;
&lt;li&gt;real-world timezones are &lt;a href="https://www.zainrizvi.io/blog/falsehoods-programmers-believe-about-time-zones/" rel="noopener noreferrer"&gt;messy and complex&lt;/a&gt;; &lt;a href="https://wiki.debian.org/TimeZoneChanges" rel="noopener noreferrer"&gt;timezones change&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Daylight_saving_time" rel="noopener noreferrer"&gt;fluctuate&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
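
&lt;p&gt;To illustrate the sortability drawback, here is a small Python sketch (the timestamps are made up): two ISO 8601 strings with different offsets sort differently as text than they do chronologically:&lt;/p&gt;

```python
from datetime import datetime

# Hypothetical events: "a" happens at 16:00 UTC, "b" at 17:00 UTC,
# so chronologically "a" comes first...
a = "2023-12-13T18:00:00+02:00"
b = "2023-12-13T17:00:00+00:00"

# ...but as plain text "b" sorts first, because "17" precedes "18".
lexical = sorted([a, b])
chronological = sorted([a, b], key=datetime.fromisoformat)

assert lexical == [b, a]
assert chronological == [a, b]
```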

&lt;p&gt;Sure, most of these concerns are, to a large extent, handled by libraries, but even those are not perfect. Besides, however good a library is, it's very easy to create a bug by using it incorrectly. If your data is global by nature (i.e., it describes &lt;strong&gt;a point in time&lt;/strong&gt; and is not tied to any particular timezone), avoiding timezones altogether might be a good idea.&lt;/p&gt;

&lt;h2&gt;Unix Timestamp for Point-in-Time Data&lt;/h2&gt;

&lt;p&gt;The Unix timestamp is defined as the number of non-leap seconds that have passed since January 1st, 1970, 00:00 UTC. It ignores (most of) the complexities of our calendars and elegantly packs time into a simple integer value.&lt;/p&gt;

&lt;p&gt;There is a common misconception that the Unix timestamp does not work in multi-timezone environments. Nothing could be further from the truth: note that the Unix timestamp explicitly uses a &lt;strong&gt;point in time&lt;/strong&gt; in the &lt;strong&gt;UTC timezone&lt;/strong&gt; as a reference point, which means that &lt;em&gt;in a correctly configured environment&lt;/em&gt; the hypothetical &lt;code&gt;unix_timestamp()&lt;/code&gt; function will return the same value in every timezone. After all, your location doesn't change how much time has passed since a particular event.&lt;/p&gt;
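
&lt;p&gt;This is easy to verify in Python: converting a moment to another timezone changes its wall-clock representation but not its timestamp. (The +09:00 offset below is just an arbitrary example.)&lt;/p&gt;

```python
from datetime import datetime, timezone, timedelta

# One and the same moment, viewed from two different timezones.
now_utc = datetime.now(timezone.utc)
now_tokyo = now_utc.astimezone(timezone(timedelta(hours=9)))  # arbitrary offset

# The local representations differ, but the Unix timestamp is identical.
assert now_utc.isoformat() != now_tokyo.isoformat()
assert now_utc.timestamp() == now_tokyo.timestamp()
```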

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zwor5ogfbmp36c8irqy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zwor5ogfbmp36c8irqy.jpg" alt="Unix Timestamp relation to local Date and Time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That said, the Unix timestamp is &lt;em&gt;timezone-agnostic&lt;/em&gt; in the sense that it does not &lt;em&gt;record&lt;/em&gt; information about the timezone where the event happened. If such information is of interest, it can be stored separately, but at that point using a &lt;strong&gt;DateTime&lt;/strong&gt; datatype might be a better option.&lt;/p&gt;

&lt;p&gt;Another common misconception is that Unix time can't be used to represent dates before January 1st, 1970. In reality, systems that need it use negative numbers to naturally extend the supported time range to the past. This might require application-level support, as not all systems expect negative values for a Unix timestamp.&lt;/p&gt;

&lt;p&gt;One quirk of the Unix time format worth mentioning is that it &lt;em&gt;does not account for leap seconds.&lt;/em&gt; The Unix timestamp assumes a day is exactly 86400 seconds long, so whenever a leap second occurs, the Unix timestamp counter effectively gets "stuck" for a second. This logic keeps the Unix timestamp in sync with UTC despite leap seconds.&lt;/p&gt;

&lt;p&gt;The main reason for the Unix timestamp's popularity is its simplicity. You don't need specific data-type support to store it, since the value is just an integer. It's sortable, and computing the time difference in seconds is trivial. Effectively, it consolidates the complexity of our timekeeping system into a single integer value. Just make sure you use at least a 64-bit integer when using Unix time to avoid &lt;a href="https://en.wikipedia.org/wiki/Year_2038_problem" rel="noopener noreferrer"&gt;fixing your codebase in 2038&lt;/a&gt;.&lt;/p&gt;
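
&lt;p&gt;The 2038 limit is easy to demonstrate: the largest signed 32-bit value maps to a moment in January 2038, one second before a 32-bit counter would overflow. A quick Python check:&lt;/p&gt;

```python
from datetime import datetime, timezone

# The largest value a signed 32-bit integer can hold...
max_int32 = 2**31 - 1

# ...corresponds to 2038-01-19T03:14:07 UTC; one second later,
# a 32-bit Unix timestamp overflows.
rollover = datetime.fromtimestamp(max_int32, tz=timezone.utc)
assert rollover.isoformat() == "2038-01-19T03:14:07+00:00"
```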

&lt;h2&gt;What's a DateTime&lt;/h2&gt;

&lt;p&gt;Unlike the Unix timestamp, there is no such thing as a universal DateTime format. Each programming language or tool relies on its own internal logic and implementation details for this data type. For the purpose of this article, we'll define DateTime as an embedded calendar-aware and timezone-aware module, capable of handling the complexity of modern timekeeping systems. Note that this is not always the case in the real world: JavaScript's built-in Date is just a relatively thin wrapper around a timestamp-like value (thin enough that in all but the most trivial use cases you probably don't want to work with it directly, without specialized date-handling libraries).&lt;/p&gt;

&lt;p&gt;Because DateTime implementations are so platform-dependent, they often lead to issues when migrating data between storage technologies. Concrete DateTime implementations might look similar in trivial cases but differ in their range, resolution, timezone-conversion behavior, or special values.&lt;/p&gt;

&lt;p&gt;For example, MySQL's DATETIME type can store dates between the years 1000 and 9999, with a default resolution of 1 second and no way of storing a timezone. PostgreSQL has two DateTime types, named &lt;code&gt;timestamp&lt;/code&gt; and &lt;code&gt;timestamptz&lt;/code&gt;: without timezone and with timezone. Their resolution is much higher: 1 microsecond. Furthermore, PostgreSQL's DateTime types can assume the special values "infinity" (larger than any other value) and "-infinity" (smaller than any other value). These are two incompatible DateTime implementations.&lt;/p&gt;

&lt;h2&gt;Should you use timestamp or DateTime?&lt;/h2&gt;

&lt;p&gt;You are likely better off with &lt;strong&gt;a Unix timestamp&lt;/strong&gt; if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  you have &lt;strong&gt;point-in-time&lt;/strong&gt; data (independent of timezone)&lt;/li&gt;
&lt;li&gt;  you want to ensure interoperability between multiple systems&lt;/li&gt;
&lt;li&gt;  you have a geographically distributed system reading/writing data simultaneously&lt;/li&gt;
&lt;li&gt;  you want to keep your temporal data comparable/sortable (at scale)&lt;/li&gt;
&lt;li&gt;  you need granular control over data storage requirements (at scale)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choose &lt;strong&gt;DateTime with timezone&lt;/strong&gt; if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  you have &lt;strong&gt;point-in-time&lt;/strong&gt; data but you want to retain information about the timezone from which the data originates&lt;/li&gt;
&lt;li&gt;  you need to perform calendar-based manipulations with the date (e.g. use or modify month/week/date/year)&lt;/li&gt;
&lt;li&gt;  it's important for your use case that the raw data remains human-readable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choose &lt;strong&gt;DateTime without timezone&lt;/strong&gt; if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  your data does not reflect a &lt;strong&gt;point-in-time&lt;/strong&gt;, but rather a social construct. This kind of data refers to a different point in time whenever the local calendar or local clock changes (birthdays, appointment times, etc.).&lt;/li&gt;
&lt;/ul&gt;
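
&lt;p&gt;The "social construct" nature of such data shows up in Python's &lt;code&gt;zoneinfo&lt;/code&gt;: the same wall-clock time resolves to different points in time depending on the clock rules in force on that date. (The 09:00 appointment and the Berlin timezone below are arbitrary examples.)&lt;/p&gt;

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; may need the tzdata package

berlin = ZoneInfo("Europe/Berlin")

# The same 09:00 wall-clock appointment, on a summer and a winter date...
summer = datetime(2024, 7, 1, 9, 0, tzinfo=berlin)
winter = datetime(2024, 1, 15, 9, 0, tzinfo=berlin)

# ...maps to different UTC offsets, i.e. different points in time,
# because daylight saving time shifts the local clock.
assert summer.utcoffset() != winter.utcoffset()
```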

&lt;p&gt;Thank you for reading this article; I hope it helped you choose the right format for storing date and time. If you have any interesting date/time handling stories, please share them in the comments below.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>database</category>
      <category>learning</category>
      <category>coding</category>
    </item>
    <item>
      <title>An Overview of AWS Step Functions</title>
      <dc:creator>Scorpil</dc:creator>
      <pubDate>Wed, 04 Oct 2023 16:56:43 +0000</pubDate>
      <link>https://dev.to/scorpil/an-overview-of-aws-step-functions-49l6</link>
      <guid>https://dev.to/scorpil/an-overview-of-aws-step-functions-49l6</guid>
<description>&lt;p&gt;One aspect I appreciate about working with AWS is its trove of fully managed, infinitely scalable services like DynamoDB, Lambda, SNS/SQS, S3, etc. Combining them makes it possible to create highly scalable infrastructure without an upfront cost, avoiding the extra effort required to manage dedicated servers or platforms like Kubernetes. Sometimes this leads to a situation where the project's backend consists of dozens of services directly influencing each other, resembling some serverless Rube Goldberg machine.&lt;/p&gt;

&lt;p&gt;The dangers of this kind of architecture are subtle. Although the components are simple and well-isolated, tracking their interactions can be challenging. Naturally, as any software system grows, interactions between internal components become increasingly complex, involving retries, edge-case scenario paths, legacy compatibility layers, etc. As time goes on, updating this architecture becomes increasingly risky. It is imperative to have up-to-date technical documentation, including dependency diagrams.&lt;/p&gt;

&lt;p&gt;When it comes to operating a system, i.e., debugging issues, fixing outages, and providing all kinds of support, the importance of an advanced centralized logging and monitoring strategy becomes evident. Otherwise, tracing what went wrong in a multi-stage process spanning multiple services is a challenging task.&lt;/p&gt;

&lt;p&gt;Monitoring, logging, and especially documentation are notoriously hard to get right and maintain long-term. I prefer to choose solutions that provide high levels of observability out of the box whenever possible, avoiding the need for a complex custom monitoring system. AWS Step Functions is one such service. It's a must-have tool for your toolbox if your job entails designing AWS-based architectures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2p3m25pluot54o9tupa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2p3m25pluot54o9tupa.png" alt="An example of an AWS Step Functions"&gt;&lt;/a&gt;An example of an AWS Step Functions workflow with a manual approval task, &lt;a href="https://aws.amazon.com/step-functions/use-cases/" rel="noopener noreferrer"&gt;from AWS docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;One service to rule them all&lt;/h2&gt;

&lt;p&gt;Step Functions is a kind of meta-service: it coordinates the execution of other services, potentially including those running outside of the AWS ecosystem. You define your process by constructing a State Machine with a JSON-based definition language. A State Machine consists of steps and transitions between them, corresponding to nodes and edges in the execution graph. A limited set of logic operations, such as conditionals, is supported out of the box. With a little effort and ingenuity, it is possible to construct any number of complex patterns, such as loops.&lt;/p&gt;
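
&lt;p&gt;To give a flavor of the definition language, here is a minimal hypothetical State Machine (all state names and the ARN are made up) with a single Lambda task and a conditional transition:&lt;/p&gt;

```json
{
  "Comment": "Illustrative order-processing workflow, not a real system",
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:validate-order",
      "Next": "IsOrderValid"
    },
    "IsOrderValid": {
      "Type": "Choice",
      "Choices": [
        {"Variable": "$.valid", "BooleanEquals": true, "Next": "Fulfill"}
      ],
      "Default": "Reject"
    },
    "Fulfill": {"Type": "Succeed"},
    "Reject": {"Type": "Fail", "Error": "InvalidOrder"}
  }
}
```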

&lt;p&gt;A step is the fundamental building block of a State Machine. If you're defining a State Machine based on an existing serverless architecture hosted in AWS, most of your steps will leverage one of the hundreds of existing AWS service integrations: start a Lambda function, put an item into SQS, execute an AWS Glue job, etc.&lt;/p&gt;

&lt;p&gt;Gluing together calls to multiple unrelated services with different APIs requires adapting argument values. The Step Functions definition language allows modifying a step's inputs/outputs directly in the step through so-called &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-intrinsic-functions.html" rel="noopener noreferrer"&gt;intrinsic functions&lt;/a&gt;. This is a great feature that helps avoid unnecessary intermediary Lambda function steps, although the unusual syntax can take some time to get used to.&lt;/p&gt;

&lt;p&gt;A less well-known but essential feature of the Step Functions service is its ability to &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html" rel="noopener noreferrer"&gt;integrate with external systems&lt;/a&gt;. The integration implementation is remarkably straightforward: the State Machine puts task descriptions into an SQS queue for the external system to pick up. Once that system has finished executing the task, it reports the result back to the State Machine by making an API call with a unique token obtained from the original SQS message. The State Machine execution blocks until the callback arrives.&lt;/p&gt;

&lt;h2&gt;Benefits of AWS Step Functions&lt;/h2&gt;

&lt;p&gt;When you define a State Machine, AWS gets information about the architecture of your process as a whole. In return, it provides a central place to describe service interactions and a consistent way to handle state changes.&lt;/p&gt;

&lt;p&gt;An execution graph is automatically generated based on task definitions and is guaranteed to stay up-to-date. It's also possible to see the execution state of every instance of the state machine, which is a great starting point for bug-hunting. Similarly, performance metrics let developers focus on optimizing the slowest steps without rolling out a custom metrics aggregation strategy.&lt;/p&gt;

&lt;p&gt;A unified exception-handling mechanism helps build more reliable alerting and incident-response systems. A whole run is marked as failed when an unhandled failure happens on any step, so there is a single event to listen and react to.&lt;/p&gt;

&lt;p&gt;Step Functions' architecture and APIs are versatile enough to enable diverse use cases. It can be used as an ETL pipeline scheduler (similar to Apache Airflow), as a backbone for a complex alerting system (sending out different notifications to different stakeholders depending on what went wrong with the underlying system), to coordinate automatic provisioning of an execution environment from a CI/CD server, and in many other capacities.&lt;/p&gt;

&lt;h2&gt;When to avoid Step Functions&lt;/h2&gt;

&lt;p&gt;Despite all of these benefits, Step Functions is not a silver bullet. Figuring out whether the service can be helpful on a particular project is a complex task in itself. Integrating Step Functions too early, or building an overly complex State Machine, can lead to counterproductive results.&lt;/p&gt;

&lt;p&gt;Step Functions, like any other complex cloud service, requires some upfront learning. The JSON-based State Machine definition may appear confusing to someone unfamiliar with it, and understanding the nuances of state transitions, error handling, and intrinsic functions may require additional training.&lt;/p&gt;

&lt;p&gt;State Machines can potentially execute arbitrarily complex workflows, but they excel in service orchestration and are designed for that use case; bringing business logic to this level mixes responsibility domains, makes testing more challenging, unnecessarily complicates the execution graph, and potentially makes business logic harder to understand by hiding parts of it from source code. Keep the business logic in the code.&lt;/p&gt;

&lt;p&gt;Consider which process you wish to represent as a State Machine and evaluate if this representation is suitable. Does it have a distinct beginning and end? Does it contain numerous relatively independent stages? Are these stages mostly synchronous? If an answer to any of the previous questions is "no," Step Functions is likely not the right tool for the job.&lt;/p&gt;

&lt;p&gt;Defining a workflow in a Step Function by design creates a cross-cutting concern. Ensure it does not disrupt the existing work organization or deployment dependencies, and that it does not cut across team boundaries. Sometimes, to avoid these issues, using Step Functions to manage just a tiny part of the whole process might be for the best.&lt;/p&gt;

&lt;p&gt;As always, the most essential part is to understand the tradeoffs that the service provides. And if you're looking for assistance with your AWS architecture challenges, feel free to reach out to me at &lt;a href="mailto:freelance@scorpil.com"&gt;freelance@scorpil.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is a re-post, original at: &lt;a href="https://scorpil.com/post/overview-of-aws-step-functions/" rel="noopener noreferrer"&gt;An Overview of AWS Step Functions&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>microservices</category>
      <category>aws</category>
    </item>
    <item>
      <title>Things Elixir's Phoenix Framework Does Right</title>
      <dc:creator>Scorpil</dc:creator>
      <pubDate>Mon, 26 Oct 2020 07:30:08 +0000</pubDate>
      <link>https://dev.to/scorpil/things-elixir-s-phoenix-framework-does-right-de7</link>
      <guid>https://dev.to/scorpil/things-elixir-s-phoenix-framework-does-right-de7</guid>
<description>&lt;p&gt;I had dabbled in Phoenix for a while, but never &lt;em&gt;really&lt;/em&gt; got my hands dirty with it until now. Apart from the whole framework being surprisingly well thought through, there are a few things that strike me as being done &lt;em&gt;exceptionally&lt;/em&gt; well in Phoenix compared to other modern web frameworks.&lt;/p&gt;

&lt;h3&gt;1. Striking a balance between flexibility and strictness&lt;/h3&gt;

&lt;p&gt;Modern web frameworks can be roughly divided into two camps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flexible "DIY" frameworks are little more than a set of utilities for the most common web-related tasks. Most Go frameworks are like this, as is ExpressJS. They enforce little to no rules for the structure of your applications and rely on the community to come up with extensions and best practices. As a result, they are very flexible; those with a large community have extensions to perform any task imaginable. On the flip side, apps built on such a foundation can, given poor governance, slowly evolve into an unsupportable mess of incompatible plugins and mismatched coding styles.&lt;/li&gt;
&lt;li&gt;Strict "batteries included" frameworks bring with them a complete set of tools to perform common web development tasks, as well as a set of conventions to go with it. They guide the developer into optimal code structure and typically strive to provide a single favored way of doing things. Of course, these kinds of frameworks are also extendable, but built-in tools often get embedded so deep into the project that they are almost irreplaceable. In this category, the most popular examples are Django and Ruby on Rails.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, most frameworks are not on an extreme end of the scale, but the distinction is there. It's worth noting that neither group is strictly better than the other -- each has its use cases.&lt;/p&gt;

&lt;p&gt;Phoenix Framework, in my mind, holds very close to the middle of this scale for these reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it builds upon Elixir's functional philosophy, so it has a very clear idea of how things &lt;em&gt;should&lt;/em&gt; work (a single request context passed around as the first argument to all components that participate in response generation, avoiding side effects where possible, MVC-inspired architecture, etc.)&lt;/li&gt;
&lt;li&gt;it does not hide its internal details in a "black box"; quite the opposite: it encourages you to understand its internal conventions and write your own code in the same fashion. Once you get comfortable using the framework, you can probably read its code without too much trouble.&lt;/li&gt;
&lt;li&gt;by default, Phoenix comes with a huge pack of tools and utilities (ORM, routing, test suite, HTML rendering, metrics dashboard (sic!)...), but in most cases there's a trivial way to swap them out or turn them off.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;2. Reactiveness&lt;/h3&gt;

&lt;p&gt;When an app requires bi-directional communication between client and server, you usually either integrate a 3rd-party library into the framework, which means writing a pile of glue code, or use a specialized framework like Tornado, which (caution, personal opinion here) is kind of an awkward choice for those parts of the web app that do not concern themselves with WebSockets.&lt;/p&gt;

&lt;p&gt;Phoenix is great for classical HTTP, but persistent communication is where it &lt;em&gt;really&lt;/em&gt; shines. The primitives it gives you with Channels, PubSub, and Presence are just enough to avoid boilerplate without sacrificing flexibility. The recent LiveView release is a whole new way of building dynamic apps. I wouldn’t go as far as to call it revolutionary, but it is definitely an intriguing attempt at bridging the gap between frontend and backend.&lt;/p&gt;

&lt;h3&gt;3. Performance&lt;/h3&gt;

&lt;p&gt;Phoenix’s performance has surprisingly little to do with the framework itself. It inherited its impressive concurrency characteristics from Elixir, which got them from Erlang, which got them thanks to the primitives of the BEAM virtual machine and the architectural patterns of OTP. The main principle at work is to schedule lightweight threads of execution to run each independent piece of work concurrently. You might have seen this approach in other languages (goroutines, Python's greenlets, etc.); that’s because it works great for organizing concurrent code execution without a performance hit and with minimal headache for the developer. BEAM, however, supports this concept on the VM level, which means it can be optimized at a much lower level.&lt;/p&gt;

&lt;p&gt;While lightweight processes help you perform well on IO-bound tasks, Elixir being a compiled language means that CPU-bound tasks won't become a bottleneck easily either, and will run on less computing power than most alternatives. While I can't be 100% sure that it will be faster than Go or Rust in terms of CPU usage for your application, I'm reasonably sure that it will be more than fast enough in the context of a typical web app.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Update:&lt;/strong&gt; correction based on discussion in here and on other platforms: Elixir is slower than Go/Rust on purely CPU-bound tasks, mainly because BEAM interrupts running threads for task scheduling. Also, Elixir/Erlang compiles to bytecode, not directly to the machine code (although BeamAsm, JIT compiler for Erlang's VM, has &lt;a href="https://github.com/erlang/otp/pull/2745"&gt;landed in master 4 days ago&lt;/a&gt;, so this should change in the next OTP release).&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;4. Failure tolerance and cluster-awareness&lt;/h3&gt;

&lt;p&gt;You might have heard Phoenix being called a "monolithic framework". This is true to some extent: Phoenix &lt;em&gt;does&lt;/em&gt; encourage you to put your frontend, backend, and background tasks in the same app. However, it also provides facilities to ensure that a failure in a single component of the app will not affect other independent components. To explain briefly, the app is divided into processes that communicate with each other via event-like messages. Each component is supervised; the supervisor will catch unhandled failures and restart the process in an attempt to fix them. It's somewhat reminiscent of a microservice architecture, just on a lower level.&lt;/p&gt;

&lt;p&gt;Unlike most frameworks, Phoenix understands that it will most likely run on more than one node. It provides a way to communicate over the network in exactly the same way you communicate between processes within your app.&lt;/p&gt;

&lt;p&gt;This does not mean that Phoenix is a bad choice for microservices; it just means that the framework itself can handle some of the same concerns microservices are designed to handle. A smaller app might benefit from that by avoiding some of the complexity of building out its own microservices architecture. Bigger apps, and those that are already built as microservices, can still incorporate Phoenix effectively.&lt;/p&gt;

&lt;h3&gt;Are there any drawbacks?&lt;/h3&gt;

&lt;p&gt;Elixir, and functional programming in general, is still a long way from the mainstream. If you don't know your way around closures, immutable data structures, and functional thinking, it will take a while before you feel comfortable with Phoenix. The good news is that functional programming is on an upward trend, and with applications growing more parallel and distributed, I don't think this trend will reverse any time soon. So the knowledge you acquire in the process will serve you well going forward, independent of what the future holds for Phoenix.&lt;/p&gt;

&lt;p&gt;The included tooling is great, but you might miss some 3rd-party SDKs when you need them. That's definitely something to consider when starting a project. To give you an example: &lt;a href="https://aws.amazon.com/getting-started/tools-sdks/"&gt;AWS&lt;/a&gt;, at the time of writing, does not provide an official Elixir client library.&lt;/p&gt;

&lt;p&gt;Phoenix Framework is a rock-solid, production-ready tool with a variety of use cases. It feels fresh and well thought through. I won’t be surprised if Phoenix’s popularity continues to grow to reach the level of "top-tier" frameworks in the next few years.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>elixir</category>
      <category>functional</category>
    </item>
  </channel>
</rss>
