<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Maxim Vynohradov</title>
    <description>The latest articles on DEV Community by Maxim Vynohradov (@max_vynohradov).</description>
    <link>https://dev.to/max_vynohradov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F388209%2F3c4391f6-6704-45c2-a897-9d15d29b6685.jpg</url>
      <title>DEV Community: Maxim Vynohradov</title>
      <link>https://dev.to/max_vynohradov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/max_vynohradov"/>
    <language>en</language>
    <item>
      <title>What problem do you want to solve when using Express.js inside AWS Lambda?</title>
      <dc:creator>Maxim Vynohradov</dc:creator>
      <pubDate>Fri, 29 Jan 2021 22:11:15 +0000</pubDate>
      <link>https://dev.to/max_vynohradov/what-problem-do-you-want-to-solve-when-use-express-js-inside-aws-lambda-5cff</link>
      <guid>https://dev.to/max_vynohradov/what-problem-do-you-want-to-solve-when-use-express-js-inside-aws-lambda-5cff</guid>
      <description>&lt;p&gt;Hi! Two months ago I have published article &lt;strong&gt;Six reasons why you shouldn’t run Express.js inside AWS Lambda&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;It looks like many people disagree with it. Maybe somebody here can offer other pros and cons of Express.js inside AWS Lambda? &lt;/p&gt;

&lt;p&gt;Links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;dev.to - &lt;a href="https://dev.to/max_vynohradov/six-reasons-why-you-shouldn-t-run-express-js-inside-aws-lambda-2o88"&gt;https://dev.to/max_vynohradov/six-reasons-why-you-shouldn-t-run-express-js-inside-aws-lambda-2o88&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;medium.com - &lt;a href="https://medium.com/dailyjs/six-reasons-why-you-shouldnt-run-express-js-inside-aws-lambda-102e3a50f355"&gt;https://medium.com/dailyjs/six-reasons-why-you-shouldnt-run-express-js-inside-aws-lambda-102e3a50f355&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;hackernoon.com - &lt;a href="https://hackernoon.com/6-reasons-why-you-should-not-connect-expressjs-and-aws-lambda-b71n31st"&gt;https://hackernoon.com/6-reasons-why-you-should-not-connect-expressjs-and-aws-lambda-b71n31st&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reddit.com - somebody created 3 topics - &lt;a href="https://www.reddit.com/search/?q=Six%20reasons%20why%20you%20shouldn%E2%80%99t%20run%20Express.js%20inside%20AWS%20Lambda"&gt;https://www.reddit.com/search/?q=Six%20reasons%20why%20you%20shouldn%E2%80%99t%20run%20Express.js%20inside%20AWS%20Lambda&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>discuss</category>
      <category>serverless</category>
      <category>node</category>
      <category>aws</category>
    </item>
    <item>
      <title>The right way to make advanced and efficient MongoDB pagination</title>
      <dc:creator>Maxim Vynohradov</dc:creator>
      <pubDate>Mon, 25 Jan 2021 08:57:41 +0000</pubDate>
      <link>https://dev.to/max_vynohradov/the-right-way-to-make-advanced-and-efficient-mongodb-pagination-16oa</link>
      <guid>https://dev.to/max_vynohradov/the-right-way-to-make-advanced-and-efficient-mongodb-pagination-16oa</guid>
      <description>&lt;p&gt;Onсe upon a time, we had a complex project enough (ride-sharing and taxi application) with stack Node.js and MongoDB. We have chosen this stack because it was preferable by the customer, good known by our team, and at the same time looks like good a suite for project tasks.&lt;/p&gt;

&lt;p&gt;Everything was great: the number of users grew past twelve thousand, and the number of active drivers approached three hundred. Within a year, the number of rides exceeded two million.&lt;/p&gt;

&lt;p&gt;At some point we needed to create an admin panel to control and monitor all processes (from the business point of view) in the main application. A huge share of the requirements was to have advanced lists of a variety of entities, with statistics bound to them. &lt;/p&gt;

&lt;p&gt;Because we use &lt;a href="https://www.npmjs.com/package/mongoose"&gt;mongoose&lt;/a&gt; as our ODM, we first took a look at its plugins. The most popular pagination-related ones are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/mongoose-paginate"&gt;&lt;strong&gt;mongoose-paginate&lt;/strong&gt;&lt;/a&gt;: a pagination plugin for Mongoose (works with Node.js &amp;gt;= 4.0 and Mongoose &amp;gt;= 4.0);&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/mongoose-paginate-v2"&gt;&lt;strong&gt;mongoose-paginate-v2&lt;/strong&gt;&lt;/a&gt;: a cursor-based custom pagination library for Mongoose with customizable labels;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/mongoose-aggregate-paginate"&gt;&lt;strong&gt;mongoose-aggregate-paginate&lt;/strong&gt;&lt;/a&gt;: a Mongoose plugin that makes it easy to add pagination to aggregates;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/mongoose-aggregate-paginate-v2"&gt;&lt;strong&gt;mongoose-aggregate-paginate-v2&lt;/strong&gt;&lt;/a&gt;: a cursor-based custom aggregate pagination library for Mongoose with customizable labels.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another requirement was the ability to choose a specific page on demand, so cursor-based “&lt;em&gt;previous/next&lt;/em&gt;” pagination was ruled out immediately; that excludes the &lt;em&gt;mongoose-paginate-v2&lt;/em&gt; and &lt;em&gt;mongoose-aggregate-paginate-v2&lt;/em&gt; libraries.&lt;/p&gt;

&lt;p&gt;The oldest, and probably the simplest to use, is &lt;em&gt;mongoose-paginate&lt;/em&gt;: it relies on simple find queries with limit, sort, and skip operations. I guess it’s a good option for simple pagination: just install the plugin, add a few lines of code to your endpoint, and the work is done. It can even use mongoose’s “&lt;em&gt;populate&lt;/em&gt;”, something that emulates joins from the SQL world; technically it just makes additional queries to the database, which is probably not what you want. Moreover, once your query gets even a little more complicated and involves any data transformation, the plugin becomes unusable. I know only one way to use it normally in such cases: first create a &lt;a href="https://docs.mongodb.com/manual/core/views/"&gt;MongoDB View&lt;/a&gt; (technically a pre-saved query that MongoDB represents as a read-only collection), and then run mongoose-paginate over this view. Not bad: the complicated query is hidden inside the view. But we have a better idea of how to solve this problem.&lt;/p&gt;
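
&lt;p&gt;Stripped of details, what such a plugin does can be sketched in a few lines. This is a hypothetical simplification (the &lt;em&gt;Model&lt;/em&gt; argument stands in for any mongoose model), not the plugin’s real code:&lt;/p&gt;

```javascript
// The essence of skip/limit pagination that plugins like
// mongoose-paginate perform: one find query with sort/skip/limit,
// plus a separate count query to compute the number of pages.
const paginate = async (Model, query, { page = 1, limit = 10, sort = {} } = {}) => {
  const skip = (page - 1) * limit;
  const docs = await Model.find(query).sort(sort).skip(skip).limit(limit);
  const total = await Model.countDocuments(query);
  return { docs, total, page, pages: Math.ceil(total / limit) };
};
```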

&lt;p&gt;&lt;strong&gt;MongoDB Aggregation Framework is here!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I guess it was a really big day for the MongoDB community when the Aggregation Framework was released; it can express almost any query you can imagine. So we thought about putting &lt;em&gt;mongoose-aggregate-paginate&lt;/em&gt; to use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But the next two things disappointed us:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is this plugin even for?&lt;/strong&gt; I mean: what task does it help solve that cannot be solved without it, with the same effort? It looks like just one more dependency in your project, because it doesn’t bring any benefit and doesn’t even save you time…&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The internal codebase and general approach&lt;/strong&gt; to making queries. This library makes &lt;strong&gt;TWO&lt;/strong&gt; calls to the database and waits for both responses via &lt;em&gt;Promise.all&lt;/em&gt;: the first fetches the query result, and the second calculates the total count of records the query returns, without the &lt;strong&gt;$skip&lt;/strong&gt; and &lt;strong&gt;$limit&lt;/strong&gt; stages. It needs this count to calculate the total number of pages.&lt;/p&gt;

&lt;p&gt;How can we avoid the additional query to the database? The worst part is that the whole aggregation pipeline has to run twice, which can be quite costly in terms of memory and CPU usage. Moreover, if the collection is huge and documents tend to be a few megabytes each, it can also hit disk I/O, which is a big problem too.&lt;/p&gt;

&lt;p&gt;The good news: the Aggregation Framework has a specific stage in its arsenal that can solve this problem. It’s &lt;strong&gt;$facet&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Processes multiple &lt;a href="https://docs.mongodb.com/manual/core/aggregation-pipeline/#id1"&gt;aggregation pipelines&lt;/a&gt; within a single stage on the same set of input documents. Each sub-pipeline has its own field in the output document where its results are stored as an array of documents.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://docs.mongodb.com/manual/reference/operator/aggregation/facet/"&gt;MongoDB documentation about $facet stage&lt;/a&gt; . &lt;/p&gt;

&lt;p&gt;The aggregation pipeline for pagination will have the following shape:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="nl"&gt;$facet&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;outputField1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;stage1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;stage2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;outputField2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;stage1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;stage2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;...&lt;/span&gt;

   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pagination pipelines can also be improved by customizing them for specific cases. Some tips are listed below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run all operations that don’t directly affect the final pagination result after all possible filters (&lt;em&gt;$match&lt;/em&gt; stages). Stages like &lt;em&gt;$project&lt;/em&gt; or &lt;em&gt;$lookup&lt;/em&gt; don’t change the number or order of result documents, so try to cut off as many documents as you can as early as possible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Try to make your models as self-sufficient as possible to avoid additional &lt;em&gt;$lookup&lt;/em&gt; stages. It’s normal to duplicate some data or precompute fields.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you have a really huge pipeline that processes a lot of data, your query will probably &lt;a href="https://docs.mongodb.com/manual/core/aggregation-pipeline-limits/#memory-restrictions"&gt;use more than 100 MB of memory&lt;/a&gt;. In this case, you need the &lt;strong&gt;&lt;em&gt;allowDiskUse&lt;/em&gt;&lt;/strong&gt; flag.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow the aggregation pipeline performance &lt;a href="https://docs.mongodb.com/manual/core/aggregation-pipeline-optimization/"&gt;optimization guide&lt;/a&gt;; its advice helps make your queries more efficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And technically, you can build queries dynamically on the application side: depending on conditions, you can add, remove, or modify specific stages. This can speed up your queries and, moreover, make your code more eloquent.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
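
&lt;p&gt;The last tip can be sketched as a small builder function; the stage and field names here are illustrative, not taken from the real project:&lt;/p&gt;

```javascript
// Assemble the aggregation pipeline dynamically: each stage is added
// only when the corresponding condition holds, so simple requests get
// shorter (and faster) pipelines.
const buildPipeline = ({ filter, sort, withDriver } = {}) => {
  const pipeline = [];
  if (filter) pipeline.push({ $match: filter });
  if (sort) pipeline.push({ $sort: sort });
  if (withDriver) {
    pipeline.push({
      $lookup: {
        from: 'drivers',
        localField: 'driverId',
        foreignField: '_id',
        as: 'driver',
      },
    });
  }
  return pipeline;
};
```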

&lt;p&gt;Because of an NDA, I cannot show you the real database schema and real queries. But let me show you a small example of such pagination. &lt;/p&gt;

&lt;p&gt;Imagine that you have two collections: &lt;strong&gt;Statistic&lt;/strong&gt; and &lt;strong&gt;Drivers&lt;/strong&gt;. The &lt;em&gt;Drivers&lt;/em&gt; collection is fairly static in terms of the types and number of fields across documents. But &lt;em&gt;Statistic&lt;/em&gt; is polymorphic and can change over time as business requirements are updated. Also, some drivers can accumulate large statistic documents and long histories in general, so you cannot make Statistic a subdocument of Driver.&lt;/p&gt;

&lt;p&gt;So the code and MongoDB query will have the following shape:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ridesInfoPaginationPipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="nx"&gt;skip&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;limit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sort&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;$match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;active&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;$sort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;$lookup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;statistic&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;localField&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;_id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;foreignField&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;driverId&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;as&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;driver&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;$unwind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$driver&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;preserveNullAndEmptyArrays&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;$project&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;$ifNull&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
          &lt;span class="na"&gt;$concat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$driver.firstName&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$driver.lastName&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Technical&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;entityId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;$facet&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;total&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="na"&gt;$count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;createdAt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;}],&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="na"&gt;$addFields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$_id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;$unwind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$total&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;$project&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;$slice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$data&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;$ifNull&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$total.createdAt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;}]&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;total&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$total.createdAt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;$literal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;limit&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;$literal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;skip&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;pages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;$ceil&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;$divide&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$total.createdAt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;



&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;executePagination&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;Statistic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aggregate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ridesInfoPaginationPipeline&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, using the &lt;em&gt;Aggregation Framework&lt;/em&gt; and the &lt;em&gt;$facet&lt;/em&gt; stage, we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;make data transformation and complex queries;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;fetch data from multiple collections;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;get pagination metadata (total, page, pages) in the same query, without executing an additional one. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
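
&lt;p&gt;For clarity, the metadata produced by the final &lt;em&gt;$project&lt;/em&gt; stage is plain arithmetic over &lt;em&gt;total&lt;/em&gt;, &lt;em&gt;skip&lt;/em&gt;, and &lt;em&gt;limit&lt;/em&gt;; computed on the application side, the same numbers would look like this (a sketch):&lt;/p&gt;

```javascript
// Application-side mirror of the metadata computed in the $project
// stage: page is derived from skip and limit, pages from the total.
const paginationMeta = (total, skip, limit) => ({
  total,
  limit,
  page: Math.floor(skip / limit) + 1,
  pages: Math.ceil(total / limit),
});
```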

&lt;p&gt;Regarding the &lt;strong&gt;main drawbacks&lt;/strong&gt; of this approach, I’d say only one is major: &lt;strong&gt;&lt;em&gt;higher complexity of development and debugging, along with a higher entry threshold&lt;/em&gt;&lt;/strong&gt;. It includes performance troubleshooting, knowledge of a variety of stages, and data-modeling approaches.&lt;/p&gt;




&lt;p&gt;So, pagination based on the MongoDB Aggregation Framework doesn’t pretend to be a silver bullet. But after many attempts and pitfalls, we found that this solution covers all our cases, with no side effects and no tight coupling to a specific library.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>mongoose</category>
      <category>node</category>
      <category>database</category>
    </item>
    <item>
      <title>Six reasons why you shouldn't run Express.js inside AWS Lambda</title>
      <dc:creator>Maxim Vynohradov</dc:creator>
      <pubDate>Mon, 07 Dec 2020 12:07:16 +0000</pubDate>
      <link>https://dev.to/max_vynohradov/six-reasons-why-you-shouldn-t-run-express-js-inside-aws-lambda-2o88</link>
      <guid>https://dev.to/max_vynohradov/six-reasons-why-you-shouldn-t-run-express-js-inside-aws-lambda-2o88</guid>
      <description>&lt;p&gt;Some facts why usage Express.js inside AWS Lambda is pitiful design anti-pattern and how to give it up without pain.&lt;/p&gt;




&lt;p&gt;In the last few years, the popularity of npm packages that let you run Express.js inside an AWS Lambda handler has grown rapidly. These packages provide functionality that lets you run Express.js middleware and controllers (with some limitations) instead of a plain AWS Lambda handler.&lt;br&gt;
Some examples of such libraries:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/aws-serverless-express"&gt;aws-serverless-express&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Run serverless applications and REST APIs using your existing Node.js application framework, on top of AWS Lambda &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/serverless-http"&gt;serverless-http&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But why do developers decide to do so? There are just a few foremost reasons that I’ve usually met in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No interest in learning new approaches to writing API handlers: for various reasons, they want a serverless architecture but have no time to adapt and rewrite an existing Express.js-based solution into Lambda handlers.&lt;/li&gt;
&lt;li&gt;A wish to use the existing Express.js functionality and ecosystem, mostly the huge number of third-party middleware packages.&lt;/li&gt;
&lt;li&gt;An attempt to reduce costs by using AWS Lambda instead of a dedicated server (EC2, AWS ECS, AWS EKS, etc.).&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;So, below is a list of reasons why using Express.js inside AWS Lambda is redundant in most cases, and why you will probably get many drawbacks from this approach.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Increasing node_modules size and cold starts&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;A simple point: the bigger the node_modules in your artifact, the longer your AWS Lambda cold starts. With no exceptions. Raw Express.js is about 541.1 KB, but you also need additional dependencies, mostly middleware, which can grow your node_modules several times over.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Additional operational time&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;When you use standalone Express.js on a server (the standard way), each HTTP request is text that the server parses into a well-known request object. Lambdas that people try to use with Express.js inside usually run behind API Gateway or an AWS Application Load Balancer, and the data coming from these event sources is already parsed by API GW or ALB! Yes, the formats differ, but still.&lt;br&gt;
When you use Express.js inside AWS Lambda, your “system” does the following with incoming HTTP data:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS API GW or AWS ALB parses the HTTP request and converts it into an event payload.&lt;/li&gt;
&lt;li&gt;The library that wraps the Express.js server maps the Lambda event to a server request. &lt;/li&gt;
&lt;li&gt;Express.js converts this once more into its own request object.&lt;/li&gt;
&lt;li&gt;The same happens with the response: the library that wraps Express.js converts the HTTP response into an AWS Lambda response object.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s a lot of supplementary conversions. Sometimes it looks like simply wasted processor time.&lt;/p&gt;
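
&lt;p&gt;To illustrate step 2, here is a rough sketch of the kind of mapping such wrapper libraries perform internally. It is a simplification, not the actual code of aws-serverless-express or serverless-http:&lt;/p&gt;

```javascript
// A rough sketch of step 2: the wrapper re-assembles an Express-style
// request object from the event that API Gateway already parsed.
// Field names follow the API GW proxy event format; error handling
// is omitted for brevity.
const eventToRequest = (event) => ({
  method: event.httpMethod,
  url: event.path,
  headers: event.headers || {},
  query: event.queryStringParameters || {},
  body: event.body ? JSON.parse(event.body) : undefined,
});
```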

&lt;h3&gt;
  
  
  &lt;em&gt;AWS Lambda has different limitations that can be unexpected for your Express.js application&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;First of all, Lambdas are stateless: each AWS Lambda instance is an AWS Firecracker container that shuts down some time after inactivity, so you cannot simply persist data in memory and share it across all Lambda instances. The same goes for sessions: to use them with AWS Lambda, you need additional storage, for instance a Redis instance hosted in AWS ElastiCache.&lt;/p&gt;
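
&lt;p&gt;A quick sketch of the warm-container behavior (illustrative code, not from a real project): module-scope state survives between invocations of the same instance, but is never shared across instances and disappears when the container is recycled.&lt;/p&gt;

```javascript
// Module scope: runs once per Lambda container. While the container
// stays warm, this state survives between invocations; it is never
// shared with other containers and vanishes when this one is recycled.
let invocationCount = 0;

// Each call on the same warm container sees the counter grow, which
// is why in-memory state appears to "work" in testing and then
// silently breaks once several containers run in production.
const handler = async () => {
  invocationCount += 1;
  return { statusCode: 200, count: invocationCount };
};
```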

&lt;p&gt;A Lambda container can live through several handler executions (warm Lambdas), but either way it quits unexpectedly, and this can break some tools or make their behavior unpredictable. The most striking case involves buffering loggers and error trackers like Sentry. Usually they don’t send every log entry immediately; they buffer entries first and then send several items at once to be more efficient. But when your Lambda’s container quits, from time to time these buffers don’t get flushed to storage or third-party services in time. Sure, we can disable buffering, but some of these services require different SDKs that are specific to AWS Lambda, and those cannot simply be reused as Express.js middleware: you would have to wrap them up as your own middleware, which is double work.&lt;/p&gt;

&lt;p&gt;Also, you cannot use WebSockets (or socket.io) inside the Express.js application, for the same reason: the limited lifetime of the Lambda execution container. AWS API GW does support WebSockets, but they are implemented differently, and you cannot connect socket.io to them.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Some things you are used to doing in an Express.js app are different in AWS Lambda and have more suitable alternatives&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Despite all its disadvantages, the embedded middleware pattern in Express.js is probably one of the most popular things in the Node.js world. However, there is no need to use Express.js just for this, because at least one middleware library suits AWS Lambda better:&lt;br&gt;
&lt;a href="https://www.npmjs.com/package/@middy/core"&gt;@middy/core&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Core component of the middy framework, the stylish Node.js middleware engine for AWS Lambda.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also, it implements an onion-like middleware pattern, which is much more flexible than what Express.js can provide.&lt;/p&gt;
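&lt;p&gt;To illustrate what "onion-like" means, here is a simplified middleware engine in the same spirit (a sketch for illustration only, not middy's actual implementation): before-steps run on the way in, after-steps run in reverse order on the way out.&lt;/p&gt;

```javascript
// Sketch of an onion-style middleware engine (illustration only).
function middify(handler) {
  const before = [];
  const after = [];
  const wrapped = async (event, context) => {
    const request = { event, context, response: null };
    for (const fn of before) await fn(request);               // outside-in
    request.response = await handler(request.event, request.context);
    for (const fn of [...after].reverse()) await fn(request); // inside-out
    return request.response;
  };
  wrapped.use = (middleware) => {
    if (middleware.before) before.push(middleware.before);
    if (middleware.after) after.push(middleware.after);
    return wrapped; // allows chaining
  };
  return wrapped;
}

// Hypothetical usage: parse a JSON body on the way in,
// attach a header to the response on the way out.
const handler = middify(async (event) => ({ statusCode: 200, body: event.body.name }))
  .use({ before: (req) => { req.event.body = JSON.parse(req.event.body); } })
  .use({ after: (req) => { req.response.headers = { 'x-served-by': 'lambda' }; } });
```

&lt;p&gt;Because each step can act both before and after the handler, concerns like parsing, validation and response shaping stay out of the business logic.&lt;/p&gt;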

&lt;h3&gt;
  
  
  &lt;em&gt;Best practices for Express.js and AWS Lambda are different&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;One difference is easy to spot — the approaches to security hardening differ. While the Express.js best practice guide proposes the Helmet.js library, that isn’t applicable to AWS Lambda. Instead, AWS proposes its AWS WAF service, which promises to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Protect your web applications from common web exploits&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Lost benefits from individual packaging of lambdas&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;When you write classic AWS Lambda handlers, you can usually package each Lambda artifact separately to reduce its size. But when you use Express.js, you cannot do this — all lambdas require the same dependencies. Technically you can, but every artifact will have the same size, which negates the advantage. Moreover, in this case serverless-webpack-plugin cannot optimize imports correctly, because technically each Lambda has the same dependency tree.&lt;/p&gt;
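&lt;p&gt;For comparison, with classic per-function handlers the Serverless Framework can build each artifact separately. A hypothetical serverless.yml fragment (service, function and path names are made up) could look like this:&lt;/p&gt;

```yaml
# Hypothetical fragment: each function is packaged on its own,
# so an artifact carries only the files it actually needs.
service: my-api
package:
  individually: true
functions:
  getUser:
    handler: src/users/get.handler
    package:
      patterns:
        - src/users/**
  createOrder:
    handler: src/orders/create.handler
    package:
      patterns:
        - src/orders/**
```

&lt;p&gt;With a single Express.js app behind every route, this split brings nothing, because each artifact still has to bundle the whole application.&lt;/p&gt;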




&lt;p&gt;Despite all of the above, I believe there are some cases when the usage of Express.js inside AWS Lambda is valid and justified:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Pet projects&lt;/em&gt; — thanks to the generous AWS Free Tier, you can probably run them for free.&lt;/li&gt;
&lt;li&gt;Your service is &lt;em&gt;not mission-critical&lt;/em&gt;, and you are okay with all the issues described above — then you can use it without any doubts (but &lt;em&gt;don’t forget about technical debt&lt;/em&gt;).&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I hope this information is useful and that you will keep it in mind the next time you decide to use Express.js inside AWS Lambda.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>webdev</category>
      <category>node</category>
    </item>
    <item>
      <title>Tools to Manage NPM Dependency in Your Project as A Professional</title>
      <dc:creator>Maxim Vynohradov</dc:creator>
      <pubDate>Mon, 16 Nov 2020 23:16:34 +0000</pubDate>
      <link>https://dev.to/max_vynohradov/tools-to-manage-npm-dependency-in-your-project-as-a-professional-47ek</link>
      <guid>https://dev.to/max_vynohradov/tools-to-manage-npm-dependency-in-your-project-as-a-professional-47ek</guid>
      <description>&lt;p&gt;Why do we talk about project quality and technical debt so much? Because this directly or indirectly affects the speed of development, the complexity of support, the time to implement new functionality, and the possibility of extending the current one.&lt;/p&gt;

&lt;p&gt;There is a great number of aspects that affect a project’s quality. Some of them are difficult to understand, hard to check, and require manual checks from highly experienced developers or QA engineers. Others are plain and simple. They can be checked and fixed automatically. Despite this fact, they represent a weighty part of the whole project’s quality.&lt;/p&gt;

&lt;p&gt;In this article, you will find NPM packages that can automatically check some critical aspects of your project, such as NPM dependencies, their licenses, and security vulnerabilities.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Find Missed or Unused Dependencies&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Depcheck is a useful tiny library that checks which dependencies are unused and which are missing from package.json despite being used in your code base.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/depcheck"&gt;depcheck - Check for unused and missing dependencies in your project&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s highly recommended to use it locally (for instance, on pre-commit hooks) or in remote CI to avoid the following issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Redundant dependencies increase build/bundle size, which leads to these consequences: Docker images become bigger, AWS Lambda handlers have longer cold starts, and an artifact can exceed Lambda size limits.&lt;/li&gt;
&lt;li&gt;Missing dependencies can break your application in totally unexpected ways in production. Moreover, if they are dev dependencies, they can crash your CI/CD pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Installation and usage&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g depcheck
# or
yarn global add depcheck
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Example of usage&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// usage as npm script
"dependencies:check": "yarn run depcheck",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By running this command, you can see a list of problematic dependencies:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s9upW3a7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qs8h58ltpq3azsz56b10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s9upW3a7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qs8h58ltpq3azsz56b10.png" alt="yarn run depcheck"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
&lt;strong&gt;Check for Vulnerabilities in Your Dependencies&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://docs.npmjs.com/cli/v6/commands/npm-audit"&gt;npm-audit&lt;/a&gt;, &lt;a href="https://classic.yarnpkg.com/en/docs/cli/audit/"&gt;yarn audit&lt;/a&gt;, and &lt;a href="https://www.npmjs.com/package/improved-yarn-audit"&gt;improved-yarn-audit&lt;/a&gt; are tools that can find vulnerabilities in dependencies. Moreover, npm audit fix can automatically update packages to resolve issues. Both npm audit and yarn audit come bundled with their package managers, but I prefer improved-yarn-audit. It’s a wrapper around yarn audit that addresses several of its shortcomings — especially relevant in CI pipelines (from the docs):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No way to ignore advisories&lt;/li&gt;
&lt;li&gt;Unable to filter out low severity issues&lt;/li&gt;
&lt;li&gt;Ongoing network issues with NPM registry cause false positives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/improved-yarn-audit"&gt;improved-yarn-audit - This project aims to improve upon the existing Yarn package manager audit functionality&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Installation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g improved-yarn-audit
# or
yarn global add improved-yarn-audit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Example of usage&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"dependencies:audit": "yarn run improved-yarn-audit --min-severity moderate",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below, you can see the results of using this command in a real project codebase. This tool searches for vulnerabilities in transitive dependencies too:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r2YPT-wn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q46jvri2g1u38y3sgjtx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r2YPT-wn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q46jvri2g1u38y3sgjtx.png" alt="yarn run improved-yarn-audit — min-severity moderate"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Check Licenses of Dependencies&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;For real production projects, checking dependency licenses is critical, because the way you use a dependency can be prohibited by the package’s license. To avoid this, you should continuously check the licenses of all dependencies that you use in a project. And if your project is a startup, using dependencies appropriately, according to their licenses, is mandatory to get investors to approve your product. license-checker is the best way to do this!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/license-checker"&gt;license-checker - Ever needed to see all the license info for a module and its dependencies?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Installation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g license-checker
# or
yarn global add license-checker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Example of usage&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"licenses:check": "yarn run license-checker --exclude 'MIT, MIT OR X11, BSD, ISC'",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking licenses of dependencies&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MKfuk5CQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wq9xlg8sn3w66846iq2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MKfuk5CQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wq9xlg8sn3w66846iq2m.png" alt="yarn run license-checker — exclude ‘MIT, MIT OR X11, BSD, ISC"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But for usage inside CI/CD, I prefer the following variant because it’s much shorter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"licenses:check:ci": "yarn run license-checker --summary",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MK7nj6Vs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9xn8rg3rxonon4rk1xfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MK7nj6Vs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9xn8rg3rxonon4rk1xfb.png" alt="yarn run license-checker — summary"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I hope this article helped you to address or avoid problems with NPM packages.&lt;/em&gt; &lt;br&gt;
&lt;em&gt;Thanks for reading!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>tutorial</category>
      <category>npm</category>
    </item>
    <item>
      <title>What We Have Learned While Using AWS Lambda in Our Production Cycles for More than One Year</title>
      <dc:creator>Maxim Vynohradov</dc:creator>
      <pubDate>Tue, 15 Sep 2020 21:38:52 +0000</pubDate>
      <link>https://dev.to/brocoders/what-we-have-learned-while-using-aws-lambda-in-our-production-cycles-for-more-than-one-year-1e0i</link>
      <guid>https://dev.to/brocoders/what-we-have-learned-while-using-aws-lambda-in-our-production-cycles-for-more-than-one-year-1e0i</guid>
      <description>&lt;p&gt;Over the last few years, serverless approaches have gained decent traction in the web app designing, developing and implementing sectors. In the early days, many engineers treated serverless just like another hype. Still, almost all of those who tried to use it had to admit that the technology turned out to be as good as traditional and standalone virtual machines for hosting web-applications.&lt;/p&gt;

&lt;p&gt;To date, we can see that startups tend to utilize serverless technology stack as a part of their systems or even as their primary solution for building products in different domains.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XpHdUTiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/04hdn4e4n41ncge3sgt7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XpHdUTiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/04hdn4e4n41ncge3sgt7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;First things first&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Our team decided to test the technology out while working on a product during the last year — an &lt;em&gt;on-demand bike taxi app&lt;/em&gt; that uses a serverless approach for one of its components. In fact, it is quite similar to an Uber app.&lt;/p&gt;

&lt;p&gt;Technically, it was mostly a REST API and cron-tasks, anchored by the following technologies (all of these are provided by the Amazon Web Services):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway as a platform for API management.&lt;/li&gt;
&lt;li&gt;CloudWatch Rules for scheduling cron-tasks.&lt;/li&gt;
&lt;li&gt;Lambdas as computing units.&lt;/li&gt;
&lt;li&gt;S3 buckets to store static files.&lt;/li&gt;
&lt;li&gt;CloudWatch Logs with Logs Insights for log management.&lt;/li&gt;
&lt;li&gt;Tools for continuous integration and deployment of our application: AWS CodeBuild, AWS CodePipeline and AWS CodeDeploy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Initially, we used Node.js version 10 to write the code (a few months ago it was upgraded to version 12 without any issues). All the infrastructure (I mean all the AWS resource descriptions) is created and managed by the open-source Serverless Framework.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This guide is not about AWS, FaaS (Function as a Service) or the Serverless framework, since there is a lot of such content on the Internet. Here you will only find the things that our team faced during the development and after-launch stages. This info might be helpful if you have doubts about which technology to adopt for your next project.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Serverless world — the remarkable benefits of using AWS Lambdas&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let's start with the good parts! No matter what any hater says, the Serverless world provides a bunch of excellent features that you cannot achieve in any other way under equal conditions.&lt;/p&gt;

&lt;p&gt;When we started this project mostly from scratch, it didn't require any severe capacity in measurements of Memory, CPU or network, to name a few. The same statement can be made not only about the development phase but also about the Staging, QA and Pre-Prod environments.&lt;/p&gt;

&lt;p&gt;Traditionally, that would mean four servers — virtual machines, Docker containers, or any other platform for hosting servers. For sure, it can be quite expensive to keep and maintain servers, even small and low-power ones. And even switching them off at night and on weekends is not an option.&lt;/p&gt;

&lt;p&gt;However, the Serverless world has an alternative solution – the so-called "Pay as you go" payment approach. It means that you pay only for the computing resources and network load that you use, even though the entire infrastructure is deployed and accessible at any moment.&lt;/p&gt;

&lt;p&gt;In practice, it means that we were not burdened with any cost savings during the project's development. Moreover, while we remained within the AWS Free Tier limits, the actual cloud usage was charge-free until we reached the production stage.&lt;/p&gt;

&lt;p&gt;So here are some advantages of AWS Lambdas worth mentioning here.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Outstanding scalability&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The app was designed for the city with more than 13 million people. So it’s no wonder that the number of users started snowballing right after the first release. By "snowballing", I mean thousands of new users per hour in the first few weeks, hence a bunch of rides and ride requests as well.&lt;/p&gt;

&lt;p&gt;That's where we felt all the benefits of AWS Lambdas' &lt;em&gt;incredible scalability and zero management&lt;/em&gt; of the scaling process. You know that feeling when you see a rapidly growing number of requests on the chart (provided automatically by AWS)? And the greatest part is that you don't even have to worry about it, since AWS Lambdas scale automatically. All you have to do is set a threshold for concurrent invocations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;A standard set of monitoring and logging tools&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Aside from the automatic scalability feature, AWS provides a basic set of tools for Lambdas. So, you don't have to waste your precious time dealing with the annoying configuration of basic monitoring metrics, such as Memory Usage, Execution time or Errors Count.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rH8XNiZS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a4mxhyrcf7miz3dw4bqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rH8XNiZS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a4mxhyrcf7miz3dw4bqt.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Moreover, you can customize your own dashboards in the &lt;a href="https://aws.amazon.com/cloudwatch/"&gt;CloudWatch service&lt;/a&gt; that would help you track performance issues and execution errors throughout the entire serverless application.&lt;/p&gt;

&lt;p&gt;For sure, you won't get as many customizable graphic options as Grafana or Kibana can provide, but at the same time, the AWS CloudWatch metrics, alarms and dashboards are way cheaper. Besides, you can tune these without much preparation, and last but not least — the cloud provider takes responsibility for the efficiency of the monitoring tools described above.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Isolated environment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Well, let's say you managed to customize a dashboard without any problems. But then you realize that a Lambda's execution takes more time than it should, as if it were performing some sophisticated calculation. Luckily, that's not a problem for AWS Lambda, since every function handler runs in an isolated environment with its own Memory and CPU configuration.&lt;/p&gt;

&lt;p&gt;In fact, each instance of a Lambda is a separate AWS Firecracker container that spawns on a trigger (in the case of a REST API, the trigger is an HTTP request). That said, all you have to do is increase the Memory or CPU allocation for that specific Lambda, with no need for global updates as would be required on a classic server.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Flexible errors management&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Another outstanding benefit that you can enjoy while using AWS Lambda is &lt;em&gt;decent error handling&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QvDAz_sQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9jm3b71tx11q9xkdwmlo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QvDAz_sQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9jm3b71tx11q9xkdwmlo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As said above, each Lambda has an isolated environment, so even if one of your Lambda instances fails for any reason, all other Lambdas will continue to operate normally. It's fantastic when you have just one or two errors from a few hundred possible AWS Lambda invocations, isn't it?&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Automated retry attempts&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Furthermore, retry attempts are another out-of-the-box feature that AWS provides. Should a Lambda fail for any reason, it is automatically re-invoked with the same event payload during a pre-configured period. I must say, it’s quite a useful feature if your Lambda is invoked on a schedule and tries to send a request to a third-party resource that may be unavailable.&lt;/p&gt;

&lt;p&gt;Finally, AWS Lambda supports the &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html"&gt;Dead letter queue concept&lt;/a&gt; that means you can acquire relevant notifications and tracking information about failed Lambdas.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The AWS Lambda disadvantages — a few pain points to learn from&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;On the flip side of the coin, AWS Lambda and the serverless concept are not entirely perfect yet and have enough unresolved problems and pitfalls that make the development and support processes a little bit harder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XU5kGpEG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1g77ea1i1m8g9hudusf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XU5kGpEG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1g77ea1i1m8g9hudusf5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Duration limits&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For our project, it was all about limits. For example, we ran into the execution duration limit — a Lambda can run for 15 minutes at most. Moreover, if it is triggered by an API Gateway request, the duration must be no more than 30 seconds.&lt;/p&gt;

&lt;p&gt;Perhaps we could accept such limits for the API, but the 15-minute limit for the cron-tasks was way too tight to execute a particular scope of tasks on time. That said, since compute-intensive tasks couldn't be run as Lambdas, we had to create a separate server specifically for long-running tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;CloudFormation deployment limitations&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Another significant issue we faced was Lambda deployment via CloudFormation (the AWS service for infrastructure and deployment). At the very beginning of the project, everything was fine. Still, when the number of Lambdas mushroomed to more than 30, the CloudFormation stack started failing with different errors like "Number of resources exceeded" and "Number of outputs exceeded".&lt;/p&gt;

&lt;p&gt;Thankfully, the serverless framework and its plugins helped us to tackle this issue early on. There are also a few other ways to solve such kinds of problems, but that'll be a topic for another article.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Failure to expand monitoring and debugging toolset&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Even though AWS provides some basic level of monitoring and debugging, it's still &lt;em&gt;impossible to extend this part and make some custom metrics&lt;/em&gt; that could be useful for particular cases and projects. This time around, we had to use third-party services that you usually need to integrate as libraries into your code to be able to monitor some specific stuff.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vYXicLl4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z70o367hyogjtqnvh9j9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vYXicLl4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z70o367hyogjtqnvh9j9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cold start-related delays&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As mentioned above, each Lambda instance is in fact a tiny Firecracker container with some basic runtime environment, libraries and your code. It is created temporarily to process an event evoked by a trigger. It's a well-known fact that creating a container, or spinning up an executable environment and code, takes some operational time, called a cold start.&lt;/p&gt;

&lt;p&gt;It can take anywhere from 100 milliseconds to a few minutes. Moreover, if you keep your Lambdas inside a VPC (Virtual Private Cloud), cold starts take more time, because the system has to create additional resources for each Lambda, called Elastic Network Interfaces.&lt;/p&gt;

&lt;p&gt;This, in turn, results in annoying delays: end-users have to wait for the app to respond, which is definitely not good. The workaround here is to ping your Lambda every 5 minutes to keep the containers "warm". The AWS system is smart enough not to kill Lambda containers immediately, since it is built on the assumption that triggers will keep spawning new events.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Database connection pitfalls&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In view of the above, it's problematic to manage a database connection in such a system. You cannot just open a connection pool to your MongoDB or MySQL server at the application startup phase and reuse it during the entire lifecycle.&lt;/p&gt;

&lt;p&gt;So there are at least two ways to manage connections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can open a connection for each Lambda invocation and close it after your logic completes.&lt;/li&gt;
&lt;li&gt;You can try to reuse a connection and keep it in the Lambda's memory, as a reference in code or a field in the context — this keeps the connection alive within the same Lambda container until it is closed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, both have their own limitations. In the first case, we end up with additional delays, since we have to open a connection for each Lambda call. In the second case, we can't be sure how long the Lambda will keep the connection, and consequently we can't handle a connection shutdown properly.&lt;/p&gt;
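&lt;p&gt;The second approach can be sketched as follows (a simplified illustration: connect() here is a stand-in for a real driver call such as MongoClient.connect, and the counter exists only to show how often it actually runs):&lt;/p&gt;

```javascript
// Reusing a connection across warm invocations: the promise lives in
// module scope, so it survives as long as the container does.
let connectCount = 0;

// Stand-in for a real driver call (e.g. MongoClient.connect).
async function connect() {
  connectCount += 1;
  return { query: async (q) => `result of ${q}` };
}

let connectionPromise = null; // module scope: survives warm invocations

function getConnection() {
  // Caching the promise (not the connection) also deduplicates
  // concurrent first calls within one container.
  if (!connectionPromise) connectionPromise = connect();
  return connectionPromise;
}

async function handler(event) {
  const db = await getConnection(); // first call connects, warm calls reuse
  return db.query(event.q);
}
```

&lt;p&gt;The drawback described above remains: the container may be frozen or killed at any moment, so the code cannot rely on ever closing this connection gracefully.&lt;/p&gt;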

&lt;h3&gt;
  
  
  &lt;strong&gt;Local test limitations&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Besides, serverless apps are hard to test locally, because usually there are a lot of integrations between AWS services, like Lambdas, S3 buckets, DynamoDB, etc. For any type of local testing, developers must mock all this stuff, which is usually a formidable and time-consuming task.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Inability to adopt caching in a traditional way&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;On top of everything else, you can't implement traditional caching the way you would on classic servers. Usually, you have to use other services, like S3, DynamoDB or ElastiCache (de facto Redis hosted on AWS), to keep Lambda state or cache data between AWS Lambda invocations.&lt;/p&gt;

&lt;p&gt;In most cases, it results in extra costs of the entire infrastructure. Not to mention additional operational overhead — you'll have to put and fetch cached data from remote storage, which, in turn, can slow down the performance of your cache.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Complex payment model&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The last one worth mentioning is the sophisticated price calculation. Even though AWS Lambda is quite cheap, various supplementary elements can significantly increase the total cost. People tend to think that AWS Lambda pricing is based only on its computing resources and the duration of code execution. In fact, you should keep in mind that you also have to pay for additional services, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network traffic,&lt;/li&gt;
&lt;li&gt;API Gateway,&lt;/li&gt;
&lt;li&gt;Logs stored in the CloudWatch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p_Nduy4I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/m1cuixqj3nqda1dsjq72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p_Nduy4I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/m1cuixqj3nqda1dsjq72.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Wrapping up&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Summarizing the above, I want to say that the AWS serverless approach is a great way to strengthen your development practices. Nevertheless, you have to keep in mind that it's quite different from traditional servers.&lt;/p&gt;

&lt;p&gt;To leverage the life-changing benefits of this technology, you have to get acquainted with all the subtleties and pitfalls in the first place. Besides, you also have to think through the architecture and its specifics for your particular solution.&lt;/p&gt;

&lt;p&gt;Otherwise, due to an insufficient educational background, the serverless approach can bring you more problems than benefits.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>faas</category>
      <category>aws</category>
      <category>node</category>
    </item>
    <item>
      <title>Crime and Punishment: Hype Driven Development</title>
      <dc:creator>Maxim Vynohradov</dc:creator>
      <pubDate>Sun, 12 Jul 2020 22:28:55 +0000</pubDate>
      <link>https://dev.to/brocoders/crime-and-punishment-hype-driven-development-38i8</link>
      <guid>https://dev.to/brocoders/crime-and-punishment-hype-driven-development-38i8</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;/em&gt; : long read. About the painful stuff, what I want to share for more than a month. Also does not claim to be complete, it is more about how I try to deal with it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Table Of Contents
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Introduction: what is &lt;em&gt;Hype Driven Development&lt;/em&gt; &lt;/li&gt;
&lt;li&gt;HDD definition as a crime &lt;/li&gt;
&lt;li&gt;Objective side and consequences of the HDD crime&lt;/li&gt;
&lt;li&gt;How to prevent HDD in project&lt;/li&gt;
&lt;li&gt;Conclusion and afterword&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;a&gt;Introduction: what is &lt;em&gt;Hype Driven Development&lt;/em&gt;?&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Here I want to share a few thoughts on how to convince your team / manager / product owner not to use a &lt;em&gt;"hype"&lt;/em&gt; technology, or how to prove that a technology actually fits your project well.&lt;/p&gt;

&lt;p&gt;It is well known that representatives of different professions constantly have to choose how to solve a particular problem. The approach and materials a specialist chooses often determine whether the goals are achieved: whether the cafe will be profitable, whether the bridge will withstand the load, whether the operated heart will beat for many years. And, unfortunately, one wrong decision can lead to fatal consequences and great losses, or even worse — harm to people's health and lives.&lt;/p&gt;

&lt;p&gt;Engineers, software developers, web developers, and other technical folks are no exception. Moreover, the final software products are deeply tied to real businesses, with real people and finances. And the harm that badly designed and implemented software can cause is potentially huge, up to irreparable damage to the entire business.&lt;/p&gt;

&lt;p&gt;Let's look at one of the most common crimes committed by various teams that somehow participate in the design and development of various types of IT projects, named &lt;strong&gt;Hype Driven Development&lt;/strong&gt; (hereinafter referred to as HDD).&lt;/p&gt;

&lt;p&gt;This concept is &lt;a href="https://blog.daftcode.pl/hype-driven-development-3469fc2e9b22#.t3yqgxync"&gt;not new&lt;/a&gt;, though it does not go back to the past century either. It means &lt;em&gt;a decision about a technology stack or software architecture taken without technical justification or research, based only on loud and popular words from the Internet or other sources, "buzzwords", and unverified rumors&lt;/em&gt;. It is development driven by fashion and trends, without justifying the choice from a technical point of view and without checking or evaluating the proposed solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PGcJ403N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/myspiby0qrs5ij93wv3v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PGcJ403N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/myspiby0qrs5ij93wv3v.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I have seen most often: blockchain, data science, machine / deep / any other learning, MongoDB, GraphQL, Serverless, Clouds, Kubernetes, and more. I have seen these "hype" technologies, and many others, in various technical proposals, RFPs (Requests for Proposal), estimation spreadsheets, and so on. Don't get me wrong, I deeply respect the technologies listed above, and, for example, I often use MongoDB and Serverless myself. The problem is that these technologies were proposed for adoption in the post-production phase, and their connection with the business logic was as far-fetched as possible.&lt;/p&gt;

&lt;h4&gt;HDD definition as a crime&lt;/h4&gt;

&lt;p&gt;Why did I start with such a long prelude? Imagine a builder who chooses the material for a skyscraper "because it is so fashionable". Or a doctor who prescribes pills on the principle "well, people abroad take similar pills, and they may help you too". Personally, I have never seen this, and I wish the same for you. &lt;strong&gt;For me, such an approach and attitude to work looks like a crime. Therefore, I propose to consider HDD a crime, at least within this article&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To begin with, I think it is worth recalling the constituent elements of a crime. Without corpus delicti, an act cannot be considered criminal.&lt;/p&gt;

&lt;p&gt;It is quite an extensive and complex definition, but let's summarize and simplify it a little, because after all, this is not a jurisprudence article, and the laws of countries and states differ anyway. We are interested in the following constituent elements of a crime:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;the object&lt;/em&gt; - the party to which the damage was caused;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;the objective side&lt;/em&gt; - the external manifestations of the crime: the action or inaction, the consequences, the connection between them, and, for example, the time, method, and means of committing it;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;the subject&lt;/em&gt; - the one who committed the crime;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;the subjective side&lt;/em&gt; - the person's attitude to the crime they committed: whether it was intent (direct or indirect) or negligence (recklessness or carelessness), whether there was a motive, and what the purpose of the crime was.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's start with the &lt;strong&gt;subject&lt;/strong&gt;. In our case, this is one of the members of the development team in the broad sense of the word: engineers, managers, product owners. That is, all the people who in one way or another influence the development process. They can be divided into two categories. Some work on business logic, tasks, and functional requirements: usually the manager, the business analyst, and the product owner. On the other hand, we have technical specialists: test engineers, developers, architects, team leads. Of course, people can work at the junction of the technical and non-technical parts of the project, but what matters now is that both the former and the latter can propose, and perhaps even insist on, using a hype technology.&lt;/p&gt;

&lt;p&gt;And here we smoothly move on to the &lt;strong&gt;subjective side&lt;/strong&gt; of the HDD crime: did anyone actually intend to do harm when choosing a hype technology? I doubt anybody would deliberately choose a technology or design an architecture to the detriment of their own project or product. Then why does it happen?&lt;/p&gt;

&lt;p&gt;Based on personal experience, I can say that, typically, customers (meaning the people or companies that order the development of a product, not its users) and product owners choose one of these technologies to attract the attention of investors and potential users as sources of funds and income. The reason is clear and transparent: investors are often not technical specialists, so many of them are led by beautiful marketing slogans that promise great things for the product, while rarely understanding the real essence behind them. As a result, we get a technical solution based on marketing. There are also cases when a customer arrives with ready-made technical requirements for the future product. When asked why they want exactly this technology stack, we usually get one of these answers: "one of my projects uses such a stack and everything is fine", or "my partner's project uses such a stack, and he is happy with everything". People who say this usually do not consider that if the functional and non-functional requirements are different, the technologies used do not have to be the same.&lt;/p&gt;

&lt;p&gt;Sometimes someone from the technical part of the team initiates the use of a "hype" technology. Usually these are less experienced teammates (juniors), who cannot always assess the risks and make appropriate time estimates. For them, it all comes down to wanting to try a new technology that comes in a beautiful wrapper. They also usually do not look at the license of the solution (say, a library or a framework), or at its maturity.&lt;/p&gt;

&lt;h4&gt;Objective side and consequences of the HDD crime&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;The objects of the HDD crime&lt;/strong&gt; can also vary. It depends on which side of the project we are considering, technical or business. On the technical side, the object is the codebase in general, its architecture, and the deployment infrastructure. The business side operates with other concepts: cash flows, investments, profits, and other business processes.&lt;/p&gt;

&lt;p&gt;So, let's say a "hype" technology penetrates the project and seeps into the code base, and a lot of logic, additional code, and configuration begins to grow around it. In this case, what and who will suffer from the introduction of the "hype" technology, and how (here we are talking about the &lt;strong&gt;objective side&lt;/strong&gt;) will HDD affect the project and the business as a whole (&lt;strong&gt;the object of the crime&lt;/strong&gt;)?&lt;/p&gt;

&lt;p&gt;Many things can happen; consider it a roll of the dice. It all depends on how deeply the hyped, under-researched technology is integrated into the business logic of your application, the code, and the development processes in general. The list below presents the most obvious consequences.&lt;/p&gt;

&lt;p&gt;🚫 &lt;strong&gt;&lt;em&gt;Building a business model around the technology instead of adapting the technology to the business&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This happens relatively rarely, but it is extremely unfortunate when it does. It happens when a "hype" technology sits at the very core of the business logic and, accordingly, the code base: the very idea of the application is to build something on top of the technology. If the technology breaks in one way or another (for example, an unobvious bug pops up), the product based on it will most likely break too.&lt;/p&gt;

&lt;p&gt;🚫 &lt;strong&gt;&lt;em&gt;Low conversion from "hype" technologies&lt;/em&gt;&lt;/strong&gt; - that is, the discrepancy between the expected profit from the technology and the real one. Teams often do not take into account that they need to pay for additional infrastructure, for supporting the solution, and for the work of the specialists who will configure it.&lt;/p&gt;

&lt;p&gt;🚫 &lt;strong&gt;&lt;em&gt;The black box problem&lt;/em&gt;&lt;/strong&gt;. Usually, an external dependency is perceived as something atomic, as an integral component within our system, with its public API (interfaces, methods, endpoints). In fact, it is the same kind of program code as the code we integrated it into, with its own dependencies and bugs, performance issues, and security problems. You will probably say: "But the code of such components is often open source!" Now think: how often do you have enough time, desire, and immediate need to look into the program code of one of your libraries or your database (which is also a software product with its own source code)? I think not often (in practice, it tends toward never). For us, a bug in any external dependency (and this is exactly how it can harm your project) is a kind of analog of Schrödinger's cat in a box, which is both there and not. We often find out about such bugs only when an external bug affects the state of our system.&lt;/p&gt;

&lt;p&gt;🚫 &lt;strong&gt;&lt;em&gt;Lack of expertise and experience in the introduced technology&lt;/em&gt;&lt;/strong&gt;. It's simple: the devil is in the details. Even if you have highly qualified specialists on your team who love their work and are ready to dig into everything new that comes their way, hardly anyone can comprehend the depths of a given technology in a short time.&lt;/p&gt;

&lt;p&gt;🚫 &lt;strong&gt;&lt;em&gt;Increasing the entropy of the program code and the system as a whole.&lt;/em&gt;&lt;/strong&gt; Hmm, okay, but what is entropy? It is a very broad concept, rooted in the natural sciences and thermodynamics, and later adopted in mechanics and software. The initial idea is that in an isolated system the degree of chaos and uncertainty does not decrease over time; it stays the same or increases. In other words, entropy is a measure of the unknown about the system as a whole. In the context of software, the concept first appeared in the book Object-Oriented Software Engineering (by Ivar Jacobson, Magnus Christerson, Patrik Jonsson, and Gunnar Overgaard). In general, think of entropy as a measure of the disorder in the system: the more disorder and uncertainty, the higher the probability of breakage. Compare a hammer and a car, which is a hundred times more complex in its structure: which one breaks down more often and faster? A car has many more constituent components, each with its own level of entropy, so the total entropy is higher. Returning to HDD: the more complex and unknown the technology we involve in the process, the higher the likelihood that the project's entropy will increase, and with it the number of bugs, while the &lt;a href="https://en.wikipedia.org/wiki/Mean_time_between_failures"&gt;Mean time between failures (MTBF)&lt;/a&gt; decreases.&lt;/p&gt;

&lt;p&gt;🚫 &lt;strong&gt;&lt;em&gt;Life cycle and aging of software&lt;/em&gt;&lt;/strong&gt;. Are you sure that the hype framework or server will exist, keep being developed, and be supported for a long time, at least as long as the life of your project / product? And that in the next version the public API will not change completely, forcing you either to migrate to the new API or to live with the deprecated one? Have you assessed these risks correctly? Let's discuss this further.&lt;/p&gt;

&lt;p&gt;🚫 &lt;strong&gt;&lt;em&gt;Increased risks&lt;/em&gt;&lt;/strong&gt;. New technologies usually mean new risks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Exceeding time estimates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Risk of failing to complete the project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additional overhead for infrastructure, development, and support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additional vulnerabilities and points of failure in the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No guarantee that critical security issues will be fixed and updates released on time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;How to prevent HDD in a project&lt;/h4&gt;

&lt;p&gt;Ok, it seems we have considered all the components of this crime. But how do you prevent HDD on a project, that is, avoid adopting a technology without full awareness of what the team will have to live with? Or, conversely, how do you prove that the adoption is justified and evaluate all the pros and cons? Once again, just in case: if a technology is hyped, that does not at all mean it is full of bugs, immature, or inapplicable to your project. There should simply be common sense in everything, namely an analysis of the technology from both the technical and the business point of view, and a sober assessment of the risks and additional costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3_7BGa3N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u363hfi0rzv2zbb3dmiu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3_7BGa3N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u363hfi0rzv2zbb3dmiu.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are some ideas and approaches, the use of which can influence the views and awareness of the team regarding the choice of hype technology.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;&lt;em&gt;The opinion of experts&lt;/em&gt;&lt;/strong&gt;, both on the technical side and in the business domain: "do we need it?" Before getting into Data Science, Serverless, and the like, consult a specialist with experience in that technology. Perhaps you will not give up the technology, but you may completely rethink its use in the project. Moreover, this has a positive effect on estimates.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;&lt;em&gt;Proof of Concept&lt;/em&gt;&lt;/strong&gt; creation is as old as the world: before doing a full-scale implementation, build the simplest possible demo on top of the technology. You will most likely not uncover deep, hidden problems, but you can discard the technology at the start if you see that it does not suit you at all.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;&lt;em&gt;A collective brainstorm "for" and "against"&lt;/em&gt;&lt;/strong&gt; - decisions should never be made alone or within a narrow circle of employees. It is worth discussing and listening to all opinions; a sensible thought may come from an unexpected side. The Three Amigos approach is also interesting: rooted in BDD and Agile, it means that three parties are involved in the process: Business, Development, and Testing. This way, you can try to identify the largest possible number of problems and corner cases associated with the introduced technology. &lt;a href="https://cucumber.io/docs/bdd/who-does-what/#the-three-amigos"&gt;More details here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;💡 "&lt;a href="https://en.wikipedia.org/wiki/Five_whys"&gt;&lt;strong&gt;&lt;em&gt;5 Whys&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;" - and again about how not to drag unnecessary things to your project. Try to apply this technique to the next technology adoption issue. Ask yourself and your colleagues - "why are we doing this," and so on, 5 times according to the list (see the article). It may well be that simpler steps are sufficient to solve the original problem.&lt;/p&gt;

&lt;p&gt;Also, consider whether you can achieve the desired results without using this technology, and how much more difficult and costly that would be.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;&lt;em&gt;KISS (keep it simple, stupid) and YAGNI (you aren't gonna need it).&lt;/em&gt;&lt;/strong&gt; A huge number of articles have been written about these principles, and I see no need to retell them. Just make sure you don't break them when you introduce the next technology into your project.&lt;/p&gt;

&lt;h4&gt;Conclusion and afterword&lt;/h4&gt;

&lt;p&gt;We're finally here, and I'll try to keep the ending short! There is a huge difference between "&lt;em&gt;adding and using a new technology&lt;/em&gt;" and "&lt;em&gt;deliberately adding and using a new technology&lt;/em&gt;". It would seem the phrases differ by a single word, but there is a huge gap between them. Hype technologies are often interesting and progressive; the main thing is to choose and apply them deliberately, without risking the project and the business, and without forgetting about your technical debt.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>hype</category>
      <category>engineering</category>
      <category>management</category>
    </item>
  </channel>
</rss>
