<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marcin Piczkowski</title>
    <description>The latest articles on DEV Community by Marcin Piczkowski (@piczmar_0).</description>
    <link>https://dev.to/piczmar_0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F22562%2F8602ada5-21a4-44f7-849c-11e9ba2cc12e.jpeg</url>
      <title>DEV Community: Marcin Piczkowski</title>
      <link>https://dev.to/piczmar_0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/piczmar_0"/>
    <language>en</language>
    <item>
      <title>How to use relational and graph db in one project?</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Sat, 06 Mar 2021 18:23:17 +0000</pubDate>
      <link>https://dev.to/piczmar_0/how-to-use-relational-and-graph-db-in-one-project-pfh</link>
      <guid>https://dev.to/piczmar_0/how-to-use-relational-and-graph-db-in-one-project-pfh</guid>
      <description>&lt;p&gt;I have some use cases which fit a graph DB (Neo4j) better, but most of the features are handled in MySQL.&lt;/p&gt;

&lt;p&gt;I'm thinking about how to glue the two together.&lt;br&gt;
Would you duplicate some data to bind MySQL with Neo4j, e.g. nodes in Neo4j keeping references to IDs from MySQL, or vice versa?&lt;/p&gt;

&lt;p&gt;I'm using Spring and thought I'd use chained transactions to gracefully roll back transactions which span both databases.&lt;/p&gt;

&lt;p&gt;For entities on the edge of both the SQL and graph worlds, I plan to duplicate entities in both databases (not all properties, just the ones used in each world, plus the IDs) and to join data on IDs at query time.&lt;/p&gt;

&lt;p&gt;In DDD it's a known technique to split domains and use a separate DB for each domain, where a certain DB technology fits better. But in my case it's really the same domain, just different views.&lt;/p&gt;

&lt;p&gt;I'm totally aware it all depends on the context and business use cases, but are you aware of any resources/examples of how people are doing this?&lt;/p&gt;

&lt;p&gt;I know, "Google is my friend", but I had difficulty finding useful resources.&lt;/p&gt;
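A minimal sketch of the shared-ID idea, simulated with in-memory dicts standing in for MySQL rows and Neo4j nodes (all names and fields here are hypothetical; real code would go through the JDBC and Bolt drivers):

```python
# Hypothetical in-memory stand-ins: a "MySQL" table keyed by primary key,
# and "Neo4j" nodes that duplicate only the MySQL ID plus graph-side fields.
mysql_users = {
    42: {"id": 42, "email": "a@example.com", "plan": "pro"},
}
neo4j_nodes = [
    {"mysql_id": 42, "label": "User", "follows": [7, 9]},
]

def enrich(node):
    """Join a graph node back to its relational row on the shared ID."""
    row = mysql_users[node["mysql_id"]]
    return {"email": row["email"], "follows": node["follows"]}

print(enrich(neo4j_nodes[0]))  # {'email': 'a@example.com', 'follows': [7, 9]}
```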

</description>
      <category>discuss</category>
      <category>graph</category>
      <category>rdbms</category>
      <category>neo4j</category>
    </item>
    <item>
      <title>What's your experience with text IDs in SQL database?</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Thu, 20 Aug 2020 13:44:16 +0000</pubDate>
      <link>https://dev.to/piczmar_0/what-s-your-experience-with-text-ids-in-sql-database-11dd</link>
      <guid>https://dev.to/piczmar_0/what-s-your-experience-with-text-ids-in-sql-database-11dd</guid>
      <description>&lt;p&gt;I'm thinking of using text IDs in an SQL database, similar to how Stripe generates them, e.g.: &lt;code&gt;ch_19iRv22eZvKYlo2CAxkjuHxZ&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I found &lt;a href="https://stackoverflow.com/questions/41970461/how-to-generate-a-random-unique-alphanumeric-id-of-length-n-in-postgres-9-6?noredirect=1&amp;amp;lq=1"&gt;this&lt;/a&gt; Stack Overflow question very helpful.&lt;/p&gt;

&lt;p&gt;I could use a function that generates such IDs, with entropy provided by the &lt;code&gt;pgcrypto&lt;/code&gt; extension function &lt;code&gt;gen_random_bytes&lt;/code&gt;, but I'm still hesitant.&lt;/p&gt;

&lt;p&gt;Especially after reading this comment: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;No, it absolutely is not safe to use in any way shape or form which is why I'm for closing this question. If you want something that's safe to use, use UUID. If you want to play around with something likely to burn you severely and leave you crying. A solution which requires you create your own function that is worse in every way than the stock feature set to do this, then have it at. =)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I would not like to reach a point in production where the IDs start to collide.&lt;/p&gt;

&lt;p&gt;Therefore, I lean toward UUIDs, which on the other hand are not very user-friendly, e.g. when used in URLs, reports, etc. I personally like the way Stripe does it.&lt;/p&gt;

&lt;p&gt;What's your experience? Has anyone done something similar to Stripe? Did you face any problems?&lt;/p&gt;
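For comparison, a hedged sketch of generating Stripe-style prefixed IDs application-side rather than inside Postgres: Python's `secrets` module gives the same kind of CSPRNG entropy as `pgcrypto`'s `gen_random_bytes`. The prefix and length are illustrative, not Stripe's actual scheme:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # base62

def make_id(prefix, n_chars=24):
    """Return a prefixed random ID like ch_19iRv22eZvKYlo2CAxkjuHxZ."""
    body = "".join(secrets.choice(ALPHABET) for _ in range(n_chars))
    return f"{prefix}_{body}"

charge_id = make_id("ch")
print(charge_id)
# 24 base62 characters carry roughly 143 bits of entropy, so accidental
# collisions are about as unlikely as with a 122-bit random UUIDv4.
```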

</description>
      <category>discuss</category>
      <category>sql</category>
    </item>
    <item>
      <title>Cutting and merging PDF files from the command line</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Tue, 21 Jul 2020 21:22:11 +0000</pubDate>
      <link>https://dev.to/piczmar_0/cutting-and-merging-pdf-files-from-the-command-line-23nb</link>
      <guid>https://dev.to/piczmar_0/cutting-and-merging-pdf-files-from-the-command-line-23nb</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VtfeMoN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gv5xbh4jesjm8fwisjmj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VtfeMoN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gv5xbh4jesjm8fwisjmj.jpg" alt=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;small&gt;&lt;i&gt;Photo by &lt;a href="https://unsplash.com/@sharonmccutcheon?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Sharon McCutcheon&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/documents?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/i&gt;&lt;small&gt;&lt;/small&gt;&lt;/small&gt;&lt;/p&gt;

&lt;p&gt;I wanted to show you a few simple commands which saved me a lot of time while tidying up some PDF files.&lt;br&gt;
You can cut PDFs into pages, merge them, or split them based on a page range. Here they are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cut off the first page of &lt;code&gt;input.pdf&lt;/code&gt; and save the result as &lt;code&gt;output.pdf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pdftk input.pdf &lt;span class="nb"&gt;cat &lt;/span&gt;2-end output output.pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
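If you need to apply that page-cut to many files, it can be scripted; a hedged Python sketch that only builds the same pdftk argument list (actually running it requires pdftk installed, e.g. `subprocess.run(cmd, check=True)` after `import subprocess`):

```python
# Build the pdftk invocation that keeps pages 2..end, i.e. drops page 1.
def drop_first_page_cmd(src: str, dst: str) -> list[str]:
    """pdftk argv that writes pages 2..end of src to dst."""
    return ["pdftk", src, "cat", "2-end", "output", dst]

cmd = drop_first_page_cmd("input.pdf", "output.pdf")
print(" ".join(cmd))  # pdftk input.pdf cat 2-end output output.pdf
```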


&lt;ul&gt;
&lt;li&gt;merge &lt;code&gt;file1.pdf&lt;/code&gt; and &lt;code&gt;file2.pdf&lt;/code&gt; into single &lt;code&gt;output.pdf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gs &lt;span class="nt"&gt;-dBATCH&lt;/span&gt; &lt;span class="nt"&gt;-dNOPAUSE&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-sDEVICE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;pdfwrite &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-sOutputFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;output.pdf file1.pdf file2.pdf 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;split &lt;code&gt;input.pdf&lt;/code&gt; into single-page PDF files, starting at the 2nd page and ending at the 15th. It will create files &lt;code&gt;out_2.pdf&lt;/code&gt;, &lt;code&gt;out_3.pdf&lt;/code&gt;, &lt;code&gt;out_4.pdf&lt;/code&gt; and so on, up to &lt;code&gt;out_15.pdf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pdfseparate &lt;span class="nt"&gt;-f&lt;/span&gt; 2 &lt;span class="nt"&gt;-l&lt;/span&gt; 15 input.pdf out_%d.pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;convert an image to a PDF file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, install img2pdf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;img2pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then convert &lt;code&gt;image.jpg&lt;/code&gt; to &lt;code&gt;output.pdf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;img2pdf image.jpg &lt;span class="nt"&gt;-o&lt;/span&gt; output.pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;convert multiple images to a PDF file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can do it with another very useful tool - &lt;a href="https://imagemagick.org/index.php"&gt;ImageMagick&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can install it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update 
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;build-essential
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then let's say we want to create a PDF file from &lt;code&gt;image1.jpg&lt;/code&gt;, &lt;code&gt;image2.png&lt;/code&gt; and &lt;code&gt;image3.bmp&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;convert image1.jpg image2.png image3.bmp -quality 100 output.pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may get an error message like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;convert-im6.q16: no images defined `output.pdf' @ error/convert.c/ConvertImageCommand/3258.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means ImageMagick's security policy is blocking PDF processing. To allow it, you need root privileges to edit &lt;code&gt;/etc/ImageMagick-6/policy.xml&lt;/code&gt; and comment out this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; &amp;lt;!--policy domain="coder" rights="none" pattern="PDF" --/&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>pdf</category>
      <category>bash</category>
      <category>linux</category>
    </item>
    <item>
      <title>Interactive TypeScript programming with an IDE</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Thu, 02 Jan 2020 22:06:38 +0000</pubDate>
      <link>https://dev.to/piczmar_0/interactive-typescript-programming-with-vscode-2nio</link>
      <guid>https://dev.to/piczmar_0/interactive-typescript-programming-with-vscode-2nio</guid>
      <description>&lt;p&gt;In this post I want to prepare a small project setup for interactive experiments with TypeScript code, without the manual stop, compile, start loop.&lt;/p&gt;

&lt;p&gt;You can compare it to the JavaScript shell in a browser, or to other languages' read-evaluate-print-loop (REPL) shells, but all inside your favourite editor.&lt;/p&gt;




&lt;p&gt;As a side note, if you're using the VSCode editor, I also recommend installing the &lt;a href="https://prettier.io/" rel="noopener noreferrer"&gt;Prettier&lt;/a&gt; extension and turning on the format-on-save feature.&lt;br&gt;
To do so, open Settings: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On Windows/Linux - File &amp;gt; Preferences &amp;gt; Settings&lt;/li&gt;
&lt;li&gt;On macOS - Code &amp;gt; Preferences &amp;gt; Settings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fxczwdm9bcy6q1fwpmxyu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fxczwdm9bcy6q1fwpmxyu.png" alt="Open settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then type "format" in the search field and check "Format on Save".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fb150xrz0cfxuj723n8f1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fb150xrz0cfxuj723n8f1.png" alt="Enable format on save"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;In my working project I want to have the following goodies: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;auto-build (or rather, transpile) from TypeScript to JS and reload on file save&lt;/li&gt;
&lt;li&gt;auto-execute on file save&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, you need Node.js installed; the more recent the version, the better.&lt;/p&gt;

&lt;p&gt;Next, install the TypeScript compiler (&lt;code&gt;tsc&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; tsc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it's time to create the first demo project.&lt;/p&gt;

&lt;p&gt;1) Use npm to generate a fresh project&lt;/p&gt;

&lt;p&gt;Create a new folder &lt;code&gt;demo-project&lt;/code&gt;.&lt;br&gt;
Start a shell in that folder and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Accept the defaults for all questions in the prompt.&lt;/p&gt;

&lt;p&gt;2) Generate TypeScript config file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsc &lt;span class="nt"&gt;--init&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will create &lt;code&gt;tsconfig.json&lt;/code&gt;.&lt;br&gt;
In this file we have to update two lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; "outDir": "./build",                        
 "rootDir": "./src", 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets where we keep our source files and where the compiled JavaScript files go. Separating them is good practice, so you don't get lost in a mess of .js files mixed with .ts files in a single folder.&lt;/p&gt;

&lt;p&gt;Finally, the file should look like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"compilerOptions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"target"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"es5"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"module"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"commonjs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"outDir"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./build"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"rootDir"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./src"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"strict"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"esModuleInterop"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"forceConsistentCasingInFileNames"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to create the &lt;code&gt;src&lt;/code&gt; and &lt;code&gt;build&lt;/code&gt; folders in the project root.&lt;/p&gt;

&lt;p&gt;3) Install required modules for build and reload&lt;/p&gt;

&lt;p&gt;We will use &lt;a href="https://www.npmjs.com/package/nodemon" rel="noopener noreferrer"&gt;nodemon&lt;/a&gt; and &lt;a href="https://www.npmjs.com/package/concurrently" rel="noopener noreferrer"&gt;concurrently&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;--save-dev&lt;/span&gt; nodemon concurrently

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4) Configure build and run scripts&lt;/p&gt;

&lt;p&gt;We will add a few scripts for convenient build and run with a single command. The run script will take the JavaScript file from the &lt;code&gt;./build&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;Let's put the following lines in &lt;code&gt;package.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start:build"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"tsc -w"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start:run"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nodemon build/index.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"concurrently npm:start:*"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whenever you run &lt;code&gt;npm start&lt;/code&gt; in bash, two processes will execute concurrently: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;TypeScript files are transpiled to JavaScript (&lt;code&gt;tsc -w&lt;/code&gt;), the &lt;code&gt;-w&lt;/code&gt; flag means "watch mode" - an updated file will be recompiled automatically. &lt;code&gt;tsc&lt;/code&gt; will take files from &lt;code&gt;./src&lt;/code&gt; folder and put target JS file in &lt;code&gt;./build&lt;/code&gt; folder thanks to &lt;code&gt;tsconfig.json&lt;/code&gt; settings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;nodemon&lt;/code&gt; will restart the application from the compiled JavaScript (&lt;code&gt;./build/index.js&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;npm:start:*&lt;/code&gt; argument means that &lt;code&gt;concurrently&lt;/code&gt; will look at the scripts defined in &lt;code&gt;package.json&lt;/code&gt; and run each one matching the pattern &lt;code&gt;start:*&lt;/code&gt;, in our case &lt;code&gt;start:build&lt;/code&gt; and &lt;code&gt;start:run&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;At this point you should have a template for any future project ready.&lt;/p&gt;

&lt;p&gt;Let's check how it works.&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;index.ts&lt;/code&gt; file in &lt;code&gt;./src&lt;/code&gt; folder, then add one line, e.g.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello World!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, run in terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first time you run it you may see an error, because &lt;code&gt;concurrently&lt;/code&gt; tries to start the app from &lt;code&gt;./build/index.js&lt;/code&gt; before TypeScript has transpiled it. From then on, whenever you update &lt;code&gt;index.ts&lt;/code&gt;, the file is recompiled and re-executed automatically.&lt;/p&gt;
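One hedged way to soften that first-run race is to delay nodemon slightly; nodemon's `--delay` flag takes a number of seconds. For example, the `start:run` script could become:

```json
"start:run": "nodemon --delay 1 build/index.js",
```

This gives `tsc` a head start before nodemon first tries to load the compiled file.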

&lt;p&gt;This is a good starting point for interactive TypeScript programming, without having to manually rebuild and restart the program every time something changes.&lt;/p&gt;

&lt;p&gt;What next?&lt;/p&gt;

&lt;p&gt;If you're going to use some core Node.js features from TypeScript, e.g. reading/writing files with the &lt;code&gt;fs&lt;/code&gt; library, you'll have to install the Node.js type definitions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i @types/node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>typescript</category>
      <category>node</category>
    </item>
    <item>
      <title>Tracking failed SQS messages</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Sat, 28 Dec 2019 01:26:45 +0000</pubDate>
      <link>https://dev.to/piczmar_0/tracking-failed-sqs-messages-21l6</link>
      <guid>https://dev.to/piczmar_0/tracking-failed-sqs-messages-21l6</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fe6gnb7jkw9el1t00t162.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fe6gnb7jkw9el1t00t162.jpg" alt="Photo by Paweł Czerwiński on Unsplash"&gt;&lt;/a&gt;&lt;br&gt;
&lt;sup&gt;&lt;em&gt;Photo by Paweł Czerwiński on Unsplash&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Tracking a message's flow through the system is an indispensable ability in large distributed applications. &lt;/p&gt;

&lt;p&gt;Amazon provides limited tools out of the box. There are better dedicated services for this purpose, but sometimes we are left with what we have in AWS.&lt;/p&gt;

&lt;p&gt;In this article I will show you how to track the logs of a Lambda function. &lt;/p&gt;

&lt;p&gt;The function was invoked from a CloudWatch scheduled event and had a dead-letter queue (DLQ) assigned.&lt;br&gt;
As a result, the failing message landed in the DLQ.&lt;/p&gt;

&lt;p&gt;1) Get the SQS DLQ message - navigate to the SQS service in the AWS Console, then find your DLQ and select "Queue Actions -&amp;gt; View/Delete Messages". &lt;/p&gt;

&lt;p&gt;You can then see all messages. Select one message and you should see something similar to this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
"version":"0",
"id":"29d8bf9d-94c0-0e45-b68b-b9aeb4c891c2",
"detail-type":"Scheduled Event",
"source":"aws.events",
"account":"763369520800",
"time":"2019-09-19T23:30:15Z",
"region":"eu-west-1",
"resources":["arn:aws:events:eu-west-1:763369520800:rule/trigger-lambda-function"],
"detail":{}
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2) Take the id value (&lt;code&gt;29d8bf9d-94c0-0e45-b68b-b9aeb4c891c2&lt;/code&gt;) and look it up in &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html" rel="noopener noreferrer"&gt;CloudWatch Logs Insights&lt;/a&gt; for your Lambda function, e.g. with this query:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fields @timestamp, @message
| sort @timestamp desc
| limit 20
| filter id='29d8bf9d-94c0-0e45-b68b-b9aeb4c891c2'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fus6mlrzxsbmeeor82vpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fus6mlrzxsbmeeor82vpd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a result you should get a bunch of messages like: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


@ingestionTime 1568936016992
@log 763369520800:/aws/lambda/my-lambda-function
@logStream 2019/09/19/[$LATEST]45f87b84a2fd4fb0a79df2c81c57cc06
@message 2019-09-19T23:33:30.528Z b276b9b4-bc43-4876-8052-0f94a38720a7 event: {"version":"0","id":"29d8bf9d-94c0-0e45-b68b-b9aeb4c891c2","detail-type":"Scheduled Event","source":"aws.events","account":"763369520800","time":"2019-09-19T23:30:15Z","region":"eu-west-1","resources":["arn:aws:events:eu-west-1:763369520800:rule/trigger-lambda-function"],"detail":{}}
@requestId b276b9b4-bc43-4876-8052-0f94a38720a7
@timestamp 1568936010528
account 763369520800
detail-type Scheduled Event
id 29d8bf9d-94c0-0e45-b68b-b9aeb4c891c2
region eu-west-1
resources.0arn:aws:events:eu-west-1:763369520800:rule/trigger-lambda-function
source aws.events
time 2019-09-19T23:30:15Z



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Pay attention to &lt;code&gt;@logStream&lt;/code&gt; value. Now you can lookup this stream in CloudWatch:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fsem00i42hshmaz8fw54u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fsem00i42hshmaz8fw54u.png" alt="Search CloudWatch stream"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The stream can contain plenty of logs, so navigating through all of them in search of the interesting ones can be tedious.&lt;/p&gt;

&lt;p&gt;There are better tools than the AWS Console for fetching Lambda logs. &lt;/p&gt;

&lt;p&gt;You can check out &lt;a href="https://github.com/TylerBrock/saw" rel="noopener noreferrer"&gt;saw&lt;/a&gt; and the &lt;a href="https://dev.to/piczmar_0/pleasant-parsing-of-aws-lambda-logs-fdp"&gt;other article&lt;/a&gt; I dedicated to this tool.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>observability</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Pleasant parsing of AWS Lambda logs</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Sat, 28 Dec 2019 01:25:31 +0000</pubDate>
      <link>https://dev.to/piczmar_0/pleasant-parsing-of-aws-lambda-logs-fdp</link>
      <guid>https://dev.to/piczmar_0/pleasant-parsing-of-aws-lambda-logs-fdp</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fa3a21oxmxk3wft3eis8p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fa3a21oxmxk3wft3eis8p.jpg" alt="Photo by Markus Spiske on Unsplash"&gt;&lt;/a&gt;&lt;br&gt;
&lt;sup&gt;&lt;em&gt;Photo by Markus Spiske on Unsplash&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Looking up logs in the AWS Console is tedious work.&lt;br&gt;
The CloudWatch interface has its own limitations. &lt;br&gt;
Using shell commands and tools is much more productive.&lt;/p&gt;

&lt;p&gt;One of the lesser-known tools is &lt;a href="https://github.com/TylerBrock/saw" rel="noopener noreferrer"&gt;saw&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this article I will show you some typical use cases when developing and debugging Lambda functions. &lt;/p&gt;

&lt;p&gt;By "debugging" I mean the simplest way called also "print line" debugging ;) when you put some &lt;code&gt;console.log&lt;/code&gt; statements in your code, deploy it, execute and watch at the output logs.&lt;/p&gt;

&lt;p&gt;Assuming you have already downloaded and installed saw (instructions on &lt;a href="https://github.com/TylerBrock/saw" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;), let's prepare the environment so that saw can connect to AWS.&lt;/p&gt;

&lt;p&gt;Set the following environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=&amp;lt;put your AWS access key here&amp;gt;
export AWS_SECRET_ACCESS_KEY=&amp;lt;put your AWS secret key here&amp;gt;
export AWS_REGION=&amp;lt;put your AWS region here, e.g.:eu-west-1&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Use case 1: Watch Lambda logs in real time
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; saw watch "/aws/lambda/my-lambda-function"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will fetch logs for the Lambda named &lt;code&gt;my-lambda-function&lt;/code&gt;, similar to how &lt;code&gt;tail -f filename&lt;/code&gt; prints lines appended to a file in real time on Linux.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use case 2: Filter old logs in search of a particular expression
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;saw get "/aws/lambda/my-lambda-function" \
--start 2019-09-17T10:52:00.00Z \
--stop 2019-09-17T23:00:00.00Z \
--prefix '2019/09/19/[$LATEST]dc80521928524b82837eae6ee718d217' \
&amp;gt; function.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will fetch logs from 2019-09-17, starting at 10:52:00 UTC and ending at 23:00:00 UTC, and redirect them to the file &lt;code&gt;function.log&lt;/code&gt; on your local machine. &lt;/p&gt;

&lt;p&gt;In addition, it will restrict the output to a particular log stream (&lt;code&gt;2019/09/19/[$LATEST]dc80521928524b82837eae6ee718d217&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;--start&lt;/code&gt;, &lt;code&gt;--stop&lt;/code&gt; and &lt;code&gt;--prefix&lt;/code&gt; arguments are optional, but I recommend using them unless you want to fetch all the logs for a function, which can be a huge amount of data.&lt;/p&gt;

&lt;p&gt;Having the logs in a local file is a big advantage, as you can further parse or search them with tools like &lt;code&gt;sed&lt;/code&gt; and &lt;code&gt;grep&lt;/code&gt;, open them in an editor, etc.&lt;/p&gt;

&lt;p&gt;How would you know which log stream to query? You will find the answer in my other article: &lt;a href="https://dev.to/piczmar_0/tracking-failed-sqs-messages-21l6"&gt;Tracking failed SQS messages&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>observability</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Getting S3 bucket size different ways</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Sat, 28 Dec 2019 01:14:01 +0000</pubDate>
      <link>https://dev.to/piczmar_0/getting-s3-bucket-size-different-ways-4n4o</link>
      <guid>https://dev.to/piczmar_0/getting-s3-bucket-size-different-ways-4n4o</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fsoown2nxay87qlrboxhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fsoown2nxay87qlrboxhx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post I will show a few methods of checking the size of an AWS S3 bucket named &lt;code&gt;my-docs&lt;/code&gt;. You can change this name to any existing bucket you have access to.&lt;/p&gt;

&lt;h2&gt;
  
  
  S3 CLI
&lt;/h2&gt;

&lt;p&gt;All you need is &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed and configured.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws s3 ls --summarize --human-readable --recursive s3://my-docs


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It prints output like this one:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


2019-03-07 12:11:24   69.7 KiB 2019/01/file1.pdf
2019-03-07 12:11:20  921.4 KiB 2019/01/file2.pdf
2019-03-07 12:11:16  130.9 KiB 2019/01/file3.pdf

Total Objects: 310
Total Size: 121.7 MiB


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output looks similar to that of the bash &lt;code&gt;ls&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;You can then cut the total size from the output, e.g.:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws s3 ls --summarize --human-readable --recursive s3://my-docs \
| tail -n 1 \
| awk -F" " '{print $3}'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;tail&lt;/code&gt; takes the last line of the output.&lt;br&gt;
&lt;code&gt;awk&lt;/code&gt; tokenizes the line by spaces and prints the third token, which is the bucket size (121.7 in the example above, reported in MiB).&lt;/p&gt;
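&lt;p&gt;You can sanity-check the extraction step without touching AWS by piping sample summary lines through the same commands (the numbers below are made up):&lt;/p&gt;

```shell
# The last two lines of a sample `aws s3 ls --summarize` output
# (the numbers are made up for illustration).
summary='Total Objects: 310
Total Size: 121.7 MiB'

# Same extraction as above: last line, third space-separated token.
printf '%s\n' "$summary" | tail -n 1 | awk '{print $3}'
# prints: 121.7
```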

&lt;h2&gt;
  
  
  CloudWatch metric
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws cloudwatch get-metric-statistics --namespace AWS/S3 \
  --start-time 2019-07-07T23:00:00Z \
  --end-time 2019-10-31T23:00:00Z \
  --period 86400 \
  --statistics Sum \
  --region us-east-1 \
  --metric-name BucketSizeBytes \
  --dimensions Name=BucketName,Value="my-docs" Name=StorageType,Value=StandardStorage \
  --output text \
| sort -k3 -n | tail -n 1 | cut -f 2-2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;What is happening in the above command?&lt;/p&gt;

&lt;p&gt;The AWS CLI CloudWatch command &lt;code&gt;get-metric-statistics&lt;/code&gt; prints metric data points from &lt;code&gt;start-time&lt;/code&gt; to &lt;code&gt;end-time&lt;/code&gt; with a period of 86400 seconds, which is 24 h (the valid &lt;code&gt;period&lt;/code&gt; values depend on the time frame, see the &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-statistics.html" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for more info). &lt;/p&gt;

&lt;p&gt;The bucket name is &lt;code&gt;my-docs&lt;/code&gt; and we want the output printed as plain text (we could just as well choose JSON).&lt;/p&gt;

&lt;p&gt;The printed output would look like this: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

DATAPOINTS  127633754.0 2019-10-17T23:00:00Z    Bytes
DATAPOINTS  127633754.0 2019-08-13T23:00:00Z    Bytes
DATAPOINTS  127633754.0 2019-07-07T23:00:00Z    Bytes
DATAPOINTS  127633754.0 2019-10-03T23:00:00Z    Bytes


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The third column is a timestamp. The data points are unordered, and we are interested in the most recent size of the bucket, so we sort the output by this column: &lt;code&gt;sort -k3 -n&lt;/code&gt; sorts by the 3rd column.&lt;/p&gt;

&lt;p&gt;Finally, we want the second column, which is the bucket size in bytes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tail -n 1&lt;/code&gt; takes the last (most recent) line of the output.&lt;br&gt;
&lt;code&gt;cut -f 2-2&lt;/code&gt; cuts out the 2nd tab-separated field, i.e. only the column we are interested in.&lt;/p&gt;
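&lt;p&gt;The sorting and cutting can be tried out locally on sample data points (tab-separated, as in the &lt;code&gt;--output text&lt;/code&gt; layout above; the values are made up):&lt;/p&gt;

```shell
# Two sample data points in the tab-separated `--output text` layout
# (values made up). Sort by the 3rd column (timestamp), keep the
# newest line, then cut out the 2nd field (the size in bytes).
printf 'DATAPOINTS\t127633754.0\t2019-07-07T23:00:00Z\tBytes\nDATAPOINTS\t127633754.0\t2019-10-17T23:00:00Z\tBytes\n' \
| sort -k3 -n | tail -n 1 | cut -f 2-2
# prints: 127633754.0
```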

&lt;p&gt;This method of fetching the bucket size is error-prone, because data points are present only for time frames in which the data on S3 actually changed. If you have not modified the data during the last month and you request metrics for that period, you won't get any data points.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS S3 job (inventory)
&lt;/h2&gt;

&lt;p&gt;This is a feature provided by AWS: an inventory report. It lets you configure a scheduled job which saves information about an S3 bucket to a file in another bucket. This information can, among other things, contain the size of the objects in the source bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/user-guide/configure-inventory.html" rel="noopener noreferrer"&gt;These&lt;/a&gt; instructions explain how to configure AWS S3 inventory manually in the AWS console.&lt;/p&gt;

&lt;p&gt;To wrap up, the first option looks like the easiest one from the command line, but the other options are worth knowing too. &lt;/p&gt;

&lt;p&gt;They may serve a particular use case better. &lt;/p&gt;

&lt;p&gt;E.g. if you wanted to see how the bucket size changed over a time period, the 2nd method would be more suitable, but if you'd like to get a report with the bucket size on a regular basis, the third seems easier to implement. You could listen for a new report object in the second bucket and trigger a Lambda function on the object-created event to process the report (and maybe notify a user by email).&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
    </item>
    <item>
      <title>Make a gif from a video quickly</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Fri, 27 Sep 2019 21:16:01 +0000</pubDate>
      <link>https://dev.to/piczmar_0/quickly-make-a-gif-from-a-video-553m</link>
      <guid>https://dev.to/piczmar_0/quickly-make-a-gif-from-a-video-553m</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fjotxry5ytfls4sinednj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fjotxry5ytfls4sinednj.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;small&gt; -- Photo by &lt;a href="https://unsplash.com/@tumbao1949?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;James Wainscoat&lt;/a&gt; on Unsplash -- &lt;/small&gt;&lt;/p&gt;

&lt;p&gt;In this post I will show you how you can easily convert a video into a gif with free command line tools. &lt;/p&gt;

&lt;p&gt;All in 3 steps:&lt;/p&gt;

&lt;p&gt;1 ) Install the required tools&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ffmpeg.org/" rel="noopener noreferrer"&gt;&lt;code&gt;ffmpeg&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.lcdf.org/gifsicle/" rel="noopener noreferrer"&gt;&lt;code&gt;gifsicle&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2 ) Record a video (in MOV format) which you'd like to convert later to a gif file.&lt;/p&gt;

&lt;p&gt;3 ) Transform video into gif like that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ffmpeg -i my.mov -s 1400x800 -pix_fmt rgb24 \
 -r 20 -f gif -  | gifsicle --optimize=3 --delay=3 &amp;gt; my.gif
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command takes the file &lt;code&gt;my.mov&lt;/code&gt; as input for the &lt;code&gt;ffmpeg&lt;/code&gt; program and produces &lt;code&gt;my.gif&lt;/code&gt; as output. In addition, it applies some optimisations using &lt;code&gt;gifsicle&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We could actually skip &lt;code&gt;gifsicle&lt;/code&gt; and only run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ffmpeg -i my.mov -s 1400x800 -pix_fmt rgb24 -r 20 -f gif my.gif
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The drawback is that the resulting gif would be large.&lt;/p&gt;

&lt;p&gt;If you're not interested in the details of what the command does, you can stop reading here. Otherwise, follow me to the end.&lt;/p&gt;




&lt;p&gt;Below is a more detailed explanation of different options.&lt;/p&gt;

&lt;p&gt;If you'd like to see a complete list of &lt;code&gt;ffmpeg&lt;/code&gt; parameters and options with explanation check &lt;a href="https://gist.github.com/tayvano/6e2d456a9897f55025e25035478a3a50" rel="noopener noreferrer"&gt;here&lt;/a&gt; or use command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-h&lt;/span&gt; full
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly for &lt;code&gt;gifsicle&lt;/code&gt; run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gifsicle &lt;span class="nt"&gt;-h&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following are &lt;code&gt;ffmpeg&lt;/code&gt; options which were used to create a gif:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-i&lt;/code&gt; specifies the input file path (the movie).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-s&lt;/code&gt; flag is used to set frame size, e.g. &lt;code&gt;1400x800&lt;/code&gt; means width of 1400 and height of 800 pixels.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-pix_fmt&lt;/code&gt; sets the pixel format, e.g. &lt;code&gt;rgb24&lt;/code&gt;, which is a format with 24 bits per pixel: each color channel (red, green, and blue) is allocated 8 bits per pixel. To put it simply, a pixel format is a kind of computer representation for color.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-r&lt;/code&gt; is the frame rate, in our case 20 frames per second. Under the hood, if the original video has a frame rate higher than 20, ffmpeg will drop some frames, and if it has a lower frame rate, it will duplicate some frames to obtain an output video with the desired number of frames per second.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;-f&lt;/code&gt; flag forces the output format, e.g. &lt;code&gt;gif&lt;/code&gt; makes ffmpeg produce gif output.&lt;/p&gt;

&lt;p&gt;And here are &lt;code&gt;gifsicle&lt;/code&gt; options:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--optimize&lt;/code&gt; is used to shrink the resulting gif. Three levels are supported; level 3 takes more time to process but gives the best optimization.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--delay&lt;/code&gt; sets the delay between gif frames in hundredths of a second, in our case 0.03 s. The larger the value, the slower the gif plays, but if the value is too high the gif will appear to lag.&lt;/p&gt;

&lt;p&gt;You can even set a handy shortcut in your bash to just type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gifly input.mov output.gif
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can do this by adding a function in your &lt;code&gt;~/.bash_profile&lt;/code&gt; or &lt;code&gt;~/.bashrc&lt;/code&gt; on macOS or Linux. &lt;/p&gt;

&lt;p&gt;Add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gifly() {
    ffmpeg -i "$1" -s 1400x800 -pix_fmt rgb24 -r 20 -f gif -  | gifsicle --optimize=3 --delay=3 &amp;gt; "$2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it, &lt;code&gt;ffmpeg&lt;/code&gt; rocks! Enjoy creating gifs without any special software.&lt;/p&gt;

</description>
      <category>ffmpeg</category>
      <category>gifsicle</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Terraform 0.12 "is empty tuple" error in module</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Fri, 06 Sep 2019 08:14:42 +0000</pubDate>
      <link>https://dev.to/piczmar_0/terraform-0-12-is-empty-tuple-error-in-module-17ie</link>
      <guid>https://dev.to/piczmar_0/terraform-0-12-is-empty-tuple-error-in-module-17ie</guid>
      <description>&lt;p&gt;I'm struggling to create a module with optional resources steered with "count" attribute.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/piczmar/terraform-sqs-module"&gt;Here&lt;/a&gt; is a project which demonstrates the problem.&lt;/p&gt;

&lt;p&gt;There is an SQS module which has optional dead-letter queue attached with SQS redrive policy.&lt;/p&gt;

&lt;p&gt;On first run when queue is created with DLQ everything is fine, but when I wanted to remove DLQ the issue appears when &lt;code&gt;terraform plan&lt;/code&gt; is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Invalid index

  on modules/sqs/sqs.tf line 67, in resource "aws_sqs_queue" "regular_queue_with_dl":
  67:   redrive_policy             = var.attach_dead_letter_config ? data.template_file.regular_queue_redrive_policy[count.index].rendered : null
    |----------------
    | count.index is 0
    | data.template_file.regular_queue_redrive_policy is empty tuple

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Does anyone know how to properly create such module with optional resource in Terraform 0.12 or fix the issue I am facing?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;EDIT&lt;/em&gt;: I asked the same question on &lt;a href="https://stackoverflow.com/questions/57908294/optional-resources-in-terraform-0-12-module"&gt;stackoverflow&lt;/a&gt; and got the answer which solved my problem.&lt;/p&gt;

&lt;p&gt;The solution was to conditionally assign &lt;code&gt;redrive_policy&lt;/code&gt; to &lt;code&gt;null&lt;/code&gt; when the queue should not use it, e.g.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_sqs_queue" "regular_queue" {
  redrive_policy = var.attach_dead_letter_config ? templatefile(
    "${path.module}/redrive_policy.tmpl", {
      # (...whatever you had in "vars" in the template_file data resource...)  
    },
  ) : null
...
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In addition, we can inline the template file instead of creating a separate data resource.&lt;/p&gt;

&lt;p&gt;My updated example is &lt;a href="https://github.com/piczmar/terraform-sqs-module/commit/7ad3018b6ba945e0b1d3a459cdc5dcc645959d49"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>discuss</category>
      <category>aws</category>
    </item>
    <item>
      <title>RESTful API design concerns</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Fri, 26 Jul 2019 22:42:34 +0000</pubDate>
      <link>https://dev.to/piczmar_0/restful-api-design-concerns-n8j</link>
      <guid>https://dev.to/piczmar_0/restful-api-design-concerns-n8j</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Flfv7sjpprc9hne7rsd6h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Flfv7sjpprc9hne7rsd6h.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;-- Photo by &lt;a href="https://unsplash.com/@omerrana?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Omer Rana&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/car-park?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;While working on a real production system recently, I asked myself a question:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Should REST API be constrained by the current architecture choices?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To illustrate the problem I invented an example.&lt;br&gt;
Let's imagine a system which is used for renting cars. The business is a sort of Airbnb for car rental. Small companies which rent cars can register their spot in the system to reach a wider range of potential customers. The system allows them to manage their own cars, but not the cars of any other rental spot.&lt;/p&gt;

&lt;p&gt;Let's imagine we're a startup building the system. We start small and use a relational database to store all cars in a single table. Each car is identified by a unique ID in this table. &lt;br&gt;
We want to expose a RESTful API for our system.&lt;/p&gt;

&lt;p&gt;Among others, we would need APIs to browse all cars in a spot and get single car details.&lt;/p&gt;

&lt;p&gt;The API for listing all cars in a spot could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /spots/{spotId}/cars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It would return a list of cars from which we could get IDs of the cars.&lt;/p&gt;

&lt;p&gt;The API for getting a car by ID could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /spots/{spotId}/cars/{carId}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /cars/{carId}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we want to be aligned with good practices of API design, we've decided to go with the longer path, because the cars are resources which cannot exist alone and always belong to a given spot. The path &lt;code&gt;/spots/{spotId}/cars&lt;/code&gt; clearly explains the relationship.&lt;/p&gt;

&lt;p&gt;However, the &lt;code&gt;spotId&lt;/code&gt; in the path is redundant.&lt;br&gt;
Since we have all the cars in a single table and we know the car ID, because we got it from the &lt;code&gt;/spots/{spotId}/cars&lt;/code&gt; endpoint, the only variable we really need is the &lt;code&gt;carId&lt;/code&gt;.&lt;br&gt;
Of course, in our relational database we will have a relation from a car to a spot and we could add the &lt;code&gt;spotId&lt;/code&gt; to our query, but it's not crucial.&lt;/p&gt;

&lt;p&gt;E.g. we could have a query like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select c.* from cars c inner join spots s
on s.id = c.spot_id
where s.id = :spotId and c.id = :carId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;but it would get the same result as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select * from cars 
where id = :carId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, should we use &lt;code&gt;/spots/{spotId}/cars/{carId}&lt;/code&gt; or &lt;code&gt;/cars/{carId}&lt;/code&gt; as the endpoint path?&lt;/p&gt;

&lt;p&gt;I've been thinking about it and both options have pros &amp;amp; cons. As mentioned before, the longer one sounds more appropriate from the semantics of the API perspective, but the shorter one is easier to use and implement in the current state of the backend architecture.&lt;/p&gt;

&lt;p&gt;If we think about the evolution of our service, we can imagine that we may want to split the cars table into a separate table per spot. This may happen if the volume of data grows, or if we want to distribute the database and place instances in locations near each spot (for better performance and scaling). Each car ID would then be unique only within a single DB instance (or a cluster of instances serving a specific spot's location). We could then only distinguish a car by the pair of &lt;code&gt;spotId&lt;/code&gt; and &lt;code&gt;carId&lt;/code&gt;, and the longer API path would make more sense.&lt;/p&gt;

&lt;p&gt;Finally, I answered myself: &lt;br&gt;
An API does not stand still. When the architecture evolves, so does the API.&lt;br&gt;
What currently makes sense and is simple (&lt;code&gt;/cars/{id}&lt;/code&gt;) may not be applicable anymore in the future. If I later need to split car storage into a separate table or database for each car rental spot, the new API may look like &lt;code&gt;/spots/{spotId}/cars/{carId}&lt;/code&gt;. On the other hand, this might never happen, and as Donald Knuth said, “Premature optimization is the root of all evil”.&lt;/p&gt;

&lt;p&gt;What is your answer to the problem? If you have more thoughts, please share them with me and other readers in the comments.&lt;/p&gt;

</description>
      <category>rest</category>
      <category>api</category>
      <category>design</category>
      <category>discuss</category>
    </item>
    <item>
      <title>How to conditionally upload Lambda artifact to s3 with Terraform?</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Fri, 24 May 2019 15:44:57 +0000</pubDate>
      <link>https://dev.to/piczmar_0/how-to-conditionally-upload-lambda-artifact-to-s3-with-terraform-48j6</link>
      <guid>https://dev.to/piczmar_0/how-to-conditionally-upload-lambda-artifact-to-s3-with-terraform-48j6</guid>
      <description>&lt;p&gt;I have 2 projects on Gitlab. One with terraform to provision infrastructure, another with Lambda code. Lambda is configured to be deployed from S3 bucket.&lt;/p&gt;

&lt;p&gt;In Terraform I have a dummy ZIP file uploaded to S3. I had to provide some ZIP, otherwise Terraform complained during apply.&lt;/p&gt;

&lt;p&gt;I use a separate project to keep the Lambda code, which has its own build pipeline and deploys to the same S3 bucket. It should overwrite the dummy ZIP on deploy.&lt;/p&gt;

&lt;p&gt;The problem is that whenever Terraform runs, the real ZIP is overwritten with the dummy one again, so I need to redeploy the real ZIP from the other project.&lt;/p&gt;

&lt;p&gt;I don't want to keep function code together with Terraform project.&lt;br&gt;
I also don't want my Terraform build to always trigger another project build (containing function source code), because most often Lambda does not change but other resources are modified in Terraform configuration.&lt;/p&gt;

&lt;p&gt;I thought about having another Lambda triggered when a ZIP is uploaded to S3, which would verify whether it is the real one or the dummy. &lt;br&gt;
If it is the dummy, it would trigger the other project's deployment using the GitLab API.&lt;/p&gt;

&lt;p&gt;Is there any easy solution to my problem?&lt;/p&gt;

</description>
      <category>help</category>
      <category>lambda</category>
      <category>terraform</category>
      <category>discuss</category>
    </item>
    <item>
      <title>AWS RDS dump / restore / view progress</title>
      <dc:creator>Marcin Piczkowski</dc:creator>
      <pubDate>Thu, 14 Mar 2019 00:20:54 +0000</pubDate>
      <link>https://dev.to/piczmar_0/aws-rds-dump--restore--view-progress-bij</link>
      <guid>https://dev.to/piczmar_0/aws-rds-dump--restore--view-progress-bij</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fd28w63qeobp8327jfjkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fd28w63qeobp8327jfjkc.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I bet you have been in a situation where you had to dump a production database for some investigation, testing or even development.&lt;/p&gt;




&lt;p&gt;Off topic:&lt;br&gt;
I won't cover sensitive data and data obfuscation here, as they are out of the scope of this post, but I just wanted to remind you that you're dealing with real people's data: keep in mind regulations like GDPR, and be careful not to send emails to real users if you are testing your apps against a production DB.&lt;br&gt;
In any case, it's good to replace real users' sensitive data with some dummy values.&lt;/p&gt;



&lt;p&gt;Usually the production DB is secured in an AWS VPC (virtual private cloud) and no one should be able to connect directly without a VPN. Often the DB is only accessible from a server inside the VPC which serves as a "bastion": you can access other servers inside the VPC from it, but not from the public internet.&lt;/p&gt;

&lt;p&gt;Below are a few tips which you may find useful in these circumstances:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dump a MySQL database from RDS straight to your localhost.
This means you do not have to create the dump file on the "bastion" server and then copy it to your local machine, because you can create the dump directly on your localhost
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i ssh_key.pem ec2-user@bastion.host  \
mysqldump -P 3306 -h rds.host -u dbuser --password=dbpassword dbname &amp;gt; dumpfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This command will ssh to the bastion server and execute the &lt;code&gt;mysqldump&lt;/code&gt; command there, but the result is redirected to the file &lt;code&gt;dumpfile&lt;/code&gt; on your localhost.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you're not brave enough to dump the DB from the command line and you prefer a graphical tool like &lt;a href="https://www.sequelpro.com" rel="noopener noreferrer"&gt;Sequel Pro&lt;/a&gt;, you can still do it using an &lt;a href="https://www.howtoforge.com/reverse-ssh-tunneling" rel="noopener noreferrer"&gt;ssh tunnel&lt;/a&gt;.
This means you map a remote port of your RDS host to a localhost port, and the bastion server is used to tunnel the traffic.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i ssh_key.pem -N -L LOCAL_PORT:rds.host:RDS_PORT ec2-user@bastion.host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;While it runs you should be able to connect with your favourite client, e.g.:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Feq3knq8rf2659wsrlcvd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Feq3knq8rf2659wsrlcvd.png" alt="Example connection configuration from Sequel Pro"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you dump or restore large DBs it can be tricky to see the progress. The command usually looks like it is hanging, e.g. for restore:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -P DB_PORT -u dbuser --password=dbpassword  dbname &amp;lt; dumpfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;There is a nice but not so well-known tool on *nix systems called &lt;a href="https://catonmat.net/unix-utilities-pipe-viewer" rel="noopener noreferrer"&gt;Pipe Viewer&lt;/a&gt; with which you can track the progress. &lt;/p&gt;

&lt;p&gt;-- for dump:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -P 3306 -h rds.host -u dbuser --password=dbpassword dbname | pv -W &amp;gt; dumpfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-- for restore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pv dumpfile | mysql -P DB_PORT -h db.host -u dbuser --password=dbpassword  dbname
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you see a nice progress bar from Pipe Viewer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F1n0efvrqzzydq2qx3tfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F1n0efvrqzzydq2qx3tfg.png" alt="Pipe Viewer in action"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Thanks for reading! &lt;/p&gt;

&lt;p&gt;You can connect with me on &lt;a href="https://twitter.com/piczmar" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or subscribe to my mailing list. I will occasionally update you about my recent work.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://eepurl.com/ghDSdz" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F6wq5xr06op3k7jbn6p1x.png" alt="Subscribe"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
      <category>dump</category>
      <category>mysql</category>
    </item>
  </channel>
</rss>
