<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mirek Sedzinski</title>
    <description>The latest articles on DEV Community by Mirek Sedzinski (@msedzins).</description>
    <link>https://dev.to/msedzins</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F431433%2F2cc06812-dac9-4f25-b4ed-b3399dbbfc62.png</url>
      <title>DEV Community: Mirek Sedzinski</title>
      <link>https://dev.to/msedzins</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/msedzins"/>
    <language>en</language>
    <item>
      <title>@Autowired magic in SpringBoot</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Sun, 16 Jun 2024 10:01:26 +0000</pubDate>
      <link>https://dev.to/msedzins/autowired-magic-in-springboot-fb7</link>
      <guid>https://dev.to/msedzins/autowired-magic-in-springboot-fb7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Disclosure:&lt;/strong&gt; I'm quite an experienced developer (currently go/python/bit of rust, scala/c# in the past), but a newbie when it comes to Spring Boot. &lt;/p&gt;

&lt;p&gt;Recently, I had to use it in one of my projects, and somewhere in the code I came across this interesting annotation:&lt;br&gt;
&lt;code&gt;@Autowired&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;When I read about it, I couldn't believe my eyes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In Spring Boot, the @Autowired annotation is used for automatic dependency injection. &lt;br&gt;
This means that Spring will automatically resolve and inject any beans that your object depends on.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'm familiar with Dependency Injection, but I would never have thought that anyone could use, or even propose, such an approach. &lt;/p&gt;

&lt;p&gt;It's pure magic. I strongly believe that DI should be explicit, and the developer should be aware of what dependencies are injected into his/her class.&lt;/p&gt;

&lt;p&gt;Here is, in my opinion, a much better approach to the problem:&lt;br&gt;
&lt;a href="https://github.com/google/wire"&gt;https://github.com/google/wire&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wire is a compile-time dependency injection tool. Dependencies are defined in a separate file, and the developer is always aware of what is injected into his/her class. &lt;/p&gt;
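&lt;p&gt;To make the contrast concrete: the explicit style essentially boils down to constructor injection, where the caller builds the dependency and passes it in. Here is a minimal sketch of that principle (in Rust rather than Java or Go, and with made-up types, purely for illustration):&lt;/p&gt;

```rust
// A minimal sketch of explicit dependency injection: the caller
// constructs the dependency and hands it over, so nothing is hidden.
// All types here are made up for illustration.

trait Mailer {
    fn send(&self, to: &str, body: &str) -> String;
}

struct SmtpMailer;

impl Mailer for SmtpMailer {
    fn send(&self, to: &str, body: &str) -> String {
        format!("smtp -> {}: {}", to, body)
    }
}

// The service states its dependency in its constructor signature,
// so the reader always knows what is injected.
struct SignupService<M: Mailer> {
    mailer: M,
}

impl<M: Mailer> SignupService<M> {
    fn new(mailer: M) -> Self {
        SignupService { mailer }
    }

    fn register(&self, user: &str) -> String {
        self.mailer.send(user, "welcome")
    }
}

fn main() {
    // The wiring is explicit: we can see exactly what is injected.
    let service = SignupService::new(SmtpMailer);
    println!("{}", service.register("alice"));
}
```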

&lt;p&gt;I'm curious about your opinion on this topic. Do you think that @Autowired is a good approach to DI? Or maybe you prefer a more explicit approach like Wire? Or maybe you have your own way of doing DI? I'm looking forward to your answers.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Shamir's secret sharing</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Sat, 03 Feb 2024 20:20:11 +0000</pubDate>
      <link>https://dev.to/msedzins/shamirs-secret-sharing-2j75</link>
      <guid>https://dev.to/msedzins/shamirs-secret-sharing-2j75</guid>
      <description>&lt;p&gt;Shamir's secret sharing (SSS) is an efficient secret sharing algorithm for distributing private information (the "secret") among a group. The secret cannot be revealed unless a quorum of the group acts together to pool their knowledge.&lt;/p&gt;

&lt;p&gt;Behind the concept sits some quite simple yet powerful and beautiful math.&lt;/p&gt;

&lt;p&gt;In my proof of concept I expose SSS implemented in Rust as a WebAssembly module. It is then used in a simple web application where interaction with the WebAssembly module happens via JavaScript.&lt;br&gt;
Please have a look: &lt;br&gt;
&lt;a href="https://github.com/msedzins/shamir_rust"&gt;https://github.com/msedzins/shamir_rust&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cryptography</category>
      <category>webassembly</category>
    </item>
    <item>
      <title>Learning Rust - Merkle Tree</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Sun, 18 Sep 2022 23:08:36 +0000</pubDate>
      <link>https://dev.to/msedzins/learning-rust-merkel-tree-9p</link>
      <guid>https://dev.to/msedzins/learning-rust-merkel-tree-9p</guid>
      <description>&lt;p&gt;Recently, in my spare time, I've started to learn Rust and as an exercise I've decided to implement a Merkle Tree (&lt;a href="https://en.wikipedia.org/wiki/Merkle_tree"&gt;Wiki&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Sounds easy, but it turned out to be a real adventure :) In this post I'm sharing some of my experiences.&lt;/p&gt;

&lt;h2&gt;Step 1&lt;/h2&gt;

&lt;p&gt;I started with preparing the data structures. Here is something really basic that would work in many languages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;MerkelTree&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;leaves&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MerkelNode&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;MerkelNode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MerkelNode&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Merkle tree consists of a list of nodes labelled as "leaves". Each leaf contains the hash of the corresponding data block and a link to its parent node. A parent node groups two leaves (it's a binary tree) and contains the hash of their hashes. It also contains a link to its own parent node, which groups two nodes at the next level up. That rule repeats upwards until we end up with a single node at the top - the root node. It contains the so-called "root hash" and has no parent.&lt;/p&gt;
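&lt;p&gt;The hashing rule described above can be sketched independently of the struct design. In this illustration, std's &lt;code&gt;DefaultHasher&lt;/code&gt; stands in for a real cryptographic hash such as SHA-256, and an odd node is promoted unchanged (one of several common conventions):&lt;/p&gt;

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// DefaultHasher stands in for a cryptographic hash here; a real Merkle
// tree would use something like SHA-256.
fn h(data: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

// Hash every block, then repeatedly pair up hashes and hash each pair,
// until a single root hash remains.
fn root_hash(blocks: &[&str]) -> u64 {
    let mut level: Vec<u64> = blocks.iter().map(|b| h(b)).collect();
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| match pair {
                [a, b] => h(&format!("{}{}", a, b)),
                [a] => *a, // odd node promoted unchanged
                _ => unreachable!(),
            })
            .collect();
    }
    level[0]
}

fn main() {
    let root = root_hash(&["block1", "block2", "block3", "block4"]);
    println!("root hash: {}", root);
}
```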

&lt;p&gt;Needless to say - the structures I've designed didn't work. &lt;br&gt;
The error said:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;MerkelNode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="o"&gt;^^^^^^^^^^^^^^^^^&lt;/span&gt; &lt;span class="n"&gt;recursive&lt;/span&gt; &lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;has&lt;/span&gt; &lt;span class="n"&gt;infinite&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It turned out to be a quite well-known error in Rust: at compile time the compiler needs to know how much space a type takes up, which is not possible with the approach I had taken.&lt;/p&gt;

&lt;p&gt;There is an easy solution to that problem described here:&lt;br&gt;
&lt;a href="https://doc.rust-lang.org/book/ch15-01-box.html#enabling-recursive-types-with-boxes"&gt;Enabling-recursive-types-with-boxes&lt;/a&gt;.&lt;br&gt;
So, let's try it out.&lt;/p&gt;
&lt;h2&gt;Step 2&lt;/h2&gt;

&lt;p&gt;I've used "box" smart pointer which allows storing data on the heap. In that way my structure has a known, fixed size at compile time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;MerkelNode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Box&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MerkelNode&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above compiles, so I was happy to start the implementation. However, even after a few hours I was still not able to make my code compile...&lt;br&gt;
The problem was not with the logic. The problem was that I was not able to express what I wanted in a way that was acceptable to Rust.&lt;/p&gt;

&lt;p&gt;Then it came to me - it just can't work and the problem is with the structures again.&lt;br&gt;
Two core rules in Rust say:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each value in Rust has an owner.&lt;/li&gt;
&lt;li&gt;There can only be one owner at a time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But in my case I have, for example, two leaves that point to the same parent. Which one of them is the owner of the parent?&lt;/p&gt;
&lt;h2&gt;Step 3&lt;/h2&gt;

&lt;p&gt;OK, I can't have two owners, so that means I have to borrow values and use references instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;MerkelNode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Box&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;MerkelNode&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, the above code will not compile, because references in structures require lifetimes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;MerkelNode&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt;'a&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Box&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt;'a&lt;/span&gt; &lt;span class="n"&gt;MerkelNode&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt;'a&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code looks a bit cluttered, but at least it compiles. Let's continue with the implementation, then. &lt;/p&gt;

&lt;p&gt;After another few hours, once again, I came to the conclusion that it can't work this way.&lt;br&gt;
I can borrow the value multiple times. That's fine. But what about this rule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each value in Rust has an owner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Looks like I don't have an owner now. I mean - I have it somewhere along the line, but I don't have any dedicated place to store it for the entire duration of the program.&lt;/p&gt;
&lt;h2&gt;Step 4&lt;/h2&gt;

&lt;p&gt;At that point I came to the conclusion that I was doing something fundamentally wrong. There must be a pattern in Rust that allows me to do what I want...&lt;/p&gt;

&lt;p&gt;After some googling, I finally found this:&lt;br&gt;
&lt;a href="https://doc.rust-lang.org/book/ch15-04-rc.html"&gt;The Reference Counted Smart Pointer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It provides the multiple ownership capability, whoa!&lt;/p&gt;

&lt;p&gt;Definition of the structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;MerkelNode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Rc&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MerkelNode&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, finally, I can have two owners for the same parent node.&lt;/p&gt;

&lt;p&gt;There is only one final catch here - Rc&amp;lt;T&amp;gt; allows sharing data for reading only. That is not sufficient in my case, because once I've created a node, I need to modify it later with the pointer to its parent, which is not known at creation time.&lt;/p&gt;
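&lt;p&gt;A quick sketch of that catch: while more than one &lt;code&gt;Rc&lt;/code&gt; handle is alive, Rust refuses to hand out a mutable reference at all:&lt;/p&gt;

```rust
use std::rc::Rc;

fn main() {
    let mut parent = Rc::new(String::from("root"));

    // Reading through any number of clones is fine.
    let child_view = Rc::clone(&parent);
    assert_eq!(*child_view, "root");

    // But mutation is only allowed while there is exactly one owner:
    // with two Rc handles alive, get_mut refuses to hand out &mut.
    assert!(Rc::get_mut(&mut parent).is_none());

    drop(child_view);
    // Once the clone is gone, mutation becomes possible again.
    Rc::get_mut(&mut parent).unwrap().push_str("-hash");
    assert_eq!(*parent, "root-hash");
}
```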

&lt;p&gt;After an hour or so....&lt;/p&gt;

&lt;h2&gt;Step 5&lt;/h2&gt;

&lt;p&gt;It turns out there is an &lt;a href="https://doc.rust-lang.org/book/ch15-05-interior-mutability.html#having-multiple-owners-of-mutable-data-by-combining-rct-and-refcellt"&gt;interior mutability pattern&lt;/a&gt; in Rust that will allow me to do exactly what I need.&lt;/p&gt;

&lt;p&gt;The final definition of the structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;MerkelNode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="n"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Rc&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;RefCell&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MerkelNode&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="n"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this structure I was able to implement a working solution. I'm sure there are a lot of things still to be optimised in my code, but for today I'll call it a day and celebrate :)&lt;/p&gt;
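&lt;p&gt;For completeness, here is a minimal usage sketch of the final structure (with string concatenation standing in for real hashing): two leaves are created first and only later updated to co-own the same parent, which is exactly what &lt;code&gt;Rc&lt;/code&gt; plus &lt;code&gt;RefCell&lt;/code&gt; enables:&lt;/p&gt;

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct MerkelNode {
    parent: Option<Rc<RefCell<MerkelNode>>>,
    hash: String,
}

fn main() {
    // Create two leaves first; their parent is not known yet.
    let leaf1 = Rc::new(RefCell::new(MerkelNode { parent: None, hash: "h1".to_string() }));
    let leaf2 = Rc::new(RefCell::new(MerkelNode { parent: None, hash: "h2".to_string() }));

    // Build the parent from the leaves' hashes (string concatenation
    // stands in for a real hash function here).
    let combined = format!("{}{}", leaf1.borrow().hash, leaf2.borrow().hash);
    let parent = Rc::new(RefCell::new(MerkelNode { parent: None, hash: combined }));

    // Thanks to Rc + RefCell, both existing leaves can now be mutated
    // to co-own the same parent node.
    leaf1.borrow_mut().parent = Some(Rc::clone(&parent));
    leaf2.borrow_mut().parent = Some(Rc::clone(&parent));

    assert_eq!(parent.borrow().hash, "h1h2");
    assert_eq!(Rc::strong_count(&parent), 3); // the parent variable + two leaves
    println!("root hash: {}", parent.borrow().hash);
}
```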

</description>
      <category>rust</category>
    </item>
    <item>
      <title>Getting details of blockchain transaction with GetTransactionByID</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Sun, 15 May 2022 17:20:04 +0000</pubDate>
      <link>https://dev.to/msedzins/getting-details-of-blockchain-transaction-with-gettransactionbyid-1dln</link>
      <guid>https://dev.to/msedzins/getting-details-of-blockchain-transaction-with-gettransactionbyid-1dln</guid>
      <description>&lt;p&gt;In HyperLedger Fabric, to get details of a given transaction, one can use QSCC system chaincode call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GetTransactionByID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem I came across was with parsing the response. I couldn't find any good explanation of how to do this.&lt;/p&gt;

&lt;p&gt;Finally, with a bit of experimentation, I managed to figure it out. Sample code and a description are available here:&lt;br&gt;
&lt;a href="https://github.com/msedzins/GetTransactionByID"&gt;https://github.com/msedzins/GetTransactionByID&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>hyperledger</category>
      <category>fabric</category>
    </item>
    <item>
      <title>DevOps in Oracle Blockchain Platform</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Mon, 24 Jan 2022 00:51:09 +0000</pubDate>
      <link>https://dev.to/msedzins/devops-in-oracle-blockchain-platform-313f</link>
      <guid>https://dev.to/msedzins/devops-in-oracle-blockchain-platform-313f</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://docs.oracle.com/en/cloud/paas/blockchain-cloud/index.html"&gt;Oracle Blockchain Platform&lt;/a&gt; is built on top of the &lt;a href="https://hlf.readthedocs.io/en/latest/"&gt;HyperLedger Fabric&lt;/a&gt;.&lt;br&gt;
It provides additional capabilities like a web administration console, Oracle Berkeley DB (with SQL syntax support) as the World State, improved transaction validation and - what we will focus on in this post - a set of REST APIs in front of the native Fabric gRPC APIs.&lt;/p&gt;

&lt;p&gt;REST APIs cover things like: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;chaincode installation, approval, committing&lt;/li&gt;
&lt;li&gt;chaincode invocation&lt;/li&gt;
&lt;li&gt;updating configuration of nodes (peer, ca, orderer)&lt;/li&gt;
&lt;li&gt;checking node health, etc. (full list of APIs is available here: &lt;a href="https://docs.oracle.com/en/cloud/paas/blockchain-cloud/restoci/index.html"&gt;API documentation&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Goal&lt;/h1&gt;

&lt;p&gt;Let's use said REST APIs to rapidly automate daily devops tasks. To that end we need a CLI that we can use easily in our CI/CD pipelines or to perform ad-hoc tasks.&lt;/p&gt;
&lt;h1&gt;Implementation&lt;/h1&gt;
&lt;h2&gt;Step 1&lt;/h2&gt;

&lt;p&gt;First, we need to go to the OBP web console and download the swagger file with all the REST API definitions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fIdTgRyy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lrh2n5y6fu6hxrerofj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fIdTgRyy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lrh2n5y6fu6hxrerofj4.png" alt="OBP UI" width="880" height="335"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Step 2&lt;/h2&gt;

&lt;p&gt;Now, we need to generate a REST client from the swagger spec file. It will be a Go client in my case, so I will use the &lt;a href="https://github.com/go-swagger/go-swagger"&gt;go-swagger&lt;/a&gt; tool.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./swagger generate client  -f OBP_swagger.yml -A obp-admin&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"OBP_swagger.yml" - name of a swagger file&lt;/li&gt;
&lt;li&gt;"obp-admin" - arbitrary client name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The generated code is ready to be used straight away. It contains Go structures that define the requests/responses for the respective APIs, and a "client" package that orchestrates all the HTTP communication. &lt;/p&gt;
&lt;h2&gt;Step 3&lt;/h2&gt;

&lt;p&gt;We can start to implement calls to REST APIs. Let's make a call to &lt;a href="https://docs.oracle.com/en/cloud/paas/blockchain-cloud/restoci/op-console-admin-api-v2-chaincodes-get.html"&gt;Get Installed Chaincode List&lt;/a&gt; endpoint as an example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The code below uses the "cobra" library to create a CLI application, but of course it's not a requirement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;ListChaincodesCmd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;cobra&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Use&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;"list"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Short&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Get a list of installed chaincodes, optionally for a given peer."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;cobra&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="c"&gt;//set basic authentication header (user + password)&lt;/span&gt;
        &lt;span class="n"&gt;auth&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;runtimeClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BasicAuth&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Flag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Flag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"passwd"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

        &lt;span class="c"&gt;//set URL to OBP&lt;/span&gt;
        &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DefaultTransportConfig&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithHost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Flag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"obpHost"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

        &lt;span class="c"&gt;//create client&lt;/span&gt;
        &lt;span class="n"&gt;cl&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewHTTPClientWithConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;strfmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Default&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c"&gt;//optionally take id of a peer&lt;/span&gt;
        &lt;span class="n"&gt;peerId&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Flag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"peerId"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c"&gt;//make a call to REST API, parse the response&lt;/span&gt;
        &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;cl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Chaincode&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetInstalledChaincodes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chaincode&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewGetInstalledChaincodesParams&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithPeerID&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;peerId&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Unexpected error:%v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c"&gt;//display the response (list of chaincodes)&lt;/span&gt;
        &lt;span class="n"&gt;aJSON&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MarshalIndent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetPayload&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\t&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;%v &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;aJSON&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;Summary&lt;/h1&gt;

&lt;p&gt;Using the swagger spec, we can rapidly implement a devops CLI tool for OBP. &lt;/p&gt;

&lt;p&gt;A few practical things to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;in case of failure (for example, due to bad input parameters) the response from the server can be very vague - an HTTP code without any further information. It's good to check the server logs then; usually they are very descriptive.&lt;/li&gt;
&lt;li&gt;sometimes, changes to the autogenerated code are needed. One example is when a "bool" parameter in an API request is required but is marked as "omitempty" in the autogenerated code - we need to remove the "omitempty" tag then (otherwise, if the parameter is set to "false", it will be omitted from the request payload, which will result in an error). &lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>The misconceptions about Polyglot programming</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Sun, 18 Apr 2021 21:08:55 +0000</pubDate>
      <link>https://dev.to/msedzins/the-misconceptions-about-polyglot-programming-5bin</link>
      <guid>https://dev.to/msedzins/the-misconceptions-about-polyglot-programming-5bin</guid>
      <description>&lt;p&gt;Back in the day, a life was easy. N-tier architecture reigned the world and each tier was implemented roughly in one technology, for example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;data layer - relational database&lt;/li&gt;
&lt;li&gt;business layer - Java EE/.Net &lt;/li&gt;
&lt;li&gt;user interface - HTML + CSS + JavaScript&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then microservices and functions came along, together with the approach called &lt;a href="http://memeagora.blogspot.com/2006/12/polyglot-programming.html"&gt;Polyglot programming&lt;/a&gt; (and related term &lt;a href="https://martinfowler.com/bliki/PolyglotPersistence.html"&gt;PolyglotPersistence&lt;/a&gt; which pertains specifically to the data layer).&lt;/p&gt;

&lt;p&gt;So, don't get me wrong. I completely accept the fact that there are many languages out there and different languages can be a better fit for different things. &lt;/p&gt;

&lt;p&gt;If I want to provision infrastructure - I use Terraform. If I want to do some scripting - I use Bash/Python. If I want to analyse data I would go with Python or R. &lt;/p&gt;

&lt;p&gt;What I strongly disagree with is introducing new technologies to projects just "because I can", "because I want to play with something new" or "because technology X is a bit better/fancier/more popular than technology Y".&lt;/p&gt;

&lt;h4&gt;Different languages on the project&lt;/h4&gt;

&lt;p&gt;As architects and developers, we design and implement systems so that they meet all customer requirements, and at the same time we strive to keep them as simple as possible.&lt;br&gt;
Simplicity is key to maintainability and extensibility. Every new requirement, every new test we create, every new library we use makes the system more complex.&lt;br&gt;
And introducing a new language is one of the most complex things I can imagine...&lt;/p&gt;

&lt;p&gt;This complexity is not about the syntax. Syntax is usually easy. And it's not about writing one-off code. Writing code that you plan to throw away tomorrow is super easy. &lt;/p&gt;

&lt;p&gt;But when we write enterprise grade systems the bar is set much higher. Just a few things off the top of my head:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Idiomatic ways of approaching common problems (logging, error handling, input parameters etc.)&lt;/li&gt;
&lt;li&gt;New libraries (when approaching a new language, we usually have to learn completely new libraries that do exactly the same things as the libraries we already know, but in completely different ways)&lt;/li&gt;
&lt;li&gt;Package/module/dependency management (again: a completely new way of doing the same things)&lt;/li&gt;
&lt;li&gt;A new toolset (compilation, testing, debugging)&lt;/li&gt;
&lt;li&gt;Configuration of the dev environment&lt;/li&gt;
&lt;li&gt;Integration with CI/CD framework&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And of course, when one person in the team starts to use a new technology, all the other members sooner or later have to learn it too (to do code reviews or to stand in when needed).&lt;/p&gt;

&lt;h4&gt;
  
  
  Different databases on the project
&lt;/h4&gt;

&lt;p&gt;Let me tell you my story:&lt;br&gt;
One day I was asked to provision MongoDB and Elasticsearch for one of the projects. &lt;br&gt;
I had no experience with those databases, but spinning up containers and exposing the endpoints was super easy. The development team was happy. &lt;/p&gt;

&lt;p&gt;Then, a few "small" activities came along:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;High availability - It turned out that I had to set up two 3-node clusters. And of course, setting up a cluster for MongoDB was very different than for Elasticsearch. After provisioning, I had to test that both clusters worked.&lt;/li&gt;
&lt;li&gt;Backups - For MongoDB it was quite easy, but for Elasticsearch it turned out that a shared volume was required (so an NFS server had to be configured). Again, the backup and recovery procedure had to be tested.&lt;/li&gt;
&lt;li&gt;Security - users, roles, which ports must be open and which can be closed, etc. Each database has completely different ways of doing the same things.&lt;/li&gt;
&lt;li&gt;Patching, monitoring, performance tweaking and troubleshooting.....&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The conclusion is the following:&lt;br&gt;
Of course, one can learn how to operate any database in the world. But it requires significant effort and time. Each new database brings new tools, new concepts and tons of documentation.&lt;/p&gt;

&lt;p&gt;My advice: whenever possible, stay with one general-purpose database to handle most of the traffic. Add new databases only to handle very specific workloads, and only when really necessary. &lt;br&gt;
Some examples of specific workloads are caching, graphs, big data and time series. However, we should also bear in mind that current databases usually support more than one type of workload. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And what is your take on this? I would love to hear your comments and experiences!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>polyglot</category>
    </item>
    <item>
      <title>Why do we need the Abstract Factory pattern?</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Thu, 08 Apr 2021 22:50:39 +0000</pubDate>
      <link>https://dev.to/msedzins/why-do-we-need-abstractfactory-pattern-djj</link>
      <guid>https://dev.to/msedzins/why-do-we-need-abstractfactory-pattern-djj</guid>
      <description>&lt;p&gt;I had a really hard time understanding what are real benefits of using AbstractFactory pattern. Now, when I've finally got it, let me explain it in 6 steps :)&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1
&lt;/h4&gt;

&lt;p&gt;Let's imagine a quite common situation: we've been working on some custom code and we've just realised that we need to reuse some logic from a utility class created by a friend of ours working on the same team.&lt;/p&gt;

&lt;p&gt;It can look like this:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C71cKirK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7h5hwend7mq44u8gqoae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C71cKirK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7h5hwend7mq44u8gqoae.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2
&lt;/h4&gt;

&lt;p&gt;We have working code, but there is at least one problem with it - it violates the &lt;strong&gt;Dependency Inversion&lt;/strong&gt; principle. Let's try to fix it then:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4HArrCku--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3in28wpj87zq27ul3ygs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4HArrCku--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3in28wpj87zq27ul3ygs.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3
&lt;/h4&gt;

&lt;p&gt;Ok, much better. But the next question arises: how do we create the &lt;strong&gt;Utility&lt;/strong&gt; class before using it...? Of course, we have the &lt;strong&gt;Factory&lt;/strong&gt; pattern for this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--drlkxq03--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ssoghja2lvymo2wqwb9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--drlkxq03--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ssoghja2lvymo2wqwb9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our custom code uses the &lt;strong&gt;Factory&lt;/strong&gt; class, which creates an instance of the &lt;strong&gt;Utility&lt;/strong&gt; class and returns it as an interface for us to use.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 4
&lt;/h4&gt;

&lt;p&gt;One thing to notice immediately is that we again violate the &lt;strong&gt;Dependency Inversion&lt;/strong&gt; principle. For some people that can be reason enough to put an AbstractFactory between the custom code and the &lt;strong&gt;Factory&lt;/strong&gt; class. &lt;br&gt;
But we don't have to stick to the rule just for the sake of doing so :)&lt;br&gt;
Let's ignore the &lt;strong&gt;Dependency Inversion&lt;/strong&gt; principle for a moment and see what happens.&lt;/p&gt;

&lt;p&gt;Well, on many occasions nothing will happen... the difference can be seen in more complex scenarios, though.&lt;/p&gt;

&lt;p&gt;Let's imagine that we have more than one &lt;strong&gt;Utility&lt;/strong&gt; class that implements the same interface. It can look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bivEfptg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/grj2f216zjasp4by2xwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bivEfptg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/grj2f216zjasp4by2xwz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5
&lt;/h4&gt;

&lt;p&gt;What is the problem with our design? To notice it, let's look at source code dependencies:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0Ucb2QOi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7gvtq4kia3bxyjepzuxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0Ucb2QOi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7gvtq4kia3bxyjepzuxy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Custom code is placed in File 1. To make it work, we need to import File 2, File 3 and apparently all files that contain all possible implementations of the &lt;strong&gt;Utility&lt;/strong&gt; class - even if we use only one implementation.&lt;br&gt;
It is clearly not an optimal solution.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 6
&lt;/h4&gt;

&lt;p&gt;We would like to apply the &lt;strong&gt;Common Reuse&lt;/strong&gt; principle. In short - we don't want to depend on things that we don't need. &lt;br&gt;
If we use version X of the &lt;strong&gt;Utility&lt;/strong&gt; class - we want to depend only on its source code. Nothing more.&lt;/p&gt;

&lt;p&gt;So, here is the final design with the use of an AbstractFactory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XrUDKYT0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tj1h4o88ho7nicqxb6qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XrUDKYT0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tj1h4o88ho7nicqxb6qd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assuming that each &lt;strong&gt;Factory&lt;/strong&gt; class is a different module, we physically depend only on File 1, File 2, File 3, File 4 and File 5. &lt;br&gt;
For complex and big systems this kind of flexibility can bring many benefits.&lt;/p&gt;

</description>
      <category>solid</category>
      <category>designpatterns</category>
      <category>cleanarchitecture</category>
    </item>
    <item>
      <title>Super lightweight approach to unit testing in Bash</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Sat, 20 Feb 2021 12:54:08 +0000</pubDate>
      <link>https://dev.to/msedzins/super-lightweight-approach-to-unit-testing-in-bash-18n4</link>
      <guid>https://dev.to/msedzins/super-lightweight-approach-to-unit-testing-in-bash-18n4</guid>
      <description>&lt;p&gt;There is already a good, recognised &lt;a href="https://github.com/sstephenson/bats"&gt;testing framework&lt;/a&gt; for Bash.&lt;/p&gt;

&lt;p&gt;But if someone is looking for a minimalistic approach, please check out my repo: &lt;br&gt;
&lt;a href="https://github.com/msedzins/bashUT"&gt;lightweight unit testing&lt;/a&gt;&lt;/p&gt;

</description>
      <category>bash</category>
    </item>
    <item>
      <title>Very simple rule of thumb on when to write tests</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Fri, 01 Jan 2021 22:41:30 +0000</pubDate>
      <link>https://dev.to/msedzins/very-simple-rule-of-thumb-on-when-to-write-tests-1dml</link>
      <guid>https://dev.to/msedzins/very-simple-rule-of-thumb-on-when-to-write-tests-1dml</guid>
      <description>&lt;p&gt;Testing is a critical part of software development process. And there is a ton of literature on when and how to test.  &lt;/p&gt;

&lt;p&gt;I'm not a big fan of sticking to any particular approach just because it's currently popular. &lt;/p&gt;

&lt;p&gt;Over the course of time I found a simple way to identify potential candidates for tests in my code:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;As I write a piece of code, very often I start to feel uncertain whether it will work or not. This feeling is accompanied by the urge to execute the code and validate its behaviour at runtime.&lt;br&gt;
Well, for me this is a strong indicator that I should write tests.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Having said that, please mind that I'm talking about a potential candidate. Sometimes (rarely), on second thought, I come to the conclusion that no tests are needed.&lt;/p&gt;

</description>
      <category>testing</category>
    </item>
    <item>
      <title>A few practical Terraform tips</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Fri, 20 Nov 2020 17:08:28 +0000</pubDate>
      <link>https://dev.to/msedzins/a-few-practical-terraform-tips-521m</link>
      <guid>https://dev.to/msedzins/a-few-practical-terraform-tips-521m</guid>
      <description>&lt;p&gt;Terraform is a popular, open-source infrastructure as a code software tool.&lt;br&gt;
This article aims to present a few tips on how to use it, based on hands-on experience. Readers are assumed to have at least some level of Terraform working knowledge.&lt;/p&gt;

&lt;h1&gt;
  
  
  Teamwork - the State
&lt;/h1&gt;

&lt;p&gt;Let's say we created a bunch of Terraform scripts. Most probably we keep them in the repository of our choice. By doing so, we can easily share them between team members.&lt;br&gt;
The question arises: &lt;strong&gt;what about the State?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;By default, it's stored in a file in the current working directory where Terraform was run. Should it be pushed to the repository together with the Terraform scripts?&lt;br&gt;
Actually, that's not the best idea. The state file is machine-generated and there is a significant probability of frequent merge conflicts between different revisions. Those conflicts would have to be resolved by hand, and it wouldn't be easy.&lt;/p&gt;

&lt;p&gt;There are two options to handle this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Local state - state kept as a file in a shared location. Sharing can be achieved with network-attached storage, or there can be one dedicated "builder" machine reused by the whole team.
&lt;/li&gt;
&lt;li&gt;Remote state - state kept on remote storage. This is a feature of Backends, and there are several of them to choose from. What's good to check and be aware of is whether a given Backend supports a locking mechanism (for example, Oracle Object Storage currently doesn't). The locking mechanism prevents two or more users from accidentally running Terraform at the same time, and thus ensures that each Terraform run begins with the most recently updated State.&lt;/li&gt;
&lt;/ol&gt;
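&lt;p&gt;To make the remote option more concrete, here is a hedged sketch of supplying Backend settings at init time. The S3/DynamoDB Backend choice and all names (bucket, table, region) are hypothetical examples, not something prescribed by this article:&lt;/p&gt;

```shell
# Sketch: write a partial Backend configuration file. The scripts themselves
# would declare an empty backend block, e.g.  terraform { backend "s3" {} }
# All values below are hypothetical.
{
  echo 'bucket         = "my-team-tf-state"        # remote storage for the State'
  echo 'key            = "prod/terraform.tfstate"'
  echo 'region         = "eu-central-1"'
  echo 'dynamodb_table = "tf-state-locks"          # enables the locking mechanism'
} > backend.hcl

# The values are then supplied when initialising:
#   terraform init -backend-config=backend.hcl
```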

&lt;h1&gt;
  
  
  Teamwork - running the scripts
&lt;/h1&gt;

&lt;p&gt;Whatever Backend we use, and regardless of whether it supports the locking mechanism or not, if two users run the same set of Terraform scripts which are out of sync, we are in trouble.&lt;/p&gt;

&lt;p&gt;Let's imagine a situation where two developers pull the same scripts from the repository. Developer A modifies the scripts by adding an additional Compute instance. She runs the scripts and the instance is provisioned. The shared state is updated.&lt;br&gt;
A few minutes after that, Developer B runs his version of the scripts (which he didn't modify). Terraform compares the content of the shared state with the content of the scripts and finds out that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The instance was provisioned on the infrastructure [information from the State]&lt;/li&gt;
&lt;li&gt;There is no instance in the current scripts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Based on the above, Terraform comes to the conclusion that the Compute instance has to be decommissioned. Obviously, this is not what we expected.&lt;/p&gt;

&lt;p&gt;To prevent such situations, one must make sure that Terraform is always run using up-to-date scripts. This can be done by defining a manual process or with a tool.&lt;br&gt;
A CI/CD pipeline or job can be created for that purpose. Or a specific service can be used, like Resource Manager, which is part of the OCI offering.&lt;/p&gt;

&lt;h1&gt;
  
  
  Organising the scripts
&lt;/h1&gt;

&lt;p&gt;Two things should be taken into consideration here: avoiding redundancy and planning for efficient use.&lt;/p&gt;

&lt;p&gt;For redundancy part, one should consider:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Moving common elements to modules to promote reusability&lt;/li&gt;
&lt;li&gt;Using variables to parametrise the scripts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Side note&lt;/strong&gt;&lt;br&gt;
All sensitive data should be removed from the scripts and loaded from external variables. In the post below I describe one possible approach to doing that in a safe way: &lt;a href="https://dev.to/msedzins/sensitive-data-in-bash-scripts-3j5c"&gt;Link&lt;/a&gt;&lt;/p&gt;
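&lt;p&gt;One way to sketch this (the variable name &lt;em&gt;db_password&lt;/em&gt; is hypothetical): Terraform maps environment variables with the &lt;em&gt;TF_VAR_&lt;/em&gt; prefix to declared input variables, so the secret never has to appear in any file in the repository:&lt;/p&gt;

```shell
# Terraform maps TF_VAR_db_password to a declared variable "db_password".
# In a real setup the value would come from a vault or an interactive prompt;
# it is hardcoded here only for illustration.
secret="s3cr3t-value"
export TF_VAR_db_password="$secret"

# terraform plan   # the variable is now populated without touching any .tf file
```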

&lt;p&gt;When it comes to efficiency, we should first reflect on how we are going to provision and decommission our infrastructure. Things that we want to provision/deprovision together should obviously go together in the scripts. &lt;br&gt;
However, at the same time, we should keep in mind that in Terraform we usually use an "everything or nothing" approach. In other words - either we provision everything or nothing. The same holds true for decommissioning. Of course, there are ways to narrow down the scope (the "-target" option can be used to focus Terraform's attention on only a subset of resources), but it should be treated as an exception rather than a rule.&lt;br&gt;
So, it's better to have a few independent sets of scripts which we can run separately and orchestrate as needed, even if they are tightly coupled and pertain to the same piece of software and infrastructure.&lt;/p&gt;

&lt;p&gt;For example, let's say we want to provision a Kubernetes cluster. Instead of putting everything into one big set of scripts, we can divide it into the following components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scripts to provision identity provider&lt;/li&gt;
&lt;li&gt;Load Balancer&lt;/li&gt;
&lt;li&gt;Image registry&lt;/li&gt;
&lt;li&gt;Control + data plane&lt;/li&gt;
&lt;li&gt;Extensions like storage, cert manager, etc.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each component from the list above is a complex thing. It's good to have the possibility to approach them separately or together, depending on the need.&lt;/p&gt;
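&lt;p&gt;Such independent sets of scripts can then be orchestrated with a thin wrapper. The sketch below only echoes the order of operations (the directory names are hypothetical); a real run would execute &lt;em&gt;terraform apply&lt;/em&gt; in each directory:&lt;/p&gt;

```shell
# Apply each independent set of scripts in dependency order.
apply_component() {
  local dir="$1"
  echo "applying ${dir}"
  # real run: (cd "$dir" ; terraform init ; terraform apply -auto-approve)
}

for component in identity-provider load-balancer image-registry cluster extensions; do
  apply_component "$component"
done
```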

&lt;h1&gt;
  
  
  Terraform vs Ansible
&lt;/h1&gt;

&lt;p&gt;A frequent question is: which tool should be used, Ansible or Terraform?&lt;/p&gt;

&lt;p&gt;To answer it, let's first differentiate between the management of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Infrastructure - vm, storage, networking etc.&lt;/li&gt;
&lt;li&gt;Configuration - software installed on top of the infrastructure &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Well, we can definitely use either tool to cover both areas. It especially makes sense for easy use cases. For example, we can go with Terraform only and use cloud-init/provisioners for configuration management.&lt;/p&gt;

&lt;p&gt;However, in more complex situations, in my opinion it's good to use the tools for what they were originally designed for, which means: Terraform for infrastructure and Ansible for configuration management. It just makes things easier and more natural.&lt;/p&gt;

&lt;h1&gt;
  
  
  Idempotency
&lt;/h1&gt;

&lt;p&gt;And one last important piece of advice: regardless of the tool used, scripts should be idempotent. It increases the implementation effort a bit (especially in the case of Ansible) but pays off greatly later on.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>ansible</category>
    </item>
    <item>
      <title>Using Oracle Streaming service as a managed Apache Kafka</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Tue, 22 Sep 2020 15:59:30 +0000</pubDate>
      <link>https://dev.to/msedzins/using-oracle-streaming-service-as-a-managed-apache-kafka-3epo</link>
      <guid>https://dev.to/msedzins/using-oracle-streaming-service-as-a-managed-apache-kafka-3epo</guid>
      <description>&lt;p&gt;&lt;strong&gt;The Oracle Cloud Infrastructure Streaming&lt;/strong&gt; is a fully managed and scalable service for processing high-volume streams of data in real time.&lt;br&gt;
We can think of it as an equivalent of Apache Kafka and actually - they are API-compatibile, which means we can use applications written for Kafka with Streaming without having to rewrite a code. &lt;/p&gt;
&lt;h1&gt;
  
  
  Problem statement
&lt;/h1&gt;

&lt;p&gt;Why would we even consider using the Streaming service? Well, for the same reasons we would consider using Kafka. &lt;/p&gt;

&lt;p&gt;The additional bonus is that, because it's a PaaS service, it's extremely easy to set up and there's almost no maintenance effort (forget about patching, scaling, configuring HA or running out of disk space).&lt;/p&gt;
&lt;h1&gt;
  
  
  How to use it
&lt;/h1&gt;

&lt;p&gt;Configuration can be done in a standard way - manually in the cloud console or automatically with REST API/SDKs/Terraform/CLI.&lt;/p&gt;

&lt;p&gt;Things are getting more interesting when it comes to publishing/consuming messages. There are two approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;OCI REST API&lt;/li&gt;
&lt;li&gt;Kafka-compliant API&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Usually, I tend to go with the latter, for one reason: &lt;strong&gt;long-lived TCP connections&lt;/strong&gt; used by the Kafka protocol. This mechanism makes the process of reading data much more interactive - one can listen on a topic for a certain amount of time and, when a message arrives, it is consumed at once. In the case of the OCI REST API, the only way is to call the GetMessages operation in a loop, and it exits immediately when there is nothing to consume.&lt;/p&gt;

&lt;p&gt;In the next section I will show how easy it is to configure Streaming and connect to it using a Kafka client.&lt;/p&gt;
&lt;h1&gt;
  
  
  Setup
&lt;/h1&gt;
&lt;h3&gt;
  
  
  Stream Pool
&lt;/h3&gt;

&lt;p&gt;First, let's create a stream pool, which groups streams (the easiest way to think about a stream is to treat it as a Kafka topic):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffpoa8qp3mmedi7iztzfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffpoa8qp3mmedi7iztzfy.png" alt="Stream Pool"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only parameter that has to be provided is actually the &lt;strong&gt;Stream Pool Name&lt;/strong&gt;. For the other settings we can use default values.&lt;/p&gt;

&lt;p&gt;However, there are two interesting configuration parameters to mention:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Endpoint type - we can choose whether our streams are public (ingress traffic allowed from the internet) or private (accessible only within Oracle Cloud region).&lt;/li&gt;
&lt;li&gt;Encryption settings - all messages are encrypted by default (at rest and in-transit). We can specify details of the configuration.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Stream
&lt;/h3&gt;

&lt;p&gt;Now we can create a Stream (aka topic):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiqz9nues9n3oa00fdqce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiqz9nues9n3oa00fdqce.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First we need to assign a Stream to a Stream Pool (the one we created in the previous section). Then we need to provide the name of the Stream.&lt;/p&gt;

&lt;p&gt;We can control read/write throughput using mechanism of partitions (works the same as in Kafka).&lt;/p&gt;
&lt;h3&gt;
  
  
  Kafka configuration
&lt;/h3&gt;

&lt;p&gt;When we open a page with details of any given Stream Pool, at the top of the screen we will find the following button:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnp3w1oda7zbcixvtbghz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnp3w1oda7zbcixvtbghz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking it, the page with connection details pops up:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fb03t9156t5iu9qn2nzfx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fb03t9156t5iu9qn2nzfx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having all that data we are almost ready to connect to Streaming service using Kafka client. But first we need to touch on one very important topic - security. &lt;/p&gt;
&lt;h3&gt;
  
  
  A few important words about security
&lt;/h3&gt;
&lt;h3&gt;
  
  
  Encryption
&lt;/h3&gt;

&lt;p&gt;The connection between client and server is encrypted in transit using TLS (the successor of SSL). And there is 1-way authentication enabled by default, which means that the client authenticates the server certificate.&lt;/p&gt;

&lt;p&gt;To make that authentication work, the client has to trust the certificate presented by the server. Usually this requires additional configuration.&lt;/p&gt;

&lt;p&gt;There is a setting that can be used on the client side: &lt;em&gt;"ssl.ca.location"&lt;/em&gt;. It points to a file with the proper certificate chain of trust. The file itself should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Server certificate
-----BEGIN CERTIFICATE-----
MIIFaDCCBFCgAwIBAgISESHkvZFwK9Qz0KsXD3x8p44aMA0GCSqGSIb3DQEBCwUA
VQQDDBcqLmF3cy10ZXN0LnByb2dyZXNzLmNvbTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAMGPTyynn77hqcYnjWsMwOZDzdhVFY93s2OJntMbuKTHn39B
...
bml6YXRpb252YWxzaGEyZzIuY3JsMIGgBggrBgEFBQcBAQSBkzCBkDBNBggrBgEF
BQcwAoZBaHR0cDovL3NlY3VyZS5nbG9iYWxzaWduLmNvbS9jYWNlcnQvZ3Nvcmdh
bml6YXRpb252YWxzaGEyZzJyMS5jcnQwPwYIKwYBBQUHMAGGM2h0dHA6Ly9vY3Nw
lffygD5IymCSuuDim4qB/9bh7oi37heJ4ObpBIzroPUOthbG4gv/5blW3Dc=
-----END CERTIFICATE-----

# Trust chain intermediate certificate
-----BEGIN CERTIFICATE-----
MIIEaTCCA1GgAwIBAgILBAAAAAABRE7wQkcwDQYJKoZIhvcNAQELBQAwVzELMAkG
C33JiJ1Pi/D4nGyMVTXbv/Kz6vvjVudKRtkTIso21ZvBqOOWQ5PyDLzm+ebomchj
SHh/VzZpGhkdWtHUfcKc1H/hgBKueuqI6lfYygoKOhJJomIZeg0k9zfrtHOSewUj
...
dHBzOi8vd3d3Lmdsb2JhbHNpZ24uY29tL3JlcG9zaXRvcnkvMDMGA1UdHwQsMCow
KKAmoCSGImh0dHA6Ly9jcmwuZ2xvYmFsc2lnbi5uZXQvcm9vdC5jcmwwPQYIKwYB
K1pp74P1S8SqtCr4fKGxhZSM9AyHDPSsQPhZSZg=
-----END CERTIFICATE-----

# Trust chain root certificate
-----BEGIN CERTIFICATE-----
MIIDdTCCAl2gAwIBAgILBAAAAAABFUtaw5QwDQYJKoZIhvcNAQEFBQAwVzELMAkG
YWxTaWduIG52LXNhMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJHbG9iYWxT
aWduIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDaDuaZ
...
jc6j40+Kfvvxi4Mla+pIH/EqsLmVEQS98GPR4mdmzxzdzxtIK+6NiY6arymAZavp
38NflNUVyRRBnMRddWQVDf9VMOyGj/8N7yy5Y0b2qvzfvGn9LhJIZJrglfCm7ymP
HMUfpIBvFSDJ3gyICh3WZlXi/EjJKSZp4A==
-----END CERTIFICATE-----
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How do we build the chain of trust? We start at the bottom of the chain (the "Server certificate" in the example above - but please mind that the order in the file is reversed, with the server certificate at the top).&lt;/p&gt;

&lt;p&gt;To get the server certificate we can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo -n | openssl s_client -connect &amp;lt;endpoint taken from Stream details page&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will return two very important pieces of information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Certificate chain" section which lists certificates. All of them should be placed in our file. In my case it looks like this (there are 3 certificates in the chain):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Certificate chain
 0 s:/C=US/ST=California/L=Redwood City/O=Oracle Corporation/OU=Oracle OCI-PROD FRANKFURT/CN=streaming.eu-frankfurt-1.oci.oraclecloud.com
   i:/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA
 1 s:/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA
 2 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;"Server certificate" - section between BEGIN/END CERTIFICATE lines. This is a certificate number "0" in a "Certificate chain" and should be placed at the top of our file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The last step is to collect remaining certificates ("1" and "2") and place them in the file. The root certificate goes at the bottom.&lt;/p&gt;
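&lt;p&gt;Collecting the certificates can be partly automated. The sketch below keeps only the PEM blocks from the &lt;em&gt;openssl s_client -showcerts&lt;/em&gt; output (the &lt;em&gt;-showcerts&lt;/em&gt; flag prints every certificate the server presents; the root certificate may still have to be added by hand):&lt;/p&gt;

```shell
# Keep only the BEGIN/END CERTIFICATE blocks from s_client output.
extract_pem_blocks() {
  awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/'
}

# Real (network) usage, with the endpoint taken from the Stream details page:
#   echo -n | openssl s_client -connect "$ENDPOINT" -showcerts 2>/dev/null |
#     extract_pem_blocks > ca.pem
```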

&lt;p&gt;&lt;strong&gt;BTW:&lt;/strong&gt;&lt;br&gt;
For testing purposes we can use a quick and dirty workaround: setting &lt;em&gt;"enable.ssl.certificate.verification"&lt;/em&gt; to "false" will disable server authentication altogether. &lt;/p&gt;

&lt;h3&gt;
  
  
  Authentication and authorisation
&lt;/h3&gt;

&lt;p&gt;It's highly recommended to create a dedicated user for each Stream Pool (or set of Stream Pools, depending on the requirements). &lt;br&gt;
After creating such a user, we have to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;generate an Auth Token for the user, which will be used as a password &lt;/li&gt;
&lt;li&gt;grant the user an appropriate level of access using OCI Policies &lt;/li&gt;
&lt;/ul&gt;
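&lt;p&gt;For illustration only, the policy statements for such a user's group might look roughly like this. The group and compartment names are hypothetical, and the resource-type names (&lt;em&gt;stream-push&lt;/em&gt;, &lt;em&gt;stream-pull&lt;/em&gt;) should be verified against the OCI Streaming policy documentation:&lt;/p&gt;

```shell
# Hypothetical OCI policy statements: producing to a stream vs consuming from it.
# Verify the exact verbs and resource types in the OCI documentation.
policies='Allow group stream-pool-users to use stream-push in compartment my-compartment
Allow group stream-pool-users to use stream-pull in compartment my-compartment'

printf '%s\n' "$policies"
```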

&lt;h1&gt;
  
  
  Kafka client
&lt;/h1&gt;

&lt;p&gt;Finally, let's try to connect to our Streaming service using a Kafka client. In my case I will use Go and the &lt;a href="https://www.confluent.io" rel="noopener noreferrer"&gt;Confluent package&lt;/a&gt;.&lt;br&gt;
Example producer and consumer code is available here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/confluentinc/examples/blob/5.5.1-post/clients/cloud/go/consumer.go" rel="noopener noreferrer"&gt;Consumer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/confluentinc/examples/blob/5.5.1-post/clients/cloud/go/producer.go" rel="noopener noreferrer"&gt;Producer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Consumer configuration
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;consumer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewConsumer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ConfigMap&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c"&gt;//taken from Stream Pool details page&lt;/span&gt;
        &lt;span class="s"&gt;"bootstrap.servers"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                   &lt;span class="s"&gt;"cell-1.streaming.eu-frankfurt-1.oci.oraclecloud.com:9092"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//arbitrary value, when using consumer groups&lt;/span&gt;
        &lt;span class="s"&gt;"group.id"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                            &lt;span class="s"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//taken from Stream Pool details page&lt;/span&gt;
        &lt;span class="s"&gt;"sasl.mechanisms"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                     &lt;span class="s"&gt;"PLAIN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//user authorized to read from the  stream(s) &lt;/span&gt;
        &lt;span class="s"&gt;"sasl.username"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                       &lt;span class="s"&gt;"[tenancy]/[user name]/[stream pool id]"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//Auth Token for the user&lt;/span&gt;
        &lt;span class="s"&gt;"sasl.password"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                       &lt;span class="s"&gt;"[token]"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"enable.ssl.certificate.verification"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//Full path to the file with certificates that make up chain of trust&lt;/span&gt;
        &lt;span class="s"&gt;"ssl.ca.location"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                     &lt;span class="s"&gt;"/dir/ca.pem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//taken from Stream Pool details page&lt;/span&gt;
        &lt;span class="s"&gt;"security.protocol"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                   &lt;span class="s"&gt;"SASL_SSL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"auto.offset.reset"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                   &lt;span class="s"&gt;"earliest"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;  

&lt;span class="c"&gt;//subscribe to given Stream (aka topic)&lt;/span&gt;
&lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;consumer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SubscribeTopics&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"testStream"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
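&lt;p&gt;With the consumer configured and subscribed, messages can be read in a simple poll loop. A minimal sketch (the &lt;code&gt;consumeLoop&lt;/code&gt; helper name is mine, not part of the original snippet; it needs a live stream to actually receive anything):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// consumeLoop reads messages from the subscribed Stream until a
// non-timeout error occurs.
func consumeLoop(consumer *kafka.Consumer) {
	for {
		msg, err := consumer.ReadMessage(10 * time.Second)
		if err != nil {
			kerr, ok := err.(kafka.Error)
			if ok {
				if kerr.Code() == kafka.ErrTimedOut {
					// no message arrived within the window, keep polling
					continue
				}
			}
			fmt.Printf("consumer error: %v\n", err)
			return
		}
		fmt.Printf("message on %s: %s\n", msg.TopicPartition, string(msg.Value))
	}
}
```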



&lt;h2&gt;
  
  
  Producer configuration
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewProducer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ConfigMap&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c"&gt;//taken from Stream Pool details page&lt;/span&gt;
        &lt;span class="s"&gt;"bootstrap.servers"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                   &lt;span class="s"&gt;"cell-1.streaming.eu-frankfurt-1.oci.oraclecloud.com:9092"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//taken from Stream Pool details page&lt;/span&gt;
        &lt;span class="s"&gt;"sasl.mechanisms"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                     &lt;span class="s"&gt;"PLAIN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//user authorized to write to the  stream(s) &lt;/span&gt;
        &lt;span class="s"&gt;"sasl.username"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                       &lt;span class="s"&gt;"[tenancy]/[user name]/[stream pool id]"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//Auth Token for the user&lt;/span&gt;
        &lt;span class="s"&gt;"sasl.password"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                       &lt;span class="s"&gt;"[token]"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//example of how to disable server certificate validation&lt;/span&gt;
        &lt;span class="c"&gt;//don't do it in production!&lt;/span&gt;
        &lt;span class="s"&gt;"enable.ssl.certificate.verification"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"false"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;//taken from Stream Pool details page&lt;/span&gt;
        &lt;span class="s"&gt;"security.protocol"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;                   &lt;span class="s"&gt;"SASL_SSL"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
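&lt;p&gt;The snippet above only configures the producer; a minimal sketch of actually publishing a record to the same &lt;code&gt;testStream&lt;/code&gt; could look like this (the &lt;code&gt;sendOne&lt;/code&gt; helper name and the payload are illustrative, not from the original article):&lt;/p&gt;

```go
package main

import (
	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// sendOne publishes a single record to the testStream Stream and waits
// for outstanding deliveries to drain.
func sendOne(p *kafka.Producer) error {
	topic := new(string)
	*topic = "testStream"

	msg := new(kafka.Message)
	msg.TopicPartition = kafka.TopicPartition{Topic: topic, Partition: kafka.PartitionAny}
	msg.Value = []byte("hello")

	if err := p.Produce(msg, nil); err != nil {
		return err
	}
	// wait up to 15 seconds for delivery reports
	p.Flush(15 * 1000)
	return nil
}
```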



</description>
      <category>kafka</category>
      <category>oracle</category>
      <category>paas</category>
    </item>
    <item>
      <title>Sensitive data in bash scripts</title>
      <dc:creator>Mirek Sedzinski</dc:creator>
      <pubDate>Sat, 18 Jul 2020 07:47:37 +0000</pubDate>
      <link>https://dev.to/msedzins/sensitive-data-in-bash-scripts-3j5c</link>
      <guid>https://dev.to/msedzins/sensitive-data-in-bash-scripts-3j5c</guid>
      <description>&lt;h1&gt;
  
  
  Problem
&lt;/h1&gt;

&lt;p&gt;Sensitive data (passwords etc.) should not be placed directly in configuration files. &lt;br&gt;
A quite common approach is to use environment variables: we assign the sensitive data to environment variables and use those variables later on in our configuration/scripts/code.&lt;br&gt;
But this raises another question: how should we make the assignment itself? Should we use yet another script in which the sensitive data is stored in plain text?  &lt;/p&gt;

&lt;p&gt;Below is the approach that I've been using.&lt;/p&gt;
&lt;h1&gt;
  
  
  Solution
&lt;/h1&gt;

&lt;p&gt;There is a really nice script out there - &lt;a href="https://github.com/plyint/encpass.sh"&gt;encpass.sh&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It allows you to store a secret in a &lt;strong&gt;reasonably safe&lt;/strong&gt; way and retrieve it easily in a bash script.&lt;/p&gt;

&lt;p&gt;But why only in a reasonably safe way? &lt;br&gt;
Well, secrets are encrypted with symmetric keys, and both values (the encrypted secret and the key) are stored in a hidden directory that can only be accessed by the user who runs the script. This obviously doesn't protect against situations in which an attacker has access to that hidden directory (for example, an attacker with root access).&lt;br&gt;
This risk can be mitigated by protecting the keys with an additional password. In such a case, before using a secret, the user is asked to provide the password. Although this works well for manual use cases, it can become cumbersome when automation is involved (we are back to the original problem: how do we store the password securely?).&lt;/p&gt;
&lt;h1&gt;
  
  
  Quick demo
&lt;/h1&gt;

&lt;p&gt;After setting up secrets with &lt;em&gt;encpass.sh&lt;/em&gt;, we can have a configuration file named &lt;em&gt;configuration.sh&lt;/em&gt; that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

&lt;span class="nv"&gt;configuration_param1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;data1 &lt;span class="c"&gt;# not sensitive data&lt;/span&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;encpass.sh
&lt;span class="nv"&gt;configuration_param2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;get_secret bucket_name secret_name&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, in our script we use it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;configuration.sh

&lt;span class="nb"&gt;echo &lt;/span&gt;configuration_param1
&lt;span class="nb"&gt;echo &lt;/span&gt;configuration_param2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;
BTW: This approach can also be used with other languages. For example, I've recently used it with Go: a bash script uses &lt;em&gt;encpass.sh&lt;/em&gt; to decrypt the secrets and then exports them as environment variables, which my Go code can read afterwards.&lt;/p&gt;

</description>
      <category>bash</category>
    </item>
  </channel>
</rss>
