<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ajay Singh</title>
    <description>The latest articles on DEV Community by Ajay Singh (@ajayatgit).</description>
    <link>https://dev.to/ajayatgit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F101249%2F761d5896-7cfe-4bcb-b50b-d1d4139f1ef8.jpg</url>
      <title>DEV Community: Ajay Singh</title>
      <link>https://dev.to/ajayatgit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ajayatgit"/>
    <language>en</language>
    <item>
      <title>Blockchain Trilemma</title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Mon, 12 Jun 2023 20:03:03 +0000</pubDate>
      <link>https://dev.to/ajayatgit/blockchain-trilemma-o58</link>
      <guid>https://dev.to/ajayatgit/blockchain-trilemma-o58</guid>
      <description>&lt;p&gt;Blockchain technology, which underpins cryptocurrencies, is a distributed ledger that allows transactions to be recorded securely and transparently. However, developers and users of blockchain technology face a fundamental problem known as the Blockchain Trilemma: the challenge of achieving scalability, security, and decentralization all at the same time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;br&gt;
Scalability refers to the ability of a blockchain network to handle a large number of transactions. Currently, most blockchain networks have limited scalability, which means they can only process a few transactions per second. This is a major problem for blockchain adoption, as it limits the number of users who can use the network at the same time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;br&gt;
Security refers to the ability of a blockchain network to resist attacks and prevent malicious actors from hacking or manipulating the network. Blockchain networks are secured through a process called consensus, where nodes in the network agree on the validity of transactions. However, achieving security can be difficult, especially in the face of determined attackers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decentralization&lt;/strong&gt;&lt;br&gt;
Decentralization refers to the distribution of nodes that maintain the blockchain network. A decentralized network is more secure and resistant to attacks because there is no single point of failure. However, achieving decentralization can be difficult, especially as the network grows and more nodes are needed to maintain it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PGOYVG1Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lz1ic1pur8nr39vufyia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PGOYVG1Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lz1ic1pur8nr39vufyia.png" alt="Image description" width="510" height="412"&gt;&lt;/a&gt;&lt;a href="https://dev.tourl"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Achieving all three goals of scalability, security, and decentralization at the same time is a challenge that has yet to be fully solved. Most blockchain networks have had to make trade-offs between these goals. For example, some networks sacrifice decentralization to achieve greater scalability, while others sacrifice scalability for greater security. At any given time, a network can fully serve only one side of the triangle above.&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>cryptocurrency</category>
    </item>
    <item>
      <title>Learning Bitcoin Part -1</title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Tue, 06 Jul 2021 22:20:28 +0000</pubDate>
      <link>https://dev.to/ajayatgit/learning-bitcoin-part-1-21n3</link>
      <guid>https://dev.to/ajayatgit/learning-bitcoin-part-1-21n3</guid>
      <description>&lt;p&gt;Over the next few days we will be learning the basics and fundamentals of bitcoin. &lt;/p&gt;

&lt;p&gt;Today we will be talking about the origin of bitcoin. &lt;/p&gt;

&lt;h3&gt;
  
  
  Who created Bitcoin?
&lt;/h3&gt;

&lt;p&gt;This is one of the best-guarded secrets and mysteries.&lt;br&gt;
No one actually knows who created bitcoin, but we have a name attributed to the creator - Satoshi Nakamoto. &lt;br&gt;
This person, or group of people, authored a whitepaper titled &lt;strong&gt;Bitcoin: A Peer-to-Peer Electronic Cash System&lt;/strong&gt; on October 31, 2008.&lt;br&gt;
The whitepaper was posted to a cryptography mailing list; below is the original content of the email.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I've been working on a new electronic cash system that's fully
peer-to-peer, with no trusted third party.

The paper is available at:
http://www.bitcoin.org/bitcoin.pdf

The main properties:
Double-spending is prevented with a peer-to-peer network.
No mint or other trusted parties.
Participants can be anonymous.
New coins are made from Hashcash style proof-of-work.
The proof-of-work for new coin generation also powers the
network to prevent double-spending.

Bitcoin: A Peer-to-Peer Electronic Cash System

Abstract. A purely peer-to-peer version of electronic cash would
allow online payments to be sent directly from one party to another
without the burdens of going through a financial institution.
Digital signatures provide part of the solution, but the main
benefits are lost if a trusted party is still required to prevent
double-spending. We propose a solution to the double-spending
problem using a peer-to-peer network. The network timestamps
transactions by hashing them into an ongoing chain of hash-based
proof-of-work, forming a record that cannot be changed without
redoing the proof-of-work. The longest chain not only serves as
proof of the sequence of events witnessed, but proof that it came
from the largest pool of CPU power. As long as honest nodes control
the most CPU power on the network, they can generate the longest
chain and outpace any attackers. The network itself requires
minimal structure. Messages are broadcasted on a best effort basis,
and nodes can leave and rejoin the network at will, accepting the
longest proof-of-work chain as proof of what happened while they
were gone.

Full paper at:
http://www.bitcoin.org/bitcoin.pdf

Satoshi Nakamoto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On 9 January 2009, Nakamoto released version 0.1 of the bitcoin software on SourceForge, and launched the network by defining the genesis block of bitcoin (block number 0), which had a reward of 50 bitcoins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hCHqz2ns--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1w0z00b6v55tnv1hyimx.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hCHqz2ns--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1w0z00b6v55tnv1hyimx.jpeg" alt="Bitcoin-Genesis-block"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Bitcoin is a combination of existing technologies
&lt;/h3&gt;

&lt;p&gt;Bitcoin is not something that popped up out of the blue; it emerged from a combination of various existing technologies.&lt;/p&gt;

&lt;p&gt;Below are a few technology inventions that “Satoshi Nakamoto” uses, mentions, and gives credit to in the whitepaper:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the digital cash technology and protocol called ecash, by David Chaum and Stefan Brands&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the “proof of work” system called hashcash, by Adam Back, for spam monitoring and control, which was eventually built upon by Hal Finney, who created a reusable proof-of-work protocol&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the distributed scarcity system built upon “b-money”, created by Wei Dai&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the technology called “bitgold”, by Nick Szabo, which proposed a mechanism for market inflation control&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A key point to mention: all of the folks mentioned above have denied being Satoshi Nakamoto, or being part of any group called "Satoshi Nakamoto".&lt;/p&gt;

</description>
      <category>bitcoin</category>
      <category>blockchain</category>
      <category>cryptocurrency</category>
    </item>
    <item>
      <title>Etherscan.io down No worries </title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Thu, 15 Apr 2021 18:08:17 +0000</pubDate>
      <link>https://dev.to/ajayatgit/ethercan-io-down-no-worries-17ka</link>
      <guid>https://dev.to/ajayatgit/ethercan-io-down-no-worries-17ka</guid>
      <description>&lt;p&gt;In case you are not able to access etherscan.io for your transaction verification,&lt;br&gt;
do not worry: feel free to visit DL Eth Explorer at &lt;a href="https://dlethexplorer.dltlabs.com/"&gt;https://dlethexplorer.dltlabs.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>ethereum</category>
      <category>ether</category>
      <category>dltlabs</category>
    </item>
    <item>
      <title>Linux Capacity Planning Part -2 </title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Sun, 24 May 2020 22:56:50 +0000</pubDate>
      <link>https://dev.to/ajayatgit/linux-capacity-planning-2-423f</link>
      <guid>https://dev.to/ajayatgit/linux-capacity-planning-2-423f</guid>
      <description>&lt;p&gt;In the last post, &lt;a href="https://dev.to/ajayatgit/linux-capacity-planning-part-1-2l4l"&gt;Linux Capacity Planning-1&lt;/a&gt;, we discussed: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The linkage between capacity planning and performance &lt;/li&gt;
&lt;li&gt;The sysstat package and its installation on various Linux flavours.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, we will do a deep dive into the &lt;strong&gt;iostat&lt;/strong&gt; utility in the &lt;strong&gt;sysstat&lt;/strong&gt; package. &lt;/p&gt;

&lt;p&gt;The iostat command monitors the system's input/output (I/O) device load by observing the time the devices are active in relation to their average transfer rates. iostat creates reports that can be used to change the system configuration to better balance input/output between physical disks.&lt;/p&gt;

&lt;p&gt;A sample run of iostat without any arguments looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-10-25-57-107 ec2-user]# iostat
Linux 4.14.173-137.229.amzn2.x86_64 (ip-10-25-57-107.ap-south-1.compute.internal)       05/24/2020      _x86_64_        (1 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.02    0.00    0.01    0.01    0.00   99.96

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.59         1.32         4.20     206156     654551

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When the command is run without arguments, it generates a detailed report containing information since the system was booted. You can provide two optional parameters to change this:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;iostat [option] [interval] [count]&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;interval&lt;/strong&gt; parameter specifies the duration of time in seconds between each report.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;count&lt;/strong&gt; parameter specifies the number of reports that are generated before iostat exits.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first report generated by the iostat command covers the time since the system was booted, unless the -y option is used (in which case this first report is omitted). Each subsequent report covers the time since the previous report. The report consists of a CPU header row followed by a row of CPU statistics; on multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is then displayed, followed by a line of statistics for each device that is configured.&lt;/p&gt;

&lt;p&gt;If the count parameter is specified along with interval, the value of count determines the number of reports generated, at interval seconds apart. If the interval parameter is specified without count, the iostat command generates reports continuously.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-10-25-57-107 ec2-user]# iostat 1 5
Linux 4.14.173-137.229.amzn2.x86_64 (ip-10-25-57-107.ap-south-1.compute.internal)       05/24/2020      _x86_64_        (1 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.02    0.00    0.01    0.01    0.00   99.96

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.59         1.32         4.19     206164     655868

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.00         0.00         0.00          0          0

[root@ip-10-25-57-107 ec2-user]#
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The iostat command generates two types of reports: the &lt;strong&gt;CPU Utilization&lt;/strong&gt; report and the &lt;strong&gt;Device Utilization&lt;/strong&gt; report.&lt;br&gt;
The first section of the report is the &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;CPU Utilization Report&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;For multiprocessor systems, the CPU values are global averages among all processors. &lt;/p&gt;

&lt;p&gt;The report has the following format:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;%user - Show the percentage of CPU utilization that occurred while executing at the user level (application).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;%nice- Show the percentage of CPU utilization that occurred while executing at the user level with nice priority.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;%system-  Show the percentage of CPU utilization that occurred while executing at the system level (kernel).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;%iowait- Show the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;%steal- Show the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;%idle- Show the percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;The Device Utilization Report&lt;/strong&gt;&lt;br&gt;
The device report provides statistics on a per-physical-device or per-partition basis. Block devices and partitions for which statistics are to be displayed may be entered on the command line. If no device or partition is entered, statistics are displayed for every device used by the system, provided the kernel maintains statistics for it. If the ALL keyword is given on the command line, statistics are displayed for every device defined by the system, including those that have never been used. Transfer rates are shown in 1K blocks by default, unless the environment variable POSIXLY_CORRECT is set, in which case 512-byte blocks are used. The report may show the following fields, depending on the flags used:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Device-  device/partition name as listed in /dev directory&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;tps-  number of transfers per second that were issued to the device. A higher tps means the device is busier&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Blk_read/s-  the amount of data read from the device, expressed in blocks per second&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Blk_wrtn/s- the amount of data written to the device, expressed in blocks per second&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Blk_read- show the total number of blocks read&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Blk_wrtn- show the total number of blocks written&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You get more detailed (extended) information by using the -x option with iostat.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;iostat -x&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-10-25-57-107 ec2-user]# iostat -x
Linux 4.14.173-137.229.amzn2.x86_64 (ip-10-25-57-107.ap-south-1.compute.internal)       05/24/2020      _x86_64_        (1 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.02    0.00    0.01    0.01    0.00   99.96

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
xvda              0.00     0.02    0.08    0.51     1.30     4.16    18.62     0.00    3.06    1.19    3.35   0.17   0.01

[root@ip-10-25-57-107 ec2-user]#
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Below are the details of the extended fields:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;%util: how much of the time the storage device had outstanding work (was busy).&lt;/em&gt;&lt;br&gt;
&lt;em&gt;svctm: how fast your I/O subsystem responds to requests overall when busy. In fact, the less you load your system, the higher svctm tends to be.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;await: how fast requests go through. It is just an average.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;avgqu-sz: how many requests there are in the request queue. Low means either your system is not loaded, or it has serialized I/O and cannot utilize the underlying storage properly. High means your software stack is scalable enough to properly load the underlying I/O.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;avgrq-sz: just the average request size. It can indicate what kind of workload is happening.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;rsec/s &amp;amp; wsec/s: sectors read and written per second. Divide by 2048, and you’ll get megabytes per second.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;r/s &amp;amp; w/s: read and write requests per second. These numbers are the real I/O capacity figures, though of course they can vary depending on how much pressure the underlying I/O subsystem is under (queue size!).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;rrqm/s &amp;amp; wrqm/s: how many requests were merged by the block layer.&lt;/em&gt;&lt;/p&gt;
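&lt;p&gt;As a quick worked example of the divide-by-2048 rule for sector rates (the sector rate below is a made-up illustration, not real iostat output):&lt;/p&gt;

```shell
# 512-byte sectors: 2048 sectors = 1 MiB, hence the divide-by-2048 rule.
# The sector rate here is a made-up value for illustration.
sectors_per_sec=10240
mb_per_sec=$((sectors_per_sec / 2048))
echo "${mb_per_sec} MB/s"
```

&lt;p&gt;So a device reporting 10240 sectors per second is moving roughly 5 MB/s.&lt;/p&gt;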

&lt;p&gt;We will continue with iostat in the next post.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>capacity</category>
      <category>iostat</category>
      <category>metrics</category>
    </item>
    <item>
      <title>Linux Capacity Planning Part -1 </title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Sat, 23 May 2020 03:16:04 +0000</pubDate>
      <link>https://dev.to/ajayatgit/linux-capacity-planning-part-1-2l4l</link>
      <guid>https://dev.to/ajayatgit/linux-capacity-planning-part-1-2l4l</guid>
      <description>&lt;p&gt;This is going to be a series of blog posts that will introduce you to the knowledge and tools necessary to analyze historical data, plan for the additional resources needed to meet future demand, and identify resource bottlenecks on your host machine. &lt;/p&gt;

&lt;p&gt;We will be using &lt;strong&gt;Amazon Linux 2&lt;/strong&gt; as the Linux operating system. &lt;/p&gt;

&lt;p&gt;In this part, we will first look at a few important considerations to keep in mind during planning. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you discuss capacity, in reality you are initiating a discussion about performance. Capacity and performance always go hand in hand.&lt;/li&gt;
&lt;li&gt;When we discuss performance, we are discussing the performance of an application on the system. &lt;/li&gt;
&lt;li&gt;You have to measure and monitor performance to do any sort of capacity planning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below are a few important parameters that determine the capacity of your system and impact the performance of your application. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU&lt;/li&gt;
&lt;li&gt;RAM&lt;/li&gt;
&lt;li&gt;IOPS (disk reads and writes)&lt;/li&gt;
&lt;li&gt;Number of open files&lt;/li&gt;
&lt;li&gt;Network transfer rate&lt;/li&gt;
&lt;/ul&gt;
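&lt;p&gt;For a quick first look at where some of these parameters stand on a running host, a few standard commands can be used (a sketch; exact output and availability vary by distro and instance type):&lt;/p&gt;

```shell
# Quick checks for some of the parameters above (standard Linux tools):
nproc                       # number of CPUs
free -m                     # RAM usage, in MB
ulimit -n                   # open-file limit for the current shell
cat /proc/sys/fs/file-max   # system-wide open-file limit
```

&lt;p&gt;These one-liners give point-in-time snapshots; the sysstat tools discussed below are what provide the historical data needed for actual capacity planning.&lt;/p&gt;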

&lt;p&gt;One of the tools that can help us measure these parameters is found in the package named sysstat.&lt;br&gt;
This package comes installed by default on the &lt;strong&gt;Amazon Linux 2&lt;/strong&gt; AMI. &lt;br&gt;
On distros where it is not installed, you can install it by running the commands below. &lt;/p&gt;

&lt;p&gt;Command to install on different Distros:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On RedHat / CentOS / Fedora&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yum install sysstat&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On Debian / Ubuntu / Linux Mint&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apt-get install sysstat&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Once you have started the EC2 instance, you can check whether the sysstat service is active.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-10-25-57-107 ec2-user]# service sysstat status
Redirecting to /bin/systemctl status sysstat.service
● sysstat.service - Resets System Activity Logs
   Loaded: loaded (/usr/lib/systemd/system/sysstat.service; enabled; vendor preset: enabled)
   Active: active (exited) since Sat 2020-05-23 02:21:42 UTC; 3min 34s ago
  Process: 2690 ExecStart=/usr/lib64/sa/sa1 --boot (code=exited, status=0/SUCCESS)
 Main PID: 2690 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/sysstat.service

May 23 02:21:41 localhost systemd[1]: Starting Resets System Activity Logs...
May 23 02:21:42 localhost systemd[1]: Started Resets System Activity Logs.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the next post, we will look into the various utilities provided by the sysstat package.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>capacity</category>
    </item>
    <item>
      <title>Introducing new avatar of DL Gateway</title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Wed, 11 Dec 2019 20:02:49 +0000</pubDate>
      <link>https://dev.to/ajayatgit/introducing-new-avatar-of-dl-gateway-41l</link>
      <guid>https://dev.to/ajayatgit/introducing-new-avatar-of-dl-gateway-41l</guid>
      <description>&lt;p&gt;We are thrilled to announce that the newly updated DL Gateway platform has been launched! Packed with cutting edge features and system improvements, we think you will absolutely love the upgraded environment.&lt;/p&gt;

&lt;p&gt;Some of our new features and improvements include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Signup and login directly to the Ethereum and Hyperledger Fabric blockchain infrastructures&lt;/li&gt;
&lt;li&gt;Create and manage projects using Ethereum and Hyperledger Fabric&lt;/li&gt;
&lt;li&gt;Collaborate and work on projects with other users registered on the platform&lt;/li&gt;
&lt;li&gt;View detailed project statistics, including total project transactions and top API calls, in a clean UI that is easy to navigate&lt;/li&gt;
&lt;li&gt;Monitor end-to-end blockchain network statistics&lt;/li&gt;
&lt;li&gt;Personalize your account with a photo or avatar image&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With more than five million transactions to date and growing, we’d like to remind you that this world-class system is 100% free to use. Please join us, collaborate with your team and create something amazing.&lt;/p&gt;

&lt;p&gt;Register for DL Gateway at &lt;a href="http://dlgateway.dltlabs.com/"&gt;http://dlgateway.dltlabs.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>dltlabs</category>
      <category>dlt</category>
    </item>
    <item>
      <title>Keep Docker alive during docker daemon downtime</title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Thu, 08 Aug 2019 21:56:59 +0000</pubDate>
      <link>https://dev.to/ajayatgit/keep-docker-alive-during-docker-daemon-downtime-4pfi</link>
      <guid>https://dev.to/ajayatgit/keep-docker-alive-during-docker-daemon-downtime-4pfi</guid>
      <description>&lt;p&gt;By default, when the Docker daemon terminates, it shuts down the running containers. Since Docker Engine 1.12, you can configure the daemon so that containers remain running even if the daemon becomes unavailable. &lt;br&gt;
This functionality is named &lt;strong&gt;live restore&lt;/strong&gt;. The live restore option helps reduce container downtime due to daemon crashes, planned outages, or upgrades.&lt;/p&gt;
&lt;h3&gt;
  
  
  How do we enable live restore?
&lt;/h3&gt;

&lt;p&gt;There are two ways to enable live restore: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Add the configuration to the daemon configuration file. On Linux, this defaults to /etc/docker/daemon.json&lt;/p&gt;

&lt;p&gt;Use the following JSON to enable live-restore.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {
     "live-restore": true
   }
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;You can restart the Docker daemon now. If you are using systemd, use the command &lt;em&gt;systemctl reload docker&lt;/em&gt;. &lt;/p&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also start the dockerd process manually with the --live-restore flag.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
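&lt;p&gt;A small sanity check is worth doing before reloading the daemon: write the snippet and confirm it is well-formed JSON. This is a sketch; the /tmp path below is purely for illustration, while the real file on Linux is /etc/docker/daemon.json:&lt;/p&gt;

```shell
# Write the live-restore snippet and validate that it parses as JSON
# before reloading the daemon. /tmp is used here for illustration only;
# the real daemon config on Linux lives at /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "live-restore": true
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "valid JSON"
```

&lt;p&gt;A malformed daemon.json prevents the daemon from starting, so validating the file first is a cheap way to avoid extra downtime.&lt;/p&gt;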

&lt;h3&gt;
  
  
  Impact of live restore on running containers
&lt;/h3&gt;

&lt;p&gt;If the daemon is down for a long time, running containers may fill up the FIFO log the daemon normally reads. A full log blocks containers from logging more data. The default buffer size is 64K. If the buffers fill, you must restart the Docker daemon to flush them.&lt;/p&gt;

&lt;p&gt;On Linux, you can modify the kernel’s buffer size by changing /proc/sys/fs/pipe-max-size.&lt;/p&gt;
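&lt;p&gt;To inspect or raise the kernel buffer limit mentioned above (a sketch; writing requires root, and the value shown is just an illustrative example):&lt;/p&gt;

```shell
# Read the current per-pipe buffer size limit, in bytes.
cat /proc/sys/fs/pipe-max-size
# To raise it (as root; 1048576 = 1 MiB is an illustrative value):
# echo 1048576 > /proc/sys/fs/pipe-max-size
```
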

&lt;h3&gt;
  
  
  Live restore and swarm mode
&lt;/h3&gt;

&lt;p&gt;The live restore option only pertains to standalone containers, and not to swarm services. Swarm services are managed by swarm managers. If swarm managers are not available, swarm services continue to run on worker nodes but cannot be managed until enough swarm managers are available to maintain a quorum.&lt;/p&gt;

&lt;h3&gt;
  
  
  Live restore during upgrades
&lt;/h3&gt;

&lt;p&gt;Live restore supports keeping containers running across Docker daemon upgrades, though this is limited to patch releases and does not support minor or major daemon upgrades.&lt;/p&gt;

&lt;p&gt;If you skip releases during an upgrade, the daemon may not restore its connection to the containers. If the daemon can’t restore the connection, it cannot manage the running containers and you must stop them manually.&lt;/p&gt;

&lt;h3&gt;
  
  
  Live restore upon restart
&lt;/h3&gt;

&lt;p&gt;The live restore option only works to restore containers if the daemon options, such as bridge IP addresses and graph driver, did not change. If any of these daemon-level configuration options have changed, the live restore may not work and you may need to manually stop the containers.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>linux</category>
    </item>
    <item>
      <title> Intro to ps command </title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Tue, 06 Aug 2019 23:09:54 +0000</pubDate>
      <link>https://dev.to/ajayatgit/intro-to-ps-command-fh0</link>
      <guid>https://dev.to/ajayatgit/intro-to-ps-command-fh0</guid>
      <description>&lt;p&gt;The aim of this article is to give you an idea of the ps command and the various options it is run with.&lt;br&gt;
I have seen people running commands like&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;ps -ef&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 without understanding the reason for adding the parameters (they do so because they have seen it somewhere on Stack Overflow, or seen their colleagues using it while debugging some issue).&lt;/p&gt;

&lt;p&gt;To start with, here is what the &lt;strong&gt;ps&lt;/strong&gt; command is used for: it &lt;strong&gt;displays information about a selection of the active processes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;ps accepts several kinds of options, which can be grouped as below: &lt;/p&gt;

&lt;p&gt;1   UNIX options, which may be grouped and must be preceded by a dash.&lt;br&gt;
 2   BSD options, which may be grouped and must not be used with a dash.&lt;br&gt;
 3   GNU long options, which are preceded by two dashes.&lt;/p&gt;
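&lt;p&gt;The three styles can be seen side by side. All of the commands below are valid; they just use different option dialects (shown here as a sketch):&lt;/p&gt;

```shell
ps -e        # UNIX option (single dash): select every process
ps ax        # BSD options (no dash): every process, BSD-style output
ps --pid 1   # GNU long option (two dashes): select process 1 only
```
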

&lt;p&gt;ps command works by reading files in the proc filesystem. The directory &lt;strong&gt;/proc/PID&lt;/strong&gt; contains various files that provide information about process PID. The content of these files is generated on the fly by the kernel when a process reads them.&lt;/p&gt;
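&lt;p&gt;You can peek at the same data ps reads directly from /proc. For example, for the current shell:&lt;/p&gt;

```shell
# ps gathers its data from files under /proc/<PID>/; the same files can
# be read directly. $$ expands to the PID of the current shell.
grep '^Name:' /proc/$$/status   # the process name, from the status file
cat /proc/$$/comm               # the same name, via the comm file
```
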

&lt;p&gt;&lt;strong&gt;ps&lt;/strong&gt;&lt;br&gt;
By default, ps selects all processes with the same effective user ID (euid=EUID) as the current user and associated with the same terminal as the invoker.&lt;/p&gt;

&lt;p&gt;It displays the &lt;strong&gt;process ID (pid=PID)&lt;/strong&gt;, the terminal associated with the process (tname=TTY), the cumulated CPU time in [DD-]hh:mm:ss format (time=TIME), and the executable name (ucmd=CMD). Output is unsorted by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ajays@ip-10-40-19-132 ~]$ ps
  PID TTY          TIME CMD
20244 pts/0    00:00:00 bash
20342 pts/0    00:00:00 ps
[ajays@ip-10-40-19-132 ~]$ who
ajays    pts/0        2019-08-06 20:38 (ip-10-19-12-119.ca-central-1.compute.internal)
[ajays@ip-10-40-19-132 ~]$

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, as per the explanation, the ps command should only show processes with the same user and the same terminal as the invoker. The next command, who, lists the invoker (ajays) and terminal (pts/0). The output of ps is thus the processes associated with ajays and attached to terminal pts/0.&lt;/p&gt;

&lt;p&gt;The options used by ps can be classified mainly into the below categories.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Process selection ( simple or by list )&lt;/li&gt;
&lt;li&gt;Output format control and modifiers&lt;/li&gt;
&lt;li&gt;Thread display&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of the above categories has a lot of options, which we cannot all cover in this article, but we will try to cover some common ones.&lt;/p&gt;
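&lt;p&gt;As one illustration of the output format control category, the &lt;strong&gt;-o&lt;/strong&gt; option lets you pick exactly which columns ps prints. A small sketch combining it with process selection by user:&lt;/p&gt;

```shell
#!/bin/sh
# Process selection by user (-u) combined with output format control (-o).
ps -u "$(id -un)" -o pid,ppid,tty,time,comm
# The GNU long option --sort orders the output, here by CPU time, descending.
ps -u "$(id -un)" -o pid,time,comm --sort=-time
```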

&lt;p&gt;&lt;strong&gt;ps -f&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;-f&lt;/strong&gt; option is a UNIX option. It is used to do a full-format listing.&lt;br&gt;
The output shows the following columns: UID, PID, PPID, C, STIME, TTY, TIME, CMD.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;UID&lt;/em&gt; - The name of the user who started the process.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PID&lt;/em&gt; - The process ID, which acts as the identification number of the process in memory.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PPID&lt;/em&gt;- The parent process ID, i.e. the PID of the process that started this process.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;C&lt;/em&gt;-  Processor utilization, in %.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;STIME&lt;/em&gt;- This is the start time of the process.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;TTY&lt;/em&gt;- This is the terminal from which the process was started.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;TIME&lt;/em&gt;- Total time for which the process has utilized the CPU.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;CMD&lt;/em&gt;- The command and arguments executed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ajays@ip-10-40-19-132 ~]$ ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
ajays    20244 20213  0 20:38 pts/0    00:00:00 -bash
ajays    29906 20244  0 22:12 pts/0    00:00:00 ps -f
[ajays@ip-10-40-19-132 ~]$
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
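&lt;p&gt;The PPID column can be followed upward to reconstruct a process's ancestry. A small sketch (a trailing = after a column name in -o suppresses that column's header):&lt;/p&gt;

```shell
#!/bin/sh
# Walk the parent chain of the current shell using the PPID field.
pid=$$
while [ "$pid" -gt 1 ]; do
  ps -o pid=,ppid=,comm= -p "$pid"
  pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
done
```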



&lt;p&gt;&lt;strong&gt;ps -ef&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we have attached one more option, &lt;strong&gt;-e&lt;/strong&gt; (a UNIX option).&lt;br&gt;
This is a selection option: it selects all processes.&lt;br&gt;
A snippet of this command's output looks like below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;5984      6011  5341  0 Jul31 ?        00:00:00 /opt/couchdb/bin/couchjs -S 268435456 /opt/couchdb/share/server/ma
5984      6012  5341  0 Jul31 ?        00:00:01 /opt/couchdb/bin/couchjs -S 268435456 /opt/couchdb/share/server/ma
5984      6013  5341  0 Jul31 ?        00:00:00 /opt/couchdb/bin/couchjs -S 268435456 /opt/couchdb/share/server/ma
5984      6014  5341  0 Jul31 ?        00:00:03 /opt/couchdb/bin/couchjs -S 268435456 /opt/couchdb/share/server/ma
5984      6015  5341  0 Jul31 ?        00:00:09 /opt/couchdb/bin/couchjs -S 268435456 /opt/couchdb/share/server/ma
5984      6016  5341  0 Jul31 ?        00:00:07 /opt/couchdb/bin/couchjs -S 268435456 /opt/couchdb/share/server/ma
5984      6017  5341  0 Jul31 ?        00:00:08 /opt/couchdb/bin/couchjs -S 268435456 /opt/couchdb/share/server/ma
ajays    23552 20244  0 23:07 pts/0    00:00:00 ps -ef
postfix  24642  3429  0 22:01 ?        00:00:00 pickup -l -t unix -u
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ps -af&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Now we have attached &lt;strong&gt;-a&lt;/strong&gt; (a UNIX option).&lt;br&gt;
It selects all processes except session leaders and processes not associated with a terminal.&lt;br&gt;
So the difference between &lt;strong&gt;-a&lt;/strong&gt; and &lt;strong&gt;-e&lt;/strong&gt; is evident: -e also returns processes which are not associated with a terminal, whereas -a only returns processes which are associated with a terminal.&lt;/p&gt;

&lt;p&gt;Session leaders are the processes where PID = SID.&lt;/p&gt;
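&lt;p&gt;Whether a process is a session leader can be checked by comparing the two IDs with the sid output column. A quick sketch (assuming procps ps):&lt;/p&gt;

```shell
#!/bin/sh
# List processes on this terminal together with their session IDs.
# A session leader is a row where PID equals SID (typically the login shell).
ps -o pid,sid,tty,comm
# Among all processes (-e), keep only the session leaders:
ps -e -o pid=,sid=,comm= | awk '$1 == $2'
```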

</description>
      <category>linux</category>
      <category>unix</category>
      <category>ps</category>
    </item>
    <item>
      <title>Why Go is better?</title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Fri, 19 Jul 2019 22:59:58 +0000</pubDate>
      <link>https://dev.to/ajayatgit/why-go-is-better-3p45</link>
      <guid>https://dev.to/ajayatgit/why-go-is-better-3p45</guid>
<description>&lt;p&gt;GoLang was conceived and developed by developers at Google.&lt;br&gt;
It is reported that it was conceived while they were waiting for a project to compile.&lt;/p&gt;

&lt;p&gt;Go, or GoLang as it is called, is a robust system-level language used for programming large-scale network servers and big distributed systems. Golang emerged as an alternative to C++ and Java for app developers, in the context of what Google needed for its network servers and distributed systems.&lt;/p&gt;

&lt;p&gt;Let's discuss, one by one, the factors which make Go different from the majority of existing programming languages.&lt;/p&gt;

&lt;h1&gt;
  
  
  1) Multithreading And Concurrency
&lt;/h1&gt;

&lt;p&gt;A vast majority of programming languages lack efficient concurrent execution when working with multiple threads, which often slows down the pace of programming, compiling and execution. This is where Go comes in as the most viable option, supporting both a multi-threading environment and concurrency.&lt;br&gt;
Over time, hardware manufacturers have kept adding more cores to systems to ensure better performance.&lt;br&gt;
Go was conceived at the time multi-core processors became widely available across sophisticated hardware. The creators of Go gave particular focus to concurrency: Go works with goroutines, which allow it to handle a large number of tasks concurrently.&lt;/p&gt;

&lt;h1&gt;
  
  
  2) Simplicity of Go
&lt;/h1&gt;

&lt;h2&gt;
  
  
  No Generics:
&lt;/h2&gt;

&lt;p&gt;Generics, or templates, which remain a mainstay of various programming languages, often add to the obscurity and difficulty of understanding code. The Go designers, by deciding to go without them, kept things simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  Single Executable:
&lt;/h2&gt;

&lt;p&gt;GoLang comes without a separate runtime library. It produces a single executable binary that can be deployed by just copying it. This removes the concerns of committing mistakes with dependencies or version mismatches.&lt;/p&gt;

&lt;h2&gt;
  
  
  No Dynamic Libraries:
&lt;/h2&gt;

&lt;p&gt;Go decided to do away with dynamic libraries to keep the language simple and straightforward. However, in the Go 1.10 version, developers are given the option to load dynamic libraries through the plugin package. This has been included only as an extended capability.&lt;/p&gt;

</description>
      <category>go</category>
    </item>
    <item>
      <title>Logical Components in blockchain</title>
      <dc:creator>Ajay Singh</dc:creator>
      <pubDate>Tue, 16 Jul 2019 13:54:27 +0000</pubDate>
      <link>https://dev.to/ajayatgit/logical-components-in-blockchain-32fe</link>
      <guid>https://dev.to/ajayatgit/logical-components-in-blockchain-32fe</guid>
      <description>&lt;p&gt;This article points out and gives an outline of the various logical components in the blockchain.&lt;/p&gt;

&lt;p&gt;At first look, the blockchain space appears confusing and complicated, and there is no doubt about it: the low-level programming and mathematics involved in creating and implementing a blockchain ecosystem are genuinely difficult.&lt;/p&gt;

&lt;p&gt;In this article, I will explain the purpose of the individual logical components that make up any blockchain ecosystem.&lt;/p&gt;

&lt;p&gt;To understand how blockchain technology works, irrespective of its flavor or application, it is necessary to understand the logical components of a blockchain ecosystem and their roles.&lt;/p&gt;

&lt;p&gt;The blockchain ecosystem can be logically segregated into these four logical components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;node application&lt;/li&gt;
&lt;li&gt;shared ledger&lt;/li&gt;
&lt;li&gt;consensus algorithm&lt;/li&gt;
&lt;li&gt;virtual machine&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Node Application&lt;br&gt;
Any machine that wants to be part of a blockchain ecosystem needs to run a node of that specific blockchain. Using Bitcoin as an example, the machine must be running the Bitcoin wallet application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Shared Ledger&lt;br&gt;
Consider it as a data structure, which is inside the node application. If you are running the Ethereum client, you can see the Ethereum ecosystem ledger and interact according to the rules of that ecosystem (smart contracts, payments, etc.). If you are running the Bitcoin client, you can participate in the Bitcoin ecosystem, according to the rules set out in the program code of the Bitcoin node application.&lt;br&gt;
You may run as many node applications as you like, but there will be only one shared ledger for a particular blockchain ecosystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consensus Algorithm&lt;br&gt;
This logical component provides the "rules and regulations" for how a single view of the shared ledger is reached in the blockchain ecosystem. Different ecosystems have different methods for attaining consensus, depending on the features the ecosystem needs. Well-known consensus algorithms include Proof of Work, Proof of Authority, Proof of Stake, and Proof of Elapsed Time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Virtual Machine&lt;br&gt;
This is the final logical component of the blockchain ecosystem. It acts as a bucket or container where all the other logical components rest, act and interact.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A virtual machine is a representation of a machine (real or imaginary) created by a computer program and operated with instructions embodied in a language. It is an abstraction of a machine inside a machine.&lt;/p&gt;

&lt;p&gt;This post was first originally posted here - &lt;a href="https://www.linkedin.com/pulse/logical-components-blockchain-ajay-singh/"&gt;https://www.linkedin.com/pulse/logical-components-blockchain-ajay-singh/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>dlt</category>
      <category>distributed</category>
    </item>
  </channel>
</rss>
