<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Samuyi</title>
    <description>The latest articles on DEV Community by Samuyi (@samuyi).</description>
    <link>https://dev.to/samuyi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F39890%2Fc5d9b116-0515-4f35-8930-42c609516e7e.jpg</url>
      <title>DEV Community: Samuyi</title>
      <link>https://dev.to/samuyi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/samuyi"/>
    <language>en</language>
    <item>
      <title>How To Setup Nginx For HLS Video Streaming On Centos 7</title>
      <dc:creator>Samuyi</dc:creator>
      <pubDate>Wed, 13 Feb 2019 21:26:31 +0000</pubDate>
      <link>https://dev.to/samuyi/how-to-setup-nginx-for-hls-video-streaming-on-centos-7-3jb8</link>
      <guid>https://dev.to/samuyi/how-to-setup-nginx-for-hls-video-streaming-on-centos-7-3jb8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;HLS stands for HTTP Live Streaming. It’s an HTTP-based media streaming protocol developed by Apple. Unlike UDP-based protocols such as RTP, it can’t be blocked by firewalls that only allow HTTP traffic. It can be delivered by HTTP servers such as Nginx and can be distributed through CDNs. &lt;/p&gt;

&lt;p&gt;The default install of Nginx doesn’t come compiled with an HLS module, but there’s an open source &lt;a href="https://github.com/arut/nginx-rtmp-module"&gt;Nginx module&lt;/a&gt; that supports HLS. We need to compile Nginx from source and add the module during compilation.&lt;/p&gt;

&lt;p&gt;This tutorial shows you how to install Nginx and use it as a video live streaming server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow along with this tutorial please ensure the following are present on the target machine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;git, wget, gcc, gcc-c++, perl, gd, gd-devel, perl-ExtUtils-Embed, geoip, geoip-devel and tar &lt;/li&gt;
&lt;li&gt;A non-root user with sudo capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don’t have the build utilities, install them by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo yum update
$ sudo yum install epel-release
$ sudo yum install git wget gcc gcc-c++ tar gd gd-devel perl-ExtUtils-Embed geoip geoip-devel
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1 – Download and Compile Nginx With Its Dependencies
&lt;/h2&gt;

&lt;p&gt;We need to download the dependency libraries for Nginx, including the open source &lt;a href="https://github.com/arut/nginx-rtmp-module"&gt;nginx-rtmp&lt;/a&gt; module that provides Nginx with HLS capabilities. First we download the &lt;a href="http://pcre.org/"&gt;PCRE&lt;/a&gt; library required by the Nginx &lt;a href="https://nginx.org/en/docs/ngx_core_module.html"&gt;Core&lt;/a&gt; and &lt;a href="https://nginx.org/en/docs/http/ngx_http_rewrite_module.html"&gt;Rewrite&lt;/a&gt; modules. Run this to do so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $    wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.42.tar.gz
 $   tar -zxf pcre-8.42.tar.gz
 $   rm -rf pcre-8.42.tar.gz
 $   cd pcre-8.42
 $   ./configure
 $   make
 $   sudo make install
 $   cd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next we download and install the &lt;a href="//www.zlib.net"&gt;zlib&lt;/a&gt; library required by the &lt;a href="https://nginx.org/en/docs/http/ngx_http_gzip_module.html"&gt;Nginx Gzip module&lt;/a&gt;. Run this to do so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wget http://zlib.net/zlib-1.2.11.tar.gz
$ tar -zxf zlib-1.2.11.tar.gz
$ rm -rf zlib-1.2.11.tar.gz
$ cd zlib-1.2.11
$ ./configure
$ make
$ sudo make install
$ cd 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Up next, we download the &lt;a href="http://www.openssl.org"&gt;OpenSSL&lt;/a&gt; library required by the &lt;a href="https://nginx.org/en/docs/http/ngx_http_ssl_module.html"&gt;Nginx SSL module&lt;/a&gt;. Run this to do so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wget http://www.openssl.org/source/openssl-1.0.2q.tar.gz
$ tar -zxf openssl-1.0.2q.tar.gz
$ rm -rf openssl-1.0.2q.tar.gz
$ cd openssl-1.0.2q
$ ./config
$ make
$ sudo make install
$ cd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We then download the open source &lt;a href="https://github.com/arut/nginx-rtmp-module"&gt;nginx-rtmp&lt;/a&gt; module from its GitHub repository. To do that, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone git://github.com/arut/nginx-rtmp-module.git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally we download the Nginx source. We’ll download the latest stable version, which as of this writing is 1.14.2, from &lt;a href="http://www.nginx.org/en/download.html"&gt;nginx.org&lt;/a&gt;. Run this to do so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wget https://nginx.org/download/nginx-1.14.2.tar.gz
$ tar zxf nginx-1.14.2.tar.gz
$ rm -rf nginx-1.14.2.tar.gz
$ cd nginx-1.14.2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have the necessary dependencies, we can configure the build. This is done by running the “./configure” script in the Nginx source directory with a host of options. The options include the paths to the nginx-rtmp, PCRE, zlib and OpenSSL sources we previously downloaded, and the built-in Nginx modules we want compiled. We run this to get the desired build options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./configure  --add-module=../nginx-rtmp-module \
--sbin-path=/usr/sbin/nginx \ 
--lock-path=/var/run/nginx.lock \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/run/nginx.pid \   
--with-pcre=../pcre-8.42 \    
--with-zlib=../zlib-1.2.11 \  
--with-openssl=../openssl-1.0.2q \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--user=nginx \                 
--group=nginx \                
--with-http_auth_request_module \
--with-http_degradation_module \
--with-http_geoip_module \     
--with-http_gunzip_module \    
--with-http_gzip_static_module \
--with-http_image_filter_module \
--with-http_mp4_module \
--with-http_perl_module \
--with-http_realip_module \
--with-http_secure_link_module \
--with-http_slice_module \
--with-http_ssl_module  \
--with-http_stub_status_module \
--with-http_v2_module \
--with-stream_ssl_module \
--with-stream \
--with-threads \
--prefix=/etc/nginx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, compile and install Nginx.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ make 
$ sudo make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To check that it was installed properly, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nginx -V #The output should be the Nginx version, compiler version, and configure script parameters.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We also need to install the Nginx man page; to do that, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo cp ~/nginx-1.14.2/man/nginx.8 /usr/share/man/man8
$ sudo gzip /usr/share/man/man8/nginx.8
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we clean up the libraries previously downloaded.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ rm -rf nginx-1.14.2 nginx-rtmp-module openssl-1.0.2q pcre-8.42 zlib-1.2.11
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2 – Set Up and Configure Nginx
&lt;/h2&gt;

&lt;p&gt;Now that the Nginx binary is installed in our search path, we need to set up an nginx user. To do that, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo useradd --system --home /var/lib/nginx --shell /sbin/nologin --comment "nginx system user" nginx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We also need to create the directory where Nginx logs are stored and make user nginx the owner. For that we run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo mkdir /var/log/nginx &amp;amp;&amp;amp;  sudo chown nginx:nginx /var/log/nginx 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With that done, it’s time to create the nginx systemd service unit file. Its contents should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [Unit]
  Description=nginx - high performance web server
  Documentation=https://nginx.org/en/docs/
  After=network.target remote-fs.target nss-lookup.target
  Wants=network-online.target
        [Service]
  Type=forking
  PIDFile=/run/nginx.pid
  ExecStartPre=/usr/bin/rm -f /run/nginx.pid 
  ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf
  ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf
  ExecReload=/bin/kill -s HUP $MAINPID
  KillSignal=SIGQUIT
  TimeoutStopSec=5
  KillMode=process
  PrivateTmp=true
     [Install]
  WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We now paste the above contents into the nginx service file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo vim  /lib/systemd/system/nginx.service   # You can replace vim with whichever editor you prefer
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we reload the systemd daemon and start Nginx.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl daemon-reload
$ sudo systemctl start nginx
$ sudo systemctl enable nginx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we need to configure Nginx to stream videos. Our nginx.conf file, located in the /etc/nginx/ directory, should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user  nginx;
worker_processes  auto;
 server_tokens off;
events {
    worker_connections  1024;
}
# We need to setup an rmtp server to stream video from client devices
rtmp {
    server {
      listen 1935;
      chunk_size 4096;
      ping 30s;
      notify_method get;
      allow play all;
       # rmtp handler our clients connect to for live streaming, it runs on port 1935. It converts the stream to HLS and stores it on our server
   application app {
          live on;
          hls on;   
          hls_path /var/www/hls/live;
          hls_nested on;  # create a new folder for each stream
          record_notify on;
          record_path /var/www/videos;
          record all;
          record_unique on;
     }

    application vod {
       play /var/www/videos;
    }
 }
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    sendfile        on;
    tcp_nopush     on;
    keepalive_timeout  65;
    gzip  on;
    server {
        listen       80;
        server_name  _;
        location / {
            root   html;
            index  index.html index.htm;
        }
          # the http end point our web based users connect to see the live stream
          location /live {
            types {
                application/vnd.apple.mpegurl m3u8; 
             }
                 alias /var/www/hls/live;
                add_header Cache-Control no-cache;
       }
   }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We also need to create the directories where our video streams and recordings are stored.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ sudo mkdir -p /var/www/hls/live
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can restart nginx to reload the new configuration file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl restart nginx

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3 – Stream and Publish Videos
&lt;/h2&gt;

&lt;p&gt;Now to live stream videos from a client machine, assuming the client has the video stored locally and &lt;a href="https://www.ffmpeg.org/"&gt;ffmpeg&lt;/a&gt; installed, we run this to publish to our server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ffmpeg -i /path/to/video  -c:v h264 -c:a aac  -strict -2 -f flv rtmp://server_ip:1935/app/unique_stream_name     #the name of the stream has to be unique  

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our viewers can watch the video in the &lt;a href="https://www.videolan.org/vlc/index.html"&gt;VLC media player&lt;/a&gt; by opening the URL &lt;code&gt;http://server_ip/live/unique_stream_name/index.m3u8&lt;/code&gt;. It is also possible to publish video from a webcam on a Linux client machine by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ffmpeg -f video4linux2 -i /dev/video0 -c:v libx264 -an -f flv rtmp://server_ip:1935/app/unique_stream_name
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;From a MacBook, it’s:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ffmpeg -f avfoundation -framerate 30 -i "0" -c:v libx264 -an -f flv rtmp://server_ip:1935/app/unique_stream_name
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And that’s how to set up HLS with Nginx. To go further, we could write an application in our favorite programming language to handle user authentication, and perhaps store videos on a media server. The key thing is that our users can live stream videos over regular HTTP.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>USING DATABASE TRIGGERS IN POSTGRESQL</title>
      <dc:creator>Samuyi</dc:creator>
      <pubDate>Wed, 09 Jan 2019 05:16:40 +0000</pubDate>
      <link>https://dev.to/samuyi/using-database-triggers-in-postgresql-2h89</link>
      <guid>https://dev.to/samuyi/using-database-triggers-in-postgresql-2h89</guid>
      <description>&lt;p&gt;Triggers are a part of the SQL standard and are very useful in applications that are data intensive in nature. Triggers are attached to database objects such as tables, views and foreign tables. They usually occur after some event such as an UPDATE, DELETE and INSERT has happened in a database. They help enforce constraints and monitoring of data. Triggers are classified according to whether they fire before, after, or instead of an operation. They’re referred to as BEFORE trigger , AFTER triggers, and INSTEAD OF triggers respectively. They can also be classified based on if they’re row level triggers or statement level triggers. Row level triggers are fired for each row of a table that’s affected by the statement that fired the trigger; triggers that fire based on UPDATE, DELETE OR INSERT may only be defined based on row level triggers. Statement level triggers fire only once after a statement has been executed even if the statement didn’t affect any rows. Triggers attached to TRUNCATE operation or views fire at the the statement level. To make a trigger in PostgreSQL we need to declare a stored function and then declare a trigger.&lt;/p&gt;

&lt;h2&gt;
  
  
  DEFINING A TRIGGER
&lt;/h2&gt;

&lt;p&gt;There is a whole range of possible ways to define a trigger in PostgreSQL, due to the numerous options available. In this article we’ll focus on a subset of features to get you started. We can define a trigger minimally this way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TRIGGER trigger_name { BEFORE | AFTER | INSTEAD OF } { UPDATE | INSERT | DELETE | TRUNCATE }
   ON table_name
   FOR EACH ROW EXECUTE PROCEDURE function_name()

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Here trigger_name is the name of the trigger, table_name is the table the trigger is attached to, and function_name is the name of the stored function. For the basics of writing stored functions, see my &lt;a href="https://dev.to/samuyi/a-primer-on-postgresql-stored-functions-plpgsql-1594"&gt;article&lt;/a&gt; on them.&lt;/p&gt;
&lt;h2&gt;
  
  
  PERFORMING DATA VALIDATION
&lt;/h2&gt;

&lt;p&gt;When writing application-level code, informative error messages make debugging a lot easier and faster. Constraint-violation error messages from databases are not the most informative and require some digging to understand. What if we could perform data validation and generate custom error messages when our requirements are not met? That would make an application developer’s life far easier. We can perform data validation with triggers and generate informative error messages for violations: we check whether the data meets our requirements before inserting it into the database, and if it doesn’t we abort the operation and raise an error; if it does, we continue with the operation.&lt;/p&gt;

&lt;p&gt;Suppose we have an example table called passwd that handles users in an application. We want to ensure that a user’s password is at least 10 characters long and not NULL, and that their name isn’t NULL.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
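&lt;p&gt;The embedded gist doesn’t render in this feed, so here is a minimal sketch of what such a table could look like; the column names are assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE passwd (
    user_name text,
    password  text
);
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;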



&lt;p&gt;We could attach a trigger to the table listening for ‘INSERT’ and ‘UPDATE’ operations, instead of defining constraints at the database level. First we define our stored function.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
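&lt;p&gt;The gist isn’t shown here; a sketch of such a validation function, with assumed function and column names, might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE FUNCTION check_passwd() RETURNS TRIGGER
AS $$
BEGIN
    IF NEW.user_name IS NULL THEN
        RAISE EXCEPTION 'name cannot be null';
    END IF;
    IF NEW.password IS NULL OR length(NEW.password) &amp;lt; 10 THEN
        RAISE EXCEPTION 'password must be at least 10 characters long';
    END IF;
    RETURN NEW;  -- continue with the INSERT or UPDATE
END;
$$ LANGUAGE plpgsql;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;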


&lt;p&gt;The stored function returns a ‘TRIGGER’; this is essential for all stored functions used in triggers. The function takes no arguments. Special variables are passed to the stored function by default. The ‘NEW’ and ‘OLD’ variables hold, respectively, the incoming row for an INSERT or UPDATE operation and the outgoing row for an UPDATE or DELETE operation. ‘TG_NAME’ and ‘TG_OP’ hold the name of the trigger that fired and the operation that fired it: ‘UPDATE’, ‘INSERT’ or ‘DELETE’. Others are ‘TG_WHEN’, ‘TG_TABLE_NAME’, and ‘TG_ARGV[]’: the first holds either ‘BEFORE’, ‘AFTER’ or ‘INSTEAD OF’ depending on the trigger definition, the second holds the name of the table the trigger fired on, and the third holds the arguments passed in the trigger definition. More variables are passed to stored functions by default; see the &lt;a href="https://www.postgresql.org/docs/9.6/plpgsql-trigger.html"&gt;PostgreSQL documentation&lt;/a&gt; for more.&lt;/p&gt;

&lt;p&gt;Our function uses the ‘NEW’ variable to check whether the individual values of the new row meet our constraints, and if they don’t, we raise an exception with a meaningful message. The stored function must return either NULL or a row matching the structure of the table it fired for. In our case we return ‘NEW’, since it contains the row we’re inserting or updating. It is possible to change individual values of ‘NEW’ and return the changed ‘NEW’, which will then be inserted into the database. As an aside, we can use triggers to populate columns during an ‘INSERT’ operation; this is commonly done for full text search, where we need to populate a ‘tsvector’ column. &lt;/p&gt;

&lt;p&gt;Finally we define our main trigger.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
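&lt;p&gt;The gist isn’t shown here; with assumed table and function names, the trigger definition could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TRIGGER passwd_check
    BEFORE INSERT OR UPDATE ON passwd
    FOR EACH ROW EXECUTE PROCEDURE check_passwd();
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;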


&lt;p&gt;The trigger is executed for both INSERT and UPDATE statements. It fires before a new row is inserted or an existing row is updated; the ‘BEFORE’ keyword ensures that. &lt;/p&gt;

&lt;h2&gt;
  
  
  AUDITING CHANGES
&lt;/h2&gt;

&lt;p&gt;Suppose we have a banking application: how do we track changes made to the underlying data? How do we trace back the changes made when a rogue individual or a bug compromises our data? Certainly we wouldn’t want to guess what a value was before the malicious changes were made. What we need is a way to ensure that any changes are kept track of; a sort of history of the state of each row in a table. This is best done at the database level, with triggers. We create a separate table to keep track of the changes made to our account table. &lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
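&lt;p&gt;The gist doesn’t render here; a minimal sketch of such an audit table, with assumed column names, might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE account_audit (
    operation  text NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
    account_id integer,
    balance    numeric
);
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;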


&lt;p&gt;Again, our stored function for the audit trigger is as follows.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
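&lt;p&gt;The gist isn’t shown here; a sketch of such a function, assuming an account table with id and balance columns and the audit columns above (all names are assumptions), might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE FUNCTION account_audit() RETURNS TRIGGER
AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        -- OLD holds the row that is about to be deleted
        INSERT INTO account_audit (operation, account_id, balance)
             VALUES (TG_OP, OLD.id, OLD.balance);
        RETURN OLD;
    ELSE
        -- INSERT or UPDATE: NEW holds the incoming row
        INSERT INTO account_audit (operation, account_id, balance)
             VALUES (TG_OP, NEW.id, NEW.balance);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;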


&lt;p&gt;Our function is similar to the previous one. It returns a ‘TRIGGER’ and checks the special variable ‘TG_OP’ for the operation that caused it to fire. If it’s an ‘INSERT’, we insert the new row, held in ‘NEW’, into our audit table and finally return ‘NEW’. We do the same for an ‘UPDATE’ operation. For a ‘DELETE’ operation we insert the values of ‘OLD’, which contains the row that’s about to be deleted, and finally return ‘OLD’ to continue the delete operation.  &lt;/p&gt;

&lt;p&gt;Finally our trigger definition.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
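&lt;p&gt;The gist isn’t shown here; with assumed table and function names, the trigger definition could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TRIGGER account_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON account
    FOR EACH ROW EXECUTE PROCEDURE account_audit();
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;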


&lt;p&gt;The ‘AFTER’ keyword means that the operation that fired the trigger completes before our trigger fires. We could use foreign data wrappers to send the data from our audit trigger to a remote database, to prevent loss of the audit data if the database crashes.&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;Triggers are a powerful feature that can improve the efficiency of any application using a database. Even MongoDB has added the &lt;a href="https://docs.mongodb.com/stitch/triggers/database-triggers/"&gt;feature&lt;/a&gt;. The only case where triggers become a nuisance is cascading triggers, that is, triggers that fire other triggers; in most cases that represents bad database design and can be difficult to maintain. That being said, triggers can be a force for real good in any application.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>database</category>
      <category>postgres</category>
      <category>linux</category>
    </item>
    <item>
      <title>A PRIMER ON POSTGRESQL STORED FUNCTIONS (PL/pgSQL)</title>
      <dc:creator>Samuyi</dc:creator>
      <pubDate>Mon, 07 Jan 2019 16:57:08 +0000</pubDate>
      <link>https://dev.to/samuyi/a-primer-on-postgresql-stored-functions-plpgsql-1594</link>
      <guid>https://dev.to/samuyi/a-primer-on-postgresql-stored-functions-plpgsql-1594</guid>
      <description>&lt;p&gt;Postgresql functions extends the SQL language; it adds features that are normally found in programming languages such as control statements and loops to make an application developer’s life easier; they are stored and executed completely on a database server. Using functions means that you don’t have write ineffective code that would be a bottle neck in your application. Say for instance you need to fetch some data from a database for some computation and based on the result of the computation you need to fetch some extra data do some more computation and store the results in the database. This will require several calls to the database, worse off if the database server exists on a separate host as the application server then the network call adds to the execution time of the process. Complex logic like previously described can be placed in a function and executed all at once on the database server, removing all the unnecessary intermediate network calls.  &lt;/p&gt;

&lt;h2&gt;
  
  
  PL/pgSQL
&lt;/h2&gt;

&lt;p&gt;PostgreSQL is quite flexible when it comes to defining functions: it lets you write functions in almost any of your favorite programming languages. There’s a module for &lt;a href="https://www.postgresql.org/docs/9.6/plpython.html"&gt;python&lt;/a&gt;, &lt;a href="http://www.joeconway.com/plr/doc/doc.html"&gt;R&lt;/a&gt;, &lt;a href="https://plv8.github.io/"&gt;javascript&lt;/a&gt; and, not least, &lt;a href="https://tada.github.io/pljava/"&gt;Java&lt;/a&gt;. The modules for these languages don’t come installed on the database by default. Another option is PL/pgSQL: it’s based on the SQL language and has come installed by default since version 9.0. PL/pgSQL functions look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  CREATE FUNCTION test_func(integer, text) RETURNS integer
   AS $$
     /* function body text  goes here */
   $$
   LANGUAGE plpgsql;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Basically we define a function called test_func with the CREATE FUNCTION syntax. The test function takes two arguments, an integer and a text value, and we specify that we return an integer with RETURNS. We indicate the start of the function body with AS; the $$ marks the beginning and end of the function body. The body need not be quoted with $$; regular single quotes work too, but any single quotes used within the function body would then have to be escaped. Lastly we declare the procedural language we’re using with the keyword LANGUAGE; in our case it is plpgsql.  &lt;/p&gt;

&lt;p&gt;The function body takes the form of a block; the block is defined as such:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;       [ &amp;lt;&amp;lt; label &amp;gt;&amp;gt; ]
       [ DECLARE
         declarations ]
       BEGIN
        statements
       END [ label ];
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;The ‘&amp;lt;&amp;lt; label &amp;gt;&amp;gt;’ is an optional handle for the block. It is possible to nest blocks, so labels act as references to them. DECLARE, as the name implies, is for declaring variables used within the block. BEGIN and END wrap the main logic of the function. The general DDL syntax for a function declaration has many more options; you can check it out &lt;a href="https://www.postgresql.org/docs/9.6/sql-createfunction.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As an example, suppose we have an application that manages a car sales organization; tracking new vehicles, employees and sales. The tables for the application are given below.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
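&lt;p&gt;The gist with the table definitions doesn’t render in this feed; a plausible minimal schema, with assumed column names, is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE staff (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE cars (
    id    serial PRIMARY KEY,
    model text NOT NULL,
    price numeric,
    bonus numeric
);

CREATE TABLE sales (
    id          serial PRIMARY KEY,
    employee_id integer REFERENCES staff (id),
    car_id      integer REFERENCES cars (id),
    sale_date   timestamptz DEFAULT CURRENT_TIMESTAMP
);
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;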



&lt;p&gt;For our first function suppose we want to insert values into our sales table, we could go about it this way:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
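&lt;p&gt;The gist isn’t shown here; a sketch of such a function, using EXECUTE … INTO … USING and RETURN QUERY as described below (the function and column names are assumptions), might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE FUNCTION sales_func(emp_id integer, car integer) RETURNS SETOF sales
AS $$
DECLARE
    new_id integer;
BEGIN
    -- dynamic SQL: $1 and $2 are bound to the values after USING
    EXECUTE 'INSERT INTO sales (employee_id, car_id) VALUES ($1, $2) RETURNING id'
       INTO new_id
      USING emp_id, car;
    -- return the newly inserted row(s) as a SETOF sales
    RETURN QUERY SELECT * FROM sales WHERE id = new_id;
END;
$$ LANGUAGE plpgsql;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;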


&lt;p&gt;Our function takes two arguments, an employee id and a car id, both integers. It returns a ‘SETOF sales’, which means we’re returning a set of rows of the type ‘sales’; every table in PostgreSQL is a composite type consisting of the types of its individual columns. Next we ‘DECLARE’ the variables used within the function. To execute dynamic SQL commands we use the ‘EXECUTE’ keyword: the output of the command is put into the variables following the ‘INTO’ keyword, and the values substituted into the SQL statement follow the ‘USING’ keyword. The ‘RETURN QUERY’ keyword is used to return the ‘SETOF sales’; since we’re returning a set of records, we execute a select statement that returns the necessary records. Notice that if we were to carry out this logic without stored functions, we would have to make several round trips to the server. To execute our function, all we need to do is run it like any other built-in database function, passing in the necessary arguments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT sales_func(1, 2);
SELECT sales_func(2, 3);
SELECT sales_func(6, 2);
SELECT sales_func(1, 3);
SELECT sales_func(4, 1);
SELECT sales_func(3, 1);
SELECT sales_func(5, 3);
SELECT sales_func(4, 2);
SELECT sales_func(6, 2);
SELECT sales_func(5, 2);
SELECT sales_func(2, 2);
SELECT sales_func(3, 2);
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Our next function populates our sales summary table, which contains total sales and bonus figures for each employee for a quarter. &lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
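&lt;p&gt;The gist isn’t shown here; a sketch of such a function under the assumed schema (the summary table, its columns and the function name are assumptions), might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE FUNCTION summarize_sales(start_date timestamptz DEFAULT CURRENT_TIMESTAMP)
RETURNS TABLE (employee_name text, total_sales bigint, bonus numeric)
AS $$
DECLARE
    employee record;
    end_date timestamptz := start_date + interval '3 months';
BEGIN
    -- loop over each employee's sales figures for the quarter
    FOR employee IN
        SELECT st.id, st.name, count(*) AS total, sum(c.bonus) AS bonus
          FROM sales s
          JOIN staff st ON st.id = s.employee_id
          JOIN cars c   ON c.id  = s.car_id
         WHERE s.sale_date &amp;gt;= start_date AND s.sale_date &amp;lt; end_date
         GROUP BY st.id, st.name
    LOOP
        RAISE NOTICE 'employee % made % sales', employee.name, employee.total;
        INSERT INTO sales_summary (employee_id, total_sales, bonus)
             VALUES (employee.id, employee.total, employee.bonus);
    END LOOP;

    -- remove the sales rows that have been summarized
    DELETE FROM sales WHERE sale_date &amp;gt;= start_date AND sale_date &amp;lt; end_date;

    RETURN QUERY SELECT st.name, ss.total_sales, ss.bonus
                   FROM sales_summary ss
                   JOIN staff st ON st.id = ss.employee_id;
END;
$$ LANGUAGE plpgsql;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;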



&lt;p&gt;This function is a little more involved; it uses a ‘LOOP’. The loop iterates over every employee from the staff table and summarizes their sales figures for the quarter. The ‘LOOP’ used here is similar to the loops found in modern programming languages: it iterates through the results of the SQL statement and assigns each row to the employee variable, which is of record type from our declarations. To get a value from the employee variable we use a dot to access the columns of the result. ‘RAISE NOTICE’ serves as a sort of print statement within the function and can be useful for debugging. After the loop we delete the data that has been processed within it. Our function returns a custom table, with column types matching our final ‘SELECT’ statement; returning a table is a way of returning custom records when we don’t want to return every column of a table. The function’s argument has a default value; defaults can be used just as they are when defining relations. Our argument defaults to ‘CURRENT_TIMESTAMP’. The ‘end_date’ variable corresponds to a date 3 months from ‘start_date’; we add the period of three months to the start_date variable using interval ‘3 months’. &lt;/p&gt;

&lt;p&gt;Our final function is used for updating the values of the bonus for a car. &lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
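&lt;p&gt;The gist isn’t shown here; a sketch of the bonus-update function, using PERFORM and FOUND as described below (the function and parameter names are assumptions), might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE FUNCTION update_bonus(p_car_id integer, p_bonus numeric) RETURNS cars
AS $$
DECLARE
    car cars;
BEGIN
    -- PERFORM runs the query and discards the rows; FOUND is set if any matched
    PERFORM 1 FROM cars WHERE id = p_car_id;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'car with id % does not exist', p_car_id;
    END IF;
    IF p_bonus &amp;gt; 0.13 THEN
        RAISE EXCEPTION 'bonus cannot be greater than thirteen percent';
    END IF;
    UPDATE cars SET bonus = p_bonus WHERE id = p_car_id RETURNING * INTO car;
    RETURN car;
END;
$$ LANGUAGE plpgsql;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;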


&lt;p&gt;First we check that the car id exists. When we execute an SQL statement that returns rows and want to discard them, we need to use the ‘PERFORM’ keyword, otherwise there would be an error. In our case, since we’re selecting from a table, we replace the ‘SELECT’ with a ‘PERFORM’. If the query returns any rows, the special variable FOUND is set to true. We raise an exception if the car id doesn’t exist, and we also check that the bonus is not greater than thirteen percent. Our function returns a record of type cars, so we create a variable car of composite type cars and insert the updated car into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;Using PL/pgSQL functions can make for more concise and efficient applications. This is especially true for data-intensive applications, particularly where security is paramount. That being said, using PL/pgSQL for all application logic would probably be a bad idea; for simple logic it’s probably best to stick with an ORM. &lt;/p&gt;

&lt;p&gt;The features shown in this article are just the tip of the iceberg as regards what can be done with stored functions; visit the &lt;a href="https://www.postgresql.org/docs/9.6/plpgsql.html"&gt;postgres documentation&lt;/a&gt; for more.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>linux</category>
      <category>sql</category>
      <category>database</category>
    </item>
    <item>
      <title>How To Avoid Merge Conflicts</title>
      <dc:creator>Samuyi</dc:creator>
      <pubDate>Mon, 10 Dec 2018 16:34:25 +0000</pubDate>
      <link>https://dev.to/samuyi/how-to-avoid-merge-conflicts-3j8d</link>
      <guid>https://dev.to/samuyi/how-to-avoid-merge-conflicts-3j8d</guid>
      <description>&lt;p&gt;Merge conflicts are every programmers nightmare when collaborating with other developers on a project using git. They are bound to happen as long as you have different developers working on a project. They take time to fix and are darn annoying. Resolving merge conflicts could takes up precious man hours with teams that are very active. It’s usually a sign that there is poor communication and coordination among project members if they happen frequently. In the case were communication is pretty good but every now and then a merge conflicts crops up, is the subject of this article. What we want is to cut all instances of merge conflict if possible. Below I list steps to avoid merge conflicts all together. &lt;/p&gt;

&lt;h2&gt;
  
  
  Use a diff tool
&lt;/h2&gt;

&lt;p&gt;It’s always a good idea to compare branches with a diff tool; this can help spot potential trouble spots before merging. To use one, you first need to configure git like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config –global diff.tool &amp;lt;diff-tool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The command to diff two branches is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git difftool  branch1 branch2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A diff tool needs to have been set before you run the above command. Diff tools can also be used to easily resolve conflicts during a merge. Diffing two branches before a merge goes a long way toward avoiding merge conflicts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use git fetch
&lt;/h2&gt;

&lt;p&gt;Doing a git fetch, as opposed to a git pull, on origin can save you a load of headaches in the form of merge conflicts. Unfortunately many graphical tools for git only perform a git pull. A git pull consists of a git fetch and a git merge, which means you don’t get to inspect the changes before they’re merged into your local branch. Doing a git fetch gives you an opportunity to do a git diff between your local branch and the remote branch, spotting potential conflicts in the process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git fetch
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Personally I use a diff tool called &lt;a href="http://kdiff3.sourceforge.net/"&gt;k-diff3&lt;/a&gt; to compare branches; it gives a nice graphical display of the changes between them, and there are numerous other diff tools in the wild if that’s not to your liking. After inspecting the changes, and communicating with the developer who made any that aren’t clear, you can go ahead and merge them into your local branch. With this approach the majority of merge conflicts can be resolved even before the actual merge.&lt;/p&gt;
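The fetch-then-inspect workflow described above might look like this; the remote name origin and branch name master are assumptions about your setup:

```shell
git fetch origin                     # update remote-tracking refs, merge nothing
git diff master origin/master        # textual diff between local and remote state
git difftool master origin/master    # the same comparison in your configured diff tool
git merge origin/master              # merge only once the changes look right
```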

&lt;h2&gt;
  
  
  Use git rerere
&lt;/h2&gt;

&lt;p&gt;If you’re a developer who likes to merge daily or maybe more often, you’ll encounter situations where you keep resolving the same conflict again and again. git rerere is the solution you need. The name means “reuse recorded resolution”. Basically it records how a merge conflict was resolved and reuses that resolution when the same conflict happens again, so we don’t spend time solving recurring merge conflicts. To use git rerere it needs to be enabled through:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config rerere.enabled true 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once it’s enabled, whenever a merge conflict matches a scenario that has been previously recorded, rerere automatically resolves it for you.&lt;/p&gt;
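You can also enable rerere once, globally, rather than per repository; the inspection subcommands below are standard git, shown as comments since they only make sense in the middle of a conflicted merge:

```shell
# Enable reuse of recorded resolutions in every repository for this user.
git config --global rerere.enabled true
# During a conflicted merge you can then inspect what rerere is recording:
#   git rerere status   # paths rerere is tracking in the current conflict
#   git rerere diff     # the state of the resolution being recorded
```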

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If merge conflicts keep cropping up, it’s usually a sign that communication and coordination are poor within the project team. In situations like this none of the above solutions will fully work until coordination and communication are improved within the team.&lt;/p&gt;

</description>
      <category>git</category>
      <category>linux</category>
      <category>development</category>
    </item>
    <item>
      <title>The How To Of Port Forwarding With SSH</title>
      <dc:creator>Samuyi</dc:creator>
      <pubDate>Mon, 19 Nov 2018 13:56:33 +0000</pubDate>
      <link>https://dev.to/samuyi/the-how-to-of-ssh-port-forwarding-1f4e</link>
      <guid>https://dev.to/samuyi/the-how-to-of-ssh-port-forwarding-1f4e</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhbaq02d8xax55wvdhpq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhbaq02d8xax55wvdhpq.gif" alt="Port Forwarding vs Direct communication" width="336" height="156"&gt;&lt;/a&gt;&lt;br&gt;
Port forwarding is a type of interaction between two applications, usually TCP/IP applications, that talk to each other through an SSH connection. SSH intercepts a service request from a client application on a host and creates an SSH session carrying the request to the other side of the SSH connection, where it is decrypted and passed on to the application server on the remote host. Port forwarding can be used to secure communications between applications that aren’t traditionally secured. It can also enable communications that otherwise wouldn’t be possible: for instance, IT administrators block external access to certain ports on hosts with firewalls to improve security, and with port forwarding it becomes possible to reach the applications running on those hosts. In a previous &lt;a href="https://dev.to/samuyi/ssh-agents-in-depth-4116"&gt;post&lt;/a&gt; we talked about a different type of forwarding called ssh-agent forwarding. That lets us create SSH connections from one computer, through a remote host, to a third remote host using public-key authentication, without the need to have your private keys on the intermediate host. Port forwarding is sometimes referred to as “tunneling” because it provides a means by which you can secure TCP/IP connections through SSH.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F322xecltbwlomur5al70.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F322xecltbwlomur5al70.gif" alt="local port forwarding" width="391" height="181"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Local Port Forwarding
&lt;/h2&gt;

&lt;p&gt;Suppose you have an IMAP server running on a remote host and you want to access it using an email client on your home machine. Suppose also that the administrators of the remote host are super paranoid and block all external access except on ports 80, 443 and 22. Since IMAP runs on port 143, you can’t access it from home. All you need to do is tunnel through to the IMAP server using SSH. The command you need to run on your local machine is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -L2001:localhost:143  remote.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down the above command. The -L switch specifies local forwarding; this essentially says the TCP client (your email client) is on your local machine. The 2001 is the port on your local machine you want your email client to connect to. The localhost means the connection’s source sockets on the server appear to come from localhost. This means the above command could also be written as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -L2001:remote.net:143 remote.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the source packets would appear to come from the remote.net address. This may be trivial for some TCP applications, but some servers are configured to do access control and block connections from the loopback address. Or the server might be running on a multi-homed host and have bound only a subset of the host’s addresses, possibly not including the loopback address. It’s generally better to use the first command. Finally, 143 is the port the IMAP server listens on at the remote end.&lt;/p&gt;

&lt;p&gt;By default your SSH client only listens for connections from your local machine; that is, it only accepts connections from applications running locally. Any attempt by external applications to connect over the wire to your SSH client will fail. To allow such connections you need to tweak the command above like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh  -g -L2001:localhost:143  remote.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The -g switch corresponds to the GatewayPorts option in the client configuration file; if that option is set to yes there’s no need for the -g flag.&lt;/p&gt;
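Before pointing the email client at the tunnel, you can sanity-check it from the local machine. This is a sketch: remote.net is the article's example host, -f backgrounds ssh after authentication, -N skips running a remote command, and nc flag support varies between netcat variants:

```shell
ssh -f -N -L2001:localhost:143 remote.net   # set up the forward in the background
nc -vz localhost 2001                       # a successful connect means the tunnel is up
```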

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq43usfhnvh7cok98318c.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq43usfhnvh7cok98318c.gif" alt="remote port forwarding with ssh" width="391" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Remote Port Forwarding
&lt;/h2&gt;

&lt;p&gt;As an example of remote forwarding: suppose we have an email client running on a remote server shell.isp.net and we want to access an IMAP server on another remote server remote.host.net that has an SSH server installed; remote forwarding is the way to go in this case. The command to establish remote forwarding is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -R2001:localhost:143  remote.host.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The syntax is similar to local port forwarding with the only difference being the -R switch instead of the -L in local port forwarding. &lt;/p&gt;

&lt;p&gt;There are subtle differences between remote and local forwarding. The main one is which side listens: in local forwarding the SSH client listens for communication from the application client, and therefore usually resides on the same box as the application client; in remote forwarding the SSH server listens for communication from the application client, so the SSH server and the application client reside on the same host.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic Port Forwarding
&lt;/h2&gt;

&lt;p&gt;You may ask: is it possible to tunnel HTTP connections on port 80 to access an insecure website using local forwarding? The answer is yes, but many restrictions come with it. Since the tunnel terminates at localhost, following an absolute URL such as “&lt;a href="http://inseure/web/now.html%E2%80%9D" rel="noopener noreferrer"&gt;http://inseure/web/now.html”&lt;/a&gt; would not work, because the browser only knows about localhost; and proxying requests through your browser this way won’t work for connections on ports other than 80. What would work for us in this case is dynamic port forwarding. &lt;a href="https://en.wikipedia.org/wiki/SOCKS" rel="noopener noreferrer"&gt;SOCKS&lt;/a&gt; is a dynamic forwarding protocol; it is used by browsers such as the &lt;a href="https://www.torproject.org/" rel="noopener noreferrer"&gt;tor&lt;/a&gt; browser to proxy requests, tunnel through to websites censored in certain countries, and protect the user’s privacy. A SOCKS client connects via TCP and indicates, via the protocol, the remote socket it wants to reach; the SOCKS server makes the connection, then gets out of the way, transparently passing data back and forth. The command to enable this with SSH is run like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -D 1080 remote.host.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a user types an absolute URL, including any port, into the browser, such as “&lt;a href="http://myweb:1890%E2%80%9D" rel="noopener noreferrer"&gt;http://myweb:1890”&lt;/a&gt;, the browser connects to the SSH SOCKS proxy on port 1080 and asks for a connection to myweb:1890. The SSH client associates the browser’s connection with a new SSH session and connects through the SSH server. The SSH client and server then essentially get out of the way, and the browser effectively talks directly to the web server. Each new connection to a different web site gets assigned a new socket by SSH.&lt;/p&gt;
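With the SOCKS proxy from "ssh -D 1080 remote.host.net" running, any SOCKS-aware client can use it; for example, curl can route a single request through it (example.com stands in for whichever site you want to reach):

```shell
# --socks5-hostname also resolves DNS on the far side of the tunnel,
# so the lookup happens from remote.host.net's point of view.
curl --socks5-hostname localhost:1080 https://example.com/
```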

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Port forwarding is a general TCP proxying feature that tunnels TCP connections through an SSH session. It is useful for securing protocols that may not be secure and for tunneling through to TCP connections that may be blocked by firewalls. However, some TCP protocols may not work properly with port forwarding, such as FTP: it so happens that FTP opens random ports on the client after authentication, which makes port forwarding unnecessarily complex. By and large, though, the majority of TCP protocols do work with port forwarding.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>linux</category>
      <category>ssh</category>
    </item>
    <item>
      <title>SSH Agents In Depth</title>
      <dc:creator>Samuyi</dc:creator>
      <pubDate>Wed, 14 Nov 2018 06:46:49 +0000</pubDate>
      <link>https://dev.to/samuyi/ssh-agents-in-depth-4116</link>
      <guid>https://dev.to/samuyi/ssh-agents-in-depth-4116</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmupxjbuh6w7ujif7e0br.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmupxjbuh6w7ujif7e0br.gif" alt="Figure showing how SSH agent work" width="425" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a previous &lt;a href="https://dev.to/samuyi/using-ssh-agent-to-simplify-your-ssh-experience--1in8"&gt;post&lt;/a&gt; we talked about SSH key management best practices, which involved using ssh-agent to store decrypted private keys and automate authentication of an SSH client. In this post we will do a deep dive into how SSH agents work, including some edge cases in their use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting An Agent
&lt;/h2&gt;

&lt;p&gt;SSH agent has some puzzling behavior that can surprise even the most seasoned of system administrators. There isn’t just one command syntax for running an agent, and there are caveats associated with each way.&lt;/p&gt;

&lt;p&gt;There are two ways to start an agent: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single-shell method; uses your current login shell on a single terminal&lt;/li&gt;
&lt;li&gt;Subshell method; forks a subshell that inherits the necessary environment variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the first approach the agent is run in your current login shell. This is done by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The variables printed to the shell after the command above runs need to be set as environment variables. The command forks ssh-agent into the background, detaching it from the terminal and returning the prompt to you. Beware that the agent doesn’t get killed when your SSH sessions terminate, or even when you log out, unless you explicitly terminate it. So if you start another terminal session and run ssh-agent and ssh-add again, another process gets forked in the background. Soon you have a series of agents running in the background doing nothing. To avoid this, use the second method.&lt;/p&gt;
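A common way to apply those printed variables without copying them by hand is to eval the agent's output; this is a sketch of the idiom, with the key path as an assumption:

```shell
eval "$(ssh-agent -s)"      # start the agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
# ssh-add ~/.ssh/id_rsa     # load a key (uncomment; path is an assumption)
eval "$(ssh-agent -k)"      # explicitly kill this agent when you are done
```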

&lt;p&gt;The second approach spawns a subshell from your current shell. All you need to do is provide a path to a shell or shell script, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh-agent  $SHELL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time instead of forking a background process, the ssh-agent runs in the foreground, spawning a subshell and setting the necessary environment variables. The rest of your login session runs in the subshell and when you logout the agent process gets terminated. The agent can also be terminated by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh-agent -k  #this won’t work if you started ssh-agent with the first approach
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the agent is up and running we add keys to it. This is done with the ssh-add command, giving the path to the identity file (private key) as a parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ssh-add  /home/you/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your passphrase is requested once by ssh-add to load up the private key. There are numerous command-line options, including ones for listing and deleting the keys held by the agent. Look up the man page for ssh-add on your favorite distribution to see the full list. &lt;/p&gt;
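A few of those ssh-add options (all standard OpenSSH flags; the key path is an assumption, and each command needs a running agent to talk to):

```shell
ssh-add -l                      # list fingerprints of the keys the agent holds
ssh-add -d ~/.ssh/id_rsa        # remove one key from the agent
ssh-add -D                      # remove all keys from the agent
ssh-add -t 3600 ~/.ssh/id_rsa   # load a key that the agent forgets after an hour
```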

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zbpb0brtkfg89ffpvwg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zbpb0brtkfg89ffpvwg.gif" alt="figure showing how SSH agent forwarding works" width="289" height="257"&gt;&lt;/a&gt;                      &lt;/p&gt;

&lt;h2&gt;
  
  
  Agent Forwarding
&lt;/h2&gt;

&lt;p&gt;A scenario where agents are of particular use: suppose I want to copy a file from remote server shell.isp.com to another remote server other.host.net, SSH is installed on both of them, and both use public-key authentication. How do I achieve that? I could copy from shell.isp.com to my local system and then from my system to other.host.net, but that’s a waste of time. Since I have an account on both remote servers, what we could do instead is copy the file directly from shell.isp.com to other.host.net with a command like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ scp pat@shell.isp.com:print-me psmith@other.host.net:imprime-moi 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But if you run the above command it fails, because when shell.isp.com runs scp to do the actual copy there is no tty on which to enter the passphrase that decrypts the private key for other.host.net. All this also assumes you have your private key for other.host.net stored on shell.isp.com, which is a very bad idea: generally, the fewer Internet-accessible places your private key is located, the safer it is. If you must back up your private key, keep it on a portable disk kept safe from everyone else. Another use case, one that concerns developers, is pulling code from a repository on github to a remote server. If the repository is private and you have ssh access set up for your github account, using an agent makes life a lot easier than copying your github private keys to the remote server.&lt;/p&gt;

&lt;p&gt;Fortunately, agent forwarding easily solves all the above issues. The remote scp simply contacts the ssh-agent on your local machine for authentication, so your private keys never leave your local machine, assuming you have loaded the key for other.host.net into the agent. Similarly, if you have loaded the key for your github account into your local ssh-agent, authentication happens on your machine.&lt;/p&gt;

&lt;p&gt;To use agent forwarding both the client and the server must permit it. The socket opened by ssh-agent is responsible for the communication between server and client; the environment variable SSH_AUTH_SOCK points to the socket, the variable is created when the agent is initialized, and the socket itself is usually stored in the /tmp directory. To enable agent forwarding on the server, the option “AllowAgentForwarding” must be set to yes in your SSH server config file; likewise the option “ForwardAgent” must be set to yes in your SSH client config file. &lt;/p&gt;
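Put together, the two settings might look like this (file locations are the usual OpenSSH defaults; the host pattern is an assumption):

```shell
# Server side, in /etc/ssh/sshd_config:
#   AllowAgentForwarding yes
# Client side, in ~/.ssh/config:
#   Host shell.isp.com
#       ForwardAgent yes
```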

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;SSH agents are a very useful feature. They allow automation and improve the security of your systems. With agent forwarding it’s possible to hop from machine to machine, as the agent-forwarding relationship is transitive in nature: as long as each intermediate server has agent forwarding enabled, authentication will always take place on the client. This feature is especially useful in corporate firewalled networks where a bastion server facing the Internet prevents access to a private network of servers. With SSH agent forwarding it’s possible to reach behind the firewall and access those servers, provided SSH is installed on them and forwarding is enabled.&lt;/p&gt;

&lt;p&gt;Stray ssh-agent processes also consume computing resources, so it’s always good to make sure you don’t have idle agents running in the background.&lt;/p&gt;

&lt;p&gt;When using an SSH agent to hop from machine to machine, it’s important that no intermediate host is compromised, as an intruder with access there could use your forwarded agent to authenticate as you.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ssh</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Using SSH-Agent to Simplify Your SSH Experience</title>
      <dc:creator>Samuyi</dc:creator>
      <pubDate>Tue, 13 Nov 2018 15:36:55 +0000</pubDate>
      <link>https://dev.to/samuyi/using-ssh-agent-to-simplify-your-ssh-experience--1in8</link>
      <guid>https://dev.to/samuyi/using-ssh-agent-to-simplify-your-ssh-experience--1in8</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kmqpojry2uk5fq7x894.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kmqpojry2uk5fq7x894.gif" alt="Picture showing how ssh-agent works" width="425" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SSH keys are used for identity management between an SSH server and client. Basically there are two uses of keys in SSH: to identify a server and to identify a user.&lt;br&gt;
An SSH host key is used to identify a server; this ensures a client knows it’s talking to the right server. Usually host keys are stored in a secure repository maintained by a system administrator. When a server is provisioned, an administrator runs ssh-keyscan against the server’s address to get the fingerprints of its host keys, which are generated when the SSH server is first set up.&lt;br&gt;
The second use of SSH keys is to identify users to an SSH server. (If you’re using a password on your SSH server, better change to a public key to save your server from an impending dictionary attack.) Each time you try to access an SSH server you need to decrypt your private key, hence if you run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -l pat shell.isp.com
Enter passphrase for key '/home/you/.ssh/id_rsa': ************
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get prompted for your passphrase to decrypt your private key. It quickly gets annoying if you’re prompted for the passphrase every time you log out of your remote server temporarily, maybe for a bathroom break, and have to log in again. Wouldn’t it be so much better if you only got prompted once for your passphrase, perhaps when you boot up your workstation at the beginning of the day?&lt;/p&gt;

&lt;p&gt;Suppose you have dozens of SSH servers you need to do some maintenance work on with a script. Each time the script tries to SSH into one of the servers, you would have to enter the passphrase for that server’s private key. Assuming you have different keys for each server, which you absolutely should, it becomes a real pain in the butt to enter the passphrase for each private key. I know what you’re thinking: why not just store the passphrases on the client machine’s disk and feed them into the script to automate authentication? You absolutely shouldn’t do this. The passphrases could end up in the history file; anyone with access to your machine who runs ps while the script is running will see the passphrases in the commands the script executes; even encrypting them on disk won’t save you; and worst of all, there is no way to find out whether the passphrases have been compromised. What you need is an ssh-agent, which is the subject of this article.&lt;/p&gt;

&lt;p&gt;An ssh-agent is a program that caches private keys and responds to authentication-related queries from SSH clients. It works with another program called ssh-add, which adds and removes keys from the agent’s key cache, to save you from having to retype your passphrases each time you authenticate to an SSH server. A typical use might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start an agent  for bash like shells
$ ssh-agent $SHELL
# Load an SSH key
$ ssh-add .ssh/id_rsa
Enter passphrase for /home/you/.ssh/id_rsa:**********
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By typing your passphrase once you save yourself from having to decrypt the keys each time you authenticate to an SSH server. Your private keys are now stored in memory by the agent, and they remain there until you log out or terminate the agent. SSH clients now contact the agent for all key-based operations.&lt;/p&gt;

&lt;p&gt;Agents perform two tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store your private keys in memory&lt;/li&gt;
&lt;li&gt;Answer questions from SSH clients about those keys&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They neither store your private keys on disk nor send them across the network. Anything related to private keys that the SSH client needs answered is handled by the agent.&lt;/p&gt;

&lt;p&gt;Back to our hypothetical scenario of logging into numerous servers to carry out maintenance work. A human only needs to load the agent once with all the necessary keys for the various servers, allowing the maintenance scripts to carry out their functions unattended. Of course, there is a complexity trade-off here: if you have 100 batch jobs, separate accounts or keys for each one may be too much to deal with. In that case, partition the jobs into categories according to the privileges they need, and use a separate account and/or key for each category of job. Better still, you can store the passphrases for each of the servers on an external disk, mounting it only when the need arises. As long as the system running the ssh-agent doesn’t reboot, there will be no need to enter the passphrases again.&lt;/p&gt;

&lt;p&gt;SSH agents are pretty much safe. Since the private keys are stored only in memory, it would take a very skilled attacker with root access to steal them. However, there are best practices to follow when using an agent. It is best not to leave your terminal unattended while logged into your SSH client machine: while your private keys are loaded in an agent, anyone may use your terminal to connect to any remote accounts accessible via those keys, without needing your passphrase! Even worse, a sophisticated intruder may succeed in stealing your keys from your system. If you must step away from your SSH client machine, ensure you log out. Even better, you can run ssh-add -D to clear all keys loaded into your agent while you’re away, and load them back in when you return.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;SSH agents are a powerful tool for automation while using SSH. They save you from typing passphrases each time you want to decrypt a private key to access an SSH server. They come installed with the majority of SSH clients, since they’re pretty much part of the SSH protocol.&lt;/p&gt;

</description>
      <category>ssh</category>
      <category>openssh</category>
      <category>security</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
