<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Celestine Omin</title>
    <description>The latest articles on DEV Community by Celestine Omin (@cyberomin).</description>
    <link>https://dev.to/cyberomin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F17609%2F4fcb846e-42c8-47d9-96fe-8159987940a1.jpg</url>
      <title>DEV Community: Celestine Omin</title>
      <link>https://dev.to/cyberomin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cyberomin"/>
    <language>en</language>
    <item>
      <title>Terraform: Infrastructure as code - Part II</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Sat, 24 Jun 2017 04:00:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/terraform-infrastructure-as-code---part-ii</link>
      <guid>https://dev.to/cyberomin/terraform-infrastructure-as-code---part-ii</guid>
      <description>&lt;p&gt;In Part I, I introduced us to the concept of IAC(Infrastructure as Code) using Terraform, and we explored the awesomeness of Terraform. While the code we used in Part I for provisioning a simple server did work very well, the system we eventually provisioned is hardly a scalable system and not one I’ll recommend for production use. The reasoning here is simple, running a single server for your entire application is almost as bad as lighting a match in a gas station, bad things can and will definitely happen. And I’ll strongly suggest that you refrain from this setup.&lt;/p&gt;

&lt;p&gt;In this second part of our IaC series, we will build upon what we started in Part I and try to put together a semi-highly available system. In Code X of Part I, we created one server with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_droplet" "web" {
  image = "ubuntu-16-04-x64"
  name = "web-1"
  region = "lon1"
  size = "1gb"
  ssh_keys = ["${digitalocean_ssh_key.default.id}"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Code I&lt;/p&gt;

&lt;p&gt;Now let’s expand on this code and create two web servers. To do this, we will add a meta-parameter called &lt;code&gt;count&lt;/code&gt;. The count parameter can be added to any resource and it simply creates more of the declared resource based on the count value. For instance, we will add &lt;code&gt;count = 2&lt;/code&gt; to create two web servers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “digitalocean_droplet “web {
  image = “ubuntu-16-04-x64”
  name = “web-1”
  count = 2
  region = “lon1”
  size = “1gb”
  ssh_keys = ["${digitalocean_ssh_key.default.id}”]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Code II&lt;/p&gt;

&lt;p&gt;Notice the introduction of &lt;code&gt;count = 2&lt;/code&gt; in the code above? That’s how we create two web servers. If we wanted ten web servers, all we need to do is change the value of count to 10; if we need 20 servers, we change it to 20. You get the idea.&lt;/p&gt;
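&lt;p&gt;One thing to be aware of: the code above gives every droplet the same &lt;code&gt;name&lt;/code&gt;. If you want each droplet to carry a distinct hostname, a common pattern (a sketch of my own, not from Part I) is to interpolate the zero-based &lt;code&gt;count.index&lt;/code&gt; attribute into the name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_droplet" "web" {
  image = "ubuntu-16-04-x64"
  # count.index starts at 0, so this names the droplets web-1 and web-2
  name = "web-${count.index + 1}"
  count = 2
  region = "lon1"
  size = "1gb"
  ssh_keys = ["${digitalocean_ssh_key.default.id}"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;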

&lt;p&gt;The next thing we will do to build our semi-highly available system is create a load balancer and distribute traffic to all of our servers. Terraform provides us with a &lt;code&gt;digitalocean_loadbalancer&lt;/code&gt; resource, and we will make use of it. Adding a load balancer is simple: we declare a load balancer resource, give it a name, choose a region, apply the traffic forwarding rules, add a health check from the load balancer to the attached machines and finally, attach our Digital Ocean droplets to the load balancer. That simple.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “digitalocean_droplet “web {
  image = “ubuntu-16-04-x64”
  name = “web-1”
  count = 2
  region = “lon1”
  size = “1gb”
  ssh_keys = ["${digitalocean_ssh_key.default.id}”]
}

resource “digitalocean_loadbalancer “public_lb {
  name = “web-lb”
  region = “lon1”

  forwarding_rule {
    entry_port = 80
    entry_protocol = “http”

    target_port = 80
    target_protocol = “http"
  }

  algorithm = "round_robin"

  droplet_ids = ["${digitalocean_droplet.web.id}”]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Code III&lt;/p&gt;

&lt;p&gt;From the code above, we added a public load balancer to our infrastructure. The &lt;em&gt;name&lt;/em&gt; and &lt;em&gt;region&lt;/em&gt; are pretty self-explanatory, and both are required, as is the forwarding rule. The forwarding rule tells the load balancer how to send traffic. The protocol of choice for our load balancer is HTTP, which has a default port of 80. The &lt;code&gt;entry_protocol&lt;/code&gt; and &lt;code&gt;entry_port&lt;/code&gt; state how traffic is sent to the load balancer, while the &lt;code&gt;target_port&lt;/code&gt; and &lt;code&gt;target_protocol&lt;/code&gt; describe how traffic gets to the attached droplets. It’s that simple.&lt;/p&gt;

&lt;p&gt;The next important bit here is the connection algorithm between the load balancer and the droplets. In this case, we chose round robin. Although Terraform only supports two algorithms for Digital Ocean’s load balancer resource, Round Robin (round_robin) and Least Connections (least_connections), other load balancing algorithms exist, like Weighted Round Robin, Least Traffic, Source IP, etc.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A round robin is an arrangement of choosing all elements in a group equally in some rational order, usually from the top to the bottom of a list and then starting again at the top of the list and so on. A simple way to think of round robin is that it is about “taking turns”. Used as an adjective, round robin becomes “round-robin”. - &lt;a href="http://whatis.techtarget.com/definition/round-robin"&gt;WhatIs&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the last section of our load balancer resource block, we attach our droplets to the load balancer using &lt;code&gt;droplet_ids&lt;/code&gt;. We use string interpolation to reference the droplet ids; this is how Terraform builds its dependency graph. There is a convention to using interpolation in Terraform: it follows the pattern ${RESOURCE_TYPE.RESOURCE_NAME.ATTRIBUTE_NAME}. In our case, we are referencing the &lt;code&gt;digitalocean_droplet&lt;/code&gt; resource with the name &lt;code&gt;web&lt;/code&gt; and we are getting all its &lt;code&gt;id&lt;/code&gt; attributes.&lt;/p&gt;
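&lt;p&gt;Since the &lt;code&gt;web&lt;/code&gt; resource declares &lt;code&gt;count = 2&lt;/code&gt;, there are two droplets behind that one name, and referencing the attribute of every instance at once uses Terraform’s splat syntax, &lt;code&gt;${RESOURCE_TYPE.RESOURCE_NAME.*.ATTRIBUTE_NAME}&lt;/code&gt;. A minimal sketch of the attachment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# the splat (*) expands to the ids of web[0] and web[1]
droplet_ids = ["${digitalocean_droplet.web.*.id}"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;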

&lt;p&gt;With the setup above, we have been able to put together simple layer 7 load balancing. The only missing bit here is a database server.&lt;/p&gt;

&lt;p&gt;Adding a database is simple. Unlike AWS, Digital Ocean, as of the time of this article, does not offer a managed database service like RDS, so we will need to roll our own manually. To do that, we create a droplet resource, just like we did for web. The code in Code VI does exactly that for us.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “digitalocean_droplet “web {
  image = “ubuntu-16-04-x64”
  name = “web-1”
  count = 2
  region = “lon1”
  size = “1gb”
  ssh_keys = ["${digitalocean_ssh_key.default.id}”]
}

resource “digitalocean_droplet “database {
  image = “ubuntu-16-04-x64”
  name = “web-1”
  region = “lon1”
  size = “1gb”
  ssh_keys = ["${digitalocean_ssh_key.default.id}”]
}

resource “digitalocean_loadbalancer “public_lb {
  name = “web-lb”
  region = “lon1”

  forwarding_rule {
    entry_port = 80
    entry_protocol = “http”

    target_port = 80
    target_protocol = “http"
  }

  algorithm = "round_robin"
  droplet_ids = ["${digitalocean_droplet.web.id}”]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Code VI.&lt;/p&gt;

&lt;p&gt;To get things going and see the effect, we run &lt;code&gt;terraform plan&lt;/code&gt; just to make sure things are fine and well sorted, then we run &lt;code&gt;terraform apply&lt;/code&gt; to build the actual system. With the setup in Code VI, we have successfully built ourselves a simple infrastructure that is good enough to host a decent blog. In the next part of this series, we will talk about how to prepare our machines right after provisioning and install basic software on them.&lt;/p&gt;
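&lt;p&gt;Once &lt;code&gt;terraform apply&lt;/code&gt; finishes, you will want the load balancer’s public IP so you can point your DNS at it. An output variable prints it at the end of every run; a sketch, assuming the &lt;code&gt;ip&lt;/code&gt; attribute exposed by the &lt;code&gt;digitalocean_loadbalancer&lt;/code&gt; resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# printed to the console at the end of terraform apply
output "lb_ip" {
  value = "${digitalocean_loadbalancer.public_lb.ip}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;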

&lt;p&gt;&lt;em&gt;Disclaimer&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;While following the examples outlined in this article, please bear in mind that there’s a cost attached. When you run &lt;code&gt;terraform plan&lt;/code&gt; then &lt;code&gt;terraform apply&lt;/code&gt; and create a real resource at your provider’s end, they start billing you almost immediately. As a word of caution, and this applies only in a non-production environment, always clean up after yourself. For this purpose, Terraform offers a really handy command called &lt;code&gt;terraform destroy&lt;/code&gt;, which literally goes back to your provider and deletes/destroys every single resource that you have created. This is a one-way command and cannot be undone, so I strongly advise you do this only in your personal or experimental environment.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/engineering/2017/06/24/terraform-part-ii.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>engineering</category>
    </item>
    <item>
      <title>Priorities: What matters most</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Sun, 11 Jun 2017 15:22:39 +0000</pubDate>
      <link>https://dev.to/cyberomin/priorities-what-matters-most</link>
      <guid>https://dev.to/cyberomin/priorities-what-matters-most</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;First make it work, then make it right, then make it fast.&lt;br&gt;
– Kent Beck&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;This is something I &lt;a href="http://cyberomin.github.io/startup/2015/10/06/priority.html"&gt;wrote a while ago&lt;/a&gt;. I think it still holds true to this day.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Last week, I had a rather interesting and illuminating discussion with someone. Our discussion bordered on some core concepts that almost every startup will deal with at some point: over-engineering and time to market.&lt;/p&gt;

&lt;p&gt;While I argued that getting your product quickly out of the door and testing market fit should take priority for every single startup, especially the ones venturing into new territories, my &lt;em&gt;friend&lt;/em&gt; was of the opinion that getting engineering right from the onset, before anything else, should be the focal point. His concern was simple: fix the engineering problem and save yourself from technical debt.&lt;/p&gt;

&lt;p&gt;In the startup world, especially in emerging markets like Nigeria, the importance of getting your product out fast and testing it in the marketplace cannot be over-emphasised. While the whole idea of technical startups and digitisation is not exactly new, our counterparts in the West had over a 20-year head start on us. In this market, you have to not just build, but also reorient people on why they should abandon their old, tried and tested ways of doing things to embrace this new shiny idea of yours. Getting to the market fast shouldn’t be taken lightly.&lt;/p&gt;

&lt;p&gt;Reid Hoffman, the founder of LinkedIn and now a partner at Greylock Partners, once said, &lt;strong&gt;"if you are not embarrassed by the first version of your product, you’ve launched too late."&lt;/strong&gt; Speed and first-mover advantage will always win. It’s no surprise he is currently teaching a class on Blitzscaling at Stanford.&lt;/p&gt;

&lt;p&gt;We have all heard it time and again that the customer doesn’t care what is happening under the hood. He or she isn’t interested in knowing whether your data store is an RDBMS or a NoSQL data store, whether your operating system is a Unix-flavoured OS or a Windows server, or whether you wrote your stylesheet in vanilla CSS or went the Sass or Less route. The customer is interested in two things: 1. Does it work? 2. Will it solve my problems?&lt;/p&gt;

&lt;p&gt;While writing scalable software and code cannot be ignored, it is imperative to note that if your product doesn’t gain market acceptance, you are doomed. Dead. Your super sleek MEAN (Mongo, Express, Angular, Node) based app that runs on 4 Amazon M3 large instances behind an elastic load balancer, using a MySQL RDS with a read replica, will come to naught. Nothing. Nobody cares.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product before engineering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every hyper-growth startup today started with little over-engineering and, in some cases, monoliths; Netflix, Uber, Spotify, Dropbox, Twitter etc. all ran really large systems, in most cases on a single machine. They were all super focused on their products. They only worried about sleek engineering after they got a strong foothold in the market. Today, all of these startups have broken things down in favour of microservices (smaller units of systems). You could argue that Travis Kalanick, CEO of Uber, isn’t an engineer, but I don’t think he worried much about what the first commit of the Uber codebase looked like. He just wanted a service that would allow users to hail taxis using their mobile phones. Uber today is a ~$40B company; they can afford a star-studded engineering team. Heck, they poached almost all the scientists and engineers working on the self-driving car project at Carnegie Mellon University.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why am I pro-product?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When Konga launched in July of 2012, we had one simple goal: retail. Our stack then was a simple Rails app that ran on a single 4GB machine. We didn’t even write the underlying e-commerce software; we used an open source solution called Spree and built upon it. No one cared. No one knew about it. We sold stuff while fighting an uphill battle: e-commerce, unlike today, didn’t have much acceptance. We needed to prove to people that they could trust us with their money and that we would deliver their items as promised. Today that one machine spans fleets of servers, and we are, more than anything, favouring microservices over a monolith. Why? The time is right. We have, to a great extent, gained acceptance and we can afford to settle in now and build a rock-solid infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does awesome engineering matter?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I understand why my &lt;em&gt;friend&lt;/em&gt; favoured micro-services and doing things right from the get-go, for obvious reasons.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Micro services enable separation of concerns. No need to deploy an entire application when you only worked on a subset of it.&lt;/li&gt;
&lt;li&gt; Scalability is easier.&lt;/li&gt;
&lt;li&gt; Deployment is a lot easier.&lt;/li&gt;
&lt;li&gt; No single point of failure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While all of these things are awesome and nice to have, for a brand spanking new startup I still favour speed, product-market fit and acceptance (read: revenue) over awesome Google-esque engineering.&lt;/p&gt;

&lt;p&gt;I live by this quote by Kent Beck - “First make it work, then make it right, then make it fast.”&lt;/p&gt;

&lt;p&gt;A friend of mine just got into Techstars; his &lt;a href="http://max.ng"&gt;startup&lt;/a&gt; has a lot to prove. Awesome engineering skills aren’t one of them.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Remote work: How we make it work.</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Tue, 30 May 2017 04:30:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/remote-work-how-we-make-it-work</link>
      <guid>https://dev.to/cyberomin/remote-work-how-we-make-it-work</guid>
      <description>&lt;p&gt;In the last couple of years, there has been a strong advocacy for remote work and how it a) improve employees productivity and b) is fast becoming the future of work. Technology companies, especially startups add remote work as one of their benefits.&lt;/p&gt;

&lt;p&gt;But things aren’t always rosy in the remote land. One of the problems I have seen one too many times is that of communication and visibility, and the antidote to it is over-communication. Voice out every move. The physical office provides benefits that are almost non-existent when you work remotely; for one, in-person communication cannot be overemphasised.&lt;/p&gt;

&lt;p&gt;Working from the same physical location and actually seeing the people you work with has a huge upside. This benefit becomes apparent when communicating with one another, things like voice tone and body language add fine-grained nuances that technology can’t bridge.&lt;/p&gt;

&lt;center&gt; ____________ &lt;/center&gt;

&lt;p&gt;Working and leading a remote team can be daunting without the right tools and processes in place. Getting the right tools very early on will save the team loads of trouble and also improve productivity. I mean, you can only do your best work when you have the right tools. In this article, I’ll try to highlight some of the tools that my team uses daily and why we use them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slack:&lt;/strong&gt; Slack is where the magic happens. I think of this tool as our command centre. It’s pretty much where we start and end our day. With the help of amazing bots and integrations, the possibilities with Slack are endless. We use Slack for our daily stand-ups, file sharing, voice calls and more. I personally use Slack as a personal assistant of some sort; I set reminders for things I am supposed to get back to at specific times using Slack reminders. Slack’s slash commands provide a rich suite of operations that can be performed right from the app.&lt;/p&gt;

&lt;p&gt;Beyond just daily communication, we apply Slack to our team’s internal adaptation of ChatOps: a transparent workflow with a high focus on visibility. Take for instance: when a ticket is created or moved on our JIRA project board, a notification is sent to our &lt;em&gt;#developers&lt;/em&gt; Slack channel; that way, everyone on the team knows what’s happening at every point in time and who is working on or responsible for what. When a comment is added to a GitHub pull request, Slack notifies a channel with the comment and the relevant parties can either continue the conversation in Slack or move it to GitHub.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What is ChatOps? Conversations, put to work. ChatOps is a collaboration model that connects people, tools, process, and automation into a transparent workflow. This flow connects the work needed, the work happening, and the work done in a persistent location staffed by the people, bots, and related tools. &lt;a href="https://www.atlassian.com/blog/software-teams/what-is-chatops-adoption-guide" rel="noopener noreferrer"&gt;Source: Atlassian&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We are big believers in the agile methodology; as such, we take things like continuous integration and deployment pretty seriously. Sprint planning, retrospectives and stand-ups all happen on Slack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CircleCI:&lt;/strong&gt; CircleCI is an invaluable and almost indispensable tool. For us, it isn’t just another CI tool, it’s our source of truth. CircleCI plays a critical role in our development cycle and deployment pipeline. It helps with running our test suites and does auto deployment. The beautiful thing about this tool is how it integrates nicely with our workflow and other tools. For instance, when a build is broken, not only does CircleCI email the entire team, it also sends a Slack notification to the team. And since we use almost the same Slack handle on our GitHub profiles, the &lt;em&gt;build offender&lt;/em&gt; —a person who breaks a build…I made this up— is notified immediately; this helps improve our turnaround time and it also encourages transparency and accountability.&lt;/p&gt;

&lt;p&gt;CircleCI further allows us to enforce defined standards like a coding style; it lints all of our code, and this process helps keep our code base clean and promotes consistency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56crwt7mj9i86exnvzau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56crwt7mj9i86exnvzau.png" width="800" height="476"&gt;&lt;/a&gt;&lt;em&gt;CircleCI/Slack integration. Photo credit - Igor Davydenko&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JIRA and GitHub:&lt;/strong&gt; are other tools that we use extensively. We use feature branching in our workflow, which means every JIRA ticket is a branch. The integration between JIRA and GitHub allows us to go straight from a JIRA ticket to a GitHub branch, and since we have a &lt;em&gt;PULL_REQUEST_TEMPLATE&lt;/em&gt; that requires us to include the JIRA ticket ID, we can easily go back to JIRA from GitHub.&lt;/p&gt;

&lt;p&gt;When a branch is pushed into GitHub, JIRA links it automatically to its assigned ticket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1mc6rig6qqlyn5owqcl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1mc6rig6qqlyn5owqcl.png" width="500" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A JIRA ticket, showing the associated GitHub branch, number of commits and pull request status&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Screen Hero:&lt;/strong&gt; Once in a while, people need help with technical issues, and this is absolutely normal. When a team member is blocked and needs help with an issue, someone who is free, or the team member with the most experience and knowledge of the problem, jumps on Screen Hero to pair. One of the benefits of this is that stronger bonds and connections are formed.&lt;/p&gt;

&lt;h4&gt;
  
  
  How did we get here?
&lt;/h4&gt;

&lt;p&gt;In the beginning, things weren’t always like this; it took a bit of trial and error to find the best workflow. We started out using Slack for just communication but quickly found out that a lot of people didn’t quite know what was happening at every point in time. So, instead of asking “what are you working on?” every so often, we leverage our tools and let them do the talking for us. That way, we minimise interruptions and just focus on getting work done.&lt;/p&gt;

&lt;p&gt;As a distributed team, we use these tools to compensate for our lack of in-person interaction. It could be a little annoying, and quite frankly, frustrating, having to go check on what task a person is working on at any point in time and also check the ticket status. We delegate this to Slack and Slack does the informing when a ticket status is changed. If a ticket moves from &lt;em&gt;In Progress&lt;/em&gt; to &lt;em&gt;Code Review&lt;/em&gt;, the author doesn’t need to manually inform the team to review their work, Slack handles this and a few extra keystrokes are saved. Everyone is happy :)&lt;/p&gt;

&lt;h4&gt;
  
  
  Improvements?
&lt;/h4&gt;

&lt;p&gt;We are not where I want us to be, but honestly, I don’t think we’ll ever get there. It will be a case of constant improvement. In the future, I would love to see us trigger manual deployments via Slack and get performance metrics via Slack too. We will likely experiment with tools like Hubot to see if they work for our team.&lt;/p&gt;

&lt;p&gt;In addition, we’ll also explore custom integrations to leverage the APIs of the third party tools that we use. We will explore building an integration to AWS for instance, such that if there is a service degradation from a region that we work with, Slack can notify us in near real-time. This same tool can also be extended to include GitHub status notification etc. With this in place, we will be more proactive than reactive to issues.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;PS: Do you work remotely? I’d like to know how you work.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/work/2017/05/30/remote-work.html" rel="noopener noreferrer"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>remote</category>
      <category>management</category>
      <category>process</category>
    </item>
    <item>
      <title>Terraform: Infrastructure as code - Part I</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Mon, 29 May 2017 02:00:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/terraform-infrastructure-as-code---part-i</link>
      <guid>https://dev.to/cyberomin/terraform-infrastructure-as-code---part-i</guid>
      <description>

&lt;h4&gt;Introduction&lt;/h4&gt;

&lt;p&gt;In the last couple of months, I have been obsessed with automation, workflows and infrastructure as code. This obsession led me to explore tools like Ansible and a little bit of Chef and how to better apply them to my everyday work.&lt;/p&gt;

&lt;p&gt;In the last few weeks, I have been experimenting with HashiCorp’s Terraform and I must say, I’m impressed. In this article, I’d like to share my findings and also document what I have learnt for posterity.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;PS: I’ve created a bootstrapped Vagrant box with Terraform provisioned. You can clone the &lt;a href="https://github.com/cyberomin/terraform"&gt;repository here&lt;/a&gt; and follow along.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Terraform, like every other HashiCorp product, has basic commands that can be run from the CLI. The most frequently used Terraform command is &lt;code&gt;terraform apply&lt;/code&gt;. This is the command that actually allows Terraform to run and communicate with our provider (more on this later). The &lt;code&gt;terraform apply&lt;/code&gt; command goes out to our provider and provisions the resources that we have declared in our Terraform files. In simple terms, it builds or changes our infrastructure.&lt;/p&gt;

&lt;p&gt;While the &lt;code&gt;terraform apply&lt;/code&gt; command is great, the problem is that it doesn’t give you early feedback on what you’re doing. Luckily, Terraform provides another command, &lt;code&gt;terraform plan&lt;/code&gt;, which does just that: it lets us see our infrastructure execution plan before anything is created.&lt;/p&gt;

&lt;h4&gt;The Terraform Syntax - HCL&lt;/h4&gt;

&lt;p&gt;Terraform code is written in HashiCorp’s proprietary language, HashiCorp Configuration Language (HCL). HCL is a structured configuration language that is intended to be both machine-friendly and human-readable. It’s geared mostly towards DevOps, and in the case of Terraform, its syntax allows us to describe our infrastructure as code. All Terraform code is written in files with a &lt;code&gt;.tf&lt;/code&gt; extension.&lt;/p&gt;

&lt;p&gt;Before we use Terraform and explore its power, we will need to declare a provider. This is the entry point to every Terraform program. As at the time of this post, there are well over 10 different Terraform providers, including AWS, Digital Ocean, Google Cloud, etc. For a complete and up-to-date list of providers, visit the Terraform providers &lt;a href="https://www.terraform.io/docs/providers/index.html"&gt;documentation page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Declaring a provider is simple, you start with the keyword &lt;code&gt;provider&lt;/code&gt; and provide the name of the provider, e.g; &lt;code&gt;aws&lt;/code&gt;, &lt;code&gt;digitalocean&lt;/code&gt;, &lt;code&gt;google&lt;/code&gt;, etc.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "digitalocean" {
    # todo
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code I&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The code above is how a Terraform provider is declared. That simple. The declaration by itself does nothing and has very limited use. In order to harness its power and begin communicating with the Digital Ocean cloud, we will have to provide a valid Digital Ocean token. This is how Terraform authenticates with our Digital Ocean account.&lt;/p&gt;

&lt;p&gt;A token can be obtained from your Digital Ocean account; simply log in and generate one. Remember to keep this token safe and secure, as bad things could happen if it gets into the wrong hands.&lt;/p&gt;

&lt;p&gt;We will extend our initial code, Code I, and add our DO token.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "digitalocean" {
    token = "xxxx-xxxxx-xxxx-xxxx-xxxx"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code II&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Our work here isn’t done; we have only told Terraform we want to work with Digital Ocean and nothing more. We still need to create some Digital Ocean droplets. Terraform has a concept of a resource: a resource is typically a service offered by your cloud hosting service, a provider in Terraform’s parlance. These resources include, but are not limited to, VMs (EC2, droplets), load balancers, databases, caches, etc. To create a Digital Ocean droplet, we will declare a droplet resource and pass along arguments like image (Ubuntu, CentOS, etc.), name (the hostname of the machine), region (the DO region where we want this resource created) and size (the size of the droplet).&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "digitalocean" {
    token = "valid digital ocean token"
}

resource "digitalocean_droplet" "web" {
    image = "ubuntu-16-04-x64"
    name = "web-1"
    region = "lon1"
    size = "1gb"   
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code III&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The human-readable form of Code III declared above translates to &lt;em&gt;“create a 64-bit Ubuntu 16.04 droplet in the London 1 region and give it a size of 1gb and a hostname of web-1.”&lt;/em&gt; It’s worthy of note that a few things in the resource declaration above are standard. The &lt;code&gt;resource&lt;/code&gt; keyword is standard. &lt;code&gt;digitalocean_droplet&lt;/code&gt; is also standard; this is how Terraform represents Digital Ocean’s droplets, and it varies for other cloud providers, for example, &lt;code&gt;aws_instance&lt;/code&gt; for AWS’ EC2 and &lt;code&gt;google_compute_instance&lt;/code&gt; for Google’s VM. The word &lt;code&gt;web&lt;/code&gt; is arbitrary; it serves the purpose of an identifier in this declaration, so you can choose whatever name pleases you. The other declarations inside the curly braces are standard and required. For information on how they are declared for other providers, please consult the Terraform documentation.&lt;/p&gt;
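&lt;p&gt;For comparison, here is how a roughly equivalent declaration might look on AWS. This is a sketch only: the AMI id is a placeholder, and &lt;code&gt;aws_instance&lt;/code&gt; takes different arguments from a droplet.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web" {
    ami = "ami-xxxxxxxx" # placeholder AMI id
    instance_type = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;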

&lt;p&gt;With the code above, we have successfully declared a valid Terraform provider and created an associated resource. To run this and see its effect, we run &lt;code&gt;terraform apply&lt;/code&gt; on our console. At this point, Terraform will initiate communication with Digital Ocean and, if everything goes well, create us a valid, ready-to-use droplet. This is a rather simplistic way of doing things, and we will be diving deeper in the course of this article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;em&gt;Code II&lt;/em&gt;, we created a Digital Ocean provider and supplied it with an API token. While this gets the job done, it’s not the best way to deal with the problem. This is where the concept of variables comes in. Like many traditional programming languages, Terraform has the concept of a variable, albeit declared differently.&lt;/p&gt;

&lt;p&gt;In ES6, for instance, a variable can be declared with either the &lt;code&gt;const&lt;/code&gt; or the &lt;code&gt;let&lt;/code&gt; keyword:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const name = “Bob Jones”
let age = 70
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code IV&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But things are a bit different in Terraform land. Every variable declaration starts with the keyword &lt;code&gt;variable&lt;/code&gt;, followed by the variable name and a set of parameters. Terraform’s variables come in two flavours: input and output variables. An input variable is used to pass values into a Terraform configuration, while an output variable prints results from Terraform to stdout. Input variables can be set in different ways: on the command line, from a file, or through environment variables.&lt;/p&gt;
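&lt;p&gt;For illustration, and assuming our configuration declares an input variable named &lt;code&gt;name&lt;/code&gt;, here is a sketch of each of those three ways of setting it (the values are placeholders):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. On the command line
terraform apply -var 'name=web-1'

# 2. From a file; a terraform.tfvars file containing
#        name = "web-1"
# is picked up automatically when you run terraform apply

# 3. From an environment variable prefixed with TF_VAR_
export TF_VAR_name=web-1
terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;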

&lt;p&gt;To create and assign data to a variable, we start with the &lt;code&gt;variable&lt;/code&gt; keyword as seen in Code V.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "name" {}

variable "token" {
    default = "DO API Token"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Code V&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;From the definition in Code V, if we run the configuration with &lt;code&gt;terraform apply&lt;/code&gt;, Terraform will prompt us to enter a value for the &lt;code&gt;name&lt;/code&gt; variable, but it won’t do the same for the second one, &lt;code&gt;token&lt;/code&gt;. This is because, in the second declaration, we have provided a default value, our Digital Ocean API token, so Terraform picks it up from there.&lt;/p&gt;

&lt;p&gt;If we need to use this variable anywhere, we invoke it like this: &lt;code&gt;"${var.token}"&lt;/code&gt;. Going back to Code II, we can modify the declaration to this format:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "digitalocean" {
    token = "${var.token}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Code VI&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The advantage here is that we can use this variable in multiple locations without repeating the API token itself in each of them. This gives us tremendous control over how we manage our code.&lt;/p&gt;

&lt;p&gt;Output variables follow the same pattern as input variables, the only distinction being that an output variable uses &lt;code&gt;value&lt;/code&gt; in place of &lt;code&gt;default&lt;/code&gt;. Below is a sample declaration of an output variable:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "ip" {
    value = "${digitalocean_droplet.web.ipv4_address}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Code VII&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The declaration in Code VII tells Terraform to print the IPv4 address of our droplet to the console. So far, we have been able to create a Digital Ocean droplet, which is good, but the problem now is that we can’t SSH into our newly minted machine. This is a major issue and will definitely pose a problem for us as we go on. To fix it, we need to add our SSH public key to the droplet. Terraform provides us with an SSH resource aptly named &lt;code&gt;digitalocean_ssh_key&lt;/code&gt;. To use this resource, we declare it as below:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_ssh_key" "default" {
    name = "SSH Key Credential"
    public_key = "${file("/home/vagrant/.ssh/id_rsa.pub")}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code VIII&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With the introduction of the SSH key resource, we need to link it to our droplet so that we can SSH in using our private key. For this to happen, we have to modify the code in Code III. The code in Code VIII uploads our SSH public key to our droplet. Also, notice that we didn’t copy and paste our SSH key here; instead, we used a Terraform built-in function called &lt;code&gt;file&lt;/code&gt;. The &lt;code&gt;file&lt;/code&gt; function lets us read a file from a path. It has the basic syntax &lt;code&gt;${file("path/to/file")}&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are using the Vagrant box I provided in this article, I strongly advise that you generate a new SSH key, as this box comes without one. Generating an SSH key is simple: run &lt;code&gt;ssh-keygen -t rsa -b 4096 -C "your email"&lt;/code&gt; on your terminal and follow the on-screen instructions. I’ll also suggest that you don’t set a passphrase for your SSH key.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "digitalocean" {
    token = "xxxx-xxxxx-xxxx-xxxx-xxxx"
}

resource "digitalocean_droplet" "web" {
    image = "ubuntu-16-04-x64"
    name = "web-1"
    region = "lon1"
    size = "1gb"
    ssh_keys = ["${digitalocean_ssh_key.default.id}"]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code IX&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Pulling everything together, we will have something like this:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "token" {
    default = "xxxx-xxxxx-xxxx-xxxx-xxxx"
}

provider "digitalocean" {
    token = "${var.token}"
}

resource "digitalocean_ssh_key" "default" {
    name = "SSH Key Credential"
    public_key = "${file("/home/vagrant/.ssh/id_rsa.pub")}"
}

resource "digitalocean_droplet" "web" {
    image = "ubuntu-16-04-x64"
    name = "web-1"
    region = "lon1"
    size = "1gb"
    ssh_keys = ["${digitalocean_ssh_key.default.id}"]
}

output "ip" {
    value = "${digitalocean_droplet.web.ipv4_address}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code X&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If we run &lt;code&gt;terraform apply&lt;/code&gt; again, we will have an output similar to this&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

    ip = 46.XX.XXX.XXX
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
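&lt;p&gt;As a side note, before applying you can ask Terraform for a dry run with &lt;code&gt;terraform plan&lt;/code&gt;; it prints the changes Terraform intends to make without creating anything, which makes it a cheap sanity check on configurations like Code X:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Preview the execution plan without touching any infrastructure
terraform plan

# Apply only once the plan looks right
terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;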



&lt;p&gt;&lt;strong&gt;Control and Conditional&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unlike procedural languages, Terraform uses a declarative language pattern. In a procedural language, if you wanted to create a resource, say, three times, you would wrap it in a for loop; in Terraform, you use a meta-parameter like &lt;code&gt;count&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It’s important to note that the &lt;code&gt;count&lt;/code&gt; parameter, available on every Terraform resource, has a zero-based index, similar to arrays in a traditional programming language. So if you had a resource declaration like&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_droplet" "web" {
    count = 4
    image = "ubuntu-16-04-x64"
    name = "web.${count.index}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code XI&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;From Code XI above, we will have four web servers created, named web.0, web.1, web.2 and web.3. Note that the &lt;code&gt;name&lt;/code&gt; argument makes use of the interpolation syntax.&lt;/p&gt;
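&lt;p&gt;As a small extension of Code XI, Terraform’s splat syntax (&lt;code&gt;.*.&lt;/code&gt;) lets us reference an attribute across every instance created by &lt;code&gt;count&lt;/code&gt;, for example to output the IP addresses of all the droplets at once:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "all_ips" {
    value = ["${digitalocean_droplet.web.*.ipv4_address}"]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;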

&lt;p&gt;If we wanted to check the truthiness of something before using it, we could use another conditional, which is similar to the ternary operator in a regular programming language. It follows the pattern &lt;code&gt;condition ? trueval : falseval&lt;/code&gt;. Let’s declare a resource that will only be created if a certain condition is met.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_loadbalancer" "pubic" {
    count = "${var.env == "production" ? 1 : 0}"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Code XII&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;From the declaration in Code XII, the DO load balancer will only exist if the environment is a production environment. Terraform also supports comparison operators like &lt;code&gt;!=, &amp;gt;, &amp;lt;, &amp;gt;=, &amp;lt;=&lt;/code&gt; and the logical operators &lt;code&gt;&amp;amp;&amp;amp;, ||, !&lt;/code&gt;.&lt;/p&gt;
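&lt;p&gt;These operators can be combined inside a single interpolation. As a purely hypothetical sketch, assuming variables named &lt;code&gt;env&lt;/code&gt; and &lt;code&gt;replicas&lt;/code&gt; were declared, we could create replica droplets only for a production environment that asks for at least one:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_droplet" "replica" {
    count = "${var.env == "production" &amp;amp;&amp;amp; var.replicas &amp;gt; 0 ? var.replicas : 0}"
    image = "ubuntu-16-04-x64"
    name = "replica.${count.index}"
    region = "lon1"
    size = "1gb"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;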

&lt;p&gt;&lt;em&gt;Disclaimer&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;While following the examples outlined in this article, please bear in mind that there’s a cost attached to them. When you run &lt;code&gt;terraform apply&lt;/code&gt; and create a real resource at your provider’s end, they start billing you almost immediately. As a word of caution, and this applies only in a non-production environment, always clean up after yourself. For this purpose, Terraform offers a really handy command called &lt;code&gt;terraform destroy&lt;/code&gt;. The &lt;code&gt;terraform destroy&lt;/code&gt; command literally goes back to your provider and deletes/destroys every single resource that you have created. This is a one-way command and cannot be undone, so I strongly advise you run it only in your personal or experimental environment.&lt;/em&gt;&lt;/p&gt;
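&lt;p&gt;If you want to see what would be removed before committing to it, &lt;code&gt;terraform plan&lt;/code&gt; accepts a &lt;code&gt;-destroy&lt;/code&gt; flag that previews the destruction without performing it:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Preview which resources would be destroyed
terraform plan -destroy

# Actually destroy everything Terraform created
terraform destroy
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;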

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/engineering/2017/05/29/terraform-introduction.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;


</description>
      <category>engineering</category>
    </item>
    <item>
      <title>MySQL Rabbit Hole: Adventure in data recovery</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Tue, 16 May 2017 07:36:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/mysql-rabbit-hole-adventure-in-data-recovery</link>
      <guid>https://dev.to/cyberomin/mysql-rabbit-hole-adventure-in-data-recovery</guid>
      <description>&lt;p&gt;Recently, an old project of mine died mysteriously—I don’t think so, things rarely die mysteriously, something had gone wrong; MySQL wasn’t running and there was a potential data loss. What was more frustrating about this was the fact that I had procrastinated the database replication and backup for the longest of time. This was all my fault.&lt;/p&gt;

&lt;p&gt;Over the last couple of months, there was a sizable amount of data that I had collected, and I didn’t imagine myself going through the process of collecting that data again; as such, excuses weren’t an option. I had to recover it with the little information I had at my disposal.&lt;/p&gt;

&lt;p&gt;To begin this process, I needed to first recreate the database on a fresh MySQL instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE children DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER ’nimo'@'%' IDENTIFIED BY ‘password';
GRANT ALL PRIVILEGES on children.* TO ’username'@'%';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first mistake I made was deleting some files from the &lt;code&gt;/var/lib/mysql&lt;/code&gt; directory. In hindsight, my journey to recovery would have been a lot less painful had I backed up the contents of this directory, but hindsight, they say, is 20/20.&lt;/p&gt;

&lt;p&gt;I messed up and did the one thing no sane person should do: I ran &lt;code&gt;sudo rm&lt;/code&gt; on a production server. It turns out MySQL needed these files, &lt;code&gt;ibdata1&lt;/code&gt;, &lt;code&gt;ib_logfile1&lt;/code&gt; and &lt;code&gt;ib_logfile0&lt;/code&gt;, to function properly; this is particularly important if you are trying to restore InnoDB table(s). The &lt;code&gt;ibdata1&lt;/code&gt;, &lt;code&gt;ib_logfile1&lt;/code&gt; and &lt;code&gt;ib_logfile0&lt;/code&gt; files are at the heart of the InnoDB storage engine. &lt;code&gt;ib_logfile1&lt;/code&gt; and &lt;code&gt;ib_logfile0&lt;/code&gt; form the redo log (a disk-based data structure used during crash recovery to correct data written by incomplete transactions); beyond that, they record statements that attempt to change data in InnoDB tables. &lt;code&gt;ibdata1&lt;/code&gt;, on the other hand, contains metadata about InnoDB tables, and the storage areas for one or more undo logs, the change buffer, and the doublewrite buffer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Back to recovery
&lt;/h3&gt;

&lt;p&gt;My only saving grace was that I had managed to back up the contents of &lt;code&gt;/var/lib/mysql/database_name&lt;/code&gt;. This directory holds the files that make up each table, in the &lt;code&gt;*.frm&lt;/code&gt; and &lt;code&gt;*.ibd&lt;/code&gt; formats. There’s also a single &lt;code&gt;db.opt&lt;/code&gt; file, which stores the characteristics of the database. The &lt;code&gt;*.frm&lt;/code&gt; file is how MySQL represents a table’s definition on disk. These files share the same name as their table, with the &lt;code&gt;*.frm&lt;/code&gt; extension, irrespective of the storage engine used. The &lt;code&gt;*.ibd&lt;/code&gt; files, on the other hand, each contain a single table and its associated indexes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let the fix begin.
&lt;/h3&gt;

&lt;p&gt;In order to fix this issue, I had to recreate the database schema (thankfully, my ORM provided me with really good migrations) and I paid particular attention to constraints, especially foreign keys.&lt;/p&gt;

&lt;p&gt;As part of the restoration process, I logged into MySQL and disabled foreign key constraint checks with &lt;code&gt;SET FOREIGN_KEY_CHECKS=0;&lt;/code&gt;, and also discarded each table’s InnoDB tablespace with &lt;code&gt;ALTER TABLE table_name DISCARD TABLESPACE;&lt;/code&gt;. I did this for all the newly created MySQL tables. Note that by default, InnoDB stores its tables and indexes in the system tablespace.&lt;/p&gt;
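&lt;p&gt;Put together, the preparation step looked like this, with &lt;code&gt;table_name&lt;/code&gt; standing in for each of my real tables:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Stop MySQL from enforcing foreign key relationships while files are shuffled around
SET FOREIGN_KEY_CHECKS=0;

-- Detach the (still empty) tablespace of every freshly migrated table
ALTER TABLE table_name DISCARD TABLESPACE;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;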

&lt;blockquote&gt;
&lt;p&gt;System tablespace: One or more data files (ibdata files) containing metadata for InnoDB-related objects (the InnoDB data dictionary), and the storage areas for one or more undo logs, the change buffer, and the doublewrite buffer. Depending on the innodb_file_per_table setting, it might also contain table and index data for InnoDB tables. The data and metadata in the system tablespace apply to all databases in a MySQL instance.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;A note about discarding and importing InnoDB Tablespaces&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An InnoDB table created in its own file-per-table tablespace can be discarded and imported using the DISCARD TABLESPACE and IMPORT TABLESPACE options. These options can be used to import a file-per-table tablespace from a backup or to copy a file-per-table tablespace from one database server to another.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After the database migration via my ORM was completed and I had run &lt;code&gt;ALTER TABLE table_name DISCARD TABLESPACE&lt;/code&gt; for all my tables, I restored all the &lt;code&gt;*.frm&lt;/code&gt; and &lt;code&gt;*.ibd&lt;/code&gt; files to the newly created database directory, located at &lt;code&gt;/var/lib/mysql/database_name&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I then imported the previously discarded tablespaces: &lt;code&gt;ALTER TABLE table_name IMPORT TABLESPACE;&lt;/code&gt;. Like the discard step, I executed the import operation for all my tables. After the import was completed, I restored the foreign key constraint checks with &lt;code&gt;SET FOREIGN_KEY_CHECKS=1;&lt;/code&gt;&lt;/p&gt;
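&lt;p&gt;The restore side of the process, again with &lt;code&gt;table_name&lt;/code&gt; as a stand-in, boiled down to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Run after copying the backed-up *.frm and *.ibd files into /var/lib/mysql/database_name
ALTER TABLE table_name IMPORT TABLESPACE;

-- Re-enable foreign key checks once every table has been imported
SET FOREIGN_KEY_CHECKS=1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;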

&lt;h3&gt;
  
  
  Are we done yet?
&lt;/h3&gt;

&lt;p&gt;At this point, everything should be fine and there should be peace all round, but no, MySQL has a mind of its own. The first problem was an index corruption; upon checking the MySQL error log, I discovered this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ERROR] InnoDB: Clustered record for sec rec not found 
InnoDB: index listings_slug_unique of table database.table_name 
InnoDB: sec index record PHYSICAL RECORD: n_fields 2; compact format; info bits 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Index corruption is quite rare and unusual; it is mostly caused by a MySQL bug or hardware failure. The usual fix for a corrupt index is to run &lt;code&gt;OPTIMIZE TABLE table_name&lt;/code&gt;. But in most cases, this will not suffice and you could end up with funny errors like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql&amp;gt; OPTIMIZE TABLE table_name;
+----------------+----------+----------+-------------------------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+----------------+----------+----------+-------------------------------------------------------------------+
| database.table_name | optimize | note | Table does not support optimize, doing recreate + analyze instead |
| database.table_name | optimize | error | Invalid default value for 'end_date' |
| database.table_name | optimize | status | Operation failed |
+----------------+----------+----------+—————————————————————————————————+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this was exactly what happened to me. Taking a closer look at the error, the second line hints at the possible problem: &lt;code&gt;Invalid default value for 'end_date'&lt;/code&gt;. Our &lt;code&gt;end_date&lt;/code&gt; column has an invalid timestamp. This is because MySQL is running in &lt;em&gt;NO_ZERO_DATE&lt;/em&gt; (strict) mode and I was trying to feed it a timestamp of &lt;code&gt;0000-00-00 00:00:00&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The workaround to this problem is to set the SQL mode to allow invalid dates: &lt;code&gt;SET SQL_MODE='ALLOW_INVALID_DATES';&lt;/code&gt;. This mode stops MySQL from performing full validity checks on dates: it only verifies that the month is in the range 1 through 12 and the day is in the range 1 through 31, without checking that the combination forms a real calendar date.&lt;/p&gt;
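&lt;p&gt;With the mode relaxed, the optimize step can be retried in the same session:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Relax date validation for this session only
SET SQL_MODE='ALLOW_INVALID_DATES';

-- Retry the index rebuild
OPTIMIZE TABLE table_name;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;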

&lt;p&gt;At this point, everything seemed to have worked out pretty well. Finally, I restarted MySQL and there were much joy and praise :)&lt;/p&gt;

&lt;h4&gt;
  
  
  References
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/5.6/en/innodb-multiple-tablespaces.html"&gt;MySQL :: MySQL 5.6 Reference Manual :: 14.7.4 InnoDB File-Per-Table Tablespaces&lt;/a&gt;&lt;a href="https://superuser.com/questions/675445/mysql-innodb-lost-tables-but-files-exist"&gt;MySQL InnoDB lost tables but files exist - Super User&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/engineering/2017/05/16/mysql-rabbit-hole.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>engineering</category>
    </item>
    <item>
      <title>A pull request is submitted, what next?</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Thu, 27 Apr 2017 12:43:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/a-pull-request-is-submitted-what-next</link>
      <guid>https://dev.to/cyberomin/a-pull-request-is-submitted-what-next</guid>
      <description>&lt;p&gt;A few weeks ago, I wrote an article on the need to hire a tech lead when you consider &lt;a href="https://dev.to/cyberomin/so-you-want-to-outsource-your-product-engineering"&gt;outsourcing your engineering&lt;/a&gt;. This article sparked some interesting conversation and one was about pull requests(PR) and code reviews(CR) and how it should be approached. Today, I will try to talk about code reviews and some of its inherent problems amongst team members, and I’ll also highlight possible ways to overcome some of these problems.&lt;/p&gt;

&lt;p&gt;For context, I’ll love to explain what pull request and code reviews are and for the sake of this article, I’ll be using both terms interchangeably.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A pull request(PR) is a method of submitting contributions to a software development project. It is often the preferred way of submitting contributions to a project using a distributed version control system (DVCS) such as Git. A pull request occurs when a developer asks for changes committed to an external repository to be considered for inclusion in a project’s main repository. Source - &lt;a href="http://oss-watch.ac.uk/resources/pullrequest"&gt;OSS Watch&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Code review is a systematic examination (sometimes referred to as peer review) of computer source code. It is intended to find mistakes overlooked in the initial development phase, improving the overall quality of software. Source - &lt;a href="https://en.wikipedia.org/wiki/Code_review"&gt;Wikipedia&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The problem that I see with most pull request/code review sessions is when reviewers nitpick on seemingly inconsequential issues, often neglecting the elephant in the room. This not only degrades the entire reviewing experience, it also leads to unnecessary bikeshedding. This, in my opinion, is one of the causes of strife within technical teams, and it’s unhealthy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The bike shed story tells of a management committee’s decision to approve a nuclear power plant, which it does so with little argument or deliberation. The story contrasts this with another decision on choosing the color of the bike shed where the management gets into a nit-picking debate and expends far more time and energy than on the nuclear power plant decision.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A pull request should be a learning experience, it’s an opportunity to get good feedback from trusted team members. This feedback could range anywhere from writing more structured and succinct code to better optimisation of algorithms, queries, etc. It’s the time to be open minded and not become unnecessarily defensive.&lt;/p&gt;

&lt;p&gt;One issue that I find quite worrying is the fact that most authors come with girded loins when they submit their PRs. You can’t tell them otherwise, after all, they have spent a significant portion of their day solving this complex problem and all they need is your approval and a merge to the base branch. And this is a big problem.&lt;/p&gt;

&lt;p&gt;But, as the author of a PR, before you put on your armour, take a second to think through the opinions of your reviewers, and look at the issue through their lens. Your reviewers are there to uncover your blind spots; they check for the things that may have slipped through the cracks, and they are meant to guide you and help you become better. When authors approach their PR/CR sessions the way a writer would approach an editor, they become aware of things they may have overlooked, uncover new paradigms and fundamentally gain new knowledge. Your reviewers, like editors, are your third eye.&lt;/p&gt;

&lt;p&gt;Are your reviewers always right? Far from it. Your reviewers, like every human, aren’t infallible. They will make mistakes, and it’s your place as the author to help guide them to the obvious and help them understand your thought process to the problem and what led to your solution. One of the problems I have seen with PR sessions is when reviewers are in haste to prove a point without first, understanding the problem domain. They make hasty conclusions and in most cases, miss the main point by a stretch.&lt;/p&gt;

&lt;p&gt;As a reviewer, your first task is to understand the problem domain and offer suggestions as opposed to calling people out; this is one of the reasons most authors become defensive. Don’t do that, you don’t want to be that guy. While reviewing, if it helps and where possible, move over to the author’s desk or get on a call and share screens, so you can ask questions about why the author made certain design choices. This does not only reduce any unforeseen tension, it allows you to understand the problem and solution a lot better and see things from the author’s perspective, thereby giving you an opportunity to give constructive feedback. The other advantage of this approach is that it creates the sort of bond that makes people more receptive to your ideas.&lt;/p&gt;

&lt;p&gt;Authors become defensive when comments left on their PRs make them look stupid. There’s an ego in every human, never forget that. A code review session is a win-win for everyone. The reviewer(s) and author both learn and benefit from the experience. Put extra thought and care into crafting your feedback, they should be suggestive and also offer a better way of solving a problem.&lt;/p&gt;

&lt;p&gt;Remember, when a code review session degenerates into a pissing contest, its essence is lost and everybody, both the reviewer(s) and the author loses out big time.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/engineering/2017/04/27/pull-request.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>engineering</category>
    </item>
    <item>
      <title>So you want to outsource your product engineering?</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Mon, 03 Apr 2017 10:00:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/so-you-want-to-outsource-your-product-engineering</link>
      <guid>https://dev.to/cyberomin/so-you-want-to-outsource-your-product-engineering</guid>
      <description>&lt;p&gt;In the last couple of days, there have been many talks about developer brain drain and capital flight from the Nigerian tech ecosystem. I’m mostly going to focus on the latter.&lt;/p&gt;

&lt;p&gt;In the life of a new technology startup founder, a lot is happening; getting that MVP into the hands of users, investors meetings, optimising pitch deck, product managing and generally being the jack of all trade. Not a bad thing if you ask me, considering the founder is just starting out and doesn’t have the cash to splurge.&lt;/p&gt;

&lt;p&gt;One thing that worries me is the recurring theme of, “the average Nigerian engineer isn’t good enough, charges an arm and a leg and will never deliver on time. So, if you want something great, outsource it.” While I have absolutely no problem with this, I mean, I can’t tell you how best to spend your money, I’m worried that most startup founders are mostly concerned about the end product and blind to what happens behind the scenes. For the majority of founders, as long as clicking that button makes a modal box show up and an event fire behind the scenes, all is good. This shouldn’t be the case.&lt;/p&gt;

&lt;p&gt;This post isn’t my way of supporting outsourcing or opposing it; like I mentioned earlier, the money is yours, go where you get the most value. I’ll try to highlight one major component founders should consider before outsourcing: the role of, and the need for, a technical lead.&lt;/p&gt;

&lt;p&gt;Having those “expected” events—fire an email after a new user registers—occur when a user interacts with your product isn’t good enough; the long-term health of the code that powers this product should be something that concerns you. How do you solve this? Get a technical team lead for your outsourced project.&lt;/p&gt;

&lt;p&gt;From a very high level, it sounds counter-intuitive to get a technical lead when you outsource, considering the fact that these developers and code shops should know what they are doing. Sadly, the reverse is always the case.&lt;/p&gt;

&lt;p&gt;Most freelancers and code shops are just there to get it done at all costs. They sacrifice best practices, put little thought into the development process, and user empathy isn’t something they can be bothered about. They push out features and just move on to the next project. Quality, especially code quality, isn’t something they worry about, and this is partly not their fault. How do you explain to a client that you spent half your day setting up a continuous integration environment and will be spending just as much time writing unit tests for product features? To most founders, this doesn’t make sense. But these things are super critical, especially when you think about the product from a long-term perspective.&lt;/p&gt;

&lt;p&gt;Getting a technical team lead isn’t a waste of resources; it’s a really valuable investment. This person, if they know their onions, will work to save you untold heartaches many months down the road. They will most likely be responsible for the technical decisions made over the life of the project.&lt;/p&gt;

&lt;p&gt;It’s the tech lead’s place to advise the startup founder on best practices. He or she drives the product engineering.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why is this important?
&lt;/h4&gt;

&lt;p&gt;One time, I was helping a friend evaluate the code for his product. The company he had outsourced to claimed they had four engineers on the project when in reality they had just one person slaving away on both the product API and the UI/UX component. It was easy to tell: all I needed to do was look at the git history/logs, and it turned out only one person was making commits. To make things interesting, all these commits happened on the master branch, with no feature branching, which also explains why there were no pull requests. I mean, how do you even PR (pull request) and code review yourself? It doesn’t make sense. You don’t set your own exams and grade yourself afterwards. The saddest part about this whole thing was that he paid for four engineers at $2,500 per pop. Not cheap.&lt;/p&gt;

&lt;p&gt;The company he had outsourced to didn’t only make terrible technical decisions, they just didn’t care. While going through the source code, I discovered they were saving passwords in PLAIN TEXT. Yes, plain text passwords in 2016. I almost had a heart attack when I saw this. I quickly flagged it and they came up with a nice excuse to back their decision. They used really old and outdated versions of frameworks, frameworks that had no LTS (long term support). This had cascading consequences: essentially, they would be missing out on bug and security fixes, not to mention optimisations.&lt;/p&gt;

&lt;p&gt;This same company had no concept of a staging server. Everything went straight to production. No filter. I guess they just wanted to show progress and cared about only that. They also deployed a NodeJS application without a process manager or a reverse proxy.&lt;/p&gt;

&lt;p&gt;The concept of using configuration management and deployment pipelines sounded alien to them. In essence, when they were ready to deploy, someone would manually SSH (secure shell) into the server and run a &lt;code&gt;git pull&lt;/code&gt;, and a new version was out in the wild. The problems with this method are many-fold: a) rollbacks were difficult, if not near impossible; b) if a server went down at any point in time, someone had to manually create a fresh VM (virtual machine) and manually install all the packages needed to run the service; c) if there was ever a sudden traffic spike, the system would collapse. They had everything on one machine: database, API and the UI; high availability was never in the plan. And yes, your guess is as good as mine, no one was replicating the database. If that node went down or data was lost, there would be almost no way to recover, and that might just have been the end of my friend’s startup. I can go on and on about things that didn’t go well.&lt;/p&gt;

&lt;p&gt;All of these discoveries were made after about a 30-minute conference call and another hour spent on the project repository. As you can imagine, the code shop didn’t like me. I instantly became a thorn in their flesh. That troublesome Nigerian.&lt;/p&gt;

&lt;p&gt;After these discoveries were made, I told my friend and the code shop about the changes they needed to make. The code shop instantly fired back that these changes would not only impact the project timeline, but would also cost an extra fee. Eventually, the relationship between my friend and the code shop went south, and he is now rebuilding his project; wasted time and resources.&lt;/p&gt;

&lt;p&gt;I cannot overemphasise the need for a technical team lead; this person will make your life a lot easier and you will have a peaceful night’s rest. In my friend’s case, if he had hired a technical lead from the inception of the project, I am certain things would have gone a different route.&lt;/p&gt;

&lt;p&gt;Dear startup founder, don’t just worry about those nice features; think also about what goes on behind the scenes. A house with nice paintwork but poor construction practices will begin to show its flaws over time, especially when subjected to stress.&lt;/p&gt;

&lt;p&gt;Finally, get a technical team lead. You will be glad you did.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/tech/2017/04/03/before-you-outsource.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tech</category>
    </item>
    <item>
&lt;title&gt;Docker Hack: Prune that unwanted stuff.&lt;/title&gt;
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Sat, 01 Apr 2017 01:00:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/docker-hack-prune-those-unwanted-stuff</link>
      <guid>https://dev.to/cyberomin/docker-hack-prune-those-unwanted-stuff</guid>
      <description>&lt;p&gt;I have been meaning to do this for a very long time and today, I overcame inertia. This is a first in, hopefully, a never ending series that I have aptly titled bits. The aim of these series is to share my discoveries, hacks and shortcuts with everyday tools that I work with. Sit back, relax and enjoy the ride.&lt;/p&gt;

&lt;p&gt;In this first episode of the bits series, I will be sharing a Docker hack. Docker is an amazing piece of software; like every other tool, it has its ways, and as engineers, we have to learn those ways. This article isn’t meant to talk about what Docker is and is not; the aim here is to showcase one tiny feature I found this week.&lt;/p&gt;

&lt;p&gt;As we go around spinning up containers, images and volumes, we run the risk of filling up our hard drives very quickly. Luckily, as of version 17.03.1, Docker provides us with a nifty little command called &lt;code&gt;prune&lt;/code&gt;. Here is how it works.&lt;/p&gt;

&lt;h4&gt;
  
  
  Usage
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;docker system prune [OPTIONS]&lt;/code&gt; - Remove unused data&lt;/p&gt;

&lt;h4&gt;
  
  
  Options
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;--all, -a&lt;/code&gt; - Remove all unused images, not just the dangling ones.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--force, -f&lt;/code&gt; - Do not prompt for confirmation &lt;em&gt;just do it&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The command above clears out everything, from containers to images to volumes. While this may not always be what you want, Docker provides more granular versions of the &lt;code&gt;prune&lt;/code&gt; command: container, image and volume pruning.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker container prune [OPTIONS]&lt;/code&gt; - Remove all stopped containers&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker image prune [OPTIONS]&lt;/code&gt; - Remove all unused images&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker volume prune [OPTIONS]&lt;/code&gt; - Remove all unused volumes&lt;/p&gt;

&lt;p&gt;For all three commands above, the only available option is &lt;code&gt;--force, -f&lt;/code&gt;, which simply asks Docker to proceed without confirmation.&lt;/p&gt;

&lt;p&gt;I recently ran &lt;code&gt;docker system prune -a -f&lt;/code&gt; and recovered almost 25GB of hard drive space.&lt;/p&gt;
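If you’d rather preview what will be removed before pulling the trigger, here is a minimal sketch of a cautious workflow. The &lt;code&gt;d&lt;/code&gt; wrapper function is hypothetical, not part of Docker; in dry-run mode it just echoes each command so you can review it, and unsetting DRY_RUN runs the real thing.

```shell
# Hypothetical dry-run wrapper: echoes each docker command instead of running it.
# Set DRY_RUN= (empty) to execute the commands for real.
d() {
  if [ -n "${DRY_RUN:-1}" ]; then
    echo "docker $*"
  else
    docker "$@"
  fi
}

d system df           # inspect how much space is actually reclaimable first
d container prune -f  # stopped containers only
d image prune -a -f   # all unused images, not just dangling ones
d volume prune -f     # unused volumes; data loss is permanent, check twice
```

Running the granular commands one at a time like this makes it harder to delete a volume you still needed.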

&lt;p&gt;&lt;em&gt;PS: Use these commands carefully.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/bits/docker/2017/04/01/docker-hacks.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>bits</category>
      <category>docker</category>
    </item>
    <item>
      <title>Iptables: An Introduction</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Wed, 25 Jan 2017 01:00:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/iptables-an-introduction</link>
      <guid>https://dev.to/cyberomin/iptables-an-introduction</guid>
      <description>&lt;p&gt;Server security is one of the tiny elements in application development that cannot be ignored. It can make or mar you. As such, giving adequate attention to it cannot be overemphasised.&lt;/p&gt;

&lt;p&gt;In today’s world, where hackers derive delight in taking down entire services or data centres, it becomes important to keep this at the top of the list when provisioning a new server or deploying to an existing one. While patches and security updates are just as important, they won’t do much in some cases—especially against coordinated attacks like DDoS.&lt;/p&gt;

&lt;p&gt;For a very long time, setting up firewalls has been one of the main ways of securing servers. IPTables, a program that defines a set of rules specifying which packets can pass through which ports on a server, has long been favoured.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is IPTables?
&lt;/h4&gt;

&lt;p&gt;According to Wikipedia, IPtables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores.&lt;/p&gt;

&lt;p&gt;IPTables exists in many flavours of Linux. Among all of the open-source firewalls available today, it’s the standard firewall bundled with most Linux distributions, such as Ubuntu, CentOS and Fedora.&lt;/p&gt;

&lt;p&gt;A standard IPTables setup is generally broken down into 3 units, otherwise known as chains. These chains are INPUT, FORWARD and OUTPUT.&lt;/p&gt;

&lt;p&gt;INPUT - This chain controls the behaviour of incoming traffic. If a user is accessing a website on a server, IPTables will try to match the incoming request against a rule that has been defined in the INPUT chain. Such a rule will typically match on the destination port; port 80 for web traffic.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:http&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;OUTPUT - This chain manages every outgoing connection; typically, all traffic generated by the host server, e.g. a &lt;code&gt;ping&lt;/code&gt; request.&lt;/p&gt;

&lt;p&gt;FORWARD - This chain controls all network packets routed through the server.&lt;/p&gt;

&lt;p&gt;Before we go any further, it’s important to highlight the 3Ps of IPTables: Packet, Port and Protocol.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Packet - A block of data transmitted across a network. Think of a packet as the mail delivered by a mailman.&lt;/li&gt;
&lt;li&gt;Port - A logical connection place with a numerical designation; 80, 22, 443, etc.&lt;/li&gt;
&lt;li&gt;Protocol - A set of rules governing the exchange or transmission of data between devices, e.g. TCP, UDP.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Basics
&lt;/h4&gt;

&lt;p&gt;To set up a rule, we run the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo iptables -A INPUT -i lo -j ACCEPT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command above reads “ACCEPT traffic on the loopback interface on the INPUT chain.” The loopback interface is a virtual network interface that a computer uses to communicate with itself, hence the term loopback.&lt;/p&gt;

&lt;p&gt;To check the loopback interface on a Linux machine, run&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ifconfig lo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To set up iptables rules for HTTP traffic, we will run&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo iptables -I INPUT 3 -p tcp --dport 80 -j ACCEPT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;“ACCEPT all HTTP (tcp) traffic going to destination port 80,” where &lt;code&gt;-p&lt;/code&gt; is protocol, &lt;code&gt;-j&lt;/code&gt; is jump, &lt;code&gt;--dport&lt;/code&gt; is destination port and &lt;code&gt;-A&lt;/code&gt; is append.&lt;/p&gt;

&lt;p&gt;The second command looks almost exactly like the first; the only difference is that it specifies the position in the chain at which the rule should be inserted. This is important because rules are matched sequentially.&lt;/p&gt;

&lt;p&gt;So far, we have seen how to accept traffic to a particular port. While this is great, it’s also important that we set rules to drop unwanted traffic.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo iptables -A INPUT -p tcp --dport 443 -j DROP&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The rule above says drop all HTTPS traffic hitting our servers. If you look closely, the rules for accepting and dropping traffic closely resemble each other; the only difference is that accepting traffic uses ACCEPT while dropping traffic uses DROP.&lt;/p&gt;

&lt;p&gt;While DROP works quite well, in most cases it helps to give the user feedback on what’s happening. REJECT, on the other hand, drops the traffic and also provides useful feedback. To set up a reject rule, we run&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo iptables -A INPUT -p tcp --dport 3306 -j REJECT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;When a client tries to access this port, they will get a message saying that connection to port 3306 was refused. This particular rule prevents an external client from connecting to the MySQL database.&lt;/p&gt;
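Putting the rules so far together, here is a minimal sketch as a shell script. The &lt;code&gt;fw&lt;/code&gt; wrapper function is a hypothetical helper, not part of iptables: in dry-run mode it echoes each command so you can review the rule set before applying it (applying for real requires root).

```shell
# Hypothetical helper: echo iptables commands in dry-run mode, execute otherwise.
# Set DRY_RUN= (empty) to apply the rules for real (requires root).
fw() {
  if [ -n "${DRY_RUN:-1}" ]; then
    echo "iptables $*"
  else
    sudo iptables "$@"
  fi
}

fw -A INPUT -i lo -j ACCEPT                 # accept loopback traffic
fw -A INPUT -p tcp --dport 80 -j ACCEPT     # accept HTTP
fw -A INPUT -p tcp --dport 443 -j DROP      # silently drop HTTPS, as above
fw -A INPUT -p tcp --dport 3306 -j REJECT   # refuse MySQL, with feedback to the client
```

Reviewing the echoed rules in order also makes the sequential matching behaviour easy to reason about before anything touches the kernel.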

&lt;h4&gt;
  
  
  Viewing and deleting rules
&lt;/h4&gt;

&lt;p&gt;To view existing IPTables rules, we will run &lt;code&gt;sudo iptables -L -n&lt;/code&gt; or &lt;code&gt;sudo iptables -L -v&lt;/code&gt;, where &lt;code&gt;-n&lt;/code&gt; shows numeric output and &lt;code&gt;-v&lt;/code&gt; means verbose.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While setting up rules is all fun and great, there comes a time when you need to delete an existing rule, either because you no longer need it or because it was configured wrongly. There are two ways to delete a rule:&lt;/p&gt;

&lt;h4&gt;
  
  
  Method 1
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;sudo iptables -D INPUT 3&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Delete the 3rd rule from the INPUT chain. The numbering here isn’t 0-based; it starts from 1. Run &lt;code&gt;sudo iptables -L INPUT --line-numbers&lt;/code&gt; to see the position each rule occupies.&lt;/p&gt;

&lt;h4&gt;
  
  
  Method 2
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;sudo iptables -D INPUT -p tcp --dport 3306 -j REJECT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Delete a rule by specifying its exact parameters (here, the port 3306 rule we added earlier).&lt;/p&gt;

&lt;p&gt;While the two methods achieve a common goal, deleting a rule, it’s important to pay attention to the difference. Method 1 deletes whatever rule happens to be third in the chain; if that index does not exist, an error message is returned. Method 2 is a lot more specific, albeit verbose. It targets only a particular rule, so the risk of deleting another rule by mistake is minimal.&lt;/p&gt;

&lt;h4&gt;
  
  
  Automating IPTables with Ansible
&lt;/h4&gt;

&lt;p&gt;When it comes to configuration management, the ability to build systems in an idempotent and repeatable fashion cannot be overemphasised.&lt;/p&gt;

&lt;p&gt;Ansible is one of the few tools out there that simplify configuration management, orchestration and deployment. It allows you to build and configure servers in a predictable manner. Having taken the time to learn the basics of firewalls via IPTables, we will now attempt to automate the entire process.&lt;/p&gt;

&lt;p&gt;Let’s create an Ansible playbook to automate our IPTables configuration. In this playbook, we are going to accept connections on the following ports: 22 (ssh), 80 (http), 443 (https) and 3306 (mysql), and drop every other request that does not match these rules.&lt;/p&gt;

&lt;p&gt;If you’re new to Ansible and configuration management, there’s an excellent primer here - &lt;a href="http://docs.ansible.com/"&gt;http://docs.ansible.com/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;***
- name: Install iptables Persistent
  apt: name=netfilter-persistent state=present update_cache=true

- name: Set Loop Back Rule
  iptables: chain=INPUT action=append in_interface=lo jump=ACCEPT comment='Accept all loop back traffic'

- name: Set Established Connection Rule
  iptables: chain=INPUT ctstate='ESTABLISHED,RELATED' jump=ACCEPT comment='Let all established connections stay'

- name: Set SSH Port 22 SSH Rule
  iptables: chain=INPUT jump=ACCEPT protocol=tcp destination_port=22 comment='Accept all SSH traffic'

- name: Set HTTP Port 80 HTTP Rule
  iptables: chain=INPUT jump=ACCEPT protocol=tcp destination_port=80 comment='Accept all HTTP traffic'

- name: Set HTTP Port 443 SSL Rule
  iptables: chain=INPUT jump=ACCEPT protocol=tcp destination_port=443 comment='Accept all SSL traffic'

- name: Set Port 3306 MySQL Rule
  iptables: chain=INPUT jump=ACCEPT protocol=tcp destination_port=3306 comment='Accept all MySQL traffic'

- name: Drop Any Traffic Without Rule
  iptables: chain=INPUT jump=DROP comment='Drop traffic for rules that did not match'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
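The snippet above is a task file. To attach it to actual hosts, a wrapper playbook along the following lines could be used; this is a hypothetical sketch, and the file name, role layout and the &lt;code&gt;webservers&lt;/code&gt; group are assumptions rather than anything from the original post.

```yaml
# firewall.yml -- hypothetical wrapper playbook; assumes the tasks above live
# in roles/firewall/tasks/main.yml and your inventory defines a webservers group.
---
- hosts: webservers
  become: true
  roles:
    - firewall
```

You would then run it with something like &lt;code&gt;ansible-playbook -i inventory firewall.yml&lt;/code&gt;, and re-running it stays safe because the iptables module is idempotent.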



&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/devops/2017/01/25/iptables.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>2016: Obligatory year in review</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Sun, 08 Jan 2017 09:20:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/2016-obligatory-year-in-review</link>
      <guid>https://dev.to/cyberomin/2016-obligatory-year-in-review</guid>
      <description>&lt;p&gt;I’m late to the party, I know, but this isn’t a &lt;em&gt;me-too&lt;/em&gt; article. This, in many ways, is a reflection and thanksgiving.&lt;/p&gt;

&lt;p&gt;This is my first post for 2017, so it makes sense to recap my 2016; both the high points and the low. 2016 was a great year, monumental and life changing on different fronts. So great, I probably didn’t want it to end. Alas, all good things must come to an end.&lt;/p&gt;

&lt;p&gt;In no particular order, I’ll try to share a few of the events that took place in my life.&lt;/p&gt;

&lt;h3&gt;
  
  
  Work &amp;amp; Life
&lt;/h3&gt;

&lt;p&gt;I &lt;a href="http://cyberomin.github.io/life/2016/08/01/chapter-one.html"&gt;switched jobs&lt;/a&gt; and moved to &lt;a href="http://cyberomin.github.io/life/2016/09/05/this-is-andela.html"&gt;Andela&lt;/a&gt;. It’s been a little over 4 months, and boy, it has been an incredible experience.&lt;/p&gt;

&lt;p&gt;In this same year, I lost my last surviving grandparent, Eka Anthony. I miss her to this day. Eka Anthony was the kind of grandmother every child prayed for. God bless her sweet soul.&lt;/p&gt;

&lt;p&gt;It wasn’t all bleak, I also became an uncle to an amazing young man, Jessy. In this same year, Lee showed up. Great guy.&lt;/p&gt;

&lt;p&gt;I learnt how to drive. Lagos is unfair to those who are getting behind the wheel for the first time. We move (literally).&lt;/p&gt;

&lt;p&gt;I finally graduated from the University—&lt;a href="http://cyberomin.github.io/life/2016/11/07/chapter-three.html"&gt;big deal&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Speaking &amp;amp; Events.
&lt;/h3&gt;

&lt;p&gt;In November, I spoke at my home church, Honey Streams Christian Center. This is one privilege I’ll never take for granted. Thank you, Pastor Akomaye. Thank you, sir.&lt;/p&gt;

&lt;p&gt;I was a panellist at the &lt;a href="http://afrilabs-gathering.com/speakers/celestine-omin/"&gt;AfriLabs conference&lt;/a&gt; that held in Accra. It was amazing and humbling sharing the stage with industry heavyweights from Microsoft, etc.&lt;/p&gt;

&lt;p&gt;I met the amazing community called DevCongress. &lt;a href="https://twitter.com/unicodeveloper"&gt;Prosper&lt;/a&gt; and I had the rare privilege of sharing our journeys as engineers, while talking about what makes startups in Nigeria unique. &lt;a href="https://twitter.com/edemkumodzi"&gt;Edem Kumodzi&lt;/a&gt; made this possible. Thanks, Chale.&lt;/p&gt;

&lt;p&gt;I got an invite to speak at TEDx Unilag, but I somehow managed to botch this one. Poor planning. Who knows, I may get another one this year. I want to speak at a TED event. If you’re a TEDx licensee and you’re interested in having someone share his life story, do reach out.&lt;/p&gt;

&lt;p&gt;I spoke about Progressive Web Apps at the DevFest SE season 2016 that held in Port Harcourt. I made a case for PWAs as the new way of building mobile applications moving forward.&lt;/p&gt;

&lt;p&gt;I was at HiveCo Lab, Kampala, where I got the opportunity to meet with amazing engineers and feel the pulse of the Uganda technology scene. There, I met &lt;a href="https://twitter.com/valanchee"&gt;Mwesigwa Daniel&lt;/a&gt;. Daniel is an amazing guy. He still owes me a Rolex though.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Xvs5iMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/daniel.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Xvs5iMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/daniel.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I met egbon Mark Zuckerberg. He still owes me a photograph.&lt;/p&gt;

&lt;p&gt;I delivered the Keynote at the DevFest SW event. I spoke on the State of the Web. It was beautiful and nostalgic. For one, it afforded me the opportunity to learn about the history of the web and also examine the lives of those who made it possible. Thank you, Sir Tim Berners-Lee.&lt;/p&gt;

&lt;p&gt;The high point of my 2016 speaking engagement was when I delivered the keynote for the CodeCalabar conference. Calabar isn’t just a city. It’s my city. My autobiography will not be complete if I don’t include this city. Calabar is my genesis.&lt;/p&gt;

&lt;p&gt;I was a guest on Hit FM 95.5, Calabar. I spoke about technology and how it affects our everyday life. I made Papa proud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wyJK1q6w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/codecalabar.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wyJK1q6w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/codecalabar.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing
&lt;/h3&gt;

&lt;p&gt;I set out to write once every week, but I wasn’t disciplined enough to follow through on this one. I take absolute responsibility for this. No excuses.&lt;/p&gt;

&lt;p&gt;In the midst of all of life’s happenings, I managed to scribble a few things. My biggest moment was when I shared my &lt;a href="http://cyberomin.github.io/life/2016/11/07/chapter-three.html"&gt;11-year journey&lt;/a&gt;. For some weird reason, it was therapeutic. I could feel the weight off my chest. What was more interesting about this particular article was the fact that I had people emailing me and sharing their own university stories. I had the privilege of counselling with a few of these individuals.&lt;/p&gt;

&lt;p&gt;In a bid to step away from my comfort zone—technology and startups—I decided to try my hands on fiction. I wrote about an upwardly mobile couple, &lt;a href="http://cyberomin.github.io/fiction/2016/04/10/the-dinner.html"&gt;Bidemi and Makinde&lt;/a&gt;, who had planned this &lt;a href="http://cyberomin.github.io/fiction/2016/04/17/bidemi-meets-makinde.html"&gt;amazing dinner date&lt;/a&gt; that almost turned into a disaster. This story isn’t complete, so I may revisit it sometime this year.&lt;/p&gt;

&lt;p&gt;I now have renewed respect for every single person who writes fiction for a living or for fun. Fiction is tasking, but it also allows your mind to wander and sets your imagination free.&lt;/p&gt;

&lt;p&gt;I did my best to appeal to young people on the need to write &lt;a href="http://cyberomin.github.io/life/2016/03/19/dear-young-person.html"&gt;complete words and sentences&lt;/a&gt;. This particular article was borne out of the fact that I was tired of either ignoring people that start conversations with &lt;em&gt;xup&lt;/em&gt; or those that will abuse your senses with &lt;em&gt;tanz 4 ur tym&lt;/em&gt;. This gripes me, always.&lt;/p&gt;

&lt;p&gt;I made a special appeal to UXers on the need to consider my grandmother when next they are thinking through that app that will connect the next 1 billion people. She, like many other &lt;a href="http://cyberomin.github.io/tutorial/docker/2016/02/04/dont-ignore-my-grand-mum.html"&gt;grandparents are just as important&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I contributed to Ventures Africa, iAfrikan, Techpoint and Y!Naija. This year, I’m aiming for Bloomberg, Financial Times, WSJ, and NYTimes. Amen. A boy can dream and dreams do come true.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reading
&lt;/h3&gt;

&lt;p&gt;I read a number of books this year and here are some of my favourites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/My-Vision-Challenges-Race-Excellence/dp/1860633447"&gt;My Vision: Challenges in the Race for Excellence by Mohammed bin Rashid Al Maktoum&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/Elon-Musk-SpaceX-Fantastic-Future/dp/006230125X"&gt;Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future by Ashlee Vance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/Hatching-Twitter-Story-Friendship-Betrayal/dp/1591847087"&gt;Hatching Twitter: A True Story of Money, Power, Friendship, and Betrayal by Nick Bilton&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/Third-Wave-Entrepreneurs-Vision-Future/dp/150113258X"&gt;The Third Wave: An Entrepreneur’s Vision of the Future by Steve Case&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/Marketplace-3-0-Rewriting-Borderless-Business/dp/0230342140"&gt;Marketplace 3.0: Rewriting the Rules of Borderless Business by Hiroshi Mikitani&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/Building-Microservices-Designing-Fine-Grained-Systems/dp/1491950358"&gt;Building Microservices: Designing Fine-Grained Systems by Sam Newman&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KAAUEK5G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/books.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KAAUEK5G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/books.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Traveling
&lt;/h3&gt;

&lt;p&gt;I visited 4 African countries this year; Kenya, Uganda, Ghana and Rwanda (&lt;em&gt;coughs,&lt;/em&gt; it was a 2 hour layover). East Africa is beautiful. Yes, I said that.&lt;/p&gt;

&lt;p&gt;I got the rare opportunity to taste crocodile meat—thanks, Lisbi. In a bid to document my culinary experiences, I started a trend I tagged culinary journeys—totally lifted from CNN’s show, Culinary Journeys. I intend to keep this up this year as I encounter amazing dishes that life brings my way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---20WxIhM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/food.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---20WxIhM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/food.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Edem was in Lagos last December, we both had Jollof at Terra Kulture. I guess we can finally lay the Nigerian/Ghana jollof squabble to rest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8sILyxt5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/jellof.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8sILyxt5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/jellof.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Learning
&lt;/h3&gt;

&lt;p&gt;This is one year where I decided to go under the hood and learn the inner workings of a few things. I took the time to explore the not-so-sexy side of MySQL. I’m by no means a database expert, but to say this experience wasn’t invaluable would be a lie.&lt;/p&gt;

&lt;p&gt;Elasticsearch has been that one piece of software that is not only beautiful and well designed, but most times feels magical. I did some digging, and you have to give it to the folks at Elastic; they do amazing work.&lt;/p&gt;

&lt;p&gt;Ansible, how did I ever exist without you in my life? To RedHat and the amazing community maintaining this project, I say thank you.&lt;/p&gt;

&lt;h3&gt;
  
  
  2017
&lt;/h3&gt;

&lt;p&gt;The idea is simple: multiply everything here by 10.&lt;/p&gt;

&lt;p&gt;In closing, this was one beautiful year. Here’s to a greater and bigger 2017. Cheers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oNW9Zr92--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/end.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oNW9Zr92--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://cyberomin.github.io/assets/article_images/2016/end.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;


&lt;center&gt;Image credit: PLURALSIGHT&lt;/center&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/life/2017/01/08/year-2016.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>life</category>
    </item>
    <item>
      <title>Building the African Software Engineering Ecosystem</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Sat, 17 Dec 2016 01:04:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/building-the-african-software-engineering-ecosystem</link>
      <guid>https://dev.to/cyberomin/building-the-african-software-engineering-ecosystem</guid>
      <description>&lt;p&gt;In Steve Case’ book, The Third Wave: An Entrepreneur’s Vision for The Future, Steve outlined the different waves that defined the Internet and laid the foundation for the explosive technological innovation that we experience today.&lt;/p&gt;

&lt;p&gt;The first wave — building the internet — which lasted from 1985 to 1999, was largely driven by People, Products, Platforms, Partnerships, Policy and Perseverance, and he outlined a few companies that were pivotal to the success of that era: Cisco, IBM, Apple, AOL, Sprint and the now defunct Sun Microsystems.&lt;/p&gt;

&lt;p&gt;The second wave — the app economy and mobile revolution — ran from 2000 through 2015, and was driven by People, Products and Platforms. The third wave — the Internet of Everything — started this year, 2016, and is ongoing.&lt;/p&gt;

&lt;p&gt;I’m more interested in the first wave and how it relates to the Nigerian technology ecosystem, and particularly in the &lt;em&gt;People&lt;/em&gt; aspect of things. Yesterday, I started a series of tweets about how technology startups in Nigeria, and Africa at large, can bolster the engineering community around us. While the other components that make for a successful ecosystem are equally important, I choose to focus on engineering because not only is it my forte, it’s also one of the critical components of every successful technology startup.&lt;/p&gt;

&lt;p&gt;Engineering is a major differentiator between successful startups and the average ones. As Jeff Bezos rightly puts it, it’s what separates the A players from the B players.&lt;/p&gt;

&lt;p&gt;But in all of this, how are we as a community faring with all things engineering? How much are we giving back as individuals and organisations? Is this something tech startups in Nigeria and Africa actively think about?&lt;/p&gt;

&lt;p&gt;I love how companies in the valley, Europe and South East Asia are deliberately building their engineering culture. These companies do this through a couple of ways; tech talks, engineering blogs, open source, etc. They share their problems, pain points and solutions.&lt;/p&gt;

&lt;p&gt;This is one surefire way of building a community. A community is not people doing things in silos and calling it an “ecosystem.” Netflix holds tech talks and invites Uber engineers to speak. They take this further by sharing core components of their infrastructure. It’s only in sharing that we can truly build a sustainable ecosystem and move the needle a notch higher.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.toKafka"&gt;https://kafka.apache.org/&lt;/a&gt;, one of the core components of distributed computing was built in-house by LinkedIn. They shared it and everyone is benefiting today. For a second, imagine if Google kept Angular and Kubernetes to itself and Facebook kept React to itself. Think about it.&lt;/p&gt;

&lt;p&gt;A few days ago, Neo Ighodaro, CTO of Hotels.ng, tweeted about Watch Dog, a monitoring tool they built in-house that sends critical alerts to Slack and every stakeholder. Not only did they write about this tool, they went ahead and made it open source. To say I was excited would be an understatement. That tweet literally made my day.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Just published “Meet Watchdog; Hotels.ng’s in-house open source application monitor” &lt;a href="https://t.co/iiFizO41gQ"&gt;https://t.co/iiFizO41gQ&lt;/a&gt; cc &lt;a href="https://twitter.com/lyndachiwetelu"&gt;@lyndachiwetelu&lt;/a&gt; &lt;a href="https://twitter.com/markessien"&gt;@markessien&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— Neo Ighodaro (@NeoIghodaro) &lt;a href="https://twitter.com/NeoIghodaro/status/808205822450868224"&gt;December 12, 2016&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While Nigerian and African tech needs funding, the need for actual engineering collaboration and sharing cannot be overemphasised. If you’ve solved a major problem in optimisation, security or scalability, or experimented with a new technology, write about it. Share.&lt;/p&gt;

&lt;p&gt;While I’m not oblivious to the Dunning-Kruger effect, you will be surprised at how many people will learn and benefit from this gesture. Building out your own CI/CD pipeline? Write about it. Built an amazing configuration management setup for your servers? Share it. Talk!!! Found a new way to minimise latency by 3%? Write about it. Talk. Mitigated a DDoS? Write about it.&lt;/p&gt;

&lt;p&gt;We need more African tech startups writing about their processes, product design methodology and software engineering practices, and opening up their APIs. More startups holding community events to talk about technologies they are exploring or currently using in production. Enough of gathering people for product pitches.&lt;/p&gt;

&lt;p&gt;In 2017, there should be just as many tech talks and technical articles. I think we have enough fundraising articles.&lt;/p&gt;

&lt;p&gt;PS: I recognise and celebrate the great work of forLoop, the GDGs, African Git Meetup (shameless plug, that’s me), all of the individual contributors to OSS and everyone writing one technical article or the other. You’re all rockstars.&lt;/p&gt;

&lt;p&gt;Hotels.ng, CcHub, thank you for giving out your spaces. We need this support.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/ecosystem/2016/12/17/building-african-engineering-ecosystem.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ecosystem</category>
    </item>
    <item>
      <title>Chapter Three: The 11-year journey.</title>
      <dc:creator>Celestine Omin</dc:creator>
      <pubDate>Mon, 07 Nov 2016 07:16:00 +0000</pubDate>
      <link>https://dev.to/cyberomin/chapter-three-the-11-year-journey</link>
      <guid>https://dev.to/cyberomin/chapter-three-the-11-year-journey</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Winners don't make excuses.&lt;/p&gt;

Harvey Specter, &lt;cite&gt;Suits&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the hit TV show Suits, produced by USA Network, Patrick J. Adams played the role of Mike Ross, a brilliant college dropout turned unlicensed lawyer. Mike Ross, a man who came from almost nothing, went on to build a stellar career at Pearson Specter. Mike rose through the ranks very quickly, making junior partner in the shortest time in the firm’s history. His brilliance and photographic memory would make even Einstein jealous.&lt;/p&gt;

&lt;p&gt;He was everyone’s favourite; senior partners would typically fight amongst themselves to have him come work for them for two weeks. One time, his boss, Harvey Specter, a senior partner at Pearson Specter, wagered him in a bet with his arch-rival, Louis Litt. Such was the awesomeness of Mike. He was loved by everyone; the paralegals marvelled in awe whenever he spoke. How could this man be a fraud?&lt;/p&gt;

&lt;p&gt;Mike’s chance at Pearson Specter and his subsequent rise were pure happenstance. Prior to his law career, Mike Ross made a living selling marijuana and taking the LSAT for law school hopefuls. On the day of what turned out to be a life-changing event, Mike was headed to a hotel to follow up on a drug deal when he ran into the Feds. He got chased through the hallways and stairs and eventually found himself in a room where Harvey Specter was conducting interviews for a new associate.&lt;/p&gt;

&lt;p&gt;Harvey, cocky as the king of spades, decided to interview Mike, even though he wasn’t a shortlisted candidate. Mike picked up a book in front of Harvey and dared him to open it to any chapter and quiz him from those pages. Harvey, a bit mystified, obliged. He opened a random page and started reading out a citation; he had barely completed the case title when Mike swooped in, completed it, and proceeded to recite the court proceedings from memory. Harvey was blown away.&lt;/p&gt;

&lt;p&gt;Mike explained to Harvey that he had never graduated college, let alone graduated from Harvard, as it was the tradition at Pearson Specter to hire only Harvard Law grads. There and then, Harvey made an exception and hired Mike immediately. Mike’s life changed forever.&lt;/p&gt;

&lt;h3&gt;Why am I telling you all this?&lt;/h3&gt;

&lt;p&gt;Suits is the first TV show I have ever followed religiously, largely because the protagonist and I share a similar story.&lt;/p&gt;

&lt;p&gt;In 2011, I moved to Uyo, which would become my new home for the next 9 months. On a beautiful Friday afternoon, I was boiling rice to eat with the stew mum had sent me. As I watched the grains of rice dart all over the pot, my phone rang. I didn’t know the caller, but something struck me: the phone number pattern was rather unique. It felt like the owner had asked the network provider to reserve the number specifically for him. On the other end of the line was a man with a deep baritone who spoke eloquently.&lt;/p&gt;

&lt;p&gt;The caller introduced himself as the CEO of DealDey and asked if I had 5 minutes to spare. For a second, my heart skipped; I was totally unprepared. I don’t quite remember what I mumbled. We spent the next 30 minutes or so talking about life and work. Then he asked the question, &lt;em&gt;“what did you graduate with?”&lt;/em&gt; There was a short, momentary silence, then I blurted it out: I didn’t graduate college.&lt;/p&gt;

&lt;h3&gt;For context’s sake…&lt;/h3&gt;

&lt;p&gt;You see, I was supposed to graduate with my colleagues back in 2009. But somehow, I managed to get my priorities mixed up and ended up in a really bad state academically. My folks didn’t know about this; I was too scared to even talk to them about it. I guess they imagined everything was well and there was no cause for alarm, largely because back in secondary school, I was, for the most part, a straight B student. Most terms, I oscillated between As and Bs, but I didn’t dip below Bs. I figured out how to hack the exams: cram, write and pass. It wasn’t rocket science.&lt;/p&gt;

&lt;p&gt;I studied Computer Science at university, and this was by choice, not an accident. The first time I came close to a computer was at the ripe age of 9. I didn’t touch one until I was 15. The moment I sat in front of one, I fell in love, so madly in love that I applied to study Computer Science for my bachelor’s. I was excited. To me, studying computer science was a way of formalising this newfound love; I figured that since I’d fallen head-over-heels for this thing, studying for a degree in it would be a walk in the park. I was wrong. Very wrong. I was in for a rude shock.&lt;/p&gt;

&lt;h3&gt;The shock.&lt;/h3&gt;

&lt;p&gt;I was so happy to be in school that for my first class, I arrived 30 minutes before the commencement of the lecture. I was ready. This young, naive 17-year-old was finally ready to take on the world, and nothing was going to stop me. But the Nigerian university system shocks you.&lt;/p&gt;

&lt;p&gt;My first real shock came when I saw the student handbook for the entire programme. I was never going to touch anything computer-network related until the second semester of my final year. Database management systems were slated for my third year. For my freshman and sophomore years, I was going to study GW-BASIC, Pascal and FORTRAN. This was between 2006 and 2007.&lt;/p&gt;

&lt;p&gt;As the weeks grew into months, the frustrations snowballed. Schooling was anything but fun, and school work felt like a chore. I dreaded exams. What started out as a love story was fast becoming the bane of my life. I was exhausted. Every iota of motivation in me fizzled out. Frustration and anger set in. I questioned my own sanity one too many times.&lt;/p&gt;

&lt;p&gt;One day, after lectures, I asked the HOD why we had to study decades-old technologies and concepts that had no place in today’s world. His response was &lt;em&gt;“go and complain to the NUC.”&lt;/em&gt; I guess this was the day I lost every last bit of interest in school. I attended classes merely as a formality. I craved holidays more than anything else. Prolonged and incessant strikes didn’t help either; I loved them, and a part of me secretly prayed for them. To me, this was my own way of running away from this torture called school.&lt;/p&gt;

&lt;p&gt;By my third year, I had probably lost count of how many resits I had to go through. I didn’t care; I just wanted to fulfil the mandatory four years and move on with my life. &lt;em&gt;“School isn’t adding much to me, so why even bother,”&lt;/em&gt; this was the lie I told myself, and I believed my own lie. Looking back, I blame myself. I could have applied myself a little more and gone back to my good old trick: cram, pass and move on. While all of this was going on, I applied myself to what I knew the real world needed. It’s safe to say I taught myself the real computer science. I had people I admired, and these people went out of their way to support me. The God-sent folks at MIDI (Maximum Impact Digital Institute, Calabar) gave me their resources without holding back: computers, Internet, power, space, etc.&lt;/p&gt;

&lt;p&gt;I buried myself in every single material I found meaningful; I read everything I could find. Everything. I spent most nights and days at a cyber cafe not too far from home, so much so that the operators knew me. Google was and still is my friend. I Googled almost every concept that didn’t make sense to me. I downloaded materials and devoured them…literally. This, for me, was my only ticket out: if I didn’t do well academically, I should at least have a market-ready skill. Looking back, I wish there had been an Andela.&lt;/p&gt;

&lt;p&gt;By final year, I had lost so much interest in school that it didn’t even make sense going anywhere near the school gate, let alone attending classes. I suddenly found myself with so much free time, and I was determined to make the most of it. I found a small library not far from home, and this became my new school. Then one day, while talking to my pastor (Pst Akomaye Ugar), I mentioned that I’d dropped out of school. The man left all he was doing and spent the next hour counselling me. He encouraged me to go back to school and complete what I had started. He checked in with me throughout the semester to see how I was faring, and that semester turned out to become my best ever. I made my best result, probably averaging about a 3.2 GPA. But I still had issues: years of multiple resits quickly diluted this score, and it made very little impact on my CGPA. I’d spend the next 7 years cleaning up my own mess. The mess I had created with my own hands.&lt;/p&gt;

&lt;h3&gt;I got hired…&lt;/h3&gt;

&lt;p&gt;A month after the call with Sim, I made the move to Lagos. This time, I wasn’t visiting; I was here to stay. To make things more interesting, I got a one-way ticket. No turning back. Sim hired me without a university degree. Amazing. I’d go on to spend the next 4 years working at Konga and doing almost everything, from stock counts to writing software. But I still wasn’t fulfilled; the lack of a university degree was taking its toll on me. I’d told a few colleagues a couple of times that I never graduated, but for some reason, they just didn’t believe me. I turned it into a joke; I’d usually tell people I sat at the back of the classroom. Seriously, this part was true.&lt;/p&gt;

&lt;p&gt;While still keeping a full-time job, I made a few trips back home, and on each trip, I managed to take resit exams and generally tried to improve my odds of graduating. It wasn’t fun. It was exhausting. I confided in Nneka, and she became not just my sounding board, but a great confidant.&lt;/p&gt;

&lt;p&gt;Nneka became my Rachel Zane, Mike Ross’ fiancée. At times when I was way too tired to even try again, she’d encourage me to give it one last shot. One shot became two shots, three shots and so on. On the 4th of January, 2016, after 7 tortuous years, I finally graduated. I got a certificate with a beautiful 3rd class, emblazoned in what I will describe as the most amazing typeface I have ever seen. I made a 1.54 CGPA.&lt;/p&gt;

&lt;p&gt;Sim was my Harvey. Thank you, Sim.&lt;/p&gt;

&lt;p&gt;Suits isn’t just a show. It speaks to me. Suits is &lt;em&gt;déjà vu&lt;/em&gt; all over again.&lt;/p&gt;

&lt;p&gt;PS: If you’ve had a similar story, go ahead and share it in the comments. Be a source of hope and inspiration today!!&lt;/p&gt;

&lt;h6&gt;Share this post with your network. Someone may just find another reason to try.&lt;/h6&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="http://cyberomin.github.io/life/2016/11/07/chapter-three.html"&gt;cyberomin.github.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>life</category>
    </item>
  </channel>
</rss>
