<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Frank Promise Edah</title>
    <description>The latest articles on DEV Community by Frank Promise Edah (@frankpromise).</description>
    <link>https://dev.to/frankpromise</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F830649%2Fc310fbbc-fdf0-45bf-bcae-78a146643096.png</url>
      <title>DEV Community: Frank Promise Edah</title>
      <link>https://dev.to/frankpromise</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/frankpromise"/>
    <language>en</language>
    <item>
      <title>terraform httpd_container</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Mon, 25 Apr 2022 20:41:44 +0000</pubDate>
      <link>https://dev.to/frankpromise/terraform-httpdcontainer-5dhn</link>
      <guid>https://dev.to/frankpromise/terraform-httpdcontainer-5dhn</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~&amp;gt;2.12.0"
    }
  }
}

provider "docker" {}

resource "docker_image" "httpd_image" {
  name = "httpd:latest"
}

resource "random_string" "random" {
  count = 2
  length  = 4
  special = false
  upper   = false
}



resource "docker_container" "httpd_container" {
  count = 2
  name  = join("-", ["httpd", random_string.random[count.index].result])
  image = docker_image.httpd_image.latest
  ports {
    internal = 80 # httpd listens on port 80 inside the container
    #external = 8080
  }
}



output "IPAddress" {
  value       = [for i in docker_container.httpd_container : join(":", [i.ip_address, tostring(i.ports[0].external)])]
  description = "The IP address and port of the container"
}


output "container_name" {
  value       = docker_container.httpd_container[*].name
  description = "The name of the container"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Using s3 commands with aws cli</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Mon, 18 Apr 2022 19:51:27 +0000</pubDate>
      <link>https://dev.to/frankpromise/using-s3-commands-with-aws-cli-1jcn</link>
      <guid>https://dev.to/frankpromise/using-s3-commands-with-aws-cli-1jcn</guid>
<description>&lt;p&gt;This write-up describes some of the commands you can use to manage your s3 buckets and objects using the AWS S3 command line interface.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;1. Create a bucket&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;To create an s3 bucket with the command line, we use &lt;strong&gt;s3 mb&lt;/strong&gt;.&lt;br&gt;
Note that the name given to an s3 bucket must be globally unique. It can contain lowercase letters, numbers, hyphens, and periods, but cannot contain a period next to a hyphen or another period.&lt;/p&gt;
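As a rough sketch (the `valid_bucket` helper below is hypothetical, not part of the AWS CLI), these naming rules can be approximated with a shell check:

```shell
# Hypothetical helper approximating the S3 naming rules above:
# 3-63 chars, lowercase letters, numbers, hyphens and periods,
# and no period adjacent to a hyphen or another period.
valid_bucket() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$' &&
  ! printf '%s' "$1" | grep -Eq '\.\.|\.-|-\.'
}

valid_bucket my-bucket-name && echo "ok"        # accepted
valid_bucket bad..name      || echo "rejected"  # period next to period
```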

&lt;p&gt;&lt;strong&gt;syntax&lt;/strong&gt;&lt;br&gt;
$ aws s3 mb &amp;lt;target&amp;gt; [--options]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;example&lt;/strong&gt;&lt;br&gt;
$ aws s3 mb s3://bucket-name&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;2. list buckets and objects&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;$ aws s3 ls&lt;br&gt;
Using this command without any options lists all buckets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To list the objects contained in a specific folder (prefix)&lt;/strong&gt;&lt;br&gt;
$ aws s3 ls s3://bucket-name/example/&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Delete a bucket&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;$ aws s3 rb s3://bucket-name&lt;/p&gt;

&lt;p&gt;By default, AWS does not allow you to delete a bucket that is not empty. To bypass that, you must add the &lt;strong&gt;--force&lt;/strong&gt; option. However, if versioning is enabled on the bucket, even this option won't work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Move objects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To move a folder from one s3 bucket to another, use:&lt;br&gt;
   $ aws s3 mv s3://bucket-name/example s3://my-bucket/&lt;/p&gt;

&lt;p&gt;To move a local file from your current working directory to the Amazon S3 bucket:&lt;br&gt;
    $ aws s3 mv filename.txt s3://bucket-name&lt;/p&gt;

&lt;p&gt;To move a file from your Amazon S3 bucket to your current working directory:&lt;br&gt;
  $ aws s3 mv s3://bucket-name/filename.txt ./&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Copy objects&lt;/strong&gt;&lt;br&gt;
To copy objects from one s3 bucket to another:&lt;/p&gt;

&lt;p&gt;$ aws s3 cp s3://bucket-name/example s3://my-bucket/&lt;/p&gt;

&lt;p&gt;To copy a local file from your current working directory to the Amazon S3 bucket:&lt;br&gt;
  $ aws s3 cp filename.txt s3://bucket-name&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to create s3 bucket using infrastructure as code(terraform)</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Sat, 16 Apr 2022 12:18:03 +0000</pubDate>
      <link>https://dev.to/frankpromise/how-to-create-s3-bucket-using-infrastructure-as-codeterraform-phn</link>
      <guid>https://dev.to/frankpromise/how-to-create-s3-bucket-using-infrastructure-as-codeterraform-phn</guid>
      <description>&lt;p&gt;&lt;strong&gt;WHAT YOU WILL LEARN&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial, we are going to create an s3 bucket using terraform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WHAT IS TERRAFORM?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform is an infrastructure as code tool that allows you to develop, update, and version your infrastructure efficiently while keeping it secure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PREREQUISITES&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An AWS IAM user with S3 permissions&lt;/li&gt;
&lt;li&gt;The access key ID and secret access key of that account.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How to create an Access key and secret access key&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to your AWS account&lt;/li&gt;
&lt;li&gt;Select IAM&lt;/li&gt;
&lt;li&gt;On the left side of the panel, select &lt;strong&gt;Users&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Add users&lt;/strong&gt; and enter the details&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;NOTE: you have to select &lt;strong&gt;programmatic access&lt;/strong&gt; as the access type to get an access key ID and secret key.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attach a policy&lt;/li&gt;
&lt;li&gt;Add tags (optional)&lt;/li&gt;
&lt;li&gt;Create the user.
If the user is successfully created, you will see a message with your access key and secret key.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Steps to create an s3 bucket using Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;u&gt;&lt;strong&gt;Create the S3 bucket module&lt;/strong&gt;&lt;/u&gt;
Create a module that will hold a basic s3 configuration. For that, create one folder named &lt;em&gt;"S3"&lt;/em&gt; containing two files: &lt;strong&gt;bucket.tf and var.tf&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;2. Define bucket&lt;/strong&gt;&lt;br&gt;
&lt;/u&gt;&lt;br&gt;
Open bucket.tf and define your bucket in it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resource "aws_s3_bucket" "practises3" {&lt;br&gt;
    bucket = "${var.bucket_name}" &lt;br&gt;
    acl = "${var.acl_value}"   &lt;br&gt;
}&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;&lt;br&gt;
There is a block with the key name &lt;strong&gt;resource&lt;/strong&gt; and the resource type &lt;strong&gt;aws_s3_bucket&lt;/strong&gt;. The type is a fixed value; since Terraform is cloud agnostic, the available types depend on the provider. In this case, the cloud provider is AWS, s3 is the resource, and &lt;strong&gt;practises3&lt;/strong&gt; is the resource name used.&lt;/p&gt;

&lt;p&gt;Bucket and ACL (access control list) are arguments for our resource. We can either provide values directly or use the var.tf file to declare the value of an argument.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;3. Define variables&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;variable "bucket_name" {}&lt;br&gt;
variable "acl_value" {&lt;br&gt;
    default = "private"&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This block declares the values of variables. We can either provide a default value to be used when needed or be asked for a value during execution.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;4. Add configuration&lt;/strong&gt;&lt;/u&gt;&lt;br&gt;
Here, we will create a file named main.tf in our working directory to hold the configuration.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;provider "aws" {&lt;br&gt;
    access_key = "${var.aws_access_key}"&lt;br&gt;
    secret_key = "${var.aws_secret_key}"&lt;br&gt;
    region = "${var.region}"&lt;br&gt;
}&lt;br&gt;
module "s3" {&lt;br&gt;
    source = "&amp;lt;path-to-S3-folder&amp;gt;"&lt;br&gt;
    bucket_name = "&amp;lt;Bucket-name&amp;gt;"       &lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here, the details of our provider (AWS): access key, secret key, etc., are provided. Since we are using Terraform modules to create the s3 bucket, we use the keyword &lt;strong&gt;module&lt;/strong&gt; and the name of the folder we created earlier. In the arguments, we provide the source path to the s3 module and the bucket name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Add Access key, secret_key, and region&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we will define variable.tf, where we will enter our access key, secret key, and region.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;variable "aws_access_key" {&lt;br&gt;
default = “&amp;lt;your_access_key&amp;gt;”&lt;br&gt;
}&lt;br&gt;
variable "aws_secret_key" {&lt;br&gt;
default = “&amp;lt;your_secret_key&amp;gt;”&lt;br&gt;
 }&lt;br&gt;
variable "region" {&lt;br&gt;
    default = "region"&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Run Terraform script in your system&lt;/strong&gt;&lt;br&gt;
&lt;/u&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;terraform init&lt;/strong&gt;&lt;br&gt;
It is used to initialize the working directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;terraform plan&lt;/strong&gt;&lt;br&gt;
We will use this command for script verification, to confirm that there is no error in our configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;terraform apply&lt;/strong&gt;&lt;br&gt;
We will use this command to create the s3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Building a Kubernetes 1.23 Cluster with Kubeadm</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Wed, 06 Apr 2022 16:33:21 +0000</pubDate>
      <link>https://dev.to/frankpromise/building-a-kubernetes-123-cluster-with-kubeadm-1fh0</link>
      <guid>https://dev.to/frankpromise/building-a-kubernetes-123-cluster-with-kubeadm-1fh0</guid>
<description>&lt;p&gt;Log into the Control Plane Node (Note: The following steps must be performed on all nodes, control plane and workers).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Create configuration file for containerd:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/containerd.conf&lt;br&gt;
overlay&lt;br&gt;
br_netfilter&lt;br&gt;
EOF&lt;/p&gt;
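The heredoc-to-tee pattern above can be tried safely against a scratch path first (the /tmp filename below is just for illustration):

```shell
# Write the two module names into a scratch file with a heredoc,
# mirroring the /etc/modules-load.d/containerd.conf step.
cat <<EOF > /tmp/containerd-demo.conf
overlay
br_netfilter
EOF

cat /tmp/containerd-demo.conf
```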

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Load modules:&lt;/strong&gt;
sudo modprobe overlay
sudo modprobe br_netfilter&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Set system configurations for Kubernetes networking:&lt;br&gt;
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf&lt;br&gt;
net.bridge.bridge-nf-call-iptables = 1&lt;br&gt;
net.ipv4.ip_forward = 1&lt;br&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apply new settings:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo sysctl --system&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install containerd:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y containerd&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create default configuration file for containerd:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /etc/containerd&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generate default containerd configuration and save to the newly created default file:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Restart containerd to ensure new configuration file usage:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo systemctl restart containerd&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify that containerd is running.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo systemctl status containerd&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disable swap:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo swapoff -a&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disable swap on startup in /etc/fstab:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab&lt;/p&gt;
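Before editing the real /etc/fstab, the sed expression can be dry-run on a scratch copy (the sample entries below are made up):

```shell
# Miniature fstab: the sed comments out only the line containing " swap ",
# using escaped \( \) groups and the \1 back-reference.
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 none swap sw 0 0\n' > /tmp/fstab-demo
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab-demo
cat /tmp/fstab-demo
```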

&lt;p&gt;&lt;strong&gt;Install dependency packages:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y apt-transport-https curl&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Download and add GPG key:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;curl -s &lt;a href="https://packages.cloud.google.com/apt/doc/apt-key.gpg"&gt;https://packages.cloud.google.com/apt/doc/apt-key.gpg&lt;/a&gt; | sudo apt-key add -&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add Kubernetes to repository list:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br&gt;
deb &lt;a href="https://apt.kubernetes.io/"&gt;https://apt.kubernetes.io/&lt;/a&gt; kubernetes-xenial main&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update package listings:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt-get update&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Kubernetes packages (Note: If you get a dpkg lock message, just wait a minute or two before trying the command again):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt-get install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Turn off automatic updates:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo apt-mark hold kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;Log into both Worker Nodes and perform the previous steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Initialize the Cluster&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Initialize the Kubernetes cluster on the control plane node using kubeadm (Note: This is only performed on the Control Plane Node):&lt;/p&gt;

&lt;p&gt;sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.23.0&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set kubectl access:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test access to cluster:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;kubectl get nodes&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install the Calico Network Add-On&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the Control Plane Node, install Calico Networking:&lt;br&gt;
kubectl apply -f &lt;br&gt;
&lt;a href="https://docs.projectcalico.org/manifests/calico.yaml"&gt;https://docs.projectcalico.org/manifests/calico.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check status of the control plane node:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;kubectl get nodes&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Join the Worker Nodes to the Cluster&lt;/strong&gt;
In the Control Plane Node, create the token and copy the kubeadm join command (NOTE: The join command can also be found in the output of the kubeadm init command):
kubeadm token create --print-join-command&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In both Worker Nodes, paste the kubeadm join command to join the cluster. Use sudo to run it as root:&lt;/p&gt;

&lt;p&gt;sudo kubeadm join ...&lt;/p&gt;

&lt;p&gt;In the Control Plane Node, view cluster status (Note: You may have to wait a few moments to allow all nodes to become ready):&lt;br&gt;
kubectl get nodes&lt;br&gt;
&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to set up systemd targets in CENTOS</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Mon, 28 Mar 2022 19:44:51 +0000</pubDate>
      <link>https://dev.to/frankpromise/how-to-set-up-systemd-targets-in-centos-1afo</link>
      <guid>https://dev.to/frankpromise/how-to-set-up-systemd-targets-in-centos-1afo</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u4fcmDwb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9dah8y8mpkrt4tv141u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u4fcmDwb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9dah8y8mpkrt4tv141u.png" alt="Image description" width="880" height="513"&gt;&lt;/a&gt;You have been handed a server that needs to be customized according to a specific standard. complete the following objectives &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OBJECTIVES&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Carry out a check on the default target in order to make sure the custom target you will create works.&lt;/li&gt;
&lt;li&gt;Your custom target should be identical to your default target, except that it should want the httpd service.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Experience level&lt;/strong&gt;&lt;br&gt;
Basic&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;verify default target&lt;/strong&gt;&lt;br&gt;
To verify the default target: &lt;strong&gt;systemctl get-default&lt;/strong&gt;&lt;br&gt;
To set it to the multi-user target: &lt;em&gt;systemctl set-default multi-user.target&lt;/em&gt;&lt;br&gt;
To isolate the target: &lt;strong&gt;systemctl isolate multi-user.target&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CREATE custom.target&lt;/strong&gt;&lt;br&gt;
a) Change directory: &lt;strong&gt;cd /etc/systemd/system&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;b) create a file by copying multi-user.target into custom.target: &lt;strong&gt;cp /usr/lib/systemd/system/multi-user.target ./custom.target&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;c) open the file: &lt;strong&gt;nano custom.target&lt;/strong&gt;&lt;br&gt;
d) Edit the [unit] section:&lt;br&gt;
          [Unit]&lt;br&gt;
          Description=Custom Target&lt;br&gt;
          Documentation=man:systemd.special(7)&lt;br&gt;
          Requires=basic.target&lt;br&gt;
          Wants=httpd.service&lt;br&gt;
          Conflicts=rescue.service rescue.target&lt;br&gt;
          After=basic.target rescue.service rescue.target&lt;br&gt;
          AllowIsolate=yes&lt;br&gt;
e) use &lt;strong&gt;Ctrl+X&lt;/strong&gt; to save and quit&lt;br&gt;
    check if httpd is already installed on the server: &lt;em&gt;rpm -q httpd&lt;/em&gt;&lt;br&gt;
    if not installed, run &lt;strong&gt;yum install httpd -y&lt;/strong&gt;&lt;/p&gt;
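As a sanity check, the edited [Unit] section can be reproduced in a scratch file and grepped for the httpd dependency (the /tmp path is illustrative; the real file lives in /etc/systemd/system):

```shell
# Recreate the custom.target [Unit] section and confirm the
# Wants=httpd.service line that distinguishes it from multi-user.target.
cat <<'EOF' > /tmp/custom-target-demo
[Unit]
Description=Custom Target
Documentation=man:systemd.special(7)
Requires=basic.target
Wants=httpd.service
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes
EOF

grep '^Wants=' /tmp/custom-target-demo
```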

&lt;p&gt;f) isolate custom.target: &lt;strong&gt;systemctl isolate custom.target&lt;/strong&gt;&lt;br&gt;
g) check the status of httpd: &lt;strong&gt;systemctl status httpd&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we isolate back into multi-user.target, we will see that httpd is dead because that target is not set up to want the httpd (Apache) service.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>python code for a treasure hunt game</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Sun, 27 Mar 2022 16:49:14 +0000</pubDate>
      <link>https://dev.to/frankpromise/python-code-for-a-treasure-hunt-game-4hkn</link>
      <guid>https://dev.to/frankpromise/python-code-for-a-treasure-hunt-game-4hkn</guid>
<description>&lt;p&gt;Learning python has made me realize the importance of repetition, especially if one is new to it.&lt;/p&gt;

&lt;p&gt;Today, I was able to write basic code for a treasure hunt game without guidance&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("Welcome to Treasure Island.")
print("Your mission is to find the treasure.") 



first_choice = input('You\'re at a crossroads, where do you want to go? Left or right?\n').lower()


if first_choice == 'left':
  second_choice = input('You\'ve come to a lake. There is an island in the middle of the lake. Do you wait or swim?\n').lower()


  if second_choice == 'wait':

    third_choice = input('You have arrived at the island unharmed. There is a house with 3 doors. One red, one yellow and one blue. Which colour do you choose?\n').lower()


    if third_choice == 'blue':
      print('Eaten by beasts. Game over')
    elif third_choice == 'red':
      print('burned by fire. Game over')
    elif third_choice == 'yellow':
      print('You win')
    else:
      print('Game over.')


  else:
    print('swallowed by a shark. Game over')

else:
  print('oops! wrong choice. Game over')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>privilege escalation</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Thu, 24 Mar 2022 09:44:59 +0000</pubDate>
      <link>https://dev.to/frankpromise/privelege-escalation-3g86</link>
      <guid>https://dev.to/frankpromise/privelege-escalation-3g86</guid>
      <description>&lt;p&gt;As part of your onboarding, you've been tasked with setting a server up so that bob is a superuser. Set bob up so no password is required when he uses the sudo command and he can run any command. In addition, set up adam to be able to run the journalctl command as root without being prompted for a password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 1&lt;/strong&gt; Add bob to the sudoers file&lt;br&gt;
In order to add bob to the sudoers file, we have to first become the root user by using the &lt;strong&gt;sudo -i&lt;/strong&gt; command and then run the &lt;strong&gt;visudo&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 2&lt;/strong&gt;&lt;br&gt;
Inside the editor, press &lt;em&gt;i&lt;/em&gt; on the keyboard to insert the following:&lt;br&gt;
&lt;strong&gt;bob   ALL= (ALL)     NOPASSWD: ALL&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;adam    ALL=(root)     NOPASSWD: /bin/journalctl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To exit, press the &lt;em&gt;Esc&lt;/em&gt; key, which leaves insert mode, then enter &lt;strong&gt;:wq&lt;/strong&gt; to save and quit.&lt;/p&gt;

&lt;p&gt;To check that your changes work properly, switch to adam (for example with &lt;strong&gt;su - adam&lt;/strong&gt;).&lt;br&gt;
Try to install php as this user, and you will notice it doesn't allow you to, because adam is only permitted to run /bin/journalctl.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>WORKING WITH FILES IN CENTOS8</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Wed, 23 Mar 2022 00:15:19 +0000</pubDate>
      <link>https://dev.to/frankpromise/working-with-files-in-centos8-32n6</link>
      <guid>https://dev.to/frankpromise/working-with-files-in-centos8-32n6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5xwTcKUl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gtgv8gje0xokzt8b5dt4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5xwTcKUl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gtgv8gje0xokzt8b5dt4.jpg" alt="Image description" width="674" height="580"&gt;&lt;/a&gt;Let's assume you are a junior sysadmin in a company and as part of your on board process, you have been asked to perform some basic tasks in order to check your knowledge of the operating system. You have been asked to perform the following tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find out how many CPU's are in the system and what type.&lt;/li&gt;
&lt;li&gt;Gather the logs&lt;/li&gt;
&lt;li&gt;Find out how many users are on the system.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At first, this might look quite challenging, but knowing what commands to use at the right time makes it easier! We are going to walk through each step together and arrive at the right end!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find out how many CPU's are in the system and what type.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log into the server using the credentials given. You can temporarily assume root privileges using the &lt;strong&gt;sudo -i&lt;/strong&gt; command. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;2. To find out how many CPUs are on the system, use the &lt;em&gt;cat&lt;/em&gt; command to print out the contents of /proc/cpuinfo. This file is where you can find all information related to the CPU. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In order to get just the relevant information, use the &lt;em&gt;head -5&lt;/em&gt; command. This command shows you the first five lines of /proc/cpuinfo. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Alternatively, if there is more than 1 CPU, you can use the &lt;em&gt;grep&lt;/em&gt; command like this: &lt;em&gt;grep -A 4 processor /proc/cpuinfo&lt;/em&gt;. Adding '-A 4' tells grep to print four lines after each match of the word 'processor'.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary of task&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;cat /proc/cpuinfo | head -5 &amp;gt; /tmp/cpu&lt;br&gt;
    OR&lt;br&gt;
grep -A 4 processor /proc/cpuinfo &amp;gt; /tmp/cpu&lt;br&gt;
'&amp;gt;' tells the system to redirect the result into the stated file instead of printing it out.&lt;/p&gt;
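The -A behaviour is easy to verify on a scratch file (the cpuinfo content below is made up):

```shell
# Fake two-processor cpuinfo; 'grep -A 2' prints each matching line
# plus the two lines that follow it.
printf 'processor : 0\nvendor_id : DemoCPU\nmodel name : Demo 1\nprocessor : 1\nvendor_id : DemoCPU\nmodel name : Demo 2\n' > /tmp/cpuinfo-demo
grep -A 2 processor /tmp/cpuinfo-demo > /tmp/cpu-demo
cat /tmp/cpu-demo
```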

&lt;p&gt;&lt;strong&gt;Gather the logs containing today's information&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's assume we want information on today's date; we can use the &lt;em&gt;tail&lt;/em&gt; command to print out the recent logs from /var/log/messages. Run the following commands:&lt;br&gt;
&lt;strong&gt;tail /var/log/messages&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;grep "Mar 3 " /var/log/messages &amp;gt; /tmp/logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find out how many users are on the system&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To do this, we can use the &lt;strong&gt;wc&lt;/strong&gt; command. This command is used to count the lines in a file. To find out how many users are on the system, run the following command:&lt;br&gt;
&lt;strong&gt;wc -l /etc/passwd &amp;gt; /tmp/usernum&lt;/strong&gt;&lt;/p&gt;
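To convince yourself of what wc -l counts, try it on a scratch file with a known number of passwd-style lines (the entries below are made up):

```shell
# Three fake /etc/passwd entries -> wc -l reports 3.
printf 'root:x:0:0\nbin:x:1:1\nalice:x:1000:1000\n' > /tmp/passwd-demo
wc -l < /tmp/passwd-demo
```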

&lt;p&gt;Now you see it is not as challenging as it seems from the beginning!&lt;/p&gt;

&lt;p&gt;Suggestions on other ways this can be done are highly welcome!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Creating an Amazon RDS DB Instance (MS SQL Server)</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Mon, 21 Mar 2022 21:59:26 +0000</pubDate>
      <link>https://dev.to/frankpromise/creating-an-amazon-rds-db-instance-ms-sql-server-4l5c</link>
      <guid>https://dev.to/frankpromise/creating-an-amazon-rds-db-instance-ms-sql-server-4l5c</guid>
<description>&lt;p&gt;&lt;strong&gt;STEP 1&lt;/strong&gt;&lt;br&gt;
Create a relational database.&lt;/p&gt;

&lt;p&gt;To create a relational database, go to the aws console and search for RDS. Make sure the engine option is set to Microsoft SQL Server. As soon as the DB is up and running, the security group settings can be changed. For instance, the inbound rules can be edited to allow traffic from anywhere if there aren't any known IP addresses; the same goes for the outbound rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 2&lt;/strong&gt;&lt;br&gt;
In order to connect from the local server, copy the endpoint of the RDS instance. In the local client, paste the endpoint into the 'server name' field, choose SQL Server Authentication, and enter the username and password generated during the RDS creation stage.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
      <category>database</category>
    </item>
    <item>
      <title>How to build AWS SERVERLESS AURORA</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Sun, 20 Mar 2022 19:31:16 +0000</pubDate>
      <link>https://dev.to/frankpromise/how-to-build-aws-serverless-aurora-3p2p</link>
      <guid>https://dev.to/frankpromise/how-to-build-aws-serverless-aurora-3p2p</guid>
<description>&lt;p&gt;&lt;strong&gt;step 1&lt;/strong&gt;&lt;br&gt;
Create an rds database and choose the serverless option. Make sure the Data API option is enabled.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8d4h5kNO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33xvey5jxouxt8fhmoee.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8d4h5kNO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33xvey5jxouxt8fhmoee.jpg" alt="Image description" width="328" height="900"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;step 2&lt;/strong&gt;&lt;br&gt;
Connect the query editor to the database.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to host a website on an ec2 server</title>
      <dc:creator>Frank Promise Edah</dc:creator>
      <pubDate>Sat, 19 Mar 2022 14:39:33 +0000</pubDate>
      <link>https://dev.to/frankpromise/how-to-host-a-website-on-an-ec2-server-1k7o</link>
      <guid>https://dev.to/frankpromise/how-to-host-a-website-on-an-ec2-server-1k7o</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QwidK1ky--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ybsok6cxdxqv468qvni.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QwidK1ky--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ybsok6cxdxqv468qvni.jpg" alt="Image description" width="736" height="680"&gt;&lt;/a&gt;**STEP 1:&lt;br&gt;
Create an s3 bucket and upload the zip folder or file containing the index.html. Note that all objects in an s3 bucket are private by default. Hence, it is important to make the file publicly accessible using the the bucket ACL. &lt;br&gt;
Two choice is available: Either create the bucket policy manually or use the AWS policy generator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 2:&lt;/strong&gt;&lt;br&gt;
Launch an ec2 instance allowing internet access over port 80. SSH into the instance using PuTTY (for Windows users). Copy the IPv4 address of the instance and, under the part labelled 'Session' in PuTTY, insert 'ec2-user@theipyoucopied'. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 3:&lt;/strong&gt;&lt;br&gt;
Install Apache by typing 'yum install httpd -y' in the command line. Change directory to /var/www/html, then copy the website in the s3 bucket into the ec2 instance. To do that, type 'wget' followed by the object URL in the command line. Move the file to the /var/www/html directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 4:&lt;/strong&gt;&lt;br&gt;
Start httpd using the command 'service httpd start'.&lt;/p&gt;

&lt;p&gt;yehhhh!!!! website hosting successful!!&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/trwmg0i453wif4o9svqd.png"&gt;Image description&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
