<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Imran Hayder </title>
    <description>The latest articles on DEV Community by Imran Hayder  (@hayderimran7).</description>
    <link>https://dev.to/hayderimran7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F296088%2F9a07ed9c-241e-4e34-b5f5-0e0b3dd2a25f.jpeg</url>
      <title>DEV Community: Imran Hayder </title>
      <link>https://dev.to/hayderimran7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hayderimran7"/>
    <language>en</language>
    <item>
      <title>How to update Azure load balancer backend pool via a python script</title>
      <dc:creator>Imran Hayder </dc:creator>
      <pubDate>Mon, 05 Oct 2020 04:34:15 +0000</pubDate>
      <link>https://dev.to/hayderimran7/how-to-azure-load-balancer-backend-pool-update-python-script-3pki</link>
      <guid>https://dev.to/hayderimran7/how-to-azure-load-balancer-backend-pool-update-python-script-3pki</guid>
<description>&lt;p&gt;Some time ago I was working on an interesting problem in Azure. Let's say we have two VMs behind a load balancer, and the ask was to do some maintenance on one of the VMs while the other stayed in the load balancer.&lt;br&gt;&lt;br&gt;
The requirements were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;take the VM out of the load balancer for maintenance&lt;/li&gt;
&lt;li&gt;do some work on it&lt;/li&gt;
&lt;li&gt;add it back to the load balancer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There wasn't any automated way to do this via Azure at the time, so I came up with the following Python script. Hopefully someone out there in the same predicament finds it useful :)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3
import argparse
import configparser
import logging.config
import os
import sys

# Setup..
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import (
    NetworkManagementClient,
)
from azure.mgmt.compute import (
    ComputeManagementClient
)
import requests
from requests import Request, Session

logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
log = logging.getLogger(__name__)

parser = argparse.ArgumentParser('RMS(one) Load Balancer update')
parser.add_argument('-s', '--stack', action='store', dest='stack', metavar='', required=True, help='Name of stack to update LB')
parser.add_argument('-a', '--action', action='store', dest='action', metavar='', required=True, help='Action for LB', choices=['create-maint', 'delete-maint'])
args = parser.parse_args()
stack = args.stack
action = args.action
base_url = 'https://management.azure.com/'

def get_token_from_client_credentials(endpoint, client_id, client_secret):
    payload = {
        'grant_type': 'client_credentials',
        'client_id': client_id,
        'client_secret': client_secret,
        'resource': 'https://management.core.windows.net/',
    }
    #TODO add back in verify for non-fiddler
    #NOTE add Verify=False when going via a proxy with fake cert / fiddler
    response = requests.post(endpoint, data=payload).json()
    return response['access_token']

def get_virtual_machine(compute_client, resource_group_name, vm_name):
    """
    :param resource_group_name: str
    :param vm_name: str
    :return: azure.mgmt.compute.VirtualMachine
    """
    virtual_machine = compute_client.virtual_machines.get(resource_group_name, vm_name)
    logging.info('using virtual machine id: %s', virtual_machine.id)
    return virtual_machine

def get_network_interface_ip_configuration(network_client, resource_group_name, network_interface_name):
    network_interface = network_client.network_interfaces.get(resource_group_name, network_interface_name)
    return network_interface
    #for ipconfig in network_interface.network_interface.ip_configurations:
    #    return ipconfig

def get_virtual_machine_network_interface(compute_client, network_client, resource_group_name, virtual_machine_name):
    virtual_machine = get_virtual_machine(compute_client, resource_group_name, virtual_machine_name)
    for profile in virtual_machine.network_profile.network_interfaces:
        print(profile.id)
        nic_uri = profile.id

    #network_interface = get_network_interface(resource_group_name)
    label = os.path.basename(os.path.normpath(nic_uri))
    logging.info('nic on vm to use is: %s', label)

    network_interface = get_network_interface_ip_configuration(network_client, resource_group_name, label)
    logging.info('nic id is: %s', network_interface.id)
    return network_interface

def build_request(vm_object, nic_object, load_balancer=None):
    """
    :param vm_object : azure.mgmt.compute.VirtualMachine
    :param nic_object : azure.mgmt.network.networkresourceprovider.NetworkInterface
    :param load_balancer : azure.mgmt.network.LoadBalancer
    :return: dict
    """
    if load_balancer is None:
        backend_pool = []
    else:
        backend_pool = [{'id' : load_balancer.backend_address_pools[0].id}]

    request = {
        'properties': {
            'virtualMachine' : {
                'id' : vm_object.id
                },
            'ipConfigurations' : [{ #may have to build by hand
                'properties' : {
                    'loadBalancerBackendAddressPools' : backend_pool,
                    'subnet' : {
                        'id' :  nic_object.ip_configurations[0].subnet.id
                        }
                    },
                'name' : nic_object.ip_configurations[0].name,
                'id' : nic_object.ip_configurations[0].id
            }]
        },
        'id' : nic_object.id,
        'name' : nic_object.name,
        'location' : vm_object.location,
        'type' : 'Microsoft.Network/networkInterfaces'
        }

    return request

def send_loadbalancer_request(payload, auth_token, resource_id, max_retries=20):
    endpoint = base_url + resource_id + '?api-version=2019-06-01'
    header = {'Authorization' : 'Bearer ' + auth_token}
    while max_retries &amp;gt; 0:
        session = Session()
        request = Request('PUT', endpoint, json=payload, headers=header)
        prepared = session.prepare_request(request)

        log.debug('raw body sent')
        log.debug(prepared.body)

        response = session.send(prepared)
        print(response.status_code)
        print(response.text)
        if response.status_code == 200:
            break
        elif response.status_code == 429:
            log.info('retrying an HTTP send due to 429 retryable response')
            log.info('this will be try# %s', max_retries)
        max_retries = max_retries - 1
    return response

def main():
    ini_config = configparser.ConfigParser()
    ini_config.read(os.path.expanduser('~/azure.ini'))  # configparser does not expand '~' on its own
    stack_data = ini_config[stack]
    tenant_id = stack_data['tenant']
    client_id = stack_data['client_id']
    client_secret = stack_data['secret']
    sub_id = stack_data['subscription_id']
    endpoint = 'https://login.microsoftonline.com/' + tenant_id + '/oauth2/token'
    auth_token = get_token_from_client_credentials(endpoint, client_id, client_secret)
    # now the Azure management credentials
    credentials = ServicePrincipalCredentials(client_id=client_id,
                                              secret=client_secret,
                                              tenant=tenant_id)
    # now the specific compute, network resource type clients
    compute_client = ComputeManagementClient(credentials, sub_id)
    network_client = NetworkManagementClient(credentials, sub_id)
    # Resources
    resources = {"vmnames":{"dcos_vms": [f"{stack}-dcos-extpublicslave1", f"{stack}-dcos-extpublicslave2"], "maint_vm" : f"{stack}-maintpage"},
                 "vmResourceGroup": f"{stack}-dcos",
                 "netResourceGroup": f"{stack}-Network-Infrastructure",
                 "loadBalancerName": f"{stack}-dcos-extpublicslave",
                 "subnetName": f"{stack}-SubNet1",
                 "virtualNetworkName" : f"{stack}-Vnet1"}

    #TODO modify this to match your specific settings
    vm_resource_group = resources['vmResourceGroup']
    load_balancer_name = resources['loadBalancerName']
    #TODO - end - only the "above" should need to change.
    dcos_vms_res = {}
    maint_vm_res = {}
    for dcos_vm_name in resources["vmnames"]["dcos_vms"]:
        dcos_vms_res[dcos_vm_name] = {"vm":"", "nic":""}
        dcos_vms_res[dcos_vm_name]["vm"] = compute_client.virtual_machines.get(vm_resource_group, dcos_vm_name)
        dcos_vms_res[dcos_vm_name]["nic"] = get_virtual_machine_network_interface(compute_client, network_client, vm_resource_group, dcos_vm_name)
    maint_vm_res["vm"] = compute_client.virtual_machines.get(vm_resource_group, resources["vmnames"]["maint_vm"])
    maint_vm_res["nic"] = get_virtual_machine_network_interface(compute_client, network_client, vm_resource_group, resources["vmnames"]["maint_vm"])
    #the load balancer
    load_balancer = network_client.load_balancers.get(vm_resource_group, load_balancer_name)

    # running maint on/off action
    if action == "create-maint":
        log.info("Running maintenance ON ")
        maint_vm_lb = load_balancer
        dcos_vm_lb = None
    elif action == "delete-maint":
        log.info("Running maintenance OFF ")
        maint_vm_lb = None
        dcos_vm_lb = load_balancer
    else:
        log.error("ERROR: invalid action specified")
        sys.exit(1)

    maint_lb_request = build_request(maint_vm_res["vm"], maint_vm_res["nic"], maint_vm_lb)
    send_loadbalancer_request(maint_lb_request, auth_token, maint_vm_res["nic"].id)
    for dcos_vm in dcos_vms_res:
        dcos_lb_request = build_request(dcos_vms_res[dcos_vm]["vm"], dcos_vms_res[dcos_vm]["nic"], dcos_vm_lb)
        send_loadbalancer_request(dcos_lb_request, auth_token, dcos_vms_res[dcos_vm]["nic"].id)


if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The script is also available as a &lt;a href="https://gist.github.com/hayderimran7/0754985c4b5cbb597a13155856067603"&gt;GitHub gist&lt;/a&gt;, so feel free to check it out.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to run the script
&lt;/h3&gt;

&lt;p&gt;Save the script as &lt;code&gt;lb.py&lt;/code&gt; and run as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python lb.py -s stack_name -a create-maint
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;stack_name&lt;/code&gt; is whatever I used in &lt;code&gt;azure.ini&lt;/code&gt; file to get the credential from . The options &lt;code&gt;create-maint&lt;/code&gt; and &lt;code&gt;delete-maint&lt;/code&gt; are used to switch back and forth between the two vms. tested with python 3 and python 2&lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt; I follow naming convention for my resource so stack has to be passed you can totally remove the inputs as you like . &lt;/p&gt;
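&lt;p&gt;For reference, here is a minimal sketch of what an &lt;code&gt;azure.ini&lt;/code&gt; stanza could look like; the section name matches the stack you pass with &lt;code&gt;-s&lt;/code&gt;, the keys are the ones the script reads, and all values shown are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[stack_name]
tenant = 00000000-0000-0000-0000-000000000000
client_id = 00000000-0000-0000-0000-000000000000
secret = your-service-principal-secret
subscription_id = 00000000-0000-0000-0000-000000000000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;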

</description>
      <category>azure</category>
      <category>loadbalancer</category>
      <category>microsoftazure</category>
    </item>
    <item>
      <title>Create a simple VPC Peer between Kubernetes and RDS(postgres)</title>
      <dc:creator>Imran Hayder </dc:creator>
      <pubDate>Thu, 19 Mar 2020 15:46:00 +0000</pubDate>
      <link>https://dev.to/hayderimran7/create-a-simple-vpc-peer-between-kubernetes-and-rds-postgres-lhn</link>
      <guid>https://dev.to/hayderimran7/create-a-simple-vpc-peer-between-kubernetes-and-rds-postgres-lhn</guid>
      <description>&lt;h1&gt;
  
  
  Create a VPC Peering connection between EKS Kubernetes and RDS Postgres
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: this script assumes your resource names are prefixed with the name of the EKS cluster, so if the EKS cluster name is "myEKS" then the EKS VPC name is myEKS/VPC.&lt;br&gt;&lt;br&gt;
Please adjust the variables in Step 1 according to your naming convention or the actual names of your resources. &lt;br&gt;
All the remaining steps are the same.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Set some basic information like EKS names / VPC Names
&lt;/h3&gt;

&lt;p&gt;Setting variables for EKS cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;EKS_CLUSTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"name_of_EKS_cluster_goes_here"&lt;/span&gt;
&lt;span class="c"&gt;# set name of VPC in which EKS exists, for me i use eksctl to create eks&lt;/span&gt;
&lt;span class="c"&gt;# so vpc name is automatically set to name_of_eks_cluster/VPC&lt;/span&gt;
&lt;span class="c"&gt;# you can change here to whatever your VPC name is&lt;/span&gt;
&lt;span class="nv"&gt;EKS_VPC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EKS_CLUSTER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/VPC 
&lt;span class="c"&gt;# same goes for the VPC public routing table name of EKS&lt;/span&gt;
&lt;span class="nv"&gt;EKS_PUBLIC_ROUTING_TABLE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EKS_CLUSTER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/PublicRouteTable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and for RDS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;RDS_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"name_of_RDS_goes_here"&lt;/span&gt;
&lt;span class="c"&gt;# set this variable to the name of VPC in which RDS exists &lt;/span&gt;
&lt;span class="nv"&gt;RDS_VPC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RDS_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/VPC
&lt;span class="c"&gt;# same goes for private routing table of RDS&lt;/span&gt;
&lt;span class="nv"&gt;RDS_PRIVATE_ROUTING_TABLE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RDS_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/RDSPrivateRoutingTable

&lt;span class="nv"&gt;RDS_DB_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Name_of_RDS_instance"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that all the variables are set, &lt;strong&gt;the following steps can be run as a straight copy-paste.&lt;/strong&gt; &lt;/p&gt;
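&lt;p&gt;One assumption to make explicit: the peering commands further below interpolate a &lt;code&gt;$DRY_RUN&lt;/code&gt; variable that is not defined anywhere above. Leave it empty so the commands actually apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# set to empty so the aws commands run for real&lt;/span&gt;
&lt;span class="nv"&gt;DRY_RUN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;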

&lt;h3&gt;
  
  
  Get VPC ID of acceptor i.e. RDS
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"getting the VPC ID and CIDR of acceptor(RDS instance)"&lt;/span&gt;
&lt;span class="nv"&gt;ACCEPT_VPC_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-vpcs &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:Name,Values&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$RDS_VPC&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Vpcs[0].VpcId &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;ACCEPT_CIDR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-vpcs &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:Name,Values&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$RDS_VPC&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Vpcs[0].CidrBlockAssociationSet[0].CidrBlock &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Get VPC ID of requestor i.e. EKS
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;REQUEST_VPC_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-vpcs &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:Name,Values&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$EKS_VPC&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Vpcs[0].VpcId &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;REQUEST_CIDR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-vpcs &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:Name,Values&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$EKS_VPC&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Vpcs[0].CidrBlockAssociationSet[0].CidrBlock &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Get public route table IDs of requestor and acceptor
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;REQ_ROUTE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-route-tables &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:Name,Values&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$EKS_PUBLIC_ROUTING_TABLE&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;RouteTables[0].RouteTableId &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;ACCEPT_ROUTE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-route-tables &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:Name,Values&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$RDS_PRIVATE_ROUTING_TABLE&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;RouteTables[0].RouteTableId &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create Peering Connection
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;peerVPCID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws &lt;span class="nv"&gt;$DRY_RUN&lt;/span&gt; ec2 create-vpc-peering-connection &lt;span class="nt"&gt;--vpc-id&lt;/span&gt; &lt;span class="nv"&gt;$REQUEST_VPC_ID&lt;/span&gt; &lt;span class="nt"&gt;--peer-vpc-id&lt;/span&gt; &lt;span class="nv"&gt;$ACCEPT_VPC_ID&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt; VpcPeeringConnection.VpcPeeringConnectionId &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
aws &lt;span class="nv"&gt;$DRY_RUN&lt;/span&gt; ec2 accept-vpc-peering-connection &lt;span class="nt"&gt;--vpc-peering-connection-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$peerVPCID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
aws &lt;span class="nv"&gt;$DRY_RUN&lt;/span&gt; ec2 create-tags &lt;span class="nt"&gt;--resources&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$peerVPCID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--tags&lt;/span&gt; &lt;span class="s1"&gt;'Key=Name,Value=eks-peer-rds'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Adding the private VPC CIDR block to our public VPC route table as destination
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws &lt;span class="nv"&gt;$DRY_RUN&lt;/span&gt; ec2 create-route &lt;span class="nt"&gt;--route-table-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REQ_ROUTE_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--destination-cidr-block&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ACCEPT_CIDR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--vpc-peering-connection-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$peerVPCID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
aws &lt;span class="nv"&gt;$DRY_RUN&lt;/span&gt; ec2 create-route &lt;span class="nt"&gt;--route-table-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ACCEPT_ROUTE_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--destination-cidr-block&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REQUEST_CIDR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--vpc-peering-connection-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$peerVPCID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add a rule that allows inbound RDS (from our Public Instance source)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;RDS_VPC_SECURITY_GROUP_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws rds describe-db-instances &lt;span class="nt"&gt;--db-instance-identifier&lt;/span&gt; &lt;span class="nv"&gt;$RDS_DB_NAME&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;DBInstances[0].VpcSecurityGroups[0].VpcSecurityGroupId &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
aws ec2 authorize-security-group-ingress &lt;span class="nt"&gt;--group-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RDS_VPC_SECURITY_GROUP_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--protocol&lt;/span&gt; tcp &lt;span class="nt"&gt;--port&lt;/span&gt; 5432 &lt;span class="nt"&gt;--cidr&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REQUEST_CIDR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  TESTING CONNECTIONS
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run a postgresql container:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run -i --tty --rm postgresdebug --image=alpine:3.5 -- 
 restart=Never -- sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the postgresql client:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apk update
apk add postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run PSQL:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql -h &amp;lt;HOST&amp;gt; -U &amp;lt;USER&amp;gt;
Password for user &amp;lt;USER&amp;gt;:
psql (9.6.10, server 9.6.15)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
&amp;lt;DB_NAME&amp;gt;=
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>vpc</category>
      <category>kubernetes</category>
      <category>rds</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Adding cross-account access to EKS </title>
      <dc:creator>Imran Hayder </dc:creator>
      <pubDate>Thu, 12 Mar 2020 18:42:08 +0000</pubDate>
      <link>https://dev.to/hayderimran7/adding-cross-account-access-to-eks-5ebh</link>
      <guid>https://dev.to/hayderimran7/adding-cross-account-access-to-eks-5ebh</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;When you want your IAM users to access an EKS cluster in another account, it's very simple to do via a cross-account role. &lt;br&gt;
This assumes you have already created the role in account B for users in account A. &lt;/p&gt;
&lt;h1&gt;
  
  
  Steps to access EKS in the second account
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;First, make sure you have an IAM role &lt;code&gt;cross-account-role&lt;/code&gt; created in account B, with a trust relationship added for the users in account A that you would like to grant access.&lt;/li&gt;
&lt;li&gt;Once that's done, make sure you have access to the EKS cluster in account B (this is needed in order to edit the cluster's permissions). &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now edit the &lt;code&gt;aws-auth&lt;/code&gt; configmap of that EKS cluster:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit -n kube-system configmaps aws-auth
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Add the following lines under &lt;code&gt;mapRoles&lt;/code&gt; to map the &lt;code&gt;role&lt;/code&gt; created in step 1:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- "groups":
  - "system:masters"
  - "system:nodes"
  "rolearn": "arn:aws:iam::&amp;lt;account-B-id&amp;gt;:role/cross-account-role"
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Now set up a new &lt;code&gt;cross-account&lt;/code&gt; profile for account B in &lt;code&gt;~/.aws/credentials&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[account-B]
role_arn = arn:aws:iam::&amp;lt;account-B-id&amp;gt;:role/cross-account-role
region = us-west-2
source_profile = account-A
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Export this profile in your terminal and add the EKS cluster config:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_PROFILE=account-B
aws eks update-kubeconfig --name name-of-eks-cluster-in-account-B
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Try running kubectl now:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ns
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>eks</category>
      <category>iam</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Canary deployment in Gitlab using nginx-ingress</title>
      <dc:creator>Imran Hayder </dc:creator>
      <pubDate>Wed, 04 Mar 2020 22:55:32 +0000</pubDate>
      <link>https://dev.to/hayderimran7/canary-deployment-in-gitlab-using-nginx-ingress-le4</link>
      <guid>https://dev.to/hayderimran7/canary-deployment-in-gitlab-using-nginx-ingress-le4</guid>
      <description>&lt;h1&gt;
  
  
  Canary deployments in gitlab AutoDevops using nginx-ingress
&lt;/h1&gt;

&lt;p&gt;Gitlab AutoDevops is a great feature of &lt;a href="https://gitlab.com" rel="noopener noreferrer"&gt;Gitlab&lt;/a&gt; which allows us to build, test, and deploy our apps seamlessly to Kubernetes.&lt;br&gt;&lt;br&gt;
This tutorial will not walk through the steps to configure the Kubernetes integration in Gitlab, as that is already &lt;a href="https://docs.gitlab.com/ee/user/project/clusters/" rel="noopener noreferrer"&gt;well documented here&lt;/a&gt;.  &lt;/p&gt;
&lt;h2&gt;
  
  
  AutoDevops Helm Chart setup for canary-deployments
&lt;/h2&gt;

&lt;p&gt;This chart is a modified form of the official &lt;a href="https://gitlab.com/gitlab-org/charts/auto-deploy-app" rel="noopener noreferrer"&gt;auto-deploy-app&lt;/a&gt; chart, intended to achieve traffic routing for &lt;code&gt;canary deployments&lt;/code&gt; using &lt;code&gt;nginx-ingress&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
Assuming you have already configured your Gitlab project with &lt;a href="https://docs.gitlab.com/ee/topics/autodevops/" rel="noopener noreferrer"&gt;AutoDevops&lt;/a&gt;, the next step is to use the modified chart that I created here -&amp;gt; &lt;a href="https://gitlab.com/hayderimran7/auto-deploy-canary-chart" rel="noopener noreferrer"&gt;https://gitlab.com/hayderimran7/auto-deploy-canary-chart&lt;/a&gt; &lt;br&gt;
AutoDevops is completely customizable, so to use this chart instead of the official one, all you need to do is copy its files into the &lt;code&gt;chart&lt;/code&gt; directory of your repo.&lt;br&gt;&lt;br&gt;
Next, you want to enable the canary stage, which can be done simply in your &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include:
  - template: Auto-DevOps.gitlab-ci.yml
variables:
  CANARY_ENABLED: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Traffic Routing in Auto Deploy canary deployments using Nginx-ingress
&lt;/h2&gt;

&lt;p&gt;Traffic routing in a canary deployment is done based on a header value when using &lt;code&gt;nginx-ingress&lt;/code&gt; and AutoDevOps.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;We used the &lt;code&gt;canary by header value&lt;/code&gt; feature of the &lt;code&gt;nginx-ingress&lt;/code&gt; annotations: &lt;a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary" rel="noopener noreferrer"&gt;https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
For that we had to modify the official chart &lt;a href="https://gitlab.com/gitlab-org/charts/auto-deploy-app" rel="noopener noreferrer"&gt;https://gitlab.com/gitlab-org/charts/auto-deploy-app&lt;/a&gt; to add two additional resources that are created during the &lt;code&gt;canary&lt;/code&gt; stage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;canary-ingress&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;canary-service&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
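
&lt;p&gt;For illustration, the &lt;code&gt;canary-ingress&lt;/code&gt; relies on the standard &lt;code&gt;nginx-ingress&lt;/code&gt; canary annotations (the annotation names come from the nginx-ingress docs linked above; this is a sketch, not the chart's exact template):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "canary"
&lt;/code&gt;&lt;/pre&gt;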

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo5xcf0qlpwoplkh3i167.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo5xcf0qlpwoplkh3i167.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;canary-ingress&lt;/code&gt; checks for the &lt;code&gt;canary&lt;/code&gt; header; if it is set, requests are routed to the &lt;code&gt;canary&lt;/code&gt; service backend, which then forwards them to the &lt;code&gt;canary-deployment&lt;/code&gt;.  &lt;/p&gt;

&lt;p&gt;There was a bug in &lt;code&gt;auto-deploy-app&lt;/code&gt; where the &lt;code&gt;production&lt;/code&gt; service was pointing to both the &lt;code&gt;production&lt;/code&gt; deployment and the &lt;code&gt;production-canary&lt;/code&gt; deployment because of its selectors; it is raised here: &lt;a href="https://gitlab.com/gitlab-org/charts/auto-deploy-app/issues/51" rel="noopener noreferrer"&gt;https://gitlab.com/gitlab-org/charts/auto-deploy-app/issues/51&lt;/a&gt;.&lt;/p&gt;
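
&lt;p&gt;In essence, the fix is for the stable &lt;code&gt;production&lt;/code&gt; service's selector to also pin the deployment track, so canary pods no longer match (a sketch only; the label names here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  selector:
    app: my-app        # release name, placeholder
    track: stable      # excludes pods labeled track: canary
&lt;/code&gt;&lt;/pre&gt;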

&lt;h2&gt;
  
  
  Testing in AutoDevops
&lt;/h2&gt;

&lt;p&gt;Simply set &lt;code&gt;CANARY_ENABLED&lt;/code&gt; in your &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; when using AutoDevops.  &lt;br&gt;
Deploy the app using this chart to &lt;code&gt;production&lt;/code&gt; and then to &lt;code&gt;canary&lt;/code&gt;.  &lt;/p&gt;

&lt;p&gt;Now make a request to the service URL:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -H "canary: always" http://&amp;lt;service-url&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will hit the &lt;code&gt;canary-ingress&lt;/code&gt;, which will route to the &lt;code&gt;canary-deployment&lt;/code&gt;; you can verify this in the pod logs. &lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>canary</category>
      <category>kubernetes</category>
      <category>autodevops</category>
    </item>
  </channel>
</rss>
