Sid Bhanushali

DevSecOps: Automate & Secure

DevSecOps is the practice of integrating a security-first mindset and methodologies into traditional DevOps CI/CD environments. Here are key best practices for organizations seeking to implement DevSecOps.

Getting code out the door quickly, securely, and efficiently is the name of the game. In a CI/CD environment, speed remains the main tenet, but you also need to be aware of the security required to bulk up your pipeline. Without automation, security practices become a major bottleneck in the pipeline, and organizations that rely on speed stop treating them as a priority. For security to be part of this workflow, it has to be automated; that is the only way it stays relevant in an environment that prioritizes speed.

Security controls and tests need to be embedded early and everywhere in the development lifecycle, and they need to happen in an automated fashion, because the culture of software deployment is changing rapidly. Some organizations push new versions of code into production almost 50 times per day for a single app. Beyond keeping pace, adding automated security analysis to CI platforms limits the introduction of vulnerable code earlier in the software development lifecycle.

However, running automated scans against your entire application source code every day can consume a lot of time and undermine your ability to keep up with daily changes. One option is to scan only recent or new code changes.
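As one illustrative way to do this, a CI step can feed only the changed files to a scanner. Bandit, a Python source analyzer, stands in here for whichever tool fits your stack, and the branch name is an assumption:

# Scan only files changed since branching from main, not the whole tree.
git diff --name-only origin/main...HEAD -- '*.py' \
    | xargs --no-run-if-empty bandit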

A growing number of test-automation tools with a range of capabilities have become available for security analysis and testing throughout the software development lifecycle, from source-code analysis through integration and post-deployment monitoring. For example, nmap and Metasploit, tools that probe servers and networks for vulnerabilities and known exploits, can be integrated into that automation.
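For instance, a scheduled vulnerability sweep with nmap might look like the following (the target range and output path are placeholders):

# Service/version detection plus the NSE "vuln" script category,
# with results written to a file for later review.
nmap -sV --script vuln -oN /var/log/nmap-vuln-scan.txt 10.0.0.0/24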

Cron Jobs

However, all this depends on the type and frequency of the task to be automated. There are certain tasks that need to run on an interval basis, such as backing up databases, updating the system, performing periodic reboots, and so on.

Such tasks in Linux are referred to as cron jobs. Cron jobs automate repetitive, and sometimes mundane, tasks. Cron is a daemon that lets you schedule these jobs, which are then carried out at the specified intervals.

A crontab file, also known as a cron table, is a simple text file containing rules or commands that specify when a task runs. It hosts a set of rules that are analyzed and performed by the cron daemon. The system crontab file is located at /etc/crontab and can only be accessed and edited by the root user.

The basic syntax for a crontab file comprises 5 columns represented by asterisks followed by the command to be carried out. This format can also be represented as shown below:

[minute: 0-59] [hour: 0-23] [day of month: 1-31] [month: 1-12] [day of week: 0-6] /directory/command

The first five fields represent numbers that define when and how often the command runs; a space separates each field. Let's see how to apply this on a Linux system.
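A few illustrative entries (the script paths are hypothetical):

*/15 * * * * /usr/local/bin/healthcheck.sh    # every 15 minutes
30 3 * * 0 /usr/local/bin/apply-updates.sh    # 3:30 am every Sunday
@reboot /usr/local/bin/warmup.sh              # once at every boot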

To create or edit a cron job as the root user, run the command:

 crontab -e

Scripts executed by cron should begin with a shebang header, as shown:

#!/bin/bash

This indicates the shell you are using, which in this case is the Bash shell. Next, specify the interval at which you want the task to run using the cron format. For example, say we want to run a backup script once a month, when the system isn't actively in use:

0 2 1 * * /root/backup.sh
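The referenced /root/backup.sh can be any executable script. As a minimal sketch (the source directory, archive location, and retention count are assumptions for illustration):

#!/bin/bash
# Archive /var/www into a dated tarball under /var/backups.
set -euo pipefail

SRC="/var/www"
DEST="/var/backups"
STAMP="$(date +%Y-%m-%d)"

mkdir -p "$DEST"
tar -czf "$DEST/backup-$STAMP.tar.gz" "$SRC"

# Keep only the six most recent archives.
ls -1t "$DEST"/backup-*.tar.gz | tail -n +7 | xargs -r rm --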

The entry runs at 2:00 am on the first of every month. Cron is a useful tool built into Linux systems for automating specific tasks or scripts. However, Jenkins is a much more comprehensive automation and build tool that is more commonly used across the delivery lifecycle. Let's see how we can implement best practices when using Jenkins.

Securing Jenkins

Another all-in-one automation tool is Jenkins, an open-source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery across every stage of the CI/CD process. Since Jenkins is a server-based tool, it is important to secure the Jenkins instance and handle the users and credentials within it properly. Jenkins does not come preconfigured with default security checks, so when creating users in Jenkins, it's important to differentiate the access control that each user has.

Another important point is to be mindful of credentials and where they are stored. Using the Jenkins credentials provider, users can bind their credentials to variables and use them in their Jenkinsfile so as not to expose sensitive data. Here is an example of a credentials screen in Jenkins that implements credentials binding.

[Image: Jenkins credentials binding configuration]
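As a rough sketch of how a bound credential is consumed in a Jenkinsfile (the credential ID 'deploy-creds' and the deploy command are hypothetical):

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Expose the stored secret as environment variables,
                // scoped to this block only and masked in the build log.
                withCredentials([usernamePassword(credentialsId: 'deploy-creds',
                                                  usernameVariable: 'DEPLOY_USER',
                                                  passwordVariable: 'DEPLOY_PASS')]) {
                    // Single quotes: the shell expands the variables, so the
                    // secret never passes through Groovy string interpolation.
                    sh 'curl -fsS -u "$DEPLOY_USER:$DEPLOY_PASS" https://example.com/deploy'
                }
            }
        }
    }
}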

Securing Linux Servers

The heart of any pipeline is a Linux system. Since cron jobs need a Linux system to run on, it's important to consider the security of the Linux systems themselves that will be in charge of automation. Securing the Linux system itself is a critical step in DevSecOps.

Disable Root Login
The first step in securing the system is securing the way people log into it in the first place. Disabling root login is essential to strengthen your server security: keeping root login enabled presents a security risk and diminishes the safety of the resources hosted on the server, since attackers can exploit that one credential to access everything. Instead, create a new user account and assign it elevated (sudo) permissions, so that you still have a way of installing packages and performing other admin actions on the server.
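On most distributions this comes down to one line in the OpenSSH daemon configuration (the service may be named ssh rather than sshd on Debian/Ubuntu):

# Refuse direct root logins over SSH, then reload the daemon.
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl reload sshd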

User logins through public/private key pairs
One suggestion is good password hygiene, meaning a decent mix of numbers, letters, and special characters to protect against password cracking. However, this gets messy to enforce, and passwords can ultimately be cracked given enough computing power. A more secure way to grant access is through public/private key pairs for users.

Each user generates (on their local machine) their keypair using

ssh-keygen -t rsa 

Then they need to put the contents of their public key (id_rsa.pub) into ~/.ssh/authorized_keys on the server being logged into.
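The ssh-copy-id helper automates that step (the user and hostname are placeholders):

# Appends the local public key to ~/.ssh/authorized_keys on the server.
ssh-copy-id user@server.example.com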

Key Rotation and/or Two-Factor Authentication
It is important to rotate private/public key pairs, as well as any other passwords or credentials needed to access a machine, to limit the damage if a key or password leaks. Two-factor authentication (2FA) can be used in conjunction with SSH (Secure Shell) to require a second credential when logging into the server. To set up 2FA on a Debian server and Debian-derived distributions, install the libpam-google-authenticator package. The package can display a QR code or produce a secret token that can be added to a software authentication device, such as Google Authenticator.
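A rough sketch of that setup on a Debian-family system (the PAM and sshd edits are shown as comments):

# Install the PAM module, then generate a per-user secret/QR code.
sudo apt install libpam-google-authenticator
google-authenticator

# To enable it for SSH logins:
#   /etc/pam.d/sshd       ->  auth required pam_google_authenticator.so
#   /etc/ssh/sshd_config  ->  ChallengeResponseAuthentication yes
#                             (KbdInteractiveAuthentication on newer OpenSSH)
sudo systemctl reload sshd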

Server-side Antivirus / IDS
External security and defense software should always be an extra layer, not the only layer. Many routers and firewalls come with a preconfigured antivirus, IDS, or some form of one. The disadvantage is that this puts the burden on one sole piece of hardware. If a phishing email with a malicious payload slips through the cracks, an IDS that simply monitors the external perimeter is not much help. Once someone is in, they can make as much noise as they want, since all the guards are patrolling the outside.

A solution to this could be a standalone IDS that sits on the internal network as part of a layered defense, providing visibility within the network and around the important assets and internal files. It can be configured to protect sensitive data without interfering with legitimate network traffic.

Disk encryption
You can secure your data by configuring disk encryption to encrypt whole disks (including removable media), partitions, and even individual files. There are many ways to achieve this; one universal option on Linux systems is the cryptsetup package. As always, make sure root login is disabled and that only trusted users hold sudo privileges.
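A minimal sketch with cryptsetup and LUKS, assuming a spare partition /dev/sdb1 (formatting destroys any existing data on it):

# Initialize LUKS encryption on the partition, then unlock it.
sudo cryptsetup luksFormat /dev/sdb1
sudo cryptsetup open /dev/sdb1 securedata     # appears as /dev/mapper/securedata

# Create a filesystem inside the encrypted container and mount it.
sudo mkfs.ext4 /dev/mapper/securedata
sudo mkdir -p /mnt/secure
sudo mount /dev/mapper/securedata /mnt/secure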

Volume-level disk encryption helps protect users and customers from data theft or even accidental loss. An encrypted hard disk makes it very hard for attackers to access or read any data on it.

Securing EC2

In most cases, the Linux instance running the automation will live on a cloud compute instance, say EC2 for example. One benefit of EC2 is the diversity and flexibility it offers; a tradeoff can be security. There are steps that can be taken to secure an EC2 instance.

Security Groups
Security groups are the fundamental network security layer of AWS. They control what inbound and outbound traffic is allowed to reach the EC2 machine, opening and closing network ports for the protocols and services you run.

For example, since the Jenkins server's default port is 8080, you have to expose that port in the security group. You can run Jenkins on a different port, but that port must then be exposed instead.
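With the AWS CLI, that rule might look like this (the group ID and source CIDR are placeholders; scoping access to a single admin address is deliberate):

# Allow inbound TCP 8080 (Jenkins default) from one admin IP only.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8080 \
    --cidr 203.0.113.10/32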


VPC
Controlling the network traffic to your EC2 instance is crucial to maintaining its security. Configure your VPC and use private subnets for instances that should not be reachable directly from the internet. A VPC is your own network in the cloud: each AWS region contains availability zones, and a VPC is a private network within a region that spans all the availability zones / physical data centers in that region.

Subnets are sub-networks inside the VPC; each spans a single availability zone and is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting. AWS provides two types of subnets: public subnets, which the internet can reach, and private subnets, which are hidden from the internet.

Subnets could be compared to the different rooms in your apartment. They are containers within your VPC that segment off a slice of the CIDR block you define in your VPC. CIDR notation is a compact representation of an IP address and its associated network mask.

For example, 192.168.100.14/24 represents the IP address 192.168.100.14 inside the network 192.168.100.0/24, whose 24-bit network prefix corresponds to the subnet mask 255.255.255.0.

Subnets allow you to give different access rules and place resources in different containers where those rules should apply. You wouldn't put a big open window on your bathroom's shower wall for everyone to see sensitive things, much like you wouldn't put a database with secret information in a public subnet that allows any and all network traffic. You might put that database in a private subnet (i.e., a locked closet). Anything from outside the VPC can connect to a public subnet, but only resources inside the VPC can reach a private subnet.
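As a sketch with the AWS CLI (the CIDR blocks are illustrative, and vpc-0abc1234 stands in for the ID returned by the first call):

# Create a VPC with a /16 block, then carve out two /24 subnets.
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.2.0/24

A subnet only becomes "public" once its route table points at an internet gateway; without that route it stays private.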

IAM
Another sure way to manage the security of your EC2 instance is through IAM, which is where users and their credentials are managed. By using IAM with Amazon EC2, you can control whether users in your organization can perform a task on specific EC2 instances. It's important to lock away your access keys and treat them like credit card or Social Security numbers. And just as you wouldn't share one Social Security number across every person, you shouldn't share the one root credential across every user.

It's important to create individual users and grant each one the least privilege needed. Policy actions are classified as List, Read, Write, Permissions management, or Tagging. For example, you can choose actions from the List and Read access levels to grant read-only access to your users.
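For instance, with the AWS CLI (the user name is a placeholder; AmazonEC2ReadOnlyAccess is an AWS-managed policy covering List/Read actions):

# Create an individual user rather than sharing the root account.
aws iam create-user --user-name audit-reader

# Attach only read-level EC2 permissions.
aws iam attach-user-policy \
    --user-name audit-reader \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess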
