According to Forbes, the world's data is estimated to hit 163 zettabytes by 2025. As you can imagine, that number keeps growing: from the enormous number of applications being spawned to our favorite puppy videos, data is everywhere. Behind that data is storage, and that storage connects to an operating system. This is where my latest project fits in.
The Problem Statement
How can we make Network Attached Storage (NAS) easily manageable and self-service for our customers?
Context
Within my environment, three pieces of business logic are key. The first is our configuration management tool, Puppet, which manages tens of thousands of servers across several production and non-production environments. The second is Bitbucket, which houses code for the entire company and syncs self-created modules to all Puppet masters on a cron-style basis. Third is the initiative to allow system “devineers” to automate the infrastructure into a completely self-servicing environment.
Within this problem statement, there are several key players. For simplicity, I will break them down by role rather than actual title – storage, systems, and customer.
Background
In this environment, when a dev group (the customer) requests storage, the request is filtered through to the systems team. The systems team then asks the storage team to provision the storage, which today happens through an automated API call or self-service page. Once the storage is provisioned and presented to the servers, the systems team still has to mount it, set permissions, and alert the customer that the new network storage is ready to use. Today that systems work is very manual, and although the processes are in sync, the paperwork and hands-on-keyboard time can push the request out much longer.
Architecture
Solution
Create a containerized API that dynamically scales and manages definition-based files, which are then pushed out through configuration management. The Flask-based Python API reaches out to Bitbucket to grab the latest version of the mounts (for both hosts and hostgroups), applies the modification from the API call, then commits and pushes back to git. From there, Bitbucket syncs with Puppet, and within thirty minutes (or on the next manual Puppet run) the new mount appears on the OS for the customer to use.
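To make the self-service piece concrete, here is a hypothetical client call (the endpoint shape comes from the API table in the Details section below; the hostnames, payload field names, and API address are placeholders of my own, not the project's actual contract):

```python
# Hypothetical self-service request: ask the API to add an NFS mount
# for a host. Host name, payload field names, and API address are
# all placeholders.
import requests

payload = {
    "name": "web01.company.com",        # target host (assumed field name)
    "share_path": "nas01.company.com:/vol/app_data",
    "local_path": "/mnt/app_data",
    "options": "rw,hard,intr",
    "owner": "appuser",
    "group": "appgroup",
}

resp = requests.post("http://nfs-api.company.com:5000/mounts/hosts",
                     json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # the new mount entry, including its generated UUID
```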
Details
API
| HTTP method | URI path | Description |
|---|---|---|
| GET | / | Returns health status to show the node is active |
| GET | /mounts | Retrieves all mounts |
| GET | /mounts/hosts, /mounts/hostgroups | Retrieves just the host (or hostgroup) mounts |
| GET | /mounts/[hosts/hostgroups]/[servername/hostgroupname] | Retrieves a single server's or hostgroup's mounts |
| GET | /mounts/[hosts/hostgroups]/[servername/hostgroupname]/uuid | Retrieves an individual mount point for the given UUID |
| POST/PUT | /mounts/[hosts/hostgroups] | Creates a new NAS mount point for a particular host or hostgroup |
| PATCH | /mounts/[hosts/hostgroups]/[servername/hostgroupname]/uuid | Modifies a NAS mount point for a particular host or hostgroup |
| DELETE | /mounts/[hosts/hostgroups]/[servername/hostgroupname] | Removes the management of a particular host or hostgroup |
| DELETE | /mounts/[hosts/hostgroups]/[servername/hostgroupname]/uuid | Removes the management of a NAS mount point for a particular host or hostgroup |
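As a rough illustration of how these routes could be wired up, here is a minimal Flask sketch. The real main.py lives in the repo; the in-memory store below is a stand-in for the git-backed mounts file, and the field names are assumptions:

```python
from uuid import uuid4
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the mounts.py worker: an in-memory copy of the parsed
# mounts.yml, keyed by scope ("hosts" / "hostgroups") and then by name.
MOUNTS = {"hosts": {}, "hostgroups": {}}

@app.route("/")
def health():
    # Health status so callers can tell the node is active
    return jsonify(status="ok")

@app.route("/mounts")
def all_mounts():
    return jsonify(MOUNTS)

@app.route("/mounts/<scope>/<name>")
def mounts_for(scope, name):
    if scope not in MOUNTS:
        return jsonify(error="unknown scope"), 404
    return jsonify(MOUNTS[scope].get(name, []))

@app.route("/mounts/<scope>", methods=["POST", "PUT"])
def create_mount(scope):
    if scope not in MOUNTS:
        return jsonify(error="unknown scope"), 404
    body = request.get_json()
    mount = {"uuid": str(uuid4()),
             **{k: body[k] for k in
                ("share_path", "local_path", "options", "owner", "group")}}
    MOUNTS[scope].setdefault(body["name"], []).append(mount)
    # The real service would write the YAML and push to Bitbucket here
    return jsonify(mount), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```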
YAML File:
The structure of the YAML defines an array of mount objects that live on a host/hostgroup.
```yaml
nfs_mounts::hosts:
  hostname.company.com:
    - uuid: ...
      share_path: ...
      local_path: ...
      options: ...
      owner: ...
      group: ...
  hostname2.company.com:
    - ...
nfs_mounts::hostgroups:
  Group1:
    - uuid: ...
      share_path: ...
      local_path: ...
      options: ...
      owner: ...
      group: ...
    - ...
  Group2:
    - ...
```
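As a quick sanity check of that structure, a consumer can read it with PyYAML like this (a short sketch; the file path and host name are illustrative):

```python
# Sketch: load the definition and look up one host's mounts.
# File name and host name here are illustrative.
import yaml

with open("mounts.yml") as f:
    data = yaml.safe_load(f)

for mount in data.get("nfs_mounts::hosts", {}).get("web01.company.com", []):
    print(f"{mount['share_path']} -> {mount['local_path']} ({mount['options']})")
```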
Puppet Code
The following snippet loads the YAML, loops through the hosts and hostgroups, and at runtime checks whether the current node matches a host or hostgroup entry.
```puppet
# == Class: nfs_mounts
#
# Base class for NAS mounts
#
class nfs_mounts (
  $hosts,
  $hostgroups,
) {
  include '::stdlib'

  # Load the dynamic YAML data and check whether this node's FQDN
  # matches a host entry
  if is_hash($hosts) and has_key($hosts, $::fqdn) {
    $hosts[$::fqdn].each | $mount | {
      if (is_hash($mount) and
          has_key($mount, 'local_path') and
          has_key($mount, 'share_path') and
          has_key($mount, 'owner') and
          has_key($mount, 'group') and
          has_key($mount, 'options')) {
        # Create the directory to be mounted. The title is derived from
        # the mount's local path so each loop iteration stays unique.
        exec { "mkdir_${mount['local_path']}":
          command => "/bin/mkdir -p ${mount['local_path']}",
          unless  => "/usr/bin/test -d ${mount['local_path']}",
        }
        # Set ownership
        file { $mount['local_path']:
          ensure  => directory,
          owner   => $mount['owner'],
          group   => $mount['group'],
          require => Exec["mkdir_${mount['local_path']}"],
        }
        # Mount the share
        mount { $mount['local_path']:
          ensure   => mounted,
          atboot   => true,
          device   => $mount['share_path'],
          remounts => false,
          fstype   => 'nfs',
          options  => $mount['options'],
          dump     => '0',
          pass     => '0',
          require  => Exec["mkdir_${mount['local_path']}"],
        }
      }
    }
  }

  # Repeat for hostgroup-level mounts
  if is_hash($hostgroups) and has_key($hostgroups, $::hostgroup) {
    $hostgroups[$::hostgroup].each | $mount | {
      # ... same resource declarations as above, keyed on $::hostgroup
    }
  }
}
```
To see the fully scripted API, check out the project over on my GitHub and let me know what you think in the comments below.
The NFS API was designed to manage a mounts.yml file within the nfs_mounts Puppet repo. This Python 3 application is dockerized and pushes to git using SSH keys embedded into the container.
Structure:
- main.py - the Flask application that defines the endpoints
- mounts.py - the behind-the-scenes worker that reads and updates the mounts file (sketched below)
- requirements.txt - running `pip3 install -r requirements.txt` installs the necessary packages to run the app
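Conceptually, the worker's write path looks something like the following pull, edit, commit, push cycle. The library choices (GitPython, PyYAML) and the function and field names here are my assumptions, not necessarily what the repo uses:

```python
# Sketch of the cycle that keeps mounts.yml in git as the single
# source of truth. Libraries and names are illustrative.
import os
from uuid import uuid4

import yaml
from git import Repo


def add_host_mount(repo_dir, hostname, mount):
    repo = Repo(repo_dir)
    repo.remotes.origin.pull()          # start from the latest definition

    path = os.path.join(repo_dir, "mounts.yml")
    with open(path) as f:
        data = yaml.safe_load(f) or {}

    mount = {"uuid": str(uuid4()), **mount}
    data.setdefault("nfs_mounts::hosts", {}).setdefault(hostname, []).append(mount)

    with open(path, "w") as f:
        yaml.safe_dump(data, f, default_flow_style=False)

    repo.index.add(["mounts.yml"])
    repo.index.commit(f"Add mount {mount['uuid']} to {hostname}")
    repo.remotes.origin.push()          # Bitbucket then syncs to Puppet
    return mount
```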
Precursor
Since it runs as a container, the main purpose of this project is to manage a git-based hiera definition. Therefore, you must have an NFS mounts project in git that utilizes a hiera-based definition.
Install Instructions
- Clone the project
- Modify the git user profile configuration in the Dockerfile
- Build the image:

```bash
docker build -t nfs_api:latest .
```

- Run the container:

```bash
docker run \
  -e REPO_URI=<full ssh git (ex. git@github.com:someuser/NFS-MOUNTS.git)> \
  -v /path/to/ssh/on/host:/root/.ssh \
  -e GIT_DIRECTORY=/path/to/store/repo \
  -p 5000:5000 \
  nfs_api:latest
```

- Interact with the API on your…