Every layer of your technology stack can be vulnerable to some sort of exploit, and each one is a disaster waiting to happen unless measures are taken to secure your most important resources. With any web or cloud application, reducing the attack surface of your sensitive data becomes more and more important. Below I’ve listed how security can be implemented at each of these layers.
Using a cloud provider reduces physical risk
Fundamentally, your physical hardware is at risk. An attacker who gets access to the hard drives that contain sensitive data, or to the in-memory processes that must handle that data in an unencrypted state, bypasses every other safeguard. Most office buildings do not provide the level of physical security necessary to prevent this. Cloud providers such as AWS and GCP go through rigorous measures to secure their data centers, providing security by design. It’s a degree of protection that hopefully no office needs, but which is necessary for your data.
Encrypting data at rest protects against physical access
Even when your application data lives in a cloud provider’s data center, it isn’t automatically secure. There are still people with the correct security clearance who are allowed to access the data center and the devices where your sensitive data is located. Additionally, if a physical drive is swapped out for any reason, the removed drive may still contain your data. The next layer of security is to ensure all data written to disk is encrypted. DynamoDB in AWS and the equivalent services in Google Cloud provide encryption at rest by default; making sure those features are enabled for your stored data prevents vulnerability at this layer.
DynamoDB encryption can be enabled by setting the associated properties in CloudFormation:
"DynamoDBTable": {
  "Type": "AWS::DynamoDB::Table",
  "Properties": {
    "SSESpecification": {
      "SSEEnabled": true
    }
  }
}
Encryption in transit secures communication
While encryption at rest works well during periods of inactivity, as soon as the data is in use it no longer provides enough protection. The transmission of data between services, whether from the cloud provider to an application service or between application services, must also be secured. The standard way to do this is TLS over HTTPS, which uses asymmetric encryption to establish the connection. Whenever accessing data or transmitting it to another location, use HTTPS endpoints and associated libraries, which will automatically encrypt the data so only the intended endpoint can inspect the contents. Transmission of data without HTTPS results in plain-text, human-readable output, and any process between the application service and the intended target is able to read that information. That includes anything on the same network, the ISP, proxy services, caching endpoints, as well as any other technology running on the same physical device. Creating HTTPS certs is free and usually automated by your cloud provider.
Application layer encryption
Often your configuration does not live in a database, but instead inside the application itself, or in the source code. Most of that configuration may not need to be encrypted, however there are usually API access keys associated with most services. For access to the cloud provider, always use cloud provider service roles to ensure that only the right services have access to the data. Access keys for other services should be encrypted and stored in source control for the service. Rather than exposing the API keys in environment variables, which are vulnerable even while the application isn’t running, encrypt the API keys and put them in source control. Then the only time the API keys are exposed is in the memory of the running service, and never anywhere else.
With KMS, encrypt the access key with:
$ aws kms encrypt --key-id AWS-KEY-ID --plaintext API_KEY
And then decrypt the key at runtime using:
// Using the AWS SDK for JavaScript (v2) KMS client
const encryptedSecret = 'ENCRYPTED_KEY';
const clientSecret = await kmsClient.decrypt({
    CiphertextBlob: Buffer.from(encryptedSecret, 'base64')
  }).promise()
  .then(data => data.Plaintext.toString());
The result is that the decrypted key is only ever available in one place: the memory of the running service. A secondary benefit is being able to verify the key configuration during development without actually needing explicit access to the plaintext value.
Cloud security policies secure cloud resources
Using IAM and other security policies to restrict access to specific resources is the next layer. The steps above limit the unencrypted data to correctly authorized cloud users, but without specifying who those users are, anyone would have access. Define policies on the service account clients or service roles, as well as on cloud developers, to ensure the least number of entities have access:
"LambdaRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "Policies": [{
      "PolicyName": "DynamoDbTableAccess",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": "dynamodb:*",
          "Resource": { "Fn::Sub": "arn:aws:dynamodb:*:*:table/TABLE" }
        }]
      }
    }]
  }
}
Restricting any other entities from having access ensures that only the correct service has access to the data.
Identifying the right user protects users
Having implemented the above, the cloud resources are now secure from vulnerabilities that come via direct cloud access, however that doesn’t help our users very much. They need access to their data through your service interfaces, not through the cloud interfaces, and cloud provider access controls can’t help with that, so additional layers are necessary. How does the application service know who the user is? The user can tell you “I am user X”, but you can’t trust that, because anyone can say it. The answer is identity management, also called authentication. Via an OAuth2.0 flow you’ll end up with JWTs in request headers that your service can verify. The service should use a federated identity provider; there is a long list of “Sign in with …” options, and any one of them that is OIDC compliant is a good start: Auth0, Okta, Google, etc. Your application will get a JWT that verifies the user is who they say they are.
try {
  const identity = jwtManager.verify(token, publicKey, { algorithms: ['RS256'], issuer });
} catch (error) {
  throw Error('Unauthorized');
}
If the user supplies an invalid token then your application service will reject the request to access the resource.
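That rejection can be wrapped into a single authentication step that runs before any resource handler. This is a minimal sketch in plain Node; the `authenticate` helper and the injected `verifyToken` function stand in for whatever JWT library and framework middleware your service actually uses.

```javascript
// Extract the Bearer token from the request and verify it before any
// resource handler runs. `verifyToken` is assumed to throw on an invalid
// token, like the jwtManager.verify call above.
function authenticate(req, verifyToken) {
  const header = (req.headers && req.headers['authorization']) || '';
  const token = header.startsWith('Bearer ') ? header.slice('Bearer '.length) : null;
  if (!token) {
    return { status: 401, error: 'Unauthorized' };
  }
  try {
    // The verified claims identify the caller for the rest of the request.
    const identity = verifyToken(token);
    return { status: 200, identity };
  } catch (error) {
    return { status: 401, error: 'Unauthorized' };
  }
}
```

Every request either produces a verified identity or a 401, so no handler ever runs with an unverified caller.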
Granting the right access protects your service interfaces
Just like your service needs the right cloud policy to access the cloud database, the application users need the right application policy to access the service. Knowing who the user is isn’t good enough; the service also needs to know whether the user has the right permissions for the right resource. To do this you need an authorization provider. These verify the JWT provided by the identity provider and then determine whether, combined with the resource, the action the user is taking should be authorized. Is user X allowed to delete resource R? Identity providers don’t answer that; authorization providers do. Getting the right authorization provider is about finding a fit for your application. Whether the application needs role-based access control (RBAC), attribute-based access control (ABAC), or policy-based access control (PBAC), Authress is built explicitly for this.
const client = new AuthressClient('https://api.authress.io');
await client.authorizeUser('USERID', 'RESOURCE_URI', 'ACTION');
A secure cloud service is the result
Now that all these security best practices are in place, the follow-up is to ensure there aren’t any holes and that no additional vulnerabilities remain. Perhaps a secret was publicly shared at one point; even when every other practice is being followed, auditing becomes important. Other follow-ups include tracking access to application resources. Which of your service’s resources are being frequently accessed, and how? Which users are using them, and how often? There are a lot of steps and precautions to take, so anything that reduces the burden becomes crucial.
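Answering those questions means recording each authorized access somewhere queryable. The sketch below keeps counts in process memory purely for illustration; a real service would emit these records to structured logs or a metrics pipeline instead, and the function names are hypothetical.

```javascript
// Illustrative access audit: count each (user, resource, action) access.
// In production, emit these records to structured logs or metrics rather
// than holding them in process memory.
const accessLog = new Map();

function recordAccess(userId, resource, action) {
  const key = `${userId}:${resource}:${action}`;
  accessLog.set(key, (accessLog.get(key) || 0) + 1);
}

function accessCount(userId, resource, action) {
  return accessLog.get(`${userId}:${resource}:${action}`) || 0;
}
```

Even a simple record like this makes it possible to spot a credential being used from an unexpected place or at an unexpected rate.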
Frequently Asked Questions
How can I manage user/entity access so that not everyone who has access to the build cluster or production cluster has access to the credentials?
If your goal is to use secrets in production while limiting access as much as possible, the best approach is to encrypt the secrets and store them in source control, so that they can only be decrypted at run time.
Shouldn't I use environment variables to store the credentials?
Using the Jenkins credentials manager or any CI/CD credentials manager gives access to the CI/CD system and to anyone who has access to that CI/CD system, which is two more places than you want.
Come join our Community and discuss this and other security related topics!