DEV Community

Ben Halpern

If you were tasked to conduct a security audit on a server/database-backed web app, where would you start?

Let's say you were brought on to inspect a company's web app, built on something common like PHP/Rails/Node/etc.

What's your checklist? What are you looking for?

Top comments (38)

Andrew Brown πŸ‡¨πŸ‡¦ • Edited

I have a Security Checklist for AWS which you can apply to any cloud computing service. It was too painful for me to find the original, so I was lazy and linked it from my LinkedIn:

linkedin.com/posts/andrew-wc-brown...

Rails has very sane defaults; the Rails Security guide gives you a good idea of where to look:

guides.rubyonrails.org/security.html

OWASP top ten is a useful resource:
owasp.org/index.php/Category:OWASP...

A fun way of thinking of ways to compromise an app/system is looking at Kali Linux's full list of tools for inspiration.

tools.kali.org/tools-listing

Maybe you are running an old version of Postgres? Exploit DB might have some means for me to find a way in:
exploit-db.com/

  • Are you using dependabot?
  • Are you using a tool that searches for CVEs? e.g. Snyk
  • Have you tried sniffing for credentials that may be in the git history?
  • Are you enforcing MFA? Are you enforcing signing of git commits?
  • Do you have tests for all your endpoints? If not, that is a good place to look to abuse access to records I should not have access to
  • Are you hosted on AWS? If not, I bet lots of your resources have public-facing addresses. Are you using Sidekiq? That means you're using Redis; maybe Redis is public-facing and you have not kept it up to date, and I can gain access via an exploit.
  • I would run Metasploit against your servers
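The git-history point above can be sketched in a few lines. This is a minimal, illustrative secret scanner (the regexes and the `scan_history.py` name are my own assumptions, not a real tool); dedicated scanners like gitleaks or trufflehog ship far more rules and should be preferred in a real audit:

```python
import re

# Illustrative patterns only -- real scanners ship many more rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                             # AWS access key ID shape
    re.compile(r"(?i)(?:password|passwd|secret)\s*[:=]\s*\S+"),  # generic credential assignments
    re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def find_secrets(text):
    """Return every substring of `text` matching a known secret pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

# In practice you would feed it the whole history, e.g.:
#   git log -p --all | python scan_history.py
sample = 'db_password = "hunter2"\nAKIAIOSFODNN7EXAMPLE'
print(find_secrets(sample))  # both lines are flagged
```

The point is that anything that ever touched a commit is in the history forever, even if it was later deleted, so the scan has to cover `--all` refs, not just the current tree.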

A bit busy at the moment but a very fun thing to investigate

Kevin Ard

I have Kali in a vm and a bootable usb both - but I know very little about how to ACTUALLY use it. πŸ€·β€β™€οΈ

But... Infosec isn't really my main thing

Andrew Brown πŸ‡¨πŸ‡¦

I've never honestly found any good tutorials on it. There is one company that offers certifications for Kali Linux, but at absurd prices, e.g. $1K.

Holy-Elie ScaΓ―de

I don't think a tutorial would be valuable. It's just a Linux distro with pentesting tools. The best way is to start with a general security book like Hacking Exposed to understand the main process, and go from there to experiment with the various tools. (They are a little complex, but that's just the Unix philosophy.)

Kevin Ard

Agreed. There are kali-specific tutors out there, but the distro is more about the endless toolkit.

There are - separately - a trillion tutors on the tools, where they coincidentally use Kali. Those are the better start.

Pentest has soooo many angles, the tooling and concepts are the jump-off point, not the distro.

I think that's what intrigues me so much. My career is builder/creator. I make a thing that does a thing, and that's fun. ... ... ...but I'm not geared for "include an ampersand and this specific text is your ajax call, or create you avatar in this specific way if the server is running on this specific version of blank, then do this and this and this, and now you have admin privileges"

Young me had no idea how important it was to hide the powered-by response header.

Kevin Ard

CSS hacks blow me away! Just a little tiny bit of user control, and a bad actor can slip-in a background-image that points to a remote gif that triggers a script-kiddie rig that does who-knows-what 🀯

Holy-Elie ScaΓ―de

I think that every hack out there is the subversion of normal input. You're not creating a new entity, you're just inserting something unexpected which can trigger an abnormal response from an existing one.
I'm also a creator and the only things that were ever interesting for me in pen testing were reverse engineering and programming rootkits. Both because you have to set yourself to learning mode. It's like exploring those portions of the map where it's marked "Here be dragons".

Thomas H Jones II

Heh...

It's funny how much things have changed with respect to AWS and "default public" settings: more things default-closed, plus, when you explicitly open things up, you get colored warning-labels in the web consoles (and that's without leveraging any of the advanced threat-detection and auto-remediation tools available in most of the commercial regions).

Helpful that GitHub and GitLab both now do basic checks for credential-y looking things.

As to enforcing MFA ...if you're allowing interactive logins to your production hosts/instances, at all (let alone from random systems around the Internet), you're probably doing other stuff really wrong, too. Which is a good 50,000' nugget of information to gather as you move your audit-tasks from the outside inwards.

Andrew Brown πŸ‡¨πŸ‡¦ • Edited

It's hard in practice to get engineering teams to stop fiddling with servers directly.
It should be logical that instances should be hands-off, and tasks should be automated through Run Commands or something such as Ansible. It really comes down to stubbornness.

Humans are such a pain

Thomas H Jones II

Yeah... One of my main customer's internal groups was flogging their favored CM system, recently. Touting, "but you can extend management from legacy systems to the newer, cloud-hosted (and it's cross-cloud!) systems" (while being able to compose a global dashboard would be a good justification, that group's never really been into hiring the kinds of people you need to have around to get worthwhile reports authored/updated). Ironically, the person that was flogging it was also joking, earlier, about "you could also use it to manage containers, but that would be horribad." All I could think was, "why do I need lifecycle-CM for my cloud-VMs: when it comes time to patch (etc.), we just nuke and re-deploy …and that's for the systems that we don't have running as part of auto-scaling groups (with scheduled scaling-actions configured for frequent replacement)".

It's not just Operations types that are hard to break of habits; the Security teams might be worse. A couple years ago, they insisted they needed interactive-shell access to run their tools. So, we arranged for that to be doable ...and then they got pissy that system-ids were constantly changing and their databases were filling up with systems that no longer existed. Note, this was the same team that insisted that our containers had to be "whole OS" types of containers, since their scanning tools didn't know how to work with minimized containers in a way that allowed them to check off all the same boxes they were used to checking off with "whole OS" VMs and legacy servers.

Vlatko Vlahek

I would personally start by auditing server and database access before delving into the code of the app and the database queries themselves. This means I would first audit ports and the user accounts used for services and the database itself. This can give big security gains in a relatively short amount of time.

Some questions to ask yourself here:

  • Is the access restricted enough by having only required ports open to the public?
  • Is this true both for the server, and for the incoming access control ruleset (on AWS, DigitalOcean, ...)?
  • Do the web server, app, and database run on specific user accounts which are restricted to only what they need in order to run properly (in terms of disk access on the machine, running specific services etc.)?

These changes will largely mitigate potential infrastructure breaches. Also, if you need development access to the database (I wouldn't recommend for production, but sometimes it can't be avoided), never accept the case of "easy access" which will add security vulnerabilities. Enforce certificate-based VPN + SSH tunneling to a local port on the developer's machine.
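The "only required ports open" check can be automated with a few lines. A minimal sketch, assuming you just want a reachability test per port (a real audit would use nmap; the local listener here only exists to give the demo something deterministic to hit):

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within `timeout`."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# Demo against a listener we control, so the result is deterministic.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))           # port 0: the OS assigns a free port
listener.listen(1)
port = listener.getsockname()[1]
print(is_port_open("127.0.0.1", port))    # True -- the port is reachable
listener.close()
```

Run the same check from an outside host against your public IP across the ports you expect to be closed; anything that answers and isn't on your "required" list is a finding.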

After this case is covered, I would focus on determining who has access to the DB, and for what reason.

A part of the problem is GDPR-related, but even if we remove that from the equation, does everybody need to have direct access to the database? If so, why? If it is for running "hacks" or changing something that the app dashboard currently doesn't support, stop that practice and ensure that crucial things can be done from the application side.

Also, do a review of the load balancer and web server security rules. Disable unused protocols which don't bring in any value. A lot of attack types can be stopped at this level.

After the security of access is resolved, I would focus on more codebase-related stuff, like ensuring that there are no user accounts, connection strings, or passwords in the code, as these can be moved into local environment variables on the host machines. I have seen this more times than I would personally like, and nobody can guarantee that your code will never leak, whether through your own developers or the git repo. Better safe than sorry.

If you are good on all the mentioned fronts, continue testing the app against the most common attack types (like SQL injection and XSS), and especially focus on testing API responses, with and without authentication. Try to see what data you can get from the app, and try to get data for other users and resources that you don't own inside the app logic. Also, try to determine how long the authentication tokens really last, and whether this makes sense for the app logic.
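The "try to get data for other users" probe can be sketched as a small harness. Everything here is hypothetical (the `/api/records/{id}` path and both fake backends are mine, not from any real app); it only shows the shape of the check, with `fetch` standing in for a real HTTP client:

```python
# `fetch(path, token)` stands in for a real HTTP call, e.g.
# requests.get(url, headers={"Authorization": f"Bearer {token}"}).status_code
def readable_ids(fetch, resource_ids, token):
    """Return every record ID the given token can read (HTTP 200)."""
    return [rid for rid in resource_ids if fetch(f"/api/records/{rid}", token) == 200]

# Toy backend WITH an ownership check: each record belongs to one user.
owners = {1: "alice", 2: "bob", 3: "carol"}
def checked_fetch(path, token):
    rid = int(path.rsplit("/", 1)[1])
    return 200 if owners.get(rid) == token else 403

# Toy backend WITHOUT an ownership check: authentication but no authorization.
def broken_fetch(path, token):
    return 200

print(readable_ids(checked_fetch, [1, 2, 3], "alice"))  # [1] -- only her own record
print(readable_ids(broken_fetch, [1, 2, 3], "alice"))   # [1, 2, 3] -- an IDOR finding
```

Against the real API, you would enumerate a range of IDs with a low-privilege token and flag every 200 for a resource that token shouldn't own.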

Bernhard Streit

I'm a bit confused about what you mean with credentials and environment variables. Totally agree credentials should not be part of the code, but isn't it a common practice to have them injected into the target container/machine via environment variables, and to prohibit any login to the container/machine?

Vlatko Vlahek

If you are using a CI/CD pipeline, it is definitely preferred to inject something like this from an encrypted env variable on the CI/CD system and not save anything on the host machine. It has the benefit of added security.

However, a lot of smaller companies don't have a CI/CD pipeline at all. I have seen a lot of admins deploying the app manually via SSH or RDP, by copying files or whatnot. Of course, while this is generally not acceptable for serious systems, we can't run from the fact that it happens, especially for teams that are not as experienced in developing larger systems, or simply don't have any infrastructure experience.

I come from a .NET background, where there are a few solutions even for such cases:

  • Secrets manager
  • appsettings.{environment}.json files

The issue with this approach is that these files are not encrypted, so an infrastructure breach will compromise the app. But, still, it's better than committing them to a git repo.
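Language aside, the environment-variable pattern looks the same in any stack. A minimal Python sketch (the `DATABASE_URL` name and the connection string are illustrative; in practice the variable is injected by the host or the CI/CD system, not set in code):

```python
import os

def database_url():
    """Read the connection string from the environment; never hardcode a fallback."""
    url = os.environ.get("DATABASE_URL")
    if url is None:
        # Failing loudly beats silently falling back to a default baked into the code.
        raise RuntimeError("DATABASE_URL is not set")
    return url

# Simulating what the deploy tooling would do; don't do this in application code.
os.environ["DATABASE_URL"] = "postgres://app_user:s3cret@db:5432/app"
print(database_url())
```

The deliberate design choice is the hard failure when the variable is missing: a hardcoded fallback is exactly the credential-in-code problem this pattern exists to remove.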

Thomas H Jones II

If you have the opportunity to set up an app-level account, there's still a non-trivial number of sites where you can get a basic idea of the implementation language from the characters you're not allowed to use in passwords. Sadly, many of them are banking sites. :p

Scott Simontis

When was the last time they practiced a DR drill? Or just start with: when was the last time they verified a database backup?

Identity management is huge too. How many SSH keys are in circulation? Who all has the capability to create keys to PROD servers? They do use different keys for PROD, right?

Don't focus entirely on PROD. If they have a DEV server that's running a backup of PROD, then there are potentially hundreds of gigabytes of PII on a server with the most minimal of defenses.

Joe Zack

Start with the OWASP top 10, in that order. By far, the most common problems are... well, the most common. :)

I also think that taking inventory of, and classifying the data that each system deals with is really important so you can prioritize your efforts.

Simon Massey • Edited
  • What's the security policy for the code? If it's on a public git service (eg GitHub), is 2FA enabled for all contributors?
  • is git branch protection enabled with at least one reviewer? That way a worm writing a rogue dependency into package.json is less likely to slip something in
  • how are secrets handled, such as database credentials? are they git-secret encrypted and strong?
  • does the configured database user have too many privileges (uses ”admin” or ”postgres” accounts)? it should be a non-default account granted the minimum privileges possible
  • are the default accounts of the database secured with strong passwords.
  • are there regular backups of the database? are they encrypted before moving off the host to a secure location? is the restore of backups regularly tested?
  • is there a CI/CD pipeline and does it have security vulnerability scanning enabled (snyk.io) and is the build failed for anything other than low severity issues
  • are low severity security-scan issues regularly reviewed and fixed by default? for those that won't be fixed, is the reason documented and peer reviewed (eg ”we don't use that logic”)? are those reasons periodically reviewed to check they are still valid?
  • are deployments against git-tagged versions of the code? have branch protection and githooks been set to prevent updating tags or forced rewrites of history in git that could hide the history of a backdoor being added into the code base?
  • are git release tags annotated tags and gpg signed (’git tag -s’) so that we know exactly who said that version of the code is good to release, in a way that cannot be faked?
  • is the OS or base container layer up to date and regularly patched? stale Dockerfiles and images with cached out-of-date base image layers are sadly typical. the release build should run with flags to disable layer caching to force download of the latest patches from upstream
  • does the code run on the current long term support version(s) of the web technology it uses? is there a policy to promptly upgrade to the next long term support version and deprecate support for expired versions?
  • is the software frequently rebuilt and redeployed to pick up newly discovered security issues? (if the code is only pushed every few months, known security bugs are not fixed for months)
  • is HTTPS enforced and is the cert properly protected (such as git-secret encrypted)
  • is 2FA enforced for access to all infrastructure (eg AWS account)
  • is the app appropriately firewalled (only specific ports enabled)
  • are docker images running as root (almost everything on hub.docker.com does)? you should only run docker images as a regular user. s2i images typically do this properly if you need to move away from root images.
  • is there a ”security” label for bugs in the issue tracker and are issues with that label prioritised with a low response time to triage them rather than just ignoring it as not a fun feature to build.
  • did the app write its own authentication and authorization logic? if it did, that's an epic fail.
  • does the app allow the forcing of 2FA for privileged accounts
  • is all input to the app sanitised (library to scrub SQL injection)
  • is static code analysis being applied to the code base and the build failed for issues. most security bugs are simple code errors and code that passes a lint is likely to have less bugs so less holes
  • is there peer review of all pull requests. most security bugs are simple code errors and code that is clear and understood by two devs is less likely to have bugs that lead to security errors
  • has penetration testing been performed against the application
  • has the codebase been reviewed to ensure that role based access control is properly enforced
  • are cookies handled correctly such as marked as secure so only sent over HTTPS
  • are assets loaded from public CDN which may be a route to inject attacks. use your own CDN and ensure you checked the hash for the files you put there against the official releases or built them yourself
  • check CORS correctly configured and XSRF protection in place
  • are passwords correctly stretched and salted or better yet use SRP6a authentication protocol
  • are users emailed on any changes to their account
  • are strong passwords enforced? check part of the hash of passwords against the Troy Hunt ”have I been pwned” API of half a billion compromised passwords.
  • is the login screen tested against the most popular password managers so that users can use randomised strong passwords
  • are the users offered recovery codes or password recovery using shamir secret sharing scheme
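The "stretched and salted" bullet can be illustrated with the Python stdlib alone. A sketch only: in production you would reach for a maintained library implementing bcrypt, scrypt, or Argon2, and the iteration count here is just a commonly cited ballpark figure, not a recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to your hardware and current guidance

def hash_password(password, salt=None):
    """Salted, stretched hash using PBKDF2-HMAC-SHA256 from the stdlib."""
    salt = salt if salt is not None else os.urandom(16)  # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; the salt is not secret

def verify_password(password, salt, expected):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("hunter2", salt, stored))                       # False
```

The salt defeats rainbow tables, the iteration count makes brute force expensive, and `hmac.compare_digest` avoids leaking information through comparison timing.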
Ryan Smith • Edited

I would start with the human layer of the stack. Who has access, what are their permissions, are accounts shared, password requirements, etc.

Adam Crockett πŸŒ€

I suppose look at the OWASP top 10?

Casey Brooks

what are you looking for?

someone else more competent πŸ˜‚

Kevin Ard

An outside-in pentest. Forget everything you know about the guts and come in from the outside.

  1. This is how malicious actors would approach it
  2. For those purposes, it's functionally meaningless to audit access levels - bad actors never had access to begin with... They create it.

This tool is in my tabs - I haven't used it, but it seems to ball up all the ones I do use. I've been curious about it.

latesthackingnews.com/2019/08/04/a...

Thomas H Jones II

Bingo. Even something as simple as iteratively running nmap, upping the fingerprinting aggressiveness with each run, can be helpful. This can let you know "are they using any scan-detectors to auto-block script-kiddies" and help you level-set the types of attacks that are likely to work.