If you were tasked to conduct a security audit on a server/database-backed web app, where would you start?

Ben Halpern on September 17, 2019

Let's say you were brought on to inspect a company's web app, built on something common like PHP/Rails/Node/etc.

What's your check list, what are you looking for?

 

I have a Security Checklist for AWS which you can apply to any cloud computing service. It was too painful to find the original, so I was lazy and linked it from my LinkedIn:

linkedin.com/posts/andrew-wc-brown...

Rails has very sane defaults, Rails Security outline gives you a good idea where to look:

guides.rubyonrails.org/security.html

OWASP top ten is a useful resource:
owasp.org/index.php/Category:OWASP...

A fun way of thinking of ways to compromise an app/system is to look at Kali Linux's full list of tools for inspiration.

tools.kali.org/tools-listing

Maybe you are running an old version of Postgres? Exploit DB might have some means for me to find a way in:
exploit-db.com/

  • Are you using dependabot?
  • Are you using a tool that searches for CVEs? e.g. Snyk
  • Have you tried sniffing for credentials that may be in the git history?
  • Are you enforcing MFA? Are you enforcing signing of git commits?
  • Do you have tests for all your endpoints? If not, that is a good place to look for ways to access records I should not have access to
  • Are you hosted on AWS? If not, I bet lots of your resources have public-facing addresses. Are you using Sidekiq? That means you're using Redis; maybe Redis is public-facing and you have not kept it up to date, and I can gain access via an exploit.
  • I would run Metasploit against your servers

A bit busy at the moment but a very fun thing to investigate
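The "sniffing for credentials in the git history" point from the list above can be scripted. Here's a rough Python sketch — the regexes are loose heuristics I made up for illustration (except the AWS `AKIA` prefix, which is real), and dedicated tools like gitleaks or trufflehog are far more thorough:

```python
import re
import subprocess

# Credential-shaped patterns. AWS access-key IDs really do start with
# "AKIA"; the others are loose heuristics, not an exhaustive list.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(
        r"(?i)(?:password|secret|api[_-]?key)\s*[=:]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def find_secrets(text):
    """Return (pattern_name, matched_text) pairs found in a blob of text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

def scan_git_history(repo_path="."):
    """Run every diff in the repo's history through the patterns."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout
    return find_secrets(log)
```

Remember that a secret found anywhere in history is burned even if it was later removed from HEAD — rotate it, don't just delete it.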

 

Heh...

It's funny how much things have changed with respect to AWS and "default public" settings: more things default-closed, plus, when you explicitly open things up, you get colored warning-labels in the web consoles (and that's without leveraging any of the advanced threat-detection and auto-remediation tools available in most of the commercial regions).

Helpful that GitHub and GitLab both now do basic checks for credential-y looking things.

As to enforcing MFA ...if you're allowing interactive logins to your production hosts/instances, at all (let alone from random systems around the Internet), you're probably doing other stuff really wrong, too. Which is a good 50,000' nugget of information to gather as you move your audit-tasks from the outside inwards.

 

It's hard in practice to get engineering teams to stop fiddling with servers directly.
It should be logical that instances should be hands-off, and tasks should be automated through Run Commands or something such as Ansible. It really comes down to stubbornness.

Humans are such a pain

Yeah... One of my main customer's internal groups was flogging their favored CM system, recently. Touting, "but you can extend management from legacy systems to the newer, cloud-hosted (and it's cross-cloud!) systems" (while being able to compose a global dashboard would be a good justification, that group's never really been into hiring the kinds of people you need to have around to get worthwhile reports authored/updated). Ironically, the person that was flogging it was also joking, earlier, about "you could also use it to manage containers, but that would be horribad." All I could think was, "why do I need lifecycle-CM for my cloud-VMs: when it comes time to patch (etc.), we just nuke and re-deploy …and that's for the systems that we don't have running as part of auto-scaling groups (with scheduled scaling-actions configured for frequent replacement)".

It's not just Operations types that are hard to break of habits; the Security teams might be worse. A couple years ago, they insisted they needed interactive-shell access to run their tools. So, we arranged for that to be doable ...and then they got pissy that system-ids were constantly changing and their databases were filling up with systems that no longer existed. Note, this was the same team that insisted that our containers had to be "whole OS" types of containers, since their scanning tools didn't know how to work with minimized containers in a way that allowed them to check off all the same boxes they were used to checking off with "whole OS" VMs and legacy servers.

 

I have Kali in a vm and a bootable usb both - but I know very little about how to ACTUALLY use it. 🤷‍♀️

But... Infosec isn't really my main thing

 

I've never honestly found any good tutorials on it. There is one company that offers certifications for Kali Linux, but at absurd prices, e.g. $1K.

I don't think a tutorial will be valuable. It's just a Linux distro with pentesting tools. The best way is to start with a general security book like Hacking Exposed to understand the main process, and go from there to experiment with the various tools. (They are a little complex, but that's just the Unix philosophy.)

Agreed. There are kali-specific tutors out there, but the distro is more about the endless toolkit.

There are - separately - a trillion tutors on the tools, where they coincidentally use Kali. Those are the better start.

Pentest has soooo many angles, the tooling and concepts are the jump-off point, not the distro.

I think that's what intrigues me so much. My career is builder/creator. I make a thing that does a thing, and that's fun. ... ... ...but I'm not geared for "include an ampersand and this specific text is your ajax call, or create you avatar in this specific way if the server is running on this specific version of blank, then do this and this and this, and now you have admin privileges"

Young me had no idea how important it was to hide the powered-by response header.

CSS hacks blow me away! Just a little tiny bit of user control, and a bad actor can slip-in a background-image that points to a remote gif that triggers a script-kiddie rig that does who-knows-what 🤯

I think that every hack out there is the subversion of normal input. You're not creating a new entity; you're just inserting something unexpected which can trigger an abnormal response from an existing one.
I'm also a creator, and the only things that were ever interesting for me in pen testing were reverse engineering and programming rootkits. Both because you have to set yourself to learning mode. It's like exploring those portions of the map where it's marked "Here be dragons".

 

I would personally start by auditing server and database access, before delving into the code of the app and database queries themselves. This means I would personally first audit ports and used user accounts for services and the database itself. This can give big security advantages in a relatively low amount of time.

Some questions to ask yourself here:

  • Is the access restricted enough by having only required ports open to the public?
  • Is this true both for the server and for the incoming access-control ruleset (on AWS, DigitalOcean, ...)?
  • Do the web server, app, and database run on specific user accounts which are restricted to only what they need in order to run properly (in terms of disk access on the machine, running specific services etc.)?

These changes will largely mitigate potential infrastructure breaches. Also, if you need development access to the database (I wouldn't recommend for production, but sometimes it can't be avoided), never accept the case of "easy access" which will add security vulnerabilities. Enforce certificate-based VPN + SSH tunneling to a local port on the developer's machine.
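A quick way to sanity-check the "only required ports open" question from the outside is a plain TCP connect scan. Here's a minimal Python sketch — the port list is just a common-suspects sample, and nmap does this far better:

```python
import socket

def check_exposed_ports(host, ports, timeout=1.0):
    """Try a TCP connect to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Services that usually should NOT be reachable from the public internet:
# 5432 (Postgres), 3306 (MySQL), 6379 (Redis), 27017 (MongoDB), 9200 (Elasticsearch)
RISKY_PORTS = [5432, 3306, 6379, 27017, 9200]
```

If any of those answer from a public address, that finding alone justifies the audit.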

After this is covered, I would focus on determining who has access to the DB and for what reason.

Part of the problem is GDPR-related, but even if we remove that from the equation, does everybody need to have direct access to the database? If so, why? If it is for running "hacks" or changing something that the app dashboard currently doesn't support, stop that practice and ensure that crucial things can be done from the application side.

Also, do a review of the load balancer and web server security rules. Disable unused protocols which don't bring in any value. A lot of attack types can be stopped at this level.

After the security of access is resolved, I would focus on more codebase stuff, like ensuring that there are no user accounts or connection-string passwords in the code, as these can be replaced with local environment variables on the host machines. I have seen this more times than I would personally like, and nobody can guarantee that your code will never leak, whether through your own developers or the git repo. Better safe than sorry.

If you are good on all the mentioned fronts, continue testing the app against the most common attack types (like SQL injection and XSS), and especially focus on testing API responses, with and without authentication. Try to see what data you can get from the app, and try to get data for other users and resources that you don't own inside the app logic. Also, try to determine how long the authentication tokens really last, and whether this makes sense for the app logic.
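Probing API responses with and without authentication can be scripted. A rough Python sketch — the endpoint URL and token are whatever you're auditing, and the "leaky" heuristic is just a starting point, not a definitive test:

```python
import urllib.error
import urllib.request

def fetch_status(url, token=None):
    """GET the URL, optionally with a bearer token; return the HTTP status."""
    req = urllib.request.Request(url)
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def is_leaky(anon_status, authed_status):
    """An endpoint that serves the same 2xx to anonymous and authenticated
    callers is probably not checking authorization at all."""
    return 200 <= anon_status < 300 and anon_status == authed_status

def audit_endpoint(url, token):
    anon, authed = fetch_status(url), fetch_status(url, token)
    return {"anonymous": anon, "authenticated": authed,
            "leaky": is_leaky(anon, authed)}
```

The same comparison is worth repeating with *someone else's* token against *your* resources, which is how you catch the "access records I don't own" class of bugs.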

 

I'm a bit confused about what you mean with credentials and environment variables. Totally agree credentials should not be part of the code, but isn't it a common practice to have them injected into the target container/machine via environment variables, and to prohibit any login to the container/machine?

 

If you are using a CI/CD pipeline, it is definitely preferred to inject something like this from an encrypted env variable on the CI/CD system and not save anything on the host machine. It has the benefit of added security.

However, a lot of smaller companies don't have a CI/CD pipeline at all. I have seen a lot of admins deploying the app manually via SSH or RDP, by copying files or whatnot. Of course, while this is generally not acceptable for serious systems, we can't run from the fact that it happens, especially for teams that are not as experienced in developing larger systems, or simply don't have any infrastructure experience.

I come from a .NET background, where there are a few solutions even for such cases:

  • Secrets manager
  • appsettings.{environment}.json files

The issue with this approach is that these files are not encrypted, so an infrastructure breach will compromise the app. But, still, it's better than committing them to a git repo.
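For the environment-variable approach, here's a minimal Python sketch (the variable names are hypothetical; adjust to your deployment). Failing loudly on a missing variable beats silently falling back to a default:

```python
import os

def require_env(name):
    """Read a required credential from the environment; fail loudly if it's
    absent so a misconfigured deploy can't limp along with a default."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

def database_url():
    # DB_USER / DB_PASSWORD / DB_HOST are made-up names for illustration.
    user = require_env("DB_USER")
    password = require_env("DB_PASSWORD")
    host = os.environ.get("DB_HOST", "localhost")
    return f"postgresql://{user}:{password}@{host}/app"
```

The same pattern works regardless of stack; the .NET secrets manager and `appsettings` files mentioned above serve the same "keep it out of the repo" goal.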

 

If it's a question of where I would start, in the first hour:

  • Start learning how the request/response is implemented: Is it REST? Query-based? Some home-grown scheme? How is escaping done?
  • Start learning how they manage sessions: Session in Cookie? Session in LocalStorage? Session in URL?
  • Start learning what types of errors I can get it to throw: Can I get a 400-series? Any 500-series?
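That first-hour error probing can be scripted. A sketch in Python — the probe paths are hypothetical examples, and the actual HTTP fetching is left out so the classification logic stands on its own:

```python
# Hypothetical probes: a benign baseline plus deliberately odd variants.
PROBES = [
    ("baseline", "/api/items/1"),
    ("non_numeric_id", "/api/items/abc"),
    ("oversized_id", "/api/items/" + "9" * 4000),
    ("traversal", "/api/items/../../etc/passwd"),
    ("quote", "/api/items/1'"),
]

def classify(status):
    """Bucket an HTTP status into the classes the checklist above asks about."""
    if 200 <= status < 300:
        return "2xx"
    if 400 <= status < 500:
        return "4xx"
    if 500 <= status < 600:
        return "5xx"
    return "other"

def summarize(results):
    """results: list of (probe_name, status) pairs -> probes grouped by class.
    Any probe landing in 5xx means unhandled exceptions escape, which is
    exactly the kind of thread worth pulling."""
    summary = {}
    for name, status in results:
        summary.setdefault(classify(status), []).append(name)
    return summary
```

Clean 4xx responses across the board suggest a team that validates input; a scattering of 5xx suggests the opposite.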

Once I get a good baseline of their public API, I try to imagine the type of development team that built the site:

  • Was the team made up of experts or amateurs?
  • Did the team understand web security concepts or not?
  • Was the team senior or junior?
  • Was the team rushed for time or not?
  • Was there a lot of group-think in their decision-making process?

After that, I know where to go next. For example, if I see sloppy session management, can trigger 500 errors, and think they're a junior team, I'll start looking for errors related to manipulating session data.

If I've got nothing after an hour, I usually give up. Usually, however, there's some thread to pull.

 

If you have the opportunity to set up an app-level account, there's still a non-trivial number of sites where you can get a basic idea of the implementation language from the characters you're not allowed to use in passwords. Sadly, many of them are banking sites. :p

 

When was the last time they practiced a DR drill? Or just start with: when was the last time they verified a database backup?

Identity management is huge too. How many SSH keys are in circulation? Who all has the capability to create keys to PROD servers. They do use different keys for PROD, right?

Don't focus all on PROD. If they have a DEV server that's running a backup of PROD, then there are potentially hundreds of gigabytes of PII on a server with the most minimal of defenses.

 
  • What's the security policy for the code? If it's on a public git service (eg GitHub), is 2FA enabled for all contributors
  • is git branch protection enabled with at least one reviewer. That way a worm writing a rogue dependency into package.json is less likely to slip something in
  • how are secrets handled, such as database credentials. are they git-secret encrypted and strong
  • does the configured database user have too many privileges (uses "admin" or "postgres" accounts)? it should be a non-default account granted the minimum privileges possible
  • are the default accounts of the database secured with strong passwords.
  • are there regular backups of the database. are they encrypted before moving off the host to a secure location. is the restore of backups regularly tested
  • is there a CI/CD pipeline and does it have security vulnerability scanning enabled (snyk.io) and is the build failed for anything other than low severity issues
  • are low severity security-scan issues regularly reviewed and fixed by default. for those that won't be fixed, is the reason documented and peer reviewed (eg "we don't use that logic"). are those reasons periodically reviewed to check they are still valid
  • are deployments made against git-tagged versions of the code. have branch protection and githooks been set to prevent updating tags or forced rewrites of history in git that could hide a backdoor being added to the code base
  • are git release tags annotated tags and gpg signed ’git tag -s’ so that we know exactly who said that version of the code is good to release in a way that cannot be faked
  • is the OS or base container layer up to date and regularly patched. stale Dockerfiles and images with cached out-of-date base image layers are sadly typical. the release build should run with flags to disable layer caching to force download of the latest patches from upstream
  • does the code run on the current long term support version(s) of the web technology it uses. is there a policy to promptly upgrade to the next long term support version and deprecate support for expired versions
  • is the software frequently rebuilt and redeployed to pick up newly discovered security issues (if the code is only pushed every few months, known security bugs are not fixed for months)
  • is HTTPS enforced and is the cert properly protected (such as git-secret encrypted)
  • is 2FA enforced for access to all infrastructure (eg AWS account)
  • is the app appropriately firewalled (only specific ports enabled)
  • are docker images running as root (mostly everything on hub.docker.com does)? you should only run docker images as a regular user. s2i images typically do this properly if you need to move away from root images
  • is there a "security" label for bugs in the issue tracker, and are issues with that label prioritised with a low response time to triage them, rather than being ignored as not a fun feature to build
  • did the app write its own authentication and authorization logic. if it did, that's an epic fail
  • does the app allow the forcing of 2FA for privileged accounts
  • is all input to the app sanitised (library to scrub SQL injection)
  • is static code analysis being applied to the code base and the build failed for issues. most security bugs are simple code errors and code that passes a lint is likely to have less bugs so less holes
  • is there peer review of all pull requests. most security bugs are simple code errors and code that is clear and understood by two devs is less likely to have bugs that lead to security errors
  • has penetration testing been performed against the application
  • has the codebase been reviewed to ensure that role based access control is properly enforced
  • are cookies handled correctly such as marked as secure so only sent over HTTPS
  • are assets loaded from a public CDN, which may be a route to inject attacks. use your own CDN and ensure you check the hash of the files you put there against the official releases, or build them yourself
  • check CORS correctly configured and XSRF protection in place
  • are passwords correctly stretched and salted or better yet use SRP6a authentication protocol
  • are users emailed on any changes to their account
  • are strong passwords enforced? check part of the hash of passwords against the Troy Hunt "have I been pwned" API of half a billion compromised passwords
  • is the login screen tested against the most popular password managers so that users can use randomised strong passwords
  • are the users offered recovery codes or password recovery using Shamir's secret sharing scheme
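On the "passwords correctly stretched and salted" point, here's a minimal sketch using Python's stdlib PBKDF2. The 600,000-iteration figure follows OWASP's current guidance for PBKDF2-HMAC-SHA256 at the time of writing; tune it to your hardware:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Salted + stretched hash: a random per-user salt defeats rainbow
    tables, and a high iteration count makes brute force expensive."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=600_000):
    _, digest = hash_password(password, salt, iterations)
    # Constant-time compare avoids timing side channels.
    return hmac.compare_digest(digest, expected)
```

Argon2 or bcrypt (via third-party libraries) are generally preferred over PBKDF2 where available, but the salt + stretch + constant-time-compare shape is the same.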
 

Start with the OWASP top 10, in that order. By far the most common problems are... well, the most common. :)

I also think that taking inventory of, and classifying the data that each system deals with is really important so you can prioritize your efforts.

 

I would check if there's any session checking / auth verification.

Most big non-tech companies rely too much on VPNs and don't invest money in security, thinking that it would not be possible for someone to actually access an app without getting inside the network.

 

An outside-in pentest. Forget everything you know about the guts and come in from the outside.

  1. This is how malicious actors would approach it
  2. For those purposes, it's functionally meaningless to audit access levels - bad actors never had access to begin with... They create it.

This tool is in my tabs - I haven't used it, but it seems to ball up all the ones I do use. I've been curious about it.

latesthackingnews.com/2019/08/04/a...

 

Bingo. Even something as simple as iteratively running nmap, upping the fingerprinting aggressiveness with each run, can be helpful. This can let you know "are they using any scan-detectors to auto-block script-kiddies" and help you level-set the types of attacks that are likely to work.

 

OWASP has a great web app testing methodology guide to walk you through a bunch of checks: owasp.org/index.php/Web_Applicatio...

These are kind of the minimum, a tester would want to expand based on what behavior exists in the application, but that guide is a great baseline.

 

Also, business logic inconsistencies and access control misconfigurations (or failures) are something I prioritize, as these are the kind of things an automated scanner or tool is not really able to find.

 

I would start with the human layer of the stack. Who has access, what are their permissions, are accounts shared, password requirements, etc.

 

what are you looking for?

someone else more competent 😂

 
 

Lots of professional answers, but I would start by looking at how many keys are stored in plain text in config files on the backend server.

I would also check how many of these are used by the frontend. Some developers leave keys embedded in HTML, like hidden inputs, and sometimes you can get the key by inspecting network traffic with the dev tools in the browser, if the app's frontend uses a 3rd-party API but tries to hide the key by uglifying the JS. Many forget to keep the keys on the back end and act as a middleware.

Then classics like sql injection, xss, those kind of things.

Later I would call in security experts to check for real threats, rather than common mistakes.
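For the SQL injection classic, the fix is parameterized queries. A self-contained sketch using Python's built-in sqlite3 (the table and payload are made up for illustration, but the same placeholder principle applies to any driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

def find_user_unsafe(name):
    # VULNERABLE: string interpolation lets input become SQL syntax.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # SAFE: the driver passes the value separately from the query text,
    # so it can never be parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
# find_user_unsafe(payload) returns every row; find_user_safe(payload) returns none.
```

Grepping a codebase for string-built queries (f-strings, concatenation, interpolation next to `execute`) is one of the fastest audit wins there is.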

 

As @andrew_brown pointed out, OWASP and Kali have a lot of amazing tools. I would recommend every company use ZAP from OWASP as a good starting point. It has a big list of automated tests, which of course you need to verify afterwards manually or with other tools, but it does warn about many things.

owasp.org/index.php/OWASP_Zed_Atta...

 

The responses posted provide good information. The only thing I would add is referencing the OWASP ASVS (application security verification standard) as it describes the security that should be built into the application - input handling, session management, use of secure ciphers, privileged command execution etc. This is the link to OWASP ASVS:

owasp.org/images/3/33/OWASP_Applic...

The other item I didn't see mentioned (I may have missed it) is proper implementation of TLS.

Additional considerations include application and database configuration and secure configuration of the execution venue. Running the application on AWS EC2 instances versus GCP GKE (intentionally drawing a stark contrast) brings different security considerations.

 
  1. Make a precise list of the valuable things you have to protect (data, access to the app for your clients, your own processing power, your bandwidth).
  2. For each point in the previous list, try to list all the possible breaches you can think of
  3. For each breach, evaluate the probability and the eventual cost of an exploit of the breach
  4. Sort the list by probability * cost and list possible counter measures for the most important possible issues.
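Step 4 is just an expected-loss sort. A toy sketch in Python — the threat names, probabilities, and costs are invented for illustration:

```python
def prioritize(threats):
    """threats: list of (name, probability, cost) tuples.
    Sort by expected loss (probability * cost), highest first."""
    return sorted(threats, key=lambda t: t[1] * t[2], reverse=True)

# Hypothetical example entries (annual probability, cost in dollars).
threats = [
    ("SQL injection on login form", 0.3, 500_000),   # expected loss 150k
    ("DDoS exhausting bandwidth", 0.6, 20_000),      # expected loss 12k
    ("Stolen unencrypted backup", 0.1, 1_000_000),   # expected loss 100k
]
```

The numbers will always be rough guesses, but even rough guesses stop you from spending a week hardening something whose worst case is a nuisance.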
 

You should check:

  • Web language security patches
  • Web framework security patches
  • Web application passes OWASP
  • Web server access
  • Database server access
  • Database users
  • Permissions for the users on the webserver
  • Contents of the data in the database (you could easily find spam if the web layer is insecure)
  • If the servers are accessible from the web without a VPN or proper security (AWS has a good direction on that)
  • If the servers OS has the latest security patches
  • CVEs

For the majority, you will be dealing, very likely, with outdated servers and unauthorized access or improper permissions for user access.

 

I would start from the back of the stack and work towards the front end. The theory being that locking down the DB operations and access will give the most benefit vs time spent as the source is secure. Then I would start fanning out to any services that interact with the data source and make sure they are secure. Lastly moving on to any clients that interact with those services.

 

Assuming I just wanted to get a quick one day check of things, and not a thorough security review, this is what I would look at:

There are free SAST and DAST tools available that could be useful to get a baseline done pretty quickly. For example the OWASP ZAP project.

I would think about logging, alerting, and other APM stuff. If they don't know what kind of errors and issues are happening then they probably couldn't detect a hack. If they are logging things, then what are they paying attention to?

Next up would be dependency management and other general coding practices. Is there a code review process, are there quality gates? How are defects resolved? Who makes sure they don't handle personal data incorrectly? Poorly written code is insecure code. Also who evaluates their security currently?

I would look at application boundaries, particularly where data comes from the front end. The application-boundary stuff is partially covered by the SAST and DAST tools, but it's probably where most applications have their OWASP Top 10 issues.

Finally I would take a quick look at authentication and authorization. Are the APIs open to the public? How do they handle user logins? Time-wise it would take a lot of effort to review this (and I just don't have the skill to do so).

 
 

Focus on the human element of the organisation structure, plus try to grab the laptop of a middle manager to see if you are able to gain access to it.

 

I always start with an inventory, then check patch levels of everything. Then verify backups and logging.

 
 
 

The first thing to check is if they’re using the default admin account on the database and if it is still using the default password or something easily crackable. You’d be surprised...
