Let's say you were brought on to inspect a company's web app, built on something common like PHP/Rails/Node/etc.
What's your checklist? What are you looking for?
Top comments (38)
I have a Security Checklist for AWS which you can apply to any cloud computing service. It was too painful to find the original, so I was lazy and linked it from my LinkedIn:
linkedin.com/posts/andrew-wc-brown...
Rails has very sane defaults; the Rails security guide gives you a good idea of where to look:
guides.rubyonrails.org/security.html
OWASP top ten is a useful resource:
owasp.org/index.php/Category:OWASP...
A fun way of thinking of ways to compromise an app/system is looking at Kali Linux's full list of tools for inspiration.
tools.kali.org/tools-listing
Maybe you are running an old version of Postgres? Exploit DB might have some means for me to find a way in:
exploit-db.com/
A bit busy at the moment but a very fun thing to investigate
I have Kali in both a VM and on a bootable USB - but I know very little about how to ACTUALLY use it. 🤷‍♂️
But... Infosec isn't really my main thing
I've honestly never found any good tutorials on it. There is one company which offers certifications for Kali Linux, but at absurd prices, e.g. 1K.
I don't think a tutorial would be valuable. It's just a Linux distro with pentesting tools. The best way is to start with a general security book like Hacking Exposed to understand the main process, and go from there to experiment with the various tools. (They are a little complex, but that's just the Unix philosophy.)
Agreed. There are Kali-specific tutorials out there, but the distro is more about the endless toolkit.
There are - separately - a trillion tutorials on the tools, where they coincidentally use Kali. Those are the better start.
Pentesting has soooo many angles; the tooling and concepts are the jump-off point, not the distro.
I think that's what intrigues me so much. My career is builder/creator. I make a thing that does a thing, and that's fun. ... ... ...but I'm not geared for "include an ampersand and this specific text in your ajax call, or create your avatar in this specific way if the server is running on this specific version of blank, then do this and this and this, and now you have admin privileges"
Young me had no idea how important it was to hide the powered-by response header.
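For anyone wondering what that looks like in practice, here's a minimal sketch assuming a Node/Express app (Express is just one example; most frameworks have an equivalent setting):

```typescript
// Minimal sketch: stop advertising the framework in response headers.
import express from "express";

const app = express();

// Express adds "X-Powered-By: Express" by default; turn it off so casual
// fingerprinting doesn't learn the stack from every response.
app.disable("x-powered-by");

app.get("/", (_req, res) => {
  res.send("ok");
});

app.listen(3000);
```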
CSS hacks blow me away! Just a little tiny bit of user control, and a bad actor can slip in a background-image that points to a remote gif that triggers a script-kiddie rig that does who-knows-what 🤯
I think that every hack out there is the subversion of normal input. You're not creating a new entity; you're just inserting something unexpected which can trigger an abnormal response from an existing one.
I'm also a creator and the only things that were ever interesting for me in pen testing were reverse engineering and programming rootkits. Both because you have to set yourself to learning mode. It's like exploring those portions of the map where it's marked "Here be dragons".
Heh...
It's funny how much things have changed with respect to AWS and "default public" settings: more things default to closed, plus, when you explicitly open things up, you get colored warning-labels in the web consoles (and that's without leveraging any of the advanced threat-detection and auto-remediation tools available in most of the commercial regions).
Helpful that GitHub and GitLab both now do basic checks for credential-y looking things.
As to enforcing MFA ...if you're allowing interactive logins to your production hosts/instances, at all (let alone from random systems around the Internet), you're probably doing other stuff really wrong, too. Which is a good 50,000' nugget of information to gather as you move your audit-tasks from the outside inwards.
It's hard in practice to get engineering teams to stop fiddling with servers directly.
It should be logical that instances should be hands-off, and tasks should be automated through Run Commands or something such as Ansible. It really comes down to stubbornness.
Humans are such a pain
Yeah... One of my main customer's internal groups was flogging their favored CM system, recently. Touting, "but you can extend management from legacy systems to the newer, cloud-hosted (and it's cross-cloud!) systems" (while being able to compose a global dashboard would be a good justification, that group's never really been into hiring the kinds of people you need to have around to get worthwhile reports authored/updated). Ironically, the person that was flogging it was also joking, earlier, about "you could also use it to manage containers, but that would be horribad." All I could think was, "why do I need lifecycle-CM for my cloud-VMs: when it comes time to patch (etc.), we just nuke and re-deploy …and that's for the systems that we don't have running as part of auto-scaling groups (with scheduled scaling-actions configured for frequent replacement)".
It's not just Operations types that are hard to break of habits; the Security teams might be worse. A couple of years ago, they insisted they needed interactive-shell access to run their tools. So, we arranged for that to be doable ...and then they got pissy that system IDs were constantly changing and their databases were filling up with systems that no longer existed. Note, this was the same team that insisted that our containers had to be "whole OS" types of containers, since their scanning tools didn't know how to work with minimized containers in a way that allowed them to check off all the same boxes they were used to checking off with "whole OS" VMs and legacy servers.
I would personally start by auditing server and database access before delving into the code of the app and the database queries themselves. This means I would first audit the ports and the user accounts used for the services and the database itself. This can give big security advantages in a relatively small amount of time.
Some questions to ask yourself here:
These changes will largely mitigate potential infrastructure breaches. Also, if you need development access to the database (I wouldn't recommend it for production, but sometimes it can't be avoided), never accept the "easy access" option, which will add security vulnerabilities. Enforce a certificate-based VPN + SSH tunneling to a local port on the developer's machine.
After this case is covered, I would focus on determining who has access to the DB, and for what reason.
Part of the problem is GDPR-related, but even if we remove that from the equation, does everybody need to have direct access to the database? If so, why? If it is for running "hacks" or changing something the app dashboard currently doesn't support, stop that practice and ensure that crucial things can be done from the application side.
Also, do a review of the load balancer and web server security rules. Disable unused protocols which don't bring in any value. A lot of attack types can be stopped at this level.
After the security of access is resolved, I would focus on more codebase stuff, like ensuring that there are no user account credentials or connection-string passwords in the code, as these can be handled with local environment variables on the host machines. I have seen this more times than I would personally like, and nobody can guarantee that your code will never leak, whether by your own developers or from the git repo. Better safe than sorry.
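A minimal sketch of that pattern, assuming a Node app; the variable names (DATABASE_URL, SESSION_SECRET) are hypothetical, and the point is simply that the values live in the host's environment rather than in the repo:

```typescript
// Credentials come from the environment (or a secrets manager), never from source control.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast at startup instead of silently falling back to a hard-coded default.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export const config = {
  databaseUrl: requireEnv("DATABASE_URL"),
  sessionSecret: requireEnv("SESSION_SECRET"),
};
```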
If you are good on all the mentioned fronts, continue testing the app against the most common attack types (like SQL injection and XSS), and especially focus on testing API responses, with and without authentication. Try to see what data you can get from the app, and try to get data for other users and resources that you don't own inside the app logic. Also, try to determine how long the authentication tokens really last, and whether this makes sense for the app logic.
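A rough sketch of that "with and without authentication" probe, using Node's built-in fetch; the base URL, paths, and token variable are made up for illustration:

```typescript
// Hit an API endpoint with and without credentials and compare the results.
const BASE_URL = "https://app.example.com/api";

async function probe(path: string, token?: string): Promise<number> {
  const res = await fetch(`${BASE_URL}${path}`, {
    headers: token ? { Authorization: `Bearer ${token}` } : {},
  });
  console.log(`${path} ${token ? "with" : "without"} auth -> ${res.status}`);
  return res.status;
}

async function main() {
  // Unauthenticated requests to private resources should never come back 200.
  await probe("/users/123/orders");

  // Authenticated as user A but requesting user B's resource should be
  // rejected (403/404), not answered with someone else's data (classic IDOR).
  await probe("/users/456/orders", process.env.USER_A_TOKEN);
}

main().catch(console.error);
```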
I'm a bit confused about what you mean with credentials and environment variables. Totally agree credentials should not be part of the code, but isn't it a common practice to have them injected into the target container/machine via environment variables, and to prohibit any login to the container/machine?
If you are using a CI/CD pipeline, it is definitely preferred to inject something like this from an encrypted env variable on the CI/CD system and not save anything on the host machine. It has the benefit of added security.
However, a lot of smaller companies don't have a CI/CD pipeline at all. I have seen a lot of admins deploying the app manually via SSH or RDP, by copying files or whatnot. Of course, while this is generally not acceptable for serious systems, we can't run from the fact that it happens, especially for teams that are not as experienced in developing larger systems, or simply don't have any infrastructure experience.
I come from a .NET background, where there are a few solutions even for such cases:
The issue with this approach is that these files are not encrypted, so an infrastructure breach will compromise the app. But, still, it's better than committing them to a git repo.
If you have the opportunity to set up an app-level account, there's still a non-trivial number of sites where you can get a basic idea of the implementation language from the characters you're not allowed to use in passwords. Sadly, many of them are banking sites. :p
When was the last time they practiced a DR drill? Or just start with: when was the last time they verified a database backup?
Identity management is huge too. How many SSH keys are in circulation? Who all has the capability to create keys for PROD servers? They do use different keys for PROD, right?
Don't focus everything on PROD. If they have a DEV server that's running a backup of PROD, then there are potentially hundreds of gigabytes of PII on a server with the most minimal of defenses.
Start with the OWASP Top 10, in that order. By far the most common problems are... well, the most common. :)
I also think that taking inventory of, and classifying the data that each system deals with is really important so you can prioritize your efforts.
I would start with the human layer of the stack. Who has access, what are their permissions, are accounts shared, password requirements, etc.
I suppose look at the OWASP top 10?
someone else more competent
An outside-in pentest. Forget everything you know about the guts and come in from the outside.
This tool is in my tabs - I haven't used it, but it seems to ball up all the ones I do use. I've been curious about it.
latesthackingnews.com/2019/08/04/a...
Bingo. Even something as simple as iteratively running nmap, upping the fingerprinting aggressiveness with each run, can be helpful. This can let you know "are they using any scan-detectors to auto-block script-kiddies" and help you level-set the types of attacks that are likely to work.
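Something like the loop below captures that iterative idea. It's only a sketch: the target hostname is a placeholder, and -sV plus the -T timing templates are standard nmap options, but the exact progression you'd use depends on the engagement.

```typescript
// Run nmap against the same target with progressively noisier settings and
// watch where the results change, or where you start getting blocked.
import { execFileSync } from "node:child_process";

const target = "scanme.example.com"; // placeholder target

for (const timing of ["-T1", "-T3", "-T4"]) {
  console.log(`\n--- nmap -sV ${timing} ${target} ---`);
  const output = execFileSync("nmap", ["-sV", timing, target], {
    encoding: "utf8",
  });
  console.log(output);
}
```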