Let's say you were brought on to inspect a company's web app, built on something common like PHP/Rails/Node/etc.
What's your checklist? What are you looking for?
I have a Security Checklist for AWS which you can apply to any Cloud Computing service, it was too painful for me to find the original so I was lazy and linked it from my LinkedIn:
linkedin.com/posts/andrew-wc-brown...
Rails has very sane defaults, Rails Security outline gives you a good idea where to look:
guides.rubyonrails.org/security.html
OWASP top ten is a useful resource:
owasp.org/index.php/Category:OWASP...
A fun way of thinking of ways to compromise an app/system is to look at Kali Linux's full list of tools for inspiration.
tools.kali.org/tools-listing
Maybe you are running an old version of Postgres? Exploit DB might have some means for me to find a way in:
exploit-db.com/
A bit busy at the moment but a very fun thing to investigate
I have Kali in a vm and a bootable usb both - but I know very little about how to ACTUALLY use it. 🤷♀️
But... Infosec isn't really my main thing
I've never honestly found any good tutorials on it. There is one company that offers certifications for Kali Linux, but at absurd prices, e.g. $1K.
I don't think a tutorial will be valuable. It's just a Linux distro with pentesting tools. The best way is to start with a general security book like Hacking Exposed to understand the main process, and go from there to experiment with the various tools. (They are a little complex, but that's just the Unix philosophy.)
Agreed. There are Kali-specific tutorials out there, but the distro is more about the endless toolkit.
There are, separately, a trillion tutorials on the tools, where they coincidentally use Kali. Those are the better start.
Pentesting has soooo many angles; the tooling and concepts are the jump-off point, not the distro.
I think that's what intrigues me so much. My career is builder/creator. I make a thing that does a thing, and that's fun. ...but I'm not geared for "include an ampersand and this specific text in your ajax call, or create your avatar in this specific way if the server is running this specific version of blank, then do this and this and this, and now you have admin privileges"
Young me had no idea how important it was to hide the powered-by response header.
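For anyone curious what that header leaks in practice, here's a minimal sketch. The headers are simulated with printf so it runs offline; against a live app you'd pipe `curl -sI https://your-app` into the same grep (the version strings below are made up for illustration):

```shell
# Simulated response headers, standing in for `curl -sI https://example.com`.
# A leaked X-Powered-By tells an attacker exactly which exploits to try.
printf 'HTTP/1.1 200 OK\nServer: nginx/1.14.0\nX-Powered-By: PHP/5.6.40\n' \
  | grep -iE '^(server|x-powered-by):'
```

If those show up on your own app: Express can drop the header with `app.disable('x-powered-by')`, and PHP has `expose_php = Off` in php.ini.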
CSS hacks blow me away! Just a little tiny bit of user control, and a bad actor can slip-in a background-image that points to a remote gif that triggers a script-kiddie rig that does who-knows-what 🤯
I think that every hack out there is a subversion of normal input. You're not creating a new entity; you're just inserting something unexpected that can trigger an abnormal response from an existing one.
I'm also a creator and the only things that were ever interesting for me in pen testing were reverse engineering and programming rootkits. Both because you have to set yourself to learning mode. It's like exploring those portions of the map where it's marked "Here be dragons".
Heh...
It's funny how much things have changed with respect to AWS and "default public" settings: more things default-closed, plus, when you explicitly open things up, you get colored warning-labels in the web consoles (and that's without leveraging any of the advanced threat-detection and auto-remediation tools available in most of the commercial regions).
Helpful that GitHub and GitLab both now do basic checks for credential-y looking things.
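For a quick local sweep before relying on those platform checks, a grep along these lines catches the obvious offenders. The regex is a rough illustration, not a substitute for a dedicated scanner like gitleaks or git-secrets; the demo writes to a scratch directory so it's safe to run:

```shell
# Create a throwaway directory with a fake config file, then sweep it for
# credential-looking assignments. On a real repo you'd grep the repo root.
demo=$(mktemp -d)
printf 'DB_PASSWORD=hunter2\napi_key: "abc123"\n' > "$demo/config.env"
grep -riEn '(api[_-]?key|secret|password)[[:space:]]*[:=]' "$demo"
rm -rf "$demo"
```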
As to enforcing MFA ...if you're allowing interactive logins to your production hosts/instances, at all (let alone from random systems around the Internet), you're probably doing other stuff really wrong, too. Which is a good 50,000' nugget of information to gather as you move your audit-tasks from the outside inwards.
It's hard in practice to get engineering teams to stop fiddling with servers directly.
It should be logical that instances should be hands-off, and tasks should be automated through Run Commands or something such as Ansible. It really comes down to stubbornness.
Humans are such a pain
Yeah... One of my main customer's internal groups was flogging their favored CM system, recently. Touting, "but you can extend management from legacy systems to the newer, cloud-hosted (and it's cross-cloud!) systems" (while being able to compose a global dashboard would be a good justification, that group's never really been into hiring the kinds of people you need to have around to get worthwhile reports authored/updated). Ironically, the person that was flogging it was also joking, earlier, about "you could also use it to manage containers, but that would be horribad." All I could think was, "why do I need lifecycle-CM for my cloud-VMs: when it comes time to patch (etc.), we just nuke and re-deploy …and that's for the systems that we don't have running as part of auto-scaling groups (with scheduled scaling-actions configured for frequent replacement)".
It's not just Operations types that are hard to break of habits, the Security teams might be worse. A couple years ago, they insisted they needed interactive-shell access to run their tools. So, we arrange for that to be doable ...and then they got pissy that system-ids were constantly changing and their databases were filling up with systems that no longer existed. Note, this was the same team that insisted that our containers had to be "whole OS" types of containers, since their scanning tools didn't know how to work with minimized containers in a way that allowed them to check off all the same boxes they were used to checking off with "whole OS" VMs and legacy servers.
I would personally start by auditing server and database access before delving into the code of the app and the database queries themselves. That means first auditing open ports and the user accounts used by services and by the database itself. This can give big security wins in a relatively low amount of time.
Some questions to ask yourself here:
These changes will largely mitigate potential infrastructure breaches. Also, if you need development access to the database (I wouldn't recommend for production, but sometimes it can't be avoided), never accept the case of "easy access" which will add security vulnerabilities. Enforce certificate-based VPN + SSH tunneling to a local port on the developer's machine.
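A sketch of what that tunnel looks like. The hostnames and user are hypothetical, and the command is printed rather than executed so the example runs anywhere; in real use you'd drop the echo:

```shell
# Forward local port 5433 to the internal DB's Postgres port through a
# bastion host, after the VPN's certificate auth has let you in.
DB_HOST="db.internal"
BASTION="deploy@bastion.example.com"
echo "ssh -N -L 5433:${DB_HOST}:5432 ${BASTION}"
# The developer then points psql / the ORM at localhost:5433; the DB itself
# never accepts connections from outside the private network.
```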
After this case is covered, I would focus on determining who has access to the DB, and for what reason.
A part of the problem is GDPR-related, but even if we remove that from the equation, does everybody need to have direct access to the database? If so, why? If it is for running "hacks" or changing something that the app dashboard currently doesn't support, stop that practice and ensure that crucial things can be done from the application side.
Also, do a review of the load balancer and web server security rules. Disable unused protocols which don't bring in any value. A lot of attack types can be stopped at this level.
After the security of access is resolved, I would focus on more codebase stuff, like ensuring that there are no user accounts or connection-string passwords in the code, as these can be handled with local environment variables on the host machines. I have seen this more times than I would personally like, and nobody can guarantee that your code will never leak, whether through your own developers or the git repo. Better safe than sorry.
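A minimal sketch of the fail-fast side of that pattern, with a hypothetical variable name; in production the value is injected by the host or CI/CD system, never committed to the repo (it's only exported inline here so the demo runs standalone):

```shell
# Demo value -- in real deployments this line does not exist; the variable
# arrives from the environment the process is launched in.
export DB_PASSWORD="s3cret-for-demo"

# Refuse to start if the secret is missing, instead of failing mysteriously
# at the first DB call (or worse, falling back to a hard-coded default).
: "${DB_PASSWORD:?DB_PASSWORD must be set}"
echo "DB password loaded from environment (length: ${#DB_PASSWORD})"
```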
If you are good on all the mentioned fronts, continue testing the app against the most common attack types (like SQL injection and XSS), and especially focus on testing API responses, with and without authentication. Try to see what data you can get from the app, and try to get data for other users and resources that you don't own inside the app logic. Also, try to determine how long the authentication tokens really last, and whether this makes sense for the app logic.
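A sketch of that auth probing, with a hypothetical URL and token; the curl commands are printed rather than executed so the example works offline:

```shell
# Probe the same resource with and without credentials. A well-secured API
# returns 401/403 for the first and only serves the second to the owner.
URL="https://app.example.com/api/users/42"
echo "curl -s -o /dev/null -w '%{http_code}' $URL"
echo "curl -s -o /dev/null -w '%{http_code}' -H 'Authorization: Bearer \$TOKEN' $URL"
# Then repeat the authenticated call against an ID belonging to ANOTHER
# user -- if that also returns 200, you've found a broken access control.
```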
I'm a bit confused about what you mean with credentials and environment variables. Totally agree credentials should not be part of the code, but isn't it a common practice to have them injected into the target container/machine via environment variables, and to prohibit any login to the container/machine?
If you are using a CI/CD pipeline, it is definitely preferable to inject something like this from an encrypted env variable on the CI/CD system, and not save anything on the host machine. It has the benefit of added security.
However, a lot of smaller companies don't have a CI/CD pipeline at all. I have seen a lot of admins deploying the app manually via SSH or RDP, by copying files or whatnot. While this is generally not acceptable for serious systems, we can't run from the fact that it happens, especially for teams that are not as experienced in developing larger systems, or simply don't have any infrastructure experience.
I come from a .NET background, where there are a few solutions even for such cases:
The issue with this approach is that these files are not encrypted, so an infrastructure breach will compromise the app. But, still, it's better than committing them to a git repo.
If you have the opportunity to set up an app-level account, there's still a non-trivial number of sites where you can get a basic idea of the implementation language from the characters you're not allowed to use in passwords. Sadly, many of them are banking sites. :p
When was the last time they practiced a DR drill? Or, even simpler: when was the last time they verified a database backup?
Identity management is huge too. How many SSH keys are in circulation? Who all has the capability to create keys to PROD servers. They do use different keys for PROD, right?
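A quick way to take that key inventory per host. This demo counts keys in a scratch file so it runs anywhere; on a real server you'd glob /home/*/.ssh/authorized_keys and /root/.ssh/authorized_keys instead (the key material below is truncated and fake):

```shell
# Build a throwaway authorized_keys file, then count entries the way you
# would on a real box: each key line starts with its algorithm name.
demo=$(mktemp -d)
printf 'ssh-ed25519 AAAAC3... alice@laptop\nssh-rsa AAAAB3... bob@desktop\n' \
  > "$demo/authorized_keys"
echo "$demo/authorized_keys: $(grep -c '^ssh-' "$demo/authorized_keys") key(s)"
rm -rf "$demo"
```

Run that across the fleet and you quickly see which accounts have keys nobody can account for.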
Don't focus only on PROD. If they have a DEV server that's running a backup of PROD, then there are potentially hundreds of gigabytes of PII on a server with the most minimal of defenses.
Start with the OWASP top 10, in that order. By far the most common problems are...well, the most common. :)
I also think that taking inventory of, and classifying the data that each system deals with is really important so you can prioritize your efforts.
I would start with the human layer of the stack. Who has access, what are their permissions, are accounts shared, password requirements, etc.
I suppose look at the OWASP top 10?
someone else more competent 😂
An outside-in pentest. Forget everything you know about the guts and come in from the outside.
This tool is in my tabs - I haven't used it, but it seems to ball up all the ones I do use. I've been curious about it.
latesthackingnews.com/2019/08/04/a...
Bingo. Even something as simple as iteratively running nmap, upping the fingerprinting aggressiveness with each run, can be helpful. This can let you know "are they using any scan-detectors to auto-block script-kiddies" and help you level-set the types of attacks that are likely to work.

I would start from the back of the stack and work towards the front end. The theory being that locking down DB operations and access gives the most benefit vs. time spent, since the data source is then secure. Then I would start fanning out to any services that interact with the data source and make sure they are secure. Lastly, I'd move on to any clients that interact with those services.
You should check:
For the majority, you will very likely be dealing with outdated servers and unauthorized access or improper permissions on user accounts.
The responses posted provide good information. The only thing I would add is referencing the OWASP ASVS (application security verification standard) as it describes the security that should be built into the application - input handling, session management, use of secure ciphers, privileged command execution etc. This is the link to OWASP ASVS:
owasp.org/images/3/33/OWASP_Applic...
The other item I didn't see mentioned (I may have missed it) but is proper implementation of TLS.
Additional considerations include application and database configuration and secure configuration of the execution venue. Running the application on AWS EC2 instances versus GCP GKE (intentionally drawing a stark contrast) brings different security considerations.
As @andrew_brown pointed out, OWASP and Kali have a lot of amazing tools. I would recommend every company use ZAP from OWASP as a good starting point. It has a big list of automated tests, which of course you need to verify afterwards manually or with other tools, but it does warn on many things.
owasp.org/index.php/OWASP_Zed_Atta...
Lots of professional answers, but I would start by looking at how many keys are stored in plain text in config files on the backend server.
I would also check how many of these are used by the frontend. Some developers leave keys embedded in HTML, like hidden inputs, and sometimes you can get the key by inspecting network traffic with the browser's dev tools if the frontend uses a 3rd-party API but tries to hide the key by uglifying the JS. Many forget to keep the keys on the backend and act as a middleware.
Then classics like sql injection, xss, those kind of things.
Later I would call in security experts to check for real threats, the genuinely security-specific stuff rather than common mistakes.
Assuming I just wanted to get a quick one day check of things, and not a thorough security review, this is what I would look at:
There are free SAST and DAST tools available that could be useful to get a baseline done pretty quickly. For example the OWASP ZAP project.
I would think about logging, alerting, and other APM stuff. If they don't know what kind of errors and issues are happening then they probably couldn't detect a hack. If they are logging things, then what are they paying attention to?
Next up would be dependency management and other general coding practices. Is there a code review process, are there quality gates? How are defects resolved? How (if ever) do they update dependencies?
I would look at application boundaries. Where does data enter and leave the application? Is it sanitized and encoded? What is the overall risk exposure of the application, e.g. if someone nefarious got access, could they affect other apps/systems at the company? How much attack surface is there?
Finally I would take a quick look at authentication and authorization. Are the APIs open to the public? How do they handle user logins. Time wise it would take a lot of effort to review this (and I just don't have the skill to do so).
I would check if there's any session checking / auth verification.
Most big non-tech companies rely too much on a VPN and don't invest money in security, thinking that it wouldn't be possible for someone to actually access an app without getting inside the network.
OWASP has a great web app testing methodology guide to walk you through a bunch of checks: owasp.org/index.php/Web_Applicatio...
These are kind of the minimum, a tester would want to expand based on what behavior exists in the application, but that guide is a great baseline.
Also, business logic inconsistencies and access control misconfigurations (or failures) are something I prioritize, as these are the kind of things an automated scanner or tool is not really able to find.
Logs and test accounts
I always start with an inventory, then check patch levels of everything. Then verify backups and logging.
Possible SQL injection vulnerabilities.
SQLMap I've found to be useful for this:
tools.kali.org/vulnerability-analy...
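A typical first sqlmap probe looks something like this. The URL is hypothetical, and the command is printed rather than executed; only ever point it at apps you're authorized to test:

```shell
# --batch answers sqlmap's prompts with defaults so it runs unattended;
# --level/--risk keep the first pass conservative before going deeper.
echo 'sqlmap -u "http://target.example/item?id=1" --batch --level=2 --risk=1'
```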
The first thing to check is if they’re using the default admin account on the database and if it is still using the default password or something easily crackable. You’d be surprised...
Focus on the human element of the organisation structure, plus try to grab the laptop of a middle manager to see if you are able to gain access to it.