
Stanislav (Stas) Katkov

Originally published at lintingruby.com

Interview with Greg Molnar - Rails developer and penetration tester

One of the sections in the upcoming Linting Ruby book is about automating security checks in Ruby. I never specialized in security for Rails applications, nor did Lucian. We didn't feel qualified to give our opinions as general advice on such an important topic, so we decided to do thorough research and reach out to people who know better.

So we present an interview with Greg Molnar, who has been a Rails developer for 13 years and is an OSCP-certified penetration tester. You might know him as the author of Spektr, a static code analysis tool that finds potential vulnerabilities in Rails applications, and of the work-in-progress book "Secure Code Review for Rails developers".


Stas: How did you get interested in security?

Greg: I ended up working for a company that did penetration testing and security consultancy. They hired me as a Ruby on Rails developer to work on their internal applications.
They were always telling me about all the cool things they were doing, and I caught the bug; I wanted to learn more about the world of InfoSec. Unfortunately, I had to leave that company and England, but I took a penetration testing course that they had recommended to me.

The course is supposedly one of the hardest out there; it's called Offensive Security Certified Professional, or OSCP for short. It's a very hands-on penetration testing course. For the exam, they give you five IP addresses and 48 hours. Your task is to find vulnerabilities, get root access on all of them, and write a report.
And since then, I have been splitting my time between doing development and security work.

Stas: Can you tell us more about what your security work looks like? Do you help with Rails code reviews, or do you try to break into your clients' systems?

Greg: I do penetration tests, and not only for Ruby on Rails projects; the underlying technology doesn't really matter to me.
Most of my work is for companies that need to be PCI compliant, and part of that is having a yearly penetration test of their application. For instance, if a company works in the financial industry, its customers and partners likely require it to be PCI compliant due to the sensitive data it handles. But I also work for companies that get a test done just to cover their bases.

Stas: Based on your experience, what are the most common problems, the most common security issues? Are they at the application level, or caused by misconfigured infrastructure?

Greg: Infrastructure testing is not really something I typically do. In fact, I can't remember when I last did it, because these tasks are usually separated in companies. The development team handles the web penetration test, the application's test, while the operations team is responsible for its own security tests and hires its own consultants.

The most common vulnerability, I believe, is still Cross-Site Scripting (XSS). XSS is very frequent, but I often find cases where proper configuration, via the X-XSS-Protection and similar headers, prevents full exploitation even when developers make mistakes. So the vulnerability is there, but nobody can exploit it. I can inject the script, but it won't execute; the browser console throws an error saying you are not allowed to do this.
Besides that, authorization issues are the most common: cases where a user can access or do things they shouldn't be permitted to.
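
To make the header side of this concrete, here is a minimal sketch of Rails' built-in Content Security Policy DSL, plus the legacy X-XSS-Protection header Greg mentions. The policy values are illustrative, not a recommendation for every app.

```ruby
# config/initializers/content_security_policy.rb
# A minimal sketch of the Rails CSP DSL (available since Rails 5.2). The
# directives below are examples only; a real policy needs to match the
# assets and third parties your app actually uses.
Rails.application.config.content_security_policy do |policy|
  policy.default_src :self
  policy.script_src  :self
  policy.object_src  :none
end

# The X-XSS-Protection header drives the (now legacy) XSS auditor in older
# browsers; modern browsers rely on CSP instead. It can be added to the
# default response headers, for example:
Rails.application.config.action_dispatch.default_headers["X-XSS-Protection"] = "1; mode=block"
```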

Stas: There's a gem called 'bundle-audit' that checks if any of your dependencies have known security issues. However, we're seeing more and more attacks these days involving third-party libraries. Is it enough to rely only on 'bundle-audit' to secure your third-party libraries?

Greg: 'bundle-audit' works the same way as the alternatives as far as I know, so relying on it should be enough, I think. But if someone hijacks a Ruby gem and publishes a version with malware, nothing can really save you except your own manual due diligence.
I recommend always cross-checking the repository and RubyGems when you upgrade a gem. When you get a new version of a gem from RubyGems, go to GitHub and search for the same tag. If you can't find that tag on GitHub, you have to ask why that is. I also always check the changelog on GitHub if there is one. I consider it a red flag if I can't find the tag or a changelog for the version I am getting from RubyGems.
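
The automated half of that is easy to wire into a build. Below is a sketch of a Rake task wrapping the bundler-audit CLI; the task and namespace names are our own choice.

```ruby
# lib/tasks/security.rake
# A sketch of running bundler-audit from Rake so it can sit in CI next to the
# test suite. Assumes the bundler-audit gem is in the Gemfile.
namespace :security do
  desc "Update the advisory database and audit Gemfile.lock"
  task :audit do
    # `sh` fails the task (and the CI build) if the command exits non-zero,
    # which bundle-audit does when it finds a vulnerable or insecure gem.
    sh "bundle exec bundle-audit check --update"
  end
end
```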

Stas: Do you use Dependabot?

Greg: No. To be honest, I never particularly liked that tool, even before GitHub acquired it. And now I don't favor GitHub either, due to the Microsoft acquisition.
I strive to maintain my independence and avoid relying on a tool unique to GitHub. With bundle-audit, for instance, I can take my repository anywhere and still use it.

Stas: So you don't stay on the bleeding edge of gems and probably have some lag before you update, since every update of every gem needs to be checked?

Greg: If there's a security update, and it is likely exploitable in the apps I work on, I update immediately. As for the regular gem updates, I batch them and do upgrade days when I upgrade as many as I can.

Stas: When you're developing Rails applications, as someone who is so security-minded, do you use any tools to help prevent security issues?

Greg: Yes, 'bundle-audit' and 'Brakeman', which are very well-known and widely used.
I actually created my own gem, similar to 'Brakeman', mainly because 'Brakeman' was acquired a few years back and subsequently changed its license. Under the new license, you can use it for free on your own application, but once you start running 'Brakeman' on the application of someone who is paying you, that infringes on their terms of use.
My gem, Spektr, is somewhat similar, but it is built with a different parsing library and has a different license, so penetration testers can run it for free during assignments. The core idea is the same as Brakeman's. It might report a few false positives, but even those can give ideas for patterns to look into during a test.
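
As a companion to the audit task sketched earlier, the static analysis step can be wired up the same way. The sketch below shells out to Brakeman's CLI and relies on its exit status; Spektr could be invoked similarly.

```ruby
# lib/tasks/security.rake (continued)
# A sketch of running Brakeman alongside the dependency audit. Brakeman
# reports a non-zero exit status when it finds warnings, which fails the task.
namespace :security do
  desc "Run Brakeman static analysis against the Rails app"
  task :brakeman do
    sh "bundle exec brakeman -q"
  end

  desc "Run all automated security checks"
  task all: %i[audit brakeman]
end
```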

Stas: How about RuboCop? RuboCop has cops that are security-related.

Greg: Yes, RuboCop does have security features, but I think they overlap with Brakeman, so I don't really use them. I use RuboCop for style enforcement, but not for the security aspects. Perhaps I should take a closer look and see how it compares to the other tools.
I believe that Brakeman and bundle-audit pretty much cover everything that can be detected automatically. However, there are other security issues, particularly those of a logical nature, that cannot be flagged automatically. If you forget to enforce authorization for an endpoint, no automated tool can point it out. For that, I often recommend writing tests for every different role, to ensure each role has access only to the things it's supposed to.
Use a whitelist, not a blacklist: if you add something new, it's blocked by default and can only be enabled intentionally.
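
As an illustration of that advice, a per-role request spec might look like the sketch below. The roles, route, factory, and helpers (FactoryBot, Devise's sign_in) are our own assumptions for the example, not something from Greg.

```ruby
# spec/requests/admin_reports_spec.rb
# A sketch of per-role authorization tests. The role names, route, and the
# FactoryBot/Devise helpers are hypothetical; the point is that every role
# gets an explicit expectation, so a forgotten authorization check fails a test.
require "rails_helper"

RSpec.describe "Admin reports access", type: :request do
  { admin: :ok, support: :forbidden, customer: :forbidden }.each do |role, status|
    it "responds with #{status} for a #{role}" do
      user = create(:user, role: role) # assumes a FactoryBot :user factory with a role attribute
      sign_in user                     # assumes Devise's request-spec integration helpers
      get "/admin/reports"
      expect(response).to have_http_status(status)
    end
  end
end
```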

Stas: It seems like we can automate some aspects of security checks, correct? But it also seems like this portion is relatively limited, implying a significant amount of manual work is still necessary.

Greg: I believe you can automate about half of the security checks, but the other half still requires vigilance. You still need to write tests, make sure you're implementing all authorization and authentication rules properly, and continually check that they are functioning as they should.
Certainly, you can - or rather, should - automate some portion, primarily because it's a straightforward process. Why wouldn't you take advantage of it if it's that easy and offers a head start against potential attacks?

Stas: You mentioned having worked with PHP and other stacks. In terms of security, how do Rails developers compare to other programmers? Does Rails significantly enhance security by default? And are Rails developers, in your experience, generally more knowledgeable about security than PHP developers?

Greg: Rails' default security settings protect against a lot of common mistakes. It's hard to create an XSS vulnerability because everything is escaped by default, with only a few exceptions. You need to explicitly state that a string is HTML safe; otherwise, it's automatically treated as unsafe. This makes life much easier.
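
A small, self-contained illustration of that default (it only assumes the activesupport gem, which every Rails app already has):

```ruby
require "erb"
require "active_support"
require "active_support/core_ext/string/output_safety"

user_input = "<script>alert(1)</script>"

# This is what <%= user_input %> effectively does in an ERB view: the markup
# is escaped and rendered as text instead of being executed.
ERB::Util.html_escape(user_input)
# => "&lt;script&gt;alert(1)&lt;/script&gt;"

user_input.html_safe?              # => false, so views escape it on output
trusted = "<strong>ok</strong>".html_safe
trusted.html_safe?                 # => true, rendered verbatim; reserve this for trusted markup
```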

90% of the ActiveRecord API is immune to SQL injection, which means that you don’t need extensive security knowledge to maintain a safe environment.
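
To show the line between the safe majority of the API and the rest, here is a hedged, self-contained sketch (it assumes the activerecord and sqlite3 gems; in a real app you would only have the model and the request parameters):

```ruby
require "active_record"

# An in-memory database just to make the example runnable on its own.
ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")
ActiveRecord::Base.connection.create_table(:users) { |t| t.string :name }

class User < ActiveRecord::Base; end

input = "x' OR '1'='1"   # stands in for untrusted params[:name]

# Hash conditions and bind parameters are quoted by the adapter, so the
# payload stays an ordinary string value:
User.where(name: input)
User.where("name = ?", input)

# Interpolating the input into a SQL fragment splices it straight into the
# query: the classic injectable pattern that static analyzers flag.
User.where("name = '#{input}'")
```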

I believe, however, that this could lead to people not taking security as seriously as they should. They think Rails takes over that responsibility, but that's impossible; there is no way to make a functioning framework without letting people shoot themselves in the foot if they want to.

When I'm writing code, I'm solving a problem; I don't think about security as a first-class citizen. I always think, okay, let's solve this problem in the most efficient way, and then I look at it and think about the security aspect. But that's only because I care about security; a lot of people miss that second step.

Another aspect to consider: with PHP, for instance, if I discover a vulnerability in a PHP application, it is relatively easy to gain access to the entire server or infrastructure, because the application is running on a server with a bunch of tools installed, and those usually help to open a remote shell and elevate privileges. That is much easier than doing the same with, say, a Rails app running in a container on Heroku.


If you're interested in linting and Ruby, you might want to follow along as we write our book at https://lintingruby.com.

Thank you.
