It's pronounced Diane. I do data architecture, operations, and backend development. In my spare time I maintain Massive.js, a data mapper for Node.js and PostgreSQL.
The only perfectly secure system is one that's been disconnected, powered off, encased in concrete, and dropped into the ocean from a helicopter flown blindfolded.
Any functionality you can use is functionality someone else with ulterior motives can use. Data you can access through your system is data someone else can access through your system. Backdoors are an inherent security risk.
The biggest security mindset shift for me was understanding that input is not something a user enters in a form element. Input is literally everything that comes to your server (since everything can be tampered with), so treat it as such!
That async request you yourself wrote, so you think you can trust it? Validate that payload the same as you would a text field.
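To make that concrete, here is a sketch in Python of validating a JSON payload from your own frontend exactly like any other untrusted input. The function name and field rules are made up for illustration:

```python
# Treat your own API's payload like any other untrusted input:
# check types and bounds, and copy only the fields you expect.
def validate_comment_payload(payload):
    """Return a cleaned dict, or None if the payload is not what we expect."""
    if not isinstance(payload, dict):
        return None
    author = payload.get("author")
    body = payload.get("body")
    if not isinstance(author, str) or not (0 < len(author) <= 80):
        return None
    if not isinstance(body, str) or not (0 < len(body) <= 5000):
        return None
    # Drop anything extra the client sent (e.g. an injected "admin" flag).
    return {"author": author, "body": body}
```

Note that the cleaned dict is rebuilt from scratch, so a tampered request can't smuggle extra fields through to your database layer.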
A big part of my role as Chief Defender Against the Dark Arts at 1Password is helping our very talented development team to build secure code. I have the good fortune of working with people who are highly motivated to do things securely, but they have not necessarily been specifically trained how to. Here are a few broad and narrow lessons in no particular order off of the top of my head.
Developers need (to be pointed to) the right tools to do things right. It is not enough to say "don't do X, do Y instead" if you don't give them the tools to do Y. So when some security expert tells you not to do X, ask them for the tools to do better.
Instead of addressing specific attacks (as they come up or we can imagine them), it is better to build things in ways to preclude whole categories of attack.
Ad-hoc regular expressions are rarely the right way to validate input (and all input may be hostile). But (see point 1), we need tools to build safe parsers for input.
Expanding on the previous point: That stuff that you learned and promptly forgot in your Formal Language Theory or Automata Theory class turns out to be really important for securely handling potentially hostile input.
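As a sketch of the "real parser, not ad-hoc regex" idea, here is a tiny hand-written parser (illustrative only) that accepts exactly a comma-separated list of unsigned decimal integers and rejects everything else with a position-labeled error:

```python
# Grammar: list := int ("," int)*  -- accepted precisely, no regex involved.
def parse_int_list(text):
    items, i, n = [], 0, len(text)
    while True:
        start = i
        while i < n and "0" <= text[i] <= "9":
            i += 1
        if i == start:
            raise ValueError("expected a digit at position %d" % i)
        items.append(int(text[start:i]))
        if i == n:
            return items
        if text[i] != ",":
            raise ValueError("expected ',' at position %d" % i)
        i += 1
```

Because the parser enumerates exactly what it accepts, hostile input can only ever fail loudly; there is no half-matched state for an attacker to exploit.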
Have as few user secrets as possible. (This is an example of 2).
And users should have as much control as possible over determining what is "secret".
Using good cryptographic libraries is essential, but they are very, very easy to use incorrectly. Have someone who knows cryptography review your use. You may have to pay them.
Many exploits involve chaining together little, seemingly harmless bugs. Just because you can't think of how some issue could be practically exploited doesn't mean that someone won't figure it out some day. (This is a variant of 2, but it is worth restating this way.)
Use debuggers, not printf, to study intermediate values. This prevents accidentally logging things that shouldn't be logged.
Heed IDE/compiler warnings. Run static and run-time analysis. Remember, many memory-addressing errors can be turned into exploits.
Anyway, this is off the top of my head, and I will close with a few slides I used in some internal training, done in the form of a quiz.
Security is hard. It's worthwhile to read about various attacks to understand the magnitude of ways in which stuff is attacked.
Your system will be breached. Mitigation strategy is as important as the "wall".
A system is never "secure", you can only balance security goals with current risks and available resources.
Privacy is inseparable from security. Even if you're irresponsible and don't care about your users, the attackers will.
Security becomes harder as the data becomes more valuable. Most systems are really only secure because nobody really wants the data they store. As a company becomes successful, the attackers will come.
Security is a moving target. You are never done implementing security.
User security is as important as corporate security.
Being open about security is the only way to know it's correct. There is no security through obscurity.
Everybody is responsible for security. Every person and every machine is a potential attack vector.
Kim Arnett [she/her] leads the mobile team at Deque Systems, bringing expertise in iOS development and a strong focus on accessibility, user experience, and team dynamics.
He/Him/His
I'm a Software Engineer and a teacher.
There's no feeling quite like the one you get when you watch someone's eyes light up learning something they didn't know.
The OWASP top ten security vulnerability documents are a great place to start: OWASP. Typically, the top web app security vulnerabilities are SQL injection, XSS and authentication issues. The top web frameworks will address those issues in their documentation so that is another place to begin researching.
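Of those, XSS is the easiest to illustrate: escape untrusted text before interpolating it into HTML. A minimal sketch in Python, where `render_comment` is a made-up helper:

```python
# html.escape converts <, >, & and quotes into entities, so user-supplied
# text can never break out of the surrounding markup.
import html

def render_comment(author, body):
    return "<p><b>%s</b>: %s</p>" % (html.escape(author), html.escape(body))
```

Real template engines (Jinja2, React's JSX, etc.) do this escaping automatically; the danger is the code path that bypasses them.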
1- Never trust user input. Always validate on both the frontend and the backend.
2- Sanitize the data. Use prepared statements.
3- Set a restrictive Access-Control-Allow-Origin header instead of allowing every origin.
4- Your application should log in to the database with the fewest rights possible.
5- Change the passwords frequently.
6- Keep the server's system up to date.
7- Configure the firewall properly.
8- Consider using CDNs.
9- Be aware of the data you are dealing with.
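Points 1 and 2 can be made concrete with Python's sqlite3 module: the `?` placeholder keeps hostile input as data, so it can never rewrite the SQL statement itself. A small sketch:

```python
# Prepared statement vs. injection: the placeholder binds the whole hostile
# string as one literal value instead of splicing it into the SQL text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "alice' OR '1'='1"  # a classic injection attempt
rows = conn.execute("SELECT name FROM users WHERE name = ?", (hostile,)).fetchall()
# rows is empty: the injection string matched no real name.
```

Had the query been built with string interpolation instead, the `OR '1'='1'` clause would have become part of the SQL and matched every row.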
Whenever you process data from the outside, always process it in this order:
1. Sanitize
2. Validate
3. Execute
4. Display feedback
Example:
```php
$errors = array();

// 1. Sanitization
$email = filter_var($_POST['email'], FILTER_SANITIZE_EMAIL);

// 2. Validation
if (false === filter_var($email, FILTER_VALIDATE_EMAIL)) {
    $errors['email'] = "Invalid email address";
}

// 3. Execution
if (count($errors) > 0) {
    echo 'There are errors: ';
    print_r($errors);
    exit;
}

// At this point, all is fine, let's open the gate...
$bdd = new PDO('mysql:host=localhost;dbname=test', 'root', '');
// ...

// 4. Feedback information
```
DON'T RUN a command with sudo if you don't fully know what it's doing. Understand that typing sudo in front of a command and then entering your password grants that command full read/write access to your filesystem!
For example:
I know we've all seen this command before, and people sometimes jokingly tell each other to run it: sudo rm -rf /
What this command does is recursively remove all files and directories under /. "Slash" is the root directory of your computer, so calling it with sudo gives it full rights to do so without prompting any kind of warning to proceed.
The very very very first step is to ensure security is even a priority by management and whoever leads the team - and each developer. Nothing else matters if there's no culture around these issues.
It needs to be one of the first clear goals that the team values security and will, therefore, allocate time for testing, learning, tooling, etc.
I'm the founder of ServerlessOps which I started when I asked, what would I do if the server went away? My job is to understand what the role of the operations person will be in the future.
OWASP puts together a list of what they consider the most critical security risks in web applications; it is updated every few years to account for changing trends.
There's no one-stop-shop security solution for any application. Security and best practices are always changing, and the most important thing in security is showing up and staying informed. And trust no one.
Don't forget about social engineering. Tell your support team to never give out passwords over the phone. Build a password reset into the site and have your support team point users to it.
Most data breaches are by employees - lock your systems down.
Get your site pen tested.
Plus everything already said.
It's not a fundamental principle, but this book was a good overview of security subjects. Unfortunately, as I discovered when studying for my Security+ test, the book is not really designed for cramming. It looks like there's a 6th edition out now.
I'm not sure if the blindfolded helicopter will achieve its purpose. It might just crash pretty quick, make the system fall and release it from concrete. Just sayin'
You can take off without the blindfold but you have to put it on once you're over the water.
This sounds reasonable
Re 6: No, hashing is not enough.
Use an algorithm suited for this task, as recommended by those crypto experts, which right now is mostly scrypt and argon2.
md5/sha1/sha2/etc is not enough no matter how much salt and pepper you throw on top.
PHP (which isn't exactly my favorite language) kinda got it right, providing easy-enough-to-use password functions in its standard library.
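For comparison, here is a minimal sketch of salted scrypt hashing with Python's standard library. The cost parameters are illustrative; tune them for your hardware:

```python
# Salted scrypt hashing: a random per-password salt defeats rainbow tables,
# and scrypt's memory-hardness slows down brute-force attempts.
import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Store the salt alongside the digest; it is not a secret, it just has to be unique per password.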
The less data you store, the fewer security hazards you expose yourself to, and the safer your participants will be. Don't hoard data on the theory that it'll become useful - only save what you need, and question yourself every time you get into a situation where you think you need it.
If you must store data, especially sensitive data, don't ever store it in plain-text! Look into hashing algorithms like bcrypt.
Always give your participants the option to delete their data, and actually delete it when they ask you to.
In Europe we must comply with the GDPR, which comes into force in the next few months. We've had to implement what you mentioned on every single system which holds more than one piece of identifying information about a user.
So it's not just a good idea, it's the law ;)
What did you do, just hash or encrypt everything? We are facing the same right now.
Not everything, just database tables containing personally identifiable information. You’ll want to encrypt this information rather than hash it, as you’ll more than likely need to retrieve it at a later date. Here is a good read which explains the legislation in more detail: techblog.bozho.net/gdpr-practical-...
I agree. The idea of soft deletes irks me. My personal data isn't a line in a log file that one would want to keep for as long as possible.
I created an account and said my name is Rexford. Now I'm leaving and I say, delete that data, and you just soft-delete it?
It's such a crazy approach that I'm not sure why it still exists.
Do you have any books or suggested reads on various attacks?
I follow these two guys on Twitter: @Scott_Helme (twitter.com/Scott_Helme) and @troyhunt. They're a source of lots of security articles, research, breaches, etc. I try to keep up on recent events, and do a deep dive on the web whenever a concept or term comes up that I don't know.
Awesome! Thanks!!
API Keys are just as sensitive as a username and password combination!
I'd add: never even commit a credential (password/API key/etc) to your repo. I'd argue this applies to any repo, not just open source ones, since you never know what might happen to the repo in the future. Even if you remove the credential in a future commit, it still exists in the history.
Yeah too true.
Just FYI if anyone has hit this issue before: BFG Repo-Cleaner can purge a repository of any traces of a file. However, if you're working on a team project, the key can spread like a virus, 'infecting' any branches stemmed off master in the future (if it passed code review, of course). I had to deal with a similar situation when someone committed a global config file containing passwords that was only meant for development. Fun times. Of course, the way to render an API key pair useless is just to regenerate the key; passwords are a different story if you don't want a history of previous passwords being revealed.
GitHub is pretty good about that: if they detect that one of their keys was committed and pushed to GitHub, they'll let you know and disable the key.
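A toy version of what secret scanners like git-secrets, gitleaks, or GitHub's own scanning do, sketched in Python. The patterns here are illustrative and far from exhaustive:

```python
# Flag strings shaped like credentials before they reach a commit.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # the shape of an AWS access key ID
    re.compile(r"(?i)(password|secret)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(text):
    """Return every substring that matches a known credential pattern."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]
```

Wired into a pre-commit hook, a check like this fails the commit before the key ever enters history, which is far cheaper than cleaning history afterwards.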
Don't roll your own crypto in production.
Don't expose passwords by passing form data as URL variables: URLs are visible on screen and end up in browser history and server logs, so anyone who sees them can read the password.
I haven't read it yet, but found something on Reddit that is probably relevant to the discussion:
As a full stack web developer, I've recently taken a detour into learning about web security and penetration testing. I decided to take what I've learned over the past few months and put together a list of "Minimum Viable Security" recommendations for anyone building web apps.
The OWASP Top 10 is a start. owasp.org/images/7/72/OWASP_Top_10...
Read crackstation's hashing and sec "summary" as a primer: crackstation.net/hashing-security.htm
It is easier and safer to whitelist than blacklist.
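For example, a whitelist (allowlist) for a user-supplied sort column, sketched in Python with made-up column names:

```python
# Accept only known-good values; everything else fails loudly.
# A blacklist would have to anticipate every hostile string, which it can't.
ALLOWED_SORT_COLUMNS = {"name", "created_at", "score"}

def sort_clause(requested):
    if requested not in ALLOWED_SORT_COLUMNS:
        raise ValueError("unsupported sort column: %r" % requested)
    return "ORDER BY " + requested
```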
Dian: Love the helicopter!
Here are my two cents to add to your list:
This has some good overlap with your items and a few others to add:
Web Developer Security Checklist: dev.to/powerdowncloud/web-develope...
Your code will, at some point, be found to be insecure. Don't take it personally, as if someone's called your baby ugly. Listen. Fix. Learn.
Design and code everything with a 'secure by default' mindset - meaning that out of the box your code should be secure.
Security is everyone's responsibility even the people who don't write code. Designers can just as easily expose information about people with poorly thought out UX.
Cool thanks for sharing!
I'll leave you with a real case, whether to laugh or cry:
Emojis can be wicked
Adding to the list: the basics of web tokens.
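For the curious, the core idea behind signed web tokens (JWT-style) can be sketched in a few lines of Python: an HMAC over the payload lets the server detect tampering. This is illustrative only; `SECRET` is a stand-in, and real code should use a vetted library such as PyJWT:

```python
# A signed token = base64(payload) + "." + HMAC(secret, payload).
# The server can hand the token out and later verify it wasn't altered.
import base64, hashlib, hmac, json

SECRET = b"server-side-secret"

def sign(payload):
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify(token):
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: reject the token
    return json.loads(base64.urlsafe_b64decode(body))
```

Note what this does and doesn't give you: integrity (the client can't alter the payload unnoticed) but not confidentiality (the payload is merely base64-encoded, not encrypted).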
Don't assume it's anyone else's responsibility!
Have a shared understanding of the threats to your application/product within your team.