The very, very, very first step is to ensure security is even a priority for management, for whoever leads the team, and for each developer. Nothing else matters if there is no culture around these issues.
It needs to be an explicit, early goal that the team values security and will therefore allocate time for testing, learning, tooling, and so on.
A big part of my role as Chief Defender Against the Dark Arts at 1Password is helping our very talented development team to build secure code. I have the good fortune of working with people who are highly motivated to do things securely, but they have not necessarily been trained in how to do so. Here are a few broad and narrow lessons, in no particular order, off the top of my head.
Developers need (to be pointed to) the right tools to do things right. It is not enough to say "don't do X, do Y instead" if you don't give them the tools to do Y. So when some security expert tells you not to do X, ask them for the tools to do better.
Instead of addressing specific attacks (as they come up or we can imagine them), it is better to build things in ways to preclude whole categories of attack.
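A classic instance of this principle: parameterized queries preclude the entire category of SQL injection, instead of chasing individual hostile strings. A minimal sketch using Python's built-in sqlite3 (the table and payload are just for illustration):

```python
import sqlite3

# In-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

hostile = "alice' OR '1'='1"  # classic injection payload

# String formatting re-opens the whole category of SQL injection:
#   conn.execute(f"SELECT role FROM users WHERE name = '{hostile}'")
# A parameterized query precludes it: the driver never treats the
# value as SQL, no matter what it contains.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (hostile,)
).fetchall()
print(rows)  # [] -- the payload matches no user
```

The point is not "sanitize better"; it is that with placeholders there is nothing left to sanitize.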
Ad-hoc regular expressions are rarely the right way to validate input (and all input may be hostile). But (see point 1), we need tools to build safe parsers for input.
Expanding on the previous point: That stuff that you learned and promptly forgot in your Formal Language Theory or Automata Theory class turns out to be really important for securely handling potentially hostile input.
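As an illustration of both points, here is a sketch of a small, explicit recognizer used in place of a loose regex. The grammar chosen (dotted-decimal IPv4) is only an example; the idea is that every rule is stated and everything else is rejected:

```python
def parse_ipv4(text: str) -> tuple:
    r"""Strict recognizer for dotted-decimal IPv4.

    A loose regex like r"\d+\.\d+\.\d+\.\d+" happily accepts
    "999.1.1.1" or octets with leading zeros; an explicit parser
    makes each rule visible and rejects everything else.
    """
    parts = text.split(".")
    if len(parts) != 4:
        raise ValueError("expected exactly four octets")
    octets = []
    for part in parts:
        # Only ASCII digits; reject leading zeros ("01" is ambiguous).
        if not part or not all(c in "0123456789" for c in part):
            raise ValueError(f"bad octet: {part!r}")
        if part != "0" and part.startswith("0"):
            raise ValueError(f"leading zero in octet: {part!r}")
        value = int(part)
        if value > 255:
            raise ValueError(f"octet out of range: {value}")
        octets.append(value)
    return tuple(octets)
```

Hostile input either matches the grammar exactly or fails loudly; there is no "mostly matched" state for an attacker to exploit.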
Have as few user secrets as possible. (This is an example of 2).
And users should have as much control as possible over determining what is "secret".
Using good cryptographic libraries is essential, but they are very, very easy to use incorrectly. Have someone who knows cryptography review your usage. You may have to pay them.
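As a sketch of what "using the library correctly" can look like, here is password verification built only on Python's standard library. The iteration count and salt length are illustrative assumptions, not a recommendation; this is exactly the sort of thing a cryptographer should review:

```python
import hashlib
import hmac
import secrets

# Work factor for PBKDF2-HMAC-SHA256 (an assumption for illustration;
# tune to your hardware and have it reviewed).
ITERATIONS = 600_000

def hash_password(password: str) -> tuple:
    """Return (salt, digest) for storage. Never store the password."""
    salt = secrets.token_bytes(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # compare_digest avoids the timing side channel that `==` would leak.
    return hmac.compare_digest(candidate, digest)
```

Two of the classic misuses are already precluded here: an unsalted or reused salt, and a variable-time comparison.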
Many exploits involve chaining together little, seemingly harmless bugs. Just because you can't think of how some issue could be practically exploited doesn't mean that someone won't figure it out some day. (This is a variant of 2, but it is worth restating this way.)
Use debuggers, not printf, to study intermediate values. This prevents accidentally logging things that shouldn't be logged.
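When log statements are unavoidable, a redaction filter can limit the damage of an accidental leak. This is a hypothetical sketch using Python's standard logging module; the key names it scrubs ("password", "token") are assumptions for illustration:

```python
import logging
import re

# Scrub values of secret-looking keys before they reach any handler.
SECRET_PATTERN = re.compile(r"(password|token)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SECRET_PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the record, just scrubbed

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)

logger.warning("login failed for user=bob password=hunter2")
# emitted: login failed for user=bob password=[REDACTED]
```

A filter like this is a safety net, not a substitute for not logging secrets in the first place.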
Heed IDE/compiler warnings. Run static and run-time analyzers. Remember, many memory-addressing errors can be turned into exploits.
Anyway, this is off the top of my head, and I will close with a few slides I used in some internal training. It took the form of a quiz.
Don't forget about social engineering. Tell your support team to never give out passwords over the phone. Build a password reset into the site and have your support team point users to it.
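A sketch of the server side of such a reset flow: issue an unguessable, short-lived, single-use token and store only its hash. All names and parameters here are illustrative assumptions; storage and email delivery are out of scope:

```python
import hashlib
import secrets
import time

RESET_TTL_SECONDS = 15 * 60  # validity window (an assumption)

# token_hash -> (user, expiry); a dict stands in for real storage.
_pending = {}

def issue_reset_token(user: str) -> str:
    token = secrets.token_urlsafe(32)  # ~256 bits of randomness
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    _pending[token_hash] = (user, time.time() + RESET_TTL_SECONDS)
    return token  # emailed to the user; only the hash is stored

def redeem_reset_token(token: str):
    """Return the user if the token is valid, else None."""
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    entry = _pending.pop(token_hash, None)  # pop makes it single-use
    if entry is None:
        return None
    user, expiry = entry
    return user if time.time() <= expiry else None
```

Storing only the hash means a leaked database does not leak usable reset links, and popping the entry makes replay impossible.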
A large share of data breaches come from insiders - lock your systems down.
Get your site pen tested.
Plus everything already said.
It's not a fundamental principle, but this book was a good overview of security subjects. Unfortunately, as I discovered when studying for my Security+ test, the book is not really designed for cramming. It looks like there's a 6th edition out now.
Cool thanks for sharing!
Here's a real case, whether to laugh or cry:
Emojis can be wicked
Your code will, at some point, be found to be insecure. Don't take it personally, as if someone's called your baby ugly. Listen. Fix. Learn.
Adding to the list: the basics of web tokens.
Dian: Love the helicopter!
Here are my 2 cents to add to your list:
This has some good overlap with your items and a few others to add:
Web Developer Security Checklist: dev.to/powerdowncloud/web-develope...
I haven't read it yet, but found something on Reddit that is probably relevant to the discussion:
As a full stack web developer, I've recently taken a detour into learning about web security and penetration testing. I decided to take what I've learned over the past few months and put together a list of "Minimum Viable Security" recommendations for anyone building web apps.