Not that long ago, I worked at a company heavily involved in cybersecurity.
One day, I started as usual, by opening my company-provided MacBook, and went to read the day's announcements. I'd just started to read—
The screen blinked off.
Surprised, I nudged the mouse, and sure enough, the screen came to life again, with a password prompt. Odd. I logged back in, found my place and started to—
The screen blinked off again.
What the heck?
Device Management solutions are pretty awful things. They enforce some arcane policy by changing your settings, usually without telling you. You, the user, have no control. In our case, we were a consultancy literally filled with experts in the cybersecurity industry, yet our laptops were working against us.
It was simply infuriating. In this case, a bug in the device management solution meant that the screen timeout it enforced was one minute.
This meant that we were unable to work without gently nudging the mouse near-constantly. Several of us gave up and downloaded the source for an open-source app that "jiggles" the mouse when it is left alone, defeating the errant software.
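The core of such a jiggler is tiny. As a hypothetical sketch (a real app hooks into the OS for idle detection and cursor control; here those hooks are passed in as callables so the logic stands alone), it amounts to nudging the cursor one pixel and back just before the timeout fires:

```python
# Hypothetical sketch of a mouse "jiggler". The callables get_idle_seconds
# and move_cursor stand in for whatever OS hooks a real app would use.
def jiggle(get_idle_seconds, move_cursor, timeout=60, margin=10):
    """Nudge the cursor shortly before the screen timeout would fire.

    Returns True if a nudge was made, False if the user is still active.
    """
    if get_idle_seconds() >= timeout - margin:
        move_cursor(1, 0)   # one pixel right...
        move_cursor(-1, 0)  # ...and straight back, so nothing visibly moves
        return True
    return False
```

A real app runs a check like this in a loop every few seconds; against a one-minute timeout, a nudge at the fifty-second mark is enough to keep the screen alive indefinitely.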
If you think we were wrong, just bear in mind that we frequently had to give presentations to key customers. Having to change slides at least once a minute would be a challenging presentation style.
But fundamentally, this situation arose because in the security world, the user is neither trusted nor involved. They are seen as part of the problem - not part of the solution. Surely, in our case at least, our teammates were an asset?
In fact, aren't the staff always the front line for any organisation's security posture and device health?
All too many cybersecurity firms - those with impressive front pages with pictures of green-lit, hoodie-wearing hackers - like to talk about The Insider Threat. In capitals, just like that.
What they tend not to note is that the insider threat - while very real - consists almost entirely of people making honest mistakes. Trying to prevent mistakes by making them technically impossible has two problems. First, it is very complex - and, as we saw, prone to error. Second, it often damages the productivity of employees.
Surely the best way to reduce errors like this is by inclusion and education - turning your staff into a security asset, rather than a liability?
Surely security should be more than saying "No"?
Plenty of security experts have already found, for example, that the best way to reduce the effectiveness of phishing attacks is to periodically send simulated ones to users, gamifying the task of spotting and avoiding them.
After all, this protects not only their corporate email, but their personal email as well - and you can bet that a clever attacker will target that, too. By involving users in their own security, you are therefore protecting areas that enforcement could never hope to cover.
As "Bring Your Own Device" and working from home build momentum, the lines between corporate security and personal security blur to an unprecedented degree.
Just as we don't want our employers to gather information on our home lives systematically, we obviously don't want them to gather information on our personal devices without our understanding and consent.
For companies with staff in Europe, California, and other places around the world, this is a matter of more than idle concern. The GDPR makes it illegal to gather personal data without a lawful basis, such as consent. Perhaps worse, it requires companies to provide the data they do collect back to the user on demand.
Clearly, then, the old model of blind, draconian enforcement isn't sustainable - even if it were desirable.
What's needed is a model of corporate security that works in the best - and most effective - traditions of leadership. As security leaders, we should draw our users with us, rather than trying to corral and drive them from behind.
We need to reset the relationship users have with security. We can transform it into a positive force for not only the risk management of the company, but the personal safety of those we work with.
This will make our users happier - and perhaps even more productive. But it will also reduce the risks from security failures to the company as a whole.
Thoughts like these are behind the emergence of a new model of corporate security - "Honest Security". Built around concepts like consent, transparency, and inclusive security practice, the intent is to reverse the adversarial posture of security versus the user.
I am not, I admit, the least cynical person on the planet. In the cybersecurity world, there's plenty to be cynical about, after all. I'm fully expecting a series of companies to jump on this bandwagon in name only.
But if the outcome is that security becomes less of a barrier and more of an enabler, I'm all for it. If this is a buzzword, it's a buzzword to watch.