Antigoni Sinanis for Kolide

Three Myths about Honest Security

Today at Kolide, we published our guide to Honest Security. It's our North Star and represents our vision of the future for endpoint security and device management.

Honest Security focuses on the following five tenets:

  1. The values your organization stands behind should be well-represented in your security program.

  2. A positive working relationship between the end-user and the security team is incredibly valuable and worth fostering.

  3. This relationship is built on a foundation of trust that is demonstrated through informed consent and transparency.

  4. The security team should anticipate and expect that end-users use their company-owned devices for personal activities, and design their detection capabilities with this in mind.

  5. End-users are capable of making rational and informed decisions about security risks when educated and honestly motivated.

Listen to the podcast about Honest Security at Hacker Valley Studio.

While much of the free guide focuses on what Honest Security is and how it works mechanically, I imagine there are many IT and security practitioners out there who might be instinctively put off by something like this due to previous bad experiences.

After chatting with a few folks who have read the tenets, I’ve noticed there are already some common misconceptions forming about the Honest Security approach. As we launch Honest Security, I thought I would author a supplementary post that dispels some of the myths behind the methodology.

Myth #1: Honest Security is incompatible with Device Management

Many folks believe Honest Security advocates for an approach that is fundamentally incompatible with device management solutions like MDM. You could hardly be blamed for thinking so; I had been advocating strongly against blanket device management since the inception of Kolide. Since 2019, though, our thinking has evolved.

In chapter six, Achieving Compliance Objectives, we acknowledge that education alone isn't enough to move the needle toward acceptable levels of adherence to the company's objectives. To that end, we suggest defining predictable and proportionate consequences that can be applied to end-users who do not heed the important recommendations of the security team. We go on to describe the concept of Opt-in Management.
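To make "predictable and proportionate consequences" concrete, here is a minimal sketch of what such an escalation policy might look like in code. Everything here is illustrative: the function name, thresholds, and consequence labels are hypothetical and not part of any Kolide product; the point is only that the consequence is a pure, predictable function of how long an issue has gone unresolved.

```python
from datetime import datetime, timedelta

# Hypothetical grace period before any real consequence activates.
GRACE_PERIOD = timedelta(days=7)

def consequence_for(issue_detected_at: datetime, now: datetime) -> str:
    """Return a predictable, proportionate consequence for an unresolved
    compliance issue, escalating only after the end-user has had a fair
    chance to fix it themselves."""
    age = now - issue_detected_at
    if age < GRACE_PERIOD / 2:
        return "notify"         # friendly reminder with remediation steps
    if age < GRACE_PERIOD:
        return "warn"           # the deadline is made explicit to the user
    return "block_sign_in"      # consequence activates, still reversible

detected = datetime(2021, 1, 1)
print(consequence_for(detected, detected + timedelta(days=1)))  # notify
print(consequence_for(detected, detected + timedelta(days=5)))  # warn
print(consequence_for(detected, detected + timedelta(days=8)))  # block_sign_in
```

Because the policy is deterministic and can be shown to the end-user in advance, there are no surprises: the user always knows what happens next and when.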

While this process is effective, some people will continually find themselves on the brink of a consequence activating (or worse, become serial offenders). In many of these situations, the user would be better served if the security team could simply implement the recommendations for them. This is where Honest Security allows users to opt in to traditional device management solutions (where applicable) and stop worrying about getting locked out of critical services or accounts.

We feel so strongly about this that we are working on an Honest MDM solution for our customers. The MVP is almost complete!

Myth #2: Honest Security blinds security teams in the name of privacy

Critics of the Honest Security approach often get defensive about its strong push away from the blanket collection of data that might be useful later. Many folks I deeply respect remind me that if it were not for a certain piece of data they weren't sure they should gather, they would never have been able to detect <insert really bad thing>.

While Honest Security definitely advocates being mindful and intentional about the data you collect, it primarily champions an approach based on informed consent and transparency. If the security team really does think a piece of data is important, it should be up-front about collecting it. If that data could be dangerous or extremely personal (like GPS coordinates), then informed consent is the best option.

The best summary of my position is in chapter four, Collecting Data Honestly.

Do you know if your organization looks at your web browser history?

I want to clarify that I am not asking them whether or not the company collects web browsing history. I am asking them whether they know, with 100% certainty, whether the security team is or is not. It is one thing to know whether your organization can view your browsing history, and another to know if they do. Also notice I didn’t ask them if the organization usually looks. One person, looking once because they were curious, is looking.

After making these clarifications, it is my experience that there are three camps of people who can still emphatically answer β€œyes.”

[…]

The third camp are the folks who can answer β€œyes”, because they know exactly what tools are installed on their devices and what the tools are capable of collecting. More importantly, they know they can independently verify how the security and IT team is using these tools in practice. They know if the security team is looking at their web browser history because the tools the security team uses require them to know. These are people who work for companies that practice Honest Security.

Myth #3: Honest Security hurts Insider Threat detection and deterrence

Many folks think the quality of their insider-threat defense strategy is inextricably tied to obfuscating exactly how the endpoint security team performs its detection mission. We address this myth directly in the fourth chapter, Collecting Data Honestly, in a section aptly titled "The Insider Threat" (reproduced below).

The most common argument I see against transparency is that it gives bad actors within your organization an advantage. The rationale is that an insider threat might be able to identify gaps in the security team's detection capabilities and systematically abuse them to complete their mission. I disagree. As we all know, security through obscurity rarely works. It's also much more likely that this transparency and regular contact will instead serve as a deterrent. Unlike end-users who make unforced errors, malicious insiders are afraid of being caught. The more interactions they have with a team practicing Honest Security, the more uncomfortable they will get.

Part of Honest Security is trusting end-users because they are our colleagues. If you build a dystopian and cynical security program born out of fear, mistrust, and suspicion, then you will inevitably make your fellow employees your enemies. The positive working relationship we are advocating for in this guide cannot exist under such a program. Only you can judge whether that trade-off is ultimately worthwhile.

I hope this short post has cleared up some of the myths we've seen in the early hours of promoting this honest approach to security. While we've thought very carefully about what to include in our guide, it's a document we expect to evolve based on the feedback, opinions, and, most importantly, the experiences of security and IT professionals all over the world.

The Honest Security Guide can be found at https://honest.security.

-Jason Meller, Co-Founder and CEO of Kolide, Inc.

Top comments (1)

Dave Cridland

I suspect the most numerous cases of what we call "The Insider Threat" are people typing the wrong email address in. There are mitigations to that built around enforcement - STANAG 4774, anyone? - but realistically even those rely heavily on training and cooperation, and are unavailable to most organisations.

If you have an actual insider adversary, then they'll just borrow someone else's device and credentials anyway, and all your clever enforcement is wasted. Better, then, to concentrate on ensuring your colleagues know not to lend their device or credentials to anyone else, surely?
