DEV Community

Daniel Glover

Originally published at danieljamesglover.com

Social Engineering: The Human Side of Cybersecurity

Your people are your biggest attack surface - and your last line of defence. Here is how to build a security culture that turns users from a liability into an asset.

Last year, a finance director at a mid-sized retailer received a call from someone claiming to be from the company's IT support. The caller knew the director's name, their direct dial, and the name of the email provider. Within 12 minutes, the director had handed over credentials that gave attackers access to the accounting platform, the ERP system, and the backup infrastructure. The breach cost GBP 2.3 million and took eight months to fully resolve.

No malware was involved. No zero-day exploit. Just a phone call and carefully researched pretexting.

This is social engineering - and it remains the most effective attack vector in modern cybersecurity, precisely because it exploits the one variable that technical controls cannot fully govern: human behaviour.

Why Technical Controls Are Not Enough

Most organisations have invested heavily in perimeter security, endpoint detection, email filtering, and multi-factor authentication. These controls are necessary, but they operate on the assumption that the person on the other end of the keyboard is the legitimate owner of the credentials. Social engineering breaks this assumption systematically.

A phishing email that bypasses your secure email gateway. A phone call that bypasses your network authentication. A physical tailgate through an unguarded door. These are not technical failures - they are gaps between what your controls assume and what your people actually do.

The 2025 Verizon Data Breach Investigations Report found that 68% of breaches involved a human element, whether through error, privilege misuse, or social engineering. That figure has remained stubbornly consistent across multiple years of the report. This is not a technology problem waiting for a better technological solution. It is a people problem that requires a people solution.

The Anatomy of a Social Engineering Attack

Understanding how social engineers work is the first step to defending against them. Most attacks follow a recognisable pattern.

Reconnaissance. Attackers gather publicly available information about their target. LinkedIn tells them who works in the finance team. Twitter reveals who attended which conference and what they presented. The company website lists leadership names and organisational structure. This phase is entirely passive and entirely legal from the attacker's perspective - they are using open-source intelligence that anyone with an internet connection can access.

Pretexting. The attacker constructs a believable scenario to initiate contact. This might be a delivery driver with a missing parcel, an IT administrator investigating a suspected breach, or a colleague from another office who has forgotten their access credentials. The pretext does not need to be sophisticated - it needs to be plausible in context.

Exploitation. Once contact is established, the attacker uses psychological levers to escalate access or extract information. Common techniques include authority bias (posing as someone with power), reciprocity (offering something small to trigger a return favour), and scarcity (creating urgency to suppress rational scrutiny).

Disengagement. After achieving the objective, the attacker exits cleanly, often leaving the victim unaware that anything has occurred until much later - if at all.

Building a Security Culture That Stands Up to Pressure

Technical controls alone cannot protect against an attacker who has successfully impersonated a trusted colleague or service provider. The defence has to operate at the human level too.

Make Security Behaviour Visible and Social

One of the most powerful drivers of secure behaviour is the perception that others are doing it too. When security practices are visible - team members questioning unusual requests in meetings, colleagues verifying identity before sharing credentials, managers modelling good hygiene - they become normalised rather than seen as obstruction.

Security champions programmes work on this principle. Select one person per department to act as a security point of contact, give them basic threat awareness training, and empower them to question anything that feels wrong without fear of appearing unhelpful. Over time, security awareness becomes embedded in team culture rather than siloed in an IT department that sends quarterly password reminders nobody reads.

Create Psychological Safety for Verification

The biggest obstacle to effective verification is social friction. Nobody wants to be the person who challenged the finance director when they were just trying to do their job. Nobody wants to call out a colleague they have worked with for years as a potential security risk.

Leaders have to explicitly create permission to verify. This means stating clearly, repeatedly, and from the top of the organisation that questioning unusual requests is expected, not rude. It means celebrating cases where people caught something suspicious rather than treating near-misses as embarrassments to be quietly buried.

A simple script can help: "I want to make sure we are doing this securely. Can you verify your identity through the standard channel?" This depersonalises the challenge and frames verification as professional diligence rather than personal suspicion.

Measure What Matters

Most security awareness programmes measure completion rates for training modules. Completion rate tells you whether someone clicked through a slide deck, not whether they would make the right decision under pressure. Behavioural metrics are far more valuable.

Track how many people report phishing simulations. Measure the time between a suspicious email landing and it being reported to the security team. Run controlled exercises to see who would hand over credentials under different pretexting scenarios. Use the results to identify where cultural interventions are working and where there are persistent gaps.
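The reporting-rate and time-to-report metrics above can be computed directly from simulation logs. A minimal sketch in Python, assuming a hypothetical record format where each simulated lure has a `landed` timestamp and a `reported` timestamp (or `None` if it was never reported):

```python
from datetime import datetime, timedelta

# Hypothetical phishing-simulation records: when each lure landed in an
# inbox, and when (if ever) the recipient reported it to security.
events = [
    {"landed": datetime(2025, 3, 1, 9, 0), "reported": datetime(2025, 3, 1, 9, 12)},
    {"landed": datetime(2025, 3, 1, 9, 0), "reported": datetime(2025, 3, 1, 14, 30)},
    {"landed": datetime(2025, 3, 1, 9, 0), "reported": None},  # never reported
]

def reporting_rate(events):
    """Fraction of simulated lures that were reported at all."""
    reported = [e for e in events if e["reported"] is not None]
    return len(reported) / len(events)

def median_time_to_report(events):
    """Median delay between a lure landing and it being reported,
    ignoring lures that were never reported."""
    delays = sorted(
        e["reported"] - e["landed"]
        for e in events if e["reported"] is not None
    )
    mid = len(delays) // 2
    if len(delays) % 2:
        return delays[mid]
    return (delays[mid - 1] + delays[mid]) / 2

print(f"Reporting rate: {reporting_rate(events):.0%}")
print(f"Median time to report: {median_time_to_report(events)}")
```

Tracked quarterly and broken down by department, these two numbers give a far clearer picture of cultural change than training completion rates ever will.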

The CISO of a global logistics firm I worked with ran quarterly social engineering simulations across all business units and published the results - anonymised but by department - to the executive team. Within 18 months, the reporting rate went from under 5% to over 70%, and the number of credentials actually handed over in tests fell from 23% to under 2%. The published results created accountability in a way that internal reporting to the security team alone never could.

Keep Training Real and Relevant

Generic cybersecurity awareness training that covers password management, phishing, and GDPR compliance does not prepare people for the specific attacks targeting your organisation. The most effective training programmes are built around real incidents - your own or industry equivalents - and focus on the decision points where someone could have stopped the attack chain.

When a social engineering attempt is identified, whether successful or not, treat it as a learning opportunity. Anonymise the details. Walk through what the attacker did, what made it convincing, and what the correct response would have been. This turns every incident into a training moment rather than a post-mortem that only the security team attends.

The Leadership Imperative

Security culture starts with tone at the top, but it cannot stay there. If the CEO cheerfully ignores the IT department's advice on a weekly basis, the message that security is optional will filter down through every layer of the organisation. Conversely, if leaders visibly engage with security practices - reporting suspicious emails, completing training without being chased, asking questions when something does not feel right - that culture cascades too.

One of the most effective interventions an IT leader can make is to ensure that the board understands social engineering risk in commercial terms. The retail breach I described at the start of this article was not primarily a technology failure. It was a failure to equip the finance director with the awareness and the permission to question a phone call that should have set off alarm bells.

The return on investment for security culture is difficult to quantify precisely, but the cost of not building it is measurable in breach notifications, regulatory fines, operational disruption, and reputational damage. In a climate where the average cost of a data breach in the UK now exceeds GBP 3 million, the question is not whether to invest in security culture, but how quickly you can build it before an attacker tests whether your people are your weakest link.

Conclusion

Social engineering exploits human psychology rather than software vulnerabilities, which means no technical control can fully eliminate the risk. Your people are simultaneously your biggest attack surface and your most capable last line of defence. Building a security culture that empowers users to question unusual requests, rewards verification over politeness, and treats every incident as a learning opportunity is not a soft initiative - it is a critical security control that compounds in value over time.

The finance director who handed over credentials last year was not incompetent. They were a reasonable professional who had never been taught to treat credential requests with the same scepticism they applied to unsolicited emails. That gap in awareness is fixable. The question is whether your organisation fixes it proactively, or waits until the fix comes in the form of a breach notification.
