The Psychology of Social Engineering: Why Smart People Get Hacked
The finance team in the $25.6 million Hong Kong deepfake case weren't naive. They were experienced professionals, on a routine video call, doing exactly what their company's processes asked of them. They approved a transfer. Every familiar face on that call was AI-generated.
The attack didn't outsmart them. It outsmarted their environment — exploiting the trust they placed in what looked like a completely normal interaction.
This is the part of social engineering that most people miss entirely. The conversation usually ends at "they should have known better." But that framing is both wrong and dangerous, because it leads to the wrong defences. The research tells a more uncomfortable story.
The Big Misunderstanding
Social engineering doesn't target stupidity. It targets psychology.
There's a critical difference. Stupidity is fixed and personal. Psychology is universal — the same cognitive architecture that makes humans capable of trust, cooperation, and fast decision-making also makes every one of us exploitable under the right conditions.
Influence principles commonly used in marketing — such as scarcity, reciprocity, likeability, and social proof — remain effective even when individuals are aware of cybersecurity risks and social engineering threats. That finding, from a 2024 peer-reviewed study published in Springer's Research Challenges in Information Science, is the most important sentence in this entire post.
Even when you know you might be manipulated, these triggers still work. That's not a failure of intelligence. That's the architecture of human cognition.
Robert Cialdini documented six core principles of influence in his landmark work on persuasion. Attackers have been using all six as a technical toolkit for decades. Content analyses and simulations consistently find authority and social proof to be the most prevalent principles in phishing attacks. Here's how each one operates in practice.
1. Authority — The Voice You Don't Question
When someone presents as a figure of power — the CEO, IT support, the auditors, a regulator — most people comply without scrutiny. This isn't weakness; it's a social behaviour that works correctly in almost every real-world context. Attackers exploit the exceptions.
The Muddled Libra helpdesk attacks we covered earlier this week work almost entirely on authority. A caller claims to be a senior employee, uses the right jargon, references internal systems by name, and the helpdesk agent — conditioned to be helpful to authority — resets the MFA.
The tell: legitimate authority rarely demands urgency while simultaneously discouraging verification. That combination — "I need this now, don't check with anyone else" — is the manufactured version.
2. Urgency — The Thinking Killer
Urgency is the single most reliable tool in the social engineer's kit, and the research explains why: time pressure, autonomy reduction, and threat appraisal fundamentally shape security behaviour, according to a 2023 study published in Cyberpsychology: Journal of Psychosocial Research on Cyberspace.
When we perceive that something bad will happen imminently — account locked, deadline missed, money lost — the brain shifts from deliberate thinking to reactive mode. Careful evaluation of sender domains, unusual requests, and out-of-character instructions becomes almost impossible in that state.
Every "your account will be permanently deleted in 24 hours" email is engineering this response deliberately.
The counter-move: treat urgency itself as a red flag, not a reason to act faster. Real systems have grace periods. Real colleagues can wait five minutes for you to verify through a second channel.
3. Fear — When the Amygdala Takes Over
Fear is urgency's more powerful sibling. Where urgency creates time pressure, fear creates threat response — and threat response is physiologically designed to bypass the prefrontal cortex, the part of the brain responsible for analytical thinking.
"Your account has been compromised." "Legal action will be taken unless you respond immediately." "Unusual sign-in detected from another country."
These aren't just pressure tactics. They are neurological switches. Once fear is activated, the cognitive resources available for evaluating whether the email is real drop dramatically. Attackers who understand this don't need convincing emails — they need triggering ones.
4. Social Proof — Because Everyone Else Did
Social Proof and Authority were found to be the most influential of Cialdini's six principles across both UK and Arab cultural frameworks in a 2024 study of 642 participants.
Social proof is the instinct to treat the behaviour of others as evidence of correct action. "Your colleague has already approved this." "Everyone on the team has completed this verification." "Three other vendors have already submitted their documents."
These statements lower resistance instantly — not because people are gullible, but because social proof is a rational heuristic in almost every non-attack context. It works.
5. Liking — The Attacker Who Seems Like a Friend
Research found that emails using the Liking principle were the most effective for phishing, while Authority and Scarcity combined were more likely to arouse suspicion.
Attackers who establish rapport before making a request — who are warm, relatable, share something in common with the target, or who have done something helpful first — are dramatically more successful than those who lead with demands.
This is why the long-game pretexting attacks (like those executed by Muddled Libra) invest time building a believable character before ever making a request. The attacker who seems friendly, knowledgeable, and on your side has already won most of the battle.
6. Reciprocity — The Favour That Costs You
Reciprocity is the principle that humans feel obligated to return what has been given to them. Attackers weaponise this in two ways.
The first is the small favour first — providing helpful information, solving a problem, doing something generous — before making the real request. The target feels an obligation they didn't consciously sign up for.
The second is the long-game cultivation of a relationship over days or weeks before the actual attack. By the time the request arrives, the relationship feels real, the obligation feels genuine, and refusing feels like a breach of trust.
This is how pretexting escalates from a first contact to a transfer authorisation.
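To make the taxonomy concrete, the six triggers can be sketched as a naive keyword heuristic. This is an illustration only: the phrase lists are hypothetical examples of the language each trigger tends to use, not a real detection ruleset, and a keyword match is no substitute for the human pause-and-name habit described below.

```python
# Toy sketch: map common pressure phrases to the Cialdini trigger they invoke.
# The phrase lists are illustrative assumptions, not production detection rules.
TRIGGER_PHRASES = {
    "authority":    ["the ceo asked", "it support here", "compliance requires"],
    "urgency":      ["within 24 hours", "immediately", "right now"],
    "fear":         ["account compromised", "legal action", "unusual sign-in"],
    "social proof": ["everyone has already", "your colleague approved"],
    "liking":       ["great chatting with you", "as a fellow"],
    "reciprocity":  ["i sorted that for you", "after the favour i did"],
}

def name_the_triggers(message: str) -> list[str]:
    """Return the triggers a message appears to activate, in taxonomy order."""
    text = message.lower()
    return [
        trigger
        for trigger, phrases in TRIGGER_PHRASES.items()
        if any(phrase in text for phrase in phrases)
    ]
```

Even this crude sketch shows why real attacks stack triggers: a single message like "The CEO asked me to handle this immediately" activates both authority and urgency at once.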
The Practical Exercise: Name the Trigger
Here's the one habit shift that the research consistently supports as effective: before you act on any request — click a link, submit credentials, approve a payment, grant access — pause for five seconds and ask:
"Which trigger is this activating in me right now?"
Is it urgency? Authority? Fear? If you can name it, you've introduced the cognitive distance that the attack is specifically designed to eliminate. That gap — between feeling and acting — is where the attack fails.
Susceptibility can fluctuate within the same individuals across repeated exposures, challenging assumptions that static defences or one-time awareness interventions remain effective over time. This is why naming the trigger needs to become a reflex, not a checklist — something practiced regularly until it's automatic.
What This Means for Training
Understanding the psychology doesn't just help individuals — it completely changes how organisations should approach security awareness training.
Training that teaches people to spot technical red flags (grammar errors, suspicious URLs) is fighting last decade's war. Training that teaches people to recognise which psychological trigger is being activated, and to pause before acting — that's training that works against AI-generated phishing, deepfake video calls, and every social engineering technique that doesn't yet have a name.
The research from SECURWARE 2025 is explicit in its recommendation: move away from teaching people to spot technical cues, and toward teaching psychological tactics through role-play scenarios that activate the same triggers in a safe environment.
That's precisely what scenario-based simulation does, and it's the training model the evidence most consistently supports.
At Kuboid Secure Layer, our Human Risk Assessment service is built around this model. We simulate the triggers — not just the emails — so your team builds real reflex responses, not theoretical checklists. See how it works here.
One Last Thought
The $25.6 million didn't leave that company because the finance team was careless. It left because nobody had ever trained them to ask, in the middle of a familiar-looking video call: "Which trigger is this activating in me right now?"
That question is free. It works on every attack. And once it's a reflex, it's very hard to social-engineer around.
Which of these six triggers do you think your team would find hardest to resist under real pressure? I'd genuinely like to know — drop a comment below. And if you want to find out through a controlled simulation rather than a real attack, we're here.
Kuboid Secure Layer provides social engineering simulations, human risk assessments, and security awareness programmes for businesses across India and beyond. Learn more.