The stereotypical image of a cybercriminal is a hooded man in a dark room, using code that makes no sense to the average person to hack into personal or private systems. In reality, gaining access to a system often takes nothing more than cunningly asking an employee on the other end of the phone.

Social engineering, the art of manipulating human psychology rather than exploiting software weaknesses, is a potent vector of attack – and not a new one, either.

According to Splunk, 98% of cyberattacks rely on social engineering, with 80% of data breach incidents targeting the human element to gain access to business information.

Exploiting trust

Chris Yule, director of threat research at Sophos, and his colleague Eric Escobar, principal security consultant and experienced penetration tester, describe how easily trust can be weaponised.

Escobar recalls orchestrating what seem like mundane help-desk interactions, layered with subtle tricks. “My go-to is putting on coffee shop sounds in the background, acting like I’m in line for coffee.”

“Anytime they ask me for sensitive information, I can brush it off, call a couple of times, get used to the information they normally ask for, and then call back in a more legitimate setting when the help desk has changed shifts.”

By embedding his calls in familiar soundscapes and routines, he disarms suspicion.

Escobar adds that even small personal details can tilt the balance. A reference to a co-worker’s recent holiday, spotted on social media, can be enough to establish credibility. “Friendly humans want to help other humans,” he says, “and hackers exploit that.”

The advent of artificial intelligence has only sharpened these tools. Deepfake audio and video, easily created with browser-based software, can mimic voices and appearances with uncanny accuracy.

Kerri Shafer-Page, vice-president of incident response at Arctic Wolf, calls AI a double-edged sword.

Defenders can use it to scan for vulnerabilities in seconds, but attackers can turn the same capability into hyper-personalised phishing. She notes that while multi-factor authentication (MFA) has been heralded as a safeguard, attackers exploit human fatigue, flooding phones with authentication prompts until users simply approve one to make the alerts go away.

“It’s not AI itself that is the threat,” she says, “it’s the person behind the keyboard and what they’re doing with it.”

If awareness is one line of defence, architecture is the other. Gunter Ollman, chief technology officer at Cobalt, argues that the industry has spent too long blaming users for security lapses instead of designing systems that assume human fallibility. “Training isn’t going to work,” he says.

“You assume people are going to click on everything. So how do you protect them no matter what? The dumbest thing you think they can do, they will do.”

That, he says, means systems must be engineered with resilience in mind, not dependent on perfect behaviour.

The growing dominance of these techniques was illustrated by the recent breach at HR software provider Workday.

Attackers mounted a social engineering campaign against one of its third-party CRM providers, impersonating IT and HR staff in calls and messages to employees.

Once inside, they accessed customer contact details such as names, email addresses and phone numbers. Workday said there is no evidence that tenant data was compromised, but the breach highlights how even partial information can be weaponised for follow-on attacks.

This incident is part of a wider wave of activity that has swept across major firms.

Groups linked to ShinyHunters and the collective known as “The Com” have leveraged voice phishing and OAuth-based ploys – abusing the protocol that lets one application access another on a user’s behalf – against CRM systems used by household-name companies including Google, Adidas, Qantas and Allianz.

Andy Piazza, senior director of threat intelligence at Palo Alto Networks’ Unit 42, believes the surge in these attacks is partly because exploiting humans is far cheaper than developing advanced malware.

“System exploitation is hard,” he says. “Human exploitation is a little bit easier.” He notes that once criminals gain access, they use legitimate administrator tools to blend in with normal activity, making detection harder. “It becomes a behavioural analysis problem,” he explains, “and that’s where you really need threat hunters paying close attention.”

Moving forward

For defenders, the path forward requires both cultural change and technical vigilance.

Help-desk staff and other frontline employees are often underpaid, undertrained and outsourced, yet they represent the first barrier against manipulation.


All of the experts urge organisations to give employees regular, realistic training based on actual attack attempts rather than abstract scenarios.

Others point to the importance of oversight for third-party systems, from CRM to HR platforms, which are increasingly targeted as indirect gateways into larger enterprises.

Yule stresses that organisations are beginning to recognise the problem, and are, or should be, tightening processes for something as simple as a password reset.

“At Sophos, if someone wants to change their password or reset their MFA, I have to get on a Teams call, with cameras on, and see their photo ID. That makes my job a lot harder – but it also makes an attacker’s job harder.”
