Why human risk management is key to data protection

Category: Data Loss Prevention
By John Scott, Lead Cyber Security Researcher

Your personal data is constantly being processed and transferred in numerous ways – in healthcare applications, store loyalty programmes, during purchases, or while browsing online. And with such a vast amount of personal data in circulation, the likelihood of errors occurring is heightened.

It feels like almost every day we hear about another company being breached – with your data taken by cyber criminals looking to steal your identity, access your accounts, or commit fraud. Things are also getting easier for these criminals, thanks to advances such as generative AI helping them craft more convincing phishing emails and deepfake content.

So, what should companies be doing to protect your data? And if you work for or own an organisation that manages other people’s data, what can you do to help? 


Understand that human error is inevitable 

Each year, research such as the Verizon Data Breach Investigations Report (DBIR) shows that the human element is a significant factor in 74% or more of breaches. Bluntly, people make mistakes, and cyber criminals know how to exploit those mistakes. That isn’t to say that the people in your organisation are the weakest link. In fact, they can be one of the strongest defences you have, if they’re given the right support, training, and tools to help protect your data and that of your customers and clients.

When it comes to managing human risks, you probably already have all the data you need to get started. An effective strategy for approaching human risk management is CultureAI’s “Monitor, Reduce, Fix” framework. Start by analysing the data on the risks that your employees are causing, coach them to reduce the likelihood or severity of incidents, and fix the issues raised automatically or nudge the employee to fix them directly. 
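To make the framework concrete, here is a minimal sketch of what a “Monitor, Reduce, Fix” loop could look like in code. The event shape and the helper functions are illustrative assumptions made for this example only, not CultureAI’s actual API.

```python
# Hypothetical sketch of a "Monitor, Reduce, Fix" loop.
# The event shape and helpers below are illustrative assumptions,
# not any vendor's real API.
from dataclasses import dataclass


@dataclass
class RiskEvent:
    employee: str
    behaviour: str     # e.g. "password_reuse", "pii_in_public_channel"
    severity: int      # 1 (low) to 5 (high)
    auto_fixable: bool


def auto_remediate(event: RiskEvent) -> None:
    # Fix: remediate automatically where it's safe to do so,
    # e.g. quarantine a message or force a password reset.
    print(f"[fix] auto-remediating {event.behaviour} for {event.employee}")


def send_nudge(event: RiskEvent) -> None:
    # Reduce: coach the employee in the moment, rather than waiting
    # for next year's training module.
    print(f"[nudge] {event.employee}: we spotted {event.behaviour} - "
          f"here's how to put it right")


def handle(event: RiskEvent) -> None:
    # Monitor: every risky behaviour your integrations observe arrives as an event.
    print(f"[monitor] {event.employee} -> {event.behaviour} (severity {event.severity})")
    if event.auto_fixable:
        auto_remediate(event)
    else:
        send_nudge(event)


# Two examples of the kind of events your existing tooling may already surface.
handle(RiskEvent("alice", "pii_in_public_channel", severity=4, auto_fixable=True))
handle(RiskEvent("bob", "password_reuse", severity=3, auto_fixable=False))
```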

Organisations that do well at protecting personal data tend to have a positive attitude towards security – what we’d call a strong security culture. One of the key indicators of a strong security culture is when people in your organisation are not afraid to come forward when they have made a mistake. If your colleagues feel safe, knowing that they won’t get blamed for an honest mistake and that your organisation is going to work with them to rectify the problem, then they will tell you what needs to be fixed. 

And what if they don’t feel safe? As Sidney Dekker puts it in ‘The Field Guide to Understanding Human Error’, when you have a punitive culture, where people feel they will be punished for making mistakes, you don’t stop having errors – you just stop finding out about them until it’s too late to fix them.

How can human risk management help you create a strong security culture? 

Encourage people to slow down 

One of the times we’re most likely to make mistakes is when we’re in a hurry. No matter how much training we’ve had, if we’re rushing to meet a deadline it’s easy to cut corners or lose focus on security. So encourage people to slow down and double-check, even if that delays things a little. In most cases it’s better to do something safely than swiftly.

Prompt rather than train 

Most people must take mandatory security training each year, but there’s very little evidence that this has any impact on their behaviour. Instead, why not prompt people when they’re doing something particularly risky, using nudges or other interventions to get them to think about what they’re doing? 

Raise awareness, but don’t scare people 

Make sure that when you’re telling your colleagues about a new risk or threat, you are very clear on how they can effectively manage it. There’s no point in telling people to avoid a zero-click, zero-day text message attack – they might not even know what that is, and even if they do, they can’t stop messages being sent to them. The important thing is that they know what to do if they see something suspicious.

Watch for mistakes, and help colleagues fix them 

Tired and stressed people make mistakes – and simply telling them not to, or shouting at them when they do, doesn’t fix anything. An effective human risk management platform will integrate with your existing tech stack and flag mistakes as they happen, such as sharing personal information in public chat channels or reusing passwords across SaaS applications – and automatically nudge the person carrying out that risky behaviour to help them fix it.
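As a rough illustration of the kind of check such a platform might run, here is a simplified sketch that scans a chat message for obvious personal data and nudges the sender. The detection patterns, channel naming, and nudge function are assumptions made purely for the sake of example.

```python
# Simplified, hypothetical example of spotting personal data in a public
# chat message and nudging the sender. The patterns and channel naming
# convention are assumptions made purely for illustration.
import re

PII_PATTERNS = {
    "an email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "a card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def check_message(sender: str, channel: str, text: str) -> None:
    findings = [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]
    if findings and channel.startswith("#public"):
        # Nudge, don't punish: say what was spotted and how to fix it.
        print(f"[nudge] {sender}: your message in {channel} appears to contain "
              f"{' and '.join(findings)} - please delete it and share it via a secure channel instead.")


check_message("carol", "#public-support", "Customer's email is jane.doe@example.com")
```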

Reward the positive 

Monitor for good behaviours and use recognition and reward to call them out to others. You might have an internal reward platform you can use, or you might simply get your CISO to send a thank you email (copying in the colleague’s manager, of course). People gossip and tell stories – wouldn’t it be great if one of those stories was how nice the security team was? 


Robust data protection requires a comprehensive, multi-layered approach to security. Proactively managing human risk in real time promotes secure behaviours and minimises the impact of human error.

This is best achieved by working with human risk management providers such as CultureAI, who understand human behaviour and have developed solutions that coach employees in the moment and automatically fix risks before they escalate into issues. Through this process, employees gain insight into the evolving threat landscape and the tools they need to respond effectively when it matters.
