Cognitive Bias in Cybersecurity

I found this really interesting publication about cognitive bias in cybersecurity:

And it got me wondering: how prevalent is cognitive bias in research, academia, or even just general institutions? Here are some ideas I have. I don't want to categorize them specifically (because I'll probably just get it wrong), but someone else might.

Cognitive Bias in Research Cybersecurity

One might (wrongly) infer that:

  • Because the general researcher must be working for the greater good, they cannot engage in any malicious or immoral practices with data or privacy. (Aggregate bias?)
  • Working on any research cluster under my institution ensures that I am completely protected.
  • Using a cloud provider means that I don't need to worry about security.
  • If I run a service quickly and don't use SSL, it's probably okay just this one time. (A kind of collective action failure once everyone reasons the same way; see the sketch after this list.)
  • I don't understand what a CVE is, so it must be my system administrator's problem.
  • If I didn't check and see it in my data, or if I didn't clarify the rules for data on this cluster, there is no problem. (This covers the tendency to assume it's all those "other people," not you, violating some privacy rule with an imperfectly cleaned dataset.)
  • If I purchase a security product, I am safe.

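On the SSL point: standing up a quick service with TLS is barely more work than without it. Here is a minimal sketch in Python, assuming you have already generated a certificate and key (e.g. a short-lived self-signed pair via openssl); cert.pem and key.pem are placeholder filenames:

```python
# Quick one-off file server, wrapped in TLS instead of plain HTTP.
# Assumes a cert/key pair exists, e.g. generated with:
#   openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 30 -nodes
import http.server
import ssl

server = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

# Wrap the listening socket in TLS; cert.pem/key.pem are placeholder paths.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
server.socket = context.wrap_socket(server.socket, server_side=True)

server.serve_forever()
```

Even a self-signed certificate defeats passive eavesdropping, which is exactly the risk the "just this one time" reasoning waves away.
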
I think there are probably a lot of other examples one might come up with, and I thought it would be an interesting discussion to have. And then, importantly: how do we overcome some of these biases? What do you think are common ones? Which are most important?

In my experience, the following are a few other common cognitive biases in cybersecurity.

  1. The browser shows a green padlock, so the website must be legitimate (see the sketch after this list).
  2. My organization is compliant with standard XYZ, so I don't need to worry about security.
  3. I performed a risk assessment last year, so I don't need to perform one again this year.
  4. I have enabled 2FA on my accounts, so I'm completely protected against password attacks.

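On the padlock point: the padlock only proves that the connection is encrypted to whoever holds a valid certificate for that hostname; it says nothing about whether that party is legitimate. A minimal sketch in Python (the hostname is a placeholder; a look-alike phishing domain with its own certificate would pass exactly the same check):

```python
import socket
import ssl

hostname = "example.com"  # placeholder; any domain with a valid cert passes

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        # The certificate binds a public key to this hostname and nothing more;
        # it does not vouch for who operates the site or why.
        print("subject:", dict(pair[0] for pair in cert["subject"]))
        print("issuer: ", dict(pair[0] for pair in cert["issuer"]))
```

Certificate authorities will issue a certificate to a typosquatted domain just as readily as to the real one, so the padlock alone cannot tell the two apart.
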
To overcome such biases, we need to conduct security training and exercises that are more engaging and interactive for users. We (the security community) should clearly explain the risks and consequences of actions rather than simply handing out recommendations.

Here are a few additional examples of cognitive biases that may be worth considering:

  • Assuming that certain types of attacks or threats are not relevant to one’s organization or area of research, leading to an underestimation of risk.
  • Assuming that cybersecurity is solely the responsibility of IT or security personnel, rather than a shared responsibility among all employees or members of an institution.
  • Overreliance on a single solution or approach to cybersecurity, such as a particular software tool or security protocol, without considering the potential limitations or vulnerabilities of that solution.
  • Confirmation bias, where researchers or institutions may interpret data or information in a way that confirms their existing beliefs or assumptions about cybersecurity risks and solutions, rather than seeking out conflicting or alternative perspectives.
  • Attribution bias, where researchers or institutions may attribute the success or failure of cybersecurity efforts to internal factors (such as their own actions or decisions) rather than external factors (such as luck or the actions of attackers).

Overcoming cognitive biases in cybersecurity research, academia, and institutions can be challenging, but some strategies that may be helpful include:

  • Encouraging diversity of thought and perspectives among researchers and employees, in order to challenge assumptions and biases.
  • Providing ongoing education and training on cybersecurity best practices, as well as on the potential risks and biases that may arise in cybersecurity research and practice.
  • Encouraging open dialogue and collaboration among researchers and employees to identify potential biases and blind spots, and to develop solutions that are more comprehensive and effective.
  • Adopting a culture of continuous improvement, where cybersecurity strategies and practices are regularly evaluated and adjusted based on ongoing feedback and analysis.

How can we ensure that ethical considerations are given sufficient weight in cybersecurity research and practice, particularly given the potential for cognitive biases to cloud ethical judgment? What are some strategies for mitigating the impact of cognitive biases on cybersecurity research and practice? For example, how can we encourage more rigorous testing and evaluation of security solutions, or ensure that cybersecurity decisions are based on empirical evidence rather than intuition or assumptions?