Cognitive Bias in Cybersecurity

I found this really interesting publication about cognitive bias in cybersecurity:

And it got me wondering: how prevalent is cognitive bias in research, academia, or even just general institutions? Here are some ideas I have. I don’t want to categorize them specifically (because I’d probably just get it wrong), but someone else might.

Cognitive Bias in Research Cybersecurity

I infer that:

  • Because the typical researcher must be working for the greater good, they cannot be engaging in any malicious or unethical practices with data or privacy (aggregate bias?).
  • Working on any research cluster run by my institution ensures that I am completely protected.
  • Using a cloud provider means that I don’t need to worry about security.
  • If I spin up a service quickly and don’t use SSL/TLS, it’s probably okay just this one time (a kind of collective action failure when everyone makes the same exception; see the sketch after this list).
  • I don’t understand what a CVE (Common Vulnerabilities and Exposures entry) is, so it must be my system administrator’s problem.
  • If I didn’t check and see it in my data, or if I didn’t clarify the rules for data on this cluster, there is no problem (this covers the tendency to assume that it’s all those “other people,” not you, who are violating some privacy rule with an imperfectly cleaned dataset).
  • If I purchase a security product, I am safe.
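
On the SSL point, here is a minimal sketch of what the “just this one time” shortcut actually saves. This assumes a self-signed certificate/key pair at cert.pem and key.pem (hypothetical file names for illustration); with that in place, Python’s built-in http.server can be wrapped with TLS in just a few extra lines:

```python
# Minimal sketch: a quick file server over HTTPS instead of plain HTTP.
# Assumes a (self-signed) certificate/key pair already exists, e.g. from:
#   openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 30 -nodes
import http.server
import ssl

address = ("0.0.0.0", 8443)
httpd = http.server.HTTPServer(address, http.server.SimpleHTTPRequestHandler)

# Wrap the listening socket with TLS before serving anything.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print("Serving https://%s:%d/" % address)
httpd.serve_forever()
```

A self-signed certificate is no substitute for a properly issued one, but even for a throwaway service it keeps the traffic encrypted, which undercuts the “it’s only this once” rationalization.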

I think there are probably a lot of other examples one might come up with, and I thought it would be an interesting discussion to have. And then, importantly: how do we overcome some of these biases? What do you think the common ones are? Which are most important?