Study finds humans not completely useless at malware detection
Researchers from the Universities of Guelph and Waterloo have discovered exactly how users decide whether an application is legitimate or malware before installing it – and the good news is they’re better than you might expect, at least when primed to expect malware.
“Most existing malware research analyzes ‘after action’ reports,” co-author and Waterloo computer science professor Daniel Vogel explained in the paper’s announcement. “That is, investigations into what went wrong after a successful attack. Our study, which featured novice, intermediate and expert users, is the first malware research to observe user strategies in real time.”
The study had a relatively small participant pool of 36 users: novices drawn from jobs including customer service rep, administrative assistant, social worker, nurse, and entomologist, plus “intermediate” and “advanced” users working in IT management, software development, and threat analysis. Participants were placed in front of a Windows 10 laptop with a mocked-up Microsoft Teams interface, and tasked with deciding whether the software a “colleague” had just sent them was legitimate or malware.
Given the parlous state of IT security, you may be forgiven for expecting participants to perform poorly – but that wasn’t the case. With the proviso that the nature of the study primed participants to be suspicious of any and all software they received, 88 percent of the malware samples – simulated, de-fanged examples of the LockBit Black ransomware, Async Remote Access Trojan (RAT), and XMRIG CoinMiner – were correctly identified.
Where users fell down, the study found, was in correctly identifying legitimate software – “obscure” packages, by the authors’ own admission, including printer drivers and file-sharing applications. Here participants’ accuracy dropped to 62 percent, with the “advanced” users falling into a pit made of their own suspicious nature.
“The majority of false positives [in the advanced group] were due to the confusion caused by their prior knowledge,” the researchers found. “They tried to find indicators that would stoke their suspicion (e.g. fixating on information that was absent in metadata or in a system notification).”
Advanced users weren’t alone in flagging legitimate software as malicious, however. “It was interesting how novice users sometimes flagged legitimate software as malware due to a typo or poor interface design,” lead author Brandon Lit noted, “yet missed real malware when the clue was unusual system behavior, like high processor usage.”
In an interesting twist, the researchers repeated the test with the addition of a system monitoring tool, inspired by Windows’ Task Manager, which adds data such as the destination countries of network connections, verified publisher details associated with the executable, and file access lists organized by parent directory – all presented in a simplified user interface accessible to all.
Using this, malware detection accuracy jumped to 94 percent overall, thanks largely to a big boost to the “basic” users’ performance, with participants also taking around a minute less to come to a decision. Legitimate software still suffered from false positive flagging, though with a slight improvement to 66 percent accuracy.
“Just having a bit of information puts beginner users on par with computer scientists,” Lit said of the tool, which the researchers have released under an unspecified open source licence on GitHub. “Fostering critical thinking is one of the most important things we can do to increase security.”
The study also provides four “indicator categories” – executable properties, program behavior, program look and feel, and threat intelligence sources – broken down into 25 total indicators that participants used to make their decisions. It also flags a range of misconceptions that may be harming user security, the biggest being widespread confusion about the meaning of the shield icon overlaid on a Windows executable: designed by Microsoft to indicate an application that requests elevated privileges, it was interpreted by participants to mean “secure software.”
In an email exchange with The Register, Vogel, the paper’s corresponding author, told us:
“Our study shows that people should be aware of system resource usage, such as CPU load and network activity. If your CPU fan comes on and your network suddenly feels really slow, something unusual may be going on that could signal malware activity.
“Operating system developers could make it easier for people to see system resource usage. For example, adding a visualization to the task bar to show things like CPU load and network activity, or redesigning system monitoring tools to be more understandable for non-technical users.”
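The kind of ambient resource display Vogel describes can be sketched in a few lines. The following is a minimal illustration, not the researchers' actual tool: it polls CPU load and network throughput from the Linux /proc filesystem (so it assumes a Linux system), and the 90 percent "worth a closer look" threshold is purely illustrative.

```python
# Illustrative sketch of surfacing system resource usage to a user,
# in the spirit of the researchers' suggestion. Linux-only: reads
# /proc/stat (CPU time counters) and /proc/net/dev (interface byte
# counters). Thresholds and names here are our own, not the study's.

import time

def cpu_busy_fraction(interval=0.5):
    """Fraction of CPU time spent non-idle over `interval` seconds."""
    def snapshot():
        with open("/proc/stat") as f:
            # First line: "cpu user nice system idle iowait irq ..."
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait jiffies
        return idle, sum(fields)
    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    delta = total2 - total1
    return 1.0 - (idle2 - idle1) / delta if delta else 0.0

def net_bytes():
    """Total bytes received plus transmitted across all interfaces."""
    total = 0
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:  # skip the two header lines
            cols = line.split(":")[1].split()
            total += int(cols[0]) + int(cols[8])  # rx bytes + tx bytes
    return total

if __name__ == "__main__":
    busy = cpu_busy_fraction()
    before = net_bytes()
    time.sleep(0.5)
    rate = (net_bytes() - before) / 0.5
    print(f"CPU busy: {busy:.0%}, network: {rate / 1024:.1f} KiB/s")
    if busy > 0.9:  # illustrative threshold, not from the study
        print("Sustained high CPU load -- worth a closer look")
```

A real task-bar widget would poll these numbers continuously and visualize them; the hard part, per the study, is presenting them so non-technical users can act on them.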
The report is to be presented at the 34th USENIX Security Symposium later this month, with a preprint available on the conference website as a PDF download. ®