Human Factors for Computer Security Professionals


Computer security is a critical task for any organization. Even though security automation has made significant strides in recent years, human effort is still required. Our goal is to develop a better understanding of how communities of practitioners perform security tasks and the obstacles they face. Through this work, we hope to produce tools, training, and policy that support the specific needs of security professionals.

Find Out More

About Our Research


To understand the human factors of computer security roles, we are studying the people who perform these roles: security professionals. Specifically, we focus on the following computer security professionals:

  • Software Developers, who design and build software that manages and protects sensitive information;
  • White-Hat Hackers and Software Testers, who find and report software security vulnerabilities;
  • Malware Analysts, who analyze malicious programs to determine how they work and mitigate the threats they pose;
  • Network Defenders, who architect and deploy network defenses (e.g., firewalls and intrusion detection systems), monitor logs, and respond to malicious events such as scanning and intrusions;
  • Security and Systems Administrators, who deploy and manage security-sensitive software and hardware systems;
  • Intelligence Analysts, who collect and analyze security-related data to build understanding and make predictions.

By understanding the methods, motivations, and learning processes of these various practitioners, we aspire to develop best practices, tools, training, and policies to support them.

Publications


Compliance Cautions: Investigating Security Issues Associated with U.S. Digital-Security Standards
Stevens, R., Dykstra, J., Everette, W.K., Chapman, J., Bladow, G., Farmer, A., Halliday, K., and Mazurek, M.L. (University of Maryland)
Proceedings of NDSS 2020, San Diego, CA, USA
Digital security compliance programs and policies serve as powerful tools for protecting organizations’ intellectual property, sensitive resources, customers, and employees through mandated security controls. Organizations place a significant emphasis on compliance and often conflate high compliance audit scores with strong security; however, no compliance standard has been systemically evaluated for security concerns that may exist even within fully-compliant organizations. In this study, we describe our approach for auditing three exemplar compliance standards that affect nearly every person within the United States: standards for federal tax information, credit card transactions, and the electric grid. We partner with organizations that use these standards to validate our findings within enterprise environments and provide first-hand narratives describing impact. We find that when compliance standards are used literally as checklists — a common occurrence, as confirmed by compliance experts — their technical controls and processes are not always sufficient. Security concerns can exist even with perfect compliance. We identified 148 issues of varying severity across three standards; our expert partners assessed 49 of these issues and validated that 36 were present in their own environments and 10 could plausibly occur elsewhere. We also discovered that no clearly-defined process exists for reporting security concerns associated with compliance standards; we report on our varying levels of success in responsibly disclosing our findings and influencing revisions to the affected standards. Overall, our results suggest that auditing compliance standards can provide valuable benefits to the security posture of compliant organizations.
Building and Validating a Scale for Secure Software Development Self-Efficacy
Votipka, D., Abrokwa, D., and Mazurek, M.L.
Proceedings of CHI 2020, Honolulu, HI, USA
Security is an essential component of the software development lifecycle. Researchers and practitioners have developed educational interventions, guidelines, security analysis tools, and new APIs aimed at improving security. However, measuring any resulting improvement in secure development skill is challenging. As a proxy for skill, we propose to measure self-efficacy, which has been shown to correlate with skill in other contexts. Here, we present a validated scale measuring secure software-development self-efficacy (SSD-SES). We first reviewed popular secure-development frameworks and surveyed 22 secure-development experts to identify 58 unique tasks. Next, we asked 311 developers—over multiple rounds—to rate their skill at each task. We iteratively updated our questions to ensure they were easily understandable, showed adequate variance between participants, and demonstrated reliability. Our final 15-item scale contains two sub-scales measuring belief in ability to perform vulnerability identification and mitigation as well as security communications tasks.
An Observational Investigation of Reverse Engineers’ Processes
Votipka, D., Rabin, S.M., Micinski, K., Foster, J.S., and Mazurek, M.L.
Proceedings of USENIX Security 2020, Boston, MA, USA
Reverse engineering is a complex process essential to software-security tasks such as vulnerability discovery and malware analysis. Significant research and engineering effort has gone into developing tools to support reverse engineers. However, little work has been done to understand the way reverse engineers think when analyzing programs, leaving tool developers to make interface design decisions based only on intuition. This paper takes a first step toward a better understanding of reverse engineers’ processes, with the goal of producing insights for improving interaction design for reverse engineering tools. We present the results of a semi-structured, observational interview study of reverse engineers (N=16). Each observation investigated the questions reverse engineers ask as they probe a program, how they answer these questions, and the decisions they make throughout the reverse engineering process. From the interview responses, we distill a model of the reverse engineering process, divided into three phases: overview, sub-component scanning, and focused experimentation. Each analysis phase’s results feed the next as reverse engineers’ mental representations become more concrete. We find that reverse engineers typically use static methods in the first two phases, but dynamic methods in the final phase, with experience playing large, but varying, roles in each phase. Based on these results, we provide five interaction design guidelines for reverse engineering tools.
Understanding security mistakes developers make: Qualitative analysis from Build It, Break It, Fix It
Votipka, D., Fulton, K.R., Parker, J., Hou, M., Mazurek, M.L., and Hicks, M.
Proceedings of USENIX Security 2020, Boston, MA, USA
Secure software development is a challenging task requiring consideration of many possible threats and mitigations. This paper investigates how and why programmers, despite a baseline of security experience, make security-relevant errors. To do this, we conducted an in-depth analysis of 94 submissions to a secure-programming contest designed to mimic real-world constraints: correctness, performance, and security. In addition to writing secure code, participants were asked to search for vulnerabilities in other teams’ programs; in total, teams submitted 866 exploits against the submissions we considered. Over an intensive six-month period, we used iterative open coding to manually, but systematically, characterize each submitted project and vulnerability (including vulnerabilities we identified ourselves). We labeled vulnerabilities by type, attacker control allowed, and ease of exploitation, and projects according to security implementation strategy. Several patterns emerged. For example, simple mistakes were least common: only 21% of projects introduced such an error. Conversely, vulnerabilities arising from a misunderstanding of security concepts were significantly more common, appearing in 78% of projects. Our results have implications for improving secure-programming APIs, API documentation, vulnerability-finding tools, and security education.
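The abstract distinguishes simple mistakes from misunderstandings of security concepts. The sketch below is a hedged illustration of the second category, not code drawn from any contest submission: hashing passwords with a fast, unsalted function is functionally correct but conceptually wrong, whereas a salted, deliberately slow key-derivation function reflects the intended concept.

    # Illustrative sketch only; not taken from any Build It, Break It, Fix It submission.
    import hashlib
    import os

    # Conceptual misunderstanding: a fast, unsalted hash "works" but offers
    # little protection against offline password guessing.
    def store_password_weak(password: str) -> str:
        return hashlib.md5(password.encode()).hexdigest()

    # Closer to the intended concept: a per-user salt plus a slow key-derivation
    # function raises the attacker's cost per guess.
    def store_password_better(password: str) -> str:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt.hex() + ":" + digest.hex()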
A Qualitative Investigation of Insecure Code Propagation from Online Forums
Bai, W., Akgul, O., and Mazurek, M.L.
Proceedings of IEEE SecDev 2019, Washington, D.C., USA
Research demonstrates that code snippets listed on programming-oriented online forums (e.g., Stack Overflow) – including snippets containing security mistakes – make their way into production code. Prior work also shows that software developers who reference Stack Overflow in their development cycle produce less secure code. While there are many plausible explanations for why developers propagate insecure code in this manner, there is little or no empirical evidence. To address this question, we identify Stack Overflow code snippets that contain security errors and find clones of these snippets in open source GitHub repositories. We then survey (n=133) and interview (n=15) the authors of these GitHub repositories to explore how and why these errors were introduced. We find that some developers (perhaps mistakenly) trust their security skills to validate the code they import, but the majority admit they would need to learn more about security before they could properly perform such validation. Further, although some prioritize functionality over security, others believe that ensuring security is not, or should not be, their responsibility. Our results have implications for attempts to ameliorate the propagation of this insecure code.
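The specific Stack Overflow snippets and GitHub clones examined in the study are not reproduced here. As a hedged illustration of the general pattern the paper describes, a frequently shared forum workaround silences TLS certificate errors by disabling verification entirely:

    # Illustrative only; not a snippet from the study. Uses the third-party
    # "requests" library (pip install requests); the URL and file name are placeholders.
    import requests

    # Forum-style workaround: certificate errors disappear, but so does
    # protection against man-in-the-middle attacks.
    response = requests.get("https://internal.example.com/api", verify=False)

    # Safer alternative: keep verification enabled and supply the site's
    # CA bundle if it uses an internal certificate authority.
    response = requests.get("https://internal.example.com/api", verify="internal-ca.pem")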
Toward a Field Study on the Impact of Hacking Competitions on Secure Development [Slides]
Votipka, D., Hu, H., Eastes, B., and Mazurek, M.
Proceedings of WSIW 2018, Baltimore, MD, USA
The ability to find and fix vulnerabilities is critical to providing secure software to users. Previous research has shown that the main difference between experts who specialize in finding security flaws and general software practitioners (i.e., developers and testers) is that the experts have been exposed to more potential security issues. To bridge this experience gap, computer security competitions, called Capture-the-Flag (CTF) competitions, have been held in both academic and corporate settings. Using a mixed-methods approach, we examine in a field setting whether CTF competitions improve participants’ ability to identify security weaknesses and write more secure code. Our initial results indicate that CTFs have a positive effect on security thinking, encourage communication with the security team, and reduce overconfidence in participants’ ability to handle complex security problems.
Hackers vs. Testers: A Comparison of Software Vulnerability Discovery Processes [Slides, Talk, Poster] (SOUPS '18 Distinguished Poster Award)
Votipka, D., Stevens, R., Redmiles, E., Hu, J. and Mazurek, M.
Proceedings of IEEE S&P 2018, San Francisco, California, USA
Identifying security vulnerabilities in software is a critical task that requires significant human effort. Currently, vulnerability discovery is often the responsibility of software testers before release and white-hat hackers (often within bug bounty programs) afterward. This arrangement can be ad-hoc and far from ideal; for example, if testers could identify more vulnerabilities, software would be more secure at release time. Thus far, however, the processes used by each group — and how they compare to and interact with each other — have not been well studied. This paper takes a first step toward better understanding, and eventually improving, this ecosystem: we report on a semi-structured interview study (n=25) with both testers and hackers, focusing on how each group finds vulnerabilities, how they develop their skills, and the challenges they face. The results suggest that hackers and testers follow similar processes, but get different results due largely to differing experiences and therefore different underlying knowledge of security concepts. Based on these results, we provide recommendations to support improved security training for testers, better communication between hackers and developers, and smarter bug bounty policies to motivate hacker participation.
The Battle for New York: A Case Study of Applied Digital Threat Modeling at the Enterprise Level [Slides, Talk] (USENIX Security '18 Distinguished Paper Award)
Stevens, R., Votipka, D., Redmiles, E., Ahern, C., Sweeney, P. and Mazurek, M.
Proceedings of USENIX Security 2018, Baltimore, Maryland, USA
Digital security professionals use threat modeling to assess and improve the security posture of an organization or product. However, no threat-modeling techniques have been systematically evaluated in a real-world, enterprise environment. In this case study, we introduce formalized threat modeling to New York City Cyber Command: the primary digital defense organization for the most populous city in the United States.
We find that threat modeling improved self-efficacy; 20 of 25 participants regularly incorporated it within their daily duties 30 days after training, without further prompting. After 120 days, implemented participant-designed threat mitigation strategies provided tangible security benefits for NYC, including blocking 541 unique intrusion attempts, preventing the hijacking of five privileged user accounts, and addressing three public-facing server vulnerabilities. Overall, these results suggest that the introduction of threat modeling can provide valuable benefits in an enterprise setting.
Developers need support, too: A survey of security advice for software developers
Acar, Y., Stransky, C., Wermke, D., Weir, C., Mazurek, M. and Fahl, S.
Proceedings of IEEE Secure Development Conference 2017, Cambridge, Massachusetts, USA
Increasingly developers are becoming aware of the importance of software security, as frequent high-profile security incidents emphasize the need for secure code. Faced with this new problem, most developers will use their normal approach: web search. But are the resulting web resources useful and effective at promoting security in practice? Recent research has identified security problems arising from Q&A resources that help with specific secure-programming problems, but the web also contains many general resources that discuss security and secure programming more broadly, and to our knowledge few if any of these have been empirically evaluated. The continuing prevalence of security bugs suggests that this guidance ecosystem is not currently working well enough: either effective guidance is not available, or it is not reaching the developers who need it. This paper takes a first step toward understanding and improving this guidance ecosystem by identifying and analyzing 19 general advice resources. The results identify important gaps in the current ecosystem and provide a basis for future work evaluating existing resources and developing new ones to fill these gaps.
Security developer studies with GitHub users: Exploring a convenience sample
Acar, Y., Stransky, C., Wermke, D., Mazurek, M. and Fahl, S.
Proceedings of USENIX Symposium on Usable Privacy and Security 2017, Santa Clara, California, USA
The usable security community is increasingly considering how to improve security decision-making not only for end users, but also for information technology professionals, including system administrators and software developers. Recruiting these professionals for user studies can prove challenging, as, relative to end users more generally, they are limited in numbers, geographically concentrated, and accustomed to higher compensation. One potential approach is to recruit active GitHub users, who are (in some ways) conveniently available for online studies. However, it is not well understood how GitHub users perform when working on security-related tasks. As a first step in addressing this question, we conducted an experiment in which we recruited 307 active GitHub users to each complete the same security-relevant programming tasks. We compared the results in terms of functional correctness as well as security, finding differences in performance for both security and functionality related to the participant's self-reported years of experience, but no statistically significant differences related to the participant's self-reported status as a student, status as a professional developer, or security background. These results provide initial evidence for how to think about validity when recruiting convenience samples as substitutes for professional developers in security developer studies.
Comparing the usability of cryptographic APIs
Acar, Y., Backes, M., Fahl, S., Garfinkel, S., Kim, D., Mazurek, M., and Stransky, C.
Proceedings of IEEE Symposium on Security and Privacy 2017, San Jose, California, USA
Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable; however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits—reducing the decision space, as expected, prevents choice of insecure parameters—simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions; however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.
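The abstract argues that shrinking the decision space helps developers avoid insecure parameters, but that documentation and examples matter as much as simplicity. The sketch below is illustrative only and assumes the third-party Python cryptography package rather than one of the five APIs studied: the low-level interface requires the developer to choose a mode, and an insecure choice still works, while the high-level Fernet interface fixes the algorithm, mode, and integrity protection.

    # Illustrative sketch only; assumes the "cryptography" package
    # (pip install cryptography), not a library from the study.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.fernet import Fernet

    # Low-level interface: every parameter is the developer's choice, and ECB
    # mode "works" even though it leaks patterns in the plaintext.
    key = os.urandom(32)
    weak_cipher = Cipher(algorithms.AES(key), modes.ECB())
    ciphertext = weak_cipher.encryptor().update(b"16-byte message!")

    # High-level interface: Fernet removes the mode and padding decisions and
    # authenticates the ciphertext, narrowing the decision space.
    fernet = Fernet(Fernet.generate_key())
    token = fernet.encrypt(b"attack at dawn")
    plaintext = fernet.decrypt(token)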
Build It, Break It, Fix It: Contesting Secure Development
Ruef, A., Hicks, M., Parker, J., Levin, D., Mazurek, M., and Mardziel, P.
Proceedings of ACM Conference on Computer and Communications Security 2016, Vienna, Austria
Typical security contests focus on breaking or mitigating the impact of buggy systems. We present the Build-it, Break-it, Fix-it (BIBIFI) contest, which aims to assess the ability to securely build software, not just break it. In BIBIFI, teams build specified software with the goal of maximizing correctness, performance, and security. The latter is tested when teams attempt to break other teams' submissions. Winners are chosen from among the best builders and the best breakers. BIBIFI was designed to be open-ended: teams can use any language, tool, process, etc. that they like. As such, contest outcomes shed light on factors that correlate with successfully building secure software and breaking insecure software. During 2015, we ran three contests involving a total of 116 teams and two different programming problems. Quantitative analysis from these contests found that the most efficient build-it submissions used C/C++, but submissions coded in other statically-typed languages were less likely to have a security flaw; build-it teams with diverse programming-language knowledge also produced more secure code. Shorter programs correlated with better scores. Break-it teams that were also successful build-it teams were significantly better at finding security bugs.

People


Michelle L. Mazurek

Principal Investigator

Daniel Votipka

Graduate Research Assistant

Rock Stevens

Graduate Research Assistant

Elissa M. Redmiles

Graduate Research Assistant

Kelsey Fulton

Graduate Research Assistant

James Parker

Graduate Research Assistant

Seth Rabin

Undergraduate Research Assistant

Desiree Abrokwa

Undergraduate Research Assistant

Participate in Research


We are always looking for new participants to join our research panel of computer security professionals. Click the button below to learn more about joining the panel. Thank you in advance for your interest.

Learn about our research panel

Contact Us


Have any questions, comments, or concerns about our research?
Please feel free to reach out to us!