Justin Giboney, David Wilson, and Alexandra Durcikova
This paper develops a theoretical model to help answer the following question: What influences an individual's opinions regarding the right to privacy of others (individuals, organizations, and governments)? This question is particularly relevant for organizations and governments, for whom insider threats to corporate or government privacy present a dangerous risk. We draw from three bases of literature (human rights, privacy, and transparency) to theorize several constructs that should account for individual attributions of another's right to privacy. In addition, we differentiate between individual, company, and government rights to privacy, justify the different origins of each entity type's right to privacy, and propose different effects of our model's predictors on these entity types. The model presented here is the first of its kind in the IS literature and lays the groundwork for future contributions in academic research and for greater understanding among practitioners.
Philip Menard, Mikko Siponen, and Merrill Warkentin
Organizations continue to be concerned with information breaches, some of which result from employees' non-compliance with organizational information security procedures. The leading theory used to explain non-compliance behavior in IS security research is deterrence theory (DT). Applications of DT in IS security suggest that increasing the severity, certainty, and celerity of sanctions will bring employees' intentions into closer alignment with organizational security procedures. We argue that although IS applications of DT may increase security compliance by performing DT's functional role of protecting business information, they may simultaneously prevent an organization from achieving its primary business goals. Specifically, grounded in self-determination theory, we argue that the introduction of sanction systems as applied from DT has adverse effects on employees' work motivation, job performance, organization-based self-esteem, organizational commitment, and job satisfaction, all of which are linked to the performance of the organization's primary function. Self-determination theory thus implies that DT, in attempting to perform its functional role via sanctions, may hinder the organization's primary goals through decreased work performance. Previous research has focused on the effect of DT on the functional role while ignoring the effect of DT on primary business goals. This research-in-progress paper focuses on this unstudied area of IS, namely the negative implications of DT for key indicators of human performance in corporations, such as work motivation, job performance, organization-based self-esteem, organizational commitment, and job satisfaction. We also examine when sanctions become so excessive that they negatively affect work performance and other critical human performance factors. Data will be collected using the factorial survey method.
Sean Browne and Michael Lang
A deficiency exists in the information systems security literature because of the tendency to regard IT threat avoidance and IT security adoption as separate behaviours. In addressing this deficiency, this research in progress focuses on entrepreneurial SME companies for several reasons: their global strategic importance, the current trend among cybercriminals toward high-volume, low-risk attacks against weaker targets, and the individualistic behavioural patterns found in SMEs. Drawing on several well-established behavioural theories, this paper synthesises elements of these theories into a holistic behavioural model, with coping theory placed firmly at its centre. This study will make several contributions to the field: first, by creating an empirically validated model of behaviours surrounding both avoidance and preventative actions in entrepreneurial small firms, and second, by presenting and prioritising a specific view of the external factors influencing how threats are appraised, assessed, and dealt with.
Mark Keith, Nam Ngo, and Jeffry Babb
Existing research on information privacy behaviors in the information technology context has had difficulty explaining information disclosure behaviors. The "privacy paradox" is the term used to describe this phenomenon: information disclosure intentions are a poor indicator of actual behaviors. The dominant theories used to explain information disclosure are based on only a cross-sectional view of consumer attitudes, beliefs, and behaviors. This is a limitation because privacy beliefs are known to change dramatically over time with experience and education. This study develops a modified information disclosure theory, based on privacy calculus, that accounts for consumers' long-term risk expectations and their own self-regulation. We test this model in two ways. First, we employ a realistic mobile app-based field experiment that studies initial information disclosure based on longitudinal risk expectations. Second, we employ a longitudinal field study (in progress), also based on mobile application consumers, which shows the effects of risk education and self-regulation over time. Mobile devices are a particularly useful context because they integrate many forms of information privacy in a single device (e.g., personal data, location data, preferences, social networks). Our results indicate that self-regulation (patience and action control) is a significant moderator of the perceived risks and benefits of information disclosure.
Burcu Bulgurcu, Hasan Cavusoglu, and Izak Benbasat
Today, sensitive personal information is increasingly being traded off for expected returns from technology use. This can violate an individual's information privacy, damage social and professional relationships, or harm overall well-being. This research, in the context of social apps, focuses on technology users' vulnerabilities to information privacy risks and specifically identifies conditions under which individuals grant social apps permission to access and utilize their sensitive personal information. Our key objectives are: (i) to understand whether a user's willingness to share personal information is affected by the type of benefits (i.e., hedonic and utilitarian) the user obtains from a social app, in particular when both perceived benefit and risk are simultaneously high; and (ii) to explain the role of app developers' choices (the permissions requested and the privacy controls provided by an application) in shaping users' benefit and risk perceptions. Based on an analysis of data collected from 747 Facebook users through a scenario-based experiment, we show the following results: (i) While the permissions requested by a social app increase a user's privacy risk perceptions, this can be countered by the privacy controls provided, as long as the perceived privacy risk caused by the extent of the permissions requested is not too high; otherwise, the privacy controls provided play a limited role in reducing privacy risks. (ii) The extent of permissions requested not only increases a user's risk perceptions but also reduces the perceived benefit a user derives from using the app. (iii) A user's overall perceived benefit serves to lower the influence of perceived privacy risk on his attitude towards using an app, especially when perceived benefit is high, thus making him more vulnerable to information privacy risks. (iv) A user perceives less benefit associated with using an application when perceived potential privacy risks are higher. The results of this study offer important theoretical and practical implications. Theoretically, this study expands our understanding of the antecedents of a user's risk and benefit perceptions and of how these perceptions affect the user's attitude towards using social applications, especially in situations where both perceptions are high. For practitioners, our results highlight the importance of requesting a minimal number of permissions and of providing privacy controls to reduce a user's risk perceptions, by illustrating the significant impact of privacy controls and permission requests on a user's perception of a social application and, in turn, his willingness to share personal information with the app.
Xueyu Jin, Nan Zhang, and Mikko Siponen
As social media becomes ubiquitous, a better understanding of users' privacy issues is critical. However, due to the complexity of the privacy concept, it is hard to know how individuals define their privacy in the virtual world. In this paper, we combine two main streams of privacy research to identify the individual's psychological boundary between the private and public domains. Specifically, drawing on psychological ownership theory, we first define what an individual's online privacy is and then identify the antecedents of the individual's online privacy concerns. We empirically validate our model with data collected from Sina Weibo, one of the largest social media websites in China. The study provides a new perspective for systematically investigating privacy issues in the context of social media. The results are also expected to help social media vendors develop more effective features to protect users' online privacy.
Teodor Sommestad, Henrik Karlzén, and Jonas Hallberg
The behavior of individuals is critical to the information security of organizations and individuals alike. This paper presents a meta-analysis of the ability of protection motivation theory (PMT) to explain variance in secure behavior that is voluntary and up to the individual, as well as secure behavior that is mandatory and required by someone else. In addition, we assess whether the theory is better at predicting specific behaviors than more general ones, and whether it is better at predicting behaviors where the information security threat is directed towards the person rather than the person's organization or someone else. Data from 28 surveys are synthesized and the results of three experiments are described. Support was found for all the relationships predicted by the PMT for both voluntary and mandatory behavior. For both voluntary and mandatory behavior, the PMT explains approximately 40 percent of the variance in behavior. The weighted mean correlation coefficients for PMT variables reported in studies of specific behaviors are higher than those in studies of general behaviors (differences of 0.00 to 0.11). The weighted mean correlation coefficients for threat appraisal are also higher when the threat targets the individual person rather than the person's organization or someone else (differences of 0.04 and 0.13).
Jeff Jenkins, Alexandra Durcikova, and Shane Reeves
People use computers for a myriad of purposes, including accomplishing work-related tasks, entertainment, and communication. At the same time, users must inevitably interact with security mechanisms and make security decisions. Humans, however, normally have trouble performing two or more relatively simple tasks concurrently, a phenomenon known as dual-task interference. This research-in-progress article explores how dual-task interference influences users' secure behavior (i.e., compliance with organizational security policies and best practices). The article hypothesizes how dual-task interference may influence two types of common security activities: a) security activities that are performed simultaneously with other activities (e.g., assessing the credibility of sources while reading emails or browsing the web), and b) security activities that interrupt users' other activities to request a security action (e.g., warnings or prompts). The article proposes two experiments to test the hypotheses about how dual-task interference influences these two types of security activities. At the time of this submission, pilot tests were underway to further refine the experimental designs. If the hypotheses are supported, this research highlights the need for future research that focuses on alleviating the effects of dual-task interference (a physiological limitation of humans), in addition to existing approaches that explore how to improve users' security beliefs and intentions.
Bonnie Anderson, Tony Vance, Brock Kirwan and David Eargle
Warning messages are one of the last lines of defense in computer security and are fundamental to users' security interactions with technology. A key contributor to pervasive user disregard of security warnings is habituation, the diminishing of attention due to frequent exposure to a warning. Research that has examined habituation and security warnings has done so indirectly, by observing the influence of habituation on security behavior rather than measuring habituation itself. This study seeks to contribute by using neuroscience to open the “black box” of the brain and observe habituation as it occurs. Specifically, we point to the repetition suppression (RS) effect, the reduction of neural responses to stimuli that are viewed repeatedly, a phenomenon directly antecedent to the process of habituation. By investigating how repetition suppression occurs in the brain, we can take a more precise approach to designing security warnings that are resistant to, or can possibly even reverse, the effects of habituation. We propose a series of three laboratory experiments using functional magnetic resonance imaging (fMRI) to observe brain activity and improve user interaction with security warnings.
David Eargle, Dennis Galletta, and Greg Siegle
Many information security breaches can be traced to negligent actions by organizational insiders. Individuals may fail to heed protective security messages such as prompts to install security updates or warnings about websites serving malicious software. Research on security warnings has found that warnings are frequently disregarded by users, for reasons such as users not perceiving threats or habitual non-attention to the messages, with users dismissing them as quickly as possible in order to continue with primary, interrupted tasks. To address the long-standing problems of low threat perceptions and inattention to protective security messages, we propose a novel application of theory from neuroscience research on human reactions to displayed emotions, specifically fearful facial expressions, which have been used to boost perceptions of fear in observers. We describe fMRI and field studies that will test several behavioral and neurological hypotheses relating to the research question of how fearful facial expressions designed into protective security messages influence secure end-user behavior. Expected contributions to academia and practice are discussed.
Rachel Chung and Dennis Galletta
Behavioral genetics offers numerous opportunities to bridge gaps between biological research and organizational science and to shed light on the nature versus nurture debate. This study seeks to explain persistent vulnerability to behavioral security risks from a genetic perspective. A synthesis of the current literatures on cognitive neuroscience, decision making, and behavioral security suggests that there may be a genetic basis for user susceptibility to security risks. Using the classic twin design, this study estimates the heritability of behavioral security to be up to 36% by comparing concordance between 144 pairs of monozygotic (MZ) twins and 98 pairs of same-sex dizygotic (DZ) twins on a behavioral security test. Zygosity of the twin pairs serves as the primary independent variable in these behavioral genetics analyses. The results suggest that behavioral security is explained largely by both shared and non-shared environmental influences. Implications of the study results are discussed with respect to anti-fraud research as well as managerial practices.
Gunnar Wahlgren and Stewart Kowalski
In this early-stage paper we present a draft of an IT Security Risk Escalation Capability Maturity Model. This model is used to develop a new approach to IT security risk management in which risk management is a recurring activity at all levels of the organization, including the strategic, tactical, and operational levels. To construct the model, we combined the ISO 27005 framework for IT security risk management with the NIST multitier framework and took elements from the ISACA Risk IT framework. We end the paper with an outline of our current plans to evaluate this escalation maturity model by using expert groups to rank the responses of different organizations, themselves ranked according to the maturity model, to similar IT incidents. In this way we hope to establish whether an organization's maturity level correlates with how well it responds to an IT incident.
Tejaswini Herath and John D'Arcy
Hwee-Joo Kam, Pairin Katerattanakul, and Greg Gogolin
Using a multiple case study approach, temporal analysis, and Social Network Analysis, this study examined the antecedents of knowledge management that drove Cyber learning in Cyber Jihad. By investigating the cases of “Jihad Jane” and “Terrorist 007”, this study found that knowledge management was driven by trust, social interaction ties, group identification, reciprocity norms, shared vision, shared language, and community well-being. In addition, personal outcome expectations were the effect, rather than the antecedent, of knowledge sharing among Cyber Jihadists. Furthermore, a high degree of knowledge management expanded the network of Cyber Jihad, enabling Cyber learning across multiple networks and supporting an intuitive learning culture. Based on these findings, this study presents counterterrorism strategies and theoretical implications.
Kennedy Njenga and Jordaan Pierre
Information security decisions in small business organisations can be understood not by examining how rational the decisions made by owners/managers are, but by understanding the heuristic interplay between cognition and context. Using bounded rationality as a theoretical lens, we argue that responses to information security are generally not aimed at 'optimising' information security but rather represent 'satisficed' states. Surprisingly, findings from four cases extend the idea that, in an effort to satisfice, some of the main information security tenets solidly grounded in the discipline are consciously or unconsciously flouted by some of these small business organisations. These findings reflect the pragmatic heuristic rules at play in small business organisations. These heuristics are not to be judged as good or bad per se, but should be seen as applying solely in the context of the environments in which they are used. The methodological implications of the study are that information security principles adopted by small enterprises need to be understood as functions of cognition and environmental structures.
Esther L. Mead and Rachida Parks
Despite its popularity for cashless lunch lines, recording students' attendance, and checking out books from the school library, the use of biometrics in schools is making headlines in the news and raising major privacy and security concerns about this identity-aware technology. Biometrics tends to be viewed as putting students' Personally Identifiable Information at risk, and the issue of parental and student consent to collect biometric data has become a simultaneous focal point in the societal discussion. Understanding this phenomenon requires a focus on the interplay between the technology of biometrics and its social implications. Applying a socio-technical approach, we attempt to gain a clear understanding of the relationship between the technology of biometrics and its social environment. Through our analysis, we identify technical and social implications that include security advantages and concerns, privacy concerns, awareness issues, and legal considerations. The examination of the complex social and technical interactions of biometrics provides a unique perspective and spurs the development of research propositions. The research has implications for schools, parents, lawmakers, vendors, and privacy rights advocates.
Michael Schermann and Scott R. Boss
This paper presents the rationale, elements, and process of the white-collar hacking contest (WCHC), a novel approach to teaching forensic investigations, management controls, and security in the digital environment. The WCHC is a round-based contest in which participants act in turn as those who commit fraud (fraudsters) and as forensic investigation teams. State-of-the-art enterprise information systems serve as the playing field for the game. The contest was developed in close cooperation with experienced forensic investigators to ensure real-life conditions for both fraudsters and forensic investigators. This serious gaming approach helps to advance teaching on fraud detection and forensic investigations in the digital age in two important ways. First, it provides an intriguing way to teach forensic investigation methods to students from interdisciplinary backgrounds; in particular, students experience the roles of both a fraudster and a forensic investigator. Second, the contest design helps to focus on ambivalent fraud schemes in which a number of possible legal alternatives can be presented to rebut the gathered evidence. We also discuss the WCHC as a starting point for experimental research in fraud investigation.