Richard Baskerville, Eun Hee Park, Jong Woo Kim
This paper describes an integrated model of computer abuse that incorporates both the abuse setting and certain psychological processes of the abuser. The model adapts the crime opportunity structure model with the emotion process model to create an integrated emote opportunity model. This model enables a more complete view of computer abuse as the result of an opportunity for people to emote (express emotion). The model is validated by its application to a real case of computer abuse drawn from the literature. The integrated model provides several benefits. First, it can explain the interaction between organization-centric factors and individual-centric factors. Second, it can explain how emotion process components are elicited in potential computer abusers and ultimately lead to computer abuse behaviors. Third, because the model connects organizational external regulation processes and individual internal regulation processes to the emotion process components of potential abusers, it can explain how regulation processes for crime prevention affect those components. As a result, the model develops more extensive explanations and prescriptive advice relevant to the abuse case setting. The operability of the model adds new evidence supporting the validity of the two underlying models.
Robert Crossler, France Bélanger
Security threats regularly affect users of home computers. As such, it is important to understand the behaviors users utilize to protect their computers and identify determinants of these behaviors. This study empirically tests a previously untested theoretically-derived behavioral security model (the Technology Threat Avoidance Theory (TTAT)) utilizing a newly developed measure for individual security behaviors. This study shows that the TTAT-based model along with another Protection Motivation Theory based model are both effective in explaining individual security behaviors. The study also demonstrates the robustness of the newly developed instrument to measure individual security behaviors. Implications of the study for research and practice are discussed.
Tejaswini Herath, Kichan Nam, H. R. Rao
This work-in-progress presents a framework based on Bandura's theory of moral disengagement (Bandura 1986; Bandura 2002; Bandura et al. 1996). This theory is rooted in social-cognitive theory, which encompasses self-regulatory mechanisms, and we argue it can explain the exercise of moral agency that manifests both in refraining from behaving deviantly and in proactively behaving in a positive manner. Using this theory as a backdrop, we develop a model that captures the disengagement from, or engagement in, security behaviors. This proposed research attempts to examine behaviors on both sides of the security behavior spectrum.
Qing Hu, Zhengchuan Xu, Tamara Dinev, Hong Ling
In this study, we attempt to understand why employees commit computer offenses by developing a comprehensive model that integrates three mainstream criminology theories, i.e., general deterrence, rational choice, and individual propensity. We submit that, while the main decision process leading to an offensive act may be explained by rational choice theory, self-control and deterrence factors could significantly alter the risk-benefit calculus assumed in the rational choice model. Using data collected from employees in multiple organizations, we tested the model with structural equation modeling techniques. We found that while the rational choice framework is largely supported by the fact that the perceived benefits and perceived risks of offensive behavior indeed significantly influence offensive intentions, the individual propensity to crime, more specifically, low self-control, plays a central role in committing computer offenses by influencing the perceived benefits and risks. In addition, we found that the impact of deterrence is limited: among the three deterrence constructs, only certainty has a weak influence on the perceived risks. The results suggest that companies cannot rely solely on deterrence, such as establishing security policies that articulate stiff penalties for computer offenses. Screening for employees with a high level of self-control for critical and sensitive positions is a critical component that seems not to have been given the attention it deserves.
Allen C. Johnston, Merrill Warkentin
Persuasive communications have been shown to be an effective means to influence individual behavior, and in the organizational context, such communications from managers to employees have been effectively used to increase positive actions and to diminish unwanted behaviors. Is the impact of these communications differential, and does the degree to which individuals identify with an organization influence how they react to the communications? A sample of students and employees of two large southeastern universities will be surveyed to address these research questions, and the results will be presented and discussed at the conference.
Paul B. Lowry, Noelle Teh, Braden Molyneux, Son Ngoc Bui
Because employees are major IT security threats in organizations, substantial behavioral IS security research has looked at ways to increase IT security compliance. Unfortunately, many of these approaches--especially those based on deterrence theory and other approaches based on control--can backfire. Accordingly, we introduce psychological reactance theory as an innovative theory that can explain why controlling approaches to IT security policies can backfire. The theory explains that, when an individual's freedoms are threatened, he or she will respond by attempting to reestablish the threatened freedoms. We combined control theory, mandatoriness, and reactance theory into a cohesive model--the Control-Reactance Model--to explain and predict, for the first time, the inherent conflicts between increased control and mandatoriness that may increase IT security policy compliance yet restrict personal freedom in a manner that causes negative reactance results. We discovered that, while creating a sense of mandatoriness is important for compliance with a new IT security policy, if this sense of mandatoriness is delivered through high levels of control or controlling language, or if the new IT security policy is too restrictive of personal freedoms, such scenarios can create negative, unintended consequences such as anger and lack of compliance. From these findings, we propose recommendations for practice, including carefully communicating policy, understanding the importance of freedoms for employees, and establishing an environment of threat awareness.
Xin (Robert) Luo, Han Li, Qing Hu, Heng Xu
Information security management has assumed increasing importance as the confidentiality, integrity, and availability of data play a crucial role in business competitiveness, sustainability, and continuity. In essence, no organization is immune to the external and internal forces threatening the safety and security of its data and information. The consensus of extant IS security research is that one of the greatest threats to an organization's information security is the organization's own employees, who are the threat agents closest to the organizational data and information. There is still a pressing need to quantitatively capture and assess the crucial factors leading to employees' willingness to commit e-crimes in organizations. Drawing on Routine Activity Theory, this study proposes a new theoretical model to predict and gauge the likelihood of deliberate e-crimes by employees in an organization. By incorporating criminological variables into IS security research, we hope this study can advance our understanding of employee security behavior and information security management practices.
Kennedy Njenga, Irwin Brown
Information Security Risk Management (ISRM) is ranked as a key concern for Information Systems (IS) managers. Two main approaches to IS management are the functionalist approach and the incremental approach. Each of these approaches is known to have limitations in the context of dynamic, volatile and uncertain environments, and ISRM is very often embedded in such contexts. New insights into and meaning of ISRM activities become evident when the incrementalist approach and the functionalist approach are examined holistically. The concept of improvisation can be used to describe this holistic approach. In order to develop a better understanding of improvisation in ISRM activities, a single case study strategy (of a multinational organisation) was employed. Empirical data were collected through in-depth interviews with ISRM practitioners, through observations and through review of internal organizational documents. The data obtained were analysed, interpreted and conceptualised using open coding techniques. Generally, it was found that improvisation is manifested in ISRM activities at both an individual and a collective level. Implications of these and other findings for the scholarly community and for practical use are discussed.
Clay Posey, Tom Roberts, Paul Benjamin Lowry, Becky Bennett, James F. Courtney
Protecting information from a wide variety of security threats is an important and sometimes daunting organizational activity. Instead of relying solely on technological advancements to help solve human problems, managers within firms must recognize and understand the roles that organizational insiders have in the protection of information (Choobineh et al. 2007; Vroom et al. 2004). The systematic study of human influences on organizational information security is termed behavioral information security (Fagnot 2008; Stanton et al. 2006b), and it affirms that the protection of organizational information assets is best achieved when the detrimental behaviors of organizational insiders are effectively deterred and the beneficial activities of these individuals are appropriately encouraged. Relative to the former, the latter facet has received little attention in the academic literature. Given this opportunity, this research explicitly focuses upon protective behaviors that help promote the protection of organizational information resources. These behaviors are termed protection-motivated behaviors (PMBs). PMBs are defined as the volitional behaviors organizational insiders can enact that protect (1) organizationally relevant information within their firms and (2) the computer-based information systems in which that information is stored, collected, disseminated, and/or manipulated from information-security threats. This paper focuses upon the development of a formal typology of PMBs as viewed by organizational insiders. Data are obtained from 33 interviews and several end-user surveys, which are then utilized by the complementary classification techniques of Multidimensional Scaling (MDS), Property Fitting (ProFit) analysis, and cluster analysis.
Sixty-seven individual PMBs were discovered, and the above classification techniques uncovered a three-dimensional perceptual space common among organizational insiders regarding PMBs. This space verifies that insiders differentiate PMBs according to whether the behaviors (1) require a minor or continual level of improvements within organizations, (2) are widely or narrowly standardized and applied throughout various organizations, and (3) are a reasonable or unreasonable request for organizations to make of their insiders. Fourteen unique clusters were also discovered during this process; this finding further assists information security researchers and practitioners in their understanding of how organizational insiders perceive the behaviors that help protect information assets.
Petri Puhakainen, Mikko Siponen, Mari Karjalainen
Information security standards and maturity models are probably the most common resources for information security management. However, previous research has pointed out that existing information security management standards and maturity models are neither based on appropriate theories nor provide empirical evidence of the models' practical usefulness. As a preliminary step to address this concern, the present study sketches an information security management maturity model based on Hare's meta-theory of three levels of thinking. We then use action research to further develop and validate the model. The results suggest that the model is a promising way to assess the maturity of information security management.
Mikko Siponen, Mari Karjalainen, Suprateek Sarker
Employees' compliance with IS security procedures is a key concern for organizations. To help organizations address this concern, academic researchers have conducted a number of studies on this topic. The existing research in the area is dominated by studies performing theory verification, i.e., testing well-known theories developed in other fields (psychology, criminology, social psychology) in the context of IS security. We argue that in order to derive specific insights regarding employees' compliance with IS security policies, there is a need to focus the investigation on the explicit phenomenon of employees' compliance with the IS security procedures of their organization, and to abstract theoretical ideas from it, rather than testing whether well-known theories from other fields explain employees' compliance. Otherwise, the chosen theoretical perspectives, originally developed to explain or predict phenomena other than compliance with IS security procedures, may form blinders, preventing IS security scholars from seeing issues that may be salient to the phenomenon of employees' compliance with IS security procedures. To address this concern, we propose the use of an inductive and qualitative approach to complement the existing body of knowledge on this topic. We identify several social mechanisms that are associated with employees' compliance/non-compliance with IS security procedures. The key finding of our study, which has not been reported by previous studies, is that the reasons (and mechanisms) for individuals violating (and complying with) IS security procedures depend on the type of violation. Implications for research and practice are discussed.
Anthony Vance, Gove Allen, Braden Molyneux, Paul Benjamin Lowry
A persistent problem of information security is the threat of organizational insiders, an example of which is the unauthorized access of information. A long-standing solution to this problem is the principle of least privilege, which requires that systems users be given the minimum amount of access privilege required to complete a task. However, this solution is partial. While it limits access and therefore the risk of unauthorized access, it does not prevent the abuse of access privileges properly granted. In addition, in many financial, medical, and customer records systems, granularly restricting access privileges is not practical. This paper presents accountability--the expectation that one will be required to answer for one's actions--as an alternative solution to the problem of unauthorized access. We apply accountability theory to the context of system access privileges to predict that two aspects of accountability--identifiability and evaluation--will reduce instances of unauthorized access. We conduct a field experiment using a system in actual use to test our hypotheses. The results of the experiment generally support our hypotheses, demonstrating the potential of accountability mechanisms within systems to prevent unauthorized access.
Robert Willison, Merrill Warkentin
The extant literature analyzing information system security policy violations has primarily focused on accidental or non-malicious noncompliance behavior. The focus is typically on the direct antecedents of behavioral intention, and researchers have applied theories related to planned behavior, adoption, protection motivation, and other cognitive processes. But another class of violation demands greater research emphasis--the intentional commission of computer security policy violation, or computer abuse. Whether motivated by greed, disgruntlement, or some other psychological process, this act has the greatest potential for loss and damage to the employer. We argue the focus must include not only the act and its immediate antecedents, but also the cognitive processes leading to the formation of abuse intention, including the motivations and decision processes that may lead up to intention. By presenting three specific examples of how the organization can expand its zone of control further back in time ('to the left of bang'), our framework extends the Straub and Welke (1998) security action cycle. We present the Extended Security Action Cycle, a new theoretic model for illustrating potential organizational impacts on the formation of employees' intention to commit computer abuse within the organization. Implications for practitioners and academic researchers are presented, including guidelines for establishing trust with employees that will foster positive perceptions of organizational justice.