The 2011 Dewald Roode Workshop on Information Systems Security Research, IFIP WG8.11/WG11.13

Program chairs: Robert Crossler and Paul Benjamin Lowry

Conference Proceedings

Proceedings Editor: Anthony Vance


Paper 1: The Roles of Privacy Assurance, Network Effects, and Information Cascades in the Adoption of and Willingness to Pay for Location-Based Services with Mobile Applications

Mark Jeffrey Keith, Jeffry S. Babb Jr., Paul Benjamin Lowry, Christopher Paul Furner, and Amjad Abdullat

Location-based services (LBS) are increasingly combined with new forms of mobile and ubiquitous computing such as music players, cameras, email and internet access, and thousands of other mobile applications (a.k.a. “apps”). LBS features present new and interesting benefits as well as new forms of privacy risk. Recent headlines have drawn attention to the enormous amount of private information available to mobile application developers and providers such as Apple and Google. As risks and benefits increase in LBS apps, it is unknown how users trade off between these converged risks and benefits, particularly in the market for mobile apps. This paper uses a unique theoretical model based on privacy calculus and network theory to empirically examine the effects of the risk/benefit tradeoff on the adoption of new and emerging forms of LBS apps. Through two experiments involving 1588 mobile application users, we examine how institutional privacy assurances, along with app quality and network size, influence users' perceptions of location privacy risk and app benefits, which, in turn, affect their adoption intentions and willingness to pay. This research contributes to theory by demonstrating how network size affects not only perceived benefits but also the perceived risks of IS in the absence of perfect information. Concerning practice, we provide evidence that (1) LBS privacy risk is of great concern to consumers, (2) privacy assurance is particularly important when an app’s network size is low or its quality cannot be verified, and (3) improved standards for institutional privacy assurance at the app level could provide greater value to consumers and greater profit to providers.

Paper 2: Understanding the Diffusion of Negative Innovation through Network Analysis of Security Vulnerabilities

Sam Ransbotham

While most research on technology innovation seeks to promote diffusion, information security provides an important context for the study of negative technology innovations: technology adoptions that society would prefer to hinder. Current countermeasure technologies provide a wealth of data that can help us not only understand the security attack process but also gain insight into the general diffusion of innovation. To examine this, I combine two years of security alert data from intrusion detection systems (400+ million alerts) across 960 distinct firms with vulnerability characteristics from the National Vulnerability Database. Using this dataset, I study the relationships between vulnerabilities by conceptualizing vulnerabilities and attacked firms as a two-mode network. Through network analysis of the resulting affiliation graph, researchers can infer the behavior of the normally inaccessible (black hat) attackers and observe their adoption of negative innovation. As such, this empirical and theoretical analysis yields important managerial insights for firms coping with a relentless stream of emerging security vulnerabilities and important theoretical insights into the diffusion of negative innovation.
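As a rough illustration of the two-mode (affiliation) network described above, the sketch below builds a small vulnerability-by-firm graph and projects it onto vulnerabilities using the networkx library. The alert records, identifiers, and weighting choice are hypothetical and are not drawn from the paper's data or code.

```python
# Illustrative sketch (not the paper's code): a two-mode (firm x vulnerability)
# affiliation network built from alert records, then projected onto vulnerabilities.
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical alert records: (firm_id, cve_id) pairs derived from IDS alerts.
alerts = [
    ("firm_001", "CVE-2008-4250"),
    ("firm_001", "CVE-2009-3103"),
    ("firm_002", "CVE-2008-4250"),
    ("firm_003", "CVE-2009-3103"),
]

G = nx.Graph()
firms = {f for f, _ in alerts}
vulns = {v for _, v in alerts}
G.add_nodes_from(firms, bipartite=0)   # one mode: attacked firms
G.add_nodes_from(vulns, bipartite=1)   # other mode: exploited vulnerabilities
G.add_edges_from(alerts)               # edge = firm observed alerts for that vulnerability

# One-mode projection: two vulnerabilities are linked when attacked at the same firms;
# edge weights count shared firms, a crude proxy for attacker co-adoption.
vuln_net = bipartite.weighted_projected_graph(G, vulns)
for u, v, data in vuln_net.edges(data=True):
    print(u, v, data["weight"])
```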

Paper 3: Exploring the Role of Individual Employee Characteristics and Personality on Employee Compliance with Cyber Security Policies

Merrill Warkentin, Lemuria Carter, and Maranda E. McBride

Research indicates that insiders represent a major threat to the security of an organization’s information resources (Warkentin & Willison, 2009; Stanton et al., 2005). Given this threat, it is imperative that we understand the factors that promote compliant cybersecurity behaviors. This study is designed to identify key components of secure insider behavior by conducting an extensive review of the literature, developing a cybersecurity compliance survey, administering that survey to employees in several organizations, developing psychological profiles of employees, and creating personalized cybersecurity training protocols to meet the unique needs of each employee profile. In summary, our goal is to empirically establish the role that key individual differences, such as personality characteristics, sanction perceptions, and employee demographics, play in the formation of positive cybersecurity compliance intentions. These employee profiles will enable us to develop customized cybersecurity training protocols that meet the needs of individual technology users.

Paper 4: Victimized by Phishing: A Dual-Process Information Processing Perspective

Wei Zhang, Xin (Robert) Luo, and Zhengchuan Xu

Although phishing has become a serious threat to information security, there has been rather limited theory-grounded research on this burgeoning phenomenon. In this paper, we propose a study of victimization by phishing based on the Heuristic-Systematic Model of information processing. We argue that the Heuristic-Systematic Model offers an ideal theoretical framework for investigating the psychological mechanism underlying the effectiveness of phishing attacks, and we present a preliminary research model based on the theory.

Paper 5: User Resistance to Mandatory Security Implementation

France Bélanger, Stephane Collignon, Kathy Enget, and Eric Negangard

Organizations implement security policies and guidelines to ensure employees perform necessary security behaviors to protect the assets of the organization. However, research has shown that individuals often resist or fail to properly perform these behaviors. Combining Protection Motivation Theory (PMT), the Theory of Planned Behavior (TPB), and the Model of Resistance to IT Implementation, we propose a model to explore the attitudes and resistance behaviors of individuals faced with a mandatory security enhancement. We test the model during an actual mandatory security enhancement implementation. Triggered by recurring security issues and an appraisal of individual password practices within his institution, the IT security officer at a large mid-Atlantic university established a new password change requirement for all users. Our study investigates individual and organizational antecedents to users’ attitudes and resistance behaviors toward this mandatory change of security policy. We test the model with Structural Equation Modeling using survey data from 425 respondents representing all categories of users at the university. Results indicate that attitude towards the mandatory change impacts resistance behaviors, and that all of the PMT-derived and TPB-derived factors affect attitude, with the exception of perceived threat severity. Implications for research and practice are discussed.

Paper 6: Transferring risk to an opponent: Security design principles for high-value-low-frequency threats

Richard Baskerville

Risk management principles indicate four main kinds of treatments to reduce information security risks. These categories include self-insurance for low-value, low-frequency risks; self-protection for low-value, high-frequency risks; avoidance for high-value, high-frequency risks; and risk-transfer for high-value, low-frequency risks. In this paper, we consider the latter category, and specifically explore risk treatments that represent a non-financial, non-tradable, transfer of risk from the victim to the attacker.
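A minimal sketch of the four-way taxonomy described above, expressed as a simple decision function; the enum names and boolean inputs are illustrative assumptions rather than definitions taken from the paper.

```python
# Minimal sketch of the four risk-treatment categories described above.
# The enum names and the boolean inputs are illustrative assumptions.
from enum import Enum

class Treatment(Enum):
    SELF_INSURANCE = "self-insurance"    # low value, low frequency
    SELF_PROTECTION = "self-protection"  # low value, high frequency
    AVOIDANCE = "avoidance"              # high value, high frequency
    RISK_TRANSFER = "risk transfer"      # high value, low frequency (the paper's focus)

def choose_treatment(high_value: bool, high_frequency: bool) -> Treatment:
    if not high_value and not high_frequency:
        return Treatment.SELF_INSURANCE
    if not high_value and high_frequency:
        return Treatment.SELF_PROTECTION
    if high_value and high_frequency:
        return Treatment.AVOIDANCE
    return Treatment.RISK_TRANSFER

print(choose_treatment(high_value=True, high_frequency=False))  # Treatment.RISK_TRANSFER
```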

Paper 7: A Theory Explaining How an Organization Can Live Up to the Letter, But Not the Spirit, of an Information Security Initiative

Gurvirender Tejay and Allen Lee

In all walks of life, there are people who say one thing, but do another. In our case study of a government organization that has set out to implement an information security initiative, we see people who say (and actually believe) one thing, but do something else entirely, where the result is a thwarted security initiative. For an overall perspective, we adopt Pettigrew’s (1987) contextualist theory of strategic change, first, to provide categories to guide and organize observations at our case-study site and, second, to serve as a foundation upon which to build a specific theory explaining the phenomenon of how members of an organization can live up to the letter, but not the spirit, of the organization’s information security initiative, thereby undermining and defeating it. Such superficial security can result from the beliefs and actions of individuals rather than be the fault of technology. We offer a theory of action that diagnoses the mechanism by which this is possible, and that consequently prescribes how fundamental information security actions need to be coordinated with people’s underlying security values for desired information security objectives to be achieved.

Paper 8: Examining Employee Security Behavior: A Moral Disengagement Perspective

Tejaswini Herath, Myung-Seong Yim, John D’Arcy, Kichan Nam, and H.R. Rao

Security behaviors of the workforce have been identified as the cornerstone for achieving holistic security. The information systems (IS) security literature has recently started to acknowledge that security policy compliance by employees is of utmost importance in organizations. In a complementary yet unexplored vein, this research-in-progress paper develops a predictive model of employees’ security policy violating behavior based on Bandura’s theory of moral disengagement. This theory, rooted in social-cognitive theory, encompasses self-regulatory mechanisms that can explain the exercise of moral agency that is manifested in employees refraining from misbehaviors. Extending the security moral disengagement framework presented at IFIP 2010 (Herath et al. 2010), this work develops a model that identifies organizational commitment and security education, training, and awareness (SETA) programs as two important antecedents that can reduce employees’ engagement in deliberate security policy violations. We validate and test the model using data collected from five organizations in Korea.

Paper 9: Hermeneutical Exegesis on Collective Improvisation in Information System Security

Kennedy Njenga

This article exemplifies collective improvisation in practice and broadens the viewpoint that collective improvisation occurring within the discipline of information systems (IS) security is best understood by applying hermeneutical exegesis in context. The article draws from the insights of Gadamer, who elevated hermeneutics to a universal philosophy. Hermeneutical exegesis is used to explore and examine three typologies of collective improvisation (bricolage, innovation, and rational-adaptation) in IS security. The article uses exegesis techniques that include textual criticism and redaction criticism, both of which have been applied in Biblical exegesis to explore the meaning of texts. A similar approach is demonstrated in IS security in order to enrich the discipline towards a better depiction of collective improvisation in IS security practice. The article reports on the findings of an in-depth single case study and shows that both textual and redaction criticism can successfully exemplify collective improvisation across six selected IS security activities. The article discusses collective improvisation in each of these six distinct IS security activities in turn.

Paper 10: Information Processing in Law Enforcement During Extreme Events: A Study About Mumbai 26/11 2008

Rajarshi Chakraborty, H. Raghav Rao, and Manish Agrawal

Law enforcement officers take actions to mitigate hostilities based on intelligence and information they receive from various sources. The terrorist attacks in Mumbai on November 26, 2008, created a unique scenario to study adaptive behavior and information processing in the police department, i.e., the first responders to the attacks. In the context of this event, we carried out a survey of two police control zones that were involved. We use Partial Least Squares to examine the factors that led to mitigating actions by the officers on that day. The findings of this analysis are further supported with anecdotal evidence extracted from interviews with some of the first responders.

Paper 11: Information Sensitivity vs. Environmental Risk: User Behavior Across Multiple Networks

Jim Lee, Merrill Warkentin, and Allen C. Johnston

Ubiquitous networking permits users to utilize multiple network environments to access the Internet. Awareness of the sources of risk in Internet activities is crucial to protecting sensitive individual and organizational data. Prior research has demonstrated the importance of awareness, trust, and risk between the user and the intended endpoint, while minimizing these factors with respect to the infrastructure. Because information in transit is at risk from the network environment’s vulnerabilities, we focus on the implications of infrastructure risk for Internet activities. This study examines user information security awareness as it is applied to Internet privacy concerns and the perceived risk of network environments to determine network trust. We revisit the risk/trust relationship framed in the context of network environments and provide an alternative view of behavioral intention as the exhibition of trust.

We propose an evaluation process for Internet activities that require sensitive information while connected to a specific network environment. The process starts with General Information Security Awareness influencing one’s Internet Privacy Concerns and perceived risk of the environment. Evaluating network environment risk and information sensitivity against Trust in the Network creates the Intention to Provide Sensitive Information. The process of evaluating the network environment occurs in parallel with the previous literature’s use of these constructs toward the endpoint, and is ultimately exhibited in actual behavior.

Paper 12: Environment, Motivation, and System Hardening

Jordan Shropshire

The present study provides a theoretical basis for understanding the determinants of system hardening performance. Using self-determination theory (SDT), an integrated motivational model is developed and tested. The constructs of systems familiarity and threat awareness are included as antecedents of competence and autonomy and as indirect determinants of self-determined motivation, which in turn impacts system hardening performance. An empirical study was conducted to test this model: 179 current and potential systems administrators completed surveys and hardened Linux webservers in a controlled laboratory environment. The results confirm the proposed relationships. The model accounted for 33.3% of the variance in motivation and 17.2% of the variance in system hardening performance. Implications for research and practice are discussed.

Paper 13: Motivating the Insider to Protect Organizational Information Assets: Evidence from Protection Motivation Theory and Rival Explanations

Clay Posey, Tom L. Roberts, and Paul Benjamin Lowry

This research investigates the factors that motivate employees to protect their organizations from information security threats via protection-motivated behaviors (PMBs). A model founded on Protection Motivation Theory (PMT) and several rival explanations is assessed using data from 380 employees across a wide variety of industries in the U.S. Several important findings for behavioral information security research emerged. First, the basic assumptions of PMT hold in an organizational security context whereby employees weigh the potential benefits and risks associated with threats before engaging in PMBs. Intrinsic maladaptive rewards, response efficacy, and response costs effectively influence employees’ protection motivation levels; however, extrinsic maladaptive rewards and threat vulnerability and severity do not. Moreover, fear does not play a significant role in motivating insiders to engage in PMBs. The rival explanations for protection motivation of job satisfaction and management support significantly influence employees’ protection motivation, whereas sanctions and financial incentives do not.

Paper 14: Understanding Users’ Coping with Information Privacy Threats in Online Social Networks

Burcu Bulgurcu, Hasan Cavusoglu, and Izak Benbasat

This dissertation focuses on understanding users’ coping with information privacy threats in Online Social Networks (OSNs) by investigating (i) the outcomes of users’ privacy-related trade-offs (e.g., trade-offs between the perceived benefits and the information privacy threats associated with the use of OSNs); and (ii) the strategies that users employ to cope with information privacy threats under different conditions (e.g., when they believe they have high vs. low control to prevent a privacy threat).

Paper 15: Perceived Deception: An Evaluation of Technology Awareness and Self-Efficacy

Dustin Ormond, Merrill Warkentin, Kent Marett, and Allen C. Johnston

Detecting fake antivirus messages is important because these messages mislead users into unintentionally surrendering their identity, money, or time. The present study discusses factors that aid users in perceiving deceptive communications. A pilot study utilized a scenario to measure these factors: a pre-scenario and a post-scenario survey were administered to evaluate factors affecting perceived deception, yielding 213 usable responses. The data from the pilot study support that technology awareness significantly increases one’s deception detection self-efficacy, which in turn enhances the likelihood that an individual perceives a fake antivirus message as deceptive. In addition, the data show that behavioral intention to use the fake antivirus software, or scareware, is lower when the message is perceived as deceptive. This study also shows that more technologically aware individuals have lower intention to download fake antivirus software. Further data collection and analysis will be conducted in the near future, and a dual-model comparison will be conducted with regard to source familiarity when exposed to the fake antivirus message.

Paper 16: A New Approach to the Problem of Unauthorized Access: Raising Perceptions of Accountability through User Interface Design Features

Anthony Vance, Braden Molyneux, and Paul Benjamin Lowry

A persistent problem of information security is the threat of organizational insiders, an example of which is the unauthorized access of information. A long-standing solution to this problem is the principle of least privilege, which requires that systems users be given the minimum amount of access privilege required to complete a task. However, this solution is partial. While it limits access and therefore the risk of unauthorized access, it does not prevent the abuse of access privileges properly granted. In addition, in many financial, medical, and customer records systems, granularly restricting access privileges is not practical.

This study presents accountability—the expectation that one will be required to answer for one's actions—as an alternative solution to the problem of unauthorized access. We apply accountability theory to the context of system access privileges to predict that three aspects of accountability—identifiability, evaluation, and social presence—will reduce instances of unauthorized access. We develop a factorial survey to determine the effects of user interface design features relating to these aspects of accountability. The results demonstrate the potential of accountability mechanisms within systems to prevent unauthorized access.
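To make the factorial-survey design concrete, the sketch below crosses the three accountability manipulations into vignette conditions; the factor levels, wording, and sampling choices are hypothetical and do not reproduce the study's instrument.

```python
# Illustrative sketch of a factorial-survey (vignette) design that crosses the
# three accountability dimensions named above. The factor levels and vignette
# wording are hypothetical, not taken from the study's instrument.
from itertools import product
import random

factors = {
    "identifiability": ["username shown on every screen", "anonymous session"],
    "evaluation":      ["access logs reviewed by a supervisor", "no log review mentioned"],
    "social_presence": ["photos of the records team displayed", "no social cues"],
}

# Full factorial crossing: 2 x 2 x 2 = 8 vignette conditions.
vignettes = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# Each respondent rates a random subset of conditions, as is common in factorial surveys.
random.seed(42)
for condition in random.sample(vignettes, k=3):
    print(condition)
```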

Paper 17: Understanding Persistence of Risky IS Behavior with Respect to Phishing: A Multi-Stage Approach

Mary B. Burns, Alexandra Durcikova, and Jeff Jenkins

Though trained to counter phishing attempts, even sophisticated users may be lured into clicking on the links in phishing messages. Because of the prevalence and severity of these attacks, which can fool even expert users, there is a need to understand what determines why both expert and novice users persist in risky IS behavior and how we can design interventions to change this type of behavior. Risky IS behavior, particularly with respect to phishing, is investigated through the theoretical lens of a hybrid continuum-stage health behavior change model. This type of model helps us understand why humans engage in risky behavior, the stages toward safer behavior, and how appropriate interventions at these stages can reduce subsequent risky behavior. Informed by the literature on health behavior change models, the proposed study will longitudinally monitor the effects of both simulated phishing attempts and interventions aimed at reducing risky behavior in a field experiment. Therefore, this research in progress will extend IS security research by developing and empirically testing a theoretical hybrid continuum-stage model of users’ risky IS behavior.

Paper 18: Influence of Perceived Value of Data on Anti-Virus Software Usage: An Empirical Study of Protection Motivation

Kalana Malimage and Merrill Warkentin

Malware has become a major threat to individuals and organizations, a threat that can be mitigated by installing and using protective technologies such as anti-virus or anti-spyware software. There are several reasons for individuals to treat malware as a severe threat to which they are vulnerable; one of these reasons is that they perceive the data stored on their hard disk drives as valuable. Utilizing protection motivation theory (PMT) and an individual’s perceived value of data, we investigated the factors that lead to the formation of intention to install and use anti-virus software as a protective technology. We confirm that perceived threat vulnerability, self-efficacy, and response efficacy are all positively associated with behavioral intention to use anti-virus software as a protective technology. Interestingly, we also found that perceived value of data is positively associated with threat and coping appraisal and has a direct effect on behavioral intention. Results are discussed, and implications for research and practice are presented.

Paper 19: How Can You Tell a Hacker from a Geek? Ask Whether He Spends More Time on Games Than in Sports!

Qing Hu, Hongcheng Zhang, and Zhengchuan Xu

Motivated by the lack of empirical research on the evolution of hackers, we investigated why talented and computer-savvy college students may evolve into computer hackers, using a multi-method approach. We first conducted a case study of five known computer hackers. Based primarily on the results of this case study, we developed a survey instrument and adopted the scenario-based research methodology commonly used in criminological studies for collecting data. Eight computer hacking scenarios were created, and responses about the likelihood of committing the hacking activities were collected, along with a large number of predictor variables related to past experience, current activities, moral beliefs, and self-control characteristics of individual subjects. Eight regression models were then tested, and the results were revealing and interesting. We found three primary factors that contribute to the likelihood of geeks becoming hackers: moral beliefs, self-control, and time spent on computer games versus sports activities. Our results indicate that individuals who have strong moral beliefs against hacking activities, a strong ability to control their temper, and who spend more time on sports than on computer games are much less likely to be involved in computer hacking activities. On the other hand, individuals who do not believe that hacking is morally wrong, who have a tendency to lose their temper, and who spend more time on computer games are much more likely to develop into computer hackers. The significant implications of these findings for scholars, educators, and policy makers are discussed, and future research directions are explored.
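As a rough sketch of the kind of scenario-level regression described above, the following example fits one model with statsmodels on simulated data; the data frame, variable names (likelihood, moral_beliefs, self_control, game_vs_sport_hours), and coefficients are hypothetical and do not come from the study.

```python
# Illustrative sketch of one scenario-level regression of the kind described above.
# All variables and data are simulated for demonstration purposes only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "moral_beliefs": rng.normal(size=n),        # strength of beliefs that hacking is wrong
    "self_control": rng.normal(size=n),         # ability to control temper / impulses
    "game_vs_sport_hours": rng.normal(size=n),  # hours on computer games minus hours on sports
})
# Simulated outcome: self-reported likelihood of committing the scenario's hacking act.
df["likelihood"] = (
    -0.5 * df["moral_beliefs"] - 0.4 * df["self_control"]
    + 0.3 * df["game_vs_sport_hours"] + rng.normal(scale=1.0, size=n)
)

model = smf.ols("likelihood ~ moral_beliefs + self_control + game_vs_sport_hours", data=df).fit()
print(model.summary())
```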

Paper 20: Determinants of Online Privacy Protection Behaviors

France Bélanger, Robert E. Crossler, and Janine S. Hiller

Children’s privacy in the online environment has become a critical concern. There is increased use of the Internet for commercial purposes, an increase in requests for information, and an increasing number of children who use the Internet for casual web surfing, chatting, games, e-mail, interactive learning, and other applications. Often, websites hosting these activities ask for personal information such as name, e-mail address, street address, and phone number. In the United States, the Children’s Online Privacy Protection Act (COPPA) of 1998 was enacted in reaction to the widespread collection of information from children and subsequent abuses identified by the Federal Trade Commission (FTC). Other countries have enacted similar laws aimed at protecting a child’s privacy by requiring parental consent before collecting information from children under the age of 13. To date, however, the business practices used and the technical approaches employed to comply with these laws fail to protect children’s online privacy effectively. In this research, we use a multi-method approach to build a model of the determinants of parental online privacy protection behaviors, which includes concern for information privacy, concern for information privacy of children, trust, and perceived risks. We then test a portion of the model with responses from a survey of 180 parents, and find that concern for information privacy of children is influenced by concern for information privacy. The results also show that concern for information privacy of children positively affects trust and risk perceptions, which in turn affect the behavioral intentions of parents to protect their children’s privacy online. The implications of our findings for both research and practice are discussed.