
TRUST BEYOND SECURITY: AN EXPANDED TRUST MODEL

Developing an improved trust model and related metrics for distributed computer-based systems that will be useful immediately and resilient to changing technology.

By Lance J. Hoffman, Kim Lawson-Jenkins, and Jeremy Blum

Advances in network and microprocessor technology have increased the adoption of computer technology in areas such as consumer shopping, banking, voting, and automotive technology. At the same time, the widespread proliferation of viruses and recent catastrophic power outages have made the general public all too aware of the associated risks. Trust may be a crucial factor in the successful introduction of new products and services, including computer technology. However, implementation of poorly analyzed technical solutions can backfire. A specific example of the latter is the introduction of high-technology voting equipment at U.S. polling places in 2004 in an effort to increase public confidence and trust in the election process. Reported problems with the technology raise the question of whether fielding this new technology will increase or decrease public trust in the voting process.¹

Another example where a valid trust model would be very helpful is the evolution of intelligent vehicles. Applications that will rely on direct inter-vehicle communication (IVC) illustrate a fundamental dilemma of trust: although the network is made up of potentially untrustworthy peers, it must be survivable, and attacks must be detected in a distributed manner. Central to resolving this dilemma is the ability of a host to assess the trustworthiness of the entities it encounters. Unfortunately, the underlying wireless networks specified in early commercial systems, emerging standards, and the research literature address neither trust nor survivability concerns. Current proposals for managing the routing layer ignore the potential for attacks, since planned IVC messages are propagated by individual vehicles with either repeaters or routers [11]. These devices could be hacked to inject false messages, modify messages, or fail to forward messages.

¹ New York Times Editorial Board. How to hack an election. The New York Times (Jan. 31, 2004).


THE NEED FOR AN EXPANDED TRUST MODEL

J.B. Rotter defined interpersonal trust as a generalized expectancy held by an individual that the word, promise, or oral or written statement of another can be relied on [12]. Modifying this definition for applicability to human trust in automation, we define trust as the expectation that a service will be provided or a commitment will be fulfilled. With this definition, expectation is a key component. Users' expectations may be based on many things. Some examples include:

· Experience with an application or service prior to the incorporation of computer-based technology;
· Experience with computer-based technology in general;
· The reputation of the vendor providing the service or product;
· Knowledge of the technology employed to deliver the service and the confidence of the user in this technology; and
· A trusted, or distrusted, agent who is involved in the delivery of the service or the performance of the product.

Changes in the factors that affect users' expectations will also impact users' trust levels.

The problems of defining trust and trust metrics have primarily focused on public key authentication and e-commerce [8]. Trust models and metrics for public key infrastructure systems address authentication between sender and receiver entities, message integrity, and data confidentiality. These are all aspects of a security model, and often the terms trust model and security model are used synonymously. From a user's point of view, security is extremely important in trusting that computer-based technology will perform the user's intended function. However, factors other than security can be just as important from the user's perspective. Usability is an important factor in whether users trust technology. Reliability and availability are additional factors, and often privacy and safety are as well.

As illustrated in Figure 1, separate security, usability, reliability, availability, safety, and privacy models exist today within engineering disciplines, and all these models incorporate some limited aspects of trust. There is limited or no data sharing between the individual functional models. Current trust models have been developed based on specific security issues and also solely on knowledge, experience, practices, and performance history [5].

[Figure 1. Current functional models: separate security, usability, reliability, privacy, availability, and safety models, with no data sharing between them.]

Table 1. Generic trust model parameters.

Security: Authentication of parties in transaction; data access control; data integrity; software change control procedures; physical security.
Usability: Perception issues; motor accessibility (such as user dexterity, physical strength requirements); interaction design issues.
Privacy: User anonymity; data confidentiality.
Reliability and Availability: Vulnerability to denial-of-service attacks; connection to the Internet; quality of service/performance criteria specific to the application; use of fault-tolerance platform techniques.
Audit and Verification Mechanisms: Cryptographic methods used to verify database integrity; manual or paper audit trails; use of trusted agents.
User Expectation: Product reputation; prior user experience; knowledge of technology; use of trusted agents.



In addition, much of the prior research in trust in automation has focused primarily on the psychological aspect of trust [10]. But prior research rarely addressed all these areas.

Proposals have been presented to link some of the psychological aspects of trust with engineering issues. For example, attempts have been made to map psychological aspects of trust (reliability, dependability, and integrity) to human-machine trust clusters associated with engineering trust issues such as reliability and security [7]. This article will only briefly touch upon some of these psychological aspects of trust, while focusing on a discussion of the engineering aspects of a trust model. Future work will include more direct mappings of the psychological aspects (as described by Jian et al., Dzindolet et al., and Camp, for example) into the trust model [3, 6, 7]. A thorough understanding of both the psychological and engineering aspects of trust is necessary to develop an appropriate trust model.

A comprehensive trust model of computer-based technology must predict how usability, reliability, privacy, and availability (and possibly other factors), as well as security, affect user trust.

A GENERIC TRUST MODEL AND METRICS DEFINITION

A general trust model and accompanying metrics should be used to predict and measure user trust levels in new or updated applications of computer-based technology before committing to full-scale development and installation efforts. The new trust model incorporates security, privacy, safety, usability, reliability, and availability factors into a trust vector, paying careful attention to the interacting differences and synergies. In addition, the trust model will incorporate factors not used in previous models, such as verification techniques, user knowledge, user experience, and trust propagation. Some current models, such as security models, may include some aspects of measuring trust, while others, such as usability models, rarely address the issue of user trust. The new expanded trust model will contain data that can be imported from existing models (that is, security, reliability, availability, usability, privacy, and safety) to form a comprehensive model of user trust of a system, as illustrated in Figure 2.

[Figure 2. Expanded trust model: a central trust component, informed by experience, verification, knowledge, and trust propagation, exchanges data with the security, usability, reliability, privacy, availability, and safety models.]

This trust model will be usable by individuals and groups with different and possibly conflicting interests. While previous trust models in e-commerce have been developed to measure the trust of a single customer (the purchaser of a product or service), the proposed trust model will be able to measure the trust of differing users. For example, a trust model of voting systems might have at least two types of users (voters and election officials) who may have different levels of trust and uncertainty with various aspects of the voting system. As another example, two classifications of users of an intelligent vehicle communications system would be drivers and traffic safety engineers. Metrics must be defined to measure user trust and distrust of a system. Quantitative metrics, qualitative metrics, fuzzy metrics, or a combination of these should be used to measure trust levels.

Some aspects of the trust model (for example, cryptographic techniques for enhanced system security or redundancy features to increase system reliability and availability) will be generic and can be applied to more than one system. Other aspects of the model may be specifically designed for a given application system.
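To make the trust vector and its metrics concrete, here is a minimal sketch in Python. It is our illustration, not a mechanism from the article: the factor names, weights, scores, and user classes are invented, and a real instantiation would calibrate them empirically.

```python
from dataclasses import dataclass

# One trust vector per system under evaluation. Scores are normalized to
# [0, 1]; the article does not prescribe a scale or an aggregation rule,
# so a weighted average is used here purely for illustration.
@dataclass
class TrustVector:
    security: float
    privacy: float
    safety: float
    usability: float
    reliability: float
    availability: float

# Different user classes weight the same factors differently (for example,
# voters versus election officials). These weights are hypothetical.
WEIGHTS = {
    "voter": {"security": 0.25, "privacy": 0.25, "safety": 0.05,
              "usability": 0.25, "reliability": 0.10, "availability": 0.10},
    "election_official": {"security": 0.35, "privacy": 0.10, "safety": 0.05,
                          "usability": 0.10, "reliability": 0.20,
                          "availability": 0.20},
}

def trust_score(v: TrustVector, user_class: str) -> float:
    """Aggregate the trust vector into one score for a given user class."""
    return sum(getattr(v, f) * w for f, w in WEIGHTS[user_class].items())

if __name__ == "__main__":
    dre = TrustVector(security=0.6, privacy=0.8, safety=0.9,
                      usability=0.7, reliability=0.5, availability=0.9)
    for cls in WEIGHTS:
        print(cls, round(trust_score(dre, cls), 2))
```

A qualitative or fuzzy variant would replace the numeric scores with ordinal labels or membership functions; the essential point is that the prediction is driven by the whole vector and the user class, not by a single security number.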


The trust model will also explore the connection between verification and trust. Different examples of this connection will be analyzed, including blind trust (no verification required), trust with verification, trust based on knowledge, trust based on experience, and trust between principals and agents (propagation of trust).

Since no system is perfect, the question "Who watches the watchers?" must be addressed. Thus, audit capabilities (both electronic and non-electronic, and both by a single user and by multiple trusted agents) will be included in the trust model. The trust model will show the effects of security and verification mechanisms on trust levels.

Implementation and support of cryptographic algorithms are fundamental to the strength of the trust model, regardless of the specific application. In establishing trust in a transaction using a distributed computer system, users will ask one or more of the following questions:

· Can this transaction be viewed by unauthorized parties?
· Can the contents of this transaction be altered by unauthorized parties?
· Can I prove my identity to the other party in the transaction?
· Can I verify the identity of the other party in the transaction?
· Can I receive verification that the transaction has been processed correctly?

Various cryptographic mechanisms can be used to meet these trust model requirements [4]: blind signatures, which allow signature verification without disclosing the contents of the signed document; anonymous signatures, which allow electronic data to be authenticated without revealing the identity of the person giving the signature; e-receipts, which support authentication, user anonymity, data integrity, and confidentiality in applications such as e-voting and e-commerce; and remote platform integrity challenges, in which a remote host can electronically verify the integrity of a target platform. A non-exhaustive list of variables that will be included in the expanded trust model is shown in Table 1.
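Blind signatures are easy to demonstrate with the classic RSA construction. The sketch below is a textbook scheme, not necessarily the mechanism of [4], and the key is deliberately tiny so the arithmetic stays visible; nothing here is production cryptography.

```python
import hashlib
from math import gcd

# Toy RSA key (insecure, demonstration only).
p, q = 61, 53
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: e*d = 1 mod phi(n)

def h(message: bytes) -> int:
    """Hash the message into Z_n (toy-sized)."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def blind(m: int, r: int) -> int:
    """Voter side: mask the ballot hash with random factor r, so the
    signer never sees what it is signing."""
    assert gcd(r, n) == 1
    return (m * pow(r, e, n)) % n

def sign_blinded(m_blind: int) -> int:
    """Signer side: sign the blinded value with the private key."""
    return pow(m_blind, d, n)

def unblind(s_blind: int, r: int) -> int:
    """Voter side: strip the blinding factor, leaving a plain RSA signature."""
    return (s_blind * pow(r, -1, n)) % n

def verify(m: int, s: int) -> bool:
    return pow(s, e, n) == m

if __name__ == "__main__":
    m, r = h(b"candidate-42"), 7
    s = unblind(sign_blinded(blind(m, r)), r)
    print(verify(m, s))  # True: the signature checks, yet the signer never saw m
```

The signer authorizes the ballot without learning its contents, which is exactly the separation of authentication from disclosure that the trust model's privacy and verification subcomponents call for.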
TWO SPECIFIC APPLICATIONS FOR AN EXPANDED TRUST MODEL

Here, we examine more closely how one might apply a trust model to two applications that would benefit from it: voting systems and IVC systems.

Voting Systems. Voting problems in the U.S. elections of 2000 and 2002 have raised the question of whether citizens are losing faith in the integrity of the voting process. The problems in these elections related to voter registration, vote casting, and vote-count tallying led to new initiatives to correct them. The most significant remedy enacted was HAVA, the Help America Vote Act. HAVA was drafted specifically with the intention of replacing unreliable voting systems, such as those using punched-card ballots, with new systems that employ more advanced technology for vote casting, such as optical scan and direct recording electronic (DRE) machines. In addition, a specific section of HAVA charges the newly formed Election Administration Commission to study and report on e-voting and the electoral process. Legislators inserted this wording due to concerns about the impact of electronic and Internet technologies on the integrity of the voter registration, vote casting, and vote counting aspects of the electoral process.

Table 2. E-voting application-specific trust model parameters.

Security: Authentication of voters during the registration and vote cast processes; data access control of the voter registration, vote cast, and vote tally databases; data integrity of the voter registration, vote cast, and vote tally databases.
Usability: Ballot design; access from non-polling centers (such as Internet voting from a remote location).
Privacy: User anonymity during vote casting; data confidentiality during vote casting.
User Expectation: E-voting product and technology reputation; prior user experience of the voting processes; knowledge of e-voting technology; use of trusted agents and user confidence in the trusted agents.

In response to HAVA and negative publicity from the 2000 elections, state election boards have turned to electronic technology as the solution for restoring the public's trust in the voting system. In addition to purchasing optical scan and DRE machines, some jurisdictions have implemented Internet registration and voting processes. While the cost, ease of use, and maintenance issues regarding these new technologies are relatively straightforward to evaluate, and there is some early evidence they provide significant improvements in some areas such as usability and reduction of unintended undervoting, questions of security and public trust in the integrity of newly purchased DRE machines are being raised today [9].



In addition (and worse), little attention has been focused on the security and integrity of the e-registration aspects of voting, but problems in this area could possibly affect election outcomes to the same extent as problems with vote casting and vote counting. When deciding whether to purchase and deploy new computer technology, election boards (users) have little ability and competence (and no formal method) to assess whether deploying the new technology will maintain or increase public trust in the voting system.

We don't question whether Internet and electronic transactions will be used in voting systems; this is inevitable. Market pressures, convenience, and utility have traditionally trumped security concerns. (Automobiles, introduced in 1896, did not offer seat belts until 1955; these weren't standard equipment until the late 1960s in the U.S.) We do suggest that trust models and metrics can be developed and used to facilitate the successful deployment of new technology to be used by the general public. In particular, the trust model for e-voting must properly handle three processes as composable subsystems: voter registration, vote casting, and vote counting. A non-exhaustive list of variables that will be included in the expanded trust model for an e-voting application is shown in Table 2.

Metrics for voting systems will include, but not necessarily be limited to:

· Voter confidence that as a registered voter he or she will be allowed to cast a ballot;
· Voter confidence that his or her ballot will be cast as intended;
· Voter confidence that his or her ballot will be included in the final vote tally;
· Voter confidence that the final vote count includes votes from only authorized voters who cast ballots; and
· Voter confidence in the confidentiality of his or her vote [4].

We posit that the voting system application will be more trusted if audit and verification capabilities are observable and measurable in the registration, vote casting, and vote tally processes.

During the registration process, voters and authorized agents should have the ability to view and verify voter registration lists using secure computer technology. During the vote casting process, election workers should have the ability to verify voter authorization by accessing the voter registration list via secure, electronic means, and voters should have the ability to verify that their votes were cast as intended. Finally, during the vote tally process, election workers should be able to verify that only authorized cast votes were included in the final vote tally, and individual voters should have a method of verifying that their vote was included in the final vote tally.

The trust model can be used to generate design requirements for a trusted e-voting system. E-voting systems can be tested to verify that they meet the specific trust requirements of various groups of users (such as voters and election administration officials), and the systems can be iteratively refined until they meet acceptable trust thresholds. At this point, e-voting system users can participate in elections and then provide input regarding their trust levels in these voting systems. The trust metrics generated by users of the various e-voting systems will be compared to the trust metrics predicted for the systems. The trust model may be updated as we learn more from using it on various e-voting system configurations and with various user populations.
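As an illustration of the last verification step (a voter checking that their vote reached the final tally), consider this deliberately simplified sketch. It is our construction, not the scheme of [4]: a salted hash of each cast ballot is published on a public board, and the voter checks membership. Real end-to-end verifiable schemes must also address coercion, receipt-buying, and tally correctness, which this toy ignores.

```python
import hashlib
import secrets

CAST = []  # (ballot, nonce) pairs retained by the election system

def receipt(ballot: bytes, nonce: bytes) -> str:
    """Salted commitment to a cast ballot; the nonce hides the ballot contents."""
    return hashlib.sha256(nonce + ballot).hexdigest()

def cast_ballot(ballot: bytes) -> str:
    """Record the ballot and hand the voter a receipt they can check later."""
    nonce = secrets.token_bytes(16)
    CAST.append((ballot, nonce))
    return receipt(ballot, nonce)

def bulletin_board() -> set:
    """Published after tallying: one receipt per counted ballot."""
    return {receipt(b, nc) for b, nc in CAST}

if __name__ == "__main__":
    my_receipt = cast_ballot(b"candidate-42")
    # Verification: the voter checks that their receipt appears on the board.
    print(my_receipt in bulletin_board())  # True only if the vote was counted
```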


IVC Systems. An IVC network provides an ideal and complementary application for the trust model. Despite the very different application area, many of the variables in the trust model are identical to the ones in the voting application. It, too, is a large distributed system, in which privacy, availability, platform integrity, and data integrity are central to trustworthiness. It also illustrates a different set of variables, ranging from its real-time requirements to issues arising from ad hoc wireless networking. Like the voting application, the intelligent vehicle application represents a crucial application in which trust is a roadblock to implementation. Consequently, the trust model is an ideal mechanism to generate the design requirements for a technology that can save lives.

Future generations of in-vehicle Intelligent Transportation Systems (ITS) will network with nearby vehicles for enhanced safety and efficiency. Intelligent vehicles will ascertain the intentions and dynamics of nearby vehicles and the presence of roadway hazards. These ITS technologies will allow safe, tight inter-vehicle spacing in vehicle platoons, coordinate collision avoidance in intersections, and be instrumental in degraded visibility conditions such as heavy fog. For efficiency and cost reasons, the wireless communication will ideally be done directly between vehicles.

Although the complete communications architecture underlying these networked ITS applications remains unspecified, proposals are emerging that specify the structure of the architecture's lowest layers. Most of these proposals, as seen in communications standards, commercial efforts, and research systems, share the approach of requiring node cooperation for media access control. The U.S. government anticipates the use of physical and media access controls based on the ANSI/IEEE 802.11 standards. MeshNetworks has developed a commercial system for ITS applications, also based on the 802.11 standards, which manages vehicle-roadside communications and vehicle-vehicle applications. From the perspective of computer security, a salient feature of both systems is the use of the 802.11 Distributed Coordination Function (DCF) for ad hoc networking. Under the DCF standard, an uncooperative node can lead to denial of service for the nodes within its communication range.

For example, a jamming attack could be aimed at vehicle platoons. Vehicle platoons are designed to increase roadway capacity. Since drivers routinely violate safe following distances, platoons are not designed to implement a fail-safe system, and collisions are possible. Collisions should not occur if each vehicle is provided with the lead vehicle's dynamics via inter-vehicle communication. However, if inter-vehicle communication is disrupted, a collision could be severe. Figure 3 shows the steps leading to such a collision.

[Figure 3. Attack on a vehicle platoon (Steps 1-3).]



In Step 1, the lead car of the platoon engages in hard braking due to a vehicle merging into its lane. The lead car transmits its dynamics to the vehicle behind it. In Step 2, a message containing the lead car's dynamics is propagated to the following vehicle. However, in Step 3, the message containing this information does not reach the last vehicle in the platoon, due to jamming of the wireless signal. If this breakdown occurs in the middle of a long platoon, a serious multi-car pileup can occur.

Unfortunately, in an IVC network, there is no infrastructure within which to provide security services. Instead, nodes must rely on untrusted hosts to provide network management, deliver messages, and provide accurate control data for routing purposes. Moreover, the highly volatile nature of mobile computing makes it difficult to distinguish between malicious and normal behavior.

One potential solution to these challenges is CARAVAN, a Communication Architecture for Reliable Adaptive Vehicular Ad hoc Networks [1, 2]. CARAVAN provides essential security services to prevent a wide array of attacks aimed at the wireless network. Furthermore, it functions in an efficient and scalable manner that effectively manages scarce bandwidth, minimizes collisions, and provides quality-of-service guarantees for the delivery of critical messages. CARAVAN includes an explicit time-slot allocation media access protocol that mitigates the exposure of the IVC network to denial-of-service attacks. CARAVAN also provides cryptographic libraries to support the digital signatures needed for the authentication and integrity of messages, as well as confidentiality for control messages; trusted computing platforms (TCPs) to ensure the trustworthiness of peers; and spread spectrum techniques for antijamming capabilities.

The trust model could help evaluate CARAVAN against alternate proposals. But to do so, the model must be expanded to include metrics specific to the IVC network. Examples of trust model parameters specific to the IVC application are shown in Table 3.

Table 3. IVCS application-specific trust model parameters.

Reliability: Delivery guarantees for periodic messages, which contain the position, dynamics, and driver intentions of vehicles in the network; delivery guarantees for one-time messages, which contain warnings about roadway hazards, congestion, and other events.
Security: Accuracy of data provided by remote platforms; ability of remote platforms to route appropriate messages without compromising message integrity.
Availability: Resiliency of the network architecture to denial-of-service attacks (including jamming).

The model could then analyze current proposals for inter-vehicle networking protocols. If the trust model predicts that none of the proposals produces an acceptable level of trust, the model might be used to generate the appropriate design requirements for a trusted IVC. Protocols and their parameterization could then be specified to meet these design requirements.
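The Figure 3 scenario can be reproduced in a back-of-the-envelope simulation. The sketch below is ours, with invented speeds, gaps, and latencies: a hard-braking warning is relayed hop by hop through the platoon, and jamming one hop forces the trailing driver back onto slow human reaction, turning a clean stop into a rear-end collision.

```python
from typing import Optional

# Kinematic sketch of the Figure 3 attack. All parameter values are invented.
V = 30.0           # initial speed of every car (m/s)
GAP = 8.0          # initial bumper-to-bumper gap inside the platoon (m)
DECEL = 8.0        # braking deceleration (m/s^2)
HOP_DELAY = 0.1    # per-hop latency of the relayed warning message (s)
HUMAN_DELAY = 1.5  # reaction time when a driver must rely on brake lights (s)

def stop_distance(delay: float) -> float:
    """Distance a car travels after the lead car brakes: it cruises for
    `delay` seconds, then stops under constant deceleration."""
    return V * delay + V * V / (2 * DECEL)

def platoon_stops(n: int, jammed_hop: Optional[int]) -> list:
    """Travel distance of each car from its own starting point. Car i
    normally receives the warning after i relay hops; cars at or behind a
    jammed hop never receive it and react with human latency instead."""
    stops = []
    for i in range(n):
        delay = i * HOP_DELAY
        if jammed_hop is not None and i >= jammed_hop:
            delay += HUMAN_DELAY
        stops.append(stop_distance(delay))
    return stops

def collisions(stops: list) -> list:
    """Car i+1 starts GAP metres behind car i, so it hits car i if it
    travels more than GAP metres further than car i does."""
    return [i + 1 for i in range(len(stops) - 1)
            if stops[i + 1] - stops[i] > GAP]

if __name__ == "__main__":
    print("no jamming:  ", collisions(platoon_stops(4, None)))  # []
    print("hop 3 jammed:", collisions(platoon_stops(4, 3)))     # [3]
```

Even under these generous assumptions, the unwarned car overruns its gap by tens of metres, which is why delivery guarantees for warning messages appear as reliability parameters in Table 3.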

CONCLUSION

This article proposed an expanded trust model for distributed computer systems and highlighted some of the variables required in the model. It incorporates aspects of system security, usability, reliability, availability, audit, and verification mechanisms, as well as user privacy concerns, user experience, and user knowledge. Hopefully this will lead to measurable systems that have trust built in, and a scientific community and public that demand such systems.

References

1. Blum, J. and Eskandarian, A. CARAVAN: A communications architecture for reliable adaptive vehicular ad hoc networks. In Proceedings of the Society of Automotive Engineers World Congress (Detroit, MI, Apr. 2006).
2. Blum, J. and Eskandarian, A. Adaptive space division multiplexing: An improved link layer protocol for inter-vehicle communications. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems (Vienna, Austria, Sept. 2005).
3. Camp, L.J. Design for trust. In R. Falcone, Ed., Trust, Reputation and Security: Theories and Practice. Springer-Verlag, Berlin.
4. Chaum, D. Secret ballot receipts and transparent integrity; www.vreceipt.com/article.pdf.
5. Daignault, M. and Marche, S. Enabling trust online. In Proceedings of the Third International Symposium on Electronic Commerce (Oct. 2002).
6. Dzindolet, M.T. et al. The role of trust in automation reliance. International Journal of Human-Computer Studies 58 (2003), 697–718.
7. Jian, J., Bisantz, A., and Drury, C. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics 4 (2000), 53–71.
8. Manchala, D.W. Trust metrics, models, and protocols for electronic commerce transactions. In Proceedings of the 18th International Conference on Distributed Computing Systems (May 1998).
9. Maryland General Assembly, Department of Legislative Services. Trusted Agent Report: Diebold AccuVote-TS Voting System (Jan. 20, 2004).
10. Muir, B. Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37 (1994), 1905–1922.
11. Okada, M. et al. A joint road-to-vehicle and vehicle-to-vehicle communications system based on a non-regenerative repeater. In Proceedings of the 50th IEEE Vehicle Technology Conference (Amsterdam, 1999), 2233–2237.
12. Rotter, J.B. Interpersonal trust, trustworthiness, and gullibility. American Psychologist 35 (1980), 1–7.

Lance J. Hoffman ([email protected]) is a Distinguished Research Professor in the Computer Science Department at George Washington University in Washington, D.C.
Jeremy Blum ([email protected]) is a research scientist at the Center for Intelligent Systems Research at George Washington University in Washington, D.C.
Kim Lawson-Jenkins ([email protected]) is a doctoral student in the Computer Science Department at George Washington University in Washington, D.C.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

© 2006 ACM 0001-0782/06/0700 $5.00

