Using trust for detecting deceitful agents in artificial societies
- 1 September 2000
- journal article
- research article
- Published by Taylor & Francis in Applied Artificial Intelligence
- Vol. 14 (8), 825-848
- https://doi.org/10.1080/08839510050127579
Abstract
Trust is one of the most important concepts guiding decision-making and contracting in human societies. In artificial societies, this concept has been neglected until recently. The inherent benevolence assumption implemented in many multiagent systems can have hazardous consequences when dealing with deceit in open systems. The aim of this paper is to establish a mechanism that helps agents cope with environments inhabited by both selfish and cooperative entities. This is achieved by enabling agents to evaluate trust in others. A formalization and an algorithm for trust are presented so that agents can autonomously deal with deception and identify trustworthy parties in open systems. The approach is twofold: agents can observe the behavior of others and thus collect information for establishing an initial trust model, and, in order to adapt quickly to a new or rapidly changing environment, agents can also make use of observations reported by other agents. The practical relevance of these ideas is demonstrated by means of a direct mapping from the scenario to electronic commerce.
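The abstract describes a twofold mechanism: trust built from an agent's own observations of others' behavior, combined with witness reports received from third parties. The sketch below illustrates that general idea in Python; the class, update rule, weighting scheme, and all parameter values are assumptions made for illustration and do not reproduce the paper's actual formalization or algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class TrustModel:
    """Illustrative trust model combining direct observation with witness reports.
    The update rule and parameters are assumptions, not the paper's definitions."""
    direct: dict = field(default_factory=dict)  # agent id -> trust value in [0, 1]
    learning_rate: float = 0.1                  # assumed step size for direct updates
    witness_weight: float = 0.5                 # assumed maximum weight of a witness report

    def observe(self, agent_id: str, cooperated: bool) -> None:
        """Update trust in an agent from a directly observed interaction."""
        current = self.direct.get(agent_id, 0.5)          # neutral prior for unknown agents
        target = 1.0 if cooperated else 0.0
        self.direct[agent_id] = current + self.learning_rate * (target - current)

    def incorporate_report(self, reporter_id: str, agent_id: str,
                           reported_trust: float) -> None:
        """Blend in a witness report, discounted by the trust placed in the reporter,
        so that a new agent can bootstrap its model in a changing environment."""
        reporter_trust = self.direct.get(reporter_id, 0.5)
        weight = self.witness_weight * reporter_trust
        current = self.direct.get(agent_id, 0.5)
        self.direct[agent_id] = (1 - weight) * current + weight * reported_trust

    def is_trustworthy(self, agent_id: str, threshold: float = 0.6) -> bool:
        """Decide whether to contract with an agent, e.g. in an e-commerce setting."""
        return self.direct.get(agent_id, 0.5) >= threshold


# Usage example: direct experience plus a report from a partially trusted witness.
model = TrustModel()
model.observe("seller_A", cooperated=True)
model.observe("witness_B", cooperated=True)
model.incorporate_report("witness_B", "seller_C", reported_trust=0.2)
print(model.is_trustworthy("seller_A"), model.is_trustworthy("seller_C"))
```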