Recently, argumentation frameworks have been extended to consider trust when defining preferences between arguments, given that arguments (or the information supporting them) from more trustworthy sources may be preferred to arguments from less trustworthy sources. Although this literature presents interesting results on argumentation-based reasoning and on how agents define preferences between arguments, little work takes into account agent strategies for argumentation-based dialogues that use such information. In this work, we propose an argumentation framework in which agents consider how much the recipient of an argument trusts others in order to choose the most suitable argument for that particular recipient, i.e., arguments constructed using information from those sources that the recipient trusts. Our approach aims to allow agents to construct more effective arguments, depending on the recipients and on their views on the trustworthiness of potential sources.
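To make the selection idea concrete, the sketch below models it under stated assumptions: the names (Argument, select_argument, the minimum-trust aggregation) and the example data are illustrative and not taken from the paper, which does not specify an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str
    # Sources whose information was used to construct this argument.
    sources: list = field(default_factory=list)

def select_argument(candidates, recipient_trust):
    """Pick the candidate whose supporting sources the recipient trusts most.

    recipient_trust maps source name -> trust value in [0, 1]. Here an
    argument's score is the minimum trust over its sources (its weakest
    link), which is one plausible aggregation, not necessarily the paper's.
    """
    def score(arg):
        if not arg.sources:
            return 0.0
        return min(recipient_trust.get(s, 0.0) for s in arg.sources)
    return max(candidates, key=score)

# Usage: the proponent holds several arguments for the same claim and picks
# the one built from sources this particular recipient trusts.
candidates = [
    Argument("vaccinate", sources=["tabloid"]),
    Argument("vaccinate", sources=["health_agency", "peer_reviewed_study"]),
]
recipient_trust = {"tabloid": 0.2, "health_agency": 0.9, "peer_reviewed_study": 0.8}
best = select_argument(candidates, recipient_trust)
print(best.sources)  # ['health_agency', 'peer_reviewed_study']
```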