Abstract
In this paper I critically evaluate the value neutrality thesis regarding technology, and find it wanting. I then introduce the various ways in which artifacts can come to influence moral value, and our evaluation of moral situations and actions. Here, following van de Poel and Kroes, I introduce the idea of value sensitive design. Specifically, I show how, by virtue of their designed properties, artifacts may come to embody values. Such accounts, however, have several shortcomings. In agreement with Michael Klenk, I raise epistemic and metaphysical issues with respect to designed properties embodying value. The concept of an affordance, borrowed from ecological psychology, provides a more philosophically fruitful grounding for the potential way(s) in which artifacts might embody values. This is due to the way in which it incorporates key insights from the study of perception generally, and from how we go about determining possibilities for action in our environment specifically. The affordance account as presented by Klenk, however, is insufficient. I therefore argue, first, that we should understand affordances based on whether they are meaningful and, second, that we should grade them based on their force.
1 Introduction
A key question that emerges in the philosophy of technology is whether technological artifacts can embody values. It is a truism at this point that technology is value-laden (van den Hoven and Weckert 2008), that is, technology can in some sense be causally efficacious in the kinds of things we come to value (i.e., as means to our ends, as having instrumental value). A far more pertinent question, however, concerns the status of these artifacts themselves: is it possible for these technological artifacts to embody values (Johnson and Noorman 2014; van de Poel and Kroes 2014; Klenk 2020)? Can artifacts, independently of their use, be said to have value? This is one of the more controversial questions in philosophy of technology, and it is the question I will concern myself with in this paper. “Value”, however, is a diverse concept, with many competing accounts of what exactly it is, and, moreover, what kinds of value we might be talking about (epistemic, moral, etc.). In this paper I will be concerned with moral values specifically, and whether it might be possible to embed such values into technological artifacts.
Consider the case of the American National Rifle Association (NRA), whose opponents advocate against the proliferation of firearms and claim that “Guns kill people”. The popular retort from the NRA, captured in their slogan, is that “Guns don’t kill people, people kill people.” Implicit in this response is the neutrality thesis regarding technology: the gun itself does not carry any value and is only instrumentally valuable. Its value is determined by its use by human beings, and this type of response denies that the technology itself embodies any values (Peterson and Spahn 2011). Implicit in the first slogan (“Guns kill”) is the view that the material components of the gun are irreducible to the social qualities associated with the user-of-the-gun (Latour 1999: 176). Some material components of the gun, therefore, can come to embody values independently of the qualities of the user. In this way an ordinary citizen, by virtue of using a gun, can become a threat to society and themselves. The second slogan (“Guns don’t kill, people kill”), however, seems to suggest that it is not the material components of the gun (its design, or whatever) that make it dangerous. The gun is simply a neutral carrier of intentions, and those intentions naturally flow from the person who is using the gun. If the user-of-the-gun is a good person, the gun will be used with discretion and in morally appropriate ways. Conversely, if the user is insane or morally bankrupt, the gun will be used in morally reprehensible ways: all this, without any change in the constitution of the gun itself. Latour considers the first slogan to involve a sociological interpretation of artifacts, and the second to offer us a material interpretation thereof (Latour 1999: 177). The “material” interpretation, following Latour, “make[s] the intriguing suggestion that our qualities as subjects, our competences, our personalities, depend on what we hold in our hands” (Latour 1999: 177). The “sociological” interpretation, in contrast, moralizes the situation. Here it is worth quoting Latour at length:
“For the NRA, one’s moral state is a Platonic essence: one is born either a good citizen or a criminal. Period. As such, the NRA account is moralist—what matters is what you are, not what you have. The sole contribution of the gun is to speed the act. Killing by fists or knives is simply slower, dirtier, messier. With a gun, one kills better, but at no point does the gun modify one’s goal” (Latour 1999: 177).
The suggestion here (from the NRA at least) is that if we can learn to simply be better persons, then we do not have to worry about the moral effects of artifacts. If we are trained, for example, to uphold better gun safety standards, then we would have done all we can. The above characterization of “material” and “sociological” interpretations is of course a rough caricature of the actual positions held and defended by various philosophers of technology. For example, nobody would claim that the gun makes no contribution to the killing, and nobody would claim that the gun is wholly responsible either. Those who oppose the proliferation of guns merely assert that these artifacts can affect those who make use of them. Conversely, gun control opponents merely claim that guns are but one efficient way of carrying out an act, with other things also capable of performing the same task (Latour 1999: 176; Verbeek 2005: 155). This caricature, however, serves to introduce the topic of value embedding in technology. In what follows I will briefly introduce and then critique the so-called “neutrality thesis” regarding technological artifacts (Illies and Meijers 2009; Peterson and Spahn 2011).
1.1 The neutrality thesis
The Neutrality Thesis states that technological artifacts are merely neutral means with which agents achieve their ends (Illies and Meijers 2009: 421). In this crude formulation the view has little support, given the society-wide effects that technological artifacts have. Let us call this the Strong Neutrality Thesis (SNT). A more sophisticated version of the value neutrality of technology is due to Peterson and Spahn (2011). The authors show that it is implausible that technology never affects the moral evaluation of action (2011: 423); the more modest position that remains they call the weak neutrality thesis (WNT). To make their point salient, they use the example of a terrorist
“who intends to kill ten million people in a big city by blowing up a small atomic bomb hidden in a suitcase. Compare the possible world in which the terrorist presses the red button on his suitcase and the bomb goes off, with the possible world in which he presses the red button on the suitcase but in which nothing happens because there was actually no bomb hidden in the suitcase. In the first example ten million people die, but in the second no one is hurt” (Peterson and Spahn 2011: 423).
In the example above, in the first case, the action of pushing the button is morally wrong. This, however, is not necessarily true of the second case. The point is that the mere presence of the bomb in the suitcase changes the moral evaluation of the action (Peterson and Spahn 2011: 423, my emphasis). In the case where millions die, we are outraged and might demand reparations. In the case where nobody dies, we might be outraged but it would make little sense to seek reparations. Thus the moral valence of the action changes, without necessarily changing the fact that in both cases an immoral act was committed. At the very least, therefore, technology can come to influence consequences, and our moral evaluation of those consequences. But can technology come to influence what we value?
1.2 Artifacts influencing value
Consider a seemingly trivial example, borrowed from Verbeek (2005: 5), of microwave ovens. Initially the microwave, as a novel technology, was targeted primarily at men. It was marketed as a technologically sophisticated device and appeared alongside video recorders in stores. Once this market became saturated, however, the microwave was marketed more as an ordinary cooking device, and started appearing alongside refrigerators and ovens (Verbeek 2005: 5). There was
“a gender divide whereby ‘brown goods’ such as televisions, video and hi-fi were seen as high-tech and male-oriented by the company engineers, marketers and retailers, while ‘white goods’ such as refrigerators, dishwashers and clothes washing machines were seen as low-tech and female oriented” (Henry and Powell 2017: 35).
Early designs of the microwave positioned it as a stereotypically ‘brown good’, appealing to single men who did not have wives at home to prepare their meals for them in advance (Cockburn 1997). However, after the microwave failed to sell, retailers reconsidered their options and decided to label it a ‘white good’ and market it to women. This involved, among other things, a change in colour scheme (from dark to light) (Henry and Powell 2017: 36). Moreover, the microwave made possible a new kind of meal: the frozen meal for one, which can be quickly prepared with minimal fuss. Before the microwave there existed few options for quickly preparing frozen meals; with this new technology it became easy, making dining alone a far more convenient event than it had been before. In this way the microwave can be said to have altered the possible ways we can take meals. This change in our available action schemes, in turn, leads us to value certain actions (such as eating alone) more than would have been possible without the technological artifact being present (Illies and Meijers 2009: 422).
“Thus technologies are not understood as neutral (a mere addition to a pre-given social system), or determinative (directly causal of changes in a social system) but as an embedded and co-constituting feature of society and its structures, cultures and practices” (Henry and Powell 2017: 36–37).
In this sense, technological artifacts are not simple “intermediaries”, but rather mediators, in the relation between humans and the world (Verbeek 2005: 114). They change how the world appears to us and our possible interactions with it. In this way, technology, broadly construed, can come to influence what we value, and increase the likelihood of certain states of affairs coming about. In what follows I will outline how technological artifacts can influence moral values.
1.3 Artifacts influencing moral values
Let us start with an examination of “killer robots”—weapon systems capable of performing lethal military operations that were once the domain of human beings. An example of this type of system is the “Predator”Footnote 1 drone, an unpiloted combat aerial vehicle capable of remotely performing military operations such as air-to-ground missile launches (Sparrow 2007: 63; Royakkers and van Est 2015: 560). Talk of drone technology has recently become part of our common lexicon, with former US president Barack Obama’s controversial use of drones to wage war in Iraq being a key trigger point for this debate. Moreover, the addition of Distinguished Warfare Medals for drone operators has also drawn the public’s attention. Such awards can outrank combat medals awarded to US troops, and the public’s uncertainty as to whether drone pilots deserve to be acknowledged in this way is suggestive of the lack of consensus with regard to drone warfare and its place in the military (Sparrow 2015: 380).
Consider an example from the Kosovo war, in which NATO aircraft were forced to fly above 15,000 feet to avoid enemy fire. In this case, any bombs deployed would have had to be dropped from this height. In one instance, this tragically resulted in NATO aircraft mistaking a convoy of buses transporting refugees for Serbian tanks, and subsequently bombing them (Royakkers and van Est 2015: 560). In such a situation, an unpiloted drone would be preferred, as it could fly at a lower altitude, taking greater care in target selection and the subsequent use of lethal force. Such drones also reduce the need for human lives to be put in danger in military operations, creating a new class of ‘cubicle warriors’ (ibid.: 560). They also may be cheaper than human soldiers in the long run (a military drone does not need a pension scheme or a hospital plan), and outperform human soldiers in specific domains (human soldiers tend to require sleep to function optimally) (Müller 2014: 4). There is, therefore, a strong prima facie case for driving the project to create ever more complex drone technology, and this is indeed reflected in the US government having funded research into the construction of autonomous robots since the early 2000s via the Defense Advanced Research Projects Agency (DARPA) (Wallach and Allen 2009: 49).Footnote 2 One could even argue that it would be morally impermissible to place a soldier in a life-threatening situation if that same task could be carried out by a military robot, in which case the use of such robots could be ethically defensible, and even encouraged.
Armed with this understanding of military drones more generally, we can consider a situation in which drones take lethal action and civilian casualties are incurred. This is not mere speculation: it is estimated that since 2004 between 769 and 1725 civilians have been killed in drone strikes in Pakistan, Yemen, Somalia and Afghanistan (Drone Warfare 2019). Moreover, drones are not infallible, and we can foresee a scenario in which a decision is made to launch a strike, but the target is misidentified (as in the Kosovo example above) (Tollon 2019: 20). In such cases it is still human beings who are pulling the trigger, albeit from a distance. Therefore, when evaluating such civilian deaths, we should look exclusively towards the human beings who can be held morally responsible for these deaths, since holding the drone responsible would be conceptually inappropriate. This is generally because (i) human operators are taken to be ultimately responsible for the actions of such drones, and (ii) moral responsibility is taken to entail punishment, and drones cannot be punished (e.g. Sparrow 2007: 74). However, notwithstanding the fact that human operators are held morally responsible, it is clear that the use of such drones makes the act of killing far easier.
The history of military technology is such that at each new stage of development we get better at killing from a distance: from swords to SWORDS (a remotely operated machine gun which makes use of the Special Weapons Observation Remote Direct-action System) (Wallach and Allen 2009: 20). Killing from a distance gets around two of the most common barriers to an effective war machine: firstly, soldiers’ fear of being killed, and, secondly, their resistance to killing others. The fact that machines currently lack the capacity for affect is seen as an improvement on human soldiers, as it means they (the machines) would not have these affective limitations.
In the example above, therefore, it is possible to discern a distinct change in moral values: in the classic case, soldiers are trained to engage with combatants and non-combatants in warfare. This is predicated on the fact that soldiers will in fact find themselves in situations where they will have to make decisions on the fly, without perfect information, while simultaneously being in the theatre of war itself, and, therefore, factoring into their decisions the potential consequences of their actions for their own lives. A courageous action, in such a scenario, might be risking one’s life to save another, as courage involves a personal sacrifice to do what is right. Thus (and this is but one example) the virtue of being courageous in this sense is valued, and indeed deemed morally commendable. By contrast, remotely operated drones outsource many of the affective components of warfare, meaning that decisions can be made outside the context of the theatre of war itself. Specifically, drone operators need not be concerned with whether they will live or die when performing a given military operation, and so will not factor this into the decisions they make, as there is no personal sacrifice to be made.Footnote 3 Here the distinction between moral and physical courage becomes paramount. Physical courage refers to the capacity to face bodily injury (or death), while moral courage refers to the capacity to make difficult moral decisions (Sparrow 2015: 383). On the surface, it seems as though drone operators may not exercise physical courage, due to their being geographically separated from the theatre of war. However, it seems plausible that they could cultivate moral courage, as they could of course refuse to follow an instruction to kill should they deem it problematic on moral grounds, despite whatever institutional pressure there may be to follow such a command. However, and this is crucial, in the case of military personnel who find themselves “on the ground”, moral and physical courage go hand in hand. It is by virtue of their proximity to conflict that such soldiers are said to act courageously, literally risking their lives for what they believe to be right. Their physical courage, in a sense, gives rise to moral courage.
This is not to say, however, that drone operators are therefore incapable of moral courage. It seems right to me that such persons can and do exercise the capacity for moral courage when they refuse orders that may be illegal or immoral. However, to my mind, the absence of physical risk matters significantly.Footnote 4 And it is this that constitutes a change in how we think about military ethics more generally: in the past, courage (at least in the military sense) was understood to involve both physical and moral criteria, with the two being joined at the hip. Now, however, it is possible to discern a change whereby the one can be decoupled from the other. I leave it open as to what the exact relationship between moral and physical courage may be. My point is simply that our usage of such teleoperated weapons has forced us to consider a change in what constitutes the moral value of “courage”, at least in military settings.
1.4 Intentionally designed features as embodying value
What I have shown above is that technological artifacts can influence what we come to value. Moreover, I showed how these artifacts can also change what comes to constitute a given moral value. In what follows, however, I would like to explore whether such artifacts can have value independently of their use. That is, can technological artifacts be “good” or “bad” by virtue of their designed properties alone? Should this question be answered in the affirmative, it would mean a significant burden would be placed on those who design such systems. There would need to be serious ethical considerations and extensive consultations around the intended and unintended consequences of specific design choices. Moreover, it would mean aligning the values of our technological systems with the values we aim for as a society (How 2017; Taddeo and Floridi 2018; Floridi et al. 2020). I will show that we should not focus exclusively on the designed properties of artifacts. First, this kind of approach does not allow values to change, and, second, it encounters the difficulty of figuring out what exactly designer intentions may be. From this I will introduce an affordance account of technological artifacts, which aims to shed light on how technological artifacts afford certain uses, and in this way, independently of their actual use, can encourage or discourage certain actions.
A good place to start for such a design-focussed account is provided by van de Poel and Kroes (2014), where the authors claim that value sensitive design (VSD) can lead to artifacts capable of embodying value (2014: 112). This account turns on technical artifacts being intentionally designed to have certain features, and on the claim that, in some cases at least, these features can result in technology embodying value (2014: 112). I will outline and then critique their argument.
1.5 The intentional account
Van de Poel and Kroes make use of two contrasting examples to underscore their thesis: sea dykes and knives. Sea dykes, as flood protection embankments, serve the function of protecting low-lying land near the sea from flooding. As the authors note, the point is not that sea dykes are instrumentally valuable (i.e., that they can be used as effective vehicles for safety), but rather that safety is an integral part of their function (i.e., safety, as a design specification, is part of their makeup) (van de Poel and Kroes 2014: 114). Contrast this with a kitchen knife: the function of such a knife is to cut things. Such cutting may be instrumentally valuable, for example, for the maintenance of good health or well-being. However, and significantly, the realisation of these final values is not part of the function of knives, nor are these values to be found in the design specification of knives in general (van de Poel and Kroes 2014: 114). In other words, in the case of the knife, its function and the final values that can be achieved via this function can be separated. This is not the case in the sea dyke example, as their instrumental purpose (prevention of flooding) is necessarily tethered to their final value, the value for which they are intentionally designed (safety from flooding) (van de Poel and Kroes 2014: 114).
Based on this discussion, the authors go on to claim that:
“the embodiment of extrinsic final values in technical artifacts thus depends on both an intentional condition (‘x has been designed for G’) and on a condition that primarily refers to physical properties (‘The designed properties of x have the potential to achieve or contribute to G (under the appropriate conditions’)” (Van de Poel and Kroes 2014: 118).
Thus, their account hinges importantly on the designed properties of the artifact in question, as these artifacts can only be said to properly embody value if they have been intentionally designed as such. However, that an artifact has been intentionally designed to embody a specific value does not mean that it will always realise that value in practice (van de Poel and Kroes 2014: 119). In this way, there is a crucial difference between the intended value (that which designers aim to embody), the embodied value, and the realised value of a technical artifact. The embodied value is that which is intentionally designed, whereas the realised value is how this value comes about in practice or use (van de Poel and Kroes 2014: 119). The context in which a technical artifact is embedded, therefore, plays a crucial role in co-determining whether the intended or embedded value is indeed realised (van de Poel and Kroes 2014: 119).
In other words, intended design underdetermines the value that an artifact may come to be embedded with. There are cases where the specific use of a technology in different situations leads to the realisation of different values (van de Poel and Kroes 2014: 120). In addition to this, the authors also point out that VSD is only the first step in the process of creating an artifact that properly embodies a relevant value. This implies that designers have an obligation to not just consider their design intentions, but also the potential contexts in which the device will be used, anticipating the potential for multiple realisability of values in practice.
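To keep the three notions apart, here is a minimal sketch of the intended/embodied/realised distinction. The "appropriate conditions" check is my own illustrative assumption; van de Poel and Kroes offer no formal model. The point it encodes is simply that embodied value is fixed by design, while its realisation is co-determined by context:

```python
# A toy rendering of the intended / embodied / realised value
# distinction (van de Poel and Kroes 2014). The condition table is
# hypothetical, for illustration only.

def realised_value(embodied_value: str, context: set) -> str | None:
    """Embodied value is realised only under appropriate conditions;
    the context of use co-determines whether realisation occurs."""
    APPROPRIATE_CONDITIONS = {
        "safety from flooding": {"dyke maintained", "water level normal"},
    }
    needed = APPROPRIATE_CONDITIONS.get(embodied_value, set())
    return embodied_value if needed <= context else None

# A well-maintained sea dyke realises the value it embodies...
print(realised_value("safety from flooding",
                     {"dyke maintained", "water level normal"}))
# ...while a neglected one still embodies the value without realising it.
print(realised_value("safety from flooding", {"water level normal"}))  # None
```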
1.6 Problems with the value sensitive design account
While the argument presented by van de Poel and Kroes is significant for the way in which it makes salient how technologies can embody values, I will argue below that this account still has some shortcomings. Specifically, I will follow Klenk (2020), who argues against van de Poel and Kroes by showing that their account has both metaphysical and epistemic issues. From this he introduces the concept of an affordance, borrowed from ecological psychology, into discussions surrounding value embedding in philosophy of technology.
1.7 Metaphysical issues
The first issue that Klenk raises is metaphysical, and pertains to the intended use versus the designed use of an artifact (2020: 5). According to Klenk, IHAVEFootnote 5 creates a disjuncture between actual use and the question of whether an artifact embodies a value (2020: 5). This suggests that while the designers of artifacts are the source of value for the various technical artifacts, it is not necessary for them to also sustain those values in practice. An implication of this is that how an artifact comes to be used is not a requirement when considering what value it embodies. While van de Poel and Kroes do acknowledge that designers must consider the potential uses of the artifact, this consideration is only applicable insofar as it features in the design phase (2014: 120). And while the value neutrality thesis (VNT) claimed that an artifact’s value is only to be found in its use, van de Poel and Kroes seem to be claiming that use has no bearing whatsoever on value (2014). In such a scenario, we would always have to look at the designed intentions of an artifact to determine its value. It is here that the metaphysical issue rears its head: if the value of an artifact is “fixed” at its origin, then it does not seem possible for the embodied value of an artifact to change over time (Klenk 2020: 5).
An implication of this is that should we want to claim that the value of an artifact has changed, we would need to claim that the designed intentions also changed. This is of course impossible: we cannot go back in time and change the intentional history associated with a particular technical artifact (Klenk 2020: 6). The only kind of value change that is possible on this account is the elimination of value completely: once an artifact stops contributing to the relevant designed value, it ceases to have any value whatsoever. There are, however, cases of appropriation, where the embedded value of the technology is shown to be subject to change, without any change in the artifact’s intentional history (Klenk 2020: 6). We, therefore, have both metaphysical and practical grounds for questioning the tenability of the intentional history account.
Moreover, there is the issue of how designer intentions are supposed to feature in the technology itself. If it is designer intentions that really matter, then what is the use of claiming that technology embodies value? If we ought to look toward designer intentions, then it seems that any value that we would find in technology would simply be a derivative of those which the designers had in mind. It, therefore, makes little sense to speak of technology embodying values at all, as the values seem to be in the heads of the designers. This leads to certain epistemic issues.
1.8 Epistemic issues
To see the epistemic issues with IHAVE, once again consider the determining role that designed intentions play in the value an artifact comes to embody. To fix the value of a given artifact, therefore, it should be possible to have reliable access to those designed intentions, to ensure that our judgment is epistemically sound. There are two possible ways in which these intentions can be uncovered: directly or indirectly (Klenk 2020: 7). Directly observing intentions is impossible,Footnote 6 and the best we can hope for in this regard is an accurate inference. At best, this inference gives us indirect access to intentions.
Indirect access can be obtained in a number of ways. First, one could look at the observable features of the artifact in question, and reverse engineer what the design intentions may have been. However, since design intentions underdetermine design choices, this route seems fraught with difficulty (Klenk 2020: 2). Second, designers often make their intentions clear, either verbally or through explicit documentation of the design process. In such cases, we seem to have a reliable way to track design intentions, as these documents are in some cases publicly accessible (or can at least be uncovered upon request). These documents can illuminate the designed intentions and how they relate to the physical properties of the artifact. Klenk, however, points out that we can have situations in which the same artifact has two different intentional histories associated with it (2020: 8). This is clearest in cases of replication, and specifically replication with the intention of novel usage. It is possible to imagine an engineer, E, who designs a specific artifact A, recording along the way their design intentions. Now another engineer, E*, comes across A, but intends to use it for very different purposes. E* records their design intentions for product A*, which are substantially different from those of E. The physical properties of A and A* are identical, yet they have different intentional histories. In such a scenario, we would have to decide which intentions matter most, and only then would we be able to determine which values the physically identical artifacts have. IHAVE, however, does not provide us with certainty as to which intentions “count”, creating epistemic uncertainty (Klenk 2020: 8). Given these difficulties with IHAVE, Klenk argues that we should instead investigate an affordance account of value embedding in artifacts.
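The replication worry can be made vivid with a small sketch. The following toy model is my illustration, not anything found in Klenk or in van de Poel and Kroes, and all names in it are hypothetical. It shows why a value-assignment that reads value off intentional history cannot be settled by inspecting the artifact itself:

```python
from dataclasses import dataclass

# Toy model of Klenk's replication case: two artifacts that are
# physically indistinguishable but carry different recorded design
# intentions. Any IHAVE-style value assignment must consult the
# intentional history, which the physical object underdetermines.

@dataclass(frozen=True)
class Artifact:
    physical_spec: frozenset  # observable, designed properties
    intended_value: str       # documented design intention

spec = frozenset({"blade", "handle", "sheath"})
a = Artifact(spec, intended_value="food preparation")    # engineer E
a_star = Artifact(spec, intended_value="self-defence")   # engineer E*

def embodied_value_ihave(artifact: Artifact) -> str:
    # On IHAVE, value is fixed by intentional history alone.
    return artifact.intended_value

# Identical observable properties...
assert a.physical_spec == a_star.physical_spec
# ...yet divergent value verdicts, with no principled way to say
# which history "counts" for a given physical token.
assert embodied_value_ihave(a) != embodied_value_ihave(a_star)
```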
2 The affordance account
Klenk suggests that we look towards the literature on affordances, which finds empirical support in ecological psychology, founded by Gibson (1979). Klenk claims that artifacts can embody values if they enable valuable actions. In other words, artifacts can afford certain actions (as a chair affords sitting), and these affordances are response-dependent (Klenk 2020: 9). They are response-dependent in the sense that they make some or other action more likely, given the physical properties of the artifact in conjunction with the given context (in one scenario a staircase may afford walking up, but in another it may afford sitting). This also underscores the fact that, should the affordance account prove successful, it is a relational account of value embedding.
As noted above, the concept of an affordance was initially used in ecological psychology. Here, it was operationalised to show how different environments “offer” various potentialities of action to a given organism. Gibson (1979) used the term to refer to perceived opportunities to engage with objects in the world. The novelty of this account, at the time, was its insistence that perception is not the passive interpretation of environmental information. Rather, perception is active and direct: our activities are goal-directed, and we do not merely perceive the world but also perceive the possibilities for action that our world presents to us. This “basic” reading of affordances might suggest that they are merely natural properties of the world. However, one can easily extend it to account for cultural affordances. On this proposal, the skilled learning of individual agents, in their given niches, can change the possibilities for action in a given affordance landscape (Ramstead et al. 2016: 3).
For example, a desk may afford writing, reading, etc. for an adult human.Footnote 7 However, for an animal the same desk may afford shelter. Which type of affordance is more salient depends on the characteristics of the entities in question (human, animal, or machine). Given knowledge about the entities involved, we can make reasonable inferences as to what a given object might afford (such as knowing that animals are unlikely to use a desk for writing). In this way the affordances an artifact embodies depend both on its physical makeup and on the characteristics of the subject. Moreover, we can imagine that shared cultural history and social learning would also come to play a role in the kinds of affordances that agents would find to be most salient (Ramstead et al. 2016). For example, for chimpanzees, rocks may afford the cracking of nuts, but for lizards they may only afford basking in the sun.
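To fix ideas, the relational character of affordances can be sketched in a few lines of code. This is a minimal illustration of the desk example; the listed properties, abilities, and requirement rules are my own assumptions rather than part of Gibson's or Klenk's accounts. What it shows is that the afforded actions are computed from the artifact and the agent jointly, never from the artifact alone:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    name: str
    properties: frozenset  # designed/physical features

@dataclass(frozen=True)
class Agent:
    name: str
    abilities: frozenset   # relevant capacities of the perceiver

# Each candidate action requires certain artifact properties AND
# certain agent abilities: the affordance is a relation between them.
ACTION_REQUIREMENTS = {
    "writing": ({"flat_surface", "stable"}, {"literate", "can_sit"}),
    "shelter": ({"covered_underside"}, {"fits_underneath"}),
}

def afforded(artifact: Artifact, agent: Agent) -> set[str]:
    """Actions this artifact affords this agent, in this pairing."""
    return {
        action
        for action, (props, abilities) in ACTION_REQUIREMENTS.items()
        if props <= artifact.properties and abilities <= agent.abilities
    }

desk = Artifact("desk", frozenset({"flat_surface", "stable", "covered_underside"}))
adult = Agent("adult human", frozenset({"literate", "can_sit"}))
dog = Agent("dog", frozenset({"fits_underneath"}))

print(afforded(desk, adult))  # {'writing'}
print(afforded(desk, dog))    # {'shelter'}
```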
Klenk’s first point is to assuage worries that affordances are merely secondary properties of artifacts, ones that depend in an important sense on whether they are perceived, desired, sought out, etc. This is important, for if the concept is only of secondary importance, it would lend credence to the VNT, since the value of an artifact would then plausibly be determined by how it is used by subjects. A response to this worry is to argue that even response-dependent properties can in fact be objective (Klenk 2020: 14). To see how this is the case, consider the example of the perception of red, used by Klenk. When we claim that
“something red is defined by looking red to normal observers in normal circumstances, then that means that in normal circumstances, normal observers will experience the object as red. It does not entail, however, that the thing looking red is what makes the thing red” (Klenk 2020: 14).
In this sense, the property of “being red” is an objective one and does not necessarily depend on a subject being present, nor on it being perceived.
Second, Klenk shows how these response-dependent properties are indeed values (2020: 15). To do this he claims that affordances can enable or enhance the chances of an action coming about. In this sense they can be understood as “helping or encouraging” certain actions, and these are linked to the dispositions of the agent in question (Klenk 2020: 15). These dispositions can be both instrumentally and finally valuable (i.e., valuable in themselves). A disposition to be curious, for example, is instrumentally valuable (in that it is useful for uncovering certain facts about the world), but also seems valuable in itself. Thus, the affordance(s) of an artifact are to be conceived of as part of the set of enabling conditions for the use of that artifact. These enabling conditions can be valuable in themselves, and so artifacts, by embodying affordances, also embody values (Klenk 2020: 16). It is here that Klenk’s account comes to an end, but I would like to suggest that his argument could be helpfully extended by considering meaningful affordances, graded by their force. Meaningful affordances are those which solicit specific kinds of actions, whereas mere affordances simply provide possibilities for action more generally. In other words, I will show that not all affordances are experienced or created equally.
2.1 Extending the affordance account
The first thing to note about Klenk’s account is that, while he acknowledges that affordances are response-dependent properties, he claims that they are nonetheless objective properties of artifacts, and hence, it seems, properties with a permanent ontological status. This kind of claim can be challenged: it is clear, for example, that a glass of water affords drinking, yet whether the act of drinking is solicited depends on how thirsty I am. In this sense, there are different ways in which we may experience the same affordance. This solicitation, critically, depends on the affordance’s relevance to our concerns (Dings 2018: 682). In this way the notion of an affordance does not seem to be the objective kind of property that Klenk argues it to be. However, I do not think that what I have said above refutes Klenk’s argument, as I will show below.
Consider again the claim that what an object affords is not necessarily dependent on its perception. I argued above, however, that we seem to have reason to doubt this claim, as affordances may solicit various responses from subjects, based on those subjects’ concerns (such as being thirsty). However, this subjective element need not lead us to argue that affordances are not objective properties. It seems plausible to say that while, phenomenologically, our concerns shape how an object might afford actions, that the object affords something does not change, and it is this sense of affordance that remains objective (Dings 2018: 684). So while my being thirsty determines whether I perform the action of drinking water, this does not change the objective feature of the glass of water (that it affords drinking). However, it is here that the question of “force” rears its head. In the example above I claimed, following Klenk, that an affordance is indeed an objective property. However, it seems clear that, depending on the concerns of the perceiving subject, the associated strength of the affordance is subject to change. I think that this could be a useful conceptual resource to add to Klenk’s argument.
2.2 Towards a robust affordance account
I believe that the concept of an affordance can be of great aid to researchers in the philosophy of technology. However, in order for this to properly come to fruition we need to add some nuance to the account developed by Klenk. Specifically, while affordances are indeed objective properties of the world, the perceiving agent’s phenomenology plays a significant role. I believe that Klenk’s account above is atomistic in its construal of affordances, and that it would benefit from a more holistic interpretation (Dings 2020). Such a holistic interpretation acknowledges that agents have specific concerns, and that “these concerns are embedded in the agent’s wider concerns, values, projects and commitments” (Dings 2020: 1).
Specifically, we require an understanding of affordances that helps us explain why certain actions might be made more likely than others. As noted, it is not enough to simply look at the designed properties of artifacts. We must also take seriously the psychology of those who will be using these artifacts: human beings. I will offer two extensions of Klenk’s account. The first involves an elaboration on the “meaningfulness” of an affordance, and the second concerns the “force” of an affordance.
2.3 The meaningfulness of an affordance
Firstly, then, we need a means of cashing out the likelihood of various possibilities for action. One way to do this would be to distinguish between “merely relevant” possibilities for action and “meaningful” possibilities for action (Dings 2020: 2). Meaningful affordances would be those that relate to, for example, the concerns or values held by the agent in question. “Merely relevant” affordances, on the other hand, would have a more impoverished associated phenomenology. When we consider personal history, values, and self-narrative (that is, when we pay attention to the embedded nature of affordances), we can see how these things come to shape the possibilities for action that an agent may perceive.
For example, consider an agent’s personal history. This speaks to the role of memory, as for one person a path might be experienced as a means to get out of a forest, while to another it might be experienced as the way home, which has a far richer associated phenomenology. In this example, the one agent views the path as a “mere” affordance, whilst the other views it as a meaningful affordance, and this is due in large part to their values and commitments, which are a result of their personal histories (which collectively can be called “concerns”) (Dings 2020: 9). Such concerns are not static: they are a product of a multitude of factors and have both forward- and backward-looking aspects. An agent’s history shapes who they are at any given moment, and their goals also play a role in guiding their actions. It is, therefore, important that such agential interests are taken into account. Distinguishing between meaningful and “mere” affordances can aid us in this task.
Following from this, it makes sense to distinguish how various kinds of actions might be identified by agents. This helps us keep the notion of a meaningful affordance precise, while preserving its inherent pluralism. Such a scale would move from relatively low-level to high-level action identification (Dings 2020: 10). At the low end of the scale we find affordances that suggest how an action is to be performed (for example, a handle that affords gripping). At the high end we find affordances that suggest why an action is to be performed, and this often involves “specifying the reasons or long-term goals that are relevant” (Dings 2020: 11). For example, imagine coming across a used cardboard box in the street. The box, at a rather low level, might simply afford being picked up. However, at a higher level, it might afford recycling, being thrown away, etc. These higher-order, “why” identifications once again draw our attention to the embeddedness of agents’ concerns, and the effects of this on how affordances are experienced.
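The distinction can be sketched schematically. In the toy model below, the numeric levels and the matching rule are my own illustrative assumptions (Dings offers no formalism): an affordance counts as meaningful for an agent when a high-level, "why"-identification of the afforded action connects with that agent's concerns.

```python
# Action identifications for one and the same object, ordered from
# low-level ("how") to high-level ("why"), loosely following the
# scale described in Dings (2020). Values are illustrative.
BOX_IDENTIFICATIONS = {
    "grip the flaps": 1,     # how the action is performed
    "pick the box up": 2,    # what is being done
    "recycle the box": 3,    # why: tied to long-term goals and values
}

def meaningful_identifications(identifications: dict, concerns: set) -> set:
    """Identifications that are high-level AND connect to the agent's
    concerns mark the affordance as meaningful rather than 'mere'."""
    return {
        action
        for action, level in identifications.items()
        if level >= 3 and action in concerns
    }

passerby = set()                        # no relevant concerns
environmentalist = {"recycle the box"}  # sustainability commitments

print(meaningful_identifications(BOX_IDENTIFICATIONS, passerby))
# set(): for the passerby the box remains a mere affordance
print(meaningful_identifications(BOX_IDENTIFICATIONS, environmentalist))
# {'recycle the box'}: a meaningful affordance
```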
2.4 The force of an affordance
My second extension of the affordance account has more to do with the concept of an affordance itself. I believe that it is possible to grade affordances based on their “force”, that is, whether they are “demanding” or “inviting” (Dings 2018: 689). The kinds of technology we produce, based on their design, may be more or less inviting for certain kinds of actions. For example, while an AK-47 and a handgun both afford the use of lethal force, the AK-47 is more demanding in this regard, given that it is explicitly designed to be as lethal as possible, whereas a small handgun might be argued to be only designed for self-defence. While the notion of a meaningful affordance has to do with the concerns of the agent in question, the notion of force here concerns the psychological machinery that agents such as ourselves possess, and our subsequent interaction with technical artifacts. Here it might be possible to draw on work in the behavioural sciences, where cognitive biases are studied in detail (Kahneman 2011). A major applied stream of research in this field is nudge theory, which is often used in the service of socially desirable outcomes (Thaler and Sunstein 2008). A key presupposition of this approach is that there are reliable ways in which we fail to reason properly about the world due to various constraints (time, information, etc.). In cases such as this, successful behavioural interventions can increase the likelihood of positive outcomes, without necessarily changing the economic incentives of the agent (Thaler and Sunstein 2008: 8).Footnote 8 For example, switching to opt-out (as opposed to opt-in) retirement plans in the UK resulted in a 37% increase in eligible private sector worker participation (Cribb and Emmerson 2016).
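A similar sketch can illustrate grading by force. The three-step ordering below is my own illustrative scale; the paper proposes the inviting/demanding contrast, not a numeric metric. It is applied here to the gun example and to a nudge-style default, the latter being one way to read the UK pension result just cited:

```python
from enum import IntEnum

class Force(IntEnum):
    """Illustrative grading of how strongly an artifact solicits an
    action, from merely making it available to pressing it on us."""
    INVITING = 1
    SOLICITING = 2
    DEMANDING = 3

# Toy assignments; a real analysis would weigh design features and
# context rather than consult a hard-coded table.
FORCE_TABLE = {
    ("handgun", "use lethal force"): Force.INVITING,
    ("AK-47", "use lethal force"): Force.DEMANDING,
    ("opt-out pension", "enrol"): Force.SOLICITING,
    ("opt-in pension", "enrol"): Force.INVITING,
}

def force_of(artifact: str, action: str) -> Force:
    return FORCE_TABLE.get((artifact, action), Force.INVITING)

# Both guns afford lethal force, but with different force:
assert force_of("AK-47", "use lethal force") > force_of("handgun", "use lethal force")
# The opt-out default solicits enrolment more strongly than opt-in:
assert force_of("opt-out pension", "enrol") > force_of("opt-in pension", "enrol")
```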
Socially beneficial outcomes, in some cases, can therefore be seen as kinds of engineering problems that thoughtful design can help to solve. The inverse, of course, is also true. Recommender systems, for example, might suggest problematic content to users (Burr et al. 2018; Alfano et al. 2020). The affordance account introduced in this paper, and the specific notion of “force”, allows us to better understand these and other issues by giving us a framework with which to evaluate how and why certain probabilities for action might be increased or decreased. To do so requires us to take seriously our cognitive biases and to ensure our artifacts are designed appropriately. Examples of these biases include the primacy bias, availability bias, and priming bias (see Kahneman 2011).
While it is beyond the scope of this paper to go into each of these biases in detail, they do reveal something interesting regarding affordances: their dual nature. Affordances are both descriptive and prescriptive. They are descriptive in the sense in which “they constitute the privileged mode for the perceptual disclosure of aspects of the environment” (Ramstead et al. 2016: 5). That is, they help us describe aspects of the environment that may be perceived. They are prescriptive in that “they specify the kinds of action and perception that are available, situationally appropriate and, in the case of social niches, expected by others” (Ramstead et al. 2016: 5). In this prescriptive sense, then, affordances help us track what kinds of actions would be appropriate or expected from particular agents. Considering the gun example introduced earlier, the AK-47 and the handgun can both be descriptively understood as affording “lethal force”. Prescriptively, however, the force with which the AK-47 solicits this action is far greater, given the range and kinds of actions it invites.
3 Conclusion
In this paper, I have argued against the Neutrality Thesis regarding technology. I then introduced a value embedding account of technology, which was shown to have metaphysical and epistemic shortcomings. Following Klenk (2020), I argued for an affordance-based account of value embedding. I further argued that such an account can be extended in fruitful ways, especially if it takes into account the meaningfulness and the force of the affordance in question. This creates the interesting situation in which our design of technological artifacts is not the determining factor in the kinds of values they come to embody. Rather, there is always a dance between designed properties and the way in which we perceive them. This brings into sharper focus how the affordance account of value embedding might be used in practice, and how we might more reliably cash out the values embedded in our technologies.
Notes
More recent iterations of this technology include the “Reaper” and the “Avenger” drones (Sparrow 2015: 380).
The US Department of Defence spends $5 billion per year on ‘unmanned systems’ (their sexist terminology, not mine), while DARPA has an annual budget of $3 billion (Müller 2014: 4).
This is not to say that there is no risk to their well-being, however, as there are cases where drone operators experience severe PTSD from the actions that they are required to perform (Sparrow 2015: 386).
The current usage of drone technology is highly asymmetrical: one finds these systems being used by, for example, the US, against groups in the Middle East who lack the economic and technological capacity to make use of such systems. We can imagine a future, however, where these groups have access to such technology and can use it to target the geographically remote drone control centres. In such a scenario, perhaps drone operators would be able to exercise physical courage, due to the risks of their occupation.
IHAVE is Klenk’s term of art for the intentional history account of value embedding, which is how he understands van de Poel and Kroes’ argument.
Unless of course you are the designer and are observing your own intentions. However, for these intentions to count they should be amenable to some kind of third-party verification, and so the first-person perspective is inappropriate for such an inquiry.
This account of affordances shows that when objects afford certain actions, they are not acting themselves. Therefore, this account of value embedding in technology is silent on whether technology can have moral agency.
This is not to say that all behavioural interventions are successful. However, much can still be learnt from failed interventions (see Osman et al. 2020).
References
Alfano M et al (2020) Technologically scaffolded atypical cognition: the case of YouTube’s recommender system. Synthese. https://doi.org/10.1007/s11229-020-02724-x
Burr C, Cristianini N, Ladyman J (2018) An analysis of the interaction between intelligent software agents and human users. Minds Mach. https://doi.org/10.1007/s11023-018-9479-0
Cockburn C (1997) Domestic technologies: Cinderella and the engineers. Women’s Stud Intern Forum 20(3):361–371. https://doi.org/10.1016/S0277-5395(97)00020-4
Cribb J, Emmerson C (2016) What happens when employers are obliged to nudge? Automatic enrolment and pension saving in the UK. Available at: https://www.ifs.org.uk/publications/8723
Dings R (2018) Understanding phenomenological differences in how affordances solicit action. An exploration. Phenomenol Cogn Sci 17(4):681–699. https://doi.org/10.1007/s11097-017-9534-y
Dings R (2020) Meaningful affordances. Synthese. https://doi.org/10.1007/s11229-020-02864-0
Drone Warfare (2019) The Bureau of Investigative Journalism. Available at: https://www.thebureauinvestigates.com/projects/drone-war. Accessed 11 July 2019
Floridi L et al (2020) How to design AI for social good: seven essential factors. Sci Eng Ethics 26(3):1771–1796. https://doi.org/10.1007/s11948-020-00213-5
Gibson JJ (1979) The ecological approach to visual perception. Houghton Mifflin, Boston, MA
Henry N, Powell A (2017) Sexual violence in the digital age, social & legal studies. Palgrave Macmillan, London. https://doi.org/10.1177/0964663915624273
van den Hoven J, Weckert J (eds) (2008) Information technology and moral philosophy. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511498725
How JP (2017) Ethically aligned design: a vision for prioritizing human well-being with autonomous and intelligent systems—version 2. IEEE Contr Syst. https://doi.org/10.1109/MCS.2018.2810458
Illies C, Meijers A (2009) Artifacts without agency. The Monist 92(3):420–440
Johnson DG, Noorman M (2014) Artefactual agency and artefactual moral agency. In: Kroes P, Verbeek PP (eds) The moral status of technical artifacts. Springer, New York, pp. 143–158. https://doi.org/10.1007/978-94-007-7914-3
Kahneman D (2011) Thinking, fast and slow. Penguin Books, London
Klenk M (2020) How do technological artifacts embody moral values? Philos Technol
Latour B (1999) Pandora’s hope. Harvard University Press, Cambridge, MA
Müller VC (2014) Autonomous killer robots are probably good news. Front Artif Intell Appl 273:297–305. https://doi.org/10.3233/978-1-61499-480-0-297
Osman M et al (2020) Learning from behavioural changes that fail. Trends Cogn Sci 24(12):969–980. https://doi.org/10.1016/j.tics.2020.09.009
Peterson M, Spahn A (2011) Can technological artifacts be moral agents? Sci Eng Ethics 17(3):411–424. https://doi.org/10.1007/s11948-010-9241-3
van de Poel I, Kroes P (2014) Can technology embody values? In: Kroes P, Verbeek PP (eds) The moral status of technical artifacts. Springer, Netherlands
Ramstead MJD, Veissiere SPL, Kirmayer LJ (2016) Cultural affordances: scaffolding local worlds through shared intentionality and regimes of attention. Front Psychol 7:1–21. https://doi.org/10.3389/fpsyg.2016.01090
Royakkers L, van Est R (2015) A literature review on new robotics: automation from love to war. Int J Soc Robot 7(5):549–570. https://doi.org/10.1007/s12369-015-0295-x
Sparrow R (2007) Killer robots. J Appl Phil 24(1):62–78. https://doi.org/10.1111/j.1468-5930.2007.00346.x
Sparrow R (2015) Drones, courage, and military culture. In: Lucas G (ed) Routledge handbook of military ethics. Routledge, New York. https://doi.org/10.4324/9780203148433
Taddeo M, Floridi L (2018) How AI can be a force for good. Science 361(6404):751–752. https://doi.org/10.1126/science.aat5991
Thaler R, Sunstein C (2008) Nudge: improving decisions about health, wealth, and happiness. Yale University Press, New Haven
Tollon F (2019) Moral agents or mindless machines? A critical appraisal of agency in artificial systems. Magyar Filozofiai Szemle 63(4):9–23
Verbeek PP (2005) What things do. The Pennsylvania State University Press, University Park, PA
Wallach W, Allen C (2009) Moral machines. Oxford University Press, New York
Funding
Open Access funding enabled and organized by Projekt DEAL. This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project 254954344/GRK2073 “Integrating Ethics and Epistemology of Scientific Research”. The author is also a research fellow at the Centre for Artificial Intelligence Research (CAIR).