Some questions that arise in connection with Autonomous Machines (AMs) are philosophical. These include questions about agency (and moral agency), in connection with concerns about whether AMs can be held responsible and blameworthy in some sense, as well as questions about autonomy and trust.

Most of us associate autonomy with concepts such as liberty, dignity, and individuality. Others, however, link autonomy to “independence”, defining it as a “capacity or trait that individuals manifest by acting independently”. While it is difficult to ascribe characteristics such as liberty and dignity to AMs, we have seen that these machines do appear to be capable of “acting independently”. So, if we can show that AMs can indeed act independently, it would seem plausible to describe AMs as entities that are also autonomous in some sense.

What would it mean for a human to trust an AM? Can we trust AMs to always act in our best interests, especially AMs designed in such a way that they cannot be shut down by human operators? There is a need to clarify what is meant by the concept of trust in general, i.e., the kind of trust that applies in relationships between humans.

Definitions of ‘trust’ that focus mainly on reliance, however, do not always help us to understand the nature of ethical trust. For example, I rely on my phone to dial but I do not “trust” it to do so. Conversely, I trust my daughter implicitly, but I cannot always rely on her to organise her important papers. Thus, trust and reliance are not equivalent notions; while reliance may be a necessary condition for trust, something more is needed for ethical trust.

Because I am unable to have a trust relationship with my phone, does it follow that I also cannot have one with an AM? Or does an AM’s ability to exhibit some level of autonomy, even if only functional autonomy, make a difference? Consider that I am able to trust a human because the person in whom I place my trust not only can disappoint me (or let me down) but can also betray me; e.g., that person, as a fully autonomous (human) agent, can freely elect to breach the trust I have placed in her. So it would seem that an entity’s having at least some sense of autonomy is required for it to be capable of breaching the trust that someone has placed in it. In this sense, my phone cannot breach my trust or betray me, even though I may be very disappointed if it fails to dial. Although my phone does not have autonomy, we have seen that an AM has (functional) autonomy and thus might seem capable of satisfying the conditions required for a trust relationship. But even if an AM has (some level of) autonomy, and even if having autonomy is a necessary condition for a trust relationship, it may not be a sufficient condition. So, we can further ask whether any additional requirement may also need to be satisfied.
