
Values and Design Ethics in AI

Given that it would be better to create technologies whose designs support values in a way that aligns with the values of their users and (perhaps especially) with those of society more broadly, how might a process to assure this kind of design be devised?

Article Dec 21, 2016

Nadav Zohar

In October 2014 I attended a conference whose closing keynote described how the technology of the future will not have an "interface" as we are accustomed to thinking of it: objects will simply respond to our voices, movements, and glances--perhaps directly to our intentions--without our having to navigate UIs or even lift a finger. Several videos in the presentation demonstrated emerging technologies with these capabilities: a "smart" office completely outfitted with sensors, projectors, and IoT devices; a wrist-worn device that nudges the wearer's hand so that it produces fine artistic drawings; a headband with mechanical puppy ears that move to accentuate the emotions evident in the wearer's facial expressions; and so on.

This future was presented in a celebratory way, as something to be eagerly brought about. But my gut reaction was the opposite: it repulsed me, even depressed me. I thought, "Maybe I should just pack up my wife and kids and go live in a cabin in the woods." I drove home from the conference shaken and disillusioned, but also puzzled about why I had had that particular response.

Eventually I realized it was a matter of conflicting values: the emerging technologies--in fact, the general worldview--demonstrated in the presentation seemed to implicitly assert the primacy of certain values I did not share. The drawing device and puppy ears, for example, championed a kind of transhumanism that clashed with my preference for human authenticity; the "smart" office put convenience and connectedness above all else, whereas I value patience and solitude.

Like many insights, this one led to an empirical question: given that it would be better to create technologies whose designs support values in a way that aligns with the values of their users and (perhaps especially) with those of society more broadly, how might a process to assure this kind of design be devised?

I first conducted a study to demonstrate that the people who create technology tend to have a somewhat different value structure than that of the wider public. Demonstrating this was important because it would show that technology creators cannot simply assume the changes they want to bring about through their creations are welcome; a process to discover society's value structure and honor it through design would therefore be necessary.

For inspiration I looked at several existing processes that do something like this. One was the Contextual Design process, developed by Hugh Beyer and Karen Holtzblatt. Another was Value Sensitive Design, developed by Batya Friedman and Peter Kahn. Both incorporate iterative processes, consisting of multiple stages of research and prototyping, to articulate and align with the values of users, although it did not seem to me that either process expressly considered the higher-order impacts of a design, or areas where values might conflict with one another.

My biggest inspiration was taken from the Amish, who are keenly in touch with their own values and, because of this, have for many generations been able to maintain a high degree of control over the speed and direction of change in their communities. They do this by carefully considering what technologies they will or will not adopt.

Their adoption decisions fall into four categories: rejection (e.g. televisions); full adoption (e.g. roller skates); adoption with some modification to the design (e.g. removing the tires from a tractor so it cannot easily be driven long distances); and adoption with restrictions on placement or ownership (e.g. requiring telephones to be located in a public booth outside the home). In each case, the decision reflects the community's assessment of how the design of the technology might support or threaten values they hold dear.

We non-Amish are much less aware of our own societal values, but if they could be mapped and conveyed to technology creators it would be theoretically possible to emulate this key part of the Amish's secret to success.

To test whether this is true, I conducted a study in which participants were recruited because their jobs involved contributing to technology design. (Most were UXers of one stripe or another.) I divided the participants into small "teams," as is common in many technology organizations, and gave each team the identical instruction that they would be collaboratively designing a hypothetical device. I provided each team with the same list of features and asked them to collectively decide which features to include or exclude in the design. I also provided some information about the target users and the manufacturer.

As an experimental variable, I gave half the teams an additional instruction to exclude any feature they determined would threaten any of three social values that I named and defined for them on a sheet of paper.

My hypothesis was that the experimental teams would consistently exclude certain features on the list--features I had planted there to be enticing unless one considered them in terms of the social values I had named.

It may be that my experiment was poorly designed or the materials poorly conceived, or that my hypothesis was simply false; in the end, there was no discernible pattern in how the experimental and control teams included or excluded features.

Discouraged, I set the question aside for a while, but recently I have noticed many others in the technology industry expressing similar misgivings about the effects of design on society, using similar "values" language to discuss them. "Value alignment" is by now a well-populated area of AI research.

This widespread interest in values and design ethics provides a hopeful note; it suggests that my line of inquiry was valid and may reward further effort.
