Designing with Social Values in Mind
Published Jan 05, 2024
Nadav Zohar
In a world where new technology rolls out faster than we can fully grasp its ripple effects, one community stands out for its deliberate approach: the Amish. Amish elders don't rush to adopt every innovation. They pause and evaluate how a new tool or system might affect the core of their way of life: family closeness, community cohesion, humility, and tradition. Only after weighing these values do they decide whether to accept it as-is, reject it outright, or modify it to preserve what matters most. This approach has given them centuries of stability while still allowing them to benefit from select technologies that align with their values.
Contrast this with much of modern technology development. Features get built because they're feasible, marketable, or interesting, often with little structured reflection on long-term societal impacts. The consequences can be subtle but profound. For example, smartphones quietly erode face-to-face conversation, social media amplifies division, and always-on connectivity blurs work-life boundaries. Even when a system meets its stated goals (efficiency, convenience, connectivity), unintended consequences emerge, sometimes quietly reshaping relationships, attention spans, privacy, or mental well-being.
None of these consequences began with malicious intent. They are the byproduct of design processes that prioritize technical excellence, user delight, or business interests over broader social values. At POMIET, we believe responsible innovation requires more intentionality. Technology isn’t value-neutral, though most organizations don't consciously decide which values to promote. So, we developed a framework that supports conscious decision-making about the values promoted by a product.
A Practical Framework: Social Values Protection (SVPro)
Inspired by the Amish vetting process and structured risk analysis methods like failure mode and effects analysis (FMEA), we propose a straightforward, repeatable approach called Social Values Protection (SVPro). The goal is to help design teams evaluate planned features not just for usability or performance, but also for how they support or threaten the motivating principles of users.
SVPro unfolds in three core steps:
1. Define your value bundles
Start by selecting a manageable set of values worth protecting, drawing on established universal frameworks such as Shalom Schwartz's nineteen basic human values (e.g., self-direction, tradition, security, benevolence, universalism).
Work as a team (ideally with organizational stakeholders) to identify the "prime" values most relevant to your context. Then, for each prime value, brainstorm realistic scenarios where protecting it might unintentionally threaten another important value. Those threatened values become "conditional" values.
Group them into "value bundles" with a simple rule: Protection of the prime value should not threaten the conditional value(s).
For example, a feature that maximizes individual freedom (Schwartz's self-direction: action) shouldn't compromise safety and stability.
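As a rough illustration, a value bundle and its rule can be captured in a tiny data structure. This is only a sketch of our own; the specific value names below are hypothetical choices, not prescribed by the framework:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ValueBundle:
    """A prime value plus the conditional values its protection must not threaten."""
    prime: str
    conditionals: tuple[str, ...]

    def rule(self) -> str:
        # State the bundle's rule in the form used in step 1
        return (f"Protection of '{self.prime}' should not threaten "
                + ", ".join(f"'{c}'" for c in self.conditionals))

# Hypothetical bundle matching the freedom-vs-safety example above
freedom_bundle = ValueBundle(
    prime="self-direction (action)",
    conditionals=("security", "societal stability"),
)
print(freedom_bundle.rule())
```

Keeping bundles as explicit, named objects makes the step-2 evaluation auditable: every attribute review can point back to the exact rule it was checked against.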
2. Evaluate every planned attribute
Examine each proposed feature, component, or interaction through the lens of your value bundles.
Adapt FMEA practices. Map out how humans will interact with the attribute, trace plausible first-order and longer-term effects, and assess whether those effects protect or threaten each bundle.
Look beyond immediate use cases and consider habit formation, cultural shifts, and second-order behaviors. Existing research on technology's impacts (e.g., studies on screen time and relationships, data permanence and memory, automation and skill erosion) can inform this step.
3. Make intentional decisions
For each attribute, decide:
- Accept: It aligns with or strengthens your value bundles.
- Reject: The threat to values is unacceptable.
- Modify: Keep the benefit while mitigating the risk (e.g., add safeguards, limit scope, provide opt-outs).
This mirrors the Amish repertoire: full acceptance, full rejection, or thoughtful adaptation.
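The accept/reject/modify decision can be sketched as a simple function over an attribute's assessed effects. The effect labels, the mitigation check, and the smart-fridge example below are illustrative assumptions of ours, not part of SVPro itself:

```python
from enum import Enum

class Effect(Enum):
    PROTECTS = "protects"
    NEUTRAL = "neutral"
    THREATENS = "threatens"

class Decision(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    MODIFY = "modify"

def decide(effects_by_bundle: dict[str, Effect],
           mitigable: set[str]) -> Decision:
    """Accept if no bundle is threatened; otherwise modify when every
    threatened bundle has a known mitigation, else reject."""
    threatened = {bundle for bundle, effect in effects_by_bundle.items()
                  if effect is Effect.THREATENS}
    if not threatened:
        return Decision.ACCEPT
    if threatened <= mitigable:
        return Decision.MODIFY
    return Decision.REJECT

# Hypothetical smart-fridge attribute: constant food tracking
effects = {
    "self-direction": Effect.NEUTRAL,
    "tradition": Effect.THREATENS,   # erodes everyday meal rituals
    "safety/stability": Effect.NEUTRAL,
}
# Tracking can be softened with opt-outs, so "tradition" is mitigable
print(decide(effects, mitigable={"tradition"}))  # Decision.MODIFY
```

In practice the assessment feeding this function is qualitative and team-driven, as in FMEA; the point of the sketch is only that each attribute ends in one explicit, recorded decision.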
A fourth step, inviting broader stakeholders and public input, strengthens the process but falls outside the core model for now.
Early Insights from a Pilot Exploration
We also wanted to try this framework outside the POMIET environment. To test whether explicitly considering values influences design choices, we conducted workshops with UX and technology professionals. Teams designed features for a hypothetical "smart refrigerator" targeted at the mass market. One group received instructions to pass only attributes that protected (or at least did not threaten) three specific values: freedom of thought and action, preservation of tradition/family customs, and societal safety/stability. The control group received no such guidance.
While the pilot data didn't show dramatic statistical differences, the exercise surfaced richer conversations. Teams with the value prompt surfaced concerns about surveillance (constant food tracking), erosion of everyday rituals (less spontaneous meal prep), and dependency (over-reliance on app-controlled features). These are exactly the kinds of second-order effects that often go unnoticed until after launch.
Why This Matters Now
Technology organizations often follow design and development processes such as agile, lean, user-centered design, and value-sensitive design, but few place protection of social values at the center. Codes of ethics exist, but they're rarely enforced at the organizational level or tied to concrete decision-making.
SVPro offers a lightweight, adaptable way to bring that intentionality into everyday design work. It doesn't replace existing processes; it layers in a values check that can prevent subtle harms from accumulating into larger societal shifts.
At POMIET, we see human-machine teaming as inherently values-laden. When we design systems that augment human capability, we must also ask: augment toward what end? Toward greater freedom without sacrificing connection? Toward efficiency without eroding trust? Toward innovation while honoring tradition?
By borrowing from communities that have long practiced deliberate technology adoption, we can build tools that serve people without quietly undermining the social fabric we all depend on.
This work was published in Technoethics.