The Oath
If you’re designing for perception, you’re engineering thoughts and emotions. The design industry has essentially no shared ethical framework for wielding that power.
A comedian I used to watch in Chicago had a bit about confidence men. The original con, the one the term comes from, was a guy in 1840s New York who’d walk up to strangers, strike up a conversation, and then ask: “Would you have the confidence to trust me with your watch until tomorrow?” And people would hand it over. No trick. No sleight of hand. Just a clean, warm, fluent social interaction that felt so natural, so trustworthy, that the mark’s critical evaluation never fired.
That story has bothered me for years. Because the mechanism the confidence man used is the same mechanism I design with.
Processing fluency. Social warmth cues. Pattern-matching that says “this feels right.” The brain’s prediction engine processing smoothly, no errors, no red flags, System 1 gliding right through. Everything I’ve spent ten chapters teaching you to deploy, the confidence man deployed first. He just pointed it at the wrong target.
Here’s the thing about designing for perception: it works. The 50-millisecond verdict, the prediction engine, the fluency effect, the activation points. These mechanisms are real, they’re powerful, and they’re value-neutral. A fluent lie is more persuasive than a disfluent truth. A warm, trustworthy-looking interface can sell a product that doesn’t deserve trust. An activation point can be calibrated to trigger impulse purchases that users regret the next morning.
If you’re a designer working in the perception layer, you are, whether you like the framing or not, an engineer of thoughts and emotions. You decide where attention goes. You decide what feels trustworthy. You decide which predictions to match and which to break. You decide what the user thinks about, and when, and how it makes them feel.
That’s power. And the design industry has essentially no shared ethical framework for wielding it.
The ACM Code of Ethics exists. It’s thoughtful, comprehensive, and almost nobody outside of academia has read it. I’ve asked designers at conferences. I’ve asked them in interviews. I’ve asked them in workshops. Blank stares. The closest thing most practicing designers have to an ethical framework is “I wouldn’t do anything I felt gross about,” which is not a framework. It’s a vibes check.
Vibes checks fail when the incentives push hard enough.
Three Tests in Sixty Seconds
So I built one. Not a manifesto. Not a poster for the office wall. Three tests you can run on any design decision in sixty seconds, right there in the review session, before anything ships.
The Alignment Test: Does this design bring perception closer to reality, or further from it?
Berdichevsky and Neuenschwander proposed this in 1999, in one of the earliest papers on the ethics of persuasive technology. They argued that any technology designed to change attitudes or behaviors has an obligation to change them in the direction of truth. Not the designer’s truth. Not the client’s truth. Actual reality.
When Simply Smart Home’s site looked like a $25 overseas knockoff despite selling a $150 product that actually worked, perception was below reality. Closing that gap upward is correction, not manipulation. The product was good. The perception was bad. I brought them into alignment.
The violation goes in the other direction. If the product is mediocre and the site makes it look premium, if the service is slow and the messaging promises speed, if the company culture is toxic and the careers page radiates warmth, that’s inflating perception above reality. That’s the confidence man’s watch trick with better typography.
The same mechanism shows up in physical products. A cheap gadget with unnecessary weight added to the housing so it “feels” substantial. A mid-tier brand that plasters a celebrity endorsement across the packaging so the product inherits status it didn’t earn. These are perception inflation through physical and social signals: manufacturing the feeling of quality without the substance of it. The channel changes. The trick doesn’t.
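If it helps to see the test’s directionality as a rule rather than a vibe, here’s a toy sketch in TypeScript. The function name and the numbers are mine, purely for illustration; the $25/$150 values echo the Simply Smart Home gap above.

```typescript
// Toy model of the alignment test's directionality.
// Names and numbers are illustrative, not part of any formal tooling.

function alignmentCall(perceivedValue: number, actualValue: number): string {
  if (perceivedValue < actualValue) return "correction: close the gap upward";
  if (perceivedValue > actualValue) return "inflation: the watch trick";
  return "aligned: nothing to fix";
}

// Simply Smart Home: the site signaled $25, the product delivered $150.
console.log(alignmentCall(25, 150)); // "correction: close the gap upward"

// The violation: a premium skin on a mediocre product.
console.log(alignmentCall(150, 25)); // "inflation: the watch trick"
```

The sign of the gap is the whole test. Closing it upward is the job. Pushing perception above reality is the con.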
The Sincerity Test: If the user fully understood what this design choice does, would they feel served or exploited?
This one comes from Friestad and Wright’s Persuasion Knowledge Model (1994). Their research showed something that most designers don’t realize: people don’t object to being persuaded. We know ads try to sell us things. We know stores are designed to move us through a path. We know salespeople are working an angle. We’re fine with it, mostly, as long as we believe the persuasion is sincere.
What triggers resistance, what Friestad and Wright call the “change of meaning” moment, is when the user realizes the persuasion was designed to benefit someone other than them. Campbell and Kirmani expanded this in 2000, showing that users who detect an ulterior motive don’t just resist the specific tactic. They downgrade their entire evaluation of the source. Trust doesn’t bend. It breaks.
Perceived intent changes everything. Langer, Blank, and Chanowitz demonstrated this in 1978 with a study so clean it still holds up nearly fifty years later. A researcher approached people waiting at a copy machine and asked to cut in line. Three conditions.
“Excuse me, may I use the machine?” got about 60% compliance. “Excuse me, may I use the machine, because I’m in a rush?” got 94%. And “Excuse me, may I use the machine, because I need to make copies?” got 93%.
That last reason is meaningless. Everyone at a copy machine needs to make copies. But the word “because” signaled intentionality, signaled a reason existed, and that was enough. People processed the structure of the request and waved it through without evaluating the content.
But only for small asks. When the request got costly (20 or more pages), the placebic reason stopped working. People started evaluating what came after the “because.”
This is the dynamic that matters for design ethics. A tiny dark pattern, a pre-checked newsletter box, a slightly confusing unsubscribe flow, gets waved through the same way. The user processes the structure (“this looks like a standard form”) and moves on.
But when the cost escalates, when the stakes get personal, users start reading the actual intent behind the design. And when they realize the “because” was empty, when they see that the unsubscribe button was tiny on purpose, that the cancellation flow was deliberately obstructed, trust doesn’t erode gradually. It collapses. The same action perceived as accidental gets patience. Perceived as deliberate, it gets fury.
This response is not cultural. It is developmental. Fehr, Bernhard, and Rockenbach (2008), in a study published in Nature, traced egalitarian preferences emerging in children between the ages of three and eight. McAuliffe and colleagues (2017) reviewed the developmental evidence and found that children across cultures engage in costly punishment of unfairness, rejecting unequal offers even when rejection costs them personally.
Fairness is not a learned preference. It is a developmental constant that appears before conscious reasoning is fully online. When a design promises one thing and delivers another, the response is visceral and immediate because it triggers mechanisms that predate the user’s ability to articulate why they’re angry.
So the sincerity test isn’t “is this persuasive?” Everything I design is persuasive. The test is: if I pulled back the curtain, if I showed the user exactly how this layout directs their attention, exactly how this color palette builds trust, exactly how this copy sequence moves them toward the CTA, would they feel like I was working for them? Or against them?
The Golden Rule: Would I consent to being influenced by this technique if I were the user?
This one doesn’t need a citation. It needs honesty. I browse the web. I buy products. I fill out forms and sign up for trials and click “accept” on terms of service. When I encounter a design that respects my time and intelligence, that makes a genuinely good thing easy to find and evaluate and buy, I appreciate it. When I encounter a design that hides the unsubscribe button, pre-checks the newsletter box, makes the “decline” option look like a guilt trip (“No thanks, I don’t want to save money”), I remember. And I don’t come back.
Every time Microsoft uses an update cycle to ask whether I want OneDrive as my default storage, I feel the same violation. Every time Adobe pushes a new feature tooltip on app launch that I have to close before I can work, same thing. Software that uses updates to reset user preferences or push unwanted services is the digital equivalent of a restaurant that brings you a more expensive dish than you ordered and hopes you won’t send it back. Microsoft and Adobe survive it because of monopoly lock-in, not because users forgive them. The golden rule test fails every time. No one at Microsoft wants their OS settings overwritten by a vendor during a security update.
The golden rule is the final filter because the other two tests can be gamed. You can argue that a misleading progress bar “brings perception closer to the reality of completion.” You can argue that an aggressive upsell “serves the user’s need for the premium tier.” But you can’t honestly say you’d want to be on the receiving end of a confirm-shaming modal. Not if you’re being straight with yourself.
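For teams that want the sixty-second gate written into the review process rather than carried in one designer’s head, here’s a minimal sketch of how it might be encoded. The type and function names are invented for illustration; this is a checklist, not a product.

```typescript
// A minimal sketch of the three-test gate as a review-checklist type.
// All names here are illustrative, not a prescribed tool or API.

interface EthicsReview {
  decision: string;    // e.g. "pre-checked newsletter box on signup"
  alignment: boolean;  // does this bring perception closer to reality?
  sincerity: boolean;  // fully informed, would the user feel served?
  goldenRule: boolean; // would I consent to this as the user?
}

// The rule: all three pass, or the decision goes back for redesign.
function shipOrRedesign(review: EthicsReview): "ship" | "redesign" {
  return review.alignment && review.sincerity && review.goldenRule
    ? "ship"
    : "redesign";
}

// Example: a confirm-shaming modal fails two of three, so it fails the gate.
const confirmShaming: EthicsReview = {
  decision: 'decline link reads "No thanks, I don\'t want to save money"',
  alignment: true,   // arguably neutral on the perception-reality gap
  sincerity: false,  // exposed, the user would feel worked against
  goldenRule: false, // I wouldn't accept this as a user
};

console.log(shipOrRedesign(confirmShaming)); // "redesign"
```

The conjunction is strict on purpose. One false and the answer is redesign. No averaging across tests.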
Dark Patterns Are Cheap Laughs
Dark patterns are the cheap laughs of design.
I mean this structurally, not as a loose metaphor. In comedy, a cheap laugh is a joke that gets a reaction through shock, crudeness, or punching down. It works once. The audience laughs. But they don’t respect you for it, and they don’t come back because of it. A cheap laugh trades long-term audience trust for a short-term metric: laughter in the room right now.
Harry Brignull coined the term “dark patterns” in 2010, formalizing the concept in A List Apart the following year. Gray, Kou, Battles, Hoggatt, and Toombs published the definitive taxonomy in 2018: nagging, obstruction, sneaking, interface interference, forced action.
Every one of these patterns converts short-term. The hidden subscription charges. The roach motel you can check into but can’t check out of. The misdirection that moves the “cancel” button between screens. They all produce the number someone asked for on the dashboard this quarter.
And they all destroy trust.
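To make the taxonomy concrete, here’s a hypothetical signup-and-cancellation config in TypeScript. Every field name is invented for this sketch; each “dark” value maps onto one of the patterns above.

```typescript
// Hypothetical flow config; field names are invented for illustration.
// Comments map each flag to the Gray et al. (2018) taxonomy.

interface FlowConfig {
  newsletterPreChecked: boolean;      // true = sneaking (consent smuggled in)
  declineLabel: string;               // guilt-trip copy = interface interference
  cancelSteps: number;                // extra hoops = obstruction
  upsellPromptsPerSession: number;    // repeated interruptions = nagging
  phoneCallRequiredToCancel: boolean; // true = forced action
}

const dark: FlowConfig = {
  newsletterPreChecked: true,
  declineLabel: "No thanks, I don't want to save money",
  cancelSteps: 6,
  upsellPromptsPerSession: 4,
  phoneCallRequiredToCancel: true,
};

const honest: FlowConfig = {
  newsletterPreChecked: false,
  declineLabel: "No thanks",
  cancelSteps: 1,
  upsellPromptsPerSession: 0,
  phoneCallRequiredToCancel: false,
};
```

Nothing in the dark config is exotic. Each flag is a small, defensible-sounding decision, and every one converts this quarter at the expense of the user’s trust.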
Cheap laughs get old fast. I get it, I get it, yeah yeah, another one, wow. The audience checks out because the pattern is transparent and the comedian isn’t risking anything.
Del Close and Charna Halpern wrote the book on this, literally. Truth in Comedy (1994), the manual that came out of iO Theater. Their core principle: “The truth is funny. Honest discovery, observation, and reaction is better than contrived invention.” One of the biggest mistakes a performer can make is trying to be funny. If the audience senses you’re reaching for the laugh, you’ve made the job harder. Real humor comes from finding the joke in the reality of the moment, not from sacrificing reality to crack a cheap one.
Close called the cheap version “going for the joke.” Sacrificing the truth of a scene to get a reaction. It works once. But generosity, making your scene partner and their ideas look as good as possible, that’s what builds the group mind. That’s what makes an ensemble feel like they’re reading each other’s thoughts. The audience can’t fake-detect generosity because it’s not a technique. It’s an orientation.
Tina Fey is the clearest example. She came up through iO and Second City, trained directly in the Del Close lineage, and her whole approach is this principle in action. Her run on SNL, first as head writer and then co-anchoring Weekend Update with Amy Poehler, was built on elevating the people around her rather than centering herself. The 30 Rock writers’ room ran on “yes, and” as a production methodology, not just a comedy rule. She designed the systems that made other people funnier. That’s generosity as architecture, not generosity as performance.
Expensive laughs are sustainable and scalable because they’re rooted in truth. Comedians build entire careers on them. It takes longer to set up. It requires understanding the room. It doesn’t always land on the first try. But when it works, it builds a relationship between the comedian and the audience that compounds over time. They come back. They bring friends. They buy the special.
PFD is the expensive laugh. Genuine emotional resonance built on real value, designed to compound. It takes more skill. It requires the product to actually be good. It doesn’t produce a spike on the dashboard this quarter. But the clients I’ve worked with for five, seven, nine years didn’t stay because I tricked their users into converting. They stayed because their users kept coming back.
But expensive laughs depend on the comedian’s judgment in the moment. What happens when the comedian gets tired, or the club owner is pressuring them to do the crowd-pleasing bit they know is cheap? That’s the limit of individual ethics. It took me years to articulate this as a distinction: operational ethics versus structural ethics.
Operational vs. Structural Ethics
The three tests are operational ethics. One designer, one review session, one decision. Does this button placement pass the alignment test? Does this copy pass the sincerity test? Would I accept this modal as a user? One practitioner, one moment, one call.
Operational ethics are necessary. They’re also insufficient.
They assume the designer has good values, clear judgment, and the organizational power to act on both. In practice, designers get tired. They get pressured. They get incentivized. A PM says “we need this metric up by Thursday.” A client says “make the cancel flow harder.” A stakeholder says “can we just default them into the annual plan?” And the designer, who passed the ethics test on Monday, cuts a corner on Friday because the sprint needs to close.
Structural ethics work differently. They build the ethical constraints into the system itself, so they hold regardless of any individual practitioner’s judgment on any given Tuesday.
My business model is structural ethics. Build, Host, Retain. I don’t do one-off projects without ongoing hosting. I don’t build a site and disappear. If I inflate perception beyond reality, I’m the one who has to maintain the gap. I’m the one fielding support tickets from users who feel misled. I’m the one watching the analytics when the return rate climbs and the reviews turn negative.
That structure filters out a specific type of client before the engagement even starts: the ones who want perception inflated beyond reality. They don’t want a long-term relationship. They want a quick flip. They want someone to make their mediocre product look premium, take the money, and move on. My model doesn’t serve that. It’s not designed to.
It’s not that I’m more ethical than other designers. It’s that my business model makes it structurally expensive to be unethical. The incentives are aligned with the tests. That’s the point.
Operational ethics assumes good values. Structural ethics enforces them. A framework that relies only on operational ethics fails when the practitioner is rushed or incentivized to cut corners. A framework that relies only on structural ethics fails when a novel situation falls outside the rules. PFD uses both.
What Happens When Structural Ethics Are Absent
Cory Doctorow named it in 2023: enshittification. Platforms follow a three-stage decay cycle. First, be good to users to attract them. Then, abuse users to benefit business customers (advertisers, agencies, merchants). Finally, abuse business customers to extract maximum value for shareholders. The business model incentivizes progressive exploitation. Each stage feels rational to the people running it. The structure makes the unethical outcome the default outcome.
Facebook is the textbook case. Between 2015 and 2018, they inflated video viewing metrics by 150 to 900 percent. Not a methodology dispute. Not a rounding error. The amended class-action complaint revised the initial estimates upward; Facebook settled for $40 million.
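The mechanism, as reported at the time, was a denominator choice: the “average duration of video viewed” metric divided total watch time by only the views that lasted longer than three seconds, instead of by all views. A sketch with invented numbers shows how hard that skews the average:

```typescript
// Illustrative arithmetic only; these numbers are invented, not Facebook's.
// The reported flaw: average view duration divided total watch time by
// only the views lasting longer than 3 seconds, not by all views.

const secondsWatchedPerView = [1, 1, 2, 2, 2, 30, 45, 60];
const totalWatchTime = secondsWatchedPerView.reduce((a, b) => a + b, 0); // 143

const allViews = secondsWatchedPerView.length;                       // 8
const longViews = secondsWatchedPerView.filter((s) => s > 3).length; // 3

const honestAverage = totalWatchTime / allViews;    // ~17.9 seconds per view
const inflatedAverage = totalWatchTime / longViews; // ~47.7 seconds per view

// Same data, roughly 2.7x the reported engagement, purely from the denominator.
console.log({ honestAverage, inflatedAverage });
```

When most views are quick bounces, the denominator trick alone produces inflation on the scale the complaint alleged.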
The inflated numbers told media companies that video was the future. CollegeHumor gutted its editorial staff. Funny or Die laid off most of its writers. Mic, a news outlet with tens of millions of monthly readers, fired its entire editorial team and pivoted to video production. Within two years, Mic sold for a reported $5 million after being valued at $100 million.
That’s the confidence man’s watch trick at planetary scale. Facebook fabricated the perception of video value, and an entire industry restructured around the lie. These companies had profitable text and image operations. They destroyed them to chase metrics that were fake.
Build-Host-Retain is the inverse structure. My model makes it expensive to inflate perception because I’m the one who lives with the consequences. Enshittification makes it expensive not to inflate perception because every stage of the decay cycle rewards the company that extracts more from its users. The incentives point in opposite directions. That’s not an accident. It’s the whole point of structural ethics.
And the harm compounds. Ardoline and Lenzo (2025) published the first peer-reviewed paper connecting enshittification to cognitive harm. They introduced the concept of cognitive deskilling: when platforms degrade, users who have offloaded cognitive tasks to those platforms lose the capacity to perform those tasks independently. The damage is not transactional. Users don’t just lose a service. They lose the ability to do what the service used to do for them. Doctorow endorsed the paper. The American Dialect Society had already named “enshittification” its 2023 Word of the Year.
The Fluency Trap
There’s a thing I need to name honestly, because if I don’t, someone else will, and they’ll be right to.
I call it the fluency trap.
Processing fluency, the mechanism at the heart of Layer 2, is value-neutral. “If it’s easy to process, it feels true.” That’s Reber and Schwarz, 1999. The effect doesn’t care whether the thing being processed is actually true. A fluent lie feels truer than a disfluent truth. A clean, well-designed scam site feels more trustworthy than a legitimate but poorly designed one. The mechanism doesn’t evaluate content. It evaluates processing ease. And it assigns truth-value based on that ease.
This means that by optimizing for fluency, by making my designs smooth, consistent, easy to process, I’m making it harder for users to engage the critical evaluation that would catch deception. The very thing that makes PFD effective (reducing prediction error, keeping the user’s processing smooth through the desired path) is the same thing that can prevent them from stopping to question whether they should.
This is ontic occlusion baked into the mechanism. I borrowed that term from a colleague and friend, Cory Knobel, who uses it to describe how one representation of reality blocks another from being seen. My framework makes certain things visible (perception gaps, trust signals, fluency) and in doing so makes other things harder to see (the question of whether fluency itself should be interrupted).
The harm compounds over time. Ardoline and Lenzo’s cognitive deskilling research (2025) applies directly here. When a platform optimizes for fluency unethically, users who offload evaluation to that platform’s cues (the smooth interface, the trustworthy layout, the frictionless flow) gradually lose their capacity to evaluate independently.
Unethical fluency optimization does not just suppress critical evaluation in the moment. It degrades the user’s capacity for critical evaluation over time. The dark pattern makes the user worse at recognizing dark patterns. That is not a side effect. It is the mechanism working as designed.
Pennycook and Rand (2019) found that susceptibility to fake news was driven by insufficient analytical thinking, not partisan bias. People who scored higher on analytical reasoning tests were better at distinguishing fake from real news regardless of political alignment. The problem was not ideology. The problem was insufficient cognitive engagement.
Designs that reduce cognitive engagement, that make everything smooth and easy and frictionless, make users more vulnerable to deception. That is not a misuse of fluency. It is fluency doing exactly what it does, applied without the ethical constraints that determine whether it serves the user or exploits them.
I don’t have a clean solution for this. The three ethical tests help. If the alignment test is honest, then fluency is being deployed on behalf of truth, not against it. But the tests depend on the designer’s judgment about what “truth” and “reality” are, and those judgments are themselves subject to the fluency trap.
I know my own biases better than most people know theirs (the autism helps with that, actually; I’m relentlessly self-monitoring). But knowing your biases doesn’t eliminate them. It just means you can name what you’re probably missing.
The honest statement is this: PFD is a loaded weapon. The safety is the designer. If the designer’s judgment is compromised, by incentives, by self-deception, by honest ignorance, the weapon still fires.
Steve Krug’s Don’t Make Me Think didn’t address ethics. It didn’t need to. The book is about usability, about reducing cognitive friction so users can accomplish their goals. There’s not much ethical ambiguity in making a navigation menu clearer or reducing the number of form fields. Usability optimization is, for the most part, unambiguously good for the user.
PFD doesn’t operate in the usability layer. It operates in the persuasion layer. The perception layer. The layer where you’re not just removing barriers to what the user already wants to do, but actively shaping what they notice, what they trust, what they feel, and what they do next. That layer demands ethical guardrails in a way that usability never did.
Pointing out that omission isn’t a criticism. It’s a scope observation. His book covers its territory completely and well. But if you take the techniques in this book and apply them without the ethical framework in this chapter, you are building confidence tricks. Fluent ones. Effective ones. The kind that don’t trigger the persuasion knowledge response because you’ve designed them not to.
That’s not what this framework is for.
The mandate, stated plainly:
Perception-First Design removes perception barriers between users and genuine value. It does not create the perception of value where none exists.
The designer is responsible for the perception layer. The organization is responsible for the value layer. When these diverge, the designer’s obligation is to the user.
Simply Smart Home is the clean case. The product was good. Families genuinely stayed more connected using those digital picture frames. The tablets worked. The price was fair for the functionality. But the website looked like an overseas knockoff. The marketing led with feature specs instead of emotional connection. The brand system was a template that communicated “$25” when the product justified “$150.”
Perception was below reality. I closed the gap upward. Revenue tripled. Not because I manufactured desire. Because I removed the perception barriers that were blocking people from seeing value that actually existed.
The violation occurs in the opposite direction. If the product had been mediocre and I’d made the site look premium, if the service had been unreliable and I’d designed the experience to feel seamless, if the value hadn’t been there and I’d manufactured the perception of it, that would have been the confidence man with the watch. That’s the cheap laugh. The kind that converts this quarter and craters next year.
I think about this more than most designers do. Partly because of the framework. When you name the mechanisms explicitly, when you write down “processing fluency makes things feel true whether they are or not,” you can’t pretend you don’t know what you’re holding. The knowledge creates the obligation.
Partly because of the nightclub. At the door, I had power over people’s nights. I decided who got in and who didn’t. I could have abused that power, and I saw other bouncers who did. The ego trip. The petty gatekeeping. The “that violates our dress code” or turning people away for some made-up reason as a proxy for “I don’t like you.” That was perception manipulation too, just crude. I chose to use the position differently. To make the experience better for everyone who showed up, regardless of who they were. To conduct instead of command.
And partly because I’ve spent years building for communities that have been on the wrong end of perception manipulation for generations. Communities that know what it feels like when someone else controls how they’re seen. You develop a different relationship with the tools when the people you serve have been shaped by other people’s perceptions their entire lives. You don’t get to be cavalier about the power.
The oath isn’t complicated. Three tests. Sixty seconds. Before anything ships.
Does this bring perception closer to reality?
If the user knew what this does, would they feel served?
Would I accept this as the user?
If all three pass, ship it. If any one fails, redesign it.
That’s it. No manifesto. No certification. No twelve-step program. Just three questions and the honesty to answer them.
The hard part was never the test. The hard part is the honesty.
Next: What I Don’t Know Yet, on the questions this framework can’t answer and the blind spots I haven’t figured out how to see past.
Key Terms
| Term | Definition |
| --- | --- |
| The Alignment Test | Does this design bring perception closer to reality, or further from it? Based on Berdichevsky & Neuenschwander (1999). |
| The Sincerity Test | If the user fully understood what this design choice does, would they feel served or exploited? Based on Friestad & Wright’s Persuasion Knowledge Model (1994). |
| The Golden Rule | Would I consent to being influenced by this technique if I were the user? |
| Dark patterns | Brignull (2010/2011), Gray et al. (2018). Design patterns that convert short-term by exploiting users: nagging, obstruction, sneaking, interface interference, forced action. The cheap laughs of design. |
| Operational vs. structural ethics | Operational: three tests run by one designer on one decision. Structural: ethical constraints built into the business model itself, holding regardless of individual judgment. |
| The fluency trap | Processing fluency is value-neutral. A fluent lie feels truer than a disfluent truth. By optimizing for fluency, PFD can make it harder for users to engage critical evaluation. |
| Ontic occlusion | Knobel. Any representation of reality blocks other representations from being seen. PFD makes perception visible and in doing so occludes other concerns. |
| Enshittification | Doctorow (2023). Three-stage platform decay: good to users, then abuse users for business customers, then abuse business customers for shareholders. The structural ethics failure case. |
| Cognitive deskilling | Ardoline & Lenzo (2025). When platforms degrade, users who offloaded cognitive tasks lose the capacity to perform those tasks independently. Harm compounds over time. |
References
| Source | Citation |
| --- | --- |
| Berdichevsky & Neuenschwander (1999) | Toward an ethics of persuasive technology. Communications of the ACM, 42(5), 51–58. |
| Friestad & Wright (1994) | The Persuasion Knowledge Model. Journal of Consumer Research, 21(1), 1–31. |
| Campbell & Kirmani (2000) | Consumers’ use of persuasion knowledge. Journal of Consumer Research, 27(1), 69–83. |
| Langer, Blank & Chanowitz (1978) | The mindlessness of ostensibly thoughtful action: The role of “placebic” information in interpersonal interaction. Journal of Personality and Social Psychology, 36(6), 635–642. |
| Brignull (2011) | Dark Patterns: Deception vs. Honesty in UI Design. A List Apart, November 1, 2011. |
| Gray, Kou, Battles, Hoggatt & Toombs (2018) | The dark (patterns) side of UX design. CHI ’18 Proceedings. |
| Doctorow (2023) | TikTok’s Enshittification. Wired / Pluralistic. |
| Ardoline & Lenzo (2025) | The Cognitive and Moral Harms of Platform Decay. Ethics and Information Technology, 27, 37. |
| Fehr, Bernhard & Rockenbach (2008) | Egalitarianism in young children. Nature, 454(7208), 1079–1083. |
| McAuliffe et al. (2017) | The developmental foundations of human fairness. Nature Human Behaviour, 1, 0042. |
| Pennycook & Rand (2019) | Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. |
| Reber & Schwarz (1999) | Effects of perceptual fluency on judgments of truth. Consciousness and Cognition, 8(3), 338–342. |
| Knobel (2010) | Ontic Occlusion and Exposure in Sociotechnical Systems. Doctoral dissertation, University of Michigan. |