Design · March 28, 2026 · 5 min read

Designing trust into an AI app, one micro-interaction at a time

Hallucinations are a UX problem before they're a model problem. Three patterns I'm using to design trust into a native AI app.

Every AI app eventually has the same conversation with its users: *why did it say that?* I'd argue most of the trust issues people have with AI products are interface failures, not model failures.

Three patterns I've been leaning on:

1. Show the receipts inline. When the model cites a source, the source should be one tap away — not buried in a sidebar. Trust collapses when verification has friction.

2. Let the model say 'I'm guessing.' A small confidence indicator next to a generated answer changes how people read it. They stop treating it as fact and start treating it as a suggestion. That's exactly the right mental model.

3. Make undo cheap. If the AI takes an action — drafts a reply, edits a note, schedules a thing — the undo affordance has to be more prominent than the confirm. The cost of a wrong action should always feel reversible.
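The patterns above are as much about app state as pixels. Here's a minimal sketch of what that state could look like — every type and function name here (`confidenceLabel`, `AssistantAction`, `ActionLog`, and so on) is my own hypothetical illustration, not an API from any real framework:

```typescript
// A rough sketch of the three patterns as state, not pixels.
// All names are hypothetical illustrations.

// Pattern 1: keep the source one tap away — attach it inline,
// not in a sidebar the user has to go hunting for.
function withInlineSource(answer: string, sourceUrl: string): string {
  return `${answer} [source: ${sourceUrl}]`;
}

// Pattern 2: a confidence indicator changes how an answer reads.
type Confidence = "high" | "medium" | "low";

function confidenceLabel(score: number): Confidence {
  if (score >= 0.8) return "high";
  if (score >= 0.5) return "medium";
  return "low"; // rendered as "I'm guessing" in the UI
}

// Pattern 3: every AI-taken action carries its own undo,
// so a wrong action always feels reversible.
interface AssistantAction {
  description: string;
  apply: () => void;
  undo: () => void;
}

class ActionLog {
  private done: AssistantAction[] = [];

  perform(action: AssistantAction): void {
    action.apply();
    this.done.push(action); // undo stays one tap away
  }

  undoLast(): boolean {
    const last = this.done.pop();
    if (!last) return false;
    last.undo();
    return true;
  }
}
```

The point of modeling it this way: if every assistant action is constructed with its `undo` up front, the interface can't ship a model-initiated change that the user can't cheaply take back.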

The throughline: the goal isn't to make the model seem smarter. It's to make the *system* — model plus interface plus you — feel honest.