Apply AI Trust Builder design patterns to give users confidence that an AI product's results are ethical, accurate, and trustworthy. Use this skill whenever a designer, PM, or developer wants to make their AI product feel safer, more transparent, or more accountable. Trigger on: "make users feel safe", "add a disclaimer", "handle user data", "label AI-generated content", "privacy mode", "disclose AI is being used", "watermark AI outputs", "make the AI more transparent", "audit trail for AI", "user consent for recording", or any request touching AI accountability, privacy, explainability, or honest representation of what AI is doing. Also use when auditing an existing AI product for trust signals or when building new AI features into a non-AI-native product. Covers seven patterns: Caveat, Consent, Data Ownership, Disclosure, Footprints, Incognito Mode, and Watermark.
Trust is foundational to any AI product. Users who don't trust the system won't engage deeply with it, and those who over-trust it may be harmed by it. Trust Builder patterns are the design tools that close that gap — they communicate what the AI is doing, acknowledge its limitations, protect user data, and keep humans meaningfully in the loop.
This skill covers seven patterns. Each addresses a different dimension of trust. They are most powerful when combined.
When a user brings you a trust-related challenge, identify which of the seven patterns apply, explain the pattern clearly, and recommend specific design implementations. Don't recommend all seven patterns at once unless the situation warrants a full audit. Lead with the most relevant 1–3 patterns, explain the tradeoffs, and let the user decide.
Think about context: a consumer chatbot has different trust requirements than an enterprise document editor or a healthcare assistant. Match the pattern intensity to the stakes.
**Caveat.** What it is: A visible message that reminds users the AI may be wrong, incomplete, or biased.
When to use it: Almost always — especially in consumer-facing AI where outputs influence decisions. Required whenever users might act on AI-generated content without checking it first.
Common placements:
Design guidance:
Pitfall: Caveats are nearly ubiquitous, which means users are increasingly blind to them. A caveat is a warning label, not a support system. Use it, but don't rely on it.
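One way to keep caveats from being forgotten is to make them a required part of the response object rather than an afterthought in the UI, with wording that scales to the stakes of the context. A minimal sketch, assuming hypothetical names (`AIResponse`, `respond`, the stakes tiers and copy are all illustrative, not from any real product or library):

```python
from dataclasses import dataclass

# Stakes-appropriate caveat copy. Both the tiers and the wording are illustrative.
CAVEATS = {
    "low": "AI-generated. May contain mistakes.",
    "high": "AI-generated. Verify with a qualified professional before acting.",
}

@dataclass(frozen=True)
class AIResponse:
    text: str
    caveat: str  # required field: a response cannot be constructed without one

def respond(text: str, stakes: str = "low") -> AIResponse:
    """Attach a caveat matched to the stakes of the context."""
    return AIResponse(text=text, caveat=CAVEATS[stakes])
```

Making the caveat a required field means a high-stakes surface (say, a healthcare assistant) cannot silently ship responses without one.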
**Consent.** What it is: Explicitly requesting permission from users — and in some cases, third parties — before recording, analyzing, or processing data with AI.
When to use it: Whenever an AI feature captures audio, video, conversation, or biometric data. Especially critical when recording involves people other than the primary user (meeting participants, bystanders, subjects of photos).
Three domains of consent:
Consent variations:
Design guidance:
Pitfall: Burying consent in terms of service or making it a condition of using the product. This erodes trust and may violate emerging AI regulations.
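The core rule — recording never starts without explicit permission from everyone involved — reduces to a small predicate. A sketch under assumed names (nothing here maps to a real SDK):

```python
def can_record(participants: list[str], consents: dict[str, bool]) -> bool:
    """True only if every participant has explicitly opted in.

    Silence is not consent: a participant missing from `consents`
    blocks recording exactly like an explicit refusal does.
    """
    return all(consents.get(p) is True for p in participants)
```

The design choice worth noting is the default: absence of an answer is treated as "no", which is the opposite of burying consent in terms of service.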
**Data Ownership.** What it is: User-facing settings that give people control over how their data is stored, retained, and used — especially for AI model training.
When to use it: Any AI product that stores conversations, generates personalized outputs, or trains on user data. Should be surfaced in product settings for all users who interact with AI.
Key dimensions:
Design guidance:
Pitfall: Defaulting to data sharing because it benefits the company, without offering users a clear, friction-free way to opt out. As AI regulations mature, this will increasingly create legal and reputational risk.
**Disclosure.** What it is: Clear labeling that lets users know when they're interacting with AI — or when content was created or edited by AI.
When to use it: Wherever AI generates, edits, summarizes, or responds on behalf of a product. Especially important in blended products where AI content is mixed with human content, in customer support contexts, and in agentic AI products where the AI takes actions.
Disclosure contexts:
Disclosure forms:
Design guidance:
Pitfall: Giving the AI a generic, human-sounding name (e.g., "Assistant") without any indicator that it's non-human. This creates confusion and erodes trust when users eventually figure it out.
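In blended products the simplest safeguard is to label the sender at render time, so a non-human author is never ambiguous. A hypothetical sketch (the "(AI)" suffix is illustrative; a product might use a badge or icon instead, but some explicit indicator should always be present):

```python
def render_sender(name: str, is_ai: bool) -> str:
    """Label a chat sender so AI authorship is visible in the transcript."""
    return f"{name} (AI)" if is_ai else name
```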
**Footprints.** What it is: Visible and machine-readable traces that show where and how AI participated in creating, editing, or deciding something — across both the interface and system levels.
When to use it: In any product where users need to verify AI outputs, audit decisions, understand how a result was reached, or reproduce a previous generation. Especially valuable in enterprise, creative, developer, and compliance contexts.
Two modes:
Three levels:
Design guidance:
Pitfall: Building footprints only for the interface without system-level logging, or failing to persist provenance data when content is exported. Both create accountability gaps.
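A system-level footprint can be as simple as one structured log line per generation, capturing what is needed to audit and reproduce it. A sketch with an illustrative schema (these field names are not a standard; hashing lets auditors verify integrity without the log storing sensitive content):

```python
import hashlib
import json
from datetime import datetime, timezone

def footprint(model: str, prompt: str, output: str, params: dict) -> str:
    """Build one system-level provenance record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "params": params,  # e.g. temperature, seed: needed to reproduce the generation
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)
```

Persisting these records alongside exported content is what closes the accountability gap the pitfall above describes.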
**Incognito Mode.** What it is: A private interaction mode where prompts, outputs, and files are excluded from memory, training, and persistent logs — giving users a session that leaves no trace.
When to use it: Whenever users need to interact with AI without that interaction influencing their personalized experience, being stored, or being accessible to others. Valuable for exploration, sensitive drafting, vendor evaluation, and enterprise compliance.
Common use cases:
Variations:
Design guidance:
Pitfall: Creating "incognito mode" that still logs data server-side without telling users. This destroys trust if discovered and may violate privacy commitments.
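The honest version of incognito suppresses every persistence path at the point of writing, not just in the UI. A minimal sketch of that guarantee, using invented names (`Session`, `exchange`) that stand in for whatever a real backend would use:

```python
class Session:
    """Chat session where incognito mode leaves no trace.

    The check must live server-side, in front of every sink
    (history, memory, training queues), not only in the client.
    """

    def __init__(self, incognito: bool = False):
        self.incognito = incognito
        self.history: list[tuple[str, str]] = []

    def exchange(self, prompt: str, output: str) -> None:
        if self.incognito:
            return  # nothing persisted, nothing sent to training
        self.history.append((prompt, output))
```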
**Watermark.** What it is: A signal embedded in or attached to AI-generated content to identify its synthetic origin — ranging from visible labels to invisible machine-readable fingerprints.
When to use it: Whenever AI generates content that will be shared, published, or used outside the product — including images, video, audio, and text. Also when the product receives user-uploaded content and needs to verify its origin.
Watermark types:
Content provenance (alternative/complementary): Embeds a digital fingerprint into the content's metadata, tracking the full history of creation and edits. Requires platform cooperation but survives across platforms that support the standard.
Regulatory context: Multiple governments are moving toward watermarking mandates. The EU AI Act, US Executive Orders, and similar regulations are creating requirements — especially for realistic synthetic media. Designing for watermarking now reduces future compliance burden.
Design guidance:
Pitfall: Relying on a single watermark type in isolation. Overlay watermarks are trivially removed. A defense-in-depth approach using multiple methods provides more durable provenance.
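Defense in depth can be sketched as layering a visible label over a machine-readable manifest. The schema below is illustrative only — modeled loosely on content provenance standards, not an implementation of any of them:

```python
import hashlib
import json

def watermark(caption: str, content: bytes, generator: str) -> tuple[str, str]:
    """Apply two watermark layers to one piece of AI-generated content.

    Layer 1: a visible label on the caption (easy to read, trivial to strip).
    Layer 2: a manifest binding a content hash to its synthetic origin
    (survives caption edits as long as the bytes are unchanged).
    """
    labeled = f"{caption} [AI-generated]"
    manifest = json.dumps({
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "synthetic": True,
    }, sort_keys=True)
    return labeled, manifest
```

Stripping the visible label still leaves the manifest, and altering the bytes breaks the hash — which is the point of combining methods rather than relying on one.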
These patterns work together. A strong trust architecture typically combines several:
| Challenge | Recommended patterns |
|---|---|
| Users don't know AI is in the product | Disclosure + Caveat |
| AI outputs might be wrong or incomplete | Caveat + Footprints + Citations |
| Recording or transcription involves others | Consent + Disclosure |
| Users worry their data is being trained on | Data Ownership + Consent |
| Need to audit AI decisions later | Footprints (system level) |
| Users want to explore without consequences | Incognito Mode |
| AI content is being shared outside the product | Watermark + Footprints (media level) |
| Enterprise compliance requirements | Data Ownership + Footprints + Incognito Mode |
When applying these patterns, structure your response with:
Always ground recommendations in the user's specific product context — a consumer chatbot, an enterprise tool, and a creative platform have meaningfully different trust requirements.
Apply AI Tuner design patterns when adding or improving AI features in a product. Tuners are the controls that let users shape how AI interprets input and produces output — before, during, or after generation. Use this skill whenever the user wants to add AI configuration UI to a product, improve how users control AI behavior, design prompt controls, model selectors, filters, style systems, voice/tone settings, or any mechanism that lets users influence what the AI does. Trigger on phrases like "let users control the AI", "add model switching", "prompt settings", "AI configuration", "let users set tone or style", "negative prompting", "AI filters", "mode switching", "AI parameter controls", or any request to give users more agency over AI output. This skill covers nine tuner patterns: Attachments, Connectors, Filters, Model Management, Modes, Parameters, Preset Styles, Saved Styles, and Voice & Tone.
Apply Wayfinder patterns to design or improve AI onboarding, discoverability, and first-interaction flows in any product. Use this skill whenever the user wants to add AI to a product surface, reduce blank-slate anxiety, help users discover what the AI can do, improve an initial CTA or prompt input, add suggestions or templates, design a gallery, add nudges, or generally reduce friction at the start of an AI interaction. Trigger even on vague requests like "make it easier to get started with AI", "users don't know what to type", "how do we show what the AI can do", "add some example prompts", or "improve onboarding to our AI feature". Wayfinders are: Initial CTA, Example Gallery, Suggestions, Templates, Nudges, Follow-ups, Prompt Details, and Randomize.
Audit UI designs, flows, copy, and layouts to reduce cognitive load and maximize conversion. Apply this skill whenever a user shares a screen, mockup, flow, form, landing page, onboarding step, or any UI element and asks how to improve it — even if they don't say "cognitive load" or "conversion". Trigger on phrases like "why aren't users converting", "improve this flow", "reduce friction", "simplify this", "make this easier to use", "review this UI", "why do users drop off", "improve this form", "critique this design", "make this clearer", or any open-ended "improve this" request about a product surface. Always use this skill before giving UX or conversion improvement advice.
My name is Tommy. I'm a product designer and developer from Copenhagen, Denmark.
Connect with me on LinkedIn ✌️