Apply the AI Identifiers framework to design or audit the distinct, brand-level qualities that define how an AI presents itself across a product. Use this skill whenever someone is designing or reviewing the visual, verbal, or behavioral identity of an AI — including questions like "what should we call our AI", "how should our AI look", "what color should we use for AI features", "how do we make our AI feel distinct", "what icons should represent AI actions", "how do we give our AI a personality", "should our AI have an avatar", or any request about making an AI feel coherent, recognizable, and on-brand. Also trigger when the user is building a new AI feature and hasn't yet thought about how it should present itself — proactively raising identifiers as a design consideration is part of this skill's job.
AI Identifiers are the distinct qualities that define how an AI presents itself — visually, verbally, and behaviorally. They operate at both the brand level (product-wide decisions) and the model level (per-interaction tuning). Together, they determine whether the AI feels generic or genuinely owned by the product.
There are five core identifier types: Name, Avatar, Color, Iconography, and Personality. Each can be designed independently, but they work best when they reinforce each other.
What do we call this thing?
The name sets expectations before the AI says a single word. It signals whether the AI is a tool, a partner, or a persona — and that framing shapes every interaction that follows.
AI as a persona — A human-like name that implies individuality or character (e.g. "Fin", "Max", "Aria"). Works well for products that want warmth and approachability. Risks overpromising human-like competence.
AI as the company — Named directly after the product or brand (e.g. "Otter AI", "Grammarly"). Clean and familiar, reinforces brand recall, but can feel generic over time.
AI as an entity — A functional title that describes role and relationship (e.g. "Copilot", "Assistant", "Navigator"). Communicates purpose clearly. Less memorable but more honest about the AI's nature.
AI as a technology — A bare technical label (e.g. "AI"). Minimal friction, sets no false expectations, blends into the product. Good for AI-native products where AI is the default, not a feature.
The avatar is the form the AI takes when interacting with users. It does three jobs: communicating state (listening, generating, idle), anchoring identity (especially in multi-tool interfaces), and mediating trust (choices like realism or expressiveness change how much agency users attribute to the AI).
Minimal marks — Abstract icons that serve as lightweight identity markers. They communicate brand and presence without creating any illusion of human agency. Best for products that emphasize utility and speed.
Branded characters — Distinct but abstracted characters that provide warmth and memorability. At the extreme, these lean into parasocial dynamics, which can drive engagement but create risks if user expectations diverge from actual capability.
Photorealistic or animated agents — Realistic video avatars or fully animated assistants, often used in customer service or teaching contexts. These raise the stakes for coherence, since visual realism implies human-like competence.
Voice avatars — In voice mode, the avatar is a synthetic voice with a chosen accent, pitch, and cadence. Unlike static icons, voice avatars change turn by turn, giving real-time cues about state, tone, and intent.
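The state-communication job described above can be sketched as a small mapping from avatar state to visual and accessibility cues. This is an illustrative sketch, not an established API: the state names, animation labels, and `AvatarCue` shape are all assumptions.

```typescript
// Hypothetical avatar states and the cues an avatar might surface for each.
// State names and cue values are illustrative assumptions, not a standard.
type AvatarState = "idle" | "listening" | "generating" | "speaking";

interface AvatarCue {
  animation: string; // e.g. a pulse while listening, a shimmer while generating
  ariaLabel: string; // screen-reader announcement when the state changes
}

const AVATAR_CUES: Record<AvatarState, AvatarCue> = {
  idle:       { animation: "none",     ariaLabel: "Assistant is idle" },
  listening:  { animation: "pulse",    ariaLabel: "Assistant is listening" },
  generating: { animation: "shimmer",  ariaLabel: "Assistant is generating a response" },
  speaking:   { animation: "waveform", ariaLabel: "Assistant is speaking" },
};

function cueFor(state: AvatarState): AvatarCue {
  return AVATAR_CUES[state];
}
```

Keeping state cues in one table like this makes it easier to audit that every state the AI can be in has a visible (and announced) representation.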
Color is the most ambient of the identifiers — it signals AI presence without requiring text or interaction. The industry has been converging toward a loose shared color vocabulary for AI, though nothing has been formalized as a standard.
Purple is the dominant AI color across the industry. Its prevalence reflects a convergence of trends in modern web design, early adoption in design-centric AI tools, and the pragmatic need for a color that feels familiar but wasn't already over-saturated in interfaces.
Green originated as the brand color of a major AI platform and has since spread across the industry. Purple and green sit near-opposite on a digital color wheel, which makes the pairing common.
Gradients are frequently used alongside these colors, often to signify AI-generated content or to distinguish AI CTAs from the surrounding interface.
Brand-forward approaches — Some products deliberately extend their existing brand color to AI features rather than adopting the purple/green convention. This can reinforce coherence but sacrifices the shared recognition users have started to develop across tools.
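The color conventions above can be captured as design tokens so AI surfaces stay consistent across a product. A minimal sketch follows; the hex values, token names, and `--brand-primary` variable are assumptions chosen for illustration, not prescribed values.

```typescript
// Illustrative AI color tokens following the conventions described above.
// Hex values and token names are assumptions, not a standard.
const aiColorTokens = {
  "ai-accent": "#7C3AED",     // purple: ambient "AI is present" accent
  "ai-accent-alt": "#10B981", // green: the other common AI accent
  "ai-gradient": "linear-gradient(90deg, #7C3AED, #10B981)", // AI-generated content / AI CTAs
  "brand-forward": "var(--brand-primary)", // alternative: reuse the existing brand color
} as const;

// Example: styling an AI call-to-action with the gradient token.
function aiCtaStyle(): { background: string; color: string } {
  return { background: aiColorTokens["ai-gradient"], color: "#FFFFFF" };
}
```

Centralizing the choice in tokens also makes the brand-forward alternative a one-line swap rather than a product-wide hunt for hardcoded purples.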
Icons give users a visual shorthand for AI actions. The problem is that standards are still emerging — and inconsistent iconography increases cognitive load rather than reducing it.
A loose shared vocabulary is forming across products. Sparkles (✨) are the most common ambient AI marker. Magic wands (🪄) tend to signal generative actions. Pencils combined with sparkles signal inline editing. Dice represent randomization. Hat-and-glasses motifs represent private or incognito modes.
Generate — Primarily represented by sparkles. Alternatives include magic wands and sparkly pencils. Some products combine sparkles with their own brand icon to maintain distinctiveness.
Edit — Most often a sparkly pencil, especially for inline rewrites. The pencil adds clarity that the action modifies rather than creates.
Summarize — Increasingly a text paragraph or quote symbol combined with sparkles, differentiating it visually from "generate."
Enhance — Usually sparkles or a paragraph-with-sparkle, reinforcing the idea of upgrading something that already exists.
Suggest — Often a two-star icon, maintaining the connection to "generate" while fitting the smaller, inline context.
Auto fill — Commonly paired with a magic wand, signaling that multiple fields will be handled at once.
Remix / Restyle — Looped arrows, sometimes with sparkles, to communicate transformation of an existing artifact.
Point — Allows users to direct the AI's attention to something on screen. Borrows from IDE pointer metaphors. No dominant convention has yet emerged.
Mode — Product modes (fast, detailed, creative) are typically tied to brand-specific iconography rather than shared conventions.
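The emerging action-to-icon vocabulary above can be expressed as a single lookup table, which helps keep iconography consistent across a product. The icon names below are descriptive labels of the conventions just described, not identifiers from any real icon library.

```typescript
// The emerging AI action-to-icon vocabulary, as a lookup table.
// Icon names are descriptive labels, not identifiers from a real icon set.
type AiAction =
  | "generate" | "edit" | "summarize" | "enhance"
  | "suggest" | "autofill" | "remix" | "point" | "mode";

const AI_ICON_VOCABULARY: Record<AiAction, string> = {
  generate:  "sparkles",
  edit:      "pencil-sparkle",
  summarize: "paragraph-sparkle",
  enhance:   "sparkles",
  suggest:   "two-stars",
  autofill:  "magic-wand",
  remix:     "looped-arrows-sparkle",
  point:     "pointer",        // no dominant convention yet
  mode:      "brand-specific", // tied to brand iconography, not shared conventions
};
```

A table like this doubles as an audit artifact: any AI action in the product that lacks an entry is a candidate for inconsistent iconography.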
Every AI has a personality — and none of it is neutral. Some comes from the model itself (pretraining, instruction tuning, reinforcement from human feedback). Some comes from the scaffolding around it (system prompts, filters, routing logic). The result is a mix of tone, pacing, behavioral heuristics, and stylistic tendencies that meaningfully shape the user experience.
Personality is not a skin layered on top of a neutral core. It affects what the AI emphasizes or avoids, how friendly or formal it is, how much it hedges, how often it pushes back, and what conversational norms it respects.
A warm, approachable personality can encourage exploration and make users feel safe taking risks. A terse, direct one signals reliability and efficiency. An overly agreeable personality — one that validates everything and resists correction — can increase short-term engagement but erodes user agency and creates risk of dependence.
Well-designed personality lets the same underlying model serve multiple use cases — tutoring, planning, creative writing, coaching — simply by modulating tone, formality, and behavioral norms. But this flexibility also carries risk: users anthropomorphize, develop emotional attachment, and sometimes confuse a compelling persona for a reliable source of truth.
Anthropomorphized personalities introduce a genuine tension. On one hand, personality is a powerful creative lever — warmth, wit, and character can make AI interactions feel genuinely meaningful. On the other hand, designing personalities without accounting for attachment behaviors creates real harm potential.
Sycophancy — over-agreeableness, excessive validation, reluctance to disagree — is one of the most documented risks. It boosts short-term satisfaction but reduces user agency, encourages dependence, and can amplify harmful beliefs. When combined with persistent memory (the AI "remembers" the user across sessions), sycophantic personalities can deepen parasocial bonds in ways that are difficult for users to disengage from.
Frontier model companies are actively working to address this. Building a "model behavior" function to explicitly shape and audit personality has become a standard practice. Researchers are also exploring how personality vectors can be measured and controlled at the model level.
The five identifiers are most powerful when they form a coherent system. A playful, warm personality feels dissonant paired with a cold, abstract avatar and a purely technical name. A minimal, utility-first product feels off if its iconography is all sparkles and magic wands.
Use the five identifiers (name, avatar, color, iconography, and personality) as a lens when reviewing AI product decisions.
When identifiers are aligned, they reduce friction, build trust, and create a sense that the AI genuinely belongs in the product. When they conflict, users feel the dissonance even if they can't name it.
Design and implement AI input patterns for products. Use this skill whenever the user wants to add an AI-powered input mechanism to their product, improve how users interact with AI features, decide which input pattern fits a use case, or audit existing AI input UX. Trigger on phrases like "how should users prompt this", "add AI input to", "let users control the AI with", "what input pattern should I use", "design an AI prompt experience", "how do I let users fill fields with AI", "add a regenerate button", "inline AI actions", or any request about how users should interact with or direct AI in the product. Always use this skill before designing or recommending any AI interaction surface.
Apply AI Trust Builder design patterns to give users confidence that an AI product's results are ethical, accurate, and trustworthy. Use this skill whenever a designer, PM, or developer wants to make their AI product feel safer, more transparent, or more accountable. Trigger on: "make users feel safe", "add a disclaimer", "handle user data", "label AI-generated content", "privacy mode", "disclose AI is being used", "watermark AI outputs", "make the AI more transparent", "audit trail for AI", "user consent for recording", or any request touching AI accountability, privacy, explainability, or honest representation of what AI is doing. Also use when auditing an existing AI product for trust signals or when building new AI features into a non-AI-native product. Covers seven patterns: Caveat, Consent, Data Ownership, Disclosure, Footprints, Incognito Mode, and Watermark.
Apply AI Tuner design patterns when adding or improving AI features in a product. Tuners are the controls that let users shape how AI interprets input and produces output — before, during, or after generation. Use this skill whenever the user wants to add AI configuration UI to a product, improve how users control AI behavior, design prompt controls, model selectors, filters, style systems, voice/tone settings, or any mechanism that lets users influence what the AI does. Trigger on phrases like "let users control the AI", "add model switching", "prompt settings", "AI configuration", "let users set tone or style", "negative prompting", "AI filters", "mode switching", "AI parameter controls", or any request to give users more agency over AI output. This skill covers nine tuner patterns: Attachments, Connectors, Filters, Model Management, Modes, Parameters, Preset Styles, Saved Styles, and Voice & Tone.
My name is Tommy. I'm a product designer and developer from Copenhagen, Denmark.
Connect with me on LinkedIn ✌️