In Part 1: The AI of CX, we explored how organizations apply AI to customer experience in their operations. Now we turn to the more critical (and in my opinion the more exciting!) challenge: designing the experience of AI itself.
As AI becomes the primary interface between organizations and customers, the personality, competence, and trustworthiness of a brand’s AI becomes indistinguishable from the personality, competence, and trustworthiness of the brand.
Every chatbot conversation is a brand conversation. Every recommendation algorithm expresses brand positioning and value proposition. Every AI decision reflects your brand’s core values.
While 64% of customers would prefer that companies didn't use AI in customer service, the organizations defying this trend share a common approach: they don't just deploy AI, they design AI experiences that feel genuinely helpful rather than artificially efficient.
The difference comes down to understanding the fundamental shift in human-computer interaction. We're no longer designing interfaces that humans use. We're designing entities that humans relate to.
This requires moving beyond UX principles toward what I call RX - Relationship Experience design. Because when customers interact with your AI, they're not just completing tasks. They're forming impressions about whether your organization understands them, respects them, and can be trusted to act in their interests.
Beyond Efficiency: The Psychology of AI Interaction
Traditional UX design assumes humans interacting with interfaces. But AI interaction introduces a fundamentally different dynamic: humans relating to entities that exhibit agency, make decisions, and communicate in natural language.
This shift triggers different psychological responses. When your website loads slowly, customers feel frustrated with the technology. When your AI makes a mistake, customers feel frustrated with your judgment. The AI becomes a representative of your brand's intelligence, values, and competence.
Research reveals three critical factors that shape human-AI relationships:
Competence: Does the AI consistently perform tasks well?
Character: Does the AI seem to have my best interests at heart?
Predictability: Can I understand how this AI will behave?
Most organizations optimize exclusively for competence. Accuracy, speed, task completion. But character and predictability often matter more for long-term relationship building.
Consider the difference between these two responses to "I need help with my password":
Competence-focused: "I can help you reset your password. Click here to begin the process."
Character-focused: "Password troubles can feel really frustrating. Let me walk you through the quickest way to get back into your account."
Both solve the problem. Only one acknowledges the human experience.
The Authentic vs. The Artificial
Here's where AI design gets tricky. Customers want AI that feels human enough to relate to, but not so human that it feels deceptive. They want personality without pretense, warmth without manipulation.
This creates a very narrow band where AI feels authentic without feeling artificial.
I've seen organizations handle this in fascinating ways. One financial services company designed their AI to occasionally say "I need to double-check this information", deliberately displaying uncertainty to build trust. A travel company programmed their AI to admit when destinations weren't its "personal favorites," creating the illusion of preference without claiming human experience.
The key insight: authenticity in AI doesn't mean mimicking humans perfectly. It means being genuinely helpful while acknowledging limitations honestly.
Trust In Conversations: The Architecture of Transparent AI
Trust in AI requires a different approach than trust in humans. With humans, we often trust based on rapport and intuition. With AI, trust must be architecturally designed into the conversation itself.
As AI moves from task execution to natural conversation, the principles of good dialogue become crucial. Human conversations rely on shared context, emotional rapport, and mutual understanding. AI conversations must create these conditions artificially while maintaining useful functionality.
This requires building trust through conversational design—what I call "trust scaffolding" embedded in natural language interactions. Rather than simply executing commands, AI must demonstrate clear capability boundaries, show how it reaches conclusions, express uncertainty when appropriate, and provide seamless transitions when reaching its limits.
The most successful AI conversationalists don't try to be perfectly human; they try to be perfectly helpful while being authentically artificial. They preserve context across interactions, ask clarifying questions to ensure understanding, handle misunderstandings gracefully, and maintain personality consistency across different scenarios.
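The "trust scaffolding" idea above can be made concrete with a small sketch: gate responses on model confidence, express uncertainty honestly in the middle band, and hand off to a human at the capability boundary. The thresholds and the `answer` helper here are hypothetical illustrations, not a real API.

```python
# Illustrative sketch of conversational trust scaffolding.
# answer() is a stand-in for a model call; thresholds are assumptions.

def answer(question: str) -> tuple[str, float]:
    """Stand-in for a model call returning (text, confidence in 0..1)."""
    return "You can reset it under Settings > Security.", 0.55

def respond(question: str) -> str:
    text, confidence = answer(question)
    if confidence >= 0.8:
        return text
    if confidence >= 0.5:
        # Express uncertainty rather than feigning certainty.
        return f"I believe this is right, but let me double-check: {text}"
    # Below the capability boundary: seamless human handoff.
    return "I'm not confident I can answer this well. Let me connect you with a specialist."

print(respond("How do I reset my password?"))
```

The design choice worth noting: uncertainty is surfaced in the response itself, mirroring the financial-services example above where admitting "I need to double-check" built trust rather than eroding it.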
Consider these scenarios that illustrate contextual emotional intelligence in action:
Scenario 1: Customer types "Wow, this is ridiculous!" while trying to return a product
Scenario 2: Customer types "Wow, this is ridiculous!" while browsing luxury watches
Same words, completely different emotional contexts. Scenario 1 suggests frustration requiring de-escalation, while scenario 2 might indicate positive surprise requiring enthusiasm.
Leading organizations develop contextual emotional frameworks that interpret emotional cues within situational contexts, matching AI tone and approach to emotional needs rather than relying on simple sentiment analysis.
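A minimal sketch of such a contextual framework might interpret the same utterance differently depending on the situation it occurs in, rather than relying on raw sentiment alone. The context labels and tone categories below are hypothetical examples, not a production taxonomy.

```python
# Illustrative sketch: same words, different emotional contexts.
# Context names and tone labels are assumed for illustration only.

NEGATIVE_CONTEXTS = {"product_return", "billing_dispute", "outage"}

def choose_tone(utterance: str, context: str) -> str:
    exclamatory = "ridiculous" in utterance.lower() or utterance.endswith("!")
    if not exclamatory:
        return "neutral"
    # Scenario 1 vs. Scenario 2 from the text:
    if context in NEGATIVE_CONTEXTS:
        return "de-escalate"   # likely frustration
    return "match_enthusiasm"  # likely positive surprise

print(choose_tone("Wow, this is ridiculous!", "product_return"))   # de-escalate
print(choose_tone("Wow, this is ridiculous!", "luxury_browsing"))  # match_enthusiasm
```

Even this toy version shows why simple sentiment analysis fails here: the words alone carry no reliable signal without the situational context.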
Memory and Building Relationships
One of the most overlooked aspects of AI experience design is memory. Humans form relationships through accumulated shared experiences. AI systems that reset with each interaction can never build genuine relationships.
This goes beyond basic personalization toward creating a sense of ongoing relationship. The most sophisticated systems remember past interactions and reference them appropriately, understand how customer needs evolve over time, adapt communication styles based on what works for each individual, and acknowledge significant moments in the customer journey.
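A relationship-memory layer of the kind described above could be sketched as a small per-customer store of past issues and preferences that later conversations can reference. The storage shape and field names here are assumptions for illustration.

```python
# Illustrative sketch of a relationship-memory layer: persist key
# moments per customer and surface them in later conversations.

from dataclasses import dataclass, field

@dataclass
class CustomerMemory:
    preferred_style: str = "concise"
    past_issues: list[str] = field(default_factory=list)

memories: dict[str, CustomerMemory] = {}

def remember(customer_id: str, issue: str) -> None:
    memories.setdefault(customer_id, CustomerMemory()).past_issues.append(issue)

def greet(customer_id: str) -> str:
    memory = memories.get(customer_id)
    if memory and memory.past_issues:
        # Acknowledge the shared history instead of resetting to zero.
        return f"Welcome back! Last time we worked on {memory.past_issues[-1]}. How did that go?"
    return "Hi! How can I help today?"

remember("cust-42", "a delayed shipment")
print(greet("cust-42"))
```

The contrast is the point: an AI that resets each session can only ever produce the second greeting, no matter how long the relationship.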
Many of us have experienced this need firsthand through regular use of AI tools like ChatGPT or Claude; these tools are starting to incorporate relationship experience layers that create continuity across conversations.
AI Experience Design Across Cultural Contexts
AI personalities that work in one cultural context may fail completely in another. What feels friendly in the United States might feel inappropriate in Japan. What builds trust in Germany might seem cold in Brazil.
This requires thinking about AI personality as culturally adaptive rather than universally designed. Successful global organizations aren't building one AI personality; they're building AI personalities that adapt communication styles, decision-making approaches, authority relationships, and cultural values while maintaining brand consistency across different contexts.
Measuring Success Beyond Efficiency Metrics
Traditional AI metrics focus on task completion, accuracy, and speed. AI experiences require metrics beyond these basics to also measure relationship building: how customer trust in the AI changes over time, whether customers feel better or worse after AI interactions, if customers engage more deeply over time, and the quality of human handoffs when customers escalate.
On one engagement focused on AI customer service operations, I recommended measuring "emotional velocity": how quickly the AI shifted frustrated customer sentiment to neutral or positive emotional states. Another metric we tested was "conversational stickiness": whether customers chose to continue conversations with the AI rather than immediately seeking human alternatives. Greater conversational stickiness correlated with lower customer effort scores and higher customer satisfaction scores.
While we found this correlation, we ultimately decided NOT to use conversational stickiness as a primary metric because of ethical considerations. The metric felt like it diminished the role of humans in the value stream and created an adversarial dynamic between AI and human interactions. Instead, I recommended reframing the concept entirely.
In cases where customers switched from AI to human assistance (which we reframed as positive outcomes), we had human agents help train the AI model by sharing how they successfully de-escalated customers. This approach still improved what we had measured as conversational stickiness, but created a collaborative partnership between AI and human agents rather than positioning them in competition.
The result was better AI performance and more meaningful human work focused on complex problem-solving and emotional intelligence.
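The "emotional velocity" idea described above can be sketched as a simple sentiment-trajectory computation. The per-turn sentiment scores in [-1, 1] are assumed to come from an upstream model; the metric itself is just the average change per conversational turn.

```python
# Illustrative sketch of "emotional velocity": average sentiment
# change per turn across a conversation. Scores are assumed inputs.

def emotional_velocity(sentiments: list[float]) -> float:
    """Average sentiment change per conversational turn."""
    if len(sentiments) < 2:
        return 0.0
    return (sentiments[-1] - sentiments[0]) / (len(sentiments) - 1)

# A frustrated customer (-0.8) brought to mildly positive (+0.2) in five turns.
turns = [-0.8, -0.5, -0.1, 0.0, 0.2]
print(round(emotional_velocity(turns), 2))  # 0.25
```

A higher value means the AI is de-escalating faster; a negative value would flag conversations where the AI is making things worse, which is arguably the more important signal to monitor.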
These experiences reveal that the most successful AI implementations optimize for relationship quality and human-AI collaboration, not just task efficiency or AI preference metrics.
The Future of Human-AI Relationships
As AI becomes more sophisticated, the line between artificial and authentic will continue to blur. But rather than trying to make AI indistinguishable from humans, the future lies in creating AI that's distinctly beneficial, offering capabilities that complement rather than compete with human interaction.
The ultimate goal isn't to perfect either "AI of CX" or "CX of AI" in isolation, but to integrate them seamlessly. This requires unified experience design that ensures AI personality aligns with brand values and human representative training, seamless handoffs between AI and human interactions, consistent brand voice whether customers interact with AI or humans, and continuous learning that uses insights from AI interactions to improve human training and vice versa.
The most successful organizations treat these as interconnected systems rather than separate projects.
They understand AI not as a replacement for human connection, but as a new medium for expressing it. When done well, AI becomes an extension of your brand's personality, values, and care for customers.
Because ultimately, our brands reflect not just what we build, but how customers choose to relate to and interact with them.
🙏 Thank you for reading Strategic Humanist. If you enjoyed this article, consider subscribing for future articles delivered straight to your inbox. Or share the article with others who may find it valuable.
🤔 Curious about the Strategic Humanist?
I'm a Senior Customer Experience Strategist who helps Fortune 500 companies craft customer-focused solutions that balance business priorities, human needs, and ethical technology standards. My work focuses on keeping humans at the center while helping organizations navigate digital transformation.
Connect with me on LinkedIn to explore more insights on human-machine collaboration, customer experience, and ethical applications of AI.