The AI Trust Paradox in a New Era
April 9, 2025
A Conversation with Joyce Liong about the AI Authenticity Check
As AI-generated content becomes increasingly sophisticated, communications professionals face a balancing act: deploying powerful new tools while maintaining the authentic human connections that build trust. I recently sat down with Joyce Liong, senior partner at FINN Partners Singapore and an expert in communications strategy, to explore this tension and how organizations can navigate the evolving landscape.
Throughout our discussion, several key insights emerged that will resonate with communications professionals at all levels:
- People don’t want perfection; they want authenticity
- Human oversight remains essential, with real people shaping both the message and the intent behind it
- Transparency doesn’t require oversharing—it means providing clarity about your approach
- Trust must be cultivated internally before it can flourish externally
- AI should serve as a co-pilot rather than a replacement
Here’s our conversation.
Q: How can organizations maintain authentic human relationships while leveraging AI in their communications? What is the “AI Authenticity Check?”
A: The answer lies in intention and balance. While AI can help us scale, personalize, and even anticipate what people need, it should never replace the human voice that builds trust over time.
When organizations use AI to handle the repetitive or data-heavy parts of communication, it gives people more time to focus on the moments that matter—like showing empathy in a tough situation, being creative with storytelling, or responding thoughtfully to a customer’s concern. The best communications strategies today integrate AI in a way that amplifies our ability to be human.
When communication feels too polished, too instant, or too generic, audiences tune out. People don’t want perfection; they want authenticity. It’s important to always have a real person shaping the message and the intent behind it, even if AI helps write or deliver it.
Q: Looking at AI in APAC, are there cultural differences across markets that affect the balance between AI adoption and trust-building?
A: Absolutely—and this is where nuance really matters. In APAC markets with high digital penetration and strong tech infrastructure, like Singapore, South Korea, or China, there’s generally more comfort and curiosity around AI adoption. People see it as a sign of progress. But even then, the how matters: AI must be used in a way that aligns with local expectations around respect, privacy, and service.
In contrast, in places like Japan or parts of Southeast Asia, trust is more relationship-driven, and communication tends to be more implicit. Here, an overreliance on AI or anything that feels “cold” or overly transactional can actually erode trust. It’s not just about the tools; it’s about how those tools are woven into the culture of communication.
Organizations need to localize their AI strategies, just like they would with any other aspect of communications. Cultural intelligence is just as important as data intelligence.
Q: What role should leadership play in setting the ethics and boundaries for AI use in their communications?
A: Leadership sets the tone—not just in what is said, but in what is allowed. As AI becomes more embedded in our communications systems, leaders have a responsibility to define where the line is between helpful and harmful, insightful and intrusive.
This isn’t just about compliance; it’s about values. Leaders need to ask the tough questions:
- Are we using AI to better serve our audiences, or just to drive efficiency?
- Are we transparent about what’s AI-generated?
- Are we still prioritizing human insights?
When leaders model thoughtful, ethical AI use, it builds trust both internally and externally. Rather than treating ethics frameworks as an afterthought, organizations should develop the AI Authenticity Check alongside their AI rollouts.
Q: In what ways can companies demonstrate transparency about their AI usage without undermining stakeholder confidence?
A: Transparency doesn’t mean oversharing—it means clarity. You don’t need to plaster every chatbot with “This was generated by AI,” but you do need to be honest when AI is shaping critical decisions, influencing content creation, or replacing human interaction.
Stakeholders want to know you’re thinking about AI responsibly, not that you’re hiding it. One effective way to do this is to openly share your approach. For example, “We use AI tools to support our content research and personalization, but all final messaging is shaped by humans.” That builds credibility and shows control.
Show that your use of AI aligns with a clear set of values—like fairness, privacy, and accuracy—and people are more likely to trust that you’re using the tools for the right reasons. Transparency builds confidence when it’s paired with accountability.
Q: What impact does over-reliance on AI have on internal culture and employee morale in communications roles?
A: It’s easy to focus on AI’s external benefits—speed, scale, efficiency—and overlook the internal impact. When organizations lean too heavily on AI without proper guardrails, it can erode morale, creativity, and trust from within.
People want to feel that their voice, judgment, and storytelling instincts matter. Teams lose their spark when they feel like they’re just editing AI outputs rather than operating as creators. The key is to bring your teams along on the AI journey. Invest in reskilling, invite them to shape how tools are used, and reinforce the value of their human perspective.
AI should be positioned as a co-pilot. When people feel empowered to use AI as a tool that enhances their work rather than diminishes it, you get the best of both worlds.
Q: How do we preserve brand voice and authenticity when AI starts to sound ‘almost human’?
A: The challenge now isn’t that AI doesn’t sound human; it’s that it often sounds too polished. And ironically, that’s where authenticity starts to erode. We lose the quirks, tone, and nuances that make communication memorable when every brand starts sounding like ChatGPT.
Brand voice is more than the sentence structure or vocabulary. It’s a point of view, it has personality, and it should show empathy. These things come from real people with real experiences. So even as we use AI to help scale our messaging, we need to ensure the final output still reflects who we are and what we stand for.
Having a strong brand voice guide—not just for humans, but also one that informs your AI prompts and training data—will help ensure that your messaging still feels like it was created by a real person.
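To make that concrete, here is a minimal sketch of how a written voice guide might be translated into a reusable system prompt for an AI drafting tool. The guide contents, field names, and helper function are hypothetical illustrations of the idea, not an actual FINN Partners framework:

```python
# Illustrative sketch: turning a brand voice guide into a reusable
# system prompt for an AI writing assistant. The guide below is a
# hypothetical example, not a real organization's voice guide.

BRAND_VOICE_GUIDE = {
    "point_of_view": "Optimistic but candid; we acknowledge trade-offs.",
    "tone": "Warm and conversational, never overly polished or salesy.",
    "vocabulary": {
        "prefer": ["people", "trust", "co-pilot"],
        "avoid": ["synergy", "leverage", "best-in-class"],
    },
    "disclosure": "Say when AI assisted with drafting critical content.",
}

def build_system_prompt(guide: dict) -> str:
    """Compose a system prompt that bakes the voice guide into every draft."""
    vocab = guide["vocabulary"]
    return "\n".join([
        "You are a drafting assistant for our communications team.",
        f"Point of view: {guide['point_of_view']}",
        f"Tone: {guide['tone']}",
        f"Prefer these words: {', '.join(vocab['prefer'])}.",
        f"Avoid these words: {', '.join(vocab['avoid'])}.",
        f"Disclosure rule: {guide['disclosure']}",
        "Always leave room for a human editor to shape the final message.",
    ])

if __name__ == "__main__":
    # The assembled prompt would accompany each request to whichever
    # model the team uses; a human still reviews and shapes the output.
    print(build_system_prompt(BRAND_VOICE_GUIDE))
```

The specific tooling matters less than the principle: the same document that guides human writers also constrains the AI, so drafts start closer to the brand’s actual voice before a human editor steps in.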
Q: How should organizations measure AI trust in this new era? What new signals should we look for?
A: For the AI Authenticity Check, traditional metrics like engagement, sentiment, and brand affinity still matter, but they don’t tell the full story anymore. Today, trust is more fragile and more complex. It’s not just about what you say; it’s about how you say it, who says it, and what tools are behind it.
We need to start looking at new signals of trust:
- Are audiences responding more positively to human-led communications versus AI-generated ones?
- Are they opting into or opting out of interactions with bots?
- Are they asking for more transparency?
Internally, we should also be tracking how employees feel about AI’s role in their work. Do they feel informed, involved, empowered, or threatened and disconnected? Trust starts within and extends outward.
Q: What new skills and mindsets will communicators need to thrive in this AI-powered era?
A: Skills like content curation, AI prompt engineering, and digital storytelling will be valuable—but even more important is the mindset shift. Communicators need to embrace experimentation, be comfortable with ambiguity, and stay curious about how technology is reshaping audience behavior. The communicators who thrive will be those who combine strong editorial instincts with data fluency, harnessing AI tools while still trusting their gut.
And perhaps most critically, they need to hold the line on humanity. AI can help us write faster, predict better, and personalize more, but it’s our job to make sure the heart of communication—connection—doesn’t get lost along the way.