Security and trust

Trust is an enabler of a successful AI strategy, not a hindrance — here’s why

Chris Peake

Chief Trust Officer

Published on: December 5, 2025

What does it mean to trust artificial intelligence? And what does it mean to be a trusted provider of an AI operating system?

These are among the biggest questions for leaders grappling with how to deploy and govern AI responsibly. People rightfully want to make their work more efficient — and AI allows them to do that. But in a world where data systems are becoming increasingly interconnected, organizations must ensure sensitive data remains secure, and finding the right balance between mitigating risk and enabling innovation is an ongoing struggle.

Every executive I speak with shares the same challenge: Their teams are already using AI regardless of whether it's officially approved. They're copying sensitive client and customer conversations, financial data, or patient information out of secure systems and into AI tools because they need to work faster. The technology is simply too useful to ignore, despite the possibility of leaks, fines, or lawsuits.

I know this challenge well. At Gong, I'm responsible for all aspects of our security and IT strategy. This means I expect the same security, privacy, and transparency from Gong's vendors that our customers expect from us.

So, I think a lot about what trust in AI means, why it’s a particularly tricky challenge, and how we at Gong are helping solve it.

The collision between AI adoption pressure and inadequate governance

In regulated industries like healthcare and financial services, the use of ungoverned AI can create enormous risk, particularly for data privacy. Protected health information (PHI) and payment card information (PCI) carry severe financial and legal penalties if mishandled. Organizations are caught in a perfect storm where the pressure to adopt AI is colliding with a fundamental lack of adequate governance.

The numbers tell a sobering story. In Forrester’s AI Pulse Survey, 29% of respondents cited trust in AI systems as one of the greatest barriers to adopting generative AI. Meanwhile, another Forrester report found that 43% of B2B buyers admit they're making defensive purchase decisions more than 70% of the time by “selecting the safest choice over all other options.”

That's not innovation; that's fear-based decision making.

Organizations can't afford to be late adopters. The competitive advantage of AI is too significant, and the window to establish market position is closing. But they also cannot afford to rush forward without proper safeguards.

Fortunately, AI governance and fast-moving innovation don't have to be in opposition. The Forbes Business Trust Survey found that buyers are twice as likely to recommend a company if they trust it.

That means, when done right, trust is an enabler of a successful AI strategy — not a hindrance.

Governance frameworks that understand context

Everyone recognizes the need for governance, but many leaders trying to adopt AI are still figuring out how to implement it effectively. At the same time, regulations continue to evolve, and organizations struggle to balance innovation with emerging compliance requirements.

For companies to trust AI systems, those systems must be:

  • Transparent: Track decision paths from outcomes back to source data, with clear information on model use and visibility into how data is used and protected.
  • Governable: Enforce responsible AI usage and maintain strict data isolation while still delivering powerful insights.
  • Secure: Meet enterprises' most stringent security requirements, backed by certifications aligned with industry standards.
  • Privacy-first by design: Be clear about how data is protected while providing the tools needed to meet evolving global regulations like GDPR, CCPA, and the EU AI Act.
  • Configurable: Accommodate your specific risk tolerance with granular control over security and privacy settings.

But here's what excites me: The technology to solve this challenge already exists. Through contextual AI, it's possible to identify sensitive information and handle it appropriately, whether that means redacting it, removing it, or rephrasing it to preserve utility while eliminating sensitivity.
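
To make that concrete, here is a minimal, purely illustrative sketch in Python of what per-entity handling of sensitive data can look like. The patterns, policy names, and functions are my own assumptions for the sake of the example, not a description of Gong's implementation; a production system would use context-aware models rather than regular expressions.

```python
import re

# Purely illustrative patterns standing in for a real entity detector.
# A production system would use trained, context-aware models, not regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Hypothetical per-entity policies: redact (mask the value), remove (drop it),
# or rephrase (replace it with a generic description that keeps analytical value).
POLICY = {
    "credit_card": "remove",
    "ssn": "redact",
    "email": "rephrase",
}

def apply_policy(entity_type: str) -> str:
    """Return the replacement text dictated by the configured policy."""
    action = POLICY.get(entity_type, "redact")
    if action == "remove":
        return ""
    if action == "rephrase":
        return f"[a {entity_type.replace('_', ' ')} was shared]"
    return "[REDACTED]"  # default: mask the value entirely

def sanitize(text: str) -> str:
    """Scan text for sensitive entities and apply the configured policy to each."""
    for entity_type, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, t=entity_type: apply_policy(t), text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com, card 4111 1111 1111 1111, SSN 123-45-6789."
    print(sanitize(sample))
```

The per-entity policy is the point: removal eliminates the data, masking preserves the fact that something sensitive was said, and rephrasing keeps the analytical signal while dropping the identifying detail.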

Historically, sensitive data has meant technological exclusion, and those who would benefit most from advanced AI capabilities have been held back by data protection requirements. That changes when we can provide both great capability and great data protection simultaneously:

  • Healthcare providers can use AI to rephrase personal medical information in a way that preserves analytical value while removing identifying details. This enables them to gain insights that improve patient outcomes.
  • Financial institutions can identify risks faster to mitigate the possibility of penalties while simultaneously positioning themselves for greater growth.
  • Government agencies can serve citizens more efficiently, all without compromising the data they're entrusted to protect.

Building AI that you can trust

As partners, we need to show — not just tell — how we protect sensitive data. We can't just ask customers to trust a checkbox. That means documenting exactly which controls prevent unauthorized access, demonstrating how context-aware redaction works in practice, and publishing performance metrics like false positive rates, so customers can verify our solutions work as promised.
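
As a simple illustration of the kind of metric I mean, here is a back-of-the-envelope false positive rate calculation for a redaction system scored against a hand-labeled evaluation set. The records and numbers are invented for the example and are not Gong's published results.

```python
# Illustrative only: compute false positive / false negative rates for a
# redaction system against a hand-labeled evaluation set.
labeled_spans = [
    # (span_text, is_actually_sensitive, was_redacted_by_system)
    ("4111 1111 1111 1111",  True,  True),   # true positive
    ("order #20231105",      False, True),   # false positive (over-redaction)
    ("jane.doe@example.com", True,  True),   # true positive
    ("123-45-6789",          True,  False),  # false negative (missed PII)
    ("quarterly forecast",   False, False),  # true negative
]

false_positives = sum(1 for _, sensitive, redacted in labeled_spans if redacted and not sensitive)
false_negatives = sum(1 for _, sensitive, redacted in labeled_spans if sensitive and not redacted)
negatives = sum(1 for _, sensitive, _ in labeled_spans if not sensitive)
positives = sum(1 for _, sensitive, _ in labeled_spans if sensitive)

# False positive rate: share of non-sensitive spans incorrectly redacted.
print(f"False positive rate: {false_positives / negatives:.0%}")
# False negative rate: share of sensitive spans the system missed.
print(f"False negative rate: {false_negatives / positives:.0%}")
```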

To illustrate what I mean by transparency and governance, I'll share how I'm looking at our responsibilities as an AI provider at Gong.

We continuously strive to build AI with trust at the forefront from the start, and our AI operating system already provides capabilities for automated, intelligent protection. For example, we now support PHI redaction, building on our existing PCI DSS compliance and custom redaction capabilities. This enables organizations to safely leverage AI insights across teams and workflows that previously couldn’t be monitored or analyzed due to data privacy risks (i.e., governance plus trust unlocks adoption and innovation).

These aren't just compliance requirements; they're intelligent systems that understand context and keep sensitive information protected while preserving the analytical power that makes AI valuable in the first place.

This is what trustworthy AI looks like: Technology that provides sustainable value by protecting individuals while empowering organizations. Systems that help rather than hinder. Solutions that don't force impossible choices between innovation and responsibility.

The AI transformation isn't coming — it's here. The question isn't whether to adopt AI, but how to do it responsibly. And that answer starts by asking not what AI can do for your company, but what your company will be able to do when it's fully empowered by well-governed AI.

Chris
Chris Peake

Chief Trust Officer

Executive leader with over 25 years in Security and Information Technology, who successfully scaled multiple hyper-growth SaaS organizations past the $1B ARR mark. Expertise lies in advising on enterprise risk, regulatory compliance, and embedding security into product lifecycles to enable secure AI adoption.
