Responsible AI is a moving target. Here’s how we approach it at Gong.


We’ve moved past the early days of open-ended AI experimentation. That phase of “explore, try, and see what you can do” helped our teams at Gong — and plenty of other companies — move quickly and understand what’s possible. It also surfaced deeper questions about the responsible use of AI and the structures needed to approach this powerful technology thoughtfully.

Whether you’re building or buying AI-powered platforms, one thing is clear: Responsible AI has to be more than a philosophy, a pillar, or an ideal. It has to be a fundamental part of every decision we make in designing and developing the technology we build. It also has to be a critical factor for evaluating and implementing the technology we choose to use.

The questions aren’t going away. In fact, they’re only getting more complex.

What does responsible AI look like in practice? And how do you know if you’re doing it right? 

At Gong, we’ve shifted from organic exploration to intentional design. Because we take responsible AI seriously, we formalized our AI strategy and established internal governance, which led to us earning our ISO 42001 certification. That work also sharpened our focus on how we incorporate AI into our product and how we use it in our day-to-day business.

The risks and rewards that AI brings are evolving with incredible speed and magnitude. Navigating them, now and in the future, requires transparency, accountability, and action. Below, I share the steps we’ve taken and the hard questions we’ll keep asking to ensure responsible AI isn’t just something we say, but something we do — whether it’s for the AI we build or the AI we buy. 

Making responsible AI real: ISO 42001 certification

Gong earned the ISO 42001 certification in June 2025, making us an early adopter of the new global AI management standard. While we’re proud of this achievement, the certification itself isn’t the real value.

Certifications matter because they force us to face tough questions head-on.

At Gong, we had to answer the following: 

  • Do we truly understand how our AI systems arrive at their decisions? What guardrails or checks should we have in place based on the type of model?
  • What biases and blind spots are hiding beneath the surface?
  • Have we put the right controls in place to ensure that our customers’ data is protected, respected, and never misused?

Asking hard questions, documenting the answers, and taking real action is how we mitigate potential harms in real time and make responsible AI real.

Turning curiosity into confidence

In the early days of generative AI, our teams dove headfirst into accessible, powerful technologies. They moved fast and explored freely. It was an important phase that surfaced big possibilities.

It also surfaced a critical realization: We didn’t want our data pulled into models or tools we couldn’t control or fully understand.

That marked a shift for our security team. We began ensuring Gong’s teams operated within systems that gave us 100% confidence around: 

  • Where our data lives
  • How it is (and isn’t) used
  • Whether we can control the instance and the model’s behavior (integrations, data isolation, licensing, etc.) 
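
To make that kind of confidence concrete, one option is to track each approved AI tool in a simple internal registry that records data residency, training usage, and isolation controls, and to gate approval on those fields. The sketch below is a hypothetical illustration in Python, not a description of Gong’s actual tooling; the field names, the approval rule, and the example entry are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in a hypothetical registry of approved AI tools."""
    name: str
    data_residency: str       # where our data lives (region / tenant)
    used_for_training: bool   # is our data used to train the vendor's models?
    opt_out_available: bool   # can we opt out of that training?
    dedicated_instance: bool  # do we control the instance (isolation, licensing)?
    retention_days: int       # how long the vendor retains our data

def approve(tool: AIToolRecord) -> bool:
    """Reject tools that train on our data without an opt-out,
    or that don't give us a controllable, isolated instance."""
    if tool.used_for_training and not tool.opt_out_available:
        return False
    return tool.dedicated_instance

# Example entry; the values are illustrative, not a real assessment.
example = AIToolRecord(
    name="example-llm-service",
    data_residency="dedicated tenant, EU region",
    used_for_training=False,
    opt_out_available=True,
    dedicated_instance=True,
    retention_days=30,
)
print(approve(example))  # True
```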

That sometimes meant upgrading to enterprise-grade platforms with stricter privacy and governance controls. But more than that, it was a turning point not just for how Gong uses AI internally, but for how we think about building it. We had to reconsider how we evaluate and build every AI capability across our business. Whether we’re integrating AI into our own workflows or our products, it’s not enough to ask “Can it work?” The bigger question we have to ask is:

Do we understand what we’re implementing well enough to ensure that it does what we intend and nothing more?

That shift pushed our organization toward a more intentional AI approach that’s grounded in strategy, governance, and shared accountability.

Operationalizing responsible AI

Our ISO 42001 certification was the result of a leadership team that set a high bar for clarity and rigor in how we build and manage AI, all without sacrificing product innovation. 

To strike that balance, our security team led a three-step approach: 

1. Get clear on what we need AI to do

Before we could talk about risk or safeguards, we had to define our intent by answering these questions:

  • How do we use AI internally at Gong? 
  • How does AI function across Gong’s platform? 
  • How do we choose and design the models we rely on — whether we build them or not?

Getting specific gave us a baseline for our overarching strategy and surfaced early gaps we wanted to address.

2. Put principles into practice

Once we clarified exactly what we wanted AI to do, it was time to put a plan into action. This meant meeting regulatory requirements like the EU AI Act, testing our AI for bias and security, and putting strong privacy protocols in place for the types of data we use. These steps help ensure our AI works only as intended.
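
As a purely illustrative example of what the simplest kind of bias testing can look like, the sketch below compares a model’s positive-outcome rate across two groups and flags any gap beyond a chosen tolerance. It’s a minimal, hypothetical check in Python; the metric, the threshold, and the data are assumptions, not a description of Gong’s actual test suite.

```python
def positive_rate(outcomes: list[bool]) -> float:
    """Share of positive model outcomes for one group."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Illustrative outcomes from a hypothetical evaluation run
group_a = [True, True, False, True, True, False, True, True]
group_b = [True, False, False, True, False, False, True, False]

THRESHOLD = 0.2  # an assumed tolerance; real thresholds depend on context
gap = parity_gap(group_a, group_b)
print(f"parity gap = {gap:.2f}")
if gap > THRESHOLD:
    print("Flag for review: outcome rates differ more than the allowed tolerance.")
```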

3. Make responsibility everyone’s business

Because we’re an AI-powered platform, AI safeguards aren’t a check-the-box exercise for us. We created a formal AI governance committee with leaders from legal, security, R&D, and go-to-market. Each of these representatives surfaces different, sometimes unforeseen risks, so bringing them together helps us catch what any one group might miss.

The committee helped solidify our strategy, safeguards, and accountability model (the basis for our ISO 42001 certification) so responsible AI is something we can sustain. While many companies haven’t yet adopted this kind of cross-functional governance, it’s proving to be a solid approach.

Hold vendor partners to a higher standard

Building internal systems for responsible AI changed how we operate, but it’s only part of the equation. As I’ve said, the AI we integrate matters just as much as the AI we build. That means holding our vendors to the same standard. Here’s the reality check we run during vendor assessments:

  • Is our data being used for model training? Can we opt out?
  • Do we retain complete control and ownership over our data?
  • How exactly is our data protected and for how long?
  • How transparent is the model? Is it a black box?
  • Do we have visibility into how the model was initially set up and how it’s being updated within the product over time? 
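
One lightweight way to keep those answers honest is to treat the checklist as structured data and refuse to close an assessment while any question is unanswered. The Python sketch below is a hypothetical illustration of that idea; the questions mirror the list above, and everything else about it is an assumption rather than Gong’s actual process.

```python
# Hypothetical vendor-assessment checklist: every question needs a
# documented answer before the review can be closed.
VENDOR_QUESTIONS = [
    "Is our data used for model training, and can we opt out?",
    "Do we retain complete control and ownership over our data?",
    "How is our data protected, and for how long is it retained?",
    "How transparent is the model, or is it a black box?",
    "Do we have visibility into how the model is set up and updated over time?",
]

def open_items(answers: dict[str, str]) -> list[str]:
    """Return the questions that still lack a documented answer."""
    return [q for q in VENDOR_QUESTIONS if not answers.get(q, "").strip()]

# Illustrative, partially completed assessment
answers = {
    VENDOR_QUESTIONS[0]: "No training on customer data; opt-out confirmed in contract.",
    VENDOR_QUESTIONS[1]: "Yes; ownership retained under the data processing agreement.",
}

remaining = open_items(answers)
if remaining:
    print("Assessment cannot be closed. Unanswered questions:")
    for q in remaining:
        print(f" - {q}")
```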

Not all of these answers come easily, but comfort isn’t the goal. Clarity is.

Responsibility evolves alongside AI

It’s easier to talk about what we want AI to do than what it shouldn’t do. But preventing avoidable harm and shaping these systems for real-world good is compelling, worthy work.

At Gong, we’ve committed to sharing openly, asking better questions, and having the courage to say when we don’t have the answers. I challenge others to do the same. The only thing worse than facing tough questions is pretending you don’t have to, because we won’t get a second chance to get this right.

That’s how we’re approaching the weighty responsibility of AI: with clear eyes and strict accountability.