Evaluating AI tech as a security leader: Advice from Gong’s CISO

AI has rapidly become ingrained in our lives – we use it personally to plan trips and meals and professionally for task automation and insight gathering.
Today, organizations are feeling the pressure to use AI and realize the coveted efficiency gains. However, many companies implement the technology without fully understanding the potential business impact and risks. And, once they adopt AI, there’s often misalignment around usage and controls.
At Gong, we’re implementing AI solutions and building our own models, giving us a unique perspective on the technology’s inner workings. So, while there isn’t a one-size-fits-all approach to assessing AI platforms, we have uncovered best practices you can use to guide your evaluation process.
Here’s what I’ve learned from building and using AI extensively as Chief Information Security Officer at Gong.
Balancing business agility with risk mitigation
The way we’ve evaluated vendor risk in the past won’t work today. Typical vendor risk assessment questionnaires overlook nuances critical to AI-driven technologies. So, today’s security teams are struggling to redefine technology assessments in the age of AI.
Many companies conduct foundational-level AI evaluations, reviewing standard attestations like SOC 2 reports and ISO certifications. These are essential, but they reflect only one piece of the puzzle.
We need to dive deeper by evaluating the inner workings of a provider’s models, the expected value, and related data exposure risks. However, doing so often leads organizations to build massive AI councils that are hesitant to implement the technology, hindering innovation and growth.
The evaluation process isn’t black and white – security leaders must find the middle ground by building workflows to assess risk without slowing down the business.
Here’s how.
4 ways to evaluate AI solutions
1. Identify your primary use case and related risks
The risks associated with AI platforms vary depending on how you use them. For example, when using a data enrichment platform, you may be less concerned with how the model is trained and more interested in the output quality. Determine what you care about most: data processing, output quality, whether your data is used to train the model, the security of the model, or the potential for a data leak. Then gather information on the relevant risks and how you'll be protected against them, as sketched below.
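For illustration, a security team might encode this use-case-to-risk mapping in a lightweight triage structure, so reviewers know which questions to prioritize for a given vendor. This is a minimal sketch in Python; the use cases and risk areas are hypothetical examples, not an exhaustive taxonomy.

```python
# Minimal sketch: map each AI use case to the risk areas that matter most.
# Use cases and risk labels below are hypothetical examples.
USE_CASE_RISKS = {
    "data_enrichment": ["output_quality", "data_accuracy"],
    "email_generation": ["output_quality", "brand_safety", "hallucination"],
    "internal_copilot": ["data_processing", "training_on_our_data", "data_leakage"],
}

def risks_to_review(use_case: str) -> list[str]:
    """Return the risk areas reviewers should prioritize for a use case."""
    # Unknown use cases get the most conservative review scope.
    return USE_CASE_RISKS.get(use_case, ["data_processing", "data_leakage", "output_quality"])

print(risks_to_review("data_enrichment"))  # ['output_quality', 'data_accuracy']
```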
2. Understand the AI model infrastructure
When dealing with an AI company, you must have a clear picture of the underlying AI and its implications. It’s critical to understand the inner workings of AI models, where your data is going, and how your data is being protected throughout model training and building. Surface-level understanding isn’t enough.
3. Assess the data disclosure risk
Is your sensitive data being exposed to outside individuals and organizations? Uncover how the platform uses your data to train the model or produce an output. Some models neither train on your data nor retain a prompt history, so the exposure risk is low. Others may incorporate your data into the model itself, or ingest it to produce results. Make sure you know how your data is used to generate key outcomes, then weigh the risk against the reward.
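When a vendor's model does ingest your prompts, one common mitigation is to minimize what leaves your environment in the first place. Below is a minimal sketch of prompt redaction using Python's standard re module; the patterns are illustrative placeholders, not a production-grade DLP filter.

```python
import re

# Minimal sketch: strip obvious sensitive values from text before it is
# sent to a third-party model. Patterns are illustrative, not exhaustive.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # likely card numbers
]

def redact(text: str) -> str:
    """Replace sensitive patterns before the prompt leaves your environment."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Ask jane.doe@example.com about card 4111 1111 1111 1111"))
# -> "Ask [EMAIL] about card [CARD]"
```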
4. Understand quality controls
Is the AI model producing effective, accurate, and unbiased results? For example, if you’re using AI to write outbound emails, make sure the emails aren’t saying anything they shouldn’t be. And consider if the information is accurate – is the solution hallucinating or providing insights rooted in data? Avoid trusting a machine to make critical business decisions without vetting the final result.
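As a sketch of what that vetting step could look like, the check below holds AI-generated emails for human review unless they pass simple policy rules. The banned phrases and the numeric-claim check are hypothetical placeholders for whatever controls fit your business.

```python
import re

# Minimal sketch: gate AI-generated emails behind basic policy checks.
# The rules below are hypothetical placeholders, not any vendor's actual controls.
BANNED_PHRASES = ["guaranteed results", "100% accurate", "risk-free"]

def vet_email(draft: str, verified_facts: set[str]) -> tuple[bool, list[str]]:
    """Return (approved, issues); 'verified_facts' holds claims checked against your data."""
    issues = []
    lowered = draft.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            issues.append(f"banned phrase: {phrase!r}")
    # Crude hallucination check: flag percentage claims not backed by verified data.
    for claim in re.findall(r"\d+%", draft):
        if claim not in verified_facts:
            issues.append(f"unverified claim: {claim}")
    return (not issues, issues)

approved, issues = vet_email("We saw a 40% lift. Guaranteed results!", {"40%"})
print(approved, issues)  # False ["banned phrase: 'guaranteed results'"]
```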
Building AI models responsibly at Gong
We’re dedicated to working with disciplined AI vendors because we aim to be one ourselves. As Gong’s CISO, I’m laser-focused on ensuring we build robust, secure, and compliant AI models.
Gong’s AI models are fine-tuned based on real sales conversations, leading to more relevant results for revenue teams. Our models rely on active consent and ensure you retain control over your data, which is never shared in the public domain. Enterprise-grade security is a requirement for all Gong models.
Revenue teams at organizations like Dropbox, Snowflake, and LinkedIn use our models to gather actionable insights, automate tasks, and achieve winning business outcomes.
These models have supported the recent release of Gong Agents – more than a dozen specialized AI agents designed to support you across the revenue lifecycle, including:
- AI Briefer: Standardize and streamline knowledge sharing with easy-to-use templates
- AI Composer: Craft more effective email follow-up
- AI Deal Predictor: Identify the health of your pipeline and the likelihood of deals closing
But that’s just the beginning – AI will continue to permeate our personal and professional lives in new ways every day. Today’s security leaders must identify the impact and potential risks before adopting new AI solutions. Conducting thoughtful evaluations will help us keep pace with advances in technology while building a more secure future.