AI is moving at breakneck speed. For GTM, marketing, and sales teams eager to harness its power, there’s never been more promise—or more exposure to risk.
Recent industry research suggests that as many as 70% of enterprise AI projects fail due to inadequate preparation and assessment, and fewer than half of companies see a positive ROI on their AI investments. That’s a wake-up call for everyone trying to raise productivity and revenue by leveraging AI.
So, where do the risks lie? I recently created a high-level “AI Risk & Readiness Quiz” designed to help leaders quickly self-assess where their organizations might need to shore up their AI practices. It highlights five critical areas to assess:
“By identifying potential gaps and areas for improvement, you can pave the way for responsible and effective AI adoption.”
1. Data Quality
Can You Trust the Data Behind Your Models?
AI can seem smart, but if your training data is messy, incomplete, or biased, even the smartest model goes off the rails. Surprisingly, many teams still rely on unchecked or vendor-provided data, skipping regular audits and automated data cleaning. That’s a recipe for unpredictable results, and for reputational risk if something goes public. Improving your data is fundamental to improving your results from AI. Start small: build data checks into your workflows early, and don’t be afraid to bring in external audits (they catch what you can’t).
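Here’s a minimal sketch of what an automated data check might look like, in Python with pandas. The file name and column names (like “deal_size”) are placeholders for your own schema, and the thresholds are arbitrary starting points, not a standard.

```python
import pandas as pd

# Minimal data-quality gate: run it before any training or enrichment job.
# "crm_export.csv" and the "deal_size" column are hypothetical placeholders.
df = pd.read_csv("crm_export.csv")

issues = []

# Flag exact duplicate rows.
if df.duplicated().any():
    issues.append(f"{df.duplicated().sum()} duplicate rows")

# Flag columns with more than 5% missing values (tune to your tolerance).
missing = df.isna().mean()
for col, frac in missing[missing > 0.05].items():
    issues.append(f"column '{col}' is {frac:.0%} missing")

# Flag values that are impossible on their face.
if "deal_size" in df.columns and (df["deal_size"] < 0).any():
    issues.append("negative values in 'deal_size'")

if issues:
    raise ValueError("Data quality check failed: " + "; ".join(issues))
print("Data passed basic quality checks.")
```

Even a gate this simple, wired into your workflow, catches the obvious problems before they ever reach a model.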
2. Transparency
Who Understands Your AI’s Decisions?
Stakeholders, from leadership to customers, are demanding to know how AI reaches its conclusions. If you ship an AI product without transparency or explainable models, trust erodes quickly. This is especially true for customer-facing tools. Pro tip: prioritize explainability in your model choices and make regular “show your work” demos part of your go-to-market process.
If you are using AI models in your work, it’s important to keep track of how you use them and to be able to answer questions like: What data are you sharing with which models, and what are the privacy implications? What are you asking the AI to do with that data? Why have you chosen to use AI in this way?
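If you want a concrete starting point, here’s a hypothetical sketch of that kind of record-keeping: one log line per model call, noting which model saw what data and why. The field names, example values, and file path are illustrative, not a standard.

```python
import json
from datetime import datetime, timezone

def log_ai_usage(model, data_categories, purpose, path="ai_usage_log.jsonl"):
    """Append one JSON line describing a single AI model call."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,                      # which model the data went to
        "data_categories": data_categories,  # what kinds of data were shared
        "purpose": purpose,                  # why AI was used for this task
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example call; the values are hypothetical.
log_ai_usage(
    model="gpt-4o",
    data_categories=["prospect email text", "CRM notes"],
    purpose="draft a first-touch outreach reply",
)
```

A plain-text log like this is enough to answer all three questions above when leadership, legal, or a customer comes asking.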
3. Legacy Integration
Is AI Playing Nice with What You Already Have?
AI rarely lives in a vacuum; it’s a technology that requires cross-functional planning and collaboration. For GTM teams, seamless integration with legacy systems (think: CRMs, email, data warehouses) is often a stumbling block, and integration gaps can hobble progress fast. The fix? Regular integration audits, plus early, honest collaboration among marketing, IT, and ops.
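An integration audit doesn’t have to be elaborate. As a sketch, it can start as a script that confirms the systems your AI workflow depends on are still reachable; the endpoints below are placeholders for your own CRM, email platform, and data warehouse.

```python
import requests

# Placeholder endpoints: replace with the health-check URLs of your own stack.
ENDPOINTS = {
    "CRM API": "https://crm.example.com/api/health",
    "Email platform": "https://mail.example.com/status",
    "Data warehouse": "https://warehouse.example.com/ping",
}

for name, url in ENDPOINTS.items():
    try:
        resp = requests.get(url, timeout=5)
        status = "OK" if resp.ok else f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        status = f"UNREACHABLE ({type(exc).__name__})"
    print(f"{name:<15} {status}")
```

Run it on a schedule and you have an early-warning system for the “AI stopped syncing with the CRM” class of surprise.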
4. Ethical Governance
Is There a Real Framework—or Just Lip Service?
AI ethics can sound abstract—until something goes wrong and your brand takes the hit. Unfortunately, many orgs are still “in development” when it comes to building formal ethics frameworks. You don’t need to boil the ocean: start with clear guidelines for your specific use cases, and hold regular (quarterly or annual) reviews to adapt as you scale.
5. Risk Assessment Frequency
How Often Are You Actually Evaluating Risks?
It’s one thing to run an AI risk assessment during implementation—but AI, data, and business needs change constantly. If you’re only assessing once a year or less, you’re flying blind. The best-performing teams do quarterly reviews, using not just tech audits but stakeholder feedback. Don’t overthink it: set a recurring meeting now and adjust as you go.
What This Means for You
Don’t wait for a crisis to get serious about AI risk. Use simple tools—like the AI Risk & Readiness Quiz—to pinpoint your blind spots, then take small, consistent actions to improve. Build confidence by involving your team and others in your company, checking your data, and conducting regular risk reviews. And don’t be afraid to reach out: the AI community is full of leaders who’ve wrestled with these same challenges. Start today—your future self will thank you.