The conversation around AI has quickly shifted from “should we adopt it?” to “how do we operationalize it safely and effectively?” WalkMe’s Global Field CTO KJ Kusch joined me on the Age of AI podcast recently, and in a two-episode conversation we explored how democratizing AI access across organizations isn’t just an idealistic goal; it’s a business imperative that directly impacts the bottom line.
Watch Exploring the Democratization of AI, Ep. 2, here:
The Measurement Challenge
During a recent tech industry event with CFOs discussing AI implementation, I was somewhat shocked to hear a finance leader from a major corporation claim that measuring AI’s impact is essentially impossible, and he could not have been more sure of himself on this topic. He is wrong: organizations can, should, and do measure AI adoption, usage patterns, error rates, and time savings with precision. Frankly, measurement is an essential part of rolling out AI across an organization; if you have no way of measuring its impact, how can you tell whether it’s working?
As we chatted about this, KJ pointed out that the difference between completing business processes with and without AI assistance is both quantifiable and dramatic. When you track metrics like cycle time, error rates, and task completion, the data tells a compelling story. For sales teams building account plans or service centers handling customer queries, the time savings alone can amount to hours per task, not minutes.
Beyond Training: Learning in the Flow of Work
Traditional AI training approaches, things like sending employees to learning portals or relying on technical staff to figure things out, simply don’t scale. The democratization of AI requires a fundamentally different approach: interactive, in-context learning that happens as people work.
This means guiding users through prompting techniques, providing real-time feedback on policy compliance, and offering subtle nudges when they’re about to make mistakes. I always think of it as having an invisible assistant on your shoulder, gently correcting course before problems occur. The key is making AI feel natural and intuitive rather than intimidating and complex.
Tackling Fragmentation and Tool Fatigue
One of the biggest challenges organizations face today is AI tool proliferation. With companies often deploying multiple AI solutions — Copilot here, Anthropic there, Watson somewhere else — users face a fragmented experience that creates confusion and resistance.
As KJ and I discussed, the solution isn’t finding one AI tool to rule them all; that doesn’t exist, regardless of what a vendor might claim. Instead, organizations need a cross-platform orchestration layer that provides consistent guidance, policy enforcement, and user experience across all their AI investments. This centralized approach eliminates the need to train users separately on each tool and ensures consistent adoption of security and compliance measures.
AI is Delivering Real Business Impact
As we talked about AI delivering real business impact, KJ shared a compelling example of a customer who rolled out AI summarization across a department. By embedding usage rules and guidance directly into the tool interface, they achieved zero security incidents while doubling adoption rates in just six weeks, which was half the expected timeframe. For a sales organization where reps are paid on commission, faster deal closure doesn’t just benefit the company; it directly empowers employees to earn more.
This is the kind of measurable ROI that makes executives pay attention, and maybe something the CFO I mentioned earlier should check out. But it’s also about something more fundamental: making employees feel safe and confident using AI tools.
Ethics, Inclusion, and the Human Element
Democratizing AI isn’t just about access; it’s about ensuring ethical and equitable use across all employee levels. This requires proactive warnings when users are about to misuse AI, logging of risky behavior patterns, and just-in-time nudges that prevent incidents before they occur.
Consider the example of performance reviews. AI can help managers craft better evaluations, but it needs guardrails to prevent inappropriate language or statements that could create legal liability. The system doesn’t just correct the user after the fact; it identifies problematic content in real-time and guides them toward better alternatives.
Addressing Job Displacement Fears
The anxiety around AI replacing jobs is real and shouldn’t be dismissed. However, the current reality, especially for front-office, user-facing roles, is more about augmentation than replacement. KJ and I agree: the goal should be leveraging AI to give employees “superpowers” rather than making them obsolete.
This requires a phased approach, taking bite-sized steps toward AI integration while planning for year one, year two, and year three transformations. It also means ensuring that AI benefits aren’t concentrated at the executive level. When higher-level managers are the primary users of AI tools while frontline employees are left behind, you create a problematic “haves and have-nots” dynamic that undermines organizational culture.
Industry Patterns and Challenges
Across industries, we’re seeing the fastest AI adoption in service centers and sales organizations — areas where AI can process patterns and accelerate workflows with clear ROI. However, there are still adoption challenges in highly regulated industries like financial services, healthcare, and certain HR functions where risk concerns outweigh perceived benefits.
The irony is that AI could potentially save lives in healthcare settings, yet fear of liability leads some providers to actively avoid it. This defensive posture, similar to early social media resistance, ultimately puts organizations at a competitive disadvantage.
The Path Forward
If there’s one key takeaway from my conversation with KJ, it’s this: for success with your AI initiatives, treat it like the culture change that it is, and not simply the rollout of a new piece of tech to the stack. Success requires building trust, guiding usage thoughtfully, and governing smartly. It means empowering people while enforcing policies and embedding support directly into daily workflows.
The organizations that get this right won’t just avoid AI risks, they’ll unlock measurable value while creating a more capable, confident workforce ready for the future of work.
One other bit of advice: think hard about the AI vendors you choose to work with, as they will likely play a big role in your overall success. I have heard KJ’s customers talk about her, and about their experience deploying and using WalkMe, on more than one occasion. The heartfelt way they rave about her, and the fact that she is alongside them every step of the way with every new tech implementation, including their AI initiatives, is, to my way of thinking, exactly the kind of relationship you want with a vendor. When a vendor treats the engagement like a partnership they are as deeply invested in as you are, that’s when the magic happens.
Check out Episode 1 of my interview with KJ Kusch here:
https://www.youtube.com/watch?v=tcqNL5ERLnc&t=2s
and you can find and follow her on LinkedIn here.
This article was originally published on LinkedIn.
Read more of my coverage here:
Unlocking the Hidden Value: How Commvault’s Data Rooms Transform Backup Data into AI Assets
Commvault Makes Conversation the New Interface for Enterprise Cyber Resilience
The Readiness Tipping Point: What Kyndryl’s 2025 Report Reveals About Enterprise Tech Strategy
