In today’s fast-paced business landscape, the rush to unlock the potential of generative AI is palpable. However, amidst the excitement, the elephant in the room that can’t be ignored is the need to address the security concerns around all things gen AI. On that front, I found it interesting to read Securing generative AI: What matters now, a report developed by the IBM Institute for Business Value, which found a stark contrast between C-suite executives’ attitudes toward AI security and the actions they are actually taking. To help address the security challenges posed by generative AI, IBM Security has partnered with AWS to share insights on how organizations can take action to secure AI.

Insights from the IBM IBV Study: Security Must Be Foundational for Generative AI Efforts to Deliver Value

I’ll admit to having a bias on this front, but for gen AI efforts to deliver value, security must be a foundational part of the equation. That’s why I found it fascinating to see that while 82% of survey respondents reported they believe secure and trustworthy AI is essential to their business success, only 24% of generative AI projects currently have a security framework in place. Yikes!

Even more concerning, security deficiencies are a reality, and not just as they relate to generative AI. IBM IBV research indicates that corporate capabilities in zero trust (34%), security by design (42%), and DevSecOps (43%) remain stuck in the pilot stage. Double Yikes!

To make matters worse, the IBM IBV research revealed that nearly 70% of respondents prioritized innovation over security — a trend that is all too common in the tech ecosystem as a whole. I will admit to reading this and immediately thinking “you’ve gotta be kidding me. Haven’t these people learned anything from some of the massive data breaches we’ve experienced in recent years?” In far too many instances, it’s clear that the answer to that question is a resounding “no.”

It’s evident that the gap between understanding the risks associated with AI and taking actionable steps to mitigate them is widening. As AI becomes more entrenched in industries ranging from healthcare to transportation, this divide is only expected to grow. To prevent that, organizations have got to, respectfully, pull their collective heads out and put security front and center — before innovation leads them down a dangerous path.

There is good news, however. IBM IBV research reveals that many organizations are still in the evaluation or pilot stages of key generative AI use cases, like information security (43%) and risk and compliance (46%). This is a pivotal moment for organizations and an opportunity to prioritize security from the outset, so they can avoid falling prey to emerging threats as adversaries learn to exploit AI’s weaknesses.

The AI Threat Landscape: Familiar but Evolving

Without a doubt, gen AI offers organizations unparalleled opportunities, but it also introduces equally unprecedented risks. While executives acknowledge the risks of gen AI, the gap between awareness and action is of concern. Also troubling is the rush to create value from generative AI, often without considering security. As you can see from the graphic below, when asked what they’re most concerned about in adopting gen AI, 51% of execs cited the unpredictable risks and security vulnerabilities the rise of gen AI brings, and 47% cited new attacks targeting existing AI models, data, and services as a major area of concern.

Gen AI Brings a Big Upside for Cybercriminals

Just as all of us across the business world are diving in, learning, and leveraging the many benefits of gen AI, cybercriminals are doing the very same thing. And just as AI is upping our collective game, it is doing the same for threat actors who see the opportunity to disrupt and profit. We are still in the early days of the exploration and adoption of generative AI, but the gap between C-suite concerns and the action being taken on the security front shows a very real need to secure AI, and quickly. These are early days for threat actors, too, and as they gain more knowledge and expertise and as the ecosystem matures, the risk will grow exponentially.

However, even in these early days, we are seeing more sophisticated and precisely targeted threats emerging at greater velocity across the board. Deepfake technology, AI-generated phishing emails, and data leaks caused by careless use of tools like ChatGPT are just the beginning. As AI is integrated into critical infrastructure, whether in energy, healthcare, or transportation, the stakes will only rise. As you can see from the graphic below from IBM Security, a whole new raft of threats is quickly emerging.

Emerging threats to Gen AI

IBM X-Force researchers report they are already anticipating a spike in attacks targeting AI systems. With AI adoption accelerating, the clock is ticking for organizations to act and take steps to secure their AI projects now, before these vulnerabilities can be exploited.

The Essential Role of Governance and Risk Management

As organizations dive deeper into generative AI, they also need to reassess their governance, risk, and compliance (GRC) strategies. The IBM IBV research underscores the importance of updating these models to reflect the unique threats posed by AI. Whether organizations are using third-party AI tools, developing AI solutions from pre-trained models, or building custom models from scratch, each scenario presents distinct security challenges. And each demands a unique approach to governance.

For instance, companies using third-party AI tools like OpenAI’s ChatGPT or Microsoft 365 are not absolved of responsibility. Although vendors handle much of the security, organizations must still secure their data. The rise of “shadow AI” (employees using unsanctioned AI tools) is also a serious issue that organizations must address. Employees inadvertently sharing sensitive data with third-party tools can create huge vulnerabilities, and security teams are often left scrambling to mitigate the damage.
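
To make the data-leakage point concrete, here is a minimal sketch of one common guardrail: screening outbound prompts for sensitive patterns before they ever reach a third-party gen AI tool. This is an illustration of the general technique, not something prescribed in the IBM/AWS report; the patterns and the redact function are hypothetical placeholders, and a real deployment would lean on a proper DLP service with a far richer ruleset.

```python
import re

# Illustrative patterns only; a production deployment would use a DLP
# service or a much richer ruleset (names, customer IDs, source code, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with labeled placeholders before
    the prompt is sent to an external generative AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a reply to jane.doe@example.com about SSN 123-45-6789."
    print(redact(raw))
    # Draft a reply to [EMAIL REDACTED] about SSN [SSN REDACTED].
```

A filter this crude is no substitute for policy and monitoring, but it illustrates where a guardrail can sit: between employees and the external tool, catching inadvertent disclosures before they leave the organization.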

A Holistic Approach to Securing the AI Pipeline

IBM and AWS advocate for a “secure-by-design” approach, integrating security into every stage of the AI development pipeline. This isn’t just about protecting data — it’s about ensuring that every aspect of AI, from model training to deployment, is safeguarded against emerging threats. Here’s a visual outlining some of the core areas organizations must focus on:

Securing the AI value stream

Some key considerations include:

Data Security: Protecting AI training data is paramount. Sensitive information must be encrypted, and compliance with data privacy regulations should be non-negotiable. A breach in training data can have far-reaching consequences for an AI model’s accuracy and reliability.

Model Integrity: AI models themselves must be shielded from manipulation. Regular testing, threat modeling, and monitoring for vulnerabilities are key to maintaining the integrity of AI systems over time.

Monitoring and Incident Response: Organizations must establish robust mechanisms for detecting security incidents and responding to them in real time. As AI models evolve, so do the threats targeting them.

Access Control: Identity and access management practices must be strengthened to ensure only authorized personnel can interact with sensitive data and AI models (see the sketch following this list, which pairs access control with data encryption).
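
As promised above, here is a minimal sketch, assuming Python and the widely used cryptography library, of what encrypting training records at rest (Data Security) and gating who may decrypt them (Access Control) can look like. The role names and policy are hypothetical illustrations, not anything specified by IBM or AWS, and in production the key would come from a managed KMS or HSM rather than being generated in-process.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In production the key would live in a managed KMS/HSM and never
# sit alongside the data it protects; generating it here is demo-only.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical role-based policy: only these roles may read training data.
AUTHORIZED_ROLES = {"ml-engineer", "security-auditor"}

def store_training_record(record: bytes) -> bytes:
    """Encrypt a training record before it is written to storage."""
    return cipher.encrypt(record)

def load_training_record(token: bytes, role: str) -> bytes:
    """Decrypt a record only for an authorized role; deny everyone else."""
    if role not in AUTHORIZED_ROLES:
        raise PermissionError(f"role '{role}' may not access training data")
    return cipher.decrypt(token)

if __name__ == "__main__":
    token = store_training_record(b"customer_id=42,label=churn")
    print(load_training_record(token, role="ml-engineer"))
    try:
        load_training_record(token, role="intern")
    except PermissionError as err:
        print(err)
```

The design point is simply that encryption and access checks sit in the data path itself, so a misconfigured tool or a curious insider hits a hard stop rather than a policy document.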

A New Era of Responsibility: Trusted Vendor Partners Can Play a Significant Role

With the growing adoption of generative AI, the need for a shared responsibility model in security has never been clearer. IBM and AWS report that organizations increasingly rely on third-party vendors for their AI security solutions. This trend is not unlike the early days of cloud adoption, when companies turned to external partners to ensure secure and efficient management. Now, organizations are looking to these partners to provide the expertise and support necessary to mitigate the complex risks associated with AI. Those partners are helping organizations develop their AI strategies, address risk and regulatory requirements, train teams, and provide operational support as they embark on their enterprise AI journeys. The graphic below, from the IBM IBV study, shows how important these capabilities are when selecting a vendor partner for gen AI security needs:

Partnerships help secure GenAI

As AI evolves, organizations need partners who can not only guide them through the intricacies of security but also provide practical tools and services. Over 90% of organizations rely on third-party solutions or managed services to secure their generative AI capabilities. That support comes from infrastructure partners (like AWS, Azure, and Google), from managed services providers, ecosystem partners, and suppliers, and from security products or solutions embedded within the organization’s tech stack. When it comes to securing generative AI, going it alone and developing your own solutions is pretty much the slowest path, and the riskiest one. Here is a look at data from the IBM IBV study showing where that third-party support is coming from:

Moving Forward: A Secure AI Future Starts Now

The time to prioritize AI security is now. As generative AI becomes a cornerstone of business strategy, organizations must act quickly to secure their models, data, and systems. A secure-by-design approach, combined with a robust governance and risk framework, is crucial for achieving long-term success.

Organizations can’t afford to wait for threats to materialize before addressing them. AI’s transformative power is undeniable, but without the right security measures in place, that power can quickly become a liability. By focusing on security from the very beginning, and by leveraging the knowledge and expertise of trusted vendor partners, organizations can confidently move forward with their AI initiatives — empowered by the knowledge that they’ve built their AI systems on a foundation of trust, reliability, and resilience.

The goal isn’t to simply react to security risks but to anticipate them and build systems that are resilient in the face of evolving threats. The future of AI depends on it.

IBM Security and AWS partnered on an Action Guide, showcased in the final part of the report. I encourage you to download the report and consider how you can secure generative AI within your organization. The authors have shared contact information on page 26, and I am confident they would love to hear from you and explore how they can help secure your enterprise AI journey.

Find and download the report here: Securing generative AI: What matters now

 

This article was originally published on LinkedIn.

 
