The Pega agentic AI research study reveals a complex landscape in workplace AI adoption, where enthusiasm for AI’s potential is tempered by practical concerns about its reliability and implementation. The study, surveying over 2,100 working professionals across the US and UK, highlights both the growing acceptance of AI agents in daily work routines and the persistent barriers to full-scale adoption.
Pega Agentic AI Research Study Findings: A Mixed Picture of Adoption
The Pega agentic AI research study findings paint an intriguing picture of the current position of AI in the workplace. With 58% of workers reporting they are already using AI agents and 57% expressing openness to their use, the technology has clearly moved beyond early adoption into mainstream consideration. Not surprisingly, the primary benefits cited by users focus on practical improvements: 41% appreciate the automation of tedious tasks, while 36% value reduced time spent searching for information, and 34% highlight quick meeting summarization capabilities.
The Trust Paradox is Real
However, beneath these encouraging adoption rates lies a significant trust deficit, which is also to be expected. One-third of workers participating in the Pega agentic AI research study express concerns about the quality of AI-generated work, and nearly as many (32%) point to AI’s lack of human intuition and emotional intelligence as a major limitation. The research reveals a fundamental paradox: workers are increasingly willing to use AI tools while simultaneously harboring deep reservations about their reliability. Seeing these results and thinking about trust, I also wonder whether workers harbor reservations about integrating agentic AI into their workflows, and concerns about how that integration might affect their futures. This topic was not covered in a meaningful way in the research study, but it is something I think about often.
The Human Element Remains Critical
Perhaps the most telling statistic is that 47% of the Pega agentic AI survey respondents said they believe AI lacks human intuition and emotional intelligence – the highest-ranked concern in the study. This highlights a crucial insight: while workers are ready to embrace AI for mechanical and repetitive tasks, they remain skeptical of its ability to handle work requiring nuanced human judgment or emotional understanding. That, too, is not a surprising result.
Future Outlook and Implementation Challenges
Despite these concerns, the long-term outlook appears cautiously optimistic, with 46% of survey respondents indicating they believe AI will positively impact their jobs over the next five years. Only 13% anticipate negative effects, suggesting that workers see AI as more of an opportunity than a threat. That is the kind of mindset you want in an organization: positivity about the change ahead and the opportunities it presents.
The path to improved adoption is clear but demanding. Survey respondents identified three key areas for improvement:
- Enhanced accuracy and reliability (42%)
- Better training programs (39%)
- Increased transparency in AI decision-making processes (33%)
Strategic Implications for Organizations
The findings of the Pega agentic AI research aren’t particularly surprising. Successful AI implementation requires a more nuanced approach than simple deployment. Organizations need to focus on what Pega’s CTO Don Schuerman describes as “integrating AI agents with actual workflows,” which means ensuring AI tools are not just performing tasks, but performing the right tasks in ways that meaningfully support human workers.
The research points to a clear conclusion: while agentic AI has crossed the threshold into mainstream business operations, its success will depend on organizations’ ability to address fundamental human concerns about reliability, transparency, and the preservation of human judgment in critical decisions. The challenge now lies not in convincing workers to use AI, but in creating AI systems worthy of their trust.
This article was originally published on LinkedIn.
Read more of my coverage here:
Google Gemini Pricing Shift Signals New Phase in Enterprise AI Competition
Grammarly’s New Enterprise ROI Tools Set New Standards for AI Impact Measurement