A conversation with Eryn Peters, Co-founder of the AI Maturity Index
AI maturity is often discussed as a technology issue. In practice, it shows up in how people think, behave, and make decisions at work.
When I spoke with Eryn Peters on the HR Dialogues podcast, one idea kept coming up. Organizations are focusing on tools. But the real question is how people are actually engaging with AI, what holds them back, and whether adoption is creating value or just activity.
That is also the problem Eryn is working to solve through the AI Maturity Index, a research platform designed to understand how knowledge workers are really interacting with AI.
Conversation takeaways
- Culture and geography shape AI adoption patterns more than age
- Restricting AI tools often leads to shadow AI and reduced visibility
- Many organizations are not measuring AI usage or business impact
- Change and adoption are becoming part of every role
- Depth of expertise is becoming more valuable than broad knowledge
AI maturity goes beyond skills
I have seen many AI readiness frameworks treat maturity as a proxy for skill. Can someone write a prompt? Can they use Copilot? Do they understand what a large language model is?
So I asked Eryn what the AI Maturity Index actually measures.
Her answer pointed to a broader picture. As she explained, the Index captures “a fully well-rounded, holistic view of how people are interacting with AI,” including ethics awareness and whether people understand enough about these systems to guide their use.
That perspective becomes more important once you consider how people really experience AI. It is rarely consistent. As Eryn put it, “We evaluate both positive and negative experiences with AI, because most people have both, to be honest.”
That mix of confidence and hesitation is what shapes behavior. For HR leaders, this changes the focus. AI adoption is not just about capability building. It is about trust, judgment, and how people decide when and how to use these tools.

Culture, not age, is shaping AI adoption patterns
One of the more unexpected insights from the AI Maturity Index research is that age does not appear to be the strongest factor influencing AI maturity. Culture plays a larger role.
Eryn pointed to clear regional patterns. “North America is experimenting a lot,” she said, while the psychological response remains “kind of middle of the road.” People are using AI, but they are still forming their opinions about it.
The contrast becomes more pronounced in the DACH region. There, the Index shows “some of the lowest experimentation and usage, with extremely negative psychological impact.” This is not just lower adoption. It reflects a different starting point in how AI is perceived.
She linked this to the broader cultural context. In Germany, for example, “the cultural perception is: governance first, experimentation later.” In the U.S., the tendency often moves in the opposite direction, “innovate now and govern maybe later.”
That difference creates real tension inside global organizations. In my experience, it often shows up as a split in momentum. One part of the business pushes for speed. Another emphasizes control. For HR, this means AI adoption is not a single rollout. It requires navigating different attitudes toward risk, privacy, and experimentation across regions.
Restriction does not stop AI use; it hides it
When organizations try to manage AI risk, the instinct is often to restrict access. I asked Eryn what actually happens in those environments.
Her response pointed to a disconnect between policy and reality.
As she explained, “The chances are the three approved tools might not really be hitting the mark.” In highly controlled environments, usage rarely stops. People simply “go onto their phone, use ChatGPT anyway, and now you have shadow AI.”
The issue is not that usage stops. It becomes invisible.
That creates a different kind of risk, one that is harder to manage because it sits outside the organization’s line of sight. At the same time, Eryn did not suggest removing all controls. A completely open approach creates its own problems.
Instead, she pointed to a structured middle ground: cross-functional AI councils that include HR, legal, IT, and data teams. The goal is not to block progress, but to enable safe and practical use. For HR, this means designing guardrails that reflect how work actually happens.
The productivity story is still ahead of the data
One of the most striking parts of our conversation was the gap between perception and measurement. Employees may believe AI is making them more productive. But the data tells a more complex story.
“We’re seeing research where people are claiming they’re more productive,” Eryn said, “but when they’re actually measured on their productivity, they’re much less productive.”
That gap matters because it shows how easily perception can be mistaken for impact. At the same time, many organizations are not measuring much at all. “Most companies aren’t even measuring usage, let alone productivity gains,” she noted, and even fewer track outcomes like innovation, value creation, or risk reduction.
This creates a blind spot. Leaders believe progress is happening, but they lack the data to confirm it. Eryn described the current phase clearly. “This is the slowest AI will ever be.” Yet despite that, “most organizations are still in the shotgun phase: try a bunch of tools and see what sticks.”
For HR and business leaders, the implication is clear. If you are not measuring adoption and outcomes, you are not managing AI implementation.
HR’s role is to enable and challenge
As we turned to HR’s role, one point stood out. HR has a responsibility that cannot be delegated.
“I don’t want to see AI leveraged in ways that create barriers to opportunity,” Eryn said, particularly in hiring.
She pointed to a pattern that raises concern. Applications are increasing, but so are referrals. “We’re seeing applications up 30% year-over-year, and hiring by referral up 30% year-over-year.”
That suggests more people are using AI to apply, while organizations are also using AI earlier in recruitment, yet “we’re still not getting to the right people.” The result is a fallback to referrals, which “reduces diversity and creates more homogeneous teams.” This is where HR plays a critical role.
That responsibility extends beyond hiring. It applies to performance, development, and access to opportunity across the employee lifecycle. HR’s role is not just to support AI adoption. It is to shape how it is used.
Change management is now embedded in the work
Another shift we discussed is how organizations approach change. Traditionally, change management was handled by specialists brought in after implementation. That model is fading.
“Change and adoption are part of everybody’s job description now,” Eryn said. Adoption is no longer a phase. It is part of how work gets delivered.
She emphasized that real change requires broad ownership. “It has to be permeated throughout the organization, with champions at every level.”
For HR, this changes the role of change management. It is no longer a separate activity. It is built into how teams design, deliver, and adopt new tools.
Why depth matters more in the age of AI
As AI makes knowledge easier to access, it raises a key question. What capabilities still differentiate people? Eryn’s answer was clear. Depth.
“We’re entering the era of the polymath, not the generalist.”
A generalist knows a little about many things. AI makes that easier. A polymath brings depth across multiple domains and the ability to evaluate outputs critically. That evaluation is the real challenge.
“How are you going to accurately evaluate the outputs if you don’t have enough context and experience?” she asked.
That is why organizations still need people with “specialized knowledge and lived experience” to both prompt effectively and assess results.
Using AI well, in other words, comes down to depth of understanding. For HR, this has direct implications for skills and workforce development. It is not just about broad capability. It is about building expertise that enables judgment.
Final thought: keep it human
At the end of our conversation, I asked Eryn what HR leaders should focus on now. Her answer was simple.
“You don’t have to overcomplicate it. Just remain inherently human.”
AI does not always need to prove value through productivity alone. If it reduces stress or improves how people work, that matters too. She suggested a simple way to stay grounded: keep two prompts in view, “Solve this problem” and “Should a human do this instead?”
Because in the end, “AI is a human movement first, and a technology movement second.”