- AI
- DevEx
- Product Development
- Strategy
AI in Kentico in numbers
Real adoption metrics, tooling mix, and survey insights on AI usage inside Kentico R&D.

Originally published in a different form on LinkedIn. Lightly edited for clarity and formatting.
It’s been a few months since I joined Kentico, so here’s a follow‑up on our AI adoption — in numbers. AI hype is still everywhere, and many companies talk about adopting AI tools but rarely share actual data.
I’ll cover the tools R&D uses, usage data, plus takeaways from our internal AI feedback survey from late September.
Our agentic AI tools
Our first AI tool was GitHub Copilot. Even before agent mode, a coding assistant in your editor was clearly valuable. Since then we’ve experimented with and officially adopted Copilot, Cursor, Codex, and Claude Code.
I’m not calling them “dev” tools. Developers benefit, but these agentic AI tools (semi‑autonomous assistants) support documentation, testing, learning, and more. Everyone in R&D has access; 100% use at least one.
Here are some numbers:
| Metric | Copilot | Cursor | Claude | Codex |
|---|---|---|---|---|
| Users (%) | 40% | 61% | 6% | 2% |
| WAU* | 81% | 70% | N/A | N/A |
| Chat/Agent Mode (%) | 77% | 97% | N/A | N/A |
| Tab Completions (%) | 59% | 83% | N/A | N/A |
| Top Model | N/A | Tie between Claude 4.5 Sonnet Thinking and Default | N/A | N/A |
*WAU = weekly active users (percentage of licensed users active in the last 7 days)
Scope: these are R&D metrics only (people with access to our AI tooling). The user percentages sum to more than 100% because many people use more than one tool.
You might have already noticed some oddities:
- Tool access is above 100%. Each person can have multiple tools.
- Codex has no metrics.
- Claude Code doesn’t offer metrics in our current plan.
- None of the tools have fully replaced Visual Studio.
Let’s take a closer look at each tool.
GitHub Copilot
GitHub Copilot is effectively “free” when you weigh cost against value. It’s like picking Mario in Mario Kart: a solid, reliable choice that gets the job done. The analytics are decent, though not very granular. Oddity: despite being a Microsoft product, its Azure DevOps (ADO) integration is clunky. That’s especially frustrating once you see how much extra value Copilot unlocks if you’re using GitHub instead.
Cursor
Cursor’s tab completions are unmatched, and the Tab feature itself is powerful (see their docs). The analytics are granular, letting you understand usage patterns and spending, and v2 is even better. Team rules are implemented inconsistently: only admins can change them, there’s no tracking, and rules you mark as optional still activate by default. Cursor costs more than Copilot but offers strong value. One shame: still no CLI for Windows.
Claude Code
Claude Code is the outlier. We ran an experiment followed by a limited rollout. With the release of OpenAI’s Codex, users who purely want a CLI tool might get more value there. That said, Claude Code’s subagents enable multi‑step or specialized helper workflows (e.g., one agent reviewing code while another drafts docs), making it a good introduction to multi‑agent patterns. It supports commands, so there’s compatibility with Cursor commands, but still no support for AGENTS.md. The cost of a team plan alone is high, especially since it complements rather than replaces our other tooling. There are many opportunities for non‑dev uses, but it’s less user‑friendly.
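For illustration, a Claude Code subagent is just a Markdown file with YAML frontmatter under `.claude/agents/`. The agent name, description, and prompt below are hypothetical, and the exact frontmatter fields are worth double‑checking against Anthropic’s docs:

```markdown
---
name: code-reviewer
description: Reviews recent changes for bugs and style issues. Use after edits.
tools: Read, Grep, Glob
---

You are a code reviewer. Inspect the changed files, flag likely bugs,
and summarize your findings as a short bullet list.
```

The body after the frontmatter becomes the subagent’s system prompt, which is what makes patterns like “one agent reviews while another drafts docs” easy to set up.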
Codex
Codex is the odd one. It’s powerful, with good results and price, but visibility of spending and limits is extremely poor (no analytics). Since it includes ChatGPT access, there’s value for non‑developers. Reported internal adoption is low, but many more users likely have ChatGPT access. Remote agents, mobile app integration, and async/remote operations are excellent, but not something we can use with ADO.
Our internal AI feedback survey
I was skeptical about surveys, but they’re still the most universal way to gauge AI adoption. So, we ran our first quarterly AI feedback survey.
Key numbers from the first run:
| Metric | Value |
|---|---|
| Response rate | ~70% |
| Median time saved (dev) | 8h/month |
| Median time saved (non-dev) | 20h/month |
| AI is making my work faster | 4.3 / 5 |
| AI is helping me produce more work | 4.03 / 5 |
| AI is helping me produce higher quality work | 3.36 / 5 |
The sentiment scores are positive; we consider anything above 3.0 good. AI is helping with speed and output more than with quality. Time saved varies a lot, with outliers at 30 and even 80 hours per month. Many respondents said they aren’t saving much time yet but expect greater gains as they master the tools.
Survey notes: R&D only; time saved is self‑reported.
Our Top Use Cases
What people are actually using AI for aligns with what the rest of the industry is reporting. Code generation is up there, but it’s not the top use case. Most AI usage is around supporting activities: documentation, understanding, improving communication.
Developers:
- Documentation
- Testing
- Code generation
- Refactoring
- Code review
Non-Developers:
- Learning and understanding
- Writing and communication
- Code generation
- Research and analysis
Two different focuses
Beyond individual productivity and survey feedback, our numbers also reveal a split focus. We need general‑use tooling for business‑as‑usual tasks, but we also need tooling that enables teams building AI features in Xperience by Kentico, like our native AI assistant AIRA. While GitHub Copilot is an amazing general‑use tool, teams pick what works best for their context.
This is why cross‑tool compatibility and analytics are so important to us. We want to use whatever fits the use case without starting from scratch or excluding someone because of their tool choice. Initially, our codebase was full of tool‑specific config files for VS Code and Cursor mixed with general‑use context files. We’re gradually improving this setup. Understanding usage patterns, formalizing workflows, and sharing assets between tools is crucial.
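As a sketch of where this is heading, the goal is one shared source of context with thin tool‑specific entry points. The filenames follow each tool’s own conventions as we understand them; treat the exact paths as assumptions to verify against each tool’s docs:

```
repo/
├── AGENTS.md                    # shared context for tools that support it (e.g., Codex)
├── CLAUDE.md                    # Claude Code entry point
├── .github/
│   └── copilot-instructions.md  # GitHub Copilot instructions
└── .cursor/
    └── rules/
        └── project.mdc          # Cursor rules
```

Ideally the tool‑specific files stay short and point at the shared content, so switching tools doesn’t mean rewriting context from scratch.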
Next steps and getting more numbers
We’re evaluating analytics tools. I’ve written a basic internal tool (placeholder LinkedIn link) to grab numbers, but we need something more robust. Data collection is easy; meaningful analysis takes time.
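For context, the internal tool boils down to calling each vendor’s metrics endpoint and aggregating the daily numbers. A minimal sketch of the aggregation step, assuming the rough response shape of GitHub’s Copilot metrics API (`GET /orgs/{org}/copilot/metrics`); the sample payload is made up for illustration, not our real data:

```python
# Hypothetical sketch: summarize daily Copilot metrics.
# Field names follow the public Copilot metrics API as documented;
# the sample below is illustrative only.
from statistics import mean

def summarize(days):
    """Average active/engaged users and the engagement rate over a window."""
    active = [d["total_active_users"] for d in days]
    engaged = [d["total_engaged_users"] for d in days]
    return {
        "avg_active": mean(active),
        "avg_engaged": mean(engaged),
        "engagement_rate": round(mean(engaged) / mean(active), 2),
    }

sample = [  # illustrative data
    {"date": "2025-09-22", "total_active_users": 40, "total_engaged_users": 32},
    {"date": "2025-09-23", "total_active_users": 44, "total_engaged_users": 33},
]

print(summarize(sample))
```

The real tool also has to handle authentication and pagination; this only shows why the collection part is easy while the meaningful analysis is where the time goes.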
We truly appreciate articles like the one by The Pragmatic Engineer on how companies measure AI. We’re keeping an eye on how that space develops and comparing ourselves against the reported numbers.
We’re still focusing on knowledge sharing and supporting the whole org to adopt AI tooling. We have independent power users building AI features, and we need workflows others can easily replicate. We need documented ways for people to test, build further, and produce real outcomes.
New tools, and AI features in existing tools, are being released faster than we can review them, so our experimentation pipeline is quite crowded. It’s something we have to reevaluate often.