April 20, 2026 · 8 min read

73% of Nitoreans use AI tools daily, and 87% are positive about AI. But only 18% feel very confident, and the gaps between roles are wide. Here's what we learned from surveying 115 Nitoreans.
In early March 2026, we ran an AI tools and usage questionnaire across Nitor, covering attitudes, tool usage, impact, barriers, and learning preferences. We received 115 responses, an overall response rate of 44%. In the Technology unit, the response rate was 52%. In addition to developers and architects, the respondents also included 14 designers and 16 people in non-technical roles such as people ops, sales, finance, and marketing.
Adoption and attitudes
According to the survey, 73% of Nitoreans who do client work use AI tools daily. Another 16% use them several times a week. This means that for nine out of ten consultants, AI is now a regular part of how they deliver work for our customers.
Attitudes are overwhelmingly positive: 87% describe their attitude toward AI as positive, with 48% rating it very positive. 9% are neutral and 5% negative. The negative responses are a minority, but their concerns are worth taking seriously. Their worries centre on output quality, data privacy, and the ethics of AI use.
The attitude-confidence gap
While attitudes are very positive, confidence lags somewhat behind, especially among non-developers. In total, only 18% feel very confident. Among technical roles, 22% feel very confident in their ability to use AI effectively, and an additional 44% feel moderately confident. Only 14% of designers are very confident, and none of the non-technical respondents are, despite this group being the most enthusiastic about AI overall.
Confidence and usage reinforce each other: 81% of the very confident respondents use AI daily, compared to 48% among those who feel only somewhat or not very confident. Getting people to practise regularly is likely the fastest way to close the gap.
Tools
In client projects, Copilot leads (44%), followed by Claude (31%), ChatGPT (26%), and Claude Code (18%). Copilot's lead is largely client-driven, as customers provide the tool in two-thirds of cases. Claude usage is more mixed: customers provide it in about half the cases, and Nitor provides it for the rest, filling a gap where clients haven't provisioned it themselves.
When our people choose freely, for Nitor's own internal work, Claude is the most popular tool (40%), followed by ChatGPT (32%) and Copilot (26%).
Claude Code stands out as the tool most people want to use at their client but don't yet have. 24% of respondents named it as the tool they'd most want access to, and several framed it as the obvious next step in their workflow.
The survey listed "Copilot" as a single option. Based on open-ended responses, most people mean GitHub Copilot, but some in non-technical roles mean Microsoft 365 Copilot.
It's common to use multiple tools in combination. One respondent described the typical setup as "copilot in editor, coding agent in CLI, chatbot in browser."
Impact on work
We also asked how AI has actually changed people's client work.
67% say AI helps them spar and get feedback
63% say it helps solve problems
60% say it makes them faster in their work
31% say they can delegate tasks to AI entirely
7% say they haven't noticed any change
Based on these numbers, AI is an amplifier, not a replacement for doing the work yourself. The most commonly mentioned tasks are code generation and debugging, but people also use AI for error diagnosis, learning unfamiliar codebases and languages, and automating repetitive work.
Some responses describe significant workflow shifts. One developer described doing "almost every coding task primarily using an AI Agent," going through multiple rounds of planning, iteration, testing, and final human review, with manual code changes getting rarer. Another described it more concisely: "AI does most of the implementation work, I try to focus on steering and reviewing the outcome."
Not everyone has had a positive experience. One person noted their "daily work routine completely overhauled, not only for the better." Another flagged a risk that many organisations might be facing: "Made reviews painful. Sometimes, some colleagues use AI without checking at all what they commit." A third gave a concrete example: "I generated 1400 lines of Jest tests with AI, but then spent two days hand-adjusting what it generated."
What's holding people back
The top barriers to using AI more are lack of time (41%), data privacy and security concerns (34%), doubts about output quality (30%), and lack of training (23%). 29% say nothing prevents them, but that's almost exclusively technical roles. Every designer and every non-technical respondent reports at least one barrier.
The barriers differ significantly by role. Technical roles face the fewest obstacles. Their main barriers are output quality (33%), privacy (31%), and time (31%). Designers are heavily time-constrained (79%). Non-technical roles cite lack of training (50%) and privacy concerns (50%) most often.
Multiple respondents raised practical questions about data residency, NDA implications, and what's allowed when using AI tools in client work. As one put it, "It would be great to understand how much we can use the tools provided by Nitor when doing work for clients. Where is the line for working on prototypes without customer logos and using fake data?"
Since the survey, we've published updated internal guidelines on which data can be used with different AI tools, based on data classification and residency requirements. Customer restrictions always take precedence. When people are uncertain about what's allowed, they either stop using tools entirely or use them in grey areas, and neither is good. Getting the guidelines right matters.
Several respondents also described feeling overwhelmed by the pace of change: "I don't even know where to start." Another concern relates to context switching: "I get anxious when I spring up new tasks with agents too casually and then suddenly need to manage them all and keep doing context switches."
A few raised concerns that go beyond practical issues. One was direct: "I feel like using the AI is like the digital oil that we weren't supposed to use as a company. Nobody seems to be interested in the true cost of AI." Others pointed to the energy consumption of AI and the geopolitical dimensions of relying on American AI providers. These are minority voices, but they're asking important questions.
Learning needs differ by role
The survey revealed that the three role groups have almost entirely different learning agendas, both in content and in learning format.
Technical roles are relatively self-directed and want to go deeper: AI-assisted development (61%), security and responsible AI (49%), and building applications with LLMs (42%). The responses suggest an interest in moving beyond using AI tools toward building AI-powered solutions. In terms of format, self-paced training (59%) edges out live workshops (56%), and written guides rank high (51%). Most developers are comfortable figuring things out on their own.
Designers want tools and workflows that bridge design and development. 79% are interested in tools combining design and development, including Figma MCP and Claude Code. 71% are interested in innovation and concept creation AI tools, 64% are interested in AI agents, and 57% are interested in pure design AI tools like Figma Make and Lovable. The format preferences are strikingly different from developers: 93% want live workshops, and half want one-on-one coaching, nearly three times the rate of developers. Communities of practice (71%) and short videos (71%) also rank much higher than among technical roles.
Non-technical roles want the most practical help. 88% want AI for office tools, 75% want to learn AI agents, and 69% want better prompting skills. Like designers, they strongly prefer guided and social learning formats: workshops (81%), coaching (44%), and communities of practice (56%) all score well above the average.
Everyone at Nitor gets 10% Core time and five Academy days per year to develop their skills however they see fit. We're encouraging everyone to use that time to develop AI skills, with separate learning paths tailored to each group.
Conclusions
The survey focuses on one side of AI at Nitor: how Nitoreans use AI tools in their daily work. That's only one part of our AI strategy. Equally important is how we build AI-driven solutions: end-to-end work covering strategy, governance, data, platform engineering, and implementation of AI features.
Today, effective AI tool use is critical both for our developers and for everyone else in the organisation. We aim to be the best at using AI tools, but also the most responsible. That is why surveys like this are important: they tell us where we need to improve, and how.
The survey shows that almost everyone has adopted AI tools as part of their daily work. What's left is harder: adopting new workflows and roles, establishing quality practices to keep up with the speed enabled by AI, and making sure no one in the organisation is left behind. We're far from done yet.
The pace of change isn't slowing down. Internally, the conversation has moved well past "should we use AI?" to topics such as multi-agent orchestration, sandboxing autonomous agents, and choosing between competing frontier models for different tasks. If the pace continues, the next round of survey results will look very different from these.