MSP Ticket Resolution: What Manual vs AI-Assisted Actually Looks Like
By ResolvCmd
Most AI marketing for MSPs focuses on Tier 1 automation. Password resets, account unlocks, basic “have you tried restarting it” tickets. And that’s fine — those tools work, and they handle volume.
But Tier 1 isn’t where MSPs lose money. L1 tickets are quick, well-defined, and often already handled by self-service portals or automated workflows. The real cost is in L2 and L3 tickets — the ones that require documented procedures, client-specific configurations, and the kind of institutional knowledge that takes months to build.
This post compares manual and AI-assisted workflows for a real L2 ticket scenario, then looks at what the research says about the impact.
L1 is solved. The real problem is L2 and L3.
Tier 1 automation has been a solved problem for a while now. Chatbots handle password resets. Self-service portals cover account lockouts. RMM tools auto-remediate common alerts. Most MSPs have already reduced their L1 volume through some combination of these tools.
L2 and L3 tickets are different. These tickets involve procedures that vary by client, require referencing internal documentation, and depend on technician judgment. A backup failure at Client A requires different steps than a backup failure at Client B because they run different backup products with different retention policies and different escalation contacts at the vendor.
This is where technicians spend most of their time, and it’s where the gap between documented procedures and actual practice is widest.
A typical L2 ticket: manual workflow
Let’s walk through a real scenario. A monitoring alert fires: “Backup job failed — Acme Corp — Server DC01 — Veeam error code 4096.” Here’s what the manual resolution workflow looks like at most MSPs:
0:00 — Ticket arrives. The alert creates a ticket in the PSA automatically. It includes the client name, server, backup product, and error code.
0:00 to 0:03 — Technician reads and triages. The tech opens the ticket, reads the alert details, and starts thinking about what to do. They may or may not know this client’s backup environment.
0:03 to 0:08 — Documentation search. The tech opens IT Glue (or Hudu, or wherever docs live) in another tab. They search for “Acme Corp Veeam” or “backup failure” or “error 4096.” They scan through results, open a few articles, try to find the one that covers this specific scenario. Maybe they find it. Maybe they find something close but not quite right.
0:08 to 0:12 — Cross-referencing. The tech found a general Veeam troubleshooting article and a separate Acme Corp infrastructure document. They need to cross-reference the two — the general steps say “check the backup repository capacity” but the Acme-specific doc has the repository path and the capacity threshold that triggers their alerting. They flip between tabs.
0:12 to 0:15 — Escalation check (maybe). The tech isn’t sure if Acme Corp has a specific escalation requirement for backup failures. They check the client’s SLA document in IT Glue. Yes, backup failures require notification to the client’s IT director within 30 minutes. They need to handle that too.
0:15 to 0:25 — Resolution. The tech follows the steps, remediates the issue, verifies the backup runs successfully, and documents what they did.
0:25 to 0:30 — Documentation. The tech writes up the resolution in the ticket, closes it out, and moves on.
Total time: approximately 30 minutes. Of that, 12-15 minutes was spent finding and synthesizing information — not doing the actual technical work.
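The arithmetic behind that search tax compounds quickly across a team. A back-of-the-envelope sketch (the team size and ticket volume below are illustrative assumptions, not figures from this post):

```python
# Rough "search tax" estimate. Team size and ticket volume are
# hypothetical; the 12-15 minutes per ticket comes from the
# walkthrough above.

TECHS = 10
TICKETS_PER_TECH_PER_DAY = 8
SEARCH_MINUTES_PER_TICKET = (12 + 15) / 2  # midpoint of 12-15 min

daily_search_hours = (TECHS * TICKETS_PER_TECH_PER_DAY
                      * SEARCH_MINUTES_PER_TICKET / 60)
yearly_search_hours = daily_search_hours * 250  # ~250 working days

print(f"Search time per day:  {daily_search_hours:.0f} hours")
print(f"Search time per year: {yearly_search_hours:.0f} hours")
```

Even with conservative inputs, a ten-tech team loses the equivalent of two full-time technicians to searching.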
[Workflow comparison diagram]

Manual workflow: Triage → Search docs → Cross-ref → Escalation? → Execute → Document (12-15 min of the ~30-minute total spent searching, not resolving)

With ResolvCmd: Ticket → Resolution surfaced → Team confirms → Done (resolution surfaced in < 5 seconds)
The same ticket: AI-assisted workflow
Same ticket. Same technician. But this time, a resolution engine has already processed the ticket before the tech opens it.
0:00 — Ticket arrives. Same as before. The monitoring alert creates a ticket with client name, server, backup product, and error code.
0:00 — Resolution engine processes the ticket. Within seconds of ticket creation, the resolution engine reads the alert, identifies the client (Acme Corp), the system (Veeam on DC01), and the error type. It searches the connected documentation platform for relevant procedures and produces a structured resolution.
0:01 — Technician opens the ticket. The resolution is already there as an internal note:
Suggested Resolution — Veeam Backup Failure (Error 4096) — Acme Corp DC01
- Check backup repository capacity at \\NAS01\VeeamRepo — threshold is 85% per Acme Corp backup policy (Source: IT Glue > Acme Corp > Backup Configuration)
- If repository is full, archive completed backup chains older than 30 days per retention policy (Source: IT Glue > Acme Corp > Backup Retention Policy)
- Check Veeam service status on DC01 — restart VeeamBackupSvc if stopped (Source: IT Glue > Veeam Troubleshooting SOP)
- Re-run the failed backup job manually and verify completion (Source: IT Glue > Veeam Troubleshooting SOP)
- Client notification required: Backup failures require notification to IT Director (Jane Smith, jane@acme.com) within 30 minutes per SLA (Source: IT Glue > Acme Corp > SLA Agreement)
0:01 to 0:12 — Resolution. The tech follows the steps. No searching, no cross-referencing, no wondering about escalation requirements. The client-specific details — repository paths, retention policies, notification contacts — are already in the resolution.
0:12 to 0:15 — Documentation. The tech notes what they did and closes the ticket.
Total time: approximately 15 minutes. The 12-15 minutes of search, cross-referencing, and synthesis are gone. The technician spent their time on the actual technical work.
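Conceptually, the resolution-engine step above is a retrieve-and-assemble loop over your existing documentation rather than free-form generation. This is a minimal sketch of that pattern; the alert fields, `DocHit` type, and note format are illustrative assumptions, not ResolvCmd's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DocHit:
    title: str        # citation path, e.g. "IT Glue > Acme Corp > Backup Configuration"
    steps: List[str]  # resolution steps extracted from the document

def build_resolution_note(alert: dict,
                          search_docs: Callable[..., List[DocHit]]) -> str:
    """Assemble a source-cited resolution note from existing documentation."""
    # 1. Identify client, system, and error type from the structured alert.
    query = f"{alert['product']} error {alert['error_code']}"

    # 2. Retrieve only this client's procedures (multi-tenant matching).
    hits = search_docs(client=alert["client"], query=query)

    # 3. Emit each step with its citation so the technician can verify the source.
    lines = [f"Suggested Resolution - {alert['product']} "
             f"(Error {alert['error_code']}) - {alert['client']} {alert['host']}"]
    for hit in hits:
        for step in hit.steps:
            lines.append(f"- {step} (Source: {hit.title})")
    return "\n".join(lines)

# Demo with a stubbed documentation search:
def stub_search(client, query):
    return [DocHit("IT Glue > Acme Corp > Backup Configuration",
                   ["Check backup repository capacity"])]

note = build_resolution_note(
    {"client": "Acme Corp", "host": "DC01", "product": "Veeam", "error_code": 4096},
    stub_search,
)
print(note)
```

The key design point is step 3: every line in the note traces back to a named document, which is what separates a verifiable resolution from a model's guess.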
The numbers: what the research says
The manual vs AI-assisted workflow comparison isn’t just anecdotal. Several research studies quantify the impact of information retrieval overhead on knowledge worker productivity — and the gains from reducing it.
40% reduction in resolution time is the figure from DeskDay’s research on AI-assisted ticket resolution in MSP environments, corroborated by Rev.io’s deployment data. The top-performing implementations reported up to 82% reduction in resolution time, though these were typically L1-heavy environments. For L2 and L3 tickets, the 40% figure is more realistic because the technical work itself can’t be automated — only the information retrieval.
13.8% more inquiries handled per hour. Research on AI-assisted support workers found that technicians using AI tools resolved 13.8% more tickets per hour on average. The productivity gain was largest for less experienced technicians, who benefited most from having institutional knowledge delivered automatically rather than having to find it themselves.
1.8 hours per day searching for information. McKinsey Global Institute research found that knowledge workers spend an average of 1.8 hours per day searching for and gathering information. For MSP technicians, this translates directly into documentation lookups, cross-referencing client configurations, and chasing down procedures across multiple systems.
$2.5 million per year per 1,000 workers wasted on failed searches. IDC research quantified the cost of information retrieval failure — instances where workers search for information, don’t find it, and either recreate it, work without it, or escalate to someone who has it. For a 20-person MSP, that scales to roughly $50,000 per year in wasted effort on failed documentation searches alone.
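The scaling from the IDC figure to a 20-person shop is straight division: $2.5 million across 1,000 workers is $2,500 per worker per year.

```python
# Scale the IDC failed-search cost figure down to MSP team size.
idc_cost_per_1000_workers = 2_500_000
cost_per_worker = idc_cost_per_1000_workers / 1000  # $2,500 per worker per year

msp_team_size = 20
annual_waste = cost_per_worker * msp_team_size
print(f"${annual_waste:,.0f} per year")  # $50,000 per year
```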
23 minutes to recover from an interruption. Gloria Mark’s research at UC Irvine found that it takes an average of 23 minutes and 15 seconds to return to a task after an interruption. Every time a technician leaves their ticket to search for documentation, they’re creating a self-imposed interruption. If the search takes them down a rabbit hole — checking the wrong article, asking a colleague, waiting for a response — the recovery time compounds.
These numbers paint a consistent picture. The bottleneck in ticket resolution isn’t the technical work. It’s the information retrieval. Any tool that reduces the search-find-synthesize cycle delivers measurable productivity gains.
Three categories of AI tools for ticket resolution
Not all AI tools for ticket resolution work the same way. The three main approaches are chatbots for end-user deflection, copilots for technician assistance, and resolution engines for documentation-driven ticket resolution. Understanding the categories helps you evaluate what you actually need.
Chatbots and virtual agents. These handle end-user interactions. They sit in front of your helpdesk and try to resolve issues through conversation before creating a ticket. They’re good for L1 deflection — password resets, FAQs, basic troubleshooting. They don’t help your technicians resolve L2 and L3 tickets faster. Examples: Robin by Atera, Moveworks, Aisera.
Chatbots have matured to the point where they can handle structured workflows reliably. A password reset chatbot that verifies identity, triggers the reset, and confirms completion is a well-understood pattern. But the moment the issue requires judgment, client-specific context, or a documented procedure with multiple decision points, chatbots hit their ceiling. They’re designed for conversations, not for executing multi-step technical procedures.
Copilots and AI assistants. These sit inside your PSA or ticketing system and help technicians draft responses, summarize tickets, categorize incoming requests, and suggest next steps. They reduce administrative overhead and help with the “writing” part of ticket resolution. They typically generate suggestions from a general AI model, sometimes supplemented by your internal knowledge base. Examples: ConnectWise Sidekick, Datto Cooper, SysAid Copilot.
Copilots are valuable for the administrative side of ticket resolution — the 5-10 minutes per ticket spent writing updates, categorizing, and summarizing. What they generally don’t do is connect to your external documentation platform and deliver your documented procedures as resolution steps. Their suggestions come from the AI model’s training data and your ticket history, not from the SOPs your team wrote in IT Glue or Confluence.
Resolution engines. These connect your documentation platform to your ticketing system and deliver structured, source-cited resolutions inside tickets. They don’t generate answers from a general model — they find answers in your existing documentation and present them as actionable steps. The focus is on accuracy and traceability rather than fluency. Examples: ResolvCmd.
Resolution engines are the newest category and the least understood. They don’t automate ticket resolution — they automate information retrieval. The technician still does the technical work. But the 12-15 minutes spent searching for documentation, cross-referencing client configurations, and synthesizing multiple articles into a resolution plan gets compressed to seconds.
The categories aren’t mutually exclusive. You might use a chatbot for L1 deflection, a copilot for ticket administration, and a resolution engine for L2/L3 documentation delivery. But understanding which problem each category solves prevents you from buying a chatbot and expecting it to help with Exchange hybrid migration runbooks.
What to evaluate before buying
If you’re evaluating AI tools for ticket resolution, these questions will separate tools that look good in a demo from tools that actually work in your environment:
Where does the knowledge come from? Tools that use your existing documentation (IT Glue, Hudu, Confluence) deliver answers your team wrote and trusts. Tools that generate from a general model might be fluent but miss client-specific details. For MSPs, client-specific accuracy matters more than general correctness.
Does it cite its sources? A resolution that says “restart the print spooler” is a guess. A resolution that says “restart the print spooler (Source: IT Glue > Acme Corp > Printer SOP, updated 2026-03-15)” is verifiable. Source citations let technicians trust and verify, which is the difference between a tool they use and a tool they ignore.
What ticket types does it actually help with? L1 automation and L2/L3 documentation delivery are different problems. Make sure you’re buying for the problem you actually have. If your L1 volume is already low, a chatbot won’t move the needle.
How does it handle multi-tenant documentation? MSPs serve multiple clients with different procedures. The tool needs to match documentation to the right client, not just the right issue type. A generic “backup troubleshooting” article isn’t helpful when you need Client XYZ’s specific Veeam configuration and escalation contacts.
What’s the pricing model? Per-resolution pricing creates unpredictable costs. Per-technician add-ons compound on top of existing per-technician PSA fees. Flat pricing is the most predictable. Model out the 12-month cost at your expected ticket volume and team size.
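One way to act on that last question is to model all three pricing structures over 12 months. A sketch (every price and volume below is a made-up placeholder; substitute quotes from your actual vendors):

```python
# Illustrative 12-month cost model for three common pricing structures.
# All prices and volumes are hypothetical assumptions.

TECHS = 10
TICKETS_PER_MONTH = 800     # L2/L3 tickets the tool would touch
RESOLUTION_RATE = 0.60      # fraction of tickets the tool assists with

per_resolution_price = 1.50  # $ per AI-assisted resolution
per_tech_addon = 49.00       # $ per technician per month
flat_monthly = 499.00        # $ flat per month

annual_per_resolution = (TICKETS_PER_MONTH * RESOLUTION_RATE
                         * per_resolution_price * 12)
annual_per_tech = TECHS * per_tech_addon * 12
annual_flat = flat_monthly * 12

for label, cost in [("Per-resolution", annual_per_resolution),
                    ("Per-technician", annual_per_tech),
                    ("Flat", annual_flat)]:
    print(f"{label:>15}: ${cost:,.0f}/yr")
```

Note how per-resolution pricing is the only line that moves with ticket volume: if your volume doubles, so does the bill.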
The bottom line
The difference between manual and AI-assisted ticket resolution isn’t about replacing technicians. It’s about eliminating the search tax — the 12-15 minutes per ticket that technicians spend finding information instead of applying it.
For L1 tickets, automation and chatbots have largely solved this. For L2 and L3 tickets, the answer isn’t automation — it’s information delivery. Get the right documentation into the right ticket at the right time, and your technicians spend their time on technical work instead of documentation archaeology.
The research is clear: 40% faster resolution, 13.8% more tickets handled per hour, and millions in annual savings from reducing failed information searches. The question isn’t whether to invest in AI-assisted resolution. It’s which approach matches how your MSP actually works. If your team is still spending time searching for documentation, see what that search cost really looks like.
Ready to turn your documentation into instant resolutions?
Start Free Trial