Why ChatGPT Doesn't Know Your SOPs (And Why That's an Expensive Problem for MSPs)
MSP Operations

By ResolvCmd

Your technician gets a ticket: “VPN not connecting for user at Denver office.” They paste it into ChatGPT. The response is fast, detailed, and technically correct. Check the client’s VPN gateway settings. Verify the user’s credentials. Restart the VPN service. Run a traceroute.

None of that is wrong. All of it is useless.

Because this client uses a split-tunnel configuration with a specific firewall rule that blocks reconnection attempts after three failures. Your team figured that out eight months ago and documented the workaround in IT Glue. ChatGPT doesn’t know that. It never will.

ChatGPT, Claude, Gemini — these are genuinely powerful tools. This isn’t an argument that AI doesn’t work. It’s an observation that generic AI doesn’t work for the tickets that actually matter to your MSP. The ones that require knowledge about your clients, your configurations, and your procedures.

L1 automation is solved. The real problem is the L2 and L3 tickets that depend on your internal documentation — and no generic AI has access to that. 42% of businesses scrapped their AI projects in 2025. Not because the AI was bad. Because it was deployed without the context that makes it useful.

L1 is solved. The hard tickets are the problem.

L1 automation is table stakes at this point. Zendesk AI handles “where’s my invoice.” Atera’s Robin resets passwords through Slack. ConnectWise zofiQ auto-triages and routes tickets before a dispatcher touches them. If your only AI goal is deflecting simple requests, the market has options.

But L1 tickets aren’t what keep your senior technicians busy until 7pm. The tickets that eat time and blow SLAs are the L2 and L3 issues: the Exchange hybrid migration that broke after a Tuesday patch. The SQL cluster alert that requires a 14-step runbook your team wrote last year. The new hire at Client XYZ who needs access to six systems, each with a different provisioning process that your onboarding SOP covers in detail.

These tickets require knowledge that doesn’t exist on the public internet. They require your documentation.

L1 — Simple requests: password resets, "where's my invoice", basic troubleshooting. Generic AI stops here.

L2 — Procedural issues: Exchange hybrid migration, backup failures, new user provisioning. ResolvCmd.

L3 — Complex multi-step: 14-step runbooks, client-specific configurations, cross-system procedures. ResolvCmd.

These tickets require your documentation, not the public internet.

Generic AI is excellent at public-domain tasks. Need a PowerShell script to disable inactive AD accounts older than 90 days? ChatGPT writes a solid one. Need an explanation of a cryptic Windows event log error? Claude will break it down clearly. Need a template for a client-facing outage notification? Any LLM handles that.
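To be fair to the generic tools, the AD cleanup script ChatGPT produces for that request really is solid. A minimal sketch of what that looks like (assumes the RSAT ActiveDirectory module is installed; the 90-day threshold matches the example above, and -WhatIf is left on deliberately so nothing is disabled until a human reviews the list):

```powershell
# Find enabled user accounts with no logon in the last 90 days
# and disable them. Requires the ActiveDirectory (RSAT) module.
Import-Module ActiveDirectory

Search-ADAccount -AccountInactive -TimeSpan ([TimeSpan]::FromDays(90)) -UsersOnly |
    Where-Object { $_.Enabled } |
    Disable-ADAccount -WhatIf   # remove -WhatIf after reviewing the output
```

This is exactly the kind of public-domain task where generic AI shines: every parameter here is documented on the open internet.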

But ask ChatGPT “What’s the VPN configuration for Client ABC’s Denver office?” and you get a generic troubleshooting guide. Ask it “What’s our escalation path when monitoring fires for Client XYZ’s SQL cluster?” and you get a textbook ITIL framework. Ask it “What’s our SOP for onboarding a new client onto our RMM platform?” and you get a blog post, not your 47-step internal procedure.

The L1 tools deflect simple requests. Nobody is solving the tickets that require your team’s accumulated knowledge. And those are the tickets where resolution time, consistency, and cost per ticket actually matter.

The tribal knowledge problem MSPs won’t talk about

Here’s the uncomfortable reality: 80% of processes in most organizations are completely undocumented. The real knowledge lives in your senior technicians’ heads, buried in Slack threads from six months ago, scattered across ticket notes that nobody will ever search for, and in the muscle memory of the tech who’s been handling Client ABC since 2021.

This isn’t a failure of discipline. It’s a structural reality of how MSPs operate. Small teams. High ticket volume. Constant context-switching between client environments. Thirty open tickets and a P1 that just came in. Nobody stops to write documentation when they’re fighting fires.

So the knowledge accumulates informally. Your senior tech knows that “the printer thing” means the HP LaserJet on the third floor at Client XYZ that drops off the network every time the print spooler updates, and the fix is to re-map it using a deployment script on a specific share drive because their GPO doesn’t handle reconnection. That’s not in IT Glue. It’s in his head.

When AI enters this environment, it hits a wall. ChatGPT can’t read your Slack history. It doesn’t know about the deployment script. It doesn’t know that Client XYZ has a non-standard GPO configuration. It doesn’t know that your team tried three other approaches before finding the one that works.

95% of company-wide AI launches in 2025 failed to produce intended results. The common thread wasn’t bad models or poor prompting. It was the absence of structured, accessible organizational knowledge for the AI to work with.

Why “just use ChatGPT” makes your team faster at the wrong thing

The danger with generic AI isn’t that it gives obviously wrong answers. It’s that it gives confident, plausible answers that are almost right.

“Almost right” in IT operations is worse than “I don’t know.”

A technician asks ChatGPT how to resolve a DNS resolution failure for a client. ChatGPT gives a textbook answer: flush the DNS cache, check the DNS server settings, verify the forwarder configuration, test with nslookup. Solid troubleshooting steps.

But your SOP for this client specifies a different DNS provider than the default. The forwarder configuration uses a conditional forwarder for their internal domain that has to point to their on-prem DC, not a public resolver. And the verification step involves checking against the client’s firewall rules because they run DNS filtering that blocks external resolution for certain record types.

The technician follows ChatGPT’s generic steps. Changes the forwarder to 8.8.8.8. Internal name resolution breaks. Fifteen users can’t reach the intranet. Now you have a P1 instead of a routine DNS ticket.
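What the client-specific SOP step actually looks like isn't exotic; it's just not on the public internet. A hedged sketch, with hypothetical values (the zone name "client-internal.local" and the DC address 10.10.0.5 are placeholders, not real client data):

```powershell
# Per the client SOP: internal names resolve via a conditional forwarder
# pointing at the on-prem DC, never a public resolver like 8.8.8.8.
# Zone name and DC address below are placeholder values.
Add-DnsServerConditionalForwarderZone -Name "client-internal.local" `
    -MasterServers 10.10.0.5 -ReplicationScope "Forest"

# Verify resolution against the DC directly before closing the ticket.
Resolve-DnsName -Name "intranet.client-internal.local" -Server 10.10.0.5
```

Two short commands, but only your documentation knows the zone name, the DC address, and the firewall caveat that goes with them.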

Speed works in both directions. If AI makes your team faster at applying the wrong procedure, you’ve accelerated your way into an SLA breach.

MSPs run 8-15 tools on average. Your documentation lives in IT Glue. Your tickets are in ConnectWise. Your quick procedures are in Confluence. Your "hey, how do I handle this" answers are in Slack. Your client-specific configurations are in Hudu. ChatGPT sees none of it. It operates in a vacuum, filling the gaps with general knowledge that may or may not match your environment. A resolution engine, by contrast, connects to these platforms directly and uses your actual documentation as the source.

As Fortune recently put it, AI can’t remember what your company learned the hard way. For MSPs, what your team learned the hard way about each client environment is exactly the knowledge that matters most.

What MSPs actually need from AI

The answer isn’t a smarter general model. GPT-5 won’t know your client’s VPN configuration any better than GPT-4 did. The answer is AI that’s connected to your specific knowledge and integrated into your specific workflow.

Three requirements that generic AI doesn’t meet:

Access to internal documentation. Your SOPs, runbooks, client-specific procedures, network diagrams, and escalation paths. Not the public internet. Not a model’s training data from 2024. The actual documents your team wrote and maintains, stored in IT Glue, Hudu, Confluence, or Google Drive.

Workflow integration. Resolutions delivered inside the ticket, not in a separate browser tab that your technician has to copy-paste from. If the AI output doesn’t appear where work happens, it creates friction instead of removing it. Your team will default to Slacking the senior tech because it’s faster than switching to ChatGPT, crafting a prompt, and translating the output back into actionable steps.

Source verification. Every step should trace back to a specific document. “Step 3 comes from your VPN SOP for Client ABC, last updated January 2026.” Not “based on my training data.” When a resolution is wrong, your team needs to know where to fix it. When a resolution is right, they need to trust it without second-guessing.

The distinction is “AI that knows things” versus “AI that knows your things.” Generic models know everything in general and nothing in particular about your business. Your MSP doesn’t need more general knowledge. It needs its own knowledge activated at the right moment.

If your documentation already exists in IT Glue, Hudu, or Confluence, the problem isn’t creating knowledge. It’s surfacing it inside the ticket when your technician needs it.

Connecting documentation to your ticketing workflow

The approach that actually solves this is a resolution engine: a system that reads incoming tickets, matches them against your internal documentation, and delivers structured, step-by-step resolutions inside the ticket with source links back to the original documents.

This isn’t “AI search” where your technician types a question into another search bar. The system initiates: when a ticket arrives, it matches the ticket automatically, and the resolution is already waiting by the time the technician opens it. No prompting. No context-switching. No crafting the right question.

This is the problem we built ResolvCmd to solve. It connects your documentation platforms to your ticketing system and delivers source-linked resolutions inside the ticket.

The economics are straightforward. If your senior technician spends two hours a day answering questions that are already documented somewhere, and a resolution engine reduces that by even half, you’ve recovered five or more hours per week of your most expensive team member’s time. That’s not an AI experiment. That’s an operational improvement you can measure.

There’s a secondary benefit that compounds over time: a system that references your documentation on every ticket also reveals where your documentation has gaps. When the engine can’t find a resolution, that’s a signal that a procedure is missing or outdated. Your documentation improves because it’s being used, not because someone scheduled a quarterly review that never happens.

The bottom line

ChatGPT and Claude are impressive tools. They’re the wrong tools for resolving tickets that depend on your internal processes, your client configurations, and your team’s documented procedures.

The gap isn’t AI capability. It’s AI context.

MSPs that get real value from AI in 2026 won’t be the ones with the best prompts or the most expensive models. They’ll be the ones that connected their documentation to their workflow so that every ticket benefits from what the team already knows.

If your technicians are pasting tickets into ChatGPT today, ask one question: is the AI using your documentation, or is it using the internet’s?

That’s the difference between a resolution and a guess.

Ready to turn your documentation into instant resolutions?

Start Free Trial