Yeah, we're hearing this a lot from folks who were already trying to implement some version of this before these reports were released. It's cool that you were already thinking that way with how you have your ticketing structured.
For the in/out thing you were talking about, I've never seen it done that way before. If we had subtypes, would that solve it? You could have "Not Booting" as the starting type, and then move into a subtype for your out type. I've seen a lot of requests around subtypes, which is why I'm wondering if we could potentially knock out two requests at the same time.
@andy, something else SyncroMSP should implement is the concept of valid ticket status transitions, and preventing invalid transitions.
For example, it should not be possible for a ticket to begin in the New status, be changed to Resolved, and then be changed back to New. This can skew reporting.
Sure, technicians can be told "don't do things like that", but enforcing that manually is hard from a management point of view. Baking it into the platform in a way a manager can configure to suit business processes is much better.
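Just to make the idea concrete, here's a minimal sketch of how a configurable transition allowlist could look. Everything in it is hypothetical: the status names, the `ALLOWED_TRANSITIONS` table, and the `change_status` helper are made up for illustration, not Syncro's actual data model.

```python
# Hypothetical sketch of configurable status-transition enforcement.
# Status names and structures are examples, not Syncro's actual model.

ALLOWED_TRANSITIONS = {
    "New": {"In Progress", "Waiting on Customer"},
    "In Progress": {"Waiting on Customer", "Customer Reply", "Resolved"},
    "Waiting on Customer": {"Customer Reply", "In Progress", "Resolved"},
    "Customer Reply": {"In Progress", "Waiting on Customer", "Resolved"},
    "Resolved": set(),  # terminal: no going back to New
}

def change_status(ticket: dict, new_status: str) -> None:
    """Apply a status change only if the manager-configured table allows it."""
    current = ticket["status"]
    if new_status not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Invalid transition: {current!r} -> {new_status!r}")
    ticket["status"] = new_status

ticket = {"status": "Resolved"}
change_status(ticket, "New")  # raises ValueError, per the example above
```

The table would be the manager-editable part, so each shop could encode its own business process instead of relying on techs remembering the rules.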
After seeing a reference to this in the recent monthly updates video, I decided to take another look at it. I also read through the notes here to get a better understanding of how it works, because it's really not covered in the report documentation.
This report is still completely useless to us, and the numbers it spits out are completely bogus. We don't work by setting a ticket to In Progress and treating that as the time we're working on it. Our technicians multitask several tickets at a time. Often you're doing a task where you have to wait for something, like a Windows update and a reboot. Expecting our technicians to set a ticket to In Progress every time they actually touch it is not tenable. There are many times, for instance, when we update a ticket that's in Customer Reply and then switch it directly to Waiting on Customer. Those times are never tracked.
A useful report for us would base this kind of data on the time our techs log against the ticket. Those are the times that matter to us; that's what we look at and need to compare against our tech hours report.
Does anyone here actually use this report and find that it works properly?
The numbers are not bogus at all. If your customer is waiting an hour, two hours, three hours, or whatever for something to happen while the ball is in your court, they don't necessarily care why. Maybe you were waiting 30 minutes for a Windows update to process, but the reason isn't relevant to the time it took to resolve the ticket. The time is the time, and it gets baked into your averages and your company baseline for each ticket type.
This report has a distinct purpose: to track how long it took from the time a ticket was created to the time it was resolved, pausing the timer for any statuses where the ball isn't in your court, as well as outside your business hours and during your custom holidays.
This average is calculated across all tickets and types, and then compared per technician to gauge who is more or less efficient by type against your company baselines. We also have an identical report that tracks this per customer instead of per technician.
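For anyone trying to reason about the numbers, here's a rough sketch of the calculation described above. Everything in it is hypothetical (the paused status names, the `business_seconds` stub, the history format); it just shows the shape of a pause-aware timer, not Syncro's actual implementation.

```python
from datetime import datetime

# Statuses where the ball is in the customer's court, so the timer pauses.
# These names are examples; the real set would be configurable.
PAUSED_STATUSES = {"Waiting on Customer", "Waiting for Parts"}

def business_seconds(start: datetime, end: datetime) -> float:
    """Seconds between start and end that count as business time.
    Stubbed here; a real version would clip to the workday and
    skip weekends and custom holidays."""
    return (end - start).total_seconds()

def resolution_time(history: list[tuple[datetime, str]],
                    resolved_at: datetime) -> float:
    """Effective time-to-resolution in seconds.

    history is a chronological list of (changed_at, status) entries,
    starting with the ticket's creation (status "New").
    """
    total = 0.0
    intervals = zip(history, history[1:] + [(resolved_at, "Resolved")])
    for (start, status), (end, _next_status) in intervals:
        if status not in PAUSED_STATUSES:
            total += business_seconds(start, end)
    return total
```

The per-type baseline would then just be the average of these values across all resolved tickets of that type, which is what the per-technician and per-customer views compare against.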
If your techs are logging time against all tickets, there are already ways to report on that. That is not the purpose of this report.
Does anyone here use this report, or has anyone tried it? If so, can you share how you're using it and how your staff handles the ticketing steps to get the best data?