Hey all,
Today we have a small update for you, and one that’s been a top-requested feature from our community. We’ve added the ability to force an individual asset to fully sync its data out of cycle.
Check out the video below to learn more.
Thank you! I know the big, fancy feature releases are sometimes good, but I really, really like small stuff like this. Good work all!
Yeah these little fixes are great.
Still hoping for the ability to specify the date and time for ticket notes.
Thank you! I've been wanting this badly for some of my stuff.
If I can add one more thing to this change - can we get a PowerShell command to force this as well, now that we have it as a function?
Glad you are liking the new addition, Travis! In regard to the above request, I don’t think this is one we’d be doing. This is because the sync cycles can be quite complex on the back end, and while we totally understand the need to do an out of cycle sync in certain one-off circumstances, the intent is not to allow bulk assets to circumvent their intended sync cycles, especially on a recurring basis.
I think I get what you are after, though. You want to make a change via script and have it update live on the web as a singular action, right?
One example - though not the only one - would be a script that goes in and removes an application. That way, Saved Asset Searches display correctly when they search the asset information for said application. These would be considered more "one-offs", and hopefully not on a recurring basis.
One way to stop people from overusing it would be to add a check. So long as the check is active or has happened within a given time window, that would resolve overuse of the function. Just document that the call has a delay (queue) and a cool-down (10-30 mins?) that goes with it.
This way, if, let's say, a script has run once, and then another one calls sync() again within 5 minutes, the second sync would just become part of the first sync() call that happens within x time, refreshing for both.
Bear with me! I am a programmer and I wrote a very similar RMM tool :) I could come up with housekeeping and programming ideas all day lol.
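The coalescing and cool-down idea described above could be sketched roughly like this. This is a minimal Python illustration with invented names (SyncCoalescer, request_sync, the cool-down length) - it is not Syncro's API, just the debounce logic being proposed:

```python
import time

class SyncCoalescer:
    """Coalesce repeated sync requests inside a cool-down window."""

    def __init__(self, cooldown=600):
        self.cooldown = cooldown  # seconds (e.g. 10 minutes)
        self.last_sync = None     # timestamp of the last sync that ran

    def request_sync(self, now=None):
        """Return True if a new sync should run, False if this request
        should piggyback on the sync that already ran recently."""
        now = time.monotonic() if now is None else now
        if self.last_sync is not None and now - self.last_sync < self.cooldown:
            return False  # still in cool-down: coalesce with the earlier sync
        self.last_sync = now
        return True
```

With a 10-minute cool-down, a second script calling request_sync() 5 minutes after the first would get False and simply ride along on the refresh the first call already triggered.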
I think here we start to get into more of dynamic groups which most likely would require changes to occur in near realtime across the board. Then your saved asset searches would be accurate within a minute or two, and then you’d fire off scripts against groups and things to remove unwanted software, or install compliance-related software, etc. That’s a huge undertaking, but I do understand the ask. You are asking for a very small part of that, but this is definitely heading in that direction :).
For sure! But the more you let us do the work - exposing the functions while you set the limits (any functions, any limits) - the less you have to do as well :) Scripting is the most powerful tool of any RMM if you allow it to be.
Thanks again for all the hard work!!!
I don't know what comes first, the chicken or the egg, but there's a bug where the sync date is old and the agent version is old. Wonder if the force full sync will sync and then the agent will update, or if it's the other way around. Next we need a force update button :).
Well if the sync date is old then it typically means a communication issue, because it would check for and apply any updates if they were available at that time. So if it can’t communicate for some reason that would explain the out-of-date agent scenario.
A welcome feature.
Please consider the ability to control when the scans occur. I cannot allow Syncro service to run on HyperV Hosts or Guests because of the impact on CPU and disk i/o.
Random 6 hour scans are unacceptable because on older devices with platter drives the systems become unusable for up to 20 minutes while every file is scanned. Real-time anti-virus scanning is triggered, exacerbating the problem. Too often these scans trigger during mid-day or late afternoon when clients need full resources to perform their duties. Also, architectural and engineering clients run graphics renderings that take hours to complete. These jobs can fail or appear to fail with the error message "System Not Responding" because Syncro processes are not throttled and take priority away from production tasks.
Please prioritize creating a user definable schedule to only run full system scans during a predefined time, like after hours or on weekends.
Thank you.
If you are seeing 20 minute slowdowns during large syncs I would send that into support. That’s not normal. Every file is not scanned as part of the agent sync.
That is not normal at all. I would really look into the hardware they're using at the customer level and also get in touch with the Syncro support team. At the customer level, I would really look into the health of the computers, from the hard drives to the speed at which they function. As for AV scans, that's something you will need to address in your AV policies; those are not part of the Syncro sync.
Hey Andy, this was actually something I discussed on a call with a rep; unfortunately he didn't know the answer - specifically whether I could use saved searches and run a script against a saved search - but I did find where I could after poking around. (I'm on a trial and hopefully coming onboard from my current RMM solution.) Anyway, are those saved searches dynamic at all? To the extent that if I filtered a search for, say, a custom field with a set value, and that value later changed, the asset would no longer be in that group... is that how the saved searches work?
Is there a time frame on that? I’m guessing every 6 hours when it does the full sync, it would then update the custom field, which would then pull it from that saved search?
Hi, thanks all for the replies. I contacted SyncroMSP support about a year ago, and again in December 2022. It is related to Syncro Live Agent scanning, and it touches every file. I think this is part of the "Random 6 Hour Scan". I am surprised you have not seen it before. Depending on how many files and the speed of the machine, it can last 20 minutes or longer. Normally we wait it out, but at times we have to remote to client sites, stop the services, and kill the task. It last happened to me at 3:30 on a Friday afternoon in the middle of a critical support issue, so bad I couldn't launch our BYO Splashtop.
There isn’t a method to do this in the way you are expecting. If you run a script or schedule a script against a Saved Asset Search, it basically takes the contents of that search at that moment in time and schedules/runs the script against those assets. It doesn’t evaluate the contents of that search again at any point in time. So no way to do this in a “dynamic” fashion.
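The snapshot behavior described above can be illustrated with a tiny sketch. This uses made-up data and function names, not Syncro's actual internals - it only shows the difference between capturing a search's results once and re-evaluating them later:

```python
# Made-up asset records for illustration only.
assets = [
    {"name": "PC-1", "field": "needs_cleanup"},
    {"name": "PC-2", "field": "ok"},
]

def saved_search(assets):
    """Stand-in for a Saved Asset Search: assets matching a custom field."""
    return [a["name"] for a in assets if a["field"] == "needs_cleanup"]

# Snapshot behavior: the target list is fixed at schedule time.
targets = saved_search(assets)        # -> ['PC-1']

assets[1]["field"] = "needs_cleanup"  # PC-2 changes afterward...

# ...but a scheduled script still runs against the old snapshot:
targets                               # still ['PC-1']

# Re-running the search manually picks up the change:
saved_search(assets)                  # -> ['PC-1', 'PC-2']
```

So a script scheduled against the search keeps the stale list, while refreshing the search and running the script again would target the current results.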
Oh, that's a big bummer. So I would assume, then, that if we went back into that saved search manually and refreshed it, it would update the view, and if we ran the script again it would use the latest refresh of the saved search - or would it still not be accurate?
Hopefully that will be on the roadmap eventually. Dynamic searches would be fantastic, especially for custom monitoring capabilities.
Apologies for kinda derailing thread.
The Syncro agent and Syncro Live are 2 completely different systems. I'm curious what would cause Live to touch any files unless someone is actively using it. If it's able to do more than it's supposed to, then that's a big concern. The only thing built into Live that's supposed to touch any files is the File System, but there is no scanner. It sounds like those systems are running spinners if disk usage is able to bog the machine down that much? I don't believe the sync would be causing this. I can tell you from experience and from looking at the logs, none of the syncs take long. If you are seeing this on multiple systems, across customers, something seems suspicious, like something you are using may be interfering. I have seen strange behavior with S1 that made it look like a piece of software going haywire, but it was an S1 compatibility issue. You could prove or disprove this by kicking off a sync and seeing what happens.
What permissions are necessary to allow a user to force a sync? Our techs have all asset permissions except allow installation of rejected patches. Also they have all script and script category permissions except delete.
There is an issue engineers are working on. Right now it’s only available to global admins. Once the fix rolls out it will be available to anyone with the Assets - Edit permission enabled. Sorry about that.