Device IP address not updating

To be clear, as Isaac stated above different data is sent at different intervals. Monitors, checks, presence, scripts (including script results), SNMP data, etc., are all handled in near real time. In the event you are working on a specific machine and need to poll for data that isn’t part of that cycle, you can trigger a force sync which updates all information as needed right from the asset record.

In terms of progress on features, we have specifically mentioned we’d be prioritizing hardening the platform, fixing some of the more longstanding issues, and migrating our entire backend from Heroku to AWS, which will allow us to continue to scale and will give us much deeper telemetry on the areas of the platform we’re currently focusing on.

On the feature front, we’re looking forward to bringing rich text into early access in Q4; that’s been one of our top-requested features for some time now. On top of that, we’ll also see the release of our new Technician and Customer Efficiency Reports, which should have a marked impact on most MSPs’ insight into the effectiveness of their technicians, as well as into which customers are consuming inordinate amounts of technician time versus their baseline. We’ve also released a fair amount of smaller items recently that folks have been clamoring for. Those include features like:

  1. Subscribing another technician to tickets
  2. Adding time to a ticket on behalf of another technician
  3. Per-technician default labor products

As for the question of whether we’re in it for the long haul: we just surpassed 4,000 MSP partners this month who are using our platform to profitably run and grow their MSPs. This is just the beginning…

What I think they are saying is that the features added seem a bit small compared to the total number of requests being made every day. Some things are very important and others are super nice-to-haves. Yet they seem to be ignored, and the overall backlog of feature requests continues to grow with little word on when something might be added, or even whether it’s on the board to be added.

While I understand that, from a programming standpoint, it takes time to read over a request, think about how it fits into the bigger picture, and decide whether it’s even needed, others might not understand that process and are left wondering what is going on with your “hardening” work.

For example, I can still see the edit-post option and can see edit history even though I was told that wasn’t possible. I can still find API keys and a bunch of other stuff that I don’t think I should be allowed to see :) even though I posted this issue a while back and was assured it would be fixed almost 2 weeks ago.

From andrewd:

Lack of progress on the feature requests from the dev team is also a concern.
Either they lack enough people, coffee, snacks, or all three.

I wonder at this point if there is any point spending time logging more feature requests.

Please don’t get me wrong. I am just repeating what I have read over and over again. It just seems like growing pains more than anything. I also understand what you mean by “hardening”: making it easier to track and fix issues while still providing methods to work around them. I just mean that not everyone will understand what that means or includes without going into details.

@DBlue
With that said: why not just make a script that pulls the IP address and stores it somewhere if you really need it to update a bit sooner? Yes, I understand it’s already there for use, but that’s kind of the flexibility of having scripts: working with the RMM to get the results you’d like to see. This is only a workaround, and I agree it would be nice to pull some data a bit faster than other data where possible. For example, I would love it to update a bit sooner when viewing the asset live, or some type of LIVE state where it updates a bit sooner than it normally would. Granted, that is what the Sync Data button is, but in a more broken-down view instead.
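
A rough sketch of that kind of script, assuming Syncro’s PowerShell script module and a text custom field you’ve created to land the value in (the field name here is just a placeholder):

```powershell
# Rough sketch: a scheduled Syncro script that writes the current IPv4
# address into a custom asset field. Assumes the Syncro script module
# is available and a text custom field named "Current IP" exists
# (the field name is a placeholder; adjust to your setup).
Import-Module $env:SyncroModule

# First connected adapter with a default gateway, i.e. the "real" NIC.
$ip = (Get-NetIPConfiguration |
    Where-Object { $_.IPv4DefaultGateway -and $_.NetAdapter.Status -eq 'Up' } |
    Select-Object -First 1).IPv4Address.IPAddress

if ($ip) { Set-Asset-Field -Name "Current IP" -Value $ip }
```

Run it on whatever schedule you like, and the field stays as fresh as the schedule.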

If you were told something was fixed that is inaccurate, please reply to your existing ticket letting those folks know if you haven’t already.

Thank you @Andy and @travis
Of course there are workarounds possible. Yes, I could write a script…to report into a custom field. I could even write an exe that is a WCF .NET client, then write a WCF .NET server that I run here or in the cloud to send me all that info fully encrypted with SSL, create a .NET GUI to display it on Windows and mobile devices, then update SyncroMSP with an API call. The technical parts of these things are not hard. All these things are possible.
Suddenly this starts to look like an RMM.

It isn’t just the IP address. If the hostname changes, it still takes many hours to update in the Syncro MSP WebUI.

This is not clear.
Can you please point me to the documentation that provides an exact list of all the data points that are and are not sent in real time?

What I don’t understand is why data isn’t sent when the data state changes.
For example, what I mean by this is that the IP address doesn’t need to be sent by the agent every 6 hours. That is a waste of resources. What does need to happen is that when the agent detects the IP address has changed, then the change is sent to the Syncro MSP cloud.
I’m only using the IP address as one example. Insert any other data point that is currently, needlessly, transmitted every 6 hours even if no change has occurred; the data should only be transmitted when a change occurs.
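
To make the pattern concrete, here is a hedged sketch of what I mean; `Send-ToCloud` is a made-up stand-in for whatever upload call the agent actually uses:

```powershell
# Sketch of "send on change": keep the last-reported value on disk,
# compare on each check, and transmit only when it differs.
# Send-ToCloud is hypothetical, standing in for the agent's real upload.
$stateFile = 'C:\ProgramData\Agent\last-ip.txt'

$current = (Get-NetIPConfiguration |
    Where-Object { $_.IPv4DefaultGateway } |
    Select-Object -First 1).IPv4Address.IPAddress
$last = if (Test-Path $stateFile) { Get-Content $stateFile } else { $null }

if ($current -ne $last) {
    Send-ToCloud -Field 'ip_address' -Value $current   # hypothetical call
    Set-Content -Path $stateFile -Value $current       # remember new state
}
# No change means nothing is sent: the blanket 6-hour re-upload goes away.
```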

But workarounds take time to implement and are not always profitable, and given SyncroMSP has 4,000 MSPs as partners, the big-picture view suggests that 4,000 MSPs each implementing workarounds isn’t a productive outcome.

My impression coming in was that Syncro had far more than 4,000 customers; this is excellent information, and I will therefore temper my expectations.
I understand now, that SyncroMSP is going through some growing pains.

@Andy

Great: when will we be able to lock down Admin access by IP address?

Script scheduling is failing. Is this linked to the migration of the entire backend from Heroku to AWS?

How many more months is this expected to take?

I will find out if we have this documented anywhere. I don’t know offhand.

There are multiple things happening at various sync stages. Like I mentioned before, if you are ever working on a device where you need to confirm a change immediately, like a hostname change or an IP address change, you can force a sync on the asset to get that to update.

We can debate which item(s) should be part of which sync, but it’s not a waste of resources. We have to poll for the change regardless, and we just send a packet of applicable data with any given sync. It actually takes far more resources to attempt to send every change in real time (application inventory, for example).

I think in some ways this is definitely true; the large migration we are currently undergoing is an example of the scaling we’re doing to accommodate our continued rapid growth well into the future.

I feel like this is more likely to come in the form of SSO, but don’t quote me on that. I don’t have any timelines or anything to provide on if or when those items will happen.

It’s not linked to it, meaning if you are one of the users experiencing that particular issue, it would likely be occurring whether we were going through a migration or not. As I said before, though, the migration will give us far greater telemetry to pinpoint and troubleshoot issues, so that will be helped (positively) by the migration. That’s one of the many tools AWS will afford us.

I don’t have an ETA for you on this. It’s not quick, but it’s not something that would take an entire year or something, either. I think that’s as specific as I can get there.

I must be misunderstanding something.

Here is how I see the solution.
Say we have a PC with an Agent on it.
The Agent is a running Windows Service, which can hold a record of the state of the PC in a data structure. Let’s continue to use the IP address, but it could be anything/everything else too.
The Agent can/could detect a change of IP address by comparing its record of state to the actual state of the PC. There are many triggers in Windows that can be monitored; an IP address change is recorded in the Windows Event Log.
When the Agent, running as a Windows Service, detects the change, it sends a message at that point to the Syncro cloud (outbound from the PC).
For some PCs that message for IP addresses might only get sent once per year (or once every 5 years), or even less often.
Compare this to every 6 hours.
I’m confident that, at the scale of the number of SyncroMSP agents out there, sending data up from the endpoint only when the data state changes, instead of polling every 6 hours, will free up a large amount of resources and allow technicians to have more accurate data to work with.
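
For the trigger side, Windows can even raise the event for the agent. A hedged sketch of one way to do it, with `Report-Change` as a hypothetical stand-in for the agent’s upload routine:

```powershell
# Sketch of the event-driven variant: subscribe to WMI modification
# events on network adapter configuration rather than re-reading state
# on a 6-hour timer. WITHIN 10 is WMI's internal polling interval
# (seconds) for intrinsic events. Report-Change is hypothetical.
$query = "SELECT * FROM __InstanceModificationEvent WITHIN 10 " +
         "WHERE TargetInstance ISA 'Win32_NetworkAdapterConfiguration' " +
         "AND TargetInstance.IPEnabled = TRUE"

Register-WmiEvent -Query $query -SourceIdentifier 'IpChange' -Action {
    # Fires on any property change of an IP-enabled adapter; a real agent
    # would diff against its stored state before deciding to send.
    $ip = $Event.SourceEventArgs.NewEvent.TargetInstance.IPAddress -join ','
    Report-Change -Field 'ip_address' -Value $ip   # hypothetical call
}
```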

When the application inventory of an endpoint changes because software was installed or removed, the Agent could be coded in such a way that it only sends the changes (the differences), rather than sending the entire application inventory again. If I install Microsoft Office on an endpoint, only a single data point for the Microsoft Office installation needs to be sent to the Syncro cloud, not the entire app inventory of 100 installations.
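
A hedged sketch of that diff; the snapshot path and `Send-Delta` are illustrative assumptions, not Syncro internals:

```powershell
# Sketch: diff the current installed-programs list against the last
# snapshot and report only additions/removals.
$snapshot = 'C:\ProgramData\Agent\apps.json'

$current = Get-ItemProperty 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object DisplayName |
    Select-Object -ExpandProperty DisplayName | Sort-Object -Unique

$previous = if (Test-Path $snapshot) {
    Get-Content $snapshot -Raw | ConvertFrom-Json
} else { @() }

# '=>' marks a newly installed app, '<=' one that was removed.
$delta = if ($previous) {
    Compare-Object -ReferenceObject $previous -DifferenceObject $current
} else {
    $current | ForEach-Object { [pscustomobject]@{ InputObject = $_; SideIndicator = '=>' } }
}

foreach ($change in $delta) {
    Send-Delta -App $change.InputObject -Action $change.SideIndicator   # hypothetical
}
$current | ConvertTo-Json | Set-Content $snapshot
```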

This concept of only sending changes is, I feel, widely regarded as the best way to proceed, and it is the foundational concept behind Git, MSIX, rsync, robocopy, the Windows BITS service, and many other modern data transfer technologies.

Polling to re-upload/download entire data sets and slurp them into a cloud system feels quite old school to me.

This is actually the opposite of how it would work. There are tons of things within Windows that you cannot “subscribe” to for events. So you have to poll. Even if every item we reported on fired an event when the data value changed, there is literally zero expense in having .NET poll anything unless you are going to do something crazy like send the entire event log up every 60 seconds or something like that. The resource hog is millions upon millions of assets attempting to update fields in real time, all the time.

Again, if you need to verify a change when you are working on a device, such as hostname or IP address, you have the option to trigger a full sync anytime you need right from the asset record. When that is triggered all data is refreshed for that particular asset.

Heh yes we understand how differentials work :). It’s not the amount of data being sent that is the resource hog, it’s the frequency of data being sent.

If it’s not the amount of data being sent that’s the problem, then why doesn’t everything update every 15 mins? You’re doing 15-min updates anyway for some things. And why does it inefficiently update everything rather than just what has changed? That has to cause more hits against the database. It’s pretty undeniable that things could be improved and that they are better on other platforms. The system is very basic; there’s no shame in that given Syncro’s age, but let’s not pretend it doesn’t need improvement. As far as when things update, no, there isn’t good documentation on this, and what there is is strewn throughout. I’ve asked before for this to be added. Here’s what I’ve pieced together from logs and experience:

Near real time:
  • Manual script runs
  • Online status
  • SNMP

Every 5 minutes:
  • Syncro Agent heartbeat

Every 5-10 minutes:
  • Checks for Syncro Agent/Live/Kabuto updates

Every 15 minutes (Small Sync):
  • Event Logs check
  • Antivirus Status check
  • Firewall Status check
  • Hard Drive Space check
  • Device Manager check
  • Blue Screen Crashes check

Every 1 hour:
  • Overmind/Recovery service checks service status

Every 2 hours (Medium Sync):
  • Managed AV check and install if needed
  • Third-party patch check and install if needed (may be scheduled instead?)

Every 6 hours (Full Sync):
  • Asset detail updates (Pending Reboot, last boot time, device name, installed programs)
  • Windows Update sync
  • Hard Drive Fragmentation check
  • Hard Drive Health check

The amount of data is most likely in the MBs. Not really a problem, I assume. Once you have that data on the server, though, you would still have to process some of it (granted, the client could do some of that) and then apply the differences. The thing is, in some cases you would have to process all the objects, taking a bit more time to display some information, or in this case spending CPU cycles to process said information all the time.

Aka: if there is a lot of data filtering/processing, it could slow down the client or the server, making it a bit of a wash to be updating that much, that quickly, across all assets.

Take the event log, for example. Depending on how you pull it, you still have to convert the timestamp from UTC epoch into a normal date-time. Otherwise it just comes out looking like 15604506621 when you really want something more like 7:54 AM. Now do this with computers that have over 100,000+ entries.
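
For what it’s worth, the conversion itself is a one-liner against .NET; the cost argument is about doing it 100,000 times, not about any single call. A hedged sketch (the epoch value here is illustrative):

```powershell
# Illustrative one-off conversion: Unix epoch seconds to local time.
$epoch = 1560450662                                   # example value only
$when  = [DateTimeOffset]::FromUnixTimeSeconds($epoch).ToLocalTime()
$when.ToString('h:mm tt')                             # e.g. "7:54 AM"
```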

@Andy
Granted, some of these could be moved from the 6-hour cycle to a more reasonable 5-15 minutes. For example: Pending Reboot, Last Boot, Device Name, and Network Information.

Or even flagged for faster updating while someone has the asset open in view.

Yeah I’d recommend spinning up a feature request and asking for those items to be moved up to a faster sync cycle.

Timestamps could easily be converted on the client side if it’s a performance issue. The event logs sent to Syncro should only be the ones matched by an Event Policy; I highly doubt they’re sending every log entry. That would be completely impractical financially, as they’re not a SIEM where you’re paying for that storage/processing.

I like the faster updating when recently viewed. I think that’s what LabTech did?

Resource hog at the endpoint, or in the Syncro cloud?

IMO, this is an incorrect way to look at the alternative system of gathering data that many other RMMs are using.

Example.
Current situation, based on info provided in this thread:

  • 11:00am: Agent full sync runs.
  • 11:05am: Network IP address changes. The WebUI still shows the old address.
  • 5:00pm (6 hours later): Agent full sync runs; the IP address in the WebUI is finally updated.
  • 11:00pm (6 hours later): Agent full sync runs; the IP address didn’t change, but the WebUI value is rewritten anyway (because Syncro isn’t checking for changes, it rewrites everything).
  • …and every 6 hours for the life of the endpoint, the WebUI IP address is rewritten even though it never changes again.

Improved situation, where the endpoint only sends info to the Syncro cloud when there is a change (though flap detection would need to be built into the agent code: not hard or large, but it would need to be considered; see the sketch after this example):

  • 11:00am: Agent full sync runs.
  • 11:05am: Network IP address changes; the agent sends the change, and the IP address in the WebUI is updated immediately.
  • 5:00pm (6 hours later): The agent does nothing, because we no longer do full syncs.
  • 11:00pm (6 hours later): The agent does nothing, because we no longer do full syncs.
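
And the flap detection mentioned above is a small amount of code. A hedged sketch of a hold-down timer, with `Send-ToCloud` again a hypothetical stand-in for the agent’s upload call:

```powershell
# Sketch of flap detection: report a new value only after it has stayed
# stable for a hold-down period, so an adapter bouncing between two
# addresses doesn't produce a message per flap.
$holdDownSeconds = 120
$pending = $null; $pendingSince = $null; $reported = $null

while ($true) {
    $current = (Get-NetIPConfiguration |
        Where-Object { $_.IPv4DefaultGateway } |
        Select-Object -First 1).IPv4Address.IPAddress

    if ($current -ne $reported) {
        if ($current -ne $pending) {
            # New candidate value: start (or restart) the hold-down clock.
            $pending = $current; $pendingSince = Get-Date
        }
        elseif (((Get-Date) - $pendingSince).TotalSeconds -ge $holdDownSeconds) {
            Send-ToCloud -Field 'ip_address' -Value $current   # hypothetical
            $reported = $current
        }
    }
    Start-Sleep -Seconds 15
}
```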

The backend. There is little the agent would do to be a resource hog.

I’m going to go ahead and close out the discussion by saying you can already trigger a full asset sync whenever you like, directly from the asset record. So you won’t have to wait more than a few minutes for any info to update when you’re actively working on a machine. Also, for anything further regarding what syncs when and changing those times around, please submit a feature request.

Being able to do a full asset sync is only useful if the technician is aware that a change has happened but hasn’t shown up in the Syncro WebUI.
If the change has happened and the technician is not aware of it, then the technician will be making decisions based on the incorrect data in the Syncro WebUI.

Being able to trigger a full asset sync is definitely not a solution.
It is a band-aid at best.


“Easily” and “processing” are not the same thing, though. Even the event-policy-matched entries can still reach into the 100,000 range. The storage part, again, is MBs of data; the storage need is small because it’s all text and not object-based. Either way, it’s an example of how something that sounds simple still takes CPU cycles. Even if it’s not at the server level, it’s happening at the client level, and that means fewer resources for the client: you still have to gather said data (keep it in RAM), process said data (CPU, to format it), and then send that data (network), times however many clients you have. It’s not hard to see that this can be very intense if all clients are being updated very quickly. Some tasks can happen sooner than others, of course, which is what I pointed out already.

While this thread doesn’t affect me directly, I am just saying andrewd is right, but I don’t think we can make it all happen quickly for everyone. It would be best to set some tasks sooner than others, though (Pending Reboot, Last Boot, Device Name, and Network Information), and/or have some type of flag that forces more frequent updates (or a shorter delay) while someone is looking at the asset or client (so it doesn’t force-update everything at once; then again, that is kind of what the sync button does already).

I know; I wrote an RMM before xD, so I do have some background in what might be going on :) The real question is: what is the server doing when it updates said information that might be taking more cycles?

Why? You were doing so well, Andy. I was thoroughly enjoying the discussion and learning as it went.

IP address is a great example. IP addresses don’t change all the time, not even close, but it’s a critical piece of real-time data to us.

I’ll bet I could list 10 things that update quickly but matter less to us, if we had that list of update times for everything. This thread could become a well-thought-out and very specific feature request that basically maps out how to do it, using fewer of your resources and more of the collective intelligence you have in those 4,000 MSPs. What a resource to draw on…

Please add this to your online Syncro documentation.

I’ve made this request internally. I agree it needs to be documented in our KBs.