If my mobile device's clock is wrong (more than 5 minutes off from the actual time), the Cosmos DB SDK never works for any query. It works if the difference is less than 5 minutes. Now, the problem is that some of our customers may set their clock, say, 10 minutes ahead of the actual time. So we can't use the Cosmos DB SDK any more?
This is expected behaviour, and you'll find the same with most secure systems.
SSL/TLS, for instance, which is what most web technologies use for secure transmission of data, relies heavily on clock synchronisation for certificate validation and revocation checks (amongst other things).
https://security.stackexchange.com/questions/72866/what-role-does-clock-synchronization-play-in-ssl-communcation
So it's pretty much the case that you're going to have to get your clocks synchronised, or you're going to run into a lot of issues with this sort of thing.
The alternative is to use unsecured systems. However, for the love of all that is holy, don't go down that route.
A potential workaround would be to containerise your solution and keep an accurate clock within that container. That way your service knows the real time, and whatever your customers have on their desktop clocks doesn't matter. It's really far from ideal, though.
I am sometimes not sure what the best practice is for server-side coding.
Let's say we have a rank-tracker application which updates a domain's Google ranking for specific keywords. We create an application (e.g. with the Laravel framework) which includes the frontend and backend. Then we have to update the rankings for all websites from time to time. I know that a cron job would let me execute a script every few minutes.
But if it gets more complicated, like the Uber driving system, cron jobs will not be enough, right? We need some server-side application written in C#, Java, ... which is continuously checking for tasks, right?
I just need some advice. Maybe someone could also point out in which cases cron jobs are not enough and we have to write our own applications (C#, Java, ...) to make sure everything works correctly.
First you need to take a look at what Cron Jobs are used for:
https://en.wikipedia.org/wiki/Cron
The software utility Cron is a time-based job scheduler
So in cases where you want to execute tasks at specific times, a cron job satisfies that need.
If, however, you want to continuously check for a condition and relay that information back and forth between client and server, sockets are generally what you're looking at, specifically WebSockets in web-based applications (again, it depends on where and how you want to use them).
https://en.wikipedia.org/wiki/WebSocket
https://en.wikipedia.org/wiki/Computer_network_programming
"Continuously" disregards the current time (I.e.: It's not the same as running every second, or even millisecond).
Language also doesn't matter, but preferably you'd want to use something that has decent socket support / well documented libraries available.
TL;DR
Cron jobs are good when you want to do things at very specific times, like database backups, whereas sockets are more widely used to relay live information.
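To make the distinction concrete, here is a minimal sketch in C# of the "continuously checking" style of worker that a cron job can't really replace; FetchPendingTasksAsync and ProcessAsync are hypothetical placeholders for whatever your application actually does.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// A long-running worker that keeps checking for work, as opposed to a cron job
// that only wakes up at fixed times. The fetch/process methods below are stubs.
class TaskWorker
{
    public async Task RunAsync(CancellationToken cancellation)
    {
        while (!cancellation.IsCancellationRequested)
        {
            string[] pending = await FetchPendingTasksAsync(); // e.g. poll a queue or database
            foreach (string item in pending)
                await ProcessAsync(item);

            // Short pause so the loop doesn't spin the CPU; tune to your latency needs.
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }

    private Task<string[]> FetchPendingTasksAsync() => Task.FromResult(Array.Empty<string>());
    private Task ProcessAsync(string item) => Task.CompletedTask;
}
```

For something like the Uber example, a worker of this kind would sit behind the WebSocket layer, pushing updates to connected clients as soon as they arrive rather than waiting for the next scheduled run.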
I have an application which uses Active Directory intensively. I want to know whether there is a way to see exactly what queries are sent and how much time they take on the server side.
I can always build a very basic profiler by measuring the elapsed time of the queries with Stopwatch, but that helps me neither to see the queries, nor to know whether the time spent is the time the server takes to process the query, or time lost sending and receiving data over the network or doing work on the client side.
So is there a profiler for Active Directory similar to the one for SQL Server, or something in the .NET Framework that would let me get this data?
That data isn't available - there's no profiling API for Active Directory directly. What you could perhaps do is get the time for it indirectly. If you make a similar network request to the right machine, but one for which you know there will be no processing time at all (or minimal), then you can measure the effect of network overhead.
You can then come at it from the other end. If you use Event Tracing for Windows (not supported by many profilers, but available in some, e.g. ANTS Performance Profiler), you can track the AD events as they happen and so separate the time taken by the application from the time taken by these events. You should then have all the pieces you need to figure out what's going on, I think.
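If you do fall back to client-side timing, a rough sketch of the Stopwatch approach from the question might look like the following (the LDAP path and filter are made up; this only gives you the round-trip time, not the server-side breakdown):

```csharp
using System;
using System.Diagnostics;
using System.DirectoryServices;

class AdTiming
{
    static void Main()
    {
        // Hypothetical domain and filter; substitute your own.
        using (var root = new DirectoryEntry("LDAP://DC=example,DC=local"))
        using (var searcher = new DirectorySearcher(root)
        {
            Filter = "(&(objectCategory=person)(objectClass=user))",
            PageSize = 500
        })
        {
            var sw = Stopwatch.StartNew();
            using (SearchResultCollection results = searcher.FindAll())
            {
                int count = results.Count; // forces the paged results to actually be fetched
                sw.Stop();
                Console.WriteLine($"{count} results in {sw.ElapsedMilliseconds} ms (round trip, not server time)");
            }
        }
    }
}
```

Pairing that with a "no-op" query against the same domain controller, as suggested above, gives you a baseline for the network overhead to subtract.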
My server clock is running slow for some reason. I need to put a timestamp on my database transactions and need a reliable time source. Is there an API for the World Time Zone site or something similar?
You know you can get the server to automatically synchronize with a known time server, right? That might be easier than coding something custom.
If you want to implement it yourself, you will need to write a client for the Simple Network Time Protocol (or find an open-source one). There are plenty of SNTP servers available, and SNTP should be relatively easy to implement. Here is the RFC.
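If you do go that route, here is a minimal SNTP client sketch in C# (the "pool.ntp.org" server and the three-second timeout are my own choices, not requirements); it sends the standard 48-byte request and decodes the transmit timestamp from the reply:

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

class SntpClient
{
    // Queries an SNTP server and returns its transmit time in UTC.
    public static DateTime GetNetworkTime(string server = "pool.ntp.org")
    {
        var data = new byte[48];
        data[0] = 0x1B; // LI = 0, VN = 3, Mode = 3 (client)

        var address = Dns.GetHostEntry(server).AddressList
            .First(a => a.AddressFamily == AddressFamily.InterNetwork);
        var endpoint = new IPEndPoint(address, 123);

        using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp))
        {
            socket.ReceiveTimeout = 3000;
            socket.Connect(endpoint);
            socket.Send(data);
            socket.Receive(data);
        }

        // The transmit timestamp starts at byte 40: 32-bit seconds + 32-bit fraction, big-endian.
        ulong seconds  = SwapEndianness(BitConverter.ToUInt32(data, 40));
        ulong fraction = SwapEndianness(BitConverter.ToUInt32(data, 44));
        ulong milliseconds = seconds * 1000 + (fraction * 1000) / 0x100000000UL;

        // NTP time counts from 1900-01-01 UTC.
        return new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddMilliseconds(milliseconds);
    }

    private static uint SwapEndianness(uint x) =>
        ((x & 0x000000FF) << 24) | ((x & 0x0000FF00) << 8) |
        ((x & 0x00FF0000) >> 8)  | ((x & 0xFF000000) >> 24);
}
```

That said, configuring the operating system's own time synchronisation, as suggested above, is almost always the simpler and more robust option.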
I am trying to work out how to calculate the latency of requests through a web-app (Javascript) to a .net webservice.
Currently I am essentially trying to sync the client and server time, so that when hitting the webservice I can look at the offset (which would accurately show the 'up' latency).
The problem is that when you sync the times, you have to factor in the latency of the sync itself. So currently I am timing the sync request (round trip) and dividing by 2, in an attempt to get the 'up' latency, and then modifying the sync accordingly.
This works on the assumption that latency is symmetrical, which it isn't. Does anyone know a procedure that could determine specifically the up/down latency of a JS HTTP request to a .NET service? If it needs to involve multiple handshakes, that's fine; whatever is as accurate as possible.
Thanks!!
I think this is a tough one - or impossible, to be honest.
There are probably a lot of things you can do to get more or less close to what you want. I can see two ways to tackle the problem:
Use something like NTP to synchronise the clocks and use absolute timestamps. This would be fairly easy, but is of course only possible if you control both server and client (which you probably do not).
Try to make an educated guess :) This would be along the lines of what you are doing now. Maybe ping could be of some assistance?
The following article might provide some additional idea(s): A Stream-based Time Synchronization Technique For Networked Computer Games.
Mainly it suggests making multiple measurements and discarding "outliers". But in the end it is not that far from your current implementation, if I understand correctly.
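As a rough sketch of that sampling idea (the endpoint URL and the plain-number response are assumptions; the offset formula is the standard NTP-style one): take several samples, keep the ones with the smallest round trip, and average their offsets.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class ClockOffsetEstimator
{
    // Assumes a hypothetical endpoint that returns the server's current time
    // as Unix milliseconds in the response body.
    static async Task<double> EstimateOffsetMsAsync(HttpClient http, string url, int samples = 10)
    {
        var estimates = new List<(double Rtt, double Offset)>();

        for (int i = 0; i < samples; i++)
        {
            double t0 = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
            double serverTime = double.Parse(await http.GetStringAsync(url)); // t1 == t2 here
            double t3 = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();

            double rtt = t3 - t0;
            double offset = ((serverTime - t0) + (serverTime - t3)) / 2.0; // NTP-style offset estimate
            estimates.Add((rtt, offset));
        }

        // Discard the noisier half of the samples (largest round trips) and average the rest.
        return estimates.OrderBy(e => e.Rtt)
                        .Take(Math.Max(1, samples / 2))
                        .Average(e => e.Offset);
    }
}
```

The same arithmetic works from JavaScript on the client; the C# here is just to show the shape of the calculation. It still cannot split an asymmetric path into separate up/down components, which is the fundamental limitation.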
Otherwise there is some academic material available for a more theoretical approach (by which I mean reading some papers first). These are some things I found: Time Synchronization in Ad Hoc Networks and A clock-sampling mutual network time-synchronization algorithm for wireless ad hoc networks. Or you could have a look at the NTP protocol.
I have not read those though :)
I'm writing a windows service that needs to execute a task (that connects to a central server) every 30 days +- 5 days (it needs to be random). The service will be running on 2000+ client machines, so the randomness is meant to level them out so the server does not get overloaded.
What would be the best way to do this? Currently, I pick a random time between 25 - 35 days since the task last ran and use that.
Does anyone have a better way? Is there a better way?
What you've got sounds like a pretty good way to me. You might want to bias it somewhat, so that if it executed after 25 days this time, it's more likely to execute after more than 30 days next time, if you see what I mean (a rough sketch of this bias follows after this answer).
Another alternative is that you could ask the central server for an appropriate "slot" - that way it could avoid overloading itself (assuming everything behaves itself).
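A minimal sketch of that bias idea, assuming the previous interval is persisted somewhere (the helper name, the +/- 2 day jitter and the exact weighting are mine, not from the answer): keeping consecutive intervals centred on 60 days keeps the long-run average near 30.

```csharp
using System;

static class ScheduleHelper
{
    // Hypothetical helper: a short interval this time makes a longer one more
    // likely next time, while staying inside the 25-35 day window.
    public static TimeSpan NextInterval(TimeSpan previous, Random rng)
    {
        double target = 60.0 - previous.TotalDays;      // e.g. 25 days last time -> aim for 35
        double jitter = (rng.NextDouble() * 4.0) - 2.0; // +/- 2 days of randomness
        double days = Math.Max(25.0, Math.Min(35.0, target + jitter));
        return TimeSpan.FromDays(days);
    }
}
```

The server-assigned slot approach discussed below removes the need for this kind of client-side guessing entirely.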
I would certainly do what Jon suggests in his second paragraph and move the logic for deciding when to execute next to the server. This way you effectively have the clients under your control, and you can change your algorithm without having to redistribute the app to your 2000+ machines.
Can the server tell the client when next to connect? If so the server could have a pool of 'scheduled connection slots' that are evenly distributed throughout the time interval. The server can distribute these as it likes and thus ensure an even spread.
Seems good enough. You might want to remove each day from the list as it is used, to ensure every day gets used at some point, since with purely random selection some days may never get picked!
On top of the long-term levelling, you could have the server return a status code when it is near capacity, instructing the client to try again later. At that point you could just delay an hour or so, rather than 25-35 days.