I have a Java application running on Debian that communicates with a Windows C# server program. The Java application connects to the C# server program via TCP/IP. The problem I am facing is that my Debian system time is always slower than the Windows server's system time. Both applications mostly run on an internal network with no Internet access.
Is there any way to synchronize the time between these two applications?
I have read about NTP. Can Java use NTP to synchronize time with the C# program?
Must the C# program run as an NTP server? (Is there a way to do that?)
If I simply implement a message exchange between the two applications, will there be any problem?
I would greatly appreciate links to study possible implementations.
The best of all worlds would be to run your own local NTP server, and then sync both boxes to your local NTP server independently. You can even run the server on one of the boxes. Then you will have a common timing baseline from which to operate.
Alternatively, if you have no access or support to do this, why not send the time in the data packets transmitted between the systems? Then you will have the information needed to understand the differential between the machines. (This does not take into account transmission delays between the boxes, but it may be good enough to get the job done.)
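If you go the message-exchange route, the standard NTP arithmetic gives a latency-compensated offset from four timestamps. A minimal sketch in Java (the class and method names are illustrative, not from either program); the server only has to echo back its receive and send times:

```java
// NTP-style clock-offset estimate, assuming the server echoes back its
// receive and send timestamps (t1, t2) in the reply message.
public class ClockOffset {
    /** Estimated offset of the server clock relative to the client clock.
     *  t0: client send, t1: server receive, t2: server send, t3: client receive. */
    public static long offsetMillis(long t0, long t1, long t2, long t3) {
        return ((t1 - t0) + (t2 - t3)) / 2;
    }

    /** Round-trip network delay, with the server's processing time removed. */
    public static long delayMillis(long t0, long t1, long t2, long t3) {
        return (t3 - t0) - (t2 - t1);
    }

    public static void main(String[] args) {
        // Example: server clock runs 500 ms ahead, 40 ms each way on the wire,
        // 20 ms of server processing time.
        long t0 = 1_000, t1 = 1_540, t2 = 1_560, t3 = 1_100;
        System.out.println("offset = " + offsetMillis(t0, t1, t2, t3) + " ms");
        System.out.println("delay  = " + delayMillis(t0, t1, t2, t3) + " ms");
    }
}
```

Halving the symmetric part of the round trip cancels most of the network delay, which is exactly the adjustment NTP itself performs.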
Your network should have one centralized NTP service against which all other clocks in that network precisely synchronize themselves. Ideally, that NTP-server would synchronize itself against an Internet time-standard, but whether it does or not, it should be "the one and only source of Truth" for your entire network.
The old adage literally applies here: "a man with one watch always knows what time it is; a man with two watches never does."
It is not appropriate for your application to attempt to manage time-sync, even if somehow it possessed the necessary privileges to do so. (And it shouldn't!!) Instead, it should require that the clocks of all systems must at all times be properly synchronized against one master source. This should be its mandatory prerequisite, but not its personal responsibility.
Please don't be confused by the title of this question; I don't know the exact technical term for what I want to accomplish :). My requirement may be a little strange, and I have already implemented it, but I need a best practice/method to do it properly.
Here is my situation.
I am developing a Windows application for monitoring client systems (tracking software on the client side, monitoring software on my system). Many systems are connected to a LAN, and I have one monitoring system. Whenever certain actions happen on a client system, I need to be notified. I cannot use any database on this network, so since my system is also connected to the LAN, I shared a folder on it. Whenever an action happens on a client system, the tracking software creates a file describing the event in that shared folder. The monitoring software uses a timer that checks the shared folder for new files at a fixed interval (15 minutes). If a file is found, the monitoring system knows an event has happened and displays it.
The problem is that I only get notified up to 15 minutes after the event, and I don't think this is the best way to do it. There must be better methods. Is there any way to register an event directly with my monitoring application from the client machine?
Please NOTE: I cannot use any Database for this purpose.
Any suggestions will be appreciated.
Take a look at SignalR - it provides real time notification and can be used exactly as you describe.
You would not require a database (but remember if your server isn't running you will miss events - this may or may not be acceptable).
Take a look at FileSystemWatcher. It monitors directories and raises events. In my experience it works well, but it can fail under large amounts of traffic.
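FileSystemWatcher is the .NET API; the same event-driven idea exists in Java as java.nio.file.WatchService. A minimal sketch (using a temp directory as a stand-in for the real shared folder, and an invented event file name), showing how the monitor can block until the OS reports a new file instead of polling every 15 minutes:

```java
import java.nio.file.*;
import java.util.concurrent.TimeUnit;

// Event-driven directory watching, the Java analogue of .NET's
// FileSystemWatcher, using java.nio.file.WatchService.
public class SharedFolderWatcher {
    // Creates a temp directory as a stand-in for the real shared folder,
    // drops one event file into it, and returns the file name reported
    // by the watcher (or null if nothing arrives within the timeout).
    public static String watchOnce() throws Exception {
        Path shared = Files.createTempDirectory("shared");
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            shared.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

            // Simulate the tracking software dropping an event file.
            Files.createFile(shared.resolve("event-001.txt"));

            // Block (with a timeout) until the OS reports the new file.
            WatchKey key = watcher.poll(10, TimeUnit.SECONDS);
            if (key == null) return null;
            for (WatchEvent<?> ev : key.pollEvents()) {
                return ev.context().toString();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("New event file: " + watchOnce());
    }
}
```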
This sounds like a perfect candidate for MSMQ (MS Message Queue) and Triggers.
Create an MSMQ queue that all your tracking software instances can write to. Then have an MSMQ trigger (perhaps connecting to a front end through WCF/named pipes) display an alert in your monitoring software.
You may want to use the WCF framework.
Here are two links that can help you:
wcf-tutorial-events-and-callbacks
wcf-tutorial-basic-interprocess-communication
Basically, I am making a program that blocks internet access after 11 PM. My only problem is that there are many ways to bypass it. For example, the user can initiate a shutdown, wait until the OS kills my process, and then cancel the shutdown operation (Windows 7).
Is there any way to make sure that the program won't get terminated before the PC actually shuts down?
If your goal is to block internet access, I recommend enforcing this rule on your router rather than on your PCs. It would be a much simpler, much more reliable solution. Your router probably already supports the feature, but if it doesn't you can buy a new consumer-grade router (dirt-cheap) and/or install a custom firmware that does (see Tomato Firmware for the Linksys WRT-54GL and company).
If the router approach just won't work for you, and you must block internet access in software, I would first suggest investigating Windows "local policy" or "group policy" to see if they can do what you want.
If that's too complex for your taste, try finding an off-the-shelf solution. Look into ZoneAlarm or NetNanny to see if one of them will do the trick.
But if you are bent on writing a C# program to do it for you, you probably want to look into writing a Windows Service. Services are more complex to write and deploy, but they can be configured to run at boot and are not slaved to a user session like regular desktop apps.
That's actually somewhat complex. It's like a virus - how do you keep it running, always?
You might want to read about drivers. Drivers have the highest level of trust in the operating system; they can physically access anything in the computer. Anything but a driver or a core file can be closed by the user manually, in some way or another.
Another thing you can do is to "burn" the file into Kernel.DLL or similar. You can do that with a different operating system on the computer (e.g. Linux) or by physically writing to the hard disk (not via Windows's API). To physically access the drive, check this out.
I found good advice on how to change the system time here.
OK... but what is the best strategy for changing the local system time from the WPF client application?
For example, my application periodically gets some data from the server, and the server time could be passed along with it.
Or maybe it is better to use an additional thread that asks the server for its time and keeps adjusting the local system time...
I don't know which approach is better.
Thanks for any clue.
It is better not to do this at all: changing the system time requires admin privileges, so your program would have to run as admin (which may be acceptable in your case, but is normally not a good idea).
It also takes some effort to correctly adjust for network latency when setting the time. Please check out how this is normally done, starting with NTP (the Network Time Protocol).
One option is to configure Windows to check the time more often by itself instead of doing it by hand, since it already implements this functionality.
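A middle ground, if admin rights are off the table, is to never touch the system clock at all and instead keep a server-time offset inside the application, updating it whenever a server timestamp arrives with the data. A sketch of the idea, written here in Java for a runnable illustration (the class and method names are invented; the same pattern applies to a WPF/C# client):

```java
import java.time.Duration;
import java.time.Instant;

// Instead of changing the OS clock, keep the measured server offset and
// apply it inside the application whenever a timestamp is needed.
public class ServerClock {
    private volatile Duration offset = Duration.ZERO;

    // Call this whenever a server timestamp arrives with a data payload.
    public void update(Instant serverTime, Instant localReceiveTime) {
        this.offset = Duration.between(localReceiveTime, serverTime);
    }

    // "Server now" as seen from this machine.
    public Instant now() {
        return Instant.now().plus(offset);
    }

    public Duration currentOffset() {
        return offset;
    }

    public static void main(String[] args) {
        ServerClock clock = new ServerClock();
        Instant local = Instant.now();
        clock.update(local.plusMillis(500), local); // server reported 500 ms ahead
        System.out.println("offset = " + clock.currentOffset().toMillis() + " ms");
    }
}
```

This keeps all server-consistent timestamps inside the application, with no elevated privileges needed.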
I am new to parallel programming. I have researched a lot about using MPI and the Windows HPC Server SOA programming model, but now I am more confused than ever. The task at hand is to run a program across multiple computers (whose IP addresses will be provided by the user). The program takes an input file containing millions of strings and extracts certain regions. I have completed the programming (in C# and .NET 4) and have successfully tested it on a single computer. The problem is that I now have to parallelize it to speed things up. I just want somebody to point me toward an approach that doesn't involve using MPI. I hope my question is clear enough.
Thanks.
You could try Dryad. It includes a LINQ-like API.
You will need to instantiate some behaviour on the target machines that will need to be open and listening for your requests. I suggest you investigate WCF for communicating across your network.
Depending on your environment, security is often an interesting hurdle for this type of project.
You could try reworking your application to use Silverlight clients as processors.
http://www.codeproject.com/KB/silverlight/gridcomputing.aspx
If I understand your question, one solution is to change your program so that it listens on some port for work to do, and get it running on a bunch of computers. Then you want a job submitter running on one or more machines. The submitter can receive IP addresses corresponding to a subset of the computers running your program, and then communicate with the already-running instances of your program, giving them any information they don't already have about which portion of work they're to do.
The worker program, running on many computers, is a "service", and the job submitter program is a "client".
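As a rough illustration of that worker/submitter split, here is a self-contained Java sketch in which the "service" listens on a socket for a work description and the "client" submits one. Upper-casing the input stands in for the real string extraction, and the wire protocol (one line in, one line out) is invented for the example:

```java
import java.io.*;
import java.net.*;

// Minimal worker ("service") + submitter ("client") sketch: the worker
// listens on a port for a work description, processes it, and replies.
public class WorkerDemo {
    // Worker: accept one request, do the "work" (upper-case it), reply.
    static void serveOnce(ServerSocket server) throws IOException {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(in.readLine().toUpperCase()); // stand-in for real extraction
        }
    }

    // Submitter: send one portion of the work to a worker, collect the result.
    static String submit(int port, String work) throws IOException {
        try (Socket s = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(work);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral port for the demo
        Thread worker = new Thread(() -> {
            try { serveOnce(server); } catch (IOException ignored) {}
        });
        worker.start();
        System.out.println(submit(server.getLocalPort(), "chunk of strings"));
        worker.join();
        server.close();
    }
}
```

In the real deployment each worker would run this listener permanently on a known port, and the submitter would iterate over the user-supplied IP addresses, handing each machine its portion of the input file.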
I've got a C# service that currently runs single-instance on a PC. I'd like to split this component so that it runs on multiple PCs. Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine.
Data synchronization can be done by the DB, so that should not be much of an issue. My current idea is to use some kind of load balancer that splits and sends the incoming requests to the array of PCs and makes sure the work is actually processed.
How would I implement such a functionality? I'm not sure if I'm asking the right question. If my understanding of how this goal should be achieved is wrong, please give me a hint.
Edit:
I wonder if the idea given above (a load balancer splits work packages among the PCs and checks for results) is feasible at all. If there is some kind of already-implemented solution to this seemingly common problem, I'd love to use it.
Availability is a critical requirement.
I'd recommend looking at a Pull model of load-sharing, rather than a Push model. When pushing work, the coordinating server(s)/load-balancer must be aware of all the servers that are currently running in your system so that it knows where to forward requests; this must either be set in config or dynamically set (such as in the Publisher-Subscriber model), then constantly checked to detect if any servers have gone offline. Whilst it's entirely feasible, it can complicate the scaling-out of your application.
With a Pull architecture, you have a central work queue (hosted in MSMQ, SQL Server Service Broker or similar) and each processing service pulls work off that queue. Expose a WCF service to accept external requests and place work onto the queue, safe in the knowledge that some server will do the work, even though you don't know exactly which one. This has the added benefits that each server monitors its own workload and picks up work as and when it is ready, and you can easily add or remove servers to/from this model without any change in config.
This architecture is supported by NServiceBus and the communication between Windows Azure Web & Worker roles.
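The pull model can be sketched in-process with a shared queue and worker threads, each pulling an item whenever it is free. In production the queue would live in MSMQ or Service Broker rather than in memory; Java is used here purely for a runnable illustration:

```java
import java.util.*;
import java.util.concurrent.*;

// In-process sketch of the pull model: one shared queue, and worker
// threads that each pull items whenever they are free. No coordinator
// needs to know how many workers exist or which one takes which item.
public class PullModelDemo {
    public static List<String> run(List<String> work, int workers) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(work);
        List<String> results = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                String item;
                // Each worker pulls until the queue is drained.
                while ((item = queue.poll()) != null) {
                    results.add("done:" + item);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(List.of("a", "b", "c", "d"), 3));
    }
}
```

Adding a server to the real system is then just starting another process that polls the same durable queue; nothing else has to be reconfigured.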
From what you said, each PC will require a full copy of your service:

"Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine."

Otherwise you won't be able to move its work to another PC.
I would be tempted to have a central server that farms out work to the individual PCs. This means you would need some form of communication between the machines, and a record kept on the central server of what work has been assigned where.
You'll also need each machine to measure its CPU load and reject work if it is too busy.
A multi-threaded approach to the service would make good use of the multiple processor cores that are ubiquitous nowadays.
How about using a server and multi-threading your processing? Or even multi-threading on a PC as you can get many cores on a standard desktop now.
This obviously doesn't deal with the machine going down, but could give you much more performance for less investment.
You can look into Windows clustering. You will also have to handle a set of issues that depend on the behaviour of the service (post more details about the service itself and I can expand this answer).
This depends on how you want to split your workload. This is usually done in one of two ways:
1. Splitting the same workload across multiple services
This means the same service is installed on different servers and does the same job. Assume your service reads huge amounts of data from the DB servers, processes it to produce large client-specific data files, and finally sends those files to the clients. In this approach, all the service instances installed on the different servers do the same work, but they split it between them to increase performance.
2. Splitting parts of the workload across multiple services
In this approach, each service is assigned an individual job and works toward a different goal. In the example above, one service is responsible for reading data from the DB and generating the huge data files, while another service is configured only to read the data files and send them to the clients.
I have implemented the second approach in one of my projects, because it let me isolate and debug errors in case of failure.
The usual approach for a load balancer is to split service requests evenly between all service instances.
For each work item (request) you can store the relevant information in a database. Each service should then also have at least one background thread checking the database for abandoned work items.
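The abandoned-work check can be sketched as follows: each claimed item records when it was claimed, and a periodic pass re-queues anything held past a timeout (as happens when the claiming instance dies). The sketch keeps the state in memory for illustration; in the real system it would live in the database, and all names here are invented:

```java
import java.util.*;

// Reclaims work items whose claim has expired, so another service
// instance can pick them up after the original claimant fails.
public class WorkReclaimer {
    private final Map<String, Long> claimedAt = new HashMap<>(); // item -> claim time (ms)
    private final long timeoutMillis;

    public WorkReclaimer(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    public void claim(String item, long nowMillis) {
        claimedAt.put(item, nowMillis);
    }

    public void complete(String item) {
        claimedAt.remove(item);
    }

    // Returns items whose claim has expired; the caller re-queues them.
    public List<String> reclaimAbandoned(long nowMillis) {
        List<String> abandoned = new ArrayList<>();
        claimedAt.entrySet().removeIf(e -> {
            if (nowMillis - e.getValue() > timeoutMillis) {
                abandoned.add(e.getKey());
                return true;
            }
            return false;
        });
        return abandoned;
    }

    public static void main(String[] args) {
        WorkReclaimer r = new WorkReclaimer(5_000);
        r.claim("req-1", 0);
        r.claim("req-2", 0);
        r.complete("req-2");                        // finished normally
        System.out.println(r.reclaimAbandoned(10_000)); // only req-1 is reclaimed
    }
}
```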
I would suggest that you publish your service through WCF (Windows Communication Foundation).
Then implement a "central" client application which can keep track of available providers of your service and dish out work. The central app will act as scheduler and load balancer of the tasks to be performed.
Check out Juval Löwy's book on WCF ("Programming WCF Services") for a good introduction to this topic.
You can have a look at NGrid : http://ngrid.sourceforge.net/
or Alchemi : http://www.gridbus.org/~alchemi/index.html
Both are grid computing frameworks with load balancers that will get you started in no time.
Cheers,
Florian