I need to process thousands of user details coming from different client web applications. I have finished a console app that does the actual processing. I have also decided to use MSMQ (the console app will get the user details from a queue).
I need help deciding how the client web applications will pass data to the queue. I am thinking I can add a WCF service that will receive data from the client apps and pass it on to the queue.
Would this be the best way to go, or is there a better way?
If the whole architecture is Microsoft-based, I would suggest pushing messages to MSMQ using an in-proc DLL, which is much faster than going through WCF (WCF adds one more layer to the architecture and slows the process down, since it has to serialize/deserialize the objects). If you design this component properly (following the SOLID principles) and keep it decoupled from your code, you can easily switch to WCF later, if you need it, by adding a data contract and an endpoint to expose your component as a service (at the end of the day, WCF exposes an interface).
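For illustration, here is a minimal sketch of such an in-proc component using the classic System.Messaging API. The UserDetails type, the interface, and the queue path are all made up for the example:

using System.Messaging; // reference System.Messaging.dll

// Hypothetical message type; your real user-details class goes here.
public class UserDetails
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public interface IUserDetailsPublisher
{
    void Publish(UserDetails details);
}

// In-proc publisher the web apps call directly. Keeping it behind an
// interface makes a later switch to WCF a matter of swapping implementations.
public class MsmqUserDetailsPublisher : IUserDetailsPublisher
{
    private readonly string _queuePath;

    public MsmqUserDetailsPublisher(string queuePath) // e.g. @".\private$\userdetails"
    {
        _queuePath = queuePath;
    }

    public void Publish(UserDetails details)
    {
        using (var queue = new MessageQueue(_queuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(UserDetails) });
            queue.Send(details);
        }
    }
}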
Yes, it would be the best approach, in that this is what WCF is for. Because WCF is config-driven, you'll be able to use different binding types for sending the data across, to suit the environment you're in.
The assumption is that the web clients are all (mostly) out on the public internet; being on a private network would give you more options.
WCF can use a queue as a binding type; I'm not sure that gives you any advantage, since you're going to put the messages into a queue anyway. A synchronous WCF call using an HTTP binding will be fine performance-wise, as handing the message to your MSMQ queue should be pretty quick.
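If you do go the WCF route, the service can stay a thin shim that just hands the message to the queue. A rough sketch, reusing the IUserDetailsPublisher idea from the other answer (the contract and service names are invented):

using System.ServiceModel;

[ServiceContract]
public interface IUserDetailsService
{
    // One-way: the client waits only for the enqueue, not the processing.
    [OperationContract(IsOneWay = true)]
    void Submit(UserDetails details);
}

public class UserDetailsService : IUserDetailsService
{
    private readonly IUserDetailsPublisher _publisher =
        new MsmqUserDetailsPublisher(@".\private$\userdetails");

    public void Submit(UserDetails details)
    {
        _publisher.Publish(details); // handing off to MSMQ is quick
    }
}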
Take a look at NServiceBus
I have a problem and not much experience in C#, so I did a lot of research and I'm still stuck.
I have to make two C# applications. The first is a Windows Forms app; the second runs in the background. The first application will be a point of sale (POS) that needs to communicate with the background application to get information (products, customers, etc.) and to send data. I don't want to use a web service because of problems like timeouts, so can anyone help me with some idea of how to perform this task?
It is important to mention that there will be just one background application, while there will be many POS applications (n number of apps) communicating with it.
There are myriad ways of doing interprocess communication. As the question is quite generic, I will point out some of the more common ones.
The background process can be a Windows service that updates the DB, and the POS systems query the DB to retrieve what they need. Even if the background process reads from the same DB, you can have a separate table holding "finished" information ready for the POS piece to pick up. You could use a file instead of a DB to store these finished results, but most folks prefer a DB.
You can use a WCF channel to establish communication between the POS piece and the background process (a minimal sketch follows at the end of this answer).
You can convert your background process to a web service and let your POS piece communicate with it using XML. I don't think any time-out issue should be a problem; you would have to explain what time-out issue is causing you to rule out this option.
You can convert the whole piece into a web site, and the POS will then simply be a browser.
You can use a bus like Tibco or MQ to pass data.
Or you can go the old-fashioned way of TCP sockets.
The most preferred way is usually the web-service or web-site option, depending on your constraints.
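To illustrate the WCF option above, a rough sketch of the background process self-hosting a service that the many POS applications call; the contract, operations, and address are invented for the example:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IPosService
{
    [OperationContract]
    string[] GetProducts();
}

public class PosService : IPosService
{
    public string[] GetProducts() { return new[] { "Coffee", "Tea" }; }
}

class BackgroundHost
{
    static void Main()
    {
        var host = new ServiceHost(typeof(PosService));
        host.AddServiceEndpoint(typeof(IPosService),
            new NetTcpBinding(), "net.tcp://localhost:9000/pos");
        host.Open(); // many POS clients can now connect concurrently
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}

// Each POS client connects with:
// var factory = new ChannelFactory<IPosService>(
//     new NetTcpBinding(), "net.tcp://localhost:9000/pos");
// IPosService proxy = factory.CreateChannel();
// string[] products = proxy.GetProducts();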
Typically you'll use a message queue for something like this. Message queues are a component for ensuring a clean separation of concerns and reducing cross-application coupling: they receive messages from some publisher (thus freeing the publisher of any further responsibility) and push messages to some subscriber.
RabbitMQ is a popular framework: https://www.rabbitmq.com/
(Note that RabbitMQ (and other ready-built frameworks) can sometimes be daunting for new application programmers, as they handle a great many use cases. However, the underlying concept of writing to a queue from one application and reading from the queue in the other application is really the key here. Feel free to implement a small utility of your own as a learning experience, but I do recommend a pre-existing framework if you're comfortable using one.)
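To make the concept concrete, a minimal sketch using the RabbitMQ.Client NuGet package (6.x-style API), with the publish and consume sides shown in one program for brevity; the queue name and message are placeholders:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class QueueDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "pos-data", durable: false,
                exclusive: false, autoDelete: false, arguments: null);

            // Publisher side: one application writes a message...
            var body = Encoding.UTF8.GetBytes("{\"productId\": 42}");
            channel.BasicPublish(exchange: "", routingKey: "pos-data",
                basicProperties: null, body: body);

            // ...consumer side: the other application reads it.
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
                Console.WriteLine(Encoding.UTF8.GetString(ea.Body.ToArray()));
            channel.BasicConsume(queue: "pos-data", autoAck: true,
                consumer: consumer);

            Console.ReadLine();
        }
    }
}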
One method is to use named pipes for such communication between different programs.
How to: Use Named Pipes for Network Interprocess Communication
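A bare-bones sketch of that approach with System.IO.Pipes; the pipe name and message format are invented:

using System;
using System.IO;
using System.IO.Pipes;

class PipeServer
{
    static void Main()
    {
        using (var server = new NamedPipeServerStream("pos-pipe"))
        {
            server.WaitForConnection();
            using (var reader = new StreamReader(server))
            using (var writer = new StreamWriter(server) { AutoFlush = true })
            {
                string request = reader.ReadLine(); // e.g. "GET products"
                writer.WriteLine("products: ...");  // reply to the client
            }
        }
    }
}

// The POS side connects with:
// var client = new NamedPipeClientStream(".", "pos-pipe", PipeDirection.InOut);
// client.Connect();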
If you do not want to use a web service (based on the SOAP protocol), you could use ASP.NET Web API instead. That way you can build REST-based interfaces with JSON (a JSON payload is smaller than the equivalent XML, so it is faster to stream between computers).
I think the following link can be useful to you:
http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/using-web-api-with-aspnet-web-forms
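For instance, a minimal Web API controller the clients could call over HTTP and get JSON back from; the Product type is made up:

using System.Collections.Generic;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : ApiController
{
    // GET api/products (returned objects are serialized to JSON by default)
    public IEnumerable<Product> Get()
    {
        return new[] { new Product { Id = 1, Name = "Coffee" } };
    }
}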
I need to call an external web service on certain events in my application. I don't want to modify my application or introduce any dependency on that external web service, so I need to find a way to do this with some sort of external component.
One possible approach is to create a database view that gets filled when certain events occur in my application. I would then set up a trigger on that view which calls a CLR function, and in that CLR function I would call the external web service. This gives me "real-time" integration, which is good. But this approach has downsides. The major one is that calling a web service from the CLR seems to be a bad idea, since it will block the main SQL thread (?!) until the CLR receives an answer.
So far, I have only found that setting this property may help with the performance issues:
System.Net.ServicePointManager.DefaultConnectionLimit = 9999
You can find more about it here.
Now that you know my needs (real-time, or at least close-to-real-time, integration without any calls from my application to the external web service), is there a better way to do it?
One other approach I can think of is having a service that periodically checks for changes in my DB that need to trigger calls to the external web service. Once this service detects such a change, it calls the web service and transfers the data. This is not true real-time integration, of course. I must admit that, except for the performance issues, I like the triggers-and-CLR approach much more, since it guarantees real-time integration and has no effect on my application whatsoever.
I am not sure that I would agree with the design of moving the web-service call into the database. However, I am sure there are reasons why you wouldn't want to change the application.
Here are a couple of options you can try:
1) Instead of the database and CLR making web-service calls, use a message queue. NServiceBus is a good choice for passing event occurrences as messages, which can then trigger the call (see the sketch below).
2) If you are stuck with using SQL Server to store the events, look at SQL Server Service Broker.
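As a sketch of option 1, publishing an event and handling it with NServiceBus could look roughly like this (v7-style API; the event type, endpoint name, and transport are illustrative, and a real application would start the endpoint once at startup rather than per call):

using System.Threading.Tasks;
using NServiceBus;

// Illustrative event raised when something happens in the application.
public class OrderCreated : IEvent
{
    public int OrderId { get; set; }
}

public static class EventPublisher
{
    public static async Task PublishOrderCreated(int orderId)
    {
        var config = new EndpointConfiguration("MyApp.Publisher");
        config.UseTransport<LearningTransport>(); // swap for MSMQ, RabbitMQ, etc.

        IEndpointInstance endpoint = await Endpoint.Start(config);
        await endpoint.Publish(new OrderCreated { OrderId = orderId });
        await endpoint.Stop();
    }
}

// A separate integration service subscribes and makes the external call,
// keeping the web-service dependency out of the main application and the DB.
public class OrderCreatedHandler : IHandleMessages<OrderCreated>
{
    public Task Handle(OrderCreated message, IMessageHandlerContext context)
    {
        // call the external web service here
        return Task.CompletedTask;
    }
}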
I'm trying to design an application that will allow two users over a network to play the prisoner's dilemma game (http://en.wikipedia.org/wiki/Prisoner%27s_dilemma).
Basically, this involves:
Game starts (Round 1).
Player 1 chooses to either cooperate, or betray.
Player 2 chooses to either cooperate, or betray.
Each player's decision is then displayed to the other.
Round 2 begins
Etc.
I've done some thinking and searching and I think the application should contain the following:
A Server class that accepts incoming TCP/IP connections
GUI clients (separate program)
For each connection (maximum of 2), the server will create a new ConnectedClient instance. This class will contain the details of the two players' machines/identities.
The Server class and the ConnectedClient class will connect/subscribe events to each other, so they can alert one another when, for example, a server instruction is ready to transmit to the players, or the players have transmitted their inputs to the server.
I'm not sure whether the best approach is to use a single thread to do all the work, or to make it multithreaded. Single-threaded would obviously be easier, but I'm not sure whether it is possible in this situation; I've never made an application requiring TCP/IP connections before, and I'm not sure if you can listen for two incoming connections on one thread.
I've found the following guide online, but it seems that it opens two clients on two threads which communicate directly with each other, bypassing the server (which I need in order to control the game logic): http://www.codeproject.com/Articles/429144/Simple-Instant-Messenger-with-SSL-Encryption-in-Cs
I'd be very interested in, and grateful for, any advice on how you would go about implementing the application (mainly the server class).
I hope I've explained my intentions clearly. Thanks in advance.
My first piece of advice would be to forget about TCP/IP and sockets here. You definitely can do it with that technology stack, but you would also get a lot of headaches implementing all the things you want, because it is too low-level a technology for this class of task. I would go with TCP/IP and sockets only out of academic interest, if I needed tremendous control over the communication, or if I had very high performance requirements.
So my second piece of advice would be to look at WCF. Don't be afraid if you haven't used it before; it's not that difficult, and if you were ready to use sockets for your app, you can definitely handle WCF. For your task you can create the basic communication within 1-2 hours from scratch using any WCF tutorial.
So I would create a server-side WCF service with some API functions containing your business logic. It can be hosted within a Windows service, IIS, or even a console application.
Your clients would then use that WCF service, calling its functions as if they were methods of a local class in your project. WCF can also help you with the events you want (that's a slightly more advanced topic, though), and you can almost forget about threading here; most of it works out of the box.
First, as others have said, separate your game logic as much as you can, so the basic functionality won't depend too much on your communication infrastructure.
For the communication, WCF can handle the task. You can make your clients send a request to a service hosted in IIS, perform some kind of identification/authentication, and open a duplex channel through which your service can push results and communicate the start of new rounds.
Once one client connects, the service waits for another. When the second arrives, it notifies the first client using the duplex-channel callback and awaits its choice. Then it asks the second user and awaits that response. When it comes, it notifies both of the result and restarts the game.
Going a little bit deeper in the implementation:
You will have a service with some operations (like Register and PushDecision; more if needed). You will also define a callback interface with the operations your service needs to push to the client (NotifyResult and RequestDecision; again, these are examples). You then create proxies for your clients that map to your service operations, and implement the callback operations so that they expose events and raise them when the service pushes messages.
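In code, the two contracts described above could look roughly like this (the operation names come from the answer; the parameters are invented):

using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IGameCallback))]
public interface IGameService
{
    [OperationContract]
    void Register(string playerName);

    [OperationContract]
    void PushDecision(bool cooperate);
}

// Operations the service pushes to the client over the duplex channel.
public interface IGameCallback
{
    [OperationContract(IsOneWay = true)]
    void RequestDecision(int round);

    [OperationContract(IsOneWay = true)]
    void NotifyResult(bool opponentCooperated, int yourScore, int opponentScore);
}

// Server side, inside Register: grab the callback channel and save it.
// var callback = OperationContext.Current.GetCallbackChannel<IGameCallback>();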
A use case:
Client A creates the proxy and calls Register on the server. The server receives the call, registers the client, and saves the callback object in its state. A duplex connection is now established. What does that mean? It means that (if you're using the PollingDuplexBinding, as you probably will) from now on the proxy object in Client A will make long-poll requests to the server, checking whether there is a callback message. If there isn't, it long-polls again. If there is, it calls the corresponding callback method on the proxy, passing the data the server pushed. The callback method in the proxy will typically raise an event or execute a delegate; it's up to you to choose.
Client B connects (calling Register), and the same happens as with A. The server, noticing that two clients are now connected, requests a decision from A through A's saved callback. This can happen during the processing of B's Register call, or it can be triggered to execute on a new thread (or better, run on the ThreadPool or start a new Task) within B's Register call.
Client A receives the server's callback requesting its choice. It can then notify the user and get the choice through the UI. A new call is made to the server (PushDecision, for example). The server receives Client A's choice and asks B in the same way. Once it has both responses, it calculates the result and pushes the outcome to both clients.
An advantage of using duplex channels with PollingDuplex in WPF is that, as it uses long polling, there is no need to use any port other than 80.
This is by no means a final implementation; it's just a little guide to give you some concrete ideas instead of misty advice. Of course, there may be a bunch of other ways of doing this with WCF.
We can first assume that the application handles only two users at a time; then, if you want, you can scale up, for example by making your service keep some form of state in a mapping table with locked access.
Some thoughts on WCF: there is an easy path to start developing with WCF using the Visual Studio tools (svcutil), but I don't like that approach. You don't get to know the WCF infrastructure well, you become tied to the verbose magic with which it generates your proxies, and you lose flexibility, especially in special scenarios like the duplex polling you may want to use here.
The other way, manually creating your services and proxies, is not that hard, though, and it gets very interesting once you realize what you can do with it. Related to that, I can give you one piece of advice: do everything you can to make your proxy operations use the Task-based Asynchronous Pattern (you can see the different ways to implement proxy operations here). This will make your code much cleaner and more straightforward when combined with the C# async/await keywords, and your UI will be a joy to implement.
I can recommend some links to get you started. Some of them are old, but very didactic.
There used to be a fantastic article on WCF at this link, but it seems to be offline now. Luckily, I found the content from it available in a file at this link.
This one covers your hosting options.
Topics on WCF infrastructure: link
Topics on Duplex Services: link link link
Topics on Task-based Async Pattern: link link link
Well, one piece of advice I can give you, if you insist that all users communicate through the server and you want your application to scale:
Separate your logic (by understanding each part of the logic you want to build on the server)
Make your classes such that they can handle multiple users per transaction
Use I/O completion ports (IOCP) whenever possible
It depends on the structure of your application. If you need authentication, user profiles, etc., you may introduce WCF or some other web service for the users and hide your actual work in the background (this will cost you performance, but it might be the only suitable solution you have). In that case you have your authentication framework at the top of your server logic, and pipelined action logic behind it: users authenticate to get access to the services presented by the server, and those services pipeline all users and handle as many as possible simultaneously. If you don't need authentication, then the clients might communicate with your server logic directly, and you can use completion ports on each user's request. There is a lot of work to be done here.
I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved:
A web service needs to be accessible from multiple clients simultaneously, and the service needs to return results from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we're talking about a couple of thousand or more queries per minute.
My initial draft approach was to use a Windows service as the WCF host, with the class implementing the service contract decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple), holding a list object and custom locking for accessing it. So basically I have a WCF service singleton with a list as the shared data, serving multiple clients. What I do not like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right".
What I really want is one Windows service running an instance of the IP-list-holding container class, a second service running the WCF service contract implementation, and a way for the latter to query the former nicely, with minimal blocking. Using another WCF channel between them would not really take me far from the initial draft implementation, or would it?
What approach would you take? The project is still at a very early stage, so a complete design re-do is not out of the question.
All ideas are appreciated. Thanks!
UPDATE: The data set will change dynamically. The web service will have a separate method to add an IP or IP range, and on top of that there will be a scheduled task that triggers a data cleanup every 10-15 minutes according to some rules.
UPDATE 2: A separate benchmark project will be kicked off that will use MySQL as the data backend (instead of the in-memory list).
It depends how far it has to scale. If a single server will suffice, then fine; keep it conveniently in memory (as long as you can recreate the data if the server gets restarted). If the data-volume is low, then simple blocking (lock) should work fine to synchronize the data, or for higher throughput a ReaderWriterLockSlim. I would probably not store it directly in the WCF class instance, though.
I would avoid anything involving sessions (if/when this ties into the WCF life-cycle); this is rarely helpful to simple services.
For distributed load (over multiple servers) I would give consideration to a separate dedicated backend. A database or memcached / AppFabric / etc would be worth consideration.
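Putting those suggestions together, a rough sketch: the list lives in its own class guarded by a ReaderWriterLockSlim, and the singleton service (using the ServiceBehavior attribute from the question) just delegates to it. The contract and names are illustrative:

using System.Collections.Generic;
using System.ServiceModel;
using System.Threading;

// Shared data kept outside the WCF service class, as suggested above.
public static class IpStore
{
    private static readonly List<string> _entries = new List<string>();
    private static readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public static bool Contains(string ip)
    {
        _lock.EnterReadLock();
        try { return _entries.Contains(ip); } // real code would match ranges too
        finally { _lock.ExitReadLock(); }
    }

    public static void Add(string ipOrRange)
    {
        _lock.EnterWriteLock();
        try { _entries.Add(ipOrRange); }
        finally { _lock.ExitWriteLock(); }
    }
}

[ServiceContract]
public interface IIpValidationService
{
    [OperationContract]
    bool Validate(string ip);

    [OperationContract]
    void AddEntry(string ipOrRange);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class IpValidationService : IIpValidationService
{
    public bool Validate(string ip) { return IpStore.Contains(ip); }
    public void AddEntry(string ipOrRange) { IpStore.Add(ipOrRange); }
}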
I've got a C# service that currently runs single-instance on a PC. I'd like to split this component so that it runs on multiple PCs. Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine.
Data synchronization can be done by the DB, so that should not be much of an issue. My current idea is to use some kind of load balancer that splits and sends the incoming requests to the array of PCs and makes sure the work is actually processed.
How would I implement such a functionality? I'm not sure if I'm asking the right question. If my understanding of how this goal should be achieved is wrong, please give me a hint.
Edit:
I wonder if the idea given above (a load balancer splits work packages among the PCs and checks for results) is feasible at all. If there is some kind of already-implemented solution to this seemingly common problem, I'd love to use it.
Availability is a critical requirement.
I'd recommend looking at a Pull model of load-sharing, rather than a Push model. When pushing work, the coordinating server(s)/load-balancer must be aware of all the servers that are currently running in your system so that it knows where to forward requests; this must either be set in config or dynamically set (such as in the Publisher-Subscriber model), then constantly checked to detect if any servers have gone offline. Whilst it's entirely feasible, it can complicate the scaling-out of your application.
With a Pull architecture, you have a central work queue (hosted in MSMQ, SQL Server Service Broker or similar) and each processing service pulls work off that queue. Expose a WCF service to accept external requests and place work onto the queue, safe in the knowledge that some server will do the work, even though you don't know exactly which one. This has the added benefits that each server monitors its own workload and picks up work as and when it is ready, and that you can easily add servers to or remove servers from this model without any change in config.
This architecture is supported by NServiceBus and the communication between Windows Azure Web & Worker roles.
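A toy sketch of the pull side with System.Messaging; the queue path and message type are placeholders. Every processing PC runs the same loop, so scaling out is just starting another copy:

using System;
using System.Messaging;

class Worker
{
    static void Main()
    {
        using (var queue = new MessageQueue(@".\private$\work"))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            while (true)
            {
                // Receive blocks until a message arrives; MSMQ hands each
                // message to exactly one worker, so no coordinator is needed.
                Message message = queue.Receive();
                string workItem = (string)message.Body;
                Console.WriteLine("Processing: " + workItem);
            }
        }
    }
}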
From what you said, each PC will require a full copy of your service -
"Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine."
Otherwise you won't be able to move its work to another PC.
I would be tempted to have a central server which farms out work to the individual PCs. This means you would need some form of communication between each machine and the central server, and a record kept on the central server of what work has been assigned where.
You'll also need each machine to measure its own CPU load and reject work if it is too busy.
A multithreaded approach to the service would make good use of the multiple processor cores that are ubiquitous nowadays.
How about using a single server and multithreading your processing? Or even multithreading on a PC, as you can get many cores on a standard desktop now.
This obviously doesn't deal with a machine going down, but it could give you much more performance for less investment.
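As a trivial sketch of that suggestion, the Task Parallel Library will spread CPU-bound work over the available cores; ProcessItem stands in for your real processing:

using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        int[] workItems = { 1, 2, 3, 4, 5 };

        // Runs ProcessItem on multiple threads, using the available cores.
        Parallel.ForEach(workItems, item => ProcessItem(item));
    }

    static void ProcessItem(int item)
    {
        // CPU-bound work goes here
    }
}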
You can look into Windows clustering, but you will have to handle a set of issues that depend on the behaviour of the service (if you can give more details about the service itself, I can give a more specific answer).
This depends on how you want to split your workload; this is usually done in one of two ways:
Splitting the same workload across multiple services
This means the same service is installed on different servers, all doing the same job. Suppose your service reads huge amounts of data from the DB servers, processes it to produce huge client-specific data files, and finally sends each data file to the client. In this approach, all the services installed on the different servers do the same work, but they split the work between them to increase performance.
Splitting parts of the workload across multiple services
In this approach, each service is assigned its own job and works toward a different goal. In the example above, one service would be responsible for reading data from the DB and generating the huge data files, and another service would be configured only to read the data files and send them to the clients.
I have implemented the second approach in one of my projects, because it let me isolate and debug the errors in case of any failures.
The usual approach for a load balancer is to split the service requests evenly between all service instances.
For each work item (request) you can store the relevant information in a database. Each service should then also have at least one background thread checking the database for abandoned work items.
I would suggest that you publish your service through WCF (Windows Communication Foundation).
Then implement a "central" client application which keeps track of the available providers of your service and dishes out work. The central app will act as the scheduler and load balancer for the tasks to be performed.
Check out Juval Löwy's book on WCF ("Programming WCF Services") for a good introduction to this topic.
You can have a look at NGrid: http://ngrid.sourceforge.net/
or Alchemi: http://www.gridbus.org/~alchemi/index.html
Both are grid-computing frameworks with load balancers that will get you started in no time.
Cheers,
Florian