I have two servers (and there could be more later) with a WCF service, both behind a load balancer. The client application, on multiple IIS servers (also load balanced), calls the WCF service to perform some action, let's say a Save.
The same data, let's say client information, could be opened by several users at the same time.
The Save action can then be executed by several users at the same time, and the calls will go to different WCF servers.
When a user calls Save from the UI and a Save is already in progress from another UI on the same client data, I want the second user to be alerted about it.
For that, all WCF instances should know about actions being executed in the other instances.
How can I synchronize data status between all the WCF server instances, then?
I don't want to share the data itself, just some status of the data (opened, save in progress, something like that).
Please advise, thanks.
I'm working with C#/.NET 4.
Added: the WCF service is actually hosted inside a Windows service.
The problem you are having is one of resource management.
You are trying to find a way to let your service clients all know what open handles the others have on internal state within your service, and then force them to orchestrate in order to handle this.
Pushing this responsibility onto your clients is going to make things much more complex in the long run. Ideally, clients should be able to call your service in as straightforward a manner as possible, without having to know about any other clients who may be using the service. At most, clients should be expected to retry if their call was unsuccessful.
Normally you would handle these situations by using locking - the clients could all commit changes simultaneously, and all but one would have to retry based on some exception raised or a specific response sent back (perhaps including the updated ClientInformation object), depending on how you handle it.
If you absolutely have to implement this notification stuff, then you could look at using the WCF duplex bindings, whereby your service exposes a callback contract which allows clients to register a handler for notifications, which can be used to send notifications to all clients on a different channel to the one the request was made on. These, however, are complex at best to set up and do not scale particularly well.
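For illustration, a minimal sketch of such a duplex setup (all names here are hypothetical, and a duplex-capable binding such as netTcpBinding is assumed):

```csharp
using System.ServiceModel;

// Callback contract: the service pushes notifications back to clients
// over the duplex channel.
public interface IClientDataCallback
{
    [OperationContract(IsOneWay = true)]
    void SaveInProgress(int clientId);
}

[ServiceContract(CallbackContract = typeof(IClientDataCallback))]
public interface IClientDataService
{
    [OperationContract]
    void Save(int clientId);
}

public class ClientDataService : IClientDataService
{
    public void Save(int clientId)
    {
        // Grab the caller's callback channel; a real service would keep a
        // registry of all connected clients and notify each of them.
        var callback = OperationContext.Current.GetCallbackChannel<IClientDataCallback>();
        callback.SaveInProgress(clientId);
        // ... perform the actual save ...
    }
}
```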
EDIT
In response to your comment: the other half of your question was about sharing state across load-balanced service endpoints.
Load balancing WCF is just like load balancing websites - if you need to share state across them, you must configure some backing data store which all the services have access to.
In your case the obvious place would be the database. You just need to make sure that concurrency/deadlock-related problems are caught and handled in your service code (using something like NHibernate to persist the data can help with this). I just don't see that you have a real problem here.
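As a rough illustration of the optimistic-concurrency approach at the database level (table and column names are made up, assuming a SQL Server rowversion column):

```csharp
using System.Data.SqlClient;
using System.ServiceModel;

public class ClientInformationService
{
    // The UPDATE succeeds only if the row still has the version the caller
    // originally read; otherwise another save won the race and this caller
    // gets a fault telling them to reload and retry.
    public void Save(int clientId, byte[] originalVersion, string name, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE ClientInformation SET Name = @name " +
            "WHERE Id = @id AND RowVersion = @version", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@id", clientId);
            cmd.Parameters.AddWithValue("@version", originalVersion);

            conn.Open();
            if (cmd.ExecuteNonQuery() == 0)
                throw new FaultException(
                    "This client record was changed by another user; reload and retry.");
        }
    }
}
```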
In brief, what is the best way to create Azure resources (VMs, resource groups, etc.) that are defined programmatically, without locking the web app's interface because of the long time some of these operations take?
More detailed:
I have a .NET Core web application where customers are added manually. Once a customer is added, the app automatically creates some resources in Azure. However, I noticed that my interface is 'locked' during these operations. What is a relatively simple way of detaching these operations from the web application? I had in mind sending a trigger using Service Bus or Azure Relay and triggering an Azure Function. However, it seems to me that all these resource operations return something back, and my web app waits for that. I need a 'send and forget' method: just send out the trigger to create these resources, don't bother with the return values for now, and continue with the app.
If a 'send and return' method also works within my web app, that is also fine.
Any suggestions are welcome!
You need to queue the work to run in the background and then return from the action immediately. The easiest method of doing this is to create a hosted service. There are a couple of different ways to do this:
Use a queued background service and actually queue the work to be done in your action (a minimal sketch follows this list).
Just write the required info to a database table, Redis store, etc. and use a timed service to perform the work on a schedule.
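A minimal sketch of the first option, assuming .NET Core's generic host (all type names here are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// In-memory work queue shared between the web action and the worker.
public class ProvisioningQueue
{
    private readonly Channel<Func<CancellationToken, Task>> _channel =
        Channel.CreateUnbounded<Func<CancellationToken, Task>>();

    public void Enqueue(Func<CancellationToken, Task> workItem) =>
        _channel.Writer.TryWrite(workItem);

    public ValueTask<Func<CancellationToken, Task>> DequeueAsync(CancellationToken ct) =>
        _channel.Reader.ReadAsync(ct);
}

// Hosted service that drains the queue in the background.
public class ProvisioningWorker : BackgroundService
{
    private readonly ProvisioningQueue _queue;
    public ProvisioningWorker(ProvisioningQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            Func<CancellationToken, Task> workItem;
            try { workItem = await _queue.DequeueAsync(stoppingToken); }
            catch (OperationCanceledException) { break; }   // shutting down

            try { await workItem(stoppingToken); }
            catch (Exception) { /* log; keep the worker alive */ }
        }
    }
}
```

Register ProvisioningQueue as a singleton and ProvisioningWorker with AddHostedService; the controller action then just calls Enqueue(ct => CreateAzureResourcesAsync(customer, ct)) and returns immediately.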
In either case, you may also consider splitting this off into a worker service (essentially, a separate app composed of just the hosted service, instead of running it in the same instance as your web app). This allows you to scale the service independently and also insulates your web app from problems that service might encounter.
Once you've set up your service and scheduled the work, you just need some way to let the user know when the work is complete. A typical approach is to use SignalR to allow the server to notify the client with progress updates or success notifications. However, you can also just do something simple like email the user when everything is ready.
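If you go the SignalR route, the notification side can stay very small; a sketch assuming ASP.NET Core SignalR (hub and method names are made up):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Clients connect to this hub and listen for "provisioningComplete".
public class ProvisioningHub : Hub { }

// Injected into the background worker to report completion.
public class ProvisioningNotifier
{
    private readonly IHubContext<ProvisioningHub> _hub;
    public ProvisioningNotifier(IHubContext<ProvisioningHub> hub) => _hub = hub;

    public Task NotifyCompleteAsync(string userId, string resourceGroupName) =>
        _hub.Clients.User(userId).SendAsync("provisioningComplete", resourceGroupName);
}
```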
I am trying to create a monitoring application for our operations department so they can be proactive when dealing with systems that are encountering problems. I created an app that does the job, but it has some drawbacks:
Each running copy of the app sends its own pings to the systems, when one ping would suffice.
I have 3 different APIs for getting the status of our systems, depending on whether the system is hosted in IIS, WCF, or a desktop app.
To fix the first issue, I was going to create a database and have an interim service (a monitor app) make the pings; the app would then query the database for updates. After thinking about this, I realized the second issue and decided it is a future problem.
So my thought was, rather than have the interim application pinging the systems, to simply have each system expose one interface through which it posts its status to the database every x amount of time. But then I ran into a problem with the WCF and IIS services we have: these services can sit for days without anyone actually using them. How would I make these services continue to post their data?
My questions are:
Is it better to have data REQUESTED or PUSHED in this type of situation?
If REQUESTED, what is a suggested practice for maintaining a single API across multiple platforms (IIS, WCF, desktop)?
If PUSHED, how would you handle the case of the web services, which are instance-based and not continuously running?
For web services, one solution might be to implement a health-check endpoint, something that you can simply call like: webservice/isServiceUp?
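For the WCF-hosted systems, such an endpoint could be as small as this (a sketch; names are illustrative):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IHealthCheck
{
    // Getting any answer at all is the real signal that the host is up;
    // the return value can carry extra detail if needed.
    [OperationContract]
    bool IsServiceUp();
}

public class HealthCheck : IHealthCheck
{
    public bool IsServiceUp()
    {
        // Optionally probe dependencies (database, queues) before answering.
        return true;
    }
}
```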
I prefer that this information is PULLED. If a service / web service / application is down, then you can't possibly rely on it to write something to the DB... it would be possible, but highly risky and unreliable.
In a real-world situation, it is a little more complicated than that, because something might happen between your service host and the consumer (a DNS problem, for example), in which case you would want to consider the case of not getting anything back from isServiceUp (no true, no false, just a 400-level error)...
Consider using your load balancer to check on apps / web services and proactively switch to a different IP in case of issues... it is a possibility.
I have a WCF service hosted in a Windows Service. I want a website to be able to call it asynchronously and then when the work is finished the WCF service will let the website know the result. I've looked at various ways of achieving this but I would like to get some more advice on which way would be best. I've looked into using callbacks but also read they can be unreliable. I've read about not doing it this way at all and just having another interface in my service for the website to query the status. I've looked at using MSMQ which at the moment looks like my preferred way forward but would like some more info on how to set this up or whether I shouldn't do it this way.
Does anyone have any advice please?
The nature of any communication on a network is unreliable. Regarding the statement:
"I've looked into using callbacks but also read they can be unreliable"
Assuming you mean WCF callbacks, they are as unreliable as the clients/servers themselves; they all use the same mechanism.
That said, you can store the client of your WCF service in the HttpApplicationState (if the call is application-wide) or HttpSessionState (if the call is local to a session).
When generating the proxy, make sure that you check the option (or specify on the contract) that you are using asynchronous calls.
Then, you would make the call, using a callback (delegate) to indicate when the async call completed.
When the call completes, you would then store the result in the session state.
If this is something that a client on the front end needs to be aware of, then the browser will have to poll your site, checking for the return result, redirecting to a page that can display the results when the result is populated.
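A minimal sketch of that flow, assuming a proxy generated with asynchronous operations enabled (the LongRunningServiceClient type and its DoWork members are hypothetical generated names):

```csharp
using System;
using System.Web.UI;

public partial class StartWorkPage : Page
{
    protected void StartWorkButton_Click(object sender, EventArgs e)
    {
        var session = Session;                      // capture for the callback
        var client = new LongRunningServiceClient();

        client.DoWorkCompleted += (s, args) =>
        {
            // Store the outcome; other pages poll Session["WorkResult"].
            session["WorkResult"] = args.Error == null ? args.Result : "failed";
            client.Close();
        };

        client.DoWorkAsync();                       // returns immediately
    }
}
```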
Selecting a binding for your application depends on:
The architecture of your application
Requirements
Whether interoperability is required or not
The response time of the application
The time available to implement it
The infrastructure you are using or want to use
As your application is a web application built on a request/response model, you will not be able to use an asynchronous or MSMQ style for this architecture (or it is not advisable), because there will not be any thread listening for a delayed async response or MSMQ call.
You can make use of one-way methods and direct calls to methods; a one-way contract sketch follows. In this case, to reduce response time, you have to devise ways to optimize your service methods and the processing they do.
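A minimal sketch of a one-way operation (the contract and operation names are made up): the client's thread returns as soon as the message is accepted, and no response payload is sent.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // IsOneWay = true: no reply message; faults are not reported back either,
    // so the work must be safe to fire and forget.
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderId);
}
```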
I have an application built that hits a third-party company's web service in order to create an email account after a customer clicks a button. However, sometimes the web service takes longer than 1 minute to respond, which is way too long for my customers to be sitting there waiting for a response.
I need to devise a way to set up some sort of queuing service external from the web site. This way I can add the web service action to the queue and advise the customer it may take up to 2 minutes to create the account.
I'm curious about the best way to achieve this. My initial thought is to record the requested actions in a database table, which would be checked on a regular basis by a console app run via Windows Scheduled Tasks.
Any issues with that method?
Is there a better method you can think of?
I would use MSMQ; it may be an older technology, but it is perfect for the scenario you describe.
Create a WCF service to manage the queue and its actions. On the service, expose a method to add an action to the queue.
This way the queue is completely independent of your website.
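A minimal sketch of that service (the queue path and names are made up), using System.Messaging:

```csharp
using System.Messaging;
using System.ServiceModel;

[ServiceContract]
public interface IEmailQueueService
{
    [OperationContract]
    void EnqueueCreateAccount(string emailAddress);
}

public class EmailQueueService : IEmailQueueService
{
    private const string QueuePath = @".\private$\EmailAccountRequests";

    public void EnqueueCreateAccount(string emailAddress)
    {
        // Create the queue on first use, then durably store the request.
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
            queue.Send(emailAddress, "CreateAccount");
    }
}
```

A separate worker (a Windows service, say) then reads from the same queue and calls the third-party web service at its own pace.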
What if you use a combination of AJAX and a Windows Service?
On the website side: When the person chooses to create an e-mail account, you add the request to a database table. If they want to wait, provide a web page that uses AJAX to check every so often (10 seconds?) whether their account has been created or not. If it's an application-style website, you could let them continue working and pop up a message once the account is created. If they don't want to wait, they close the page or browse to another and maybe get an e-mail once it's done.
On the processing side: Create a Windows service that checks the table for new requests. Once it's done with a request it has to somehow communicate back to the user, maybe by setting a status flag on the request. This is what the AJAX call would look for. You could send an e-mail at this point too.
If you use a scheduled task with a console app instead of a Windows service, you risk having multiple instances running at the same time. You would have to implement some sort of locking mechanism (at the app or request level) to prevent processing the same thing twice.
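If you do go with the scheduled-task route, one simple app-level lock is a named mutex; a sketch (names are made up):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        bool createdNew;
        // A machine-wide named mutex: only one instance gets createdNew = true.
        using (var mutex = new Mutex(true, @"Global\EmailRequestProcessor", out createdNew))
        {
            if (!createdNew)
                return;             // another instance is already running

            ProcessPendingRequests();
            mutex.ReleaseMutex();
        }
    }

    // Hypothetical: read new rows from the request table, call the
    // third-party service, and set a status flag on each row.
    static void ProcessPendingRequests() { }
}
```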
What about the Queue class or the generic Queue<T> class?
Unfortunately, your question is too vague to answer with any real detail. If this is something you want managed outside the primary application, then a Windows service would be a little more appropriate than creating a console app... From an integration and lifecycle management perspective, this provides a nice foundation for adding other features (e.g. performance counters, hosted management services in WCF, Remoting, etc...). MSMQ is great, although there is a bit more involved in deployment. If you are willing to invest the time, there are a lot of advantages to using MSMQ. If you really want to create your own point-to-point queue, then there are a ton of examples online that can serve as an example. Here is one: http://www.smelser.net/blog/page/SmellyQueue-(Durable-Queue).aspx.
I'm working with an n-tier application using WinForms and WCF:
Engine Service (Windows Service) => WCF Service => Windows Forms Client Application
The problem is that the WinForms client application needs to be 100% available for work even if the Engine Service is down.
So how can I make a disconnected architecture in order to keep my WinForms application always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue will forward the requests to the web service. In the event the web service is down, it stays queued. The queue mechanism should check every so often to see if the web service is alive, and when it is then forward everything it has stored up.
Alternatively, you can go directly to the web service, then simply post the request to the queue in the event of an initial failure. However, the queue will still need to check on the web service every so often.
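A minimal store-and-forward sketch (the IEngineService proxy with its Ping/Send calls is hypothetical):

```csharp
using System.Collections.Concurrent;

public class WorkItem
{
    public string Payload;
}

public interface IEngineService
{
    bool Ping();                 // cheap liveness probe
    void Send(WorkItem item);    // forwards the request to the WCF service
}

public class OutboundQueue
{
    private readonly ConcurrentQueue<WorkItem> _pending = new ConcurrentQueue<WorkItem>();

    public void Enqueue(WorkItem item)
    {
        _pending.Enqueue(item);
    }

    // Called from a timer every few seconds.
    public void TryFlush(IEngineService service)
    {
        if (!service.Ping())
            return;                          // still down; keep everything queued

        WorkItem item;
        while (_pending.TryPeek(out item))
        {
            service.Send(item);              // if this throws, the item stays queued
            _pending.TryDequeue(out item);
        }
    }
}
```

For durability across client restarts, the queue contents would also need to be persisted locally (a file or local database) rather than held only in memory.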
EDIT:
Just to clarify: yes, all of the business logic would need to be available client side. Otherwise, you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (this is snipped from the above link):
They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
They connect to multiple back-end systems to exchange data with them.
They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement of "it really depends". Let me give you some examples. Take an application which will watch the filesystem for files being generated in any number of different formats (DB2, flat file, XML). The application will then import the files, display a unified view of the document to the user, and allow him to place e-commerce orders.
In this app, you could choose to detect the files, zip them up, and upload them to the server to do the transforms (applying business logic like normalization of data, etc.). But then what happens if the internet connection is down? Now the user has to wait for his connection before he can place his e-commerce order.
A better solution would be to run the business rules in the client, transforming the files there. Now let's say you had some business logic which, based on the order, would determine additional rules such as a salesman to route it to, or pricing discounts... those might make sense to sit on the server.
The question you will need to ask is: what functionality do I need to make my application function when the server is not there? Anything which falls within this category will need to be client side.
I've also never used ClickOnce deployment - we had to roll our own updater, which is a tale for another thread - but you should be able to send down updates pretty easily. You could also code your business logic in an assembly that you load from a URL, so while it runs client side it can be updated easily.
You can do all your processing offline and use something like the Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base to do the data validation on both the server and the client. This way you will have a single code base that serves both server and client.
You can use frameworks like CSLA.NET to simplify this validation process.
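As a trivial sketch of that single-code-base idea, the rules live in a shared assembly that both tiers reference (the class and rule here are made up):

```csharp
// Shared assembly referenced by both the WinForms client and the WCF service.
public static class ClientInfoRules
{
    public static bool IsNameValid(string name)
    {
        return !string.IsNullOrEmpty(name) && name.Length <= 100;
    }
}
```

The client runs the check before queuing a request, and the service re-runs the identical check before persisting, so the two tiers can never disagree about what is valid.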