There is a separate XmlWriterTraceListener for each WCF server component.
When a user performs some action, logs are written to different e2e files, one per component. At the moment we can roughly correlate records across the separate log files by timestamp, but that doesn't guarantee accuracy.
An example of when such logging is needed:
Some function is executing on the server and writing logs. We want to know which client the request came from, because several clients may be working at the same time.
Maybe we should somehow link the calls coming from the different components?
E.g. use something like a token or GUID for each call from the client and then correlate events from the different logs by it?
Is there maybe any standard option for configuring WCF logs?
Yes, there is. This is called activity tracing and WCF supports propagating activities. See more here: Configuring Tracing
As far as I understand, your client sends multiple requests to different WCF services on your server. In this case you need the client to generate an activity ID, then set it as current (use the CorrelationManager class) and configure your bindings to propagate activities (see the link above).
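As a rough sketch, the tracing configuration on each service might look like this (the listener name and log file path are placeholders). The key pieces are the `ActivityTracing` switch value and `propagateActivity="true"`, which flow the activity ID across service boundaries:

```xml
<system.diagnostics>
  <sources>
    <!-- ActivityTracing emits activity start/stop/transfer events;
         propagateActivity flows the current activity ID to downstream services -->
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="xmlListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="server.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```

On the client side, setting `Trace.CorrelationManager.ActivityId = Guid.NewGuid();` before making the call establishes the activity that gets propagated, so the Service Trace Viewer can stitch the per-component logs together.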
Related
I have a .NET service with a WCF host that uses NLog for logging. I also have a WPF client that acts as a WCF client for the .NET service. The .NET service logs all messages to a file (for now).
I want to use the client to output the current logged messages (i.e., if a logging event occurs while the client is open, then it will be shown in a textbox, for instance). If the client is closed, I don't need to see the messages.
I've thought of several ideas, but I'm not sure how good they are:
I could set up another host on the client that can receive messages from the service.
MSMQ (with or without WCF), but then I think it'll just keep adding messages, which I don't want.
I could just open the log file itself, but I don't know which one will be the active log file (seeing as this is handled by NLog).
Are there any other ideas? Is there a better way for such communication between a (Windows) service and a client?
You might want to look into SignalR. Here's an example that does something like what you are after to display WF tracking records in a browser: http://blog.petegoo.com/2011/10/02/workflow-service-tracking-viewer/.
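As a rough sketch of how that idea could combine with NLog (SignalR 2 style API; the `LogHub` class, the `logMessage` client method, and the target name are all invented for illustration), a custom NLog target can push each log event to any connected clients, so nothing accumulates when no client is listening:

```csharp
using Microsoft.AspNet.SignalR;
using NLog;
using NLog.Targets;

// Hypothetical hub the WPF client connects to; no server methods needed.
public class LogHub : Hub { }

// Custom NLog target that forwards every log event to all connected clients.
[Target("SignalRLog")]
public class SignalRLogTarget : TargetWithLayout
{
    protected override void Write(LogEventInfo logEvent)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<LogHub>();
        // Dynamic dispatch: invokes "logMessage" on every connected client.
        context.Clients.All.logMessage(Layout.Render(logEvent));
    }
}
```

The service would self-host SignalR alongside the WCF host and register this target in the NLog configuration next to the existing file target; closed clients simply aren't connected and miss nothing they care about.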
I have two servers (could be more later) with a WCF service, both behind a load balancer. The client application, on multiple IIS servers (also load-balanced), calls the WCF service to do some action, let's say a Save.
The same data, let's say client information, could be opened by several users at the same time.
The Save action can then be executed by several users at the same time, and the calls will go to different WCF servers.
I want that when a user calls Save from the UI, and there is already a Save in progress from another UI over the same client data, the second user is alerted about it.
For that, all WCF instances should know about actions being executed in the other instances.
How can I synchronize the data status between all WCF server instances, then?
I don't want to share the data itself, just some status of the data (opened, save in progress, something like that).
Please advise. Thanks!
I'm working with C#/.NET 4.
Added: the WCF service is actually hosted inside a Windows service.
The problem you are having is one of resource management.
You are trying to find a way to make your service clients all know what open handles each other has on internal state within your service, and then force them to orchestrate with each other in order to handle this.
Pushing this responsibility onto your clients is going to make things much more complex in the long run. Ideally clients should be able to call your service in as straightforward manner as possible, without having to know about any other clients who may be using the service. At most clients should be expected to retry if their call was unsuccessful.
Normally you would handle these situations by using locking - the clients could all commit changes simultaneously and all but one will have to retry based on some exception raised or specific response sent back (perhaps including the updated ClientInformation object), depending on how you handle it.
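A sketch of that optimistic pattern (the table, columns, and use of a SQL Server `rowversion` column are invented for illustration): the UPDATE only succeeds if nobody changed the row since the caller read it, and a zero row count tells the service to reject the save so the client can retry with fresh data:

```csharp
using System.Data.SqlClient;

public static class ClientInformationRepository
{
    // Returns false when another user saved first; the caller should
    // reload the data and retry (or alert the user).
    public static bool TrySave(SqlConnection conn, int clientId,
                               string name, byte[] originalRowVersion)
    {
        const string sql =
            @"UPDATE ClientInformation
              SET Name = @name
              WHERE Id = @id AND RowVersion = @rowVersion";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@id", clientId);
            cmd.Parameters.AddWithValue("@rowVersion", originalRowVersion);

            // 0 rows affected means the rowversion no longer matches:
            // someone else committed a change in between.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}
```

Because the check happens in the database, it works identically no matter which load-balanced WCF instance handles the call.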
If you absolutely have to implement this notification mechanism, then you could look at using the WCF duplex bindings, whereby your service exposes a callback contract which allows clients to register a handler for notifications; this can then be used to send notifications to all clients on a different channel from the one the request was made on. These, however, are complex at best to set up and do not scale particularly well.
EDIT
In response to your comment, the other half of your question was about sharing state across load balanced service endpoints.
Load-balancing WCF is just like load-balancing websites - if you need to share state across the services you must configure some backing data store which they all have access to.
In your case the obvious place would be the database. You just need to make sure that concurrency/deadlock related problems are caught and handled in your service code (using something like NHibernate to persist the data can help with this). I just don't see that you have a real problem here.
I have an application built that hits a third-party company's web service in order to create an email account after a customer clicks a button. However, sometimes the web service takes longer than 1 minute to respond, which is way too long for my customers to be sitting there waiting for a response.
I need to devise a way to set up some sort of queuing service external from the web site. This way I can add the web service action to the queue and advise the customer it may take up to 2 minutes to create the account.
I'm curious about the best way to achieve this. My initial thought is to record the requested actions in a database table which will be checked on a regular basis by a console app run via Windows scheduled tasks.
Any issues with that method?
Is there a better method you can think of?
I would use MSMQ, it may be an older technology but it is perfect for the scenario you describe.
Create a WCF service to manage the queue and its actions. On the service, expose a method to add an action to the queue.
This way the queue is completely independent of your website.
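A minimal sketch of the enqueuing side (the queue path, message type, and label are placeholders; assumes the private queue already exists - `MessageQueue.Create` can make it on first run):

```csharp
using System.Messaging;

// Request payload placed on the queue; XmlMessageFormatter serializes it.
public class CreateEmailAccountRequest
{
    public string CustomerEmail { get; set; }
}

public static class AccountQueue
{
    private const string QueuePath = @".\Private$\EmailAccountRequests";

    // Called by the WCF service method that the website invokes.
    public static void Enqueue(CreateEmailAccountRequest request)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(
                new[] { typeof(CreateEmailAccountRequest) });
            queue.Send(request, "CreateEmailAccount");
        }
    }
}
```

A separate worker (Windows service) would then call `queue.Receive()` in a loop and make the slow third-party call, so the website returns to the customer immediately.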
What if you use a combination of AJAX and a Windows Service?
On the website side: When the person chooses to create an e-mail account, you add the request to a database table. If they want to wait, provide a web page that uses AJAX to check every so often (10 seconds?) whether their account has been created or not. If it's an application-style website, you could let them continue working and pop up a message once the account is created. If they don't want to wait, they close the page or browse to another and maybe get an e-mail once it's done.
On the processing side: Create a Windows service that checks the table for new requests. Once it's done with a request it has to somehow communicate back to the user, maybe by setting a status flag on the request. This is what the AJAX call would look for. You could send an e-mail at this point too.
If you use a scheduled task with a console app instead of a Windows service, you risk having multiple instances running at the same time. You would have to implement some sort of locking mechanism (at the app or request level) to prevent processing the same thing twice.
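One way to sketch that request-level locking (table and status values are invented; assumes SQL Server): claim a row atomically, so two pollers can never pick up the same request, because the UPDATE both tests and sets the status in one statement:

```csharp
using System.Data.SqlClient;

public static class RequestPoller
{
    // Atomically claims one pending request; returns the claimed id, or
    // null if there is nothing new. Safe to call from multiple instances.
    public static int? ClaimNextRequest(SqlConnection conn)
    {
        const string sql =
            @"UPDATE TOP (1) EmailRequests
              SET Status = 'Processing'
              OUTPUT inserted.Id
              WHERE Status = 'New'";

        using (var cmd = new SqlCommand(sql, conn))
        {
            object id = cmd.ExecuteScalar();   // null when no row matched
            return id == null ? (int?)null : (int)id;
        }
    }
}
```

After processing, the worker updates the row to 'Done' (or 'Failed'), which is also the status flag the AJAX poll would check.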
What about the Queue class or the generic Queue&lt;T&gt; class?
Unfortunately, your question is too vague to answer with any real detail. If this is something you want managed outside the primary application, then a Windows service would be a little more appropriate than creating a console app. From an integration and lifecycle management perspective this provides a nice foundation for adding other features (e.g. performance counters, hosted management services in WCF, Remoting, etc.). MSMQ is great, although there is a bit more involved in deployment. If you are willing to invest the time, there are a lot of advantages to using MSMQ. If you really want to create your own point-to-point queue, then there are a ton of examples online. Here is one: http://www.smelser.net/blog/page/SmellyQueue-(Durable-Queue).aspx.
Hi, I have an application that operates like this:
Client <----> Server <----> Monitor Web Site
WCF is used for the communication and each client has its own session on the server. This is so callbacks can be used from the server back to the client.
The objective is that a user on the "Monitor Website" can do the following:
a) Look at all of the users currently online - that is using the client application.
b) Select a client and then perform an action on the client.
This is a training system, so the idea is that the instructor, using a web terminal, can select his or her target client and then make the client application do something. Or maybe they want to send a message to the client that will be displayed on the client's screen.
What I can't seem to do is store a list of all the clients in the server application so that it can be retrieved later. If I could do this I could then access the callback object for the client and call the appropriate method.
A method on the monitoring website would look something like this...
Service.SendMessage(userhashcode, message)
The service would then somehow look up the callback that matches the hashcode and then do something like this
callback.SendMessage(message)
So far I have tried, without luck, to serialize the callbacks into a centralized DB. However, it doesn't seem possible for the service to serialize a remote object, as the callback lives on the client.
Additionally, I thought I could create a global hash table in my service, but I'm not sure how to do this and make it accessible application-wide.
Any help would be appreciated.
Typically, WCF services are "per-call" only, i.e. each caller gets a fresh instance of the service class, which handles the request, formats the response, sends it back and then gets disposed. So typically, you don't have anything "session-like" hanging around in memory.
What you do have is not the service classes themselves, but the service host - the class that acts as the host for your service classes. This is either IIS (in that case you just need to monitor IIS), or then it's a custom app (Windows NT Service, console app) that has a ServiceHost instance up and running.
I am not aware what kind of hooks there might be to connect to and "look inside" the service host - but that's what you're really looking for, I guess.
WCF services can also be configured to be session-ful, and keep a session up and running with a service class - but again: you need to have that turned on explicitly. Even then, I'm not really sure if you have many API hooks to get "inside" the service host and have a look around the current sessions.
Question is: do you really need to? WCF exposes a gazillion of performance counters, so you can monitor and record just about anything that goes on in WCF - wouldn't that be good enough for you?
Right now, WCF services aren't really hosted in a particularly well-designed system - this should become better with the so-called "Dublin" server add-on, which is designed to host WCF services and WF workflows and give admins a great experience monitoring and managing them. "Dublin" is scheduled to be launched shortly after .NET 4.0 becomes available (which Microsoft has promised will be before the end of calendar year 2009).
Marc
What I have done is as follows...
Created a static instance in my service that keeps a dictionary of callbacks keyed by the hashcode of each WCF connection.
When a session is created it publishes itself to a DB table which contains the hash code and additional connection information.
When a user is using the monitor web application, it can get a list of connected clients from the DB and get the hashcode for that client.
If the monitor application user wants to send a command to the client the following happens..
The hash code for the session is obtained from the DB.
A method is called on the service e.g. SendTextMessage(int hashcode, string message).
This method now looks up the callback to the client from the dictionary of callbacks and obtains a reference to it.
The appropriate method in this case SendTextMessage(message) is called on the callback.
I've tested this and it works OK. I've also added functionality to keep the DB table synchronized with the actual WCF sessions and to clean up as required.
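A sketch of the static callback dictionary described above (the callback contract, service class, and method names are invented for illustration; assumes a session-ful service where the real service contract declares `IClientCallback` as its `CallbackContract`):

```csharp
using System.Collections.Concurrent;
using System.ServiceModel;

// Hypothetical callback contract implemented by the client application.
public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void SendTextMessage(string message);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class TrainingService
{
    // Static, so it is shared across all sessions in the host process.
    private static readonly ConcurrentDictionary<int, IClientCallback> Callbacks =
        new ConcurrentDictionary<int, IClientCallback>();

    // Called by each client when its session starts; the hash code is
    // also written to the DB row describing the connection.
    public void Register()
    {
        var callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
        Callbacks[callback.GetHashCode()] = callback;
    }

    // Called by the monitor website with the hash code read from the DB.
    public void SendTextMessage(int hashCode, string message)
    {
        IClientCallback callback;
        if (Callbacks.TryGetValue(hashCode, out callback))
            callback.SendTextMessage(message);
    }
}
```

`ConcurrentDictionary` keeps the lookup thread-safe without explicit locking, which matters since many sessions register and send concurrently.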
I'm working with an n-Tier application using WinForm and WCF
Engine Service (Windows Service) => WCF Service => Windows Form Client Application
The problem is that the WinForm client application needs to be 100% available for work even if the Engine Service is down.
So how can I design a disconnected architecture in order to make my WinForm application always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue will forward the requests to the web service. In the event the web service is down, it stays queued. The queue mechanism should check every so often to see if the web service is alive, and when it is then forward everything it has stored up.
Alternatively, you can go direct to the web service, then simply post it to the queue in the event of initial failure. However, the queue will still need to check on the web service every so often.
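A minimal sketch of that internal queue (the service interface and its members are placeholders standing in for the real WCF proxy): the UI enqueues and never blocks, and a periodic `Flush` drains the queue only while the service is reachable:

```csharp
using System.Collections.Concurrent;

// Hypothetical minimal view of the WCF proxy; names are invented.
public interface IEngineService
{
    bool IsAlive();            // e.g. a lightweight ping operation
    void Send(string request);
}

public class OutboundQueue
{
    private readonly ConcurrentQueue<string> pending = new ConcurrentQueue<string>();
    private readonly IEngineService service;

    public OutboundQueue(IEngineService service)
    {
        this.service = service;
    }

    // Called by the UI; never blocks on the service being up.
    public void Enqueue(string request)
    {
        pending.Enqueue(request);
    }

    // Run periodically (e.g. from a timer): drain while the service is alive.
    public void Flush()
    {
        string request;
        while (service.IsAlive() && pending.TryDequeue(out request))
        {
            try
            {
                service.Send(request);
            }
            catch
            {
                pending.Enqueue(request); // re-queue and retry on the next pass
                break;
            }
        }
    }
}
```

In a real application you would also persist the pending items to disk so queued work survives a client restart.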
EDIT:
Just to clarify, yes all of the business logic would need to be available client side. Otherwise you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (this is snipped from the above link):
They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
They connect to multiple back-end systems to exchange data with them.
They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement of "it really depends". Let me give you some examples. Take an application which watches the filesystem for files generated in any number of different formats (DB2, flat file, XML). The application then imports the files, displaying to the user a unified view of the document, and allows them to place e-commerce orders.
In this app, you could choose to detect the files, zip them up, and upload them to the server to do the transforms (applying business logic like normalization of data, etc.). But then what happens if the internet connection is down? Now the user has to wait for their connection before they can place their e-commerce order.
A better solution would be to run the business rules in the client, transforming the files there. Now let's say you had some business logic which, based on the order, determined additional rules such as a salesman to route it to or pricing discounts... these might make sense to sit on the server.
The question you will need to ask is: what functionality do I need to make my application function when the server is not there? Anything which falls within this category will need to be client-side.
I've also never used ClickOnce deployment - we had to roll our own updater, which is a tale for another thread - but you should be able to send down updates pretty easily. You could also code your business logic in an assembly that you load from a URL, so while it runs client-side it can be updated easily.
You can do all your processing offline, and use something like Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base to do the data validation both on the server and on the client. This way you will have a single code base serving both.
You can use frameworks like CSLA.NET to simplify this validation process.
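As a sketch of the shared-code-base idea (the class and rules are invented for illustration), the validation logic lives in one assembly that both the WCF service and the WinForm client reference, so offline checks and the server's authoritative checks can never drift apart:

```csharp
// Shared assembly referenced by both the WCF service and the WinForm client.
public static class OrderValidator
{
    // Same rule runs client-side (for offline work) and server-side (as the authority).
    public static bool IsValid(decimal amount, string customerId, out string error)
    {
        if (string.IsNullOrEmpty(customerId))
        {
            error = "Customer is required.";
            return false;
        }
        if (amount <= 0)
        {
            error = "Amount must be positive.";
            return false;
        }
        error = null;
        return true;
    }
}
```

The client validates before queuing work locally; the server re-runs the exact same method when the queued request finally arrives.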