I've tried to build an inter-server communication protocol with WCF, but for some reason, when a server disconnects, neither the Faulted nor the Closed event is called. This is really annoying, and I haven't found a solution to it.
private static ServiceHost loginService;

static void Load()
{
    loginService = new ServiceHost(typeof(LoginService), new Uri[] { new Uri(Settings.Instance.LoginServiceURI) });
    loginService.AddServiceEndpoint(typeof(ILoginService), ServiceHelpers.GetBinding(new Uri(Settings.Instance.LoginServiceURI)), Settings.Instance.LoginServiceURI);
    loginService.Faulted += new EventHandler(loginService_Faulted);
    loginService.Open();
}

static void loginService_Faulted(object sender, EventArgs e)
{
    Log.WriteLine(LogLevel.Error, "LoginWCF Faulted. Restarting.");
    loginService.Close();
    Load();
}
The stupid thing is that only the functions inside the ILoginService interface will throw an exception when the connection has died. I thought TCP had a keep-alive of its own?
So far I haven't found a way to determine whether the channel is faulted or closed until it makes a call and gets an exception. The state only transitions to Faulted when a call causes an exception, because that state is set in the CommunicationObject.Fault method, which is invoked when an exception occurs.
The Closed event is fired when CommunicationObject.Close is invoked.
Not all, but some WCF bindings (WSHttp and NetTcp, for example) support reliable sessions; see http://msdn.microsoft.com/en-us/library/ms733136.aspx. A reliable session can be used to detect when the service is going down.
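For example, here is a minimal sketch of enabling a reliable session on a NetTcpBinding so that the channel faults when the other side goes silent (the timeout value is illustrative):

var binding = new NetTcpBinding();
binding.ReliableSession.Enabled = true;                               // turn on WS-ReliableMessaging
binding.ReliableSession.InactivityTimeout = TimeSpan.FromSeconds(30); // fault the channel if the peer goes quiet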
On the client side, I handle the proxy state so that when its State == CommunicationState.Faulted, it will automatically call Abort() and gracefully transition to CommunicationState.Closed.
On the server side, I have two events hooked up to the callback channel:
OperationContext.Current.Channel.Faulted += Channel_Faulted;
OperationContext.Current.Channel.Closed += Channel_Closed;
Here is my event handler code:
private void Channel_Closed(object sender, EventArgs e)
{
    var callback = sender as IPtiCommunicationCallback;
    PtiClient client;
    lock (syncObj)
    {
        client = clientsList.FirstOrDefault(x => x.Value == callback).Key;
    }
    if (client != null)
    {
        // Code to remove client from the list
        Disconnect(client);
    }
}

private void Channel_Faulted(object sender, EventArgs e)
{
    (sender as ICommunicationObject).Abort();
}
Now the question: will the duplex channel's (the callback channel's) state automatically transition in step with the client's, or do I have to handle the Faulted state as I did? I'm using NetTcpBinding, by the way.
The Callback channel's state will generally mimic the client's state, but this is not guaranteed. For example, if the client is trying to reach the server to close the connection, its state might be Closing while the state on the server side could be Opened. You are handling it correctly in assuming that each side must handle Closed and Faulted states individually.
I have done a bit of research into what kind of binding we should use and finally decided to go ahead with the NetTcp binding. See my blog for a detailed explanation.
http://maheshde.blogspot.com.au/2013/06/duplex-communication-over-internet.html
I have no idea why those events are not fired, but triggering a WCF call from the client every 10 minutes solved our problem.
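In case it helps, here is a minimal sketch of such a keep-alive, assuming the contract exposes a cheap no-op operation (here called Ping(), which is not part of the original code):

var keepAlive = new System.Threading.Timer(_ =>
{
    try
    {
        proxy.Ping();                     // any lightweight call keeps the channel from idling out
    }
    catch (CommunicationException)
    {
        // recreate the proxy here if the channel has faulted
    }
}, null, TimeSpan.Zero, TimeSpan.FromMinutes(10));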
Our .NET app uses two AppDomains. The secondary domain needs access to a Logger object that was created in the main AppDomain.
This logger is exposed via a WCF service with a named pipe binding.
This is how I create the "client" for this service:
private void InitLogger()
{
    if (loggerProxy != null)
    {
        Logger.Instance.onLogEvent -= loggerProxy.Log;
    }

    // Connect to the logger proxy.
    var ep = new EndpointAddress("net.pipe://localhost/app/log");
    var binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None);

    //Logger.Debug("Creating proxy to Logger object.");
    var channelFactory = new ChannelFactory<ILogProvider>(binding, ep);
    loggerProxy = channelFactory.CreateChannel();

    channelFactory.Faulted += (sender, args) => InitLogger();
    channelFactory.Closed += (sender, args) => InitLogger();

    Logger.Instance.onLogEvent += loggerProxy.Log;
}
Recently we have been getting random CommunicationObjectFaultedExceptions; I suppose this occurs because the channel times out, or for some other reason I am missing.
This is why I added the handling of the Closed and Faulted events, which do not seem to work properly (perhaps I have not used them appropriately).
EDIT: These events are on the Factory object as suggested, so this explains why they are not being raised.
My question is: how can I avoid these errors?
Our scenario requires this channel to stay open throughout the application's lifetime; access to the Logger service is needed at all times and shouldn't time out under any circumstance.
Is there any safe practice of handling this type of situation?
Your code is currently handling Closed and Faulted events raised by the ChannelFactory, but it is the state of the Channel itself you need to worry about.
The ChannelFactory is an artefact which encapsulates the translation of the WCF service contract into an instance of the channel runtime: once you have successfully created your channel (loggerProxy), the closing of the ChannelFactory isn't going to affect communications via the channel - the events you are listening for are irrelevant to your problem.
State transitions of the Channel to Closed or Faulted will go unnoticed to this code, with the result that they will surface in Logger.Instance as exceptions thrown when loggerProxy.Log is invoked, and the event you are trying to log will be lost.
Instead of registering loggerProxy.Log directly as the event handler you should consider registering a wrapper function implementing an exception handler and retry loop around the call to loggerProxy.Log. The existing channel should be closed (or if that fails, aborted) in the exception handler, to ensure it is Disposed properly. The retry loop should reinitialise the channel and try the call again.
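A rough sketch of that wrapper, assuming the channel factory is promoted to a field and that Log takes a string (adjust to whatever signature Logger.Instance.onLogEvent actually uses):

private void SafeLog(string message)
{
    for (int attempt = 0; attempt < 2; attempt++)           // one retry after rebuilding the channel
    {
        try
        {
            loggerProxy.Log(message);
            return;
        }
        catch (CommunicationException)
        {
            ((ICommunicationObject)loggerProxy).Abort();    // tear down the faulted/closed channel
            loggerProxy = channelFactory.CreateChannel();   // rebuild it and retry once
        }
        catch (TimeoutException)
        {
            ((ICommunicationObject)loggerProxy).Abort();
            loggerProxy = channelFactory.CreateChannel();
        }
    }
}

You would then register SafeLog with the event instead of loggerProxy.Log.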
I'll comment on two things: (i) the timeout and (ii) catching the Faulted events.
Firstly the timeout. By default, channels enter the faulted state if they haven't had a communication during the default time period (around 10 minutes). You can either poke the channel frequently with a recurring event, or reset the timeout to something large. I do the latter as follows:
NetNamedPipeBinding binding = new NetNamedPipeBinding();
// Have to set the receive timeout to be big on BOTH SIDES of the pipe,
// otherwise it gets faulted and can't be used.
binding.ReceiveTimeout = TimeSpan.MaxValue;

DuplexChannelFactory<INodeServiceAPI> pipeFactory =
    new DuplexChannelFactory<INodeServiceAPI>(
        myCallbacks,
        binding,
        new EndpointAddress("net.pipe://localhost/P2PSN.Node.Service.Pipe"));
myCallbacks is an instance of a class that deals with callbacks in the duplex pipe and INodeServiceAPI is the interface that describes my API.
Secondly, you're right that the factory events will not be fired. You can catch the events on the channel itself. I use the following code:
proxy = pipeFactory.CreateChannel();
if (proxy is IClientChannel)
{
    (proxy as IClientChannel).Faulted += new EventHandler(this.proxy_Faulted);
}
Not pleasant, but something I picked up from StackOverflow elsewhere that works. You must include System.ServiceModel to pick up the IClientChannel interface.
HTH.
I create a WCF SOAP server with an operation that takes some time to perform:
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string LongRunningOperation();
}

[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Multiple,
    UseSynchronizationContext = false,
    InstanceContextMode = InstanceContextMode.Single)]
class MyService : IMyService
{
    public string LongRunningOperation()
    {
        Thread.Sleep(20000);
        return "Hey!";
    }
}
class Program
{
    static void Main(string[] args)
    {
        MyService instance = new MyService();
        ServiceHost serviceHost = new ServiceHost(instance);
        BasicHttpBinding binding = new BasicHttpBinding();
        serviceHost.AddServiceEndpoint(typeof(IMyService), binding, "http://localhost:9080/MyService");
        serviceHost.Open();
        Console.WriteLine("Service running");

        Thread.Sleep(10000);

        serviceHost.Close();
        Console.WriteLine("Service closed");

        Thread.Sleep(30000);
        Console.WriteLine("Exiting");
    }
}
The ServiceHost is opened, and after 10 seconds I close it.
When calling serviceHost.Close(), all clients currently connected and waiting for LongRunningOperation to finish are immediately disconnected.
Is there a way of closing the ServiceHost more cleanly? That is, I want to stop the service listeners, but also wait for all currently connected clients to finish (or specify a maximum timeout).
I'm surprised that calling ServiceHost.Close is not letting LongRunningOperation complete.
The whole architecture is set up to allow things time to gracefully shut down (e.g. the difference between the Close and Abort transitions). According to the MSDN docs:
This method causes a CommunicationObject to gracefully transition from any state, other than the Closed state, into the Closed state. The Close method allows any unfinished work to be completed before returning.
Also there is a CloseTimeout on the ServiceHost for precisely this. Have you tried setting the CloseTimeout to be greater than 20 seconds? (According to Reflector the default CloseTimeout for ServiceHost is 10 seconds...)
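A small sketch of that, with an illustrative value; CloseTimeout needs to cover your longest-running operation:

serviceHost.CloseTimeout = TimeSpan.FromSeconds(30);   // give in-flight operations up to 30s to finish
serviceHost.Open();
// ...
serviceHost.Close();                                   // waits up to CloseTimeout for pending work
// or pass the timeout explicitly:
// serviceHost.Close(TimeSpan.FromSeconds(30));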
In principle, I think something like the following should be possible, though I haven't implemented it to confirm all the details:
- Implement a custom IOperationInvoker wrapping the dispatcher's normal operation invoker (you'll want an IServiceBehavior to install the wrapping invoker when the service dispatch runtime is built).
- The custom invoker would mostly delegate to the real one, but would also provide "gate-keeper" functionality to turn away new requests (e.g. raise some kind of exception) when the service host is about to be shut down.
- It would also keep track of operation invocations still in progress and set an event when the last operation invocation finishes or times out.
- The main hosting thread would then wait on the invoker's "all finished" event before calling serviceHost.Close().
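A rough, untested sketch of what such a gate-keeper invoker could look like (the IServiceBehavior that installs it is omitted; uses System.ServiceModel, System.ServiceModel.Dispatcher and System.Threading):

class GateKeeperInvoker : IOperationInvoker
{
    private readonly IOperationInvoker inner;
    public static volatile bool ShuttingDown;                    // set by the host just before Close()
    private static int inFlight;                                 // crude in-flight operation counter
    public static readonly ManualResetEvent AllFinished = new ManualResetEvent(true);

    public GateKeeperInvoker(IOperationInvoker inner) { this.inner = inner; }

    public object[] AllocateInputs() { return inner.AllocateInputs(); }
    public bool IsSynchronous { get { return inner.IsSynchronous; } }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        if (ShuttingDown)
            throw new FaultException("Service is shutting down.");       // turn away new requests
        if (Interlocked.Increment(ref inFlight) == 1) AllFinished.Reset();
        try { return inner.Invoke(instance, inputs, out outputs); }
        finally { if (Interlocked.Decrement(ref inFlight) == 0) AllFinished.Set(); }
    }

    // Async path just delegates here; a real implementation would need the same bookkeeping.
    public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
    { return inner.InvokeBegin(instance, inputs, callback, state); }
    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    { return inner.InvokeEnd(instance, out outputs, result); }
}

The hosting thread would set GateKeeperInvoker.ShuttingDown = true, wait on AllFinished, and only then call serviceHost.Close().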
What you are doing seems all wrong to me. The ServiceHost should never close abruptly; it is a service and should remain available. There is no real way to close gracefully without some participation from the client, and what counts as a graceful close is also subjective from the client's perspective.
So I don't think I understand your requirements at all. However, one way would be to implement a publish/subscribe pattern: when the host is ready to close, notify all subscribers of this event so that each respective client can close its own connection. You can read more about this here: http://msdn.microsoft.com/en-us/magazine/cc163537.aspx
Again, this approach to hosting a service is not standard, and that's why you are finding it hard to find a solution to this particular problem of yours. If you could elaborate on your use case/usage scenario, it would probably help to find a real solution.
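A rough sketch of the notify-subscribers idea mentioned above; the callback contract and the subscribers collection are assumptions, not code from the question:

// Callback contract implemented by every client:
public interface IShutdownCallback
{
    [OperationContract(IsOneWay = true)]
    void HostClosing();
}

// On the service, just before shutting the host down:
foreach (IShutdownCallback subscriber in subscribers)
{
    try { subscriber.HostClosing(); }      // each client closes its own connection in response
    catch (CommunicationException) { }     // ignore clients that have already gone away
}
serviceHost.Close();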
You are describing client-side functionality. It sounds like you should wrap the ServiceHost object and then have your proxy reject new requests when it "is closing". You don't close the real ServiceHost until all calls have been serviced.
You should also take a look at the Async CTP. Putting this kind of logic inside a consumer-side Task object will be much easier with the upcoming TaskCompletionSource class.
Check out this video from dnrtv. It's not about WCF, but rather about the upcoming language and class support for asynchrony.
I have the following scenario:
My main Application (APP1) starts a Process (SERVER1). SERVER1 hosts a WCF service via named pipe. I want to connect to this service (from APP1), but sometimes it is not yet ready.
I create the ChannelFactory, open it and let it generate a client. If I then call a method on the generated client, I receive an exception telling me that the endpoint was not found:
var factory = new ChannelFactory<T>(new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/myservice"));
factory.Open();
var Client = factory.CreateChannel();
Client.Foo();
If I wait a little bit before calling the service, everything is fine:
var Client = factory.CreateChannel();
Thread.Sleep(2000);
Client.Foo();
How can I ensure, that the Service is ready without having to wait a random amount of time?
If the general case is that you are just waiting for this other service to start up, then you may as well use the approach of having a "Ping" method on your interface that does nothing, and retrying until this starts responding.
We do a similar thing: we try to call a ping method in a loop at startup (1 second between retries), recording in our logs (but ultimately ignoring) any TargetInvocationExceptions that occur while trying to reach our service. Once we get the first proper response, we proceed onwards.
Naturally this only covers the startup warm-up case: the service could go down after a successful ping, or we could get a TargetInvocationException for a reason other than "the service is not ready".
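For illustration, a minimal retry loop along those lines; IMyService and its no-op Ping() operation are assumptions, not part of the question:

var factory = new ChannelFactory<IMyService>(new NetNamedPipeBinding(),
    new EndpointAddress("net.pipe://localhost/myservice"));
IMyService client = null;
for (int attempt = 0; attempt < 30; attempt++)
{
    try
    {
        client = factory.CreateChannel();
        client.Ping();                                // fails until the host is actually listening
        break;                                        // first successful call: the service is ready
    }
    catch (EndpointNotFoundException)
    {
        if (client != null)
            ((ICommunicationObject)client).Abort();   // discard the failed proxy
        Thread.Sleep(1000);                           // wait a second before the next attempt
    }
}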
You could have the service signal an event [Edited - see note] once the service host is fully open and the Opened event of the channel listener has fired. The application would wait on the event before using its proxy.
Note: Using a named event is easy because the .NET type EventWaitHandle gives you everything you need. Using an anonymous event is preferable but a bit more work, since the .NET event wrapper types don't give you an inheritable event handle. But it's still possible if you P/Invoke the Windows DuplicateHandle API yourself to get an inheritable handle, then pass the duplicated handle's value to the child process in its command line arguments.
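A minimal sketch of the named-event variant (the event name is illustrative):

// In the service host process, once the host is open:
var readySignal = new EventWaitHandle(false, EventResetMode.ManualReset, "MyServiceReady");
serviceHost.Open();
readySignal.Set();

// In APP1, before creating the channel:
var waitForService = new EventWaitHandle(false, EventResetMode.ManualReset, "MyServiceReady");
if (!waitForService.WaitOne(TimeSpan.FromSeconds(30)))
    throw new TimeoutException("Service host did not signal readiness in time.");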
If you're using .NET 4.0 you could use WS-Discovery to make the service announce its presence via broadcast IP.
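For example, a brief sketch of announcing the service on the host side (requires a reference to System.ServiceModel.Discovery; the endpoint details are illustrative):

var discovery = new ServiceDiscoveryBehavior();
discovery.AnnouncementEndpoints.Add(new UdpAnnouncementEndpoint());  // broadcast online/offline announcements
serviceHost.Description.Behaviors.Add(discovery);
serviceHost.AddServiceEndpoint(new UdpDiscoveryEndpoint());          // also lets clients probe for the service
serviceHost.Open();                                                  // the online announcement is sent on Open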
The service could also send a message to a queue (MSMQ binding) with a short lifespan, say a few seconds, which your client can monitor.
Have the service create a signal file, then use a FileSystemWatcher in the client to detect when it gets created.
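A small sketch of that approach (the path and file name are illustrative, and TryConnect is a placeholder for the client's connect logic):

var watcher = new FileSystemWatcher(@"C:\MyApp", "service.ready");
watcher.Created += (s, e) => TryConnect();
watcher.EnableRaisingEvents = true;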
Just while (!alive) try { alive = client.IsAlive(); } catch { ...reconnect here... } (in your service contract, you just have IsAlive() return true)
I have had the same issue, and when using net.pipe://localhost/serviceName I solved it by looking at the process of the self-hosted application.
The way I did that was with a utility class; here is the code:
public static class ServiceLocator
{
    public static bool IsWcfStarted()
    {
        Process[] ProcessList = Process.GetProcesses();
        return ProcessList.Any(a => a.ProcessName.StartsWith("MyApplication.Service.Host", StringComparison.Ordinal));
    }

    public static void StartWcfHost()
    {
        string path = System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
        var Process2 = new Process();
        var Start2 = new ProcessStartInfo();
        Start2.FileName = Path.Combine(path, "Service", "MyApplication.Service.Host.exe");
        Process2.StartInfo = Start2;
        Process2.Start();
    }
}
Now, my application isn't called MyApplication, but you get my point...
In my client apps that use the host, I have this call:
if (!ServiceLocator.IsWcfStarted())
{
    WriteEventlog("First instance of WCF Client... starting WCF host.");
    ServiceLocator.StartWcfHost();

    int timeout = 0;
    while (!ServiceLocator.IsWcfStarted())
    {
        Thread.Sleep(500);   // give the host a moment before polling the process list again
        timeout++;
        if (timeout > MAX_RETRY)
        {
            // show message that probably wcf host is not available, end the client
            ....
        }
    }
}
This solved two issues:
1. The code errors I had from the race condition went away, and
2. I know in a controlled manner if the host crashed due to some issue or misconfiguration.
Hope it helps.
Walter
I attached an event handler to client.InnerChannel.Faulted, then reduced the reliableSession inactivity timeout to 20 seconds. Within the event handler I removed the existing handler, then ran an async method to attempt to connect again and attached the event handler again. It seems to work.
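A rough sketch of that reconnect-on-fault pattern; MyServiceClient stands in for the generated proxy type, and Task.Delay assumes .NET 4.5:

void AttachFaultHandler()
{
    client.InnerChannel.Faulted += OnChannelFaulted;
}

async void OnChannelFaulted(object sender, EventArgs e)
{
    client.InnerChannel.Faulted -= OnChannelFaulted;   // detach from the dead channel
    client.Abort();                                    // discard the faulted proxy
    await Task.Delay(1000);                            // brief back-off before reconnecting
    client = new MyServiceClient();                    // recreate the proxy
    AttachFaultHandler();                              // listen for faults on the new channel
}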
I am using a netNamedPipeBinding to perform inter-process WCF communication from a windows app to a windows service.
Now my app is running well on all other counts (after fighting off my fair share of WCF exceptions, as anybody who has worked with WCF would know...), but this error is one that is proving to be quite resilient.
To paint a picture of my scenario: my Windows service can be queued to do some work at any given time through a button pressed in the Windows app. It then talks over the netNamedPipeBinding (a binding that supports callbacks, i.e. two-way communication, if you are not familiar with it) and initiates a request to perform this work (in this case a file upload procedure). It also fires callbacks (events) every few seconds, ranging from file progress to transfer speed and so on, back to the Windows app, so there is some fairly tight client-server integration; this is how I receive the progress of what's running in my Windows service back in my Windows app.
Now, all is great, and the WCF gods are relatively happy with me right now, apart from one nasty exception which I receive every time I shut down the app prematurely (which is a perfectly valid scenario). While a transfer is in progress, and callbacks are firing pretty heavily, I receive this error:
System.ServiceModel.ProtocolException: The channel received an unexpected input message with Action 'http://tempuri.org/ITransferServiceContract/TransferSpeedChangedCallback' while closing. You should only close your channel when you are not expecting any more input messages.
Now, I understand that error, but unfortunately I cannot guarantee to close my channel only after no more input messages will arrive, as the user may shut down the app at any time while the work continues in the background of the Windows service (kind of like how a virus scanner operates). The user should be able to start and close the management tool app as much as they like with no interference.
I receive the error immediately after performing my Unsubscribe() call, which is the second-to-last call before terminating the app and, I believe, the preferred way to disconnect a WCF client. All the unsubscribe does before closing the connection is remove the client id from an array stored locally in the WCF service hosted by the Windows service (this instance is SHARED by both the Windows service and the Windows app, as the Windows service can also perform work at scheduled times by itself); after the client id is removed from the array I perform what I hope (feel) should be a clean disconnection.
The result of this, besides receiving an exception, is that my app hangs: the UI is in total lock-up, with progress bars and everything frozen midway, and all signs pointing to a race condition or WCF deadlock [sigh]. But I am pretty thread-savvy now and I think this is a relatively isolated situation; reading the exception as-is, I don't think it's a 'thread' issue per se. It points more to an early disconnection, which then spirals all my threads into mayhem, perhaps causing the lock-up.
My Unsubscribe() approach on the client looks like this:
public void Unsubscribe()
{
    try
    {
        // Close existing connections
        if (channel != null &&
            channel.State == CommunicationState.Opened)
        {
            proxy.Unsubscribe();
        }
    }
    catch (Exception)
    {
        // This is where we receive the 'System.ServiceModel.ProtocolException'.
    }
    finally
    {
        Dispose();
    }
}
And my Dispose() method, which should perform the clean disconnect:
public void Dispose()
{
    // Dispose object
    if (channel != null)
    {
        try
        {
            // Close existing connections
            Close();
            // Attempt dispose object
            ((IDisposable)channel).Dispose();
        }
        catch (CommunicationException)
        {
            channel.Abort();
        }
        catch (TimeoutException)
        {
            channel.Abort();
        }
        catch (Exception)
        {
            channel.Abort();
            throw;
        }
    }
}
And the WCF service Subscription() counterpart and class attributes (for reference) on the windows service server (nothing tricky here and my exception occurs client side):
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class TransferService : LoggableBase, ITransferServiceContract
{
    public void Unsubscribe()
    {
        if (clients.ContainsKey(clientName))
        {
            lock (syncObj)
            {
                clients.Remove(clientName);
            }
        }
#if DEBUG
        Console.WriteLine(" + {0} disconnected.", clientName);
#endif
    }

    ...
}
Interface of:
[ServiceContract(
    CallbackContract = typeof(ITransferServiceCallbackContract),
    SessionMode = SessionMode.Required)]
public interface ITransferServiceContract
{
    [OperationContract(IsInitiating = true)]
    bool Subscribe();

    [OperationContract(IsOneWay = true)]
    void Unsubscribe();

    ...
}
The interface of the callback contract doesn't do anything very exciting; it just raises events via delegates and so on. The reason I included it is to show you my attributes. I already alleviated one set of deadlocks by including UseSynchronizationContext = false:
[CallbackBehavior(UseSynchronizationContext = false,
                  ConcurrencyMode = ConcurrencyMode.Multiple)]
public class TransferServiceCallback : ITransferServiceCallbackContract
{ ... }
Really hope somebody can help me! Thanks a lot =:)
OH my gosh, I found the issue.
That exception had nothing to do with the underlying app hang; it was just a precautionary exception which you can safely catch.
You would not believe it: I spent about 6 hours on and off on this bug, and it turned out to be channel.Close() locking up while waiting for pending WCF requests to complete (which would never complete until the transfer had finished, which defeats the purpose!).
I just went brute-force breakpointing line after line. My issue was that if I stepped too slowly it would never hang, because somehow the channel would become available to close (even before the transfer had finished), so I had to hit F5 and then quickly step to catch the hang, and that's the line it ended on. I now simply apply a timeout value to the Close() operation, catch the resulting TimeoutException, and hard-abort the channel if it cannot shut down in a timely fashion!
See the fix code:
private void Close()
{
    if (channel != null &&
        channel.State == CommunicationState.Opened)
    {
        // If cannot cleanly close down the app in 3 seconds,
        // channel is locked due to channel heavily in use
        // through callbacks or the like.
        // Throw TimeoutException
        channel.Close(new TimeSpan(0, 0, 0, 3));
    }
}
public void Dispose()
{
    // Dispose object
    if (channel != null)
    {
        try
        {
            // Close existing connections
            // *****************************
            // This is the close operation where we perform
            // the channel close and timeout check and catch the exception.
            Close();

            // Attempt dispose object
            ((IDisposable)channel).Dispose();
        }
        catch (CommunicationException)
        {
            channel.Abort();
        }
        catch (TimeoutException)
        {
            channel.Abort();
        }
        catch (Exception)
        {
            channel.Abort();
            throw;
        }
    }
}
I am so happy to finally have this bug over and done with! My app now shuts down cleanly after a 3-second timeout regardless of the current WCF service state. I hope this helps anyone else who finds themselves suffering a similar issue.
Graham