As far as I know, BasicHttpBinding doesn't support the ReliableSession feature. This means that once a request is received by the server (the WCF host), it will be executed whether or not the client disconnects afterwards. I hope I'm right on this?
The problem is:
I have a WCF service with BasicHttpBinding. We tested this service by calling it 10 times from different threads on the client side, with all requests made at (almost) the same time. Right after starting the threads, we terminate the program by killing the process. As a result, 6 of the 10 requests are executed, but 4 are not. We checked the network traffic with Wireshark and saw that all 10 requests were received by the WCF service host. However, we know that 4 of them were not executed.
(Timeout values are not configured on the binding, so they are all set to their defaults. Also, the WCF service is hosted in IIS.)
What's the problem here? Where should I look? What can we do to achieve 10 executions out of 10, even if the client disconnects?
What can we do to achieve 10 executions out of 10 even if the client disconnects?
You can make it the default behavior. Use [OperationContract(IsOneWay=true)] to create a one-way contract where the client does not wait for a reply, but simply disconnects after sending the message.
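For example, a minimal sketch of such a one-way contract (the interface and operation names here are illustrative, not from the question):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IWorkService
{
    // One-way: the client fires the message and can disconnect immediately;
    // no reply message is ever generated for this operation.
    [OperationContract(IsOneWay = true)]
    void Execute(string payload);
}
```

Note that one-way operations must return void and cannot have out/ref parameters, since no reply message exists to carry results back.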
Since you really need the service operation to complete even if the client disconnects, I suspect there is a database transaction that needs to finish.
If the WCF service is connecting to a database, this behavior can be normal, especially if all calls use the same database user and password. If that is the case, try sharing a single connection across all WCF instances.
Either way, you have to make sure that your WCF service supports concurrent access. Click here for more information on concurrent WCF access.
I'm trying to route matching domain requests through a single ServerPipe,
but I can't figure out how to get Fiddler to reuse that ServerPipe for all ClientPipes.
The normal behaviour of a Chromium-based browser is to open up to 6 connections to a server for a given domain (when there are multiple resources to get). When proxying through Fiddler, this normally results in up to 6 ServerPipes. I want to permit the first to establish a server connection, skip connecting the next 5, and then have all client requests use the first server connection. From the browser's point of view it will still have 6 client connections to Fiddler.
Here is where I'm at -
Let the first CONNECT request pass as normal, which establishes the first connection to the server.
When the next 5 client CONNECTs come in, set the x-ReplyWithTunnel flag on the session in the BeforeRequest handler. This bypasses creating a new server connection, but responds to the client as though a server connection was successfully established.
The client sends a bunch of GET requests down the 6 client pipes to Fiddler, requesting the resources from the server. Those in the first pipe (the one with the actual server pipe) complete.
The GET requests in the other 5 tunnels reach Fiddler, but no response is processed and sent back to the client.
I've tried all manner of ideas but cannot get Fiddler to reuse the single server connection for all 6 client pipes.
All GET requests are to the same domain, so ServerPipe reuse should be ok, no?
Is this even possible?
If yes, what am I missing?
Finally worked it out after significant effort.
Under the hood Fiddler uses a "pool" of Pipes to each server.
In my initial post, I was only allowing the creation of a single connection per domain. When the first GET rolls in for a domain, that session takes the only Pipe out of the pool for its own use. When the other 5 GETs arrive, they can't find any Pipes in the pool and subsequently fail.
To work around the problem, one needs to throttle the subsequent GETs and only allow them to run one at a time, each starting once the prior GET is complete.
When this orderly process occurs the Pipe will be put back in the pool, and the next GET in the queue will successfully locate a Pipe to use. Unicorns and rainbows then appear.
I used a SemaphoreSlim per domain to "queue" the threads in BeforeRequest, with a corresponding Release() in BeforeResponse. The basic functionality works for my needs, but a full implementation would require dealing with things like failed Pipes, hanging GETs, and flag overrides.
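The per-domain gating described above can be sketched in C# roughly as follows. This assumes FiddlerCore-style handlers taking a Session; in classic FiddlerScript the same logic would live in OnBeforeRequest/OnBeforeResponse, and the error handling the answer mentions (failed Pipes, hanging GETs) is omitted:

```csharp
using System.Collections.Concurrent;
using System.Threading;
using Fiddler;

public static class PipeThrottle
{
    // One gate per domain, so only one GET at a time holds the single server Pipe.
    static readonly ConcurrentDictionary<string, SemaphoreSlim> Gates =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public static void OnBeforeRequest(Session oSession)
    {
        if (oSession.HTTPMethodIs("GET"))
            Gates.GetOrAdd(oSession.hostname, _ => new SemaphoreSlim(1, 1)).Wait();
    }

    public static void OnBeforeResponse(Session oSession)
    {
        SemaphoreSlim gate;
        if (oSession.HTTPMethodIs("GET") && Gates.TryGetValue(oSession.hostname, out gate))
            gate.Release();  // the Pipe goes back to the pool; let the next GET proceed
    }
}
```

Blocking in BeforeRequest is what serializes the GETs: each request waits until the previous one for the same domain has released the gate in BeforeResponse.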
I'm creating a WCF service (to be run in IIS) that a client can talk to. Periodically I want my server to send a heartbeat to a Master server.
At the moment the only way I see to do this is to create a second Windows Service that will send out the heartbeat.
Is there any way to get my original WCF service to run an event periodically so that I can get everything done with just one service?
There's not really a good way to do this in a WCF service.
If the service is going to get some use, you could store a NextHeartBeat timestamp and, on every request, check whether it's time to send a message out to the master server.
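That piggy-backing idea can be sketched like this (all names here are illustrative, not part of WCF; the interlocked compare-exchange is just one way to make sure only one concurrent request actually sends the ping):

```csharp
using System;
using System.Threading;

public static class Heartbeat
{
    static long _nextTicks = DateTime.UtcNow.Ticks;

    // Call this at the start of every service operation.
    public static void MaybeSend(TimeSpan interval)
    {
        long due = Interlocked.Read(ref _nextTicks);
        long now = DateTime.UtcNow.Ticks;
        // Only the request that wins the CompareExchange race sends the heartbeat;
        // everyone else sees the updated NextHeartBeat timestamp and moves on.
        if (now >= due &&
            Interlocked.CompareExchange(ref _nextTicks, now + interval.Ticks, due) == due)
        {
            SendToMaster();
        }
    }

    static void SendToMaster() { /* e.g. a small HTTP ping to the master server */ }
}
```

The obvious limitation, as the next answers point out, is that no heartbeat is sent at all while the service receives no traffic.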
What you want to do may be achieved with server-push or full-duplex approaches. But for a heartbeat you might get by with a simple HTTP ping using a WebClient, as described here. When self-hosting (outside IIS) you can override ServiceBase.OnStart/OnStop and start/stop a timer that periodically triggers the ping.
However, hosting a WCF service in IIS usually means that your service is instantiated on a per-request basis, so there is no long-lived service instance around to send a recurring ping.
It depends on the purpose you need the heartbeat to the Master Server. Could you instead let the master server periodically do a request on the WCF service?
If you really do need a long-running service, then hosting the WCF service in a Windows Service instead of IIS might be an option.
I am developing a Windows RT application that needs to get data from a MVC WebApi server.
The problem is that the response can take from a few seconds to 3 minutes.
Which is the best approach to solve it?
For now, I call the web API asynchronously and set a long timeout value to avoid exceptions. Is this a good approach? I don't like it much, because the server keeps a connection open the whole time. Can that significantly affect server performance?
Is there something like a "callback" for web services? I mean the server calling the client to send the data.
Yes, there are ways to get server to callback client, for example WCF duplex communication. However, such techniques will usually keep the connection open (in most cases this is TCP session). Most web servers do not support numerous concurrent requests and thus each prolonged call to the server will increment the number of concurrently connected clients. This will lead to heavy resource utilisation at the point where it shouldn't be. If you have many clients, such architecture is bound to fail.
REST requests should be lightweight, small, and fast. Consider using a database to store temporary results, and worker servers to process the load. This is a server-side problem, not a client-side one.
Finally, I solved it using WebSockets (thanks oleksii). It keeps the connection open, but I avoid polling for the result repeatedly. Now, when the server finishes processing, it sends the data directly to the client. WebSocket is a protocol that runs over TCP and has been standardized.
http://en.wikipedia.org/wiki/WebSocket
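In .NET, the receiving side of such a push can be written with System.Net.WebSockets.ClientWebSocket. A minimal sketch (the endpoint URL and buffer size are placeholders, and fragmented messages are not handled):

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class PushClient
{
    public static async Task ListenAsync()
    {
        using (var ws = new ClientWebSocket())
        {
            await ws.ConnectAsync(new Uri("wss://example.com/results"), CancellationToken.None);
            var buffer = new byte[8192];
            // Await here until the server pushes the finished result; no polling needed.
            WebSocketReceiveResult result =
                await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            string data = Encoding.UTF8.GetString(buffer, 0, result.Count);
            Console.WriteLine(data);
        }
    }
}
```

The connection stays open, as the answer notes, but the client sits idle in ReceiveAsync instead of issuing repeated requests.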
I have a WCF service with NetTcpBinding and a duplex channel, self-hosted in a console app. Clients 'subscribe' to the WCF service, and I collect their callbacks in a list. On the client side, I have attached handlers to the Faulted and Closed events, and when they fire I just reconnect the client.
I have a strange behavior (as for me) when testing the crash of service:
When both client and service are tested on localhost, I kill the WCF service process, and the clients receive the Faulted event and then keep trying to reconnect until the service is up again.
When I deploy the service on the production server and the client is on another computer (over the internet, not in the same domain), killing the hosting process gives the clients no notification of the fault at all. I do have a proper shutdown path in which Abort is sent to all of the callbacks in the list, and when I close the app that way, the clients do get the Faulted event and correctly try to resubscribe.
So the question is: why is this happening? The question is not HOW to keep the connection alive; I think I found how to do that another way (reliable sessions or pinging the service).
I just want to know WHY the behavior of the event is different when deployed. The configuration is the same; I didn't change anything.
I am trying to write a monitoring tool to monitor some information.
It will normally run on Azure, so I am going to host the database on Azure, and the web service will be hosted on Azure as well.
On the clients, I read from a config file how often the client needs to push its information to the Azure database (via the web service on Azure).
Now I also want to send some commands to the client itself, like starting a service, .... What is the best way to do that?
How can I send them from a website hosted on the Azure platform?
I think you should consider implementing a WCF service on the client as well. The Azure side of your software could call operations on this service when it needs to instruct the client to do something.
The WCF service on the client should be something simple, hosted in a Windows Service or in your actual client application (whatever it is... WinForms, console, etc.).
Since you have no VPN, it sounds like you may have a problem with hosting a WCF service on the client. If the client is behind a firewall, you would have to modify the firewall configuration to allow your server to connect to this service.
Last time I had to do a service like this, I used Comet. The server maintains a queue of messages to be sent to the client. Your client connects to the web service and requests any available messages. If messages are available, the server returns them. If not, the server leaves the request open for some time. As soon as a message arrives, the server sends it down the already-open connection. The client will either periodically time out/reconnect or send a keep-alive message (perhaps once per minute) in order to keep the connection alive in the intervening firewalls.
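The client side of that long-poll loop might look roughly like this in C# (the URL, timeout, and handler are illustrative; the server is assumed to hold the request open for up to a minute):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class CometClient
{
    public static async Task PollAsync()
    {
        // Timeout slightly longer than the server's hold-open window.
        using (var http = new HttpClient { Timeout = TimeSpan.FromSeconds(70) })
        {
            while (true)
            {
                try
                {
                    // The server holds this request open until a command arrives
                    // (or its window expires); either way we reconnect immediately.
                    string msg = await http.GetStringAsync("https://example.com/commands/next");
                    if (!string.IsNullOrEmpty(msg)) Handle(msg);
                }
                catch (TaskCanceledException)
                {
                    // Client-side timeout: no message this round, just reconnect.
                }
            }
        }
    }

    static void Handle(string msg) { Console.WriteLine("command: " + msg); }
}
```

Because the client always initiates the connection, this works through firewalls and NAT without any inbound rules on the client side, which is the main advantage over hosting a WCF endpoint on the client.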