Force all requests through one ServerPipe with Fiddler - c#

I'm trying to route matching domain requests through a single ServerPipe, but I can't figure out how to get Fiddler to reuse that ServerPipe for all ClientPipes.
Normal behaviour of a Chromium-based browser is to open up to 6 connections to a server for a given domain (when there are multiple resources to get). When proxying through Fiddler, this normally results in up to 6 ServerPipes. I want to permit the first to establish a server connection, skip connecting the next 5, and then have all client requests use the first server connection. From the browser's point of view it will still have 6 client connections to Fiddler.
Here is where I'm at:
Let the first CONNECT request pass as normal, which establishes the first connection to the server.
When the next 5 client CONNECTs come in, set the x-ReplyWithTunnel flag on the session in the BeforeRequest handler (sketched below). This bypasses creating a new server connection but responds to the client as though a server connection was successfully established.
The client sends a bunch of GET requests down the 6 client pipes to Fiddler, requesting the resources from the server. Those in the first pipe (the one with the actual server pipe) complete.
The GET requests in the other 5 tunnels reach Fiddler, but no response is processed and sent back to the client.
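For reference, here is roughly what my BeforeRequest handler does on the CONNECT side (a simplified sketch; only x-ReplyWithTunnel is a Fiddler flag, the per-host counter is my own bookkeeping):

```csharp
// Sketch of the CONNECT handling described above (FiddlerCore).
// ConnectsSeen is illustrative; x-ReplyWithTunnel is the real Fiddler flag.
using System.Collections.Concurrent;
using Fiddler;

static class TunnelCollapser
{
    static readonly ConcurrentDictionary<string, int> ConnectsSeen =
        new ConcurrentDictionary<string, int>();

    public static void Attach()
    {
        FiddlerApplication.BeforeRequest += session =>
        {
            if (!session.HTTPMethodIs("CONNECT"))
                return;

            int count = ConnectsSeen.AddOrUpdate(session.hostname, 1, (_, c) => c + 1);
            if (count > 1)
            {
                // Skip building a new ServerPipe; reply to the client as
                // though the tunnel was established.
                session.oFlags["x-ReplyWithTunnel"] = "reusing first pipe";
            }
        };
    }
}
```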
I've tried all manner of ideas but cannot get Fiddler to reuse the single server connection for all 6 client pipes.
All GET requests are to the same domain, so ServerPipe reuse should be OK, no?
Is this even possible?
If so, what am I missing?

Finally worked it out after significant effort.
Under the hood, Fiddler uses a "pool" of Pipes to each server.
In my initial post, I was only allowing the creation of a single connection per domain. When the first GET rolled in for a domain, that session took the only Pipe out of the pool for its own use. When the other 5 GETs arrived, they couldn't find any Pipes in the pool and subsequently failed.
To work around the problem, one needs to throttle the subsequent GETs and only allow them to run one at a time, each starting once the prior GET is complete.
When this orderly process occurs, the Pipe is put back in the pool and the next GET in the queue successfully locates a Pipe to use. Unicorns and rainbows then appear.
I used a per-domain SemaphoreSlim to queue the threads in BeforeRequest, with a Release() in BeforeResponse. The basic functionality works for my needs, but a full implementation would require dealing with things like failed Pipes, hanging GETs, and flag overrides.
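In outline, the throttle looks like this (a minimal sketch under the caveats above; no handling of failed Pipes or hung GETs):

```csharp
// One gate per domain: only one non-CONNECT request at a time may hold
// the single pooled ServerPipe; BeforeResponse returns the slot.
using System.Collections.Concurrent;
using System.Threading;
using Fiddler;

static class GetSerializer
{
    static readonly ConcurrentDictionary<string, SemaphoreSlim> Gates =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public static void Attach()
    {
        FiddlerApplication.BeforeRequest += session =>
        {
            if (session.HTTPMethodIs("CONNECT"))
                return; // tunnels are handled separately

            var gate = Gates.GetOrAdd(session.hostname, _ => new SemaphoreSlim(1, 1));
            gate.Wait(); // queue behind the GET currently using the Pipe
        };

        FiddlerApplication.BeforeResponse += session =>
        {
            if (session.HTTPMethodIs("CONNECT"))
                return;

            SemaphoreSlim gate;
            if (Gates.TryGetValue(session.hostname, out gate))
                gate.Release(); // Pipe goes back in the pool; next GET proceeds
        };
    }
}
```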

Related

Is it appropriate to cancel and retry an HTTP request in this unavailability scenario?

General scenario:
Suppose any given user is associated with a stateful session that is managed by my service.
Each session is handled by a specific host within my service.
All HTTP requests associated with that session must ultimately be handled by the correct host.
There exists some ownership-resolution protocol allowing any given host in my service to forward a user's HTTP request to the correct session owner.
All of the hosts exist in a common virtual network and may address each other directly.
When a host forwards a request, the session owner must respond within X seconds. If it does not respond, the forwarding host will fail over the session and become the new session owner (assume race conditions between different forwarding hosts attempting to fail over are managed in an acceptable way).
In some number of cases the session owner may become unresponsive, often because its CPU is in a bad state for a period. I want to optimize my retry strategy to minimize the impact of redundant HTTP requests on the faulty host, which leads to the heart of my question: if the forwarding host does not hear back from the session owner after M seconds (where M == X / K, with K an integer and K > 1), does it ever make sense to cancel the request and retry? Or is it better to just wait for a response to the first HTTP request?
In my mind, cancelling and retrying accomplishes nothing, as I've either established a TCP connection with the target host or I have not. If I have established a TCP connection, cancelling and retrying just causes me to send the request data to the target host twice. If the connection isn't established, then cancelling and retrying doesn't provide any optimization I'm aware of. If there is no explicit error code in those M seconds, what does retrying accomplish?
For practicality, assume I'm using .NET 6 and a standard, very simple HttpClient to forward requests. I think this has a more general answer that is not specific to dotnet.
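For concreteness, the cancel-and-retry pattern I'm asking about would look something like this in .NET 6 (names and the request factory are placeholders; I'm questioning whether this is worth doing, not proposing it):

```csharp
// Give each attempt an M = X/K budget, cancel on expiry, and resend.
// An HttpRequestMessage cannot be sent twice, hence the factory.
async Task<HttpResponseMessage> ForwardWithRetryAsync(
    HttpClient client,
    Func<HttpRequestMessage> makeRequest,
    TimeSpan x,
    int k)
{
    TimeSpan m = x / k; // per-attempt budget of M seconds

    for (int attempt = 0; attempt < k; attempt++)
    {
        using var cts = new CancellationTokenSource(m);
        try
        {
            return await client.SendAsync(makeRequest(), cts.Token);
        }
        catch (OperationCanceledException)
        {
            // This attempt used up its M-second budget; loop and retry.
        }
    }

    throw new TimeoutException($"Session owner did not respond within {x.TotalSeconds}s.");
}
```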

WCF operation is not running after client disconnects

As far as I know, BasicHttpBinding doesn't support the ReliableSession feature. So this means that when a request is received by the server (WCF host), it will be executed whether or not the client disconnects afterwards. I hope I'm right on this?
The problem is:
I have a WCF service with BasicHttpBinding. We tested this service by calling it 10 times from different threads on the client side, with all requests made at (almost) the same time. Right after starting the threads, we terminate the program by killing the process. As a result, 6 out of 10 requests are executed but the other 4 are not. We checked network traffic with Wireshark and saw that all 10 requests were received by the WCF service host. However, we know that 4 of them weren't executed.
(Timeout values are not configured on the binding, which means they're all set to their defaults. Also, the WCF service is hosted in IIS.)
What's the problem here? Where can I check? What can we do to achieve 10 executions out of 10 even if the client disconnects?
What can we do to achieve 10 executions out of 10 even if the client disconnects?
You can make it the default behavior. Use [OperationContract(IsOneWay=true)] to create a one-way contract where the client does not wait for a reply, but simply disconnects after sending the message.
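For example (contract and operation names are illustrative):

```csharp
// A one-way contract: the client returns as soon as the message is sent,
// so execution no longer depends on the client staying connected.
using System.ServiceModel;

[ServiceContract]
public interface IJobService
{
    [OperationContract(IsOneWay = true)]
    void Execute(string payload); // one-way operations must return void
}
```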
Since you really need the service call to complete even if the client disconnects, I assume there is a database transaction that you need to finish.
If the WCF service is connecting to a database, this behavior would be normal, especially if you are using the same database user and password; if that is the case, try to connect once for all WCF instances.
Either way, you have to make sure that your WCF service provides concurrent access; see the documentation on concurrent WCF access for more information.

How to transfer context to a WebSocket session on reconnect?

I am working on a web application in C#, ASP.NET, and .NET Framework 4.5 that uses WebSockets. In order to plan for future scalability, the application pool has web gardens enabled to simulate multiple web servers on my single development machine.
The issue I am having is how to handle re-connects on the websocket side. When a new websocket session is initially created, the client browser can indirectly lock records in a SQL database. But when the connection is lost, my boss would like the browser to attempt to re-connect to the same instance of the websocket server session so it doesn't need to re-lock anything.
I don't know if something like this is possible, because on re-connect the load balancer will "randomly" select which web server handles the new connection. I was thinking of a hack to work around this, but it isn't very clean:
Client opens initial websocket connection on Server A and locks a record.
Client temporarily loses internet connection and the websocket closes. (It is important to note that the server side will wait up to 60 seconds before it "disposes" itself; therefore, the SQL record will remain locked until the 60 seconds has elapsed).
Client internet connection is restored and reconnects to the website but this time on Server B.
Server B sees that this context was initially connected on Server A and therefore transfers the session to Server A.
Server A checks the process id to see if it is running in the correct worker process (in the case of a web garden).
Server A has found the initial instance and handles the connection.
I tried Googling this question, but it doesn't seem like a very common issue, because I don't think most websocket web apps keep records locked for as long as my application does (which could be up to an hour).
Thanks in advance for all of your help!
Update 3/15/2016
I was hoping that Server.TransferRequest would have been helpful; however, it doesn't seem to work for web sockets. Would anyone know of the best way to transfer a websocket context from one process to another?
First, you might want to re-examine why you're locking records for a long time and requiring a client to come back to the same server every time. That is not the usual type of high-scale web architecture, and perhaps you're creating this need to reconnect to the identical server only because of that requirement, when maybe you should rethink how it is designed so that your application would work just fine no matter which host a user connects to.
That would certainly simplify scaling to large numbers of users and servers if you could remove that requirement. You can always then implement local caching and semi-sticky connections later as a performance enhancement, but only after you release the requirement to 100% of the time connect to the same host.
If you're going to stick with that requirement to always connect to the same host, then you will ultimately need some sort of sticky load balancing. There are a lot of different schemes. Some are driven by the networking infrastructure in front of your server, some are driven by your server and some are even client driven. They all have different tradeoffs. Here's a brief run-down of some of the schemes:
Hardware, networking load balancer. Here you have a fairly transparent mechanism by which a hardware load balancer (which is really just software running on a custom piece of hardware) sits in front of your web server farm and uses various techniques to make sure that whatever server a given user was originally connected to, they will be reconnected to on subsequent connections. This can be based on various keys for identifying a particular user (IP address, cookie value, etc...) and typically has a number of possible configurations for how it can work.
Proxy load balancer. This is essentially an all software version of the hardware load balancer. Here a proxy sits in front of your server farm and directs connections to a particular server based on some algorithm (IP address, cookie value, etc...).
Server Redirect. Here an incoming connection is randomly assigned to a server. Upon connection, the server figures out where the connection is supposed to go and returns a 302 redirect to the actual host, causing the client to reconnect to the proper server. This involves one less layer of infrastructure (no physical load balancers), but it exposes the different server endpoints to the outside world, which the first two options do not.
Client Selection Algorithm. Here the client is given knowledge of the various server endpoints and is coded with an algorithm for consistently selecting one for this user. It could be a hash of a userID that is then divided into the server bucket pool, with the end result that the client ends up choosing a particular DNS name such as cl003.myserver.com, which it then connects to. This choice requires the least work server-side, so it can be simpler to implement, but it requires changing the client code in order to modify the algorithm (see the sketch below).
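To make that last scheme concrete, here is a minimal sketch (the pool size and the clNNN.myserver.com naming are illustrative):

```csharp
// Hash a user id into a fixed bucket pool and derive a stable host name.
// FNV-1a is used because string.GetHashCode() is not guaranteed to be
// stable across processes or runtime versions.
static string PickServer(string userId, int poolSize = 16)
{
    uint hash = 2166136261;
    foreach (char c in userId)
        hash = (hash ^ c) * 16777619;

    int bucket = (int)(hash % (uint)poolSize);
    return $"cl{bucket:D3}.myserver.com"; // e.g. cl003.myserver.com
}
```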
For an article on sticky load balancing for Amazon Web Services to give you an idea on how one mechanism works, you can read this: Elastic Load Balancing: Configure Sticky Sessions for Your Load Balancer.
Here's another article on how the nginx proxy is configured for sticky load balancing.
You can find lots of other articles with a Google search for "sticky load balancing".
A discussion of the pros/cons of the various schemes is the subject of a much longer discussion and some of it involves knowledge of more specific requirements and specific capabilities of your infrastructure.

Is it possible that repeated OnConnected will call before previous OnDisconnected?

Imagine some spherical horse in a vacuum:
I lost control of my client application; maybe some error happened. And I tried to re-enter the hub immediately.
Is it possible that OnConnected starts faster than OnDisconnected and I turn up twice on the server?
Edited:
Sorry, I didn't say that I meant the SignalR library. I think that if my application doesn't call stop(), the server will wait about 30 seconds by default. So I could connect to the server again before OnDisconnected is called, couldn't I?
You'll have to look at it from the client's side; also note that if you're using TCP, the following would take place:
TCP ensures that your packets arrive in the order they were sent. So let's imagine that at the moment the "horse" hit the vacuum and the connection broke, your server was sending the next packet that would check the connection (if you implemented your server well enough, that is).
Here, there are two things that may happen:
The client has already recovered and can respond in time, meaning the interval during which the connection had problems was small enough that the next packet from the server hadn't arrived yet. So, responding to your question: there's no disconnection in the first place.
The next packet from the server arrived, but the client is not responding (the connection is severed). The server would instantly take note of this, raising the OnDisconnected event. If the client recovers at virtually the same time the server takes note, then it would initiate another connection (OnConnected).
So there's no chance that the client would turn up twice. If anything, the disconnection interval will be small enough that the server doesn't notice the problem in the first place.
Again, another protocol may behave differently, but TCP is designed to guarantee a well-established connection and reliable communication between a server and its clients.
It's worth mentioning that many communication frameworks (if not all) use TCP by default.
A client can connect a second time while the first connection is open (it will have a separate connection id though).
If the client doesn't manage to notify the server that it's closing the connection, the server will wait for a certain amount of time before removing the connection (DisconnectTimeout).
So in that case, if you restart the connection immediately, it will be a new logical connection to the server with a new connection id.
SignalR will also try to reconnect to the existing connection when it is lost, in which case it would retain its connection id once reconnected. I would recommend reading the entire article about SignalR connection lifetime events.
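For reference, the three lifetime hooks look like this on a SignalR 2.x hub (the hub name is illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    public override Task OnConnected()
    {
        // A new logical connection: Context.ConnectionId is new.
        return base.OnConnected();
    }

    public override Task OnReconnected()
    {
        // A transport-level reconnect: the connection id is retained.
        return base.OnReconnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        // stopCalled == false means the server waited out DisconnectTimeout.
        return base.OnDisconnected(stopCalled);
    }
}
```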

Webservice for serial port devices

I want to create a remote web service for an application that is currently available only locally. This application controls three devices (each controlled separately) connected over a serial port. The problem is that I don't know how to handle passing back the information that a device has returned the requested data. For example, I send a move command to the motion device (which is very slow and can take a minute or more). Can I just set a big timeout on the client side (and server side) and return, for example, true/false when the operation completes, or is this a bad idea? Is SOAP with big timeouts OK?
And the other question: is Mono on Linux (Ubuntu 9.10, Mono 2.4) stable enough for making a web service, or should I choose Java or some other language?
I'm open for recommendations.
Thanks for your help!
Using big timeouts is not a good idea. It wastes resources on both the server and the client, and you will not be able to detect a "true" timeout condition (for example, when the server is unavailable) before the allocated timeout expires.
You really have two options. The first is to use polling: return immediately from the motion-request command, acknowledging the reception of the command (not its completion), then send requests at regular intervals asking whether the command has completed (a sketch follows below).
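A rough sketch of the polling shape, ASMX-style since that is what Mono 2.4 handles well (all names and the in-memory job table are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Web.Services;

public class MotionService : WebService
{
    static readonly Dictionary<Guid, bool> Jobs = new Dictionary<Guid, bool>();
    static readonly object Sync = new object();

    [WebMethod]
    public Guid BeginMove(int deviceId, double position)
    {
        Guid ticket = Guid.NewGuid();
        lock (Sync) Jobs[ticket] = false;

        ThreadPool.QueueUserWorkItem(delegate
        {
            MoveDevice(deviceId, position);  // the slow serial-port work
            lock (Sync) Jobs[ticket] = true; // mark complete
        });

        return ticket; // acknowledges receipt, not completion
    }

    [WebMethod]
    public bool IsMoveComplete(Guid ticket)
    {
        lock (Sync)
        {
            bool done;
            return Jobs.TryGetValue(ticket, out done) && done;
        }
    }

    static void MoveDevice(int deviceId, double position)
    {
        Thread.Sleep(5000); // placeholder for the actual serial-port command
    }
}
```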
The other alternative requires the client to register a callback endpoint, which the server calls when the motion completes. This makes the whole process asynchronous, but it requires the client to be able to operate in server mode. This is very easy to do with WCF; I don't know, however, whether this functionality is available in Mono.
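For comparison, the callback alternative expressed as a WCF duplex contract (names are illustrative; whether Mono supports duplex is exactly the open question):

```csharp
using System.ServiceModel;

// The client hosts IMotionCallback; the server calls back when the slow
// serial-port operation finishes.
[ServiceContract(CallbackContract = typeof(IMotionCallback))]
public interface IMotionDuplexService
{
    [OperationContract(IsOneWay = true)]
    void BeginMove(int deviceId, double position);
}

public interface IMotionCallback
{
    [OperationContract(IsOneWay = true)]
    void MoveCompleted(int deviceId, bool success);
}
```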
Not directly related to your question, but consider com0com and its friends hub4com and com2tcp.
