Should I disable WCF reliable sessions for Intranet scenarios? - c#

The scenario is a web server in a DMZ that talks to a WCF server for all database related calls.
All calls are server-to-server on an intranet, either over netTcp or wsHttp, from an ASPX page that calls an SVC service.
Theoretically speaking, should I take action to disable the reliable session features, or should I enable them, or would it make no difference?
It appears that reliable sessions introduce configuration risk (i.e., failures because WCF is so difficult to configure).
That risk buys nothing if no message ever fails to travel from one intranet server to the other and no message ever arrives out of order.
I wish I could load test this and monitor for dropped messages, but my available test environments differ greatly from the production environment with respect to network reliability.
Note: The users are not using WCF clients; they are just using ordinary web browsers talking to an ASPX page. All WCF activity is on the intranet side of the firewall.

According to MSDN:
If your scenario has any of the following characteristics, then you should consider using a WCF reliable session:
SOAP intermediaries, such as SOAP routers.
Proxy intermediaries or transport bridges.
Intermittent connectivity.
Sessions over HTTP.
I don't think it matters whether this is an intranet or a DMZ; it depends on your requirements.
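If you do decide to control the feature explicitly rather than rely on the binding defaults, it is a small switch. A minimal sketch, assuming programmatic binding setup rather than config-file configuration:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;

    class BindingSetup
    {
        // netTcpBinding case; WSHttpBinding exposes the same constructor overload.
        static Binding CreateIntranetBinding(bool useReliableSession)
        {
            var binding = new NetTcpBinding(SecurityMode.Transport, useReliableSession);
            if (useReliableSession)
            {
                binding.ReliableSession.Ordered = true;                      // guarantee in-order delivery
                binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10);
            }
            return binding;
        }
    }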

Related

How to transfer context to a WebSocket session on reconnect?

I am working on a web application in C#, ASP.NET, and .NET Framework 4.5 that uses WebSockets. To plan for future scalability, the application pool has web gardens enabled to simulate multiple web servers on my single development machine.
The issue I am having is how to handle re-connects on the websocket side. When a new websocket session is initially created, the client browser can indirectly lock records in a SQL database. But when the connection is lost, my boss would like the browser to attempt to re-connect to the same instance of the websocket server session so it doesn't need to re-lock anything.
I don't know if something like this is possible because on re-connect the load balancer will "randomly" select which web server to handle the new connection. I was thinking of some hack to work around this but it isn't very clean:
Client opens initial websocket connection on Server A and locks a record.
Client temporarily loses internet connection and the websocket closes. (It is important to note that the server side will wait up to 60 seconds before it "disposes" itself; therefore, the SQL record will remain locked until the 60 seconds has elapsed).
Client internet connection is restored and reconnects to the website but this time on Server B.
Server B sees that this context was initially connected on Server A; therefore, transfers the session to Server A.
Server A checks the process id to see if it is running in the correct worker process (in the case of a web garden).
Server A has found the initial instance and handles the connection.
I tried Googling this question but it doesn't seem like a very common issue, because I don't think most websocket web apps keep records locked for as long as my application does (which could be up to an hour).
Thanks in advance for all of your help!
Update 3/15/2016
I was hoping that the Server.TransferRequest would have been helpful however it doesn't seem to work for web sockets. Would anyone know of a way to best transfer a websocket context from one process to another?
First, you might want to re-examine why you're locking records for a long time and requiring a client to come back to the same server every time. That is not the usual type of high-scale web architecture, and perhaps you're only creating this need to reconnect to the identical server because of that requirement; it may be worth rethinking the design so that your application works fine no matter which host a user connects to.
That would certainly simplify scaling to large numbers of users and servers if you could remove that requirement. You can always then implement local caching and semi-sticky connections later as a performance enhancement, but only after you release the requirement to 100% of the time connect to the same host.
If you're going to stick with that requirement to always connect to the same host, then you will ultimately need some sort of sticky load balancing. There are a lot of different schemes. Some are driven by the networking infrastructure in front of your server, some are driven by your server and some are even client driven. They all have different tradeoffs. Here's a brief run-down of some of the schemes:
Hardware, networking load balancer. Here you have a fairly transparent mechanism by which a hardware load balancer (which is really just software running on a custom piece of hardware) sits in front of your web server farm and uses various techniques to make sure that whatever server a given user originally connected to is the one they are reconnected to on subsequent connections. This can be based on various keys for identifying a particular user (IP address, cookie value, etc...) and typically has a number of possible configurations for how it can work.
Proxy load balancer. This is essentially an all software version of the hardware load balancer. Here a proxy sits in front of your server farm and directs connections to a particular server based on some algorithm (IP address, cookie value, etc...).
Server Redirect. Here an incoming connection is randomly assigned to a server. Upon connection, the server figures out where the connection is supposed to go and returns a 302 redirect to the actual host, causing the client to reconnect to the proper server. This involves one less layer of infrastructure (no physical load balancers), but exposes the different server endpoints to the outside world, which the first two options do not.
Client Selection Algorithm (sketched below). Here the client is given knowledge of the various server endpoints and is coded with an algorithm for consistently selecting one for this user. It could be a hash of a userID that is then divided into the server bucket pool, with the end result that the client ends up choosing a particular DNS name such as cl003.myserver.com, which it then connects to. This choice requires the least work server-side, so it can be simpler to implement, but it requires changing the client code in order to modify the algorithm.
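To make that last scheme concrete, here is a minimal sketch of a client selection algorithm; the host pool and the hash are illustrative assumptions, not part of the original answer:

    using System;

    static class ServerSelector
    {
        // Hypothetical pool of server DNS names; in practice this list would
        // ship with the client or be fetched from a well-known endpoint.
        static readonly string[] Pool =
        {
            "cl001.myserver.com", "cl002.myserver.com", "cl003.myserver.com"
        };

        // Deterministically map a user id onto one host so the same user
        // always reconnects to the same server.
        public static string SelectServer(string userId)
        {
            int hash = 17;
            foreach (char c in userId)
                hash = unchecked(hash * 31 + c);   // stable hash; string.GetHashCode can vary across processes
            return Pool[(hash & 0x7FFFFFFF) % Pool.Length];
        }
    }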
For an article on sticky load balancing for Amazon Web Services to give you an idea on how one mechanism works, you can read this: Elastic Load Balancing: Configure Sticky Sessions for Your Load Balancer.
Here's another article on how the nginx proxy is configured for sticky load balancing.
You can find lots of other articles with a Google search for "sticky load balancing".
Weighing the pros and cons of the various schemes is the subject of a much longer discussion, and some of it involves knowledge of your more specific requirements and of the specific capabilities of your infrastructure.

WCF Scalability with Session

We are evaluating a new project that will have a .NET server available on the internet. We have access to the server, but the hosting is done by a 3rd-party company.
We are evaluating using WCF on the .NET server (I have no professional experience with WCF and am just reading into the topic). The WCF service will talk to a SQL Server to perform its duties.
Here is the scenario:
Multiple client machines running our own ActionScript software will connect to that .NET Server.
Clients might be online 24/7 and should periodically poll our server to tell the server that they are there.
A client needs to be able to log in, and only if the login has worked will the other calls be allowed; at some point it logs out. So we need to "remember" the state of a particular client...
Highest expected load is around 1000 Clients, of which 500 will only do polling while the other 500 will be "active". "Active" means a maximum of 1 call each minute, no heavy payload in each call, neither in the request nor in the response, just 1-3 database accesses per call.
We already tested some "HelloWorld" with ActionScript and WCF using BasicHttp(s)Binding.
But because we need session handling, we were thinking about using the wsHttpBinding binding because it can provide us with WCF sessions.
So far so good, but then I stumbled upon warnings that this may not scale well.
My O'Reilly Programming WCF Services book (3rd edition, page 177) warns about it, and even Microsoft writes to be careful using sessions:
http://msdn.microsoft.com/en-us/magazine/cc163590.aspx
"A service configured for private sessions cannot typically support more than a few dozen (or perhaps up to a few hundred) outstanding clients due to the cost associated with each such dedicated service instance."
So, because we need to identify the state of each client, we could of course implement our own session handling on top of the stateless BasicHttpBinding and call into that session-handling class each time one of my WCF methods gets called, but I am reluctant to do anything like that; it looks to me like thousands of people must have already faced the same problem.
So, my question now is:
Do you think wsHttpBinding on my server could handle that load?
How "bad" is it really to go with wsHttpBinding on WCF? Does anybody already have experience with this? Can I use it? What would you use?
Final Remarks:
I am not limited to WCF; if we don't like it, we are just supposed to do an evaluation.
From the company's point of view it would also be fine to go for a protobuf-RPC or XML-RPC solution over TCP, with the ActionScript clients implementing that (just examples!). So there is no need to host WCF in IIS on the server, as long as the coding part is comfortable enough for the programmers on both sides and the administration of the deployed server is not too much work either. With plain TCP-port-based communication I am a bit afraid of what it would mean for administration with regard to firewalls and the like. Payload is not an issue, and client processing power is also not an issue. The only things I am concerned about are scalability of the server and security.
Thanks in advance for any suggestions!
I would not be concerned with scalability. You can always add a server or two to your farm in case of issues.
I would rather be concerned with your architecture and the need to store anything in session - are you sure about that?
Note that you don't need wsHttpBinding just to keep per-client state; over the basic binding you can pass your own session token with each call and keep the service itself stateless.
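A minimal sketch of that kind of hand-rolled session tracking over a stateless binding (the contract and member names are made up for illustration):

    using System;
    using System.Collections.Concurrent;
    using System.ServiceModel;

    [ServiceContract]
    public interface ILoginService                  // hypothetical contract
    {
        [OperationContract]
        Guid Login(string user, string password);

        [OperationContract]
        string DoWork(Guid sessionToken);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class LoginService : ILoginService
    {
        // Shared across per-call instances because it is static.
        static readonly ConcurrentDictionary<Guid, string> Sessions =
            new ConcurrentDictionary<Guid, string>();

        public Guid Login(string user, string password)
        {
            // Validate credentials against your database here.
            Guid token = Guid.NewGuid();
            Sessions[token] = user;
            return token;
        }

        public string DoWork(Guid sessionToken)
        {
            string user;
            if (!Sessions.TryGetValue(sessionToken, out user))
                throw new FaultException("Not logged in.");
            return "Hello, " + user;
        }
    }

In a real system you would also expire tokens (for example, a timestamp per entry swept by a timer), which the periodic client polling described above could piggyback on.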

WCF and wsHttpBinding - Message encryption

I'm working on a client-server project implemented using WCF. The clients are deployed on different machines and communicate with the services over the internet. I'm relatively new to WCF and am a bit confused about choosing the appropriate binding for my web services. The clients need to be authorized to perform operations; however, I'm implementing my own authentication algorithm and trying to avoid Windows authentication for various reasons, but I still need to make sure the message transferred in the channel is encrypted.
Right now I'm using wsHttpBinding with the security mode set to Message.
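The snippet below is a reconstruction of the settings described in this question (Message security, Windows client credentials, negotiated service credential), expressed programmatically; it is not the poster's exact configuration:

    using System.ServiceModel;

    class BindingFactory
    {
        static WSHttpBinding CreateBinding()
        {
            var binding = new WSHttpBinding(SecurityMode.Message);
            binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
            binding.Security.Message.NegotiateServiceCredential = true;
            return binding;
        }
    }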
I've set the authentication type in IIS to Anonymous Authentication to make sure the requests are passed through, and was expecting a service call to fail since MessageClientCredentialType in my binding is explicitly set to Windows. However, when I run the code, the service successfully gets called and returns the expected values. I have a feeling that I'm missing something - why is the call authorized? Can I make sure the message is still encrypted even though authentication type is set to Anonymous? Any help is appreciated.
Edit
To clarify on this, I tested the service with a client deployed to a machine outside the network on a different domain.
This MSDN article kind of sums up a lot of security issues relevant to WCF
http://msdn.microsoft.com/en-us/library/ms733836.aspx
Regarding your specific situation:
negotiateServiceCredential="true" means the service credential used for message encryption is negotiated with clients at call time, streamlining certificate distribution.
This option only works with Windows clients and has some performance cost.
Read more here: http://msdn.microsoft.com/en-us/library/ff647344.aspx
Search for the topic "streamline certificate distribution" on that page.
Which account do you use to make the call to the service? Allowing anonymous in IIS lets your request pass through to the service, and the service will authenticate the caller if it presents credentials Windows understands (Active Directory/NTLM).
In your case, I think you are testing it in your own environment, so the service responds. Once you deploy it over the internet, I doubt your service will allow anybody outside of your domain if you keep clientCredentialType set to Windows.
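Since the goal here is custom authentication with the message still encrypted, one standard WCF route is Message security with UserName credentials checked by a custom validator. A minimal sketch (the credential check is illustrative only); the validator is registered through the serviceCredentials/userNameAuthentication service behavior, and the client binding uses MessageCredentialType.UserName:

    using System.IdentityModel.Selectors;
    using System.ServiceModel;

    public class CustomUserNameValidator : UserNamePasswordValidator
    {
        public override void Validate(string userName, string password)
        {
            // Replace with your own authentication algorithm / database lookup.
            if (userName != "demo" || password != "demo")      // illustrative check only
                throw new FaultException("Unknown username or incorrect password.");
        }
    }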
Check these links for securing services on the internet:
http://msdn.microsoft.com/en-us/library/ms734769.aspx
http://msdn.microsoft.com/en-us/library/ms732391.aspx

Multi Client/Server Discovery in C#

I'm developing a multiple client / multiple server program in C#, and before I got down to the nitty gritty, I was wondering if anyone has ever worked on a similar project and might be able to share their tips / ideas for implementation.
The servers will sit on many PCs, and listen for incoming connections from clients (Or should the Servers broadcast, and the clients listen?).
When a client starts, it should populate a list of potential server IP addresses automatically.
When a server closes, the client should remove that server from its list.
When a new server starts, the clients should be notified and have it added to their list.
A server may also act as a client, and should be able to see itself, as well as all other servers.
A message sent from a client to the server, that affects the server, should broadcast the change to all connected clients.
Should my server be a Windows Service? What advantages/disadvantages does that present?
Any ideas on how I might go about getting started on this? I've been looking into UDP Multicast, and LAN Scans. I'm using C# and .NET 4.0
EDIT: Found this: http://code.google.com/p/lidgren-network-gen3/ Does anyone have any experience with it and can recommend/not recommend it?
I would suggest WCF communications with NetPeerTcpBinding to create a peer mesh. Clients and servers would all join the mesh using a peer resolver. You can use PNRP or create a custom peer resolver (.NET actually provides you with an implementation called CustomPeerResolverService). See the Peer-to-Peer Networking documentation.
Also you can implement a Discovery service using DiscoveryProxy. With a discovery service, services can announce their endpoints. The discovery service can then service find requests (see FindCriteria) to return endpoints that match the requests. This is referred to as Managed Discovery. Another mode is Ad Hoc Discovery. Each service will announce their endpoints via UDP and discovery clients will probe the network for these endpoints.
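As a small illustration of the Ad Hoc mode, a client can probe the network using the types in System.ServiceModel.Discovery; the contract below is a made-up placeholder:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Discovery;

    [ServiceContract]
    public interface IMyService                     // hypothetical contract to search for
    {
        [OperationContract]
        string Ping();
    }

    class DiscoveryDemo
    {
        static void Main()
        {
            // Probe the local network over UDP for endpoints implementing IMyService.
            var client = new DiscoveryClient(new UdpDiscoveryEndpoint());
            FindResponse response = client.Find(new FindCriteria(typeof(IMyService)));
            foreach (EndpointDiscoveryMetadata endpoint in response.Endpoints)
                Console.WriteLine(endpoint.Address);
            client.Close();
        }
    }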
I have actually implemented a Managed Discovery service in combination with peer-to-peer WCF networking to provide a redundant mesh of discovery services that all share published service endpoints via P2P. I have found Managed Discovery performs far better, since Ad Hoc Discovery's UDP probing is slower and has some limitations crossing certain network boundaries, while Managed Discovery leverages a centralized repository of announced service endpoints.
Either or both technologies, I think, can lead to your solution.
So this is effectively a peer-to-peer style network (almost like BitTorrent), where all servers are clients but not all clients are servers, and the requirement is that every client should hold a list of all the servers (which are, in turn, clients).
The problem lies in getting the server IPs to the clients in the first place. You can use a master server with a fixed DNS name to act as a kind of tracker, which all of the servers check in to and the clients poll periodically.
Another option (or an additional method) is to use a peer-exchange style system, where the clients and servers use UDP broadcast packets over a local network to discover each other and then exchange the servers they know of, rather like a routing protocol. However, if the PCs are spread out over a non-local network such as the internet, there's little chance they will ever discover each other on their own, making this method useful only in conjunction with other ways of finding servers. Also, you will probably have to deal with router UPnP to let clients connect through each other's router NAT, so this method is probably too complex for the gains you get. (However, if you're just on a LAN, this is all you need; see the sketch below.)
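Here is a minimal sketch of the LAN side of that idea: broadcast a probe over UDP and collect whoever answers. The port number and message format are arbitrary assumptions:

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class LanDiscovery
    {
        const int Port = 9050;                      // arbitrary port for this sketch

        static void Main()
        {
            // Broadcast a probe; listening servers are assumed to reply with their details.
            var udp = new UdpClient();
            udp.EnableBroadcast = true;
            byte[] probe = Encoding.UTF8.GetBytes("DISCOVER");
            udp.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, Port));

            udp.Client.ReceiveTimeout = 2000;       // collect replies for two seconds
            try
            {
                while (true)
                {
                    IPEndPoint sender = null;
                    byte[] reply = udp.Receive(ref sender);
                    Console.WriteLine("Server found at {0}: {1}",
                        sender, Encoding.UTF8.GetString(reply));
                }
            }
            catch (SocketException)
            {
                // Timeout: done collecting replies.
            }
        }
    }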
A third option (and again, this sounds a lot like torrent technology) is to use Distributed Hash Tables to store information about the IPs of your servers across the peers themselves, without having to rely on a central master server.
I have had a shot at a project like this before (a pure P2P, server-less messaging system) but could never get it to work. Without a huge number of peers, or a master server to track all of the other servers, it is very difficult to reliably retrieve the IPs of all the servers.

WCF vs. WCF Duplex vs. Sockets

I posted about this before to a degree, but after a few days of reading I have a better understanding of WCF and would like to get a bit of feedback before I start working on it.
I basically need to develop a server/client system. The "server" application (a C# .NET console app) will be run on a machine with a MySQL database, all software installation packages, and whatever else we need local to it. The "client" application (a C# .NET console app) will be run on the rest of our machines and will maintain a direct connection to the server software. Using a web front-end, our administrators will be able to install software packages on the clients, create new services, etc.
Since we own all of the machines, and have to configure them anyways, Server Push is not a problem. We don't have to worry about firewalls or any sort of NAT settings as we can just go in and open the ports required for it to operate.
What initially confused me about WCF is that I associated a "WCF service" with a server. However, since the majority of operations are actually going to be run on the "WCF service", this is my logic:
1) Make the "client" application actually a "WCF service", so that the exposed functions are run on the proper machines.
2) Make the "server" application actually a "WCF client", issue all of the instructions/commands from there, and just use the return values to update the database, etc.
Would this be the proper method to follow, or should I look into WCF duplex (which looked extremely confusing at first glance), or just start with raw sockets?
From what I gather of what you're trying to do, you're correct. That is, the client machines should really have a TCP/IP "server" running on them, and the central server machine would have the TCP/IP "client".
That way the TCP/IP client (the app running on your server machine) can initiate calls to each of the client machines; a sketch follows below.
Keep in mind also that a single application can be both a TCP/IP client and server. So the app that's running on your server machine could in turn also be a TCP/IP server that your admins use to do stuff via a browser, which effectively means that service is an HTTP service.
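A minimal sketch of that arrangement, with each client machine self-hosting a WCF service that the central machine then calls; the contract, address, and port are illustrative assumptions:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IAgent                         // hypothetical contract exposed by each client machine
    {
        [OperationContract]
        string InstallPackage(string packageName);
    }

    public class Agent : IAgent
    {
        public string InstallPackage(string packageName)
        {
            // Run the installation locally and report the result back to the hub.
            return "installed " + packageName;
        }
    }

    class Program
    {
        static void Main()
        {
            // Each "client" machine hosts this endpoint; the central server
            // machine creates a ChannelFactory<IAgent> to call into it.
            using (var host = new ServiceHost(typeof(Agent),
                new Uri("net.tcp://localhost:9000/agent")))
            {
                host.AddServiceEndpoint(typeof(IAgent), new NetTcpBinding(), "");
                host.Open();
                Console.WriteLine("Agent listening; press Enter to exit.");
                Console.ReadLine();
            }
        }
    }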
So, it is not a client/server thing; it is a hub-and-spoke arrangement of distributed computing. WCF can very well be used: you have multiple servers and a coordinator (the client to all of these servers) that gets the work done by the various servers and updates the database.
So WCF is well suited for you. The benefit of WCF is the easy configurability and its handling of the communication part; you don't have to take on the pain of managing sockets yourself.
