My question is regarding the number of persistent connections. (Obviously there will be some performance loss when adding SSL encryption.)
Currently I have C# server and client applications that use the asynchronous socket model (http://msdn.microsoft.com/en-us/library/w89fhyex.aspx). I chose this model because it seems best suited for performance, and the server application must support 5000 persistent socket connections.
I am at the point where I must secure the data being sent, and I am hoping to use SSL. Would I be able to change to SslStream and still support 5000 persistent connections? (I noticed that SslStream has asynchronous methods... Also, the reading I have done indicates that a stream is different from a socket, yet closely related...)
One caveat is that the server does not only communicate with other C# clients; it also talks to iOS and Android devices.
Is there any way to layer SSL on top of the asynchronous socket model?
If you can use third-party components, then the SSL components of our SecureBlackbox product fit your task perfectly: they provide an SSL layer that can be plugged into absolutely any transport, be it a synchronous or asynchronous socket or even pigeon mail. And the overall feature list is much wider than that of the built-in SSL support.
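If you stay with the built-in classes instead, SslStream can be layered directly over the NetworkStream of an accepted Socket and used with its asynchronous methods, so the async model itself does not have to change. Below is a minimal sketch of the server side under that assumption; handlerSocket and serverCertificate are placeholders for an already-accepted socket and a loaded X509Certificate2:

```csharp
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

// Sketch: wrap an already-accepted socket in SslStream (server side).
static async Task<SslStream> SecureAcceptedSocketAsync(
    Socket handlerSocket, X509Certificate2 serverCertificate)
{
    var networkStream = new NetworkStream(handlerSocket, ownsSocket: true);
    var sslStream = new SslStream(networkStream, leaveInnerStreamOpen: false);

    // Perform the TLS handshake asynchronously; no client certificate required here.
    await sslStream.AuthenticateAsServerAsync(serverCertificate,
        clientCertificateRequired: false,
        enabledSslProtocols: SslProtocols.Tls12,
        checkCertificateRevocation: false);

    // From here on, use sslStream.ReadAsync / WriteAsync instead of raw socket calls.
    return sslStream;
}
```

Since the handshake is standard TLS, the iOS and Android clients can connect using their own platform TLS libraries; nothing on the wire is C#-specific.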
I have a C# application that uses a native library which sends video to another IP address over the internet using UDP. I have no control over that library's traffic.
My application also calls web services on another server using WebRequest, and I do have control over those.
The problem is:
When my internet bandwidth is low, the video stream uses all of it, so I fail to get responses from my web service calls during that time.
Is there any way to prioritize the WebRequest calls, or reserve some bandwidth for them, so that I can get responses reliably?
I do not know of any method in C# that can prioritize traffic in this way.
I know this is not quite a Stack Overflow kind of answer, but this is how I have kept streaming services from killing the bandwidth in my environments when I had no access to proper networking infrastructure (which would be the "proper" way of doing this).
Once you decide which method you are going to use, I recommend taking a look at https://superuser.com, which should help with any stumbling blocks you hit while implementing the solution.
Solution One.
Split your application into two services that communicate through a REST API or a database poll.
Then use a network limiting program to prioritize the traffic of one of the services.
https://www.netlimiter.com/ and https://netbalancer.com/ are examples of software that can do this but there are many more.
Advantage: You will have dynamic throttling of your streaming service.
Drawbacks: You will have to run another program on the server, and it's definitely not free.
Solution Two.
Use IIS. There's a built-in throttle in IIS (https://www.iis.net/configreference/system.applicationhost/weblimits); look at maxGlobalBandWidth.
Then you have two websites that communicate through REST or a database poll.
Advantage: Simple out of the box solution.
Drawbacks: Your limits are not dynamic and live in your config file.
Note that you should not use this method if your internet connection speed varies much.
It is pretty straightforward to set up a UDP relay server for simple UDP streams, which you can then use to throttle the traffic as needed. You can put this inside your application so everything is self-contained and your relay server knows when web requests are being made. Create one UdpClient that receives traffic on 127.0.0.1 and have the video streaming library connect to that instead of your actual server. Then create another UdpClient that relays the traffic to the actual destination you normally connect to with the library.
You can limit bandwidth in any number of ways with this method, and how you do it will ultimately depend on your requirements. If pausing is acceptable, you can simply stop forwarding UDP frames whenever you start a web request and resume after you get a response. If not, you can track the average UDP frames per second as you relay data and dynamically rate limit to 50% (or whatever) of that by inserting appropriate delays into your relay server while a web request is pending.
You can look here for an example of a simple UDP relay server implementation for DNS requests, the basic principle would be the same:
https://social.msdn.microsoft.com/Forums/en-US/ce062e62-395f-4110-a4dd-3e9ed3c88286/udp-relay-server?forum=netfxnetcom
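As an illustration of the pause-while-requesting idea above, here is a minimal one-direction relay sketch using UdpClient; the Paused flag, the local port, and the destination endpoint are assumptions, and a real relay would also need to handle return traffic and shutdown:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class UdpRelay
{
    // Set to true while a web request is in flight to pause forwarding.
    public static volatile bool Paused;

    public static async Task RunAsync(int localPort, IPEndPoint destination)
    {
        using (var listener = new UdpClient(new IPEndPoint(IPAddress.Loopback, localPort)))
        using (var forwarder = new UdpClient())
        {
            while (true)
            {
                // Receive a datagram from the video library (pointed at 127.0.0.1:localPort).
                UdpReceiveResult datagram = await listener.ReceiveAsync();

                // Simple throttle: hold frames back while a web request is pending.
                while (Paused)
                    await Task.Delay(10);

                // Forward the datagram to the real destination.
                await forwarder.SendAsync(datagram.Buffer, datagram.Buffer.Length, destination);
            }
        }
    }
}
```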
I have an application in C# that is a TCP server listening on a port. GPS devices connect to this port. The application accepts the TCP clients and creates a new thread for each client. The client ID is maintained in a hash table that is updated when a client connects. This was all working fine until around 400 units. Once the number of units increased, the server was unable to handle all the connections: connections are continuously dropped, and once in a while this ends up eating the server's CPU and memory and brings it down. The workaround was to open another instance of the TCP server listening on a different port and divert some units to that port. Currently some 1800 units are somehow running across 8 different ports. The server is extremely unstable, units are still unable to stay connected, and we face too many issues on a daily basis. We also use remoting to send settings via the remoting port; this works only some of the time.
Please help by suggesting a TCP socket/threading/thread-pooling approach that is both scalable and robust and can run on a single port.
This TCP server is running on Windows Server 2008 R2 Enterprise with IIS 7 and SQL Server 2008.
Processor: Intel Xeon CPU E3-1270 V2 @ 3.50GHz
RAM: 32GB
System: 64-bit operating system
Thanks
Jonathan
Basically, don't use a thread per socket; use one of the async APIs (BeginReceive / ReceiveAsync), or some kind of socket polling (Socket.Select for example, although note that this is implemented in a very awkward way; when I use this, I actually use P/Invoke to get to the raw underlying API). Right at this moment, I have > 30k sockets per process talking to our web-sockets server (which is implemented via Socket). Note that for OS reasons we do split that over a few different ports, mainly due to limitations of our load-balancer.
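A minimal sketch of the no-thread-per-connection idea using the Task-based wrappers (TcpListener.AcceptTcpClientAsync and NetworkStream.ReadAsync); buffer management and the GPS protocol parsing are left out:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// Sketch: one listener, one lightweight async loop per client, no dedicated threads.
static async Task RunServerAsync(int port)
{
    var listener = new TcpListener(IPAddress.Any, port);
    listener.Start();

    while (true)
    {
        TcpClient client = await listener.AcceptTcpClientAsync();
        _ = HandleClientAsync(client);   // fire-and-forget; add error handling in real code
    }
}

static async Task HandleClientAsync(TcpClient client)
{
    var buffer = new byte[4096];
    using (client)
    using (NetworkStream stream = client.GetStream())
    {
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // Parse the device's protocol here and update the client table.
        }
    }
}
```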
One thread per connection is not a really good idea, especially when you have to handle hundreds of clients concurrently. Asynchronous I/O is the way to go, with some buffer pooling/management. If you are looking for something to start with for asynchronous sockets, have a look at this basic implementation; if you are looking for something complete, take a look at this (explanation: here).
If you are willing, check this out too.
In C# you can go with the classic BeginXXX/EndXXX methods. Microsoft also has a high-performance socket API which can be leveraged using the XXXAsync methods. A few articles that explain the high-performance socket API: here and here.
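A minimal sketch of the XXXAsync (SocketAsyncEventArgs) receive pattern mentioned above; pooling of the event-args objects and buffers, which is where the real performance gain comes from, is omitted:

```csharp
using System.Net.Sockets;

// Sketch: start an event-based receive loop on an already-connected socket.
static void StartReceiving(Socket socket)
{
    var args = new SocketAsyncEventArgs();
    args.SetBuffer(new byte[4096], 0, 4096);
    args.UserToken = socket;
    args.Completed += OnReceiveCompleted;

    // ReceiveAsync returns false when the operation completed synchronously.
    if (!socket.ReceiveAsync(args))
        OnReceiveCompleted(socket, args);
}

static void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
{
    var socket = (Socket)e.UserToken;

    if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
    {
        socket.Close();   // peer disconnected or error
        e.Dispose();
        return;
    }

    // Process e.Buffer from e.Offset for e.BytesTransferred bytes here.

    // Post the next receive, reusing the same SocketAsyncEventArgs.
    if (!socket.ReceiveAsync(e))
        OnReceiveCompleted(socket, e);
}
```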
I have the same dilemma as the one who posted this topic, Real-time communication with WCF,
except that my problem is not about games programming. I would like to know the best method for real-time communication between two Windows applications (server and client). I am using Visual C++/C# to date, and I would like to be able to display all the feeds received by my server on the client in real time.
I have started trying to use .NET Remoting, but from my ongoing research it appears that it uses SOAP over HTTP, which might affect the speed of the communication. My server and client will communicate over the internet, and .NET Remoting does not permit the use of the TCP channel when communicating across a firewall or over the internet.
Your inputs will be greatly appreciated.
I guess it depends on your scenario. If you want "real time" and you are willing to lose some packets in the process, you are better off with UDP; take a video conferencing tool, for example: by the time you recover your late packets, you would already need to display the next video or audio frame. That is a good example of a use case for UDP, and it is the reason why UDP is much faster than TCP.
If, however, you are not willing to lose a single bit of your message, then TCP was made for you: if a packet is lost, the protocol will request it again so that your message arrives as complete as possible.
Additionally, it depends on how the communication is sustained: is the information flowing one-to-many, many-to-many, or one-to-one?
Take NetNamedPipeBinding, for instance: it is a much faster option, but it only works on a single machine, across processes. NetMsmqBinding, on the other hand, will help you build queues and is remarkably reliable and scalable for scenarios where your load is a massive number of connections.
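As a rough illustration of how the binding choice is mostly a hosting detail in WCF, here is a minimal sketch exposing the same hypothetical IFeedService contract over a named-pipe endpoint and, as my own added example, a NetTcpBinding endpoint for cross-machine traffic (NetMsmqBinding would be added the same way, but it needs an existing MSMQ queue and one-way operations):

```csharp
using System.ServiceModel;

// Hypothetical contract for pushing feed updates; shape it to match your data.
[ServiceContract]
public interface IFeedService
{
    [OperationContract]
    void PublishFeed(string feedXml);
}

public class FeedService : IFeedService
{
    public void PublishFeed(string feedXml) { /* handle the feed */ }
}

class Hosting
{
    static void Main()
    {
        var host = new ServiceHost(typeof(FeedService));

        // Same machine, cross-process: very fast, no network involved.
        host.AddServiceEndpoint(typeof(IFeedService),
            new NetNamedPipeBinding(), "net.pipe://localhost/feeds");

        // Across the network (firewall and port considerations apply).
        host.AddServiceEndpoint(typeof(IFeedService),
            new NetTcpBinding(), "net.tcp://localhost:9000/feeds");

        host.Open();
        System.Console.ReadLine();
        host.Close();
    }
}
```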
In the end, it all boils down to your concrete scenario and your business goals.
Hope it helps
If you are willing to do your own message parsing, you can use standard TCP sockets with the TcpClient and TcpListener classes. If your data is already a serializable object, you could serialize it into a text stream and just send it over the socket, deserializing it on the client side.
To get it to work over the internet, the server needs to have the port forwarded on your router; the client would just connect to the server's public IP. You would obviously need to add a firewall exception for this port as well.
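A minimal sketch of that idea, serializing with XmlSerializer straight over the socket's stream; the Feed type is a hypothetical stand-in for your own class, and using one message per connection sidesteps the framing question:

```csharp
using System.Net.Sockets;
using System.Xml.Serialization;

// Hypothetical message type -- any XML-serializable class works.
public class Feed
{
    public string Title;
    public string Body;
}

static class FeedTransport
{
    // Sender: serialize straight onto the socket's stream, then close.
    public static void Send(string host, int port, Feed feed)
    {
        using (var client = new TcpClient(host, port))
        using (var stream = client.GetStream())
        {
            new XmlSerializer(typeof(Feed)).Serialize(stream, feed);
        }
    }

    // Receiver: accept one connection and deserialize a single message from it.
    public static Feed Receive(TcpListener listener)
    {
        using (var client = listener.AcceptTcpClient())
        using (var stream = client.GetStream())
        {
            return (Feed)new XmlSerializer(typeof(Feed)).Deserialize(stream);
        }
    }
}
```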
The biggest problem with WCF and large data is setting up streaming; by default WCF sends everything at once, which isn't practical for large files.
When I was experimenting with C# and WCF, one of the things I kept reading was how unscalable it is to have clients hold a constant connection to the server. And although WCF allows that, it seems the recommended best practice is to use 'per call' as opposed to 'per session' instance management if you want any kind of decent scalability. (Please correct me if I'm wrong.)
However, from what I understand, IRC uses constant client connections to the server, and IRC servers (well, networks of servers) service hundreds of thousands of clients at any given time. So in that case, is there nothing actually 'bad' about keeping constant client connections to the server?
As long as you don't follow the one-thread-per-connection architecture, a server can support quite a large number of concurrent TCP connections.
IRC doesn't require much per connection state, beyond the TCP send and receive windows.
If you need real-time duplex communication (IRC is a chat protocol), then keeping a TCP connection alive is a relevant option. However, a TCP connection brings network overhead, and operating systems have practical limits on the number of concurrently open TCP connections. WCF is commonly used in SOAP/HTTP/RPC contexts where duplex communication is not required, but it certainly offers suitable bindings and channels for that as well. To answer your question: there is nothing bad about keeping the connection open if you have real-time, duplex requirements for your communication.
Yes, such an architecture is feasible, but... The "ping? pong!" exchange was invented for a reason: to let both parties know that the other is still there. Otherwise you cannot tell whether a client is idle because it does not have much to say, or because it is actually disconnected and you are waiting for a TCP timeout.
UPD: "hundreds of thousands of clients" is possible on IRCnet only because of server networks. For a single machine, the C10K problem is still an issue.
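On the ping/pong point, a cheap first step (my assumption, not something the answer prescribes) is to enable OS-level TCP keep-alive on each accepted socket so that dead peers are eventually detected even when neither side has anything to say; an application-level ping still gives you tighter control over the timeout:

```csharp
using System.Net.Sockets;

// Sketch: turn on TCP keep-alive for a connected socket.
// Default keep-alive intervals are OS-specific and can be very long
// (hours on Windows), which is why many servers add their own ping anyway.
static void EnableKeepAlive(Socket socket)
{
    socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
}
```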
I need to develop a client-server system where multiple clients communicate with one server at the same time. I want to exchange XML-serialized objects and also need to send and receive other commands to invoke methods. Now, I am just starting with socket programming in C# and .NET and found that asynchronous I/O is the way to go so that the methods don't block the execution of code. There are also many examples of how to make a simple client-server system, so I have a basic understanding of how that works.
Anyway, what is still not clear to me is how to set up a server that can manage connections to multiple clients.
Can I just create a new socket per connection and then store those in some kind of list?
Do I need some kind of multiplexing to achieve this?
Do I have to listen on multiple ports?
What's the best way here?
The other thing is whether I need to develop my own protocol to differentiate between what I am actually sending over the network: an XML-serialized object, or a command which might be just an ASCII-encoded string or something. Or would I develop my own protocol just to send these commands?
Any kind of help is appreciated! If someone knows a good book that covers this sort of stuff, let me know. Cheers
I forgot to mention that some of the clients that will communicate with my server are PDAs, so I am using the Compact Framework... This might bring in some restrictions...
You may find several of my TCP/IP .NET FAQ entries helpful, particularly using a socket as a server socket, which explains how listening servers create new client connections, and XML over TCP/IP, which discusses the decisions you have to make for an XML-over-TCP/IP protocol.
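One of those decisions is message framing. As an illustration only (this is my own example, not the FAQ's protocol), a type byte plus a length prefix is enough to tell an XML payload apart from a plain command string:

```csharp
using System;
using System.IO;

// Frame layout (assumed for this sketch):
// [1 byte message type][4 byte big-endian payload length][payload bytes]
// Type 0 = ASCII command string, type 1 = XML-serialized object.
static class Framing
{
    public static void WriteFrame(Stream stream, byte messageType, byte[] payload)
    {
        stream.WriteByte(messageType);

        byte[] length = BitConverter.GetBytes(payload.Length);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(length);            // big-endian on the wire
        stream.Write(length, 0, 4);

        stream.Write(payload, 0, payload.Length);
    }

    // Reads exactly 'count' bytes, since a single Read may return fewer.
    public static byte[] ReadExactly(Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new IOException("Connection closed mid-frame.");
            offset += read;
        }
        return buffer;
    }
}
```

This should also work on the Compact Framework, since it only uses basic Stream and BitConverter calls.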
I would abandon your plan to use sockets and switch to WCF (Windows Communication Foundation). It's far more elegant and is designed to do all the things you want, in a considerably easier and simpler way than .NET sockets.
If you want a guide on how to use it, there is a set of amazing Microsoft webcasts by Michele Leroux Bustamante that will have you up and running in no time.