We have an async TCP socket game server written in C# serving many concurrent connections. Everything works fine except for the problems below:
Frequent disconnections for some users (not all, mind you)
Degraded performance for users on slow connections
Most of the users who face this problem are on mobile connections (portable USB dongles, GSM/CDMA based) and often the signal strength is poor, so they have rather low bandwidth, perhaps under 1 KB per second.
My question: What can we do to make the connections more stable and improve performance even for the slower connections?
I am thinking of dynamic TCP buffers on the client and server side, but I am not sure about the performance cost of adjusting them per connection, or whether my direction is even correct.
Max data packet size is under 1 KB.
Current TCP buffer size on server and client is 16KB
Any pointers or references on how to write stable async socket code in C# for poor or slow connections would be a great help. Thanks in advance.
"Performance" is a very relative term. It looks like your main concern is with data transfer rates. Unfortunately you can't do much about it given low-bandwidth connections - maybe data compression can help, but actual effect depends on your data, and there's always a tradeoff between transfer rate improvement vs. compression/de-compression delays. There's also latency to consider for any sort of interactive game.
As @Pierre suggested in the comments, you might consider UDP for the transport, but that only works if you can tolerate packet loss and re-ordering, and that again depends on the data and what you do with it.
Another approach I would suggest investigating is to provide two different quality-of-service modes. Clients on good links can use full functionality (say, full-resolution game images), while low-bandwidth clients would get reduced options (say, much smaller size low-quality images). You can measure initial round-trip times on client connect/handshake to select the mode.
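A rough sketch of the selection step, assuming the client echoes a one-byte ping during the handshake; the 200 ms threshold and the mode names are just placeholders:

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;

enum ClientQuality { Full, Reduced }

static class QualityProbe
{
    // Measures one request/response round trip on a freshly accepted socket
    // and picks a service mode. The one-byte ping protocol and the 200 ms
    // cut-off are illustrative only; use whatever fits your handshake.
    public static ClientQuality Probe(Socket client)
    {
        byte[] ping = { 0x01 };
        byte[] pong = new byte[1];

        var sw = Stopwatch.StartNew();
        client.Send(ping);        // the client is expected to echo the byte back
        client.Receive(pong);
        sw.Stop();

        return sw.ElapsedMilliseconds < 200
            ? ClientQuality.Full      // good link: full-resolution assets
            : ClientQuality.Reduced;  // slow link: reduced/low-bandwidth assets
    }
}
```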
Hope this helps a bit.
I need to measure as precisely as possible how much of the cell service provider's data limit my application uses up.
Is it possible to get the volume of data transferred by a .Net UDP Socket over the network interface (including overhead of UDP and IP)?
The application is a server communicating with a great number of embedded devices, each of which is connected to the internet using GPRS with a very low data limit (several megabytes per month at best, so even a few bytes here and there matter). I know the devices don't open connections with any other servers, so measuring the traffic server-side should be enough.
I know I can't get 100% accurate number (I have no idea what traffic the service provider charges), but I would like to get as close as possible.
Assuming this is IPv4, you could add 28 bytes for every datagram you transfer (20 bytes of IP header plus 8 bytes of UDP header), but your problem is going to be detecting packet loss and potentially fragmentation. You could add some metadata to your communication to detect packet loss (e.g. sequence numbers, acknowledgments and so on), but that would of course add more overhead, which you might not want; maybe factoring in a percentage of expected packet loss could help. As for fragmentation, you could again compensate when the size of your message is greater than the MTU (which I believe could be quite small, like 296 bytes; not too sure though, maybe check with your mobile provider).
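If the simple per-datagram estimate is enough, a thin wrapper around UdpClient can keep the running total for you. This is only a sketch under the assumptions above (IPv4, no IP options, no fragmentation), and the class name is made up:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Threading;

// Wraps UdpClient.Send and keeps a running estimate of on-the-wire bytes,
// assuming IPv4 with no IP options and no fragmentation:
// 20 bytes IP header + 8 bytes UDP header = 28 bytes per datagram.
// Whatever the provider adds or charges on top of that is not captured.
class MeteredUdpClient
{
    private const int HeaderOverhead = 28; // IPv4 (20) + UDP (8)
    private readonly UdpClient _client = new UdpClient();
    private long _estimatedWireBytes;

    public long EstimatedWireBytes
    {
        get { return Interlocked.Read(ref _estimatedWireBytes); }
    }

    public int Send(byte[] payload, IPEndPoint endpoint)
    {
        int sent = _client.Send(payload, payload.Length, endpoint);
        Interlocked.Add(ref _estimatedWireBytes, sent + HeaderOverhead);
        return sent;
    }
}
```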
Another, somewhat less intrusive option could be reading the network performance counters of your process, or restricting your communication to a separate AppDomain.
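For the performance-counter route, something along these lines should work. Note that the counters have to be enabled in the app config, and the exact category name (".NET CLR Networking 4.0.0.0" below is what I'd expect for .NET 4.x) depends on the runtime version, so treat the names as assumptions to verify on your machine:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

// Reads the per-process networking counters that .NET publishes when
// <performanceCounters enabled="true"/> is set under <system.net><settings>
// in the app config. Category and counter names may differ per runtime
// version; verify them in perfmon first.
class NetCounterReader
{
    private const string Category = ".NET CLR Networking 4.0.0.0";

    public static void DumpTraffic()
    {
        var category = new PerformanceCounterCategory(Category);

        // Instance names embed the process name; find the one for this process.
        string processName = Process.GetCurrentProcess().ProcessName;
        string instance = category.GetInstanceNames()
            .FirstOrDefault(n => n.StartsWith(processName, StringComparison.OrdinalIgnoreCase));

        if (instance == null)
        {
            Console.WriteLine("No networking counter instance found (are the counters enabled?).");
            return;
        }

        using (var sent = new PerformanceCounter(Category, "Bytes Sent", instance, true))
        using (var received = new PerformanceCounter(Category, "Bytes Received", instance, true))
        {
            Console.WriteLine("Bytes sent: {0}, bytes received: {1}",
                sent.RawValue, received.RawValue);
        }
    }
}
```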
If I am not reading data from a socket fast enough, the TCP protocol will decrease the sliding window size and the sender might get blocked while sending (as discussed in "what happens when I don't manage to call `recv` fast enough?").
How do I detect this situation on the receiver side on Windows, preferably directly in C# code and without impacting the performance of reading from the socket? Other monitoring solutions (perfmon, Wireshark) are also acceptable, but far less convenient for my scenario.
What is the exact scenario? Let's say the server app can transmit data at up to 1 Mbps, but my client app is able to receive it only at 0.5 Mbps. How do I find out in the client application that TCP flow control is kicking in and throttling the transmit speed?
I came across the Socket.Available property (http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.available.aspx) and was wondering if that might be a recommended way of querying this information?
You would be better off reading as fast as you possibly can, rather than wasting time trying to have the system tell you you're not reading fast enough, which can only slow down your reading even further. If you're reading at maximum speed and the sender is still getting blocked, TCP is working correctly and there is nothing you can do about it, except maybe look into a faster machine.
The TCP window is handled by the kernel and won't be available to you. I guess you could compare ReceiveBufferSize with the number of bytes queued on the socket (Socket.Available). If that buffer isn't full, then you are keeping up and it's you who is waiting for data, not the sender.
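Something like this as a rough heuristic (the 90% threshold is arbitrary):

```csharp
using System.Net.Sockets;

static class ReceiveBackpressure
{
    // Rough heuristic only: Socket.Available reports how many bytes are
    // queued in the receive buffer. If it stays close to ReceiveBufferSize
    // between reads, the application isn't draining the socket fast enough
    // and TCP flow control will start shrinking the advertised window.
    // The 0.9 threshold is an arbitrary illustration.
    public static bool LooksBackedUp(Socket socket)
    {
        return socket.Available >= socket.ReceiveBufferSize * 0.9;
    }
}
```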
Hi
I have a TCP/IP client/server application. I want to send a large serialized object, around 1 MB, through sockets.
Is it possible to get better performance by splitting the byte array into, for example, 10 chunks, opening a socket for each and sending them asynchronously, compared to opening one socket and sending all the data through it?
Thanks
Splitting the data into chunks smaller than the MTU will introduce more overhead, as there will be more packets; this will actually slow things down. What you are proposing is already being done as part of the protocol, i.e. splitting and re-assembling. I would experiment with sending less data, e.g. by compressing it.
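For the compression route, a minimal GZipStream helper might look like the sketch below. Whether it actually saves anything depends on how compressible your 1 MB object is, so measure with representative data first:

```csharp
using System.IO;
using System.IO.Compression;

static class PayloadCompressor
{
    // Compresses the serialized object before it is handed to the socket.
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(data, 0, data.Length);

            // MemoryStream.ToArray still works after the GZipStream has
            // closed the underlying stream, so the data is fully flushed here.
            return output.ToArray();
        }
    }

    // Reverses Compress on the receiving side.
    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output); // Stream.CopyTo requires .NET 4 or later
            return output.ToArray();
        }
    }
}
```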
No, this doesn't speed up the transfer under normal conditions, it only adds overhead. It would only help if you have a slow network segment which is quite busy otherwise and the traffic is shaped per TCP connection.
Make sure that your sockets code is efficient, because wrong buffer (and therefore packet) sizes, synchronous operation and other issues may slow the transfer down.
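As an illustration of the asynchronous side, here is a minimal send using the classic Begin/End pattern; error handling is trimmed to the essentials:

```csharp
using System;
using System.Net.Sockets;

static class AsyncSender
{
    // Starts an asynchronous send so the calling thread isn't blocked for
    // the duration of the transfer. The completion callback just logs the
    // result; a real application would continue its send pipeline there.
    public static void SendAsync(Socket socket, byte[] data)
    {
        socket.BeginSend(data, 0, data.Length, SocketFlags.None, ar =>
        {
            try
            {
                int sent = socket.EndSend(ar);
                Console.WriteLine("Sent {0} bytes", sent);
            }
            catch (SocketException ex)
            {
                Console.WriteLine("Send failed: " + ex.SocketErrorCode);
            }
        }, null);
    }
}
```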
Hi
Is it true that if we pass small amounts of data back and forth between client and server, the overhead of TCP/IP is negligible and performance is the same as a named pipe on the same machine?
I'd say it's not so much the quantity of data as much as it is the number of requests. In other words, if you have 100,000 connections that pass 100 bytes of data, you're going to have more tcp/ip overhead than if you have 10 connections of 100K each.
That's not to say that there isn't overhead associated with transferring the data via tcp/ip vs. named pipes. There is. But usually I'd say the decision of which one you're going to use has to do more with the architecture of your system than concern about the overhead.
If you're going to transfer data between physical servers, you have to go with tcp/ip; named pipes aren't an option. If you're transferring data between processes on the same server, named pipes are clearly the better performer.
One reason you might want to go with tcp/ip when you're on the same physical server is if there's a chance that you'll break the processes onto physical servers at some point in the future.
To answer your question: If you're not passing a lot of data, and you're not doing it frequently, you're probably not going to notice the tcp/ip overhead when the two endpoints are on the same physical machine.
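If you want a concrete number for your own hardware, a round-trip micro-benchmark only takes a few minutes to put together. Below is a minimal named-pipe sketch (the pipe name and iteration count are arbitrary); running the same echo loop over a TcpClient connected to 127.0.0.1 gives you the loopback TCP figure to compare against:

```csharp
using System;
using System.Diagnostics;
using System.IO.Pipes;
using System.Threading.Tasks;

// Times single-byte round trips over a named pipe on the local machine.
class PipeRoundTrip
{
    const string PipeName = "perf_test_pipe"; // arbitrary name
    const int Iterations = 10000;

    static void Main()
    {
        // Echo server: read one byte, write it back, repeat.
        var server = Task.Run(() =>
        {
            using (var pipe = new NamedPipeServerStream(PipeName))
            {
                pipe.WaitForConnection();
                var buffer = new byte[1];
                for (int i = 0; i < Iterations; i++)
                {
                    pipe.Read(buffer, 0, 1);
                    pipe.Write(buffer, 0, 1);
                }
            }
        });

        using (var pipe = new NamedPipeClientStream(".", PipeName))
        {
            pipe.Connect();
            var buffer = new byte[1];
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < Iterations; i++)
            {
                pipe.Write(buffer, 0, 1);
                pipe.Read(buffer, 0, 1);
            }
            sw.Stop();
            Console.WriteLine("Named pipe average round trip: {0:F1} microseconds",
                sw.Elapsed.TotalMilliseconds * 1000 / Iterations);
        }

        server.Wait();
    }
}
```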
HTH,
James
I currently have an application that sends XML over TCP sockets from a Windows client to a Windows server.
We are rewriting the architecture and our servers are going to be in Java. One architecture we are looking at is REST over HTTP, so the C# WinForms clients would send their data that way. We are looking for high throughput and low latency.
Does anyone have any performance metrics on this approach versus other C#-client-to-Java-server communication options?
This isn't really well-enough defined to make any metric statements; how big are the messages, how often would you be hitting the REST service, is it straight HTTP or do you need to secure it with SSL? In other words, what can you tell us about the workload parameters?
(I say this over and over again on performance questions: unless you can tell me something about the workload, I can't -- nobody really can -- tell you what will give better performance. That's why they used to say you couldn't consider performance until you had an implementation: it's not that you can't think about performance, it's that people often couldn't or at least wouldn't think about workload.)
That said, though, you can make some good estimates simply by looking at how many messages you want to exchange, because TCP/IP connection setup time often dominates the cost of a REST call. REST offers two advantages here: first, since the TCP/IP time often dominates the message transmission, you benefit from the fact that this is pretty well optimized in production web servers like Apache or lighttpd; second, a RESTful architecture enhances scalability by eliminating session state, which means you can scale out freely behind a simple TCP/IP load balancer.
I would set up a test to try it and see. I understand that the only part of your application you're changing is the client/server communication. So analyse what you're sending now, and put together a test client/server setup sending messages which are representative of what you think your final solution is going to be doing (perhaps representative only in terms of size/throughput).
As noted in the previous post, there's not enough detail to really judge what the performance is going to be like, e.g.:
Is your message structure/format going to be the same, merely sent over HTTP rather than raw sockets?
Are you going to be sending subsets of the XML data? Processing large quantities of XML can be memory intensive (e.g. if you're using a DOM-based approach).
What overhead is your chosen REST framework going to introduce (hopefully very little, but at the moment we don't know)?
The best solution is to set something up using (say) Jersey and spend some time testing various scenarios. If you're re-architecting a solution, it's going to be worth a few days investigating performance (let alone functionality, ease of development etc.)
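For the C# side of such a test, a minimal timing client could look like this; the URL and payload are placeholders, so substitute something representative of your real messages:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Text;

// Posts a representative XML payload repeatedly and reports the average
// round-trip time. Point it at whatever test endpoint you stand up.
class RestLatencyProbe
{
    static void Main()
    {
        const string url = "http://localhost:8080/api/messages"; // hypothetical endpoint
        const int iterations = 1000;
        byte[] payload = Encoding.UTF8.GetBytes(
            "<message><id>1</id><body>test</body></message>");

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/xml";
            request.ContentLength = payload.Length;
            request.KeepAlive = true; // reuse the TCP connection so setup cost isn't paid per call

            using (Stream body = request.GetRequestStream())
                body.Write(payload, 0, payload.Length);

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                reader.ReadToEnd(); // drain the response so the connection can be reused
        }
        sw.Stop();

        Console.WriteLine("Average round trip: {0:F2} ms",
            sw.Elapsed.TotalMilliseconds / iterations);
    }
}
```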
It's going to be plenty fast, unless you have a very, very large number of concurrent clients hitting those servers. The XML shredding keeps getting faster in both Java and .NET. If you are on CLR2 and Java 5 or above, you will be fine. But of course you still need to do the tests to verify.
We've tested in our lab, REST and SOAP transactions, and they are faster than you might think. Tens of thousands of messages per second. Small numbers of modern CPUs generating XML messages can easily saturate a gigabit network. In other words, the network is the bottleneck (transmission of data), not the CPU (serializing & de-serializing XML).
And if you do your software design properly, then in the unlikely event that REST is not sufficient, swapping out the message format layer (REST => protobufs) will get you better transmission performance with minimal disruption.
But before you need to go there, you will be able to send some money to Cisco and get lots more headroom.