Increasing TCP/IP window size - C#

I am trying to send messages over TCP/IP between two servers.
I want to send a 30 KB message, and I want to send it as a whole:
I don't want the TCP protocol to break it into segments.
The two machines are Windows Server 2008 R2, and the client and the server are coded in C#.
I tried using
tcpclnt.SendBufferSize = 100000;
tcpclnt.Client.DontFragment = true;
and the same at the server.
I also tried configuring the TCP window size on the server (by editing the registry).

I would strongly suggest further research into IPv4 and TCP, as well as Ethernet and Gigabit Ethernet (particularly jumbo frames).
Essentially, the short answer to your question is that you cannot send a single IP datagram containing a 30 KB TCP payload over a typical network, even though the IP header permits a maximum size of 64 KB for the complete datagram.
The reason is that the underlying network (most likely Ethernet or Gigabit Ethernet) has smaller frame sizes, so the IP datagram must be fragmented to fit within the frame-size limits of the physical network (a 30 KB payload over standard 1500-byte Ethernet frames, for example, ends up in roughly 21 fragments).
TCP does guarantee delivery of a complete, uncorrupted message (via automatic reassembly, automatic detection of corrupted data, and automatic retransmission of lost or corrupted segments), so unless you have a highly specialised requirement, you should simply let the TCP stack fragment your message and reassemble it on your behalf.

Altering the buffer size will have the side effect of ramping up RAM usage - not recommended...
As TCP deals with streams and not packets (UDP uses packets), I believe your answer lies in framing the message; see message framing.
See also code.
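For illustration, here is a minimal length-prefix framing sketch of my own (not the linked article's code), assuming a NetworkStream on each side: the sender writes a 4-byte length followed by the payload, and the receiver loops until it has read exactly that many bytes, however TCP happened to segment them on the wire.
using System;
using System.IO;
using System.Net.Sockets;

static class Framing
{
    // Sender: 4-byte little-endian length prefix, then the payload.
    public static void SendMessage(NetworkStream stream, byte[] message)
    {
        stream.Write(BitConverter.GetBytes(message.Length), 0, 4);
        stream.Write(message, 0, message.Length);
    }

    // Receiver: read the prefix, then loop until the full payload has arrived.
    public static byte[] ReceiveMessage(NetworkStream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException("peer closed the connection");
            offset += read; // Read may return fewer bytes than requested
        }
        return buffer;
    }
}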
Found this possible solution somewhat later but thought it should be included here:
SetTcpWindowSize
Search towards the bottom for a VB example entitled "Setting the TCP Window Size for All Network Adapters"
Alternatively, there is a buffer handler here which looks like it will do the job: it lets you read a message in one piece even if it arrived in multiple packets, reassembling them via buffer management. See this link.

Optimise a TCP connection as similar to a UDP connection using Socket.IOControl

Currently I am using UDP to send small images to my server. This approach is quick because it works as a 'fire and forget' mechanism: the server does not care whether the UDP packet sent by the client was received or not.
I had previously tried TCP for the same thing, and the FPS is slow compared to UDP. One of the reasons is that the TCP endpoint on the server always sends an ACK back to the client.
Out of interest, I wanted to see if I could disable this 'call back' to compare the performance.
I came across Socket.IOControl, where I can set low-level options on the underlying connection. One of these is the keep-alive values, which can be set by passing the IOControlCode.KeepAliveValues parameter with the preferred values.
I looked at what is available but could not see an obvious 'disable ACK' parameter. I can see
AsyncIO and NonBlockingIO. Would either of these stop the ACK being sent back, or is it another parameter entirely?
You can't turn off TCP ACKs. There is no reason why you can't use TCP to send full-frame video (think Netflix).
You just need appropriately sized send and receive buffers so that you're not left waiting. You don't need IOControl to modify these; you can set them directly on the Socket.
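As a rough sketch (the server name, port, and buffer sizes here are placeholders), sizing the buffers on a TcpClient looks like this; larger buffers let the sender keep streaming while earlier data is still in flight:
using System.Net.Sockets;

var client = new TcpClient();
client.SendBufferSize = 256 * 1024;     // 256 KB send buffer
client.ReceiveBufferSize = 256 * 1024;  // 256 KB receive buffer
client.NoDelay = true;                  // optional: don't batch small writes (Nagle)
client.Connect("server.example", 9000); // placeholder endpoint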

C# UDP: check if message reached

How can you check whether a message has reached its destination? I came up with a solution of my own, but as I'm not a pro in this kind of topic I'd like to know some other ways.
My solution is: (client side) send the packet and, if no acknowledgement has been received within the timeout, send it once more; (server side) if the message received is correct, send an acknowledgement; if that acknowledgement is not received on the other side, send it again.
Here is a diagram of the algorithm: Picture.
In short, both sides send the message twice.
Any other ideas?
It depends on your application. But looking at the diagram you attached, you are essentially rebuilding TCP-style communication.
However, if you really want to use UDP instead of TCP, you have to let go of the whole ACK idea.
Suppose you are continuously streaming images to a remote destination and you don't mind some frame loss as long as the streaming is as fast as possible: then UDP fits. Just also consider how reliable the transmission line (physical layer) is, so you can predict the outcome.
But if your application is not that time-critical and needs the highest reliability possible, then use TCP.
For more details, [visit this].
Below is a comparison of UDP and TCP:
The UDP protocol is an extremely simple protocol that allows for the sending of datagrams over IP. UDP is preferable to TCP for the delivery of time-critical data as many of the reliability features of TCP tend to come at the cost of higher latency and delays caused by the unconditional re-sending of data in the event of packet loss.
In contrast to TCP, which presents the programmer with one ordered octet stream per connected peer, UDP provides a packet-based interface with no concept of a connected peer. Datagrams arrive containing a source address, and programmers are expected to track conceptual peer "connections" manually.
TCP guarantees that either a given octet will be delivered to the connected peer, or the connection will be broken and the programmer notified. UDP does not guarantee that any given packet will be delivered, and no notification is provided in the case of lost packets.
TCP guarantees that every octet sent will be received in the order that it was sent. UDP does not guarantee that transmitted packets will be received in any particular order, although the underlying protocols such as IP imply that packets will generally be received in the order transmitted in the absence of routing and/or hardware errors.
TCP places no limits on the size of transmitted data. UDP directly exposes the programmer to several implementation-specific (but also standardized) packet size limits. Creating packets that exceed these limits increases the chances that they will either be fragmented or simply dropped. Fragmentation is undesirable because if any individual fragment of a datagram is lost, the datagram as a whole is discarded. Working out a safe maximum size for datagrams is not trivial due to various overlapping standards.
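For what it's worth, here is a minimal sketch of the ack-and-retry scheme described in the question, using UdpClient. The "ACK" reply string and the timeout values are assumptions for illustration; real code would tag messages with sequence numbers so a stale ACK can't be mistaken for a fresh one.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

static class ReliableUdpSender
{
    public static bool SendWithRetry(string message, IPEndPoint server,
                                     int maxAttempts = 2, int timeoutMs = 500)
    {
        using (var client = new UdpClient())
        {
            client.Client.ReceiveTimeout = timeoutMs;
            byte[] payload = Encoding.UTF8.GetBytes(message);

            for (int attempt = 0; attempt < maxAttempts; attempt++)
            {
                client.Send(payload, payload.Length, server);
                try
                {
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] reply = client.Receive(ref remote); // blocks until timeout
                    if (Encoding.UTF8.GetString(reply) == "ACK")
                        return true;                           // delivery confirmed
                }
                catch (SocketException)
                {
                    // timed out waiting for the ACK: fall through and resend
                }
            }
            return false; // gave up after maxAttempts
        }
    }
}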

Must I supply a FCS when sending a raw Ethernet II frame on Windows?

I'm constructing Ethernet II frames, IPv4 packets, and finally the TCP portion with a payload - creating raw packets from the ground up.
My question is... on Windows when using C# and raw sockets, will I need to supply the FCS at the end of the packet?
My understanding is that Windows automatically does this, but specifically for Ethernet frames and not for IP or TCP packets.
It turns out the FCS is added automatically on transmission and stripped on receipt by the network interface card. The OS doesn't have to deal with it at all and generally doesn't even have access to it.

C# local TCP/IP stack access

I have a Visual C++ program with a proprietary point-to-point protocol, built on top of TCP/IP sockets, that allows a set of messages to flow to and from a third-party application.
There is a note in documentation to that protocol:
IP logical packets do not necessarily map directly to physical packets on the underlying network socket, they may be broken apart or aggregated by the TCP/IP stack.
What does this mean?
I've written my C# application to connect, and due to technical restrictions it is able to run and communicate only locally. Plus, every millisecond is critical.
It seems this is not about named pipes: pipelist.exe doesn't show any relevant entries.
If you are just using loopback there may be no IP packets at all, and in any case (a) the implementor of your protocol should have already taken all that into account and (b) TCP hides all that from you too - it just provides a byte stream interface.
When TCP/IP packets go out over, say, Ethernet, the packets are repackaged as Ethernet frames. This may include breaking up the original packets.
When the frames arrive at their destination, the Ethernet header information is removed and the original packet (reassembled if necessary) is presented to the TCP/IP layer on the destination machine.
But this repackaging can also happen within the TCP/IP stack. TCP and IP are actually separate protocols; IP is responsible for routing, TCP does the "handshaking" (maintains session state, guarantees delivery [or tries to], etc.)
Named pipes are a completely different interprocess communication mechanism. Usually faster than TCP/IP, but typically restricted to use on a single machine, I believe.
IP logical packets do not necessarily map directly to physical packets on the underlying network socket, they may be broken apart or aggregated by the TCP/IP stack.
TCP/IP is not the lowest level network protocol that exists. There are others: the Ethernet protocol that connects ethernet devices, the 802.11x wireless protocols, and others. All this statement says is that a single IP packet may correspond to multiple packets in the lower-level protocols, and that the IP networking layer is responsible for buffering or joining these packets.
Your application shouldn't need to worry about this at all. TCP/IP networking is handled very efficiently by all modern OS kernels, and unless your requirements are very unusual you should never have to worry about the way your application protocol is broken up into packets by TCP/IP or the lower-level protocols.
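If you want to see the quoted note in action, a tiny loopback demo (the port number is arbitrary) shows that one Write on the sender can surface as several Reads on the receiver, or several Writes as one Read; the chunk boundaries are simply not preserved:
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class StreamDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 5500);
        listener.Start();
        var serverTask = Task.Run(() =>
        {
            using (var server = listener.AcceptTcpClient())
            {
                var stream = server.GetStream();
                var buffer = new byte[8192];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    Console.WriteLine($"Read returned {read} bytes"); // sizes are arbitrary
            }
        });

        using (var client = new TcpClient())
        {
            client.Connect(IPAddress.Loopback, 5500);
            var payload = new byte[30000];
            client.GetStream().Write(payload, 0, payload.Length); // one logical "message"
        }
        serverTask.Wait();
    }
}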

Unable to send images on UDP larger than 10K

I am trying to develop a client/server application in which the client sends images to the server.
Case 1:
The server runs on one machine behind a router and the client runs on another machine behind a different router. As this communication happens over the WAN (public IPs), a port is forwarded on the server-side router so that the server can receive incoming UDP datagrams on that port.
UDP's maximum datagram size is 64 KB (65,535 bytes), so a UDP socket should be able to transmit anything up to that size. But in the application I am developing, the client is only able to send an image (UDP datagram) of 10-13 KB. If I try to transfer an image larger than that, the server never receives it and the server-side UDP socket stays blocked in receive.
Case 2:
The server runs on a machine behind a router and the client runs on a machine behind the same router, i.e. client and server are on the same local area network. Even though they share the same LAN, the client sends the images (UDP datagrams) to the server's public IP. In this case the server can receive UDP datagrams of any size up to 64 KB, which is what I expect from my application.
I tried running my client on different remote PCs but the result is the same: the server cannot receive a UDP datagram larger than 10-13 KB. If anyone can help me deal with this situation, it would be much appreciated.
Link to the code:
http://pastebin.com/f644fee71
Thanks and good luck with your projects.
Regards,
Atif
Although the IP layer may allow UDP packets of up to 64 KB in size, I think you will find that the maximum "in the wild" UDP packet size is limited by the smallest MTU of the devices between your source and destination.
The standard Ethernet MTU is ~1500 bytes. Some devices support "jumbo" frames of up to ~10 KB to improve performance, but that sort of thing isn't generally supported over the public internet, only on LANs.
The IP layer may fragment the UDP packet (unless the don't-fragment bit is set in the packet). But the recipient will only deliver the datagram if every fragment arrives within the reassembly time limit, in whatever order; otherwise it discards the whole datagram.
It may also be the case that not all the devices between your source and destination support the frame size of the sending device. I've encountered situations where I needed to lower the MTU on my routers to ~1450 bytes because intermediate routers were discarding packets at 1500. This is due to MTU discovery not working reliably: the sending device has no way of determining the MTU of devices on its path to the destination, and somewhere along that path a device will discard packets it considers too large.
UDP is a very bad idea for what you are doing. You would be better off using TCP.
If you are concerned about the performance of TCP connection setup/tear down then keep the connection up for as long as possible.
UDP is only a good protocol for delivering data when you don't care too much about whether the target receives the packet or not. Delivery is not guaranteed. In all other cases use TCP.
If you are determined to use UDP, you will have to implement path MTU discovery in your protocol and pray that the routers/firewalls don't block the "fragmentation needed" ICMP packets. They shouldn't, otherwise TCP wouldn't work either; but as I said, I've seen cases in the past where "fragmentation needed" ICMP packets were blocked or discarded and I had to tweak my own MTU manually.
UDP is an unreliable datagram protocol. Packets are not guaranteed to arrive at their destination, nor in the order you sent them.
In addition, when sending packets larger than around 1500 bytes you'll find they get fragmented or, worse, dropped. The receiver will try to piece them together as best it can.
But if anything is missing, goodbye packet. So really, the practical limit is around that 1500-byte mark, and sometimes much less, to ensure no fragmentation and that the packets arrive. To send anything larger, a higher-level protocol has to put the pieces back together for you and request anything that's missing.
How big will the images be? 64 KB might be too small anyway.
You have two or three options.
1) Use TCP - it's reliable and stream-oriented. You don't have to worry about message size or fragmentation, as this is all taken care of for you.
2) Use UDP but develop a higher-level application protocol on top. Depending on your protocol, that could be quite a bit of work (I'm doing this right now, though).
3) Have a look at the UDT library. It's designed for transferring bulk data over a WAN and has better performance than TCP.
However, I would suggest that TCP will likely suit your needs just fine.
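If you do go down route 2, a hypothetical starting point is to split each image into MTU-safe chunks with a small header the receiver can use for reassembly. The 1400-byte payload size and the header layout here are assumptions; the reassembly and re-request side is omitted:
using System;
using System.Net;
using System.Net.Sockets;

static class UdpChunkSender
{
    const int ChunkPayload = 1400; // stays under a typical 1500-byte Ethernet MTU

    public static void SendImage(UdpClient client, IPEndPoint target,
                                 ushort imageId, byte[] image)
    {
        int chunkCount = (image.Length + ChunkPayload - 1) / ChunkPayload;
        for (int i = 0; i < chunkCount; i++)
        {
            int offset = i * ChunkPayload;
            int size = Math.Min(ChunkPayload, image.Length - offset);

            // 6-byte header: imageId, chunkIndex, chunkCount (all ushort)
            byte[] datagram = new byte[6 + size];
            BitConverter.GetBytes(imageId).CopyTo(datagram, 0);
            BitConverter.GetBytes((ushort)i).CopyTo(datagram, 2);
            BitConverter.GetBytes((ushort)chunkCount).CopyTo(datagram, 4);
            Buffer.BlockCopy(image, offset, datagram, 6, size);

            client.Send(datagram, datagram.Length, target);
        }
    }
}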
