Calculate UDP layer checksum - C#

I want to modify packets I have and send them through the network card. To do this, I need to calculate the UDP layer checksum.
I found a function that takes an array and returns the checksum, but I have two small questions:
The UDP header has 8 bytes: 2 for the source port, 2 for the destination port, 2 for the length, and 2 for the checksum.
The function I found needs to be called with an array, so should I pass it my header with or without the 2 checksum bytes (i.e., an 8-byte or a 6-byte array)?
The function's description says it calculates an IP checksum; is it also suitable for calculating a UDP checksum?
Edit:
I found this article that calculates IP/TCP/UDP checksums; can I get help converting its UDP checksum code to C#?

Have you tried setting it to zero? According to RFC 768 it is optional.
https://www.rfc-editor.org/rfc/rfc768
"An all zero transmitted checksum value means that the transmitter generated no checksum (for debugging or for higher level protocols that don't care)."
If you really want to calculate it you could try looking at the assemble_udp_ip_header function in FreeBSD: http://svnweb.freebsd.org/base/head/sbin/dhclient/packet.c?view=markup .
You shouldn't call it with just a 6-byte array, because the checksum procedure should be run on the pseudo header. While you could probably use the function you mentioned on the pseudo header, I suspect it has a bug where it can read past the end of the array if the length parameter is not even.
The checksum that you computed is incorrect because it needs to be computed on the pseudo header. You are missing fields such as the protocol, the source IP address, the destination IP address, and the actual payload. You are also only writing to 6 of the 8 bytes that you allocated.
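For reference, here is a minimal C# sketch of that computation. It is my own illustration (not the article's code or the function discussed above), and it assumes IPv4, that srcAddr and dstAddr are the 4-byte source and destination addresses from the IP header, and that udpSegment holds the full UDP header plus payload with its checksum field zeroed:

public static ushort UdpChecksum(byte[] srcAddr, byte[] dstAddr, byte[] udpSegment)
{
    uint sum = 0;

    // IPv4 pseudo-header: source address, destination address,
    // zero + protocol (17 for UDP), and the UDP length.
    sum += (uint)((srcAddr[0] << 8) | srcAddr[1]);
    sum += (uint)((srcAddr[2] << 8) | srcAddr[3]);
    sum += (uint)((dstAddr[0] << 8) | dstAddr[1]);
    sum += (uint)((dstAddr[2] << 8) | dstAddr[3]);
    sum += 17;                          // protocol number for UDP
    sum += (uint)udpSegment.Length;     // UDP length: header + payload

    // The UDP header and payload, summed as big-endian 16-bit words.
    for (int i = 0; i + 1 < udpSegment.Length; i += 2)
        sum += (uint)((udpSegment[i] << 8) | udpSegment[i + 1]);

    // An odd trailing byte is padded with a zero byte on the right.
    if (udpSegment.Length % 2 != 0)
        sum += (uint)(udpSegment[udpSegment.Length - 1] << 8);

    // Fold the carries back into the low 16 bits, then complement.
    while ((sum >> 16) != 0)
        sum = (sum & 0xFFFF) + (sum >> 16);

    ushort checksum = (ushort)~sum;

    // Per RFC 768, a computed checksum of zero is transmitted as all ones.
    return checksum == 0 ? (ushort)0xFFFF : checksum;
}

Note that the odd-length padding in the loop above is exactly the detail the suspected bug is about: an implementation that always reads two bytes at a time can run past the end of the array.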

The author of that post says in his comments:
"...The first parameter is the byte array containing the IP Header packet (already formed but with the checksum field [two bytes] set to zero)."
So you should set the two checksum bytes (bytes 7 and 8) to zero, then send all 8 bytes of your header to have the checksum computed.
As for UDP/IP checksums, they are two different things and the author stated this calculation was specifically for IP header checksum creation.

Related

How to tell if a PCAPNG file was captured with a limited snap length when parsing with SharpPCap in C#?

This seems like a stupid question, but I can't find any way to tell if a packet was only partially captured. All the data lengths I can find in the packet structures use the lengths from the header, and even the byte structures appear to fill out the data with garbage. I.e., if I capture 50 bytes of a 768-byte packet, there are 768 bytes of 'data' in the packet.
The Wireshark source seems to require an exception when parsing a packet to know it was only partially captured. I am only reading the header information, so I am not parsing anything past the TCP header.
What I really want to do is build a progress bar that works for snap length limited captures, if there is a way to just do that.
Thanks,
If you hit Ctrl+C on a packet capture being taken with tshark or tcpdump, you can replicate this. The captured length and actual length fields in pcap and pcapng packet headers will differ if the capture is interrupted in the middle of a packet.
Per the documentation, for a single packet header, the relevant fields are:
Public Fields
CaptureLength (uint): the bytes actually captured. If the capture length is small, CaptureLength might be less than PacketLength.
PacketLength (uint): the length of the packet on the line.
I am not seeing pcapng code in the sharppcap repo, so it's unlikely a parser has been implemented.
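That said, if the file is classic pcap, a hedged sketch of the comparison looks like this (the class, method, and property names follow recent SharpPcap releases and the documentation quoted above; they may differ in the version you use, and "capture.pcap" is a placeholder file name):

using System;
using SharpPcap;
using SharpPcap.LibPcap;

class SnapLenCheck
{
    static void Main()
    {
        using (var device = new CaptureFileReaderDevice("capture.pcap"))
        {
            device.Open();
            while (device.GetNextPacket(out PacketCapture e) == GetPacketStatus.PacketRead)
            {
                var raw = e.GetPacket();
                // Data.Length is what was actually captured; PacketLength is
                // the length on the line. They differ when snaplen truncated.
                if (raw.Data.Length < raw.PacketLength)
                    Console.WriteLine("Truncated: captured " + raw.Data.Length + " of " + raw.PacketLength + " bytes");
            }
        }
    }
}

Summing PacketLength over all packets would also give you a total to drive a progress bar against, even when the captured data is shorter.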

How to send negative number to SerialDevice DataWriter Object

I am having trouble figuring out how to send a negative number over the C# UWP SerialDevice DataWriter object.
I am using Windows 10 IoT Core with a Raspberry Pi 2 B. I am trying to control a Sabertooth 2x25 motor driver using packetized serial https://www.dimensionengineering.com/datasheets/Sabertooth2x25.pdf . The documentation describes the communication as:
The packet format for the Sabertooth consists of an address byte, a command byte, a data byte
and a seven bit checksum. Address bytes have value greater than 128, and all subsequent bytes
have values 127 or lower. This allows multiple types of devices to share the same serial line.
And an example:
Packet
Address: 130
Command: 0
Data: 64
Checksum: 66
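(Checking the example: (130 + 0 + 64) & 0b01111111 = 194 & 127 = 66, which matches the checksum byte.)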
The data byte can be -127 to 127. The description states that the data byte is one byte, but when I try to convert a negative number to a byte in C# I get an overflow exception because of the sign bit (I think).
Pseudocode provided in the manual:
Void DriveForward(char address, char speed)
{
    Putc(address);
    Putc(0);
    Putc(speed);
    Putc((address + 0 + speed) & 0b01111111);
}
What would be the best method to write the data to the DataWriter object of SerialDevice, taking negative numbers into account? I am also open to using a different method other than DataWriter to complete the task.
I asked a stupid question. I got -127 to 127 based on some of the manufacturer's sample code for use with their driver (which I can't use because it isn't UWP compatible). I just realized that what their driver is probably doing is: if you call one of their driver functions Drive(address, speed), it uses the reverse command for negative numbers (removes the sign and goes in reverse at that speed) and the forward command for positive numbers.
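Building on that realization, here is a sketch (my reading of the datasheet's command table, not Dimension Engineering's code) that maps a signed speed onto the separate forward/backward commands for motor 1 and writes plain unsigned bytes with DataWriter:

using System;
using Windows.Storage.Streams;

static class SabertoothPackets
{
    // Command 0 = drive forward motor 1, command 1 = drive backwards motor 1.
    public static void DriveMotor1(DataWriter writer, byte address, int speed)
    {
        // A negative speed selects the backward command; the data byte is
        // always the unsigned magnitude, clamped to the 0..127 range.
        byte command = (byte)(speed < 0 ? 1 : 0);
        byte data = (byte)Math.Min(Math.Abs(speed), 127);
        byte checksum = (byte)((address + command + data) & 0x7F);

        writer.WriteByte(address);   // e.g. 130
        writer.WriteByte(command);
        writer.WriteByte(data);
        writer.WriteByte(checksum);
        // Call await writer.StoreAsync() afterwards to flush to the port.
    }
}

This avoids the overflow entirely: nothing negative is ever cast to a byte.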

Fragmented length prefix causes next data read from buffer to use incorrect message length

I'm one of those guys who come here to find answers to questions that others have asked, and I think I never asked anything myself, but after two days of searching unsuccessfully I decided it's time to ask something myself. So here it is...
I have a TCP server and client written in C#, .NET 4, asynchronous sockets using SocketAsyncEventArgs. I have a length-prefixed message framing protocol. Overall everything works just fine, but one issue keeps bugging me.
Situation is like this (I will use small numbers just as an example):
Let's say the server has a send buffer length of 16 bytes.
It sends a message which is 6 bytes long, prefixed with a 4-byte length prefix. Total message length is 6 + 4 = 10.
The client reads the data and receives a buffer of 16 bytes (10 bytes of data and 6 bytes equal to zero).
The received buffer looks like this: 6 0 0 0 56 21 33 1 5 7 0 0 0 0 0 0
So I read the first 4 bytes, which is my length prefix, determine that my message is 6 bytes long, and read it as well; everything is fine so far. Then I have 16 - 10 = 6 bytes left to read, all of them zeroes. I read 4 of them, since that's my length prefix. So it's a zero-length message, which is allowed as a keep-alive packet.
Remaining data to read: 0 0
Now the issue "kicks in". I have only 2 remaining bytes to read, which are not enough to complete a 4-byte length prefix buffer. So I read those 2 bytes and wait for more incoming data. Now the server is not aware that I'm still reading the length prefix (I'm just reading all those zeroes in the buffer) and sends another message correctly prefixed with 4 bytes. The client assumes the server is sending those missing 2 bytes. I receive the data on the client side and read the first two bytes to complete the 4-byte length buffer. The result is something like this:
lengthBuffer = new byte[4]{0, 0, 42, 0}
Which then translates into a message length of 2752512. So my code will continue reading the next 2752512 bytes to complete the message...
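(That is, BitConverter.ToInt32(new byte[] { 0, 0, 42, 0 }, 0) == 2752512 on a little-endian machine, since the 42 lands at the 2^16 position.)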
In every single message framing example I have seen, zero-length messages are supported as keep-alives, and every example I've seen does nothing more than I do. The problem is that I do not know how much data I have to read when I receive it from the server. Since I have a buffer partially filled with zeroes, I have to read it all, as those zeroes could be keep-alives sent from the other end of the connection.
I could drop zero-length messages, stop reading the buffer after the first empty message, and use custom messages for my keep-alive mechanism; that should fix this issue. But I want to know if I am missing something or doing something wrong, since every code example I've seen seems to have the same issue (?)
UPDATE
Marc Gravell, you sir pulled the words out of my mouth. I was about to update that the issue is with sending the data. The problem is that initially, when exploring .NET sockets and SocketAsyncEventArgs, I came across this sample: http://archive.msdn.microsoft.com/nclsamples/Wiki/View.aspx?title=socket%20performance
It uses a reusable pool of buffers. It simply takes the predefined maximum number of client connections allowed, for example 10, takes the maximum single buffer size, for example 512, and creates one large buffer for all of them: 512 * 10 * 2 (for send and receive) = 10240.
So we have byte[] buff = new byte[10240];
Then for each client that connects it assigns a piece of this large buffer. The first connected client gets the first 512 bytes for data-reading operations, and the next 512 bytes (offset 512) for data-sending operations. Therefore the code ends up with an already-allocated send buffer whose size is 512 (exactly the number the client later receives as BytesTransferred). This buffer is populated with data, and all the remaining space out of those 512 bytes is sent as zeroes.
Strangely enough, this example is from MSDN. The reason there is a single huge buffer is to avoid fragmenting heap memory when a buffer gets pinned and the GC can't collect it, or something like that.
Comment from BufferManager.cs in the provided example (see link above):
This class creates a single large buffer which can be divided up and
assigned to SocketAsyncEventArgs objects for use with each socket I/O
operation. This enables buffers to be easily reused and guards
against fragmenting heap memory.
So the issue is pretty much clear. Any suggestions on how I should resolve this are welcome :) Is it true what they say about fragmented heap memory, is it OK to create a data buffer "on the fly"? If so, will I have memory issues when the server scales to a few hundred or even thousands of clients?
I guess the problem is that you are treating the trailing zeros in the buffer you read as data. This is not data. It is garbage. No one ever sent it to you.
The Stream.Read call returns you the number of bytes actually read. You should not interpret the rest of the buffer in any way.
The problem is that I do not know how much data I have to read when I
receive it from the server.
Yes, you do: Use the return value from Stream.Read.
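A minimal sketch of that point (the stream and accumulator parameters are just illustrative names): only the first bytesRead bytes of the receive buffer are valid, and everything past them must never be parsed.

static void ReceiveChunk(System.IO.Stream stream, System.IO.MemoryStream accumulator)
{
    byte[] buffer = new byte[512];
    int bytesRead = stream.Read(buffer, 0, buffer.Length);
    if (bytesRead == 0)
        return; // the remote side closed the connection

    // Append exactly bytesRead bytes to the framing accumulator,
    // never buffer.Length.
    accumulator.Write(buffer, 0, bytesRead);
}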
That sounds simply like a bug in either your send or receive code. You should only get BytesTransferred as the data that was actually sent, or some number smaller than that if it arrives in fragments. The first thing I would wonder is: did you set up the send correctly? I.e., if you have an oversized buffer, a correct implementation might look like:
args.SetBuffer(buffer, 0, actualBytesToSend);
if (!socket.SendAsync(args)) { /* whatever */ }
where actualBytesToSend can be much less than buffer.Length. My initial suspicion is that you are doing something like:
args.SetBuffer(buffer, 0, buffer.Length);
and therefore sending more data than you have actually populated.
I should emphasize: there is something wrong in either your send or receive; I do not believe, at least without an example, that there is some fundamental underlying bug in the BCL here - I use the async API extensively, and it works fine - but you do need to accurately track the data you are sending and receiving at all points.
"Now server is not aware that I'm still reading length prefix (I'm just reading all those zeroes in the buffer) and sends another message correctly prefixed with 4 bytes.".
Why? How does the server know what you are and aren't reading? If the server retransmits any part of a message it is in error. TCP already does that for you.
There seems to be something radically wrong with your server.

What is the size of UDP packets if I send 0 payload data in C#?

I have figured out that the maximum data before fragmentation between 2 endpoints using UDP is 1472 bytes (other endpoints may vary). This suggests that the MTU is 1500 bytes and the header overhead per packet is 28 bytes. Is it safe to assume that if I send 0 bytes of data (payload), the actual data being transferred is 28 bytes? I am doing some benchmarking, so it is crucial for me to know what happens in the channel.
The MTU is the maximum size of an IP packet that can be transmitted without fragmentation.
IPv4 mandates a path MTU of at least 576 bytes, IPv6 of at least 1280 bytes.
Ethernet has an MTU of 1500 bytes.
An IP packet is composed of two parts: the packet header and the payload.
The size of an IPv4 header is at least 20 bytes, the size of an IPv6 header at least 40 bytes.
The payload of an IP packet is typically a TCP segment or a UDP datagram.
A UDP datagram consists of a UDP header and the transported data.
The size of a UDP header is 8 bytes.
This means an IP packet with an empty UDP datagram as payload takes at least 28 (IPv4) or 48 (IPv6) bytes, but may take more bytes.
Also note that in the case of Ethernet, the IP packet will additionally be wrapped in a MAC packet (14-byte header + 4-byte CRC) which is embedded in an Ethernet frame (8-byte preamble sequence). This adds 26 bytes to the IP packet but doesn't count against the MTU.
So you cannot assume that a UDP datagram will cause a specific number of bytes to be transmitted.
Typical IP headers are 20 bytes, if no options have been selected. UDP headers are 8 bytes. Over Ethernet, frame size is 14 bytes (header) + 4 bytes (trailer). Depending on how you capture these packets, you may or may not have to account for frame size.
Without Ethernet (IP + UDP) = 20 + 8 = 28 bytes
With Ethernet = 18 + 28 = 46 bytes
The UdpClient class in C# will return the packet from layer 5 onwards, so you won't have to account for the above.
Update:
The 1500-byte MTU is enforced at the IP layer. That means that whatever is below the IP layer does not count when fragmenting.
That translates to:
Ethernet frame bytes (fixed) = 18
IP header (min) = 20
UDP header (fixed) = 8
Max. allowed payload for no fragmentation = 1472
Total number of bytes that go on the wire (sum of the above) = 1518 bytes
(You can count the number of bytes leaving with a tool like Wireshark)
If (IP header + UDP header + Payload > 1500) then the packet is fragmented.
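For the benchmarking context, here is a small sketch of actually sending a zero-byte UDP payload from C# (the address and port are placeholders); on the wire this still costs at least the 28 header bytes plus the link-layer framing discussed above:

using System;
using System.Net.Sockets;

class ZeroPayloadSend
{
    static void Main()
    {
        using (var client = new UdpClient())
        {
            byte[] empty = new byte[0];

            // Sends an IP packet whose UDP payload is 0 bytes long.
            int sent = client.Send(empty, empty.Length, "192.0.2.1", 9000);
            Console.WriteLine("UDP payload bytes sent: " + sent); // prints 0
        }
    }
}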
Is it safe to assume that if I send 0 bytes of data (payload), the actual data being transferred is 28 bytes?
No
(and yes... because it usually makes no real difference, insofar as it is "safe")
While it is true that a no-payload-no-option UDP/IPv4 datagram is exactly 28 bytes (or "octets" in network jargon), this is by no means a safe assumption.
It is, however, for the most part inconsequential. Switches and routers usually forward a small packet exactly as fast as a larger one (or with negligible difference). The only occasion on which you may see a difference is on your bandwidth bill (you pay for all bits on the wire, not just for the ones that you use!).
IPv4 may have up to 40 octets of "options" attached to it, and IPv4 may be encapsulated in IPv6 (without you knowing, even). Both could drastically increase the datagram's size and thus data transferred in a rather obvious way.
Also, the datagram will be further encapsulated at the link layer, adding preambles and header data and imposing minimum frame lengths. The presence of additional headers is, again, pretty obvious; that payloads have minimum sizes as well as maximum sizes is a less well-known fact.
Ethernet and ATM are two widely used standards that can get in the way of your assumptions here (but other link layers are similar).
An Ethernet frame has a minimum size of 64 bytes and is zero-padded to this size. In the presence of 802.1Q (VLAN) this means that the minimum payload for an Ethernet frame is 42 octets; otherwise it is 46 octets.
Sending a zero-length UDP/IPv4 datagram over "ordinary" Ethernet will therefore append 18 zero bytes to the payload. You never get to see them, but they are there and they will appear on your bill.
Similarly, ATM cells (the same as "frames"; they use a different word for some reason) are always 53 bytes, with 48 bytes of zero-padded payload. Thus, a zero-payload UDP/IPv4 datagram will cause 20 zero bytes to be added, whereas a zero-length UDP/IPv6 datagram would keep its original size (being exactly 48 bytes), assuming there is no other encapsulation such as PPPoE in between.
Lastly, note that additional packets may need to be sent and received in order to be able to send your packet at all. For example, your Ethernet card may have to do ARP (or NDP) to be able to send your datagram. Caching the results amortizes this as you send several datagrams, but if you send just one UDP datagram, you may be surprised that about three times as much "data" is sent and received compared to what you might naively expect.
IP overhead is 20 bytes and UDP is 8 bytes, so yes, 28 bytes.
http://en.wikipedia.org/wiki/User_Datagram_Protocol
Don't forget about Ethernet overhead if you're doing internal testing

Why is the calculated checksum not matching the BCC sent over the serial port?

I've got a little application written in C# that listens on a SerialPort for information to come in. The information comes in as: STX + data + ETX + BCC. We then calculate the BCC of the transmission packet and compare. The function is:
private bool ConsistencyCheck(byte[] buffer)
{
    // The last byte of the packet is the transmitted BCC.
    byte expected = buffer[buffer.Length - 1];
    byte actual = 0x00;

    // XOR everything after the STX and before the BCC (ETX included).
    for (int i = 1; i < buffer.Length - 1; i++)
    {
        actual ^= buffer[i];
    }

    if ((expected & 0xFF) != (actual & 0xFF))
    {
        if (AppTools.Logger.IsDebugEnabled)
        {
            AppTools.Logger.Warn(String.Format("ConsistencyCheck failed: Expected: #{0} Got: #{1}", expected, actual));
        }
    }

    return (expected & 0xFF) == (actual & 0xFF);
}
And it seems to work, more or less. It correctly excludes the STX and the BCC and correctly includes the ETX in its calculation. It works a very large percentage of the time; however, we have at least two machines we are running this on, both Windows 2008 64-bit, on which the BCC calculation NEVER adds up. Pulling from a recent log: in one case a BCC of 20 was sent and I calculated 16, and in another 11 was sent and I calculated 27.
I'm absolutely stumped as to what is going on here. Is there perhaps a 64-bit or Windows 2008 "gotcha" I'm missing here? Any help or even wild ideas would be appreciated.
EDIT:
Here's the code that reads the data in:
private void port_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    // Retrieve the number of bytes in the buffer
    int bytes = serialPort.BytesToRead;

    // Create a byte array to hold the awaiting data
    byte[] received = new byte[bytes];

    // Read the data and store it
    serialPort.Read(received, 0, bytes);
    DataReceived(received);
}
And the DataReceived() function takes that data and appends it to a global StringBuilder object. It then stays in the StringBuilder until it's passed to these various functions, at which point .ToString() is called on it.
EDIT2: Changed the code to reflect my altered routines that operate on bytes/byte arrays rather than strings.
EDIT3: I still haven't figured this out yet, and I've gotten more test data that has completely inconsistent results (the amount I'm off of the send checksum varies each time with no pattern). It feels like I'm just calculating the checksum wrong, but I don't know how.
The buffer is defined as a String, while I suspect the data you are transmitting is bytes. I would recommend using byte arrays (even if you are sending ASCII/UTF/whatever-encoded data). Then, after the checksum is valid, convert the data to a string.
Computing a BCC is not standard but "customer defined". We program interfaces for our customers and have found many different algorithms: summing, XORing, masking, leaving out the STX, the ETX, or both, or leaving out all known bytes.
For example, suppose the package structure is "STX, machine code, command code, data, ..., data, ETX, BCC", and the BCC is (customer specified!) "the binary sum of all bytes from the command code to the last data byte, inclusive, masked with 0xCD". First, we add all the variable bytes. It makes no sense to add the STX, ETX, or machine code: if those bytes do not match, the frame is discarded anyhow, since their values are checked as they arrive to make sure the frame starts and ends correctly and is addressed to the receiving machine. Including only the bytes that can change in the frame also decreases processing time, which matters since we often work with slow 4- or 8-bit microcontrollers. (And note that this example sums the bytes rather than XORing them; another customer may want something else.) Second, once we have the sum (which can be 16 bits if it is not truncated during the addition), we mask it (bitwise AND) with the key (in this example, 0xCD).
This kind of scheme is frequently used in all kinds of closed systems, for example connecting a serial keyboard to an ATM, for protection reasons on top of encryption and other things. So you really have to check (read: "crack") how your two machines are computing their (non-standard) BCCs.
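To make that concrete, here is a C# sketch of that particular customer-specified algorithm (the frame layout and the 0xCD key come from the example above and are not any kind of standard):

// Frame layout assumed: [STX, machine code, command code, data ..., ETX, BCC]
static byte CustomBcc(byte[] frame)
{
    int sum = 0;

    // Sum from the command code through the last data byte, inclusive.
    for (int i = 2; i < frame.Length - 2; i++)
        sum += frame[i];

    // Mask (bitwise AND) the sum with the customer-specified key.
    return (byte)(sum & 0xCD);
}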
Make sure you have the port set to accept null bytes somewhere in your port setup code. (This may be the default value; I'm not sure.)
port.DiscardNull = false;
Also, check the type of event arriving at the serial port, and accept only character data:
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    if (e.EventType == SerialData.Chars)
    {
        // Your existing code
    }
}
