Recognize BREAK in protocol reliably (RS232) - C#

I have an EMS bus on RS232 with a protocol where all blocks are separated by a BREAK (0x00).
The data is sent continuously from the device.
My problem is that I'm not able to separate these blocks reliably.
A block sometimes contains a 0x00 byte (but this is not a break).
I know that every block starts with 0x01, 0x02, or 0x03 and ends with a CRC+BREAK.
Is there a good way to split the blocks in C#?
Thanks.

Do you mean an RS232 break, or just a zero byte being transmitted?
It sounds like you are talking about a zero byte, in which case you need to tell the difference between a data byte and a terminator byte.
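If it were a genuine RS232 break condition - the line held in the spacing state for longer than one character time - the .NET SerialPort class would report it via the PinChanged event rather than in the data stream. A minimal sketch; the port name and settings are placeholders:

    using System;
    using System.IO.Ports;

    class BreakDetector
    {
        static void Main()
        {
            // Placeholder port name and settings; adjust for the actual bus.
            var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One);

            // A true break shows up as a pin-change event, not as received data.
            port.PinChanged += (sender, e) =>
            {
                if (e.EventType == SerialPinChange.Break)
                    Console.WriteLine("Break condition detected");
            };

            port.Open();
            Console.ReadLine(); // keep the program alive while listening
        }
    }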
In this case, you need to look at identifying features of the data packets. If the packets are always the same length, you can easily tell whether the zero is a terminator. Otherwise you'll need to parse the data in the packets to work it out. Do they contain any kind of length information? Or perhaps you have to read the data byte by byte and parse it to work out what each byte means, and thus where the packet ends.
If you can't do this, then you don't have a protocol; you have random, unpredictable data. Any protocol must allow some way for you to detect and split the packets.
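Given the structure described in the question - every block starts with 0x01, 0x02, or 0x03 and ends with CRC+BREAK - one workable approach is to treat a 0x00 as a terminator only when the bytes buffered so far form a CRC-valid block. A minimal sketch; the XOR checksum in ComputeCrc is only a stand-in for whatever CRC the EMS bus actually uses:

    using System;
    using System.Collections.Generic;

    class BlockSplitter
    {
        private readonly List<byte> _buffer = new List<byte>();

        public event Action<byte[]> BlockReceived;

        // Feed every received byte in here, in order.
        public void Feed(byte b)
        {
            // A 0x00 only terminates the block if the buffered bytes
            // check out; otherwise it is treated as ordinary payload.
            if (b == 0x00 && IsCompleteBlock())
            {
                BlockReceived?.Invoke(_buffer.ToArray());
                _buffer.Clear();
                return;
            }

            // Discard noise between blocks: a block must start with 0x01-0x03.
            if (_buffer.Count == 0 && b != 0x01 && b != 0x02 && b != 0x03)
                return;

            _buffer.Add(b);
        }

        private bool IsCompleteBlock()
        {
            if (_buffer.Count < 2) // at least a start byte plus a CRC byte
                return false;
            // The last buffered byte should be the CRC over everything before it.
            return ComputeCrc(_buffer, _buffer.Count - 1) == _buffer[_buffer.Count - 1];
        }

        // Placeholder checksum - replace with the CRC the bus really uses.
        private static byte ComputeCrc(List<byte> data, int count)
        {
            byte crc = 0;
            for (int i = 0; i < count; i++)
                crc ^= data[i];
            return crc;
        }
    }

With this approach a data byte of 0x00 only causes a false split in the rare case where the byte just before it also happens to look like a valid CRC over the buffered bytes.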

Related

C# order of bytes sent and received by socket

I was wondering about the order of bytes sent and received by a TCP socket.
I have a socket implemented; it's up and working, so that's good.
I also have something called "a message" - it's a byte array that contains a string (serialized to bytes) and two integers (converted to bytes). It has to be like that - project specifications :/
Anyway, I was wondering how this works at the byte level:
In a byte array, the bytes have an order - 0, 1, 2, ..., Length-1. They sit in memory.
How are they sent? Is the last one the first to be sent, or the first one? Receiving, I think, is quite easy - the first byte to appear goes into the first free place in the buffer.
I think a little image I made nicely shows what I mean.
They are sent in the same order they are present in memory. Doing otherwise would be more complex... How would you do it if you had a continuous stream of bytes? Wait until the last one has been sent and then reverse them all? Or should this inversion work "packet by packet", so that each block of 2K bytes (or whatever the size of the TCP packets is) is internally reversed but the order of the packets is "correct"?
Receiving, I think, is quite easy - the first byte to appear goes into the first free place in the buffer.
Why on earth should the sender reverse the bytes but not the receiver? If you build a symmetric system, either both perform an action or neither does!
Note that the real problem is normally one of endianness. The memory layout of an int on your computer could be different from the layout of an int on another computer, so one of the two computers may have to reverse the 4 bytes of the int. But endianness is something that is resolved primitive type by primitive type. Many internet protocols are, for historical reasons, big endian, while Intel CPUs are little endian. Even the internal fields of TCP are big endian (see Big endian or Little endian on net?), but here we are speaking of the fields of TCP, not of the data moved by the TCP protocol.
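To illustrate in C# - using BitConverter and IPAddress.HostToNetworkOrder from the base class library - the byte layout of an int on a little-endian machine, and its network-order counterpart:

    using System;
    using System.Net;

    class EndiannessDemo
    {
        static void Main()
        {
            int value = 0x01020304;

            // On a little-endian (Intel/AMD) machine this prints 04-03-02-01:
            // the least significant byte sits first in memory.
            Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(value)));

            // Converted to network (big-endian) order it prints 01-02-03-04.
            int network = IPAddress.HostToNetworkOrder(value);
            Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(network)));

            Console.WriteLine(BitConverter.IsLittleEndian); // True on x86/x64
        }
    }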

Is it better to download byte by byte with sockets?

I'm building a file sharing program, and I would like to know whether it's better, when using sockets, to receive and send byte by byte or in fixed-size chunks. I'm sending messages (login, actual file size list, etc.) of 512 bytes, and 65536-byte chunks when sending and receiving files.
It depends on your usage and goal:
for high performance in a non-faulty environment:
choose 1500 bytes
for a bad, faulty environment:
choose smaller sizes, but not byte by byte
It's always better to use reasonably sized blocks for efficiency reasons. Typical network packets are around 1500 bytes in size (Ethernet) and every packet carries a bunch of necessary overhead (such as protocol, destination address and port etc.).
Single bytes are the worst (in terms of efficiency) that you can do.
Handling 1500 or so bytes at a time will be much more efficient than one byte at a time. That is about the size of a typical Ethernet frame.
Keep in mind that you are using a stream of bytes: any concept of message or record is up to you to implement.
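As a sketch of the block-wise approach, here is a typical receive loop over a NetworkStream; the 8192-byte buffer is an arbitrary but reasonable size:

    using System.IO;
    using System.Net.Sockets;

    class ChunkedReceiver
    {
        // Copies everything from the network stream to a destination
        // (e.g. a FileStream) in fixed-size chunks rather than byte by byte.
        public static void CopyAll(NetworkStream source, Stream destination)
        {
            var buffer = new byte[8192]; // arbitrary but reasonable block size
            int read;
            // Read returns 0 once the remote side has closed the connection.
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                destination.Write(buffer, 0, read);
        }
    }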

Difference between NetworkToHostOrder and HostToNetworkOrder?

It seems that with any given input both of these functions return the same value.
Does that mean my computer is using big-endian (Win7)? Because I know network order is big-endian, so converting between the two should do nothing, then?
I am a bit confused on when I have to use these functions. I am trying to write a simple client-server program and am currently just familiarizing myself with what MSDN has to say about the NetworkStream, IPAdress, and TcpClient classes.
When would I need to use these functions, if at all? When sending byte arrays to the server and back, would I need to call these functions on the individual bytes before sending them off? I'd imagine not... What about if I prepend the data with a length integer - would I need to call HostToNetworkOrder on that?
Both functions do the exact same conversion; there are two functions so that your code will be more readable and the intention will stand out better.
Your Windows system is running on an Intel (or AMD) processor, and it's the processor that sets the word format... these are little-endian machines.
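A quick way to see both points at once - on a little-endian machine each function performs the same byte swap, so they return identical results for any input:

    using System;
    using System.Net;

    class OrderDemo
    {
        static void Main()
        {
            int value = 0x01020304;

            // Both calls swap the byte order on a little-endian machine,
            // so both print 04030201.
            Console.WriteLine(IPAddress.HostToNetworkOrder(value).ToString("X8"));
            Console.WriteLine(IPAddress.NetworkToHostOrder(value).ToString("X8"));

            // Applying one after the other round-trips: prints 01020304.
            int roundTrip = IPAddress.NetworkToHostOrder(IPAddress.HostToNetworkOrder(value));
            Console.WriteLine(roundTrip.ToString("X8"));
        }
    }

As for the byte arrays in the question: individual bytes have no byte order, so only multi-byte primitives - such as a length prefix - need converting before they go on the wire.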

Create TCP Packet in C#

I'm sending data to an extremely old system via TCP. I need to send 2000 bytes in one packet, and I need it not to be split up (which is what happens when I write out 2000 bytes via a socket).
Yes, I shouldn't have to care about this at the application level - but I do in fact care about it, because I have no other options on the older system; everything MUST be received in a single packet.
Is there something less terrible than calling netcat?
Unless you are on a link with jumbo frames, the usual MTU on Ethernet is 1500 bytes. Subtract the IP header (20 bytes) and the TCP header (at least 20 bytes), and at most 1460 bytes of payload remain. So no luck with 2000 bytes in a single packet.

C# stream received all data?

I'm using C#/.NET and the Socket class from the System.Net.Sockets namespace, with the asynchronous receive methods. I understand this could be done more easily with something like a web service; this question is born out of curiosity rather than practical need.
My question is: assume the client is sending some binary-serialized object of an unknown length. On my server with the socket, how do I know the entire object has been received and that it is ready for deserialization? I've considered prepending the object with the length of the object in bytes, but this seems unnecessary in the .Net world. What happens if the object is larger than the buffer? How would I know, 'hey, gotta resize the buffer because the object is too big'?
You either need the protocol to be self-terminating (like XML is, effectively - you know when you've finished receiving an XML document when it closes the root element) or you need to length-prefix the data, or you need the other end to close the stream when it's done.
In the case of a self-terminated protocol, you need to have enough hooks in so that the reading code can tell when it's finished. With binary serialization you may well not have enough hooks. Length-prefix is by far the easiest solution here.
If you use pure sockets, you need to know the length. Otherwise, the size of the buffer is not relevant, because even if your buffer is as large as the whole data, it still may not all be read in one go - check the Stream.Read method: it returns the number of bytes actually read, so you need to loop until all data has been received.
Yeah, you can't deserialize until you've received all the bytes.
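As a sketch of the length-prefix approach: write a 4-byte length before the payload, and on the receiving side loop until exactly that many bytes have arrived, since Stream.Read may return fewer bytes than requested. The class and method names here are just illustrative:

    using System;
    using System.IO;
    using System.Net;

    static class LengthPrefixedMessages
    {
        public static void Write(Stream stream, byte[] payload)
        {
            // 4-byte length prefix in network (big-endian) order.
            byte[] prefix = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
            stream.Write(prefix, 0, prefix.Length);
            stream.Write(payload, 0, payload.Length);
        }

        public static byte[] Read(Stream stream)
        {
            int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(ReadExactly(stream, 4), 0));
            return ReadExactly(stream, length);
        }

        // Stream.Read may return fewer bytes than asked for, so loop
        // until exactly 'count' bytes have been collected.
        private static byte[] ReadExactly(Stream stream, int count)
        {
            var buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0)
                    throw new EndOfStreamException("Connection closed mid-message.");
                offset += read;
            }
            return buffer;
        }
    }

Once Read returns, the complete object is in the buffer and can be handed to the deserializer; no guessing about buffer sizes is needed.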
