I'm currently subscribing to a multicast UDP feed. It streams multiple messages, each about 80 bytes, packed into a single packet of at most 1000 bytes. As the packets come in, I parse them into objects and then store them in a dictionary.
Each packet I receive carries a sequence number, so I know if I've dropped any packets.
After about 10k packets received, I start to drop packets here and there.
securityDefinition xyz = new securityDefinition(p1, p2, p3, p4, p5...etc);

if (!secDefs.ContainsKey(securityID))
{
    secDefs.Add(securityID, xyz);  // THIS WILL CAUSE DROPS EVENTUALLY
    secDefs.Add(securityID, null); // THIS WORKS JUST FINE
}
else
{
    // A repeat definition has been received. Assuming all sequence numbers
    // in the packets lined up sequentially, I know I am done.
    // However, if there is a drop somewhere (a gap in the sequence numbers),
    // I know I am missing something.
}
securityDefinition is a class containing roughly 15 ints, 10 decimals and 5 strings (<10 characters each).
Is there a faster way to store these objects in real time that can keep up with the fast UDP feed? I have tried making securityDefinition a struct, storing the data in a DataTable, and adding the secDef to a List and a Queue. Same issue with all of them.
It seems the only bottleneck is putting the objects in the dictionary. Creating the object and checking the dictionary to see if it already exists seems fine.
EDIT:
To clarify a few things: the security definitions come in from a server in a loop. There are roughly 1,000,000 definitions. Once they have all been sent, they are sent again, over and over. When my program starts, I need to initialize all the definitions. Once I get a repeat, I know I am done and can close the connection. However, if I receive a packet at sequence number 1 and the next packet is sequence number 3, I know I have dropped packet 2 and have no way of recovering it.
ConcurrentQueue<byte[]> pkts = new ConcurrentQueue<byte[]>();

// IN THE RECEIVER THREAD...
void ProductDefinitionReceiver()
{
    while (!secDefsComplete)
    {
        byte[] data = new byte[1000];
        s.Receive(data);
        pkts.Enqueue(data);
    }
}

// IN A SEPARATE THREAD:
public void processPacketQueue()
{
    int dumped = 0;
    byte[] pkt;
    while (!secDefsComplete)
    {
        while (pkts.TryDequeue(out pkt))
        {
            if (!secDefsComplete)
            {
                // processPkt includes the parsing and inserting the secDef object into the dictionary.
                processPkt(pkt);
            }
            else
            {
                dumped++;
            }
        }
    }
    Console.WriteLine("Dumped: " + dumped);
}
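A minimal sketch of one way to attack the dictionary-insert cost described above, assuming the securityID key is an int and that a capacity near the known count of roughly 1,000,000 definitions is acceptable. Growing a Dictionary from its default capacity forces repeated resize/rehash passes, and each pause is a window in which UDP packets can be dropped; the capacity figure and the larger socket buffer below are assumptions for illustration, not part of the original code.

// Sketch only: pre-size the dictionary so no resize/rehash happens mid-feed.
// The capacity value and the int key type are assumptions for illustration.
var secDefs = new Dictionary<int, securityDefinition>(1200000);

// Also assuming 's' is the receiving Socket: a larger OS receive buffer gives
// the parsing thread more slack before the kernel starts discarding datagrams.
s.ReceiveBufferSize = 8 * 1024 * 1024;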
Related
If I send 1000 bytes over TCP, does it guarantee that the receiver will get the entire 1000 bytes "together"? Or perhaps he will first get only 500 bytes, and later receive the other bytes?
EDIT: the question comes from the application's point of view. If the 1000 bytes are reassembled into a single buffer before they reach the application, then I don't care if they were fragmented along the way.
See Transmission Control Protocol:
TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer.
A "stream" means that there is no message boundary from the receiver's point of view. You could get one 1000 byte message or one thousand 1 byte messages depending on what's underneath and how often you call read/select.
Edit: Let me clarify from the application's point of view. No, TCP will not guarantee that a single read will give you all of the 1000 bytes (or 1 MB, or 1 GB) the sender may have sent. Thus, a protocol above TCP usually contains a fixed-length header with the total content length in it. For example, you could always send 1 byte that indicates the total length of the content in bytes, which would support messages of up to 255 bytes.
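A minimal sketch of that length-prefixed approach, assuming a single leading length byte and a connected NetworkStream; the method and variable names are illustrative.

using System.IO;
using System.Net.Sockets;

// Sketch: read the 1-byte length header, then loop until that many payload bytes have arrived.
static byte[] ReadLengthPrefixedMessage(NetworkStream stream)
{
    int length = stream.ReadByte();                // first byte = payload length (0..255)
    if (length < 0) throw new EndOfStreamException();

    byte[] payload = new byte[length];
    int offset = 0;
    while (offset < length)
    {
        int read = stream.Read(payload, offset, length - offset);
        if (read == 0) throw new EndOfStreamException();   // connection closed mid-message
        offset += read;
    }
    return payload;
}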
As other answers indicated, TCP is a stream protocol -- every byte sent will be received (once and in the same order), but there are no intrinsic "message boundaries" -- whether all bytes are sent in a single .send call, or multiple ones, they might still be received in one or multiple .receive calls.
So, if you need "message boundaries", you need to impose them on top of the TCP stream, in other words, essentially at the application level. For example, if you know the bytes you're sending will never contain a \0, null-terminated strings work fine; various methods of "escaping" let you send strings of bytes which obey no such limitations. (There are existing protocols for this, but none is really widespread or widely accepted.)
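For the null-terminated variant mentioned above, a rough sketch, assuming the payload never contains a zero byte as stated; names and the ASCII encoding are illustrative.

using System.Collections.Generic;
using System.IO;
using System.Net.Sockets;
using System.Text;

// Sketch: accumulate bytes from the stream until the '\0' terminator arrives.
static string ReadNullTerminated(NetworkStream stream)
{
    var bytes = new List<byte>();
    int b;
    while ((b = stream.ReadByte()) > 0)        // stop at 0 (terminator) or -1 (stream closed)
        bytes.Add((byte)b);

    if (b < 0) throw new EndOfStreamException();
    return Encoding.ASCII.GetString(bytes.ToArray());
}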
Basically, as far as TCP goes, it only guarantees that the data sent from one end will arrive at the other end in the same order.
Now, usually what you'll have to do is keep an internal buffer and loop until you have received your 1000-byte "packet",
because the recv call, as mentioned, returns how much has actually been received.
You'll then usually have to implement a protocol on top of TCP to make sure you send data at an appropriate speed, because if you send() all the data in one go you can overload the underlying networking stack, which will cause complications.
So usually the protocol includes a tiny acknowledgement packet sent back to confirm that the 1000-byte packet was received.
You decide, in your protocol, how many bytes your message shall contain. For instance, in your case it's 1000. Following is up-and-running C# code to achieve this. The method returns once 1000 bytes have been read; the abort condition is receiving 0 bytes, and you can tailor that according to your needs.
Usage:
strMsg = ReadData(thisTcpClient.Client, 1000, out bDisconnected);
Following is the method:
string ReadData(Socket sckClient, int nBytesToRead, out bool bShouldDisconnect)
{
    bShouldDisconnect = false;
    byte[] byteBuffer = new byte[nBytesToRead];
    Array.Clear(byteBuffer, 0, byteBuffer.Length);

    int nDataRead = 0;
    int nStartIndex = 0;
    while (nDataRead < nBytesToRead)
    {
        int nBytesRead = sckClient.Receive(byteBuffer, nStartIndex, nBytesToRead - nStartIndex, SocketFlags.None);
        if (0 == nBytesRead)
        {
            bShouldDisconnect = true;
            // 0 bytes received; assuming disconnect signal
            break;
        }
        nDataRead += nBytesRead;
        nStartIndex += nBytesRead;
    }
    return Encoding.Default.GetString(byteBuffer, 0, nDataRead);
}
Let us know if this didn't help you (0: Good luck.
Yes, there is a chance of receiving packets part by part. Hope this MSDN article and the following example (taken from the MSDN article for quick review) will be helpful to you if you are using Windows sockets.
void CChatSocket::OnReceive(int nErrorCode)
{
    CSocket::OnReceive(nErrorCode);

    DWORD dwReceived;
    if (IOCtl(FIONREAD, &dwReceived))
    {
        if (dwReceived >= dwExpected)   // Process only if you have enough data
            m_pDoc->ProcessPendingRead();
    }
    else
    {
        // Error handling here
    }
}
TCP guarantees that the receiver will receive all 1000 bytes, but not necessarily in order on the wire (though it will appear in order to the receiving application) and not necessarily all at once (unless you craft the packets yourself and make it so).
That said, for a payload as small as 1000 bytes, there is a good chance it will be sent in one packet as long as you do it in one call to send, though for larger transmissions it may not be.
The only thing that the TCP layer guarantees is that the receiver will receive:
all the bytes transmitted by the sender
in the same order
There are no guarantees at all about how the bytes might be split up into "packets". All the stuff you might read about MTU, packet fragmentation, maximum segment size, or whatever else is all below the layer of TCP sockets, and is irrelevant. TCP provides a stream service only.
With reference to your question, this means that the receiver may receive the first 500 bytes, then the next 500 bytes later. Or, the receiver might receive the data one byte at a time, if that's what it asks for. This is the reason that the recv() function takes a parameter that tells it how much data to return, instead of it telling you how big a packet is.
The Transmission Control Protocol guarantees successful delivery of all packets by requiring the receiver to acknowledge each packet back to the sender. By this definition, the receiver will always receive the payload in chunks when the size of the payload exceeds the MTU (maximum transmission unit).
For more information please read Transmission Control Protocol.
The IP packets may get fragmented in transit.
So the destination machine may receive multiple packets, which will be reassembled by the TCP/IP stack. Depending on the network API you are using, the data will be given to you either reassembled or as raw packets.
It depends on the established MTU (maximum transfer unit). If your established connection (once handshaked) uses an MTU of 512 bytes, you will need two or more TCP packets to send 1000 bytes.
In my application, data is received from a UDP socket at a high rate (450 Mbps; each UDP packet is 1 KB). I need two threads: the first thread receives data from the socket and writes it into a buffer (byte[]); the second thread reads data from the buffer and processes it. I need to receive and process the data in real time.
I use a variable "TableCounter" to synchronize the two threads, and I use "SetThreadAffinityMask" to pin the two threads to two different CPU cores.
I implemented this scenario in C++ and it works fine at high speed, but in C# my application is slow and I lose some packets (each packet contains a counter, and I check it).
How can I resolve the speed and data-loss problems? Which type of buffer is good for this scenario (two threads accessing the buffer)? Please help me.
/////////////////////////////////////
first thread
/////////////////////////////////////
Datain = new Thread(new ThreadStart(readDataUDPClient));
Datain.IsBackground = true;
Datain.Priority = ThreadPriority.Highest;
Datain.Start();
/////////////////////////////////////
public void readDataUDPClient()
{
    var ptr = GetCurrentThread();
    SetThreadAffinityMask(ptr, new IntPtr(0x0002));

    UdpClient client = new UdpClient(20000);
    client.EnableBroadcast = true;
    IPEndPoint anyIP = new IPEndPoint(IPAddress.Any, 20000);
    client.Client.ReceiveBufferSize = 900000000;

    while (true)
    {
        tdata = new byte[1024];
        tdata = client.Receive(ref anyIP);
        // check counter
        Array.Copy(tdata, 0, Table, TableCounter, 1024);
        TableCounter += 1024;
        if (TableCounter == TableSize)
        {
            TableCounter = 0;
            Cycle++;
        }
        if (TableCounter == 10240)
        {
            waitHandle.Set(); // set event for start second thread
        }
    }
}
and
/////////////////////////////////////
Second thread
/////////////////////////////////////
public void processData()
{
    var ptr = GetCurrentThread();
    SetThreadAffinityMask(ptr, new IntPtr(0x0004));

    int Counter = 0;
    waitHandle.WaitOne();
    while (true)
    {
        if ((Counter < TableCounter) && (Counter < TableSize))
        {
            // Read from Table
            // process data
            Counter++; // local counter
        }
        else if (Counter == TableSize)
        {
            // Read from Table
            // process data
            Counter = 0; // local counter
        }
        else
        {
            Thread.Sleep(1); // or break and wait for another event
        }
    }
}
FWIW, if data loss is a concern then UDP might not be your best choice.
A brief inspection of the receive code suggests that you don't need to "pre-allocate" tdata. C# doesn't just allocate that memory, it also zeroes it, every iteration. You are paying the cost of doing this, then throwing the buffer away, because Receive returns a (different) buffer. Try something like:
var tdata = client.Receive(ref anyIP);
Allocating tdata and the Array.Copy would be where your time is being spent. A "more C#-ish" approach would be to declare Table as a list or array, like so:
List<byte[]> Table = new List<byte[]>(10);
....
Table.Add(client.Receive(ref anyIP));   // store the returned buffer directly; no copy needed
ix++;
if (ix > 10) { /* ...do something, cycle, etc. */ }
This gets rid of tdata entirely and assigns your return value to the appropriate slot in Table. This would of course necessitate changing your second thread's function to use the list-of-buffers structure. These changes reduce two 1024-byte copy operations (one for the initialization, one for the Array.Copy) to setting one pointer.
You also need to be careful that your first (producing) thread doesn't outpace your second (consuming) thread. I assume that in production your second thread will do something more than go through the buffer 1 byte at a time (your second thread loops 1024 times for each pass through your first thread's loop).
Additional content in response to comments
Two questions came up in the comments. It appears that the original question was a simplified version of the actual problem. However, I would recommend the same approach: "harvest"/read the data in a tight loop into a pre-allocated array of blocks, for example a 100-element array of n bytes per element. The number of bytes returned is controlled by the UdpClient.Receive method and will be equal to the data portion of the UDP packet sent. Since the initial array is just an array of pointers (to byte[]), you should make it large enough to handle a suitable number of backlogged data packets. If this is running on a PC with a reasonable amount of memory, start with something like List(16000), then adjust the starting value up or down depending on how quickly your consumer threads process the data.
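Roughly what that harvest loop could look like, as a sketch; the method and variable names are illustrative, and the lock is just one simple way to share the list with the consumer thread.

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

// Sketch: read each datagram straight into a list of blocks, with no per-packet copy.
static void HarvestLoop(UdpClient client, List<byte[]> table, Func<bool> done)
{
    var remote = new IPEndPoint(IPAddress.Any, 0);
    while (!done())
    {
        byte[] block = client.Receive(ref remote);   // Receive returns the datagram's own buffer
        lock (table)
        {
            table.Add(block);                        // consumer thread drains this list
        }
    }
}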
If you think that the UdpClient.Receive method is a performance bottleneck, create a Diagnostics.Stopwatch object and wrap the call with Stopwatch.Start/Stopwatch.Stop calls. Keep track of those timings to see what the actual time cost is; for example:
var tracker = new List<double>();
var stopwatch = new System.Diagnostics.Stopwatch();
// do some stuff, then enter the while (true) loop
stopwatch.Restart();   // Reset() alone never starts the timer
client.Receive(ref anyIP);
stopwatch.Stop();
tracker.Add(stopwatch.Elapsed.TotalMilliseconds);   // TotalMilliseconds is a property
You can write the contents of tracker to a file at the end of a run, or use the debugger (and maybe LINQ) to look at the data. This will tell you exactly how much time is spent in Receive and whether you are hitting any "pauses" from, say, the GC.
The second issue seems to be with the amount of work you are trying to do in your consumer thread (the thread processing the data). If you are doing a lot of work for each data block read, then you might want to consider spawning multiple threads that each work on a different block. You could have a "Dispatcher" thread that reads a block and sends it to one of a pool of worker threads. The Dispatcher thread would watch the growth of Table and do whatever is considered appropriate if it gets too "large".
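One possible shape for that dispatcher/worker split, sketched with a BlockingCollection hand-off; all names here are illustrative, not taken from the code above.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch: a pool of workers consumes blocks that the dispatcher thread adds to the queue.
static void StartWorkers(BlockingCollection<byte[]> queue, int workerCount, Action<byte[]> processBlock)
{
    for (int i = 0; i < workerCount; i++)
    {
        Task.Factory.StartNew(() =>
        {
            foreach (byte[] block in queue.GetConsumingEnumerable())
            {
                processBlock(block);   // heavy per-block work runs off the receive thread
            }
        }, TaskCreationOptions.LongRunning);
    }
}
// The dispatcher thread would call queue.Add(block) for each block it reads,
// and queue.CompleteAdding() when the feed is finished.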
You might want to consider breaking this into multiple questions as SO is more tuned to tightly targeted questions and answers. First try to establish if Receive really is the problem by comparing the actual times to the required or expected times. How many UDP packets per second do you need to handle? How many are being handled? etc.
Apologies if my 'lingo' doesn't make sense... I'm fairly new to this and coding!
I am working on a project which involves a RFID reader and a Bluetooth module communicating with a C# windows form.
The com port event handler sends the RFID tag's unique ID continuously. Is there a way for it to be sent just once?
Is there a way for the program to just receive the ID once, so it can be processed; as opposed to receiving the ID numerous times.
Thanks in advance! :)
My code so far is as follows.
I have the serial port open from somewhere else
private void port_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    string data = serialPort.ReadExisting(); // read what came from the RFID reader
    if (data.Length > 9) // check if the string is longer than 9 characters
    {
        CODE = data.Substring(0, 9); // if it is, keep only the first 9 characters
    }
    else
    {
        CODE = data; // if less than 9 characters, use however many it gets
    }
}
Don't use ReadExisting. Instead check whether there are 9 bytes yet.
If not, return immediately.
If yes, read only 9 and leave the others for the next event.
You probably should have some resynchronization logic also.
Also, received data needs to be in a byte[], not a string. The Microsoft-provided serial port class always leads people to use the wrong approach.
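A rough sketch of that approach; the 9-byte frame length and the serialPort field come from the question, while ProcessTag is a hypothetical handler for a single tag read.

using System.IO.Ports;

// Sketch: wait until a full 9-byte tag is buffered, then read exactly those 9 bytes.
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    if (serialPort.BytesToRead < 9)
        return;                          // not a full tag yet; wait for the next event

    byte[] tag = new byte[9];
    serialPort.Read(tag, 0, 9);          // read one tag, leave any extra bytes for later
    // resynchronization logic (e.g. scanning for a known start byte) would go here
    ProcessTag(tag);                     // hypothetical method that handles a single tag
}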
I've got a little application written in C# that listens on a SerialPort for information to come in. The information comes in as: STX + data + ETX + BCC. We then calculate the BCC of the transmission packet and compare. The function is:
private bool ConsistencyCheck(byte[] buffer)
{
    byte expected = buffer[buffer.Length - 1];
    byte actual = 0x00;
    for (int i = 1; i < buffer.Length - 1; i++)
    {
        actual ^= buffer[i];
    }

    if ((expected & 0xFF) != (actual & 0xFF))
    {
        if (AppTools.Logger.IsDebugEnabled)
        {
            AppTools.Logger.Warn(String.Format("ConsistencyCheck failed: Expected: #{0} Got: #{1}", expected, actual));
        }
    }
    return (expected & 0xFF) == (actual & 0xFF);
}
And it seems to work, more or less. It accurately excludes the STX and the BCC and accurately includes the ETX in its calculation. It works a very large percentage of the time; however, we have at least two machines we are running this on, both Windows 2008 64-bit, on which the BCC calculation NEVER adds up. Pulling from a recent log, in one case a BCC of 20 was sent and I calculated 16, and in another 11 was sent and I calculated 27.
I'm absolutely stumped as to what is going on here. Is there perhaps a 64 bit or Windows 2008 "gotcha" I'm missing here? Any help or even wild ideas would be appreciated.
EDIT:
Here's the code that reads the data in:
private void port_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    // Retrieve number of bytes in the buffer
    int bytes = serialPort.BytesToRead;

    // Create a byte array to hold the awaiting data
    byte[] received = new byte[bytes];

    // Read the data and store it
    serialPort.Read(received, 0, bytes);
    DataReceived(received);
}
And the DataReceived() function takes that data and appends it to a global StringBuilder object. It then stays as a StringBuilder until it's passed to the various functions, at which point .ToString() is called on it.
EDIT2: Changed the code to reflect my altered routines that operate on bytes/byte arrays rather than strings.
EDIT3: I still haven't figured this out, and I've gotten more test data with completely inconsistent results (the amount I'm off from the sent checksum varies each time, with no pattern). It feels like I'm just calculating the checksum wrong, but I don't know how.
The buffer is defined as a String, while I suspect the data you are transmitting is bytes. I would recommend using byte arrays (even if you are sending ASCII/UTF/whatever encoding). Then, after the checksum is valid, convert the data to a string.
Computing a BCC is not standard but "customer defined". We program interfaces for our customers and have come across many different algorithms: sum, XOR, masking, leaving out the STX, the ETX, or both, or leaving out all known bytes.
For example, a packet structure might be "STX, machine code, command code, data, ..., data, ETX, BCC", and the calculation of the BCC is (customer specified!) "binary sum of all bytes from the command code to the last data byte, inclusive, all masked with 0xCD". That is, we first add all the variable bytes. It makes no sense to add the STX, ETX, or machine code: if those bytes do not match, the frame is discarded anyway, since their values are checked as they arrive to make sure the frame starts and ends correctly and is addressed to the receiving machine. In this case we only BCC the bytes that can change in the frame, which also saves time, since we often work with slow 4- or 8-bit microcontrollers. And caution: this example sums the bytes rather than XORing them; it was just an example, and other customers want something else. Second, once we have the sum (which can be 16 bits if it is not truncated during the addition), we mask it (bitwise AND) with the key (in this example 0xCD).
This kind of scheme is frequently used in all kinds of closed systems, for example ATMs (connecting a serial keyboard to an ATM), for protection reasons, on top of encryption and other measures. So you really have to check (read: "crack") how your two machines are computing their (non-standard) BCCs.
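To make that example concrete, here is a small sketch of that particular customer-specified variant (sum, then mask with 0xCD); the frame layout and offsets are taken from the example above and are assumptions, not your protocol.

// Sketch of the example BCC: sum of all bytes from the command code through the
// last data byte, masked (bitwise AND) with 0xCD.
// Assumed frame layout: [STX][machine code][command code][data ... data][ETX][BCC]
static byte ComputeExampleBcc(byte[] frame)
{
    int sum = 0;
    for (int i = 2; i <= frame.Length - 3; i++)   // skip STX and machine code; stop before ETX and BCC
    {
        sum += frame[i];                          // this variant sums, it does not XOR
    }
    return (byte)(sum & 0xCD);                    // mask with the customer-specified key
}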
Make sure you have the port set to accept null bytes somewhere in your port setup code. (This may be the default value; I'm not sure.)
port.DiscardNull = false;
Also, check the type of event arriving at the serial port, and accept only data:
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    if (e.EventType == SerialData.Chars)
    {
        // Your existing code
    }
}