What I'm trying to do is:
private Queue<Array[]> queue = new Queue<Array[]>(10);
queue.Enqueue({ bufferarray, networkstream});
I'm not sure if this is even possible. Alternatively, if I use two queues with the same parameters and always call them one after the other, will I always pull matching values?
Edit for clarification: I'm trying to enqueue the received bytes of a TCP stream and the TCP stream itself, either into one queue, or into two different queues if I will still dequeue matching values.
If you have two queues and always enqueue the corresponding values on both, you can dequeue from both and the values will match, because the queues behave the same way: you always get the oldest element of each queue.
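If you would rather keep a single queue, one option is to enqueue the two values as one item, so a single Dequeue always returns the matching pair. Here is a minimal sketch; the bufferArray/networkStream parameters and the wrapper class are illustrative, not taken from your code:

using System;
using System.Collections.Generic;
using System.Net.Sockets;

class PairQueueExample
{
    // Sketch: pair the received bytes with the stream they came from in one queue item.
    static void Demo(byte[] bufferArray, NetworkStream networkStream)
    {
        var queue = new Queue<Tuple<byte[], NetworkStream>>(10);

        // One item holds both values, so they can never get out of step.
        queue.Enqueue(Tuple.Create(bufferArray, networkStream));

        var item = queue.Dequeue();
        byte[] buffer = item.Item1;        // the received bytes
        NetworkStream stream = item.Item2; // the stream they were read from
    }
}

With two separate queues, the pairing described above holds as long as every producer enqueues to both queues in the same order and in one atomic step (for example under the same lock).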
I understand from the Microsoft docs that during the first Peek() operation, any one of the available message brokers responds and sends its oldest message. Then on subsequent Peek() operations, we can traverse across the partitions to peek every message with an increased sequence number.
My question is: during the very first Peek() operation, I will get a message from whichever partition responds first. Is there a guarantee that I can peek all the messages in the queue?
In a much simpler way, there are three Partitions:
Partition "A" has 10 messages with sequence number from 1 to 10.
Partition "B" has 10 messages with sequence number from 11 to 20.
Partition "C" has 10 messages with sequence number from 21 to 30.
Now if I perform a Peek() operation and Partition "B" responds first, the first message that I'll get is the message with sequence number 11. The next peek operation will look for a message with an incremented sequence number. Won't I miss out on the messages from Partition "A", which has sequence numbers 1-10 that the peek operation can never reach, since it always searches for the incremented sequence number?
UPDATE
QueueClient queueClient = messagingFactory.CreateQueueClient("QueueName", ReceiveMode.PeekLock);
BrokeredMessage message = null;
while (iteration < messageCount)
{
    // According to the docs, Peek returns the oldest message from any responding broker,
    // and subsequent iterations peek the message with the next higher sequence number.
    message = queueClient.Peek();
    if (message == null)
        break;
    Console.WriteLine(message.SequenceNumber);
    iteration++;
}
Is there a guarantee that I can browse all the messages of a partitioned queue using the snippet above?
There is no guarantee that the returned message is the oldest one across all partitions.
It therefore depends on which message broker responds first; the oldest message from that partition will be returned. There is no general rule as to which partition will respond first in your example, but it is guaranteed that the oldest message from that partition is returned first.
If you want to retrieve messages by sequence number, use the overloaded Peek(sequenceNumber) method; see: https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-browsing
For partitioned entities, the sequence number is issued relative to the partition.
[...]
The SequenceNumber value is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its internal identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier.
(https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sequencing)
So you cannot compare sequence numbers across partitions to see which one is older.
As an example, I just created a partitioned queue and put a couple of messages into two partitions (in this order):
1. Partition 1, SequenceNumber 61924494876344321
2. Partition 2, SequenceNumber 28991922601197569
3. Partition 1, SequenceNumber 61924494876344322
4. Partition 1, SequenceNumber 61924494876344323
5. Partition 2, SequenceNumber 28991922601197570
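Based on the quoted statement that the topmost 16 bits of the 64-bit sequence number hold the partition identifier, you can make the split visible by shifting. This is a sketch derived from that quote, not from a documented API; the values in the comments follow from the numbers listed above:

long seq1 = 61924494876344321;  // message 1 above (Partition 1)
long seq2 = 28991922601197569;  // message 2 above (Partition 2)

long partition1 = seq1 >> 48;          // 220 (topmost 16 bits)
long partition2 = seq2 >> 48;          // 103 (topmost 16 bits)
long offset1 = seq1 & 0xFFFFFFFFFFFF;  // 1   (lower 48 bits)
long offset2 = seq2 & 0xFFFFFFFFFFFF;  // 1   (lower 48 bits)

Console.WriteLine($"{partition1}/{offset1} and {partition2}/{offset2}"); // 220/1 and 103/1

This also makes the point concrete: 28991922601197569 is numerically smaller than 61924494876344321 even though it was enqueued later, so comparing raw sequence numbers across partitions tells you nothing about which message is older.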
Browse/Peek messages: Available only in the older WindowsAzure.ServiceBus library. PeekBatch does not always return the number of messages specified in the MessageCount property. There are two common reasons for this behavior. One reason is that the aggregated size of the collection of messages exceeds the maximum size of 256 KB. Another reason is that if the queue or topic has the EnablePartitioning property set to true, a partition may not have enough messages to complete the requested number of messages. In general, if an application wants to receive a specific number of messages, it should call PeekBatch repeatedly until it gets that number of messages, or there are no more messages to peek.
(https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-partitioning, Emphasis added)
As such, you should be able to repeatedly call Peek / PeekBatch to eventually get all the messages. At least, if you use the official SDKs.
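For example, with the older WindowsAzure.ServiceBus client (Microsoft.ServiceBus.Messaging namespace) that repeated-peek loop could look roughly like the sketch below; it is untested and only meant to show the shape of the loop:

using System;
using System.Collections.Generic;
using Microsoft.ServiceBus.Messaging;

// Sketch: browse an entire partitioned queue by calling PeekBatch until it returns nothing.
static void BrowseAllMessages(QueueClient queueClient)
{
    while (true)
    {
        // PeekBatch may return fewer messages than requested (size limit, or a partition
        // running low), so only an empty result means we are done.
        IEnumerable<BrokeredMessage> batch = queueClient.PeekBatch(100);
        bool any = false;
        foreach (BrokeredMessage message in batch)
        {
            any = true;
            Console.WriteLine(message.SequenceNumber);
        }
        if (!any)
            break;
    }
}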
Is it possible to get TransformManyBlock to send intermediate results to the next step as they are created, instead of waiting for the entire IEnumerable<T> to be filled?
All testing I've done shows that TransformManyBlock only sends a result to the next block when it is finished; the next block then reads those items one at a time.
It seems like basic functionality but I can't find any examples of this anywhere.
The use case is processing chunks of a file as they are read. In my case I need a certain number of lines before I can process anything, so a direct line-by-line stream won't work.
The kludge I've come up with is to create two pipelines:
a "processing" dataflow network that processes the chunks of data as they become available, and
a "producer" dataflow network that ends where the file is broken into chunks, which are then posted to the start of the "processing" network that actually transforms the data.
The "producer" network needs to be seeded with the starting point of the "processing" network.
This is not a good long-term solution, since additional processing options will be needed and it's not flexible.
Is it possible to have any dataflow block type to send multiple intermediate results as created to a single input? Any pointers to working code?
You probably need to create your IEnumerables by using an iterator. This way an item will be propagated downstream after every yield return statement. The only problem is that yielding from lambda functions is not supported in C#, so you'll have to use a local function instead. Example:
var block = new TransformManyBlock<string, string>(filePath => ReadLines(filePath));

IEnumerable<string> ReadLines(string filePath)
{
    string[] lines = File.ReadAllLines(filePath);
    foreach (var line in lines)
    {
        yield return line; // Immediately offered to any linked block
    }
}
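To see the effect, you can link the block to a consumer and watch the lines arrive one at a time rather than in a single burst at the end. This is a sketch that continues from the block above; "input.txt" is just a placeholder path:

using System;
using System.Threading.Tasks.Dataflow;

// Each yielded line is pushed to the ActionBlock as soon as it is produced,
// not after the whole IEnumerable<string> has been enumerated.
var printer = new ActionBlock<string>(line => Console.WriteLine(line));
block.LinkTo(printer, new DataflowLinkOptions { PropagateCompletion = true });

block.Post("input.txt"); // placeholder path
block.Complete();
printer.Completion.Wait();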
I'm currently subscribed to a multicast UDP feed. It streams multiple messages, each about 80 bytes, in a single packet of at most 1000 bytes. As the packets come in, I parse them into objects and then store them in a dictionary.
Each packet I receive comes with a sequential number so that I know if I've dropped any packets.
After about 10k packets received, I start to drop packets here and there.
securityDefinition xyz = new securityDefinition(p1, p2, p3, p4, p5 /*...etc*/);
if (!secDefs.ContainsKey(securityID))
{
    secDefs.Add(securityID, xyz);  // THIS WILL CAUSE DROPS EVENTUALLY
    secDefs.Add(securityID, null); // THIS WORKS JUST FINE
}
else
{
    // A repeat definition is received and, assuming all
    // sequence numbers in the packets line up sequentially, I know I am done.
    // However, if there is a drop somewhere (a gap in the sequence numbers),
    // I know I am missing something.
}
securityDefinition is a class containing roughly 15 ints, 10 decimals and 5 strings (<10 characters each).
Is there a faster way to store these objects in real time that can keep up with the fast UDP feed? I have tried making securityDefinition a struct, storing the data in a DataTable, and adding the secDef to a List and a Queue; the same issue appears with all of them.
It seems the only bottleneck is putting the objects in the dictionary. Creating the object and checking the dictionary to see if it already exists seems fine.
EDIT:
To clarify a few things: the security definitions come in from a server in a loop. There are roughly 1,000,000 definitions. Once they have all been sent, they are sent again, over and over. When my program starts, I need to initialize all the definitions. Once I get a repeat, I know I am done and can close the connection. However, if I receive a packet with sequence number 1 and the next packet has sequence number 3, I know I have dropped packet 2 and have no way of recovering it.
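As a minimal sketch of that gap check, with made-up names (lastSeq for the previously seen sequence number, ParseSequenceNumber for whatever pulls the number out of the packet header):

int seq = ParseSequenceNumber(pkt); // hypothetical parser for the packet header
if (lastSeq >= 0 && seq != lastSeq + 1)
{
    // Packets lastSeq+1 .. seq-1 were dropped and cannot be recovered from this feed.
    Console.WriteLine($"Dropped {seq - lastSeq - 1} packet(s) between {lastSeq} and {seq}");
}
lastSeq = seq;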
ConcurrentQueue<byte[]> pkts = new ConcurrentQueue<byte[]>();

// IN THE RECEIVER THREAD...
void ProductDefinitionReceiver()
{
    while (!secDefsComplete)
    {
        byte[] data = new byte[1000];
        s.Receive(data);
        pkts.Enqueue(data);
    }
}

// IN A SEPARATE THREAD:
public void processPacketQueue()
{
    int dumped = 0;
    byte[] pkt;
    while (!secDefsComplete)
    {
        while (pkts.TryDequeue(out pkt))
        {
            if (!secDefsComplete)
            {
                // processPkt includes parsing and inserting the secDef object into the dictionary.
                processPkt(pkt);
            }
            else
            {
                dumped++;
            }
        }
    }
    Console.WriteLine("Dumped: " + dumped);
}
I have used QuickFix/.NET for a long time, but in the last two days the engine appears to have sent messages out of sequence twice.
Here is an example, the 3rd message is out of sequence:
20171117-14:44:34.627 : 8=FIX.4.4 9=70 35=0 34=6057 49=TRD 52=20171117-14:44:34.622 56=SS 10=208
20171117-14:44:34.635 : 8=FIX.4.4 9=0070 35=0 34=6876 49=SS 56=TRD 52=20171117-14:44:34.634 10=060
20171117-14:45:04.668 : 8=FIX.4.4 9=224 35=D 34=6059 49=TRD 52=20171117-14:45:04.668 56=SS 11=AGG-171117T095204000182 38=100000 40=D 44=112.402 54=2 55=USD/XXX 59=3 60=20171117-09:45:04.647 278=2cK-3ovrjdrk00X1j8h03+ 10=007
20171117-14:45:04.668 : 8=FIX.4.4 9=70 35=0 34=6058 49=TRD 52=20171117-14:45:04.642 56=SS 10=209
I understand that the QuickFix logger is not in a separate thread.
What could cause this to happen?
The message sequence numbers are generated using the GetNextSenderMsgSeqNum method in QuickFIX/n, which uses locking:
public int GetNextSenderMsgSeqNum()
{
    lock (sync_) { return this.MessageStore.GetNextSenderMsgSeqNum(); }
}
In my opinion, the messages are generated in sequence and your application is displaying them in a different order.
In some situations the sender and receiver get out of sync: the receiver expects a different sequence number, and it tells the counterparty which sequence number it expected.
In that case, the sequence number can be changed to the expected one, either via the method call that updates the sequence number, or by going to the store folder, opening the file with the .seqnums extension, and updating the sequence numbers there.
I hope this will help.
As the datetime is exactly the same on both messages, this may be a sorting problem. This is common with any sorted list where the index is identical for two different items. If this were within your own code, I would suggest resolving it by including an extra element as part of the key, such as a sequence number.
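If you do end up re-ordering on your side, a minimal sketch of that idea is below: order by timestamp and break ties with MsgSeqNum (tag 34). The FixEntry type and its fields are made up for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical view of a parsed FIX message, just enough to sort on.
class FixEntry
{
    public DateTime SendingTime; // tag 52
    public int MsgSeqNum;        // tag 34
}

static IEnumerable<FixEntry> Reorder(IEnumerable<FixEntry> entries) =>
    entries.OrderBy(e => e.SendingTime) // identical timestamps...
           .ThenBy(e => e.MsgSeqNum);   // ...are broken by sequence number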
Multiple messages sent by QuickFix with identical timestamps may be sent out of sequence.
A previous answer on StackOverflow suggested re-ordering them on the receiving side, but was not accepted:
QuickFix - messages out of sequence
If you decide to limit yourself to one message per millisecond, say with a sleep() call in between sends, be sure to increase your process's scheduling priority: https://msdn.microsoft.com/en-us/library/windows/desktop/ms685100(v=vs.85).aspx
You normally get a very long sleep even though you asked for only one millisecond, but I've gotten roughly 1-2 ms with ABOVE_NORMAL_PRIORITY_CLASS. (Windows 10)
You might try to disable Nagle's algorithm, which aggregates multiple TCP messages together and sends them at once. Nagle in and of itself can't cause messages to be sent out of order, but QuickFix may be manually buffering the messages in some weird way. Try telling QuickFix to send them immediately with SocketNodelay: http://quickfixn.org/tutorial/configuration.html
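In a QuickFIX/n session configuration file (the format documented at the link above) that setting would look something like the snippet below; treat it as a sketch and merge it into your own settings file:

[DEFAULT]
# Send each message immediately instead of letting the OS batch small writes (Nagle).
SocketNodelay=Y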
I'm making a simple client application in C#, and have reached a problem.
The server application sends a string in the format of "<number> <param> <param>" etc. In other words, the first symbol is an integer, and the rest are whatever, all are separated by one space each.
The problem I get when reading this string is that my program first reads a string containing just the <number>, and then the next time I read I get the rest of the message.
For example, if I were to do a writeline on what I receive, it would look like this:
(if he sends "1 0 0 0")
1
0 0 0
(EDIT: The formatting doesn't seem to permit this. The 1 is on a row of its own, the rest are supposed to be on the row below, including the space preceding the first 0)
I've run out of ideas how to fix this. Here's the method (I commented out some stuff I tried):
http://pastebin.com/0bXC9J2f
EDIT (again): I forgot to mention that it seems to work just fine when I'm debugging and stepping through everything, so I can't find the source of the problem that way.
TCP is stream based and not message based. One Read can contain any of the following alternatives:
A teeny weeny part of a message
Half a message
Exactly one message
One and a half messages
Two messages
Thus you need some kind of method to determine whether a complete message has arrived. The most common methods are:
Add a footer (for instance an empty line) which indicates end of message
Add a fixed length header containing the length of the message
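A minimal sketch of the second approach, assuming a 4-byte length prefix in the machine's byte order (a convention you would have to agree on with the server):

using System;
using System.IO;
using System.Net.Sockets;

// Read exactly one length-prefixed message, looping because a single
// Read may return only part of the requested data.
static byte[] ReadMessage(NetworkStream stream)
{
    byte[] header = ReadExactly(stream, 4);
    int length = BitConverter.ToInt32(header, 0); // 4-byte length prefix
    return ReadExactly(stream, length);
}

static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed before a full message arrived.");
        offset += read;
    }
    return buffer;
}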
If your protocol is straight TCP, then you cannot send messages, strings or anything else except octet (byte) streams. Does your 'string' have a null at the end? If so, you need to append received data until the null arrives; then you have your message.
If this is your problem, then you should code your protocol so that it works no matter how many read calls are made on the socket, e.g. if a null-terminated string of [99 data bytes + #0] is sent by the server, your protocol should be able to assemble the correct string whether 100 bytes are returned in one call, 1 byte is received in each of 100 calls, or anything in between.
Rgds,
Martin
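A sketch of the assembly loop this answer describes: accumulate received bytes until the null terminator shows up, no matter how the reads are split. The carryOver list and the ASCII encoding are assumptions for the example:

using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;

// Returns one complete null-terminated string per call; leftover bytes stay in carryOver.
static string ReadNullTerminatedString(NetworkStream stream, List<byte> carryOver)
{
    byte[] chunk = new byte[1024];
    while (true)
    {
        int nul = carryOver.IndexOf((byte)0);
        if (nul >= 0)
        {
            string message = Encoding.ASCII.GetString(carryOver.ToArray(), 0, nul);
            carryOver.RemoveRange(0, nul + 1); // keep any bytes of the next message
            return message;
        }

        int read = stream.Read(chunk, 0, chunk.Length);
        if (read == 0)
            return null; // connection closed
        for (int i = 0; i < read; i++)
            carryOver.Add(chunk[i]);
    }
}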