I did not find a way to get the available memory using IServer, so instead I am trying to do it with IDatabase.ExecuteAsync("MEMORY STATS") and then processing the result.
In the Redis console one can write MEMORY STATS and get an array output - https://redis.io/commands/memory-stats.
This post says I can use ExecuteAsync to pass raw commands - Executing Redis Console commands in c#
Yet when I call IDatabase.ExecuteAsync("MEMORY STATS") I get the following error:
"RedisServerException: ERR unknown command `MEMORY STATS`, with args beginning with:".
You should call IDatabase.ExecuteAsync("MEMORY", "STATS").
This is because there is really only a MEMORY command, and STATS, USAGE, etc. are treated as its first argument - even though the documentation presents MEMORY STATS as a single command.
So, translated to RESP2, the server expects two separate strings, not a single string with a space in the middle.
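Translated into StackExchange.Redis, a minimal sketch might look like this (assumes a connected ConnectionMultiplexer named redis; the helper name GetMemoryStatsAsync is illustrative, not part of the library):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using StackExchange.Redis;

// Sketch only: "MEMORY" is the command, "STATS" is its first argument.
static async Task<Dictionary<string, RedisResult>> GetMemoryStatsAsync(
    ConnectionMultiplexer redis)
{
    IDatabase db = redis.GetDatabase();
    RedisResult reply = await db.ExecuteAsync("MEMORY", "STATS");

    // The reply is a flat array of alternating name/value entries.
    var entries = (RedisResult[])reply;
    var stats = new Dictionary<string, RedisResult>();
    for (int i = 0; i < entries.Length; i += 2)
        stats[(string)entries[i]] = entries[i + 1];
    return stats;
}
```

stats["peak.allocated"], for example, would then give the peak allocated bytes reported by the server.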
I am programming a C# application which will be used to program and test STM32 microcontrollers during production. I would like to program and verify the chip, then write some configuration to the flash memory and finally set the read-out protection. As a backend I decided to use OpenOCD and its Tcl interface running at port 6666.
The problem: I am able to execute commands and get their results, but I don't know how to check whether a command was executed successfully. E.g. the reset command returns an empty string no matter whether the target is connected or not... Some other commands like mdw return data or an error string, but I am looking for a generic way to check whether a command succeeded.
Thank you for your ideas.
Assuming your Tcl code has a bit in its heart doing sendBack [eval $script], you'd change it to do this:
set code [catch {eval $script} result]
sendBack [list $code $result]
or even this:
set code [catch {eval $script} result options]
sendBack [list $code $result $options]
You'll need to unpack that list on the other side. The first element is the result code (0 for success, 1 for error; a few other values exist in theory, but you probably won't see them), the second is the result value or the error message, and the third (if you use the second snippet) is an options dictionary that can contain various things useful for debugging, including structured error codes and a stack trace.
Passing back the full result tuple is how you transfer the entire result from one context to another. A number of remote debugging tools for Tcl use pretty much exactly the same trick.
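On the C# side, unpacking the simplest form of that reply could be sketched like this. TclUnpack is a hypothetical helper; it only strips one level of brace quoting, and a reply containing nested braces or escapes would need a real Tcl list parser:

```csharp
using System;

// Hypothetical helper: split a "code result" reply from the Tcl server,
// where the result may be wrapped in one level of braces.
static (int Code, string Result) TclUnpack(string reply)
{
    int space = reply.IndexOf(' ');
    int code = int.Parse(reply.Substring(0, space));
    string rest = reply.Substring(space + 1);
    if (rest.StartsWith("{") && rest.EndsWith("}"))
        rest = rest.Substring(1, rest.Length - 2); // strip brace quoting
    return (code, rest);
}
```

A code of 0 then means the command succeeded; anything else means the result string is an error message.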
Is it possible to get TransformManyBlock to send intermediate results to the next step as they are created, instead of waiting for the entire IEnumerable<T> to be filled?
All the testing I've done shows that TransformManyBlock only sends results to the next block when it is finished; the next block then reads those items one at a time.
It seems like basic functionality but I can't find any examples of this anywhere.
The use case is processing chunks of a file as they are read. In my case a certain number of lines is needed before I can process anything, so a direct stream won't work.
The kludge I've come up with is to create two pipelines:
a "processing" dataflow network that processes the chunks of data as they become available
a "producer" dataflow network that ends where the file is broken into chunks, which are then posted to the start of the "processing" network that actually transforms the data
The "producer" network needs to be seeded with the starting point of the "processing" network.
This is not a good long-term solution, since additional processing options will be needed and it isn't flexible.
Is it possible for any dataflow block type to send multiple intermediate results, as they are created, to a single input? Any pointers to working code?
You probably need to create your IEnumerables with an iterator. This way an item is propagated downstream after every yield return statement. The only problem is that yielding from a lambda is not supported in C#, so you'll have to use a local function instead. Example:
var block = new TransformManyBlock<string, string>(filePath => ReadLines(filePath));
IEnumerable<string> ReadLines(string filePath)
{
    // File.ReadLines enumerates lazily, so lines stream through the block
    // instead of the whole file being loaded first (as ReadAllLines would).
    foreach (var line in File.ReadLines(filePath))
    {
        yield return line; // Immediately offered to any linked block
    }
}
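Hooking the block up to a consumer could then look like this (illustrative: requires the System.Threading.Tasks.Dataflow package, and the file path is a placeholder):

```csharp
using System;
using System.Threading.Tasks.Dataflow;

// Items arrive at the consumer one at a time, as each yield return
// executes - not after the whole file has been enumerated.
var printer = new ActionBlock<string>(line => Console.WriteLine(line));
block.LinkTo(printer, new DataflowLinkOptions { PropagateCompletion = true });

block.Post("data.txt"); // hypothetical input file
block.Complete();
await printer.Completion;
```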
I have used QuickFix/.NET for a long time, but in the last two days the engine appears to have sent messages out of sequence twice.
Here is an example; the 3rd message is out of sequence:
20171117-14:44:34.627 : 8=FIX.4.4 9=70 35=0 34=6057 49=TRD 52=20171117-14:44:34.622 56=SS 10=208
20171117-14:44:34.635 : 8=FIX.4.4 9=0070 35=0 34=6876 49=SS 56=TRD 52=20171117-14:44:34.634 10=060
20171117-14:45:04.668 : 8=FIX.4.4 9=224 35=D 34=6059 49=TRD 52=20171117-14:45:04.668 56=SS 11=AGG-171117T095204000182 38=100000 40=D 44=112.402 54=2 55=USD/XXX 59=3 60=20171117-09:45:04.647 278=2cK-3ovrjdrk00X1j8h03+ 10=007
20171117-14:45:04.668 : 8=FIX.4.4 9=70 35=0 34=6058 49=TRD 52=20171117-14:45:04.642 56=SS 10=209
I understand that the QuickFix logger is not in a separate thread.
What could cause this to happen?
The message sequence numbers are generated by the GetNextSenderMsgSeqNum method in QuickFIX/n, which uses locking:
public int GetNextSenderMsgSeqNum()
{
    lock (sync_) { return this.MessageStore.GetNextSenderMsgSeqNum(); }
}
In my opinion, the messages are generated in sequence and your application is displaying them in a different order.
In some situations the sender and receiver get out of sync, and the receiver expects a different sequence number; the counterparty then tells the other side which sequence number it expected.
In that case, the sequence number can be reset to the expected value either by calling the method that updates the sequence number, or by going to the store folder, opening the file with the .seqnums extension, and editing the numbers there.
I hope this helps.
As the datetime is exactly the same on both messages, this may be a sorting problem. This is common in any sorted list where the key is identical on two different items. If this were within your own code, I would suggest resolving it by including an extra element as part of the key, such as a sequence number.
Multiple messages sent by QuickFix with identical timestamps may be sent out of sequence.
A previous answer on StackOverflow suggested re-ordering them on the receiving side, but was not accepted:
QuickFix - messages out of sequence
If you decide to limit yourself to one message per millisecond, say with a sleep() call between sends, be sure to increase your process's scheduling priority: https://msdn.microsoft.com/en-us/library/windows/desktop/ms685100(v=vs.85).aspx
You normally get a much longer sleep than the one millisecond you asked for, but I've gotten roughly 1-2 ms with ABOVE_NORMAL_PRIORITY_CLASS (Windows 10).
You might try to disable Nagle's algorithm, which aggregates multiple TCP messages together and sends them at once. Nagle in and of itself can't cause messages to be sent out of order, but QuickFix may be manually buffering the messages in some weird way. Try telling QuickFix to send them immediately with SocketNodelay: http://quickfixn.org/tutorial/configuration.html
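For reference, SocketNodelay is set per session in the QuickFIX/n configuration file. A sketch, with the CompIDs taken from the log above and everything else a placeholder:

```
[DEFAULT]
ConnectionType=initiator
SocketNodelay=Y

[SESSION]
BeginString=FIX.4.4
SenderCompID=TRD
TargetCompID=SS
```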
The machine configuration is 4 CPUs and 16 GB RAM, and we are trying to process 800MB and 300MB XML files. Sometimes the .NET Saxon API throws an OutOfMemoryException with the stack trace below. Looking at the perf stats for the previous few hours, the server seems to have 10GB of free memory. The code below is run in parallel tasks using Task.Run(). Please advise.
DocumentBuilder documentBuilder = processor.NewDocumentBuilder();
documentBuilder.IsLineNumbering = true;
documentBuilder.WhitespacePolicy = WhitespacePolicy.PreserveAll;
XdmNode _XdmNode = documentBuilder.Build(xmlDocumentToEvaluate);
System.Exception: Error in ExecuteRules method ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at net.sf.saxon.tree.tiny.TinyTree.condense(Statistics )
at net.sf.saxon.tree.tiny.TinyBuilder.close()
at net.sf.saxon.event.ProxyReceiver.close()
at net.sf.saxon.pull.PullPushCopier.copy()
at net.sf.saxon.event.Sender.sendPullSource(PullSource , Receiver , ParseOptions )
at net.sf.saxon.event.Sender.send(Source source, Receiver receiver, ParseOptions options)
at net.sf.saxon.Configuration.buildDocument(Source source, ParseOptions parseOptions)
at net.sf.saxon.Configuration.buildDocument(Source source)
at Saxon.Api.DocumentBuilder.Build(XmlReader reader)
at Saxon.Api.DocumentBuilder.Build(XmlNode source)
With an 800MB input file I think you could start hitting limits other than the actual amount of heap memory available, for example the maximum size of an array or a string. This could be the effect you are seeing. One way the TinyTree saves space is by using a small number of large objects rather than a large number of small objects, so it could trigger this effect.
The TinyTree.condense() method (which is where it is failing) is called at the end of tree construction and attempts to reclaim unused space in the arrays used for the TinyTree data structure. This is done by allocating smaller arrays up to the actual size used, and copying data across. So temporarily it needs additional memory, and this is where the failure is occurring. Looking at the code, there's actually an opportunity to reduce the amount of temporary memory needed.
If there are a lot of repeated text or attribute values in your data then it could be worth using the "TinyTreeCondensed" option which attempts to common up such values. But this could be counter-productive if there isn't such duplication, because of the space used for indexing during the tree building process.
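Selecting that option through the .NET API is a one-line change on the builder from the question's code (a sketch against the Saxon.Api surface; check the exact enum name for your Saxon version):

```csharp
using Saxon.Api;

DocumentBuilder documentBuilder = processor.NewDocumentBuilder();
// Ask for the condensed tree, which shares storage between duplicate
// text and attribute values at the cost of extra indexing while building.
documentBuilder.TreeModel = TreeModel.TinyTreeCondensed;
XdmNode node = documentBuilder.Build(xmlDocumentToEvaluate);
```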
With data this large, I think it's a good idea to examine alternative strategies. For example: XML databases; streamed processing; splitting the file into multiple files; document projection. It's impossible to advise on this without knowing the big picture about what problem you are trying to solve.
I would like to extract the section of console output that occurs between two specific points in a program and store that into a variable. This would be executed in a loop many times. There is no need for output to be echoed into the regular console (if that makes things more efficient).
i.e.
foreach (Procedure p in procedures) {
    BeginCapturingConsoleOutput();
    p.Execute();
    string procedureOutput = EndCapturingConsoleOutput();
}
The code on this page in MSDN does what I think you are looking for:
http://msdn.microsoft.com/en-us/library/16f09842.aspx
Basically, it sets the output stream to something that you define (in the case of the example, a file), performs some action, and at the end sets it back to the standard output stream.
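A minimal sketch of that pattern, shaped to match the Begin/End calls in the question (the method names are the question's, not a standard API):

```csharp
using System;
using System.IO;

static class ConsoleCapture
{
    static TextWriter _original;
    static StringWriter _buffer;

    public static void BeginCapturingConsoleOutput()
    {
        _original = Console.Out;      // remember the real console stream
        _buffer = new StringWriter();
        Console.SetOut(_buffer);      // redirect all Console.Write* calls
    }

    public static string EndCapturingConsoleOutput()
    {
        Console.SetOut(_original);    // restore the console
        return _buffer.ToString();
    }
}
```

Note this redirects Console.Out process-wide, so running procedures in parallel would interleave their captured output.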