Windows named pipe in Node.js (preferably shared memory) - C#

I am using a named pipe to share some data between 2 processes on Windows. One is a Node.js process and the other is a C# process. Here is a sample of the code I use in my Node.js process:
var net = require('net');
var PIPE_NAME = "mypipe";
var PIPE_PATH = "\\\\.\\pipe\\" + PIPE_NAME;
var L = console.log;

var server = net.createServer(function(stream) {
    L('Server: on connection');
    stream.on('data', function(c) {
        L('Server: on data:', c.toString());
    });
    stream.on('end', function() {
        L('Server: on end');
        server.close();
    });
    stream.write('Take it easy!');
});

server.on('close', function() {
    L('Server: on close');
});

server.listen(PIPE_PATH, function() {
    L('Server: on listening');
});
I use a NamedPipeClientStream in C# to read the data. Both sides run in a loop, so the Node.js process is the producer and the C# process is the consumer.
This works fine.
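For context, the C# consumer boils down to a read loop over a NamedPipeClientStream, roughly like this (simplified sketch; the buffer size is arbitrary):

using System;
using System.IO;
using System.IO.Pipes;

class PipeConsumer
{
    static void Main()
    {
        // Connect to the pipe created by the Node.js server ("mypipe" above).
        using (var client = new NamedPipeClientStream(".", "mypipe", PipeDirection.InOut))
        {
            client.Connect();
            using (var reader = new StreamReader(client))
            {
                var buffer = new char[4096];
                int read;
                // Keep reading until the producer closes the pipe.
                while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                {
                    Console.WriteLine(new string(buffer, 0, read));
                }
            }
        }
    }
}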
But sometimes the C# loop hangs, and at that point I want the Node.js process to overwrite the old data with the new data. I was wondering if I can specify a maximum size for the pipe (the one I create in Node.js) or a timeout for the data, but I couldn't find such options in the standard documentation.
If it cannot be solved this way, shared memory would be another route to solve the problem, but I couldn't find any stable shared memory library for Node.js that works nicely on Windows (and I don't have much time to write one right now). I need some pointers to move in the right direction.
Any advice is appreciated. Thanks.
EDIT: I would really prefer to implement the above using shared memory, since I need to share a large amount of data at a fast rate and I need to tune for performance. Any pointers on how to implement it?

I figured out a way to do this with the 'drain' event on the Node.js writable stream: stream.write() returns false once the stream's internal buffer (highWaterMark) is full, and the 'drain' event fires when it has been flushed, so the producer can stop queuing data and decide to drop or overwrite stale data instead.

Related

Fastest way to retrieve filtered remote event logs

I need to retrieve a few event logs (with specific IDs) from the Security event log of a handful of servers.
I've parallelized the server loop and it works fine and speeds things up, but a couple of the servers have huge retention policies (exported EVTX files for the full logs are over 10 GB).
I get the logs using a loop like this:
var eventIds = new[] { 1, 2, 3, 4 };
var eventIdQueryStr = string.Join(" or ", eventIds.Select(x => $"EventID={x}"));
// e.g. "*[System[TimeCreated[@SystemTime >= '...' and @SystemTime < '...']]] and *[System[EventID=1 or EventID=2 or ...]]"
var queryXPath = $"*[System[TimeCreated[@SystemTime >= '{dateFrom:s}' and @SystemTime < '{dateTo:s}']]] and *[System[{eventIdQueryStr}]]";

using var session = new EventLogSession(
    serverName,
    domain,
    user,
    password,
    SessionAuthentication.Default);

var eventsQuery = new EventLogQuery("Security", PathType.LogName, queryXPath) { Session = session };

using (var logReader = new EventLogReader(eventsQuery))
{
    for (var eventDetail = logReader.ReadEvent();
         eventDetail != null;
         eventDetail = logReader.ReadEvent())
    {
        // event list is a `ConcurrentBag<EventRecord>`
        _eventList.Add(eventDetail);
        /* other stuff for showing progress, not relevant to the speed of the process */
    }
}

/* Parsing of the eventList here... irrelevant to the question */
This works, but it's extremely slow: I get around 1 million records (filtered with the XPath query) per hour per server over a VPN (not counting the parsing, which is why it's not relevant here).
I understand the event log on Windows is not an indexed database, so the query doesn't really speed things up (it just filters them), but this is the fastest I could get with several different techniques, including dumping the whole log on the remote server, copying the file, and/or parsing it directly over the network.
Is there anything I'm missing which could speed it up?
I've tested this on both .NET Core 3.0 and .NET Framework 4.8 with no difference in results whatsoever. Using the wevtutil command line with the same XPath query (using /q:"<query>") gives similar performance, but I can't accept that this is the fastest it can be done.
The servers with big retention policies in particular are dual Xeon machines with LOTS of free CPU and RAM, so I'm quite positive this is not a hardware performance problem.
I'd be grateful for any tips.
PS: I said the progress-showing parts are irrelevant because I tried removing them (in case outputting progress was the culprit), with no measurable difference in the speed of the process.

How to read text from 'simple' screenshot fast and effectively?

I'm working on a small personal application that should read some text (2 sentences at most) from a really simple Android screenshot. The text is always the same size, same font, and in approximately the same location. The background is very plain, usually a few shades of one color (think bright orange fading into a slightly darker orange). I'm trying to figure out what would be the best way (and most importantly, the fastest way) to do this.
My first attempt involved the IronOcr C# library, and to be fair, it worked quite well! But I've noticed a few issues with it:
It's not 100% accurate
Despite having a community/trial version, it sometimes throws exceptions telling you to get a license
It takes ~400ms to read a ~600x300 pixel image, which in the case of my simple image, I consider to be rather long
As strange as it sounds, I have a feeling that libraries like IronOcr and Tesseract may just be too advanced for my needs. To improve speed I have even written a piece of code to "threshold" my image first, making it completely black and white.
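The thresholding itself is nothing fancier than a per-pixel cutoff, roughly like this (simplified; the cutoff value is arbitrary, and GetPixel/SetPixel would be swapped for LockBits in anything speed-critical):

using System.Drawing;

static Bitmap Threshold(Bitmap source, int cutoff = 128)
{
    var result = new Bitmap(source.Width, source.Height);
    for (var y = 0; y < source.Height; y++)
    {
        for (var x = 0; x < source.Width; x++)
        {
            var p = source.GetPixel(x, y);
            // Average the channels and compare against the cutoff: bright pixels become white, the rest black.
            var luminance = (p.R + p.G + p.B) / 3;
            result.SetPixel(x, y, luminance > cutoff ? Color.White : Color.Black);
        }
    }
    return result;
}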
My current IronOcr settings look like this:
ImageReader = new AdvancedOcr()
{
    CleanBackgroundNoise = false,
    EnhanceContrast = false,
    EnhanceResolution = false,
    Strategy = AdvancedOcr.OcrStrategy.Fast,
    ColorSpace = AdvancedOcr.OcrColorSpace.GrayScale,
    DetectWhiteTextOnDarkBackgrounds = true,
    InputImageType = AdvancedOcr.InputTypes.Snippet,
    RotateAndStraighten = false,
    ReadBarCodes = false,
    ColorDepth = 1
};
And I could totally live with the results I've been getting using IronOcr, but the licensing exceptions ruin it. I also don't have $399 USD to spend on a private hobby project that won't even leave my own PC :(
But my main goal with this question is to find a better, faster or more efficient way to do this. It doesn't necessarily have to be an existing library, I'd be more than willing to make my own kind of letter-detection code that would work (only?) for screenshots like mine if someone can point me in the right direction.
I have researched this topic and the best solution I could find is Azure Cognitive Services. You can use the Computer Vision API to read text from an image; the official documentation covers it in detail.
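For what it's worth, the Read (OCR) flow with the C# SDK looks roughly like this. This is a sketch assuming the Microsoft.Azure.CognitiveServices.Vision.ComputerVision NuGet package; the endpoint, key and file name are placeholders:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

class AzureReadSample
{
    static async Task Main()
    {
        // Key and endpoint come from your own Computer Vision resource.
        var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials("<key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
        };

        using var stream = File.OpenRead("screenshot.png");

        // Start the asynchronous Read operation, then poll until it finishes.
        var headers = await client.ReadInStreamAsync(stream);
        var operationId = Guid.Parse(headers.OperationLocation.Split('/')[^1]);

        ReadOperationResult result;
        do
        {
            await Task.Delay(500);
            result = await client.GetReadResultAsync(operationId);
        }
        while (result.Status == OperationStatusCodes.Running ||
               result.Status == OperationStatusCodes.NotStarted);

        foreach (var page in result.AnalyzeResult.ReadResults)
            foreach (var line in page.Lines)
                Console.WriteLine(line.Text);
    }
}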
How fast does it have to be?
If you are using C#, I recommend the Google Cloud Vision API. You pay per request, but the first 1000 per month are free (check their pricing page). It does require a web request, but I find it to be very quick.
using Google.Cloud.Vision.V1;
using System;

namespace GoogleCloudSamples
{
    public class QuickStart
    {
        public static void Main(string[] args)
        {
            // Instantiates a client
            var client = ImageAnnotatorClient.Create();
            // Load the image file into memory
            var image = Image.FromFile("wakeupcat.jpg");
            // Performs text detection on the image file
            var response = client.DetectText(image);
            foreach (var annotation in response)
            {
                if (annotation.Description != null)
                    Console.WriteLine(annotation.Description);
            }
        }
    }
}
I find it works well for pictures and scanned documents, so it should work perfectly for your situation. The SDK is also available in other languages, such as Java, Python, and Node.

Dynamically send n audio sources (concurrently) to specific channels on an ASIO device

So I have been having some fun exploring the NAudio library.
However, I'm not sure whether I am missing something when using the ASIO class. Basically, my requirements are the following:
Dynamically output (mono) sources to an ASIO device, each source to a dedicated channel (later on I will probably be working with 64 channels)
Be free to 'stream' the n audio sources to the device at any time during the session (multiple sources simultaneously)
Have the control over each channel
So in code I'd have something like:
...
WaveFileReader source1 = new WaveFileReader(pathToMyFile1);
WaveFileReader source2 = new WaveFileReader(pathToMyFile2);
WaveFileReader source3 = new WaveFileReader(pathToMyFile3);
...
WaveFileReader sourceN = new WaveFileReader(pathToMyFileN);
AsioOut asioOut = new AsioOut();
...
/* Now init asioOut... */
...
asioOut.Play();
...
/* now react on events, possibly within a multi-threaded environment */
/* and concurrently send each of these sources to a dedicated channel */
/* as required (as stated, possibly even many at the same time) */
...
So my Question basically is:
Can I, using one of the existing classes, achieve something like this? Or will I have to engineer my own implementation of one of the interfaces (ISampleProvider, IWaveProvider, etc.)? I'm pretty sure it will somehow work by going down an abstraction level.
Thanks for any Input on this!
The MultiplexingWaveProvider can do something close to what you want; it is described in the NAudio documentation. For 64-channel work you might find you need to write your own fine-tuned code, as performance could become an issue.
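Roughly, the wiring looks like this (a sketch assuming two mono WAV files and a 4-channel ASIO device; all inputs must share the same sample rate and bit depth):

using NAudio.Wave;

var source1 = new WaveFileReader("file1.wav");
var source2 = new WaveFileReader("file2.wav");

// Combine the mono inputs into one multi-channel stream.
var multiplexer = new MultiplexingWaveProvider(new IWaveProvider[] { source1, source2 }, 4);

// Route input channel 0 (source1) to output channel 2, and input channel 1 (source2) to output channel 3.
multiplexer.ConnectInputToOutput(0, 2);
multiplexer.ConnectInputToOutput(1, 3);

var asioOut = new AsioOut();
asioOut.Init(multiplexer);
asioOut.Play();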

PcapDotNet and Datalink type

I'm trying to manipulate some network captures (pcap format) using Pcap.net.
I'm opening the pcap file and creating the dumper with:
OfflinePacketDevice selectedDevice = new OfflinePacketDevice(pcapInFile);
using (PacketCommunicator PcapReader = selectedDevice.Open(655360, PacketDeviceOpenAttributes.Promiscuous, 1000))
{
    PacketDumpFile PcapWriter = PcapReader.OpenDump(pcapOutFile);
    PcapReader.ReceivePackets(count, PacketDispatcher);
}
And the PacketDispatcher would be something like:
private void PacketDispatcher(Packet packet)
{
    // Manipulate the packet
    PcapWriter.Dump(packet);
}
Everything is OK as long as the pcapInFile datalink type is Ethernet. But I have several captures without an Ethernet layer (raw IP), and I have to build a new Ethernet layer for them. In those captures the dumped datalink type stays the same as the pcapInFile (raw IP), and I want to change it to Ethernet...
If I store all the manipulated packets in an IEnumerable and dump them with:
PacketDumpFile.Dump(pcapOutFile, new PcapDataLink(1), Packets.Count(), Packets);
It works fine... But this is not very useful if you are dealing with files of several gigabytes...
Any idea?
Thanks!
Assuming the concern is about having all the packets in RAM, you can create an IEnumerable that doesn't keep everything in memory by using yield.
That way, as the dumper writes packets, it calls back into your iterator to produce the next item on demand.
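Something along these lines (a sketch; Manipulate() stands in for whatever per-packet rewriting you do, and the Open parameters mirror the question):

private IEnumerable<Packet> ReadAndFixPackets(string pcapInFile)
{
    OfflinePacketDevice device = new OfflinePacketDevice(pcapInFile);
    using (PacketCommunicator reader = device.Open(655360, PacketDeviceOpenAttributes.Promiscuous, 1000))
    {
        Packet packet;
        // ReceivePacket stops returning Ok once the capture file is exhausted.
        while (reader.ReceivePacket(out packet) == PacketCommunicatorReceiveResult.Ok)
        {
            yield return Manipulate(packet);
        }
    }
}

PacketDumpFile.Dump can then be given this enumerable instead of a fully materialized collection.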
Well, although this is not an answer but a workaround, and since nobody wrote a better solution, this is how I fixed it:
Use SharpPcap instead of Pcap.Net; then you can declare a reader and a writer this way:
public CaptureFileReaderDevice PcapReader;
public CaptureFileWriterDevice PcapWriter;
PcapReader = new CaptureFileReaderDevice(fileIn);
PcapReader.OnPacketArrival += packetDispatcher;
PcapReader.Capture();
In the packetDispatcher function:
RawCapture raw = new RawCapture(LinkLayers.Ethernet, e.Packet.Timeval, RebuildNullLinkLayer(e, offset));
CaptureHandler.PcapWriter.Write(raw);
And in the RebuildNullLinkLayer function you can add the Ethernet layer, modify whatever you want, etc.
Note that when you call the RawCapture constructor you can choose the link layer (my original issue with Pcap.Net...), so if you are parsing a raw IP capture, you can convert the packets to Ethernet.
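For illustration, RebuildNullLinkLayer can be as simple as prefixing a synthetic 14-byte Ethernet header to the raw IP bytes (a sketch; the zeroed MAC addresses and the fixed IPv4 EtherType are assumptions):

private static byte[] RebuildNullLinkLayer(CaptureEventArgs e, int offset)
{
    byte[] data = e.Packet.Data;
    // 14-byte Ethernet header: destination MAC (6), source MAC (6), EtherType (2).
    byte[] rebuilt = new byte[14 + data.Length - offset];
    rebuilt[12] = 0x08;   // EtherType 0x0800 = IPv4
    rebuilt[13] = 0x00;
    // Copy the original packet, skipping `offset` bytes of the original (non-Ethernet) link layer.
    Array.Copy(data, offset, rebuilt, 14, data.Length - offset);
    return rebuilt;
}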

Get item text from SysListView32

I am trying to get the text of the items in a SysListView32 in another app from C#.
I can get LVM_GETITEMCOUNT fine, but LVM_GETITEMW = 0x1000 + 13 always returns -1. How can I get the text from C#? I am new to this. Thanks very much!
ParenthWnd = FindWindow(ParentClass, ParentWindow);
if (!ParenthWnd.Equals(IntPtr.Zero))
{
    zWnd = FindWindowEx(ParenthWnd, zWnd, zClass, zWindow);
    if (!zWnd.Equals(IntPtr.Zero))
    {
        int user = SendMessage(zWnd, LVM_GETITEMCOUNT, 0, 0);
    }
}
You need to work harder to read and write the LVITEM memory, because the control is owned by another process: the LVITEM and the text buffer it points to must live in that process's address space, so you can't do this without VirtualAllocEx, WriteProcessMemory, ReadProcessMemory, etc.
The most commonly cited example of the techniques involved is the Code Project article "Stealing Program's Memory". Watch out for 32/64-bit gotchas.
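To make that concrete, the usual pattern is: allocate a block of memory inside the target process, write an LVITEM whose pszText points into that block, send LVM_GETITEMTEXTW, and read the text back. A rough sketch (the constants and the 512-character buffer are assumptions, trailing LVITEM fields are omitted, and the caller should be built with the same bitness as the target process):

using System;
using System.Runtime.InteropServices;
using System.Text;

static class ListViewReader
{
    const uint LVM_GETITEMTEXTW = 0x1000 + 115;   // LVM_FIRST + 115
    const uint LVIF_TEXT = 0x0001;
    const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, MEM_RELEASE = 0x8000, PAGE_READWRITE = 0x04;
    const uint PROCESS_VM_OPERATION = 0x0008, PROCESS_VM_READ = 0x0010, PROCESS_VM_WRITE = 0x0020;

    [StructLayout(LayoutKind.Sequential)]
    struct LVITEM
    {
        public uint mask;
        public int iItem;
        public int iSubItem;
        public uint state;
        public uint stateMask;
        public IntPtr pszText;
        public int cchTextMax;
        public int iImage;
        public IntPtr lParam;
        // iIndent and later fields omitted; they are not needed for LVIF_TEXT.
    }

    [DllImport("user32.dll")] static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint pid);
    [DllImport("user32.dll")] static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);
    [DllImport("kernel32.dll")] static extern IntPtr OpenProcess(uint access, bool inherit, uint pid);
    [DllImport("kernel32.dll")] static extern bool CloseHandle(IntPtr handle);
    [DllImport("kernel32.dll")] static extern IntPtr VirtualAllocEx(IntPtr hProc, IntPtr addr, int size, uint allocType, uint protect);
    [DllImport("kernel32.dll")] static extern bool VirtualFreeEx(IntPtr hProc, IntPtr addr, int size, uint freeType);
    [DllImport("kernel32.dll")] static extern bool WriteProcessMemory(IntPtr hProc, IntPtr addr, ref LVITEM buffer, int size, out IntPtr written);
    [DllImport("kernel32.dll")] static extern bool ReadProcessMemory(IntPtr hProc, IntPtr addr, byte[] buffer, int size, out IntPtr read);

    public static string GetItemText(IntPtr listViewHwnd, int itemIndex, int subItem)
    {
        GetWindowThreadProcessId(listViewHwnd, out uint pid);
        IntPtr hProcess = OpenProcess(PROCESS_VM_OPERATION | PROCESS_VM_READ | PROCESS_VM_WRITE, false, pid);

        const int maxChars = 512;
        int itemSize = Marshal.SizeOf<LVITEM>();

        // One remote block: the LVITEM struct followed by the UTF-16 text buffer.
        IntPtr remote = VirtualAllocEx(hProcess, IntPtr.Zero, itemSize + maxChars * 2,
                                       MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

        var item = new LVITEM
        {
            mask = LVIF_TEXT,
            iItem = itemIndex,
            iSubItem = subItem,
            pszText = IntPtr.Add(remote, itemSize),   // text buffer lives right after the struct
            cchTextMax = maxChars
        };
        WriteProcessMemory(hProcess, remote, ref item, itemSize, out _);

        // Ask the list view (running in its own process) to fill the remote buffer.
        int copied = (int)SendMessage(listViewHwnd, LVM_GETITEMTEXTW, (IntPtr)itemIndex, remote);

        var bytes = new byte[maxChars * 2];
        ReadProcessMemory(hProcess, IntPtr.Add(remote, itemSize), bytes, bytes.Length, out _);

        VirtualFreeEx(hProcess, remote, 0, MEM_RELEASE);
        CloseHandle(hProcess);

        return Encoding.Unicode.GetString(bytes, 0, Math.Max(copied, 0) * 2);
    }
}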
