I was wondering what the fastest way to send keystrokes using C# is. Currently I am using SendKeys.Send() and SendKeys.SendWait() with SendKeys.Flush().
I am using the following code to measure how long each of them takes:
Stopwatch sw1 = new Stopwatch();
sw1.Start();
for (int a = 1; a <= 1000; a++)
{
    SendKeys.Send("a");
    SendKeys.Send("{ENTER}");
}
sw1.Stop();
And:
Stopwatch sw2 = new Stopwatch();
sw2.Start();
for (int b = 1; b <= 1000; b++)
{
    SendKeys.SendWait("b");
    SendKeys.SendWait("{ENTER}");
    SendKeys.Flush();
}
sw2.Stop();
The results of the two are:
Result 1: 40119 milliseconds
Result 2: 41882 milliseconds
Now, if we move the SendKeys.Flush() in the second test outside of the loop, we get:
Result 3: 46278 milliseconds
I was wondering why these changes in the code make such a difference in speed.
I was also wondering if there is a faster way of sending many keystrokes, as my application does this a lot. (These tests were done on a really slow netbook.)
Thanks!
SendWait() is slower because it waits until the message has been processed by the target application. Send() instead doesn't wait and returns as soon as possible. If the target application is somehow busy, the difference can be even more pronounced.
If you call Flush(), your application blocks while it processes all the keyboard events queued in the message queue. It doesn't make much sense to call it if you sent the keys using SendWait(), and it slows the application down a lot because it's inside the loop (think of Flush() as a selective DoEvents(), yes, with all its drawbacks, and note that SendWait() itself calls it too).
If you're interested in its performance (though it will always be limited by the speed at which your application can process the messages), please read the SendKeys documentation on MSDN. In short, you can make the SendKeys class use the SendInput function rather than a journal hook. As a quick reference, simply add this setting to your app.config file:
<appSettings>
<add key="SendKeys" value="SendInput"/>
</appSettings>
Anyway, the goal of the new implementation isn't speed but consistent behavior across different versions of Windows and options (the increased performance is more of a side effect, I guess).
If you have a lot of text to push to a client, you may notice that SendKeys is really sluggish. You can vastly speed things up by using the clipboard. The idea is to put the text you wish to "type" into a target text box onto the clipboard, then send a CTRL+V to the target application to paste it. Here's an illustration:
Clipboard.Clear(); // Always clear the clipboard first
Clipboard.SetText(TextToSend);
SendKeys.SendWait("^v"); // Paste
I found this worked well for me with a cordless bar code scanner that talks via WiFi to a host app, which sends long bar codes to a web app running in Google Chrome. It went from tediously pecking out 30 digits over about 4 seconds to pasting them all in under a second.
One obvious downside is that this can interfere with your user's own use of the clipboard. Another is that it doesn't help if you need to send control codes like TAB or F5 rather than just plain old text.
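If the clipboard interference matters, one mitigation is to save the current clipboard contents and restore them after the paste. A rough sketch (my own addition, not from the original answer; restoring an IDataObject this way is not guaranteed to work for every application's custom formats):

IDataObject backup = Clipboard.GetDataObject(); // whatever the user had copied
Clipboard.Clear();
Clipboard.SetText(TextToSend);
SendKeys.SendWait("^v");                   // paste into the target application
SendKeys.Flush();                          // process pending keyboard events first
if (backup != null)
    Clipboard.SetDataObject(backup, true); // 'true' keeps the data after we exit
// Note: the target app may still be reading the clipboard at this point,
// so a short delay before restoring can be safer.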
I have a somewhat weird problem. I have a couple of DLLs that I need to use in order to read from and write to an NFC reader.
This works:
LV3_InitializeSystem(5);
setAuthCode();
MessageBox.Show(""); // I immediately click and close the box
short ret = LV3_CheckIssuer();
Console.WriteLine(ret); // 0 - Success
This doesn't work:
LV3_InitializeSystem(5);
setAuthCode();
short ret = LV3_CheckIssuer();
Console.WriteLine(ret); // 90 - Card reader can not be detected.
This also doesn't work:
LV3_InitializeSystem(5);
setAuthCode();
Thread.Sleep(5000);
short ret = LV3_CheckIssuer();
Console.WriteLine(ret); // 90 - Card reader can not be detected.
I have no idea what the problem might be. I tried running the initialization part on separate threads with no success. Why does showing a MessageBox allow the initialization to complete when Thread.Sleep() doesn't?
The DLL is apparently posting some required messages on the Windows message queue. In order for the messages to be processed, the message queue must be emptied.
One way of ensuring these messages are processed is to use Application.DoEvents(). Generally, Application.DoEvents() is frowned upon; see https://blog.codinghorror.com/is-doevents-evil/ for the reasons why.
There are other ways to solve this without Application.DoEvents(), but they would probably require restructuring your code, for example using async/await with Task.Delay.
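As a minimal sketch of that approach (assuming a WinForms or WPF context; LV3_InitializeSystem, setAuthCode, and LV3_CheckIssuer are the poster's own DLL wrappers and are assumed to be in scope):

// Requires using System.Threading.Tasks;
// Awaiting Task.Delay returns control to the UI message loop, so the
// messages the DLL posted get pumped while we wait; that pumping is what
// the MessageBox was doing as a side effect. Thread.Sleep blocks the loop.
private async void InitializeAndCheck()
{
    LV3_InitializeSystem(5);
    setAuthCode();

    await Task.Delay(5000); // the message loop keeps running during this delay

    short ret = LV3_CheckIssuer();
    Console.WriteLine(ret);
}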
I've been working on this for a week now and used StackOverflow extensively, but I can't figure this out.
I'm writing a plug-in in C# for an Autodesk product, and I'm connected (with Marshal) to a different 3D application. I've done this for dozens of other plug-ins in the past without issue.
This project is unique: from the other 3D application, I'm running a long-running task (a file export) on a large model. It can take anywhere from 1 to 60 minutes.
I get the poison message: "This action cannot be completed because the 'application' is not responding. Choose "Switch To" and..." Technically, I can let the client just click "Retry" until it finds the application, but that is undesirable.
I originally thought I would just use a DoEvents-type call and wait for the export to finish, but the poison message appears while the export sub is running (this is my first encounter with poison messages, so I'm learning). Then I looked into running the export operation on a background thread, testing both ThreadPool and Thread approaches. I can "start" the service, but it never exports the model from the other 3D application; it just runs forever. (I removed the error message from my original post because I'm not looking for a solution to that sub-problem, but rather to what I describe below.)
Lastly, I tried to modify the NetMsmqBinding (I know nothing about this either, but I'm trying to learn) in the hope that it will raise the number of allowed retries.
System.TimeSpan TS = new System.TimeSpan(0, 30, 10);
System.TimeSpan TB = new System.TimeSpan(10, 0, 0);
NetMsmqBinding NMB = new NetMsmqBinding();
NMB.MaxRetryCycles = 1000;
NMB.ReceiveRetryCount = 1000;
NMB.RetryCycleDelay = TS;
NMB.OpenTimeout = TB;
However, no matter what I change my NetMsmqBinding values to, I always get the "Retry" message at the same time, so I must not be writing it correctly. In other examples I noticed an XML file containing these values, and I don't know what that XML is. Nor do I really want to know, because I'd rather have this run in the plug-in than have another XML file to deal with.
I'm finding lots of examples of how to deal with this in the hypothetical (lots of Console.Write BS), but nothing with a concrete example where a long-running COM process interrupts the main C# utility.
I'd really like to figure out how to reset the retry frequency and cycles to last longer, so that the poison messages aren't presented. How can I do that?
Here is more code, to give some context:
namespace Testing_V0
{
    [PluginAttribute("Testing_V0R1", "ADSK", ToolTip = "Testing the plugin", DisplayName = "Testing the plugin")]
    [AddInPluginAttribute(AddInLocation.AddIn)]
    public class MyPlugin : AddInPlugin
    {
        public override int Execute(params string[] parameters)
        {
            System.TimeSpan TS = new System.TimeSpan(0, 30, 10);
            System.TimeSpan TB = new System.TimeSpan(10, 0, 0);
            NetMsmqBinding NMB = new NetMsmqBinding();
            NMB.MaxRetryCycles = 1000;
            NMB.ReceiveRetryCount = 1000;
            NMB.RetryCycleDelay = TS;
            NMB.OpenTimeout = TB;
            //NMB.ReceiveErrorHandling = ReceiveErrorHandling.Drop;

            //Do the export process here

            return 0; // return added so the snippet compiles (assumed success code)
        }
    }
}
Because I didn't have any more time to work on this, I will explain how I resolved it. It is probably not an advisable way to resolve this issue, but it has some advantages.
On an unrelated project a few months ago, I wrote a similar application that exported the same file format, but from a console application. The console application had none of the issues that arise in the DLL inside the Autodesk product.
Using what I remembered from the console-app export, I made a new small console exe that does just the file export. Then I used System.Diagnostics.Process.Start(file.exe, "arguments") in the parent DLL to launch the executable.
This is a very roundabout way to get rid of the pop-up messages, but there are some advantages. The executable runs the export while the C# DLL continues. This allows me to run a simple file-exists loop until the file appears in the directory, then continue on. I put a progress counter in the C# DLL UI, and it gives the client a nice stable readout while the exporter is running.
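A rough sketch of what that looks like (the exporter path, arguments, output file, and UI helper below are placeholders, not the poster's actual code):

// Launch the standalone exporter, then poll for its output file while the
// plug-in's own code (and UI) keeps running.
var exporter = System.Diagnostics.Process.Start(
    @"C:\Tools\Exporter.exe", "\"input.model\" \"output.fbx\"");

while (!System.IO.File.Exists(@"C:\exports\output.fbx"))
{
    UpdateProgressCounter();            // hypothetical progress read-out
    System.Threading.Thread.Sleep(500); // poll twice a second
}
// Caveat: File.Exists can become true while the exporter is still writing;
// checking exporter.HasExited as well is more robust.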
Like I said, this is not ideal, but it works for me, for now.
I'm using the Stockfish chess engine to power Human vs. Computer games.
Here is first part of the code:
Process _proc= new Process();
_proc.StartInfo = new ProcessStartInfo(path);
_proc.StartInfo.RedirectStandardInput = true;
_proc.StartInfo.RedirectStandardOutput = true;
_proc.StartInfo.UseShellExecute = false;
_proc.StartInfo.CreateNoWindow = true;
_proc.Start();
_proc.StandardInput.WriteLine("uci");
_proc.StandardInput.WriteLine("ucinewgame");
Up to this point everything is OK, but when I try to read StandardOutput, something weird happens.
string result = _proc.StandardOutput.ReadToEnd();
The stockfish.exe window pops up and my application keeps running, but the code after that line never executes. When I press pause in the debugger, it points at that line.
If I use:
while (!_proc.StandardOutput.EndOfStream)
{
    result += _proc.StandardOutput.ReadLine();
}
The same thing happens, only at the while statement. result has its full value there; all the text has been written into it.
Is there any way to overcome this without async reading?
Side problem:
Since this is all part of a singleton class that is used across the whole ASP.NET application, I don't feel like using async reading, since I'm not sure how I can protect it (with locking) against multiple threads writing into it. Also, I don't know how to block the current thread, since the processing of a command can last up to 10 seconds.
I don't feel like using Thread.Sleep() to constantly check for the end of the output either; that's not elegant.
Considering the side problem, how could I avoid multithreading problems if async is the only solution?
My threading knowledge is weak, so please keep that in mind when giving thread-related answers. Thank you.
The call to StandardOutput.ReadToEnd will block until the process ends. Is the goal here to read, process, and respond to various text commands from the spawned process as you receive them?
You must approach this via asynchronous reading.
For example, you could set up a listener for Process.OutputDataReceived, then call Process.BeginOutputReadLine to start reading. Your code continues executing, while the .NET Framework handles incoming text on a separate thread.
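A minimal sketch of that, reusing _proc from the question (the lock object and StringBuilder are my additions, to address the side problem of multiple threads writing to shared state):

var output = new StringBuilder();
var outputLock = new object();

// Wire the handler before starting the async read; the callback runs on a
// thread-pool thread, hence the lock around the shared buffer.
_proc.OutputDataReceived += (sender, e) =>
{
    if (e.Data == null) return; // null signals the end of the stream
    lock (outputLock)
    {
        output.AppendLine(e.Data);
    }
};

_proc.Start();
_proc.BeginOutputReadLine();    // kicks off asynchronous line reads
_proc.StandardInput.WriteLine("uci");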
My application can receive up to roughly 100 requests for a batch job within a few milliseconds, but in actuality those requests represent a single job request.
Fixing this at the source, so that only one job request is sent, is just not feasible at the moment.
A workaround I have thought of is to program my application to fulfill only one batch job every x milliseconds, in this case 200 milliseconds, and ignore any other batch job that comes in during those 200 milliseconds or while a batch job is running. After the 200 milliseconds are up, or when the batch job has completed, my application will accept the next job request from that point on; it will not process any requests that were ignored earlier. Once it accepts another job request, the cycle repeats.
What's the best way of doing this using .Net 4.0? Are there any boiler plate code that I can simply follow as a guide?
Update
Sorry for being unclear. I have added more details about my scenario. Also, I just realized that my proposed workaround above will not work. Sorry, guys. Here's some background information.
I have an application that builds an index from files in a specified directory. When a file is added, deleted, or modified in this directory, my application listens for these events using a FileSystemWatcher and re-indexes the files. The problem is that around 100 files can be added, deleted, or modified by an external process, and the changes occur very quickly, i.e. within a few milliseconds. My end goal is to re-index the files after the external process has made its last change. The best solution would be to modify the external process to signal my application when it has finished modifying the files, but that's not feasible at the moment. Thus, I have to create a workaround.
A workaround that may solve my problem is to wait for the first file change and, once it has occurred, wait 200 milliseconds for any subsequent file changes. Why 200 milliseconds? Because I'm hoping, and confident, that the external process can perform all its file changes within 200 milliseconds. Once my application has waited 200 milliseconds, I would like it to start a task that re-indexes the files and then go back to listening for file changes.
What's the best way of doing this?
Again, sorry for the confusion.
This question is a bit too high-level to guess at.
My guess is your application runs as a service, your requests come into the application and land in a queue to be processed, and every 200 ms you wake the queue and pop an item off for processing.
I'm confused about the "masked as one job request" part. Since you mentioned you will "ignore any other batch job", my guess is you haven't arranged your code to accept the incoming requests in a queue.
Regardless, you will generally always have one application process running (your service), and if you choose, you can spawn a new thread for each item you process from the queue. You can monitor how much CPU/memory this requires and adjust the firing interval (200 ms) accordingly.
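A rough sketch of that queue-plus-timer arrangement (Request and Process are placeholders; ConcurrentQueue<T> is available in .NET 4.0):

// Requires using System.Collections.Concurrent;
// Incoming requests land in a thread-safe queue; a timer drains one
// item every 200 ms on a thread-pool thread.
static readonly ConcurrentQueue<Request> queue = new ConcurrentQueue<Request>();
static readonly System.Timers.Timer pump = new System.Timers.Timer(200);

static void StartPump()
{
    pump.Elapsed += (s, e) =>
    {
        Request req;
        if (queue.TryDequeue(out req))
            Process(req); // hypothetical handler, possibly on its own thread
    };
    pump.Start();
}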
I may not be accurately understanding the problem, but my recommendation is to use the singleton pattern to work around this issue.
With the singleton approach, you can implement a lock on an object (the access method could be something along the lines of BatchProcessor::GetBatchResults) that would then serialize all requests to the batch-job results object. Once the batch has finished, the lock is released and the underlying object has the results of the batch job available.
Please keep in mind that this is a workaround. There may be a better solution that involves looking into and changing the underlying business logic that causes multiple requests to come in for a job that is processed on demand.
Update:
Here is a link to information about the singleton pattern (including code examples): http://msdn.microsoft.com/en-us/library/ff650316.aspx
It is my understanding that the poster has some sort of application that sits and waits for incoming requests to perform a batch job. The problem is that he receives multiple requests within a short period of time that should really have come in as a single request, and unfortunately he is not able to fix that at the source.
So his solution is to assume that all requests received within a 200 ms window are the same and to process them only once. My concern would be whether this assumption is correct; that depends entirely on the sending systems and the environment in which this is used. The general idea is to record a lastReceived date/time when a request is processed. When a new request comes in, compare the current date/time to the lastReceived date/time and only process it if the difference is greater than 200 ms.
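In code, that idea might look something like this sketch (OnJobRequest and ProcessBatchJob are hypothetical names, not the poster's code):

static DateTime lastReceived = DateTime.MinValue;
static readonly object gate = new object();

static void OnJobRequest()
{
    lock (gate) // requests can arrive concurrently
    {
        DateTime now = DateTime.Now;
        if ((now - lastReceived).TotalMilliseconds <= 200)
            return;          // treat as part of the previous burst
        lastReceived = now;
    }
    ProcessBatchJob();       // hypothetical: run the actual batch job
}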
Other possible solutions:
You said you could not modify the sending application so that only one job request is sent, but could you add additional information to it, for instance a unique identifier?
Could you store the parameters from the last job request, compare them with the next job request, and only process them if they are different?
Based on your Update
Here is an example of how you could wait 200 ms using a Timer:
using System;
using System.IO;
using System.Timers;

class Program
{
    static Timer timer;
    static int waitTime = 200; // in ms

    static void Main(string[] args)
    {
        FileSystemWatcher fsw = new FileSystemWatcher();
        fsw.Path = @"C:\temp\";
        fsw.Created += new FileSystemEventHandler(fsw_Created);
        fsw.EnableRaisingEvents = true;
        Console.ReadLine();
    }

    static void fsw_Created(object sender, FileSystemEventArgs e)
    {
        DateTime currTime = DateTime.Now;
        if (timer == null)
        {
            Console.WriteLine("Started @ " + currTime);
            timer = new Timer();
            timer.Interval = waitTime;
            timer.AutoReset = false; // fire once per burst of changes
            timer.Elapsed += new ElapsedEventHandler(timer_Elapsed);
            timer.Start();
        }
        else
        {
            Console.WriteLine("Ignored @ " + currTime);
        }
    }

    static void timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        // Start the re-indexing task here
        Console.WriteLine("Elapsed @ " + DateTime.Now);
        timer = null;
    }
}
Greetings stackoverflow members,
In a BackgroundWorker of a WPF frontend, I run sox (an open-source console sound-processing tool) in a System.Diagnostics.Process. I use several other command-line tools in the same way and parse their output to populate progress bars in my frontend.
This works fine for the other tools, but not for sox: instead of printing a new line for each progress step, it updates a single line on the console using only carriage returns (\r) and no line feeds (\n). I tried both asynchronous and synchronous reads on process.StandardError.
Using the async process.ErrorDataReceived += (sender, args) => FadeAudioOutputHandler(clip, args); in combination with process.BeginErrorReadLine() doesn't produce any individual status updates, because the carriage returns do not trigger ReadLine, even though the MSDN docs suggest they should. The output is spat out in one chunk when the process finishes.
I then tried the following code for synchronous char-by-char reads on the stream:
char[] c;
var line = new StringBuilder();
while (process.StandardError.Peek() > -1)
{
    c = new char[1];
    process.StandardError.Read(c, 0, c.Length);
    if (c[0] == '\r')
    {
        var percentage = 0;
        var regex = new Regex(@"%\s([^\s]+)");
        var match = regex.Match(line.ToString());
        if (match.Success)
        {
            myProgressObject.ProgressType = ProgressType.FadingAudio;
            //... some calculations omitted for brevity
            percentage = (int) Math.Round(result);
        }
        else
        {
            myProgressObject.ProgressType = ProgressType.UndefinedStep;
        }
        _backGroundWorker.ReportProgress(percentage, myProgressObject);
        line.Clear();
    }
    else
    {
        line.Append(c[0]);
    }
}
The above code does not seem to read the stream in real time: it stalls for a while, then spits out a small chunk, and finally deadlocks halfway through the process.
Any hints towards the right direction would be greatly appreciated!
UPDATE with (sloppy?) solution:
This drove me crazy because nothing I tried on the C# side of things seemed to have any effect on the results. My original implementation, before changing it 15 times and introducing new dependencies, was fine.
The problem lies with sox and RedirectStandardError alone. I discovered that after grabbing the sox source code and building my own version. First I removed all of sox's output entirely except for the parts I was really interested in, and then changed the output to full lines followed by a newline (\n). I assumed that this would fix my issues. Well, it didn't. I do not know enough C++ to find out why, but they seem to have tampered with how stdio writes to that stream, or how it's buffered, or they do it in such a special way that the StreamReader on the C# side is not flushed until the default 4096-byte buffer is full. I confirmed that by padding each line to at least 4096 bytes. So, in conclusion, all I had to do was manually flush stderr in sox.c after each fprintf(stderr, ...) call in display_status(...):
fflush(stderr);
Though, I'm not sure this is anywhere close to an elegant solution.
Thanks to Erik Dietrich for his answer which made me look at this from a different angle.
The situation you describe is a known problem - for a solution including source code see http://www.codeproject.com/KB/threads/ReadProcessStdoutStderr.aspx
It solves both problems (deadlock and the problem with \n)...
I've had to deal with a similar issue with a bespoke build tool in Visual Studio. I found that using a regex and doing the parsing on the same thread as the reading is a problem: the output processing grinds to a halt. I ended up with a standard producer/consumer solution where you read from the output and push lines onto a queue, then have some other thread dequeue and process them. I can't offer source code, but this site has some fantastic resources: http://www.albahari.com/threading/part2.aspx
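A sketch of that split, combined with the char-by-char splitting on \r from the question (BlockingCollection is available since .NET 4; ParseProgressLine stands in for the regex and ReportProgress work):

// Requires using System.Collections.Concurrent; and using System.Threading;
var lines = new BlockingCollection<string>();

// Producer: only reads characters and splits on carriage returns.
var reader = new Thread(() =>
{
    var sb = new StringBuilder();
    int ch;
    while ((ch = process.StandardError.Read()) != -1)
    {
        if (ch == '\r') { lines.Add(sb.ToString()); sb.Clear(); }
        else sb.Append((char)ch);
    }
    lines.CompleteAdding(); // end of stream
});
reader.Start();

// Consumer: the (slow) parsing happens off the reading thread.
var parser = new Thread(() =>
{
    foreach (string line in lines.GetConsumingEnumerable())
        ParseProgressLine(line); // hypothetical parsing routine
});
parser.Start();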
It's a little kludgy, but perhaps you could pipe the output of the uncooperative process into a process that does nothing but read its input character by character, insert line feeds, and write to standard out. So, in terms of (very) pseudo-code:
StartProcess("sox | littleguythatIwrote")
ReadStandardOutTheWayYouAleadyAre()
That may just move the goalposts (I'm a lot more familiar with std in/out/err in the *NIX world), but it's a different way to look at the problem, anyway.
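For what it's worth, such a pass-through filter is tiny in C#. A sketch (my own, not from the answer above) that echoes stdin to stdout while turning carriage returns into newlines:

using System;

class CrToLf
{
    static void Main()
    {
        int ch;
        while ((ch = Console.In.Read()) != -1)
        {
            if (ch == '\r')
            {
                Console.Out.WriteLine(); // replace \r with a real line break
                Console.Out.Flush();     // hand the line downstream promptly
            }
            else
            {
                Console.Out.Write((char)ch);
            }
        }
    }
}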