I've been working on this for a week now and used StackOverflow extensively, but I can't figure this out.
I'm writing a plug-in in C# in an Autodesk product, and I'm connected (with Marshal) to a different 3D application. I've done this for dozens of other plug-ins in the past without issue.
This project is unique. From the different 3D application, I'm running a long-running task (file export) on a large model. It takes 1-60 minutes at times.
I get the poison message: "This action cannot be completed because the 'application' is not responding. Choose "Switch To" and..." Technically, I can let the client just click "Retry" until it finds the application, but that is undesirable.
I originally thought I would just put in a DoEvents type thing and it would wait for the export to finish, but the poison message appears while the export sub is running (this is my first encounter with poison messages, so I'm learning). Then I looked into running the export operation on a background thread, testing ThreadPool and Thread approaches. However, while I can "start" the service, it never exports the model from the other 3D application; it just runs forever. (I removed the error message from my original post because I'm not looking for a solution to that sub-problem, but rather to what I'm about to describe below.)
Lastly, I tried to modify the NetMsmqBinding (I know nothing about this either, but I'm trying to learn it) in the hope that it would let me raise the number of allowed retries:
System.TimeSpan TS = new System.TimeSpan(0, 30, 10);
System.TimeSpan TB = new System.TimeSpan(10, 0, 0);
NetMsmqBinding NMB = new NetMsmqBinding();
NMB.MaxRetryCycles = 1000;
NMB.ReceiveRetryCount = 1000;
NMB.RetryCycleDelay = TS;
NMB.OpenTimeout = TB;
However, no matter what I change my NetMsmqBinding values to, I always get the "retry" message at the same time. I must not be writing it correctly. In other examples, I noticed an xml file containing these values, and I don't know what that xml is. Nor do I really want to know, because I'd rather have this run in the plug-in, rather than have another xml file to deal with.
I'm finding lots of examples of how to deal with this in the hypothetical (lots of Console.Write BS), but nothing with a concrete example where a long-running COM process interrupts the main C# utility.
I'd really like to figure out how to reset the retry frequency and cycles to last longer, so that the poison messages aren't presented. How can I do that?
Here is more code, to give some context:
namespace Testing_V0
{
    [PluginAttribute("Testing_V0R1", "ADSK", ToolTip = "Testing the plugin", DisplayName = "Testing the plugin")]
    [AddInPluginAttribute(AddInLocation.AddIn)]
    public class MyPlugin : AddInPlugin
    {
        public override int Execute(params string[] parameters)
        {
            System.TimeSpan TS = new System.TimeSpan(0, 30, 10);
            System.TimeSpan TB = new System.TimeSpan(10, 0, 0);

            NetMsmqBinding NMB = new NetMsmqBinding();
            NMB.MaxRetryCycles = 1000;
            NMB.ReceiveRetryCount = 1000;
            NMB.RetryCycleDelay = TS;
            NMB.OpenTimeout = TB;
            //NMB.ReceiveErrorHandling = ReceiveErrorHandling.Drop;

            //Do the Export process here

            return 0; // Execute must return an int
        }
    }
}
Because I didn't have any more time to work on this, I will explain how I resolved it. It is probably not an advisable way to resolve the issue, but it has some advantages.
On an unrelated project a few months ago, I built a similar application that exported the same file format, but from a console application. The console application had none of the issues that arise in the DLL running inside the Autodesk product.
Using what I remembered from the console app export, I made a new small console app exe that does just the file export. Then I used System.Diagnostics.Process.Start(file.exe, "arguments") in the parent DLL to trigger the executable.
This is a very roundabout way to get rid of the pop-up messages, but there are some advantages. The executable runs the export while the C# DLL continues, which lets me run a simple file-exists loop until the file appears in the directory and then carry on. I put a progress counter in the C# DLL's UI, and it gives the client a nice stable read-out while the exporter is running.
Like I said, this is not ideal, but this works for me, for now.
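For reference, here is roughly what the hand-off looks like. This is only a sketch: the exporter path, arguments, output file name and polling interval are placeholders, not my real values.

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

static class ExportLauncher
{
    // Launches the small exporter exe and waits for its output file to appear.
    // All paths and arguments below are placeholders, not the real ones.
    public static void RunExport()
    {
        string exporterPath = @"C:\Tools\ModelExporter.exe"; // hypothetical helper exe
        string outputFile = @"C:\Exports\model.out";         // hypothetical output file

        Process.Start(exporterPath, "\"" + outputFile + "\"");

        // Simple file-exists loop: the plug-in keeps control and can update a
        // progress read-out on each pass instead of blocking on the COM call.
        while (!File.Exists(outputFile))
        {
            Thread.Sleep(1000); // poll once a second
        }
    }
}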
I am trying to write a UI test case for a WPF application. The application has a search textbox; when input is provided, the search for the input string is done on a background thread.
This is my basic code:
public void TestMethod1()
{
    var applicationDirectory = @"C:\projects\dev\source\bin\Debug";
    var applicationPath = Path.Combine(applicationDirectory, "Some.exe");

    Application application = Application.Launch(applicationPath);
    Window mainWindow = application.GetWindow("Window Title");

    mainWindow.Get<TextBox>().Text = "testing things out";

    Assert.IsTrue(true);

    mainWindow.Dispose();
    application.Dispose();
}
Now, on the line where I set the Text property, the background work starts and the framework throws this exception:
TestStack.White.UIItems.UIActionException : Window didn't respond,
after waiting for 50000 ms
Basically the framework waits for a configured amount of time and then throws the exception as the background work did not finish in time.
I have checked the documentation and it mentions a workaround, but that would involve changing my application code. Since this is a legacy application, I do not want to change the code (we are in the middle of a migration, so we want to keep code changes limited).
This seems to be a common issue, but I have not been able to find a solution. Any ideas?
UPDATE
The following code does work, though I'm still not close to finding a solution to the original issue.
mainWindow.Get<TextBox>().BulkText = "testing things out";
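For completeness, the other knob I experimented with was White's global busy timeout. I'm going from memory here, so treat the type and property names below as assumptions rather than a confirmed fix; raising the timeout would only delay the exception anyway.

using TestStack.White.Configuration;

// Assumption: White reads its "window didn't respond" limit from
// CoreAppXmlConfiguration.Instance.BusyTimeout (in milliseconds).
CoreAppXmlConfiguration.Instance.BusyTimeout = 120000; // e.g. 2 minutes instead of the 50 s the exception mentions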
I have the goal of uploading a Products CSV of ~3000 records to my e-commerce site. I want to utilise the REST API that my e-comm platform provides so I have something I can re-use and build upon for future sites that I may create.
The main issue that I am having trouble working through is:
- System.Threading.ThreadAbortException
which I can only attribute to how long it takes to process all 3K records via POST requests. My code:
public ActionResult WriteProductsFromFile()
{
    string fileNameIN = "19107.txt";
    string fileNameOUT = "19107_output.txt";
    string jsonUrl = $"/api/products";

    List<string> ls = new List<string>();
    var engine = new FileHelperAsyncEngine<Prod1>();

    using (engine.BeginReadFile(fileNameIN))
    {
        foreach (Prod1 prod in engine)
        {
            outputProduct output = new outputProduct();
            if (!string.IsNullOrEmpty(prod.name))
            {
                output.product.name = prod.name;

                string productJson = JsonConvert.SerializeObject(output);
                ls.Add(productJson);
            }
        }
    }

    foreach (String s in ls)
        nopApiClient.Post(jsonUrl, s);

    return RedirectToAction("GetProducts");
}
Since I'm new to web-coding, am I going about this the wrong way? Is there a preferred way to bulk-upload that I haven't come across?
I've attempted to use the TaskCreationOptions.LongRunning flag, which helps the cause slightly but doesn't get me anywhere near my goal.
Web and API controller actions are not meant to do long-running tasks - besides locking up the UI/thread, you will be introducing a series of opportunities for failure that you will have little recourse in recovering from.
But it's not all bad: you have a lot of options here, and there is plenty of literature on async/cloud architecture that explains how to deal with files and these sorts of scenarios.
What you want to do is disconnect the processing of your file from the API request (in your application, not the 3rd party's).
It will take a little more work but will ultimately create a more reliable application.
Step 1:
Drop the file to disk immediately. I see you already have the file on disk; I'm not sure how it gets there, but either way it works out the same.
Step 2:
Use a process running as
- a console app (easiest)
- a service (requires some sort of install/uninstall of the service)
- or even a thread in your web app (but you will struggle to know when it fails)
Whichever way you choose, the process will watch a directory for file changes; when there is a change, it will kick off your method to happily process the file as you like.
Check out FileSystemWatcher; here is a basic example: https://www.dotnetperls.com/filesystemwatcher
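Roughly, the console-app flavour of that watcher could look like the sketch below; the folder path and the processing call are placeholders for your own.

using System;
using System.IO;

class Watcher
{
    static void Main()
    {
        // Watch the drop folder for new product files; path and filter are placeholders.
        FileSystemWatcher fsw = new FileSystemWatcher(@"C:\drop", "*.txt");
        fsw.Created += (sender, e) =>
        {
            Console.WriteLine("Picked up " + e.FullPath);
            // Call your existing read/serialize/POST logic here, outside the web request.
        };
        fsw.EnableRaisingEvents = true;

        Console.WriteLine("Watching... press Enter to stop.");
        Console.ReadLine();
    }
}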
Additionally:
If you are interested in running a thread in your Api/Web app, take a look at https://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx for some options.
You don't have to use a FileSystemWatcher, of course; you could trigger via a flag in a DB that is checked periodically, or via a system event.
I'm using the Stockfish chess engine to power Human vs. Computer games.
Here is first part of the code:
Process _proc = new Process();
_proc.StartInfo = new ProcessStartInfo(path);
_proc.StartInfo.RedirectStandardInput = true;
_proc.StartInfo.RedirectStandardOutput = true;
_proc.StartInfo.UseShellExecute = false;
_proc.StartInfo.CreateNoWindow = true;
_proc.Start();
_proc.StandardInput.WriteLine("uci");
_proc.StandardInput.WriteLine("ucinewgame");
At this point everything is ok, but when I try to read StandardOutput something weird happens.
string result = _proc.StandardOutput.ReadToEnd();
The Stockfish.exe window pops up and my application keeps running, but the code after that line never executes. When I pause the debugger, it points at that line.
If I use:
while (!_proc.StandardOutput.EndOfStream)
{
result += _proc.StandardOutput.ReadLine();
}
The same thing happens, only at the while statement; result has its full value there, with all the text written into it.
Is there any way to overcome this without async reading?
Side problem:
Since this is all part of a singleton class that is used across the whole ASP.NET application, I don't feel like using async reading, since I'm not sure how I can protect it (with locking) against multiple threads writing into it. Also, I don't know how to stop the current thread, since processing a command can last up to 10 seconds.
I don't feel like using Thread.Sleep() to constantly check for the end of the output read; it's not elegant.
Considering the side problem, how could I avoid multithreading problems if async is the only solution?
My threading knowledge is weak, so please have that in mind when giving thread related answers. Thank you.
The call to StandardOutput.ReadToEnd will block until this process ends. Is the goal here to read, process, and respond to various text commands from the process you spawn as you receive them?
You must approach this via asynchronous reading.
For example, you could set up a listener for Process.OutputDataReceived, then call Process.BeginOutputReadLine to start reading. Your code will continue execution; meanwhile, the .NET Framework will handle incoming text messages on a separate thread.
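A minimal sketch of that pattern, reusing the engine setup from the question (the path is a placeholder):

using System;
using System.Diagnostics;

class EngineReader
{
    static void Main()
    {
        Process proc = new Process();
        proc.StartInfo = new ProcessStartInfo(@"C:\engines\stockfish.exe"); // placeholder path
        proc.StartInfo.RedirectStandardInput = true;
        proc.StartInfo.RedirectStandardOutput = true;
        proc.StartInfo.UseShellExecute = false;
        proc.StartInfo.CreateNoWindow = true;

        // This handler runs on a worker thread each time the engine writes a line.
        proc.OutputDataReceived += (sender, e) =>
        {
            if (e.Data != null)
                Console.WriteLine("engine: " + e.Data);
        };

        proc.Start();
        proc.BeginOutputReadLine();          // start asynchronous reads
        proc.StandardInput.WriteLine("uci"); // your code keeps running meanwhile

        Console.ReadLine();
    }
}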
I was wondering what the fastest way to send keystrokes using C# is. Currently I am using SendKeys.Send() and SendKeys.SendWait() with SendKeys.Flush().
I am using the following code to calculate the time of how long it takes for the both of them to work:
Stopwatch sw1 = new Stopwatch();
sw1.Start();
for (int a = 1; a <= 1000; a++)
{
    SendKeys.Send("a");
    SendKeys.Send("{ENTER}");
}
sw1.Stop();
And:
Stopwatch sw2 = new Stopwatch();
sw2.Start();
for (int b = 1; b <= 1000; b++)
{
    SendKeys.SendWait("b");
    SendKeys.SendWait("{ENTER}");
    SendKeys.Flush();
}
sw2.Stop();
The results of the 2 are:
Result 1: 40119 milliseconds
Result 2: 41882 milliseconds
Now if we put the SendKeys.Flush() on the second test, outside of the loop we get:
Result 3: 46278 milliseconds
I was wondering why these changes in the code make the speed very different.
I was also wondering if there is a faster way of sending many keystrokes, as my application does it a lot. (These tests were done on a really slow netbook)
Thanks!
SendWait() is slower because it waits until the message has been processed by the target application. The Send() function, instead, doesn't wait and returns as soon as possible. If the application is somehow busy, the difference can be even more evident.
If you call Flush(), you stop your application to process all keyboard events queued in the message queue. It doesn't make much sense if you sent them using SendWait(), and it slows the application down a lot because it's inside the loop (imagine Flush() as a selective DoEvents() - yes, with all its drawbacks - and it's called by SendWait() itself too).
If you're interested in its performance (though it will always be limited by the speed at which your application can process the messages), please read this on MSDN. In short, you can change the SendKeys class to use the SendInput function rather than a journal hook. As a quick reference, simply add this setting to your app.config file:
<appSettings>
<add key="SendKeys" value="SendInput"/>
</appSettings>
Anyway, the goal of the new implementation isn't speed but consistent behavior across different versions of Windows and options (the increased performance is a side effect, I guess).
If you have a lot of text to push to a client, you may notice that SendKeys is really sluggish. You can vastly speed things up by using the clipboard. The idea is to put the text you wish to "type" into a target text box in the clipboard and then send a CTRL-V to the target application to paste that text. Here's an illustration:
Clipboard.Clear(); // Always clear the clipboard first
Clipboard.SetText(TextToSend);
SendKeys.SendWait("^v"); // Paste
I found this worked well for me with a cordless bar code scanner that talks via WiFi to a host app that sends long bar codes to a web app running in Google Chrome. It went from tediously pecking out 30 digits over about 4 seconds to instantly pasting all in under a second.
One obvious downside is that this can mess with your user's use of the clipboard. Another is that it doesn't help if you intend to send control codes like TAB or F5 instead of just plain old text.
My application could have up to roughly 100 requests for a batch job within a few milliseconds but in actuality, these job requests are being masked as one job request.
Fixing this issue so that only one job request comes in is just not feasible at the moment.
A workaround that I have thought of is to program my application to fulfill only one batch job every x milliseconds (in this case I was thinking of 200 milliseconds), and to ignore any other batch job that may come in within those 200 milliseconds or while the batch job is running. After those 200 milliseconds are up, or when the batch job is completed, my application will wait for and accept one job request from that point on, and it will not process any of the requests that were ignored earlier. Once my application accepts another job request, it will repeat the cycle above.
What's the best way of doing this using .Net 4.0? Are there any boiler plate code that I can simply follow as a guide?
Update
Sorry for being unclear. I have added more details about my scenario. Also I just realized that my proposed workaround above will not work. Sorry guys, lol. Here's some background information.
I have an application that builds an index from the files in a specified directory. When a file is added, deleted or modified in this directory, my application listens for these events using a FileSystemWatcher and re-indexes the files. The problem is that around 100 files can be added, deleted or modified by an external process, and the changes occur very quickly, i.e. within a few milliseconds. My end goal is to re-index the files after the external process has made its last file change. The best solution would be to modify the external process to signal my application when it has finished modifying the files I'm listening to, but that's not feasible at the moment. Thus, I have to create a workaround.
A workaround that may solve my problem is to wait for the first file change. When the first file change has occurred, wait 200 milliseconds for any other subsequent file changes. Why 200 milliseconds? Because I'm hoping, and fairly confident, that the external process can perform its file changes within 200 milliseconds. Once my application has waited 200 milliseconds, I would like it to start a task that re-indexes the files and then go back to listening for the next file change.
What's the best way of doing this?
Again, sorry for the confusion.
This question is a bit too high-level to guess at.
My guess is that your application runs as a service, your requests come into the application and land in a queue to be processed, and every 200 ms you wake the queue and pop an item off for processing.
I'm confused about the "masked as one job request". Since you mentioned you will "ignore any other batch job", my guess is you haven't arranged your code to accept the incoming requests in a queue.
Regardless, you will generally always have one application process running (your service), and if you choose you could spawn a new thread for each item you process from the queue. You can monitor how much CPU/memory this requires and adjust the firing time (200 ms) accordingly.
I may not be accurately understanding the problem, but my recommendation is to use the singleton pattern to work around this issue.
With the singleton approach, you can implement a lock on an object (the access method could potentially be something along the lines of BatchProcessor::GetBatchResults) that would then serialize all requests to the batch job results object. Once the batch has finished, the lock is released and the underlying object has the results of the batch job available.
Please keep in mind that this is a "work around". There may be a better solution that involves looking into and changing the underlying business logic that causes multiple requests to come in for a job that's processing on demand.
Update:
Here is a link for information regarding Singleton (includes code examples): http://msdn.microsoft.com/en-us/library/ff650316.aspx
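As a rough illustration of the idea (the class and method names simply mirror the hypothetical BatchProcessor::GetBatchResults mentioned above; the result type and batch logic are placeholders):

public sealed class BatchProcessor
{
    private static readonly BatchProcessor instance = new BatchProcessor();
    private readonly object gate = new object();
    private object lastResults;

    private BatchProcessor() { }

    public static BatchProcessor Instance
    {
        get { return instance; }
    }

    // All callers funnel through this lock: while one batch run is in progress,
    // the other "duplicate" requests wait and then receive the results it produced.
    public object GetBatchResults()
    {
        lock (gate)
        {
            if (lastResults == null)
            {
                lastResults = RunBatchJob();
            }
            return lastResults;
        }
    }

    private object RunBatchJob()
    {
        // Placeholder for the real batch work.
        return new object();
    }
}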
It is my understanding that the poster has some sort of application that sits and waits for incoming requests to perform a batch job. The problem is that he is receiving multiple requests within a short period of time that should actually have come in as just a single request, and, unfortunately, he is not able to fix that at the source.
So, his solution is to assume that all requests received within a 200 ms timespan are the same and to process them only once. My concern would be whether that assumption is correct, which depends entirely on the sending systems and the environment in which this is being used. The general idea would be to update a lastReceived date/time whenever a request is processed; when a new request comes in, compare the current date/time to the lastReceived date/time and only process it if the difference is greater than 200 ms.
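In code, that check might look something like the sketch below (the class, field and method names are made up for the example):

using System;

static class RequestThrottle
{
    static readonly object gate = new object();
    static DateTime lastReceived = DateTime.MinValue;

    // Returns true if this request should be processed, false if it arrived
    // within 200 ms of the previously processed request.
    public static bool ShouldProcess()
    {
        lock (gate)
        {
            DateTime now = DateTime.UtcNow;
            if ((now - lastReceived).TotalMilliseconds <= 200)
                return false;

            lastReceived = now;
            return true;
        }
    }
}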
Other possible solutions:
- You said you could not modify the sending application so that only one job request is sent, but could you add additional information to it, for instance a unique identifier?
- Could you store the parameters from the last job request, compare them with the next job request, and only process the new one if they are different?
Based on your Update
Here is an example of how you could wait 200 ms using a Timer:
// Timer here is System.Timers.Timer; FileSystemWatcher is from System.IO.
static Timer timer;
static int waitTime = 200; // in ms

static void Main(string[] args)
{
    FileSystemWatcher fsw = new FileSystemWatcher();
    fsw.Path = @"C:\temp\";
    fsw.Created += new FileSystemEventHandler(fsw_Created);
    fsw.EnableRaisingEvents = true;

    Console.ReadLine();
}

static void fsw_Created(object sender, FileSystemEventArgs e)
{
    DateTime currTime = DateTime.Now;

    if (timer == null)
    {
        // First change: start the 200 ms window.
        Console.WriteLine("Started @ " + currTime);
        timer = new Timer();
        timer.Interval = waitTime;
        timer.Elapsed += new ElapsedEventHandler(timer_Elapsed);
        timer.Start();
    }
    else
    {
        // Subsequent changes inside the window are ignored.
        Console.WriteLine("Ignored @ " + currTime);
    }
}

static void timer_Elapsed(object sender, ElapsedEventArgs e)
{
    // Stop the timer so it doesn't keep firing, then allow the next window.
    timer.Stop();

    //Start task here
    Console.WriteLine("Elapsed @ " + DateTime.Now);
    timer = null;
}