I am trying to serialize a thread (or Process) to a file and execute the thread on a different machine at some other time.
Actually what I have is something like this:
for (BigInteger i = 0; i < ABigIntegerVariable; i++)
{
    // My Calculation
}
I want to suspend the computation and save its state, and resume it later with the saved state, possibly on a different machine.
Note: I can't simply save the data when the program closes, because the state includes objects, and I'm not sure it is valid to serialize an object like that.
Thank you
Can't you just save your current loop iterator value and whatever the calculation state is at the time you want to "move" it? It depends, of course, on what exactly is happening inside that loop, but maybe even crude serialization after each iteration would be enough for you to restart at the new location?
Of course, your loop would have to start from the saved data, not from i = 0, but as I said, you didn't share any details about what is going on in // My Calculation, so either put more details in the question or work it out on your own.
Also, as per the comment from Sidewinder94, there's no problem with serializing objects unless you are doing it wrong.
One additional thought: do those calculations depend on the loop iterator or on the results of previous iterations? Because if not, you could split them into multiple threads/tasks and take advantage of parallel computation.
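To make the "save the iterator and the state" idea concrete, here is a minimal sketch. Everything in it is an illustrative assumption, not from the original post: the `LoopCheckpoint` type, the `Checkpointing.Run` method, and the running-sum stand-in for "My Calculation". BigIntegers are stored as strings so they serialize cleanly to JSON, and the checkpoint file can be copied to another machine to resume there.

```csharp
using System;
using System.IO;
using System.Numerics;
using System.Text.Json;

// Hypothetical checkpoint type: holds the iterator plus whatever
// accumulated state the loop body produces (here, a running sum).
public class LoopCheckpoint
{
    public string Iterator { get; set; } = "0";
    public string Accumulator { get; set; } = "0";
}

public static class Checkpointing
{
    public static BigInteger Run(string checkpointFile, BigInteger limit)
    {
        // Resume from the saved state if a checkpoint file exists.
        var cp = File.Exists(checkpointFile)
            ? JsonSerializer.Deserialize<LoopCheckpoint>(File.ReadAllText(checkpointFile))!
            : new LoopCheckpoint();

        BigInteger i = BigInteger.Parse(cp.Iterator);
        BigInteger acc = BigInteger.Parse(cp.Accumulator);

        for (; i < limit; i++)
        {
            acc += i; // stand-in for "My Calculation"

            // Periodically persist the state so the loop can be stopped
            // and resumed later, possibly on a different machine. We save
            // i + 1 because iteration i is already folded into acc.
            if (i % 1000 == 0)
            {
                cp.Iterator = (i + 1).ToString();
                cp.Accumulator = acc.ToString();
                File.WriteAllText(checkpointFile, JsonSerializer.Serialize(cp));
            }
        }
        return acc;
    }
}
```

The key design point is that you don't serialize the thread itself (which isn't possible); you serialize the minimal data needed to reconstruct its position.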
When iterating through a really long array, or doing a complicated calculation for each index, is there a way to yield after the loop has run for a maximum amount of time? The maximum amount of time here is the maximum time per frame.
For example:
for (int i = 0; i < 100000; i++)
{
    // do something complicated
    if (maximum amount of time reached /* right before the user feels lag */)
        yield; // come back and resume i where it last yielded
}
// order does not matter
So basically, what I want to achieve is high CPU usage; however, I do not want to go beyond the point at which the user will experience lag.
edit:
Sorry for the confusion. A clearer example might be 3D rendering in a program such as Blender. When the user hits render, it calculates each pixel to determine what color it needs to be. When one looks at the CPU usage, it is close to 100%; however, the program does not freeze while it calculates the pixels, even though it uses as much CPU as possible.
If you are running your code on multiple CPUs (as implied by the multithreading tag), there should (in the usual case) be no need to stop executing the loop in order for your user interface to remain responsive. Perform the calculation on one or more background threads, and have those background threads update the UI thread as appropriate.
is there a way to yield after iterating through the array for the maximum amount of time
If by yield you mean just stop (and restart from the beginning next frame), then sure. You can pass a CancellationToken to your thread, and have it periodically check for a cancellation request. You can use a timer at the start of each frame to fire off that request, or more likely, use an existing mechanism that already does end-of-frame processing to trigger the thread to stop work.
If by yield you mean stop where I am and resume at that place at the start of the next frame, I would ask why stop given that you have multiple CPUs. If you must stop, you can use the CancellationToken as before, but just keep track of where you are in the loop, resuming from there instead of at the start.
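The "stop where I am and resume next frame" variant can be sketched like this. The `ProcessUntilCancelled` name and the `int[]` work items are illustrative assumptions; the pattern is simply a cooperative cancellation check plus a returned resume index:

```csharp
using System;
using System.Threading;

public static class FrameWork
{
    // Processes items[startIndex..] until cancellation is requested,
    // returning the index to resume from on the next frame.
    public static int ProcessUntilCancelled(int[] items, int startIndex, CancellationToken token)
    {
        int i = startIndex;
        for (; i < items.Length; i++)
        {
            if (token.IsCancellationRequested)
                break; // stop cleanly; the caller keeps 'i' for next frame
            // do something complicated with items[i]
        }
        return i;
    }
}
```

Each frame you would create (or reuse) a `CancellationTokenSource`, arm it with the frame's time budget via `CancelAfter`, and feed the returned index back in as `startIndex` next frame.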
So basically, what I want to achieve is high CPU usage; however, I do not want to go beyond the point at which the user will experience lag.
You can never go over 100% CPU usage by definition. To avoid the feeling of lag when the CPU utilization is high, use thread priorities to ensure that the foreground thread has a higher priority than your background threads.
Unless I'm missing something....
const double MAX_PROCESSTIME = 50.0;
DateTime loopStart = DateTime.Now;
for (int i = 0; i < 100000; i++)
{
    // do something complicated;
    double timePassed = (DateTime.Now - loopStart).TotalMilliseconds;
    if (timePassed > MAX_PROCESSTIME)
    {
        break;
    }
}
How about considering a push model instead: iterate in parallel and raise an event so the consumer can treat each item as it arrives?
Usually the solution to this problem is to move the work to a separate thread that can't interrupt the UI, and let the UI or a controller thread cancel the work when called for.
Another option: I've read somewhere that typical humans have a perception threshold of about 25 milliseconds; two events are perceived to occur at the same time as long as they are less than 25 milliseconds apart. Sadly, I can no longer find the original reference, but I did at least find a corroborating article. You can use this fact to set a timer for about that long and let the process run as much as it wants until the timer goes off. You may also want to account for the atypical human, especially if your app caters to people who may have above-average reflexes.
I have created a Windows Form application that reads in a text file, rearranges the data, and writes to a new text file. I have noticed that it slows down exponentially as it runs. I have been using tracepoints, stopwatches, and datetime to figure out why each iteration is taking longer than the previous, but I can't figure it out. My best guess would be that it might have something to do with the way I'm initializing variables.
I'm not sure how helpful this snippet of code will be but maybe it will give some insight into my problem:
while (cuttedWords.Any())
{
    var variable = cuttedWords.TakeWhile(x => x != separator).ToArray();
    cuttedWords = cuttedWords.Skip(variable.Length + 1);
    sortDataObject.SortDataMethod(variable, b);
    if (sortDataObject.virtualPara)
    {
        if (!virtualParaUsed)
        {
            listOfNames = sortDataObject.findListOfNames(backgroundWords, ref IDforCounting, countParametersTable);
        }
        virtualParaUsed = true;
        printDataObject.WriteFileVirtual(fileName, ID, sortDataObject.listNames[0], sortDataObject.listNames[1],
            sortDataObject.unit, listOfNames, sortDataObject.virtualNames);
        sortDataObject.virtualNames.Clear();
    }
    else
    {
        int[] indexes = checkedListBox1.CheckedIndices.Cast<int>().ToArray();
        printDataObject.WriteFile(fileName, ID, sortDataObject.listNames[0], sortDataObject.listNames[1],
            sortDataObject.unit, sortDataObject.hexValue[0], sortDataObject.stringShift, sortDataObject.sign,
            sortDataObject.SFBinary[0], sortDataObject.wordValue, sortDataObject.conversions, sortDataObject.stringData, indexes, sortDataObject.conType);
    }
    decimal sum = ((decimal)IDforCounting) / countParametersTable * 100;
    int sum2 = (int)sum;
    backgroundWorker1.ReportProgress(sum2);
    ID++;
    IDforCounting++;
    b++;
}
What is strange to me is that I know that each loop runs in a matter of milliseconds, but from the start of one loop to the start of the next, the time keeps increasing.
I apologize if this is not enough information to analyze my issue, but I'm not sure what else I can provide without showing my entire solution.
Thank you.
EDIT: A better question might be: what is a good way to analyze performance if stopwatches aren't doing the trick? I'd rather not have to download a profiler.
If it's taking longer and longer on each iteration, it's probably related to the initial cuttedWords.Any().
What type is cuttedWords? If it's a database-backed enumerable, it will re-issue the SQL statement on every iteration, which may or may not be what you want.
On the other hand, if this is a producer-consumer scenario, it may be that cuttedWords is locked by the producer, causing the consumer to block while waiting for the producer to complete its action.
Also, ReportProgress causes the BackgroundWorker to raise an event on the thread that created it, potentially triggering UI updates, so try removing that line and see if it helps. Then replace it with code that only calls ReportProgress when the progress value has actually changed.
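There is also a pattern visible in the posted loop itself that fits the "taking longer each iteration" symptom: each `cuttedWords = cuttedWords.Skip(...)` stacks another deferred `Skip` on top of the previous ones, so each pass re-walks everything already consumed, giving quadratic behavior overall. A sketch of walking the sequence exactly once instead (the `ChunkSplitter.Split` name is an illustrative assumption):

```csharp
using System;
using System.Collections.Generic;

public static class ChunkSplitter
{
    // Splits 'words' into chunks delimited by 'separator', enumerating
    // the sequence a single time instead of stacking deferred Skip() calls.
    public static List<string[]> Split(IEnumerable<string> words, string separator)
    {
        var chunks = new List<string[]>();
        var current = new List<string>();
        foreach (var w in words)
        {
            if (w == separator)
            {
                chunks.Add(current.ToArray());
                current.Clear();
            }
            else
            {
                current.Add(w);
            }
        }
        if (current.Count > 0)
            chunks.Add(current.ToArray()); // trailing chunk with no separator
        return chunks;
    }
}
```

Each resulting array plays the role of `variable` in the original loop, and the per-iteration cost stays constant.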
How do I return the value of the difference between the two pixel maps being compared? I want to know the difference so I can use a while loop to delay execution until the two pixel maps are within a certain tolerance. The reason is that I want to wait until images located in different elements on a web page have loaded before the rest of the code executes (for an automated test). I am currently using Assert.IsTrue to compare the two with a 5 percent tolerance, but I'm not sure how to turn this into a loop.
ArtOfTest.Common.PixelMap expected = ArtOfTest.Common.PixelMap.FromBitmap(expectedbmp);
ArtOfTest.Common.PixelMap actual = ArtOfTest.Common.PixelMap.FromBitmap(actualbmp);
Assert.IsTrue(expected.Compare(actual,5.0));
It sounds like you're asking how to write a while loop that performs a test and waits until the condition is fulfilled. Whether or not you're doing this in an automated test doesn't really matter. In either case, assuming something is going on in a background thread that will eventually make Compare on the two PixelMaps return true:
while (!expected.Compare(actual, 5.0))
{
    const int numberOfMillisecondsToSleep = 1000;
    System.Threading.Thread.Sleep(numberOfMillisecondsToSleep);
}
I don't fully understand the context of the question, and this assumes that if you sleep, the condition will eventually be fulfilled. If not, this is an endless loop, so be careful.
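To guard against that endless-loop case, the polling can be wrapped with a timeout. This is a generic sketch (the `WaitHelper.WaitUntil` name is an illustrative assumption); the condition delegate would wrap the `expected.Compare(actual, 5.0)` call from above:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class WaitHelper
{
    // Polls 'condition' every 'pollInterval' until it returns true or
    // 'timeout' elapses; returns whether the condition was ever met.
    public static bool WaitUntil(Func<bool> condition, TimeSpan timeout, TimeSpan pollInterval)
    {
        var sw = Stopwatch.StartNew();
        while (!condition())
        {
            if (sw.Elapsed > timeout)
                return false; // give up instead of looping forever
            Thread.Sleep(pollInterval);
        }
        return true;
    }
}
```

In the test you would then assert on the return value, e.g. `Assert.IsTrue(WaitHelper.WaitUntil(() => expected.Compare(actual, 5.0), TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(1)));`, so a never-loading image fails the test instead of hanging it.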
I have a multithreaded (using a threadpool) C# program that reads from a text file containing logs and batch inserts them into a MongoDB collection. I want a consistent and precise way to measure how long it takes to insert the whole file into the collection.
I can't really call Thread.Join (because it's a thread pool), and I can't use a stopwatch because the inserts are running on separate threads.
What's the next best thing?
The current way I'm doing it is with the timer on my smartphone: I repeatedly call db.collection.stats() and wait until the count is the same as the number of logs in the file...
If you're using C# 4.0+, I'd recommend you use the CountdownEvent class. Using that class, you can just create an instance with the number of logs for example as the counter:
var countdown = new CountdownEvent(numberOfLogs);
Then, each time you complete a write to MongoDB, you signal from the worker thread:
countdown.Signal(); // decrement counter
And then, in your main process (or another thread):
countdown.Wait(); // returns when the count is zero
// All writes complete
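Putting those pieces together with a Stopwatch gives an end-to-end measurement. This is a minimal sketch: the `TimedInsert.MeasureAll` name is an illustrative assumption, and the `insertBatch` delegate stands in for the real MongoDB write:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class TimedInsert
{
    // Times how long it takes 'numberOfLogs' queued work items to finish.
    // 'insertBatch' is a stand-in for the real MongoDB insert.
    public static TimeSpan MeasureAll(int numberOfLogs, Action<int> insertBatch)
    {
        using (var countdown = new CountdownEvent(numberOfLogs))
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < numberOfLogs; i++)
            {
                int batch = i; // capture loop variable for the closure
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    insertBatch(batch); // the actual write
                    countdown.Signal(); // decrement the counter
                });
            }
            countdown.Wait();           // blocks until every write has signalled
            sw.Stop();
            return sw.Elapsed;
        }
    }
}
```

Because the Stopwatch starts before the work is queued and stops after `Wait()` returns, it measures the whole import regardless of how many pool threads were involved.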
With mongostat (a command-line tool) you can see exactly what is going on in the MongoDB server. It will give you inserts/queries etc. per second. It won't automatically stop when the import is done, but it will definitely give you insight into performance: the "inserts" figure will drop to 0 once you're done importing.
I have a multi-threaded application, and in a certain section of code I use a Stopwatch to measure the time of an operation:
MatchCollection matches = regex.Matches(text); //lazy evaluation
Int32 matchCount;
//inside this bracket program should not context switch
{
//start timer
MyStopwatch matchDuration = MyStopwatch.StartNew();
//actually evaluate regex
matchCount = matches.Count;
//adds the time regex took to a list
durations.AddDuration(matchDuration.Stop());
}
Now, the problem is if the program switches control to another thread somewhere else while the stopwatch is started, then the timed duration will be wrong. The other thread could have done any amount of work before the context switches back to this section.
Note that I am not asking about locking, these are all local variables so there is no need for that. I just want the timed section to execute continuously.
edit: another solution could be to subtract the context-switched time to get the actual time done doing work in the timed section. Don't know if that's possible.
You can't do that. Otherwise it would be very easy for any application to get complete control over the CPU timeslices assigned to it.
You can, however, give your process a high priority to reduce the probability of a context-switch.
Here is another thought:
Assuming that you don't measure the execution time of a regular expression just once but multiple times, you should not see the average execution time as an absolute value but as a relative value compared to the average execution times of other regular expressions.
With this thinking you can compare the average execution times of different regular expressions without knowing the times lost to context switches. The time lost to context switches would be about the same in every average, assuming the environment is relatively stable with regards to CPU utilization.
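One way to act on that thinking: time each regular expression over many iterations and compare aggregate figures rather than trusting any single run. Taking the minimum is a common micro-benchmarking trick, since the fastest observed run is the one least polluted by context switches. A minimal sketch (the `MicroBench.Fastest` name is an illustrative assumption):

```csharp
using System;
using System.Diagnostics;

public static class MicroBench
{
    // Runs 'action' 'iterations' times and returns the fastest observed
    // time; the minimum is the measurement least affected by context
    // switches and other background noise.
    public static TimeSpan Fastest(Action action, int iterations)
    {
        var best = TimeSpan.MaxValue;
        for (int i = 0; i < iterations; i++)
        {
            var sw = Stopwatch.StartNew();
            action();
            sw.Stop();
            if (sw.Elapsed < best)
                best = sw.Elapsed;
        }
        return best;
    }
}
```

Calling `MicroBench.Fastest(() => { matchCount = regex.Matches(text).Count; }, 100)` for each regex, and comparing those minima, gives the relative ranking described above without needing to eliminate context switches at all.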
I don't think you can do that.
A "best effort", for me, would be to put your method in a separate thread, and use
Thread.CurrentThread.Priority = ThreadPriority.Highest;
to avoid as much as possible context switching.
If I may ask, why do you need such a precise measurement, and why can't you extract the function and benchmark it in its own program, if that's the point?
Edit: depending on the use case, it may be useful to use
Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(2); // Or whatever core you want to stick to
to avoid switching between cores.