Suppose I have some UI, and some Asynchronous Task handler. I can use callback methods/events that link back to the UI and tell it to update its progress display. This is all fine.
But what if I want to change progress on the UI only sometimes? As in, every 10%, I should get a single console output/UI change that would tell me the total/completed, percentage, and the elapsed time. This would require a simple class:
class Progress
{
    int completed;
    int total;

    public Progress(int t)
    {
        total = t;
        completed = 0;
    }

    bool ShouldReportProgress()
    {
        if ((int)((completed / (double)total) * 100) % 10 == 0)
            return true;
        return false;
    }
}
I need help with two things: the method ShouldReportProgress() is naive and works incorrectly for various reasons (it can report more than once per step, and it can skip steps entirely), so I'm hoping there's a better way to do that.
I'm also assuming that a class like this MUST already exist somewhere. I could write it myself if it doesn't, but it just seems like a logical conclusion of this problem.
Here's one idea of how you could create your class. I've added a ReportIncrement property that the user can set to specify how often progress should be reported, and an Increment enum that specifies whether ReportIncrement is a fixed number (i.e. report every 15th completion) or a percentage (i.e. report every time another 10% of the total is completed).
Every time the Completed field is changed, a call is made to ReportProgress, which then checks to see if progress should actually be reported (and it also takes an argument to force reporting). The checking is done in the ShouldReport method, which determines if the current progress is greater than or equal to the last reported progress plus the increment amount:
class Progress
{
public enum Increment
{
Percent,
FixedAmount
}
public Increment IncrementType { get; set; }
public int ReportIncrement { get; set; }
public int Total { get; }
private double completed;
public double Completed
{
get { return completed; }
set
{
completed = value;
ReportProgress();
}
}
public void ReportProgress(bool onlyIfShould = true)
{
if (!onlyIfShould || ShouldReport())
{
Console.WriteLine(
$"{Completed / Total * 100:0}% complete ({Completed} / {Total})");
lastReportedAmount = Completed;
}
}
public Progress(int total)
{
Total = total;
}
private double lastReportedAmount;
private bool ShouldReport()
{
if (Completed >= Total) return true;
switch (IncrementType)
{
case Increment.FixedAmount:
return lastReportedAmount + ReportIncrement <= Completed;
case Increment.Percent:
return lastReportedAmount / Total * 100 +
ReportIncrement <= Completed / Total * 100;
default:
return true;
}
}
}
This class can then be used, for example, to report every 10%:
private static void Main()
{
var progress = new Progress(50)
{
ReportIncrement = 10,
IncrementType = Progress.Increment.Percent
};
Console.WriteLine("Starting");
for (int i = 0; i < progress.Total; i++)
{
Console.Write('.');
Thread.Sleep(100);
progress.Completed++;
}
Console.WriteLine("\nDone!\nPress any key to exit...");
Console.ReadKey();
}
Or every 10th completion:
// Same code as above, only this time we specify 'FixedAmount'
var progress = new Progress(50)
{
ReportIncrement = 10,
IncrementType = Progress.Increment.FixedAmount
};
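On the second point, about such a class already existing: .NET 4.5 and later do ship a general-purpose progress abstraction, IProgress&lt;T&gt;/Progress&lt;T&gt;, but it only handles the reporting callback (and marshalling it to the captured synchronization context), not the every-10% throttling, so you would still pair it with something like the class above. A minimal sketch:
// Progress<T> invokes the handler you give it; the throttling decision
// (report only every 10%) would still live in your own code.
IProgress<double> reporter = new Progress<double>(p => Console.WriteLine($"{p:0}% complete"));

// somewhere in the worker, once you decide a report is due:
reporter.Report(42.0);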
I've been struggling with Parallel.For and local variables. I'm trying to update percentages in a console app from a parallel for loop. I add pages to multiple documents and I would like the percentage for each document to update as it goes.
Here's my attempt so far:
I'm taking the line number with Console.CursorTop, and I want to pass it to a method that will overwrite that line.
Loop from Program.cs
Parallel.For(0, generationFile.nbOfFile, optionsParallel,
i =>
{
string fileName = $"{tmpDirectoryPath}/{i + 1}_{guid}.pdf";
AddPage(fileName, generationFile, i);
});
The AddPage method
private static void AddPage(string fileName, GenerationFile generationFile, int i)
{
var cursorPosition = Console.CursorTop;
//Add the pages
for (int j = 0; j < generationFile.SizeMaxFile; j++)
{
Page page = Page.Create(outDoc, size);
AddText(outDoc, page, font, 14, i, fileName, j, generationFile.SizeMaxFile);
for (int k = 0; k < 292; k++)
{
AddImage(outDoc, page, 30, 30);
}
outDoc.Pages.Add(page);
ConsoleManager.UpdateConsole(i, j, cursorPosition, generationFile);
}
}
The UpdateConsole method
public static void UpdateConsole(int fileNumber, double progression, int cursorPosition, GenerationFile generationFile)
{
progression = (progression / 100) * generationFile.ArchiveESC;
Console.ForegroundColor = ConsoleColor.White;
Console.SetCursorPosition(0, cursorPosition);
Console.WriteLine($"\rFile #{fileNumber + 1}/{generationFile.SizeMaxFile} being created: {progression}% ", Console.ForegroundColor);
}
I think everything works fine, except for cursorPosition, which takes one value at the beginning and never changes, so the same line keeps getting updated. I understand this has something to do with local and/or shared variables, but I'm fairly new to parallel processing, so even with the other threads on this topic and MSDN I don't understand what to do.
The way I prefer to handle progress reporting is to have all worker threads report to a shared progress field, and to have a separate timer that reads this field and reports the progress to the user. This lets me control how often progress is reported, regardless of how fast items are processed. I also want an abstraction layer that allows different ways to report progress; after all, the same method might be used from a console, from a UI, or not at all.
For example something like this:
public interface IMyProgress
{
void Increment(int incrementValue);
}
public sealed class MyProgress : IMyProgress, IDisposable
{
private readonly int totalItems;
private readonly Timer myTimer;
private volatile int progress;
private int lastUpdatedProgress;
public MyProgress(TimeSpan progressFrequency, int totalItems)
{
this.totalItems = totalItems;
myTimer = new Timer(progressFrequency.TotalMilliseconds);
myTimer.Elapsed += OnElapsed;
myTimer.Start();
}
private void OnElapsed(object sender, ElapsedEventArgs e)
{
var currentProgress = progress;
if (lastUpdatedProgress != currentProgress)
{
lastUpdatedProgress = currentProgress;
var percent = currentProgress * 100 / (double)totalItems;
Console.Write($"\rWork progress: {percent}%");
}
}
public void Increment(int incrementValue) => Interlocked.Add(ref progress, incrementValue);
public void Dispose() => myTimer?.Dispose();
}
This can be called from a parallel method like:
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
var myWorkItems = Enumerable.Range(1, 10000).ToList();
using var progress = new MyProgress(TimeSpan.FromSeconds(1), myWorkItems.Count);
DoProcessing(myWorkItems, progress);
}
private static void DoProcessing(IEnumerable<int> items, IMyProgress progress)
{
Parallel.ForEach(items, item =>
{
Thread.Sleep(20);
progress.Increment(1);
});
}
I would be a bit careful when using carriage returns. In my experience console applications tend to be used by other programs as much as by humans, and then it is likely that the output will be redirected to a file, which does not play well with carriage returns. So I would try to make the output look reasonable even when it is redirected.
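For instance (assuming .NET 4.5 or later, where Console.IsOutputRedirected is available), the reporting line in OnElapsed above could fall back to plain lines when the output is not a live console:
// A sketch: rewrite the current line only when writing to a real console window.
if (Console.IsOutputRedirected)
    Console.WriteLine($"Work progress: {percent}%");
else
    Console.Write($"\rWork progress: {percent}%");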
I would avoid trying to move the cursor around. I have tried that, but the result was unsatisfactory, YMMV.
For example, I have 3 progress bars next to each other in a 'progress bar group'. If I set the progress bar group's progress to 50%, the first progress bar will be full, the second progress bar will be half full, and the third progress bar will be unchanged.
I know that this has something to do with math and fractions, but I've been racking my head over this for hours and I can't figure out a good way of doing this.
The details I gave above aren't actually my real problem; they're just there to help you understand my situation better. Here's the real thing:
So you can probably get what I'm trying to do now by looking at that image. If you happen to have some code lying around for this, please share it. If you don't, then don't waste your time on me and just tell me the general idea instead.
Here's a class that might work for you:
public class JoinedProgressBar
{
private List<ProgressBar> _progressBars;
public JoinedProgressBar(List<ProgressBar> progressBars)
{
_progressBars = progressBars ?? new List<ProgressBar>();
}
public void UpdateBarsPercent(int value)
{
UpdateBars(value * GetSum() / 100);
}
public void UpdateBars(int value)
{
var remaining = value;
for(int i = 0; i < _progressBars.Count; i++)
{
_progressBars[i].Value =
remaining <= _progressBars[i].Minimum ? _progressBars[i].Minimum :
remaining >= _progressBars[i].Maximum ? _progressBars[i].Maximum : remaining;
remaining -= _progressBars[i].Maximum;
}
}
public int GetSum()
{
var bars = _progressBars.Select(pb => pb.Maximum).ToList();
return bars.Count > 0 ? bars.Sum() : 0;
}
public void SetOverallMaximum(int maximum)
{
for (int i = 0; i < _progressBars.Count; i++)
{
_progressBars[i].Minimum = 0;
_progressBars[i].Maximum = maximum / _progressBars.Count;
}
}
}
Sample usage:
var jpb = new JoinedProgressBar(new List<ProgressBar>() { progressBar1, progressBar2, progressBar3 });
for(int i = 0; i <= 100; i += 10)
{
jpb.UpdateBarsPercent(i);
await Task.Delay(1000);
}
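To make the arithmetic concrete: with the three bars left at their default range of 0 to 100, GetSum() returns 300, so UpdateBarsPercent(50) calls UpdateBars(150). The first bar is clamped to its maximum (full), the second receives the remaining 50 (half full), and the third stays at 0, which is exactly the behaviour described in the question.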
I imagined the result would be a negative value, due to not locking while multiple threads share the same object. I have tested this many times with both release and debug builds, and every time the result is correct. Why is it still correct?
Code :
static BankAccount ThisbankAccount = new BankAccount(10000);
public static void WithdrawMoney()
{
for(int i = 0; i < 1000; i++)
ThisbankAccount.WithdrawMoney(25);
}
static void Main(string[] args)
{
Thread client1 = new Thread(WithdrawMoney);
Thread client2 = new Thread(WithdrawMoney);
Thread client3 = new Thread(WithdrawMoney);
client1.Start();
client2.Start();
client3.Start();
client3.Join();
Console.WriteLine( ThisbankAccount.Balance);
Console.ReadLine();
}
}
public class BankAccount
{
object Acctlocker = new object();
public BankAccount(int initialAmount)
{
m_balance = initialAmount;
}
public void WithdrawMoney(int amnt)
{
// lock(Acctlocker)
// {
if (m_balance - amnt >= 0)
{
m_balance -= amnt;
}
// }
}
public int Balance
{
get
{
return m_balance;
}
}
private int m_balance;
}
Just because something works now doesn't mean it is guaranteed to work. Race conditions are hard to trigger and might take years to surface. And when they surface, they can be very hard to track down and diagnose.
To see your problem in action, change this code:
if (m_balance - amnt >= 0)
{
m_balance -= amnt;
}
to:
if (m_balance - amnt >= 0)
{
Thread.Sleep(10);
m_balance -= amnt;
}
That introduces a slow enough code path to highlight the problem really easily.
The reason you aren't spotting it with your current code is that the operations you are doing (subtraction and comparisons) are very fast. So the window for the race condition is very small - and you are lucky enough for it not to occur. But, over unlimited time, it definitely will occur.
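And when you do want the withdrawal to be correct under concurrency, the lock you already have commented out is the fix, because it makes the balance check and the subtraction one atomic step:
public void WithdrawMoney(int amnt)
{
    lock (Acctlocker) // no other thread can get between the check and the subtraction
    {
        if (m_balance - amnt >= 0)
        {
            m_balance -= amnt;
        }
    }
}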
I'm making an application whose job is to generate two lists and display them on demand. As well as update the values every second.
I need to update the list in such a way that the oldest value in the list is replaced first. How would I do that? Below is my code in its current state.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Data_Collector
{
//What is needed?
//Need to generate a list of both metric values, and imperial values.
//Need to be able to display a list
public class IMeasuringDevice_Helper
{
private int limit; //Limits the list size.
private float RNDouble;
public void MetricValueGenerator()
{
limit = limit + 1;
Console.WriteLine("Limit Level = " + limit);
if (limit <= 10)
{
List<float> MetricValueGenerated = new List<float>();
Random rnd = new Random();
float rndINT = rnd.Next(1, 10);
RNDouble = rndINT / 10;
Console.WriteLine(RNDouble);
MetricValueGenerated.Add(RNDouble);
}
else
{
Console.WriteLine("limit reached");
}
}
public void ImperialValueGenerator()
{
//TODO
}
}
}
You will need a Queue for this, but you will need to extend it. The default C# Queue&lt;T&gt; is first-in, first-out (exactly the semantics you want), but it does not enforce a size limit the way your code currently does; it simply grows (by a growth factor) when full.
So you will want to derive from Queue&lt;T&gt; and shadow the Enqueue method (it isn't virtual, so you can't truly override it) to do what you want. It will probably look a little like this:
public class BoundedQueue<T> : Queue<T>
{
private readonly int _bound;
public BoundedQueue(int bound)
{
_bound = bound;
}
public new void Enqueue(T item)
{
if(Count >= _bound)
{
throw new InvalidOperationException("limit reached"); // IndexOutOfRangeException is reserved for the runtime
// If simply throwing an exception isn't cool, you can also do the following to pop off the oldest item:
// base.Dequeue();
}
base.Enqueue(item);
}
}
The only thing to be aware of is that when you turn this into some other kind of object for display, you may see it in the reverse of the order you expect, as the oldest item will be at the 'top' of the queue. You can sort this out by simply calling the LINQ Reverse() extension method.
If you don't want to do any class extensions as #YYY has suggested, pretty much replace List with Queue, replace .Add() with .Enqueue(), and instead of oldestValue = yourList[oldestIndex] use oldestValue = yourQueue.Dequeue().
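A minimal sketch of that swap (the member names here are just illustrative):
// Plain Queue<float> capped at 10 entries: when it is full, the oldest
// value is dequeued before the new one is added.
private readonly Queue<float> metricValues = new Queue<float>();
private const int Limit = 10;

public void AddMetricValue(float value)
{
    if (metricValues.Count >= Limit)
        metricValues.Dequeue();   // oldest value is replaced first
    metricValues.Enqueue(value);
}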
Aside from your question: your variables should start with a lowercase letter, and watch the division in RNDouble = rndINT / 10; (if rndINT were declared as an int, integer division would make it 0 most of the time, so divide by 10.0 instead of 10).
Ok so I was bored... (also I wouldn't go with this approach, but I'm guessing you're learning and haven't been taught about Queues, so this might help with Lists):
public class MeasuringDevice_Helper
{
private const int LIMIT = 10; // This is a constant value, so should be defined IN_CAPS in class definition
List<double> metricValues = new List<double>(LIMIT); // This needs to be at class level so it doesn't get lost
Random rnd = new Random(); // This is used frequently so define at class level
public void GenerateMetricValue() // This is now named like the action it performs
{
Console.WriteLine("Current metric values = " + metricValues.Count);
if (metricValues.Count < LIMIT) // This should just be < not <=
{
int rndInt = rnd.Next(1, 10);
double rndDouble = rndInt / 10.0; // An int divided by an int stays an int, hence 10.0
Console.WriteLine("Added " + rndDouble);
metricValues.Add(rndDouble);
}
else
{
Console.WriteLine("limit reached");
}
}
public double PopOldestMetricValue()
{
double value = metricValues[0];
metricValues.RemoveAt(0);
return value;
}
}
I have a task to show the difference between synchronized and unsynchronized multithreading. Therefore I wrote an application simulating withdrawing money from clients' bank accounts. Each of a number of threads chooses a random user and withdraws money from the account.
Every thread should withdraw from every account once. The first time the threads are synchronized, but the second time they are not. So there should be a difference between the account balances produced by synchronized and unsynchronized threads, and the difference should vary with the number of users and threads. But in my application I only get a difference with 1000 threads. I need the unsynchronized threads' results to differ clearly from the synchronized ones.
The class User:
public class User : IComparable
{
public string Name { get; set; }
public int Start { get; set; }
public int FinishSync { get; set; }
public int FinishUnsync { get; set; }
public int Hypothetic { get; set; }
public int Differrence { get; set; }
...
}
The method which withdraws money:
public void Withdraw(ref List<User> users, int sum, bool isSync)
{
int ind = 0;
Thread.Sleep(_due);
var rnd = new Random(DateTime.Now.Millisecond);
//_used is the list of users already withdrawn from by this thread
while (_used.Count < users.Count)
{
while (_used.Contains(ind = rnd.Next(0, users.Count))) ; //choosing a random user
if (isSync) //isSync = whether the threads are synchronized
{
if (Monitor.TryEnter(users[ind]))
{
try
{
users[ind].FinishSync = users[ind].FinishSync - sum;
}
finally
{
Monitor.Exit(users[ind]);
}
}
}
else
{
lock (users[ind])
{
users[ind].FinishUnsync = users[ind].FinishUnsync - sum;
}
}
_used.Add(ind);
}
done = true;
}
And the threads are created this way:
private void Withdrawing(bool IsSync)
{
if (IsSync)
{
for (int i = 0; i < _num; i++)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, true); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
_threads[i].Join();
}
}
else
{
for (int i = 0; i < _num; ++i)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, false); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
}
}
}
I've changed the Withdrawer class this way, because the problem could have been in creating the threads separately from the delegate:
class Withdrawer
{
private List<int>[] _used;
private int _due;
private int _pause;
public int done;
private List<Thread> _threads;
public Withdrawer(List<User> users, int n, int due, int pause, int sum)
{
_due = due;
_pause = pause;
done = 0;
_threads = new List<Thread>(users.Count);
InitializeUsed(users, n);
CreateThreads(users, n, sum, false);
_threads.Clear();
while (done < n) ;
Array.Clear(_used,0,n-1);
InitializeUsed(users, n);
CreateThreads(users, n, sum, true);
}
private void InitializeUsed(List<User> users, int n)
{
_used = new List<int>[n];
for (int i = 0; i < n; i++)
{
_used[i] = new List<int>(users.Count);
for (int j = 0; j < users.Count; j++)
{
_used[i].Add(j);
}
}
}
private void CreateThreads(List<User> users, int n, int sum, bool isSync)
{
for (int i = 0; i < n; i++)
{
_threads.Add(new Thread(delegate() { Withdraw(users, sum, isSync); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
}
}
public void Withdraw(List<User> users, int sum, bool isSync)
{
int ind = 0;
var rnd = new Random();
while (_used[int.Parse(Thread.CurrentThread.Name)].Count > 0)
{
int x = rnd.Next(_used[int.Parse(Thread.CurrentThread.Name)].Count);
ind = _used[int.Parse(Thread.CurrentThread.Name)][x];
if (isSync)
{
lock (users[ind])
{
Thread.Sleep(_due);
users[ind].FinishSync -= sum;
}
}
else
{
Thread.Sleep(_due);
users[ind].FinishUnsync -= sum;
}
_used[int.Parse(Thread.CurrentThread.Name)][x] = _used[int.Parse(Thread.CurrentThread.Name)][_used[int.Parse(Thread.CurrentThread.Name)].Count - 1];
_used[int.Parse(Thread.CurrentThread.Name)].RemoveAt(_used[int.Parse(Thread.CurrentThread.Name)].Count - 1);
Thread.Sleep(_pause);
}
done++;
}
}
Now the problem is that the FinishUnsync values are correct, while the FinishSync values are absolutely not.
Thread.Sleep(_due);
and
Thread.Sleep(_pause);
are used to "hold" the resource, because my task says the thread should acquire the resource, hold it for _due ms, and after processing wait _pause ms before finishing.
Your code isn't doing anything useful, and doesn't show the difference between synchronized and unsynchronized access. There are many things you'll need to address.
Comments in your code say that _used is a list of users that have been accessed by the thread. You're apparently creating that on a per-thread basis. If that's true, I don't see how. From the looks of things I'd say that _used is accessible to all threads. I don't see anywhere that you're creating a per-thread version of that list. And the naming convention indicates that it's at class scope.
If that list is not per-thread, that would go a long way towards explaining why your data is always the same. You also have a real race condition here because you're updating the list from multiple threads.
Assuming that _used really is a per-thread data structure...
You have this code:
if (isSync) //isSync = whether the threads are synchronized
{
if (Monitor.TryEnter(users[ind]))
{
try
{
users[ind].FinishSync = users[ind].FinishSync - sum;
}
finally
{
Monitor.Exit(users[ind]);
}
}
}
else
{
lock (users[ind])
{
users[ind].FinishUnsync = users[ind].FinishUnsync - sum;
}
}
Both of these provide synchronization. In the isSync case, a second thread will fail to do its update if a thread already has the user locked. In the second case, the second thread will wait for the first to finish, and then will do the update. In either case, the use of Monitor or lock prevents concurrent update.
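For reference, lock (obj) { ... } is essentially compiler shorthand for the Monitor pattern, which is why both branches end up synchronized:
// lock (users[ind]) { ... } expands to roughly:
Monitor.Enter(users[ind]);
try
{
    users[ind].FinishUnsync = users[ind].FinishUnsync - sum;
}
finally
{
    Monitor.Exit(users[ind]);
}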
Still, you would potentially see a difference if multiple threads could be executing the isSync code at the same time. But you won't see a difference because in your synchronized case you never let more than one thread execute. That is, you have:
if (IsSync)
{
for (int i = 0; i < _num; i++)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, true); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
_threads[i].Join();
}
}
else
{
for (int i = 0; i < _num; ++i)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, false); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
}
}
So in the IsSync case, you start a thread and then wait for it to complete before you start another thread. Your code is not multithreaded. And in the "unsynchronized" case you're using a lock to prevent concurrent updates. So in one case you prevent concurrent updates by only running one thread at a time, and in the other case you prevent concurrent updates by using a lock. There will be no difference.
Something else worth noting is that your method of randomly selecting a user is highly inefficient, and could be part of the problem you're seeing. Basically what you're doing is picking a random number and checking to see if it's in a list; if it is, you try again, and the list keeps growing. This is the classic coupon-collector situation: covering all n items with uniform random draws takes on the order of n ln n attempts on average. Quick experimentation shows that I have to generate about 7,000 random numbers between 0 and 1,000 before I get all of them. So your threads spend a huge amount of time trying to find the next unused account, which makes them less likely to be processing the same user account at the same time.
You need to do three things. First, change your Withdraw method so it does this:
if (isSync) //isSync = whether the threads are synchronized
{
// synchronized. prevent concurrent updates.
lock (users[ind])
{
users[ind].FinishSync = users[ind].FinishSync - sum;
}
}
else
{
// unsynchronized. It's a free-for-all.
users[ind].FinishUnsync = users[ind].FinishUnsync - sum;
}
Your Withdrawing method should be the same regardless of whether IsSync is true or not. That is, it should be:
for (int i = 0; i < _num; ++i)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, IsSync); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
}
Now you always have multiple threads running. The only difference is whether access to the user account is synchronized.
Finally, make your _used list a list of indexes into the users list. Something like:
_used = new List<int>(users.Count);
for (int i = 0; i < users.Count; ++i)
{
_used.Add(i);
}
Now, when you select a user, you do this:
var x = rnd.Next(_used.Count);
ind = _used[x];
// now remove the item from _used
_used[x] = _used[_used.Count-1];
_used.RemoveAt(_used.Count-1);
That way you can select all users much more efficiently: it takes exactly n random numbers to cover n users.
A couple of nitpicks:
I have no idea why you have the Thread.Sleep call in the Withdraw method. What benefit do you think it provides?
There's no real reason to pass DateTime.Now.Millisecond to the Random constructor. Just calling new Random() will use Environment.TickCount for the seed. Unless you really want to limit the seed to numbers between 0 and 1,000.