Lock statement exception in C#

I have a method Add and a unit test ParallelAddAmountTest. Everything works, but an exception is thrown inside the lock block, which seems like bad practice. How should I refactor the Add method?
public void Add(int amountToAdd)
{
    lock (balanceLock)
    {
        if (Amount + amountToAdd > MaxAllowedAmount)
        {
            throw new ArgumentException("Cannot add the specified amount: the sum exceeds account limit.");
        }
        if (Amount + amountToAdd <= MaxAllowedAmount)
        {
            AddAmountAndEmulateTransactionDelay(amountToAdd);
        }
    }
}
Unit test for the method Add:
public void ParallelAddAmountTest()
{
    var balance = new Balance(90000);
    var withdrawTenThTask = Task.Run(() => balance.Add(10000));
    var withdrawFiveThTask = Task.Run(() => balance.Add(10000));
    Assert.Throws(
        typeof(AggregateException),
        () => Task.WaitAll(withdrawFiveThTask, withdrawTenThTask),
        "Cannot add the specified amount: the sum exceeds account limit.");
    Assert.AreEqual(100000, balance.Amount);
}

Related

My Recursive method has a stackoverflow when used with a particular dataset

I am trying to search a list for combinations that add up to a particular value (I've tried a knapsack approach, but it was checking too much for my particular scenario and taking too long). The code works and finds matches for most of the datasets I throw at it but fails for some others.
None of my lists has more than 100 entries.
I had assumed that because my recursive call is wrapped in a sensible if statement, a stack overflow would not occur, but it blows up on this line: if (x.Sum(y => y.Amount) == limit)
Here is my code:
private static void Check(List<double> trackedListToCopy, double limit)
{
    List<CheckerObject> trackedList = new List<CheckerObject>();
    int idCount = 1;
    trackedListToCopy.ForEach(x =>
    {
        trackedList.Add(new CheckerObject() { Amount = x, Id = idCount });
        idCount++;
    });
    List<List<CheckerObject>> possiblitiesToCheck = new List<List<CheckerObject>>();
    if (trackedList.FirstOrDefault(x => x.Amount == limit) != null)
    {
        Console.WriteLine("Exact match found");
        return;
    }
    trackedList.ForEach(item =>
    {
        var listToCheck = new List<CheckerObject>();
        if (trackedList.First().Id == item.Id)
            return;
        listToCheck.Add(trackedList.First());
        listToCheck.Add(item);
        possiblitiesToCheck.Add(listToCheck);
        if (possiblitiesToCheck.Any(x => x.Sum(y => y.Amount) == limit))
        {
            List<CheckerObject> match = possiblitiesToCheck.First(x => x.Sum(y => y.Amount) == limit);
            Console.WriteLine("Match found with 2 entries" + match);
            return;
        }
    });
    var baseList = new List<List<CheckerObject>>(possiblitiesToCheck);
    SubsequentCheck(baseList, trackedList, limit);
}

private static void SubsequentCheck(List<List<CheckerObject>> baseList, List<CheckerObject> trackedList, double limit)
{
    //List<List<CheckerObject>> copy = new List<List<CheckerObject>>(baseList);
    trackedList.ForEach(item =>
    {
        baseList.ForEach(x =>
        {
            if (!x.Contains(item))
            {
                x.Add(item);
                if (x.Sum(y => y.Amount) == limit)
                {
                    string show = "";
                    x.ForEach(n => { show += n.Amount + ","; });
                    Console.WriteLine(String.Format("Match Found from {0}", show));
                }
                if (x.Sum(y => y.Amount) < limit)
                {
                    SubsequentCheck(new List<List<CheckerObject>>(baseList), trackedList, limit);
                }
            }
        });
    });
}

public class CheckerObject
{
    public int Id { get; set; }
    public double Amount { get; set; }
}
Any and all help is as always greatly appreciated.
Change
private static void SubsequentCheck(List<List<CheckerObject>> baseList, List<CheckerObject> trackedList, double limit)
to
private static void SubsequentCheck(List<List<CheckerObject>> baseList, List<CheckerObject> trackedList, double limit, int recursiveDepth = 0)
and
if (x.Sum(y => y.Amount) < limit)
{
    SubsequentCheck(new List<List<CheckerObject>>(baseList), trackedList, limit);
}
to
if (x.Sum(y => y.Amount) < limit)
{
    SubsequentCheck(new List<List<CheckerObject>>(baseList), trackedList, limit, recursiveDepth + 1);
}
Then log everything or put a conditional breakpoint on recursive depth greater than some sensible limit.
Also, change the .ForEach calls to foreach loops so that the loops can be stopped properly and for efficiency (a return inside a .ForEach lambda only exits the lambda, not the loop).
Also, switch the Lists to HashSets if you are calling .Contains that frequently and performance is an issue.
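A minimal sketch of those suggestions combined (the depth limit of 100 is an arbitrary assumption of mine, not something from the answer): plain foreach loops can be exited with return or break, and the recursiveDepth parameter makes runaway recursion visible.
private static void SubsequentCheck(List<List<CheckerObject>> baseList, List<CheckerObject> trackedList, double limit, int recursiveDepth = 0)
{
    // Arbitrary guard so runaway recursion shows up while debugging instead of overflowing the stack.
    if (recursiveDepth > 100)
    {
        Console.WriteLine("Recursion deeper than 100 levels - investigate the input data.");
        return;
    }
    foreach (var item in trackedList)
    {
        foreach (var x in baseList)
        {
            if (x.Contains(item))
                continue;
            x.Add(item);
            if (x.Sum(y => y.Amount) == limit)
            {
                Console.WriteLine("Match Found from " + string.Join(",", x.Select(n => n.Amount)));
                return; // actually ends both loops, unlike a return inside a .ForEach lambda
            }
            if (x.Sum(y => y.Amount) < limit)
            {
                SubsequentCheck(new List<List<CheckerObject>>(baseList), trackedList, limit, recursiveDepth + 1);
            }
        }
    }
}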

Nunit Negative Tests Expected Exception

I have a class in which I check the index and the length of a string, and I want to write an NUnit negative test:
if the length of the string is out of range, the test should pass; likewise, if the first character is a digit (the "false" case of the parse), the test should pass.
What I tried:
My CheckKeyClass:
public void SetKey(string keyToAnalyse)
{
    Line = new string[keyToAnalyse.Length];
    int nummeric;
    bool num;
    if (Line.Length == 0 || Line.Length < 24)
    {
        throw new Exception("Index Out of Range " + Line.Length);
    }
    // Is Product a character?
    num = int.TryParse(Line[0], out nummeric);
    if (!num)
    {
        if (Line[0] == "K")
        {
            Product = 0;
        }
    }
    else
    {
        throw new Exception("The Productnumber is not right: " + Line[0] + ". \nPlease give a Character.");
    }
}
My NUnit test:
[Test]
public void NegativeTests()
{
    keymanager.SetKey("KM6163-33583-01125-68785");
    // Throws<ArgumentOutOfRangeException>(() => keymanager.Line[24]);
}
// ExpectedException Handling
public static void Throws<T>(Action func) where T : Exception
{
    var exceptionThrown = false;
    try
    {
        func.Invoke();
    }
    catch (T)
    {
        exceptionThrown = true;
    }
    if (!exceptionThrown)
    {
        throw new AssertFailedException(String.Format("An exception of type {0} was expected, but not thrown", typeof(T)));
    }
}
So if Line.Length is out of range, the test must be green, i.e. pass. How do I write the test so that it passes in that case?
Thanks.
Use Assert.Throws()
string keyHasLengthOf24 = "KM6163-33583-01125-68785";
var ex = Assert.Throws<Exception>(() => keymanager.SetKey(keyHasLengthOf24));
Assert.That(ex.Message, Is.EqualTo("Index Out of Range "));
See this SO answer for further detail.

Track progress when using TPL's Parallel.ForEach

What is the best way to track progress in the following?
long total = Products.LongCount();
long current = 0;
double Progress = 0.0;

Parallel.ForEach(Products, product =>
{
    try
    {
        var price = GetPrice(SystemAccount, product);
        SavePrice(product, price);
    }
    finally
    {
        Interlocked.Decrement(ref this.current);
    }
});
I want to update the progress variable from 0.0 to 1.0 (current/total), but I don't want to use anything that would have an adverse effect on the parallelism.
Jon's solution is good; if you need simple synchronization like this, your first attempt should almost always be lock. But if you measure that the locking slows things down too much, you should think about using something like Interlocked.
In this case, I would use Interlocked.Increment to increment the current count, and change Progress into a property:
private long total;
private long current;

public double Progress
{
    get
    {
        if (total == 0)
            return 0;
        return (double)current / total;
    }
}

…

this.total = Products.LongCount();
this.current = 0;

Parallel.ForEach(Products, product =>
{
    try
    {
        var price = GetPrice(SystemAccount, product);
        SavePrice(product, price);
    }
    finally
    {
        Interlocked.Increment(ref this.current);
    }
});
Also, you might want to consider what to do with exceptions; I'm not sure that iterations that ended with an exception should be counted as done.
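A minimal sketch of that idea (the failed counter and the catch block are my additions, not part of the original answer): only iterations that complete without throwing advance current, so Progress reflects finished work.
private long total;
private long current;
private long failed;

Parallel.ForEach(Products, product =>
{
    try
    {
        var price = GetPrice(SystemAccount, product);
        SavePrice(product, price);

        // Only iterations that completed without throwing count towards progress.
        Interlocked.Increment(ref this.current);
    }
    catch
    {
        Interlocked.Increment(ref this.failed);
        throw; // let Parallel.ForEach surface the failure as usual
    }
});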
Since you are just doing a few quick calculations, ensure atomicity by locking on an appropriate object:
long total = Products.LongCount();
long current = 0;
double Progress = 0.0;
var lockTarget = new object();

Parallel.ForEach(Products, product =>
{
    try
    {
        var price = GetPrice(SystemAccount, product);
        SavePrice(product, price);
    }
    finally
    {
        lock (lockTarget)
        {
            // Cast to double, otherwise the long division always yields 0 until the very end.
            Progress = (double)(++current) / total;
        }
    }
});
A solution without using any blocking in the body:
long total = Products.LongCount();
BlockingCollection<MyState> states = new BlockingCollection<MyState>();

Parallel.ForEach(Products,
    () =>
    {
        // localInit: one MyState per worker, registered so progress can be summed later
        MyState myState = new MyState();
        states.Add(myState);
        return myState;
    },
    (product, state, index, myState) =>
    {
        try
        {
            var price = GetPrice(SystemAccount, product);
            SavePrice(product, price);
        }
        finally
        {
            myState.value++;
        }
        return myState;
    },
    myState => { }
);
Then, to access the current progress:
(float)states.Sum(state => state.value) / total
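The snippet assumes a MyState holder type that is never shown; a minimal definition consistent with the code above might be:
public class MyState
{
    // Number of items processed by the worker that owns this instance.
    public int value;
}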

List<T> Get Chunk Number being executed

I am breaking a list into chunks and processing it as below:
foreach (var partialist in breaklistinchunks(chunksize))
{
    try
    {
        // do something
    }
    catch
    {
        // print error
    }
}
public static class IEnumerableExtensions
{
    public static IEnumerable<List<T>> BreakListinChunks<T>(this IEnumerable<T> sourceList, int chunkSize)
    {
        List<T> chunkReturn = new List<T>(chunkSize);
        foreach (var item in sourceList)
        {
            chunkReturn.Add(item);
            if (chunkReturn.Count == chunkSize)
            {
                yield return chunkReturn;
                chunkReturn = new List<T>(chunkSize);
            }
        }
        if (chunkReturn.Any())
        {
            yield return chunkReturn;
        }
    }
}
If there is an error, I wish to run the chunk again. Is it possible to find the particular chunk number where we received the error and run that again?
The batches have to be executed in sequential order. So if batch #2 generates an error, I need to be able to run 2 again; if it fails a second time, I just need to get out of the loop for good.
List<Chunk> failedChunks = new List<Chunk>();

foreach (var partialist in breaklistinchunks(chunksize))
{
    try
    {
        // do something
    }
    catch
    {
        // print error
        failedChunks.Add(partialist);
    }
}
// attempt to re-process failed chunks here
I propose this answer based on your comment to Aaron's answer.
The batches have to be executed in sequential order. So if 2 is a problem, then I need to be able to run 2 again; if it fails again, I just need to get out of the loop for good.
foreach (var partialist in breaklistinchunks(chunksize))
{
    int fails = 0;
    bool success = false;
    do
    {
        try
        {
            // do your action
            success = true; // should be on the last line before the 'catch'
        }
        catch
        {
            fails += 1;
            // do something about error before running again
        }
    } while (!success && fails < 2);

    // exit the iteration if not successful and fails is 2
    if (!success && fails >= 2)
        break;
}
I made a possible solution for you, if you don't mind switching from an IEnumerable to a Queue, which kind of fits given the requirements...
void Main()
{
    var list = new Queue<int>();
    list.Enqueue(1);
    list.Enqueue(2);
    list.Enqueue(3);
    list.Enqueue(4);
    list.Enqueue(5);

    var random = new Random();
    int chunksize = 2;

    foreach (var chunk in list.BreakListinChunks(chunksize))
    {
        foreach (var item in chunk)
        {
            try
            {
                if (random.Next(0, 3) == 0) // 1 in 3 chance of error
                    throw new Exception(item + " is a problem");
                else
                    Console.WriteLine(item + " is OK");
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
                list.Enqueue(item);
            }
        }
    }
}

public static class IEnumerableExtensions
{
    public static IEnumerable<List<T>> BreakListinChunks<T>(this Queue<T> sourceList, int chunkSize)
    {
        List<T> chunkReturn = new List<T>(chunkSize);
        while (sourceList.Count > 0)
        {
            chunkReturn.Add(sourceList.Dequeue());
            if (chunkReturn.Count == chunkSize || sourceList.Count == 0)
            {
                yield return chunkReturn;
                chunkReturn = new List<T>(chunkSize);
            }
        }
    }
}
Outputs
1 is a problem
2 is OK
3 is a problem
4 is a problem
5 is a problem
1 is a problem
3 is OK
4 is OK
5 is OK
1 is a problem
1 is OK
One possibility would be to use a for loop instead of a foreach loop and use the counter as a means to determine where an error occurred. Then you could continue from where you left off.
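A minimal sketch of that suggestion (materialising the chunks with ToList and retrying a failed chunk only once are my assumptions, not from the answer): the index tells you which chunk failed, so you can retry it or resume from it.
var chunks = breaklistinchunks(chunksize).ToList();

for (int i = 0; i < chunks.Count; i++)
{
    try
    {
        // do something with chunks[i]
    }
    catch
    {
        // print error, then retry chunk i once
        Console.WriteLine("Chunk {0} failed, retrying once", i);
        try
        {
            // do something with chunks[i] again
        }
        catch
        {
            break; // second failure: give up for good, as the question requires
        }
    }
}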
You can use break to exit out of the loop as soon as a chunk fails twice:
foreach (var partialList in breaklistinchunks(chunksize))
{
    if (!TryOperation(partialList) && !TryOperation(partialList))
    {
        break;
    }
}

private bool TryOperation<T>(List<T> list)
{
    try
    {
        // do something
    }
    catch
    {
        // print error
        return false;
    }
    return true;
}
You could even make the loop into a one-liner with LINQ, but it is generally bad practice to combine LINQ with side-effects, and it's not very readable:
breaklistinchunks(chunksize).TakeWhile(x => TryOperation(x) || TryOperation(x));

Parallel.Foreach + yield return?

I want to process something using a parallel loop like this:
public void FillLogs(IEnumerable<IComputer> computers)
{
    Parallel.ForEach(computers, cpt =>
    {
        cpt.Logs = cpt.GetRawLogs().ToList();
    });
}
OK, it works fine. But what do I do if I want the FillLogs method to return an IEnumerable?
public IEnumerable<IComputer> FillLogs(IEnumerable<IComputer> computers)
{
    Parallel.ForEach(computers, cpt =>
    {
        cpt.Logs = cpt.GetRawLogs().ToList();
        yield return cpt; // KO, doesn't work
    });
}
EDIT
It doesn't seem to be possible... but I use something like this:
public IEnumerable<IComputer> FillLogs(IEnumerable<IComputer> computers)
{
    return computers.AsParallel().Select(cpt => cpt);
}
But where do I put the cpt.Logs = cpt.GetRawLogs().ToList(); instruction?
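One way to slot that assignment into the PLINQ version (a sketch of my own, not taken from the answers below): do the work inside the Select projection and return the computer afterwards.
public IEnumerable<IComputer> FillLogs(IEnumerable<IComputer> computers)
{
    return computers.AsParallel().Select(cpt =>
    {
        // The side effect runs on the PLINQ worker threads; results stream back to the caller.
        cpt.Logs = cpt.GetRawLogs().ToList();
        return cpt;
    });
}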
Short version - no, that isn't possible via an iterator block; the longer version probably involves synchronized queue/dequeue between the caller's iterator thread (doing the dequeue) and the parallel workers (doing the enqueue); but as a side note - logs are usually IO-bound, and parallelising things that are IO-bound often doesn't work very well.
If the caller is going to take some time to consume each item, then there may be some merit to an approach that only processes one log at a time, but can do that while the caller is consuming the previous log; i.e. it begins a Task for the next item before the yield, and waits for completion after the yield... but that is, again, pretty complex. As a simplified example:
static void Main()
{
    foreach (string s in Get())
    {
        Console.WriteLine(s);
    }
}

static IEnumerable<string> Get()
{
    var source = new[] { 1, 2, 3, 4, 5 };
    Task<string> outstandingItem = null;
    Func<object, string> transform = x => ProcessItem((int)x);
    foreach (var item in source)
    {
        var tmp = outstandingItem;

        // note: passed in as "state", not captured, so not a foreach/capture bug
        outstandingItem = new Task<string>(transform, item);
        outstandingItem.Start();

        if (tmp != null) yield return tmp.Result;
    }
    if (outstandingItem != null) yield return outstandingItem.Result;
}

static string ProcessItem(int i)
{
    return i.ToString();
}
I don't want to be offensive, but maybe there is a lack of understanding. Parallel.ForEach means that the TPL will run the foreach body on several threads, according to the available hardware. But that only works because the work can be done in parallel! yield return gives you the opportunity to pull values out of a list (or whatever) and hand them back one by one as they are needed. It removes the need to first find all items matching the condition and then iterate over them. That is indeed a performance advantage, but it can't be done in parallel.
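A small illustrative sketch of the distinction being drawn here (my own example, not from the thread): the iterator streams matches back one at a time on the calling thread, while the PLINQ version spreads the filtering work across threads but hands back a fully materialised list.
// Lazy, single-threaded: each match is handed back as soon as it is found.
static IEnumerable<int> BigValues(IEnumerable<int> source)
{
    foreach (var value in source)
    {
        if (value > 100)
            yield return value;
    }
}

// Parallel: the filtering work is spread over threads, then the results are collected.
static List<int> BigValuesParallel(IEnumerable<int> source)
{
    return source.AsParallel().Where(value => value > 100).ToList();
}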
Although the question is old, I've managed to put something together just for fun.
class Program
{
    static void Main(string[] args)
    {
        foreach (var message in GetMessages())
        {
            Console.WriteLine(message);
        }
    }

    // Parallel yield
    private static IEnumerable<string> GetMessages()
    {
        int total = 0;
        bool completed = false;
        var batches = Enumerable.Range(1, 100).Select(i => new Computer() { Id = i });
        var qu = new ConcurrentQueue<Computer>();

        Task.Run(() =>
        {
            try
            {
                Parallel.ForEach(batches,
                    () => 0,
                    (item, loop, subtotal) =>
                    {
                        Thread.Sleep(1000);
                        qu.Enqueue(item);
                        return subtotal + 1;
                    },
                    result => Interlocked.Add(ref total, result));
            }
            finally
            {
                completed = true;
            }
        });

        int current = 0;
        while (current < total || !completed)
        {
            SpinWait.SpinUntil(() => current < total || completed);
            if (current == total) yield break;
            current++;
            qu.TryDequeue(out Computer computer);
            yield return $"Completed {computer.Id}";
        }
    }
}

public class Computer
{
    public int Id { get; set; }
}
Compared to Koray's answer this one really uses all the CPU cores.
You can use the following extension method
public static class ParallelExtensions
{
    public static IEnumerable<T1> OrderedParallel<T, T1>(this IEnumerable<T> list, Func<T, T1> action)
    {
        var unorderedResult = new ConcurrentBag<(long, T1)>();
        Parallel.ForEach(list, (o, state, i) =>
        {
            unorderedResult.Add((i, action.Invoke(o)));
        });
        var ordered = unorderedResult.OrderBy(o => o.Item1);
        return ordered.Select(o => o.Item2);
    }
}
Use it like this:
public IEnumerable<IComputer> FillLogs(IEnumerable<IComputer> computers)
{
    return computers.OrderedParallel(cpt =>
    {
        cpt.Logs = cpt.GetRawLogs().ToList();
        return cpt;
    });
}
Hope this will save you some time.
How about
Queue<string> qu = new Queue<string>();
bool finished = false;

Task.Factory.StartNew(() =>
{
    Parallel.ForEach(get_list(), (item) =>
    {
        string itemToReturn = heavyWorkOnItem(item);
        lock (qu)
            qu.Enqueue(itemToReturn);
    });
    finished = true;
});

while (!finished)
{
    lock (qu)
        while (qu.Count > 0)
            yield return qu.Dequeue();
    // maybe a thread sleep here?
}
Edit:
I think this is better:
public static IEnumerable<TOutput> ParallelYieldReturn<TSource, TOutput>(this IEnumerable<TSource> source, Func<TSource, TOutput> func)
{
    ConcurrentQueue<TOutput> qu = new ConcurrentQueue<TOutput>();
    bool finished = false;
    AutoResetEvent re = new AutoResetEvent(false);

    Task.Factory.StartNew(() =>
    {
        Parallel.ForEach(source, (item) =>
        {
            qu.Enqueue(func(item));
            re.Set();
        });
        finished = true;
        re.Set();
    });

    while (!finished)
    {
        re.WaitOne();
        while (qu.Count > 0)
        {
            TOutput res;
            if (qu.TryDequeue(out res))
                yield return res;
        }
    }
}
Edit 2: I agree with the short "no" answer. This code is useless; you cannot break out of the yield loop.
