HAA0502 Explicit new reference type allocation - c#

I have ASP.Net Core 2.1, C# application. I am using Clr Heap Allocation Analyzer
https://marketplace.visualstudio.com/items?itemName=MukulSabharwal.ClrHeapAllocationAnalyzer
One of the methods looks as below
Ex#1
public void ConfigureServices(IServiceCollection services) {
    services.AddSingleton<IPocoDynamo>(serviceProvider => {
        var pocoDynamo = new PocoDynamo(serviceProvider.GetRequiredService<IAmazonDynamoDB>());
        pocoDynamo.SomeMethod();
        return pocoDynamo;
    });
}
Ex.#2
public async Task<EventTO> AddEvent(EventTO eventObj)
{
    try
    {
        throw new Exception("Error!");
    }
    catch (Exception ex)
    {
        Logger.Log(ex, eventObj);
        return null;
    }
}
I am using DI throughout the app, but wherever the analyzer finds the new keyword it warns:
HAA0502 Explicit new reference type allocation
Also, wherever a lambda expression is used (as in Ex#1), it warns:
Warning HAA0301 Heap allocation of closure Captures:
What is causing this, and how should I address it?
Thanks!

The Heap Allocation Analyzer marks every allocation your code performs. It is not something you would want enabled all the time: consider the following trivial code
public static string MyToString(object? o)
{
    if (o == null)
        throw new ArgumentNullException(nameof(o)); // HAA0502 here
    return o.ToString() ?? string.Empty;
}
The analyzer emits HAA0502 as a warning on the marked line to tell you that you are allocating a new instance. In this case the allocation is obvious and trivial, but the purpose of the analyzer is to help you spot hidden allocations that might make your code slower.
Now consider this silly code here:
public static void Test1()
{
    for (int i = 0; i < 100; i++)
    {
        var a = i + 1;
        var action = new Action(
            () => // HAA0301 Heap allocation of closure Capture: a
            {
                Console.WriteLine(a);
            }
        );
        action();
    }
}
Besides HAA0502, which is reported on new Action( because we are creating a new object, there is an additional warning on the lambda: HAA0301. This is where the analyzer gets more useful: it is telling you that the runtime will create a new object containing your captured variable a. If you are not familiar with this, you can think of that code as being transformed into something like this (for explanatory purposes only):
private sealed class Temp1
{
    public int Value1 { get; }

    public Temp1(int value1)
    {
        Value1 = value1;
    }

    public void Method1()
    {
        Console.WriteLine(Value1);
    }
}

public static void Test1()
{
    for (int i = 0; i < 100; i++)
    {
        var a = i + 1;
        var t = new Temp1(a);
        t.Method1();
    }
}
In the latter code, it becomes evident that at every iteration you are allocating an object.
The main question you may have is: is allocating an object a problem? In 99.9% of cases it is not, and you can embrace the simplicity of writing readable, precise, and concise code without dealing with low-level details. But if you are caught in performance issues (i.e. the remaining 0.1%), the analyzer comes in quite handy, as it shows in one shot where you, or the compiler on your behalf, are allocating something. Allocated objects require a future garbage collection cycle to reclaim the memory.
Regarding your code, you are initializing a service via DI using the factory pattern: that code runs once, so it is no surprise that you are allocating a new object. You can safely suppress the warning on this portion of code, and you can let the IDE generate the suppression code for you. This is why I suggest keeping the analyzer disabled and enabling it only when hunting performance problems.
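For example, a local #pragma directive silences the rule exactly where the allocation is intentional; the sketch below applies it to the DI registration from the question (HAA0502 is the analyzer's rule id):

```csharp
// One-time DI registration: the allocation here is intentional,
// so the analyzer warning can be disabled for just these lines.
#pragma warning disable HAA0502 // Explicit new reference type allocation
services.AddSingleton<IPocoDynamo>(serviceProvider =>
{
    var pocoDynamo = new PocoDynamo(serviceProvider.GetRequiredService<IAmazonDynamoDB>());
    pocoDynamo.SomeMethod();
    return pocoDynamo;
});
#pragma warning restore HAA0502
```

Alternatively, a [SuppressMessage] attribute on the method achieves the same with a recorded justification.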

Related

Inform the compiler that a variable might be updated from another thread

This would generally be done using volatile. But in the case of a long or double that's impossible.
Perhaps just making it public is enough, and the compiler then knows that this can be used by another assembly and won't "optimize it out"? Can this be relied upon? Some other way?
To be clear, I'm not worried about concurrent reading/writing of the variable. Only one thing - that it doesn't get optimized out. (Like in https://stackoverflow.com/a/1284007/939213 .)
The best way to prevent code removal is to use the code.
If you are worried about the compiler optimizing away the while loop in your example
class Test
{
    long foo;

    static void Main()
    {
        var test = new Test();
        new Thread(delegate() { Thread.Sleep(500); test.foo = 255; }).Start();
        while (test.foo != 255) ;
        Console.WriteLine("OK");
    }
}
you could still use volatile to do this by modifying your while loop
volatile int temp;
// code skipped in this sample
while (test.foo != 255) { temp = (int)test.foo; }
Now, assuming you are SURE you won't have any thread-safety issues: you are using your long foo, so it won't be optimized away, and you don't care about losing any part of the long since you are just trying to keep it alive.
Make sure you mark your code very clearly if you do something like this. Possibly write a VolatileLong class that wraps your long (and your volatile int) so other people understand what you are doing.
Other thread-safety tools, like locks, also prevent code removal. For example, the compiler is smart enough not to remove the double if in the singleton pattern like this:
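A minimal sketch of such a wrapper (the VolatileLong name comes from the suggestion above; using System.Threading.Volatile instead of a volatile int side-field is my assumption, since Volatile.Read/Write work on 64-bit fields where the volatile keyword does not):

```csharp
using System.Threading;

// Wraps a long so that every access goes through Volatile.Read/Write,
// which both prevents the value being optimized away and gives atomic
// 64-bit reads/writes even on 32-bit runtimes.
public sealed class VolatileLong
{
    private long value;

    public long Read() => Volatile.Read(ref value);

    public void Write(long newValue) => Volatile.Write(ref value, newValue);
}
```

The waiting loop from the example then becomes `while (flag.Read() != 255) ;`, and the intent is obvious to the next reader.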
if (_instance == null) {
    lock (_lock) {
        if (_instance == null) {
            _instance = new Singleton();
        }
    }
}
return _instance;

memory management impact on setting objects to null in finally

public void func1()
{
    object obj = null;
    try
    {
        obj = new object();
    }
    finally
    {
        obj = null;
    }
}
Is there any advantage to assigning null to a reference in a finally block with regard to memory management of large objects?
Let's deal with the explicit and implicit questions here.
Q: First and foremost, is there a point in assigning null to a local variable when you're done with it?
A: No, none at all. When compiled with optimizations and not running under a debugger, the JITter knows the segment of the code where a variable is in use and will automatically stop considering it a root when you've passed that segment. In other words, if you assign something to a variable, and then at some point never again read from it, it may be collected even if you don't explicitly set it to null.
So your example can safely be written as:
public void func1()
{
object obj = new object();
// implied more code here
}
If no code in the "implied more code here" ever accesses the obj variable, it is no longer considered a root.
Note that this changes if you run a non-optimized assembly or attach a debugger to the process. In that case the lifetime of variables is artificially extended to the end of their scope to make debugging easier.
Q: Secondly, what about fields in the surrounding class?
A: Here it can definitely make a difference.
If the object surrounding your method is kept alive for an extended period of time, and the need for the contents of a field has gone, then yes, setting the field to null will make the old object it referenced eligible for collection.
So this code might have merit:
public class SomeClass
{
    private object obj;

    public void func1()
    {
        try
        {
            obj = new object();
            // implied more code here
        }
        finally
        {
            obj = null;
        }
    }
}
But then, why are you doing it like this? You should instead strive to write cleaner code that doesn't rely on surrounding state. In the above code you should refactor the "implied more code here" so that the object to use is passed in as a parameter, and remove the field.
Obviously, if you can't do that, then yes, setting the field to null as soon as its object reference is no longer needed is a good idea.
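A sketch of that refactoring, under the assumption that the "implied more code here" can be extracted into a method that takes the object as a parameter (all names here are illustrative):

```csharp
public class SomeClass
{
    public void Func1()
    {
        var obj = new object();
        DoMoreWork(obj);
        // After its last read above, obj is no longer a GC root:
        // no field to clean up and no null assignment needed.
    }

    private void DoMoreWork(object obj)
    {
        // the "implied more code here", now working on a parameter
        // instead of shared instance state
    }
}
```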
Fun experiment, if you run the below code in LINQPad with optimizations on, what do you expect the output to be?
void Main()
{
    var s = new Scary();
    s.Test();
}

public class Scary
{
    public Scary()
    {
        Console.WriteLine(".ctor");
    }

    ~Scary()
    {
        Console.WriteLine("finalizer");
    }

    public void Test()
    {
        Console.WriteLine("starting test");
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("ending test");
    }
}
Answer:
.ctor
starting test
finalizer
ending test
Explanation:
Since the implicit this parameter to an instance method is never used inside the method, the object surrounding the method is collected, even if the method is currently running.
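If you ever need the opposite behavior, i.e. a guarantee that the instance survives until the end of the method, GC.KeepAlive is the standard way to express it. A hypothetical variant of the class above:

```csharp
public class Scary2 // illustrative variant of the Scary class above
{
    ~Scary2() => Console.WriteLine("finalizer");

    public void Test()
    {
        Console.WriteLine("starting test");
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine("ending test");
        // Referencing 'this' here keeps the instance reachable up to this
        // point, so the finalizer can no longer run mid-method.
        GC.KeepAlive(this);
    }
}
```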

should i try to avoid "new" keyword in ultra-low-latency software?

I'm writing HFT trading software, and I care about every single microsecond. It is currently written in C#, but I will migrate to C++ soon.
Let's consider such code
// Original
class Foo {
    ....
    // method is called from one thread only so no need to be thread-safe
    public void FrequentlyCalledMethod() {
        var actions = new List<Action>();
        for (int i = 0; i < 10; i++) {
            actions.Add(new Action(....));
        }
        // use actions, synchronous
        executor.Execute(actions);
        // now actions can be deleted
    }
}
I guess that ultra-low-latency software should not use the "new" keyword too much, so I moved actions to a field:
// Version 1
class Foo {
    ....
    private List<Action> actions = new List<Action>();

    // method is called from one thread only so no need to be thread-safe
    public void FrequentlyCalledMethod() {
        actions.Clear();
        for (int i = 0; i < 10; i++) {
            actions.Add(new Action { type = ActionType.AddOrder, price = 100 + i });
        }
        // use actions, synchronous
        executor.Execute(actions);
        // now actions can be deleted
    }
}
Should I perhaps avoid the "new" keyword altogether? I could use a "pool" of pre-allocated objects:
// Version 2
class Foo {
    ....
    private List<Action> actions = new List<Action>();
    private Action[] actionPool = new Action[10];

    // method is called from one thread only so no need to be thread-safe
    public void FrequentlyCalledMethod() {
        actions.Clear();
        for (int i = 0; i < 10; i++) {
            var action = actionPool[i];
            action.type = ActionType.AddOrder;
            action.price = 100 + i;
            actions.Add(action);
        }
        // use actions, synchronous
        executor.Execute(actions);
        // now actions can be deleted
    }
}
How far should I go?
How important is it to avoid new?
Will I gain anything by using a preallocated object that I only need to configure (set type and price in the example above)?
Please note that this is ultra-low latency, so let's assume that performance is preferred over readability, maintainability, etc.
In C++ you don't need new to create an object that has limited scope.
void FrequentlyCalledMethod()
{
    std::vector<Action> actions;
    actions.reserve(10);
    for (int i = 0; i < 10; i++)
    {
        actions.push_back(Action(....));
    }
    // use actions, synchronous
    executor.Execute(actions);
    // now actions can be deleted
}
If Action is a base class and the actual types you have are of a derived class, you will need a pointer or smart pointer and new here. But no need if Action is a concrete type and all the elements will be of this type, and if this type is default-constructible, copyable and assignable.
In general though, it is highly unlikely that your performance benefits will come from not using new. It is just good practice here in C++ to use local function scope when that is the scope of your object. This is because in C++ you have to take more care of resource management, and that is done with a technique known as "RAII" - which essentially means taking care of how a resource will be deleted (through a destructor of an object) at the point of allocation.
High performance is more likely to come about through:
proper use of algorithms
proper parallel-processing and synchronisation techniques
effective caching and lazy evaluation.
As much as I detest HFT, I'm going to tell you how to get maximum performance out of each thread on a given piece of iron.
Here's an explanation of an example where a program as originally written was made 730 times faster.
You do it in stages. At each stage, you find something that takes a good percentage of time, and you fix it.
The keyword is find, as opposed to guess.
Too many people just eyeball the code and fix what they think will help; often, but not always, it does help some.
That's guesswork.
To get real speedup, you need to find all the problems, not just the few you can guess.
If your program is doing new, then chances are at some point that will be what you need to fix.
But it's not the only thing.
Here's the theory behind it.
For high-performance trading engines at good HFT shops, avoiding new/malloc in C++ code is a basic.
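One standard technique in that direction, sketched here as my own assumption rather than taken from the answers: if Action is just a small data carrier, declaring it as a struct stored in a preallocated array makes the hot path allocation-free, because the struct values live inline in the array. ActionType and price come from the question's example; OrderAction is a hypothetical name.

```csharp
public enum ActionType { AddOrder }

// Hypothetical struct variant of the question's Action class.
public struct OrderAction
{
    public ActionType Type;
    public int Price;
}

public class Foo
{
    // Allocated once; reused on every call. Struct elements are stored
    // inline, so filling them allocates nothing on the heap.
    public readonly OrderAction[] ActionPool = new OrderAction[10];

    public void FrequentlyCalledMethod()
    {
        for (int i = 0; i < 10; i++)
        {
            ActionPool[i].Type = ActionType.AddOrder;
            ActionPool[i].Price = 100 + i;
        }
        // executor.Execute(ActionPool); // synchronous use, as in the question
    }
}
```

The trade-off is that structs are copied by value, so they suit small, plain data; for polymorphic actions a pool of reused class instances (Version 2 above) remains the right shape.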

Error with ReaderWriterLockSlim

I got this exception
The read lock is being released without being held.
at System.Threading.ReaderWriterLockSlim.ExitReadLock()
at .. GetBreed(String)
Below is the only place in code that accesses the lock. As you can see, there is no recursion. I'm having trouble understanding how this exception could occur.
static readonly Dictionary<string, BreedOfDog> Breeds
    = new Dictionary<string, BreedOfDog>();

static BreedOfDog GetBreed(string name)
{
    try
    {
        rwLock.EnterReadLock();
        BreedOfDog bd;
        if (Breeds.TryGetValue(name, out bd))
        {
            return bd;
        }
    }
    finally
    {
        rwLock.ExitReadLock();
    }
    try
    {
        rwLock.EnterWriteLock();
        BreedOfDog bd;
        // make sure it hasn't been added in the interim
        if (Breeds.TryGetValue(name, out bd))
        {
            return bd;
        }
        bd = new BreedOfDog(name); // expensive to fetch all the data needed to run the constructor, hence the caching
        Breeds[name] = bd;
        return bd;
    }
    finally
    {
        rwLock.ExitWriteLock();
    }
}
I'm guessing you have something re-entrant, and it is throwing an exception when obtaining the lock. There is a catch-22 between "take the lock, then try" and "try, then take the lock", but "take the lock, then try" has fewer failure cases (being aborted between the take and the try is so vanishingly unlikely that you don't need to stress about it).
Move the "take the lock" outside the "try", and see what the actual exception is.
The problem is most likely that you are failing to take the lock (probably due to re-entrancy) and then trying to release a lock you never took. This could mean the exception surfaces in the original code that took the lock, due to it being released twice when it was only taken once.
Note: Monitor has overloads with a ref bool parameter to help with this scenario, but the other lock types do not.
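A sketch of that overload: Monitor.Enter(object, ref bool) sets the flag atomically with acquisition, so the finally block can never release a lock it does not hold (the class and field names below are placeholders):

```csharp
using System.Threading;

public class SafeUpdater // illustrative name
{
    private readonly object syncRoot = new object();

    public void UpdateSafely()
    {
        bool lockTaken = false;
        try
        {
            // lockTaken becomes true only if the lock was actually acquired,
            // even if an exception interrupts the acquisition.
            Monitor.Enter(syncRoot, ref lockTaken);
            // ... read or update shared state here ...
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(syncRoot); // never exits a lock that was not entered
        }
    }
}
```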
Use LockRecursionPolicy.SupportsRecursion when instantiating the ReaderWriterLockSlim. If the error goes away, then you actually do have some type of recursion involved; perhaps it is in code that you did not post?
And if you are really concerned about getting maximum concurrency out of this (as I suspect you are, since you are using a ReaderWriterLockSlim), then you could use the double-checked locking pattern. Notice how your original code already has that feel to it? So why beat around the bush? Just do it.
In the following code notice how I always treat the Breeds reference as immutable and then inside the lock I recheck, copy, change, and swap out the reference.
static volatile Dictionary<string, BreedOfDog> Breeds = new Dictionary<string, BreedOfDog>();
static readonly object LockObject = new object();

static BreedOfDog GetBreed(string name)
{
    BreedOfDog bd;
    if (!Breeds.TryGetValue(name, out bd))
    {
        lock (LockObject)
        {
            if (!Breeds.TryGetValue(name, out bd))
            {
                bd = new BreedOfDog(name);
                var copy = new Dictionary<string, BreedOfDog>(Breeds);
                copy[name] = bd;
                Breeds = copy;
            }
        }
    }
    return bd;
}

Multithreading, lambdas and local variables

My question is, in the below code, can I be sure that the instance methods will be accessing the variables I think they will, or can they be changed by another thread while I'm still working? Do closures have anything to do with this, i.e. will I be working on a local copy of the IEnumerable<T> so enumeration is safe?
To paraphrase my question, do I need any locks if I'm never writing to shared variables?
public class CustomerClass
{
    private Config cfg = (Config)ConfigurationManager.GetSection("Customer");

    public void Run()
    {
        var serviceGroups = this.cfg.ServiceDeskGroups.Select(n => n.Group).ToList();
        var groupedData = DataReader.GetSourceData().AsEnumerable().GroupBy(n => n.Field<int>("ID"));
        Parallel.ForEach<IGrouping<int, DataRow>, CustomerDataContext>(
            groupedData,
            () => new CustomerDataContext(),
            (g, _, ctx) =>
            {
                var inter = this.FindOrCreateInteraction(ctx, g.Key);
                inter.ID = g.Key;
                inter.Title = g.First().Field<string>("Title");
                this.CalculateSomeProperty(ref inter, serviceGroups);
                return ctx;
            },
            ctx => ctx.SubmitAllChanges());
    }

    private Interaction FindOrCreateInteraction(CustomerDataContext ctx, int ID)
    {
        var inter = ctx.Interactions.Where(n => n.Id == ID).SingleOrDefault();
        if (inter == null)
        {
            inter = new Interaction();
            ctx.InsertOnSubmit(inter);
        }
        return inter;
    }

    private void CalculateSomeProperty(ref Interaction inter, IEnumerable<string> serviceDeskGroups)
    {
        // Reads from the list passed in; changes the state of the ref'd object.
        if (serviceDeskGroups.Contains(inter.Group))
        {
            inter.Ours = true;
        }
    }
}
I seem to have found the answer and in the process, also the question.
The real question was whether local "variables", that turn out to be actually objects, can be trusted for concurrent access. The answer is no, if they happen to have internal state that is not handled in a thread-safe manner, all bets are off. The closure doesn't help, it just captures a reference to said object.
In my specific case - concurrent reads from IEnumerable<T> and no writes to it, it is actually thread safe, because each call to foreach, Contains(), Where(), etc. gets a fresh new IEnumerator, which is only visible from the thread that requested it. Any other objects, however, must also be checked, one by one.
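That independence is easy to see directly: each GetEnumerator call returns a fresh enumerator with its own cursor, so readers never disturb one another (a small sketch I added for illustration):

```csharp
using System;
using System.Collections.Generic;

var list = new List<int> { 1, 2, 3 };

var e1 = list.GetEnumerator(); // independent cursor #1
var e2 = list.GetEnumerator(); // independent cursor #2

e1.MoveNext();                 // e1 advances to the first element
e2.MoveNext(); e2.MoveNext();  // e2 advances twice, unaffected by e1

Console.WriteLine(e1.Current); // 1
Console.WriteLine(e2.Current); // 2
```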
So, hooray, no locks or synchronized collections for me :)
Thanks to #ebb and #Dave, although you didn't answer the question directly, you pointed me in the right direction.
If you're interested in the results, this is a run on my home PC (a quad-core) with Thread.SpinWait to simulate the processing time of a row. The real app had an improvement of almost 2X (01:03 vs 00:34) on a dual-core hyper-threaded machine with SQL Server on the local network.
Single-threaded, using foreach. I don't know why, but there is a pretty high number of cross-core context switches.
Using Parallel.ForEach, lock-free with thread-locals where needed.
Right now, from what I can tell, your instance methods are not using any member variables. That makes them stateless and therefore threadsafe. However, in that same case, you'd be better off marking them "static" for code clarity and a slight performance benefit.
If those instance methods were using a member variable, then they'd only be as threadsafe as that variable (for example, if you used a simple list, it would not be threadsafe and you may see weird behavior). Long story short, member variables are the enemy of easy thread safety.
Here's my refactor (disclaimer: not tested). If you want to provide data to these methods, you'll stay saner if you pass it as parameters rather than keeping it in member variables:
UPDATE: You asked for a way to reference your read only list, so I've added that and removed the static tags (so that the instance variable can be shared).
public class CustomerClass
{
    private List<string> someReadOnlyList;

    public CustomerClass()
    {
        // Populated once in the constructor and only ever read afterwards,
        // so concurrent reads are safe without a synchronized wrapper.
        someReadOnlyList = new List<string>() { "string1", "string2" };
    }

    public void Run()
    {
        var groupedData = DataReader.GetSourceData().AsEnumerable().GroupBy(n => n.Field<int>("ID"));
        Parallel.ForEach<IGrouping<int, DataRow>, CustomerDataContext>(
            groupedData,
            () => new CustomerDataContext(),
            (g, _, ctx) =>
            {
                var inter = FindOrCreateInteraction(ctx, g.Key);
                inter.ID = g.Key;
                inter.Title = g.First().Field<string>("Title");
                CalculateSomeProperty(ref inter);
                return ctx;
            },
            ctx => ctx.SubmitAllChanges());
    }

    private Interaction FindOrCreateInteraction(CustomerDataContext ctx, int ID)
    {
        var query = ctx.Interactions.Where(n => n.Id == ID);
        if (query.Any())
        {
            return query.Single();
        }
        else
        {
            var inter = new Interaction();
            ctx.InsertOnSubmit(inter);
            return inter;
        }
    }

    private void CalculateSomeProperty(ref Interaction inter)
    {
        Console.WriteLine(someReadOnlyList[0]);
        // do some other stuff
    }
}
