In my code, a ValidateRequest method is defined:
private bool ValidateRequest()
{
return _doc != null;
}
This method is called from everywhere I want to check whether _doc is null; it is used 5 times in one .cs file.
From a performance point of view, is it advisable to define a method containing just one line? My concern is that before calling this method the caller's state will be pushed onto the stack, and popped off it again afterwards.
Any thoughts?
=== Edit ===
I am using .NET version 3.5
Don't bother with it. The JIT compiler will probably inline the method, as the corresponding IL is quite short.
If the method helps with code maintainability because it communicates intent, go on with it.
It's highly unlikely that moving a single line into a method will have a significant impact on your application. It's actually quite possible that this will have no impact as the JIT could choose to inline such a function call. I would definitely opt for keeping the check in a separate method unless a profiler specifically showed it to be a problem.
Focus on writing code that is clear and well abstracted. Let the profiler guide you to the real performance problems.
As always: when you have doubts, benchmark!
And when you benchmark, do it in release mode, otherwise you're not benchmarking with compiler optimizations.
After that, if it does indeed impact performance, you can inline it with NGen.
This SO post talks about it.
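If you want to measure it yourself, one rough approach (a sketch I'm adding here, not code from the linked post) is to compare the helper as-is against a copy marked with [MethodImpl(MethodImplOptions.NoInlining)], which tells the JIT never to inline it:
using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;
class InliningBenchmark
{
    private object _doc = new object();
    // The JIT is free to inline this one.
    private bool ValidateRequest()
    {
        return _doc != null;
    }
    // This copy is never inlined, so it always pays the call overhead.
    [MethodImpl(MethodImplOptions.NoInlining)]
    private bool ValidateRequestNoInlining()
    {
        return _doc != null;
    }
    public void Run()
    {
        const int iterations = 100000000;
        bool result = false;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) result = ValidateRequest();
        sw.Stop();
        Console.WriteLine("inlinable:   {0}", sw.Elapsed);
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) result = ValidateRequestNoInlining();
        sw.Stop();
        Console.WriteLine("not inlined: {0} ({1})", sw.Elapsed, result);
    }
}
Run it in release mode without the debugger attached; any difference is usually lost in the noise.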
OK, so this is just from LINQPad, and I guess not a definitive answer, but the following code produced a minuscule discrepancy (00:00:00.7360736 vs 00:00:00.0740074):
void Main()
{
var starttime = DateTime.Now;
for (var i = 0; i < 1000000000; i++)
{
if (ValidateRequest()) continue;
}
var endtime = DateTime.Now;
Console.WriteLine(endtime.Subtract(starttime));
starttime = DateTime.Now;
for (var i = 0; i < 100000000; i++)
{
if (_doc != null) continue;
}
endtime = DateTime.Now;
Console.WriteLine(endtime.Subtract(starttime));
}
private object _doc = null;
private bool ValidateRequest()
{
return _doc != null;
}
I have an ASP.NET Core 2.1 C# application. I am using the Clr Heap Allocation Analyzer:
https://marketplace.visualstudio.com/items?itemName=MukulSabharwal.ClrHeapAllocationAnalyzer
One of the methods looks like this:
Ex. #1
public void ConfigureServices(IServiceCollection services) {
services.AddSingleton<IPocoDynamo>(serviceProvider => {
var pocoDynamo = new PocoDynamo(serviceProvider.GetRequiredService<IAmazonDynamoDB>());
pocoDynamo.SomeMethod();
return pocoDynamo;
});
}
Ex. #2
public async Task<EventTO> AddEvent(EventTO eventObj)
{
try
{
throw new Exception("Error!");
}
catch (Exception ex)
{
Logger.Log(ex, eventObj);
return null;
}
}
I am using DI throughout the app. But wherever the analyzer finds the new keyword, it warns with:
HAA0502 Explicit new reference type allocation
Also, wherever a lambda expression is used, it warns with (like in Ex. #1):
Warning HAA0301 Heap allocation of closure Captures:
What is causing this, and how do I address it?
Thanks!
The Heap Allocation Analyzer marks every allocation your code performs. It is not something you would want to keep enabled all the time; consider the following silly code:
public static string MyToString(object? o)
{
if (o == null)
throw new ArgumentNullException(nameof(o)); // HAA0502 here
return o.ToString() ?? string.Empty;
}
The analyzer will emit HAA0502 as a warning on the marked line to tell you that you are allocating a new instance. In this case it is obvious what you are doing, and the warning is trivial, but the purpose of the analyzer is to help you spot nasty allocations that might make your code slower.
Now consider this silly code here:
public static void Test1()
{
for (int i = 0; i < 100; i++)
{
var a = i + 1;
var action = new Action(
() => // HAA0301 Heap allocation of closure Capture: a
{
Console.WriteLine(a);
}
);
action();
}
}
Besides HAA0502, which is reported on new Action( because we are creating a new object, there is an additional warning on the lambda: HAA0301. This is where the analyzer gets more useful: it is telling you that a new closure object containing your captured variable a will be allocated. If you are not familiar with this, you can think of that code as being transformed into something like this (for explanatory purposes only):
private sealed class Temp1
{
public int Value1 { get; }
public Temp1(int value1)
{
Value1 = value1;
}
public void Method1()
{
Console.WriteLine(Value1);
}
}
public static void Test1()
{
for (int i = 0; i < 100; i++)
{
var a = i + 1;
var t = new Temp1(a);
t.Method1();
}
}
In the latter code, it becomes evident that at every iteration you are allocating an object.
The main question you may have is: is allocating an object a problem? In 99.9% of cases it is not, and you may embrace the simplicity of writing readable, precise and concise code without dealing with low-level details. But if you are caught in performance issues (i.e. the remaining 0.1%), the analyzer can come in quite handy, as it shows in one shot where you, or the compiler on your behalf, are allocating something. Allocated objects require a future garbage collector cycle to reclaim the memory.
Regarding your code: you are initializing a service via DI with the factory pattern, and that code runs once, so it is no surprise that you are allocating a new object. You can safely suppress the warning on this portion of code, and you can let the IDE generate the suppression code for you. This is also why I suggest keeping the analyzer disabled and enabling it only when hunting performance problems.
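For example, a pragma-based suppression around that registration could look roughly like this (this reuses the code from the question; the descriptive comment after each diagnostic ID is just its title and may differ slightly in your version of the analyzer):
public void ConfigureServices(IServiceCollection services) {
#pragma warning disable HAA0301 // Heap allocation of closure capture
#pragma warning disable HAA0502 // Explicit new reference type allocation
    services.AddSingleton<IPocoDynamo>(serviceProvider => {
        var pocoDynamo = new PocoDynamo(serviceProvider.GetRequiredService<IAmazonDynamoDB>());
        pocoDynamo.SomeMethod();
        return pocoDynamo;
    });
#pragma warning restore HAA0502
#pragma warning restore HAA0301
}
Alternatively, the IDE can generate a [SuppressMessage] attribute or an entry in a GlobalSuppressions.cs file for you.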
You can skip to "My approach" if you don't care about what I'm actually trying to do.
What I'm trying to do
Hey, I'm trying to do mutation testing,
inspired by this talk:
http://www.infoq.com/presentations/kill-better-test
But I'm using C#, and the best mutation testing libraries are written in Java, such as
http://pitest.org/
There are some frameworks for C#, such as NinjaTurtles and VisualMutator,
but neither of them works on my computer for some reason (I get a weird error),
and I also thought it would be interesting to create my own.
About mutation testing
For those who don't know what mutation testing is: it is testing for the tests.
Most people use code coverage to check that their tests cover all the scenarios,
but that's not enough, because just reaching a piece of code doesn't mean the tests actually verify it.
Mutation testing changes a piece of code, and if the tests still pass,
it means that piece of code isn't really tested.
My approach
So I've started with a simple piece of code that gets the IL bytes of a method:
var classType = typeof(MethodClass);
var methodInfo = classType.GetMethod("ExecuteMethod", BindingFlags.NonPublic | BindingFlags.Static);
byte[] ilCodes = methodInfo.GetMethodBody().GetILAsByteArray();
This is the MethodClass I'm trying to change:
public class MethodClass
{
private static int ExecuteMethod()
{
var i = 0;
i += 5;
if (i >= 5)
{
i = 2;
}
return i;
}
}
Now I'm trying to replace the IL opcodes:
for (int i = 0; i < ilCodes.Length; i++)
{
if (ilCodes[i] == OpCodes.Add.Value)
{
ilCodes[i] = (byte)OpCodes.Sub.Value;
}
}
But then I'm not sure how to make the method use the new IL bytes.
I've tried using
var dynamicFunction = new DynamicMethod("newmethod", typeof(int), null);
var ilGenerator = dynamicFunction.GetILGenerator();
and then the IL generator has an Emit method that takes an opcode and a value, so I could use that, but I don't have the value to pass to Emit.
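For example, I know I could hand-emit a mutated version of ExecuteMethod like this (just a hand-written sketch to show Emit with and without an operand, not something generated from the byte array):
using System;
using System.Reflection.Emit;
var mutated = new DynamicMethod("ExecuteMethodMutant", typeof(int), Type.EmptyTypes);
var il = mutated.GetILGenerator();
var skip = il.DefineLabel();
il.DeclareLocal(typeof(int));          // int i
il.Emit(OpCodes.Ldc_I4_0);             // opcode with no operand
il.Emit(OpCodes.Stloc_0);              // i = 0
il.Emit(OpCodes.Ldloc_0);
il.Emit(OpCodes.Ldc_I4_5);
il.Emit(OpCodes.Sub);                  // the mutation: Add replaced by Sub
il.Emit(OpCodes.Stloc_0);              // i = i - 5
il.Emit(OpCodes.Ldloc_0);
il.Emit(OpCodes.Ldc_I4_5);
il.Emit(OpCodes.Blt_S, skip);          // opcode with an operand (the branch target)
il.Emit(OpCodes.Ldc_I4_2);
il.Emit(OpCodes.Stloc_0);              // i = 2
il.MarkLabel(skip);
il.Emit(OpCodes.Ldloc_0);
il.Emit(OpCodes.Ret);                  // return i
var mutant = (Func<int>)mutated.CreateDelegate(typeof(Func<int>));
Console.WriteLine(mutant());           // prints -5 for the mutated method
But obviously I want to generate this from the modified byte array automatically, not write it by hand.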
Does anybody know how to do it?
I'm writing HFT trading software, and I care about every single microsecond. It is currently written in C#, but I will migrate to C++ soon.
Consider the following code:
// Original
class Foo {
....
// method is called from one thread only so no need to be thread-safe
public void FrequentlyCalledMethod() {
var actions = new List<Action>();
for (int i = 0; i < 10; i++) {
actions.Add(new Action(....));
}
// use actions, synchronous
executor.Execute(actions);
// now actions can be deleted
}
I guess that ultra-low latency software should not use the "new" keyword too much, so I moved actions to a field:
// Version 1
class Foo {
....
private List<Action> actions = new List<Action>();
// method is called from one thread only so no need to be thread-safe
public void FrequentlyCalledMethod() {
actions.Clear();
for (int i = 0; i < 10; i++) {
actions.Add(new Action { type = ActionType.AddOrder, price = 100 + i });
}
// use actions, synchronous
executor.Execute(actions);
// now actions can be deleted
}
And should I probably try to avoid the "new" keyword altogether? I can use some "pool" of pre-allocated objects:
// Version 2
class Foo {
....
private List<Action> actions = new List<Action>();
private Action[] actionPool = new Action[10];
// method is called from one thread only so no need to be thread-safe
public void FrequentlyCalledMethod() {
actions.Clear();
for (int i = 0; i < 10; i++) {
var action = actionPool[i];
action.type = ActionType.AddOrder;
action.price = 100 + i;
actions.Add(action);
}
// use actions, synchronous
executor.Execute(actions);
// now actions can be deleted
}
How far should I go?
How important is it to avoid new?
Will I gain anything by using a pre-allocated object that I only need to configure (set type and price in the example above)?
Please note that this is ultra-low latency, so let's assume that performance is preferred over readability, maintainability, etc.
In C++ you don't need new to create an object that has limited scope.
void FrequentlyCalledMethod()
{
std::vector<Action> actions;
actions.reserve( 10 );
for (int i = 0; i < 10; i++)
{
actions.push_back( Action(....) );
}
// use actions, synchronous
executor.Execute(actions);
// now actions can be deleted
}
If Action is a base class and the actual types you have are of a derived class, you will need a pointer or smart pointer and new here. But no need if Action is a concrete type and all the elements will be of this type, and if this type is default-constructible, copyable and assignable.
In general though, it is highly unlikely that your performance benefits will come from not using new. It is just good practice here in C++ to use local function scope when that is the scope of your object. This is because in C++ you have to take more care of resource management, and that is done with a technique known as "RAII" - which essentially means taking care of how a resource will be deleted (through a destructor of an object) at the point of allocation.
High performance is more likely to come about through:
proper use of algorithms
proper parallel-processing and synchronisation techniques
effective caching and lazy evaluation.
As much as I detest HFT, I'm going to tell you how to get maximum performance out of each thread on a given piece of iron.
Here's an explanation of an example where a program as originally written was made 730 times faster.
You do it in stages. At each stage, you find something that takes a good percentage of time, and you fix it.
The keyword is find, as opposed to guess.
Too many people just eyeball the code and fix what they think will help; often, but not always, it does help some.
That's guesswork.
To get real speedup, you need to find all the problems, not just the few you can guess.
If your program is doing new, then chances are at some point that will be what you need to fix.
But it's not the only thing.
Here's the theory behind it.
For high-performance trading engines at good HFT shops, avoiding new/malloc in C++ code is a basic requirement.
Maybe this is dreaming, but is it possible to create an attribute that caches the output of a function (say, in HttpRuntime.Cache) and returns the value from the cache instead of actually executing the function when the parameters to the function are the same?
When I say function, I'm talking about any function, whether it fetches data from a DB, whether it adds two integers, or whether it spits out the content of a file. Any function.
Your best bet is PostSharp. I have no idea if they have what you need, but it's certainly worth checking. By the way, make sure to publish the answer here if you find one.
EDIT: also, googling "postsharp caching" gives some links, like this one: Caching with C#, AOP and PostSharp
UPDATE: I recently stumbled upon this article: Introducing Attribute Based Caching. It describes a postsharp-based library on http://cache.codeplex.com/ if you are still looking for a solution.
I have exactly the same problem: I have multiple expensive methods in my app, and I need to cache their results. Some time ago I just copy-pasted similar code everywhere, but then I decided to factor this logic out of my domain code.
This is how I did it before:
static List<News> _topNews = null;
static DateTime _topNewsLastUpdateTime = DateTime.MinValue;
const int CacheTime = 5; // In minutes
public IList<News> GetTopNews()
{
if (_topNewsLastUpdateTime.AddMinutes(CacheTime) < DateTime.Now)
{
    _topNews = GetList(TopNewsCount);
    _topNewsLastUpdateTime = DateTime.Now; // remember when the cache was refreshed
}
return _topNews;
}
And that is how I can write it now:
public IList<News> GetTopNews()
{
return Cacher.GetFromCache(() => GetList(TopNewsCount));
}
Cacher is a simple helper class; here it is:
public static class Cacher
{
const int CacheTime = 5; // In minutes
static Dictionary<long, CacheItem> _cachedResults = new Dictionary<long, CacheItem>();
public static T GetFromCache<T>(Func<T> action)
{
long code = action.GetHashCode();
if (!_cachedResults.ContainsKey(code))
{
lock (_cachedResults)
{
if (!_cachedResults.ContainsKey(code))
{
_cachedResults.Add(code, new CacheItem { LastUpdateTime = DateTime.MinValue });
}
}
}
CacheItem item = _cachedResults[code];
if (item.LastUpdateTime.AddMinutes(CacheTime) >= DateTime.Now)
{
return (T)item.Result;
}
T result = action();
_cachedResults[code] = new CacheItem
{
LastUpdateTime = DateTime.Now,
Result = result
};
return result;
}
}
class CacheItem
{
public DateTime LastUpdateTime { get; set; }
public object Result { get; set; }
}
A few words about Cacher. You might notice that I don't use Monitor.Enter() ( lock(...) ) while computing results. That's because copying the CacheItem reference (the CacheItem item = _cachedResults[code]; line) is a thread-safe operation: it happens in a single atomic step. It is also OK if more than one thread changes this reference at the same time - all of them will be valid.
You could add a dictionary to your class, using a comma-separated string that includes the function name and its arguments as the key, and the result as the value. Your functions can then check the dictionary for the existence of that value. Save the dictionary in the cache so that it exists for all users.
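A very rough sketch of that idea (the names here are made up for illustration, and a real version would need locking or a ConcurrentDictionary since web requests run concurrently):
// Requires System.Web and System.Collections.Generic.
// One shared dictionary stored in the ASP.NET cache so it exists for all users.
private static Dictionary<string, object> GetResultCache()
{
    var cache = HttpRuntime.Cache["ResultCache"] as Dictionary<string, object>;
    if (cache == null)
    {
        cache = new Dictionary<string, object>();
        HttpRuntime.Cache["ResultCache"] = cache;
    }
    return cache;
}
// Key on the function name plus its arguments, comma separated.
public int AddCached(int a, int b)
{
    string key = string.Format("Add,{0},{1}", a, b);
    var cache = GetResultCache();
    object cached;
    if (cache.TryGetValue(key, out cached))
    {
        return (int)cached;
    }
    int result = a + b; // the "expensive" work goes here
    cache[key] = result;
    return result;
}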
PostSharp is your one-stop shop for this if you want to create a [Cache] attribute (or similar) that you can stick on any method anywhere. When I used PostSharp previously, I could never get past how slow it made my builds (this was back in 2007-ish, so it might not be relevant anymore).
An alternate solution is to look into using Render.Partial with ASP.NET MVC in combination with OutputCaching. This is a great solution for serving html for widgets / page regions.
Another solution that would be with MVC would be to implement your [Cache] attribute as an ActionFilterAttribute. This would allow you to take a controller method and tag it to be cached. It would only work for controller methods since the AOP magic only can occur with the ActionFilterAttributes during the MVC pipeline.
Implementing AOP through ActionFilterAttribute has evolved into the go-to solution for my shop.
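For illustration, a minimal sketch of such an attribute in classic ASP.NET MVC might look like this (the key-building and expiration policy are made up for the example, not taken from a library):
using System;
using System.Web;
using System.Web.Caching;
using System.Web.Mvc;
public class CacheAttribute : ActionFilterAttribute
{
    public int DurationSeconds { get; set; }
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // If we already have a result for this controller/action/query, short-circuit the action.
        var cached = HttpRuntime.Cache[BuildKey(filterContext)] as ActionResult;
        if (cached != null)
        {
            filterContext.Result = cached;
        }
    }
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        string key = BuildKey(filterContext);
        if (HttpRuntime.Cache[key] == null && filterContext.Result != null)
        {
            HttpRuntime.Cache.Insert(key, filterContext.Result, null,
                DateTime.UtcNow.AddSeconds(DurationSeconds), Cache.NoSlidingExpiration);
        }
    }
    private static string BuildKey(ControllerContext context)
    {
        // Key on controller, action and query string so different parameters are cached separately.
        var route = context.RouteData.Values;
        return string.Format("{0}.{1}?{2}",
            route["controller"], route["action"], context.HttpContext.Request.QueryString);
    }
}
Usage would then just be [Cache(DurationSeconds = 60)] on a controller action.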
AFAIK, frankly, no.
But it would be quite an undertaking for the framework to implement this generically, for everybody, in all circumstances. You could, however, tailor something quite sufficient to your needs by simply (where "simply" is relative to those needs, obviously) using abstraction, inheritance and the existing ASP.NET Cache.
If you don't need attribute configuration but accept code configuration, maybe MbCache is what you're looking for?
This is what I'm currently doing:
protected void setupProject()
{
bool lbDone = false;
int liCount = 0;
while (!lbDone && liCount < pMaxRetries)
{
try
{
pProject.ProjectItems.Item("Class1.cs").Delete();
lbDone = true;
}
catch (System.Runtime.InteropServices.COMException loE)
{
liCount++;
if ((uint)loE.ErrorCode == 0x80010001)
{
// RPC_E_CALL_REJECTED - sleep half sec then try again
System.Threading.Thread.Sleep(pDelayBetweenRetry);
}
}
}
}
Now I have that try/catch block around most calls to the EnvDTE stuff, and it works well enough. The problem I have is when I loop through a collection and want to do something to each item exactly once.
foreach(ProjectItem pi in pProject.ProjectItems)
{
// do something to pi
}
Sometimes I get the exception in the foreach(ProjectItem pi in pProject.ProjectItems) line.
Since I don't want to start the foreach loop over if I get the RPC_E_CALL_REJECTED exception I'm not sure what I can do.
Edit to answer comment:
Yes, I'm automating VS from another program, and yes, I'm usually using VS for something else at the same time. We have an application that reads an XML file and then generates around 50 VS solutions based on it. This usually takes a couple of hours, so I try to do other work while it is happening.
There is a solution on this MSDN page: How to: Fix 'Application is Busy' and 'Call was Rejected By Callee' Errors. It shows how to implement a COM IOleMessageFilter interface so that it will automatically retry the call.
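In case the link moves, the core of it is registering a COM message filter that asks COM to retry rejected calls instead of surfacing them as exceptions. Roughly like this (a sketch of the pattern from that article, so double-check it against the original):
using System;
using System.Runtime.InteropServices;
public class MessageFilter : IOleMessageFilter
{
    // Call Register() on the (STA) thread that makes the EnvDTE calls, and Revoke() when done.
    public static void Register()
    {
        IOleMessageFilter oldFilter;
        CoRegisterMessageFilter(new MessageFilter(), out oldFilter);
    }
    public static void Revoke()
    {
        IOleMessageFilter oldFilter;
        CoRegisterMessageFilter(null, out oldFilter);
    }
    int IOleMessageFilter.HandleInComingCall(int dwCallType, IntPtr hTaskCaller, int dwTickCount, IntPtr lpInterfaceInfo)
    {
        return 0; // SERVERCALL_ISHANDLED
    }
    int IOleMessageFilter.RetryRejectedCall(IntPtr hTaskCallee, int dwTickCount, int dwRejectType)
    {
        if (dwRejectType == 2) // SERVERCALL_RETRYLATER
        {
            return 99; // values 0-99 mean "retry right away"
        }
        return -1; // cancel the call
    }
    int IOleMessageFilter.MessagePending(IntPtr hTaskCallee, int dwTickCount, int dwPendingType)
    {
        return 2; // PENDINGMSG_WAITDEFPROCESS
    }
    [DllImport("Ole32.dll")]
    private static extern int CoRegisterMessageFilter(IOleMessageFilter newFilter, out IOleMessageFilter oldFilter);
}
[ComImport, Guid("00000016-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IOleMessageFilter
{
    [PreserveSig] int HandleInComingCall(int dwCallType, IntPtr hTaskCaller, int dwTickCount, IntPtr lpInterfaceInfo);
    [PreserveSig] int RetryRejectedCall(IntPtr hTaskCallee, int dwTickCount, int dwRejectType);
    [PreserveSig] int MessagePending(IntPtr hTaskCallee, int dwTickCount, int dwPendingType);
}
With the filter registered, the foreach loop usually just works, because rejected calls are retried transparently.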
First, Hans doesn't want to say so, but the best answer to "how do I do this" is "don't do this". Just use separate instances of Visual Studio for your automation and your other work, if at all possible.
You need to move the statement that can fail to somewhere you can handle the error. You can do this by using an integer index instead of foreach.
// You might also need try/catch for this!
int cProjectItems = pProject.ProjectItems.Count;
for (int iProjectItem = 1; iProjectItem <= cProjectItems; iProjectItem++)
{
    bool bSucceeded = false;
    while (!bSucceeded)
    {
        try
        {
            // ProjectItems is a COM collection, so Item() is 1-based
            ProjectItem pi = pProject.ProjectItems.Item(iProjectItem);
            // do something with pi
            bSucceeded = true;
        }
        catch (System.Runtime.InteropServices.COMException loE)
        {
            if ((uint)loE.ErrorCode == 0x80010001)
            {
                // RPC_E_CALL_REJECTED - sleep half sec then try again
                System.Threading.Thread.Sleep(pDelayBetweenRetry);
            }
            else
            {
                throw; // some other COM error - don't retry forever
            }
        }
    }
}
I didn't have much luck with the recommended way from MSDN, and it seemed rather complicated. What I have done is to wrap up the re-try logic, rather like in the original post, into a generic utility function. You call it like this:
Projects projects = Utils.call( () => (m_dteSolution.Projects) );
The 'call' function calls the function (passed in as a lambda expression) and will retry if necessary. Because it is a generic function, you can use it to call any EnvDTE properties or methods, and it will return the correct type.
Here's the code for the function:
public static T call<T>(Func<T> fn)
{
// We will try to call the function up to 100 times...
for (int i=0; i<100; ++i)
{
try
{
// We call the function passed in and return the result...
return fn();
}
catch (COMException)
{
// We've caught a COM exception, which is most likely
// a Server is Busy exception. So we sleep for a short
// while, and then try again...
Thread.Sleep(1);
}
}
throw new Exception("'call' failed to call function after 100 tries.");
}
As the original post says, foreach over EnvDTE collections can be a problem, as there are implicit calls during the looping. So I use my 'call' function to get the Count property and then iterate using an index. It's uglier than foreach, but the 'call' function makes it not so bad, as there aren't so many try...catches around. For example:
int numProjects = Utils.call(() => (projects.Count));
for (int i = 1; i <= numProjects; ++i)
{
Project project = Utils.call(() => (projects.Item(i)));
parseProject(project);
}
I was getting the same error using C# to read/write to Excel. Oddly, it worked in debug mode but not on a deployed machine. I simply changed the Excel app to be Visible, and it works properly, albeit about twice as slow. It is annoying to have an Excel app open and close dynamically on your screen, but this seems to be the simplest work-around for Excel.
Microsoft.Office.Interop.Excel.Application oApp = new ApplicationClass();
oApp.Visible = true;
oApp.DisplayAlerts = false;