Detecting if JIT is available - C#

What would be the canonical (if any, otherwise any reliable) way of detecting if the JIT engine is available in a portable way?
For example, Xamarin.iOS does not support JIT since the iOS platform enforces DEP. Mono is quite good at bridging the gap by interpreting most things like lambda expressions these days, but some things are not (properly) implemented, leading to runtime exceptions that hurt performance badly.
I want to do this detection from a shared Portable Class Library, if possible.

You could try performing an operation that you know will fail under AoT but not under a JIT (creating a type dynamically, for example) and use a try/catch block to determine whether or not JIT is available.
It could be implemented like this (note that this will only fail when the linker is set to "Link All Assemblies"):
private static bool? _isJITAvailable = null;

public static bool IsJITAvailable()
{
    if (_isJITAvailable == null)
    {
        try
        {
            // GenericClass<T> is a placeholder for any generic class defined in your code.
            // This crashes on iPhone (full AOT).
            typeof(GenericClass<>).MakeGenericType(typeof(int));
            _isJITAvailable = true;
        }
        catch (Exception)
        {
            _isJITAvailable = false;
        }
    }
    return _isJITAvailable.Value;
}
I would not do so, though. Do you really need those "better performing callbacks" to begin with? This really sounds like a premature optimization.

After some digging into the Mono source code, it seems the "magic" build-time property is FULL_AOT_RUNTIME. For example, System.Reflection.Emit.AssemblyBuilder is conditionally compiled based on this property being unset, meaning
var canJit = Type.GetType ("System.Reflection.Emit.AssemblyBuilder") != null;
should get me what I want.
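A minimal sketch of caching that probe so the reflection lookup runs only once (the member names here are my own):

static readonly Lazy<bool> _canJit = new Lazy<bool>(
    // AssemblyBuilder is compiled out of Mono's full-AOT profile, so its absence
    // is used here as a proxy for "no JIT available".
    () => Type.GetType("System.Reflection.Emit.AssemblyBuilder") != null);

public static bool CanJit
{
    get { return _canJit.Value; }
}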

Why not test the performance of the two callbacks at runtime?
If yours is slower than the platform's, use the platform's.
If the platform's is slower than yours, use yours.
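If you do go that way, a rough sketch might look like the following; the delegate names are hypothetical, and a real measurement would want a warm-up pass and several iterations.

static Func<int, int> PickFaster(Func<int, int> customCallback, Func<int, int> platformCallback)
{
    long Time(Func<int, int> candidate)
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        for (int i = 0; i < 10000; i++)
            candidate(i);                  // crude measurement loop
        sw.Stop();
        return sw.ElapsedTicks;
    }

    return Time(customCallback) <= Time(platformCallback) ? customCallback : platformCallback;
}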

Related

Entity Framework 6 - Enforce asynchronous queries, compile time prevent synchronous calls

With the move to EF6.1, our goal is to use exclusively the Async/Await options when talking to our data sets. While porting from our previous Linq2Sql code, there are many .ToList(), .FirstOrDefault(), and .Count() calls. I know we can search for and fix those all, but it would be nice if we could, at compile time, prevent those functions from even being permitted into a build. Does anyone have a creative idea on how to accomplish this? Even compiler warnings would be fine (such as those produced by the Obsolete attribute).
You can use the .NET Compiler Platform to write a Diagnostic and Code Fix that will look for these patterns and provide warnings/errors.
You could even implement a Syntax Transformation to automatically change these constructs - though the effort might be more expensive than just doing it by hand.
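A rough sketch of such an analyzer follows; the diagnostic ID, messages, and the list of flagged method names are made up, and a production version would also use the semantic model to confirm the receiver is an IQueryable/DbSet rather than an in-memory collection.

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class SynchronousQueryAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "EFSYNC001",
        title: "Synchronous query operator",
        messageFormat: "Use the asynchronous counterpart of '{0}'",
        category: "Usage",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    private static readonly ImmutableHashSet<string> Flagged =
        ImmutableHashSet.Create("ToList", "FirstOrDefault", "Count");

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        // Flag every invocation whose method name is on the banned list.
        context.RegisterSyntaxNodeAction(AnalyzeInvocation, SyntaxKind.InvocationExpression);
    }

    private static void AnalyzeInvocation(SyntaxNodeAnalysisContext context)
    {
        var invocation = (InvocationExpressionSyntax)context.Node;
        var memberAccess = invocation.Expression as MemberAccessExpressionSyntax;
        if (memberAccess != null && Flagged.Contains(memberAccess.Name.Identifier.Text))
        {
            context.ReportDiagnostic(
                Diagnostic.Create(Rule, memberAccess.Name.GetLocation(),
                                  memberAccess.Name.Identifier.Text));
        }
    }
}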
Following up to this... I never found a solution that can detect this at compile time, but I was able to do this in code in the DataContext:
public EfMyCustomContext(string connectionString)
    : base(string.Format(CONNECTION_STRING, connectionString))
{
#if DEBUG
    this.Database.Log = LogDataBaseCall;
#endif
}

#if DEBUG
private void LogDataBaseCall(string s)
{
    if (s.Contains("Executing "))
    {
        if (!s.Contains("asynchronously"))
        {
            // This code was not executed asynchronously.
            // Please look at the stack trace and identify what needs to be loaded.
            // Note: an entity.SomeOtherEntityOrCollection can be loaded with the
            // eConnect API call entity.SomeOtherEntityOrCollectionLoadAsync() before
            // using the object that is going to hit the sub object. This is the most
            // common mistake, and this breakpoint will help you identify all
            // synchronous code. eConnect does not want any synchronous code in the code base.
            System.Diagnostics.Debugger.Break();
        }
    }
}
#endif
Hope this helps someone else, and I would still love it if there were some option at compile time.

#if DEBUG for exception handling

I have a public method ChangeLanguage that is in a class library that is supposed to be used by other people who have no idea what the source code is but know what it can do.
public string ChangeLanguage(string language)
{
#if DEBUG
    // Check if the new language is not null or an empty string.
    language.ThrowArgumentNullExceptionIfNullOrEmpty("language", GetString("0x000000069"));
#else
    if (string.IsNullOrEmpty(language))
        language = _fallbackLanguage;
#endif
    // ...
}
To me it seemed obvious to check whether a language was actually passed to the method and, if not, throw an error to the developer. Though this check is only a very small performance loss, I think it's better not to throw an exception but just use the fallback language when no language is provided.
I'm wondering if this is a good design approach and if I should keep using it at other places in the library like here:
_appResDicSource = Path.Combine("\\" + _projectName + ";component", _languagesDirectoryName, _fileBaseName + "_" + language + ".xaml");
_clsLibResDicSource = "\\Noru.Base;component\\Languages\\Language_" + language + ".xaml";

ResourceDictionary applicationResourceDictionary;
ResourceDictionary classLibraryResourceDictionary;

try { applicationResourceDictionary = new ResourceDictionary { Source = new Uri(_appResDicSource, UriKind.RelativeOrAbsolute) }; }
catch
{
#if DEBUG
    throw new IOException(string.Format(GetString("1x00000006A"), _appResDicSource));
#else
    return ChangeLanguage(_fallbackLanguage);
#endif
}

try { classLibraryResourceDictionary = new ResourceDictionary { Source = new Uri(_clsLibResDicSource, UriKind.RelativeOrAbsolute) }; }
catch
{
#if DEBUG
    throw new IOException(string.Format(GetString("1x00000006B"), _clsLibResDicSource));
#else
    return ChangeLanguage(_fallbackLanguage);
#endif
}
It depends on the semantics of the call, but I would consider Debug.Fail (which is eliminated if DEBUG is not defined):
public string ChangeLanguage(string language)
{
    if (string.IsNullOrEmpty(language))
    {
        Debug.Fail("language is NOK");
        language = _fallbackLanguage;
    }
    // ...
}
The issue is that, on the one hand, as mentioned by @OndrejTucny and @poke, it is not desirable to have different logic for different build configurations. That is correct.
But on the other hand, there are cases where you do not want the application to crash in the field due to a minor error. Yet if you just ignore the error unconditionally, you decrease the chances of detecting it even on your local system.
I do not think there is a universal solution. In general, you may end up deciding to throw or not, to log or not, to add an assertion or not, always or sometimes. The answers depend on the concrete situation.
No, this is certainly not a good design approach. The problem is that your method's DEBUG and RELEASE contracts are different. I wouldn't be happy using such an API as a developer.
The inherent problem of your solution is that you will end up with production behavior that cannot be reproduced in the dev/test environment.
Either the semantics of your library are such that not providing a language code is an error, in which case you should always raise an exception, or it is a valid condition that leads to some pre-defined behavior, such as using the 'fallback' language code, in which case you should always substitute the default. It shouldn't be both, decided only by the selection of a particular compilation of your assembly.
I think this is a bad idea. Mostly because this separates the logic of a debug build and a release build. This means that when you—as a developer—only build debug builds you will not be able to detect errors with the release-only logic, and you would have to test that separately (and in case of errors, you would have a release build, so you couldn’t debug those errors properly). So this adds a huge testing burden which needs to be handled for example by separate unit tests that run on debug and release builds.
That being said, performance is not really an issue. That extra logic only happens in debug builds and debug builds are not optimized for performance anyway. Furthermore, it’s very unlikely that such checks will cause any noticeable performance problems. And unless you profile your application and actually verify that it is causing performance problems, you shouldn’t try to optimize it.
How often is this method getting called? Performance is probably not a concern.
I don't think the decision to select the fallback language is the library's to make. Throw the exception, and let the clients choose a default or fallback if they wish.
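A sketch of that single contract, with the fallback decision pushed to the caller; the localizer variable and the "en" fallback below are purely illustrative:

public string ChangeLanguage(string language)
{
    if (string.IsNullOrEmpty(language))
        throw new ArgumentException("A language code must be provided.", "language");
    // ... load the resource dictionaries as before
}

// A caller that prefers a fallback over failing:
string result;
try
{
    result = localizer.ChangeLanguage(requestedLanguage);
}
catch (ArgumentException)
{
    result = localizer.ChangeLanguage("en");   // caller-chosen fallback
}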

What's the explanation of this behavior in C#

Consider this simple console application:
class Program
{
    static void Main(string[] args)
    {
        var human = CreateHuman(args[0]);
        Console.WriteLine("Created Human");
        Console.ReadLine();
    }

    public static object CreateHuman(string association)
    {
        object human = null;
        if (association == "is-a")
        {
            human = new IsAHuman();
        }
        else
        {
            human = new HasAHuman();
        }
        return human;
    }
}

public class IsAHuman : Human
{
}

public class HasAHuman
{
    public Human Human { get; set; }
}
The Human class is in another assembly, say HumanAssembly.dll. If HumanAssembly.dll exists in the bin directory of our console app, everything would be fine. And as we might expect, by removing it we encounter FileNotFoundException.
I don't understand this part, though. Comment out the human = new IsAHuman(); line, recompile, and remove HumanAssembly.dll. The console app won't throw any exception in this case.
My guess is that the CLR differentiates between is-a and has-a associations. In other words, the CLR tries to resolve, and probably load, all the types that appear in a class definition, but it can instantiate a class without knowing what's inside it. I'm not sure about my interpretation, though.
I fail to find a good explanation. What is the explanation for this behavior?
You are seeing the behavior of the JIT compiler. Just In Time. A method doesn't get compiled until the last possible moment, just before it is called. Since you removed the need to actually construct a Human object, there is no code path left that forces the jitter to load the assembly. So your program won't crash.
The last remaining reference to Human is the HasAHuman.Human property. You don't use it.
Predicting when the jitter is going to need to load an assembly is not that straightforward in practice. It gets pretty difficult to reason through when you run the Release build of your code: that normally enables the optimizer built into the jitter, and one of its core optimization strategies is inlining a method. To do that, it needs access to the method body before the method is called. You'd need an extra level of indirection, an extra method with the [MethodImpl(MethodImplOptions.NoInlining)] attribute, to stop it from having a peek. That gets a bit far into the deep end; always consider a plug-in architecture first, something like MEF.
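As a sketch of that extra level of indirection (the method name below is mine), isolating the only statement that touches Human in a non-inlinable method keeps the optimizer from needing HumanAssembly.dll before the method is actually called:

using System.Runtime.CompilerServices;

[MethodImpl(MethodImplOptions.NoInlining)]
public static object CreateIsAHuman()
{
    // The jitter only has to resolve Human (and load HumanAssembly.dll) when this
    // method body gets compiled, i.e. just before its first call.
    return new IsAHuman();
}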
Here is a great explanation of what you are looking for:
The CLR Loader
Especially in the following lines:
This policy of loading types (and assemblies and modules) on demand means that parts of a program that are not used are never brought into memory. It also means that a running application will often see new assemblies and modules loaded over time as the types contained in those files are needed during execution. If this is not the behavior you want, you have two options. One is to simply declare hidden static fields of the types you want to guarantee are loaded when your type is loaded. The other is to interact with the loader explicitly.
As the highlighted sentence says, if your code does not execute a specific line, then the types used on that line won't be loaded, even if the code is not commented out.
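To illustrate the two options from the quote, here is a sketch against the question's code; the field name is mine, and the assembly name comes from the question:

class Program
{
    // Option 1 (per the quote): a static field of the type ties loading of
    // HumanAssembly.dll to the loading of Program itself.
    static Human _ensureHumanIsLoaded;

    static void Main(string[] args)
    {
        // Option 2: interact with the loader explicitly, up front.
        System.Reflection.Assembly.Load("HumanAssembly");
        // ... rest of Main as in the question
    }
}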
Here is also a similar answer that you might be interested in:
How are DLLs loaded by the CLR?

Is '#IF DEBUG' deprecated in modern programming?

This is my first StackOverflow question so be nice! :-)
I recently came across the following example in some C# source code:
public IFoo GetFooInstance()
{
#if DEBUG
    return new TestFoo();
#else
    return new Foo();
#endif
}
Which lead me to these questions:
Is the use of "#IF DEBUG" unofficially deprecated? If not, what is considered to be a good implementation of its use?
Using a class factory and/or tools like MOQ, RhinoMocks, etc., how could the above example be implemented?
Using an IoC container, the entire function becomes redundant; instead of calling GetFooInstance you'd have code similar to:
ObjectFactory.GetInstance<IFoo>();
The setup of your IoC container could be in code or through a configuration file.
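For example, with Microsoft.Extensions.DependencyInjection as the container (the ObjectFactory call above looks like StructureMap's API), the composition root might be sketched like this; the single conditional now lives in one place instead of every factory method:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
#if DEBUG
services.AddTransient<IFoo, TestFoo>();   // test double in debug builds
#else
services.AddTransient<IFoo, Foo>();       // real implementation otherwise
#endif
var provider = services.BuildServiceProvider();
IFoo foo = provider.GetRequiredService<IFoo>();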
Nope. We use it all the time to sprinkle diagnostic information into our assemblies. For example, I have the following shortcut used when debugging:
#if DEBUG
if( ??? ) System.Diagnostics.Debugger.Break();
#endif
Where I can change ??? to any relevant expression, like Name == "Popcorn", etc. This ensures that none of the debugging code leaks into the release build.
Just like some of the other posters have mentioned, I use #if statements all the time for debugging scenarios.
The style of code that you have posted is more of a factory creation pattern, which is common. I use it frequently, and not only do I not consider it deprecated, I consider the use of #if and #define statements to be an important tool in my bag of tricks.
I believe CastleWindsor (http://www.castleproject.org/container/index.html) also has an IoC container. I believe the general pattern is that, in the configuration file, you state whether TestFoo or Foo is the class to create for IFoo when CastleWindsor initializes the IoC container.
Yes. I would strongly advise AGAINST using "#IF DEBUG" except in rare circumstances.
It was commonplace in C, but you should not use it in a modern language such as C#, for several reasons:
Code (and header) files become nightmarish
It is too easy to make a mistake and leave a conditional in (or out) for a Release build
Testing becomes a nightmare: you need to test many combinations of builds
Not all code is compile-checked, unless you compile for ALL possible conditional symbols!
@Richard's answer shows how you can replace it using IoC (much cleaner).
I strongly deprecate #if; instead use if with a manifest constant:
const bool DEBUG = false; // a compile-time constant, declared somewhere central
public IFoo GetFooInstance()
{
    if (DEBUG)
        return new TestFoo();
    else
        return new Foo();
}
Why is #IF bad? Because in the #IF version, not all code is type-checked. For complicated, long-lived codebases, bit rot can set in, and then suddenly you need DEBUG but the code won't build. With the if version, the code always builds, and given even the most minimal optimization settings, your compiler will completely eliminate the unreachable branch. In other words, as long as DEBUG is known to be always false (and a compile-time constant guarantees that), there is no run-time cost to using if.
And you are guaranteed that if you change DEBUG from false to true, the system will still build.
Your compiler should be smart enough to figure out that:
static final boolean DEBUG = false;
...
if (DEBUG)
    return new debugMeh();
else
    return new meh();
the method never has to be called and can, in fact, be compiled out of the final assembly.
That, plus the fact that even the unoptimized performance difference wouldn't amount to anything significant, makes using a different syntax mostly unnecessary.
EDIT: I'd like to point out something interesting here. Someone in comments just said that this:
#if DEBUG
    return new TestFoo();
#else
    return new Foo();
#endif
was easier to write than this:
if (DEBUG)
    return new TestFoo();
else
    return new Foo();
I find it really amazing the lengths that people will go to to defend the way they've done things as correct. Another said that they would have to define a DEBUG variable in each file. I'm not sure about C#, but in Java we generally put a
public static final boolean DEBUG = true;
in our logger or another central factory object (although we actually find better ways to do this, such as using DI).
I'm not saying the same facilities don't exist in C#, I'm just amazed at the lengths people will go to to prove the solution they hold onto because of habit is correct.

How do I write (test) code that will not be optimized by the compiler/JIT?

I don't really know much about the internals of compiler and JIT optimizations, but I usually try to use "common sense" to guess what could be optimized and what couldn't. So there I was writing a simple unit test method today:
@Test // [Test] in C#
public void testDefaultConstructor() {
    new MyObject();
}
This method is actually all I need. It checks that the default constructor exists and runs without exceptions.
But then I started to think about the effect of compiler/JIT optimizations. Could the compiler/JIT optimize this method by eliminating the new MyObject(); statement completely? Of course, it would need to determine that the call graph does not have side effects to other objects, which is the typical case for a normal constructor that simply initializes the internal state of the object.
I presume that only the JIT would be allowed to perform such an optimization. This probably means that it's not something I should worry about, because the test method is being performed only once. Are my assumptions correct?
Nevertheless, I'm trying to think about the general subject. When I thought about how to prevent this method from being optimized, I figured I might use assertTrue(new MyObject().toString() != null), but this is very dependent on the actual implementation of the toString() method, and even then, the JIT can determine that the toString() method always returns a non-null string (e.g. if Object.toString() is actually being called) and thus optimize the whole branch away. So this approach wouldn't work.
I know that in C# I can use [MethodImpl(MethodImplOptions.NoOptimization)], but this is not what I'm actually looking for. I'm hoping to find a (language-independent) way of making sure that some specific part(s) of my code will actually run as I expect, without the JIT interfering in this process.
Additionally, are there any typical optimization cases I should be aware of when creating my unit tests?
Thanks a lot!
Don't worry about it. The JIT is not allowed to optimize anything in a way that makes a difference to your system's behavior (other than speed). If you new up an object, code gets called and memory gets allocated; it HAS to work.
If you had it guarded by an if (false), where the false comes from a final constant, the allocation could be optimized out of the system completely; the JIT could then detect that the method doesn't do anything and optimize it out as well (in theory).
Edit: by the way, it can also be smart enough to determine that this method:
void newIfTrue(boolean b) {
    if (b)
        new ThisClass();
}
will always do nothing if b is false, eventually figure out that at some point in your code b is always false, and compile this routine out of that code completely.
This is where the JIT can do stuff that's virtually impossible in any non-managed language.
I think if you are worried about it getting optimized away, you may be doing a bit of testing overkill.
In a static language, I tend to think of the compiler as a test. If it passes compilation, that means that certain things are there (like methods). If you don't have another test that exercises your default constructor (which would prove it won't throw exceptions), you may want to think about why you are writing that default constructor in the first place (YAGNI and all that).
I know there are people that don't agree with me, but I feel like this sort of thing is just something that will bloat out your number of tests for no useful reason, even looking at it through TDD goggles.
Think about it this way:
Let's assume that the compiler can determine that the call graph doesn't have any side effects (I don't think that is possible; I vaguely remember something about P=NP from my CS courses). It would then optimize away any method that doesn't have side effects. Since most tests don't have, and shouldn't have, any side effects, the compiler could optimize them all away.
The JIT is only allowed to perform operations that do not affect the guaranteed semantics of the language. Theoretically, it could remove the allocation and call to the MyObject constructor if it can guarantee that the call has no side effects and can never throw an exception (not counting OutOfMemoryError).
In other words, if the JIT optimizes the call out of your test, then your test would have passed anyway.
PS: Note that this applies because you are doing functionality testing as opposed to performance testing. In performance testing, it's important to make sure the JIT does not optimize away the operation you are measuring, else your results become useless.
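For the performance-testing case, a common sketch is to make the measured result observable so the JIT cannot drop the work; the sink field and loop below are purely illustrative:

private long _sink;   // written so the measured loop has an observable effect

public void MeasureAddition()
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    long total = 0;
    for (int i = 0; i < 1000000; i++)
        total += i;                  // the operation being timed
    sw.Stop();
    _sink = total;                   // prevents the loop from being elided
    Console.WriteLine(sw.Elapsed);
}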
It seems that in C# I could do this:
[Test]
public void testDefaultConstructor() {
    GC.KeepAlive(new MyObject());
}
AFAIU, the GC.KeepAlive method will not be inlined by the JIT, so the code will be guaranteed to work as expected. However, I don't know a similar construct in Java.
Every I/O is a side effect, so you can just put
Object obj = new MyObject();
System.out.println(obj.toString());
and you're fine.
Why should it matter? If the compiler/JIT can statically determine no asserts are going to be hit (which could cause side effects), then you're fine.
