Imagine the following setup:
UWP Library:
MinVersion: 10240
TargetVersion: 16299
This library checks at runtime if the UniversalApiContract Version 5 is present.
If yes, it will use the new NavigationView control (a sketch of this check follows the setup).
UWP App:
MinVersion: 10240
TargetVersion: 10240
This app references the UWP Library project.
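The check in the library looks roughly like this. This is a hedged sketch: ApiInformation, NavigationView, and SplitView are real WinRT APIs, while the surrounding factory class is hypothetical.

using Windows.Foundation.Metadata;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public static class NavigationFactory
{
    public static UIElement CreateNavigation()
    {
        // UniversalApiContract version 5 shipped with build 16299 and contains NavigationView.
        if (ApiInformation.IsApiContractPresent("Windows.Foundation.UniversalApiContract", 5))
        {
            return new NavigationView(); // "light-up" path on newer OS builds
        }
        return new SplitView(); // fallback control available since 10240
    }
}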
When I run this app on my computer, which has Windows 10 version 16299 installed, the following happens:
The UWP Library checks at runtime for the API contract. As I have the newest version of Windows 10, it is present.
Then it tries to create the NavigationView control, and I get a TypeLoadException with the message Could not find Windows Runtime type 'Windows.UI.Xaml.Controls.NavigationView'.
What? Why? Does the ApiInformation class not respect the target version of the running app?
What can I do to work around this issue?
I thought ApiInformation was the way to avoid this, but apparently not?!
Here is a Github repository showcasing the error:
https://github.com/haefele/ApiInformationTargetVersionFail
If you set the target version of the MyApp project to 16299, everything works fine.
[Edit May 29 2018]
The methods of the ApiInformation type are based on a trivial lookup of the WinRT metadata on disk -- if the metadata is there, the call succeeds. This enables "light-up" of new features on new platforms without increasing your minimum version. What's important, though, is that ApiInformation knows nothing about the implementation of the API: sometimes it might be missing (e.g. on early "Insider" builds of the OS) and sometimes it might not work due to "quirks" (see the example below). .NET also has a different view of the world due to the way the JIT and .NET Native toolchains work.
This can cause problems...
.NET apps use a concept of a "Union WinMD" which is the union of all known types (including Extension SDKs) that exist in the Windows SDK that corresponds to the MaxVersionTested setting of the app. If you're running the app on a down-level platform, ApiInformation will tell you the API doesn't exist, but .NET can still JIT methods based on the Union WinMD and perform some reflection tasks. If you actually try and call the API (because you forgot the ApiInformation check) you will get a MissingMethodException at runtime because the API doesn't really exist.
A different problem can occur if you include a higher-versioned .NET library inside a lower-versioned app and then try to run it on the higher-versioned build of the OS. In this case, ApiInformation will succeed because the type exists in the system metadata, but .NET will throw a MissingMethodException at runtime because the type didn't exist in the Union WinMD used to build the app.
Important: This is based on the Target Version (aka MaxVersionTested) of the app, not the library!
If you build a release version of the app, you will even see the .NET Native toolchain display a warning such as this in the Output window:
warning : ILTransform : warning ILT0003: Method 'Foo.Bar()' will always throw an exception due to the missing method 'SomeNewType.NewMethod()'. There may have been a missing assembly.
There is no good way around this, other than to build your application with the same target version as the library (so that it can resolve all the references).
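For reference, the app's target device family versions live in Package.appxmanifest. A minimal sketch, using the version numbers from the question's setup:

<Dependencies>
  <TargetDeviceFamily Name="Windows.Universal"
                      MinVersion="10.0.10240.0"
                      MaxVersionTested="10.0.16299.0" />
</Dependencies>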
Another problem you can encounter is when your app (or a library it consumes) uses APIs "from the future" that didn't exist in the OS version listed as the MaxVersionTested of the app. Many of those APIs will work, but some don't, due to incompatibilities with the simulated legacy mode the app is running in.
Hypothetical example of the library problem
Imagine that version X of the OS only supported black-and-white apps, where the background is always white and text, graphics, etc. are always black. Apps are built using this underlying assumption - including having graphics buffers that allocate only 1-bit-per-pixel, or never worrying about text being invisible because the background and foreground colours are the same. Everything is fine.
Now version Y of the OS comes out, and it supports colour graphics (say, 8-bits-per-pixel). Along with this new feature comes a pair of new APIs, SetForegroundColor() and SetBackgroundColor(), that let you choose whatever colour you want. Any app (or library) that asks ApiInformation whether these two new APIs exist will succeed on version Y of the OS, and any app with a MaxVersionTested of at least Y can use them successfully. But for compatibility reasons they cannot work in an app that only targeted version X, because it has no idea colours exist. Its graphics buffers are the wrong size, its text might become invisible, and so on. So the APIs will fail at runtime when used in an X-targeted app, even though the OS has the metadata (and the implementation) to support them.
Unfortunately there is no good way of handling this situation today, but it is a relatively rare occurrence. It is equivalent to a legacy Win32 library using LoadLibrary / GetProcAddress (or a legacy .NET library using reflection) to discover APIs that are "from the future."
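To make the analogy concrete, here is a hedged sketch of the reflection flavour; the assembly-qualified string is the standard format for resolving WinRT types from managed code.

using System;

// Probing for an API "from the future": the lookup succeeds wherever the
// metadata exists, but the created instance may still misbehave inside an
// app targeted at an older OS version.
Type navViewType = Type.GetType(
    "Windows.UI.Xaml.Controls.NavigationView, Windows, ContentType=WindowsRuntime");
if (navViewType != null)
{
    object navView = Activator.CreateInstance(navViewType);
}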
Related
I have several issues with several SDKs coming from OEM manufacturers for specific devices. Each SDK is usually based on a C or C++ DLL, so I have a lot of marshaling going around (a lot === YOU CAN'T EVEN IMAGINE). The problems start with the next version of an SDK: when they extend some functions or some structures, they effectively break compatibility. In the past I made a copy of our library supporting their device and started making changes to support the new SDK. But each time our library supported only one specific SDK, and upgrades of our systems were tough (the installation script is one heavy thing as well, a ~3 GB install).
I have 78 projects in the solution, commonly 4-5 libraries for each OEM manufacturer, and that is without any service tools. Yesterday I said NO MORE and started researching how to recompile C# code at runtime and reload/replace the same assembly without quitting the app.
And the result is the following:
- The class file that defines the external C/C++ DLL API was moved into an external project referencing only System.dll. And, me being insane, I already had each SDK version's changes wrapped in #if / #elif / #endif, so I could recompile the last version of our library to support a previous version of the SDK (though that was done maybe only once). I used #define symbols along with CSharpCodeProvider to recompile this assembly at runtime (see the sketch after these steps). The idea goes like this:
Application loading ...
Open the main SDK file and get its file version (extract the version and identify it).
Load the original external assembly in a new AppDomain (so I can destroy the domain later).
Extract the current version from the external assembly.
Destroy the new AppDomain to release the hook on the external assembly.
If the versions mismatch, recompile the external assembly (its source code is embedded within the parent assembly) and replace the original DLL with the freshly compiled one.
Continue loading application...
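A minimal sketch of the recompile step; CSharpCodeProvider and CompilerParameters are the real .NET Framework APIs, while the method and parameter names here are hypothetical.

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

static class WrapperRecompiler
{
    // Recompile the embedded wrapper source with the #define matching the
    // SDK version found on disk, producing a replacement external assembly.
    public static void RecompileExternalAssembly(string wrapperSource, string sdkDefine, string outputDllPath)
    {
        using (var provider = new CSharpCodeProvider())
        {
            var parameters = new CompilerParameters
            {
                GenerateExecutable = false,              // build a DLL, not an EXE
                OutputAssembly = outputDllPath,
                CompilerOptions = "/define:" + sdkDefine // e.g. "SDK_V3" selects the right #if branch
            };
            parameters.ReferencedAssemblies.Add("System.dll");

            CompilerResults results = provider.CompileAssemblyFromSource(parameters, wrapperSource);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException(results.Errors[0].ToString());
        }
    }
}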
So far this test approach works on one live demo system, and I was amazed. Switching from one SDK to another was flawless, without any hiccups.
The code also recompiles itself only when the SDK version changes. So, with some safeguards in place, I could say this is the first metamorphic code I've written: it recompiles/changes itself at runtime.
Unfortunately this approach requires me to add one more project for each OEM manufacturer's SDK, which effectively kills my first thought, the reason I said NO MORE. True, I now have only two libraries to maintain per OEM manufacturer, and there will be no more projects added after this. But...
I wonder: is there a better approach that would allow me to replace the DLL of the currently loaded assembly at runtime, truly from within the same assembly? Or to change executing code on the fly each time? This mainly concerns marshaled functions, classes, structures, constants, ...
Please note the code should be maintained from within the same project without any externals. Also note this project exposes only a hard-coded interface to the "outside" world (the interface is in a referenced, interface-only project; it is more complex than I describe here). But this "outside" world is blind to any OEM-specific stuff, which was the whole point of using an interface: to get exactly the same behavior across any OEM device.
Any ideas? thoughts? suggestions?
In .NET we have several platforms, each of which is composed of its own runtime, its own base libraries, and its own supporting software for booting the runtime and so forth.
Given those different platforms, we can target a specific platform when we are compiling our code. This means that we compile for a specific platform.
In the new .NET Core project model this is even clearer: in the project.json file we specify, in the frameworks section, the platforms we want to compile for by listing their TFMs (see the hypothetical snippet below).
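For illustration, a hypothetical frameworks section listing two TFMs (the TFM names are assumptions, not taken from the question):

{
  "frameworks": {
    "net46": { },
    "netstandard1.3": { }
  }
}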
My problem here is that, as I understand it, the main difference between developing for one platform or another is the set of base libraries available (for the full .NET Framework we have the whole BCL, for instance). But this seems to be a "runtime issue" rather than a "compile-time issue".
The reason is that the code is deployed as IL to the specific platform, and only when it is about to run will it check whether the necessary assemblies from the needed base libraries are available, right?
In that case, why is there this idea of "compiling for a specific platform"? Is the compilation process different for each platform? Is the generated IL different for each platform?
The IL is different, but generally only slightly: i.e. the assembly flags may be different, to indicate the target platform specified at compile time.
Of course, you may have conditionally-compiled code in your assembly, protected by #if directives. I assume you are not referring to that sort of difference. But just because the main part of the IL is the same from platform to platform, that doesn't necessarily mean you can run any IL on any platform.
Often, the target platform specified during compilation is a critical choice, because the managed code engages in some kind of interop with native code that's available only for a specific architecture. Another reason is that the program may require the x64 architecture for virtual address space reasons (i.e. the process expects to need to allocate more than the nominal 3 GB maximum available to x86 processes).
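As a small hedged example of that first reason: conditionally-compiled interop whose IL genuinely differs per platform target. The symbol and the DLL names here are hypothetical.

using System;
using System.Runtime.InteropServices;

public static class NativeBridge
{
#if X64
    // The 64-bit build links against the 64-bit native library.
    [DllImport("vendor64.dll")]
    public static extern IntPtr OpenDevice();
#else
    // The 32-bit build links against the 32-bit native library.
    [DllImport("vendor32.dll")]
    public static extern IntPtr OpenDevice();
#endif
}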
I've developed a simple and small Universal Windows App that uses EF7 and SQLite. It compiles and runs smoothly when the option "Compile with .NET Native tool chain" is unchecked.
If I check the option "Compile with .NET Native tool chain", I get the following compilation error:
Error Type 'System.MarshalByRefObject' was not included in compilation, but was referenced in type 'Microsoft.Data.Entity.Design.OperationExecutor'. There may have been a missing assembly.
After this there's a lot of other errors, but I believe that solving this one will also take care of the rest.
Does anyone know how to solve this?
I presume what has happened is that you're using a library that isn't targeting the .NET surface area available to UWP. The surface area for UWP is the set of APIs called .NET Core; you can see the source here: http://www.github.com/dotnet/corefx. Most likely you'll need a newer version of EF... although I know they've had some other issues with our ahead-of-time compilation strategy (see: https://github.com/aspnet/EntityFramework/issues/3603). We're continuing to work with them to get it sorted out and are hopeful that EF will be in a great place by Update 2 sometime in March.
The reason you only see this with .NET Native is because the compiler is walking your entire application at compile time in order to generate native code for everything that it thinks you're going to call. It happens to notice that this type is unavailable and correctly errors out. I presume you don't actually call this code path in your application because it would produce a similar error on CoreCLR... it would just happen at runtime and not compile time.
If you don't actually need this type (and everything else you need also doesn't need this type etc etc), it's possible that removing this directive from your application will allow the tree shaker to eliminate this type from your application before things go awry:
<Assembly Name="*Application*" Dynamic="Required All" />
This directive causes all of the types in your application, and in the non-framework libraries you reference, to be rooted and thus unable to be shaken away. Having this directive by default makes our analysis easier and keeps most folks from having to know anything about our analysis engine. It's possible that removing it can help you avoid the issue.
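For context, that directive lives in the Default.rd.xml runtime-directives file of a UWP project. The standard template looks like this, and the suggestion above is to remove the marked line:

<Directives xmlns="http://schemas.microsoft.com/netfx/2013/01/metadata">
  <Application>
    <!-- Removing this line lets the tree shaker drop unused types: -->
    <Assembly Name="*Application*" Dynamic="Required All" />
  </Application>
</Directives>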
Let me know if that works out or if you have any other questions. We always love to get feedback and provide some support at dotnetnative@microsoft.com.
I have a WPF application which utilizes a handwriting control.
By using an
<InkCanvas></InkCanvas>
in my XAML, I was able to get the user's strokes and turn them into text using the InkAnalysis class. However, this is strictly 32-bit, and my requirements dictate a 64-bit build.
Unable to find a 64-bit compatible library, I looked into upgrading to .NET 4.5 and utilizing the Windows 8 classes that are available to desktop apps (by also adding
<TargetPlatformVersion>8.1</TargetPlatformVersion> to the csproj file so that I could add the 'Windows' namespace references). Luckily, Windows.UI.Input.Inking is one of them.
However, when I add the reference to Windows.UI.Input.Inking, I get a build error which states:
Unknown build error, 'Cannot resolve dependency to Windows Runtime type 'Windows.Foundation.Metadata.PlatformAttribute'. When using the ReflectionOnly APIs, dependent Windows Runtime assemblies must be resolved on demand through the ReflectionOnlyNamespaceResolve event.'
I have looked into
Windows.Foundation.Metadata.PlatformAttribute
and it seems to want an enum member, either:
Windows.Foundation.Metadata.Platform.Windows
or
Windows.Foundation.Metadata.Platform.WindowsPhone
This is a desktop application, so I would obviously choose to target Platform.Windows, but cannot figure out how to tell the compiler this.
How can I incorporate this Windows.UI.Input.Inking class into my WPF application? My end goal is simply to convert strokes from the InkCanvas into text, in a 64-bit environment.
I discovered that I was receiving this error due to the reference added to the:
Windows.UI.Input.Inking
library. It seems that the correct way to add a reference to Windows 8/8.1 WinRT components (from a non-WinRT application) is the following:
Add <TargetPlatformVersion>8.1</TargetPlatformVersion> to the csproj file
Add a reference to the Windows library (this is the key: adding the specific lib, in this case Windows.UI.Input.Inking, causes the build error)
Add the more specific (e.g. Windows.UI.Input.Inking) reference in the actual file where the API is required
I am working on creating a NuGet package which will edit the csproj file, and add the Windows reference. I'll update this if/when it is completed.
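Once the reference resolves, the recognition step can look like the hedged sketch below. InkManager, RecognizeAsync, and GetTextCandidates are the real WinRT APIs; this assumes the strokes were already fed to the manager (via ProcessPointerDown/Update/Up) and that System.Runtime.WindowsRuntime is referenced so WinRT async operations can be awaited.

using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.UI.Input.Inking;

static class InkRecognition
{
    public static async Task<string> RecognizeStrokesAsync(InkManager inkManager)
    {
        // Ask the recognizer for results over all strokes collected so far.
        var results = await inkManager.RecognizeAsync(InkRecognitionTarget.All);
        // Join the top text candidate of each recognized segment.
        return string.Join(" ", results.Select(r => r.GetTextCandidates().FirstOrDefault()));
    }
}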
I'm working with an external DLL to consume an OCR device, using a wrapper written by me. I have run tests on the wrapper and it works perfectly. But when I use a WinForms project to consume the client class of the wrapper (located in another project), an error arises when calling the C# methods imported from the DLL (using [DllImport(...)]), saying that the DLL is not registered.
The error says:
"DLL Library function no found. Check registry install path."
All executions have been made in debug mode.
I've compared both projects' configurations. The most relevant difference is that the Test project targets Any CPU while the WinForms app targets x86.
What could it be?
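For reference, the wrapper's imports look something like this. This is a hypothetical example; the real entry points, calling convention, and signatures come from the vendor's DLL.

using System;
using System.Runtime.InteropServices;

internal static class OcrNative
{
    // Hypothetical entry point; the calling convention and signature are assumptions.
    [DllImport("ocr.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int OcrReadDocument(IntPtr deviceHandle, [Out] byte[] buffer, int bufferLength);
}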
Updates
I've tried to register the DLL using Regsvr32.exe, but it didn't work. I thought about using Gacutil.exe, but it would require uninstalling all frameworks beyond .NET Framework 1.1...
I was wondering... in the testing environment everything probably works well because the testing framework has its DLLs or executable files (or something like that) fully registered in Windows, so those are trusted DLLs. Is it possible that debug-generated DLLs are not trusted by Windows, and that this is why the problem arises?
I've created a form in the same troubling project and then called the OCR wrapper from a button I added to it. The OCR worked! Unfortunately, it is difficult to rewrite the first form, because we have invested a lot of hours in it; so I'm still wondering what I need to change in the troubling form...
I started the form's development again from scratch and added all the components related to it; everything worked well, and the OCR successfully read all the data. Then I loaded a combo box using a call to an ObjectContext, and the error appeared again... I'm using Entity Framework connected to Oracle.
I have a theory.
Let's imagine the following situation:
The ocr.dll depends on some other native DLL, let's call it other.dll [A].
If this is a static dependency, you'll see it through Dependency Walker.
If it's dynamic, you can use Sysinternals Process Explorer to monitor DLL loading in your working test project at run-time.
Your ADO.NET provider uses native DLLs under the hood (this is certainly true for ODP.NET) which depend on other.dll [B], which happens to have the same name but is actually a different DLL (or at least a different version) compared to other.dll [A].
Then, in run-time, this might happen:
When you connect to the database, ADO.NET provider dynamically loads its native DLLs, including the other.dll [B].
Then you try to call a function from the OCR DLL. The P/Invoke tries to load the OCR DLL dynamically and succeeds, but other.dll [B] is already loaded, and ocr.dll tries to use some function from it instead of from other.dll [A], where the function actually exists.
Welcome to DLL hell. So what can you do?
Try varying the order of calls to ocr.dll and the ADO.NET provider to see if anything changes (see the sketch after this list). If you are (very) lucky, other.dll [A] might actually be a newer version that is still backward-compatible with other.dll [B], and things might magically start to work.
Try another version of ADO.NET provider.
Try another ADO.NET provider.
Try getting a statically-linked ocr.dll from your vendor (i.e. no run-time dependency on other.dll [A]).
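A hedged sketch of forcing the load order from the first suggestion. LoadLibrary is the standard Win32 call; "ocr.dll" is the vendor DLL from the question.

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

internal static class OcrPreloader
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern IntPtr LoadLibrary(string fileName);

    // Call this before opening any database connection, so that ocr.dll
    // pulls in its own other.dll [A] before the ADO.NET provider loads [B].
    internal static void Preload()
    {
        if (LoadLibrary("ocr.dll") == IntPtr.Zero)
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}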
So, the call to the DLL works from a single button, but it does not work from a complex form. I'd say there is some undefined behavior going on. The question remains whether it is you who wrote the marshaling incorrectly, or the DLL that is badly written.
Since we do not have access to the source code of the DLL, maybe you can post the prototype of the function, all the relevant struct definitions, and the DllImport line that you wrote for it?
Google can't find that error message, which means (not definitely, though :)) it is not a system message but a custom one coming from the code in the DLL. So the DLL does something dodgy; I guess it tries to double-dispatch your call to another function internally.
A few things I suggest you try:
Run an x86 configuration. In the project properties -> Build tab, set the platform target to x86. This is assuming the DLL is an x86 DLL.
dumpbin /headers ocr.dll
File Type: DLL
FILE HEADER VALUES
14C machine (x86)
4 number of sections
4CE7B6FC time date stamp Sat Nov 20 11:54:36 2010
0 file pointer to symbol table
0 number of symbols
E0 size of optional header
2102 characteristics
Executable
32 bit word machine
DLL
This command should tell you the bitness. In case it is a 64-bit DLL, run a 64-bit config instead, but I bet it is 32-bit.
Do not include the DLL in the project (I guess you do that already). Make sure the DLL is in a folder that is on the %PATH% environment variable. When you run this at a command prompt:
where ocr.dll
it should tell you where the DLL is. If it doesn't, add the folder where the DLL is installed to the %PATH% (a sketch for doing this per-process from code follows).
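If you'd rather not change the system-wide %PATH%, a hedged sketch of doing it per-process before the first P/Invoke call; the install path here is hypothetical.

using System;

static class OcrPathSetup
{
    // Prepend the OCR install folder to this process's PATH so the loader
    // can find ocr.dll without copying it next to the executable.
    public static void AddOcrFolderToPath()
    {
        string ocrDir = @"C:\Program Files (x86)\VendorOcr"; // hypothetical install path
        Environment.SetEnvironmentVariable(
            "PATH", ocrDir + ";" + Environment.GetEnvironmentVariable("PATH"));
    }
}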