I have a .NET 2.0 C# ClickOnce app that connects to its data via web services. I've been told that one way to potentially speed up the application is to generate a serialization assembly beforehand. I have several questions on this front.
The default setting for whether to generate a serialization assembly is Auto. What criteria does VS2005 use to decide whether to generate one? It seems like it does not generate under the Debug configuration but does under the Release configuration, but I can't tell for sure and can't find the information anywhere.
Does a serialization assembly actually improve the startup of the application? Specifically, what does it improve? Do I actually need a serialization assembly?
The setting is really asking "Shall I pre-generate the serialization assemblies and include them in the deployed project, or shall I fall back on the default of generating the assemblies on the fly?" Generally, the on-the-fly approach won't hurt too much after the first hit, performance-wise. Where it can matter is that the serialization assemblies are generated in %SYSTEMROOT%\TEMP, which in some cases the process can't access, leading to fatal exceptions in most cases.
This is not relevant to your situation, but there's another good reason for pre-generating the serialization assembly - it's necessary when hosting your code in SQL Server (i.e. SQLCLR). SQL Server doesn't allow these assemblies to be generated dynamically, so your serialization code would fail inside SQL Server.
In most cases, you aren't likely to see a huge benefit from this, especially if you keep the app open for a while. Pre-generating a serialization assembly mainly helps the first time (in an exe's lifetime) that you serialize a specific type as XML.
According to IntelliTrace, only the first time you XML-serialize a type, a FileNotFoundException is thrown and then caught. The CLR expects to load an assembly containing all the XML serializers for that specific assembly; when it's not found, the FileNotFoundException signals the XmlSerializer: "Hey! Generate the darn assembly!" That generation happens during the catch, after which the previously missing file exists.
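You can watch this happen with a trivial repro (the Person type here is my placeholder): enable breaking on thrown FileNotFoundException and construct the first serializer for a type.

using System.Xml.Serialization;

public class Person { public string Name; }

class Demo
{
    static void Main()
    {
        // The first construction for a given type probes for a pre-generated
        // "<AssemblyName>.XmlSerializers.dll"; when it is missing, a
        // FileNotFoundException is thrown and caught internally, and the
        // serializer code is generated and compiled on the fly instead.
        XmlSerializer first = new XmlSerializer(typeof(Person));

        // Later constructions for the same type reuse the generated
        // assembly, so only the first hit is expensive.
        XmlSerializer second = new XmlSerializer(typeof(Person));
    }
}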
I've read somewhere that using try-catch for control flow is bad practice. I don't know why Microsoft used this approach...
We have a project that may generate a lot of exceptions (because it interacts with a protocol that is widely used but not widely respected).
When it was implemented, all the "sensitive" methods/constructors were marked with DebuggerStepThrough.
Since VS2015, DebuggerStepThrough is ignored (we are now on VS2017). I know that in Visual Studio's Exception Settings we can specify which kinds of exceptions we do or don't want to break on, and add conditions based on project types, but this has two issues:
It's not persisted with the project when we push it to our Git server.
We often change those exception settings to track down one particular issue that should not be interrupted, then reset them, which means we would regularly lose those changes.
So, is there a compilation setting, or any other way, to make sure we don't break on those exceptions, and that we can share this across the team (i.e. commit it to our Git server)?
NB: This question is NOT about whether we should handle those exceptions or not.
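For reference, the pattern in question looks roughly like this (the method and types are illustrative, not our actual code):

using System;
using System.Diagnostics;

class ProtocolReader
{
    // Marked so the debugger steps over it; since VS2015 this no longer
    // stops the debugger from breaking on the exceptions thrown inside.
    [DebuggerStepThrough]
    public int TryParseLength(string field)
    {
        try
        {
            return int.Parse(field);
        }
        catch (FormatException)
        {
            return -1; // malformed input is expected with this protocol
        }
    }
}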
Would migrating that code to a separate assembly help? I'm not sure, but you might be able to reference it without PDBs, and it should fail silently if you have proper catches in place.
Sounds like a dumb question, I know, but bear with me :-) I'm currently creating a PoSh module which contains a few custom commands. I had already written a PoSh advanced module previously to do what I want to do, but I've decided it's time to take the plunge and learn C#!
One of my commands needs to create an instance of a class contained in a third-party SDK assembly. That assembly is not in the GAC. Previously, in my PoSh advanced function, I would query a registry key to confirm that the SDK was installed (and get the path to it), then use System.Reflection.Assembly.LoadFile to load the assembly.
In my C# version, my plan was to do something similar. I've managed to query the registry, confirm that the assembly exists, and even load it. However, because the assembly isn't referenced in Visual Studio, it just throws loads of IntelliSense errors when I try to instantiate a class from that assembly. I initially suspected I might need to use something from the Activator class to get around this, but I've been through all the methods there and couldn't find anything that might help.
After a bit more pondering, I wondered if perhaps my approach is wrong, and maybe I shouldn't be doing the "manual" loading but instead let .NET handle all that for me, e.g. by adding a reference to the assembly. In that case, however, how do I reference an assembly in VS without knowing where (or even if) it will be installed on the target/invoking machine?
Or, if my original approach is correct, how do I instantiate the class "manually" (or otherwise) without VS being so unhappy? I did consider adding a "temporary" reference to the assembly on my machine, but I think I'd have to remove that again before doing the retail build. And I'd also have to add temporary using directives, I guess.
I have googled this quite a bit but haven't found anything that might help me at all, so I'd really appreciate any guidance anybody can provide. Maybe I should be looking at something else entirely, like AppDomains?
Thanks in advance
After loading the assembly, use the CreateInstance method and store the result in a variable of type dynamic:
dynamic test = assembly.CreateInstance("Full.Type.Name");
You won't get IntelliSense, but the compiler will assume this variable supports any operation. Beware: invalid operations will only produce errors at runtime.
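Putting it together, a minimal sketch (the registry lookup is omitted, and the path and type name are placeholders for your SDK's actual values):

using System.Reflection;

class SdkLoader
{
    public static dynamic CreateSdkObject(string assemblyPath)
    {
        // assemblyPath would come from the registry query described in the
        // question, e.g. @"C:\Vendor\Sdk\Vendor.Sdk.dll" (hypothetical).
        Assembly assembly = Assembly.LoadFile(assemblyPath);

        // "Full.Type.Name" stands in for the SDK class to instantiate.
        dynamic instance = assembly.CreateInstance("Full.Type.Name");

        // Member access resolves at runtime only; a typo compiles fine
        // but throws a RuntimeBinderException when executed.
        return instance;
    }
}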
Going over some legacy code, I ran into a piece of code that used reflection to load some DLLs whose source code was available (they were another project in the solution).
I was racking my brain trying to figure out why it was done this way (naturally, the code was not documented...).
My question is, can you think of any good reason to prefer loading an assembly via reflection rather than referencing it?
Yes, if you have a dynamic module system, where different DLLs should be loaded depending on conditions at runtime. We do this where I work; we do a license check for different optional modules that may be loaded into our system, and then only load the DLLs associated with each module if the license checks out. This prevents code that should never be executed from being loaded, which can both improve performance slightly and prevent bugs.
Dynamically loading DLLs may also allow you to drastically change functionality without changing any source code. The main assembly may for instance set in motion a discovery process where it finds all classes that implement some interface, and chooses which one to use depending on some runtime criterion.
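A sketch of that kind of license-gated loading, with invented module and method names:

using System;
using System.Reflection;

static class ModuleLoader
{
    public static void LoadLicensedModules()
    {
        // Hypothetical module list; in a real system this would come from
        // configuration plus a license check.
        string[] optionalModules = { "Reporting", "Scheduling" };

        foreach (string module in optionalModules)
        {
            if (!LicenseIsValid(module))
                continue; // the DLL of an unlicensed module is never loaded

            // Loads e.g. "MyApp.Modules.Reporting" by assembly name.
            Assembly assembly = Assembly.Load("MyApp.Modules." + module);
            Console.WriteLine("Loaded " + assembly.FullName);
        }
    }

    static bool LicenseIsValid(string module)
    {
        return true; // stand-in for the real license check
    }
}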
These days you'll typically want to use MEF for this kind of task, but that's only been around since .NET 4.0, so there are probably many codebases out there that do it manually. (I don't know much about MEF. Maybe you have to do this part manually there as well.)
But anyway, the answer to your question is that there certainly are good reasons to dynamically load DLLs using reflection. Whether it applies in your case is impossible to say without more details.
Without knowing your specific project, no one here can tell you why it was done that way in your case.
But the general reasons are:
updateability: you can simply recompile and replace the updated library instead of having to recompile and replace the whole application.
cooperation: if the interface is clear, multiple teams can work together this way: one on the main application and others on the DLLs.
reusability: sometimes you need the same functionality in multiple projects, so the same DLL can be used again and again.
extensibility: in some cases you want to be able to extend your program later with plugins that were not present at shipping time. This can be realized using DLLs, as sketched below.
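For the plugin case, a minimal sketch of directory-based discovery (IPlugin and the folder layout are assumptions):

using System;
using System.IO;
using System.Reflection;

public interface IPlugin
{
    void Run();
}

static class PluginHost
{
    public static void LoadPlugins(string folder)
    {
        foreach (string path in Directory.GetFiles(folder, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(path);

            // Find every concrete type implementing IPlugin and run it.
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    IPlugin plugin = (IPlugin)Activator.CreateInstance(type);
                    plugin.Run();
                }
            }
        }
    }
}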
I hope this helps you understand some of your setup.
Reason for loading an assembly via reflection rather than referencing it?
Let us consider a scenario where there are three classes, each with a DoWork() method that returns a string, and you dispatch to them through a conditional check (strongly typed).
Now you get two more classes in two different DLLs; how would you cope with the change?
1) You can add references to the new DLLs, change the conditional check, and make it work.
2) You can use reflection, passing the condition and assembly name at run time; this allows you to add any amount of functionality at runtime without any code change in the primary application (see the sketch below).
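A sketch of option 2 (the assembly path, type name, and DoWork convention are placeholders): the caller decides at run time which implementation to use, and new DLLs need no recompile of the primary application.

using System;
using System.Reflection;

static class WorkerFactory
{
    // e.g. Invoke("Plugins.Extra.dll", "Plugins.Extra.WorkerFour")
    public static string Invoke(string assemblyPath, string typeName)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        Type type = assembly.GetType(typeName, true); // throws if missing

        object worker = Activator.CreateInstance(type);

        // Call the conventional DoWork() method described above.
        MethodInfo doWork = type.GetMethod("DoWork");
        return (string)doWork.Invoke(worker, null);
    }
}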
I've written a multi-threaded windows service in C#. For some reason, csc.exe is being launched each time a thread is spawned. I doubt it's related to threading per se, but the fact that it is occurring on a per-thread basis, and that these threads are short-lived, makes the problem very visible: lots of csc.exe processes constantly starting and stopping.
Performance is still pretty good, but I expect it would improve if I could eliminate this. However, what concerns me even more is that McAfee is attempting to scan the csc.exe instances and eventually kills the service, apparently when one of the instances exits mid-scan. I need to deploy this service commercially, so changing McAfee settings is not a solution.
I assume that something in my code is triggering dynamic compilation, but I'm not sure what. Anyone else encounter this problem? Any ideas for resolving it?
Update 1:
After further research based on the suggestion and links from @sixlettervariables, the problem appears to stem from the implementation of XML serialization, as indicated in Microsoft's documentation on XmlSerializer:
To increase performance, the XML serialization infrastructure dynamically generates assemblies to serialize and deserialize specified types.
Microsoft notes an optimization further on in the same doc:
The infrastructure finds and reuses those assemblies. This behavior occurs only when using the following constructors:
XmlSerializer.XmlSerializer(Type)
XmlSerializer.XmlSerializer(Type, String)
which appears to indicate that the codegen and compilation would occur only once, at first use, as long as one of the two specified constructors is used. However, I don't benefit from this optimization because I am using another form of the constructor, specifically:
public XmlSerializer(Type type, Type[] extraTypes)
Reading a bit further, it turns out that this also happens to be a likely explanation for a memory leak that I have been observing when my code executes. Again, from the same doc:
If you use any of the other constructors, multiple versions of the same assembly are generated and never unloaded, which results in a memory leak and poor performance. The easiest solution is to use one of the previously mentioned two constructors. Otherwise, you must cache the assemblies in a Hashtable.
The two workarounds that Microsoft suggests above are a last resort for me. Going to another form of the constructor is not preferred (I am using the extraTypes form for serialization of derived classes, which is a supported use per Microsoft's docs), and I'm not sure I like the idea of managing a cache of assemblies for use across multiple threads.
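For reference, the Hashtable-style cache Microsoft alludes to would look roughly like this (a sketch; the key scheme and thread-safety choices are my own):

using System;
using System.Collections.Generic;
using System.Xml.Serialization;

static class SerializerCache
{
    static readonly Dictionary<string, XmlSerializer> cache =
        new Dictionary<string, XmlSerializer>();
    static readonly object sync = new object();

    public static XmlSerializer Get(Type type, Type[] extraTypes)
    {
        // Key on the root type plus all extra types so each distinct
        // combination maps to exactly one generated assembly.
        string key = type.FullName;
        foreach (Type extra in extraTypes)
            key += "|" + extra.FullName;

        lock (sync)
        {
            XmlSerializer serializer;
            if (!cache.TryGetValue(key, out serializer))
            {
                // Constructed once per combination, so the dynamically
                // generated assembly is created once instead of leaking.
                serializer = new XmlSerializer(type, extraTypes);
                cache[key] = serializer;
            }
            return serializer;
        }
    }
}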
So I have run sgen and see the resulting assembly of serializers for my types produced as expected, but when my code executes, the sgen-produced assembly is not loaded (per observation in the Fusion Log Viewer and Process Monitor). I'm currently exploring why this is the case.
Update 2:
The sgen'd assembly loads fine when I use one of the two "friendlier" XmlSerializer constructors (see Update 1, above). When I use XmlSerializer(Type), for example, the sgen'd assembly loads and no run-time codegen/compilation is performed. However, when I use XmlSerializer(Type, Type[]), the assembly does not load. Can't find any reasonable explanation for this.
So I'm reverting to using one of the supported constructors and sgen'ing. This combination eliminates my original problem (the launching of csc.exe), plus one other related problem (the XmlSerializer-induced memory leak mentioned in Update 1 above). It does mean, however, that I have to revert to a less optimal form of serialization for derived types (the use of XmlInclude on the base type) until something changes in the framework to address this situation.
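The XmlInclude fallback mentioned above looks like this (type names are illustrative):

using System.Xml.Serialization;

// Declaring the derived types on the base lets the plain
// XmlSerializer(Type) constructor handle them, so the sgen'd
// assembly is found and loaded.
[XmlInclude(typeof(Circle))]
[XmlInclude(typeof(Square))]
public class Shape { }

public class Circle : Shape { public double Radius; }

public class Square : Shape { public double Side; }

// new XmlSerializer(typeof(Shape)) now serializes Circle and Square
// without needing the (Type, Type[]) constructor.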
Psychic debugging:
Your Windows Service does XML serialization/deserialization
To increase performance, the XML serialization infrastructure dynamically generates assemblies to serialize and deserialize specified types.
If this is the case, you can build these XML serializer assemblies a priori.
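For example, running the SDK's sgen.exe tool as a post-build step, e.g. sgen.exe /assembly:MyService.exe (assembly name illustrative), emits a MyService.XmlSerializers.dll that the runtime probes for and loads instead of compiling serializers on the fly. The "Generate serialization assembly" project setting in Visual Studio drives the same tool at build time.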
I'm developing a component (HttpModule) that's used by a number of web applications on a .NET website, and I want the component to be easily maintainable. I've come up with something outlined below but wanted to see if there were any positive/negative thoughts or general feedback on the implementation, as I'm not 100% familiar with Assembly loading, especially in terms of memory overhead.
(I don't really want to do this: Create Your Own .NET Assembly Cache)
The lightweight HttpModule itself is in the GAC and referenced from the site's root web.config. On each request it opens a text file (stored in the web root's /bin) that contains just a strong-named assembly's name (e.g. "My.MyLibrary, Version=1.1.0.0, Culture=en, PublicKeyToken=03689116d3a4ae33") and then checks the current AppDomain to see if that assembly is already loaded (it iterates over GetAssemblies()). If not, it calls Assembly.Load to load My.MyLibrary and uses basic reflection to Invoke() a custom method in My.MyLibrary that actually does the intended processing work of the HttpModule.
My.MyLibrary itself is also in the GAC. To upgrade the app without any restarts, put a new version in the GAC and just edit the string in the text file. I'm using the text file because a) it's fast and b) I didn't want to have to update a machine/web.config and cause a recycle just to redirect the HttpModule to a new version of My.MyLibrary. It seems to work okay. The old version can be uninstalled from the GAC when it's finally safe to do so. So hopefully the only time an app pool/IIS reset would be needed is to change the HttpModule part itself.
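A condensed sketch of the load path described above (the file name, type name, and method are my placeholders, and I've assumed a static entry point):

using System;
using System.IO;
using System.Reflection;
using System.Web;

public class ForwardingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += delegate(object sender, EventArgs e)
        {
            // Read the strong name, e.g. "My.MyLibrary, Version=1.1.0.0, ...".
            string name = File.ReadAllText(
                Path.Combine(HttpRuntime.BinDirectory, "handler.txt")).Trim();

            // Reuse the assembly if this AppDomain has already loaded it.
            Assembly target = null;
            foreach (Assembly a in AppDomain.CurrentDomain.GetAssemblies())
                if (a.FullName == name) { target = a; break; }

            if (target == null)
                target = Assembly.Load(name); // resolved from the GAC

            // Invoke the worker method via reflection.
            Type worker = target.GetType("My.MyLibrary.Processor");
            worker.GetMethod("ProcessRequest")
                  .Invoke(null, new object[] { ((HttpApplication)sender).Context });
        };
    }

    public void Dispose() { }
}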
Any replies much appreciated!
-Will
Personally, I would say it's better to avoid any late binding if you can, but since you want the freedom to just throw a new assembly at your application, late binding does seem to make sense.
With regard to your method of storing and retrieving the assembly name, I would use an XML document and load it from the file. You will find it simpler to add extra information this way; otherwise you will have to maintain your own file format.
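For instance (schema invented for illustration), the module could read something like a handler.xml with an enabled flag alongside the assembly name:

using System.IO;
using System.Web;
using System.Xml;

static class HandlerConfig
{
    // Expected file shape (hypothetical):
    // <handler enabled="true">
    //   <assembly>My.MyLibrary, Version=1.1.0.0, Culture=en, PublicKeyToken=03689116d3a4ae33</assembly>
    // </handler>
    public static string ReadAssemblyName()
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(Path.Combine(HttpRuntime.BinDirectory, "handler.xml"));

        if (doc.DocumentElement.GetAttribute("enabled") != "true")
            return null; // flagged off: skip loading entirely

        return doc.SelectSingleNode("/handler/assembly").InnerText.Trim();
    }
}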
You may also want to consider adding some code to catch errors raised from these assemblies and put a flag in your file telling your HttpModule not to load that version until it has been updated (note that an individual assembly can't truly be unloaded without recycling its AppDomain).