At my workplace we deploy internal applications by replacing only the assemblies that have changed (not my idea).
We can tell which assemblies we need to deploy by checking whether the source files that are compiled into each assembly have changed. Most of the time we don't need to redeploy assemblies that depend on assemblies that have changed. However, we have found some cases where we need to redeploy an assembly even though none of its source files have changed.
So far we know that any of these changes in an assembly will require all dependent assemblies to be recompiled and redeployed:
Constant changes (see the sketch after this list)
Enum definition changes (order of values)
Return type of a function changes and caller uses var (sometimes)
Namespace of a class changes to another already referenced namespace.
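To illustrate the constants case: const values are baked into the consuming assembly's IL at compile time, so the consumer keeps the old value until it is recompiled. A minimal sketch, with made-up names:

// In Library.dll
public static class Limits
{
    public const int MaxRetries = 5;           // baked into callers as the literal 5
    public static readonly int MaxItems = 100; // read at runtime instead
}

// In Client.dll
public class Client
{
    public int GetRetries()
    {
        return Limits.MaxRetries; // compiled as: return 5;
    }
}

If MaxRetries is later changed to 10 and only Library.dll is redeployed, Client.dll keeps using 5 until it is recompiled. A static readonly field, by contrast, is read at runtime, so callers pick up the new value without recompiling.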
Are there any other cases that we're missing? I'm also open to arguments why this entire approach is flawed (although it's been used for years).
Edit: To be clear, we're always recompiling, but only deploying assemblies whose source files have changed.
So anything that breaks compilation (method name changes, etc.) will be picked up, since it requires changes in the calling code.
Here is another one:
Changes to optional parameter values.
The default values get compiled directly into the assembly that uses them (when the caller doesn't specify the argument):
public void MyOptMethod(int optInt = 5) {}
Any calling code such as this:
theClass.MyOptMethod();
Will end up compiled to:
theClass.MyOptMethod(5);
If you change the method to:
public void MyOptMethod(int optInt = 10) {}
You will need to recompile all dependent assemblies if you want the new default to apply.
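One common way to side-step this pitfall (a sketch, not something from the original answer) is to replace the optional parameter with an overload, so the default value compiles into the declaring assembly rather than into every caller:

// The default now lives in this assembly; redeploying it is enough
public void MyOptMethod() { MyOptMethod(10); }

public void MyOptMethod(int optInt) { /* actual work */ }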
Additional changes that will require recompilation (thanks Polynomial):
Changes to generic type parameter constraints
Changes to method names (especially problematic when using reflection, as private methods may also be inspected)
Changes to exception handling (different exception type being thrown)
Changes to thread handling
Etc... etc... etc...
So - always recompile everything.
First off, we have sometimes deployed only a few assemblies of an application instead of the complete app. However, this is by no means the norm and has ONLY been done in our test environments, when the developer had very recently (as in within the last few minutes) published the whole site and was just making a minor tweak. Once the dev is satisfied, they go ahead and do a full recompile and republish.
The final push to testing is always based on a full recompile/deploy. The pushes to staging and ultimately production are based on that full copy.
Besides repeatability, one reason is that you really can't be 100% positive that a human didn't miss something in the comparisons. Next, the time it takes to deploy 100 assemblies versus 5 is trivial, and frankly not worth the human effort required to figure out what really changed.
Quite frankly, the list you have in combination with Oded's answer ought to be enough to convince others of the potential for failure. However, the very fact that you have already run into failures due to this lackadaisical approach should be enough of a warning flag to stop it from continuing.
At the end of the day, it really boils down to a question of professionalism. Standardization and repeatability of the process of moving code out of development, through the various hoops, and ultimately into production are extremely important in creating robust mission-critical applications. If your deployment process is fraught with the potential for failure due to these kinds of risk-inducing shortcuts, it raises questions about the quality of the code being produced.
Related
I have a SQL CLR project that has a few functions and stored procedures. The project is set to EXTERNAL_ACCESS and has a key for signing - that works just fine.
I have added another function to the project that uses ICSharpCode.SharpZipLib. I initially got a version-incompatibility error, which I think I resolved by following instructions in another post.
The project builds OK, but now I get the following error during the last phase of deploying my project (a SQL Server database project). This is on my local machine, where I have admin privileges.
Creating [ICSharpCode.SharpZipLib]...
(47,1): SQL72014: .Net SqlClient Data Provider: Msg 6211, Level 16, State 1, Line 1 CREATE ASSEMBLY failed because type '<PrivateImplementationDetails>' in safe assembly 'ICSharpCode.SharpZipLib' has a static field '$$method0x6000014-1'. Attributes of static fields in safe assemblies must be marked readonly in Visual C#, ReadOnly in Visual Basic, or initonly in Visual C++ and intermediate language.
(47,0): SQL72045: Script execution error. The executed script:
CREATE ASSEMBLY [ICSharpCode.SharpZipLib]
AUTHORIZATION [dbo]
FROM 0x4D5A90000300000004000000FFFF0000B800000000000000400000000000000000000000000000000000000000000000000000000000000000000000800000000E1FBA0E00B409CD21B8014CCD21546869732070726F6772616D2063616E6E6F742062652072756E20696E20444F53206D6F64652E0D0D0A2400000000000000504500004C0103003877BE5A0000000000000000E00002210B010B000000020000200000000000009E1B02000020000000200200000040000020000000100000040000000000000004000000000000000060020000100000000000000300408500001000001000000000100000100000000000001000000000000000000000004C1B02004F000000002002003804000000000000000000000000000000000000004002000C00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200000080000000000000000000000082000004800000000000000000000002E74657874000000A4FB0100002000000000020000100000000000000000000000000000200000602E7273726300000038040000002002000010000000100200000000000000000000000000400000402E72656C6F6300000C000000004002000
An error occurred while the batch was being executed.
Appreciate any help!
Thx,
Satya
I guess the first thing would be to see if you could do without ICSharpCode.SharpZipLib. If not, then:
If you have access to the source for ICSharpCode.SharpZipLib, you could change the static fields to be readonly.
The last option is to deploy the assemblies with PERMISSION_SET = UNSAFE.
The CLR host that runs within SQL Server is highly restricted as compared to the CLR host running on the OS. One reason for the restrictions is that the App Domains are shared across sessions. So everyone executing a particular SQLCLR method (be it a Stored Procedure, Function, User-Defined Type, User-Defined Aggregate, or Trigger) is executing the same method in the same static class in the same App Domain. Hence, static class variables are shared resources, and unless you are very careful and deliberate in using them, they can quite easily lead to race conditions and odd (and difficult-to-debug) behavior.
The error message is about this, but it is also a bit misleading, since it says that SAFE assemblies do not allow such things. More accurately, it is non-UNSAFE assemblies that do not allow such things (i.e. neither SAFE nor EXTERNAL_ACCESS assemblies do).
So, as Niels mentioned in his answer, you can mark the Assembly as UNSAFE and it will load and probably work. However, unless you know how that variable (and any others that are marked as static but were not yet mentioned) is used, it could lead to race conditions if one session overwrites the value that another session was still using. Or there is potential for a previous value to be left there that could adversely impact the next caller. You would need to look through the code to ensure that this isn't an issue prior to attempting to set the Assembly to UNSAFE.
While not as quick and easy, you really should start by updating the code to mark those static variables as readonly, then recompile to make sure that nothing attempts to write to them anywhere in the code. If other parts of the code do write to a static variable, then you need to refactor the code or find other code that does the same thing. I ran into this years ago and opted to use DotNetZip for my SQL# project, though I still needed to make minor modifications for things such as static variables.
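For reference, the kind of change involved is small. This is a hedged sketch with a hypothetical field name; the real fix must target the fields named in the error:

public static class ZipHelper // hypothetical class
{
    // Before: 'static int[] crcTable = BuildCrcTable();' is rejected in
    // SAFE / EXTERNAL_ACCESS assemblies because the field is reassignable.
    // After: readonly makes the field reference immutable, so the assembly loads.
    // Note: the array *contents* are still mutable and still shared across
    // sessions, which is exactly the race-condition concern described above.
    static readonly int[] crcTable = BuildCrcTable();

    static int[] BuildCrcTable()
    {
        var table = new int[256];
        // ... fill the table deterministically ...
        return table;
    }
}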
I have a library DLL full of sort algorithms, parsers, validators, converters, etc. The DLL is about 40 MB (not much, I know, but still).
Is there a way, every time I make a release build, to take just the up-to-date parsers from my library, store them in some kind of .partialDll file, and deliver only them to the customer? The result would be that I keep all my helper classes in one big library that keeps growing, and the customers get just what they ordered.
I guess I would need to deal with a lot of reflection to achieve something like this, right? Any ideas?
Let me start with a quote from MSDN:
"Assemblies are the building blocks of .NET Framework applications; they form the fundamental unit of deployment […]."
Note that the quote is about assemblies, not about DLLs. There's a difference!
Although most .NET assemblies consist of exactly one DLL file, that is not a strict requirement: An assembly can in fact consist of more than one file; such a "multi-file assembly" can, for instance, consist of several DLLs, which in turn are called "netmodules". (A netmodule might have a .netmodule file extension by convention, but it's really a DLL containing .NET metadata and bytecode.) Each multi-file assembly has exactly one "main" module which carries the metadata that references all the other assembly files and so ties them together into a logical whole.
While an assembly has to be deployed in full (as per the above quote), the .NET runtime can load only those netmodules that are actually required for JIT code compilation and execution.
So you can split up an assembly into several parts, and have the runtime load only what is actually needed; but you cannot do the same to a netmodule / DLL file. A DLL file can only be deployed and loaded in its entirety.
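For illustration, a multi-file assembly can be built with the command-line C# compiler, roughly like this (file names here are hypothetical):

csc /target:module /out:Parsers.netmodule Parsers.cs
csc /target:library /addmodule:Parsers.netmodule /out:Core.dll Core.cs

The first command produces a netmodule with no assembly manifest of its own; the second produces the main module Core.dll, whose manifest ties Parsers.netmodule into the assembly. At runtime, the netmodule is loaded only when a type inside it is actually needed.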
Note also that Visual Studio's support for netmodules is non-existent for all practical purposes, so most people don't use them, which is why you see so few multi-file assemblies in the real world.
The bottom line is this: In practice, if you or your clients are interested in only a part of an assembly ("DLL"), then it's usually easier to split a large assembly (that is, one large Visual Studio project) into several inter-dependent assemblies (several smaller Visual Studio projects).
In general, no, there is no way to achieve that. Once you pack "everything" into a module and compile it, you can't split that module later into smaller ones. (well, ok, you can analyze the bytecode and rewrite the assembly, see the end of this post).
To me, your premise seems wrong. You don't need to work with "one huge library that keeps all your helper classes", and really, you don't want to - or you will not want to eventually. If you don't feel that way now, I assure you that in time, years maybe, you will come to hate such a one-to-have-it-all approach.
This is exactly what you want to escape from, and this is why .NET and many other languages/environments support the concept of "libraries" or "modules" and allow you to use multiple of them, and that's why most of the projects you see everywhere aren't created as "one huge EXE". It's much easier to reuse code, analyze it, and even hunt bugs when you have it in smaller chunks.
--
However, if you insist, there are (ugly) ways to achieve something like what you have in mind. I assume that the "huge DLL" is written in C# and is under your control.
The first, somewhat naive but working, way is to use "file links". In Visual Studio you can have a project that contains tons of files and produces the big "all.dll", and right beside it you can create another project that contains no files at all, but instead contains links to the first project's files. Use the typical "Add a file..." option on the project and note that next to the final "Add" button there's a down arrow that expands to "Add as link...".
This causes the file to stay in HugeProject, but SmallProject sees the file too, and when SmallProject is compiled it pulls in the code from that file as well.
Note that this way you actually build two separate assemblies: the big one and the small one, and your final product will need to reference the small one.
This way is naive and ugly; it is just as if you manually copied/split the huge project into smaller ones, but with the tiny advantage that you don't need to copy the code files around.
--
Intermission for some side thoughts:
You can use #if to conditionally turn off some currently unused code; however, setting the flags that drive those #ifs will be cumbersome.
You can edit the .csproj files and use MSBuild conditional clauses to automatically exclude unused code files from HugeProject during final builds; however, setting the flags that drive those conditions will be cumbersome too.
--
The second way is to keep everything in HugeProject, have your application(s) reference it directly, and then, after building and testing everything, just before packaging and shipping to the customer, run some kind of trimming utility that checks which parts of the code are referenced and removes all dead code from the assemblies. I can't give you a name for such a utility, but many obfuscators come with this feature.
They run through your compiled code, cross-reference everything, change/remove/trash class/method/property names, and as a bonus may also remove unused bits. Then they write the mangled assemblies back to disk, ensuring that they reference each other rather than the original, pre-mangling ones.
For examples, see related questions and such utilities elsewhere; also consider ILMerge for better results.
Cons: such a utility may leave behind some bits it couldn't decide about; finding/testing/buying one may take some time and resources; you can run into signing problems, since the stripped assembly is a brand-new assembly; etc. Also, such utilities have trouble when you invoke some code only via reflection, and they may require you to provide extra hints or to make sure the code "seems to be used". (Example, as sketched below: a whole namespace of "plugins" that implement "IPlugin", where your app searches that namespace for types and uses Activator.CreateInstance to instantiate them. With no hard-linked usages, the trimmer may decide to remove all the plugins as "unused"; you'll need to configure the trimmer carefully or be surprised.)
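That reflection pattern looks roughly like this (hypothetical names); note that nothing statically references a concrete plugin type, which is why a trimmer may strip the plugins out:

using System;
using System.Linq;
using System.Reflection;

public interface IPlugin { void Run(); }

public static class PluginLoader
{
    public static void RunAll(Assembly assembly)
    {
        // Find every concrete type implementing IPlugin...
        var pluginTypes = assembly.GetTypes()
            .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);

        // ...and instantiate each one purely via reflection.
        foreach (var type in pluginTypes)
            ((IPlugin)Activator.CreateInstance(type)).Run();
    }
}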
Probably a few other ways could be found too, but seriously, most of the time you don't want to waste your effort on this, especially not manually. So just tidy up your code and split it into small libraries, or start looking for an automatic obfuscator-and-trimmer.
I have to work with an old version of Mono in Unity projects. I find myself recreating some classes and extension methods that exist in later versions of .NET. Should I mark these with an attribute that will make it easy to take them out at a later point, just wait for the inevitable errors and delete the duplicate code, or take some other approach I'm not familiar with yet? If the attribute route is the way to go, is there already an appropriate attribute for this kind of thing?
Here's what I'd like:
[PresentInDotNET(3.5)]
I fill in the version and get alerted when the framework is at that level or higher.
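As far as I can tell, nothing like this ships with the framework, but a hypothetical definition might look like this sketch:

using System;

// Hypothetical: marks code that duplicates a member available in a later
// framework version, so it can be found and deleted after an upgrade.
[AttributeUsage(AttributeTargets.All, AllowMultiple = false)]
public sealed class PresentInDotNETAttribute : Attribute
{
    public double Version { get; private set; }

    public PresentInDotNETAttribute(double version)
    {
        Version = version;
    }
}

Something (a build step or a unit test scanning for the attribute) would still have to compare Version against the current framework version to produce the alert.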
Split them off to a separate assembly, and change the set of assemblies that make up the final delivery based on the .NET version. You need to rebuild your main assembly to refer to the correct assemblies (depending on whether Foo is in MySystem or System), but as long as you keep namespaces identical, that's all. If you are not even interested in keeping compatibility with older versions, you can simply delete classes from this assembly as they become available.
Alternatively, if the classes/extension methods you are recreating are not interesting (in the sense that you gain nothing by having .NET supply them for you), simply put them in their separate namespace and accept that you are duplicating code already present in newer versions. It doesn't matter a whole lot which assembly gets the job done, after all, as long as it happens.
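As a hedged sketch of that second option (all names here are made up), a recreated helper might live in its own compatibility namespace:

namespace MyCompany.Compat
{
    public static class StringCompat
    {
        // Recreates .NET 4's string.IsNullOrWhiteSpace for older profiles.
        public static bool IsNullOrWhiteSpace(this string value)
        {
            if (value == null) return true;
            for (int i = 0; i < value.Length; i++)
                if (!char.IsWhiteSpace(value[i])) return false;
            return true;
        }
    }
}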
Whatever you do, try to avoid going the route of #ifdefs, runtime discovery, and other conditional code, as this is much harder to maintain.
How about adding "// TODO" comments for places like this? Visual Studio will display these in the Task List window, and you can get at them pretty easily.
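For example, a hypothetical marker comment:

// TODO: Duplicates an extension method added in .NET 3.5; delete once the Mono profile catches up.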
Is there any way, using msbuild or otherwise, to detect which projects have changes in the current build and update the FileAssemblyVersion attribute in AssemblyInfo.cs for those projects only?
Assuming you've set up incremental [get and] compiles, the next step would be to hook into the MSBuild sequence. Have a look in FrameworkDir\Microsoft.Common.targets. The problem is that things just are not set up to work this way; the very existence of the _TimestampBeforeCompile and _TimeStampAfterCompile steps shows that you can't determine a priori whether something is going to compile. While you could theoretically hook in before [the language-specific] CoreCompile [e.g., in Microsoft.CSharp.targets], the problem is that you would need the same Inputs as it has in order to determine whether it's going to run, which would mean lots of copy-and-pasting and keeping things in sync with system files. The other thing to be wary of is noted in the comment at the top of the _ComputeNonExistentFileProperty target.
So, outside of doing some very deep modifications to the sequence (e.g., hooking in a 'post build' step that forces a second compile if a custom _TimeStampAfterCompile of yours detects that a compilation took place), I'd say there's no easy, recommended, or supported way.
Having said that, the AssemblyFileVersion (you refer to FileAssemblyVersion, which doesn't exist :P) is easy to modify after the compile, as it's just a resource - you'll find tools for that. But I assume you're really talking about doing both it and the AssemblyVersion, which can't be tweaked after the fact in the same way.
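For reference, the two attributes are declared in AssemblyInfo.cs; the version numbers below are just placeholders:

using System.Reflection;

// Used by the runtime for binding and strong naming; change deliberately.
[assembly: AssemblyVersion("1.0.0.0")]

// Only a Win32 version resource; safe to stamp on every build.
[assembly: AssemblyFileVersion("1.0.0.123")]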
I am looking for some good practices on naming assemblies and versioning them. How often do you increment the major or minor versions?
In some cases, I have seen releases going straight from version 1.0 to 3.0. In other cases, it seems to be stuck at version 1.0.2.xxxx.
This will be for a shared assembly used in multiple projects across the company. Looking forward to some good inputs.
Some good information from this article on Suzanne Cook's blog on MSDN (posted 2003-05-30):
When to Change File/Assembly Versions

First of all, file versions and assembly versions need not coincide with each other. I recommend that file versions change with each build. But, don't change assembly versions with each build just so that you can tell the difference between two versions of the same file; use the file version for that. Deciding when to change assembly versions takes some discussion of the types of builds to consider: shipping and non-shipping.

Non-Shipping Builds
In general, I recommend keeping non-shipping assembly versions the same between shipping builds. This avoids strongly-named assembly loading problems due to version mismatches. Some people prefer using publisher policy to redirect new assembly versions for each build. I recommend against that for non-shipping builds, however: it doesn't avoid all of the loading problems. For example, if a partner x-copies your app, they may not know to install publisher policy. Then, your app will be broken for them, even though it works just fine on your machine.

But, if there are cases where different applications on the same machine need to bind to different versions of your assembly, I recommend giving those builds different assembly versions so that the correct one for each app can be used without having to use LoadFrom/etc.

Shipping Builds
As for whether it's a good idea to change that version for shipping builds, it depends on how you want the binding to work for end-users. Do you want these builds to be side-by-side or in-place? Are there many changes between the two builds? Are they going to break some customers? Do you care that it breaks them (or do you want to force users to use your important updates)? If yes, you should consider incrementing the assembly version. But, then again, consider that doing that too many times can litter the user's disk with outdated assemblies.

When You Change Your Assembly Versions
To change hardcoded versions to the new one, I recommend setting a variable to the version in a header file and replacing the hardcoding in sources with the variable. Then, run a pre-processor during the build to put in the correct version. I recommend changing versions right after shipping, not right before, so that there's more time to catch bugs due to the change.
One way to define your versioning is to give semantic meaning to each portion:
Go from N.x to N+1.0 when compatibility breaks with the new release
Go from N.M to N.M+1 when new features are added which do not break compatibility
Go from N.M.X to N.M.X+1 when bug fixes are added
The above is just an example -- you'd want to define the rules that make sense for you. But it is very nice for users to quickly tell if incompatibilities are expected just by looking at the version.
Oh, and don't forget to publish the rules you come up with so people know what to expect.
Semantic Versioning has a set of guidelines and rules as to how to apply this (and when). Very simple to follow and it just works.
http://semver.org/
The first thing I would recommend is to become familiar with the differences between the assembly version and the file version. Unfortunately, .NET tends to treat these as the same when it comes to the AssemblyInfo files, in that it usually specifies only AssemblyVersion and lets the file version default to the same value.
Since you said this is a shared assembly, I'm assuming you mean it's shared at a binary level (not by including the project in the various solutions). If that's the case, you want to be very deliberate about changing the assembly version, as it is what .NET uses to strong-name the assembly (to allow you to put it in the GAC) and it also makes up the "assembly full name". When the assembly version changes, it can break the applications that use it unless they add assembly binding redirect entries to their app.config files.
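As an illustration (names and token are hypothetical), the assembly full name embeds the AssemblyVersion, while the file version lives in a separate Win32 resource:

using System;
using System.Diagnostics;
using System.Reflection;

class VersionDemo
{
    static void Main()
    {
        Assembly asm = typeof(VersionDemo).Assembly;

        // Prints something like:
        // "CompanyName.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123..."
        Console.WriteLine(asm.FullName);

        // Prints the AssemblyFileVersion, which may differ from build to build.
        Console.WriteLine(FileVersionInfo.GetVersionInfo(asm.Location).FileVersion);
    }
}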
As for naming, I think it depends on what your company's naming rules are (if any) and the purpose of the library. For example, if this library provides "core" (or system-level) functionality that isn't specific to any particular product or line of business, you could name it as:
CompanyName.Framework.Core
if it's part of a larger library, or simply
CompanyName.Shared
CompanyName.Core
CompanyName.Framework
As far as when to increment version numbers, it's still rather subjective and depends on what you consider each portion of the build number to represent. The default Microsoft scheme is Major.Minor.Build.Revision, but that doesn't mean you can't come up with your own definitions. The most important thing is to be consistent in your strategy and make sure that the definitions and rules make sense across all of your products.
In almost every version scheme I've seen, the first two portions are Major.Minor. The major version number usually increments when there are large and/or breaking changes, while the minor version number usually increments to indicate that something changed which was not a breaking change. The other two numbers are considerably more subjective and can be the "build" (which is often a serial date value or a sequentially updating number that changes each day) and the "revision" or patch number. I've also seen them reversed (giving Major.Minor.Revision.Build), where build is a sequentially incrementing number from an automated build system.
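As a small illustration of the default Microsoft scheme: for classic .NET Framework projects, the C# compiler can generate the last two portions for you (build is the number of days since 2000-01-01, revision is half the seconds since midnight):

using System.Reflection;

// Major.Minor fixed by hand; build and revision auto-generated per compile.
[assembly: AssemblyVersion("1.2.*")]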
Keep in mind that the assembly major and minor versions are used as the type library version number when the assembly is exported.
Finally, take a look at some of these resources for more information:
http://msdn.microsoft.com/en-us/library/51ket42z.aspx
http://msdn.microsoft.com/en-us/library/system.reflection.assemblyversionattribute.aspx
http://blogs.msdn.com/suzcook/archive/2003/05/29/57148.aspx