I am looking for some good practices on naming assemblies and versioning them. How often do you increment the major or minor versions?
In some cases, I have seen releases going straight from version 1.0 to 3.0. In other cases, it seems to be stuck at version 1.0.2.xxxx.
This will be for a shared assembly used in multiple projects across the company. Looking forward to some good input.
Some good information from this article on Suzanne Cook's blog on MSDN (posted 2003-05-30):
When to Change File/Assembly Versions
First of all, file versions and assembly versions need not coincide
with each other. I recommend that file versions change with each
build. But, don’t change assembly versions with each build just so
that you can tell the difference between two versions of the same
file; use the file version for that. Deciding when to change assembly
versions takes some discussion of the types of builds to consider:
shipping and non-shipping.
Non-Shipping Builds: In general, I recommend keeping non-shipping assembly versions the same between shipping builds. This
avoids strongly-named assembly loading problems due to version
mismatches. Some people prefer using publisher policy to redirect new
assembly versions for each build. I recommend against that for
non-shipping builds, however: it doesn’t avoid all of the loading
problems. For example, if a partner x-copies your app, they may not
know to install publisher policy. Then, your app will be broken for
them, even though it works just fine on your machine.
But, if there are cases where different applications on the same
machine need to bind to different versions of your assembly, I
recommend giving those builds different assembly versions so that the
correct one for each app can be used without having to use
LoadFrom/etc.
Shipping Builds: As for whether it's a good idea to change that version for shipping builds, it depends on how you want the binding to
work for end-users. Do you want these builds to be side-by-side or
in-place? Are there many changes between the two builds? Are they
going to break some customers? Do you care that it breaks them (or do
you want to force users to use your important updates)? If yes, you
should consider incrementing the assembly version. But, then again,
consider that doing that too many times can litter the user’s disk
with outdated assemblies.
When You Change Your Assembly Versions: To change hardcoded versions to the new one, I recommend setting a variable to the version
in a header file and replacing the hardcoding in sources with the
variable. Then, run a pre-processor during the build to put in the
correct version. I recommend changing versions right after shipping,
not right before, so that there's more time to catch bugs due to the
change.
One way to define your versioning is to give semantic meaning to each portion:
Go from N.x to N+1.0 when compatibility breaks with the new release
Go from N.M to N.M+1 when new features are added which do not break compatibility
Go from N.M.X to N.M.X+1 when bug fixes are added
The above is just an example -- you'd want to define the rules that make sense for you. But it is very nice for users to quickly tell if incompatibilities are expected just by looking at the version.
Oh, and don't forget to publish the rules you come up with so people know what to expect.
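As a minimal sketch of how a consumer could act on such a rule (the shared type name and the helper below are purely illustrative, not part of any real API), the major component alone tells you whether a breaking change is expected:

using System;
using System.Reflection;

static class CompatibilityCheck
{
    // Under the N.x -> N+1.0 rule above, a different major version means a
    // breaking change is expected; matching majors should be compatible.
    public static bool IsExpectedCompatible(Assembly shared, int expectedMajor)
    {
        Version actual = shared.GetName().Version;
        return actual.Major == expectedMajor;
    }
}

// Example usage (SomeSharedType stands in for any type from the shared assembly):
// bool ok = CompatibilityCheck.IsExpectedCompatible(typeof(SomeSharedType).Assembly, 1);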
Semantic Versioning has a set of guidelines and rules as to how to apply this (and when). Very simple to follow and it just works.
http://semver.org/
The first thing I would recommend is to become familiar with the differences between the Assembly version and the File version. Unfortunately, .NET tends to treat these as the same when it comes to the AssemblyInfo files in that it usually only puts AssemblyVersion and allows the FileVersion to default to the same value.
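If it helps to see the two values side by side at runtime, here is a small sketch (nothing in it is specific to your assembly; it just reads whatever AssemblyVersion and AssemblyFileVersion the build stamped in):

using System;
using System.Diagnostics;
using System.Reflection;

class ShowVersions
{
    static void Main()
    {
        Assembly asm = typeof(ShowVersions).Assembly;

        // AssemblyVersion: part of the assembly identity, used for binding
        // and strong naming.
        Version assemblyVersion = asm.GetName().Version;

        // AssemblyFileVersion: the Win32 version resource on the file; it can
        // change every build without affecting binding.
        string fileVersion = FileVersionInfo.GetVersionInfo(asm.Location).FileVersion;

        Console.WriteLine("AssemblyVersion: " + assemblyVersion);
        Console.WriteLine("FileVersion:     " + fileVersion);
    }
}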
Since you said this is a shared assembly, I'm assuming you mean it's shared at a binary level (not by including the project in the various solutions). If that's the case you want to be very deliberate about changing the Assembly version as that is what .NET uses to strong name the assembly (to allow you to put it in the GAC) and also makes up the "assembly full name". When the assembly version changes, it can have breaking changes for the applications that use it without adding assembly redirect entries in the app.config file.
As for naming, I think it depends on what your company naming rules are (if any) and the purpose of the library. For example, if this library provides "core" (or system-level) functionality that isn't specific to any particular product or line of business, you could name it as:
CompanyName.Framework.Core
if it's part of a larger library, or simply
CompanyName.Shared
CompanyName.Core
CompanyName.Framework
As far as when to increment version numbers, it's still rather subjective and depends on what you consider each portion of the build number to represent. The default Microsoft scheme is Major.Minor.Build.Revision, but that doesn't mean you can't come up with your own definitions. The most important thing is to be consistent in your strategy and make sure that the definitions and rules make sense across all of your products.
In almost every version scheme I've seen, the first two portions are Major.Minor. The major version number usually increments when there are large changes and/or breaking changes, while the minor version number usually increments to indicate that something changed which was not a breaking change. The other two numbers are considerably more subjective and can be the "build" (which is often a serial date value or a sequentially updating number that changes each day) and the "revision" or patch number. I've also seen them reversed (giving Major.Minor.Revision.Build), where build is a sequentially incrementing number from an automated build system.
Keep in mind that the assembly major and minor versions are used as the type library version number when the assembly is exported.
Finally, take a look at some of these resources for more information:
http://msdn.microsoft.com/en-us/library/51ket42z.aspx
http://msdn.microsoft.com/en-us/library/system.reflection.assemblyversionattribute.aspx
http://blogs.msdn.com/suzcook/archive/2003/05/29/57148.aspx
Related
I have a library DLL full of sort algorithms, parsers, validators, converters, etc. The DLL is about 40 MB (that is not much, I know, but still). Now I would like to reference just the parsers of that DLL. The point is to extract those parsers without shipping 40 MB to the customer.
Is there a way, every time I make a release build, to just take those up-to-date parsers from my library, store them into some kind of .partialDll file, and deliver only them to the customer? The result would be me keeping all my helper classes in one big library which keeps growing, while the customers get just what they ordered.
I guess I would need to deal with a lot of reflection to achieve something like this, right? Any ideas?
Let me start with a quote from MSDN:
"Assemblies are the building blocks of .NET Framework applications; they form the fundamental unit of deployment […]."
Note that the quote is about assemblies, not about DLLs. There's a difference!
Although most .NET assemblies consist of exactly one DLL file, that is not a strict requirement: An assembly can in fact consist of more than one file; such a "multi-file assembly" can, for instance, consist of several DLLs, which in turn are called "netmodules". (A netmodule might have a .netmodule file extension by convention, but it's really a DLL containing .NET metadata and bytecode.) Each multi-file assembly has exactly one "main" module which carries the metadata that references all the other assembly files and so ties them together into a logical whole.
While an assembly has to be deployed in full (as per the above quote), the .NET runtime can load only those netmodules that are actually required for JIT code compilation and execution.
So you can split up an assembly into several parts, and have the runtime load only what is actually needed; but you cannot do the same to a netmodule / DLL file. A DLL file can only be deployed and loaded in its entirety.
Note also that Visual Studio's support for netmodules is non-existent for all practical purposes, so most people don't use them, which is why you see so few multi-file assemblies in the real world.
The bottom line is this: In practice, if you or your clients are interested in only a part of an assembly ("DLL"), then it's usually easier to split a large assembly (that is, one large Visual Studio project) into several inter-dependent assemblies (several smaller Visual Studio projects).
In general, no, there is no way to achieve that. Once you pack "everything" into a module and compile it, you can't split that module later into smaller ones. (well, ok, you can analyze the bytecode and rewrite the assembly, see the end of this post).
To me, your premise seems wrong. You don't need to work with "one huge library that keeps all your helper classes", and really, you don't want to, or you won't want to for long. If you don't feel that way now, I assure you that in time, years maybe, you will come to hate such an all-in-one approach.
This is exactly what you want to escape from, and this is why .NET and many other languages/environments support the concept of "libraries" or "modules" and allow you to use many of them; it's also why most of the projects you see aren't built as "one huge EXE". It's much easier to reuse, analyze, and even hunt bugs when the code is in smaller chunks.
--
However, if you insist, there are (ugly) ways to achieve something like what you have in mind. I assume that the "huge DLL" is written in C# and is controlled by you.
The first, somewhat naive but working, way is to use "file links". In Visual Studio you can have a project that contains tons of files and produces a big "all.dll", and right beside it you can create another project that contains no files at all, only links to the first project's files. Use the typical "Add a file..." option on the project and note that next to the final "Add" button there's a down arrow that expands to "Add as link...".
This will cause the file to stay in HugeProject, but SmallProject will see the file too, and when SmallProject is compiled it will pull in the code from that file as well.
Note that this way you will actually build two separate assemblies: a big one and a small one, and your final product will need to reference the small one.
This approach is naive and ugly; it is just as if you manually copied/split the huge project into smaller ones, with the tiny advantage that you don't need to copy the code files around.
--
intermission for side-thoughts:
you can use #if to conditionally turn off some currently-unused code (see the sketch after this list), however setting the flags that drive those #ifs will be cumbersome
you can edit .csproj files and use MSBuild conditional clauses to automatically exclude unused code files from your HugeProject during final builds, however setting the flags that drive those conditions will be cumbersome too
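To make the #if route concrete, here is a minimal sketch; the PARSERS_ONLY symbol and the helper class are made up for illustration and would be whatever fits your library:

// Define (or omit) PARSERS_ONLY in the project's build configuration or on
// the compiler command line; the symbol name is just an example.
#if !PARSERS_ONLY
namespace HugeLibrary.Helpers
{
    // Validators, converters, sort algorithms, etc. that get compiled out
    // when only the parsers are being shipped.
    public static class Validators
    {
        public static bool IsNonEmpty(string value)
        {
            return !string.IsNullOrEmpty(value);
        }
    }
}
#endif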
--
The second way is to keep everything in the HugeProject and have your application(s) reference it directly, and then, after building and testing everything, just before packaging it up and sending it to the customer, use some kind of trimming utility that checks which parts of the code are referenced and removes all dead code from the assemblies. I can't give you the name of such a utility off-hand, but many obfuscators come with this feature.
They will run through your compiled code, cross-reference everything, change/remove/trash class/method/property names, and as a bonus they may also remove unused bits. Then they'll write the mangled assemblies back to disk, ensuring that they reference each other and not the original ones from before mangling.
example: See a question related to that
example: See an example of such a utility; also consider ILMerge for better results.
Cons: the utility may leave some leftovers it couldn't decide whether they were used or not; finding/testing/buying it may take some time and resources; you can have signing problems, since the stripped assembly will be a brand new assembly; etc. Also, such utilities have problems if you invoke some code only by reflection, and they may require you to provide extra hints or to make sure the code "seems to be used" (example: a whole namespace of "plugins" that implement "IPlugin", where your app searches that namespace for types and uses Activator.CreateInstance to instantiate them; with no hard-linked usages, the trimmer may decide to remove all plugins as "unused", so you'll need to configure the trimmer carefully or be surprised).
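To make that reflection caveat concrete, here is a rough sketch of the kind of plugin discovery a trimmer cannot see; the IPlugin interface is the one mentioned above, and the loader class is purely illustrative:

using System;
using System.Linq;
using System.Reflection;

public interface IPlugin
{
    void Run();
}

public static class PluginLoader
{
    // Finds every concrete IPlugin implementation in the given assembly and
    // instantiates it via reflection. Nothing here references the plugin
    // types directly, so a trimmer may conclude they are "unused".
    public static IPlugin[] LoadAll(Assembly assembly)
    {
        return assembly.GetTypes()
            .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface)
            .Select(t => (IPlugin)Activator.CreateInstance(t))
            .ToArray();
    }
}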
Probably a few other ways could be found too, but seriously, in most cases you don't want to waste your time on that, especially manually. So either tidy up your code and split it into small libs, or start looking for an automatic obfuscator & trimmer.
I have to work with an old version of Mono in Unity projects. I find myself recreating some classes and extension methods that exist in later versions of .NET. Should I mark these with an attribute that will make them easy to take out at a later point, just wait for the inevitable errors and delete the duplicate code then, or take some other approach I'm not familiar with yet? If the attribute route is the way to go, is there already an appropriate attribute for this kind of thing?
Here's what I'd like:
[PresentInDotNET(3.5)]
I fill in the version and get alerted when the framework is at that level or higher.
Split them off to a separate assembly, and change the set of assemblies that make up the final delivery based on the .NET version. You need to rebuild your main assembly to refer to the correct assemblies (depending on whether Foo is in MySystem or System), but as long as you keep namespaces identical, that's all. If you are not even interested in keeping compatibility with older versions, you can simply delete classes from this assembly as they become available.
Alternatively, if the classes/extension methods you are recreating are not interesting (in the sense that you gain nothing by having .NET supply them for you), simply put them in their separate namespace and accept that you are duplicating code already present in newer versions. It doesn't matter a whole lot which assembly gets the job done, after all, as long as it happens.
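For example (a rough sketch only; the namespace and the backported method are placeholders for whatever you're duplicating), keeping the duplicates in their own namespace makes them easy to find and delete once your Mono version catches up:

namespace MyCompany.Backports
{
    // Duplicates functionality that newer framework versions already provide,
    // kept in its own namespace so it is easy to locate and remove later.
    public static class StringUtil
    {
        public static bool IsNullOrWhiteSpace(string value)
        {
            if (value == null) return true;
            for (int i = 0; i < value.Length; i++)
            {
                if (!char.IsWhiteSpace(value[i])) return false;
            }
            return true;
        }
    }
}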
Whatever you do, try to avoid going the route of #ifdefs, runtime discovery, and other conditional code, as this is much harder to maintain.
How about adding "// TODO" comments for places like this? Visual Studio will display these in the Task window and you can get at them pretty easily.
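And if you'd still prefer the attribute from the question, there is nothing built in for this, but a minimal custom attribute is easy to sketch (the name comes from the question; the shape below is just one possibility). You would still need a small reflection scan, or simply a find-in-files on the attribute name, to list the marked members when you upgrade.

using System;

// Marks a backported type or member with the framework version that
// introduces it, so the duplicates are easy to find and delete later.
[AttributeUsage(AttributeTargets.All, AllowMultiple = false)]
public sealed class PresentInDotNETAttribute : Attribute
{
    public PresentInDotNETAttribute(double version)
    {
        Version = version;
    }

    public double Version { get; private set; }
}

// Usage:
// [PresentInDotNET(3.5)]
// public static class StringBackports { /* ... */ }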
At my workplace we deploy internal applications by only replacing the assemblies that have changed (not my idea).
We can tell which assemblies we need to deploy by looking at if the source files that are compiled into the assemblies have changed. Most of the time we don't need to redeploy assemblies that depend on assemblies that have changed. However we have found some cases where even though no source files in an assembly have changed, we need to redeploy it.
So far we know that any of these changes in an assembly will require all dependent assemblies to be recompiled and redeployed:
Constant changes (see the sketch after this list)
Enum definition changes (order of values)
Return type of a function changes and caller uses var (sometimes)
Namespace of a class changes to another already referenced namespace.
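For example (the types here are purely illustrative), a public constant is baked into the calling assembly's IL at compile time, so redeploying only the shared assembly is not enough:

// Shared assembly:
public static class Limits
{
    // Callers compile this literal into their own IL.
    public const int MaxRetries = 3;
}

// Dependent assembly:
public class Worker
{
    public int GetRetryBudget()
    {
        // Compiles to "return 3;" - if MaxRetries changes to 5 and only the
        // shared assembly is redeployed, this still returns 3.
        return Limits.MaxRetries;
    }
}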
Are there any other cases that we're missing? I'm also open to arguments why this entire approach is flawed (although it's been used for years).
Edit To be clear, we're always recompiling, but only deploying assemblies where the source files in them have changed.
So anything that breaks compilation will be picked up (method name changes, etc.), since they require changes in the calling code.
Here is another one:
Changes to optional parameter values.
The default values get compiled directly into the assembly that uses them (when the caller doesn't specify the argument):
public void MyOptMethod(int optInt = 5) {}
Any calling code such as this:
theClass.MyOptMethod();
Will end up compiled to:
theClass.MyOptMethod(5);
If you change the method to:
public void MyOptMethod(int optInt = 10) {}
You will need to recompile all dependent assemblies if you want the new default to apply.
Additional changes that will require recompilation (thanks Polynomial):
Changes to generic type parameter constraints
Changes to method names (especially problematic when using reflection, as private methods may also be inspected)
Changes to exception handling (different exception type being thrown)
Changes to thread handling
Etc... etc... etc...
So - always recompile everything.
First off, we have sometimes deployed only a few assemblies in an application instead of the complete app. However, this is by no means the norm and has ONLY been done in our test environments when the developer had very recently (as in within the last few minutes) published the whole site and was just making a minor tweak. However, once the dev is satisfied they will go ahead and do a full recompile and republish.
The final push to testing is always based off a full recompile / deploy. The pushes to staging and ultimately production are based off of that full copy.
Besides repeatability, one reason is that you really can't be 100% positive that a human didn't miss something in the comparisons. Next, the difference in time between deploying 100 assemblies and deploying 5 is trivial, and quite frankly not worth the amount of human time it takes to try to figure out what really changed.
Quite frankly, the list you have in combination with Oded's answer ought to be enough to convince others of the potential for failure. However, the very fact that you have already run into failures due to this lackadaisical approach should be enough of a warning flag to stop it from continuing.
At the end of the day, it really boils down to a question of professionalism. Standardization and repeatability of the process of moving code out of development, through the various hoops, and ultimately into production are extremely important in creating robust mission-critical applications. If your deployment process is fraught with the potential for failure due to these kinds of risk-inducing shortcuts, it raises questions about the quality of the code being produced.
Firstly, I think this forum may not be appropriate for my question, so if it is in the wrong place, kindly forgive me and move it wherever appropriate. I didn't find a more suitable forum for it.
I have developed a C# application (Win Forms). Now I need to handle its version numbering, and I can't work out the best way to do it. I want the version number to be simple, something like 1.2 or 1.2.1. I read about the SVN revision number, but that also seems a little confusing at this stage. There are two variants of the application: one with an installer and one without.
I think the release version and the development version should be the same - please correct me if I am wrong. Should the number be updated automatically or changed manually? What is the best, simple, and easy way to handle version numbering for an application?
We use major.minor[.build[.revision]]. And we give the semantics of:
major = major version (big changes, maybe even with a UI refresh).
minor = a medium set of changes (maybe new internal processes or engine refactoring).
As for build and revision:
0 - Alpha stage.
1 - Beta.
2 - Release candidate.
3 - Production.
So, if your app is on 3.2.1.0, you know you're at the alpha stage of version 3.2. And so on.
NOTE: Although it may seem a bit much to include the revision, we found it to be good practice, because if we find some bug or unexpected behavior we just fix it and increment the revision, not the build.
I think - and this comes from my experience, not just an idea - that you should use 4-part version numbering, very much along the lines of @Randolf's answer. However, I would define the parts and the versioning differently.
major - this should be incremented when the version is a new build, is not compatible with previous versions without an upgrade process, or when the build/development platform changes (so moving from .NET 2.0 to .NET 4.0 would count).
minor - this should be incremented when the data structures underlying your application change (whether this is a DB or not). This means that a data build or update will be needed, which for clients indicates the level of work that may be needed for an upgrade.
build - this should always be incremented whenever a full production build is made, as a release candidate.
revision - this should be updated for every build, and used for bug fixes on a release candidate.
This means that you can identify with the version number exactly which changes and fixes are in that release, which is crucial for support.
Manual or automatic - this route would imply a manual update, and this is important to enable you to identify what a release contains.
Release and development version numbers should generally be the same, because the version number should only be incremented when a build for potential release is made. Having said that, you should also make sure that you can do development on any supported version, which may be lower than the current development version, if a new release is in testing.
Frustratingly, .Net seems to consider build numbers the 'wrong' way round, according to many people. AssemblyInfo specifies the build number as [Major].[Minor].[Build].[Revision], which to me doesn't make any sense. Surely a nightly build happens more often than a revision of the spec, and is therefore the 'smallest' change? I'm not going to fight against the framework though, so I'm just going to have to tolerate it.
Maybe it's the same root cause as the phenomenon of Americans specifying dates in the wrong order. Again, common sense would dictate large->small consistently.
With regards to organising this conceptually, I would say that each part of a four-part build number should indicate the most recent change of the appropriate magnitude; i.e:
Major: A major upgrade of the application, which you expect users to pay for if it's a commercial project. Users should expect to face infrastructure concerns, such as service packs and new .Net versions;
Minor: A significant rollup of bug fixes and change requests that would fulfil the description of 'small feature'. Anything that should arguably have been in the program already can be rolled into a minor version;
Build: Personal choice, but for me this would be the unique build number. If you get your binaries from an integration server, this could run into the tens of thousands. Likely to be a nightly build, but could also be built on demand when the PM says 'go'. The build number should uniquely correspond to an event that kicked off a full production build on the integration server.
Revision: Should correspond to an amendment to the specification, at least conceptually. Would typically match an item in the changelog i.e. all incremental changes up to and including change request x.
In BuildMaster, we consider the #.#.#.# release number format to represent:
[major version].[minor version].[maintenance version].[build number]
Since mostly I would be regurgitating information from our blog, I'll just give you a link to the article written by a colleague of mine: http://blog.inedo.com/2011/03/15/release-numbering-best-practices/
When it comes to updating your release numbers, I would just leave the local development version at 0.0.0.0 and let your automated build process worry about the numbering.
C# 2008 SP1
I am wondering what is the best way to handle revision numbers.
I had always thought there are normally only 3 numbers (Major, Minor, and Bug fixes).
However, I am left wondering what the build number is and the Revision number.
For example, in the past I have normally used only 3 numbers. If there is some very minor change or a bug fix, I would increment the 3rd number (bug fixes).
Because I am new to this. What is normally done in the professional world?
Many thanks for any advice,
In my AssemblyInfo file I have the following:
// Version information for an assembly consists of the following four values:
//
// Major Version
// Minor Version
// Build Number
// Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.2.*")]
[assembly: AssemblyFileVersion("1.0.0.0")]
// 1.0.2 Added feature for detecting if a sound card is installed.
From MSDN:
Build: A difference in build number represents a recompilation of the same source. This would be appropriate because of processor, platform, or compiler changes.
Revision: Assemblies with the same name, major, and minor version numbers but different revisions are intended to be fully interchangeable. This would be appropriate to fix a security hole in a previously released assembly.
Phil Haack has a nice deconstruction of the .NET versioning system, but in practice I don't think his concerns really matter, since in my experience the .NET/MS versioning system is only really used by technical staff for debugging/support/tracking purposes; the public and project management will often go by dates or made-up marketing version numbers.
FWIW, every .NET project I've worked on has been governed by "X.Y.*", i.e. we like to manually control what the major and minor numbers are but let the system control the build and revision.
Try this:
http://autobuildversion.codeplex.com/
I think Paul Alexander's answer is the correct one, but I'd like to add the following remark regarding your AssemblyInfo file:
If you use 1.0.2.*, remember that the last part (replaced by the *) is derived from the time of the build rather than incremented sequentially. So if you build 1.0.2.* this evening and build it again tomorrow morning, the second build can end up with a lower version number than the build you did earlier.
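If the compiler's documented defaults still apply to your toolchain (with 1.0.2.* only the revision is auto-generated, as half the seconds since local midnight; a starred build component would be days since 2000-01-01), this little sketch shows why an evening build outranks the next morning's:

using System;

class AutoVersionParts
{
    static void Main()
    {
        // Approximation of the compiler's documented auto-numbering:
        //   build    = days since 2000-01-01 (only used when the build is starred)
        //   revision = seconds since local midnight, divided by 2
        DateTime now = DateTime.Now;
        int build = (int)(now.Date - new DateTime(2000, 1, 1)).TotalDays;
        int revision = (int)(now.TimeOfDay.TotalSeconds / 2);

        Console.WriteLine("Auto build:    " + build);
        Console.WriteLine("Auto revision: " + revision); // small in the morning, large in the evening
    }
}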
You'll want to consider versioning your assembly and file differently. When you change the assembly version number it's possible, but extremely difficult, for existing code to use it without recompiling. We only change the assembly version when there are breaking changes so we can release minor bug fixes easily.
Read more on our Assembly Versioning policy.
We use the Bug-fix number in increments of two. Odd numbers mean released minor bug fixes and even numbers mean working on bug fixes. Apparently this is common in some businesses, i.e., the one I am in.
The idea is to identify accidental releases, and I rather like it: do things in a way that lets you easily spot when something is wrong. This does not mean that an odd number is necessarily a deliberate release, only that it is with quite high probability.
Thanks for all your suggestions.
I have decided to use the following.
My project has the following version number: 1.0.2.20.
1  : Major changes
0  : Minor changes
2  : Bug fixes
20 : Subversion revision number
So I have changed the following in my AssemblyInfo.cs file:
[assembly: AssemblyVersion("1.0.2.20")]
I would be happy to hear any suggestions on this idea.
Many thanks,