Upgrading Code from .NET 1.1 to 2.0/3.5 (C#)

I am upgrading a Windows client application that was originally written for .NET 1.1. The previous developer hand-wrote many solutions that newer versions of .NET handle out of the box. Since I am relatively new to .NET and do not have a complete overview of its features, I am asking here.
What are the most notable classes and syntax features in later .NET versions that are likely to replace handwritten code with functionality from the library?

Biggest changes off the top of my head:
Use generic collections instead of ArrayList, Hashtable etc.
For .NET 3.5 (C# 3.0), use LINQ instead of manually filtering/projecting
Use generic delegates instead of having to declare your own all the time
Use anonymous methods instead of creating a one line method used to create a delegate in one place
Use BackgroundWorker for WinForms background tasks
Generics are the most wide-reaching change in my view; a short sketch showing a few of these together follows below.
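To make a couple of those concrete, here is a minimal sketch (the Customer type is invented purely for illustration) of the pattern that replaces the old ArrayList-plus-casting-plus-loop code:

    using System.Collections.Generic;
    using System.Linq;

    class Customer
    {
        public string Name;
        public bool IsActive;
    }

    static class CustomerQueries
    {
        // .NET 1.1 style: ArrayList, a hand-written filter loop and casts everywhere.
        // .NET 2.0/3.5 style: List<T>, a lambda and LINQ do the same in three lines.
        public static List<string> ActiveNames(IEnumerable<Customer> customers)
        {
            return customers
                .Where(c => c.IsActive)   // filter without a one-off named method
                .Select(c => c.Name)      // project without a manual loop
                .ToList();                // strongly typed, no ArrayList or casting
        }
    }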

Personally, I would leave alone any 1.1 code that works fine when compiled against 2.0/3.5. Unless you have the time to spare, anything you rewrite will have to be tested again, and you may still introduce new bugs that your testing doesn't catch.
Things I'd look to use for future work, though, would be generics and LINQ: generics with .NET 2.0, and LINQ with .NET 3.5.

LINQ was a big leap. It might be possible to use it in some places (e.g. XML-handling code). Also, generics may reduce the need for some custom classes.
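For example, hand-rolled XmlDocument traversal can often be replaced by a short LINQ to XML query along these lines (the element and attribute names are invented for illustration):

    using System;
    using System.Linq;
    using System.Xml.Linq;

    static class OrderReport
    {
        // Assumes a document shaped like <orders><order id="..." total="..."/></orders>.
        public static void PrintLargeOrders(string path)
        {
            XDocument doc = XDocument.Load(path);
            var largeOrderIds = from o in doc.Descendants("order")
                                where (decimal)o.Attribute("total") > 1000m
                                select (string)o.Attribute("id");

            foreach (string id in largeOrderIds)
                Console.WriteLine(id);
        }
    }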

Also be aware of, and test for, the possible impact of breaking changes between framework versions. A Google search should reveal the top issues.


What .NET version to choose?

I am writing a public .NET class library for our online REST service and I can't decide which version of .NET to target.
I would like to use .NET 4.0, but a class library compiled against 4.0 can't be consumed by .NET 2.0 applications, can it?
Maybe there are statistics on how many developers still use .NET 2.0?
There's little reason not to use the latest version of the framework. Not only do you get all the latest bells and whistles that speed up development, but you also get to take advantage of all the bug fixes and improvements that Microsoft has made under the hood.
The only advantage of targeting earlier versions of the framework is in a vain hope that the user won't have to download and install anything in order to use your app. But that's far from foolproof, and mostly in vain. Remember that Windows is not a .NET Framework delivery channel and you can't reliably assume that the user will have any version of the .NET Framework installed. Even if you insisted on counting on it being bundled with Windows (which you shouldn't), lots of users still haven't upgraded from Windows XP. Even if you counted on it being pushed out over Windows Update, there are significant numbers of users who either don't use Windows Update, don't use Windows Update very often, or who live out in remote areas with poor/slow Internet access and can't download all of those updates.
The moral of the story is that you're going to have to provide the appropriate version of the .NET Framework with your application anyway. And the .NET 4.0 runtime is actually significantly smaller than the previous versions, so there's little reason to target them. The team has worked really hard on that, and their efforts have really paid off. Even better, as atornblad notes, most apps can target the Client Profile version of the framework which trims out some infrequently used pieces and slims things down another ~16%.
Additionally, I strongly recommend using a setup application that handles installing the required framework for the user automatically and seamlessly. Visual Studio comes with built-in support for creating setup applications, or you could use a third-party installer utility like Inno Setup. That makes using the latest version a no-brainer.
Everyone else seems to be recommending using the latest version, so I'll buck the trend and suggest 2.0 if you don't actually need any features from later versions... if this really is a client library and you have no control over and little idea about who is going to use it.
It really does depend on who your users are likely to be, which in turn depends on what the REST service is. If it's some sort of social media thing, then I'd say it's more likely that your clients will be in environments where they can use .NET 4. If it's something which might well be used by financial institutions or other big businesses, they may well not have the option of using .NET 4, so you should consider earlier versions. This is the approach we've taken for Noda Time where we believe the library will be useful in a wide variety of situations, and we can't predict the client requirements.
Of course, if you know all your clients and know they will all be able to use .NET 4, then go with that.
The big downsides of sticking to .NET 2.0 are that you won't be able to use LINQ internally (unless you use LINQBridge or something similar, which adds another dependency for your library) and you won't be able to (cleanly) provide extension methods. If you can usefully expose more features to the client if you use a later version, you may want to provide multiple versions of the library - but obviously that's a maintenance headache.
Another consideration is whether you ought to provide a Silverlight version - which again depends on what sort of service you're providing and what sort of users you're expecting.
If you are making a REST service you should probably use 4.0.
The only time you need to consider a legacy version is if another project has to reference your compiled DLL. The REST service is exposed over HTTP on the Internet, and clients will not use the DLL directly. Or did I misunderstand the question?
It's almost always a good idea to use the latest version, because MS provides a lot of bug fixes and improvements in each release.
If your system has a constraint that requires 2.0, I'm afraid you need to use that one, because you need to make the stuff work.
For an approximate distribution of versions in use, you can look at this SO answer (though it only goes up to 3.5).
Unless you are creating your library to fit into an existing legacy environment, you should always use the most up-to-date release.
If I understand you correctly, you're looking to create a .NET client library for REST service(s) that you have also built.
Perhaps you want to provide a client library which can be consumed by 2.0, 3.5 and 4.0 applications; this is absolutely possible, and you can still use the best features of each framework version.
There may be more approaches, but I'd like to suggest three of them:
Conditional compilation-based approach. You can implement your classes using a common feature set found in both the legacy and newer framework versions, while still taking advantage of useful features present in each version. This is possible using conditional compilation and compilation symbols, since you can define specific code to be compiled depending on the target framework version (check this question: Is it possible to conditionally compile to .NET Framework version?). A minimal sketch of this appears after the list.
Linked files in Visual Studio 2010-based approach. You can choose to use a common feature set, keeping in mind that this is going to be the one found in the oldest version. That is, you create a project which compiles against 2.0, and others for the newer versions, adding all compilable files and embedded resources as linked files in those Visual Studio projects. This produces an assembly for each supported framework version. You can mix the conditional compilation-based approach with this one, which gives you a great way of delivering your public assembly for several framework versions in a reliable and easy-to-maintain way. Note that whenever you add a new compiled file or resource to one project, you need to create the corresponding links in the other projects. Check this MSDN article if you want to learn more about linked files: http://msdn.microsoft.com/en-us/library/9f4t9t92.aspx.
Version-specific, optimized assemblies. Maybe the most time-consuming approach. It requires more effort, but if your REST service isn't a giant one, you may have room to develop a specific assembly for each framework version and take advantage of the best features and approaches of each of them.
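As a minimal sketch of approach #1: the NET20 symbol below is not built in; it is assumed to be defined only in the project configuration that targets .NET 2.0 (for example via DefineConstants in the project file).

    using System.Collections.Generic;

    // One source file, two compiled shapes: a plain static helper for the 2.0
    // build, and the same helper exposed as an extension method for 3.5/4.0.
    public static class CollectionHelpers
    {
    #if NET20
        public static bool IsNullOrEmpty<T>(ICollection<T> source)
        {
            return source == null || source.Count == 0;
        }
    #else
        public static bool IsNullOrEmpty<T>(this ICollection<T> source)
        {
            return source == null || source.Count == 0;
        }
    #endif
    }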
My opinion
In my opinion, I'd take approach #2, because it has the best of #1 and #3. Once you get used to it, it's easy to maintain (it's mostly a matter of discipline), and you'll have a good range of choices for your client developers.
I'd compromise and use the oldest framework that provides you (the library's author) the most bang for your buck. It's the compromise that lets you develop the fastest and exposes your library to the most users. For me, that usually means 3.5, because I tend to use LINQ extensively.
It should be trivial to provide both 2.0 and 4.0 binaries, as long as you're not using any of the 4.0 specific dlls.
You can also publish your client library source code - .NET binaries are already so easy to decompile that you're not leaking out anything valuable this way.

C# MemoryMappedFile in .NET 3.5

I need to use the MemoryMappedFile class in .NET 3.5...
Is there any way to find the code of the classes used in .NET 4.0 and adapt it for use in .NET 3.5?
Thanks in advance.
If you need memory mapped files in .NET 3.5, you could also write your own wrapper around the respective Win32 methods from scratch. This might take a bit more effort, but it avoids any licensing issues.
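For what it's worth, a minimal sketch of such a wrapper (pagefile-backed, read/write only, no file-backed views or security attributes) could look like the following; the class name and shape are illustrative, not production-ready:

    using System;
    using System.Runtime.InteropServices;

    // Minimal sketch of a pagefile-backed memory-mapped buffer for .NET 3.5
    // built directly on the Win32 API.
    public sealed class SimpleMemoryMap : IDisposable
    {
        const uint PAGE_READWRITE = 0x04;
        const uint FILE_MAP_ALL_ACCESS = 0x000F001F;
        static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr lpAttributes,
            uint flProtect, uint dwMaximumSizeHigh, uint dwMaximumSizeLow, string lpName);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr MapViewOfFile(IntPtr hFileMappingObject, uint dwDesiredAccess,
            uint dwFileOffsetHigh, uint dwFileOffsetLow, UIntPtr dwNumberOfBytesToMap);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool UnmapViewOfFile(IntPtr lpBaseAddress);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CloseHandle(IntPtr hObject);

        readonly IntPtr _mapping;

        public IntPtr View { get; private set; }

        public SimpleMemoryMap(string name, uint sizeInBytes)
        {
            // INVALID_HANDLE_VALUE means the mapping is backed by the system paging file.
            _mapping = CreateFileMapping(INVALID_HANDLE_VALUE, IntPtr.Zero,
                PAGE_READWRITE, 0, sizeInBytes, name);
            if (_mapping == IntPtr.Zero)
                throw new System.ComponentModel.Win32Exception();

            // Map the whole section into this process's address space.
            View = MapViewOfFile(_mapping, FILE_MAP_ALL_ACCESS, 0, 0, UIntPtr.Zero);
            if (View == IntPtr.Zero)
                throw new System.ComponentModel.Win32Exception();
        }

        public void Dispose()
        {
            if (View != IntPtr.Zero) UnmapViewOfFile(View);
            if (_mapping != IntPtr.Zero) CloseHandle(_mapping);
            View = IntPtr.Zero;
        }
    }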
.NET 4 uses a whole new CLR, and I wouldn't be at all surprised to find that enough had changed under the hood to make this basically infeasible.
Basically, you should be working to use a version of .NET that supports the functionality you need - any workaround you find is very likely to cause hard-to-diagnose issues, IMO.
You could use a decompiler or the shared source (links omitted on purpose) to get the code. A quick look doesn't reveal any calls into the CLR; it looks like everything is "plain" C# and some P/Invoke to Win32.
However, note that you would have to pull in quite a few classes to make it work, not only those in System.IO.MemoryMappedFiles. And in the end you could still run into issues like the ones Jon describes.
Not to mention the licensing issues, of course, which would honestly be the first showstopper anyway.

C# Func<> delegates in library

The generic Func<> and Action<> delegates from later versions of .NET are very appealing, and it's been demonstrated in many places that these can easily be recreated in code targeting .NET 2.0, such as here.
From the perspective of a library targeting .NET 2.0, however, which may be consumed by applications built against any higher version of .NET, how does this stack up? Is implementing this "compatibility layer" within the library a guaranteed recipe for conflict (in terms of both private and public interface), or are there ways to make it work independently of the target framework the consuming application builds against?
If this is a non-starter, would it be better to either:
A) Define an identical set of parameterized delegates with different names? or
B) Stick strictly with .NET 2.0 conventions, and define new delegate types as I need them?
The plain truth is that Func<> and Action<> are a good idea. They make your code much easier to read and they avoid a shocking amount of messy boilerplate delegate declarations. That's why you want to use them.
So you have this really appealing programming style you want to use, it's a standard technique that is now used almost universally instead of the old way, but you can't use it because you are targeting a legacy version of the framework. What should you do?
You have three choices:
Use the programming style that was in common use before the feature
Add the feature to your own code in spirit but with non-conflicting names
Add the feature to your own code with the "real" names but in your own namespace
Using the old programming style gives up all the benefits that we have come to appreciate from the feature. That's a big sacrifice. But maybe all your co-developers are used to this style of programming.
Using the feature with non-conflicting names seems sensible enough. People will be able to read the code and benefit from the features, but no-one will be confused that they appear to be something that they're not. When you are finally ready to upgrade, you'll have to patch up the names. Luckily Ctrl+R, Ctrl+R makes doing that very easy.
Using the feature with the same names as the standard feature means your code can target an older version but appear to be using newer features. Seems like a win/win. But this could cause confusion and you have to be careful that your types aren't exposed to other unknowing assemblies, possibly causing source level compilation problems. So you have to be careful and be perfectly clear about what is happening. But it can work effectively.
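For instance, the "real names, own namespace" option might look like this (the namespace name is just an illustration):

    // Standard delegate shapes declared in the library's own namespace (not System),
    // so they don't collide with System.Core when a 3.5/4.0 application references
    // this 2.0 assembly. Action<T> (one argument) is omitted because .NET 2.0's
    // mscorlib already provides it.
    namespace MyLibrary.Compatibility
    {
        public delegate TResult Func<TResult>();
        public delegate TResult Func<T, TResult>(T arg);
        public delegate TResult Func<T1, T2, TResult>(T1 arg1, T2 arg2);

        public delegate void Action();
        public delegate void Action<T1, T2>(T1 arg1, T2 arg2);
    }

Consumers on 3.5 and later that import both this namespace and System will still have to qualify the names, which is exactly the kind of confusion described above, so document it clearly.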
You have to pick whatever approach makes sense in your situation depending on your needs. There is no one right answer for everybody, only trade-offs.

Why should I upgrade to C# 4.0?

I know there are some nice new features in C# 4.0 but I can't, for the life of me, think of a compelling reason for either upgrading existing projects or for switching to new projects.
I've seen some posts where people have said that if their hosting service didn't provide .NET 4, they'd find another provider, as .NET 4 was pivotal to their direction.
Now my boss is trying to get me to agree to switch all our production environments to C# 4 and to do it now.
So the question is: has anyone either begun using, or converted a project to, C# 4 for a compelling reason? Was there a feature that you just had to have, one that would make your life so much easier?
There are some cool new features in C# 4.0:
Dynamic member lookup
Covariant and contravariant generic type parameters
Optional ref Keyword when using COM
Optional parameters and named arguments
Indexed properties
In his release blog post Scott Guthrie goes into detail about the features of .NET 4 in general. Another great resource is a white paper at http://www.asp.net/learn/whitepapers/aspnet4. However, I'd doubt you are going to need one / any of these new features right away. As Scott Hanselman blogged:
there's a lot of stuff that's new and added in .NET 4, but not in that "overwhelming-I-need-to-relearn-everything" way.
Whether or not you should upgrade is therefore dependent on a variety of other factors. Some reasons that spring to mind:
Standardizing your development environment on a single platform (VS2010 rather than VS2008).
Size of the .NET Framework is substantially reduced
Speed improvements if you are a Visual Studio Tools for Office developer
An open dialogue with your manager seems appropriate to understand his reasoning for the upgrade. I'd argue that "because it's shiny" isn't a compelling reason.
As a reference this dated Stack Overflow question "Why not upgrade to the latest .net framework" provides the inverse to your question.
Quite frankly System.Collections.Concurrent has made developing multi-threaded applications a breeze.
The new and improved System.Linq.Expressions makes writing dynamically compiled code seem like child's play.
The new named parameters feature means I can have big constructors and not get confused as to what each parameter is. Immutable objects are just that much easier.
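As a small illustration of the expression-tree point, building and compiling a delegate at run time only takes a few lines (a minimal sketch):

    using System;
    using System.Linq.Expressions;

    static class ExpressionDemo
    {
        public static void Main()
        {
            // Build (x, y) => x + y as an expression tree, then compile it to a delegate.
            ParameterExpression x = Expression.Parameter(typeof(int), "x");
            ParameterExpression y = Expression.Parameter(typeof(int), "y");
            Func<int, int, int> add =
                Expression.Lambda<Func<int, int, int>>(Expression.Add(x, y), x, y).Compile();

            Console.WriteLine(add(2, 3)); // prints 5
        }
    }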
Surprisingly not mentioned:
PLINQ
Task Parallel Library
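Both are easy to drop into existing code; a minimal sketch:

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    static class ParallelSketch
    {
        public static void Process(int[] ids)
        {
            // PLINQ: parallelize an existing LINQ query with a single AsParallel() call
            // (note that result order is not guaranteed without AsOrdered()).
            int[] squares = ids.AsParallel().Select(i => i * i).ToArray();
            Console.WriteLine(squares.Length);

            // Task Parallel Library: spread independent work items across cores.
            Parallel.ForEach(ids, id => Console.WriteLine("processed {0}", id));
        }
    }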
Is your question specific to C# 4.0, or .NET 4.0?
In C# 4.0 there are only a couple of really nice new features. Covariance/contravariance is not useful all the time, but when you run into a need for it, it can really save a lot of pain. Optional method parameters can eliminate a lot of ugly method overloads and make certain method calls a lot cleaner. If you're using COM or IronPython or any of a few similar frameworks, the dynamic keyword can also be a real lifesaver.
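A quick sketch of those language-level additions (the method and types below are made up for illustration):

    using System;
    using System.Collections.Generic;

    static class LanguageSketch
    {
        // Optional parameters and named arguments replace a chain of overloads.
        static void Connect(string host, int port = 80, bool useSsl = false)
        {
            Console.WriteLine("{0}:{1} ssl={2}", host, port, useSsl);
        }

        public static void Main()
        {
            Connect("example.org", useSsl: true);   // port keeps its default value

            // Covariance: IEnumerable<out T> lets a sequence of strings be used
            // where a sequence of objects is expected, without copying.
            IEnumerable<string> names = new List<string> { "a", "b" };
            IEnumerable<object> objects = names;
            foreach (object o in objects)
                Console.WriteLine(o);

            // dynamic defers member lookup to run time - the big win for COM
            // and IronPython interop.
            dynamic bag = new System.Dynamic.ExpandoObject();
            bag.Greeting = "hello";
            Console.WriteLine(bag.Greeting);
        }
    }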
.NET 4.0 in general has a ton of really interesting features across a variety of frameworks. Foreign Key support in Linq to Entities, for example, is making life a lot easier for us. A lot of people are really excited about POCO support. They also added support for some of the LINQ methods (e.g. Distinct) that were previously missing from the Entity Framework.
So it will really all boil down to which frameworks you're using and how you're using them, and how expensive it will be for you to make the switch.
First, what is compelling to me may mean nothing to you. Having said that, I would upgrade Visual Studio if budget allows. In fact, personally I think there is a huge career risk in staying with a company that doesn't keep your tools up to date. You will fall behind in your knowledge of the field without access to the latest tools.
As for converting all your projects just for the sake of converting them, that seems like folly to me. Putting aside all the extra distribution work (and upgrading machines to have .NET 4), you have to consider the chance that something will go wrong. (And if you are like me, some assemblies must be called from third-party programs that use .NET 3.5, which makes them impossible to convert.)
My first rule would be that nothing is converted unless you are working on it anyway. But I would seriously look to convert anything that could use improvement from either parallel code, or COM interop.
I do have a compelling project that was converted. I had a long running web method being called. In the version that exists now, I return from the method without knowing the results. Instead I gave the user a way to check later. By moving to a parallel foreach loop this works much better and I can let the user know if there were any errors.
The same project is also being converted to use RIA Services, which have improved greatly and reduce the amount of code I have to write myself.
I upgraded for the same reason everyone else did.
...so I can put it on my resume :)
If you're starting a new project today, it's probably best to start it on 4.0, since down the road you will have to migrate it at some point anyways (assuming it stays around long enough, older versions of .net will simply stop being supported).
C# 4 implies other things, depending on your project: WCF 4, WPF 4, ASP.NET 4, MVC 2, Entity Framework 4, etc. So don't just look at C# as the reason to change; you also have to look at the whole stack. If there's still nothing compelling, then staying where you are is probably a wise choice.
If you're doing WPF / Silverlight, I would definitely recommend upgrading to Visual Studio 2010 (I know, you can write .NET 4.0 code without an IDE, but that's an edge case if ever there were one).
The multi monitor support is nifty but buggy. I spend a lot of time trying to get windows to refresh.
In terms of language, the COM interop (as @Gvs mentioned) is also vastly improved with the dynamic datatype and optional parameters.
UPDATE: Multiple monitor support is pretty rock solid with VS 2010 SP1.
If you can get your boss to pop for the $10,000+ Visual Studio Ultimate Edition, IntelliTrace is a compelling reason to upgrade your environment and justification enough for the investment.
COM integration is much easier with the dynamic datatype, and optional parameters.
For me there are two things:
optional arguments -- because I am sick of polluting classes with X versions of the same method (overloading)
dynamic keyword -- because the expressiveness of generics in C# is a joke; this way I can at least "write what I mean" without jumping through hoops, at the cost of execution speed, of course
The more compact the code (i.e. when you can express the idea without extra "how do I work around limitation Y of the language" noise), the better, because the code is much easier to maintain and it is harder to make a stupid (or worse) mistake.
There are no compelling stability or security reasons to switch. Shouldn't that be your boss's concern?

Conversion to .NET 3.5

I recently started maintaining a .NET 1.1 project and would like to convert it to .NET 3.5.
Any tips to share on reducing code by making use of new features?
As a first attempt, I would like to convert a bunch of static helper functions.
Update: The main reason why I am converting is to learn new features like static classes, LINQ etc. Just for my own use at least for now.
I would suggest starting by migrating to .NET 2.0 features, first.
My first step would be to slowly refactor to move all the collections to generic collections. This will help tremendously, and ease the migration into the .NET 3.5 features, especially with LINQ. It should also have a nice impact on your performance, since any collections of value types will perform better.
Be wary, as part of this, of converting Hashtables to Dictionary<TKey,TValue>, since the behavior differs in some cases (see the sketch below); otherwise, ArrayList -> List<T> and the like are easy, useful conversions.
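The classic behavioral difference is the lookup of a missing key; a quick sketch:

    using System;
    using System.Collections;
    using System.Collections.Generic;

    static class LookupDifference
    {
        public static void Main()
        {
            Hashtable table = new Hashtable();
            object fromTable = table["missing"];        // returns null, no exception
            Console.WriteLine(fromTable == null);       // True

            Dictionary<string, string> dict = new Dictionary<string, string>();
            // string value = dict["missing"];          // would throw KeyNotFoundException
            string value;
            if (!dict.TryGetValue("missing", out value))  // the usual replacement pattern
                Console.WriteLine("not found");
        }
    }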
After that, moving helpers to static classes, and potentially extension methods, would be a good next step to consider. This can make the code more readable.
You can use static classes (C# 2.0 feature) to rewrite old helper functions.
If you can swing it, the easiest way I've found to do this is using Visual Studio 2008 and ReSharper. ReSharper will show you, via clearly visible notation, where you can improve your code, and then offers the Alt+Enter keyboard shortcut to "fix" it for you.
ReSharper also has a feature called "Cleanup Code" that will do some of the refactoring for you.
Why convert at all? .NET 1.1 code is largely compatible with .NET 3.5, so you should not find many breaking changes when you migrate. If you need to refactor an area because it has problems, or you wish to extend it somehow, then I would consider migrating it to use newer features; but otherwise, why touch it and risk breaking it?
Edit: As this is a learning exercise rather than changing production code, I'd revise my views somewhat; this is probably a good way to learn new approaches. I'd certainly look at LINQ. In places where the old code iterates through lists or manipulates XML or data from a DB, see if you can rewrite it using LINQ instead.
If you want to start by "convert[ing] a bunch of static helper functions", then you'll want to check out extension methods.
It's likely that you can use them to make your code simpler and more readable.
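For example, a 1.1-style static helper (the names here are invented) becomes an extension method simply by marking the first parameter with this and putting it in a static class:

    // Before (helper class):    StringHelper.Truncate(title, 20);
    // After (extension method): title.Truncate(20);
    public static class StringExtensions
    {
        public static string Truncate(this string value, int maxLength)
        {
            if (string.IsNullOrEmpty(value) || value.Length <= maxLength)
                return value;
            return value.Substring(0, maxLength);
        }
    }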
