I researched the use of a condition framework to verify data, i.e. instead of writing
if (cond) throw new SomeException();
you write
SomeFramework.MakeSure(cond);
In the end, my choice is between the Code Contracts and CuttingEdge.Conditions frameworks.
I can't decide which framework to use. What I don't like about Code Contracts is that you have to install an extra MSI in order to use it, plus the options you need to choose; not that it's bad, but it doesn't feel natural. (And of course it's still under MS Research.)
What do you think?
The CodeContracts framework is part of .NET 4, so you can write code against it without having to install anything; it's just that without the rewriter component the code contracts won't have any effect at runtime. I take this inclusion in the framework as a sign that Microsoft intends to integrate code contracts more deeply in the future.
According to the stats on the CuttingEdge.Conditions CodePlex page, it's only been downloaded 4,189 times. There are some nice things about the syntax, but unless there is something specifically supported by CuttingEdge.Conditions and not by CodeContracts, you might as well stick with the version that's part of .NET.
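For comparison, here's a rough side-by-side sketch of the two styles (the OrderService class and its parameters are invented for illustration):

```csharp
using System.Diagnostics.Contracts;
using CuttingEdge.Conditions;

public class Order { }

public class OrderService
{
    public void Ship(Order order, int quantity)
    {
        // Code Contracts: declarative preconditions, enforced at runtime
        // by the binary rewriter and (optionally) verified by the static checker.
        Contract.Requires(order != null);
        Contract.Requires(quantity > 0);

        // CuttingEdge.Conditions: fluent runtime validation that throws
        // ArgumentException-style exceptions immediately.
        Condition.Requires(order, "order").IsNotNull();
        Condition.Requires(quantity, "quantity").IsGreaterThan(0);

        // ... shipping logic ...
    }
}
```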
The key features of code contracts, as far as I am concerned, are as follows:
You can set up code contracts on interfaces, to specify the expected behaviour of types implementing those interfaces.
Code contracts are inherited.
I haven't tried CuttingEdge.Conditions, but it's not obvious that it supports these two features (whereas CodeContracts does).
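To illustrate the interface point, here's a minimal sketch of how Code Contracts attaches contracts to an interface (the IAccount type is invented for the example):

```csharp
using System.Diagnostics.Contracts;

// Contracts can't live in the interface itself, so Code Contracts pairs
// the interface with a buddy class via attributes. Every implementation
// of IAccount then inherits the precondition automatically.
[ContractClass(typeof(AccountContracts))]
public interface IAccount
{
    void Deposit(decimal amount);
}

[ContractClassFor(typeof(IAccount))]
internal abstract class AccountContracts : IAccount
{
    public void Deposit(decimal amount)
    {
        Contract.Requires(amount > 0);
    }
}
```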
The main difference is that Code Contracts includes a static checker. This means your contracts will be checked at compile time for correctness.
Also, as long as you are building for .NET 4, your users don't need to install anything. The rewriter works at compile time and the rest of CC is part of .NET.
Edit: I recommend people use https://github.com/adamralph/liteguard
CuttingEdge.Conditions has been forked and is now simply called Conditions. The original author was no longer maintaining or using the project: https://conditions.codeplex.com/workitem/20064
CodeContracts is not implemented in Mono. There was a GSoC project, but it didn't produce a complete solution, so Conditions is your only choice if you're targeting the Xamarin.iOS/Xamarin.Android/Xamarin.Mac platforms, or Mono in general.
The library is now a Portable Class Library, so cross-platform support comes by default:
https://github.com/ghuntley/Conditions and https://www.nuget.org/packages/Conditions/
Related
How can I efficiently determine up front (as in, before waiting for the runtime errors to pour in; ideally before doing the code conversion) all the methods called by my .NET Framework library that are NOT actually implemented in .NET Standard 2.0/2.1? The Portability Analyzer only tells you if a method is missing entirely (i.e., won't compile); the compiler only tells you what won't compile; but neither will tell you about methods whose functionality has been gutted such that they will produce runtime errors if your code ever hits them. I'd rather not leave such time bombs around for my users to find... and while I'd like to think my unit tests and automated UI tests cover 100% of my code, most of us haven't quite achieved that.
So, I am hoping someone is aware of an analyzer that highlights things that are there but NOT implemented... or even has a simple list of methods that are present but don't work... or any other tool/approach for finding those up front?
(Just as evidence of the need... a couple of months ago I used the Portability Analyzer to tell me what I needed to fix to convert my .NET Framework libs to .NET Standard 2.0... not too bad... so I did it. The compiler found a few other issues based on the specific arg lists. Okay, fixed those. All compiled. All tests ran just fine. So I thought that I was good. Then a month ago I started converting our app to .NET 5, building on the newly-converted .NET Standard 2.0 libs. But today I got "lucky" and discovered that, although it compiles just fine, it will never actually work... Thread.Abort doesn't do what it is supposed to (i.e., inject a ThreadAbortException into the target Thread); instead it does nothing to that thread and, worse, throws a PlatformNotSupportedException on the calling thread. Egad. Glad I found that before my customers did. But I wonder how many other methods similarly compile fine under .NET Standard 2.0 but have been similarly gutted such that they won't actually function properly. It would have been nice if the Portability Analyzer had told me that... or any other tool/approach that might be out there.)
Ah, quaabaam's comment/link was a better overview than anything I had found before, as it includes the two things I had been missing:
This list of such not-really-supported methods:
https://learn.microsoft.com/en-us/dotnet/core/compatibility/unsupported-apis
and mention of a "Platform Compatibility Analyzer", which is different from the ".NET Portability Analyzer". To enable it in your .NET Standard x.x projects (or .NET Framework projects that you're considering converting), add a .editorconfig file to your project with this line:
dotnet_code_quality.enable_platform_analyzer_on_pre_net5_target=true
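A minimal .editorconfig sketch (note that analyzer options generally need to sit under a file-match section header such as [*.cs] to take effect):

```
[*.cs]
dotnet_code_quality.enable_platform_analyzer_on_pre_net5_target = true
```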
Thanks quaabaam!! I have dozens of articles/posts/MS-docs open on doing the conversion, and none of them mention that!
I am writing a public .NET class library for our online REST service, and I can't decide which version of .NET to target.
I would like to target .NET 4.0, but a library compiled against 4.0 can't be used by .NET 2.0 applications, can it?
Is there maybe a statistic on how many developers still use .NET 2.0?
There's little reason not to use the latest version of the framework. Not only do you get all the latest bells and whistles that speed development time, but you also get to take advantage of all the bug fixes and improvements that Microsoft has made under the hood.
The only advantage of targeting earlier versions of the framework is the vain hope that the user won't have to download and install anything in order to use your app. But that's far from foolproof. Remember that Windows is not a .NET Framework delivery channel, and you can't reliably assume that the user will have any version of the .NET Framework installed. Even if you insisted on counting on it being bundled with Windows (which you shouldn't), lots of users still haven't upgraded from Windows XP. Even if you counted on it being pushed out over Windows Update, there are significant numbers of users who don't use Windows Update, don't use it very often, or live in remote areas with poor/slow Internet access and can't download all of those updates.
The moral of the story is that you're going to have to provide the appropriate version of the .NET Framework with your application anyway. And the .NET 4.0 runtime is actually significantly smaller than the previous versions, so there's little reason to target them. The team has worked really hard on that, and their efforts have really paid off. Even better, as atornblad notes, most apps can target the Client Profile version of the framework which trims out some infrequently used pieces and slims things down another ~16%.
Additionally, I strongly recommend using a setup application that handles installing the required framework for the user automatically and seamlessly. Visual Studio comes with built-in support for creating setup applications, or you could use a third-party installer utility like Inno Setup. That makes using the latest version a no-brainer.
Everyone else seems to be recommending using the latest version, so I'll buck the trend and suggest 2.0 if you don't actually need any features from later versions... if this really is a client library and you have no control over and little idea about who is going to use it.
It really does depend on who your users are likely to be, which in turn depends on what the REST service is. If it's some sort of social media thing, then I'd say it's more likely that your clients will be in environments where they can use .NET 4. If it's something which might well be used by financial institutions or other big businesses, they may well not have the option of using .NET 4, so you should consider earlier versions. This is the approach we've taken for Noda Time where we believe the library will be useful in a wide variety of situations, and we can't predict the client requirements.
Of course, if you know all your clients and know they will all be able to use .NET 4, then go with that.
The big downsides of sticking to .NET 2.0 are that you won't be able to use LINQ internally (unless you use LINQBridge or something similar, which adds another dependency for your library) and you won't be able to (cleanly) provide extension methods. If you can usefully expose more features to the client if you use a later version, you may want to provide multiple versions of the library - but obviously that's a maintenance headache.
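As an aside, the "not cleanly" route for extension methods does exist: when compiling with the C# 3 (or later) compiler while targeting .NET 2.0, you can declare the attribute the compiler looks for yourself. A sketch (the StringExtensions helper is invented; this is exactly the kind of extra hackery alluded to above):

```csharp
// The C# 3+ compiler only checks for an attribute with this exact name and
// namespace; it doesn't care which assembly defines it. Declaring it
// ourselves enables "this" extension-method parameters in an assembly that
// targets .NET 2.0. It's kept internal so it doesn't clash with the real
// one in System.Core when consumers target .NET 3.5+.
namespace System.Runtime.CompilerServices
{
    [System.AttributeUsage(System.AttributeTargets.Assembly
        | System.AttributeTargets.Class
        | System.AttributeTargets.Method)]
    internal sealed class ExtensionAttribute : System.Attribute { }
}

// Example extension method now usable inside this .NET 2.0 assembly.
internal static class StringExtensions
{
    public static bool IsEmpty(this string value)
    {
        return value == null || value.Length == 0;
    }
}
```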
Another consideration is whether you ought to provide a Silverlight version - which again depends on what sort of service you're providing and what sort of users you're expecting.
If you are making a REST service you should probably use 4.0.
The only time you need to consider using a legacy version is if another project will reference your compiled DLL. The REST service is exposed using HTTP over the internet, and the client will not use the DLL directly. Or did I understand the question wrong?
It's almost always a good idea to use the latest version, because MS provides a lot of bug fixes and innovations in each release.
If there is a constraint in your system that requires 2.0, I'm afraid you need to use that one, because you need to "make the stuff work".
For an approximate version distribution, you can look at this SO answer (but it only goes up to version 3.5).
Unless you are creating your library to fit into an existing legacy environment, you should always use the most up-to-date release.
If I understand you correctly, you're looking to create a .NET-based client library to work with some REST service(s), also made by you.
Perhaps you want to provide a client library which can be consumed by 2.0, 3.5 and 4.0 applications. This is absolutely possible, even while using the best features of each framework version.
There may be more approaches, but I'd like to suggest three:
Conditional compilation-based approach. You can implement your classes using a common feature set found in both legacy and newer framework versions, but take advantage of the useful features present in each version. This is possible using conditional compilation and compilation symbols, since you can mark specific code to be compiled depending on the target framework version (check this question: Is it possible to conditionally compile to .NET Framework version?). See the sketch after this list.
Linked-files approach (Visual Studio 2010). You can choose to use a common feature set, keeping in mind that it will be the one found in the oldest version. That is, you create a project which compiles against 2.0, and others for newer versions, adding all compilable files and embedded resources as links in these Visual Studio projects. This produces an assembly for each supported framework version. You can mix the conditional compilation-based approach with this one, giving you a great way of delivering your public assembly for various framework versions in a reliable and easy-to-maintain way. Note that whenever you add a new compiled file or resource to a project, you need to create the corresponding links in your other projects. Check this MSDN article if you want to learn more about linked files: http://msdn.microsoft.com/en-us/library/9f4t9t92.aspx.
Version-specific, optimized assemblies. Maybe the most time-consuming approach. It requires more effort, but if your REST service isn't a giant one, you have room to develop a specific assembly for each framework version and take advantage of the best features and approaches of each of them.
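As promised above, a minimal sketch of the conditional-compilation approach. It assumes you define a NET40 symbol yourself in the .NET 4.0 project (Project Properties > Build > Conditional compilation symbols); the StringGuard helper is invented for illustration:

```csharp
public static class StringGuard
{
    public static bool IsBlank(string value)
    {
#if NET40
        // .NET 4.0 build: use the BCL method added in 4.0.
        return string.IsNullOrWhiteSpace(value);
#else
        // .NET 2.0 build: hand-rolled equivalent of the same check.
        if (value == null) return true;
        for (int i = 0; i < value.Length; i++)
        {
            if (!char.IsWhiteSpace(value[i])) return false;
        }
        return true;
#endif
    }
}
```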
My opinion
In my opinion, I'd take approach #2, because it combines the best of #1 and #3. Once you get used to it, it's easy to maintain (it's all about discipline), and you'll give your client developers a good range of choices.
I'd compromise and use the oldest framework that provides you (the library's author) the most bang for your buck. It's a compromise that lets you develop the fastest while exposing your library to the most users. For me, that usually means 3.5, because I tend to use LINQ extensively.
It should be trivial to provide both 2.0 and 4.0 binaries, as long as you're not using any of the 4.0 specific dlls.
You can also publish your client library source code - .NET binaries are already so easy to decompile that you're not leaking out anything valuable this way.
The generic Func<> and Action<> delegates from later versions of .NET are very appealing, and it's been demonstrated in many places that these can easily be recreated in code targeting .NET 2.0, such as here.
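For context, a minimal sketch of the kind of recreation meant here: declaring the missing delegate types by hand in a 2.0 project. (.NET 2.0 already ships the single-argument Action<T>; the MyLib.Compat namespace is an invented example.)

```csharp
namespace MyLib.Compat
{
    // Hand-rolled equivalents of the delegates added in .NET 3.5.
    public delegate void Action();
    public delegate void Action<T1, T2>(T1 arg1, T2 arg2);
    public delegate TResult Func<TResult>();
    public delegate TResult Func<T, TResult>(T arg);
    public delegate TResult Func<T1, T2, TResult>(T1 arg1, T2 arg2);
}
```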
From the perspective of a library targeting .NET 2.0, however, which may be consumed by applications built against any higher version of .NET, how does this stack up? Is implementing this "compatibility layer" within the library a recipe for conflict (in terms of both the private and public interface), or are there ways to make this work independently of the target framework that the consuming application builds against?
If this is a non-starter, would it be better to either:
A) Define an identical set of parametrized delegates with different names? or..
B) Stick strictly with .NET 2.0 convention, and define new delegate types as I need them?
The plain truth is that Func<> and Action<> are a good idea. They make your code much easier to read and they avoid a shocking amount of messy boilerplate delegate declarations. That's why you want to use them.
So you have this really appealing programming style you want to use, it's a standard technique that is now used almost universally instead of the old way, but you can't use it because you are targeting a legacy version of the framework. What should you do?
You have three choices:
Use the programming style that was in common use before the feature
Add the feature to your own code in spirit but with non-conflicting names
Add the feature to your own code with the "real" names but in your own namespace
Using the old programming style gives up all the benefits that we have come to appreciate from the feature. That's a big sacrifice. But maybe all your co-developers are used to this style of programming.
Using the feature with non-conflicting names seems sensible enough. People will be able to read the code and benefit from the features, but no-one will be confused that they appear to be something that they're not. When you are finally ready to upgrade, you'll have to patch up the names. Luckily Ctrl+R, Ctrl+R makes doing that very easy.
Using the feature with the same names as the standard feature means your code can target an older version but appear to be using newer features. Seems like a win/win. But this could cause confusion and you have to be careful that your types aren't exposed to other unknowing assemblies, possibly causing source level compilation problems. So you have to be careful and be perfectly clear about what is happening. But it can work effectively.
You have to pick whatever approach makes sense in your situation depending on your needs. There is no one right answer for everybody, only trade-offs.
I know there are some nice new features in C# 4.0 but I can't, for the life of me, think of a compelling reason for either upgrading existing projects or for switching to new projects.
I've seen some posts where people have said that if their hosting service didn't provide .NET 4, they'd find another provider, as .NET 4 was pivotal to their direction.
Now my boss is trying to get me to agree to switch all our production environments to C# 4 and to do it now.
So the question is: has anyone either begun using C# 4, or converted a project to it, for a compelling reason? Was there a feature that you just had to have that would make your life so much easier?
There are some cool new features in C# 4.0:
Dynamic member lookup
Covariant and contravariant generic type parameters
Optional ref keyword when using COM
Optional parameters and named arguments
Indexed properties
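A quick illustrative sketch of two of those features, optional parameters with named arguments and dynamic member lookup (the Mailer class is invented for the example):

```csharp
using System;

public class Mailer
{
    // One method with default values replaces a pile of overloads.
    public void Send(string to, string subject = "(no subject)", bool highPriority = false)
    {
        Console.WriteLine("{0} | {1} | priority: {2}", to, subject, highPriority);
    }
}

class Demo
{
    static void Main()
    {
        var mailer = new Mailer();

        // A named argument lets you skip the middle parameter entirely.
        mailer.Send("user@example.com", highPriority: true);

        // Dynamic member lookup: the call is resolved at runtime.
        dynamic value = "hello";
        Console.WriteLine(value.Length); // binds to string.Length at runtime
    }
}
```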
In his release blog post, Scott Guthrie goes into detail about the features of .NET 4 in general. Another great resource is the white paper at http://www.asp.net/learn/whitepapers/aspnet4. However, I doubt you are going to need any of these new features right away. As Scott Hanselman blogged:
there's a lot of stuff that's new and added in .NET 4, but not in that "overwhelming-I-need-to-relearn-everything" way.
Whether or not you should upgrade is therefore dependent on a variety of other factors. Some reasons that spring to mind:
Standardizing your development environment on a single platform: VS2010 over VS2008.
Size of the .NET Framework is substantially reduced
Speed improvements if you are a Visual Studio Tools for Office developer
An open dialogue with your manager seems appropriate to understand his reasoning for the upgrade. I'd argue that "because it's shiny" isn't a compelling reason.
As a reference this dated Stack Overflow question "Why not upgrade to the latest .net framework" provides the inverse to your question.
Quite frankly, System.Collections.Concurrent has made developing multi-threaded applications a breeze.
The new and improved System.Linq.Expressions makes writing dynamically compiled code seem like child's play.
The new named parameters feature means I can have big constructors and not get confused as to what each parameter is. Immutable objects are just that much easier.
Surprisingly not mentioned:
PLINQ
Task Parallel Library
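A quick sketch of those parallel features together, using a thread-safe collection, PLINQ, and the Task Parallel Library (the word-count example is invented for illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        // System.Collections.Concurrent: safe to use from many threads at once.
        var counts = new ConcurrentDictionary<string, int>();
        Parallel.ForEach(new[] { "a", "b", "a" }, word =>
            counts.AddOrUpdate(word, 1, (key, n) => n + 1));

        // PLINQ: parallelize a query just by adding AsParallel().
        int[] squares = Enumerable.Range(1, 1000)
                                  .AsParallel()
                                  .Select(n => n * n)
                                  .ToArray();

        // TPL: start work on the thread pool and wait for the result.
        Task<int> sum = Task.Factory.StartNew(() => squares.Sum());
        Console.WriteLine(sum.Result);
    }
}
```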
Is your question specific to C# 4.0, or .NET 4.0?
In C# 4.0 there are only a couple of really nice new features. Covariance/contravariance is not useful all the time, but when you run into a need for it, it can really save a lot of pain. Optional method parameters can reduce a lot of ugly method overrides, and make certain method calls a lot cleaner. If you're using COM or IronPython or any of a few similar frameworks, the dynamic keyword can also be a real lifesaver.
.NET 4.0 in general has a ton of really interesting features across a variety of frameworks. Foreign Key support in Linq to Entities, for example, is making life a lot easier for us. A lot of people are really excited about POCO support. They also added support for some of the LINQ methods (e.g. Distinct) that were previously missing from the Entity Framework.
So it will really all boil down to which frameworks you're using and how you're using them, and how expensive it will be for you to make the switch.
First, what is compelling to me may mean nothing to you. Having said that, I would upgrade Visual Studio if budget allows. In fact, personally I think there is a huge career risk in staying with a company that doesn't keep your tools up to date. You will fall behind in your knowledge of the field without access to the latest tools.
As for converting all your projects just to convert them, it seems like folly to me. Putting aside all the extra distribution work (and upgrading the machines to have .NET 4), you have to consider the chance that something will go wrong. (And if you are like me, some things must be called from 3rd-party programs using .NET 3.5, making them impossible to convert.)
My first rule would be that nothing is converted unless you are working on it anyway. But I would seriously look to convert anything that could use improvement from either parallel code, or COM interop.
I do have a compelling project that was converted. I had a long-running web method being called. In the version that exists now, I return from the method without knowing the results; instead I gave the user a way to check later. By moving to a parallel foreach loop, this works much better, and I can let the user know if there were any errors.
The same project is also being converted to use RIA Services, which have greatly improved and reduce the amount of my own code.
I upgraded for the same reason everyone else did.
...so I can put it on my resume :)
If you're starting a new project today, it's probably best to start it on 4.0, since down the road you will have to migrate it at some point anyway (assuming it stays around long enough; older versions of .NET will simply stop being supported).
C# 4 implies other things depending on your project: WCF 4, WPF 4, ASP.NET 4, MVC 2, Entity Framework 4, etc. So don't just look at C# as the reason to change; you also have to look at the whole stack. If there's still nothing compelling, then staying where you're at is probably a wise choice.
If you're doing WPF / Silverlight, I would definitely recommend upgrading to Visual Studio 2010 (I know, you can write .NET 4.0 code without an IDE, but that's an edge case if ever there were one).
The multi monitor support is nifty but buggy. I spend a lot of time trying to get windows to refresh.
In terms of language, COM interop (as @Gvs mentioned) is also vastly improved with the dynamic datatype and optional parameters.
UPDATE: Multiple monitor support is pretty rock solid with VS 2010 SP1.
If you can get your boss to pop for the $10,000+ Visual Studio Ultimate Edition, IntelliTrace is a compelling reason to upgrade your environment and justification enough for the investment.
COM integration is much easier with the dynamic datatype and optional parameters.
For me there are two things:
optional arguments -- because I am sick of polluting classes with X versions of the same method (overloading)
dynamic keyword -- because the expressiveness of generics in C# is a joke; this way I can at least "write what I mean" without jumping through hoops, albeit with an execution-speed penalty
The more compact the code (i.e. when you can express the idea without extra "how do I avoid limitation Y of the language" noise), the better: the code is much easier to maintain, and it's harder to make a stupid (or worse) mistake.
There are no compelling stability or security reasons to switch. Shouldn't that be your boss's concern?
I am upgrading a Windows client application that was originally written for .NET 1.1. The previous developer handwrote many solutions that can be done automatically with newer versions of .NET. Since I am relatively fresh to .NET and do not have a complete overview of its features, I am asking here.
What are the most notable classes and syntax features provided in later .NET versions that are likely to replace handwritten code with library features?
Biggest changes off the top of my head:
Use generic collections instead of ArrayList, Hashtable etc.
For .NET 3.5, use LINQ instead of manually filtering/projecting
Use generic delegates instead of having to declare your own all the time
Use anonymous methods instead of creating a one line method used to create a delegate in one place
Use BackgroundWorker for WinForms background tasks
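A small before/after sketch of two of the items above: a generic collection instead of ArrayList, and an anonymous method instead of a separate named one-line handler:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class UpgradeDemo
{
    static void Main()
    {
        // .NET 1.1 style: untyped, needs casts, boxes value types.
        ArrayList oldList = new ArrayList();
        oldList.Add(42);
        int first = (int)oldList[0];

        // .NET 2.0 style: strongly typed, no casts or boxing.
        List<int> newList = new List<int>();
        newList.Add(42);
        int second = newList[0];

        // Anonymous method: no separate named method just to build a delegate.
        newList.ForEach(delegate(int n) { Console.WriteLine(n); });
        Console.WriteLine(first + second);
    }
}
```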
Generics is the most wide-reaching change in my view.
Personally, I would leave any 1.1 code that works fine when compiled with 2.0/3.5. Unless you have the time, anything you rewrite you'll have to test again, and you still may introduce new bugs that your testing can't find.
Things that I'd look to use for future versions though, would be generics and LINQ. Generics with .NET 2, and LINQ with .NET 3.5.
LINQ was a big leap. Might be possible to use that in some places (e.g. XML code). Also, generics may reduce the need for some classes.
Also be aware of, and test for, the possible impact of breaking changes between versions of the framework. A Google search should reveal the top issues.