I am currently testing out NDepend, and it gives me a warning that assemblies should be marked as CLSCompliant.
Our project is all C#, so it is not really needed.
What I am wondering is: are there any negative effects of marking a DLL as CLSCompliant, or should I just disable the warning?
Note: I am not asking what CLSCompliant means; that is covered here: What is the 'CLSCompliant' attribute in .NET?
This is one of those subtle cases... CLS compliance is probably of most importance to library authors, who can't control who the caller is. In your case, you state "our project is all C#", in which case you are right: it isn't needed. It adds restrictions (for example, on unsigned types) which might (or might not) affect representing your data in the most obvious way.
So: if it adds no value to you whatsoever, then frankly: turn that rule off. If you can add it for free (no code changes except the attributes), then maybe OK - but everything is a balance - effort vs result. If there is no gain here, don't invest time.
If you are a library author (commercial or OSS), then you should follow it.
There are several C# features that are not CLS compliant, for example unsigned types. And since several .NET languages are case insensitive, there must be no types or members that differ only by case, e.g. MyObject and myObject; there are several other such restrictions as well. Thus, if you don't plan to work with other .NET languages, there is no reason to mark your code as CLSCompliant.
The only negative effects would be compiler warnings when your code is marked as CLSCompliant but actually is not compliant.
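For concreteness, here is a minimal sketch (the type and member names are mine, and the warning numbers are quoted from memory) of what the compiler flags once the assembly is marked compliant:

using System;

[assembly: CLSCompliant(true)]

public class Telemetry
{
    // Publicly exposed unsigned types are not CLS-compliant.
    public uint RawFlags;       // warning CS3003: type is not CLS-compliant

    // Public members that differ only by case are not CLS-compliant either.
    public int packetCount;
    public int PacketCount;     // warning CS3005: identifier differing only in case
}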
Related
What is the difference between C# and F# assemblies? Some flag, maybe? I want to determine it using the reflection API only.
There's no single value to check that would tell you what you need, but there's a good amount of circumstantial evidence that you could look at - ILSpy is your friend if you want to explore it.
I would suggest you check for the presence of these two indicators; either of them being present would mean you're likely looking at an F# assembly, unless someone is really dedicated to messing things up for you.
FSharpInterfaceDataVersionAttribute on the assembly. This was my initial suggestion; however, there are compiler flags that, when set, would prevent this attribute from being emitted: --standalone and --nointerfacedata. I find it highly doubtful either of them would be commonly used in the field, but the fact remains there are openly available ways of opting out of the attribute being emitted right now.
asm.GetCustomAttribute(typeof(FSharpInterfaceDataVersionAttribute))
Presence of StartupCode types. They're an artifact of how the F# compiler compiles certain constructs, and it seems they're present even when empty, so they should be highly reliable.
asm.GetTypes().Where(x => x.FullName.StartsWith("<StartupCode$"))
In particular, looking for a reference to FSharp.Core is not a great idea, as it would commonly be referenced from C# projects as well if you're working with mixed solutions (and there's nothing stopping anyone from just getting it off NuGet).
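Putting the two checks together, a minimal C# sketch (the class and method names are mine; the attribute is matched by its full name so the inspecting project doesn't need to reference FSharp.Core itself):

using System;
using System.Linq;
using System.Reflection;

static class FSharpDetector
{
    // Heuristic: either indicator suggests an F#-compiled assembly.
    public static bool LooksLikeFSharp(Assembly asm)
    {
        // Indicator 1: FSharpInterfaceDataVersionAttribute on the assembly.
        bool hasInterfaceData = asm.GetCustomAttributesData().Any(a =>
            a.AttributeType.FullName ==
            "Microsoft.FSharp.Core.FSharpInterfaceDataVersionAttribute");

        // Indicator 2: the compiler-generated <StartupCode$...> types.
        bool hasStartupCode = asm.GetTypes().Any(t =>
            t.FullName != null &&
            t.FullName.StartsWith("<StartupCode$", StringComparison.Ordinal));

        return hasInterfaceData || hasStartupCode;
    }
}

Note that Assembly.GetTypes can throw a ReflectionTypeLoadException when some referenced assemblies can't be resolved, so production code may want to catch it and fall back to the types that did load.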
Working within Visual Studio 2015, I have a conditional check to the effect of:
if(String.IsNullOrWhiteSpace(stringToTest))
And I saw an IDE001 quick tip or action suggesting that the "Name can be simplified" with a suggested correction of:
if(string.IsNullOrWhiteSpace(stringToTest))
With the only difference being to use string instead of String.
MSDN examples use an uppercase S with String, and this SO answer clarifies that "string is an alias in C# for System.String. So technically, there is no difference."
And to be clear, my question relies upon the answers within String vs. string, but I have a different question than what is asked there.
Also related is this SO question, although the answers there don't really address the question. That particular question is very similar to mine, however it is marked as a duplicate of the other SO question I noted. And there is a comment by the OP indicating this is brand new behavior only seen in 2015.
My Question
My question is: if the two types are equivalent, and MS examples use the upper-case version, why am I seeing quick actions to use the lower-case version? Was there a change in the .NET 4.6 framework and VS2015 to encourage using the lower-case version? It doesn't seem like I should be seeing that type of tip.
Well, as people smarter than me have noted, there's actually no difference at the compilation level, and like you (and like JohnyL, as you'll see ;), I also thought it was a bug, which led me to my answer:
why am I seeing quick actions to use the lower case version?
Taken from this informative (and funny) bug discussion, these are the main points for this feature:
It doesn't just change the letter case, it replaces the String type name with the string keyword. The fact that the two happen to differ only by case is a coincidence. There are cases where the number of characters is different (Int32 -> int) or the name is completely different (Single -> float).
Lower case names are easier to type.
For people that actually prefer the consistent format of string in the code (it's probably dependent on other languages you code in and their conventions) this feature helps change existing source code to be consistent.
string is also a keyword with a well-defined meaning, while String's meaning may differ depending on context.
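A contrived sketch of that last point (the Sketch namespace and types are mine): string always means System.String, while the identifier String goes through ordinary name lookup and can be shadowed:

namespace Sketch
{
    // A local type that hijacks the identifier "String".
    class String { }

    class Demo
    {
        string keyword = "hi";            // always System.String
        String identifier = new String(); // resolves to Sketch.String here
    }
}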
Was there a change in the .NET 4.6 framework and VS2015 to encourage using the lower case version?
As far as I've read, No.
BTW, you can change this behavior to suit your preference in Tools > Options > Text Editor > C# > Code Style: uncheck "Prefer intrinsic predefined type keyword in member access expressions".
I am only speculating, but it seems to me that the quick tip is intended to help you simplify System.String to string, ignoring the fact that your usings have made it redundant, at least in terms of character-counting.
Call it a bug (albeit an extremely minor one) or at least the IDE getting overzealous. One could argue that this is a valid simplification in a broader sense, particularly if you are to use these short "aliases" consistently in your code. As a C++ developer, I'm not really seeing it, but there you go.
There is no difference for the compiler, but IDE quick fixes are also used for ensuring good styling (e.g. naming conventions). You are programming in C#, so you're expected to use its features (in this case, the built-in type alias).
I think you are using int instead of Int32, right? The same goes for string and String. Although with string there is no real difference in length, technically it is still a similar case.
I have a suspicion that the primary reason for changing System.String to string is that it is regarded as a .NET primitive. And since all primitives have aliases - System.Int32 -> int, System.Char -> char, etc. - for consistency's sake, string is treated the same. Looking through all sorts of other MSDN documentation you'll see the two being used interchangeably; I think that's a simple oversight on their part.
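If you want to convince yourself that the aliases and the framework names are literally the same types, a quick sketch:

using System;

class AliasDemo
{
    static void Main()
    {
        // Each keyword alias denotes exactly the System type it abbreviates.
        Console.WriteLine(typeof(string) == typeof(String)); // True
        Console.WriteLine(typeof(int) == typeof(Int32));     // True
        Console.WriteLine(typeof(float) == typeof(Single));  // True
    }
}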
Whether it's warranted or not, I'm still going to use string over String as the quick tips suggest. Sounds like an example of Grandma's Cooking Secret, but is there a reason to change that behavior in this case?
In the C# language specification a Program is defined as
Program — the input to the compiler.
While an Application is defined as
Application — an assembly that has an entry point.
But, they define
Program instantiation — the execution of an application.
Given the definition of "Program", shouldn't this be...
Application instantiation — the execution of an application.
instead?
As far as I can tell, this term doesn't occur in the Microsoft version of the specification. The annotated ECMA spec has this annotation after "Program":
Programs, assemblies, applications and class libraries
This definition of program differs from common usage. In C#, a program is just the input to the compiler. The output of the compiler is an assembly, which is either an application or a class library.
There aren't any other annotations nearby though. It does seem somewhat odd, which is perhaps why it doesn't appear in the MS spec.
No.
They're definitions, so they can be whatever they want. Your mistake is attempting to find a semantic link in the word program, where there is none. They are, as you've noted, unrelated.
What they're saying is "this is how we use this term"; there's basically nothing wrong with choosing any term, as long as the definitions are consistent. foo, bar, and baz would have been just as correct as program instantiation. As long as the names are internally consistent, and the definitions are correct, the names could be anything. They're just labels.
Someone at Microsoft obviously thought that it was more important for the term program instantiation to reflect its common usage. The term program probably didn't get the same treatment but, again, they're just names. And the names are "atomic": the word program is not at all related to the term program instantiation.
Since they're just labels, the terms can be replaced by anything. One possibility is:
X = the input to the compiler.
Y = an assembly that has an entry point
Z = the execution of Y.
Replacing any of the names with anything else makes no difference in their usage.
If I replace the above definition of Z with a new term XY:
XY = the execution of Y
this still holds. It's just a label; it gets semantic content from the definition, not from its name. XY has no semantic relationship to X, and its relationship to Y is only incidental.
When you read definitions of things, especially technical specifications, it's important to keep this in mind. There's often no best term for something, as there are often multiple common terms for the same thing, and they're often not defined rigorously enough to be meaningful in a precise specification.
There's an entire branch of philosophy dedicated to issues like this, and causing a "conflict" in the sense that you cite is pretty much unavoidable.
The writers of the C# specification made their choice, and as long as it's internally consistent, it's "correct".
Based on the given input, what the compiler outputs is still the program, only in the format of an assembly / application, such that it is:
Program, valid — a C# program constructed according to the syntax rules and diagnosable semantic rules.
I'll take a brave step here and say that we could remove ourselves from such specific context and look at an available definition of usage in English for Program:
(6) A set of coded instructions that enables a machine, especially a computer, to perform a desired sequence of operations.
(7) An instruction sequence in programmed instruction.
What is input to the compiler and what is output can both be labelled by the above.
But actually, I'm going to have to vote to close as this is a question regarding the semantics of the English language.
It seems strange that the flagship language of .NET would include programming constructs that are not CLS-compliant. Why is that?
Example (from here): Two or more public / protected / protected internal members defined with only case difference
public int intA = 0;
public int INTA = 2;
or
public int x = 0;
public void X()
{
}
Even unsigned integers aren't compliant (at least, on the public API), yet they are very important to people who do bit-shifting, in particular when right-shifting (signed and unsigned have different right-shift behaviours).
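To illustrate that shifting point, a small sketch (the values are mine): right-shifting a negative int is an arithmetic shift that preserves the sign bit, while the same bit pattern in a uint is shifted logically with zero fill.

using System;

class ShiftDemo
{
    static void Main()
    {
        int signedValue = -8;             // bit pattern 0xFFFFFFF8
        uint unsignedValue = 0xFFFFFFF8;  // the same bits, unsigned

        Console.WriteLine(signedValue >> 1);   // -4 (arithmetic shift, sign-extended)
        Console.WriteLine(unsignedValue >> 1); // 2147483644 (logical shift, zero-filled)
    }
}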
They are ultimately there to give you the freedom to use them when appropriate. The case-sensitivity one I could get less religious about - despite the convenience of having a field and property differ only by case, I think I could live happily with being forced to use a different name! Especially now we have automatically implemented properties...
This is similar to how it lets you use unsafe - the difference here being that a few unsigned integers aren't going to destabilise the entire runtime, so they don't need quite as strong molly-guards.
You can add:
[assembly: CLSCompliant(true)]
if you like, and the compiler will tell you when you get it wrong.
And finally: most code (by volume) is not consumed as a component. It is written to do a job, and maybe consumed by other code in-house. It is mainly library writers / vendors that need to worry about things like CLS compliance. That is (by the numbers) the minority.
That's not how CLS compliance works. It is something that's your burden. C# doesn't restrain itself to strict compliance; that would make it a language with poor expressivity. Dragging all .NET languages down to the lowest common denominator would have quickly killed the platform as a viable programming environment.
It is up to you to ensure that the publicly visible types in your assembly meet CLS compliance. Making sure that class members don't differ only by case is very simple to do. Let the compiler help you out by using the [assembly: CLSCompliant(true)] attribute, and the compiler will warn you when you slip up.
See http://msdn.microsoft.com/en-us/library/bhc3fa7f.aspx.
CLS is a specification that one opts in to. Quote from above:
When you design your own CLS-compliant components, it is helpful to use a CLS-compliant tool. Writing CLS-compliant components without this support is more difficult because otherwise you might not have access to all the CLS features you want to use.
Some CLS-compliant language compilers, such as the C# or Visual Basic compilers, enable you to specify that you intend your code to be CLS-compliant. These compilers can check for CLS compliance and let you know when your code uses functionality that is not supported by the CLS. The C# and Visual Basic compilers allow you to mark a program element as CLS-compliant, which will cause the compiler to generate a compile-time error if the code is not CLS-compliant. For example, the following code generates a compiler warning.
Example code from above link:
using System;

// Assembly marked as compliant.
[assembly: CLSCompliant(true)]

// Class marked as compliant.
[CLSCompliant(true)]
public class MyCompliantClass {
    // ChangeValue exposes UInt32, which is not in CLS.
    // A compile-time warning results.
    public void ChangeValue(UInt32 value) { }

    public static void Main() {
        int i = 2;
        Console.WriteLine(i);
    }
}
This code generates the following C# warning:
warning CS3001: Argument type 'uint' is not CLS-compliant
My two cents about CLS Compliance
The .NET languages are all evolutions of languages that were in existence at the time .NET was created. The languages were crafted so that you could easily convert the base projects into .NET projects without too much development effort. Due to the vast differences between the languages, there needed to be some sort of convention for them to talk to each other. Take the language examples below:
VB.NET is a language derived from the earlier language VB6. It's supposed to be very similar in style to VB6 and as such takes a lot of the conventions VB6 used. Since VB6 was supposed to be easy to learn/use by non-developers, it has certain characteristics that make it more idiot-proof, dynamic typing and case insensitivity being two of these things.
C#.NET/C++.NET are derivatives of the more programmer-friendly C++. Since they're an evolution of that language, they have things in them that C++ would let you do: case sensitivity, static typing, etc.
Now, when faced with two dissimilar languages that they wanted to make interoperable, Microsoft did the only reasonable thing: they made restrictions on how the two languages can interact with each other through the use of what is basically a software contract. This code can be used in only this way because of the differences in the languages.
For example take VB.Net code calling C# code
If the C# code had two functions that differed only in case, X() vs x(), VB.NET would never be able to call this code correctly since it is case insensitive. CLS compliance has to make this illegal. If you look at the other rules, they're basically doing the same thing for other language features across the different languages.
I would guess the case insensitivity rule was only included in CLS compliance so that VB.NET could be CLS compliant. From what I understand, there is no issue if a particular language construct is not CLS compliant unless you are using it in such a way that the non-compliant pieces are available in the public API of your code.
The hint from Microsoft would seem to be that CLS compliance is only important in code that you are accessing from different languages (such as referencing a C# assembly from a VB.NET project).
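A minimal sketch of that public-API distinction (the names are mine, and the warning number is quoted from memory):

using System;

[assembly: CLSCompliant(true)]

public class Counter
{
    private uint count;             // fine: not part of the public API

    public uint GetRaw() => count;  // warning CS3002: return type is not CLS-compliant

    public long GetSafe() => count; // fine: exposed through a compliant type
}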
I think Microsoft wanted to give developers freedom. No constraints if not necessary. C# is not restricted by CLS because not everyone needs interoperability with VB.
If there was universal agreement about what features should be included in a programming language, the world would only need one programming language (which would include precisely those features everyone agreed should be there). Of course, in reality some people will regard as important features which others don't care about (or even find distasteful). The CLS standard essentially does three things:
It says that certain features are so important that any language which does not include them should be considered inadequate for general-purpose .NET programming.
It says that programmers whose libraries don't require any features other than those listed should expect that their libraries will be usable by programmers using any language that is suitable for general-purpose .NET programming.
It informs programmers that if parts of their libraries would require the use of features which languages are not required to support, those parts might not be usable in languages which are suitable for .NET programming but don't include the required features.
When languages like VB.NET or C# allow the creation of non-CLS-compliant code, what that means is that Microsoft decided that certain features were useful enough to justify inclusion in those languages, but not so wonderful or noncontroversial as to justify mandating that all languages include them.
Why doesn't any programming language load the default libraries like stdio.h, iostream.h, or using System, so that declaring them can be avoided?
As these namespaces/libraries are required in any program, why do compilers expect them to be declared by the user?
Do any programs exist that don't use namespaces/headers? Even if yes, what's wrong with loading harmless default libraries?
I don't mean that I'm too lazy to write a line of code, but it makes little sense (to me) for a compiler to cry for declarations of these so-called defaults, ending up in a compilation error.
It's because there are programs which are written without the standard libraries. For example, there are plenty of C programs running on embedded systems that don't provide stdio.h, since it doesn't make any sense on those platforms (in C, such environments are referred to as "freestanding", as opposed to the more usual "hosted").
The “default” libraries are not “required in any program”, and indeed there are many cases where they are not even available (operating system kernel/drivers, microcontrollers, etc). And more in the mainstream, many high-level graphical programs use system-specific GUI/graphics libraries instead of standard I/O.
For stdio.h/iostream(.h): the quick answer is that in the biggest part of your software, they are not needed (definitely not both). Headless devices/servers should have a logging module instead, and GUIs don't always have a console to interface with.
Many languages (especially scripting languages, and languages that carry a standard runtime as part of the language spec) do do this.
The trade-off is convenience versus software-engineering goodness. The problem with opening namespaces by default is you end up with a lot of names being available immediately at the top level, which can cause name clashes and confusion, pollute Intellisense/autocompletion lists, etc.
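A contrived C# sketch of that clash risk (the MyApp names are mine): if System were opened automatically in every file, any type you define that reuses a framework name would silently shadow it project-wide.

using System;

namespace MyApp
{
    // A user-defined type that happens to reuse a framework name.
    static class Console
    {
        public static void WriteLine(string s) { /* custom sink */ }
    }

    class Program
    {
        static void Main()
        {
            // Name lookup prefers the enclosing namespace over the using
            // directive, so this calls MyApp.Console, not System.Console.
            // Implicit global imports would invite this surprise everywhere.
            Console.WriteLine("hello");
        }
    }
}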
To follow up on caf's answer.
You need to tell the compiler about these headers/libraries so you do not have to include anything that you do not want, because they are not needed in every program. Any programmer is able to write a library in C or C++ that does not depend on any runtime libraries. This ability makes it possible to write software that is as lean as possible and to save memory/disk space/compile time/link time (pick what you need most). In low-level languages you should only pay for what you need, nothing more.
There is also the name collision problem. As language standards develop, they provide more and more features, introducing more and more names, and the probability rises that a system name clashes with a name defined in a user program. To avoid this, new features are defined in modules that are not included unless the program uses them. Since system libraries use many common words as their symbols (like open or restricted), the problem is serious.
But explicit module inclusion is not the only method of avoiding collisions. Others are: using a "standard" namespace for system names (e.g. C++'s namespace std), reserving names and name patterns (e.g. C's double underscores), and allowing redefinition (e.g. Forth).
Because libraries are an external component of the language. If one day a library (or part of it) changes its headers or namespaces, the language elements don't change with them. The compiler checks only the syntax and rules of the programming language.