Is C# faster than VB.NET? [closed] - c#

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 12 years ago.
You'd think both are the same.
But maybe it's the compiler Microsoft has used: I've noticed that when compiling two very small programs with identical logic, VB.NET uses more IL instructions.
Is it true, then, that C# must be faster, if only because its compiler is smarter?

This is really hard to respond to definitively with the limited amount of information available. It would help a lot if you provided the code from both samples and the compiler options used.
To answer the question though: no, C# is not inherently faster. Both languages compile to IL and run on the CLR. For most features they even generate the same IL. There are differences for some similar features, but they rarely add up to significant performance changes.
VB can appear slower if you run into some of the subtle differences between the languages and their environments. A couple of common examples:
Many environments default to checked integer operations for VB.NET but not for C#
Subtle coding issues can lead to late binding where it appears to be early binding (see the sketch below)
Believing switch and Select Case have the same semantics
Once these are removed, the languages perform with very similar performance profiles.
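To make the late-binding point concrete in C# terms: the dynamic keyword opts into run-time member resolution, which is roughly what VB late binding (Option Strict Off) costs you. This is a minimal sketch, not a rigorous benchmark; the exact numbers are machine-dependent:

using System;
using System.Diagnostics;

class LateBindingDemo
{
    static void Main()
    {
        const int iterations = 10_000_000;
        int total = 0;

        // Early bound: the Length call is resolved at compile time.
        string s = "hello";
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            total += s.Length;
        sw.Stop();
        Console.WriteLine($"Early bound: {sw.ElapsedMilliseconds} ms");

        // Late bound: the Length call is resolved at run time on every
        // iteration, much like VB late binding with Option Strict Off.
        dynamic d = "hello";
        sw.Restart();
        for (int i = 0; i < iterations; i++)
            total += d.Length;
        sw.Stop();
        Console.WriteLine($"Late bound:  {sw.ElapsedMilliseconds} ms ({total})");
    }
}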

The answer is yes and no. It really depends on what specific feature you are referring to. Likewise, there are areas where VB executes faster. I can give an example of each.
This code in VB...
For i As Integer = 0 To Convert.ToInt32(Math.Pow(10, 8))
Next
...is about 100x faster than this code in C#.
for (int i = 0; i <= Convert.ToInt32(Math.Pow(10, 8)); i++)
{
}
It is not that the VB compiler is better at generating code that executes for loops faster, though. It is that VB computes the loop bound once, while C# evaluates the loop condition on each iteration. It is just a fundamental difference in the way the languages were intended to be used.
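If the C# loop is meant to match the VB one, hoist the bound into a local so it is computed only once; a minimal sketch:

// Compute the expensive bound once, as VB's For...Next does implicitly.
int bound = Convert.ToInt32(Math.Pow(10, 8));
for (int i = 0; i <= bound; i++)
{
}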
This code in C#...
int value = 0;
for (int i = 0; i <= NUM_ITERATIONS; i++)
{
value += 1;
}
...is slightly faster than the equivalent in VB.
Dim value As Integer = 0
For i As Integer = 0 To NUM_ITERATIONS
    value += 1
Next
The reason in this case is that the default behavior for VB is to perform overflow checking, while C# does not.
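If you want the C# loop to behave like VB's default, you can opt into overflow checking with a checked block (or compile with /checked); a sketch of the apples-to-apples comparison:

int value = 0;
for (int i = 0; i <= NUM_ITERATIONS; i++)
{
    // checked forces overflow detection on the addition,
    // matching VB's default integer overflow checks.
    checked
    {
        value += 1;
    }
}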
I am sure there are other differences in the languages that demonstrate similar performance biases. But both languages are built on top of the CLR and both compile to the same IL. So making blanket statements like "Language X is faster than language Y" without adding the important qualifying "in situation Z" clause is simply incorrect.

C# maps more closely to IL than VB.NET does.
VB.NET sometimes does a lot of work behind the scenes. On Error Resume Next, for example, effectively wraps each statement in its own try/catch.
But in general both have the same features and performance.
You can open your compiled assembly in Reflector and view it as C# code, to check whether the generated C# is what you expected.
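To give a feel for the On Error Resume Next point, here is a rough sketch (in C#) of what the compiled code amounts to; DoFirstThing and DoSecondThing are hypothetical placeholder methods:

// VB source:
//     On Error Resume Next
//     DoFirstThing()
//     DoSecondThing()
// compiles to something morally equivalent to:
try { DoFirstThing(); } catch { /* swallow the error and resume */ }
try { DoSecondThing(); } catch { /* swallow the error and resume */ }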

Make sure the programs really are identical. For example, depending on the Option Strict and Option Infer settings, this VB line:
Dim x = "some string"
and this C# line:
string x = "some string";
are actually very different: with Option Infer Off, the VB x is typed as Object, and every member access on it is late bound. To match the C# code, the VB should look like this:
Dim x As String = "some string"

It sounds like the differences are purely in the compilers' interpretation of the source code. A TechRepublic article comes to pretty much the same conclusion:
https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-1027686.html

I haven't done any tests, but I think the speed would be about the same. If anything, choose based on coding style and syntax.

Why does C# also not allow empty conditions in while loops? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 9 years ago.
Edit: I changed most of my question because it was too long, and although my question is a request for facts, it was considered opinion-based. Having said that, please read the comments, where I try to explain why closing this question was wrong IMHO.
Also: I'd like to apologize for my initial question. I am not a native English speaker and I didn't know the word [blindly] had such a negative tone. I actually used the word in other questions.
Background:
Consider the following piece of C# code:
for(; /*empty condition*/ ;)
{
    //Infinite loop
}
This, amongst other methods, is considered good practice for writing an infinite loop. But when we try a similar approach with a while loop, it never compiles:
while(/*empty condition*/)
{
    //Compiler error
}
At first I thought that this was some sort of bug in my compiler, but then I read the C# language specification. I found this was by design. Why? Because C# is based on C, and C behaves this way.
So now the question is, why does C behave like this? Somebody else asked this question on StackOverflow already. The answer was pretty unsatisfying and came down to this:
It behaves like this because it is described like this in the C language specification.
This answer reminded me of many discussions I had with my parents when I was a kid: "Why do I have to clean my room?" - "Because we say so.". Further answers speculate (i.e. no sources or arguments were added) that while() is "hacky" and that "using for(;;) made more sense".
My research
Edit: deleted because it was considered too long. It basically was an effort to figure out why C has this construction.
My question:
After all my research I concluded that the while loop's inability to accept empty expressions is illogical given that the for loop can accept them.
But if that is true, then why did the C# language design team copy this behaviour?
You: "C# is based on C and why would you reinvent the wheel?"
True, but why make the same illogical decisions? If your grandfather jumped off a bridge, would you do it too, just because you are based on him? And isn't the creation of a new language, based on an old one, the ideal opportunity to avoid or fix the illogical pitfalls of the old language?
So to repeat my question:
Why did the C# design team copy this behaviour?
After all my research I can only conclude that the while loop's inability to accept empty expressions is illogical.
A very far-fetched conclusion. IMHO it is the for(;;) loop that is illogical (and not only in this respect).
It is clear that while() { ... } would have been possible but what exactly is the merit?
As a matter of style I would prefer for(;true;) over for(;;), it has less chance of being misread.
Being able to write a 'for-ever' loop is a minor issue, avoiding typos is much more important.
Readability is the only thing that counts, you're not making much of a case for while().
And what should happen in this statement?
if() Foo();
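For comparison, the established C# idiom already expresses a "for-ever" loop with an explicit condition, which is arguably why an empty while() would buy nothing:

// Idiomatic C# infinite loop: the condition is spelled out.
while (true)
{
    // Loop body; leave with break, return or throw.
}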
C11 specification states:
The statement
for ( clause-1 ; expression-2 ; expression-3 ) statement
behaves as follows: The expression expression-2 is the
controlling expression that is evaluated before each execution of the
loop body. The expression expression-3 is evaluated as a void
expression after each execution of the loop body. If clause-1 is a
declaration, the scope of any identifiers it declares is the remainder
of the declaration and the entire loop, including the other two
expressions; it is reached in the order of execution before the first
evaluation of the controlling expression. If clause-1 is an
expression, it is evaluated as a void expression before the first
evaluation of the controlling expression.
Both clause-1 and expression-3 can be omitted. An omitted expression-2 is replaced by a
nonzero constant.
which is not the case for the while loop. However, this is C11; ANSI C doesn't really make this clear. Though, I assume C# is based on how C most commonly work(s|ed), not how it's specified to work.
To speculate, I could imagine that the for loop in early C wasn't well defined, so programmers found out that you can write an infinite loop like for (;;). To be compatible with old programs, the standards never forbade this. So there is really no deeper reason to write it like this; it's just history, I guess.

Why doesn't C# allow an else clause on loops? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I decided to learn some Python (IronPython) syntax today. In doing so, I was impressed by a construct that it allows with its loops.
Python supports an else clause on its loops. An else on a loop basically says, "if this loop finished normally, then enter this clause".
Allow me to demonstrate using C#.
This code:
Something something = SomeCallToSetThisUp();
bool isCompatable = false;
foreach (Widget widget in widgets)
{
    isCompatable = widget.IsCompatableWithSomething(something);
    if (!isCompatable)
        break;
}
if (isCompatable)
    compatableSomethings.Add(something);
could become this code (not valid C#):
Something something = SomeCallToSetThisUp();
foreach (Widget widget in widgets)
{
    if (!widget.IsCompatableWithSomething(something))
        break;
}
else
    compatableSomethings.Add(something);
Having never seen this, it struck me as cool. And once you learn it, it seems as readable as any code I have seen.
While not universally needed (sometimes you want to affect every item in the list), I do think that it would be useful.
So, my question is: Why isn't this in C#?
I have a few ideas why:
break can make debugging harder, so the designers did not want to encourage it.
Not everything that is shiny can make it into the language. (limited scope).
But those are just guesses. I am asking for an actual canonical reason.
The usual answer is that no one asked for it, or that the cost of developing and maintaining it outweighs the benefits.
From Eric Lippert's blog:
I've already linked several times to Eric Gunnerson's great post on
the C# design process. The two most important points in Eric's post
are: (1) this is not a subtractive process; we don't start with C++ or
Java or Haskell and then decide whether to leave some feature of them
out. And (2) just being a good feature is not enough. Features have to
be so compelling that they are worth the enormous dollar costs of
designing, implementing, testing, documenting and shipping the
feature. They have to be worth the cost of complicating the language
and making it more difficult to design other features in the future.
After we finished the last-minute minor redesigns of various parts of
C# 3.0, we made a list of every feature we could think of that could
possibly go into a future version of C#. We spent many, many hours
going through each feature on that list, trying to "bucket" it. Each
feature got put into a unique bucket. The buckets were labelled:
Pri 1: Must have in the next version
Pri 2: Should have in the next version
Pri 3: Nice to have in the next version
Pri 4: Likely requires deep study for many years before we can do it
Pri 5: Bad idea
Obviously we immediately stopped considering the fours and fives in
the context of the next version. We then added up the costs of the
features in the first three buckets, compared them against the design,
implementation, testing and documenting resources we had available.
The costs were massively higher than the resources available, so we
cut everything in bucket 2 and 3, and about half of what was in bucket
1. Turns out that some of those "must haves" were actually "should haves".
Understanding this bucketing process will help when I talk about some
of the features suggested in that long forum topic. Many of the
features suggested were perfectly good, but fell into bucket 3. They
didn't make up the 100 point deficit, they just weren't compelling
enough.
http://blogs.msdn.com/b/ericlippert/archive/2008/10/08/the-future-of-c-part-one.aspx
Additionally, you need to weigh whether the feature will be easily understood by existing and new developers. IMHO, else on a loop is not very readable, especially since the keyword for "execute this block if the previous one finished OK" is finally.
What is more, I think the Enumerable.Any / Enumerable.All methods are much better in these scenarios, as the sketch below shows.
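For instance, the question's own example collapses to a single All call. This sketch reuses the question's hypothetical Widget API and assumes a using System.Linq directive:

// All is true only when every widget passes the check, i.e. exactly
// when the proposed foreach...else loop would have "finished normally".
Something something = SomeCallToSetThisUp();
if (widgets.All(widget => widget.IsCompatableWithSomething(something)))
    compatableSomethings.Add(something);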
Looping through a collection and checking a condition are different things, so they should be separate language constructs.
Because for-else loops are a hack from languages like Python. If you feel like you need a for-else loop, you should probably put that code in a separate function.

Why isn't string concatenation automatically converted to StringBuilder in C#? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Why is String.Concat not optimized to StringBuilder.Append?
One day I was ranting about a particular Telerik control to a friend of mine. I told him that it took several seconds to generate a control tree, and after profiling I found out that it was using string concatenation in a loop instead of a StringBuilder. After rewriting, it worked almost instantaneously.
So my friend heard that and seemed to be surprised that the C# compiler didn't do that conversion automatically like the Java compiler does. Reading many of Eric Lippert's answers I realize that this feature didn't make it because it wasn't deemed worthy enough. But if, hypothetically, costs were small to implement it, what rationale would stop one from doing it?
But if, hypothetically, costs were small to implement it, what rationale would stop one from doing it?
It sounds like you're proposing a bit of a tautology: if there is no reason to not do X, then is there a reason to not do X? No.
I see little value in knowing the answers to hypothetical, counterfactual questions. Perhaps a better question to ask would be a question about the real world:
Are there programming languages that use this optimization?
Yes. In JScript.NET, we detect string concatenations in loops and the compiler turns them into calls to a string builder.
That might then be followed up with:
What are some of the differences between JScript .NET and C# that justify the optimization in the one language but not in the other?
A core assumption of JScript.NET is that its programmers are mostly going to be JavaScript programmers, and many of them will have already built libraries that must run in any implementation of ECMAScript. Those programmers might not know the .NET framework well, and even if they do, they might not be able to use StringBuilder without making their library code non-portable. It is also reasonable to assume that JavaScript programmers may be either novice programmers, or programmers who came to programming via their line of business rather than a course of study in computer science.
C# programmers are far more likely to know the .NET framework well, to write libraries that work with the framework, and to be experienced programmers who understand why looped string concatenation is O(n²) in the naive implementation. They need this optimization generated by the compiler less because they can just do it themselves if they deem it necessary.
In short: compiler features are about spending our budget to add value for the customer; you get more "bang for buck" adding the feature to JScript.NET than you do adding it to C#.
The C# compiler does better than that.
a + b + c is compiled to String.Concat(a, b, c), which is faster than StringBuilder.
"a" + "b" is compiled directly to "ab" (useful for multi-line literals).
The only place to use a StringBuilder is when concatenating repeatedly inside a loop; the compiler cannot easily optimize that.
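To make the loop case concrete, here is the classic contrast; with enough iterations the first version does quadratic total work, while the second stays roughly linear:

// O(n^2): each += copies the entire accumulated string.
string slow = "";
for (int i = 0; i < 10000; i++)
    slow += i.ToString();

// Roughly O(n): StringBuilder appends into a growable buffer.
var sb = new System.Text.StringBuilder();
for (int i = 0; i < 10000; i++)
    sb.Append(i);
string fast = sb.ToString();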

Why doesn't this obvious infinite recursion give a compiler warning? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Many months back, I had to fix up some code that caused some problems. The code looked basically like this:
int badFun() { return badFun(); }
This obviously caused a stack overflow even in the high-level language I was working with (4Test in SilkTest). There's no way this code could be seen as beneficial. The first sign of problems were warnings seen after the script finished, but no compile errors or warnings. Curiously, I tried writing programs in C++, C# and Python with the same structure, and all of them compiled/interpreted with no syntax errors or warnings, even though there were runtime errors in all cases. I didn't even see any warnings in any of these cases. Why isn't this seen as a possible problem by default?
EDIT: I tried writing the equivalent of that function in all three languages, so I added those function tags. I'm more interested in overall reasons why code like this gets through with no warnings. Please retag if necessary.
Here's the deal: compiler warnings are features. Features require effort, and effort is a finite quantity. (It might be measured in dollars or it might be measured in the number of hours someone is willing to give to an open source project, but I assure you, it is finite.)
Therefore we have to budget that effort. Every hour we spend designing, implementing, testing and debugging a feature is an hour we could have spent doing something else. Therefore we are very careful about deciding what features to add.
That's true of all features. Warnings have special additional concerns. A warning has to be about code that has the following characteristics:
Legal. Obviously the code has to be legal; if it is not legal then it's not a warning in the first place, it's an error.
Almost certainly wrong. A warning that warns you about correct, desirable code is a bad warning. (Also, if the code is correct, there should be a way to write the code such that the warning goes away.)
Inobvious. Warnings should tell you about mistakes that are subtle, rather than obvious.
Amenable to analysis. Some warnings are simply impossible; a warning that requires the compiler to solve The Halting Problem for example, is not going to happen since that is impossible.
Unlikely to be caught by other forms of testing.
In your specific example, we see that some of these conditions are met. The code is legal, and almost certainly wrong. But is it inobvious? Someone can easily look at the code and see that it is an infinite recursion; the warning does not help much. Is it amenable to analysis? The trivial example you give is, but the general problem of finding unbounded recursions is equivalent to solving the halting problem. Is it unlikely to be caught by other forms of testing? No. The moment you run that code in your test case, you're going to get an exception telling you precisely what is wrong.
Thus, it is not worth our while to make that warning. There are better ways we could be spending that budget.
Why isn't this seen as a problem by default?
The error is a run time error, not a compile time error. The code is perfectly valid, it just does something stupid. The very simple case that you show could certainly be detected, but many cases that would be only slightly more complicated would be difficult to detect:
void evil() {
    if (somethingThatTurnsOutToAlwaysBeTrue)
        evil();
}
In order to determine whether that's a problem, the compiler has to try to figure out whether the condition will always be true or not. In the general case, I don't think this is any more computable than determining whether the program will eventually stop (i.e. it's provably not computable).
No compiler of any programming language has any sort of idea about the semantics of the code it compiles. This is valid code, though stupid, so it will be compiled.
How is the compiler or interpreter supposed to know what the function is doing? The scope of the compiler and interpreter is to compile or interpret the syntactic code, not to interpret the semantics of your code.
Even if a compiler did check for this, where do you draw the line? What if you had a recursive function that calculated factorial forever?
Because the compiler does not check for this kind of thing.
If you install a code analyzer like ReSharper in Visual Studio, it will warn about infinite recursive calls (or something like that), provided you have enabled the code analysis option.
I doubt the compiler can detect a run-time phenomenon (stack overflow) at compile time. There are many valid cases of calling a function inside itself: recursion. But how can the compiler tell the good cases of recursion from the bad?
Unless it has some added AI, I don't think a compiler could detect the difference between good and bad recursion; that's the job of a programmer.
As you have mentioned, the compiler just checks for syntactic errors.
The recursive function is perfectly valid, without any error.
At runtime, when the stack is overflown, it throws an error because of the stack overflow, not because of the code.
A recursive function is perfectly valid, but in the implementation we need to put in a condition check that returns a value before the stack is filled.
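As a minimal illustration of that condition check, compare the question's badFun with a recursion that has a base case:

// Unbounded: every call recurses again, so the stack eventually overflows.
static int BadFun() => BadFun();

// Bounded: the n <= 1 check returns before the stack can fill up.
static int Factorial(int n) => n <= 1 ? 1 : n * Factorial(n - 1);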

Who Writes Microsoft Support Articles? Can They Always Be Trusted? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Here is an example of the type of article I'm talking about:
http://support.microsoft.com/kb/319401
I assume these articles are written by people who work for Microsoft, and that the code in the articles will always be rock solid and never contain any malicious code. I just want to make sure I can explain to my boss that this is an OK place to copy code from (I've been told never to copy code from the internet, but this seems like a safe source).
I would trust them not to be malicious, but they're not always good code. (MSDN samples are sometimes pretty awful.)
For example, here's some code in the sample you gave:
compareResult = ObjectCompare.Compare(
    listviewX.SubItems[ColumnToSort].Text,
    listviewY.SubItems[ColumnToSort].Text);

// Calculate correct return value based on object comparison
if (OrderOfSort == SortOrder.Ascending)
{
    // Ascending sort is selected, return normal result of compare operation
    return compareResult;
}
else if (OrderOfSort == SortOrder.Descending)
{
    // Descending sort is selected, return negative result of compare operation
    return (-compareResult);
}
else
{
    // Return '0' to indicate they are equal
    return 0;
}
Now, there are two issues here:
Why is it deemed valid to have a comparer with no sort order? This should be a constructor parameter, validated at the point of construction IMO.
You should not just negate the result of one comparison to perform a "reverse comparison". That breaks if the result of the first comparison is int.MinValue - because -int.MinValue == int.MinValue. It's better to reverse the arguments used to perform the original comparison.
There are other things I'd take issue with in this code, but these two should be enough to make my point.
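To sketch the reversed-arguments fix described above (reusing the KB sample's names; the surrounding comparer shape is assumed):

// A genuine reverse comparison: swap the arguments instead of negating.
// -Compare(x, y) breaks when Compare returns int.MinValue; Compare(y, x) doesn't.
if (OrderOfSort == SortOrder.Descending)
{
    return ObjectCompare.Compare(
        listviewY.SubItems[ColumnToSort].Text,
        listviewX.SubItems[ColumnToSort].Text);
}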
I heartily agree with the other answers too, in terms of:
- Check the copyright / licence etc of any code you want to use
- Make sure you understand anything you want to use
Your boss probably wouldn't mind if you only copied the code into a test project that you use to test and understand the code. You can then use what you've learned to write the production code.
And while I don't think anyone outside of Microsoft knows the names of the people who write those support articles, they come from the same vendor as your toolchain, so if you don't trust the support articles, then you can't trust the tools you've bought either.
Microsoft Knowledgebase articles show safe (as in non-malicious but not necessarily secure) code, but usually the example provides the most basic use case possible. There's a good chance that you'll have to tweak the code a bit for it to work the way you want.
You should also pay attention to the date of the articles. For example, the article you link to is almost three years old. There's definitely a better way to handle that situation now.
Be aware that most code in articles is there to help you understand the concepts. It is not "production ready". Learn the concepts instead and implement your own.
Have you been told not to copy code from the internet because of rights issues? If so, then you don't have to worry about this Microsoft code.
I would advise you not to use any code you don't understand. If you can't say if the code is malicious or not don't use it.
MSDN and KB support articles are written by MS employees who are part of the given product's UX (user experience) team. These are people who typically have a background in technical writing but are not necessarily developers themselves (although some are). It's very common for the UX team to collaborate with developers on the product to ensure their code samples are correct. However, in my experience this collaboration is one of the lowest priorities a typical MS developer has and can go ignored, so it can at times lead to poor code getting out.
With that said, I completely agree with Carl Norum's comment. Copying code you do not understand is done at your own risk. Make sure you understand any code you place in your product!
I've always found the Microsoft articles to be of the highest quality (sadly not their products).
However, there's always the danger of a spoofing site.
Explain that you carefully read the article to understand the information in there, and only copy code that you understand.
If you don't understand the code, then even if the code is correct it may not be doing what you actually need done, thus your program will be incorrect.
You also will have a hard time debugging and maintaining code if there are parts that you don't understand.
