Edit: I changed most of my question because it was too long, and although my question is a request for facts, it was considered opinion-based. Having said that, please read the comments, where I try to explain why closing this question was wrong, IMHO.
Also: I'd like to apologize for my initial question; I am not a native English speaker and I didn't know the word [blindly] had such a negative tone. I have actually used the word in other questions.
Background:
Consider the following piece of C# code:
for (; /* empty condition */ ;)
{
    // Infinite loop
}
This, among other methods, is considered good practice for writing an infinite loop. But when we try a similar approach with a while loop, it never compiles:
while (/* empty condition */)
{
    // Compiler error
}
At first I thought that this was some sort of bug in my compiler, but then I read the C# language specification and found that this was by design. Why? Because C# is based on C, and C behaves this way.
So now the question is: why does C behave like this? Somebody already asked this question on Stack Overflow. The answer was pretty unsatisfying and came down to this:
It behaves like this because it is described like this in the C language specification.
This answer reminded me of many discussions I had with my parents as a kid: "Why do I have to clean my room?" - "Because we say so." Further answers speculate (i.e. without sources or arguments) that while() is "hacky" and that "using for(;;) made more sense".
My research:
Edit: deleted because it was considered too long. It was basically an effort to figure out why C has this construction.
My question:
After all my research I concluded that the while loop's inability to accept an empty expression is illogical if the for loop can accept one.
But if that is true, then why did the C# language design team copy this behaviour?
You: "C# is based on C, and why would you reinvent the wheel?"
True, but why make the same illogical decisions? If your grandfather jumped off a bridge, would you jump too, just because you are based on him? And isn't the creation of a new language, based on an old one, the ideal opportunity to avoid or fix the illogical pitfalls of the old language?
So to repeat my question:
Why did the C# design team copy this behaviour?
After all my research I can only conclude that the while loop's inability to accept empty expressions is illogical.
A very far-fetched conclusion. IMHO it is the for(;;) loop that is illogical (and not only in this respect).
It is clear that while() { ... } would have been possible, but what exactly is the merit?
As a matter of style I would prefer for(;true;) over for(;;); it has less chance of being misread.
Being able to write a 'for-ever' loop is a minor issue; avoiding typos is much more important.
Readability is the only thing that counts, and you're not making much of a case for while().
And what should happen in this statement?
if() Foo();
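For reference, a minimal C# sketch of the idioms under discussion; the first three loops are equivalent, and only the empty while condition is rejected:

static void Demo()
{
    // All three spell the same infinite loop; break is added so the demo terminates.
    for (;;) { break; }         // omitted condition defaults to true
    for (; true; ) { break; }   // the explicit equivalent
    while (true) { break; }     // the idiomatic C# form

    // while () { break; }      // does not compile: while requires a boolean expression
}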
C11 specification states:
The statement

for ( clause-1 ; expression-2 ; expression-3 ) statement

behaves as follows: The expression expression-2 is the controlling expression that is evaluated before each execution of the loop body. The expression expression-3 is evaluated as a void expression after each execution of the loop body. If clause-1 is a declaration, the scope of any identifiers it declares is the remainder of the declaration and the entire loop, including the other two expressions; it is reached in the order of execution before the first evaluation of the controlling expression. If clause-1 is an expression, it is evaluated as a void expression before the first evaluation of the controlling expression.

Both clause-1 and expression-3 can be omitted. An omitted expression-2 is replaced by a nonzero constant.
which is not the case for the while loop. However, this is C11; ANSI C doesn't really make this clear. Still, I assume C# is based on how C most commonly worked, not on how it was specified to work.
To speculate, I could imagine that the for loop in early C wasn't well defined, so programmers found out that you could write an infinite loop as for (;;). To stay compatible with old programs, the standards never forbade this. So there is really no reason to write it this way; it's just history, I guess.
Related
In C++, a function with a non-void return type without a return statement is allowed. So, the following code will compile:
#include <string>

std::string give_me_a_string()
{
    // No return statement: this compiles, but flowing off the end is undefined behaviour.
}
In C#, however, such a method is not allowed. So, the following code will not compile:
public string GiveMeAString()
{
}
Why is this the case? What was the design rationale in these two languages?
C++ requires code to be "well-behaved" in order to execute in a defined manner, but the language doesn't try to be smarter than the programmer: when a situation arises that could lead to undefined behaviour, the compiler is free to assume that such a situation can never actually happen at runtime, even if that cannot be proved by its static analysis.
Flowing off the end of a function is equivalent to a return with no value; this results in undefined behavior in a value-returning function.
Calling such a function is a legitimate action; only flowing off its end without providing a value is undefined. I'd say there are legitimate (and mostly legacy) reasons for permitting this; for example, you might be calling a function that always throws an exception or performs longjmp (or does so conditionally, but you know it always happens at this call site, and [[noreturn]] only arrived in C++11).
This is a double-edged sword, though. While not having to provide a value in a situation you know cannot happen can help the compiler optimize the code further, you could also omit the value by mistake, akin to reading from an uninitialized variable. There have been lots of mistakes like this in the past, which is why modern compilers warn you about it and sometimes insert runtime guards that make it somewhat manageable.
As an illustration, an overly optimizing compiler could assume that a function that never produces its return value actually never returns, and it could proceed with this reasoning up to the point of emitting an empty main function instead of your code.
C#, on the other hand, has different design principles. It is meant to be compiled to intermediate code, not native code, and thus its rules must comply with the rules of the intermediate code. CIL must be verifiable in order to be executed in some contexts, so a situation like flowing off the end of a method must be detected beforehand.
Another principle of C# is to disallow undefined behaviour in common cases. Since it is also younger than C++, it has the advantage of assuming that computers are efficient enough to support more powerful static analysis than was feasible in the early days of C++. The compiler can afford to detect this situation, and since CIL has to be verifiable, only two options were viable: silently emit code that throws an exception (a sort of assert false), or disallow this completely. Since C# also had the advantage of learning from C++'s lessons, the designers chose the latter.
This still has its drawbacks: there are helper methods that are designed never to return, and there is still no way to represent this statically in the language, so you have to use something like return default; after calling such methods, potentially confusing anyone who reads the code.
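A minimal C# sketch of that drawback (the helper name is made up for illustration):

using System;

static class Guard
{
    // This helper never returns normally: it always throws.
    public static void Fail(string message) =>
        throw new InvalidOperationException(message);
}

static class Example
{
    public static string GiveMeAString()
    {
        Guard.Fail("no value available");
        return default; // never reached, but the compiler cannot prove that,
                        // so every code path must still return a string
    }
}

Since .NET Core 3.0, the [DoesNotReturn] attribute can document such helpers for nullability analysis, but it does not satisfy the compiler's reachability rules, so the dummy return remains necessary.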
Related
I had an argument with my teammate about the following.
We need to parse a character in a string to an int (it is always a digit), and this particular functionality is used in a number of places. So it can be done this way:
var a = int.Parse(str[i].ToString());
The argument was: do we need to create a function for this?
int ToInt(char c) {
    return int.Parse(c.ToString());
}
that can be used:
var a = ToInt(str[i]);
My opinion is that creating such a function is bad: it gives no benefit except typing a couple of characters less (no, since we have autocomplete), while such a practice grows the codebase and makes the code more complicated to read by introducing additional functions. My teammate's reasoning is that it is more convenient to call just one such function, and that there is nothing bad about the practice.
Actually, the question is a more general one: when is it OK (if at all) to wrap a combination of two to four function calls in a new function?
So I would like to hear your opinions on that.
I agree that this is mostly a matter of personal preference. But I would also like to hear some objective factors for defining a convention for such situations in our project.
There are many reasons to create a new sub-routine/method/function. Here is a list of just a few.
When the subroutine is called more than once.
If it makes your code easier to read/understand.
Personal preference.
Actually, the design can be done in many ways, of course, and depends on the actual design of the whole software: readability, ease of refactoring, and encapsulation. These things have to be weighed on each occasion on its own merits.
But in this specific case, I think it's better to keep it without a function and use it as in the first example, for several reasons:
It's actually one line of code.
The performance overhead of calling a function will far outweigh the benefit you get from making it.
The compiler will probably inline it back into the one-line call anyway, though that's not always the case.
The main benefit of making it a function is if you want to add error checking, TryParse, etc., as in the sketch below.
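A hedged sketch of such a wrapper with error checking added (the range check also replaces the ToString()/Parse round trip with character arithmetic; this variant only accepts the ASCII digits '0' through '9'):

static int ToInt(char c)
{
    if (c < '0' || c > '9')
        throw new ArgumentOutOfRangeException(nameof(c), $"'{c}' is not a decimal digit.");
    return c - '0'; // the digit's numeric value, without allocating an intermediate string
}

// Usage, as in the question:
// var a = ToInt(str[i]);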
Related
Which is best?
private long sumVals()
{
    return (dbReturn("NUns") / dbReturn("TSpd")) * 60;
}

private long dbReturn(string dbField)
{
    // ... access db, get value
    return retVal;
}
or
private long sumVals()
{
    long numUnits = dbReturn("NUns");
    long targetSpeed = dbReturn("TSpd");
    return (numUnits / targetSpeed) * 60;
}

private long dbReturn(string dbField)
{
    // ... access db, get value
    return retVal;
}
Is it better to try to put it all on one line, so there is less code overall, or to spread it out as in the second version?
Is one or the other quicker? Is there a benefit, e.g., while compiling?
Your case is simple, so the first one is OK. But in general, I would go for the second one.
It is important that you (and others) can read the code; you don't need to save memory (fewer lines of code as well as fewer variables).
Your code will be easier to understand and debug if you write it the second way. You also don't need a lot of comments if your variable names explain the code well enough, which makes your code easier to read in general. (I am not telling you to stop commenting, but to write code that does not need trivial comments!)
See this question for more answers.
My rule of thumb is to include enough content to fully describe what the intent of the code is, and no more. In my opinion, assigning values to variables only to use those variables immediately is actually less readable. It communicates the flow of the program well enough, but doesn't communicate the actual intent.
If you renamed the function from dbReturn to GetDatabaseValue then I don't think I can come up with a more expressive way to write this function than:
return (GetDatabaseValue("NUns") / GetDatabaseValue("TSpd")) * 60;
This communicates the intent perfectly (notwithstanding the fact that I don't know what "NUns" and "TSpd" mean). Fewer symbols means fewer things to understand when reading the code.
Full disclosure: Including extra symbols does improve debuggability. I write this way when I am first building a function so that I can track down where things go wrong. But, when I am satisfied with the implementation, I compress it down as much as possible for my and my co-workers' sanity.
As far as I can tell, there would be no run-time performance gain achieved by either approach. Compilers are awesome - they do this inlining without your knowledge. The only difference is in the code's readability.
To me, longer is always better. Modern compilers will shrink most code to be very fast. However, being able to maintain code through lots of comments and easy-to-read code is hugely important... especially if you are one of those people who has to maintain someone else's code!
So my vote is for the longer version (with a comment explaining what you are doing, too!).
Related
Possible Duplicate:
Why is String.Concat not optimized to StringBuilder.Append?
One day I was ranting about a particular Telerik control to a friend of mine. I told him that it took several seconds to generate a control tree, and after profiling I found out that it was using string concatenation in a loop instead of a StringBuilder. After rewriting, it worked almost instantaneously.
So my friend heard that and seemed surprised that the C# compiler doesn't do that conversion automatically the way the Java compiler does. Reading many of Eric Lippert's answers, I realize that this feature didn't make it in because it wasn't deemed worthwhile. But if, hypothetically, the cost of implementing it were small, what rationale would stop one from doing it?
But if, hypothetically, costs were small to implement it, what rationale would stop one from doing it?
It sounds like you're proposing a bit of a tautology: if there is no reason to not do X, then is there a reason to not do X? No.
I see little value in knowing the answers to hypothetical, counterfactual questions. Perhaps a better question to ask would be a question about the real world:
Are there programming languages that use this optimization?
Yes. In JScript.NET, we detect string concatenations in loops and the compiler turns them into calls to a string builder.
That might then be followed up with:
What are some of the differences between JScript .NET and C# that justify the optimization in the one language but not in the other?
A core assumption of JScript.NET is that its programmers are mostly going to be JavaScript programmers, and many of them will have already built libraries that must run in any implementation of ECMAScript. Those programmers might not know the .NET framework well, and even if they do, they might not be able to use StringBuilder without making their library code non-portable. It is also reasonable to assume that JavaScript programmers may be either novice programmers, or programmers who came to programming via their line of business rather than a course of study in computer science.
C# programmers are far more likely to know the .NET framework well, to write libraries that work with the framework, and to be experienced programmers who understand why looped string concatenation is O(n²) in the naive implementation. They need this optimization generated by the compiler less because they can just do it themselves if they deem it necessary.
In short: compiler features are about spending our budget to add value for the customer; you get more "bang for buck" adding the feature to JScript.NET than you do adding it to C#.
The C# compiler does better than that.
a + b + c is compiled to String.Concat(a, b, c), which is faster than StringBuilder.
"a" + "b" is compiled directly to "ab" (useful for multi-line literals).
The only place to use StringBuilder is when concatenating repetitively inside a loop; the compiler cannot easily optimize that.
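To make the difference concrete, a small illustrative sketch of both loop variants:

using System.Text;

// O(n²): each += copies everything accumulated so far into a new string
static string Naive(string[] parts)
{
    var result = "";
    foreach (var part in parts)
        result += part;        // compiles to a String.Concat call per iteration
    return result;
}

// Roughly linear: StringBuilder appends into a growable internal buffer
static string Buffered(string[] parts)
{
    var sb = new StringBuilder();
    foreach (var part in parts)
        sb.Append(part);
    return sb.ToString();
}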
Related
I'm looking for a tool (paid or OSS) to convert a mid-sized VB.NET project to a C# project. I've searched Stack Overflow and found a few questions/answers, but most suggest .NET Reflector or online copy/paste single-file tools. Reflector doesn't seem to fit the bill, as it converts an assembly, but we're looking for a wholesale project converter that maintains the project, including file names, comments, etc.
We're fully willing to manually address items that cannot be automatically converted, but would like to start off with a fairly comprehensive converted project.
One recommendation we found is Elegance Technologies' CSharpener for VB.NET - http://www.elegancetech.com/csvb/csvb.aspx. Based on their site, it hasn't been revved since pre-VS 2008.
Recommendations will be appreciated.
SharpDevelop is an open source IDE and it allows you to convert between VB and C#.
Do be aware that there are some things which can be done nicely in VB.NET that cannot be done nicely, if at all, in C# (and vice versa). Two of note:
In VB.NET, declaration initializations (e.g. Dim Foo As Bar = Whatever) in a derived class occur after the base constructor has run, and they can reference the object being constructed. In C#, such initializations occur before the base constructor is run and cannot reference the object under construction (see the sketch after these two points). One could probably move all such initialization into the constructor, but if there are multiple constructors that may require creating redundant code.
In VB.NET, a Catch statement may include a condition (e.g. Catch Ex As FancyException When Ex.SomeProperty = 9). In C#, the only way to achieve a somewhat similar result is to catch an exception and then decide whether it meets the necessary criteria, rethrowing if not; this yields different semantics in a number of ways. Among other things, at the time the When clause is evaluated, Finally blocks which will be tripped by the exception have not yet run, which allows the state of the system to be captured. Further, if break-on-unhandled-exception is set and no When condition is satisfied, the debugger will break at the location where the original exception occurred; if the exception had been caught and rethrown, the debugger would break at the rethrow.
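A minimal C# sketch of the first point (the types are illustrative); the derived class's field initializer runs before the base constructor, and it may not reference this:

using System;

class Base
{
    public Base() { Console.WriteLine("Base constructor"); }
}

class Derived : Base
{
    // Runs BEFORE the Base constructor; a static helper does the printing,
    // because an instance field initializer cannot reference 'this'.
    public readonly string Label = MakeLabel();

    private static string MakeLabel()
    {
        Console.WriteLine("Derived field initializer");
        return "label";
    }
}

// new Derived() prints:
//   Derived field initializer
//   Base constructor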
I would think an IL-to-C# translator might do an okay job of moving initializations to an object's constructors, though that could lead to some annoying repetition. I don't think there's any way for C# code to match the semantics of VB.NET's exception handling, though.
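Worth noting: C# 6 later added exception filters, which do match VB's Catch ... When semantics, including running the filter before the stack unwinds. A sketch mirroring the VB example above (FancyException, SomeProperty, and DoSomething are the hypothetical names from that example):

try
{
    DoSomething();
}
catch (FancyException ex) when (ex.SomeProperty == 9)
{
    // The filter ran before any Finally blocks and before the stack unwound,
    // so system state could be inspected and the debugger could still break
    // at the original throw site.
}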
Two words: a programmer.
If you want it to be as bug-free as possible and to just work, hire a programmer.
A quick Google search turns up http://www.freelancer.com, where you can hire a programmer for a one-off job.
If you're not satisfied with SharpDevelop, TangibleSolutions will provide support with their converters to ensure your happiness.
SharpDevelop is quite good, but at my company we've found VBConversions to provide a much more complete conversion. It's a commercial app, but for the time saved over SharpDevelop it was a no-brainer for us.
As a specific example, one thing we found that SharpDevelop didn't convert correctly was VB indexers, which use parentheses. It seemed unable to distinguish between indexers and method calls, so it didn't convert the indexers to square brackets. VBConversions converted them fine. This alone made it worth the purchase for us.