I'm a little confused about why type instances are allowed to be created without ever being used, and why the compiler doesn't emit even a warning about it.
public void M()
{
    new int();
    new object();
}
I've never created an instance without assigning it to a variable or calling its members, and if I saw a line like new SomeType(); on its own I would consider it a typo. I understand that technically the .ctor can assign some static fields or do something else it's not supposed to do, but I don't consider that a sufficient argument for not emitting a warning.
Are there any patterns where ignoring an instance is appropriate? What am I missing?
Additional points that are unclear to me:
1. CodeAnalysis gives a warning "CA1806: Do not ignore method results" for object but not for int or any other value type.
2. The compiler doesn't emit IL for ignored structs even without the optimization flag.
Instantiating an object can have side effects in C#.
The constructor could do almost anything, such as creating a database entry, writing a text file, or updating a static property somewhere before going out of scope.
Having said that, it is not good programming style to instantiate an object for the sole purpose of producing a side effect. That is what the CodeAnalysis warning is implying.
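To illustrate, here is a minimal sketch (the AuditEntry type and its counter are hypothetical) of a constructor whose instance is discarded yet still observable:
public class AuditEntry
{
    public static int Count;  // hypothetical shared counter

    public AuditEntry()
    {
        Count++;  // side effect: runs even if the instance is discarded
    }
}

public class Demo
{
    public void M()
    {
        new AuditEntry();  // result ignored, but AuditEntry.Count is now 1
    }
}
Discarding the instance does not discard the constructor's effect, which is why the compiler cannot assume the statement is dead code.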
I understand that technically .ctor can assign some static fields or do something else it's not supposed to do, but I don't consider it a sufficient argument for not emitting a warning
As Eric Lippert said:
My usual response to “why is feature X not implemented?” is that of course all features are unimplemented until someone designs, implements, tests, documents and ships the feature, and no one has yet spent the money to do so. And yes, though I have famously pointed out that even small features can have large costs, this one really is dead easy, obviously correct, easy to test, and easy to document. Cost is always a factor of course, but the costs for this one really are quite small.
http://blogs.msdn.com/b/ericlippert/archive/2009/05/18/foreach-vs-foreach.aspx
In C++, a function with a non-void return type without a return statement is allowed. So, the following code will compile:
std::string give_me_a_string()
{
}
In C#, however, such a method is not allowed. So, the following code will not compile:
public string GiveMeAString()
{
}
Why is this the case? What was the design rationale in these two languages?
C++ requires code to be "well-behaved" in order to be executed in a defined manner, but the language doesn't try to be smarter than the programmer: when a situation arises that could lead to undefined behaviour, the compiler is free to assume that such a situation can never actually happen at runtime, even though it cannot be proved via static analysis.
Flowing off the end of a function is equivalent to a return with no value; this results in undefined behavior in a value-returning function.
Calling such a function is a legitimate action; only flowing off its end without providing a value is undefined. I'd say there are legitimate (and mostly legacy) reasons for permitting this; for example, you might be calling a function that always throws an exception or performs a longjmp (or does so conditionally, but you know it always happens at this call site), and [[noreturn]] only arrived in C++11.
This is a double-edged sword, though. Not having to provide a value in a situation you know cannot happen can be advantageous for further optimization of the code, but you could also omit the value by mistake, akin to reading from an uninitialized variable. There have been many mistakes like this in the past, which is why modern compilers warn you about this, and sometimes also insert guards that make it somewhat manageable at runtime.
As an illustration, an overly aggressive optimizing compiler could assume that a function that never produces its return value actually never returns, and it could proceed with this reasoning up to the point of creating an empty main function in place of your code.
C#, on the other hand, has different design principles. It is meant to be compiled to intermediate code, not native code, and thus its rules about returning values must comply with the rules of the intermediate code. And CIL must be verifiable in order to be executed in some environments, so a situation like flowing off the end of a function must be detected beforehand.
Another principle of C# is disallowing undefined behaviour in common cases. Being younger than C++, it also has the advantage of assuming computers efficient enough to support more powerful static analysis than was feasible in C++'s early days. The compiler can afford to detect this situation, and since CIL has to be verifiable, only two options were viable: silently emit code that throws an exception (a sort of assert false), or disallow the construct completely. Since C# also had the advantage of learning from C++'s lessons, its designers chose the latter.
This still has its drawbacks: there are helper methods that are made to never return, and there is still no way to represent this statically in the language, so you have to use something like return default; after calling such methods, potentially confusing anyone who reads the code.
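A minimal sketch of that workaround (ThrowHelper and its method are hypothetical names):
using System;

public static class ThrowHelper
{
    // Helper that never returns normally; the language cannot express this.
    public static void ThrowInvalidState() =>
        throw new InvalidOperationException("Invalid state.");
}

public static class Example
{
    public static int Next(int state)
    {
        if (state >= 0)
            return state + 1;

        ThrowHelper.ThrowInvalidState();
        // The compiler cannot prove the call above never returns,
        // so a dummy return is required to satisfy its flow analysis.
        return default;
    }
}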
I guess this question is in large part a matter of preference as well as being very situational, but I just came across a pretty long reference path to a GameObject today, and I wondered whether a temp reference wouldn't be better for this situation.
The code:
if (enlargeableButtons[i][j].gameObject.activeSelf && enlargeableButtons[i][j].IsHighlighted())
{
    enlargeableButtons[i][j].gameObject.SetIsHighlighted(true, HoverEffect.EnlargeImage);
}
In a case where the path is this long, with multiple array indexes to evaluate, it would definitely be faster, but because of the extra reference it might also be more expensive to do it like this:
GameObject temp = enlargeableButtons[i][j].gameObject;
if (temp.activeSelf && temp.IsHighlighted())
{
    temp.SetIsHighlighted(true, HoverEffect.EnlargeImage);
}
But by how much, and would it be worth it?
I seriously doubt you will see any performance gain using a direct reference instead of going through the jagged array.
Maybe if this code is running in a very tight loop with lots and lots of iterations, you might get a few milliseconds of difference.
However, from a readability point of view, the second option is much more readable, so I would definitely go with it.
As a rule, you should design your code for clarity, not for performance.
Write code that conveys the algorithm it is implementing in the clearest way possible.
Set performance goals and measure your code's performance against them.
If your code doesn't meet your performance goals, find the bottlenecks and treat them.
Don't waste your time on nano-optimizations when you design the code.
And a personal story to illustrate what I mean:
I once wrote a project where I had a lot of obj.child.grandchild calls. After starting the project, I realized there were going to be so many of these calls that I created a property on the class I was working on referring to that grandchild, and my code suddenly became much nicer.
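Roughly like this (a sketch with hypothetical names mirroring the obj.child.grandchild shape):
public class Grandchild { public string Name = ""; }
public class Child { public Grandchild Grandchild = new Grandchild(); }
public class Obj { public Child Child = new Child(); }

public class Worker
{
    private Obj obj = new Obj();

    // The long path is spelled out exactly once.
    private Grandchild Grandchild => obj.Child.Grandchild;

    public void Rename(string name)
    {
        Grandchild.Name = name;  // instead of obj.Child.Grandchild.Name
    }
}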
Declaring GameObject temp just creates a reference to enlargeableButtons[i][j].gameObject. It is extra overhead, but not much. You won't notice a difference unless you're repeating that thousands of times or more.
As a personal rule, if I just need to reference it once, I don't bother with declaring a variable for it. But if I need to use something like enlargeableButtons[i][j].gameObject multiple times, then I declare a variable.
I had an argument with my teammate about the following.
We need to parse a character in a string to an int (it is always a digit), and this particular functionality is used in a number of places. It can be done this way:
var a = int.Parse(str[i].ToString());
The argument was: do we need to create a function for this?
int ToInt(char c) {
    return int.Parse(c.ToString());
}
that can be used:
var a = ToInt(str[i]);
My opinion is that creating such a function is bad: it gives no benefit except typing a couple of characters less (and not even that, since we have autocomplete), while such a practice grows the codebase and makes the code more complicated to read by introducing additional functions. My teammate's reasoning is that calling just one such function is more convenient and that there is nothing bad in such a practice.
Actually, the question relates to a more general one: when is it OK (if at all) to wrap a combination of two to four function calls in a new function?
So I would like to hear your opinions on that.
I agree that this is mostly a matter of personal preference. But I would also like to hear some objective factors we could use to define a convention for such situations in our project.
There are many reasons to create a new sub-routine/method/function. Here is a list of just a few.
When the subroutine is called more than once.
If it makes your code easier to read/understand.
Personal preference.
Actually, the design can be done in many ways, of course, and depends on the actual design of the whole software: readability, ease of refactoring, and encapsulation. These things are to be considered on a case-by-case basis.
But in this specific case, I think it's better to keep it without a function and use it as in the first example, for several reasons:
It's actually one line of code.
The performance overhead of calling a function will be far greater than any benefit you get from making it.
The compiler will probably inline it back into the one-line call anyway, though that's not always the case.
The main benefit of making it a function would come if you wanted to add error checking, TryParse, etc. inside it.
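As an illustration of that last point, here is a sketch of a wrapper that would actually earn its keep; the extension-method form and the c - '0' arithmetic (which skips the intermediate string entirely) are my assumptions, not part of the original helper:
using System;

public static class CharExtensions
{
    // The wrapper adds value by validating its input
    // instead of merely forwarding to int.Parse.
    public static int ToInt(this char c)
    {
        if (c < '0' || c > '9')
            throw new ArgumentOutOfRangeException(nameof(c), "Expected a decimal digit.");
        return c - '0';  // avoids allocating an intermediate string
    }
}
Call sites then read var a = str[i].ToInt(); and get a clear exception instead of a vague FormatException from int.Parse.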
Here's an example. I saw a "ReadOnlyDictionary" class online and it had the following code:
void ICollection.CopyTo(Array array, int index)
{
    ICollection collection = new List<KeyValuePair<TKey, TValue>>(this._source);
    collection.CopyTo(array, index);
}
For example, should I check array for a null argument, or should I let the CopyTo method do that for me? It just seems a bit redundant, but if best practices say to check everything in your own method, then that's what I want to do. I'm just not sure what "best practices" say to do.
I think it is wise to say that if you plan to do something with array that relies on it NOT being null, then you should check it. But if it is just a pass-through, then I don't see a reason why you should check.
Another thought: if the method gets more complicated in the future, you might still want to check, because someone may modify the code and use array without realizing that it might be null. This is only for maintaining good code, in my opinion.
If somebody else's library or API* is going to complain about my inputs, I don't want to give it those inputs, I want to validate and/or complain first. This is especially important if calls into external APIs are expensive, such as a database or web service call.
You know what inputs the API is going to reject. Don't send those; reject them in your own public API first.
*Note: I consider my own public boundaries to be the same thing. If I have a class Foo that does not like certain arguments, then when I invoke Foo, at some level before doing so, I'm going to validate my arguments. You don't do this at every level (assume there are layers of indirection: private methods calling into private methods, etc.), but at some reasonable public boundary I will validate. Validate early; don't let complicated logic or work be done when it's just going to be rejected anyway.
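For illustration, here is what validating early could look like in the CopyTo method from the question (the guard clauses are a sketch, not part of the original class):
void ICollection.CopyTo(Array array, int index)
{
    // Validate at the public boundary instead of letting
    // List<T>.CopyTo throw from deep inside the call chain.
    if (array == null)
        throw new ArgumentNullException(nameof(array));
    if (index < 0)
        throw new ArgumentOutOfRangeException(nameof(index));

    ICollection collection = new List<KeyValuePair<TKey, TValue>>(this._source);
    collection.CopyTo(array, index);
}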
Is it best practice to place method bodies before or after the point where they are called? I generally place them after; I'm interested in what others are doing.
I prefer after. The reason is that it makes the flow of your code more logical. Code flows from top to bottom anyway, so it's natural that called methods appear after the current method.
This has the added advantage of the entry point of your program/class being at the top, which is where you start looking anyway.
When developing Java, I place the method bodies after they are called. This will typically result in classes that have a small number of public methods at the top, followed by quite a few private methods at the bottom. I think this makes the class easier to read and understand: you just need to read those few public methods at the top to understand what the class does — in many cases you can stop reading once you get to the private methods.
I also note that Java IDEs typically place the method body after the current method when you refactor code. For example in Eclipse, if you select a block of code and click Refactor | Extract Method... it will place that selected code in a new method below the current one.
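As an illustrative sketch (written in C# to match the rest of this page; the class and its members are hypothetical), the resulting shape looks like this:
public class ReportGenerator
{
    // Public entry point first: reading this is often enough
    // to understand what the class does.
    public string Generate()
    {
        var rows = LoadRows();
        return Format(rows);
    }

    // Private helpers below, roughly in the order they are called.
    private string[] LoadRows() => new[] { "a", "b" };

    private string Format(string[] rows) => string.Join(",", rows);
}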
It is entirely a matter of personal preference. For most people, the code navigation facilities of a modern IDE mean that it hardly makes any difference how the methods are ordered.
Method placement is largely irrelevant to me (except, of course, in the case of some static methods that need to be defined before they are invoked):
Code formatters are usually in place (and running automatically; if not for you, turn them on), which results in the source being ordered nicely by method type and then alphabetically, rather than by the method call sequence.
I use a modern IDE, where finding the right method is done in a different way than by sequentially reading through the whole source.