Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I have the following if condition, where param.days is a string:
if (param.days != null)
This works fine, but if I say
if (param.days)
then it does not evaluate correctly at runtime; the two statements are not the same in C#.
It does say that the value is null, but then C# tries to cast it to a bool, which is non-nullable.
Why did the C# designers choose to do it this way?
Such a statement is valid in C++, but why is this not considered valid in C#?
Such a statement is valid in C++, but why is this not considered valid in C#
Because C# uses different language rules. It does not assume that every number or reference can be treated as a boolean by checking whether it is zero vs non-zero, or null vs non-null. If you want to test whether something is null: test whether it is null.
Note: if days is actually a T? (aka Nullable<T>), then you can check:
if(param.days.HasValue)
which is then identical to if(param.days != null)
Alternatively, if your type can sensibly be treated as a boolean, then there are operators you can override to tell the compiler that.
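For example, a minimal sketch (the Days type here is hypothetical, not from the question) of the operators that let the compiler accept if (someDays):
public class Days
{
    public int? Count { get; set; }

    // With true/false operators defined, "if (someDays)" compiles and
    // evaluates operator true on the instance (or on null).
    public static bool operator true(Days d) { return d != null && d.Count.HasValue; }
    public static bool operator false(Days d) { return d == null || !d.Count.HasValue; }
}

// Usage sketch:
Days someDays = new Days { Count = 3 };
if (someDays) { /* runs, because Count has a value */ }
An implicit conversion operator to bool would achieve a similar effect.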
C#, unlike C++, does not implicitly cast an integer to bool.
To clarify, this is answering the question amendment in the comments: why did the C# designers choose not to implement null to boolean evaluation whereas C++ allows it.
Taken from Eric Lippert's post "null is not false":
Some languages allow null values of value types or reference types, or both, to be implicitly treated as Booleans. And similarly for nullable value types; in some languages a null value type is implicitly treated as "false".
The designers of C# considered those features and rejected them. First, because treating references or nullable value types as Booleans is a confusing idiom and a potential rich source of bugs. And second, because semantically it seems presumptuous to automatically translate null -- which should mean "this value is missing" or "this value is unknown" -- to "this value is logically false".
This particular passage covers your string example, but says nothing about other types having implicit boolean evaluation.
However, one might surmise that the reason items such as integers do not evaluate to a boolean also falls under the same banner of being a poor idiom or too presumptuous.
The condition in the if statement needs to evaluate to a boolean result, and param.days is not a boolean. You need to compare the value to null to get a boolean result; C# is type safe.
In C#, the if statement requires the contents of the brackets to be a boolean expression.
Consider if ("Hello World").
Is "Hello World" true or false? It's neither, it's a string.
You may want to consider a LINQ expression such as .Any(), for example if (myListOfCats.Any()), as your days property name implies a collection of objects.
The condition in the if statement must evaluate to a boolean result. param.days is a string, not a boolean, and C# does not implicitly cast a string to bool.
You need to compare the value to null, or use string.IsNullOrEmpty(), to get a boolean result.
If you want to do that, try this code:
if (!string.IsNullOrEmpty(param.days))
{
}
OR
if (param.days != null)
{
}
Related
According to a question asked and answered here, not calling ToString on value-type arguments to methods like WriteLine, String.Format, etc. introduces boxing, because those methods accept objects.
Resharper, though, considers the ToString call redundant. I, on the other hand, want not only to remove the rule but also to reverse it, so that it suggests adding ToString on value types to remove the unneeded boxing.
How can I do that?
EDIT: Clarification
String.Format accepts objects, not value types, so when calling String.Format without ToString on a value type, boxing occurs in order for the argument to match the expected parameter. This is completely unneeded and reduces (even if only slightly) the performance of the call.
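For illustration, a minimal sketch of the difference being described (the variable is arbitrary):
int total = 42;

// The int argument is passed as object, so it is boxed on the heap.
string withBoxing = string.Format("Total: {0}", total);

// Calling ToString first passes a string, so no boxing occurs.
string withoutBoxing = string.Format("Total: {0}", total.ToString());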
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I have a situation as below:
Some class
public class Numbers
{
}
Some other class
public class Calculations
{
Numbers objNumber = null;
public void abc()
{
objNumbers = new Numbers();
}
}
What is wrong with the statement Numbers objNumber = null;?
I know I could have simply ignored the null assignment; just curious why this is wrong.
It's not wrong, it's just unnecessary: all reference-type fields are null by default, so there is no point in initializing the variable to null when its type is a reference type.
Reference-type fields in C#, when declared, are automatically initialized to null. So there is no difference between
Numbers objNumbers = null;
and
Numbers objNumbers;
Both will work the same. You can check this behavior by debugging the code.
I don't see any issue with what you are doing; just as an additional note, fields are created with their default value, so it makes no difference whether you assign null or leave it out.
I do see a typo, though, which you can easily identify and fix:
there is an extra s in objNumbers; the field is defined as objNumber.
The null will be the default value for the object (a reference type), so it will be null even without the assignment, and there is nothing wrong with that statement.
In short,
Numbers objNumbers = null; and Numbers objNumbers; are the same.
Consider the following example:
Numbers objNumbers = new Numbers();
objNumbers = null;
Here the assignment of null makes sense.
Strictly speaking it is not wrong, as the code compiles (provided you add an s to your field name).
However, there are multiple reasons this code might be unreliable from a perspective of design or correctness.
Your code has a mutable field not initialized to any meaningful value: consider the possibility that the method abc() may not get called before the value stored in objNumbers is accessed, which will result in a null reference exception for any likely use case.
Now consider following the null object pattern and initializing the field to an instance of this null version of the Numbers class.
That code would be much less likely to produce a null reference exception... and it would force you to think about your application's behavior in the case where accessing the field occurs prior to the execution of method abc.
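A minimal sketch of that approach (the NullNumbers type and Total member are illustrative, not from the question):
public class Numbers
{
    public virtual int Total { get { return 0; } }
}

// Null object: a safe, do-nothing stand-in used instead of null.
public class NullNumbers : Numbers
{
    public override int Total { get { return 0; } }
}

public class Calculations
{
    // Initialized to a harmless default instead of null.
    Numbers objNumber = new NullNumbers();

    public void abc()
    {
        objNumber = new Numbers();
    }
}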
For both these reasons, in my opinion, it is a favorable design discipline to avoid initializing your fields/properties to null.
Additional reading on the topic, such as this thread, shows dealing with nulls is often considered a problem in need of a solution. Initializing your code with instances and values instead of null is a start in the right direction.
Don't think about it too much. Even if you don't assign anything, all object references are null before memory is allocated for them.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I've been reading a book on C# and it explains that int, float, double, etc. are "basic" types, meaning that they store their information at the lowest level of the language, while a type such as object puts the information in memory and then the program has to access it there. As a beginner, I don't know exactly what this means, though!
The book however does not explain what the difference is. Why would I use int or string or whatever instead of just object every time, as object is essentially any of these types?
How does it impact the performance of the program?
This is a very broad subject, and I'm afraid your question as currently stated is prone to being put on hold as such. This answer will barely scratch the surface. Try to educate yourself more, and then you'll be able to ask a more specific question.
Why would I use int or string or whatever instead of just object every time, as object is essentially any of these types?
Basically you use the appropriate types to store different types of information in order to execute useful operations on them.
Just as you don't "add" two objects, you can't get the substring of a number.
So:
int foo = 42;
int bar = 21;
int fooBar = foo + bar;
This won't work if you declared the variables as object. You can do an addition because the numeric types have mathematical operators defined on them, such as the + operator.
You can refer to an integer type as an object (or any type really, as in C# everything inherits from object):
object foo = 42;
However, now you won't be able to add this foo to another number; it is said to be a boxed value type.
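A small sketch of what that means in practice:
object foo = 42;           // the int is boxed into an object
// int sum = foo + 1;      // does not compile: object has no + operator
int sum = (int)foo + 1;    // unbox (cast back to int) first, then add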
Where exactly these different types are stored is a different subject altogether, about which a lot has been written already. See for example Why are Value Types created on the Stack and Reference Types created on the Heap?. Also relevant is the difference between value types and reference types, as pointed out in the comments.
C# is a strongly typed language, which means that the compiler checks that the types of the variables and methods that you use are always consistent. This is what prevents you from writing things like this:
void PrintOrder(Order order)
{
...
}
PrintOrder("Hello world");
because it would make no sense.
If you just use object everywhere, the compiler can't check anything. And anyway, it wouldn't let you access the members of the actual type, because it doesn't know that they exist. For instance, this works:
OrderPrinter printer = new OrderPrinter();
printer.PrintOrder(myOrder);
But this doesn't
object printer = new OrderPrinter();
printer.PrintOrder(myOrder);
because there is no PrintOrder method defined in the class Object.
This can seem constraining if you come from a loosely-typed language, but you'll come to appreciate it, because it lets you detect lots of potential errors at compile time, rather than at runtime.
What the book is referring to is basically the difference between value types (int, float, struct, etc) and reference types (string, object, etc).
Value types store their content in memory allocated on the stack, which is efficient, whereas reference types (almost anything that can have the value null) store the address where the data is. Reference types are allocated on the heap, which is less efficient than the stack because there is a cost to allocating and deallocating the memory used to store your data (and it is only deallocated by the garbage collector).
So if you use object every time, it will be slower to allocate the memory and slower to reclaim it.
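As a rough illustration of that distinction (Point and Person are made-up types):
struct Point { public int X, Y; }        // value type: data stored inline
class Person { public string Name; }     // reference type: data lives on the heap

static void Demo()
{
    Point p1 = new Point { X = 1, Y = 2 };
    Point p2 = p1;        // copies the values; p2 is independent of p1
    p2.X = 99;            // p1.X is still 1

    Person a = new Person { Name = "Ann" };
    Person b = a;         // copies the reference; both refer to the same object
    b.Name = "Bob";       // a.Name is now "Bob" too
}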
Documentation
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am trying to figure out the differences between C++ and C# data types. I know C# and Java differ because data types are stored as objects in C#, instead of the core class library providing a wrapper class to represent the data types as Java objects. However, I can't find much on the differences between C# and C++ data types...
The difference you describe is wrong. Java, C#, and C++ all have the primitives as basic built-in types. C and C++, being low-level languages, keep them that way: they are known to the compiler only as primitives.
In Java, there exist thin wrappers, such as java.lang.Integer which is a class containing a single int member variable.
C# can implicitly treat a primitive as an object, and will convert, for example, an int to an object on the fly as required in various situations. The process is called boxing and unboxing, of which the first is implicit and the second is explicit. For further reference see the linked article.
To put it simply, C#'s primitive types like int, bool, short, etc. are organized as structures, in contrast to C++ primitive types, which are not structures.
For example, in C# you can call methods on the int primitive type itself (for example Parse or Equals). This is also true for the bool primitive type.
To go even further, Int32 and int are exactly the same type in C#, just as bool and Boolean are. The keywords int, bool, short, etc. are actually aliases for the structures Int32, Boolean, Int16. You can try it like this:
int a = int.MaxValue;
Int32 b = a;
In the first line we create a variable a whose type is int. The value of a is set to int.MaxValue, which is a constant defined in the type int, or more precisely Int32.
On the second line the value of the variable b becomes the value of the variable a. This confirms that a and b are variables of the same type; otherwise, an error would occur.
On the other hand, in C++ primitive types are not organized as structures, so you can't call any method on the primitive type or on an instance of it. These are also called compiler primitives.
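A small sketch of what "int is a structure" lets you write in C# (none of this compiles in C++):
int parsed = int.Parse("42");                         // static method on the type itself
string text = 123.ToString();                         // instance method on a literal
bool sameType = typeof(int) == typeof(System.Int32);  // true: int is an alias for Int32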
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I have never used nullable types in my C# code. Now I have decided to change my coding practice by introducing nullable types in my code.
What major changes in coding practice should be made when making the transition from normal data types to nullable data types in application programming?
What are the areas that should be taken care of?
What are the points I should always watch out for?
Don't use nullable types because they are a "cool new thing" you have discovered.
Use them where they are applicable and genuinely useful.
There are overheads to using them, and if used incorrectly they will unnecessarily increase the complexity of your code.
You also have to be careful to avoid null dereferences, so they place an additional burden on programmers working with that code. (In some cases this is preferable to the cost of a workaround approach, though!)
Nullable<T> is useful when you need a possible invalid state for a value type, or if the data is being retrieved from a database where the column may contain a null value. It is very common in some old FORTRAN code I am porting to C# for invalid values to be negative or 0, but this is troublesome, especially when the values are used in mathematical functions. It is far more explicit to use Nullable<T> to represent that possible invalid state.
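For example, reading a possibly NULL column with an IDataReader might look like this (the reader variable and column index are assumptions for the sketch):
// Map a database NULL to a C# null instead of a magic value such as -1.
int? score = reader.IsDBNull(0) ? (int?)null : reader.GetInt32(0);

if (score.HasValue)
{
    // Safe to use score.Value in calculations here.
}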
It is worth mentioning that the following two declarations are the same:
Nullable<int> myNullableInt;
int? myNullableInt;
In addition I find the following property useful:
public bool? IsHappy { get; set; }
This allows me to have a tri-state boolean value: yes, no, not answered.
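Consuming that tri-state value is then explicit about all three cases (survey is an assumed instance exposing the property):
if (survey.IsHappy == true)
{
    // answered yes
}
else if (survey.IsHappy == false)
{
    // answered no
}
else
{
    // IsHappy is null: the question was not answered
}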
A couple of other good ideas for using nullable types:
Don't forget the flexibility in the syntax: Nullable<int> is the same as int?.
Check for null (var.HasValue) and convert to the base type (var.Value) before using it as the base type.
They seem suitable as the starting value for some value-type variables.
int? lastCodeReceived;
if (lastCodeReceived.HasValue)
{
// At least one code has been received.
}
I rarely use nullable types. The only place I've used them is when I'm dealing with nullable columns in a database.