Can a String[] hold System.Object inside it? - c#

Does the question seem strange? Yes, and what happened is also strange. Let me explain.
I found a snippet in this question: Covariance and Contravariance with C# Arrays
string[] strings = new string[1];
object[] objects = strings;
objects[0] = new object();
Jon Skeet explains that the above code will throw an ArrayTypeMismatchException, and indeed it does.
What I did: I put a breakpoint on line 3 and, using a DebuggerVisualizer, manually set objects[0] = new object(). It doesn't throw any error and it works. Later, checking strings[0].GetType() returns System.Object. And not only System.Object; any type can be put into the string[] by the procedure above.
I have no idea how this happened. I raised this as a comment on the very question where I saw the snippet, but got no answers.
I'm curious to know what is happening behind the scenes. Can anybody explain?
Edit 1: This is even more interesting.
After reproducing the above behaviour, try this:
int len = strings[0].Length;
If you hover over the Length property, it says strings[0].Length threw an ArgumentException with the message "Cannot find the method on the object instance", but it doesn't actually throw an exception; the code runs and yields len = 0.

Your example seems to answer the question: yes, a string reference can refer to a non-string object. This is not intended, however.
Consider what you have found to be a bug in the debugger.
As Jon Skeet explains in the answer you mention, because .NET arrays have this "crazy" covariance even though arrays are not read-only but read-write, every time one writes to an array of references the framework has to check the type of the object being written and throw an ArrayTypeMismatchException if the type is wrong, for example when assigning an instance of Cat to an array of Dogs (a runtime Dog[]) which has been cast by "crazy" covariance into an Animal[].
What you have demonstrated is that when we use the Immediate window of the Visual Studio debugger (or similar windows), this required type check is not done, and as a result an object of any type Y (except pointer types, probably) can end up assigned to a reference variable of any reference type X. Like this:
X[] arrayOfX = new X[1];
object[] arrayCastByCrazyCovariance = arrayOfX;
Y badObject = new Y(); // or another constructor or method to get a Y
// Set breakpoint here.
// In Immediate window assign: arrayCastByCrazyCovariance[0] = badObject
// Detach debugger again.
X anomalousReferenceVariable = arrayOfX[0];
anomalousReferenceVariable.MemberOfX(); // or other bad things
This can make a Cat bark like a Dog, and stuff like that.
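For contrast, here is a minimal sketch (using hypothetical Animal/Dog/Cat classes) of what normal execution does: the covariance write check fires and throws, which is exactly the check the Immediate Window appears to skip.
using System;

class Animal { }
class Dog : Animal { }
class Cat : Animal { }

class CovarianceDemo
{
    static void Main()
    {
        Dog[] dogs = new Dog[1];
        Animal[] animals = dogs;        // legal: array covariance
        try
        {
            animals[0] = new Cat();     // runtime check: a Cat is not a Dog
        }
        catch (ArrayTypeMismatchException)
        {
            Console.WriteLine("Normal execution rejects the write.");
        }
    }
}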
In the linked thread on Bypassing type safeguards, the answer by CodesInChaos shows an unrelated technique with which you can put a reference to an object of a "wrong" and unrelated type into a reference variable.

(I have preferred to rewrite my answer because the previous one had too many updates and wasn't clear enough).
Apparently, a not-so-perfect behaviour has been found in one of the tools (the Immediate Window) of the VS debugger. This behaviour does not affect the normal execution of the code AT ALL and, strictly speaking, does not even affect the debugging process.
What I meant in the last sentence above is that, when I debug code, I never use the Immediate Window; I just write whatever code I want, execute it, and see what the debugger shows. The reported problem does not affect this process (which could be called "debugging actually-executed code"; in the proposed example, pressing F11 when you are on objects[0] = new object();), which would otherwise imply a serious problem in VS. Thus, from my point of view (the kind of debugging I do) and from the execution point of view, the reported error has no effect at all.
The only place this error applies is the "Immediate Window" functionality, a feature of the debugger which estimates what the code will deliver before it actually delivers it (what might be called "debugging not-executed code" or "estimating expected outputs from non-executed code"; in the proposed example, being on the line objects[0] = new object();, not pressing F11 but using the Immediate Window to input values and let this feature tell you what is expected to happen).
In summary, the reported problem has to be understood within the right context: it is not a generally applicable problem, and not even a problem in the whole debugger (when you press F11 on the referred line in the debugger, it reports an error, so the debugger does understand perfectly that this situation is wrong), but just in one of its tools. I am not even sure whether this behaviour is acceptable for this tool (i.e., what the Immediate Window delivers is a prediction which might not be 100% right; if you want to know for sure what will happen, execute the code and let the debugger show you the information).
QUESTION: Can a String[] hold System.Object inside it?
ANSWER: NO.
CLARIFICATION: covariance is a complex reality which might not be perfectly accounted for by some of the secondary tools in VS (e.g., the Immediate Window), and thus there might be cases where the aforementioned statement does not fully apply. BUT this is a local behaviour/bug in the specific tool, with no effect on the actual execution of the code.

Related

Delegate allocation: capture of 'this' reference in JetBrains Rider

While writing my C# code I found a yellow line below an arrow function, as shown in the image. I hovered my cursor over it and it showed:
"Delegate allocation: capture of 'this' reference"
I searched this on Google but didn't find anything.
I also found that when I remove the initRemoteConfig field, the line goes away.
So can anybody explain what's happening here and what to do to remove this warning?
This is about a lambda capturing outer context (see the docs about capturing a local variable, or some answers about this on SO). It seems that initRemoteConfig is a field/property on the class containing this code, so your lambda needs to capture the current instance of that class.
Also, this is not a built-in Rider inspection; it comes from the Heap Allocations Viewer plugin, which helps you prevent unnecessary allocations. Sometimes you still need to allocate, so you can't always fix these warnings (i.e. the plugin will "warn" about any allocation, and it is up to you to decide whether it is necessary or not). In this particular case, if it suits your context, you can make the initRemoteConfig property/field static, i.e. something like this:
private static int initRemoteConfig;
Action x = () =>
{
    initRemoteConfig = 3;
};
will not give you this warning (but has some other drawbacks).
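For contrast, a minimal sketch (with a hypothetical enclosing class) of the version that does produce the warning, because the delegate has to capture this to reach the instance field:
using System;

class RemoteConfigHolder
{
    private int initRemoteConfig;   // instance field: only reachable through 'this'

    public Action BuildAction()
    {
        // The lambda writes to an instance field, so the delegate it produces must
        // hold a reference to the current instance. That per-call delegate
        // allocation (capturing 'this') is what the Heap Allocations Viewer flags;
        // with a static field the compiler can cache a single delegate instead.
        Action x = () =>
        {
            initRemoteConfig = 3;
        };
        return x;
    }
}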

Object instance valid only for the current method

Is it possible to create an object that can register whether the current thread leaves the method where it was created, or to check whether this has happened when a method on the instance gets called?
ScopeValid obj;
void Method1()
{
    obj = new ScopeValid();
    obj.Something();
}

void Method2()
{
    Method1();
    obj.Something(); //Exception
}
Can this technique be accomplished? I would like to develop a mechanism similar to TypedReference and ArgIterator, which can't "escape" the current method. These types are handled specially by the compiler, so I can't mimic this behavior exactly, but I hope it is possible to create at least a similar rule with the same results - disallow accessing the object if it has escaped the method where it was created.
Note that I can't use StackFrame and compare methods, because the object might escape and return to the same method.
Changing method behavior based upon the source of the call is a bad design choice.
Some example problems to consider with such a method include:
Testability - how would you test such a method?
Refactoring the calling code - What if the user of your code just does an end run around your error message that says you can't do that in a different method than the one where it was created? "Okay, fine! I'll just do my bad thing in the same method," says the programmer.
If the user of your code breaks it, and it's their fault, let it break. Better to just document your code with something like:
IInvalidatable - Types which implement this member should be invalidated with Invalidate() when you are done working with this.
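A minimal sketch (the names are illustrative, borrowing ScopeValid from the question) of what such an IInvalidatable contract and its guard might look like:
using System;

public interface IInvalidatable
{
    void Invalidate();
}

public class ScopeValid : IInvalidatable
{
    private bool _invalidated;

    public void Something()
    {
        if (_invalidated)
            throw new InvalidOperationException("This instance has been invalidated.");
        // ... real work ...
    }

    public void Invalidate() => _invalidated = true;
}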
Ignoring the obvious point that this almost seems like re-inventing IDisposable and using { } blocks (which have language support), if the user of your code doesn't use it right, it's not really your concern.
This is likely technically possible with AOP (I'm thinking PostSharp here), but it still depends on the user using your code correctly - they would have to have it in the build process, and failing to function if they aren't using a tool just because you're trying to make it easy on them is evil.
Another point - If you are just attempting to create an object which cannot be used outside of one method, and any attempted operation outside of the method would fail, just declare it a local inside the method.
Related: How to find out which assembly handled the request
Years later, it seems this feature was finally added in C# 7.2: ref struct.
Another related language feature is the ability to declare a value type that must be stack allocated. In other words, these types can never be created on the heap as a member of another class. The primary motivation for this feature was Span and related structures. Span may contain a managed pointer as one of its members, the other being the length of the span. It's actually implemented a bit differently because C# doesn't support pointers to managed memory outside of an unsafe context. Any write that changes the pointer and the length is not atomic. That means a Span would be subject to out of range errors or other type safety violations were it not constrained to a single stack frame. In addition, putting a managed pointer on the GC heap typically crashes at JIT time.
This prevents the code from moving the value to the heap, which partly solves my original problem. I am not sure how returning a ref struct is constrained, though.
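A minimal sketch of the C# 7.2 feature: a ref struct can only live on the stack, and the compiler rejects attempts to move it to the heap or capture it.
public ref struct StackOnly
{
    public int Value;
}

public class Container
{
    // public StackOnly Field;          // compile error: a ref struct cannot be a field of a class

    public void Use()
    {
        StackOnly local = new StackOnly { Value = 42 };   // fine: lives on the stack
        System.Console.WriteLine(local.Value);
        // object boxed = local;                          // compile error: a ref struct cannot be boxed
        // Action a = () => local.Value++;                // compile error: a ref struct cannot be captured by a lambda
    }
}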

What could be causing an NRE when closing a form?

Under certain circumstances, I get an NRE when closing a form using its Close button, which simply calls the native (WinForms) Close() method.
Certain paths through the code do fine, but one particular path causes the Null Reference Exception. Since this is nominally a scenario where something that is null is being referenced, how could this be taking place when the form is simply being closed? I can imagine there being a memory leak, perhaps, but something null being referenced I'm not understanding.
What potential causes in the code might there be?
UPDATE
Answer to Jon Skeet:
I'm unable to debug this in the normal way for reasons boring and laborious to relate (again), but what I can do is this:
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
    MessageBox.Show(ex.InnerException.ToString());
    SSCS.ExceptionHandler(ex, "frmEntry.saveDSD");
}
The last is an internal/custom exception processing method.
All I get from these lines is:
"Null Reference Exception"
Nothing (empty string)
"Exception: NullReferenceException Location: frmEntry.btnSave.click
Note that the last exception shown now implicates btnSave.click as the culprit, whereas it formerly pointed the finger at saveDSD. Curiouser and curiouser. Is this a case of the Hawthorne Effect* in action (well, a modified Hawthorne Effect, in the sense that adding this debugging code might be changing things).
The canonical Hawthorne Effect being more like this sort of scenario: A bunch of cats are playing basketball. Some girls walk up and watch. The cats start showing off and hotdogging, and one breaks his leg. Is it the girls' fault? No. Would it have happened had they not been watching? No.
UPDATE 2
Talk about the Hawthorne Effect: When I copied the two MessageBox.Show() calls to the catch block of frmEntry.btnSave.click, I then got:
"Null Reference Exception"
"Null Reference Exception"
"Exception: NullReferenceException Location: frmEntry.Closing"
IOW, the location of the NRE keeps moving about, like a squirrel on a country road when [h,sh]e meets two cars going opposite directions.
UPDATE 3
And it happened once more: adding those MessageBox.Show()s to the form's Closing event causes the NRE to pop up out of a different hole and declare itself. This inbred code is begetting (or cloning) half-wits by the score, or so it seems.
UPDATE 4
If writing bad code were a crime, the cat who wrote this stuff should be in solitary confinement in a SuperMax.
Here's the latest egregious headscratcher/howler:
int cancelled = ListRecords.cancelled;
if (cancelled == 0)
. . .
The public "cancelled" member of ListRecords (in another class) is only ever assigned values of 0 and 1. Thus, not only does cancelled sound like a bool, it acts like a bool, too. Why was it not declared as a bool?!?!?
The official image of this codebase should be Edvard Munch's "The Scream"
UPDATE 5
Maybe I'm venting, but recent experiences have caused me to come up with a new name for certain types of code, and an illustration for many projects. Code that doesn't go anywhere, but simply takes up space (assignments made, but are then not acted on, or empty methods/handlers called) I now call "Winchester Mystery House code."
And for the typical project (I've been a "captured" employee as well as a contractor on many projects now, having almost 20 years experience in programming), I now liken the situation to a group of people wallowing in quicksand. When they hire new people to "come aboard," they really want them to jump into the quicksand with them. How is this going to help matters? Is it just a case of "misery loves company"? What they should be doing is saying "Throw us a rope!" not "Come on in, the quicksand is fine!"
Many times it's probably just entropy and "job security" at play; if they keep wrestling with the monstrosity they've chosen or created, their hard-won knowledge in how to keep the beast more-or-less at bay will keep them in their semi-comfy job. If they make any radical changes (most teams do need to make some radical improvements, IMO), this may endanger their employment status. Maybe the answer is to employ/contract "transition teams" to move teams/companies from prehistoric dinosaur-clubbing times into the 21st Century, training the "old guard" so that they can preserve their jobs. After all, there aren't enough of these "saviors" to steal the old hands' jobs anyway - train them up, get them up to speed, and move on. All would benefit. But many seemingly prefer to continue wallowing in the mire, or quicksand.
Before anybody hires a programmer, they should give them at the minimum a test wherein passing the test would prove that they are at least familiar with the basic tenets delineated in Steve McConnell's "Code Complete."
Okay, back to battling the quicksand...
UPDATE 6
Or: "Refactoring Spo-dee-o-dee":
A "gotcha" with refactoring is if you change the name of a variable such as this:
ChangeListType chgLst; // ChangeListType is an enum
...to this:
ChangeListType changeListType;
...and if there is a data table with a column named "chgLst", SQL statements can also get changed, which can ruin, if not your day, at least part of it.
Another potential gotcha is when you get "[var name] is only assigned but its value is never used" hints. You may think you can just cavalierly 86 all of these, but beware: make sure that any related code you comment out along with the assignments to these dead vars isn't producing relied-upon side effects. Admittedly, that would be a bad way to code (side-effectful methods storing their return values in vars that are ignored), but ... it happens, especially when the original coder was a propeller-head mad scientist cowboy.
This code is so chock full of anti-patterns that I wouldn't be surprised if Butterick and/or Simplicity put out a contract on this guy.
On second thought, aside from the architecture, design, coding, and formatting of this project, it's not really all that bad...
Mary Shelley's "Frankenstein" was very prescient indeed, but not in the way most people have thought. Rather than a general foreboding about technology run amok, it is much more specific than that: it is a foregleam of most modern software projects, where "pieces parts" from here and there are jammed together willy-nilly with little regard to whether "the hip bone's connected to the thigh bone" (or should be) and whether those parts match up or will reject one another; the left hand doesn't know what the right is doing, and devil take the hindmost! Etc.
That can happen under any of these situations:
object obj is null and your code has obj.ToString()
string item is null and your code has item.Trim()
Settings mySettings is null and your code has mySettings.Path
Less common things would be a thread running that posts information or a serial port that tries to receive data after the SerialDataReceivedEventHandler has been removed.
Look at what your suspect form's code and the parent form's code are doing about the time this suspect form is closed and what other processes might be running on the suspect form.
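A minimal sketch (all field and handler names here are hypothetical) showing the kind of defensive checks and handler detaching that make those cases visible when the form closes:
using System.Diagnostics;
using System.IO.Ports;
using System.Windows.Forms;

public class FrmEntry : Form
{
    // Hypothetical fields that may have become null by the time the form closes.
    private object obj;
    private string item;
    private SerialPort port;

    private void FrmEntry_FormClosing(object sender, FormClosingEventArgs e)
    {
        // Guard each suspect reference instead of assuming it is still set.
        string text = obj?.ToString() ?? "<obj was null>";
        string trimmed = item?.Trim() ?? string.Empty;
        Debug.WriteLine(text + " / " + trimmed);

        // Detach event handlers so the serial port cannot fire into a disposed form.
        if (port != null)
        {
            port.DataReceived -= OnDataReceived;
        }
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e) { }
}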

Is there any advantage in writing a set of operations in a single line if all those operations have to occur anyway?

From a post-compilation perspective (rather than a coding syntax perspective), in C#, is there any actual difference in the compiled code between a set of operations that have occurred on one line to a set of operations that occur across multiple lines?
This
object anObject = new object();
anObject = this.FindName("rec"+keyPlayed.ToString());
Rectangle aRectangle = new Rectangle();
aRectangle = (Rectangle)anObject;
vs this.
Rectangle aRectangle = (Rectangle)this.FindName("rec"+keyPlayed.ToString());
I wonder because there seems to be a view that using the fewest lines is better; however, I would like to understand whether that is because there is (or once was) a tangible technical benefit, or whether it is for some other quantifiable reason.
The number of lines doesn't matter; the IL will be identical if the code is equivalent (yours isn't).
And actually, unless we know what FindName returns, we can't answer properly - since by casting to object you might be introducing a "box" operation, and you might be changing a conversion operation (or perhaps a passive no-op cast) into an active double-cast (cast to object, cast to Rectangle). For now, I'll assume that FindName returns object, for simplicity. If you'd used var, we'd know at a glance that your code wasn't changing the type (box / cast / etc):
var anObject = this.FindName("rec"+keyPlayed.ToString());
In release mode (with optimize enabled) the compiler will remove most variables that are set and then used immediately. The biggest difference between the two lines above is that the second version doesn't create and discard new object() and new Rectangle(). But if you hadn't have done that, the code would have been equivalent (again, assuming that FindName returns object):
object anObject;
anObject = this.FindName("rec"+keyPlayed.ToString());
Rectangle aRectangle;
aRectangle = (Rectangle)anObject;
Some subtleties exist if you re-use the variable (in which case it can't necessarily be removed by the compiler), and if that variable is "captured" by a lambda/anon-method, or used in a ref/out. And some more subtleties for some math scenarios if the compiler/JIT chooses to do an operation purely in the registers without copying it back down to a variable (the registers have different (greater) width, even for "fixed-size" math like float).
I think that you should generally aim to make your code as readable as possible, and sometimes that means separating out your code and sometimes it means having it on one line. Aim for readability, and if performance becomes a problem, use profiling tools to analyse the code and refactor it if necessary.
The compiled code may not have any difference (with optimization enabled perhaps), but think about readability too :)
In your example, everything on one line is actually more readable than separate lines. What you were trying to do was immediately obvious there. But others can quickly point out counter-examples. So use your good judgment to decide which way to go.
There's a refactoring pattern to prefer a call to a temporary variable. Following this pattern reduces the number of lines of code but makes interactive debugging harder.
One of the main practical differences between the two is that, when debugging, it can be useful to have the individual steps on separate lines with the results assigned to local variables.
This means that you can cleanly step through the different bits of code which give the final result and see the intervening values.
When you build optimized the compiler will remove the steps and make the code efficient.
Tony
With your example there is an actual difference, as in the first piece of code you are creating objects and values that you don't use.
The proper way to write that code is like this:
object anObject;
anObject = this.FindName("rec" + keyPlayed.ToString());
Rectangle aRectangle;
aRectangle = (Rectangle)anObject;
Now, the difference between that and the single-line version is that you are declaring one more local variable. In most cases the compiler can optimize that so that the generated code is identical anyway, and even if it actually uses one more local variable in the generated code, that is still negligible compared to anything else you do in that code.
For this example I think that the single line version is clearer, but with more complicated code it can of course be clearer to split it into several stages. Local variables are very cheap, so you should not hesitate to use some if the code gets clearer.

Can I detect whether I've been given a new object as a parameter?

Short Version
For those who don't have the time to read my reasoning for this question below:
Is there any way to enforce a policy of "new objects only" or "existing objects only" for a method's parameters?
Long Version
There are plenty of methods which take objects as parameters, and it doesn't matter whether the method has the object "all to itself" or not. For instance:
var people = new List<Person>();
Person bob = new Person("Bob");
people.Add(bob);
people.Add(new Person("Larry"));
Here the List<Person>.Add method has taken an "existing" Person (Bob) as well as a "new" Person (Larry), and the list contains both items. Bob can be accessed as either bob or people[0]. Larry can be accessed as people[1] and, if desired, cached and accessed as larry (or whatever) thereafter.
OK, fine. But sometimes a method really shouldn't be passed a new object. Take, for example, Array.Sort<T>. The following doesn't make a whole lot of sense:
Array.Sort<int>(new int[] {5, 6, 3, 7, 2, 1});
All the above code does is take a new array, sort it, and then forget it (as its reference count reaches zero after Array.Sort<int> exits and the sorted array will therefore be garbage collected, if I'm not mistaken). So Array.Sort<T> expects an "existing" array as its argument.
There are conceivably other methods which may expect "new" objects (though I would generally think that to have such an expectation would be a design mistake). An imperfect example would be this:
DataTable firstTable = myDataSet.Tables["FirstTable"];
DataTable secondTable = myDataSet.Tables["SecondTable"];
firstTable.Rows.Add(secondTable.Rows[0]);
As I said, this isn't a great example, since DataRowCollection.Add doesn't actually expect a new DataRow, exactly; but it does expect a DataRow that doesn't already belong to a DataTable. So the last line in the code above won't work; it needs to be:
firstTable.ImportRow(secondTable.Rows[0]);
Anyway, this is a lot of setup for my question, which is: is there any way to enforce a policy of "new objects only" or "existing objects only" for a method's parameters, either in its definition (perhaps by some custom attributes I'm not aware of) or within the method itself (perhaps by reflection, though I'd probably shy away from this even if it were available)?
If not, any interesting ideas as to how to possibly accomplish this would be more than welcome. For instance I suppose if there were some way to get the GC's reference count for a given object, you could tell right away at the start of a method whether you've received a new object or not (assuming you're dealing with reference types, of course--which is the only scenario to which this question is relevant anyway).
EDIT:
The longer version gets longer.
All right, suppose I have some method that I want to optionally accept a TextWriter to output its progress or what-have-you:
static void TryDoSomething(TextWriter output) {
    // do something...
    if (output != null)
        output.WriteLine("Did something...");
    // do something else...
    if (output != null)
        output.WriteLine("Did something else...");
    // etc. etc.
    if (output != null)
        // do I call output.Close() or not?
}

static void TryDoSomething() {
    TryDoSomething(null);
}
Now, let's consider two different ways I could call this method:
string path = GetFilePath();
using (StreamWriter writer = new StreamWriter(path)) {
    TryDoSomething(writer);
    // do more things with writer
}
OR:
TryDoSomething(new StreamWriter(path));
Hmm... it would seem that this poses a problem, doesn't it? I've constructed a StreamWriter, which implements IDisposable, but TryDoSomething isn't going to presume to know whether it has exclusive access to its output argument or not. So the object either gets disposed prematurely (in the first case), or doesn't get disposed at all (in the second case).
I'm not saying this would be a great design, necessarily. Perhaps Josh Stodola is right and this is just a bad idea from the start. Anyway, I asked the question mainly because I was just curious if such a thing were possible. Looks like the answer is: not really.
No, basically.
There's really no difference between:
var x = new ...;
Foo(x);
and
Foo(new ...);
and indeed sometimes you might convert between the two for debugging purposes.
Note that in the DataRow/DataTable example, there's an alternative approach though - that DataRow can know its parent as part of its state. That's not the same thing as being "new" or not - you could have a "detach" operation for example. Defining conditions in terms of the genuine hard-and-fast state of the object makes a lot more sense than woolly terms such as "new".
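A minimal sketch of that idea with a hypothetical Node type: the method branches on the object's observable state (does it already have a parent?) rather than trying to detect whether the argument is "new", and a Detach operation changes that state explicitly.
using System;
using System.Collections.Generic;

public class Node
{
    public Node Parent { get; private set; }
    private readonly List<Node> _children = new List<Node>();

    public void AddChild(Node child)
    {
        // The condition is expressed as hard-and-fast state, not "newness".
        if (child.Parent != null)
            throw new InvalidOperationException("Node already has a parent; detach it first.");

        child.Parent = this;
        _children.Add(child);
    }

    public void Detach()
    {
        if (Parent != null)
        {
            Parent._children.Remove(this);
            Parent = null;
        }
    }
}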
Yes, there is a way to do this.
Sort of.
If you make your parameter a ref parameter, you'll have to have an existing variable as your argument. You can't do something like this:
DoSomething(ref new Customer());
If you do, you'll get the error "A ref or out argument must be an assignable variable."
Of course, using ref has other implications. However, if you're the one writing the method, you don't need to worry about them. As long as you don't reassign the ref parameter inside the method, it won't make any difference whether you use ref or not.
I'm not saying it's good style, necessarily. You shouldn't use ref or out unless you really, really need to and have no other way to do what you're doing. But using ref will make what you want to do work.
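A minimal sketch of that approach, with a hypothetical Customer type and DoSomething method:
public class Customer { }

public static class Demo
{
    // 'ref' forces callers to pass an assignable variable.
    public static void DoSomething(ref Customer customer)
    {
        // Use customer; as long as it is never reassigned here,
        // behaviour is the same as an ordinary by-value parameter.
    }

    public static void Caller()
    {
        Customer existing = new Customer();
        DoSomething(ref existing);           // compiles: an existing variable

        // DoSomething(ref new Customer());  // error: "A ref or out argument must be an assignable variable."
    }
}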
No. And if there is some reason that you need to do this, your code has improper architecture.
Short answer - no there isn't
In the vast majority of cases I usually find that the issues you've listed above don't really matter all that much. When they do, you could overload the method so that it accepts something else as a parameter instead of the object you are worried about sharing.
// For example create a method that allows you to do this:
people.Add("Larry");
// Instead of this:
people.Add(new Person("Larry"));
// The new method might look a little like this:
public void Add(string name)
{
    Person person = new Person(name);
    this.add(person); // This method could be private if necessary
}
I can think of a way to do this, but I would definitely not recommend this. Just for argument's sake.
What does it mean for an object to be a "new" object? It means there is only one reference keeping it alive. An "existing" object would have more than one reference to it.
With this in mind, look at the following code:
class Program
{
    static void Main(string[] args)
    {
        object o = new object();
        Console.WriteLine(IsExistingObject(o));
        Console.WriteLine(IsExistingObject(new object()));

        o.ToString(); // Just something to simulate further usage of o. If we didn't do this, in a release build, o would be collected by the GC.Collect call in IsExistingObject. (not in a Debug build)
    }

    public static bool IsExistingObject(object o)
    {
        var oRef = new WeakReference(o);
#if DEBUG
        o = null; // In Debug, we need to set o to null. This is not necessary in a release build.
#endif
        GC.Collect();
        GC.WaitForPendingFinalizers();
        return oRef.IsAlive;
    }
}
This prints True on the first line, False on the second.
But again, please do not use this in your code.
Let me rewrite your question to something shorter.
Is there any way, in my method, which takes an object as an argument, to know if this object will ever be used outside of my method?
And the short answer to that is: No.
Let me venture an opinion at this point: There should not be any such mechanism either.
This would complicate method calls all over the place.
If there were a mechanism whereby I could tell, in a method call, whether the object I'm given will really be used or not, then that's a signal to me, as the developer of that method, to take it into account.
Basically, you'd see this type of code all over the place (hypothetical, since it isn't available/supported):
if (ReferenceCount(obj) == 1) return; // only reference is the one we have
My opinion is this: If the code that calls your method isn't going to use the object for anything, and there are no side-effects outside of modifying the object, then that code should not exist to begin with.
It's like code that looks like this:
1 + 2;
What does this code do? Well, depending on the C/C++ compiler, it might compile into something that evaluates 1+2. But then what, where is the result stored? Do you use it for anything? No? Then why is that code part of your source code to begin with?
Of course, you could argue that the code is actually a+b;, and the purpose is to ensure that the evaluation of a+b isn't going to throw an exception denoting overflow, but such a case is so diminishingly rare that a special case for it would just mask real problems, and it would be really simple to fix by just assigning it to a temporary variable.
In any case, for any feature in any programming language and/or runtime and/or environment, where a feature isn't available, the reasons for why it isn't available are:
It wasn't designed properly
It wasn't specified properly
It wasn't implemented properly
It wasn't tested properly
It wasn't documented properly
It wasn't prioritized above competing features
All of these are required to get a feature to appear in version X of application Y, be it C# 4.0 or MS Works 7.0.
Nope, there's no way of knowing.
All that gets passed in is the object reference. Whether it is 'newed' in-situ, or is sourced from an array, the method in question has no way of knowing how the parameters being passed in have been instantiated and/or where.
One way to know whether an object passed to a function (or method) was created right before the call is to give the object a property that is initialized with a timestamp obtained from a system function; then, by looking at that property, it would be possible to resolve the problem.
Frankly, I would not use such method because
I don't see any reason why the code should know whether the passed parameter is an object that was just created or one that was created at a different moment.
The method I suggest depends on a system function that might not be present on some systems, or might be less reliable.
With modern CPUs, which are much faster than the CPUs used 10 years ago, it could be hard to choose the right threshold for deciding whether an object has been freshly created or not.
The other solution would be to use an object property that is set to one value by the object's creator and set to a different value by all of the object's methods.
In this case the problem would be forgetting to add the code that changes that property in each method.
Once again I would ask myself, "Is there really a need to do this?"
As a possible partial solution, if you only wanted one instance of an object to be consumed by a method, maybe you could look at a Singleton. That way the method in question could not create another instance if one already existed.
