Using Lazy&lt;T&gt;.Value right after its declaration (C#)

There's a lot of code like this in the application at the company I'm working for:
var something = new Lazy<ISomething>(() =>
(ISomething)SomethingFactory
.GetSomething<ISomething>(args));
ISomething sth = something.Value;
From my understanding of Lazy this is totally meaningless, but I'm new at the company and I don't want to argue without reason.
So: does this code make any sense?

Code under active development is never static, so one possible reason for writing it this way is to make it easy to move the assignment somewhere else later. However, it sounds as if this occurs inside a method, and I would normally expect lazy initialization to be used for class fields or properties, where it makes more sense (because you may not know which method in the class will use the value first).
Unfortunately, it could just as easily come from a lack of knowledge of how Lazy&lt;T&gt; works in C# (or of lazy initialization in general), and maybe they are just trying to use the latest "cool feature" they found out about.
I have seen weird or odd things proliferate in code at a company, simply because people saw it coded one way, and then just copied it, because they thought the original person knew what they were doing and it made sense. The best thing to do is to ask why it was done that way. Worst case, you'll learn something about your company's procedures or coding practices. Best case, you may wind up educating them if they say "gee, I don't know".

Well, in this case it is of course meaningless, because you are reading the value right after creating the object, but maybe it is done to follow a standard or something like that.
At my company we do something similar: registering objects in the Unity container and then asking Unity to create the instance right after registering it.

Unless they are using something multiple times in the method, it seems pretty useless, and slightly less efficient than just performing the action immediately: Lazy&lt;T&gt; has to go through the Value getter, check whether the value has been materialized yet, and invoke the Func. That is useful for deferred loading, but pointless if the value is used once, immediately, in the same method.
Lazy&lt;T&gt;, however, is usually really helpful for backing properties on a class.
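For example, here is a minimal sketch of that pattern, reusing the ISomething/SomethingFactory names from the question (the args constructor parameter is purely illustrative, standing in for whatever the factory needs):
public class SomethingConsumer
{
    private readonly Lazy<ISomething> _something;

    public SomethingConsumer(object args)
    {
        // Nothing is created here; the factory runs on the first read of
        // Something, and the result is cached for every later access.
        _something = new Lazy<ISomething>(
            () => (ISomething)SomethingFactory.GetSomething<ISomething>(args));
    }

    public ISomething Something
    {
        get { return _something.Value; }
    }
}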

It could be useful if the Lazy&lt;T&gt;.Value access is going to be moved out of the method in the future, but even then it can be considered over-engineering, and not the best implementation: in that case the Lazy declaration should be extracted to a field or property instead.
In short - yes, it's useless.
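With the Lazy wrapper removed, the snippet from the question boils down to a plain eager call (same names as in the question):
ISomething sth = (ISomething)SomethingFactory.GetSomething<ISomething>(args);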

Related

Most elegant way of delayed or repeatable initializing

I am trying to rewrite an extremely ugly class in one of our applications at work. In one of our classes, there are hundreds of lines of code that handle initialization and re-initialization of some classes. Currently, this is done in an awful brute-force way: you write your init code and manually copy it into the re-init part (as the two are very similar).
Because of this, I started to rewrite it into a list of delegates which are then called with a parameter (bool isReinit) in both places. Then I noticed that most of the delegates are also identical, as the initialization process is the same for 90 percent of the classes. This means I should be able to create some default initialization function and simplify the code drastically. So far I have created something like this:
https://dotnetfiddle.net/RVS5UT
I also created a CustomInitializer class which implements IInitializer, takes a single Func as a constructor parameter, and runs it in Initialize, for the cases where the initialization is very different.
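Roughly, the shape being described is something like the sketch below (a simplified guess; the actual code is in the fiddle, and the real version takes a Func rather than the Action used here):
public interface IInitializer
{
    void Initialize();
}

public class CustomInitializer : IInitializer
{
    private readonly Action _initialize;

    public CustomInitializer(Action initialize)
    {
        _initialize = initialize;
    }

    // Simply runs the supplied delegate, for classes whose
    // initialization doesn't fit the default pattern.
    public void Initialize()
    {
        _initialize();
    }
}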
The fiddle is a simplified and anonymized piece of the real code, but it works. The problem is that the whole approach is very awkward and the constructor signature is ugly as hell. Is there some way to simplify this? I can't find any pattern or approach that would help me. Any step towards better code is welcome, and maybe I am just missing something.
There is also another catch. One solution I figured out would be to store the property pairs (var1a + var1b, var2a + var2b, ...) in an object and pass it directly to the Initialize method. But this would mean moving the properties, which is sadly not possible at the moment, because the file has over 18k lines and the code reviewers would kill me for changing a third of them to refactor one method (even if it is a long one). I need to leave the target properties (var1a, var1b, var2a, ...) where they are now. This could also mean that there is no elegant way to solve this.
I am using .NET 4.0, C# 5.0
EDIT: I have no access to the initialized types (another stupid catch)
Thanks for your help.
the file has over 18k lines
Wow, looks like a lot of fun.
It is absolutely good to try to improve it. And believe me, whatever your co-workers may think, there is nothing to do here but refactor, unless this code never needs to evolve again.
But it seems to me you are going down the path of complexity, trying to be DRY instead of trying to be expressive. The idea of having a StandardInitializer and a CustomInitializer managing lambdas is extremely complex. The initialization of a class should live in the class being initialized. If some behaviors are really shared, the classes may share a base class or a collaborating class.
I recommend this discussion of Working Effectively with Legacy Code. As you'll see, and probably already know, the first key point is to have tests.
Please don't try to refactor such a class without a test harness. Otherwise you'll introduce regressions, you'll be frustrated, and your co-workers will be confirmed in their view that nothing can be done here without breaking everything.
And don't forget if tests are hard to create, it's because of bad code, not because tests are expensive. Bad code is expensive.
Once some tests protect you, try to think in terms of responsibility and life cycle. For example, in a WPF application it is common to have "initializable" ViewModels because they make an async web service call to initialize themselves.
In this case, the object responsible for the life cycle of a given ViewModel is also responsible for initializing it. If it manages several initializable view models, then this kind of code is fine:
foreach (var initializable in initializables)
{
initializable.Initialize();
}
But please, whatever solution you choose, keep a clear separation between Initialize and Reinitialize (if they have things in common, make them call an internal shared function). It is a very bad idea to write stuff like:
init.Initialize(true);
It clearly states that the behavior of your Initialize function will change depending on a boolean value. If you have two behaviors, you should have two functions with clear names.
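A minimal sketch of that separation (the names are illustrative), with the common work pulled into one private method:
public class WidgetSetup
{
    public void Initialize()
    {
        ApplyDefaults();            // work shared with Reinitialize
        // first-time-only work goes here
    }

    public void Reinitialize()
    {
        ReleaseExistingResources(); // re-init-only work
        ApplyDefaults();            // same shared work, no boolean flag needed
    }

    private void ApplyDefaults() { /* ... */ }
    private void ReleaseExistingResources() { /* ... */ }
}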

Should you Unit Test simple properties?

Should you Unit Test simple properties of a class, asserting that a value is set and retrieved? Or is that really just unit testing the language?
Example
public string ConnectionString { get; set; }
Test
public void TestConnectionString()
{
var c = new MyClass();
c.ConnectionString = "value";
Assert.Equal(c.ConnectionString, "value");
}
I guess I don't see the value in that.
I would suggest that you absolutely should.
What is an auto-property today may end up having a backing field put against it tomorrow, and not by you...
The argument that "you're just testing the compiler or the framework" is a bit of a strawman imho; what you're doing when you test an auto-property is, from the perspective of the caller, testing the public "interface" of your class. The caller has no idea if this is an auto property with a framework-generated backing store, or if there is a million lines of complex code in the getter/setter. Therefore the caller is testing the contract implied by the property - that if you put X into the box, you can get X back later on.
Therefore it behooves us to include a test since we are testing the behaviour of our own code and not the behaviour of the compiler.
A test like this takes maybe a minute to write, so it's not exactly burdensome; and you can easily enough create a T4 template that will auto-generate these tests for you with a bit of reflection. I'm actually working on such a tool at the moment to save our team some drudgery.
If you're doing pure TDD, then it forces you to stop for a moment and consider whether having a public auto-property is even the best thing to do (hint: it's often not!).
Wouldn't you rather have an up-front regression test so that when the FNG does something like this:
//24-SEP-2013::FNG - put backing field for ConnectionString as we're now doing constructor injection of it
public string ConnectionString
{
    get { return _connectionString; }
    set { _connectionString = "foo"; } //FNG: I'll change this later on, I'm in a hurry
}
///snip
public MyDBClass(string connectionString)
{
ConnectionString=connectionString;
}
You instantly know that they broke something?
If the above seems contrived for a simple string property: I have personally seen a situation where an auto-property was refactored by someone who thought they were being oh so clever and wanted to change it from an instance member into a wrapper around a static class member (representing a database connection, as it happens; the reasons for the change are not important).
Of course that same very clever person completely forgot to tell anyone else that they needed to call a magic function to initialise this static member.
This caused the application to compile and ship to a customer, whereupon it promptly failed. Not a huge deal, but it cost several hours of support time, which equals money...
That muppet was me, by the way!
EDIT: as per various conversations on this thread, I wanted to point out that a test for a read-write property is ridiculously simple:
[TestMethod]
public void PropertyFoo_StoresCorrectly()
{
var sut = new MyClass();
sut.Foo = "hello";
Assert.AreEqual("hello", sut.Foo, "Oops...");
}
Edit: And you can even do it in one line with Mark Seemann's AutoFixture.
I would submit that if you find you have such a large number of public properties that writing three lines like the above is a chore for each one, then you should be questioning your design. If you rely on another test to indicate a problem with this property, then either:
The test is actually testing this property, or
You will spend more time verifying that this other test is failing because the property is incorrect (via debugger, etc) than you would have spent typing in the above code
If some other test allows you to instantly tell that the property is at fault, it's not a unit test!
Edit (again!): As pointed out in the comments, and rightly so, things like generated DTO models are probably exceptions to the above, because they are just dumb buckets for shifting data around, and since a tool created them, it's generally pointless to test them.
/EDIT
Ultimately "It depends" is probably the real answer, with the caveat that the best "default" disposition to be the "always do it" approach, with exceptions to that taken on an informed, case by case basis.
Generally, no. A unit test should be used to test for the functionality of a unit. You should unit test methods on a class, not individual, automatic properties (unless you are overriding the getter or setter with custom behaviour).
You know that assigning a string value to an automatic string property will work if you get the syntax and the setter value correct, as that is part of the language specification. If you do not, you will get an error pointing out your flaw.
Unit tests should be designed to test for logical errors in code rather than something the compiler would catch anyway.
EDIT: As per my conversation with the author of the accepted answer for this question I would like to add the following.
I can appreciate that TDD purists would say you need to test automatic properties. But as a business applications developer, I need to reasonably weigh the time I could spend writing and running tests for 'trivial' code such as automatic properties against how long it would take to fix an issue arising from not testing it. In my personal experience, bugs that arise from changing trivial code are trivial to fix 99% of the time. For that reason, I would say the positives of unit testing only non-language-specification functionality outweigh the negatives.
If you work in a fast-paced business environment which uses a TDD approach, then part of the workflow for that team should be to test only code that needs testing, basically any custom code. Should someone go into your class and change the behavior of an automatic property, it is their responsibility to set up a unit test for it at that point.
I would have to say no. If setting and getting a simple property doesn't work, you have bigger problems. I know I don't test them. Now, some will argue that merely having the test would make sure it failed if the property was removed, for example. But I'd put money on the fact that if the property were removed, the unit test code would get removed in the same refactor, so it wouldn't matter.
Are you adhering to strict TDD practices or not?
If yes then you absolutely should write tests on public getters and setters, otherwise how will you know if you've implemented them correctly?
If no, you still probably should write the tests. Though the implementation is trivial today, it is not guaranteed to remain so. Without a test covering a simple get/set operation, a future change to the implementation could break the invariant "setting property Foo to value Bar results in the getter for Foo returning Bar" while the unit tests continue to pass. The test itself is trivial to implement, yet it guards against future change.
The way I see it, how much unit testing (or testing in general) you do comes down to how confident you are that the code works as designed, and what the chances are of it breaking in the future.
If you have lower confidence in the code (maybe because it was outsourced and the cost of checking it line by line is high), then perhaps unit testing properties is appropriate.
One thing you can do is write a helper class that can go over all get/set properties of a class to test that they still behave as designed.
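A sketch of what such a helper might look like using plain reflection (it only handles string properties and a single sample value here; a real version would need sample values per type):
using System;
using System.Linq;

public static class PropertyTester
{
    // Round-trips every public, readable and writable string property
    // and throws if the getter doesn't return what the setter was given.
    public static void VerifyStringProperties(object instance)
    {
        var properties = instance.GetType().GetProperties()
            .Where(p => p.CanRead && p.CanWrite
                        && p.GetIndexParameters().Length == 0
                        && p.PropertyType == typeof(string));

        foreach (var property in properties)
        {
            property.SetValue(instance, "sample", null);
            var actual = (string)property.GetValue(instance, null);
            if (actual != "sample")
                throw new Exception(property.Name + " did not round-trip its value.");
        }
    }
}
A single test per class can then call PropertyTester.VerifyStringProperties(new MyClass()).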
Unless the properties perform some other sort of logic, then no: it really is just unit testing the language, and testing simple auto-implemented properties would be completely pointless.
According to the book The Art of Unit Testing: With Examples in .NET, a unit test does not cover just any type of code; it focuses on logical code. So, what is logical code?
Logical code is any piece of code that has some sort of logic in it, small as it may be. It's logical code if it has one or more of the following: an IF statement, a loop, switch or case statements, calculations, or any other type of decision-making code.
Does a simple getter/setter wrap any logic? The answer is:
Properties (getters/setters in Java) are good examples of code that usually doesn't contain any logic, and so doesn't require testing. But watch out: once you add any check inside the property, you'll want to make sure that logic is being tested.
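For instance, if the ConnectionString property from the question ever gained even a small guard like this (a hypothetical change), it would cross into "logical code" and deserve a test:
private string _connectionString;

public string ConnectionString
{
    get { return _connectionString; }
    set
    {
        // This check is the "logic" that now warrants a unit test.
        if (string.IsNullOrEmpty(value))
            throw new ArgumentException("Connection string must not be empty.");
        _connectionString = value;
    }
}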
My answer comes from the viewpoint of a former test manager and a current development manager (responsible for delivering software on time and with quality). I see people mentioning pragmatism. Pragmatism is not a good adviser, because it may pair up with laziness and/or time pressure; it may lead you down the wrong path. If you invoke pragmatism, you have to be careful to keep your intentions on the track of professionalism and common sense. It requires humility to accept the answers, because they might not be what you want to hear.
From my viewpoint, the important points are these:
you should find defects as early as possible. To do that, you have to apply a proper testing strategy. If that strategy means testing properties, then test properties. If not, don't. Either way comes with a price.
your testing should be easy and fast. The bigger the part of the code (unit, integration, etc.) that is tested at build time, the better.
you should do root cause analysis to answer the questions below and protect your organization from the current type of error. Don't worry, another type of defect will come up, and there will always be lessons to be learned.
what is the root cause?
how can it be avoided next time?
another aspect is the cost of creating and maintaining tests. Not testing properties because they are boring to maintain, or because you have hundreds of them, is ridiculous. You should create or apply tools that do the grunt work instead of a human. In general, you always have to enhance your environment in order to be more efficient.
what others say is not a good adviser either - it doesn't matter whether it was said by Martin Fowler or Seemann - the environment they are in is almost certainly not the same as yours. You have to use your knowledge and experience to work out what is good for your project and how to make it better. If you apply things just because they were said by people you respect, without thinking them through, you will find yourself in deep trouble. I am not saying that you don't need advice or other people's help and opinions, only that you have to apply common sense when taking them on.
TDD does not answer two important questions; BDD, however, does give you answers to the questions below. But if you follow only one of them, you won't deliver on time and with quality. So it doesn't matter whether you are a TDD purist or not.
what must be tested? (TDD says everything must be tested - the wrong answer, in my opinion)
when must testing be finished?
All in all, there is no single good answer, just more questions you have to answer to reach the point where you can decide whether it is needed or not.

Singleton service classes in C++

Coming from a .NET/C# Background and having solid exposure to PRISM, I really like the idea of having a CompositionContainer to get just this one instance of a class whenever it is needed.
And since that instance is also globally accessible through the ServiceLocator, this pretty much amounts to the Singleton pattern.
Now, my current project is in C++, and I'm at the point of deciding how to manage plugins (external DLL loading and things like that) for the program.
In C# I'd create a PluginService, export it as shared, and channel everything through that one instance (its members would basically amount to one list holding the plugins, plus a bunch of methods). In C++, obviously, I don't have a CompositionContainer or a ServiceLocator.
I could probably build a basic version of this, but whatever I imagine involves using singletons or global variables. The general advice about this seems to be: DON'T EVER DO GLOBALS, AND MUCH LESS SINGLETONS.
So what am I to do?
(And what I'm also interested in: is Microsoft giving us a bad example of how to code here, or is this an actual case where singletons are the right choice?)
There's really no difference between C# and C++ in terms of whether globals and singletons are "good" or "bad".
The solution you outline is equally bad (or good) in both C# and C++.
What you seem to have discovered is simply that different people have different opinions. Some C# developers like to use singletons for something like this. And some C++ programmers feel the same way.
Some C++ programmers think a singleton is a terrible idea, and... some C# programmers feel the same way. :)
Microsoft has given many bad examples of how to code. Never ever accept their sample code as "good practices" just because it says Microsoft on the box. What matters is the code, not the name behind it.
Now, my main beef with singletons is not the global aspect of them.
Like most people, I generally dislike and distrust globals, but I won't say they should never be used. There are situations where it's just more convenient to make something globally accessible. They're not common (and I think most people still overuse globals), but they exist.
But the real problem with singletons is that they enforce an unnecessary and often harmful constraint on your code: they prevent you from creating multiple instances of an object, as though you, when you write the class, know how it's going to be used better than the actual user does.
When you write a class, say, a PluginService as you mentioned in a comment, you certainly have some idea of how you plan for it to be used. You probably think "an instance of it should be globally accessible" (which is debatable, because many classes should not access the PluginService, but let's assume we do want it to be global for now). And you probably think "I can't imagine why I'd want to have two instances".
But the problem is when you take this assumption and actively prevent the creation of two instances.
What if, two months from now, you find a need for creating two PluginServices? If you'd taken the easy route when you wrote the class, and had not built unnecessary constraints into it, then you could also take the easy route now, and simply create two instances.
But if you took the difficult path of writing extra code to prevent multiple instances from being created, then you now again have to take the difficult path: now you have to go back and change your class.
Don't build limitations into your code unless you have a reason: if it makes your job easier, go ahead and do it. And if it prevents harmful misuse of the class, go ahead and do it.
But in the singleton case it does neither of those: you create extra work for yourself, in order to prevent uses that might be perfectly legitimate.
You may be interested in reading this blog post I wrote to answer the question of singletons.
But to answer the specific question of how to handle your specific situation, I would recommend one of two approaches:
the "purist" approach would be to create a ServiceLocator which is not global. Pass it to those who need to locate services. In my experience, you'll probably find that this is much easier than it sounds. You tend to find out that it's not actually needed in as many different places as you thought it'd be. And it gives you a motivation to decouple the code, to minimize dependencies, to ensure that only those who really have a genuine need for the ServiceLocator get access to it. That's healthy.
or there's the pragmatic approach: create a single global instance of the ServiceLocator. Anyone who needs it can use it, and there's never any doubt about how to find it -- it's global, after all. But don't make it a singleton. Let it be possible to create other instances. If you never need to create another instance, then simply don't do it. But this leaves the door open so that if you do end up needing another instance, you can create it.
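A sketch of that pragmatic option, written in C# terms since that's the asker's background (the same shape translates to C++ as a plain global object); the point is that Instance is just a convenient default, not an enforced limit:
using System;
using System.Collections.Generic;

public class ServiceLocator
{
    // A globally reachable default instance for convenience...
    public static readonly ServiceLocator Instance = new ServiceLocator();

    // ...but the constructor stays public, so tests or a second host
    // can still create their own independent locators.
    private readonly Dictionary<Type, object> _services = new Dictionary<Type, object>();

    public void Register<T>(T service)
    {
        _services[typeof(T)] = service;
    }

    public T Resolve<T>()
    {
        return (T)_services[typeof(T)];
    }
}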
There are many situations where you end up needing multiple instances of a class that you thought would only ever need one instance. Configuration/settings objects, loggers or wrappers around some piece of hardware are all things people often call out as "this should obviously be a singleton, it makes no sense to have multiple instances", and in each of these cases, they're wrong. There are many cases where you want multiple instances of just such classes.
But the most universally applicable scenario is simply: testing.
You want to ensure that your ServiceLocator works. So you want to test it.
If it's a singleton, that's really hard to do. A good test should run in a pristine, isolated environment, unaffected by previous tests. But a singleton lives for the duration of the application, so if you have multiple tests of the ServiceLocator, they'll all run against the same "dirty" instance, and each test might affect the state seen by the next.
Instead, the tests should each create a new, clean ServiceLocator, so they can control exactly which state it is in. And to do that, you need to be able to create instances of the class.
So don't make it a singleton. :)
There's absolutely nothing wrong with singletons when they're appropriate. I have my doubts concerning CompositionContainer (but I'm not sure I understand what it is actually supposed to do), but ServiceLocator is the sort of thing that will generally be a singleton in any well-designed application. Having two or more ServiceLocators will result in the program not functioning as it should (because a service will be registered in one of them, and you'll be looking it up in another); enforcing this programmatically is positive, at least if you favor robust programming. In addition, in C++, the singleton idiom is used to control the order of initialization; unless you make ServiceLocator a singleton, you can't use it in the constructor of any object with static lifetime.
While there is a small group of very vocal anti-singleton fanatics within the larger C++ community, you'll find that the consensus favors singletons, in certain very restricted cases. They're easily abused (but then, so are templates, dynamic allocation and polymorphism), but they do solve one particular problem very nicely, and it would be silly to forgo them for some arbitrary dogmatic reason when they're the best solution for the problem.

Am I using static in the right way?

I'm writing an XNA engine and I am storing all of the models in a List. In order to be able to use this throughout the engine, I've made it a public static List&lt;Model&gt; so I can access it from any new classes that I develop. It certainly makes the list of models really easy to get to, but is this the right usage? Or would I be better off passing the list in as a method or constructor parameter?
In OOP it's generally advisable to avoid using static methods and properties, unless you have a very good reason to do so. One of the reasons for that is that in the future you may want to have two or more instances of this list for some reason, and then you'll be stuck with static calls.
Static methods and properties are too rigid. As Stevey states it:
Static methods are as flexible as granite. Every time you use one, you're casting part of your program in concrete. Just make sure you don't have your foot jammed in there as you're watching it harden. Someday you will be amazed that, by gosh, you really DO need another implementation of that dang PrintSpooler class, and it should have been an interface, a factory, and a set of implementation classes. D'oh!
For game development I advocate "Doing The Simplest Thing That Could Possibly Work". That includes using global variables (public static in C#), if that is an easy solution. You can always turn it into something more formal later. The "find all references" tool in Visual Studio makes this really easy.
That being said, there are very few cases where a global variable is actually the "correct" way to do something. So if you are going to use it, you should be aware of and understand the correct solution. So you can make the best tradeoff between "being lazy" and "writing good code".
If you are going to make something global, you need to fully understand why you are doing so.
In this particular case, it sounds like you're trying to get at content. You should be aware that ContentManager will automatically return the same content object if you ask for it multiple times. So rather than loading models into a global list, consider making your Game class's built-in ContentManager available via a public static property on your Game class.
Or, better still, there's a method that I prefer and think is a bit better: I explain it in my answer to another question. Basically, you make the content references private static in the classes that use them, and pass the ContentManager into public static LoadContent functions. This compartmentalises your use of static to individual classes, rather than using a global that is accessed from all over your program (which would be difficult to extricate later). It also correctly handles loading content at the correct time.
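A rough sketch of that second approach (the class and asset names are made up):
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public class Enemy
{
    // The static state is confined to this class instead of a global list.
    private static Model _model;

    // Called once from Game.LoadContent; ContentManager caches assets,
    // so repeated Load calls for the same asset return the same object anyway.
    public static void LoadContent(ContentManager content)
    {
        _model = content.Load<Model>("Models/Enemy");
    }

    public void Draw(Matrix view, Matrix projection)
    {
        _model.Draw(Matrix.Identity, view, projection);
    }
}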
I'd avoid using static as much as possible, over time you'll just end up with spaghetti code.
If you pass it in through the constructor, you're eliminating an unnecessary hidden dependency; low coupling is good. The fewer dependencies there are, the better.
I would suggest implementing a Singleton object which encapsulates the model list.
Have a look at the MSDN singleton implementation.
This is a matter of balance and trade-offs.
Of course, OOP purists will say to avoid such global variables at all costs, since they break code compartmentalization by introducing something that goes "out of the box" for any module, thus making it hard to maintain, change, debug, etc.
However, my personal experience has been that it should be avoided only if you are part of a very large enterprise solutions team, maintaining a very large enterprise-class application.
For other cases, encapsulating globally accessible data in a "global" object (or a static object, same thing) simplifies OOP coding to a great extent.
You may get the middle-ground by writing a global GetModels() function that returns the list of models. Or use DI to automatically inject the list of models.
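For example (hypothetical names), the middle ground can be as small as an interface that classes ask for in their constructors, whether you wire it up by hand or through a DI container:
using System.Collections.Generic;
using Microsoft.Xna.Framework.Graphics;

public interface IModelProvider
{
    IList<Model> GetModels();
}

public class Renderer
{
    private readonly IModelProvider _models;

    // The model list arrives through the constructor instead of a public
    // static field, so a test can pass in a fake provider with a few models.
    public Renderer(IModelProvider models)
    {
        _models = models;
    }
}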

AOP Dirty Tracking

In the past I have used a few different methods for doing dirty checking on my entities. I have been entertaining the idea of using AOP to accomplish this on a new project. This would require me to add an attribute to every property in my classes where I want the dirty-flag logic invoked when the property is set. If I have to add an extra line of code to each property for the attribute anyway, what is the benefit over just calling a SetDirty() method in the setters? I guess I am asking what the advantage, if any, of the AOP approach would be.
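For reference, the non-AOP alternative being weighed here is just the familiar manual pattern (names hypothetical), which also costs one extra line per property:
public class Customer
{
    private string _name;

    public bool IsDirty { get; private set; }

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            SetDirty(); // the manual call the AOP attribute would replace
        }
    }

    private void SetDirty()
    {
        IsDirty = true;
    }
}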
I'd say that not only is there no advantage in this case, there's a bit of a disadvantage. You're using the same number of lines of code whether you call SetDirty() or use AOP, but just calling SetDirty() is simpler and clearer, as far as intent goes.
AOP, honestly, is a bit oversold, I think. It adds another level of indirection, in terms of reading the code, that often it doesn't pay back.
The key thing to ask here is: does it help the next person reading this (which may be you a few months down the road) understand more quickly and clearly what I'm trying to do? If you have trouble figuring out what's better about the less straightforward approach, you probably shouldn't be using it. (And I say this as a Haskell programmer, which means I'm far from averse to non-straightforward approaches myself.)
The advantage is that should you decide to change the implementation of how to invoke the dirty flag logic, you'll only need to make one change (in the AOP method's body), not N changes (replacing all your SetDirty calls with something else).
I don't see any benefit if you have to decorate your entities with an attribute, especially if all you're doing is calling a single method. If the logic were more complex, then I could make an argument for using AOP.
If, let's say, each time you modify a property you wanted to track that change as a version, that would be more complex behavior worth injecting, and having it abstracted out of the property could be beneficial. At the same time, you would probably want to version changes to several properties at once, so I come back to there not being much value.
AOP is for cross-cutting concerns. This means you want a feature such as logging, security, etc., but that logic really does not belong in your class. This could apply to the dirty-flag logic, as the domain object should not care that it has been changed; that is up to your DirtyLogicUtility or whatever name it has.
For example, say you want to log every time a method gets called. You could place this in every function, but later on you might want to change the logic so that it is only logged on every other call.
AOP keeps your classes clean doing what they are supposed to do while leaving the other pieces alone.
Some AOP implementations, specifically PostSharp, allow you to apply the attribute at an Assembly level with wildcards as to which classes it applies to.
Why do you want the dirty check to be the responsibility of the entities? You can manage this somewhere else. The pattern is called Unit of Work.
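A minimal sketch of that idea (a hypothetical shape, not a specific library's API): the unit of work, not the entity, remembers what has changed.
using System.Collections.Generic;

public class UnitOfWork
{
    private readonly HashSet<object> _dirty = new HashSet<object>();

    // Repositories (or whatever mutates entities) report changes here,
    // so the entities themselves carry no dirty-tracking code at all.
    public void RegisterDirty(object entity)
    {
        _dirty.Add(entity);
    }

    public IEnumerable<object> DirtyEntities
    {
        get { return _dirty; }
    }

    public void Commit()
    {
        // persist everything in _dirty here, then start with a clean slate
        _dirty.Clear();
    }
}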
