Is abusing IDisposable to benefit from "using" statements considered harmful? [closed] - c#

The purpose of the interface IDisposable is to release unmanaged resources in an orderly fashion. It goes hand in hand with the using keyword that defines a scope after the end of which the resource in question is disposed of.
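For reference, the intended use looks like this (a minimal sketch with a made-up file name):
using System.IO;
...
using (var stream = new FileStream("data.bin", FileMode.Open))
{
    // read from the stream
} // stream.Dispose() runs here, releasing the file handle,
  // even if an exception was thrown inside the block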
Because this mechanism is so neat, I've been repeatedly tempted to have classes implement IDisposable so I can abuse the mechanism in ways it's not intended for. For example, one could implement classes to handle nested contexts like this:
class Context : IDisposable
{
    // Put a new context onto the stack and return it
    public static Context PushContext() { ... }

    // Remove the topmost context from the stack
    private static void PopContext() { ... }

    // Retrieve the topmost context
    public static Context CurrentContext { get { ... } }

    // Disposing of a context pops it from the stack
    public void Dispose()
    {
        PopContext();
    }
}
Usage in calling code might look like this:
using (Context.PushContext())
{
    DoContextualStuff(Context.CurrentContext);
} // <-- the context is popped upon leaving the block
(Please note that this is just an example and not to the topic of this question.)
The fact that Dispose() is called upon leaving the scope of the using statement can also be exploited to implement all sorts of things that depend on scope, e.g. timers. This could also be handled with a try ... finally construct, but then the programmer would have to call some method (e.g. Context.Pop) manually, which the using construct does for them.
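For comparison, the try/finally equivalent of the context example might look like this (a sketch assuming a hypothetical public Context.Pop() method instead of the private PopContext):
Context.PushContext();
try
{
    DoContextualStuff(Context.CurrentContext);
}
finally
{
    Context.Pop(); // easy to forget; the using statement does this automatically
}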
This usage of IDisposable does not coincide with its intended purpose as stated in the documentation, yet the temptation persists.
Are there concrete reasons why this is a bad idea that would dispel my fantasies forever, for example complications with garbage collection, exception handling, etc.? Or should I go ahead and indulge myself by abusing this language concept in this way?

So in ASP.NET MVC views, we see the following construct:
using (Html.BeginForm())
{
    // some form elements
}
An abuse? Microsoft says no (indirectly).
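The helper returns an MvcForm whose Dispose writes the closing tag. Conceptually it works something like this (a simplified sketch, not the actual MVC source):
using System.IO;

public class MvcForm : IDisposable
{
    private readonly TextWriter _writer;

    public MvcForm(TextWriter writer)
    {
        _writer = writer;
        _writer.Write("<form>");   // opening tag written when the form scope starts
    }

    public void Dispose()
    {
        _writer.Write("</form>");  // closing tag written when the using block ends
    }
}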
If you have a construct that requires something to happen once you're done with it, IDisposable can often work out quite nicely. I've done this more than once.

"Is it an abuse of the IDisposable interface to use it this way"? Probably.
Does using using as a purely "scoping" construct make for more obvious intent and better readability of code? Certainly.
The latter trumps the former for me, so I say use it.

You certainly wouldn't be the first one to 'abuse' IDisposable in that way. Probably my favorite use of it is in timers, as the StatsD.NET client demonstrates:
using StatsdClient;
...
using (statsd.LogTiming("site.db.fetchReport"))
{
    // do some work
}
// At this point your latency has been sent to the server
In fact, I'm pretty sure Microsoft themselves use it in some libraries. My rule of thumb would be - if it improves readability, go for it.
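Such a disposable timing scope only takes a few lines to write; a minimal sketch (hypothetical names, not the actual StatsD.NET implementation):
using System;
using System.Diagnostics;

public sealed class TimingScope : IDisposable
{
    private readonly string _name;
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();

    public TimingScope(string name) => _name = name;

    public void Dispose()
    {
        _stopwatch.Stop();
        // report the elapsed time when the scope ends
        Console.WriteLine($"{_name}: {_stopwatch.ElapsedMilliseconds} ms");
    }
}

// usage:
// using (new TimingScope("site.db.fetchReport")) { /* do some work */ }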

Related

C# 8.0 Using Declarations [closed]

In C# 8.0 we can now use using declarations. Are they really such a good idea? Consider this using statement:
private int SomeMethod()
{
    using (var t = new StreamWriter("somefile.txt"))
    {
    } // dispose of variable t
    // 100 lines of code
}
As soon as the closing brace is reached, the variable t is disposed of. With using declarations, the scenario is different:
private int SomeMethod()
{
    using var t = new StreamWriter("somefile.txt");
    // 100 lines of code
} // dispose of variable t
The variable t is only disposed at the end of the method. Using statements seem more efficient to me, because you only keep the object "alive" for as long as you need it.
There are as many answers as there are scenarios.
In your case, for example, it could be any of these:
The function is big enough that it would make sense to split it. Remember that in modern programming, with unit testing in mind, units should be small and each function should do one specific thing.
The 100 lines run quickly. If that's the case, then it's fine to use the new, more readable form.
The same resource is needed a few lines below. Then why not use the same instance and dispose of it at the end?
Something else in the remaining lines takes time. Then it does not make sense to keep an item undisposed (like a Stream), and the old form should be used.
The list could go on. There is no one-size-fits-all answer, but in most cases I think the first point applies.
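And if you want the declaration syntax but still need the resource released before the remaining lines run, a nested block restores the narrower scope (a minimal sketch):
private int SomeMethod()
{
    {
        using var t = new StreamWriter("somefile.txt");
        // use t here
    } // t is disposed here, at the end of the nested block

    // 100 lines of code
    return 0;
}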

ASP.NET Core - Any reason not to use a parameter object with dependency injection? [closed]

I like the DI feature of ASP.NET Core, but am finding that some of my classes end up with huge constructor parameter signatures...
public class Foo {
    private IBar1 _bar1;
    private IBar2 _bar2;
    // lots more here...

    public Foo(IBar1 bar1, IBar2 bar2, lots more here...) {
        _bar1 = bar1;
        _bar2 = bar2;
        // ...
    }

    public void DoSomething() {
        // Use _bar1
    }
}
In case this looks like a code smell, it's worth pointing out that any controller is going to use AutoMapper, an email service and 2 or 3 managers related to ASP.NET Identity, so I have 4 or 5 dependencies before I start injecting a single repository. Even if I only use 2 repositories, I can end up with 6 or 7 dependencies without actually violating any SOLID principles.
I was wondering about using a parameter object instead. I could create a class that has a public property for every injected dependency in my application, takes a constructor parameter for each one, and then just inject this into each class instead of all the individual Bars...
public class Foo {
    private IAllBars _allBars;

    public Foo(IAllBars allBars) {
        _allBars = allBars;
    }

    public void DoSomething() {
        // Use _allBars.Bar1
    }
}
The only disadvantage I can see is that it would mean that every class would have every dependency injected into it via the parameter object. In theory, this sounds like a bad idea, but I can't find any evidence that it would cause any problems.
Does anyone have any comments? Am I letting myself in for potential trouble by trying to make my constructor code neater?
What you're describing sounds like the service locator pattern, and while it seems tempting to simplify your code by eliminating all those constructor parameters, it usually ends up hurting maintainability in the long run. Check out Mark Seemann's post Service Locator violates encapsulation for more details about why it should be avoided.
Generally, when you find yourself with a class with dozens of constructor parameters, it means that class might have too many responsibilities. Can it be decomposed into a number of smaller classes with narrower goals? Rather than introducing a "catch-all" class that knows about everything, maybe there's a complex part of your application that you can abstract behind a facade.
Sometimes, you do end up with large coordinator classes that have many dependencies and that's okay in certain circumstances. However, if you have many of these it's usually a design smell.
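For example, rather than a single IAllBars that knows about everything, related dependencies can be grouped behind a focused facade. A sketch with hypothetical names:
using System.Threading.Tasks;

// Low-level dependencies (hypothetical)
public interface IEmailService { Task SendAsync(string to, string subject); }
public interface IUserManager  { Task<string> GetEmailAsync(string userId); }

// One meaningful abstraction that hides both of them
public interface IAccountNotifier
{
    Task NotifyPasswordChangedAsync(string userId);
}

public class AccountNotifier : IAccountNotifier
{
    private readonly IEmailService _email;
    private readonly IUserManager _users;

    public AccountNotifier(IEmailService email, IUserManager users)
    {
        _email = email;
        _users = users;
    }

    public async Task NotifyPasswordChangedAsync(string userId)
    {
        var address = await _users.GetEmailAsync(userId);
        await _email.SendAsync(address, "Your password was changed");
    }
}

// A controller now takes one IAccountNotifier instead of several low-level services.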

Is it bad practice to have an empty class as a base class, with the expectation that the class may have members in the future? [closed]

Simple example:
public class Food
{
    public virtual void Eat()
    {
        StuffInMouth();
    }
}

public class Fruit : Food
{
    // Nothing here yet, but likely could be in the future
    // Is this bad from a .NET/C# style guidelines perspective?
}

public class Apple : Fruit
{
    public override void Eat()
    {
        Clean();
        base.Eat();
    }
}

public class Orange : Fruit
{
    public override void Eat()
    {
        Peel();
        base.Eat();
    }
}
As simple as I can put it, it is called Speculative Generality.
Reasons for the Problem: Sometimes code is created "just in case" to support anticipated future features that never get implemented. As a result, code becomes hard to understand and support.
As Steve McConnell points out in Code Complete - 2,
Programmers are notoriously bad at guessing what functionality might be needed someday.
1. Requirements aren't known, so the programmer must guess: Wrong guesses will mean the code must be thrown away.
2. Even a close guess will be wrong about the details: These intricacies will undermine the programmer's assumptions - the code must be (or should be) thrown away.
3. Other/future programmers may assume the speculative code works better or is more necessary than it is: They build code on the foundation of speculative code, adding to the cost when the speculative code must be removed or changed.
4. The speculative generality adds complexity and requires more testing and maintenance: This adds to the cost and slows down the entire project.
Credits: Code Complete - 2 | Pluralsight course on refactoring.
IMHO, it is an extra abstraction layer with no added value.
It adds unnecessary complexity, so in my opinion it's bad practice and an example of YAGNI.
Yes, it is important to realize that while doing things like this can save time (if the code is ever used), it is typically better to code for the immediate present, or at least to recognize when you are coding for a future that may never come.
There is also maintenance overhead of implementing things too early.
No one said a class must have any members. A class represents a category of objects, so if there are no useful properties to give it (in the conceptual sense, not the language's sense), it's perfectly fine for it to be empty. What matters is whether the class represents a meaningful category of objects your code needs to work with.
In your case, should your code generally operate on food, having a common Food ancestor makes sense. However, a better approach may be to introduce an IFood interface, effectively decoupling the food contract from the actual inheritance hierarchy. For example, a meaningful hierarchy may start with an Animal class, but not every animal is considered food (disclaimer: this is a rough example).
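A sketch of that idea, with illustrative class names:
// Decoupling the "food" contract from the inheritance hierarchy
public interface IFood
{
    void Eat();
}

public class Animal { /* common animal members */ }

// Only the animals that really are food implement the contract
public class Chicken : Animal, IFood
{
    public void Eat() { /* ... */ }
}

public class Eagle : Animal
{
    // not food, so no IFood
}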

Initializing an instance with no use [closed]

I'm a little confused about why type instances are allowed to be created without ever being used, and why the compiler doesn't emit even a warning about it.
public void M()
{
    new int();
    new object();
}
I've never created an instance without assigning it to a variable or calling its members, and if I saw a line like new SomeType(); I would consider it a typo. I understand that technically a .ctor can assign some static fields or do something else it's not supposed to do, but I don't consider that a sufficient argument for not emitting a warning.
Are there any patterns where ignoring an instance is appropriate? What am I missing?
Additional points not clear for me:
1. CodeAnalysis gives a warning "CA1806: Do not ignore method results" for object but not for int or any other value type.
2. The compiler doesn't emit IL for ignored structs, even without the optimization flag.
Instantiating an object can have side effects in C#.
The constructor could do almost anything, such as creating a database entry, writing a text file, or updating a static property somewhere before going out of scope.
Having said that, it is not good programming style to instantiate an object for the sole purpose of producing a side effect. That is what the CodeAnalysis warning is implying.
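A contrived sketch of such a side effect (hypothetical type, purely illustrative):
using System.Collections.Generic;

public class AuditEntry
{
    public static readonly List<AuditEntry> All = new List<AuditEntry>();

    public AuditEntry()
    {
        All.Add(this);   // side effect: the instance registers itself
    }
}

// "new AuditEntry();" on its own line is not a no-op:
// it grows the static list even though the result is discarded.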
I understand that technically .ctor can assign some static fields or do something else it's not supposed to do, but I don't consider it a sufficient argument for not emitting a warning
As Eric Lippert said
My usual response to “why is feature X not implemented?” is that of course all features are unimplemented until someone designs, implements, tests, documents and ships the feature, and no one has yet spent the money to do so. And yes, though I have famously pointed out that even small features can have large costs, this one really is dead easy, obviously correct, easy to test, and easy to document. Cost is always a factor of course, but the costs for this one really are quite small.
http://blogs.msdn.com/b/ericlippert/archive/2009/05/18/foreach-vs-foreach.aspx

Is it BAD or GOOD idea to validate constructor parameters in constructor method of an immutable? [closed]

You have an immutable object, and you set its internal variables in the constructor, which accepts a couple of parameters.
Question:
Do you see any problems with validating constructor parameters in the constructor of an immutable object and throwing ArgumentException if they are not valid?
(To me it makes sense, but I wanted to ask in case there are better ways or something not OK with this - for example, whether it is better design to move validation from the constructor to a factory.)
Or, to generalize the question:
Is it OK to put business-rule logic in constructors? Or should constructors always do nothing more than set the object's internals?
Thanks
In a way, it makes sense to validate in the constructor itself because you know that all usages of it will pass through that single point, and any other developer that will use your code will be protected from making mistakes because of your "low-level" validations.
If you move the validation higher up the call chain, you leave the class code cleaner but you expose the code to the possibility of "you're using it wrong" bugs.
Constructor validation has a slight problem in case of invalid data: What do you do then? You have to throw an exception, which might be awkward and also a performance hit, if you create "invalid" instances often.
To get rid of try ... catch every time you instantiate the object, you would have to create a factory anyway.
I think the factory is a good approach, but in a slightly different way - validate the arguments given to the factory method and only then create a (valid) instance.
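A sketch of that factory approach, using a hypothetical Temperature type: validate first, and only construct valid instances, so callers never need a try ... catch.
public sealed class Temperature
{
    public double Celsius { get; }

    private Temperature(double celsius) => Celsius = celsius;

    public static bool TryCreate(double celsius, out Temperature result)
    {
        if (celsius < -273.15)
        {
            result = null;
            return false;        // no exception, no invalid instance
        }
        result = new Temperature(celsius);
        return true;
    }
}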
A class should, to the best of its ability, document the guarantees it makes, and do its best to keep itself in a valid state at all times. Any incoming calls that are either inappropriate or would put the object in an invalid state should generate exceptions.
This holds true for constructors too. A constructor that doesn't validate its inputs makes it possible for others to create invalid instances of your class. But if you always validate, then anyone with a reference to your class can be confident that it is valid.
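A minimal sketch of what that looks like for an immutable type (hypothetical EmailAddress class):
using System;

public sealed class EmailAddress
{
    public string Value { get; }

    public EmailAddress(string value)
    {
        if (string.IsNullOrWhiteSpace(value) || !value.Contains("@"))
            throw new ArgumentException("Not a valid e-mail address.", nameof(value));

        Value = value;   // from here on, every instance is known to be valid
    }
}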
If it was me I'd validate the parameters before I pass them into the constructor. You never know how your code is going to evolve so doing the validation in a factory as you suggest should provide a bit more visibility and feels 'cleaner'.
If you have a choice for where to raise an exception, just go with wherever you're more likely to remember to surround it with a try..catch; it helps to consider other users of your codebase too. This more often than not depends on the purpose of the class and how you see it being used. However, consistency is also important.
Sometimes it's useful not to raise exceptions in either place and instead have a separate ValidateInstance() function for immutable types. Your other choices are, as you say, at class creation (via factory or constructor) or at class usage (usually a bad idea if an error can be raised sooner, but it sometimes makes sense).
Putting them in the constructor has the advantage that they will also surface in a factory method, if you choose to make one later.
HTH
