Best practice: Using redundant methods from external libraries [closed] - c#

Closed 8 years ago. This question is opinion-based and is not currently accepting answers.
In the code base I'm working on there are several examples of
if (!Directory.Exists(dir))
{
    Directory.CreateDirectory(dir);
}
According to the MSDN documentation (http://msdn.microsoft.com/en-us/library/54a0at6s(v=vs.110).aspx) this is redundant, because Directory.CreateDirectory won't overwrite an existing directory.
This could be seen as making the code clearer, as it's not obvious from the .CreateDirectory(dir) method that this is the behaviour.
On the flip side, this is code bloat and keeping it around (even adding it to a library/utility class) has its issues (means you have to read/maintain more lines of code for example).
What's considered best practice here?

It may look redundant, but I can see a reason why someone decided to go that way.
The main difference is:
Directory.Exists() returns just bool
Directory.CreateDirectory() returns DirectoryInfo
So even when the directory exists, there is additional work performed to get that DirectoryInfo instance, which may not be necessary at all.
Another thing that comes up is that you have to know that Directory.CreateDirectory does not overwrite the directory if it already exists! With the additional Directory.Exists call, even someone who doesn't know that can easily figure out what this piece of code does.
And I don't think there is a best practice here.

Personally, I would normally remove the redundant code.
This could be seen as making the code clearer, as it's not obvious from the .CreateDirectory(dir) method that this is the behaviour.
In general, I'd argue that would be better served by a comment, rather than a redundant code path. Adding extra code to avoid a lack of knowledge seems like a weak reason to include the check.
That being said, there is a potential (very minor) performance gain in avoiding the call to CreateDirectory, as that method will construct a DirectoryInfo instance. In practice, this is most likely "noise" (as IO calls tend to be relatively expensive anyway), so it's not something I would factor into the equation unless it proved to be a measured problem.

A race condition could cause the creation of a directory to fail even if the preliminary check has passed.
Therefore I consider this code incorrect, and I would dissuade you from using it.
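The race-free form is simply to call Directory.CreateDirectory unconditionally: as the MSDN page quoted above notes, it does nothing if the directory already exists, so there is no check-then-act window. A minimal sketch (the scratch-directory name is made up for illustration):

```csharp
using System;
using System.IO;

class CreateDirectoryDemo
{
    static void Main()
    {
        // A scratch directory under the system temp path (hypothetical name).
        string dir = Path.Combine(Path.GetTempPath(), "create-dir-demo");

        // No Exists() check needed: CreateDirectory is a no-op if the
        // directory is already there, so there is no check-then-act race.
        DirectoryInfo info = Directory.CreateDirectory(dir);

        // Calling it again is harmless and returns the same directory.
        DirectoryInfo again = Directory.CreateDirectory(dir);

        Console.WriteLine(Directory.Exists(dir));           // True
        Console.WriteLine(info.FullName == again.FullName); // True

        Directory.Delete(dir); // clean up
    }
}
```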

Related

How do C# Programmers effectively use "using ___;"? [closed]

Closed 2 years ago. This question needs details or clarity and is not currently accepting answers.
I'm a beginner C# developer and I'm branching out into reading some more advanced bits of code. However, I cannot wrap my head around how developers and programmers use "using" directives effectively. I understand how they work, and that public classes can have their methods accessed, but how do programmers know, just from picking up an API, how to use it?
Sorry if this question seems like a total breeze and as though I've misunderstood the concept entirely (maybe I have, haha), but it seems like something where, without extensively going through the API and its documentation, most people can still work through these things quite easily.
First of all, not sure if you are aware or not, but the using directive does not actually "import" or start "using" anything. using System; merely tells the compiler that whenever you use a name like DateTime, it should check System.DateTime and try to look for the type there. In fact, you can write C# without the using directive at all (unless you need to resolve a naming conflict), but of course the program will become unnecessarily "wordy".
As for the other part of your question, you don't begin writing a C# program starting with using. You first have to find the proper "tools" (classes) for the problem you are trying to solve by the program, and only then add using so that you can work with them efficiently without typing the namespace over and over. Moreover, most modern IDEs will add the directive automatically, either when you create a new file (adding some common namespaces), or when you use a class in a namespace that you forgot to import with using.
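A minimal sketch of the first point above: with or without the directive, the compiler resolves the same type, so using is purely shorthand.

```csharp
using System; // lets the compiler resolve DateTime as System.DateTime

class UsingDemo
{
    static void Main()
    {
        // Fully qualified: works even with no using directive at all.
        System.DateTime a = System.DateTime.Now;

        // Shorthand: identical meaning, resolved via the using directive above.
        DateTime b = DateTime.Now;

        // Both names refer to the very same type.
        Console.WriteLine(typeof(DateTime) == typeof(System.DateTime)); // True
    }
}
```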

Can someone please explain (nicely) why OOP Method Overloading is a good thing? [closed]

Closed 4 years ago. This question needs to be more focused and is not currently accepting answers.
First, please no spamming, because I am not necessarily an OOP devotee. That said, I have been a programmer on and off for almost 30 years and have created a lot of pretty cool production code systems/solutions in several industries. I've also done my share of break/fix, database development, etc., and even about 10 years as a web programmer (not developer), so I am not so much a newbie as someone trying to get an answer about something that frankly is eluding me.
I started as a "C" programmer in the early 1980s, and "C" served me well into the early 2000s (even today most scripting and higher-level languages use "C" syntactical elements).
That said, overloading seems to violate every principle of what I was taught were "good coding practices": it increases ambiguity, creating the opportunity to omit the code intended for a given condition, or to actually run a routine you didn't expect because some condition fell through the cracks. It also generally seems to create LOTS of confusion for learners.
I am not saying overloading is bad per se; I just want to better understand its practical application to real problems, other than simply as a way to provide input validation, or to handle inputs from sources you have no control over in an API, or something else whose type you don't necessarily know (again, I'm not clear on how or why that could actually happen either). C# has a lot of parse and try/catch facilities to handle this, as do most OOP languages.
In over a decade, I have yet to get a straight, non-judgmental and dare I say unsnarky answer to this question. Surely there is someone who can offer a reasonable explanation of why it is used.
So I pose the question to you, the Stack Overflow gurus: is having a method/function that is potentially callable in multiple different ways, with multiple exclusive code segments, really a good thing, or does it just suggest a lack of good planning when designing software? Again, not knocking, judging, or disparaging, I just don't get it... please enlighten me!
I'd say std::to_string is a pretty good example of good use of overloading. Why would you want to have different functions for converting different types to std::string? You don't. You just want one - std::to_string and you want it to behave sensibly whatever type of argument you give it - and it does just that. Using overloading keeps the client code simple.
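The same idea translates to C# (a hypothetical sketch; the Pretty method and its class are made up): overloading lets client code use one name, and the compiler picks the right implementation from the argument type.

```csharp
using System;

static class Describe
{
    // One name, several parameter types: the compiler picks the overload.
    public static string Pretty(int value)    => $"int: {value}";
    public static string Pretty(double value) => $"double: {value}";
    public static string Pretty(bool value)   => $"bool: {value}";

    static void Main()
    {
        // Client code is uniform; no PrettyInt/PrettyDouble/PrettyBool needed.
        Console.WriteLine(Pretty(42));   // int: 42
        Console.WriteLine(Pretty(3.5));
        Console.WriteLine(Pretty(true));
    }
}
```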

Hundreds of failing unit tests [closed]

Closed 4 years ago. This question needs to be more focused and is not currently accepting answers.
I have inherited a very old (first commits are in 1999) code base and have found 500 of the 2000 or so unit tests to be failing. My question is, should I go through each test manually and check if it is still relevant or should I start over?
Nobody here can answer this as such, but you have to ask for each test:
Does this test still make sense? If not, remove it.
Is the test testing something that should work? Do something to fix it up.
Is the test conceptually useful, but what it tests has changed so it is now failing? Rewrite it so that it works in its new way.
How much effort will the test take to fix, versus how much value does it provide? If it's a lot of effort and low value, maybe remove it...
We can't really say whether you should do one thing or another.
It's probably worth just looking at the tests, and especially at the effort of fixing each test, before starting any real work.
You may also need to consult with some kind of test-manager for your group, and seek their input to the coverage/bug rate/common problems, etc for that part of the code.
When I look at old tests in our code base, it's sometimes best to remove, sometimes worth "fixing up" and sometimes worth starting from scratch. Unless you are familiar with the test, it's hard to say before you spend some effort on investigating the issue...

Could you please explain this statement by providing an implementation of the design pattern it alludes to? [closed]

Closed 6 years ago. This question needs to be more focused and is not currently accepting answers.
The Managed Threading Best Practices page states:
Avoid providing static methods that alter static state. In common server scenarios, static state is shared across requests, which means multiple threads can execute that code at the same time. This opens up the possibility of threading bugs. Consider using a design pattern that encapsulates data into instances that are not shared across requests. Furthermore, if static data are synchronized, calls between static methods that alter state can result in deadlocks or redundant synchronization, adversely affecting performance.
I understand all the rest except for the one sentence that is in bold.
How would you do this without essentially changing the field from a static one to an instance one? Isn't that saying, "In a server scenario, avoid using static class-level members as much as you can?"
If it isn't, could you please provide an implementation of the design pattern it is alluding to?
How would you do this without essentially changing the field from a static one to an instance one?
No one can possibly answer this question without knowing why you thought that putting something in a static field was a good idea in the first place.
Isn't that saying, "In a server scenario, avoid using static class-level members as much as you can?"
No. To be clear, that is a good idea. But that's not what this sentence is trying to communicate. It is saying if you have a problem that you think could be solved by making a static method that modifies static state, then maybe you should consider finding some other way to solve the problem.
If it isn't, could you please provide an implementation of the design pattern it is alluding to?
Design patterns exist to solve problems. You haven't said what problem you're solving, so it's impossible to recommend a pattern.
Look, suppose you're planning on constructing a building on sand, and I tell you that only fools build on sand, and you then say OK, give me a design for a building that still meets my needs, but not built on sand. I don't know what your needs are and I don't know why you thought that building on sand was a good idea in the first place, so no, I can't do that. But that does not change the fact that only fools build on sand.
Are you thinking of modifying static state in a multithreaded server scenario? That's a really foolish thing to do. Find another way to do whatever you want to do. How? I haven't the faintest idea; I don't know what you're trying to do. But that doesn't change the fact that you're unlikely to be successful by modifying static state on a multithreaded server.
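A minimal sketch of the contrast the quoted guidance is drawing (all names are hypothetical): the static version shares one mutable field across every thread in the server, while the instance version gives each request its own state, so there is nothing to synchronize.

```csharp
using System;

// Risky: static state is shared across all requests/threads.
static class SharedCounter
{
    static int _count;
    public static void Increment() => _count++; // not thread-safe
    public static int Count => _count;
}

// Safer: each request gets its own instance, so no cross-thread sharing.
public class RequestContext
{
    int _count;
    public void Increment() => _count++;
    public int Count => _count;
}

class Demo
{
    static void Main()
    {
        // Two "requests", each with private state: no locks needed.
        var a = new RequestContext();
        var b = new RequestContext();
        a.Increment();
        a.Increment();
        b.Increment();
        Console.WriteLine(a.Count); // 2
        Console.WriteLine(b.Count); // 1
    }
}
```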

Why would you need to emit IL code? [closed]

Closed 9 years ago. This question needs to be more focused and is not currently accepting answers.
I work in a code base that is quite large and today I found a project that was emitting IL code inside a normal class.
The project containing the emitted IL code was an implementation of a Service Locator (MSDN description).
What are the advantages of doing this, and why would it be done as opposed to just using the C# language?
Typically this is done to avoid the overhead of repeated reflection when the necessary information is only available at runtime. You use reflection once, which can be slow depending on what you do, to build a new piece of code that works directly with the data given to it, without using reflection on every call.
Advantages:
Performance
Disadvantages:
Hard to debug
Hard to get right
Hard to read code afterwards
Steep learning curve
So you need to ensure it's really worth the price before embarking on this.
Note that this is a general answer. In the specific case you came across, there is no way to answer why this was done nor which particular advantages (or disadvantages) you would have without actually seeing the code.
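As a concrete (if toy) illustration of the reflect-once, emit-once pattern described above, the DynamicMethod and ILGenerator classes let you emit a small method once and then call it through a delegate with no per-call reflection cost. This is a generic sketch, not the Service Locator code from the question:

```csharp
using System;
using System.Reflection.Emit;

class EmitDemo
{
    // Build a method equivalent to: int Add(int a, int b) => a + b;
    public static Func<int, int, int> BuildAdd()
    {
        var dm = new DynamicMethod("Add", typeof(int),
                                   new[] { typeof(int), typeof(int) });
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0); // push first argument
        il.Emit(OpCodes.Ldarg_1); // push second argument
        il.Emit(OpCodes.Add);     // add them
        il.Emit(OpCodes.Ret);     // return the result

        // Compile once to a delegate; subsequent calls are direct calls.
        return (Func<int, int, int>)dm.CreateDelegate(typeof(Func<int, int, int>));
    }

    static void Main()
    {
        var add = BuildAdd();
        Console.WriteLine(add(2, 3)); // 5
    }
}
```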
There are many uses for this.
One of the more often used scenario is for changing/injecting code on the fly:
.NET CLR Injection: Modify IL Code during Run-time
A good tutorial that helped me understand a good use for it is:
Dynamic... But Fast: The Tale of Three Monkeys, A Wolf and the DynamicMethod and ILGenerator Classes
Good luck
