As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Donald Knuth said that "premature optimization is the root of all evil", and I have gradually come to believe it.
So is it fair to say that, when writing an application, we should concentrate on completing the functionality without worrying about performance, until the low performance becomes unbearable?
I'm afraid that if I repeatedly use a pattern that slows the application down, fixing the problem afterwards may consume a considerable amount of time. Should I test the performance of a pattern before using it widely?
By "pattern" I mean things like choosing between Linq and a for-loop, using Delegate.BeginInvoke, Parallel.For, or Task<T>, disposing of IDisposable objects versus just ignoring them, etc.
Any reference materials are welcome.
I agree with the spirit of Knuth's quote about premature optimization, as it can cause code to become overly complex and unwieldy too early in development, impacting both the quality of the code and the time needed to complete a project.
Two concerns I have about your post:
You should have a sense of whether your functions/algorithms can theoretically scale and perform well enough to meet your requirements (e.g. the big-O complexity of your solution; see http://en.wikipedia.org/wiki/Analysis_of_algorithms )
The patterns you mention are actually concrete implementation items, only some of which are related to performance, e.g.
Parallel.For/Task - these are useful for gaining performance on multi-core systems
IDisposable - this is for resource management, and not something to be avoided
Linq vs. for-loop - this can be a point-optimization, but you'll need to benchmark/assess your use case to determine which is best for you (e.g. Linq can be moved to PLinq in some cases for parallelism)
Delegate.BeginInvoke - a non-optional component of thread synchronization
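On the "benchmark/assess your use case" point: a quick micro-benchmark usually settles whether a pattern is fast enough before you commit to it everywhere. A minimal sketch (in Python for illustration; the function names are made up for the example):

```python
import timeit

# Hypothetical micro-benchmark: measure two candidate patterns before
# adopting one across the codebase (here, a loop vs a comprehension).

def squares_loop(n):
    # Pattern A: explicit loop with append.
    result = []
    for i in range(n):
        result.append(i * i)
    return result

def squares_comprehension(n):
    # Pattern B: list comprehension.
    return [i * i for i in range(n)]

# Sanity check: both patterns must produce the same result.
assert squares_loop(10) == squares_comprehension(10)

loop_time = timeit.timeit(lambda: squares_loop(1000), number=1000)
comp_time = timeit.timeit(lambda: squares_comprehension(1000), number=1000)
print(f"loop: {loop_time:.4f}s  comprehension: {comp_time:.4f}s")
```

Run it on inputs that resemble your real workload; a benchmark on toy data can point the wrong way.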
Never code without concern for performance.
Code for performance up until the code would get more complex (e.g. parallel).
But with C# 5.0 even parallel is not complex.
If you have some calls you think you might need to optimize, then design for that.
Design the code so optimization does not change the interface to the method.
There are speed, memory, concurrency (if a server app), reliability, security, and supportability to consider.
Clean code is often the best code.
Don't get crazy until you know you have a performance problem but don't get sloppy.
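The "optimization should not change the interface" advice can be sketched concretely (Python for illustration; fib is a made-up stand-in for any hot call):

```python
from functools import lru_cache

# Sketch: the optimization (caching) hides behind the same call
# signature, so callers never need to change when you optimize later.

def fib_naive(n):
    # First-pass, clean implementation; fine until profiling says otherwise.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib(n):
    # Same interface as fib_naive; memoization is an internal detail.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Both produce identical results; only the cost differs.
assert fib(30) == fib_naive(30) == 832040
```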
In answering another question on SO, I told the poster they did not need a DataTable and that a DataReader would be faster and use less memory. Their response was that it still runs in half a second, so they didn't care. To me, that is sloppy code.
@JonhSanders I disagree that "code for performance up until the code would get more complex" will cause either bugs or incomplete code. For me, coding for performance is not the same as optimizing. On a first pass of anything but throwaway code, I code for performance - nothing exotic, just best practices. Where I see potential hot spots that I might need to come back and optimize, I write with optimization in mind. P.S. I agree on closing the question.
Closed 9 years ago.
I frequently find myself creating methods to hold small (<50 lines) algorithms. However, when I first learned methods, we were constantly taught that they were a way to condense/clean up code by housing commonly used code snippets inside a block.
I like to use methods not only for that purpose, but to house small snippets so that my main method is clean and understandable, and the "meat" of the code is hidden within those methods. Stylistically, is this incorrect?
There is nothing wrong with breaking up a large method into smaller ones to allow for readability.
It has several benefits, in particular if you keep each method in a single level of abstraction - making the large method read fluently and making each small method simple and easy to understand.
No, that is not incorrect. There are multiple goals you should have while writing code. Not repeating code by consolidating it into reusable classes and methods speaks to Maintainability. Factoring out items of work into separate methods improves Readability, which also happens to help Maintainability down the road.
Like all things, there is a balance to seek. Having to mentally traverse too many levels of a call stack to understand an algorithm or find a defect can become a drawback. But if a method is on the order of 50 lines, I find it difficult to believe you would hit this case.
Stylistically, at least in my experience, this is not incorrect. The reason I say this is that any program, whether written by one programmer or a hundred, will be shaped by the talents and experience of those who wrote it. In other words, there are many ways to solve a problem, and the questions you should ask yourself are: did my implementation work? Did it complete the task or feature? If so, then great!
Because you are concerned with style, I will refer you to Uncle Bob's SOLID principles, as many others have done with similar questions on style. You mentioned having an algorithm composed of <50 lines of code in a single method; I would argue for following Uncle Bob's Single Responsibility principle (the 'S' in S.O.L.I.D.) as best you can, when you can. This will challenge you to look at that <50-line single-method algorithm and consider breaking it up into smaller methods that each do one thing, and do it well. That way you gain testability and readability - two things that will always go a long way toward "good style".
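To make the idea concrete, here is an illustrative sketch (Python, with made-up function names) of a single method split into small, single-responsibility helpers so the top level reads like a summary:

```python
# Illustrative refactor: one 'do everything' method split into small,
# single-purpose helpers. Each helper does one thing; the top-level
# function reads like a description of the algorithm.

def parse_prices(lines):
    # Responsibility 1: turn raw text lines into numbers, skipping blanks.
    return [float(line.strip()) for line in lines if line.strip()]

def average(values):
    # Responsibility 2: compute the mean.
    return sum(values) / len(values)

def format_report(avg):
    # Responsibility 3: presentation.
    return f"average price: {avg:.2f}"

def price_report(lines):
    # The "main" method stays clean; the meat lives in the helpers.
    return format_report(average(parse_prices(lines)))

assert price_report(["1.0\n", "3.0\n", ""]) == "average price: 2.00"
```

Each helper is now also independently testable, which is where the readability split pays off twice.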
Closed 9 years ago.
I read this question, Asynchronous vs Multithreading - is there a difference, and searched Google for the differences.
What is the benefit of using asynchronous programming instead of multithreading?
And when should you use asynchronous programming instead of multithreading?
If your task can be done using asynchronous programming, then it is better to do it that way instead of going for multithreaded programming, for three reasons:
1: Performance
In multithreading, the CPU has to keep switching between threads. So even if a thread is doing nothing and just sitting there (or, more likely, repeatedly checking whether a condition is true so it can get on with whatever it was created to do), the CPU still switches to it, and each switch takes time. I don't think it would be very bad, but your performance surely takes a hit.
2: Simplicity & Brevity
Also, maybe it's just me, but asynchronous programming just seems more natural to me (and before you ask, no, I'm not a fan of JS). Beyond that, with threads you run into problems with shared variables, thread safety, and so on - all of which can be side-stepped by using asynchronous programming and callbacks.
3: Annoying implementations of threads
In Python there's this really horrible thing called the GIL (Global Interpreter Lock). Basically, CPython doesn't let two threads execute Python bytecode at the same time. So if you're thinking about using threads to spread CPU-bound work across a multi-core CPU, forget it.
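A small experiment makes the point (CPython assumed): splitting CPU-bound work across threads still produces the right answer, but because of the GIL it typically runs no faster than doing the work sequentially.

```python
import threading

# GIL sketch: two threads each do pure-Python CPU-bound counting.
# They interleave, but only one executes bytecode at any instant, so
# there is no parallel speed-up on a multi-core CPU (timings vary).

def count_up(n, results, index):
    total = 0
    for _ in range(n):
        total += 1
    results[index] = total

results = [0, 0]
threads = [
    threading.Thread(target=count_up, args=(1_000_000, results, i))
    for i in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The result is correct; the concurrency just doesn't buy CPU time.
assert sum(results) == 2_000_000
```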
There might be caveats in C# too, I don't know. These are just my 2 cents...
All that said, asynchronous programming and multithreading are really not that comparable. While multithreading may be used (inefficiently) to implement asynchrony, multithreading is a way to get concurrency, whereas asynchrony is a programming style, like OOP (object-oriented programming).
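To illustrate the style difference, here is a minimal asyncio sketch (Python): several I/O-style waits interleave on a single thread, with no locks or shared-state hazards. The sleep stands in for network or disk I/O.

```python
import asyncio

# Minimal async sketch: three "requests" wait concurrently on one
# thread. asyncio.sleep is a stand-in for real I/O (network, disk).

async def fetch(label, delay):
    await asyncio.sleep(delay)  # cooperative wait, not a blocked thread
    return label

async def main():
    # All three waits overlap: total time is ~0.1s, not 0.3s.
    return await asyncio.gather(
        fetch("a", 0.1), fetch("b", 0.1), fetch("c", 0.1)
    )

results = asyncio.run(main())
assert results == ["a", "b", "c"]
```

No thread is ever blocked; the event loop simply resumes each coroutine when its wait completes.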
Closed 10 years ago.
I have a lot of legacy code where JSON is parsed manually using a for loop, which takes O(n) time in general. I know Json.NET would be better in terms of time and space, but gaining some insight into how it works would help me make an informed decision about whether it's worth the effort to actually invest the time and manpower to move everything to Json.NET.
To paraphrase your question into a more general one, let's assume you were looking for advice on which JSON serialization implementation to choose for various scenarios.
I'm aware of three obvious answers to this question:
NewtonSoft JSON.NET
Provides an abundance of features and excellent performance
ServiceStack.Text
Provides simplicity and blazing performance
BCL JsonSerializer
Avoids the 3rd party library dependency, but is significantly slower
If you don't care about the 3rd party library dependency, go for the first option as it will give you performance and functionality. If you don't need a ton of features, evaluate whether ServiceStack.Text does what you need it to (if unsure, go with JSON.NET). In any other case, stick with what you have.
Also, don't spend time making your code faster by replacing your JSON code before you know that this particular area is a performance bottleneck (or otherwise warrants replacement, e.g. because it's a maintenance problem). If you are considering replacing code to gain performance, isolate a few methods to profile and benchmark your current code against similar scenarios using the alternate implementation or library, in order to avoid making a decision based on assumptions.
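As a sketch of that benchmarking advice (in Python for illustration; the naive parser is a made-up stand-in for hand-rolled legacy parsing), compare the current code and the candidate on a representative payload before deciding:

```python
import json
import timeit

# Hypothetical comparison: legacy hand-rolled parsing vs a library
# parser, benchmarked on the same payload. The manual parser below
# handles only flat {"key": int} objects -- a stand-in, not real code.

def manual_parse_pairs(text):
    body = text.strip()[1:-1]          # drop the surrounding braces
    pairs = {}
    for chunk in body.split(","):
        key, value = chunk.split(":")
        pairs[key.strip().strip('"')] = int(value)
    return pairs

payload = '{"a": 1, "b": 2, "c": 3}'

# First, verify the two implementations agree on the payload.
assert manual_parse_pairs(payload) == json.loads(payload)

manual_t = timeit.timeit(lambda: manual_parse_pairs(payload), number=10_000)
stdlib_t = timeit.timeit(lambda: json.loads(payload), number=10_000)
print(f"manual: {manual_t:.4f}s  json.loads: {stdlib_t:.4f}s")
```

Only the numbers from payloads shaped like your real data should drive the migration decision.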
Last, knowing how it works internally should not be a factor in your decision process unless you specifically are planning to be able to modify the source of it (or otherwise need to be able to understand it).
Closed 11 years ago.
So, I've heard some people say that regular expressions are extremely inefficient and slow (and I hear this especially with regard to C#). Is it true that regex is that slow, and is it really as bad to use as they make it out to be?
If it is that slow, what should I use in its place in large scale applications?
So, I've heard some people say that Regular Expressions is extremely inefficient and slow
That's not true. At least it is not true in all cases. It's just that there might be more adapted tools for some tasks than regular expressions. But claiming something like this and drawing such conclusions is simply wrong. There are situations where regexes work perfectly fine.
You will have to use them appropriately; it should not be a case of "if all you have is a hammer, everything looks like a nail".
Regexes are heavyweight and powerful, and they do have a performance impact. You should not use them for simple operations where, say, string operations like a substring check would have sufficed. And you should not use them for very complicated parsing, as you take a hit in both performance and, more importantly, readability.
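A tiny example of picking the simpler tool (Python for illustration): for a fixed literal, a plain substring test does the job; the regex only earns its keep when the pattern needs flexibility.

```python
import re

# Both checks below find the same substring; the plain string operator
# is simpler and usually faster for a fixed literal.

text = "error: disk full"

assert "disk full" in text                  # simple string test suffices
assert re.search(r"disk\s+full", text)      # regex, worth it only when the
                                            # whitespace/variants must flex
```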
And you should definitely not try to use regex for XML, HTML, etc.; use the appropriate parsers instead.
Bottomline: It is a tool. Have it in your toolkit, and use it appropriately.
Regular expressions are not as efficient as other techniques that are appropriate for specific situations.
However, that doesn't mean that one should not use them. If they provide adequate performance for your particular application, then using them is fine. If your application's performance is inadequate, do some testing to determine the cause of the problem rather than just eliminating all the regexes.
Also, there are smart ways to use regexes. For example, if you are doing the same operation a lot, then cache and reuse the regular expression object rather than recreating it every time it is used.
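A minimal sketch of that reuse advice (Python for illustration; the pattern and names are made up): compile the expression once and reuse the compiled object across calls.

```python
import re

# Compile once, reuse many times: avoids re-parsing the pattern on
# every call. (Python also keeps an internal pattern cache, but an
# explicit compiled object makes the reuse obvious and guaranteed.)

WORD = re.compile(r"\w+")

def count_words(line):
    # Reuses the module-level compiled pattern instead of recompiling.
    return len(WORD.findall(line))

assert count_words("one two three") == 3
assert count_words("") == 0
```

The same principle applies in C#: construct the `Regex` object once (or use `RegexOptions.Compiled` for hot paths) rather than rebuilding it per call.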
Closed 11 years ago.
We are about to implement a small automated securities trader. The trader will be built on top of the excellent quickfix FIX engine.
After due thought, we narrowed our options down to implementing it in C# or in Python. Please specify the pros and cons of each language for this task, in terms of:
Performance (The fact that Python uses a GIL troubles me in terms of thread concurrency)
Productivity
Scalability (We may need to scale this trader to a fully-sized platform)
EDIT
I've rephrased the question to make it less "C# vs. Python" (which I find irrelevant - both languages have their merits), but I'm simply trying to draw a comparison table before I make the decision.
I like both languages and I think both would be a good choice. The GIL might really be the most important difference, but I'm not sure it's a problem in your case. The GIL only affects code running in pure Python. I assume your tool depends more on I/O than on raw number crunching. If your I/O libraries handle the GIL correctly, they can execute concurrent code without problems. And even for number crunching you still have numpy.
My choice would depend on your existing knowledge. If you have experienced C# developers at hand, I would go for C#. If you are starting absolutely from scratch and it's really 50:50, then I would go for Python. It's easier to learn, free, and in many cases more productive.
And just to mention it: You might also have a look at IronPython. ;-)
For the "Performance" and "Scalability" points I would suggest C# (although a large part of performance depends on your algorithms). Productivity is largely subjective, but C# now has all the cool features like lambdas, anonymous methods and types, etc., which make it much more productive.