I have a lot of legacy code where JSON is parsed manually with a for loop, which takes O(n) time in general. I know Json.NET would be better in terms of time and space, but gaining insight into how it works would help me make an informed decision about whether it's worth the effort to invest the time and manpower to move everything to Json.NET.
To paraphrase your question as a more general one, let's assume you are looking for advice on which JSON serialization implementation to choose for various scenarios.
I'm aware of three obvious answers to this question:
Newtonsoft Json.NET
Provides an abundance of features and excellent performance
ServiceStack.Text
Provides simplicity and blazing performance
BCL JsonSerializer
Avoids a third-party library dependency, but is significantly slower
If you don't mind the third-party library dependency, go for the first option, as it will give you both performance and functionality. If you don't need a ton of features, evaluate whether ServiceStack.Text does what you need (if unsure, go with Json.NET). In any other case, stick with what you have.
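For illustration, the Json.NET route usually collapses a hand-rolled parsing loop into a single call; the Order type below is a made-up stand-in for whatever your legacy loop produces:

```csharp
using System;
using System.Collections.Generic;
using Newtonsoft.Json;

// Hypothetical model standing in for whatever your legacy parser builds.
public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

class Program
{
    static void Main()
    {
        string json = "[{\"Id\":1,\"Customer\":\"Acme\"},{\"Id\":2,\"Customer\":\"Contoso\"}]";

        // One call replaces the manual for-loop parsing.
        var orders = JsonConvert.DeserializeObject<List<Order>>(json);

        foreach (var order in orders)
            Console.WriteLine("{0}: {1}", order.Id, order.Customer);
    }
}
```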
Also, don't spend time replacing your JSON code for speed before you know that this particular area is a performance bottleneck (or otherwise warrants replacement, e.g. because it's a maintenance problem). If you are considering replacing code to gain performance, isolate a few methods and benchmark your current code against the alternate implementation in similar scenarios, so that the decision rests on measurements rather than assumptions.
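A minimal sketch of such a benchmark, assuming ParseManually is a placeholder for your existing for-loop parser:

```csharp
using System;
using System.Diagnostics;
using Newtonsoft.Json;

static class JsonBenchmark
{
    // Placeholder for the legacy for-loop parser being evaluated.
    static object ParseManually(string json)
    {
        return json; // substitute your real parsing routine here
    }

    public static void Run(string json, int iterations)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            ParseManually(json);
        sw.Stop();
        Console.WriteLine("Manual parse: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < iterations; i++)
            JsonConvert.DeserializeObject<object>(json);
        sw.Stop();
        Console.WriteLine("Json.NET:     {0} ms", sw.ElapsedMilliseconds);
    }
}
```

Run this in a Release build and discard the first iteration or two, so JIT warm-up doesn't skew the numbers.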
Last, knowing how it works internally should not be a factor in your decision unless you specifically plan to modify its source (or otherwise need to be able to understand it).
Donald Knuth said that "premature optimization is the root of all evil", and I have gradually come to believe the saying.
So is it fair to say that, when writing an application, we should concentrate on completing the functionality, without worrying about performance, until we can no longer bear how slow it is?
I'm afraid that if I use a wrong pattern in many places and it slows down the application, fixing the issue could consume a considerable amount of time. Should I test the performance of a pattern before using it widely?
The patterns I mention include choices such as LINQ versus a for-loop, Delegate.BeginInvoke, Parallel.For, Task<T>, disposing an IDisposable versus just ignoring it, etc.
Any reference materials are welcome.
I agree with the spirit of Knuth's quote about premature optimization, as it can cause code to become overly complex and unwieldy too early in development, impacting both the quality of the code and the time needed to complete a project.
Two concerns I have about your post:
You should have a sense of whether your functions/algorithms can theoretically scale and perform well enough to meet your requirements (e.g. the big-O complexity of your solution; see http://en.wikipedia.org/wiki/Analysis_of_algorithms).
The patterns you mention are actually concrete implementation choices, only some of which are related to performance, e.g.:
Parallel.For/Task - these are useful for gaining performance on multi-core systems (see the sketch after this list)
IDisposable - this is about resource management, and not something to be avoided
Linq vs. for-loop - this can be a point optimization, but you'll need to benchmark your use case to determine which is best for you (e.g. Linq can be moved to PLinq in some cases for parallelism)
Delegate.BeginInvoke - a way to invoke a delegate asynchronously; a concurrency mechanism rather than a performance switch
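As a small sketch of the first item (the workload here is purely illustrative): Parallel.For only pays off when each iteration does enough work to amortize the scheduling overhead, which is exactly why you benchmark rather than assume:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        const int n = 10000000;
        var data = new double[n];

        // Sequential baseline.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
            data[i] = Math.Sqrt(i);
        Console.WriteLine("Sequential:   {0} ms", sw.ElapsedMilliseconds);

        // Same work spread across cores; faster only if per-iteration
        // work outweighs the scheduling overhead.
        sw.Restart();
        Parallel.For(0, n, i => data[i] = Math.Sqrt(i));
        Console.WriteLine("Parallel.For: {0} ms", sw.ElapsedMilliseconds);
    }
}
```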
Never code without concern for performance.
Code for performance up to the point where the code would become more complex (e.g. going parallel).
But with C# 5.0, even parallel code is not complex.
If you have some calls you think you might need to optimize later, design for that.
Design the code so that optimization does not change the interface to the method.
There is speed, memory, concurrency (if a server app), reliability, security, and supportability to consider.
Clean code is often the best code.
Don't go crazy until you know you have a performance problem, but don't get sloppy either.
In answering another question on SO, I told the poster they did not need a DataTable and that a DataReader would be faster and use less memory. Their response was that it still runs in half a second and they don't care. To me that is sloppy code.
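For reference, the two approaches from that anecdote look roughly like this; the connection string, table, and columns are placeholders:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

class ReaderVsTable
{
    // DataTable: materializes and caches every row, with per-row bookkeeping.
    static DataTable LoadTable(string connStr)
    {
        var table = new DataTable();
        using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", connStr))
            adapter.Fill(table);
        return table;
    }

    // DataReader: streams rows forward-only, holding one row at a time.
    static List<string> LoadNames(string connStr)
    {
        var names = new List<string>();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT Name FROM Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    names.Add(reader.GetString(0));
        }
        return names;
    }
}
```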
@JonhSanders I disagree that "code for performance up to the point where the code would become more complex" will cause either bugs or incomplete code. For me, coding for performance is not the same as optimizing. On a first pass of anything but throwaway code, I code for performance: nothing exotic, just best practices. Where I see potential hot spots that I might need to come back and optimize, I write with optimization in mind. P.S. I agree on closing the question.
I'm implementing a SOAP web service for sending thousands of emails and storing thousands of XML response records in a local database (C#, Visual Studio 2012).
I would like to make my service consumer as fast and lightweight as possible.
I need to know some of the considerations. I always have a feeling that my code should run faster than it does.
E.g.
I've read that using DataSets increases overhead, so should I use lists of objects instead?
Does using an ORM introduce slowness into my code?
Is a console application faster than a WinForms app? The user needs no GUI to deal with; some parameters are simply sent to the app, which invokes some methods.
What are the most efficient ways to deal with a SOAP Web Service?
Make it work, then worry about making it fast. If you try to guess where the bottlenecks will be, you will probably guess wrong. The best way to optimize something is to measure real code before and after.
DataSets, ORMs, WinForms apps, and console apps can all run plenty fast. Use the technologies that suit you, then tune for speed if you actually need to.
Finally, if you do have a performance problem, changing your choice of algorithms to better suit your problem will likely yield a much greater performance improvement than changing any of the technologies you mentioned.
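To make the algorithm point concrete, here is a hedged illustration with made-up names: when checking thousands of incoming response records against stored IDs, swapping a linear scan for a hash lookup typically matters far more than DataSet-vs-list or console-vs-WinForms choices:

```csharp
using System.Collections.Generic;
using System.Linq;

class DuplicateCheck
{
    // O(n^2): for each incoming record, scan the whole list of stored IDs.
    static List<string> FindNewSlow(List<string> incoming, List<string> stored)
    {
        return incoming.Where(id => !stored.Contains(id)).ToList();
    }

    // O(n): one pass to build the set, then constant-time membership tests.
    static List<string> FindNewFast(List<string> incoming, List<string> stored)
    {
        var seen = new HashSet<string>(stored);
        return incoming.Where(id => !seen.Contains(id)).ToList();
    }
}
```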
Considering my personal experience with SOAP, in this scenario I would say your main concern should be how you retrieve this information from your database (procedures, views, triggers, indexes, etc.).
The difference between a console app, a WinForms app, and a web app isn't that relevant.
After the app is done, run a thorough stress test on it to see where your performance problem lies, if one exists at all.
So, I've heard some people say that regular expressions are extremely inefficient and slow (and I hear this especially with regard to C#). Is it true that regex is that slow, and is it really as bad to use as they make it out to be?
If it is that slow, what should I use in its place in large-scale applications?
So, I've heard some people say that regular expressions are extremely inefficient and slow
That's not true. At least, it is not true in all cases. It's just that for some tasks there may be better-suited tools than regular expressions. But making such a blanket claim and drawing such conclusions is simply wrong. There are situations where regexes work perfectly fine.
You will have to use them appropriately. It should not be a case of "if all you have is a hammer, everything looks like a nail".
Regexes are heavyweight and powerful, and they do have a performance impact. You should not use them for simple operations where, say, a string operation like Substring would have sufficed. And you should not use them for very complicated tasks either, as you take both a performance hit and, more importantly, a readability hit.
And you should definitely not try to use regex for XML, HTML, etc.; use the appropriate parsers instead.
Bottom line: it is a tool. Have it in your toolkit, and use it appropriately.
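A tiny illustration of the "string operations would have sufficed" case (the input is made up):

```csharp
using System;
using System.Text.RegularExpressions;

class SimpleCheck
{
    static void Main()
    {
        string input = "order-12345-confirmed";

        // Overkill: spinning up the regex engine for a literal substring test.
        bool viaRegex = Regex.IsMatch(input, "confirmed");

        // Sufficient and cheaper: a plain string operation.
        bool viaString = input.Contains("confirmed");

        Console.WriteLine("{0} {1}", viaRegex, viaString);
    }
}
```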
Regular expressions are not as efficient as other techniques that are appropriate for specific situations.
However, that doesn't mean that one should not use them. If they provide adequate performance for your particular application, then using them is fine. If your application's performance is inadequate, do some testing to determine the cause of the problem rather than just eliminating all the regexes.
Also, there are smart ways to use regexes. For example, if you are performing the same operation many times, cache and reuse the regular expression object rather than recreating it every time it is used.
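In C#, that caching advice typically looks like a static Regex built once and reused; the pattern below is only illustrative:

```csharp
using System.Text.RegularExpressions;

static class EmailCheck
{
    // Built once per process; RegexOptions.Compiled trades a one-time
    // compilation cost for faster matching on repeated use.
    private static readonly Regex EmailPattern =
        new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);

    public static bool LooksLikeEmail(string candidate)
    {
        return EmailPattern.IsMatch(candidate);
    }
}
```

Matching on a Regex instance is thread-safe, so sharing a single static instance like this is fine.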
Trying to decide between Dapper, Massive, and PetaPoco. I like the simplicity of Dapper, the flexibility of Massive, and the POCO support in PetaPoco, but I am currently procrastinating over which one to choose for my next project.
I do realise that, to a large extent, it's a matter of personal taste, but I believe it will be valuable to hear some opinions on the subject, especially from people who have tried more than one of these, err, libraries (what is the right term: library, file, framework?).
Try to decide which of the features you mention -- simplicity, flexibility, POCO support -- will be most useful to you and your project(s) one year from now. Which is most likely to make your work easier?
Then you'll have your answer. And if you still can't choose, pick Dapper (just a random selection I made :-) As the Cheshire Cat says, if you don't really know where you want to go, it doesn't matter which road you choose.
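If it helps to see what the "pick Dapper" route looks like in practice, here is a minimal sketch; the Product type, table, and column names are invented:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

// Hypothetical POCO that Dapper maps result columns onto by name.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class DapperSketch
{
    static List<Product> LoadCheapProducts(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        {
            // Query<T> is Dapper's extension method on the connection;
            // it opens the connection if needed and maps rows to Product.
            return conn.Query<Product>(
                "SELECT Id, Name FROM Products WHERE Price < @max",
                new { max = 10 }).ToList();
        }
    }
}
```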
PetaPoco's documentation seems more mature than the others', which makes it feel like the safest route.
I haven't tried any of those. By default, I always base my decision on the number of lines of resulting client code and on type safety. Of course, there are a number of other metrics you should take into account, but if your project is not constrained by any special (exotic) requirements, those two are generally applicable.
By the way, I am aware of the controversy my response may lead to ;)
We are about to implement a small automated securities trader. The trader will be built on top of the excellent QuickFIX FIX engine.
After due thought, we narrowed our options down to implementing it in C# or in Python. Please specify the pros and cons of each language for this task, in terms of:
Performance (The fact that Python uses a GIL troubles me in terms of thread concurrency)
Productivity
Scalability (We may need to scale this trader up to a full-sized platform)
EDIT
I've rephrased the question to make it less of a "C# vs. Python" debate (which I find irrelevant -- both languages have their merits); I'm simply trying to draw up a comparison table before I make the decision.
I like both languages and think both would be a good choice. The GIL might really be the most important difference, but I'm not sure it's a problem in your case. The GIL only affects code running in pure Python, and I assume your tool depends more on I/O than on raw number crunching. If your I/O libraries handle the GIL correctly, they can execute concurrent code without problems. And even for number crunching you still have numpy.
My choice would depend on your existing knowledge. If you have experienced C# developers at hand, I would go for C#. If you are starting absolutely from scratch and it's really 50:50, then I would go for Python: it's easier to learn, free, and in many cases more productive.
And just to mention it: You might also have a look at IronPython. ;-)
For the "Performance" and "Scalability" points I would suggest C# (although a large part of performance depends on your algorithms). Productivity is largely a subjective thing, but C# now has features like lambdas, anonymous methods, and anonymous types, which make it very productive.
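For a taste of those features, a small illustrative snippet (a sketch only, nothing trading-specific):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class FeatureSketch
{
    static void Main()
    {
        int[] fills = { 120, 75, 310, 55 };

        // Lambda + LINQ: declarative filtering in one line.
        var large = fills.Where(f => f > 100).ToArray();

        // Task<T>: run a computation on the thread pool and read its result.
        Task<int> total = Task.Run(() => large.Sum());
        Console.WriteLine(total.Result);
    }
}
```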