As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
I remember hearing years ago that it is more efficient to have loops decrement instead of increment, especially when programming microprocessors.
Is this true, and if so, what are the reasons?
One thing that occurs to me right off the bat is that the end condition of a decrementing loop is liable to be quicker. If you are looping up to a certain value, a comparison with that value is required every iteration. However, if you are looping down to zero, then on most processors the decrement itself will set the zero flag when the value hits zero, so no extra comparison operation is required.
Small potatoes I realize, but in a large tight inner loop it might matter a great deal.
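For illustration, here is what the two loop shapes look like in C#. Whether the counting-down version actually saves a compare depends entirely on the compiler/JIT and the target processor, so treat this as a sketch of the pattern, not a guaranteed optimization:

```csharp
// The two loop shapes discussed above; names are illustrative only.
static class LoopShapes
{
    static long SumUp(int[] data)
    {
        long sum = 0;
        for (int i = 0; i < data.Length; i++)      // compares against data.Length every iteration
            sum += data[i];
        return sum;
    }

    static long SumDown(int[] data)
    {
        long sum = 0;
        for (int i = data.Length - 1; i >= 0; i--) // end condition is just "have we hit zero?"
            sum += data[i];
        return sum;
    }
}
```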
In C#, it makes no difference to efficiency. The only practical reason to use a decrementing loop is if you are looping through a collection and removing items as you go.
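For example, counting down is the natural way to remove items from a List&lt;T&gt; by index, because a removal doesn't shift the indices of the elements you haven't visited yet (the names below are just illustrative):

```csharp
// Removing items while iterating: counting down keeps the remaining indices stable.
using System.Collections.Generic;

static class ListCleanup
{
    public static void RemoveNegatives(List<int> numbers)
    {
        for (int i = numbers.Count - 1; i >= 0; i--)
        {
            if (numbers[i] < 0)
                numbers.RemoveAt(i);
        }
    }
}
```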
Don't know about decrementing, but I know that using ++i instead of i++ is often said to be more performant.
There are lots of articles on the web about why, but it comes down to this: i++ has to keep a temporary copy of the old value so it can return it, while ++i doesn't (in C#, for a plain int, the compiler usually optimizes the difference away anyway).
Again, I don't know if decrementing is more performant; just thought I'd throw that out there, seeing as you're asking about performance :)
For all I know, you'd just use decrementing because it's easier in some situations.
I have a 500+ GB text file. It has to be searched for duplicates, the duplicates removed, then sorted and saved to a final file. Of course, for such a big file, LINQ and the like are not suitable and will not work, so external sorting has to be used. There is an app called "Send-Safe List Manager". Its speed is super fast: for a 200 MB txt file it gives the result in less than 10 seconds. After examining the exe with the "Greatis WinDowse" app, I found that it was written in Delphi. There are some external sorting classes written in C#; I tested a 200 MB file with them and all took over a minute. So my question is: for this kind of processing, is Delphi faster than C#? If I have to write my own, should I use Delphi, and can I reach that speed with C# at all?
Properly written sorting code for a large file must be disk-bound; at that point there is essentially no difference which language you use.
Delphi generates native code and also allows for inline assembly, so in theory, maximum speed for a specific algorithm could be easier to reach in Delphi.
However, the performance of what you describe will be tied to I/O performance, and the difference between possible algorithms will be several orders of magnitude larger than the Delphi vs. .NET difference.
The language is probably the last thing you should look at if trying to speed that up.
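To make the point concrete, here is a rough, untuned external-sort-with-deduplication sketch in C#: sort and de-duplicate chunks in memory, spill them to temp files, then k-way merge. The chunk size, file handling and merge structure are illustrative assumptions, not a benchmark-ready implementation; the real cost is dominated by disk I/O either way.

```csharp
// Rough external sort sketch: chunk -> sort/dedupe -> spill -> k-way merge.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

static class ExternalSort
{
    public static void SortAndDeduplicate(string inputPath, string outputPath, int linesPerChunk)
    {
        // Phase 1: read the input in chunks, sort and de-duplicate each chunk, spill to temp files.
        var chunkFiles = new List<string>();
        using (var reader = new StreamReader(inputPath))
        {
            var buffer = new List<string>(linesPerChunk);
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                buffer.Add(line);
                if (buffer.Count >= linesPerChunk)
                    chunkFiles.Add(WriteChunk(buffer));
            }
            if (buffer.Count > 0)
                chunkFiles.Add(WriteChunk(buffer));
        }

        // Phase 2: k-way merge of the sorted chunks, dropping duplicates on the way out.
        var readers = chunkFiles.Select(f => new StreamReader(f)).ToList();
        var front = new SortedDictionary<string, List<int>>(StringComparer.Ordinal);
        for (int i = 0; i < readers.Count; i++)
            Advance(front, readers, i);

        using (var writer = new StreamWriter(outputPath))
        {
            while (front.Count > 0)
            {
                var smallest = front.First();        // smallest line across all chunks
                front.Remove(smallest.Key);
                writer.WriteLine(smallest.Key);      // written once, even if several chunks had it
                foreach (int idx in smallest.Value)
                    Advance(front, readers, idx);
            }
        }

        foreach (var r in readers) r.Dispose();
        foreach (var f in chunkFiles) File.Delete(f);
    }

    static string WriteChunk(List<string> buffer)
    {
        buffer.Sort(StringComparer.Ordinal);
        string path = Path.GetTempFileName();
        File.WriteAllLines(path, buffer.Distinct().ToArray());  // de-duplicate within the chunk
        buffer.Clear();
        return path;
    }

    static void Advance(SortedDictionary<string, List<int>> front, List<StreamReader> readers, int idx)
    {
        string line = readers[idx].ReadLine();
        if (line == null) return;
        List<int> owners;
        if (!front.TryGetValue(line, out owners))
        {
            owners = new List<int>();
            front[line] = owners;
        }
        owners.Add(idx);
    }
}
```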
Could someone suggest a good CircularBuffer implementation? I need both "not thread-safe" and "thread-safe" versions. I expect the following operations:
ability to provide size of the buffer when creating
adding elements
iterating elements
removing elements while iterating
probably removing elements
I expect the implementation to be highly optimized in terms of speed and memory use, average and worst-case times, etc.
I expect the "not thread-safe" implementation to be extremely fast. I expect the "thread-safe" implementation to be fast, probably using "lock-free code" for synchronization, and it's OK to have some restrictions if that's required for speed.
If the buffer is too small to store a new (added) element, it's OK either to silently overwrite an existing element or to raise an exception.
Should I use disruptor.net?
Adding a link to a good example: Disruptor.NET example
Not thread safe:
System.Collections.Generic.Queue
Thread safe:
System.Collections.Concurrent.ConcurrentQueue
or
System.Collections.Concurrent.BlockingCollection (which uses a concurrent queue by default internally)
Although technically you really shouldn't use the term "thread safe". It's simply too ambiguous. The first is not designed to be used concurrently by multiple threads; the others are.
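If you do end up rolling your own, a minimal not-thread-safe ring buffer that silently overwrites the oldest element when full (one of the behaviours asked for above) might look something like this. The class and member names are made up for illustration:

```csharp
// Fixed-size, not-thread-safe circular buffer; overwrites the oldest element when full.
using System.Collections;
using System.Collections.Generic;

public class CircularBuffer<T> : IEnumerable<T>
{
    private readonly T[] _items;
    private int _head;   // index of the oldest element
    private int _count;

    public CircularBuffer(int capacity)
    {
        _items = new T[capacity];
    }

    public int Count { get { return _count; } }

    public void Add(T item)
    {
        int tail = (_head + _count) % _items.Length;
        _items[tail] = item;
        if (_count == _items.Length)
            _head = (_head + 1) % _items.Length;   // full: overwrite the oldest element
        else
            _count++;
    }

    public IEnumerator<T> GetEnumerator()
    {
        for (int i = 0; i < _count; i++)
            yield return _items[(_head + i) % _items.Length];
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
```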
Barring actual performance tests of my code (I'm at the design stage), what is the general consensus on interfacing C code with C#? When will it be fruitful to do so, and when will it not?
There is no simple answer.
Most of the time, the overhead of marshaling parameters into and back out of the call will be negligible, and often far lower than the processing done inside the function if it's not a trivial function. However, doing it inside a tight, performance-critical loop might violate your performance constraints.
The overhead itself largely depends on the types of the arguments and return value of the method. It is cheaper to marshal an integer than an array of structures that contain many strings.
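As a rough illustration of the difference, compare these two hypothetical P/Invoke declarations ("native.dll" and the function names are assumptions, not a real API). The first passes only blittable integers and is essentially free to marshal; the second forces the marshaler to copy an array of structures containing strings on every call:

```csharp
// Hypothetical P/Invoke declarations contrasting cheap and expensive marshaling.
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct Record
{
    public int Id;
    [MarshalAs(UnmanagedType.LPWStr)] public string Name;
}

static class NativeMethods
{
    [DllImport("native.dll")]
    public static extern int AddInts(int a, int b);   // cheap: blittable arguments

    [DllImport("native.dll", CharSet = CharSet.Unicode)]
    public static extern int ProcessRecords([In] Record[] records, int count);   // expensive: non-blittable array
}
```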
It is impossible to tell without knowing your use cases.
So, I've heard some people say that regular expressions are extremely inefficient and slow (and I hear this especially with regard to C#). Is it true that regex is that slow, and is it really as bad to use as they make it out to be?
If it is that slow, what should I use in its place in large-scale applications?
So, I've heard some people say that regular expressions are extremely inefficient and slow
That's not true. At least, it is not true in all cases. It's just that for some tasks there may be better-suited tools than regular expressions. But making a blanket claim like that and drawing such conclusions is simply wrong. There are situations where regexes work perfectly fine.
You will have to use them appropriately; it should not be a case of "if all you have is a hammer, everything looks like a nail."
Regexes are heavyweight and powerful, and they do have a performance impact. You should not use them for simple operations where, say, string methods such as Substring would have sufficed. And you should not use them for very complicated things either, as you take both a performance and, more importantly, a readability hit.
And you should definitely not try to use regex for XML, HTML, etc.; use the appropriate parsers instead.
Bottom line: it is a tool. Have it in your toolkit, and use it appropriately.
Regular expressions are not as efficient as other techniques that are appropriate for specific situations.
However, that doesn't mean that one should not use them. If they provide adequate performance for your particular application, then using them is fine. If your application's performance is inadequate, do some testing to determine the cause of the problem rather than just eliminating all the regexes.
Also, there are smart ways to use regexes. For example, if you are doing the same operation a lot, then cache and reuse the regular expression object rather than recreating it every time it is used.
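For example, a common pattern is to build the Regex once, optionally with RegexOptions.Compiled, and reuse it for every call rather than constructing a new instance each time. The pattern below is just a placeholder:

```csharp
// Build the Regex once and reuse it, instead of recreating it on every call.
using System.Text.RegularExpressions;

static class EmailCheck
{
    private static readonly Regex EmailPattern =
        new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);

    public static bool LooksLikeEmail(string input)
    {
        return EmailPattern.IsMatch(input);
    }
}
```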
Let's say that we ignore the target and source hardware for a moment. What's the better endian style to go with -- big or little?
I'm just trying to go with consensus / convention on this one. The best guidance I've received so far is "it depends" so always specify. That's fine. I'll do that.
However, in this situation there is no need to be one way or the other. There's no legacy, so I thought, "what would be the cleanest choice for current & emerging hardware."
Use whatever is predominant in your hardware. Or use "network byte order" (big endian) because the internet does. Or pick one at random. It's unimportant.
Don't choose. Just use whatever your compiler/platform uses. That gives no hassle and just works.
If you are doing raw network stuff, you may want to convert things to/from network endianness though, which is big endian. But don't mess up your whole code because of that. Just do the conversion when you get to the network writing part.
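For instance (sticking with C#, like the rest of this page), you can keep everything in host order internally and convert only at the network boundary with IPAddress.HostToNetworkOrder / NetworkToHostOrder:

```csharp
// Convert only at the network boundary; values stay in host order everywhere else.
using System;
using System.Net;

static class WireOrder
{
    static void Main()
    {
        int value = 0x12345678;

        int onTheWire = IPAddress.HostToNetworkOrder(value);        // convert just before writing
        int roundTripped = IPAddress.NetworkToHostOrder(onTheWire); // convert right after reading

        Console.WriteLine(roundTripped == value);                   // True
    }
}
```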
Actually, the answer is "it depends".
If you just want a reason to pick one: since in big-endian the high-order byte comes first, you can always tell whether a value is positive or negative from the first byte.
It doesn't matter. Just pick one.
This is a topic of endless debate. One does not hold a particular advantage over the other.