Highly efficient CircularBuffer implementation (both thread-safe and not thread-safe) [closed] - c#

Could someone suggest a good CircularBuffer implementation? I need both "not thread-safe" and "thread-safe" versions. I expect the following operations:
ability to specify the size of the buffer at creation
adding elements
iterating over elements
removing elements while iterating
possibly removing elements
I expect the implementation to be highly optimized in terms of speed and memory use, average and worst-case times, etc.
I expect the "not thread-safe" implementation to be extremely fast. I expect the "thread-safe" implementation to be fast, probably using lock-free code for synchronization, and it's OK to have some restrictions if that is required for speed.
If the buffer is too small to store a new (added) element, it's OK to silently overwrite an existing element or raise an exception.
Should I use disruptor.net?
Adding a link to a good example: Disruptor.NET example

Not thread safe:
System.Collections.Generic.Queue
Thread safe:
System.Collections.Concurrent.ConcurrentQueue
or
System.Collections.Concurrent.BlockingCollection (which uses a concurrent queue by default internally)
Although technically you really shouldn't use the term "thread-safe"; it's simply too ambiguous. The first is not designed to be used concurrently by multiple threads; the others are.
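For reference, here is a minimal sketch of what a non-thread-safe, fixed-capacity ring buffer might look like (the class name and the overwrite-on-full behaviour are my own choices, not taken from any particular library):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Minimal single-threaded ring buffer: fixed capacity, overwrites the oldest
// element when full. Deliberately not thread-safe.
public sealed class CircularBuffer<T> : IEnumerable<T>
{
    private readonly T[] _items;
    private int _head;    // index of the oldest element
    private int _count;

    public CircularBuffer(int capacity)
    {
        if (capacity <= 0) throw new ArgumentOutOfRangeException("capacity");
        _items = new T[capacity];
    }

    public int Count { get { return _count; } }

    public void Add(T item)
    {
        int tail = (_head + _count) % _items.Length;
        _items[tail] = item;
        if (_count == _items.Length)
            _head = (_head + 1) % _items.Length;   // buffer full: overwrite the oldest element
        else
            _count++;
    }

    public T RemoveOldest()
    {
        if (_count == 0) throw new InvalidOperationException("Buffer is empty.");
        T item = _items[_head];
        _items[_head] = default(T);
        _head = (_head + 1) % _items.Length;
        _count--;
        return item;
    }

    public IEnumerator<T> GetEnumerator()
    {
        for (int i = 0; i < _count; i++)
            yield return _items[(_head + i) % _items.Length];
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```

A thread-safe variant could simply wrap Add/RemoveOldest in a lock, or you could fall back to a bounded BlockingCollection as suggested above; a genuinely lock-free version is considerably more involved, which is where something like Disruptor.NET comes in.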

Related

Why is it so easy to deal with concurrency with Rx? [closed]

A few months back we introduced Rx into our codebase, and since then the codebase has been getting more and more "reactive". It feels really easy to introduce concurrency into the codebase with Rx, as not a single line of "locking" has been needed yet.
However, I cannot explain in words why it is so easy with Rx.
Is it related to the "Actor Model" and "Functional Reactive Programming" concept?
Can someone kindly enlighten me on this please?
I think the main reason it's "easy" is because of the blood, sweat and tears poured into the Rx library by the very smart Dev team behind it at MS.
Look at the (open) source code to see just how much careful code goes into enforcing the Rx grammar and the parameterisation of when and where things run using Schedulers. That has plenty of defensive concurrent code in it. I suggest it's the grammar and Schedulers that bring the simplicity.
Using the model is quite easy, but achieving that simplicity was not trivial. You are benefiting from standing on the shoulders of giants that have hidden the complexity behind a neat and tidy API :)
Incidentally, there is still the odd trap for you to fall into... I'm sure you'll find one sooner or later! One example is that Subject<T>.OnNext() is not protected from concurrent access in Rx 2.x for performance reasons.
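To illustrate that last point with a small sketch of my own (not from the answer): Subject.Synchronize serializes concurrent OnNext calls, and ObserveOn picks the scheduler the observer runs on, so no explicit locks appear in user code.

```csharp
using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using System.Threading.Tasks;

class RxSynchronizeDemo
{
    static void Main()
    {
        var raw = new Subject<int>();

        // Subject<T>.OnNext is not guarded against concurrent callers in Rx 2.x,
        // so serialize the producers explicitly.
        ISubject<int> safe = Subject.Synchronize(raw);

        IDisposable subscription = safe
            .ObserveOn(Scheduler.Default)                 // where the observer runs
            .Subscribe(x => Console.WriteLine("got " + x));

        // Many threads pushing values concurrently, with no locks in this code.
        Parallel.For(0, 100, i => safe.OnNext(i));
        safe.OnCompleted();

        Console.ReadLine();       // give the scheduled notifications time to drain
        subscription.Dispose();
    }
}
```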

Performance penalties of interop'ing C# with C functions [closed]

Barring actual performance tests of my code (I'm at the design stage), what is the general consensus on calling C code from C#? When will it be fruitful to do so, and when will it not?
There is no simple answer.
Most of the time, the overhead of marshaling parameters into and back out of a function will be negligible, and often far lower than the processing done inside the function if it's not trivial. However, doing it inside a tight, performance-critical loop might violate your performance constraints.
The overhead itself largely depends on the types of the arguments and the return value. It is far cheaper to marshal an integer than an array of structures that each contain many strings.
It is impossible to tell without knowing your use cases.
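To make the cost difference concrete, here is a hypothetical P/Invoke sketch (the native library name and entry points are made up for illustration): a blittable int crosses the boundary essentially for free, while an array of structs containing strings has to be copied and converted field by field on every call.

```csharp
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
struct Person
{
    public int Id;
    [MarshalAs(UnmanagedType.LPStr)]
    public string Name;                       // converted and copied on every call
}

static class NativeMethods
{
    // Cheap: int is blittable, so no conversion is needed either way.
    [DllImport("mynative.dll")]               // hypothetical library
    public static extern int add_one(int value);

    // Expensive: each Person, and each string inside it, must be marshaled.
    [DllImport("mynative.dll")]
    public static extern int process_people([In] Person[] people, int count);
}
```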

Better to have decrementing loops? [closed]

I remember years ago hearing that it is more efficient to have loops decrementing instead of incrementing especially when programming microprocessors.
Is this true, and if so, what are the reasons?
One thing that occurs off the bat is that the end condition of a decrementing loop is liable to be quicker. If you are looping up to a certain value, a comparison with that value is required on every iteration. However, if you are looping down to zero, the decrement itself on most processors sets the zero flag when the value hits zero, so no extra comparison operation is required.
Small potatoes I realize, but in a large tight inner loop it might matter a great deal.
In C# it makes no difference to efficiency. The only reason to use a decrementing loop is if you are looping through a collection and removing items as you go.
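For example, a minimal sketch of that removal case (my own illustration): counting down means RemoveAt never shifts the indices of the elements you still have to visit.

```csharp
using System;
using System.Collections.Generic;

class RemoveWhileIterating
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // Iterate backwards so removing an item doesn't disturb
        // the positions of the items not yet visited.
        for (int i = numbers.Count - 1; i >= 0; i--)
        {
            if (numbers[i] % 2 == 0)
                numbers.RemoveAt(i);
        }

        Console.WriteLine(string.Join(", ", numbers));   // 1, 3, 5
    }
}
```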
Don't know about decrementing, but the old advice that ++i is faster than i++ comes from C++, where i++ on a non-trivial type (such as an iterator) has to make a temporary copy of the old value. For a plain int in C#, the compiler emits the same code for both when the result isn't used, so it makes no difference.
Again, don't know if decrementing is more performant, just thought I'd throw that out there seeing how you're asking about performance :)
You'd just use decrementing because it's easier in some situations, for all I know.

Uses of a concurrent dictionary [closed]

Where do you think one could find uses for the ConcurrentDictionary that's part of .NET Framework 4?
Has anybody used the ConcurrentDictionary, and if so, why? Examples, if any, would be great.
I can think of using it in the factory pattern, where you want to store instances of a class that have already been initialized, e.g. Hibernate's SessionFactory. What do you think?
I've used it heavily for caching scenarios. ConcurrentDictionary, especially when combined with Lazy<T>, is great for caching results when construction of a type is expensive, and works properly in multithreaded scenarios.
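A sketch of that caching pattern (my own illustration; the Cache class and factory delegate names are not from the answer): wrapping the value in Lazy<T> ensures the expensive construction runs at most once per key, even when GetOrAdd races and a losing thread's Lazy wrapper is discarded.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class Cache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, Lazy<TValue>> _items =
        new ConcurrentDictionary<TKey, Lazy<TValue>>();

    private readonly Func<TKey, TValue> _factory;

    public Cache(Func<TKey, TValue> factory)
    {
        _factory = factory;
    }

    public TValue GetOrCreate(TKey key)
    {
        // GetOrAdd may create more than one Lazy<TValue> under contention,
        // but only the winning wrapper's Value is ever materialized, so the
        // expensive factory runs at most once per key.
        return _items.GetOrAdd(
            key,
            k => new Lazy<TValue>(() => _factory(k),
                                  LazyThreadSafetyMode.ExecutionAndPublication)).Value;
    }
}
```

Usage is just something like `new Cache<string, MyExpensiveType>(key => Build(key))`, with whatever expensive factory you actually have (names hypothetical).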
Whenever I don't want to worry about concurrent access to the dictionary really - i.e. I have some tiny HTTP web server app which logs/caches some request data in a dictionary structure - there can be any number of concurrent requests and I don't want to deal with having to manually lock the dictionary.
It's just one more thing that you don't have to do yourself and potentially get wrong, instead the framework takes care of that aspect for you.
We use it for caching objects in a multi-threaded app... performance is great and there are no worries about locks or the like...
Any basic cache that you need to be thread-safe is a candidate, especially for web apps that are inherently highly threaded. A dictionary requires a lock; either exclusive or reader/writer (the latter being more awkward to code). A hashtable is automatically thread-safe for readers (requiring a lock for writers), but often involves boxing of keys (and sometimes values), and has no static type safety.
A concurrent dictionary, however, is really friendly to use; no lock code, and entirely thread-safe. Less overhead than reader/writer locks (including the "slim" variety).

What is the best way to communicate between two process with C#? [closed]

Should I use .NET Remoting or something else?
I want a simple approach; I think something that avoids sockets would be easier to deploy ...
Anyway, help please, thanks.
You will want to begin with named pipes.
Since you are dealing with C#, have a look at:
http://msdn.microsoft.com/en-us/library/aa365590(v=vs.85).aspx
Essentially, this got me going instantly when I was looking into it:
http://msdn.microsoft.com/en-us/library/bb546085.aspx#Y1920
Good luck.
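A minimal sketch of that approach with System.IO.Pipes (the pipe name and messages are arbitrary): one side hosts a NamedPipeServerStream, the other connects with a NamedPipeClientStream, and they exchange lines of text over an ordinary StreamReader/StreamWriter. Normally the two sides live in separate processes; they run on two tasks here only to keep the sketch self-contained.

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

class NamedPipeDemo
{
    static void Main()
    {
        Task server = Task.Run(() =>
        {
            using (var pipe = new NamedPipeServerStream("demo-pipe", PipeDirection.InOut))
            {
                pipe.WaitForConnection();
                using (var reader = new StreamReader(pipe))
                using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                {
                    Console.WriteLine("server got: " + reader.ReadLine());
                    writer.WriteLine("pong");
                }
            }
        });

        using (var pipe = new NamedPipeClientStream(".", "demo-pipe", PipeDirection.InOut))
        {
            pipe.Connect();
            using (var reader = new StreamReader(pipe))
            using (var writer = new StreamWriter(pipe) { AutoFlush = true })
            {
                writer.WriteLine("ping");
                Console.WriteLine("client got: " + reader.ReadLine());
            }
        }

        server.Wait();
    }
}
```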
It depends on what you want to do. If you just want to send notifications between two processes running on the same computer, named events work just fine. If you want to send long messages, then there are named pipes, sockets, and WCF (which replaces .NET Remoting). You might also want to share memory with memory mapped files. There are several other possibilities.
The method you use depends in large part on how much data you want to communicate, how fast you need it to be, and how much time you want to spend futzing with it.
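And for the shared-memory option mentioned above, a small sketch using System.IO.MemoryMappedFiles (the map name and layout are arbitrary): one process creates a named mapping and writes into it, another opens the same name and reads.

```csharp
using System;
using System.IO.MemoryMappedFiles;

class SharedMemoryDemo
{
    // Run with "writer" in one process and no arguments (reader) in another.
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "writer")
        {
            using (var mmf = MemoryMappedFile.CreateNew("demo-map", 1024))
            using (var accessor = mmf.CreateViewAccessor())
            {
                accessor.Write(0, 42);          // put an int at offset 0
                Console.WriteLine("written; press Enter to release the mapping");
                Console.ReadLine();             // keep the mapping alive for the reader
            }
        }
        else
        {
            using (var mmf = MemoryMappedFile.OpenExisting("demo-map"))
            using (var accessor = mmf.CreateViewAccessor())
            {
                Console.WriteLine("read: " + accessor.ReadInt32(0));
            }
        }
    }
}
```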
