Uses of a concurrent dictionary [closed] - c#

Where do you think one could find uses for the ConcurrentDictionary that's part of the .NET Framework 4?
Has anybody used the ConcurrentDictionary? If so, why? Examples, if any, would be great.
I can think of uses in the factory pattern, where you want to store different instances of a class if they have already been initialized, e.g. Hibernate's SessionFactory. What do you think?

I've used it heavily for caching scenarios. ConcurrentDictionary, especially when combined with Lazy<T>, is great for caching results when construction of a type is expensive, and works properly in multithreaded scenarios.
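A minimal sketch of that pattern (ExpensiveThing is a hypothetical stand-in, not from the answer): on its own, GetOrAdd may invoke the value factory more than once under contention, but storing a Lazy<T> guarantees the expensive constructor runs at most once per key.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical stand-in for any type that is costly to construct.
class ExpensiveThing
{
    public ExpensiveThing(string key)
    {
        Console.WriteLine("Constructing for '{0}'...", key); // runs at most once per key
    }
}

class ExpensiveThingCache
{
    private readonly ConcurrentDictionary<string, Lazy<ExpensiveThing>> _cache =
        new ConcurrentDictionary<string, Lazy<ExpensiveThing>>();

    public ExpensiveThing Get(string key)
    {
        // GetOrAdd alone may run the factory more than once under contention;
        // wrapping the value in Lazy<T> (thread-safe by default) ensures the
        // expensive constructor executes at most once per key.
        return _cache.GetOrAdd(key,
            k => new Lazy<ExpensiveThing>(() => new ExpensiveThing(k))).Value;
    }
}
```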

Whenever I don't want to worry about concurrent access to the dictionary, really. E.g. I have a tiny HTTP web server app which logs/caches some request data in a dictionary structure; there can be any number of concurrent requests, and I don't want to deal with manually locking the dictionary.
It's just one more thing that you don't have to do yourself and potentially get wrong; instead, the framework takes care of that aspect for you.
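As a hedged illustration of that kind of request logging (the class and member names are made up for the sketch), AddOrUpdate lets any number of request threads bump a counter without an explicit lock:

```csharp
using System;
using System.Collections.Concurrent;

class RequestStats
{
    // Maps a request path to the number of times it has been served.
    private readonly ConcurrentDictionary<string, int> _hits =
        new ConcurrentDictionary<string, int>();

    // Safe to call from any number of request-handling threads at once:
    // AddOrUpdate performs the read-modify-write atomically, no manual lock needed.
    public void RecordRequest(string path)
    {
        _hits.AddOrUpdate(path, 1, (key, count) => count + 1);
    }

    public void Dump()
    {
        foreach (var pair in _hits)
            Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
    }
}
```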

We use it for caching objects in a multi-threaded app... performance is great, and no worries about locks or similar...

Any basic cache that you need to be thread-safe is a candidate, especially for web apps, which are inherently highly threaded. A dictionary requires a lock; either exclusive or reader/writer (the latter being more awkward to code). A Hashtable is thread-safe for multiple readers (writers still require a lock), but often involves boxing of keys (and sometimes values), and has no static type safety.
A concurrent dictionary, however, is really friendly to use; no lock code, and entirely thread-safe. Less overhead than reader/writer locks (including the "slim" variety).
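To make the contrast concrete, here is a sketch (not from the answer; the cache is illustrative) of the same read-mostly cache written both ways, once with a ReaderWriterLockSlim-guarded Dictionary and once with a ConcurrentDictionary:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class SettingsCache
{
    // Option 1: plain Dictionary guarded by a reader/writer lock - correct, but awkward.
    private readonly Dictionary<string, string> _locked = new Dictionary<string, string>();
    private readonly ReaderWriterLockSlim _rw = new ReaderWriterLockSlim();

    public bool TryGetLocked(string key, out string value)
    {
        _rw.EnterReadLock();
        try { return _locked.TryGetValue(key, out value); }
        finally { _rw.ExitReadLock(); }
    }

    public void SetLocked(string key, string value)
    {
        _rw.EnterWriteLock();
        try { _locked[key] = value; }
        finally { _rw.ExitWriteLock(); }
    }

    // Option 2: ConcurrentDictionary - the same cache with no locking code at all.
    private readonly ConcurrentDictionary<string, string> _concurrent =
        new ConcurrentDictionary<string, string>();

    public bool TryGet(string key, out string value)
    {
        return _concurrent.TryGetValue(key, out value);
    }

    public void Set(string key, string value)
    {
        _concurrent[key] = value;
    }
}
```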

Related

Asynchronous vs Multithreading [closed]

I read the question Asynchronous vs Multithreading - is there a difference, and searched Google for the differences.
What is the benefit of using asynchronous programming instead of multithreading?
And when should one use asynchronous programming instead of multithreading?
If your task can be done using asynchronous programming, then it is better to do it that way instead of going for multi-threaded programming, for three reasons:
1: Performance
In multi-threading, the CPU has to keep switching between threads. So even if your thread is doing nothing and just sitting there (or, more likely, polling to check whether a condition is true so it can get on with whatever it was created to do), the CPU still switches threads, and the context switching takes time. I don't think that would be very bad, but your performance surely takes a hit.
2: Simplicity & Brevity
Also, maybe it's just me, but asynchronous programming just seems more natural to me. And before you ask, no, I'm not a fan of JS, but still. Not only that, but you run into problems with shared variables and thread safety and others, all of which can be side-stepped by using asynchronous programming and callbacks.
3: Annoying implementations of threads
In Python there's this really horrible thing called the GIL (Global Interpreter Lock). Basically, the GIL prevents Python threads from actually running Python code concurrently. Also, if you're thinking about using threads to exploit a multi-core CPU, forget it.
There might be caveats in C# too, I don't know. These are just my 2 cents...
All that said, asynchrony and multi-threading are really not that comparable. While multi-threading may be used (inefficiently) to implement asynchrony, multi-threading is a way to get concurrency, whereas asynchrony is a programming style, like OOP (object-oriented programming).
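In C# terms (the discussion above is language-agnostic), a minimal sketch of the distinction; the URL is just a placeholder:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class AsyncVsThread
{
    static void Main()
    {
        // Multithreaded style: a dedicated thread is tied up while waiting on I/O.
        var thread = new Thread(() =>
        {
            using (var client = new HttpClient())
            {
                string body = client.GetStringAsync("https://example.com").Result; // blocks this thread
                Console.WriteLine("Thread got {0} chars", body.Length);
            }
        });
        thread.Start();
        thread.Join();

        // Asynchronous style: no thread is blocked while the request is in flight.
        DownloadAsync().Wait();
    }

    static async Task DownloadAsync()
    {
        using (var client = new HttpClient())
        {
            string body = await client.GetStringAsync("https://example.com");
            Console.WriteLine("Async got {0} chars", body.Length);
        }
    }
}
```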

When writing an application, finish the functions first, then optimize? [closed]

Donald Knuth said that "premature optimization is the root of all evil", and I have gradually come to believe the saying.
So is it fair to say that, when writing an application, we should concentrate on completing the functionality, without worrying about performance, until we can no longer bear the low performance?
I'm afraid that if I repeatedly use a wrong pattern that slows down the application, fixing the issue later may consume a considerable amount of time. Should I test the performance of a pattern before using it widely?
The patterns I mention include things such as using LINQ vs. a for-loop, using Delegate.BeginInvoke, Parallel.For, or Task<T>, disposing of IDisposable objects or just ignoring them, etc.
Any reference materials are welcome.
I agree with the spirit of Knuth's quote about premature optimization, as it can cause code to become overly complex and unwieldy too early in development, impacting both the quality of the code and the time needed to complete a project.
Two concerns I have about your post:
You should have a sense of whether your functions/algorithms can theoretically scale and perform well enough to meet your requirements (e.g. the big-O complexity of your solution; see http://en.wikipedia.org/wiki/Analysis_of_algorithms).
The patterns you mention are actually concrete implementation items, only some of which are related to performance, e.g.
Parallel.For/Task - these are useful for gaining performance on multi-core systems
IDisposable - this is related to resource management, and not something to be avoided
Linq vs. for-loop - this can be a point-optimization, but you'll need to benchmark/assess your use case to determine which is best for you (e.g. Linq can be moved to PLinq in some cases for parallelism); see the benchmark sketch after this list
Delegate.BeginInvoke - a non-optional component of thread synchronization
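As an illustration of such a point benchmark, here is a rough sketch (for serious measurements, a harness like BenchmarkDotNet is preferable to a bare Stopwatch):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class LinqVsLoop
{
    static void Main()
    {
        int[] data = Enumerable.Range(0, 10000000).ToArray();

        // LINQ version: concise, but allocates iterators and delegates.
        var sw = Stopwatch.StartNew();
        long linqSum = data.Where(n => n % 2 == 0).Sum(n => (long)n);
        sw.Stop();
        Console.WriteLine("LINQ:     {0} in {1} ms", linqSum, sw.ElapsedMilliseconds);

        // Plain for-loop version of the same computation.
        sw.Restart();
        long loopSum = 0;
        for (int i = 0; i < data.Length; i++)
            if (data[i] % 2 == 0) loopSum += data[i];
        sw.Stop();
        Console.WriteLine("for-loop: {0} in {1} ms", loopSum, sw.ElapsedMilliseconds);
    }
}
```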
Never code without concern for performance.
Code for performance up until the code would get more complex (e.g. parallel).
But with C# 5.0 even parallel is not complex.
If you have some calls you think you might need to optimize, then design for that.
Design the code so optimization does not change the interface to the method.
There is speed, memory, concurrency (if a server app), reliability, security, and support.
Clean code is often the best code.
Don't get crazy until you know you have a performance problem but don't get sloppy.
In answering another question on SO I told the person they did not need a DataTable, and that a DataReader would be faster with less memory. Their response was that it still runs in half a second, so they didn't care. To me, that is sloppy code.
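For illustration, a sketch of the two approaches (the connection string, table, and column names are hypothetical): the DataTable materializes the entire result set in memory, while the DataReader streams one row at a time.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ReaderVsDataTable
{
    // Hypothetical connection string and schema, for illustration only.
    const string ConnString = "Server=.;Database=Demo;Integrated Security=true";

    // DataTable: pulls the entire result set into memory at once.
    static DataTable LoadAsDataTable()
    {
        using (var conn = new SqlConnection(ConnString))
        using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", conn))
        {
            var table = new DataTable();
            adapter.Fill(table);
            return table;
        }
    }

    // DataReader: streams rows one at a time, with far less memory overhead.
    static void StreamWithDataReader()
    {
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}
```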
@JonhSanders I disagree that "code for performance up until the code would get more complex" will cause either bugs or incomplete code. For me, coding for performance is not the same as optimizing. On a first pass of anything but throw-away code, I code for performance - nothing exotic, just best practices. Where I see potential hot spots that I might need to come back and optimize, I write with optimization in mind. P.S. I agree on closing the question.

Consuming a SOAP Web Service with the lowest overhead [closed]

I'm implementing a SOAP web service consumer for sending thousands of emails and storing thousands of XML response records in a local database (C#/.NET, Visual Studio 2012).
I would like to make my service consumer as fast and lightweight as possible.
I need to know some of the considerations. I always have a feeling that my code should run faster than it does.
E.g.
I've read that using DataSets increases overhead. So should I use lists of objects instead?
Does using an ORM introduce slowness into my code?
Is a console application faster than a WinForms app, since the user needs no GUI to deal with? There are simply some parameters sent to the app that invoke some methods.
What are the most efficient ways to deal with a SOAP Web Service?
Make it work, then worry about making it fast. If you try to guess where the bottlenecks will be, you will probably guess wrong. The best way to optimize something is to measure real code before and after.
DataSets, ORMs, WinForms apps, and console apps can all run plenty fast. Use the technologies that suit you, then tune the speed if you actually need it.
Finally if you do have a performance problem, changing your choice of algorithms to better suit your problem will likely yield much greater performance impact than changing any of the technologies you mentioned.
Considering my personal experience with SOAP, in this scenario I would say your main concern should be how you retrieve this information from your database (procedures, views, triggers, indexes, etc.).
The difference between a console app, a WinForms app, and a web app isn't that relevant.
After the app is done, you should run a serious stress test on it to see where your performance problem lies, if it exists.

CircularBuffer highly efficient implementation (both thread-safe and not thread-safe) [closed]

Could someone suggest a good CircularBuffer implementation? I need both "not thread-safe" and "thread-safe" versions. I expect the following operations:
ability to provide size of the buffer when creating
adding elements
iterating elements
removing elements while iterating
probably removing elements
I expect the implementation to be highly optimized in terms of speed and memory use, average and worst-case times, etc.
I expect the "not thread-safe" implementation to be extremely fast. I expect the "thread-safe" implementation to be fast, probably using "lock-free" code for synchronization, and it's OK to have some restrictions if that is required for speed.
If the buffer is too small to store a newly added element, it's OK to either silently overwrite an existing element or raise an exception.
Should I use disruptor.net?
Adding a link to a good example: Disruptor.NET example
Not thread safe:
System.Collections.Generic.Queue
Thread safe:
System.Collections.Concurrent.ConcurrentQueue
or
System.Collections.Concurrent.BlockingCollection (which uses a concurrent queue by default internally)
Although technically you really shouldn't use the term "thread safe". It's simply too ambiguous. The first is not designed to be used concurrently by multiple threads, the rest are.
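Note that none of the above is a fixed-size circular buffer that overwrites its oldest element when full. As a starting point, here is a minimal, hedged sketch of the "not thread-safe" version (a simple thread-safe variant can wrap each public method in a lock):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Minimal fixed-size ring buffer; overwrites the oldest element when full.
// Not thread-safe: guard each operation externally for concurrent use.
public class CircularBuffer<T> : IEnumerable<T>
{
    private readonly T[] _items;
    private int _head;   // index of the oldest element
    private int _count;

    public CircularBuffer(int capacity)
    {
        if (capacity <= 0) throw new ArgumentOutOfRangeException("capacity");
        _items = new T[capacity];
    }

    public int Count { get { return _count; } }

    public void Add(T item)
    {
        int tail = (_head + _count) % _items.Length;
        _items[tail] = item;
        if (_count == _items.Length)
            _head = (_head + 1) % _items.Length; // full: overwrite the oldest element
        else
            _count++;
    }

    public T RemoveOldest()
    {
        if (_count == 0) throw new InvalidOperationException("Buffer is empty.");
        T item = _items[_head];
        _items[_head] = default(T);
        _head = (_head + 1) % _items.Length;
        _count--;
        return item;
    }

    public IEnumerator<T> GetEnumerator()
    {
        // Iterates from oldest to newest.
        for (int i = 0; i < _count; i++)
            yield return _items[(_head + i) % _items.Length];
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```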

How do Serialize and De-serialize work internally? [closed]

I have a lot of legacy code where JSON is parsed manually using a for loop, which takes O(n) time in general. I know Json.NET would be better in terms of time and space, but gaining insight into how it works would help me make an informed decision about whether it's worth the effort to actually go ahead and invest the time and manpower to move everything to Json.NET.
To paraphrase your question into a more general one, let's assume you were looking for advice on which JSON serialization implementation to choose for various scenarios.
I'm aware of three obvious answers to this question:
Newtonsoft Json.NET
Provides an abundance of features and excellent performance
ServiceStack.Text
Provides simplicity and blazing performance
BCL JsonSerializer
Avoids the 3rd party library dependency, but is significantly slower
If you don't care about the 3rd party library dependency, go for the first option as it will give you performance and functionality. If you don't need a ton of features, evaluate whether ServiceStack.Text does what you need it to (if unsure, go with JSON.NET). In any other case, stick with what you have.
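For reference, a minimal sketch of the first option (the Order type is hypothetical); a single Json.NET call replaces a hand-written parsing loop:

```csharp
using System;
using Newtonsoft.Json;

// Hypothetical type used only to illustrate the API.
public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

class JsonNetDemo
{
    static void Main()
    {
        // Serialize an object graph to JSON in one call...
        var order = new Order { Id = 42, Customer = "Contoso" };
        string json = JsonConvert.SerializeObject(order);
        Console.WriteLine(json); // {"Id":42,"Customer":"Contoso"}

        // ...and deserialize it back, with no manual parsing loop.
        var roundTripped = JsonConvert.DeserializeObject<Order>(json);
        Console.WriteLine(roundTripped.Customer); // Contoso
    }
}
```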
Also, don't spend time making your code faster by replacing your JSON code before you know that this particular area is a performance bottleneck (or otherwise warrants replacement, e.g. because it's a maintenance problem). If you are considering replacing code to gain performance, isolate a few methods to profile and benchmark your current code against similar scenarios using the alternate implementation or library, in order to avoid making a decision based on assumptions.
Last, knowing how it works internally should not be a factor in your decision process unless you specifically are planning to be able to modify the source of it (or otherwise need to be able to understand it).
