How to share same object instance (singleton) between processes in C#? [duplicate] - c#

Possible Duplicate:
How to share objects across processes in .Net?
I can do this for a single process (single .exe) but how can I do it between processes?

You can do this via .NET Remoting. Your class needs to inherit from MarshalByRefObject, which gives your clients a proxy to the real object.
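A minimal sketch of that approach, assuming the full .NET Framework (Remoting does not exist in .NET Core/.NET 5+); the Counter class, port, and URI below are illustrative:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.Threading;

// The shared object: every client receives a proxy to this one server-side instance.
public class Counter : MarshalByRefObject
{
    private int count;

    public int Increment()
    {
        // Calls can arrive on multiple threads, so increment atomically.
        return Interlocked.Increment(ref count);
    }
}

public static class Server
{
    public static void Main()
    {
        ChannelServices.RegisterChannel(new TcpChannel(8080), false);
        // Singleton mode: every client call is dispatched to the same instance.
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(Counter), "Counter", WellKnownObjectMode.Singleton);
        Console.ReadLine(); // keep the host process alive
    }
}

// In a client process:
// var counter = (Counter)Activator.GetObject(
//     typeof(Counter), "tcp://localhost:8080/Counter");
// counter.Increment(); // executes inside the server process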

You'd need to use some sort of distributed hash table or caching mechanism.
Try to avoid things like Remoting if you can, because calls to a remote object are expensive and can really start to hurt performance. If you do go with .NET Remoting, then carefully consider the interface of the remote object. You should pass coarse-grained data across the process boundary, so avoid chatty interfaces with lots of calls that each carry little bits of data.
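For example (the types here are purely illustrative), compare a chatty interface, which costs one remote round trip per property, with a chunky one that fetches everything in a single call:

public interface IChattyCustomerService
{
    string GetName(int customerId);    // round trip 1
    string GetAddress(int customerId); // round trip 2
    string GetPhone(int customerId);   // round trip 3
}

public interface IChunkyCustomerService
{
    CustomerDetails GetDetails(int customerId); // one coarse-grained round trip
}

public class CustomerDetails
{
    public string Name;
    public string Address;
    public string Phone;
}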
What are the requirements of the class that you want to act as a singleton? There might be a totally different way of looking at it. Currently the thinking is that singletons are undesirable because they are difficult to unit test reliably, so avoiding the singleton concept could be the direction to take.

Use .NET Remoting (see the answers above, or the MSDN documentation: http://msdn.microsoft.com/en-us/library/kwdt6w2k%28VS.71%29.aspx).


What is the advantage of using TcpClient & TcpServer over Socket [duplicate]

Possible Duplicate:
TCPClient vs Socket in C#
Two computers have to communicate via TCP/IP to synchronize a certain process flow. What would be the advantage to use the wrapper classes TcpClient & TcpServer over a Socket object?
I have programmed it using the former, but it somehow seems too complicated to me, and the problem could be solved much more easily using the latter.
Any good advice for me?
The idea is that with the wrapper classes much of the code that you are likely to want has already been written for you.
Advantages of using the wrapper should be:
Validation already done
Less code to write
Already tested extensively
Code re-use is to be applauded where it makes sense to do so
Advantages of rolling your own:
You get exactly what you want
You can create your own syntax
Disadvantages of rolling your own:
You have to write ALL the code, including tests
If you are like me, you are probably not as knowledgeable as the specialist who wrote the wrapper
As a result, your code is likely to be less efficient than the code in the wrapper.
The decision is always yours. After all, you could actually rewrite the whole framework if you wanted to do so, but why would you bother?
You need to look at what is provided for you by the wrapper and decide for yourself whether it provides what you need. If it does, then I would say use it. If it fails to meet your requirements either write your own or extend the wrapper so that it does do what you want.
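To make the comparison concrete, here is a minimal sketch of the same one-shot send written both ways (the service on localhost:5000 is illustrative):

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class WrapperVsRawSocket
{
    static void UseTcpClient()
    {
        // TcpClient resolves the host and exposes a Stream, so it composes
        // with StreamReader/StreamWriter and cleans up via using.
        using (var client = new TcpClient("localhost", 5000))
        using (var stream = client.GetStream())
        {
            byte[] payload = Encoding.ASCII.GetBytes("hello\n");
            stream.Write(payload, 0, payload.Length);
        }
    }

    static void UseRawSocket()
    {
        // With Socket you manage the endpoint, connect, send, and shutdown
        // yourself, but you also get access to low-level socket options.
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Stream, ProtocolType.Tcp);
        socket.Connect(new IPEndPoint(IPAddress.Loopback, 5000));
        socket.Send(Encoding.ASCII.GetBytes("hello\n"));
        socket.Shutdown(SocketShutdown.Both);
        socket.Close();
    }
}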
Hope that helps.

Using static functions in an ASP.NET 3.5 website [closed]

I am building an ASP.NET web application in which I use several classes containing static functions for retrieving database values and such (based on the user's session, so their results are session-specific, not application-wide).
These functions can also be called from markup, which makes developing my GUI fast and easy.
Now I am wondering: is this the right way of doing things, or is it better to create a class, containing these functions and create an instance of the class when needed?
What will happen when there are a lot of visitors to this website? Will a visitor have to wait until the function is 'ready' if it's also called by another session? Or will IIS spread the workload over multiple threads?
Or is this just up to personal preferences and one should test what works best?
EDIT AND ADDITIONAL QUESTION:
I'm using code like this:
public class HandyAdminStuff
{
    public static string GetClientName(Guid clientId)
    {
        Client client = new ClientController().GetClientById(clientId);
        return client.Name;
    }
}
Will the Client and ClientController instances be disposed of after this function completes? Will the garbage collector reclaim them? Or will they continue to 'live' and bulk up memory every time the function is called?
Please, I don't need answers like 'measure instead of asking'; I know that. I'd like to get feedback from people who can give a good answer and maybe some pros and cons based on their experience. Thank you.
"Will a visitor have to wait until the function is 'ready' if it's also called by another session?"
Yes, that can happen if the function body is synchronized (e.g. it takes a lock for thread safety), or if it performs DB operations within a transaction that locks the database.
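As a sketch of the first case (the ReportBuilder name is illustrative), a shared lock inside a static method serializes every request through it, no matter which session the request belongs to:

public static class ReportBuilder
{
    private static readonly object Gate = new object();

    public static string Build(string sessionId)
    {
        lock (Gate) // every concurrent request queues here
        {
            return ComputeExpensiveReport(sessionId);
        }
    }

    private static string ComputeExpensiveReport(string id)
    {
        return id; // stand-in for slow work
    }
}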
Take a look at these threads:
http://forums.asp.net/t/1933971.aspx?THEORY%20High%20load%20on%20static%20methods%20How%20does%20net%20handle%20this%20situation%20
Does IIS give each connected user a thread?
It would be better to have instance-based objects, because they can also be easily disposed (connections, possibly?) and you wouldn't have to worry about multithreading issues, in addition to all the problems "peek" mentioned.
For example, each and every function of your static DAL layer should be atomic; that is, no variables should be shared between calls inside the DAL. It is a common mistake in ASP.NET to think that [ThreadStatic] data is safe to use inside static functions. The only safe pool for storing per-request data is the Context.Items pool; everything else is unsafe.
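A short sketch of the difference, assuming System.Web on the full .NET Framework (the RequestState class and key are illustrative):

using System;
using System.Web;

public static class RequestState
{
    // Risky in ASP.NET: worker threads are pooled and reused across requests,
    // and a request may even switch threads, so this field can leak stale
    // data from one request into another.
    [ThreadStatic]
    private static Guid currentClientId;

    // Safe: HttpContext.Items lives and dies with exactly one request.
    public static Guid ClientId
    {
        get { return (Guid)HttpContext.Current.Items["ClientId"]; }
        set { HttpContext.Current.Items["ClientId"] = value; }
    }
}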
Edit:
I forgot to answer your question regarding IIS threads. Each and every request from your customers is handled by a different thread. As long as you are not using Session State, concurrent requests from the same user will also be handled concurrently by different threads.
I would not recommend using static functions for retrieving data, because they will make your code harder to test and harder to maintain, and they can't take advantage of any OO principles for design. You will end up with more duplicate code, etc.

Why does the .NET framework rely on interfaces? [duplicate]

Possible Duplicate:
Why would I want to use Interfaces?
I am working on learning C# in depth. I am mostly confused by the frequent implementation of interfaces. I always read that this class implements this interface. For instance, the SqlConnection class implements IDbConnection. What is the benefit for developers in this case?
The interfaces are based on object-oriented principles; see SOLID, for example. You should not rely on the implementation of the classes you are working with: it should be sufficient to know only what they do and what they return. A good example with SqlConnection is that you may be able to change the DB you are using quite simply (to, e.g., MySQL or Oracle) by changing the implementation in just one place, provided your code correctly uses the interfaces and propagates the instances.
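A minimal sketch of that idea (the Clients table and connection string are illustrative): the repository depends only on IDbConnection, so the concrete provider is chosen in exactly one place.

using System;
using System.Data;
using System.Data.SqlClient;

public static class ClientRepository
{
    // Written against the interface, not against SqlConnection.
    public static int CountClients(IDbConnection connection)
    {
        connection.Open();
        using (IDbCommand command = connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM Clients";
            return Convert.ToInt32(command.ExecuteScalar());
        }
    }
}

// Usage: only this line knows the concrete provider. Swapping in a
// MySqlConnection or OracleConnection leaves CountClients untouched.
// using (IDbConnection conn = new SqlConnection("Server=.;Database=Crm;Integrated Security=true"))
// {
//     int total = ClientRepository.CountClients(conn);
// }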
An interface contains definitions for a group of related functionalities that a given type must implement (a sort of Method Signature Contract). It does not, however, guarantee the specific behavior of those implementations.
Interfaces are particularly useful because they allow the programmer to include behavior from multiple sources in a language like C#, which does not support multiple inheritance of classes.
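For example, a class can have at most one base class but any number of interfaces (IComparable<T> and IDisposable are just illustrative picks):

using System;

public class Session : IComparable<Session>, IDisposable
{
    public DateTime Started { get; set; }

    // IComparable<Session> makes instances sortable.
    public int CompareTo(Session other)
    {
        return Started.CompareTo(other.Started);
    }

    // IDisposable makes instances usable in a using block.
    public void Dispose()
    {
        // release resources here
    }
}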

What are the measurements for determining if code is thread-safe or not in .NET [duplicate]

Possible Duplicate:
Multi Threading
How can I measure whether code is thread-safe or not?
Maybe there are general guidelines or best practices?
I know that thread-safe code has to work across threads without unpredictable behavior, but that sometimes becomes very tricky and hard to achieve!
I came up with one simple rule, which is probably hard to implement and therefore theoretical in nature: code is not thread-safe if you can inject some Sleep operations into some places in the code and thereby change the outcome in a significant way. The code is thread-safe otherwise (there is no combination of delays that can change the result of the execution).
Not only your code should be taken into account when considering thread safety, but also other parts of the code, the framework, the operating system, and external factors like disk drives and memory: everything. That is why this rule of thumb is mainly theoretical.
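A minimal sketch of the rule in action: the injected Sleep widens the window between the read and the write-back, making the lost-update race reliably visible, so by this rule the code is not thread-safe.

using System;
using System.Threading;
using System.Threading.Tasks;

class SleepInjection
{
    static int counter;

    static void Main()
    {
        Parallel.For(0, 1000, _ =>
        {
            int read = counter; // read
            Thread.Sleep(1);    // injected delay changes the outcome
            counter = read + 1; // writes back a stale value
        });

        // Usually prints far less than 1000. Replacing the three lines above
        // with Interlocked.Increment(ref counter) prints 1000 every time,
        // with or without the Sleep.
        Console.WriteLine(counter);
    }
}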
I think the best answer would be here:
Multi Threading. I hadn't noticed that answer before writing this question.
I think it is better to close this one!
Thanks
Edit by 280Z28 (since I can't add a new answer to a closed question)
Thread safety of an algorithm or application is typically measured in terms of the consistency model which it is guaranteed to follow in the presence of multiple threads of execution (or multiple processes for distributed systems). The two most important things to examine are the following.
Are the pre- and post-conditions of individual methods preserved when multiple threads are used? For example, if your method "adds an element to a dynamically-sized list", then one post-condition would be that the size of the list increases by 1 as a result of the add method. If your algorithm is thread-safe, then calling the add method 2 times would result in the size increasing by exactly 2, regardless of which threads were used for the add operations. On the other hand, if the algorithm is not thread-safe, then using multiple threads for the 2 calls could result in anything, ranging from correctly adding the 2 items all the way to crashing the program entirely.
When changes are made to data used by algorithms in the program, when do those changes become visible to the other threads in the system? This is the consistency model of your code. Consistency models can be very difficult to understand fully, so I'll leave the link above as the starting place for your continued learning, along with a note that systems guaranteeing linearizability or sequential consistency are often the easiest to work with, although not necessarily the easiest to create.
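A short sketch of the post-condition point from item 1, using List<T> (which is not thread-safe) as the example:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class PostConditions
{
    static void Main()
    {
        var list = new List<int>();

        // 10000 adds from many threads: items can be lost or an exception
        // thrown, so the post-condition "10000 adds => Count == 10000" fails.
        Parallel.For(0, 10000, i => list.Add(i));
        Console.WriteLine(list.Count); // often prints less than 10000

        // A thread-safe collection preserves the post-condition.
        var bag = new System.Collections.Concurrent.ConcurrentBag<int>();
        Parallel.For(0, 10000, i => bag.Add(i));
        Console.WriteLine(bag.Count); // always 10000
    }
}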

Are immutable objects good practice? [closed]

Should I make my classes immutable where possible?
I once read the book "Effective Java" by Joshua Bloch, and he recommended making all business objects immutable for various reasons (for example, thread safety).
Does this apply for C# too?
Do you try to make your objects immutable, so you have less problems when working with them?
Or is it not worth the inconvenience you have to create them?
The immutable Eric Lippert has written a whole series of blog posts on the topic. Part one is here.
Quoting from the earlier post that he links to:
ASIDE: Immutable data structures are the way of the future in C#. It is much easier to reason about a data structure if you know that it will never change. Since they cannot be modified, they are automatically threadsafe. Since they cannot be modified, you can maintain a stack of past “snapshots” of the structure, and suddenly undo-redo implementations become trivial. On the down side, they do tend to chew up memory, but hey, that’s what garbage collection was invented for, so don’t sweat it.
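As a concrete illustration of the quote, a minimal immutable type in C# might look like this (the Point name and members are illustrative):

public sealed class Point
{
    public readonly int X;
    public readonly int Y;

    public Point(int x, int y) { X = x; Y = y; }

    // "Mutators" return a modified copy; the original never changes, so past
    // snapshots stay valid and instances are automatically thread-safe.
    public Point WithX(int x) { return new Point(x, Y); }
    public Point WithY(int y) { return new Point(X, y); }
}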
This is going to be more of an opinion type answer but...
I find that the ease of understanding a program, i.e. maintaining and debugging said application, is inversely proportional to the number of stateful transitions that occur during the processing of each component. The less state I need to cart around in my head, the more attention I can pay to the logic within the algorithms as written.
Immutable objects are the central feature of functional programming; they have their own advantages and disadvantages. (E.g. cyclic structures such as doubly linked lists are practically impossible to make immutable, but immutable objects make parallelism a piece of cake.) So, as a comment on your post noted, the answer is "it depends".
Off the top of my head, I can't think of a reason for immutable objects making thread safe code somehow "better".
If I want an object to be thread safe, I will either put a lock around it or I will make a copy of it and update the reference once I'm done working on it. I typically wouldn't want a new object for every little change.
For me, immutable strings create more headaches for threading than they solve.
I actually went out of my way to make an "in-place" ToUpper using unsafe code instead of the built-in String.ToUpper(). It runs about 4 times faster and consumes half the peak memory.
Another nice benefit of immutable structures is that you can locally cache instances of them and reuse them across multiple threads without fear of unexpected behaviors as would be the case if they were mutable.
For instance, suppose you are using an external caching service such as memcached or Velocity or some other equally simplistic distributed hashtable service. You could just use the C# client library and call it good enough. However, that is being wasteful with resources given a short-lived context like a web request scenario. What you really want is to pull each object from the cache once and only once in your context.
The safest way to get this job done is to place a local hashtable in your process in front of the cache provider. On the first request for a cache key, you pull down the serialized byte stream that represents the object you wish to use and store that byte stream in your local hashtable. On subsequent requests for the same cache key, you just look up the byte stream in the local hashtable and deserialize the object to a new instance for each request. This prevents multiple redundant trips to the cache server node for the same information, which presumably has not changed over the lifetime of your context.
With immutable structures, you could deserialize the byte stream only once on the first request and get away with storing the deserialized instance in the hashtable instead of the byte stream and just share that one single immutable instance of your object. This obviously cuts down on deserialization penalties which can add up rather quickly if your consuming code is written in such a fashion that it does not care how many calls it makes to the caching provider, assuming the cache is faster than querying your underlying data store.
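A sketch of that deserialize-once idea (the IDistributedCache interface and Deserialize helper below are hypothetical stand-ins for the memcached/Velocity client and your serializer of choice):

using System;
using System.Collections.Generic;

public interface IDistributedCache // hypothetical remote-cache client
{
    byte[] GetBytes(string key);
}

public sealed class RequestScopedCache
{
    private readonly IDistributedCache remote;
    private readonly Dictionary<string, object> local =
        new Dictionary<string, object>(); // one instance per request/context

    public RequestScopedCache(IDistributedCache remote)
    {
        this.remote = remote;
    }

    public T Get<T>(string key) where T : class
    {
        object cached;
        if (!local.TryGetValue(key, out cached))
        {
            byte[] bytes = remote.GetBytes(key); // single network round trip
            // Because T is immutable, this one deserialized instance can be
            // handed out to every caller; no defensive copies are needed.
            cached = Deserialize<T>(bytes);
            local[key] = cached;
        }
        return (T)cached;
    }

    private static T Deserialize<T>(byte[] bytes) where T : class
    {
        throw new NotImplementedException("plug in your serializer here");
    }
}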
Perhaps this is more of a subjective answer, but it's a specific problem that can be solved uniquely by using immutable structures so I thought it was relevant to share.
