I'm developing a tool that migrates issues from an old issue tracking system to a new one. I have separated everything with interfaces, but I'm not sure about the best way to glue it back together. I have three dependencies that require runtime data:
INewSystemClient - client to connect to new system
IRetryStrategy - handles timeouts and retries
IMigrationSettings
These three dependencies are dependencies of many others. I couldn't figure out any other way to glue everything together than registering these three as singletons (via the DI container). I also know that singletons are considered a bad pattern, so I'm considering switching to an abstract factory.
An example of the kind of relationship that forced me to use singletons:
Dependency1(INewSystemClient client, ...) // constructor for Dependency1
Dependency2(INewSystemClient client, ...) // constructor for Dependency2
INewSystemClient requires runtime data such as user, password, host, etc.
Should I switch to an abstract factory and have the factory create objects instead of the DI container?
I think you are confusing terms: just as the Singleton pattern (most now say it's an anti-pattern) is not the same as a singleton instance in your IoC container, the Abstract Factory pattern is not the same as a DI factory. What you need to think about is scopes, i.e. when the object is created and disposed.
In your desktop app there can be multiple scopes in which you can register an object (at an app level, i.e. "a singleton", at a module level, at a thread level, at a page level...). This usually depends on the framework you are using (Prism, MvvmLight, Caliburn.Micro...); if you are building your own system you might want to look at how some of the other frameworks did it.
I know Unity has a cool way of handling factories and lazy initializations.
Usually a singleton instance is best used for things that won't have their values changed from multiple threads. Otherwise you need to create locks, and that can slow things down in a big way, for example by blocking your UI thread. If, for example, you have an HttpClient that just calls a single backend API and that everyone can use, it would make sense to give it singleton scope.
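For illustration, here is a minimal sketch of that registration, assuming the Microsoft.Extensions.DependencyInjection container (the same idea applies to Unity, Windsor, etc.); IBackendApi and BackendApi are made-up names:

using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;

// IBackendApi / BackendApi are made-up names; the point is only the lifetimes.
public interface IBackendApi { }

public class BackendApi : IBackendApi
{
    private readonly HttpClient _client;

    // The shared HttpClient is injected; BackendApi itself stays transient.
    public BackendApi(HttpClient client) => _client = client;
}

public static class CompositionRoot
{
    public static IBackendApi Build()
    {
        var services = new ServiceCollection();

        // One HttpClient for everyone: singleton scope.
        services.AddSingleton<HttpClient>();

        // A new BackendApi per resolution, all sharing the same client.
        services.AddTransient<IBackendApi, BackendApi>();

        return services.BuildServiceProvider().GetRequiredService<IBackendApi>();
    }
}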
If, for example, you want to write to a database, you might want to have a different EF context per page so the entity tracking doesn't happen on two pages at once.
I have 3 dependencies, that require runtime data:
From your question it is unclear how those dependencies consume runtime data. If they require it during initialization, that's a code smell. If you are passing along that runtime data through method calls on already initialized (and immutable) classes, that's completely fine.
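To make that distinction concrete, here is a hedged sketch (all type names are invented for illustration): requiring the credentials in the constructor is the smell, while passing them through a method on an already-constructed, immutable component is fine.

// Hypothetical types for illustration only.
public class ConnectionInfo
{
    public string User;
    public string Password;
    public string Host;
}

public interface INewSystemClient
{
    void CreateIssue(ConnectionInfo connection, string title);
}

// Code smell: this variant cannot be constructed (or registered in the container)
// without runtime data such as credentials.
public class ClientRequiringRuntimeDataAtConstruction
{
    private readonly ConnectionInfo _connection;

    public ClientRequiringRuntimeDataAtConstruction(ConnectionInfo connection)
    {
        _connection = connection;
    }
}

// Fine: the component is constructed without runtime data and stays immutable;
// the runtime data flows through the method call on the already-initialized instance.
public class NewSystemClient : INewSystemClient
{
    public void CreateIssue(ConnectionInfo connection, string title)
    {
        // open a connection using the supplied runtime data, create the issue, ...
    }
}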
I also know, that singletons are considered a bad pattern, so I'm considering switching to abstract factory.
Filip Cordas already touched on this, but I like to repeat it: you are confusing two things. When it comes to applying DI, the Singleton pattern is a bad thing, but having a single instance of some class at runtime (a.k.a. the Singleton Lifestyle) is completely fine. Some (like me) prefer to register all components with the Singleton Lifestyle, since this forces immutability and statelessness, which simplifies registration and prevents all kinds of common misconfigurations, such as Captive Dependencies.
Should I switch to abstract factory and make factory create objects instead of DI container?
As explained here, Abstract Factories are typically not the right solution, and I consider them a code smell. They are typically used to build up application components using runtime data, but as stated earlier, application components should not require runtime data during construction.
I'm reading a book which says:
// code smell
public interface IProductRepositoryFactory {
    IProductRepository Create();
}
The Dependencies created by an Abstract Factory should conceptually require a runtime value, and the translation from a runtime value into an Abstraction should make sense.
By specifying an IProductRepositoryFactory Abstraction with a parameterless Create method, you let the consumer know that there are more instances of the given service, and that it has to deal with this. Because another implementation of IProductRepository might not require multiple instances or deterministic disposal at all, you're therefore leaking implementation details through the Abstract Factory with its parameterless Create method.
I'm a little bit confused here. What does "more instances of the given service" mean? Does it mean that you call a concrete factory's Create method multiple times? What's wrong with that? Even if you have a factory method that does have parameters, such as:
public interface IProductRepositoryFactory {
    IProductRepository Create(string type);
}
and if you call a concrete factory's Create method multiple times, there will be multiple instances too. So what's wrong with parameterless factory methods? What does it leak?
An abstraction is "leaky" when it fails to hide details of the underlying implementation that it is supposed to hide.
It isn't true that a parameterless factory method is always leaky; that of course depends on what the abstraction it returns is supposed to hide, and the author you reference never really specifies which details are exposed that were supposed to remain hidden.
But what he says is often correct. If you provide this IProductRepositoryFactory method, then the receiver can create as many instances of IProductRepository as it likes... but why would the receiver want to make one, or two, or a bunch? If you pass this factory interface, then the choice is probably important. The receiver must know about the kinds of trade-offs involved in making one instance vs. many. It probably has to do with caching, thread pooling, etc.
Often, this is the kind of implementation detail that the receiver should not have to know about.
But, you know...
It is actually pretty common and perfectly fine under many circumstances to inject interfaces that look a lot like this case, and this gets into fussing about the definitions of words.
For instance, you might pass in a factory that the receiver would use like this:
factory.createDocument().setTitle(title).setContent(content).save();
Perfectly fine. What's the difference? Well, in this case it's that the document we're creating is not a "Dependency". The factory itself is the dependency. The service it provides is the ability to create documents, which the caller will then own. These documents are obviously stateful and have identity. This is not something that the Document abstraction is supposed to hide at all, and so this is not a leaky abstraction.
Similar patterns happen a lot when working with multithreaded code. You will quite often have a thread-safe factory service that creates objects that are not thread-safe.
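As a rough sketch of that multithreaded shape (the names are invented; the point is only that the injected dependency is the thread-safe factory, while the products are short-lived, caller-owned objects):

// The factory is the (thread-safe) dependency that gets injected.
public interface IReportBuilderFactory
{
    ReportBuilder Create();
}

// The products are stateful, not thread-safe, and owned by whoever created them.
public class ReportBuilder
{
    private readonly System.Text.StringBuilder _content = new System.Text.StringBuilder();

    public ReportBuilder AddLine(string line)
    {
        _content.AppendLine(line);
        return this;
    }

    public string Build() => _content.ToString();
}

public class ReportBuilderFactory : IReportBuilderFactory
{
    // Safe to call from any thread; each call hands back a fresh, caller-owned builder.
    public ReportBuilder Create() => new ReportBuilder();
}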
does it mean that you call a concrete Factory's Create method multiple times? what's wrong with that?
Well, their point is that you don't know what to do with the product repository. Is it a singleton? Then it should be an actual singleton instance, not coming from a factory. Is it disposable? Well, you're returning an IProductRepository, not an IDisposable, so there's nothing to suggest that you should be disposing it.
even if you have factory methods that do have parameters [...] if you call a concrete Factory's Create method multiple times there will be multiple instances too
I believe their thinking is that you'd be getting already built instances based on your parameters that are cached between runs, so there's no disposing involved.
I'm not sure I fully agree with their thinking, but I will say that in my opinion you'll never sell this pattern to me. Either use a singleton, or dependency injection (which supersedes singletons as well).
To address this question properly really requires reading the full article for any readers of the answers here to have the necessary context. In terms of the specifics of your particular question, here's some clarification on the concerns that are raised:
Injecting the factory instead of the service itself puts the onus of lifetime management of the dependency (IProductRepository) on the consumer (HomeController). If the dependency was injected instead of the factory, a proxy class or the IoC framework could be charged with lifetime management, freeing up the consumer to focus on working with the API surface of the dependency.
Use of a proxy repository class could further limit the leak, as IDisposable would no longer be implemented by IProductRepository nor exposed to the consumer since the proxy would manage lifetime.
The Create method of the factory implies that multiple instances of whatever implementation is being handed back can be created. Again, this places extra and unnecessary responsibility on the consumer to manage, or at least be concerned with, this when it could be handled elsewhere. As pointed out in Matt's answer, there are circumstances where it's perfectly fine, if not expected, to be able to generate more than one instance from a factory. In the article in question it's really a matter of the repository pattern and the conventions that come with it that make the design awkward; typically it doesn't make sense to have multiple instances of a given type of repository, but by injecting the factory instead of the repository instance, the code allows this, thus creating the leak.
Overall, most of the concern here revolves around the fact that a factory is being injected as a dependency instead of the dependency itself, which ultimately requires more knowledge of the dependency than is necessary on the consumer end. The parameterless factory method returning an abstraction further compounds this. If some runtime information needed to be provided in order for that factory to make a decision as to what concrete type to instantiate and hand back as an abstraction, injecting the factory would make more sense. As it stands it's just not great design, and poor design can lead to additional mental overhead. The fact that the Create method hands back an interface instance instead of a concrete instance, without accepting any parameters, might a) raise questions as to why the factory would hand back an instance of one type or another, thus requiring knowledge of how the factory makes its decisions, or b) require knowledge of the fact that there is only one implementation of IProductRepository. Neither of these is something that the consumer, or the developers leveraging the dependency, should really have to concern themselves with. A proper abstraction combined with proper IoC mitigates these concerns.
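As a rough illustration of the difference being described (HomeController and the types here are simplified placeholders, not the article's actual code):

// Hypothetical, simplified types for illustration; not the article's actual code.
public interface IProductRepository { void Save(string product); }

public interface IProductRepositoryFactory { IProductRepository Create(); }

// Leaky variant: the consumer gets a factory, so it has to decide how many
// repositories to create, when to create them, and whether to dispose them.
public class HomeControllerUsingFactory
{
    private readonly IProductRepositoryFactory _factory;

    public HomeControllerUsingFactory(IProductRepositoryFactory factory) => _factory = factory;

    public void Index()
    {
        var repository = _factory.Create();   // lifetime decisions leak in here
        repository.Save("example");
    }
}

// Preferred variant: the consumer gets the abstraction itself; the composition
// root (or a proxy) owns creation and disposal, so nothing about lifetime leaks.
public class HomeControllerUsingRepository
{
    private readonly IProductRepository _repository;

    public HomeControllerUsingRepository(IProductRepository repository) => _repository = repository;

    public void Index() => _repository.Save("example");
}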
It might have been asked before, but I cannot find, even on the official site, why I should use MediatR and what problems it solves.
Is it because I can pass a single object in my constructor rather than a multitude of Interfaces?
Is it a replacement or competitor of ServicesBus etc...
Basically, what are the benefits and what problems does it solve?
I want to buy into it but it's not clear to me why I should use it.
Many thanks.
Is it because I can pass a single object in my constructor rather than a multitude of Interfaces?
No.
Is it a replacement or competitor of ServicesBus etc...
No.
Basically, what are the benefits and what problems does it solve?
Among other things, one of the problems MediatR is trying to solve is DI constructor explosion in your MVC controllers:
public DashboardController(
    ICustomerRepository customerRepository,
    IOrderService orderService,
    ICustomerHistoryRepository historyRepository,
    IOrderRepository orderRepository,
    IProductRepository productRepository,
    IRelatedProductsRepository relatedProductsRepository,
    ISupportService supportService,
    ILog logger
)
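For comparison, here is a rough sketch of what such a controller tends to look like once MediatR is in the picture; GetDashboardQuery, DashboardViewModel and the handler are invented names, not types shipped by the library:

using System.Threading;
using System.Threading.Tasks;
using MediatR;

// The controller now depends on IMediator only; the work is described by a request object.
public class DashboardController
{
    private readonly IMediator _mediator;

    public DashboardController(IMediator mediator) => _mediator = mediator;

    public Task<DashboardViewModel> Index(CancellationToken ct)
        => _mediator.Send(new GetDashboardQuery(), ct);
}

// Hypothetical request/response pair.
public class GetDashboardQuery : IRequest<DashboardViewModel> { }

public class DashboardViewModel { }

// The dependencies move into the handler, which the DI container wires up.
public class GetDashboardQueryHandler : IRequestHandler<GetDashboardQuery, DashboardViewModel>
{
    public Task<DashboardViewModel> Handle(GetDashboardQuery request, CancellationToken cancellationToken)
        => Task.FromResult(new DashboardViewModel());
}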
This is a highly debated topic and there is no one-size-fits-all solution; take a look at this question:
How to avoid Dependency Injection constructor madness?
If you want to hide dependencies behind even more abstractions, then at this point you will want to take a look at all the options, like refactoring, separating concerns a little more, or other techniques.
In all honesty, the example problem and solution given on the MediatR website are a little suspect; however, it does have its uses. In short, you need to choose what's right for you and your environment.
Overview of the Mediator Pattern
A mediator is an object that makes decisions on how and when objects interact with each other. It encapsulates the “how” and coordinates execution based on state, the way it’s invoked or the payload you provide to it.
In regards to the spirit of your question, you should really have a look at this site:
Simplifying Development and Separating Concerns with MediatR
MediatR is an open source implementation of the mediator pattern that doesn't try to do too much and performs no magic. It allows you to compose messages, create and listen for events using synchronous or asynchronous patterns. It helps to reduce coupling and isolate the concerns of requesting the work to be done and creating the handler that dispatches the work.
More about Mediator Pattern
Can you, in your own opinion, describe why you would use it?
The mediator pattern helps decouple your application via communication through a mediator (it's a thing).
Usually a program is made up of a large number of classes. However, as more classes are added to a program, the problem of communication between these classes may become more complex. This makes the program harder to read and maintain. Furthermore, it can become difficult to change the program, since any change may affect code in several other classes.
With the mediator pattern, communication between objects is encapsulated within a mediator object. Objects no longer communicate directly with each other (decoupling), but instead communicate through the mediator. This reduces the dependencies between communicating objects, thereby reducing coupling.
In modern software, the mediator pattern is usually found within many frameworks, however you can create your own, or use one of many that are available.
From here, I think you should probably just do more research. I mean, usually you figure out you need these things before you research them; however, in this case I think you really need to find some good examples to know whether you want the mediator pattern, and even more so the MediatR library.
Update
wired_in had some great practical comments on this:
All MediatR does is service locate a handler for a request. That is not the mediator pattern. The "mediator" in this instance does not describe how two objects communicate, it uses inversion of control that is already being used in an application and just provides a useless layer of abstraction that only serves to make an application harder to reason about as a whole. You already achieve decoupling by using standard constructor injection with IoC. I don't understand why people buy into this. Let's create multiple composite roots just so we don't have to put interfaces in our constructor.
and
The OP is completely justified in questioning the point of MediatR. The top responses I hear to the question involve explaining the use of the mediator pattern in general, or that it makes the calling code cleaner. The former explanation assumes that the MediatR library actually implements the mediator pattern, which is far from clear. The latter is not a justification for adding another abstraction on top of an already abstracted IoC container, which creates multiple composite roots. Just inject the handler instead of service locating it.
It is just a way to implement communication between your business logic components.
Imagine that you have:
FirstRequest // which is handled by FirstRequestHandler(FirstRequest)
SecondRequest // which is handled by SecondRequestHandler(SecondRequest)
ThirdRequest // which is handled by ThirdRequestHandler(ThirdRequest)
... there are hundreds of them ...
And then comes ComplexRequest, whose ComplexResponse has to be a combination of FirstResponse and ThirdResponse.
How should we solve this?
Well, ComplexRequestHandler would have to inject FirstHandler and ThirdHandler, get their results, and combine them.
But why should ComplexRequestHandler have access to the FirstRequestHandler interface?
Why should we bother to inject First, Third, ... OneHundredAndTwentiethHandler into our ComplexHandler?
What MediatR gives us in such a use case is a third party that tells us:
"Give me a request, and I'll get you the right response. Trust me!"
So ComplexHandler doesn't know anything about FirstHandler and ThirdHandler.
It knows only about the required requests and responses (which are usually just wrapper DTOs).
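Here is a rough sketch of that shape using MediatR's interfaces (the request and response types are invented placeholders):

using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Hypothetical request/response types for illustration.
public class FirstRequest : IRequest<FirstResponse> { }
public class FirstResponse { }
public class ThirdRequest : IRequest<ThirdResponse> { }
public class ThirdResponse { }

public class ComplexRequest : IRequest<ComplexResponse> { }
public class ComplexResponse
{
    public FirstResponse First { get; set; }
    public ThirdResponse Third { get; set; }
}

// The complex handler knows nothing about FirstRequestHandler or ThirdRequestHandler;
// it only sends requests to the mediator and combines the responses.
public class ComplexRequestHandler : IRequestHandler<ComplexRequest, ComplexResponse>
{
    private readonly IMediator _mediator;

    public ComplexRequestHandler(IMediator mediator) => _mediator = mediator;

    public async Task<ComplexResponse> Handle(ComplexRequest request, CancellationToken ct)
    {
        var first = await _mediator.Send(new FirstRequest(), ct);
        var third = await _mediator.Send(new ThirdRequest(), ct);
        return new ComplexResponse { First = first, Third = third };
    }
}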
Note: You don't necessarily have to use the MediatR library for that. You can read about the mediator pattern and implement one yourself.
TL;DR Version
Can you explain the concept of dependency injection to an 'enthusiast programmer' with a fundamental understanding of programming, i.e. classes, functions and variables?
What is the purpose of dependency injection? Is it purely a readability / ease-of-programming concept, or does it provide compile-time benefits as well?
My original more waffly version!
My coding skills are reasonably rudimentary (it isn't my primary skill, but it does sometimes come in handy to proof-of-concept something). I can hack stuff together and make things work, but I'm always perfectly aware that there are a host of better / more efficient ways of doing things.
Primarily I bounce things around between functions and classes and variables! (like what I learnt on my c64 a long time ago!)
Dependency injection seems to be everywhere lately, and while I think I kind of get it, I feel like I'm missing a point (or something)
The problem seems to be when I try to google around to understand what it is at a fundamental level, it very quickly gets very complicated and my head hurts (I'm a bear of very small brain and big words confuse me!)
So I'm hoping someone can explain dependency injection to a five year old! What is it, why do I need it, how does it make my coding better. (ideally working in concepts of functions, classes and variables!)
This is largely language independent; it seems to be a thing that all languages use, but my language of choice is usually C# (typically ASP/MVC, though some native Windows / console), and I've recently started poking around with Node.js also.
Actually it seems this is a duplicate of this question - What is dependency injection?
(which fared much better than my version! - that's what I get for waffling)
Dependency injection allows the developer to put a class reference into a project at will without having to create it as an object themselves.
In the case of Spring, where I have the most knowledge of DI, I would put the classpath in a config.xml file and can pass this class reference to any class where I need it to be called. The class being injected is like a singleton, as only that one reference is being passed around, without declaring it as a singleton. It is usually a service class that is injected.
Dependency injection allows a client the flexibility of being configurable. Only the client's behaviour is fixed. This can save time: you can swap out what is to be injected in an XML file without the need to recompile the project.
This is why I say it is more like a singleton, using only one instance:
public class Foo {
    // Constructor
    public Foo() {
        // Specify a specific implementation in the constructor instead of using dependency injection
        Service service1 = new ServiceExample();
    }

    private void RandomMethod() {
        // A second, separate instance of the same service is created here
        Service service2 = new ServiceExample();
    }
}
Here the same service is used twice, because two instances are created. I have seen projects where class files have become so big that a service class was created three times throughout the one class in different methods.
public class Foo {
    // Internal reference to the service
    private Service service1;

    // Constructor
    public Foo(Service service1) {
        this.service1 = service1;
    }
}
The same issue can be created in the second example, but by having all dependencies in the constructor it becomes a little more obvious to the developer looking at the code what is being used, and that the service has already been created from the start.
Code Injection may have various meanings depending on context.
For example, in a security context it can mean malicious code being injected into your application (e.g. SQL injection).
In other contexts (e.g. aspect-oriented programming) it might mean a way to patch a method with additional code for an aspect.
Dependency injection is something different and means a way for one part of code (e.g. a class) to have access to its dependencies (other parts of code, e.g. other classes, that it depends upon) in a modular way without them being hardcoded (so they can change or be overridden freely, or even be loaded at another time, as needed).
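To ground that in the question's terms (classes, functions, variables), here is a tiny sketch in C#; the INotifier/OrderProcessor names are made up purely for illustration:

// The dependency is expressed as an abstraction...
public interface INotifier
{
    void Notify(string message);
}

public class EmailNotifier : INotifier
{
    public void Notify(string message) { /* send an email */ }
}

// ...and handed to the class from outside (here via the constructor) instead of
// being hardcoded with "new EmailNotifier()" inside the class.
public class OrderProcessor
{
    private readonly INotifier _notifier;

    public OrderProcessor(INotifier notifier) => _notifier = notifier;

    public void Process(string orderId) => _notifier.Notify($"Order {orderId} processed");
}

public static class Program
{
    public static void Main()
    {
        // The caller decides which implementation to inject; swapping it
        // (e.g. for a fake in a test) requires no change to OrderProcessor.
        var processor = new OrderProcessor(new EmailNotifier());
        processor.Process("42");
    }
}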
I will have the following components in my application
DataAccess
DataAccess.Test
Business
Business.Test
Application
I was hoping to use Castle Windsor as the IoC container to glue the layers together, but I am a bit uncertain about the design of the gluing.
My question is who should be responsible for registering the objects into Windsor?
I have a couple of ideas:
Each layer can register its own objects. To test the BL, the test bench could register mock classes for the DAL.
Each layer can register the objects of its dependencies, e.g. the business layer registers the components of the data access layer. To test the BL, the test bench would have to unload the "real" DAL objects and register the mock objects.
The application (or test app) registers all objects of the dependencies.
I am seeking some ideas and pros/cons with the different paths.
In general, all components in an application should be composed as late as possible, because that ensures maximum modularity, and that modules are as loosely coupled as possible.
In practice, this means that you should configure the container at the root of your application.
In a desktop app, that would be in the Main method (or very close to it)
In an ASP.NET (including MVC) application, that would be in Global.asax
In WCF, that would be in a ServiceHostFactory
etc.
The container is simply the engine that composes modules into a working application. In principle, you could write the code by hand (this is called Poor Man's DI), but it is just so much easier to use a DI Container like Windsor.
Such a Composition Root will ideally be the only piece of code in the application's root, making the application a so-called Humble Executable (a term from the excellent xUnit Test Patterns) that doesn't need unit testing in itself.
Your tests should not need the container at all, as your objects and modules should be composable, and you can directly supply Test Doubles to them from the unit tests. It is best if you can design all of your modules to be container-agnostic.
Also, specifically in Windsor, you should encapsulate your component registration logic within installers (types implementing IWindsorInstaller). See the documentation for more details.
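As a minimal sketch of that advice (the IProductRepository/ProductRepository pair is a placeholder, and the calls assume a reasonably recent Castle Windsor version):

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using Castle.Windsor.Installer;

// One installer per layer/assembly keeps registration close to the components it describes.
public class DataAccessInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            Component.For<IProductRepository>()
                     .ImplementedBy<ProductRepository>()
                     .LifestyleSingleton());
    }
}

public static class Program
{
    public static void Main()
    {
        // The Composition Root: the only place that knows about the container.
        using var container = new WindsorContainer();
        container.Install(FromAssembly.This());

        var repository = container.Resolve<IProductRepository>();
        // ... run the application ...
    }
}

// Placeholder types for the sketch.
public interface IProductRepository { }
public class ProductRepository : IProductRepository { }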
While Mark's answer is great for web scenarios, the key flaw with applying it for all architectures (namely rich-client - ie: WPF, WinForms, iOS, etc.) is the assumption that all components needed for an operation can/should be created at once.
For web servers this makes sense since every request is extremely short-lived and an ASP.NET MVC controller gets created by the underlying framework (no user code) for every request that comes in. Thus the controller and all its dependencies can easily be composed by a DI framework, and there is very little maintenance cost to doing so. Note that the web framework is responsible for managing the lifetime of the controller and for all purposes the lifetime of all its dependencies (which the DI framework will create/inject for you upon the controller's creation). It is totally fine that the dependencies live for the duration of the request and your user code does not need to manage the lifetime of components and sub-components itself. Also note that web servers are stateless across different requests (except for session state, but that's irrelevant for this discussion) and that you never have multiple controller/child-controller instances that need to live at the same time to service a single request.
In rich-client apps however this is very much not the case. If using an MVC/MVVM architecture (which you should!) a user's session is long-living and controllers create sub-controllers / sibling controllers as the user navigates through the app (see note about MVVM at the bottom). The analogy to the web world is that every user input (button click, operation performed) in a rich-client app is the equivalent of a request being received by the web framework. The big difference however is that you want the controllers in a rich-client app to stay alive between operations (very possible that the user does multiple operations on the same screen - which is governed by a particular controller) and also that sub-controllers get created and destroyed as the user performs different actions (think about a tab control that lazily creates the tab if the user navigates to it, or a piece of UI that only needs to get loaded if the user performs particular actions on a screen).
Both these characteristics mean that it's the user code that needs to manage the lifetime of controllers/sub-controllers, and that the controllers' dependencies should NOT all be created upfront (ie: sub-controllers, view-models, other presentation components etc.). If you use a DI framework to perform these responsibilities you will end up with not only a lot more code where it doesn't belong (See: Constructor over-injection anti-pattern) but you will also need to pass along a dependency container throughout most of your presentation layer so that your components can use it to create their sub-components when needed.
Why is it bad that my user-code has access to the DI container?
1) The dependency container holds references to a lot of components in your app. Passing this bad boy around to every component that needs to create/manage another sub-component is the equivalent of using globals in your architecture. Worse yet, any sub-component can also register new components into the container, so soon enough it will become a global storage as well. Developers will throw objects into the container just to pass around data between components (either between sibling controllers or between deep controller hierarchies - ie: an ancestor controller needs to grab data from a grandparent controller). Note that in the web world, where the container is not passed around to user code, this is never a problem.
2) The other problem with dependency containers versus service locators / factories / direct object instantiation is that resolving from a container makes it completely ambiguous whether you are CREATING a component or simply REUSING an existing one. Instead it is left up to a centralized configuration (ie: bootstrapper / Composition Root) to figure out what the lifetime of the component is. In certain cases this is okay (ie: web controllers, where it is not user code that needs to manage the component's lifetime but the runtime request-processing framework itself). This is extremely problematic, however, when the design of your components should INDICATE whether it's their responsibility to manage a component and what its lifetime should be. (Example: a phone app pops up a sheet that asks the user for some info. This is achieved by a controller creating a sub-controller which governs the overlaying sheet. Once the user enters some info the sheet is dismissed, and control is returned to the initial controller, which still maintains state from what the user was doing prior.) If DI is used to resolve the sheet sub-controller, it's ambiguous what its lifetime should be or who should be responsible for managing it (the initiating controller). Compare this to the explicit responsibility dictated by the use of other mechanisms.
Scenario A:
// not sure whether I'm responsible for creating the thing or not
DependencyContainer.GimmeA<Thing>()
Scenario B:
// responsibility is clear that this component is responsible for creation
Factory.CreateMeA<Thing>()
// or simply
new Thing()
Scenario C:
// responsibility is clear that this component is not responsible for creation, but rather only consumption
ServiceLocator.GetMeTheExisting<Thing>()
// or simply
ServiceLocator.Thing
As you can see, DI makes it unclear who is responsible for the lifetime management of the sub-component.
NOTE:
Technically speaking many DI frameworks do have some way of creating components lazily (See: How not to do dependency injection - the static or singleton container), which is a lot better than passing the container around, but you are still paying the cost of mutating your code to pass around creation functions everywhere, you lack first-level support for passing in valid constructor parameters during creation, and at the end of the day you are still using an indirection mechanism unnecessarily in places where the only benefit is to achieve testability, which can be achieved in better, simpler ways (see below).
What does all this mean?
It means DI is appropriate for certain scenarios, and inappropriate for others. In rich-client applications it happens to carry a lot of the downsides of DI with very few of the upsides. The further your app scales out in complexity the bigger the maintenance costs will grow. It also carries the grave potential for misuse, which depending on how tight your team communication and code review processes are, can be anywhere from a non-issue to a severe tech debt cost. There is a myth going around that Service Locators or Factories or good old Instantiation are somehow bad and outdated mechanisms simply because they may not be the optimal mechanism in the web app world, where perhaps a lot of people play in. We should not over-generalize these learnings to all scenarios and view everything as nails just because we've learned to wield a particular hammer.
My recommendation FOR RICH-CLIENT APPS is to use the minimal mechanism that meets the requirements for each component at hand. 80% of the time this should be direct instantiation. Service locators can be used to house your main business layer components (ie: application services, which are generally singleton in nature), and of course Factories and even the Singleton pattern also have their place. There is nothing to say you can't use a DI framework hidden behind your service locator to create your business layer dependencies and everything they depend on in one go, if that ends up making your life easier in that layer, and that layer doesn't exhibit the lazy loading which rich-client presentation layers overwhelmingly do. Just make sure to shield your user code from access to that container so that you can prevent the mess that passing a DI container around can create.
What about testability?
Testability can absolutely be achieved without a DI framework. I recommend using an interception framework such as UnitBox (free) or TypeMock (pricey). These frameworks give you the tools you need to get around the problem at hand (how do you mock out instantiation and static calls in C#) and do not require you to change your whole architecture to get around them (which unfortunately is where the trend has gone in the .NET/Java world). It is wiser to find a solution to the problem at hand and use the natural language mechanisms and patterns optimal for the underlying component than to try to fit every square peg into the round DI hole. Once you start using these simpler, more specific mechanisms you will notice there is very little need for DI in your codebase, if any at all.
NOTE: For MVVM architectures
In basic MVVM architectures view-models effectively take on the responsibility of controllers, so for all purposes consider the 'controller' wording above to apply to 'view-model'. Basic MVVM works fine for small apps, but as the complexity of an app grows you may want to use an MVCVM approach. View-models become mostly dumb DTOs to facilitate data-binding to the view, while interaction with the business layer and between groups of view-models representing screens/sub-screens gets encapsulated into explicit controller/sub-controller components. In either architecture the responsibility of controllers exists and exhibits the same characteristics discussed above.
I was thinking of making an internal data access class a Singleton but couldn't convince myself on the choice mainly because the class has no state except for local variables in its methods.
What is the purpose of designing such classes to be Singletons after all?
Is it warranting sequential access to the database which is not convincing since most modern databases could handle concurrency well?
Is it the ability to use a single connection repeatedly which could be taken care of through connection pooling?
Or Is it saving memory by running a single instance?
Please enlighten me on this one.
I've found that the singleton pattern is appropriate for a class that:
Has no state
Is full of basic "Service Members"
Has to tightly control its resources.
An example of this would be a data access class.
You would have methods that take in parameters and return, say, a DataReader, but you don't manipulate the state of the reader in the singleton; you just get it and return it.
At the same time, you can take logic that could be spread among your project (for data access) and integrate it into a single class that manages its resources (database connections) properly, regardless of who is calling it.
All that said, Singleton was invented prior to the .NET concept of fully static classes, so I am on the fence as to whether you should go one way or the other. In fact, that is an excellent question to ask.
From "Design Patterns: Elements Of Reusable Object-Oriented Software":
It's important for some classes to have exactly one instance. Although there can be many printers in a system, there should only be one printer spooler. There should only be one file system and one window manager. ...
Use the Singleton pattern when:
there must be exactly one instance of a class, and it must be accessible to clients from a well-known access point
the sole instance should be extensible by subclassing and clients should be able to use an extended instance without modifying their code
Generally speaking, in web development, the only things that should actually implement Singleton pattern are in the web framework itself; all the code you write in your app (generally speaking) should assume concurrency, and rely on something like a database or session state to implement global (cross-user) behaviors.
You probably wouldn't want to use a Singleton for the circumstances you describe. Having all connections to a DB go via a single instance of a DBD/DBI type class would seriously throttle your request throughput performance.
The Singleton is a useful Design Pattern for allowing only one instance of your class. The Singleton's purpose is to control object creation, limiting the number to one but allowing the flexibility to create more objects if the situation changes. Since there is only one Singleton instance, any instance fields of a Singleton will occur only once per class, just like static fields.
Source: java.sun.com
using a singleton here doesn't really give you anything, but limits flexibility
you WANT concurrency or you won't scale
worrying about connections and memory here is a premature optimization
As one example, object factories are very often good candidates to be singletons.
If a class has no state, there's no point in making it a singleton; all well-behaved languages will only create, at most, a single pointer to the vector table (or equivalent structure) for dispatching the methods.
If there is instance state that can vary among instances of the class, then a singleton pattern won't work; you need more than one instance.
It follows, then, by exhaustion, that the only cases in which Singleton should be used is when there is state that must be shared among all accessors, and only state that must be shared among all accessors.
There are several things that can lead to something like a singleton:
the Factory pattern: you construct and return an object, using some shared state.
Resource pools: you have a shared table of some limited resources, like database connections, that you must manage among a large group of users. (The bumpo version is where there is one DB connection held by a singleton.)
Concurrency control of an external resource; a semaphore is generally going to be a variant of singleton, because P/V operations must atomically modify a shared counter.
The Singleton pattern has lost a lot of its shine in recent years, mostly due to the rise of unit testing.
Singletons can make unit testing very difficult- if you can only ever create one instance, how can you write tests that require "fresh" instances of the object under test? If one test modifies that singleton in some way, any further tests against that same object aren't really starting with a clean slate.
Singletons are also problematic because they're effectively global variables. We had a threading issue a few weeks back at my office due to a Singleton global that was being modified from various threads; the developer was blinded by the use of a sanctioned "Pattern", not realizing that what he was really creating was a global variable.
Another problem is that it can be pathologically difficult to create true singletons in certain situations. In Java for example, it's possible to create multiple instances of your "singleton" if you do not properly implement the readResolve() method for Serializable classes.
Rather than creating a Singleton, consider providing a static factory method that returns an instance; this at least gives you the ability to change your mind down the road without breaking your API.
Josh Bloch has a good discussion of this in Effective Java.
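A tiny sketch of that suggestion (Configuration is a placeholder name): callers go through the static factory method, so whether they get a cached instance or a fresh one can change later without breaking them.

public sealed class Configuration
{
    // Today: hand out one shared instance.
    private static readonly Configuration Shared = new Configuration();

    private Configuration() { }

    // Callers depend on this method, not on the "single instance" detail,
    // so the policy below can change (per-call, pooled, ...) without breaking the API.
    public static Configuration GetInstance() => Shared;
}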
You have a repository layer that you want created once, and that reference used everywhere else.
If you go with a standard singleton, there is a bad side effect: you basically kill testability. All code is tightly coupled to the singleton instance. Now you cannot test any code without hitting the database (which greatly complicates unit testing).
My advice:
Find an IOC that you like and integrate it into your application (StructureMap, Unity, Spring.Net, Castle Windsor, Autofac, Ninject...pick one).
Implement an interface for your repository.
Tell the IOC to treat the repository as a singleton, and to return it when code is asking for the repository by the interface.
Learn about dependency injection.
This is a lot of work for a simple question. But you will be better off.
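As a hedged illustration of the payoff (ICustomerRepository, OrderService and the fake are invented names): because consumers depend on the interface, a unit test can hand in a fake without touching the container or the database.

using System.Collections.Generic;

// Invented types for illustration only.
public interface ICustomerRepository
{
    string GetCustomerName(int id);
}

public class OrderService
{
    private readonly ICustomerRepository _customers;

    public OrderService(ICustomerRepository customers) => _customers = customers;

    public string DescribeOrder(int customerId) => $"Order for {_customers.GetCustomerName(customerId)}";
}

// In production the container returns the single repository instance;
// in a test we simply new up a fake - no database, no container required.
public class FakeCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<int, string> _data = new Dictionary<int, string> { [1] = "Ada" };

    public string GetCustomerName(int id) => _data[id];
}

public static class OrderServiceTest
{
    public static void DescribesOrderUsingFakeData()
    {
        var service = new OrderService(new FakeCustomerRepository());
        System.Diagnostics.Debug.Assert(service.DescribeOrder(1) == "Order for Ada");
    }
}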
With C#, I would say that a singleton is rarely appropriate. Most uses for a singleton are better resolved with a static class. Being mindful of thread safety is extremely important, though, with anything static. For database access, you probably don't want a single connection, as mentioned above. Your best bet is to create a connection and use the built-in pooling. You can create a static method that returns a fresh connection to reduce code, if you like. However, an ORM pattern/framework may be better still.
In C# 3.5, extension methods may be more appropriate than a static class.