Interface best practices [closed] - c#

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I'm working on an application that is split up in the following way (simplified a bit):
App
|
Framework
|
Data (Persistence)
|
Data.Couchbase
Inside App we're setting up the DI container and registering which concrete implementations will be used when a particular interface is needed.
E.g. IUserRepository in the Data namespace will be fulfilled by CouchbaseUserRepository in the Data.Couchbase namespace. In the future, if we swap out the persistence layer for another technology, say Mongo, we could update the DI registration and it would be fulfilled by, say, MongoUserRepository.
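For illustration, the registration described above might look something like this. This is a sketch assuming a Microsoft.Extensions.DependencyInjection-style API; any container (Unity, Autofac, Simple Injector) has an equivalent call, and the type names are the hypothetical ones from the question:

```csharp
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Today, Couchbase fulfils the Data-layer contract:
services.AddScoped<IUserRepository, CouchbaseUserRepository>();

// Swapping persistence later is a one-line change:
// services.AddScoped<IUserRepository, MongoUserRepository>();

// Consumers only ever ask for the interface:
IUserRepository repo = services.BuildServiceProvider()
                               .GetRequiredService<IUserRepository>();
```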
All well and good....
Question 1
There is an obvious benefit to providing interfaces that cross system boundaries, but what about within the Data.Couchbase namespace itself? Is there any point in having an ICouchbaseUserRepository interface if it doesn't provide any extra functionality itself? It seems as though registering IUserRepository -> CouchbaseUserRepository should be good enough. Similarly, within concrete implementations, is there any point in splitting those components up further into interfaces that probably won't be swapped out?
Question 2
Similarly, is it worth having a bunch of interfaces in Framework whose only purpose is to proxy on to interfaces in Data, so that App can have a dependency only on Framework and not also have to depend on Data? I can see the advantage... actually, maybe I can't. It seems like we're going to end up with hundreds of interfaces in Framework when we could just have a dependency on Data, especially if these assemblies are going to live on the same tin.
Cheers

Answer 1
CouchbaseUserRepository implementing IUserRepository makes great sense. All of your application logic uses only the interface, so that if you later switch to Mongo you only have to make your MongoUserRepository implement the same interface.
Answer 2
If you're building a large application, definitely go with two layers of interfaces: db level, and framework level. If it's not so big, it might be too much. In either case it would not be incorrect to create interfaces at your framework level as well.

Both of these questions fall into a coding and design style realm. As such they might be a better fit for the Programmers site. (https://softwareengineering.stackexchange.com/)
When dealing with issues of design the leading practice is clear -- you should have an interface that defines your DB tier, and then you can inject a new type of data access with ease.
Where there is no functional need for an interface (as you describe in your questions) the need for the interface is to provide clarity -- it is coding style which makes your system clearer.
For example, in a contact management system it might make sense to define an interface to a database / system "object" which defines a contact. Then in addition make interfaces for people and organizations. This is helpful in the context of this application. Creating such interfaces for a document management system would make no sense. For both of them you would want the interface defined for DB injection.
You as the designer and programmer have to decide -- there are no hard and fast rules when it comes to style choices in designing a system.

Related

Is it a BAD design to have [DataContract] marked class in the business layer? [closed]

Closed 10 years ago.
I need to use a class that represents a Client Request that is sent to my REST WCF Service.
But I want to pass this request also to my method in the business layer. (Currently it is part of the WCF service)
Is it a BAD design to have [DataContract] marked class in the business layer?
So, as I stated in a comment:
If you feel bad about it, introduce an interface that is implemented by that class and use that in your business layer. Note that the DataContract attribute is merely a marker for the serialization engine.
I found this to be a useful answer. Basically, your POCOs can be serialized by DataContractSerializer without adding the [DataContract] attributes.
https://stackoverflow.com/a/14185417/1099260
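A minimal sketch of that point (class and member names hypothetical). Since .NET 3.5 SP1, DataContractSerializer can handle plain POCO types by serializing all public read/write properties, so no attribute is required:

```csharp
using System.IO;
using System.Runtime.Serialization;

// A plain POCO -- no [DataContract] or [DataMember] attributes.
public class ClientRequest
{
    public int Id { get; set; }
    public string Action { get; set; }
}

public static class Demo
{
    public static byte[] Serialize(ClientRequest request)
    {
        // The serializer picks up the public properties by convention.
        var serializer = new DataContractSerializer(typeof(ClientRequest));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, request);
            return stream.ToArray();
        }
    }
}
```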
If the other option is to almost duplicate the contract into a different "BL class" just to feel better about not passing DataContracts to a certain layer -- I'd say don't: send the DataContract.
Also, "request", "command" and "data transfer object" are not strictly "facade" objects (or patterns). You can use a request object in your DAL API (I assume a layered architecture) in order to simplify method signatures and enable future refactorings, right? So why not use the same request object in all your layers?
If there are deployment issues (I can't think of any) with regard to the actual DataContract attribute -- then it's a different story.
If, in the future, you are going to need different interfaces of the REST WCF Client Request service and you don't have control over the consuming clients, you may have problems if your BL is heavily dependent on the WCF contract class. It might become difficult for you to modify the service interface without breaking your BL, or modify your BL without breaking the interface to your clients.
If neither of the above apply to you, I'd say go ahead with what #Rahal said in his comment (I've +1'd him).

Selecting an IoC framework (for DI and AOP) [closed]

Closed 10 years ago.
We are building a .NET application and I'd like to integrate a framework for doing DI and some AOP (injecting some diagnostics/logging code).
I know there's a multitude of frameworks out there; I am not sure which one to select, since each site that reviews them gives different results and opinions.
I would love to hear some objective information based on real world experience for doing the things we require (listed above).
Short answer: Take a look at Prism, Unity and MEF to stay fully in the realm of Microsoft patterns and (best) practices. No reason to divert from that IMO, unless you do really small projects (for which Prism may be oversized).
The answer is foremost in the design of your application. If you design your application around the SOLID principles, adding cross-cutting concerns will mostly be as simple as writing a decorator. In other words, when you need code-weaving frameworks such as PostSharp, or need to do interception, you probably need to take a close look at your design again. Take a look for instance at how to model business operations with commands and handlers, or how to model queries as DTOs and handlers.
All containers allow you to wrap services with decorators, simply because you could register a lambda that does something like this:
container.Register<ICommandHandler<ProcessOrderCmd>>(() =>
    new DiagnosticsCommandHandlerDecorator<ProcessOrderCmd>(
        new ProcessOrderCommandHandler()));
However, when the whole application is designed around SOLID and the application grows big, manually configuring every service like this will become cumbersome and time consuming. So in that case it is very useful to pick a DI framework that contains a batch-registration feature and has support for registering decorators. Especially support for handling generic decorators (such as the DiagnosticsCommandHandlerDecorator<T> shown above) will become important.
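Such a generic decorator might look like the following sketch. The ICommandHandler<T> definition and the diagnostics it logs are assumptions; only the pattern matters: the decorator implements the same interface, wraps the real handler, and adds the cross-cutting behavior around the delegated call:

```csharp
using System.Diagnostics;

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class DiagnosticsCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public DiagnosticsCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        var stopwatch = Stopwatch.StartNew();
        this.decorated.Handle(command);   // delegate to the real handler
        Trace.WriteLine(string.Format("{0} took {1} ms",
            typeof(TCommand).Name, stopwatch.ElapsedMilliseconds));
    }
}
```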
For instance, when you use the Simple Injector IoC container you can register all command handlers with a decorator in just two lines of code:
// This registers all command handlers in the container.
container.RegisterManyForOpenGeneric(typeof(ICommandHandler<>),
    typeof(ICommandHandler<>).Assembly);

// This wraps all command handlers with the given decorator.
container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(DiagnosticsCommandHandlerDecorator<>));
Although some patterns or frameworks might be overkill for small applications, I believe that the SOLID principles are core principles of object-oriented design, and every application should be designed with those principles in mind.

Entity Framework Code First convention over configuration should be the other way around [closed]

Closed 10 years ago.
Why has MS decided to use convention over configuration?
I deal with very large projects and not all projects are data centric. In fact, even with data centric projects, my entity classes have a lot of custom functionality that needs to be persistence agnostic.
With the current MS approach, I end up having to apply attributes to non-persistence properties instead of the other way around. Shouldn't that be the point of code-first? To take a working class hierarchy and make it persistence-compatible as an 'addition'?
I understand that some conventions are very useful, such as the naming of identity or primary key properties and foreign keys. But honestly, tell me: how many developers would use code-first instead of model-first if they did NOT already have a class structure?
You don't need to use any persistence-dependent attributes in your classes. EF code first uses model configuration to define the mapping -- that configuration is either defined directly in the OnModelCreating method of your derived DbContext, or in separate configuration classes for each of your entities and complex types. Attributes are just shortcuts that get converted to these configurations.
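A small sketch of that fluent-API approach (entity and property names hypothetical): all mapping concerns live in OnModelCreating, leaving the entity class persistence-ignorant:

```csharp
using System.Data.Entity;

// Plain entity -- no EF attributes anywhere.
public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string DisplayLabel { get; set; }   // computed, not stored
}

public class MyContext : DbContext
{
    public DbSet<Contact> Contacts { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // All mapping lives here, not on the entity:
        modelBuilder.Entity<Contact>().HasKey(c => c.Id);
        modelBuilder.Entity<Contact>().Property(c => c.Name)
            .HasMaxLength(100);

        // Non-persistent members are excluded here,
        // instead of decorating them with [NotMapped]:
        modelBuilder.Entity<Contact>().Ignore(c => c.DisplayLabel);
    }
}
```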
If you are creating appropriate abstractions then this should not be a problem IMO. It sounds like you are mixing business entities and logic with data entities. If you follow the repository pattern and abstract the persistence entities, then most of your POCOs should follow convention. I would suggest re-evaluating your architecture as it sounds very coupled to the persistence layer. If you create a more loosely coupled architecture, then these conventions should make sense to you. Just my two cents, though
I've worked quite a bit applying code first to pre-existing business objects to create a persistence-ignorant architecture. There's more than one way to apply EF in this context -- I agree that dressing up your business objects with attributes is less than ideal. What we've done in the past is:
1. Define an interface, stored in the BLL assembly, that is implemented by the DAL. This is used by the upper layers for CRUD operations.
2. In the DAL, map business objects using EntityTypeConfiguration(of T).
It's worked pretty well for us, and decouples the upper layers from the DAL pretty nicely.
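The two steps above might be sketched like this (interface and entity names are hypothetical; the mapping class is EF 4.1+'s EntityTypeConfiguration<T>):

```csharp
using System.Data.Entity.ModelConfiguration;

// 1. Business object and interface live in the BLL assembly:
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerStore
{
    Customer GetById(int id);
    void Save(Customer customer);
}

// 2. The DAL implements ICustomerStore, and maps the business object
//    with EntityTypeConfiguration<T> so the BLL stays free of EF:
public class CustomerMap : EntityTypeConfiguration<Customer>
{
    public CustomerMap()
    {
        ToTable("Customers");
        HasKey(c => c.Id);
        Property(c => c.Name).IsRequired();
    }
}
```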

The best way to organize work with database [closed]

Closed 11 years ago.
I wonder what the best way is to organize working with a database from C# code.
I used to have different approaches for that:
Each object has Save, Update and Delete methods, which implemented all the logic.
There was some static class with static methods Get, GetList, Update -- almost the same as the previous approach, but the database logic was extracted from the data-domain class.
Now I think it would be great to have a non-static class for each data-domain class I need to store somewhere, which will do the storage.
What is the best way to do it? Are there any good libraries that provide good API for storing objects?
Use an Object/Relational Mapper (ORM). My personal favorite is NHibernate. Use Entity Framework if you want a Microsoft product.
ORMs usually implement the UnitOfWork and Repository patterns, which makes it easier for you to handle database operations and follow the Single Responsibility Principle.
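As a rough sketch, those two patterns usually have shapes like the following (interfaces hypothetical; in practice NHibernate's ISession and EF's DbContext each play the unit-of-work role, so you rarely write these yourself):

```csharp
using System;

// Repository: collection-like access to one aggregate type.
public interface IRepository<T> where T : class
{
    T Get(int id);
    void Add(T entity);
    void Remove(T entity);
}

// Unit of Work: tracks changes across repositories and
// flushes them atomically.
public interface IUnitOfWork : IDisposable
{
    IRepository<T> Repository<T>() where T : class;
    void Commit();   // persist all pending changes in one transaction
}
```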
You also mentioned singletons. I would avoid them if possible. Have you tried to unit test a class that uses singletons? It's impossible, unless the singleton is a proxy for the actual implementation. The easiest way to remove singletons is to invert the dependencies by using dependency injection. There are several containers available.
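To make that testability point concrete, here is a hypothetical example: instead of reaching for a Clock.Instance singleton, the dependency is handed in through the constructor, so a test can supply a fake IClock:

```csharp
using System;

public interface IClock
{
    DateTime Now { get; }
}

public class OrderService
{
    private readonly IClock clock;

    // The dependency is injected, not fetched from a singleton:
    public OrderService(IClock clock)
    {
        this.clock = clock;
    }

    public bool IsBusinessHours()
    {
        int hour = this.clock.Now.Hour;
        return hour >= 9 && hour < 17;
    }
}
```

A test can now construct OrderService with a stub clock pinned to any time of day, something that is impossible when the class calls a static singleton internally.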
The recommended way to work with databases these days is through an ORM.
Look at NHibernate (community project) and Entity Framework (Microsoft provided) for some of the most popular choices for .NET, though there are many, many more.
These are libraries that provide you with all the data access that you need, with some configuration needed.
The repository pattern is a very common way to structure your data-access logic.
Both methods would get you a scrutiny review, or fired, where I work.
See:
Separation of concerns: an object should not deal with loading or saving itself.
Static methods for that mean you can never have the same class in two different databases.
This is a hard problem -- one that has been solved for 20 years using things like the UnitOfWork and Repository patterns. There are tons of projects around for that -- drop the "C#" (no, the world does not revolve around one language) and look up object/relational mappers. I remember using one 20 years ago with Smalltalk, and writing one in C# 10 years ago, at a time when .NET developers at large thought string manipulation was state of the art.
NHibernate, Entity Framework, heck, even BLToolkit (my preferred lightweight toolkit these days) show you the proper patterns.

Code Generation - Domain/model first (DDD) [closed]

Closed 10 years ago.
I'm looking for a 'complete' solution for code-generation based on DDD or model first approach. Ideally, this would be a separate application or VS plugin that we could use and re-use to generate as much of the standard plumbing code as possible, preserving my custom business logic as well.
I would like to generate VS projects, including a WCF service app, data layer, entity model, etc., and client applications such as ASP.NET MVC (and/or web-forms) sites with scaffolding, plus a Windows client.
I know there are many choices, like Entity Framework vs NHibernate, open-source frameworks such as S#arp Architecture, and there are commercial products as well. I'm open to anything, as I know most of the investment will be in time.
Update:
To add to this: Entity Framework (4.0) is a big step forward, as it will generate C# business classes as well as the database schema, allowing you to focus on the 'model', which is good. Is there anything that will go one level higher to allow generation of other objects based on a (meta)model of some kind?
I'd recommend taking a look at CodeSmith. It comes with several different template frameworks like PLINQO (Linq-to-SQL), NHibernate, CSLA and .netTiers (which sounds closer to what you are looking for).
Also take a look at the video tutorials on how to use the frameworks located here.
Thanks
-Blake Niemyjski
I understand that SparxEA (Enterprise Architect) supports code generation (and the generation of models from code) but I've never actually done that with it myself.
So this should definitely allow you to model your system / domain and then generate appropriate code.
It also seems to support integration with Visual Studio: http://www.sparxsystems.com.au/products/mdg/int/vs/index.html
