Organizing services in the business layer - C#

I'm currently trying to think of a strategy for implementing services in the business layer. My first approach was to implement one service functionality per class, but the number of functionalities will eventually grow and become hard to call from the presentation layer, since I'd have to remember them all (a large number of classes). The opposite alternative would be to have a single class with all services implemented, which would create a gigantic file.
I've seen implementations that put functionalities (methods) inside a class per entity, such as ProductBLL or CompanyBLL, which would make the services more manageable. However, some fairly frequent services such as "getmeProductsAndCompanies" don't seem to belong to either ProductBLL or CompanyBLL.
My question is: is it a good idea to make an AplicationService class with one method per service, where each method instantiates the correct service class and calls the correct method? My goal with this was to instantiate AplicationService in the PL (AplicationService as) and then call as.getmeProductsAndCompanies().
The material I've found online so far offers either very theoretical or very complex solutions. I'm open to suggestions too.

My first approach was to implement one service functionality per class, but the number of functionalities will eventually grow and become hard to call from the presentation layer, since I'd have to remember them all (a large number of classes).
I do not think aggregating all services into a single facade will help matters; it will only complicate them. Consider instead structuring your services and devising a naming pattern for them.
For example, you have an OrderService that does everything related to orders (a bad name choice, by the way ;) ). Eventually it grows too big, and when that happens, you must split it in two. When splitting, take a functional approach to naming: the name of the service must answer the question "What exactly does this service do, and with what types of data?" For example, OrderDisplayService looks like a good choice to me.
When you need to figure out which service to inject into your governing entity (usually an MVC-like controller), you first type the service namespace name (\Acme\Services\), then the name of the object you want to deal with (Order), then a verb describing what exactly you want to do with it (Display), and then press your IDE's autocomplete keys. You will get a relatively short list of services available for injection (I assume you use an IoC container for that).
Split your services into layers or units so that when you work in the IDE, you see only a functionally complete part of them in the currently expanded directory.
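
To illustrate (the names here are hypothetical, not prescribed), the split and the injection point might look like this:

// Hypothetical illustration: an overgrown OrderService split by
// function. Each name answers "what does it do, and with what data?".
public class OrderView { /* display-oriented fields */ }

public interface IOrderDisplayService
{
    OrderView GetOrderForDisplay(int orderId);
}

public interface IOrderCheckoutService
{
    void Checkout(int orderId);
}

// The controller asks only for the slice it needs; typing
// "OrderDi..." in the IDE narrows the autocomplete list quickly.
public class OrderController
{
    private readonly IOrderDisplayService _display;

    public OrderController(IOrderDisplayService display)
    {
        _display = display; // supplied by the IoC container
    }
}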

Use the composite pattern. Basically, you create as many small classes and functions as you can; those parts are then called by a bigger class or function, and that bigger class can in turn be used by an even bigger one.
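
A rough sketch, reusing the asker's products-and-companies example (all names are made up):

using System.Collections.Generic;

// Small, focused classes...
public class ProductService
{
    public List<string> GetProducts() { return new List<string>(); }
}

public class CompanyService
{
    public List<string> GetCompanies() { return new List<string>(); }
}

public class ProductsAndCompanies
{
    public List<string> Products;
    public List<string> Companies;
}

// ...called by a bigger class, which could itself be composed
// into something bigger again.
public class ProductsAndCompaniesService
{
    private readonly ProductService _products = new ProductService();
    private readonly CompanyService _companies = new CompanyService();

    public ProductsAndCompanies GetProductsAndCompanies()
    {
        return new ProductsAndCompanies
        {
            Products = _products.GetProducts(),
            Companies = _companies.GetCompanies()
        };
    }
}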

Related

Can I just use a Generic Repository and a Generic Service class for the whole project?

There is this generic repository implementation
http://www.itworld.com/development/409087/generic-repository-net-entity-framework-6-async-operations
By the looks of it, it seems that I can have a single generic repository for my whole project, and it will work fine for almost all of the entities in the database. For the ones where it doesn't, I can create a more specific repository, e.g. a MembershipRepository that derives from the base repository and overrides methods as needed, such as Find.
One could also write a generic service class, similar to the above, and then create only a few more specific services.
That would drastically reduce the project size: no need to write redundant repositories per entity, and a much smaller number of service-layer classes.
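
To illustrate, a rough sketch of what I have in mind (simplified and synchronous, unlike the linked async implementation; Membership is just a stand-in entity):

using System.Data.Entity; // Entity Framework 6
using System.Linq;

public class Membership { public int Id { get; set; } }

public class Repository<T> where T : class
{
    protected readonly DbContext Context;

    public Repository(DbContext context) { Context = context; }

    public virtual T Find(params object[] keys) { return Context.Set<T>().Find(keys); }
    public virtual IQueryable<T> GetAll() { return Context.Set<T>(); }
}

// A more specific repository derives from the base and overrides
// only what it must.
public class MembershipRepository : Repository<Membership>
{
    public MembershipRepository(DbContext context) : base(context) { }

    public override Membership Find(params object[] keys)
    {
        // custom lookup logic would go here
        return base.Find(keys);
    }
}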
Surely it can't be that simple. Is there a catch to this? Let's ignore for a moment that Entity Framework has the repository + unit-of-work pattern built in, and that the repository pattern arguably isn't needed at all.
We do.
I am torn about it, honestly. For smaller domains it's perfectly fine and works a treat. For larger ones (like the one I am working with currently), your repository can never really be generic enough to warrant a single one.
For example, the generic repository in the code base I currently work with is now littered with all sorts of very specific methods for things like eager fetching, paging, etc. It's much more than what it started out as. Looking back at the revision history, it once had only GetAll, GetById, Create and Update methods. Now it has things like GetAllEagerFetch with overloads for various JOIN types, GetAllPaged, GetAllPagedEagerFetch, DeleteById, ExecuteStoredProcedure, ExecuteSql (yuck), and a lot more.
One way around this is to follow the Interface Segregation Principle, so that your repository can be huge and generic while each consumer only cares about the parts it needs. I don't particularly like that, though.
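
A rough sketch of that segregation (the interface names are made up):

using System.Collections.Generic;

// One fat implementation can implement all of these, but each
// consumer depends only on the slice it actually uses.
public interface IReadRepository<T>
{
    T GetById(int id);
    IEnumerable<T> GetAll();
}

public interface IPagedRepository<T>
{
    IEnumerable<T> GetAllPaged(int page, int pageSize);
}

public interface IWriteRepository<T>
{
    void Create(T entity);
    void Update(T entity);
    void DeleteById(int id);
}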
That being said, we have moved away from a repository-style setup in more recent projects. We now prefer a CQRS setup with Command and Query objects that each have a specific purpose. This leans more towards the Single Responsibility Principle (it doesn't follow it to the "Uncle Bob degree", but the classes have some well-defined responsibilities).
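
For example, a purpose-specific query object might look something like this (a sketch with made-up names, assuming EF for data access; not our production code):

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Stand-in entity for the example.
public class Order
{
    public int Id { get; set; }
    public DateTime DueDate { get; set; }
    public bool IsPaid { get; set; }
}

// One query, one class, one well-defined responsibility.
public class GetOverdueOrdersQuery
{
    private readonly DbContext _context;

    public GetOverdueOrdersQuery(DbContext context) { _context = context; }

    public List<Order> Execute(DateTime asOf)
    {
        return _context.Set<Order>()
            .Where(o => o.DueDate < asOf && !o.IsPaid)
            .ToList();
    }
}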

A suitable design pattern for similar web service implementations

I'm consuming a SOAP web service that creates a separate service endpoint and WSDL for each of its customers; I don't know why they do that. For example, if they have two clients A and B, the service designates two different service addresses with different WSDL addresses. These separate WSDLs are 90% the same objects and the same functions, but some parts differ based on the type of the customer. The generated objects therefore end up not being the same, even though they work exactly the same way.
So, in order to fetch the correct service, I store the name of the customer in a table ("A" or "B"), and my program has to know which customer it's dealing with on every run. I don't want a different program for each customer; I just want my program to get the customer name and, based on that, know which model and which controller functions to use.
What is the design pattern(s) that will help me facilitate this issue?
Chances are there will be an additional customer in the future, so I want my code to be as loosely coupled as possible.
I have always wanted to use design patterns correctly in my code, so I guess it's time to do so. Should I use the Strategy pattern? Can you briefly explain what the best solution for this would be?
I would use two design patterns in your case. The first is the Facade pattern. Use it to simplify the interface of the web services your application has to deal with, and make sure you only need to change the facade's implementation when the web service contract changes. Convert the objects from the service into objects under your control, and call functions with names and parameters that fit your domain and abstraction level.
The second is the Adapter pattern. In your case, determine whether you can settle on a common interface for both web services, i.e. whether the 10% difference between the two services can be converted into one interface (of objects and/or functions) that you use in your application.
The facade would use adapters to convert the 10% difference into common objects and functions. From there, the facade uses those common objects and functions, together with the other 90% of the web services, to supply a proper abstraction layer for your application.
If there are additional customers in the future, you'll most likely only need to add or modify an adapter.
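
A minimal sketch of how the pieces could fit together (Invoice, the adapter names, and the selection logic are all illustrative):

using System;

public class Invoice { /* your own domain type */ }

// Hypothetical common interface covering the ~90% overlap.
public interface ICustomerServiceAdapter
{
    Invoice GetInvoice(string invoiceId);
}

// One adapter per customer-specific WSDL; each maps the generated
// proxy types into your own types.
public class CustomerAAdapter : ICustomerServiceAdapter
{
    public Invoice GetInvoice(string invoiceId)
    {
        // call customer A's generated proxy here, then map to Invoice
        throw new NotImplementedException();
    }
}

public class CustomerBAdapter : ICustomerServiceAdapter
{
    public Invoice GetInvoice(string invoiceId)
    {
        // same operation via customer B's proxy
        throw new NotImplementedException();
    }
}

// The facade picks the adapter based on the customer name you stored.
public class BillingFacade
{
    private readonly ICustomerServiceAdapter _adapter;

    public BillingFacade(string customerName)
    {
        _adapter = customerName == "A"
            ? (ICustomerServiceAdapter)new CustomerAAdapter()
            : new CustomerBAdapter();
    }

    public Invoice GetInvoice(string invoiceId)
    {
        return _adapter.GetInvoice(invoiceId);
    }
}

Adding a customer C then means writing one new adapter and one new case in the selection logic.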

Prevent WCF exposing my whole class?

I've just begun learning WCF, and I'm coming from a total non-web background.
I have built a 3-tier desktop application that compiles into one exe and runs locally.
Now I want to move the whole business logic layer to a central server and make the GUI a client application.
As far as I understand, WCF should be my solution, as indeed, it helped me achieved what I wanted.
I managed to run remote functions, which is the basis of what I need.
My problem now is that I don't quite understand the architecture.
For example, one of my services returns a data type (a class) from my business logic layer.
This class automatically becomes available to the client through the WCF mechanism.
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
For example, a Save method (which saves to the db).
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since an instance of this class might be sent back to one of my services.
I do not want to re-validate the class instance in the service.
What should I do? Should I build another layer, a restricted version of the business logic, which I expose to the client? Or is there any way to expose only part of my class to the client, without restricting the server itself?
I know this is a basic question; honestly, I've searched a lot before asking here. My problem is that I don't quite know what to search for.
My second question, then: do you have any recommendations for resources that explain this architecture?
Typically, if you want to encapsulate your business layer, you do not want to expose the business objects directly. This is because you now have a decoupled client, and you don't necessarily want to have to update the client every time the business logic or properties change.
This is where Data Transfer Objects (DTO) come into play nicely. Usually, you want to have control over your contract (data and methods) that you expose. Therefore, you would explicitly make other objects (DTOs) that make up the transfer layer. Then, you can safely change your client and server code independently (as long as both still fulfill the contract objects).
This usually requires a little more mapping (before you send or receive on each side) but it is often worth it.
For WCF, your interfaces and classes marked with [ServiceContract] and your classes marked with [DataContract] usually make up this transfer layer.
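
A small sketch of what that transfer layer can look like (the names are hypothetical):

using System.Runtime.Serialization;
using System.ServiceModel;

// The DTO carries only the data you choose to expose; the business
// entity (with its Save method, etc.) never crosses the wire.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerDto GetCustomer(int id);
}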
In WCF, to expose a method to the client you have to mark it with the OperationContractAttribute. So if you don't want clients to use your Save method, just don't mark it with this attribute.
More info here: http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.aspx
It's pretty much the same for properties, but with a different attribute: DataMemberAttribute. If you don't want the client to see a property, just don't mark it with the [DataMember] attribute.
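
For example (illustrative names; both attributes are opt-in):

using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderData GetOrder(int id);   // exposed to clients

    // No [OperationContract]: not part of the service contract.
    void Save(OrderData order);
}

[DataContract]
public class OrderData
{
    [DataMember] public int Id { get; set; }   // serialized

    public string InternalNotes { get; set; }  // no [DataMember]: invisible to clients
}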
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
Are you able to provide an example of your class and interface code? If so, I'm sure you'll get more specific answers.
For example, a Save method (which saves to the db).
One possible approach is to separate your class into two classes: define the properties in the first class, then use it as the base class of a second class that defines the methods. This allows you to return only the properties while keeping your code DRY.
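
A short sketch of that split (hypothetical names):

using System.Runtime.Serialization;

// The base class holds only the properties and is what the
// service returns...
[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// ...while the derived class adds the behavior and stays server-side.
public class CustomerEntity : Customer
{
    public void Save()
    {
        // persist to the database; never exposed over WCF
    }
}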
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since an instance of this class might be sent back to one of my services.
I do not want to re-validate the class instance in the service.
While you can put logic in each property's get and set accessors, I would highly recommend revalidating any input received between services, simply because any future change or error in one service could lead to larger problems across your application. It also helps make your application more secure against potential attacks.
Should I build another layer, a restricted version of the business logic, which I expose to the client? Or is there any way to expose only part of my class to the client, without restricting the server itself?
I agree with the above answers that you should be able to limit access to the different properties and methods using the data and method attributes within your interfaces.
My second question, then: do you have any recommendations for resources that explain this architecture?
If you are looking for inexpensive but highly valuable video-based training, I've found Pluralsight's courses to be quite good for both architecture and WCF services (by the way, I am not associated with them; I just enjoyed their training).

How can I implement DI/IoC for a repeated and variable process without creating kernels on demand?

I know, this probably wins the award for longest and most confusing question title. Allow me to explain...
I am trying to refactor a long-running batch process which, to be blunt, is extremely ugly right now. Here are a few details on the hard specifications:
The process must run many times (in the tens of thousands);
Each instance of the process runs on a different "asset", each with its own unique settings;
A single process consists of several sub-processes, and each sub-process requires a different slice of the asset's settings in order to do its job. The groups are not mutually exclusive (i.e. some settings are required by multiple sub-processes).
The entire batch takes a very long time to complete; thus, a single process is quite time-sensitive and performance is actually of far greater concern than design purity.
What happens now, essentially, is that for a given asset/process instance, a "Controller" object reads all of the settings for that asset from the database, stuffs them all into a strongly-typed settings class, and starts each sub-process individually, feeding it whichever settings it needs.
The problems with this are manifold:
There are over 100 separate settings, which makes the settings class a ball of mud;
The controller has way too much responsibility and the potential for bugs is significant;
Some of the sub-processes are taking upwards of 10 constructor arguments.
So I want to move to a design based on dependency injection, by loosely grouping settings into different services and allowing sub-processes to subscribe to whichever services they need via constructor injection. This way I should be able to virtually eliminate the bloated controller and settings classes. I want to be able to write individual components like so:
public class SubProcess : IProcess
{
    public SubProcess(IFooSettings fooSettings, IBarSettings barSettings, ...)
    {
        // ...
    }
}
The problem, of course, is that the "settings" are specific to a given asset, so it's not as simple as just registering IFooSettings in the IoC. The injector somehow has to be aware of which IFooSettings it's supposed to use/create.
This seems to leave me with two equally unattractive options:
Write every single method of IFooSettings to take an asset ID, and pass around the asset ID to every single sub-process. This actually increases coupling, because right now the sub-processes don't need to know anything about the asset itself.
Create a new IoC container for each full process instance, passing the asset ID into the constructor of the container itself so it knows which asset to grab settings for. This feels like a major abuse of IoC containers, though, and I'm very worried about performance - I don't want to go and implement this and find out that it turned a 2-hour process into a 10-hour process.
Are there any other ways to achieve the design I'm hoping for? Some design pattern I'm not aware of? Some clever trick I can use to make the container inject the specific settings I need into each component, based on some kind of contextual information, without having to instantiate 50,000 containers?
Or, alternatively, is it actually OK to be instantiating this many containers over an extended period of time? Has anybody done it with positive results?
SettingsFactory: generates various Settings objects from the database on request.
SubProcessFactory: generates subprocesses on request from the controller.
Controller: iterates over assets, using the SettingsFactory and SubProcessFactory to create and launch the needed subprocesses.
Is this different from what you're doing? Not really from one angle, but very much so from another. Separating these responsibilities into separate classes is important, as you've acknowledged. A DI container could be used to improve the flexibility of both factory pieces. The implementation details are, in some ways, less critical than improving the design, because once the design is improved, the implementation can vary more readily.
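
A rough sketch of that shape (hypothetical names; the sub-process takes just two settings slices for brevity, and no container is involved yet):

using System.Collections.Generic;

public interface IFooSettings { }
public interface IBarSettings { }
public class FooSettings : IFooSettings { /* the Foo slice of settings */ }
public class BarSettings : IBarSettings { /* the Bar slice of settings */ }

public interface IProcess { void Run(); }

public class SubProcess : IProcess
{
    private readonly IFooSettings _foo;
    private readonly IBarSettings _bar;

    public SubProcess(IFooSettings foo, IBarSettings bar)
    {
        _foo = foo;
        _bar = bar;
    }

    public void Run() { /* the actual work */ }
}

// The factory takes the asset ID once, so sub-processes never
// need to know about assets at all.
public class SettingsFactory
{
    public IFooSettings CreateFooSettings(int assetId)
    {
        return new FooSettings(); // load the Foo slice for assetId
    }

    public IBarSettings CreateBarSettings(int assetId)
    {
        return new BarSettings(); // load the Bar slice for assetId
    }
}

public class Controller
{
    private readonly SettingsFactory _settings = new SettingsFactory();

    public void RunAll(IEnumerable<int> assetIds)
    {
        foreach (var assetId in assetIds)
        {
            IProcess process = new SubProcess(
                _settings.CreateFooSettings(assetId),
                _settings.CreateBarSettings(assetId));
            process.Run();
        }
    }
}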

Other than testing, how is Dependency Injection any better than static classes/methods?

Other than testability, what's the big advantage of utilizing D.I. (and I'm not talking about a D.I. framework or IoC) over static classes? Particularly for an application where you know a service won't be swapped out.
In one of our C# applications, our team is utilizing Dependency Injection in the web GUI, the service layer, and the repository layer rather than using static methods. In the past, we'd have POCOs (business entity objects) that were created, modified, passed around, and saved by static classes.
For example, in the past we might have written:
CreditEntity creditObj = CreditEntityManager.GetCredit(customerId);
Decimal creditScore = CreditEntityManager.CalculateScore(creditObj);
return creditScore;
Now, with D.I., the same code would be:
//not shown, _creditService instantiation/injection in c-tors
CreditEntity creditObj = _creditService.GetCredit(customerId);
Decimal creditScore = _creditService.CalculateScore(creditObj);
return creditScore;
Not much different, but now we have dozens of service classes that have much broader scope, which means we should treat them just as if they were static (i.e. no private member variables unless they are used to define further dependencies). Plus, if any of those methods utilize a resource (database/web service/etc) we find it harder to manage concurrency issues unless we remove the dependency and utilize the old static or using(...) methods.
The question for D.I. might be: is CreditEntityManager in fact the natural place to centralize knowledge about how to find a CreditEntity and where to go to CalculateScore?
I think the theory of D.I. is that a modular application involved in thing X doesn't necessarily know how to hook up with thing Y even though X needs Y.
In your example, you are showing the code flow after the service providers have been located and incorporated in data objects. At that point, sure, with and without D.I. it looks about the same, even potentially exactly the same depending on programming language, style, etc.
The key is how those different services are hooked up together. In D.I., potentially a third party object essentially does configuration management, but after that is done the code should be about the same. The point of D.I. isn't to improve later code but to try and match the modular nature of the problem with the modular nature of the program, in order to avoid having to edit modules and program logic that are logically correct, but are hooking up with the wrong service providers.
It allows you to swap out implementations without cracking open the code. For example, in one of my applications, we created an interface called IDataService that defined methods for querying a data source. For the first few production releases, we used an implementation for Oracle using nHibernate. Later, we wanted to switch to an object database, so we wrote an implementation for db4o, added its assembly to the execution directory, and changed a line in the config file. Presto! We were using db4o without having to crack open the code.
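
That config-file switch can be as simple as resolving a type name at startup. A hedged sketch (IDataService comes from the story above; everything else is made up):

using System;
using System.Configuration;

public interface IDataService { /* query methods */ }

// Reads the concrete type name from app.config, so swapping the
// Oracle/nHibernate implementation for db4o is a config change:
//   <add key="dataService"
//        value="MyApp.Data.Db4oDataService, MyApp.Data.Db4o" />
public static class DataServiceFactory
{
    public static IDataService Create()
    {
        var typeName = ConfigurationManager.AppSettings["dataService"];
        var type = Type.GetType(typeName, throwOnError: true);
        return (IDataService)Activator.CreateInstance(type);
    }
}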
This has been discussed exactly 1002 times. Here's one such discussion that I remember (read in order):
http://scruffylookingcatherder.com/archive/2007/08/07/dependency-injection.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-More-than-a-testing-seam.aspx
http://kohari.org/2007/08/15/defending-dependency-injection
http://scruffylookingcatherder.com/archive/2007/08/16/tilting-at-windmills.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-IAmDonQuixote.aspx
http://scruffylookingcatherder.com/archive/2007/08/20/poking-bears.aspx
http://ayende.com/Blog/archive/2007/08/21/Dependency-Injection-Applicability-Benefits-and-Mocking.aspx
Regarding your particular problems: it seems that you're not managing your services' lifestyles correctly. For example, if one of your services is stateful (which should be quite rare), it probably has to be transient. I recommend creating as many SO questions about this as you need in order to clear up all your doubts.
There is a Guice video which gives a nice sample case for using D.I. If you are using a lot of third-party services which need to be hooked up dynamically, D.I. will be a great help.
