I created a group of related providers using the provider pattern, and now I'd like to enhance them to meet new requirements. The providers were built for a number of customers who integrate with our web services. Some of those same customers now want to integrate with us through a web page. Going through the web page, the front-end logic would of course be different, but about half of the provider logic would stay the same. So I was thinking of adding another abstract class to a particular customer's provider to handle the web page integration. Here's a code example of the possible enhancement:
// Same customer provider dll
// Methods defined for handling web service integration
public abstract class XMLBaseProvider : ProviderBase
{
    // ...
}

// Methods defined for handling web page integration logic
public abstract class XMLWebPageBaseProvider : XMLBaseProvider
{
    // ...
}
Now in the app.config I define another provider section that points to XMLWebPageBaseProvider, along with a new provider name. This works, but I'm wondering: am I abusing the provider pattern by coding it this way? Are there any concerns or gotchas I should be worried about? Has anybody here implemented the provider pattern like I described above?
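To show what I mean, loading the new section would look roughly like this (the section handler, section name, and default provider name below are just for illustration, not our real code):

using System.Configuration;
using System.Configuration.Provider;
using System.Web.Configuration;

// Hypothetical section handler for the new provider section;
// the "xmlWebPageProviders" name is made up for this example.
public class XMLWebPageProviderSection : ConfigurationSection
{
    [ConfigurationProperty("defaultProvider", DefaultValue = "DefaultWebPageProvider")]
    public string DefaultProvider
    {
        get { return (string)base["defaultProvider"]; }
    }

    [ConfigurationProperty("providers")]
    public ProviderSettingsCollection Providers
    {
        get { return (ProviderSettingsCollection)base["providers"]; }
    }
}

public static class WebPageProviderLoader
{
    public static XMLWebPageBaseProvider Load()
    {
        var section = (XMLWebPageProviderSection)ConfigurationManager.GetSection("xmlWebPageProviders");
        var providers = new ProviderCollection();

        // ProvidersHelper instantiates each <add .../> entry and calls its Initialize method.
        ProvidersHelper.InstantiateProviders(section.Providers, providers, typeof(XMLWebPageBaseProvider));

        return (XMLWebPageBaseProvider)providers[section.DefaultProvider];
    }
}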
Also note that we will probably get more customers who integrate with us through the web page. I'd hate to have to keep adding more and more providers (DLLs) to the solution.
Thanks,
DND
I think your ideas are good. For what you described, your design will work fine. As one of the commentators noted, though, the requirements might expand into JSON. In my experience, the need for different formats always grows over time. When that happens, inheritance becomes quite brittle. The class hierarchy will grow to more and more levels of abstract classes. In the end, it will be difficult to manage.
The commentator suggested using composition and I agree. A strategy or visitor pattern will likely serve you better over the long run.
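To sketch what that composition could look like (the interface and class names below are invented, not part of your existing providers): the format-specific work becomes a strategy the provider delegates to, rather than another level in the inheritance tree.

using System.Collections.Specialized;
using System.Configuration.Provider;

// Hypothetical request type; stands in for whatever you pass to a provider today.
public class CustomerRequest
{
    public string CustomerId { get; set; }
}

// Strategy interface for the format-specific part of the work.
public interface IIntegrationFormatter
{
    string Serialize(CustomerRequest request);
}

public class XmlFormatter : IIntegrationFormatter
{
    public string Serialize(CustomerRequest request)
    {
        return "<request customerId=\"" + request.CustomerId + "\" />";
    }
}

public class JsonFormatter : IIntegrationFormatter
{
    public string Serialize(CustomerRequest request)
    {
        return "{ \"customerId\": \"" + request.CustomerId + "\" }";
    }
}

public class CustomerProvider : ProviderBase
{
    private IIntegrationFormatter _formatter;

    // The formatter is picked from a config attribute instead of baking the
    // choice into a subclass, e.g. <add name="Acme" format="Json" ... />.
    public override void Initialize(string name, NameValueCollection config)
    {
        base.Initialize(name, config);
        _formatter = config["format"] == "Json"
            ? (IIntegrationFormatter)new JsonFormatter()
            : new XmlFormatter();
    }

    public string BuildMessage(CustomerRequest request)
    {
        return _formatter.Serialize(request);
    }
}

Adding a new format then means adding one formatter class, not another branch of abstract providers.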
If the application is critical to the business and the business is growing, consider going a step further. Move as much of the provider logic as possible out of the code and into a configuration file or configuration database. This will be a big win in the long run because it minimizes the amount of code that must be changed when the requirements grow. Changing code risks creating bugs, mandates a new build and deployment, etc. Changing data is much easier and less risky.
This strategy is generally referred to as data-driven programming. Have a look at this question.
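A trivial sketch of the data-driven idea, with the per-customer behaviour looked up from configuration instead of hard-coded (the key naming scheme here is invented):

using System.Configuration;

// Instead of a provider subclass per customer/channel, look the behaviour up.
// app.config might contain: <add key="Formatter.Acme.WebPage" value="Json" />
public static class FormatterLookup
{
    public static string GetFormat(string customer, string channel)
    {
        // Falls back to XML if nothing is configured for this combination.
        return ConfigurationManager.AppSettings["Formatter." + customer + "." + channel] ?? "Xml";
    }
}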
I'm currently trying to think of a strategy for implementing services in the business layer. My first approach was to implement one service functionality per class, but the number of functionalities will eventually grow and become hard to call from the presentation layer, since I'd have to remember them all (a large number of classes). The opposite alternative would be to have one single class with all the services implemented, which would create a gigantic file.
I've seen implementations that put the functionalities (methods) inside a class per entity, such as ProductBLL or CompanyBLL, which would make the services more manageable. However, some services such as "getmeProductsAndCompanies", which are somewhat frequent, don't seem to belong to either ProductBLL or CompanyBLL.
My question is: is it a good idea to make a class AplicationService that has one method per service, which instantiates the correct service class and calls the correct method? My goal with this was to instantiate AplicationService in the presentation layer as "as" and call as.getmeProductsAndCompanies().
The material I have found on the internet so far offers either very theoretical or very complex solutions. I am open to suggestions too.
My first approach was to implement one service functionality per class, but the number of functionalities will eventually grow and become hard to call from the presentation layer, since I'd have to remember them all (a large number of classes).
I do not think aggregating all services into a single facade will help matters. It will only complicate them. Consider instead structuring services and devising some naming pattern for them.
For example, you have OrderService that does everything with orders (a bad name choice, btw ;) ). Eventually it grows too big, and when this happens, you must split it in two. When splitting, use a functional approach to naming. The name of the service must answer the question "what does this service do exactly, and with what types of data?" For example, OrderDisplayService looks like a good choice to me.
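To make that split concrete, a sketch (the types and member names are invented):

// Placeholder data types for the example.
public class OrderView { }
public class OrderRequest { }

// Each service name answers "what does it do, and to what data?".
public interface IOrderDisplayService
{
    OrderView GetOrderForDisplay(int orderId);
}

public interface IOrderCheckoutService
{
    void PlaceOrder(OrderRequest request);
}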
When you need to find out which service to inject into your governing entity (an MVC-like controller, usually), you first type the service namespace name (\Acme\Services\), then the name of the object you want to deal with (Order), then a verb describing what exactly you want to do with it (Display), and then press your IDE's autocomplete button. You will have a relatively short list of services available for injection (I assume you use some IoC container for that).
Split your services into layers or units so that when you work in the IDE, you see only a functionally complete part of them in the currently expanded directory.
Use the composite pattern. Basically, you create as many small classes/functions as you can. Those parts are then called by a bigger class or function, and that bigger class can in turn be used by an even bigger one.
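A tiny sketch of composing small classes into a bigger one (all names invented), using the "products and companies" call from the question as the cross-cutting use case:

using System.Collections.Generic;

public class Product { }
public class Company { }

public class ProductsAndCompanies
{
    public IList<Product> Products { get; set; }
    public IList<Company> Companies { get; set; }
}

// Small, focused pieces...
public class ProductQueries
{
    public IList<Product> GetProducts() { return new List<Product>(); }
}

public class CompanyQueries
{
    public IList<Company> GetCompanies() { return new List<Company>(); }
}

// ...composed by a bigger class that owns the cross-cutting use case.
public class ProductsAndCompaniesService
{
    private readonly ProductQueries _products = new ProductQueries();
    private readonly CompanyQueries _companies = new CompanyQueries();

    public ProductsAndCompanies GetProductsAndCompanies()
    {
        return new ProductsAndCompanies
        {
            Products = _products.GetProducts(),
            Companies = _companies.GetCompanies()
        };
    }
}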
Other than testability, what's the big advantage of utilizing D.I. (and I'm not talking about a D.I. framework or IoC) over static classes? Particularly for an application where you know a service won't be swapped out.
In one of our C# applications, our team is using Dependency Injection in the web GUI, the service layer, and the repository layer rather than static methods. In the past, we'd have POCOs (business entity objects) that were created, modified, passed around, and saved by static classes.
For example, in the past we might have written:
CreditEntity creditObj = CreditEntityManager.GetCredit(customerId);
Decimal creditScore = CreditEntityManager.CalculateScore(creditObj);
return creditScore;
Now, with D.I., the same code would be:
//not shown, _creditService instantiation/injection in c-tors
CreditEntity creditObj = _creditService.GetCredit(customerId);
Decimal creditScore = _creditService.CalculateScore(creditObj);
return creditScore;
Not much different, but now we have dozens of service classes that have much broader scope, which means we should treat them just as if they were static (i.e. no private member variables unless they are used to define further dependencies). Plus, if any of those methods utilize a resource (database/web service/etc) we find it harder to manage concurrency issues unless we remove the dependency and utilize the old static or using(...) methods.
The question for D.I. might be: is CreditEntityManager in fact the natural place to centralize knowledge about how to find a CreditEntity and where to go to CalculateScore?
I think the theory of D.I. is that a modular application involved in thing X doesn't necessarily know how to hook up with thing Y even though X needs Y.
In your example, you are showing the code flow after the service providers have been located and incorporated in data objects. At that point, sure, with and without D.I. it looks about the same, even potentially exactly the same depending on programming language, style, etc.
The key is how those different services are hooked up together. In D.I., potentially a third party object essentially does configuration management, but after that is done the code should be about the same. The point of D.I. isn't to improve later code but to try and match the modular nature of the problem with the modular nature of the program, in order to avoid having to edit modules and program logic that are logically correct, but are hooking up with the wrong service providers.
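To make the "hooking up" part concrete, here is a minimal sketch; the class names (SqlCreditService, CreditReportController, CompositionRoot) are invented, and an IoC container could replace the hand wiring shown at the bottom:

public class CreditEntity { }

public interface ICreditService
{
    CreditEntity GetCredit(int customerId);
    decimal CalculateScore(CreditEntity credit);
}

// A concrete implementation; the consumer never news this up itself.
public class SqlCreditService : ICreditService
{
    public CreditEntity GetCredit(int customerId) { return new CreditEntity(); }
    public decimal CalculateScore(CreditEntity credit) { return 0m; }
}

// The consumer only declares what it needs; it does not know how to find it.
public class CreditReportController
{
    private readonly ICreditService _creditService;

    public CreditReportController(ICreditService creditService)
    {
        _creditService = creditService;
    }

    public decimal GetScore(int customerId)
    {
        CreditEntity creditObj = _creditService.GetCredit(customerId);
        return _creditService.CalculateScore(creditObj);
    }
}

// The hooking up happens in one place at startup (the composition root),
// whether by hand as here or via a container.
public static class CompositionRoot
{
    public static CreditReportController CreateCreditReportController()
    {
        return new CreditReportController(new SqlCreditService());
    }
}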
It allows you to swap out implementations without cracking open the code. For example, in one of my applications, we created an interface called IDataService that defined methods for querying a data source. For the first few production releases, we used an implementation for Oracle using NHibernate. Later, we wanted to switch to an object database, so we wrote an implementation for db4o, added its assembly to the execution directory and changed a line in the config file. Presto! We were using db4o without having to crack open the code.
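The mechanics behind "changed a line in the config file" are usually something like the following (the interface shape and the config key are invented for illustration; the real IDataService looked different):

using System;
using System.Collections.Generic;
using System.Configuration;

public interface IDataService
{
    IList<T> Query<T>(string criteria);
}

public static class DataServiceFactory
{
    // app.config (illustrative):
    // <add key="DataServiceType" value="MyApp.Data.Db4oDataService, MyApp.Data.Db4o" />
    public static IDataService Create()
    {
        string typeName = ConfigurationManager.AppSettings["DataServiceType"];
        Type type = Type.GetType(typeName, true); // throws if the type cannot be found
        return (IDataService)Activator.CreateInstance(type);
    }
}

Swapping the data store then really is just a matter of dropping in a new assembly and editing that one value.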
This has been discussed exactly 1002 times. Here's one such discussion that I remember (read in order):
http://scruffylookingcatherder.com/archive/2007/08/07/dependency-injection.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-More-than-a-testing-seam.aspx
http://kohari.org/2007/08/15/defending-dependency-injection
http://scruffylookingcatherder.com/archive/2007/08/16/tilting-at-windmills.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-IAmDonQuixote.aspx
http://scruffylookingcatherder.com/archive/2007/08/20/poking-bears.aspx
http://ayende.com/Blog/archive/2007/08/21/Dependency-Injection-Applicability-Benefits-and-Mocking.aspx
About your particular problems, it seems that you're not managing your services' lifestyles correctly... for example, if one of your services is stateful (which should be quite rare), it probably has to be transient. I recommend that you create as many SO questions about this as you need in order to clear up all doubts.
There is a Guice video that gives a nice sample case for using D.I. If you are using a lot of third-party services that need to be hooked up dynamically, D.I. will be a great help.
I'm new to the MVC framework and have just run through the NerdDinner sample project. I'm loving this approach over forms-based ASP.NET.
I'd like to spin off a more sizable side project using this same approach. Do you see anything in that project that would prevent me from enlarging the basic structure to a more complex website?
Examples of things that make me wary:
1) The NerdDinner sample accesses a db of only two tables, my db has around 30.
2) The NerdDinner project uses the LinqToSQL classes directly... all the way from the model, through the controller, to the view... is that kosher for a larger project?
Do you see any other parts of the NerdDinner framework that might cause me future grief?
I agree with others that the model should be the only place you use LINQ to SQL, and my small addendum is: only use LINQ to SQL directly in the model on small projects. For larger sites it might be worth the overhead to create a separate web service project that does all the talking to the database, and consume that web service in your model.
I never fully checked out the Nerd Dinner example, but other best practices include strongly typed views and using a data model that allows for easy validation (see xVal or the DataAnnotations model binder). To me these are two of the most important best practices.
Stephen Walther has a lot of excellent tips on his website that are worth checking out and taking into account when setting up a new MVC project.
I would add a service layer between the repositories and controllers. The service layer will contain all of your business logic leaving your controllers to deal mainly with processing form inputs and page flow.
Within the repositories I map LinqToSql classes and fields to domain models and then use the domain models within the service layer, controllers and views. For a larger system the extra layers will prove their worth in the long run.
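A rough sketch of that layering, assuming the generated NerdDinnerDataContext from the sample (the domain and service class names here are invented):

using System.Linq;

// Domain model used by the service layer, controllers and views.
public class Dinner
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// The repository is the only layer that touches the LINQ to SQL classes.
public class DinnerRepository
{
    private readonly NerdDinnerDataContext _db = new NerdDinnerDataContext();

    public Dinner GetDinner(int id)
    {
        var row = _db.Dinners.Single(d => d.DinnerID == id);          // LINQ to SQL entity
        return new Dinner { Id = row.DinnerID, Title = row.Title };   // mapped to the domain model
    }
}

// The service layer holds the business rules; controllers stay thin and only see Dinner.
public class DinnerService
{
    private readonly DinnerRepository _repository = new DinnerRepository();

    public Dinner GetDinnerForDisplay(int id)
    {
        return _repository.GetDinner(id);
    }
}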
There's a lot of debate around the internet when it comes to the LINQ to SQL classes. Some feel that using the classes directly doesn't give enough abstraction, and some feel that that's what they're there for. At work we started revamping our site, and we're using MVC. The way we decided to go was basically that each of the LINQ to SQL classes implements an interface, i.e.:
// Generated LINQ to SQL class.
public partial class LinqToSqlClass
{
    public int Id { get; set; }
}

public interface ILinqToSqlClass
{
    int Id { get; set; }
}

// Hand-written partial that attaches the interface to the generated class.
public partial class LinqToSqlClass : ILinqToSqlClass
{
}
This is just a very small part of it. We then have a repository that gets you any of these generated classes, but only as their interface type. This way, we're never actually working directly with the LINQ to SQL classes. There are many different ways to do this, but generally I would say yes: if you're dealing with a large database (especially if the schema may change), or if you're dealing with data that may come from more than one source, definitely don't use the classes directly.
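The repository part of that, roughly (MyDataContext and the table name are placeholders, not our real code):

using System.Linq;

public class LinqToSqlClassRepository
{
    private readonly MyDataContext _db = new MyDataContext(); // generated DataContext

    // Callers only ever see the interface, never the generated class itself.
    public ILinqToSqlClass GetById(int id)
    {
        return _db.LinqToSqlClasses.Single(x => x.Id == id);
    }
}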
Bottom line is, there's a lot of good info in that Nerd Dinner chapter, but when creating your own project you'll obviously run into issues of your own, so take it as you go.
The Nerd Dinner text makes the claim that the MVC framework can equally well accommodate other common data abstractions. (It's true.) It sounds like your organization already has one it likes. A good learning strategy would probably be to adapt one to the other.
I have a C# public API that is used by many third-party developers that have written custom applications on top of it. In addition, the API is used widely by internal developers.
This API wasn't written with testability in mind: most class methods aren't virtual and things weren't factored out into interfaces. In addition, there are some helper static methods.
For many reasons I can't change the design significantly without causing breaking changes for applications developed by programmers using my API. However, I'd still like to give internal and external developers using this API the chance to write unit tests and be able to mock the objects in the API.
There are several approaches that come to mind, but none of them seem great:
1) The traditional approach would be to force developers to create a proxy class that they control to talk to my API. This won't work in practice because there are now hundreds of classes, many of which are effectively strongly typed data transfer objects that would be a pain to reproduce and maintain.
2) Force all developers using the API who want to unit test it to buy TypeMock. This seems harsh: forcing people to pay $300+ per developer and potentially requiring them to learn a different mock object tool than the one they're used to.
3) Go through the entire project and make all the methods virtual. This would allow mocking of objects using free tools like Moq or Rhino Mocks, but it could potentially open up security risks for classes that were never meant to be derived from. Additionally, this could cause breaking changes.
4) I could create a tool that, given an input assembly, would output an assembly with the same namespaces, classes, and members, but would make all of the methods virtual and make each method body just return the default value for the return type. Then I could ship this dummy test assembly each time I released an update to the API. Developers could then write tests for the API against the dummy assembly, since it has virtual members that are very mockable. This might work, but it seems a bit tedious to write a custom tool for this, and I can't find an existing one that does it well (especially one that works well with generics). Furthermore, it has the complication that it requires developers to use two different assemblies that could go out of date.
5) Similar to #4, I could go through every file and add something like "#ifdef UNITTEST" to every method and body to do the equivalent of what a tool would do. This doesn't require an external tool, but it would pollute the codebase with a lot of ugly "#ifdef"s.
Is there something else that I haven't considered that would be a good fit? Does a tool like what I mentioned in #4 already exist?
Again, the complicating factor is that this is a rather large API (hundreds of classes and ~10 files) and has existing applications using it which makes it hard to do drastic design changes.
There have been several questions on Stack Overflow about retrofitting an existing application to make it testable in general, but none seem to address the concerns I have (specifically in the context of a widely used API with many third-party developers). I'm also aware of "Working Effectively With Legacy Code" and think it has good advice, but I am looking for a specific .NET approach given the constraints mentioned above.
UPDATE: I appreciate the answers so far. One point that Patrik Hägne brought up is "why not extract interfaces?" This indeed works to a point, but it has some problems, such as that the existing design has many cases where we expose a concrete class. For example:
public class UserRepository
{
    public UserData GetData(string userName)
    {
        // ...
    }
}
Existing customers that are expecting the concrete class (e.g. "UserData") would break if they were given an "IUserData."
Additionally, as mentioned in the comments there are cases where we take in a class and then expose it for convenience. This could cause problems if we took in an interface and then had to expose it as a concrete class.
The biggest challenge to a significant rewrite or redesign is that there is a huge investment in the current API (thousands of hours of development and probably just as much third-party training). So, while I agree that a SOLID rewrite or an abstraction layer (that eventually could become the new API) focused on principles like the Interface Segregation Principle would be a plus from a testability perspective, it would be a large undertaking that probably can't be cost-justified at the present time.
We do have testing for the current API, but it is more complicated integration testing rather than unit-testing.
Additionally, as mentioned by Chad Myers, this question addresses a similar problem that the .NET Framework itself faces in some areas.
I realize that I'm probably looking for a "silver bullet" here that doesn't exist, but all help is appreciated. The important part is protecting the huge time investments by many third party developers as well as the huge existing development to create the current API.
All answers, especially those that consider the business side of the problem, will be carefully reviewed. Thanks!
What you're really asking is, "How do I design my API with SOLID and similar principles in mind so my API plays well with others?" It's not just about testability. If your customers are having problems testing their code with yours, then they're also having problems WRITING/USING their code with yours, so this is a bigger problem than just testability.
Simply extracting interfaces will not solve the problem because it's likely your existing class interfaces (what the concrete classes expose as their methods/properties) aren't designed with the Interface Segregation Principle in mind, so the extracted interfaces would have all sorts of problems (some of which you mentioned in a comment to a previous answer).
I like to call this the IHttpContext problem. ASP.NET, as you know, is very difficult to test around or with due to the "Magic Singleton Dependency" problem of HttpContext.Current. HttpContext is not mockable without fancy tricks like what TypeMock uses. Simply extracting an interface of HttpContext is not going to help that much because it's SO huge. Eventually, even IHttpContext would become a burden to test with so much so that it's almost not worth doing any more than trying to mock HttpContext itself.
Identifying object responsibilities, slicing up interfaces and interactions appropriately, and designing with the Open/Closed Principle in mind is not something you can force or cram into an existing API designed without these principles in mind.
I hate to leave you with such a grim answer, so I'll give you one positive suggestion: how about YOU take all the grief on behalf of your customers and build some sort of service/facade layer on top of your old API? This service layer will have to deal with the minutiae and pain of your API, but it will present a nice, clean, SOLID-friendly public API that your customers can use with much less friction.
This also has the added benefit of allowing you to slowly replace parts of your API and eventually make it so your new API isn't just a facade, it IS the API (and the old API is phased out).
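A bare-bones sketch of that facade idea, reusing the UserRepository example from the question (the facade names and the GetDisplayName method are invented):

// New, mock-friendly surface that customers code and test against.
public interface IUserLookup
{
    string GetDisplayName(string userName);
}

// The facade hides the legacy concrete types behind that surface.
public class UserLookupFacade : IUserLookup
{
    // UserRepository / UserData are the existing concrete API types from the question.
    private readonly UserRepository _legacy = new UserRepository();

    public string GetDisplayName(string userName)
    {
        UserData data = _legacy.GetData(userName);
        return data.ToString(); // stand-in for whatever mapping the real facade would do
    }
}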
Another approach would be to create a separate branch of the API and do option 3 there. Then you just maintain these two versions and deprecate the former. Merging changes from one branch into the other should work automatically most of the time.
As a reply to your edit, interface extraction does indeed work very well here:
public interface IUserRepository
{
    IUserData GetData(string userName);
}

public class UserRepository : IUserRepository
{
    // The old method is not touched.
    public UserData GetData(string userName)
    {
        // ...
    }

    // Explicitly implement the interface method.
    IUserData IUserRepository.GetData(string userName)
    {
        return this.GetData(userName);
    }
}
As I also said in a comment this may not be the way to go in every place. I think you should identify some main points in your API where it's extra important for your customers to be able to fake the interaction and start there. You don't have to make a complete rewrite of the whole API but it can transform gradually.
One approach you don't mention (and the one I'd prefer in most cases) is to extract interfaces for the classes you want the users of the API to be able to fake. Not knowing your API, I'd say that not every single class in it has to have its interface extracted.
Third-party users should not be testing your API. They want to test their code against your API, so they need to create mocks of the API, etc., but they would be relying on your own testing of the API to ensure it works. Or is that what you meant? Do you want to make your API easy to test against?
Start again in that case, and this time think about the testers :)
I agree with Kim. Why not re-write your core API using the best practices you explained, and supply a set of proxy/adapter classes that expose the old interface but talk to your new API?
Old developers will be naturally encouraged to migrate to the new API, but not be forced to immediately do so. New developers will simply use your new API. Announce an EOL for your old API interface if you are concerned about developers staying on the old API.
I have a large .NET web application. The system has projects for different purposes (e.g. CMS, forum, eCommerce), and I have noticed a (naive) pattern of calling directly into another project's classes. For example, the eCommerce module needs to generate a file on the fly for products, and I reference and call a method in the CMS to do this, because file handling is really a job for the CMS.
Obviously (and I know why), this is bad design and a case of high coupling.
I know a few ways to handle high coupling, like restructuring the project (although I don't really think this is a robust solution), but what else can I do to reduce high coupling? Any simple tips? Also, it would be good to know why/how they reduce coupling. I use .NET 3.5 and SQL Server 2005, so things like JMS (which I keep coming across in my search for tips on this design issue) are not applicable.
Thanks
BTW,
One of the reasons I ask this is that I have read previous questions similar to this one, but when a question that has been asked before is asked again, different tips can be learned as different people reply.
I know of dependency injection/IOC, but I am interested in the small things that can be done to reduce coupling.
How should I choose between using a static class, an interface-based class, or the IoC approach when deciding how to reduce coupling? Also, I could develop a web service that calls a static class, mixing up the approaches in my solution.
The interesting thing is that in my application, I don't want it to be disjointed. So I just have a forum, an eCommerce system, and any other module required, but everything has to gel into one site, so each module (which is represented as a dedicated project in my Visual Studio solution) needs to know about every other module and work with it. So, for example, I might have a module that handles user profiles (working with ASP.NET membership, roles, etc.), but this will work with the forum module, as a user on the forum will be a registered user on the site (one login throughout), and his or her profile will come from the user profile module. This is as opposed to the separate profiles seen on other sites I've come across.
You should expose web services from those projects that will be needed by other projects. This is kind of the base-level idea behind SOA. So I would just create web services and consume them, which will decouple you quite a bit from where you are now. Hope this helps.
I'd consider starting by doing an "extract interface" refactoring on the tightly coupled pieces. For example, if using the CMS as a backing store, create an interface that can store things, then create a mediator or adapter class that knows about the CMS, but isolate the logic that knows about the storage mechanism details to just that class.
Then, for testing, you can easily substitute an in-memory store or local-filesystem store that doesn't depend on the CMS being up.
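As a sketch of that refactoring (the IFileStore name and its members are invented for illustration):

using System.Collections.Generic;

// The eCommerce code depends only on this abstraction.
public interface IFileStore
{
    void Save(string path, byte[] contents);
}

// The adapter that knows about the CMS lives in one place.
public class CmsFileStore : IFileStore
{
    public void Save(string path, byte[] contents)
    {
        // call into the CMS file-handling API here
    }
}

// Test double: no CMS needed to exercise the eCommerce logic.
public class InMemoryFileStore : IFileStore
{
    public readonly Dictionary<string, byte[]> Files = new Dictionary<string, byte[]>();

    public void Save(string path, byte[] contents)
    {
        Files[path] = contents;
    }
}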
Consider using techniques like dependency injection (see StructureMap, Spring.NET, Ninject) to simplify instantiation if a simple factory doesn't give you the flexibility you need.
It sounds like you have a layering problem. Your assembly dependencies should flow in a single direction - from least stable to most stable. That allows you to version sensibly. Generally, that chain would be something like UI (least stable) -> Domain Core (stable) -> Data Access (most stable). You can throw in a Utilities or some infrastructure assemblies along the way, but again - they should be considered more stable than the assemblies that depend on them.
I'd guess your App.ECommerce and App.Cms assemblies are more siblings than layers - so you would not want those to depend on each other, but that doesn't mean you can't reuse functionality. For your particular scenario, you need to push the needed functionality down to a Core or Utilities assembly that both ECommerce and Cms can depend on. If it's a specific implementation that ECommerce provides, then you can push an interface or abstract base class to the Core - and have a higher layer (perhaps IoC container) wire up the concrete Cms.FileCreator class to the ECommerce.IFileCreator dependency.
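In code, that suggestion might look roughly like this (the namespaces follow the App.* names above, the interface is pushed down to a Core assembly, and the member names are invented; an IoC container would supply the FileCreator at the composition root):

// App.Core assembly: the abstraction both modules can depend on.
namespace App.Core
{
    public interface IFileCreator
    {
        void CreateProductFile(string productId);
    }
}

// App.Cms assembly: the concrete implementation lives with the CMS's file handling.
namespace App.Cms
{
    public class FileCreator : App.Core.IFileCreator
    {
        public void CreateProductFile(string productId)
        {
            // CMS file-handling code goes here
        }
    }
}

// App.ECommerce assembly: depends only on App.Core, not on App.Cms.
namespace App.ECommerce
{
    public class ProductExporter
    {
        private readonly App.Core.IFileCreator _fileCreator;

        // A higher layer (or IoC container) passes in App.Cms.FileCreator here.
        public ProductExporter(App.Core.IFileCreator fileCreator)
        {
            _fileCreator = fileCreator;
        }

        public void Export(string productId)
        {
            _fileCreator.CreateProductFile(productId);
        }
    }
}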
Get proper abstractions in place as described by others (interfaces, etc). Program against abstractions, not concretions.
Design your classes with Dependency Injection in mind as you have described.
Use an Inversion of Control Container as the mortar between the bricks.
Unity from the Patterns & Practices team complements the Enterprise Library.
Scott Hanselman has a nice List of .NET Inversion of Control Containers.
Well, I don't know anything about .NET, but how about refactoring common code into a separate, underlying project/layer? Loads of stuff in a web app can be done generically to suit a CMS, a forum, and eCommerce alike; writing to a file is a perfect example.
Another approach could be to see the forum and eCommerce as modules in a CMS, which would also make sense. Then they could safely use specified APIs of the CMS.