I am having trouble deciding between two possible design choices. I have a web site which has a pretty extensive business layer and DAL (the website, BLL, and DAL are all in separate DLLs). I need to design a Windows service that can take some of my business objects, write them to a file, and store them locally within our network. The files are then imported into a 3rd-party program which does further processing on them.
I can design this service one of two ways:
Wrap the service around the business layer and DAL. This would be quick and easy but the downside is every time the business layer changes, the service will have to be updated.
Add a web service to the web site and just query the web service for what I need. The Windows service wouldn't have to use the business layer, and as long as the web service doesn't change, I'll be good. The only downside is that I may have to create some basic business objects to parse the web service's XML into.
The Windows service will have to poll the business layer/DAL or web service every 10-20 minutes or so. The Windows service is necessary because the web site is hosted offsite and thus doesn't have access to any of our local resources. I am leaning towards option 2, but I'm torn.
Given the two choices, which is the better option? Are there other possible options that I haven't considered? Also, how do you usually design for situations where you have one core set of libraries that are primarily used by a website but may end up being used either for data retrieval or to perform some function?
I'm not sure what the criteria are for storing certain business objects as files on the network, but if you're doing this on a regular basis then presumably you are trying to track changes of some kind, so there is another solution: build the logic directly into the business/persistence layer.
If this secondary file storage is a business requirement, then it ought to be embedded directly in that tier and triggered by some sort of event. That way, instead of having what is essentially an ad-hoc post-processing job that can get out of sync with the rest of the system, you have just one coherent system.
Invert the design - instead of wrapping a web service around the business services and using it for ad-hoc reporting, create a web service that encapsulates the data you need to receive from the export on a regular basis, and have your business tier send messages to it when new data is ready. You can send messages asynchronously so as not to tie up the business services, and depending on your reliability requirements you could set up a message queue (it's easier than it sounds, WCF already knows how to use MSMQ as the delivery mechanism, it's just a few configuration settings to change).
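To make the push idea concrete, here is a rough C# sketch assuming WCF over the MSMQ binding; the contract, snapshot type, and queue address are all invented for illustration:
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

// One-way contract the business tier pushes export data to.
[ServiceContract]
public interface IExportSink
{
    [OperationContract(IsOneWay = true)]   // fire-and-forget; MSMQ handles delivery
    void PublishSnapshot(ExportSnapshot snapshot);
}

[DataContract]
public class ExportSnapshot
{
    [DataMember] public int EntityId { get; set; }
    [DataMember] public DateTime CapturedAt { get; set; }
}

public class ExportPublisher
{
    // Called from the business tier whenever new data is ready.
    public void Publish(ExportSnapshot snapshot)
    {
        var factory = new ChannelFactory<IExportSink>(
            new NetMsmqBinding(),
            "net.msmq://localhost/private/ExportQueue");   // assumed queue address
        IExportSink sink = factory.CreateChannel();
        sink.PublishSnapshot(snapshot);                    // returns immediately
        factory.Close();
    }
}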
I can't say with any certainty that this is better than your first two options without knowing a good deal more about the architecture, the amount and type of data, the scheduling and reporting requirements, etc., but it is something you should consider. If you think that your business services are likely to change fairly frequently, then it might work better to have them push data outward to a "warehouse"-type abstraction rather than having a mining process pull it.
Otherwise, I think I would go with option 2. I don't know if you've worked with WCF services before but you should know that you never actually have to parse XML. Everything is done through data contracts and when you generate a proxy for the web service, you get strongly-typed .NET objects. If you can pass your domain objects directly through the service API then it's really very little work at all to create the web service.
The real downside to a web service is that you have to take steps to ensure that your service contract never substantially changes (otherwise it can break clients). So you might eventually end up needing to create Data Transfer Objects on the service side to use as the public API instead of passing through domain objects. But in many cases you won't need to do this for a good long while, so go ahead and try it out, you'll see that it's pretty straightforward.
A variant of option two:
Add a WCF service to the site, exposing the information required as basic DTO DataContracts.
You could use AutoMapper or similar within the WCF service to handle the boring bit of converting your business objects to DTOs.
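For example, a minimal sketch of that mapping step, assuming a recent AutoMapper version (the Order/OrderDto types are invented here):
using System.Runtime.Serialization;
using AutoMapper;

// Domain object (lives in the business layer).
public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
    public void Save() { /* persistence logic stays server-side */ }
}

// Flat DTO exposed by the WCF service.
[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string CustomerName { get; set; }
    [DataMember] public decimal Total { get; set; }
}

public static class OrderMapping
{
    // Configure once (e.g. at service startup) and reuse the mapper per call.
    private static readonly IMapper Mapper = new MapperConfiguration(
        cfg => cfg.CreateMap<Order, OrderDto>()).CreateMapper();

    public static OrderDto ToDto(Order order)
    {
        return Mapper.Map<OrderDto>(order);
    }
}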
From your point two, I understand that you would just add the web API for this extra service. Thus, you would have to update three parts for any change (extra service, web API, DLL). With option one you would only have to update two parts (extra service, DLL), so I would go with option one.
BUT if you are aiming for a general web API that you will have to maintain anyway, go with option two.
For more flexibility, instead of hard-wrapping your service around the business layer and DAL, and instead of relying on the web site (through an integrated web service), make use of design concepts like interfaces, dynamic type loading, and Inversion of Control, so that your service is a thin, decoupled layer that communicates with the business layer and DAL and allows them to be updated without recompiling the service. Maybe put the assemblies in the Global Assembly Cache of the machine so they can be shared across various other projects, assemblies, and apps.
I know it seems like throwing out jargon for the sake of it but that's how I would start to think.
Edit:
Loading types dynamically is actually amazing and easy. This is quick C# pseudo-code for one way to do it, and without testing it might actually be right.
// Get a System.Type from string representation
Type t = Type.GetType("type name");
// Create instance of type.
object o = Activator.CreateInstance(t);
// Cast it to the interface (or actual Type) you're working with.
IMyInterface strongObject = (IMyInterface)o;
// ... and continue from there with the instance.
Instructions about how to formulate the string representation of a type name can be found in MSDN under Type.AssemblyQualifiedName, Type.GetType and similar places. In short you can see a lot of assembly qualified type names in the app.config or web.config files because they use the same format.
I am using .NET 4 to create a small client server application for a customer. Should I create one giant service that implements many contracts (IInvoice, IPurchase, ISalesOrder, etc.) or should I create many services, each running one contract on its own port? I am specifically interested in the pros/cons of either choice. Also, what is the common way of answering this question?
My true dilemma is that I have no experience making this decision, and I have little enough experience with WCF that I need help understanding the technical implications of such a decision.
Don't create one large service that implements n-number of service contracts. These types of services are easy to create, but will eventually become a maintenance headache and will not scale well. Plus, you'll get all sorts of code merging conflicts if there's a development group competing for check-ins/check-outs.
Don't create too many services either. Avoid the trap of making your services too fine-grained. Try to create services based on a functionality. The methods exposed by these services shouldn't be fine-grained either. You're better off having fewer methods that do more. Avoid creating similar functions like GetUserByID(int ID), GetUserByName(string Name) by creating a GetUser(userObject user). You'll have less code, easier maintenance and better discoverability.
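A rough sketch of that coarse-grained style (the types below are illustrative, not prescriptive):
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class UserCriteria
{
    [DataMember] public int? Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[DataContract]
public class User
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface IUserService
{
    // One coarse-grained operation instead of GetUserByID/GetUserByName variants.
    [OperationContract]
    User GetUser(UserCriteria criteria);
}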
Finally, you're probably only going to need one port no matter what you do.
UPDATE 12/2018
Funny how things have changed since I wrote this. Now with the micro-services pattern, I'm creating a lot of services with chatty APIs :)
You would typically create different services for each main entity like IInvoice, IPurchase, ISalesOrder.
Another option is to separate queries from commands. You could have a command service for each main entity and implement business operations that accept only the data they need in order to perform the operation (avoid CRUD-like operations), and one query service that returns the data in the format required by the client. This means that the command part uses the underlying domain model/business layer, while the query service operates directly on the database (bypassing the business layer, which is not needed for querying). This simplifies your querying a lot and makes it more flexible (return only what the client needs).
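A hedged sketch of what that command/query split could look like in WCF terms (all names are invented):
using System.Runtime.Serialization;
using System.ServiceModel;

// Command side: business operations take only the data they need
// and go through the domain model/business layer.
[ServiceContract]
public interface IInvoiceCommandService
{
    [OperationContract]
    void ApproveInvoice(int invoiceId, string approvedBy);
}

// Query side: returns data shaped for the client, reading the database directly.
[ServiceContract]
public interface IInvoiceQueryService
{
    [OperationContract]
    InvoiceSummary[] GetOpenInvoices(int customerId);
}

[DataContract]
public class InvoiceSummary
{
    [DataMember] public int InvoiceId { get; set; }
    [DataMember] public decimal AmountDue { get; set; }
}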
In real-world applications you typically have one service contract for each entity, so Invoice, Purchase, and SalesOrder would each have a separate ServiceContract.
However, for each service contract there will be heterogeneous clients: for example, Invoice might be called by the back office through a Windows application using netNamedPipeBinding or netTcpBinding, while at the same time a client application needs to call the service using basicHttpBinding or wsHttpBinding. Basically, you need to create multiple endpoints for each service.
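For illustration, a minimal self-hosting sketch with two endpoints on one contract; the service, addresses, and bindings here are assumptions, not a recommendation:
using System;
using System.ServiceModel;

[ServiceContract]
public interface IInvoiceService
{
    [OperationContract]
    string GetInvoiceNumber(int id);
}

public class InvoiceService : IInvoiceService
{
    public string GetInvoiceNumber(int id) { return "INV-" + id; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(InvoiceService),
            new Uri("http://localhost:8080/Invoice"),
            new Uri("net.tcp://localhost:8081/Invoice"));

        // HTTP endpoint for remote/web clients.
        host.AddServiceEndpoint(typeof(IInvoiceService), new BasicHttpBinding(), "");

        // TCP endpoint for back-office Windows clients on the LAN.
        host.AddServiceEndpoint(typeof(IInvoiceService), new NetTcpBinding(), "");

        host.Open();
        Console.WriteLine("Invoice service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}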
It seems that you are mixing up DataContracts and ServiceContracts.
You can have one ServiceContract and many DataContract(s) and that would perfectly suit your needs.
The truth is that splitting up WCF services - or any services - is a balancing act. The principle is that you want to keep downward pressure on complexity while still considering performance.
The more services you create, the more configuration you will have to write. Also, you will increase the number of proxy classes you need to create and maintain on the client side.
Putting too many ServiceContracts on one service will increase the time it takes to generate and use a proxy. But, if you only end up with one or two Operations on a contract, you will have added complexity to the system with very little to gain. This is not a scientific prescription, but a good rule of thumb could be say about 10-20 OperationContracts per ServiceContract.
Class coupling is of course a consideration, but are you really dealing with separate concerns? It depends on what your system does, but most systems deal with only a few areas of concern, so splitting things up may not actually decrease class coupling that much anyway.
Another thing to remember, and this is ultra important, is to always make your methods as generic as possible. WCF deals in DataContracts for a reason. DataContracts mean that you can send any object to and from the server so long as the DataContracts are known.
So, for example, you might have 3 OperationContracts:
[OperationContract]
Person GetPerson(string id);
[OperationContract]
Dog GetDog(string id);
[OperationContract]
Cat GetCat(string id);
But, so long as these are all known types, you could merge these into one operation like:
[OperationContract]
IDatabaseRecord GetDatabaseRecord(string recordTypeName, string id);
Ultimately, this is the most important thing to consider when designing service contracts. This also applies to REST if you are using a DataContract-like serialization method.
Lastly, go back over your ServiceContracts every few months and delete operations that are not getting used by the clients. This is another big one!
You should base the decision on the expected load, the extensibility needed, and the longer-term outlook. As you wrote "small client server application for a customer", it isn't entirely clear what the intended use of the system is. Mr. Big's answer must be considered too.
You are most welcome to put forward a further question backed with specific data or particulars about the situation at hand. Thanks.
I've just begun learning WCF, and I'm coming from a total non-web background.
I have built a 3-tier desktop application, which compiles into one exe, which runs locally.
Now I want to move the whole business logic layer to a central server and make the GUI a client application.
As far as I understand, WCF should be my solution, and indeed it helped me achieve what I wanted.
I managed to run remote functions, which is the basis of what I need.
My problem now, is that I don't quite understand the architecture.
For example, one of my services returns a data type (class) from my business logic layer.
This class automatically becomes available to the client through the WCF mechanism.
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
For example a Save method (saves to the db).
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since this class might be sent to one of my services.
I do not want to re-validate the class instance in the service.
What should I do? Should I build another layer, a restricted version of the business logic, which I expose to the client? Or is there any way to expose only part of my class to the client, without restricting the server itself?
I know this is a basic question, but honestly I've searched a lot before asking here. My problem is I don't quite know what to search for.
My second question is then: do you have any recommendations for resources that can explain this architecture to me?
Typically, if you want to encapsulate your business layer, you would not want to expose the business objects directly. This is because you now have a de-coupled client and you don't necessarily want to have to update the client every time the business logic/properties change.
This is where Data Transfer Objects (DTO) come into play nicely. Usually, you want to have control over your contract (data and methods) that you expose. Therefore, you would explicitly make other objects (DTOs) that make up the transfer layer. Then, you can safely change your client and server code independently (as long as both still fulfill the contract objects).
This usually requires a little more mapping (before you send or receive on each side) but it is often worth it.
For WCF, your interfaces and classes marked with [ServiceContract] and your classes marked with [DataContract] usually make up this transfer layer.
In WCF, to expose a method to the client you have to mark it with the OperationContractAttribute. So if you don't want clients to use your Save method, just don't mark it with this attribute.
More info here: http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.aspx
Pretty much the same thing applies to properties, but with a different attribute: DataMemberAttribute. If you don't want the client to see a property, just don't mark it with the [DataMember] attribute.
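A small sketch of that in practice (the class and service below are illustrative):
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }        // serialized to the client
    [DataMember] public string Name { get; set; }   // serialized to the client

    public string InternalNotes { get; set; }        // no [DataMember]: never leaves the server

    public void Save() { /* writes to the DB; methods are never serialized anyway */ }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    Customer GetCustomer(int id);      // part of the contract

    // No [OperationContract] here, so this method never appears in the
    // service contract and clients cannot invoke it.
    void Save(Customer customer);
}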
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
Are you able to provide an example of your class and interface code? If so I'm sure you might be able to get more specific answers.
For example a Save method (saves to the db).
One possible approach would be to separate your class into 2 classes. Define the properties in the first class and then use that class as the base class of your second class. Then use the second class to define the methods. This would allow you to return only the properties while allowing you to keep your code DRY.
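For example, a minimal sketch of that split (names invented):
// Base class: only the data that the service should return to clients.
public class EmployeeData
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Office { get; set; }
}

// Derived class: server-only behavior stays here and is never sent over the wire.
public class Employee : EmployeeData
{
    public void Save()
    {
        // persistence logic (DB access) lives only on the server
    }
}

public class EmployeeService
{
    // The operation returns the base type, so clients never see Save().
    public EmployeeData GetEmployee(int id)
    {
        var employee = new Employee { Id = id, Name = "Sample", Office = "HQ" };
        return employee;
    }
}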
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since this class might be sent to one of my services.
I do not want to re-validate the class instance in the service.
While you are able to define logic in the get and set methods for each property I would highly recommend revalidating any input received between services simply because any future changes or errors in one service could potentially lead to larger problems across your application. In addition this also helps to ensure your application is more secure against any potential attacks.
Should I build another layer, a restricted version of the business logic, which I expose to the client? Or is there any way to expose only part of my class to the client, without restricting the server itself?
I agree with the above answers that you should be able to limit access to the different properties and methods using the data and method attributes within your interfaces.
My second question is then: do you have any recommendations for resources that can explain this architecture to me?
If you are looking for inexpensive but highly valuable video-based training, I've found the courses that Pluralsight offers to be quite good for both architecture and WCF services (btw, I am not associated with them, just enjoyed their training).
Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way that we do: everything they can do through our provided web interface, they can also do programmatically.
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
Problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface for the API. That way leads to potential performance issues of your main website. Never mind that as soon as you deploy a breaking API change you have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than this. In other words, it should try to simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and be updated for the next 5 years...
In my current project (which is big), I first started with shared entities across all layers, then I discovered that I needed separate entities for the presentation layer, and now (6 months later) I'm using separate classes for each layer (persistence, service, domain, presentation). That's not because I'm paranoid or was following some rules; I just couldn't make it all work with a single set of classes across layers... Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like AutoMapper and ValueInjecter.
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.
Several "parts" (a WinForms app for exmaple) of my project use a DAL that I coded based on L2SQL.
I'd like to throw in several WebApps into the mix, but the issue is that the DAL "offers" much more data than the WebApps need. Way more.
Would it be OK if I wrapped the data that the websites need within a web-service, and instead of the website connecting directly to the DAL it would go through the web-service which in turn would access the DAL?
I feel like that would add a lot of overhead, but on the other hand, I definitely don't like the feeling of knowing that the WebApps have the "capabilities" of accessing much more data than they actually need.
Any input would be greatly appreciated.
Thank you very much for the help.
You can either create web services, or add a repository layer that presents only the data that your applications require. A repository has the additional benefit of being a decoupling layer, making it easier to unit test your application (by providing a mock repository).
If you plan on eventually creating different frontends (say, a web UI and a WPF or Silverlight UI), then web services make a lot of sense, since they provide a common data foundation to build on, and can be accessed across tiers.
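As a small illustration, here is a sketch of such a repository seam; the interface and types are invented:
using System.Collections.Generic;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// Narrow repository interface: exposes only what the web apps actually need.
public interface IProductRepository
{
    Product GetById(int id);
    IList<Product> GetActiveProducts();
}

// An in-memory fake for unit tests; the production implementation
// would wrap the existing L2SQL DAL instead.
public class FakeProductRepository : IProductRepository
{
    private readonly List<Product> _products = new List<Product>();

    public Product GetById(int id) { return _products.Find(p => p.Id == id); }
    public IList<Product> GetActiveProducts() { return _products.FindAll(p => p.IsActive); }
}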
If your data access layer were pulling all data as IQueryable, then you would be able to query your DAL and drill down your db calls with more precision.
See the very brief blog entry I wrote on Repository and Service layers using Linq to SQL. My article is built around MVC but the concept of Repository and Service layers would work just fine with WebForms, WinForms, Web Services, etc.
Again, the key here is to have your Repository or your DAL return objects as IQueryable, so that you wait until the last possible moment to actually commit to requesting data.
Your structure would look something like this:
Domain Layer
    Repository Layer (IQueryable)
        Service layer for Web App
            Website
        Service layer for Desktop App
            Desktop App
        Service layer for Web Services
            Web Service
Inside your Service layer is where you customize the specific calls based on the application you're developing for. This allows for greater security and configuration on a per-app basis while maintaining a complete repository that doesn't need to be modified unless you swap out your ORM (if you ever decide you need to).
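A rough sketch of that layering with invented types: the repository exposes IQueryable, and the per-app service layer narrows the query for its own needs.
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public DateTime PlacedOn { get; set; }
    public decimal Total { get; set; }
}

// Repository layer: returns IQueryable, so nothing is executed yet.
public interface IOrderRepository
{
    IQueryable<Order> Orders { get; }
}

// Service layer for the web app: customizes the call for that app;
// the database is only hit when the results are enumerated.
public class WebOrderService
{
    private readonly IOrderRepository _repository;

    public WebOrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public List<Order> GetRecentLargeOrders()
    {
        return _repository.Orders
            .Where(o => o.PlacedOn > DateTime.Today.AddDays(-30) && o.Total > 1000m)
            .OrderByDescending(o => o.PlacedOn)
            .Take(50)
            .ToList();   // the query executes here
    }
}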
There is nothing inherently wrong with having more than you need in this case. The entire .NET 4 Client Profile contains over 50MB of assemblies, classes, etc. I might use 5% of it in my entire career. That doesn't mean I don't appreciate having all of it available in case I need it.
If you plan to provide the DAL to developers that should not have access to portions of the data, write a wrapper or derive a new DAL. I would avoid the services route unless you're confident you can accommodate for the overhead.
Sounds like you are on the right track. If many applications are going to use this data, you gain a few advantages by having services with DTOs.
If the domain model changes, just the mapping to the DTO needs to change. You can isolate the consuming application from these changes.
Less data over the wire
You can isolate your applications from the implementation of the DAL.
You can expose different services (maybe different DTOs) for different applications if it is necessary to restrict what parts of the object model should be exposed.
I have a quick question that I am hoping is fairly simple to answer. I am attempting to develop a shared Employee object library for my company. The idea is to create a centralized database that contains information about our employees (reporting hierarchy, office locations, general info, etc.) and then create a shared object library for this database.
My question is what is the best way to create this library so it can be shared among applications.
Do I create a self-contained library that stores the database connection? (I can see concurrency issues here, and it doesn't feel right.)
Client -> Server and then deploy the "client library" for use among any application.
Or would a Web/WCF service be more ideally suited to this situation?
There are many options because the question can be interpreted broadly. I suggest taking all the answers to heart. Having said that, here's my spin on it...
I used to view software layers as vertical because of n-tier training, and have a hard time breaking away from those notions to something conceptually broader and less restrictive. I strive to view .NET assemblies as just pieces of a puzzle.
You're right to separate connection string from code and that's easily supported by .NET .config file, or application settings.
I often prefer a small core library containing the business logic, concepts, and flows, although each of those can be broken out. Within that concept you can still separate business from data access into different assemblies so you can swap in a new kind of data access, while sticking with the core module (a kind of "business kernel" or "engine", if you will).
You can express your "business kernel" through many presentation types, for example
textual/Console I-O
GUI: WinForms, WPF, Silverlight, ASP.NET, LED/pixelboard, etc
as cmdlets for Powershell interactions
web service expressions
kinds of mobile apps
etc.
You can accelerate development by using patterns and related implementations (for example, the Microsoft Enterprise Library) to bend software to your will, and by loosening the coupling with dependency injection, e.g. Ninject (one of many), or other inversion of control techniques.
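As one concrete, purely illustrative example of loosening the coupling with Ninject (the repository and directory types are invented):
using System;
using Ninject;

public interface IEmployeeRepository
{
    string GetEmployeeName(int id);
}

public class SqlEmployeeRepository : IEmployeeRepository
{
    public string GetEmployeeName(int id) { return "Employee " + id; }   // real version hits the DB
}

// The "business kernel" depends only on the interface.
public class EmployeeDirectory
{
    private readonly IEmployeeRepository _repository;

    public EmployeeDirectory(IEmployeeRepository repository)
    {
        _repository = repository;
    }

    public string Describe(int id) { return _repository.GetEmployeeName(id); }
}

class Program
{
    static void Main()
    {
        // Composition root: swap SqlEmployeeRepository for a test fake or a
        // web-service-backed repository without touching the kernel code.
        var kernel = new StandardKernel();
        kernel.Bind<IEmployeeRepository>().To<SqlEmployeeRepository>();

        var directory = kernel.Get<EmployeeDirectory>();
        Console.WriteLine(directory.Describe(42));
    }
}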
I usually prefer to have a middle tier layer (so some sort of Web/WCF service between the client and the database). This way you separate the clients from the database, so that you can control the number of connections, or you can change the schema of the database in a way that will be transparent for the clients.
Depending on your situation, you can either make the clients connect to the WCF service (preferred in most cases), or create a dll that will wrap the connection to the service and perform some additional processing on the client side.
It depends how deeply you need to integrate your library into the main application. If you want to extend the application domain with custom entities, you have the following options:
Build persistence into the library. You will need to pass a connection string to the repository class, and the database must also include the hardcoded schema for your library. If you use LINQ to SQL as the data access library, you may mark up your entities with mapping attributes, as sketched after this list (see http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.aspx)
Provide the domain library only, and implement persistence outside it, if your data layer supports POCO mapping (EF 4 does).
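A minimal sketch of the attribute mark-up mentioned in the first option; the table and column names are invented:
using System.Data.Linq.Mapping;

// Entity in the shared library, mapped with LINQ to SQL attributes so the
// hosting application's DataContext can persist it against the expected schema.
[Table(Name = "dbo.Employees")]
public class Employee
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int Id { get; set; }

    [Column(Name = "FullName")]
    public string Name { get; set; }

    [Column]
    public int OfficeId { get; set; }
}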
Usually, putting the domain model into a separate assembly causes a few problems:
Integration into the application. The application itself usually provides a few services, like data access, security, logging, web services, etc. If your application has an ideal design and the layers are fully decoupled from each other, there is no problem adding new entities; but usually the data access layer requires inheritance from a base class, the logger is a singleton, security checks are hardcoded into business logic methods, and so on. Such applications must be refactored: services must be extracted into interfaces, and those interfaces must be passed to components in the separate assembly.
Entity references. If you use a rich domain model, you probably want to reference entities declared in another assembly. This problem can partially be solved with generics, but you need a special design of your data access layer that allows you to get lists of generic entities, get an entity by id, and so on.
Database integration. It may be hard to maintain database changes if some entities are developed separately from others, especially by another team.
Just be sure to keep your connection method separate from your data access layer, and then you can change the connection method later if requirements change. If you have a simple DLL that holds your real logic, then adding a communication layer on top should be simple. This will also allow you to use all three methods you mentioned and have all your actual logic in a single DLL used amongst all three.