We are developing multiple web services in C# using WCF, but we're new to it.
From what we have read and learned so far, this is our approach:
We have a class library that we called CommonLibrary that has a few classes that are going to be used on all our services (language stuff, type of user connected and a common object that all the services are meant to return).
We have another class library called SecurityLibrary which validates the user that is consuming the method.
At the moment we have 2 services that are about 90% finished; both of them use CommonLibrary and SecurityLibrary.
Now the questions:
Is this a bad approach?
Are we violating the SOA principles of encapsulation and autonomy by using common/shared library with each of our services?
A third person told us to copy all the code from those libraries into each of our services so that each service is 100% autonomous. Is this the right way? I think it is hard to maintain and creates a lot of duplication; any update made to one copy has to be replicated or merged into the other services...
No, it is not a bad approach.
If using libraries in your service were bad, you would also have to keep away from the .NET libraries themselves. I am wondering why you think a service process is only allowed to consist of a single assembly.
Furthermore, copy-pasting code is a very, very bad habit; it is a well-known anti-pattern. It duplicates the maintenance effort and every bug inside the copied code.
Sharing libraries does not make your services less "autonomous". I think it can even make them more compatible, because they share types.
A good service is just a process, consisting of one or more (shared) assemblies, with a well-defined service contract. This service contract is never allowed to be broken.
BTW: In my answer I did not include the problems that come with shared assemblies in the GAC. That is a feature (or problem) shared by all processes, not only services.
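As a minimal sketch of that idea, here is roughly what sharing a common return type from CommonLibrary across two services could look like. The type and operation names are illustrative, not taken from the question; the point is only that the shared assembly is an ordinary reference, and whatever it contributes to the [ServiceContract] becomes part of the published contract.

```csharp
// --- CommonLibrary (shared class library) ---
using System.Runtime.Serialization;

namespace CommonLibrary
{
    // Hypothetical shape of "the common object that all the services return".
    [DataContract]
    public class ServiceResult
    {
        [DataMember] public bool Success { get; set; }
        [DataMember] public string Message { get; set; }
    }
}

// --- Service A (one of the WCF services, referencing CommonLibrary) ---
using System.ServiceModel;
using CommonLibrary;

namespace ServiceA
{
    [ServiceContract]
    public interface ICustomerService
    {
        // The shared type is now part of the wire contract, so changing it
        // is a breaking change for every consumer of every service using it.
        [OperationContract]
        ServiceResult GetCustomer(int customerId);
    }
}
```

The same CommonLibrary reference in the second service does not make either service less autonomous; what matters is that the contracts they publish stay stable.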
I am dealing with some architectural design concerns that need to be sorted out. My current architecture can be seen below; each box is a project in Visual Studio, and together they form the solution.
My core application is coded in the WestCore.AppCore context, and I have another project group called CSBINS (which contains the external system web service integrations). CSBINS is a merchant product, which is why I found it better to separate it into another project that depends only on the most commonly used interfaces from WestCore.AppCore.
Right now WestCore.Api does not have any logic in it; all the application logic is handled inside AppCore and AppCore.Csbins.
The problem is that I sometimes need to use WestCore.AppCore.Csbins services inside WestCore.AppCore, which causes a circular-reference issue.
The best approach I can think of right now is to add Endpoint Services to WestCore.Api and move the cross-cutting logic into those Endpoint Services.
However, I would like to get suggestions and hear about design concerns before going further, since I am sure there are many possible design choices.
I am also considering moving the common AppCore interfaces and classes to WestCore.AppCore.Common so that I won't need to reference the whole WestCore.AppCore project from WestCore.AppCore.Csbins.
Why are you using services inside other services? This is probably a bad thing and needs refactoring.
Those Core projects look like application services projects; it might help to call them 'WestCore.ApplicationServices', since 'Core' implies they belong at the domain level.
It sounds like you need to implement an anti-corruption layer to integrate with the third-party vendor rather than creating a whole new 'domain' context. This should be as straightforward as defining an interface in your domain layer (personally I use the *Gateway suffix to identify interfaces that interact with external systems).
Not knowing anything about your domain, I would probably start with something that looks like this (I've assumed csbins is some sort of payment or accounting gateway):
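A hedged sketch of what that could look like; every name below (IPaymentGateway, CsbinsClient, and so on) is an illustrative assumption rather than anything from the actual codebase:

```csharp
using System;

// --- WestCore.AppCore (domain layer) ---
// The "port": the only thing the domain knows about the external system.
public interface IPaymentGateway
{
    PaymentResult TakePayment(Guid orderId, decimal amount);
}

public class PaymentResult
{
    public bool Succeeded { get; private set; }
    public string ErrorCode { get; private set; }

    public static PaymentResult Success() { return new PaymentResult { Succeeded = true }; }
    public static PaymentResult Failed(string code) { return new PaymentResult { ErrorCode = code }; }
}

// --- WestCore.AppCore.Csbins (integration layer) ---
// The anti-corruption layer: translates between the vendor's model and the
// domain's model so the vendor types never leak into WestCore.AppCore.
public class CsbinsPaymentGateway : IPaymentGateway
{
    private readonly CsbinsClient _client;

    public CsbinsPaymentGateway(CsbinsClient client) { _client = client; }

    public PaymentResult TakePayment(Guid orderId, decimal amount)
    {
        var response = _client.SubmitPayment(orderId.ToString(), amount);
        return response.IsOk
            ? PaymentResult.Success()
            : PaymentResult.Failed(response.ErrorCode);
    }
}

// Stand-in for the vendor SDK client (assumed; replace with the real csbins client).
public class CsbinsClient
{
    public CsbinsResponse SubmitPayment(string orderId, decimal amount)
    {
        // The real call to the merchant system would happen here.
        return new CsbinsResponse { IsOk = true };
    }
}

public class CsbinsResponse
{
    public bool IsOk { get; set; }
    public string ErrorCode { get; set; }
}
```

Wired up this way (the Csbins project implements an interface owned by AppCore, and the concrete class is supplied via dependency injection at the composition root), AppCore never needs a reference to AppCore.Csbins, which should dissolve the circular-reference problem described above.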
Also, I would strongly recommend avoiding "Common" and "Shared" libraries at the domain level; you shouldn't need them. Your interfaces and classes are DOMAIN objects and belong in your DOMAIN library. The Application Services should use domain models directly, with implementations of domain interfaces supplied via Dependency Injection. Hopefully your Domain Models are fleshed out enough that your application service classes are just orchestration wrappers.
Application Services in DDD are supposed to orchestrate full business use cases, using Repositories to fetch Aggregates, calling methods on the Aggregates and managing infrastructure concerns like database transactions.
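As a rough illustration of that orchestration role (all names below are hypothetical, and the repository and unit-of-work interfaces are assumed abstractions rather than anything prescribed by DDD itself):

```csharp
using System;

// Assumed abstractions; a real project would already have its own versions.
public interface IOrderRepository { Order Get(Guid id); void Save(Order order); }
public interface ITransaction : IDisposable { void Commit(); }
public interface IUnitOfWork { ITransaction Begin(); }

public class Order
{
    public Guid Id { get; private set; }
    public bool Placed { get; private set; }

    // The business rule lives on the aggregate, not in the application service.
    public void Place()
    {
        if (Placed) throw new InvalidOperationException("Order already placed.");
        Placed = true;
    }
}

// The application service only orchestrates the full use case.
public class PlaceOrderService
{
    private readonly IOrderRepository _orders;
    private readonly IUnitOfWork _unitOfWork;

    public PlaceOrderService(IOrderRepository orders, IUnitOfWork unitOfWork)
    {
        _orders = orders;
        _unitOfWork = unitOfWork;
    }

    public void PlaceOrder(Guid orderId)
    {
        using (var tx = _unitOfWork.Begin())   // infrastructure concern (transaction)
        {
            var order = _orders.Get(orderId);   // fetch the aggregate
            order.Place();                      // invoke the domain behaviour
            _orders.Save(order);
            tx.Commit();
        }
    }
}
```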
When reading books by Eric Evans, Vaughn Vernon and Scott Millett, you can find great examples of how to separate your projects. But I have never found a clear answer for this situation.
Suppose you have a Domain, and three "entry points" to communicate with this domain:
Rest API for synchronous actions
Messenger "daemon" / "service" running on the OS for asynchronous actions
Powershell cmdlets for administrative users for maintenance actions
Where do you place those Application Services if you have one DLL per entry point for deployment purposes?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
Option B: Application Services located in each entry point's DLL.
In the first option, you benefit from code reuse when multiple entry points share the same use cases; the same goes for unit tests. However, you theoretically have to deploy an Application Service DLL that has too many features for some entry points.
In the second option, you have to duplicate code (and tests) in each entry point's DLL when they share the same use cases, but you theoretically keep control over infrastructure concerns like database transactions, which could differ depending on whether execution happens in a PowerShell cmdlet or in an API.
In my opinion, the real answer is a question of personal preference.
Does anyone with experience of both approaches (success or failure) have tips or recommendations?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
This is roughly what I would expect to see. You have three composition roots here, that should always share the same model (to ensure that all paths enforce the current business invariant) and the same book of record (if they don't share the same book of record, they really don't need to share anything at all).
In fact, I strongly suspect that you could separate these completely -- run "the model" in a "microservice", and deploy your three interfaces above that each uses a common service client DLL to talk to that core service.
You might, for instance, review the onion architecture. It aligns fairly closely with the image of a single DLL for the application services, with each of your composition roots using a different interface to adapt its own API to that of the model.
you theoretically have to deploy an Application Service DLL having too much features for some entry points.
That's so; there's a trade-off there. My guess is that in most deployments, shipping a single fat DLL is going to be more cost effective than trying to deploy multiple DLLs with different subsets of the same model.
Personally, I'd start with a fat microservice, a well designed API, and fat clients in each of the composition roots above, and then, if necessary, replace the fat clients with thinner, more specialized ones if the trade-offs support that choice.
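To make option A concrete, here is a hedged sketch, with entirely hypothetical names, of two thin composition roots (an ASP.NET Web API controller and a PowerShell cmdlet standing in for the entry points above) delegating to one shared application-service DLL:

```csharp
using System;
using System.Management.Automation;
using System.Web.Http;

// --- Shared Application Services DLL (option A) ---
public interface IUserRepository { User Get(Guid id); void Save(User user); }

public class User
{
    public Guid Id { get; set; }
    public bool Active { get; set; }
    public void Deactivate() { Active = false; }   // domain rule enforced in one place
}

public class DeactivateUserService
{
    private readonly IUserRepository _users;
    public DeactivateUserService(IUserRepository users) { _users = users; }

    public void Deactivate(Guid userId)
    {
        var user = _users.Get(userId);
        user.Deactivate();
        _users.Save(user);
    }
}

// --- Composition root 1: REST API for synchronous actions ---
public class UsersController : ApiController
{
    private readonly DeactivateUserService _service;
    public UsersController(DeactivateUserService service) { _service = service; }

    [HttpPost]
    public void Deactivate(Guid id) { _service.Deactivate(id); }
}

// --- Composition root 2: PowerShell cmdlet for administrative users ---
[Cmdlet(VerbsLifecycle.Disable, "User")]
public class DisableUserCmdlet : Cmdlet
{
    [Parameter(Mandatory = true)]
    public Guid UserId { get; set; }

    // Supplied here for brevity; a real cmdlet would resolve it from its own composition root.
    public IUserRepository Repository { get; set; }

    protected override void ProcessRecord()
    {
        new DeactivateUserService(Repository).Deactivate(UserId);
    }
}
```

Both roots stay thin; if the shared DLL ever becomes too heavy for one of them, that same seam is where you could cut over to the microservice-plus-client arrangement described above.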
Just to be sure I understand one of your points: are you suggesting that my domain (what you called "the model") should expose an API, and my different entry points (what you called "composition root") should call this API?
Yes, that's a fair description of the proposal, except I want to be more clear on the "should expose an API" part. The API should be explicit. That is to say, looking at the code, you should be able to point to a seam in your code where the separation of concerns happens:
This part is where the model lives
That part is where the specialization lives
Your option B (provided you make the seam explicit) is this idea within a single library. Your option A is this idea, with the seam as the interface between two libraries (still running in the same process). Microservices are this idea, with the two libraries running in different processes.
You get different trade-offs. For instance, if the model runs in a dedicated microservice, then (a) changing the model is "easy", because there's exactly one authority to swap out, (b) you now have the freedom to implement your specialized interfaces in any technology that can exchange messages with your domain service, and (c) you can also scale out the model independently of how you scale out the specializations.
But you also get additional complexity, in that you need to think more about the stability of the API when the client and server have independent deployment cycles.
I need to invoke WCF service 1 or WCF service 2, based on a condition evaluated at runtime. Both services are similar but hosted on different servers.
I have added two service references, NS1 and NS2, pointing to the different URLs. The current code already uses NS1, and this NS1 implementation is used in many places. What would be the best way to refactor the code so that it selects dynamically which service to invoke?
In general, it is considered bad practice to program directly against the proxy generated by svcutil.exe.
The best way is to wrap it in a class of your own and reference this class each time you require the service. This will also allow you to implement more advanced business logic such as routing (in your case) and other cross cutting concerns.
For example: you can now abstract from the application the strategy you are using to connect to the service, i.e. Service reference or ChannelFactory. You can easily share the service between different assemblies without ambiguity.
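A minimal sketch of such a wrapper, assuming an order-service-style contract and client class names generated by the two service references (all of these names are illustrative):

```csharp
using System;

// The rest of the code base depends only on this interface, never on NS1/NS2.
public interface IOrderServiceClient
{
    string GetOrderStatus(int orderId);
}

public class RoutingOrderServiceClient : IOrderServiceClient
{
    private readonly Func<bool> _usePrimary;   // the runtime condition, injected

    public RoutingOrderServiceClient(Func<bool> usePrimary)
    {
        _usePrimary = usePrimary;
    }

    public string GetOrderStatus(int orderId)
    {
        // Assumes both service references expose equivalent operations and reuse
        // the same data contract types; otherwise map the results here as well.
        // (In production code, handle faulted channels explicitly rather than
        // relying on Dispose alone.)
        if (_usePrimary())
        {
            using (var client = new NS1.OrderServiceClient())
                return client.GetOrderStatus(orderId);
        }
        using (var client = new NS2.OrderServiceClient())
            return client.GetOrderStatus(orderId);
    }
}
```

The condition itself (a config value, a feature flag, a tenant lookup, ...) stays outside the wrapper, which keeps the routing logic in one place.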
You say you have a lot of code written directly against NS1. Grind your teeth and wrap it; it is a lot of dirty work, but the risk is very low.
Having said that, I wonder about the requirement itself, where a service calls another instance of itself on another server (if I understood you correctly). This smells funny; what is the problem you are actually trying to solve?
Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com), so that customers can work with their data the same way we do: everything they can do through our provided web interface, they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
The problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services API. That way lies potential performance issues for your main website, never mind that as soon as you deploy a breaking API change you have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than that; in other words, it should simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
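For instance, the contract the service publishes might be a deliberately small DTO rather than the EF entity itself; the names below are purely illustrative:

```csharp
using System.Runtime.Serialization;

// What the web service exposes: a trimmed-down data contract, decoupled
// from the EF entity behind it.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string Email { get; set; }

    // No navigation properties, lazy-loading proxies or internal audit columns:
    // the EF model can change shape without breaking connected clients.
}
```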
It depends on the project's complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and keep being updated for the next 5 years...
In my current project (which is big), I started with shared entities across all layers, then I discovered that I needed separate entities for presentation, and now (6 months later) I'm using separate classes for each layer (persistence, service, domain, presentation). Not because I'm paranoid or following some rules; I just couldn't make everything work with a single set of classes across all layers... Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like AutoMapper and ValueInjecter.
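As a small illustration of the AutoMapper route (the class and property names are made up, and the configuration would normally be built once at startup rather than per call):

```csharp
using AutoMapper;

// Service-layer class and presentation-layer class kept separate.
public class OrderEntity
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public string InternalNotes { get; set; }   // stays inside the service layer
}

public class OrderViewModel
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public static class OrderMapping
{
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<OrderEntity, OrderViewModel>())
            .CreateMapper();

    public static OrderViewModel ToViewModel(OrderEntity entity)
    {
        // Copies the matching properties; InternalNotes is simply never exposed.
        return Mapper.Map<OrderViewModel>(entity);
    }
}
```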
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.
I need to create a project for multiple web services using WCF in C#. The web services will call other assemblies to perform the core processing, and those assemblies will access data from SQL Server. One of the parameters of every web service method will be the database to use. My problem is how to pass that database parameter to the assemblies. I can't change all the signatures of all the satellite assemblies; I want some kind of variable that the satellite assemblies can reference. These same satellite assemblies are also used by a Windows Forms app and an ASP.NET app, so I need something that all types of applications can use. Static fields are no good, since for one web service call the database could be "X" and for another it would be "Y". Any ideas?
This is the sort of thing that might play nicely with an IoC or DI framework: have some interface that carries the database information, and have it pushed into all the callers for you. Even without IoC, hiding the implementation behind an interface sounds like a solid plan.
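A hedged sketch of that interface-based approach (IDatabaseSelector and CustomerRepository are made-up names; the point is only that the satellite assemblies ask the interface instead of reading any global state):

```csharp
using System.Data.SqlClient;

// Lives in a small shared assembly the satellite assemblies already reference.
public interface IDatabaseSelector
{
    string ConnectionString { get; }
}

// Example consumer inside a satellite assembly: the database arrives through
// the constructor, so the same code works when hosted in WCF, WinForms or ASP.NET.
public class CustomerRepository
{
    private readonly IDatabaseSelector _database;

    public CustomerRepository(IDatabaseSelector database)
    {
        _database = database;
    }

    public int CountCustomers()
    {
        using (var connection = new SqlConnection(_database.ConnectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```

The WCF service would build an IDatabaseSelector per call from the incoming database parameter, while the Windows Forms and ASP.NET hosts supply their own implementations.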
With your static concept, a [ThreadStatic] field might work but is a little hacky (and you need to be religious about cleaning up the data between callers); another option is to squirrel some information away on the Principal, as this is relatively easily configured from both WCF (per call) and WinForms (typically per process). In either case, be careful about any thread switching (async, etc.). In particular, note that ASP.NET can change threads in the middle of a single page pipeline.
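If you do go the [ThreadStatic] route, a minimal sketch (again with invented names) might look like this, with the value scoped tightly to each service call:

```csharp
using System;

// Ambient per-thread database name; workable, but remember the caveats above:
// it must be cleared after every call and it does not survive thread switches.
public static class CurrentDatabase
{
    [ThreadStatic]
    private static string _name;

    public static string Name
    {
        get { return _name; }
    }

    // Usage inside a WCF operation:
    //   using (CurrentDatabase.Use(request.Database))
    //   {
    //       // satellite assemblies read CurrentDatabase.Name in here
    //   }
    public static IDisposable Use(string name)
    {
        _name = name;
        return new Scope();
    }

    private sealed class Scope : IDisposable
    {
        public void Dispose() { _name = null; }   // be religious about cleanup
    }
}
```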