I'm new to web services, and I'm developing a C# WCF service that calls an external service from another company to get some client data (for example: name, address, phone, etc.). This part is working fine so far.
The external service is based on a standard XML Schema, and other companies will soon have the same service generated from the same XML Schema, using the same method names and returning the same type of XML file.
My first question: after I complete this first implementation, is there any way to add the other external companies' services “dynamically”, given their URL/port/etc.? Or do I have to add each of them manually as a service reference in my internal service project every time I need a new one, then compile and re-deploy?
My second question is related to the data contracts/members. My understanding is that even if they all return the same XML files, their data contracts/members will be different; is that true? If so, will I have to write specific code to read the information I need from each new external company's data contract? If this is true, I have been thinking of writing generic code to read the raw XML; is this the best choice?
While C# is a compiled language, it does support a plugin architecture through MEF. You could use this and add a small plugin .dll for each of your sources.
That being said, it's quite possible that all you need is a configuration list containing connection details for each of your sources, and to connect to them dynamically. That will only work if they're all using the exact same schema, so that the objects they serve serialize the same for every source. You will then have to instantiate the proxy dynamically in code from that configuration, of course.
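For illustration, here is a minimal sketch of that approach, assuming the agreed contract is expressed as a C# service contract interface; the interface name, operation, and partner URLs below are hypothetical:

using System;
using System.ServiceModel;

// Hypothetical shared contract; in practice it would be written once from the agreed XSD/WSDL.
[ServiceContract]
public interface IClientDataService
{
    [OperationContract]
    string GetClientData(string clientId);
}

class Program
{
    static void Main()
    {
        // Hypothetical partner endpoints, e.g. read from app.config or a database table.
        var endpoints = new[]
        {
            "http://partner-a.example.com/ClientDataService.svc",
            "http://partner-b.example.com/ClientDataService.svc"
        };

        var binding = new BasicHttpBinding();

        foreach (var url in endpoints)
        {
            // Create the proxy dynamically instead of adding one service reference per partner.
            // (Error handling/Abort() on failure is omitted to keep the sketch short.)
            var factory = new ChannelFactory<IClientDataService>(binding, new EndpointAddress(url));
            var channel = factory.CreateChannel();

            Console.WriteLine(channel.GetClientData("12345"));

            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}

Adding a new partner then means adding one more endpoint entry to the configuration, with no regenerated proxy code and no recompilation.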
I should add something for your second question. As long as you're the one defining the contract, it doesn't matter whether their actual objects are different. All you care about on your end is the XML they serve, and that you can connect using your representation. In fact, you can generate the contract as a .wsdl document; each of the service implementers can then generate domain objects from that. On the other hand, if you're not the one "owning" the contract, some of the sources may decide to do it slightly differently, which will cause you a headache. Hopefully that's not your scenario though.
Best of luck! :)
My first question: after I complete this first implementation, is there any way to add the other external companies' services “dynamically”, given their URL/port/etc.?
Unfortunately, yes: you will have to add each service reference, compile, and re-deploy every time.
My second question is related to the data contracts/members. My understanding is that even if they all return the same XML files, their data contracts/members will be different; is that true?
If you use the auto-generated proxies, every service will produce different contracts. I would think about creating your own class and converting the external classes to it using reflection and extension methods.
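As a rough sketch of that idea (the class and property names are hypothetical; any generated contract exposing the same property names would work):

using System;
using System.Linq;

// Your own neutral client class, independent of any generated proxy type.
public class ClientInfo
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
}

public static class ContractConversionExtensions
{
    // Copies matching public properties by name and type from any generated contract object.
    public static ClientInfo ToClientInfo(this object externalContract)
    {
        var result = new ClientInfo();
        var sourceProps = externalContract.GetType().GetProperties();

        foreach (var targetProp in typeof(ClientInfo).GetProperties())
        {
            var sourceProp = sourceProps.FirstOrDefault(p =>
                p.Name == targetProp.Name && p.PropertyType == targetProp.PropertyType);
            if (sourceProp != null)
                targetProp.SetValue(result, sourceProp.GetValue(externalContract, null), null);
        }
        return result;
    }
}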
We've been using Linq2Xsd for a decade. For enforcing centralised control of internal data structures and external interfaces, we found it invaluable. Our team size is about 10 developers in 3 sub-teams. We really like the way you can branch the code, edit the contract, assign it to a developer, and give them a written description of the DC changes and required functionality.
However, it's getting to the point where the technologies we use are long in the tooth and we can't guarantee that they are going to continue functioning in the future.
However, we're not willing to give up the contract-first nature of our workflow (in our particular department). It just works too well.
Our "back end" system consists of:
A Linq To Sql Data Access Layer, generated from the database (to be replaced with Devart's LinqConnect)
A Linq To Xsd powered Business Logic Layer, generated from the master contract XSD: both the public entry points and the public data structures. It also validates incoming XML against the XSD
Many interface projects that pull the WSDL from the BLL and auto-generate a service, be it SOAP or REST.
Here are the use cases that need to be filled by one or more well-supported (or at least open source) add-ons to Visual Studio 2012 and/or Visual Studio 2017.
A method of creating a structured document, of a well-known type, that can be read at build time and is used to create the public static methods and public data classes exposed by the project.
Incoming data, either binary or text document, can be verified against said contract.
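The validation half of this can be covered with the plain framework even without a dedicated add-on; a minimal sketch follows (the XSD path and class name are hypothetical, and build-time code generation is not addressed here):

using System.Xml;
using System.Xml.Schema;

public static class ContractValidator
{
    // Validates an incoming XML document against the master contract XSD.
    public static void Validate(string xmlPath)
    {
        var schemas = new XmlSchemaSet();
        schemas.Add(null, "MasterContract.xsd"); // placeholder path to the master XSD

        var settings = new XmlReaderSettings
        {
            ValidationType = ValidationType.Schema,
            Schemas = schemas
        };
        settings.ValidationEventHandler += (sender, args) =>
        {
            // Surface validation problems instead of silently continuing.
            throw new XmlSchemaValidationException(args.Message);
        };

        using (var reader = XmlReader.Create(xmlPath, settings))
        {
            while (reader.Read()) { } // validation errors throw in the handler above
        }
    }
}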
I have 3 projects in my solution.
A common class library named ReportBuilderLib
A WPF application named ReportClient that contains a service reference to a 3rd project -
A WCF web service which contains web methods for my application to call upon.
Initially, when setting up both the service and the application, I added the common library to the references of both projects so that I could use the classes I needed in both.
It quickly became clear that, in the process of generating the code to use the web methods in my client application, it was automatically importing certain namespaces that I had used in the service application.
This was giving me conflicting reference warnings, as they were effectively being imported from two separate sources.
When I then removed the reference to the library in my report client, I could see that VS was only importing one of the two namespaces my client requires, both of which are returned by methods in my ServiceContract!
Having looked at the generated code for the client, it seems to be re-creating the classes I have included in the library, providing only the public properties for access.
Is it possible to use libraries like this with WCF, or should I scrap the common library idea and simply create some data transfer classes on the service end?
You should be able to reference the common library on both ends, but it may be useful and less of a headache to implement data transfer classes like you suggested. Using special classes (or serialization like JSON) to send and receive data from the service would make it easier for you to re-use the service for multiple client projects.
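If you do keep the shared library, what lives in it is essentially plain data contracts like the sketch below (the type and member names are hypothetical). The key points are that both projects reference this one assembly, and that when adding the service reference you tick "Reuse types in referenced assemblies" so the generated client code does not duplicate these classes:

using System.Runtime.Serialization;

// Lives in the shared ReportBuilderLib assembly, referenced by both the
// WCF service and the WPF client.
[DataContract]
public class ReportRow
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Description { get; set; }
}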
Any time you decrease the coupling between layers of an application you make it easier to implement changes/upgrades in the future :)
As of now, my project relies heavily on WCF which is linked to a database.
We use the classes generated from the database, which are ORM classes if you will, to do processing in our system.
I know that using DataSvcUtil we can easily extract all the classes and compile them into a DLL to be shared across our other systems.
But in our current project, we create another DLL which mirrors the WCF-generated table classes rather than using those classes directly.
So my questions: is there a best practice for this sort of thing? What are the pros and cons of these two methods? Are there other methods?
thanks
Updates:
It seems like the consensus is on creating your own custom classes rather than relying on those that are created by WCF.
I am currently following this method, and as of now I am just using extension methods: one to convert to the model and another to convert it back to the generated type.
And having your own simpler class is good for extensibility and other stuff :)
I would suggest still using WCF, but using a compiled DLL as the client instead of a service reference (see the sketch after the lists below). This way you can keep your interface consistent, even if you decide to change the database in the future. The pros of using a DLL:
As your service grows, users may occasionally start getting timeouts when trying to generate the service reference
You will be safe from people having the wrong service reference. When generating a service reference some properties can be changed, so users can end up with a potentially dead service reference
You will be protected from other IDEs generating slightly different references
It's a bit easier to be backwards compatible and to pinpoint problems, as you will be 100% sure that the way the client is used is the same across users.
Cons of using a DLL:
You will have an additional reference to distribute and version
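As a rough sketch, the kind of thing that would go into that compiled DLL is the service contract plus a ready-made client (all names below are hypothetical):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Status { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int id);
}

// Consumers use this class directly instead of generating their own service reference.
public class OrderServiceClient : ClientBase<IOrderService>, IOrderService
{
    public OrderServiceClient(string endpointConfigurationName)
        : base(endpointConfigurationName) { }

    public OrderDto GetOrder(int id)
    {
        return Channel.GetOrder(id);
    }
}

Consumers then only need a matching endpoint entry in their config file; the contract itself is identical for everyone.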
I'm not that familiar with WCF-- but I use Linq To Sql which I'm assuming generates the same types of classes (as does any ORM tool). I always create my own POCO classes which describe my domain model. I know there is a bit more work involved-- and you are then tasked with mapping your POCO classes with your generated classes. But I find it the best way to keep my domain classes pure. The generated classes can be somewhat complex with attributes describing the tables and columns which will be used to populate them. I like the generated classes because they make it easier for me to interact with the database-- but I always like the separation of having the simple domain classes-- it also gives me the flexibility to swap out database implementations.
It is better to have a separate DLL as you do in your current project - decoupling is a best practice. Generating the WCF DataContracts from the database is almost certainly not a good idea, however; it can be used for a first shot, but subsequent changes to your database should not be directly reflected in the web service.
One of the advantages of using WCF is that you can easily achieve decoupling through a service layer; if you were to distribute a DLL compiled in the way you describe, you would essentially be coupling all clients to your database representation.
Decoupling enables your ORM/database to be tweaked as necessary without all your clients having to re-compile.
On the con side, decoupling like this is a bit slower to implement up front, so for a very small project it can be overkill; but if you are working cross-team or in any way distributed, then it is essential.
I've been asked to come up with a .net web service stub for multiple similar webservices which will:
implement create/read/update/delete/find for an arbitrary object.
hold persistent xml data for objects of that type.
Is there anything out there that does this job already or anything that can make the job of creating it easier?
You may look at WCF Data Services. As usual with Microsoft products, it is impossible to understand what it really does by looking at the front page, but it is essentially "query a database by specifying the query in the URL in a standardized format (OData)". It creates a WCF service as a front end, and you should be able to write your own data provider for XML files.
It is arbitrary in the sense that the technology is independent of the object type, but you need to create a schema for its specific datatypes.
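To make that concrete, a minimal sketch using the reflection provider over an XML file might look like this (the type names, file name, and XML layout are all hypothetical, and write support would additionally require implementing IUpdatable, which is omitted):

using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.Xml.Linq;

// Hypothetical entity type; [DataServiceKey] tells the reflection provider which property is the key.
[DataServiceKey("Id")]
public class Widget
{
    public string Id { get; set; }
    public string Name { get; set; }
}

// Read-only context backed by an XML file.
public class WidgetContext
{
    public IQueryable<Widget> Widgets
    {
        get
        {
            return XDocument.Load("widgets.xml")
                .Descendants("widget")
                .Select(x => new Widget
                {
                    Id = (string)x.Attribute("id"),
                    Name = (string)x.Element("name")
                })
                .AsQueryable();
        }
    }
}

// The OData front end: /Widgets, /Widgets('42'), ?$filter=... come from the framework.
public class WidgetDataService : DataService<WidgetContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Widgets", EntitySetRights.AllRead);
    }
}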
I am having trouble deciding between two possible design choices. I have a web site which has a pretty extensive business layer and DAL (website, bll, and dal are all in multiple separate dlls). I need to design a windows service that can take some of my business objects, write them to a file, and store them locally within our network. The files are then imported into a 3rd party program which does further processing on them.
I can design this service one of two ways:
Wrap the service around the business layer and DAL. This would be quick and easy but the downside is every time the business layer changes, the service will have to be updated.
Add a web service to the web site and just query the web service for what I need. The windows service wouldn't have to use the business layer and as long as the web service doesn't change, I'll be good. The only downside is that I may have to create some basic business objects to parse the web service's xml into.
The windows service will have to poll the business layer/dal or web service every 10-20 minutes or so. The windows service is necessary because the web site is hosted offsite and thus doesn't have access to any of our local resources. I am leaning towards option 2 but I'm torn.
Given the two choices, which is the better option? Are there other possible options that I haven't considered? Also, how do you usually design for situations where you have one core set of libraries that are primarily used by a website but may end up being used either for data retrieval or to perform some function?
I'm not sure what the criteria are for storing certain business objects as files on the network, but if you're doing this on a regular basis then presumably you are trying to track changes of some kind, so there is another solution: build the logic directly into the business/persistence layer.
If this secondary file storage is a business requirement, then it ought to be embedded directly in that tier and triggered by some sort of event. That way, instead of having what is essentially an ad-hoc post-processing job that can get out of sync with the rest of the system, you have just one coherent system.
Invert the design - instead of wrapping a web service around the business services and using it for ad-hoc reporting, create a web service that encapsulates the data you need to receive from the export on a regular basis, and have your business tier send messages to it when new data is ready. You can send messages asynchronously so as not to tie up the business services, and depending on your reliability requirements you could set up a message queue (it's easier than it sounds, WCF already knows how to use MSMQ as the delivery mechanism, it's just a few configuration settings to change).
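A rough sketch of the push side, assuming a queued binding (the contract, queue name, and address below are hypothetical):

using System.ServiceModel;

// One-way export contract; the business tier fires it and does not wait for the windows service.
[ServiceContract]
public interface IExportSink
{
    [OperationContract(IsOneWay = true)]
    void PublishExport(string exportXml);
}

public static class ExportPublisher
{
    public static void Publish(string exportXml)
    {
        // NetMsmqBinding queues the message, so delivery still works
        // even if the consuming windows service is temporarily offline.
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
        var address = new EndpointAddress("net.msmq://localhost/private/ExportQueue");

        var factory = new ChannelFactory<IExportSink>(binding, address);
        var channel = factory.CreateChannel();
        channel.PublishExport(exportXml);
        ((IClientChannel)channel).Close();
        factory.Close();
    }
}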
I can't say with any certainty that this is better than your first two options without knowing a good deal more about the architecture, the amount and type of data, the scheduling and reporting requirements, etc., but it is something you should consider. If you think that your business services are likely to change fairly frequently, then it might work better to have them push data outward to a "warehouse" type abstraction rather than having a mining process pull it.
Otherwise, I think I would go with option 2. I don't know if you've worked with WCF services before but you should know that you never actually have to parse XML. Everything is done through data contracts and when you generate a proxy for the web service, you get strongly-typed .NET objects. If you can pass your domain objects directly through the service API then it's really very little work at all to create the web service.
The real downside to a web service is that you have to take steps to ensure that your service contract never substantially changes (otherwise it can break clients). So you might eventually end up needing to create Data Transfer Objects on the service side to use as the public API instead of passing through domain objects. But in many cases you won't need to do this for a good long while, so go ahead and try it out, you'll see that it's pretty straightforward.
A variant of option two:
Add a WCF service to the site, exposing the information required as basic DTO DataContracts.
You could use AutoMapper or similar within the WCF service to handle the boring bit of converting your business objects to DTOs.
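For example, a minimal sketch with AutoMapper (the business object, DTO, and property names are hypothetical, and the exact configuration API varies between AutoMapper versions):

using AutoMapper;
using System.Runtime.Serialization;

public class Customer            // existing business object
{
    public int Id { get; set; }
    public string Name { get; set; }
}

[DataContract]
public class CustomerDto         // what the WCF service exposes
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public static class DtoMapping
{
    // Same-named properties are mapped by convention; no hand-written copy code.
    private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<Customer, CustomerDto>();
    }).CreateMapper();

    public static CustomerDto ToDto(Customer customer)
    {
        return Mapper.Map<CustomerDto>(customer);
    }
}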
From your point two I understand that you would just add the web API for this extra service. Thus, you would have to update three parts for any changes (extra service, web API, dll). With option one you would only have to update two parts (extra service, dll), so I would go with option one.
BUT if you are aiming for a general web API which you will have to maintain anyway, go with option two.
For more flexibility, instead of hard-wrapping your service around the business layer and DAL, and instead of relying on the web site (through an integrated web service), make use of design concepts like interfaces, dynamic type loading, and Inversion of Control, so your service is a thin, decoupled layer that communicates with the business layer and DAL and allows for dynamic updates of both without recompiling the service. Maybe put the assemblies in the Global Assembly Cache of the machine so they can be shared across various other project assemblies and apps.
I know it seems like throwing out jargon for the sake of it but that's how I would start to think.
Edit:
Loading types dynamically is actually amazing and easy. Here is quick C# pseudo-code for one way of doing it; even without testing, it might actually be right.
// Get a System.Type from string representation
Type t = Type.GetType("type name");
// Create instance of type.
object o = Activator.CreateInstance(t);
// Cast it to the interface (or actual Type) you're working with.
IMyInterface strongObject = (IMyInterface)o;
// ... and continue from there with the instance.
Instructions about how to formulate the string representation of a type name can be found in MSDN under Type.AssemblyQualifiedName, Type.GetType and similar places. In short you can see a lot of assembly qualified type names in the app.config or web.config files because they use the same format.