We have a service that's receiving data in XML, and there's an accompanying namespace/schema definition that usually changes once a year, sometimes more.
The schema describes a very large object, but we only use a small portion of it, and that portion has not changed in the two-plus years I've been handling it. However, each schema change forces us to regenerate the C# classes, rebuild, and redeploy the application.
It would be good to not have to touch the application unless there's a change in the parts that we use.
For a separate throwaway application that was set up with a certain namespace, I had the code replace the incompatible namespace with the compatible one and deserialize the data that way.
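Roughly, that workaround looks like the sketch below; the namespace URIs and the MyPayload class are placeholders, not the real schema types:

using System.IO;
using System.Xml.Serialization;

static class IncomingXmlReader
{
    // Swap the sender's (newer) namespace for the one our generated classes
    // were built against, then deserialize the subset we actually use.
    public static MyPayload Deserialize(string rawXml)
    {
        string patched = rawXml.Replace(
            "http://example.com/schema/v2",    // namespace in the incoming data (assumed)
            "http://example.com/schema/v1");   // namespace our generated classes expect (assumed)

        var serializer = new XmlSerializer(typeof(MyPayload));
        using (var reader = new StringReader(patched))
        {
            return (MyPayload)serializer.Deserialize(reader);
        }
    }
}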
Is there a solution for this problem that's more elegant?
Edit: the data we receive covers only a subset of the whole schema, which is why deserializing it after the namespace replacement isn't a problem.
We've been using Linq2Xsd for a decade. For enforcing centralised control over internal data structures and external interfaces, we have found it invaluable. Our team is about 10 developers in 3 sub-teams. We really like the way you can branch the code, edit the contract, assign it to a developer, and give them a written description of the DC changes and the required functionality.
However, it's getting to the point where the technologies we use are long in the tooth and we can't guarantee that they are going to continue functioning in the future.
However, we are not willing to give up the contract-first nature of our workflow. In our particular department, it just works too well.
Our "back end" system consists of:
A Linq To Sql Data Access Layer, generated from the database (to be replaced with Devart's LinqConnect)
A Linq To Xsd powered Business Logic Layer, generated from the master contract XSD: both the public entry points and the public data structures. It also validates incoming XML against the XSD
Many interface projects that pull the WSDL from the BLL and auto-generate a service, be it SOAP or REST.
Here are the use cases that need to be filled by one or more well-supported, or at least open-source, add-ons to Visual Studio 2012 and/or Visual Studio 2017.
A method of creating a structured document, of a well-known type, that can be read at build time and is used to create the public static methods and public data classes exposed by the project.
Incoming data, whether a binary or a text document, can be verified against said contract.
I am working on a project that consumes (external) services.
The vendor has provided a whole heap of XSDs (89 of them) and I want to convert them all into .NET (C#) classes / a class library.
I am using the XSD utility on these, but as there is a lot of cross-referencing and importing, it is failing with error messages saying type 'xxxxx' is not declared.
Now, based on my googling, this is quite simply overcome by compiling the complete reference "tree", but ....
I have 89 files to convert
It concatenates all the schema names together for the output .cs file name (and breaks because the name is too long (> 260 chars))
I thought about creating a class library assembly, starting with the base level schemas (ones without imports) and then telling XSD to convert a schema but use any referenced types from this assembly... but I am not sure how or even if it is possible.
So, how can I best do this, please? Any advice is welcome.
And yes, 89 schemas is a lot; unfortunately, I have no control over this, I just have to suck it up and deal with it.
You can use the /P[arameters]:file.xml option of xsd.exe to specify many parameters in a separate file instead of passing them on the command line.
A sample of this XML:
<xsd xmlns='http://microsoft.com/dotnet/tools/xsd/'>
  <generateClasses language='CS' namespace='MyNamespace'>
    <schema>FirstSchema.xsd</schema>
    <schema>SecondSchema.xsd</schema>
    <schema>ThirdSchema.xsd</schema>
  </generateClasses>
</xsd>
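If you save this as, say, Schemas.xml (the name is just an example), the whole batch can then be generated with a single call such as xsd.exe /parameters:Schemas.xml.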
I need to develop a web service that will be exposed to a Java client over SOAP. We have a well-defined schema in place that we use to communicate between the two systems. Now I need to expose an operation on my WCF contract that takes the schema object and stores it in our DB.
Here are the steps I followed to develop the web service:
Host it over BasicHttp in WCF
Create an object model of the schema using xsd.exe
Take the schema as a parameter on the operation, something like DoThis(SchemaObject schema)
Since this is going to be exposed in WCF, I went and modified the object model generated by the xsd tool. Our schema has multiple levels of nesting and is a combination of 4 different schemas linked together. The object graph generated by the xsd tool has abstract classes, inheritance, etc.
For this to work, I defined the DataContract attribute on every class and added the namespace to it, which was already there in the XmlTypeAttribute. I also added DataMember to each property.
Some of the properties in the schema are arrays, which the tool defined using the XmlArrayItem attribute.
Now when I send a request using SoapUI, the object is not getting deserialized as expected. Almost all the fields that involve some sort of inheritance hierarchy are coming back as null. I have added the KnownType attribute to the appropriate data contracts, but it is still not working.
My questions are:
Is this the right way to develop a web service?
Is there a way to avoid adding the DataContract and DataMember attributes and just work with the serialization attributes added by the xsd tool?
Is it necessary to use the DataContract attribute? Will it not work with the XML serialization attributes, as it does in the case of plain XML deserialization?
WCF supports two types of serialization - DataContractSerializer and XmlSerializer.
xsd.exe generates strongly typed entities with all the necessary XmlSerializer attributes. You do not need to add any DataContract or DataMember attributes to work with the generated classes in WCF.
See MSDN for more details - http://msdn.microsoft.com/en-us/library/ms733901.aspx
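As a rough sketch of wiring this up (SchemaObject stands in for the unmodified root class that xsd.exe generated), marking the contract with XmlSerializerFormat tells WCF to use the XmlSerializer, so the generated XmlType/XmlElement attributes are honoured as-is:

using System.ServiceModel;

[ServiceContract]
[XmlSerializerFormat]   // use XmlSerializer instead of DataContractSerializer for this contract
public interface ISchemaService
{
    [OperationContract]
    void DoThis(SchemaObject schema);   // SchemaObject: the untouched xsd.exe-generated class
}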
Also be very careful with the entities generated by xsd.exe. As you have probably already seen, the WCF server will tolerate many of the serialization changes you can make in these files, but they will be breaking changes for clients, because the clients rely on the XSD.
If possible, I would leave these auto-generated entities unchanged to guarantee that the interface is not broken. You can introduce separate DTO classes for use in the business layer and implement the inheritance hierarchy there.
A set of unit tests can help if you feel the auto-generated classes need to be changed. These test cases should generate different data sets, serialize them into XML, and validate that XML against the XSD.
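For example, a minimal sketch of such a check (the schema namespace, file path and MyEntity are assumptions):

using System.IO;
using System.Xml.Linq;
using System.Xml.Schema;
using System.Xml.Serialization;

static class ContractTests
{
    // Serialize an entity with XmlSerializer and validate the result against the XSD.
    public static bool SerializesToValidXml(MyEntity entity)
    {
        var writer = new StringWriter();
        new XmlSerializer(typeof(MyEntity)).Serialize(writer, entity);

        var schemas = new XmlSchemaSet();
        schemas.Add("http://example.com/contract", "Contract.xsd");   // assumed namespace and path

        bool valid = true;
        XDocument.Parse(writer.ToString()).Validate(schemas, (sender, args) => valid = false);
        return valid;
    }
}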
Technically, I don't see any particular flaw in the way you are implementing the service.
But from an architectural point of view, it's too complicated for my taste. It's always easier to send 'flat' data structures and hide the complexity somewhere else.
I would suggest the following steps:
Develop a special 'transport' schema, flattened as much as possible. That makes changing the service easier when your model changes, and it also makes the XSD less painful to generate and work with.
Code special transformers on both sides of the channel to translate the normal model to the 'flat' one and vice versa.
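For illustration, a rough sketch of both pieces; FlatClient and the nested Client/Address types are only stand-ins for your real model:

public class FlatClient
{
    public string Name { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string Phone { get; set; }
}

public static class ClientTranslator
{
    // Collapse the nested model into the flat transport shape...
    public static FlatClient ToFlat(Client c)
    {
        return new FlatClient
        {
            Name = c.Name,
            Street = c.Address.Street,
            City = c.Address.City,
            Phone = c.Phone
        };
    }

    // ...and rebuild the nested model on the other side of the channel.
    public static Client FromFlat(FlatClient f)
    {
        return new Client
        {
            Name = f.Name,
            Address = new Address { Street = f.Street, City = f.City },
            Phone = f.Phone
        };
    }
}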
I'm new to web services, and I'm developing a C# WCF service that calls an external service from another company to get some client data (for example: name, address, phone, etc.). This part is working fine so far.
The external service is based on a standard XML Schema, and other companies will soon have the same service generated from the same XML Schema, using the same method names and returning the same type of XML file.
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URL/port/etc. information, or do I have to add each of them manually as a service reference in my internal service project every time I need a new one, then compile and re-deploy?
My second question is related to the data contracts/members. My understanding is that even if they return the same XML files, their data contracts/members will be different; is that true? Will I have to write specific code to read the information I need from the data contracts of each new external company? If this is true, I have been thinking of writing generic code to read the raw XML; is this the best choice?
While C# is a compiled language, it does support plugin architectures through MEF. You could use this and add a small plugin .dll for each of your sources.
That being said, it's quite possible that all you need is a configuration list containing the connection details for each of your sources, connecting to them dynamically. That will only work if they're using the exact same schema, so that the objects they serve serialize the same way for all sources. You will then have to instantiate the proxy dynamically in code using that configuration, of course.
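For illustration, a sketch of that dynamic instantiation, assuming a shared contract interface; IClientDataService, ClientData and GetClientData are hypothetical names standing in for whatever the common schema/WSDL gives you:

using System.Collections.Generic;
using System.ServiceModel;

static class CompanyServiceCaller
{
    // Call the same contract at each configured address; no per-company service reference needed.
    public static IEnumerable<ClientData> QueryAll(IEnumerable<string> endpointUrls, string clientId)
    {
        foreach (var url in endpointUrls)   // e.g. read from app.config or a database
        {
            var factory = new ChannelFactory<IClientDataService>(
                new BasicHttpBinding(),
                new EndpointAddress(url));

            IClientDataService proxy = factory.CreateChannel();
            yield return proxy.GetClientData(clientId);   // operation name is assumed

            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}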
I should add something for your second question. As long as you're the one defining the contract, it doesn't matter if their actual objects are different. All you care about on your end is the XML they serve and whether you can connect using your representation. In fact, you can publish the contract as a .wsdl document, and each of the service implementers can then generate domain objects from that. On the other hand, if you're not the one "owning" the contract, some of the sources may decide to do things slightly differently, which will cause you a headache. Hopefully that's not your scenario, though.
Best of luck! :)
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URL/port/etc. information?
Unfortunately, yes: you will have to add each service reference manually, compile, and deploy every time.
My second question is related to the data contracts/members. My understanding is that even if they return the same XML files, their data contracts/members will be different; is that true?
If you use auto-generated proxies, every service will create different contracts. I would think about creating your own class and converting the external classes using reflection and extension methods.
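A rough sketch of that idea; ClientInfo and its properties are assumptions, and the conversion simply copies same-named, compatible properties from whichever generated proxy type a given company's service reference produced:

using System.Reflection;

public class ClientInfo
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
}

public static class ProxyConversions
{
    // Map any external proxy object onto our own ClientInfo by property name.
    public static ClientInfo ToClientInfo(this object externalClient)
    {
        var result = new ClientInfo();
        foreach (PropertyInfo target in typeof(ClientInfo).GetProperties())
        {
            PropertyInfo source = externalClient.GetType().GetProperty(target.Name);
            if (source != null && target.PropertyType.IsAssignableFrom(source.PropertyType))
                target.SetValue(result, source.GetValue(externalClient, null), null);
        }
        return result;
    }
}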
So, I'm trying to take an .xsd file (the fixed MusicXML standard), create an object class, use portions of it - specifically the note object - include it in a graph object, and then save both the graph object and a MusicXML-validated file.
All in all, the solutions I'm using have one or two massively breaking shortcomings.
Xsd2Code - Creates the file, but for some reason it makes an Items collection (of the type I need, ObservableCollection) and then an enumerable ItemsChoiceType[0-9] ObservableCollection. The problem with the enumerable is that after it's generated, I either have to switch the latter to an array or do mumbo-jumbo with the XmlSerialisation attributes. It also generates a 2 MB .cs file, so a lot of autogenerated code, and I'd need a crapton of .extend.cs files to get it to fit. Maybe I have to change some switches for it to work? What switches fix this?
LinqToXsd / OpenLinqToXsd - Generates the file, hard-codes a reference to a DLL file, then forces you to use List (no option to switch to ObservableCollection), which doesn't have EditItem and can't be used for binding in WPF/XAML. Otherwise, a bunch more .extend.cs files.
Altova C# generator - Expensive, requires including a bunch of their DLLs in the project, messy.
Long story short, has anyone used any of these systems successfully, and what did you have to do to shoehorn them in? What kind of pain will I have to deal with beyond the issues I'm already having?
I remember now for XSD.exe: the XSD notation doesn't export, and individual classes (such as 'note') don't serialise out to XML on their own. I would have to write out the entire thing, from scorepartwise through every piece in between, which means I can't serialise a graph object that has 'note's as vertices.