What can replace Linq2Xsd? (C#)

We've been using Linq2Xsd for a decade. For enforcing centralised control over internal data structures and external interfaces we have found it invaluable. Our team size is about 10 developers in 3 sub-teams. We really like the way you can branch the code, edit the contract, assign it to a developer and give them a written description of the data contract changes and the required functionality.
However, it's getting to the point where the technologies we use are long in the tooth and we can't guarantee that they are going to continue functioning in the future.
However, we are not willing to give up the contract-first nature of our workflow. In our particular department it just works too well.
Our "back end" system consists of:
A Linq To Sql Data Access Layer, generated from the database (to be replaced with Devart's LinqConnect)
A Linq To Xsd powered Business Logic Layer, generated from the master contract XSD: both the public entry points and the public data structures. It also validates incoming XML against the XSD.
Many interface projects that pull the WSDL from the BLL and auto-generate a service, be it SOAP or REST.
Here are the use cases that need to be filled by one or more well-supported, or at least open source, add-ons to Visual Studio 2012 and/or Visual Studio 2017.
A method of creating a structured document, of a well-known type, that can be read at build time and is used to create the public static methods and public data classes exposed by the project.
Incoming data, whether a binary or text document, can be verified against said contract.
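For illustration, here is a minimal sketch of the second use case (verifying an incoming XML document against the contract XSD) using the framework's built-in XmlSchemaSet/XmlReader APIs; the schema file name and target namespace below are placeholders, not part of our actual setup.

```csharp
using System.Xml;
using System.Xml.Schema;

public static class ContractValidator
{
    // Validates an incoming XML document against the master contract XSD.
    // "Contracts/Master.xsd" and the namespace URI are hypothetical placeholders.
    public static void Validate(string xmlPath)
    {
        var schemas = new XmlSchemaSet();
        schemas.Add("http://example.com/contracts/v1", "Contracts/Master.xsd");

        var settings = new XmlReaderSettings
        {
            ValidationType = ValidationType.Schema,
            Schemas = schemas
        };
        settings.ValidationEventHandler += (sender, args) =>
            throw new XmlSchemaValidationException(args.Message, args.Exception);

        using (var reader = XmlReader.Create(xmlPath, settings))
        {
            // Validation happens as the document is read.
            while (reader.Read()) { }
        }
    }
}
```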


Do I have to really create multiple models?

MS stack developer historically.
I have committed to retooling to the following stack
angular -> ms web.api2 -> C# business objects -> sql server
Being old, I develop the database from requirements and use CodeSmith to generate the business logic layer. (Yes, I have heard of Entity Framework; I even tried it once.)
As I embrace Angular and Web API 2, I find that Angular wants me to write a model on the front end. This seems to be just a data structure; I can't even add helper methods to it.
So I also often write a class with helper methods that takes an instance of the model. Kind of ugly, but it does marry structure and logic.
I find that Web API 2 wants me to write a model. This again seems to be just a data structure. I am exploring the dynamic data type, but really this doesn't buy me much: instead of writing a class, I'm writing a mapping function.
The question is this:
Is there any way around having 3+ copies of each class spread across the stack?
CodeSmith is a very capable code generator... it can gen multiple files... but...
If it's just a couple of data members, and 3 places, I can copy, paste, edit and get it done.
It just seems to me that committing to keeping a data structure in sync across 3 different environments is setting oneself up for a lot of work.
I have spent the last 15 years trying to shove as much code as I can into a framework of inheritable classes so I can keep things DRY.
Am I missing something? Are there any patterns that can be suggested?
[I know this isn't a question tailored for SO, but it is where all the smart people shop. Downvote me if you feel honor bound to do so.]
Not entirely familiar with how CodeSmith generates its classes, but if they are just plain-old-CLR-objects that serialize nicely, you can have Web API return them directly to your Angular application. There are purists that will frown upon this, but depending on the application, there may be a justification.
Then, in the world of Angular, you have a few options, again depending on your requirements/justification and your application; again, purists will definitely frown upon some of the options.
Create classes that match what's coming down from the server (the more correct method).
Treat everything as "any", lose type safety, and just access properties as you need them, i.e. don't create the model (the obviously less correct method).
Find a code generation tool that will explore API endpoints to determine what they return and generate your TypeScript classes for you.
Personally, using Entity Framework, I (manually) create my POCOs for database interaction, have a "view"/DTO class that Web API then sends back to the client, and a definition of the object in TypeScript, but I am a control freak and don't like generated code.
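As a rough illustration of that manual approach, here is a minimal sketch of a hand-written entity, a DTO that Web API returns to the client, and an explicit mapping between them; the Customer/CustomerDto names and properties are hypothetical.

```csharp
using System;

// Hand-written EF entity (persistence model) - hypothetical shape.
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime CreatedUtc { get; set; }   // internal detail, not exposed to clients
}

// "View"/DTO class that Web API serializes back to the Angular client.
public class CustomerDto
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

public static class CustomerMappings
{
    // Explicit mapping keeps the wire contract independent of the database shape.
    public static CustomerDto ToDto(this Customer customer)
    {
        return new CustomerDto
        {
            Id = customer.Id,
            DisplayName = customer.FirstName + " " + customer.LastName
        };
    }
}
```

The TypeScript model on the Angular side would then mirror CustomerDto, or be generated from it by a tool as in the third option above.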

How to handle XML deserialization with a changing schema

We have a service that's receiving data in XML, and there's an accompanying namespace/schema definition that usually changes once a year, sometimes more.
The schema describes a very large object, and we only use a small portion of it, which has not changed in the at least two years I've been handling it. However, each schema change forces us to re-generate the C# classes, re-build and re-deploy the application.
It would be good to not have to touch the application unless there's a change in the parts that we use.
For a separate throwaway application that was set up with a certain namespace, I had the code replace the incompatible namespace with the compatible one and deserialize the data that way.
Is there a solution for this problem that's more elegant?
Edit: the data we receive is only a subset of the whole schema, which is why it's not a problem to deserialize it with the namespace replacement.
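For reference, a minimal sketch of that namespace-replacement approach, assuming XmlSerializer-generated classes built against an older namespace; the namespace URIs and the crude string replacement are illustrative only.

```csharp
using System.IO;
using System.Xml.Serialization;

public static class FlexibleDeserializer
{
    // Namespace the generated classes were built against (placeholder value).
    private const string ExpectedNamespace = "http://example.com/schema/2022";

    // Rewrites the incoming (newer) namespace to the expected one, then
    // deserializes only the subset of the document that our classes model.
    public static T Deserialize<T>(string xml, string incomingNamespace)
    {
        var normalized = xml.Replace(incomingNamespace, ExpectedNamespace);

        var serializer = new XmlSerializer(typeof(T));
        using (var reader = new StringReader(normalized))
        {
            return (T)serializer.Deserialize(reader);
        }
    }
}
```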

Adding similar Web Services to .NET C# application

I'm new to web services and I'm developing a C# WCF service that calls an external service from another company to get some client data (for example: name, address, phone, etc.); this part is working fine so far.
The external service is based on a standard XML Schema, and other companies will soon have the same service generated from the same XML Schema, using the same method names and returning the same type of XML file.
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URLs/ports/etc., or do I have to add each of them manually as a service reference in my internal service project every time I need to add a new one, then compile and re-deploy?
My second question is related to the data contracts/members. My understanding is that even if they are returning the same XML files, their data contracts/members will be different; is that true? So I'll have to write specific code to read the information I need from their data contracts for each new external company? If this is true, I have been thinking of writing generic code to read the raw XML; is this the best choice?
While C# is a compiled language, it does support a plugin architecture through MEF. You could use this and add a small plugin DLL for each of your sources.
That being said, it's quite possible that all you need is a configuration list containing connection details for each of your sources, connecting to them dynamically. That will only work if they're using the exact same schema, so that the objects they serve serialize the same for all sources. You will then have to instantiate the proxy dynamically through code using that configuration, of course.
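A rough sketch of that idea, assuming a shared WCF contract and BasicHttpBinding; the interface, data contract and URL handling here are hypothetical and would need to match the real schema.

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Shared contract - assumes every company exposes exactly this operation/schema.
[ServiceContract]
public interface IClientDataService
{
    [OperationContract]
    ClientData GetClientData(string clientId);
}

[DataContract]
public class ClientData
{
    [DataMember] public string Name { get; set; }
    [DataMember] public string Address { get; set; }
    [DataMember] public string Phone { get; set; }
}

public static class ClientDataServiceFactory
{
    // Builds a proxy for any configured source from its URL,
    // with no compile-time service reference.
    public static IClientDataService Create(string endpointUrl)
    {
        var binding = new BasicHttpBinding();
        var address = new EndpointAddress(endpointUrl);
        var factory = new ChannelFactory<IClientDataService>(binding, address);
        return factory.CreateChannel();
    }
}
```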
I should add something for your second question. As long as you're the one defining the contract, it doesn't matter if their actual objects are different. All you care about on your end is the XML they serve, and that you can connect using your representation. In fact, you can generate the contract as a .wsdl document. Each of the service implementers can then generate domain objects from that. On the other hand, if you're not the one "owning" the contract, some of the sources may decide to do it slightly differently, which will cause you a headache. Hopefully that's not your scenario though.
Best of luck! :)
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URLs/ports/etc.?
Unfortunately, yes, you will have to add the service reference, compile and deploy every time.
My second question is related to the data contracts/members; my understanding is that even if they are returning the same XML files, their data contracts/members will be different, is that true?
If you use auto-generated proxies, every service will create different contracts. I would think about creating your own class and converting the external classes using reflection and extension methods.
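A rough sketch of that conversion idea: a generic extension method that copies same-named, assignable properties from a generated proxy object onto your own class via reflection. This is purely illustrative; a real version would need null checks, nested objects and collection handling.

```csharp
using System.Linq;

public static class ContractMappingExtensions
{
    // Copies properties with matching names and compatible types from a
    // generated proxy object onto a new instance of your own model class.
    public static TTarget MapTo<TTarget>(this object source) where TTarget : new()
    {
        var target = new TTarget();

        foreach (var targetProperty in typeof(TTarget).GetProperties().Where(p => p.CanWrite))
        {
            var sourceProperty = source.GetType().GetProperty(targetProperty.Name);
            if (sourceProperty != null &&
                targetProperty.PropertyType.IsAssignableFrom(sourceProperty.PropertyType))
            {
                targetProperty.SetValue(target, sourceProperty.GetValue(source, null), null);
            }
        }

        return target;
    }
}
```

Usage would then look like `proxyObject.MapTo<MyClientData>()`, where MyClientData is your own class.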

What are the pros and cons of using classes generated from WCF vs creating your own model DLL?

As of now, my project relies heavily on WCF, which is linked to a database.
We use the classes generated from the database, which are ORM classes if you will, to do processing in our system.
I know that using DataSvcUtil we can easily extract all the classes and compile them as a DLL to be shared across our other systems.
But in our current project, we create another DLL which mirrors the WCF-generated table classes rather than using those classes directly.
So my question is: is there a best practice for this sort of thing? What are the pros and cons of these two methods? Are there other methods?
Thanks.
Updates:
It seems like the consensus is on creating your own custom classes rather than relying on those that are created by WCF.
I am currently following this method, and as of now I am just using extension methods: one to convert to the model and another to convert back to the generated type.
And having your own simpler class is good for extensibility and other stuff :)
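A minimal sketch of what those two extension methods might look like, with CustomerRecord standing in for the WCF-generated class and CustomerModel for the custom model; both names and their properties are hypothetical.

```csharp
// WCF/service-generated class (stand-in for the real generated type).
public class CustomerRecord
{
    public int CustomerId { get; set; }
    public string CustomerName { get; set; }
}

// Your own simpler model class, shared via the model DLL.
public class CustomerModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerConversions
{
    // Generated type -> model.
    public static CustomerModel ToModel(this CustomerRecord record)
    {
        return new CustomerModel { Id = record.CustomerId, Name = record.CustomerName };
    }

    // Model -> generated type, for sending data back through the service.
    public static CustomerRecord ToRecord(this CustomerModel model)
    {
        return new CustomerRecord { CustomerId = model.Id, CustomerName = model.Name };
    }
}
```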
I would suggest still using WCF, but using a compiled DLL as the client instead of a service reference. This way you can keep your interface consistent, even if you decide to change the database in the future. The pros of using a DLL:
As your service grows, users may occasionally start getting timeouts when trying to generate a service reference.
You will be safe from people having the wrong service reference. When generating a service reference, some properties can be changed, so users can generate a potentially dead service reference.
You will be protected from other IDEs generating slightly different references
It's a bit easier to be backwards compatible and to pinpoint problems, as you will be 100% sure that the way the client is used is the same across users.
Cons of using a DLL:
You will have an additional reference.
I'm not that familiar with WCF, but I use Linq To Sql, which I'm assuming generates the same types of classes (as does any ORM tool). I always create my own POCO classes that describe my domain model. I know there is a bit more work involved, and you are then tasked with mapping your POCO classes to your generated classes, but I find it the best way to keep my domain classes pure. The generated classes can be somewhat complex, with attributes describing the tables and columns which will be used to populate them. I like the generated classes because they make it easier for me to interact with the database, but I always like the separation of having the simple domain classes; it also gives me the flexibility to swap out database implementations.
It is better to have a separate DLL as you do in your current project; decoupling is a best practice. Generating the WCF DataContracts from the database is almost certainly not a good idea, however: it can be used for a first shot, but subsequent changes to your database should not be directly reflected in the web service.
One of the advantages of using WCF is that you can easily achieve decoupling through a service layer; if you were to distribute a DLL compiled in the way you describe, you would essentially be coupling all clients to your database representation.
Decoupling enables your ORM/database to be tweaked as necessary without all your clients having to re-compile.
On the con side, decoupling like this is a bit slower to implement up front, so it can be overkill for a very small project, but if you are working cross-team or in any way distributed then it is essential.

Is WSDL sort order relevant?

I'm refactoring an existing C# .NET Web Service that is consumed by existing Delphi 2006 (non-.NET) clients. I don't want to rebuild/redeploy the clients. My goal is to keep the WSDL identical so that the proxy classes won't change.
I used a tool (Regionerate) to region and sort the methods/properties based on our current standards. This changed the tag ordering in the WSDL.
I can use an XML diff tool to compare the files and ignore ordering, but I'm not sure if this will affect the clients. Is order of web methods or (to-be-proxy) class properties relevant?
The order should be totally irrelevant, for the methods in the WSDL as well as for the properties in the classes.
The only way I can imagine this affecting the clients would be if the clients didn't use standard libraries to consume the service, but did it by way of some custom-coded weirdness - and even then the implementer would have had to go some extra miles to introduce a dependency on the order ;)
