I'm refactoring an existing C# .NET web service that is consumed by Delphi 2006 (non-.NET) clients, and I don't want to rebuild/redeploy those clients. My goal is to keep the WSDL identical so that the proxy classes won't change.
I used a tool (Regionerate) to region and sort the methods/properties based on our current standards. This changed the tag ordering in the WSDL.
I can use an XML diff tool to compare the files and ignore ordering, but I'm not sure whether the reordering will affect the clients. Is the order of web methods or (to-be-proxy) class properties relevant?
The order should be totally irrelevant, both for the methods in the WSDL and for the properties in the classes.
The only way I can imagine this affecting the clients is if they didn't use standard libraries to consume the service, but did it by way of some custom-coded weirdness - and even then the implementer would have had to go some extra miles to introduce a dependency on the order ;)
I'm new to web services and I'm developing a C# WCF service that calls an external service from another company to get some client data (for example: name, address, phone, etc.). This part is working fine so far.
The external service is based on a standard XML Schema, and other companies will soon have the same service generated from the same XML Schema, using the same method names and returning the same type of XML file.
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URL/port/etc. information, or do I have to add each of them manually as a service reference in my internal service project every time I need a new one, then compile and re-deploy?
My second question is related to the data contracts/members. My understanding is that even if the services return the same XML files, their data contracts/members will be different - is that true? If so, will I have to write specific code for each new external company to read the information I need from its data contracts? If that is the case, I have been thinking of writing generic code to read the raw XML - is that the best choice?
While C# is a compiled language, it does support a plugin architecture through MEF (the Managed Extensibility Framework). You could use this and add a small plugin .dll for each of your sources.
That being said, it's quite possible that all you need is a configuration list containing the connection details for each of your sources, connecting to them dynamically. That will only work if they all use the exact same schema, so that the objects they serve serialize identically for every source. You will then have to instantiate the proxy dynamically in code from that configuration, of course.
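As a minimal sketch of that dynamic approach - the IClientDataService contract, the GetClientData operation, and the URLs below are all hypothetical stand-ins for whatever the shared XML Schema actually defines:

    using System.ServiceModel;

    // Hypothetical shared contract; in practice this would be generated
    // once from the common XML Schema / WSDL that all companies implement.
    [ServiceContract]
    public interface IClientDataService
    {
        [OperationContract]
        string GetClientData(string clientId);
    }

    public static class ClientDataAggregator
    {
        public static void QueryAllCompanies()
        {
            // Endpoint addresses come from configuration, not service references,
            // so adding a company is a config change rather than a re-deploy.
            string[] endpointUrls =
            {
                "http://companyA.example.com/ClientDataService.svc",
                "http://companyB.example.com/ClientDataService.svc"
            };

            foreach (string url in endpointUrls)
            {
                var factory = new ChannelFactory<IClientDataService>(
                    new BasicHttpBinding(), new EndpointAddress(url));

                IClientDataService proxy = factory.CreateChannel();
                string xml = proxy.GetClientData("12345");
                // ...process the returned XML...
                factory.Close();
            }
        }
    }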
I should add something for your second question. As long as you're the one defining the contract, it doesn't matter if their actual objects are different. All you care about on your end is the XML they serve, and that you can connect using your representation. In fact, you can generate the contract as a .wsdl document; each of the service implementers can then generate domain objects from that. On the other hand, if you're not the one "owning" the contract, some of the sources may decide to do things slightly differently, which will cause you a headache. Hopefully that's not your scenario, though.
Best of luck! :)
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URL/port/etc. information?
Unfortunately, yes: you will have to add each service reference, compile, and re-deploy every time.
My second question is related to the data contracts/members; my understanding is that even if they return the same XML files, their data contracts/members will be different. Is that true?
If you use auto-generated proxies, every service reference will create its own set of contracts. I would think about creating your own class and converting the external classes to it using reflection and extension methods.
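A rough sketch of that idea - ClientDtoA stands in for one auto-generated proxy type, and Client is your own class (all names hypothetical):

    // Stand-in for a proxy type that a service reference would generate.
    public class ClientDtoA
    {
        public string Name { get; set; }
        public string Address { get; set; }
        public string Phone { get; set; }
    }

    // Your own class, independent of any generated namespace.
    public class Client
    {
        public string Name { get; set; }
        public string Address { get; set; }
        public string Phone { get; set; }
    }

    public static class ClientMappingExtensions
    {
        // Explicit extension method, one per generated type...
        public static Client ToClient(this ClientDtoA dto) =>
            new Client { Name = dto.Name, Address = dto.Address, Phone = dto.Phone };

        // ...or one reflection-based mapper that copies same-named,
        // same-typed properties from any generated type.
        public static Client ToClientByReflection(this object dto)
        {
            var client = new Client();
            foreach (var target in typeof(Client).GetProperties())
            {
                var source = dto.GetType().GetProperty(target.Name);
                if (source != null && source.PropertyType == target.PropertyType)
                    target.SetValue(client, source.GetValue(dto, null), null);
            }
            return client;
        }
    }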
As of now, my project relies heavily on WCF, which is linked to a database.
We use the classes generated from the database - ORM classes, if you will - to do processing in our system.
I know that, using DataSvcUtil, we can easily extract all the classes and compile them into a DLL to be shared across our other systems.
But in our current project, we create another DLL that mirrors the WCF-generated table classes rather than using those classes directly.
So my questions are:
- Is there a best practice for this sort of thing?
- What are the pros and cons of these two methods?
- Are there other methods?
thanks
Updates:
It seems like the consensus is to create your own custom classes rather than rely on those created by WCF.
I am currently following this method, and as of now I'm just using extension methods - one to convert to the model and another to convert it back to the generated type - as sketched below.
And having your own simpler class is good for extensibility and other stuff :)
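For illustration, that round trip might look like this (OrderEntity stands in for the WCF-generated type, Order for the custom model; both names are made up):

    // Stand-ins: OrderEntity mimics the WCF-generated class,
    // Order is the hand-written model used inside the system.
    public class OrderEntity { public int Id { get; set; } public decimal Total { get; set; } }
    public class Order       { public int Id { get; set; } public decimal Total { get; set; } }

    public static class OrderMappingExtensions
    {
        // Generated type -> custom model.
        public static Order ToModel(this OrderEntity entity) =>
            new Order { Id = entity.Id, Total = entity.Total };

        // Custom model -> generated type, for sending back through the service.
        public static OrderEntity ToEntity(this Order model) =>
            new OrderEntity { Id = model.Id, Total = model.Total };
    }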
I would suggest still using WCF, but consuming the service through a compiled dll client instead of a service reference (a sketch of such a client dll follows the lists below). This way you can keep your interface consistent even if you decide to change the database in the future. The pros of using a DLL:
As your service grows, users may occasionally start getting timeouts when trying to generate a service reference
You will be safe from people having a wrong service reference. When a service reference is generated, some properties can be changed, so users can end up with a potentially dead service reference
You will be protected from other IDEs generating slightly different references
It's a bit easier to stay backwards compatible and to pinpoint problems, as you will be 100% sure that the client is used the same way by all users.
Cons of using a DLL:
You will have an additional reference to distribute
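A sketch of what such a compiled client dll might contain (IOrderService, OrderServiceClient, and Order are illustrative names):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Everything below ships as one compiled dll that consumers reference
    // instead of generating their own service reference.
    [DataContract]
    public class Order
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        Order GetOrder(int id);
    }

    public class OrderServiceClient : ClientBase<IOrderService>, IOrderService
    {
        public OrderServiceClient(string endpointUrl)
            : base(new BasicHttpBinding(), new EndpointAddress(endpointUrl)) { }

        public Order GetOrder(int id) => Channel.GetOrder(id);
    }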
I'm not that familiar with WCF - but I use LINQ to SQL, which I assume generates the same kinds of classes (as does any ORM tool). I always create my own POCO classes that describe my domain model. I know there is a bit more work involved - you are then tasked with mapping your POCO classes to your generated classes - but I find it the best way to keep my domain classes pure. The generated classes can be somewhat complex, with attributes describing the tables and columns that will be used to populate them. I like the generated classes because they make it easier for me to interact with the database - but I always like the separation of having the simple domain classes - it also gives me the flexibility to swap out database implementations.
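To illustrate the contrast (table and column names are made up), here is a LINQ to SQL style generated class next to the plain domain class that mirrors it:

    using System.Data.Linq.Mapping;

    // Generated-style class: mapping attributes tie it to the database schema.
    [Table(Name = "dbo.Customers")]
    public class CustomerEntity
    {
        [Column(IsPrimaryKey = true)]
        public int Id { get; set; }

        [Column(Name = "CustomerName")]
        public string Name { get; set; }
    }

    // Pure POCO domain class: no persistence details, so the database
    // implementation can be swapped without touching it.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }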
It is better to have a separate dll, as you do in your current project - decoupling is a best practice. Generating the WCF DataContracts from the database is almost certainly not a good idea, however - it can be used for a first shot, but subsequent changes to your database should not be directly reflected in the web service.
One of the advantages of using WCF is that you can easily achieve decoupling through a service layer; if you were to distribute a dll compiled in the way you describe, you would essentially be coupling all clients to your database representation.
Decoupling enables your ORM/database to be tweaked as necessary without all your clients having to re-compile.
On the con side, decoupling like this is a bit slower to implement up front, so it can be overkill for a very small project - but if you are working cross-team or in any way distributed, it is essential.
I have a desktop C# app that I want to split into two parts - server part and client part. My app is already split into two very independent parts that communicate by exchanging some (complex!) objects.
If I want to put one part of my app on some web server, what kind of technology should I use for passing those custom complex objects between the server part and the client part? I was thinking about WCF, but... I'm not sure that WCF can easily handle (send/receive) custom objects (composed of many other custom objects). And I don't think I need WCF, because I'm not planning to offer my service to any third party, and I'm not planning to port my client app to another OS...
That's why I'm confused and need your help: what kind of remoting technology should I use in my case?
WCF stands for Windows Communication Foundation. In other words, it's about general cross-process/cross-machine communication, and not limited to heterogeneous systems.
One thing to remember about WCF is that, despite appearances, you are not actually passing objects at all - the objects are used by a serializer to generate messages. At the other end the messages are deserialized into an independent copy. Unlike COM, you don't get back a reference to an object on the sender's side.
The reason this is important is that if the complex objects have non-serializable state, such as a socket connection, then that state won't make it to the receiver's side.
Also, with the DataContractSerializer (which is the default), unless your objects are annotated with the [Serializable] attribute, or you annotate the classes with [DataContract] and their members with [DataMember], you will only be sending state that is exposed publicly (via a public field or a property).
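A small sketch of what that means in practice (the Person class and its members are hypothetical):

    using System.Net.Sockets;
    using System.Runtime.Serialization;

    [DataContract]
    public class Person
    {
        [DataMember]
        public string Name { get; set; }        // opted in: goes into the message

        [DataMember]
        private int score;                      // private, but still serialized
                                                // because it carries [DataMember]

        public Socket Connection { get; set; }  // no [DataMember]: silently left
                                                // behind - and as non-serializable
                                                // state it couldn't survive the trip
    }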
This isn't a problem unique to WCF; .NET Remoting requires objects to derive from MarshalByRefObject or be annotated with the [Serializable] attribute. Building distributed systems is quite different from building systems where everything shares the same memory address space. You have to think carefully about how you define the boundary between the distributed pieces because, for example, lots of small calls will kill your performance, whereas a few data-rich calls won't (although from your description this might not be an issue that affects you).
So WCF can handle arbitrarily complex object graphs; just remember the above points about serialization.
Well, DataContracts in WCF support complex objects, so I don't see a problem with that (how complex are your objects?); however, you should use the technology that is sufficient for your case. You could use Remoting - hell, even raw sockets - but in almost all cases that is overkill and goes too low in the .NET stack for nothing; you would just be wasting your time on the implementation.
If you have no reason against WCF, I would go that way, because it is very simple and powerful. There are also standard ASP.NET ASMX web services if you'd like.
One thing to note: whichever technology you choose, you should structure your code with a distribution layer that exposes coarse-grained methods.
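For example, a sketch of the difference (all names invented):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class CustomerSummary
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public string Address { get; set; }
        [DataMember] public string Phone { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        // Chatty (avoid): three network round-trips to build one screen of data.
        // string GetName(int id); string GetAddress(int id); string GetPhone(int id);

        // Coarse-grained: one round-trip returning everything the client needs.
        [OperationContract]
        CustomerSummary GetCustomerSummary(int id);
    }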
We have several .NET web services that we consume from a Java client. Each web service has its own namespace, but they all use a lot of common classes. When these are exposed as WSDLs and then generated into Java code, we get a lot of Java duplicates of the same .NET classes.
Is there a way in .NET to define a set of web service objects to be exported under a shared (XML) namespace? Or, when we use wsimport in Java, can we generate just one instance of each duplicated class?
On the service side, one option could be to have a specially crafted single WSDL describing all the services. See this article for how to do it (applicable to asmx services).
On a side note, for .NET clients it's quite simple to use the wsdl tool with the /sharetypes option to have common types generated once and re-used among multiple service proxies. Hopefully similar tools/options exist on the Java client side.
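For reference, the wsdl.exe invocation looks roughly like this (the URLs are placeholders):

    wsdl /sharetypes /out:Proxies.cs ^
        http://server/ServiceA.asmx?WSDL ^
        http://server/ServiceB.asmx?WSDL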
The -p option of wsimport allows you to override the namespace specified in the WSDL with a package that you specify. If you specify the same package for each WSDL, you'll only end up with one instance of each class.
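Roughly like this (the package name and URLs are placeholders):

    wsimport -keep -p com.example.client http://server/ServiceA.asmx?WSDL
    wsimport -keep -p com.example.client http://server/ServiceB.asmx?WSDL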
What I am trying to accomplish is to be able to call List<>.Contains() on the custom data structure(s) returned by the WCF service.
I implemented IEquatable<>.Equals, but it's not really working on the client side - Contains() always returns false. I am wondering whether the Equals() implementation is actually part of the class when it's put together on the client side.
No. Web services are generally meant to be platform-agnostic, so they define things like operation contracts (for operations performed on the server) and data contracts (for exchanging objects composed of simple data fields). But they don't define methods on objects as this would require knowledge of the client-side platforms. (For example, how would you marshal your IEquatable<>.Equals IL code to a Mac client?)
What you can do, if you have full control over your WCF service's clients, is deploy the same library to both the clients and the server. That is, you could put your data contract classes in a Data.dll and deploy it to both the client and the server (as opposed to using the default proxy classes generated from the service contract on the client).
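A sketch of such a shared class (names are illustrative). Because the very same assembly is loaded on both sides, the Equals implementation travels with it, and List<T>.Contains - which consults IEquatable<T> - then behaves as expected on the client:

    using System;
    using System.Runtime.Serialization;

    // Compiled into a shared Data.dll referenced by both client and server.
    [DataContract]
    public class Customer : IEquatable<Customer>
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }

        public bool Equals(Customer other) =>
            other != null && Id == other.Id && Name == other.Name;

        public override bool Equals(object obj) => Equals(obj as Customer);
        public override int GetHashCode() => Id.GetHashCode();
    }

When generating the client proxy, you then tell the tool to reuse the types from this assembly instead of emitting new ones (svcutil has a /reference switch for exactly this).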
No, this will not work. Implementing IEquatable<T> adds behavior to a type, and data contracts are only meant to specify data, not behavior. Behavior will not be copied down when the client adds a service reference to your type.