I'm hosting a WCF service within an organisation, and I was hoping to build a client into an assembly DLL to package up and give to anyone who wants to consume the service.
I could create a class library and simply add a service reference, build that, and distribute that. Any recommendations on an alternative approach?
I did something similar in my previous organization. I also had the additional requirement that the library should be COM visible so that a legacy C++ application could consume the API.
I went so far as to not require the client to provide any WCF configuration at all, beyond passing a handful of parameters through the API (service URL, timeouts, etc.). WCF was configured programmatically. I was in a very tightly controlled environment, where I knew exactly who the clients of the library were and could influence their design. This approach worked for me, but as they say, your mileage may vary.
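For illustration, a minimal sketch of what that programmatic configuration can look like; the IMyService contract and the parameter names are hypothetical:

```csharp
using System;
using System.ServiceModel;

// Hypothetical service contract, shared with the server.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Ping(string message);
}

public static class MyServiceClientFactory
{
    // The caller supplies the URL and timeout through the API;
    // no app.config is required on the client side.
    public static IMyService Create(string serviceUrl, TimeSpan timeout)
    {
        var binding = new BasicHttpBinding
        {
            SendTimeout = timeout,
            ReceiveTimeout = timeout
        };
        var factory = new ChannelFactory<IMyService>(
            binding, new EndpointAddress(serviceUrl));
        return factory.CreateChannel();
    }
}
```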
At my prior job we always did this. We'd have a library project that contained nothing but an svcutil-generated proxy and the config file to go with it (a sample invocation is sketched below).
This way our other projects could utilize this library and we'd only ever have one proxy generation. Great in a SOA model.
In your case you could then distribute this assembly if you wanted. Personally, I find the benefit greater for internal cases you control, but I suppose if you really felt charitable, distributing a .NET version for clients to use would be beneficial.
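For reference, the proxy generation in such a project boils down to a single svcutil invocation; the service URL and file names here are placeholders:

```
svcutil http://services.example.com/OrderService.svc?wsdl /out:OrderServiceProxy.cs /config:app.config
```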
What's the service host going to be? If it's going to be an HTTP-based one, putting it into an ASP.NET app makes a lot of sense. Otherwise, yeah, fire up the class library.
UPDATE based on comment
The client packaging really depends on what the receiver is going to do with it. If you're targeting developers, or existing in-house apps, then the library is a great option (though I'd probably wrap it in an .msi to make the experience familiar for users). If there needs to be a UI, then obviously you'll want to think about an appropriate UI framework (WPF, Silverlight, WinForms, etc.).
I would simply provide a library that contains all the required contracts. That's it - they can write their own client proxy.
Do your users know how to use WCF? If not, include a proxy class that instantiates a channel and calls the service.
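A hand-written proxy can be as small as this; IOrderService and Order are hypothetical stand-ins for your real contract:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    Order GetOrder(int id);
}

[DataContract]
public class Order
{
    [DataMember]
    public int Id { get; set; }
}

// Consumers new this up and call it like a local class; they never
// touch ChannelFactory or WCF configuration directly.
public class OrderServiceProxy : ClientBase<IOrderService>, IOrderService
{
    public OrderServiceProxy(string endpointUrl)
        : base(new BasicHttpBinding(), new EndpointAddress(endpointUrl))
    {
    }

    public Order GetOrder(int id)
    {
        return Channel.GetOrder(id);
    }
}
```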
I don't really see any point in providing an assembly that just includes code generated by svcutil. Why not just give your users a WSDL and then they can generate that code themselves? Distributing boilerplate doesn't seem like a great idea.
I am designing a Windows Service that can run "modules", i.e. plug-in assemblies.
The Service will be written in C# on .NET 7 or 8. The overall solution should connect to databases, gather information, and send it to a web API.
My thinking on modules centres on having to support different database technologies (e.g. SQL Server, Oracle, Postgres, etc.) that need to be connected to. Ideally the Service has no knowledge of each database technology; it should deal with the web API for messaging. So the plug-in modules/assemblies interact with the databases, and the Service mediates between the modules and the web API.
Modules will provide a multi-threaded approach to interacting with many database instances, doing work and returning results (messages) to the Service when done.
Ideally, the existence of an assembly (let's say SQLServer.dll for example) means the Service picks this up and uses it to talk to SQL Server databases (based on provided configuration). If the assembly does not exist or is not configured to be used, it is ignored.
I have looked at this project, and it looks promising: https://github.com/natemcmaster/DotNetCorePlugins
Are there other alternatives? In experimenting with dynamically loading assemblies using reflection, I have found one limitation: I can't seem to pass or return complex objects because of namespace clashes. So I don't think even putting all the database responsibility into its own project and namespace will work.
It looks like the approach above solves this. I've also read this article: https://learn.microsoft.com/en-us/dotnet/core/tutorials/creating-app-with-plugin-support
So to summarise my question(s):
Is this a good idea?
Are there any pitfalls with this approach?
The DotNetCorePlugins solution sounds like the best approach and to be fair is similar to how I've designed a Windows Service framework with a plug-in design for the specific implementation detail.
The key thing here is to design a set of contracts, by way of interface declarations, that every plug-in implements, no matter what its purpose, so long as the intention is the same. That way, whatever concrete implementation is drawn in for use at runtime (via assembly-load techniques etc.), the shape of that implementation, i.e. the methods to invoke and so on, is predetermined by the interface declarations.
Things may get a little complicated depending on your exact circumstances, for example if you want to load multiple database provider assemblies (to handle interactions with SQL Server, Oracle, etc.) in the same Windows Service instance, but essentially it's all about being able to load a specific assembly, or several assemblies, at runtime, and then dispatching requests into them accordingly.
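A minimal sketch of that contract-first approach, assuming a shared contracts assembly referenced by both the service and every plug-in (all type and folder names here are hypothetical):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Threading.Tasks;

// Lives in a shared contracts assembly referenced by the service AND
// by every plug-in, so both sides agree on type identity (this is
// what avoids the namespace/type-clash problem mentioned above).
public interface IDatabaseModule
{
    string Name { get; }  // e.g. "SqlServer", "Oracle", "Postgres"
    Task<string> QueryAsync(string connectionString, string query);
}

public static class ModuleLoader
{
    // Scans a folder for plug-in DLLs and instantiates every concrete
    // type implementing IDatabaseModule; assemblies that aren't
    // present are simply never found, so they are ignored.
    public static IDatabaseModule[] LoadAll(string pluginFolder)
    {
        return Directory.GetFiles(pluginFolder, "*.dll")
            .Select(Assembly.LoadFrom)
            .SelectMany(assembly => assembly.GetTypes())
            .Where(type => typeof(IDatabaseModule).IsAssignableFrom(type)
                           && !type.IsAbstract)
            .Select(type => (IDatabaseModule)Activator.CreateInstance(type)!)
            .ToArray();
    }
}
```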
I have attempted to use the SharedTypeResolver, as well as the less generic DataContractResolvers, from this blog post.
The post mentions that the SharedTypeResolver requires .NET and is intended for tightly coupled scenarios, such as having an assembly shared by the service and the client.
However, my question is this: doesn't using a DataContractResolver AT ALL require .NET and shared assemblies? How would the client use the resolver if it didn't have access to the same assemblies?
Currently all I have is .NET clients, but I don't want to alienate any potential customer who might be writing clients in Java.
That would make it harder, but not impossible, for your Java clients to generate proxies, as your WSDL wouldn't contain the types you send over the wire. There are tools that can generate proxies automatically, but obviously they wouldn't be sufficient for generating a data model that is not described in the WSDL; those data models would have to be created manually. So it is possible, but it's probably too much effort and it'll never get done. For those reasons I'd advise you to avoid it.
Except for perhaps named pipes, ANY client running on ANY platform can talk with WCF. Whether or not it will be an enjoyable experience is the issue.
For cross-platform communication I have had success with the WCF WebHttpBinding (REST):
http://msdn.microsoft.com/en-us/magazine/dd315413.aspx
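As a taste, a REST-style WCF contract looks like this; the service and URI template are invented for illustration:

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IStatusService
{
    // WebGet maps the operation onto a plain HTTP GET returning JSON,
    // so non-.NET clients can call it with any HTTP library.
    [OperationContract]
    [WebGet(UriTemplate = "status/{id}",
            ResponseFormat = WebMessageFormat.Json)]
    string GetStatus(string id);
}
```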
But if you're going to go that route then you're missing out on so much that WCF has to offer (transactions, security, etc.) and might as well use the ASP.NET Web API.
I have a specific case and I want to know the best practice way to handle it.
I maintain a specific .NET framework (a web application). This web application acts as a platform or framework for many other web applications, through the following methodology:
We create our dependent web applications (classes for the project business logic, RDLC reports) in separate solutions, then build them.
After that we add references to the resulting DLLs in the framework.
Then we create a set of user controls (one for each dependent web application) and put them in a folder in the framework itself.
It works fine, but for any modification to a specific user control, or to any one of the dependent web applications, we have to add the references again and publish the whole framework!
What I want to do is make these different web applications and the framework loosely coupled, so that I publish the framework once and only once, and for any modification to the user controls or the dependent web applications, I publish just the updated part rather than the whole framework.
How can I refactor my code to make this possible?
The most important thing is:
Never republish the whole framework when a change occurs in a dependent application; publish only the updated part belonging to that application.
If loose coupling is what you are after, develop your "framework (web application)" to function as a WCF web service. Your client applications will pass requests to your web services and receive standard responses in the form of predefined objects.
If you take this route, I recommend that you implement an additional step: Do not use the objects passed to your client applications directly in your client code. Instead, create versions of these web service objects local to each client application and upon receiving your web service response objects, map them to their local counterparts. I tend to implement this with a facade project in my client solution. The facade handles all calls to my various web services, and does the mapping between client and service objects automatically with each call. It is very convenient.
The reason for this is that the day that you decide to modify the objects that your web service serves, you only have to change the mapping algorithms in your client applications... the internal code of each client solution remains unchanged. Do not underestimate how much work this can save you!
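To make that concrete, a bare-bones sketch of such a facade, with a hypothetical service contract and object types:

```csharp
// DTO as served by the (hypothetical) web service.
public class CustomerDto
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

// Local counterpart owned by the client application.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical stand-in for the WCF service proxy.
public interface ICustomerService
{
    CustomerDto GetCustomer(int id);
}

// The facade is the only code that sees the service types; if the
// service contract changes, only this mapping needs to change.
public class CustomerFacade
{
    private readonly ICustomerService _service;

    public CustomerFacade(ICustomerService service)
    {
        _service = service;
    }

    public Customer GetCustomer(int id)
    {
        CustomerDto dto = _service.GetCustomer(id);
        return new Customer { Id = dto.Id, Name = dto.FullName };
    }
}
```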
Developing WCF web services is quite a large subject. If you are interested, a book that I recommend is Programming WCF Services. It offers a pretty good introduction to WCF development for those who come from a .NET background.
I totally agree with levib, but I also have some tips:
As an alternative to WCF (with its crazy configuration needs), I would recommend ServiceStack. Like WCF it lets you receive requests and return responses in the form of predefined objects, but with NO code generation and minimal configuration. It supports all kinds of response formats, such as JSON, XML, JSV and CSV. This makes it much easier to consume from, for example, JavaScript and even mobile apps. It even has binaries for MonoTouch and Mono for Android! It is also highly testable and blazing fast!
A great tool for the mapping part of your code is AutoMapper, it lets you set up all your mappings in a single place and map from one object type to another by calling a simple method.
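A quick sketch using the hypothetical types from the facade example above (MapperConfiguration, CreateMap and ForMember are AutoMapper's actual API):

```csharp
using AutoMapper;

public static class MappingBootstrap
{
    // Configure every mapping once, typically at application startup.
    public static IMapper CreateMapper()
    {
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<CustomerDto, Customer>()
               .ForMember(dest => dest.Name,
                          opt => opt.MapFrom(src => src.FullName));
        });
        return config.CreateMapper();
    }
}

// Usage: one call replaces all the hand-written property copying.
// Customer local = MappingBootstrap.CreateMapper()
//     .Map<Customer>(new CustomerDto { Id = 1, FullName = "Ada Lovelace" });
```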
Check them out! :)
Decades of experience says: avoid the framework and you won't have a problem to solve.
Frameworks evolve like cancer. The road to hell is paved with good intentions, and a good portion of those good intentions are embodied in a colossal tumour of a framework all in the name of potential re-use that never really happens.
Get some experience and knowledge when it comes to OO and design, and you'll find endless solutions to your technical problem, such as facades, and mementos, and what have you, but they are not solutions to your real problem.
Another thing: if you are using MS technology, don't bother with anything beyond what .NET offers. Stick with what the MS gods offer, because as soon as you digress and become committed to some in-house framework, your days are numbered.
I want to consume a series of REST services from a provider. But there are a lot of functions I can call and send to the server, so I think it would be a good idea to create a separate library that my C#/MVC2 project can reference and call.
In VS2010, what is the correct project type I should select to create this new library? Just a plain old "Class library"? It's grouped under "Windows", so I don't know if it's the correct template to use for a web project.
Thanks.
"Class Library" would be fine. The Class library template is not tied to anyone particular type of project, so they can be used for Web, Console, Windows, Wpf etc.
Of course the functionality you provide in the Class library might be limited to a specific execution evironment because of the functionality you might put into the library, for example if you develop a bunch of functions that expect to be run in an ASP.NET environment then the functionality of the class library might not be applicable to a Console application.
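As a sketch, a class-library wrapper around the provider can be as simple as this (WebClient fits the .NET 4 / VS2010 era; the base URL handling and method names are made up):

```csharp
using System.Net;

// Reusable REST wrapper that any project type can reference.
public class ProviderClient
{
    private readonly string _baseUrl;

    public ProviderClient(string baseUrl)
    {
        _baseUrl = baseUrl;
    }

    // Issues a GET against the provider and returns the raw response.
    public string GetResource(string path)
    {
        using (var client = new WebClient())
        {
            return client.DownloadString(_baseUrl + path);
        }
    }
}
```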
Technically, yes, a "Class library" will give you what you want. Consider, however, whether there are any potential benefits for you in creating a proxy Web Service that you use as an intermediary between your own application(s) and the remote provider. Doing so allows additional management options that can be performed separately from the calling application.
You could also try the WCF REST Starter Kit from MSDN. It contains VS templates that help you do all the RESTful things you could ever imagine doing.
I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API.
The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without destroying any previous third party apps.
I have a way to solve this problem, but I'm not sure it's the ideal way; I was looking for other ideas.
My plan would be to essentially have three DLLs.
APIServer_1_0.dll - this would be the DLL with all of the dependencies.
APIClient_1_0.dll - this would be the DLL our developers would actually reference. No references to any of the mess in our solution.
APISupport_1_0.dll - this would contain the interfaces which allow the client piece to dynamically load the "server" component and perform whatever functions are required. Both of the above DLLs would depend upon it, and it would be the only DLL that the "client" piece references.
I initially arrived at this design because the way we do inter-process communication between Windows services is somewhat similar (except that the client talks to the server via named pipes, rather than dynamically loading DLLs).
While I'm fairly certain I can make this work, I'm curious to know if there are better ways to accomplish the same task.
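For what it's worth, a stripped-down sketch of that plan; the type name inside the loaded assembly is hypothetical:

```csharp
using System;
using System.Reflection;

// APISupport_1_0.dll: the only assembly both sides reference.
public interface IApiServer
{
    string Execute(string operation);
}

// APIClient_1_0.dll: loads the heavy server assembly at runtime, so
// client code never takes a compile-time dependency on it.
public class ApiClient
{
    private readonly IApiServer _server;

    public ApiClient(string serverAssemblyPath)
    {
        Assembly serverAssembly = Assembly.LoadFrom(serverAssemblyPath);
        // "ApiServer" is a hypothetical type inside APIServer_1_0.dll
        // that implements IApiServer.
        Type serverType = serverAssembly.GetType("ApiServer");
        _server = (IApiServer)Activator.CreateInstance(serverType);
    }

    public string Execute(string operation)
    {
        return _server.Execute(operation);
    }
}
```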
You may wish to take a look at the Microsoft Managed Add-in Framework [MAF] and the Managed Extensibility Framework [MEF] (links courtesy of Kent Boogaart). As Kent states, the former is concerned with isolation of components, while the latter is primarily concerned with extensibility.
In the end, even if you leverage neither, some of the concepts regarding API versioning are very useful, i.e. versioning interfaces and then providing inter-version support through adapters.
Perhaps a little overkill, but definitely worth a look!
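To give a flavour of MEF (DirectoryCatalog, CompositionContainer and the attributes are MEF's actual API from System.ComponentModel.Composition; the component itself is invented):

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IApiComponent
{
    string Describe();
}

// Any assembly dropped into the plug-in directory can export one.
[Export(typeof(IApiComponent))]
public class SampleComponent : IApiComponent
{
    public string Describe() { return "sample"; }
}

public class Host
{
    [ImportMany]
    public IApiComponent[] Components { get; set; }

    // Discovers every exported IApiComponent in the directory and
    // fills the Components array.
    public void Compose(string pluginDirectory)
    {
        var catalog = new DirectoryCatalog(pluginDirectory);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}
```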
Hope this helps! :)
Why not just use the Assembly versioning built into .NET?
When you add a reference to an assembly, just be sure to set the 'Specific Version' property on the reference to True. That way you know exactly which version of the assembly you are using at any given time.
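For completeness, the version a reference binds to comes from the assembly's AssemblyVersion attribute, typically set in AssemblyInfo.cs:

```csharp
using System.Reflection;

// Consumers whose references have Specific Version = True will bind
// only to an assembly carrying exactly this version number.
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
```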