I am looking at designing a Windows Service that can run "modules", i.e. plug-in assemblies.
The Service will be written in C# and .NET 7 or 8. The entire solution should connect to databases, get information, and send it to a web API.
My thinking for modules is centered on having to support different database technologies (e.g. SQL Server, Oracle, Postgres) that need to be connected to. Ideally the Service has no knowledge of any particular database technology; it should only deal with the web API for messaging. So the plug-in modules/assemblies interact with the databases, and the Service sits between the modules and the web API.
Modules will provide a multi-threaded approach for interacting with many database instances, doing work and returning results (messages) to the Service when done.
Ideally, the existence of an assembly (say SQLServer.dll, for example) means the Service picks it up and uses it to talk to SQL Server databases (based on the provided configuration). If the assembly does not exist or is not configured to be used, it is ignored.
I have looked at this project and it looks promising https://github.com/natemcmaster/DotNetCorePlugins
Are there other alternatives? In experimenting with dynamically loading assemblies using reflection, one limitation I have found is that I can't seem to pass or return complex objects because of namespace clashes. So I don't think just putting all the database responsibility into its own project and namespace will work either.
It looks like the above approach solves this. I've also read this article https://learn.microsoft.com/en-us/dotnet/core/tutorials/creating-app-with-plugin-support
So to summarise my question(s):
Is this a good idea?
Are there any pitfalls with this approach?
The DotNetCorePlugins solution sounds like the best approach, and to be fair it is similar to how I've designed a Windows Service framework of my own, with a plug-in design for the specific implementation details.
The key thing here is to design a set of contracts, by way of interface declarations, that any plug-in implements regardless of its purpose, so long as the intention is the same. That way, whatever concrete implementation is drawn in for use at runtime (via assembly-load techniques etc.), the shape of that implementation, i.e. the methods to invoke and so on, is predetermined by the interface declarations.
Things may get a little complicated depending on your exact circumstances, for example if you want to load multiple database provider assemblies (to handle interactions with SQL Server, Oracle, etc.) in the same Windows Service instance, but essentially it's all about being able to load a specific assembly at runtime, or specific assemblies if required, and then being able to dispatch requests into them accordingly.
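As a rough sketch of what I mean (the contract names, the *.Module.dll naming convention and the folder layout are all made up here; the loader call follows the pattern documented in the DotNetCorePlugins README, so treat this as illustrative rather than definitive):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using McMaster.NETCore.Plugins;

// Shared contracts assembly, referenced by both the host service and every module.
public interface IDatabaseModule
{
    string Technology { get; }   // e.g. "SqlServer", "Oracle", "Postgres"
    Task<IReadOnlyList<ResultMessage>> RunAsync(ModuleConfig config, CancellationToken ct);
}

public record ModuleConfig(string ConnectionString);
public record ResultMessage(string Payload);

// Host side: discover and load whichever module assemblies happen to be present.
public static class ModuleCatalog
{
    public static List<IDatabaseModule> LoadModules(string modulesDir)
    {
        var modules = new List<IDatabaseModule>();

        foreach (var dll in Directory.GetFiles(modulesDir, "*.Module.dll"))
        {
            // sharedTypes keeps the contract types identical across load contexts,
            // which is what lets complex objects pass in and out of the plugin.
            var loader = PluginLoader.CreateFromAssemblyFile(
                dll,
                sharedTypes: new[] { typeof(IDatabaseModule), typeof(ModuleConfig), typeof(ResultMessage) });

            var implementations = loader.LoadDefaultAssembly()
                .GetTypes()
                .Where(t => typeof(IDatabaseModule).IsAssignableFrom(t) && !t.IsAbstract);

            foreach (var type in implementations)
                modules.Add((IDatabaseModule)Activator.CreateInstance(type)!);
        }

        return modules;
    }
}
```

If an assembly is missing, or is present but not referenced in configuration, the host simply never instantiates its module, which gives you the opt-in behaviour you describe.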
Related
I have a specific case and I want to know the best practice way to handle it.
I have built a specific .NET web application that acts as a platform or framework for many other web applications, through the following methodology:
We create our dependent web applications (business classes for the project, RDLC reports) in separate solutions, then build them.
After that, we add references to the resulting DLLs in the framework.
Then we create a set of user controls (one for each dependent web application) and put them in a folder in the framework itself.
It works fine, but for any modification to a specific user control, or to any one of the dependent web applications, we have to add the references again and publish the whole framework!
What I want to do is make those different web applications and the framework loosely coupled, so that I publish the framework once and only once, and for any modification to the user controls or the dependent web applications I publish only the updated part rather than the whole framework.
How can I refactor my code so I can do this?
The most important thing is:
Never publish the whole framework when the change is in a dependent application; publish only the updated part that belongs to that application.
If loose coupling is what you are after, develop your "framework (web application)" to function as a WCF web service. Your client applications will pass requests to your web services and receive standard responses in the form of predefined objects.
If you take this route, I recommend that you implement an additional step: Do not use the objects passed to your client applications directly in your client code. Instead, create versions of these web service objects local to each client application and upon receiving your web service response objects, map them to their local counterparts. I tend to implement this with a facade project in my client solution. The facade handles all calls to my various web services, and does the mapping between client and service objects automatically with each call. It is very convenient.
The reason for this is that the day that you decide to modify the objects that your web service serves, you only have to change the mapping algorithms in your client applications... the internal code of each client solution remains unchanged. Do not underestimate how much work this can save you!
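As a minimal sketch of that facade idea (all of the type names here are hypothetical):

```csharp
// Hypothetical wire type returned by the web service.
public class CustomerDto
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

// Local counterpart owned by the client application.
public class Customer
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

// Whatever proxy/contract the client uses to call the service.
public interface ICustomerService
{
    CustomerDto GetCustomer(int id);
}

// The facade: the only place in the client that touches service types.
public class CustomerServiceFacade
{
    private readonly ICustomerService _service;

    public CustomerServiceFacade(ICustomerService service)
    {
        _service = service;
    }

    public Customer GetCustomer(int id)
    {
        CustomerDto dto = _service.GetCustomer(id);

        // If the service contract changes, only this mapping changes;
        // the rest of the client keeps working against Customer.
        return new Customer { Id = dto.Id, FullName = dto.FullName };
    }
}
```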
Developing WCF web services is quite a large subject. If you are interested, a book that I recommend is Programming WCF Services. It offers a pretty good introduction to WCF development for those who come from a .NET background.
I totally agree with levib, but I also have some tips:
As an alternative to WCF (with its crazy configuration needs), I would recommend ServiceStack. Like WCF it lets you receive requests and return responses in the form of predefined objects, but with NO code generation and minimal configuration. It supports all kinds of response formats, such as JSON, XML, JSV and CSV. This makes it much easier to consume from, for example, JavaScript and even mobile apps. It even has binaries for MonoTouch and Mono for Android! It is also highly testable and blazing fast!
A great tool for the mapping part of your code is AutoMapper, it lets you set up all your mappings in a single place and map from one object type to another by calling a simple method.
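For example, roughly like this (the types are hypothetical; the calls are AutoMapper's standard CreateMap/Map pattern):

```csharp
using AutoMapper;

// Hypothetical wire and local types, just to show the shape of the API.
public class OrderDto { public int Id { get; set; } public decimal Total { get; set; } }
public class Order    { public int Id { get; set; } public decimal Total { get; set; } }

public static class OrderMapping
{
    // Configure every mapping once, in a single place.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<OrderDto, Order>()).CreateMapper();

    public static Order ToLocal(OrderDto dto)
    {
        return Mapper.Map<Order>(dto);
    }
}
```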
Check them out! :)
Decades of experience says: avoid the framework and you won't have a problem to solve.
Frameworks evolve like cancer. The road to hell is paved with good intentions, and a good portion of those good intentions are embodied in a colossal tumour of a framework all in the name of potential re-use that never really happens.
Get some experience and knowledge when it comes to OO and design, and you'll find endless solutions to your technical problem, such as facades, and mementos, and what have you, but they are not solutions to your real problem.
Another thing, if you are using MS technology, don't bother with anything beyond what .NET offers. Stick with what the MS gods offer because as soon as you digress and become committed to some inhouse framework, your days are numbered.
I'm looking for input on a direction to take for building an accounting application. The application needs to allow for a high degree of customization; sometimes entire processes will need to be changed.
I want a way to make changes without re-compiling the entire application when a customer has a specific modification request. The back-end will be a SQL database of some sort. Most likely SQL Server Express for cost reasons. The front-end will be C#.
I'm thinking of an event-based system that raises events when different types of actions, such as entries, are made. I would then have a plugin system that handles the events. I may need multiple processes to be applied to the data, in a specific order, before it is finally saved. It will need to trigger other processes as well.
I want to keep my base application the same, which works for most customers, but have a graceful way of loading the custom processes that other specific customers have.
I'm open to all suggestions. Even if they are thinking of completely different ways of approaching the problem. Our current in-house development talent is .NET and MS SQL Server. I'm not aware of a software pattern that may fit this situation.
Additional Info:
This isn't a completely blank-slate system; it will have functionality that works for a large number of the customers. For various reasons, requirements change from state to state, and even at the region and town level, which is where customization may be necessary.
I'd like to be able to plug in additional pre-compiled modules. When I started looking into possible options, I was imagining an empty handler that I could insert code into through a plugin. So, for example, a new entry is made to the general ledger, which raises an event. The handler is called, but the handler's code comes from a plugin, which may be my original process that fits 80% of the customers. If a customer wants a custom operation, I could add a plugin that completely replaces the original one, or have it add an additional post-processing step through another plugin that runs after the original. Sort of a layering process, I guess.
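Roughly what I'm picturing, just to make the layering idea concrete (everything here is a placeholder sketch, not real code yet):

```csharp
using System.Collections.Generic;
using System.Linq;

// Event data raised by the base application when a ledger entry is made.
public class LedgerEntryCreated
{
    public string Account { get; set; }
    public decimal Amount { get; set; }
}

// Implemented by plugins; the base app ships a default implementation.
public interface IEntryProcessor
{
    int Order { get; }                      // controls the layering/sequence
    void Process(LedgerEntryCreated entry);
}

// The "empty handler" in the base app: it just runs whatever plugins are loaded.
public class EntryPipeline
{
    private readonly IEnumerable<IEntryProcessor> _processors;

    public EntryPipeline(IEnumerable<IEntryProcessor> processors)
    {
        _processors = processors;
    }

    public void Handle(LedgerEntryCreated entry)
    {
        // A customer-specific plugin can replace the default processor
        // or simply run after it as an extra post-processing step.
        foreach (var processor in _processors.OrderBy(p => p.Order))
            processor.Process(entry);
    }
}
```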
You could look at the Managed Extensibility Framework (MEF).
It provides rich composition-layer features that allow you to build loosely coupled plugin applications.
Update: it sounds like you need different pre-defined modules for different geographic areas; using the chain-of-responsibility design pattern might help you manage that kind of change.
Sorry, no code provided, I'm just throwing out my thoughts.
Windows Workflow Foundation (WF) (part of the .NET Framework) is a potential candidate for your requirements. It enables various actions, command-lets and script-lets to be composed dynamically so that you can more easily customize different workflows for different users/customers.
WF is used by BizTalk for large-scale systems integration and is hosted in-process by many other applications that require the ability to easily modify the orchestration of a number of smaller tasks and actions.
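As a very small illustration of the idea (this assumes the .NET Framework's System.Activities assembly; the composed activities below are just placeholders):

```csharp
using System.Activities;
using System.Activities.Statements;

class WorkflowDemo
{
    static void Main()
    {
        // Compose a workflow from smaller activities at runtime,
        // e.g. based on per-customer configuration.
        Activity workflow = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Validate ledger entry" },
                new WriteLine { Text = "Apply customer-specific rule" },
                new WriteLine { Text = "Persist entry" }
            }
        };

        WorkflowInvoker.Invoke(workflow);
    }
}
```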
You might want to start with this tutorial on WF4.
HTH.
It's not just about plugins or how you technically resolve the plugin problem, whether you use MEF (+1 #laptop) or something else. You have to put most of your effort into defining the plugin "points" in your application; this is going to be the most important part, e.g. where you will put those empty "events" to hook your code into, and what parameters those events or plugins will have.
For example, a useful plugin point would be a before-save event, but then you will have to have only one place in the application that saves the various types of business documents, so you can call the plugins there, with an abstract document object as the parameter.
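Something like this is what I mean by that single save point (the type names are only for illustration):

```csharp
using System.Collections.Generic;

// The abstract parameter passed to every plugin.
public abstract class BusinessDocument
{
    public int Id { get; set; }
}

public class Invoice : BusinessDocument
{
    public decimal Total { get; set; }
}

// The plugin contract for the before-save point.
public interface IBeforeSavePlugin
{
    void OnBeforeSave(BusinessDocument document);
}

// The one place in the application that saves business documents,
// and therefore the one place that has to call the plugins.
public class DocumentRepository
{
    private readonly IEnumerable<IBeforeSavePlugin> _plugins;

    public DocumentRepository(IEnumerable<IBeforeSavePlugin> plugins)
    {
        _plugins = plugins;
    }

    public void Save(BusinessDocument document)
    {
        foreach (var plugin in _plugins)
            plugin.OnBeforeSave(document);

        // ... actual persistence goes here ...
    }
}
```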
So you have to think really hard about your system architecture so that it is abstract enough for the various plugin points, and design that architecture completely; don't design just a part of the system and start coding against that.
I hope that you understood what I meant to say, because English is not my native language.
For an upcoming effort I'll be looking at a partial rewrite of an existing system. One piece of it is a DAL which is implemented as a .NET assembly that is referenced by a few applications and provides methods to push things to an underlying DB and retrieve data from that DB. Essentially, I provide the users of the DLL with a means to connect to some DB and make a limited set of calls to it. Rather than the users writing SQL themselves, they use my defined interface.
Now there have been discussions about using some data access service layer instead of the existing DLL, and the benefits cited by the proponents of this approach are that it is "maintainable", "testable", "scalable" - all the standard buzzwords. There were also claims that this approach somehow minimizes the impact on the applications using the layer because it isolates the changes, though I'm not convinced this is the case. It would seem to me that any layer between the underlying DB and the application is going to have a well-defined interface, so changes made on either side which don't involve the other side of the interface will be invisible and have no real impact. Also, any changes which do affect the other side may require changes to that middle layer as well. I don't see how they wouldn't.
It's expected that early on changes will be required to the DAL because, well, stuff changes. Method parameters change, which currently forces me to recompile the DAL assembly and distribute it to the users of the DAL. These changes don't happen too often, but they do happen. I may be a bit naive in this area, but I'm not aware of a better way to get the applications out of the DB-interfacing business than what I currently have in place. Does anyone have specific knowledge about DAL solutions which provide for better modularity? I've read a few posts here and elsewhere on the net but none really talking about this. If there's an existing question that addresses this already, I'd be interested in seeing a link to it and would be happy to close this question.
Additional info:
A shorter version of my long question above is "What would be an advantage of not using the DLL approach that I currently employ?" The current model I employ has the following characteristics:
A number of POCO classes which abstract the underlying DB model (these classes are not currently auto-generated but were hand-built, though I guess they could be auto-generated via EF or something like that though I'm not sure it really matters)
A number of public methods which can be called, e.g., GetOrderDetails(int orderID) or SubmitOrder(Order newOrder) (roughly sketched after this list)
It handles the DB connection string via inputs which are provided by the app using the DLL
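To make that concrete, the public surface of the DLL looks roughly like this (heavily simplified; the method names echo the examples above and everything else is a placeholder):

```csharp
using System;

// Hand-built POCOs that mirror the underlying DB model (simplified).
public class Order
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

public class OrderDetails
{
    public int OrderId { get; set; }
    public string Status { get; set; }
}

// The public surface the applications call; they never write SQL themselves.
public class OrderDataAccess
{
    private readonly string _connectionString;

    // The calling application supplies the connection string.
    public OrderDataAccess(string connectionString)
    {
        _connectionString = connectionString;
    }

    public OrderDetails GetOrderDetails(int orderId)
    {
        // SQL / ADO.NET details live behind this method.
        throw new NotImplementedException();
    }

    public void SubmitOrder(Order newOrder)
    {
        throw new NotImplementedException();
    }
}
```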
My opinion, not knowing any of the details and assuming a smaller project with not too many clients: I'd say stick with the DLL, and make sure the latest version of it is on a file share or at a URL where it can be downloaded by a script, preferably automatically as part of the build process.
Ah, I wasn't exactly sure what you meant by a data access service layer.
I think to give good feedback on this we'd need to know how many clients, the types of clients, and what kind of methods would be in it (I think huge select queries returning complex objects would be a poor fit for WCF or similar). Also, since WCF is so versatile: would this be on the same machine, over a LAN, or over the internet?
I've used WCF slightly, but not in that context really.
It might be slightly more modular than your DLL, as long as the changes you make do not break the contract (i.e., you only add new methods), but I'd say the big benefit would come if you needed it accessed from different platforms, or had enough clients that database connections would be strained.
Some errors with WCF can be difficult to debug. And if you do have to break the contract, you still have to send the clients a new servicecontract.cs file.
From my understanding of your question and the proponents' desire, I believe they are talking about something along the lines of SOA, or Service-Oriented Architecture. SOA allows for a modular and loosely coupled approach. This can lead to easier maintenance, in that the logic of the data access routines can be altered, updated and/or maintained without affecting the clients. Also, say you have twenty clients that make use of the "service": there will be no need to update and deploy a new DLL to all of the clients that use the "service" or DAL.
In terms of the platform or technology used to implement SOA systems, it could be whatever you choose. Or, more accurately, whatever is the right tool for the job. You could roll your own, you could use WCF, or you could make use of Web API, etc.
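For example, if you went the WCF route, a service contract over the same kind of operations might look something like this (names are illustrative only):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderDetailsDto
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public string Status { get; set; }
}

// Clients code against this contract; the data access logic behind it
// can change and be redeployed without touching the twenty clients.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDetailsDto GetOrderDetails(int orderId);

    [OperationContract]
    void SubmitOrder(OrderDetailsDto order);
}
```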
Here are some links on the topic:
Introduction to SOA - JavaWorld
Service Oriented Architecture with .NET
Service Oriented Architecture and Microsoft .NET
MS Architecture Journal #21 - SOA Today and Past
Understanding Service Oriented Architecture
I'm hosting a WCF service within an organisation, and I was hoping to build a client into an assembly DLL to package up and give to anyone who wants to consume the service.
I could create a class library and simply add a service reference, build that, and distribute that. Any recommendations on an alternative approach?
I did something similar in my previous organization. I also had the additional requirement that the library should be COM visible so that a legacy C++ application could consume the API.
I went so far as to not require the client to provide any WCF configuration, besides passing a handful of parameters through the API (service URL, timeouts, etc.). WCF was configured programmatically. I was in a very tightly controlled environment, where I knew exactly who the clients of the library were and could influence their design. This approach worked for me, but as they say, your mileage may vary.
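In rough terms the idea was something like this (IOrderService stands in for whatever the real contract was, and the binding choice and parameters are just an example):

```csharp
using System;
using System.ServiceModel;

// Placeholder contract; in practice this is whatever the service exposes.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

public static class ServiceClientFactory
{
    // The caller passes plain parameters; no client-side app.config is needed.
    public static IOrderService Create(string serviceUrl, TimeSpan timeout)
    {
        var binding = new BasicHttpBinding
        {
            SendTimeout = timeout,
            ReceiveTimeout = timeout
        };

        var factory = new ChannelFactory<IOrderService>(
            binding, new EndpointAddress(serviceUrl));

        return factory.CreateChannel();
    }
}
```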
At my prior job we always did this. We'd have a library project that contained nothing but an svcutil-generated proxy and the config file to go with it.
This way our other projects could utilize this library and we'd only ever have one proxy generation. Great in a SOA model.
In your case you could then distribute this assembly if you wanted. Personally, I find the benefit greater for internal cases you control, but I suppose if you really felt charitable, distributing a .NET version for clients to use would be beneficial.
What's the service host going to be? If it's going to be an HTTP based one, putting it into an ASP.NET app makes a lot of sense. Otherwise, yeah, fire up the class library.
UPDATE based on comment
The client packaging really depends on what the receiver is going to do with it. If you're targeting developers, or existing in-house apps, then the library is a great option (though I'd probably wrap it in an .msi to make the experience familiar for users). If there needs to be a UI, then obviously you'll want to think about an appropriate UI framework (WPF, Silverlight, WinForms, etc).
I would simply provide a library that contains all the required contracts. That's it - they can write their own client proxy.
Do your users know how to use WCF? If not, include a proxy class that instantiates a channel and calls the service.
I don't really see any point in providing an assembly that just includes code generated by svcutil. Why not just give your users a WSDL and then they can generate that code themselves? Distributing boilerplate doesn't seem like a great idea.
I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API.
The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without destroying any previous third party apps.
I have a way to solve this problem, but I'm not sure it's the ideal way - I was looking for other ideas.
My plan would be to essentially have three dlls.
APIServer_1_0.dll - this would be the dll with all of the dependencies.
APIClient_1_0.dll - this would be the dll our developers would actually reference. No references to any of the mess in our solution.
APISupport_1_0.dll - this would contain the interfaces which would allow the client piece to dynamically load the "server" component and perform whatever functions are required, as sketched below. Both of the above dlls would depend upon this. It would be the only dll that the "client" piece refers to.
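To illustrate the split (all type and namespace names below are hypothetical):

```csharp
using System;
using System.Reflection;

// APISupport_1_0.dll - the only assembly third-party code references directly.
public interface IApiServer
{
    string GetCustomerName(int customerId);
}

// APIClient_1_0.dll - loads the dependency-heavy "server" assembly at runtime
// and hands back an implementation of the shared interface.
public static class ApiClient
{
    public static IApiServer Connect(string serverAssemblyPath)
    {
        Assembly serverAssembly = Assembly.LoadFrom(serverAssemblyPath);

        // APIServer_1_0.dll contains the concrete implementation plus all
        // of its references into the main solution.
        Type implType = serverAssembly.GetType(
            "ApiServer.ApiServerImplementation", throwOnError: true);

        return (IApiServer)Activator.CreateInstance(implType);
    }
}
```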
I initially arrived at this design because the way in which we do inter-process communication between Windows services is sort of similar (except that the client talks to the server via named pipes, rather than dynamically loading dlls).
While I'm fairly certain I can make this work, I'm curious to know if there are better ways to accomplish the same task.
You may wish to take a look at the Microsoft Managed Add-in Framework [MAF] and the Managed Extensibility Framework [MEF] (links courtesy of Kent Boogaart). As Kent states, the former is concerned with isolation of components, and the latter is primarily concerned with extensibility.
In the end, even if you do not leverage either, some of the concepts regarding API versioning are very useful - i.e. versioning interfaces and then providing inter-version support through adapters.
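For example, the versioning-plus-adapter idea boils down to something like this (interface names are hypothetical):

```csharp
// Version 1 of a contract, frozen once published.
public interface ICustomerApiV1
{
    string GetName(int customerId);
}

// Version 2 adds a member without touching V1.
public interface ICustomerApiV2
{
    string GetName(int customerId);
    string GetEmail(int customerId);
}

// Adapter that lets an existing V1 implementation serve V2 callers
// for the operations it can support.
public class V1ToV2Adapter : ICustomerApiV2
{
    private readonly ICustomerApiV1 _inner;

    public V1ToV2Adapter(ICustomerApiV1 inner)
    {
        _inner = inner;
    }

    public string GetName(int customerId)
    {
        return _inner.GetName(customerId);
    }

    public string GetEmail(int customerId)
    {
        throw new System.NotSupportedException("Not available via the V1 API.");
    }
}
```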
Perhaps a little overkill, but definitely worth a look!
Hope this helps! :)
Why not just use the Assembly versioning built into .NET?
When you add a reference to an assembly, just be sure to set the 'Specific Version' property on the reference to True. That way you know exactly which version of the assembly you are using at any given time.