Application Architecture For Ease of Application Customization - C#

I'm looking for input on a direction to take for building an accounting application. The application needs to allow for a high degree of customization; sometimes entire processes will need to be changed.
I want a way to make changes without re-compiling the entire application when a customer has a specific modification request. The back-end will be a SQL database of some sort. Most likely SQL Server Express for cost reasons. The front-end will be C#.
I'm thinking of an event-based system that raises events when different types of actions, such as entries, are made. I would then have a plugin system that handles those events. I may need multiple processes to be applied to the data, in a specific order, before it is finally saved, and the system will need to trigger other processes as well.
I want to keep my base application the same, which works for most customers, but have a graceful way of loading the custom processes that other specific customers have.
I'm open to all suggestions. Even if they are thinking of completely different ways of approaching the problem. Our current in-house development talent is .NET and MS SQL Server. I'm not aware of a software pattern that may fit this situation.
Additional Info:
This isn't a completely blank slate system, it will have functionality that works for a large number of the customers. For various reasons, requirements change based on states and even at the region and town level where customization may be necessary.
I'd like to be able to plug in additional pre-compiled modules. When I started looking into possible options, I was imagining an empty handler that I could insert code into through a plugin. So say, for example, a new entry is made to the general ledger, which raises an event. The handler is called, but the handler's code comes from a plugin, which may be my original process that fits 80% of the customers. If a customer wants a custom operation, I could add a plugin that completely replaces the original one, or have it add an additional post-processing step through another plugin that runs after the original. Sort of a layering process, I guess.
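To make that layering idea concrete, here is a minimal sketch of what I have in mind; the type names (LedgerEntry, IEntryProcessor, LedgerService) are just illustrative, not from any particular framework:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch only: a ledger entry passes through an ordered chain of
// plugins before it is saved, and an event fires afterwards for follow-up work.
public class LedgerEntry
{
    public string Account { get; set; }
    public decimal Amount { get; set; }
}

public interface IEntryProcessor
{
    // Lower numbers run first: the stock process could be 0,
    // customer-specific steps 100, 200, and so on.
    int Order { get; }
    void Process(LedgerEntry entry);
}

public class LedgerService
{
    private readonly List<IEntryProcessor> _processors;

    public LedgerService(IEnumerable<IEntryProcessor> processors)
    {
        _processors = processors.OrderBy(p => p.Order).ToList();
    }

    public event EventHandler<LedgerEntry> EntryPosted;

    public void Post(LedgerEntry entry)
    {
        // Each plugin may validate or modify the entry before it is persisted.
        foreach (var processor in _processors)
            processor.Process(entry);

        Save(entry);                                   // write to SQL Server
        var handler = EntryPosted;
        if (handler != null) handler(this, entry);     // let other plugins react
    }

    private void Save(LedgerEntry entry) { /* ADO.NET / ORM call */ }
}
```

The base product would register its standard processor, and a customer-specific assembly could replace it or add an extra step at a higher order without touching the core.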

You could look at the Managed Extensibility Framework (MEF).
It provides rich composition-layer features that allow you to build loosely coupled plugin applications.
Update: it sounds like you need pre-defined modules for different geographic areas, and using the Chain of Responsibility design pattern might help you manage that kind of change.
Sorry, no code provided, just throwing out my thoughts.

Windows Workflow Foundation (WF) (part of the .NET Framework) is a potential candidate for your requirements. It enables various actions, command-lets and script-lets to be composed dynamically so that you can more easily customize different workflows for different users/customers.
WF is used by BizTalk for large-scale systems integration and is hosted in-process by many other applications that require the ability to easily modify the orchestration of a number of smaller tasks and actions.
You might want to start with this tutorial on WF4.
HTH.

It's not just about plugins or how you technically resolve the plugin problem, whether with MEF (+1 #laptop) or something else. You have to put most of your effort into defining the plugin "points" in your application; this is going to be the most important part, e.g. where you will put those empty "events" for your code, and what parameters those events or plugins will receive.
For example, a useful plugin point would be a before-save event, but then you need to have only one place in the application that saves the various types of business documents, so you can call the plugins there with an abstract document object as the parameter.
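A minimal sketch of what such a single before-save plugin point could look like; ISavePlugin, BusinessDocument and DocumentRepository are invented names for illustration:

```csharp
using System.Collections.Generic;

// Illustrative names only: one save path for every document type, with
// plugins hooked in just before persistence.
public abstract class BusinessDocument
{
    public int Id { get; set; }
    // fields shared by invoices, ledger entries, orders, ...
}

public interface ISavePlugin
{
    void BeforeSave(BusinessDocument document);
}

public class DocumentRepository
{
    private readonly IEnumerable<ISavePlugin> _plugins;

    public DocumentRepository(IEnumerable<ISavePlugin> plugins)
    {
        _plugins = plugins;
    }

    public void Save(BusinessDocument document)
    {
        // The single place in the application where documents are saved,
        // so every plugin sees every document type.
        foreach (var plugin in _plugins)
            plugin.BeforeSave(document);

        // ... persist the document ...
    }
}
```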
So you have to think really hard about your system architecture, so that it is abstract enough for the various plugin points, and work that architecture out completely; don't design just a part of the system and start coding on that.
I hope that you understood what I meant to say, because English is not my native language.

Related

How to find unused parts of a large .NET application?

Consider a large multi-tier enterprise web application and many services with very complex functionality, mostly written in .NET (C#) on the server side with HTML and JavaScript on the client, consisting of many hundreds of pages, with the number of service calls (actions) well into the thousands, hosted on multiple servers and developed over 15 years. Some parts are very new and modern, other parts are legacy.
Some parts of this application are obsolete and nobody actually uses those parts anymore. Whether these are whole unused sub-applications, unused pages, files, service calls, methods or even lines of code, doesn't matter. Older parts do not provide any usage statistics but do use dependency injection.
How can one automatically find out, based on access to production servers, which parts are unused, without changing the actual source code? So the question is not finding unreferenced / unreachable code. It's about finding parts that users don't actually use anymore.
One option could be looking at query logs. This discovers unused pages, but it is very difficult (a tedious manual process) to find out which parts in the background are used by those pages only.
Another option could probably be monitoring file access on servers. Does that make sense? Would that be feasible?
Yet another thought is doing something like test coverage tools do, but not during testing. Could coverage (something like lines of code executed) be measured in a live C#/.NET application, assuming that debug symbols are available?
It is hard to give an answer without really knowing the situation. However, I do not think there is an automatic or easy way. I do not know the best solution, but I can tell you what I would do. I would start by collecting all log files from the (IIS?) servers (for at least a year, since code could be used only once a year) and analyzing those. This should give you the best insight into which parts are called externally. Do you have those logs?
Also check the event logs. Sometimes there are messages like 'Directory does not exist', which could mean that a service hasn't been working for years but nobody noticed. And check for redundant applications; perhaps applications are active on multiple servers.
Check tables with timestamps and log info for recent entries.
Checking the dates on files and analyzing the database may provide additional information, but I don't think it will really help.
Make a list of all applications that you think are obsolete, based on user input or applications that should be obsolete.
Use your findings to create a list ordered by the probability that an application or piece of code is obsolete. Next steps, based on your list, could be:
remove redundant applications.
look for changes in the data model or file system and check whether these still match the code.
analyze the database for invalid queries. This could indicate that the data model has changed, causing the application to stop working. If nobody noticed, then that application or functionality is obsolete.
add logging to the code where you have doubts.
look at the application level and start marking calls as obsolete, commenting out / removing unused code, or redirecting to (new) equivalent code.
turn off applications and monitor what happens. If there is a dependency, then you can take action to remove the dependency or choose to let the application live.
Monitoring the impact of your actions will help you to sort things out. I hope this answer gives you some ideas.
-- UPDATE --
There may be logging available, but collecting, reading and interpreting it may be hard and time-consuming. To make monitoring easier you could think of the following:
monitor the database: you can use the profiler tool, but it may be easier to create a trigger that logs all CRUD operations with all the information you need. Create a program that can read the schema of the database and filter the log by table, stored procedure and view to determine what isn't used. I didn't investigate it, but perhaps you can monitor rollbacks and exceptions as well.
monitor IIS. There are of course the log files, but you can also add a module to the website in which you can write custom code to monitor whatever you want; all traffic passes through the module. Take a look here: https://www.iis.net/learn/develop/runtime-extensibility/developing-iis-modules-and-handlers-with-the-net-framework. If I am not mistaken, all you have to do is add the module to the website and configure the website to use it. Create a program to filter the log on URL, status, IP, identification, etc. to determine what is used.
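A minimal sketch of such a logging module (a classic ASP.NET IHttpModule; the trace call is a placeholder for whatever sink you prefer):

```csharp
using System;
using System.Web;

// Logs every request that passes through the site; URLs that never show up
// in the log are candidates for being unused.
public class UsageLoggingModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.EndRequest += OnEndRequest;
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;

        // Replace the trace call with your own sink (file, database, ...).
        System.Diagnostics.Trace.WriteLine(string.Format(
            "{0:o}\t{1}\t{2}\t{3}\t{4}",
            DateTime.UtcNow,
            app.Context.Request.HttpMethod,
            app.Context.Request.RawUrl,
            app.Context.Response.StatusCode,
            app.Context.Request.UserHostAddress));
    }

    public void Dispose() { }
}
```

In the integrated pipeline you then register the module under <system.webServer>/<modules> in web.config.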
I think that is sufficient for a first analysis. It then comes down to interpreting the logs. Perhaps you'll see a way to combine the logs so you can link a request to certain database actions, without having to look in or change the code. Just some thoughts.
You can use ReSharper. It will point out such problems while you're coding.
However, you can also detect problems afterwards. In the menu you will find the entry "ReSharper > Inspect > Code Issues in Solution".
It will create a report; there you will find these issues under "Redundancies in Code".

What's the best design for a web app that adopts WF?

We are currently building an application that makes use of a non-trivial approval process, which involves multiple levels of approval, returning, reviewing, notifications, etc.
Because of this requirement, we were asked to use a workflow framework, partly to facilitate process transparency.
On the prototype we have successfully incorporated the workflow and it works fine. However, we cannot determine which actions should be available to the user. For example, I have the following receive operations: create(), managerApprove(), RAApprove(), ORMApprove(). Now, if I call them in order, using the correct user name, they will work. Obviously, if I don't call them in order, a FaultException is thrown because the workflow is not in the correct state. The question is: how will I know which operations are available to expose in the UI - say, if it's currently waiting for manager approval, then show an approval button only to the manager...
As a workaround, I've created another WCF service that retrieves the same data from the database and then determines the correct UI state (which actions can be performed by the user). I think this is a duplication of logic, since that logic is supposed to live in the workflow already.
Also, if the workflow changes then my separate WCF service may break. For example, if I switch the approval order in the workflow then I need to update the logic in the WCF service as well. Otherwise, it would show an invalid page state, and clicking Approve would invoke the wrong method and cause a FaultException.
Any help will be much appreciated... I'm really new to WF4.
UPDATE:
My colleague put my question this way:
What's the best design for a web app that adopts WF?
The main reasons why WF is being considered
- The workflows involved are long running
- Workflows are human workflows - they need to coordinate actions of real people
- Process Transparency
Also, how should the workflow integrate with the UI? How will the UI know what state it should be in and which pages to show to which users?
The workflow itself doesn't expose this information directly, but it is there: each pending Receive is a named bookmark, and the bookmark name contains the SOAP action it supports as well as the service contract and namespace. The easiest way of getting at this info is by adding the SqlWorkflowInstanceStore to the WorkflowServiceHost and checking the column with the pending bookmarks. It isn't perfect, because this gives you the information as it was last persisted, which is not necessarily the current state, but it has worked for me in a number of applications. Just make sure to set TimeToPersist to a pretty low value and add some Persist activities in strategic places.
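For a self-hosted workflow service, a rough sketch of wiring this up could look like the following; the connection string and URI are placeholders, and under IIS you would configure the same thing with the sqlWorkflowInstanceStore service behavior in web.config:

```csharp
using System;
using System.Activities;
using System.ServiceModel.Activities;
using System.ServiceModel.Activities.Description;

static class ApprovalHost
{
    // approvalWorkflow: the root activity of the approval workflow service.
    public static WorkflowServiceHost Start(Activity approvalWorkflow)
    {
        var host = new WorkflowServiceHost(approvalWorkflow,
            new Uri("http://localhost:8080/Approval"));

        var store = new SqlWorkflowInstanceStoreBehavior(
            "Data Source=.;Initial Catalog=WFPersistence;Integrated Security=True")
        {
            // Persist frequently so the persisted bookmark data stays close to current.
            TimeToPersist = TimeSpan.FromSeconds(5),
            InstanceCompletionAction = InstanceCompletionAction.DeleteAll
        };
        host.Description.Behaviors.Add(store);

        host.Open();
        return host;
    }
}
```

The pending bookmarks (i.e. the Receive operations an instance is currently waiting on) can then be read back from the persistence database to decide which buttons to show.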
A very simple approach would be to simulate the workflow by managing the status of the approvals yourself. Imagine that you have different buttons/pages for different users to approve the different stages ("create", "manager approval", "RA approval", etc.) of the approval process. This is a very old-school approach.
If you use this approach, you would need to distribute your workflow (logic/process) across different places (pages). Obviously, this is a downside of the approach, especially when your workflow changes a lot or your solution needs to run different versions of a workflow.
If you want to use Workflow Foundation, the easiest way is what Maurice has suggested.
The other option is to use other tools which scale better and are more flexible than WF. I have used WF (not the latest release though), BizTalk, and SharePoint.
If your solution requires interacting with other applications, I would recommend using BizTalk.

Using Windows Workflow Foundation in applications with reporting requirements

I am wondering, very generally, about the feasibility of a WF solution in an application, where the business owners want extensive reporting capabilities on application state and data.
The main issue that I see is that WF tends to hide data, such as a recipient list, by serializing it inside the workflow instances, so it can't be reported on in any way I'm aware of.
On the other hand, if the instance data is written out to an external resource such as a sql table, then haven't you just given up your ability to change that part of the workflow at will (taking away a major selling point for WF)?
In WF for .NET 3.5, I understand the SqlTrackingService can provide raw data about workflow instance events, activity events, and user events. But I wonder about the scalability of logging a lot of instance data, from the activities, as user events. Also, the schema that the SqlTrackingService uses looks like it would be hard to maintain a reporting solution on, especially if there are updates to the workflow over time. Am I wrong here?
So, has anyone out there successfully used WF when reporting was a major concern? If so, I'd like to hear about how it was done, if WF was isolated to only certain parts of the app (having data not applicable for reporting), and so on...
Perhaps, this is just a question about reporting in a BPM solution in general, as well...
EDIT:
I've accepted Maurice's answer because he took the time to answer it, but I'm still interested in any other opinions on this!
Using the SqlTrackingService in combination with tracking profiles is quite performant, especially when you enable transactional logging. Basically, you want to use the tracking profile to track only the events you are really interested in. You can also tell it to extract and log specific pieces of user data.
The main way to retrieve the data is not through SQL directly, although that is possible and useful for some queries (such as listing all workflows executing a specific activity), but through the SqlTrackingQuery class. This will also deserialize the additional user data.
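A rough sketch of querying the tracking database this way (WF 3.0/3.5; the connection string is a placeholder and the filters are only examples):

```csharp
using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Tracking;

static class TrackingReport
{
    // Lists workflows tracked by the SqlTrackingService without hand-written SQL.
    public static void DumpRunningWorkflows(string connectionString)
    {
        var query = new SqlTrackingQuery(connectionString);

        var options = new SqlTrackingQueryOptions
        {
            WorkflowStatus = WorkflowStatus.Running,
            StatusMinDateTime = DateTime.UtcNow.AddDays(-30)
        };

        foreach (SqlTrackingWorkflowInstance instance in query.GetWorkflows(options))
        {
            Console.WriteLine("{0} - {1}", instance.WorkflowInstanceId, instance.Status);

            // User events carry the pieces of user data the tracking profile extracted.
            foreach (UserTrackingRecord userEvent in instance.UserEvents)
                Console.WriteLine("  {0}: {1}", userEvent.UserDataKey, userEvent.UserData);
        }
    }
}
```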

Extensibility without Open-Source

My company is currently in the process of creating a large multi-tier software package in C#. We have taken a SOA approach to the structure and I was wondering whether anyone has any advice as to how to make it extensible by users with programming knowledge.
This would involve a two-fold process: approval by the administrator of a production system to allow a specific plugin to be used, and the actual plugin architecture itself.
We want to allow the users to write scripts to perform common tasks, modify the layout of the user interface (written in WPF) and add new functionality (e.g. allowing charting of tabulated data). Does anyone have any suggestions on how to implement this, or know where one might obtain the knowledge to do this kind of thing?
I was thinking this would be the perfect corner case for releasing the software as open source with a restrictive license on distribution; however, I'm not keen on allowing the competition access to our source code.
Thanks.
EDIT: Thought I'd just clarify why I chose the answer I did. I was referring to production administrators external to my company (i.e. the client), and to giving them some way to automate/script things more easily without requiring them to have full knowledge of C# (they are mostly end users with limited programming experience) - I was thinking more of a DSL. This may be an out-of-reach goal, and the Managed Extensibility Framework seems to offer the best compromise so far.
Just use interfaces. Define an IPlugin interface that every plugin must implement, and use a well-defined messaging layer to allow the plugin to make changes in the main program. You may want to look at a program like MediaPortal or Meedios, which depend heavily on user plugins.
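A minimal sketch of that idea; IPlugin, IPluginHost and the loader are invented names, and a real implementation would add error handling, versioning and the approval step:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// The contract every plugin must implement.
public interface IPlugin
{
    string Name { get; }
    void Initialize(IPluginHost host);
}

// The "messaging layer" back into the main program: plugins only ever see this.
public interface IPluginHost
{
    void RegisterMenuItem(string caption, Action onClick);
}

public static class ReflectionPluginLoader
{
    public static IEnumerable<IPlugin> LoadFrom(string folder)
    {
        foreach (var file in Directory.GetFiles(folder, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(file);
            foreach (var type in assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract))
            {
                yield return (IPlugin)Activator.CreateInstance(type);
            }
        }
    }
}
```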
As mentioned by Steve, using interfaces is probably the way to go. You would need to design the set of interfaces that you want your clients to use, design entry points for the plugins, and define a plugin communication model. Along with Steve's suggestions, you might also want to take a look at the Eclipse project. It has a very well-defined plugin architecture, and even though it's written in Java, it may be worth a look.
Another approach might be to design an API available to a scripting language. Both IronPython and Boo are dynamic scripting languages that work well with C#. With this approach, your clients could write scripts to interact with and extend your application. This is a bit more of a lightweight solution compared to a full plugin system.
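A small sketch of hosting IronPython this way; the "app" variable and the script path are placeholders, and whatever object you hand to the script effectively becomes your public scripting API:

```csharp
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

public static class ScriptRunner
{
    // Runs a user-supplied Python script against a host object you expose.
    public static void RunUserScript(object application, string scriptPath)
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        scope.SetVariable("app", application);   // scripts see the application as "app"
        engine.ExecuteFile(scriptPath, scope);
    }
}
```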
I would take a look at the MEF initiative from Microsoft. It's a framework that lets you add extensibility to your applications. It's in beta now, but should be part of .NET 4.0.
Microsoft shares the source, so you can look at how it's implemented and interface with it. So basically your extensibility framework will be open for everyone to look at, but it won't force you to publish your application code or the plug-in code.
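For illustration, a minimal MEF composition could look like this (System.ComponentModel.Composition; the IExtension contract and the plugin folder are assumptions, not part of MEF itself):

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// The contract shared between the host and plugin assemblies.
public interface IExtension
{
    string Name { get; }
    void Execute();
}

// Lives in a separate, pre-compiled plugin DLL.
[Export(typeof(IExtension))]
public class ChartingExtension : IExtension
{
    public string Name { get { return "Charting"; } }
    public void Execute() { /* add charting of tabulated data */ }
}

// In the host application.
public class ExtensionHost
{
    [ImportMany]
    public IEnumerable<IExtension> Extensions { get; set; }

    public void Compose(string approvedPluginFolder)
    {
        // Only assemblies the administrator has copied into this folder are loaded.
        var catalog = new DirectoryCatalog(approvedPluginFolder);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);   // fills Extensions
    }
}
```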
Open source is not necessary in any way shape or form to make a product extensible.
I agree that open source is a scary idea in this situation. When you say approval by a production administrator - is that administrator within your company, or external?
Personally, I would look at allowing extensibility through inheritance (allowing third parties to subclass your code without giving them the source) and very carefully specified access modifiers.
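A small sketch of what that could look like: a base class shipped in a compiled assembly whose access modifiers spell out exactly what third parties may override (all names here are hypothetical):

```csharp
public class ReportData { /* fields omitted */ }

// Shipped as a compiled assembly; third parties subclass it without the source.
public abstract class ReportExporter
{
    // The stable public workflow that callers use.
    public void Export(ReportData data)
    {
        Validate(data);
        WriteOutput(Transform(data));
    }

    // Extension points exposed to subclasses.
    protected virtual void Validate(ReportData data) { /* default checks */ }
    protected abstract string Transform(ReportData data);

    // Internal machinery stays hidden from subclasses.
    private void WriteOutput(string content) { /* ... */ }
}
```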
Microsoft already did exactly this, resulting in Reporting Services, which has every attribute you mention: user defined layout, scriptability, charting, customisable UI. This includes a downloadable IDE. No access to source code is provided or required, yet it's absolutely littered with extensibility hooks. The absence of source code inhibits close-coupling and promotes SOA thinking.
We are currently in a similar situation. We identified different scenarios where people may want to create a live connection at the data level. In that case they can have access to a single web service to request and import data.
At some point they may want a custom user interface (in our case Silverlight 2). For this scenario we can provide a base class and have them register the module in a central repository. It then integrates into our application in a uniform way, including security, form and behaviour, and interaction with services.

Building a highly modular business application with WPF?

I'm fleshing out a WPF business application in my head and one thing that sparked my interest was how I should handle making it incredibly modular. For example, my main application would simply contain the basics to start the interface, load the modules, connect to the server, etc. These modules, in the form of class libraries, would contain their own logic and WPF windows. Modules could define their own resource dictionaries and all pull from the main application's resource dictionary for common brushes and such.
What's the best way to implement a system of this nature? How should the main interface be built so that the modules it loads can alter virtually any aspect of its user interface and logic?
I realize it's a fairly vague question, but I'm simply looking for general input and brainstorming.
Thanks!
Check out Composite Client Application Guidance
The Composite Application Library is designed to help architects and developers achieve the following objectives:
Create a complex application from modules that can be built, assembled, and, optionally, deployed by independent teams using WPF or Silverlight.
Minimize cross-team dependencies and allow teams to specialize in different areas, such as user interface (UI) design, business logic implementation, and infrastructure code development.
Use an architecture that promotes reusability across independent teams.
Increase the quality of applications by abstracting common services that are available to all the teams.
Incrementally integrate new capabilities.
First of all, you might be interested in the SharpDevelop implementation. It is based on its own add-in system, known as AddInTree. It is a separate project and can be used within your own solutions for free. Everything is split across different add-ins, and add-ins are easily added/removed/configured by means of XML files. SharpDevelop is an open-source project, so you'll have a chance to examine how the infrastructure is put together, as well as the service bus and cross-add-in integration. The core add-in tree can easily be moved to a WPF project without complications.
The next option is the "Composite Client Application Guidance" (aka Prism, aka CompositeWPF) already mentioned earlier. You get Unity (ObjectBuilder) support out of the box, Event Aggregation, and a set of valuable design patterns already implemented.
If you want to do some of the low-level design and architecture yourself, MEF would be the best choice (though, having worked with all three, I personally like this one). This is what VS 2010 will be based on, so you can be sure the project won't lose support in the future.
My advice is to weigh these approaches and select the one that best and most efficiently suits your needs and the needs of your project.
Have a look at Prism
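A rough sketch of what a Prism module looks like (Prism 4 namespaces; older Composite Application Library releases use Microsoft.Practices.Composite.* instead, and ReportsView / MainRegion are example names):

```csharp
using Microsoft.Practices.Prism.Modularity;
using Microsoft.Practices.Prism.Regions;

// In a real project this would be a XAML UserControl in the module assembly.
public class ReportsView : System.Windows.Controls.UserControl { }

// Each module lives in its own class library and plugs its views into named
// regions of the shell window.
public class ReportsModule : IModule
{
    private readonly IRegionManager _regionManager;

    public ReportsModule(IRegionManager regionManager)
    {
        _regionManager = regionManager;
    }

    public void Initialize()
    {
        _regionManager.RegisterViewWithRegion("MainRegion", typeof(ReportsView));
    }
}
```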
