I am attempting to build a ReportHandler service to handle report creation. Reports can have a varying number of parameters that may be set. The system currently has several different ways of creating reports (MS Reporting Services, HTML reports, etc.) and the way the data is generated for each report is different. I am trying to consolidate everything into ActiveReports. I can't alter the system and change the parameters, so in some cases I will essentially get a where clause to generate the results, and in other cases I will get key/value pairs that I must use to generate the results. I thought about using the factory pattern, but because of the differing numbers of query filters this won't work.
I would love to have a single ReportHandler that would take my varied inputs and spit out reports. At this point I don't see any other way than to use a big switch statement to handle each report based on the reportName. Any suggestions on how I could solve this better?
From your description, if you're looking for a pattern that matches better than Factory, try Strategy:
Strategy Pattern
Your context could be a custom class which encapsulates and abstracts the different report inputs (you could use the Abstract Factory pattern for this part).
Your strategy could implement any number of different query filters or whatever additional logic is needed. And if you ever need to change the system in the future, you can switch between report tools simply by creating a new strategy.
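For illustration, a minimal sketch of what that could look like (the type names here are hypothetical, not taken from your system, and the report return type is left as object so it can be whatever your ActiveReports version uses):
using System.Collections.Generic;
using System.Linq;

// Hypothetical abstraction over the varied report inputs (where clause, key/value pairs, ...)
public class ReportRequest
{
    public string ReportName { get; set; }
    public string WhereClause { get; set; }                     // used by some reports
    public IDictionary<string, string> Parameters { get; set; } // used by others
}

// Each strategy knows how to turn one kind of input into a report
public interface IReportStrategy
{
    bool CanHandle(ReportRequest request);
    object BuildReport(ReportRequest request); // return your ActiveReports report type here
}

// The single ReportHandler just picks the first strategy that can handle the request
public class ReportHandler
{
    private readonly List<IReportStrategy> strategies;
    public ReportHandler(List<IReportStrategy> strategies) { this.strategies = strategies; }

    public object CreateReport(ReportRequest request) =>
        strategies.First(s => s.CanHandle(request)).BuildReport(request);
}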
Hope that helps!
In addition to the Strategy pattern, you can also create one adapter for each of your underlying solutions, then use Strategy to vary them. I've built something similar, with each report solution supported by what I called engines. In addition to the variable report solution we had a variable storage solution as well - output could be stored in SQL Server or on the file system.
I would suggest using a container and then initializing it with the correct engines, e.g.:
public class ReportContainer
{
    public ReportContainer(IReportEngine reportEngine, IStorageEngine storage, IDeliveryEngine delivery /* ... */)
    {
        // store the engines for later use by the container
    }
}
// In your service layer you resolve which engines to use,
// either with a bunch of if statements / a factory / config ...
IReportEngine rptEngine = EngineFactory.GetEngine<IReportEngine>(/* pass in some values */);
IStorageEngine stgEngine = EngineFactory.GetEngine<IStorageEngine>(/* pass in some values */);
IDeliveryEngine delEngine = EngineFactory.GetEngine<IDeliveryEngine>(/* pass in some values */);
ReportContainer currentContext = new ReportContainer(rptEngine, stgEngine, delEngine);
ReportContainer then delegates work to the dependent engines...
We had a similar problem and went with the concept of "connectors" that are interfaces between the main report generator application and the different report engines. By doing this, we were able to create a "universal report server" application. You should check it out at www.versareports.com.
Related
I want to use the TPL Dataflow for my .NET Core application and followed the example from the docs.
Instead of having all the logic in one file I would like to separate each TransformBlock and ActionBlock (I don't need the other ones yet) into their own files. A small TransformBlock example converting integers to strings:
class IntToStringTransformer : TransformBlock<int, string>
{
    public IntToStringTransformer() : base(number => number.ToString()) { }
}
and a small ActionBlock example writing strings to the console:
class StringWriter : ActionBlock<string>
{
    public StringWriter() : base(Console.WriteLine) { }
}
Unfortunately this won't work because the block classes are sealed. Is there a way I can organize those blocks into their own files?
Dataflow steps/blocks/goroutines are fundamentally functional in nature and best organized as modules of factory functions, not separate classes. A TPL DataFlow pipeline is quite similar to a pipeline of function calls in F#, or any other language. In fact, one could look at it as a PowerShell pipeline, except it's easier to write.
There's no need to create a class or implement an interface to add a new function to that pipeline, you just add it and redirect the output to the next function.
TPL Dataflow blocks provide the primitives to construct a pipeline already and only require a transformation function. That's why they are sealed, to prevent misuse.
The natural way to organize dataflows is similar to F# too - create libraries with the functions that perform each job, putting them in modules of related functions. Those functions are stateless, so they can easily go into a static library, just like extension methods.
For example, there could be one module for database related functions that perform bulk inserts or read data, another to handle exports to various file formats, separate classes to call external web services, another to parse specific message formats.
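As an illustration, a minimal sketch of such a module (the module name and the CSV example are made up for this answer):
using System.Collections.Generic;
using System.Threading.Tasks.Dataflow;

// A module of related, stateless worker functions plus factory methods
// that wrap them in dataflow blocks.
public static class ExportBlocks
{
    // Pure function, easy to unit test on its own
    public static string ToCsvLine(IEnumerable<string> fields) =>
        string.Join(",", fields);

    // Factory method that produces a configured block from the function above
    public static TransformBlock<IEnumerable<string>, string> CreateCsvFormatter(int dop = 1) =>
        new TransformBlock<IEnumerable<string>, string>(
            ToCsvLine,
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = dop });
}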
A real example
For the last 7 years I've been working with several complex pipelines for an Online Travel Agency (OTA). One of them calls several GDSs (the intermediaries between OTAs and airlines) to retrieve transaction information - ticket issues, refunds, cancellations etc. The next step retrieves the ticket records, i.e. the detailed ticket information. Finally, the records are inserted into the database.
GDSs are too big to bother with standards, so their "SOAP" web services aren't even SOAP-compliant, much less follow WS-* standards. So each GDS needs a separate class library to call its services and parse the outputs. No dataflows there yet; that project is already complex enough.
Writing the data to the database is pretty much always the same, so there's a separate project with methods that take e.g. an IEnumerable<T> and write it to the database with SqlBulkCopy.
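A rough sketch of what such a write method could look like (simplified to take a DataTable rather than a generic IEnumerable<T>; the class and table names are placeholders, not the actual project code):
using System.Data;
using System.Data.SqlClient;

public static class TicketWriter
{
    // Bulk-insert a batch of rows into the destination table
    public static void WriteTickets(string connectionString, DataTable tickets)
    {
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.Tickets"; // hypothetical table name
            bulk.WriteToServer(tickets);
        }
    }
}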
It's not enough to load new data though; things often go wrong, so I need to be able to reload already stored ticket information.
Organisation
To preserve sanity:
Each pipeline gets its own file:
A Daily pipeline to load new data,
A Reload pipeline to load all stored data
A "Rerun" pipeline to use the existing data and ask again for any missing data.
Static classes are used to hold the worker functions and, separately, the factory methods that produce Dataflow blocks based on configuration. E.g., a CreateLogger(path, level) creates an ActionBlock<Message> that logs specific messages.
Common dataflow extension methods - since Dataflow blocks follow the same basic patterns, it's easy to create a logged block by combining e.g. a Func<TIn, TOut> and a logger block, or to create a LinkTo overload that redirects bad records to a logger or database. These are common enough that they can become extension methods.
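As an illustration, a minimal sketch of one such extension method (a hypothetical example, not the actual code from those pipelines):
using System;
using System.Threading.Tasks.Dataflow;

public static class DataflowExtensions
{
    // LinkTo with completion propagation, which is almost always what a pipeline wants
    public static IDisposable LinkWithCompletion<T>(this ISourceBlock<T> source, ITargetBlock<T> target) =>
        source.LinkTo(target, new DataflowLinkOptions { PropagateCompletion = true });
}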
If those pipelines were all in the same file, it would be very hard to edit one without affecting another. Besides, there's a lot more to a pipeline than the core tasks, e.g.:
Logging
Handling bad records and partial results (can't stop a 100K import for 10 errors)
Error handling (which isn't the same as handling bad records)
Monitoring - what's this monster been doing for the last 15 minutes? Did DOP=10 improve performance at all?
Don't create a parent pipeline class.
Some of the steps are common, so at first I created a parent class with common steps that got overridden or simply replaced in child classes. VERY BAD IDEA. Each pipeline is similar but not quite the same, and inheritance means that modifying one step or one connection risks breaking everything. After about a year things became unbearable, so I split the parent class into separate classes.
As @Panagiotis explained, I think you have to put aside the OOP mindset a little.
What you have with Dataflow are building blocks that you configure to execute what you need. I'll try to give a little example of what I mean by that:
// Interface and impl. are in separate files. Actually, they could
// even be in a different project ...
public interface IMyComplicatedTransform
{
    Task<string> TransformFunction(int input);
}

public class MyComplicatedTransform : IMyComplicatedTransform
{
    public Task<string> TransformFunction(int input)
    {
        // Some complex logic, e.g.:
        return Task.FromResult(input.ToString());
    }
}
class DataFlowUsingClass
{
    private readonly IMyComplicatedTransform myTransformer;
    private TransformBlock<int, string> myTransform;
    // ... some more blocks ...

    public DataFlowUsingClass()
    {
        myTransformer = new MyComplicatedTransform(); // maybe use ctor injection?
        CreatePipeline();
    }

    private void CreatePipeline()
    {
        // create blocks
        myTransform = new TransformBlock<int, string>(myTransformer.TransformFunction);
        // ... init some more blocks
        // TODO link blocks
    }
}
I think this is the closest to what you are looking to do.
What you end up with is a set of interfaces and implementations which can be tested independently. The client basically boils down to "glue code".
Edit: As @Panagiotis correctly states, the interfaces are even superfluous. You could do without them.
I am writing a piece of software in C# .NET 4.0 and am running into a wall in making sure that the code-base is extensible, re-usable and flexible in a particular area.
We have data coming into it that needs to be broken down in discrete organizational units. These units will need to be changed, sorted, deleted, and added to as the company grows.
No matter how we slice the data structure we keep running into a boat-load of conditional statements (upwards of 100 or so to start) that we are trying to avoid, allowing us to modify the OUs easily.
We are hoping to find an object-oriented method that would allow us to route the object to different workflows based on properties of that object without having to add switch statements every time.
So, for example, let's say I have an object called "Order" come into the system. This object has 'orderItems' inside of it. Each of those different kinds of 'orderItems' would need to fire a different function in the code to be handled appropriately. Each 'orderItem' has a different workflow. The conditional looks basically like this -
if (order.orderitem == "photo")
{ /* do this */ }
else if (order.orderitem == "canvas")
{ /* do this */ }
Edit: Trying to clarify.
I'm not sure your question is very well defined; you need a lot more specifics here - a sample piece of data, a sample piece of code, what you have tried...
No matter how we slice the data structure we keep running into a boat-load of conditional statements (upwards of 100 or so to start) that we are trying to avoid
This usually means you're trying to encode data in your code - just add a data field (or a few).
Chances are your ifs are linked to each other; it's hard to come up with 100 independent ifs - that would imply 100 independent branches for 100 independent data conditions. I haven't encountered anything in my career that really required hard-coding 100 ifs.
Worst case, you can make an additional data field contain a config file or even a script of your choice. Either way, your data is incomplete if you need 100 ifs.
With the update you've put in your question, here's one simple, kind of low-tech approach. You can do better with dependency injection and some configuration, but that can get excessive too, so be careful:
public class OrderHandler{
    public static Dictionary<string, OrderHandler> Handlers = new Dictionary<string, OrderHandler>(){
        {"photo", new PhotoHandler()},
        {"canvas", new CanvasHandler()},
    };
    // The base Handle looks up the concrete handler by order type and delegates to it;
    // derived classes override Handle with the actual processing logic.
    public virtual void Handle(Order order){
        var handler = Handlers[order.OrderType];
        handler.Handle(order);
    }
}
public class PhotoHandler : OrderHandler{ /* override Handle for photo items */ }
public class CanvasHandler : OrderHandler{ /* override Handle for canvas items */ }
What you could do is called "Message-Based Routing" or "Message Content-Based Routing", depending on how you implement it.
In short, instead of using conditional statements in your business logic, you should implement organizational units to look for the messages they are interested in.
For example:
Say your organization has the following departments - "Plant Products", "Paper Products", "Utilities". Say there is only one place where the orders come in - the Ordering module.
Here is a sample incoming message:
Party: "ABC Corp"
Department: "Plant Products"
Qty: 50
Product: "Some plant"
Publish out a message with this information. In the module that processes orders for "Plant Products", configure it such that it listens to messages that have "Department = Plant Products". This way, you push the onus onto the department modules instead of the main ordering module.
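As a rough, ESB-agnostic sketch of that idea (all type and member names here are made up for illustration):
using System;
using System.Collections.Generic;

public class OrderMessage
{
    public string Party { get; set; }
    public string Department { get; set; }
    public int Qty { get; set; }
    public string Product { get; set; }
}

// Each department module registers a predicate describing the messages it wants;
// the router itself contains no department-specific conditionals.
public class MessageRouter
{
    private class Subscription
    {
        public Func<OrderMessage, bool> Wants;
        public Action<OrderMessage> Handle;
    }

    private readonly List<Subscription> subscribers = new List<Subscription>();

    public void Subscribe(Func<OrderMessage, bool> wants, Action<OrderMessage> handle)
    {
        subscribers.Add(new Subscription { Wants = wants, Handle = handle });
    }

    public void Publish(OrderMessage message)
    {
        foreach (var subscription in subscribers)
        {
            if (subscription.Wants(message))
                subscription.Handle(message);
        }
    }
}

// The "Plant Products" module subscribes only to its own orders:
// router.Subscribe(m => m.Department == "Plant Products", plantProductsModule.Process);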
You can do this using NServiceBus, BizTalk, or any other ESB you might already have.
This is how you do it in BizTalk, and this is how you can do it in NServiceBus.
Have you considered sub-typing OrderItem?
public class PhotoOrderItem : OrderItem {}
public class CanvasOrderItem : OrderItem {}
Another option would be to use the Strategy pattern. Add an extra property to your OrderItem class definition for the OrderProcessStrategy and use a PhotoOrderStrategy/CanvasOrderStrategy to contain all of the different logic.
public class OrderItem{
    public IOrderItemStrategy Strategy;
}

public interface IOrderItemStrategy{
    void Checkout();
    Control CheckoutStub{ get; }
    bool PreCheckoutValidate();
}
public class PhotoOrderStrategy : IOrderItemStrategy{}
public class CanvasOrderStrategy : IOrderItemStrategy{}
Taking the specific example:
You could have some Evaluator that takes an order and iterates over each line item. Instead of processing if-logic, raise events that carry the photo or canvas details in their event arguments.
Have a collection of 'Initiator' objects that define: 1) a handler that can process Evaluator messages, 2) a simple bool that can be set to indicate whether they know what to do with something in the message, and 3) an Action or Process method which can perform or initiate the workflow. Design an interface to abstract these.
Issue the messages. Visit each Initiator and ask it if it can process the line item; if it can, tell it to do so. The processing is kicked off by the Initiators, and they can call other workflows etc.
Name the pieces outlined above whatever best suits your domain. This should offer some flexibility. Problems may arise depending on concurrent processing requirements and workflow dependencies between the Initiators.
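A bare-bones sketch of the Initiator abstraction described above (the names and message type are only suggestions, not from the original system):
using System.Collections.Generic;

public class OrderItemMessage { public object Item; }   // hypothetical event-argument type

public interface IInitiator
{
    bool CanProcess(OrderItemMessage message); // 2) does it know what to do with this item?
    void Process(OrderItemMessage message);    // 3) perform or initiate the workflow
}

// 1) the Evaluator iterates the line items and visits each Initiator
public class Evaluator
{
    private readonly IEnumerable<IInitiator> initiators;
    public Evaluator(IEnumerable<IInitiator> initiators) { this.initiators = initiators; }

    public void Evaluate(IEnumerable<object> lineItems)
    {
        foreach (var item in lineItems)
        {
            var message = new OrderItemMessage { Item = item };
            foreach (var initiator in initiators)
            {
                if (initiator.CanProcess(message))
                    initiator.Process(message);
            }
        }
    }
}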
In general, without knowing a lot more detail, size of the project, workflows, use cases etc it is hard to comment.
I'm trying to design the application in the right manner. It should:
Read invoice data from SQL Server (2 queries depending on the type of invoice: sales or purchase)
Process it (Acme may need fewer fields than SugarCorp and in different formatting)
Output txt or csv (that may change in the future)
I found the factory pattern helpful, so I prepared a UML diagram according to my concern.
Each InvoiceFactoryProvider can generate a PInvoice or SInvoice (specific to them). CreatePInvoice() and CreateSInvoice() should call the load() and save() methods.
How do I couple load() with the SQLReader class to get each row as a PInvoice object, and save() with my IDataWriter interface? Could you provide some example / advice?
Edit:
After reviewing examples of Bridge Pattern, as Atul suggested, I created a class diagram for this problem using it, which looks like this:
Invoice SQL queries may vary (the application may load invoice data from different systems - PollosInvoice or StarInvoice), as may how they are processed (different implementations).
In this case I decoupled the abstraction - Invoice - from its implementation - exporting the invoice to certain software (AcmeExporter or SigmaExporter). AcmeExporter and SigmaExporter will set their fields according to the specification - date of transaction, payment method, invoice type, etc. - taken from the Invoice's DataTable. ExportInvoice() will return a DataTable with the needed data. InvoiceExporter also uses two interfaces, for encoding and file format.
What do you think about it? What kind of flaws / advantages does it have?
Currently it looks like you are using the Abstract Factory design pattern for the creation of your products (invoices). But the point to note is that your load and save methods are inside the product (invoice), so in that case it's better to go with the Bridge design pattern. Your product will use the implementations of Reader and Writer to load and save the records.
Note: You would be able to use Abstract Factory even with this design pattern.
It will look something like the below (just an analogy):
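Since the original diagram isn't reproduced here, here is a rough code sketch of the bridge idea; all type names and the query details are illustrative only:
using System.Data;

// Implementor side: how invoice data is read and written can vary independently
public interface IInvoiceReader
{
    DataTable Load(string invoiceType); // e.g. the sales or purchase query
}

public interface IDataWriter
{
    void Save(DataTable data, string path);
}

// Abstraction side: the invoice holds references to its implementors (the bridge)
public abstract class Invoice
{
    protected readonly IInvoiceReader reader;
    protected readonly IDataWriter writer;

    protected Invoice(IInvoiceReader reader, IDataWriter writer)
    {
        this.reader = reader;
        this.writer = writer;
    }

    public abstract void Export(string path);
}

public class PInvoice : Invoice
{
    public PInvoice(IInvoiceReader reader, IDataWriter writer) : base(reader, writer) { }

    public override void Export(string path)
    {
        var data = reader.Load("purchase");   // load() delegates to the reader implementor
        // ... format fields as the target system expects ...
        writer.Save(data, path);              // save() delegates to the writer implementor
    }
}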
Small design question here. I'm trying to develop a calculation app in C#. I have a class, let's call it InputRecord, which holds hundreds of fields (multi-dimensional arrays). This InputRecord class will be used in a number of CalculationEngines. Each CalculationEngine can make changes to a number of fields in the InputRecord. These changes are steps needed for its calculation.
Now I don't want the local changes made to the InputRecord to be used in other CalculationEngine classes.
The first solution that comes to mind is using a struct: these are value types. However, I'd like to use inheritance: each CalculationEngine needs a few fields only relevant to that engine, so it has its own InputRecord based on BaseInputRecord.
Can anyone point me to a design that will help me accomplish this?
If you really have a lot of data, using structs or common cloning techniques may not be very space-efficient (i.e. they would use a lot of memory).
Sounds like a design where you need to have a "master store" and a "diff store", analogous to how an RDBMS has data files and transactions.
Basically, you need to keep a list of the changes performed per calculation engine, and use the master values for items which aren't affected by any changes.
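For instance, a very rough sketch of that master/diff idea (the field and type names are made up for illustration):
using System.Collections.Generic;

// Read-only master data shared by every engine, plus a per-engine overlay of changes
public class InputRecordView
{
    private readonly IDictionary<string, double[]> master;
    private readonly Dictionary<string, double[]> localChanges = new Dictionary<string, double[]>();

    public InputRecordView(IDictionary<string, double[]> master) { this.master = master; }

    public double[] Get(string field)
    {
        double[] changed;
        return localChanges.TryGetValue(field, out changed) ? changed : master[field];
    }

    // Writes only touch the local diff store, never the shared master
    public void Set(string field, double[] value) { localChanges[field] = value; }
}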
The elegant solution would be to not change the InputRecord at all. That would allow sharing (and parallel processing).
If that is not an option, you will have to clone the data. Give each derived class a constructor that takes the base InputRecord as a parameter.
You can declare a Clone() method on your BaseInputRecord, then pass a copy to each CalculationEngine.
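A minimal sketch of that approach (the Rates field is a hypothetical stand-in for the hundreds of real fields):
public class BaseInputRecord
{
    public double[,] Rates;   // hypothetical field; the real class holds hundreds of these

    public virtual BaseInputRecord Clone()
    {
        var copy = new BaseInputRecord();
        // deep-copy the arrays so an engine's changes stay local to its own copy
        copy.Rates = Rates == null ? null : (double[,])Rates.Clone();
        return copy;
    }
}

// Usage: each engine works on its own copy of the shared input
// var localInput = sharedInput.Clone();
// engine.Calculate(localInput);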
I've read about it, I understand it's basic function--I'd like to know an example of a common, real-life use for this pattern.
For reference, I work mostly with business applications, web and windows, using the Microsoft stack.
Think of an Itinerary builder. There are lots of things you can add to your Itinerary, like hotels, rental cars and airline flights, and the cardinality of each is 0 to *. Alice might have a car and a hotel, while Bob might have two flights, no car and three hotels.
It would be very hard to create a concrete factory, or even an abstract factory, to spit out an Itinerary. What you need is a factory where you can have different steps - certain steps happen, others don't - and which generally produces very different types of objects as a result of the creation process.
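A small sketch of what such a builder might look like (all types here are invented for the example):
using System.Collections.Generic;

public class Itinerary
{
    public List<string> Flights = new List<string>();
    public List<string> Hotels = new List<string>();
    public List<string> Cars = new List<string>();
}

// Each step is optional and repeatable, which is exactly what a plain factory struggles with
public class ItineraryBuilder
{
    private readonly Itinerary itinerary = new Itinerary();

    public ItineraryBuilder AddFlight(string flight) { itinerary.Flights.Add(flight); return this; }
    public ItineraryBuilder AddHotel(string hotel)   { itinerary.Hotels.Add(hotel);   return this; }
    public ItineraryBuilder AddCar(string car)       { itinerary.Cars.Add(car);       return this; }

    public Itinerary Build() { return itinerary; }
}

// Alice: one car and a hotel; Bob: two flights and three hotels
// var alice = new ItineraryBuilder().AddCar("compact").AddHotel("Grand Hotel").Build();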
In general, you should start with factory and go to builder only if you need higher grain control over the process.
Also, there is a good description, code examples and UML at Data & Object Factory.
Key use cases:
When the end result is immutable, but doing it all with a constructor would be too complicated
When I want to partially build something and reuse that partially built thing, but customize it at the end each time
When you start with the factory pattern, but the thing being built by the factory has too many permutations
In summary, builder keeps your constructors simple, yet permits immutability.
You said C#, but here's a trivial Java example:
StringBuilder sb = new StringBuilder();
sb.append("Hello");
sb.append(" ");
sb.append("World!");
System.out.println(sb.toString());
As opposed to:
String msg = "";
msg += "Hello";
msg += " ";
msg += "World!";
System.out.println(msg);
EDIT: You will see in my comments that I may have rushed into answering this question, and confused myself in the process. I will go ahead and edit this to work with the Abstract Factory, as I think I originally intended, but please note that this is mainly for reference, not necessarily as a response to the original question.
The most common example I've seen described deals with how GUI components are built.
For example, if you were designing a form for your application, whose GUI components could take on multiple representations (perhaps based on which platform you were running on), you would design an abstract factory to handle the creation of those components.
In order to add new controls to the form, the code might look something like this:
public MyForm()
{
    GuiFactory factory = new Win32Factory();
    Button btn = factory.CreateButton();
    btn.Text = "Go!";
    btn.Location = new Point(15, 50);
    this.Controls.Add(btn);
}
This satisfies the Abstract Factory pattern because you can create different instances of the factory object to create different representations of your created objects without changing the client code (this is a rudimentary example, but I think normally you wouldn't create the Win32Factory using new, it would be acquired via some other abstraction).
A common example that we used to see all the time was a "sysgen" of an operating system. You had a process that selected all the modules you needed, configured them, and returned a bootable image that had been customized.
One use case that I've encountered is when there are multiple data sources. The particular case involved a cache and a database. The majority of the data was pulled from the cache, or not. A second loader looked at the data to see whether or not it had been loaded from the cache, and would query the database to finish populating the data.
Sometimes it can be helpful to think of non-software related example of design patterns in order to understand them.
I have a system I am working on now that uses a builder to create an order. The order is a class composed of several other classes. My builder creates and validates the associated classes, and if all are valid it then creates an instance of the order class. This way I can be sure that I never have an instance of an order that is missing data.
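As a rough sketch of that kind of validating builder (the domain classes and rules here are placeholders, not the actual system):
using System;
using System.Collections.Generic;

public class Customer { public string Name; }
public class OrderLine { public string Sku; public int Quantity; }

public class Order
{
    public Customer Customer { get; private set; }
    public List<OrderLine> Lines { get; private set; }

    // The constructor is only reachable through the builder, so an Order is always complete
    internal Order(Customer customer, List<OrderLine> lines)
    {
        Customer = customer;
        Lines = lines;
    }
}

public class OrderBuilder
{
    private Customer customer;
    private readonly List<OrderLine> lines = new List<OrderLine>();

    public OrderBuilder ForCustomer(Customer c) { customer = c; return this; }

    public OrderBuilder AddLine(string sku, int quantity)
    {
        lines.Add(new OrderLine { Sku = sku, Quantity = quantity });
        return this;
    }

    // Validate every associated piece before an Order instance can exist
    public Order Build()
    {
        if (customer == null || string.IsNullOrEmpty(customer.Name))
            throw new InvalidOperationException("Order requires a customer.");
        if (lines.Count == 0)
            throw new InvalidOperationException("Order requires at least one line.");
        return new Order(customer, lines);
    }
}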