I am writing an application that will export a bunch of data from a database into files with specific file structures. There is only one 'type' of export defined so far but there will be more so I want to make it easy for new 'types' to be plugged in.
I have defined the following interface that all export 'types' must implement:
public interface IExportService
{
    ExportType ExportType { get; }
    Task DoExport(Store store);
}
The ExportType is an enum, and the Store object represents a customer (rather than a data store of some kind). So far there is only one class that implements this interface: RetailExportService.
public class RetailExportService : IExportService
{
    public ExportType ExportType
    {
        get { return ExportType.Retail; }
    }

    public async Task DoExport(Store store)
    {
        List<IRetailExport> retailExports = GetRetailExportFiles();
        foreach (var retailExport in retailExports)
        {
            await retailExport.Export(store);
        }
    }

    private List<IRetailExport> GetRetailExportFiles()
    {
        // Skip abstract types so Activator.CreateInstance does not throw.
        return (from t in Assembly.GetExecutingAssembly().GetTypes()
                where t.IsClass && !t.IsAbstract
                      && t.GetInterfaces().Contains(typeof(IRetailExport))
                select Activator.CreateInstance(t) as IRetailExport).ToList();
    }
}
This class loops through all implementations of IRetailExport in the assembly and calls their Export method. The actual querying of data and creation of files is done in the Export method.
public interface IRetailExport
{
    string FileName { get; }
    Task Export(Store store);
}
So, if a new file must be created I can just create a new class that implements IRetailExport and this will automatically be called by the application.
The problem is that I have 13 classes implementing IRetailExport, and 5 of them require the same data. At the moment I query the database in each of these classes, but this is a bad idea and slows down the application.
The only way I can think of doing this is to define an interface like so:
public interface IDataRequired<T> where T : class
{
    IEnumerable<T> Data { get; set; }
}
and have the classes that require the same data implement this interface. In the DoExport() method I can then check whether the class implements IDataRequired; if so, I populate its Data property:
public async Task DoExport(Store store)
{
    List<IRetailExport> retailExports = GetRetailExportFiles();
    List<ExpRmProductIndex> requiredData = await GetIndexedProductList(store.Id);
    foreach (var retailExport in retailExports)
    {
        if (retailExport is IDataRequired<ExpRmProductIndex>)
            (retailExport as IDataRequired<ExpRmProductIndex>).Data = requiredData;
        await retailExport.Export(store);
    }
}
However, I don't think this is a very elegant solution, so I was hoping someone here could suggest a better way of approaching this? Thanks!
Reading from or writing to many files in parallel tasks is not a good idea, as it makes the disk head jump between the files, which causes delays. (I assume this happens if you start the exports in parallel rather than awaiting each one inside the loop.)
Instead, process the files one after the other in strict sequential order, awaiting each export before starting the next.
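For illustration, here is a sketch of the two approaches using the question's types (IRetailExport and Store come from the question; the method name is assumed):

```csharp
public async Task ExportSequentially(List<IRetailExport> retailExports, Store store)
{
    // Parallel alternative (avoid for many files on one spinning disk,
    // since the writes interleave):
    // await Task.WhenAll(retailExports.Select(e => e.Export(store)));

    // Sequential: each export completes before the next begins, so the
    // disk sees strictly sequential access.
    foreach (var retailExport in retailExports)
    {
        await retailExport.Export(store);
    }
}
```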
For my project I need to send metrics to AWS.
I have a main class called SendingMetrics.
private CPUMetric _cpuMetric;
private RAMMetric _ramMetric;
private HDDMetric _hddMetric;
private CloudWatchClient _cloudwatchClient; // AWS client with a Send() method that sends metrics to AWS

public SendingMetrics()
{
    _cpuMetric = new CPUMetric();
    _ramMetric = new RAMMetric();
    _hddMetric = new HDDMetric();
    _cloudwatchClient = new CloudWatchClient();
    InitializeTimer();
}

private void InitializeTimer()
{
    // Here I initialize a Timer object which calls SendMetrics() every 60 seconds.
}

private void SendMetrics()
{
    SendCPUMetric();
    SendRAMMetric();
    SendHDDMetric();
}

private void SendCPUMetric()
{
    _cloudwatchClient.Send("CPU_Metric", _cpuMetric.GetValue());
}

private void SendRAMMetric()
{
    _cloudwatchClient.Send("RAM_Metric", _ramMetric.GetValue());
}

private void SendHDDMetric()
{
    _cloudwatchClient.Send("HDD_Metric", _hddMetric.GetValue());
}
I also have CPUMetric, RAMMetric and HDDMetric classes that look pretty much alike, so I will just show the code of one of them.
internal sealed class CPUMetric
{
    private int _cpuThreshold;

    public CPUMetric()
    {
        _cpuThreshold = 95;
    }

    public int GetValue()
    {
        var currentCpuLoad = ... // logic for getting machine CPU load
        if (currentCpuLoad > _cpuThreshold)
        {
            return 1;
        }
        else
        {
            return 0;
        }
    }
}
So the problem is that my example does not satisfy clean-coding principles. I have 3 metrics to send, and if I need to introduce a new metric I have to create a new class, initialize it in the SendingMetrics class, and modify that class; that is not what I want. I want to satisfy the Open/Closed principle: open for extension but closed for modification.
What is the right way to do it? I would move the send methods (SendCPUMetric, SendRAMMetric, SendHDDMetric) to the corresponding classes (SendCPUMetric to CPUMetric, SendRAMMetric to RAMMetric, etc.), but how do I modify the SendingMetrics class so that it is closed for modification and does not have to change when I add a new metric?
In object-oriented languages like C#, the Open/Closed Principle (OCP) is usually achieved through polymorphism: objects of the same kind react differently to one and the same message. Looking at your SendingMetrics class, it's obvious that it works with different types of metrics. The good thing is that SendingMetrics talks to all types of metrics in the same way, by sending the message GetValue. Hence you can introduce a new abstraction by creating an interface IMetric that is implemented by the concrete metric types. That way you decouple your SendingMetrics class from the concrete metric types, which means the class does not know about the specific types. It only knows IMetric and treats them all the same way, which makes it possible to add any new collaborator (type of metric) that implements the IMetric interface (open for extension) without the need to change the SendingMetrics class (closed for modification). This also requires that the metric objects are not created inside the SendingMetrics class but, for example, by a factory or outside the class, and injected as IMetrics.
In addition to using polymorphism to achieve the OCP by introducing the IMetric interface, you can also use inheritance to remove redundancy: introduce an abstract base class for all metric types that implements the behaviour they share.
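A minimal sketch of that IMetric abstraction (the interface, the Name property, and constructor injection are assumptions for illustration; CloudWatchClient is the client from the question, assumed to have a Send(string, int) method):

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IMetric
{
    string Name { get; }
    int GetValue();
}

public sealed class SendingMetrics
{
    private readonly IReadOnlyList<IMetric> _metrics;
    private readonly CloudWatchClient _cloudwatchClient;

    // The metrics are injected, so adding a new metric type
    // requires no change to this class.
    public SendingMetrics(IEnumerable<IMetric> metrics, CloudWatchClient client)
    {
        _metrics = metrics.ToList();
        _cloudwatchClient = client;
    }

    private void SendMetrics()
    {
        foreach (var metric in _metrics)
            _cloudwatchClient.Send(metric.Name, metric.GetValue());
    }
}
```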
Your design is almost correct. You have 3 data retrievers and 1 data sender, so it's easy to add more metrics (more retrievers) without affecting the current ones (open for extension, closed for modification); you just need a bit more refactoring to reduce the duplicated code.
Instead of having 3 nearly identical metric classes, where only the line below differs:
var currentCpuLoad = ... //logic for getting machine CPU load
you can create a generic metric like this:
internal interface IGetMetric
{
    int GetData();
}

internal sealed class Metric
{
    private int _threshold;
    private IGetMetric _getDataService;

    public Metric(IGetMetric getDataService)
    {
        _threshold = 95;
        _getDataService = getDataService;
    }

    public int GetValue()
    {
        var currentLoad = _getDataService.GetData();
        if (currentLoad > _threshold)
        {
            return 1;
        }
        else
        {
            return 0;
        }
    }
}
Then just create 3 classes implementing that interface. This is just one way to reduce the code duplication. You could also use inheritance (though I don't like inheritance), or a Func parameter.
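The Func-based variant mentioned above could look like this (a sketch with assumed names, not code from the answer):

```csharp
using System;

internal sealed class FuncMetric
{
    private readonly int _threshold;
    private readonly Func<int> _getValue;

    // The reading logic is passed in as a delegate instead of an interface.
    public FuncMetric(Func<int> getValue, int threshold = 95)
    {
        _getValue = getValue;
        _threshold = threshold;
    }

    public int GetValue()
    {
        return _getValue() > _threshold ? 1 : 0;
    }
}

// Usage: var cpuMetric = new FuncMetric(() => ReadCpuLoad());
```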
UPDATED: added class to get CPU metric
internal class CPUMetricService : IGetMetric
{
    public int GetData() { return ....; }
}

internal class RAMMetricService : IGetMetric
{
    public int GetData() { return ....; }
}

public class AllMetrics
{
    private List<Metric> _metrics = new List<Metric>()
    {
        new Metric(new CPUMetricService()),
        new Metric(new RAMMetricService())
    };

    public void SendMetrics()
    {
        _metrics.ForEach(m => ....);
    }
}
I'm confused about which methods should be included in the model class and which should be written in a service class.
This is my scenario:
I'm writing a music store app, and my models designed as below
public class Album
{
    public string Title { get; set; }
    public double Price { get; set; }
    public List<Music> MusicFiles { get; set; }
}
public class Music
{
    public string Title { get; set; }
    public string Duration { get; set; }
}
Users can do such operations:
Download a whole album or some specific music files;
Delete local files;
Add album to favorite list;
Remove album from favorite list.
Should I put methods such as Download in the model or in another service class? If I put them in the model, the model has to reference some other classes. My current solutions are:
solution 1: create IDownload/IFavorite interfaces and let the models implement them, so the methods are included in the models;
solution 2: create an abstract class containing all the properties related to the download and favorite operations; let the models inherit from the abstract class; create DownloadService and FavoriteService classes to implement the details of the operations, passing arguments like below:
AbstractClass obj1 = new MusicFile();
AbstractClass obj2 = new Album();
Which solution is sensible, or is there another solution? Thanks!
A better way is to abstract the music artifact behind an interface when calling download, so you have the ability to change or add new artifact types without changing the calling code. This is based on my understanding of the question.
Please consider this pseudocode; write your own Java code with proper syntax.
// Client call
DownloadStore store = new DownloadStore(myMusicFile);
store.download();

DownloadStore store = new DownloadStore(myAlbum);
store.download();

// Your download store
DownloadStore {
    IMusicArtifact artifact;

    DownloadStore(IMusicArtifact artifact) {
        this.artifact = artifact;
    }

    public download() {
        // write common code for any artifact...
        // the artifact-specific implementation is called here
        artifact.download();
    }
}

// Your interface
IMusicArtifact {
    download();
}

// Your concrete class
MusicFile implements IMusicArtifact {
    download() {
        // music-file-related download stuff
    }
}

// Your concrete class
Album implements IMusicArtifact {
    download() {
        // album-related download stuff
    }
}
I think the cleanest solution would be a dedicated service class, e.g. a Downloader. If Download is a frequently used operation, you may introduce a facade on the music file class or one of its base classes to improve the code's understandability.
The answer to your question whether to put the download method in interfaces or in an abstract base class depends on how you think these operations will be used. If you access for instance the download operation primarily as an ability, e.g. you want to download a lot of stuff and don't really care what these items are, then an interface is the best choice. The reason for this is that an interface does not restrict your inheritance hierarchy, whereas an abstract base class does.
An abstract base class is good if you can share the implementation of operations across multiple classes. So if downloading an album uses the same code as downloading a music file, an abstract class with a shared implementation is more appropriate.
Often, you use objects based on their ability to do certain stuff and the implementation of this stuff is indeed shared. In that case, the best way is to use an interface and a separate abstract base class that contains the shared code. This way, you use the advantages of both an interface and an abstract base class. If you look in the BCL, e.g. in ADO.NET a lot of concepts are implemented this way.
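A minimal sketch of that combination of interface plus shared abstract base class (all names here are assumed for illustration, not from the question):

```csharp
public interface IDownloadable
{
    void Download();
}

// Shared implementation lives in the base class...
public abstract class DownloadableBase : IDownloadable
{
    public void Download()
    {
        // common download logic, e.g. opening the connection,
        // then delegating to the type-specific step
        DownloadCore();
    }

    protected abstract void DownloadCore();
}

// ...while callers only ever depend on the IDownloadable interface.
public class Album : DownloadableBase
{
    protected override void DownloadCore() { /* album-specific logic */ }
}

public class MusicFile : DownloadableBase
{
    protected override void DownloadCore() { /* file-specific logic */ }
}
```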
C#. I have a base class called FileProcessor:
class FileProcessor
{
    protected string m_sPath;

    public string Path { get { return m_sPath; } }

    public FileProcessor(string path)
    {
        m_sPath = path;
    }

    public virtual void Process() { }
}
Now I'd like to create two other classes, ExcelProcessor and PDFProcessor:
class ExcelProcessor : FileProcessor
{
    public ExcelProcessor(string path) : base(path) { }

    public override void Process()
    {
        // do different stuff from PDFProcessor
    }
}
Same for PDFProcessor. A file is Excel if its Path ends with ".xlsx" and PDF if it ends with ".pdf". I could have a ProcessingManager class:
class ProcessingManager
{
    protected BlockingQueue m_list;
    protected Thread m_thread;

    public ProcessingManager()
    {
        m_list = new BlockingQueue();
        m_thread = new Thread(ThreadFunc);
        m_thread.Start(this);
    }

    public void AddProcessJob(string path)
    {
        m_list.Add(path);
    }

    public static void ThreadFunc(object param) // this is a thread func
    {
        ProcessingManager _this = (ProcessingManager)param;
        while (some_condition)
        {
            string fPath = _this.m_list.Dequeue();
            if (fPath.EndsWith(".pdf"))
            {
                new PDFProcessor(fPath).Process();
            }
            if (fPath.EndsWith(".xlsx"))
            {
                new ExcelProcessor(fPath).Process();
            }
        }
    }
}
I am trying to make this as modular as possible. Suppose, for example, that I'd like to add ".doc" processing: I'd have to add a check inside the manager and implement another DOCProcessor.
How could I do this without modifying ProcessingManager? I also don't know whether my manager design is good enough, so please share any suggestions you have.
I'm not entirely sure I understand your problem, but I'll give it a shot.
You could use the Factory pattern:
class FileProcessorFactory
{
    public IFileProcessor GetFileProcessor(string extension)
    {
        switch (extension)
        {
            case ".pdf":
                return new PdfFileProcessor();
            case ".xls":
                return new ExcelFileProcessor();
            default:
                throw new NotSupportedException("Unknown extension: " + extension);
        }
    }
}

interface IFileProcessor
{
    object ProcessFile(Stream inputFile);
}

class PdfFileProcessor : IFileProcessor
{
    public object ProcessFile(Stream inputFile)
    {
        // do things with your inputFile
    }
}

class ExcelFileProcessor : IFileProcessor
{
    public object ProcessFile(Stream inputFile)
    {
        // do things with your inputFile
    }
}
This should make sure you are using the FileProcessorFactory to get the correct processor, and the IFileProcessor interface ensures you don't implement a different signature for each processor.
and implement another DOCProcessor
Just add a new case to the FileProcessorFactory, and a new class which implements the interface IFileProcessor called DocFileProcessor.
You could decorate your processors with a custom attribute like this:
[FileProcessorExtension(".doc")]
public class DocProcessor
{
}
Then your processing manager could find the processor whose FileProcessorExtension attribute matches your extension, and instantiate it via reflection.
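A sketch of that attribute-based discovery (the attribute class and the locator are assumptions written for illustration, not an existing API):

```csharp
using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
public class FileProcessorExtensionAttribute : Attribute
{
    public string Extension { get; private set; }

    public FileProcessorExtensionAttribute(string extension)
    {
        Extension = extension;
    }
}

public static class ProcessorLocator
{
    // Finds the class decorated with a matching extension and instantiates it.
    public static object CreateProcessor(string extension)
    {
        var type = Assembly.GetExecutingAssembly().GetTypes()
            .FirstOrDefault(t => t
                .GetCustomAttributes(typeof(FileProcessorExtensionAttribute), false)
                .Cast<FileProcessorExtensionAttribute>()
                .Any(a => a.Extension.Equals(extension, StringComparison.OrdinalIgnoreCase)));

        return type == null ? null : Activator.CreateInstance(type);
    }
}
```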
I agree with Highmastdon; his factory is a good solution. The core idea is to no longer have any FileProcessor implementation references in your ProcessingManager, only a reference to the IFileProcessor interface. Thus ProcessingManager does not know which type of file it deals with; it just knows it has an IFileProcessor which implements ProcessFile(Stream inputFile).
In the long run, you'll just have to write new IFileProcessor implementations, and voilà: ProcessingManager does not change over time.
Use one more method called CanHandle for example:
abstract class FileProcessor
{
    public abstract void Process(string path);
    public abstract bool CanHandle(string path);
}
For an Excel file, you can implement CanHandle as below:
class ExcelProcessor : FileProcessor
{
    public override void Process(string path)
    {
    }

    public override bool CanHandle(string path)
    {
        return path.EndsWith(".xlsx");
    }
}
In ProcessingManager, you need a list of processors to which you can add at runtime via a RegisterProcessor method:
class ProcessingManager
{
    private List<FileProcessor> _processors = new List<FileProcessor>();

    public void RegisterProcessor(FileProcessor processor)
    {
        _processors.Add(processor);
    }
    ....
LINQ can then be used to find the appropriate processor:
while (some_condition)
{
    string fPath = _this.m_list.Dequeue();
    var processor = _processors.SingleOrDefault(p => p.CanHandle(fPath));
    if (processor != null)
        processor.Process(fPath);
}
If you want to add another processor, just define it and register it with ProcessingManager using the RegisterProcessor method. Unlike the FileProcessorFactory in Highmastdon's answer, you don't have to change any code in the other classes.
You could use the Factory pattern (a good choice). With the Factory pattern it is possible to avoid changing the existing code (following the SOLID principles).
If support for a new Doc file type has to be added in the future, you could use a dictionary instead of modifying the switch statement.
// Some abstract code to get you started (it's 2 am... not a good time to give working code)
1. Define a dictionary of {FileType, IFileProcessor}.
2. Add the available classes to the dictionary.
3. If you come across a new requirement tomorrow, simply do this: Dictionary.Add(FileType.Docx, new DocFileProcessor());
4. TryParse an enum from the user-input value.
5. Get the enum instance, then get the object that does your work!
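The steps above could be sketched like this (the FileType enum, registry class, and processor names are assumptions for illustration):

```csharp
using System.Collections.Generic;

public enum FileType { Pdf, Xlsx, Docx }

public static class ProcessorRegistry
{
    private static readonly Dictionary<FileType, IFileProcessor> Processors =
        new Dictionary<FileType, IFileProcessor>
        {
            { FileType.Pdf, new PdfFileProcessor() },
            { FileType.Xlsx, new ExcelFileProcessor() },
        };

    // Adding .docx support is a single registration; no switch to modify.
    public static void Register(FileType type, IFileProcessor processor)
    {
        Processors[type] = processor;
    }

    public static IFileProcessor For(FileType type)
    {
        return Processors[type];
    }
}

// Usage: ProcessorRegistry.Register(FileType.Docx, new DocFileProcessor());
```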
Otherwise, a better option is to go with MEF (the Managed Extensibility Framework). That way, you discover the classes dynamically.
For example, if support for .doc needs to be implemented, you could use something like this:
[Export(typeof(IFileProcessor))]
class DocFileProcessor : IFileProcessor
{
    DocFileProcessor(FileType type);
    // Implement the .docx handling in ProcessFile() here
}
Advantages of this method:
Your DocFileProcessor class is discovered automatically, since it implements and exports IFileProcessor.
The application is always extensible. (You import all the parts once, get the matching parts, and execute; it's that simple!)
Let's say I have an abstract object which can be implemented by multiple, separate plugin authors (for instance, a bug database connection). I don't want consumers of my bits to have to deal with each specific plugin type.
I also want to separate the process of parsing a configuration file from the process of actually initializing database plugins and other such things.
To that end, I came up with something like this:
public interface IConfiguration
{
    // No members
}

public interface IConnection
{
    // Members go in here
    void Create();
    void Update();
    void Delete();
}

public interface IConnectionProvider
{
    // Try to interpret the file as a configuration, otherwise return null
    IConfiguration ParseConfiguration(Stream configurationContents);
    IConnection Connect(IConfiguration settings);
}
public class ThingyRepository
{
    // Let's say there is a constructor that initializes this with something
    List<IConnectionProvider> providers;

    // Insulates people from the actual connection provider
    KeyValuePair<IConfiguration, IConnectionProvider> Parse(string filename)
    {
        IConfiguration result = null;
        IConnectionProvider resultProvider = null;
        foreach (var provider in this.providers)
        {
            using (Stream fs = OpenTheFileReadonly(filename))
            {
                IConfiguration curResult = provider.ParseConfiguration(fs);
                if (curResult == null)
                {
                    continue;
                }

                if (result == null)
                {
                    result = curResult;
                    resultProvider = provider;
                }
                else
                {
                    throw new Exception("ambiguity!");
                }
            }
        }

        if (result == null)
        {
            throw new Exception("can't parse!");
        }

        return new KeyValuePair<IConfiguration, IConnectionProvider>(
            result, resultProvider);
    }
}
My question is, I've got this empty interface which is supposed to serve as an opaque handle to whatever settings were loaded from the indicated file. The specific implementer of IConnectionProvider knows what bits it needs in its configuration that it would load from a file, but users of this library should be insulated from that information.
But having an empty interface seems strange to me. Does this sort of thing make sense or have I done something horribly wrong?
The basic concept of an interface with no members, which simply identifies implementors as *being* something instead of the interface's normal job of identifying what an object *has* or *does*, is known as a "flag interface" (also called a marker interface). It has its uses, but use it sparingly. I, for instance, typically use them in a hierarchical format to identify domain objects that should be persisted to a particular data store:
// No direct implementors; unfortunately an "abstract interface" is kind of redundant,
// and there's no way to tell the compiler that a class inheriting directly from this
// base interface is wrong.
public interface IDomainObject
{
    int Id { get; }
}
public interface IDatabaseDomainObject : IDomainObject { }
public interface ICloudDomainObject : IDomainObject { }

public class SomeDatabaseEntity : IDatabaseDomainObject
{
    public int Id { get; set; }
    ... // more properties/logic
}

public class SomeCloudEntity : ICloudDomainObject
{
    public int Id { get; set; }
    ... // more properties/logic
}
The derived interfaces tell me nothing new about the structure of an implementing object, except that the object belongs to that specific sub-domain, allowing me to further control what can be passed where:
// I can set up a basic Repository pattern handling any IDomainObject...
// (no direct concrete implementors, though I happen to have an abstract)
public interface IRepository<T> where T : IDomainObject
{
    TDom Retrieve<TDom>(int id) where TDom : T;
}

// ... then create an interface specific to a sub-domain for implementations of
// a Repository for that specific persistence mechanism...
public interface IDatabaseRepository : IRepository<IDatabaseDomainObject>
{
    // ... which will only accept objects of the sub-domain.
    new TDom Retrieve<TDom>(int id) where TDom : IDatabaseDomainObject;
}
The resulting implementations and their usages can be checked at compile-time to prove that an ICloudDomainObject isn't being passed to an IDatabaseRepository, and at no time can a String or byte[] be passed into the repository for storage. This compile-time security isn't possible with attributes or properties, which are the other primary ways to "flag" a class as having some special significance.
So in short, it's not bad practice per se, but definitely ask yourself what you want out of the flag interface, and ask yourself if any state or logical data that would commonly be implemented on an IConfiguration (perhaps the name or other identifier of said configuration, or methods to load or persist it to the chosen data store) could do with some enforced standardization.
I think this is entirely valid. I'm designing an API where the caller has to first get an opaque "session" object and then pass it in to subsequent calls.
Different implementations of the API will use totally different implementations of the session object, so the session object clearly isn't an abstract class with different subclasses; it's an interface. Since the session object has no behavior visible to the caller, it seems to me the only logical model for this is an interface with no members.
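A tiny sketch of that shape (all names here are assumed for illustration, not from the question):

```csharp
// The session is opaque: callers hold it and pass it back,
// but cannot look inside it.
public interface ISession { }

public interface IApi
{
    ISession OpenSession(string connectionString);
    void Execute(ISession session, string command);
}

// Each implementation keeps its real state in an internal type
// that only it knows how to unwrap.
internal sealed class SqlSession : ISession
{
    internal System.Data.IDbConnection Connection;
}
```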
I am working in a content management system that uses C# and allows adding custom code in a central class. One issue that has come up is that we would like to have a separate code base for QA and for the rest of the site; currently we use the folder structure to switch the call from one class to the other:
if (AssetPath == "Websites QA")
{
    InputHelperQA.Navigation(); // calling the Navigation section from the helper class
}
else
{
    InputHelper.Navigation();
}
But I feel this is a very tedious way of doing this task. Is there a better way of accomplishing it? Obviously just appending InputHelper + "QA" does not work, but I want something along those lines where we only have to call the method once instead of wrapping an if/else around every call.
You really shouldn't have separate code for different environments, other than branches representing your environments.
You should store your configuration in a config file or database instead.
You could do worse than:
1) Have an interface (which you may already have, truth be told)
public interface IInputHelper
{
    void Navigation();
}
2) Derive your two instances as you already have:
public class InputHelper : IInputHelper { }
public class InputHelperQA : IInputHelper { }
3) Create some kind of a dispatch manager:
public sealed class InputDispatch
{
    private Dictionary<string, IInputHelper> dispatch_ =
        new Dictionary<string, IInputHelper>(StringComparer.OrdinalIgnoreCase);

    public InputDispatch()
    {
        dispatch_["Websites QA"] = new InputHelperQA();
        dispatch_["Default"] = new InputHelper();
    }

    public void Dispatch(string type)
    {
        Debug.Assert(dispatch_.ContainsKey(type));
        dispatch_[type].Navigation();
    }
}
I would use Dependency Injection. StructureMap (as just one example) will let you specify which concrete type to provide for an interface via a config file.
http://docs.structuremap.net/XmlConfiguration.htm