For several applications I made for my current client, user accounts are shared: each account in one application should also exist in the other applications.
Each application has its own set of settings.
The number of applications and the settings themselves are the parts that will really change over time, so I want to keep them separated.
The data store is accessed through an IRepository class (XMLRepository, SQLRepository etc).
They abstract the actual data access logic away.
The SettingsService class should be able to get an ISetting class as follows:
public T GetSetting<T>(IUser user) where T : ISetting
Since the fields of an ISetting class will be different for each type, I reckon it's the actual Settings class that should know how to fill its own fields, but it doesn't know how to get the values.
The repository however would know how to access the data, but it doesn't know where to put them.
The GetSetting method is actually a factory method, if I'm not mistaken. I have the feeling this problem is nothing new and there is probably a good pattern to solve it.
What are my options?
You will need some sort of factory for each concrete type of ISetting that can create the concrete SomeSetting instance from data returned from a Repository.
How such a factory should work depends on how you envision the settings persistence schema. Do you have a custom schema for each type of ISetting, or do you simply serialize and deserialize settings in a BLOB/XML?
In the first case, you will need a custom Repository for each settings schema. This is the easy scenario, since each specialized Repository will simply act as the custom factory.
In the other case, you can save metadata together with the BLOB that either stores which custom factory to use to deserialize the BLOB, or alternatively simply the type of the serialized BLOB (and you can then use the serialization API of .NET to serialize and deserialize the object).
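For the second scenario, a hedged sketch of what the metadata-driven factory lookup could look like (all type and member names here are assumptions, not part of the original design):

```csharp
using System.Collections.Generic;

public interface ISetting { }

// One factory per concrete setting type, able to rebuild it from a BLOB.
public interface ISettingFactory
{
    ISetting Create(byte[] blob);
}

public class SettingFactoryRegistry
{
    private readonly Dictionary<string, ISettingFactory> _factories =
        new Dictionary<string, ISettingFactory>();

    public void Register(string settingTypeName, ISettingFactory factory)
    {
        _factories[settingTypeName] = factory;
    }

    // settingTypeName is the metadata the repository stored next to the BLOB.
    public T GetSetting<T>(string settingTypeName, byte[] blob) where T : ISetting
    {
        return (T)_factories[settingTypeName].Create(blob);
    }
}
```

The repository stays ignorant of the field layout; each factory stays ignorant of the storage.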
This is a rather general question, but it relates to overall application design. I'm trying to create an application that follows class design standards, and I'm struggling with one aspect: how to store information internally.
For example, I can create a class for a Movie with a couple of fields:
title
year
director
So when I parse the XML files that hold this metadata, I would load them into a public List. I'm not sure if this is the right approach? Since a List is an instance of an object, maybe it does not belong in a class that defines a Movie?
Because it is a public list, it would be available in other parts of the application.
I do not see any point in parsing the XML files multiple times during the application's lifetime. The same goes for accessing a database like SQLite.
I looked at the Singleton design and I'm not sure if that is the right approach either? Plus, based on the Singleton samples I viewed, I do not know if I can define the fields I mentioned before.
So, my question is: how do you deal with metadata or file paths from a scanned folder? Where do you keep this information inside your application?
Thank you
The class which parses the XML file shouldn't store the result. If that class parses a list of movies, it should just return an IEnumerable<Movie>, and then the caller of that class can store the result wherever it wants to.
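A minimal sketch of that split, assuming XML element names that match the fields in the question:

```csharp
using System.Collections.Generic;
using System.Xml.Linq;

public class Movie
{
    public string Title { get; set; }
    public int Year { get; set; }
    public string Director { get; set; }
}

public static class MovieXmlParser
{
    // The parser holds no state; it just yields movies to the caller.
    public static IEnumerable<Movie> Parse(string path)
    {
        foreach (var el in XDocument.Load(path).Descendants("movie"))
        {
            yield return new Movie
            {
                Title = (string)el.Element("title"),
                Year = (int)el.Element("year"),
                Director = (string)el.Element("director")
            };
        }
    }
}
```

The caller decides whether to keep the result in a field, cache it, or throw it away.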
This is a pretty general question and there are a number of ways to do it depending on your non-functional requirements (NFRs). The following is a pretty basic way that should be forward compatible with a number of approaches.
Declare the list within main program scope as an IList<Movie>.
Write a class that implements IList<Movie> (e.g. class MovieList : IList<Movie>) that exposes the data you need. It can cache the data if you want; it doesn't have to. For now, write the simplest code that could possibly work.
Later, in the main program, you can change the declaration of your IList to use an IoC container to instantiate it (e.g. _myList = Container.Resolve<IList<Movie>>();). That way you can substitute different data providers, or substitute a mock provider for unit testing.
Later, you can change the implementation of MovieList to include caching, or store the data in a DB, or whatever you want. Or you can totally rewrite it in a new class and change the configuration of your IoC container to point at the new class. You will have all sorts of options. (The decision to cache or not to cache will ultimately depend on NFRs such as storage capacity, performance, and concurrency/ACID)
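A minimal sketch of the MovieList idea (the bare-bones Movie definition here is an assumption):

```csharp
using System.Collections.Generic;

public class Movie
{
    public string Title { get; set; }
    public int Year { get; set; }
    public string Director { get; set; }
}

// Deriving from List<Movie> satisfies IList<Movie> with the simplest code
// that could work; a later rewrite can add caching or a DB-backed
// implementation without touching callers that only see IList<Movie>.
public class MovieList : List<Movie>
{
    public MovieList(IEnumerable<Movie> source) : base(source) { }
}

// In the main program, code only against the interface:
// IList<Movie> movies = new MovieList(parsedMovies);
// or, via an IoC container:
// IList<Movie> movies = Container.Resolve<IList<Movie>>();
```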
The point is to write down the bones of what your program truly needs, and worry about the details of where and when to store stuff later.
I don't think it is a good idea to simply store the whole list in a global variable without some kind of abstraction.
I have many projects in my solution representing the different layers of the application. The Data Access Layer (DAL) has a model of the database in it and, more importantly for my issue, a Plain Old CLR Object (POCO). I want to send an instance of this POCO to an external requester via a WCF contract. As you know, I must define the Operation Contract and Data Contract at the contract layer. It is here where my problem lies: how do I declare the data contract and its data members when the POCO is situated in another layer?
I have tried defining an interface and having both classes implement it, but I come up against a problem when I get the objects from the database and then pass them through the contract: the contract does not know the object being passed to it, even though it shares an interface.
Anyway, hope that is clear (as mud!), and if anyone can advise me on a suitable solution I would be much obliged.
P.S. Using C# in VS2015
Looks to me like what you need is another class specifically built for the WCF layer that contains all the properties and attributes you need, and then use something like AutoMapper to copy the contents across to your WCF object.
Making use of the Factory design pattern could also be of help here.
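A hand-rolled version of that mapping might look like this (the Customer/CustomerDto names are illustrative; a tool like AutoMapper would replace the Map method):

```csharp
using System.Runtime.Serialization;

// POCO living in the DAL:
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Contract-layer class the service actually exposes:
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public static class CustomerMapper
{
    // Copy the POCO's contents into the DTO that crosses the wire.
    public static CustomerDto Map(Customer source)
    {
        return new CustomerDto { Id = source.Id, Name = source.Name };
    }
}
```

The DAL stays free of serialization attributes, and the contract layer never references the DAL's POCO directly.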
I am implementing an infrastructure for access control of models in a web application. The library has a context class that controllers (and maybe views) use for determining if the current user has access to a certain object. For keeping relevant information close to the target object, I've decided to pass on the access check request to the models themselves from the context object.
Implementing this mechanism for model object modification is almost trivial. Declare an interface, say ICheckModifyAccess, and implement it in your model. The same goes for the delete check. In both these cases, it is possible to ask an instance of a model whether it is OK to modify or delete it.
Unfortunately, that is not the case for the read and create operations. These operations require that I ask the question of the model class itself, so using an interface for this is not an option.
I ended up creating an attribute, CheckCreateAccessAttribute, and using it to mark a static function as the "interface" function. Then, in my context object, I can use reflection to check whether such a marked function exists and matches the signature I expect, and eventually call it. In case it makes a difference, the method for the create-access check is public bool CanCreate<TObj>();. A typical model that supports access control would add something like the following to the class:
[CheckCreateAccess]
public static bool CanCreate()
{
    return true;
}
I am not very fluent in C# yet, and I have a nagging feeling that I'm doing something wrong. Can you suggest a more elegant alternative? In particular, can you get rid of examining TObj by reflection?
It sounds like you've combined concerns in your object classes instead of separating them.
The temptation to "keep relevant information close to the target object" has perhaps led you to this structure.
Perhaps you could instead handle permissions in a separate class, see for example this article.
I think you shouldn't ask a specific user whether you can modify him (unless the modify right is per concrete entity). Just create a class that handles the rights (or use an appropriate existing class).
This would eliminate your need for static classes and reflection.
If you are going to have lots of types, with custom rules (i.e. code) for every one of them, you could have a generic abstract type (interface or abstract class) that is able to check the rules for one type and some repository to retrieve the specific instance.
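One possible shape for that suggestion, sketched with assumed names: a generic checker interface plus a registry, so create-access rules live in ordinary classes and no static methods or reflection are needed.

```csharp
using System;
using System.Collections.Generic;

// One implementation per model type holds that type's create rule.
public interface ICreateAccessChecker<TModel>
{
    bool CanCreate();
}

public class AccessContext
{
    private readonly Dictionary<Type, object> _checkers =
        new Dictionary<Type, object>();

    public void Register<TModel>(ICreateAccessChecker<TModel> checker)
    {
        _checkers[typeof(TModel)] = checker;
    }

    // Statically typed lookup; no attribute scanning required.
    public bool CanCreate<TModel>()
    {
        return _checkers.TryGetValue(typeof(TModel), out var checker)
            && ((ICreateAccessChecker<TModel>)checker).CanCreate();
    }
}
```

A current-user parameter could be added to CanCreate if the rule depends on who is asking.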
I'm attempting to design a system that will allow the processing of multiple types of file. The idea being that there's a single application to handle the actual manipulation of the files on disk, while developers can write custom libraries that will be able to do whatever they want with the files once loaded.
I currently have a structure that looks like this:
[Class diagram]
Where the application publishes an IClient interface that the custom written libraries are free to implement. Client1 to Client3 would each have a different implementation and respond to each type of file in a different way.
The Populate method on File is overridden in the derived classes to call the correct PopulateFrom method on the IClient interface, passing in the calling file.
Therefore the PopulateFrom method on the class implementing IClient is passed a file of a specific type so that it has to access the underlying data (CSVDataReader or XDocument in this example) to parse into whatever domain-specific objects it wants.
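In code, the double dispatch described above might look like this (CsvFile, XmlFile, and the overload names are assumptions based on the description):

```csharp
public interface IClient
{
    void PopulateFrom(CsvFile file);
    void PopulateFrom(XmlFile file);
}

public abstract class File
{
    public abstract void Populate(IClient client);
}

public class CsvFile : File
{
    // 'this' is statically a CsvFile here, so overload resolution
    // picks PopulateFrom(CsvFile) at compile time.
    public override void Populate(IClient client)
    {
        client.PopulateFrom(this);
    }
}

public class XmlFile : File
{
    public override void Populate(IClient client)
    {
        client.PopulateFrom(this);
    }
}
```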
Using this design, for every new type of file I add to the system I would have to add a new method to IClient, which isn't ideal. To preserve compatibility with the client classes that don't have the method accepting the new file type, I'm going to have to create a new interface that specifically supports that type and have the new client implement it:
[Class diagram]
That all works, but I was wondering whether there's a better way of supporting the multiple file types without having to add a new interface every time, possibly using a design pattern?
Here is an option: your PopulateFrom method should not take a specific file type; instead it should take a FileStream or MemoryStream. After all, a file is simply a stream of bytes; it is the organisation of those bytes that makes each file type unique.
Additionally, you may want to implement a method similar to this:
bool CanProcess(FileStream myFile)
That way you can query each provider in a generic way, and it will tell you whether it can process that particular file. Doing it this way will allow you to implement more file types and more providers without having to extend your interface or disturb the existing providers.
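A sketch of that provider idea, with assumed names:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

public interface IFileProcessor
{
    bool CanProcess(Stream file);
    void Process(Stream file);
}

public class ProcessorDispatcher
{
    private readonly List<IFileProcessor> _processors = new List<IFileProcessor>();

    public void Register(IFileProcessor processor)
    {
        _processors.Add(processor);
    }

    public void Dispatch(Stream file)
    {
        // Probe each provider generically. A real implementation would
        // reset file.Position after each CanProcess probe.
        var handler = _processors.FirstOrDefault(p => p.CanProcess(file));
        handler?.Process(file);
    }
}
```

New file types then mean a new IFileProcessor implementation, not a new interface.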
Check out the Provider pattern to see if it helps.
Your design violates the design principle known as Dependency inversion, because clients depend on concrete classes instead of abstract ones.
You should reconsider implementing your clients in a way they work with the abstract type (Application::File). If there's absolutely no way to do that, then you should redesign the class hierarchy.
Think about it: if an abstraction is seldom used, then it is probably useless. Robert Martin terms this the Stable Abstractions Principle.
What would be the best way to implement a pseudo-session/local storage class in a console app? Basically, when the app starts I will take in some arguments that I want to make available to every class while the app is running. I was thinking I could just create a static class and initialize the values when the app starts, but is there a more elegant way?
I typically create a static class called 'ConfigurationCache' (or something of the sort) that can be used to provide application-wide configuration settings.
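Such a cache might look like this minimal sketch (the setting names are made up for illustration), populated once at startup:

```csharp
public static class ConfigurationCache
{
    public static string InputPath { get; private set; }
    public static bool Verbose { get; private set; }

    // Call once from Main with the parsed command-line arguments;
    // every class can then read the values but not overwrite them.
    public static void Initialize(string inputPath, bool verbose)
    {
        InputPath = inputPath;
        Verbose = verbose;
    }
}
```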
Keep in mind that you don't want to get too carried away with globals. I seriously recommend taking a look at your design and passing just what you need via method parameters. Your design should be such that each method receives a parameter for what it needs (see Code Complete 2 by Steve McConnell).
This isn't to say a static class is wrong but ask yourself why you need that over passing parameters into your various classes and methods.
If you want to take the command line arguments (or some other super-duper setting) and put them somewhere that your whole app can see, I don't know why you would consider it "inelegant" to put them in a static class when the app starts. That sounds exactly like what you want to do.
You could use the Singleton design pattern if you need an object that you can pass around in your code, but IMO a static class is fine, too.
Frankly, I think the most elegant way would be to rethink your design to avoid "global" variables. Classes should be created or receive data they need to be constructed; methods should operate on those data. You violate encapsulation by making global variables that a class or classes need to do their jobs.
I would suggest possibly implementing a singleton class to manage your pseudo-session data. You'll have the ability to access the data globally while ensuring only one instance of the class exists and remains consistent while shared between your objects.
MSDN implementation of a singleton class
Think about your data as a configuration file required by all your classes. The file would be accessible from every class - so there is nothing really wrong with exposing the data through a static class.
But every class would have to know the path to the configuration file, and a change of the path would affect many classes. (Of course, the path had better be a constant in only one class referenced by all classes requiring the path.) So a better solution would be creating a class that encapsulates the access to the configuration file. Now every class can create an instance of this class and access the configuration data of the file. Because your data is not backed by a file, you would have to build something like a monostate.
Now you could start thinking about class coupling. Does it matter to you? Are you planning to write unit tests, and will you have to mock the configuration data? Yes? In this case you should start thinking about using dependency injection and accessing the data only through an interface.
So I suggest using dependency injection with an interface, and I would implement the interface with the monostate pattern.
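A minimal sketch of that combination, with made-up setting names: an interface for injection, implemented as a monostate (instance API over shared static state).

```csharp
public interface IAppSettings
{
    string InputPath { get; set; }
}

public class MonostateSettings : IAppSettings
{
    // Shared by every instance, so any class can construct its own copy
    // (or receive one via dependency injection) and see the same data.
    private static string _inputPath;

    public string InputPath
    {
        get { return _inputPath; }
        set { _inputPath = value; }
    }
}
```

Unit tests can then substitute a plain fake IAppSettings without touching the shared state.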