Looming on my (C# 4.0) project horizon is the introduction of a user-configuration feature, wherein my colleagues need the ability to configure our software on a pre-run basis so that it will perform its duties in whatever way is needed. Every run (they'll tend to be lengthy simulations) will have its own configuration file.
In the main, they'll want to define "products", fairly complex beasts comprising run-time parameters, IOC-style information (lists of calculation classes required in a Strategy Pattern-like way) and more stuff besides. Values could be numbers (integer and floating-point), strings, dates and lists of these.
We know the content will change (new parameter names, for example) as new products are introduced or existing ones evolve.
Options I've looked at include, in descending order of (my estimate of) syntax-heaviness:
XML
YAML
JSON
a Domain-Specific Language?
Some app-specific text notation
I'm looking for examples of more-or-less similar things: file formats, discussions of pros and cons from a technical and/or user perspective.
(I expect that at some time in the future we'll consider introducing a graphic front-end for such exercises, but we'll need to be able to configure executions some time before then.)
EDIT/UPDATE: I'm not particularly concerned about ease of implementation from the technical perspective: I'm looking for something my (very smart, but not technically-oriented) users will best be able to use to minimize the difficulty of writing what may be a fairly complex configuration.
Maybe "configuration" is a bad choice of word - what if we called it "simulation definition file" and considered that each user will create many of these over time?
If all of this can be described in a regular, attribute-oriented way, it sounds like XML would meet your needs. C# has fantastic XML handling features, including built-in serializers and LINQ, which can save a ton of implementation time. Also, creating a forms-based UI from a schema is very straightforward.
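For example, here's a minimal sketch of that idea (the Product/Parameter classes and the file name are invented for illustration; your real schema would differ):

    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;

    // Hypothetical shape of one "product" definition; adjust to your real parameters.
    public class Product
    {
        public string Name { get; set; }
        public List<string> CalculationClasses { get; set; } // IoC-style strategy list
        public List<Parameter> Parameters { get; set; }
    }

    public class Parameter
    {
        public string Name { get; set; }
        public string Value { get; set; }
    }

    public static class ConfigLoader
    {
        public static List<Product> Load(string path)
        {
            var serializer = new XmlSerializer(typeof(List<Product>));
            using (var stream = File.OpenRead(path))
                return (List<Product>)serializer.Deserialize(stream);
        }
    }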
I would suggest sitting down and creating a pseudo-schema to see if this would handle your use cases to help decide if this approach would work for you.
For examples, take a look at one of the many plugin systems that are driven by XML. The first one I would recommend is Eclipse.
In C#, the easiest way is to use the built-in System.Configuration classes: first add a configuration file to the project in Visual Studio, then edit its attributes. Here is a How To
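For example, a minimal sketch of reading an appSettings value (the key name is made up; you also need a reference to System.Configuration.dll):

    using System.Configuration; // reference System.Configuration.dll

    // app.config:
    // <configuration>
    //   <appSettings>
    //     <add key="MaxIterations" value="1000" />
    //   </appSettings>
    // </configuration>

    int maxIterations = int.Parse(ConfigurationManager.AppSettings["MaxIterations"]);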
Having a rather large project using resources for internationalization (following this guide: ASP.NET MVC 2 Localization complete guide, using things like data attributes, and so on), we ran into the need to translate the resource files. At the beginning of the project I chose the approach of having lots of small resource files - one for each view, viewmodel, controller, ... So I ended up having hundreds of resources. During translation (which is done by our partners using the ResXManager tool) we ran into trouble identifying the context of each string (where it is displayed), which is needed to find the correct form of the translation so it makes sense when displayed.
So I was asked to make a mutation of the application which does not display the localized values, but the keys (string names) instead. E.g. for a resource string TBL_NAME used somewhere in the view like #ResX.TBL_NAME and translated into English as "Name", I would like this special mutation to show "TBL_NAME", so the translator may see the context - where exactly this string is used.
The best would be if this were not a special build of the application, but rather another "language" available in the application, so translators can switch between English and this "unlocalized" language.
I'm looking for some easy ways of doing this. So far I have been thinking of these approaches:
Override ResourceManager.GetString - cannot use, because we access strings massively through the generated Designer classes, and so far I haven't found a way to change the ResourceManager they create (see this answer). Did I miss something?
Create resources for some unused language, which will contain string name/translated value pairs like TBL_NAME/TBL_NAME - viable, but very exhausting since we have hundreds of resources. Also, every new resource will require us to remember to add the same string name to this unused-language resource as well. You also have to do twice as much work when adding a single string to the application.
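For what it's worth, a rough sketch of what idea 1 could look like, if you do find a hook to substitute the manager behind the generated Designer classes (the class name and the use of the "qps-ploc" pseudo-culture are purely illustrative):

    using System;
    using System.Globalization;
    using System.Resources;

    // Sketch of idea 1: a ResourceManager that echoes the key back instead of the value
    // whenever a designated "pseudo" culture is active.
    public class KeyEchoResourceManager : ResourceManager
    {
        public KeyEchoResourceManager(Type resourceSource) : base(resourceSource) { }

        public override string GetString(string name, CultureInfo culture)
        {
            // Treat an otherwise unused culture as "show me the keys".
            if ((culture ?? CultureInfo.CurrentUICulture).Name == "qps-ploc")
                return name;
            return base.GetString(name, culture);
        }
    }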
At the moment it seems to me that, using resources and the current approach, it is impossible to solve this task, so I decided to ask this as a question here (and I'm aware it is more of a discussion than a question), hoping someone will give me a hint about another approach to this problem.
My preferred option would be to give the translators an environment where they can see what they are translating. Rigi requires a bit of setup (basically you need to add an additional UI language), but once you have done that translators can work within the live website - or in a test instance, which is what we did.
They can also work in screenshots, which is convenient when translators would have to access admin or other role specific pages but you do not want to bother giving them all kinds of user rights. These screenshots can be generated as part of automated UI tests or during manual UI testing.
I am afraid I can't say anything about the cost of the solution, but our translators are really happy with it. I am not sure if this is what you are looking for since you asked for an easy solution, but it definitely solves the issue of giving translators the context they need to do their job - better than displaying resource IDs.
We need to implement a WCF Webservice using the ACORD Standard.
However, I don't know where to start with this since this standard is HUMONGOUS and very convoluted. A total chaos to my eyes.
I am trying to use WSCF.Blue to extract the classes from the multiple XSD I have but so far all I get is a bunch of crap: A .cs file with 50,000+ lines of code that freezes my VS2010 all the time.
Has anybody walked already thru the Valley of Death (ACORD Standard) and made it? I really would appreciate some help.
I wrote an ACORD to C# class library converter which was then used in several large commercial insurance products. It featured a very nice mapping of all of the ACORD XML into concise, extendable C# classes. So I know from whence you come!
Once you dig into it, it's not so bad, but I maintain the average coder will not 'get it' for about 3-4 months if they work at it full time (assuming anything but inquiry-style messages). The real problem comes when trying to do mapping from a backend database and to/from another ACORD WS. All of the carriers, vendors, and agencies have custom rules.
My best suggestion is to find working code examples (I have tons if you need them) and maybe even a vendor or carrier who will let you hook up to an ACORD WS in a test environment.
It sounds like you are heading down the right path but are lost in the forest.
The ACORD Standard is huge and intentionally so, as it provides support for hundreds of different messages. Just as you do not download all of Wikipedia to get just a few articles, you do not need all of the classes in the ACORD Standard to support an implementation of a few messages. If you know what messages you need to support then you can generate a subset of the full XSD that will be quite manageable.
As mentioned in Hugh's response, for any one message only a fraction of the full XSD is used. How you go about generating that subset will depend on the specifics of your project. If you are looking for ideas on how to generate a subset of the full XSD, try reaching out to the ACORD staff for help at PCS@acord.org. They should be able to offer you some help in getting started.
I have worked with the Accord PCS exposure reporting standards and yes it was a nightmare. I have also worked with other large standards like FPML and SportsML.
You need to work out exactly which types from the schema are needed. How you do this is up to you, but the VS schema viewer should be able to handle it. If not, try XMLSpy, or just go through it by hand if you have to. Make sure you have a good BA to hand...
Chances are you will find that you can meet your requirements by using around 1% of the types available in the standard.
What you'll probably find is that you can express the core objects with a very minimal set of values, as most nodes will be minOccurs=0 or nillable.
Then you can use the /element switch on xsd.exe to generate the code for just the types you need.
As one commenter says there is no easy pill to swallow here. The irony is that standards are supposed to make everyone's lives easier.
If you are looking to read/write ACORD documents using .NET, I just stumbled across the "IVC Software Factory for ACORD Standards" on CodePlex at http://ivc.codeplex.com.
From the limited documentation it appears as if this library can convert objects to ACORD XML documents, and vice-versa. The source code comes with different "providers" i.e. different ACORD transaction types, like 103 or 121.
Hope this helps.
I would recommend not creating a model for the entire standard. One can just pass the XML around without serializing it into a model: load it into an XDocument/XElement, use LINQ to XML to query it, and update the DOM with LINQ to XML as well. So you are not loading the XML into a strongly typed model, just loading the XML. There is no model, just an XML document.
From there, one can pick the data off of the XML as needed.
Using this approach the code will be ugly and have little context, since XElements will be passed everywhere and there will be tons of magic XPath strings to query and define elements, but it can work. Also, everything is a string, so there will be utility conversion methods to convert to numbers, date-times, etc.
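A minimal sketch of that approach (the element names below are placeholders, not necessarily real ACORD tags):

    using System.Linq;
    using System.Xml.Linq;

    // Load the raw ACORD message without any generated model classes.
    XDocument doc = XDocument.Load("message.xml");
    XNamespace ns = doc.Root.GetDefaultNamespace();

    // Pick values straight off the document; element names are illustrative.
    string policyNumber = (string)doc.Descendants(ns + "PolicyNumber").FirstOrDefault();

    // Everything comes back as a string, so convert where needed.
    decimal? premium = (decimal?)doc.Descendants(ns + "CurrentTermAmt")
                                    .Elements(ns + "Amt")
                                    .FirstOrDefault();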
From my perspective: I have modeled part of ACORD into an object model using the XmlSerializer, but it's well over 500 classes. The model was not generated from the XSD or other tooling, but crafted manually, and it took some time. Tooling will produce monstrous, unusable classes (as you have mentioned) and/or flat out crash. As an example, I tried to load the XSD into Stylus Studio and it crashed several times.
So, your best bet if you're strapped for time is loading into an XDocument as opposed to trying to map out everything in a model. I know that sucks, but ACORD in general is basically a huge data hot mess.
F# 3.0 has added type providers.
I wonder if it is possible to add this language feature to other languages running on the CLR like C# or if this feature only works well in a more functional/less OO programming style?
As Tomas says, it is theoretically straightforward to add this kind of feature to any statically-typed language (though still a lot of grunt-work).
I am not a meta-programming expert, but @SK-logic asks why not a general compile-time meta-programming system instead, and I shall try to answer. I don't think you can easily achieve what you can do with F# type providers using meta-programming, because F# type providers can be lazy and dynamically interactive at design-time. Let's take an example that Don has demoed in one of his earlier videos: a Freebase type provider. Freebase is kind of like a schematized, programmable Wikipedia; it has data on everything. So you can end up writing code along the lines of
    for e in Freebase.Science.``Chemical Elements`` do
        printfn "%d: %s - %s" e.``Atomic number`` e.Name e.Discoverer.Name
or whatnot (I don't have the exact code offhand), but just as easily write code that gets information about baseball statistics, or when famous actors have been in drug rehab facilities, or a zillion other types of information available through Freebase.
From an implementation point of view, it is infeasible to generate a schema for all of Freebase and bring it into .NET a priori; you can't just do one compile-time step at the beginning to set all this up. You can do this for small data sources, and in fact many other type providers use this strategy, e.g. a SQL type provider gets pointed at a database and generates .NET types for all the types in that database. But this strategy does not work for large cloud data stores like Freebase, because there are too many interrelated types. If you tried to generate .NET metadata for all of Freebase, you'd find that there are so many millions of types (one of which is ChemicalElement with AtomicNumber and Discoverer and Name and many other fields, but there are literally millions of such types) that you would need more memory than is available to a 32-bit .NET process just to represent the entire type schema.
So the F# type-provider strategy is an API architecture that allows type providers to supply information on-demand, running at design-time within the IDE. Until you type e.g. Freebase.Science., the type provider does not need to know about the entities under the science categories, but once you do press . after Science, then the type provider can go and query the APIs to learn one-more-level of the overall schema, to know what categories exist under Science, one of which is ChemicalElements. And then as you try to "dot into" one of those, it will discover that elements have atomic numbers and what-not. So the type provider lazily fetches just enough of the overall schema to deal with the exact code the user happens to be typing into the editor at that moment in time. As a result, the user still has the freedom to explore any part of the universe of information, but any one source code file or interactive session will only explore a tiny fraction of what is available. When it comes time to compile/codegen, the compiler need only generate enough code to accommodate exactly the bits that the user has actually used in his code, rather than the potentially huge runtime bits to afford the possibility of talking to the whole data store.
(Maybe you can do that with some of today's meta-programming facilities now, I don't know, but the ones I learned about in school a long while back could not have easily handled this.)
As Brian and Tomas point out, there's nothing particularly "functional" about this feature. It's just a particularly slick way to provide metadata to the compiler.
The C# design team has been kicking around ideas like this for a long time. There was a proposal a few years before I joined the C# team for a feature that was going to be called "type blueprints" (or something like that) whereby a combination of XML documents, XML schema and custom code that proffered up type metadata could be used by the C# compiler. I don't recall the details and it never came to fruition, obviously. (Though it did influence the design and the implementation of the Visual Studio Tools for Office document format, which I was working on at the time.)
In any event, we have no plans on the immediate horizon for adding such a feature to C#, but we are watching with great interest to see if it does a good job of solving customer problems in F#.
(As always, Eric's musings about possible future features of unannounced and entirely hypothetical products are for entertainment purposes only.)
I don't see any technical reason why something like type providers couldn't be added to C# or similar languages. The only family of languages that makes it difficult to add type providers (in a similar way as in F#) is dynamically typed languages.
F# type providers rely on the fact that the type information generated by the provider propagates nicely through the program, and the editor can use it to show useful IntelliSense. In dynamically typed languages, this would require more elaborate IDE support (and "type providers" for dynamic languages reduce to just IDE or IntelliSense support).
Why are they implemented directly as a feature of F#? I think the meta-programming system would have to be really complex (note that the types are not actually generated) to support this. The other things that could be done using it wouldn't contribute to the F# language that much (they would only make it too complex, which is a bad thing). However, you could get similar thing if you had some sort of compiler extensibility.
In fact, I think this is how the C# team will add something like type providers in the future (they talked about compiler extensibility for some time now).
Here is the business part of the issue:
Several different companies send an XML dump of the information to be processed.
The information sent by the companies is similar, but not exactly the same.
Several more companies will soon be enlisted and will start sending information.
Now, the technical part of the problem is I want to write a generic solution in C# to accommodate this information for processing. I would be transforming the XML in my C# class(es) to fit in to my database model.
Is there any pattern or solution for handling this generically, without needing to change my solution when more companies are added later?
What would be the best approach to write my parser/transformer?
This is how I have done something similar in the past.
As long as each company has its own fixed format which they use for their XML dump,
1. Have a specific XSLT for each company.
2. Have a way of indicating which dump is sourced from where (maybe different DUMP folders for each company).
3. In your program, based on 2, select the appropriate XSLT from 1 and apply it to the dump.
4. All the XSLTs will transform the XML to your one standard database schema.
5. Save this to your DB.
6. Each new company addition is at most a new XSLT.
In cases where the schema is very similar, the XSLT's can be just re-used and then specific changes made to them.
Drawback to this approach: debugging XSLTs can be a bit more painful if you do not have the right tools. However, a LOT of XML editors (e.g. XMLSpy etc.) have excellent XSLT debugging capabilities.
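A minimal sketch of steps 2-4 in C# (the folder layout and file names are invented for illustration):

    using System.IO;
    using System.Xml.Xsl;

    // Assume one folder per company, each containing that company's transform.xslt
    // alongside the XML dumps it delivered.
    foreach (string companyDir in Directory.GetDirectories(@"C:\Dumps"))
    {
        var transform = new XslCompiledTransform();
        transform.Load(Path.Combine(companyDir, "transform.xslt"));

        foreach (string dumpFile in Directory.GetFiles(companyDir, "*.xml"))
        {
            // Write the unified-schema XML somewhere your DB loader picks it up.
            string outputFile = Path.ChangeExtension(dumpFile, ".unified.xml");
            transform.Transform(dumpFile, outputFile);
        }
    }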
Sounds to me like you are just asking for a design pattern (or set of patterns) that you could use to do this in a generic, future-proof manner, right?
Ideally, some of the attributes that you probably want are:
Each "transformer" is decoupled from one another.
You can easily add new "transformers" without having to rewrite your main "driver" routine.
You don't need to recompile / redeploy your entire solution every time you modify a transformer, or at least add a new one.
Each "transformer" should ideally implement a common interface that your driver routine knows about - call it IXmlTransformer. The responsibility of this interface is to take in an XML file and to return whatever object model / dataset that you use to save to the database. Each of your transformers would implement this interface. For common logic that is shared by all transformers you could either create a based class that all inherit from, or (my preferred choice) have a set of helper methods which you can call from any of them.
I would start by using a Factory to create each "transformer" from your main driver routine. The factory could use reflection to interrogate all the assemblies it can see, or you could use something like MEF, which could do a lot of the work for you. Your driver logic should use the factory to create all the transformers and store them.
Then you need some logic and a mechanism to "look up" the right transformer for each XML file received - perhaps each XML file has a header that you could use to identify the source, or something similar. Again, you want to keep these decoupled from your main logic so that you can easily add new transformers without modifying the driver routine. You could, e.g., supply the XML file to each transformer and ask it "can you transform this file?", so it is up to each transformer to "take responsibility" for a given file.
Every time your driver routine gets a new XML file, it looks up the appropriate transformer, and runs it through; the result gets sent to the DB processing area. If no transformer can be found, you dump the file in a directory for interrogation later.
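Putting that together, the driver routine could look roughly like this (the factory/MEF wiring is omitted and the DB call and "Unrecognised" folder are placeholders), reusing the IXmlTransformer sketch above:

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    public class TransformerDriver
    {
        private readonly List<IXmlTransformer> _transformers;

        public TransformerDriver(IEnumerable<IXmlTransformer> transformers)
        {
            // In practice these would come from a factory, a reflection scan, or MEF.
            _transformers = transformers.ToList();
        }

        public void Process(string xmlFile)
        {
            XDocument doc = XDocument.Load(xmlFile);
            IXmlTransformer handler = _transformers.FirstOrDefault(t => t.CanTransform(doc));

            if (handler == null)
            {
                // No transformer took responsibility - park the file for inspection later.
                File.Move(xmlFile, Path.Combine("Unrecognised", Path.GetFileName(xmlFile)));
                return;
            }

            ProductRecord record = handler.Transform(doc);
            SaveToDatabase(record); // placeholder for your DB processing area
        }

        private void SaveToDatabase(ProductRecord record) { /* ... */ }
    }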
I would recommend reading a book like Agile Principles, Patterns and Practices by Robert Martin (http://www.amazon.co.uk/Agile-Principles-Patterns-Practices-C/dp/0131857258), which gives good examples of appropriate design patterns for situations like yours e.g. Factory and DIP etc.
Hope that helps!
The solution proposed by InSane is likely the most straightforward and definitely the most XML-friendly approach.
If you are looking to write your own code to convert the different data formats, then implement multiple reader entities, each reading data in one distinct format and transforming it to a unified format; your main code would then work with these entities in a unified way, e.g. by saving to the database.
Search for ETL (Extract-Transform-Load) to get more information - What model/pattern should I use for handling multiple data sources?, http://en.wikipedia.org/wiki/Extract,_transform,_load
Using XSLT, as proposed in the currently most upvoted answer, just moves the problem from C# to XSLT.
You are still changing the pieces that process the XML, and you are still exposed to how well or poorly the code is structured, whether it is C# or rules in the XSLT.
Regardless of whether you keep it in C# or go with XSLT for those bits, the key is to separate out the transformation of the XML you receive from the various companies into a unique format, whether that's an intermediate XML or a set of classes into which you load the data you are processing.
Whatever you do, avoid getting clever and trying to define your own generic transformation layer; if that's what you want, do use XSLT, since that's what it's for. If you go with C#, keep it simple with one transformation class per company, each implementing the simplest possible interface.
On the C# side, keep any reuse you may have between the transformations to composition; don't even think of inheritance for it ... this is one of the areas where things get very ugly quickly if you go that way.
Have you considered BizTalk server?
Just playing the fence here and offering another solution for other readers.
The easiest way to get the data into your models within C# is to use XSLT to convert each company's data into a serialized form of your models. These are the basic steps I would take:
Create a complete model of all your data and use XmlSerializer to write out the model.
Create an XSLT that takes Company A's data and converts it into a valid serialized xml model of your data. Use the previously created XML file as a reference.
Use Deserialize on the new XML you just created. You will now have a reference to your model object containing all the data from the company.
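For steps 1 and 3, a minimal sketch (MyDataModel stands in for your real model class, and the file names are invented):

    using System.IO;
    using System.Xml.Serialization;

    public static class ModelXml
    {
        static readonly XmlSerializer Serializer = new XmlSerializer(typeof(MyDataModel));

        // Step 1: write out a sample model once, to use as the reference for the XSLT's output shape.
        public static void WriteReference(string path)
        {
            using (var writer = new StreamWriter(path))
                Serializer.Serialize(writer, new MyDataModel());
        }

        // Step 3: deserialize the XSLT output straight into your model.
        public static MyDataModel Read(string path)
        {
            using (var reader = new StreamReader(path))
                return (MyDataModel)Serializer.Deserialize(reader);
        }
    }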
I must develop a simple web application to produce reports. I have a single table, "contract", and I must return very simple aggregated values: number of documents produced in a time range, average number of pages per document, and so on. The table gets filled by a batch application; users will have roles that will allow them to see only a part of the reports (if they may be called that).
My purpose is :
develop a class which generates the so-called reports, open to future extension (adding new methods to generate new reports for different roles must be easy)
decouple the web graphic interface from the database access
I'm evaluating various patterns: decorator, visitor, ... but since the returned data is so simple, I cannot decide which applies, or even whether it's worth using one at all. Moreover, I must do it in less than 5 days. It could be done if I wrote a so-called "smart GUI", but as stated in point 1, I don't want to get into trouble when new roles or methods are added.
Thank you for your answers.
EDIT/UPDATE: I'm sorry, I realize I haven't provided much info. I live in a Dilbert world. At the moment I've got the following info: the DB will be Oracle (the concrete DB doesn't exist yet), so no EF; maybe LINQ to DataSet (but I'm new to LINQ). About new features of the application: due to previous experiences, the only thing I wish is not to be obliged to propagate changes over the whole application, even if it's simple. Those are the reasons I've thought of design patterns (note I said "if it's the case" in my question).
I'll KISS it and then refactor if needed, as suggested by Ladislav Mrnka, but I'd still appreciate any suggestion on how to keep the data-gathering class open to extension.
KISS - keep it simple and stupid. You have five days. Create a working application, and if you have time, refactor it towards a better solution.
The road to good code is not paved with design patterns.
Good code is code that is readable, maintainable, robust, compatible and future-proof.
Don't get me wrong: design patterns are a great thing because they help categorise, and thus teach, the experience that earlier generations of programmers have accrued. Each design pattern once solved a problem in a way that was not only novel and creative, but also good. The corollary is not that a design pattern is necessarily good code when applied to any other problem.
Good code requires experience and insight. This includes experience with design patterns, and insight into their applicability and their downsides and pitfalls.
That said, my recommendation in your specific case is to learn about the recommended practice regarding web interfaces, database access, etc. Most C# programmers write web applications in ASP.NET; tend to use LINQ-to-Entities or LINQ-to-SQL for database access; and use Windows Forms or WPF for a desktop GUI. Each of these may or may not fulfill the requirements of your particular project. Only you can tell.
How about using the strategy pattern for retrieving the data? And use interfaces like the following to keep it extensible at all times (a rough sketch follows the list).
IReportFilter: Report filter/criteria set
IReportParams: Gets report parameters
IReportData: Gets the report data in a result set
IReportFormat: Report formatting
IReportRender: Renders the report
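For example, a minimal sketch of how these could fit together (the member signatures are purely illustrative, not a prescribed design):

    using System.Collections.Generic;

    public interface IReportFilter { /* report filter/criteria set */ }
    public interface IReportParams { IDictionary<string, object> GetParameters(); }
    public interface IReportData   { IEnumerable<IDictionary<string, object>> GetData(IReportFilter filter, IReportParams parameters); }
    public interface IReportFormat { string ContentType { get; } }
    public interface IReportRender { byte[] Render(IReportData data, IReportFormat format); }

    // Each concrete report (per role, per aggregate) is then just another implementation,
    // so adding a new report does not require touching existing code.
    public class ContractsPerMonthData : IReportData
    {
        public IEnumerable<IDictionary<string, object>> GetData(IReportFilter filter, IReportParams parameters)
        {
            // ... query the "contract" table and return aggregated rows ...
            yield break;
        }
    }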
Just thinking out loud.