I have a legacy HTTP/XML service that I need to interact with for various features in my application.
I have to create a wide range of request messages for the service, so to avoid magic strings littered around the code, I've decided to build the XML from XElement fragments via a rudimentary DSL.
For example, instead of:
new XElement("root",
new XElement("request",
new XElement("messageData", ...)));
I intend to use:
Root( Request( MessageData(...) ) );
With Root, Request and MessageData (of course, these are for illustrative purposes) defined as static methods which all do something similar to:
private static XElement Root(params object[] content)
{
return new XElement("root", content);
}
This gives me a pseudo functional composition style, which I like for this sort of task.
My ultimate question is really one of sanity / best practices, so it's probably too subjective, however I'd appreciate the opportunity to get some feedback regardless.
I'm intending to move these private methods over to public static class, so that they are easily accessible for any class that wants to compose a message for the service.
I'm also intending to have different features of the service have their messages created by specific message building classes, for improved maintainability.
Is this a good way to implement this simple DSL, or am I missing some special sauce that will let me do this better?
The thing that leads me to doubt is that as soon as I move these methods to another class, I increase the length of the method calls (though I do still achieve the initial goal of removing the large volume of magic strings). Should I be more concerned about the size (LOC) of the DSL class than about syntax brevity?
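For illustration, here's a minimal sketch of the shared builder class I have in mind (the name ServiceXml is just a placeholder):

public static class ServiceXml
{
    public static XElement Root(params object[] content)
    {
        return new XElement("root", content);
    }

    public static XElement Request(params object[] content)
    {
        return new XElement("request", content);
    }

    public static XElement MessageData(params object[] content)
    {
        return new XElement("messageData", content);
    }
}

Call sites would then read:

var message = ServiceXml.Root(ServiceXml.Request(ServiceXml.MessageData("...")));

which is noticeably longer than the bare Root(Request(MessageData(...))) form - hence the question about brevity.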
Caveats
Note that in this instance the remote service is poorly implemented and doesn't conform to any general messaging standards, e.g. WSDL, SOAP, XML-RPC, WCF etc.
Where a service does conform to such standards, it would obviously not be wise to create hand-built messages.
In the rare cases where you do have to deal with a service like the one in question here, and it cannot be re-engineered for whatever reason, the answers below provide some possible ways of dealing with the situation.
Have you noticed that the System.Xml.Linq classes are not sealed?
public class Root : XElement
{
    public Request Request { get { return this.Element("Request") as Request; } }
    public Response Response { get { return this.Element("Response") as Response; } }
    public bool IsRequest { get { return Request != null; } }

    /// <summary>
    /// Initializes a new instance of the <see cref="Root"/> class.
    /// </summary>
    public Root(RootChild child) : base("Root", child) { }
}

public abstract class RootChild : XElement
{
    protected RootChild(string name) : base(name) { }
}

public class Request : RootChild
{
    public Request() : base("Request") { }
}

public class Response : RootChild
{
    public Response() : base("Response") { }
}
var doc = new Root(new Request());
Remember this won't work for 'reading' scenarios; you will only have the strongly-typed graph for XML that your application creates via code.
Hand-cranking XML is one of those things that should be automated if possible.
One of the ways of doing this is to grab the messaging XSD definitions off your endpoint and use them to generate C# types using the xsd.exe tool.
Then you can create a type and serialize it using the XmlSerializer, which will pump out your xml message for you.
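A rough sketch of both steps (the schema file name and the generated type name here are assumptions, not something from your actual service):

// Generate C# types from the service's schema, e.g.:
//   xsd.exe ServiceMessages.xsd /classes
// Then serialize an instance of a generated type
// (requires System.IO and System.Xml.Serialization):
var request = new root();                        // hypothetical xsd.exe-generated type
var serializer = new XmlSerializer(typeof(root));
using (var writer = new StringWriter())
{
    serializer.Serialize(writer, request);
    string xml = writer.ToString();              // the message to send to the service
}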
I noticed this article on constructing arbitrary XML with C# 4.0, which is great.
The source for the library is here - https://github.com/mmonteleone/DynamicBuilder/tree/master/src/DynamicBuilder
At this time there is one notable deficiency: no XML namespace support. Hopefully that will get fixed, though.
As a quick example, here's how it's done.
dynamic x = new Xml();
x.hello("world");
Which yields:
<hello>world</hello>
Here's another quick example yanked from the article.
dynamic x = new Xml();
// passing an anonymous delegate creates a nested context
x.user(Xml.Fragment(u => {
u.firstname("John");
u.lastname("Doe");
u.email("jdoe#example.org");
u.phone(new { type="cell" }, "(985) 555-1234");
}));
Which yields:
<user>
<firstname>John</firstname>
<lastname>Doe</lastname>
<email>jdoe@example.org</email>
<phone type="cell">(985) 555-1234</phone>
</user>
Having used the Ruby library Builder, I find this method of creating arbitrary XML similarly terse, to the point that it verges on "fun"!
I've marked this as the answer, because, even though it doesn't directly speak to "using a DSL to create arbitrary XML" it tends to remove the need due to the extremely terse and dynamic nature of the syntax.
Personally I think this is the best way to create arbitrary XML in C# if you have the v4.0 compiler and have to crank it by hand; there are, of course, much better ways to generate XML automatically with serialization. Reserve this approach for XML which must be in a specific form for legacy systems.
Writing this in C# seems like an awful lot of work. Design your DSL as an XML vocabulary, and then compile it into XSLT, writing the compiler (translator) in XSLT. I've done this many times.
Related
I am writing an app that processes a bunch of ticker data from a page. The main class that I am working with is called Instrument, which is used to store all the relevant data pertaining to any instrument. The data is downloaded from a website, and parsed.
class Instrument
{
string Ticker {get; set;}
InstrumentType Type {get; set;}
DateTime LastUpdate {get; set;}
}
My issue is that I am not sure how to properly structure the classes that deal with the parsing of the data. Not only do I need to parse data to fill in many different fields (Tickers, InstrumentType, Timestamps etc.), but because the data is pulled from a variety of sources, there is no one standard pattern that will handle all of the parsing. There are even some parsing methods that need to make use of lower level parsing methods (situations where I regex parse the stock/type/timestamp from a string, and then need to individually parse the group matches).
My initial attempt was to create one big class ParsingHandler that contained a bunch of methods to deal with every particular parsing nuance, and add that as a field to the Instrument class, but I found that many times, as the project evolved, I was forced to either add methods, or add parameters to adapt the class for new unforeseen situations.
class ParsingHandler
{
string GetTicker(string haystack);
InstrumentType GetType(string haystack);
DateTime GetTimestamp(string haystack);
}
After trying to adapt a more interface-centric design methodology, I tried an alternate route and defined this interface:
interface IParser<outParam, inParam>
{
outParam Parse(inParam data);
}
And then using that interface I defined a bunch of parsing classes that deal with every particular parsing situation. For example:
class InstrumentTypeParser : IParser<InstrumentType, string>
{
InstrumentType Parse(string data);
}
class RegexMatchParser : IParser<Instrument, Match>
{
    public RegexMatchParser(
        IParser<string, string> tickerParser,
        IParser<InstrumentType, string> instrumentParser,
        IParser<DateTime, string> timestampParser)
    {
        // store into private fields
    }

    public Instrument Parse(Match haystack)
    {
        var instrument = new Instrument();
        // parse everything
        return instrument;
    }
}
This seems to work fine, but I am now in a situation where it seems like I have a ton of implementations that I will need to pass into class constructors. It seems dangerously close to being incomprehensible. My thought on dealing with it is to define enums and dictionaries that will house all the particular parsing implementations, but I am worried that this is incorrect, or that I am heading down the wrong path in general with this fine-grained approach. Is my methodology too segmented? Would it be better to have one main parsing class with a ton of methods like I originally had? Are there alternative approaches for this particular type of situation?
I wouldn't agree with the attempt to make the parser as general as IParser<TOut, TIn>. I mean, something like InstrumentParser looks quite sufficient to deal with instruments.
Anyway, as you are parsing different things, like dates from Match objects and similar, you can apply one interesting technique for dealing with generic arguments. Namely, you probably want no generic arguments in cases where you know what you are parsing (like string to Instrument - why generics there?). In those cases you can define special interfaces and/or classes with a reduced generic argument list:
interface IStringParser<T>: IParser<T, string> { }
You will probably parse data from strings anyway. For that case, you can provide a general-purpose class which first applies a regular expression and then delegates to a parser that works on Match objects:
class RegexParser<T> : IStringParser<T>
{
    Regex regex;
    IParser<T, Match> parser;

    public RegexParser(Regex regex, IParser<T, Match> containedParser)
    {
        this.regex = regex;
        this.parser = containedParser;
    }

    ...

    public T Parse(string data)
    {
        return parser.Parse(regex.Match(data));
    }
}
By repeatedly applying this technique, you can make your top-most consuming classes depend only on non-generic interfaces, or on interfaces with a single generic argument. Intermediate classes wrap the more complicated (and more specific) implementations, and it all becomes just a configuration issue.
The goal is always to make the consuming class as simple as possible. Therefore, try to wrap specifics and hide them away from the consumer.
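To make the wiring concrete, a consumer might end up composing something like this (TickerParser and TimestampParser are assumed implementations of IParser<string, string> and IParser<DateTime, string>, rawPageText is an assumed variable, and the regex is a placeholder):

// Build the Match-level parser from its field parsers.
IParser<Instrument, Match> matchParser = new RegexMatchParser(
    new TickerParser(),           // IParser<string, string>         (assumed)
    new InstrumentTypeParser(),   // IParser<InstrumentType, string>
    new TimestampParser());       // IParser<DateTime, string>       (assumed)

// Wrap it so the consumer only deals with strings.
IStringParser<Instrument> instrumentParser =
    new RegexParser<Instrument>(new Regex("placeholder-pattern"), matchParser);

Instrument instrument = instrumentParser.Parse(rawPageText);

The consumer only sees IStringParser<Instrument>; the Match-level machinery stays hidden behind the wrapper.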
I'm putting together a .NET 4 library that is designed to be distributed as a standalone assembly. Part of the library makes some ad-hoc web service calls, and I plan on returning a projected version of the response to the consumer of the library. There will be an extensive amount of mapping that needs to happen between the web service response representation and what the consumer of the library will actually get. I'm hoping to leverage AutoMapper for this task, as more often than not conventions will be able to take care of a lot of the boring right-to-left mapping code for me.
So for example, my library might expose code that looks somewhat like:
public Widget GetWidget(Guid id)
{
// Get server representation
ServerWidget serverWidget = this.Request<ServerWidget>(id);
// Map to client representation
Widget clientWidget = Mapper.Map<ServerWidget, Widget>(serverWidget);
return clientWidget;
}
Elsewhere in code I'll obviously have needed to call (plus any custom configuration for the mapping):
Mapper.CreateMap<ServerWidget, Widget>();
Per AutoMapper's design guidelines, this should only be done once per AppDomain (as it is an expensive operation). Since this library could be used in any number of possible environments (ASP.NET, WinForms app, WPF app, unit test runner, etc.), how does one go about properly setting the maps up in a situation like this?
Obviously, my code could expose some sort of method for the client to call to "initialize things" (mapper in this case) and assume they did indeed make that call, and at the right time in the application startup process, but that seems like a really lame requirement to impose on a consumer of the library.
Anyone have any suggestions for me and/or could point me to an open-source project on GitHub, Codeplex, etc that is already doing something like this?
How about having a static IsMappingInitialised method in your library which you call before doing a mapping? Something like this, which is thread-safe:
private static readonly object MappingLock = new object();
private static bool _ready = false;
public static bool IsMappingInitialised()
{
if (!_ready)
{
lock (MappingLock)
{
if (!_ready)
{
Mapper.CreateMap<ServerWidget, Widget>();
_ready = true;
}
}
}
return _ready;
}
That way you do not need to rely on your consumers to carry out the initialisation.
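Your GetWidget method would then just call it before mapping; a sketch based on the code in the question:

public Widget GetWidget(Guid id)
{
    // Lazily creates the maps on first use, thread-safely.
    IsMappingInitialised();

    ServerWidget serverWidget = this.Request<ServerWidget>(id);
    return Mapper.Map<ServerWidget, Widget>(serverWidget);
}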
You can also make use of the static constructor feature of .NET.
Add a static constructor to your class and create the map there. You will not need any locking, since the CLR ensures that a static constructor is executed only once per AppDomain. This is enough for your case, since you are using the static mapper (AutoMapper.Mapper), which is also one per AppDomain.
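For example, a minimal sketch using the static Mapper API from the question (the class name is arbitrary):

internal static class WidgetMappings
{
    // The CLR guarantees this runs at most once per AppDomain,
    // before the first use of this class.
    static WidgetMappings()
    {
        Mapper.CreateMap<ServerWidget, Widget>();
        // ... other CreateMap calls
    }

    internal static Widget Map(ServerWidget source)
    {
        return Mapper.Map<ServerWidget, Widget>(source);
    }
}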
Let's say I have a text file of basic mathematical functions.
I want to make a web service that answers these mathematical functions. Say the first one is y=x*x. If I wanted to turn this into a web service, I could simply do this:
[WebMethod]
public int A(int x)
{
return x*x;
}
However, here I've extracted the function from the list and coded it up by hand. That's not what I want to do. I want the WSDL for the service to be generated at call time directly from the text file, and I want web method calls to the service to go to a specific method that also parses the text file at run time.
How much heavy lifting is this? I've found a sample on how to generate WSDLs dynamically at this link, but there's a lot more to do beyond that and I don't want to bark up this tree if there are parts of the project that aren't feasible. Does anyone have any links, guides, books, or positive experiences trying this kind of thing?
This related Stack Overflow post might give you a lead.
The tip there is to use the SoapExtensionReflector class.
As I see it, you might be able to use that class as follows:
Create a web service containing 1 dummy method.
Subclass the SoapExtensionReflector and configure it in web.config.
As soon as your subclass is called for the dummy method, read the file with functions and dynamically add a method to the WSDL file for each function.
As you might agree, this sounds easier than it is, and I would personally prefer not to go there at all. The resulting code will probably be a bit of a maintenance nightmare.
Good luck :)
EDIT: it might actually be easier to write a little code generator, which generates the C# web service code from your file with functions. Then, let the WSDL generation be up to the framework you are using (e.g. WCF).
Obviously, this kind of kills the whole dynamic aspect of it, and you would need to redeploy after every change to the functions file. But then again, the cycle of 'generate code - build - redeploy' could easily be automated with some MSBuild tasks.
I guess the usefulness of such a solution depends on how often your file with functions changes...
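To illustrate, a very rough generator sketch - it assumes one "Name=expression" function per line in the text file and a single int parameter named x, both of which are assumptions about your file format:

// Reads functions.txt and emits an ASMX web service class with one [WebMethod] per line.
// Requires System, System.IO and System.Text.
var sb = new StringBuilder();
sb.AppendLine("public class MathService : System.Web.Services.WebService");
sb.AppendLine("{");
foreach (string line in File.ReadAllLines("functions.txt"))   // path is an assumption
{
    string[] parts = line.Split('=');                         // e.g. "A=x*x"
    sb.AppendLine("    [System.Web.Services.WebMethod]");
    sb.AppendLine("    public int " + parts[0].Trim() + "(int x) { return " + parts[1].Trim() + "; }");
}
sb.AppendLine("}");
File.WriteAllText("MathService.generated.cs", sb.ToString());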
I believe it's possible to add a metadata exchange endpoint programmatically in WCF - you may want to look into that. That would allow you to dynamically return WSDL to potential service clients, who could query your web service at runtime to determine which entry points are available. But it's definitely a bit of work - and not for the faint of heart.
Is a dynamic WSDL an absolute requirement? Not having a static WSDL also means you can't have a static (auto-generated) proxy class, which is a real PITA. Instead, you could expose the function signatures as plain old data, rather than as WSDL metadata:
[ServiceContract]
public interface IMathFunctions
{
[OperationContract]
FunctionDescription[] GetFunctionList();
[OperationContract]
object RunFunction(string funcName, object[] args);
}
public class FunctionDescription
{
    public string Name { get; set; }
    public Argument[] Arguments { get; set; }
    public TypeCode ReturnType { get; set; }
}
public class Argument
{
    public string Name { get; set; }
    public TypeCode Type { get; set; }
}
You will need to use the [DataContract] and [DataMember] attributes on the FunctionDescription and Argument classes when using a version of .NET earlier than 3.5 SP1.
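In that case the same classes, decorated explicitly, would look something like this (sketch only; requires a reference to System.Runtime.Serialization):

[DataContract]
public class FunctionDescription
{
    [DataMember] public string Name { get; set; }
    [DataMember] public Argument[] Arguments { get; set; }
    [DataMember] public TypeCode ReturnType { get; set; }
}

[DataContract]
public class Argument
{
    [DataMember] public string Name { get; set; }
    [DataMember] public TypeCode Type { get; set; }
}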
I am trying to create a web-based tool for my company that, in essence, uses geographic input to produce tabular results. Currently, three different business areas use my tool and receive three different kinds of output. Luckily, all of the outputs are based on the same idea of Master Table - Child Table, and they even share a common Master Table.
Unfortunately, in each case the related rows of the Child Table contain vastly different data. Because this is the only point of contention I extracted a FetchChildData method into a separate class called DetailFinder. As a result, my code looks like this:
DetailFinder DetailHandler;
if (ReportType == "Planning")
DetailHandler = new PlanningFinder();
else if (ReportType == "Operations")
DetailHandler = new OperationsFinder();
else if (ReportType == "Maintenance")
DetailHandler = new MaintenanceFinder();
DataTable ChildTable = DetailHandler.FetchChildData(Master);
Where PlanningFinder, OperationsFinder, and MaintenanceFinder are all subclasses of DetailFinder.
I have just been asked to add support for another business area and would hate to continue this if block trend. What I would prefer is to have a parse method that would look like this:
DetailFinder DetailHandler = DetailFinder.Parse(ReportType);
However, I am at a loss as to how to have DetailFinder know what subclass handles each string, or even what subclasses exist without just shifting the if block to the Parse method. Is there a way for subclasses to register themselves with the abstract DetailFinder?
You could use an IoC container; many of them allow you to register multiple services with different names or policies.
For instance, with a hypothetical IoC container you could do this:
IoC.Register<DetailHandler, PlanningFinder>("Planning");
IoC.Register<DetailHandler, OperationsFinder>("Operations");
...
and then:
DetailHandler handler = IoC.Resolve<DetailHandler>("Planning");
or some variation on this theme; a concrete Autofac example follows the container list below.
You can look at the following IoC implementations:
AutoFac
Unity
Castle Windsor
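For instance, with Autofac the keyed-registration version of the hypothetical code above would look roughly like this (exact API may vary between versions; requires the Autofac package):

var builder = new ContainerBuilder();
builder.RegisterType<PlanningFinder>().Keyed<DetailFinder>("Planning");
builder.RegisterType<OperationsFinder>().Keyed<DetailFinder>("Operations");
builder.RegisterType<MaintenanceFinder>().Keyed<DetailFinder>("Maintenance");
var container = builder.Build();

// Later, resolve by the report type string instead of branching in an if block:
DetailFinder detailHandler = container.ResolveKeyed<DetailFinder>(ReportType);
DataTable childTable = detailHandler.FetchChildData(Master);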
You might want to use a map of types to creational methods:
public class DetailFinder
{
private static Dictionary<string,Func<DetailFinder>> Creators;
static DetailFinder()
{
Creators = new Dictionary<string,Func<DetailFinder>>();
Creators.Add( "Planning", CreatePlanningFinder );
Creators.Add( "Operations", CreateOperationsFinder );
...
}
public static DetailFinder Create( string type )
{
return Creators[type].Invoke();
}
private static DetailFinder CreatePlanningFinder()
{
return new PlanningFinder();
}
private static DetailFinder CreateOperationsFinder()
{
return new OperationsFinder();
}
...
}
Used as:
DetailFinder detailHandler = DetailFinder.Create( ReportType );
I'm not sure this is much better than your if statement, but it does make it trivially easy to both read and extend. Simply add a creational method and an entry in the Creators map.
Another alternative would be to store a map of report types to finder types, then use Activator.CreateInstance on the type if you are always simply going to invoke the constructor. The factory-method approach detailed above would probably be more appropriate if there were more complexity in the creation of the object.
public class DetailFinder
{
private static Dictionary<string,Type> Creators;
static DetailFinder()
{
Creators = new Dictionary<string,Type>();
Creators.Add( "Planning", typeof(PlanningFinder) );
...
}
public static DetailFinder Create( string type )
{
Type t = Creators[type];
return Activator.CreateInstance(t) as DetailFinder;
}
}
As long as the big if block or switch statement or whatever it is appears in only one place, it isn't bad for maintainability, so don't worry about it for that reason.
However, when it comes to extensibility, things are different. If you truly want new DetailFinders to be able to register themselves, you may want to take a look at the Managed Extensibility Framework which essentially allows you to drop new assemblies into an 'add-ins' folder or similar, and the core application will then automatically pick up the new DetailFinders.
However, I'm not sure that this is the amount of extensibility you really need.
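If you do go that route, an MEF-based sketch might look roughly like this (the "ReportType" metadata key and the add-ins folder name are assumptions):

// In a plug-in assembly dropped into the add-ins folder:
[Export(typeof(DetailFinder))]
[ExportMetadata("ReportType", "Planning")]
public class PlanningFinder : DetailFinder
{
    // FetchChildData override omitted - same as your existing PlanningFinder
}

// In the host application (requires System.ComponentModel.Composition,
// System.ComponentModel.Composition.Hosting and System.Linq):
var catalog = new DirectoryCatalog("addins");
var container = new CompositionContainer(catalog);
var finders = container.GetExports<DetailFinder, IDictionary<string, object>>();
DetailFinder handler = finders
    .First(f => (string)f.Metadata["ReportType"] == ReportType)
    .Value;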
To avoid an ever-growing if..else block, you could switch it round so the individual finders register which type they handle with the factory class.
The factory class, on initialisation, will need to discover all the possible finders and store them in a hashmap (dictionary). This could be done via reflection and/or by using the Managed Extensibility Framework as Mark Seemann suggests.
However - be wary of making this overly complex. Prefer to do the simplest thing that could possibly work now, with a view to refactoring when you need it. Don't go and build a complex self-configuring framework if you'll only ever need one more finder type ;)
You can use reflection.
Here is sample code for a Parse method on DetailFinder (remember to add error checking):
public static DetailFinder Parse(string reportType)
{
    string detailFinderClassName = GetDetailFinderClassNameByReportType(reportType);
    return Activator.CreateInstance(Type.GetType(detailFinderClassName)) as DetailFinder;
}
The GetDetailFinderClassNameByReportType method can get the class name from a database, a configuration file, etc.
I think information about "Plugin" pattern will be useful in your case: P of EAA: Plugin
Like Mark said, a big if/switch block isn't bad since it will all be in one place (all of computer science is basically about getting similarity in some kind of space).
That said, I would probably just use polymorphism (thus making the type system work for me). Have each report implement a FindDetails method (I'd have them inherit from a Report abstract class) since you're going to end with several kinds of detail finders anyway. This also simulates pattern matching and algebraic datatypes from functional languages.
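Something along these lines (names are illustrative, and I'm guessing at the FindDetails signature from your FetchChildData call):

public abstract class Report
{
    public abstract DataTable FindDetails(DataTable master);
}

public class PlanningReport : Report
{
    public override DataTable FindDetails(DataTable master)
    {
        // planning-specific child lookup goes here
        return new DataTable();
    }
}

// Consuming code no longer branches on ReportType:
// DataTable childTable = report.FindDetails(master);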
I am currently building a new version of webservice that my company already provides. The current webservice has an input parameter of type string. In that string an entire xml doc is passed. We then validate that string against an xsd (the same xsd given to the consumer). It looks something like this:
[WebMethod]
public bool Upload(string xml)
{
    if (ValidateXML(xml))
    {
        //Do something
        return true;
    }
    return false;
}
I am building the next version of this service. I was under the impression that passing an XML doc as a string is not the correct way to do this. I was thinking that my service would look something like this:
[WebMethod]
public bool Upload(int referenceID, string referenceName, //etc...)
{
//Do something
}
This issue that I am having is that in actuality there are a large amount of input parameters and some of them are complex types. For example, the Upload Method needs to take in a complex object called an Allocation. This object is actually made up of several integers, decimal values, strings, and other complex objects. Should I build the webservice like so:
[WebMethod]
public bool Upload(int referenceID, string referenceName, Allocation referenceAllocation)
{
//Do something
}
Or is there a different way to do this?
Note: this Allocation object has a hierarchy in the xsd that was provided for the old service.
Could it be that the original service only took in xml to combat this problem? Is there a better way to take in complex types to a webservice?
Note: This is a C# 2.0 webservice.
I would probably use the XSD with the xsd.exe tool to create an XML-serializable object. Then you can deal with objects instead of string parameters. It also means you don't have to change the signature of the web service.
If you change the XSD to add another parameter, all you will need to do is regenerate the class using the xsd.exe tool. Make good use of partial classes here: separate your auto-generated class from your business logic. This way you can recreate the class definition as many times as the XSD changes without touching your business logic.
XML Serialization in the .NET Framework
If you were using 3.5, you could also use LINQ to XML to quickly parse out your XML parameters.
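Going back to the partial-class point, a rough sketch of the split (type and member names are assumptions based on your description):

// Allocation.generated.cs - produced by: xsd.exe Allocation.xsd /classes
// Regenerate freely whenever the XSD changes; never edit by hand.
public partial class Allocation
{
    public int ReferenceID;
    public decimal Amount;
    // ... other generated members
}

// Allocation.cs - your own code, untouched by regeneration.
public partial class Allocation
{
    public bool IsValid()
    {
        return ReferenceID > 0;   // business rules go here
    }
}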
Jon, to answer your follow-up question first: If your clients are on multiple platforms (or at least, not all on .NET), the best approach is the so-called "WSDL-first". Define the service interface in WSDL - that's where services and methods will be defined - WSDL will reference a set of XSDs defining the data-holding objects passed to and returned from those methods. You can generate C# or Java code from WSDL/XSDs.
Back to your original question. For the sake of maintainability, the best practice is to define Request and Response classes for each web method and never pass strings, bools, or integers directly. For example:
// in your Web service class
[WebMethod]
public UploadResponse Upload( UploadRequest request ) {
...
}
...
[Serializable]
public class UploadResponse {
public bool IsSuccessful {
get { ... }
set { ... }
}
}
[Serializable]
public class UploadRequest {
public Allocation ReferenceAllocation {
get { ... }
set { ... }
}
// define other request properties
// ...
}
If you defined SOAP bindings in your WSDL file, the UploadRequest object is extracted from the SOAP message and deserialized. By the time control reaches your WebMethod implementation, you have a deserialized UploadRequest object in memory with all of its properties set.
To have a method like this: public bool Upload(string xml) in a [WebService] class and parse XML inside the method implementation is definitely something you should consider moving away from.
As long as your complex types are in some way XmlSerializable, then you shouldn't have any problems just using those complex types. Let the framework do the heavy lifting for you. It will generate an appropriate WSDL and the data will get serialized all by itself rather than you having to worry about validation and serialization.
[Serializable] is your friend.
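A short sketch of what that looks like in practice (the Allocation members are invented purely for illustration):

[Serializable]
public class Allocation
{
    public int Id;
    public decimal Amount;
    public string CostCentre;
}

[WebMethod]
public bool Upload(int referenceID, string referenceName, Allocation referenceAllocation)
{
    // By the time you get here, the ASMX stack has already deserialized
    // referenceAllocation from the SOAP body for you.
    return true;
}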