I have a controller circuit that I can communicate with via serial port, and I would like to write a class library for it. I figured it is far easier (and far more readable) to call a method than to repeatedly hard-code lengthy character strings. Anyway, the controller comes pre-programmed with ~100 get/set functions separated into three categories: Sensor Settings, Output Settings, and Environment Settings. These functions are used to get or set the controller's settings. I was wondering what the "best" or "accepted" class organization would be?
All the functions belong to the same controller and use the same serial port, so I figured Controller would be the top-level class. Within this class, I set up a SerialPort instance and created a simple send/receive method using it: string SendReceive(string commandString).
Because I really don't want to have ~100 functions and properties in the single class, I tried creating some nested classes (SensorSettings, OutputSettings, and EnvironmentSettings) and placing the respective functions within them. When I tried to build, however, I received a compile error stating that I attempted to access a higher level, non-static method (SendReceive(string commandString)) from within one of the nested classes.
Each of the various methods has a unique and variable send/receive command so I would create the command string within the method, call SendReceive, and process the returning command. Is there any way to do this?
I would like to use properties to work my way down to get/set the various settings. For example...
controllerInstance.SensorSettingsProperty.Units; // Units in use.
or...
controllerInstance.OutputSettingsProperty.GetOutput(sensor1); // Get sensor 1 output.
...but all of these require the use of the same serial port and SendReceive.
It sounds like you need to make objects for each command, and then use your controller to send commands.
Example:
public class GetSensorReadingCommand
{
    private readonly Sensor sensor;
    private readonly SerialController controller;

    public GetSensorReadingCommand(Sensor sensor, SerialController controller)
    {
        // Store what the command needs to build its command string later
        this.sensor = sensor;
        this.controller = controller;
    }

    public int Execute()
    {
        // Build this command's string, send it, parse the response
        // (the command format here is illustrative)
        string response = controller.SendReceive("GET SENSOR " + sensor);
        return int.Parse(response);
    }
}
Then you would simply make a new command object every time you want to perform an operation.
SerialController controller = new SerialController(somePortNumber);
GetSensorReadingCommand command = new GetSensorReadingCommand(Sensor.Sensor10, controller);
int reading = command.Execute();
You would follow a pattern like that for each command you can send. If the sensor number isn't known at compile time, provide an integer instead of using an enum.
You'll end up with a lot of small code files instead of a bunch of large ones. In addition, you can tailor each command with logic specific to its function.
Related
I want to use TPL Dataflow for my .NET Core application and have followed the example from the docs.
Instead of having all the logic in one file I would like to separate each TransformBlock and ActionBlock (I don't need the other ones yet) into their own files. A small TransformBlock example converting integers to strings
class IntToStringTransformer : TransformBlock<int, string>
{
public IntToStringTransformer() : base(number => number.ToString()) { }
}
and a small ActionBlock example writing strings to the console
class StringWriter : ActionBlock<string>
{
public StringWriter() : base(Console.WriteLine) { }
}
Unfortunately this won't work because the block classes are sealed. Is there a way I can organize those blocks into their own files?
Dataflow steps/blocks/goroutines are fundamentally functional in nature and best organized as modules of factory functions, not separate classes. A TPL DataFlow pipeline is quite similar to a pipeline of function calls in F#, or any other language. In fact, one could look at it as a PowerShell pipeline, except it's easier to write.
There's no need to create a class or implement an interface to add a new function to that pipeline; you just add it and redirect the output to the next function.
TPL Dataflow blocks already provide the primitives to construct a pipeline and only require a transformation function. That's why they are sealed - to prevent misuse.
The natural way to organize dataflows is similar to F#, too - create libraries with the functions that perform each job, putting them in modules of related functions. Those functions are stateless, so they can easily go into a static class, just like extension methods.
For example, there could be one module for database related functions that perform bulk inserts or read data, another to handle exports to various file formats, separate classes to call external web services, another to parse specific message formats.
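For instance, a module of export-related functions might look like this - a minimal sketch, where Ticket and both function names are illustrative, not from any real project:

using System;
using System.Threading.Tasks.Dataflow;

public record Ticket(string Number, DateTime IssuedOn, decimal Amount);

public static class ExportFunctions
{
    // Plain, stateless worker function - easy to test in isolation.
    public static string ToCsvLine(Ticket ticket) =>
        $"{ticket.Number},{ticket.IssuedOn:yyyy-MM-dd},{ticket.Amount}";

    // Factory that wraps the worker in a block, configured at the call site.
    public static TransformBlock<Ticket, string> CreateCsvBlock(int dop) =>
        new TransformBlock<Ticket, string>(
            ToCsvLine,
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = dop });
}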
A real example
For the last 7 years I've been working with several complex pipelines for an Online Travel Agency (OTA). One of them calls several GDSs (the intermediaries between OTAs and airlines) to retrieve transaction information - ticket issues, refunds, cancellations etc. The next step retrieves the ticket records, i.e. the detailed ticket information. Finally, the records are inserted into the database.
GDSs are too big to bother with standards, so their "SOAP" web services aren't even SOAP-compliant, much less follow the WS-* standards. So each GDS needs a separate class library to call its services and parse the outputs. No dataflows there yet - the project is already complex enough.
Writing the data to the database is pretty much the same always, so there's a separate project with methods that take eg an IEnumerable<T> and write it to the database with SqlBulkCopy.
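A minimal sketch of that kind of method, assuming the rows map one-to-one onto the destination table's columns (buffering into a DataTable is just the simplest route; a streaming IDataReader would avoid materializing everything):

using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using Microsoft.Data.SqlClient;

public static class BulkWriter
{
    public static void Write<T>(IEnumerable<T> rows, string connectionString, string tableName)
    {
        // Build a DataTable whose columns mirror T's public properties.
        var props = typeof(T).GetProperties();
        var table = new DataTable();
        foreach (var p in props)
            table.Columns.Add(p.Name, Nullable.GetUnderlyingType(p.PropertyType) ?? p.PropertyType);

        foreach (var row in rows)
            table.Rows.Add(props.Select(p => p.GetValue(row) ?? DBNull.Value).ToArray());

        // Hand the buffered rows to SqlBulkCopy in one shot.
        using var bulk = new SqlBulkCopy(connectionString) { DestinationTableName = tableName };
        bulk.WriteToServer(table);
    }
}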
It's not enough to load new data though, things often go wrong so I need to be able to load already stored ticket information.
Organisation
To preserve sanity:
Each pipeline gets its own file:
A Daily pipeline to load new data,
A Reload pipeline to load all stored data,
A "Rerun" pipeline to use the existing data and ask again for any missing data.
Static classes are used to hold the worker functions and, separately, the factory methods that produce Dataflow blocks based on configuration. Eg, a CreateLogger(path, level) creates an ActionBlock<Message> that logs specific messages.
Common dataflow extension methods - since Dataflow blocks follow the same basic patterns, it's easy to create a logged block by combining eg a Func<TIn,TOut> and a logger block, or to create a LinkTo overload that redirects bad records to a logger or database. Those are common enough to become extension methods; see the sketch below.
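For example, that LinkTo overload could look roughly like this (a sketch; the validation predicate and logger/error blocks are supplied by the caller, and the CreateLogger factory omits level filtering for brevity):

using System;
using System.Threading.Tasks.Dataflow;

public static class DataflowExtensions
{
    // Link records that pass validation to the next block,
    // and divert everything else to a logger/error block.
    public static void LinkToWithErrors<T>(
        this ISourceBlock<T> source,
        ITargetBlock<T> target,
        ITargetBlock<T> errorTarget,
        Predicate<T> isValid)
    {
        source.LinkTo(target, isValid);
        source.LinkTo(errorTarget, record => !isValid(record));
    }

    // Factory in the spirit of CreateLogger(path, level): returns a block
    // that appends each message to a file.
    public static ActionBlock<string> CreateLogger(string path) =>
        new ActionBlock<string>(msg =>
            System.IO.File.AppendAllText(path, msg + Environment.NewLine));
}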
If those were in the same file, it would be very hard to edit one pipeline without affecting another. Besides, there's a lot more to a pipeline than the core tasks, eg:
Logging
Handling bad records and partial results (can't stop a 100K import for 10 errors)
Error handling (which isn't the same as handling bad records)
Monitoring - what has this monster been doing for the last 15 minutes? Did DOP=10 improve performance at all?
Don't create a parent pipeline class.
Some of the steps are common, so at first I created a parent class with common steps that got overridden or simply replaced in child classes. VERY BAD IDEA. Each pipeline is similar but not quite the same, and inheritance means that modifying one step or one connection risks breaking everything. After about a year things became unbearable, so I split the parent class into separate classes.
As @Panagiotis explained, I think you have to put the OOP mindset aside a little.
What you have with Dataflow are building blocks that you configure to execute what you need. I'll try to create a little example of what I mean by that:
// Interface and impl. are in separate files. Actually, they could
// even be in a different project ...
public interface IMyComplicatedTransform
{
    Task<string> TransformFunction(int input);
}

public class MyComplicatedTransform : IMyComplicatedTransform
{
    public Task<string> TransformFunction(int input)
    {
        // Some complex logic; placeholder result for illustration
        return Task.FromResult(input.ToString());
    }
}
class DataFlowUsingClass
{
    private readonly IMyComplicatedTransform myTransformer;
    private TransformBlock<int, string> myTransform; // assigned in CreatePipeline, so not readonly
    // ... some more blocks ...

    public DataFlowUsingClass()
    {
        myTransformer = new MyComplicatedTransform(); // maybe use ctor injection?
        CreatePipeline();
    }

    private void CreatePipeline()
    {
        // create blocks
        myTransform = new TransformBlock<int, string>(myTransformer.TransformFunction);
        // ... init some more blocks
        // TODO link blocks
    }
}
I think this is the closest to what you are looking to do.
What you end up with is a set of interfaces and implementations which can be tested independently. The client basically boils down to "glue code".
Edit: As @Panagiotis correctly states, the interfaces are even superfluous. You could do without them.
I'm currently working on a custom CRM-style solution (EF/Winforms/OData WebApi) and I wonder how to implement a quite simple requirement:
Let's say there is a simple Project entity. It is possible to assign Tasks to it. There is a DefaultTaskResponsible defined in the Project. Whenever a Task is created, the Project's DefaultTaskResponsible is used as the Task.Responsible. But it is possible to change the Task.Responsible and even set it to null.
So, in a 'normal' programming world, I would use a Task constructor accepting the Project and set the Responsible there:
public class Task {
public Task(Project p) {
this.Responsible = p.DefaultTaskResponsible;
...
}
}
But how should I implement something like this in a CRM-World with Lookup views? In Dynamics CRM (or in my custom solution), there is a Task view with a Project Lookup field. It does not make sense to use a custom Task constructor.
Maybe it is possible to use Business Rules in Dynamics CRM and update the Responsible whenever the Project changes (not sure)?! But how should I deal with the WebApi/OData Client?
If I receive a POST to the Task endpoint without a Responsible, I would like to use the DefaultTaskResponsible, e.g.
POST [Organization URI]/api/data/tasks
{
"project#odata.bind":"[Organization URI]/api/data/projects(xxx-1)"
}.
No Responsible was sent (maybe because it is an older client), so use the default one. But if a Responsible is set, the passed value should be used instead, e.g.
POST [Organization URI]/api/data/tasks
{
"project#odata.bind":"[Organization URI]/api/data/projects(xxx-1)",
"responsible#odata.bind": null
}.
In my TaskController I only see the Task model with the Responsible being null, but I don't know if it is null because it was set explicitly or because it wasn't sent in the request.
Is there something wrong with my ideas/concepts? I think it is quite common to initialize properties based on other objects/properties, isn't it?
This question is probably out of scope for this forum, but it is a subject I am interested in. A few thoughts:
A "Task" is a generic construct which traditionally can be associated with many different types of entities. For example, you might not only have tasks associated with Projects, but also with Customer records and Sales records. To run with your code example it would look like:
public Task(Entity parent) {}
Then you have to decide whether or not your defaulting of the Responsible party is specific to Projects, or generic across all Entities which have Tasks. If the latter, then our concept looks like this:
public Task(ITaskEntity parent)
{
this.Responsible = parent.DefaultResponsible; //A property of ITaskEntity
}
This logic should be enforced at the database "pre-operation" level, i.e. when your CRM application receives a request to create a Task, it should make this calculation and then persist the Task to the database. This suggests that you should have a database execution pipeline, where actions can be taken before or after database operations occur. A standard simple execution pipeline looks like this:
Validation -> Pre Operation -> Operation (CRUD) -> Post Operation
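A sketch of what the pre-operation step might do, assuming the request layer records which properties were actually present in the payload (that is the only way to tell "not sent" apart from "explicitly set to null"). All type names here are placeholders:

using System.Collections.Generic;

public class Person { }
public class Project { public Person DefaultTaskResponsible { get; set; } }
public class TaskEntity
{
    public Project Project { get; set; }
    public Person Responsible { get; set; }
}

public class CreateTaskPreOperation
{
    public void Execute(TaskEntity task, ISet<string> propertiesInRequest)
    {
        // Default only when the client did not mention Responsible at all;
        // an explicit null in the payload is respected as-is.
        if (!propertiesInRequest.Contains("responsible") && task.Project != null)
            task.Responsible = task.Project.DefaultTaskResponsible;
    }
}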
Unless you are doing this for fun, I recommend abandoning the project and using an existing CRM system.
public abstract class Unit
{
    public abstract List<Move> allowedMoves { get; }
}

public class Javelineer : Unit
{
    public override List<Move> allowedMoves =>
        new List<Move> {Move.Impale, Move.JavelinThrow, Move.ShieldBlock};
}

public class Dragon : Unit
{
    public override List<Move> allowedMoves =>
        new List<Move> {Move.BreatheFire, Move.Swipe, Move.Bite, Move.Devour, Move.TailBash};
}
The X:
Given the above code, how (if at all) can I retrieve the allowed moves of a given unit without necessarily instantiating a new object?
I know I can retrieve the property with this code:
typeof(Javelineer).GetProperty("allowedMoves")
But how (if at all) can I retrieve the definition of this property?
The Y:
The client (web browser) must send the game server the player's unit. This includes the unit's type and the moves this unit is able to perform (4 out of all available; similarly to Pokemon).
While the validation (of course) is performed on the server, the browser still needs to get a list of available unit types and allowed moves.
In order not to duplicate code, I would like to avoid hard-coding this data in Javascript.
Having read some excellent SO questions & answers I think I can retrieve all available units with code similar to this:
Assembly.GetExecutingAssembly().GetTypes().Where(
type => type.BaseType == typeof(Unit)
).Select(type => type.Name).ToList()
I'd call this code on server startup, cache the result and send the cached result to every connecting client, because I have a feeling this code is likely expensive to call.
But how can I retrieve the list of allowed moves?
You have a couple of options, but TL;DR: Construct the object instance and read the property.
In any case, here are some options; creative minds might be able to find a couple more.
Construct the instance, read the property.
This is your best option code-wise because it will be easy to understand, maintain, bugfix.
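For example (a sketch, assuming every Unit subclass has a parameterless constructor):

using System;
using System.Linq;
using System.Reflection;

// Map each unit type name to its allowed moves by instantiating it once.
var unitMoves = Assembly.GetExecutingAssembly().GetTypes()
    .Where(type => type.BaseType == typeof(Unit))
    .ToDictionary(
        type => type.Name,
        type => ((Unit)Activator.CreateInstance(type)).allowedMoves);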
Rewrite the code to allow for easy detection of the values using reflection
One way to do this would be to use attributes, tagging the property or class with the legal moves. However, to avoid the bug where the attributes say one thing and the code another, you might have to change the code to use the attributes as well, which would be a performance hit.
Additionally, reading those attributes would likely construct many more objects than your original object.
Use Mono.Cecil or some other IL-inspection library to decode the code of the property getter, finding the construction of that list and extracting the values being added to it. You would essentially either have to dumb down the code of that property to be on par with what you have right now (and never allow it to become more complex), or basically simulate execution of the code.
This is like constructing a flotilla of space warships with enough firepower to demolish a local star system, just to kill an ant.
Bottom line, construct the object instance, read the property.
I am writing a piece of software in C# .NET 4.0 and am running into a wall in making sure that the code base is extensible, reusable and flexible in a particular area.
We have data coming into it that needs to be broken down into discrete organizational units. These units will need to be changed, sorted, deleted, and added to as the company grows.
No matter how we slice the data structure we keep running into a boat-load of conditional statements (upwards of 100 or so to start) that we are trying to avoid, allowing us to modify the OUs easily.
We are hoping to find an object-oriented method that would allow us to route the object to different workflows based on properties of that object without having to add switch statements every time.
So, for example, let's say I have an object called "Order" come into the system. This object has 'orderItems' inside of it. Each of those different kinds of 'orderItems' would need to fire a different function in the code to be handled appropriately. Each 'orderItem' has a different workflow. The conditional looks basically like this -
if (order.orderitem == "photo")
{ do this }
else if (order.orderitem == "canvas")
{ do this }
edit: Trying to clarify.
I'm not sure your question is very well defined; you need a lot more specifics here - a sample piece of data, a sample piece of code, what you have tried...
No matter how we slice the data structure we keep running into a boat-load of conditional statements (upwards of 100 or so to start) that we are trying to avoid
This usually means you're trying to encode data in your code - just add a data field (or a few).
Chances are your ifs are linked to each other; it's hard to come up with 100 independent ifs - that would imply you have 100 independent branches for 100 independent data conditions. I haven't encountered anything in my career that would really require hard-coding 100 ifs.
Worst case scenario, you can make an additional data field contain a config file or even a script of your choice. Either way - your data is incomplete if you need 100 ifs.
With the update you've put in your question, here's one simple approach, kind of low-tech. You can do better with dependency injection and some configuration, but that can get excessive too, so be careful:
public class OrderHandler{
    public static Dictionary<string, OrderHandler> Handlers = new Dictionary<string, OrderHandler>(){
        {"photo", new PhotoHandler()},
        {"canvas", new CanvasHandler()},
    };
    public virtual void Handle(Order order){
        // look up the handler registered for this order type and delegate to it
        var handler = Handlers[order.OrderType];
        handler.Handle(order);
    }
}
public class PhotoHandler: OrderHandler{...}
public class CanvasHandler: OrderHandler{...}
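Dispatching then becomes a dictionary lookup instead of an if-chain (assuming Order exposes an OrderType string as above):

OrderHandler.Handlers[order.OrderType].Handle(order);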
What you could do is called - "Message Based Routing" or "Message Content Based" Routing - depending on how you implement it.
In short, instead of using conditional statements in your business logic, you should implement the organizational units so that each looks for the messages it is interested in.
For example:
Say your organization has the following departments - "Plant Products", "Paper Products", "Utilities". Say there is only one place where the orders come in - the Ordering module.
Here is a sample incoming message:
Party:"ABC Cop"
Department: "Plant Product"
Qty: 50
Product: "Some plan"
Publish out a message with this information. Configure the module that processes orders for "Plant Products" so that it listens for messages that have "Department = Plant Products". This way, you push the onus onto the department modules instead of the main ordering module.
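A bus-agnostic sketch of the idea - each department module subscribes with a content filter, and the ordering module just publishes (all names here are hypothetical; a real ESB gives you this plumbing):

using System;
using System.Collections.Generic;

public record OrderMessage(string Party, string Department, int Qty, string Product);

public static class MessageBus
{
    private static readonly List<(Predicate<OrderMessage> Filter, Action<OrderMessage> Handler)> subscribers = new();

    public static void Subscribe(Predicate<OrderMessage> filter, Action<OrderMessage> handler)
        => subscribers.Add((filter, handler));

    public static void Publish(OrderMessage message)
    {
        // Deliver the message to every module whose filter matches its content.
        foreach (var (filter, handler) in subscribers)
            if (filter(message))
                handler(message);
    }
}

// The "Plant Products" module only ever sees its own orders:
// MessageBus.Subscribe(m => m.Department == "Plant Products", ProcessPlantOrder);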
You can do this using NServiceBus, BizTalk, or any other ESB you might already have.
This is how you do it in BizTalk, and this is how you can do it in NServiceBus.
Have you considered sub-typing OrderItem?
public class PhotoOrderItem : OrderItem {}
public class CanvasOrderItem : OrderItem {}
Another option would be to use the Strategy pattern. Add an extra property to your OrderItem class definition for the OrderProcessStrategy and use a PhotoOrderStrategy/CanvasOrderStrategy to contain all of the different logic.
public class OrderItem{
    public IOrderItemStrategy Strategy;
}
public interface IOrderItemStrategy{
    void Checkout();
    Control CheckoutStub {get;}
    bool PreCheckoutValidate();
}
public class PhotoOrderStrategy : IOrderItemStrategy{}
public class CanvasOrderStrategy : IOrderItemStrategy{}
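Checkout code then stops branching on the item type entirely (a sketch, assuming Order exposes its items):

foreach (var item in order.OrderItems)
{
    // Each strategy carries the type-specific validation and checkout logic.
    if (item.Strategy.PreCheckoutValidate())
        item.Strategy.Checkout();
}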
Taking the specific example:
You could have some Evaluator that takes an order and iterates over each line item. Instead of processing if-logic, raise events that carry the photo or canvas details in their event arguments.
Have a collection of 'Initiator' objects that define: 1) a handler that can process Evaluator messages, 2) a simple bool that can be set to indicate whether they know what to do with something in the message, and 3) an Action or Process method which can perform or initiate the workflow. Design an interface to abstract these.
Issue the messages. Visit each Initiator and ask it if it can process the line item; if it can, tell it to do so. The processing is kicked off by the 'Initiators', and they can call other workflows etc.
Name the pieces outlined above whatever best suits your domain. This should offer some flexibility. Problems may arise depending on concurrent processing requirements and workflow dependencies between the Initiators.
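A sketch of those pieces (names are placeholders to adapt to the domain, with Order and OrderItem assumed from the question):

using System.Collections.Generic;

public interface IInitiator
{
    // 2) does this initiator know what to do with the item?
    bool CanProcess(OrderItem item);
    // 3) perform or initiate the workflow for the item
    void Process(OrderItem item);
}

// 1) the evaluator visits each registered initiator per line item
public class Evaluator
{
    private readonly List<IInitiator> initiators = new();

    public void Register(IInitiator initiator) => initiators.Add(initiator);

    public void Evaluate(Order order)
    {
        foreach (var item in order.OrderItems)
            foreach (var initiator in initiators)
                if (initiator.CanProcess(item))
                    initiator.Process(item);
    }
}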
In general, without knowing a lot more detail, size of the project, workflows, use cases etc it is hard to comment.
I am building a library which can parse complex files into a data model. Inside the library, certain messages (info, warnings, errors) might occur during parsing and during some other operations on the data structure I am building.
I need a way for the user of the library to fetch those messages. All the approaches I am thinking about boil down to some static event or a static list of those messages. But I want each object of the data structure to have its own message queue.
An example:
class Program
{
static void Main(string[] args)
{
CalibData cd1 = new CalibData();
cd1.LoadFile(#"C:\tmp\file.ext");
var messageList = cd1.GetMessages();
cd1.DoOtherStuff();
CalibData cd2 = new CalibData();
cd2.LoadFile(#"C:\tmp\file2.ext");
cd2.LoadFile(#"C:\tmp\file3.ext2");
messageList = cd1.GetMessages(); //Do other stuff could have produced new Messages
var messageList2 = cd2.GetMessages();
}
}
Do you have any suggestions on how to implement such behavior? I need something which is globally accessible inside each instance, but where each instance has its own global messenger.
Additional Information:
Internally I am using an ANTLR parser which generates a lot of objects (50,000+). Once the data structure is created, a lot of cross-references are set on the objects, etc. My main problem is that I either have to create a static member to handle this, or pass a messenger from LoadFile() very deep into my function calls for parsing, cross-referencing etc. In my opinion both are rather bad choices. Changing the design is not an option since there is more to my problem. The data structure is stored in 2 files (1 file = description, other file = data). So I can call something like
CalibData cd = new CalibData();
cd.LoadFile("description file"); //after this call the datastructure is built, but it hasn't got any value data yet
cd.LoadFile("data file") //now the structure also has value data
cd.ClearData();
cd.LoadFile("yet another data file"); //same structure different data
It looks like your LoadFile method currently doesn't return anything - why not make it return a data structure containing the errors, warnings etc? No need for anything global or static - or even persistent. Just the result of a method call.
In fact, I'd probably change this slightly so that a separate class (rather than the model itself) was responsible for loading, and the LoadFile call would return a result containing:
Information and errors
The resulting model object
That way any time you have a model you know it contains actual data - rather than it being "ready to load" as it were.
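A sketch of that shape, with a simple Message type standing in for whatever the parser produces:

using System.Collections.Generic;

public sealed class Message
{
    public string Severity { get; init; }   // info / warning / error
    public string Text { get; init; }
}

public sealed class LoadResult
{
    public CalibData Model { get; init; }
    public IReadOnlyList<Message> Messages { get; init; }
}

public class CalibDataLoader
{
    public LoadResult LoadFile(string descriptionPath, string dataPath)
    {
        var messages = new List<Message>();
        var model = new CalibData();
        // ... parse both files, appending to 'messages' as issues come up ...
        return new LoadResult { Model = model, Messages = messages };
    }
}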