I'm currently writing an object dumper (one that allows different dumping strategies).
Of course, I would like to write unit tests to verify that what I develop matches all the features I expect.
However, I can't see how to go about testing this solution.
I have thought about creating a set of objects that record the number of times each of their properties has been accessed. That seems almost OK, but how can I verify that their public fields have been accessed?
Why would you explicitly care how many times the properties have been accessed? I'd just test that the output matches expectations. If there's some reason to prefer one particular strategy (e.g. fields instead of properties), then there's likely to be an easy way of testing that (e.g. make the property return a capitalized version of the underlying field).
I would focus on validating the output rather than verifying properties were accessed. I might read a property but not dump it correctly, right?
This is an example of testing the outcome rather than testing the implementation.
You just have to test that the value dumped is the value that was assigned to the properties/public fields. Just make sure to assign a different value to each property/field.
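For instance, a minimal NUnit-style sketch; ObjectDumper.Dump here is a stand-in for whatever API the dumper ends up exposing:

    using NUnit.Framework;

    // Hypothetical type under test: one property and one public field.
    public class Sample
    {
        public string Name { get; set; }  // property
        public int Count;                 // public field
    }

    [TestFixture]
    public class DumperTests
    {
        [Test]
        public void Dump_IncludesEveryPropertyAndField()
        {
            // Distinct values: if any member is skipped, the output changes.
            var sample = new Sample { Name = "unique-name", Count = 42 };

            string output = ObjectDumper.Dump(sample);

            StringAssert.Contains("unique-name", output);
            StringAssert.Contains("42", output);
        }
    }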
I've read and heard a lot of good things about immutability, so I decided to try it out in one of my hobby projects. I declared all of my fields as readonly, and made all methods that would usually mutate an object return a new, modified version instead.
It worked great until I ran into a situation where a method should, by external contract, return certain information about an object without modifying it, but at the same time could be optimized by modifying the internal structure. In particular, this happens with path compression in a union-find algorithm.
When the user calls int find(int n), the object appears unmodified to an outsider. It represents the same entity conceptually, but its internal fields are mutated to optimize the running time.
How can I implement this in an immutable way?
Short answer: you have to ensure the thread-safety by yourself.
The readonly keyword on a field gives you the guarantee that the field cannot be modified after the object containing it has been constructed.
So the only write to this field is in the constructor (or in the field initializer), and a read through a method call cannot occur before the object is constructed; hence the thread-safety of readonly.
If you want to implement caching, you break the assumption that only one write occurs (since "caching writes" can and will occur during your reads), and thus there can be threading problems in bad cases (imagine you're reading lines from a file: two threads can call the find method with the same parameter but read two different lines and therefore get different results).
What you want to implement is observational immutability. This related question about memoization may help you with an elegant answer.
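To make that concrete, here's a rough sketch of an observationally immutable union-find; the locking strategy and names are just one way to do it:

    using System;

    // Find always returns the same answer for the same structure, but may
    // rewrite internal state (path compression) as a hidden optimization.
    // The lock makes that hidden mutation thread-safe, per the answer above.
    public sealed class UnionFind
    {
        private readonly int[] parent;          // readonly reference, mutable contents
        private readonly object gate = new object();

        public UnionFind(int size)
        {
            parent = new int[size];
            for (int i = 0; i < size; i++) parent[i] = i;
        }

        // Externally a pure query; internally it compresses the path it walked.
        public int Find(int n)
        {
            lock (gate)
            {
                int root = n;
                while (parent[root] != root) root = parent[root];

                while (parent[n] != root)   // point every visited node at the root
                {
                    int next = parent[n];
                    parent[n] = root;
                    n = next;
                }
                return root;
            }
        }

        // A real mutation, kept immutable in style: it returns a new object.
        public UnionFind Union(int a, int b)
        {
            var merged = new UnionFind(parent.Length);
            lock (gate) { Array.Copy(parent, merged.parent, parent.Length); }
            merged.parent[merged.Find(a)] = merged.Find(b);
            return merged;
        }
    }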
I have been working on a project where I have a Worker class that generates a lot of data in a multi-threaded fashion. The type, size, and location of the data is variable based on a large set of parameters that can be set by an end user. Essentially this is a big test harness that I am using to investigate how certain things perform based on a variation of the data. Right now I have at least 12 different parameters for the Worker class. I was thinking about switching over to a separate WorkerOptions class that contains all of these values, and then have the UI create the WorkerOptions object and then pass that into the Worker. However, I could also expose public properties on the Worker class to allow the options to be set appropriately at Worker creation as well.
What is the best way to go about this, and why? I am sure this will generate some differing opinions, but I am open to hearing debate about why different people might do it differently. One thing to consider is that currently, once a Worker is created and running, its configuration doesn't change unless it stops. This could be subject to change, but I don't think it will.
EDIT
I am not normally a C# developer; I know enough to write applications that function and follow common design patterns, but my expertise is in SQL Server, so I might ask follow-up questions to clarify your meaning.
My guideline is that the parameters necessary to use the instance should be passed in the constructor, and all 'optional' parameters should be properties.
The properties are, of course, initialized in the constructor to their default values.
If the number of arguments is not high, I use optional arguments with default values, but 12 is quite a lot.
I forgot to mention the separate class for options. Mostly I don't do such a thing unless there is some 'business logic' inside the options (like checking whether certain option combinations are impossible). If it is just for storage, you end up with a lot of extra references to this options class (instances).
I'd combine the two approaches.
Make your WorkerOptions class use a constructor that requires all the required parameters and allows the optional parameters to be set via an overload, optional arguments, or properties; then pass the options object in as a single argument.
Having the WorkerOptions class gives you a nice DTO to pass around in case refactoring leads you to create an additional layer between the UI and the worker class itself. Using required parameters in its constructor gives you compile-time checking to prevent runtime errors.
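Something like this, say (the parameter names are invented just to show the shape):

    using System;

    public class WorkerOptions
    {
        // Required parameters: there is no way to build the options without them.
        public WorkerOptions(string outputPath, int threadCount)
        {
            if (outputPath == null) throw new ArgumentNullException("outputPath");
            OutputPath = outputPath;
            ThreadCount = threadCount;

            // Optional parameters start at sensible defaults.
            BatchSize = 1000;
            CompressOutput = false;
        }

        public string OutputPath { get; private set; }
        public int ThreadCount { get; private set; }

        public int BatchSize { get; set; }
        public bool CompressOutput { get; set; }
    }

    public class Worker
    {
        private readonly WorkerOptions options;

        public Worker(WorkerOptions options)
        {
            if (options == null) throw new ArgumentNullException("options");
            this.options = options;
        }
    }

Usage then reads naturally: new Worker(new WorkerOptions(@"C:\out", 4) { BatchSize = 500 }).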
Personally, from what you have said, I prefer the WorkerOptions approach. For the following reasons:
It's cleaner: 12 constructor parameters is not out of the question, but it is perhaps a little excessive.
You can apply polymorphism and all the other OO goodness to your WorkerOptions. You might want to define an IWorkerOptions interface at some stage, or use a Builder to construct different subclasses of WorkerOptions.
I would also make all WorkerOptions instances immutable, or at least come up with a 'lock' or 'freeze' mechanism to prevent changes once a Worker has started execution.
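A minimal sketch of such a freeze mechanism, assuming a simple boolean flag suffices (each settable property checks it):

    using System;

    public class WorkerOptions
    {
        private bool frozen;
        private int batchSize;

        public int BatchSize
        {
            get { return batchSize; }
            set { ThrowIfFrozen(); batchSize = value; }
        }

        // Called by the Worker when execution begins; no further changes allowed.
        public void Freeze()
        {
            frozen = true;
        }

        private void ThrowIfFrozen()
        {
            if (frozen)
                throw new InvalidOperationException("Options cannot change while the Worker is running.");
        }
    }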
I'm working on a legacy system that uses stored procs, business objects, and DTOs. The business objects and the DTOs often have the same properties. When calling a method in the service layer that returns a DTO, many transformations happen: stored proc -> dataset -> business object -> DTO. If a new property is added, it sometimes happens that a developer forgets to add the code that moves it from one layer/object to another.
In some parts of the system I solved this by using AutoMapper which will automatically project properties with the same name.
My question is for the other parts. Can I somehow write a unit test that checks if every property in an object has been set/given a value? That way I could write an integration test that calls our service layer and all the transformations have to be successful for the test to pass.
I guess the solution would involve reflection.
Reflection is one way, but it has its caveats: if you set a property to its default value, you will not pick up on the fact that it was set.
You can intercept with a real proxy and then listen for all property changes. See the code here for a base interceptor you can use. Note that interceptors require your object to derive from MarshalByRefObject, which may not be something you want. The other option is to tell your factory to wrap the object before it returns it in the test scenario, something that Ninject and many other inversion-of-control libraries will allow you to do.
Yes, reflection would be the way to go.
It's probably best to perform the unit test against some mock objects, so you have a known value to test for.
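For example, a reflection-based sketch along these lines (it inherits the caveat above: a property deliberately set to its default value looks unset, so seed your mocks with non-default values):

    using System;
    using System.Reflection;

    public static class PropertyChecker
    {
        // Throws if any public property of the object is still at its default value.
        public static void AssertAllPropertiesSet(object dto)
        {
            foreach (PropertyInfo prop in dto.GetType().GetProperties())
            {
                object value = prop.GetValue(dto, null);
                object defaultValue = prop.PropertyType.IsValueType
                    ? Activator.CreateInstance(prop.PropertyType)
                    : null;

                if (Equals(value, defaultValue))
                    throw new Exception("Property not set: " + prop.Name);
            }
        }
    }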
Maybe you could change your BOs/DTOs to implement the INotifyPropertyChanged interface. That way you could set up a listener to tell your unit/integration test which properties were changed.
In the listener you save the list of all changed properties, and with reflection you can check whether there are additional properties that are not in the list.
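A rough sketch of that listener, assuming the DTOs raise PropertyChanged from every setter:

    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;

    public class ChangeListener
    {
        private readonly HashSet<string> changed = new HashSet<string>();
        private readonly INotifyPropertyChanged target;

        public ChangeListener(INotifyPropertyChanged target)
        {
            this.target = target;
            target.PropertyChanged += (s, e) => changed.Add(e.PropertyName);
        }

        // Properties that never raised PropertyChanged, found via reflection.
        public IEnumerable<string> UntouchedProperties()
        {
            return target.GetType().GetProperties()
                         .Select(p => p.Name)
                         .Where(name => !changed.Contains(name));
        }
    }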
I'm working with a list of fonts that I serialize and deserialize using DataContractSerializer. In between the two steps, it's conceivable that the user has removed a font from their machine. I'd like to check a font name as it's being deserialized to ensure that it still exists on the system. If it doesn't exist, that element is not included in the collection returned by DataContractSerializer.ReadObject().
Specifically, I'm storing a FontFamily and serializing a property that gets FontFamily.Name. In this property's set accessor, I convert the string back into a FontFamily.
The only reasonable alternative to validation that I can think of would be having the property's set accessor ignore invalid values and filtering out the invalid deserialized objects later. I don't like this option, however. Is there a better way?
Why not take advantage of the OnDeserializedAttribute? Have your callback do the validation and remove items that are not valid for the client environment.
http://msdn.microsoft.com/en-us/library/ms733734.aspx
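Something along these lines; the FontList type and its members are made up for illustration, and the installed-font check here uses System.Drawing:

    using System.Collections.Generic;
    using System.Drawing;
    using System.Linq;
    using System.Runtime.Serialization;

    [DataContract]
    public class FontList
    {
        [DataMember]
        public List<string> FontNames { get; set; }

        // Runs after DataContractSerializer.ReadObject() has populated the object.
        [OnDeserialized]
        private void RemoveMissingFonts(StreamingContext context)
        {
            var installed = new HashSet<string>(
                FontFamily.Families.Select(f => f.Name));
            FontNames.RemoveAll(name => !installed.Contains(name));
        }
    }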
I do have some concerns about how you would round-trip the data if you remove or modify it under the covers.
(For example: I remember being particularly frustrated by older versions of MS Publisher when I was working on a document on two different machines hooked up to two different printers. Whenever I modified the file on one machine, Publisher would reformat the document to target the printer attached to that machine. When I went back to the other machine, where I was going to do the actual printing, Publisher would reformat again, but the margins would not be quite right, and so I needed to tweak some more.)
You could also implement IXmlSerializable for your class, which would include your own implementation of ReadXml, allowing you to do whatever validation you want as the object is being deserialized.
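A rough sketch of what that could look like (the element names and the FontExists helper are invented):

    using System.Collections.Generic;
    using System.Xml;
    using System.Xml.Schema;
    using System.Xml.Serialization;

    public class FontNameList : IXmlSerializable
    {
        public List<string> Names = new List<string>();

        public XmlSchema GetSchema() { return null; }

        public void ReadXml(XmlReader reader)
        {
            bool empty = reader.IsEmptyElement;
            reader.ReadStartElement();
            if (empty) return;

            while (reader.IsStartElement("Font"))
            {
                string name = reader.ReadElementContentAsString();
                if (FontExists(name))          // validate during deserialization
                    Names.Add(name);
            }
            reader.ReadEndElement();
        }

        public void WriteXml(XmlWriter writer)
        {
            foreach (string name in Names)
                writer.WriteElementString("Font", name);
        }

        private static bool FontExists(string name)
        {
            // Placeholder: check against the installed fonts, as in the
            // OnDeserialized example above.
            return true;
        }
    }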
I've run into this issue quite a few times and never liked the solution chosen. Let's say you have a list of States (just as a simple example) in the database. In your code-behind, you want to be able to reference a State by ID and have the list of them available via IntelliSense.
For example:
States.Arizona.Id //returns a GUID
But the problem is that I don't want to hard-code the GUIDs. Now in the past I've done all of the following:
Create class constants (hard-coding of the worst kind... ugh!)
Create lookup classes that have an ID property (among others) (still hard-coded, and requiring a rebuild of the project whenever anything is updated)
Put all the GUIDs into the .config file, create an enumeration, and within a static constructor load the GUIDs from the .config into a Hashtable keyed by the enumeration items. Then I can do: StateHash[StateEnum.Arizona] (see the sketch below). Nice, because if a GUID changes, no rebuild is required. However, it doesn't help if a new record is added or an old one removed, because the enumeration still needs to be updated.
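Roughly, that third approach looks like this (a sketch using a generic Dictionary in place of the Hashtable; since a static class can't have an indexer, it exposes a Get method, and the .config key names are assumed to match the enum names):

    using System;
    using System.Collections.Generic;
    using System.Configuration;

    public enum StateEnum { Arizona, California /* ... */ }

    public static class StateHash
    {
        private static readonly Dictionary<StateEnum, Guid> ids =
            new Dictionary<StateEnum, Guid>();

        static StateHash()
        {
            // Each enum name doubles as an appSettings key holding its GUID.
            foreach (StateEnum state in Enum.GetValues(typeof(StateEnum)))
                ids[state] = new Guid(ConfigurationManager.AppSettings[state.ToString()]);
        }

        public static Guid Get(StateEnum state) { return ids[state]; }
    }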
So what I'm asking is whether someone has a better solution. Ideally, I'd want to be able to look things up via IntelliSense and not have to rebuild code when there's an update. I'm not even sure that's possible.
EDIT: Using states was just an example (probably a bad one). It could be a list of widgets, car types, etc. if that helps.
Personally, I would store lookup data in a database and simply try to avoid the type of hard coding that binds rules to things like individual states; key off some property of those states instead (like .ApplyDoubleTax or something). And non-logic code doesn't need IntelliSense: it typically just needs to list the items or find one by name, which can be done easily enough however you have stored them.
Equally, I'd load the data once and cache it.
Arguably, coding the logic against states is hard coding anyway, especially if you want to go international anytime soon. I hate it when a site asks me what state I live in...
Re the data changing... is the USA looking to annex anytime soon?
I believe that if it shows up in Intellisense, then, by definition, it is hard-coded into your program.
That said, if your goal is to make the hard-coding as painless as possible, one thing you might try is auto-generating your enumeration based on what's in the database. That is, you can write a program that reads the database and creates a FOO.cs file containing your enumeration. Then just run that program every time the data changes.
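For example, a quick-and-dirty generator along these lines (the table and column names are invented, and it emits readonly Guid fields rather than a true enum, since C# enums can't have Guid values):

    using System.Data.SqlClient;
    using System.IO;
    using System.Text;

    class EnumGenerator
    {
        static void Main()
        {
            var sb = new StringBuilder();
            sb.AppendLine("// Auto-generated; do not edit. Regenerate from the database instead.");
            sb.AppendLine("public static class States");
            sb.AppendLine("{");

            using (var conn = new SqlConnection("your connection string here"))
            using (var cmd = new SqlCommand("SELECT Name, Id FROM State", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // One readonly Guid per row, so IntelliSense picks each one up.
                        sb.AppendLine(string.Format(
                            "    public static readonly System.Guid {0} = new System.Guid(\"{1}\");",
                            reader.GetString(0), reader.GetGuid(1)));
                    }
                }
            }

            sb.AppendLine("}");
            File.WriteAllText("States.cs", sb.ToString());
        }
    }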
This cries out for a custom MSBuild task. You really want an auto-generated enum or class in this case, since the IDs are sourced from a database, can (and will) change, and are not easily predicted. You could then put the task in your project, and it would run before each build, updating the generated code as necessary.
Or start looking at ORMs :)