I have a question. In the framework, which was largely written before generics arrived, you often see a function with lots of overloads that do the same thing for different types.
a)
Parse(int data)
Parse(long data)
Parse(string data)
..etc
That seems to be OK, as it helps keep the code for each method small. On the other hand, with generics you can now do the following:
b)
Parse<T>(T data)
and then have some kind of if/switch statements with typeof() to work out what the types are and what to do with them.
What is best practice? Or what are the ideas that would help me choose between a) and b)?
IMHO, if you need if/switch statements, you are better off overloading. Generics should be used where the implementation does not depend on the concrete type, so that you can still reuse it.
So as a general rule:
overload if there will be a separate implementation for each type
use generics if you can have a single implementation that works for all possible types.
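For instance, a quick sketch of the two situations (the Describe/AddAll names are made up purely for illustration):

using System;
using System.Collections.Generic;

static class Example
{
    // Overloads: the implementation differs per type, so each type gets its own method.
    public static string Describe(int data)    { return "int: " + data; }
    public static string Describe(string data) { return "string: " + data; }

    // Generic: a single implementation that is identical for every T.
    public static void AddAll<T>(ICollection<T> target, IEnumerable<T> items)
    {
        foreach (T item in items)
            target.Add(item);   // nothing here depends on the concrete type behind T
    }
}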
Code Smell.
If you have "some kind of if/switch", that is a code smell that just screams polymorphism. It suggests that generics is not the solution to that problem. Generics should be used when the code does not depend on the concrete types you pass into it.
Watch this Google Tech Talks Video: "The Clean Code Talks -- Inheritance, Polymorphism, & Testing". It addresses specifically what you are talking about.
The pattern you're describing, where the use of generics results in a bunch of if/switch statements, is an anti-pattern.
One solution to this is to implement the strategy pattern, which allows you to use generics while isolating the Parse method from the concern of knowing how to deal with every different case.
Example:
class SomeParser
{
    public void Parse<T>(ParseStrategy<T> parseStrategy, T data)
    {
        //do some prep
        parseStrategy.Parse(data);
        //do something with the result
    }
}

abstract class ParseStrategy<T>
{
    public abstract void Parse(T data);
}

class StringParser : ParseStrategy<string>
{
    public override void Parse(string data)
    {
        //do your string parsing.
    }
}

class IntParser : ParseStrategy<int>
{
    public override void Parse(int data)
    {
        //do your int parsing.
    }
}
//use it like so
[Test]
public void ParseTest()
{
    var someParser = new SomeParser();
    someParser.Parse(new StringParser(), "WHAT WILL THIS PARSE TO");
}
and then you would be able to pass in any of the strategies you develop. This would allow you to properly isolate your concerns across multiple classes and not violate the SRP (single responsibility principle).
One issue here - if you're requiring if/switch statements to make generics work, you probably have a larger problem. In that situation, it is most likely that the generic argument won't work (correctly) for EVERY type, just a fixed set of types you are handling. In that case, you are much better off providing overloads to handle the specific types individually.
This has many advantages:
There is no chance of misuse - your user cannot pass an invalid argument
There is more clarity in your API - it is very obvious which types are appropriate
Generic methods are more complicated to use, and not as obvious for beginning users
The use of a generic suggests that any type is valid - they really should work on every type
The generic method will likely be slower, performance wise
If your argument CAN work with any type, this becomes less clear. In that case, I'd often still consider including the overloads, plus a generic fallback method for other types. This provides a performance boost when you're passing an "expected" type to the method, but you can still work with other, non-expected types.
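A rough sketch of that "overloads plus generic fallback" shape (the Formatter class and the format strings are made up for illustration; overload resolution prefers the non-generic overloads when the compile-time type matches):

using System;

public static class Formatter
{
    // Fast paths for the expected types.
    public static string Format(int value)    { return value.ToString("N0"); }
    public static string Format(double value) { return value.ToString("F2"); }

    // Generic fallback for everything else.
    public static string Format<T>(T value)
    {
        return value == null ? "<null>" : value.ToString();
    }
}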
While there's no one-size-fits-all rule on this, generics should be used when the specific type is irrelevant. That isn't to say that you can't constrain the type, but that specific type doesn't actually matter. In this case, the parsing method is entirely dependent on (and different for) each type. Generics don't seem like a good solution here.
My rule of thumb for this scenario. There is overlap and there are subtleties, but this gets you on the path:
Overloading methods/functions is concerned with handling types, and doesn't make assumptions about interfaces available on the types.
An overloaded function will deal specifically with the type passed to it, and whatever interface it implements. For example, printObject(t) may need to extract a property from 't' and print it manually (e.g. print(t.name) or cout << t.name;).
Generics is concerned with handling the interface(s) implemented, but is not concerned about the type of object.
A generic function will handle any type passed to it, but expect a specific interface implemented (e.g. printObject(t) may just call t.toString() on the object, presuming it's implemented)
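As a small, made-up illustration of the difference (Person and the PrintObject methods are hypothetical):

using System;

class Person { public string Name = "Ada"; }

static class Printer
{
    // Overload: tied to one concrete type, free to use its specific members.
    public static void PrintObject(Person p)
    {
        Console.WriteLine(p.Name);
    }

    // Generic: works for any type, but can only rely on members it can assume
    // exist (here just ToString(), or whatever a constraint would demand).
    public static void PrintObject<T>(T obj)
    {
        Console.WriteLine(obj.ToString());
    }
}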
For example
int GetNum(int x, int y)
{
    return x + y;
}
then call
z = GetNum(myobject.x, myobject.y);
or
int GetNum(ClassA myobject)
{
    return myobject.x + myobject.y;
}
then call
z = GetNum(myobject);
Pass in the property values to reduce coupling between your classes. The class that defines GetNum does not need to know about ClassA in this case.
It is better to reduce coupling/dependencies between classes, as this makes your design more flexible. Where you do supply complex types to methods, supply interfaces instead so that you can vary which particular implementation you pass around. This again makes your design more flexible, and more easily testable.
I think this depends very much on your own coding style.
In my opinion, a method should take an object as an argument if the purpose of that method is linked to that object. For instance, a method to format the fields of an object into a pretty text string should take an object as its arguments.
In your example, the method isn't really related to the object - it could take any two numbers, they don't have to be wrapped up in a particular object to work, so I think the method should take properties as arguments.
I don't think it makes a big difference which style you choose, though.
The general rule I follow is this: does the method require the object, or does it require the property values of the object? The latter makes the method more usable (as it doesn't require users to create an instance of whatever type has these properties).
What you could do, is provide an overload (thus supporting both):
int GetNum(int x, int y) { return (x + y); }
int GetNum(ClassA obj) { return GetNum(obj.X, obj.Y); }
Ask yourself what are the likely use cases for the method, and then ask if you actually require the need to pass in an instance which wraps the values.
Consider that I might have the following method:
public void ProcessMessage(Result result)
{
    // Do something with Result.ReturnedMessage
    result.ReturnedMessage.Process(result.Target);
}
It accepts a single instance of a theoretical Result type, but the reality is, it's only using two arguments, so we could redefine it as:
public void ProcessMessage(Message message, Target target)
{
    message.Process(target);
}
This now makes the method usable in potentially more scenarios, and of course, you could define an overload that just routes from ProcessMessage(Result) to ProcessMessage(Message, Target), etc.
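For example, a minimal sketch of that routing overload, reusing the theoretical Result, Message and Target types from above:

public void ProcessMessage(Message message, Target target)
{
    message.Process(target);
}

// Convenience overload that just unwraps the composite Result.
public void ProcessMessage(Result result)
{
    ProcessMessage(result.ReturnedMessage, result.Target);
}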
On the other hand, if the method forms part of an interface definition:
void ProcessMessage(Result result);
You can't guarantee that a type implementing that method won't require access to more than just the ReturnedMessage and Target properties.
Bottom line is, consider your use cases and how they fit in with the larger design. Also consider future-proofing... how easy is it to spin up our theoretical Result type?
As a footnote, this is a very similar argument as to where to pass a value using a specialised type, or a base type, e.g.:
public void DoSomething(List<string> items);
public void DoSomething(IList<string> items);
public void DoSomething(IEnumerable<string> items);
In the above examples, is your method doing anything that explicitly requires the List<string> type, or will it work with the IList<string> interface definition... or, if you are not adding anything to it at all, how about accepting an IEnumerable<string> instance instead? The less you specialise your method parameters, the more widely they can be used.
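For example, a method that only enumerates can accept IEnumerable<string> and immediately works with lists, arrays and LINQ queries alike (a small sketch, names made up):

using System;
using System.Collections.Generic;
using System.Linq;

static class Demo
{
    // Asks only for what it needs: something it can enumerate.
    static void DoSomething(IEnumerable<string> items)
    {
        foreach (string item in items)
            Console.WriteLine(item);
    }

    static void Main()
    {
        DoSomething(new List<string> { "a", "b" });                      // a List<string>
        DoSomething(new[] { "c", "d" });                                 // an array
        DoSomething(Enumerable.Range(0, 3).Select(i => i.ToString()));   // a LINQ query
    }
}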
In this case, as chibacity says, GetNum doesn't need to know about ClassA. However, you might consider actually adding GetNum as a method of ClassA which calls GetNum, passing in the appropriate values. It's not necessarily right, but if you are going to do it a lot it might make sense.
I think, that like everything else interesting, the answer is "It depends".
The example may have been simplified too much to give enough context to answer the question properly. There seem to be three ways of addressing the solution:
1) int GetNum(ClassA myObject){}
2) int ClassA.GetNum(){}
3) int GetNum(int a, int b) {}
In the past, I would have come down in favour of option 2, making GetNum a member of ClassA. It's none of the caller's business how GetNum works, or where it gets the information from, you (the caller) just want to get a number from a ClassA object. The code of option needs to know about the contents of ClassA, so this is inappropriate, even if it just uses two getters. If we knew more about the context, then maybe option one would be better. I can't tell from the question.
There's a huge interest in improving the testability of code at work, especially legacy code, and unless there's a useful unit testing framework already in place, options 1 and 2 require that an object of ClassA be instantiated for testing. In order to make the code testable (especially if ClassA is expensive or impractical to instantiate), we've been going with something like option 3 (or making it a static member of ClassA). This way, you do not need to instantiate or pass around objects that are difficult to make, AND you can easily add complete code test coverage.
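A small sketch of option 3 in that spirit (the Calculations class name is made up; the test uses the same NUnit-style attributes as the earlier example):

// Static and dependent only on its inputs, so a test never needs a ClassA instance.
public static class Calculations
{
    public static int GetNum(int x, int y)
    {
        return x + y;
    }
}

[Test]
public void GetNum_AddsTheValues()
{
    Assert.AreEqual(5, Calculations.GetNum(2, 3));
}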
As always though, YMMV.
Is there a name for this pattern?
Let's say you want to create a method that takes a variable number of arguments, each of which must be one of a fixed set of types (in any order or combination), and some of those types you have no control over. A common approach would be to have your method take arguments of type Object, and validate the types at runtime:
void MyMethod(params object[] args)
{
    foreach (object arg in args)
    {
        if (arg is SomeType)
            DoSomethingWith((SomeType) arg);
        else if (arg is SomeOtherType)
            DoSomethingElseWith((SomeOtherType) arg);
        // ... etc.
        else
            throw new Exception("bogus arg");
    }
}
However, let's say that, like me, you're obsessed with compile-time type safety, and want to be able to validate your method's argument types at compile time. Here's an approach I came up with:
void MyMethod(params MyArg[] args)
{
    // ... etc.
}

struct MyArg
{
    public readonly object TheRealArg;
    private MyArg(object obj) { this.TheRealArg = obj; }

    // For each type (represented below by "GoodType") that you want your
    // method to accept, define an implicit cast operator as follows:
    public static implicit operator MyArg(GoodType x)
    { return new MyArg(x); }
}
The implicit casts allow you to pass arguments of valid types directly to your routine, without having to explicitly cast or wrap them. If you try to pass a value of an unacceptable type, the error will be caught at compile time.
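Here is what that looks like with two concrete "good types" filled in (int and string are chosen purely for illustration):

using System;

static class Demo
{
    struct MyArg
    {
        public readonly object TheRealArg;
        private MyArg(object obj) { TheRealArg = obj; }

        // One implicit conversion per acceptable type.
        public static implicit operator MyArg(int x)    { return new MyArg(x); }
        public static implicit operator MyArg(string s) { return new MyArg(s); }
    }

    static void MyMethod(params MyArg[] args)
    {
        foreach (MyArg arg in args)
            Console.WriteLine(arg.TheRealArg);
    }

    static void Main()
    {
        MyMethod(1, "two", 3);      // compiles: both types convert implicitly
        // MyMethod(DateTime.Now);  // compile error: no conversion to MyArg
    }
}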
I'm sure others have used this approach, so I'm wondering if there's a name for this pattern.
There doesn't seem to be a named pattern on the Interwebs, but based on Ryan's comment to your question, I vote the name of the pattern should be Variadic Typesafety.
Generally, I would use it very sparingly, but I'm not judging the pattern as good or bad. Many of the commenters have made good points pro and con, which we see for other patterns such as Factory, Service Locator, Dependency Injection, MVVM, etc. It's all about context. So here's a stab at that...
Context
A variable set of disparate objects must be processed.
Use When
Your method can accept a variable number of arguments of disparate types that don't have a common base type.
Your method is widely used (i.e. in many places in code and/or by a great number of users of your framework). The point being that the type safety provides enough of a benefit to warrant its use.
The arguments can be passed in any order, yet the set of disparate types is finite and the only set acceptable to the method.
Expressiveness is your design goal and you don't want to put the onus on the user to create wrappers or adapters (see Alternatives).
Implementation
You already provided that.
Examples
LINQ to XML (e.g. new XElement(...))
Other builders such as those that build SQL parameters.
Processor facades (e.g. those that could accept different types of delegates or command objects from different frameworks) to execute the commands without the need to create explicit command adapters.
Alternatives
Adapter. Accept a variable number of arguments of some adapter type (e.g. Adapter<T> or subclasses of a non-generic Adapter) that the method can work with to produce the desired results. This widens the set your method can use (the types are no longer finite), but nothing is lost if the adapter does the right thing to enable the processing to still work. The disadvantage is the user has the additional burden of specifying existing and/or creating new adapters, which perhaps detracts from the intent (i.e. adds "ceremony", and weakens "essence").
Remove Type Safety. This entails accepting a very base type (e.g. Object) and placing runtime checks. Burden on what to know to pass is passed to the user, but the code is still expressive. Errors don't reveal themselves until runtime.
Composite. Pass a single object that is a composite of others. This requires a pre-method-call build up, but devolves back to using one of the above patterns for the items in the composite's collection.
Fluent API. Replace the single call with a series of specific calls, one for each type of acceptable argument. A canonical example is StringBuilder.
It is an anti-pattern, more commonly known as a Poltergeist.
Update:
If the types of args are always constant and the order does not matter, then create overloads that each take a collection (IEnumerable<T>), changing T in each method to the type you need to operate on (a rough sketch follows below). This will reduce your code complexity by:
removing the MyArg class
eliminating the need for type casting in your MyMethod
adding extra compile-time type safety, in that if you try to call the method with a list of args that it can't handle, you will get a compile error.
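The sketch, reusing the SomeType/SomeOtherType names from the question (callers would group the arguments by type before calling):

void MyMethod(IEnumerable<SomeType> args)
{
    foreach (SomeType arg in args)
        DoSomethingWith(arg);
}

void MyMethod(IEnumerable<SomeOtherType> args)
{
    foreach (SomeOtherType arg in args)
        DoSomethingElseWith(arg);
}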
This looks like a subset of the Composite Pattern. Quoting from Wikipedia:
The composite pattern describes that a group of objects are to be treated in the same way as a single instance of an object.
I'm currently trying to learn Ruby and I'm trying to understand more about what it offers in terms of encapsulation and contracts.
In C# a contract can be defined using an interface. A class which implements the interface must fulfil the terms within the contract by providing an implementation for each method and property (and maybe other things) defined. The individual class that implements an interface can do whatever it needs within the scope of the methods defined by the contract, so long as it accepts the same types of arguments and returns the same type of result.
Is there a way to enforce this kind of thing in Ruby?
Thanks
A simple example of what I mean in C#:
interface IConsole
{
    int MaxControllers { get; }
    void PlayGame(IGame game);
}

class Xbox360 : IConsole
{
    public int MaxControllers
    {
        get { return 4; }
    }

    public void PlayGame(IGame game)
    {
        InsertDisc(game);
        NavigateToMenuItem();
        Click();
    }
}

class NES : IConsole
{
    public int MaxControllers
    {
        get { return 2; }
    }

    public void PlayGame(IGame game)
    {
        InsertCartridge(game);
        TurnOn();
    }
}
There are no interfaces in Ruby, since Ruby is a dynamically typed language. Interfaces are basically used to make different classes interchangeable without breaking type safety. Your code can work with every console as long as it behaves like a console, which in C# means it implements IConsole. "Duck typing" is the keyword you can use to catch up with the dynamic languages' way of dealing with this kind of problem.
Furthermore, you can and should write unit tests to verify the behavior of your code. Every object has a respond_to? method you can use in your assertions.
Ruby has Interfaces just like any other language.
Note that you have to be careful not to conflate the concept of the Interface, which is an abstract specification of the responsibilities, guarantees and protocols of a unit with the concept of the interface which is a keyword in the Java, C# and VB.NET programming languages. In Ruby, we use the former all the time, but the latter simply doesn't exist.
It is very important to distinguish the two. What's important is the Interface, not the interface. The interface tells you pretty much nothing useful. Nothing demonstrates this better than the marker interfaces in Java, which are interfaces that have no members at all: just take a look at java.io.Serializable and java.lang.Cloneable; those two interfaces mean very different things, yet they have the exact same signature.
So, if two interfaces that mean different things, have the same signature, what exactly is the interface even guaranteeing you?
Another good example:
interface ICollection<T> : IEnumerable<T>, IEnumerable
{
    void Add(T item);
}
What is the Interface of System.Collections.Generic.ICollection<T>.Add?
that the length of the collection does not decrease
that all the items that were in the collection before are still there
that item is in the collection
And which of those actually shows up in the interface? None! There is nothing in the interface that says that the Add method must even add at all, it might just as well remove an element from the collection.
This is a perfectly valid implementation of that interface:
class MyCollection<T> : ICollection<T>
{
    public void Add(T item)
    {
        Remove(item);
    }
}
Another example: where in java.util.Set<E> does it actually say that it is, you know, a set? Nowhere! Or more precisely, in the documentation. In English.
In pretty much all cases of interfaces, both from Java and .NET, all the relevant information is actually in the docs, not in the types. So, if the types don't tell you anything interesting anyway, why keep them at all? Why not stick just to documentation? And that's exactly what Ruby does.
Note that there are other languages in which the Interface can actually be described in a meaningful way. However, those languages typically don't call the construct which describes the Interface "interface", they call it type. In a dependently-typed programming language, you can for example express the properties that a sort function returns a collection of the same length as the original, that every element which is in the original is also in the sorted collection and that no bigger element appears before a smaller element.
So, in short: Ruby does not have an equivalent to a Java interface. It does however have an equivalent to a Java Interface, and it's exactly the same as in Java: documentation.
Also, just like in Java, Acceptance Tests can be used to specify Interfaces as well.
In particular, in Ruby, the Interface of an object is determined by what it can do, not what class it is, or what module it mixes in. Any object that has a << method can be appended to. This is very useful in unit tests, where you can simply pass in an Array or a String instead of a more complicated Logger, even though Array and Logger do not share an explicit interface apart from the fact that they both have a method called <<.
Another example is StringIO, which implements the same Interface as IO and thus a large portion of the Interface of File, but without sharing any common ancestor besides Object.
Interfaces are usually introduced to static typed OO languages in order to make up for lack of multiple inheritance. In other words, they are more of a necessary evil than something useful per se.
Ruby, on the other hand:
Is a dynamically typed language with "duck typing", so if you want to call method foo on two objects, they don't need to inherit from the same ancestor class, nor implement the same interface.
Supports multiple inheritance through the concept of mixins; again, no need for interfaces here.
Ruby doesn't really have them; interfaces and contracts generally live more in the static world, rather than the dynamic.
There is a gem called Handshake that can implement informal contracts, if you really need it.
Ruby uses the concept of Modules as a stand-in (kinda) for interfaces. Design Patterns in Ruby has a lot of really great examples of the differences between the two concepts and why Ruby chooses the more flexible alternative to interfaces.
http://www.amazon.com/Design-Patterns-Ruby-Russ-Olsen/dp/0321490452
Jorg has a good point: Ruby has Interfaces, just not the keyword. In reading some of the replies, I think this is a negative of dynamic languages. Instead of enforcing an Interface through the language, you must create unit tests instead of having a compiler catch methods not being implemented. It also makes methods harder to reason about, as you have to hunt down what an object is when you are trying to call it.
Take as an example:
def my_func(options)
...
end
If you look at the function, you have no clue what options is or what methods or properties it has, without hunting for the unit tests, looking at the other places it is called, and even reading the body of the method. Worse yet, the method may not even use those options, but pass them on to further methods. Why write unit tests for something a compiler should have caught? The problem is that you must write code differently to compensate for this downside in dynamic languages.
There is one upside to this though, and that is that dynamic programming languages are FAST for writing a piece of code. I don't have to write any interface declaration, and later I can add new methods and parameters without going to the interface to expose them. The trade-off is speed of writing for ease of maintenance.
Is there ever a reason to use a Type parameter over generics, i.e.:
// this...
void Foo(Type T);
// ...over this.
void Foo<T>();
It seems to me that generics are far more useful in that they provide generic constraints and, with C# 4.0, contravariance and covariance, as well as probably some other features I don't know about. It would seem to me that the generic form has all the pluses, and no negatives that the first doesn't also share. So, are there any cases where you'd use the first instead?
Absolutely: when you don't know the type until execution time. For example:
foreach (Type t in someAssembly.GetTypes())
{
    Foo(t);
}
Doing that when Foo is generic is painful. It's doable but painful.
It also allows the parameter to be null, which can be helpful in some situations.
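For reference, a sketch of what "doable but painful" looks like when Foo is generic (the names here are made up):

using System;
using System.Reflection;

class Program
{
    static void Foo<T>() { Console.WriteLine(typeof(T)); }

    // The reflective dance needed to call Foo<T> when T is only known at runtime.
    static void CallFooFor(Type t)
    {
        MethodInfo open = typeof(Program).GetMethod(
            "Foo", BindingFlags.NonPublic | BindingFlags.Static);
        open.MakeGenericMethod(t).Invoke(null, null);
    }

    static void Main()
    {
        CallFooFor(typeof(string));   // prints System.String
    }
}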
Well they really aren't the same at all.
With the second one, you've actually got the compile-time type there (whatever has been passed in by the caller). So you can call specific functions on it (say, if it's constrained to some given interface).
In your first example, you've got an object of class 'Type'. So you'll still need to determine what it is and do a cast to do anything useful with it.
Foo(someVariable.GetType()); // allowed
Foo<someVariable.GetType()>(); // illegal
I'm very excited about the dynamic features in C# (the C# 4 dynamic keyword - why not?), especially because in certain library parts of my code I use a lot of reflection.
My question is twofold:
1. does "dynamic" replace Generics, as in the case below?
Generics method:
public static void Do_Something_If_Object_Not_Null<SomeType>(SomeType ObjToTest) {
    //test object is not null, regardless of its Type
    if (!EqualityComparer<SomeType>.Default.Equals(ObjToTest, default(SomeType))) {
        //do something
    }
}
dynamic method(??):
public static void Do_Something_If_Object_Not_Null(dynamic ObjToTest) {
    //test object is not null, regardless of its Type?? but how?
    if (ObjToTest != null) {
        //do something
    }
}
2. does "dynamic" now allow for methods to return Anonymous types, as in the case below?:
public static List<dynamic> ReturnAnonymousType() {
    return MyDataContext.SomeEntities.Entity.Select(e => new { e.Property1, e.Property2 }).ToList();
}
cool, cheers
EDIT:
Having thought through my question a little more, and in light of the answers, I see I completely messed up the main generic/dynamic question. They are indeed completely different. So yeah, thanks for all the info.
What about point 2 though?
dynamic might simplify a limited number of reflection scenarios (where you know the member-name up front, but there is no interface) - in particular, it might help with generic operators (although other answers exist) - but other than the generic operators trick, there is little crossover with generics.
Generics allow you to know (at compile time) about the type you are working with - conversely, dynamic doesn't care about the type.
In particular - generics allow you to specify and prove a number of conditions about a type - i.e. it might implement some interface, or have a public parameterless constructor. dynamic doesn't help with either: it doesn't support interfaces, and worse than simply not caring about interfaces, it means that we can't even see explicit interface implementations with dynamic.
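A small sketch of the explicit-interface point (IGreeter/Greeter are made-up names):

using System;

interface IGreeter { string Greet(); }

class Greeter : IGreeter
{
    // Explicit implementation: only visible through the interface.
    string IGreeter.Greet() { return "hello"; }
}

class Demo
{
    static void Main()
    {
        IGreeter typed = new Greeter();
        Console.WriteLine(typed.Greet());   // fine: bound via the interface

        dynamic d = new Greeter();
        Console.WriteLine(d.Greet());       // RuntimeBinderException at runtime
    }
}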
Additionally, dynamic is really a special case of object, so boxing comes into play, but with a vengeance.
In reality, you should limit your use of dynamic to a few cases:
COM interop
DLR interop
maybe some light duck typing
maybe some generic operators
For all other cases, generics and regular C# are the way to go.
To answer your question: no.
Generics give you "algorithm reuse" - you write code independent of a data type. The dynamic keyword doesn't do anything related to this. I define List<T> and then I can use it for lists of strings, ints, etc...
Type safety: the whole compile-time checking debate. Dynamic variables will not alert you with compile-time warnings/errors if you make a mistake; they will just blow up at runtime if the method you attempt to invoke is missing. The static vs. dynamic typing debate again.
Performance: generics improve the performance of algorithms/code using value types significantly, because they prevent the whole boxing-unboxing cycle that cost us pre-generics. Dynamic doesn't do anything for this either.
What the dynamic keyword would give you is
simpler code (when you are interoperating with Excel, let's say). You don't need to specify the names of the classes or the object model. If you invoke the right methods, the runtime will take care of invoking the method if it exists on the object at that time. The compiler lets you get away with it even if the method is not defined. However, this implies that the call will be slower than a compiler-verified/statically typed method call, since the CLR has to perform checks before a dynamic field/method invocation.
A dynamic variable can hold different types of objects at different points in time - you're not bound to a specific family or type of objects.
To answer your first question: generics are resolved at compile time, dynamic types at runtime. So there is a definite difference in type safety and speed.
Dynamic types and generics are completely different concepts. With generics you define types at compile time. They don't change; they are not dynamic. You just put a "placeholder" on some class or method so that the calling code defines the concrete type.
Dynamic members are resolved at runtime. You don't have compile-time type safety there. Using dynamic is similar to holding object references and calling methods by their string names using reflection.
Answer to the second question: you could already return anonymous types in C# 3.0. Cast the instance to object, return it, and use reflection to access its members. The dynamic keyword is just syntactic sugar for that.
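A small sketch of that "cast to object, read the members later" idea, with dynamic as the sugar (note this only binds within a single assembly, because anonymous types are internal):

using System;

class Demo
{
    // Returns an anonymous instance typed as object, as described above.
    static object MakePoint()
    {
        return new { X = 1, Y = 2 };
    }

    static void Main()
    {
        dynamic p = MakePoint();
        Console.WriteLine(p.X + p.Y);   // bound at runtime; prints 3
    }
}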