Whenever I want to stub a method in an otherwise trivial class, I most often extract an interface.
Now, if the class's constructor is public and isn't too complex or dependent on complex types, making the method in question virtual and inheriting would have the same effect.
Is this preferable over extracting an interface? If so, why?
Edit:
class Parser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // is slow even when s is a MemoryStream
    }
}
There are two ways: either extract an interface or make the method virtual. I actually prefer interfaces, but that could lead to an explosion of IParser/Parser pairs...
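For reference, a minimal sketch of the two options (the stub names such as VirtualParser and StubParser are illustrative, not from the question):

using System.Collections.Generic;
using System.IO;

// Option 1: extract an interface; the test double implements it.
public interface IParser
{
    IDictionary<string, int> DoLengthyParseTask(Stream s);
}

public class Parser : IParser
{
    public IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // ... the real, slow parsing work ...
        return new Dictionary<string, int>();
    }
}

// Option 2: make the method virtual; the test double inherits and overrides.
public class VirtualParser
{
    public virtual IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // ... the real, slow parsing work ...
        return new Dictionary<string, int>();
    }
}

public class StubParser : VirtualParser
{
    public override IDictionary<string, int> DoLengthyParseTask(Stream s)
        => new Dictionary<string, int> { ["stubbed"] = 42 };
}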
You need to consider what you are trying to accomplish outside of your unit testing. Do not let your tool dictate your design.
Dealing in interfaces can help decouple your code, but these should be natural points of separation in your code (e.g. business logic or data access). Making methods virtual makes sense if you are going to inherit and overwrite those methods.
In your case, I would attempt to test the behavior that uses DoLengthyParseTask rather than the method directly. This will provide a more robust test suite as well. You also need to carefully consider whether this method really needs to be public (meaning it can and should be referenced outside its own assembly).
Interfaces just make a contract for you: basically a promise that your implementation will provide access to a specified set of access points (methods, properties, etc.), with no specification of behaviour. You are free to do whatever you want as long as you honor the promise.
A base class, on the other hand, specifies a contract plus at least some behaviour that is coded in the class (unless everything is abstract, but that is another story). Making a method virtual still lets you call into the base implementation while providing your own code along with it.
This inheritance of behaviour is basically the reason why multiple inheritance is a no-no in modern OOP, and multiple interface implementation is relatively common.
That said, you need to weigh whether you just want to extract a contract or you want to extract some behaviour as well, and the answer should be obvious for a specific case.
As for the IParser / Parser pairs, first they are great for unit testing and for dependency injection, and second, they do not charge you for class creation, so feel free to create as many as you want.
By programming to an interface you get benefits of ease of mocking/stubbing in unit testing and loosely coupled code (and as a result, much higher flexibility), literally for free (the only drawback is more artifacts to manage).
Interfaces and inheritance are two separate things, and it's not a good idea to use them interchangeably, even though it's possible. By marking a method virtual you're essentially telling others not only that they're free to change (override) it in their implementations, but that you actually expect them to (and do you?).
Such design comes with rather heavy consequences, so unless you explicitly need it - you shouldn't use it. Try sticking to programming to interface instead.
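As a concrete illustration of programming to the interface in a test - a sketch assuming NSubstitute (which comes up later in this thread) and the IParser shape sketched above:

using System.Collections.Generic;
using System.IO;
using NSubstitute;

// Stub via the interface: no real parsing code can run by accident.
var parser = Substitute.For<IParser>();
parser.DoLengthyParseTask(Arg.Any<Stream>())
      .Returns(new Dictionary<string, int> { ["token"] = 1 });

// The class under test receives the stub instead of the slow parser.
IDictionary<string, int> result = parser.DoLengthyParseTask(Stream.Null);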
Good object-oriented design principles state that you should program to an interface (design by contract, Liskov Substitution Principle) and prefer composition over inheritance (not only should your classes implement interfaces/abstract classes, they should also be composed of such implementations).
It's worth noticing that your Parser example makes a perfect candidate to be hidden behind an abstraction (be it an interface or a base class). From its consumer's point of view it doesn't matter how the data is created - for now you might think it's an XML stream only, but requirements tend to change (and/or grow), and you might soon find yourself implementing a binary file parser, a data-stream-mining parser, and who knows what else. Do it properly now, and save yourself time and trouble later.
Most of the examples on mocking framework websites mock interfaces. Take NSubstitute, which I'm currently using: all of their mocking examples mock interfaces.
But in practice, I've seen some developers mock concrete classes instead. Is mocking concrete classes recommended?
In theory there is absolutely no problem mocking a concrete class; we are testing against a logical interface (rather than a keyword interface), and it does not matter whether that logical interface is provided by a class or interface.
In practice, .NET/C# makes this a bit problematic. As you mentioned a .NET mocking framework, I'm going to assume you're restricted to that.
In .NET/C#, members are non-virtual by default, so any proxy-based method of mocking behaviour (i.e. deriving from the class and overriding all the members to do test-specific stuff) will not work unless you explicitly mark the members as virtual. This leads to a problem: you are using an instance of a mocked class that is meant to be completely safe in your unit test (i.e. won't run any real code), but unless you have made sure everything is virtual you may end up with a mix of real and mocked code running. This can be especially problematic if there is constructor logic, which always runs, and is compounded if there are other concrete dependencies to be new'd up.
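To illustrate the trap (a sketch; OrderService and Audit are hypothetical stand-ins, not types from any framework):

// Stand-in for any concrete dependency reached from the constructor.
public static class Audit
{
    public static void Log(string message) { /* e.g. writes somewhere real */ }
}

public class OrderService
{
    public OrderService()
    {
        // Constructor logic always runs, even when a proxy-based mock
        // derives from this class.
        Audit.Log("OrderService created");
    }

    // Virtual: a proxy-based mock CAN intercept this.
    public virtual decimal GetDiscount(string customerId) => 0m;

    // Non-virtual: this REAL code runs even on the "mock",
    // giving a mix of real and substituted behaviour.
    public decimal Total(string customerId, decimal subtotal)
        => subtotal - GetDiscount(customerId);
}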
There are a few ways to work around this.
Use interfaces. This works and is what we advise in the NSubstitute documentation, but has the downside of potentially bloating your codebase with interfaces that may not actually be needed. Arguably if we find good abstractions in our code we'll naturally end up with neat, reusable interfaces we can test to. I haven't quite seen it pan out like that, but YMMV. :)
Diligently go around making everything virtual. An arguable downside to this is that we're suggesting all these members are intended to be extension points in our design, when we really just want to change the behaviour of the whole class for testing. It also doesn't stop constructor logic running, nor does it help if the concrete class requires other dependencies.
Use assembly re-writing via something like the Virtuosity add-in for Fody, which you can use to modify all class members in your assembly to be virtual.
Use a non-proxy based mocking library like TypeMock (paid), JustMock (paid), Microsoft Fakes (requires VS Ultimate/Enterprise, though its predecessor, Microsoft Moles, is free) or Prig (free + open source). I believe these are able to mock all aspects of classes, as well as static members.
A common complaint lodged against the last idea is that you are testing via a "fake" seam; we are going outside the mechanisms normally used for extending code to change the behaviour of our code. Needing to go outside these mechanisms could indicate rigidity in our design. I understand this argument, but I've seen cases where the noise of creating another interface/s outweighs the benefits. I guess it's a matter of being aware of the potential design issue; if you don't need that feedback from the tests to highlight design rigidity then they're great solutions.
A final idea I'll throw out there is to play around with changing the size of the units in our tests. Typically we have a single class as a unit. If we have a number of cohesive classes as our unit, and have interfaces acting as a well-defined boundary around that component, then we can avoid having to mock as many classes and instead just mock over a more stable boundary. This can make our tests a bit more complicated, with the advantage that we're testing a cohesive unit of functionality and being encouraged to develop solid interfaces around that unit.
Hope this helps.
Update:
3 years later I want to admit that I changed my mind.
In theory I still do not like to create interfaces just to facilitate the creation of mock objects. In practice (I am using NSubstitute) it is much easier to use Substitute.For<MyInterface>() than to mock a real class with multiple constructor parameters, e.g. Substitute.For<MyClass>(mockedParam1, mockedParam2, mockedParam3), where each parameter must be mocked separately. Other potential troubles are described in the NSubstitute documentation.
In our company the recommended practice now is to use interfaces.
Original answer:
If you don't have a requirement to create multiple implementations of the same abstraction, do not create an interface.
As pointed out by David Tchepak, you don't want to bloat your codebase with interfaces that may not actually be needed.
From http://blog.ploeh.dk/2010/12/02/InterfacesAreNotAbstractions.aspx
Do you extract interfaces from your classes to enable loose coupling? If so, you probably have a 1:1 relationship between your interfaces and the concrete classes that implement them.
That's probably not a good sign, and violates the Reused Abstractions Principle (RAP).
Having only one implementation of a given interface is a code smell.
If your target is testability, I prefer the second option from David Tchepak's answer above.
However, I am not convinced that you have to make everything virtual; it's sufficient to make virtual only the methods you are going to substitute.
I also add a comment next to the method declaration noting that the method is virtual only to make it substitutable for unit-test mocking.
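The annotation I have in mind might look like this (a sketch; the Parser shape is borrowed from the question at the top of this thread):

using System.Collections.Generic;
using System.IO;

public class Parser
{
    // Virtual ONLY so that unit tests can substitute this method;
    // it is not intended as a general extension point.
    public virtual IDictionary<string, int> DoLengthyParseTask(Stream s)
    {
        // ... real (slow) parsing work elided ...
        return new Dictionary<string, int>();
    }
}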
However, note that substituting concrete classes instead of interfaces has some limitations. E.g., for NSubstitute:
Note: Recursive substitutes will not be created for classes, as creating and using classes can have potentially unwanted side-effects.
The question is rather: Why not?
I can think of a couple of scenarios where this is useful, like:
Implementation of a concrete class is not yet complete, or the guy who did it is unreliable. So I mock the class as it is specified and test my code against it.
It can also be useful to mock classes that do things like database access. If you don't have a test database you might want to return values for your tests that are always constant (which is easy by mocking the class).
It's not that it's recommended; it's that you can do this if you have no other choice.
Usually, well-designed projects rely on defining interfaces for your separate components so you can test each of them in isolation by mocking the others. But if you are working with legacy code, or code that you are not allowed to change, and you still want to test your classes, then you have no choice, and you cannot be criticized for it (assuming you made the effort to switch these components to interfaces and were denied the right to).
Suppose we have:
class Foo {
    fun bar() = if (someCondition) {
        "Yes"
    } else {
        "No"
    }
}
There's nothing preventing us from doing the following mocking in test code:
val foo = mock<Foo>()
whenever(foo.bar()).thenReturn("Maybe")
The problem is that this sets up incorrect behavior for class Foo: a real instance of Foo will never be able to return "Maybe".
I can see the advantage of abstract classes and interfaces for coordination within a development team, or for code that might be further developed by others.
But if that's not the case, is there a reason to use them at all? What would happen if I omitted them?
Abstract – I'd be able to instantiate the class. No problem; if that doesn't make sense, I just won't.
Interface – I have that functionality declared in all the classes implementing it anyway.
Note: I'm not asking what they are. I'm asking whether they're helpful for anything but coordination.
Both are what I call contracts, and an individual developer can use them in the following ways (a sketch contrasting the two follows the lists below):
Abstract
Allows for polymorphism across differing derived implementations.
Lets you create base functionality and dictate (or not) that derived classes implement certain methods.
Allows a default operation to be consumed at runtime if the derived class does not implement, or is not required to implement, it.
Provides consistency across derived objects, so a base-class reference can be used without having the actual derived type; this allows generic operations on a derived object through a base-class reference, similar to an interface at runtime.
Interface
Allows a generic pattern of usage as a de facto contract of operation(s).
This usage can be targeted to the process at hand, allowing surgically precise operations for that contract.
Helps with factory patterns (it's the object returned), with mocking of data during unit tests, and with replacing an existing class (say, one returned from a factory) with a different object without causing consumers of the factory any refactoring pain, thanks to adherence to the interface contract.
Provides a pattern of usage that can be easily understood apart from the static of the rest of the class's implementation.
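To make the contrast concrete, here is a minimal sketch (the exporter names are illustrative) showing an interface as pure contract next to an abstract base that adds shared behaviour, a dictated member, and an overridable default:

// Interface: pure contract, no behaviour.
public interface IExporter
{
    void Export(string path);
}

// Abstract class: the same contract plus shared behaviour.
public abstract class ExporterBase : IExporter
{
    public void Export(string path)
    {
        Validate(path);      // shared base behaviour
        WriteDocument(path); // dictated to the derived class
    }

    // Default operation; derived classes may override but are not required to.
    protected virtual void Validate(string path) { }

    // Derived classes are required to implement this.
    protected abstract void WriteDocument(string path);
}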
Long story short: are they required to get the job done? No.
But if you are into designing systems that will have a lifespan of more than one cycle, the upfront work by said architect will pay off in the long run, whether on a team or as an individual.
Update:
I do practice what I preach, and when handing off a project to other developers it was nice to be able to say:
Look at the interface IProcess which all the primary business classes adhere to. That process defines a system of goals which can help you understand the purpose and the execution of the business logic in a defined way.
While maintaining the project and adding new functionality, the interfaces actually helped me remember the flow and easily slot new business logic into the project.
I think if you're not coordinating with others, it does two things:
It helps keep you from doing weird things to your own code. Imagine you write a class and use it in multiple projects. You may evolve it in one project so that it is unrecognizable from its cousin in another project. Having an abstract class or interface makes you think twice about changing the function signatures.
It gives you flexibility going forward - plenty of classic examples here. Use the generic form of the thing you're trying to accomplish, and if you decide you need a different kind later (stream readers are a great example, right?) you can more easily implement it later.
Abstract - you can instantiate a child of it, but what is more important, it can have its own non-abstract methods and fields.
Interface - "rougher" than an abstract class, but in .NET you can implement multiple interfaces. So by defining interfaces you can lead the consumer of your interface(s) to subscribe to different contracts (interfaces), presenting different "shapes" of the specified type.
There are many reasons to use either construct even if you are not coordinating with anyone. The main one is that both help express developer intent, which may help you later figure out why you chose the design you did. They also allow for further extensibility.
An abstract class lets you define one common implementation that is shared across many derived classes while delegating some behavior to the child classes. It supports the DRY (don't repeat yourself) principle by avoiding the same code being repeated everywhere.
Interfaces express that your class implements a specific contract. This has several useful applications within the framework, among which (a sketch follows this list):
Use of library functionality that necessitates the implementation of some interface. Examples are IDisposable, IEquatable, IEnumerable...
Use of constraints in generics.
Allowing mocking of interfaces (if you do unit testing) without having to instantiate a real object.
Use of COM objects
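For the generics-constraints point above, a small sketch (Sequence.Max is an illustrative helper, not a framework API):

using System;
using System.Collections.Generic;

public static class Sequence
{
    // The where-clause is the generic constraint: T must honour the
    // IComparable<T> contract, which is what makes CompareTo available
    // without knowing the concrete type.
    public static T Max<T>(IEnumerable<T> items) where T : IComparable<T>
    {
        using var e = items.GetEnumerator();
        if (!e.MoveNext()) throw new InvalidOperationException("empty sequence");
        T best = e.Current;
        while (e.MoveNext())
            if (e.Current.CompareTo(best) > 0) best = e.Current;
        return best;
    }
}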
In a big project I work for, I am considering recommending other programmers to always seal their classes if they haven't considered how their classes should be subclassed. Often times, less-experienced programmers never consider this.
I find it odd that in Java and C# classes are non-sealed / non-final by default. I think making classes sealed greatly improves readability of the code.
Notice that this is in-house code that we can always change should the rare case occur that we need to subclass.
What are your experiences? I meet quite some resistance to this idea. Are people so lazy they can't be bothered to type sealed?
Okay, as so many other people have weighed in...
Yes, I think it's entirely reasonable to recommend that classes are sealed by default.
This goes along with the recommendation from Josh Bloch in his excellent book Effective Java, 2nd edition:
Design for inheritance, or prohibit it.
Designing for inheritance is hard, and can make your implementation less flexible, especially if you have virtual methods, one of which calls the other. Maybe they're overloads, maybe they're not. The fact that one calls the other must be documented otherwise you can't override either method safely - you don't know when it'll be called, or whether you're safe to call the other method without risking a stack overflow.
Now if you later want to change which method calls which in a later version, you can't - you'll potentially break subclasses. So in the name of "flexibility" you've actually made the implementation less flexible, and had to document your implementation details more closely. That doesn't sound like a great idea to me.
Next up is immutability - I like immutable types. I find them easier to reason about than mutable types. It's one reason why the Joda Time API is nicer than using Date and Calendar in Java. But an unsealed class can never be known to be immutable. If I accept a parameter of type Foo, I may be able to rely on the properties declared in Foo not changing over time, but I can't rely on the object itself not being modified - there could be a mutable property in the subclass. Heaven help me if that property is also used by an override of some virtual method. Wave goodbye to many of the benefits of immutability. (Ironically, Joda Time has very large inheritance hierarchies - often with things saying "subclasses should be immutable". The large inheritance hierarchy of Chronology made it hard to understand when porting to C#.)
Finally, there's the aspect of overuse of inheritance. Personally I favour composition over inheritance where feasible. I love polymorphism for interfaces, and occasionally I use inheritance of implementation - but it's rarely a great fit in my experience. Making classes sealed avoids them being inappropriately derived from where composition would be a better fit.
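A minimal sketch of the sealed, immutable style these paragraphs argue for (DateRange is an illustrative type, not a framework one):

using System;

// Sealed + get-only properties: callers can rely on instances never changing,
// because no subclass can add mutable state or override behaviour.
public sealed class DateRange
{
    public DateTime Start { get; }
    public DateTime End { get; }

    public DateRange(DateTime start, DateTime end)
    {
        if (end < start) throw new ArgumentException("end precedes start");
        Start = start;
        End = end;
    }

    // "Mutation" returns a new instance instead of modifying this one.
    public DateRange WithEnd(DateTime newEnd) => new DateRange(Start, newEnd);
}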
EDIT: I'd also like to point readers at Eric Lippert's blog post from 2004 on why so many of the framework classes are sealed. There are plenty of places where I wish .NET provided an interface we could work to for testability, but that's a slightly different request...
It is my opinion that architectural design decisions are made to communicate to other developers (including future maintenance developers) something important.
Sealing classes communicates that the implementation should not be overridden. It communicates that the class should not be impersonated. There are good reasons to seal.
If you take the unusual approach of sealing everything (and this is unusual), then your design decisions now communicate things that are really not important - like that the class wasn't intended to be inherited by the original/authoring developer.
But then how would you communicate to other developers that the class should not be inherited because of something? You really can't. You are stuck.
Also, sealing a class doesn't improve readability. I just don't see that. If inheritance is a problem in OOP development, then we have a much larger problem.
I'd like to think that I'm a reasonably-experienced programmer and, if I've learned nothing else, it's that I am remarkably bad at predicting the future.
Typing sealed is not hard, I just don't want to irritate a developer down the road (who could be me!) who discovers that a problem could be easily solved with a little inheritance.
I also have no idea how sealing a class makes it more readable. Are you trying to force people to prefer composition to inheritance?
© Jeffrey Richter
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. However, once a class is unsealed, you can never change it to sealed in the future, as this would break all derived classes. In addition, if the unsealed class defines any unsealed virtual methods, ordering of the virtual method calls must be maintained with new versions or there is the potential of breaking derived types in the future.
Performance: As discussed in the previous section, calling a virtual method doesn't perform as well as calling a nonvirtual method because the CLR must look up the type of the object at runtime in order to determine which type defines the method to call. However, if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method nonvirtually. It can do this because it knows there can't possibly be a derived class if the class is sealed.
Security and predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class's state if any data fields or methods that internally manipulate fields are accessible and not private. In addition, a virtual method can be overridden by a derived class, and the derived class can decide whether to call the base class's implementation. By making a method, property, or event virtual, the base class is giving up some control over its behavior and its state. Unless carefully thought out, this can cause the object to behave unpredictably, and it opens up potential security holes.
There shouldn't be anything wrong with inheriting from a class.
You should seal a class only when there's a good reason why it should never be inherited.
Besides, if you seal them all, it will only decrease maintainability. Every time someone wants to inherit from one of your classes, he will see it is sealed, and then he'll either remove the seal (messing with code he shouldn't have to touch) or, worse, create a poor reimplementation of your class for himself.
Then you'll have two implementations of the same thing, one probably worse than the other, and two pieces of code to maintain.
Better just keep it unsealed. No harm in it being unsealed.
Frankly, I think that classes not being sealed by default in C# is kind of weird and out of place with how the rest of the defaults in the language work.
By default, classes are internal.
By default, fields are private.
By default, members are private.
There seems to be a trend pointing to least plausible access by default. It would stand to reason that an unsealed keyword should exist in C# instead of sealed.
Personally I'd rather classes were sealed by default. On most occasions when someone writes a class, he is not designing it with subclassing in mind and all the complexities that come along with it. Designing for future subclassing should be a conscious act, and therefore I'd rather you explicitly had to state it.
"...consider[ing] how their classes should be sub classed..." shouldn't matter.
At least a half dozen times over the past few years I've found myself cursing some open source team or another for a sloppy mix of protected and private, making it impossible to simply extend a class without copying the source of the entire parent class. (In most cases, overriding a particular method required access to private members.)
One example was a JSTL tag that almost did what I wanted. I need to override one small thing. Nope, sorry, I had to completely copy the source of the parent.
I only seal classes if I am working on a reusable component that I intend to distribute and don't want end users to inherit from, or, as a system architect, if I know I don't want another developer on the team to inherit from it. Either way, there is usually some specific reason for it.
Just because a class isn't being inherited from, I don't think it should automatically be marked sealed. Also, it annoys me to no end when I want to do something tricky in .NET, but then realize MS marks tons of their classes sealed.
This is a very opinionated question that's likely to garner some very opinionated answers ;-)
That said, in my opinion, I strongly prefer NOT making my classes sealed/final, particularly at the beginning. Doing so makes it very difficult to provide for the intended extensibility points, and it's nearly impossible to get them right at the beginning. IMHO, overuse of encapsulation is worse than overuse of polymorphism.
Your house, your rule.
You can also have the complementary rule instead: a class that can be subclassed must be annotated; nobody should subclass a class that's not annotated so. This rule is not harder to follow than your rule.
The main purpose of a sealed class is to take away the inheritance feature from users, so they cannot derive a class from the sealed class. Are you sure you want to do that? Or do you want to start with all classes sealed, and then unseal a class whenever you need to inherit from it? That might be OK when everything is in-house and in one team, but if other teams use your DLLs in the future, it will not be possible to recompile the whole source code every time a class needs to be unsealed.
I won't recommend this, but that's just my opinion.
I don't like that way of thinking. Java and C# are designed to be OOP languages, in which a class can have a parent or a child. That's it.
Some people say that we should always start from the most restrictive modifier (private, protected...) and make a member public only when it is used externally. These people are, to me, lazy and don't want to think about good design at the beginning of the project.
My answer is: design your apps well now. Make a class sealed when it needs to be sealed and private when it needs to be private. Don't make them sealed by default.
I find that sealed / final classes are actually pretty rare, in my experience; I would certainly not recommend suggesting all classes be sealed / final by default. That specification makes a certain statement about the state of the code (i.e., that it's complete) that is not necessarily always true during development time.
I'll also add that leaving a class unsealed requires more design / test effort to make sure that the exposed behaviours are well-defined and tested; heavy unit testing is critical, IMO, to achieve a level of confidence in the behaviour of the class that appears to be desired by the proponents of "sealed". But IMO, that increased level of effort translates directly to a high level of confidence and to higher quality code.
What is the best approach for defining interfaces in either C# or Java? Should we make them complete up front, or add methods as and when a real need arises?
Once an interface is defined, it is intended to not be changed.
You have to be thoughtful about the purpose of the interface and be as complete as possible.
If you find the need, later, to add a method, really you should define a new interface, possibly a _V2 interface, with the additional method.
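For instance, reusing the parser example from earlier in this thread (names illustrative), a V2 interface extends rather than breaks the shipped contract:

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

// Shipped in version 1; must never change.
public interface IParser
{
    IDictionary<string, int> Parse(Stream s);
}

// Version 2 adds the new member on a new interface instead of
// breaking every existing implementation of IParser.
public interface IParserV2 : IParser
{
    Task<IDictionary<string, int>> ParseAsync(Stream s);
}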
Addendum: Here you will find some good guidelines on interface design in C#, as part of a larger, valuable work on C# design in general. It generally applies to Java as well.
Excerpts:
Although most APIs are best modeled using classes and structs, there are cases in which interfaces are more appropriate or are the only option.
DO provide at least one type that is an implementation of an interface. This helps to validate the design of the interface. For example, System.Collections.ArrayList is an implementation of the System.Collections.IList interface.
DO provide at least one API consuming each interface you define (a method taking the interface as a parameter or a property typed as the interface). This helps to validate the interface design. For example, List.Sort consumes the IComparer interface.
DO NOT add members to an interface that has previously shipped. Doing so would break implementations of the interface. You should create a new interface to avoid versioning problems.
I recommend relying on the broad type design guidelines.
To quote Joshua Bloch:
When in doubt, leave it out.
You can always add to an interface later. Once a member is part of your interface it is very difficult to change or remove it. Be very conservative in creating your interfaces, as they are binding contracts.
As a side note here is an excellent interview with Vance Morrison (of the Microsoft CLR team) in which he mentions the possibility of a future version of the CLR allowing "mixins" or interfaces with default implementations for their members.
If your interface is part of code that is shared with other projects and teams, listen to Cheeso. But if your interface is part of a private project and you have access to all the change points then you probably didn't need interfaces to begin with but go ahead and change them.
If the interface is going to be public, I feel that a good deal of care needs to be put into the design, because changing the interface will be difficult if a lot of code would suddenly break in the next iteration.
Changes to an interface need to be made with care; ideally, no changes would be needed after the initial release. This means that the first iteration is very important in terms of design.
However, if changes are required, one way to implement them is to deprecate the old methods and provide a transition path for old code to use the newly designed features. This does mean the deprecated methods will stick around so that code using them doesn't break - not ideal, and the price you pay for not getting it right the first time around.
On a related matter, yesterday, I stumbled upon the Google Tech Talk: How to Design a Good API and Why It Matters by Joshua Bloch. He was behind the design and implementation of the Java Collection libraries and such, and is the author of Effective Java.
The video is about an hour long where he goes into details and examples about what makes a good API, why we should be making well-designed APIs, and more. It's a good video to watch to get some ideas and inspiration for certain things to look out for when thinking about designing APIs.
Adding methods later to an interface immediately breaks all implementations of the interface that didn't accidentally implement those methods. For that reason, make sure your interface specification is complete. I'd propose you start with a (sample) client of the interface: the part that actually uses instances of classes implementing said interface. Whatever the client needs must be part of the interface (obviously). Then make a (sample) implementation of the interface and look at what additional methods are both generally useful and available (in possible other implementations), so they should also be part of the interface. Check for symmetry completeness (e.g. if there is an "openXYZ", there should also be a "closeXYZ"; if there is an "addFooBar", there should be a "removeFooBar"; etc.).
If possible, let a coworker check your specification.
And: Be sure you really want an interface. Maybe an abstract base class is a better fit for your needs.
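A small sketch of that client-first, symmetry-checked approach (all names are illustrative):

using System;

// Start from what the client needs: open, read, close - and nothing more.
// Note the symmetry: OpenSource has a matching CloseSource.
public interface IRecordSource
{
    void OpenSource(string name);
    string ReadRecord();    // returns null when the source is exhausted
    void CloseSource();
}

// The sample client that drove the interface design above.
public static class RecordPrinter
{
    public static void PrintAll(IRecordSource source, string name)
    {
        source.OpenSource(name);
        try
        {
            for (var record = source.ReadRecord(); record != null; record = source.ReadRecord())
                Console.WriteLine(record);
        }
        finally
        {
            source.CloseSource();
        }
    }
}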
Well, it really depends on your particular situation. If your team is the sole user/maintainer of that interface, then by all means modify it as you see fit and forget all about that "best practice blabla" kind of stuff. It is YOUR code, after all... Never blindly follow best-practice advice without understanding its rationale.
Now, if you're making a public API that other teams or customers will work with (think plugins, extension points, or things like that), then you have to be conservative about what you put in the interface. As others mentioned, you may have to add a _V2 kind of interface in these cases. Microsoft did this with several web browser COM interfaces.
The guidelines Microsoft publishes in the Framework Design Guidelines are just that: guidelines for PUBLIC interfaces, not for private internal stuff, though many of them still apply. Know which apply to your situation and which do not.
No rule will make up for lack of common sense.
I have been reading that creating dependencies on static classes/singletons in code is bad form and creates problems, i.e. tight coupling and difficulty in unit testing.
I have a situation where I have a group of url parsing methods that have no state associated with them, and perform operations using only the input arguments of the method. I am sure you are familiar with this kind of method.
In the past I would have proceeded to create a class and add these methods and call them directly from my code eg.
UrlParser.ParseUrl(url);
But wait a minute: that introduces a dependency on another class. I am unsure whether these "utility" classes are bad, as they are stateless, and that minimises some of the problems with static classes and singletons. Could someone clarify this?
Should I move the methods to the calling class, if only the calling class will be using them? That may violate the "Single Responsibility Principle".
From a theoretical design standpoint, I feel that Utility classes are something to be avoided when possible. They basically are no different than static classes (although slightly nicer, since they have no state).
From a practical standpoint, however, I do create these, and encourage their use when appropriate. Trying to avoid utility classes is often cumbersome, and leads to less maintainable code. However, I do try to encourage my developers to avoid these in public APIs when possible.
For example, in your case, I feel that UrlParser.ParseUrl(...) is probably better handled as a class. Look at System.Uri in the BCL - it provides a clean, easy-to-use interface for Uniform Resource Identifiers that works well and maintains the actual state. I prefer that approach to a utility method that works on strings, forcing the user to pass a string around, remember to validate it, and so on.
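For instance, the object-based approach looks like this (a small usage sketch of the real System.Uri type):

using System;

// The state is parsed once and validated in the constructor,
// then exposed through typed properties.
var uri = new Uri("https://example.com/products?id=42");
Console.WriteLine(uri.Scheme);       // https
Console.WriteLine(uri.Host);         // example.com
Console.WriteLine(uri.AbsolutePath); // /products
Console.WriteLine(uri.Query);        // ?id=42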
Utility classes are OK... as long as they don't violate design principles. Use them as happily as you'd use the core framework classes.
The classes should be well named and logical. Really they aren't so much "utility" as part of an emerging framework that the native classes don't provide.
Using things like extension methods can be useful as well, to align functionality onto the "right" class. BUT they can cause some confusion, as extensions usually aren't packaged with the class they extend, which is not ideal; still, they can be very useful and produce cleaner code (see the sketch below).
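A sketch of that alignment (UrlExtensions and HostOrEmpty are hypothetical names):

using System;

// An extension method puts the helper onto the type it operates on:
// call sites read as url.HostOrEmpty() instead of UrlUtils.HostOrEmpty(url).
public static class UrlExtensions
{
    public static string HostOrEmpty(this string url)
        => Uri.TryCreate(url, UriKind.Absolute, out var uri) ? uri.Host : string.Empty;
}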
You could always create an interface and use dependency injection with instances of classes that implement that interface, instead of static classes.
The question becomes: is it really worth the effort? In some systems the answer is yes, but in others, especially smaller ones, the answer is probably no (a sketch of the injected version follows).
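All names here are hypothetical stand-ins for the asker's UrlParser; the point is only the shape of the dependency:

// Placeholder for whatever the parser returns.
public sealed record ParsedUrl(string Scheme, string Host);

public interface IUrlParser
{
    ParsedUrl ParseUrl(string url);
}

// The consumer depends on the contract, not on a concrete static class,
// so tests can supply a stub and production can supply the real parser.
public class LinkChecker
{
    private readonly IUrlParser _parser;

    public LinkChecker(IUrlParser parser) => _parser = parser;

    public bool IsValid(string url) => _parser.ParseUrl(url) != null;
}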
This really depends on the context, and on how we use it.
Utility classes, in themselves, are not bad. However, they become bad if we use them the wrong way. Every design pattern (especially the Singleton pattern) can easily be turned into an anti-pattern, and the same goes for utility classes.
In software design we need a balance between flexibility and simplicity. If we're going to create a StringUtils class that is only responsible for string manipulation:
Does it violate the SRP (Single Responsibility Principle)? No; it's developers who put too many responsibilities into utility classes who violate the SRP.
"It cannot be injected using DI frameworks." Is the StringUtils implementation going to vary? Are we going to switch implementations at runtime? Are we going to mock it? Of course not.
So: utility classes, in themselves, are not bad. It's developers' misuse that makes them bad.
It all really depends on the context. If you create a utility class that has a single responsibility and is only used privately inside a module or a layer, then you're still good.
I agree with some of the other responses here that it is the classic singleton which maintains a single instance of a stateful object which is to be avoided and not necessarily utility classes with no state that are evil. I also agree with Reed, that if at all possible, put these utility methods in a class where it makes sense to do so and where one would logically suspect such methods would reside. I would add, that often these static utility methods might be good candidates for extension methods.
I really, really try to avoid them, but who are we kidding... they creep into every system. Nevertheless, in the example given I would use a URL object which would then expose various attributes of the URL (protocol, domain, path and query-string parameters). Nearly every time I want to create a utility class of statics, I can get more value by creating an object that does this kind of work.
In a similar way I have created a lot of custom controls that have built in validation for things like percentages, currency, phone numbers and the like. Prior to doing this I had a Parser utility class that had all of these rules, but it makes it so much cleaner to just drop a control on the page that already knows the basic rules (and thus requires only business logic validation to be added).
I still keep the parser utility class and these controls hide that static class, but use it extensively (keeping all the parsing in one easy to find place). In that regard I consider it acceptable to have the utility class because it allows me to apply "Don't Repeat Yourself", while I get the benefit of instanced classes with the controls or other objects that use the utilities.
Utility classes used in this way are basically namespaces for what would otherwise be (pure) top-level functions.
From an architectural perspective there is no difference if you use pure top-level "global" functions or basic (*) pure static methods. Any pros or cons of one would equally apply to the other.
Static methods vs global functions
The main argument for using utility classes over global ("floating") functions is code organization, file and directory structure, and naming:
You might already have a convention for structuring class files in directories by namespace, but you might not have a good convention for top-level functions.
For version control (e.g. git) it might be preferable to have a separate file per function, but for other reasons it might be preferable to have them in the same file.
Your language might have an autoload mechanism for classes, but not for functions. (I think this would mostly apply to PHP)
You might prefer to write import Acme\Url; Url::parse(url) over import function Acme\parse_url; parse_url();. Or you might prefer the latter.
You should check if your language allows passing static methods and/or top-level functions as values. Perhaps some languages only allow one but not the other.
So it largely depends on the language you use, and conventions in your project, framework or software ecosystem.
(*) You could have private or protected methods in the utility class, or even use inheritance - something you cannot do with top-level functions. But most of the time this is not what you want.
Static methods/functions vs object methods
The main benefit of object methods is that you can inject the object, and later replace it with a different implementation with different behavior. Calling a static method directly works well if you don't ever need to replace it. Typically this is the case if:
the function is pure (no side effects, not influenced by internal or external state)
any alternative behavior would be considered as wrong, or highly strange. E.g. 1 + 1 should always be 2. There is no reason for an alternative implementation where 1 + 1 = 3.
You may also decide that the static call is "good enough for now".
And even if you start with static methods, you can make them injectable/pluggable later: either by using function/callable values, or by having small wrapper classes with object methods that internally call the static method (sketched below).
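A sketch of the wrapper approach (UrlUtil, IUrlNormalizer, and UrlNormalizer are illustrative names):

// Starting point: a pure static utility.
public static class UrlUtil
{
    public static string Normalize(string url) => url.Trim().ToLowerInvariant();
}

// Later, when substitutability is needed: a thin object wrapper
// that simply delegates to the static method.
public interface IUrlNormalizer
{
    string Normalize(string url);
}

public sealed class UrlNormalizer : IUrlNormalizer
{
    public string Normalize(string url) => UrlUtil.Normalize(url);
}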
They're fine as long as you design them well (that is, you don't have to change their signature from time to time).
These utility methods do not change that often, because they do one thing only. The problem comes when you tie a more complex object to another: if one of them needs to change or be replaced, it will be harder to do if they are highly coupled.
Since these utility methods won't change that often, I would say they are not much of a problem.
I think it would be worse if you copy/pasted the same utility method over and over again.
The video How to Design a Good API and Why It Matters, by Joshua Bloch, explains several concepts to bear in mind when designing an API (which is what your utility library is). Although he's a recognized Java architect, the content applies to all programming languages.
Use them sparingly: you want to put as much logic as you can into your classes so they don't become mere data containers.
But at the same time you can't really avoid utilities; they are sometimes required.
In this case I think it's OK.
FYI, there is the System.Web.HttpUtility class, which contains a lot of common HTTP utilities that you may find useful (a short usage sketch follows).
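For example (HttpUtility ships with the .NET Framework in System.Web, and is also available in-box on modern .NET):

using System;
using System.Web;

// Stateless HTTP helpers bundled in one utility class:
string encoded = HttpUtility.UrlEncode("a b&c");       // "a+b%26c"
string html    = HttpUtility.HtmlEncode("<b>hi</b>");  // "&lt;b&gt;hi&lt;/b&gt;"
var query      = HttpUtility.ParseQueryString("id=42&tag=x");
Console.WriteLine(query["id"]);                        // "42"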