Interfaces in Class Files - C#

Should my interface and concrete implementation of that interface be broken out into two separate files?

If you want other classes to implement that interface, it would probably be a good idea, if only for cleanliness. Anyone looking at your interface should not have to look at your implementation of it every time.
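For instance, a minimal two-file sketch of that split (the type and member names here are purely illustrative):

// File: ILogger.cs
public interface ILogger
{
    void Log(string message);
}

// File: ConsoleLogger.cs
using System;

public class ConsoleLogger : ILogger
{
    public void Log(string message)
    {
        Console.WriteLine(message);
    }
}

Anyone who only needs the contract can open ILogger.cs and never look at ConsoleLogger.cs.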

If there is only one implementation: why the interface?
If there is more than one implementation: where do you put the others?

If by different files you mean different xxx.cs files within your assembly, then based on my own practice I would say yes - but this comes down to the house standards you use. If you're just programming for yourself, then I would say this is good coding practice; it keeps everything clean and easy to read. The smaller the blocks of code in any given file, the easier something is to follow (within reason). Obviously you can start getting into partial classes, where things can get ridiculous if you don't keep a rein on it.
As a rule, I keep my projects in a logical folder structure where portions of the project might be allocated into folders such as DAL or BM, and within those I might have a number of logically named folders which each contain a number of files: one interface, one implementation and any helper classes specific to those.
However, all that said, your team/in-house best practices should be adopted if you're working within a team of developers.

Separate files... FTW! You might even want to create separate projects/assemblies depending on how extensible your code is. At the very least it should probably be in a separate namespace.
The whole point of an interface is so that the code that uses the interface doesn't care about the implementation. Therefore they should be as loosely associated as possible, which they won't be if they are in the same file.
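To illustrate that loose association (namespace and type names invented here), the consuming code only ever mentions the interface's namespace:

namespace MyApp.Abstractions   // could live in its own assembly
{
    public interface IMessageSender
    {
        void Send(string message);
    }
}

namespace MyApp.Email
{
    public class EmailSender : MyApp.Abstractions.IMessageSender
    {
        public void Send(string message)
        {
            // send an e-mail here
        }
    }
}

Code written against MyApp.Abstractions.IMessageSender compiles without ever referencing MyApp.Email.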
But as @balabaster notes, it depends on what your team's practices are (although they are not always "best practices").

Yes. For splitting classes across files, the feature is called partial classes - take a look at the documentation for the partial keyword.

General rule of thumb: yes. An interface is meant to be implemented by other classes, and it is cleaner and easier to manage when the two are clearly in separate files.
What's more, depending on the level of separation and isolation your application is going to take, you may even want to place your interfaces in their own project. Consuming projects would then reference the interface project instead of each and every assembly that carries implementations of that interface.

Yes - even if someone gives counter-arguments such as: there is only one implementation, or they foresee there will be only one implementation for a long time, or they are the only user/developer, and so on. If there are multiple implementations or multiple users, it's obvious that you would want to keep them in separate files - so why treat the single-implementation case any differently?

Related

How to organise class files in C#

I am working on a simple project and I have created several classes, interfaces, one static class and so on. What I am asking is how to organise these files into namespaces. Is there any good practice for this, or should I just follow the logic of my program? I am currently thinking that I should move the interfaces into one namespace and all the classes into another. What can you advise me? I am really curious to find out the best way to separate my files.
Have a nice day :)
You should group your types into namespaces with the other types they have the highest cohesion with. That is, group types together when they perform common functionality. The kind of grouping you're suggesting is logical cohesion, which is really a rather weak form of cohesion.
Namespaces are mainly for the benefit of large projects. Since you are working on a "simple project", I suggest that you use a single namespace for the entire application. Since everything in C# must be a type or a member of a type (i.e., there are no global variables or methods), the types that you create (classes, interfaces, enums, etc.) are usually a good-enough organizing feature for a small project.
For slightly larger projects, I suggest putting each tier into its own namespace.
For even larger projects, namespaces should be a logical grouping of related types or subsystems, according to preference.
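For example, a per-tier layout for a hypothetical shop application (all names invented for illustration) could look like:

namespace Shop.Data          // data access tier
{
    public class CustomerRepository { /* ... */ }
}

namespace Shop.Business      // business logic tier
{
    public class OrderService { /* ... */ }
}

namespace Shop.Web           // presentation tier
{
    public class OrderController { /* ... */ }
}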
Into a specific namespace you should put everything that concerns the same matter. For example, all the stuff concerning string manipulation could go into a separate namespace, e.g. com.server.string.
This is very important, especially when you have classes whose names also exist in other namespaces.
The only reason to split your code in files is to make your code maintainable.
As a general rule of thumb, I tend to create folders for enum's, struct's, models, controllers, etc. Depending on the size of the solution, you keep nesting in groups after that.
Sometimes it makes sense to just put the entire namespace in the file, other times, you let your nesting take care of the naming.
A good rule of thumb is that you should be able to find what you are looking for quickly, and, more importantly, someone who hasn't seen the project should be able to find their way around quickly.
One thing to keep in mind is that you should never put more than one thing in one file. Never put two classes in the same file, never append enums at the end of a class file, etc.
You are confusing files with classes. You can create folders in Visual Studio to organize your files; that way you can group interfaces and classes (which is what I usually do). VS will automatically put new classes whose files are created in those folders into the namespace of the same name. This is usually not what you want (I don't know how to turn it off, so I can't help you with that).
I agree with the other answers here that you should group types based on what they do, not on what kind of language construct they are.

C# Class function members declaration & implementation

Is there a concept in C# of class definition and implementation similar to what you find in C++?
I prefer to keep my class definitions simple by removing most, if not all, implementation details (it depends on several factors as you may know, but generally I lean towards leaving most member implementation details outside the class definition). This has the benefit of giving me a bird's-eye view of the class and its functionality.
However, in C# it seems I'm forced to define my member functions at the point of declaration. Can this be avoided, or circumvented in some way?
While learning C#, this is one aspect that keeps bothering me: classes, especially complex ones, become increasingly harder to read.
This is really a case of needing to step back and see the bigger picture. Visual studio has many, many tools to help you write and manipulate your code, from outlining, #regions, class view, class diagrams, the Code Definition Window and many more.
C# isn't C++, if you try to make it so then you'll trip over yourself and no-one else will be able to read your code.
A day spent learning to use the Visual Studio tools will repay the investment many times over in terms of productivity and you'll soon wonder how you ever lived with that C++ way of doing things.
Update in response to comments
I have long since stopped regarding my code as simple text files. I regard code as an organic thing and I find that allowing myself to rely on a feature-rich IDE lets me move up and down levels of abstraction more easily and enhances my productivity no end. I suppose that could be a personal trait and perhaps it is not for everyone; I have a very 'visual' mind and I work best when I can see things in pictures.
That said, a clever IDE is not an excuse for poor style. There are best practices for writing "clean code" that don't require a smart IDE. One of the principles of clean code is to keep the definition of something near its use, and I think that could be extended to cover declaration and definition. Personally, I think that separating the declaration and definition makes the code less clear. If you are finding that you get monster classes that are hard to understand, that might be a sign that you're violating the Single Responsibility Principle.
The reason for separate definition and declaration in C/C++ is that C++ uses a single-pass compiler, where forward references cannot be resolved later, unlike C# and its two-pass compiler, which can happily find references regardless of the order of declaration. This difference stems from the different design philosophies of the compilers: C/C++ considers each source file to be a unit of compilation, whereas in C# the entire project is considered to be the unit of compilation. I suppose when you are used to working in the C/C++ way, separating the declaration and definition can appear to be a desirable element of style, but I personally believe that keeping declaration and use (or in this case declaration and definition) together enhances, rather than reduces, readability. I used to be a C programmer myself until I started using C# in 2001. I always loved C and thought its way of doing things was the 'bee's knees'. These days when I read C/C++ code I think it looks absolutely horrendous and I can't believe we used to put up with working that way. It's all a matter of what you are used to, I suppose.
If you're using Visual Studio, you can take advantage of the Class View. You can also use the expand/collapse features of the source code editor.
In the improbable case that your tools don't help, you can always write a quick utility that will summarize the class for you.
If the class has been compiled, you can use Reflector to view the class, too.
No, there is no concept of implementation and header files in C# like you find in C/C++. The closest you can come to this is to use an interface, but the interface can only define the public members of your class. You would then end up with a 1-to-1 mapping of classes and interfaces, which really isn't the intent for how interfaces are to be used.
You could get a similar result by defining an interface for each of your classes which they then implement.
It sounds like you're referring to interfaces. In c#, you can define all of your member functions in an interface, and then implement them in another class.
In C# you could fake it with partial classes and partial members to a point, however, forward declarations and prototypes go the way of the dodo bird with your newer languages. Class View, Class Diagrams, Intellisense, et al, all help to remove the potential need for those "features".
Define an interface.
Then it's nice to be able to automatically implement the interface using a nice code assist tool.
If you find that a class is hard to read or difficult to understand, that's often a sign that the class is trying to do too much. Instead of trying to duplicate C++'s separation of declarations and definitions, consider refactoring the troublesome class into several classes so that each class has less responsibility.
Whenever it's possible or desirable, I'll go with the previous responses and define an interface. but it's not always appropriate.
alternatively, you can work around this "problem" by using some static code inspection tools. Resharper's "File Structure" window will give you exactly what you want. you can also use the built in "Class View" from visual studio. but I prefer the former.
The prototyping that I guess you are referring to does not really exist in C#. Defining interfaces as others have suggested will give you a point where you have declarations of your methods collected, but it's not the same thing as prototypes, and I am not so sure that it will help you in making your implementation classes easier to read.
C# is not C++, and should probably not be treated as C++.
Not sure what you mean by your classes continue to grow and become hard to read. Do you mean you want a header file like view of a class's members? If so, like John suggested, can't you just collapse the implementation so you don't have to see it?
If you don't want every class to implement a certain thing, then interfaces are probably the way to go (like others are saying).
But as a side thought: if your classes themselves get more and more complex as you write the program, perhaps it's more of a design issue than a language problem? A class should have one responsibility and not take on more and more responsibilities as the program grows; rather, the number of classes and the ways existing classes are used should grow and become more complex as you continue to develop your software.
There are two remedies for this to make it more C++-ish:
Create an interface file that declares all method signatures and properties
Implement that interface in a class across multiple files by using the partial modifier on the class definitions
For example:
// File: ICppLikeInterface.cs
public interface ICppLikeInterface
{
    // Members are illustrative; the point is that the full contract lives here.
    void DoSomething();
    void DoSomethingElse();
}

// File: CppLikeImplementation1.cs
public partial class CppLikeImplementation : ICppLikeInterface
{
    public void DoSomething() { /* implementation */ }
}

// File: CppLikeImplementation2.cs
public partial class CppLikeImplementation : ICppLikeInterface
{
    public void DoSomethingElse() { /* implementation */ }
}
The C++ way of separating the interface into a header file is mostly (I think) due to an early design decision, made when C was created, to allow fast, incremental compilation in the "old days", since the compiler throws away all metadata (contrary to Smalltalk). This is not an issue with C# (nor Java), where tens of thousands of lines compile within seconds on recent hardware (C++ still doesn't).

Interfaces separated from the class implementation in separate projects? [closed]

We work on a middle-sized project (3 developers over more than 6 months) and need to make the following decision: we'd like to have interfaces separated from their concrete implementations. The first step is to store each interface in a separate file.
We'd like to go further and separate the data even more: We'd like to have one project (CSPROJ) with interface in one .CS file plus another .CS file with help classes (like some public classes used within this interface, some enums etc.). Then, we'd like to have another project (CSPROJ) with a factory pattern, concrete interface implementation and other "worker" classes.
Any class which wants to create an object implementing this interface must include the first project which contains the interfaces and public classes, not the implementation itself.
This solution has one big disadvantage: it multiplies the number of assemblies by 2, because for every "normal" project you would have one project with interfaces and one with implementations.
What would you recommend? Do you think it's a good idea to place all interfaces in one separate project rather than one interface in its own project?
I would distinguish between interfaces like this:
Standalone interfaces whose purpose you can describe without talking about the rest of your project. Put these in a single dedicated "interface assembly", which is probably referenced by all other assemblies in your project. Typical examples: ILogger, IFileSystem, IServiceLocator.
Class coupled interfaces which really only make sense in the context of your project's classes. Put these in the same assembly as the classes they are coupled to.
An example: suppose your domain model has a Banana class. If you retrieve bananas through a IBananaRepository interface, then that interface is tightly coupled to bananas. It is impossible to implement or use the interface without knowing something about bananas. Therefore it is only logical that the interface resides in the same assembly as Banana.
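A rough sketch of that coupling (member names invented for illustration):

// Both types belong in the same assembly: the interface cannot be
// declared, implemented, or consumed without knowing about Banana.
public class Banana
{
    public int Id { get; set; }
    public double Curvature { get; set; }
}

public interface IBananaRepository
{
    Banana GetById(int id);
    void Save(Banana banana);
}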
The previous example has a technical coupling, but the coupling might just be a logical one. For example, a IFecesThrowingTarget interface may only make sense as a collaborator of the Monkey class even if the interface declaration has no technical link to Monkey.
My answer does depend on the notion that it's okay to have some coupling to classes. Hiding everything behind an interface would be a mistake. Sometimes it's okay to just "new up" a class, instead of injecting it or creating it via a factory.
Yes, I think this is a good idea. Actually, we do it here all the time, and we eventually have to do it because of a simple reason:
We use Remoting to access server functionality. So the Remote Objects on the server need to implement the interfaces and the client code has to have access to the interfaces to use the remote objects.
In general, I think you are more loosely coupled when you put the interfaces in a separate project, so just go along and do it. It isn't really a problem to have 2 assemblies, is it?
ADDITION:
Just crossed my mind: By putting the interfaces in a separate assembly, you additionally get the benefit of being able to reuse the interfaces if a few of them are general enough.
I think you should first consider whether ALL the interfaces belong to the 'public interface' of your project.
If they are to be shared by multiple projects, executables and/or services, I think it's fair to put them into a separate assembly.
However, if they are for internal use only and there for your convenience, you could choose to keep them in the same assembly as the implementation, thus keeping the overall number of assemblies relatively low.
I wouldn't do it unless it offers a proven benefit for your application's architecture.
It's good to keep an eye on the number of assemblies you're creating. Even if an interface and its implementation are in the same assembly, you can still achieve the decoupling you rightly seek with a little discipline.
If an implementation of an interface ends up having a lot of dependencies (on other assemblies, etc.), then having the interface in an isolated assembly can simplify life for higher-level consumers.
They can reference the interface without inadvertently becoming dependent on the specific implementation's dependencies.
We used to have quite a number of separate assemblies in our shared code. Over time, we found that we almost invariably referenced these in groups. This made more work for the developers, and we had to hunt to find what assembly a class or interface was in. We ended up combining some of these assemblies based on usage patterns. Life got easier.
There are a lot of considerations here - are you writing a library for developers, are you deploying the DLLs to offsite customers, are you using remoting (thanks, Maximilian Mayerl) or writing WCF services, etc. There is no one right answer - it depends.
In general I agree with Jeff Sternal - don't break up the assemblies unless it offers a proven benefit.
There are pros and cons to the approach, and you will also need to temper the decision with how it best fits into your architectural approach.
On the "pro" side, you can achieve a level of separation to help enforce correct implementations of the interfaces. Consider that if you have junior- or mid-level developer working on implementations, the interfaces themselves can be defined in a project that they only have read access on. Perhaps a senior-level, team lead, or architect is responsible for the design and maintenance of the interfaces. If these interfaces are used on multiple projects, this can help mitigate the risk of unintentional breaking changes on other projects when only working in one. Also, if you work with third party vendors who you distribute an API to, packaging the interfaces is a very good thing to do.
Obviously, there are some down sides. The assembly does not contain executable code. In some shops that I have worked at, they have frowned upon not having functionality in an assembly, regardless of the reason. There definitely is additional overhead. Depending on how you set up your physical file and namespace structure, you might have multiple assemblies doing the same thing (although not required).
On a semi-random note, make sure to document your interfaces well. Documentation inheritance from interfaces using GhostDoc is a beautiful thing.
This is a good idea, and I appreciate some of the distinctions in the accepted answer. Since both enumerations and especially interfaces are by their very nature dependency-less, this gives them special properties and makes them immune to circular dependencies, and even to the merely complex dependency graphs that make a system "brittle". A co-worker of mine once called a similar technique the "memento pattern" and never failed to point out a useful application of it.
Put an interface into a project that already has many dependencies and that interface, at least with respect to that project, comes with all of the project's dependencies. Do this often and you're more likely to face situations with circular dependencies. The temptation is then to compensate with patches that wouldn't otherwise be needed.
It's as if coupling interfaces with projects having many dependencies contaminates them. The design intent of interfaces is to de-couple so in most cases it makes little sense to couple them to classes.

Is it good practise to have multiple class definitions in one file?

Is it good practise to have multiple class definitions in one file? or is it preferable to have one class per file?
I prefer one class per file. You'll never have to search for the correct filename because it is always the class name.
One class per file.
That way you can avoid having to merge edits when two people have to edit the same file because one is working on class A and the other is working on class B. While this should be automatic in any source control system, it's an extra step that can be missed which would cause problems.
Far better to have a process that didn't allow this sort of error to occur in the first place.
I do not see any issue with multiple classes in the same file, as long as the classes are related to each other.
If you have resharper, you can always use the navigation tools to find any class.
It is generally best practice to have one file per class.
Some folk - not me - like to have more than one if they are related and very, very small in size. Others might do this in a prototyping stage. I say start and stay with one per file, as does Steve McConnell in his discussion of class quality in his seminal book Code Complete.
To quote, "Put one class in one file. A file isn't just a bucket that holds some code. If your language allows it, a file should hold a collection of routines that supports one and only one purpose. A file reinforces the idea that a collection of routines are in the same class."
I think it's preferable to have one class per file and to organize them in folders having the same hierarchy as their namespaces.
Most programmers would consider one class per file to be a best practice.
Usually - no.
Following the practice of "one class per file" simplifies browsing the solution.
Additionally, if you have a big team of developers and a source control tool that uses a pessimistic approach (exclusive locks), your developers will have a hard time working on the same file.
I guess it is down to preference as you said.
I think you'll find most online examples/ most code is one class per file for easy management.
I sometimes put 2 classes in a file - but only if I'm using the second class as an entity and it's only being used by the first class.
I guess you ask because you've noticed already that it's considered best practice. Given the obvious benefits (and some less obvious ones mentioned here), why would you want to do it differently? Are there any benefits at all in multiple classes per file? I can't think of any.
Usually it is the best solution to have one class per file (with the file named exactly like the contained class).
I only differ from that if
There are lots of small enumerations -> I collect these into a single file, e.g. Enums.cs (see the sketch after this list)
There are lots (20+) of generated classes/interfaces that directly relate to each other -> These go into one file, e.g. Interfaces.cs
There is stuff that is not a direct functional part of the application but is in close semantic connection (e.g. everything you need for interop - that's usually a few structures, enums, constants and a single class) -> That goes into a single file named after the interop class.
Private inner classes -> Stay with their parent class instead of partial classes
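For example, the first case might look like this (enum names invented purely for illustration):

// File: Enums.cs - a handful of small, closely related enumerations
// kept together rather than spread across many tiny files.
public enum OrderStatus { Pending, Shipped, Delivered }
public enum PaymentMethod { Cash, Card, Invoice }
public enum ShippingSpeed { Standard, Express }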
I would say no; I know DevExpress dislikes it as well (it has some detection of bad practices).
But I do have it sometimes, when it's a very small class that's basically only used by the "main" class in the file. Personally I think it comes down a bit to taste; there is a balance between having 10k-line-long .cs files and having too many .cs files in your project.
In terms of it being a "best practise" approach, then probably yes. However, it really depends on the project. I tend to group related code into separate units, for example:
MyApplication.Interfaces
MyApplication.Utils
MyApplication.Controllers
I really think a class only deserves its own unit if it becomes huge. However, if it does get to that stage, you should start to consider moving some code into helper classes to separate the logic.
I would have to agree with most on this. One class per file is ideal. It makes it easier to see what's available in a project without having to rely on IntelliSense to discover the types that are available in a given assembly.
I think the only time I ever fudge on the one-class-per-file rule is when I'm defining a custom EventArgs class that relates to an event fired from another class. Then I would typically define it, along with a delegate for the event, in the same file. I don't know whether that's a good practice or just sheer laziness.
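A sketch of that pattern, with invented names - the EventArgs class, the delegate and the publishing class all kept in one file because they only make sense together:

using System;

public class DownloadCompletedEventArgs : EventArgs
{
    public string FileName { get; }

    public DownloadCompletedEventArgs(string fileName)
    {
        FileName = fileName;
    }
}

public delegate void DownloadCompletedEventHandler(object sender, DownloadCompletedEventArgs e);

public class Downloader
{
    public event DownloadCompletedEventHandler DownloadCompleted;

    protected virtual void OnDownloadCompleted(string fileName)
    {
        // Raise the event if anyone is listening.
        DownloadCompleted?.Invoke(this, new DownloadCompletedEventArgs(fileName));
    }
}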
If you work on a very large project, too many files can slow down your build times significantly (at least with C++). I don't think that rigid adherence to a rule is necessarily the way to go.
One Class Per File is my Preferred approach, it helps me get rid of any confusion later on... I tend to use a lot of partial classes though...
As long as I don't break the 1000-line barrier, I'll stuff in as many related classes as makes sense.
Sometimes an abstraction may only be one overridden method.

Utility classes.. Good or Bad?

I have been reading that creating dependencies on static classes/singletons in code is bad form and creates problems, e.g. tight coupling and difficulty with unit testing.
I have a situation where I have a group of url parsing methods that have no state associated with them, and perform operations using only the input arguments of the method. I am sure you are familiar with this kind of method.
In the past I would have proceeded to create a class and add these methods and call them directly from my code eg.
UrlParser.ParseUrl(url);
But wait a minute, that is introducing a dependency to another class. I am unsure whether these 'utility' classes are bad, as they are stateless and this minimises some of the problems with said static classes, and singletons. Could someone clarify this?
Should I be moving the methods to the calling class, if only the calling class will be using them? This may violate the Single Responsibility Principle.
From a theoretical design standpoint, I feel that Utility classes are something to be avoided when possible. They basically are no different than static classes (although slightly nicer, since they have no state).
From a practical standpoint, however, I do create these, and encourage their use when appropriate. Trying to avoid utility classes is often cumbersome, and leads to less maintainable code. However, I do try to encourage my developers to avoid these in public APIs when possible.
For example, in your case, I feel that UrlParser.ParseUrl(...) is probably better handled as a class. Look at System.Uri in the BCL - it provides a clean, easy-to-use interface for Uniform Resource Identifiers that works well and maintains the actual state. I prefer this approach to a utility method that works on strings, forcing the user to pass a string around, remember to validate it, etc.
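To make the contrast concrete, here is a short sketch comparing the two styles. System.Uri and its Host, AbsolutePath and Query properties are the real BCL API; the commented-out UrlParser helper is hypothetical:

using System;

class UriDemo
{
    static void Main()
    {
        // Object-based approach: the string is validated once, in the
        // constructor, and the parsed state travels with the instance.
        var uri = new Uri("https://example.com/products?id=42");
        Console.WriteLine(uri.Host);          // example.com
        Console.WriteLine(uri.AbsolutePath);  // /products
        Console.WriteLine(uri.Query);         // ?id=42

        // Static-utility approach: every caller passes the raw string around
        // and has to trust (or re-check) that it is valid.
        // string host = UrlParser.GetHost("https://example.com/products?id=42");
    }
}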
Utility classes are ok..... as long as they don't violate design principles. Use them as happily as you'd use the core framework classes.
The classes should be well named and logical. Really they aren't so much "utility" classes as part of an emerging framework that the native classes don't provide.
Using things like extension methods can be useful as well to align functionality onto the "right" class. BUT they can be a cause of some confusion, since the extensions usually aren't packaged with the class they extend, which is not ideal - but they can still be very useful and produce cleaner code.
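A small sketch of that idea - a hypothetical extension method hanging URL functionality off string (names invented):

using System;

public static class UrlStringExtensions
{
    // Callable as url.GetHost(), even though string itself knows nothing
    // about URLs; the method lives in this separate static class.
    public static string GetHost(this string url)
    {
        return new Uri(url).Host;
    }
}

// Usage: string host = "https://example.com/about".GetHost();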
You could always create an interface and use that with dependency injection with instances of classes that implement that interface instead of static classes.
The question becomes: is it really worth the effort? In some systems the answer is yes, but in others, especially smaller ones, the answer is probably no.
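A minimal sketch of that interface-plus-injection approach, with invented names:

using System;

public interface IUrlParser
{
    string GetHost(string url);
}

public class UrlParser : IUrlParser
{
    public string GetHost(string url) => new Uri(url).Host;
}

// The consumer receives the parser through its constructor, so a test can
// inject a fake IUrlParser instead of the real implementation.
public class LinkChecker
{
    private readonly IUrlParser _parser;

    public LinkChecker(IUrlParser parser)
    {
        _parser = parser;
    }

    public bool IsInternal(string url)
    {
        return _parser.GetHost(url) == "example.com";
    }
}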
This really depends on the context, and on how we use it.
Utility classes, in themselves, are not bad. However, they become bad if we use them the wrong way. Every design pattern (especially the Singleton pattern) can easily be turned into an anti-pattern, and the same goes for utility classes.
In software design, we need a balance between flexibility & simplicity. If we're going to create a StringUtils which is only responsible for string manipulation:
Does it violate SRP (Single Responsibility Principle)? -> No; it's developers putting too many responsibilities into utility classes that violates SRP.
"It can not be injected using DI frameworks" -> Is the StringUtils implementation going to vary? Are we going to switch its implementation at runtime? Are we going to mock it? Of course not.
=> Utility classes, in themselves, are not bad. It's the developers' fault when they become bad.
It all really depends on the context. If you're just going to create a utility class that has only a single responsibility and is only used privately inside a module or a layer, then you're still good with it.
I agree with some of the other responses here that it is the classic singleton which maintains a single instance of a stateful object which is to be avoided and not necessarily utility classes with no state that are evil. I also agree with Reed, that if at all possible, put these utility methods in a class where it makes sense to do so and where one would logically suspect such methods would reside. I would add, that often these static utility methods might be good candidates for extension methods.
I really, really try to avoid them, but who are we kidding... they creep into every system. Nevertheless, in the example given I would use a URL object which would then expose various attributes of the URL (protocol, domain, path and query-string parameters). Nearly every time I want to create a utility class of statics, I can get more value by creating an object that does this kind of work.
In a similar way I have created a lot of custom controls that have built in validation for things like percentages, currency, phone numbers and the like. Prior to doing this I had a Parser utility class that had all of these rules, but it makes it so much cleaner to just drop a control on the page that already knows the basic rules (and thus requires only business logic validation to be added).
I still keep the parser utility class and these controls hide that static class, but use it extensively (keeping all the parsing in one easy to find place). In that regard I consider it acceptable to have the utility class because it allows me to apply "Don't Repeat Yourself", while I get the benefit of instanced classes with the controls or other objects that use the utilities.
Utility classes used in this way are basically namespaces for what would otherwise be (pure) top-level functions.
From an architectural perspective there is no difference if you use pure top-level "global" functions or basic (*) pure static methods. Any pros or cons of one would equally apply to the other.
Static methods vs global functions
The main argument for using utility classes over global ("floating") functions is code organization, file and directory structure, and naming:
You might already have a convention for structuring class files in directories by namespace, but you might not have a good convention for top-level functions.
For version control (e.g. git) it might be preferable to have a separate file per function, but for other reasons it might be preferable to have them in the same file.
Your language might have an autoload mechanism for classes, but not for functions. (I think this would mostly apply to PHP)
You might prefer to write import Acme\Url; Url::parse(url) over import function Acme\parse_url; parse_url(url);. Or you might prefer the latter.
You should check if your language allows passing static methods and/or top-level functions as values. Perhaps some languages only allow one but not the other.
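In C#, for instance, the last point is a non-issue - a static method converts to a delegate value just like an instance method does (a small sketch, with invented names):

using System;
using System.Linq;

public static class StringUtils
{
    public static string Slugify(string input)
    {
        return input.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}

class SlugDemo
{
    static void Main()
    {
        // Method group conversion: the static method becomes a Func value
        // that can be stored, passed around, or handed to LINQ.
        Func<string, string> slugify = StringUtils.Slugify;

        var slugs = new[] { "Hello World", "Utility Classes" }.Select(slugify);

        Console.WriteLine(string.Join(", ", slugs)); // hello-world, utility-classes
    }
}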
So it largely depends on the language you use, and conventions in your project, framework or software ecosystem.
(*) You could have private or protected methods in the utility class, or even use inheritance - something you cannot do with top-level functions. But most of the time this is not what you want.
Static methods/functions vs object methods
The main benefit of object methods is that you can inject the object, and later replace it with a different implementation with different behavior. Calling a static method directly works well if you don't ever need to replace it. Typically this is the case if:
the function is pure (no side effects, not influenced by internal or external state)
any alternative behavior would be considered as wrong, or highly strange. E.g. 1 + 1 should always be 2. There is no reason for an alternative implementation where 1 + 1 = 3.
You may also decide that the static call is "good enough for now".
And even if you start with static methods, you can make them injectable/pluggable later. Either by using function/callable values, or by having small wrapper classes with object methods that internally call the static method.
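A sketch of that wrapper idea (all names invented):

using System;

public static class UrlUtils
{
    public static string GetHost(string url) => new Uri(url).Host;
}

// When injectability is needed later, a thin wrapper exposes the same
// behaviour behind an interface without rewriting existing callers of UrlUtils.
public interface IUrlHost
{
    string GetHost(string url);
}

public class UrlHostWrapper : IUrlHost
{
    public string GetHost(string url) => UrlUtils.GetHost(url);
}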
They're fine as long as you design them well (that is, you don't have to change their signatures from time to time).
These utility methods do not change that often, because they do one thing only. The problem comes when you want to tie a more complex object to another: if one of them needs to change or be replaced, it will be harder to do if you have them highly coupled.
Since these utility methods won't change that often, I would say that is not much of a problem.
I think it would be worse if you copy/pasted the same utility method over and over again.
The video How to design a good API and why it matters by Joshua Bloch explains several concepts to bear in mind when designing an API (that would be your utility library). Although he's a recognized Java architect, the content applies to all programming languages.
Use them sparingly - you want to put as much logic as you can into your classes so they don't become just data containers.
But at the same time you can't really avoid utilities; they are required sometimes.
In this case I think it's OK.
FYI, there is the System.Web.HttpUtility class, which contains a lot of common HTTP utilities that you may find useful.
