What are the consequences of having (too) many namespaces? - c#

I run code analysis on a project and I get a warning saying
CA1020 : Microsoft.Design : Consider merging the types defined in {some namespace} with another namespace. {some namespace}
Why do I get this? Is there a negative implication of having too many namespaces?

I believe the main reason is discoverability, which plays a large part in the successful support and maintenance of your code. If it's easier to discover, it should be easier to maintain.
Here's a quote from MSDN.
Namespaces should contain types that are used together in most scenarios. When their applications are mutually exclusive, types should be located in separate namespaces. Careful namespace organization can also be helpful because it increases the discoverability of a feature. By examining the namespace hierarchy, library consumers should be able to locate the types that implement a feature.

There are no real negative implications at runtime, in terms of increased memory or execution times. Also, namespaces are not like IP addresses, which have a fixed pool and can eventually run out.
Namespaces are basically a naming convention to help you group related things together. The CA warning is suggesting that too many small namespaces may make your code harder for other people to use.
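To illustrate: if I remember the rule correctly, CA1020 fires when a namespace (other than the global one) contains only a handful of types. A hypothetical layout that might produce the warning, and a merged version that addresses it (all names here are made up for the example):

namespace MyApp.Helpers.StringUtilities   // only one type lives here - likely to be flagged
{
    public static class Trimmer { }
}

// Merging the lone type into a broader, related namespace addresses the suggestion:
namespace MyApp.Helpers
{
    public static class Trimmer { }
}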

Related

Does using namespaces affect performance or compile time?

If I put all classes of a project in the same namespace, all classes are available everywhere in the project. But if I use different namespaces, not all classes are available everywhere; I get a restriction.
Does using namespaces affect the compile time somehow? Since the compiler has fewer classes in each namespace, and not all namespaces are used all the time, it may have a little less trouble finding the right classes.
Will using namespaces affect the applications performance?
It won't affect the execution-time performance.
It may affect the compile-time performance, but I doubt that it would be significant at all, and I wouldn't even like to predict which way it will affect it. (Do you have an issue with long compile times? If so, you may want to try it and then measure the difference... otherwise you really won't know the effect. If you don't have a problem, it doesn't really matter.)
I'm quite sure that putting classes in namespaces does not affect the compile time significantly.
But beware that you might lose your logical project structure if you put every class into the same namespace.
I (and ReSharper) suggest using namespaces that correspond with the file location (which corresponds with the project structure).
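For example (names are hypothetical), a class living in the Services folder of a MyCompany.MyProject project would, under the folder-matches-namespace convention, be declared like this:

// File: <project root>\Services\EmailSender.cs
namespace MyCompany.MyProject.Services
{
    public class EmailSender
    {
        // ...
    }
}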
You should use namespaces according to your logic and ease of human readability and not for performance issues.

Assembly ordering in .net c#

A similar question has been asked in Ordering of reflection requests in dotnet
But I'm hoping for a different answer...
I'm writing a plugin for a program that uses reflection to interrogate plugins to find the entry point. Unfortunately it has a bug which means that if it encounters an interface declaration during this process, it crashes with an unhandled exception. I have spoken to the development team and this is unlikely to be fixed. This is extremely limiting for me, for obvious reasons.
One workaround I have already thought of is to have my assembly load another assembly with the interfaces in it, but for reasons I won't go into this is not a great solution. It was a while before I encountered this problem, because for some reason my entry class always preceded my interfaces in the reflection enumeration order.
My question is, is there any way to influence the ordering of classes and interfaces in the assembly?
Note: I have already tried setting different accessibility levels on my interfaces but that doesn't work for me.
Cheers,
J
I'd bet the code is using AppDomain.GetAssemblies(), and the returned assemblies are then inspected. The implementation of AppDomain.GetAssemblies() leads to an external method, so Reflector is of almost no help here.
However, without actually trying it and inspecting the result, there are two logical options for the ordering of assemblies in the result:
Load order
Alphabetical order
In the first case you'd probably have to organize the references among your assemblies and the load order in such a way that the foreign code finds the right assembly with the entry-point class and stops. In the second case it would purely be a matter of naming the assemblies the 'right' way, but I doubt it's this case.
(However, the order may be completely different from the two above, e.g. 'mostly' random as well.)
In either case I think sooner or later the buggy code will encounter the problematic assembly and crash anyway. Thus the strong recommendation is: insist on having the bug fixed.
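For what it's worth, you can inspect the enumeration order yourself. A minimal sketch, assuming the host walks AppDomain.GetAssemblies() and then each assembly's GetTypes():

foreach (var assembly in System.AppDomain.CurrentDomain.GetAssemblies())
{
    System.Console.WriteLine(assembly.FullName);
    foreach (var type in assembly.GetTypes())
    {
        // A buggy host that assumes every type is a class would trip over
        // any interface it meets here before finding the entry-point class.
        System.Console.WriteLine("  " + type.FullName + " (interface: " + type.IsInterface + ")");
    }
}

Running something like this from inside the host process would at least tell you whether the order is load order, alphabetical, or effectively random.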

Why there is a convention of declaring default namespace/libraries in any programming language?

Why doesn't any programming language load default libraries like stdio.h, iostream.h or using System, so that their declaration is avoided?
As these namespaces/libraries are required in any program, why do compilers expect them to be declared by the user?
Do any programs exist without using namespaces/headers? Even if yes, what's wrong with loading harmless default libraries?
I don't mean that I am too lazy to write a line of code, but it makes little sense (to me) for a compiler to demand declaration of so-called default thingummies, ending up in a compilation error.
It's because there are programs which are written without the standard libraries. For example, there are plenty of C programs running on embedded systems that don't provide stdio.h, since it doesn't make any sense on those platforms (in C, such environments are referred to as "freestanding", as opposed to the more usual "hosted").
The “default” libraries are not “required in any program”, and indeed there are many cases where they are not even available (operating system kernel/drivers, microcontrollers, etc). And more in the mainstream, many high-level graphical programs use system-specific GUI/graphics libraries instead of standard I/O.
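In C# terms, "not required" just means every name can be spelled out against its full namespace; nothing has to be imported for a program to compile. A minimal sketch:

// No using directives at all - Console is reachable via its full name.
class Program
{
    static void Main()
    {
        System.Console.WriteLine("No default imports needed.");
    }
}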
For stdio.h/iostream(.h): the quick answer is that in the biggest part of your software, they are not needed (and definitely not both). Headless devices/servers should have a logging module instead, and GUIs don't always have a console to interface with.
Many languages (especially scripting languages, and languages that carry a standard runtime as part of the language spec) do do this.
The trade-off is convenience versus software-engineering goodness. The problem with opening namespaces by default is you end up with a lot of names being available immediately at the top level, which can cause name clashes and confusion, pollute Intellisense/autocompletion lists, etc.
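A small C# illustration of the clash problem (both namespaces below ship with the base class library; if they were opened by default, the bare name Timer would be ambiguous everywhere):

using System.Threading;
using System.Timers;

class Example
{
    // Timer _t;   // error CS0104: 'Timer' is an ambiguous reference between
                   // 'System.Threading.Timer' and 'System.Timers.Timer'

    System.Threading.Timer _threadTimer;   // qualifying (or aliasing) resolves it
}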
To follow up on caf's answer.
You need to tell the compiler about these headers/libraries so you do not have to include anything that you do not want, because they are not needed in every program. Any programmer is able to write a library in C or C++ that does not depend on any runtime libraries. This ability makes it possible to write software that is as lean as possible, and to save memory/disk space/compile time/link time (pick what you need most). In low-level languages you should only pay for what you need, nothing more.
There is also the name collision problem. As language standards develop, they provide more and more features, giving them more and more names, and the probability rises that a system name collides with a name defined in a user program. To avoid this, new features are defined in modules which are not included if the program does not use them. As system libraries use many common words as their symbols (like open or restricted), the problem is serious.
But explicit module inclusion is not the only method for avoiding collisions. Others are: using a "standard" namespace for system names (e.g. namespace std in C++), reserving names and name patterns (e.g. double underscores in C), and allowing redefinition (e.g. Forth).
Because libraries are a component external to the language. If one day a library (or part of it) changes its headers or namespaces, the language elements don't change with them. The compiler checks the syntax and rules of the programming language only.

Namespaces - How deep is too deep

We are currently reorganising some of our services projects so their naming is more logical. We have the following structure:
Djp.Services.Type.ServiceName
This seems to make sense as a logical grouping, however what I want to know is, is it acceptable to have further levels under this based on the folders in the project. For example one project is called
Djp.Services.Management.Data
Under this project we have a "POCO" folder and a "Repositories" folder, which means, in principle, objects under these folders will have a namespace 5 levels deep.
Is this depth of namespace something that should avoided, or is it perfectly reasonable?
Any namespace that follows the logic of your application structure is fine - regardless of the length.
We have a namespace seven layers deep, with an eighth symbol on the end for the class. The dropdown in the top-left of Visual Studio 2010 that allows you to choose the class within this file doesn't fit our fully qualified class name, and when you mouse over it, there's no tooltip, so the only way to find the class name is to undock the source view and stretch it across two monitors.
I know this is dependent on the total length of the names, and not necessarily the number of nested namespaces, but I'm going to go ahead and define this as "too deep" :)
It can be handy to make your folder structure match your namespace structure, but it makes no sense to make a namespace structure match a folder structure.
The types and members of the namespace(s) are the things you are making. That is the output of your craft and the thing you should be concerned about. The files in the folder are a way to help you do so. You may have already structured the folders such that they match a sensible namespace (essentially you "wrote" the namespace structure when you did so), in which case all well and good, but you may also have not done so. The namespaces will matter both to the creators of the assembly(s) and to the users of it; the folder structure only to the creators.
Ignore depth, ignore folders, look at the spaces created by the names.
If something smells too long, step back and analyze it. If it passes muster, then I agree completely with @Bozho.
Software development is extremely subjective and full of exceptions to hard-and-fast rules. (Couldn't resist.)
Tough to answer objectively, but there are a couple things that have given me pause in the past...
Serialization. When serializing classes, the fully qualified class names often go into some identifier that's included in the serialization ($type in a JSON file, for example), or onto a message bus (e.g. NServiceBus) where they're used with various APIs. For example, I had the FQN of a class that was needed as an event type, and the Azure Service Bus API rejected it because it was too long.
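A hedged sketch of that serialization point, using Json.NET's TypeNameHandling and a hypothetical deeply nested type: the fully qualified name ends up inside the payload, so every extra namespace level makes it longer.

using Newtonsoft.Json;

var settings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All };
string json = JsonConvert.SerializeObject(
    new MyCompany.Application.Domain.Service.Models.SomeClass(), settings);
// json now contains something like:
// {"$type":"MyCompany.Application.Domain.Service.Models.SomeClass, MyAssembly", ...}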
Documentation. Pretty easy to explain this one: run docfx or some other document generator and then look at your table of contents. Have fun with that. Even when using Swashbuckle to autogenerate your Swagger/OAS spec files, you end up with some fat object IDs.
In code, when you have two classes with the same name from two different namespaces, they have to be qualified in the code. For example, you could have a bunch that look like this:
Dictionary<MyCompany.Application.Domain.Service.Models.SomeClass, MyCompany.Application.Domain.Service.Models.SomeOtherClass> _someLookup = new Dictionary<MyCompany.Application.Domain.Service.Models.SomeClass, MyCompany.Application.Domain.Service.Models.SomeOtherClass>();
^ that is all one field and it's not even close to the worst I've seen. You can alias them in the directives section to shorten them up in the actual code, but either way, you're gonna have some fat declarations.
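As a hedged sketch of the alias trick mentioned above (the Models namespace and types are hypothetical, carried over from the example), a namespace alias in the directives section shortens the declaration considerably:

using System.Collections.Generic;
using Models = MyCompany.Application.Domain.Service.Models;

class LookupHolder
{
    // Same field as above, but readable on two lines thanks to the alias.
    Dictionary<Models.SomeClass, Models.SomeOtherClass> _someLookup =
        new Dictionary<Models.SomeClass, Models.SomeOtherClass>();
}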
I don't know that there's any "wrong" number of levels to go in the naming convention, but there certainly are implications. I'm starting to back away from the approach and do something else. For example, I name the solution after what it is and keep the project names short:
MySolution
Project1
Project2
Etc
It's fairly rare that I run into naming collisions this way, and nine times out of ten, when I run into those situations, it's indicative of a different problem; code smell, really. That's just me. I've also tried to stop nesting directories so deep, because those generally become implicit namespaces. You can have namespaces not match the directory structure, but that's generally considered bad practice and gets really confusing. I've been making my structures flatter and flatter with every new project.
Philosophically, what I would say is to NOT use namespaces as an organization device but rather as a scoping device. The primary difference is that we're engineers and we can organize and re-organize everything under the sun and argue about it all day long, but scoping is more objective. That is, I don't introduce a new scope until I know I need one; when I know I have a collision and renaming the contesting classes is worse than applying scope. "Getting ahead of the problem" in this context can get really messy. Over-engineering?

Use of the using keyword in C#

Coming from a C++ background, I was a supporter of using the scope resolution operator, for example
class Foo
{
    std::list<int> m_list;
    ...
};
for external libraries, to keep clear which library you were using.
Now in C# I don't know if there's a rule of thumb or a best practice for deciding which namespaces should be imported via the using keyword and which classes should be fully qualified. I suppose that this can be a subjective issue, but I would like to know the most widespread practices.
I pretty much never fully qualify names - I always use using directives instead.
If I need to use two names which clash, I'll give both of them aliases:
using WinFormsTextBox = System.Windows.Forms.TextBox;
using WebFormsTextBox = System.Web.UI.WebControls.TextBox;
That rarely comes up though, in my experience.
I tend to make autogenerated code fully qualify everything though, just for simplicity and robustness.
I think the saving grace in C# is that using directives are fully constrained to the file you place them in. I use them whenever their use is clear for the code in the file and helps readability. Another team at my office doesn't use them at all - I think it's nuts, but they came up with their own rules and are happy with them.
Tend towards whatever makes the code more readable and understandable.
If the name may be ambiguous and there is no common "most likely case", then fully/partially qualifying to make this clear can be sensible, even if this increases verbosity.
If confusion exists but one candidate is far more likely then qualify only in those cases where you do not use the most common case.
A common example is the use of the System.Collections classes rather than the System.Collections.Generic versions (perhaps for backwards compatibility). In this case, importing the generic namespace is fine and any non-generic ones are fully qualified.
This makes it clear where you are using legacy code.
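A small sketch of that convention (purely illustrative): the generic collection is imported and used unqualified, while the legacy non-generic one is spelled out in full, flagging it as legacy code at a glance.

using System.Collections.Generic;

class LegacyBridge
{
    // Current code: generic namespace imported, type used unqualified.
    List<string> _names = new List<string>();

    // Legacy code: non-generic collection kept fully qualified so it stands out.
    System.Collections.ArrayList _legacyItems = new System.Collections.ArrayList();
}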
If you will be dealing with multiple clashes and the resulting full qualification would make your code extremely unreadable, then it may make sense to use aliases to separate them out, but you should be pretty averse to doing this, since it renders the resulting code easier to physically read but harder to conceptually understand.
You have injected an element of inconsistency with the wider world. This makes code snippets within the class harder to understand in isolation.
If you must do this, consider alias names which make it very clear that these are aliases, as an indication to readers that they should look at the using directives for confirmation of the real types.
