How to make VS assume everything is public by default - C#

I use VS2012 and ReSharper 7 to write C# code. My projects are rarely so large or complicated as to require thinking about granular access levels. It's usually easier for me to just make everything public, instead of spending time and effort to figure out what should be open to access by what. In any case, I am the only one using my code.
I realize this does not apply to everyone, and I realize that access modifiers are important features of the language and should be used carefully. But in my current situation it doesn't matter, and everything might as well be public (in practice I do make them public). I suspect this applies to many other programmers, especially non-enterprise ones.
However, the tendency of VS2012 is to default to the lowest access level. For instance, if I add a new field by typing int id_number;, the moment I type the semicolon, private is added to the field, and I have to go back and change it to public if that was my intention (it usually is).
How can I make VS/ReSharper generate classes, fields, methods and so on with the highest possible access level (essentially, make everything public)?

You can't.
ReSharper adds private because that's the default if you don't specify any access modifier.
So ReSharper doesn't change the access level of your field; it just makes it explicit, and because of that it doesn't have any functionality to change the access level automatically.
But you could easily use automatic properties instead. There's even a live template for it: just type prop and hit Tab.
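For reference, this is roughly what the prop template produces once you tab through its placeholders (the type and name here are just example values):

    // Result of expanding "prop" and filling in the placeholders:
    public int IdNumber { get; set; }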

For classes and interfaces (typing class MyClass will cause ReSharper to recognize "class" as a shortcut and insert the template class MyClass { } as opposed to public class MyClass { }), it's possible to edit the template through ReSharper -> Template Explorer.
Things such as the methods generated by the Extract... commands appear to be determined by Visual Studio's code snippets. Their location can be found in the Code Snippet Manager (Ctrl+K, B). Each snippet is an XML file, and this MSDN page describes editing them.

Related

Can't decide between appropriate approach to unit testing of protected methods

Disclaimer: I do know that in an optimal world we only test the publics of an interface. However, in reality we often have a pre-existing code base that wasn't developed under TDD, hence the need for a more flexible approach.
I'd like to design test methods for an ASPX page (blobb.aspx.cs), and since it doesn't inherit from an interface and there's some logic that can't be refactored out, I have to access and test its protected methods. I've done my Googling and arrived at two different suggestions.
Inherit and test from within, as shown in this example (sketched below).
Force access to other assemblies as shown in this example.
The first approach seems to be the most widely suggested, and there are tons of blogs talking about it as well as answers on SO recommending it, so there seems to be a consensus on the subject. However, the second approach seems the most technically proper and has immense upvotes from the community, the only eyebrow-raiser being that it's very sparsely mentioned on the web. I haven't found any comparison putting the two against each other, nor any reasoning on which is more appropriate in what circumstances.
Hence, me asking.
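For reference, the first approach boils down to a test-only subclass that re-exposes the protected member; a minimal sketch, with the method names being hypothetical:

    // Hypothetical protected logic on the page's code-behind.
    public partial class Blobb : System.Web.UI.Page
    {
        protected int ComputeTotal(int a, int b) { return a + b; }
    }

    // Test-only subclass that makes the protected method callable.
    public class BlobbTestable : Blobb
    {
        public int ComputeTotalPublic(int a, int b) { return ComputeTotal(a, b); }
    }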
From what I was reading on MSDN, it sounded like you could automatically have private accessors or InternalsVisibleTo generated for you:
When you create a unit test for an internal method in C# or for a friend method in Microsoft Visual Basic, a dialog box appears that allows you to choose between having your internal methods accessed with the private accessor or with the InternalsVisibleToAttribute.
From: https://msdn.microsoft.com/en-us/library/bb385974(VS.100).aspx
But then I read:
The use of Accessors has been deprecated in Visual Studio 2010 and will not be included in future versions of Visual Studio.
From: https://msdn.microsoft.com/en-us/library/dd293546(v=vs.100).aspx
Obviously, you could still roll your own accessors, but that would be a development effort all on its own. Even auto-generating an inherited class would be a pain. And you'd just be creating a source of meta-bugs.
So it sounds like InternalsVisibleTo is the way to go, and maybe you change the protected methods to "protected internal". That way you can access them without creating another test surface for the meta-bugs to cling to.
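A minimal sketch of that setup; the test assembly name is hypothetical:

    // In the production assembly (typically in AssemblyInfo.cs):
    using System.Runtime.CompilerServices;
    [assembly: InternalsVisibleTo("MyApp.Tests")]

    public partial class Blobb : System.Web.UI.Page
    {
        // Was "protected"; "protected internal" also makes it visible
        // to the test assembly named above.
        protected internal int ComputeTotal(int a, int b) { return a + b; }
    }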

Method names in the output assembly

I am compiling a project with Visual Studio 2013 against .NET 4.5 and then inspecting the result with ILDASM.
What I noticed is that the Release build still contains method names and variable names. I thought these were supposed to be removed in a release build, or do I need an obfuscator to do that?
You need an obfuscator to hide method and member names. Local variable names should be stripped by the compiler, but anything that can turn up via reflection is preserved; that includes class and interface names, public and private methods, and public and private fields.
As for method names, the compiler doesn't know whether your assembly will be used in another project, so preserving method names is logical. And though variable names can't be used anywhere other than in the method where they're defined, I guess they are useful for debugging (be it Debug or Release), and they really take up insignificant space.
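You can see this for yourself without ILDASM; a small sketch, with the assembly file name being hypothetical:

    using System;
    using System.Reflection;

    class MethodLister
    {
        static void Main()
        {
            // Even in a Release build, every method name is recoverable.
            Assembly asm = Assembly.LoadFrom("MyLibrary.dll");
            BindingFlags all = BindingFlags.Instance | BindingFlags.Static |
                               BindingFlags.Public | BindingFlags.NonPublic;
            foreach (Type type in asm.GetTypes())
                foreach (MethodInfo method in type.GetMethods(all))
                    Console.WriteLine(type.Name + "." + method.Name);
        }
    }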
And my advice: don't use an obfuscator unless your application contains security-critical code (and even then, I'd advise obfuscating just that code, not the other methods). Leaving names intact is way better for debugging and for reading exceptions.

Ndoc on properties best practice?

My project manager last week hinted at using NDoc on properties within a class. Is this something that should be done? Is it considered best practice or not? I am currently expanding all my NDoc comments for the section of the project I am working on, but I don't know how deep I need to go with it. I have of course provided summaries, params, returns, and remarks for the class and each method, but do properties require NDoc comments too?
Public properties are a contract with the outside world, so I think they should be documented.
Internal properties will only be used in the same assembly so you could get away with not documenting them.
Protected properties will only be used in derived classes (internal or public) so they might be in need of some documentation.
Private properties will only be used in the class itself so, again, you could get away with it.
Note that "getting away with not documenting it" suggests the way I feel about this: you should document. At the same time I realize that sometimes you need to do one thing or the other...
Perhaps you should ask this on http://programmers.stackexchange.com
Just like any other members, the meaning of properties should be documented. This should include not only what the property does or what it can be used for, but also its initial value, special cases (e.g. values that must not be assigned; values that would cause an exception or automatically be replaced with other values), as well as possibly the ramifications and purpose of overriding the property in a derived class where this is possible.
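For instance, a hypothetical property documented along those lines (the XML doc syntax is the same whether NDoc, Sandcastle, or IntelliSense consumes it):

    using System;

    public class HttpSettings
    {
        private int timeoutSeconds = 30;

        /// <summary>Gets or sets the timeout, in seconds, for outgoing requests.</summary>
        /// <value>The initial value is 30. Values less than 1 are rejected.</value>
        /// <exception cref="ArgumentOutOfRangeException">
        /// Thrown when the assigned value is less than 1.</exception>
        public int TimeoutSeconds
        {
            get { return timeoutSeconds; }
            set
            {
                if (value < 1)
                    throw new ArgumentOutOfRangeException("value");
                timeoutSeconds = value;
            }
        }
    }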
Public properties should definitely always be documented, whether your chosen documentation workflow uses GhostDoc, NDoc, or whatever. XML comments on public properties and methods show up in IntelliSense when people use your code, so there's no reason not to add something there. Even if the name of the property explains what it does, it's very nice to have XML comments there to confirm that. There are plenty of gotchas in plenty of code, so it's courteous to let the people who use your code know they're not walking into one.
Private properties can go either way. I'd hesitate to call documenting them a best practice, since to see the comments you have to be in the class, at which point you can just look at the usage trivially. That said, I still put XML comments on private properties, if for nobody else then for myself. There's no way you will remember what you were doing six months from now, and any structural comments you can add will make it easier to pick up where you left off.

C#: Un-nested struct in same .cs file as related class?

If I'm dealing with one class and one public struct (not nested), should I create a separate .cs file just for the struct? Or leave it un-nested in the class's .cs file? (This is assuming the struct relates to the class, but isn't so exclusive to the class that it should be nested and declared private.)
Edit: I removed my initial question about two classes because I found C# classes in separate files?
Note that the only person(s) who can accurately answer this question is you, and your team. If your team is happy to find several related types inside a single file, combined due to ... whatever... then what I, or any other person, says should be just ... irrelevant.
In any case, I would turn the question upside down:
Is there any reason to place two separate types (related by names, functionality, or whatever, but separate nonetheless) in the same file?
and I've yet to come up with a good reason.
There are extensions/add-ins for Visual Studio that let you type in a name and quickly navigate to a file. I can think of three, but there are undoubtedly others:
DPack
ReSharper
CodeRush/Refactor! Pro
The first allows you to quickly navigate to a file by name. If you know the type but people are putting multiple types into the same file, this will not be helpful at all.
The second and third let you navigate to a type by name, but you shouldn't rely on people having those tools, or knowing how to use them.
To that end, I would advocate following these rules:
Project names should be identical to the root namespace of the project. I differ from this point myself in some cases: I name some projects "...Core" and then remove "Core" from the namespace, but otherwise I leave the project name identical to the namespace.
Use folders in the project to build namespace hierarchies
The name of a type should correspond 100% to the name of the file + whatever extension is right for your language. So "YourType" should be "YourType.cs", "YourType.vb" or "YourType.whatever" depending on language
That depends on who you ask.
I, personally, find it easier to read if they are all, always, broken out. However, the compiler doesn't care... so whatever you and your team agree is easier to understand.
In my opinion it's good practice to avoid that. Some day a developer will be looking around for ClassBar in the project and won't be able to find it easily, because it's tucked away in ClassFoo.cs.
Tools like ReSharper have a neat feature where you can just select a class, right-click, and place it in a new file, which makes this easier.
If you read any of the popular coding standards (Lance Hunt, iDesign, Framework Design Guidelines, etc.), most of them advocate one class per file. Their reasons:
It's annoying to scroll down and search to find out how many classes each .cs file contains/hides.
It creates maintainability issues when using version control.
It hurts usability within a team.
Check here for more interesting discussion on the same question.
I think it's less about whether you can and more about whether you should. For things like this, I feel it's best to look at the convention in the rest of the codebase. Sometimes conformity is better because it makes other developers' jobs easier, since everybody knows where things are.
If it's an entirely new project and you are setting the standards yourself, do what makes sense to you. To me, if the struct has no use outside the related class, I may put them in the same file. Otherwise, I separate them out.

Justification for Reflection in C#

I have wondered about the appropriateness of reflection in C# code. For example, I have written a function which iterates through the properties of a given source object, creates a new instance of a specified type, and then copies the values of properties with the same name from one to the other. I created this to copy data from one auto-generated LINQ object to another, in order to get around the lack of inheritance from multiple tables in LINQ.
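A minimal sketch of that kind of copier (the helper name is mine, not part of any library):

    static class PropertyCopier
    {
        // Copies same-named, writable properties from source onto a new TTarget.
        public static TTarget CopyTo<TTarget>(object source) where TTarget : new()
        {
            TTarget target = new TTarget();
            foreach (var sourceProp in source.GetType().GetProperties())
            {
                var targetProp = typeof(TTarget).GetProperty(sourceProp.Name);
                if (targetProp != null && targetProp.CanWrite)
                    targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
            }
            return target;
        }
    }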
However, I can't help but think code like this is really 'cheating': rather than using the provided language constructs to achieve a given end, it allows you to circumvent them.
To what degree is this sort of code acceptable? What are the risks? What are legitimate uses of this approach?
Sometimes using reflection can be a bit of a hack, but a lot of the time it's simply the most fantastic code tool.
Look at the .Net property grid - anyone who's used Visual Studio will be familiar with it. You can point it at any object and it will produce a simple property editor. That uses reflection; in fact, most of VS's toolbox does.
Look at unit tests - they're loaded by reflection (at least in NUnit and MSTest).
Reflection allows dynamic-style behaviour from static languages.
The one thing it really needs is duck typing - the C# compiler already supports this: you can foreach anything that looks like IEnumerable, whether it implements the interface or not. You can use the C#3 collection syntax on any class that has a method called Add.
Use reflection wherever you need dynamic-style behaviour - for instance you have a collection of objects and you want to check the same property on each.
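For instance, a small sketch of that (the property name is looked up at run time rather than compile time):

    using System.Collections.Generic;

    static class PropertyChecker
    {
        // Reads the named property from each item via reflection.
        public static IEnumerable<object> ReadAll(IEnumerable<object> items,
                                                  string propertyName)
        {
            foreach (object item in items)
            {
                var prop = item.GetType().GetProperty(propertyName);
                if (prop != null)
                    yield return prop.GetValue(item, null);
            }
        }
    }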
The risks are similar to those of dynamic types: compile-time exceptions become run-time ones. Your code is not as 'safe' and you have to react accordingly.
The .Net reflection code is very quick, but not as fast as the explicit call would have been.
I agree, it gives me that "it works, but it feels like a hack" feeling. I try to avoid reflection whenever possible. I have been burned many times after refactoring code which had reflection in it. The code compiles fine, the tests even run, but under special circumstances (which the tests didn't cover) the program blows up at run time, because of my refactoring in one of the objects the reflection code poked into.
Example 1: Reflection in an OR mapper. You change the name or the type of a property in your object model: it blows up at run time.
Example 2: You are in an SOA shop. Web services are completely decoupled (or so you think). They have their own set of generated proxy classes, but in the mapping you decide to save some time and do this:
    ExternalColor c = (ExternalColor)Enum.Parse(typeof(ExternalColor),
                                                internalColor.ToString());
Under the covers this is also reflection, but done by the .NET Framework itself. Now what happens if you decide to rename InternalColor.Grey to InternalColor.Gray? Everything looks OK: it builds fine, and it even runs fine... until the day some user decides to use the color Gray, at which point the mapper will blow up.
Reflection is a wonderful tool that I could not live without. It can make programming much easier and faster.
For instance, I use reflection in my ORM layer to be able to assign properties with column values from tables. If it weren't for reflection, I would have had to create a copy class for each table/class mapping.
As for the external color exception above: the problem is not Enum.Parse, but that the coder didn't catch the proper exception. Since a string is being parsed, the coder should always assume that the string can contain an incorrect value.
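On .NET 4 and later you can also sidestep the exception with Enum.TryParse; a sketch reusing the type names from the example above (the Unknown member is hypothetical):

    ExternalColor c;
    if (!Enum.TryParse(internalColor.ToString(), out c))
    {
        // The name didn't round-trip (e.g. Grey was renamed to Gray),
        // so fall back to a default instead of blowing up at run time.
        c = ExternalColor.Unknown;
    }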
The same problem applies to all advanced programming in .Net. "With great power, comes great responsibility". Using reflection gives you much power. But make sure that you know how to use it properly. There are dozens of examples on the web.
It may be just me, but the way I'd approach this is by creating a code generator; using reflection at run time is a bit costly and untyped. Creating classes that get generated according to your latest code, and that copy everything in a strongly typed manner, would mean that you catch these errors at build time.
For instance, a generated class may look like this:
    static class AtoBCopier
    {
        public static B Copy(A item)
        {
            return new B() { Prop1 = item.Prop1, Prop2 = item.Prop2 };
        }
    }
If either class doesn't have the properties, or their types change, the code doesn't compile. Plus, there's a huge improvement in run-time performance.
I recently used reflection in C# for finding implementations of a specific interface. I had written a simple batch-style interpreter that looked up "actions" for each step of the computation based on the class name. Reflecting over the current namespace then turns up the right implementation of my IStep interface, which can be Execute()ed. This way, adding new "actions" is as easy as creating a new derived class; there's no need to add it to a registry, or even worse: forgetting to add it to a registry...
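A minimal sketch of that lookup, assuming the action classes live in the interpreter's own assembly (IStep's shape here is guessed from the description):

    using System;
    using System.Linq;
    using System.Reflection;

    public interface IStep
    {
        void Execute();
    }

    static class StepFactory
    {
        // Finds the concrete IStep whose class name matches the action name.
        public static IStep Create(string actionName)
        {
            Type stepType = Assembly.GetExecutingAssembly().GetTypes()
                .FirstOrDefault(t => typeof(IStep).IsAssignableFrom(t)
                                     && !t.IsAbstract
                                     && t.Name == actionName);
            if (stepType == null)
                throw new ArgumentException("No IStep named " + actionName);
            return (IStep)Activator.CreateInstance(stepType);
        }
    }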
Reflection makes it very easy to implement plugin architectures where plugin DLLs are automatically loaded at runtime (not explicitly linked at compile time).
These can be scanned for classes that implement/extend relevant interfaces/classes. Reflection can then be used to instantiate instances of these on demand.
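A hedged sketch of that pattern (the plugin interface and its Initialize member are hypothetical):

    using System;
    using System.IO;
    using System.Reflection;

    public interface IPlugin
    {
        void Initialize();
    }

    static class PluginLoader
    {
        // Loads every DLL in a folder and instantiates each IPlugin found.
        public static void LoadAll(string pluginDirectory)
        {
            foreach (string dll in Directory.GetFiles(pluginDirectory, "*.dll"))
            {
                Assembly asm = Assembly.LoadFrom(dll);
                foreach (Type type in asm.GetTypes())
                {
                    if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                    {
                        IPlugin plugin = (IPlugin)Activator.CreateInstance(type);
                        plugin.Initialize();
                    }
                }
            }
        }
    }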
