I am currently playing around with the latest Visual Studio 2017 Release Candidate by creating a .NET Standard 1.6 library. I am using xUnit to unit test my code and was wondering if you can still test internal methods in VS2017.
I remember that in VS2015 you could add a line to the AssemblyInfo.cs file that would let specified projects see internal methods:
[assembly: InternalsVisibleTo("MyTests")]
As there is no AssemblyInfo.cs file in VS2017 .NET Standard projects, I was wondering: can you still unit test internal methods?
According to .NET docs for the InternalsVisibleToAttribute:
The attribute is applied at the assembly level. This means that it can be included at the beginning of a source code file, or it can be included in the AssemblyInfo file in a Visual Studio project.
In other words, you can simply place it in your own arbitrarily named .cs file, and it should work fine:
// some .cs file included in your project
using System.Runtime.CompilerServices;
[assembly: InternalsVisibleTo("MyTests")]
As described here:
https://blog.sanderaernouts.com/make-internals-visible-with-new-csproj-format
It is possible to add the InternalsVisibleTo attribute in the project file by adding another ItemGroup:
<ItemGroup>
  <AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
    <_Parameter1>$(AssemblyName).Tests</_Parameter1>
  </AssemblyAttribute>
</ItemGroup>
or even:
<ItemGroup>
  <AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
    <_Parameter1>$(MSBuildProjectName).Tests</_Parameter1>
  </AssemblyAttribute>
</ItemGroup>
I like that solution because the project file seems to be the right place for defining such concerns.
While the first answer is perfectly fine, if you feel you still want to do this in the original AssemblyInfo.cs, you can always choose not to auto-generate the file and add it manually:
<PropertyGroup>
  <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
</PropertyGroup>
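A hand-maintained AssemblyInfo.cs could then look something like this (a minimal sketch; the title, version, and test assembly name are placeholders):

// Properties/AssemblyInfo.cs -- maintained by hand now that auto-generation is off
using System.Reflection;
using System.Runtime.CompilerServices;

[assembly: AssemblyTitle("MyLibrary")]            // placeholder
[assembly: AssemblyVersion("1.0.0.0")]            // placeholder
[assembly: InternalsVisibleTo("MyLibrary.Tests")] // placeholder test project name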
For more information:
https://stackoverflow.com/a/47075759/869033
The "InternalsVisibleTo" attribute is key to any sort of "white-box" (the term of the decade, I guess) testing for .NET. It can be placed in any C# file with the "assembly" specifier on the front. Note that the Microsoft docs say that if the assembly is signed, the friend assembly name must be qualified with the full public key; the shorter public key token does not work in its place. Access to internals is key to testing concurrent systems and in many other situations. See https://www.amazon.com/xUnit-Test-Patterns-Refactoring-Code/dp/0131495054. In this book, Meszaros describes a variety of coding styles that basically constitute a "Design for Test" approach to program development. At least that's the way I've used it over the years.
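For a signed pair of assemblies, the declaration looks roughly like this (a sketch; the hex key is a truncated placeholder, so substitute the full public key that sn.exe reports for your test assembly, e.g. via sn -Tp MyTests.dll):

// Both assemblies must be signed; replace the placeholder with your full public key.
[assembly: InternalsVisibleTo("MyTests, PublicKey=0024000004800000940000000602000000240000...")]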
ADDED:
Sorry, I haven't been on here for a while. One approach is called the "testing subclass" approach by Meszaros. Again, one has to use InternalsVisibleTo to access the base class's internals. This is a great solution, but it doesn't work for sealed classes. When I teach "Design for Test", I suggest that it's one of the things that have to be "pre-engineered" into the base classes in order to provide testability. It has to become almost a cultural thing: design a "base" base class that is unsealed, and call it UnsealedBaseClass or something uniformly recognizable. This is the class to be subclassed for testing. It is also subclassed to build the production sealed class, which often differs only in the constructors it exposes. I work in the nuclear industry, and the testing requirements are taken VERY seriously, so I have to think about these things all the time. By the way, leaving testing hooks in production code is not considered a problem in our field, as long as they are "internal" in a .NET implementation. The ramifications of NOT testing something can be quite profound.
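A minimal sketch of the pattern (all class names here are made up for illustration):

// In the production assembly: the unsealed "base" base class holds the logic.
public class UnsealedWidget
{
    internal UnsealedWidget(int capacity) { Capacity = capacity; }
    internal int Capacity { get; }
}

// The shipped sealed class, often differing only in the constructors it exposes.
public sealed class Widget : UnsealedWidget
{
    public Widget() : base(16) { }
}

// In the test assembly (granted access via InternalsVisibleTo): the testing subclass.
internal sealed class TestableWidget : UnsealedWidget
{
    public TestableWidget(int capacity) : base(capacity) { }
}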
Another way is to use a 'wrapper' TestMyFoo public class inside the target project that has public methods and inherits from the class you need to test (e.g. MyFoo). These public methods simply call through onto the base class you want to test.
It is not 'ideal', as you end up shipping a test hook in your target project. But consider: modern, reliable cars ship with diagnostic ports, and modern, reliable electronics ship with a JTAG connection. Nobody is silly enough to drive their car using the diagnostic port, though.
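A sketch of such a wrapper, with hypothetical names (assume ComputeTotal is the protected member you want to reach):

public class MyFoo
{
    protected int ComputeTotal(int basis) => basis * 2; // stand-in logic
}

// Ships inside the target project as a deliberate test hook.
public class TestMyFoo : MyFoo
{
    // Simply calls through onto the base class member.
    public int CallComputeTotal(int basis) => ComputeTotal(basis);
}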
Related
I have a .NET project in which some classes (e.g. constants, enums, etc.) are generated by a tool developed in-house. Developers are not involved in changing them. In addition, the team using this tool may make mistakes due to the large size of the project.
Is there any way I can enforce some rules like folder structure, naming, proper namespaces, and such things upon inserting those files in the solution? Or is there a way to test these factors?
To enforce a folder structure, you could add custom logic in MSBuild. The logic in MSBuild would run as part of a build. If you know that certain folders must exist as part of a project and/or that certain files must be in certain folders, you can add verification steps in MSBuild and either issue a warning or stop the build with an error.
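For example, a small verification target in the .csproj could fail the build when an expected folder is missing (a sketch; the folder name and message are assumptions):

<Target Name="VerifyGeneratedLayout" BeforeTargets="BeforeBuild">
  <Error Condition="!Exists('$(MSBuildProjectDirectory)\Generated')"
         Text="Generated files must live in the 'Generated' folder." />
</Target>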
To enforce name and namespace rules/conventions you can use a static code analyzer. You can use the Microsoft Code Analyzer and/or a third party analyzer. If the 'rules' you need are not available out of box, you can write custom rules.
Both the MSBuild and code analyzer can be used with and without the Visual Studio IDE and can be used locally and in automated builds.
What I was looking for is achievable with ArchUnitNET.
It helps with testing the project's folder structure as well as namespaces, naming conventions, and, if I'm not mistaken, even correct inheritance.
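A rule along these lines covers the namespace/naming part (a sketch based on ArchUnitNET's fluent API; the type, namespace, and suffix are made up):

using ArchUnitNET.Domain;
using ArchUnitNET.Fluent;
using ArchUnitNET.Loader;
using static ArchUnitNET.Fluent.ArchRuleDefinition;

public class ArchitectureTests
{
    // SomeGeneratedType is a placeholder for any type in the assembly under test.
    private static readonly Architecture Architecture =
        new ArchLoader().LoadAssemblies(typeof(SomeGeneratedType).Assembly).Build();

    public void GeneratedClassesLiveInTheRightNamespace()
    {
        IArchRule rule = Classes().That().HaveNameEndingWith("Constants")
            .Should().ResideInNamespace("MyCompany.Generated");
        rule.Check(Architecture);
    }
}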
I stumbled into a cooperation project where the other party references an interface library of mine and deploys a self-compiled MEF plugin for our tool. I know which methods those guys are using, and I want to monitor our library during the build process to see whether the method signatures have been changed (just to make sure no one checked in changes that should lead to another interface version and would impair the plugins' loadability).
Actually, I have a console project in mind, where the signatures are somehow hardcoded and checked via reflection - but maybe there is a more elegant or simple way.
Any hint would be great.
Thanks in advance!
Roslyn 2.3 introduces a feature for generating reference assemblies, that is, an assembly containing only public types and members. When used together with the "deterministic" feature (=> reproducible builds), the generated reference assembly remains binary-identical as long as no changes to the public interface are made (implementation changes and private/internal members don't matter).
So you can add this to your csproj:
<PropertyGroup>
  <Deterministic>true</Deterministic>
  <ProduceReferenceAssembly>true</ProduceReferenceAssembly>
</PropertyGroup>
Until VS 2017 15.5 comes out, I suggest adding <CompileUsingReferenceAssemblies>false</CompileUsingReferenceAssemblies> to all consuming projects because the IDE (e.g. "go to definition") has some problems with this feature unless you are using the "new project system" that is used for .NET Core and .NET Standard projects. (The idea would be that projects referencing the project are only rebuilt if the public interface changes - this speeds up incremental build for large solutions when only implementations change).
These changes will create a ref folder in your output. You can then check whether the checksum of the assembly in there matches a known checksum on each build.
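The comparison itself can be a tiny console step run after the build (a sketch; the path and the expected hash are placeholders):

using System;
using System.IO;
using System.Security.Cryptography;

class RefAssemblyCheck
{
    static int Main()
    {
        var path = @"bin\Release\ref\MyInterfaces.dll";   // placeholder path
        const string expected = "<known-good SHA-256>";   // placeholder value

        using (var sha = SHA256.Create())
        {
            var hash = BitConverter.ToString(sha.ComputeHash(File.ReadAllBytes(path)))
                                   .Replace("-", "");
            if (!hash.Equals(expected, StringComparison.OrdinalIgnoreCase))
            {
                Console.Error.WriteLine("Reference assembly changed: public interface differs.");
                return 1; // non-zero exit code can be used to break the build
            }
        }
        return 0;
    }
}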
I ended up creating a small console application with a try-catch block, using the same interface DLL and the same objects as the project partner does, compiled against the last released interface library. During execution it falls into the catch block if the signatures have become invalid (detected by the normal .NET loading process), and the exit code is then set to -1.
I do all this in the post-build process, catching the exit code as described in this article and breaking the build automatically.
Not very happy with that solution, but got it working ... Further ideas still wanted :-)
There are a lot of variables and methods in my program, and I want to separate some of them into other class files. But as the program grows, the methods and functions can change.
I searched on the net, but many people generally speak about DLL files. Without making a DLL file, how can I arrange my code and split it into small class files?
Yes, just split it out into a separate file in a new class, but still inside the same project. The term for what you are doing is code refactoring. There are some tools built into Visual Studio to make it easier to do, and there are some 3rd-party tools that add even more features.
But all it boils down to is just making new classes in the same project and referencing those new classes from where you took the code out from.
You can add folders to your project. Folders are namespace providers by default, so classes in a folder get a different namespace.
For example, if your default namespace is MyNameSpace and you create a folder called Entity, then all classes in this folder have the namespace MyNameSpace.Entity.
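For instance (Customer is just an illustrative name):

// Entity\Customer.cs -- the folder name becomes part of the default namespace
namespace MyNameSpace.Entity
{
    public class Customer
    {
    }
}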
All items in a project are compiled into one single DLL or EXE.
Just add more classes to the project and put the data and behavior (methods) into the appropriate classes. The project will still build into a single exe or dll.
Generally, it's better to add a second project under the same solution, called "CommonLib" or something like that. Then you add it as a reference to the main application and set up the solution so that the application's build depends on the library's build. Add a using statement for the common lib wherever you want to use those objects. This is definitely better for large-scale or enterprise applications. There's a pretty decent chance that somewhere down the line you'll want to reuse some of this code; if everything builds into a single EXE, that won't be an option.
Honestly, I can't word my question any better without describing it.
I have a base project (with all its glory, dlls, resources etc) which is a CMS.
I need to use this project as a base for other custom-bake projects.
This base project is to be maintained and updated among all custom bake projects.
I use subversion (Collabnet and Tortise SVN)
I have two questions:
1 - Can I use subversion to share the base project among other projects
What I mean here is: can I "check out" the base project into another checked-out project and have both update and commit separately? To paint a picture: say I am working on a custom project and I modify the core/base project in some way (which I know will suit the others). Can I then commit those changes, and when I update the base project in the other checked-out copies, will it pull the changes? In short, I would like to avoid manually deploying updated core files to each separate project whenever I make changes.
2 - If I create a custom file (let's say a web control or aspx page, etc.) can I have it compile separately from the base project?
Another tricky one to explain. When I publish my web application, it creates DLLs based on the namespaces of the projects attached to it. So I may have a number of DLLs, including the "website's" namespace DLL, which could simply be Website. I want to be able to make a separate, custom control which does not compile into those DLLs, as the custom files should not rely on those DLLs to run. Is it as simple as setting a separate namespace for those files, like CustomFiles.ProjectName for example?
Think of the whole idea as adding modules to the .NET project, I don't want the module's code in any of the core DLLs but I do need for module to be able to access the core dlls.
(There is no need for the core project to access the module code, as it should be one-way only in theory, though I reckon it would not be possible anyway without using JSON/SOAP or something like that; maybe I am wrong.)
I want to create a pluggable environment much like that of Joomla/Wordpress as since PHP generally doesn't have to be compiled first I see this is the reason why all this is possible/easy. The idea is to allow pluggable themes, modules etc etc.
(I haven't tried simply adding .NET themes after compile/publish but I am assuming this is possible anyway? OR does the compiler need to reference items in the files?)
UPDATE (16/05/2010):
I posted a similar question with a little more detail for question 2 on Experts-Exchange. I don't want to post all that info here as it will just be too messy, but it explains question 2 in greater detail.
For your first question, you want to use svn externals. More details can be found here: http://svnbook.red-bean.com/en/1.0/ch07s03.html
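Setting it up might look like this (a sketch; the repository URL and folder name are made up):

svn propset svn:externals "BaseProject https://svn.example.com/repos/cms/trunk" .
svn commit -m "Pull the base project in via svn:externals"
svn update    # fetches BaseProject into this working copy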
For your second question, you need to create a separate assembly, and the easiest way is to create a new project within your solution. You can't have a single project emit two DLLs (that I know of).
For your first question:
If the base project is a library then there is nothing stopping you from creating the following directory structure on your SVN:
Base project
Cool project nr 1
Cool project nr 2
All projects built on the Base project will include a relative reference, and then everybody can check out their Cool project X plus the Base project and work on them. Checking in changes to the Base project allows everybody else to see them by updating their Base project copy. Advantage: only one SVN trunk required.
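Each Cool project would then reference the base with a relative path in its project file, something like this (a sketch; names follow the layout above):

<ItemGroup>
  <ProjectReference Include="..\Base project\BaseProject.csproj" />
</ItemGroup>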
For your second question :
I tried my best, but I can't understand what you're asking :).
How do you usually go about separating your codebase and associated unit tests? I know people who create a separate project for unit tests, which I personally find confusing and difficult to maintain. On the other hand, if you mix up code and its tests in a single project, you end up with binaries related to your unit test framework (be it NUnit, MbUnit or whatever else) and your own binaries side by side.
This is fine for debugging, but once I build a release version, I really do not want my code to reference the unit testing framework any more.
One solution I found is to enclose all your unit tests within #if DEBUG -- #endif directives: when no code references a unit testing assembly, the compiler is clever enough to omit the reference in the compiled code.
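In other words, something like this (a sketch using NUnit; the names are illustrative):

#if DEBUG
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void TwoPlusTwo_IsFour()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}
#endif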
Are there any other (possibly more comfortable) options to achieve a similar goal?
I definitely advocate separating your tests out to a separate project. It's the only way to go in my opinion.
Yes, as Gary says, it also forces you to test behavior through public methods rather than poking about with the innards of your classes.
As the others point out, a separate test project (for each normal project) is a good way to do it. I usually mirror the namespaces and create a test class for each normal class with "Test" appended to the name. This is supported directly in the IDE if you have Visual Studio Team System, which can automatically generate test classes and methods in another project.
One thing to remember if you want to test classes and methods with the internal access modifier is to add the following line to the AssemblyInfo.cs file of each project to be tested:
[assembly: InternalsVisibleTo("UnitTestProjectName")]
The .NET Framework, from version 2.0 on, has a useful feature where you can mark an assembly with the InternalsVisibleTo attribute, which allows its internals to be accessed by another assembly.
A sort of assembly tunnelling feature.
Yet another alternative to using compiler directives within a file or creating a separate project is merely to create additional .cs files in your project.
With some magic in the project file itself, you can dictate that:
nunit.framework DLLs are only referenced in a debug build, and
your test files are only included in debug builds
Example .csproj excerpt:
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
  ...
  <Reference Include="nunit.framework" Condition=" '$(Configuration)'=='Debug' ">
    <SpecificVersion>False</SpecificVersion>
    <HintPath>..\..\debug\nunit.framework.dll</HintPath>
  </Reference>
  ...
  <Compile Include="Test\ClassTest.cs" Condition=" '$(Configuration)'=='Debug' " />
  ...
</Project>
I would recommend a separate project for unit tests (and yet more projects for integration tests, functional tests etc.). I have tried mixing code and tests in the same project and found it much less maintainable than separating them into separate projects.
Maintaining parallel namespaces and using a sensible naming convention for tests (e.g. MyClass and MyClassTest) will help you keep the codebase maintainable.
As long as your tests are in a separate project, the tests can reference the codebase, but the codebase never has to reference the tests. I have to ask: what's confusing about maintaining two projects? You can keep them in the same solution for organization.
The complicated part, of course, is when the business has 55 projects in the solution and 60% of them are tests. Count yourself lucky.
I put the tests in a separate project but in the same solution. Granted, in big solutions there might be a lot of projects but the solution explorer is good enough on separating them and if you give everything reasonable names I don't really think it's an issue.
One thing yet to be considered is that versions of Visual Studio prior to 2005 did not allow EXE assembly projects to be referenced from other projects. So if you are working on a legacy project in VS.NET, your options would be:
Put unit tests in the same project and use conditional compilation to exclude them from release builds.
Move everything to dll assemblies so your exe is just an entry point.
Circumvent the IDE by hacking the project file in a text editor.
Of the three, conditional compilation is the least error-prone.
I've always kept my unit tests in a separate project so they compile to their own assembly.
For each project there is a corresponding .Test project that contains tests on it.
E.g. for the assembly called, say "Acme.BillingSystem.Utils", there would be a test assembly called "Acme.BillingSystem.Utils.Test".
Exclude it from the shipping version of your product by not shipping that dll.
If the #if DEBUG directive allows for a clean "release" version, why would you need a separate project for tests? The NUnit LibraryA/B example (yeah, I know it's an example) does this. I'm currently wrestling with this scenario. I had been using a separate project, but this seems to allow for some productivity improvements. Still humming and hawing.
I definitely agree with everyone else that you should separate the tests from your production code. If you insist on not doing so, however, you should define a conditional compilation constant called TEST and wrap all of your unit test classes in
#if TEST
    // ... test classes ...
#endif
This ensures that the test code does not compile in a production scenario. Once that is done, you should either be able to exclude the test DLLs from your production deployment or, even better (but higher maintenance), create a NAnt or MSBuild build for production that compiles without the references to the test DLLs.
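Defining the constant for debug builds only can be done in the project file (a sketch):

<PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
  <DefineConstants>DEBUG;TRACE;TEST</DefineConstants>
</PropertyGroup>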
I always create a separate Acme.Stuff.Test project that is compiled separately.
The converse argument is: why do you want to take the tests out? Why not deliver the tests? If you deliver the tests along with a test runner, you have some level of acceptance test and self-test delivered with the product.
I've heard this argument a few times and thought about it but I personally still keep tests in a separate project.