We have created a P2P file-sharing application in C# and I want to have some integration tests which check, for example, whether a file downloaded correctly from the sender, and verify its size, the download speed, etc.
I have tried sending a file and then checking it, but I cannot check this file without running the program.
Any idea how I can create some tests to check the data?
Let's start off with the difference between an integration test and a unit test. Whereas in a unit test you test individual parts in isolation (usually achieved by mocking the dependencies), an integration test works against the broader (or full) system.
I'm assuming you want to run the integration tests in an automated fashion. Basically, you can run them with the unit test framework you're already using (for example NUnit); the difference is that when creating the necessary services you don't use any mocks, but inject the actual dependencies.
How exactly you set up the "frame" of the integration test depends on your project. For example, if you are using an IoC library to inject dependencies, you might be able to use it within the integration tests as well, rather than having to wire your services up by hand.
You also have to be careful about the fact that integration tests can affect the system they run on. So if you're doing something on the file system, it's good practice to make sure you clean up after the test.
I would recommend creating a basic "framework" that fits your project for setting up integration tests. It would contain the generic code you need to run the tests against your system and could, for example, create dedicated folders in a temp directory that are removed after every test run.
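As a rough sketch of that idea (assuming MSTest and a hypothetical base class of your own):

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class IntegrationTestBase
{
    // Hypothetical per-test temp folder; adjust to your project's conventions.
    protected string TempFolder { get; private set; }

    [TestInitialize]
    public void CreateTempFolder()
    {
        TempFolder = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
        Directory.CreateDirectory(TempFolder);
    }

    [TestCleanup]
    public void RemoveTempFolder()
    {
        if (Directory.Exists(TempFolder))
            Directory.Delete(TempFolder, recursive: true);
    }
}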
Now to your more concrete question: if I understand it correctly, you need to create a "sender" that provides the test file you want to download. As part of the test you could deploy this test file to the above-mentioned temp folder and configure the sender to provide that file. Then you could create the client that interacts with this sender and downloads the file to somewhere on your system.
Before you initiate the download you could record the time and measure how long it took. After the download has finished you could additionally check the downloaded file's properties, or compare it to the original file, which I assume should be identical.
Here is some pseudo-code that shows the general concept:
[TestMethod]
[TestCategory("Integration")]
public void DownloadFileFromSender_ConnectionDoesNotGetInterrupted_SuccessfullyDownloadsFile()
{
    // Arrange - set up files and temp folders, create sender and receiver
    SetupTestFile(@"Tempfolder\Testfile.txt");
    var sender = new Sender(...);
    var receiver = new Receiver(...);
    sender.ProvideFile(@"Tempfolder\Testfile.txt");

    // Act - put your actual test here
    var timeBeforeDownload = DateTime.Now;
    receiver.DownloadFile(sender, @"Tempfolder\Testfile.txt", @"Tempfolder\DownloadedFile.txt");
    var totalDownloadTime = DateTime.Now - timeBeforeDownload;

    // Assert - verify your assumptions here, e.g. download time or file properties
    Assert.IsTrue(totalDownloadTime.TotalMilliseconds < 10000);
    Assert.IsTrue(File.Exists(@"Tempfolder\DownloadedFile.txt"));
    Assert.AreEqual(
        new FileInfo(@"Tempfolder\Testfile.txt").Length,
        new FileInfo(@"Tempfolder\DownloadedFile.txt").Length);
}
Be aware that integration tests can take longer to set up and run, depending on the size/complexity of the parts you are testing. They do not replace unit tests but rather complement them. Because of this difference it's also a good idea to tag them, so that you can run just the unit tests or just the integration tests.
Again, the specifics of how to set up the test environment are up to you and depend heavily on the project you want to test.
Related
Is it possible to determine how many tests have been selected to execute before the test runner executes them? This would be helpful for local testing, since our logic generates test data configuration for one test suite configuration at a time; being able to figure out which tests have been selected to execute would let me generate test data configuration only for those tests.
This would be useful when writing tests and checking that they work, since we only want to generate test data for the selected tests.
Right now we have to comment out code to stop it from executing the test data configuration.
Thanks.
I think you are overthinking this a bit. Your setup work can be split between a whole-test-assembly setup that runs only once, a namespace-wide setup that runs once before any test in the namespace, the constructor of a test fixture, the start of an actual test, and so on.
If you are reusing a Docker instance and app pool for all the tests, initialize them in the whole-assembly setup so that it is only done once. Each test can then add whatever data it needs before it starts. If some of that data is shared between tests, set global flags to indicate what has already been done, and if some data a test needs hasn't been set up yet, do the incremental additional setup before continuing with that test. This generally isn't required, though, if you organize your tests into namespaces properly and just use the namespace-wide setup for fixtures.
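For example, with NUnit 3 the different levels might be split roughly like this (the class names and what gets initialized here are purely illustrative):

using NUnit.Framework;

// Placed in the namespace shared by the tests, this runs once before any of them
// (put it outside every namespace to make it assembly-wide instead).
[SetUpFixture]
public class SharedInfrastructureSetup
{
    [OneTimeSetUp]
    public void StartSharedInfrastructure()
    {
        // e.g. start the shared docker instance / app pool once
    }

    [OneTimeTearDown]
    public void StopSharedInfrastructure()
    {
        // tear the shared infrastructure down after all tests have run
    }
}

[TestFixture]
public class SomeFeatureTests
{
    [SetUp]
    public void AddTestSpecificData()
    {
        // add only the data the tests in this fixture need
    }
}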
I have some classes that implement logic related to the file system and files. For example, I am performing the following tasks as part of this logic:
checking whether a certain folder has a certain structure (e.g. it contains subfolders with specific names, etc.)
loading some files from those folders and checking their structure (e.g. these are configuration files, located at a certain place within a certain folder)
loading additional files for testing/validation from the configuration file (e.g. this config file contains information about other files in the same folder that should have a particular internal structure, etc.)
Now, all this logic has some workflow, and exceptions are thrown if something is not right (e.g. the configuration file is not found at the specific folder location). In addition, the Managed Extensibility Framework (MEF) is involved in this logic, because some of the files I am checking are managed DLLs that I manually load into MEF aggregates, etc.
Now I'd like to test all of this in some way. I was thinking of creating several physical test folders on the HDD that cover the various test cases and then running my code against them. I could create, for example:
folder with correct structure and all files being valid
folder with correct structure but with invalid configuration file
folder with correct structure but missing configuration file
etc...
Would this be the right approach? I am not sure, though, exactly how to run my code in this scenario... I certainly don't want to run the whole application and point it at these mocked folders. Should I use some unit testing framework to write a kind of "unit test" that executes my code against these file system objects?
In general, is this a correct approach for this kind of testing scenario? Are there better approaches?
First of all, I think it is better to write unit tests that exercise your logic without touching any external resources. Here you have two options:
use an abstraction layer to isolate your logic from external dependencies such as the file system. You can easily stub or mock these abstractions in unit tests (by hand or with the help of a constrained isolation framework such as NSubstitute, FakeItEasy or Moq). I prefer this option, because such tests push you towards a better design (see the sketch after this list).
if you have to deal with legacy code (only in this case), you can use one of the unconstrained isolation frameworks (such as TypeMock Isolator, JustMock or Microsoft Fakes) that can stub/mock pretty much everything (for instance, sealed and static classes or non-virtual methods). But they cost money. The only "free" option is Microsoft Fakes, and only if you are the happy owner of Visual Studio 2012/2013 Premium/Ultimate.
In unit tests you don't need to test the logic of external libraries such as MEF.
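A minimal sketch of the first option, assuming a hand-rolled IFileSystem abstraction and NSubstitute (the interface and the ConfigurationLoader class are invented for illustration):

using System.IO;
using NSubstitute;
using NUnit.Framework;

public interface IFileSystem
{
    bool FileExists(string path);
    string ReadAllText(string path);
}

public class ConfigurationLoader
{
    private readonly IFileSystem _fileSystem;

    public ConfigurationLoader(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public string Load(string path)
    {
        // The logic under test only talks to the abstraction, never to System.IO directly.
        if (!_fileSystem.FileExists(path))
            throw new FileNotFoundException("Configuration file not found", path);

        return _fileSystem.ReadAllText(path);
    }
}

[TestFixture]
public class ConfigurationLoaderTests
{
    [Test]
    public void Throws_when_configuration_file_is_missing()
    {
        var fileSystem = Substitute.For<IFileSystem>();
        fileSystem.FileExists(@"C:\App\config.xml").Returns(false);

        var loader = new ConfigurationLoader(fileSystem);

        Assert.Throws<FileNotFoundException>(() => loader.Load(@"C:\App\config.xml"));
    }
}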
Secondly, if you want to write integration tests, then you need a "happy path" test (when everything is OK) and some tests that exercise your logic in boundary cases (file or directory not found). Unlike @Sergey Berezovskiy, I recommend creating separate folders for each test case. The main advantages are:
you can give your folders meaningful names that more clearly express your intentions;
you don't need to write complex (i.e. fragile) setup/teardown logic.
even if you later decide to use another folder structure, you can change it more easily, because you will already have working code and tests (refactoring under a test harness is much easier).
For both unit and integration tests you can use ordinary unit testing frameworks (like NUnit or xUnit.net). With these frameworks it is pretty easy to launch your tests in continuous integration scenarios on your build server.
If you decide to write both kinds of tests, then you need to separate the unit tests from the integration tests (you can create a separate project for each kind). Reasons for this:
unit tests are a safety net for developers. They must provide quick feedback about the expected behavior of system units after the latest code changes (bug fixes, new features). If they are run frequently, a developer can quickly and easily identify the piece of code that broke the system. Nobody wants to run slow unit tests.
integration tests are generally slower than unit tests, but they have a different purpose: they check that the units work as expected with real dependencies.
You should test as much logic as possible with unit tests, by abstracting calls to the file system behind interfaces. Using dependency injection and a faking framework such as FakeItEasy will allow you to test that your interfaces are actually being used/called to operate on the files and folders.
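Purely as an illustration (the IFileStore interface and FileImporter class are invented for this sketch), verifying with FakeItEasy that the abstraction is actually used might look like this:

using FakeItEasy;
using NUnit.Framework;

public interface IFileStore
{
    string[] ReadLines(string path);
}

public class FileImporter
{
    private readonly IFileStore _store;

    public FileImporter(IFileStore store)
    {
        _store = store;
    }

    public int Import(string path)
    {
        // All file access goes through the injected abstraction.
        return _store.ReadLines(path).Length;
    }
}

[TestFixture]
public class FileImporterTests
{
    [Test]
    public void Import_reads_the_file_through_the_abstraction()
    {
        var fileStore = A.Fake<IFileStore>();
        A.CallTo(() => fileStore.ReadLines(@"data\input.txt"))
            .Returns(new[] { "line1", "line2" });

        var importer = new FileImporter(fileStore);
        var count = importer.Import(@"data\input.txt");

        Assert.AreEqual(2, count);
        // Verify the interface was used to do the actual file work.
        A.CallTo(() => fileStore.ReadLines(@"data\input.txt")).MustHaveHappened();
    }
}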
At some point however, you will have to test the implementations working on the file-system too, and this is where you will need integration tests.
The things you need to test seem to be relatively isolated since all you want to test is your own files and directories, on your own file system. If you wanted to test a database, or some other external system with multiple users, etc, things might be more complicated.
I don't think you'll find any "official rules" for how best to do integration tests of this type, but I believe you are on the right track. Some ideas you should strive towards:
Clear standards: Make the rules and purpose of each test absolutely clear.
Automation: The ability to re-run tests quickly and without too much manual tweaking.
Repeatability: A test-situation that you can "reset", so you can re-run tests quickly, with only slight variations.
Create a repeatable test-scenario
In your situation, I would set up two main folders: One in which everything is as it is supposed to be (i.e. working correctly), and one in which all the rules are broken.
I would create these folders and any files in them, then zip each of the folders, and write logic in a test-class for unzipping each of them.
These are not really tests; think of them instead as "scripts" for setting up your test-scenario, enabling you to delete and recreate your folders and files easily and quickly, even if your main integration tests should change or mess them up during testing. The reason for putting them in a test-class is simply to make them easy to run from the same interface you will be working with during testing.
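A sketch of such a "script", using ZipFile from System.IO.Compression (.NET 4.5+); the paths and class name are placeholders:

using System.IO;
using System.IO.Compression;
using NUnit.Framework;

[TestFixture]
public class TestScenarioSetup
{
    // Not a real test: run it manually (or before a test run) to (re)create the scenario folders.
    [Test, Explicit]
    public void RecreateScenarioFolders()
    {
        ResetScenario(@"Scenarios\ValidStructure.zip", @"Scenarios\ValidStructure");
        ResetScenario(@"Scenarios\BrokenRules.zip", @"Scenarios\BrokenRules");
    }

    private static void ResetScenario(string zipPath, string targetFolder)
    {
        if (Directory.Exists(targetFolder))
            Directory.Delete(targetFolder, recursive: true);

        ZipFile.ExtractToDirectory(zipPath, targetFolder);
    }
}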
Testing
Create two sets of test-classes, one set for each situation (correctly set up folder vs. folder with broken rules). Place these tests in a hierarchy of folders that feels meaningful to you (depending on the complexity of your situation).
It's not clear how familiar you are with unit-/integration-testing. In any case, I would recommend NUnit. I like to use the extensions in Should as well. You can get both of these from NuGet:
Install-Package NUnit
Install-Package Should
The should-package will let you write the test-code in a manner like the following:
someCalculatedIntValue.ShouldEqual(3);
someFoundBoolValue.ShouldBeTrue();
Note that there are several test-runners available to run your tests with. I've personally only had real experience with the runner built into ReSharper, but I'm quite satisfied with it and have no problem recommending it.
Below is an example of a simple test-class with two tests. Note that in the first we check for an expected value using an extension method from Should, while we don't explicitly assert anything in the second. That is because it is tagged with [ExpectedException], meaning it will fail if an exception of the specified type is not thrown when the test is run. You can use this to verify that an appropriate exception is thrown whenever one of your rules is broken.
[TestFixture]
public class When_calculating_sums
{
    private MyCalculator _calc;
    private int _result;

    [SetUp] // Runs before each test
    public void SetUp()
    {
        // Create an instance of the class to test:
        _calc = new MyCalculator();

        // Logic to test the result of:
        _result = _calc.Add(1, 1);
    }

    [Test] // First test
    public void Should_return_correct_sum()
    {
        _result.ShouldEqual(2);
    }

    [Test] // Second test
    [ExpectedException(typeof(DivideByZeroException))]
    public void Should_throw_exception_for_invalid_values()
    {
        // Divide by 0 should throw a DivideByZeroException:
        var otherResult = _calc.Divide(5, 0);
    }

    [TearDown] // Runs after each test (seldom needed in practice)
    public void TearDown()
    {
        _calc.Dispose();
    }
}
With all of this in place, you should be able to create and recreate test-scenarios, and run tests on them in an easy and repeatable way.
Edit: As pointed out in a comment, Assert.Throws() is another option for ensuring exceptions are thrown as required. Personally, I like the tag-variant though, and with parameters, you can check things like the error message there too. Another example (assuming a custom error message is being thrown from your calculator):
[Test]
[ExpectedException(typeof(DivideByZeroException),
    ExpectedMessage = "Attempted to divide by zero")]
public void When_attempting_something_silly()
{
    ...
}
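For comparison, the same check written with Assert.Throws() could look roughly like this (reusing the _calc field and the custom message assumed in the attribute example above):

[Test]
public void Should_throw_exception_for_invalid_values_using_AssertThrows()
{
    // Assert.Throws returns the exception, so you can inspect it afterwards:
    var ex = Assert.Throws<DivideByZeroException>(() => _calc.Divide(5, 0));
    ex.Message.ShouldEqual("Attempted to divide by zero");
}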
I'd go with a single test folder. For the various test cases you can put different valid/invalid files into that folder as part of the context setup. In the test teardown, just remove those files from the folder again.
E.g. with SpecFlow:
Given configuration file not exist
When something
Then foo
Given configuration file exists
And some dll not exists
When something
Then bar
Define each context setup step as copying (or not copying) the appropriate file to your folder. You can also use a table to define which files should be copied to the folder:
Given some scenario
| FileName |
| a.config |
| b.invalid.config |
When something
Then foobar
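The corresponding SpecFlow step bindings would then do the copying and cleanup. A rough sketch (the folder path, file names and step texts are only placeholders):

using System.IO;
using TechTalk.SpecFlow;

[Binding]
public class ConfigurationFileSteps
{
    private const string TestFolder = @"C:\Temp\IntegrationTestFolder";

    [Given(@"configuration file exists")]
    public void GivenConfigurationFileExists()
    {
        File.Copy(@"TestData\a.config", Path.Combine(TestFolder, "a.config"), overwrite: true);
    }

    [Given(@"configuration file not exist")]
    public void GivenConfigurationFileDoesNotExist()
    {
        var path = Path.Combine(TestFolder, "a.config");
        if (File.Exists(path))
            File.Delete(path);
    }

    [AfterScenario]
    public void CleanUpFolder()
    {
        foreach (var file in Directory.GetFiles(TestFolder))
            File.Delete(file);
    }
}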
I don't know your program's architecture well enough to give good advice, but I will try.
I believe that you don't need to test the real file structure. File access services are provided by the system/framework, and they don't need to be tested. You need to mock these services in the related tests.
Also, you don't need to test MEF. It is already tested.
Use the SOLID principles to shape your unit tests. In particular, take a look at the Single Responsibility Principle: it will allow you to create unit tests that are not related to each other. Just don't forget about mocking to avoid dependencies.
To make integration tests, you can create a set of helper classes that emulate the scenarios of file structures you want to test. This allows your tests to stay independent of the machine they run on. Such an approach may be more complicated than creating a real file structure, but I like it.
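For instance, a small helper that builds such a scenario in a temp folder could look something like this (a sketch; all names are invented):

using System;
using System.Collections.Generic;
using System.IO;

public class FakeFolderScenario : IDisposable
{
    public string RootPath { get; private set; }

    // Builds a folder structure from relative file paths and their contents.
    public FakeFolderScenario(IDictionary<string, string> filesWithContent)
    {
        RootPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());

        foreach (var file in filesWithContent)
        {
            var fullPath = Path.Combine(RootPath, file.Key);
            Directory.CreateDirectory(Path.GetDirectoryName(fullPath));
            File.WriteAllText(fullPath, file.Value);
        }
    }

    public void Dispose()
    {
        if (Directory.Exists(RootPath))
            Directory.Delete(RootPath, recursive: true);
    }
}

A test would then create the scenario with exactly the files it needs, point the logic under test at RootPath, and dispose the scenario afterwards.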
I would build the framework logic first and test concurrency issues and file system exceptions against it to ensure a well-defined test environment.
Try to list all the boundaries of the problem domain. If there are too many, consider the possibility that your problem is too broadly defined and needs to be broken down. What is the full set of necessary and sufficient conditions required to make your system pass all tests? Then look at every condition, treat it as an individual attack point, and list all the ways you can think of for breaching it. Try to prove to yourself that you have found them all. Then write a test for each.
I would go through the above process first for the environment, build and test that to a satisfactory standard, and then do the same for the more detailed logic within the workflow. Some iteration may be required if dependencies between the environment and the detailed logic occur to you during testing.
Trying to debug a plugin in CRM 2011 can be extremely difficult. Not only are there issues with getting the .pdb files into the correct location on the server, but each time you make a code change you have to go through the hassle of deploying and re-registering the plugin. Since the trigger is in CRM itself, it's hard to create a unit test for it.
My current process for writing a unit test for a brand new plugin is rather slow and error-prone, but goes something like this:
Register the new plugin using the SDK plugin registration tool
Attach a debugger to the w3wp.exe, putting a break point in the plugin code.
Trigger the plugin through whatever action it is registered to run for.
When the break point gets hit, serialize the preimage, postimage, and target values of the pipeline to XML files; these then become the input to my unit test.
Stop debugging and create a new unit test, using RhinoMocks to mock the PluginExecutionContext and ServiceProvider, loading the serialized XML files as stubs for the input parameters.
Create methods that run at the start and end of each unit test: they reset (first attempting to delete, then add) the dummy data for the unit test to process, and delete the dummy data again at the end of the test.
Edit the serialized files to reference the dummy data so that I can ensure the plugin works against exactly the same data each time it is run.
Declare and instantiate the plugin in the unit test, passing in the mocked objects.
Execute the plugin, running additional queries to ensure that the plugin performed the work I was expecting, asserting on failure.
This is a pain to do. From getting the images correct, to creating the dummy data and resetting it each time the test is run, there seems to be a lot of room for improvement.
How can I unit test a plugin without having to actually trigger it from CRM, or run through all the hoopla of debugging it in CRM first and creating unique dummy data for each test? How can I use injection to eliminate the need to delete, create, test, verify, and delete data in CRM for each unit test?
Update 2016
This question is still getting quite a few hits, so I thought I'd add the two open source projects (that I know of) that provide fake CRM instances for unit testing:
FakeXrmEasy -- Created by Jordi (see answer below)
Primarily Fake CRM Service
Support for Plugin/Workflow Faking
Dependency on FakeItEasy
Great Documentation
XrmUnitTest -- Created by myself
Fake CRM Service + more (Assumptions, Entity Builders, etc)
Fluent Support for Plugin/Workflow Faking
No Dependency on any mocking framework
Sucky Documentation (I’m working on it)
Check out this video I created to compare and contrast the differences.
I serialize the plugin execution context to a file for use with unit tests. There is a good project on CodePlex that does this: http://crm2011plugintest.codeplex.com/
It makes debugging and unit testing easier, and you can 'record' real-world testing.
How can I unit test a plugin without having to actually trigger it from CRM, or run through all the hoopla of debugging it in CRM first, and creating unique dummy data for each test?
With mocking. See this link for what classes to mock with RhinoMocks. Sounds like you are on your way in this regard.
How can I use injection to eliminate the need to be deleting, creating, testing, verifying, and deleting data in CRM for each unit test?
Injecting values for the input parameters can be done by stubbing in a hand-cranked instance of the entity you are going to manipulate:
// Add the target entity
Entity myStubbedEntity = new Entity("account");
// set properties on myStubbedEntity specific for this test...
ParameterCollection inputParameters = new ParameterCollection();
inputParameters.Add("Target", myStubbedEntity);
pipelineContext.Stub(x => x.InputParameters).Return(inputParameters);
Isn't that easier than capturing the XML data and rehydrating the entire input parameters collection?
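To complete the picture, here is a rough sketch of wiring the stubbed context and service provider into the plugin with Rhino Mocks (MyAccountPlugin stands in for your own plugin class):

// Arrange: stub the CRM plumbing instead of capturing it from a live system.
var pipelineContext = MockRepository.GenerateStub<IPluginExecutionContext>();
var serviceProvider = MockRepository.GenerateStub<IServiceProvider>();
serviceProvider.Stub(x => x.GetService(typeof(IPluginExecutionContext)))
               .Return(pipelineContext);

var inputParameters = new ParameterCollection();
inputParameters.Add("Target", new Entity("account"));
pipelineContext.Stub(x => x.InputParameters).Return(inputParameters);

// Act: run the plugin exactly as CRM would.
IPlugin plugin = new MyAccountPlugin();
plugin.Execute(serviceProvider);

// Assert: query the stubs or your fake repositories for the expected changes.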
EDIT:
For data access, the usual recommendation is to wrap data access into classes. The repository pattern is popular but overkill for what we need here. For your plugin's execution classes, you "inject" your mocked class at creation: a blank constructor that initializes the default repository, and a second constructor that takes an ITaskRepository.
public class MyPluginStep
{
    private ITaskRepository taskRepository;

    public MyPluginStep(ITaskRepository repo)
    {
        taskRepository = repo;
    }

    public MyPluginStep()
    {
        taskRepository = new DefaultTaskRepositoryImplementation();
    }

    public void MyExecuteMethod(/* plugin step parameters */)
    {
        Task task = taskRepository.GetTaskByContact(...);
    }
}
Depending on the complexity of your plugin steps, this can evolve into passing many repositories to each class and could become burdensome, but these are the basics you can add complexity to if required.
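In a unit test you would then pass a fake or stubbed repository into the first constructor, for example (sketch only, using the Rhino Mocks stubs already mentioned above):

var fakeRepository = MockRepository.GenerateStub<ITaskRepository>();
// set up fakeRepository's GetTaskByContact(...) to return canned test data here

var step = new MyPluginStep(fakeRepository);
// run the execute method and assert on what it did with the returned task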
One really good option would be to use a mocking library which deals with the mocks and fakes for you. I always ended up wasting a lot of time creating fakes or mocks by hand, so I created a library which does it for you. Try FakeXrmEasy.
I have a number of unit tests which rely on the presence of a CSV file. They will obviously throw an exception if this file doesn't exist.
Are there any Gallio/MbUnit methods which can conditionally skip a test from running? I'm running Gallio 3.1 and using the CsvData attribute:
[Test]
[Timeout(1800)]
[CsvData(FilePath = TestDataFolderPath + "TestData.csv", HasHeader = true)]
public static void CalculateShortfallSingleLifeTest()
{
    ...
}
Thanks
According to the answer in this question, you'll need to make a new TestDecoratorAttribute that calls Assert.Inconclusive if the file is missing.
Assert.Inconclusive is very appropriate for your situation because you aren't saying that the test passed or failed; you're just saying that it couldn't be executed in the current state.
What you have here is not a unit test. A unit test tests a single unit of code (it may be large though), and does not depend on external environmental factors, like files or network connections.
Since you are depending on a file here, what you have is an integration test. You're testing whether your code safely integrates with something outside of the control of the code, in this case, the file system.
If this is indeed an integration test, you should change the test so that you're testing the thing that you actually want tested.
If you're still considering this a unit test, for instance because you're attempting to test CSV parsing, then I would refactor the code so that you can mock/stub/fake out the actual reading of the CSV file contents. This way you can more easily provide test data to the CSV parser and not depend on any external files.
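For instance, if the goal really is to test the CSV parsing, the parser could take a TextReader instead of a file path, so the test can feed it an in-memory string (the CsvParser class here is purely illustrative):

using System.Collections.Generic;
using System.IO;
using MbUnit.Framework;

public class CsvParser
{
    // Accepting a TextReader keeps file access out of the parsing logic.
    public string[][] Parse(TextReader reader)
    {
        var rows = new List<string[]>();
        string line;
        while ((line = reader.ReadLine()) != null)
            rows.Add(line.Split(','));
        return rows.ToArray();
    }
}

[TestFixture]
public class CsvParserTests
{
    [Test]
    public void Parses_two_rows_from_an_in_memory_string()
    {
        var parser = new CsvParser();

        using (var reader = new StringReader("a,b\r\nc,d"))
        {
            var rows = parser.Parse(reader);
            Assert.AreEqual(2, rows.Length);
            Assert.AreEqual("b", rows[0][1]);
        }
    }
}

The unit test now never touches the disk; reading the real file becomes a thin piece of code that only the integration tests need to cover.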
For instance, have you considered that:
An AntiVirus package might not give you immediate access to the file
A typical programmer tool like TortoiseSvn integrates shell overlays into Explorer that sometimes hold on to files for too long and don't always give a program access to a file (you deleted the file and try to overwrite it with a new one? Sure, just let me finish the deletion first, but there is a program holding on to the file, so it might take a while...)
The file might not actually be there (why is that?)
You might not have read-access to the path
You might have the wrong file contents (leftover from an earlier debugging session?)
Once you start involving external systems like file systems, network connections, etc., there are so many things that can go wrong that what you have is basically a brittle test.
My advice: Figure out what you're trying to test (file system? CSV parser?), and remove dependencies that are conflicting with that goal.
An easy way would be to include an if condition right at the start of the test that only executes the rest of the test code if the CSV file can be found.
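A minimal sketch of that guard (reusing the TestDataFolderPath constant from the question):

[Test]
public void CalculateShortfallSingleLifeTest()
{
    var csvPath = TestDataFolderPath + "TestData.csv";
    if (!File.Exists(csvPath))
        return; // silently skip; Assert.Inconclusive(...) would at least flag it

    // ... the actual test code
}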
Of course this has the big drawback that the tests would be green although they haven't actually run or asserted anything.
I agree with Grzenio, though: if you have unit tests that rely heavily on external conditions, they're not really helping you. In this scenario you will never really know whether the unit test ran successfully or was just skipped, which contradicts what unit tests are actually for.
In my personal opinion, I would just write the tests so that they correctly fail when the file is not there. If they fail, that is an indicator that the file in question should be made available on the machine where the unit tests run. This might need some manual adjustment at times (getting the file onto the computer or server in question), but at least you have reliable unit tests.
In Gallio/MbUnit v3.2, the abstract ContentAttribute and its concrete derived types (such as [CsvData]) have a new optional parameter that allows you to change the default outcome of a test when an error occurs while opening or reading the file data source (ref. issue 681). The syntax is the following:
[Test]
[CsvData(..., OutcomeOnFileError = OutcomeOnFileError.Inconclusive)]
public void MyTestMethod()
{
    // ...
}
I know Visual Studio offers some unit testing goodies. How do I use them, how do you use them? What should I know about unit testing (assume I know nothing)?
This question is similar, but it does not address what Visual Studio can do; please do not mark this as a duplicate because of that. Posted as Community Wiki because I'm not trying to be a rep whore.
Easily the most significant difference is that the MSTest support is built into Visual Studio and provides unit testing, code coverage and mocking support directly. Doing the same types of things with the external (third-party) unit test frameworks generally requires multiple frameworks (a unit testing framework and a mocking framework) and other tools to do code coverage analysis.
The easiest way to use the MSTest unit testing tools is to open the file you want to create unit tests for, right-click in the editor window and choose "Create Unit Tests..." from the context menu. I prefer putting my unit tests in a separate project, but that's just personal preference. Doing this will create a sort of "template" test class, which will contain test methods that allow you to test each of the functions and properties of your class. At that point, you need to determine what it means for the test to pass or fail (in other words, determine what should happen given a certain set of inputs).
Generally, you end up writing tests that look similar to this:
string stringVal = "This";
Assert.IsTrue(stringVal.Length == 4);
This says that for the variable named stringVal, the Length property should be equal to 4 after the assignment.
The resources listed in the other thread should provide a good starting point for understanding what unit testing is in general.
The unit testing structure in VS is similar to NUnit in its usage. One interesting (and useful) feature of it does differ from NUnit significantly: VS unit testing can be used with code that was not written with unit testing in mind.
You can build a unit testing framework after an application is written, because the test structure allows you to externally reference method calls and use ramp-up and tear-down code to prep the test environment. For example: if you have a method within a class that uses resources that are external to the method, you can create them in the ramp-up code (which VS creates for you) and then test the method in the unit test class (also created for you by VS). When the test finishes, the tear-down code (yet again... provided for you by VS) will release resources and clean up. This entire process exists outside of your application and thus does not interfere with the code base.
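In MSTest terms, that ramp-up and tear-down maps onto attributes like these (a generic sketch, not tied to any particular application):

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ExternalResourceTests
{
    private static string _sharedResourcePath;

    [ClassInitialize] // ramp-up: runs once before any test in this class
    public static void RampUp(TestContext context)
    {
        _sharedResourcePath = Path.Combine(Path.GetTempPath(), "test-resources");
        Directory.CreateDirectory(_sharedResourcePath);
    }

    [TestMethod]
    public void MethodUnderTest_CanUseTheExternalResource()
    {
        Assert.IsTrue(Directory.Exists(_sharedResourcePath));
    }

    [ClassCleanup] // tear-down: runs once after all tests in this class
    public static void TearDown()
    {
        Directory.Delete(_sharedResourcePath, recursive: true);
    }
}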
The VS unit testing framework is actually very well implemented and easy to use. Best of all, you can use it with an application that was not designed with unit testing in mind (something that is not easy with NUnit).
The first thing I would do is download a copy of TestDriven.Net to use as a test runner. It adds a right-click menu that lets you run individual tests by right-clicking in the test method and selecting Run Test(s). This also works for all tests in a class (right-click in the class, but outside a method), a namespace (right-click on the project or in the namespace outside a class), or an entire solution (right-click on the solution). It also adds the ability to run tests with coverage (built-in or NCover) or with the debugger from the same right-click menu.
As far as setting up tests goes, I generally stick with one test project per project and one test class per class under test. Sometimes I will create test classes for aspects that run across a lot of classes, but not usually. The typical way I create them is to first create the skeleton of the class -- no properties, no constructor, but with the first method that I want to test. This method simply throws a NotImplementedException.
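Such a skeleton is nothing more than something like this (the class and method are just an example):

public class ShippingCalculator
{
    public decimal CalculateCost(decimal weightInKg)
    {
        // Fails until the first test drives out the real implementation.
        throw new NotImplementedException();
    }
}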
Once the class skeleton is created, I use the right-click Create Unit Tests in the method under test. This brings up a dialog that lets you create a new test project or select an existing one. I create, and name appropriately, a new test project and have the wizard create the classes. Once this is done, you may also want to create the private accessor functions for the class in the test project. Sometimes these need to be updated (recreated) if your class changes substantially.
Now you have a test project and your first test. Start by modifying the test to define a desired behavior of the method. Write just enough code to (barely) pass the test. Continue writing tests and writing code, specifying more behavior for the method until all of its behavior is defined. Then move on to the next method or class as appropriate, until you have enough code to complete the feature you are working on.
You can add more and different types of tests as required. You can also set up your source code control to require that some or all tests pass before check in.