I am new to unit testing, so I am probably misunderstanding something big, but I have been asked to create some unit tests for my WCF service. It's a very simple service that executes a stored procedure and returns the result. The second line in my operation is this:
string conn = ConfigurationManager
.ConnectionStrings["AtlasMirrorConnectionString"].ConnectionString;
Everything works fine when deploying the service, but under unit testing, it seems that the config file becomes invisible. ConfigurationManager.ConnectionStrings["AtlasMirrorConnectionString"] becomes a null reference and throws accordingly.
How do I include my config file in the tests? Right now, the only behavior I'm able to test is the handling of missing config files, which is not terribly useful.
This has been asked again and again, and answered by me last week and this week as well :)
If you have your unit tests in another project (a VS-generated test project, a class library, etc.), just create an app.config for that test project and put in it the same configuration keys you have in the project that works.
Of course I am simplifying, because you may well want to customize those keys with test-specific values; but as a start, copy what works, then customize if you want to point to another database, machine, etc. :)
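For example, a minimal app.config for the test project might look like this (the connection string value is a placeholder; copy whatever your service project actually uses):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <!-- Same key the service reads; the value here is a placeholder. -->
    <add name="AtlasMirrorConnectionString"
         connectionString="Data Source=.;Initial Catalog=AtlasMirror;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>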
If you want your unit tests to always have the same values as your project, you can use the following line as a post-build event in the test project:
copy /Y "$(SolutionDir)ProjectName\App.config" "$(TargetDir)TestProjectName.dll.config"
You'll have to decorate the test class or method with the DeploymentItemAttribute to deploy the configuration file into the test directory.
Use something like this on your TestClass (this assumes that you have a copy of the app.config local to your testclasses):
[DeploymentItem("app.config")]
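A minimal sketch of that suggestion (MSTest; the class and test names are placeholders):

using System.Configuration;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[DeploymentItem("app.config")]
public class AtlasServiceTests
{
    [TestMethod]
    public void ConnectionString_IsAvailable()
    {
        // Succeeds only if the deployed config contains the key the service reads.
        var cs = ConfigurationManager.ConnectionStrings["AtlasMirrorConnectionString"];
        Assert.IsNotNull(cs);
    }
}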
We have created a P2P file-sharing application in C#, and I want to write some integration tests that check, for example, whether a file downloaded correctly from the sender, and verify its size, download speed, etc.
I have tried sending a file and then checking it, but I cannot check the file without running the program.
Any idea how I can create some tests to check the data?
Let's start with the difference between an integration test and a unit test. Whereas a unit test exercises individual parts in isolation (usually achieved by mocking the dependencies), an integration test works against the broader (or full) system.
I'm assuming you want to run the integration tests in an automated fashion. Basically, you can run them with the unit test framework you're already using (for example NUnit); when creating the necessary services you don't use any mocks, but inject the actual dependencies.
How exactly you set up the "frame" of the integration test depends on your project; for example, if you are using an IoC library to inject dependencies, you might be able to use it within the integration tests as well, rather than having to set your services up by hand.
You also have to be careful about the fact that running integration tests might affect the system you're running them on. So if you're doing something on the file system, it's good practice to clean up after the test.
I would recommend creating some basic "framework" that fits your project for setting up integration tests; it would contain the generic code you need to run the tests against your system, and might, for example, create dedicated folders in a temp directory that are removed after every test run.
Now to your more concrete question: if I understand correctly, you would need to create a "sender" that provides the test file you want to download. As part of the test you could deploy this test file to the above-mentioned temp folder and configure the sender to provide that file. Then you could create the client that interacts with this sender and downloads the file to somewhere on your system.
Before you initiate the download you could record the time and figure out how long the download took. Additionally, after the download has finished, you could check the file's properties or compare it to the original file, which I assume should be identical.
Here is some pseudo-code showing the general concept:
[TestMethod]
[TestCategory("Integration")]
public void DownloadFileFromSender_ConnectionDoesNotGetInterrupted_SuccessfullyDownloadsFile()
{
    // Arrange - set up files and temp folders, create sender and receiver
    SetupTestFile(@"Tempfolder\Testfile.txt");
    var sender = new Sender(...);
    var receiver = new Receiver(...);
    sender.ProvideFile(@"Tempfolder\Testfile.txt");

    // Act - put your actual test here
    var timeBeforeDownload = DateTime.Now;
    receiver.DownloadFile(sender, @"Tempfolder\Testfile.txt", @"Tempfolder\DownloadedFile.txt");
    var totalDownloadTime = DateTime.Now - timeBeforeDownload;

    // Assert - verify your assumptions here, e.g. download time or file properties
    Assert.IsTrue(totalDownloadTime.TotalMilliseconds < 10000);
    Assert.IsTrue(File.Exists(@"Tempfolder\DownloadedFile.txt"));
}
Be aware that integration tests might take longer to set up and run, depending on the size/complexity of the parts you are testing. They do not replace unit tests but rather complement them. Because of this difference, it's also a good idea to tag them so that you can run just the unit tests or just the integration tests.
Again, the specifics of how to set up the test environment are up to you and depend heavily on the project you want to test.
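As one building block for such a "framework", here is a minimal sketch of a self-cleaning temp folder helper (the class name is made up for illustration):

using System;
using System.IO;

// Hypothetical helper: creates a dedicated temp folder per test and
// deletes it again when the test disposes it.
public sealed class TestTempFolder : IDisposable
{
    public string FolderPath { get; }

    public TestTempFolder()
    {
        FolderPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N"));
        Directory.CreateDirectory(FolderPath);
    }

    public void Dispose()
    {
        if (Directory.Exists(FolderPath))
            Directory.Delete(FolderPath, recursive: true);
    }
}

// Usage in a test:
// using (var temp = new TestTempFolder())
// {
//     var file = Path.Combine(temp.FolderPath, "Testfile.txt");
//     File.WriteAllText(file, "test data");
//     // ... run the sender/receiver against 'file' ...
// }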
Is it possible to determine which tests have been selected to execute before the test runner executes them? This would be helpful for local testing: our logic currently generates test data configuration for every test suite configuration at once, and being able to figure out which tests have been selected would let me generate test data configuration only for those tests.
This would be useful when writing tests and verifying that they work, since we only want to generate test data for the selected tests.
Right now we have to comment out code to stop it from running the test data configuration.
Thanks.
I think you are overthinking this a bit. Your setup work can be split across several levels: a whole-test-assembly setup that runs only once, a namespace-wide setup that runs once before any test in the namespace, the constructor of a test fixture, the start of an actual test, and so on.
If you are reusing a Docker instance and app pool for all the tests, then initialize them in the whole-assembly setup so that it is only done once. Then each test can add whatever data it needs before it starts. If some of that data is shared between tests, set up global flags to indicate what has been done already; if some data a test needs hasn't been set up yet, do the incremental additional setup before continuing with that test. This generally isn't required, though, if you organize your tests into namespaces properly and use the namespace-wide setup for fixtures, as sketched below.
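As a minimal sketch of those levels (NUnit-specific, with placeholder names): a [SetUpFixture] declared outside any namespace runs once for the whole assembly, while one inside a namespace runs once before any test in that namespace.

using NUnit.Framework;

// Declared outside any namespace: runs once for the entire assembly.
[SetUpFixture]
public class AssemblySetup
{
    [OneTimeSetUp]
    public void BeforeAnyTests()
    {
        // e.g. start the shared Docker instance / app pool here
    }

    [OneTimeTearDown]
    public void AfterAllTests()
    {
        // e.g. tear the shared infrastructure down again
    }
}

namespace MyProduct.Tests.Orders
{
    // Runs once before any test in the MyProduct.Tests.Orders namespace.
    [SetUpFixture]
    public class OrdersNamespaceSetup
    {
        [OneTimeSetUp]
        public void BeforeNamespaceTests()
        {
            // seed data shared by all fixtures in this namespace
        }
    }

    [TestFixture]
    public class OrderTests
    {
        [SetUp]
        public void BeforeEachTest()
        {
            // incremental, per-test data setup
        }

        [Test]
        public void PlacingAnOrder_Works()
        {
            Assert.Pass();
        }
    }
}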
In the Visual Studio IDE, I can create a unit test file with a unit test class for the code in a source file by right-clicking inside the code to be tested and selecting the option to create a unit test.
The testing code and the code to be tested then end up not only in different files, but also in different namespaces.
Is it possible to write the code to be tested and the testing code in the same source file?
If yes, how? Should I put them in the same or different namespaces?
Can you give some examples?
Thanks.
It is possible, but that also means that you deploy your tests with your code, as well as any mocks, dummy data, etc. All of this is unnecessary and may confuse anyone trying to use the library.
However, to answer the question: just use separate namespace blocks to keep the test classes in their own namespace.
namespace MyCompany.MyLibrary
{
    // classes
}

namespace MyCompany.MyLibrary.Test
{
    // tests, mocks, etc.
}
Yes, there are no restrictions on where the "code under test" comes from.
While it is somewhat strange, you can have just a unit test project and put the code you are testing next to your tests. If you want, they can even live in the same files, using the same or different namespaces of your choice (C# is not Java, and there is no connection between file name/location and namespace).
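For illustration, a minimal sketch of the same-file layout (MSTest; all names here are made up):

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyCompany.MyLibrary
{
    // The code under test, in the same source file as its tests.
    public static class Calculator
    {
        public static int Add(int a, int b) => a + b;
    }
}

namespace MyCompany.MyLibrary.Test
{
    [TestClass]
    public class CalculatorTests
    {
        [TestMethod]
        public void Add_ReturnsSum()
        {
            // Calculator resolves via the parent namespace.
            Assert.AreEqual(5, Calculator.Add(2, 3));
        }
    }
}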
Yes, but
If you put them in the same code base as the system under test, then you will be unable to deploy the system without also deploying the tests. Do you want the tests sitting on your production servers?
Also, the same app.config (or web.config, depending on your solution) will apply to both your tests and the system under test. That means you can't set up alternate configurations for things like AutoFac, which is normally handy for unit/isolation testing.
I have a web test, let's call it WebTestParent, that calls another web test, WebTestChild.
There is no problem when I run it from the IDE, but when I try running it from the command line using mstest, like this:
C:\MySolution> mstest.exe /testmetadata:"Tests.vsmdi" /test:"WebTestParent.webtest" /testsettings:"local.testsettings"
I get this error:
Cannot find the test 'WebTestChild' with storage 'C:\MySolution\somesubfolder\WebTestChild.webtest'.
The file local.testsettings has "Enable deployment" checked.
Did anyone experience this and maybe found a solution?
Thanks.
I am not familiar with web tests, but I have done this with unit tests. I believe your problem is not the call from one test to the other. Maybe your 'WebTestChild' (or both tests) is not in the 'TestList' in your 'Tests.vsmdi' file.
If you don't have a list for your tests, you should create one. Check here for more details.
I have a number of unit tests which rely on the presence of a CSV file. Obviously, they will throw an exception if this file doesn't exist.
Are there any Gallio/MbUnit mechanisms that can conditionally skip a test? I'm running Gallio 3.1 and using the CsvData attribute:
[Test]
[Timeout(1800)]
[CsvData(FilePath = TestDataFolderPath + "TestData.csv", HasHeader = true)]
public static void CalculateShortfallSingleLifeTest()
{
    // ...
}
Thanks
According to the answer in this question, you'll need to make a new TestDecoratorAttribute that calls Assert.Inconclusive if the file is missing.
Assert.Inconclusive is very appropriate for your situation because you aren't saying that the test passed or failed; you're just saying that it couldn't be executed in the current state.
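If you'd rather not write a custom decorator, a simpler (if less elegant) sketch is to guard at the top of a test that reads the file itself (this won't help with [CsvData], which fails before the test body runs; the path constant is assumed from the question):

[Test]
[Timeout(1800)]
public static void CalculateShortfallSingleLifeTest()
{
    const string path = TestDataFolderPath + "TestData.csv";
    if (!System.IO.File.Exists(path))
        Assert.Inconclusive("Missing test data file: " + path);

    // ... actual test body, reading the CSV itself ...
}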
What you have here is not a unit test. A unit test tests a single unit of code (it may be large though), and does not depend on external environmental factors, like files or network connections.
Since you are depending on a file here, what you have is an integration test. You're testing whether your code safely integrates with something outside of the control of the code, in this case, the file system.
If this is indeed an integration test, you should change the test so that you're testing the thing that you actually want tested.
If you're still considering this as a unit test, for instance because you're attempting to test CSV parsing, then I would refactor the code so that you can mock/stub/fake out the actual reading of the CSV file contents. This way you can more easily provide test data to the CSV parser and not depend on any external files, as sketched below.
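A minimal sketch of that refactoring, using a hypothetical CsvParser: by depending on a TextReader instead of a file path, tests can feed the parser an in-memory string.

using System.Collections.Generic;
using System.IO;

// Hypothetical parser: depends on a TextReader, not on the file system.
public class CsvParser
{
    public IList<string[]> Parse(TextReader reader)
    {
        var rows = new List<string[]>();
        string line;
        while ((line = reader.ReadLine()) != null)
            rows.Add(line.Split(','));
        return rows;
    }
}

// In production code:  parser.Parse(File.OpenText(path))
// In a unit test:      parser.Parse(new StringReader("a,b,c\n1,2,3"))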
For instance, have you considered that:
An AntiVirus package might not give you immediate access to the file
A typical programmer tool, like TortoiseSVN, integrates shell overlays into Explorer that sometimes hold on to files for too long and don't always give a program access to a file (you deleted the file and try to overwrite it with a new one? Sure, just let me finish the deletion first; but there is a program holding on to the file, so it might take a while...)
The file might not actually be there (why is that?)
You might not have read-access to the path
You might have the wrong file contents (leftover from an earlier debugging session?)
Once you start involving external systems like file systems, network connections, etc., there are so many things that can go wrong that what you have is basically a brittle test.
My advice: Figure out what you're trying to test (file system? CSV parser?), and remove dependencies that are conflicting with that goal.
An easy way would be to include an if condition right at the start of the test, so that the test body only executes if the CSV file can be found.
Of course this has the big drawback that tests would be green although they haven't actually run and asserted anything.
I agree with Grzenio, though: if you have unit tests that rely heavily on external conditions, they're not really helping you. In this scenario you will never really know whether the unit test ran successfully or was just skipped, which contradicts what unit tests are actually for.
In my personal opinion, I would just write the tests so that they correctly fail when the file is not there. A failure is then an indicator that the file in question needs to be made available on the machine where the unit tests run. This might require some manual adjustment at times (getting the file onto the computer or server in question), but at least you have reliable unit tests.
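For example, the guard from the earlier sketch, but failing outright instead of reporting Inconclusive (the path is assumed from the question):

if (!System.IO.File.Exists(TestDataFolderPath + "TestData.csv"))
    Assert.Fail("Required test data file is missing.");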
In Gallio/MbUnit v3.2, the abstract ContentAttribute and its concrete derived types (such as [CsvData]) have a new optional parameter that allows you to change the default outcome of a test when an error occurs while opening or reading the file data source (ref. issue 681). The syntax is the following:
[Test]
[CsvData(..., OutcomeOnFileError = OutcomeOnFileError.Inconclusive)]
public void MyTestMethod()
{
    // ...
}