I'm just getting started with Selenium, using VS2012 and C#. I'm not sure if using a separate testing framework such as NUnit or HtmlUnit is necessary. I tried out the Google search example that is available online in C# without using NUnit and it worked fine.
So my question at this point is, why would I need to use NUnit with Selenium?
You don't NEED to use NUnit (or any other unit testing framework) with Selenium if you don't want to. However, there are situations where using NUnit (or another framework) brings extra benefits. For example:
If you have existing unit tests, it keeps everything in one place (if that's the way you want to organise things).
If you already use NUnit (or your preferred unit testing framework) for unit tests, then you can re-use the same test runner (e.g. NUnit console, NUnit GUI, ReSharper, etc.), meaning you can run all your tests (NUnit and Selenium) with one button press or keyboard shortcut.
If you use continuous integration, it can run your Selenium tests through the existing NUnit (or whichever you prefer) test runner, which means you don't have to configure your continuous integration server separately for the Selenium tests.
The above assumes that you have unit tests already. If you don't already have unit tests, or you are only interested in the Selenium tests (for example, we have a development team and a tester: we write unit tests, the tester writes Selenium tests, and they are run independently of each other), then there is no need to add that extra layer.
Unit testing frameworks and Selenium test different things. Unit tests typically look at a single unit of code at a time (although in practice, I find this often spills out into adjacent units, especially if they are small and deterministic). Selenium looks at a web page (or series of pages) as a single test. A Selenium test needs a system with many of its components integrated together in order to run. It is therefore testing at a higher level, as it is checking many things at once (e.g. that the system can cope with requests, that the responses arrive back, that the responses contain the expected data, that pressing buttons on the page does the correct things, goes to the correct pages, etc.).
Ultimately, just because you can do something doesn't mean you should. Running Selenium tests through a unit testing framework is a convenience if you have to handle both. It may work for you, it may not.
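For illustration, here is a minimal sketch of the kind of thing this enables: the familiar Google search example hosted inside an NUnit fixture (assuming the Selenium WebDriver and NUnit packages):

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Sketch: a Selenium test hosted in NUnit, so the same runner
// that executes your unit tests can execute this one too.
[TestFixture]
public class GoogleSearchTests
{
    private IWebDriver _driver;

    [SetUp]
    public void StartBrowser()
    {
        _driver = new ChromeDriver();
    }

    [Test]
    public void SearchBox_IsPresentOnHomePage()
    {
        _driver.Navigate().GoToUrl("https://www.google.com");
        Assert.IsNotNull(_driver.FindElement(By.Name("q")));
    }

    [TearDown]
    public void StopBrowser()
    {
        _driver.Quit();
    }
}
```

Any NUnit runner (console, GUI, ReSharper, or a CI server) can then pick this test up exactly as it would a unit test.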
I'm currently using Selenium WebDriver with C#. I've also successfully executed my tests with RemoteWebDriver using Selenium Grid.
I configured 5 instances each of Firefox, Chrome and IE in my Grid settings, and when I ran my test project on the Chrome browser, I noticed that only one instance of Chrome was used. Is this the expected behavior? I initially assumed that the tests in a single project would be distributed across multiple browser instances based on maxInstances and maxSession, but I'm not sure why it uses just one browser instance for the whole project. Please let me know if I need to do anything to run the tests across more than one browser instance.
Unfortunately the standard NUnit runner doesn't support parallelization out of the box.
There are a few alternative unit testing frameworks that you might want to look into that do support parallel runs, like MbUnit or PNUnit.
One workaround is to split up your tests. Some common ways are by DLL, namespace, test name, or category. You could then run your NUnit tests in parallel using an MSBuild script.
The final command would look something like this: c:\proj> msbuild /m:8 RunTests.xml
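If you split by category, the test-side half is just attributes; here is a hedged sketch (the category names are made up), with each slice then run by its own nunit-console process:

```csharp
using NUnit.Framework;

// Hypothetical tests tagged by category; each category becomes one
// slice that a separate nunit-console process can run in parallel:
//   nunit-console Tests.dll /include:Smoke
//   nunit-console Tests.dll /include:Regression
[TestFixture]
public class CheckoutTests
{
    [Test, Category("Smoke")]
    public void HomePage_Loads()
    {
        // ...
    }

    [Test, Category("Regression")]
    public void Checkout_CompletesOrder()
    {
        // ...
    }
}
```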
Check out the answers to this question for more details: How can I run NUnit tests in parallel?
I'm currently doing some trainee work & looking for a bit of advice.
I've been set a pretty simple task of writing some unit tests for a service application in C#. The service mainly has methods which query and look things up in an SQL database. I've been asked to write unit tests for the majority of these methods, plus other things like simply checking the connection. I've been told to keep it simple and probably just write the tests in a console application. I'm wondering what the best way to go about this would be?
Would simply calling the methods from a console app with hardcoded input be suitable? Then just check what the output is and write to the console whether it passes? Or is this too simple and nasty?
You can run both MSTest and NUnit tests from the command line and in my opinion that would be far more preferable than writing your own test runner from scratch. Concentrate on writing quality tests, not the scaffolding required to execute them and deliver the results.
I would suggest it's as simple to do it "properly" as it is to craft your own solution.
Note though that tests connecting to the database are integration tests, not unit tests.
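For illustration, one of those lookup methods might be tested like this under NUnit; the service, method and connection string are hypothetical stand-ins for yours:

```csharp
using NUnit.Framework;

// The same hardcoded-input check the question describes, expressed as
// an NUnit test. CustomerLookupService and its method are hypothetical.
[TestFixture]
public class CustomerLookupServiceTests
{
    [Test]
    public void FindByName_ReturnsMatchingCustomer()
    {
        var service = new CustomerLookupService("your connection string here");
        var customer = service.FindByName("Smith");

        Assert.IsNotNull(customer);
        Assert.AreEqual("Smith", customer.LastName);
    }
}
```

Running nunit-console.exe YourTests.dll then executes it and prints a pass/fail summary, so there is no runner code for you to write or maintain.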
First of all, I'm new to testing software. I suppose I'm rather green. Still, for this project I need to make sure all the calculations are done right. For the moment I'm using unit tests to test each module in C# to make sure it does what it should do.
I know about some different testing methods (integration, unit testing), but I can't seem to find any good book or reference material on testing in general. Most of them focus on one specific area (like unit testing). This makes it really hard (for me) to get a good grasp on how best to test your software, and which method to use.
For example, I'm using GUI elements as well; should I test those? Should I just test them by using the application and visually confirming all is in order? I know it depends on whether it's critical or not, but at which point do you say "Let's not test that, because it's not really relevant/important"?
So, in summary: how do you choose which testing method is best, and where do you draw the line when using a method to test your software?
There are several kinds of tests, but IMHO the most important are unit tests, component tests, function tests and end-to-end tests.
Unit tests check whether your classes work as intended (e.g. with JUnit). These tests are the base of your testing environment, as they tell you whether your methods work. Your goal is 100% coverage here, as these are the most important part of your tests.
Component tests check how several classes work together in your code. A component can be a lot of things, but it is basically more than a unit and less than a full function. The goal coverage is around 75%.
Function tests test the actual functionality you implement. For example, if you want a button that saves some input data to a database, that is a functionality of the program, and that is what you test. The goal coverage here is around 50%.
End-to-end tests test your whole application. These can be quite heavyweight, and you likely can't and don't want to test everything; this test is there to check that the whole system works. The goal coverage is around 25%.
This is also their order of importance.
There is no such thing as "better" among these, though. Any test you can run to check whether your code works as intended is equally good.
You probably want most of your tests automated, so you can test while you're having a coffee break, or your servers can test everything while you are away from work and you can check the results in the morning.
GUI testing is considered the hardest part of testing, and there are several tools that help with it, for example Selenium for browser GUI testing.
Several architectural patterns, like Model-View-Presenter, try to separate the GUI part of the application for this reason, keeping the GUI as dumb as possible to avoid errors there. If you can successfully separate your presentation layer, you will be able to mock out the GUI part of the application and simply leave it out of most of the testing process.
For reference I suggest "Effective Software Testing" by Elfriede Dustin, but I'm not very familiar with the books on the subject; there could be better ones.
It really depends on the software you need to test. If you are mostly interested in the logic behind it (the calculations), then you should write some integration tests that test not just separate methods but the workflow from start to finish, so try to emulate what a typical user would do. I can hardly imagine a user calling one particular method; most likely they trigger some specific sequence of methods. If the calculations are OK, then the chance is low that the GUI will display them incorrectly. Besides, automating the GUI is a time-consuming process, and it will require a lot of skill and energy to build and maintain, as every simple change might break everything. My advice is to start by writing integration tests with different values that cover the most common scenarios.
This answer is specific to the type of application you are involved in: WinForms.
For the GUI, use the MVC/MVP pattern. Avoid writing any code in the code-behind file. This will allow you to unit test your UI code; by UI code I mean the code that is invoked when a button is clicked, or whatever action needs to be taken when a UI event occurs.
You should be able to unit test each of your class files (except the UI code). This will mainly focus on state-based testing.
Also, you will be able to write interaction test cases for tests involving multiple class files. This should cover most of your flows.
So there are two things to focus on: state-based testing and interaction testing.
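A minimal sketch of both styles, assuming an MVP-style split; ILoginView, LoginPresenter and AuthService are hypothetical names, and Moq stands in for whatever mocking library you prefer:

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class LoginTests
{
    // State-based: exercise a plain class and assert on the result.
    [Test]
    public void AuthService_RejectsEmptyPassword()
    {
        var auth = new AuthService();

        Assert.IsFalse(auth.Validate("alice", ""));
    }

    // Interaction-based: verify the presenter drives the view correctly
    // when the "login" UI event fires; no real form is involved.
    [Test]
    public void Presenter_ShowsError_WhenLoginFails()
    {
        var view = new Mock<ILoginView>();
        view.SetupGet(v => v.UserName).Returns("alice");
        view.SetupGet(v => v.Password).Returns("wrong");

        var presenter = new LoginPresenter(view.Object, new AuthService());
        presenter.OnLoginClicked();

        view.Verify(v => v.ShowError(It.IsAny<string>()), Times.Once());
    }
}
```

The first test exercises plain state; the second verifies only the interaction between presenter and view, which is exactly what keeping code out of the code-behind buys you.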
The project I'm working on has a bunch of service-tier unit tests using Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute. I want to look into a tool that can automatically generate a web frontend for these tests. I don't care if I need to use some other framework like NUnit. I need some decent way to have an easy web frontend for looking at test results that also allows adding new tests easily.
After a bit of investigation I realised that we already have TeamCity for the builds. Do I need anything else to set up test browsing from TeamCity?
We use CruiseControl, NAnt, Subversion and NUnit together to provide continuous integration. Every commit triggers a build and runs all the unit tests. The CruiseControl dashboard shows build results, unit test results and code coverage for each build. Is that the kind of thing you are looking to do, or do you want some kind of web-based ad hoc test runner?
Continuous Integration systems normally let you do this and usually have a web front end.
I know that you could set this up using CruiseControl.NET (which is free); the other system that has been recommended to me in the past is TeamCity, so I'm sure that could do this too (and it's also free as long as you don't configure too many projects).
I've recently started reading The Art of Unit Testing, and the light came on regarding the difference between Unit tests and Integration tests. I'm pretty sure there were some things I was doing in NUnit that would have fit better in an Integration test.
So my question is, what methods and tools do you use for Integration testing?
In my experience, you can use (mostly) the same tools for unit and integration testing. The difference is more in what you test, not how you test. So while setup, code tested and checking of results will be different, you can use the same tools.
For example, I have used JUnit and DBUnit for both unit and integration tests.
At any rate, the line between unit and integration tests can be somewhat blurry. It depends on what you define as a "unit"...
Selenium along with JUnit for unit and integration testing, including the UI.
Integration tests are the "next level" for people passionate about unit testing.
NUnit itself can be used for integration testing (no tool change).
e.g. scenario:
A unit test was created using NUnit, with mocks where it touches the DB/API.
To turn it into an integration test, we do as follows:
instead of mocks, use the real DB
leads to data being inserted into the DB
leads to data corruption
leads to deleting and recreating the DB on every test
leads to building a framework for data management (a tool addition?)
As you can see, from #2 onwards we are heading into unfamiliar territory as unit test developers, even though the tool remains the same (a minimal sketch of steps 1 and 4 follows below).
leads to you wondering: why so much time for integration test setup?
leads to: shall I stop unit testing, as both kinds of tests take more time?
leads to: we only do integration testing
leads to: would all devs agree to this? (some devs might hate testing altogether)
leads to: since there are no unit tests, there is no code coverage
Now we are heading into issues with business goals and dev psychology...
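To make steps 1 and 4 concrete, here is a minimal NUnit sketch; OrderRepository, Order and TestDatabase are hypothetical names standing in for your own data-management framework:

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderRepositoryIntegrationTests
{
    private TestDatabase _db;

    [SetUp]
    public void RecreateDatabase()
    {
        // Step 4: drop and recreate the schema so every test starts
        // from a clean, known state instead of corrupted leftovers.
        _db = TestDatabase.DropAndRecreate("OrdersTest");
    }

    [Test]
    public void Save_ThenLoad_RoundTripsAnOrder()
    {
        // Step 1: a real database connection replaces the mock
        // that the original unit test used.
        var repository = new OrderRepository(_db.ConnectionString);
        repository.Save(new Order(42, 9.99m));

        Assert.AreEqual(9.99m, repository.Load(42).Total);
    }

    [TearDown]
    public void DropDatabase()
    {
        _db.Drop();
    }
}
```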
I think I answered your question a bit more than needed. Anyway, if you'd like to read more and wonder whether unit tests can be a danger, head to this.
1) Method: Test Point Metrics is the best approach in any environment. With this approach we can not only do unit and integration testing but also validate the requirements.
The time to write the Test Point Metrics is just after understanding the requirements.
A Test Point Metrics template is available here:
http://www.docstoc.com/docs/80205542/Test-Plan
Typically there are 3 kinds of testing:
1. Manual
2. Automated
3. Hybrid approach
The Test Point Metrics approach works in all of the above cases.
2) Tool:
The tool will depend on the requirements of the project; anyhow, the following are the best tools according to my R&D:
1. QTP
2. Selenium
3. AppPerfect
For a clearer answer about which tool, please specify your type of project.
I mostly use JUnit for unit testing, in combination with Mockito to mock/stub out dependencies, so I can test my unit of code in isolation.
Integration tests normally involve 'integration' with an external system/module like a database, message queue, framework, etc., so to test these your best bet would be to use a combination of tools.
For example, I use JUnit as well, but rather than mocking out the dependencies I actually configure those dependencies as if real calling code were using them. In addition, I test a flow of control, so that the methods are not tested in isolation as they are in unit testing, but together. Regarding database connectivity, I use an embedded database with some dummy test data, etc.
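A rough C# analogue of that embedded-database idea, since the rest of this page is C#: NUnit plus an in-memory SQLite database (via the System.Data.SQLite package) stand in for the JUnit setup described above, and the table and data are made up.

```csharp
using System.Data.SQLite;
using NUnit.Framework;

[TestFixture]
public class ReportQueryIntegrationTests
{
    private SQLiteConnection _connection;

    [SetUp]
    public void SeedDatabase()
    {
        // Embedded, in-memory database: real SQL, no external server,
        // seeded with dummy test data before every test.
        _connection = new SQLiteConnection("Data Source=:memory:");
        _connection.Open();
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText =
                "CREATE TABLE Orders (Id INTEGER, Total REAL);" +
                "INSERT INTO Orders VALUES (1, 9.99);" +
                "INSERT INTO Orders VALUES (2, 20.01);";
            cmd.ExecuteNonQuery();
        }
    }

    [Test]
    public void SumOfTotals_CoversAllSeededRows()
    {
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText = "SELECT SUM(Total) FROM Orders";
            Assert.AreEqual(30.0, (double)cmd.ExecuteScalar(), 0.001);
        }
    }

    [TearDown]
    public void CloseDatabase()
    {
        _connection.Dispose();
    }
}
```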