I am programming a simple control library that checks user input from a text box and formats the input. I want to write unit tests for it. How can I do that?
I would take a look at this SO question. There are some good responses there. Marek Grzenkowicz's CodeProject article has some information on unit testing a TextBox he developed.
Edit:
Testing the UI can be a challenge and I generally try to pull as much out of the UI as I can and put it into a more testable class. Obviously, you want your unit tests to be run without any need for user interaction, so if your class method takes in the input string and formats it, you can write a test (using NUnit, MS Test, etc) to provide the input and test the actual output against the expected results.
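For example, if the formatting logic lives in its own class, a test can be as simple as the following NUnit sketch. The InputFormatter class and its trimming/uppercasing rules are hypothetical, just to illustrate the input/expected-output style:

```csharp
using NUnit.Framework;

// Hypothetical formatting class, pulled out of the control so it can run without any UI.
public class InputFormatter
{
    public string Format(string raw)
    {
        if (raw == null)
            return string.Empty;
        return raw.Trim().ToUpperInvariant();
    }
}

[TestFixture]
public class InputFormatterTests
{
    [Test]
    public void Format_TrimsWhitespaceAndUppercases()
    {
        var formatter = new InputFormatter();
        Assert.AreEqual("ABC", formatter.Format("  abc  "));
    }

    [Test]
    public void Format_ReturnsEmptyString_ForNullInput()
    {
        var formatter = new InputFormatter();
        Assert.AreEqual(string.Empty, formatter.Format(null));
    }
}
```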
I'd point you at NUnit and see if that does what you need. It is also easy enough to create new tests in Visual Studio under the Test -> New Test... menu item. I know this is available in VS 2008 Professional and assume it is available in other versions too.
Also, I'd take a look at this SO question about NUnit examples. There are some links to some examples there too.
Sorry to ask such a simple question. I am a newbie.
I am setting up my website. I tried to create a unit test project for it, but I simply cannot create a reference to it. Even though I have created a namespace for my class, Visual Studio doesn't let me use the method or see the namespace at all.
Is unit testing a website completely different from unit testing normal programs?
Is there any reference available online?
Thank you
You might want to consider using the new Fakes support in Visual Studio 2012. In short, you can detour calls to the web site API and have them routed to a delegate you supply. This allows you to isolate and test your code in a modular fashion.
You can find more detail at http://msdn.microsoft.com/en-us/library/hh549175.aspx
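As a rough sketch of the idea, assuming you have added a Fakes assembly for System.dll (right-click the System reference and choose "Add Fakes Assembly"), a shim lets you detour a call such as DateTime.Now inside a test:

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ClockDependentTests
{
    [TestMethod]
    public void CodeUnderTest_SeesTheShimmedDate()
    {
        // Shims only apply inside a ShimsContext.
        using (ShimsContext.Create())
        {
            // Every call to DateTime.Now made while the context is active
            // is detoured to this delegate.
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2012, 1, 1);

            // ...call your web site / business code here and assert on the result...
            Assert.AreEqual(new DateTime(2012, 1, 1), DateTime.Now);
        }
    }
}
```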
Regards,
Patrick Tseng
Visual Studio ALM team.
Use this article. If you are using MVC, use this question, and this site.
You've probably heard the old adage, "Better, faster, cheaper, pick any two." If you want something good and fast, it isn't going to be cheap, and if you want something fast and cheap, it isn't going to be very good. Cheaper, faster, and better means that we need to write more code more quickly, right? If only it were that simple. Learning to type faster might satisfy two of the requirements, but it isn't going to make the software you develop any better. So how do we make software better? What does "better" mean?
"Better" means producing flexible, maintainable software with a low number of defects; better software is all about long-term maintainability. To achieve this, a key design decision is to ensure that components are loosely coupled. Loosely coupled software has many benefits. One that stands out is that it improves our ability to test solutions. If we write software that can be broken down easily into small parts, it becomes easier to test. It sounds simple when you word it that way, but the amount of software in use today that is difficult to test or maintain shows that it is not as straightforward as we might like to think. Software needs to be coupled to do anything useful, but developers need tools and techniques to decrease coupling so that solutions are easier to test.
I assume that your website code is in one project, and your unit test code is in another project in the same solution? In order for code in one project to access the code of another project, you have to add a reference to the other project:
Right-click on the unit test project and click 'Add Reference...'
Open the 'Projects' tab, select the website project and click 'OK'.
You should not unit test your website. You should unit test your business layer code. Testing your website would probably be UI testing; you can use tools such as Selenium for this. I would recommend reading up on unit testing and unit testing frameworks (MSTest, NUnit etc.). You should layer your application so you can test your classes, objects, data access etc., all in absolute isolation.
Example:
A Manage Users page may show an administrator a list of all the users, filtered by some filter criteria.
The page should only handle the UI side of the problem; that is, it should only be concerned with displaying the data. It should send off a request to another object to get the list of users. Maybe something like a UserRepository which has a method called GetUsers that takes in the filter criteria.
This is where unit testing comes in. You would test (by mocking, perhaps) that, given a very specific list of users in the database and certain filter criteria, the GetUsers method returns the list of users you expect.
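A minimal sketch of what that test might look like, using hand-rolled fakes. All the type names here are hypothetical, and a mocking framework such as Moq could replace the hand-written fake:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical types, for illustration only.
public class User
{
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public interface IUserDataSource
{
    IEnumerable<User> GetAllUsers();   // in production this would hit the database
}

public class UserRepository
{
    private readonly IUserDataSource _dataSource;
    public UserRepository(IUserDataSource dataSource) { _dataSource = dataSource; }

    public IList<User> GetUsers(bool activeOnly)
    {
        var users = _dataSource.GetAllUsers();
        return (activeOnly ? users.Where(u => u.IsActive) : users).ToList();
    }
}

// Hand-rolled fake used only by the tests; no database involved.
public class FakeUserDataSource : IUserDataSource
{
    private readonly List<User> _users;
    public FakeUserDataSource(params User[] users) { _users = new List<User>(users); }
    public IEnumerable<User> GetAllUsers() { return _users; }
}

[TestFixture]
public class UserRepositoryTests
{
    [Test]
    public void GetUsers_WithActiveOnlyFilter_ReturnsOnlyActiveUsers()
    {
        var dataSource = new FakeUserDataSource(
            new User { Name = "Alice", IsActive = true },
            new User { Name = "Bob", IsActive = false });
        var repository = new UserRepository(dataSource);

        var result = repository.GetUsers(activeOnly: true);

        Assert.AreEqual(1, result.Count);
        Assert.AreEqual("Alice", result[0].Name);
    }
}
```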
In my project we have BDD tests which I wrote using SpecFlow, NUnit and WatiN. I run these tests from Visual Studio using ReSharper. Now I want to expose these features and scenarios to non-technical people and let them run the tests.
Something like this: I want to list all the tests in a browser, and users should be able to run those tests by clicking on them. Can this be achieved? Is there any add-in?
Currently we use Team Foundation Server as our build server.
TeamCity, a continuous integration server by JetBrains, provides this as web-based functionality. It even provides statistics and test output results.
It supports NUnit out of the box.
SpecFlow and Watin are supported with some configuration.
The BIGGEST problem you are going to have is that the plain-text feature file automatically gets converted to an xxx.feature.cs file by the SpecFlow Visual Studio plugin (that generated file is just plumbing that calls into your [Binding] step classes; see the sketch after this list). So your process is this:
Modify xxxx.feature file
Find some way to get the SpecFlow plugin to generate xxx.feature.cs
Compile
Run tests by using NUnit/Xunit (as configured)
Gather and present test success report
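For context, here is a minimal, hypothetical example of the kind of [Binding] class the generated xxx.feature.cs calls into, and which NUnit ultimately executes (the feature wording and formatting rule are made up for illustration):

```csharp
using NUnit.Framework;
using TechTalk.SpecFlow;

// Hypothetical step definitions for a feature such as:
//   Given the user has entered "  john smith  "
//   When the input is formatted
//   Then the result should be "JOHN SMITH"
[Binding]
public class FormattingSteps
{
    private string _input;
    private string _result;

    [Given(@"the user has entered ""(.*)""")]
    public void GivenTheUserHasEntered(string input)
    {
        _input = input;
    }

    [When(@"the input is formatted")]
    public void WhenTheInputIsFormatted()
    {
        _result = _input.Trim().ToUpperInvariant();
    }

    [Then(@"the result should be ""(.*)""")]
    public void ThenTheResultShouldBe(string expected)
    {
        Assert.AreEqual(expected, _result);
    }
}
```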
To me this process has a name; I'd call it development.
BDD however is a different process, it's all about collaboration and communication with the business in order to devise a specification. In the beginning there were no tools, but the process still worked.
A number of my co-workers have been using BDD techniques on a variety of real-world projects and have found the techniques very successful. The JBehave story runner – the part that verifies acceptance criteria – is under active development.
Dan North - Introducing BDD 2006
Don't get caught up on the tools alone or you'll miss the vital part of the process. You'll get so much benefit by working with your BA to define the new specification together collaboratively.
P.S. Another way to consider this is that the specification and code should always be in step. Just by defining a new example, we don't magically move the code forwards to meet that example. Instead the most common practice is to develop the code to meet the new example, and then check in the new specification and code as a single change set.
You can use the Pickles project to produce stakeholder-friendly documentation (including HTML) from the Gherkin specifications in your source control.
https://github.com/picklesdoc/pickles
There's no facility for running the tests from the HTML. It's open-source so perhaps you can extend it this way... however, I personally don't see the value in having non-technical users actually execute the specifications. I would have your continuous integration server run the SpecFlow tests and generate a step definition report periodically. The non-technical users can then browse to these reports to see current project status.
To give non-technical people access to your feature files, you can use http://www.speclog.net/
SpecLog will allow non-technical users to edit and create new features and will automatically synchronise them with TFS.
Unfortunately it's not free and you can't run the specs from that tool.
I am currently doing my write-up for my third year degree project. I created a system using C# which used a Microsoft Access as the back end database. The system does not connect to the internet, nor does it use the local network for any connectivity.
I am asking for the best method to test an application such as this, so that it is sufficiently tested.
You should implement the Repository Pattern, which will abstract the database code so that you can test the business logic while faking out the database calls.
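As a rough sketch of the idea (all type names are hypothetical; the real repository implementation would wrap your Access/OleDb calls):

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// The business logic depends only on this interface; the production
// implementation would query the Access database.
public interface IOrderRepository
{
    IEnumerable<decimal> GetOrderTotals(int customerId);
}

public class BillingService
{
    private readonly IOrderRepository _orders;
    public BillingService(IOrderRepository orders) { _orders = orders; }

    public decimal GetTotalSpend(int customerId)
    {
        return _orders.GetOrderTotals(customerId).Sum();
    }
}

// In-memory fake used only by the tests; no database involved.
public class InMemoryOrderRepository : IOrderRepository
{
    public IEnumerable<decimal> GetOrderTotals(int customerId)
    {
        return new[] { 10.00m, 2.50m };
    }
}

[TestFixture]
public class BillingServiceTests
{
    [Test]
    public void GetTotalSpend_SumsAllOrderTotals()
    {
        var service = new BillingService(new InMemoryOrderRepository());
        Assert.AreEqual(12.50m, service.GetTotalSpend(customerId: 42));
    }
}
```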
I don't know exactly what you're looking for or how loosely coupled your application is, but in my case most of the code (roughly 90%) is written so that it can be tested in unit tests, without any need to run the UI. The MVVM pattern is a good starting point for that, since it forces you to move code out of your UI into separate classes such as ViewModels and Commands, which can be unit tested.
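For example, here is a minimal, hypothetical view model and command that can be unit tested without touching any UI. The DelegateCommand shown is a hand-rolled stand-in for the RelayCommand/DelegateCommand that most MVVM libraries provide:

```csharp
using System;
using System.Windows.Input;
using NUnit.Framework;

// Minimal ICommand implementation so the sketch is self-contained.
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public DelegateCommand(Action execute, Func<bool> canExecute)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;
    public bool CanExecute(object parameter) { return _canExecute(); }
    public void Execute(object parameter) { _execute(); }
}

// Hypothetical view model: the "save" rule lives here, not in the code-behind.
public class CustomerViewModel
{
    public string Name { get; set; }
    public bool Saved { get; private set; }
    public ICommand SaveCommand { get; private set; }

    public CustomerViewModel()
    {
        SaveCommand = new DelegateCommand(
            () => Saved = true,
            () => !string.IsNullOrWhiteSpace(Name));
    }
}

[TestFixture]
public class CustomerViewModelTests
{
    [Test]
    public void SaveCommand_IsDisabled_WhenNameIsMissing()
    {
        var vm = new CustomerViewModel();
        Assert.IsFalse(vm.SaveCommand.CanExecute(null));
    }

    [Test]
    public void SaveCommand_Saves_WhenNameIsPresent()
    {
        var vm = new CustomerViewModel { Name = "Ada" };
        Assert.IsTrue(vm.SaveCommand.CanExecute(null));
        vm.SaveCommand.Execute(null);
        Assert.IsTrue(vm.Saved);
    }
}
```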
Separating things that way covers a lot already, and if you need to do automated UI testing, take a look at the Coded UI Tests available in Visual Studio 2010 (Premium and Ultimate only). They allow you to fully automate/simulate user interaction. In that scenario, you can do what Justin proposed: detach your application from the database and work with a repository.
You have to keep in mind that in order to write really testable code, you have to design your code to be testable. In my experience it is next to impossible to write unit tests for code written without testing in mind right from the start. The best thing you can probably do in that case is to write integration tests.
But in order to give more clear advice, we need more input.
Cheers
I am comfortable with recording Coded UI tests using VS2010 Ultimate.
The problem I am running into is that some of the UI controls being tested in the Silverlight app that we have built require data that is machine-specific.
If I run the tests on my machine they run great. However, my teammates also have to run the same tests on their own machines. The problem is that the Coded UI tests are recorded with my machine name as an input setting for certain text boxes in the application under test. Unless my teammates re-record the same test with their own machine names, those tests will fail.
After some digging I saw that you can associate a CSV, Excel, database, or XML file to drive your Coded UI tests. However, all the examples on MSDN and elsewhere only show pre-configured answer files, and most of them are in CSV format.
What goes in the answer file, and how can I create one of my own in XML format to drive the values entered into the text boxes when the Coded UI test replays?
Any links and guidance would be greatly appreciated!
Disclaimer: Not a fan of CodedUI.
Link1 - Creating a data-driven CodedUI test
It should be possible to use record-n-replay to generate the CodedUI test. Then make the modifications listed above to drive it with inputs from an input-set.
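For the XML case, the "answer file" is just rows of named values. A hedged sketch of what the wiring might look like follows; the file name, column name and UIMap member names are all hypothetical, while [CodedUITest], [DataSource] and [DeploymentItem] are the standard attributes:

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// MachineSettings.xml (hypothetical), deployed alongside the test:
// <Rows>
//   <Row><MachineName>BUILD-AGENT-01</MachineName></Row>
//   <Row><MachineName>QA-DESKTOP-02</MachineName></Row>
// </Rows>

[CodedUITest]
public class MachineNameTests
{
    // MSTest injects the current data row through this property.
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DeploymentItem("MachineSettings.xml")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
                "|DataDirectory|\\MachineSettings.xml",
                "Row",
                DataAccessMethod.Sequential)]
    public void EnterMachineName_UsesValueFromXml()
    {
        string machineName = TestContext.DataRow["MachineName"].ToString();

        // Feed the data-driven value into the recorded action instead of the
        // hard-coded machine name, e.g. (names depend on your generated UIMap):
        // this.UIMap.EnterMachineNameParams.MachineNameTextBoxText = machineName;
        // this.UIMap.EnterMachineName();
    }
}
```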
However, I'm not sure if re-recording the test would obliterate your modifications... You'd have to try this out and see. If you need this, I'd advise using CodedUI in scripting mode (instead of record-and-replay).
Separate the business logic from the UI and you won't have problems with functionality/behavior testing of the UI bits. That also solves the data issue. As for testing the UI bits, there are a couple of ways of handling this. One relatively simple method is to bootstrap an IoC container with mocks and set up UI tests on top of the mocked data.
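As a rough illustration of that bootstrapping idea, using Autofac purely as an example container and hypothetical service types:

```csharp
using Autofac;

// Hypothetical service the UI depends on.
public interface IProductCatalog
{
    string[] GetProductNames();
}

// Canned implementation returning predictable data for UI tests.
public class FakeProductCatalog : IProductCatalog
{
    public string[] GetProductNames()
    {
        return new[] { "Widget", "Gadget" };
    }
}

public static class TestBootstrapper
{
    // Build the same container the application uses, but swap the real services
    // for fakes so the UI runs against known data instead of a live back end.
    public static IContainer BuildWithFakes()
    {
        var builder = new ContainerBuilder();
        builder.RegisterInstance(new FakeProductCatalog()).As<IProductCatalog>();
        // ...register the rest of the application's real components here...
        return builder.Build();
    }
}
```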
If you want to get into more automated UAT testing, there are tools for that. I'm not sure about Silverlight/WPF per se (as I don't spend much time in either, since I move all business logic out of the UI bits), but I would imagine there has to be one.
First of all, I'm new to testing software. I suppose I'm rather green. Still, for this project I need to make sure all the calculations are done right. For the moment I'm using unit tests to test each module in C# to make sure it does what it should do.
I know about some different testing methods (integration testing, unit testing), but I can't seem to find any good book or reference material on testing in general. Most of them focus on one specific area (like unit testing). This makes it really hard (for me) to get a good grasp of how best to test software, and which method to use.
For example, I'm using GUI elements as well, should I test those? Should I just test it by using it and then visually confirm all is in order? I do know it depends on whether it's critical or not, but at which point do you say "Let's not test that, because it's not really relevant/important"?
So, in summary: how do I choose which testing method is best, and where do you draw the line when using that method to test your software?
There are several kinds of tests, but IMHO the most important are unit tests, component tests, function tests and end-to-end tests.
Unit tests check whether your classes work as intended (e.g. NUnit or JUnit). These tests are the base of your testing environment, as they tell you whether your methods work. Your goal is 100% coverage here, as these are the most important part of your tests.
Component tests check how several classes work together in your code. A "component" can be many things, but it is basically more than a unit and less than a full function. The goal coverage is around 75%.
Function tests test the actual functionality you implement. For example, if you want a button that saves some input data to a database, that is a piece of functionality of the program. That is what you test. The goal coverage here is around 50%.
End-to-end tests exercise your whole application. These can be quite heavy, and you likely can't and don't want to test everything; these tests are there to check whether the whole system works. The goal coverage is around 25%.
This is the order of importance too.
There is no such thing as a "better" kind of test, though. Any test you can run to check whether your code works as intended is equally good.
You probably want most of your tests automated, so you can test while you're having a coffee break, or your servers can test everything while you are away from work and you can check the results in the morning.
GUI testing is considered the hardest part of testing, and there are several tools that help with it, for example Selenium for browser GUI testing.
Several architectural patterns, like Model-View-Presenter, try to separate the GUI part of the application for this very reason and keep the GUI as dumb as possible to avoid errors there. If you can successfully separate your presentation layer, you will be able to mock out the GUI part of the application and simply leave it out of most of the testing process.
For reference I suggest "Effective Software Testing" by Elfriede Dustin, but I'm not very familiar with the books on the subject; there could be better ones.
It really depends on the software you need to test. If you are mostly interested in the logic behind it (the calculations), then you should write some integration tests that test not just separate methods but the workflow from start to finish, so try to emulate what a typical user would do. I can hardly imagine a user calling one particular method; most likely it is some specific sequence of methods. If the calculations are OK, then the chance is low that the GUI will display them incorrectly. Besides, automating the GUI is a time-consuming process, and it will require a lot of skill and energy to build and maintain, as every simple change might break everything. My advice is to start with writing integration tests with different values that cover the most common scenarios.
This answer is specific to the type of application you are involved in - WinForms
For the GUI, use the MVC/MVP pattern. Avoid writing any code in the code-behind file. This will allow you to unit test your UI code. By UI code I mean the code that is invoked when a button is clicked, or whatever action needs to be taken when a UI event occurs.
You should be able to unit test each of your class files (except UI code). This will mainly focus on state-based testing.
Also, you will be able to write interaction test cases for tests involving multiple class files. This should cover most of your flows.
So there are two things to focus on: state-based testing and interaction testing.
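To illustrate both ideas, here is a minimal, hypothetical MVP setup with hand-rolled test doubles: the presenter's behaviour is checked through the interaction it has with the view (via a spy), while the service stub supplies known state. A mocking framework such as Moq or Rhino Mocks would work just as well as the hand-written doubles:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical MVP shapes: the form implements IUserView and contains no logic
// beyond forwarding events to the presenter.
public interface IUserView
{
    void ShowUsers(IList<string> userNames);
}

public interface IUserService
{
    IList<string> GetUserNames();
}

public class UserPresenter
{
    private readonly IUserView _view;
    private readonly IUserService _service;

    public UserPresenter(IUserView view, IUserService service)
    {
        _view = view;
        _service = service;
    }

    public void OnLoad()
    {
        _view.ShowUsers(_service.GetUserNames());
    }
}

// Hand-rolled test doubles.
public class SpyUserView : IUserView
{
    public IList<string> ShownUsers;
    public void ShowUsers(IList<string> userNames) { ShownUsers = userNames; }
}

public class StubUserService : IUserService
{
    public IList<string> GetUserNames() { return new List<string> { "Alice", "Bob" }; }
}

[TestFixture]
public class UserPresenterTests
{
    [Test]
    public void OnLoad_PushesUsersFromServiceToView()   // interaction-style test
    {
        var view = new SpyUserView();
        var presenter = new UserPresenter(view, new StubUserService());

        presenter.OnLoad();

        Assert.IsNotNull(view.ShownUsers);
        Assert.AreEqual(2, view.ShownUsers.Count);
    }
}
```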