I'm trying to find a way to insert new generic test cases into TFS through C#. These are the same ones you can create in Visual Studio, so I was hoping there was some way to do this with the TFS API. Any hints or suggestions are greatly appreciated.
Thanks!!
The key difference Ewald is pointing to is that there are Test Case work items (logical sets of tests you need to execute as a piece of recorded work) and physical tests that can verify behavior. A Generic Test is an artifact that executes another tool and verifies the results of that tool. It really has little direct relationship to TFS. These are things you add to a Visual Studio test project and can, but are not required to, place in source control (TFS or otherwise).
You can likely use an existing Generic Test file as a template for automating this process if you have the need.
Follow these instructions http://www.ewaldhofman.nl/post/2009/12/11/TFS-SDK-2010-e28093-Part-5-e28093-Create-a-new-Test-Case-work-item.aspx to create a new Test Case work item in TFS.
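If you end up doing that from code, a minimal sketch using the TFS work item tracking client might look like this (the server URL and project name are placeholders for your own):

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class TestCaseCreator
{
    static void Main()
    {
        // Placeholder collection URL; substitute your own server.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // "MyProject" is a placeholder project name.
        var project = store.Projects["MyProject"];
        var testCaseType = project.WorkItemTypes["Test Case"];

        var testCase = new WorkItem(testCaseType);
        testCase.Title = "Generated test case";
        testCase.Save(); // creates the work item on the server
    }
}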
Has anyone attempted something similar to "find inheritors" or "find references" across branches? I'm working with a code base that includes multiple release branches, some of which differ from the main line.
Fortunately, the base code is no different, but there may be classes that inherit that base code. When making redesign decisions, it would be helpful to know which branches might be affected by a redesign. Has anyone run into this in the past and/or knows a good direction in which to point me to find more information on the topic?
I'm working with C#, Visual Studio, and an SVN repository, but if someone has run into this in other code repositories, other languages, or other technologies, the same concepts should apply.
Thanks in advance for the help!
Your question is similar to finding the difference between branches in TFS. With TFS, we can find the difference in VS directly by right-clicking on the file or folder in Source Control Explorer and choosing Compare. Or we can use the command-line tool tf.exe to get the difference:
tf diff[erence] itemspec [/version:versionspec] [/type:filetype]
[/format:format [/ignorespace] [/ignoreeol] [/ignorecase] [/recursive]
[/options][/noprompt][/login:username,[password]]
You may check whether SVN has a similar command-line tool.
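SVN does, in fact: svn diff accepts two branch URLs and compares them directly. For example (the server and branch paths here are hypothetical):

svn diff http://svnserver/repo/branches/release-1.0 http://svnserver/repo/trunk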
First of all, I'm new here and new to SpecFlow. I'll try to be as clear as possible because I'm still exploring ways to solve my problems, so please bear with me :)
Alright, here I go. I have a solution (let's call it DBHelper) that does a few operations on a database, and I want to provide a tool in BDD (using SpecFlow) to determine and set up a test suite using TestRail that will run automatically. These tests could be a set composed of a single scenario run several times but with different values. I'm still very early in the development of this tool, so the version I have right now is connected to DBHelper and does a single operation when I run either SpecRun or NUnit.
Here is my scenario:
Scenario: InsertBuildCommand
Given The build name is AmazingTest
And The build type is Test
And The platform is PC
And The number of files in the build is 13
And Each file is 8 MB
And The project code name is UPS
And The studio code name is MTL
And The environment is TEST
When The command executes
Then The build should be inserted in the DB with the correct files in it
Now I am looking for a way to make the scenario dynamic. I ultimately want the user to be able to run the scenario with their own choice of values (e.g. the name of the build would be MoreAmazingTest) without being in VS. I know you can run SpecRun from the command line, but I am clueless as to how to close the gap between the originally hardcoded scenario values and the user input. The steps contain regular expressions where it is useful, so it really is just about the scenario values.
Someone told me about coding a custom plugin or reverse engineering SpecRun and making a modified version of it, but I have no idea how that would help me. Pardon me if it doesn't all make sense; I'm not an expert :x
Thanks a lot!
If I understand your question properly, you can use a Scenario Outline rather than a plain Scenario. Scenario Outline help
You would then have something like this:
Scenario Outline: test using multiple examples
Given I do something
When I enter <numbers>
And I click a button
Then I will have an answer
Examples:
|numbers|
|1 |
|2 |
|3 |
It will then run the same scenario for each example given.
One way is to define some kind of configuration file which the step definitions read and run the tests against. After you change the file you can run the tests however you want, from the command line or VS, and they will read the file and get the values from there.
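A minimal sketch of that idea, assuming a hypothetical key=value file named testvalues.txt next to the test assembly:

using System.Collections.Generic;
using System.IO;
using TechTalk.SpecFlow;

[Binding]
public class ConfiguredBuildSteps
{
    // testvalues.txt is a hypothetical file of key=value pairs, e.g.
    //   BuildName=MoreAmazingTest
    //   FileCount=13
    private static readonly Dictionary<string, string> Config = LoadConfig("testvalues.txt");

    private static Dictionary<string, string> LoadConfig(string path)
    {
        var values = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split(new[] { '=' }, 2);
            if (parts.Length == 2) values[parts[0].Trim()] = parts[1].Trim();
        }
        return values;
    }

    [Given(@"The build name comes from the configuration file")]
    public void GivenTheBuildNameComesFromTheConfigurationFile()
    {
        string buildName = Config["BuildName"];
        // ... feed buildName into the command under test
    }
}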
I use environment variables for that.
But if you really need arguments, you could also create an .exe (console app) which uses SpecFlow/NUnit/etc. to pass the command-line arguments to your classes.
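For the environment-variable route, a rough sketch of a step definition that reads a made-up BUILD_NAME variable and falls back to a default:

using System;
using TechTalk.SpecFlow;

[Binding]
public class EnvironmentDrivenSteps
{
    [Given(@"The build name is taken from the environment")]
    public void GivenTheBuildNameIsTakenFromTheEnvironment()
    {
        // BUILD_NAME is a hypothetical variable, set before the run with e.g.
        //   set BUILD_NAME=MoreAmazingTest
        string buildName = Environment.GetEnvironmentVariable("BUILD_NAME") ?? "AmazingTest";
        // ... use buildName in the command under test
    }
}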
I have one class which talks to a database.
I have integration tests which talk to the DB and assert the relevant changes. But I want those tests to be ignored when I commit my code, because I do not want them to be called automatically later on.
(I only use them during development for now.)
When I put the [Ignore] attribute on them they are not called, but code coverage drops dramatically.
Is there a way to keep those tests but not have them run automatically on the build machine, in a way that the fact that they are ignored does not influence the code-coverage percentage?
Whatever code-coverage tool you use most likely has some kind of CoverageIgnore attribute or something along those lines (at least the ones I've used do), so you just place that on the members that get called only from those tests and you should be fine.
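.NET itself ships an [ExcludeFromCodeCoverage] attribute (in System.Diagnostics.CodeAnalysis) that coverage tools honoring it, such as Visual Studio's, will respect. A small illustration, with a hypothetical repository class:

using System.Diagnostics.CodeAnalysis;

public class DbRepository
{
    // Excluded from coverage results by tools that honor the attribute.
    [ExcludeFromCodeCoverage]
    public void IntegrationOnlyOperation()
    {
        // ... code exercised only by the ignored integration tests
    }
}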
What you request does not quite make sense: code coverage is measured by executing your tests and logging which statements/conditions are executed. If you disable your tests, nothing gets executed and your code coverage goes down.
TestNG has groups, so you can specify to run only some groups automatically and keep the others for use outside of that. You didn't specify your unit-testing framework, but it might have something similar.
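In the .NET world, MSTest's [TestCategory] (or NUnit's [Category]) plays a similar role to TestNG groups. A sketch of how that might look:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DatabaseIntegrationTests
{
    // On the build machine, exclude these with a test case filter, e.g.:
    //   vstest.console.exe Tests.dll /TestCaseFilter:"TestCategory!=Integration"
    [TestMethod]
    [TestCategory("Integration")]
    public void InsertBuild_WritesExpectedRows()
    {
        // ... test code that talks to the real database
    }
}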
I do not know if this is applicable to your situation, but spontaneously I am thinking of a setup where you have two solution files (.sln), one with unit/integration tests and one without. The two solutions share the same code and project files, with the exception that your development/testing solution includes your unit tests (which are built and run at compile time) and the other solution doesn't. Both solutions should be under source control, but only the one without unit tests is built by the build server.
This kind of setup should not require you to change existing code (much), which I would prefer over rewriting code to fit your test setup.
In my situation, I would like to find all the functions containing this function call:
DBase.CreateCommand();
but not containing the following call:
DBase.CloseCommand(cmd);
in the same function.
What I'm trying to do is find any unclosed database connections.
A tool, plugin, regex etc. are all welcome.
You could change the name of the method CreateCommand() to a different name, so that you get a bunch of errors at compile time and can check those errors one by one.
Or you can search by exact match: "CreateCommand();". Or, in Visual Studio, right-click on the method and select Find All References...
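If you want to automate the search, a rough sketch using Roslyn (the Microsoft.CodeAnalysis.CSharp package) can list methods that call CreateCommand but never CloseCommand. Note it matches call names purely textually, so wrappers and aliased calls will slip through:

using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class OpenCommandFinder
{
    static void Main(string[] args)
    {
        // Usage (hypothetical): OpenCommandFinder.exe MyClass.cs
        var root = CSharpSyntaxTree.ParseText(File.ReadAllText(args[0])).GetRoot();

        foreach (var method in root.DescendantNodes().OfType<MethodDeclarationSyntax>())
        {
            var calls = method.DescendantNodes()
                              .OfType<InvocationExpressionSyntax>()
                              .Select(i => i.Expression.ToString())
                              .ToList();

            // Report methods that open a command but never close one.
            if (calls.Any(c => c.EndsWith("CreateCommand")) &&
                !calls.Any(c => c.EndsWith("CloseCommand")))
            {
                Console.WriteLine(method.Identifier.Text);
            }
        }
    }
}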
You could try ReSharper or something like that: whilst it doesn't provide a compare (that I know of), it will make the calls easier to modify, as changes will ripple through, and it has a lot of neat tools. Barring that, Fortify scans and other code-analysis tools pick up unreleased resources such as DB connections, readers, etc. as well, although I believe they may be quite pricey!
Is there a built-in way to determine if a test is running on a dev machine (within Visual Studio) or on a builder (within the MSTest runner)? If there isn't, has anyone found a good solution to this?
I'm using data-driven tests and I want to be able to filter the data source on local dev machines, using a home-brewed UI, but I want the builder to run all the tests.
I've considered #if but that seems hacky. Any better solutions?
I'm successfully using environment variables and conditional compilation. MSBuild can easily translate environment variables into preprocessor symbols which you can use in your code. Your MSBuild file must include a translation like this:
<PropertyGroup Condition="'$(ENVIRONMENT_VARIABLE)' != '' ">
<DefineConstants>$(DefineConstants);ENVIRONMENT_VARIABLE</DefineConstants>
</PropertyGroup>
What this snippet does is check for the presence of ENVIRONMENT_VARIABLE and then append that variable to the existing DefineConstants list, which tells MSBuild which symbols to define for compilation.
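On the C# side you can then gate dev-only behavior on that symbol; a minimal sketch:

public static class TestDataFilter
{
    // Returns true only in builds where the MSBuild snippet above
    // defined the ENVIRONMENT_VARIABLE symbol.
    public static bool UseLocalFilter()
    {
#if ENVIRONMENT_VARIABLE
        return true;
#else
        return false;
#endif
    }
}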
Defining an environment variable only on your build server or only on your dev boxes (depending on what's easier) is a very lean and flexible way to achieve simple configuration. If you need more advanced strategies, config files may be the way to go. But be careful with introducing different build combinations; usually they create a lot of overhead and introduce a chance of breaking the build accidentally.
Whenever you can, avoid it.
When I've had to make unit tests behave differently on the build machine as opposed to dev machines, I've ended up using Environment.MachineName to determine whether the machine is the build server or not. Don't know if that'll be of any use to you.
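For illustration, something along these lines, with a made-up machine name:

using System;

public static class TestEnvironment
{
    // "BUILD01" is a made-up build server name; substitute your own.
    public static bool IsBuildServer
    {
        get
        {
            return string.Equals(Environment.MachineName, "BUILD01",
                                 StringComparison.OrdinalIgnoreCase);
        }
    }
}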
Whatever you do, I'd just make sure it's well documented so that your colleagues know about it.
You can use any number of local settings on the dev box that won't be present on the official build or test boxes, and read those to determine whether you need to switch to dev-box-specific behavior. For example: a file, an XML file, a registry key, or a preprocessor directive (as you mentioned).
As Robert Harvey suggested, application settings are another way to do it.
You can use an application setting.
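A sketch of that, with a hypothetical key that is simply absent on the build server so the default (run everything) applies:

using System.Configuration; // requires a reference to System.Configuration

public static class TestSettings
{
    // Hypothetical App.config entry, present only on dev boxes:
    //   <appSettings>
    //     <add key="FilterDataSource" value="true" />
    //   </appSettings>
    public static bool FilterDataSource
    {
        get
        {
            bool filter;
            return bool.TryParse(
                ConfigurationManager.AppSettings["FilterDataSource"], out filter) && filter;
        }
    }
}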