So at my job we have a core SpecFlow library that our different teams can use for their automation. This library has some declared steps.
For example, the library might have something like this:
When I click the button
However, let's say I want to define my own step declaration that uses that exact same wording. Is it possible to override it?
As #Grasshopper wrote, the step definitions are global.
But you could use Scopes to override it.
See http://www.specflow.org/documentation/Scoped-Bindings/
In this case, do not forget to add the tag to every scenario that needs the override, or the original step definition will be called.
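For illustration, a scoped override could look roughly like this (the @myOverride tag and the class name are made up here):

using TechTalk.SpecFlow;

[Binding]
public class MyTeamSteps
{
    // Only used in scenarios tagged with @myOverride; untagged scenarios
    // still resolve to the shared library's step definition.
    [When(@"I click the button")]
    [Scope(Tag = "myOverride")]
    public void WhenIClickTheButton()
    {
        // team-specific implementation goes here
    }
}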
It would be a very bad idea to do this, as any scenario that uses this step and then fails will be much harder to understand and debug.
In general, using generic library steps in scenarios is not such a good idea either. Scenarios should not contain generic steps or descriptions of HOW things are done. Instead they should contain steps specific to your business context, and these should describe WHAT is being done and WHY it's being done.
So instead of
When I click on sign in
And I fill in my email with ...
...
we get the much simpler and more abstract
When I sign in
which is all about WHAT we are doing, and nothing about HOW we are doing it.
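In SpecFlow the HOW then moves into the step definition itself; a rough sketch (the SignInPage page object and the credentials are invented for illustration):

using TechTalk.SpecFlow;

[Binding]
public class AuthenticationSteps
{
    [When(@"I sign in")]
    public void WhenISignIn()
    {
        // The HOW lives here in code, not in the scenario text.
        var signInPage = new SignInPage();        // hypothetical page object
        signInPage.Open();
        signInPage.FillEmail("user@example.com"); // placeholder credentials
        signInPage.FillPassword("secret");
        signInPage.Submit();
    }
}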
You will get a DuplicateStepException if you have the same step (in your case - When I click the button) defined twice, whether in the same step definition file or in another one, even if you use a Given or Then annotation. This is because step definitions are loaded globally, which results in a conflict.
Also, you cannot extend a file containing step definitions or hooks; Cucumber will throw an error saying this is not acceptable. So there is no way to override the behaviour through inheritance.
You will need to write a different step altogether or, if you are allowed to modify the library code, pass the button as a parameter to the existing step and put the logic in there (see the sketch below).
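If modifying the library is an option, a parameterised step might look something like this sketch (how the button is actually located is left as a placeholder):

using TechTalk.SpecFlow;

[Binding]
public class ButtonSteps
{
    // Matches e.g.: When I click the "Save" button
    [When(@"I click the ""(.*)"" button")]
    public void WhenIClickTheButton(string buttonName)
    {
        // Placeholder: look the button up by name/id and click it,
        // using whatever driver or page objects the library provides.
    }
}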
I am trying to rewrite an extremely ugly class in one application at work. In one of our classes, there are hundreds of lines of code that handle initialization and re-initialization of some classes. Currently, this is done in an awful brute-force way, where you write your init code and manually copy it to the re-init part (as they are very similar).
Because of this, I started to rewrite it as a list of delegates which are then called with a parameter (bool isReinit) in both places. Then I noticed that most of the delegates are also identical, as the initialization process of 90 percent of the classes is identical. This means that I should be able to create some default initialization function to simplify the code drastically. Currently I have created something like this:
https://dotnetfiddle.net/RVS5UT
I also created a class CustomInitializer which implements IInitializer, takes only one Func as a parameter and runs it in Initialize, for the cases where the initialization is very different.
Now, this is a simplified and anonymized piece of the code, but it works. The problem is that the whole approach is very awkward and the constructor signature is ugly as hell. Is there some way to simplify this? I can't find any pattern or approach that would help me. Any step towards better code is welcome and maybe I am just missing something.
There is also another catch. One solution I figured out would be to store the property pairs (var1a + var1b, var2a + var2b, ..) in an object and pass it directly to the Initialize method. But this would mean moving the properties, which is sadly not possible at the moment, because the file has over 18k lines and the code reviewers would kill me for changing a third of them because of a refactoring of one method (even if it's a long one). I need to leave the target properties (var1a, var1b, var2a, ..) where they are now. This could also mean that there is no elegant way to solve this.
I am using .NET 4.0, C# 5.0
EDIT: I have no access to the initialized types (another stupid catch)
Thanks for your help.
the file has over 18k lines
Wow, looks like a lot of fun.
It is absolutely good to try to improve it. And believe me, whatever your co-workers may think, there is nothing else to do here than refactor, unless this code does not need to evolve.
But it seems to me you are going down the path of complexity, trying to be DRY instead of trying to be expressive. The idea of having StandardInitializer and CustomInitializer managing lambdas is extremely complex. The initialization of a class should live in the class that is responsible for it. If some behaviors are really shared, they may share a base class or a collaboration class.
I recommend this discussion on Working Effectively With Legacy Code. As you'll see and probably already know, the first key point is to have tests.
Please don't try to refactor such a class without a test harness. Otherwise you'll introduce regressions, you'll be frustrated, and your co-workers will be confirmed in their view that nothing can be done here without breaking everything.
And don't forget: if tests are hard to create, it's because of bad code, not because tests are expensive. Bad code is expensive.
Once some tests protect you, try to think in terms of responsibility and life cycle. For example, in a WPF application it is a common issue to have "initializable" ViewModels because they do async web service calls to initialize themselves.
In this case, the object responsible for the lifecycle of a given ViewModel also has the responsibility to initialize it. If it manages several initializable view models, then this kind of code is fine:
foreach (var initializable in initializables)
{
initializable.Initialize();
}
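This assumes some small interface along these lines (the name is only illustrative), with initializables being any collection of such objects:

public interface IInitializable
{
    void Initialize();
}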
But please, whatever solution you choose, keep a clear separation between Initialize and Reinitialize (if they have things in common, make them call an internal shared function). It is a very bad idea to write stuff like:
init.Initialize(true);
It clearly states that the behavior of your Initialize function will change depending on a boolean value. If you have 2 behaviors, you should have 2 functions with clear names.
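A minimal sketch of that separation, with invented names:

public class ConnectionViewModel
{
    public void Initialize()
    {
        LoadCommonState();
        // first-time-only setup goes here
    }

    public void Reinitialize()
    {
        LoadCommonState();
        // re-initialization-only work (e.g. clearing caches) goes here
    }

    // The shared part lives in one private method instead of a bool flag.
    private void LoadCommonState()
    {
        // ...
    }
}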
My library has some methods whose return value should never be discarded. Leaking them is a very popular mistake even for me, the author. So I want the compiler to alert programmer when it does so.
Such a value may either be stored or used as an argument for another method. It's not strictly required to use the stored value, but if it's simply discarded it's 100% an error.
Is there any easy-to-set-up way to enforce this for my library's users?
var x = instance.Method(); // ok
field = instance.Method(); // ok
instance.OtherMethod(instance.Method()); // ok
MyMethod(instance.Method()); // ok, no need to check inside MyMethod
instance.Method(); // callvirt and pop - error!
I thought about writing an IL analyzer as a post-build event, but that feels overcomplicated...
If you implement Code Analysis / FxCop, the rule CA1806 - Do not ignore method results would cover this case.
See: How to Enable / Disable Code Analysis for Managed Code
Basically, it's as simple as going to the project properties, opening the Code Analysis tab, ticking the checkbox to enable Code Analysis on build and selecting which rules to error / warn on.
From there you can configure a ruleset file (this can either be one you share between libraries or something more global; if you have a build server, make sure it's stored somewhere the build can get to, i.e. with the source, not on a local machine).
The ruleset just needs CA1806 ("Do not ignore method results") enabled, set to Warning or Error as you prefer.
Nicolai's answer enables the ruleset for all types, but I needed this check only for my library's types (I don't want to force my library's users to apply the rule set to all of their code).
Using out everywhere, as suggested in the comments, makes the library too hard to use.
Therefore I've chosen another approach.
In the finalizer I check whether any method was called (that's enough for me to confirm usage). If not - InvalidOperationException. The object-creation StackTrace is optionally recorded and appended to the error message.
The user may call SetNotLeaked() to disable the check for a particular object and all internal objects recursively.
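A stripped-down sketch of the idea (names and details here are illustrative, not the exact library code):

using System;

public class Result
{
    private bool _used;
    private readonly string _createdAt = Environment.StackTrace; // optional, for the error message

    public void SomeMethod()
    {
        _used = true;   // any real call marks the value as consumed
        // ... actual work ...
    }

    public void SetNotLeaked()
    {
        _used = true;   // the real implementation also propagates this to inner objects
    }

    ~Result()
    {
        if (!_used)
            throw new InvalidOperationException(
                "Result was discarded without being used. Created at: " + _createdAt);
    }
}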
This is not a compile-time check but it will surely be noticed.
This is not a very elegant solution and it breaks some guidelines, but it does what I need, doesn't make the user wade through unnecessary warnings (the RuleSet solution) and doesn't affect code cleanliness (out).
For tests I had to make a base class where I set up an AppDomain.UnhandledException handler in the SetUp method and check in TearDown (after GC.Collect) whether any exception was thrown, because the finalizer is called from another thread and NUnit would otherwise show the test as passed.
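Roughly, the test base class looks like this (simplified sketch):

using System;
using NUnit.Framework;

public abstract class LeakCheckedTestBase
{
    private Exception _finalizerException;
    private UnhandledExceptionEventHandler _handler;

    [SetUp]
    public void SetUpLeakCheck()
    {
        _finalizerException = null;
        _handler = (s, e) => _finalizerException = (Exception)e.ExceptionObject;
        AppDomain.CurrentDomain.UnhandledException += _handler;
    }

    [TearDown]
    public void TearDownLeakCheck()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();   // give finalizers a chance to run
        AppDomain.CurrentDomain.UnhandledException -= _handler;
        if (_finalizerException != null)
            Assert.Fail("Leaked object detected: " + _finalizerException.Message);
    }
}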
I want to force the usage of an attribute, if another attribute is used.
If a special 3rd-party attribute is attached to a property, another attribute also needs to be applied to that property.
Is there any possibility of doing this?
For example:
[Some3rdPartyAttribute("...")]
[RequiredAttribute("...")]
public bool Example { get; set; }
should bring no compile error, while
[Some3rdPartyAttribute("...")]
public bool Example { get; set; }
should bring a compile error or warning.
The attribute itself is defined like the example from http://msdn.microsoft.com/en-US/library/z919e8tw(v=vs.80).aspx. But how can I force the usage of this attribute when the other attribute is used?
Unfortunately you cannot generate custom compiler warnings from attributes. Some attributes like System.ObsoleteAttribute will generate a warning or error, but this is hard-coded into the C# compiler. You should find another solution to your problem, maybe letting Some3rdPartyAttribute inherit from RequiredAttribute?
Otherwise you have to change the compiler.
Another option is using some AOP techniques, for example:
PostSharp.
Using it you can analyze your code at compile time and emit an error if some condition does not satisfy your requirements.
For a concrete example with attributes, have a look at:
PostSharp 2.1: Reflecting Custom Attributes
You can make a console app that iterates through all types in your assembly via reflection, checks whether the rule is satisfied, and returns 0 if it is, or outputs the violations and returns a non-zero error code if the rule is broken.
Then make this console app run as a post-build task.
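A rough sketch of such a checker (the attribute names follow the example in the question and are matched by name here, so the tool doesn't need a compile-time reference to them):

using System;
using System.Linq;
using System.Reflection;

class AttributeRuleChecker
{
    // Hypothetical post-build usage: AttributeRuleChecker.exe path\to\MyAssembly.dll
    static int Main(string[] args)
    {
        var assembly = Assembly.LoadFrom(args[0]);

        var violations = (
            from type in assembly.GetTypes()
            from prop in type.GetProperties()
            let attrs = prop.GetCustomAttributes(false)
            where attrs.Any(a => a.GetType().Name == "Some3rdPartyAttribute")
               && !attrs.Any(a => a.GetType().Name == "RequiredAttribute")
            select type.FullName + "." + prop.Name).ToList();

        foreach (var violation in violations)
            Console.Error.WriteLine("Missing [Required] on " + violation);

        return violations.Count == 0 ? 0 : 1;
    }
}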
As far as I know, there is no way to check for attributes at compile time.
I recently needed to enforce something similar (all classes derived from a certain base class need certain attributes). I ended up putting a manual check (with [Conditional("DEBUG")]) using reflection into the constructor of the base class. This way, whenever someone creates an instance of a class with missing attributes, they get an exception. But this might not be applicable in your case, if your classes do not all derive from the same class.
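A sketch of that pattern, with an invented attribute name (MyRequiredAttribute) standing in for whatever you actually require:

using System;
using System.Diagnostics;
using System.Linq;

public abstract class CheckedBase
{
    protected CheckedBase()
    {
        VerifyRequiredAttributes();
    }

    // The whole call is compiled away in release builds.
    [Conditional("DEBUG")]
    private void VerifyRequiredAttributes()
    {
        var type = GetType();   // the concrete derived type
        var hasRequired = type.GetCustomAttributes(false)
            .Any(a => a.GetType().Name == "MyRequiredAttribute"); // hypothetical attribute
        if (!hasRequired)
            throw new InvalidOperationException(
                type.FullName + " is missing the required attribute.");
    }
}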
You could write some code that runs on application start, uses reflection and throws runtime exceptions if an attribute was used without the proper match, but I believe that's as far as you can go, and personally I wouldn't consider it a good approach, as you would need to run the application once to make sure it complies with your rules.
Also, take a look at PostSharp which may help you.
How about using #warning + unit testing? This way, whenever you run the unit tests, a warning will be generated (or you could just use Debug.Fail instead of #warning).
I have at least 3 .feature files in my C# SpecFlow test project in which I have, for instance, the step:
Given I am at the Home Page
When I first wrote the step in the file Feateure1.feature and created the step method, I placed it in a step file, let's say, Steps1.cs, which inherits from a base class that initializes a FirefoxDriver. All my StepsXXXX.cs classes inherit from this base class.
Then, I wrote Feature2.feature, which also has the step Given I am at the Home Page. And the step was automatically bound to the one in Steps1.cs.
Till now, no problem. That's pretty much what I wanted - to have reusable steps throughout the test project. But the problem is, whenever I'm running a scenario that has steps in different StepsXXXX files, I get multiple browser instances running.
======
I'm pretty sure this is due to the fact that my StepsXXXX (binding) classes all inherit from this base class, which has an IWebDriver of its own, and when a step is called, everything else (including the before/after scenario methods) is called too. But I can't figure out how to work around this.
I still want reusable steps. I tried to put these steps in the base class, but it did not work.
I thought of changing the bindings too, but SpecFlow uses meaningful strings to do so, and I don't want to change them to misleading strings.
Has anyone stumbled across this?
Any help is really appreciated.
You can use scoped bindings with [Scope(Tag = "mytag", Feature = "feature title", Scenario = "scenario title")] to restrict a binding to a specific scenario or feature, like this:
Feature: Feateure1
Scenario: demo
Given I am at the Home Page
When ....
[Binding, Scope(Feature = "Feateure1")]
public class Steps1
{
    [Given(@"I am at the Home Page")]
    public void GivenIAmAtTheHomePage()
    {
    }
}
Feature: Feateure2
Scenario: demo
Given I am at the Home Page
When ....
...
[Binding, Scope(Feature = "Feateure2")]
public class Steps2
{
    [Given(@"I am at the Home Page")]
    public void GivenIAmAtTheHomePage()
    {
    }
}
The problem is that SpecFlow bindings don't respect inheritance. All custom attributes are considered global, and so all SpecFlow does is search for the classes with a [Binding], then build up a dictionary of all the [Given]/[When]/[Then]s so that it can evaluate them for a best match. It will then create an instance of the class (if it hasn't already done so) and call the method on it.
As a result your simple cases all stay in the Steps1 class, because it's the first perfect match. Your more complicated cases start instantiating more classes, hence multiple browsers. And your attempt to refactor won't work because your abstract base class doesn't have a [Binding] on it.
I'd probably start by flattening your whole step class hierarchy into one big AllSteps.cs class. This may seem counter-productive, but all you are really doing is arranging the code the way the current bindings already appear to your SpecFlow features. This way you can start to refactor out the overlaps between the different GWT bindings.
At the moment your bindings are arranged around the scenarios. What you will need to do is refactor them around your functionality. Have a read of Whose Domain is it anyway? before you start; it will probably give you some good ideas. Then have a look at Sharing-Data-between-Bindings in the SpecFlow documentation to work out how to link your new steps classes together.
I think this is a lot simpler than the question and answers here make it out to be. There are really two questions at play here: why you see multiple browsers, and how to share a single one.
AISki gave you the right answer in the link to the documentation about SpecFlow context, but it was not really presented as the answer, and there was distraction in presenting an inferior answer as the actual answer.
The answer to the behavior you see is that you should expect exactly what is happening with the way you set things up. If you have multiple binding classes that create browser instances (and you do, if they all have a common base that creates a browser instance) and they have matches in your features, you should expect multiple browser instances.
The answer for what you intend (a single browser shared among your steps) is that you should use the context feature of SpecFlow to control the dependency on a browser instance. This amounts to dependency injection. Your step definition classes should take a constructor dependency on something that creates your browser instance - SpecFlow manages dependencies for you, and you'll get a new instance for the first of your classes created and then the same one after that.
https://github.com/techtalk/SpecFlow/wiki/Sharing-Data-between-Bindings
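A rough sketch of that approach (the WebDriverContext class here is invented; any plain class SpecFlow can construct will do):

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

// SpecFlow creates one instance per scenario and hands the same
// instance to every step class that asks for it in its constructor.
public class WebDriverContext
{
    public IWebDriver Driver { get; private set; }

    public WebDriverContext()
    {
        Driver = new FirefoxDriver();
    }
}

[Binding]
public class HomePageSteps
{
    private readonly WebDriverContext _context;

    public HomePageSteps(WebDriverContext context)   // injected by SpecFlow
    {
        _context = context;
    }

    [Given(@"I am at the Home Page")]
    public void GivenIAmAtTheHomePage()
    {
        _context.Driver.Navigate().GoToUrl("http://example.com/"); // placeholder URL
    }
}

You would still want to close the driver at the end of the scenario, for example in an [AfterScenario] hook.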
I'm facing the same issue.
I wanted to have one feature file that calls steps in different .cs classes. The issue came up when I wanted to set up and tear down for each scenario.
Using the step class constructor and Dispose() is not possible, because a scenario uses more than one step class and I don't want to run the setup multiple times in one scenario.
Using [BeforeScenario] and [AfterScenario] in both step classes also makes the runner execute the before and after methods of both classes, so the setup runs twice.
So what I did was create a third class, called something like BrowserScenarioSetup, and put the before and after scenario hooks in it to set up a browser for the scenario and assign it to the ScenarioContext.Current dictionary. When the test runs, only one browser is created per scenario and I can use scenario steps defined in any class; they just use ScenarioContext.Current to get the browser instance.
I can make both step classes inherit from a base step class and create a short method to get the browser instance (or any shared instance created in setup), just to hide ScenarioContext.Current.
Finally, I can mark the hooks with [BeforeScenario("Browser", "IE")] and use @Browser and @IE on a feature or scenario to call this setup method only in the suitable context.
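A simplified sketch of that setup class (the driver choice and the dictionary key are just examples):

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class BrowserScenarioSetup
{
    [BeforeScenario("Browser", "IE")]
    public static void CreateBrowser()
    {
        // One browser per scenario, shared by every step class.
        ScenarioContext.Current["browser"] = new FirefoxDriver();
    }

    [AfterScenario("Browser", "IE")]
    public static void CloseBrowser()
    {
        var browser = (IWebDriver)ScenarioContext.Current["browser"];
        browser.Quit();
    }
}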
I have a .NET assembly in C#. I have both the binary and the source, and it has no logger, for example.
All I need is to insert a property that will be initialized with a specific logger. Then I need to add logger invocations in all methods. The first way is to manually write the property and its invocations. The second way is to write another class/method (I suppose in the same assembly) that will do it automatically.
Is it possible? Any suggestions?
I think it is possible, because it was one of the questions at an interview. But there is no proof that this is possible, and they may have wanted to hear "no, do this manually".
This is what we call, in architectural terms, a 'cross-cutting concern'. Logging is something that straddles many aspects of an application.
There are features to take care of it in the Microsoft Enterprise Library. The part you want is the Policy Injection Application Block. You can then specify, in the config, methods to match (based on method name/structure) and a function to be called. In this way you can include logging as a proper cross-cutting concern of your app, rather than something which must be manually coded into every method.
It is not possible to alter the execution of a method without altering the source code and recompiling. You could write a wrapper class that would expose all the classes and methods and that would first call your logger and then the original methods, but that's not what they asked.
So the answer to their question is: the first way is possible, the second isn't, and if you had to add logging support, you would need to add it to each method manually.