I recently moved to a project that exposes C# APIs. A CI/CD pipeline has been set up for continuous deployment, and the project has been in development for more than five years.
The project has a set of API automation tests configured in the CI/CD pipeline that run during deployment to validate that all the APIs are working. If any test fails, the API is not deployed.
Design and content of the API automation tests:
A C# test project using the Microsoft unit testing framework (MSTest).
It uses a REST client to call the APIs under test.
For each API verb (GET/POST/PUT/DELETE), a separate test method calls the actual API through the REST client (a sketch of one such method follows this list).
Each test calls the real API and performs database operations (CRUD), which slows down the test run and in turn delays deployment.
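For illustration, a minimal sketch of what one such test method might look like, assuming MSTest and HttpClient (the base address, route, and payload are hypothetical):

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CustomerApiTests
    {
        // Hypothetical test-environment base address; in a real suite this
        // would come from per-environment configuration.
        private static readonly HttpClient Client = new HttpClient
        {
            BaseAddress = new Uri("https://test-env.example.com/")
        };

        [TestMethod]
        public async Task PostCustomer_ReturnsCreated()
        {
            var payload = new StringContent(
                "{\"name\":\"Test Customer\"}", Encoding.UTF8, "application/json");

            // Calls the deployed API, which in turn writes to the real database.
            var response = await Client.PostAsync("api/customers", payload);

            Assert.AreEqual(HttpStatusCode.Created, response.StatusCode);
        }
    }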
Question:
Is it good practice to have API tests that perform CRUD operations by calling each and every API during deployment to the various environments?
We have unit tests for each API, written with Moq, which run when we raise a pull request to merge into master. We also have a minimal set of required integration tests. Given that, do we need a separate suite of API tests that performs CRUD operations during deployment?
We are trying to put best practices in place so that deployments are not blocked or overloaded by tests.
Please suggest any best practices for structuring the different types of tests for C# API projects.
Thanks in advance!
I am looking to run NUnit tests, but I do not want the tests to be data dependent.
For example: if I am running my unit tests on a testing server against a testing database and some user changes the database values,
that should not affect my test scenarios.
However, I want my test scenarios to exercise Oracle stored procedures.
Thanks, any help would be highly appreciated.
I am also open to any other tool that can achieve this.
If you are really hitting the database, this is not a unit test but an integration test.
Basically you have two options, each with its own pros and cons:
Stick with the integration tests, but ensure somehow that the data you are using is what you expect. You can achieve this with a stored procedure in your testing database that recreates your data when called; invoke that procedure in your test initialization and then do all of your testing (a sketch follows the two options). The main disadvantage is that the tests take more time and resources than unit tests.
The main advantage is that you can be sure your code integrates well with your database.
Choose real unit tests instead. In this option you do not use the database at all; you create in-memory objects that represent the data from your database.
Because you create these objects in the arrange part of your unit test, you know exactly what data they hold.
The main disadvantage here is that you cannot be sure your code integrates well with your database.
The main advantage is that the tests take less time and fewer resources than integration tests; moreover, they can run even when your testing database is down.
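For the first option, a minimal NUnit sketch, assuming an Oracle stored procedure named reset_test_data that re-seeds the test schema (the procedure name, connection string, and use of the ODP.NET managed driver are all assumptions here):

    using NUnit.Framework;
    using Oracle.ManagedDataAccess.Client;

    [TestFixture]
    public class OrderProcedureTests
    {
        // Hypothetical test-database connection string.
        private const string ConnectionString =
            "User Id=test;Password=test;Data Source=TESTDB";

        [SetUp]
        public void RecreateKnownData()
        {
            // Re-seed the schema so every test starts from the same data,
            // no matter what other users have changed in the meantime.
            using (var conn = new OracleConnection(ConnectionString))
            using (var cmd = new OracleCommand("reset_test_data", conn))
            {
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        [Test]
        public void GetOrderTotal_ReturnsSeededValue()
        {
            // Call the stored procedure under test here and assert
            // against the known seeded values.
        }
    }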
If you want, you can actually use both options; this is useful because each kind of test exercises your code from a different perspective.
More about unit tests vs. integration tests can be found here.
I have a situation. The web page that I am testing is developed using AngularJS and KnockoutJS.
The hotel search and other pages in the website are developed using AngularJS.
The hotel booking and payment pages are developed using KnockoutJS.
I am aware that the Jasmine framework is used to test KnockoutJS applications.
Can I use the Protractor framework in C# for both the AngularJS and KnockoutJS parts of the application?
Or is there another E2E testing framework for testing such web applications?
Protractor is for E2E testing and Jasmine is for testing JavaScript code. So if your requirement is UI-based testing, go with Protractor; otherwise go with Jasmine for code testing (a C# alternative is sketched after the comparison below).
Go through the following for more information on Protractor and Jasmine.
Protractor
➔ It is an open-source, end-to-end test framework built especially for AngularJS web applications.
➔ It was introduced with AngularJS 1.2 as a replacement for the existing E2E testing framework, 'Angular Scenario Runner'.
➔ It was built by a team at Google on top of WebDriverJS, using existing technologies such as Selenium and Node.js.
Jasmine:
Jasmine is a behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks.
Protractor vs. Jasmine:
➔ Protractor was developed for UI-based testing (E2E testing), whereas Jasmine is for testing JavaScript code.
➔ To develop E2E test scripts with Protractor, you need a BDD framework (Jasmine, Cucumber, or Mocha) alongside it for structuring the scripts, whereas testing JavaScript code with Jasmine requires no other framework.
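On the C# part of the question: Protractor itself is a Node.js tool, so it cannot be driven directly from C#. One alternative is plain Selenium WebDriver in C#, which can exercise both the AngularJS and the KnockoutJS pages through the browser. A minimal sketch (URLs and selectors are hypothetical):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using OpenQA.Selenium.Support.UI;

    class HotelBookingE2E
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                // AngularJS search page (hypothetical URL and element ids).
                driver.Navigate().GoToUrl("https://example.com/search");
                driver.FindElement(By.Id("destination")).SendKeys("Paris");
                driver.FindElement(By.Id("search-button")).Click();

                // Wait for asynchronous rendering before reading the results;
                // WebDriver has no built-in Angular synchronization like Protractor.
                var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
                wait.Until(d => d.FindElements(By.CssSelector(".hotel-result")).Count > 0);

                // KnockoutJS booking page: the same WebDriver API works here,
                // since it is framework-agnostic at the DOM level.
                driver.FindElement(By.CssSelector(".hotel-result .book")).Click();
            }
        }
    }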
We currently have a single database with users, customers, products and orders logically separated by schemas. Several MVC.net applications access the database via their own BLLs. Each of these applications has its own functionality and shares some aspects with some or all of the other applications.
Currently, some code is duplicated across these BLLs and it is a bit of a mess to maintain. It does, however, allow us to develop features quickly and deploy each application independently (assuming no major database work).
We have started to develop a single access layer, properly separated out, that sits above the database and is used by all of our MVC.net applications. Logically this makes sense, as we can now share code between our applications; for example, application A can retrieve a customer record in the same way as application B. The issue comes when we want to deploy an application: we can no longer deploy just one application, we would need to deploy them all.
What other architectural approaches could we consider that would allow us to share code between our applications while still deploying them independently?
A common solution is to factor out services (over a communication layer of your choice: REST, WCF, a message bus, with versioning) and deploy them to your infrastructure as standalone services.
Now you can evolve, scale and deploy your services independently of their consumers. Instead of deploying all applications, you only have to deploy the changed services (side by side with the old versions) and the updated application.
This adds quite a lot of complexity around service versioning, configuration management, integration testing, a little communication overhead, etc., so you have to balance the pros and cons. There are quite a few articles on the net about how to build such an architecture.
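As a rough sketch of what such a factored-out service might look like, assuming ASP.NET Web API 2 with attribute routing (the route, model, and repository are illustrative only):

    using System.Web.Http;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Stand-in for the real data access layer.
    static class CustomerRepository
    {
        public static Customer Find(int id)
        {
            return id == 42 ? new Customer { Id = 42, Name = "Sample" } : null;
        }
    }

    // Deployed on its own; every MVC.net application calls it over HTTP
    // instead of linking a shared BLL assembly.
    public class CustomersController : ApiController
    {
        // GET api/v1/customers/42 -- the version segment in the route lets
        // old and new service versions run side by side during a rollout.
        [Route("api/v1/customers/{id:int}")]
        public IHttpActionResult GetCustomer(int id)
        {
            var customer = CustomerRepository.Find(id);
            if (customer == null)
                return NotFound();
            return Ok(customer);
        }
    }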
I need to run some performance tests over a solution that uses WCF services, serialization and a lot more (.NET Framework 4.0), deployed on several machines.
Using a Visual Studio 2010 test project, would it be possible to configure an option to point at the different machines where the solution is installed?
Would WCF cause any problems if I tried to test against it using a particular binding?
I would create a test project, add references to the WCF services you want to test, and simply start testing (a sketch follows). If you want something for complete end-to-end performance testing, you can look at Microsoft's performance testing suite (you can use it for load/stress testing, etc.).
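A minimal sketch of that idea, assuming a service contract IOrderService and a per-machine endpoint address (both hypothetical); swapping the address lets the same test target each machine in the deployment:

    using System.ServiceModel;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [ServiceContract]
    public interface IOrderService // stand-in for the real shared contract
    {
        [OperationContract]
        int GetOrderCount();
    }

    [TestClass]
    public class OrderServicePerfTests
    {
        [TestMethod]
        public void GetOrderCount_RespondsWithinBudget()
        {
            // Point the factory at whichever machine hosts the deployment
            // under test; the binding here must match the service's binding.
            var factory = new ChannelFactory<IOrderService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://machine-a:8080/OrderService"));

            IOrderService client = factory.CreateChannel();

            var watch = System.Diagnostics.Stopwatch.StartNew();
            client.GetOrderCount();
            watch.Stop();

            Assert.IsTrue(watch.ElapsedMilliseconds < 500,
                "Call exceeded the 500 ms budget.");
            factory.Close();
        }
    }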
What you are looking for is called "Lab Management". Those features are built into Team Foundation Server with Visual Studio 2010 and newer.
Depending on how complicated a setup you want, you may also need a server running Hyper-V and a license for System Center Virtual Machine Manager (SCVMM); it is not cheap.
I am trying to write a suite of automated integration tests to test my C# client library calls to the Yahoo Fantasy Sports API. Several API calls require OAuth tokens, which is where I am having some difficulty. I can use a web browser to generate an access key and secret and then pass those along in my test code, but the tokens expire after an hour, so I need to manually regenerate these and update my test configuration any time I want to run the tests.
Are there best practices for writing API integration tests when OAuth tokens are required?
Normally such APIs offer a way to get authentication tokens without needing a browser. I am not sure whether Yahoo Fantasy Sports is one of those, though.
Normally you have to register an application with the OAuth2 provider, which then gives you a ClientID and ClientSecret; you hit a token URL with those and receive an access token, which is then valid for an hour.
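As a sketch of that flow using HttpClient and the client-credentials grant (the token URL is hypothetical, and whether Yahoo supports this grant would need to be checked against their documentation):

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class TokenHelper
    {
        // Fetches an access token at the start of a test run, so no
        // browser round-trip is needed.
        public static async Task<string> GetTokenResponseAsync(
            string clientId, string clientSecret)
        {
            using (var http = new HttpClient())
            {
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    ["grant_type"] = "client_credentials",
                    ["client_id"] = clientId,
                    ["client_secret"] = clientSecret
                });

                // Hypothetical token endpoint.
                var response = await http.PostAsync(
                    "https://provider.example.com/oauth2/token", form);
                response.EnsureSuccessStatusCode();

                // The body is JSON containing "access_token" and "expires_in";
                // JSON parsing is omitted in this sketch.
                return await response.Content.ReadAsStringAsync();
            }
        }
    }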
You might want to consider not having integration tests at all, though. If I were you, I would simply mock the API responses and use those in the tests: get a sample of the response for each call, then create a fake response that is returned whenever you hit the endpoint (see the sketch at the end of this answer). You can then still run your tests.
The question you need to answer is this: what exactly am I testing? Are you testing a third-party API, or do you want to test your own code?
Also, don't forget that each API only allows a certain number of calls within a certain time window. One more reason to fake it, I'd say.
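For the mocking suggestion above, one common technique in C# is a fake HttpMessageHandler that returns a canned response, so the client library under test never touches the real API (the sample JSON body is illustrative):

    using System.Net;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    // Returns the same canned body for every request.
    class CannedResponseHandler : HttpMessageHandler
    {
        private readonly string _body;

        public CannedResponseHandler(string body)
        {
            _body = body;
        }

        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            var response = new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(_body, System.Text.Encoding.UTF8,
                    "application/json")
            };
            return Task.FromResult(response);
        }
    }

    // Usage: hand the fake handler to the HttpClient your client library uses.
    // var client = new HttpClient(new CannedResponseHandler("{\"players\":[]}"));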