I am trying to do some unit testing for a Web API project, and I am going to simulate the Web API hosting environment. It seems like I could use the in-memory host (HttpServer) or the self host (HttpSelfHostServer).
Just wondering what the differences are, which technology is good for what, and whether there are any limitations to those options.
You should use the in-memory host for end-to-end tests and then test the network connectivity of your environment separately.
For a number of reasons:
The in-memory host, as the name suggests, runs entirely in memory, so it will be much faster.
Self host needs to be run with elevated privileges, so your tests will need to be executed within the context of an "admin" identity. This is far from desired. It is especially troublesome if you want to execute tests from, say, build scripts or from PowerShell, since, as a result, those processes would also have to be started with elevated privileges. Moreover, this will have to happen on every server you test on.
With self host you end up testing the given operating system's networking stack, which really is something that shouldn't be tested, since it might differ across environments (development, staging, QA, production and so on). For example, a given port might not be available. As a result you might get dragged into unnecessary debugging efforts across different machines just to get the tests running.
Finally, testing with self-hosting still doesn't guarantee that the service will run correctly when web-hosted, and vice versa, so you might as well just test in memory.
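For illustration, here is a minimal sketch of an in-memory end-to-end test with ASP.NET Web API and NUnit. The ProductsController and the route used here are made up, so substitute your own controllers and route configuration:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using NUnit.Framework;

[TestFixture]
public class ProductsApiTests
{
    [Test]
    public async Task Get_Products_Returns_Success()
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "Default",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        // HttpServer is an HttpMessageHandler, so HttpClient can talk to it
        // directly; no network, port or elevated privileges are involved.
        using (var server = new HttpServer(config))
        using (var client = new HttpClient(server))
        {
            var response = await client.GetAsync("http://localhost/api/products");
            Assert.IsTrue(response.IsSuccessStatusCode);
        }
    }
}
```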
We are setting up our automation to run remotely so we can start incorporating it into the builds (you know, the whole CI/CD thing). These are a handful of important automated GUI tests that, for obvious reasons, need an active VM to run. These are not browser tests; they are automated tests for a Windows application, so any support that Selenium brings to the table is off the table for us.
So now on to the challenge: how can I keep the VMs up and running, without having to log into them using Remote Desktop Connection, so that they can run the tests properly? Currently, I have to connect to them from my local machine, minimize the session, and then I can kick off the builds. As soon as I exit, however, the virtual machine is locked again.
I want the VMs to work completely independently from my machine, and I was skeptical about this approach because it seemed like it would still be tied to my machine only. Pretty much anyone in the company can log into the VMs from their machine using their credentials. What I would like to do is programmatically connect to the VM during my global TestStartup and then disconnect at TearDown. Is this possible to do? Has anyone had success, or run into similar situations, with their automation integration process? We use a tool called LeanFT and NUnit as our test runner.
Your idea to log in as part of the test is a bit fragile and prone to instabilities.
Here is the setup that works for every UI automation tool I've used for Windows:
Set up your VMs to not lock, sleep, hibernate, etc.
Avoid using RDC (turn that feature off, even for admins, if you can).
Only use the console viewer for your VM server.
Limit access to those systems using the permissions in the VM server, so that only you and your team can interact with them.
Here is why this works. You have already discovered that when you disconnect the RDP connection, the session locks and your automation fails. Using the VM console viewer is essentially like turning the monitor connected to the system on and off. By keeping the VMs on all the time and never letting them sleep, they are always available for running tests.
We are using LeanFT, and to improve the stability of our tests we have set up tasks that check the running processes and kill any stray LeanFT runtimes that didn't get closed cleanly by a prior run, as well as any stray applications that were not closed properly after a test run (a sketch of that cleanup step follows).
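This is roughly what that cleanup looks like with NUnit. The process names ("LFTRuntime", "MyAppUnderTest") are placeholders for whatever the LeanFT runtime and your application actually show up as in Task Manager:

```csharp
using System;
using System.Diagnostics;

public static class StrayProcessCleanup
{
    public static void KillStrays(params string[] processNames)
    {
        foreach (var name in processNames)
        {
            foreach (var process in Process.GetProcessesByName(name))
            {
                try
                {
                    process.Kill();
                    process.WaitForExit(5000); // give it a few seconds to die
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Could not kill {0} (pid {1}): {2}",
                        name, process.Id, ex.Message);
                }
            }
        }
    }
}

// e.g. called from a global NUnit [OneTimeSetUp]:
// StrayProcessCleanup.KillStrays("LFTRuntime", "MyAppUnderTest");
```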
These kind of issues are really annoying for UI automation.
In the end I found a solution. It's not pretty, but it works. What I did was create a Docker container and use it in the UI automation job.
The container is composed of SSHD, Xvfb and xfreerdp, which lets you open RDP connections to many remote machines, and because it uses Xvfb, a virtual display, it uses very few resources.
Here's the image I created for your reference.
https://hub.docker.com/repository/docker/ariyuan/ubuntu1604_ssh_rdp
Before your UI automation starts, you just need to tell the container to open a remote RDP connection to the machine where your UI automation is hosted. That way the display for the UI automation is kept alive during the whole execution. (You can do all of this from Jenkins, with parameters for connecting to different remote machines.)
I have a Windows Service written in C#. I have recently added CassiniDev to it to allow remote web administration and monitoring of the service. The integration went really well except for my inability to interact with the data layer of my Windows Service from the hosted ASP.NET pages.
I have tried putting everything of interest into a common assembly but the debugger shows there are two loaded assemblies with the same name but from different paths. Cassini runs ASP.NET off some temp folder so the assembly I am using is really "a different instance" in the address space of the same process.
I am not sure what is going on here. Probably some "application domain" separation stuff that I do not understand at this time.
So with the Windows Service and the web server running in the same process, how can I make them interact? Say I have some status in the service part that I want to report in the ASP.NET part. Any ideas how I could make this happen? Shared memory or TCP comes to mind, but that sounds like overkill for purely intra-process communication.
If security isn't an immediate concern, i.e. the data isn't highly sensitive and the environment is controlled, then you could have success using named pipes. A managed API for pipes is part of the framework (System.IO.Pipes), so you don't need to resort to native calls.
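A minimal sketch of the idea, assuming the service publishes a status string over a pipe named "MyServiceStatus" (the pipe name and the status payload are made up):

```csharp
using System.IO;
using System.IO.Pipes;

public static class StatusPipe
{
    // Inside the Windows Service: answer one connection with the current status.
    public static void ServeStatus(string currentStatus)
    {
        using (var server = new NamedPipeServerStream("MyServiceStatus", PipeDirection.Out))
        {
            server.WaitForConnection();
            using (var writer = new StreamWriter(server))
            {
                writer.WriteLine(currentStatus);
                writer.Flush();
            }
        }
    }

    // Inside the ASP.NET page: connect to the pipe and read the status.
    public static string ReadStatus()
    {
        using (var client = new NamedPipeClientStream(".", "MyServiceStatus", PipeDirection.In))
        {
            client.Connect(2000); // wait up to 2 seconds for the server
            using (var reader = new StreamReader(client))
            {
                return reader.ReadLine();
            }
        }
    }
}
```

In practice you would keep the server side running in a loop on a background thread so it can answer more than one request.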
Some background....
We're venturing into Azure for the first time and are trying to do it in baby steps. For now, our first apps are going to be worker roles monitoring queues to process requests (such as sending an email or performing some screen scraping) and we'll just insert into the queues from our on-premise MVC app and WCF services. We'll later move the MVC app and WCF services to Azure.
Our development workflow essentially goes like this (somewhat modified in unimportant ways):
Develop locally and on some shared servers. Check into source control.
Every 15 minutes, the build server builds and deploys to a "development" environment (this doubles as CI and covers the case where we need to hit a "development" environment other than local).
Technical testers manually trigger build server to deploy the last-successful "Dev" build to the Test environment (or alternately, the previously-deployed Test build, including/excluding database).
Technical testers and business testers manually trigger build server to deploy the last-successful "Test" build (or alternately, the previously-deployed Test build, including/excluding database) to the QA environment.
At some point we merge the changeset that QA approves for deployment.
Later our production build server deploys this version to staging and then later to our production environments (we host it N times in parallel/independent environments).
As you can tell, we have a number of internally-hosted versions of our app for internal support people to hit against prior to reaching production. I would like these to have a reasonably low level of dependence upon Azure. I don't need to completely sever the dependence, so we'll continue to use Azure Queues and perhaps some other mechanisms just because they're easy to keep using, but we don't want our build server to have to deploy to Azure for every one of these environments (and pay for all that hosting).
So how can we reasonably host our Worker Roles on-premise in a way where we're actually testing the code that gets deployed to Azure?
One option that's been suggested is that we create the worker role as a wrapper/facade and do all the real work inside a class library, which was our plan. However, the follow-up that would allow us to "host" this would be to create a second wrapper/facade application that performs the same work as the worker role, just in a way where we can run it as a scheduled task or a Windows service. Ultimately, I don't like this option because an entire project is never tested until it hits staging.
Is it possible to do something similar where we create a second wrapper/facade application that, instead of calling the class library directly, actually references and calls the Run() function of the worker role?
Do you reckon the Azure emulator might help you? These are the differences between the real Azure provider and the emulator.
Having a facade for your worker role seems reasonable. And you could use adapters to adapt any possible cloud (or other hosting) technology to that facade? Just trying to throw in some ideas. I actually used this approach before, but it was a "personal" project.
Use PowerShell to configure your roles and whatnot.
Configure your Azure emulator like this.
The facade approach is the best one to adopt, to be honest.
When you have deployments that ultimately have dependencies on the supporting infrastructure, it's exceptionally difficult to fully test until you deploy to an identical, or comparable, infrastructure. That's surely the case with an Azure Worker Role.
By decoupling the functional aspects of your application from the infrastructure touch-points, you can spend your effort ensuring that your code behaves as it should, prove that the facade behaves as it should, then confidence test the final combination.
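For illustration, a minimal sketch of that decoupling; the IQueueProcessor and EmailQueueProcessor names are made up, and the worker role shell assumes the Azure SDK's RoleEntryPoint base class:

```csharp
using System.Threading;

// All of the real work lives in a plain class library that can be unit tested
// and hosted anywhere (console app, Windows service, scheduled task).
public interface IQueueProcessor
{
    void ProcessNextMessage();
}

public class EmailQueueProcessor : IQueueProcessor
{
    public void ProcessNextMessage()
    {
        // read from the queue, send the email, etc.
    }
}

// The worker role is only a thin shell around the library
// (requires a reference to the Azure SDK's ServiceRuntime assembly).
public class WorkerRole : Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint
{
    private readonly IQueueProcessor _processor = new EmailQueueProcessor();

    public override void Run()
    {
        while (true)
        {
            _processor.ProcessNextMessage();
            Thread.Sleep(1000);
        }
    }
}

// An on-premise host exercises exactly the same library code.
public static class OnPremiseHost
{
    public static void Main()
    {
        var processor = new EmailQueueProcessor();
        while (true)
        {
            processor.ProcessNextMessage();
            Thread.Sleep(1000);
        }
    }
}
```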
There's always some element of compromise to this effect unless your test environments are identical to your production environments.
And it's what the Azure staging deployment is for; that last level of confidence testing before you switch to production.
You can create an extra-small deployment, purely for your later stages of testing. You pay for the time that the role is deployed, so if you delete your deployment once your testing is complete, you can minimise the cost.
Lastly, design for testability; the facade pattern is one example of this. Factor your code to maximise the amount that can be tested before deployment and you minimise your risk in the later stages of testing.
I am developing a distributed application where each distributed site will run a 2-tier winform based client-server application (there are specific reasons for not going into web architecture). The local application server will be responsible for communicating with a central server and the local workstations will in turn communicate with the local server.
I am about to develop one or more Windows services to achieve the objectives below. I presume some of them will run on the local server while others run on the local workstations. The central server is just a SQL Server database, so no services need to be developed on that end; DML statements will simply be fired at it from the local server.
Windows Service Objectives
Synchronise local data with data from central server (both end SQL Server) based on a schedule
Take automatic database backup based on a schedule
Automatically download software updates from vendor site based on a schedule
Enforce software license management by monitoring connected workstations
Intranet messaging between workstations and/or local server (like a little richer NET SEND utility)
Manage deployment of updates from local server to workstations
Though I have something on my mind, I am a little confused in deciding and I need to cross-check my thoughts with experts like you. Following are my queries -
How many services should run on the local server?
Please specify individual service roles.
How many services should run on the workstations?
Please specify individual service roles.
Since some services will need to query the local SQL database, where should the services read the connection string from? registry? Config file? Any other location?
If one service is allotted to do multiple tasks, e.g. a scheduler service, what would be the best/optimal way, with respect to performance, of managing the scheduled tasks?
a. Build .NET class Library assemblies for the scheduled tasks and spin them up on separate threads
b. Build .NET executable assemblies for the scheduled tasks and fire the executables
Though it may seem like too many questions in a single post, they are all related. I will share my views with the forum later, as I do not want to bias the forum's views/suggestions in any way.
Thanking all of you in advance.
How many services should run on the local server?
It depends on many factors, but I would generally go for as few services as possible, because maintaining and monitoring one service is less work for the admin than having many services.
How many services should run on the workstations?
I would use just one. That does make it a single point of failure, but the user will notice if the service on the workstation is down, and in that case only one service needs to be restarted.
Since some services will need to query the local SQL database, where should the services read the connection string from? registry? Config file? Any other location?
I would generally put the connection string in the app.config. The .NET framework also offers facilities to encrypt the connection string.
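For example, a minimal sketch assuming a connection string named "LocalDb" in the service's app.config and a reference to System.Configuration:

```csharp
using System.Configuration;
using System.Data.SqlClient;

public static class Db
{
    public static SqlConnection OpenLocal()
    {
        // Reads <connectionStrings><add name="LocalDb" connectionString="..."/>
        // from the service's app.config.
        string connectionString =
            ConfigurationManager.ConnectionStrings["LocalDb"].ConnectionString;

        var connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}
```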
If one service is allotted to do multiple tasks, e.g. a scheduler service, what would be the best/optimal way, with respect to performance, of managing the scheduled tasks: a. build .NET class library assemblies for the scheduled tasks and spin them up on separate threads, or b. build .NET executable assemblies for the scheduled tasks and fire the executables?
Option b. is easier to design and implement, and it gives you the option of using the Windows Task Scheduler. In that case you will need to think about what happens when the scheduler starts the executable while the previous run has not finished yet; this results in two processes that may do the same work. If that is not a problem, then stay with that design. If it is a problem, consider solution a. (One common guard against overlapping runs is a named mutex, sketched below.)
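A minimal sketch of that guard; the mutex name is arbitrary, it just has to be the same for every run of the task:

```csharp
using System;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        bool createdNew;
        // The named mutex is visible machine-wide, so a second scheduled start
        // can detect that a previous run is still going and exit quietly.
        using (var mutex = new Mutex(true, @"Global\MyScheduledTask", out createdNew))
        {
            if (!createdNew)
            {
                return; // previous run still in progress
            }

            try
            {
                DoScheduledWork(); // placeholder for the real task
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }

    private static void DoScheduledWork() { }
}
```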
For solution a., have a look at Quartz.NET, which offers a lot of advanced scheduling capabilities. Also consider using application domains instead of threads to make the service more robust.
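A minimal sketch of a Quartz.NET setup; the BackupJob class and the cron expression are just examples, and newer Quartz.NET versions expose the same calls as async methods:

```csharp
using Quartz;
using Quartz.Impl;

public class BackupJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // take the database backup here
    }
}

public static class SchedulerBootstrap
{
    public static IScheduler StartScheduler()
    {
        ISchedulerFactory factory = new StdSchedulerFactory();
        IScheduler scheduler = factory.GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<BackupJob>()
            .WithIdentity("nightlyBackup")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithCronSchedule("0 0 2 * * ?") // every day at 02:00
            .Build();

        scheduler.ScheduleJob(job, trigger);
        return scheduler;
    }
}
```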
If you don't get admin rights on the local server, think about means to restart the service without the service control manager. Give some privileged user the possibility to re-initialize the service from a client machine.
Also think about ways to restart just one part of a service if one service is doing multiple tasks. For instance, say the service is behaving strangely because the update task is running wrong. If you need to restart the whole service to repair this, all users may become aware of it. Provide some means to re-initialize only the update task.
Most important: don't follow any of my advice if you find an easier way to achieve your goals. Start with a simple design. Don't overengineer! Solutions with (multiple) services and scheduling tend to explode in complexity with each added feature, especially when you need to let the services talk to each other.
I don't think there is one answer to this; some would probably use just one service, while others would modularize every domain into a service and add enterprise transaction services, etc. The question is more of an SOA one than a C# one, and you might consider reading up on some SOA books to find your pattern.
This does not answer your questions specifically, but one thing to consider is that you can run multiple services in the same process. The ServiceBase.Run method accepts an array of ServiceBase instances to start. They are technically different services and appear as such in the service control manager, but they are executed inside the same process. Again, just something to consider in the broader context of your questions.
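A minimal sketch of that approach; SyncService and UpdateService are placeholders for your own ServiceBase implementations:

```csharp
using System.ServiceProcess;

public class SyncService : ServiceBase
{
    public SyncService() { ServiceName = "LocalSyncService"; }
    protected override void OnStart(string[] args) { /* start the sync timer */ }
    protected override void OnStop() { /* stop the sync timer */ }
}

public class UpdateService : ServiceBase
{
    public UpdateService() { ServiceName = "LocalUpdateService"; }
    protected override void OnStart(string[] args) { /* start update checks */ }
    protected override void OnStop() { /* stop update checks */ }
}

public static class Program
{
    public static void Main()
    {
        // Both services appear separately in the SCM but share one process.
        ServiceBase.Run(new ServiceBase[] { new SyncService(), new UpdateService() });
    }
}
```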
I'm working on a graduation project for one of my university courses, and I need to find somewhere to run several crawlers I wrote in C#. With no web hosting experience, I'm a bit lost. Is this something that any hosting site allows? Do I need a special host that gives more access to the server? The crawler is a simple app that does its work, then periodically writes information to a remote database.
A web crawler simulates a normal user. It accesses sites the way browsers do, getting the HTML (JavaScript, etc.) returned by the server, so it has no internal access to server code. Given that, any site can be crawled.
Be aware of web crawler ethics guidelines. There are pages you shouldn't index, or whose links you shouldn't follow, and web developers publish files and instructions for crawlers (such as robots.txt) saying what you may index or follow.
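A minimal sketch of the idea: fetch the site's robots.txt first, then the page, with a polite delay between requests (the robots.txt handling here is deliberately naive and the URLs are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class PoliteCrawler
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task CrawlAsync(Uri page)
    {
        // Check the site's crawler instructions before fetching anything else.
        var robotsUri = new Uri(page, "/robots.txt");
        string robots = await Client.GetStringAsync(robotsUri);
        Console.WriteLine(robots); // a real crawler would parse and honour this

        string html = await Client.GetStringAsync(page);
        Console.WriteLine("Fetched {0} characters from {1}", html.Length, page);

        // Wait between requests so you don't hammer the server (or your host's network).
        await Task.Delay(TimeSpan.FromSeconds(2));
    }
}
```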
If you can't run it off your desktop for some reason, you'll need a host that lets you execute arbitrary C# code. Most cheap web hosts don't allow this due to the potential security implications, since there will be several other people running on the same server.
This means you'll need to be on a server where you have your own OS. Either a VPS - Virtual Private Server, where virtualization is used to give you your own OS but share the hardware - or your own dedicated server, where you have both the hardware and software to yourself.
Note that if you're running on a server that's shared in any way, you'll need to make sure to throttle yourself so as to not cause problems for your neighbors; your primary issue will be not using too much CPU or bandwidth. This isn't just for politeness - most web hosts will suspend your hosting if you're causing problems on their network, such as denying the other users of the hardware you're on resources by consuming them all yourself. You can usually burst higher usage levels, but they'll cut you off if you sustain them for a significant period of time.
This doesn't seem to have anything to do with web hosting. You just need a machine with an internet connection and a database server.
I'd check with your university if I were you. At least in my time, a lot was possible to arrange in-house when it came to graduation projects.
Failing that, you could look into a simple VPS (Virtual Private Server) account. Unless you are sure your app runs under Mono, you will need a Windows one. The resource limits are usually a lot lower than you'd get from a dedicated server, but they're relatively affordable. Some will offer a MS SQL Server database you can use next to the VPS account (on another machine). Installing SQL Server on the VPS itself can be a problem license wise.
Make sure you check the terms of usage before you open an account, as well as the (virtual) system specs though. Also check if there is some kind of minimum contract period. Sometimes this can be longer than a single month, especially if there is no setup fee.
If at all possible, find a host that's geographically close to you. A server on the other side of the world can get a little annoying to access remotely using Remote Desktop.
80legs lets you use their crawlers to process millions of web pages with your own program.
The rates are:
$2.00 per million pages
$0.03 per CPU-hour
They claim to crawl 2 billion web pages a day.
You will need a VPS (virtual private server) or a full-on dedicated server. Crawlers are nothing more than applications that "crawl" the internet. While you could set up a web site to act as a crawler, it is not practical, because the web page would have to be accessed for your crawler to work. You will have to read the ToS (terms of service) for the host to see what the terms of usage are. Some of the lower-priced hosts will cut your connection, citing "negatively impacting the network", if you try to use too much bandwidth, even though they have given you plenty to use.
VPSes run around $30-80 for a Linux server and $60+ for a Windows server.
Dedicated servers run $100+ for both Linux and Windows.
You don't need any web hosting to run your spider. Just ask for a PC with an internet connection that can act as a dedicated server, configure the database, and run the crawler from there.