When trying to apply some heavy stress to a new web app, we were having trouble with our usual array of free tools (WAS, and some other free Microsoft tool - WCAT?), so I created a very quick and dirty tool that would use a .NET WebRequest object, open up new threads, and continually hit a list of URLs read from a file. On a single thread it produced a little load.
Then, once I started trying to multi-thread it (once with Thread.Start(), and another time by calling the asynchronous BeginGetResponse() on the WebRequest object), the requests did not hit the server (nothing in the IIS logs, no increase in requests executing, requests/sec, etc.) - unless Fiddler was on! With Fiddler running, it works just as I'd expect.
I'm not especially interested in using this little application much more (I'll probably try to find another free web stress tool - any recommendations?), but my main question is: why did my little app only generate load when going through Fiddler's proxy? Any ideas?
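For illustration, the kind of loop described above might look something like this (a sketch, not the original code; the file name and thread count are placeholders):
using System;
using System.IO;
using System.Net;
using System.Threading;

class QuickLoadTool
{
    // Placeholder file name; one URL per line.
    static readonly string[] Urls = File.ReadAllLines("urls.txt");

    static void Main()
    {
        for (int i = 0; i < 10; i++)          // placeholder thread count
            new Thread(HitUrlsForever) { IsBackground = true }.Start();

        Console.ReadLine();                   // keep the process alive
    }

    static void HitUrlsForever()
    {
        var rnd = new Random();
        while (true)
        {
            var req = (HttpWebRequest)WebRequest.Create(Urls[rnd.Next(Urls.Length)]);
            using (var resp = (HttpWebResponse)req.GetResponse())
            using (var reader = new StreamReader(resp.GetResponseStream()))
            {
                reader.ReadToEnd();           // read and dispose so the connection is released
            }
        }
    }
}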
You may take a look at Apache Bench. It is part of the Apache server software but ab.exe is completely standalone and you don't need to install the server. In the description it says that it is used to test the Apache Hypertext Transfer Protocol (HTTP) server but it works with any HTTP server. I've used it in multiple projects to perform stress testing and I can say that it is a great tool. As it allows posting data, it could be used to test web services as well.
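For example, typical invocations look like this (the URLs, counts, and file names are just placeholders):
ab -n 1000 -c 50 http://yourserver/page.aspx
ab -n 500 -c 20 -p postdata.xml -T "text/xml" http://yourserver/service.asmx
where -n is the total number of requests, -c is the concurrency level, and -p/-T let you POST a file with a given content type.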
Another alternative is WCAT from Microsoft.
Could be a missing proxy setting in your app. Do you normally use a proxy server to connect to the servers you're stressing?
Fiddler operates within the context of the logged-in user, including any proxy settings. But when coding your own WebClient/HttpWebRequest, the proxy is not automatically used; you need to enable use of a proxy yourself, either in code or in configuration.
It could also be a permissions problem, if the servers you're stressing (or your proxies) require authentication.
Here's some code to play around with that addresses both a missing proxy and a lack of authentication. Note that the same approach works with HttpWebRequest as well as WebClient:
// Point the client at an explicit proxy; "true" bypasses the proxy for local addresses.
WebClient wc = new WebClient();
WebProxy wp = new WebProxy("http://myproxyserver:80/", true);
wp.UseDefaultCredentials = true;   // authenticate to the proxy as the logged-in user
wc.Proxy = wp;
wc.UseDefaultCredentials = true;   // authenticate to the target server as the logged-in user
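And roughly the same thing with HttpWebRequest (the URL and proxy address below are placeholders):
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://servertotest/somepage.aspx");
req.Proxy = new WebProxy("http://myproxyserver:80/", true) { UseDefaultCredentials = true };
req.UseDefaultCredentials = true;   // pass the logged-in user's credentials to the target server
using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
{
    // read/close the response here so the connection is returned to the pool
}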
BTW, a tool I often use in weird situations like this is HttpWatch. It's expensive, but it's essentially Fiddler implemented as a browser plugin, meaning it will detect proxy issues and other problems that don't show up inside Fiddler. It also works nicely with HTTPS.
I have a multi-layered application that I have developed. I communicate with a Windows service over HTTP using ServiceStack (AppHostHttpListenerBase). While this works fine in clean environments, I often find that customers' computers are not so clean, and one of the first problem areas is the introduction of an unwanted proxy, which causes my application not to run. I get a lot of bad press that the application does not work well, when in reality it is the result of a hidden issue on the customer's machine.
When I go to query my endpoints, the proxy interferes and I lose all communication with the service.
I am thinking of going back to WCF and using named pipes, but before I do so, I wondered if there was a better way in the ServiceStack world (which I absolutely love).
Ideas? Suggestions?
If a local HTTP proxy is causing issues, one thing you could try is to use SSL, which will let you tunnel your traffic as opaque binary through the proxy, minimizing the potential for interference.
SSL for HttpListener is configured the same way for all HttpListeners, i.e. it's not specific to ServiceStack, and it needs to be configured on the OS where it's run.
This answer shows how to configure SSL on Windows: https://stackoverflow.com/a/11457719/85785
You'll be able to use HTTPS with the ServiceStack HttpListener self-host by following the steps above. I used "https://*:8443/" for the URL and "CN=localhost" to bypass the SSL browser warning dialog.
It's not specifically called out in that answer, but you can get the thumbprint from the Details tab of the certificate; you then need to remove the spaces. If it's easier, you can follow the walkthrough in the answer below to use MMC to import the certificate: https://stackoverflow.com/a/33905011/85785
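Putting it together, a minimal self-host sketch might look like the following (the service name, port, thumbprint, and appid are placeholders, and the netsh lines assume the certificate is already installed in the machine store):
// One-time OS configuration from an elevated prompt (placeholders in angle brackets):
//   netsh http add urlacl url=https://*:8443/ user=Everyone
//   netsh http add sslcert ipport=0.0.0.0:8443 certhash=<thumbprint-without-spaces> appid={<any-guid>}

// Requires the ServiceStack self-host packages; MyService is a hypothetical ServiceStack service.
public class AppHost : AppHostHttpListenerBase
{
    public AppHost() : base("Self-hosted HTTPS service", typeof(MyService).Assembly) { }
    public override void Configure(Funq.Container container) { }
}

// Then start the self-host on the HTTPS prefix:
new AppHost().Init().Start("https://*:8443/");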
I want to check which web sites are open in browsers (IE, Firefox, Chrome) so that I can write a program in C# which can block any web site on a list of forbidden web sites. Is there some browser API for this?
A better solution would be to write a TCP/IP filter, as most firewalls do.
Update: this question may be relevant: How do I hook the TCP stack in Windows to sniff and modify packets?
There is no generic "browser API" that allows access to this kind of information across all browsers.
I'm pretty sure the idea of doing this by accessing the browsers is doomed from the start. It is going to be almost impossible to implement, will require frequent updates, and will always be extremely easy to circumvent (there are dozens and dozens of browsers that your program will not know about).
The only reliable way is to filter traffic at the network level. I would recommend looking into using an existing proxy server or TCP filtering program. There are several open-source ones that I'm sure you could use as a basis to rewrite or hook into.
The easier solution is to write an HTTP listener that logs the requests.
Fiddler2 is one of these; you can check it out. It logs all incoming and outgoing HTTP content.
I have code on my server which works very well. It must crawl a few pages on remote sites to work properly. I know some users may want to abuse my site, so instead of running the code (which uses WebClient and HttpWebRequest) on the server, I would like it to run on the client side, so that if it is abused the user's IP gets blacklisted instead of my server's. How might I run this code client side? I am thinking Silverlight may be a solution, but I know nothing about it.
Yes, Silverlight is a solution that lets you run a limited subset of .NET code on the client's machine. Just google for Silverlight limitations to get more information about what's not available.
I don't know what scenario you're trying to implement, or whether you need real-time results, but I'd guess caching the crawl results could be a good idea.
If you're after web scraping, you should also be able to find a couple of JavaScript frameworks that can do it for you.
I think your options here are Silverlight or some sort of desktop app.
Unless maybe there is a jQuery library or some other client-side scripting approach that can do the same things.
That's an interesting request (no pun intended). If you do use Silverlight, then maybe instead of porting your logic to it, create a simple proxy class in it that receives requests from your server app and shuttles them forward to do the dirty work. Same with the incoming responses: have your Silverlight proxy send them back to the server app.
This way you have the option of running your server app through the Silverlight proxy in some instances, and on its own (with no proxy) in other scenarios. The Silverlight plugin should provide a consistent API to program against no matter which browser it's running in.
If you are using a proxy solution in the web browser, you might even be able to skip Silverlight altogether and use JavaScript/AJAX calls. Of course, this kind of thing is usually fraught with browser compatibility issues, and it would be an obscure push/pull implementation for sure, but I think JavaScript can access other domains and URLs and (in some usage scenarios) not be restricted to the one it originated from.
If Silverlight security stands in the way, you might look into other kinds of programmable (Turing-complete) browser plugins like Java, Flash, etc. If memory serves correctly, the Java plugin can only communicate over the network with the domain it originated from; that kind of security would be too restrictive for your crawling needs.
Is it realistic to use the C# .NET class HttpListener as the foundation for a production-caliber web server?
The HTTP web service I need to host contains no .aspx or static files. All HTTP responses are dynamic, generated in C# code that is invoked via a few switch statements which inspect a RESTful URL format.
My thinking is that IIS is really a user-mode wrapper around the Windows HTTP.SYS kernel module, which does all the heavy-duty network handling, and so is HttpListener.
I already have a basic multithreaded web server running which is excellent for development because it starts in debug mode in an instant; now I am wondering whether I need the overkill of IIS for production. A low memory footprint is another attraction.
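For reference, the kind of self-hosted listener described above is roughly this shape (a sketch only; the prefix and resource names are placeholders):
using System.Net;
using System.Text;
using System.Threading;

class MiniRestServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/");     // placeholder prefix; may require a urlacl reservation on Windows
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();   // blocks until a request arrives
            ThreadPool.QueueUserWorkItem(_ => Handle(ctx));    // handle each request on a pool thread
        }
    }

    static void Handle(HttpListenerContext ctx)
    {
        // Dispatch on the first URL segment, RESTful-style.
        string resource = ctx.Request.Url.Segments.Length > 1
            ? ctx.Request.Url.Segments[1].TrimEnd('/')
            : string.Empty;

        string body;
        switch (resource)
        {
            case "orders":                                     // hypothetical resource
                body = "{\"orders\":[]}";
                break;
            default:
                ctx.Response.StatusCode = 404;
                body = "{\"error\":\"not found\"}";
                break;
        }

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        ctx.Response.ContentType = "application/json";
        ctx.Response.ContentLength64 = bytes.Length;
        ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
        ctx.Response.Close();
    }
}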
You have two serious choices here. And no, coding your own Web Server with HttpListener isn't production-grade.
1) Use IIS. It has a ton of features for security, performance, and, perhaps more importantly, manageability that you would otherwise have to reinvent yourself: remote administration, logging, integrated Windows security, etc.
2) Use WCF and create a ServiceHost to host your services. You will then have to implement your own services and find a way to manage their lifetimes (a rough self-hosting sketch follows below). You can do it, but again, if you're talking RESTful web calls, IIS really is the way to go.
Manually rolling your own should be avoided. IIS has changed a lot in the past 10 years. It's by no means a big monolithic server anymore. They've modularized just about everything, especially in Windows 2008, so you get a lean and fast system.
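For option 2, a minimal WCF self-hosting sketch might look like this (the contract, URI, and resource names are made up for illustration):
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [WebGet(UriTemplate = "orders/{id}", ResponseFormat = WebMessageFormat.Json)]
    string GetOrder(string id);
}

public class OrderService : IOrderService
{
    public string GetOrder(string id) { return "order " + id; }   // dummy payload
}

class Program
{
    static void Main()
    {
        // WebServiceHost wires up the REST (webHttp) behavior automatically.
        var host = new WebServiceHost(typeof(OrderService), new Uri("http://localhost:8080/"));
        host.Open();
        Console.WriteLine("Listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}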
Well, as has been said, try IIS first.
HttpListener is not bad at all - it's the fastest managed listener you can have right now (faster than TcpListener and faster than the Socket class), and it's actually built on the same core as IIS. But IIS adds a lot more on top.
I can't say IIS is a monolith - hosting experience has shown that it got worse in Windows 2008 in terms of stability and management. But your hand-made solution can be much worse. Also, don't forget that http.sys is much more customizable than HttpListener; for example, you can't do streaming with HttpListener, but you can with http.sys - see Question about HttpListener streaming.
But if you have enough capacity as a developer, you can try to write your own http.sys wrapper; that is the best approach to writing your own web server on Windows.
Rubbish, roll your own. The architecture allows it. Be aware, though, that there are some strange behaviours in the class: shutting it down a couple of times inside an NT service makes it as flaky as a bag of puff pastries.
If you run it in a console, there are no problems whatsoever; run it async and all should be well. However, starting and stopping the darn thing is a different issue, which I am currently struggling with, as no errors are produced from the hermetically Microsoft-sealed classes.
I feel Python coming on, with a little dash of CherryPy.
If you write it, then you'll have to maintain it. Microsoft has already written a web server - you should use it.
I have some shared server web hosting in the States (I'm from the UK), which allows me to publish PHP and .NET applications. I cannot install my own software onto the remote server, but I'd like to set up a web forwarding proxy for accessing sites that serve different content depending on what country you're from.
My C# and ASP.NET skills are OK, but my PHP is very limited. Are there any solutions that anyone would recommend for this sort of problem? The proxies I've investigated all seem to require installation on the server machine itself, whereas I'm just looking for something that's accessible from a URL.
Obviously, as the requests are coming from the UK, the headers will have to be manipulated by the proxy before forwarding them on. I was going to code my own HTTP handler in C#, but I don't want to reinvent the wheel if there's something out there already ;)
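For what it's worth, a bare-bones version of that HTTP-handler idea might look something like the sketch below (the handler name and query-string parameter are made up, and it does nothing about cookies, redirects, or rewriting relative links):
using System.Net;
using System.Web;

// Hypothetical usage: http://yourhost.example/Forward.ashx?url=http://example.com/page
public class ForwardHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string target = context.Request.QueryString["url"];
        if (string.IsNullOrEmpty(target))
        {
            context.Response.StatusCode = 400;
            return;
        }

        using (var client = new WebClient())
        {
            // The request to the remote site originates from the US-hosted server,
            // so the remote site sees a US IP rather than the UK client's.
            byte[] data = client.DownloadData(target);
            context.Response.ContentType = client.ResponseHeaders[HttpResponseHeader.ContentType];
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable { get { return false; } }
}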
Although quite old, Org.Mentalis.Proxy could be a good starting point for an example proxy implementation in C#. You can find it here: http://www.mentalis.org/soft/projects/proxy/
Maybe this script will help you: phproxy.