I need to make a website designed to monitor / check the connectivity of our internal applications that are deployed on IIS. It's essentially a list of links to the internal websites we developed. The question is: how would I be able to check whether a website is up and running, and how would I check whether it is down? I will simply display the links to our systems and color them based on their status: green for up, red if the site is down or has errors. Hoping for your advice; sample code would be appreciated.
Just load anything from that server: if it loads, your site is up and running; if it doesn't load, generate an error or show red.
The simplest way to do this is to have a Windows service or scheduled task running which performs WebRequests against the list of websites and checks the status codes.
If a status code of 200 is returned, show green. For anything else (4xx, 5xx, timeout), show red. Have the service store the results in a database and have the 'red-green' page read from that database.
That would be a generic, one-size-fits-all solution. It may not work for all sites: some sites could use basic authentication, in which case your monitor would incorrectly record the site as down. So you would need to store metadata against each site and perform basic authentication (or any other business logic) to determine whether it's up or down.
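The polling approach above can be sketched roughly like this (the SiteMonitor class, the 10-second timeout, and the HEAD probe are my own illustrative choices, not a full service):

```csharp
using System;
using System.Net;

public static class SiteMonitor
{
    // Map an HTTP status code to the display color described above.
    public static string StatusToColor(int statusCode)
    {
        return statusCode == 200 ? "green" : "red";
    }

    // Probe a single site and return the color to store for it.
    public static string CheckSite(string url)
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "HEAD";   // status only, no body needed
            request.Timeout = 10000;   // treat a slow site as down
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return StatusToColor((int)response.StatusCode);
            }
        }
        catch (WebException)
        {
            // 4xx/5xx responses, DNS failures and timeouts all land here
            return "red";
        }
    }
}
```

The service or scheduled task would loop over the configured URLs, call CheckSite for each, and write the resulting color into the table the status page reads from.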
If you have access to the websites you want to monitor, then I would have thought the easiest way is to put a status page on each of them, designed to be polled by your service. That way, if the website is up, you can get more detailed status information from it by reading the page content.
If you just want to check the HTTP status, then access any page on the website (preferably a small one!) and check the response status code.
Something like:

// prepare the request for the page we will be asking for
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(Url);
//if (AuthRequired())
//    request.Credentials = new NetworkCredential(Username, Password);

// execute the request
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
then you can read the response.StatusCode
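One caveat: GetResponse() throws a WebException for 4xx/5xx responses, so reading the status code of a failing site means catching the exception. A minimal sketch (the probe class and method names are mine):

```csharp
using System;
using System.Net;

public static class StatusProbe
{
    // Returns the HTTP status even when the server replies with an error code.
    public static HttpStatusCode GetStatus(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode;   // 2xx/3xx land here
            }
        }
        catch (WebException ex) when (ex.Response is HttpWebResponse error)
        {
            return error.StatusCode;          // 4xx/5xx land here
        }
    }
}
```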
I have a number of APIs which are not hosted by me, so I have no control over the APIs themselves. What I'm trying to achieve is to check whether the APIs are online or not. I have already tried several ways:
Sent an HTTP request to the API endpoint with the HEAD method
Sent a blank HttpWebRequest
HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
{
    if (response.StatusCode == HttpStatusCode.NotFound)
        return false;
    return true;
}
Ping the server
But somehow my results are not accurate enough: it shows offline, but when I manually try to invoke the API it seems fine. Has anybody got a solution?
What about using an API Health Check service?
Some links that can help you:
https://www.runscope.com/
https://www.getpostman.com/docs/postman/monitors/intro_monitors
https://blog.runscope.com/posts/build-a-health-check-api-for-services-underneath-your-api
https://nordicapis.com/monitor-the-status-of-apis-with-these-4-tools/
https://apigee.com/about/blog/technology/api-health-free-tool-monitoring-apis-public-beta-now
The first option should be good as long as the server can respond to a HEAD request. If it doesn't, there should be some safe GET endpoint which can be used instead.
It also makes sense to put a timeout on the health check so that if the API server is not alive, it won't freeze your app.
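A sketch of such a timeout-bounded HEAD check using HttpClient (the 5-second timeout is an arbitrary choice):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ApiHealthCheck
{
    // Probe with a HEAD request, bounded so a dead server can't hang the app.
    public static async Task<bool> IsAliveAsync(string url)
    {
        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) })
        {
            try
            {
                var request = new HttpRequestMessage(HttpMethod.Head, url);
                HttpResponseMessage response = await client.SendAsync(request);
                return response.IsSuccessStatusCode;    // 2xx only
            }
            catch (HttpRequestException) { return false; }  // DNS/connection errors
            catch (TaskCanceledException) { return false; } // timeout hit
        }
    }
}
```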
You can use Postman Monitors to achieve this. Postman offers 1000 free monitor runs on the free account and notifications via email on failures. You can set it up to run on a schedule by specifying the hour/day/week that the monitor should run at.
Simply create a postman collection: https://learning.getpostman.com/docs/postman/collections/creating_collections/
Add your HTTP Health check request in the collection.
Create a monitor on the collection: https://learning.getpostman.com/docs/postman/monitors/setting_up_monitor/
And set up the frequency that it should run with.
You can also manually trigger monitors via the Postman API: https://docs.api.getpostman.com
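For reference, a monitor can also be triggered from C#; this is a sketch against what I believe is the Postman API's run-monitor endpoint (the monitor UID and API key are placeholders; double-check the endpoint shape in the Postman API docs linked above):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class PostmanTrigger
{
    // Both values are placeholders -- substitute your own.
    const string MonitorUid = "your-monitor-uid";
    const string ApiKey = "your-postman-api-key";

    public static string MonitorRunUrl(string uid)
    {
        return "https://api.getpostman.com/monitors/" + uid + "/run";
    }

    public static async Task RunMonitorAsync()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("X-Api-Key", ApiKey);
            HttpResponseMessage response =
                await client.PostAsync(MonitorRunUrl(MonitorUid), null);
            Console.WriteLine(response.StatusCode);
        }
    }
}
```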
I have an application that generates a web request to Facebook Graph API to get a share count from an external page. I have been using this code for over a year without issue, and suddenly, the share count is not working when the request is made from .NET. However, if I make the request from a web browser, it works just fine. My code is as follows:
string fbLink = "https://graph.facebook.com/?id=" + externalLink + "&fields=og_object%7Bengagement%7D&access_token=<token_removed>";
WebClient client = new WebClient();
string fbString = client.DownloadString(fbLink);
This code still appears to be working, in that the request is made and FB responds with no errors. In fact, it responds with the correct page id and details. However, the share count is zero.
Here is where it gets a little bit weird. On my localhost development machine, the code works fine and returns the proper share count. However, if I run the code on my actual server (an AWS EC2 instance), the share count shows zero.
If I open Chrome and run the request from the browser, the share count displays as expected.
If I open Internet Explorer 11, and run the request from the browser, the counter shows zero. HOWEVER, if I log in to Facebook from IE11, and then run the request to FB Graph API, the response shows the correct page count.
This is very confusing to me, as it appears the reason the counter has stopped working, has to do with cookies, or maybe the browser being logged into FB. This should not be the case as I am using an APP token ID, and I wouldn't expect to need to be logged into FB in order to make a request to Graph API.
Does anybody have any ideas why my request/code in .NET worked just fine for a year and a half, and just stopped working? Or why the requests work fine on my localhost and not my live server?
After spending considerable time on this, I have fixed the issue. There is an FB authentication cookie that was being transmitted through a web browser query. The cookie name was "xs" and the value was a long string used as a session id for my specific login. If I create this cookie in my web request in C# code, I get the proper response with the correct number of shares.
WebClient client = new WebClient();
client.Headers.Add("Cookie", "xs=<removed>;");
string fbString = client.DownloadString(fbLink);
I have no idea why I have to do this, and only on my EC2 server. Nowhere in FB's documentation does it say you have to supply a valid logged-in authentication cookie in order to obtain correct share count results from a request to its Graph API, but there you have it. A workaround, at least.
We are using an HttpWebRequest in C# to get data from an internet resource in our Azure Web App. The problem is that Azure has a limitation on how long it keeps the connection alive (around 240 seconds). Due to the nature of our application, the response will sometimes take longer than 240 seconds. When this happens, the webpage will go white, and the "View Source" will show zero source code (which has made this issue difficult to debug).
Here's some sample code to illustrate:
webRequest = WebRequest.Create(PAGE_URL) as HttpWebRequest;
webRequest.Method = "POST";
webRequest.ContentType = "application/x-www-form-urlencoded";
webRequest.CookieContainer = cookies;
webRequest.Timeout = Timeout.Infinite;
webRequest.KeepAlive = true;

StreamWriter requestWriter2 = new StreamWriter(webRequest.GetRequestStream());
requestWriter2.Write(postString);
requestWriter2.Close();

WebResponse response = webRequest.GetResponse();
Stream stream = response.GetResponseStream();
Adding webRequest.Timeout and webRequest.KeepAlive did not solve the issue.
jbq on this thread mentioned he had a workaround of sending a "newline character every 5 seconds", but did not explain exactly how to accomplish this. He was answering a question about an Azure VM, but I think an Azure Web App would behave similarly with respect to what I believe are the load balancers responsible for the timeout.
The Question:
How can I send one HttpWebRequest, and then, while it is running, send another request with a blank line to keep the connection alive and prevent the Azure load balancer(?) from timing out the application? Would a new session variable need to be used? Perhaps an asynchronous method? Do I need to send the "pinging" request before the main request? If so, how would this look in implementation? Or is it something else entirely? Please provide some source code as an example :)
Note: you do not need to use an HttpWebRequest to replicate this issue. Attach a debugger from within Visual Studio to a live Azure Web App. Place a breakpoint within Visual Studio at any piece of code. When that breakpoint is hit, after roughly 4 minutes you'll see the page in your browser stop loading and go white with an empty source. So, it's not specifically related to HttpWebRequest, but that is an operation that would typically cause this sort of issue since some responses take longer than 4 minutes.
*EDIT: I think what I am looking for is an implementation of Asynchronous methods. I will update this post as I find a satisfactory implementation.
If you are making an HttpWebRequest to an Azure Website then you can use ServicePointManager.SetTcpKeepAlive in the client code that uses HttpWebRequest.
https://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.settcpkeepalive(v=vs.110).aspx
The 4 minute timeout that you are talking about is an IDLE timeout at the TCP layer, and setting this will ensure that your client (which is using HttpWebRequest) sends ACK packets over TCP so that the connection doesn't go idle.
If your web application is making an HttpWebRequest to some other service, you can still use this function, but that will only ensure the idle timeout is not hit when calling the remote service. The actual HTTP request to your Azure Web App may still hit the 4 minute timeout, and if the client of your Azure Web App is not using HttpWebRequest, then again the 4 minute idle timeout will bite you...
The best thing to do here is to change the code a bit to implement a job-model kind of pattern, wherein you make a server call which returns a JOBID. The client then queries the server with this JOBID in a polling fashion, and when the job completes on the server, the status of that JOBID is set to COMPLETED, at which point the client can retrieve the data. You can use WebJobs in Azure Web Apps to achieve something like this.
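A rough in-memory shape of that job pattern (the names and the in-memory store are illustrative; on Azure you would back the results with a WebJob and durable storage):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class JobStore
{
    static readonly ConcurrentDictionary<Guid, string> Results =
        new ConcurrentDictionary<Guid, string>();

    // "POST /start": kick off the long-running work, return immediately.
    public static Guid StartJob()
    {
        var jobId = Guid.NewGuid();
        Task.Run(() =>
        {
            var data = DoLongRunningWork();   // the > 4 minute operation
            Results[jobId] = data;            // marks the job COMPLETED
        });
        return jobId;
    }

    // "GET /status?id=...": client polls this well inside the idle timeout.
    public static bool TryGetResult(Guid jobId, out string result)
    {
        return Results.TryGetValue(jobId, out result);
    }

    // Stand-in for the real slow operation.
    static string DoLongRunningWork() { return "done"; }
}
```

Each poll is a short request, so no single connection ever sits idle long enough for the load balancer to cut it.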
Hope this helps...
Got a bit of an odd problem. Here goes:
I have two ASP.NET applications: A web app and a web service app.
Information arriving via the web service affects data in the database used by the web app.
One particular bit of data controls items in a drop-down menu; when the data is altered in the app it can call:
HttpContext.Current.Cache.Remove
but I now need to clear the cache in the web service as I can receive messages which update that information.
Can anyone recommend a way of doing this?
Cache invalidation can be hard. Off the top of my head I can think of three solutions of varying complexity which may or may not work for you.
First, you could write a web service for the web app that the web service app calls to invalidate the cache. This is probably the hardest.
Second, you could have the web service app write a "dirty" flag in the database that the web app could check before it renders the drop down menu. This is the route I would go.
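A sketch of that dirty-flag check (the version counter lives in memory here purely for illustration; in your case it would be a flag or version column in the shared database):

```csharp
using System;

public static class MenuCache
{
    // In a real system the version/dirty flag lives in the shared database;
    // it is an in-memory counter here purely to illustrate the pattern.
    static int _dbVersion = 0;      // bumped by the web service app
    static int _cachedVersion = -1; // version the web app last cached
    static string _cachedMenu;
    static int _loads = 0;

    public static int Loads { get { return _loads; } }

    // The web service app calls this when incoming data changes the menu.
    public static void MarkDirty() { _dbVersion++; }

    // The web app calls this before rendering the drop-down menu.
    public static string GetMenu()
    {
        if (_cachedVersion != _dbVersion)   // flag set: rebuild the cache
        {
            _cachedMenu = LoadMenuFromDatabase();
            _cachedVersion = _dbVersion;
        }
        return _cachedMenu;
    }

    // Stand-in for the real database query.
    static string LoadMenuFromDatabase() { _loads++; return "menu-v" + _dbVersion; }
}
```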
Third, you could simply stop caching that particular data.
You could have a web method whose sole purpose is to clear the cache.
var webRequest = HttpWebRequest.Create(clearCacheURL);
var webResponse = webRequest.GetResponse();
// receive the response and return it as function result
var sr = new System.IO.StreamReader(webResponse.GetResponseStream());
var result = sr.ReadToEnd();
Implement the cache with an expiry time.
Cache.Insert("DSN", connectionString, null,
DateTime.Now.AddMinutes(2), Cache.NoSlidingExpiration);
Cache.Insert Method
You can try SqlDependency. It will trigger an event when the table you have subscribed to changes.
https://www.codeproject.com/Articles/12335/Using-SqlDependency-for-data-change-events
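A minimal SqlDependency sketch (the connection string and table are placeholders, and note the query restrictions: two-part table names, explicit column lists, no SELECT *):

```csharp
using System;
using System.Data.SqlClient;

public static class MenuChangeListener
{
    const string ConnectionString = "your-connection-string"; // placeholder

    // SqlDependency queries must use two-part table names, list columns
    // explicitly, and must not use SELECT *.
    public const string MenuQuery = "SELECT Id, Name FROM dbo.MenuItems";

    public static void Listen()
    {
        SqlDependency.Start(ConnectionString);
        var conn = new SqlConnection(ConnectionString);
        var cmd = new SqlCommand(MenuQuery, conn);
        var dependency = new SqlDependency(cmd);
        dependency.OnChange += (sender, e) =>
        {
            // Fires once when the data changes: clear your cached menu here,
            // then call Listen() again to re-subscribe.
        };
        conn.Open();
        cmd.ExecuteReader().Dispose(); // executing the command registers the subscription
    }
}
```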
I am developing an application in which I display products in a grid. The grid has a column with a disable/enable icon, and on click of that icon I fire a request through AJAX to my page manageProduct.aspx to enable/disable that particular product.
In my AJAX request I pass the productID as a parameter, so the final AJAX query is:
http://example.com/manageProduct.aspx?id=234
Now, if someone (a professional hacker or web developer) gets this URL (which is easy to extract from my JavaScript files), he can write a script that runs in a loop and disables all my products.
So, I want to know whether there is any mechanism, technique or method by which, if someone tries to execute that page directly, it returns an error (a proper message like "You're not authorized"), but if the page is called from the intended page, such as where I display the product list, it executes properly.
Basically, I want to secure my AJAX requests so that no one can execute them directly.
In PHP:
In PHP, my colleague secures pages by checking the referrer of the page, as below:
$back_link = $_SERVER['HTTP_REFERER'];
if ($back_link == '')
{
    echo 'You are not authorized to execute this page';
}
else
{
    // coding
}
Please tell me how to do the same, or any other different but secure technique, in ASP.NET (C#). I am using jQuery in my app for making AJAX requests.
Thanks
Forget about using the referer: it is trivial to forge. There is no way to reliably tell whether a request is being made directly or in response to something else.
If you want to stop unauthorised people from having an effect on the system by requesting a URL, then you need something smarter than that to determine their authorisation level (probably a password system implemented with HTTP Basic Auth or cookies).
Whatever you do, don't rely on http headers like 'HTTP_REFERER', as they can be easily spoofed.
You need to check in your service that the user is logged in. Writing a good, secure login system isn't easy either, but that is what you need to do; or use the built-in "forms authentication".
Also, do not use sequential product IDs; use uniqueidentifiers. You can still have an integer product ID for display, but for all other uses, like the one you describe, you will want to use the product uniqueidentifier/GUID.
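Putting both suggestions together, the check at the top of manageProduct.aspx could delegate to a small helper like this (the names are mine; Page_Load would pass User.Identity.IsAuthenticated and the id query-string value, and write the returned message with a 403 status when it is non-null):

```csharp
using System;

public static class ManageProductSecurity
{
    // Returns an error message when the request must be refused, or null
    // when it is safe to proceed. 'isAuthenticated' would come from
    // User.Identity.IsAuthenticated inside manageProduct.aspx's Page_Load.
    public static string CheckRequest(bool isAuthenticated, string rawProductId,
                                      out Guid productId)
    {
        productId = Guid.Empty;
        if (!isAuthenticated)
            return "You are not authorized to execute this page";
        if (!Guid.TryParse(rawProductId, out productId))
            return "Invalid product id";  // sequential ints like "234" are rejected
        return null;
    }
}
```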