Sending an HTTP request in C# and catching network issues

I previously had a small VBScript that would test if a specific website was accessible by sending a GET request. The script itself was extremely simple and did everything I needed:
Function GETRequest(URL) 'Sends a GET http request to a specific URL
Dim objHttpRequest
Set objHttpRequest = CreateObject("MSXML2.XMLHTTP.3.0")
objHttpRequest.Open "GET", URL, False
On Error Resume Next 'Error checking in case access is denied
objHttpRequest.Send
GETRequest = objHttpRequest.Status
End Function
I now want to include this sort of functionality in an expanded C# application. However, I've been unable to get the same results my previous script provided.
Using code similar to what I've posted below sort of gets me a proper result, but it fails to run if my network connection has failed.
public static void GETRequest()
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://url");
request.Method = "GET";
HttpStatusCode status;
HttpWebResponse response;
try
{
response = (HttpWebResponse)request.GetResponse();
status = response.StatusCode;
Console.WriteLine((int)response.StatusCode);
Console.WriteLine(status);
}
catch (WebException e)
{
status = ((HttpWebResponse)e.Response).StatusCode;
Console.WriteLine(status);
}
}
But as I said, I need to know if the site is accessible, no matter the reason: the portal could be down, or the problem might reside on the side of the PC that's trying to access it. Either way, I don't care.
When I used MSXML2.XMLHTTP.3.0 in the script I was able to get values ranging from 12000 to 12156 if I was having network problems. I would like to have the same functionality in my C# app, so that I could at least write a minimum of information to a log and let the computer act accordingly. Any ideas?

A direct translation of your code would be something like this:
static void GetStatusCode(string url)
{
dynamic httpRequest = Activator.CreateInstance(Type.GetTypeFromProgID("MSXML2.XMLHTTP.3.0"));
httpRequest.Open("GET", url, false);
try { httpRequest.Send(); }
catch { }
finally { Console.WriteLine(httpRequest.Status); }
}
It's as small and simple as your VBScript script, and uses the same COM object to send the request.
This code happily gives me error codes like 12029 ERROR_WINHTTP_CANNOT_CONNECT or 12007 ERROR_WINHTTP_NAME_NOT_RESOLVED, etc.
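If you would rather stay in managed code instead of the COM object, a rough equivalent is to inspect WebException.Status when no HTTP response came back at all. This is only a sketch: it reports the WebExceptionStatus value (e.g. NameResolutionFailure, ConnectFailure, Timeout) rather than the exact WinHTTP 12xxx codes.

public static void GetRequest(string url)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";
    try
    {
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine((int)response.StatusCode); // e.g. 200
        }
    }
    catch (WebException e)
    {
        HttpWebResponse errorResponse = e.Response as HttpWebResponse;
        if (errorResponse != null)
        {
            // The server answered, but with an error status (4xx/5xx).
            Console.WriteLine((int)errorResponse.StatusCode);
        }
        else
        {
            // No HTTP response at all: DNS failure, connect failure, timeout, ...
            Console.WriteLine(e.Status);
        }
    }
}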

If the code is failing only when you don't have an available network connection, you can call GetIsNetworkAvailable() before executing your code. This method returns a boolean indicating whether a network connection is available. If it returns false, you can return early or notify the user; otherwise, continue.
System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable()
Using the code you provided above:
public static void GETRequest()
{
if (!System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable())
return; //or alert the user there is no connection
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://url");
request.Method = "GET";
HttpStatusCode status;
HttpWebResponse response;
try
{
response = (HttpWebResponse)request.GetResponse();
status = response.StatusCode;
Console.WriteLine((int)response.StatusCode);
Console.WriteLine(status);
}
catch (WebException e)
{
status = ((HttpWebResponse)e.Response).StatusCode;
Console.WriteLine(status);
}
}

This should work for you; I've used it many times before and cut it down a bit for your needs:
private static string GetStatusCode(string url)
{
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
req.Method = WebRequestMethods.Http.Get;
req.ProtocolVersion = HttpVersion.Version11;
req.UserAgent = "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)";
try
{
HttpWebResponse response = (HttpWebResponse)req.GetResponse();
StringBuilder sb = new StringBuilder();
foreach (string header in response.Headers)
{
sb.AppendLine(string.Format("{0}: {1}", header, response.GetResponseHeader(header)));
}
return string.Format("Response Status Code: {0}\nServer:{1}\nProtocol: {2}\nRequest Method: {3}\n\n***Headers***\n\n{4}", response.StatusCode,response.Server, response.ProtocolVersion, response.Method, sb);
}
catch (Exception e)
{
return string.Format("Error: {0}", e.ToString());
}
}
Feel free to ignore the section that gets the headers.

Related

How do I stay on the page using HttpWebRequest?

So I am trying to simulate a person being on my website; in this case it's my console application.
I can connect to it using an HttpWebRequest and create a WebRequest, but it doesn't show as a person being on the website in my dashboard. However, when I manually visit my website through my web browser, my dashboard system (WordPress) says that someone is online on the website.
So my question is: how do I accomplish the same thing? Would I have to create a Socket connection? Or is this possible using KeepAlive? I think the issue is that it's not on the page for long enough; it connects and gets the request, but it doesn't actually establish a connection, if that makes any sense.
That's just my theory; please correct me if I am wrong.
public static bool isServerOnline()
{
Boolean ret = false;
try
{
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create("https://arcticinnovative.com");
req.CookieContainer = cookieContainer; // <= HERE
req.Method = "HEAD";
req.KeepAlive = false;
HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
if (resp.StatusCode == HttpStatusCode.OK)
{
// HTTP = 200 - Internet connection available, server online
ret = true;
}
resp.Close();
return ret;
}
catch (WebException we)
{
// Exception - connection not available
Debug.Print("InternetUtils - isServerOnline - " + we.Status);
return false;
}
}
According to the documentation found here, you can set KeepAlive to true in order to maintain a persistent connection.
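As a minimal sketch of that suggestion, the same HEAD request with KeepAlive left enabled would look something like this. Note that a persistent connection by itself may not be enough to register as an online visitor if the dashboard relies on cookies or client-side scripts, so treat this as an assumption to test rather than a guaranteed fix.

HttpWebRequest req = (HttpWebRequest)WebRequest.Create("https://arcticinnovative.com");
req.Method = "HEAD";
req.KeepAlive = true; // maintain a persistent connection instead of closing it after the response
using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
{
    Console.WriteLine(resp.StatusCode);
}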

c# HttpWebRequest and HttpWebResponse generic stuff

I am playing around with an app that uses HttpWebRequest to talk to a web server.
I followed standard instructions I found on the web to build my request function, which I tried to make as generic as possible (one function regardless of the HTTP method: PUT, POST, DELETE, REPORT, ...).
When I submit a "REPORT" request, I get two access logs on my server:
1) I get response 401 after following line is launched in debugger
reqStream.Write(Encoding.UTF8.GetBytes(body), 0, body.Length);
2) I get response 207 (Multi-Status, which is what I expect) after passing the line calling Request.GetResponse();
Actually, it seems to be the Request.GetRequestStream() line that is querying the server the first time, but the request is only committed once passing the reqStream.Write(...) line...
Same for PUT and DELETE, the Request.GetRequestStream() again generates a 401 access log on my server whereas the Request.GetResponse(); returns code 204.
I don't understand why for a unique request I have two server access logs, especially one that seems to be doing nothing as it always returns code 401... Could anybody explain what is going on? Is it a flaw in my code or a bad design due to my attempt to get a generic code for multiple methods?
Here is my full code:
public static HttpWebResponse getHttpWebRequest(string url, string usrname, string pwd, string method, string contentType,
string[] headers, string body) {
// Variables.
HttpWebRequest Request;
HttpWebResponse Response;
//
string strSrcURI = url.Trim();
string strBody = body.Trim();
try {
// Create the HttpWebRequest object.
Request = (HttpWebRequest)HttpWebRequest.Create(strSrcURI);
// Add the network credentials to the request.
Request.Credentials = new NetworkCredential(usrname.Trim(), pwd);
// Specify the method.
Request.Method = method.Trim();
// request headers
foreach (string s in headers) {
Request.Headers.Add(s);
}
// Set the content type header.
Request.ContentType = contentType.Trim();
// set the body of the request...
Request.ContentLength = body.Length;
using (Stream reqStream = Request.GetRequestStream()) {
// Write the string to the destination as a text file.
reqStream.Write(Encoding.UTF8.GetBytes(body), 0, body.Length);
reqStream.Close();
}
// Send the method request and get the response from the server.
Response = (HttpWebResponse)Request.GetResponse();
// return the response to be handled by calling method...
return Response;
}
catch (Exception e) {
throw new Exception("Web API error: " + e.Message, e);
}
}
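A note on the two log entries: when credentials are supplied via NetworkCredential, HttpWebRequest normally sends the first request without an Authorization header, receives the 401 challenge, and then retries with credentials, so the 207/204 you see comes from the second attempt; GetRequestStream() is simply where the first request goes on the wire. If (and only if) the server uses Basic authentication, a common way to avoid the unauthenticated round trip is to add the header yourself before writing the body. This is a sketch, not part of the original code, reusing the usrname, pwd and Request variables from the method above:

// Sketch, assuming Basic authentication: send the credentials preemptively so the
// server is not hit with an unauthenticated request first.
string token = Convert.ToBase64String(Encoding.UTF8.GetBytes(usrname.Trim() + ":" + pwd));
Request.Headers.Add("Authorization", "Basic " + token);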

HttpWebRequest.GetResponse() keeps getting timed out

I wrote a simple C# function to retrieve trade history from MtGox with the following API call:
https://data.mtgox.com/api/1/BTCUSD/trades?since=<trade_id>
documented here: https://en.bitcoin.it/wiki/MtGox/API/HTTP/v1#Multi_currency_trades
Here's the function:
string GetTradesOnline(Int64 tid)
{
Thread.Sleep(30000);
// communicate
string url = "https://data.mtgox.com/api/1/BTCUSD/trades?since=" + tid.ToString();
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
StreamReader reader = new StreamReader(response.GetResponseStream());
string json = reader.ReadToEnd();
reader.Close();
reader.Dispose();
response.Close();
return json;
}
I'm starting at tid=0 (trade id) to get the data from the very beginning. For each request, I receive a response containing 1000 trade details. I always send the trade id from the previous response with the next request. It works fine for exactly 4 requests and responses, but after that, the following line throws a System.Net.WebException saying that "The operation has timed out":
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Here are the facts:
catching the exception and retrying keeps causing the same exception
the default HttpWebRequest .Timeout and .ReadWriteTimeout are already high enough (over a minute)
changing HttpWebRequest.KeepAlive to false didn't solve anything either
it seems to always work in the browser even while the function is failing
it has no problems retrieving the response from https://www.google.com
the amount of successful responses before the exceptions varies from day to day (but the browser always works)
starting at the trade id that failed last time causes the exception immediately
calling this function from the main thread instead still caused the exception
running it on a different machine didn't work
running it from a different IP didn't work
increasing Thread.Sleep between requests does not help
Any ideas of what could be wrong?
I had the very same issue.
For me the fix was as simple as wrapping the HttpWebResponse code in a using block.
using (HttpWebResponse response = (HttpWebResponse) request.GetResponse())
{
// Do your processings here....
}
Details: This issue usually happens when several requests are made to the same host and the WebResponse is not disposed properly. The using block disposes the WebResponse object correctly, which solves the issue.
There are two kinds of timeouts: client timeout and server timeout. Have you tried doing something like this:
request.Timeout = Timeout.Infinite;
request.KeepAlive = true;
Try something like this...
I just had similar troubles calling a REST service on a Linux server through SSL. After trying many different configuration scenarios, I found out that I had to send a UserAgent in the HTTP header.
Here is my final method for calling the REST API.
private static string RunWebRequest(string url, string json)
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
// Header
request.ContentType = "application/json";
request.Method = "POST";
request.AllowAutoRedirect = false;
request.KeepAlive = false;
request.Timeout = 30000;
request.ReadWriteTimeout = 30000;
request.UserAgent = "test.net";
request.Accept = "application/json";
request.ProtocolVersion = HttpVersion.Version11;
request.Headers.Add("Accept-Language","de_DE");
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
byte[] bytes = Encoding.UTF8.GetBytes(json);
request.ContentLength = bytes.Length;
using (var writer = request.GetRequestStream())
{
writer.Write(bytes, 0, bytes.Length);
writer.Flush();
writer.Close();
}
var httpResponse = (HttpWebResponse)request.GetResponse();
using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
{
var jsonReturn = streamReader.ReadToEnd();
return jsonReturn;
}
}
This is not a solution, but just an alternative:
These days I almost only use WebClient instead of HttpWebRequest, especially WebClient.UploadString for POST and PUT and WebClient.DownloadString. These simply take and return strings. This way I don't have to deal with stream objects, except when I get a WebException. I can also set the content type with WebClient.Headers["Content-type"] if necessary. The using statement also makes life easier by calling Dispose for me.
Rarely, for performance, I set System.Net.ServicePointManager.DefaultConnectionLimit high and instead use HttpClient with its async methods for simultaneous calls.
This is how I would do it now:
string GetTradesOnline(Int64 tid)
{
using (var wc = new WebClient())
{
return wc.DownloadString("https://data.mtgox.com/api/1/BTCUSD/trades?since=" + tid.ToString());
}
}
Two more POST examples:
// POST
string SubmitData(string data)
{
string response;
using (var wc = new WebClient())
{
wc.Headers["Content-type"] = "text/plain";
response = wc.UploadString("https://data.mtgox.com/api/1/BTCUSD/trades", "POST", data);
}
return response;
}
// POST: easily url encode multiple parameters
string SubmitForm(string project, string subject, string sender, string message)
{
// url encoded query
NameValueCollection query = HttpUtility.ParseQueryString(string.Empty);
query.Add("project", project);
query.Add("subject", subject);
// url encoded data
NameValueCollection data = HttpUtility.ParseQueryString(string.Empty);
data.Add("sender", sender);
data.Add("message", message);
string response;
using (var wc = new WebClient())
{
wc.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
response = wc.UploadString( "https://data.mtgox.com/api/1/BTCUSD/trades?"+query.ToString()
, WebRequestMethods.Http.Post
, data.ToString()
);
}
return response;
}
Error handling
try
{
Console.WriteLine(GetTradesOnline(0));
string data = File.ReadAllText(@"C:\mydata.txt");
Console.WriteLine(SubmitData(data));
Console.WriteLine(SubmitForm("The Big Project", "Progress", "John Smith", "almost done"));
}
catch (WebException ex)
{
string msg;
if (ex.Response != null)
{
// read response HTTP body
using (var sr = new StreamReader(ex.Response.GetResponseStream())) msg = sr.ReadToEnd();
}
else
{
msg = ex.Message;
}
Log(msg);
}
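As mentioned earlier in this answer, another option is raising ServicePointManager.DefaultConnectionLimit and switching to HttpClient with its async methods for simultaneous calls. Here is a minimal sketch of that pattern; the connection limit, class name, and set of trade ids are illustrative, not taken from the original answer.

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class TradeFetcher
{
    // HttpClient is intended to be shared, not created per request.
    static readonly HttpClient client = new HttpClient();

    static async Task<string[]> GetTradesAsync(IEnumerable<long> tradeIds)
    {
        ServicePointManager.DefaultConnectionLimit = 20; // allow more parallel connections

        // Start all requests, then wait for them to complete concurrently.
        IEnumerable<Task<string>> tasks = tradeIds.Select(tid =>
            client.GetStringAsync("https://data.mtgox.com/api/1/BTCUSD/trades?since=" + tid));
        return await Task.WhenAll(tasks);
    }
}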
For what it's worth, I was experiencing the same issues with timeouts every time I used it, even though calls went through to the server I was calling. The problem in my case was that I had Expect set to application/json, when that wasn't what the server was returning.
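In case it helps anyone hitting the same thing: the media type you want back belongs in the Accept header, while Expect drives a separate mechanism (such as Expect: 100-continue). A small sketch of the relevant HttpWebRequest properties; the URL is illustrative:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/api"); // illustrative URL
request.Accept = "application/json"; // what we are willing to receive
// Leave request.Expect unset unless you really need Expect/100-continue semantics;
// ServicePointManager.Expect100Continue = false suppresses the automatic header on POSTs.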

C# WebRequest Check if page requires HTTP Authentication

Does anyone know how to check if a webpage is asking for HTTP Authentication via C# using the WebRequest class? I'm not asking how to post Credentials to the page, just how to check if the page is asking for Authentication.
Current Snippet to get HTML:
WebRequest wrq = WebRequest.Create(address);
wrs = wrq.GetResponse();
Uri uri = wrs.ResponseUri;
StreamReader strdr = new StreamReader(wrs.GetResponseStream());
string html = strdr.ReadToEnd();
wrs.Close();
strdr.Close();
return html;
PHP Server side source:
<?php
if (!isset($_SERVER['PHP_AUTH_USER'])) {
header('WWW-Authenticate: Basic realm="Secure Sign-in"');
header('HTTP/1.0 401 Unauthorized');
echo 'Text to send if user hits Cancel button';
exit;
} else {
echo "<p>Hello {$_SERVER['PHP_AUTH_USER']}.</p>";
echo "<p>You entered {$_SERVER['PHP_AUTH_PW']} as your password.</p>";
}
?>
WebRequest.GetResponse returns an object of type HttpWebResponse. Just cast it and you can retrieve StatusCode.
However, .NET will give you an exception if it receives a response with a 4xx or 5xx status (thanks for your feedback).
There is a little workaround, check it out:
HttpWebRequest wrq = (HttpWebRequest)WebRequest.Create(@"http://webstrand.comoj.com/locked/safe.php");
HttpWebResponse wrs = null;
try
{
wrs = (HttpWebResponse)wrq.GetResponse();
}
catch (System.Net.WebException protocolError)
{
if (((HttpWebResponse)protocolError.Response).StatusCode == HttpStatusCode.Unauthorized)
{
//do something
}
}
catch (System.Exception generalError)
{
//run to the hills
}
if (wrs != null && wrs.StatusCode == HttpStatusCode.OK) // wrs stays null if GetResponse threw
{
Uri uri = wrs.ResponseUri;
StreamReader strdr = new StreamReader(wrs.GetResponseStream());
string html = strdr.ReadToEnd();
wrs.Close();
strdr.Close();
}
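If you want to confirm that the 401 really is an authentication challenge (the PHP above sends a WWW-Authenticate: Basic header), here is a hedged sketch that inspects that header on the error response:

// Sketch: detect whether the server is asking for HTTP authentication by looking
// for the WWW-Authenticate header on a 401 response.
bool RequiresHttpAuthentication(string address)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(address);
    try
    {
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            return false; // got content without being challenged
        }
    }
    catch (WebException ex)
    {
        HttpWebResponse response = ex.Response as HttpWebResponse;
        return response != null
            && response.StatusCode == HttpStatusCode.Unauthorized
            && response.Headers["WWW-Authenticate"] != null; // e.g. Basic realm="Secure Sign-in"
    }
}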
Hope this helps.
Regards
Might want to try
WebClient wc = new WebClient();
CredentialCache credCache = new CredentialCache();
If you can work with WebClient instead of WebRequest, you should; it's a bit higher level and easier to handle headers, etc.
Also, might want to check this thread:
System.Net.WebClient fails weirdly

C# How can I check if a URL exists/is valid?

I am making a simple program in Visual C# 2005 that looks up a stock symbol on Yahoo! Finance, downloads the historical data, and then plots the price history for the specified ticker symbol.
I know the exact URL that I need to acquire the data, and if the user inputs an existing ticker symbol (or at least one with data on Yahoo! Finance) it works perfectly fine. However, I have a run-time error if the user makes up a ticker symbol, as the program tries to pull data from a non-existent web page.
I am using the WebClient class, and using the DownloadString function. I looked through all the other member functions of the WebClient class, but didn't see anything I could use to test a URL.
How can I do this?
Here is another implementation of this solution:
using System.Net;
/// <summary>
/// Checks whether the remote file exists.
/// </summary>
/// <param name="url">The URL of the remote file.</param>
/// <returns>True if the file exists, false if it does not.</returns>
private bool RemoteFileExists(string url)
{
try
{
//Creating the HttpWebRequest
HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
//Setting the Request method HEAD, you can also use GET too.
request.Method = "HEAD";
//Getting the Web Response.
HttpWebResponse response = request.GetResponse() as HttpWebResponse;
//Returns TRUE if the Status code == 200
response.Close();
return (response.StatusCode == HttpStatusCode.OK);
}
catch
{
//Any exception will return false.
return false;
}
}
From: http://www.dotnetthoughts.net/2009/10/14/how-to-check-remote-file-exists-using-c/
You could issue a "HEAD" request rather than a "GET"?
So to test a URL without the cost of downloading the content:
// using MyClient from linked post
using(var client = new MyClient()) {
client.HeadOnly = true;
// fine, no content downloaded
string s1 = client.DownloadString("http://google.com");
// throws 404
string s2 = client.DownloadString("http://google.com/silly");
}
You would try/catch around the DownloadString to check for errors; no error? It exists...
With C# 2.0 (VS2005):
private bool headOnly;
public bool HeadOnly {
get {return headOnly;}
set {headOnly = value;}
}
and
using(WebClient client = new MyClient())
{
// code as before
}
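For reference, the MyClient mentioned above ("using MyClient from linked post") is a small WebClient subclass that switches GET to HEAD when HeadOnly is set. A sketch of what it looks like, reconstructed here rather than quoted from the linked post (the auto-property needs C# 3.0 or later; the C# 2.0 property shown above is the equivalent):

// Sketch: a WebClient that issues HEAD requests when HeadOnly is true,
// so no response body is downloaded.
class MyClient : WebClient
{
    public bool HeadOnly { get; set; }

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest req = base.GetWebRequest(address);
        if (HeadOnly && req.Method == "GET")
        {
            req.Method = "HEAD";
        }
        return req;
    }
}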
These solutions are pretty good, but they are forgetting that there may be other status codes than 200 OK. This is a solution that I've used in production environments for status monitoring and such.
If there is a URL redirect or some other condition on the target page, the return will be true using this method. Also, for error status codes GetResponse() will throw an exception, so you will not get a StatusCode from it directly. You need to trap the exception and check for a ProtocolError.
Any 400 or 500 status code will return false. All others return true.
This code is easily modified to suit your needs for specific status codes.
/// <summary>
/// This method will check a url to see that it does not return server or protocol errors
/// </summary>
/// <param name="url">The path to check</param>
/// <returns></returns>
public bool UrlIsValid(string url)
{
try
{
HttpWebRequest request = HttpWebRequest.Create(url) as HttpWebRequest;
request.Timeout = 5000; //set the timeout to 5 seconds to keep the user from waiting too long for the page to load
request.Method = "HEAD"; //Get only the header information -- no need to download any content
using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
{
int statusCode = (int)response.StatusCode;
if (statusCode >= 100 && statusCode < 400) //Good requests
{
return true;
}
else if (statusCode >= 500 && statusCode <= 510) //Server Errors
{
//log.Warn(String.Format("The remote server has thrown an internal error. Url is not valid: {0}", url));
Debug.WriteLine(String.Format("The remote server has thrown an internal error. Url is not valid: {0}", url));
return false;
}
}
}
catch (WebException ex)
{
if (ex.Status == WebExceptionStatus.ProtocolError) //400 errors
{
return false;
}
else
{
log.Warn(String.Format("Unhandled status [{0}] returned for url: {1}", ex.Status, url), ex);
}
}
catch (Exception ex)
{
log.Error(String.Format("Could not test url {0}.", url), ex);
}
return false;
}
If I understand your question correctly, you could use a small method like this to give you the results of your URL test:
WebRequest webRequest = WebRequest.Create(url);
WebResponse webResponse;
try
{
webResponse = webRequest.GetResponse();
}
catch //If exception thrown then couldn't get response from address
{
return 0;
}
return 1;
You could wrap the above code in a method and use it to perform validation. I hope this answers the question you were asking.
Try this (Make sure you use System.Net):
public bool checkWebsite(string URL) {
try {
WebClient wc = new WebClient();
string HTMLSource = wc.DownloadString(URL);
return true;
}
catch (Exception) {
return false;
}
}
When the checkWebsite() function gets called, it tries to get the source code of
the URL passed into it. If it gets the source code, it returns true. If not,
it returns false.
Code Example:
//The checkWebsite command will return true:
bool websiteExists = this.checkWebsite("https://www.google.com");
//The checkWebsite command will return false:
bool websiteExists = this.checkWebsite("https://www.thisisnotarealwebsite.com/fakepage.html");
I have always found that exceptions are much slower to handle.
Perhaps a less intensive way would yield a better, faster result?
public bool IsValidUri(Uri uri)
{
using (HttpClient Client = new HttpClient())
{
HttpResponseMessage result = Client.GetAsync(uri).Result;
HttpStatusCode StatusCode = result.StatusCode;
switch (StatusCode)
{
case HttpStatusCode.Accepted:
return true;
case HttpStatusCode.OK:
return true;
default:
return false;
}
}
}
Then just use:
IsValidUri(new Uri("http://www.google.com/censorship_algorithm"));
A lot of the answers are older than HttpClient (introduced with .NET 4.5) or are without async/await functionality, so I decided to post my own solution:
private static async Task<bool> DoesUrlExists(String url)
{
try
{
using (HttpClient client = new HttpClient())
{
//Only do a HEAD request to avoid downloading the full file
var response = await client.SendAsync(new HttpRequestMessage(HttpMethod.Head, url));
if (response.IsSuccessStatusCode) {
//URL is available if we have a success status code
return true;
}
return false;
}
} catch {
return false;
}
}
I use HttpClient.SendAsync with HttpMethod.Head to make only a HEAD request and not download the whole file. As David and Marc already say, there is not only HTTP 200 for OK, so I use IsSuccessStatusCode to allow all success status codes.
WebRequest request = WebRequest.Create("http://www.google.com");
try
{
request.GetResponse();
}
catch //If exception thrown then couldn't get response from address
{
MessageBox.Show("The URL is incorrect");
}
This solution seems easy to follow:
public static bool isValidURL(string url) {
WebRequest webRequest = WebRequest.Create(url);
WebResponse webResponse;
try
{
webResponse = webRequest.GetResponse();
}
catch //If exception thrown then couldn't get response from address
{
return false ;
}
return true ;
}
Here is another option (note that it only checks that the host name resolves, not that the page exists):
public static bool UrlIsValid(string url)
{
bool br = false;
try {
// Dns.Resolve is obsolete and expects a host name, not a full URL;
// resolve the host part of the URL with Dns.GetHostEntry instead.
IPHostEntry ipHost = Dns.GetHostEntry(new Uri(url).Host);
br = true;
}
catch (SocketException) {
br = false;
}
catch (UriFormatException) {
br = false; // malformed URL string
}
return br;
}
A lot of other answers are using WebRequest which is now obsolete.
Here is a method that has minimal code and uses currently up-to-date classes and methods.
I have also tested the other most up-voted functions, which can produce false positives.
I tested with these URLs, which point to the Visual Studio Community installer, found on this page.
//Valid URL
https://aka.ms/vs/17/release/vs_community.exe
//Invalid URL, redirects. Produces false positive on other methods.
https://aka.ms/vs/14/release/vs_community.exe
using System.Net;
using System.Net.Http;
//HttpClient is not meant to be created and disposed frequently.
//Declare it statically in the class to be reused.
static HttpClient client = new HttpClient();
/// <summary>
/// Checks if a remote file at the <paramref name="url"/> exists, and if access is not restricted.
/// </summary>
/// <param name="url">URL to a remote file.</param>
/// <returns>True if the file at the <paramref name="url"/> is able to be downloaded, false if the file does not exist, or if the file is restricted.</returns>
public static bool IsRemoteFileAvailable(string url)
{
//Checking if URI is well formed is optional
Uri uri = new Uri(url);
if (!uri.IsWellFormedOriginalString())
return false;
try
{
using (HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Head, uri))
using (HttpResponseMessage response = client.Send(request))
{
return response.IsSuccessStatusCode && response.Content.Headers.ContentLength > 0;
}
}
catch
{
return false;
}
}
Just note that this will not work with .NET Framework, as HttpClient.Send does not exist.
To get it working on .NET Framework you will need to change client.Send(request) to client.SendAsync(request).Result.
Web servers respond with an HTTP status code indicating the outcome of the request, e.g. 200 (sometimes 202) means success, 404 means not found, etc. (see here). Assuming the server address part of the URL is correct and you are not getting a socket timeout, the exception is most likely telling you the HTTP status code was other than 200. I would suggest checking the class of the exception and seeing if the exception carries the HTTP status code.
IIRC - The call in question throws a WebException or a descendant. Check the class name to see which one and wrap the call in a try block to trap the condition.
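As a concrete illustration of that advice, here is a sketch using WebClient (which is what the question uses); the url variable stands in for the Yahoo! Finance query from the question:

using (WebClient client = new WebClient())
{
    try
    {
        string csv = client.DownloadString(url); // url: the Yahoo! Finance history query
        // ... plot the data ...
    }
    catch (WebException ex)
    {
        HttpWebResponse response = ex.Response as HttpWebResponse;
        if (response != null)
        {
            Console.WriteLine((int)response.StatusCode); // e.g. 404 for an unknown ticker symbol
        }
        else
        {
            Console.WriteLine(ex.Status); // no HTTP response: DNS, connect, timeout, ...
        }
    }
}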
I have a simpler way to determine whether a URL is well-formed (note that this only validates the syntax; it does not check that the resource exists):
if (Uri.IsWellFormedUriString(uriString, UriKind.RelativeOrAbsolute))
{
//...
}
Following on from the examples already given, I'd say it's best practice to also wrap the response in a using block, like this:
public bool IsValidUrl(string url)
{
try
{
var request = WebRequest.Create(url);
request.Timeout = 5000;
request.Method = "HEAD";
using (var response = (HttpWebResponse)request.GetResponse())
{
response.Close();
return response.StatusCode == HttpStatusCode.OK;
}
}
catch (Exception exception)
{
return false;
}
}
