I am using this C# code in LINQPad 6 (Free Edition) to download data from a webpage:
void Main()
{
    string a = "https://www1.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp";
    WebClient w = new WebClient();
    w.DownloadStringAsync(new Uri(a));
    w.DownloadStringCompleted += (s, e) => {
        string t = (string)e.Result;
        Console.WriteLine(t);
    };
}
I always get a WebException: "An error occurred while sending the request. The response ended prematurely."
Can anyone please point out my mistake? Thank you.
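Two things stand out in the snippet above, independent of whatever the server is doing: the DownloadStringCompleted handler is attached after DownloadStringAsync has already been started, and Main returns before the download can finish. Below is a minimal reworked sketch; the User-Agent header is an assumption (nseindia.com is widely reported to reset requests that lack browser-like headers), and you may need to add System.Net and System.Threading to the query's namespace imports via Query Properties:
void Main()
{
    string a = "https://www1.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp";
    using (var w = new WebClient())
    using (var done = new ManualResetEvent(false))
    {
        // Assumption: the server may reset requests without browser-like headers.
        w.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";
        // Subscribe BEFORE starting the async download...
        w.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error != null) Console.WriteLine(e.Error.Message);
            else Console.WriteLine(e.Result);
            done.Set();
        };
        w.DownloadStringAsync(new Uri(a));
        done.WaitOne(); // ...and keep Main alive until the callback has run
    }
}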
When I compile for Windows Phone 8 I keep getting this error.
I want to output some XML from my API URL.
I get well-formed XML when I open the URL in a browser, but I keep getting this error when I compile in Visual Studio 2013 Premium.
private void WebQuestion()
{
    System.Net.WebClient wc = new WebClient();
    // Add an event handler for when the web client has received a response
    wc.OpenReadCompleted += wc_OpenReadCompleted;
    // Invoke the web service asynchronously and let the handler process the answer
    string URL = "https://api.steampowered.com/IDOTA2Match_570/GetMatchDetails/V001/?match_id=885221892&key=D38259AC7F57D17B10F73A76C1873DD2&format=XML";
    wc.OpenReadAsync(new Uri(URL));
}

void wc_OpenReadCompleted(object sender, System.Net.OpenReadCompletedEventArgs e)
{
    if (e.Error != null)
    {
        return;
    }
    try
    {
        // process e.Result here
    }
    catch (Exception)
    {
        // handle read/parse failures
    }
}
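For what it's worth, here is a minimal sketch of what the completed handler could do with the stream. The use of XDocument is my assumption based on the format=XML parameter in the URL, not something the question confirms; note also that returning silently on e.Error hides the very error being asked about, so the sketch logs it first:
void wc_OpenReadCompleted(object sender, System.Net.OpenReadCompletedEventArgs e)
{
    if (e.Error != null)
    {
        // e.Error carries the WebException; inspect it instead of swallowing it
        System.Diagnostics.Debug.WriteLine(e.Error.Message);
        return;
    }
    try
    {
        // Parse the response stream as XML (assumes the API honored format=XML)
        var doc = System.Xml.Linq.XDocument.Load(e.Result);
        System.Diagnostics.Debug.WriteLine(doc.ToString());
    }
    catch (System.Xml.XmlException ex)
    {
        System.Diagnostics.Debug.WriteLine("Response was not XML: " + ex.Message);
    }
}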
Hi, I was making a crawler for a site. After about 3 hours of crawling, my app stopped on a WebException. Below is my code in C#. client is a predefined WebClient object that is disposed every time gameDoc has been processed. gameDoc is an HtmlDocument object (from HtmlAgilityPack).
while (retrygamedoc)
{
    try
    {
        gameDoc.LoadHtml(client.DownloadString(url)); // this line caused the exception
        retrygamedoc = false;
    }
    catch
    {
        client.Dispose();
        client = new WebClient();
        retrygamedoc = true;
        Thread.Sleep(500);
    }
}
I tried the code below (to keep the WebClient fresh), taken from this answer:
while (retrygamedoc)
{
    try
    {
        using (WebClient client2 = new WebClient())
        {
            gameDoc.LoadHtml(client2.DownloadString(url)); // this line causes the exception
            retrygamedoc = false;
        }
    }
    catch
    {
        retrygamedoc = true;
        Thread.Sleep(500);
    }
}
but the result is still the same. Then I used HttpWebRequest with a StreamReader, and the result stays the same! Below is my code using a StreamReader.
while (retrygamedoc)
{
    try
    {
        // using HttpWebRequest directly to check the result
        HttpWebRequest webreq = (HttpWebRequest)WebRequest.Create(url);
        string responsestring = string.Empty;
        using (HttpWebResponse response = (HttpWebResponse)webreq.GetResponse()) // this causes the exception
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            responsestring = reader.ReadToEnd();
        }
        gameDoc.LoadHtml(responsestring); // load what was just read instead of downloading again
        retrygamedoc = false;
    }
    catch
    {
        retrygamedoc = true;
        Thread.Sleep(500);
    }
}
What should I do and check? I am so confused because I am able to crawl some pages on the same site, and then, after about 1,000 results, it causes the exception. The message from the exception is only "The request was aborted: The connection was closed unexpectedly." and the status is ConnectionClosed.
P.S. The app is a desktop Windows Forms app.
Update:
Now I am skipping those values and setting them to null so that the crawling can go on. But if the data is really needed, I still have to fix the crawling results manually, which is tiring because the results contain thousands of records. Please help me.
Example:
Say you have downloaded about 1,300 records from the website, and then the application stops with "The request was aborted: The connection was closed unexpectedly." while your internet connection is still up and running at a good speed.
ConnectionClosed may indicate (and probably does) that the server you're downloading from is closing the connection. Perhaps it is noticing a large number of requests from your client and is denying you additional service.
Since you can't control server-side shenanigans, I'd recommend you add some sort of logic to retry the download a bit later.
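A sketch of one such retry policy with exponential backoff; the attempt count and delays are arbitrary assumptions to illustrate the idea (requires System, System.Net, and System.Threading):
// Hypothetical helper: retry a download, backing off after each failure.
static string DownloadWithRetry(string url, int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (var client = new WebClient())
            {
                return client.DownloadString(url);
            }
        }
        catch (WebException)
        {
            if (attempt >= maxAttempts)
                throw; // give up after maxAttempts failures
            // Back off 1s, 2s, 4s... to give the server room to recover.
            Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}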
I got this error because the server returned a 404.
Sorry if the title is not clear or correct; I didn't know what title to put. Please correct it if it is wrong.
I have this code to download images from an IP camera, and it can download the images. The problem is: how can I run the image-download process at the same time for all cameras when I have two or more cameras?
private void GetImage()
{
    string IP1 = "example.IPcam1.com:81/snapshot.cgi";
    string IP2 = "example.IPcam2.com:81/snapshot.cgi";
    .
    .
    .
    string IPn = "example.IPcamn.com:81/snapshot.cgi";
    for (int i = 0; i < 10; i++)
    {
        string ImagePath = Server.MapPath("~\\Videos\\liveRecording2\\") + string.Format("{0}", i, i + 1) + ".jpeg";
        string sourceURL = ip;
        WebRequest req = (WebRequest)WebRequest.Create(sourceURL);
        req.Credentials = new NetworkCredential("user", "password");
        WebResponse resp = req.GetResponse();
        Stream stream = resp.GetResponseStream();
        Bitmap bmp = (Bitmap)Bitmap.FromStream(stream);
        bmp.Save(ImagePath);
    }
}
You should not run long-running code like that from an ASP.NET application; ASP.NET applications are meant simply to respond to requests.
You should place this code in a service (Windows Services are easy), and control the service through a WCF service running inside of it.
You're also going to get into trouble because you don't have your WebResponse and Stream in using blocks.
There are several methods that will depend on how you want to report feedback to the user. It all comes down to multi-threading.
Here is one example, using the ThreadPool. Note that this is missing a bunch of error checking throughout... It is here as an example of how to use the ThreadPool, not as a robust application:
private Dictionary<String, String> _cameras = new Dictionary<String, String> {
    { "http://example.IPcam1.com:81/snapshot.cgi", "/some/path/for/image1.jpg" },
    { "http://example.IPcam2.com:81/snapshot.cgi", "/some/other/path/image2.jpg" },
};

public void DoImageDownload() {
    int finished = 0;
    foreach (KeyValuePair<String, String> pair in _cameras) {
        KeyValuePair<String, String> item = pair; // local copy so the delegate doesn't share the loop variable
        ThreadPool.QueueUserWorkItem(delegate {
            BeginDownload(item.Key, item.Value);
            Interlocked.Increment(ref finished); // thread-safe increment
        });
    }
    while (finished < _cameras.Count) {
        Thread.Sleep(1000); // sleep 1 second
    }
}

private void BeginDownload(String src, String dest) {
    WebRequest req = WebRequest.Create(src);
    req.Credentials = new NetworkCredential("username", "password");
    using (WebResponse resp = req.GetResponse())
    using (Stream input = resp.GetResponseStream())
    using (Stream output = File.Create(dest)) {
        input.CopyTo(output);
    }
}
This example simply takes the work you are doing in the for loop and off-loads it to the thread pool for processing. The DoImageDownload method will return very quickly, as it is not doing much actual work.
Depending on your use case, you may need a mechanism to wait for the images to finish downloading from the caller of DoImageDownload. A common approach would be the use of event callbacks at the end of BeginDownload to notify when the download is complete. I have put a simple while loop here that will wait until the images finish... Of course, this needs error checking in case images are missing or the delegate never returns.
Be sure to add your error checking throughout... Hopefully this gives you a place to start.
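As an aside, on .NET 4.5 or later the same fan-out-and-wait can be written with Task.Run and Task.WhenAll instead of hand-rolled ThreadPool plumbing. A sketch, reusing the _cameras dictionary and BeginDownload from above (requires System.Collections.Generic, System.Linq, and System.Threading.Tasks):
public async Task DoImageDownloadAsync() {
    // One task per camera; WhenAll completes when every download has finished
    // (or faults if any of them threw).
    IEnumerable<Task> downloads = _cameras.Select(
        pair => Task.Run(() => BeginDownload(pair.Key, pair.Value)));
    await Task.WhenAll(downloads);
}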
I'm trying to create a program that will retrieve page titles given a URL. I've written code that works when I'm not using an AsyncCallback, but when I use an AsyncCallback the code doesn't seem to work.
public void GetWebPageTitle(string URL)
{
    // make request for web page
    HttpWebRequest myWebRequest = (HttpWebRequest)HttpWebRequest.Create(URL);
    myWebRequest.Method = "GET";
    myWebRequest.BeginGetResponse(new AsyncCallback(FinishWebRequest), myWebRequest);
    zConsole.WriteLine("Beginning HttpWebRequest for: " + URL);
}

void FinishWebRequest(IAsyncResult result)
{
    zConsole.WriteLine("...");
    string title = "Unknown";
    // Code under here doesn't get executed
    HttpWebResponse myWebResponse = (HttpWebResponse)((HttpWebRequest)result.AsyncState).EndGetResponse(result);
    StreamReader myWebSource = new StreamReader(myWebResponse.GetResponseStream());
    string source = "";
    source = myWebSource.ReadToEnd();
    myWebResponse.Close();
    title = Regex.Match(source, @"\<title\b[^>]*\>\s*(?<Title>[\s\S]*?)\</title\>", RegexOptions.IgnoreCase).Groups["Title"].Value;
    zConsole.WriteLine(title);
}
Thanks.
I think the problem is that your program ends before the async result is returned.
The main thread dies right after the Console.WriteLine.
The rest looks okay; see BeginGetResponse on MSDN.
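A minimal sketch of one way to keep the process alive until the callback has run, assuming the code lives in a console application; the ManualResetEvent and the host class are my additions, not part of the original code:
// Alongside GetWebPageTitle and FinishWebRequest:
static readonly ManualResetEvent _responseDone = new ManualResetEvent(false);

static void Main(string[] args)
{
    new PageTitleFetcher().GetWebPageTitle("http://example.com/"); // hypothetical class holding the code above
    _responseDone.WaitOne(); // block the main thread until the callback signals
}

// ...and as the last line of FinishWebRequest:
//     _responseDone.Set();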
Put a try/catch block around the code inside the callback and see if anything in there is throwing an exception.
Otherwise some more details would be useful. When you say that the code doesn't get executed are you actually stepping through the code/using breakpoints or are you assuming this is the case based on your console output? Is this request being made from the main window thread of your application?
I have a Windows service that calls a page after a certain interval of time. The page in turn creates some reports.
The problem is that the service stops doing anything after 2-3 calls: it calls the page 2-3 times and then does no further work, though it shows that the service is running. I am using timers in my service.
Please, can someone help me with a solution here?
Thank you.
The code (where t1 is my timer):
protected override void OnStart(string[] args)
{
    GetRecords();
    t1.Elapsed += new ElapsedEventHandler(OnElapsedTime);
    t1.Interval = //SomeTimeInterval
    t1.Enabled = true;
    t1.Start();
}

private void OnElapsedTime(object source, ElapsedEventArgs e)
{
    try
    {
        GetRecords();
    }
    catch (Exception ex)
    {
        EventLog.WriteEntry(ex.Message);
    }
}

public void GetRecords()
{
    try
    {
        string ConnectionString = //Connection string from web.config
        WebRequest Request = HttpWebRequest.Create(ConnectionString);
        Request.Timeout = 100000000;
        HttpWebResponse Response = (HttpWebResponse)Request.GetResponse();
    }
    catch (Exception ex)
    {
    }
}
Well, what does the code look like? WebClient is the easiest way to query a page:
string result;
using (WebClient client = new WebClient()) {
    result = client.DownloadString(address);
}
// do something with `result`
The timer code might also be glitchy if it is stalling...
It's possible that HttpWebRequest will restrict the number of concurrent HTTP requests to a specific page or server, as is generally proper HTTP client practice.
The fact that you're not properly disposing your objects most likely means you are maintaining 2 or 3 connections to a specific page, each with a large timeout value, and HttpWebRequest is queueing or ignoring your requests until the first few complete (die from a client or server timeout, most likely the server in this case).
Add a finally clause and dispose of your objects properly!
I think you're missing something about disposing your objects like StreamReader, WebRequest, etc. You should dispose your expensive objects after using them.
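To illustrate, here is a sketch of GetRecords with the response wrapped in a using block, which is what the finally/dispose advice amounts to. The url parameter stands in for the config value the question elides, and the event-log source name is made up (requires System, System.Diagnostics, and System.Net):
// Sketch only: the page URL is passed in rather than read from web.config,
// since the original config value is not shown in the question.
public void GetRecords(string url)
{
    try
    {
        WebRequest request = WebRequest.Create(url);
        request.Timeout = 100000000;
        // Disposing the response returns the underlying connection to the pool,
        // so later timer ticks are not starved of connections.
        using (WebResponse response = request.GetResponse())
        {
            // the service only needs to trigger the page, so the body is ignored
        }
    }
    catch (Exception ex)
    {
        EventLog.WriteEntry("MyService", ex.Message); // don't swallow errors silently
    }
}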
Possibly the way you are requesting the page is throwing an unhandled exception, which leaves the service in an inoperable state.
Yes, we need code.
Marc's advice worked for me, in the context of a service.
Using WebClient worked reliably where WebRequest timed out.
@jscharf's explanation looks as good as any to me.