Fastest Way To Download Website Data in C#

I am writing an RCON tool in Visual Studio for Black Ops. I know it's an old game, but I still have a server running.
I am trying to download the data from this link:
Black Ops Log File
I am using this code.
System.Net.WebClient wc = new System.Net.WebClient();
string raw = wc.DownloadString(logFile);
This takes between 6441 ms and 13741 ms according to Visual Studio.
Another attempt was...
string html = null;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(logFile);
request.AutomaticDecompression = DecompressionMethods.GZip;
request.Proxy = null;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(stream))
{
    html = reader.ReadToEnd();
}
This also takes around 6133 ms according to VS debugging.
I have seen other RCON tools respond to commands really quickly. Mine takes 5000 ms at best, which is not really acceptable. How can I download this information more quickly? I am told it shouldn't take this long. What am I doing wrong?

This is just how long the server takes to answer.
In the future you can debug such problems yourself using network tools such as Fiddler, or by profiling your code to see what takes the longest amount of time.
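To make the profiling suggestion concrete, here is a minimal sketch (assuming logFile holds the URL from the question) that separates the time to first response from the time spent reading the body:

// Minimal timing sketch: measures connection/response time and body-read time separately.
var sw = System.Diagnostics.Stopwatch.StartNew();
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(logFile);
request.Proxy = null;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine("Time to first response: " + sw.ElapsedMilliseconds + " ms");
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        string body = reader.ReadToEnd();
        Console.WriteLine("Total time including body: " + sw.ElapsedMilliseconds + " ms");
    }
}

If the first number already accounts for most of the delay, the time is spent waiting on the server rather than in your code.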


Strange characters as a result of HttpWebResponse [duplicate]

I'm trying to create a site parser for a Telegram bot. The exact code is:
var link = "https://www.detmir.ru/";
var request = HttpWebRequest.Create(link);
var resp = (HttpWebResponse)request.GetResponse();
string result;
using (var stream = resp.GetResponseStream())
{
    using (var reader = new StreamReader(stream, Encoding.GetEncoding(resp.CharacterSet)))
        result = reader.ReadToEnd();
}
File.WriteAllText(@"d:\1.txt", result);
The result is a set of strange symbols.
As far as I can tell, the main clue is the encoding. I've tried using Encoding.Default and Encoding.UTF8, with the same result.
But with other sites it works perfectly. Is there any trick to solve the issue with this particular website?
Update
In Google Chrome the source code of the webpage displays correctly.
The content of the response is UTF-8, as the site reports, but it is compressed to improve throughput.
Enable automatic decompression:
var request = (HttpWebRequest)HttpWebRequest.Create(link);
request.AutomaticDecompression = DecompressionMethods.GZip;
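Combined with the reading code from the question, a sketch like this should produce readable text (adding Deflate alongside GZip is my addition, to cover both common encodings):

var link = "https://www.detmir.ru/";
var request = (HttpWebRequest)WebRequest.Create(link);
// Transparently decompress gzip (and deflate, for good measure) response bodies.
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

string result;
using (var resp = (HttpWebResponse)request.GetResponse())
using (var stream = resp.GetResponseStream())
using (var reader = new StreamReader(stream, Encoding.UTF8))
{
    result = reader.ReadToEnd();
}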

Using C# to get content from a URL gives this error: “WebRequest does not contain a definition for GetRespone and …”

This code comes from Microsoft's documentation. I put this code in a Console app and a Windows Forms app separately.
In the Console app, there is an error: “WebRequest does not contain a definition for GetRespone and …”
But in the Windows Forms app, there is no error.
I really don't know why this happens. I am a beginner in C#, so this question may be stupid, but I feel very confused. Please explain it to me.
Thank you!
Below are two screenshots of the two situations:
Here is the code.
using System;
using System.IO;
using System.Net;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create a request for the URL.
            WebRequest request = WebRequest.Create(
                "http://www.contoso.com/default.html");
            // If required by the server, set the credentials.
            request.Credentials = CredentialCache.DefaultCredentials;
            // Get the response.
            WebResponse response = request.GetResponse();
            // Display the status.
            Console.WriteLine(((HttpWebResponse)response).StatusDescription);
            // Get the stream containing content returned by the server.
            Stream dataStream = response.GetResponseStream();
            // Open the stream using a StreamReader for easy access.
            StreamReader reader = new StreamReader(dataStream);
            // Read the content.
            string responseFromServer = reader.ReadToEnd();
            // Display the content.
            Console.WriteLine(responseFromServer);
            // Clean up the streams and the response.
            reader.Close();
            response.Close();
        }
    }
}
Update 1:
I was using a MacBook Pro with a Parallels virtual machine; the VS version is Enterprise 2017 and the .NET Framework is 4.5.2.
But after I switched to a Windows laptop, the code ran perfectly. Maybe the problem is the virtual machine? ... It's very strange. It seems that I can't just trust the virtual machine... Anyway, thanks for helping!
Update 2:
It seems that I was too optimistic. When I use Visual Studio 2017, even when I build it on the Windows laptop, the error is still shown. So I think there is a high chance that the problem is Visual Studio 2017...
You must use an HttpWebRequest and HttpWebResponse, not a WebRequest:
using System;
using System.IO;
using System.Net;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create a request for the URL.
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
                "http://www.contoso.com/default.html");
            // If required by the server, set the credentials.
            request.Credentials = CredentialCache.DefaultCredentials;
            // Get the response.
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            // Get the stream containing content returned by the server.
            Stream dataStream = response.GetResponseStream();
            // Open the stream using a StreamReader for easy access.
            StreamReader reader = new StreamReader(dataStream);
            // Read the content.
            string responseFromServer = reader.ReadToEnd();
            // Display the content.
            Console.WriteLine(responseFromServer);
            // Clean up the streams and the response.
            reader.Close();
            response.Close();
        }
    }
}
WebRequest.Create returns a class derived from WebRequest with the exact implementation (HttpWebRequest, FtpWebRequest, and so on) for the specified address, so you must cast to the concrete implementation to access its specific members.
Likewise, as HttpWebRequest.GetResponse returns the concrete implementation of WebResponse, you must cast it, in this case to HttpWebResponse.
It looks like reader.Close() is also complaining, which leads me to believe one or more of your referenced assemblies is missing. Can you check the References folder in your project and see if any have a yellow warning icon on them?

C# (.NET), how to fix web response performance, too much elapsed time

I need to optimize my code: the elapsed time for the response is over 1 s, ReadToEnd() takes another 0.5 s, and all the other code (including the full web request setup) takes only 0.1 s.
request2.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
//This 1sec+
HttpWebResponse response = (HttpWebResponse)request2.GetResponse();
StreamReader sr = new StreamReader(response.GetResponseStream(), Encoding.UTF8);
//This 0.5sec+
string mem = sr.ReadToEnd();
sr.Close();
P.S.: The HTML is 200k+ characters, but I need only 4-5 of them.
Most likely, not having set the proxy on your HttpWebRequest is causing the delay. It is always recommended to set the proxy, even if it's not used. Also try the using clause:
request2.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
request2.Proxy = null;
using (HttpWebResponse response = (HttpWebResponse)request2.GetResponse())
using (StreamReader sr = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
{
    string mem = sr.ReadToEnd();
}
The initial GetResponse call triggers a security negotiation and the GET itself, then an HTTP response whose first few bytes include the HTTP response code (200, 500, 404, etc.).
The body of the response may already have been received by your client's HTTP buffer, or may still be streaming into it; you can't really tell. Your second call (ReadToEnd) reads all the bytes in the receive buffer and waits until the server has sent all the bytes indicated in the HTTP header.
Your code is very unlikely to be adding any appreciable cost to the execution time of the web service call, and I can't see any likely optimisation steps; you need to determine how long the call takes without your client code.
Use Telerik Fiddler to track the number of bytes returned by the destination web service and the amount of time the raw transfer from server to client takes; do this by simply calling the URL within Fiddler or in a web browser. This will isolate whether it's your code, the server, or the connection latency that is costing time.
To add on top of the other suggestions of using using and setting the proxy to null, you might also want to read only a specific number of characters instead of reading everything:
var content = new char[10];
sr.Read(content, 0, content.Length);
string contentStr = new String(content);
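Putting the suggestions in this thread together (using blocks, a null proxy, and a partial read), a rough sketch might look like this; note that the partial read only helps if the characters you need are near the start of the document:

request2.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
request2.Proxy = null; // skip automatic proxy detection
using (HttpWebResponse response = (HttpWebResponse)request2.GetResponse())
using (StreamReader sr = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
{
    // Read only the first few characters instead of the whole 200k+ body.
    var buffer = new char[16];
    int read = sr.Read(buffer, 0, buffer.Length);
    string contentStr = new string(buffer, 0, read);
}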
Check that your JS scripts don't depend on other services, repositories, etc. This worked for me in a previous project: download them and move them to your local folders.

Download file directly to memory

I would like to load an excel file directly from an ftp site into a memory stream. Then I want to open the file in the FarPoint Spread control using the OpenExcel(Stream) method. My issue is I'm not sure if it's possible to download a file directly into memory. Anyone know if this is possible?
Yes, you can download a file from FTP to memory.
I think you can even pass the Stream from the FTP server to be processed by FarPoint.
WebRequest request = FtpWebRequest.Create("ftp://asd.com/file");
using (WebResponse response = request.GetResponse())
{
    Stream responseStream = response.GetResponseStream();
    OpenExcel(responseStream);
}
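One caveat to add, as an assumption on my part: response streams are generally not seekable, and a spreadsheet parser may need to seek. If OpenExcel complains, buffering into a MemoryStream first should help:

WebRequest request = FtpWebRequest.Create("ftp://asd.com/file");
using (WebResponse response = request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (var buffer = new MemoryStream())
{
    responseStream.CopyTo(buffer); // pull the whole file into memory
    buffer.Position = 0;           // rewind before handing it off
    OpenExcel(buffer);
}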
Using WebClient you can do nearly the same thing. Generally, WebClient is easier to use but gives you fewer configuration options and less control (e.g., no timeout setting).
WebClient wc = new WebClient();
using (MemoryStream stream = new MemoryStream(wc.DownloadData("ftp://asd.com/file")))
{
    OpenExcel(stream);
}
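If the missing timeout ever becomes a problem, a common workaround (sketched here, not part of the original answer) is to subclass WebClient and override GetWebRequest:

// A WebClient with a configurable timeout; WebClient itself exposes none.
class TimeoutWebClient : WebClient
{
    public int TimeoutMs { get; set; } = 30000;

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        request.Timeout = TimeoutMs; // applies to the underlying (Ftp/Http)WebRequest
        return request;
    }
}

An instance of TimeoutWebClient can then be used exactly like a normal WebClient.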
Take a look at WebClient.DownloadData. You should be able to download the file directly to memory without writing it to a file first.
This is untested, but something like:
var spreadSheetStream
= new MemoryStream(new WebClient().DownloadData(yourFilePath));
I'm not familiar with FarPoint, though, so I can't say whether the stream can be used directly with the OpenExcel method. Online examples show the method being used with a FileStream, but I'd assume any kind of Stream would be accepted.
Download a file from a URL to memory.
My answer does not show exactly how to download a file for use in Excel, but it shows how to create a general-purpose in-memory byte array.
private static byte[] DownloadFile(string url)
{
    byte[] result = null;
    using (WebClient webClient = new WebClient())
    {
        result = webClient.DownloadData(url);
    }
    return result;
}
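For completeness, a hypothetical usage that feeds the downloaded bytes to the FarPoint call from the question (the URL and OpenExcel are placeholders taken from this thread):

byte[] bytes = DownloadFile("ftp://asd.com/file"); // placeholder URL from the thread
using (var stream = new MemoryStream(bytes))
{
    OpenExcel(stream); // FarPoint Spread's OpenExcel(Stream), per the question
}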

Reading information from a website in C#

In the project I have in mind, I want to be able to look at a website, retrieve text from it, and do something with that information later.
My question is: what is the best way to retrieve the data (text) from a website? I am unsure how to do this when dealing with a static page versus a dynamic page.
From some searching I found this:
WebRequest request = WebRequest.Create("http://anysite.com");
// If required by the server, set the credentials.
request.Credentials = CredentialCache.DefaultCredentials;
// Get the response.
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
// Display the status.
Console.WriteLine(response.StatusDescription);
Console.WriteLine();
// Get the stream containing content returned by the server.
using (Stream dataStream = response.GetResponseStream())
{
    // Open the stream using a StreamReader for easy access.
    StreamReader reader = new StreamReader(dataStream, Encoding.UTF8);
    // Read the content.
    string responseString = reader.ReadToEnd();
    // Display the content.
    Console.WriteLine(responseString);
    reader.Close();
}
response.Close();
From running this myself, I can see that it returns the HTML code of a website, which is not exactly what I'm looking for. I eventually want to be able to type in a site (such as a news article) and get back the contents of the article. Is this possible in C# or Java?
Thanks
I hate to break this to you, but that's how web pages look: one long stream of HTML markup and content. The browser renders it into what you see on your screen. The only way I can think of is to parse the HTML yourself.
After a quick search on Google I found this Stack Overflow question:
What is the best way to parse html in C#?
I'm betting you figured this would be a bit easier than you expected, but that's the fun in programming: always challenging problems.
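For a concrete starting point, here is a minimal sketch using the HtmlAgilityPack library, which the answers to that linked question generally recommend; the assumption that the article text lives in <p> tags is mine:

using HtmlAgilityPack;

var web = new HtmlWeb();
HtmlDocument doc = web.Load("http://myurl.com"); // placeholder URL
// Grab the text of every <p> element; a real article may need a narrower XPath.
var paragraphs = doc.DocumentNode.SelectNodes("//p");
if (paragraphs != null)
{
    foreach (HtmlNode p in paragraphs)
        Console.WriteLine(p.InnerText.Trim());
}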
You can just use a WebClient:
using (var webClient = new WebClient())
{
    string htmlFromPage = webClient.DownloadString("http://myurl.com");
}
In the above example htmlFromPage will contain the HTML which you can then parse to find the data you're looking for.
What you are describing is called web scraping, and there are plenty of libraries that do just that for both Java and C#. It doesn't really matter whether the target site is static or dynamic, since both output HTML in the end. JavaScript- or Flash-heavy sites, on the other hand, tend to be problematic.
Please try this:
System.Net.WebClient wc = new System.Net.WebClient();
string webData = wc.DownloadString("http://anysite.com");
