This question already has an answer here:
HttpWebRequest Unable to download data from nasdaq.com but able from browsers
(1 answer)
Closed 2 years ago.
I know it's kind of a silly question, but I've read a lot of forums and nothing worked for me.
("https://www.nasdaq.com/api/v1/historical/HPQ/stocks/2010-11-14/2020-11-14")
I have a URL from which I need to download a CSV file. When I paste this URL into a browser it works fine, but when I use it in my app it doesn't work at all: the app just stops responding and creates an empty file.
WebClient webClient = new WebClient();
webClient.DownloadFile(
    "https://www.nasdaq.com/api/v1/historical/HPQ/stocks/2010-11-14/2020-11-14",
    @"HistoryDataStocks.csv");
You need to send a proper web request. Try this code:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Timeout = 30000;
request.AllowWriteStreamBuffering = false;
using (var response = (HttpWebResponse)request.GetResponse())
using (var s = response.GetResponseStream())
using (var fs = new FileStream("test.csv", FileMode.Create))
{
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = s.Read(buffer, 0, buffer.Length)) > 0)
    {
        fs.Write(buffer, 0, bytesRead);
    }
}
The file stream will contain your file.
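If the request still hangs or times out, the server may be rejecting clients that don't look like a browser, which matches the linked nasdaq.com question. Here is a sketch of the same download with browser-like headers added; the exact header values are illustrative, not required:

```csharp
// Sketch: same request, plus headers that make it resemble a browser.
// The User-Agent and Accept values below are examples, not magic values.
var request = (HttpWebRequest)WebRequest.Create(
    "https://www.nasdaq.com/api/v1/historical/HPQ/stocks/2010-11-14/2020-11-14");
request.Timeout = 30000;
request.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";
request.Accept = "text/csv, */*";

using (var response = (HttpWebResponse)request.GetResponse())
using (var s = response.GetResponseStream())
using (var fs = new FileStream("HistoryDataStocks.csv", FileMode.Create))
{
    s.CopyTo(fs); // Stream.CopyTo performs the buffered read/write loop
}
```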
Good day everyone,
We are currently developing an update service that automatically updates our programs. The latest versions of these programs are provided via a link and can be downloaded after a login. The service is supposed to download these files. This partly works, but certain files (always the same ones) come through at only one kilobyte. It doesn't matter whether it's a .zip or an .exe file, as one file of each type downloads correctly. Size doesn't seem to matter either, since some larger files download while smaller ones do not. If the link is opened manually, the files that could not be downloaded automatically download normally.
For the automatic download, we have already tried it with the WebClient:
client.Credentials = new NetworkCredential(username, password);
client.DownloadFile(datei.LinkUpdatedatei, _dateiverzeichnis + '/' + Path.GetFileName(datei.LinkUpdatedatei));
Also, we have already tried it with an HttpWebRequest:
foreach (UpdateDatei datei in BezieheUpdatedateipfade())
{
using (FileStream fileStream = new System.IO.FileStream(_dateiverzeichnis + '/' + Path.GetFileName(datei.LinkUpdatedatei), System.IO.FileMode.OpenOrCreate, System.IO.FileAccess.Write))
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(datei.LinkUpdatedatei);
request.Method = WebRequestMethods.Http.Get;
request.PreAuthenticate = true;
//ToDo: Credentials aus config auslesen
request.Credentials = new NetworkCredential(username, password);
const int BUFFER_SIZE = 16 * 1024;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (var responseStream = response.GetResponseStream())
{
    var buffer = new byte[BUFFER_SIZE];
    int bytesRead;
    do
    {
        bytesRead = responseStream.Read(buffer, 0, BUFFER_SIZE);
        fileStream.Write(buffer, 0, bytesRead);
    } while (bytesRead > 0);
}
}
}
The result is always the same and the same files are not downloaded correctly:
Please don't take offense if I forgot or overlooked anything... this is one of my first entries - thanks!
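A first diagnostic step (just a sketch, on the assumption that the 1 KB files actually contain an HTML login or error page rather than the binary) would be to log the response metadata before writing anything to disk:

```csharp
// Diagnostic sketch: inspect what the server actually returns for a
// problematic URL. A text/html content type for a .zip/.exe link usually
// means a login or error page came back instead of the file.
var request = (HttpWebRequest)WebRequest.Create(datei.LinkUpdatedatei);
request.Credentials = new NetworkCredential(username, password);
request.PreAuthenticate = true;

using (var response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine("Status: " + (int)response.StatusCode);
    Console.WriteLine("Content-Type: " + response.ContentType);
    Console.WriteLine("Content-Length: " + response.ContentLength);
}
```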
I created a simple application that collects form data, generates an XML document in memory as a MemoryStream object, and delivers the XML to a server over SMB. The SMB delivery method is simple:
var outputFile = new FileStream($@"{serverPath}\(unknown).xml", FileMode.Create);
int Length = 256;
Byte[] buffer = new Byte[Length];
int bytesRead = stream.Read(buffer, 0, Length);
while (bytesRead > 0)
{
outputFile.Write(buffer, 0, bytesRead);
bytesRead = stream.Read(buffer, 0, Length);
}
However, I need to create an alternative delivery method using FTP (with credentials). I don't want to rewrite my XML method, as creating it in memory saves writing to disk which has been a problem in our environment in the past.
I have not been able to find any examples that explain (for a person of very limited coding ability) how such a thing may be accomplished.
Generally when I have need to upload a file to an FTP server, I use something like this:
using (var client = new WebClient())
{
client.Credentials = new NetworkCredential("user", "pass");
client.UploadFile(uri, WebRequestMethods.Ftp.UploadFile, "filename.xml");
}
Can this be adapted to upload from a MemoryStream instead of a file on disk?
If not, what other way could I upload a MemoryStream to an FTP server?
Either use FtpWebRequest, as you can see in Upload a streamable in-memory document (.docx) to FTP with C#?:
WebRequest request =
WebRequest.Create("ftp://ftp.example.com/remote/path/filename.xml");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential(username, password);
memoryStream.Seek(0, SeekOrigin.Begin); // rewind before copying
using (Stream ftpStream = request.GetRequestStream())
{
    memoryStream.CopyTo(ftpStream);
}
or use WebClient.OpenWrite (as you can also see in the answer by @Neptune):
using (var webClient = new WebClient())
{
    const string url = "ftp://ftp.example.com/remote/path/filename.xml";
    memoryStream.Seek(0, SeekOrigin.Begin); // rewind before copying
    using (Stream uploadStream = webClient.OpenWrite(url))
    {
        memoryStream.CopyTo(uploadStream);
    }
}
Equivalently, your existing FileStream code can be simplified to:
using (var outputFile = File.Create($@"{serverPath}\(unknown).xml"))
{
stream.CopyTo(outputFile);
}
Though obviously, even better would be to avoid the intermediate MemoryStream and write the XML directly to FileStream and WebRequest.GetRequestStream (using their common Stream interface).
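That direct approach could look like this (a sketch assuming the XML is built with System.Xml.Linq's XDocument; the document contents and URL are placeholders):

```csharp
// Build the XML as an in-memory object model rather than a MemoryStream...
var doc = new XDocument(
    new XElement("form",
        new XElement("field", "value")));

var request = WebRequest.Create("ftp://ftp.example.com/remote/path/filename.xml");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("user", "pass");

// ...and serialize it straight into the FTP request stream.
using (Stream ftpStream = request.GetRequestStream())
{
    doc.Save(ftpStream);
}
```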
You can use the methods OpenWrite/OpenWriteAsync to get a stream that you can write to from any source (a stream, an array, etc.).
Here is an example using OpenWrite to write from a MemoryStream:
var sourceStream = new MemoryStream();
// Populate your stream with data ...
sourceStream.Position = 0; // rewind before copying
using (var webClient = new WebClient())
{
    using (Stream uploadStream = webClient.OpenWrite(uploadUrl))
    {
        sourceStream.CopyTo(uploadStream);
    }
}
I am trying to download a zip file via a URL to extract files from it. I would rather not save it to a temp file (which works fine) but instead keep it in memory; it is not very big. For example, if I try to download this file:
http://phs.googlecode.com/files/Download%20File%20Test.zip
using this code:
using Ionic.Zip;
...
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(URL);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.ContentLength > 0)
{
using (MemoryStream zipms = new MemoryStream())
{
int bytesRead;
byte[] buffer = new byte[32768];
using (Stream stream = response.GetResponseStream())
{
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
zipms.Write(buffer, 0, bytesRead);
ZipFile zip = ZipFile.Read(stream); // <--ERROR: "This stream does not support seek operations. "
}
using (ZipFile zip = ZipFile.Read(zipms)) // <--ERROR: "Could not read block - no data! (position 0x00000000) "
using (MemoryStream txtms = new MemoryStream())
{
ZipEntry csentry= zip["Download File Test.cs"];
csentry.Extract(txtms);
txtms.Position = 0;
using (StreamReader reader = new StreamReader(txtms))
{
string content = reader.ReadToEnd();
}
}
}
}
...
Note where I flagged the errors I am receiving. With the first one, it does not like the System.Net.ConnectStream. If I comment that line out and let it hit the line where I note the second error, it does not like the MemoryStream. I did see this posting: https://stackoverflow.com/a/6377099/1324284, but I am having the same issues that others mention about not having more than 4 overloads of the Read method, so I cannot try the WebClient.
However, if I do everything via a FileStream and save it to a temp location first, then point ZipFile.Read at that temp location, everything works, including extracting any contained files into a MemoryStream.
Thanks for any help.
You need to Flush() your MemoryStream and set the Position to 0 before you read from it, otherwise you are trying to read from the current position (where there is nothing).
For your code:
ZipFile zip;
using (Stream stream = response.GetResponseStream())
{
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
zipms.Write(buffer, 0, bytesRead);
zipms.Flush();
zipms.Position = 0;
zip = ZipFile.Read(zipms);
}
I need basic file downloading capabilities in my app and I cannot use WebClient.DownloadFile [1]. Is this (naïve?) implementation of a DownloadFile method enough? Are there any pitfalls that I don't address with this implementation?
public static void DownloadFile(String url, String destination)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";
    request.Timeout = 100000; // 100 seconds
    using (var response = request.GetResponse())
    using (var responseStream = response.GetResponseStream())
    using (var fileStream = File.Open(destination,
                                      FileMode.Create,
                                      FileAccess.Write,
                                      FileShare.None))
    {
        var MaxBytesToRead = 10 * 1024;
        var buffer = new Byte[MaxBytesToRead];
        var totalBytesRead = 0;
        var bytesRead = responseStream.Read(buffer, 0, MaxBytesToRead);
        while (bytesRead > 0)
        {
            totalBytesRead += bytesRead;
            fileStream.Write(buffer, 0, bytesRead);
            bytesRead = responseStream.Read(buffer, 0, MaxBytesToRead);
        }
    }
}
Thanks!
[1] .Net Compact Framework...
You're keeping track of totalBytesRead, but I can't see it used anywhere.
Since Method = "GET" is the default, I don't see anything that's specific to HTTP. If you remove the (HttpWebRequest) cast and the Method = line then you'll gain the ability to download over other protocols, such as FTP. Currently the code will throw an exception if somebody provides a URL other than http://.
Response should have a Content-Length header (unless Transfer-Encoding is chunked), which you can use to validate that the download was not interrupted.
Other than that, your implementation looks fine to me.
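That validation could be a few lines at the end of the method (a sketch; it reuses the totalBytesRead counter and the response variable already in scope, and string.Format keeps it Compact Framework friendly):

```csharp
// After the read loop: ContentLength is -1 when the server sent no
// Content-Length header (e.g. chunked transfer), so only check when present.
if (response.ContentLength >= 0 && totalBytesRead != response.ContentLength)
{
    throw new IOException(string.Format(
        "Download truncated: got {0} of {1} bytes.",
        totalBytesRead, response.ContentLength));
}
```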
I have the following code which downloads video content:
WebRequest wreq = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse wresp = (HttpWebResponse)wreq.GetResponse())
using (Stream mystream = wresp.GetResponseStream())
{
using (BinaryReader reader = new BinaryReader(mystream))
{
int length = Convert.ToInt32(wresp.ContentLength);
byte[] buffer = reader.ReadBytes(length);
Response.Clear();
Response.Buffer = false;
Response.ContentType = "video/mp4";
//Response.BinaryWrite(buffer);
Response.OutputStream.Write(buffer, 0, buffer.Length);
Response.End();
}
}
But the problem is that the whole file downloads before being played. How can I make it stream and play as it's still downloading? Or is this up to the client/receiver application to manage?
You're reading the entire file into a single buffer, then sending the entire byte array at once.
You should read into a smaller buffer in a while loop.
For example:
byte[] buffer = new byte[4096];
while(true) {
int bytesRead = mystream.Read(buffer, 0, buffer.Length);
if (bytesRead == 0) break;
Response.OutputStream.Write(buffer, 0, bytesRead);
}
This is more efficient for you, especially if you need to stream a video from a file on your server, or even when the file is hosted on another server
File On your server:
context.Response.BinaryWrite(File.ReadAllBytes(HttpContext.Current.Server.MapPath(_video.Location)));
File on external server:
var wc = new WebClient();
context.Response.BinaryWrite(wc.DownloadData(new Uri("http://mysite/video.mp4")));
Have you looked at Smooth Streaming?
Look at sample code here