The problem is still there after I tried the three methods below.
Using the Windows API "URLDownloadToFile"
The WebClient method:
webclient.DownloadFile(url, dest) ' with/without credentials
The HttpWebRequest method:
public static void Download(String strURLFileandPath, String strFileSaveFileandPath)
{
    HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(strURLFileandPath);
    HttpWebResponse ws = (HttpWebResponse)wr.GetResponse();
    Stream str = ws.GetResponseStream();
    byte[] inBuf = new byte[100000];
    int bytesToRead = (int)inBuf.Length;
    int bytesRead = 0;
    while (bytesToRead > 0)
    {
        int n = str.Read(inBuf, bytesRead, bytesToRead);
        if (n == 0)
            break;
        bytesRead += n;
        bytesToRead -= n;
    }
    FileStream fstr = new FileStream(strFileSaveFileandPath, FileMode.OpenOrCreate, FileAccess.Write);
    fstr.Write(inBuf, 0, bytesRead);
    str.Close();
    fstr.Close();
}
I am still facing the problem: I am able to download the file to my local system, but when I open it, it shows as a corrupt PDF.
I just want to download the PDF from a URL in VB.NET/C#, without using the Response methods of ASP.NET.
Please help if you have faced this problem.
Thanks in advance!
Your code writes at most 100000 bytes of the downloaded PDF, so every PDF larger than 100000 bytes gets corrupted.
To handle larger files, you have to write the contents of the buffer to the FileStream after every read.
The following should do it:
HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(strURLFileandPath);
using (HttpWebResponse ws = (HttpWebResponse)wr.GetResponse())
using (Stream str = ws.GetResponseStream())
using (FileStream fstr = new FileStream(strFileSaveFileandPath, FileMode.OpenOrCreate, FileAccess.Write))
{
    byte[] inBuf = new byte[100000];
    int bytesRead = 0;
    while ((bytesRead = str.Read(inBuf, 0, inBuf.Length)) > 0)
        fstr.Write(inBuf, 0, bytesRead);
}
(It's good coding practice to use a using statement for every IDisposable instead of closing the streams manually.)
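As a side note: on .NET 4.0 and later, Stream.CopyTo can replace the manual read/write loop entirely. A minimal sketch, assuming .NET 4.0+ and reusing the same variable names:

HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(strURLFileandPath);
using (HttpWebResponse ws = (HttpWebResponse)wr.GetResponse())
using (Stream str = ws.GetResponseStream())
using (FileStream fstr = new FileStream(strFileSaveFileandPath, FileMode.Create, FileAccess.Write))
{
    // CopyTo loops internally until the end of the response stream,
    // so no assumption is made about the file's size.
    // FileMode.Create truncates any existing file first.
    str.CopyTo(fstr);
}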
I am trying to download a zip file via a URL and extract files from it. I would rather not save it to a temp file (which works fine) but instead keep it in memory; it is not very big. For example, if I try to download this file:
http://phs.googlecode.com/files/Download%20File%20Test.zip
using this code:
using Ionic.Zip;
...
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(URL);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.ContentLength > 0)
{
    using (MemoryStream zipms = new MemoryStream())
    {
        int bytesRead;
        byte[] buffer = new byte[32768];
        using (Stream stream = response.GetResponseStream())
        {
            while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                zipms.Write(buffer, 0, bytesRead);
            ZipFile zip = ZipFile.Read(stream); // <-- ERROR: "This stream does not support seek operations."
        }
        using (ZipFile zip = ZipFile.Read(zipms)) // <-- ERROR: "Could not read block - no data! (position 0x00000000)"
        using (MemoryStream txtms = new MemoryStream())
        {
            ZipEntry csentry = zip["Download File Test.cs"];
            csentry.Extract(txtms);
            txtms.Position = 0;
            using (StreamReader reader = new StreamReader(txtms))
            {
                string csentryText = reader.ReadToEnd();
            }
        }
    }
}
...
Note where I flagged the errors I am receiving. With the first one, it does not like the System.Net.ConnectStream. If I comment that line out and let it hit the line where I note the second error, it does not like the MemoryStream. I did see this posting: https://stackoverflow.com/a/6377099/1324284, but I am having the same issue that others mention about there not being more than 4 overloads of the Read method, so I cannot try the WebClient.
However, if I do everything via a FileStream and save it to a temp location first, then point ZipFile.Read at that temp location, everything works, including extracting any contained files into a MemoryStream.
Thanks for any help.
You need to Flush() your MemoryStream and set its Position to 0 before you read from it; otherwise you are trying to read from the current position (where there is nothing).
For your code:
ZipFile zip;
using (Stream stream = response.GetResponseStream())
{
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
        zipms.Write(buffer, 0, bytesRead);
    zipms.Flush();
    zipms.Position = 0;
    zip = ZipFile.Read(zipms);
}
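As background on the first error: ZipFile.Read needs a seekable stream, because a zip archive is read starting from the central directory at its end, and the raw response stream only supports forward reading. Copying the data into a MemoryStream first, as above, is what makes it seekable.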
I have a method to download a file over FTP, but I do not save the file locally; rather, I parse it in memory from the FTP response. My question is: is returning a StreamReader after getting the FTP response stream good practice? I do not want to do the parsing and other work in the same method.
var uri = new Uri(string.Format("ftp://{0}/{1}/{2}", "somevalue", remotefolderpath, remotefilename));
var request = (FtpWebRequest)WebRequest.Create(uri);
request.Credentials = new NetworkCredential(userName, password);
request.Method = WebRequestMethods.Ftp.DownloadFile;
var ftpResponse = (FtpWebResponse)request.GetResponse();
/* Get the FTP server's response stream */
ftpStream = ftpResponse.GetResponseStream();
return new StreamReader(ftpStream);
For me there are two disadvantages to using the stream directly; if you can live with them, you shouldn't waste memory or disk space:
In this stream you cannot seek to a specific position; you can only read the contents as they come in.
Your internet connection could suddenly drop, and you would get an exception while parsing and processing your file. Either split the parsing from the processing, or make sure your processing routine can handle the case where a file is processed a second time (after a failure halfway through the first attempt).
To work around these issues, you could copy the stream to a MemoryStream:
using (var ftpStream = ftpResponse.GetResponseStream())
{
    var memoryStream = new MemoryStream();
    var buffer = new byte[32 * 1024];
    int bytesRead;
    while ((bytesRead = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        memoryStream.Write(buffer, 0, bytesRead);
    }
    memoryStream.Flush();
    memoryStream.Position = 0;
    return memoryStream;
}
If you are working with larger files, I prefer writing to a file; this way you minimize the memory footprint of your application:
using (var ftpStream = ftpResponse.GetResponseStream())
{
    // Path.GetTempFileName() already creates the (empty) file, so open it with
    // FileMode.Create rather than FileMode.CreateNew, which would throw
    // because the file exists.
    var fileStream = new FileStream(Path.GetTempFileName(), FileMode.Create);
    var buffer = new byte[32 * 1024];
    int bytesRead;
    while ((bytesRead = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        fileStream.Write(buffer, 0, bytesRead);
    }
    fileStream.Flush();
    fileStream.Position = 0;
    return fileStream;
}
I find returning a response stream more practical when you are performing an HttpWebRequest. If you are using FtpWebRequest, it means you are working with files. I would read the response stream into a byte[] and return the byte content of the downloaded file, so you can easily work with the System.IO.File classes to handle it.
Thanks Carlos, that was really helpful. I just return the byte[]:
byte[] buffer = new byte[16 * 1024];
using (MemoryStream ms = new MemoryStream())
{
    int read;
    while ((read = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        ms.Write(buffer, 0, read);
    }
    return ms.ToArray();
}
and use the byte[] in the method like this:
public async Task ParseReport(byte[] bytesRead)
{
    using (Stream stream = new MemoryStream(bytesRead))
    using (StreamReader reader = new StreamReader(stream))
    {
        string line;
        while (null != (line = reader.ReadLine()))
        {
            string[] values = line.Split(';');
        }
    }
}
Whenever I try to download a file larger than 4 GB from a server (the server is on a device; it's not on the internet), the transfer only actually transfers what appears to be (FileSize) % 4 GB. In other words, for a file just over 4.5 GB, I end up transferring only around 600 MB of data.
I think it's something to do with the Content-Length header, but I'm not sure what the exact mechanism is. I've tried both WebClient and WebRequest, and both exhibit the same behaviour.
Does anyone have any idea how I can get past this limit? Here's my current loop:
byte[] buffer = new byte[4096];
WebRequest request = WebRequest.Create(new Uri(transferDetails.URL));
using (WebResponse response = request.GetResponse())
{
    using (Stream responseStream = response.GetResponseStream())
    {
        using (FileStream fileStream = new FileStream(actualPath, FileMode.Create, FileAccess.Write))
        {
            int count = 0;
            do
            {
                // Read a block.
                count = responseStream.Read(buffer, 0, buffer.Length);
                // Write out to the local file.
                if (count > 0)
                {
                    fileStream.Write(buffer, 0, count);
                }
            } while (count != 0);
        }
    }
}
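One possible workaround to sketch, with the assumptions stated plainly: if the truncation comes from the device reporting a wrapped-around 32-bit Content-Length, and if the device's server honours HTTP range requests (neither is confirmed above), the download can be resumed from the current offset with HttpWebRequest.AddRange (the long overload requires .NET 4.0 or later) until no further bytes arrive:

// Sketch only: assumes the server supports HTTP range requests.
byte[] buffer = new byte[64 * 1024];
long offset = 0;
bool done = false;
using (FileStream fileStream = new FileStream(actualPath, FileMode.Create, FileAccess.Write))
{
    while (!done)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(transferDetails.URL);
        if (offset > 0)
            request.AddRange(offset); // ask for bytes from 'offset' to the end of the file
        try
        {
            using (WebResponse response = request.GetResponse())
            using (Stream responseStream = response.GetResponseStream())
            {
                long before = offset;
                int count;
                while ((count = responseStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    fileStream.Write(buffer, 0, count);
                    offset += count;
                }
                done = (offset == before); // no new bytes: nothing left past 'offset'
            }
        }
        catch (WebException)
        {
            // A 416 (Requested Range Not Satisfiable) here typically means
            // the file is already complete.
            done = true;
        }
    }
}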
I have the following code which downloads video content:
WebRequest wreq = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse wresp = (HttpWebResponse)wreq.GetResponse())
using (Stream mystream = wresp.GetResponseStream())
{
    using (BinaryReader reader = new BinaryReader(mystream))
    {
        int length = Convert.ToInt32(wresp.ContentLength);
        byte[] buffer = new byte[length];
        buffer = reader.ReadBytes(length);
        Response.Clear();
        Response.Buffer = false;
        Response.ContentType = "video/mp4";
        //Response.BinaryWrite(buffer);
        Response.OutputStream.Write(buffer, 0, buffer.Length);
        Response.End();
    }
}
But the problem is that the whole file downloads before it starts playing. How can I make it stream and play while it is still downloading? Or is this up to the client/receiver application to manage?
You're reading the entire file into a single buffer, then sending the entire byte array at once.
You should read into a smaller buffer in a while loop.
For example:
byte[] buffer = new byte[4096];
while (true)
{
    int bytesRead = myStream.Read(buffer, 0, buffer.Length);
    if (bytesRead == 0)
        break;
    Response.OutputStream.Write(buffer, 0, bytesRead);
}
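When chunking into the ASP.NET response, it can also help to flush each chunk and stop writing once the client disconnects. A sketch using the standard HttpResponse members, assuming the same mystream as above:

byte[] buffer = new byte[4096];
int bytesRead;
while (Response.IsClientConnected
       && (bytesRead = mystream.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, bytesRead);
    Response.Flush(); // push this chunk to the client now instead of buffering it
}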
This is more efficient for you, especially if you need to stream a video from a file on your server, or even if the file is hosted on another server.
File on your server:
context.Response.BinaryWrite(File.ReadAllBytes(HttpContext.Current.Server.MapPath(_video.Location)));
File on external server:
var wc = new WebClient();
context.Response.BinaryWrite(wc.DownloadData(new Uri("http://mysite/video.mp4")));
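One caveat: both BinaryWrite calls above load the entire file into memory before sending it. For large videos, a streaming copy avoids that; a minimal sketch, assuming .NET 4.0+ and the same example URL:

var wc = new WebClient();
using (Stream remote = wc.OpenRead(new Uri("http://mysite/video.mp4")))
{
    // Copies in chunks; the whole file is never held in memory at once.
    remote.CopyTo(context.Response.OutputStream);
}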
Have you looked at Smooth Streaming?
Look at sample code here
I'm using the following code to grab a wmv file through a WebResponse. I'm using a thread to call this function:
static void GetPage(object data)
{
    // Cast the object to a ThreadInfo
    ThreadInfo ti = (ThreadInfo)data;
    // Request the URL
    WebResponse wr = WebRequest.Create(ti.url).GetResponse();
    // Display the value for the Content-Length header
    Console.WriteLine(ti.url + ": " + wr.Headers["Content-Length"]);
    string toBeSaved = @"C:\Users\Kevin\Downloads\TempFiles" + wr.ResponseUri.PathAndQuery;
    StreamWriter streamWriter = new StreamWriter(toBeSaved);
    MemoryStream m = new MemoryStream();
    Stream receiveStream = wr.GetResponseStream();
    using (StreamReader sr = new StreamReader(receiveStream))
    {
        while (sr.Peek() >= 0)
        {
            m.WriteByte((byte)sr.Read());
        }
        streamWriter.Write(sr.ReadToEnd());
        sr.Close();
        wr.Close();
    }
    streamWriter.Flush();
    streamWriter.Close();
    // streamReader.Close();
    // Let the parent thread know the process is done
    ti.are.Set();
    wr.Close();
}
The file seems to download just fine, but Windows Media Player cannot open it properly; it gives some silly error about not being able to support the file type.
What incredibly easy thing am I missing?
You just need to download it as binary instead of text. Here's a method that should do the trick for you.
public void DownloadFile(string url, string toLocalPath)
{
    byte[] result = null;
    byte[] buffer = new byte[4096];
    WebRequest wr = WebRequest.Create(url);
    WebResponse response = wr.GetResponse();
    Stream responseStream = response.GetResponseStream();
    MemoryStream memoryStream = new MemoryStream();
    int count = 0;
    do
    {
        count = responseStream.Read(buffer, 0, buffer.Length);
        memoryStream.Write(buffer, 0, count);
        if (count == 0)
        {
            break;
        }
    } while (true);
    result = memoryStream.ToArray();
    // FileMode.Create truncates any existing file, so a previous, longer
    // download cannot leave stale bytes at the end of the new one.
    FileStream fs = new FileStream(toLocalPath, FileMode.Create, FileAccess.Write);
    fs.Write(result, 0, result.Length);
    fs.Close();
    memoryStream.Close();
    responseStream.Close();
}
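A quick usage example, with a hypothetical URL and local path:

// Hypothetical call; substitute a real URL and a writable local path.
DownloadFile("http://example.com/media/video.wmv", @"C:\Temp\video.wmv");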
I do not understand why you are filling MemoryStream m one byte at a time, but then writing the sr to the file. At that point, I believe the sr is empty, and MemoryStream m is never used.
Below is some code I wrote to do a similar task. It reads a WebResponse in 32K chunks and dumps it directly to a file.
public void GetStream()
{
    // ASSUME: String URL is set to a valid URL.
    // ASSUME: String Storage is set to a valid filename.
    Stream response = WebRequest.Create(URL).GetResponse().GetResponseStream();
    using (FileStream fs = File.Create(Storage))
    {
        Byte[] buffer = new Byte[32 * 1024];
        int read = response.Read(buffer, 0, buffer.Length);
        while (read > 0)
        {
            fs.Write(buffer, 0, read);
            read = response.Read(buffer, 0, buffer.Length);
        }
    }
    // NOTE: Various Flush and Close of streams and storage not shown here.
}
You are using a StreamReader and a StreamWriter to transfer your stream, but those classes are for handling text. Your file is binary, and chances are that sequences of CR, LF, and CR LF will get clobbered when you transfer the data. How NUL characters are handled, I have no idea.