I'm trying to upload a file through a SOAP request. The upload itself works perfectly, but I can't get any progress reporting for how much of the request has been uploaded.
You could try sending the file up in "chunks", say 1 MB at a time, rather than sending it all at once. That way, as each chunk completes, you can update the progress.
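For illustration, a minimal sketch of that chunked approach; the file path, chunk size, and the UploadChunk/ReportProgress calls are placeholders for whatever transport and progress callback you actually use:
const int chunkSize = 1024 * 1024; // 1 MB per chunk, as an example
byte[] chunk = new byte[chunkSize];
using (var file = File.OpenRead(filePath)) // filePath: placeholder
{
    long totalSent = 0;
    int read;
    while ((read = file.Read(chunk, 0, chunk.Length)) > 0)
    {
        UploadChunk(chunk, read);               // hypothetical: send one chunk via your transport
        totalSent += read;
        ReportProgress(totalSent, file.Length); // hypothetical: update the UI/progress
    }
}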
I can answer my own question now.
I'm no longer using SOAP to upload my files; I'm using HttpWebRequest instead.
1) Yes, I upload my large files in chunks (each chunk is 1 MB).
2) Each chunk (1 MB) reports progress once per buffer write (4 KB in my case).
So there is an outer loop, conceptually foreach (Chunk in File) { ... }, and inside it there is an inner loop that writes the chunk to the HttpWebRequest:
const int bufferSize = 4096;
byte[] buffer = new byte[bufferSize];
using (Stream stm = request.GetRequestStream())
{
    int bytesRead;
    // read the current 1 MB chunk from the memory stream in 4 KB blocks
    while ((bytesRead = memoryStream.Read(buffer, 0, bufferSize)) > 0)
    {
        stm.Write(buffer, 0, bytesRead);
        remainingOfChunkWithReq -= bytesRead;
        // Send progress here
    }
}
Then I complete the request and read the response.
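For completeness, a rough sketch of that per-chunk round trip, assuming one HttpWebRequest per chunk; url and chunkLength are placeholders:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = "POST";
request.AllowWriteStreamBuffering = false; // write to the wire as we go, so progress is meaningful
request.ContentLength = chunkLength;       // length of this 1 MB chunk
// ... write the chunk to request.GetRequestStream() as shown above ...
using (var response = (HttpWebResponse)request.GetResponse())
{
    // inspect response.StatusCode for this chunk before moving on to the next one
}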
I have the following code that downloads a file from an FTP server:
var req = (FtpWebRequest)WebRequest.Create(ftp_addr + file_path);
req.Credentials = new NetworkCredential(...);
req.Method = WebRequestMethods.Ftp.DownloadFile;
using (var resp = req.GetResponse())
using (var strm = resp.GetResponseStream())
using (var mem_strm = new MemoryStream())
{
//The Web Response stream doesn't support seek, so we have to do a buffered read into a memory stream
byte[] buffer = new byte[2048];
int red_byts = 0;
do
{
red_byts = strm.Read(buffer, 0, 2048);
mem_strm.Write(buffer, 0, red_byts);
}
while (strm.CanRead && red_byts > 0);
//Reset the stream to position 0 for reading
mem_strm.Position = 0;
//Now I've got a mem stream that I can use
}
Since the raw stream returned from GetResponseStream cannot be sought (you cannot perform a Seek on it), it seems to me that this code is actually performing a request to the FTP server for the next chunk of the file and copying it into memory. Am I correct, or is the entire response downloaded when you call GetResponseStream?
I just want to know so I can correctly apply awaits with ReadAsync calls in asynchronous methods that make use of FTP downloading. My intuition tells me to change the line:
red_byts = strm.Read(...);
to
red_byts = await strm.ReadAsync(...);
The documentation for WebResponse.GetResponseStream doesn't seem to specify, nor does the documentation for FtpWebResponse.GetResponseStream.
The raw stream should be a network stream; to verify this, check out strm.GetType().Name.
It seems to me that this code is actually performing a request to the FTP server for the next chunk of the file and copying it into memory. Am I correct, or is the entire response downloaded when you call GetResponseStream?
Neither.
It is not sending separate requests for each call to Read/ReadAsync; rather, there is only one request, and the stream represents the body of the one response.
It is also not downloading the entire response before returning from GetResponseStream. Rather, the stream represents the download. Some buffering is going on - as the server is sending data, the network stack and the BCL are reading it in for you - but there's no guarantee that it has all been downloaded by the time you start reading the stream.
I just want to know so I can correctly apply awaits with ReadAsync calls in asynchronous methods that make use of FTP downloading. My intuition tells me to [use async]
Yes, you should use asynchronous reads. If some data is already buffered, they may complete synchronously; otherwise, they will need to wait until the server sends more data.
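In other words, the read loop from the question could look roughly like this with awaits (a sketch only, assuming it lives inside an async method):
byte[] buffer = new byte[2048];
int red_byts;
// await each read; it completes synchronously when data is already buffered
while ((red_byts = await strm.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
    await mem_strm.WriteAsync(buffer, 0, red_byts);
}
mem_strm.Position = 0; // rewind for reading, as in the original code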
I have a WCF service that uploads a document using the Stream class.
After this, I want to get the size of the document (the length of the stream) to update the FileSize file attribute.
But when I do this, WCF throws an exception:
Document Upload Exception: System.NotSupportedException: Specified method is not supported.
at System.ServiceModel.Dispatcher.StreamFormatter.MessageBodyStream.get_Length()
at eDMRMService.DocumentHandling.UploadDocument(UploadDocumentRequest request)
Can anyone help me solve this?
After this, I want to get the size of the document (the length of the stream) to update the FileSize file attribute.
No, don't do that. If you are writing a file, then just write the file. At the simplest:
using(var file = File.Create(path)) {
source.CopyTo(file);
}
or before 4.0:
using(var file = File.Create(path)) {
byte[] buffer = new byte[8192];
int read;
while((read = source.Read(buffer, 0, buffer.Length)) > 0) {
file.Write(buffer, 0, read);
}
}
(which does not need to know the length in advance)
Note that some WCF options (full message security, etc.) require the entire message to be validated before processing, so they can never truly stream. If the size is huge, I suggest you instead use an API where the client splits the file and sends it in pieces, which you then reassemble at the server.
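As a rough sketch of the reassembly side (the operation name, parameters, and uploadFolder are illustrative, not an existing contract):
// Hypothetical service operation: the client sends the file as a series of
// calls, each carrying the file name and the next chunk of bytes, in order.
public void UploadChunk(string fileName, byte[] chunk)
{
    string path = Path.Combine(uploadFolder, fileName); // uploadFolder: assumed server-side config
    using (var file = new FileStream(path, FileMode.Append, FileAccess.Write))
    {
        file.Write(chunk, 0, chunk.Length);
    }
}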
If the stream doesn't support seeking, you cannot find its length using Stream.Length.
The alternative is to copy the stream to a byte array and take its cumulative length. That involves processing the whole stream first; if you don't want that, you should add a stream length parameter to your WCF service interface.
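One possible shape for that, assuming a streamed message contract (the type and member names here are illustrative):
using System.IO;
using System.ServiceModel;

[MessageContract]
public class UploadDocumentRequest
{
    [MessageHeader]
    public string FileName { get; set; }

    [MessageHeader]
    public long FileSize { get; set; }   // client supplies the length up front

    [MessageBodyMember]
    public Stream Document { get; set; } // the streamed body itself
}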
When using a standard <input type="file" /> on an MVC 3 site, you can receive the file in your action method by creating an input parameter of type HttpPostedFile and setting the form to enctype="multipart/form-data".
One of the problems of this approach is that the request does not complete and is not handed off to your action method until the entire contents of the file have been uploaded.
I would like to do some things to that file as it is being uploaded to the server. Basically, I want to asynchronously receive the data as it comes in and then programmatically handle it byte by byte.
To accomplish this, I imagine you need to handle that part of the request in an HttpModule, or perhaps a custom HttpHandler. I am familiar with how those work, but I am not familiar with how to receive the file upload data asynchronously as it comes in.
I know this is possible because I have worked with 3rd party components in the past that do this (normally so they can report upload progress, or cache the data to disk to avoid iis/asp.net memory limitations). Unfortunately all the components I have used are closed source so I can't peek inside and see what they are doing.
I am not looking for code, but can someone get me pointed in the right direction here?
Using WCF, you can send file streams to and from your service.
Here is the service side receive code I use:
int chunkSize = 2048;
byte[] buffer = new byte[chunkSize];
using (System.IO.FileStream writeStream =
    new System.IO.FileStream(file.FullName, System.IO.FileMode.CreateNew, System.IO.FileAccess.Write))
{
    int bytesRead;
    // read bytes from the incoming stream and write them to the output file
    while ((bytesRead = request.FileByteStream.Read(buffer, 0, chunkSize)) > 0)
    {
        writeStream.Write(buffer, 0, bytesRead);
    }
}
If that looks like what you want, check out the CodeProject article File Transfer Progress; it goes into a lot of detail, and my code is loosely based on it.
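For reference, the matching client-side send could look roughly like this (the request type and proxy method are placeholders for whatever your generated service contract exposes):
using (var fileStream = File.OpenRead(localPath))
{
    var request = new UploadRequest                  // hypothetical message contract type
    {
        FileName = Path.GetFileName(localPath),
        FileByteStream = fileStream                  // WCF streams this to the service
    };
    client.UploadFile(request);                      // hypothetical generated proxy call
}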
I'm using the WebClient object to download a file like so:
Stream strm = Client.OpenRead(url);
strm.ReadTimeout = 30000;
byte[] buf = new byte[2000];
int read;
while ((read = strm.Read(buf, 0, buf.Length)) > 0)
{
    fout.Write(buf, 0, read);
}
Here the URL points to an S3 bucket. In some cases, the download fails with a timeout at exactly 2 GB. Is this a network issue, or is there something I could change in the code?
Any ideas appreciated.
I believe WebClient will read the file into memory, and you're probably running into process size limitations.
What you'll want to use is WebClient.DownloadFile.
I believe this will work better for you!
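For example, something along these lines (url and localPath are placeholders); DownloadFile writes straight to disk instead of buffering the whole response:
using (var client = new WebClient())
{
    // streams the response directly to the target file
    client.DownloadFile(url, localPath);
}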
I am trying to join a number of binary files that were split during download. The requirement stemmed from the project http://asproxy.sourceforge.net/. In this project, the author allows you to download files by providing a URL.
The problem is that my server does not have enough memory to keep a file larger than 20 MB in memory. To solve this, I modified the code so it does not download files larger than 10 MB; if the file is larger, it lets the user download the first 10 MB, and the user must then continue the download and hopefully get the next 10 MB. I have all of this working, except that when the user joins the files they downloaded, I end up with corrupt files; as far as I can tell, something is either being added or removed during the download.
I am currently joining the files by reading all of them and then writing them to one file. This should work, since I am reading and writing bytes. The code I used to join the files is listed here: http://www.geekpedia.com/tutorial201_Splitting-and-joining-files-using-C.html
I do not have the exact code with me at the moment; as soon as I am home I will post it if anyone is willing to help out.
Please let me know if I am missing anything, or if there is a better way to do this, i.e. what I could use as an alternative to a memory stream. The source code for the original project that I modified can be found here: http://asproxy.sourceforge.net/download.html (note that I am using version 5.0). The file I modified is called WebDataCore.cs; I changed line 606 so that it only reads until 10 MB of data has been loaded and then continues execution.
Let me know if there is anything I missed.
Thanks
You shouldn't split for memory reasons... the reason to split is usually to avoid having to re-download everything in case of failure. If memory is an issue, you are doing it wrong... you shouldn't be buffering in memory, for example.
The easiest way to download a file is simply:
using(WebClient client = new WebClient()) {
client.DownloadFile(remoteUrl, localPath);
}
Re your split/join code - again, the problem is that you are buffering everything in memory; File.ReadAllBytes is a bad thing unless you know you have small files. What you should have is something like:
byte[] buffer = new byte[8192]; // why not...
int read;
while((read = inStream.Read(buffer, 0, buffer.Length)) > 0)
{
outStream.Write(buffer, 0, read);
}
This uses a moderate buffer to pump data between the two as a stream. A lot more efficient. The loop says:
try to read some data (at most, the buffer-size)
(this will read at least 1 byte, or we have reached the end of the stream)
if we read something, write this many bytes from the buffer to the output
In the end I found that by using an FTP request I was able to get around the memory issue, and the file is saved correctly.
Thanks for all the help
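For anyone landing here later, the FTP approach can look roughly like this (the server address, credentials, and paths are placeholders):
var req = (FtpWebRequest)WebRequest.Create("ftp://example.com/path/file.bin");
req.Method = WebRequestMethods.Ftp.DownloadFile;
req.Credentials = new NetworkCredential("user", "password");

using (var resp = req.GetResponse())
using (var ftpStream = resp.GetResponseStream())
using (var file = File.Create(@"C:\temp\file.bin"))
{
    ftpStream.CopyTo(file); // streams to disk, nothing large is held in memory
}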
That example loads each entire chunk into memory; instead, you could do something like this:
int bufSize = 1024 * 32;
byte[] buffer = new byte[bufSize];
using (FileStream outputFile = new FileStream(OutputFileName, FileMode.OpenOrCreate,
    FileAccess.Write, FileShare.None, bufSize))
{
    foreach (string inputFileName in inputFiles)
    {
        // open each part read-only and append its contents to the output file
        using (FileStream inputFile = new FileStream(inputFileName, FileMode.Open,
            FileAccess.Read, FileShare.Read, buffer.Length))
        {
            int bytesRead;
            while ((bytesRead = inputFile.Read(buffer, 0, buffer.Length)) != 0)
            {
                outputFile.Write(buffer, 0, bytesRead);
            }
        }
    }
}