WCF: ZIP file generation - C#

I am uploading a file using a WCF service.
The client side code looks like this:
System.ServiceModel.EndpointAddress endPointAddress = new System.ServiceModel.EndpointAddress(address);
IFileStreamService proxy = new FileStreamServiceClient("FileTransfer", endPointAddress);
proxy.UploadFile(uploadReq); //upload the file
The server-side code includes:
public void UploadFile(FileUploadRequest uploadRequest)
{
    try
    {
        string targetFile = uploadRequest.getTargetFile();
        Stream sourceStream = uploadRequest.FileByStream;
        Log("Going to read stream from client");
        using (FileStream outfile = new FileStream(targetFile, FileMode.Create))
        {
            const int bufferSize = 65536; // 64K chunks
            byte[] buffer = new byte[bufferSize];
            int bytesRead = sourceStream.Read(buffer, 0, bufferSize);
            while (bytesRead > 0)
            {
                outfile.Write(buffer, 0, bytesRead);
                bytesRead = sourceStream.Read(buffer, 0, bufferSize);
            }
        }
    }
    catch (IOException ex)
    {
        // log and rethrow so the client sees the fault
        Log("Upload failed: " + ex.Message);
        throw;
    }
}
I am not doing any zipping on the client side, but on the server a ZIP file is being created with a name like this:
temp_c7dfbfd7-3495-43b6-a0ec-e46153cef72a.zip
The ZIP file is also corrupted. Can anybody point me to where things are going wrong?
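For reference, the FileUploadRequest type is not shown in the question, but streamed WCF uploads of this shape usually rely on a message contract whose only body member is the Stream, with the rest of the data carried in headers, plus a binding configured with transferMode="Streamed". The sketch below is an assumption of what such a contract might look like (the TargetFileName header is hypothetical), not the asker's actual definition:

using System.IO;
using System.ServiceModel;

// Hypothetical shape of FileUploadRequest; the real definition is not shown above.
[MessageContract]
public class FileUploadRequest
{
    [MessageHeader(MustUnderstand = true)]
    public string TargetFileName;   // presumably consumed by getTargetFile() on the server

    [MessageBodyMember(Order = 1)]
    public Stream FileByStream;     // the single body member: the raw file bytes
}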

Related

Zip file is getting corrupted after downloading from server in C#

request = MakeConnection(uri, WebRequestMethods.Ftp.DownloadFile, username, password);
response = (FtpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();

// This part of the code is used to write the read content from the server
using (StreamReader responseReader = new StreamReader(responseStream))
{
    using (var destinationStream = new FileStream(toFilenameToWrite, FileMode.Create))
    {
        byte[] fileContents = Encoding.UTF8.GetBytes(responseReader.ReadToEnd());
        destinationStream.Write(fileContents, 0, fileContents.Length);
    }
}
// This part of the code is used to write the read content from the server
using (var destinationStream = new FileStream(toFilenameToWrite, FileMode.Create))
{
    long length = response.ContentLength;
    int bufferSize = 2048;
    int readCount;
    byte[] buffer = new byte[2048];

    readCount = responseStream.Read(buffer, 0, bufferSize);
    while (readCount > 0)
    {
        destinationStream.Write(buffer, 0, readCount);
        readCount = responseStream.Read(buffer, 0, bufferSize);
    }
}
The former writes the content to the file, but when I try to open the file it says it is corrupted. The latter does the job perfectly when downloading ZIP files. Is there any specific reason why the former code doesn't work for ZIP files, when it works perfectly for text files?
byte[] fileContents = Encoding.UTF8.GetBytes(responseReader.ReadToEnd());
You are trying to interpret a binary ZIP file as UTF-8 text. That just cannot work.
For a correct approach, see Upload and download a binary file to/from FTP server in C#/.NET.
Use a BinaryWriter and pass it the FileStream:
// This part of the code is used to write the read content from the server
using (var destinationStream = new BinaryWriter(new FileStream(toFilenameToWrite, FileMode.Create)))
{
    long length = response.ContentLength;
    int bufferSize = 2048;
    int readCount;
    byte[] buffer = new byte[2048];

    readCount = responseStream.Read(buffer, 0, bufferSize);
    while (readCount > 0)
    {
        destinationStream.Write(buffer, 0, readCount);
        readCount = responseStream.Read(buffer, 0, bufferSize);
    }
}
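The BinaryWriter wrapper is optional here; its Write(byte[], int, int) overload simply passes the raw bytes through to the underlying stream, and the real fix is that no text decoding takes place. Assuming the same responseStream and toFilenameToWrite variables as above, and .NET 4 or later, a shorter equivalent sketch would be a direct stream copy:

// Copy the FTP response straight to disk, byte for byte, with no text decoding.
using (var destinationStream = new FileStream(toFilenameToWrite, FileMode.Create))
{
    responseStream.CopyTo(destinationStream);
}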
Here is my solution that worked for me (C#):
public IActionResult GetZip([FromBody] List<DocumentAndSourceDto> documents)
{
    List<Document> listOfDocuments = new List<Document>();
    foreach (DocumentAndSourceDto doc in documents)
        listOfDocuments.Add(_documentService.GetDocumentWithServerPath(doc.Id));

    using (var ms = new MemoryStream())
    {
        using (var zipArchive = new ZipArchive(ms, ZipArchiveMode.Create, true))
        {
            foreach (var attachment in listOfDocuments)
            {
                var entry = zipArchive.CreateEntry(attachment.FileName);
                using (var fileStream = new FileStream(attachment.FilePath, FileMode.Open))
                using (var entryStream = entry.Open())
                {
                    fileStream.CopyTo(entryStream);
                }
            }
        }

        ms.Position = 0;
        return File(ms.ToArray(), "application/zip");
    }

    throw new ErrorException("Can't zip files");
}
Don't miss the ms.Position = 0; line here.
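Two small notes on this answer: the true passed as the third argument to the ZipArchive constructor is the leaveOpen flag, which is what keeps the MemoryStream usable after the archive is disposed; and if you want the browser to suggest a download name, the ASP.NET Core File helper has an overload that accepts one. The file name below is just an example:

// Same as above, but also suggests a file name for the download.
return File(ms.ToArray(), "application/zip", "documents.zip");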

Upload file from WinRT to WCF

Help again please. I managed to upload a file from ASP.NET to my WCF service and it works like a charm. Now I want to do the same thing from WinRT, without success. My file upload service is based on this post: http://www.seesharpdot.net/?p=214. From ASP.NET I upload the file using this code:
string filePath = Server.MapPath("~/Files/Happy.jpg");
string fileName = "Happy.jpg";

ServiceReference1.FileMetaData metadata = new ServiceReference1.FileMetaData();
metadata.LocalFilename = fileName;
metadata.FileType = ".jpg";

fileStream = new FileInfo(filePath).OpenRead();
oService.UploadFile(metadata, fileStream);

byte[] buffer = new byte[2048];
int bytesRead = fileStream.Read(buffer, 0, 2048);
while (bytesRead > 0)
{
    fileStream.Write(buffer, 0, 2048);
    bytesRead = fileStream.Read(buffer, 0, 2048);
}
From WinRT I thought this would work, but it does not. No exception is thrown.
FileOpenPicker openPicker = new FileOpenPicker();
openPicker.ViewMode = PickerViewMode.Thumbnail;
openPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
openPicker.FileTypeFilter.Add(".jpg");
openPicker.FileTypeFilter.Add(".jpeg");
openPicker.FileTypeFilter.Add(".png");

StorageFile file = await openPicker.PickSingleFileAsync();
if (file != null)
{
    byte[] bytes = await GetByteFromFile(file);
    await App.ServiceInstance.UploadFileAsync(bytes);
}

// This is the method to convert the StorageFile to a byte[]
private async Task<byte[]> GetByteFromFile(StorageFile storageFile)
{
    var stream = await storageFile.OpenReadAsync();
    using (var dataReader = new DataReader(stream))
    {
        var bytes = new byte[stream.Size];
        await dataReader.LoadAsync((uint)stream.Size);
        dataReader.ReadBytes(bytes);
        return bytes;
    }
}
What is interesting is that my WCF service method only accepts a byte array (byte[]) as a parameter and ignores the message contract. Do I need to change my WCF service? How would you recommend I go about fixing this? Any help appreciated.
My WCF Service:
public void UploadFile(FileUploadMessage request)
{
    Stream fileStream = null;
    Stream outputStream = null;
    try
    {
        fileStream = request.FileByteStream;
        string rootPath = HttpContext.Current.Server.MapPath("~\\Files"); // ConfigurationManager.AppSettings["RootPath"].ToString();
        string newFileName = Path.Combine(rootPath, request.MetaData.LocalFileName);
        outputStream = new FileInfo(newFileName).OpenWrite();

        const int bufferSize = 1024;
        byte[] buffer = new byte[bufferSize];
        int bytesRead = fileStream.Read(buffer, 0, bufferSize);
        while (bytesRead > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
            bytesRead = fileStream.Read(buffer, 0, bufferSize);
        }
    }
    catch (IOException ex)
    {
        throw new FaultException<IOException>(ex, new FaultReason(ex.Message));
    }
    finally
    {
        if (fileStream != null)
        {
            fileStream.Close();
        }
        if (outputStream != null)
        {
            outputStream.Close();
        }
    }
}
I had to implement the same thing, but the proxy that WinRT generates for the library is different from the one generated for a desktop (console) application.
I had to take MTOM out of the binding and leave the WCF service parameter as a Stream type.
This still allowed me to upload the document as required. However, on the service I named the file after its MD5 checksum value. The Windows 8 app then sent another message to the service, passing the MD5 checksum (calculated on the WinRT device) along with the file metadata. The WCF service then looked for the file with that MD5 checksum and renamed it.
So it's a two-step process, as an immediate workaround, which I think I am happy with.
Happy to share the code for the MD5 checksum on the service and WinRT side if required.
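Since the checksum code is only offered rather than shown, here is a minimal sketch of how the MD5 of a file could be computed on the service side using System.Security.Cryptography (the WinRT side would use Windows.Security.Cryptography.Core instead). The method name is an assumption, not the answerer's actual code:

using System;
using System.IO;
using System.Security.Cryptography;

static string ComputeMd5(string filePath)
{
    using (var md5 = MD5.Create())
    using (var stream = File.OpenRead(filePath))
    {
        // Hash the file contents and render the result as a lowercase hex string.
        byte[] hash = md5.ComputeHash(stream);
        return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
    }
}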

Download to file and save to byte array

I am trying to download a file to my computer and at the same time save it to a byte array:
try
{
    var req = (HttpWebRequest)HttpWebRequest.Create(url);
    var fileStream = new FileStream(filePath,
        FileMode.Create, FileAccess.Write, FileShare.Write);
    using (var resp = req.GetResponse())
    {
        using (var stream = resp.GetResponseStream())
        {
            byte[] buffer = new byte[0x10000];
            int len;
            while ((len = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Do with the content whatever you want
                // ***YOUR CODE***
                MemoryStream memoryStream = new MemoryStream();
                if (len > 0)
                {
                    memoryStream.Write(buffer, 0, len);
                    len = stream.Read(buffer, 0, buffer.Length);
                }
                file = memoryStream.ToArray();
                fileStream.Write(buffer, 0, len);
            }
        }
    }
    fileStream.Close();
}
catch (Exception exc) { }
I noticed that this does not download the whole file.
I want to do it this way because I want to download a file and work with it at the same time.
Any idea why this problem happens?
There is a much easier way to get the file bytes, by using System.Net.WebClient:
private static byte[] DownloadFile(string absoluteUrl)
{
    using (var client = new System.Net.WebClient())
    {
        return client.DownloadData(absoluteUrl);
    }
}
Usage:
var bytes = DownloadFile(absoluteUrl);
The problem looks to be double-reading: you are putting different things into the memory stream and the file stream. It should be more like:
// declare file/memory stream here
while ((len = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    memoryStream.Write(buffer, 0, len);
    fileStream.Write(buffer, 0, len);
    // if you need to process "len" bytes, do it here
}
You might be able to lose "memoryStream" completely if you are processing the "len" bytes immediately. If it fits in memory, it may be easier to just use WebClient.DownloadData and then File.WriteAllBytes.
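Putting that advice together, a minimal sketch that downloads once, keeps a copy in memory, and writes the same bytes to disk could look like this (url and filePath stand in for the question's variables):

using System.IO;
using System.Net;

// Download once, keep the bytes in memory, and write them to disk.
byte[] fileBytes;
using (var client = new WebClient())
{
    fileBytes = client.DownloadData(url);
}
File.WriteAllBytes(filePath, fileBytes);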

Writing file from HttpWebRequest periodically vs. after download finishes?

Right now I am using this code to download files (with a Range header). Most of the files are large, and it is currently running at 99% CPU as the file downloads. Is there any way that the file can be written periodically so that it does not remain in RAM constantly?
private byte[] GetWebPageContent(string url, long start, long finish)
{
    byte[] result = new byte[finish];
    HttpWebRequest request;
    request = WebRequest.Create(url) as HttpWebRequest;
    //request.Headers.Add("Range", "bytes=" + start + "-" + finish);
    request.AddRange((int)start, (int)finish);

    using (WebResponse response = request.GetResponse())
    {
        return ReadFully(response.GetResponseStream());
    }
}
public static byte[] ReadFully(Stream stream)
{
    byte[] buffer = new byte[32768];
    using (MemoryStream ms = new MemoryStream())
    {
        while (true)
        {
            int read = stream.Read(buffer, 0, buffer.Length);
            if (read <= 0)
                return ms.ToArray();
            ms.Write(buffer, 0, read);
        }
    }
}
Instead of writing the data to a MemoryStream (which stores the data in memory), write the data to a FileStream (which stores the data in a file on disk).
byte[] buffer = new byte[32768];
using (FileStream fileStream = File.Create(path))
{
    while (true)
    {
        int read = stream.Read(buffer, 0, buffer.Length);
        if (read <= 0)
            break;
        fileStream.Write(buffer, 0, read);
    }
}
Using .NET 4.0:
using (FileStream fileStream = File.Create(path))
{
    stream.CopyTo(fileStream);
}
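Applied to the original method, a sketch of a version that streams the ranged response straight to disk instead of building a byte array might look like this (the method name and path parameter are just illustrative):

using System.IO;
using System.Net;

// Illustrative only: streams a ranged download straight to a file.
private void DownloadRangeToFile(string url, long start, long finish, string path)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.AddRange((int)start, (int)finish);

    using (WebResponse response = request.GetResponse())
    using (Stream responseStream = response.GetResponseStream())
    using (FileStream fileStream = File.Create(path))
    {
        // CopyTo reuses a single buffer (about 80 KB by default),
        // so the whole file never sits in RAM at once.
        responseStream.CopyTo(fileStream);
    }
}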

How to properly serve a PDF file

I am using .NET 3.5 ASP.NET. Currently my web site serves a PDF file in the following manner:
context.Response.WriteFile(@"c:\blah\blah.pdf");
This works great. However, I'd like to serve it via the context.Response.Write(char[], int, int) method.
So I tried sending out the file via
byte [] byteContent = File.ReadAllBytes(ReportPath);
ASCIIEncoding encoding = new ASCIIEncoding();
char[] charContent = encoding.GetChars(byteContent);
context.Response.Write(charContent, 0, charContent.Length);
That did not work (e.g. browser's PDF plugin complains that the file is corrupted).
So I tried the Unicode approach:
byte [] byteContent = File.ReadAllBytes(ReportPath);
UnicodeEncoding encoding = new UnicodeEncoding();
char[] charContent = encoding.GetChars(byteContent);
context.Response.Write(charContent, 0, charContent.Length);
which also did not work.
What am I missing?
You should not convert the bytes into characters; that is why it becomes "corrupted". Even though ASCII characters are stored in bytes, the actual ASCII character set is limited to 7 bits. Thus, converting a byte stream with the ASCIIEncoding will effectively strip the 8th bit from each byte.
The bytes should be written to the OutputStream stream of the Response instance.
Instead of loading all bytes from the file upfront, which could possibly consume a lot of memory, reading the file in chunks from a stream is a better approach. Here's a sample of how to read from one stream and then write to another:
void LoadStreamToStream(Stream inputStream, Stream outputStream)
{
    const int bufferSize = 64 * 1024;
    var buffer = new byte[bufferSize];
    while (true)
    {
        var bytesRead = inputStream.Read(buffer, 0, bufferSize);
        if (bytesRead > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
        }
        if ((bytesRead == 0) || (bytesRead < bufferSize))
            break;
    }
}
You can then use this method to load the contents of your file directly into Response.OutputStream:
LoadStreamToStream(fileStream, Response.OutputStream);
Better still, here's a method opening a file and loading its contents to a stream:
void LoadFileToStream(string inputFile, Stream outputStream)
{
    using (var streamInput = new FileStream(inputFile, FileMode.Open, FileAccess.Read))
    {
        LoadStreamToStream(streamInput, outputStream);
        streamInput.Close();
    }
}
You may also need to set the ContentType by doing something like this:
Response.ContentType = "application/octet-stream";
Building upon Peter Lillevold's answer, I just made some extension methods out of his functions above.
public static void WriteTo(this Stream inputStream, Stream outputStream)
{
    const int bufferSize = 64 * 1024;
    var buffer = new byte[bufferSize];
    while (true)
    {
        var bytesRead = inputStream.Read(buffer, 0, bufferSize);
        if (bytesRead > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
        }
        if ((bytesRead == 0) || (bytesRead < bufferSize))
            break;
    }
}
public static void WriteToFromFile(this Stream outputStream, string inputFile)
{
    using (var inputStream = new FileStream(inputFile, FileMode.Open, FileAccess.Read))
    {
        inputStream.WriteTo(outputStream);
        inputStream.Close();
    }
}
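For completeness, a hedged usage sketch inside an ASP.NET page or handler could look like the following. ReportPath is the variable from the question, and application/pdf is the specific MIME type for PDF content (the answer above used the generic application/octet-stream, which typically prompts a download instead of inline viewing):

// Hypothetical ASP.NET usage: stream a PDF from disk to the response.
Response.ContentType = "application/pdf";
Response.OutputStream.WriteToFromFile(ReportPath);
Response.Flush();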
