I've created a method in my web API that returns a zip file to the front end (Angular/TypeScript), where the browser should download it. The issue is that the downloaded file appears to contain data, judging by its size in KB, but when I try to extract it the archive is reported as empty. From a bit of research this is most likely down to the file being corrupt, but I want to find out where it's going wrong. Here's my code:
WebApi:
I won't show the controller, as it just takes the inputs and passes them to this method. Each DownloadFileResult passed in holds the file contents as a byte[] in its File property.
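For reference, the shape of DownloadFileResult as it's used below is roughly this (a sketch based on how it's consumed; the real class may have more members):

public class DownloadFileResult
{
    public string FileName { get; set; } // name used for the zip entry
    public byte[] File { get; set; }     // raw file contents
}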
public FileContentResult CreateZipFile(IEnumerable<DownloadFileResult> files)
{
    using (var compressedFileStream = new MemoryStream())
    {
        using (var zipArchive = new ZipArchive(compressedFileStream, ZipArchiveMode.Update))
        {
            foreach (var file in files)
            {
                var zipEntry = zipArchive.CreateEntry(file.FileName);
                using (var entryStream = zipEntry.Open())
                {
                    entryStream.Write(file.File, 0, file.File.Length);
                }
            }
        }
        return new FileContentResult(compressedFileStream.ToArray(), "application/zip");
    }
}
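As an aside, the more usual pattern for building a zip in a MemoryStream is ZipArchiveMode.Create with leaveOpen: true, which explicitly keeps the stream usable after the archive is disposed. A minimal sketch of that variant (not necessarily the cause of the problem here):

using (var compressedFileStream = new MemoryStream())
{
    // leaveOpen: true so disposing the archive flushes it without closing the MemoryStream
    using (var zipArchive = new ZipArchive(compressedFileStream, ZipArchiveMode.Create, leaveOpen: true))
    {
        foreach (var file in files)
        {
            var zipEntry = zipArchive.CreateEntry(file.FileName);
            using (var entryStream = zipEntry.Open())
            {
                entryStream.Write(file.File, 0, file.File.Length);
            }
        }
    }
    return new FileContentResult(compressedFileStream.ToArray(), "application/zip");
}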
The CreateZipFile method appears to work, in that it generates a result containing data. Here's my front-end code:
let fileData = this._filePaths;
this._fileStorageProxy.downloadFile(Object.entries(fileData).map(([key, val]) => val), this._pId).subscribe(result => {
    let data = result.data.fileContents;
    const blob = new Blob([data], {
        type: 'application/zip'
    });
    const url = window.URL.createObjectURL(blob);
    window.open(url);
});
The front-end code then shows a zip file being downloaded which, as I say, appears to have data judging by its size, but I can't extract it.
Update
I tried writing compressedFileStream to a file on my local machine; it creates a valid zip file and I can extract the files within it. This leads me to believe the problem is in the front end, or at least in what the front-end code is receiving.
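(For anyone wanting to reproduce that check, something along these lines works; the path is just a placeholder:)

// Dump the in-memory archive to disk to verify the zip itself is valid
System.IO.File.WriteAllBytes(@"C:\temp\test.zip", compressedFileStream.ToArray());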
2nd Update
OK, it turns out this is specific to how we do things here. The request goes through our platform, which for downloads can only handle a BinaryTransferObject, and I needed to hit a different endpoint. After tweaking the code to stop returning a FileContentResult, hitting the right endpoint, and making the URL a plain anchor (href) link, it's now working.
Related
I work with large XML files (~1,000,000 lines, 34 MB) that are stored in a ZIP archive. The XML file is used at runtime to store and load app settings and measurements. It gets loaded with this function:
public static void LoadFile(string path, string name)
{
    using (var file = File.OpenRead(path))
    {
        using (var zip = new ZipArchive(file, ZipArchiveMode.Read))
        {
            var foundConfigurationFile = zip.Entries.First(x => x.FullName == ConfigurationFileName);
            using (var stream = new StreamReader(foundConfigurationFile.Open()))
            {
                var xmlSerializer = new XmlSerializer(typeof(ProjectConfiguration));
                var newObject = xmlSerializer.Deserialize(stream);
                CurrentConfiguration = null;
                CurrentConfiguration = newObject as ProjectConfiguration;
                AddRecentFiles(name, path);
            }
        }
    }
}
This works most of the time.
However, some files don't get read all the way to the end, and I get an error saying the file contains invalid XML. I used
foundConfigurationFile.ExtractToFile();
and found that the extracted file stops at around line 800,000. But this only happens inside this code; when I open the file in an editor, everything is there.
It looks like the zip doesn't get loaded correctly, or at least not completely.
Am I running into some limitation? Or is there an error in my code that I can't find?
The file is saved via:
using (var file = File.OpenWrite(Path.Combine(dirInfo.ToString(), fileName.ToString()) + ".pwe"))
{
    var zip = new ZipArchive(file, ZipArchiveMode.Create);
    var configurationEntry = zip.CreateEntry(ConfigurationFileName, CompressionLevel.Optimal);
    var stream = configurationEntry.Open();
    var xmlSerializer = new XmlSerializer(typeof(ProjectConfiguration));
    xmlSerializer.Serialize(stream, CurrentConfiguration);
    stream.Close();
    zip.Dispose();
}
Update:
The problem was the File.OpenWrite() method.
If you try to overwrite a file with this method and the new content is shorter than the old file, you end up with a mix of the old and new data: as stated in the docs, File.OpenWrite() doesn't truncate the existing file first.
To do it correctly it was necessary to use the File.Create() method instead, because that method truncates the existing file first.
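A minimal sketch of the corrected save path, using the same types and constants as above (I've also wrapped the archive and entry stream in using blocks so everything is flushed deterministically):

using (var file = File.Create(Path.Combine(dirInfo.ToString(), fileName.ToString()) + ".pwe"))
using (var zip = new ZipArchive(file, ZipArchiveMode.Create))
{
    var configurationEntry = zip.CreateEntry(ConfigurationFileName, CompressionLevel.Optimal);
    using (var stream = configurationEntry.Open())
    {
        // File.Create truncates any existing file, so no stale bytes are left behind
        var xmlSerializer = new XmlSerializer(typeof(ProjectConfiguration));
        xmlSerializer.Serialize(stream, CurrentConfiguration);
    }
}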
I have a web interface where users can choose one or more files from their local computer and upload them to a central location, in this case Azure Blob Storage. I have a check in my C# code to validate that the filename ends with .bin. The receiving method in C# takes an array of HttpPostedFileBase.
I want to allow users to choose a zip file instead. In my C# code, I iterate through the contents of the zip file and check each filename to verify that it ends with .bin.
However, when I iterate through the zip file, the ContentLength of the HttpPostedFileBase object becomes 0 (zero), and when I later upload the zip file to Azure, it is empty.
How can I make a check for filename endings without manipulating the zipfile?
I have tried to DeepCopy a single object of HttpPostedFileBase but it is not serializable.
I've tried to make a copy of the array, but nothing works. It seems that everything is passed by reference rather than by value. Some examples of my code follow. Yes, I tried the lines individually.
private static bool CanUploadBatchOfFiles(HttpPostedFileBase[] files)
{
    var filesCopy = new HttpPostedFileBase[files.Length];
    // Neither of these lines works
    Array.Copy(files, 0, filesCopy, 0, files.Length);
    Array.Copy(files, filesCopy, files.Length);
    files.CopyTo(filesCopy, 0);
}
This is how I iterate through the zipfile
foreach (var file in filesCopy)
{
    if (file.FileName.EndsWith(".zip"))
    {
        using (ZipArchive zipFile = new ZipArchive(file.InputStream))
        {
            foreach (ZipArchiveEntry entry in zipFile.Entries)
            {
                if (entry.Name.EndsWith(".bin"))
                {
                    // Some code left out
                }
            }
        }
    }
}
I solved my problem. I had to do two separate things:
First, I no longer copy the array. Instead, for each zip file, I copy just the stream. This keeps ContentLength at whatever length it originally was.
The second thing I did was to reset the stream position after looking inside the zip file. Without this, the zip file I upload to Azure Blob Storage ends up empty.
private static bool CanUploadBatchOfFiles(HttpPostedFileBase[] files)
{
    foreach (var file in files)
    {
        if (file.FileName.EndsWith(".zip"))
        {
            // Part one of the solution
            Stream fileCopy = new MemoryStream();
            file.InputStream.CopyTo(fileCopy);
            using (ZipArchive zipFile = new ZipArchive(fileCopy))
            {
                foreach (ZipArchiveEntry entry in zipFile.Entries)
                {
                    // Code left out
                }
            }
            // Part two of the solution
            file.InputStream.Position = 0;
        }
    }
    return true;
}
I am facing an issue while downloading/exporting an XML file built from a C# model to the local machine running the browser (I have a front end for it).
However, I am able to export the C# model to XML and save it to a directory on the server.
I am using the code below for it:
var gradeExportDto = Mapper.Map<GradeExportDto>(responseGradeDto);
System.Xml.Serialization.XmlSerializer writer = new System.Xml.Serialization.XmlSerializer(gradeExportDto.GetType());
var path = _configuration.GetValue<string>(AppConstants.IMPORT_EXPORT_LOCAL_URL) + "\\" + responseGradeDto.Code + "_" + DateTime.UtcNow.ToString("yyyy-MM-dd") + ".xml";
System.IO.FileStream file = System.IO.File.Create(path);
writer.Serialize(file, gradeExportDto);
file.Close();
Angular code:
onExport(selectedData: any): void {
    this.apiService.post(environment.api_url_master, 'ImportExport/ExportGrade/', selectedData).subscribe(result => {
        this.translateService.get('GradeExportSuccess').subscribe(value => this.toastr.success(value));
    }, err => {
        this.toastr.error(err.message);
    });
}
I need help getting this file downloaded to the local system the browser is running on.
Please let me know if more information is required from my side.
NOTE: I am not trying to download an existing file. I have a C# model which I need to convert to XML and then download to my local machine. I am able to convert it to XML, but not to download it locally.
You cannot save anything directly to a client machine. All you can do is provide the file as a response to a request, which will then generally prompt a download dialog on the client, allowing them to choose to save it somewhere on their local machine.
What #croxy linked you to is how to return such a response. If the issue is that the answer is using an existing file, you can disregard that part. The idea is that you're returning a byte[] or stream, regardless of where that's actually coming from. If you're creating the XML in memory, then you can simply do something like:
return File(memoryStream.ToArray(), "application/xml", "file.xml");
Instead of serializing your data into a file, serialize it into a stream, e.g. a MemoryStream, and return a File() from your action:
public IActionResult GetXml()
{
    var gradeExportDto = Mapper.Map<GradeExportDto>(responseGradeDto);
    var writer = new System.Xml.Serialization.XmlSerializer(gradeExportDto.GetType());
    var stream = new MemoryStream();
    writer.Serialize(stream, gradeExportDto);
    var fileName = responseGradeDto.Code + "_" + DateTime.UtcNow.ToString("yyyy-MM-dd") + ".xml";
    return File(stream.ToArray(), "application/xml", fileName);
}
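A small variant avoids the extra buffer copy from ToArray() by rewinding the stream and returning it directly; this assumes ASP.NET Core's File(Stream, contentType, fileName) overload, which disposes the stream for you:

stream.Position = 0;
return File(stream, "application/xml", fileName);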
I have a multi-volume archive stored in Azure Blob Storage that is split into a series of zips named like this: Archive-Name.zip.001, Archive-Name.zip.002, and so on up to Archive-Name.zip.010. Each file is 250 MB and contains hundreds of PDFs.
Currently we are trying to iterate through each archive part and extract the PDFs. This works except when the last PDF in a part has been split across two archive parts; ZipFile in C# is unable to process the split file and throws an exception.
We tried reading all the archive parts into a single MemoryStream and then extracting the files (roughly as sketched below); however, the combined stream then exceeds the 2 GB MemoryStream limit, so this method does not work either.
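For context, the single-stream attempt amounts to concatenating the parts in order and treating the result as one archive, roughly like this (ListFileItems, GetMemoryStreamFromFile and folderName are the same helpers and values used in the code further down):

// Concatenate the split parts (.zip.001, .zip.002, ...) in order into one stream
var combined = new MemoryStream();
foreach (var part in ListFileItems(folderName).Where(i => i.Name.Contains(".zip")).OrderBy(i => i.Name))
{
    using (var partStream = GetMemoryStreamFromFile(folderName, part.Name))
    {
        partStream.CopyTo(combined);
    }
}
combined.Seek(0, SeekOrigin.Begin);
// In practice this fails once the combined size passes the ~2 GB MemoryStream limit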
It is not feasible to download the archive into a machine's memory, extract it, and then upload the PDFs to a new file. The extraction needs to be done in Azure, where the program will run.
This is the code we are currently using - it is unable to handle PDFs split between two archive parts.
public static void UnzipTaxForms(TextWriter log, string type, string fiscalYear)
{
    var folderName = "folderName";
    var outPutContainer = GetContainer("containerName");
    CreateIfNotExists(outPutContainer);

    // Find all the .zip.* parts in the source folder
    var fileItems = ListFileItems(folderName);
    fileItems = fileItems.Where(i => i.Name.Contains(".zip")).ToList();

    foreach (var file in fileItems)
    {
        // Each part is read into memory and treated as its own archive
        using (var ziped = ZipFile.Read(GetMemoryStreamFromFile(folderName, file.Name)))
        {
            foreach (var zipEntry in ziped)
            {
                using (var outPutStream = new MemoryStream())
                {
                    // Extract the entry and upload it as a block blob
                    zipEntry.Extract(outPutStream);
                    var blockblob = outPutContainer.GetBlockBlobReference(zipEntry.FileName);
                    outPutStream.Seek(0, SeekOrigin.Begin);
                    blockblob.UploadFromStream(outPutStream);
                }
            }
        }
    }
}
Another note. We are unable to change the way the multi-volume archive is generated. Any help would be appreciated.
I have a MIME file (not an e-mail) that has a multipart body. One of the parts is XML while the other is application/pdf. When I try to save the PDF, it will not open. I am probably just not doing it correctly (a file does get saved, but Adobe says it is corrupt when I try to open it).
I am using the following code: (NOTE: In this snippet, I am simply retrieving the information from the file and then saving it to a database. I later extract the data from the database and create the file. I know it is not the saving to/from the DB that is the problem, as that has been thoroughly tested. It is this method that is causing my problem.)
foreach (var part in _mimeMessage.BodyParts)
{
    if (part is MimePart)
    {
        var p = part as MimePart;
        if (p.ContentId == name)
        {
            using (var stream = new System.IO.MemoryStream())
            {
                p.ContentObject.WriteTo(stream);
                return stream.ToArray();
            }
        }
    }
}
Is there something I am missing in doing this?
You are saving the encoded content. You need to save the decoded content. Like this:
p.ContentObject.DecodeTo(stream);
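In the context of the loop above, the inner using block then becomes roughly:

using (var stream = new System.IO.MemoryStream())
{
    // DecodeTo writes the decoded content (e.g. base64-decoded) instead of the raw encoded text
    p.ContentObject.DecodeTo(stream);
    return stream.ToArray();
}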
It turns out the issue is that the files that I had were "double encoded" using base64. I got help from someone on the MimeKit forums, and here is the code that ended up working for me.
foreach (var attachment in _mimeMessage.BodyParts.OfType<MimePart>())
{
    if (attachment.ContentId != name)
        continue;

    using (var stream = new System.IO.MemoryStream()) // File.Create(@"C:\Client Test Data\Alert Files\" + name))
    {
        using (var filtered = new FilteredStream(stream))
        {
            // The content is base64-encoded twice: DecodeTo removes one layer,
            // and the base64 decoder filter removes the other
            filtered.Add(DecoderFilter.Create("base64"));
            attachment.ContentObject.DecodeTo(filtered);
            return stream.ToArray();
        }
    }
}