Difference between SendToZip and C# CreateFromDirectory zipfile

I am using RestSharp to send a POST request. The POST request contains a zip file along with the following form parameters:
request.AddParameter("username", this.username, ParameterType.GetOrPost);
request.AddParameter("userid", this.userid, ParameterType.GetOrPost);
request.AddParameter("projectid", this.projectid, ParameterType.GetOrPost);
//add the file
request.AddFile("os_serverpackage", this.filelocation);
request.AlwaysMultipartFormData = true;
In C# I am sending the request synchronously.
The file is making it to the server (we can see it coming through), so we don't think it is a RestSharp issue.
The problem we are having is that the zip file is not being handled properly by our server (a Node-backed server running on an Ubuntu machine).
So we think the problem is the way we're using C# libraries to create the zip file.
The problem: when I send the same document as a zip file created with Windows' built-in SendToZip ("Send to → Compressed (zipped) folder") feature, it works. However, when I create the zip using C#'s native System.IO.Compression.ZipFile.CreateFromDirectory method, the server fails to respond.
What is the difference between the two?
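For reference, our entire zip-creation step is the single call below (a minimal sketch; the paths are placeholders, not our real ones):
// Requires the System.IO.Compression.FileSystem assembly (.NET 4.5+).
// An overload also takes a CompressionLevel and an includeBaseDirectory flag.
ZipFile.CreateFromDirectory(@"C:\work\package", @"C:\work\package.zip");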
We're not experts on zip compression, or on using .NET to compress files, but we've done some basic troubleshooting. When we open one of these .NET-produced zip files on a Mac (a *nix machine of sorts), the zip is turned into a CPGZ and things look odd, as outlined in this link: http://osxdaily.com/2013/02/13/open-zip-cpgz-file/.
Another option we have is to change the way our node server handles the zip files. The server at the receiving end of the POST request is using adm-zip to extract the zip file.
Do any compression experts have any tips on ways to leverage these libraries to ensure that our zip file works cross platform?

Seems to me that you're missing this (caveat: I'm not familiar with RestSharp):
request.AddHeader("Content-Type", "application/zip, application/octet-stream");
I know that Linux servers tend to be fussy about correct Content-Type HTTP headers, and if you don't set one correctly the file may not be recognized at all. At the very least, use something like Fiddler2 to check whether your messages are being transmitted correctly.
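Depending on your RestSharp version there may also be an AddFile overload that takes a content type directly; if so, something like this would attach it to the file part itself (a sketch, not verified against your version):
// Hypothetical: pass the MIME type with the file part.
request.AddFile("os_serverpackage", this.filelocation, "application/zip");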

Related

WebClient doing something weird with large file

I have a Varnish server in front of a S3 bucket. An API will generate a private URL and allow me to download private files of this bucket through the Varnish server.
Whenever I download a 500MB file directly from the bucket or through the Varnish-server in Chrome, everything works fine.
When I move the same logic to a C# WebClient (with the proxy set to null and only a User-Agent header), downloading directly from the bucket works fine. When I change the URL to the Varnish server, things start to topple over... It stops receiving the file at exactly 104.640KB every single time.
I get an IOException saying the stream ended unexpectedly. I've tried DownloadData, DownloadFile, and their Async counterparts. It simply never finishes the download.
I've gone back and forth from .NET 2.0 all the way to 4.5.1. Does anyone have a clue why this happens?
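For reference, the WebClient setup described above amounts to roughly this (URL, user agent, and target path are placeholders):
using (var client = new WebClient())
{
    client.Proxy = null; // skip proxy auto-detection
    client.Headers[HttpRequestHeader.UserAgent] = "my-downloader/1.0";
    client.DownloadFile("https://varnish.example.com/private/file.bin", @"C:\temp\file.bin");
}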
I came to the conclusion that Varnish does some really weird things with its headers. I eventually set up OpenResty (nginx + Lua) and used s3cmd to sync files whenever a URL is requested in a certain way, and used Lua to build a system similar to pre-signed URLs. This turned out to work perfectly in combination with the C# WebClient, and on top of that, I can now actually see that files really come from this particular server, which is not the case when Varnish caches them.

Is it possible to set a given range with FileWebRequest, similarly to HttpWebRequest?

I'm currently attempting to download a file through WebRequest.Create(url), getting a FileWebRequest.
I was wondering if it would be possible to pass in a given download range so I could potentially download different file chunks in parallel. I've noticed that, in contrast with HttpWebRequest, there's no AddRange() method, but perhaps such behavior could be achieved through the right combination of other parameters.
Thanks
I don't think you can, since FileWebRequest handles requests that use the file:// URI scheme. As far as I know, this scheme does not involve a wire protocol the way the http:// scheme involves HTTP, which has headers for downloading content partially.
If you are trying to open a remote file, you may want to use a specific protocol such as FTP, which may allow you to do what you want, instead of file URIs.
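That said, if the target really is a local file, you don't need range headers at all: you can seek to an offset and read a chunk directly. A minimal sketch (path and range are placeholders):
// Read `count` bytes starting at `offset` from a local file.
static byte[] ReadRange(string path, long offset, int count)
{
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    {
        fs.Seek(offset, SeekOrigin.Begin);  // jump to the start of the range
        var buffer = new byte[count];
        int read = fs.Read(buffer, 0, count);
        Array.Resize(ref buffer, read);     // trim if the file ends early
        return buffer;
    }
}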

ASMX file upload

Is there a way to upload a file from the local filesystem to a folder on a server using ASMX web services (no WCF, don't ask why :)?
Update:
P.S. The file size can be 2-10 GB.
Sure:
[WebMethod]
public void Upload(byte[] contents, string filename)
{
    // Resolve the App_Data folder of the web application.
    var appData = Server.MapPath("~/App_Data");
    // Path.GetFileName guards against directory traversal in the client-supplied name.
    var file = Path.Combine(appData, Path.GetFileName(filename));
    File.WriteAllBytes(file, contents);
}
then expose the service, generate a client proxy from the WSDL, invoke, standard stuff.
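On the client side that might look something like this (the proxy class and method names depend entirely on what your WSDL import generates, so treat them as placeholders):
// Hypothetical generated proxy class.
var client = new UploadService();
client.Upload(File.ReadAllBytes(@"C:\data\report.bin"), "report.bin");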
--
UPDATE:
I see your update now about handling large files. The MTOM protocol with streaming, which is built into WCF, is optimized for exactly this scenario.
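For the record, if WCF ever becomes an option, enabling MTOM with streaming is mostly a binding change; a minimal sketch (the limit here is illustrative):
// System.ServiceModel: MTOM encoding plus streamed transfer for large payloads.
var binding = new BasicHttpBinding
{
    MessageEncoding = WSMessageEncoding.Mtom,
    TransferMode = TransferMode.Streamed,
    MaxReceivedMessageSize = long.MaxValue // raise the 64 KB default
};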
When developing my free tool to upload large files to a server, I also used .NET 2.0 and web services.
To make the application more error tolerant for very large files, I decided not to upload one large byte[] array but instead to do a "chunked" upload.
I.e., to upload a 1 MB file, I call my upload SOAP function 20 times, each call passing a byte[] array of 50 KB, and the server concatenates the pieces together again.
I also keep count of the packages; when one drops, I retry it several times.
This makes the upload more error tolerant and more responsive in the UI.
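A minimal client-side sketch of that chunked scheme (the UploadService proxy and its UploadChunk method are hypothetical stand-ins for the real SOAP service):
const int ChunkSize = 50 * 1024;            // 50 KB per SOAP call
string path = @"C:\data\big.bin";           // placeholder
byte[] data = File.ReadAllBytes(path);
var proxy = new UploadService();            // hypothetical generated proxy
for (int offset = 0, index = 0; offset < data.Length; offset += ChunkSize, index++)
{
    int size = Math.Min(ChunkSize, data.Length - offset);
    byte[] chunk = new byte[size];
    Buffer.BlockCopy(data, offset, chunk, 0, size);
    // Retry a dropped chunk a few times before giving up.
    for (int attempt = 0; attempt < 3; attempt++)
    {
        try { proxy.UploadChunk(Path.GetFileName(path), index, chunk); break; }
        catch (Exception) { if (attempt == 2) throw; }
    }
}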
If you are interested, there is a CodeProject article about the tool.
For very large files, the only efficient way to send them to web services is with MTOM. And MTOM is only supported in WCF, which you have ruled out. The only way to do this with old-style .asmx web services is the answer that @Darin Dimitrov gave. And with that solution you'll have to suffer the cost of the file being base64 encoded (33% more bandwidth).
We had the same requirement, basically uploading a file via HTTP POST using the standard FileUpload controls on the client side.
In the end we just added an ASPX page to the ASMX web service project (after all, it's just a web project). This allowed us to upload to e.g. http://foo/bar/Upload.aspx while the web service was at http://foo/bar/baz.asmx. It kept the functionality within the web service project, even though it used a separate web page.
This might or might not fit your requirements. @Darin's approach would work as a workaround as well, but it would have required changes on the client side, which wasn't an option for us.
You can try converting the file to Base64, passing it as a string to the service, and then converting it back to a byte array on the server.
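Roughly like this, where UploadBase64 is a hypothetical service method:
// Client: encode the file contents as a Base64 string.
string encoded = Convert.ToBase64String(File.ReadAllBytes(@"C:\data\file.bin"));
service.UploadBase64("file.bin", encoded); // hypothetical web method
// Server: convert the string back into bytes.
byte[] contents = Convert.FromBase64String(encoded);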
https://forums.asp.net/t/1980031.aspx?Web+Service+method+with+Byte+array+parameter+throws+ArgumentException
How to convert file to base64 in JavaScript?
The input is not a valid Base-64 string as it contains a non-base 64 character

Decompress SharpZipLib string in php or JS?

I am on a Linux server connecting to a web service via PHP/SOAP.
The problem is that one method zips its response via SharpZipLib, so all I get in return is a garbled string.
Does anyone know of a way to unzip this with PHP or JS?
Thanks!
Update:
This is the compressed test data that gets returned:
UEsDBC0AAAAIAI5TWz3XB/zi//////////8EABQAZGF0YQEAEADWAgAAAAAAABYBAAAAAAAAfZLvToNAEMTnUXyDatXEDxcS/3zxizH6BBVESaESKFHf3t+cOWgtNhcuYXd2Zndug570qjdV6rVVpxV3pQ9tlCnohv+ab6Mc1J0G7kynZBb/5IKeYTDLAGOm28hVwtmpobqItfuYACpp1Ki42jobOGqO1eYRIXI2egHfofeOTqt7OE6o8QQdmbnpjMm01JXOdcG5ZKplVDpeEeBr6LCir2umKaJCj3ZSbGPEE3+Nsd/57fADtfYhoRtwZqmJ/c3Z+bmaHl9Kzq6CX20bWRJzjvMNbtjZ71Fvtdfz2RjPY/2ESy54ExJjC6P78U74XYudOaw2gPUOTSyfRDut9cjLmGma2//24TBTwj85573zDhziFkc29wdQSwECLQAtAAAACACOU1s91wf84v//////////BAAUAAAAAAAAAAAAAAAAAAAAZGF0YQEAEADWAgAAAAAAABYBAAAAAAAAUEsFBgAAAAABAAEARgAAAEwBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
Chances are, it's using gzip. You should look into PHP's Zlib extension and the gzdecode or gzinflate functions. You'll probably need to look at the Content-Type header or another response header.
Something you can try is also setting an Accept header in the web service request that tells the service you don't know how to deal with compression. If it's a proper service, it will honor the request.
EDIT: Looking at the .pdf, they're sending the data as a zip archive, so you need to find a PHP lib that deals with in-memory zip archives. The C# code they use to decode it is pretty straightforward: they just read all the entries in the archive and expand them. You can try storing the data in an in-memory buffer using a PHP wrapper along with PHP's Zip extension.
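For comparison, the C# decode side comes down to something like this (a sketch using System.IO.Compression rather than SharpZipLib, but the idea is the same):
// `raw` is the base64 payload from the SOAP response.
byte[] zipBytes = Convert.FromBase64String(raw);
using (var archive = new ZipArchive(new MemoryStream(zipBytes), ZipArchiveMode.Read))
{
    foreach (var entry in archive.Entries)
    {
        using (var reader = new StreamReader(entry.Open()))
        {
            string contents = reader.ReadToEnd(); // expand each entry
        }
    }
}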
Did you try setting an Accept Header that asks for no compression?
You can unzip your string with a function like this:
function decode($data)
{
    // Write the base64-decoded archive to a temp file so the zip functions can open it.
    $filename = tempnam('/tmp', 'tempfile');
    file_put_contents($filename, base64_decode($data));
    $zip = zip_open($filename);
    $entry = zip_read($zip);
    zip_entry_open($zip, $entry);
    // Read the whole entry, not just the default 1024 bytes.
    $decoded = zip_entry_read($entry, zip_entry_filesize($entry));
    zip_entry_close($entry);
    zip_close($zip);
    unlink($filename);
    return $decoded;
}

Download a large file with a c# webservice

I am trying to create a web service that allows its consumers to download files (which can be very large). At the server I have many files that need to be sent back to the consumer, so I compress them all into one big zip file and stream it back. Right now, my web service starts compressing the files when the request comes in, forms the zip file, and streams it back. Sometimes compression can take a long time, and the request may time out.
What can I do to avoid such situations? My solution right now is to separate the data into smaller zip files, send the consumer a response saying how many smaller files there will be, and let the consumer request each smaller file individually. So if I have a 1 GB zip file, I will break it into 10 smaller zip files and ask the consumer to fetch them in 10 requests.
Is this the correct approach? What problems could I face? Has anyone dealt with such issues before? I would be glad if you can share your experiences. Also, is it possible to start streaming a zip file without forming it fully?
Treat the request and the delivery as asynchronous operations.
The client can make a request for the file using one method. Another method can let the client know the status of the file packaging (whether it is ready for download yet). A third method can actually download the file.
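Sketched as an ASMX service, that split might look like this (all names and the in-memory ticket table are illustrative; real code should persist job state, since statics don't survive app-pool recycles):
public class PackageService : WebService
{
    static readonly Dictionary<string, string> Ready = new Dictionary<string, string>();
    [WebMethod] // start packaging, hand back a ticket for polling
    public string RequestFile()
    {
        string ticket = Guid.NewGuid().ToString("N");
        string source = Server.MapPath("~/App_Data/files"); // capture before leaving the request
        ThreadPool.QueueUserWorkItem(delegate
        {
            string zipPath = Path.Combine(Path.GetTempPath(), ticket + ".zip");
            ZipFile.CreateFromDirectory(source, zipPath);
            lock (Ready) Ready[ticket] = zipPath;
        });
        return ticket;
    }
    [WebMethod] // poll until packaging has finished
    public string GetStatus(string ticket)
    {
        lock (Ready) return Ready.ContainsKey(ticket) ? "complete" : "processing";
    }
    [WebMethod] // fetch the finished package
    public byte[] Download(string ticket)
    {
        lock (Ready) return File.ReadAllBytes(Ready[ticket]);
    }
}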
It may be worth looking at a RESTful approach rather than a SOAP web service. As OrbMan suggested, an asynchronous approach may be best.
With REST you could expose a resource such as: http://yourlocation/generatefile
When called with a POST, it returns an HTTP response with a status code of 202 'Accepted' and a Location header of http://yourlocation/generatefile/id00124, which points to where the data will be.
You can then poll the http://yourlocation/generatefile/id00124 resource (perhaps with a HEAD request) to get the status, i.e. processing / complete.
When processing is complete, do a GET on http://yourlocation/generatefile/id00124 to download your file. The response HTTP message should identify your file and its format, i.e. encryption and compression types, so any consumer knows how to read it.
This is a nice solution for problems which are long running and return data in formats other than SOAP and general XML.
I hope this helps
I would poll from the calling client as part of the method which gets the file. The client code might flow something like this:
byte[] GetFile()
{
    // A sketch of the poll-then-download flow using HttpClient
    // (needs System.Net.Http, System.Threading, System.Linq, System.Collections.Generic).
    using (var client = new HttpClient())
    {
        // Kick off packaging; per the REST design above, the service
        // answers 202 'Accepted' with a Location header.
        var post = client.PostAsync("http://yourlocation/generatefile", null).Result;
        Uri dataResource = post.Headers.Location;

        // Poll the resource until the (hypothetical) Status header says complete.
        while (true)
        {
            var head = new HttpRequestMessage(HttpMethod.Head, dataResource);
            var resH = client.SendAsync(head).Result;
            IEnumerable<string> values;
            if (resH.Headers.TryGetValues("Status", out values) && values.First() == "complete")
                break;
            Thread.Sleep(1000); // or whatever poll interval suits
        }

        // Download the finished file.
        return client.GetByteArrayAsync(dataResource).Result;
    }
}
This is only a sketch, but I hope it makes sense...
