FileUpload to Amazon S3 results in 0 byte file - c#

I'm trying to fix a bug where the following code results in a 0 byte file on S3, and no error message.
This code feeds in a Stream (from the poorly-named FileUpload4) which contains an image and the desired image path (from a database wrapper object) to Amazon's S3, but the file itself is never uploaded.
CloudUtils.UploadAssetToCloud(FileUpload4.FileContent, ((ImageContent)auxSRC.Content).PhysicalLocationUrl);
ContentWrapper.SaveOrUpdateAuxiliarySalesRoleContent(auxSRC);
The second line simply saves the database object which stores information about the (supposedly) uploaded picture. This save is going through, demonstrating that the above line runs without error.
The first line above calls in to this method, after retrieving an appropriate bucketname:
public static bool UploadAssetToCloud(Stream asset, string path, string bucketName, AssetSecurity security = AssetSecurity.PublicRead)
{
    TransferUtility txferUtil;
    S3CannedACL ACL = GetS3ACL(security);
    using (txferUtil = new Amazon.S3.Transfer.TransferUtility(AWSKEY, AWSSECRETKEY))
    {
        TransferUtilityUploadRequest request = new TransferUtilityUploadRequest()
            .WithBucketName(bucketName)
            .WithTimeout(TWO_MINUTES)
            .WithCannedACL(ACL)
            .WithKey(path);
        request.InputStream = asset;
        txferUtil.Upload(request);
    }
    return true;
}
I have made sure that the stream is a good stream - I can save it anywhere else I have permissions for, the bucket exists, and the path is fine (the file is created at the destination on S3, it just doesn't get populated with the content of the stream). I'm close to my wits' end here - what am I missing?
EDIT: One of my coworkers pointed out that it would be better to use the FileUpload's PostedFile property. I'm now pulling the stream off of that instead. It still isn't working.

Is the stream positioned correctly? Check asset.Position to make sure the position is set to the beginning of the stream.
asset.Seek(0, SeekOrigin.Begin);
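For example, a defensive rewind just before the upload might look like this (a minimal sketch against the UploadAssetToCloud method in the question; the CanSeek guard is my addition, since a non-seekable stream can't be rewound):

// Rewind the stream before handing it to the TransferUtility, in case
// something upstream (validation, preview, logging) already read it to the end.
if (asset.CanSeek)
{
    asset.Seek(0, SeekOrigin.Begin); // upload reads from byte 0, not from EOF
}
request.InputStream = asset;
txferUtil.Upload(request);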
Edit
OK, more guesses (I'm down to guesses, though):
(all of this is assuming that you can still read from your incoming stream just fine "by hand")
Just for testing, try one of the simpler Upload methods on the TransferUtility -- maybe one that just takes a file path string. If that works, then maybe there are additional properties to set on the UploadRequest object.
If you hook the UploadProgressEvent on the UploadRequest object, do you get any additional clues to what's going wrong?
I noticed that the UploadRequest's API includes both an InputStream property and a WithInputStream fluent method. Maybe there's a bug with setting InputStream? Try using the .WithInputStream API instead.

Which Stream are you using? Does it support repositioning - the equivalent of Java's mark()/reset(), which in .NET means CanSeek/Seek?
The upload method may first calculate the MD5 for the given stream and then upload it. If your stream doesn't support repositioning, the MD5 calculation reads it through to EOF, and the upload is then unable to reposition the stream to read the object's content.
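If that turns out to be the cause, one workaround is to buffer a non-seekable stream into a MemoryStream before uploading (a minimal sketch; the helper name is made up for illustration):

// Hypothetical helper: copy a non-seekable stream into memory so it can be
// read twice (once for the MD5 pass, once for the actual upload).
private static Stream MakeSeekable(Stream input)
{
    if (input.CanSeek)
    {
        input.Seek(0, SeekOrigin.Begin);
        return input;
    }
    var buffer = new MemoryStream();
    input.CopyTo(buffer);            // reads the source through to EOF
    buffer.Seek(0, SeekOrigin.Begin);
    return buffer;
}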

Related

Unable to Append to Append Blob in Azure

I'm trying to perform a textbook append to an append blob in Azure.
First I create a blob container. I know this operation succeeds because I can see the container in the storage explorer.
Next I create the blob. I know this operation succeeds because I can see the blob in the storage explorer.
Finally I attempt to append to the blob with the following code.
var csa = CloudStorageAccount.Parse(BLOB_CONNECTION_STRING);
var client = csa.CreateCloudBlobClient();
var containerReference = client.GetContainerReference(CONTAINER_NAME);
var blobReference = containerReference.GetAppendBlobReference(BLOB_NAME);
var ms = new MemoryStream();
var sr = new StreamWriter(ms);
sr.WriteLine(message);
ms.Seek(0, SeekOrigin.Begin);
await blobReference.AppendBlockAsync(ms);
No matter what I do I get the following exception.
WindowsAzure.Storage StorageException: The value for one of the HTTP headers is not in the correct format.
I'm at a bit of a loss as to how to proceed. I can't even determine from the exception which parameter is the problem. The connection string is copied directly from the Azure portal. Note I am using the latest version (9.3.0) of the WindowsAzure.Storage NuGet package.
Any ideas how I can figure out what the problem is?
Thanks!
Just add sr.Flush(); after sr.WriteLine(message); so that the buffered data is written to the underlying stream immediately.
AutoFlush on a StreamWriter is false by default, so buffered data won't be written to the destination until you call Flush or Close.
We still need the MemoryStream that was passed to the StreamWriter's constructor, so we can't use Close here - otherwise we'd get an exception like "Cannot access a closed Stream."
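For reference, the corrected sequence would look like this (the code from the question with only the flush added):

var ms = new MemoryStream();
var sr = new StreamWriter(ms);
sr.WriteLine(message);
sr.Flush();                    // push the buffered text into the MemoryStream
ms.Seek(0, SeekOrigin.Begin);  // rewind so AppendBlockAsync reads from the start
await blobReference.AppendBlockAsync(ms);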

Overriding WebHostBufferPolicySelector for Non-Buffered File Upload

In an attempt to create a non-buffered file upload I have extended System.Web.Http.WebHost.WebHostBufferPolicySelector, overriding function UseBufferedInputStream() as described in this article: http://www.strathweb.com/2012/09/dealing-with-large-files-in-asp-net-web-api/. When a file is POSTed to my controller, I can see in trace output that the overridden function UseBufferedInputStream() is definitely returning FALSE as expected. However, using diagnostic tools I can see the memory growing as the file is being uploaded.
The heavy memory usage appears to be occurring in my custom MediaTypeFormatter (something like the FileMediaFormatter here: http://lonetechie.com/). It is in this formatter that I would like to incrementally write the incoming file to disk, but I also need to parse json and do some other operations with the Content-Type:multipart/form-data upload. Therefore I'm using HttpContent method ReadAsMultiPartAsync(), which appears to be the source of the memory growth. I have placed trace output before/after the "await", and it appears that while the task is blocking the memory usage is increasing fairly rapidly.
Once I find the file content in the parts returned by ReadAsMultiPartAsync(), I am using Stream.CopyTo() in order to write the file contents to disk. This writes to disk as expected, but unfortunately the source file is already in memory by this point.
Does anyone have any thoughts about what might be going wrong? It seems that ReadAsMultiPartAsync() is buffering the whole post data; if that is true why do we require var fileStream = await fileContent.ReadAsStreamAsync() to get the file contents? Is there another way to accomplish the splitting of the parts without reading them into memory? The code in my MediaTypeFormatter looks something like this:
// save the stream so we can seek/read again later
Stream stream = await content.ReadAsStreamAsync();
var parts = await content.ReadAsMultipartAsync(); // <- memory usage grows rapidly
if (!content.IsMimeMultipartContent())
{
    throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}
//
// pull data out of parts.Contents, process json, etc.
//
// find the file data in the multipart contents
var fileContent = parts.Contents.FirstOrDefault(
    x => x.Headers.ContentDisposition.DispositionType.ToLower().Trim() == "form-data" &&
         x.Headers.ContentDisposition.Name.ToLower().Trim() == "\"" + DATA_CONTENT_DISPOSITION_NAME_FILE_CONTENTS + "\"");
// write the file to disk
using (var fileStream = await fileContent.ReadAsStreamAsync())
{
    using (FileStream toDisk = File.OpenWrite("myUploadedFile.bin"))
    {
        ((Stream)fileStream).CopyTo(toDisk);
    }
}
WebHostBufferPolicySelector only specifies whether the underlying request stream is buffered. This is what Web API will do under the hood:
IHostBufferPolicySelector policySelector = _bufferPolicySelector.Value;
bool isInputBuffered = policySelector == null ? true : policySelector.UseBufferedInputStream(httpContextBase);
Stream inputStream = isInputBuffered
    ? requestBase.InputStream
    : httpContextBase.ApplicationInstance.Request.GetBufferlessInputStream();
So if your implementation returns false, then the request is bufferless.
However, ReadAsMultipartAsync() loads everything into a MemoryStream, because if you don't specify a provider it defaults to MultipartMemoryStreamProvider.
To get files saved to disk automatically as each part is processed, use MultipartFormDataStreamProvider (if you deal with files and form data) or MultipartFileStreamProvider (if you deal with just files).
There is an example on asp.net or here. In these examples everything happens in controllers, but there is no reason why you couldn't use the same approach in, for example, a formatter - a sketch follows.
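Something like this (a minimal sketch; the App_Data root path is an assumption, and `content` is the HttpContent your formatter already has):

// Sketch: stream multipart file parts straight to disk while the body is parsed.
string root = HttpContext.Current.Server.MapPath("~/App_Data");
var provider = new MultipartFormDataStreamProvider(root);
await content.ReadAsMultipartAsync(provider);

// Form fields end up in provider.FormData; uploaded files are already
// on disk, listed in provider.FileData.
foreach (MultipartFileData file in provider.FileData)
{
    Trace.WriteLine("Saved to: " + file.LocalFileName); // illustrative only
}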
Another option, if you really want to play with streams, is to implement a custom class inheriting from MultipartStreamProvider that fires whatever processing you want as soon as it grabs a part of the stream. The usage would be similar to the aforementioned providers - you'd pass it to the ReadAsMultipartAsync(provider) method. A skeleton is sketched below.
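A skeleton of that idea (a sketch with a made-up class name; GetStream is called once per part, and whatever stream you return is where that part's bytes are written as they arrive):

// Sketch of a custom provider: each part is written straight to its own
// file as the multipart body is parsed, so parts are never held in memory.
public class DirectToDiskStreamProvider : MultipartStreamProvider
{
    private readonly string _rootPath;

    public DirectToDiskStreamProvider(string rootPath)
    {
        _rootPath = rootPath;
    }

    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        string fileName = Path.Combine(_rootPath, Guid.NewGuid().ToString("N"));
        return File.Create(fileName);
    }
}

// usage: await content.ReadAsMultipartAsync(new DirectToDiskStreamProvider(root));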
Finally - if you are feeling suicidal - since the underlying request stream is bufferless theoretically you could use something like this in your controller or formatter:
Stream stream = HttpContext.Current.Request.GetBufferlessInputStream();
byte[] b = new byte[32 * 1024];
int n;
while ((n = stream.Read(b, 0, b.Length)) > 0)
{
    // do stuff with this chunk of the stream
}
But of course that's very, for lack of a better word, "ghetto."

Issues with StreamReader, ThreadSafety and Read Mode

I have following code to read a file
StreamReader str = new StreamReader(File.Open(fileName, FileMode.Open, FileAccess.Read));
string fichier = str.ReadToEnd();
str.Close();
This is part of an ASP.NET web service and has been working fine in production for a year now. With increasing load on the server, customers have started getting "File already in use" errors. The file is only ever read by this code; the application never writes to it.
One problem that I clearly see is that we are not caching the contents of the file for future use. We will do that. But I need to understand why and how we are getting this issue.
Is it because of multiple threads trying to read the file? I read that StreamReader is not thread-safe, but why should that be a problem when I am opening the file in Read mode?
You need to open the file with shared read access allowed. Use the overload of File.Open that takes a file-sharing mode - the three-parameter overload you're using opens the file unshared (FileShare.None), so concurrent readers block each other. Pass FileShare.Read to allow other readers concurrent access to the file.
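In this case that's a one-line change (the rest of the snippet from the question stays the same):

// Allow other threads and processes to read the file at the same time.
StreamReader str = new StreamReader(
    File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.Read));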
Another possible solution is to load this file into memory once, in a static constructor of a class, and then store the contents in a static read-only variable. Since a static constructor is guaranteed to run only once and is thread-safe, you don't have to do anything special to make it work.
If you never change the contents in memory, you won't even need to lock when you access the data. If you do change the contents, you need to first clone this data every time when you're about to change it but then again, you don't need a lock for the clone operation since your actual original data never changes.
For example:
public static class FileData
{
    private static readonly string s_sFileData;

    static FileData()
    {
        s_sFileData = ...; // read file data here using your code
    }

    public static string Contents
    {
        get
        {
            return string.Copy(s_sFileData);
        }
    }
}
This encapsulates your data and gives you read-only access to it.
You only need String.Copy() if your code may modify the file contents - this is just a precaution to force creating a new string instance to protect the original string. Since string is immutable, this is only necessary if your code uses string pointers - I only added this bit because I ran into an issue with a similar variable in my own code just last week where I used pointers to cached data. :)
FileMode and FileAccess just control what you yourself can do with the file (how it is opened, and whether you can read or write).
Shared access to files is handled at the operating-system level; you request a sharing behavior with the FileShare parameter of File.Open - see the docs for that overload.

c# how to write a jpg image from request.binaryread

I have a Flash app which sends raw data for a JPG image to a particular URL, Send.aspx. In Send.aspx I use Request.TotalBytes to get the request length and Request.BinaryRead() to read the data into a byte array.
Then I am writing the data as jpg file to the server. The code is given below:
FileStream f = File.Create(Server.MapPath("~") + "/plugins/handwrite/uploads/" + filename);
byte[] data = Request.BinaryRead(Request.TotalBytes);
f.Write(data, 0, data.Length);
f.Close();
The file is getting created but there is no image in it - it always shows up as empty in any graphics viewer. What am I missing? Am I supposed to apply JPEG encoding before writing the data to the file? Thanks in advance.
Well, you should use a using statement for your file stream (see the sketch after the suggestions below), but other than that it looks okay to me.
A few suggestions for how to proceed...
Is it possible that the client isn't providing the data properly? Perhaps it's providing it as base64-encoded data?
Have you already read some data from the request body? (That could mess things up.)
I suggest you look closely at what you end up saving vs the original file:
Are they the same length? If not, which is longer?
If they're the same length, do their MD5 sums match?
If you look at both within a binary file editor, do they match at all? Any obvious differences?
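For what it's worth, here's how the write might look with a using statement (the same logic as the question's code, just with deterministic disposal):

// Read the raw request body and write it to disk; the using statement
// disposes the stream even if an exception is thrown part-way through.
byte[] data = Request.BinaryRead(Request.TotalBytes);
string path = Server.MapPath("~") + "/plugins/handwrite/uploads/" + filename;
using (FileStream f = File.Create(path))
{
    f.Write(data, 0, data.Length);
}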

.Net MVC Return file and then delete it

I've produced a MVC app that when you access /App/export it zips up all the files in a particular folder and then returns the zip file. The code looks something like:
public ActionResult Export()
{
    var exporter = new Project.Exporter("/mypath/");
    return File(exporter.filePath, "application/zip", exporter.fileName);
}
What I would like to do is return the file to the user and then delete it. Is there any way to set a timeout to delete the file, or to hold onto the file handle so the file isn't deleted until after the request has finished?
Sorry, I do not have the code right now...
But the idea here is: just avoid creating a temporary file! You can write the zipped data directly to the response, using a MemoryStream for that.
EDIT: Something along those lines (it doesn't use a MemoryStream, but the idea is the same, avoiding a temp file; here using the DotNetZip library):
DotNetZip now can save directly to ASP.NET Response.OutputStream.
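A sketch of what that could look like in the action (assuming DotNetZip's Ionic.Zip.ZipFile and the folder path from the question; an illustration, not the poster's exact code):

public void Export()
{
    Response.Clear();
    Response.ContentType = "application/zip";
    Response.AddHeader("Content-Disposition", "attachment; filename=export.zip");
    using (var zip = new Ionic.Zip.ZipFile())
    {
        zip.AddDirectory(Server.MapPath("/mypath/"));
        zip.Save(Response.OutputStream); // stream the archive straight to the client
    }
}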
I know this thread is old, but here is a solution if someone still faces this.
Create the temp file normally.
Read the file into a byte array in memory with System.IO.File.ReadAllBytes().
Delete the file from disk.
Return the file bytes from your controller with File(byte[], "application/zip", "SomeName.zip").
Code Sample here:
//Load ZipFile
var toDownload = System.IO.File.ReadAllBytes(zipFile);
//Clean Files
Directory.Delete(tmpFolder, true);
System.IO.File.Delete(zipFile);
//Return result for download
return File(toDownload,"application/zip",$"Certificates_{rs}.zip");
You could create a Stream implementation similar to FileStream, but which deletes the file when it is disposed.
There's some good code in this SO post.
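A minimal sketch of that idea (the class name is made up; note that FileStream can also do this natively if you pass FileOptions.DeleteOnClose to the appropriate constructor overload):

// Sketch: a FileStream that deletes its backing file once it is disposed,
// i.e. after MVC has finished streaming the response to the client.
public class DeleteOnDisposeFileStream : FileStream
{
    private readonly string _path;

    public DeleteOnDisposeFileStream(string path)
        : base(path, FileMode.Open, FileAccess.Read)
    {
        _path = path;
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing);
        try { File.Delete(_path); } catch (IOException) { /* best effort */ }
    }
}

// usage in the action:
// return File(new DeleteOnDisposeFileStream(exporter.filePath), "application/zip", exporter.fileName);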
