Can I upload an Excel file into Amazon S3 using AWSSDK.dll? - C#

Can I upload an Excel file into an AWS S3 account? What I have found is that the PutObject method provided in the library can be used to upload a file from a location or using a Stream object.
PutObjectRequest request = new PutObjectRequest()
{
    ContentBody = "this is a test",
    BucketName = bucketName,
    Key = keyName,
    InputStream = stream
};
PutObjectResponse response = client.PutObject(request);
The Key can be the absolute path on the machine, or we can give the stream of the file. But my doubt is how we can upload the Excel file using the above method.
P.S.
This is the way I am converting the stream to a byte[], but input.ReadByte() is always equal to zero. So my doubt is: is it not reading the Excel file?
FileStream str = new FileStream(@"C:\case1.xlsx", FileMode.Open);
byte[] arr = ReadFully(str);

public static byte[] ReadFully(FileStream input)
{
    long size = 0;
    while (input.ReadByte() > 0)
    {
        size++;
    }
    byte[] buffer = new byte[size];
    //byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}

You should be able to upload any file via the file path or stream. It doesn't matter that it's an Excel file. When you run PutObject, it uploads the actual file data represented by that path or stream.
You can see the MIME types for MS Office formats at Filext. Doing it by file path would probably be easier:
PutObjectRequest request = new PutObjectRequest()
{
    BucketName = bucketName,
    Key = keyName,
    ContentType =
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", // xlsx
    FilePath = @"\path\to\myfile.xlsx"
};
PutObjectResponse response = client.PutObject(request);
Or reading from a file stream:
PutObjectRequest request = new PutObjectRequest()
{
    BucketName = bucketName,
    Key = keyName,
    ContentType =
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" // xlsx
};
using (var stream = new FileStream(@"\path\to\myfile.xlsx", FileMode.Open))
{
    request.InputStream = stream;
    PutObjectResponse response = client.PutObject(request);
}
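As for the P.S.: the problem is in ReadFully, not in the Excel file. Stream.ReadByte() returns the value of the byte read (0-255), or -1 at the end of the stream, so the condition input.ReadByte() > 0 stops at the first zero byte, and an .xlsx file (which is a ZIP container) contains plenty of those. The loop also leaves the stream position wherever it stopped, so the subsequent Read calls never see the start of the file. A minimal fix is to drop the size-counting loop entirely and copy in fixed-size chunks:
public static byte[] ReadFully(Stream input)
{
    // Copy in fixed-size chunks instead of pre-measuring with ReadByte,
    // which stops at the first zero byte and moves the stream position.
    byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}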

Related

Unable to create zip file using Ionic.Zip

I'm not sure where or what I'm doing wrong, but the zip that I'm creating using the DotNetZip library has blank contents: the file inside the zip shows a size of 0 KB and cannot be opened.
Code:
public static async Task DotNetZipFileAsync(MemoryStream stream, string bucket, List<List<string>> pdfFileSet, IAmazonS3 s3Client)
{
    using Ionic.Zip.ZipFile zip = new ZipFile();
    foreach (var pdfFile in pdfFileSet)
    {
        foreach (var file in pdfFile)
        {
            GetObjectRequest request = new GetObjectRequest
            {
                BucketName = bucket,
                Key = file
            };
            using GetObjectResponse response = await s3Client.GetObjectAsync(request);
            using Stream responseStream = response.ResponseStream;
            ZipEntry zipEntry = zip.AddEntry(file.Split('/')[^1], responseStream);
            await responseStream.CopyToAsync(stream);
        }
    }
    zip.Save(stream);
    stream.Seek(0, SeekOrigin.Begin);
    await stream.CopyToAsync(new FileStream(@"C:\LocalRepo\Temp.zip", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite));
}
Your code has at least two problems:
The read stream is completely consumed by the await responseStream.CopyToAsync(stream). You could rewind the responseStream to cope with this, but saving the data into the memory stream is completely useless.
The response stream is disposed before zip.Save is called.
What you could do: keep the streams open until Save is called and dispose of them afterwards. As Alexey Rumyantsev discovered (see comments), the GetObjectResponse objects also need to be kept until the ZIP file is saved.
using Ionic.Zip.ZipFile zip = new ZipFile();
var disposables = new List<IDisposable>();
try
{
    foreach (var pdfFile in pdfFileSet)
    {
        foreach (var file in pdfFile)
        {
            GetObjectRequest request = new GetObjectRequest
            {
                BucketName = bucket,
                Key = file
            };
            var response = await s3Client.GetObjectAsync(request);
            disposables.Add(response);
            var responseStream = response.ResponseStream;
            disposables.Add(responseStream);
            ZipEntry zipEntry = zip.AddEntry(file.Split('/')[^1], responseStream);
        }
    }
    using var fileStream = new FileStream(@"C:\LocalRepo\Temp.zip", FileMode.Create, FileAccess.Write);
    zip.Save(fileStream);
}
finally
{
    foreach (var disposable in disposables)
    {
        disposable.Dispose();
    }
}
The documentation has some hints on how this could be made smarter.
public static async Task DotNetZipFileAsync(string bucket, List<List<string>> pdfFileSet, IAmazonS3 s3Client)
{
    int read;
    using Ionic.Zip.ZipFile zip = new ZipFile();
    byte[] buffer = new byte[16 * 1024];
    foreach (var pdfFile in pdfFileSet)
    {
        foreach (var file in pdfFile)
        {
            GetObjectRequest request = new GetObjectRequest
            {
                BucketName = bucket,
                Key = file
            };
            using GetObjectResponse response = await s3Client.GetObjectAsync(request);
            using Stream responseStream = response.ResponseStream;
            using (MemoryStream ms = new MemoryStream())
            {
                while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    ms.Write(buffer, 0, read);
                }
                zip.AddEntry(file.Split('/')[^1], ms.ToArray());
            }
        }
    }
    using var fileStream = new FileStream(@"C:\LocalRepo\Temp.zip", FileMode.Create, FileAccess.Write);
    zip.Save(fileStream);
}

Trying to download large blob files

I need to download large backup files from my storage account.
I tried it with SAS and generated a link. When I enter that link directly into the browser, it downloads the file, but when I try to download it through my code, I get an empty file or no file at all. The commented-out lines are approaches I have already tried; the last one is Redirect(blobSasUri);
public async Task DownloadBlobItemAsync([FromQuery] string userId, [FromRoute] string fileName, [FromBody] PathObject path, [FromRoute] int filestorageConnectionId)
{
    var fileStorageConnection = await _customerProvider.GetFileStorageConnection(filestorageConnectionId);
    var customer = await _customerProvider.GetCustomer(fileStorageConnection.CustomerId);
    CloudBlockBlob blob = _fileStorage.DownloadBlobFile(fileStorageConnection.Id, userId, customer.Id, fileName, path.Path);
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessStartTime = DateTime.UtcNow.AddHours(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
        Permissions = SharedAccessBlobPermissions.Read
    });
    string blobSasUri = (string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas));
    // CloudBlockBlob blobNew = new CloudBlockBlob(new Uri(blobSasUri));
    // var pathNew = Directory.GetCurrentDirectory();
    // blobNew.DownloadToFileAsync(pathNew, FileMode.Create);
    //await blob.DownloadToFileAsync(blobSasUri, FileMode.Create);
    Redirect(blobSasUri);
    //using (var client = new WebClient())
    //{
    //    client.DownloadFile(blobSasUri, fileName);
    //}
}
I don't know which method you used to download the blob; I tested with blobSas.DownloadToStream() and it worked for me, so maybe you could try my code.
static void Main(string[] args)
{
    string storageConnectionString = "connection string";
    // Check whether the connection string can be parsed.
    CloudStorageAccount storageAccount;
    CloudStorageAccount.TryParse(storageConnectionString, out storageAccount);
    var containerName = "test";
    var blobName = "testfile.zip";
    string saveFileName = @"E:\testfilefolder\myfile1.zip";
    var blobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(containerName);
    var blob = blobContainer.GetBlockBlobReference(blobName);
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessStartTime = DateTime.UtcNow.AddHours(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
        Permissions = SharedAccessBlobPermissions.Read
    });
    string blobSasUri = (string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas));
    // Download the blob through the SAS url.
    CloudBlockBlob blobSas = new CloudBlockBlob(new Uri(blobSasUri));
    long startPosition = 0;
    using (MemoryStream ms = new MemoryStream())
    {
        blobSas.DownloadToStream(ms);
        byte[] data = new byte[ms.Length];
        ms.Position = 0;
        ms.Read(data, 0, data.Length);
        using (FileStream fs = new FileStream(saveFileName, FileMode.OpenOrCreate))
        {
            fs.Position = startPosition;
            fs.Write(data, 0, data.Length);
        }
    }
}
Besides using a SAS url to download the large blob, another option is to download the file in chunks. Here is the code.
int segmentSize = 1 * 1024 * 1024; // 1 MB chunk
var blobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(containerName);
var blob = blobContainer.GetBlockBlobReference(blobName);
blob.FetchAttributes();
var blobLengthRemaining = blob.Properties.Length;
long startPosition = 0;
string saveFileName = @"E:\testfilefolder\myfile.zip";
do
{
    long blockSize = Math.Min(segmentSize, blobLengthRemaining);
    byte[] blobContents = new byte[blockSize];
    using (MemoryStream ms = new MemoryStream())
    {
        blob.DownloadRangeToStream(ms, startPosition, blockSize);
        ms.Position = 0;
        ms.Read(blobContents, 0, blobContents.Length);
        using (FileStream fs = new FileStream(saveFileName, FileMode.OpenOrCreate))
        {
            fs.Position = startPosition;
            fs.Write(blobContents, 0, blobContents.Length);
        }
    }
    startPosition += blockSize;
    blobLengthRemaining -= blockSize;
}
while (blobLengthRemaining > 0);
Hope this helps; if you still have any problems, please feel free to let me know.
This doesn't work for me for large files (>5 GB). What I did instead was return the path to the file with a SAS appended and send that to the frontend. The frontend now has a link with the SAS, and the browser downloads the file directly from there.
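For anyone wanting to try that, here is a minimal sketch of the redirect approach in an ASP.NET Core action; the SAS policy mirrors the one above, while GetBlob and the route are made up for illustration:
[HttpGet("download/{fileName}")]
public IActionResult DownloadViaSas(string fileName)
{
    CloudBlockBlob blob = GetBlob(fileName); // hypothetical helper that resolves the blob
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
        Permissions = SharedAccessBlobPermissions.Read
    });
    // Hand the SAS link to the browser so it downloads directly from storage,
    // instead of streaming gigabytes through the web server.
    return Redirect(blob.Uri + sas);
}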

getting the same directory structure after decompressing a zipped file

I downloaded the zip file from the Amazon server (using the AWS SDK for Unity). The zip file has a folder, which has one more folder inside it, which contains PNG files. When I got the response object from the Amazon server, I read it as a byte array and stored it as a .zip file. When I double-click on the zip file, I get the directory and subdirectory inside it containing the PNG files. Now I need to programmatically unzip the file. I am trying to use GZipStream to decompress it, which returns the uncompressed byte array. Now how can I save this byte array so that I retain my folder structure? Also, I don't want to use a third-party library to decompress the zipped file.
void Start()
{
    UnityInitializer.AttachToGameObject(this.gameObject);
    client = new AmazonS3Client(mAccKey, mSecretKey, mRegion);
    Debug.Log("Getting the presigned url\n");
    GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
    request.BucketName = mBucketName;
    request.Key = mFileName;
    request.Expires = DateTime.Now.AddMinutes(5);
    request.Protocol = Protocol.HTTP;
    GetObjectRequest requestObject = new GetObjectRequest();
    requestObject.BucketName = mBucketName;
    requestObject.Key = mFileName;
    Debug.Log("Requesting for the " + mFileName + " contents from the bucket\n" + mBucketName);
    client.GetObjectAsync(mBucketName, mFileName, (responseObj) =>
    {
        var response = responseObj.Response;
        if (response.ResponseStream != null)
        {
            Debug.Log("Receiving response\n");
            using (BinaryReader bReader = new BinaryReader(response.ResponseStream))
            {
                byte[] buffer = bReader.ReadBytes((int)response.ResponseStream.Length);
                var zippedPath = Application.persistentDataPath + "/" + zippedFile;
                File.WriteAllBytes(zippedPath, buffer);
                var unZippedPath = Application.persistentDataPath + "/" + unZipToFolder;
                DirectoryInfo directory = Directory.CreateDirectory(unZippedPath);
                byte[] compressedData = compress(buffer);
                byte[] unCompressedData = decompress(compressedData);
                //Debug.Log(unCompressedData.Length);
                File.WriteAllBytes(unZippedPath + directory, unCompressedData);
            }
            Debug.Log("Response complete");
        }
    });
}

#region GZipStream
public static byte[] compress(byte[] data)
{
    using (MemoryStream outStream = new MemoryStream())
    {
        using (GZipStream gzipStream = new GZipStream(outStream, CompressionMode.Compress))
        using (MemoryStream srcStream = new MemoryStream(data))
            CopyTo(srcStream, gzipStream);
        return outStream.ToArray();
    }
}

public static byte[] decompress(byte[] compressed)
{
    using (MemoryStream inStream = new MemoryStream(compressed))
    using (GZipStream gzipStream = new GZipStream(inStream, CompressionMode.Decompress))
    using (MemoryStream outStream = new MemoryStream())
    {
        CopyTo(gzipStream, outStream);
        return outStream.ToArray();
    }
}

public static void CopyTo(Stream input, Stream output)
{
    byte[] buffer = new byte[16 * 1024];
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}
#endregion
}
Folder structure inside the zip file: images -> sample images -> 10 png files.
GZipStream can only (de)compress streams. In other words, you cannot restore folder structure using it. Use ZipFile or, if you cannot use framework 4.5, SharpZipLib.
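For example, a minimal sketch with the framework's ZipFile class, reusing the paths from the question (availability under Unity depends on the scripting runtime settings):
// Requires: using System.IO.Compression; (assembly System.IO.Compression.FileSystem, .NET 4.5+)
var zippedPath = Application.persistentDataPath + "/" + zippedFile;
var unZippedPath = Application.persistentDataPath + "/" + unZipToFolder;
// Recreates the directories and png files exactly as stored in the archive.
ZipFile.ExtractToDirectory(zippedPath, unZippedPath);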

How can I get the bytes of a GetObjectResponse from S3?

I'm retrieving a file from Amazon S3. I want to convert the file to bytes so that I can download it as follows:
var download = new FileContentResult(bytes, "application/pdf");
download.FileDownloadName = filename;
return download;
I have the file here:
var client = Amazon.AWSClientFactory.CreateAmazonS3Client(
    accessKey,
    secretKey,
    config
);
GetObjectRequest request = new GetObjectRequest();
GetObjectResponse response = client.GetObject(request);
I know about response.WriteResponseStreamToFile() but I want to download the file to the regular downloads folder. If I convert the GetObjectResponse to bytes, I can return the file. How can I do this?
Here's the solution I found for anyone else who needs it:
GetObjectResponse response = client.GetObject(request);
using (Stream responseStream = response.ResponseStream)
{
    var bytes = ReadStream(responseStream);
    var download = new FileContentResult(bytes, "application/pdf");
    download.FileDownloadName = filename;
    return download;
}

public static byte[] ReadStream(Stream responseStream)
{
    byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}
Just another option:
Stream rs; // note: rs must be assigned an initialized, writable stream before CopyTo is called
using (IAmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client())
{
    GetObjectRequest getObjectRequest = new GetObjectRequest();
    getObjectRequest.BucketName = "mybucketname";
    getObjectRequest.Key = "mykey";
    using (var getObjectResponse = client.GetObject(getObjectRequest))
    {
        getObjectResponse.ResponseStream.CopyTo(rs);
    }
}
I struggled to get the cleaner method offered by Alex to work (not sure what I'm missing), but I wanted to do it without the extra ReadStream method offered by Erica (although it worked). Here is what I wound up doing:
var s3Client = new AmazonS3Client(AccessKeyId, SecretKey, Amazon.RegionEndpoint.USEast1);
using (s3Client)
{
    MemoryStream ms = new MemoryStream();
    GetObjectRequest getObjectRequest = new GetObjectRequest();
    getObjectRequest.BucketName = BucketName;
    getObjectRequest.Key = awsFileKey;
    using (var getObjectResponse = s3Client.GetObject(getObjectRequest))
    {
        getObjectResponse.ResponseStream.CopyTo(ms);
    }
    var download = new FileContentResult(ms.ToArray(), "image/png"); //"application/pdf"
    download.FileDownloadName = ToFilePath;
    return download;
}
Stream now has asynchronous methods. In C# 8, you can do this:
public async Task<byte[]> GetAttachmentAsync(string objectPointer)
{
    var objReq = new GetObjectRequest
    {
        BucketName = "bucket-name",
        Key = objectPointer, // the file name
    };
    using var objResp = await _s3Client.GetObjectAsync(objReq);
    using var ms = new MemoryStream();
    await objResp.ResponseStream.CopyToAsync(ms, _ct); // _ct is a CancellationToken
    return ms.ToArray();
}
This won't block any threads while the IO occurs.
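As a possible usage sketch, combined with the FileContentResult pattern from the accepted answer (the key and content type here are placeholders):
var bytes = await GetAttachmentAsync("reports/invoice.pdf"); // placeholder object key
var download = new FileContentResult(bytes, "application/pdf")
{
    FileDownloadName = "invoice.pdf" // placeholder file name
};
return download;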

ICSharpZipLib - unzipping file issue

I have an application in ASP.NET where the user can upload a ZIP file. I'm trying to extract the file using ICSharpZipLib (I also tried DotNetZip, but had the same issue).
This zip file contains a single XML document (9 KB before compression).
When I open this file with other applications on my desktop (7-Zip, Windows Explorer), it seems to be fine.
My unzip method throws a System.OutOfMemoryException and I have no idea why. When I debugged my unzipping method, I noticed that the zipInputStream's Length property throws an exception and is not available:
Stream UnZipSingleFile(Stream memoryStream)
{
    var zipInputStream = new ZipInputStream(memoryStream);
    memoryStream.Position = 0;
    zipInputStream.GetNextEntry();
    MemoryStream unzippedStream = new MemoryStream();
    int len;
    byte[] buf = new byte[4096];
    while ((len = zipInputStream.Read(buf, 0, buf.Length)) > 0)
    {
        unzippedStream.Write(buf, 0, len);
    }
    unzippedStream.Position = 0;
    memoryStream.Position = 0;
    return unzippedStream;
}
And here's how I get the string from unzippedStream:
string GetString()
{
    var reader = new StreamReader(unzippedStream);
    var result = reader.ReadToEnd();
    unzippedStream.Position = 0;
    return result;
}
From their wiki:
"Sharpzip supports Zip files using both stored and deflate compression methods and also supports old (PKZIP 2.0) style and AES encryption"
Are you sure the format of the uploaded zip file is acceptable for SharpZipLib?
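One quick way to check is SharpZipLib's own integrity test; a sketch, assuming the uploaded stream is seekable:
// Validates the archive before trying to extract it.
using (var zipFile = new ICSharpCode.SharpZipLib.Zip.ZipFile(memoryStream))
{
    zipFile.IsStreamOwner = false; // leave the caller's stream open
    bool intact = zipFile.TestArchive(true); // true = also test entry data
    Console.WriteLine(intact ? "archive OK" : "archive corrupt");
}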
While this post is quite old, I think it could be beneficial to illustrate how I did this for compression and decompression using ICSharpZipLib (C# package version 1.1.0). I put this together by looking into the examples shown here (see e.g. these compression and decompression examples).
Assumption: the input to the compression and decompression below should be in bytes. If you have e.g. an XML file, you could load it into an XDocument and convert it into an XmlDocument with .ToXmlDocument(). From there, you could access the string contents by calling .OuterXml and convert the string to a byte array.
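For instance, a small sketch of that conversion (serializing the XDocument directly rather than going through XmlDocument; the file name is a placeholder):
using System.Text;
using System.Xml.Linq;

var doc = XDocument.Load("someFile.xml"); // placeholder path
byte[] inputBytes = Encoding.UTF8.GetBytes(doc.ToString()); // feeds the compression snippet below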
// Compression (inputBytes = e.g. the string to compress, as bytes)
using var dataStream = new MemoryStream(inputBytes);
var outputStream = new MemoryStream();
using (var zipStream = new ZipOutputStream(outputStream))
{
    zipStream.SetLevel(3);
    var newEntry = new ZipEntry("someFilename.someExtension");
    newEntry.DateTime = DateTime.Now;
    zipStream.PutNextEntry(newEntry);
    StreamUtils.Copy(dataStream, zipStream, new byte[4096]);
    zipStream.CloseEntry();
    zipStream.IsStreamOwner = false;
}
outputStream.Position = 0;
var outputBytes = outputStream.ToArray();
// Decompression (inputBytes = e.g. the string to decompress, as bytes)
using var dataStream = new MemoryStream(inputBytes);
var outputStream = new MemoryStream();
using (var zipStream = new ZipInputStream(dataStream))
{
    while (zipStream.GetNextEntry() is ZipEntry zipEntry)
    {
        var buffer = new byte[4096];
        StreamUtils.Copy(zipStream, outputStream, buffer);
    }
}
var outputBytes = outputStream.ToArray();
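To get the original string back from the decompressed bytes (assuming UTF-8, matching the sketch further above):
var xmlString = Encoding.UTF8.GetString(outputBytes); // inverse of the GetBytes call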
