I need to download large backup files from my storage account.
I tried it with SAS: I generated a link, and when I paste that link directly into the browser it downloads the file, but when I try to download through my code I get an empty file or no download at all. The commented-out lines are approaches I already tried; the last one is Redirect(blobSasUri);
public async Task DownloadBlobItemAsync([FromQuery] string userId, [FromRoute] string fileName, [FromBody] PathObject path, [FromRoute] int filestorageConnectionId)
{
var fileStorageConnection = await _customerProvider.GetFileStorageConnection(filestorageConnectionId);
var customer = await _customerProvider.GetCustomer(fileStorageConnection.CustomerId);
CloudBlockBlob blob = _fileStorage.DownloadBlobFile(fileStorageConnection.Id, userId, customer.Id, fileName, path.Path);
var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
SharedAccessStartTime = DateTime.UtcNow.AddHours(-5),
SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
Permissions = SharedAccessBlobPermissions.Read
});
string blobSasUri = (string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas));
// CloudBlockBlob blobNew = new CloudBlockBlob(new Uri(blobSasUri));
// var pathNew = Directory.GetCurrentDirectory();
// blobNew.DownloadToFileAsync(pathNew, FileMode.Create);
//await blob.DownloadToFileAsync(blobSasUri, FileMode.Create);
Redirect(blobSasUri);
//using (var client = new WebClient())
//{
// client.DownloadFile(blobSasUri, fileName);
//}
}
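For reference, one detail worth noting in the action above: the Redirect result is created but never returned, and the method's return type is Task rather than Task<IActionResult>, so the client ends up with an empty response. A minimal sketch of the same action actually returning the redirect (assuming an ASP.NET Core controller; the provider and storage calls are the ones from the original code):
public async Task<IActionResult> DownloadBlobItemAsync([FromQuery] string userId, [FromRoute] string fileName, [FromBody] PathObject path, [FromRoute] int filestorageConnectionId)
{
    var fileStorageConnection = await _customerProvider.GetFileStorageConnection(filestorageConnectionId);
    var customer = await _customerProvider.GetCustomer(fileStorageConnection.CustomerId);
    CloudBlockBlob blob = _fileStorage.DownloadBlobFile(fileStorageConnection.Id, userId, customer.Id, fileName, path.Path);
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessStartTime = DateTime.UtcNow.AddHours(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
        Permissions = SharedAccessBlobPermissions.Read
    });
    string blobSasUri = string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas);
    // Return the redirect so the browser follows the SAS URL and streams the blob itself.
    return Redirect(blobSasUri);
}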
I don't know which method you used to download the blob; I tested with blobSas.DownloadToStream() and it worked for me, so maybe you could try my code.
static void Main(string[] args)
{
string storageConnectionString = "connection string";
// Check whether the connection string can be parsed.
CloudStorageAccount storageAccount;
CloudStorageAccount.TryParse(storageConnectionString, out storageAccount);
var containerName = "test";
var blobName = "testfile.zip";
string saveFileName = @"E:\testfilefolder\myfile1.zip";
var blobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(containerName);
var blob = blobContainer.GetBlockBlobReference(blobName);
var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
SharedAccessStartTime = DateTime.UtcNow.AddHours(-5),
SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
Permissions = SharedAccessBlobPermissions.Read
});
string blobSasUri = (string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas));
//Download Blob through SAS url
CloudBlockBlob blobSas = new CloudBlockBlob(new Uri(blobSasUri));
long startPosition = 0;
using (MemoryStream ms = new MemoryStream())
{
blobSas.DownloadToStream(ms);
byte[] data = new byte[ms.Length];
ms.Position = 0;
ms.Read(data, 0, data.Length);
using (FileStream fs = new FileStream(saveFileName, FileMode.OpenOrCreate))
{
fs.Position = startPosition;
fs.Write(data, 0, data.Length);
}
}
}
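Note that the snippet above buffers the whole blob in a MemoryStream before writing it to disk, which gets expensive for very large files. A minimal sketch of the same download written straight to a file with DownloadToFile (same WindowsAzure.Storage SDK; blobSasUri and saveFileName are the variables built above):
// Download directly to disk instead of buffering the whole blob in memory.
CloudBlockBlob blobSasDirect = new CloudBlockBlob(new Uri(blobSasUri));
blobSasDirect.DownloadToFile(saveFileName, FileMode.Create);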
Besides downloading a large blob through a SAS URL, another option is to fetch the file in chunks. Here is the code.
int segmentSize = 1 * 1024 * 1024;//1 MB chunk
var blobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference(containerName);
var blob = blobContainer.GetBlockBlobReference(blobName);
blob.FetchAttributes();
var blobLengthRemaining = blob.Properties.Length;
long startPosition = 0;
string saveFileName = @"E:\testfilefolder\myfile.zip";
do
{
long blockSize = Math.Min(segmentSize, blobLengthRemaining);
byte[] blobContents = new byte[blockSize];
using (MemoryStream ms = new MemoryStream())
{
blob.DownloadRangeToStream(ms, startPosition, blockSize);
ms.Position = 0;
ms.Read(blobContents, 0, blobContents.Length);
using (FileStream fs = new FileStream(saveFileName, FileMode.OpenOrCreate))
{
fs.Position = startPosition;
fs.Write(blobContents, 0, blobContents.Length);
}
}
startPosition += blockSize;
blobLengthRemaining -= blockSize;
}
while (blobLengthRemaining > 0);
Hope this helps; if you still have any problems, please feel free to let me know.
This doesn't work for me for large files (>5 GB). What I did instead was return the path to the file with a SAS appended and send that to the frontend. The frontend now has a link with the SAS, and the file downloads directly through the browser.
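For context, a minimal sketch of that approach, i.e. an action that only builds and returns the SAS link so the browser downloads the blob directly (the helper calls follow the original code; the method name and return shape are illustrative):
public async Task<IActionResult> GetBlobDownloadLinkAsync(string userId, string fileName, PathObject path, int filestorageConnectionId)
{
    var fileStorageConnection = await _customerProvider.GetFileStorageConnection(filestorageConnectionId);
    var customer = await _customerProvider.GetCustomer(fileStorageConnection.CustomerId);
    CloudBlockBlob blob = _fileStorage.DownloadBlobFile(fileStorageConnection.Id, userId, customer.Id, fileName, path.Path);
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(5),
        Permissions = SharedAccessBlobPermissions.Read
    });
    // The frontend gets the blob URI with the SAS appended and lets the browser stream the download.
    return Ok(blob.Uri + sas);
}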
I have an issue when trying to upload a large file to a SharePoint subfolder.
The issue is related to the variable libraryName. I am not sure how I can change this so I can use a URL instead.
Example:
var site = "https://sharepoint.com/sites/Test_Site1/";
var relative = "Documents/Folder1/folder2/";
https://learn.microsoft.com/en-us/sharepoint/dev/solution-guidance/upload-large-files-sample-app-for-sharepoint
public Microsoft.SharePoint.Client.File UploadFileSlicePerSlice(ClientContext ctx, string libraryName, string fileName, int fileChunkSizeInMB = 3)
{
// Each sliced upload requires a unique ID.
Guid uploadId = Guid.NewGuid();
// Get the name of the file.
string uniqueFileName = Path.GetFileName(fileName);
// Ensure that target library exists, and create it if it is missing.
if (!LibraryExists(ctx, ctx.Web, libraryName))
{
CreateLibrary(ctx, ctx.Web, libraryName);
}
// Get the folder to upload into.
List docs = ctx.Web.Lists.GetByTitle(libraryName);
ctx.Load(docs, l => l.RootFolder);
// Get the information about the folder that will hold the file.
ctx.Load(docs.RootFolder, f => f.ServerRelativeUrl);
ctx.ExecuteQuery();
// File object.
Microsoft.SharePoint.Client.File uploadFile = null;
// Calculate block size in bytes.
int blockSize = fileChunkSizeInMB * 1024 * 1024;
// Get the information about the folder that will hold the file.
ctx.Load(docs.RootFolder, f => f.ServerRelativeUrl);
ctx.ExecuteQuery();
// Get the size of the file.
long fileSize = new FileInfo(fileName).Length;
if (fileSize <= blockSize)
{
// Use regular approach.
using (FileStream fs = new FileStream(fileName, FileMode.Open))
{
FileCreationInformation fileInfo = new FileCreationInformation();
fileInfo.ContentStream = fs;
fileInfo.Url = uniqueFileName;
fileInfo.Overwrite = true;
uploadFile = docs.RootFolder.Files.Add(fileInfo);
ctx.Load(uploadFile);
ctx.ExecuteQuery();
// Return the file object for the uploaded file.
return uploadFile;
}
}
else
{
// Use large file upload approach.
ClientResult<long> bytesUploaded = null;
FileStream fs = null;
try
{
fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
using (BinaryReader br = new BinaryReader(fs))
{
byte[] buffer = new byte[blockSize];
Byte[] lastBuffer = null;
long fileoffset = 0;
long totalBytesRead = 0;
int bytesRead;
bool first = true;
bool last = false;
// Read data from file system in blocks.
while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
{
totalBytesRead = totalBytesRead + bytesRead;
// You've reached the end of the file.
if (totalBytesRead == fileSize)
{
last = true;
// Copy to a new buffer that has the correct size.
lastBuffer = new byte[bytesRead];
Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
}
if (first)
{
using (MemoryStream contentStream = new MemoryStream())
{
// Add an empty file.
FileCreationInformation fileInfo = new FileCreationInformation();
fileInfo.ContentStream = contentStream;
fileInfo.Url = uniqueFileName;
fileInfo.Overwrite = true;
uploadFile = docs.RootFolder.Files.Add(fileInfo);
// Start upload by uploading the first slice.
using (MemoryStream s = new MemoryStream(buffer))
{
// Call the start upload method on the first slice.
bytesUploaded = uploadFile.StartUpload(uploadId, s);
ctx.ExecuteQuery();
// fileoffset is the pointer where the next slice will be added.
fileoffset = bytesUploaded.Value;
}
// You can only start the upload once.
first = false;
}
}
else
{
if (last)
{
// Is this the last slice of data?
using (MemoryStream s = new MemoryStream(lastBuffer))
{
// End sliced upload by calling FinishUpload.
uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
ctx.ExecuteQuery();
// Return the file object for the uploaded file.
return uploadFile;
}
}
else
{
using (MemoryStream s = new MemoryStream(buffer))
{
// Continue sliced upload.
bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
ctx.ExecuteQuery();
// Update fileoffset for the next slice.
fileoffset = bytesUploaded.Value;
}
}
}
} // while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
}
}
finally
{
if (fs != null)
{
fs.Dispose();
}
}
}
return null;
}
This is the first page where I run the method:
using Microsoft.SharePoint.Client;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security;
using System.Text;
using System.Threading.Tasks;
namespace Contoso.Core.LargeFileUpload
{
class Program
{
static void Main(string[] args)
{
// Request Office365 site from the user
string siteUrl = @"https://bundegruppen.sharepoint.com/sites/F24-2905/";
/* Prompt for Credentials */
//Console.WriteLine("Filer blir overført til site: {0}", siteUrl);
string userName = "xx.xx@bxxbygg.no";
SecureString pwd = new SecureString();
string password = "xxx";
foreach (char c in password.ToCharArray()) pwd.AppendChar(c);
/* End Program if no Credentials */
if (string.IsNullOrEmpty(userName) || (pwd == null))
return;
ClientContext ctx = new ClientContext(siteUrl);
ctx.AuthenticationMode = ClientAuthenticationMode.Default;
ctx.Credentials = new SharePointOnlineCredentials(userName, pwd);
// These should both work as expected.
try
{
// Alternative 3 for uploading large files: slice per slice, which allows you to stop and resume an upload
new FileUploadService().UploadFileSlicePerSliceToFolder(ctx, "Dokumenter/General", @"C:\Temp\F24_Sammenstillingsmodell.smc");
}
catch (Exception ex)
{
Console.WriteLine(string.Format("Exception while uploading files to the target site: {0}.", ex.ToString()));
Console.WriteLine("Press enter to continue.");
Console.Read();
}
// Just to see what we have in console
Console.ForegroundColor = ConsoleColor.White;
}
}
}
The code you have is written just to upload the specified file to the RootFolder of the named Library. If you pass in a full path to a folder instead of just a Library Name, it will fail.
The following is a modded version of the function that should allow you to pass a full serverRelativeUrl to the desired folder:
public Microsoft.SharePoint.Client.File UploadFileSlicePerSliceToFolder(ClientContext ctx, string serverRelativeFolderUrl, string fileName, int fileChunkSizeInMB = 3)
{
// Each sliced upload requires a unique ID.
Guid uploadId = Guid.NewGuid();
// Get the name of the file.
string uniqueFileName = Path.GetFileName(fileName);
// Get the folder to upload into.
Folder uploadFolder = ctx.Web.GetFolderByServerRelativeUrl(serverRelativeFolderUrl);
// Get the information about the folder that will hold the file.
ctx.Load(uploadFolder);
ctx.ExecuteQuery();
// File object.
Microsoft.SharePoint.Client.File uploadFile = null;
// Calculate block size in bytes.
int blockSize = fileChunkSizeInMB * 1024 * 1024;
// Get the information about the folder that will hold the file.
ctx.Load(uploadFolder, f => f.ServerRelativeUrl);
ctx.ExecuteQuery();
// Get the size of the file.
long fileSize = new FileInfo(fileName).Length;
if (fileSize <= blockSize)
{
// Use regular approach.
using (FileStream fs = new FileStream(fileName, FileMode.Open))
{
FileCreationInformation fileInfo = new FileCreationInformation();
fileInfo.ContentStream = fs;
fileInfo.Url = uniqueFileName;
fileInfo.Overwrite = true;
uploadFile = uploadFolder.Files.Add(fileInfo);
ctx.Load(uploadFile);
ctx.ExecuteQuery();
// Return the file object for the uploaded file.
return uploadFile;
}
}
else
{
// Use large file upload approach.
ClientResult<long> bytesUploaded = null;
FileStream fs = null;
try
{
fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
using (BinaryReader br = new BinaryReader(fs))
{
byte[] buffer = new byte[blockSize];
Byte[] lastBuffer = null;
long fileoffset = 0;
long totalBytesRead = 0;
int bytesRead;
bool first = true;
bool last = false;
// Read data from file system in blocks.
while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
{
totalBytesRead = totalBytesRead + bytesRead;
// You've reached the end of the file.
if (totalBytesRead == fileSize)
{
last = true;
// Copy to a new buffer that has the correct size.
lastBuffer = new byte[bytesRead];
Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
}
if (first)
{
using (MemoryStream contentStream = new MemoryStream())
{
// Add an empty file.
FileCreationInformation fileInfo = new FileCreationInformation();
fileInfo.ContentStream = contentStream;
fileInfo.Url = uniqueFileName;
fileInfo.Overwrite = true;
uploadFile = uploadFolder.Files.Add(fileInfo);
// Start upload by uploading the first slice.
using (MemoryStream s = new MemoryStream(buffer))
{
// Call the start upload method on the first slice.
bytesUploaded = uploadFile.StartUpload(uploadId, s);
ctx.ExecuteQuery();
// fileoffset is the pointer where the next slice will be added.
fileoffset = bytesUploaded.Value;
}
// You can only start the upload once.
first = false;
}
}
else
{
if (last)
{
// Is this the last slice of data?
using (MemoryStream s = new MemoryStream(lastBuffer))
{
// End sliced upload by calling FinishUpload.
uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
ctx.ExecuteQuery();
// Return the file object for the uploaded file.
return uploadFile;
}
}
else
{
using (MemoryStream s = new MemoryStream(buffer))
{
// Continue sliced upload.
bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
ctx.ExecuteQuery();
// Update fileoffset for the next slice.
fileoffset = bytesUploaded.Value;
}
}
}
} // while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
}
}
finally
{
if (fs != null)
{
fs.Dispose();
}
}
}
return null;
}
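To tie this back to the question, the call site then passes the full server-relative URL of the target folder instead of a library title. A sketch, reusing the FileUploadService call from the Main method above (the folder URL is illustrative and must already exist):
// Upload into a specific subfolder, addressed by its server-relative URL.
new FileUploadService().UploadFileSlicePerSliceToFolder(
    ctx,
    "/sites/Test_Site1/Documents/Folder1/folder2",
    @"C:\Temp\F24_Sammenstillingsmodell.smc");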
I am trying to upload a file to a bucket using the Forge .NET SDK. It works most of the time but occasionally fails with an "overlapping ranges" error. Here is the code snippet.
private string uploadFileToBucket(Configuration configuration, string bucketKey, string filePath)
{
ObjectsApi objectsApi = new ObjectsApi(configuration);
string fileName = Path.GetFileName(filePath);
string base64EncodedUrn, objectKey;
using (FileStream fileStream = File.Open(filePath, FileMode.Open))
{
long contentLength = fileStream.Length;
string content_range = "bytes 0-" + (contentLength - 1) + "/" + contentLength;
dynamic result = objectsApi.UploadChunk(bucketKey, fileName, (int)fileStream.Length, content_range,
"12313", fileStream);
DynamicJsonResponse dynamicJsonResponse = (DynamicJsonResponse)result;
JObject json = dynamicJsonResponse.ToJson();
JToken urn = json.GetValue("objectId");
string urnStr = urn.ToString();
base64EncodedUrn = ApiClient.encodeToSafeBase64(urnStr);
objectKey = fileName;
}
return base64EncodedUrn;
}
Before uploading, the file content has to be read into memory; otherwise, the FileStream object in your code snippet is empty.
However, I would advise you to use PUT buckets/:bucketKey/objects/:objectName instead if you only want to upload the whole file in a single chunk. Here is my test code. Hope it helps~
private static TwoLeggedApi oauth2TwoLegged;
private static dynamic twoLeggedCredentials;
private static ObjectsApi objectsApi = new ObjectsApi();
private static Random random = new Random();
public static string RandomString(int length)
{
const string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
return new string(Enumerable.Repeat(chars, length)
.Select(s => s[random.Next(s.Length)]).ToArray());
}
// Initialize the 2-legged OAuth 2.0 client, and optionally set specific scopes.
private static void initializeOAuth()
{
// You must provide at least one valid scope
Scope[] scopes = new Scope[] { Scope.DataRead, Scope.DataWrite, Scope.BucketCreate, Scope.BucketRead };
oauth2TwoLegged = new TwoLeggedApi();
twoLeggedCredentials = oauth2TwoLegged.Authenticate(FORGE_CLIENT_ID, FORGE_CLIENT_SECRET, oAuthConstants.CLIENT_CREDENTIALS, scopes);
objectsApi.Configuration.AccessToken = twoLeggedCredentials.access_token;
}
private static void uploadFileToBucket(string bucketKey, string filePath)
{
Console.WriteLine("*****Start uploading file to the OSS");
string path = filePath;
//File Total size
var info = new System.IO.FileInfo(path);
long fileSize = info.Length;
using (FileStream fileStream = File.Open(filePath, FileMode.Open))
{
string sessionId = RandomString(12);
Console.WriteLine(string.Format("sessionId: {0}", sessionId));
long contentLength = fileSize;
string content_range = "bytes 0-" + (contentLength - 1) + "/" + contentLength;
Console.WriteLine("Uploading range: " + content_range);
byte[] buffer = new byte[contentLength];
MemoryStream memoryStream = new MemoryStream(buffer);
int nb = fileStream.Read(buffer, 0, (int)contentLength);
memoryStream.Write(buffer, 0, nb);
memoryStream.Position = 0;
dynamic response = objectsApi.UploadChunk(bucketKey, info.Name, (int)contentLength, content_range,
sessionId, memoryStream);
Console.WriteLine(response);
}
}
static void Main(string[] args)
{
initializeOAuth();
uploadFileToBucket(BUCKET_KEY, FILE_PATH);
}
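If a single request is really all you need, the PUT buckets/:bucketKey/objects/:objectName endpoint mentioned above is exposed in the Forge .NET SDK as ObjectsApi.UploadObject. A hedged sketch under that assumption (objectsApi is the authenticated instance from initializeOAuth; check the exact signature against your SDK version):
private static void uploadFileToBucketInSingleRequest(string bucketKey, string filePath)
{
    var info = new System.IO.FileInfo(filePath);
    using (FileStream fileStream = File.Open(filePath, FileMode.Open))
    {
        // One PUT for the whole file; no sessionId or Content-Range bookkeeping needed.
        dynamic response = objectsApi.UploadObject(bucketKey, info.Name, (int)info.Length, fileStream);
        Console.WriteLine(response);
    }
}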
I am attempting to download a list of files from URLs stored in my database and then upload them to my Azure File Storage account. I am successfully downloading the files and can save them to local storage or convert them to text and upload that. However, I lose data when converting something like a PDF to text, and I do not want to store the files on the Azure app that hosts this endpoint, since I do not need to manipulate the files in any way.
I have attempted to upload the files from the Stream I get from the HttpContent object using the UploadFromStream method on the CloudFile. Whenever this command is run I get an InvalidOperationException with the message "Operation is not valid due to the current state of the object."
I've tried converting the original Stream to a MemoryStream as well but this just writes a blank file to the FileStorage account, even if I set the position to the beginning of the MemoryStream. My code is below and if anyone could point out what information I am missing to make this work I would appreciate it.
public DownloadFileResponse DownloadFile(FileLink fileLink)
{
string fileName = string.Format("{0}{1}{2}", fileLink.ExpectedFileName, ".", fileLink.ExpectedFileType);
HttpStatusCode status;
string hash = "";
using (var client = new HttpClient())
{
client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
var request = new HttpRequestMessage(HttpMethod.Get, fileLink.ExpectedURL);
var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
status = response.StatusCode;
if (status == HttpStatusCode.OK)
{
var httpStream = response.Content.ReadAsStreamAsync().Result;
fileStorage.WriteFile(fileLink.ExpectedFileType, fileName, httpStream);
hash = HashGenerator.GetMD5HashFromStream(httpStream);
}
}
return new DownloadFileResponse(status, fileName, hash);
}
public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
var options = SetOptions();
var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
newFile.UploadFromStream(fileStream, options: options);
}
public FileRequestOptions SetOptions()
{
FileRequestOptions options = new FileRequestOptions();
options.ServerTimeout = TimeSpan.FromSeconds(10);
options.RetryPolicy = new NoRetry();
return options;
}
public CloudFile GetTargetCloudFile(string targetDirectory, string targetFilePath)
{
if (!shareConnector.share.Exists())
{
throw new Exception("Cannot access Azure File Storage share");
}
CloudFileDirectory rootDirectory = shareConnector.share.GetRootDirectoryReference();
CloudFileDirectory directory = rootDirectory.GetDirectoryReference(targetDirectory);
if (!directory.Exists())
{
throw new Exception("Target Directory does not exist");
}
CloudFile newFile = directory.GetFileReference(targetFilePath);
return newFile;
}
I had the same problem; the only way it worked was by reading the incoming stream (in your case, httpStream in the DownloadFile(FileLink fileLink) method) into a byte array and using UploadFromByteArray(byte[] buffer, int index, int count) instead of UploadFromStream.
So your WriteFile method will look like:
public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
var options = SetOptions();
var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
const int bufferLength = 600; // buffer size for reading from the stream; this size is just an example
byte[] buffer = new byte[bufferLength];
List<byte> byteArrayFile = new List<byte>(); // the whole file will end up here
int count = 0;
try
{
while ((count = fileStream.Read(buffer, 0, bufferLength)) > 0)
{
// Only add the bytes that were actually read on this pass.
for (int i = 0; i < count; i++)
{
byteArrayFile.Add(buffer[i]);
}
}
fileStream.Close();
}
catch (Exception ex)
{
throw; // you need to change this
}
newFile.UploadFromByteArray(byteArrayFile.ToArray(), 0, byteArrayFile.Count, options: options);
}
According to your description and code, I suggest using Stream.CopyTo to copy the stream into a local MemoryStream first, then uploading the MemoryStream to Azure File Storage.
For more details, refer to the code below.
I just changed the DownloadFile method to test it.
HttpStatusCode status;
using (var client = new HttpClient())
{
client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
// client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
//here I use my blob file to test it
var request = new HttpRequestMessage(HttpMethod.Get, "https://xxxxxxxxxx.blob.core.windows.net/media/secondblobtest-eypt.txt");
var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
status = response.StatusCode;
if (status == HttpStatusCode.OK)
{
MemoryStream ms = new MemoryStream();
var httpStream = response.Content.ReadAsStreamAsync().Result;
httpStream.CopyTo(ms);
ms.Position = 0;
WriteFile("aaa", "testaa", ms);
// hash = HashGenerator.GetMD5HashFromStream(httpStream);
}
}
I had a similar problem and found out that the UploadFromStream method only works with buffered streams. Nevertheless, I was able to successfully upload files to Azure storage by using a MemoryStream. I don't think this is a very good solution, since you use up memory by copying the content of the file stream into memory before handing it to the Azure stream. What I came up with is a way of writing directly to an Azure stream: use the OpenWriteAsync method to create the stream, then a simple CopyToAsync from the source stream.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse( "YourAzureStorageConnectionString" );
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference( "YourShareName" );
CloudFileDirectory root = share.GetRootDirectoryReference();
CloudFile file = root.GetFileReference( "TheFileName" );
using (CloudFileStream fileWriteStream = await file.OpenWriteAsync( fileMetadata.FileSize, new AccessCondition(),
new FileRequestOptions { StoreFileContentMD5 = true },
new OperationContext() ))
{
await fileContent.CopyToAsync( fileWriteStream, 128 * 1024 );
}
I'm retrieving a file from Amazon S3. I want to convert the file to bytes so that I can download it as follows:
var download = new FileContentResult(bytes, "application/pdf");
download.FileDownloadName = filename;
return download;
I have the file here:
var client = Amazon.AWSClientFactory.CreateAmazonS3Client(
accessKey,
secretKey,
config
);
GetObjectRequest request = new GetObjectRequest();
GetObjectResponse response = client.GetObject(request);
I know about response.WriteResponseStreamToFile() but I want to download the file to the regular downloads folder. If I convert the GetObjectResponse to bytes, I can return the file. How can I do this?
Here's the solution I found for anyone else who needs it:
GetObjectResponse response = client.GetObject(request);
using (Stream responseStream = response.ResponseStream)
{
var bytes = ReadStream(responseStream);
var download = new FileContentResult(bytes, "application/pdf");
download.FileDownloadName = filename;
return download;
}
public static byte[] ReadStream(Stream responseStream)
{
byte[] buffer = new byte[16 * 1024];
using (MemoryStream ms = new MemoryStream())
{
int read;
while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
ms.Write(buffer, 0, read);
}
return ms.ToArray();
}
}
Just another option:
Stream rs = new MemoryStream();
using (IAmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client())
{
GetObjectRequest getObjectRequest = new GetObjectRequest();
getObjectRequest.BucketName = "mybucketname";
getObjectRequest.Key = "mykey";
using (var getObjectResponse = client.GetObject(getObjectRequest))
{
getObjectResponse.ResponseStream.CopyTo(rs);
}
}
I struggled to get the cleaner method offered by Alex to work (not sure what I'm missing), but I wanted to do it w/o the extra ReadStream method offered by Erica (although it worked)... here is what I wound up doing:
var s3Client = new AmazonS3Client(AccessKeyId, SecretKey, Amazon.RegionEndpoint.USEast1);
using (s3Client)
{
MemoryStream ms = new MemoryStream();
GetObjectRequest getObjectRequest = new GetObjectRequest();
getObjectRequest.BucketName = BucketName;
getObjectRequest.Key = awsFileKey;
using (var getObjectResponse = s3Client.GetObject(getObjectRequest))
{
getObjectResponse.ResponseStream.CopyTo(ms);
}
var download = new FileContentResult(ms.ToArray(), "image/png"); //"application/pdf"
download.FileDownloadName = ToFilePath;
return download;
}
Stream now has asynchronous methods. In C# 8, you can do this:
public async Task<byte[]> GetAttachmentAsync(string objectPointer)
{
var objReq = new GetObjectRequest
{
BucketName = "bucket-name",
Key = objectPointer, // the file name
};
using var objResp = await _s3Client.GetObjectAsync(objReq);
using var ms = new MemoryStream();
await objResp.ResponseStream.CopyToAsync(ms, _ct); // _ct is a CancellationToken
return ms.ToArray();
}
This won't block any threads while the IO occurs.
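A short usage sketch tying this back to the question, assuming an MVC controller and that _s3Client and _ct are initialized elsewhere:
public async Task<IActionResult> Download(string objectPointer, string filename)
{
    byte[] bytes = await GetAttachmentAsync(objectPointer);
    return new FileContentResult(bytes, "application/pdf")
    {
        FileDownloadName = filename
    };
}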
Can I upload an Excel file into an AWS S3 account? What I have found is that the PutObject method provided in the library can be used to upload a file from a location or from a Stream object.
PutObjectRequest request = new PutObjectRequest()
{
ContentBody = "this is a test",
BucketName = bucketName,
Key = keyName,
InputStream = stream
};
PutObjectResponse response = client.PutObject(request);
Key can be the absolute path on the machine, or we can give the stream of the file. But my question is how we can upload the Excel file using the above method.
P.S.
This is the way I am converting the stream to byte[], but input.ReadByte() is always equal to zero. So my question is: is it not reading the Excel file?
FileStream str = new FileStream(@"C:\case1.xlsx", FileMode.Open);
byte[] arr = ReadFully(str);
public static byte[] ReadFully(FileStream input)
{
long size = 0;
while (input.ReadByte() > 0)
{
size++;
}
byte[] buffer = new byte[size];
//byte[] buffer = new byte[16 * 1024];
using (MemoryStream ms = new MemoryStream())
{
int read;
while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
{
ms.Write(buffer, 0, read);
}
return ms.ToArray();
}
}
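As a side note on the ReadFully helper above: the first loop consumes the stream with ReadByte and stops at the first zero byte, so size undercounts and the stream is already positioned past the start when the second loop runs, which is why the result comes out empty or truncated. A sketch of a simpler version that rewinds the stream and lets MemoryStream do the buffering (assuming .NET 4 or later for Stream.CopyTo):
public static byte[] ReadFully(FileStream input)
{
    // Rewind in case the stream has already been read from.
    input.Position = 0;
    using (MemoryStream ms = new MemoryStream())
    {
        input.CopyTo(ms); // MemoryStream grows as needed; no pre-counting required
        return ms.ToArray();
    }
}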
You should be able to upload any file via the file path or stream. It doesn't matter that it's an Excel file. When you run PutObject, it uploads the actual file data represented by that path or stream.
You can see the MIME types for MS Office formats at Filext. Doing it by file path would probably be easier:
PutObjectRequest request = new PutObjectRequest()
{
BucketName = bucketName,
Key = keyName,
ContentType =
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", // xlsx
FilePath = @"\path\to\myfile.xlsx"
};
PutObjectResponse response = client.PutObject(request);
Or reading from a file stream:
PutObjectRequest request = new PutObjectRequest()
{
BucketName = bucketName,
Key = keyName,
ContentType =
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" // xlsx
};
using (var stream = new FileStream(@"\path\to\myfile.xlsx", FileMode.Open))
{
request.InputStream = stream;
PutObjectResponse response = client.PutObject(request);
}