I have a WCF service that uploads files to an Amazon S3 server. After a successful upload, I need to delete the file from my local path, but when I try to delete it I get an error: "The process cannot access the file because it is being used by another process." Sharing my code snippet below:
var putRequest = new PutObjectRequest
{
    BucketName = System.Configuration.ConfigurationManager.AppSettings["S3Bucket"].ToString(),
    Key = keyName,
    FilePath = path,
    ContentType = "application/pdf"
};
client = new AmazonS3Client(bucketRegion);
PutObjectResponse response = await client.PutObjectAsync(putRequest);
putRequest = null;
client.Dispose();
File.Delete(path);
If anyone knows about this issue, please let me know.
There might be a timing issue here, so you might want to try closing the stream explicitly.
Do note, I am not sure; if I am mistaken I'll remove this, but it was too long for a comment.
using (var fileStream = File.OpenRead(path))
{
    var putRequest = new PutObjectRequest
    {
        BucketName = System.Configuration.ConfigurationManager.AppSettings["S3Bucket"].ToString(),
        Key = keyName,
        InputStream = fileStream,
        ContentType = "application/pdf",
        AutoCloseStream = false,
    };
    using (var c = new AmazonS3Client(bucketRegion))
    {
        PutObjectResponse response = await c.PutObjectAsync(putRequest);
    }
} // fileStream should be closed here; if not, call fileStream.Close()
File.Delete(path);
More info on the properties: https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Model_PutObjectRequest.htm
Related
I am trying to upload a file to Amazon S3 but get the error "Cannot access a closed stream" in await client.PutObjectAsync(request);
using (var stream = new MemoryStream())
{
    using (var sWriter = new StreamWriter(stream, Encoding.UTF8))
    {
        await sWriter.WriteAsync(commandWithMetadata.SerializeToString());
        stream.Seek(0, SeekOrigin.Begin);
        var fileName = GetFileName(command);
        var request = new PutObjectRequest
        {
            BucketName = BucketName,
            Key = fileName,
            InputStream = stream
        };
        await client.PutObjectAsync(request);
    }
}
There is an AutoCloseStream property on the request, which is true by default, so the Amazon library closes the stream automatically once the upload completes.
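A minimal sketch of one way to apply this, based on the question's own snippet: flush the writer, rewind the stream, and set AutoCloseStream = false so disposal is left to the using blocks:
using (var stream = new MemoryStream())
using (var sWriter = new StreamWriter(stream, Encoding.UTF8))
{
    await sWriter.WriteAsync(commandWithMetadata.SerializeToString());
    await sWriter.FlushAsync();       // push buffered text into the MemoryStream
    stream.Seek(0, SeekOrigin.Begin); // rewind so the SDK reads from the start
    var request = new PutObjectRequest
    {
        BucketName = BucketName,
        Key = GetFileName(command),
        InputStream = stream,
        AutoCloseStream = false       // stop the SDK from closing the stream itself
    };
    await client.PutObjectAsync(request);
}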
I want to open a FileStream from a SharePoint file (Microsoft.SharePoint.Client.File), but I can't seem to find out how.
I only have access to Microsoft.SharePoint.Client, because the Microsoft.SharePoint package can't be installed due to some errors.
This is the code I have so far:
ClientContext ctx = new ClientContext("https://factionxyz0.sharepoint.com/sites/faktion-devs");
ctx.Credentials = CredentialCache.DefaultCredentials;
Microsoft.SharePoint.Client.File temp = ctx.Web.GetFileByServerRelativeUrl(filePath);
FileStream fs = new FileStream(???);
You can only create a System.IO.FileStream if the file exists on a physical disk (or is mapped to a disk via the operating system).
Workaround: are you able to access the raw URL of the file? If so, download the file to disk (if the size is appropriate) and then read it from there.
For example:
var httpClient = new HttpClient();
// HTTP GET request
var response = await httpClient.GetAsync(... SharePoint URL ...);
// Get the content stream
var stream = await response.Content.ReadAsStreamAsync();
// Create a temporary file
var tempFile = Path.GetTempFileName();
using (var fs = File.OpenWrite(tempFile))
{
    await stream.CopyToAsync(fs);
}
// tempFile now contains your file locally; you can access it like
var fileStream = File.OpenRead(tempFile);
// Make sure you close the stream and delete the temporary file after using it
fileStream.Dispose();
File.Delete(tempFile);
A FileStream must map to a file. The following code demonstrates how to get a stream via CSOM; we can then convert it to a FileStream by using a temp file.
ResourcePath filepath = ResourcePath.FromDecodedUrl(filename);
Microsoft.SharePoint.Client.File temp = context.Web.GetFileByServerRelativePath(filepath);
ClientResult<System.IO.Stream> crstream = temp.OpenBinaryStream();
context.Load(temp);
context.ExecuteQuery();
var tempFile = Path.GetTempFileName();
using (FileStream fs = System.IO.File.OpenWrite(tempFile))
{
    if (crstream.Value != null)
    {
        crstream.Value.CopyTo(fs);
    }
}
As for Azure Functions temp storage, you may take a look at the following thread:
Azure Functions Temp storage
Or you can store the data in Azure Storage:
Upload data to blob storage with Azure Functions
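If this runs inside an Azure Function, a minimal sketch of staging the stream in the function's local temp storage (reusing crstream from the snippet above, and assuming the file fits on the local disk) could be:
// Stage the CSOM stream in local temp storage (non-durable inside the sandbox).
var tempFile = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
using (var fs = System.IO.File.Create(tempFile))
{
    crstream.Value.CopyTo(fs); // crstream as obtained via OpenBinaryStream above
}
// ... work with tempFile, then clean up ...
System.IO.File.Delete(tempFile);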
It's been a while since the question was asked; however, this is how I solved it while working on a project. Obviously, passing in the credentials directly like this isn't the best way, but due to timing constraints I was not able to convert this project to a newer version of .NET and use Azure AD.
Note that the class is implementing an interface.
public void SetServer(string domainName) {
    if (string.IsNullOrEmpty(domainName)) throw new Exception("Invalid domain name. Name cannot be null");
    _server = domainName.Trim('/').Trim('\\');
}

private string MapPath(string urlPath) {
    var url = string.Join("/", _server, urlPath);
    return url.Trim('/');
}

public ISharePointDocument GetDocument(string path, string fileName) {
    var serverPath = MapPath(path);
    var filePath = string.Join("/", serverPath, TemplateLibrary, fileName).Trim('/');
    var document = new SharePointDocument();
    var data = GetClientStream(path, fileName);
    using (var memoryStream = new MemoryStream()) {
        if (data == null) return document;
        data.CopyTo(memoryStream);
        var byteArray = memoryStream.ToArray();
        document = new SharePointDocument {
            FullPath = filePath,
            Bytes = byteArray
        };
    }
    return document;
}

public Stream GetClientStream(string path, string fileName) {
    var serverPath = MapPath(path);
    var filePath = string.Join("/", serverPath, TemplateLibrary, fileName).Trim('/');
    var context = GetClientContext(serverPath);
    var web = context.Web;
    context.Load(web);
    context.ExecuteQuery();
    var file = web.GetListItem(filePath).File;
    var data = file.OpenBinaryStream();
    context.Load(file);
    context.ExecuteQuery();
    return data.Value;
}

private static ClientContext GetClientContext(string serverPath) {
    var context = new ClientContext(serverPath) {
        Credentials = new SharePointOnlineCredentials("example@example.com", GetPassword())
    };
    return context;
}

private static SecureString GetPassword() {
    const string password = "XYZ";
    var securePassword = new SecureString();
    foreach (var c in password.ToCharArray()) securePassword.AppendChar(c);
    return securePassword;
}
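For completeness, a hypothetical call site for this class could look like the following; SharePointDocumentService is an assumed name for the concrete class, and TemplateLibrary is defined elsewhere in the project:
// Hypothetical usage of the service above.
var service = new SharePointDocumentService(); // assumed concrete class name
service.SetServer("https://example.sharepoint.com/sites/mysite");
var document = service.GetDocument("Shared Documents", "Template.docx");
System.IO.File.WriteAllBytes(@"C:\temp\Template.docx", document.Bytes);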
My images upload fine locally, but when I deploy the Lambda it gives a broken image (note: the image is uploaded, but its size increases). I have added the binary media type in API Gateway, but I'm still not getting the right results. The interesting thing is that when I uploaded a text file it was perfect on the bucket, but not images.
public async Task<S3Response> ImageUpload(IFormFile file)
{
    string bucket_name = "your_bucket";
    var client = new AmazonS3Client("***", "****", RegionEndpoint.USEast1);
    var stream = new System.IO.MemoryStream();
    file.CopyTo(stream);
    var request = new PutObjectRequest
    {
        Key = file.FileName,
        BucketName = bucket_name,
        InputStream = stream,
        //ContentType = "application/octet-stream",
        ContentType = file.ContentType,
        CannedACL = S3CannedACL.PublicRead
    };
    response = await client.PutObjectAsync(request);
}
I am saving the image as a base64 string in the S3 bucket and converting it back from the base64 string to the original image on the client side. If someone has a better solution, kindly add it to the thread.
byte[] byteArray = Encoding.UTF8.GetBytes(file.Filebase64);
stream = new MemoryStream(byteArray);
var request = new PutObjectRequest
{
    Key = file.File_name,
    BucketName = bucket_name,
    InputStream = stream,
    ContentType = "text/plain",
    CannedACL = S3CannedACL.PublicRead
};
where the image file model class looks like this:
public class ImageModel
{
    public String File_name { set; get; }
    public String Filebase64 { set; get; }
}
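On the client side, the decode is just the reverse of the upload snippet; a rough sketch, where objectBytes is assumed to hold the downloaded S3 object body:
// The object stores the base64 text as UTF-8, so decode in two steps.
string base64 = Encoding.UTF8.GetString(objectBytes);     // objectBytes: body read from S3
byte[] imageBytes = Convert.FromBase64String(base64);
System.IO.File.WriteAllBytes(file.File_name, imageBytes); // restore the original image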
I am attempting to download a list of files from URLs stored in my database and then upload them to my Azure File Storage account. I am successfully downloading the files and can turn them into files in my local storage or convert them to text and upload them. However, I lose data when converting something like a PDF to text, and I do not want to store the files on the Azure app this endpoint is hosted on, as I do not need to manipulate the files in any way.
I have attempted to upload the files from the Stream I get from the HttpContent object, using the UploadFromStream method on the CloudFile. Whenever this command is run, I get an InvalidOperationException with the message "Operation is not valid due to the current state of the object."
I've tried converting the original Stream to a MemoryStream as well, but this just writes a blank file to the File Storage account, even if I set the position to the beginning of the MemoryStream. My code is below; if anyone could point out what information I am missing to make this work, I would appreciate it.
public DownloadFileResponse DownloadFile(FileLink fileLink)
{
    string fileName = string.Format("{0}{1}{2}", fileLink.ExpectedFileName, ".", fileLink.ExpectedFileType);
    HttpStatusCode status;
    string hash = "";
    using (var client = new HttpClient())
    {
        client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
        client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
        var request = new HttpRequestMessage(HttpMethod.Get, fileLink.ExpectedURL);
        var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
        var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
        status = response.StatusCode;
        if (status == HttpStatusCode.OK)
        {
            var httpStream = response.Content.ReadAsStreamAsync().Result;
            fileStorage.WriteFile(fileLink.ExpectedFileType, fileName, httpStream);
            hash = HashGenerator.GetMD5HashFromStream(httpStream);
        }
    }
    return new DownloadFileResponse(status, fileName, hash);
}

public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
    var options = SetOptions();
    var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
    newFile.UploadFromStream(fileStream, options: options);
}

public FileRequestOptions SetOptions()
{
    FileRequestOptions options = new FileRequestOptions();
    options.ServerTimeout = TimeSpan.FromSeconds(10);
    options.RetryPolicy = new NoRetry();
    return options;
}

public CloudFile GetTargetCloudFile(string targetDirectory, string targetFilePath)
{
    if (!shareConnector.share.Exists())
    {
        throw new Exception("Cannot access Azure File Storage share");
    }
    CloudFileDirectory rootDirectory = shareConnector.share.GetRootDirectoryReference();
    CloudFileDirectory directory = rootDirectory.GetDirectoryReference(targetDirectory);
    if (!directory.Exists())
    {
        throw new Exception("Target Directory does not exist");
    }
    CloudFile newFile = directory.GetFileReference(targetFilePath);
    return newFile;
}
I had the same problem; the only way it worked for me was to read the incoming stream (in your case, httpStream in the DownloadFile(FileLink fileLink) method) into a byte array and use UploadFromByteArray(byte[] buffer, int index, int count) instead of UploadFromStream.
So your WriteFile method will look like:
public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
    var options = SetOptions();
    var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
    const int bufferLength = 600; // buffer to read from the stream; this size is just an example
    byte[] buffer = new byte[bufferLength];
    List<byte> byteArrayFile = new List<byte>(); // all of your file will end up here
    int count = 0;
    try
    {
        while ((count = fileStream.Read(buffer, 0, bufferLength)) > 0)
        {
            byteArrayFile.AddRange(buffer.Take(count)); // only the bytes actually read
        }
        fileStream.Close();
    }
    catch (Exception ex)
    {
        throw; // you need to change this
    }
    newFile.UploadFromByteArray(byteArrayFile.ToArray(), 0, byteArrayFile.Count);
}
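Note the buffer.Take(count) in the loop above (it needs a using System.Linq; directive): without it, the final partial read would also append stale bytes left over at the end of the buffer and corrupt the uploaded file.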
According to your description and code, I suggest you use Stream.CopyTo to copy the stream to a local MemoryStream first, then upload the MemoryStream to Azure File Storage.
For more details, you could refer to the code below.
I just changed the DownloadFile method to test it.
HttpStatusCode status;
using (var client = new HttpClient())
{
    client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
    // client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
    // here I use my blob file to test it
    var request = new HttpRequestMessage(HttpMethod.Get, "https://xxxxxxxxxx.blob.core.windows.net/media/secondblobtest-eypt.txt");
    var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
    var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
    status = response.StatusCode;
    if (status == HttpStatusCode.OK)
    {
        MemoryStream ms = new MemoryStream();
        var httpStream = response.Content.ReadAsStreamAsync().Result;
        httpStream.CopyTo(ms);
        ms.Position = 0;
        WriteFile("aaa", "testaa", ms);
        // hash = HashGenerator.GetMD5HashFromStream(httpStream);
    }
}
I had a similar problem and found out that the UploadFromStream method only works with buffered streams. Nevertheless, I was able to successfully upload files to Azure Storage by using a MemoryStream. I don't think this is a very good solution, as you use up memory resources by copying the content of the file stream into memory before handing it to the Azure stream. What I came up with is a way of writing directly to an Azure stream, by using the OpenWriteAsync method to create the stream and then a simple CopyToAsync from the source stream.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("YourAzureStorageConnectionString");
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference("YourShareName");
CloudFileDirectory root = share.GetRootDirectoryReference();
CloudFile file = root.GetFileReference("TheFileName");
using (CloudFileStream fileWriteStream = await file.OpenWriteAsync(fileMetadata.FileSize, new AccessCondition(),
    new FileRequestOptions { StoreFileContentMD5 = true },
    new OperationContext()))
{
    await fileContent.CopyToAsync(fileWriteStream, 128 * 1024);
}
I'm trying to upload an image to my bucket, but I can't because I get this error:
An exception of type 'Amazon.Runtime.AmazonServiceException' occurred in mscorlib.dll but was not handled in user code
Detail
{"Encountered a non retryable WebException : RequestCanceled"}
More Detail
{"The request was aborted: The request was canceled."}
Inner Exception
The stream was already consumed. It cannot be read again.
I'm using the AWS SDK for VS2013.
Code
private const string ExistingBucketName = "******development"; // name of the bucket
private const string KeyName = "Images";

public static void UploadToS3(string filePath)
{
    // filePath -> C:\example.jpg
    try
    {
        var fileTransferUtility = new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));

        // 1. Upload a file; the file name is used as the object key name.
        fileTransferUtility.Upload(filePath, ExistingBucketName);
        Trace.WriteLine("Upload 1 completed");

        // 2. Specify the object key name explicitly.
        fileTransferUtility.Upload(filePath, ExistingBucketName, KeyName);
        Trace.WriteLine("Upload 2 completed");

        // 3. Upload data from a System.IO.Stream.
        using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
        {
            fileTransferUtility.Upload(fileToUpload, ExistingBucketName, KeyName);
        }
        Trace.WriteLine("Upload 3 completed");

        // 4. Specify advanced settings/options.
        var fileTransferUtilityRequest = new TransferUtilityUploadRequest
        {
            BucketName = ExistingBucketName,
            FilePath = filePath,
            StorageClass = S3StorageClass.ReducedRedundancy,
            PartSize = 6291456, // 6 MB
            Key = KeyName,
            CannedACL = S3CannedACL.PublicRead
        };
        fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
        fileTransferUtilityRequest.Metadata.Add("param2", "Value2");
        fileTransferUtility.Upload(fileTransferUtilityRequest);
        Trace.WriteLine("Upload 4 completed");
    }
    catch (AmazonS3Exception s3Exception)
    {
        Trace.WriteLine(s3Exception.Message);
        Trace.WriteLine(s3Exception.InnerException);
    }
}
My error was in:
var fileTransferUtility = new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));
I was using an incorrect region. I changed it to Europe and it works:
var fileTransferUtility = new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.EUWest1));
By doing this:
using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
    fileTransferUtility.Upload(fileToUpload, ExistingBucketName, KeyName);
}
you are disposing your FileStream before it gets uploaded. Try removing the using clause, or wrap the entire call in it (see the sketch below).
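A minimal sketch of the second option, reusing the question's identifiers (filePath, ExistingBucketName, KeyName) and the corrected region from the accepted fix:
// Wrap the whole transfer in the using block, so the stream is only
// disposed after Upload has returned.
using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
    var fileTransferUtility = new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.EUWest1));
    fileTransferUtility.Upload(fileToUpload, ExistingBucketName, KeyName);
}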
Use it as follows:
TransferUtility fileTransferUtility = new TransferUtility(new AmazonS3Client("Access Key Id", "Access Secret Key", Amazon.RegionEndpoint.USEast1));