I have simple code to get the versions of a file from S3, but I am getting the error below. Meanwhile, put and get object requests for the same files work fine.
var getVrRequest = new ListVersionsRequest()
{
BucketName = bucketName,
MaxKeys = 10
};
ListVersionsResponse response;
try
{
response = await client.ListVersionsAsync(getVrRequest);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
throw;
}
ex.Message = "There is no such thing as the ?versions sub-resource for a key"
Any idea what might be causing this error?
Solved it. For put and get object requests I was setting the BucketName property to the whole path starting with the bucket name, and I assumed the same applied here, but for a list versions request it should be only the name of the bucket.
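For completeness, a minimal sketch of what the corrected request might look like (bucket name and key prefix are placeholders); the key path belongs in Prefix rather than BucketName:
var getVrRequest = new ListVersionsRequest
{
    BucketName = "my-bucket",                 // only the bucket name, no path
    Prefix = "folder/subfolder/file.txt",     // placeholder: key (or key prefix) to list versions for
    MaxKeys = 10
};
var response = await client.ListVersionsAsync(getVrRequest);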
I'm working on a small school project where I need to update a file in my GitHub repo.
Everything worked fine until I got an error out of nowhere.
I am using Octokit.net with a C# WPF application. Here is the exception:
Octokit.ApiException: "is at 1ce907108c4582d5a0986d3a37b2777e271a0105 but expected 47fa57debd39ee6a63f24d39e9513f87814a5ed6"
I don't know why this error shows up, because I didn't change anything before the error happened, and now nothing works anymore. Can someone help me with this?
Here is the code:
private static async void UpdateFile(string fileName, string fileContent)
{
var ghClient = new GitHubClient(new ProductHeaderValue(HEADER));
ghClient.Credentials = new Credentials(API_KEY);
// github variables
var owner = OWNER;
var repo = REPO;
var branch = "main";
var targetFile = fileName;
try
{
// try to get the file (and with the file the last commit sha)
var existingFile = await ghClient.Repository.Content.GetAllContentsByRef(owner, repo, targetFile, branch);
// update the file
var updateChangeSet = await ghClient.Repository.Content.UpdateFile(owner, repo, targetFile,
new UpdateFileRequest("API Config Updated", fileContent, existingFile.First().Sha, branch));
}
catch (Octokit.NotFoundException)
{
// if file is not found, create it
var createChangeSet = await ghClient.Repository.Content.CreateFile(owner, repo, targetFile, new CreateFileRequest("API Config Created", fileContent, branch));
}
}
I found the issue after a bit of experimenting.
I updated 3 files at the same time; it turns out Octokit can't handle more than one request at the same time...
If you're stuck on this problem too, just add a delay of ~2 seconds before posting a new request.
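A minimal sketch of that workaround, assuming an UpdateFile(fileName, fileContent) method like the one above (changed to return Task so it can be awaited):
private static async Task UpdateFilesSequentiallyAsync(IEnumerable<(string Name, string Content)> files)
{
    foreach (var file in files)
    {
        await UpdateFile(file.Name, file.Content);   // wait for each update to finish
        await Task.Delay(TimeSpan.FromSeconds(2));   // small pause before the next request
    }
}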
I want to upload large files (500-2000 MB) to Azure Blob Storage, and I am trying to do this with the following code:
private BlobContainerClient containerClient;
public async Task<UploadResultDto> Upload(FileInfo fileInfo, string remotePath)
{
try
{
var blobClient = containerClient.GetBlobClient(remotePath + "/" + fileInfo.Name);
var transferOptions = new StorageTransferOptions
{
MaximumConcurrency = 1,
MaximumTransferSize = 10485760,
InitialTransferSize = 10485760
};
await using var uploadFileStream = File.OpenRead(fileInfo.FullName);
await blobClient.UploadAsync(uploadFileStream, transferOptions: transferOptions);
uploadFileStream.Close();
return new UploadResultDto()
{
UploadSuccessfull = true
};
}
catch (Exception ex)
{
Log.Error(ex,$"Error while uploading File {fileInfo.FullName}");
}
return new UploadResultDto()
{
UploadSuccessfull = false
};
}
I instantly get the following message:
The specified blob or block content is invalid.
RequestId:c5c2d925-701e-0035-7ce0-8691a6000000
Time:2020-09-09T19:33:40.9559646Z
Status: 400 (The specified blob or block content is invalid.)
If I remove the InitialTransferSize from the StorageTransferOptions, I get the following error after some time:
retry failed after 6 tries. (The operation was canceled.)
As far as I understand the new SDK, uploading in chunks, and therefore the whole handling of the block IDs etc., should be done by the SDK. Or am I wrong?
Does anybody know why this is not working? I did not find anything different than this for BlobContainerClient, only for the old CloudBlobContainer.
Update:
Some additional information:
It is a .NET Core 3.1 application which runs as a Windows service using the Topshelf library.
The second part of your question (after you remove the InitialTransferSize from the StorageTransferOptions) is similar to the issue in this question.
You may be able to resolve the issue by setting the timeouts for the blob client as follows:
var blobClientOptions = new BlobClientOptions
{
Transport = new HttpClientTransport(new HttpClient { Timeout = Timeout.InfiniteTimeSpan }),
Retry = { NetworkTimeout = Timeout.InfiniteTimeSpan }
};
InfiniteTimeSpan is probably overkill, but at least it will prove if that was the issue.
Those settings got rid of the "retry failed after 6 tries" error for me and got the upload working once I started using the Azure.Storage.Blobs v12.8.0 package.
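For reference, a minimal sketch of wiring those options into the container client (connection string and container name are placeholders); HttpClientTransport comes from Azure.Core.Pipeline:
var blobClientOptions = new BlobClientOptions
{
    Transport = new HttpClientTransport(new HttpClient { Timeout = Timeout.InfiniteTimeSpan }),
    Retry = { NetworkTimeout = Timeout.InfiniteTimeSpan }
};
var containerClient = new BlobContainerClient(
    "<connection-string>",   // placeholder
    "<container-name>",      // placeholder
    blobClientOptions);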
I created a new console app and tested with your code, which works very well.
1. Confirm that you do not have inconsistencies in assemblies. Remove the earlier version of Azure.Storage.Blobs and update it to the latest version.
2. Why is your containerClient private? You could set it up in the Upload method with the following code:
BlobServiceClient blobServiceClient = new BlobServiceClient(connectionstring);
BlobContainerClient containerClient = blobServiceClient.GetBlobContainerClient("containerName");
var blobClient = containerClient.GetBlobClient(remotePath + "/" + fileInfo.Name);
I was not able to get it working with version 12.6.0...
I downgraded to Microsoft.Azure.Storage.Blob v11 and implemented the upload based on this thread
https://stackoverflow.com/a/58741171/765766
This works fine for me now.
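For anyone following the same route, here is a minimal sketch of a manual block upload with Microsoft.Azure.Storage.Blob v11, roughly along the lines of the linked answer; the connection string, container name and 10 MB block size are placeholders/assumptions:
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

public static async Task UploadInBlocksAsync(string filePath, string blobName)
{
    var account = CloudStorageAccount.Parse("<connection-string>");                        // placeholder
    var container = account.CreateCloudBlobClient().GetContainerReference("<container>");  // placeholder
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    const int blockSize = 10 * 1024 * 1024;   // 10 MB per block (assumption)
    var blockIds = new List<string>();
    var buffer = new byte[blockSize];

    using var fileStream = File.OpenRead(filePath);
    int blockNumber = 0;
    int bytesRead;
    while ((bytesRead = await fileStream.ReadAsync(buffer, 0, blockSize)) > 0)
    {
        // Block IDs must be base64 strings of equal length.
        string blockId = Convert.ToBase64String(BitConverter.GetBytes(blockNumber++));
        blockIds.Add(blockId);
        using var blockData = new MemoryStream(buffer, 0, bytesRead);
        await blob.PutBlockAsync(blockId, blockData, null);   // upload one block
    }

    await blob.PutBlockListAsync(blockIds);   // commit the uploaded blocks as the final blob
}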
I'm trying to determine if a folder exists on my Amazon S3 Bucket and if it doesn't I want to create it.
At the moment I can create the folder using the .NET SDK as follows:
public void CreateFolder(string bucketName, string folderName)
{
var folderKey = folderName + "/"; //end the folder name with "/"
var request = new PutObjectRequest();
request.WithBucketName(bucketName);
request.StorageClass = S3StorageClass.Standard;
request.ServerSideEncryptionMethod = ServerSideEncryptionMethod.None;
//request.CannedACL = S3CannedACL.BucketOwnerFullControl;
request.WithKey(folderKey);
request.WithContentBody(string.Empty);
S3Response response = m_S3Client.PutObject(request);
}
Now when I try to see if the folder exists using this code:
public bool DoesFolderExist(string key, string bucketName)
{
try
{
S3Response response = m_S3Client.GetObjectMetadata(new GetObjectMetadataRequest()
.WithBucketName(bucketName)
.WithKey(key));
return true;
}
catch (Amazon.S3.AmazonS3Exception ex)
{
if (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
return false;
//status wasn't not found, so throw the exception
throw;
}
}
It cannot find the folder. The strange thing is that if I create the folder using the AWS Management Console, the DoesFolderExist method can see it.
I'm not sure if it's an ACL/IAM thing, and I am not sure how to resolve this.
Your code actually works for me, but there are a few things you need to be aware of.
As I understand it, Amazon S3 does not have a concept of folders, but individual clients may display the S3 objects as if they did. So if you create an object called A/B, then the client may display it as if it were an object called B inside a folder called A. This is intuitive and seems to have become a standard, but simulating an empty folder does not appear to have a standard.
For example, I used your method to create a folder called Test, which actually ended up creating an object called Test/. But I created a folder called Test2 in AWS Explorer (i.e. the add-on to Visual Studio) and it ended up creating an object called Test2/Test2_$folder$
(AWS Explorer will display both Test and Test2 as folders.)
One of the things this means is that you don't need to create the 'folder' before you can use it, which may mean that you don't need a DoesFolderExist method.
As I mentioned, I tried your code and it works and finds the Test folder it created, but the key had to be tweaked to find the folder created by AWS Explorer, i.e.
DoesFolderExist("Test/" , bucketName); // Returns true
DoesFolderExist("Test2/" , bucketName); // Returns false
DoesFolderExist("Test2/Test2_$folder$", bucketName); // Returns true
So if you do still want to have a DoesFolderExist method, then it might be safer to just look for any objects that start with folderName + "/", i.e. something like:
ListObjectsRequest request = new ListObjectsRequest();
request.BucketName = bucketName ;
request.WithPrefix(folderName + "/");
request.MaxKeys = 1;
using (ListObjectsResponse response = m_S3Client.ListObjects(request))
{
return (response.S3Objects.Count > 0);
}
I just refactored the above code into an async method using version 2 of the AWS .NET SDK:
public async Task CreateFoldersAsync(string bucketName, string path)
{
path = path.EnsureEndsWith('/'); // EnsureEndsWith is a string extension method (not part of the BCL) that appends '/' if it is missing
IAmazonS3 client =
new AmazonS3Client(YOUR.AccessKeyId, YOUR.SecretAccessKey,
RegionEndpoint.EUWest1);
var findFolderRequest = new ListObjectsV2Request();
findFolderRequest.BucketName = bucketName;
findFolderRequest.Prefix = path;
findFolderRequest.MaxKeys = 1;
ListObjectsV2Response findFolderResponse =
await client.ListObjectsV2Async(findFolderRequest);
if (findFolderResponse.S3Objects.Any())
{
return;
}
PutObjectRequest request = new PutObjectRequest()
{
BucketName = bucketName,
StorageClass = S3StorageClass.Standard,
ServerSideEncryptionMethod = ServerSideEncryptionMethod.None,
Key = path,
ContentBody = string.Empty
};
// add try catch in case you have exceptions shield/handling here
PutObjectResponse response = await client.PutObjectAsync(request);
}
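A possible usage of this helper (bucket name and path are placeholders):
// Ensure the "folder" prefix exists before uploading into it.
await CreateFoldersAsync("my-bucket", "images/40/");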
ListObjectsRequest findFolderRequest = new ListObjectsRequest();
findFolderRequest.BucketName = bucketName;
findFolderRequest.Prefix = path;
ListObjectsResponse findFolderResponse = s3Client.ListObjects(findFolderRequest);
Boolean folderExists = findFolderResponse.S3Objects.Any();
path can be something like "images/40/".
Using the above code, you can check whether a so-called folder such as "images/40/" exists under the bucket.
But the Amazon S3 data model does not have the concept of folders. When you copy an image or file to a certain path, if this so-called folder does not exist, it is created automatically as part of the key name of that file or image. Therefore, you actually do not need to check whether the folder exists.
Very important information from docs.aws.amazon.com : The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
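As a minimal illustration of that point (bucket name and file path are placeholders), uploading an object under a prefixed key is all it takes for the "folder" to show up in clients such as the S3 console:
var putRequest = new PutObjectRequest
{
    BucketName = "my-bucket",            // placeholder
    Key = "images/40/photo.jpg",         // the "images/40/" prefix acts as the folder
    FilePath = @"C:\temp\photo.jpg"      // placeholder local file
};
s3Client.PutObject(putRequest);          // or PutObjectAsync in async code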
I am currently working with S3 and need to share an S3 resource through a URL that has a timeout, so that the client cannot use the URL after a specific amount of time.
I have already used some code provided in the documentation for "Presigned Object URL Using AWS SDK for .NET".
The code will provide a temporary URL which can be used to download an S3 resource by anyone...but within a specific time limit.
I have also used the Amazon S3 Explorer for Visual Studio, but it doesn't support URL generation for resources encrypted with an AWS KMS key.
I also tried deleting the KMS key for the S3 folder, but that throws an error.
If there is a link explaining how to delete KMS keys, please include it in your answers as well.
//Code Start
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
namespace URLDownload
{
public class Class1
{
private const string bucketName = "some-value";
private const string objectKey = "some-value";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USEast1;
private static IAmazonS3 s3Client;
public static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
string urlString = GeneratePreSignedURL();
Console.WriteLine(urlString);
Console.Read();
}
static string GeneratePreSignedURL()
{
string urlString = "";
try
{
//ServerSideEncryptionMethod ssem = new ServerSideEncryptionMethod("AWSKMS");
GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Expires = DateTime.Now.AddMinutes(5),
Verb = HttpVerb.GET, // 0 corresponds to GET
ServerSideEncryptionKeyManagementServiceKeyId = "some-value",
ServerSideEncryptionMethod = ServerSideEncryptionMethod.AWSKMS
};
urlString = s3Client.GetPreSignedURL(request1);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}
return urlString;
}
}
}
SignatureDoesNotMatch
The request signature we calculated does not match the signature you provided. Check your key and signing method.
AKIA347A6YXQ3XM4JQ7A
This is the error I get when I try to access the generated URL, probably because the AWS KMS authentication is having some issue.
I see it's been a couple of years, but did you ever get an answer for this one? One thing your code snippet seems to be missing is the V4 signature flag set to true:
AWSConfigsS3.UseSignatureVersion4 = true;
Sources:
https://aws.amazon.com/blogs/developer/generating-amazon-s3-pre-signed-urls-with-sse-part-1/
https://aws.amazon.com/blogs/developer/generating-amazon-s3-pre-signed-urls-with-sse-kms-part-2/
You also need to make sure you're providing the x-amz-server-side-encryption and x-amz-server-side-encryption-aws-kms-key-id headers on your upload request.
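For reference, a minimal sketch of an upload that sets those headers through the .NET SDK (bucket, key, file path and KMS key ID are placeholders); the SDK translates the two encryption properties below into the x-amz-server-side-encryption and x-amz-server-side-encryption-aws-kms-key-id headers:
var putRequest = new PutObjectRequest
{
    BucketName = "some-bucket",                                        // placeholder
    Key = "some-key",                                                  // placeholder
    FilePath = @"C:\temp\file.bin",                                    // placeholder
    ServerSideEncryptionMethod = ServerSideEncryptionMethod.AWSKMS,
    ServerSideEncryptionKeyManagementServiceKeyId = "some-kms-key-id"  // placeholder
};
s3Client.PutObject(putRequest);   // or PutObjectAsync in async code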
We have a small program to upload some files to SoftLayer through the object storage's REST API.
Every day we upload about 38 files, totalling about 1.32 GB. The file sizes vary between roughly 650 KB and 600 MB.
One of those files is ~582 MB, and every day our program tries three times to upload it, but it has always been unsuccessful. Each upload attempt for this file takes about 30-45 minutes.
The message returned by the API is:
The request was aborted: The request was canceled.
Here is my code to Upload the files:
// Lists the Backup files of the folder
DirectoryInfo dirInfoBkps = new DirectoryInfo(backupFolder);
FileInfo[] arrFiles = dirInfoBkps.GetFiles(backupExtension);
// Performs the authentication in Softlayer, and obtains the Token and URL of Upload
RestHelper softlayerRestAPI = new RestHelper();
softlayerRestAPI.RestHeaders.Add("X-Auth-User", apiSoftlayerUser);
softlayerRestAPI.RestHeaders.Add("X-Auth-Key", apiSoftlayerToken);
softlayerRestAPI.RestHeaders.Add("X-Account-Meta-Temp-Url-Key", apiSoftlayerMetaTempUrlKey);
Dictionary<string, string> dicRespondeHeaders;
SoftlayerModel softlayerModel =
softlayerRestAPI.CallGetRestMethod<SoftlayerModel>(apiSoftlayerUrl, out dicRespondeHeaders);
// Prepares to Upload Files
apiSoftlayerUrl = softlayerModel.storage.@public; // "public" is a C# keyword, so the property needs the @ prefix
apiSoftlayerUrl = apiSoftlayerUrl.Replace("https", "http");
apiSoftlayerXAuthToken = dicRespondeHeaders["X-Storage-Token"];
// Upload each file in the folder
foreach (FileInfo fileInfo in arrFiles)
{
// Creates the Upload URL
string uploadUrl = string.Format("{0}{1}{2}",
apiSoftlayerUrl,
"/Backups_SVN/",
fileInfo.Name);
// Try to make the upload 3-times
int numberOfTries = 0;
Exception lastException = null;
string lastFilename = null;
string mensagem = string.Empty;
while (numberOfTries < 3)
{
try
{
numberOfTries++;
softlayerRestAPI.RestHeaders.Clear();
softlayerRestAPI.RestHeaders.Add("X-Auth-Token", apiSoftlayerXAuthToken);
byte[] arr =
softlayerRestAPI.CallUploadRestMethod(uploadUrl, fileInfo.FullName);
// Upload Successful
break;
}
catch (Exception ex)
{
// Upload failed
lastException = ex;
lastFilename = fileInfo.Name;
Console.WriteLine(ex.Message);
}
}
if (numberOfTries == 3) // All attempts failed
{
// Writes the error log for future reference
}
}
Update
I forgot to include the code for the RestHelper class: https://pastebin.com/hBYjXXJh
@FrankerZ is right - the problem posted here is a duplicate of the other post.
I followed the procedure from the other post and increased the Timeout of my WebRequest object, and the problem was solved.
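A minimal sketch of what that timeout change looks like inside a WebRequest-based upload helper (the RestHelper internals are not shown here, and the 2-hour value is an assumption; requires using System.Net):
// Inside the upload helper: give large uploads enough time to finish.
var request = (HttpWebRequest)WebRequest.Create(uploadUrl);               // uploadUrl built as above
request.Method = "PUT";
request.Timeout = (int)TimeSpan.FromHours(2).TotalMilliseconds;           // default is 100 seconds
request.ReadWriteTimeout = (int)TimeSpan.FromHours(2).TotalMilliseconds;  // applies to the stream writes
request.AllowWriteStreamBuffering = false;   // avoid buffering the whole 582 MB file in memory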
Thanks for the help.