Delete a folder from Amazon S3 using the API - C#

I am trying to delete all the files inside a folder whose name is a date.
Suppose there are 100 files under the folder "08-10-2015"; instead of sending all 100 file names, I want to send just the folder name.
I am trying the code below and it is not working for me.
DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest();
multiObjectDeleteRequest.BucketName = bucketName;
multiObjectDeleteRequest.AddKey(keyName + "/" + folderName + "/");
AmazonS3Config S3Config = new AmazonS3Config()
{
    ServiceURL = servicehost
};
using (IAmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(accesskey, secretkey, S3Config))
{
    try
    {
        DeleteObjectsResponse response = client.DeleteObjects(multiObjectDeleteRequest);
        Console.WriteLine("Successfully deleted all the {0} items", response.DeletedObjects.Count);
    }
    catch (DeleteObjectsException e)
    {
        // Process exception.
    }
}
The above code simply does not work for me.

I think you can delete the entire folder using the following code:
AmazonS3Config cfg = new AmazonS3Config();
cfg.RegionEndpoint = Amazon.RegionEndpoint.EUCentral1;
string bucketName = "your bucket name";
AmazonS3Client s3Client = new AmazonS3Client("your access key", "your secret key", cfg);
S3DirectoryInfo directoryToDelete = new S3DirectoryInfo(s3Client, bucketName, "your folder name or full folder key");
directoryToDelete.Delete(true); // true deletes recursively, including everything inside the folder
S3DirectoryInfo lives in the Amazon.S3.IO namespace. I am using AWSSDK.Core and AWSSDK.S3 version 3.1.0.0 for .NET 3.5.
I hope it helps.

S3 has no real folders; keys merely share a prefix, so deleting the key "folderName/" only removes the zero-byte placeholder object, if one even exists. You have to:
1. List all the objects under the folder's prefix
2. Retrieve the key of each object
3. Add each key to a multi-object delete request
4. Send the request to delete all of those objects
Note that a single DeleteObjects request accepts at most 1,000 keys; the next answer shows a paged implementation that respects this limit.
AmazonS3Config S3Config = new AmazonS3Config()
{
    ServiceURL = "s3.amazonaws.com",
    CommunicationProtocol = Amazon.S3.Model.Protocol.HTTP, // property from the older SDK versions
};
const string AWS_ACCESS_KEY = "xxxxxxxxxxxxxxxx";
const string AWS_SECRET_KEY = "yyyyyyyyyyyyyyyy";
AmazonS3Client client = new AmazonS3Client(AWS_ACCESS_KEY, AWS_SECRET_KEY, S3Config);
// List everything under the prefix (the "folder").
ListObjectsRequest request = new ListObjectsRequest
{
    BucketName = "yourbucketname",
    Prefix = "yourprefix"
};
ListObjectsResponse response = await client.ListObjectsAsync(request); // requires an async method
// Add every returned key to one multi-object delete request.
DeleteObjectsRequest request2 = new DeleteObjectsRequest();
request2.BucketName = "yourbucketname";
foreach (S3Object entry in response.S3Objects)
{
    request2.AddKey(entry.Key);
}
DeleteObjectsResponse response2 = await client.DeleteObjectsAsync(request2);

I'm not sure why they didn't keep this method in future SDKs, but, for those interested, here is the implementation of the S3DirectoryInfo.Delete method:
ListObjectsRequest listObjectsRequest = new ListObjectsRequest
{
    BucketName = bucket,
    Prefix = directoryPrefix
};
DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest
{
    BucketName = bucket
};
ListObjectsResponse listObjectsResponse = null;
do
{
    listObjectsResponse = s3Client.ListObjects(listObjectsRequest);
    foreach (S3Object item in listObjectsResponse.S3Objects.OrderBy((S3Object x) => x.Key))
    {
        deleteObjectsRequest.AddKey(item.Key);
        if (deleteObjectsRequest.Objects.Count == 1000)
        {
            // DeleteObjects accepts at most 1,000 keys per request, so flush in batches.
            s3Client.DeleteObjects(deleteObjectsRequest);
            deleteObjectsRequest.Objects.Clear();
        }
        listObjectsRequest.Marker = item.Key; // resume the listing after the last key seen
    }
}
while (listObjectsResponse.IsTruncated);
if (deleteObjectsRequest.Objects.Count > 0)
{
    s3Client.DeleteObjects(deleteObjectsRequest);
}
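If you are on one of the SDK builds that exposes only the async APIs (for example the netstandard targets, where the synchronous ListObjects/DeleteObjects calls above are unavailable), the same paged delete can be written with ListObjectsV2. This is a minimal sketch under that assumption; DeletePrefixAsync, bucket, and prefix are placeholder names, and error handling is omitted:
using System.Linq;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;
static async Task DeletePrefixAsync(IAmazonS3 s3, string bucket, string prefix)
{
    var listRequest = new ListObjectsV2Request { BucketName = bucket, Prefix = prefix };
    ListObjectsV2Response listResponse;
    do
    {
        listResponse = await s3.ListObjectsV2Async(listRequest);
        if (listResponse.S3Objects.Count > 0)
        {
            await s3.DeleteObjectsAsync(new DeleteObjectsRequest
            {
                BucketName = bucket,
                // A listing page holds at most 1,000 keys, which matches the
                // DeleteObjects per-request limit, so each page is one delete call.
                Objects = listResponse.S3Objects.Select(o => new KeyVersion { Key = o.Key }).ToList()
            });
        }
        listRequest.ContinuationToken = listResponse.NextContinuationToken;
    } while (listResponse.IsTruncated == true); // "== true" also tolerates SDK builds where IsTruncated is nullable
}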

Related

Graph API request content Read/Write timeout throwing InvalidOperationException

The code below creates a new folder in SharePoint and then copies a PowerPoint file into that newly created folder. I am making two requests: one to get the file itself and another to get the file contents for the copy. When executing the following line
var tester = await graphClient.Sites[sharepoint.Id].Drive.Root.ItemWithPath("FolderPath/Template.pptx").Content.Request().GetAsync();
the copy into the newly created folder does not seem to work. Examining tester (of type System.IO.MemoryStream) in the debugger, I see an InvalidOperationException on its ReadTimeout and WriteTimeout properties.
Any assistance would be great.
public async Task<string> Sharepoint_FolderCreate(string NewFolderName, string sharepoint_folder_path = "/SomeFolderPath")
{
    var item = new DriveItem
    {
        Name = NewFolderName.Replace("?", " ").Replace("/", " ").Replace("\\", " ").Replace("<", " ").Replace(">", " ").Replace("*", " ").Replace("\"", " ").Replace(":", " ").Replace("|", " "),
        Folder = new Folder { },
        AdditionalData = new Dictionary<string, object>()
        {
            {"#microsoft.graph.conflictBehavior","rename"}
        }
    };
    var scopes = new[] { "https://graph.microsoft.com/.default" };
    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };
    // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
    var clientSecretCredential = new ClientSecretCredential(
        tenantID, clientId, clientSecret, options);
    var graphClient = new GraphServiceClient(clientSecretCredential, scopes);
    var sharepoint = await graphClient.Sites.GetByPath("/sites/SiteFolder", "localhost.sharepoint.com").Request().GetAsync();
    await graphClient.Sites[sharepoint.Id].Drive.Root.ItemWithPath(sharepoint_folder_path).Children.Request().AddAsync(item);
    var NewFolder = await graphClient.Sites[sharepoint.Id].Drive.Root.ItemWithPath($"{sharepoint_folder_path}/{item.Name}").Request().GetAsync();
    return NewFolder.WebUrl;
}
Thank you to the above comments. (The ReadTimeout/WriteTimeout InvalidOperationExceptions seen in the debugger are benign: MemoryStream does not support timeouts, so evaluating those properties always throws.) My goal was achieved with the following code, which creates a new folder and then copies a PowerPoint stored elsewhere in the SharePoint into the newly created folder using a server-side Copy instead of downloading the content stream.
public async Task<string> Sharepoint_FolderCreate(string NewFolderName, string sharepoint_folder_path = "/FolderPath")
{
    var item = new DriveItem
    {
        Name = NewFolderName.Replace("?", " ").Replace("/", " ").Replace("\\", " ").Replace("<", " ").Replace(">", " ").Replace("*", " ").Replace("\"", " ").Replace(":", " ").Replace("|", " ").Trim(),
        Folder = new Folder { },
        AdditionalData = new Dictionary<string, object>()
        {
            {"#microsoft.graph.conflictBehavior","rename"}
        }
    };
    var scopes = new[] { "https://graph.microsoft.com/.default" };
    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };
    var clientSecretCredential = new ClientSecretCredential(tenantID, clientId, clientSecret, options);
    var graphClient = new GraphServiceClient(clientSecretCredential, scopes);
    var sharepoint = await graphClient.Sites.GetByPath("/sites/Folder", "localhost.sharepoint.com").Request().GetAsync();
    await graphClient.Sites[sharepoint.Id].Drive.Root.ItemWithPath(sharepoint_folder_path).Children.Request().AddAsync(item);
    var NewFolder = await graphClient.Sites[sharepoint.Id].Drive.Root.ItemWithPath($"{sharepoint_folder_path}/{item.Name}").Request().GetAsync();
    var parentReference = new ItemReference
    {
        DriveId = NewFolder.ParentReference.DriveId,
        Id = NewFolder.Id
    };
    // Server-side copy of the template into the new folder; no content download needed.
    await graphClient.Sites[sharepoint.Id].Drive.Root.ItemWithPath("/FolderPath/Template.pptx").Copy($"{item.Name}.pptx", parentReference).Request().PostAsync();
    return NewFolder.WebUrl;
}
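A hypothetical call site for the method above (the folder name and path are placeholders):
var folderUrl = await Sharepoint_FolderCreate("Quarterly Review", "/FolderPath");
Console.WriteLine($"New folder with copied template: {folderUrl}");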

AWS S3 - problem moving a folder with all of its files

I am trying to move data, for example:
Source = "Uploads/Photos/" to Destination = "Uploads/mytest/"
I am getting an error, but when I give a specific file it works.
Basically, I want to move a folder with all of its files.
My code is below:
public async Task<MoveResponse> MoveObject(MoveRequest moveRequest)
{
    MoveResponse moveResponse = new MoveResponse();
    CopyObjectRequest copyObjectRequest = new CopyObjectRequest
    {
        SourceBucket = moveRequest.BucketName,
        DestinationBucket = moveRequest.BucketName + "/" + moveRequest.Destination,
        SourceKey = moveRequest.Source,
        DestinationKey = moveRequest.Source,
    };
    var response1 = await client.CopyObjectAsync(copyObjectRequest).ConfigureAwait(false);
    if (response1.HttpStatusCode != System.Net.HttpStatusCode.OK)
    {
        moveResponse.IsError = true;
        moveResponse.ErrorMessage = "Files could not moved to destination!";
        return moveResponse;
    }
    return moveResponse;
}
I hope you are using the high-level S3 API. Check out this sample code:
private void uploadFolderToolStripMenuItem_Click(object sender, EventArgs e)
{
    string directoryPath = textBoxBasePath.Text + listBoxFolder.SelectedItem.ToString().Replace("[", "").Replace("]", "");
    string bucketName = comboBoxBucketNames.Text;
    string FolderName = listBoxFolder.SelectedItem.ToString().Replace("[", "").Replace("]", "");
    try
    {
        TransferUtility directoryTransferUtility = new TransferUtility(new AmazonS3Client(AwsAccessKeyID, AwsSecretAccessKey, RegionEndpoint.USEast1));
        TransferUtilityUploadDirectoryRequest request = new TransferUtilityUploadDirectoryRequest
        {
            BucketName = bucketName,
            KeyPrefix = FolderName,
            StorageClass = S3StorageClass.StandardInfrequentAccess,
            ServerSideEncryptionMethod = ServerSideEncryptionMethod.AES256,
            Directory = directoryPath,
            SearchOption = SearchOption.AllDirectories,
            SearchPattern = "*.*",
            CannedACL = S3CannedACL.AuthenticatedRead
        };
        ListMultipartUploadsRequest req1 = new ListMultipartUploadsRequest
        {
            BucketName = bucketName // (req1 is unused in this sample)
        };
        var t = Task.Factory.FromAsync(directoryTransferUtility.BeginUploadDirectory, directoryTransferUtility.EndUploadDirectory, request, null);
        t.Wait();
        MessageBox.Show(string.Format("The Directory '{0}' is successfully uploaded", FolderName));
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    { }
}
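Note that the sample above uploads a local directory; it does not perform the move the question asks about. S3 has no native move or rename, so moving a "folder" means listing the source prefix, copying each object, and deleting the original. A minimal sketch, assuming source and destination live in the same bucket (MovePrefixAsync and all names are placeholders):
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;
static async Task MovePrefixAsync(IAmazonS3 s3, string bucket, string sourcePrefix, string destPrefix)
{
    var listRequest = new ListObjectsV2Request { BucketName = bucket, Prefix = sourcePrefix };
    ListObjectsV2Response listResponse;
    do
    {
        listResponse = await s3.ListObjectsV2Async(listRequest);
        foreach (var obj in listResponse.S3Objects)
        {
            // Rewrite "Uploads/Photos/x.jpg" to "Uploads/mytest/x.jpg".
            var destKey = destPrefix + obj.Key.Substring(sourcePrefix.Length);
            await s3.CopyObjectAsync(new CopyObjectRequest
            {
                SourceBucket = bucket,
                SourceKey = obj.Key,
                DestinationBucket = bucket, // the bucket name only; the path goes in DestinationKey
                DestinationKey = destKey
            });
            await s3.DeleteObjectAsync(bucket, obj.Key);
        }
        listRequest.ContinuationToken = listResponse.NextContinuationToken;
    } while (listResponse.IsTruncated == true);
}
This also points at the likely bug in the question's code: DestinationBucket must be just the bucket name, with the destination path expressed through DestinationKey.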

AndroidPublisherService - Play Developer API Client - upload of aab fails due to bad credentials

I am trying to make use of the AndroidPublisherService from the Play Developer API Client.
I can list active tracks and the releases in those tracks, but when I try to upload a new build there seems to be no way of attaching the authentication already used to read data.
I've authenticated using var googleCredentials = GoogleCredential.FromStream(keyDataStream).CreateWithUser(serviceUsername); where serviceUsername is the email of my service account.
private static void Execute(string packageName, string aabfile, string credfile, string serviceUsername)
{
    var credentialsFilename = credfile;
    if (string.IsNullOrWhiteSpace(credentialsFilename))
    {
        // Check env. var
        credentialsFilename =
            Environment.GetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS",
                EnvironmentVariableTarget.Process);
    }
    Console.WriteLine($"Using credentials {credfile} with package {packageName} for aab file {aabfile}");
    var keyDataStream = File.OpenRead(credentialsFilename);
    var googleCredentials = GoogleCredential.FromStream(keyDataStream)
        .CreateWithUser(serviceUsername);
    var credentials = googleCredentials.UnderlyingCredential as ServiceAccountCredential;
    var service = new AndroidPublisherService();
    var edit = service.Edits.Insert(new AppEdit { ExpiryTimeSeconds = "3600" }, packageName);
    edit.Credential = credentials;
    var activeEditSession = edit.Execute();
    Console.WriteLine($"Edits started with id {activeEditSession.Id}");
    var tracksList = service.Edits.Tracks.List(packageName, activeEditSession.Id);
    tracksList.Credential = credentials;
    var tracksResponse = tracksList.Execute();
    foreach (var track in tracksResponse.Tracks)
    {
        Console.WriteLine($"Track: {track.TrackValue}");
        Console.WriteLine("Releases: ");
        foreach (var rel in track.Releases)
            Console.WriteLine($"{rel.Name} version: {rel.VersionCodes.FirstOrDefault()} - Status: {rel.Status}");
    }
    using var fileStream = File.OpenRead(aabfile);
    var upload = service.Edits.Bundles.Upload(packageName, activeEditSession.Id, fileStream, "application/octet-stream");
    var uploadProgress = upload.Upload();
    if (uploadProgress == null || uploadProgress.Exception != null)
    {
        Console.WriteLine($"Failed to upload. Error: {uploadProgress?.Exception}");
        return;
    }
    Console.WriteLine($"Upload {uploadProgress.Status}");
    var tracksUpdate = service.Edits.Tracks.Update(new Track
    {
        Releases = new List<TrackRelease>(new[]
        {
            new TrackRelease
            {
                Name = "Roswell - Grenis Dev Test",
                Status = "completed",
                VersionCodes = new List<long?>(new[] {(long?) upload?.ResponseBody?.VersionCode})
            }
        })
    }, packageName, activeEditSession.Id, "internal");
    tracksUpdate.Credential = credentials;
    var trackResult = tracksUpdate.Execute();
    Console.WriteLine($"Track {trackResult?.TrackValue}");
    var commitResult = service.Edits.Commit(packageName, activeEditSession.Id);
    Console.WriteLine($"{commitResult.EditId} has been committed");
}
As the code shows, the action objects, such as tracksList, can be given the credentials generated from the service account via their .Credential property.
BUT the actual upload action, var upload = service.Edits.Bundles.Upload(packageName, activeEditSession.Id, fileStream, "application/octet-stream");, does not expose a .Credential property, and it always fails with:
The service androidpublisher has thrown an exception: Google.GoogleApiException: Google.Apis.Requests.RequestError
Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project. [401]
Errors [
Message[Login Required.] Location[Authorization - header] Reason[required] Domain[global]
]
at Google.Apis.Upload.ResumableUpload`1.InitiateSessionAsync(CancellationToken cancellationToken)
at Google.Apis.Upload.ResumableUpload.UploadAsync(CancellationToken cancellationToken)
So, how would I go about providing the actual Upload action with the given credentials here?
I managed to figure this out during the day. I was missing a call to CreateScoped() when creating the GoogleCredential object, as well as a call to InitiateSessionAsync() on the upload object.
var googleCredentials = GoogleCredential.FromStream(keyDataStream)
    .CreateWithUser(serviceUsername)
    .CreateScoped(AndroidPublisherService.Scope.Androidpublisher);
Once that was done, I could get a valid OAuth token by calling:
var credentials = googleCredentials.UnderlyingCredential as ServiceAccountCredential;
var oauthToken = credentials?.GetAccessTokenForRequestAsync(AndroidPublisherService.Scope.Androidpublisher).Result;
I can now use that OAuth token in the upload request:
upload.OauthToken = oauthToken;
_ = await upload.InitiateSessionAsync();
var uploadProgress = await upload.UploadAsync();
if (uploadProgress == null || uploadProgress.Exception != null)
{
    Console.WriteLine($"Failed to upload. Error: {uploadProgress?.Exception}");
    return;
}
The full example for successfully uploading a new .aab file to the Google Play Store internal test track thus looks something like this:
private async Task UploadGooglePlayRelease(string fileToUpload, string changeLogFile, string serviceUsername, string packageName)
{
    var serviceAccountFile = ResolveServiceAccountCertificateInfoFile();
    if (!serviceAccountFile.Exists)
        throw new ApplicationException($"Failed to find the service account certificate file. {serviceAccountFile.FullName}");
    var keyDataStream = File.OpenRead(serviceAccountFile.FullName);
    var googleCredentials = GoogleCredential.FromStream(keyDataStream)
        .CreateWithUser(serviceUsername)
        .CreateScoped(AndroidPublisherService.Scope.Androidpublisher);
    var credentials = googleCredentials.UnderlyingCredential as ServiceAccountCredential;
    var oauthToken = credentials?.GetAccessTokenForRequestAsync(AndroidPublisherService.Scope.Androidpublisher).Result;
    var service = new AndroidPublisherService();
    var edit = service.Edits.Insert(new AppEdit { ExpiryTimeSeconds = "3600" }, packageName);
    edit.Credential = credentials;
    var activeEditSession = await edit.ExecuteAsync();
    _logger.LogInformation($"Edits started with id {activeEditSession.Id}");
    var tracksList = service.Edits.Tracks.List(packageName, activeEditSession.Id);
    tracksList.Credential = credentials;
    var tracksResponse = await tracksList.ExecuteAsync();
    foreach (var track in tracksResponse.Tracks)
    {
        _logger.LogInformation($"Track: {track.TrackValue}");
        _logger.LogInformation("Releases: ");
        foreach (var rel in track.Releases)
            _logger.LogInformation($"{rel.Name} version: {rel.VersionCodes.FirstOrDefault()} - Status: {rel.Status}");
    }
    var fileStream = File.OpenRead(fileToUpload);
    var upload = service.Edits.Bundles.Upload(packageName, activeEditSession.Id, fileStream, "application/octet-stream");
    upload.OauthToken = oauthToken;
    _ = await upload.InitiateSessionAsync();
    var uploadProgress = await upload.UploadAsync();
    if (uploadProgress == null || uploadProgress.Exception != null)
    {
        Console.WriteLine($"Failed to upload. Error: {uploadProgress?.Exception}");
        return;
    }
    _logger.LogInformation($"Upload {uploadProgress.Status}");
    var releaseNotes = await File.ReadAllTextAsync(changeLogFile);
    var tracksUpdate = service.Edits.Tracks.Update(new Track
    {
        Releases = new List<TrackRelease>(new[]
        {
            new TrackRelease
            {
                Name = $"{upload?.ResponseBody?.VersionCode}",
                Status = "completed",
                InAppUpdatePriority = 5,
                CountryTargeting = new CountryTargeting { IncludeRestOfWorld = true },
                ReleaseNotes = new List<LocalizedText>(new []{ new LocalizedText { Language = "en-US", Text = releaseNotes } }),
                VersionCodes = new List<long?>(new[] {(long?) upload?.ResponseBody?.VersionCode})
            }
        })
    }, packageName, activeEditSession.Id, "internal");
    tracksUpdate.Credential = credentials;
    var trackResult = await tracksUpdate.ExecuteAsync();
    _logger.LogInformation($"Track {trackResult?.TrackValue}");
    var commitResult = service.Edits.Commit(packageName, activeEditSession.Id);
    commitResult.Credential = credentials;
    await commitResult.ExecuteAsync();
    _logger.LogInformation($"{commitResult.EditId} has been committed");
}
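A hypothetical call site for the full example (all arguments are placeholders):
await UploadGooglePlayRelease(
    "build/app-release.aab",
    "build/release-notes.txt",
    "ci-service@your-project.iam.gserviceaccount.com",
    "com.example.app");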

Amazon S3 MultiPartUpload C# not working

I've been struggling for hours with this problem; I hope you can help me. Somehow my compiler will only accept InitiateMultipartUploadAsync, not the regular InitiateMultipartUpload, and it absolutely requires a callback parameter to compile, but I can't figure out what callback function to give it.
private static async Task UploadObjectAsync()
{
    // Create list to store upload part responses.
    List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
    // Setup information required to initiate the multipart upload.
    InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
    {
        BucketName = "XXXXXXXXX",
        Key = "videos/multipart"
    };
    // Initiate the upload.
    InitiateMultipartUploadResponse initResponse =
        await S3Client.InitiateMultipartUploadAsync(initiateRequest);
    // Upload parts.
    long contentLength = new FileInfo("videotest").Length;
    long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
    try
    {
        Console.WriteLine("Uploading parts");
        long filePosition = 0;
        for (int i = 1; filePosition < contentLength; i++)
        {
            UploadPartRequest uploadRequest = new UploadPartRequest
            {
                BucketName = "XXXXXXXX",
                Key = "videos/multipart",
                UploadId = initResponse.UploadId,
                PartNumber = i,
                PartSize = partSize,
                FilePosition = filePosition,
                FilePath = "videotest"
            };
            // Track upload progress.
            uploadRequest.StreamTransferProgress +=
                new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);
            // Upload a part and add the response to our list.
            uploadResponses.Add(await S3Client.UploadPartAsync(uploadRequest));
            filePosition += partSize;
        }
        // Setup to complete the upload.
        CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
        {
            BucketName = "XXXXXXXXXXX",
            Key = "videos/multipart",
            UploadId = initResponse.UploadId
        };
        completeRequest.AddPartETags(uploadResponses);
        // Complete the upload.
        CompleteMultipartUploadResponse completeUploadResponse =
            await S3Client.CompleteMultipartUploadAsync(completeRequest);
    }
    catch (Exception exception)
    {
        Console.WriteLine("An AmazonS3Exception was thrown: {0}", exception.Message);
        // Abort the upload.
        AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
        {
            BucketName = "XXXXXXXXX",
            Key = "videos/multipart",
            UploadId = initResponse.UploadId
        };
        await S3Client.AbortMultipartUploadAsync(abortMPURequest);
    }
}
public static void UploadPartProgressEventCallback(object sender, StreamTransferProgressArgs e)
{
    // Process event.
    Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
This code was inspired by the official AWS example: https://docs.aws.amazon.com/AmazonS3/latest/dev/LLuploadFileDotNet.html so I really don't understand why this doesn't work! With the code above, Visual Studio tells me that InitiateMultipartUploadAsync, UploadPartAsync, CompleteMultipartUploadAsync, and AbortMultipartUploadAsync all require a callback function, but 1) the examples say the callback is optional and 2) every callback I tried doesn't work. Thanks in advance.
In case someone else has the same bug, I changed the code completely and used this solution: https://github.com/aws/aws-sdk-net/issues/562
You can initiate it like this:
s3Client = new AmazonS3Client(BUCKETREGION);
var initiateRequest = new InitiateMultipartUploadRequest { BucketName = BUCKET, Key = KEY };
await s3Client.InitiateMultipartUploadAsync(initiateRequest).ContinueWith(response =>
{
    if (!string.IsNullOrEmpty(response.Result.UploadId))
    {
        uploadId = response.Result.UploadId;
    }
    else
    {
        Debug.WriteLine(response.Exception);
    }
});
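As context: async overloads that force a callback parameter usually mean the project is compiling against one of the platform-specific SDK builds (for example, the PCL/Unity-style targets expose *Async methods that take a callback instead of returning a Task). That is an assumption about the asker's setup, not something the question confirms. Against the regular .NET builds of AWSSDK.S3, the calls from the official example can simply be awaited; a minimal sketch (StartUploadAsync and the names are placeholders):
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;
static async Task<string> StartUploadAsync(IAmazonS3 s3, string bucket, string key)
{
    // On the standard builds, the *Async methods take just the request
    // (plus an optional CancellationToken) and return a Task.
    var response = await s3.InitiateMultipartUploadAsync(
        new InitiateMultipartUploadRequest { BucketName = bucket, Key = key });
    return response.UploadId; // feed this id to each subsequent UploadPartAsync call
}
For whole-file uploads, TransferUtility.UploadAsync(filePath, bucketName) also handles the multipart mechanics automatically.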

Remove "Everyone" from aws s3 object ACL's using .NET API

I'm looping through all object ACLs in a bucket to remove the "Everyone" permissions from each of them. The idea is to retain all other current permissions.
My issue is that the PutACL call doesn't work. In the example below, a new AccessControlList is created, omitting the "Everyone" entries. The PutACL call returns successfully, but the object's ACL is unchanged.
Perhaps there is an easier way to identify and remove specific grants.
AmazonS3Client s3 = new AmazonS3Client();
GetACLRequest aclRequest = new GetACLRequest() { BucketName = "my-bucket", Key = "/dir/protect_me.txt" };
var aclResponse = s3.GetACL(aclRequest);
bool foundEveryonePriv = false; // set if at least one "Everyone" grant is found
S3AccessControlList newAcl = new S3AccessControlList();
foreach (var grant in aclResponse.AccessControlList.Grants)
{
    bool grantToEveryone = string.Compare(grant.Grantee.URI, "http://acs.amazonaws.com/groups/global/AllUsers") == 0;
    Logger.log.InfoFormat("{0},{1},{2},{3}", aclRequest.BucketName, aclRequest.Key, grant.Permission, (grantToEveryone ? "EVERYONE" : string.Empty));
    if (grantToEveryone)
    {
        foundEveryonePriv = true;
    }
    else
    {
        newAcl.AddGrant(grant.Grantee, grant.Permission); // keep every non-public grant
    }
}
// Modify the item if necessary.
if (foundEveryonePriv)
{
    newAcl.Owner = aclResponse.AccessControlList.Owner;
    var response = s3.PutACL(new PutACLRequest() { AccessControlList = newAcl, BucketName = aclRequest.BucketName, Key = aclRequest.Key });
}
Try modifying the existing ACL from the GET to remove the public grant, then send the modified ACL back in a PUT request. Here's what I did; it works well, retaining the original grants while removing the public grant from a given object.
private void RemovePublicAcl(AmazonS3Client client, string bucket, string key)
{
    var aclRequest = new GetACLRequest { BucketName = bucket, Key = key };
    var aclResponse = client.GetACL(aclRequest);
    var acl = aclResponse.AccessControlList;
    const string PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers";
    if (acl.Grants.Any(x =>
        !string.IsNullOrWhiteSpace(x.Grantee.URI) &&
        x.Grantee.URI.Equals(PUBLIC_GRANTEE)))
    {
        // Strip every grant addressed to the public group, keep the rest untouched.
        acl.Grants.RemoveAll(x =>
            !string.IsNullOrWhiteSpace(x.Grantee.URI) &&
            x.Grantee.URI.Equals(PUBLIC_GRANTEE));
        var aclUpdate = new PutACLRequest();
        aclUpdate.BucketName = bucket;
        aclUpdate.Key = key;
        aclUpdate.AccessControlList = acl;
        var response = client.PutACL(aclUpdate);
    }
}
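To sweep an entire bucket, the helper can be applied to every key from a paged listing. A minimal sketch, assuming the synchronous .NET Framework build of the SDK to match the GetACL/PutACL calls above (bucket and client are placeholders):
var listRequest = new ListObjectsV2Request { BucketName = bucket };
ListObjectsV2Response page;
do
{
    page = client.ListObjectsV2(listRequest);
    foreach (var obj in page.S3Objects)
        RemovePublicAcl(client, bucket, obj.Key);
    listRequest.ContinuationToken = page.NextContinuationToken;
} while (page.IsTruncated == true);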
