I am trying to read a bucket at storage.googleapis.com, using the Amazon Web Services .Net SDK in C#.
Can anyone provide a working example of an S3 endpoint config set up for Google, using just the auth key/secret pair and a bucket name? Or any other method to get this working?
According to this tutorial this should be a simple matter, but I get all sorts of exceptions when trying to follow the instructions given. Here is an extract of my current attempt - which throws a TrustFailure exception:
The remote certificate is invalid.
AmazonS3Config conf = new AmazonS3Config();
// Set regionEndpoint to null, or else the serviceURL will be ignored
conf.RegionEndpoint = null;
conf.ServiceURL = "https://s3.storage.googleapis.com";
conf.UseHttp = false;
conf.AuthenticationRegion = null;
conf.UseAccelerateEndpoint = false;
conf.UseDualstackEndpoint = false;
AWSCredentials cred = new BasicAWSCredentials("GOOG3LFXXXXXXXXXXXXX", "BQ6VeMXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX");
IAmazonS3 client = new AmazonS3Client(cred, conf);
GetBucketVersioningRequest request = new GetBucketVersioningRequest { BucketName = "hisbucket" };
GetBucketVersioningResponse response = client.GetBucketVersioning(request);
I finally got the .NET SDK to upload to Google Cloud Storage with:
AWSConfigsS3.UseSignatureVersion4 = false;
AmazonS3Config config = new AmazonS3Config();
config.ServiceURL = "https://storage.googleapis.com";
config.SignatureVersion = "2";
AmazonS3Client client = new AmazonS3Client(accessKey, secretKey, config);
var transferUtilityConfig = new TransferUtilityConfig
{
ConcurrentServiceRequests = 1,
MinSizeBeforePartUpload = 6291456000,
};
var fileTransferUtilityRequest = new TransferUtilityUploadRequest
{
BucketName = bucketName,
FilePath = filePath,
PartSize = 6291456000,
Key = keyName,
};
TransferUtility fileTransferUtility = new TransferUtility(client, transferUtilityConfig);
fileTransferUtility.Upload(fileTransferUtilityRequest);
fileTransferUtility.Dispose();
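For the original question of reading from the bucket, a download sketch using the same interoperability config might look like this. The bucket/key names and HMAC credentials are placeholders, and this is untested against a live account:

```csharp
// Sketch only: reading an object back from Google Cloud Storage
// through the S3-compatible endpoint, using the same config as above.
AWSConfigsS3.UseSignatureVersion4 = false;

var config = new AmazonS3Config
{
    ServiceURL = "https://storage.googleapis.com",
    SignatureVersion = "2"
};

using (var client = new AmazonS3Client(accessKey, secretKey, config))
using (var response = client.GetObject(new GetObjectRequest
{
    BucketName = bucketName,
    Key = keyName
}))
using (var reader = new StreamReader(response.ResponseStream))
{
    string contents = reader.ReadToEnd();
}
```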
You need an Amazon S3 service URL, an access key ID, a secret access key, and the bucket name.
var s3Config = new AmazonS3Config
{
ServiceURL = Constants.AmazonS3ServiceUrl,
RegionEndpoint = Amazon.RegionEndpoint.EUWest1
};
string accessKeyId = Constants.AmazonAccessKeyId;
string secretAccessKey = Constants.AmazonSecretAccessKey;
var config = new AwsS3Config { AmazonS3BucketName = Constants.AmazonS3BucketName };
var client = new AmazonS3Client(accessKeyId, secretAccessKey, s3Config);
Then, you should be able to make calls to the Amazon client:
var request = new GetObjectRequest
{
BucketName = _bucketName,
Key = entity.Path
};
var response = _client.GetObjectAsync(request).Result;
The code above works against an actual S3 account, not specifically storage.googleapis.com as in your case. Anyway, I hope this helps and answers your question.
We were using AmazonS3EncryptionClient in our code to interact with an S3 bucket using client-side encryption. But on updating the NuGet package today, I noticed that AmazonS3EncryptionClient has been marked obsolete. It looks like we will need to use AmazonS3EncryptionClientV2 if we want continued updates going forward. I am having an issue while trying to migrate from AmazonS3EncryptionClient to AmazonS3EncryptionClientV2.
In our old code we were using an AmazonS3EncryptionClient constructor that takes RegionEndpoint as a parameter. It looks like the constructors that take RegionEndpoint have been removed in AmazonS3EncryptionClientV2.
Old code that was working to GetObject from S3 bucket.
S3BucketConfiguration _s3BucketConfiguration = provider
.GetService<IOptionsSnapshot<S3BucketConfiguration>>()
.Value;
var credential = new BasicAWSCredentials(
_s3BucketConfiguration.AccessKey, _s3BucketConfiguration.SecurityKey);
RegionEndpoint bucketRegion =
RegionEndpoint.GetBySystemName(_s3BucketConfiguration.Region);
EncryptionMaterials encryptionMaterials = new EncryptionMaterials(_s3BucketConfiguration.KMSKeyId);
var client = new AmazonS3EncryptionClient(credential, bucketRegion, encryptionMaterials);
GetObjectResponse response = await client.GetObjectAsync(new GetObjectRequest
{
BucketName = _s3BucketConfiguration.BucketName,
Key = filePath
});
I cannot pass a RegionEndpoint in to AmazonS3EncryptionClientV2.
My Code so far.
S3BucketConfiguration _s3BucketConfiguration = provider
.GetService<IOptionsSnapshot<S3BucketConfiguration>>()
.Value;
var credential = new BasicAWSCredentials(
_s3BucketConfiguration.AccessKey, _s3BucketConfiguration.SecurityKey);
RegionEndpoint bucketRegion =
RegionEndpoint.GetBySystemName(_s3BucketConfiguration.Region);
var encryptionMaterials = new EncryptionMaterialsV2(
_s3BucketConfiguration.KMSKeyId,
KmsType.KmsContext,
new Dictionary<string, string>()
);
var config = new AmazonS3CryptoConfigurationV2(SecurityProfile.V2AndLegacy);
// If I add this line, AmazonS3EncryptionClientV2 will instantiate, but the GetObject call fails.
// If I do not add this line, it gives the same error while instantiating AmazonS3EncryptionClientV2.
//config.RegionEndpoint = bucketRegion;
var client = new AmazonS3EncryptionClientV2(credential, config, encryptionMaterials);
GetObjectResponse response = client.GetObjectAsync(new GetObjectRequest
{
BucketName = _s3BucketConfiguration.BucketName,
Key = filePath,
}).GetAwaiter().GetResult();
Exception
No RegionEndpoint or ServiceURL configured
I can successfully encrypt with the V1 client and decrypt with the V2 client, as long as I pass the RegionEndpoint.
var configuration = new AmazonS3CryptoConfiguration()
{
RegionEndpoint = RegionEndpoint.USWest2
};
var material = new EncryptionMaterials(KmsKeyId);
var client = new AmazonS3EncryptionClient(configuration, material);
var putObjectResponse = await client.PutObjectAsync(new PutObjectRequest()
{
ContentBody = ContentBody,
BucketName = Bucket,
Key = Key
});
if (putObjectResponse.HttpStatusCode == System.Net.HttpStatusCode.OK)
{
var configurationV2 = new AmazonS3CryptoConfigurationV2(SecurityProfile.V2AndLegacy)
{
RegionEndpoint = RegionEndpoint.USWest2
};
var materialV2 = new EncryptionMaterialsV2(KmsKeyId, KmsType.KmsContext, new Dictionary<string, string>());
var clientV2 = new AmazonS3EncryptionClientV2(configurationV2, materialV2);
var getObjectResponse = await clientV2.GetObjectAsync(new GetObjectRequest()
{
BucketName = Bucket,
Key = Key
});
using (var reader = new StreamReader(getObjectResponse.ResponseStream))
{
Console.WriteLine(reader.ReadToEnd());
}
}
Can you make sure you are using the same RegionEndpoint during the encryption and decryption?
Having some trouble with this one. I can generate a SAS token by following the examples in Microsoft's documentation, but the token fails authentication when used.
string sastoken = "";
BlobServiceClient blobServiceClient = new BlobServiceClient("DefaultEndpointsProtocol=https;AccountName=accountname;AccountKey=accountkey;EndpointSuffix=core.windows.net");
string containerName = containername;
BlobContainerClient containerClient = blobServiceClient.GetBlobContainerClient(containerName);
BlobSasBuilder sasBuilder = new BlobSasBuilder()
{
ExpiresOn = DateTime.UtcNow + (new TimeSpan(24, 0, 0)),
BlobContainerName = containerName,
BlobName = imageData.filename,
Resource = "b"
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);
sastoken = sasBuilder.ToSasQueryParameters(new StorageSharedKeyCredential(containername, credentialkey)).ToString();
UriBuilder fulluri = new UriBuilder()
{
Scheme = "https",
Host = string.Format("{0}.blob.core.windows.net", containername),
Path = string.Format("{0}/{1}", "blobtest", "file.bmp"),
Query = sastoken
};
imageData.url = fulluri.Uri.ToString();
imageData.url returns as: https://accountname.blob.core.windows.net/containername/file.bmp?sv=2019-07-07&se=2020-07-10T14%3A54%3A43Z&sr=b&sp=r&sig=UXvC7SAXqQtsVgfXj6L%2BOIinTMhQj%2F3NH95v%2FLRvM8g%3D
I get an authentication error, but the entire point of SAS tokens is to provide that authentication. I'm sure that I'm missing something here, but haven't found where I'm making a mistake. Most of the information I find relates to the Microsoft.Azure.Storage package rather than the Azure.Storage.Blobs namespace. Any help or advice would be welcome.
Thanks!
I use something like this, using the Microsoft.WindowsAzure.Storage NuGet package:
private Uri GetSasForBlob(CloudBlob blob, DateTime expiry, SharedAccessBlobPermissions permissions = SharedAccessBlobPermissions.None)
{
var offset = TimeSpan.FromMinutes(10);
var policy = new SharedAccessBlobPolicy
{
SharedAccessStartTime = DateTime.UtcNow.Subtract(offset),
SharedAccessExpiryTime = expiry.Add(offset),
Permissions = permissions
};
#pragma warning disable CA5377 // Use Container Level Access Policy
var sas = blob.GetSharedAccessSignature(policy);
#pragma warning restore CA5377 // Use Container Level Access Policy
return new Uri($"{blob.Uri}{sas}");
}
UPDATE using Azure.Storage.Blobs:
// Read these from config:
// var accountName = "accountname";
// var accountKey = "xxxxxxx";
// var blobServiceEndpoint = $"https://{accountName}.blob.core.windows.net";
private Uri GetSasForBlob(string blobname, string containerName, DateTime expiry, BlobAccountSasPermissions permissions = BlobAccountSasPermissions.Read)
{
var offset = TimeSpan.FromMinutes(10);
var credential = new StorageSharedKeyCredential(accountName, accountKey);
var sas = new BlobSasBuilder
{
BlobName = blobname,
BlobContainerName = containerName,
StartsOn = DateTime.UtcNow.Subtract(offset),
ExpiresOn = expiry.Add(offset)
};
sas.SetPermissions(permissions);
UriBuilder sasUri = new UriBuilder($"{blobServiceEndpoint}/{containerName}/{blobname}");
sasUri.Query = sas.ToSasQueryParameters(credential).ToString();
return sasUri.Uri;
}
Reference: https://github.com/Azure/azure-sdk-for-net/blob/42839e7dea6be316024f168ecd08f3134bc57a47/sdk/storage/Azure.Storage.Blobs/samples/Sample02_Auth.cs#L137
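A usage sketch for the Azure.Storage.Blobs version above (the container and blob names are placeholders, and this assumes the SAS grants at least Read permission):

```csharp
// Sketch: hand the SAS URI to a client that holds no account key.
// "mycontainer" and "photo.bmp" are placeholder names.
Uri sasUri = GetSasForBlob("photo.bmp", "mycontainer", DateTime.UtcNow.AddHours(24));

// The URI alone is enough to authenticate the download.
var blobClient = new BlobClient(sasUri);
blobClient.DownloadTo("photo.bmp");
```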
It looks like your generated SAS token and URL are using different values for account name, container name and blob name.
Consider updating the URL generation code to use the same values.
UriBuilder fulluri = new UriBuilder()
{
Scheme = "https",
Host = string.Format("{0}.blob.core.windows.net", accountname),
Path = string.Format("{0}/{1}", containerName, imageData.filename),
Query = sastoken
};
Hope this helps.
I'm trying to create an authentication function for user login, and my idea is to have it expose the "function keys" of the rest of the functions, so the mobile app can grab the keys and start calling the other functions.
Is there a way to do this?
If you want to manage Azure Function keys, you can use the key management API. For more details, please refer to the documentation.
Get function key
GET https://<functionappname>.azurewebsites.net/admin/functions/{functionname}/keys
Create Function key
PUT https://<functionappname>.azurewebsites.net/admin/functions/{functionname}/keys/{keyname}
{
"name": "keyname",
"value" : "keyvalue"
}
The code:
string clientId = "client id";
string secret = "secret key";
string tenant = "tenant id";
var functionName ="functionName";
var webFunctionAppName = "functionApp name";
string resourceGroup = "resource group name";
var credentials = new AzureCredentials(new ServicePrincipalLoginInformation { ClientId = clientId, ClientSecret = secret}, tenant, AzureEnvironment.AzureGlobalCloud);
var azure = Azure
.Configure()
.Authenticate(credentials)
.WithDefaultSubscription();
var webFunctionApp = azure.AppServices.FunctionApps.GetByResourceGroup(resourceGroup, webFunctionAppName);
var ftpUsername = webFunctionApp.GetPublishingProfile().FtpUsername;
var username = ftpUsername.Split('\\').ToList()[1];
var password = webFunctionApp.GetPublishingProfile().FtpPassword;
var base64Auth = Convert.ToBase64String(Encoding.Default.GetBytes($"{username}:{password}"));
var apiUrl = new Uri($"https://{webFunctionAppName}.scm.azurewebsites.net/api");
var siteUrl = new Uri($"https://{webFunctionAppName}.azurewebsites.net");
string JWT;
using (var client = new HttpClient())
{
client.DefaultRequestHeaders.Add("Authorization", $"Basic {base64Auth}");
var result = client.GetAsync($"{apiUrl}/functions/admin/token").Result;
JWT = result.Content.ReadAsStringAsync().Result.Trim('"'); // get a JWT for calling the function key API
}
// get key
using (var client = new HttpClient())
{
client.DefaultRequestHeaders.Add("Authorization", "Bearer " + JWT);
var result = await client.GetAsync($"{siteUrl}/admin/functions/{functionName}/keys");
var key = await result.Content.ReadAsStringAsync();
}
// create key
var map = new Dictionary<string, string>();
map.Add("name", "keyName");
map.Add("value", "keyValue");
using (var client = new HttpClient()) {
client.DefaultRequestHeaders.Add("Authorization", "Bearer " + JWT);
var content = new StringContent(JsonConvert.SerializeObject(map), System.Text.Encoding.UTF8, "application/json");
await client.PutAsync($"{siteUrl}/admin/functions/{functionName}/keys/{map["name"]}", content);
}
Besides, according to my research, we can also use the Azure REST API to manage Azure Function keys. For more details, please refer to:
a. Create Azure function key
b. List Azure function key
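For reference, a hedged sketch of calling that REST API with HttpClient follows. The resource path segments and the api-version are placeholders to verify against the REST API reference, and armAccessToken stands for a bearer token acquired for https://management.azure.com (e.g. via a service principal):

```csharp
// Sketch: list function keys via the Azure Resource Manager REST API.
// All {placeholders} and the api-version below are assumptions to verify.
var url = "https://management.azure.com/subscriptions/{subscriptionId}" +
          "/resourceGroups/{resourceGroup}/providers/Microsoft.Web" +
          "/sites/{functionAppName}/functions/{functionName}/listkeys" +
          "?api-version=2022-03-01";

using (var client = new HttpClient())
{
    // armAccessToken: a bearer token for https://management.azure.com
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + armAccessToken);

    // listkeys is a POST with an empty body
    var response = await client.PostAsync(url, new StringContent(string.Empty));
    string keysJson = await response.Content.ReadAsStringAsync();
}
```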
I need to get the keys through code, not through the portal. While searching for how to do this on Google, I found a REST API.
This is the link to the Azure key management API, but to use it we need to authenticate first.
We have to develop all of this using C# only.
Regarding the issue, please refer to the following code.
// Install the Microsoft.Azure.Management.ResourceManager.Fluent and Microsoft.Azure.Management.Fluent NuGet packages
string clientId = "client id";
string secret = "secret key";
string tenant = "tenant id";
var functionName ="functionName";
var webFunctionAppName = "functionApp name";
string resourceGroup = "resource group name";
var credentials = new AzureCredentials(new ServicePrincipalLoginInformation { ClientId = clientId, ClientSecret = secret}, tenant, AzureEnvironment.AzureGlobalCloud);
var azure = Azure
.Configure()
.Authenticate(credentials)
.WithDefaultSubscription();
var webFunctionApp = azure.AppServices.FunctionApps.GetByResourceGroup(resourceGroup, webFunctionAppName);
var ftpUsername = webFunctionApp.GetPublishingProfile().FtpUsername;
var username = ftpUsername.Split('\\').ToList()[1];
var password = webFunctionApp.GetPublishingProfile().FtpPassword;
var base64Auth = Convert.ToBase64String(Encoding.Default.GetBytes($"{username}:{password}"));
var apiUrl = new Uri($"https://{webFunctionAppName}.scm.azurewebsites.net/api");
var siteUrl = new Uri($"https://{webFunctionAppName}.azurewebsites.net");
string JWT;
using (var client = new HttpClient())
{
client.DefaultRequestHeaders.Add("Authorization", $"Basic {base64Auth}");
var result = client.GetAsync($"{apiUrl}/functions/admin/token").Result;
JWT = result.Content.ReadAsStringAsync().Result.Trim('"'); // get a JWT for calling the function key API
}
using (var client = new HttpClient())
{
client.DefaultRequestHeaders.Add("Authorization", "Bearer " + JWT);
var key = client.GetAsync($"{siteUrl}/admin/functions/{functionName}/keys").Result.Content.ReadAsStringAsync().Result;
}
Besides, you can also refer to the documentation.
I've read the S3 documentation several times and I'm adding metadata to an S3 object with this code...
PutObjectRequest titledRequest = new PutObjectRequest();
titledRequest.WithTimeout(3600000)
.WithMetaData("outputfolder", outputFolder)
.WithBucketName(AWS_BUCKET_NAME)
.WithKey(objectKey)
.WithAutoCloseStream(true)
.WithInputStream(fs);
When reading the object from the S3 bucket I'm using this code....
string outputFolder = response.Metadata["x-amz-meta-outputfolder"];
But I'm getting an empty string every time even though the outputFolder variable definitely has a value.
Am I doing something really silly wrong here? As far as I can tell this is consistent with the documentation
string outputFolder = response.Metadata["outputfolder"];
will do.
Use this instead of reading the metadata from the PutObject response:
GetObjectMetadataRequest request = new GetObjectMetadataRequest()
.WithKey("Key")
.WithBucketName("BucketName");
GetObjectMetadataResponse response = s3Client.GetObjectMetadata(request);
// then read whichever property you need, e.g. response.Metadata["outputfolder"]
Hope this may help.
Code verified to work:
// upload & add outputfolder to metadata
var S3Client = new AmazonS3Client();
var Request = new PutObjectRequest {
BucketName = bucketname, Key = S3Name, FilePath = Filepath };
Request.Metadata.Add("outputfolder", @"C:\test");
PutObjectResponse Response = S3Client.PutObject(Request);
// download and retrieve metadata
var S3Client = new AmazonS3Client();
var Request = new GetObjectRequest { BucketName = bucketname, Key = S3Name };
GetObjectResponse Response = S3Client.GetObject(Request);
// this works
string outputFolder = Response.Metadata["x-amz-meta-outputfolder"];
// so does this - no need for the x-amz-meta- prefix
outputFolder = Response.Metadata["outputfolder"];