I have localstack (https://github.com/localstack/localstack) running and am able to use the aws s3 cli to upload files to it.
What I want to be able to do is use the .NET AWS SDK with localstack. I'd like the following code to upload a file into localstack:
using (var tfu = new TransferUtility())
{
await tfu.UploadAsync(new TransferUtilityUploadRequest
{
Key = key,
BucketName = bucketName,
ContentType = document.ContentType,
Headers = { ["Content-Disposition"] = "attachment; filename=\"test.txt\"" },
InputStream = stream
});
}
My problem is that I don't know how to set the endpoints so that the SDK uses localstack rather than AWS. Apparently you can set AWSEndpointDefinition in appSettings.config, as mentioned in the AWS SDK documentation, e.g.:
<add key="AWSEndpointDefinition" value="C:\Dev\localstack\endpoints.json"/>
However I have no idea what this endpoints.json config should look like. I tried using this file:
https://raw.githubusercontent.com/aws/aws-sdk-net/master/sdk/src/Core/endpoints.json
When I do this, as soon as I new up a TransferUtility class I get a null reference exception - this is before I point anything to my localstack setup.
The version of the AWS SDK is 3.3.0.
Another thing to note: in some places the documentation implies that the config should be an XML file rather than JSON. However, when I try to use an XML file instead, I get a different exception when newing up TransferUtility: 'Invalid character '<' in input string'.
You can easily override the endpoint by creating an S3 client and passing it to the TransferUtility constructor.
var config = new AmazonS3Config { ServiceURL = "http://localhost:4572" };
var s3client = new AmazonS3Client(config);
Do not forget to change the URL if your localstack is using a different port for S3.
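Putting it together, a minimal sketch might look like this (port 4572 and ForcePathStyle = true are assumptions based on a typical default localstack setup; adjust to yours):
// Minimal sketch: point the SDK at localstack and hand the client to TransferUtility.
var config = new AmazonS3Config
{
    ServiceURL = "http://localhost:4572",
    ForcePathStyle = true // localstack generally needs path-style bucket addressing
};
var s3Client = new AmazonS3Client(config);
using (var tfu = new TransferUtility(s3Client))
{
    await tfu.UploadAsync(new TransferUtilityUploadRequest
    {
        Key = key,
        BucketName = bucketName,
        InputStream = stream
    });
}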
Hope this helps.
I upload to a Google Cloud Storage bucket via the storage C# API (Google.Cloud.Storage.V1). These are public files accessed by client pages.
Problem:
the files are served with "private, max-age=0".
Question:
I would like to set custom cache headers instead, either while or after uploading the files, via the API itself. Is it possible to send the cache header or other metadata via the C# Google Storage API call?
I am also curious: since I have not set any cache header, why does Google Storage serve these files with max-age=0, instead of not sending any cache header at all?
You can set the cache control when you call UploadObject, if you specify an Object instead of just the bucket name and object name. Here's an example:
var client = StorageClient.Create();
var obj = new Google.Apis.Storage.v1.Data.Object
{
Bucket = bucketId,
Name = objectName,
CacheControl = "public,max-age=3600"
};
var stream = new MemoryStream(Encoding.UTF8.GetBytes("Hello world"));
client.UploadObject(obj, stream);
You can do it after the fact as well using PatchObject:
var patch = new Google.Apis.Storage.v1.Data.Object
{
Bucket = bucketId,
Name = objectName,
CacheControl = "public,max-age=7200"
};
client.PatchObject(patch);
I don't know about the details of cache control if you haven't specified anything though, I'm afraid.
I have developed an Android game which successfully gets a ServerAuthCode from the Google Play API. I want to send this ServerAuthCode to my custom game server, which I have written in C#, and validate it to authenticate the player.
There is documentation by Google available for Java (see the part "Exchange the server auth code for an access token on the server"): https://developers.google.com/games/services/android/offline-access
Unfortunately I cannot adapt this to C#.
I have the client_secret.json, which seems to include all the API authentication data, and I have the ServerAuthCode (which seems to be a token).
There is also a NuGet package available for C#, but it does not contain all the classes from the above documentation: https://www.nuget.org/packages/Google.Apis.AndroidPublisher.v3/
How can I validate the token? I would also welcome a simple Postman example.
I figured it out by trial and error. One important thing to note is that the server auth code expires quickly. If you are debugging and copy & pasting it by hand, it may already have expired by the time you run the code. In that case the Google API returns an "invalid_grant" error, which I found misleading.
In my example solution you need to have a file "client_secret.json" in your project, which is copied on build to the output directory (file properties -> "Build Action" = "Content", "Copy to Output Directory" = "Copy always").
You get your client_secret.json file from the Google API console (https://console.developers.google.com/apis/credentials?project=, click on the download icon on the right side of your project, under "OAuth 2.0 client IDs").
Important: The redirect url must match the redirect url configured in your project. For me, it was just empty, so just use an empty string.
using Google.Apis.Auth.OAuth2;
using Google.Apis.Auth.OAuth2.Requests;
using System;
using System.IO;
using System.Reflection;

namespace GoogleApiTest
{
    // Source: https://developers.google.com/identity/sign-in/android/offline-access
    class Program
    {
        static void Main(string[] args)
        {
            var authCode = "YOUR_FRESH_SERVER_AUTH_CODE";
            var path = Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), @"client_secret.json");

            // Load the client ID and secret from the downloaded client_secret.json.
            GoogleClientSecrets clientSecrets;
            using (var secretStream = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                clientSecrets = GoogleClientSecrets.Load(secretStream);
            }

            // Exchange the server auth code for tokens at Google's token endpoint.
            var request = new AuthorizationCodeTokenRequest()
            {
                ClientId = clientSecrets.Secrets.ClientId,
                ClientSecret = clientSecrets.Secrets.ClientSecret,
                RedirectUri = "",
                Code = authCode,
                GrantType = "authorization_code"
            };

            var tokenResponse = request.ExecuteAsync(
                new System.Net.Http.HttpClient(),
                "https://www.googleapis.com/oauth2/v4/token",
                new System.Threading.CancellationToken(),
                Google.Apis.Util.SystemClock.Default).GetAwaiter().GetResult();

            // tokenResponse.AccessToken (and, depending on scopes, tokenResponse.IdToken) now hold the exchanged tokens.
            Console.ReadLine();
        }
    }
}
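Regarding the Postman part of the question: the ExecuteAsync call above is just a form-encoded POST to Google's token endpoint, so the same exchange can be reproduced by hand. A rough sketch with plain HttpClient (the field values are placeholders):
// Sketch of the same token exchange done manually - the same fields go into an
// x-www-form-urlencoded body in Postman against https://www.googleapis.com/oauth2/v4/token.
using (var http = new System.Net.Http.HttpClient())
{
    var body = new System.Net.Http.FormUrlEncodedContent(new System.Collections.Generic.Dictionary<string, string>
    {
        ["code"] = "YOUR_FRESH_SERVER_AUTH_CODE",
        ["client_id"] = "YOUR_CLIENT_ID",
        ["client_secret"] = "YOUR_CLIENT_SECRET",
        ["redirect_uri"] = "",
        ["grant_type"] = "authorization_code"
    });
    var response = http.PostAsync("https://www.googleapis.com/oauth2/v4/token", body).GetAwaiter().GetResult();
    Console.WriteLine(response.Content.ReadAsStringAsync().GetAwaiter().GetResult());
}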
I have a blob in a container called 'a' at 'b/123?/1.xml' and I'm having trouble deleting it via the cloud blob client.
string blobAddressUri = "b/123%3f/1.xml";
var cloudBlobContainer = csa.CreateCloudBlobClient().GetContainerReference("ndrdata");
var blobToDelete = cloudBlobContainer.GetBlobReference(HttpUtility.UrlEncode(blobAddressUri));
blobToDelete.Delete();
This is the code I've tried, with different variations: using ? vs %3f, and with and without UrlEncoding the string.
I can access the file if I generate a SAS uri through CloudBerry and then replace the '?' with %3f.
Thanks for any help.
What version of the Storage Client library are you using? I used version 1.7.0 with the following code against development storage and it worked fine for me.
var storage = CloudStorageAccount.DevelopmentStorageAccount;
string blobAddressUri = "b/123?/MainWindow.xaml";
var cloudBlobContainer = storage.CreateCloudBlobClient().GetContainerReference("abc");
var blobToDelete = cloudBlobContainer.GetBlobReference(blobAddressUri);
blobToDelete.Delete();
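The key difference from the code in the question is that the blob name is passed as-is, without UrlEncode. If you want to see what the encoded variant actually points at, comparing the URIs the client builds is a quick diagnostic (a sketch; exact URI formatting may differ between client versions):
// Diagnostic sketch: HttpUtility.UrlEncode also encodes the '/' characters,
// so the pre-encoded string names a completely different blob.
var rawReference = cloudBlobContainer.GetBlobReference("b/123?/1.xml");
var encodedReference = cloudBlobContainer.GetBlobReference(HttpUtility.UrlEncode("b/123?/1.xml"));
Console.WriteLine(rawReference.Uri);     // the blob you actually want to delete
Console.WriteLine(encodedReference.Uri); // a blob literally named "b%2f123%3f%2f1.xml"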
I am trying to do some simple file IO using amazon S3 and C#.
So far I have been able to create files and list them. I am the bucket owner and I should have full access. In CloudBerry I can create and delete files in the bucket. In my code when I try to delete a file I get an access denied exception.
This is my test method:
[Test]
public void TestThatFilesCanBeCreatedAndDeleted()
{
const string testFile = "test.txt";
var awsS3Helper = new AwsS3Helper();
awsS3Helper.AddFileToBucketRoot(testFile);
var testList = awsS3Helper.ListItemsInBucketRoot();
Assert.True(testList.ContainsKey(testFile)); // This test passes
awsS3Helper.DeleteFileFromBucket(testFile); // Access denied exception here
testList = awsS3Helper.ListItemsInBucketRoot();
Assert.False(testList.ContainsKey(testFile));
}
My method to add a file:
var request = new PutObjectRequest();
request.WithBucketName(bucketName);
request.WithKey(fileName);
request.WithContentBody("");
S3Response response = client.PutObject(request);
response.Dispose();
My method to delete a file:
var request = new DeleteObjectRequest()
{
BucketName = bucketName,
Key = fileKey
};
S3Response response = client.DeleteObject(request);
response.Dispose();
After running the code the file is visible in CloudBerry and I can delete it from there.
I have very little experience with Amazon S3, so I don't know what could be going wrong. Should I be putting some kind of permissions onto any files I create or upload? Why would I be able to delete a file while logged in to CloudBerry with the same credentials provided to my program?
I'm not sure what the source of the problem is. Possibly security rules, but maybe something very simple in your bucket configuration. You can check them using the S3 Organizer Firefox plugin, the AWS management console, or any other management tool. I also recommend request-response logging - that has helped me a lot in similar investigations. The AWSSDK ships with plenty of good logging examples, so you only need to copy-paste them and everything works. Once you have the actual requests being sent to Amazon, you can compare them with the documentation. Also check the AccessKeyId used for your delete request.
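For the credentials check, one thing that can help is constructing the client with the access key and secret passed in explicitly, so nothing different is silently picked up from app.config or the environment (a sketch against the 1.x-style API used in the question; the key values are placeholders):
// Sketch: force the client to use exactly the credentials you verified in CloudBerry.
var explicitClient = Amazon.AWSClientFactory.CreateAmazonS3Client("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY");
var deleteRequest = new DeleteObjectRequest
{
    BucketName = bucketName,
    Key = fileKey
};
using (S3Response deleteResponse = explicitClient.DeleteObject(deleteRequest))
{
    // If this still throws AccessDenied, look at the bucket policy / IAM user rather than which credentials get loaded.
}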
I am using AWS.Net to upload user content (images) and display them on my site. This is what my code looks like currently for the upload:
using (client = Amazon.AWSClientFactory.CreateAmazonS3Client())
{
var putObjectRequest = new PutObjectRequest
{
BucketName = bucketName,
InputStream = fileStream,
Key = fileName,
CannedACL = S3CannedACL.PublicRead,
//MD5Digest = md5Base64,
//GenerateMD5Digest = true,
Timeout = 3600000 //1 Hour
};
S3Response response = client.PutObject(putObjectRequest);
response.Dispose();
}
What's the best way to store the path to these files? Is there a way to get a link to my file from the response?
Currently I just have a URL in my webconfig like https://s3.amazonaws.com/<MyBucketName>/ and then when I need to show an image I take that string and use the key from the object I store in the db that represents the file uploaded.
Is there a better way to do this?
None of the examples that come with it really address this kind of usage. The documentation isn't online either, and I don't know how to get to it after installing the SDK, despite following the directions on Amazon.
Your approach of storing paths including your bucket names in web.config is the same thing that I do, which works great. I then just store the relative paths in various database tables.
The nice thing about this approach is that it makes it easier to migrate to different storage mechanisms or CDNs such as CloudFront. I don't think that there's a better way than this approach because S3 files reside on a different domain, or subdomain if you do CNAME mapping, and thus your .NET runtime does not run under the same domain or subdomain.
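In code, that approach is just concatenating the configured base URL with the stored key; for example (the appSettings key name here is made up):
// Sketch: build the public URL from the base URL in web.config plus the key stored in the database.
// "S3BaseUrl" is a hypothetical appSettings key, e.g. "https://s3.amazonaws.com/<MyBucketName>/".
string baseUrl = System.Configuration.ConfigurationManager.AppSettings["S3BaseUrl"];
string imageUrl = baseUrl + fileName; // fileName is the Key used in the PutObjectRequest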
There is also a "Location" property in the response which points directly to the URI where the S3 object is.