I need to upload files and streams to S3 periodically, and the file size is usually under 50 MB. What I'm seeing is that TransferUtility.UploadAsync returns success but the file is never created. This occurs sporadically. Below is the code we use. Any suggestions?
public TransferUtilityUploadRequest CreateRequest(string bucket, string path)
{
var request = new TransferUtilityUploadRequest
{
BucketName = bucket,
StorageClass = S3StorageClass.Standard,
Key = path,
AutoCloseStream = true,
CannedACL = S3CannedACL.AuthenticatedRead,
ServerSideEncryptionMethod = ServerSideEncryptionMethod.AES256
};
return request;
}
public async Task CreateS3ObjectFromStreamAsync(MemoryStream memoryStream, string bucket, string filePath)
{
var ftu = new TransferUtility(client);
var request = CreateRequest(bucket, filePath);
request.InputStream = memoryStream;
await ftu.UploadAsync(request);
}
public async Task CreateS3ObjectFromFileAsync(string sourceFilePath, string bucketName, string destpath)
{
var request = CreateRequest(bucketName, destpath);
request.FilePath = sourceFilePath;
var ftu = new TransferUtility(client);
await ftu.UploadAsync(request);
}
Use the synchronous Upload, or call Wait() on the task returned by UploadAsync. It worked for me!
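To illustrate that suggestion: if any caller up the chain drops the returned Task (fire-and-forget), the process can move on or shut down before the transfer finishes, which would look exactly like a "successful" call with no object created. A minimal sketch of a blocking variant, assuming client is the same initialized IAmazonS3 used above and that the stream position may need rewinding (neither is confirmed by the thread):
public void CreateS3ObjectFromStream(MemoryStream memoryStream, string bucket, string filePath)
{
    var ftu = new TransferUtility(client);
    var request = CreateRequest(bucket, filePath);

    // Rewind so the whole buffer is uploaded, in case the stream was just written to.
    memoryStream.Position = 0;
    request.InputStream = memoryStream;

    // Blocks until the upload completes; any failure surfaces here as an exception.
    ftu.UploadAsync(request).GetAwaiter().GetResult();
}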
I'm trying to upload a file to Contabo object storage from a memory stream using AWSSDK.S3.
This is my client configuration.
string accessKey = "xxxxxxxxxxxxxxxxxxxxxxxxxx";
string secretKey = "xxxxxxxxxxxxxxxxxxxxxxxxxx";
Amazon.S3.AmazonS3Config config = new Amazon.S3.AmazonS3Config();
Amazon.S3.AmazonS3Client s3Client;
public BookingsController()
{
config.ServiceURL = "https://eu2.contabostorage.com";
config.DisableHostPrefixInjection = true;
s3Client = new Amazon.S3.AmazonS3Client(
accessKey,
secretKey,
config
);
}
This is the method I'm using:
[HttpPost("/api/Bookings/AddFile")]
public async Task<ActionResult> AddBookingFile([FromForm] IFormFile file)
{
using (var newMemoryStream = new MemoryStream())
{
ListBucketsResponse response = await s3Client.ListBucketsAsync();
file.CopyTo(newMemoryStream);
Amazon.S3.Model.PutObjectRequest request = new Amazon.S3.Model.PutObjectRequest();
request.BucketName = "test-bucket";
request.Key = "recording.wav";
request.ContentType = "audio/wav";
request.InputStream = newMemoryStream;
await s3Client.PutObjectAsync(request);
}
return Ok();
}
The ListBuckets method works properly. The PutObject method throws an exception saying the host cannot be found: "Der angegebene Host ist unbekannt. (test-bucket.eu2.contabostorage.com:443)" ("The specified host is unknown").
According to the Contabo docs that is expected, because Contabo doesn't support virtual-hosted-style buckets (DNS prefix). Reference: Contabo docs.
I thought that the following configuration would fix this, but that wasn't the solution:
config.DisableHostPrefixInjection = true;
Does anyone have any advice on how to prevent the prefixing of the URL?
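One setting worth trying, as an assumption on my part rather than something from the thread: ForcePathStyle on AmazonS3Config switches the SDK to path-style addressing (https://eu2.contabostorage.com/test-bucket/...), whereas DisableHostPrefixInjection only affects operation-specific host prefixes, not the bucket-as-subdomain style:
var config = new Amazon.S3.AmazonS3Config
{
    ServiceURL = "https://eu2.contabostorage.com",
    // Put the bucket in the request path instead of the host name.
    ForcePathStyle = true
};
var s3Client = new Amazon.S3.AmazonS3Client(accessKey, secretKey, config);
Unrelated to the host error, it may also be worth setting newMemoryStream.Position = 0 after CopyTo so PutObject uploads from the start of the stream.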
So currently my code uploads an image to S3, but what I now want it to do is return the URL of the uploaded image, so that URL can be stored in the DB and used later.
I know it's possible, as shown in this question: s3 file upload does not return response (but that is JavaScript and I'm struggling to convert it to C#).
This is my code and it works perfectly; I just need to get the URL of the uploaded object. Also, is there a way to make the object public by default? I tried writing the response to the console but that was no help.
public class AmazonS3Uploader
{
private string bucketName = "cartalkio-image-storage-dev";
private string keyName = DateTime.Now.ToString() + ".png";
public async void UploadFile()
{
byte[] bytes = System.Convert.FromBase64String("iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==");
Stream stream = new MemoryStream(bytes);
var myAwsAccesskey = "*************";
var myAwsSecret = "**************************";
var client = new AmazonS3Client(myAwsAccesskey, myAwsSecret, Amazon.RegionEndpoint.EUWest2);
try
{
PutObjectRequest putRequest = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName,
ContentType = "image/png",
InputStream = stream
};
PutObjectResponse response = await client.PutObjectAsync(putRequest);
// Console.Write(response);
}
catch (AmazonS3Exception amazonS3Exception)
{
if (amazonS3Exception.ErrorCode != null &&
(amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
||
amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
{
throw new Exception("Check the provided AWS Credentials.");
}
else
{
throw new Exception("Error occurred: " + amazonS3Exception.Message);
}
}
}
}
I've searched the documentation and various websites, and I guess there just isn't a URL returned in the response. What I've done is change my code around a bit, because you can always predict what the object URL will be based on the name of the object you're uploading; i.e. if you're uploading an image called 'test.png', the URL will be this:
https://[Your-Bucket-Name].s3.[S3-Region].amazonaws.com/[test.png]
I tried dependency injection but AWS didn't like that, so I've changed my code to this (in this example I'm receiving a base64 string, turning it into a byte array and then into a System.IO.Stream).
This is the request I'm sending up:
{ "image":"iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg=="
}
Controller:
public async Task<ActionResult> Index(string image)
{
AmazonS3Uploader amazonS3 = new AmazonS3Uploader();
// This bit creates a random string that will be used as part of the URL
var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
var stringChars = new char[8];
var random = new Random();
for (int i = 0; i < stringChars.Length; i++)
{
stringChars[i] = chars[random.Next(chars.Length)];
}
var finalString = new String(stringChars);
// This bit adds the '.png' as its an image
var keyName = finalString + ".png";
// This uploads the file to S3, passing through the keyname (which is the end of the URL) and the image string
amazonS3.UploadFile(keyName, image);
// This is what the final URL of the object will be, so you can use this variable later or save it in your database
var itemUrl = "https://[your-bucket-name].s3.[S3-Region].amazonaws.com/" + keyName;
return Ok();
}
AmazonS3Uploader.cs
private string bucketName = "your-bucket-name";
public async void UploadFile(string keyName, string image)
{
byte[] bytes = System.Convert.FromBase64String(image);
Stream stream = new MemoryStream(bytes);
var myAwsAccesskey = "*************";
var myAwsSecret = "**************************";
var client = new AmazonS3Client(myAwsAccesskey, myAwsSecret, Amazon.RegionEndpoint.[S3-Region]);
try
{
PutObjectRequest putRequest = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName,
ContentType = "image/png",
InputStream = stream
};
PutObjectResponse response = await client.PutObjectAsync(putRequest);
}
catch (AmazonS3Exception amazonS3Exception)
{
if (amazonS3Exception.ErrorCode != null &&
(amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
||
amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
{
throw new Exception("Check the provided AWS Credentials.");
}
else
{
throw new Exception("Error occurred: " + amazonS3Exception.Message);
}
}
}
The only problem with this is that if the upload to S3 fails, you might still get the URL; if you then save it to your database, the URL will be stored but the object won't exist in the S3 bucket, so it won't lead anywhere. If you implement dependency injection this shouldn't be an issue (dependency injection is not implemented in this example).
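A hedged variation on the same idea: if UploadFile returns a Task and the controller awaits it, the URL is only built after the upload has actually succeeded, and the object can be made public at upload time with a canned ACL (the bucket name and region below are placeholders):
// In AmazonS3Uploader: return Task so callers can await completion.
public async Task UploadFileAsync(string keyName, string image)
{
    byte[] bytes = System.Convert.FromBase64String(image);
    using (var stream = new MemoryStream(bytes))
    {
        var client = new AmazonS3Client(myAwsAccesskey, myAwsSecret, Amazon.RegionEndpoint.EUWest2);
        await client.PutObjectAsync(new PutObjectRequest
        {
            BucketName = bucketName,
            Key = keyName,
            ContentType = "image/png",
            InputStream = stream,
            // Makes the uploaded object publicly readable by default.
            CannedACL = S3CannedACL.PublicRead
        });
    }
}

// In the controller: save the URL only once the awaited upload has completed without throwing.
await amazonS3.UploadFileAsync(keyName, image);
var itemUrl = "https://your-bucket-name.s3.eu-west-2.amazonaws.com/" + keyName;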
Hi, I am writing a Lambda function in .NET Core. My requirement is that a POST API will send employee data; when the data is received I want to store it in an S3 bucket. The emp id sent by the API is unique, and each time I want to create one JSON file whose name is equal to the emp id. I am trying to write my function as below.
public async Task<string> FunctionHandler(Employee input, ILambdaContext context)
{
string bucketname = "someunquebucket";
var client = new AmazonS3Client();
bool doesBucketExists = await AmazonS3Util.DoesS3BucketExistV2Async(client,bucketname);
if(!doesBucketExists)
{
var request = new PutBucketRequest
{
BucketName = "someunquebucket",
};
var response = await client.PutBucketAsync(request);
}
using (var stream = new MemoryStream(bin))
{
var request = new PutObjectRequest
{
BucketName = bucketname,
InputStream = stream,
ContentType = "application/json",
Key = input.emp_id
};
var response = await client.PutObjectAsync(request).ConfigureAwait(false);
}
}
In the above code, PutObjectRequest is used to write data to the S3 bucket, and I am setting a few parameters like the bucket name. In the function I receive the Employee data and now want to create a JSON file named after the emp id. I found the above code, but I am not sure what to pass in place of bin. Can someone help me complete the function? Any help would be appreciated. Thanks.
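In place of bin you need the JSON bytes for the employee. One way to get them, as a sketch that assumes Employee is a plain serializable class (System.Text.Json is used here; Newtonsoft.Json would work the same way):
// Serialize the incoming Employee to JSON and encode it as UTF-8 bytes.
string json = System.Text.Json.JsonSerializer.Serialize(input);
byte[] bin = Encoding.UTF8.GetBytes(json);

using (var stream = new MemoryStream(bin))
{
    var request = new PutObjectRequest
    {
        BucketName = bucketname,
        InputStream = stream,
        ContentType = "application/json",
        // Name the object after the employee id, as required.
        Key = input.emp_id + ".json"
    };
    var response = await client.PutObjectAsync(request).ConfigureAwait(false);
}
Note that the handler is declared as Task<string>, so it will also need to return a string (for example the key that was written) before it compiles.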
I am new to .NET Core and came across your post while trying to solve a similar problem.
Here is how I was able to upload a JSON string to my S3 bucket (hope it helps):
String timeStamp = DateTime.Now.ToString("yyyyMMddHHmmssfff");
byte[] byteArray = Encoding.ASCII.GetBytes(jsonString);
var seekableStream = new MemoryStream(byteArray);
seekableStream.Position = 0;
var putRequest = new PutObjectRequest
{
BucketName = this.BucketName,
Key = timeStamp+"_"+reg + ".json",
InputStream = seekableStream
};
try
{
    var response2 = await this.S3Client.PutObjectAsync(putRequest);
}
catch (AmazonS3Exception)
{
    // Handle or log the failed upload as appropriate.
}
Hope you sorted your issue.
This is the example code from AWS SDK Code Examples.
https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/welcome.html
https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/dotnetv3/S3
https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/dotnetv3/S3/S3_Basics/S3Bucket.cs#L45
/// <summary>
/// Shows how to upload a file from the local computer to an Amazon S3
/// bucket.
/// </summary>
/// <param name="client">An initialized Amazon S3 client object.</param>
/// <param name="bucketName">The Amazon S3 bucket to which the object
/// will be uploaded.</param>
/// <param name="objectName">The object to upload.</param>
/// <param name="filePath">The path, including file name, of the object
/// on the local computer to upload.</param>
/// <returns>A boolean value indicating the success or failure of the
/// upload procedure.</returns>
public static async Task<bool> UploadFileAsync(
IAmazonS3 client,
string bucketName,
string objectName,
string filePath)
{
var request = new PutObjectRequest
{
BucketName = bucketName,
Key = objectName,
FilePath = filePath,
};
var response = await client.PutObjectAsync(request);
if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
{
Console.WriteLine($"Successfully uploaded {objectName} to {bucketName}.");
return true;
}
else
{
Console.WriteLine($"Could not upload {objectName} to {bucketName}.");
return false;
}
}
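Calling it might look like this (the bucket, key and path are placeholders):
var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);
bool uploaded = await UploadFileAsync(client, "my-example-bucket", "reports/report.csv", @"C:\temp\report.csv");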
This is a modified service I wrote to upload JSON strings as JSON files, with both sync and async methods:
public class AwsS3Service
{
private readonly IAmazonS3 _client;
private readonly string _bucketName;
private readonly string _keyPrefix;
/// <param name="bucketName">The Amazon S3 bucket to which the object
/// will be uploaded.</param>
public AwsS3Service(string accessKey, string secretKey, string bucketName, string keyPrefix)
{
BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
_client = new AmazonS3Client(credentials);
_bucketName=bucketName;
_keyPrefix=keyPrefix;
}
public bool UploadJson(string objectName, string json, int migrationId, long orgId)
{
return Task.Run(() => UploadJsonAsync(objectName, json, migrationId, orgId)).GetAwaiter().GetResult();
}
public async Task<bool> UploadJsonAsync(string objectName, string json, int migrationId, long orgId)
{
var request = new PutObjectRequest
{
BucketName = $"{_bucketName}",
Key = $"{_keyPrefix}{migrationId}/{orgId}/{objectName}",
InputStream = new MemoryStream(Encoding.UTF8.GetBytes(json)),
};
var response = await _client.PutObjectAsync(request);
if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
{
return true;
}
else
{
return false;
}
}
}
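Usage would look something like this (all values are placeholders):
var s3Service = new AwsS3Service("ACCESS-KEY", "SECRET-KEY", "my-bucket", "exports/");

// From synchronous code:
bool ok = s3Service.UploadJson("employees.json", jsonString, migrationId: 1, orgId: 42);

// From async code:
bool okAsync = await s3Service.UploadJsonAsync("employees.json", jsonString, 1, 42);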
I'm trying to download an object from an S3 bucket and am facing the issue below:
The security token included in the request is invalid.
Please check and point out where the mistake is. Below is my code.
1. Get temporary credentials:
main()
{
string path = "http://XXX.XXX.XXX./latest/meta-data/iam/security-credentials/EC2_WLMA_Permissions";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(path);
request.Method = "GET";
request.ContentType = "application/json";
HttpWebResponse response = request.GetResponse() as HttpWebResponse;
string result = string.Empty;
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
result = reader.ReadToEnd();
dynamic metaData = JsonConvert.DeserializeObject(result);
_awsAccessKeyId = metaData.AccessKeyId;
_awsSecretAccessKey = metaData.SecretAccessKey;
}
}
2. Create SessionAWSCredentials instance:
SessionAWSCredentials tempCredentials =
GetTemporaryCredentials(_awsAccessKeyId, _awsSecretAccessKey);
//GetTemporaryCredentials method:
private static SessionAWSCredentials GetTemporaryCredentials(
string accessKeyId, string secretAccessKeyId)
{
AmazonSecurityTokenServiceClient stsClient =
new AmazonSecurityTokenServiceClient(accessKeyId,
secretAccessKeyId);
Console.WriteLine(stsClient.ToString());
GetSessionTokenRequest getSessionTokenRequest =
new GetSessionTokenRequest();
getSessionTokenRequest.DurationSeconds = 7200; // seconds
GetSessionTokenResponse sessionTokenResponse =
stsClient.GetSessionToken(getSessionTokenRequest);
Console.WriteLine(sessionTokenResponse.ToString());
Credentials credentials = sessionTokenResponse.Credentials;
Console.WriteLine(credentials.ToString());
SessionAWSCredentials sessionCredentials =
new SessionAWSCredentials(credentials.AccessKeyId,
credentials.SecretAccessKey,
credentials.SessionToken);
return sessionCredentials;
}
3. Get files from S3 using AmazonS3Client:
using (IAmazonS3 client = new AmazonS3Client(tempCredentials,RegionEndpoint.USEast1))
{
GetObjectRequest request = new GetObjectRequest();
request.BucketName = "bucketName" + #"/" + "foldername";
request.Key = "Terms.docx";
GetObjectResponse response = client.GetObject(request);
response.WriteResponseStreamToFile("C:\\MyFile.docx");
}
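A note that is my own reading rather than something confirmed in the thread: the metadata response this code parses also contains a Token field, and those temporary keys are only valid together with that session token, so passing just the access key and secret key to AmazonSecurityTokenServiceClient is a plausible cause of the "security token is invalid" error. On EC2 the SDK can usually resolve the instance-profile credentials (token included) on its own, and the folder belongs in the object key rather than the bucket name:
// The parameterless credential resolution picks up the instance-profile credentials,
// including the session token, so no manual metadata call is needed.
using (IAmazonS3 client = new AmazonS3Client(RegionEndpoint.USEast1))
{
    GetObjectRequest request = new GetObjectRequest
    {
        BucketName = "bucketName",
        Key = "foldername/Terms.docx"
    };
    using (GetObjectResponse response = client.GetObject(request))
    {
        response.WriteResponseStreamToFile("C:\\MyFile.docx");
    }
}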
We do something a little simpler for interfacing with S3 (downloads and uploads).
It looks like you went with a more complex approach. You should try just using the TransferUtility instead:
TransferUtility fileTransferUtility =
new TransferUtility(
new AmazonS3Client("ACCESS-KEY-ID", "SECRET-ACCESS-KEY", Amazon.RegionEndpoint.CACentral1));
// Note the 'fileName' is the 'key' of the object in S3 (which is usually just the file name)
fileTransferUtility.Download(filePath, "my-bucket-name", fileName);
NOTE: TransferUtility.Download() returns void because it downloads the file to the path specified in the filePath argument. This may be a little different than what you were expecting but you can still open a FileStream to that path afterwards and manipulate the file all you want. For example:
using (FileStream fileDownloaded = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
// Do stuff with our newly downloaded file
}
The bucket name, access key and secret key are taken from web.config here; you could also type them in manually.
public void DownloadObject(string imagename)
{
RegionEndpoint bucketRegion = RegionEndpoint.USEast1;
string accessKey = System.Configuration.ConfigurationManager.AppSettings["AWSAccessKey"];
string secretKey = System.Configuration.ConfigurationManager.AppSettings["AWSSecretKey"];
AmazonS3Client s3Client = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey), bucketRegion);
string objectKey = "EMR" + "/" + imagename;
//EMR is folder name of the image inside the bucket
GetObjectRequest request = new GetObjectRequest();
request.BucketName = System.Configuration.ConfigurationManager.AppSettings["bucketname"];
request.Key = objectKey;
GetObjectResponse response = s3Client.GetObject(request);
response.WriteResponseStreamToFile("D:\\Test\\"+ imagename);
}
// D:\Test\ is the local file path.
Following is how I download it. In this case I need to download only .zip files; restricting the download to only the required file types (.zip in my case) helped me avoid errors related to "(416) Requested Range Not Satisfiable".
public static class MyAWS_S3_Helper
{
static string _S3Key = System.Configuration.ConfigurationManager.ConnectionStrings["S3BucketKey"].ConnectionString;
static string _S3SecretKey = System.Configuration.ConfigurationManager.ConnectionStrings["S3BucketSecretKey"].ConnectionString;
public static void S3Download(string bucketName, string _ObjectKey, string downloadPath)
{
IAmazonS3 _client = new AmazonS3Client(_S3Key, _S3SecretKey, Amazon.RegionEndpoint.USEast1);
TransferUtility fileTransferUtility = new TransferUtility(_client);
fileTransferUtility.Download(downloadPath + "\\" + _ObjectKey, bucketName, _ObjectKey);
_client.Dispose();
}
public static async Task AsyncDownload(string bucketName, string downloadPath, string requiredSubFolder)
{
var bucketRegion = RegionEndpoint.USEast1; //Change it
var credentials = new BasicAWSCredentials(_S3Key, _S3SecretKey);
var client = new AmazonS3Client(credentials, bucketRegion);
var request = new ListObjectsV2Request
{
BucketName = bucketName,
MaxKeys = 1000
};
var response = await client.ListObjectsV2Async(request);
var utility = new TransferUtility(client);
foreach (var obj in response.S3Objects)
{
string currentKey = obj.Key;
double sizeCheck = Convert.ToDouble(obj.Size);
int fileNameLength = currentKey.Length;
Console.WriteLine(currentKey + "---" + fileNameLength.ToString());
if (currentKey.Contains(requiredSubFolder))
{
if (currentKey.Contains(".zip")) //This helps to avoid errors related to (416) Requested Range Not Satisfiable
{
try
{
S3Download(bucketName, currentKey, downloadPath);
}
catch (Exception exTest)
{
string messageTest = currentKey + "-" + exTest;
}
}
}
}
}
}
Here is how it is called
static void Main(string[] args)
{
string downloadPath = @"C:\SourceFiles\TestDownload";
Task awsTask = MyAWS_S3_Helper.AsyncDownload("my-files", downloadPath, "mysubfolder");
awsTask.Wait();
}
Here is what I have done to download the files from the S3 bucket:
var AwsImportFilePathParcel = "TEST/TEMP";
IAmazonS3 client = new AmazonS3Client(AwsAccessKey,AwsSecretKey);
S3DirectoryInfo info = new S3DirectoryInfo(client, S3BucketName, AwsImportFilePathParcel);
S3FileInfo[] s3Files = info.GetFiles(pattenForParcel);
Now s3Files contains all the files at the provided location; using a foreach you can save all of them to your system:
foreach (var fileInfo in s3Files)
{
var localPath = Path.Combine(@"C:\TEST", fileInfo.Name);
var file = fileInfo.CopyToLocal(localPath);
}
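If the goal is to pull down everything under a prefix, TransferUtility also has a directory download that avoids listing the files yourself (a sketch reusing the same variables as above):
var transferUtility = new TransferUtility(client);
// Downloads every object under TEST/TEMP into the local folder, preserving relative paths.
transferUtility.DownloadDirectory(S3BucketName, AwsImportFilePathParcel, @"C:\TEST");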
I am trying to upload a large file to Amazon S3. I first used PutObject and it worked fine, but it took about 5 hours to upload a 2 GB file. So I read some online suggestions and tried it with the TransferUtility.
I have increased the timeout, but the TransferUtility API always gives me the error "The request was aborted. The request was canceled."
code sample:
public void UploadWithMultiPart(string BucketName, string s3_key, string fileName)
{
var fileTransferUtility = new Amazon.S3.Transfer.TransferUtility(_accessKey, _secretKey);
var request = new Amazon.S3.Transfer.TransferUtilityUploadRequest()
.WithBucketName(BucketName)
.WithKey(s3_key)
.WithFilePath(fileName)
.WithTimeout(60*60*1000*100)
.WithPartSize(1024 * 1024 * 100)
.WithCannedACL(S3CannedACL.PublicRead)
.WithStorageClass(S3StorageClass.ReducedRedundancy);
request.Timeout = 60*60*1000*100;
fileKey = s3_key;
request.UploadProgressEvent += new EventHandler<UploadProgressArgs>(uploadRequest_UploadPartProgressEvent);
//.with = 30000
// .AddHeader("x-amz-acl", "public-read")
fileTransferUtility.Upload(request);
}
public void Upload(string BucketName, string s3_key, string fileName)
{
Amazon.S3.Model.PutObjectRequest request = new Amazon.S3.Model.PutObjectRequest();
request.WithBucketName(BucketName);
request.WithKey(s3_key);
request.WithFilePath(fileName);
request.Timeout = -1;
request.ReadWriteTimeout = 30000;
request.AddHeader("x-amz-acl", "public-read");
s3Client.PutObject(request);
}
Try this:
private TransferUtility transferUtility;
transferUtility = new TransferUtility(awsAccessKey, awsSecretKey);
AsyncCallback callback = new AsyncCallback(UploadComplete);
var putObjectRequest = new Amazon.S3.Transfer.TransferUtilityUploadRequest()
{
FilePath = filePath,
BucketName = awsBucketName,
Key = awsFilePath,
ContentType = contentType,
StorageClass = S3StorageClass.ReducedRedundancy,
ServerSideEncryptionMethod = ServerSideEncryptionMethod.AES256,
CannedACL = S3CannedACL.Private
};
IAsyncResult ar = transferUtility.BeginUpload(putObjectRequest, callback, null);
ThreadPool.QueueUserWorkItem(c =>
{
transferUtility.EndUpload(ar);
});
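For reference, on current SDK v3 versions the same upload can be written with an object initializer and async/await; this is only a sketch reusing the placeholder variables from the snippets above:
var transferUtility = new TransferUtility(awsAccessKey, awsSecretKey);

var uploadRequest = new TransferUtilityUploadRequest
{
    FilePath = filePath,
    BucketName = awsBucketName,
    Key = awsFilePath,
    ContentType = contentType,
    // 100 MB parts, mirroring the WithPartSize value used in the question.
    PartSize = 1024 * 1024 * 100,
    CannedACL = S3CannedACL.Private
};

// Completes (or throws) only when the whole multipart upload has finished.
await transferUtility.UploadAsync(uploadRequest);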