client.PutObjectAsync(request) uploads a broken image using AWS Lambda - C#

My images upload fine locally, but after deploying to Lambda the uploaded image is broken (note: the image is uploaded, but its size increases). I have added the Binary Media Type in API Gateway, but I am still not getting the right results. The interesting thing is that when I uploaded a text file it was fine on the bucket, but images are not.
public async Task<S3Response> ImageUpload(IFormFile file)
{
    string bucket_name = "your_bucket";
    var client = new AmazonS3Client("***", "****", RegionEndpoint.USEast1);

    // copy the posted file into memory and rewind the stream before uploading
    var stream = new System.IO.MemoryStream();
    file.CopyTo(stream);
    stream.Position = 0;

    var request = new PutObjectRequest
    {
        Key = file.FileName,
        BucketName = bucket_name,
        InputStream = stream,
        //ContentType = "application/octet-stream",
        ContentType = file.ContentType,
        CannedACL = S3CannedACL.PublicRead
    };

    var response = await client.PutObjectAsync(request);
    // ... build and return the S3Response from 'response' here
}

I am saving the image as a base64 string in the S3 bucket and converting it back from base64 to the original image on the client side. If someone has a better solution, kindly add it to the thread.
byte[] byteArray = Encoding.UTF8.GetBytes(file.Filebase64);
stream = new MemoryStream(byteArray);
var request = new PutObjectRequest
{
    Key = file.File_name,
    BucketName = bucket_name,
    InputStream = stream,
    ContentType = "text/plain",
    CannedACL = S3CannedACL.PublicRead
};
where the image file model class looks like this:
public class ImageModel
{
    public String File_name { set; get; }
    public String Filebase64 { set; get; }
}

Related

Amazon S3 does not release the file after the upload

I have a WCF service that uploads files to the Amazon S3 server. After a successful upload, I need to delete the file from my local path, but when I try to delete it I get an error saying "The process cannot access the file because it is being used by another process". Sharing my code snippet below.
var putRequest = new PutObjectRequest
{
    BucketName = System.Configuration.ConfigurationManager.AppSettings["S3Bucket"].ToString(),
    Key = keyName,
    FilePath = path,
    ContentType = "application/pdf"
};
client = new AmazonS3Client(bucketRegion);
PutObjectResponse response = await client.PutObjectAsync(putRequest);
putRequest = null;
client.Dispose();
File.Delete(path);
If anyone knows about the issue, please update.
There might be a timing issue here, so you might want to try closing the stream explicitly.
Do note, I am not sure; if I am mistaken I'll remove this, but it was too long for a comment.
using (var fileStream = File.OpenRead(path))
{
    var putRequest = new PutObjectRequest
    {
        BucketName = System.Configuration.ConfigurationManager.AppSettings["S3Bucket"].ToString(),
        Key = keyName,
        InputStream = fileStream,
        ContentType = "application/pdf",
        AutoCloseStream = false,
    };
    using (var c = new AmazonS3Client(bucketRegion))
    {
        PutObjectResponse response = await c.PutObjectAsync(putRequest);
    }
} // fileStream should be closed here; if not, call fileStream.Close()
File.Delete(path);
More info on the properties: https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Model_PutObjectRequest.htm

Amazon S3 .NET: Upload base64 image data?

I am trying to convert images to base64 and upload them to AWS S3 using C#. I keep getting a "remote server not found" exception, but I am able to log in programmatically and list the buckets I have.
Can you please identify what's wrong?
static void Main(string[] args)
{
    string configaccess = ConfigurationManager.AppSettings["AWSAccesskey"];
    string configsecret = ConfigurationManager.AppSettings["AWSSecretkey"];
    var s3Client = new AmazonS3Client(
        configaccess,
        configsecret,
        RegionEndpoint.USEast1
    );
    Byte[] bArray = File.ReadAllBytes("path/foo.jpg");
    String base64String = Convert.ToBase64String(bArray);
    try
    {
        byte[] bytes = Convert.FromBase64String(base64String);
        using (s3Client)
        {
            var request = new PutObjectRequest
            {
                BucketName = "bucketName",
                CannedACL = S3CannedACL.PublicRead,
                Key = string.Format("bucketName/{0}", "foo.jpg")
            };
            using (var ms = new MemoryStream(bytes))
            {
                request.InputStream = ms;
                s3Client.PutObject(request);
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("AWS Fail");
    }
}
I have tested the same code and it works fine for me. You do not need to specify the bucket name in the Key. You can specify a folder name if you want to store the file in a folder inside the bucket:
Key = string.Format("FolderName/{0}", "foo.jpg")
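Applying that suggestion, the request from the question would look roughly like this (a sketch; the bucket and folder names are placeholders):
var request = new PutObjectRequest
{
    BucketName = "bucketName",
    CannedACL = S3CannedACL.PublicRead,
    // the key is relative to the bucket, so include only the folder, not the bucket name
    Key = string.Format("FolderName/{0}", "foo.jpg")
};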

How to upload an image to Amazon S3 using a .NET Web API in C#?

I have created an API for image upload. Currently the code saves the uploaded image to a local folder; now I need to change it so the image is uploaded to Amazon S3 instead. I found a link while searching, but in that example a static image is uploaded, whereas I need to upload the image the user picks in the file upload control and send it to the Amazon server. I have no idea how to do that, so please help. My code is listed below, along with the S3 code I have tried.
This is my API method for the image upload:
[HttpPost]
[Route("FileUpload")]
public HttpResponseMessage FileUpload(string FileUploadType)
{
    try
    {
        var httpRequest = HttpContext.Current.Request;
        if (httpRequest.Files.Count > 0)
        {
            foreach (string file in httpRequest.Files)
            {
                var postedFile = httpRequest.Files[file];
                string fname = System.IO.Path.GetFileNameWithoutExtension(postedFile.FileName.ToString());
                string extension = Path.GetExtension(postedFile.FileName);
                // load the posted file as an image so it can be re-encoded as JPEG
                Image img = Image.FromStream(postedFile.InputStream);
                string newFileName = DateTime.Now.ToString("yyyyMMddhhmmssfff") + ".jpeg";
                string path = ConfigurationManager.AppSettings["ImageUploadPath"].ToString();
                string filePath = Path.Combine(path, newFileName);
                SaveJpg(img, filePath);
                return Request.CreateResponse(HttpStatusCode.OK, "Ok");
            }
        }
    }
    catch (Exception ex)
    {
        // returning the exception directly does not compile; return an error response instead
        return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex);
    }
    return Request.CreateResponse(HttpStatusCode.OK, "Ok");
}
This is my save-image method:
public static void SaveJpg(Image image, string file_name, long compression = 60)
{
    try
    {
        EncoderParameters encoder_params = new EncoderParameters(1);
        encoder_params.Param[0] = new EncoderParameter(
            System.Drawing.Imaging.Encoder.Quality, compression);
        ImageCodecInfo image_codec_info = GetEncoderInfo("image/jpeg");
        image.Save(file_name, image_codec_info, encoder_params);
    }
    catch (Exception ex)
    {
    }
}
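SaveJpg calls a GetEncoderInfo helper that is not shown in the post; the usual System.Drawing implementation of that helper looks roughly like this (a sketch of the standard pattern, not the asker's actual code):
private static ImageCodecInfo GetEncoderInfo(string mimeType)
{
    // find the installed codec whose MIME type matches, e.g. "image/jpeg"
    foreach (ImageCodecInfo codec in ImageCodecInfo.GetImageEncoders())
    {
        if (codec.MimeType == mimeType)
            return codec;
    }
    return null;
}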
I have tried this code to upload a static image to the server:
private string bucketName = "Xyz";
private string keyName = "abc.jpeg";
private string filePath = "C:\\Users\\I BALL\\Desktop\\image\\abc.jpeg"; // this image is stored on the server
public void UploadFile()
{
    var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);
    try
    {
        PutObjectRequest putRequest = new PutObjectRequest
        {
            BucketName = bucketName,
            Key = keyName,
            FilePath = filePath,
            ContentType = "text/plain"
        };
        PutObjectResponse response = client.PutObject(putRequest);
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
        if (amazonS3Exception.ErrorCode != null &&
            (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
             || amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
        {
            throw new Exception("Check the provided AWS Credentials.");
        }
        else
        {
            throw new Exception("Error occurred: " + amazonS3Exception.Message);
        }
    }
}
I have shown my code above, but I need to merge the S3 upload into it. If anyone knows how to do that, please help.
This might be too late, but here is how I did it:
Short answer: the Amazon S3 SDK for .NET has a class called "TransferUtility" which accepts a Stream object, so as long as you can convert your file to any class derived from the abstract Stream class, you can upload the file.
Long Answer:
The HTTP request's posted files have an InputStream property, so inside your foreach loop:
var postedFile = httpRequest.Files[file];
If you expand this object, it is of type "HttpPostedFile", so you have access to the stream through the InputStream property.
Here are some snippets from a working sample:
// get the posted file from the request
HttpPostedFile postedFile = httpRequest.Files["File"];
// convert the posted file stream to a memory stream
System.IO.MemoryStream target = new System.IO.MemoryStream();
postedFile.InputStream.CopyTo(target);
// the following function is one I built; it accepts the Amazon file key and the object
// that will be uploaded to S3, in this case a MemoryStream
s3.WritingAnObject(fileKey, target);
The s3 variable is an instance of a class called "S3Uploader". Here are some snippets that can get you going; below are the needed namespaces:
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Transfer;
class constructor:
static IAmazonS3 client;
static TransferUtility fileTransferUtility;
static string _bucketName; // field backing the bucket name passed to the constructor

public S3Uploader(string accessKeyId, string secretAccessKey, string bucketName)
{
    _bucketName = bucketName;
    var credentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
    client = new AmazonS3Client(credentials, RegionEndpoint.USEast1);
    fileTransferUtility = new TransferUtility(client);
}
Notice here that we create the credentials using the BasicAWSCredentials class instead of passing them to the AmazonS3Client directly, and then we use the TransferUtility class to have better control over what is sent to S3. Here is how the upload works based on a MemoryStream:
public void WritingAnObject(string keyName, MemoryStream fileToUpload)
{
    try
    {
        var fileTransferUtilityRequest = new TransferUtilityUploadRequest
        {
            BucketName = _bucketName,
            Key = keyName,
            InputStream = fileToUpload,
            StorageClass = S3StorageClass.ReducedRedundancy,
            CannedACL = S3CannedACL.Private
        };
        fileTransferUtility.Upload(fileTransferUtilityRequest);
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
        // your error handling here
    }
}
Hope this helps someone with similar issues.
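To tie this back to the FileUpload action from the question, the loop over httpRequest.Files could hand each posted file to the uploader roughly like this (a sketch only; the credentials, bucket name, and key format are placeholders, not values from the original post):
var uploader = new S3Uploader("accessKeyId", "secretAccessKey", "your-bucket");
foreach (string file in httpRequest.Files)
{
    HttpPostedFile postedFile = httpRequest.Files[file];
    string keyName = DateTime.Now.ToString("yyyyMMddhhmmssfff") + Path.GetExtension(postedFile.FileName);

    // copy the posted file into memory and hand it to the uploader
    using (var target = new System.IO.MemoryStream())
    {
        postedFile.InputStream.CopyTo(target);
        target.Position = 0;
        uploader.WritingAnObject(keyName, target);
    }
}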

System.Net.ProtocolViolationException: Bytes to be written to the stream exceed the Content-Length bytes size specified

Using AWSSDK.dll I am trying to loop through the images in a zip file and send them to Amazon S3 using the code below. The problem is I keep getting the error
System.Net.ProtocolViolationException
when I call the fileTransferUtility.Upload() method. The zip is posted as an HttpPostedFile to the method, which creates the zip file object, then loops through each entry and uploads it.
ZipFile zipFile = new ZipFile(postedFile.InputStream);
foreach (ZipEntry zipEntry in zipFile)
{
    if (zipEntry.Name != String.Empty)
    {
        string saveLocation = unzipBaseDir + "/" + zipEntry.Name;
        string dbLocation = "./" + Path.Combine(unzipBaseDir, zipEntry.Name).Replace(@"\", "/");
        // save the file
        TransferUtilityConfig config = new TransferUtilityConfig();
        config.MinSizeBeforePartUpload = 80740;
        TransferUtility fileTransferUtility = new TransferUtility(new AmazonS3Client(accessKeyID, secretAccessKeyID, RegionEndpoint.EUWest1), config);
        using (Stream fileToUpload = zipFile.GetInputStream(zipEntry))
        {
            TransferUtilityUploadRequest fileTransferUtilityRequest = new TransferUtilityUploadRequest
            {
                BucketName = existingBucketName,
                InputStream = fileToUpload,
                StorageClass = S3StorageClass.Standard,
                PartSize = fileToUpload.Length,
                Key = saveLocation,
                CannedACL = S3CannedACL.PublicRead
            };
            fileTransferUtility.Upload(fileTransferUtilityRequest);
        }
    }
}
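The thread does not include an accepted fix, but one common workaround for this class of error, offered here only as a hedged sketch rather than as the thread's answer, is to buffer each entry into a MemoryStream first, so the length the SDK advertises as Content-Length matches exactly the bytes that will be written:
// Sketch: buffer the zip entry so the upload length is known exactly.
// zipFile, zipEntry, saveLocation, existingBucketName and fileTransferUtility
// come from the question's code above.
using (var entryStream = zipFile.GetInputStream(zipEntry))
using (var buffer = new MemoryStream())
{
    entryStream.CopyTo(buffer);
    buffer.Position = 0;

    var uploadRequest = new TransferUtilityUploadRequest
    {
        BucketName = existingBucketName,
        InputStream = buffer,
        Key = saveLocation,
        CannedACL = S3CannedACL.PublicRead
    };
    fileTransferUtility.Upload(uploadRequest);
}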

Multipart upload to Amazon S3 overwrites the last part

I'm following the documentation found here about doing a multipart upload with the .NET client library.
The issue I'm having is that each part sent to S3 overwrites the last part. In other words, my pieces are 10 KB each (I tried 5 MB at a time too) and each upload overwrites the previous one. What am I doing wrong?
Here's what I've got:
var fileTransferUtility = new TransferUtility(_s3Client);
var request = new TransferUtilityUploadRequest
{
    BucketName = "mybucket",
    InputStream = stream,
    StorageClass = S3StorageClass.ReducedRedundancy,
    PartSize = stream.Length, // stream is 10,000 bytes at a time
    Key = fileName
};
Edit
Here's working code for doing the multipart upload
public UploadPartResponse UploadChunk(Stream stream, string fileName, string uploadId, List<PartETag> eTags, int partNumber, bool lastPart)
{
    stream.Position = 0;

    // Step 1: build and send a multipart upload initiation request
    if (partNumber == 1)
    {
        var initiateRequest = new InitiateMultipartUploadRequest
        {
            BucketName = _settings.Bucket,
            Key = fileName
        };
        var initResponse = _s3Client.InitiateMultipartUpload(initiateRequest);
        uploadId = initResponse.UploadId;
    }

    // Step 2: upload each chunk (this is run for every chunk, unlike the other steps which run once)
    var uploadRequest = new UploadPartRequest
    {
        BucketName = _settings.Bucket,
        Key = fileName,
        UploadId = uploadId,
        PartNumber = partNumber,
        InputStream = stream,
        IsLastPart = lastPart,
        PartSize = stream.Length
    };
    var response = _s3Client.UploadPart(uploadRequest);

    // Step 3: build and send the multipart complete request
    if (lastPart)
    {
        eTags.Add(new PartETag
        {
            PartNumber = partNumber,
            ETag = response.ETag
        });
        var completeRequest = new CompleteMultipartUploadRequest
        {
            BucketName = _settings.Bucket,
            Key = fileName,
            UploadId = uploadId,
            PartETags = eTags
        };
        try
        {
            _s3Client.CompleteMultipartUpload(completeRequest);
        }
        catch
        {
            // do some logging and return a null response
            return null;
        }
    }
    response.ResponseMetadata.Metadata["uploadid"] = uploadRequest.UploadId;
    return response;
}
If you have a stream that is broken up into 10 chunks, you will hit this method 10 times: on the first chunk you hit steps 1 and 2, on chunks 2-9 only step 2, and on the last chunk step 3 as well. You need to send the upload ID and the ETag for each response back to your client. At step 3 you must provide the ETags for all pieces, or else S3 will assemble the file with one or more pieces missing. On my client side I had a hidden field where I persisted the ETag list (comma delimited).
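As a rough illustration of that flow (a sketch only, not part of the original answer; the chunk size, the source stream, and keeping the upload ID and ETags in local variables rather than round-tripping them through the client are all assumptions):
// Drive UploadChunk over a seekable source stream in 5 MB pieces.
// 'uploader' is an instance of the class containing UploadChunk.
const int chunkSize = 5 * 1024 * 1024;
var eTags = new List<PartETag>();
string uploadId = null;
int partNumber = 1;
var buffer = new byte[chunkSize];
int read;
while ((read = sourceStream.Read(buffer, 0, buffer.Length)) > 0)
{
    bool lastPart = sourceStream.Position >= sourceStream.Length;
    using (var chunk = new MemoryStream(buffer, 0, read))
    {
        var partResponse = uploader.UploadChunk(chunk, fileName, uploadId, eTags, partNumber, lastPart);
        if (!lastPart)
        {
            // remember the upload id and the ETag of this part for the later calls
            uploadId = partResponse.ResponseMetadata.Metadata["uploadid"];
            eTags.Add(new PartETag { PartNumber = partNumber, ETag = partResponse.ETag });
        }
    }
    partNumber++;
}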
What this code sets up is a request that uploads an object with only one part, because you pass in a stream and set the part size to the length of the whole stream.
The intention of the TransferUtility is that you give it a large stream or a file path and set PartSize to the increments you want the stream broken down into. You can also leave PartSize unset, which uses a default part size.
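Put differently, if you don't need to drive the parts yourself, a single TransferUtility call over the complete stream lets the SDK split it into parts for you (a sketch; the bucket name, the wholeStream variable, and the 5 MB part size are placeholders):
var fileTransferUtility = new TransferUtility(_s3Client);
var request = new TransferUtilityUploadRequest
{
    BucketName = "mybucket",
    InputStream = wholeStream,  // the complete stream, not a 10 KB piece
    StorageClass = S3StorageClass.ReducedRedundancy,
    PartSize = 5 * 1024 * 1024, // omit to let the SDK pick a default part size
    Key = fileName
};
fileTransferUtility.Upload(request);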
