I am trying to convert images to base64 and upload them to AWS S3 using C#. I keep getting a remote-server-not-found exception, yet I am able to log in programmatically and list the buckets I have.
Can you please identify what's wrong?
static void Main(string[] args)
{
    string configaccess = ConfigurationManager.AppSettings["AWSAccesskey"];
    string configsecret = ConfigurationManager.AppSettings["AWSSecretkey"];
    var s3Client = new AmazonS3Client(
        configaccess,
        configsecret,
        RegionEndpoint.USEast1
    );
    Byte[] bArray = File.ReadAllBytes("path/foo.jpg");
    String base64String = Convert.ToBase64String(bArray);
    try
    {
        byte[] bytes = Convert.FromBase64String(base64String);
        using (s3Client)
        {
            var request = new PutObjectRequest
            {
                BucketName = "bucketName",
                CannedACL = S3CannedACL.PublicRead,
                Key = string.Format("bucketName/{0}", "foo.jpg")
            };
            using (var ms = new MemoryStream(bytes))
            {
                request.InputStream = ms;
                s3Client.PutObject(request);
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("AWS Fail");
    }
}
I have tested the same code and it works fine for me. You don't need to specify the bucket name in the Key. You can specify a folder name if you want to store the file in a folder inside the bucket:
Key = string.Format("FolderName/{0}", "foo.jpg")
I am using the AWS S3 SDK and I want to upload files from my Web API to the bucket. It works perfectly if the provided filePath is of the sort C:\User\Desktop\file.jpg, but if I use Path.GetFullPath(file.FileName), it looks for the .jpg file inside my project folder. How can I get the absolute path on the machine, not the path in the project folder?
public async Task UploadFileAsync(IFormFile file, string userId)
{
    var filePath = Path.GetFullPath(file.FileName);
    var bucketName = this.configuration.GetSection("Amazon")["BucketName"];
    var accessKey = this.configuration.GetSection("Amazon")["AWSAccessKey"];
    var secretKey = this.configuration.GetSection("Amazon")["AWSSecretKey"];
    var bucketRegion = RegionEndpoint.EUWest1;
    var s3Client = new AmazonS3Client(accessKey, secretKey, bucketRegion);
    try
    {
        var fileTransferUtility = new TransferUtility(s3Client);
        using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
        {
            await fileTransferUtility.UploadAsync(fileToUpload, bucketName, file.FileName);
        }
        await this.filesRepository.AddAsync(new FileBlob
        {
            Name = file.FileName,
            Extension = file.FileName.Split('.')[1],
            Size = file.Length,
            UserId = userId,
            UploadedOn = DateTime.UtcNow,
        });
        await this.filesRepository.SaveChangesAsync();
    }
    catch (AmazonS3Exception e)
    {
        Console.WriteLine(e.Message);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
The path I get when I use GetFullPath points inside my project folder, when I should be getting C:\Users\Pepi\Desktop\All\corgi.png.
I am sure I am missing a lot of things here, but for this to work I need the path to the file on the user's machine. If I try to avoid using the file path and upload the file itself through a MemoryStream, S3 says access denied.
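For context, a server-side API generally cannot see the original path on the client's machine; it only receives the uploaded bytes. A common pattern (a sketch under that assumption, not the poster's code; the method shape is illustrative) is to upload straight from the IFormFile stream and skip the file system entirely:
using Amazon.S3;
using Amazon.S3.Transfer;
using Microsoft.AspNetCore.Http;

public async Task UploadToS3Async(IFormFile file, IAmazonS3 s3Client, string bucketName)
{
    var transferUtility = new TransferUtility(s3Client);
    // OpenReadStream exposes the uploaded content without touching the disk
    using (var stream = file.OpenReadStream())
    {
        await transferUtility.UploadAsync(stream, bucketName, file.FileName);
    }
}
If S3 returns access denied for this kind of upload, that usually points at the credentials or bucket policy rather than the stream itself.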
I want to open a FileStream from a SharePoint file (Microsoft.SharePoint.Client.File), but I can't figure out how.
I only have access to Microsoft.SharePoint.Client, because the Microsoft.SharePoint package can't be installed due to some errors.
This is the code I have so far:
ClientContext ctx = new ClientContext("https://factionxyz0.sharepoint.com/sites/faktion-devs");
ctx.Credentials = CredentialCache.DefaultCredentials;
Microsoft.SharePoint.Client.File temp = ctx.Web.GetFileByServerRelativeUrl(filePath);
FileStream fs = new FileStream(???);
You can only create a System.IO.FileStream if the file exists on a physical disk (or is mapped to a disk via the Operating System).
Workaround: Are you able to access the raw URL of the file? In which case, download the file to disk (if the size is appropriate) and then read from there.
For example:
var httpClient = new HttpClient();
// HTTP GET request
var response = await httpClient.GetAsync(... SharePoint URL ...);
// Get the content stream
var stream = await response.Content.ReadAsStreamAsync();
// Create a temporary file
var tempFile = Path.GetTempFileName();
using (var fs = File.OpenWrite(tempFile))
{
    await stream.CopyToAsync(fs);
}
// tempFile now contains your file locally; you can access it like:
var fileStream = File.OpenRead(tempFile);
// Make sure you delete the temporary file after using it
File.Delete(tempFile);
A FileStream must map to a file on disk. The following code demonstrates how to get a stream via CSOM; we can then convert it to a FileStream by using a temp file.
ResourcePath filepath = ResourcePath.FromDecodedUrl(filename);
Microsoft.SharePoint.Client.File temp = context.Web.GetFileByServerRelativePath(filepath);
ClientResult<System.IO.Stream> crstream = temp.OpenBinaryStream();
context.Load(temp);
context.ExecuteQuery();
var tempFile = Path.GetTempFileName();
using (FileStream fs = System.IO.File.OpenWrite(tempFile))
{
    if (crstream.Value != null)
    {
        crstream.Value.CopyTo(fs);
    }
}
As for Azure Functions temp storage, you can refer to the following thread:
Azure Functions Temp storage
Or you can store the data in Azure Blob Storage:
Upload data to blob storage with Azure Functions
It's been a while since the question was asked; however, this is how I solved it while working on a project. Obviously, passing in the credentials directly like this isn't the best way, but due to timing constraints I was not able to move this project to a newer version of .NET and use Azure AD.
Note that the class is implementing an interface.
private string _server;
// field implied by the snippet below; the library name value here is illustrative
private const string TemplateLibrary = "TemplateLibrary";

public void SetServer(string domainName) {
    if (string.IsNullOrEmpty(domainName)) throw new Exception("Invalid domain name. Name cannot be null");
    _server = domainName.Trim('/').Trim('\\');
}
private string MapPath(string urlPath) {
var url = string.Join("/", _server, urlPath);
return url.Trim('/');
}
public ISharePointDocument GetDocument(string path, string fileName) {
var serverPath = MapPath(path);
var filePath = string.Join("/", serverPath, TemplateLibrary, fileName).Trim('/');
var document = new SharePointDocument();
var data = GetClientStream(path, fileName);
using(var memoryStream = new MemoryStream()) {
if (data == null) return document;
data.CopyTo(memoryStream);
var byteArray = memoryStream.ToArray();
document = new SharePointDocument {
FullPath = filePath,
Bytes = byteArray
};
}
return document;
}
public Stream GetClientStream(string path, string fileName) {
var serverPath = MapPath(path);
var filePath = string.Join("/", serverPath, TemplateLibrary, fileName).Trim('/');
var context = GetClientContext(serverPath);
var web = context.Web;
context.Load(web);
context.ExecuteQuery();
var file = web.GetListItem(filePath).File;
var data = file.OpenBinaryStream();
context.Load(file);
context.ExecuteQuery();
return data.Value;
}
private static ClientContext GetClientContext(string serverPath) {
    var context = new ClientContext(serverPath) {
        Credentials = new SharePointOnlineCredentials("example@example.com", GetPassword())
    };
    return context;
}
private static SecureString GetPassword() {
const string password = "XYZ";
var securePassword = new SecureString();
foreach(var c in password.ToCharArray()) securePassword.AppendChar(c);
return securePassword;
}
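For completeness, a minimal usage sketch under the same assumptions (the implementing class name, site URL, and file names here are illustrative):
var repository = new SharePointDocumentRepository(); // whatever class implements the interface
repository.SetServer("https://example.sharepoint.com/sites/mysite");
var document = repository.GetDocument("Shared Documents", "report.docx");
// document.Bytes now holds the file content, e.g.:
System.IO.File.WriteAllBytes("report.docx", document.Bytes);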
My images upload fine locally, but when I deployed the Lambda it gives a broken image (note: the image is uploaded, but its size increases). I have added the binary media type in API Gateway, but I am still not getting the right results. The interesting thing is that when I uploaded a text file it was perfect on the bucket, but not images.
public async Task<S3Response> ImageUpload(IFormFile file)
{
    string bucket_name = "your_bucket";
    var client = new AmazonS3Client("***", "****", RegionEndpoint.USEast1);
    var stream = new System.IO.MemoryStream();
    file.CopyTo(stream);
    stream.Position = 0; // rewind after CopyTo, otherwise the upload starts at the end of the stream
    var request = new PutObjectRequest
    {
        Key = file.FileName,
        BucketName = bucket_name,
        InputStream = stream,
        //ContentType = "application/octet-stream",
        ContentType = file.ContentType,
        CannedACL = S3CannedACL.PublicRead
    };
    var response = await client.PutObjectAsync(request);
    return null; // the original snippet omits constructing the S3Response return value
}
As a workaround, I am saving the image as a base64 string in the S3 bucket and converting it back from the base64 string to the original image on the client side. If someone has a better solution, kindly add it to the thread.
byte[] byteArray = Encoding.UTF8.GetBytes(file.Filebase64);
var stream = new MemoryStream(byteArray);
var request = new PutObjectRequest
{
    Key = file.File_name,
    BucketName = bucket_name,
    InputStream = stream,
    ContentType = "text/plain",
    CannedACL = S3CannedACL.PublicRead
};
where the image file model class looks like this:
public class ImageModel
{
public String File_name { set; get; }
public String Filebase64 { set; get; }
}
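For reference, the client-side conversion back is just a base64 decode. A minimal sketch (downloadedBase64 stands for the text content fetched from the bucket; the output file name is illustrative):
// decode the base64 text that was stored as the object's content
byte[] imageBytes = Convert.FromBase64String(downloadedBase64);
File.WriteAllBytes("restored.jpg", imageBytes);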
I have created an API for image upload. In the current code, the uploaded image is saved into a local folder at upload time. Now I need to change my code so the image is uploaded to Amazon S3 instead. I found a link while searching, but in that link a static image is uploaded; I need the image that is browsed from the file-upload control to be uploaded to the Amazon server, and I have no idea how to do that. Please, if anyone knows, help me. My code is listed below, along with the static-upload code I have tried.
This is my API method for the image upload:
[HttpPost]
[Route("FileUpload")]
public HttpResponseMessage FileUpload(string FileUploadType)
{
    try
    {
        var httpRequest = HttpContext.Current.Request;
        if (httpRequest.Files.Count > 0)
        {
            foreach (string file in httpRequest.Files)
            {
                var postedFile = httpRequest.Files[file];
                string fname = System.IO.Path.GetFileNameWithoutExtension(postedFile.FileName.ToString());
                string extension = Path.GetExtension(postedFile.FileName);
                Image img = Image.FromStream(postedFile.InputStream); // load the posted image (was null in the original)
                string newFileName = DateTime.Now.ToString("yyyyMMddhhmmssfff") + ".jpeg";
                string path = ConfigurationManager.AppSettings["ImageUploadPath"].ToString();
                string filePath = Path.Combine(path, newFileName);
                SaveJpg(img, filePath);
                return Request.CreateResponse(HttpStatusCode.OK, "Ok");
            }
        }
    }
    catch (Exception ex)
    {
        return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex);
    }
    return Request.CreateResponse(HttpStatusCode.OK, "Ok");
}
This is my save-image method:
public static void SaveJpg(Image image, string file_name, long compression = 60)
{
    try
    {
        EncoderParameters encoder_params = new EncoderParameters(1);
        encoder_params.Param[0] = new EncoderParameter(
            System.Drawing.Imaging.Encoder.Quality, compression);
        ImageCodecInfo image_codec_info = GetEncoderInfo("image/jpeg");
        image.Save(file_name, image_codec_info, encoder_params);
    }
    catch (Exception ex)
    {
        // swallowed in the original; consider logging or rethrowing
    }
}
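SaveJpg relies on a GetEncoderInfo helper that isn't shown; a common implementation looks like this (a sketch, assuming System.Drawing.Imaging and System.Linq):
private static ImageCodecInfo GetEncoderInfo(string mimeType)
{
    // find the codec registered for the given MIME type, e.g. "image/jpeg"
    return ImageCodecInfo.GetImageEncoders()
        .FirstOrDefault(codec => codec.MimeType == mimeType);
}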
I have tried this code with a static image upload to the server:
private string bucketName = "Xyz";
private string keyName = "abc.jpeg";
private string filePath = "C:\\Users\\I BALL\\Desktop\\image\\abc.jpeg"; // this image is already stored at this path
public void UploadFile()
{
var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);
try
{
PutObjectRequest putRequest = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName,
FilePath = filePath,
ContentType = "text/plain"
};
PutObjectResponse response = client.PutObject(putRequest);
}
catch (AmazonS3Exception amazonS3Exception)
{
if (amazonS3Exception.ErrorCode != null &&
(amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
||
amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
{
throw new Exception("Check the provided AWS Credentials.");
}
else
{
throw new Exception("Error occurred: " + amazonS3Exception.Message);
}
}
}
Above I have shown my code, but I need to merge the S3 upload into my upload API. How can I do that? If anyone knows, please help.
This might be too late, but here is how I did it:
Short answer: the Amazon S3 SDK for .NET has a class called "TransferUtility" which accepts a Stream object, so as long as you can convert your file to any class derived from the abstract Stream class, you can upload the file.
Long answer:
The posted files on the HTTP request have an InputStream property, so inside your foreach loop:
var postedFile = httpRequest.Files[file];
If you inspect this object, you'll see it is of type "HttpPostedFile", so you have access to the stream through the InputStream property.
Here are some snippets from a working sample:
//get values from the headers
HttpPostedFile postedFile = httpRequest.Files["File"];
//convert the posted file stream to a memory stream
System.IO.MemoryStream target = new System.IO.MemoryStream();
postedFile.InputStream.CopyTo(target);
target.Position = 0; // rewind so the upload starts from the beginning
//the following static function accepts the Amazon file key and the object that will be uploaded to S3, in this case a MemoryStream
s3.WritingAnObject(fileKey, target);
Here, s3 is an instance of a class called "S3Uploader". Below are some snippets that can get you going; first, the required namespaces:
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Transfer;
The class fields and constructor:
static IAmazonS3 client;
static TransferUtility fileTransferUtility;
static string _bucketName;

public S3Uploader(string accessKeyId, string secretAccessKey, string bucketName)
{
    _bucketName = bucketName;
    var credentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
    client = new AmazonS3Client(credentials, RegionEndpoint.USEast1);
    fileTransferUtility = new TransferUtility(client);
}
Notice that we create the credentials using the BasicAWSCredentials class instead of passing them to the AmazonS3Client directly, and then use the TransferUtility class to have better control over what is sent to S3. Here is how the upload works based on a MemoryStream:
public void WritingAnObject(string keyName, MemoryStream fileToUpload)
{
    try
    {
        var fileTransferUtilityRequest = new TransferUtilityUploadRequest
        {
            BucketName = _bucketName,
            Key = keyName,
            InputStream = fileToUpload,
            StorageClass = S3StorageClass.ReducedRedundancy,
            CannedACL = S3CannedACL.Private
        };
        fileTransferUtility.Upload(fileTransferUtilityRequest);
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
        //your error handling here
    }
}
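A minimal usage sketch tying this back to the controller code above (the keys, bucket name, and object key are placeholders):
var s3 = new S3Uploader("accessKeyId", "secretAccessKey", "my-bucket");
// target is the MemoryStream filled from postedFile.InputStream earlier
s3.WritingAnObject("uploads/" + postedFile.FileName, target);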
Hope this helps someone with similar issues.
I am attempting to download a list of files from URLs stored in my database and then upload them to my Azure File Storage account. I am successfully downloading the files and can turn them into files in my local storage or convert them to text and upload that. However, I lose data when converting something like a PDF to text, and I do not want to have to store the files on the Azure app that this endpoint is hosted on, as I do not need to manipulate the files in any way.
I have attempted to upload the files from the Stream I get from the HttpContent object using the UploadFromStream method on the CloudFile. Whenever this command is run I get an InvalidOperationException with the message "Operation is not valid due to the current state of the object."
I've tried converting the original Stream to a MemoryStream as well but this just writes a blank file to the FileStorage account, even if I set the position to the beginning of the MemoryStream. My code is below and if anyone could point out what information I am missing to make this work I would appreciate it.
public DownloadFileResponse DownloadFile(FileLink fileLink)
{
string fileName = string.Format("{0}{1}{2}", fileLink.ExpectedFileName, ".", fileLink.ExpectedFileType);
HttpStatusCode status;
string hash = "";
using (var client = new HttpClient())
{
client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
var request = new HttpRequestMessage(HttpMethod.Get, fileLink.ExpectedURL);
var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
status = response.StatusCode;
if (status == HttpStatusCode.OK)
{
var httpStream = response.Content.ReadAsStreamAsync().Result;
fileStorage.WriteFile(fileLink.ExpectedFileType, fileName, httpStream);
hash = HashGenerator.GetMD5HashFromStream(httpStream);
}
}
return new DownloadFileResponse(status, fileName, hash);
}
public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
var options = SetOptions();
var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
newFile.UploadFromStream(fileStream, options: options);
}
public FileRequestOptions SetOptions()
{
FileRequestOptions options = new FileRequestOptions();
options.ServerTimeout = TimeSpan.FromSeconds(10);
options.RetryPolicy = new NoRetry();
return options;
}
public CloudFile GetTargetCloudFile(string targetDirectory, string targetFilePath)
{
if (!shareConnector.share.Exists())
{
throw new Exception("Cannot access Azure File Storage share");
}
CloudFileDirectory rootDirectory = shareConnector.share.GetRootDirectoryReference();
CloudFileDirectory directory = rootDirectory.GetDirectoryReference(targetDirectory);
if (!directory.Exists())
{
throw new Exception("Target Directory does not exist");
}
CloudFile newFile = directory.GetFileReference(targetFilePath);
return newFile;
}
I had the same problem; the only way it worked was by reading the incoming stream (in your case httpStream in the DownloadFile(FileLink fileLink) method) into a byte array and using UploadFromByteArray(byte[] buffer, int index, int count) instead of UploadFromStream.
So your WriteFile method will look like:
public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
    var options = SetOptions();
    var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
    const int bufferLength = 600; // buffer size is just an example
    byte[] buffer = new byte[bufferLength];
    List<byte> byteArrayFile = new List<byte>(); // the whole file ends up here
    int count = 0;
    try
    {
        while ((count = fileStream.Read(buffer, 0, bufferLength)) > 0)
        {
            // only take the bytes actually read; the last chunk is usually shorter than the buffer (requires System.Linq)
            byteArrayFile.AddRange(buffer.Take(count));
        }
        fileStream.Close();
    }
    catch (Exception ex)
    {
        throw; // you need to change this
    }
    newFile.UploadFromByteArray(byteArrayFile.ToArray(), 0, byteArrayFile.Count);
}
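A likely explanation, for what it's worth: the HTTP response stream is neither seekable nor of known length, and the Azure File service wants the content length up front, so handing it a byte array (or a fully buffered, rewound MemoryStream) sidesteps the problem. That would also explain why the answers below succeed with a MemoryStream.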
According to your description and code, I suggest you use Stream.CopyTo to copy the stream to a local MemoryStream first, then upload the MemoryStream to Azure File Storage.
For more details, refer to the code below.
I just changed the DownloadFile method to test it.
HttpStatusCode status;
using (var client = new HttpClient())
{
client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
// client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
//here I use my blob file to test it
var request = new HttpRequestMessage(HttpMethod.Get, "https://xxxxxxxxxx.blob.core.windows.net/media/secondblobtest-eypt.txt");
var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
status = response.StatusCode;
if (status == HttpStatusCode.OK)
{
MemoryStream ms = new MemoryStream();
var httpStream = response.Content.ReadAsStreamAsync().Result;
httpStream.CopyTo(ms);
ms.Position = 0;
WriteFile("aaa", "testaa", ms);
// hash = HashGenerator.GetMD5HashFromStream(httpStream);
}
}
I had a similar problem and found out that the UploadFromStream method only works with buffered streams. Nevertheless, I was able to upload files to Azure Storage successfully by using a MemoryStream. I don't think this is a very good solution, as you use up memory resources by copying the content of the file stream into memory before handing it to the Azure stream. What I came up with instead is a way of writing directly to an Azure stream: use the OpenWriteAsync method to create the stream, then do a simple CopyToAsync from the source stream.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("YourAzureStorageConnectionString");
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference("YourShareName");
CloudFileDirectory root = share.GetRootDirectoryReference();
CloudFile file = root.GetFileReference("TheFileName");

// fileMetadata.FileSize is the expected content length; fileContent is the source stream
using (CloudFileStream fileWriteStream = await file.OpenWriteAsync(fileMetadata.FileSize, new AccessCondition(),
    new FileRequestOptions { StoreFileContentMD5 = true },
    new OperationContext()))
{
    await fileContent.CopyToAsync(fileWriteStream, 128 * 1024);
}
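One design note on this approach: OpenWriteAsync takes the final size up front because the Azure File service allocates the file at that length, so the content length (fileMetadata.FileSize above) must be known before streaming begins; the 128 * 1024 passed to CopyToAsync is just the copy buffer size.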