Get attachment and save in S3 bucket with MailKit IMAP - C#

I need to get the attachments from the email and save them. The whole process of reading the email and saving works as I need it to, but when I try to get the attachments to save to the AWS S3 bucket, it throws an error when it tries to read the content type.
Basically, what I need to do is:
Get the attachment.
Create a FormFile for my S3 save function to save the attachments (the S3 save function is already working).
I am having a problem just building the FormFile.
...
var attachmentsToS3 = new FormFileCollection();
foreach (var attachment in message.Attachments)
{
var part = attachment as MimePart;
var stream = new MemoryStream();
await part.Content.DecodeToAsync(stream);
var fileLength = stream.Length;
var formFile = new FormFile(stream, 0, fileLength, "file[]", attachment.ContentDisposition.FileName);
attachmentsToS3.Add(formFile);
}
await _atendimentoServices.SaveAnexoAtendimentoToS3( attachmentsToS3, idAcompanhamento, requestApi);
...
UPDATE
Changes made based on bruno's answer and it is now working.
var attachmentsToS3 = new FormFileCollection();
foreach (MimeEntity attachment in message.Attachments)
{
var memory = new MemoryStream();
if (attachment is MimePart part)
await part.Content.DecodeToAsync(memory);
else
await ((MessagePart)attachment).Message.WriteToAsync(memory);
var bytes = memory.ToArray();
var contentType = MimeTypes.GetMimeType(attachment.ContentType.MimeType);
var formFile = new FormFile(memory, 0, bytes.Length, "file[]", attachment.ContentDisposition.FileName)
{
Headers = new HeaderDictionary(),
ContentType = contentType
};
attachmentsToS3.Add(formFile);
}
await _atendimentoServices.SaveAnexoAtendimentoToS3(attachmentsToS3, idAcompanhamento, requestApi);
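For anyone wanting to see the other side of the call, a minimal sketch of what an S3 save routine consuming this FormFileCollection could look like; this is an illustration, not the actual SaveAnexoAtendimentoToS3, and the bucket name and method name are hypothetical (assumes the AWSSDK.S3 package):
// Sketch only: hypothetical helper, assumes the AWSSDK.S3 package.
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Transfer;
using Microsoft.AspNetCore.Http;

public async Task SaveFilesToS3Async(IFormFileCollection files, IAmazonS3 s3Client)
{
    var transfer = new TransferUtility(s3Client);
    foreach (var file in files)
    {
        using (var stream = file.OpenReadStream())
        {
            // The key layout is an assumption; adjust to your bucket conventions.
            await transfer.UploadAsync(stream, "my-bucket", file.FileName);
        }
    }
}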

Good afternoon Gabriel. Analysing your code, I noticed some parts were missing.
First, you need to treat your attachment like this:
if (attachment is MimePart part) await part.Content.DecodeToAsync(memory);
else await ((MessagePart)attachment).Message.WriteToAsync(memory);
With that, you guarantee your stream will be populated correctly whether the attachment is a MimePart or not.
The other missing part is the headers and the content type on your FormFile; try this:
var formFile = new FormFile(memory, 0, bytes.Length, "file[]", attachment.ContentDisposition.FileName)
{
Headers = new HeaderDictionary(),
ContentType = MimeTypes.GetMimeType(attachment.ContentType.MimeType)
};
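(A note on why the Headers initialization matters: FormFile stores its ContentType inside the Headers dictionary, so assigning a content type without first setting Headers = new HeaderDictionary() would throw a NullReferenceException.)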

Related

Amazon S3 does not release the file after the upload

I have a WCF service here to upload files to the Amazon S3 server. After a successful upload, I need to delete the file from my local path, but when I try to delete the file I get an error that says "The process cannot access the file because it is being used by another process". Sharing my code snippets below.
var putRequest = new PutObjectRequest
{
BucketName = System.Configuration.ConfigurationManager.AppSettings["S3Bucket"]
.ToString(),
Key = keyName,
FilePath = path,
ContentType = "application/pdf"
};
client = new AmazonS3Client(bucketRegion);
PutObjectResponse response = await client.PutObjectAsync(putRequest);
putRequest = null;
client.Dispose();
File.Delete(path);
If anyone knows about the issue, please update.
There might be a timing issue here, so you might want to try to close the stream explicitly.
Do note, I am not sure about this; if I am mistaken I'll remove it, but it was too long for a comment.
using (var fileStream = File.OpenRead(path))
{
var putRequest = new PutObjectRequest
{
BucketName = System.Configuration.ConfigurationManager.AppSettings["S3Bucket"]
.ToString(),
Key = keyName,
InputStream = fileStream,
ContentType = "application/pdf",
AutoCloseStream = false,
};
using (var c = new AmazonS3Client(bucketRegion))
{
PutObjectResponse response = await c.PutObjectAsync(putRequest);
}
} //filestream should be closed here, if not: call fileStream.Close()
File.Delete(path);
More info on the properties: https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Model_PutObjectRequest.htm
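A note on the design choice above: AutoCloseStream defaults to true, in which case the SDK closes the input stream for you once the content has been read; setting it to false, as here, leaves the using block in control of the stream's lifetime.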

Send mail with PDFDocument without filename in C#

I have the following code:
How can I send the PDFDocument "doc" without saving it (without a filename)?
To create an attachment object I need a filename (path), but I don't have one.
Just choose a name for the file:
var stream = new MemoryStream();
doc.WriteToStream(stream);
stream.Position = 0;
var contentType = new ContentType(MediaTypeNames.Application.Pdf)
{
Name ="withoutfilename.pdf";
};
var attachment = new Attachment(stream, contentType);
mailMsg.Attachments.Add(attachment);
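One hedged usage note: the Attachment only reads from the MemoryStream when the message is actually serialized and sent, so keep the stream open until then. A minimal sketch, with a placeholder SMTP host:
// Sketch only; the SMTP host is a placeholder.
using (var smtp = new SmtpClient("smtp.example.com"))
{
    smtp.Send(mailMsg); // the attachment stream is read at this point
}
stream.Dispose(); // safe to dispose only after the message has been sent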

How to upload the Stream from an HttpContent result to Azure File Storage

I am attempting to download a list of files from URLs stored in my database, and then upload them to my Azure File Storage account. I am successfully downloading the files and can turn them into files in my local storage or convert them to text and upload that. However, I lose data when converting something like a PDF to text, and I do not want to store the files on the Azure app that this endpoint is hosted on, as I do not need to manipulate the files in any way.
I have attempted to upload the files from the Stream I get from the HttpContent object using the UploadFromStream method on the CloudFile. Whenever this command is run I get an InvalidOperationException with the message "Operation is not valid due to the current state of the object."
I've tried converting the original Stream to a MemoryStream as well, but this just writes a blank file to the File Storage account, even if I set the position to the beginning of the MemoryStream. My code is below; if anyone could point out what I am missing to make this work, I would appreciate it.
public DownloadFileResponse DownloadFile(FileLink fileLink)
{
string fileName = string.Format("{0}{1}{2}", fileLink.ExpectedFileName, ".", fileLink.ExpectedFileType);
HttpStatusCode status;
string hash = "";
using (var client = new HttpClient())
{
client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
var request = new HttpRequestMessage(HttpMethod.Get, fileLink.ExpectedURL);
var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
status = response.StatusCode;
if (status == HttpStatusCode.OK)
{
var httpStream = response.Content.ReadAsStreamAsync().Result;
fileStorage.WriteFile(fileLink.ExpectedFileType, fileName, httpStream);
hash = HashGenerator.GetMD5HashFromStream(httpStream);
}
}
return new DownloadFileResponse(status, fileName, hash);
}
public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
var options = SetOptions();
var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
newFile.UploadFromStream(fileStream, options: options);
}
public FileRequestOptions SetOptions()
{
FileRequestOptions options = new FileRequestOptions();
options.ServerTimeout = TimeSpan.FromSeconds(10);
options.RetryPolicy = new NoRetry();
return options;
}
public CloudFile GetTargetCloudFile(string targetDirectory, string targetFilePath)
{
if (!shareConnector.share.Exists())
{
throw new Exception("Cannot access Azure File Storage share");
}
CloudFileDirectory rootDirectory = shareConnector.share.GetRootDirectoryReference();
CloudFileDirectory directory = rootDirectory.GetDirectoryReference(targetDirectory);
if (!directory.Exists())
{
throw new Exception("Target Directory does not exist");
}
CloudFile newFile = directory.GetFileReference(targetFilePath);
return newFile;
}
I had the same problem; the only way it worked was by reading the incoming stream (in your case, httpStream in the DownloadFile(FileLink fileLink) method) into a byte array and using UploadFromByteArray(byte[] buffer, int index, int count) instead of UploadFromStream.
So your WriteFile method will look like:
public void WriteFile(string targetDirectory, string targetFilePath, Stream fileStream)
{
var options = SetOptions();
var newFile = GetTargetCloudFile(targetDirectory, targetFilePath);
const int bufferLength = 600; // buffer size is just an example
byte[] buffer = new byte[bufferLength]; // buffer to read from the stream
List<byte> byteArrayFile = new List<byte>(); // all of your file will end up here
int count = 0;
try
{
while ((count = fileStream.Read(buffer, 0, bufferLength)) > 0)
{
byteArrayFile.AddRange(buffer.Take(count)); // only the bytes actually read (requires System.Linq)
}
fileStream.Close();
}
catch (Exception ex)
{
throw; // you need to change this
}
newFile.UploadFromByteArray(byteArrayFile.ToArray(), 0, byteArrayFile.Count);
}
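As a side note, a simpler way to do the same buffering (a sketch, not part of the original answer) is to let MemoryStream accumulate the bytes instead of a manual read loop:
// Alternative sketch: buffer via MemoryStream, then upload the byte array.
using (var ms = new MemoryStream())
{
    fileStream.CopyTo(ms);
    byte[] bytes = ms.ToArray();
    newFile.UploadFromByteArray(bytes, 0, bytes.Length);
}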
According to your description and code, I suggest you use Stream.CopyTo to copy the stream into a local MemoryStream first, then upload the MemoryStream to Azure File Storage.
For more details, refer to the code below.
I just changed the DownloadFile method to test it.
HttpStatusCode status;
using (var client = new HttpClient())
{
client.Timeout = TimeSpan.FromSeconds(10); // candidate for .config setting
// client.DefaultRequestHeaders.Add("User-Agent", USER_AGENT);
//here I use my blob file to test it
var request = new HttpRequestMessage(HttpMethod.Get, "https://xxxxxxxxxx.blob.core.windows.net/media/secondblobtest-eypt.txt");
var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
var response = sendTask.Result; // not ensuring success here, going to handle error codes without exceptions
status = response.StatusCode;
if (status == HttpStatusCode.OK)
{
MemoryStream ms = new MemoryStream();
var httpStream = response.Content.ReadAsStreamAsync().Result;
httpStream.CopyTo(ms);
ms.Position = 0;
WriteFile("aaa", "testaa", ms);
// hash = HashGenerator.GetMD5HashFromStream(httpStream);
}
}
I had a similar problem and found out that the UploadFromStream method only works with buffered streams. Nevertheless, I was able to successfully upload files to Azure Storage by using a MemoryStream. I don't think this is a very good solution, as you use up memory by copying the content of the file stream into memory before handing it to the Azure stream. What I came up with is a way of writing directly to an Azure stream, by instead using the OpenWriteAsync method to create the stream and then a simple CopyToAsync from the source stream.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse( "YourAzureStorageConnectionString" );
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference( "YourShareName" );
CloudFileDirectory root = share.GetRootDirectoryReference();
CloudFile file = root.GetFileReference( "TheFileName" );
using (CloudFileStream fileWriteStream = await file.OpenWriteAsync( fileMetadata.FileSize, new AccessCondition(),
new FileRequestOptions { StoreFileContentMD5 = true },
new OperationContext() ))
{
await fileContent.CopyToAsync( fileWriteStream, 128 * 1024 );
}
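Note that this overload of OpenWriteAsync takes the final size up front (fileMetadata.FileSize above), since Azure Files pre-allocates the file, so the source length must be known before the copy starts.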

Angular/Web API 2 returns invalid or corrupt file with StreamContent or ByteArrayContent

I'm trying to return a file in a ASP.NET Web API Controller. This file is a dynamically-generated PDF saved in a MemoryStream.
The client (browser) receives the file successfully, but when I open the file, I see that all the pages are totally blank.
The thing is, if I take the same MemoryStream and write it to a file, the file on disk is displayed correctly, so I assume the problem is related to the file transfer over the web.
My controller looks like this:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
MemoryStream memStream = new MemoryStream();
PdfExporter.Instance.Generate(memStream);
memStream.Position = 0;
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
result.Content = new ByteArrayContent(memStream.ToArray()); //OR: new StreamContent(memStream);
return result;
}
Just to try, if I write the stream to disk, it's displayed correctly:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
MemoryStream memStream = new MemoryStream();
PdfExporter.Instance.Generate(memStream);
memStream.Position = 0;
using (var fs = new FileStream("C:\\Temp\\test.pdf", FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
memStream.CopyTo(fs);
}
return null;
}
The differences are:
PDF saved on disk: 34KB
PDF transferred via web: 60KB (!)
If I compare both files' contents, the main differences are:
[Screenshot: file differences]
On the left is the PDF transferred via web; on the right, the PDF saved to disk.
Is there something wrong with my code?
Maybe something related to encodings?
Thanks!
Well, it turned out to be a client (browser) problem, not a server problem. I'm using AngularJS on the frontend, so when the response was received, Angular automatically converted it to a JavaScript string. In that conversion, the binary contents of the file were somehow altered...
Basically it was solved by telling Angular not to convert the response to a string:
$http.get(url, { responseType: 'arraybuffer' })
.then(function(response) {
var dataBlob = new Blob([response.data], { type: 'application/pdf'});
FileSaver.saveAs(dataBlob, 'myFile.pdf');
});
And then saving the response as a file, helped by the Angular File Saver service.
I guess you should set ContentDisposition and ContentType like this:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
MemoryStream memStream = new MemoryStream();
PdfExporter.Instance.Generate(memStream);
var result = new HttpResponseMessage(HttpStatusCode.OK)
{
Content = new ByteArrayContent(memStream.ToArray())
};
//this line
result.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
{
FileName = "YourName.pdf"
};
//and this line
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
return result;
}
Try this:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
MemoryStream memStream = new MemoryStream();
PdfExporter.Instance.Generate(memStream);
//get the bytes; ToArray, unlike GetBuffer, returns only the bytes actually written
var buffer = memStream.ToArray();
//content length for header
var contentLength = buffer.Length;
var statuscode = HttpStatusCode.OK;
var response = Request.CreateResponse(statuscode);
response.Content = new StreamContent(new MemoryStream(buffer));
response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
response.Content.Headers.ContentLength = contentLength;
ContentDispositionHeaderValue contentDisposition = null;
if (ContentDispositionHeaderValue.TryParse("inline; filename=my_filename.pdf", out contentDisposition)) {
response.Content.Headers.ContentDisposition = contentDisposition;
}
return response;
}
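A small note on the difference between the two answers: "inline" asks the browser to display the PDF in place, while "attachment" (as in the previous answer) forces a download; likewise, application/pdf lets the browser pick a PDF viewer, whereas application/octet-stream is treated as an opaque download.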

How to convert base64 value from a database to a stream with C#

I have some base64 strings stored in a database (they are actually images) that need to be uploaded to a third party. I would like to upload them from memory rather than saving each as an image and then posting it to the server. Does anyone here know how to convert base64 to a stream?
How can I change this code:
var fileInfo = new FileInfo(fullFilePath);
var fileContent = new StreamContent(fileInfo.OpenRead());
to fill the StreamContent object with a base64 interpretation of an image file instead.
private static StreamContent FileMultiPartBody(string fullFilePath)
{
var fileInfo = new FileInfo(fullFilePath);
var fileContent = new StreamContent(fileInfo.OpenRead());
// Manually wrap the string values in escaped quotes.
fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data")
{
FileName = string.Format("\"{0}\"", fileInfo.Name),
Name = "\"name\"",
};
fileContent.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
return fileContent;
}
You'll want to do something like this, once you've gotten the string from the database:
var bytes = Convert.FromBase64String(base64encodedstring);
var contents = new StreamContent(new MemoryStream(bytes));
// Whatever else needs to be done here.
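Tying that back to the question's helper, here is a hedged sketch of a base64 variant of FileMultiPartBody; the fileName parameter is an assumption, since a byte array carries no intrinsic name:
// Sketch: hypothetical base64 variant of the question's helper.
private static StreamContent Base64MultiPartBody(string base64, string fileName)
{
    var bytes = Convert.FromBase64String(base64);
    var fileContent = new StreamContent(new MemoryStream(bytes));
    // Manually wrap the string values in escaped quotes, as in the original helper.
    fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data")
    {
        FileName = string.Format("\"{0}\"", fileName),
        Name = "\"name\"",
    };
    fileContent.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
    return fileContent;
}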
Just as an alternative approach that works well with large streams (it avoids allocating the intermediate byte array):
// using System.Security.Cryptography
// and assumes the input stream is b64Stream
var stream = new CryptoStream(b64Stream, new FromBase64Transform(), CryptoStreamMode.Read);
return new StreamContent(stream);
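This decodes on demand as the StreamContent is read, and FromBase64Transform ignores embedded whitespace by default, so it also copes with line-wrapped base64.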
var stream = new MemoryStream(Convert.FromBase64String(base64));
