I'm building a simple .NET MailKit IMAP client. Rather than pulling emails again and again from the IMAP server, is it possible to store the entire MailKit MIME message (in full, including attachments) as a byte array? If so, how?
Then I could write it to MySql or a file and reuse it for testing code changes.
As Lucas points out, you can use the MimeMessage.WriteTo() method to write the message to either a file name or to a stream (such as a MemoryStream).
If you want the message as a byte array in order to save it to an SQL database, you could do this:
using (var memory = new MemoryStream ()) {
    message.WriteTo (memory);

    var blob = memory.ToArray ();

    // now save the blob to the database
}
To read it back from the database, you'd first read the blob as a byte[] and then do this:
using (var memory = new MemoryStream (blob, false)) {
    message = MimeMessage.Load (memory);
}
My SMTP server restricts the amount of data that can be sent per SmtpClient session, as well as some other constraints. For example, I want to send 10 messages but the server may impose a limit of 10 MB total. I would like to calculate the size of the messages so that I know when I need to reinitialize the server connection.
I am using the MailKit library for this effort.
I was considering writing the Message.Body, which would include the attachments, to a MemoryStream, but that seems like overkill just to get the size.
If I have an in-memory MimeMessage object, is there a method to determine its complete content length prior to sending?
------UPDATE------
If there were no native option, this was my proposed method:
using (var memory = new MemoryStream())
{
    await mailMessage.Body.WriteToAsync(memory);
    curMessageLength = Convert.ToBase64String(memory.ToArray()).Length;
}
What you want can be done like this:
// Make sure to prepare the message for sending before you call
// SmtpClient.Send/Async() so that you are getting an accurate size.
mailMessage.Prepare (EncodingConstraint.SevenBit);
using (var stream = new MimeKit.IO.MeasuringStream ())
{
    await mailMessage.WriteToAsync (stream);

    curMessageLength = stream.Length;
}
MimeKit has a convenient MeasuringStream so that you don't need to allocate a memory buffer just to get the size.
I'm pretty sure you also want to measure the entire message content (including the message headers).
I don't understand why you were base64 encoding the output message, but I doubt you need or want to do that.
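If it's useful, here's a hedged sketch of how that measurement could drive the reconnect logic; the 10 MB limit, host, port and credentials are placeholders, and client is an already-created MailKit SmtpClient:

const long MaxSessionBytes = 10L * 1024 * 1024; // example per-session limit
long sessionBytes = 0;

foreach (var message in messages) {
    // Prepare first so the measured size matches what will actually be sent.
    message.Prepare (EncodingConstraint.SevenBit);

    long size;
    using (var stream = new MimeKit.IO.MeasuringStream ()) {
        await message.WriteToAsync (stream);
        size = stream.Length;
    }

    // Start a fresh session if this message would push us over the limit.
    if (sessionBytes > 0 && sessionBytes + size > MaxSessionBytes) {
        await client.DisconnectAsync (true);
        await client.ConnectAsync (host, port, SecureSocketOptions.StartTls);
        await client.AuthenticateAsync (user, password);
        sessionBytes = 0;
    }

    await client.SendAsync (message);
    sessionBytes += size;
}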
I have the following request flow where the customer can request to download a CSV file from the server. The issue is that the blob file is too large and the customer has to wait a long time before the actual download starts (the customer thinks there is some issue and closes the browser). How can the download be made more efficient using streams?
The current request sequence is as follows:
1. The client clicks the download button in the browser.
2. The backend receives the request.
3. The backend server downloads the blob from the Azure Storage account.
4. Some custom processing needs to be done on the data.
5. Once the processing is completed, the response is sent back to the client.
Now the issue is that when using the DownloadTo(Stream) method of BlobBaseClient, the file is downloaded entirely into memory before I can do anything with it.
How can I download the blob file in chunks, do the processing and start sending it to the customer?
Part of Download Controller:
var contentDisposition = new ContentDispositionHeaderValue("attachment")
{
    FileName = "customer-file.csv",
    CreationDate = DateTimeOffset.UtcNow
};
Response.Headers.Add("Content-Disposition", contentDisposition.ToString());

var result = blobService.DownloadAndProcessContent();

foreach (var line in result)
{
    yield return line;
}

Response.BodyWriter.FlushAsync();
Part of DownloadAndProcessContent Function:
var stream = new MemoryStream();
var blob = container.GetAppendBlobClient(blobName);
blob.DownloadTo(stream);

// Rewind so the reader starts at the beginning of the downloaded data.
stream.Position = 0;

// Processing is done on the blob data.
var streamReader = new StreamReader(stream);
while (!streamReader.EndOfStream)
{
    string currentLine = streamReader.ReadLine();

    // Process the line.
    string processDataLine = ProcessData(currentLine);
    yield return processDataLine;
}
Did you consider using the built-in OpenRead method so you can apply the StreamReader directly to the blob stream, without needing a MemoryStream in the middle? That should let you process line by line as you do in the loop.
Also note that it's recommended to take the async/await approach all the way through: an async controller scales much better because it doesn't block on I/O and turn the .NET thread pool into a bottleneck for handling concurrent requests to your API.
This answer doesn't address returning an HTTP response with streaming; that's separate from streaming a downloaded blob.
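As a rough sketch of the blob side with the v12 SDK (container, blobName and ProcessData come from your code; making the method an async iterator is just one way to feed an async controller):

public async IAsyncEnumerable<string> DownloadAndProcessContentAsync()
{
    var blob = container.GetAppendBlobClient(blobName);

    // OpenReadAsync returns a stream backed by ranged reads,
    // so the whole blob is never buffered in memory at once.
    using var blobStream = await blob.OpenReadAsync();
    using var reader = new StreamReader(blobStream);

    string line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        yield return ProcessData(line);
    }
}

The controller can then await foreach over the result and write each line to the response body, flushing periodically, so the browser starts receiving data almost immediately.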
Is there a way to create a stream object directly against an Azure blob (or block storage blob)?
i.e.

var s = new AzureStream(blockObject);
ms.CopyTo(s);
s.Position = 200;
ms.CopyTo(s);
s.Read(...);
This would allow for some awesome interactions, such as storing database indices in an Azure blob without needing to pull them down locally.
Not sure if this answers your question, but you can read a range of bytes from a blob. When using the REST API directly, you can specify the bytes you want to read in either the Range or x-ms-range header.
When using the C# SDK, you can use the DownloadRangeToStream method, something like:
using (var ms = new MemoryStream())
{
    long offset = 200;
    long bytesToRead = 1024;

    // Download only the requested byte range into the stream.
    blob.DownloadRangeToStream(ms, offset, bytesToRead);

    // Rewind before consuming the downloaded bytes.
    ms.Position = 0;
}
If your question is "can I use streams with Azure blobs so that I never have to hold the entire blob in memory at any point in time?", then the answer is absolutely yes.
For example, when reading block blobs (as per this related answer), blobs can be accessed as a stream handle with methods such as CloudBlob.OpenReadAsync. The default buffer size is 4 MB, but it can be adjusted via properties like StreamMinimumReadSizeInBytes. Here we copy the blob stream to another open output stream:
using (var stream = await myBlockBlob.OpenReadAsync(cancellationToken))
{
    await stream.CopyToAsync(outputStream);
}
Similarly, you can write a stream directly into Blob Storage:
await blockBlob.UploadFromStreamAsync(streamToSave, cancellationToken);
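And if you specifically want a writable stream handle (closer to the AzureStream idea in the question) rather than a one-shot upload, CloudBlockBlob also exposes OpenWriteAsync; a minimal sketch, where sourceStream is whatever you want to persist:

// Disposing the returned CloudBlobStream flushes and commits the uploaded blocks.
using (var blobStream = await blockBlob.OpenWriteAsync(cancellationToken))
{
    await sourceStream.CopyToAsync(blobStream);
}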
My Azure Function receives large video files and images and stores them in an Azure blob. The client API is sending data in chunks to my Azure HTTP trigger function. Do I have to do something at the receiving end to improve performance, such as receiving the data in chunks?
Bruce, actually the client code is being developed by another team. Right now I am testing it with Postman and getting the files from a multipart HTTP request.
foreach (HttpContent ctnt in provider.Contents)
{
    var dataStream = await ctnt.ReadAsStreamAsync();

    if (ctnt.Headers.ContentDisposition.Name.Trim().Replace("\"", "") == "file")
    {
        byte[] ImageBytes = ReadFully(dataStream);
        var fileName = WebUtility.UrlDecode(ctnt.Headers.ContentDisposition.FileName);
    }
}
ReadFully Function
public static byte[] ReadFully(Stream input)
{
    using (MemoryStream ms = new MemoryStream())
    {
        // Note: this buffers the entire uploaded content in memory.
        input.CopyTo(ms);
        return ms.ToArray();
    }
}
As the documentation for BlobRequestOptions.ParallelOperationThreadCount states:
Gets or sets the number of blocks that may be simultaneously uploaded.

Remarks: When using the UploadFrom* methods on a blob, the blob will be broken up into blocks. Setting this value limits the number of outstanding I/O "put block" requests that the library will have in-flight at a given time. Default is 1 (no parallelism). Setting this value higher may result in faster blob uploads, depending on the network between the client and the Azure Storage service. If blobs are small (less than 256 MB), keeping this value equal to 1 is advised.
Based on that, I assume you could explicitly set ParallelOperationThreadCount for faster blob uploading.
var requestOption = new BlobRequestOptions()
{
    // The number of blocks that may be simultaneously uploaded.
    ParallelOperationThreadCount = 5
};

// Upload a blob from the local file system.
await blockBlob.UploadFromFileAsync("{your-file-path}", null, requestOption, null);

// Upload a blob from a stream.
await blockBlob.UploadFromStreamAsync({stream-for-upload}, null, requestOption, null);
foreach (HttpContent ctnt in provider.Contents)
Based on your code, I assume that you retrieve the provider instance as follows:
MultipartMemoryStreamProvider provider = await request.Content.ReadAsMultipartAsync();
At this time, you could use the following code for uploading your new blob:
var blobname = ctnt.Headers.ContentDisposition.FileName.Trim('"');
CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobname);

// Set the content-type for the current blob.
blockBlob.Properties.ContentType = ctnt.Headers.ContentType.MediaType;

await blockBlob.UploadFromStreamAsync(await ctnt.ReadAsStreamAsync(), null, requestOption, null);
I would prefer to use MultipartFormDataStreamProvider, which stores the uploaded files from the client on the file system, instead of MultipartMemoryStreamProvider, which uses server memory to temporarily hold the data sent from the client. For the MultipartFormDataStreamProvider approach, you could follow this similar issue. Moreover, I would prefer to use the Azure Storage Client Library with my Azure Function; you could follow Get started with Azure Blob storage using .NET.
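A rough sketch of the MultipartFormDataStreamProvider route (the temp folder is an assumption, and container is the same blob container as above); each part is buffered to disk instead of server memory:

var root = Path.Combine(Path.GetTempPath(), "uploads");
Directory.CreateDirectory(root);

var provider = new MultipartFormDataStreamProvider(root);
await request.Content.ReadAsMultipartAsync(provider);

foreach (MultipartFileData fileData in provider.FileData)
{
    var blobName = fileData.Headers.ContentDisposition.FileName.Trim('"');
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
    blockBlob.Properties.ContentType = fileData.Headers.ContentType?.MediaType;

    // Stream the temp file from disk into blob storage, then clean up.
    using (var fileStream = File.OpenRead(fileData.LocalFileName))
    {
        await blockBlob.UploadFromStreamAsync(fileStream);
    }

    File.Delete(fileData.LocalFileName);
}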
UPDATE:
Moreover, you could follow this tutorial about breaking a large file into small chunks, uploading them on the client side, and then merging them back together on the server side.
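On the server side, that merge step usually maps onto the block blob API itself; here is a hedged sketch, assuming the chunks arrive as an ordered sequence of streams:

var blockIds = new List<string>();
int blockNumber = 0;

foreach (Stream chunk in chunks)
{
    // Block IDs must be base64-encoded and all of the same length.
    var blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockNumber.ToString("d6")));
    blockNumber++;

    await blockBlob.PutBlockAsync(blockId, chunk, null);
    blockIds.Add(blockId);
}

// Committing the block list assembles the final blob from the uploaded chunks.
await blockBlob.PutBlockListAsync(blockIds);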
I want to store files in my SQL Server database from C#, which I have done without a problem.
This is my code:
byte[] file;
using (var stream = new FileStream(letter.FilePath, FileMode.Open, FileAccess.Read))
{
    using (var reader = new BinaryReader(stream))
    {
        file = reader.ReadBytes((int)stream.Length);
        letter.ltr_Image = file;
    }
}
LetterDB letterDB = new LetterDB();
id = letterDB.LetterActions(letter);
The SQL insert is done in the LetterActions module. But I want to know: in order to reduce the size of the database (which grows daily), is there any way to compress the files before storing them in the database?
Yes, you can zip your files before storing them in the database, using the ZipFile class. Take a look here: https://msdn.microsoft.com/en-us/library/system.io.compression.zipfile(v=vs.110).aspx
Plenty of sample code is out there too. See here: http://imar.spaanjaars.com/414/storing-uploaded-files-in-a-database-or-in-the-file-system-with-aspnet-20
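Since you ultimately need a byte[] for letter.ltr_Image, a ZipArchive over a MemoryStream (same System.IO.Compression namespace) may be more convenient than writing a .zip file to disk first; a rough sketch:

byte[] compressed;

using (var ms = new MemoryStream())
{
    // leaveOpen: true so the MemoryStream survives disposing the archive,
    // which is what flushes the zip data into it.
    using (var archive = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
    {
        archive.CreateEntryFromFile(letter.FilePath, Path.GetFileName(letter.FilePath),
                                    CompressionLevel.Optimal);
    }

    compressed = ms.ToArray();
}

letter.ltr_Image = compressed;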
You can compress the file like this, then insert the compressed stream into the DB; when you read it back, you will need to decompress it.
If you really need to store the file in the DB, I suggest you compress and decompress it on the client side.
A better way to handle files is to store them on disk and only keep the file path in the DB; when a client needs a file, use the path to fetch it.
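If a zip archive is more than you need, plain GZipStream compression of the byte array also works; a sketch of the compress/decompress pair (the method names are just illustrative):

public static byte[] Compress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        // The GZipStream must be disposed before ToArray() so the final block is flushed.
        using (var gzip = new GZipStream(output, CompressionLevel.Optimal))
        {
            gzip.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }
}

public static byte[] Decompress(byte[] compressed)
{
    using (var input = new MemoryStream(compressed))
    using (var gzip = new GZipStream(input, CompressionMode.Decompress))
    using (var output = new MemoryStream())
    {
        gzip.CopyTo(output);
        return output.ToArray();
    }
}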