Using CryptoStream with StreamContent in C#

I would like to read an image from file or blob storage, Base64-encode it as a stream, and then pass that stream to StreamContent. The following code times out:
[HttpGet, Route("{id}", Name = "GetImage")]
public HttpResponseMessage GetImage([FromUri] ImageRequest request)
{
var filePath = HostingEnvironment.MapPath("~/Areas/API/Images/Mr-Bean-Drivers-License.jpg");
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
var stream = new FileStream(filePath, FileMode.Open);
var cryptoStream = new CryptoStream(stream, new ToBase64Transform(), CryptoStreamMode.Write);
result.Content = new StreamContent(cryptoStream);
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
return result;
}
I am able to get the following code to work by not keeping the file as a stream and instead reading it all into memory, but I would like to avoid that.
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
using (var fileStream = new FileStream(filePath, FileMode.Open))
{
using (var image = Image.FromStream(fileStream))
{
var memoryStream = new MemoryStream();
image.Save(memoryStream, image.RawFormat);
byte[] imageBytes = memoryStream.ToArray();
var base64String = Convert.ToBase64String(imageBytes);
result.Content = new StringContent(base64String);
result.Content.Headers.ContentType = new MediaTypeHeaderValue("text/plain");
return result;
}
}

The problem here is that you're passing CryptoStreamMode.Write to the CryptoStream constructor when you should be passing CryptoStreamMode.Read, because the CryptoStream is going to be read from as the HttpResponseMessage is returned.
For more details about this, see Figolu's great explanation about the various usages of CryptoStream in this answer.
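For reference, a sketch of the corrected action, identical to the question's code apart from the mode:
[HttpGet, Route("{id}", Name = "GetImage")]
public HttpResponseMessage GetImage([FromUri] ImageRequest request)
{
    var filePath = HostingEnvironment.MapPath("~/Areas/API/Images/Mr-Bean-Drivers-License.jpg");
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    var stream = new FileStream(filePath, FileMode.Open);
    // Read mode: bytes are pulled through ToBase64Transform as StreamContent reads them out
    var cryptoStream = new CryptoStream(stream, new ToBase64Transform(), CryptoStreamMode.Read);
    result.Content = new StreamContent(cryptoStream);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return result;
}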


HTTP POST multipart/form-data using HttpClient

When I POST using HttpClient with the code below:
using var formContent = new MultipartFormDataContent("NKdKd9Yk");
using var stream = new MemoryStream();
file.CopyTo(stream);
var fileBytes = stream.ToArray();
formContent.Headers.ContentType.MediaType = "multipart/form-data";
formContent.Add(new StreamContent(stream), "file", fileName);
var response = await httpClient.PostAsync(GetDocumentUpdateRelativeUrl(), formContent);
Here, file is of type IFormFile.
On the API side, I retrieve the file as follows:
var base64str= "";
using (var ms = new MemoryStream())
{
request.file.CopyTo(ms);
var fileBytes = ms.ToArray();
base64str= Convert.ToBase64String(fileBytes);
// act on the Base64 data
}
I get 0 bytes. My question is: what's wrong with this approach?
But if I use the code below, the API works and I receive what I posted.
using var formContent = new MultipartFormDataContent("NKdKd9Yk");
using var stream = new MemoryStream();
file.CopyTo(stream);
var fileBytes = stream.ToArray();
formContent.Headers.ContentType.MediaType = "multipart/form-data";
formContent.Add(new StreamContent(new MemoryStream(fileBytes)), "file", fileName);
The difference is how I add the stream content:
formContent.Add(new StreamContent(stream), "file", fileName);
vs
formContent.Add(new StreamContent(new MemoryStream(fileBytes)), "file", fileName);
Why doesn't the first approach work when the second one does?
You need to add stream.Seek(0, SeekOrigin.Begin); to jump back to the beginning of the MemoryStream: after CopyTo, the position is at the end of the stream, so StreamContent has nothing left to read. You should also use CopyToAsync.
In the second version, you created a fresh MemoryStream from the byte[] array, which is positioned at 0 anyway.
using var formContent = new MultipartFormDataContent("NKdKd9Yk");
using var stream = new MemoryStream();
await file.CopyToAsync(stream);
stream.Seek(0, SeekOrigin.Begin); // rewind so StreamContent reads from the beginning
formContent.Headers.ContentType.MediaType = "multipart/form-data";
formContent.Add(new StreamContent(stream), "file", fileName);
using var response = await httpClient.PostAsync(GetDocumentUpdateRelativeUrl(), formContent);
Although to be honest, the MemoryStream seems entirely unnecessary here. Just pass a Stream from file directly:
using var formContent = new MultipartFormDataContent("NKdKd9Yk");
formContent.Headers.ContentType.MediaType = "multipart/form-data";
using var stream = file.OpenReadStream();
formContent.Add(new StreamContent(stream), "file", fileName);
using var response = await httpClient.PostAsync(GetDocumentUpdateRelativeUrl(), formContent);
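The stream returned by OpenReadStream() is freshly opened and positioned at the start, so there is nothing to rewind, and the upload is never copied into an intermediate MemoryStream.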

Value already read, or no value when trying to read from a Stream

I've been trying this for a long time, but it keeps giving me an error. I have an array of bytes that should represent an NBT document, and I would like to convert it into a C# object using a library: fNbt.
Here is my code:
byte[] buffer = Convert.FromBase64String(value);
byte[] decompressed;
using (var inputStream = new MemoryStream(buffer))
{
using var outputStream = new MemoryStream();
using (var gzip = new GZipStream(inputStream, CompressionMode.Decompress, leaveOpen: true))
{
gzip.CopyTo(outputStream);
}
fNbt.NbtReader reader = new fNbt.NbtReader(outputStream, true);
var output = reader.ReadValueAs<AuctionItem>(); //Error: Value already read, or no value to read.
return output;
}
When I try this, it works:
decompressed = outputStream.ToArray();
outputStream.Seek(0, SeekOrigin.Begin);
outputStream.Read(new byte[1000], 0, decompressed.Count() - 1);
But when I try this, it doesn't:
outputStream.Seek(0, SeekOrigin.Begin);
fNbt.NbtReader reader = new fNbt.NbtReader(outputStream, true);
reader.ReadValueAs<AuctionItem>();
NbtReader, like most stream readers, begins reading from the current position of whatever stream you give it. Since you've just finished writing to outputStream, that position is the stream's end, which means there's nothing left to read at that point.
The solution is to seek the outputStream back to the beginning before reading from it:
outputStream.Seek(0, SeekOrigin.Begin); // <-- seek to the beginning
// Do the read
fNbt.NbtReader reader = new fNbt.NbtReader(outputStream, true);
var output = reader.ReadValueAs<AuctionItem>(); // No error anymore
return output;
The solution is as follows: NbtReader.ReadValueAs does not consider an NbtCompound or NbtList a value. I made this little reader, but it is not done yet (I will update the code once it is done).
public static T ReadValueAs<T>(string value) where T: new()
{
byte[] buffer = Convert.FromBase64String(value);
using (var inputStream = new MemoryStream(buffer))
{
using var outputStream = new MemoryStream();
using (var gzip = new GZipStream(inputStream, CompressionMode.Decompress, leaveOpen: true))
{
gzip.CopyTo(outputStream);
}
outputStream.Seek(0, SeekOrigin.Begin);
return new EasyNbt.NbtReader(outputStream).ReadValueAs<T>();
}
}
This is the NbtReader:
private MemoryStream MemStream { get; set; }
public NbtReader(MemoryStream memStream)
{
MemStream = memStream;
}
public T ReadValueAs<T>() where T: new()
{
return ReadTagAs<T>(new fNbt.NbtReader(MemStream, true).ReadAsTag());
}
private T ReadTagAs<T>(fNbt.NbtTag nbtTag)
{
//Reads to the root and adds to T...
}
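Once the reader is complete, usage of the static helper above would be a one-liner, with value being the same Base64 string as in the question:
var output = ReadValueAs<AuctionItem>(value);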

How to write PDF to HttpResponseMessage using iText 7

I'm trying to generate a PDF and write it to the HTTP response using the iText 7 and iText7.pdfHtml libraries.
The HTML content for the PDF is stored in a StringBuilder object.
I'm not sure what the correct way to do this is, because as soon as I call HtmlConverter.ConvertToPdf, the MemoryStream is closed and I cannot access the bytes. I get the following exception:
System.ObjectDisposedException: Cannot access a closed Stream.
HttpResponseMessage httpResponseMessage = new HttpResponseMessage();
StringBuilder htmlText = new StringBuilder();
htmlText.Append("<html><body><h1>Hello World!</h1></body></html>");
using (MemoryStream memoryStream = new MemoryStream())
{
using (PdfWriter pdfWriter = new PdfWriter(memoryStream))
{
PdfDocument pdfDocument = new PdfDocument(pdfWriter);
Document document = new Document(pdfDocument);
string headerText = "my header";
string footerText = "my footer";
pdfDocument.AddEventHandler(PdfDocumentEvent.END_PAGE, new HeaderFooterEventHandler(document, headerText, footerText));
HtmlConverter.ConvertToPdf(htmlText.ToString(), pdfWriter);
memoryStream.Flush();
memoryStream.Seek(0, SeekOrigin.Begin);
byte[] bytes = new byte[memoryStream.Length];
memoryStream.Read(bytes, 0, (int)memoryStream.Length);
Stream stream = new MemoryStream(bytes);
httpResponseMessage.Content = new StreamContent(stream);
httpResponseMessage.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/pdf");
httpResponseMessage.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
FileName = "sample.pdf"
};
httpResponseMessage.StatusCode = HttpStatusCode.OK;
}//end using pdfwriter
}//end using memory stream
EDIT
Added PdfDocument and Document objects to manipulate the header/footer and new pages.
Make use of the fact that you have a MemoryStream and replace
memoryStream.Flush();
memoryStream.Seek(0, SeekOrigin.Begin);
byte[] bytes = new byte[memoryStream.Length];
memoryStream.Read(bytes, 0, (int)memoryStream.Length);
with
byte[] bytes = memoryStream.ToArray();
That method is documented to also work with closed memory streams.
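Applied to the question's snippet, the tail of the using block might then read as follows (a sketch; the header/footer wiring is unchanged and omitted here):
using (MemoryStream memoryStream = new MemoryStream())
{
    using (PdfWriter pdfWriter = new PdfWriter(memoryStream))
    {
        // ... header/footer setup as in the question ...
        HtmlConverter.ConvertToPdf(htmlText.ToString(), pdfWriter);
    } // ConvertToPdf / PdfWriter disposal closes memoryStream...

    byte[] bytes = memoryStream.ToArray(); // ...but ToArray works even on a closed MemoryStream
    httpResponseMessage.Content = new ByteArrayContent(bytes);
    httpResponseMessage.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/pdf");
}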

GZipStream does not decompress data correctly

I'm trying to compress data with GZipStream. The code is quite straightforward:
// Serialize
var ms = new MemoryStream();
ProtoBuf.Serializer.Serialize(ms, result);
ms.Seek(0, SeekOrigin.Begin);
// Compress
var ms2 = new MemoryStream();
GZipStream zipStream = new GZipStream(ms2, CompressionMode.Compress);
ms.CopyTo(zipStream);
zipStream.Flush();
// Test
ms2.Seek(0, SeekOrigin.Begin);
var ms3 = new MemoryStream();
var unzipStream = new GZipStream(ms2, CompressionMode.Decompress);
unzipStream.CopyTo(ms3);
System.Diagnostics.Debug.WriteLine($"{ms.Length} =? {ms3.Length}");
Results should be equal, but I'm getting:
244480 =? 191481
Is GZipStream unable to decompress a stream compressed by itself, or am I doing something wrong?
From the docs of GZipStream.Flush:
The current implementation of this method does not flush the internal buffer. The internal buffer is flushed when the object is disposed.
That fits in with not enough data being written to ms2. Try wrapping zipStream in a using block instead:
var ms2 = new MemoryStream();
// leaveOpen: true keeps ms2 usable after the GZipStream is disposed,
// since the test code still needs to read it back
using (GZipStream zipStream = new GZipStream(ms2, CompressionMode.Compress, leaveOpen: true))
{
ms.CopyTo(zipStream);
}
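Alternatively, skip leaveOpen and grab the bytes after disposal; MemoryStream.ToArray is documented to work even on a closed stream. A sketch using the question's streams:
// Compress
var ms2 = new MemoryStream();
using (var zipStream = new GZipStream(ms2, CompressionMode.Compress))
{
    ms.CopyTo(zipStream);
} // disposal flushes the remaining compressed data; ms2 is closed now

byte[] compressed = ms2.ToArray(); // still works on a closed MemoryStream

// Decompress from a fresh stream over the compressed bytes
var ms3 = new MemoryStream();
using (var unzipStream = new GZipStream(new MemoryStream(compressed), CompressionMode.Decompress))
{
    unzipStream.CopyTo(ms3);
}
// ms.Length == ms3.Length now holds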

Angular/Web API 2 returns invalid or corrupt file with StreamContent or ByteArrayContent

I'm trying to return a file in an ASP.NET Web API controller. This file is a dynamically generated PDF saved in a MemoryStream.
The client (browser) receives the file successfully, but when I open the file, I see that all the pages are totally blank.
The thing is, if I take the same MemoryStream and write it to a file, the file on disk is displayed correctly, so I assume the problem is related to the file transfer over the web.
My controller looks like this:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
MemoryStream memStream = new MemoryStream();
PdfExporter.Instance.Generate(memStream);
memStream.Position = 0;
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
result.Content = new ByteArrayContent(memStream.ToArray()); //OR: new StreamContent(memStream);
return result;
}
Just to try, if I write the stream to disk, it's displayed correctly:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
MemoryStream memStream = new MemoryStream();
PdfExporter.Instance.Generate(memStream);
memStream.Position = 0;
using (var fs = new FileStream("C:\\Temp\\test.pdf", FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
memStream.CopyTo(fs);
}
return null;
}
The differences are:
PDF saved on disk: 34KB
PDF transferred via web: 60KB (!)
If I compare the two files' contents, the main differences are shown below:
[screenshot: file differences]
On the left is the PDF transferred via web; on the right, the PDF saved to disk.
Is there something wrong with my code?
Maybe something related to encodings?
Thanks!
Well, it turned out to be a client (browser) problem, not a server problem. I'm using AngularJS on the frontend, so when the response was received, Angular automatically converted it to a JavaScript string. In that conversion, the binary contents of the file were somehow altered...
Basically it was solved by telling Angular not to convert the response to a string:
$http.get(url, { responseType: 'arraybuffer' })
.then(function(response) {
var dataBlob = new Blob([response.data], { type: 'application/pdf'});
FileSaver.saveAs(dataBlob, 'myFile.pdf');
});
And then saving the response as a file, helped by the Angular File Saver service.
I guess you should set ContentDisposition and ContentType like this:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
MemoryStream memStream = new MemoryStream();
PdfExporter.Instance.Generate(memStream);
var result = new HttpResponseMessage(HttpStatusCode.OK)
{
Content = new ByteArrayContent(memStream.ToArray())
};
//this line
result.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
{
FileName = "YourName.pdf"
};
//and this line
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
return result;
}
Try this
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
MemoryStream memStream = new MemoryStream();
PdfExporter.Instance.Generate(memStream);
//get the written bytes (ToArray trims to the actual data length,
//unlike GetBuffer, which returns the whole underlying buffer)
var buffer = memStream.ToArray();
//content length for header
var contentLength = buffer.Length;
var statuscode = HttpStatusCode.OK;
var response = Request.CreateResponse(statuscode);
response.Content = new StreamContent(new MemoryStream(buffer));
response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
response.Content.Headers.ContentLength = contentLength;
ContentDispositionHeaderValue contentDisposition = null;
if (ContentDispositionHeaderValue.TryParse("inline; filename=my_filename.pdf", out contentDisposition)) {
response.Content.Headers.ContentDisposition = contentDisposition;
}
return response;
}
