HttpResponse.ClearContent() keeps returning HTML - C#

I'm trying to make a downloadable file from the contents of my bytes variable. However, even if I clear the HttpResponse body and headers (I also tried response.Clear()), the response contains what I want (the bytes) BUT also the page's HTML at the end. Any ideas why this is not working?
string value = string.Format("attachment; filename={0}", fileName); // name of file
var response = HttpContext.Current.Response;
response.ContentType = "text/plain";
response.ClearContent();
response.ClearHeaders();
response.AddHeader("Content-Disposition", value);
response.BinaryWrite(bytes);
using (var stream = new MemoryStream(bytes))
{
stream.WriteTo(response.OutputStream);
}
I'm also following the suggestions made in another question.
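Based on the related answers below, the trailing HTML is most likely the rest of the page being rendered after the write. A minimal sketch of the commonly suggested fix, keeping the names from the snippet above, writing the bytes once and then completing the request instead of letting the page finish rendering:
string value = string.Format("attachment; filename={0}", fileName); // name of file
var response = HttpContext.Current.Response;
response.ClearContent();
response.ClearHeaders();
response.ContentType = "text/plain"; // or the actual MIME type of the bytes
response.AddHeader("Content-Disposition", value);
response.BinaryWrite(bytes);   // write once; the extra MemoryStream copy would duplicate the payload
response.Flush();
response.SuppressContent = true; // don't send the rest of the page's HTML
HttpContext.Current.ApplicationInstance.CompleteRequest(); // end the request without the exception Response.End throws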

Related

What is HttpContext.Current.Response

I was trying to create a new endpoint to download a file from the server. I got the example from https://forums.asp.net/t/2010544.aspx?Download+files+from+website+using+Asp+net+c+ and this is what I ended up with:
[Route("{id}/file")]
[HttpGet]
public IHttpActionResult GetFile(int id)
{
    var filePath = $"C:\\Static\\File_{id}.pdf";
    var response = HttpContext.Current.Response;
    var data = new WebClient().DownloadData(filePath);
    response.Clear();
    response.ClearContent();
    response.ClearHeaders();
    response.Buffer = true;
    response.AddHeader("Content-Disposition", "attachment");
    response.BinaryWrite(data);
    response.End();
    return Ok(response);
}
But I wasn't sure if I need all these:
response.Clear();
response.ClearContent();
response.ClearHeaders();
response.Buffer = true;
response.BinaryWrite(data);
response.End();
What do these do?
The response object contains everything related to the response the client will receive from the server once the current request has been served.
response.Clear(); -> Clears the content of the body of the response (any HTML, for example, that was supposed to be served back; you can remove this call).
response.ClearContent(); -> Clears any content in the response (which is why you can remove the previous Clear call, I think).
response.ClearHeaders(); -> Clears all headers associated with the response (for example, a header might tell the client 'Content-Encoding: gzip').
response.Buffer = true; -> Enables response buffering, so output is held and sent only once the response is finished (or flushed).
response.BinaryWrite(data); -> Appends your binary data to the content of the response (you cleared it earlier, so now only this is contained).
response.End(); -> Stops handling the current request and returns the response to the client as it is.
Look up more stuff and better explanations here!
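As a point of comparison, Web API can also build this response for you, so none of the manual Clear/ClearContent/ClearHeaders/End calls are needed. A minimal sketch under the same assumptions as the question (hard-coded path, PDF content, file name chosen here for illustration; this is not the original answer's code):
[Route("{id}/file")]
[HttpGet]
public IHttpActionResult GetFile(int id)
{
    var filePath = $"C:\\Static\\File_{id}.pdf";
    var bytes = File.ReadAllBytes(filePath); // no WebClient needed for a local file

    var result = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new ByteArrayContent(bytes)
    };
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = $"File_{id}.pdf"
    };

    // ApiController helper; the framework writes headers and body from the message,
    // so there is no page output to clear and nothing to End()
    return ResponseMessage(result);
}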

File download from C# not working, I can see in the browser response from the API

I am using a C# API and I call it from the UI. The call works, and when I inspect the browser response I can see the data returned from the API, but the browser is not forced to download it. Here is the code I am using in the C# API:
var response = HttpContext.Current.Response;
response.Clear();
string fileName = CleanFileName(string.Format("{0} test - {1}.txt", name, DateTime.Now.ToString("yyyy-MM-dd HH_mm_ss")));
response.AddHeader("content-disposition", "attachment; filename = \"" + fileName + "\"");
response.ContentType = "text/csv";
response.AddHeader("Pragma", "must-revalidate");
response.AddHeader("Cache-Control", "must-revalidate");
byte[] byteArray = Encoding.UTF8.GetBytes(mydata);
response.AppendHeader("Content-Length", byteArray.Length.ToString());
response.BinaryWrite(byteArray);
response.End();
response.Flush();
Thanks
You are probably calling your API asynchronously (via AJAX) from the client side. In that case you get all the data returned by the API as the result of your request, but it won't be downloaded automatically.
Try to do something like this from your client side (JS):
window.open(apiDownloadLink, '_blank', '');
This makes the browser request the URL directly instead of via AJAX, so the file will be downloaded by the browser.
You can find additional information in this question

how to read a PDF into a char array for HTTP response output [duplicate]

I have an app that needs to read a PDF file from the file system and then write it out to the user. The PDF is 183 KB and opens fine on its own. When I use the code below, the browser receives a 224 KB file and I get a message from Acrobat Reader saying the file is damaged and cannot be repaired.
Here is my code (I've also tried using File.ReadAllBytes(), but I get the same thing):
using (FileStream fs = File.OpenRead(path))
{
    int length = (int)fs.Length;
    byte[] buffer;
    using (BinaryReader br = new BinaryReader(fs))
    {
        buffer = br.ReadBytes(length);
    }
    Response.Clear();
    Response.Buffer = true;
    Response.AddHeader("content-disposition", String.Format("attachment;filename={0}", Path.GetFileName(path)));
    Response.ContentType = "application/" + Path.GetExtension(path).Substring(1);
    Response.BinaryWrite(buffer);
}
Try adding
Response.End();
after the call to Response.BinaryWrite().
You may inadvertently be sending other content back after Response.BinaryWrite, which may confuse the browser. Response.End will ensure that the browser only gets what you really intend.
Response.BinaryWrite(bytes);
Response.Flush();
Response.Close();
Response.End();
This works for us. We create PDFs from SQL Reporting Services.
We've used this with a lot of success. WriteFile does the download for you, with a Flush / End at the end to send it all to the client.
//Use these headers to display a Save As / download prompt
//Response.ContentType = "application/octet-stream";
//Response.AddHeader("Content-Disposition", String.Format("attachment; filename={0}.pdf", Path.GetFileName(path)));
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", String.Format("inline; filename={0}.pdf", Path.GetFileName(path)));
Response.WriteFile(path);
Response.Flush();
Response.End();
Since you're sending the file directly from your filesystem with no intermediate processing, why not use Response.TransmitFile instead?
Response.Clear();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition",
"attachment; filename=\"" + Path.GetFileName(path) + "\"");
Response.TransmitFile(path);
Response.End();
(I suspect that your problem is caused by a missing Response.End, meaning that you're sending the rest of your page's content appended to the PDF data.)
Just for future reference, as stated in this blog post:
http://blogs.msdn.com/b/aspnetue/archive/2010/05/25/response-end-response-close-and-how-customer-feedback-helps-us-improve-msdn-documentation.aspx
It is not recommended to call Response.Close() or Response.End() - instead use CompleteRequest().
Your code would look somewhat like this:
byte[] bytes = GetBytesFromDB(); // I use a similar way to get pdf data from my DB
Response.Clear();
Response.ClearHeaders();
Response.Buffer = true;
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.ContentType = "application/pdf";
Response.AppendHeader("Content-Disposition", "attachment; filename=" + anhangTitel);
Response.AppendHeader("Content-Length", bytes.Length.ToString());
Response.BinaryWrite(bytes); // actually write the PDF bytes before completing the request
this.Context.ApplicationInstance.CompleteRequest();
Please read this before using Response.TransmitFile: http://improve.dk/blog/2008/03/29/response-transmitfile-close-will-kill-your-application
Maybe you are missing a Response.Close() to close the binary stream.
In my MVC application, I have enabled gzip compression for all responses. If you are reading this binary write from an AJAX call with gzipped responses, you are getting the gzipped byte array rather than the original byte array that you need to work with.
//c# controller is compressing the result after the Response.BinaryWrite
[compress]
public ActionResult Print(int id)
{
    ...
    var byteArray = someService.BuildPdf(id);
    return this.PDF(byteArray, "test.pdf");
}
//where PDF is a custom ActionResult that eventually does this:
public class PDFResult : ActionResult
{
    ...
    public override void ExecuteResult(ControllerContext context)
    {
        //Set the HTTP headers for a PDF download
        HttpContext.Current.Response.Clear();
        //HttpContext.Current.Response.ContentType = "application/vnd.ms-excel";
        HttpContext.Current.Response.ContentType = "application/pdf";
        HttpContext.Current.Response.AddHeader("content-disposition", string.Concat("attachment; filename=", fileName));
        HttpContext.Current.Response.AddHeader("Content-Length", byteArray.Length.ToString());
        //Write the pdf file as a byte array to the page
        HttpContext.Current.Response.BinaryWrite(byteArray);
        HttpContext.Current.Response.End();
    }
}
//javascript
function pdf(mySearchObject) {
    return $http({
        method: 'Post',
        url: '/api/print/',
        data: mySearchObject,
        responseType: 'arraybuffer',
        headers: {
            'Accept': 'application/pdf'
        }
    }).then(function (response) {
        var type = response.headers('Content-Type');
        //if response.data is gzipped, this blob will be incorrect. you have to uncompress it first.
        var blob = new Blob([response.data], { type: type });
        var fileName = response.headers('content-disposition').split('=').pop();
        if (window.navigator.msSaveOrOpenBlob) { // for IE and Edge
            window.navigator.msSaveBlob(blob, fileName);
        } else {
            var anchor = angular.element('<a/>');
            anchor.css({ display: 'none' }); // Make sure it's not visible
            angular.element(document.body).append(anchor); // Attach to document
            anchor.attr({
                href: URL.createObjectURL(blob),
                target: '_blank',
                download: fileName
            })[0].click();
            anchor.remove();
        }
    });
}
" var blob = new Blob([response.data], { type: type }); "
If you don't decompress the gzipped data first, this is the line that gives you the invalid/corrupt file when you turn that byte array into a file in your JavaScript.
To fix this, you can either prevent this binary data from being gzipped (so you can turn it into the file you are downloading as-is), or decompress the gzipped data in your JavaScript code before you turn it into a file.
In addition to Igor's Response.Close(), I would add a Response.Flush().
I also found it necessary to add the following:
Response.ContentEncoding = Encoding.Default;
If I didn't include this, my JPEG was corrupt and double the size in bytes. But this was only needed when the handler was returning from an ASPX page; running from an ASHX handler it was not required.
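For reference, a minimal generic-handler (.ashx) sketch of the same kind of binary write the comment above contrasts with ASPX (handler name and file path are hypothetical):
public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // hypothetical path; any binary file works the same way
        var bytes = File.ReadAllBytes(context.Server.MapPath("~/images/photo.jpg"));

        context.Response.Clear();
        context.Response.ContentType = "image/jpeg";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=\"photo.jpg\"");
        context.Response.BinaryWrite(bytes);
        // a plain handler has no page markup to render, so nothing extra gets appended
        context.ApplicationInstance.CompleteRequest();
    }

    public bool IsReusable
    {
        get { return false; }
    }
}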

String encoding related issue

I have a custom XML serializer that can serialize types; its method signature is:
string result = CustomXmlSerializer.Serialize(someObject);
I want to make the result of the serialization available as a download from a web page, something like this:
Response.ClearContent();
Response.AddHeader("content-disposition", "attachment; filename=\"somefilename.xml\"");
Response.BufferOutput = true;
Response.ContentEncoding = Encoding.UTF8;
Response.ContentType = "text/xml; encoding=utf-8";
string content = CustomXmlSerializer.Serialize(someObject);
byte[] utf8Bytes = Encoding.UTF8.GetBytes(content);
Response.OutputStream.Write(utf8Bytes, 0, utf8Bytes.Length);
Response.End();
However, the generated XML still declares the string's in-memory encoding (UTF-16). How is that possible? I am writing a UTF-8 encoded byte array to the HttpResponse.
Ask your CustomXmlSerializer to emit the correct encoding attribute in the XML declaration. It cannot know what encoding you will use when you convert the result string to bytes.
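For what it's worth, one way to get a matching declaration with the standard XmlSerializer (System.Xml, System.Xml.Serialization) is to serialize straight into the response through a UTF-8 XmlWriter instead of going through a string first. A minimal sketch; your custom serializer may offer a similar overload:
// Serialize directly into the response with a UTF-8 writer so the
// XML declaration reads encoding="utf-8" instead of "utf-16".
var settings = new XmlWriterSettings { Encoding = new UTF8Encoding(false) }; // UTF-8, no BOM
Response.ClearContent();
Response.AddHeader("content-disposition", "attachment; filename=\"somefilename.xml\"");
Response.ContentType = "text/xml";
Response.ContentEncoding = Encoding.UTF8;
using (var writer = XmlWriter.Create(Response.OutputStream, settings))
{
    new XmlSerializer(someObject.GetType()).Serialize(writer, someObject);
}
Response.End(); // or Context.ApplicationInstance.CompleteRequest(), as recommended above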

