I have the code below which, when scanned with HP Fortify, showed an XSS issue on the following line: Response.BinaryWrite(buffer);
What can I do to fix the XSS issue pointed out by Fortify?
void ShowPDF(string infilepath)
{
    WebClient client = new WebClient();
    Byte[] buffer = client.DownloadData(infilepath);
    Response.ContentType = "application/pdf";
    Response.AddHeader("content-length", buffer.Length.ToString());
    Response.AddHeader("content-disposition", "attachment; filename=hearingprep.pdf");
    Response.BinaryWrite(buffer);
}
Any help will be deeply appreciated.
What Fortify is saying is that you send untrusted content to the browser when you call BinaryWrite(buffer). The rule is probably triggered because you use the buffer coming from DownloadData directly, and it comes from an untrusted source.
So first, try adding sanitization to your code for the string infilepath and see what happens.
After that, if you can add sanitization to the Byte[] buffer as well, that should definitively resolve the vulnerability spotted by Fortify.
If you cannot do the second step, you should be prepared to actually have an XSS vulnerability.
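For the first step, here is a minimal sketch of what sanitizing infilepath could look like, assuming downloads are only allowed from an allow-list of trusted hosts (the host name below is hypothetical, and System.Linq is assumed to be imported):

private static readonly string[] TrustedHosts = { "reports.example.com" }; // hypothetical allow-list

void ShowPDF(string infilepath)
{
    Uri uri;
    // Reject anything that is not an absolute URI pointing at a trusted host.
    if (!Uri.TryCreate(infilepath, UriKind.Absolute, out uri) ||
        !TrustedHosts.Contains(uri.Host, StringComparer.OrdinalIgnoreCase))
    {
        Response.StatusCode = 400;
        return;
    }

    using (WebClient client = new WebClient())
    {
        Byte[] buffer = client.DownloadData(uri);
        Response.ContentType = "application/pdf";
        Response.AddHeader("content-length", buffer.Length.ToString());
        Response.AddHeader("content-disposition", "attachment; filename=hearingprep.pdf");
        Response.BinaryWrite(buffer);
    }
}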
We raised the issue as a false positive, after ensuring that the incoming file locations are trusted.
I am working with some calculations, and the product of my business logic is the creation of a PDF report (C# ASP.NET). So far, I have managed to create the report. The convention at my company is to store all the data in .gz format somewhere in the filesystem. At the click of a button, I need to send the file to the client, opening the download dialog so the file can be saved to the desired destination.
I have tried a lot of options, including .TransmitFile and .OutputStream.Write, as they are methods the Response object has. However, I'm not getting any feedback: there don't seem to be any errors, and the code runs as if the method calls never happened. If I set a try/catch block, the code runs into an exception with the description "Thread was aborted", though I have heard that Response.End always throws an exception.
var file = new FileInfo(integrityCheckInfo.ReportPath);
var buffer = integrityCheckInfo.ReportData;

// Code for downloading the report; doesn't work, and I don't know how to fix this.
if (File.Exists(integrityCheckInfo.ReportPath))
{
    Response.Clear();
    //Response.AddHeader("Accept-encoding", "gzip, deflate");
    Response.AddHeader("content-length", buffer.Length.ToString());
    Response.ContentType = "application/x-gzip";
    Response.Buffer = true;
    Response.Expires = -1000;
    Response.AddHeader("content-disposition", "attachment; filename=report.gz");
    //Response.BinaryWrite(buffer);
    Response.OutputStream.Write(buffer, 0, buffer.Length);
    //Response.Flush();
    //Response.TransmitFile(integrityCheckInfo.ReportPath);
    //Response.Flush();
    Response.End();
}
The lines that start with // are commented out and not included in this iteration, where I wanted to download the .gz instead of the PDF, but I also got no feedback.
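A side note on the "Thread was aborted" exception: Response.End aborts the current thread by design, so the exception itself is expected. A common alternative, sketched below under the assumption that this runs in a Page or handler where buffer already holds the file bytes, is to flush the output and end the request via CompleteRequest instead:

Response.Clear();
Response.AddHeader("content-length", buffer.Length.ToString());
Response.ContentType = "application/x-gzip";
Response.AddHeader("content-disposition", "attachment; filename=report.gz");
Response.OutputStream.Write(buffer, 0, buffer.Length);
Response.Flush();
// Skips straight to the EndRequest stage instead of calling Thread.Abort,
// so no "Thread was aborted" exception is raised.
HttpContext.Current.ApplicationInstance.CompleteRequest();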
I'm having some problems dealing with downloading a file from the server.
The problem is that the end of the downloaded file is missing.
I have found some indications of other people having similar problems, but nothing that helps with mine.
When debugging I have learned that the length of fileData is correct, and all data is present in the byte array when calling BinaryWrite.
That leaves the BinaryWrite, Flush or Close calls...
I have read about not using Response.End or Response.Close, for example here:
HttpResponse.End vs HttpResponse.Close vs HttpResponse.SuppressContent, and it seems like a probable cause, but what should I use instead? (I have tried removing Response.Close completely, but that results in too much data being output.)
Does anyone know what might cause this behaviour, and how to fix it?
EDIT: Just tried with Response.ContentType = "binary/octet-stream"; and it works like a charm!
What is the difference between text/plain and binary/octet-stream that may cause a behavior like this?
It even works without the Close call...
MORE EDIT: It seems compression of responses was activated on the server side. Apparently there is an issue with plain-text streams when compression is active.
The code I have is:
private void DownloadFile(byte[] fileData, string fileName, string fileExtension)
{
    Response.Clear();
    Response.AddHeader("content-disposition", "attachment; filename=" + fileName + fileExtension);
    Response.AddHeader("Content-Length", fileData.Length.ToString(CultureInfo.InvariantCulture));
    Response.ContentType = "text/plain";
    Response.BinaryWrite(fileData);
    Response.Flush();
    Response.Close();
}
If compression is active on the server side (IIS), it apparently causes trouble with text/plain streams.
For those of you with similar problems, try deactivating it; it might help!
It certainly did for me!
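If changing the content type is not an option, turning off IIS dynamic compression for the application is one way to test this. A sketch of the relevant web.config fragment (IIS 7 and later, assuming your server allows overriding this setting):

<system.webServer>
  <!-- Dynamic compression off; static compression can stay on. -->
  <urlCompression doStaticCompression="true" doDynamicCompression="false" />
</system.webServer>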
I'm running tests with an ASP.NET HttpHandler for downloading a file by writing directly to the response stream, and I'm not quite sure I'm doing it the right way. This is an example method; in the future, the file could be stored in a BLOB in the database:
public void GetFile(HttpResponse response)
{
    String fileName = "example.iso";
    response.ClearHeaders();
    response.ClearContent();
    response.ContentType = "application/octet-stream";
    response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
    using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open))
    {
        Byte[] buffer = new Byte[4096];
        Int32 bytesRead = 0;
        while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            response.OutputStream.Write(buffer, 0, bytesRead);
            response.Flush();
        }
    }
}
But, I'm not sure if this is correct or there is a better way to do it.
My questions are:
When I open the URL with the browser, the "Save File" dialog appears... but it seems like the server has already started pushing data into the stream before I click "Save". Is that normal?
If I remove the line "response.Flush()" and open the URL with the browser, I can see the web server pushing data, but the "Save File" dialog doesn't come up (or at least not in a reasonable amount of time). Why?
When I open the URL with a WebRequest object, I see that the HttpResponse.ContentLength is "-1", although I can read the stream and get the file. What is the meaning of -1? When is HttpResponse.ContentLength going to show the length of the response? For example, I have a method that retrieves a big XML document compressed with deflate as a binary stream, and in that case, when I access it with a WebRequest, I can actually see the ContentLength of the stream in the HttpResponse. Why?
What is the optimal length for the Byte[] array that I use as a buffer for best performance in a web server? I've read that it is between 4K and 8K... but which factors should I consider when making the decision?
Does this method bloat IIS or client memory usage, or does it actually buffer the transfer correctly?
Sorry for so many questions; I'm pretty new to web development :P
Cheers.
Yes; this is normal.
If you never flush, the browser doesn't get any response until the server finishes (not even the Content-Disposition header), so it doesn't know to show a file dialog.
The Content-Length header only gets set if the entire response is buffered (if you never flush) or if you set it yourself. In this case, you can and should set it yourself; write:
response.AppendHeader("Content-Length", new FileInfo(path).Length.ToString());
I recommend 4K; I don't have any hard basis for the recommendation.
This method is the best way to do it. By calling Flush inside the loop, you are sending the response down the wire immediately, without any buffering. However, for added performance, you can use GZIP compression.
Yes, it is buffering.
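Putting those answers together, a sketch of the handler with the suggestions applied (Content-Length set up front, a 4K buffer, and a Flush per chunk):

public void GetFile(HttpResponse response)
{
    String fileName = "example.iso";
    String path = Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName);

    response.ClearHeaders();
    response.ClearContent();
    response.ContentType = "application/octet-stream";
    response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
    // Set Content-Length ourselves, since flushing per chunk means the
    // response is not fully buffered and the header is not set automatically.
    response.AppendHeader("Content-Length", new FileInfo(path).Length.ToString());

    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    {
        Byte[] buffer = new Byte[4096];
        Int32 bytesRead = 0;
        while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            response.OutputStream.Write(buffer, 0, bytesRead);
            response.Flush(); // push each chunk to the client immediately
        }
    }
}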
Flush pushes the cached content to the browser. If it is never pushed, you won't get a save dialog box.
Hard to tell without seeing the exact files/URLs/Streams you are using.
I think the factors depend on how sluggish your page is, really. You will get better performance toward 4K, and perhaps a lower value will better accommodate slower connections.
See #1 & 2.
For #3 you need to set the Content-Length header in your HTTP response; many of those values come from HTTP headers.
I believe you can change the buffering by setting a buffering property on the response object to false. I haven't done it in a while, so I don't remember exactly what it is called.
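For what it's worth, the property being described is most likely HttpResponse.BufferOutput (an assumption based on the description above, not confirmed by the poster):

// Stream content to the client as it is written instead of buffering it all.
Response.BufferOutput = false;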
I have a custom HttpHandler that invokes a webservice to get a file. In test, I invoke the production webservice and the HttpHandler returns the file correctly. When I test it in the production environment on the server, it works as well. However, if I invoke the HttpHandler from a remote client (not on the server) the filename and size are set correctly, but the file bytes that are downloaded are zero. Any ideas?
So here's the deal: I created a multipart range handler (you need to implement the RFC in order to stream content to, say, an iPhone or Adobe Reader). The spec is supposed to enable handling a request where the client asks for a range of bytes instead of the whole array. The issue with my handler appeared when the client wanted the whole BLOB:
if (context.Request.Headers[HEADER_RANGE] != null)
{
    ...
}
else
{
    context.Response.ContentType = contentItem.MimeType;
    addHeader(context.Response, HEADER_CONTENT_DISPOSITION, "attachment; filename=\"" + contentItem.Filename + "\"");
    addHeader(context.Response, HEADER_CONTENT_LENGTH, contentItem.FileBytes.Length.ToString());
    context.Response.OutputStream.Write(contentItem.FileBytes, 0, contentItem.FileBytes.Length);
}
Notice anything missing???
I forgot to include:
context.Response.Flush();
After adding that line of code, it started working in the production environment. I find it very odd, however, that it was working on the server and not on any clients. Can anyone shed some light on why that would be?
This code will always make my aspx page load twice. And this has nothing to do with AutoEventWireup.
Response.Clear();
Response.ContentType = "application/pdf";
Response.AppendHeader("Content-Disposition", "inline;filename=data.pdf");
Response.BufferOutput = true;
byte[] response = GetDocument(doclocation);
Response.AddHeader("Content-Length", response.Length.ToString());
Response.BinaryWrite(response);
Response.End();
This code will only make my page load once (as it should) when I hardcode some dummy values.
Response.Clear();
Response.ContentType = "application/pdf";
Response.AppendHeader("Content-Disposition", "inline;filename=data.pdf");
Response.BufferOutput = true;
byte[] response = new byte[] {10,11,12,13};
Response.AddHeader("Content-Length", response.Length.ToString());
Response.BinaryWrite(response);
Response.End();
I have also increased the request length for good measure in the web.config file.
<httpRuntime executionTimeout="180" maxRequestLength="400000"/>
Still nothing. Anyone see something I don't?
GetDocument(doclocation);
Maybe this method somehow returns a redirection code? Or maybe an iframe or img tag for your dynamic content?
If so:
In general, the control can get called twice because of the URL response. First it renders the content. After that, your browser tries to download the tag's (iframe, img) source, which is actually dynamic content that gets generated, so it makes another request to the web server. In that case, another Page object is created, which has a different viewstate, because it is a different request.
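As a hypothetical illustration of that flow (the page name here is made up): if the PDF page is referenced from an iframe, the browser first requests the hosting page, then makes a second request for the iframe's src, and that second request builds a fresh Page object:

<!-- Request 1 renders this page; request 2 fetches the iframe src,
     running the dynamic PDF page again with a new Page object. -->
<iframe src="ViewPdf.aspx?doc=data.pdf"></iframe>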
Have you found a resolution to this yet? I'm having the same issue; my code is pretty much a mirror of yours. The main difference is that my PDF is hosted in an IFrame.
Some interesting clues I have found: if I stream back a Word .doc, it only gets loaded once; if a PDF, it gets loaded twice. Also, I have seen different behavior from different client desktops. I am thinking the Adobe version may have something to do with it.
Update:
In my case I was setting HttpCacheability to NoCache. In verifying this, any of the non-client-cache options would cause the double download of the PDF. Only not setting it at all (it defaults to Private), or explicitly setting it to Private or Public, fixed the issue; all the other settings reproduced the double load of the document.
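In code, that observation corresponds to something like the following (a sketch; Private is also what you get by not setting cacheability at all):

// Explicitly choosing a client-cacheable setting avoided the double load;
// HttpCacheability.NoCache and the other non-client options triggered it.
Response.Cache.SetCacheability(HttpCacheability.Private);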
Quick guess: could it be that at this stage in the page life cycle, the class that contains GetDocument() has already gone through garbage collection, and the ASP.NET worker process then needs to reload the page in order to call that method again?
Have you tried it in Page_Load? And why is GetDocument a static method?