Seeking source of Http Response Headers in MVC application - c#

I've inherited a .NET C# MVC application. I noticed that different pages interact differently with the server when the user uses the Back and Forward buttons built into the browser. Some pages would hit the server and some would not. I used Postman to hit the various URLs and found that different pages are returning different response headers.
Cache-Control: private
Cache-Control: public, no-store, max-age=0
Pages in the first set are cached in the browser's "private" cache. There is no hit against the server when this page is loaded in the browser via the Back or Forward button.
The second value is somewhat contradictory; however, the most restrictive directive, no-store, overrides the others. These pages hit the server every time the page is loaded.
I've searched the codebase. I've located related meta tags in various page template markup files but I'm looking specifically for the headers. Moreover, the meta tags are all the same, so that could not account for the different headers in different pages.
IIS is not configured to add this header. Moreover, when I run this application in debug, out of Visual Studio / IIS Express, I see these headers sent back with the response. Different headers for different pages.
I can't find any explicit code emitting these headers on the server (I searched for Response.AddHeader and didn't find anything), so I'm thinking there might be different configurations of the different MVC templates that are implicitly generating these headers? Does that make sense? (I don't have a lot of experience with MVC.) I'll keep looking, but if you have knowledge of MVC that could point me in the right direction, I'd really appreciate it.
I continued looking into this. As Alexei Levenkov pointed out, Angular is executing client side. Not the right place to be looking.
Here's an image of a Debug session processing the page request. At this point there are exactly two headers in the Response.Headers collection.
After this statement executes, the response received by the browser contains many more headers. The View() method is not my code so I have no ability to trace into it. Clearly though, the View() method reacts to some declarative configuration somewhere which results in different response headers for different templates. As I mentioned before, I'm no Angular.js expert and I'm not much better with MVC. I'm probably missing something very basic.
Within this transaction there is nothing which executes after the return View() statement. Step over that statement, step off the closing method brace and the content returns to the client.
Postman shows 11 response headers, including the one in particular which interests me.
Now here I am stepping into a different request which returns a different Cache-Control header. Again, we see the same two headers in the Response.Headers collection.
Yet, the View method returns 13 response headers and a different Cache-Control header:
Here's a clue: The different pages which return different headers are grouped into different controllers. One controller returns Cache-Control: private, while the other controller returns Cache-Control: public, no-store, max-age=0.
I suppose, ultimately, the question boils down to the following: How are controllers defined in MVC to return specific headers on the response? (Like I said, I'm not an MVC expert so this might be a very long-winded presentation to get to a very basic question. Thanks for your help!)

I believe this is the answer. It's an attribute on the controller. (Being declarative, it would not be seen when stepping through the code. An examination of the source is necessary and, of course, it helps to know what you are looking for. Now I know!)
namespace rater8.RMM.Web.Controllers
{
    [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
    public class RMMController : Controller
    {
        ...
    }
    ...
}
The other controller did not have any such attribute. It appears that not specifying this attribute results in Cache-Control: private, which allows browsers to preserve the page in the local cache for Back and Forward navigation without hitting the server.
Specifying the attribute as shown in the code example above results in Cache-Control: public, no-store, max-age=0. (That's a strange cache-control value, where public is in conflict with no-store, but the latter wins out as it is the more restrictive directive.)
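For anyone digging through this later, here is a rough sketch (the controller and action names below are made up for illustration) of how the same attribute can be used to ask for different caching behavior per controller or per action. The exact header values emitted can vary with the MVC version and web.config settings:

using System.Web.Mvc;
using System.Web.UI;

namespace Example.Controllers
{
    // Hypothetical controller, for illustration only.
    public class ReportsController : Controller
    {
        // No OutputCache attribute: MVC's default typically emits "Cache-Control: private".
        public ActionResult Summary()
        {
            return View();
        }

        // Let the browser (but not shared proxies) cache for 60 seconds;
        // this typically emits "Cache-Control: private, max-age=60".
        [OutputCache(Duration = 60, Location = OutputCacheLocation.Client, VaryByParam = "none")]
        public ActionResult Details(int id)
        {
            return View();
        }

        // Disable caching, as on the controller shown above; this is what
        // produced "Cache-Control: public, no-store, max-age=0" in my case.
        [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
        public ActionResult Live()
        {
            return View();
        }
    }
}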


Browser debugger says my response.body is null but fiddler and the developer tools show it as populated [duplicate]

I can hit this endpoint, http://catfacts-api.appspot.com/api/facts?number=99 via Postman and it returns JSON
Additionally I am using create-react-app and would like to avoid setting up any server config.
In my client code I am trying to use fetch to do the same thing, but I get the error:
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:3000' is therefore not allowed
access. If an opaque response serves your needs, set the request's
mode to 'no-cors' to fetch the resource with CORS disabled.
So I am trying to pass in an object, to my Fetch which will disable CORS, like so:
fetch('http://catfacts-api.appspot.com/api/facts?number=99', { mode: 'no-cors' })
  .then(blob => blob.json())
  .then(data => {
    console.table(data);
    return data;
  })
  .catch(e => {
    console.log(e);
    return e;
  });
Interestingly enough, the error I get is actually a syntax error with this function. I don't think my actual fetch is broken, because when I remove the { mode: 'no-cors' } object and supply a different URL, it works just fine.
I have also tried to pass in the object { mode: 'opaque'} , but this returns the original error from above.
I believe all I need to do is disable CORS. What am I missing?
mode: 'no-cors' won’t magically make things work. In fact it makes things worse, because one effect it has is to tell browsers, “Block my frontend JavaScript code from seeing contents of the response body and headers under all circumstances.” Of course you never want that.
What happens with cross-origin requests from frontend JavaScript is that browsers by default block frontend code from accessing resources cross-origin. If Access-Control-Allow-Origin is in a response, then browsers relax that blocking and allow your code to access the response.
But if a site sends no Access-Control-Allow-Origin in its responses, your frontend code can’t directly access responses from that site. In particular, you can’t fix it by specifying mode: 'no-cors' (in fact that’ll ensure your frontend code can’t access the response contents).
However, one thing that will work: if you send your request through a CORS proxy.
You can also easily deploy your own proxy to Heroku in just 2-3 minutes, with 5 commands:
git clone https://github.com/Rob--W/cors-anywhere.git
cd cors-anywhere/
npm install
heroku create
git push heroku master
After running those commands, you’ll end up with your own CORS Anywhere server running at, for example, https://cryptic-headland-94862.herokuapp.com/.
Prefix your request URL with your proxy URL; for example:
https://cryptic-headland-94862.herokuapp.com/https://example.com
Adding the proxy URL as a prefix causes the request to get made through your proxy, which:
Forwards the request to https://example.com.
Receives the response from https://example.com.
Adds the Access-Control-Allow-Origin header to the response.
Passes that response, with that added header, back to the requesting frontend code.
The browser then allows the frontend code to access the response, because that response with the Access-Control-Allow-Origin response header is what the browser sees.
This works even if the request is one that triggers browsers to do a CORS preflight OPTIONS request, because in that case, the proxy also sends back the Access-Control-Allow-Headers and Access-Control-Allow-Methods headers needed to make the preflight successful.
I can hit this endpoint, http://catfacts-api.appspot.com/api/facts?number=99 via Postman
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS explains why it is that even though you can access the response with Postman, browsers won’t let you access the response cross-origin from frontend JavaScript code running in a web app unless the response includes an Access-Control-Allow-Origin response header.
http://catfacts-api.appspot.com/api/facts?number=99 has no Access-Control-Allow-Origin response header, so there’s no way your frontend code can access the response cross-origin.
Your browser can get the response fine and you can see it in Postman and even in browser devtools—but that doesn’t mean browsers expose it to your code. They won’t, because it has no Access-Control-Allow-Origin response header. So you must instead use a proxy to get it.
The proxy makes the request to that site, gets the response, adds the Access-Control-Allow-Origin response header and any other CORS headers needed, then passes that back to your requesting code. And that response with the Access-Control-Allow-Origin header added is what the browser sees, so the browser lets your frontend code actually access the response.
So I am trying to pass in an object, to my Fetch which will disable CORS
You don’t want to do that. To be clear, when you say you want to “disable CORS” it seems you actually mean you want to disable the same-origin policy. CORS itself is actually a way to do that — CORS is a way to loosen the same-origin policy, not a way to restrict it.
But anyway, it’s true that in your local environment you can do stuff like giving a browser runtime flags to disable security and run insecurely, or installing a browser extension locally to get around the same-origin policy, but all that does is change the situation just for you locally.
No matter what you change locally, anybody else trying to use your app is still going to run into the same-origin policy, and there’s no way you can disable that for other users of your app.
You most likely never want to use mode: 'no-cors' in practice except in a few limited cases, and even then only if you know exactly what you’re doing and what the effects are. That’s because what setting mode: 'no-cors' actually says to the browser is, “Block my frontend JavaScript code from looking into the contents of the response body and headers under all circumstances.” In most cases that’s obviously really not what you want.
As far as the cases when you would want to consider using mode: 'no-cors', see the answer at What limitations apply to opaque responses? for the details. The gist of it is:
In the limited case when you’re using JavaScript to put content from another origin into a <script>, <link rel=stylesheet>, <img>, <video>, <audio>, <object>, <embed>, or <iframe> element (which works because embedding of resources cross-origin is allowed for those)—but for some reason you don’t want to/can’t do that just by having the markup of the document use the resource URL as the href or src attribute for the element.
When the only thing you want to do with a resource is to cache it. As alluded to in What limitations apply to opaque responses?, in practice the scenario that’s for is when you’re using Service Workers, in which case the API that’s relevant is the Cache Storage API.
But even in those limited cases, there are some important gotchas to be aware of; see the answer at What limitations apply to opaque responses? for the details.
I have also tried to pass in the object { mode: 'opaque'}
There is no 'opaque' request mode — opaque is instead just a property of the response, and browsers set that opaque property on responses from requests sent with no-cors mode.
But incidentally the word opaque is a pretty explicit signal about the nature of the response you end up with: “opaque” means you can’t see into any of its details; it blocks you from seeing.
If you are trying to address this issue temporarily on your localhost, you can use this Chrome extension: Allow CORS Access-Control-Allow-Origin
https://chrome.google.com/webstore/detail/allow-cors-access-control/lhobafahddgcelffkeicbaginigeejlf
If you are using Express as your back-end, you just have to install the cors package, then import it and enable it with app.use(cors());.
If it is still not resolved, try switching ports; that should take care of it.
So if you're like me and developing a website on localhost where you're trying to fetch data from Laravel API and use it in your Vue front-end, and you see this problem, here is how I solved it:
In your Laravel project, run command php artisan make:middleware Cors. This will create app/Http/Middleware/Cors.php for you.
Add the following code inside the handle function in Cors.php:
return $next($request)
    ->header('Access-Control-Allow-Origin', '*')
    ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
In app/Http/kernel.php, add the following entry in $routeMiddleware array:
'cors' => \App\Http\Middleware\Cors::class
(There would be other entries in the array like auth, guest etc. Also make sure you're doing this in app/Http/kernel.php because there is another kernel.php too in Laravel)
Add this middleware on route registration for all the routes where you want to allow access, like this:
Route::group(['middleware' => 'cors'], function () {
    Route::get('getData', 'v1\MyController@getData');
    Route::get('getData2', 'v1\MyController@getData2');
});
In Vue front-end, make sure you call this API in mounted() function and not in data(). Also make sure you use http:// or https:// with the URL in your fetch() call.
Full credits to Pete Houston's blog article.
You can also set up a reverse proxy which adds the CORS headers using a self-hosted CORS Anywhere or Just CORS if you want a managed solution.
https://justcors.com/<id>/<your-requested-resource>
http://cors-anywhere.com/<your-requested-resource>
Very easy solution (2 min to config) is to use local-ssl-proxy package from npm
The usage is pretty straightforward:
1. Install the package:
npm install -g local-ssl-proxy
2. While your local server is running, mask it with local-ssl-proxy --source 9001 --target 9000
P.S.: Replace --target 9000 with the number of the port your server runs on, and --source 9001 with that port number + 1.
The solution for me was to just do it server-side.
I used the C# WebClient library to get the data (in my case it was image data) and send it back to the client. There's probably something very similar in your chosen server-side language.
//Server side, api controller

[Route("api/ItemImage/GetItemImageFromURL")]
public IActionResult GetItemImageFromURL([FromQuery] string url)
{
    ItemImage image = new ItemImage();
    using (WebClient client = new WebClient())
    {
        image.Bytes = client.DownloadData(url);
        return Ok(image);
    }
}
You can tweak it to whatever your own use case is. The main point is that client.DownloadData() worked without any CORS errors. CORS is only enforced by browsers on requests made from web pages, hence it being okay to make 'cross-site' requests from your server.
Then the React fetch call is as simple as:
//React component
fetch(`api/ItemImage/GetItemImageFromURL?url=${imageURL}`, {
    method: 'GET',
})
    .then(resp => resp.json() as Promise<ItemImage>)
    .then(imgResponse => {
        // Do more stuff....
    });
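As a side note, WebClient is an older API; on ASP.NET Core a rough equivalent of the proxy action might look like the sketch below. The ItemImage type and route are carried over from above, and it assumes IHttpClientFactory has been registered with services.AddHttpClient():

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// The same DTO used above.
public class ItemImage
{
    public byte[] Bytes { get; set; }
}

[ApiController]
public class ItemImageController : ControllerBase
{
    private readonly IHttpClientFactory _clientFactory;

    public ItemImageController(IHttpClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
    }

    [HttpGet("api/ItemImage/GetItemImageFromURL")]
    public async Task<IActionResult> GetItemImageFromURL([FromQuery] string url)
    {
        HttpClient client = _clientFactory.CreateClient();

        // The request is made server-to-server, so the browser's same-origin
        // policy (and therefore CORS) does not apply here.
        byte[] bytes = await client.GetByteArrayAsync(url);

        return Ok(new ItemImage { Bytes = bytes });
    }
}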
I had a similar problem, with my browser debugger saying my response.body was null while Fiddler and the developer tools showed it as populated, which turned out to be basically the same scenario as this. I was using a local Angular application hitting a Web API service running on IIS Express. I fixed it by following the steps outlined here to find the correct applicationhost.config file and add an Access-Control-Allow-Origin header like so:
<customHeaders>
    <clear />
    <add name="X-Powered-By" value="ASP.NET" />
    <add name="Access-Control-Allow-Origin" value="*" />
    <add name="Access-Control-Allow-Headers" value="Content-Type" />
</customHeaders>
If all the above solutions don't work, it is probably a file-permissions issue: sometimes, even after you have fixed the CORS problem using Heroku or another approach, the request throws a 403 Forbidden error. Set the directory/file permissions like this:
Permissions and ownership errors
A 403 Forbidden error can also be caused by incorrect ownership or permissions on your web content files and folders.
Permissions
Rule of thumb for correct permissions:
Folders: 755
Static Content: 644
Dynamic Content: 700

ASP.NET MVC OutputCache does not store Custom Headers

My application uses ASP.NET MVC 5 with OutputCache (in detail, we use MVCDonutCaching) to cache high traffic sites and expensive routes.
Some of the actions have a custom ActionFilter which adds a Content-Range header depending on the view model. Without caching it works like a charm. When the cache is enabled, the first hit is OK (the Content-Range header is present in the response), but the second one only contains Content-Type and the HTML/JSON response; our custom Content-Range header is missing (which breaks the client functionality).
Is there any way to enable proper header caching without writing our own OutputCache implementation?
Thank you very much.
The cached response is a "304 Not Modified" HTTP response, and that kind of response is not expected to return entity headers (with a few exceptions such as "Last-Modified").
The "Content-Range" header you are trying to return is an entity header:
http://www.freesoft.org/CIE/RFC/2068/178.htm
Here is a full list of Entity headers:
https://www.rfc-editor.org/rfc/rfc2616#section-7.1
The reason why 304 is not returning entity headers is that the 304 response is not supposed to return a full representation of the target resource, since nothing changed.
The 304 (Not Modified) status code indicates that a conditional GET
or HEAD request has been received and would have resulted in a 200
(OK) response if it were not for the fact that the condition has
evaluated to false. In other words, there is no need for the server
to transfer a representation of the target resource because the
request indicates that the client, which made the request
conditional, already has a valid representation;
That means that entity headers should not be transferred again. This ensures consistency, and also has some performance benefits.
If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers.
https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-p4-conditional-23#section-4.1
My conclusion is that ASP.NET and IIS are interpreting this specification correctly, and what you are trying to do is NOT supported. Proof of that is that Apache and other popular web servers do the same, as explained above.
If you still need that header in your 304 responses, you will have to identify and replace (if possible) the components responsible for filtering the 304 responses.

Sitecore redirects to physical layout page instead of content tree item

So our client makes use of www.opensiteexplorer.org to review their site. The problem is when they search using the homepage URL (e.g. https://www.clientsitecoresite.com), somehow it is being redirected to the physical layout page https://www.clientsitecoresite.com/layouts/custom/mycustomlayout.aspx
I disabled the customizations in httpRequestBegin pipelines but still the same issue.
I checked the IIS log and the request captured is the physical layout page.
We are using Sitecore 6.5.
Appreciate any input regarding this issue.
Additional info:
Strange: somehow, when using agents other than browsers, the request (for whatever page of the website) is redirected to the base layout. I used cURL (curl.haxx.se) to inspect the headers, and this is the result:
HTTP/1.1 301 Moved Permanently
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Length: 185
Content-Type: text/html; charset=utf-8
Expires: -1
Location: http://www.clientsitecoresite.com/layouts/custom/baselayout.aspx
Set-Cookie: ......
.....
The baselayout.aspx inherits from a class. In the OnLoad event of that base class, when access is via a browser, the Request.Url is as expected. However, when using another agent (cURL.exe), the Request.Url is somehow changed to the base layout path.
Any idea which event or Sitecore config could have caused this? We don't have a robots.txt. I also tried taking out URL rewriting and any custom processors after the ItemResolver pipeline. The relevant domain's ensureAnonymousUser is set to true as well.
It might be worth mentioning that the issue happens when I use a HEAD request in cURL; it seems fine with a GET request, whether from cURL or a browser.
Do you mean that after the initial request trying to get the home page there's another request for the layout? Or is the initial request straight for the layout?
In the second case, do you have a referrer? I wouldn't worry about your custom code, as it is IIS that is returning the .aspx; I would investigate where that link in the index comes from... Do you have directory listing enabled? Any custom sitemap XML?
If your problem is only the IIS logs, you could fix that and log the path of the item instead of the path of the layout, as shown in this post: http://www.bolaky.net/post/IIS-75-Logging-with-Sitecore-6x-in-Integrated-Pipeline-Mode.aspx
I'd check the "action" attribute of any form tags you may have on your page. If this is set incorrectly (i.e. contains the path of your layout), it could be causing issues for non-JS enabled requests. I've seen this happen before when you have the action set via JS (not a good idea fwiw).
Failing that, I'd hit up Sitecore support as others have mentioned.

Best Way to avoid Reinsertion of data in ASP.net on Page Refresh

I want to know what is the best way to avoid the reinsertion of data in ASP.net.
I am currently doing
Response.Redirect("PageURL");
Thanks in Advance
Don't put your insertion code in the Page_Load method, or if you are, make sure you are checking Page.IsPostBack first.
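A minimal sketch of what that guard looks like in a Web Forms code-behind (BindData and InsertRecord are hypothetical placeholders for your own logic):

using System;
using System.Web.UI;

public partial class OrderPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Page_Load runs on every request, including postbacks, so guard
        // anything that should only happen on the first GET.
        if (!Page.IsPostBack)
        {
            BindData();
        }
        // The insert itself belongs in the submit button's Click handler, not here.
    }

    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        InsertRecord();
    }

    private void BindData() { /* one-time setup, e.g. populate dropdowns */ }
    private void InsertRecord() { /* your insertion logic */ }
}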
Yes, normally we have an identity autoincrement number id, which should be sent back to your form after the insertion. So you just have to check on the server whether that number is > 0 and execute an update instead of an insert.
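A rough sketch of that idea, assuming the generated identity value is round-tripped in a hidden field (RecordIdHiddenField, InsertRecord, and UpdateRecord are all made-up names for illustration):

protected void SaveButton_Click(object sender, EventArgs e)
{
    int recordId;
    int.TryParse(RecordIdHiddenField.Value, out recordId);

    if (recordId > 0)
    {
        // The record was already inserted by a previous submit; update it instead.
        UpdateRecord(recordId);
    }
    else
    {
        // First submission: insert, then remember the new identity value.
        recordId = InsertRecord();
        RecordIdHiddenField.Value = recordId.ToString();
    }
}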
Your redirect solution is valid. This pattern is called Post/Redirect/Get.
Post/Redirect/Get (PRG) is a web development design pattern that
prevents some duplicate form submissions, creating a more intuitive
interface for user agents (users). PRG implements bookmarks and the
refresh button in a predictable way that does not create duplicate
form submissions.
When a web form is submitted to a server through an HTTP POST request,
a web user that attempts to refresh the server response in certain
user agents can cause the contents of the original HTTP POST request
to be resubmitted, possibly causing undesired results, such as a
duplicate web purchase.
To avoid this problem, many web developers use the PRG pattern[1] —
instead of returning a web page directly, the POST operation returns a
redirection command. The HTTP 1.1 specification introduced the HTTP
303 ("See other") response code to ensure that in this situation, the
web user's browser can safely refresh the server response without
causing the initial HTTP POST request to be resubmitted. However most
common commercial applications in use today (new and old alike) still
continue to issue HTTP 302 ("Found") responses in these situations.
Use of HTTP 301 ("Moved permanently") is usually avoided because
HTTP-1.1-compliant browsers do not convert the method to GET after
receiving HTTP 301, as is more commonly done for HTTP 302.[2] However,
HTTP 301 may be preferred in cases where it is not desirable for POST
parameters to be converted to GET parameters and thus be recorded in
logs.
http://en.wikipedia.org/wiki/Post/Redirect/Get
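In a Web Forms code-behind, the whole flow is roughly the sketch below (the page name and the InsertRecord helper are made up; the Response.Redirect you already have is the "Redirect" step of PRG):

protected void SubmitButton_Click(object sender, EventArgs e)
{
    // Handle the POST: do the insert exactly once.
    InsertRecord();

    // Then redirect, so the browser lands on a plain GET of the confirmation page.
    // Refreshing that page re-issues the GET, not the original POST, so nothing
    // is inserted twice.
    Response.Redirect("Confirmation.aspx");
}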

configuring IIS to cache based on content-type?

We had a weird issue on our site last week that seemed to be a caching issue. A version of our page was cached with Content-Type: text/vnd.wap.wml; charset=utf-8 set in the header.
After some research, I found out that ASP.NET uses .browser files in the %SystemRoot%\Microsoft.NET\Framework\versionNumber\CONFIG\Browsers path to determine preferred MIME types for certain user agents. Based on the content-type above, it looks like a Nokia phone was the first client to hit our page after a cache clear, and ASP.NET stored a cached version of the page with that content-type rather than text/html. The problem with that content-type is that browsers do not recognize it and will just display the page as plain text.
I was able to verify that the above scenario was the cause. I took one of our servers out of the pool, recycled the app pools for the site, reset IIS, then hit the page with Fiddler, passing the following headers on a GET to our homepage:
Accept: text/html
User-Agent: NokiaN90-1/3.0545.5.1 Series60/2.8 Profile/MIDP-2.0 Configuration/CLDC-1.1
This returned the following content-type in the response, as expected:
Content-Type: text/vnd.wap.wml; charset=utf-8
Now, to fix this going forward, it would make sense for ASP.NET to cache different flavors of the page based on the content-type it will be serving, right? Is there a way to configure ASP.NET to do this, or is there a better way to handle this scenario?
I believe that customarily you'd add a Vary: User-Agent header if you plan to serve different content types to different clients. E.g. http://msdn.microsoft.com/en-us/library/system.web.httpcachevarybyheaders.useragent(v=vs.100).aspx
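A minimal sketch of that in ASP.NET, set from a page or handler (the cache duration here is arbitrary):

// Tell the output cache and any downstream caches that the response differs
// per User-Agent, so the WML flavor cached for the Nokia phone is not served
// to ordinary desktop browsers. This emits a "Vary: User-Agent" header.
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(10));
Response.Cache.VaryByHeaders["User-Agent"] = true;

If you're using the declarative @ OutputCache page directive (or the MVC OutputCache attribute) instead, the equivalent setting is VaryByHeader="User-Agent".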
