My ASP.NET Web Forms application runs fine at first. But after the application has been idle for 4 to 5 minutes, it gives this error:
Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
How can this be solved?
This free online tool: http://aspnetresources.com/tools/machineKey generates a machineKey element under the system.web element in the web.config file.
Here is an example of what it generates:
<machineKey validationKey="1619AB2FDEE6B943AD5D31DD68B7EBDAB32682A5891481D9403A6A55C4F91A340131CB4F4AD26A686DF5911A6C05CAC89307663656B62BE304EA66605156E9B5" decryptionKey="C9D165260E6A697B2993D45E05BD64386445DE01031B790A60F229F6A2656ECF" validation="SHA1" decryption="AES" />
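For reference, the generated element sits directly under system.web; a minimal sketch of the placement (the key values below are placeholders, not values to actually use):
<configuration>
  <system.web>
    <!-- paste the machineKey element generated for your application here; these values are placeholders -->
    <machineKey validationKey="[your generated validationKey]"
                decryptionKey="[your generated decryptionKey]"
                validation="SHA1"
                decryption="AES" />
  </system.web>
</configuration>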
Once you see this in your web.config, the error itself suddenly makes sense.
The error you are getting says "ensure that <machineKey> configuration specifies the same validationKey and validation algorithm".
When you look at this machineKey element, suddenly you can see what it is talking about.
With this in place, modifying the pages element under system.web (as suggested in another answer) may not be necessary, which also avoids the security problems that come with turning those validation attributes off.
By "hard coding" this value in your web.config, the key that asp.net uses to serialize and deserialize your viewstate stays the same, no matter which server in a server farm picks it up. Your encryption becomes "portable", thus your viewstate becomes "portable".
I would also guess that even a single server (not in a farm) can have this problem if, for any reason, it "forgets" the key it had, for example after an application pool recycle or some other reset that wipes it out. That is perhaps why you see this error after an idle period, when you try to use a "stale" page.
See http://blogs.msdn.com/tom/archive/2008/03/14/validation-of-viewstate-mac-failed-error.aspx
This isn't your problem but it might help someone else. Make sure you are posting back to the same page. Check the action on your form tag and look at the URL your browser is requesting using Firefox Live HTTP Headers.
I ran into this because I was posting back to a page with the same name but a different path.
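To illustrate with a hypothetical sketch (not the OP's actual markup): if the rendered form tag ends up posting to a different path than the page that generated the viewstate, the posted viewstate no longer matches and MAC validation fails.
<!-- viewstate was generated by /Orders/Edit.aspx, but the post goes elsewhere (paths are made up) -->
<form id="form1" runat="server" action="/Archive/Edit.aspx">
    ...
</form>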
Modify your web.config with this element:
<pages validateRequest="false"
enableEventValidation="false"
viewStateEncryptionMode ="Never" />
If any more info is required, refer to the ASP.NET Forums topic.
I have a problem with my ASP.NET MVC 1.0 project on .NET Framework 2.0. My application is hosted on IIS 7.5. My forms authentication configuration looks like this:
<authentication mode="Forms">
<forms protection="All" loginUrl="~/Account/LogOn" timeout="60" cookieless="UseUri" />
</authentication>
<httpRuntime executionTimeout="1000" maxRequestLength="600000" />
<sessionState mode="InProc" cookieless="UseUri" timeout="60">
</sessionState>
When a user connects to the web page, he receives a session ID which is stored in the URL. When I connect to my web page with the default user agent (in every browser: Chrome/FF/IE), everything works fine. When I override the browser user agent and try to connect with the user agent XXXXXXXX.UP.BROWSER, I receive an infinite redirection loop to the address
http://<IP>_redir=1
But when I connect to the default IIS webpage, the user agent doesn't matter and everything loads fine, so it must be a problem between the specified user agent and my application. I tried to find any filters for that XXXXXXXX.UP.BROWSER user agent, but there aren't any. When I studied the application lifecycle, I tried to find the differences between a good connection and a wrong connection, and found that the functions which are NOT executed are:
Application_AcquireRequestState
Application_PostAcquireRequestState
Application_PreRequestHandlerExecute
Application_PostRequestHandlerExecute
Application_ReleaseRequestState
Application_PostReleaseRequestState
Application_UpdateRequestCache
Application_PostUpdateRequestCache
Another clue I found is that there is no Session in the "wrong" connection: the Session object is null.
To sum it up: connecting to my application's web page with that specific user agent causes an infinite redirection loop, probably because of the missing session ID. What could be the problem?
EDIT: I discovered that a user agent containing "UP.Browser" indicates a mobile browser. When I changed cookieless to "UseCookies", everything works. Why doesn't the "UseUri" option work for mobile browsers?
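For clarity, the change that made it work amounts to switching the cookieless attributes in the config shown above from "UseUri" to "UseCookies" (a sketch based on that config; the forms element may or may not need the same change):
<authentication mode="Forms">
<forms protection="All" loginUrl="~/Account/LogOn" timeout="60" cookieless="UseCookies" />
</authentication>
<sessionState mode="InProc" cookieless="UseCookies" timeout="60" />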
EDIT2: /admin -> my webpage hosted on the specified IP address.
Good connection: [screenshot]
Wrong connection: [screenshot]
http://msdn.microsoft.com/en-us/library/aa479315.aspx
So you're putting two different values into the URI, one for session and one for forms, which would probably create a lengthy URI:
"The principal limitation of this feature is the limited amount of data that can be stored in the URL. This feature is not targeted at common browsers such as IE, since these do support cookies and do not require this feature. The browsers that do not support cookies are the ones found on mobile devices (such as phones), and these browsers typically severely limit the size of the URL they support. So, be careful when you use this feature—try to make sure that the cookieless string generated by your application is small."
My guess is that the key to the infinite redirect loop is this functionality:
"// Step 5: We can't detect if cookies are supported or not. So, send a
// challenge to the client. We do this by sending a cookie, as
// well as setting a query string variable, and then doing a
// redirect back to this page. On the next request, if cookie
// comes back, then Step 3 will report that "cookies are
// supported". On the other hand, if the next request does not
// have any cookies, then Step 4 will report "cookies not
// supported".
SetAutoDetectionCookie();
Redirect(ThisPage + Our_auto_detect_challenge_variable);"
Unfortunately, this sounds like a bit of an architecture rethink, as it's now probably going to matter what the full path to your site is, and you may have to drop automatic handling of forms authentication.
As you said, the issue is specific to mobile browsers. I think it is limited to (mobile) devices where cookies are not supported: the size of the URL increases, and mobile browsers severely limit that size, as mentioned in the MSDN article referenced above.
My solution was to change any user agent containing "UP.Browser" to something else using a rewrite rule. Everything works fine ;)
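The rule itself isn't shown in the answer, but a rough sketch of the idea with the IIS URL Rewrite module could look like this (assuming the module is installed, HTTP_USER_AGENT has been added to the allowed server variables at the server level, and the rule name and replacement value are purely illustrative):
<system.webServer>
  <rewrite>
    <rules>
      <rule name="MaskUpBrowserUserAgent">
        <match url=".*" />
        <conditions>
          <add input="{HTTP_USER_AGENT}" pattern="UP\.Browser" />
        </conditions>
        <serverVariables>
          <!-- overwrite the incoming user agent before ASP.NET sees it -->
          <set name="HTTP_USER_AGENT" value="Mozilla/5.0 (compatible)" />
        </serverVariables>
        <action type="None" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>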
Edit: I found another clue.
In mobile browsers (the ones with user agents containing "UP.Browser"), it was necessary to add a slash at the end of the address.
In conclusion:
Everything works fine for user agents not related to "UP.Browser".
User agents containing "UP.Browser" needed an address like:
http://addr/controller/
I don't know why this is necessary. Any ideas?
I'm getting the following error while redirecting from one page to another web page:
"The page was not displayed because the request entity is too large."
The page from which I'm redirecting contains a huge amount of data, so I basically know the cause of the issue.
However, I'm looking for a working solution. When I researched this issue, I found that this kind of problem usually occurs when a large file gets uploaded.
But I'm not uploading any large file; it's just that the page itself contains a lot of data. A prompt solution would be appreciated.
I think this will fix the issue if you have SSL enabled:
Setting uploadReadAheadSize in the applicationHost.config file on IIS 7.5 should resolve your issue in both cases. You can modify this value directly in applicationHost.config, or via the Configuration Editor:
Select the site under Default Web Site
Select Configuration Editor
Within Section Dropdown, select "system.webServer/serverRuntime"
Enter a higher value for "uploadReadAheadSize", such as 1048576 bytes. The default is 49152 bytes.
During the client renegotiation process, the request entity body must be preloaded using SSL preload. SSL preload will use the value of the UploadReadAheadSize metabase property, which is used for ISAPI extensions.
Reference.
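If you prefer editing the file to using the Configuration Editor, the resulting entry in applicationHost.config looks roughly like this (the site/application path is a placeholder for your own):
<location path="Default Web Site/YourApp">
  <system.webServer>
    <serverRuntime uploadReadAheadSize="1048576" />
  </system.webServer>
</location>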
I got it working by doing the following:
Go to IIS.
Click on the server name
In the features (icons), choose the Configuration Editor.
Click the section dropdown at the top.
Traverse the path system.webServer -> security -> requestFiltering -> requestLimits -> maxAllowedContentLength and set it to 334217728. (Then hit Enter and then Apply in the top right.)
You can also restart the webserver for good measure.
After that I could upload my 150k database to phpMyAdmin.
I also set post_max_size to 8000 in php.ini (programs/PHP/phpversion/php.ini).
These sizes may be overkill, but they get the job done when you've got big files.
For me, uploadReadAheadSize did not fix my issue.
I changed both of these settings in my ASP.NET web.config, and the file finally uploaded for me:
<system.web>
<httpRuntime executionTimeout="999999" maxRequestLength="20000" requestValidationMode="2.0" /> <!-- 20 MB (maxRequestLength is in KB) -->
</system.web>
<system.webServer>
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="20500000" /> <!-- 20.5 MB - making it match maxRequestLength to fix issue with uploading 20mb file -->
</requestFiltering>
</security>
</system.webServer>
Another possible cause is an authentication setting. Within IIS 7:
Select the site under Default Web Site
Select Authentication
Select Windows Authentication and enable. Disable all others.
While Windows Authentication is still selected, click on Advanced Settings in the Actions pane.
Make sure Extended Protection is set to Accept and check Enable Kernel-mode authentication.
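For reference, a sketch of how those GUI settings map, roughly, to applicationHost.config (the site path is a placeholder, and this section is usually locked, so it normally has to be edited at the server level):
<location path="Default Web Site/YourApp">
  <system.webServer>
    <security>
      <authentication>
        <anonymousAuthentication enabled="false" />
        <!-- "Accept" in the UI corresponds to tokenChecking="Allow" -->
        <windowsAuthentication enabled="true" useKernelMode="true">
          <extendedProtection tokenChecking="Allow" />
        </windowsAuthentication>
      </authentication>
    </security>
  </system.webServer>
</location>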
I had the same error and the above fix did not work. I was only on HTTP (localhost).
However, this fixed the 413 error:
Set-WebConfigurationProperty -filter /system.webserver/security/requestfiltering/requestLimits -name maxAllowedContentLength -value 134217728
I did the above but still had no joy.
What fixed the problem for me was also upping the request limits:
- Select the site in IIS Manager
- Click "Edit feature settings" from the right-hand menu
- Enter 200000000 for "Maximum allowed content length"
In my case, the error occurred when downloading a file.
The reason was that I had code that explicitly sends an HTTP 413 to the client in an (erroneous) case.
(See here for details).
So be aware that setting an HTTP 413 response code in your code (or in a library that you are using) can also generate the OP's error message.
We've just upgraded to ASP.NET 4.0 and found that request validation no longer works the way it used to. The MSDN docs suggest we need to set requestValidationMode in web.config to 2.0:
4.0 (the default). The HttpRequest object internally sets a flag that indicates that request validation should be triggered whenever any HTTP request data is accessed. This guarantees that the request validation is triggered before data such as cookies and URLs are accessed during the request. The request validation settings of the pages element (if any) in the configuration file or of the @ Page directive in an individual page are ignored.
2.0. Request validation is enabled only for pages, not for all HTTP requests. In addition, the request validation settings of the pages element (if any) in the configuration file or of the @ Page directive in an individual page are used to determine which page requests to validate.
This will work for us; however, I'm a little puzzled. It seems that we're putting the application into a legacy/compatibility mode. Surely it should be possible to have the 4.0 behaviour but still have an option to turn this off on a particular page?
I found a way to achieve this without changing requestValidationMode to 2.0 for the whole site:
You can create a sub-directory for the page on which you want to disable request validation and add a new web.config to that directory with requestValidationMode set to 2.0. This way, only that directory will work in 2.0 mode, without affecting all other requests, which will keep working in 4.0 mode.
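A minimal sketch of what that sub-directory web.config could contain (assuming nothing else needs to be overridden for that directory):
<?xml version="1.0"?>
<configuration>
  <system.web>
    <!-- applies only to requests handled under this sub-directory -->
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</configuration>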
I think you can also add a location section to your main web.config specifying only one page, but I haven't tested this yet.
Something like this:
<location path="Admin/Translation.aspx">
<system.web>
<httpRuntime requestValidationMode="2.0"/>
</system.web>
</location>
Hope it helps you as it helped me!
Your best bet is to override the requestValidationType with your own code:
<httpRuntime requestValidationType="YourNamespace.YourValidator" />
MSDN link
It appears that it is not possible to turn this on or off for a page in requestValidationMode 4.0.
This whitepaper outlines breaking changes in .NET 4.0, of which this seems to be one. Even the whitepaper suggests reverting to requestValidationMode 2.0:
To revert to the behavior of the ASP.NET 2.0 request validation feature, add the following setting in the Web.config file:
<httpRuntime requestValidationMode="2.0" />
Although it also helpfully recommends
that you analyze any request validation errors to determine whether existing handlers, modules, or other custom code accesses potentially unsafe HTTP inputs that could be XSS attack vectors.
without giving any guidance on how best to resolve these issues.
Set requestValidationMode="0.0" to disable request validation for both ASP.NET pages and HTTP requests.
The value 0.0 is recognized in ASP.NET 4.6 and later. MSDN
<configuration>
<system.web>
<httpRuntime requestValidationMode="0.0" />
</system.web>
</configuration>
You can set ValidateRequest to false in the page directive:
<%@ Page ValidateRequest="false" %>
The site I'm working on is an Ajax-enabled ASP.NET/C# project, and I have a URL like this:
http://localhost:2531/(S(lfcvqc55wkabpp55o1x4pvq5))/Logon.aspx
How do you get rid of the (S(lfcvqc55wkabpp55o1x4pvq5)) portion of the URL? I have a feeling it's a web.config parameter, but I'm not really sure what this part is called.
That is your SessionId - check the <sessionState> element in web.config and you will likely see <sessionState cookieless="true" />
Set that to false and see how it goes. But keep in mind that session state will then be tracked by setting a cookie. It is possible that the designer of the site had a valid reason for using the URL to track the session. You should probably ask someone.
If you simply don't like the way it looks and want it gone, but haven't considered that it may be purposeful, perhaps you should really talk to someone with a nameplate and a door before doing anything.
On my website, when a web page has been idle for more than 5 minutes, that page stops working until I refresh it. The following error occurs:
Error:
Sys.WebForms.PageRequestManagerServerErrorException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
I'm already using EnableEventValidation="false", ViewStateEncryptionMode="Never", and ValidateRequest="false", but nothing is working for me.
Although it's an old question, I will answer anyway because it might help someone else.
So I had this problem in the past few days, and I realized that I started getting this error after I configured my cookies as HttpOnly and Require SSL:
<system.web>
<httpCookies httpOnlyCookies="true" requireSSL="true" />
</system.web>
It turns out that I had just forgotten to configure Visual Studio to open the SSL URL of my website. So as long as it opened the regular URL, the cookies couldn't be sent, and that is what caused the error.
In order to change the default URL, you first need to figure out what your SSL URL is: click the project in Solution Explorer and press F4 (not Right Click -> Properties), and there you'll see SSL URL under the Development Server section. After that, go to the project properties page (Right Click -> Properties) and, in the Web tab, put the SSL URL as the Project Url.
Make sure all the servers on the cluster are using the same encryption key.
This sometimes happens if you are doing a postback from a form which has an action pointing to a different page.