PhysicalAddress.Parse() won't parse lower cased string, is this a bug? - c#

Note: Using .NET 4.0
Consider the following piece of code.
String ad = "FE23658978541236";
String ad2 = "00FABE002563447E".ToLower();
try
{
PhysicalAddress.Parse(ad);
}
catch (Exception)
{
//We don't get here, all went well
}
try
{
PhysicalAddress.Parse(ad2);
}
catch (Exception)
{
//we arrive here for what reason?
}
try
{
//Ok, I do it myself then.
ulong dad2 = ulong.Parse(ad2, System.Globalization.NumberStyles.HexNumber);
byte[] bad2 = BitConverter.GetBytes(dad2);
if (BitConverter.IsLittleEndian)
{
bad2 = bad2.Reverse().ToArray<byte>();
}
PhysicalAddress pa = new PhysicalAddress(bad2);
}
catch (Exception ex)
{
//We don't get here as all went well
}
So an exception is thrown in the PhysicalAddress.Parse method when trying to parse an address with lower-case letters. When I look at the source code of .NET it's totally clear to me why.
It's because of the following piece of code.
if (value >= 0x30 && value <=0x39){
value -= 0x30;
}
else if (value >= 0x41 && value <= 0x46) {
value -= 0x37;
}
That is found within the Parse method.
public static PhysicalAddress Parse(string address) {
int validCount = 0;
bool hasDashes = false;
byte[] buffer = null;
if(address == null)
{
return PhysicalAddress.None;
}
//has dashes?
if (address.IndexOf('-') >= 0 ){
hasDashes = true;
buffer = new byte[(address.Length+1)/3];
}
else{
if(address.Length % 2 > 0){ //should be even
throw new FormatException(SR.GetString(SR.net_bad_mac_address));
}
buffer = new byte[address.Length/2];
}
int j = 0;
for (int i = 0; i < address.Length; i++ ) {
int value = (int)address[i];
if (value >= 0x30 && value <=0x39){
value -= 0x30;
}
else if (value >= 0x41 && value <= 0x46) {
value -= 0x37;
}
else if (value == (int)'-'){
if (validCount == 2) {
validCount = 0;
continue;
}
else{
throw new FormatException(SR.GetString(SR.net_bad_mac_address));
}
}
else{
throw new FormatException(SR.GetString(SR.net_bad_mac_address));
}
//we had too many characters after the last dash
if(hasDashes && validCount >= 2){
throw new FormatException(SR.GetString(SR.net_bad_mac_address));
}
if (validCount%2 == 0) {
buffer[j] = (byte) (value << 4);
}
else{
buffer[j++] |= (byte) value;
}
validCount++;
}
//we too few characters after the last dash
if(validCount < 2){
throw new FormatException(SR.GetString(SR.net_bad_mac_address));
}
return new PhysicalAddress(buffer);
}
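In ASCII terms, 0x30-0x39 are the digits '0'-'9' and 0x41-0x46 are the upper-case letters 'A'-'F'; the lower-case range 0x61-0x66 ('a'-'f') matches neither branch, so those characters fall through to the final else and trigger the FormatException. A minimal sketch of the same mapping with a lower-case branch added (my own illustration, not framework code):
static int HexNibble(char c)
{
    if (c >= '0' && c <= '9') return c - 0x30;   // 0x30..0x39 -> 0..9
    if (c >= 'A' && c <= 'F') return c - 0x37;   // 0x41..0x46 -> 10..15
    if (c >= 'a' && c <= 'f') return c - 0x57;   // 0x61..0x66 -> 10..15 (the missing case)
    throw new FormatException("Invalid hex digit: " + c);
}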
Can this be considered a bug? Or is it so very wrong to use lower-cased hex values in a string? Or is there some convention I am unaware of?
Personally, I consider this programmer-unfriendly.

From MSDN:
The address parameter must contain a string that can only consist of
numbers and upper-case letters as hexadecimal digits. Some examples of
string formats that are acceptable are as follows .... Note that an address that contains f0-e1-d2-c3-b4-a5 will fail to parse and throw an exception.
So you could simply do: PhysicalAddress.Parse(ad.ToUpper());
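A minimal sketch of that workaround, reusing the question's example value:
string ad2 = "00FABE002563447E".ToLower();
// Normalize to upper case first, since Parse only accepts 0-9 and A-F.
PhysicalAddress pa = PhysicalAddress.Parse(ad2.ToUpperInvariant());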

No, it's only a bug if it doesn't do something the documentation states that it does, or it does something the documentation states that it doesn't. The mere fact that it doesn't behave as you expect doesn't make it a bug. You could of course consider it a bad design decision (or, as you put it so eloquently, programmer-unfriendly) but that's not the same thing.
I tend to agree with you there since I like to follow the "be liberal in what you expect, consistent in what you deliver" philosophy and the code could probably be easily fixed with something like:
if (value >= 0x30 && value <=0x39) {
value -= 0x30;
}
else if (value >= 0x41 && value <= 0x46) {
value -= 0x37;
}
else if (value >= 0x61 && value <= 0x66) { // added
value -= 0x57; // added
} // added
else if ...
though, of course, you'd have to change the doco as well, and run vast numbers of tests to ensure you hadn't stuffed things up.
Regarding the doco, it can be found here and the important bit is repeated below (with my bold):
The address parameter must contain a string that can only consist of numbers and upper-case letters as hexadecimal digits. Some examples of string formats that are acceptable are as follows:
001122334455
00-11-22-33-44-55
F0-E1-D2-C3-B4-A5
Note that an address that contains f0-e1-d2-c3-b4-a5 will fail to parse and throw an exception.

Related

Detecting multiple keys in Unity

I'm following along with a tutorial about creating a mini-RTS in Unity, but I've hit something of a roadblock when it comes to the selection feature for assigning selection groups for multiple units.
The pertinent parts are below:
In the Update() method of my UnitsSelection class
//manage selection groups with alphanumeric keys
if (Input.anyKeyDown)
{
int alphaKey = Utils.GetAlphaKeyValue(Input.inputString);
if (alphaKey != -1)
{
if (Input.GetKey(KeyCode.LeftControl) || Input.GetKey(KeyCode.RightControl))
{
_CreateSelectionGroup(alphaKey);
}
else
{
_ReselectGroup(alphaKey);
}
}
}
And the GetAlphaKeyValue method from Utils:
public static int GetAlphaKeyValue(string inputString)
{
if (inputString == "0") return 0;
if (inputString == "1") return 1;
if (inputString == "2") return 2;
if (inputString == "3") return 3;
if (inputString == "4") return 4;
if (inputString == "5") return 5;
if (inputString == "6") return 6;
if (inputString == "7") return 7;
if (inputString == "8") return 8;
if (inputString == "9") return 9;
return -1;
}
This is the code that is used in the tutorial, but to my understanding there is no way that _CreateSelectionGroup() would ever be called.
I've seen the tutorial demonstrate this functionality working, but whenever I try to run it GetAlphaKeyValue turns the Left and Right control keys into a -1 value so the if statement that checks for them never runs.
Am I missing something here? How does Unity normally handle things like Ctrl+1?
If you use the inputString I would rather always check with Contains instead of an exact string match. However, I tried to use the inputString in the past and I found it too unpredictable for most use cases ^^
While holding a control key, your keyboard most likely simply won't generate any inputString.
Only ASCII characters are contained in the inputString.
But e.g. CTRL+1 will not generate the ASCII symbol 1 but rather a "non-printing character", a control symbol - or simply none at all.
You should probably rather use e.g.
public static bool GetAlphaKeyValue(out int alphaKey)
{
alphaKey = -1;
if (Input.GetKeyDown(KeyCode.Alpha0)) alphaKey = 0;
else if (Input.GetKeyDown(KeyCode.Alpha1)) alphaKey = 1;
else if (Input.GetKeyDown(KeyCode.Alpha2)) alphaKey = 2;
else if (Input.GetKeyDown(KeyCode.Alpha3)) alphaKey = 3;
else if (Input.GetKeyDown(KeyCode.Alpha4)) alphaKey = 4;
else if (Input.GetKeyDown(KeyCode.Alpha5)) alphaKey = 5;
else if (Input.GetKeyDown(KeyCode.Alpha6)) alphaKey = 6;
else if (Input.GetKeyDown(KeyCode.Alpha7)) alphaKey = 7;
else if (Input.GetKeyDown(KeyCode.Alpha8)) alphaKey = 8;
else if (Input.GetKeyDown(KeyCode.Alpha9)) alphaKey = 9;
return alphaKey >= 0;
}
And then use it like
//manage selection groups with alphanumeric keys
if (Utils.GetAlphaKeyValue(out var alphaKey))
{
if (Input.GetKey(KeyCode.LeftControl) || Input.GetKey(KeyCode.RightControl))
{
_CreateSelectionGroup(alphaKey);
}
else
{
_ReselectGroup(alphaKey);
}
}
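If you want to see what (if anything) actually arrives in Input.inputString while a modifier key is held, a quick debug sketch like this (dropped into any MonoBehaviour's Update, purely for inspection) can help:
void Update()
{
    // Log the numeric code of every character Unity reports this frame so that
    // control characters (or an empty string) become visible in the console.
    foreach (char c in Input.inputString)
    {
        Debug.Log("inputString char: '" + c + "' (code " + (int)c + ")");
    }
}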
As it turns out it may just be an issue with my keyboard. Different keyboards handle key presses in different ways. Mine just refuses to tell Unity that control + (some other key) are being pressed together. Changed the code to respond to Shift + (some other key) and it works fine.

String == operator: how did Microsoft write it?

I want to know how Microsoft wrote the algorithm for string comparison:
string.Equals and string.Compare
Do they compare character by character like this:
int matched = 0;
for (int i = 0; i < str1.Length; i++)
{
if (str1[i] == str2[i])
{
matched++;
}
else
{
break;
}
}
if (matched == str1.Length) return true;
Or match all at once
if (str1[0] == str2[0] && str1[1] == str2[1] && str1[2] == str2[2]) return true;
I tried pressing F12 on the string.Equals function but it took me to the function declaration, not the actual code. Thanks
After Thilo mentioned looking at the source I was able to find this... this is how Microsoft wrote it.
public static bool Equals(String a, String b) {
if ((Object)a==(Object)b) {
return true;
}
if ((Object)a==null || (Object)b==null) {
return false;
}
if (a.Length != b.Length)
return false;
return EqualsHelper(a, b);
}
But this raises the question: is it faster to check character by character or to do a complete match?
Looking at the source (copied below):
null check
reference identity
different length => not equal
go over the binary encoding of the characters in a bit of an unrolled loop
this raises the question: is it faster to check character by character or to do a complete match
I don't understand the question. You cannot do a "complete match" without checking each of the characters. What you can do is bail out as soon as you find a mismatch. That reduces runtime a bit, but does not change the fact that it is O(n).
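To make that concrete, here is a minimal managed sketch of such an early-bail-out comparison (just the idea; the actual framework source is quoted below):
static bool SimpleEquals(string a, string b)
{
    if (ReferenceEquals(a, b)) return true;   // same instance, or both null
    if (a == null || b == null) return false;
    if (a.Length != b.Length) return false;   // different length => not equal
    for (int i = 0; i < a.Length; i++)
    {
        if (a[i] != b[i]) return false;       // bail out at the first mismatch
    }
    return true;                              // still O(n) in the worst case
}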
// Determines whether two strings match.
[Pure]
[ReliabilityContract(Consistency.WillNotCorruptState, Cer.MayFail)]
public bool Equals(String value) {
if (this == null) //this is necessary to guard against reverse-pinvokes and
throw new NullReferenceException(); //other callers who do not use the callvirt instruction
if (value == null)
return false;
if (Object.ReferenceEquals(this, value))
return true;
if (this.Length != value.Length)
return false;
return EqualsHelper(this, value);
}
[System.Security.SecuritySafeCritical] // auto-generated
[ReliabilityContract(Consistency.WillNotCorruptState, Cer.MayFail)]
private unsafe static bool EqualsHelper(String strA, String strB)
{
Contract.Requires(strA != null);
Contract.Requires(strB != null);
Contract.Requires(strA.Length == strB.Length);
int length = strA.Length;
fixed (char* ap = &strA.m_firstChar) fixed (char* bp = &strB.m_firstChar)
{
char* a = ap;
char* b = bp;
// unroll the loop
#if AMD64
// for AMD64 bit platform we unroll by 12 and
// check 3 qword at a time. This is less code
// than the 32 bit case and is shorter
// pathlength
while (length >= 12)
{
if (*(long*)a != *(long*)b) return false;
if (*(long*)(a+4) != *(long*)(b+4)) return false;
if (*(long*)(a+8) != *(long*)(b+8)) return false;
a += 12; b += 12; length -= 12;
}
#else
while (length >= 10)
{
if (*(int*)a != *(int*)b) return false;
if (*(int*)(a+2) != *(int*)(b+2)) return false;
if (*(int*)(a+4) != *(int*)(b+4)) return false;
if (*(int*)(a+6) != *(int*)(b+6)) return false;
if (*(int*)(a+8) != *(int*)(b+8)) return false;
a += 10; b += 10; length -= 10;
}
#endif
// This depends on the fact that the String objects are
// always zero terminated and that the terminating zero is not included
// in the length. For odd string sizes, the last compare will include
// the zero terminator.
while (length > 0)
{
if (*(int*)a != *(int*)b) break;
a += 2; b += 2; length -= 2;
}
return (length <= 0);
}
}

StringBuilder not appending int to beginning of string unless 0 or 10

I have a bowling game that takes the list of bowls and then runs them through this code to produce a string that goes through the frames on the UI. If the user bowls a gutter (0) or strike (10) the code works fine. However, if it is between 1 and 9, it fails to produce a string. Why is this? I've searched and tried many other ways to do this, but this seems to be the only way it will work.
public static StringBuilder FormatRolls (List<int> rolls) {
StringBuilder output = new StringBuilder();
for (int i=0; i < rolls.Count; i++) {
if (rolls.Count >= 19 && rolls[i] == 10) { //Bonus End-Frame Strike
output.Append ("X");
} else if (rolls[i] == 0) { //Gutter
output.Append ("-");
} else if (rolls[i-1] + rolls[i] == 10 && rolls.Count > 1) { //Spare
output.Append ("/");
} else if (rolls[i] == 10 && rolls[i+1] == 0) { //Strike
output.Append ("X");
} else { //Normal bowls 1-9
output.Append (rolls[i].ToString());
}
}
output.ToString();
return output;
}
This is the code that then writes to all of the frames:
public void FillRolls (List<int> rolls) {
StringBuilder scoresString = FormatRolls(rolls);
for (int i=0; i<scoresString.Length; i++) {
frameText[i].text = scoresString[i].ToString();
}
}
Any help is greatly appreciated, I've been stuck for DAYS trying to get this to work...
output.ToString(); is a pure function and you are not using its return value (so, you are converting to a string, and then throwing that string away without using/storing/returning it). I guess you really want to return the fully built and formatted string, not the StringBuilder instance. Use:
return output.ToString();
That said, that issue affects every code path equally, so it does not by itself explain why only some rolls produce output.
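In other words, the method could return the finished string and the caller could consume it as one (a sketch of that change, keeping the question's frameText array):
public static string FormatRolls(List<int> rolls)
{
    StringBuilder output = new StringBuilder();
    // ... build the text exactly as before ...
    return output.ToString();   // hand back the finished string, not the builder
}

public void FillRolls(List<int> rolls)
{
    string scoresString = FormatRolls(rolls);
    for (int i = 0; i < scoresString.Length; i++)
    {
        frameText[i].text = scoresString[i].ToString();
    }
}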
After rolling through all the comments and suggestions from others, I removed the try-catch section that was hiding the real error from me. Then I added Debug.Log statements to narrow down the position and used Unity's built-in errors to find this wasn't the string's fault at all. It was trying to access an index, rolls[i-1], that didn't exist. After a few attempts I found that if I changed the else if (rolls[i-1] + rolls[i] == 10 && rolls.Count > 1) to a nested if statement and used output.Length instead of rolls.Count it would no longer return an error. Thank you everyone who tried to help me solve this, you pointed me in the right direction and really got me thinking! Here is the finished code:
public static string FormatRolls (List<int> rolls) {
StringBuilder output = new StringBuilder();
for (int i=0; i < rolls.Count; i++) {
if (rolls.Count >= 19 && rolls[i] == 10) { //Bonus End-Frame Strike
output.Append ("X");
} else if (rolls[i] == 0) { //Gutter
output.Append ("-");
} else if (rolls[i] == 10 && output.Length % 2 == 0) { //Strike
output.Append (" X");
} else if (output.Length >= 1 && output.Length % 2 != 0) { //Spare
if (rolls[i-1] + rolls[i] == 10) {
output.Append ("/");
}
} else { //Normal bowls 1-9
output.Append (rolls[i]);
}
}
return output.ToString();
}

TextBox maximum amount of characters (it's not MaxLength)

I'm using a System.Windows.Forms.TextBox. According to the docs, the MaxLength property only limits the number of characters a user can type or paste into the TextBox (i.e. more than that can still be added programmatically, e.g. via the AppendText method or the Text property). The current number of characters can be read from the TextLength property.
Is there any way to set the maximum amount of characters without making a custom limiter which calls Clear() when the custom limit is reached?
Regardless, what is the absolute maximum it can hold? Is it only limited by memory?
What happens when the maximum is reached / memory is full? A crash? Are the top x lines cleared?
What would be the best way to manually clear only the top x lines? Substring operation?
edit: I have tested it to hold more than 600k characters, regardless of MaxLength, at which point I manually stopped the program and asked this question.
Sure. Override / shadow AppendText and Text in a derived class. See code below.
The backing field for the Text property is a plain old string (private field System.Windows.Forms.Control::text). So the maximum length is the max length of a string, which is "2 GB, or about 1 billion characters" (see System.String).
Why don't you try it and see?
It depends on your performance requirements. You could use the Lines property, but beware that every time you call it your entire text will be internally parsed into lines. If you're pushing the limits of content length this would be a bad idea. So the faster way (in terms of execution, not coding) would be to zip through the characters and count the CR/LFs. You of course need to decide what you consider a line ending.
Code: Enforce MaxLength property even when setting text programmatically:
using System;
using System.Windows.Forms;
namespace WindowsFormsApplication5 {
class TextBoxExt : TextBox {
new public void AppendText(string text) {
if (this.Text.Length == this.MaxLength) {
return;
} else if (this.Text.Length + text.Length > this.MaxLength) {
base.AppendText(text.Substring(0, (this.MaxLength - this.Text.Length)));
} else {
base.AppendText(text);
}
}
public override string Text {
get {
return base.Text;
}
set {
if (!string.IsNullOrEmpty(value) && value.Length > this.MaxLength) {
base.Text = value.Substring(0, this.MaxLength);
} else {
base.Text = value;
}
}
}
// Also: Clearing top X lines with high performance
public void ClearTopLines(int count) {
if (count <= 0) {
return;
} else if (!this.Multiline) {
this.Clear();
return;
}
string txt = this.Text;
int cursor = 0, ixOf = 0, brkLength = 0, brkCount = 0;
while (brkCount < count) {
ixOf = txt.IndexOfBreak(cursor, out brkLength);
if (ixOf < 0) {
this.Clear();
return;
}
cursor = ixOf + brkLength;
brkCount++;
}
this.Text = txt.Substring(cursor);
}
}
public static class StringExt {
public static int IndexOfBreak(this string str, out int length) {
return IndexOfBreak(str, 0, out length);
}
public static int IndexOfBreak(this string str, int startIndex, out int length) {
if (string.IsNullOrEmpty(str)) {
length = 0;
return -1;
}
int ub = str.Length - 1;
int intchr;
if (startIndex > ub) {
throw new ArgumentOutOfRangeException();
}
for (int i = startIndex; i <= ub; i++) {
intchr = str[i];
if (intchr == 0x0D) {
if (i < ub && str[i + 1] == 0x0A) {
length = 2;
} else {
length = 1;
}
return i;
} else if (intchr == 0x0A) {
length = 1;
return i;
}
}
length = 0;
return -1;
}
}
}
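A quick usage sketch for the class above (form wiring omitted; the values are only illustrative):
var box = new TextBoxExt { Multiline = true, MaxLength = 1000 };
box.AppendText(new string('x', 2000));   // silently truncated to MaxLength
box.Text = new string('y', 5000);        // also truncated by the overridden setter
box.ClearTopLines(5);                    // drop the first five lines, if any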
The theoretical limit is that of a string, ~2GB. However, in reality, it depends upon the conditions in your running process. It equates to the size of the largest available contiguous section of memory that a string can allocate at any given time. I have a textbox in an application that is erroring at about 450MB.
The Text property of System.Windows.Forms.TextBox is a string, so in theory it can be the max length of a string

ASP.NET request validation causes: is there a list?

Is anybody aware of a list of exactly what triggers ASP.NET's HttpRequestValidationException? [This is behind the common error: "A potentially dangerous Request.Form value was detected," etc.]
I've checked here, around the Web, and MSDN Library but can't find this documented. I'm aware of some ways to generate the error, but would like to have a complete list so I can guard against and selectively circumvent it (I know how to disable request validation for a page, but this isn't an option in this case).
Is it a case of "security through obscurity"?
Thanks.
[Note: Scripts won't load for me in IE8 (as described frequently in the Meta forum) so I won't be able to "Add comment."]
EDIT 1: Hi Oded, are you aware of a list that documents the conditions used to determine a "potentially malicious input string"? That's what I'm looking for.
EDIT 2: @Chris Pebble: Yeah, what you said. :)
I couldn't find a document outlining a conclusive list, but looking through Reflector and doing some analysis on use of HttpRequestValidationException, it looks like validation errors on the following can cause the request validation to fail:
A filename in one of the files POSTed to an upload.
The incoming request raw URL.
The value portion of the name/value pair from any of the incoming cookies.
The value portion of the name/value pair from any of the fields coming in through GET/POST.
The question, then, is "what qualifies one of these things as a dangerous input?" That seems to happen during an internal method System.Web.CrossSiteScriptingValidation.IsDangerousString(string, out int) which looks like it decides this way:
Look for < or & in the value. If it's not there, or if it's the last character in the value, then the value is OK.
If the & character is in a &# sequence (e.g., &#160; for a non-breaking space), it's a "dangerous string."
If the < character is part of <x (where "x" is any alphabetic character a-z), <!, </, or <?, it's a "dangerous string."
Failing all of that, the value is OK.
The System.Web.CrossSiteScriptingValidation type seems to have other methods in it for determining if things are dangerous URLs or valid JavaScript IDs, but those don't appear, at least through Reflector analysis, to result in throwing HttpRequestValidationExceptions.
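For illustration, a rough sketch of that heuristic as described above (my own approximation; the later answer quotes the real CrossSiteScriptingValidation source):
static bool LooksDangerous(string s)
{
    // A match needs a following character, so the last position can be skipped.
    for (int i = 0; i < s.Length - 1; i++)
    {
        char c = s[i], next = s[i + 1];
        if (c == '&' && next == '#')
            return true;                                   // &# character reference, e.g. &#160;
        bool letter = (next >= 'a' && next <= 'z') || (next >= 'A' && next <= 'Z');
        if (c == '<' && (letter || next == '!' || next == '/' || next == '?'))
            return true;                                   // <x, <!, </ or <?
    }
    return false;
}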
Update:
Warning: Some parts of the code in the original answer (below) were removed and marked as OBSOLETE.
Latest source code in Microsoft site (has syntax highlighting):
http://referencesource.microsoft.com/#System.Web/CrossSiteScriptingValidation.cs
After checking the newest code you will probably agree that what Travis Illig explained are the only validations used now in 2018 (and there seem to have been no changes since 2014, when the source was released on GitHub). But the old code below may still be relevant if you use an older version of the framework.
Original Answer:
Using Reflector, I did some browsing. Here's the raw code. When I have time I will translate this into some meaningful rules:
The HttpRequestValidationException is thrown by only a single method in the System.Web namespace, so it's rather isolated. Here is the method:
private void ValidateString(string s, string valueName, string collectionName)
{
int matchIndex = 0;
if (CrossSiteScriptingValidation.IsDangerousString(s, out matchIndex))
{
string str = valueName + "=\"";
int startIndex = matchIndex - 10;
if (startIndex <= 0)
{
startIndex = 0;
}
else
{
str = str + "...";
}
int length = matchIndex + 20;
if (length >= s.Length)
{
length = s.Length;
str = str + s.Substring(startIndex, length - startIndex) + "\"";
}
else
{
str = str + s.Substring(startIndex, length - startIndex) + "...\"";
}
throw new HttpRequestValidationException(HttpRuntime.FormatResourceString("Dangerous_input_detected", collectionName, str));
}
}
That method above makes a call to the IsDangerousString method in the CrossSiteScriptingValidation class, which validates the string against a series of rules. It looks like the following:
internal static bool IsDangerousString(string s, out int matchIndex)
{
matchIndex = 0;
int startIndex = 0;
while (true)
{
int index = s.IndexOfAny(startingChars, startIndex);
if (index < 0)
{
return false;
}
if (index == (s.Length - 1))
{
return false;
}
matchIndex = index;
switch (s[index])
{
case 'E':
case 'e':
if (IsDangerousExpressionString(s, index))
{
return true;
}
break;
case 'O':
case 'o':
if (!IsDangerousOnString(s, index))
{
break;
}
return true;
case '&':
if (s[index + 1] != '#')
{
break;
}
return true;
case '<':
if (!IsAtoZ(s[index + 1]) && (s[index + 1] != '!'))
{
break;
}
return true;
case 'S':
case 's':
if (!IsDangerousScriptString(s, index))
{
break;
}
return true;
}
startIndex = index + 1;
}
}
That IsDangerousString method appears to be referencing a series of validation rules, which are outlined below:
private static bool IsDangerousExpressionString(string s, int index)
{
if ((index + 10) >= s.Length)
{
return false;
}
if ((s[index + 1] != 'x') && (s[index + 1] != 'X'))
{
return false;
}
return (string.Compare(s, index + 2, "pression(", 0, 9, true, CultureInfo.InvariantCulture) == 0);
}
-
private static bool IsDangerousOnString(string s, int index)
{
if ((s[index + 1] != 'n') && (s[index + 1] != 'N'))
{
return false;
}
if ((index > 0) && IsAtoZ(s[index - 1]))
{
return false;
}
int length = s.Length;
index += 2;
while ((index < length) && IsAtoZ(s[index]))
{
index++;
}
while ((index < length) && char.IsWhiteSpace(s[index]))
{
index++;
}
return ((index < length) && (s[index] == '='));
}
-
private static bool IsAtoZ(char c)
{
return (((c >= 'a') && (c <= 'z')) || ((c >= 'A') && (c <= 'Z')));
}
-
private static bool IsDangerousScriptString(string s, int index)
{
int length = s.Length;
if ((index + 6) >= length)
{
return false;
}
if ((((s[index + 1] != 'c') && (s[index + 1] != 'C')) || ((s[index + 2] != 'r') && (s[index + 2] != 'R'))) || ((((s[index + 3] != 'i') && (s[index + 3] != 'I')) || ((s[index + 4] != 'p') && (s[index + 4] != 'P'))) || ((s[index + 5] != 't') && (s[index + 5] != 'T'))))
{
return false;
}
index += 6;
while ((index < length) && char.IsWhiteSpace(s[index]))
{
index++;
}
return ((index < length) && (s[index] == ':'));
}
So there you have it. It's not pretty to decipher, but it's all there.
How about this script? Your code cannot detect this script, right?
";}alert(1);function%20a(){//
Try this regular expression pattern.
You may need to escape the \ for JavaScript, e.g. \\
var regExpPattern = '[eE][xX][pP][rR][eE][sS][sS][iI][oO][nN]\\(|\\b[oO][nN][a-zA-Z]*\\b\\s*=|&#|<[!/a-zA-Z]|[sS][cC][rR][iI][pP][tT]\\s*:';
var re = new RegExp(regExpPattern, "gi");
var outString = null;
outString = re.exec(text);
Following on from Travis' answer, the list of 'dangerous' character sequences can be simplified as follows:
&#
<A through to <Z (upper and lower case)
<!
</
<?
Based on this, in an ASP.NET MVC web app the following Regex validation attribute can be used on a model field to trigger client-side validation before an HttpRequestValidationException is thrown when the form is submitted:
[RegularExpression(@"^(?![\s\S]*(&#|<[a-zA-Z!\/?]))[\s\S]*$", ErrorMessage = "This field does not support HTML or allow any of the following character sequences; &quot;&#&quot;, &quot;<A&quot; through to &quot;<Z&quot; (upper and lower case), &quot;<!&quot;, &quot;</&quot; or &quot;<?&quot;.")]
Note that validation attribute error messages are HTML encoded when output by server side validation, but not when used in client side validation, so this one is already encoded as we only intend to see it with client side validation.
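For context, a minimal sketch of such an attribute sitting on a model property (the class and property names are hypothetical):
public class CommentModel
{
    // Hypothetical view-model field guarded by the pattern from the answer above.
    [RegularExpression(@"^(?![\s\S]*(&#|<[a-zA-Z!\/?]))[\s\S]*$",
        ErrorMessage = "This field does not allow HTML or the character sequences &# or < followed by a letter, !, / or ?.")]
    public string Body { get; set; }
}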
From MSDN:
'The exception that is thrown when a potentially malicious input string is received from the client as part of the request data. '
Many times this happens when JavaScript changes the values of a server side control in a way that causes the ViewState to not agree with the posted data.
