I am trying to figure out the best way to create a function that is equivalent to String.Replace("oldValue","newValue");
that can handle surrogate pairs.
My concern is that if the string contains surrogate pairs, and the old value matches part of a surrogate pair, the replacement could split the pair and corrupt the data.
So my high level question is: Is String.Replace(string oldValue, string newValue); a safe operation when it comes to Unicode and surrogate pairs?
If not, what would be the best path forward? I am familiar with the StringInfo class that can split these strings into elements and such. I'm just unsure of how to go about the replace when passing in strings for the old and new values.
Thanks for the help!
It's safe, because strings in .NET are internally UTF-16. A Unicode code point is represented by one or two UTF-16 code units, and a .NET char is one such code unit.
When a code point is represented by two units, the first unit is called the high surrogate and the second the low surrogate. What's important in the context of this question is that surrogate units belong to a specific range, U+D800 - U+DFFF. This range is used only to represent surrogate pairs; a single unit in this range has no meaning on its own and is invalid.
For that reason, it's not possible for a valid UTF-16 string to match "part" of a surrogate pair in another valid UTF-16 string.
Note that a .NET string can also hold invalid UTF-16. If any argument to Replace is invalid, then it can indeed split a surrogate pair. But garbage in, garbage out, so I don't consider this a problem in this case.
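A minimal illustration (U+1F01C, a mahjong tile, is just an arbitrary character outside the BMP that needs a surrogate pair):
string s = "a\U0001F01Cb";
Console.WriteLine(s.Length); // 4: 'a' + the two-unit surrogate pair + 'b'

string r1 = s.Replace("a", "x");          // replacement around the pair
string r2 = s.Replace("\U0001F01C", "*"); // the pair only matches as a whole: "a*b"

Console.WriteLine(char.IsHighSurrogate(r1[1])); // True: the pair is intact
Console.WriteLine(char.IsLowSurrogate(r1[2]));  // True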
So I'm using Roslyn SyntaxFactory to generate C# code.
Is there a way for me to escape variable names when generating a variable name using IdentifierName(string)?
Requirements:
It would be nice if Unicode is supported but I suppose ASCII can suffice
It would be nice if it's reversible
Always same result for same input ("a" is always "a")
Unique result for each input ("a?"->"a_" cannot be same as "a!"->"a_")
A single special character may be converted to multiple ordinary characters
The implication from the API docs seems to be that it expects a valid C# identifier here, so Roslyn's not going to provide an escaping mechanism for you. Therefore, it falls to you to define a string transformation such that it achieves what you want.
The way to do this would be to look at how other things already do it. Look at HTML entities, which are always introduced using &. They can always be distinguished easily, and there's a way to encode a literal & as well so that you don't restrict your renderable character set. Or consider how C# strings allow you to include string delimiters and other special characters in the string through the use of \.
You need to pick a character which is valid in C# identifiers to be your 'marker' for a sequence which represents one of the non-identifier characters you want to encode, and a way to allow that character to also be represented. Then make a mapping table for what comes after the marker for each of the encoded characters. If you want to do all of Unicode, the easiest way is probably to just use Unicode codepoint numbers. The resulting identifiers might not be very readable, but maybe that doesn't matter in your use case.
Once you have a suitable system worked out, it should be pretty straightforward to write a string transformation function which implements it.
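A sketch of what such a scheme might look like in C# (the '_' marker, the 'u' prefix, and the trailing '_' terminator are arbitrary choices for illustration, not anything Roslyn prescribes; it keeps ASCII letters and digits and encodes everything else by code point):
using System.Text;

static string EscapeIdentifier(string name)
{
    // '_' is the marker; a literal '_' becomes "__" so the scheme stays
    // reversible, and every other non-identifier character becomes
    // "_u<hex code point>_".
    var sb = new StringBuilder();
    int i = 0;
    while (i < name.Length)
    {
        int cp = char.ConvertToUtf32(name, i);
        i += char.IsSurrogatePair(name, i) ? 2 : 1;

        if (cp == '_')
            sb.Append("__");
        else if ((cp >= 'a' && cp <= 'z') || (cp >= 'A' && cp <= 'Z') || (cp >= '0' && cp <= '9'))
            sb.Append((char)cp);
        else
            sb.Append("_u").Append(cp.ToString("X")).Append('_'); // e.g. '?' -> "_u3F_"
    }
    return sb.ToString();
}
With this scheme "a?" becomes "a_u3F_" while "a!" becomes "a_u21_", so distinct inputs always yield distinct identifiers, and the output can be decoded back unambiguously.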
I have string that contains an odd Unicode space character, but I'm not sure what character that is. I understand that in C# a string in memory is encoded using the UTF-16 format. What is a good way to determine which Unicode characters make up the string?
This question was marked as a possible duplicate of
Determine a string's encoding in C#
It's not a duplicate of that question because I'm not asking what the encoding is. I already know that a string in C# is encoded as UTF-16. I'm just asking for an easy way to determine the Unicode values in the string.
The BMP characters each fit in 2 bytes (values 0x0000-0xFFFF), so there's a good bit of coverage there. Characters from the Chinese, Thai, and even Mongolian alphabets live there, so if you're not an encoding expert, you might be forgiven if your code only handles BMP characters. But all the same, characters like U+10330 (http://www.fileformat.info/info/unicode/char/10330/index.htm) won't be correctly handled by code that assumes every character fits into two bytes.
Unicode identifies characters as numeric code points. Not all code points actually refer to characters, however, because Unicode has the concept of combining characters (which I don't know much about). Still, every Unicode string, even an invalid one (e.g., a string with an unpaired surrogate), can be thought of as a list of code points (numbers).
In the UTF-16 encoding, each code point is encoded as a 2-byte or 4-byte sequence. In .NET, a Char corresponds to either a 2-byte UTF-16 sequence or half of a 4-byte UTF-16 sequence. When a Char contains half of a 4-byte sequence, it is considered a "surrogate" because it only has meaning when combined with the other Char it must be kept with. To get started with inspecting your .NET string, you can get .NET to tell you the code points contained in the string, automatically combining surrogate pairs together where necessary. .NET provides Char.ConvertToUtf32, which is described as follows:
Converts the value of a UTF-16 encoded character or surrogate pair at a specified position in a string into a Unicode code point.
The documentation for Char.ConvertToUtf32(String s, Int32 index) states that an ArgumentException is thrown for the following case:
The specified index position contains a surrogate pair, and either the first character in the pair is not a valid high surrogate or the second character in the pair is not a valid low surrogate.
Thus, you can go character by character in a string and find all of the Unicode code points with the help of Char.IsHighSurrogate() and Char.ConvertToUtf32(). When you don't encounter a high surrogate, the current character fits in one Char and you only need to advance one Char in your string. If you do encounter a high surrogate, the character requires two Chars and you need to advance by two:
// Enumerates the Unicode code points in a string, combining each
// surrogate pair into a single code point.
static IEnumerable<int> GetCodePoints(string s)
{
    // A high surrogate means this code point occupies two chars.
    for (var i = 0; i < s.Length; i += char.IsHighSurrogate(s[i]) ? 2 : 1)
    {
        yield return char.ConvertToUtf32(s, i);
    }
}
When you say "from a UTF-16 String", that might imply that you have read in a series of bytes formatted as UTF-16. If that is the case, you would need to convert them to a .NET string before passing it to the above method (note that Encoding.Unicode is .NET's UTF-16 encoding):
GetCodePoints(Encoding.Unicode.GetString(myUtf16Blob));
Another note: depending on how you build your String instance, it is possible that it contains an illegal sequence of Chars with regard to surrogate pairs. For such strings, Char.ConvertToUtf32() will throw an exception when it encounters one. However, I think that Encoding.GetString() will always either return a valid string or throw an exception. So, generally, as long as your String instances come from "good" sources, you needn't worry about Char.ConvertToUtf32() throwing (unless you pass in arbitrary values for the index offset, because your offset might land in the middle of a surrogate pair).
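For example, printing the code points of a string containing the U+1F01C surrogate pair:
foreach (int cp in GetCodePoints("a\U0001F01Cb"))
    Console.WriteLine("U+{0:X4}", cp);
// U+0061
// U+1F01C
// U+0062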
Let's assume that we have an encrypted byte stream with a suspected decrypting key. I want to decrypt the message with the key and validate the result.
How to validate result?
The only known thing about the plain text is it should contain a human language paragraph (one or more). We cannot assume anything more from this text.
I want to develop/use an algorithm that will test the output of the decryption and give me a prediction of whether the decryption was successful or not.
The algorithm must work with all human languages (won't be specific for one language).
Is this possible? What do you think?
Step 0
Decrypt the cipher-text (encrypted) byte array to obtain a plain-text (decrypted) byte array.
If authenticated encryption is used, then decrypting with the wrong key will fail outright.
If proper padding (PKCS#7/PKCS#5) is used, then decrypting with the wrong key will fail with very high probability, because the padding will not decrypt properly.
Step 1
Decode the byte array into a char array using the proper character encoding and DecoderExceptionFallback (CodingErrorAction.REPORT in Java).
If the decrypted byte array contains a sequence of bytes that does not represent a valid character, decoding will fail. Assuming the original data is proper text in the chosen encoding, the decrypted byte array will contain invalid byte sequences only if the wrong key was used.
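In C# this looks roughly like the following sketch (decryptedBytes is assumed to be the output of Step 0, and UTF-8 is an assumed encoding):
using System.Text;

// An encoding that throws instead of silently substituting U+FFFD
// when it meets an invalid byte sequence.
Encoding strictUtf8 = Encoding.GetEncoding(
    "utf-8",
    EncoderFallback.ExceptionFallback,
    DecoderFallback.ExceptionFallback);

string text = null;
try
{
    text = strictUtf8.GetString(decryptedBytes); // decryptedBytes from Step 0
}
catch (DecoderFallbackException)
{
    // Invalid byte sequence: almost certainly the wrong key.
}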
Step 2
Actually, the first two steps will expose a wrong key with very high probability.
Now, in the unlikely situation where a wrong key was used but decryption miraculously produced properly padded data, and the decoded data contained only valid byte sequences for the selected character encoding, you have textual data and can use two simple (but still empirical) ideas that do not require dictionaries or online access:
In most sane natural languages the words are separated by white-space.
In most sane natural languages the words are made of letters.
The Unicode General Category property is very helpful in determining the type of a character without being specific to a single language, and most regex implementations allow you to specify the regex pattern in terms of Unicode categories.
First, split the text by Separator and Punctuation Unicode categories. The result is a list of "words" devoid of white-space and punctuation.
Second, match each word with a Letter+ pattern. The ratio of words that match to words that do not will be high for any natural text. It can be high for specifically constructed text-like gibberish too, but it will certainly be low for a random sequence of characters.
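A sketch of both checks in C# (the 0.8 threshold is an arbitrary illustrative value, not a calibrated one):
using System.Linq;
using System.Text.RegularExpressions;

static bool LooksLikeNaturalText(string text)
{
    // First, split on the Separator (Z) and Punctuation (P) categories.
    string[] words = Regex.Split(text, @"[\p{Z}\p{P}]+")
                          .Where(w => w.Length > 0)
                          .ToArray();
    if (words.Length == 0)
        return false;

    // Second, count the words made entirely of Letter-category characters.
    int letterWords = words.Count(w => Regex.IsMatch(w, @"^\p{L}+$"));
    return letterWords / (double)words.Length > 0.8; // arbitrary threshold
}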
You can analyse the text and calculate the letter frequencies. If the letter frequencies are way off the chart, you can say that the decryption has gone wrong. If you combine this with the occurrence of spaces, you have a reasonably solid way of telling whether the decryption was successful.
Wikipedia on Letter frequency
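A rough sketch of this idea (the thresholds and the helper name are illustrative guesses, not calibrated values):
using System.Linq;

static bool FrequenciesLookNatural(string text)
{
    if (text.Length == 0)
        return false;

    // In most natural-language text, roughly every fifth character is a
    // space (average word length of about four to eight letters).
    double spaceRate = text.Count(char.IsWhiteSpace) / (double)text.Length;

    // Letter frequencies in natural text are heavily skewed: a handful of
    // letters dominate. A near-uniform distribution suggests random data.
    var letters = text.Where(char.IsLetter).Select(char.ToLowerInvariant).ToList();
    if (letters.Count == 0)
        return false;
    double topShare = letters.GroupBy(c => c)
                             .OrderByDescending(g => g.Count())
                             .Take(5)
                             .Sum(g => g.Count()) / (double)letters.Count;

    return spaceRate > 0.05 && spaceRate < 0.35 && topShare > 0.3; // illustrative thresholds
}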
There is no way to tell whether a bytestream contains a message in a human language without any more assumptions. First and foremost, you absolutely need to know the encoding (or a few possible encodings).
Then, I am 99.9% sure that there is no generic way to infer whether a large group of (e.g.) ASCII chars has meaning in any human language without the use of some kind of dictionary. If you could narrow it down to a language family, maybe you could detect grammatical constructs - but I am really just speculating. Even if it is possible, designing the heuristics will not be a trivial task.
That said, I can only second the suggestions in the comments: Use wikipedia! Create your own dictionary from it or use it online - either way, I believe it's your best bet.
I saw this post on Jon Skeet's blog where he talks about string reversing. I wanted to try the example he showed myself, but it seems to work... which leads me to believe that I have no idea how to create a string that contains a surrogate pair which will actually cause the string reversal to fail. How does one actually go about creating a string with a surrogate pair in it so that I can see the failure myself?
The simplest way is to use \U######## where the U is capital and the #s denote exactly eight hexadecimal digits. If the value exceeds 0000FFFF hexadecimal, a surrogate pair will be needed:
string myString = "In the game of mahjong \U0001F01C denotes the Four of circles";
You can check myString.Length to see that the one Unicode character occupies two .NET Char values. Note that the char type has a couple of static methods that will help you determine if a char is a part of a surrogate pair.
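For example, inspecting the two Char values of U+1F01C (D83C DC1C is its UTF-16 encoding):
foreach (char c in "\U0001F01C")
    Console.WriteLine("{0:X4} high={1} low={2}", (int)c, char.IsHighSurrogate(c), char.IsLowSurrogate(c));
// D83C high=True low=False
// DC1C high=False low=True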
If you use a .NET language that does not have something like the \U######## escape sequence, you can use the method ConvertFromUtf32, for example:
string fourCircles = char.ConvertFromUtf32(0x1F01C);
Addition: If your C# source file has an encoding that allows all Unicode characters, like UTF-8, you can just put the character directly in the file (by copy-paste). For example:
string myString = "In the game of mahjong 🀜 denotes the Four of circles";
The character is UTF-8 encoded in the source file (in my example) but will be UTF-16 encoded (surrogate pairs) when the application runs and the string is in memory.
(Not sure if Stack Overflow software handles my mahjong character correctly. Try clicking "edit" to this answer and copy-paste from the text there, if the "funny" character is not here.)
The term "surrogate pair" refers to a means of encoding Unicode characters with high code-points in the UTF-16 encoding scheme (see this page for more information);
In the Unicode character encoding, characters are mapped to values between 0x000000 and 0x10FFFF. Internally, a UTF-16 encoding scheme is used to store strings of Unicode text in which two-byte (16-bit) code sequences are considered. Since two bytes can only contain the range of characters from 0x0000 to 0xFFFF, some additional complexity is used to store values above this range (0x010000 to 0x10FFFF).
This is done using pairs of code points known as surrogates. The surrogate characters are classified in two distinct ranges known as low surrogates and high surrogates, depending on whether they are allowed at the start or the end of the two-code sequence.
Try this yourself:
String surrogate = "abc" + Char.ConvertFromUtf32(Int32.Parse("2A601", NumberStyles.HexNumber)) + "def";
Char[] surrogateArray = surrogate.ToCharArray();
Array.Reverse(surrogateArray);
String surrogateReversed = new String(surrogateArray);
or this, if you want to stick with the blog example (note that U+0301 is a combining accent rather than a surrogate pair, but naive reversal mangles it just the same):
String surrogate = "Les Mise" + Char.ConvertFromUtf32(Int32.Parse("0301", NumberStyles.HexNumber)) + "rables";
Char[] surrogateArray = surrogate.ToCharArray();
Array.Reverse(surrogateArray);
String surrogateReversed = new String(surrogateArray);
And then check the string values in the debugger. Jon Skeet is damn right... strings and dates seem easy but they are absolutely NOT.
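A reversal that survives surrogate pairs (and combining sequences) has to operate on text elements rather than Chars. A sketch using System.Globalization.StringInfo:
using System.Globalization;
using System.Text;

static string ReverseTextElements(string s)
{
    // ParseCombiningCharacters returns the start index of every text
    // element (grapheme); copy the elements back in reverse order.
    int[] starts = StringInfo.ParseCombiningCharacters(s);
    var sb = new StringBuilder(s.Length);
    for (int i = starts.Length - 1; i >= 0; i--)
    {
        int end = i + 1 < starts.Length ? starts[i + 1] : s.Length;
        sb.Append(s, starts[i], end - starts[i]);
    }
    return sb.ToString();
}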
I'm looking for a way to count characters that are formed by more than one char, but I have found no solution online!
For example, I want to count the string "வாழைப்பழம". It actually consists of 6 Tamil characters, but it is 9 characters long when we measure the length the normal way. I am wondering whether Tamil is the only script that causes this problem, and whether there is a solution to it. I'm currently looking for a solution in C#.
Thank you in advance =)
Use StringInfo.LengthInTextElements:
var text = "வாழைப்பழம";
Console.WriteLine(text.Length); // 9
Console.WriteLine(new StringInfo(text).LengthInTextElements); // 6
The explanation for this behaviour can be found in the documentation of String.Length:
The Length property returns the number of Char objects in this instance, not the number of Unicode characters. The reason is that a Unicode character might be represented by more than one Char. Use the System.Globalization.StringInfo class to work with each Unicode character instead of each Char.
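If you need to iterate over the individual text elements rather than just count them, StringInfo provides an enumerator as well:
using System.Globalization;

var enumerator = StringInfo.GetTextElementEnumerator("வாழைப்பழம");
while (enumerator.MoveNext())
    Console.WriteLine(enumerator.GetTextElement()); // prints the 6 Tamil characters, one per line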
A minor nitpick: strings in .NET use UTF-16, not UTF-8
When you're talking about the length of a string, there are several different things you could mean:
Length in bytes. This is the old C way of looking at things, usually.
Length in Unicode code points. This gets you closer to modern times and should be how string lengths are treated, except it isn't.
Length in UTF-8/UTF-16 code units. This is the most common interpretation, deriving from 1. Certain characters take more than one code unit in those encodings, which complicates things if you don't expect it.
Count of visible “characters” (graphemes). This is usually what people mean when they say characters or length of a string.
In your case your confusion stems from the difference between 4. and 3. 3. is what C# uses, 4. is what you expect. Complex scripts such as Tamil use ligatures and diacritics. Ligatures are contractions of two or more adjacent characters into a single glyph – in your case ழை is a ligature of ழ and ை – the latter of which changes the appearance of the former; வா is also such a ligature. Diacritics are ornaments around a letter, e.g. the accent in à or the dot above ப்.
The two cases I mentioned both result in a single grapheme (what you perceive as a single character), yet each of them needs two actual Chars. So you end up with three more code points in the string than you have graphemes.
One thing to note: For your case the distinction between 2. and 3. is irrelevant, but generally you should keep it in mind.
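To make the four interpretations concrete, here is a quick sketch (requires System.Text and System.Globalization; the counts in the comments are for the Tamil string from the question):
string text = "வாழைப்பழம";

Console.WriteLine(Encoding.UTF8.GetBytes(text).Length);        // 1. bytes in UTF-8: 27
int codePoints = 0;
for (int i = 0; i < text.Length; i += char.IsHighSurrogate(text[i]) ? 2 : 1)
    codePoints++;
Console.WriteLine(codePoints);                                 // 2. code points: 9 (no surrogates here)
Console.WriteLine(text.Length);                                // 3. UTF-16 code units: 9
Console.WriteLine(new StringInfo(text).LengthInTextElements);  // 4. graphemes: 6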