Issue identifying the form feed character in C# code when reading a file
string contents = File.ReadAllText(file);
I have attempted to read the file with various encodings and then run a replace using the character's UTF-8 and UTF-32 hex values.
In the watch window I see the '\f' character, but when I expand the visualizer I see the actual female character.
How do you identify which is the correct character to search for: the \f or some variation of the female sign?
I have looked at this site for the character's various encoding values, with no luck actually finding it in C#: www.fileformat.info/info/unicode/char/2640/index.htm
Your question is a little vague on whether you are trying to find the character \f or the ♀ character.
If you are trying to find the ♀ character, you can use the hexadecimal code 0x2640, or simply use the character as-is:
var ctn = File.ReadAllText("file.txt", Encoding.UTF8);
int pos = ctn.IndexOf((char)0x2640);
int pos1 = ctn.IndexOf('♀');
Clarification: I think the confusion comes from the fact that ALT+12 and ALT+2640 often produce the same 'Female Sign' glyph, but this is for historical reasons: code point 12 is the ASCII form feed control code, which legacy IBM/OEM code pages such as CP437 happen to draw as ♀. Only the ALT+2640 Unicode character (U+2640) is specifically designed to always produce the ♀ sign.
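A quick way to convince yourself that these are two different code points (a minimal sketch):
// '\f' is the form feed control character, U+000C;
// '♀' is the female sign, U+2640. They only look alike because
// legacy OEM fonts draw a ♀ glyph for code point 12.
Console.WriteLine((int)'\f');        // 12
Console.WriteLine((int)'\u2640');    // 9792
Console.WriteLine('\f' == '\u2640'); // False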
So, I re-ran everything this morning with the following combination of UTF-8 encoding and searching for "\f":
string contents = File.ReadAllText(file, Encoding.UTF8);
int pos = contents.IndexOf("\f");
and finally got a hit.
I still don't know why the watch window and the visualizer display the character differently (presumably the visualizer renders the U+000C control character with a legacy glyph), but that combination of searching works.
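If you want to be certain which code point is actually in a file, one hedged diagnostic is to dump the code point of every non-printable character (file is the same variable as above):
string contents = File.ReadAllText(file, Encoding.UTF8);
foreach (char c in contents)
{
    // Print the code point of anything outside printable ASCII;
    // a form feed shows up as U+000C, a female sign as U+2640.
    if (c < 0x20 || c > 0x7E)
        Console.WriteLine($"U+{(int)c:X4}");
}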
Thanks everyone.
Related
I'm parsing a number of text files that contain 99.9% ASCII characters: numbers, basic punctuation and letters A-Z (upper and lower case).
The files also contain names, which occasionally contain characters from the extended ASCII character set, for example the umlaut Ü and the cedilla ç.
I want to work only with standard ASCII, so I handle these extended characters by running any names through a series of simple Replace() calls...
myString = myString.Replace("ç", "c");
myString = myString.Replace("Ü", "U");
This works with all the strange characters I want to replace except for Ø (capital O with a forward slash through it). I think this has the decimal equivalent of 157.
If I process the string character by character using ToInt32() on each character, it claims the decimal equivalent is 65533, well outside the normal range of extended ASCII codes.
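(65533 is 0xFFFD, the Unicode replacement character, which hints that the original byte was already lost when the file was read. A quick check, as a sketch, with myString standing in for one of the lines:)
foreach (char c in myString)
{
    if (c == '\uFFFD') // 65533: the byte was destroyed at read time
        Console.WriteLine("found a replacement character");
}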
Questions
Why doesn't myString.Replace("Ø", "O"); work on this character?
How can I replace "Ø" with "O"?
Other information that may be pertinent: opening the file with Notepad shows the character as "Ø". Comparison with other sources indicates that the data is correct (i.e. the full string is "Jørgensen", a valid Danish name). Viewing the character in Visual Studio shows it as "�". I'm getting exactly the same problem (with this one character) in hundreds of different files, and I can happily replace all the other extended characters I encounter without problems. I'm using System.IO.File.ReadAllLines() to read all the lines into an array of strings for processing.
Replace works fine for the 'Ø' when it 'knows' about it:
Console.WriteLine("Jørgensen".Replace("ø", "o"));
In your case the problem is that you are reading the data with the wrong encoding; that's why the string does not contain the character you are trying to replace.
Ø is part of the extended ASCII set (ISO-8859-1), but File.ReadAllLines tries to detect the encoding using BOM characters and, I suspect, falls back to UTF-8 in your case (see Remarks in the documentation).
You can see the same behavior in VS Code: it opens the file with UTF-8 encoding and shows you �. If you switch the encoding to the correct one, it shows the text correctly.
If you know what encoding is used for your files, just use it explicitly. Here is an example to illustrate the difference:
// prints J?rgensen - the Ø was decoded as U+FFFD, so Replace never matches
File.ReadAllLines("data.txt")
    .Select(l => l.Replace("Ø", "O"))
    .ToList()
    .ForEach(Console.WriteLine);

// prints Jorgensen
File.ReadAllLines("data.txt", Encoding.GetEncoding("iso-8859-1"))
    .Select(l => l.Replace("Ø", "O"))
    .ToList()
    .ForEach(Console.WriteLine);
If you want to stick to the default ASCII set, you may convert all the special characters from the extended set to the base one (it will be ugly and non-trivial). A common approach is String.Normalize() combined with stripping the combining marks; this thread has several other suggestions.
// Requires: using System.Text; and using System.Globalization;
public static string RemoveDiacritics(string s)
{
    // FormD decomposes characters like é into a base letter
    // plus a combining (non-spacing) mark...
    var normalizedString = s.Normalize(NormalizationForm.FormD);
    var stringBuilder = new StringBuilder();
    for (var i = 0; i < normalizedString.Length; i++)
    {
        var c = normalizedString[i];
        // ...and here we keep everything except those combining marks
        if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
            stringBuilder.Append(c);
    }
    return stringBuilder.ToString();
}
...
// Note: ø/Ø have no canonical decomposition, so FormD alone leaves
// them untouched; handle them with an explicit Replace first.
// prints Jorgensen
File.ReadAllLines("data.txt", Encoding.GetEncoding("iso-8859-1"))
    .Select(l => l.Replace("Ø", "O").Replace("ø", "o"))
    .Select(RemoveDiacritics)
    .ToList()
    .ForEach(Console.WriteLine);
I'd strongly recommend reading C# in Depth: Unicode by Jon Skeet and Programming with Unicode by Victor Stinner to get a much better understanding of what's going on :) Good luck.
PS. My code example is functional and compact, but pretty inefficient; if you parse huge files, consider another solution.
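For large files, a lower-allocation variant might look like the sketch below (assuming the same ISO-8859-1 input as above); it streams line by line instead of materializing intermediate lists:
using (var reader = new StreamReader("data.txt", Encoding.GetEncoding("iso-8859-1")))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // Replace the non-decomposable ø/Ø first, then strip combining marks
        Console.WriteLine(RemoveDiacritics(line.Replace("Ø", "O").Replace("ø", "o")));
    }
}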
Using the uGUI Text component, I'm getting "replacement characters" aka � and I can't find a way to remove them.
I'm getting a string from the Instagram API which contains Unicode characters for non-alphabet languages (Japanese, for example), which I need.
However, the Unicode characters for the emojis come in as replacement characters, aka �.
I don't require the emojis and they can be stripped out however, I can't find a method to do this.
I'm unable to use TextMeshPro as I'm unable to generate a font asset with all the Unicode characters needed to display the various languages (this could be user error, but when I try, the process hangs).
I notice these � characters don't appear in the Inspector or console so there must be a way to ignore or remove them.
I'm setting the string like this
body.text = System.Uri.UnescapeDataString(postData.text);
I've tried a number of things that haven't worked including
body.text = body.text.Replace('\uFFFD','\'');//doesn't work
body.text = Regex.Replace(body.text, @"^[\ufffd]", string.Empty); //doesn't work
I've also tried breaking up the string as a char array. When I try to print to the console, I get this error when it hits a replacement character:
foreach (char item in postData.text.ToCharArray())
print(item); //Error: UTF-16 to UTF-8 conversion failed because the input string is invalid
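The conversion error happens because an isolated half of a UTF-16 surrogate pair is not valid on its own, and printing characters one at a time splits those pairs. A hedged workaround sketch is to skip surrogate halves entirely:
foreach (char item in postData.text.ToCharArray())
{
    // Lone surrogate halves can't be converted to UTF-8 by themselves,
    // so skip both halves of each pair before printing.
    if (!char.IsSurrogate(item))
        print(item);
}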
Any help with this would be greatly appreciated!
Thank you.
Unity 2018.4.4, C#
Found the answer!
This post provided a solution: How do I remove emoji characters from a string?
body.text = Regex.Replace(body.text, @"\p{Cs}", "");
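\p{Cs} matches the Unicode "surrogate" category, i.e. the UTF-16 code units that .NET strings use in pairs to store characters outside the Basic Multilingual Plane, which is where emoji live. If you'd rather avoid Regex, a rough equivalent is the sketch below (requires using System.Text;):
var sb = new StringBuilder(body.text.Length);
foreach (char c in body.text)
{
    if (!char.IsSurrogate(c)) // drop both halves of each emoji's surrogate pair
        sb.Append(c);
}
body.text = sb.ToString();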
I have a string that I receive from a third party app and I would like to display it correctly in any language using C# on my Windows Surface.
Due to incorrect encoding, a piece of my string, which is Farsi (Persian-Arabic), looks like this:
Ù…Ø¯Ù„-Ø±Ù†Ú¯-Ù…ÙˆÛŒ-Ø¬Ø¯ÛŒØ¯-5-436x500
whereas it should look like this:
مدل-رنگ-موی-جدید-5-436x500
This link converts it correctly:
http://www.ltg.ed.ac.uk/~richard/utf-8.html
How can I do it in C#?
It is very hard to tell exactly what is going on from the description of your question. We would all be much better off if you provided an example of what is happening using a single character instead of a whole string, and if you chose an example character which does not belong to some exotic character set, for example the bullet character (U+2022) or something like that.
Anyhow, what is probably happening is this:
The letter "ر" is represented in UTF-8 as the byte sequence D8 B1, but what you see is "Ø±", and that's because in UTF-16 Ø is U+00D8 and ± is U+00B1. So, the incoming text was originally in UTF-8, but in the process of importing it into a .NET Unicode string in your application it was incorrectly interpreted as being in some 8-bit character set such as ANSI or Latin-1. That's why you now have a Unicode string which appears to contain garbage.
However, the process of converting 8-bit characters to Unicode is for the most part not destructive, so all of the information is still there, that's why the UTF-8 tool that you linked to can still kind of make sense out of it.
What you need to do is convert the string back to an array of ANSI (or Latin-1, whatever) bytes, and then re-construct the string the right way, which is a conversion of UTF-8 to Unicode.
I cannot easily reproduce your situation, so here are some things to try:
byte[] bytes = System.Text.Encoding.GetEncoding(1252).GetBytes(garbledUnicodeString); // code page 1252 is the "ANSI" code page on western Windows systems
followed by
string properUnicodeString = System.Text.Encoding.UTF8.GetString( bytes );
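Putting the two steps together as a hedged sketch (FixMojibake is an illustrative name, not an existing API; on .NET Core / .NET 5+ you may need to register CodePagesEncodingProvider before code page 1252 is available):
using System.Text;

static string FixMojibake(string garbled)
{
    // Undo the wrong decode: recover the original bytes as if the text
    // had been read as Windows-1252...
    byte[] bytes = Encoding.GetEncoding(1252).GetBytes(garbled);
    // ...then decode those bytes the way they were actually encoded: UTF-8.
    return Encoding.UTF8.GetString(bytes);
}

// FixMojibake("Ø±") returns "ر"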
I'm new to programming and self-taught. I'm trying to output the astrological symbol for Taurus, which is supposed to be U+2649 in Unicode. Here is the code I'm using:
string myString = "\u2649";
byte[] unicode = System.Text.Encoding.Unicode.GetBytes(myString);
Console.WriteLine(unicode.Length);
The result I'm getting is the number 2 instead of the symbol or font. I'm sure I'm doing something wrong.
Why are you converting it to bytes? That will not do anything; lose the conversion and do the following:
string a = "\u2649";
Console.Write(a);
You need to have a font which displays that glyph. If you do, then:
Console.WriteLine(myString);
is all you need.
EDIT: Note, the only font I could find which has this glyph is "MS Reference Sans Serif".
The length of the Unicode character, in bytes, is 2 and you are writing the Length to the Console.
Console.WriteLine(unicode.Length);
If you want to display the actual character, then you want:
Console.WriteLine(myString);
You must be using a font that has that Unicode range for it to display properly.
UPDATE:
Using the default console font, the above Console.WriteLine(myString) will output a ? character, because the font has no glyph for \u2649. As far as I have googled, there is no easy way to make the console display Unicode characters that are not already part of the system code pages or the font you chose for the console.
It may be possible to change the font used by the console: Changing Console Fonts
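On more recent Windows/.NET versions, switching the console's output encoding to UTF-8 sometimes gets you further (a sketch; the character still only renders if the selected console font actually contains the glyph):
using System;
using System.Text;

class Program
{
    static void Main()
    {
        // Emit UTF-8 so the U+2649 code point survives the console pipeline;
        // whether it renders still depends on the console font.
        Console.OutputEncoding = Encoding.UTF8;
        Console.WriteLine("\u2649");
    }
}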
You are outputting the length of the character in bytes. The default console fonts and code page generally cannot display this character, however, so it will come out as a '?' character.
I'm writing a program that reads all the text in a file into a string, loops over that string looking at the characters, and then appends the characters back to another string using a StringBuilder. The issue I'm having is that when it's written back out, special characters such as “ and ” come out looking like � characters instead. I don't need to do a conversion; I just want it written back out the way I read it in:
StringBuilder sb = new StringBuilder();
string text = File.ReadAllText(filePath);
for (int i = 0; i < text.Length; ++i) {
    if (text[i] != '{') { // looking for opening curly brace
        sb.Append(text[i]);
        continue;
    }
    // Do stuff
}
File.WriteAllText(destinationFile, sb.ToString());
I tried using different Encodings (UTF-8, UTF-16, ASCII), but then it just came out even worse; I started getting question mark symbols and Chinese characters (yes, a bit of a shotgun approach, but I was just experimenting).
I did read this article: http://www.joelonsoftware.com/articles/Unicode.html
...but it didn't really explain why I was seeing what I saw, unless in C#, the reader starts cutting off bits when it hits weird characters like that. Thanks in advance for any help!
TL;DR: that is definitely not UTF-8, and you are not even using UTF-8 to read the resulting file. Read as Windows-1252, write as Windows-1252 (if you are going to use the same viewer for the resulting file).
Well, let's first just say that there is no way a file made by a regular user will be in UTF-8. Not all programs in Windows even support it (Excel, Notepad...), let alone have it as the default encoding (even most developer tools don't default to UTF-8, which drives me insane). Since a lot of developers don't understand that such a thing as encoding even exists, what chance do regular users have of saving their files in a UTF-8-hostile environment?
This is where your problems first start. According to the documentation, the overload you are using, File.ReadAllText(filePath), can only detect UTF-8 or UTF-32.
Indeed, simply reading a file encoded normally in Windows-1252 that contains "a”a" results in the string "a�a", where � is the Unicode replacement character (read the Wikipedia section on it; it describes exactly the situation you are in!) used to replace invalid bytes. When that replacement character is in turn encoded as UTF-8 and the file is again viewed as Windows-1252, you see ï¿½, because the UTF-8 bytes for � are 0xEF, 0xBF, 0xBD, which are the bytes for ï¿½ in Windows-1252.
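A quick sketch to verify that byte-level round trip (on .NET Core / .NET 5+ the 1252 code page may require registering CodePagesEncodingProvider first):
using System;
using System.Text;

class Demo
{
    static void Main()
    {
        // U+FFFD encoded as UTF-8 yields three bytes...
        byte[] bytes = Encoding.UTF8.GetBytes("\uFFFD");
        Console.WriteLine(BitConverter.ToString(bytes)); // EF-BF-BD
        // ...which, decoded as Windows-1252, display as ï¿½
        Console.WriteLine(Encoding.GetEncoding(1252).GetString(bytes));
    }
}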
So read it as Windows-1252 and you're half-way there:
Encoding windows1252 = Encoding.GetEncoding("Windows-1252");
String result = File.ReadAllText(@"C:\myfile.txt", windows1252);
Console.WriteLine(result); //Correctly prints "a”a" now
Because you saw ï¿½, the tool you are viewing the newly made file with is also interpreting it as Windows-1252. So if the goal is to have the file show correct characters in that tool, you must encode the output as Windows-1252:
Encoding windows1252 = Encoding.GetEncoding("Windows-1252");
File.WriteAllText(@"C:\myFile", sb.ToString(), windows1252);
Chances are the text will be UTF-8.
File.ReadAllText(filePath, Encoding.UTF8)
coupled with
File.WriteAllText(destinationFile, sb.ToString(), Encoding.UTF8)
should cover dealing with the Unicode characters. Do both or neither; if you do only one, you're going to get garbage output.
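Putting it together as a sketch (filePath and destinationFile are the question's variables; the loop body stands in for the question's processing):
StringBuilder sb = new StringBuilder();
string text = File.ReadAllText(filePath, Encoding.UTF8);
foreach (char c in text)
{
    if (c != '{')
        sb.Append(c);
    // else: do stuff
}
// Write with the same explicit encoding so nothing is reinterpreted.
File.WriteAllText(destinationFile, sb.ToString(), Encoding.UTF8);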