Japanese character in AdaptiveCard Bot Framework V4 - c#

I have been trying to print a simple card with a Japanese character but it keeps displaying boxes and unknown characters.
This is how I create my adaptive card; I keep the parameters and data in a JSON file, just to keep it neat:
string[] paths = { ".", "Cards", "pickLanguageCard.json" };
string fullPath = Path.Combine(paths);
var adaptiveCard = File.ReadAllText(fullPath);
return new Attachment()
{
    ContentType = "application/vnd.microsoft.card.adaptive",
    Content = JsonConvert.DeserializeObject(adaptiveCard),
};
Picture of the printed output:
As you can see, the returned JSON data is also wrong, which pins the problem down to the bot itself. I tried re-saving the JSON file containing the Japanese characters and also changed the encoding in web.config, but neither solved my problem. Back in Bot Framework v3 there was no problem printing/displaying Japanese characters, but when I tried v4 the Japanese characters come out garbled like that.
Any fix, solution, or workaround will be appreciated. Thanks.
Edit:
Tried using the encoding parameter of ReadAllText (Encoding.UTF8, Encoding.UTF32, Encoding.Unicode). With UTF8, other Japanese characters do print, but the format of the JSON is destroyed and it can no longer be parsed; the same happens with UTF32 and Unicode. With the default, the characters stay the same boxes.
Edit:
After researching relentlessly, I found out that JSON is normally encoded as UTF-8 to keep it light, and I tried converting the characters to UTF-16 and they printed successfully, but that seems wrong to me. Is there any other way to print the Japanese characters correctly?
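For reference, this is roughly what that conversion looks like, assuming "converting to UTF-16" means escaping non-ASCII characters as \uXXXX sequences, which Newtonsoft.Json can do at serialization time:
var settings = new JsonSerializerSettings
{
    // escape every non-ASCII character as a \uXXXX JSON escape
    StringEscapeHandling = StringEscapeHandling.EscapeNonAscii
};
string escapedJson = JsonConvert.SerializeObject(
    JsonConvert.DeserializeObject(adaptiveCard), settings);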

When you edit JSON in Visual Studio 2019 and try to save a file containing Japanese characters, Visual Studio will automatically offer to fix the encoding for you.
If you want to manually save your file with a specific encoding instead of relying on the automatic dialog box, you can use the Save with Encoding... option in the File > Save As... dialog.
If you select codepage 65001, which is Unicode (UTF-8) with or without signature, your Japanese characters should display correctly.
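If re-saving by hand is not an option, the same fix can be done in code. A minimal sketch, assuming the card file was originally saved as Windows-1252 (adjust the source encoding to whatever the file actually uses):
// One-time fix-up: read the card file with the encoding it was actually
// saved in, then write it back out as UTF-8.
string cardPath = Path.Combine(".", "Cards", "pickLanguageCard.json");
string cardText = File.ReadAllText(cardPath, Encoding.GetEncoding(1252));
File.WriteAllText(cardPath, cardText, new UTF8Encoding(false));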

Related

How to correctly encode Arabic subtitles in C#?

Hello there. I am creating a video player with subtitle support using the MediaElement class and the SubtitlesParser library. I ran into an issue with 7 Arabic subtitle files (.srt) being displayed as ???? or similarly garbled.
I tried multiple different encodings but with no luck:
SubtitlesList = new SubtitlesParser.Classes.Parsers.SubParser().ParseStream(fileStream);
subLine = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(subLine));
or
SubtitlesList = new SubtitlesParser.Classes.Parsers.SubParser().ParseStream(fileStream,Encoding.UTF8);
Then I found this question and, based on the answer, used Encoding.Default ("ANSI") to parse the subtitles and then re-interpret the encoded text:
SubtitlesList = new SubtitlesParser.Classes.Parsers.SubParser().ParseStream(fileStream, Encoding.Default);
var arabic = Encoding.GetEncoding(1256);
var latin = Encoding.GetEncoding(1252);
foreach (var item in SubtitlesList)
{
    List<string> lines = new List<string>();
    lines.AddRange(item.Lines.Select(line => arabic.GetString(latin.GetBytes(line))));
    item.Lines = lines;
}
This worked on only 4 of the files; the rest still show ?????? and nothing I have tried so far works on them. This is what I found so far:
exoplayer weird arabic persian subtitles format (this gave me a hint about the real problem).
C# Converting encoded string IÜÜæØÜÜ?E? to readable arabic (Same answer).
convert string from Windows 1256 to UTF-8 (Same answer).
How can I transform string to UTF-8 in C#? (It works for Spanish language but not arabic).
Also, I am hoping to find a single solution that correctly displays all the files. Is this possible?
Please forgive my simple language; English is not my native language.
I think I found the answer to my question. As a beginner I only had basic knowledge of encodings until I found this article:
What Every Programmer Absolutely, Positively Needs To Know About Encodings And Character Sets To Work With Text
Your text editor, browser, word processor or whatever else that's trying to read the document is assuming the wrong encoding. That's all. The document is not broken, there's no magic you need to perform; you simply need to select the right encoding to display the document.
I hope this helps anyone else who is confused about the correct way to handle this: there is no way to programmatically know a file's correct encoding; only the user can know it.
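Since only the user can know the right encoding, one pragmatic approach is to decode the same bytes with a short list of candidate encodings and let the user pick whichever renders correctly. A minimal sketch; the file path and candidate list are illustrative:
// Decode the raw subtitle bytes with each candidate and print the results;
// a human then picks the one that reads correctly.
byte[] raw = File.ReadAllBytes(@"C:\subs\movie.ar.srt");
var candidates = new[]
{
    Encoding.UTF8,
    Encoding.GetEncoding(1256),          // Arabic (Windows)
    Encoding.GetEncoding("iso-8859-6"),  // Arabic (ISO)
};
foreach (var enc in candidates)
    Console.WriteLine(enc.WebName + ": " + enc.GetString(raw));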

How do I extract UTF-8 strings out of a JSON file using LitJSON, as JsonData does not seem to convert?

I've tried many methods to extract some strings out of a JSON file using LitJson in Unity.
I've tried encoding conversions all over, tried getting byte arrays and passing them around, and nothing seems to work.
I went to the very start of where I create the JsonData object and tried to run the following test:
public JsonData CreateJSONDataObject()
{
    Debug.Assert(pathName != null, "No JSON Data path name set. Please set before commencing read.");
    string jsonString = File.ReadAllText(Application.dataPath + pathName, System.Text.Encoding.UTF8);
    JsonData jsonDataObject = JsonMapper.ToObject(jsonString);
    Debug.Log("Test compatibility: ë | " + jsonDataObject["Roots"][2]["name"]);
    return jsonDataObject;
}
I made sure my jsonString is using UTF-8, however the output shows this:
Test compatibility: ë | W�den
I've tried many other methods, but since this already makes sure the encoding is right when creating the JsonData object, I can't see what I am doing wrong; I just don't know enough about JSON.
Thank you in advance.
This type of problem occurs when a text file is written with one encoding and read using a different one. I was able to reproduce your problem with the following program, which removes the JSON serialization from the equation entirely:
string file = @"c:\temp\test.txt";
string text = "Wöden";
File.WriteAllText(file, text, Encoding.Default);
string text2 = File.ReadAllText(file, Encoding.UTF8);
Debug.WriteLine(text2);
Since you are reading with UTF-8 and it is not working, the real question is, what encoding was used to write the file originally? You should be using the same encoding to read it back. I suspect that the file was originally created using either Windows-1252 or iso-8859-1 instead of UTF-8. Try using one of those when you read the file, e.g.:
string jsonString = File.ReadAllText(Application.dataPath + pathName,
Encoding.GetEncoding("Windows-1252"));
You said in the comments that your JSON file was not created programmatically, but was "written by hand", meaning you used Notepad or some other text editor to make the file. If that is so, then that explains how you got into this situation. When you save the file, you should have the option to choose an encoding. For Notepad at least, the default encoding is "ANSI", which most likely maps to Windows-1252 (Western European), but depends on your locale. If you are in the Baltic region, for example, it would be Windows-1257 (Baltic). In any case, "ANSI" is not UTF-8. If you want to save the file in UTF-8 encoding, you have to specifically choose that option. Whatever option you use to save the file, that is the encoding you need to use to read it the next time, whether it is with a text editor or with code. Using the wrong encoding to read the file is what causes the corruption.
To change the encoding of a file, you first have to read it in using the same encoding that it was saved in originally, and then you can write it back out using a different encoding. You can do that with your text editor, simply by re-saving the file with a different encoding, or you can do that programmatically:
string text = File.ReadAllText(file, originalEncoding);
File.WriteAllText(file, text, newEncoding);
The key is knowing which encoding was used originally, and therein lies the rub. For legacy encodings (such as Windows-12xx) there is no way to tell because there is no marker in the file which identifies it. Unicode encodings (e.g. UTF-8, UTF-16), on the other hand, do write out a marker at the beginning of the file, called a BOM, or byte-order mark, which can be detected programmatically. That, coupled with the fact that Unicode encodings can represent all characters, is why they are much preferred over legacy encodings.
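For illustration, a BOM can also be detected from code. A minimal sketch using StreamReader (reusing the file path from the snippet above), which falls back to the supplied encoding when no BOM is present:
// StreamReader inspects the BOM (if any) and exposes the detected encoding
// via CurrentEncoding once it has started reading.
using (var reader = new StreamReader(file, Encoding.Default,
                                     detectEncodingFromByteOrderMarks: true))
{
    reader.Peek(); // force the BOM check
    Debug.WriteLine(reader.CurrentEncoding.WebName);
}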
For more information, I highly recommend reading What Every Programmer Absolutely, Positively Needs To Know About Encodings And Character Sets To Work With Text.

Cannot read Turkish characters from text file to string array

I am trying to do some sentence processing in Turkish, using a text file as the database. But I cannot read Turkish characters from the text file, and because of that I cannot process the data correctly.
string[] Tempdatabase = File.ReadAllLines(@"C:\Users\dialogs.txt");
textBox1.Text = Tempdatabase[5];
Output:
It's probably an encoding issue. Try using one of the Turkish code page identifiers.
var Tempdatabase =
    File.ReadAllLines(@"C:\Users\dialogs.txt", Encoding.GetEncoding("iso-8859-9"));
You can fiddle around using Encoding as much as you like. This might eventually yield the expected result, but bear in mind that this may not work with other files.
Usually, C# processes strings and files using Unicode by default. So unless you really need something else, you should try this instead:
Open your text file in Notepad (or any other program) and save it as a UTF-8 file. Then you should get the expected results without any modifications to your code. This is because C# reads the file using the encoding you saved it with. This is the default behavior, which should be preferred.
When you save your text file as UTF-8, C# will interpret it as such.
This also applies to .html files inside Visual Studio, if you notice that they are displayed incorrectly (parsed as ASCII).
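For illustration, once the file is saved as UTF-8, no encoding argument is needed at all (the path is the one from the question):
// File.ReadAllLines defaults to UTF-8 and honours a BOM if one is present.
string[] Tempdatabase = File.ReadAllLines(@"C:\Users\dialogs.txt");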
The file contains the text in a specific Turkish character set, not Unicode. If you don't specify any other behaviour, .NET will assume UTF-8 when reading text from a text file. You have two possible solutions:
Either change the text file to use Unicode (for example utf8) using an external text editor.
Or specify the specific character set to use when reading, for example:
string[] Tempdatabase = File.ReadAllLines(@"C:\Users\dialogs.txt", Encoding.Default);
This will use the local character set of the Windows system.
string[] Tempdatabase = File.ReadAllLines(@"C:\Users\dialogs.txt", Encoding.GetEncoding("Windows-1254"));
This will use the Turkish character set defined by Microsoft.
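Note that on .NET Core and .NET 5+ the Windows code pages are not available out of the box; assuming the System.Text.Encoding.CodePages package is referenced, they must be registered once before Encoding.GetEncoding can resolve them:
// Required once at startup on .NET Core / .NET 5+ before code pages
// such as Windows-1254 can be looked up.
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);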

Encoding special characters in C#

In my C# application I receive POST data in the form of XML. Within the XML I have an attribute arriving as "SmÃ¥senter (Sandvika SmÃ¥senter)". Before inserting it into the database I need it decoded to "Småsenter (Sandvika Småsenter)". I tried the code below,
string name = "Småsenter (Sandvika Småsenter)";
name = HttpUtility.HtmlDecode(name);
Also tried name = HttpUtility.HtmlEncode(name);
But it is not giving the expected output.
Are there any suggestions for getting the expected characters?
Regards,
Sangeetha
You have just encountered Mojibake, which is caused by mixing text encodings. You need to use the same encoding for writing and reading the XML, preferably a Unicode encoding such as UTF-8. You should not try to repair a broken string such as "SmÃ¥senter", but rather prevent it from breaking in the first place.
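For illustration, letting the XML machinery pick the encoding avoids the broken intermediate string in the first place. A minimal sketch; the request stream name is hypothetical:
// XDocument/XmlReader honour the encoding declared in the <?xml ... ?>
// prolog (or the BOM), so load from the raw bytes, not from a pre-decoded string.
XDocument doc;
using (Stream body = Request.InputStream) // hypothetical: the raw POST body
    doc = XDocument.Load(body);
string name = (string)doc.Root.Attribute("name"); // decodes correctly if the sender and the declaration agree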

How to get correctly-encoded HTML from the clipboard?

Has anyone noticed that if you retrieve HTML from the clipboard, it gets the encoding wrong and injects weird characters?
For example, executing a command like this:
string s = (string) Clipboard.GetData(DataFormats.Html)
Results in stuff like:
<FONT size=-2>Â  <A href="/advanced_search?hl=en">Advanced
Search</A><BR>Â  Preferences<BR>Â  <A
href="/language_tools?hl=en">Language
Tools</A></FONT>
Not sure how MarkDown will process this, but there are weird characters in the resulting markup above.
It appears that the bug is with the .NET framework. What do you think is the best way to get correctly-encoded HTML from the clipboard?
In this case it is not as visible as it was in my case. Today I tried to copy data from the clipboard that contained a few Unicode characters, and the data I got looked as if I had read a UTF-8 encoded file using the Windows-1250 encoding (the local encoding on my Windows).
It seems your case is the same. If you save the HTML data in Windows-1252 (or Windows-1250; both work), remembering to put a non-breakable space (0xa0), not a standard space, after the Â character, and then open that file as a UTF-8 file, you will see what should be there.
For another project I made a function that fixes data with corrupted encoding.
In this case a simple conversion should be sufficient:
byte[] data = Encoding.Default.GetBytes(text);
text = Encoding.UTF8.GetString(data);
My original function is a little more complex and contains tests to ensure that the data is not corrupted...
public static bool FixMisencodedUTF8(ref string text, Encoding encoding)
{
    if (string.IsNullOrEmpty(text))
        return false;
    byte[] data = encoding.GetBytes(text);
    // there should not be any character outside the source encoding
    string newStr = encoding.GetString(data);
    if (!string.Equals(text, newStr)) // if there is any character "outside"
        return false;                 // leave; the input is in a different encoding
    if (IsValidUtf8(data) == 0)       // test data to be a valid UTF-8 byte sequence
        return false;                 // if not, we cannot convert to UTF-8
    text = Encoding.UTF8.GetString(data);
    return true;
}
I know this is not the best (or correct) solution, but I did not find any other way to fix the input...
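The answer does not show IsValidUtf8. A minimal stand-in, assuming the convention used above (non-zero means valid, 0 means invalid), can lean on a strict UTF-8 decoder:
// Hypothetical stand-in for the IsValidUtf8 helper used above: returns 1
// when the bytes form a valid UTF-8 sequence, 0 otherwise.
private static int IsValidUtf8(byte[] data)
{
    var strictUtf8 = new UTF8Encoding(
        encoderShouldEmitUTF8Identifier: false,
        throwOnInvalidBytes: true);
    try
    {
        strictUtf8.GetString(data); // throws on any invalid byte sequence
        return 1;
    }
    catch (DecoderFallbackException)
    {
        return 0;
    }
}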
EDIT: (July 20, 2017)
It seems Microsoft has already found and fixed this error, and it now works correctly. I'm not sure exactly which framework version contains the fix, but I know for certain that the application now uses a different framework than it did when I wrote this answer (now it is 4.5; the previous version was 2.0).
(Now all my code fails when parsing the data. It is a separate problem to determine the correct behaviour for an application both with and without the fix applied.)
You have to interpret the data as UTF-8. See MS Office hyperlinks change code page?.
The DataFormats.Html specification states it's encoded in UTF-8. But there's a bug in the .NET Framework 4 and lower: it actually reads the UTF-8 data as Windows-1252.
You get a lot of wrongly decoded text, leading to funny/bad characters such as
'Å','‹','Å’','Ž','Å¡','Å“','ž','Ÿ','Â','¡','¢','£','¤','Â¥','¦','§','¨','©'
Full explanation here:
Debugging Chart Mapping Windows-1252 Characters to UTF-8 Bytes to Latin-1 Characters
Solution: create a translation dictionary and search-and-replace.
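A minimal sketch of that dictionary approach, with a few illustrative mappings (a real table would cover the whole Windows-1252 range; the html variable is hypothetical):
// Map mojibake sequences (UTF-8 bytes mis-read as Windows-1252) back to the
// characters they actually encode; the entries below are just examples.
var fixes = new Dictionary<string, string>
{
    { "Ã¥", "å" },   // U+00E5
    { "Ã©", "é" },   // U+00E9
    { "â€™", "’" },  // U+2019
};
foreach (var kv in fixes)
    html = html.Replace(kv.Key, kv.Value);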
I don't know what your original source document is, but be aware that Word and Outlook provide several versions of the clipboard in different encodings. One is usually Windows-1252 and another is UTF-8. Possibly you're grabbing the UTF-8 encoded version by default, when you're expecting the Windows-1252 (Latin-1 + Smart Quotes)? Non-ASCII characters would show up as multiple odd Latin-1 accented characters. Most "Smart Quotes" are not in the Latin-1 set and are often three bytes in UTF-8.
Can you specify which encoding you want the clipboard contents in?
Try this:
System.Windows.Forms.Clipboard.GetText(System.Windows.Forms.TextDataFormat.Html);
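For completeness, on a framework version that still has the bug described above, that call can be combined with the byte round-trip shown earlier; whether the round-trip is actually needed depends on the framework version, as noted in the edit above:
string html = System.Windows.Forms.Clipboard.GetText(
    System.Windows.Forms.TextDataFormat.Html);
// On affected frameworks the UTF-8 payload was decoded with the ANSI code
// page; re-encode with that page and decode as UTF-8 to undo the damage.
string fixedHtml = Encoding.UTF8.GetString(Encoding.Default.GetBytes(html));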
