When I tried to download Arabic files from my MVC project, I found that the Arabic text (for example "تاريخ الشكوى") comes out as garbled special characters (mojibake).
This is the code I use for the download:
System.Web.Mvc.FileStreamResult FSR = new FileStreamResult(stream, "application/msword");
FSR.FileDownloadName = CorrespondenceselectedFile.FileName;
return FSR;
It seems like the text "تاريخ الشكوى" ("Date of the complaint") is decoded with the default encoding instead of UTF-8.
You should probably correct the encoding somewhere in your code (not part of the shown code) or do it manually (not preferred):
// The garbled string below is what the UTF-8 bytes of "تاريخ الشكوى" look like
// when they are mis-decoded with the default (ANSI) code page.
string garbled = "ØªØ§Ø±ÙŠØ® Ø§Ù„Ø´ÙƒÙˆÙ‰";
var bytes = Encoding.Default.GetBytes(garbled); // recover the original UTF-8 bytes
string utf8 = Encoding.UTF8.GetString(bytes);   // decode them correctly
// utf8 == "تاريخ الشكوى"
First, a depressing fact: https://www.base64decode.org/ can do what I want to do.
I'm trying to encode and decode (to and from Base64) a model file (.shm) generated by the image processing tool MVTec Halcon, because I want to store it in an XML file.
If I open the file, it has this strange form:
HSTF ÿÿÿÿ¿€ Q¿ÙG®záH?Üä4©±w?Eè}‰#?ð ................
I'm using these methods to encode and decode it:
public static string Base64Encode(string text)
{
    Byte[] textBytes = Encoding.Default.GetBytes(text);
    return Convert.ToBase64String(textBytes);
}

public static string Base64Decode(string base64EncodedData)
{
    Byte[] base64EncodedBytes = Convert.FromBase64String(base64EncodedData);
    return Encoding.Default.GetString(base64EncodedBytes);
}
and calling the methods from a GUI like this:
var model = File.ReadAllText(@"C:\Users\\Desktop\model_region_nut.txt");
var base64 = ImageConverter.Base64Encode(model);
File.WriteAllText(@"C:\Users\\Desktop\base64.txt", base64);
var modelneu = ImageConverter.Base64Decode(File.ReadAllText(@"C:\Users\\Desktop\base64.txt"));
File.WriteAllText(@"C:\Users\\Desktop\modelneu.txt", modelneu);
my result for modelneu is:
HSTF ?????? Q??G?z?H???4??w??E?}??#??
so you can see that there are lots of missing characters. I guess the problem is caused by using Encoding.Default.
Thanks for your help,
Michel
If you're working with binary data, there is no reason at all to go through text decoding and encoding. Doing so only risks corrupting the data in various ways, even if you're using a consistent character encoding.
Just use File.ReadAllBytes() instead of File.ReadAllText() and skip the unnecessary Encoding step.
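A minimal sketch of that approach, reusing the paths from the question (the model is never treated as text, so no Encoding is involved anywhere):
// Read the model as raw bytes and Base64-encode them directly.
byte[] modelBytes = File.ReadAllBytes(@"C:\Users\\Desktop\model_region_nut.txt");
File.WriteAllText(@"C:\Users\\Desktop\base64.txt", Convert.ToBase64String(modelBytes));

// Round trip: decode the Base64 text back to the original bytes and write them out unchanged.
byte[] decoded = Convert.FromBase64String(File.ReadAllText(@"C:\Users\\Desktop\base64.txt"));
File.WriteAllBytes(@"C:\Users\\Desktop\modelneu.txt", decoded);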
The problem is reading the file with an unspecified encoding; check this question.
As mentioned there, you can use the overload of ReadAllText that takes an encoding, and you must also specify an encoding when writing with WriteAllText. I suggest using UTF-8, so:
var model = File.ReadAllText(#"C:\Users\pichlerm\Desktop\model_region_nut.txt",Encoding.UTF8);
var base64 = ImageConverter.Base64Encode(model);
File.WriteAllText(#"C:\Users\\Desktop\base64.txt", base64,Encoding.UTF8);
var modelneu = ImageConverter.Base64Decode(File.ReadAllText(#"C:\Users\\Desktop\base64.txt"));
File.WriteAllText(#"C:\Users\pichlerm\Desktop\modelneu.txt", modelneu);
I load an XML file like this:
var url = Application.dataPath + @"/config.xml";
var www = new WWW(url);
while (!www.isDone)
{
    yield return new WaitForSeconds(0.2f);
}
After that I create an XmlTextReader in order to parse that XML:
GameSettings.ParseXML(new XmlTextReader(new StringReader(www.text)));
But I'm having problems with character encoding (é, ç, ã, ê, etc.). What can I do to make it work?
If you use WWW.text, the function expects the web page contents to be encoded in UTF-8 or ASCII, but your customer uses Windows-1252.
Like Bart already suggested, the best way would be to ask the customer to just use UTF-8. If that is not possible and you are sure that the customer always uses Windows-1252, you can convert the encoding inside your application:
Encoding windows1252 = Encoding.GetEncoding("Windows-1252");
Encoding utf8 = Encoding.UTF8;
byte[] windowsBytes = www.bytes;
byte[] utf8Bytes = Encoding.Convert(windows1252, utf8, windowsBytes);
string converted_xml = utf8.GetString(utf8Bytes);
I'm using HttpClient to POST MultipartFormDataContent to a Java web application. I'm uploading several StringContents and one file, which I add as a StreamContent via MultipartFormDataContent.Add(HttpContent content, String name, String fileName), and I send everything with HttpClient.PostAsync(String, HttpContent). Roughly, it looks like the sketch below.
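Simplified sketch (the URL, the field name "Filedata" and the path are placeholders, not my actual code):
static async Task UploadAsync()
{
    using (var client = new HttpClient())
    using (var multipart = new MultipartFormDataContent())
    using (var fileStream = File.OpenRead(@"C:\temp\99 2 LD 353 Temp Äüöß-1.txt"))
    {
        multipart.Add(new StringContent("some value"), "someField"); // several StringContents like this
        multipart.Add(new StreamContent(fileStream), "Filedata", "99 2 LD 353 Temp Äüöß-1.txt");
        HttpResponseMessage response = await client.PostAsync("http://example.com/upload", multipart);
        response.EnsureSuccessStatusCode();
    }
}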
This works fine, except when I provide a fileName that contains German umlauts (I haven't tested other non-ASCII characters yet). In this case, the fileName is Base64-encoded. The result for a file named 99 2 LD 353 Temp Äüöß-1.txt
looks like this:
__utf-8_B_VGVtcCDvv73vv73vv73vv71cOTkgMiBMRCAzNTMgVGVtcCDvv73vv73vv73vv70tMS50eHQ___
The Java server shows this encoded file name in its UI, which confuses the users. I cannot make any server-side changes.
How do I disable this behavior? Any help would be highly appreciated.
Thanks in advance!
I just ran into the same limitation as StrezzOr, since the server I was consuming didn't respect the filename* standard.
I converted the filename to a byte array of its UTF-8 representation, and then re-packed those bytes as chars of a "simple" (non-UTF-8) string.
This code creates a content stream and add it to a multipart content:
FileStream fs = File.OpenRead(_fullPath);
StreamContent streamContent = new StreamContent(fs);
streamContent.Headers.Add("Content-Type", "application/octet-stream");
String headerValue = "form-data; name=\"Filedata\"; filename=\"" + _Filename + "\"";
byte[] bytes = Encoding.UTF8.GetBytes(headerValue);
headerValue="";
foreach (byte b in bytes)
{
    headerValue += (Char)b;
}
streamContent.Headers.Add("Content-Disposition", headerValue);
multipart.Add(streamContent, "Filedata", _Filename);
This is working with Spanish accents.
Hope this helps.
I recently ran into this issue, and here is the workaround I use.
On the server side:
private static readonly Regex _regexEncodedFileName = new Regex(@"^=\?utf-8\?B\?([a-zA-Z0-9/+]+={0,2})\?=$");

private static string TryToGetOriginalFileName(string fileNameInput) {
    Match match = _regexEncodedFileName.Match(fileNameInput);
    if (match.Success && match.Groups.Count > 1) {
        string base64 = match.Groups[1].Value;
        try {
            byte[] data = Convert.FromBase64String(base64);
            return Encoding.UTF8.GetString(data);
        }
        catch (Exception) {
            // ignored
            return fileNameInput;
        }
    }
    return fileNameInput;
}
And then use this function like this:
string correctedFileName = TryToGetOriginalFileName(fileRequest.FileName);
It works.
In order to pass non-ASCII characters in the Content-Disposition header's filename attribute, it is necessary to use the filename* attribute instead of the regular filename. See the spec here.
To do this with HttpClient you can do the following:
var streamcontent = new StreamContent(stream);
streamcontent.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment") {
    FileNameStar = "99 2 LD 353 Temp Äüöß-1.txt"
};
multipartContent.Add(streamcontent);
The header will then end up looking like this:
Content-Disposition: attachment; filename*=utf-8''99%202%20LD%20353%20Temp%20%C3%84%C3%BC%C3%B6%C3%9F-1.txt
I finally gave up and solved the task using HttpWebRequest instead of HttpClient. I had to build headers and content manually, but this allowed me to ignore the standards for sending non-ASCII filenames. I ended up cramming unencoded UTF-8 filenames into the filename header, which was the only way the server would accept my request.
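For illustration, a rough sketch of that approach (the URL, the field name "Filedata" and the path are placeholders; error handling omitted):
// Build the multipart body by hand so the filename goes into the part header as raw UTF-8,
// without the =?utf-8?B?...?= encoding that HttpClient applies.
string boundary = "----boundary" + DateTime.Now.Ticks.ToString("x");
var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
request.Method = "POST";
request.ContentType = "multipart/form-data; boundary=" + boundary;

using (Stream body = request.GetRequestStream())
{
    string partHeader =
        "--" + boundary + "\r\n" +
        "Content-Disposition: form-data; name=\"Filedata\"; filename=\"99 2 LD 353 Temp Äüöß-1.txt\"\r\n" +
        "Content-Type: application/octet-stream\r\n\r\n";
    byte[] headerBytes = Encoding.UTF8.GetBytes(partHeader); // raw UTF-8 filename, no header encoding applied
    body.Write(headerBytes, 0, headerBytes.Length);

    byte[] fileBytes = File.ReadAllBytes(@"C:\temp\99 2 LD 353 Temp Äüöß-1.txt");
    body.Write(fileBytes, 0, fileBytes.Length);

    byte[] footerBytes = Encoding.UTF8.GetBytes("\r\n--" + boundary + "--\r\n");
    body.Write(footerBytes, 0, footerBytes.Length);
}

using (var response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusCode);
}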
I've got a string returned via HTTP POST from a URL in a C# application, and it contains some Chinese characters, e.g.:
Gelatos® Colors Gift Setä¸æ–‡
Problem is I want to convert it to
Gelatos® Colors Gift Set中文
Both strings are actually identical but encoded differently. I understand that in C# everything is UTF-16. I've tried reading a lot of postings here about converting from one encoding to the other, but no luck.
Hope someone could help.
Here's the C# code:
WebClient wc = new WebClient();
json = wc.DownloadString("http://mysite.com/ext/export.asp");
textBox2.Text = "Receiving orders....";
//convert the string to UTF16
Encoding ascii = Encoding.ASCII;
Encoding unicode = Encoding.Unicode;
Encoding utf8 = Encoding.UTF8;
byte[] asciiBytes = ascii.GetBytes(json);
byte[] utf8Bytes = utf8.GetBytes(json);
byte[] unicodeBytes = Encoding.Convert(utf8, unicode, utf8Bytes);
string sOut = unicode.GetString(unicodeBytes);
System.Windows.Forms.MessageBox.Show(sOut); //doesn't work...
Here's the code from the server:
<%@ CodePage = 65001 %>
<%option explicit%>
<%
Session.CodePage = 65001
Response.charset ="utf-8"
Session.LCID = 1033 'en-US
.....
response.write (strJSON)
%>
The output in the browser is correct, but I was wondering whether something gets changed on the HTTP stream to the C# application.
Thanks.
Download the web pages as bytes in the first place. Then, convert the bytes to the correct encoding.
By first converting it using the wrong encoding you are probably losing data, especially with ASCII.
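For example, a minimal sketch (assuming the server really does send UTF-8, and using the URL from the question):
using (WebClient wc = new WebClient())
{
    byte[] raw = wc.DownloadData("http://mysite.com/ext/export.asp"); // raw bytes, nothing decoded yet
    string json = Encoding.UTF8.GetString(raw);                       // decode exactly once, with the right encoding
}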
If the server is really returning UTF-8 text, you can configure your WebClient by setting its Encoding property. This would eliminate any need for subsequent conversions.
using (WebClient wc = new WebClient())
{
    wc.Encoding = Encoding.UTF8;
    json = wc.DownloadString("http://mysite.com/ext/export.asp");
}
I've tried googling around but wasn't able to find what charset the text below belongs to:
具有éœé›»ç”¢ç”Ÿè£ç½®ä¹‹å½±åƒè¼¸å…¥è£ç½®
But by putting <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> into an HTML file together with that string, I was able to view the Chinese characters properly:
具有靜電產生裝置之影像輸入裝置
So my question is:
What tools can I use to detect the character set of this text?
And how do I convert/encode/decode them properly in C#?
Update:
For completeness' sake, I've added this test:
[TestMethod]
public void TestMethod1()
{
    string encodedText = "具有éœé›»ç”¢ç”Ÿè£ç½®ä¹‹å½±åƒè¼¸å…¥è£ç½®";
    Encoding utf8 = new UTF8Encoding();
    Encoding window1252 = Encoding.GetEncoding("Windows-1252");
    byte[] postBytes = window1252.GetBytes(encodedText);
    string decodedText = utf8.GetString(postBytes);
    string actualText = "具有靜電產生裝置之影像輸入裝置";
    Assert.AreEqual(actualText, decodedText);
}
What is happening when you save the "bad" string in a text file with a meta tag declaring charset=utf-8 is this: your text editor saves the file with Windows-1252 encoding, but the browser reads the file and interprets it as UTF-8. Since the "bad" string is the result of incorrectly decoding UTF-8 bytes with Windows-1252, you are reversing the process by encoding the file as Windows-1252 and decoding it as UTF-8.
Here's an example:
using System.Text;
using System.Windows.Forms;

namespace Demo
{
    class Program
    {
        static void Main(string[] args)
        {
            string s = "具有靜電產生裝置之影像輸入裝置"; // Unicode
            Encoding Windows1252 = Encoding.GetEncoding("Windows-1252");
            Encoding Utf8 = Encoding.UTF8;

            byte[] utf8Bytes = Utf8.GetBytes(s); // Unicode -> UTF-8
            string badDecode = Windows1252.GetString(utf8Bytes); // Mis-decode as Latin1
            MessageBox.Show(badDecode, "Mis-decoded"); // Shows your garbage string.

            string goodDecode = Utf8.GetString(utf8Bytes); // Correctly decode as UTF-8
            MessageBox.Show(goodDecode, "Correctly decoded");

            // Recovering from bad decode...
            byte[] originalBytes = Windows1252.GetBytes(badDecode);
            goodDecode = Utf8.GetString(originalBytes);
            MessageBox.Show(goodDecode, "Re-decoded");
        }
    }
}
Even with correct decoding, you'll still need a font that supports the characters being displayed. If your default font doesn't support Chinese, you still might not see the correct characters.
The correct thing to do is figure out why the string you have was decoded as Windows-1252 in the first place. Sometimes, though, data in a database is stored incorrectly to begin with and you have to resort to these games to fix the problem.
string test = "敭畳灴獩楫n"; // incoming data; should read "mesutpiskin"
byte[] bytes = Encoding.Unicode.GetBytes(test); // UTF-16LE bytes: each code unit holds two of the original ASCII characters
string s = string.Empty;
for (int i = 0; i < bytes.Length; i++)
{
    s += (char)bytes[i];
}
s = s.Trim((char)0); // drop the trailing NUL left over from the odd-length tail "n"
MessageBox.Show(s);
// s == "mesutpiskin"
I'm not really sure what you mean, but I'm guessing you want to convert between a byte array holding text in a certain encoding and a .NET string. Let's assume the character encoding is called "FooBar":
This is how you encode and decode:
Encoding myEncoding = Encoding.GetEncoding("FooBar");
string myString = "lala";
byte[] myEncodedBytes = myEncoding.GetBytes(myString);
string myDecodedString = myEncoding.GetString(myEncodedBytes);
You can learn more about the Encoding class over at MSDN.
Answering your question at the end of your post:
If you want to determine the text encoding at runtime, have a look at this: http://code.google.com/p/ude/
For converting between character sets you can use Encoding.Convert: http://msdn.microsoft.com/en-us/library/system.text.encoding.convert(v=vs.100).aspx
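For example, a rough sketch combining the two (this assumes the CharsetDetector usage shown on the ude project page, so double-check it against the library's README; "input.txt" is a placeholder):
byte[] raw = File.ReadAllBytes("input.txt");

var detector = new Ude.CharsetDetector();
detector.Feed(raw, 0, raw.Length); // feed the raw bytes to the detector
detector.DataEnd();

if (detector.Charset != null)
{
    // Note: not every name Ude reports is guaranteed to map to a .NET Encoding.
    Encoding detected = Encoding.GetEncoding(detector.Charset);
    byte[] utf8Bytes = Encoding.Convert(detected, Encoding.UTF8, raw);
    string text = Encoding.UTF8.GetString(utf8Bytes);
}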
It's Windows Latin 1. I pasted the Chinese text as UTF-8 into BBEdit (a text editor for the Mac) and re-opened the file as Windows Latin 1, and bang, the exact diacritics appeared.