I am creating an application to scan and merge CSV files. I am having an issue when writing the data to a new file: one of the fields contains the ö character, which is maintained until I write it to the new file. It then becomes the "actual" value ö instead of the "expected" value ö.
I suspect that UTF-8 encoding is not the best thing to use, but I have yet to find a better working method. Any help with this would be much appreciated!
byte[] nl = new UTF8Encoding(true).GetBytes("\n");
using (FileStream file = File.Create(filepath))
{
    string text;
    byte[] info;
    for (int r = 0; r < data.Count; r++)
    {
        int c = 0;
        for (; c < data[r].Count - 1; c++)
        {
            text = data[r][c] + @",";
            text = text.Replace("\n", @"");
            text = text.Replace(@"☼", @"""");
            info = new UTF8Encoding(true).GetBytes(text);
            file.Write(info, 0, text.Length);
        }
        text = data[r][c];
        info = new UTF8Encoding(true).GetBytes(text);
        file.Write(info, 0, text.Length);
        file.Write(nl, 0, nl.Length);
    }
}
This should probably be a comment, but I can't comment yet. Text editors decode a file's binary data using some particular encoding, so you can inspect the actual bytes in a hex editor to verify the data you are writing out. Notepad++ has a hex-editor plugin you could use.
BinaryWriter is easier to work with when it comes to writing bytes to a file, and you can also set the encoding of the BinaryWriter. You'll want to set this to UTF-8.
Edit
I forgot to mention: when you write out bytes, you will want to read them back in as bytes as well. Use BinaryReader and set its encoding to UTF-8.
Once you have read the bytes in, use Encoding.UTF8.GetString() to convert them into a string.
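A minimal sketch of the write-then-read round trip described above; the file path and sample text here are placeholders, not from the original question:

```csharp
using System;
using System.IO;
using System.Text;

class RoundTripDemo
{
    static void Main()
    {
        string path = "sample.csv"; // placeholder path

        // Write the text out as UTF-8 bytes.
        using (var writer = new BinaryWriter(File.Create(path), Encoding.UTF8))
        {
            writer.Write(Encoding.UTF8.GetBytes("Göteborg\n"));
        }

        // Read the bytes back and decode them with the same encoding.
        using (var reader = new BinaryReader(File.OpenRead(path), Encoding.UTF8))
        {
            byte[] bytes = reader.ReadBytes((int)reader.BaseStream.Length);
            Console.WriteLine(Encoding.UTF8.GetString(bytes)); // Göteborg
        }
    }
}
```

The key point is that the same encoding is used on both sides, so multi-byte characters like ö survive the trip.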
You might be truncating the output, since UTF-8 is a multibyte encoding.
Don't do this:
info = new UTF8Encoding(true).GetBytes(text);
file.Write(info, 0, text.Length);
Instead, use info.Length:
info = new UTF8Encoding(true).GetBytes(text);
file.Write(info, 0, info.Length); // change this line
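To make the failure mode concrete: a character like ö or ä occupies two bytes in UTF-8, so the encoded array is longer than the string, and writing only text.Length bytes truncates the tail. A small illustration (the sample string is mine, not from the question):

```csharp
using System;
using System.Text;

class TruncationDemo
{
    static void Main()
    {
        string text = "Västerås,"; // 9 characters, two of them non-ASCII
        byte[] info = Encoding.UTF8.GetBytes(text);

        Console.WriteLine(text.Length); // 9  (characters)
        Console.WriteLine(info.Length); // 11 (bytes: ä and å take 2 bytes each)

        // file.Write(info, 0, text.Length) would write only 9 of the 11 bytes,
        // dropping the final 's' and ','.
    }
}
```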
Related
I have an XML file with a UTF-8 BOM in the beginning of the file, which hinders me from using existing code that reads UTF-8 files.
How can I remove the BOM from the XML file in an easy way?
Here I have a variable xmlfile of type byte[] that I convert to a string; xmlfile contains the entire XML file.
byte[] xmlfile = ((Byte[])myReader["xmlSQL"]);
string xmlstring = Encoding.UTF8.GetString(xmlfile);
Great stuff DBC :) That worked well with your link. To fix my problem, where I had a UTF-8 BOM at the beginning of my XML file, I simply added a MemoryStream and StreamReader, which automatically cleansed the xmlfile (htmlbytes) of the BOM.
Really easy to implement in existing code.
byte[] htmlbytes = ((Byte[])myReader["xmlMelding"]);
var memorystream = new MemoryStream(htmlbytes);
var s = new StreamReader(memorystream).ReadToEnd();
Encoding.GetString() has an overload that accepts an offset into the byte[] array. Simply check if the array starts with a BOM, and if so then skip it when calling GetString(), eg:
byte[] xmlfile = ((Byte[])myReader["xmlSQL"]);
int offset = 0;
if (xmlfile.Length >= 3 &&
    xmlfile[0] == 0xEF &&
    xmlfile[1] == 0xBB &&
    xmlfile[2] == 0xBF)
{
    offset += 3;
}
string xmlstring = Encoding.UTF8.GetString(xmlfile, offset, xmlfile.Length - offset);
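Equivalently, instead of hard-coding the three BOM bytes, you could compare against Encoding.UTF8.GetPreamble(); a sketch of the same idea (the helper name is mine):

```csharp
using System;
using System.Linq;
using System.Text;

class BomSkip
{
    // Decode UTF-8 bytes, skipping a leading BOM if one is present.
    public static string DecodeSkippingBom(byte[] data)
    {
        byte[] bom = Encoding.UTF8.GetPreamble(); // { 0xEF, 0xBB, 0xBF }
        int offset = data.Length >= bom.Length && data.Take(bom.Length).SequenceEqual(bom)
            ? bom.Length
            : 0;
        return Encoding.UTF8.GetString(data, offset, data.Length - offset);
    }

    static void Main()
    {
        byte[] withBom = { 0xEF, 0xBB, 0xBF, (byte)'h', (byte)'i' };
        Console.WriteLine(DecodeSkippingBom(withBom)); // hi
    }
}
```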
Is there any stream-reader class that can read only a given number of chars from a string, or bytes from a byte[]?
For example, reading a string:
string chunk = streamReader.ReadChars(5); // Read next 5 chars
or reading bytes
byte[] bytes = streamReader.ReadBytes(5); // Read next 5 bytes
Note that the return type of this method or the name of the class does not matter; I just want to know whether something similar exists that I can use.
I have a byte[] from a MIDI file. I want to read this MIDI file in C#, but I need the ability to read a specific number of bytes, or chars (if I convert it to hex), to validate the MIDI data and read from it more easily.
Thanks for the comments. I didn't know there was an overload for the Read methods; I could achieve this with FileStream.
using (FileStream fileStream = new FileStream(path, FileMode.Open))
{
    byte[] chunk = new byte[4];
    fileStream.Read(chunk, 0, 4);
    string hexLetters = BitConverter.ToString(chunk); // the 4 hex values I need, e.g. "4D-54-68-64" ("MThd")
}
You can achieve this by doing something like the code below, but I am not sure whether it is applicable to your problem:
StreamReader sr = new StreamReader(stream);
StringBuilder s = new StringBuilder();
while (!sr.EndOfStream)
{
    s.Append(sr.ReadLine());
}
Once you have the value in s, you can take substrings from it. (Note that ReadLine discards the newline characters.)
How do I read characters from other languages, such as ß and ä?
The following code reads all chars, including chars such as 0x0D.
StreamReader srFile = new StreamReader(gstPathFileName);
char[] acBuf = null;
int iReadLength = 100;
while (srFile.Peek() >= 0) {
    acBuf = new char[iReadLength];
    srFile.Read(acBuf, 0, iReadLength);
    string s = new string(acBuf);
}
But it does not interpret characters such as ß and ä correctly.
I don't know what encoding the file uses. It is exported (into a .txt file) by code written 20-plus years ago from a C-Tree database.
The ß and ä display fine in Notepad.
By default, the StreamReader constructor assumes the UTF-8 encoding (which is the de facto universal standard today). Since that's not decoding your file correctly, your characters (ß, ä) suggest that it's probably encoded using Windows-1252 (Western European):
var encoding = Encoding.GetEncoding("Windows-1252");
using (StreamReader srFile = new StreamReader(gstPathFileName, encoding))
{
// ...
}
A closely-related encoding is ISO/IEC 8859-1. If the above gives some unexpected results, use Encoding.GetEncoding("ISO-8859-1") instead.
My scenario is:
Create an email in Outlook Express and save it as .eml file;
Read the file as string in C# console application;
I'm saving the .eml file encoded in UTF-8. An example of the text I wrote is:
'Goiânia é badalação.'
There are special characters like â, é, ç, and ã; they are Portuguese characters.
When I open the file with Notepad++, the text is shown like this:
'Goi=C3=A2nia =C3=A9 badala=C3=A7=C3=A3o.'
If I open it in Outlook Express again, it's shown normally, like the first way.
When I read the file in the console application using UTF-8 decoding, the string comes out the second way.
The code I am using is:
string text = File.ReadAllText(@"C:\fromOutlook.eml", Encoding.UTF8);
Console.WriteLine(text);
I tried all the Encoding options and a lot of methods I found on the web, but nothing works.
Can someone help me with this simple conversion?
'Goi=C3=A2nia =C3=A9 badala=C3=A7=C3=A3o.'
to
'Goiânia é badalação.'
string text = "Goi=C3=A2nia =C3=A9 badala=C3=A7=C3=A3o.";
byte[] bytes = new byte[text.Length * sizeof(char)];
System.Buffer.BlockCopy(text.ToCharArray(), 0, bytes, 0, bytes.Length);
Console.WriteLine(Encoding.UTF8.GetString(bytes, 0, bytes.Length));
char[] chars = new char[bytes.Length / sizeof(char)];
System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
Console.WriteLine(new string(chars));
In this UTF-8 table you can see the hex values of these characters ('é' == 'C3 A9'):
http://www.utf8-chartable.de/
Thanks.
var input = "Goi=C3=A2nia =C3=A9 badala=C3=A7=C3=A3o.";
var buffer = new List<byte>();
var i = 0;
while (i < input.Length)
{
    var character = input[i];
    if (character == '=')
    {
        var part = input.Substring(i + 1, 2);
        buffer.Add(byte.Parse(part, System.Globalization.NumberStyles.HexNumber));
        i += 3;
    }
    else
    {
        buffer.Add((byte)character);
        i++;
    }
}
var output = Encoding.UTF8.GetString(buffer.ToArray());
Console.WriteLine(output); // prints: Goiânia é badalação.
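One caveat worth adding: quoted-printable also uses '=' at the very end of a line as a soft line break (RFC 2045), which the loop above would try to parse as hex. A sketch that also strips "=\r\n" sequences (the method name is mine):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

class QpDecode
{
    public static string Decode(string input)
    {
        var buffer = new List<byte>();
        var i = 0;
        while (i < input.Length)
        {
            char c = input[i];
            if (c == '=' && i + 2 < input.Length && input[i + 1] == '\r' && input[i + 2] == '\n')
            {
                i += 3; // soft line break: "=\r\n" is removed entirely
            }
            else if (c == '=' && i + 2 < input.Length)
            {
                buffer.Add(Convert.ToByte(input.Substring(i + 1, 2), 16));
                i += 3;
            }
            else
            {
                buffer.Add((byte)c);
                i++;
            }
        }
        return Encoding.UTF8.GetString(buffer.ToArray());
    }

    static void Main()
    {
        // The soft break in the middle disappears in the decoded output.
        Console.WriteLine(Decode("Goi=C3=A2nia =\r\n=C3=A9 badala=C3=A7=C3=A3o."));
        // prints: Goiânia é badalação.
    }
}
```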
Knowing the problem is quoted-printable, I found a good decoder here:
http://www.dpit.co.uk/2011/09/decoding-quoted-printable-email-in-c.html
This works for me.
Thanks folks.
Update:
The above link is dead, here is a workable application:
How to convert Quoted-Print String
I want to read a file, but not from the beginning of the file: for example, I want to start reading 977 characters after the beginning of the file, and then read the next 200 characters at once. Thanks.
If you want to read the file as text, skipping characters (not bytes):
using (var textReader = System.IO.File.OpenText(path))
{
    // read and disregard the first 977 chars
    var buffer = new char[977];
    textReader.Read(buffer, 0, buffer.Length);
    // read 200 chars
    buffer = new char[200];
    textReader.Read(buffer, 0, buffer.Length);
}
If you merely want to skip a certain number of bytes (not characters):
using (var fileStream = System.IO.File.OpenRead(path))
{
    // seek to starting point
    fileStream.Seek(977, SeekOrigin.Begin);
    // read 200 bytes
    var buffer = new byte[200];
    fileStream.Read(buffer, 0, buffer.Length);
}
}
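One caveat about both snippets: Stream.Read and TextReader.Read may return fewer bytes/chars than requested (they return the count actually read), so robust code loops until the buffer is full. A defensive sketch using a MemoryStream as a stand-in for the file (the helper name is mine):

```csharp
using System;
using System.IO;

class ReadExactlyDemo
{
    // Read up to count bytes, looping until the buffer is full or the stream ends.
    public static int ReadExactly(Stream stream, byte[] buffer, int count)
    {
        int total = 0;
        while (total < count)
        {
            int read = stream.Read(buffer, total, count - total);
            if (read == 0) break; // end of stream
            total += read;
        }
        return total;
    }

    static void Main()
    {
        using (var stream = new MemoryStream(new byte[2000])) // stand-in for File.OpenRead(path)
        {
            stream.Seek(977, SeekOrigin.Begin);
            var buffer = new byte[200];
            int got = ReadExactly(stream, buffer, buffer.Length);
            Console.WriteLine(got); // 200
        }
    }
}
```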
You can use LINQ and convert an array of chars to a string.
Add these namespaces:
using System.Linq;
using System.IO;
Then you can get an array of b characters from your text file, starting at index a:
char[] c = File.ReadAllText(FilePath).ToCharArray().Skip(a).Take(b).ToArray();
Then you can build a string from the chars in c:
string r = new string(c);
For example, if I have this text in a file:
hello how are you ?
I use this code:
char[] c = File.ReadAllText(FilePath).ToCharArray().Skip(6).Take(3).ToArray();
string r = new string(c);
MessageBox.Show(r);
and it shows: how
Way 2
Very simple, using the Substring method:
string s = File.ReadAllText(FilePath);
string r = s.Substring(6, 3);
MessageBox.Show(r);
Good luck!
using (var fileStream = System.IO.File.OpenRead(path))
{
    // seek to starting point
    fileStream.Position = 977;
    // read
}
If you want to read specific data types from files, System.IO.BinaryReader is the best choice.
If you are not sure about the file's encoding, use:
using (var binaryreader = new BinaryReader(File.OpenRead(path)))
{
    // skip the first 977 chars
    binaryreader.ReadChars(977);
    // read
    char[] data = binaryreader.ReadChars(200);
    // do what you want with the data
}
Otherwise, if you know the character size in the source file is 1 or 2 bytes, use:
using (var binaryreader = new BinaryReader(File.OpenRead(path)))
{
    // seek to starting point
    binaryreader.BaseStream.Position = 977 * x; // x is 1 or 2, based on the character size in the source file
    // read
    char[] data = binaryreader.ReadChars(200);
    // do what you want with the data
}