We're attempting to read in an HTML file that contains certain MS Word characters (such as the em dash). The problem is that these characters show up as garbage in SQL 2008. The data column is varbinary, and we are viewing the data by casting it to varchar. Here is the code, verbatim:
EDIT: Corrected definition of bad characters
var file = new FileInfo(/*file info*/);
using (var fs = file.OpenRead())
{
    var buffer = new byte[16 * 1024];
    using (var ms = new MemoryStream())
    {
        int read;
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        item.Data = ms.ToArray();
    }
}
The "item" object is outside the scope of the code.
If it makes any different, we are using EF 4. The data type for this data column in question is binary. Please let me know what code or details I can provide. Thanks.
Casting arbitrary bytes into some arbitrary code page shows up as funky characters. Nothing new here; this was always the case and always will be. You need to properly manage your text encoding end-to-end, from the file being read to the final data being shown. Start by reading this: International Features in Microsoft SQL Server 2005. This old KB article is also helpful (in some ways, at least): Description of storing UTF-8 data in SQL Server. Once you figure out what encoding your HTML files are in and what encoding you wish to display the data in, we can discuss the available options.
Oh, and I forgot the obligatory link: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!).
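Once the source encoding is known, the fix is mechanical. A minimal sketch, assuming the HTML files are Windows-1252 (a common code page for Word-generated content; confirm against your actual files). It reuses the file and item objects from your snippet:

using System.IO;
using System.Text;

// Assumption: the HTML file is Windows-1252; verify before relying on this.
Encoding sourceEncoding = Encoding.GetEncoding(1252);
string html = File.ReadAllText(file.FullName, sourceEncoding);

// Re-encode as UTF-16LE, which matches SQL Server's nvarchar, so
// CAST(Data AS nvarchar(max)) will display the text correctly.
item.Data = Encoding.Unicode.GetBytes(html);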
As a temporary solution: if I'm not wrong, the characters show up as a square, no? You can always substitute the offending characters once you have decoded the text.
You look up the character code (Convert.ToInt32 on the char will give it to you) and replace it with the character you prefer, as in the sketch below.
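A rough sketch of that substitution (text is a hypothetical decoded string, and the mappings below are assumptions about which Word characters are involved):

// Map common MS Word "smart" characters to plain ASCII equivalents.
string cleaned = text
    .Replace('\u2014', '-')    // em dash
    .Replace('\u2013', '-')    // en dash
    .Replace('\u201C', '"')    // left double quote
    .Replace('\u201D', '"')    // right double quote
    .Replace('\u2019', '\'');  // right single quote / apostrophe

// To inspect an unknown character's code: Convert.ToInt32(cleaned[0])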
Related
I am trying to use StreamReader and StreamWriter to open a text file (fixed width) and to modify a few specific columns of data. I have dates in the following format that are going to be converted to packed COMP-3 fields.
020100718F
020100716F
020100717F
020100718F
020100719F
I want to be able to read in the dates from a file using StreamReader, convert them to packed fields (5 characters), and then output them using StreamWriter. However, I haven't found a way to use StreamWriter to write to a specific position, and I am beginning to wonder if it is possible.
I have the following code snippet.
System.IO.StreamWriter writer;
this.fileName = @"C:\Test9.txt";
reader = new System.IO.StreamReader(System.IO.File.OpenRead(this.fileName));
currentLine = reader.ReadLine();
currentLine = currentLine.Substring(30, 10); //Substring Containing the Date
reader.Close();
...
// Convert currentLine to Packed Field
...
writer = new System.IO.StreamWriter(System.IO.File.Open(this.fileName, System.IO.FileMode.Open));
writer.Write(currentLine);
Currently, what I have produces the following (the original rows are shown above).
After:
!##$%0718F
020100716F
020100717F
020100718F
020100719F
!##$% = ASCII characters that SO can't display
Any ideas? Thanks!
UPDATE
Information on Packed Fields COMP-3
Packed fields are used by COBOL systems to reduce the number of bytes a field requires in files. Please see the following SO post for more information: Here
Here is a picture of the date "20120123" packed in COMP-3. This is my end result, and I have included it because I wasn't sure whether it would affect possible answers.
My question is: how do you get StreamWriter to dynamically replace data inside a file and change the lengths of rows?
I have always found it better to read the input file, filter/process the data, and write the output to a temporary file. When finished, delete the original file (or make a backup) and copy the temporary file over. This way you haven't lost half your input file if something goes wrong in the middle of processing.
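A minimal sketch of that pattern for this file (PackComp3 is a hypothetical stand-in for your packing routine; note that true COMP-3 output is binary, so a text writer is only suitable if you keep the output printable):

string inputPath = @"C:\Test9.txt";
string tempPath = inputPath + ".tmp";

using (var reader = new StreamReader(inputPath))
using (var writer = new StreamWriter(tempPath))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        string date = line.Substring(30, 10);   // the date columns
        string packed = PackComp3(date);        // hypothetical: returns 5 chars
        writer.WriteLine(line.Substring(0, 30) + packed + line.Substring(40));
    }
}

// Swap the processed file in, keeping the original as a backup.
File.Replace(tempPath, inputPath, inputPath + ".bak");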
You should probably be using a Stream directly (probably a FileStream). This would allow you to change position.
However, you're not going to be able to change record sizes this way, at least not in-line. You can have one Stream reading from the original file and another writing to a new, converted copy of the file.
However, I haven't found a way to use StreamWriter to write to a specific position, and I am beginning to wonder if it is possible.
You can use the StreamWriter.BaseStream.Seek method:
using (StreamWriter wr = new StreamWriter(File.Create(@"c:\Temp\aaa.txt")))
{
    wr.Write("ABC");
    wr.Flush();
    wr.BaseStream.Seek(0, SeekOrigin.Begin);
    wr.Write("Z");
}
I am using the code below to read binary data from a text file and divide it into small chunks. I want to do the same with a text file containing alphanumeric data, which obviously does not work with the binary reader. Which reader would be best to achieve that (StreamReader, StringReader, or TextReader), and how do I implement it in the following code?
public static IEnumerable<IEnumerable<byte>> ReadByChunk(int chunkSize)
{
    IEnumerable<byte> result;
    int startingByte = 0;
    do
    {
        result = ReadBytes(startingByte, chunkSize);
        startingByte += chunkSize;
        yield return result;
    } while (result.Any());
}

public static IEnumerable<byte> ReadBytes(int startingByte, int byteToRead)
{
    byte[] result;
    using (FileStream stream = File.Open(@"C:\Users\file.txt", FileMode.Open, FileAccess.Read, FileShare.Read))
    using (BinaryReader reader = new BinaryReader(stream))
    {
        int bytesToRead = Math.Max(Math.Min(byteToRead, (int)reader.BaseStream.Length - startingByte), 0);
        reader.BaseStream.Seek(startingByte, SeekOrigin.Begin);
        result = reader.ReadBytes(bytesToRead);
    }
    return result;
}
I can only help you get the general process figured out:
String/text is the second-worst data format to read, write, or process. It should be reserved exclusively for output towards and input from the user. It has some serious issues as a storage and retrieval format.
If you have to transmit, store or retreive something as text, make sure you use a fixed Encoding and Culture Format (usually invariant) at all endpoints. You do not want to run into issues with those two.
The worst data format is raw binary. But there is a special 0th place for raw binary that you have to interpret into text in order to process it further. To quote the most important parts of what I linked on encodings:
It does not make sense to have a string without knowing what encoding it uses. [...]
If you have a string, in memory, in a file, or in an email message, you have to know what encoding it is in or you cannot interpret it or display it to users correctly.
Almost every stupid “my website looks like gibberish” or “she can’t read my emails when I use accents” problem comes down to one naive programmer who didn’t understand the simple fact that if you don’t tell me whether a particular string is encoded using UTF-8 or ASCII or ISO 8859-1 (Latin 1) or Windows 1252 (Western European), you simply cannot display it correctly or even figure out where it ends. There are over a hundred encodings and above code point 127, all bets are off.
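To make that concrete for your chunking code: a sketch using StreamReader with a fixed, explicit encoding (UTF-8 is an assumption here; use whatever encoding your files actually are):

using System.Collections.Generic;
using System.IO;
using System.Text;

public static IEnumerable<string> ReadTextChunks(string path, int chunkSize)
{
    // Fixed, explicit encoding at the reading endpoint.
    using (var reader = new StreamReader(path, Encoding.UTF8))
    {
        var buffer = new char[chunkSize];
        int read;
        // StreamReader.Read counts characters, not bytes.
        while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            yield return new string(buffer, 0, read);
        }
    }
}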
I'm trying to read the bytes in the stream at each frame.
I want to be able to read the position and the timestamp information that is stored on a file I have created.
The stream is a stream of recorded skeleton data, and it is in an encoded binary format.
Stream recordStream;
byte[] results;

using (FileStream SourceStream = File.Open(@".....\Stream01.recorded", FileMode.Open))
{
    if (SourceStream.CanRead)
    {
        results = new byte[recordStream.Length];
        SourceStream.Read(results, 0, (int)recordStream.Length);
    }
}
The file should be read, and the Read method should read the current sequence of bytes before advancing the position in the stream.
Is there a way to pull out the data (position and timestamp) I want from the bytes read, and save it in separate variables before it advances?
Could using a BinaryReader give me the capability to do this?
BinaryReader br1 = new BinaryReader(recordStream);
I have saved the file as .recorded. I have also saved it as .txt to see what is contained in the file, but since it is encoded, it is not understandable.
Update:
I tried running the code with breakpoints to see if it enters the function with my BinaryReader, and it crashes with an error ("ArgumentException was unhandled: Stream was not readable") on the BinaryReader declaration and initialization:
BinaryReader br1 = new BinaryReader(recordStream);
The file type was .recorded.
You did not provide any information about the format of the data you are trying to read.
However, using the BinaryReader is exactly what you need to do.
It exposes methods to read data from the stream and convert them to various types.
Consider the following example:
var filename = "pathtoyourfile";

using (var stream = File.Open(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
    var x = reader.ReadByte();    // reads 1 byte
    var y = reader.ReadInt16();   // reads 2 bytes
    var z = reader.ReadBytes(10); // reads a 10-byte block
}
It really depends on the format of your data though.
Update
Even though I feel I've already provided all the information you need, let's use your data.
You say each record in your data starts with
[long: timestamp][int: framenumber]
using (var stream = File.Open(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
    var timestamp = reader.ReadInt64();
    var frameNumber = reader.ReadInt32();

    // At this point you have the timestamp and the frame number.
    // You can now do whatever you want with them and decide whether or not
    // to continue; after that, you just continue reading.
}
How you continue reading depends on the format of the remaining part of the records.

If all fields in a record have a specific length, then (depending on the choice you made knowing the values of the timestamp and the frame number) you either continue reading all the fields for that record, or you simply advance to a position in the stream that contains the next record. For example, if each record is 100 bytes long and you want to skip the record after you have read the first two fields:
stream.Seek(88, SeekOrigin.Current);
// 88 here because the first two fields take 12 bytes -> 100 - (8 + 4) = 88
If the records have a variable length the solution is similar, but you'll have to take into account the lengths of the various fields (which should be defined by length fields preceding the variable-length fields), as in the sketch below.
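A sketch of that, assuming a hypothetical layout where each record's payload is preceded by an int length prefix (wantThisFrame stands in for whatever criterion you use):

long timestamp = reader.ReadInt64();
int frameNumber = reader.ReadInt32();
int payloadLength = reader.ReadInt32();   // hypothetical length prefix

if (wantThisFrame)
{
    byte[] payload = reader.ReadBytes(payloadLength);
    // ... decode the payload fields here ...
}
else
{
    // Skip straight to the next record.
    reader.BaseStream.Seek(payloadLength, SeekOrigin.Current);
}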
As for knowing whether the first 8 bytes really do represent a timestamp, there's no real way of knowing for sure... remember, in the end the stream just contains a series of individual bytes that have no meaning whatsoever except for the meaning given to them by your file format. Either you have to revise the file format, or you could try checking whether the value of 'timestamp' in the example above even makes sense.
Is this a file format you have defined yourself? If so, perhaps you are making it too complicated, and you might want to look at solutions such as Google Protocol Buffers or Apache Thrift.
If this is still not what you are looking for, you will have to redefine your question.
Based on your comments:
You need to know the exact definition of the entire file. You then create a struct based on this file format (note that FieldOffset requires an explicit struct layout):
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct YourFileFormat
{
    [FieldOffset(0)]
    public long Timestamp;

    [FieldOffset(8)]
    public int FrameNumber;

    // [FieldOffset(12)]
    // .. etc..
}
Then, using a BinaryReader, you can either read each field individually for each frame:
// assume br is an instantiated BinaryReader..
YourFileFormat file = new YourFileFormat();
file.Timestamp = br.ReadInt64();
file.FrameNumber = br.ReadInt32();
// etc..
Or, you can read the entire file in and have the Marshalling classes copy everything into the struct for you..
byte[] fileContent = br.ReadBytes(Marshal.SizeOf(typeof(YourFileFormat))); // sizeof() doesn't work on custom structs outside unsafe code
GCHandle gcHandle = GCHandle.Alloc(fileContent, GCHandleType.Pinned); // or pinning it via the "fixed" keyword in an unsafe context
file = (YourFileFormat)Marshal.PtrToStructure(gcHandle.AddrOfPinnedObject(), typeof(YourFileFormat));
gcHandle.Free();
However, this assumes you know the exact size of the file. For this method to work, each frame (assuming you know how many there are) would have to be a fixed-size array within this struct.
Bottom line: unless you know the size of what you want to skip, you can't hope to get the data you require from the file.
I found some questions on encoding issues before asking, but they are not what I want. Currently I have two methods, and I'd better not modify them.
//FileManager.cs
public byte[] LoadFile(string id);
public FileStream LoadFileStream(string id);
They are working correctly for all kinds of files. Now I have the ID of a text file (it's guaranteed to be a .txt file) and I want to get its content. I tried the following:
byte[] data = manager.LoadFile(id);
string content = Encoding.UTF8.GetString(data);
But obviously it's not working for non-UTF-8 encodings. To resolve the encoding issue I tried to get its FileStream first and then use a StreamReader:
public StreamReader(Stream stream, bool detectEncodingFromByteOrderMarks);
I hoped this overload could resolve the encoding, but I still get strange contents.
using (var stream = manager.LoadFileStream(id))
using (var reader = new StreamReader(stream, true))
{
    content = reader.ReadToEnd(); // still incorrect
}
Maybe I misunderstood the usage of detectEncodingFromByteOrderMarks? How do I resolve the encoding issue?
ByteOrderMarks are sometimes added to files encoded in one of the Unicode formats, to indicate whether characters made up of multiple bytes are stored in big- or little-endian order (is byte 1 stored first and then byte 0, or byte 0 first and then byte 1?). This is particularly relevant when files are read by, for instance, both Windows and Unix machines, because they write these multibyte characters in opposite orders.
If you read a file and the first few bytes equal that of a ByteOrderMark, chances are quite high the file is encoded in the unicode format that matches that ByteOrderMark. You never know for sure, though, as Shadow Wizard mentioned. Since it's always a guess, the option is provided as a parameter.
If there is no ByteOrderMark in the first bytes of the file, it'll be hard to guess the file's encoding.
More info: http://en.wikipedia.org/wiki/Byte_order_mark
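If you want to see what the detection would find, here is a sketch of manual BOM sniffing (only the common marks are checked; a null result means the encoding has to be known some other way):

using System.IO;
using System.Text;

static Encoding SniffBom(string path)
{
    var bom = new byte[3];
    using (var fs = File.OpenRead(path))
    {
        fs.Read(bom, 0, 3);
    }
    if (bom[0] == 0xEF && bom[1] == 0xBB && bom[2] == 0xBF) return Encoding.UTF8;
    if (bom[0] == 0xFF && bom[1] == 0xFE) return Encoding.Unicode;          // UTF-16 LE
    if (bom[0] == 0xFE && bom[1] == 0xFF) return Encoding.BigEndianUnicode; // UTF-16 BE
    return null; // no recognizable BOM
}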
This is C#/.NET 2.0.
So I have a string that contains the future contents of an XML file. It contains metadata and binary data from image files. I would like to somehow determine how big the XML file will be once I write the data in the string to the file system.
I've tried the following:
Console.Out.WriteLine("Size: " + data.Length/1024 + "KB");
and
Console.Out.WriteLine("Size: " + (data.Length * sizeof(char))/1024 + "KB");
Neither works (the actual size of the resulting file deviates from what either expression returns). I'm obviously missing something here. Any help would be appreciated.
XML Serialization:
// doc is an XmlDocument that I've built previously
StringWriter sw = new StringWriter();
doc.Save(sw);
string XMLAsString = sw.ToString();
Writing to the file system (XMLAsString is passed to this function as a variable named data):
Random rnd = new Random(DateTime.Now.Millisecond);
FileStream fs = File.Open(#"C:\testout" + rnd.Next(1000).ToString() + ".txt", FileMode.OpenOrCreate);
StreamWriter sw = new StreamWriter(fs);
app.Diagnostics.Write("Size of XML: " + (data.Length * sizeof(char))/1024 + "KB");
sw.Write(data);
sw.Close();
fs.Close();
Thanks
You're missing how the encoding process works. Try this:
string data = "this is what I'm writing";
byte[] mybytes = System.Text.Encoding.UTF8.GetBytes(data);
The size of the array is exactly the number of bytes it should take up on disk if it's written in a somewhat "normal" way, as UTF-8 is the default encoding for text output (I think). A byte order mark (BOM) may or may not be written at the start of the file, but you should be really close with that.
Edit: I think it's worth it for everybody to remember that characters in C#/.NET are NOT one byte long but two, and are UTF-16 characters that are then encoded to whatever the output format needs. That's why any approach with data.Length * sizeof(char) will not work.
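If you only need the number and not the bytes themselves, Encoding.GetByteCount gives it directly (using the data string from the question):

int utf8Size = System.Text.Encoding.UTF8.GetByteCount(data);
Console.Out.WriteLine("Size: " + utf8Size / 1024 + "KB");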
In NTFS, if your file system is set to compress, the final file on disk might be smaller than your actual data. Is that your problem?
If you want to determine if your file will fit on the media, you have to take into account what the allocation size of the file system is. A file that is 10 bytes long does not occupy 10 bytes on the disk. The space requirement increases in discrete steps, determined by the allocation size (also called cluster size).
See this Microsoft support article for more info about NTFS and FAT cluster sizes.
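The rounding itself is simple arithmetic; a sketch with an assumed 4 KB cluster size (query the actual volume for the real value):

long fileSize = 10;       // logical file size in bytes
long clusterSize = 4096;  // assumption: a typical NTFS default
long sizeOnDisk = (fileSize + clusterSize - 1) / clusterSize * clusterSize;
// sizeOnDisk == 4096: a 10-byte file still occupies one full cluster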
What is data in your example above? How is the binary data represented in the XML file?
It's quite likely that you'll want to do a full serialization into a byte array to get an accurate guess of the size. The serializer may do arbitrary things, such as adding CDATA tags, and if for some reason you need to save the file in UTF-16 instead of UTF-8, that alone will probably double your size.
You can save (or write) it to a memory stream and then determine how big that memory stream has become; that's the only way to determine the actual size without writing it to disk.
I can't see there being much point to that, though: you may as well just save it to a local file, look at the final file size, and then decide what to do with it.
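If you do want the in-memory measurement, a sketch (UTF-8 is assumed as the target encoding; doc is the XmlDocument from the question):

using (MemoryStream ms = new MemoryStream())
using (StreamWriter sw = new StreamWriter(ms, System.Text.Encoding.UTF8))
{
    doc.Save(sw);  // serialize with the same encoding you'll use on disk
    sw.Flush();
    Console.Out.WriteLine("Size: " + ms.Length / 1024 + "KB");
}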
If all you want is a reasonable estimate of how big an XML file will become once you've added a bunch of encoded binary elements, and if we can assume that the rest of the XML will be negligible in comparison to the encoded binary content, then it's a matter of determining the bloat introduced by the encoding.
Typically we would encode binary content with base64 encoding, which produces 4 bytes of ASCII for every 3 bytes of binary, i.e. 33% bloat. So an estimate would be data.Length * 1.33333.
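The exact base64 length is also easy to compute, since the output is padded to a multiple of 4 (binaryData is a hypothetical byte array holding the image bytes):

int base64Length = ((binaryData.Length + 2) / 3) * 4;
// ...or, as a rough estimate: (int)(binaryData.Length * 1.33333)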