how to write with a single byte character encoding? - c#

I have a webservice that returns the config file to a low level hardware device.
The manufacturer of this device tells me he only supports single byte charactersets for this config file.
On this wiki page I found out that the following should be single byte character sets:
ISO 8859
ISO/IEC 646 (I could not find this one here)
various Microsoft/IBM code pages
But when I call Encoding.GetMaxByteCount(1) on these character sets it always returns 2.
I also tried various other encodings (for instance IBM437), but GetMaxByteCount returns 2 for those as well.
The method Encoding.IsSingleByte seems unreliable, according to this:
You should be careful in what your application does with the value for
IsSingleByte. An assumption of how an Encoding will proceed may still
be wrong. For example, Windows-1252 has a value of true for
Encoding.IsSingleByte, but Encoding.GetMaxByteCount(1) returns 2. This
is because the method considers potential leftover surrogates from a
previous decoder operation.
Also, the method Encoding.GetMaxByteCount has some of the same issues, according to this:
Note that GetMaxByteCount considers potential leftover surrogates from
a previous decoder operation. Because of the decoder, passing a value
of 1 to the method retrieves 2 for a single-byte encoding, such as
ASCII. Your application should use the IsSingleByte property if this
information is necessary.
Because of this I am not sure anymore what to use.
Further reading.

Basically, GetMaxByteCount considers an edge case that you will probably never hit in regular code, specifically what it says about the decoder and surrogates. The point here is that some code points are encoded as surrogate pairs, which in unfortunate cases can mean that a pair straddles two calls to GetBytes() / GetChars() (on the encoder/decoder). As a consequence, the implementation may theoretically have a single byte/character still buffered and waiting to be processed, so GetMaxByteCount needs to warn about this.
However! All of this only makes sense if you are using the encoder/decoder directly. If you are using operations on the Encoding, such as Encoding.GetBytes, then all of this is abstracted away from you and you will never need to know. In which case, just use IsSingleByte and you'll be fine.
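To illustrate, here is a small sketch using Windows-1252, the code page mentioned in the documentation quote above:

using System;
using System.Text;

// On .NET Core / .NET 5+ you may need the System.Text.Encoding.CodePages
// package and Encoding.RegisterProvider(CodePagesEncodingProvider.Instance)
// before Encoding.GetEncoding(1252) is available.
Encoding win1252 = Encoding.GetEncoding(1252);

Console.WriteLine(win1252.IsSingleByte);        // True
Console.WriteLine(win1252.GetMaxByteCount(1));  // 2, because of the decoder edge case

// Going through Encoding.GetBytes, every character is still exactly one byte.
byte[] bytes = win1252.GetBytes("Héllo");
Console.WriteLine(bytes.Length);                // 5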

Maybe you should use the example from the Encoding.Convert method page on MSDN.
The Encoding.Convert method should give you an ASCII-encoded result, which is single byte.
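A minimal sketch of that approach (the config text here is made up):

using System;
using System.Text;

string config = "baud=9600;parity=none";   // hypothetical config content

// Bytes in the encoding we already have (UTF-16 in memory).
byte[] unicodeBytes = Encoding.Unicode.GetBytes(config);

// Convert to a single-byte target. ASCII here; a code page such as
// Encoding.GetEncoding("ISO-8859-1") can be used the same way if the
// config contains characters outside ASCII.
byte[] asciiBytes = Encoding.Convert(Encoding.Unicode, Encoding.ASCII, unicodeBytes);

Console.WriteLine(asciiBytes.Length);   // one byte per character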

Related

C# - How do I know the correct length to read a string when reading it from a memory address?

So I have a problem: I'm reading a string from a memory address that is different at different times. For example:
Axe?ca Ocarina?tar??ing?ing????????????
I only need Axe.
Ball of Green Yarn??ing?ing????????????
I only need Ball of Green Yarn.
I'm reading 80 bytes of text (40 chars) because that's the maximum number of characters the string should ever reach. But how can I know how long the string actually is?
It really depends on what's writing the string.
Generally, strings are NUL-terminated, i.e. a '\0' character immediately follows the string. Old-style (non-_s-variant) C functions like strlen and strcat use that to determine the end of existing strings and mark the end of modified strings.
Most string data types tend to work this way, but not all. In Turbo Pascal, strings were length-prefixed. BSTRs used in COM (including pre-.NET VB) are both.
Based on the samples you've shown, there's a good chance that the ? character you're seeing after the part you want is a NUL character. It looks like the buffer is being reused and re-terminated each time, e.g. the shorter string "Axe" was written over a longer one containing "Ocarina", leaving the old tail behind.
Examine the buffer in the debugger and you'll probably find a '\0' character immediately following what you want.
Probably. Again, it depends on what's writing the string. Until you look for yourself, it could be anything, and even then, it could just be a coincidence that it's NUL-terminated this time. Don't rely on observation alone. Without documentation, it could be different and still just as valid. Whatever you do, do not read past the 40-character buffer you know you have, NUL terminated or not.
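A rough sketch of the check, assuming the 2-bytes-per-character layout you describe is UTF-16 (the buffer below just mimics your first sample; in your program it would come from the memory read):

using System;
using System.Text;

// Stand-in for the fixed 80-byte (40 UTF-16 character) block you read.
byte[] buffer = Encoding.Unicode.GetBytes("Axe\0ca Ocarina\0tar\0\0ing");

// Decode the whole fixed-size block, then cut at the first NUL.
string raw = Encoding.Unicode.GetString(buffer);
int nul = raw.IndexOf('\0');
string item = nul >= 0 ? raw.Substring(0, nul) : raw;

Console.WriteLine(item);   // Axe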

C# What is the difference between Text.Encoder and Text.Encoding

I am currently working with Unicode data as bytes, using the Encoding class to get bytes and get strings.
However, I saw there is an Encoder class, and it seems to do the same thing as the Encoding class. Does anyone know what the difference is between them, and when to use each?
Here are the Microsoft documentation pages:
Encoder: https://msdn.microsoft.com/en-us/library/system.text.encoder(v=vs.110).aspx
Encoding: https://msdn.microsoft.com/en-us/library/system.text.encoding(v=vs.110).aspx
There is definitely a difference. An Encoding is an algorithm for transforming a sequence of characters into bytes and vice versa. An Encoder is a stateful object that transforms sequences of characters into bytes; to get an Encoder object you usually call GetEncoder on an Encoding object.
Why is a stateful transformation necessary? Imagine you are trying to efficiently encode a long sequence of characters. You want to avoid creating a lot of arrays, or one huge array, so you break the characters down into, say, reusable 1K character buffers. However, this might split an otherwise legal character sequence, for example a UTF-16 surrogate pair broken across two separate calls to GetBytes. The Encoder object knows how to handle this and saves the necessary state across successive calls to GetBytes.
Thus you use an Encoder for transforming one block of text that is self-contained. I believe you can reuse an Encoder instance for multiple sections of text as long as you called GetBytes with flush set to true on the last array of characters. If you just want to easily encode short strings, use the Encoding.GetBytes methods. For decoding there is a similar Decoder class that holds the decoding state.
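A short sketch of that chunked pattern (the sample text is chosen so that a surrogate pair straddles the 1K chunk boundary):

using System;
using System.IO;
using System.Text;

// 1023 filler chars, then an emoji (a surrogate pair), then a few more chars,
// so the pair straddles the 1024-character chunk boundary.
string text = new string('x', 1023) + "\U0001F600" + new string('y', 10);

Encoding encoding = Encoding.UTF8;
Encoder encoder = encoding.GetEncoder();

char[] chunk = new char[1024];
byte[] output = new byte[encoding.GetMaxByteCount(chunk.Length)];

using (var destination = new MemoryStream())
{
    for (int offset = 0; offset < text.Length; offset += chunk.Length)
    {
        int charCount = Math.Min(chunk.Length, text.Length - offset);
        text.CopyTo(offset, chunk, 0, charCount);

        bool last = offset + charCount >= text.Length;
        // flush: true on the final call makes the encoder emit anything it buffered,
        // e.g. the high surrogate held back from the previous chunk.
        int byteCount = encoder.GetBytes(chunk, 0, charCount, output, 0, last);
        destination.Write(output, 0, byteCount);
    }

    Console.WriteLine(destination.Length);   // 1037 = 1023 + 4 (emoji in UTF-8) + 10
}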

Advantage in using SerialPort.ReadByte over ReadChar?

All of the example code I have read online regarding SerialPort uses ReadByte and then converts to a character, instead of using ReadChar in the first place.
Is there an advantage in doing this?
The SerialPort.Encoding property is often misunderstood. The default is ASCIIEncoding, which produces ? for byte values 0x80..0xFF, so people don't like getting those question marks. If you see such code converting the byte to char directly then it is getting it really wrong: Unicode has lots of unprintable code points in that byte range, and the odds that the device actually meant to send those characters are zero. A string tends to be regarded as easier to handle than a byte[], and it is.
When you use ReadChar it is based on the encoding you are using, as @Preston Guillot said. According to the documentation of ReadChar:
This method reads one complete character based on the encoding.
Use caution when using ReadByte and ReadChar together. Switching
between reading bytes and reading characters can cause extra data to
be read and/or other unintended behavior. If it is necessary to switch
between reading text and reading binary data from the stream, select a
protocol that carefully defines the boundary between text and binary
data, such as manually reading bytes and decoding the data.
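As a rough sketch of reading raw bytes and decoding them yourself (the port name, settings, and the assumed device encoding are examples only):

using System;
using System.IO.Ports;
using System.Text;

using (var port = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One))
{
    port.Open();

    // Read raw bytes; no Encoding is involved at this point.
    byte[] buffer = new byte[256];
    int read = port.Read(buffer, 0, buffer.Length);

    // Decode explicitly, using whatever the device's documentation says it sends.
    // Latin-1 is assumed here purely for illustration.
    string text = Encoding.GetEncoding("ISO-8859-1").GetString(buffer, 0, read);
    Console.WriteLine(text);
}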

C# big-endian UCS-2

The project I'm currently working on needs to interface with a client system that we don't make, so we have no control over how data is sent either way. The problem is that we're working in C#, which doesn't seem to have any support for UCS-2 and very little support for big-endian (as far as I can tell).
What I would like to know is whether there's anything I overlooked in .NET, or something that someone else has made and released that we can use. If not, I will take a crack at encoding/decoding it in a custom method, if that's even possible.
But thanks for your time either way.
EDIT:
BigEndianUnicode does work to correctly decode the string; the problem was in receiving other data as big-endian. So far, using IPAddress.HostToNetworkOrder() as suggested elsewhere has allowed me to decode half of the string ("Merli?" is what comes up, and it should be "Merlin33069").
I'm combing through the code to see if there's another length variable I missed.
RESOLUTION:
After working out that the big-endian variables were the main problem, I went back through and reviewed the details, and it seems that the length of the strings was sent as a character count, not a byte count (in UTF-16 a char is two bytes). All I needed to do was double it, and it worked out. Thank you all for your help.
string x = "abc";
byte[] data = Encoding.BigEndianUnicode.GetBytes(x);
In the other direction:
string decodedX = Encoding.BigEndianUnicode.GetString(data);
It is not exactly UCS-2 but it is enough for most cases.
UPD: Unicode FAQ
Q: What is the difference between UCS-2 and UTF-16?
A: UCS-2 is obsolete terminology which refers to a Unicode
implementation up to Unicode 1.1, before surrogate code points and
UTF-16 were added to Version 2.0 of the standard. This term should now
be avoided.
UCS-2 does not define a distinct data format, because UTF-16 and UCS-2
are identical for purposes of data exchange. Both are 16-bit, and have
exactly the same code unit representation.
Sometimes in the past an implementation has been labeled "UCS-2" to
indicate that it does not support supplementary characters and doesn't
interpret pairs of surrogate code points as characters. Such an
implementation would not handle processing of character properties,
code point boundaries, collation, etc. for supplementary characters.
EDIT: Now we know that the problem isn't in the encoding of the text data but in the encoding of the length. There are a few options:
Reverse the bytes and then use the built-in BitConverter code (which I assume is what you're using now; that or BinaryReader); a sketch of this option follows the list
Perform the conversion yourself using repeated "add and shift" operations
Use my EndianBitConverter or EndianBinaryReader classes from MiscUtil, which are like BitConverter and BinaryReader, but let you specify the endianness.
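A sketch of the first option, assuming the four length bytes have already been read off the wire:

using System;

// Example big-endian length bytes as they might arrive (value 11).
byte[] lengthBytes = { 0x00, 0x00, 0x00, 0x0B };

// BitConverter uses the machine's byte order, which is little-endian on
// typical PCs, so reverse the bytes first.
if (BitConverter.IsLittleEndian)
    Array.Reverse(lengthBytes);

int len = BitConverter.ToInt32(lengthBytes, 0);
Console.WriteLine(len);   // 11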
You may be looking for Encoding.BigEndianUnicode. That's the big-endian UTF-16 encoding, which isn't strictly speaking the same as UCS-2 (as pointed out by Marc) but should be fine unless you give it strings including characters outside the BMP (i.e. above U+FFFF), which can't be represented in UCS-2 but are represented in UTF-16.
From the Wikipedia page:
The older UCS-2 (2-byte Universal Character Set) is a similar character encoding that was superseded by UTF-16 in version 2.0 of the Unicode standard in July 1996. It produces a fixed-length format by simply using the code point as the 16-bit code unit and produces exactly the same result as UTF-16 for 96.9% of all the code points in the range 0-0xFFFF, including all characters that had been assigned a value at that time.
I find it highly unlikely that the client system is sending you characters where there's a difference (which is basically the surrogate pairs, which are permanently reserved for that use anyway).
UCS-2 is so close to UTF-16 that Encoding.BigEndianUnicode will almost always suffice.
The issue (comments) around reading the length prefix (as big-endian) is more correctly resolved via shift operations, which will do the right thing on all systems. For example:
Read4BytesIntoBuffer(buffer);
int len = (buffer[0] << 24) | (buffer[1] << 16) | (buffer[2] << 8) | buffer[3];
This will then work the same (at parsing a big-endian 4 byte int) on any system, regardless of local endianness.

C#: String -> MD5 -> Hex

In languages like PHP or Python there are convenient functions to turn an input string into an output string that is the hex representation of it.
I find this a very common and useful task (password storing and checking, checksums of file content, ...), but in .NET, as far as I know, you can only work on byte streams.
A function to do the work is easy to put together (e.g. http://blog.stevex.net/index.php/c-code-snippet-creating-an-md5-hash-string/), but I'd like to know if I'm missing something, if I'm using the wrong pattern, or if there is simply no such thing in .NET.
Thanks
The method you linked to seems right; a slightly different method is shown on the MSDN C# FAQ.
A comment suggests you can use:
System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(string, "MD5");
Yes, you can only work with bytes (as far as I know). But you can easily turn those bytes into their hex representation by looping through them and doing something like:
myByte.ToString("x2");
And you can get the bytes that make up the string using:
System.Text.Encoding.UTF8.GetBytes(myString);
So it could be done in a couple lines.
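Putting those two pieces together, a sketch of the whole thing (the helper name Md5Hex is made up):

using System;
using System.Security.Cryptography;
using System.Text;

Console.WriteLine(Md5Hex("hello world"));   // 32 lowercase hex digits

static string Md5Hex(string input)
{
    using (MD5 md5 = MD5.Create())
    {
        // Choose the encoding explicitly; UTF-8 here.
        byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));

        var sb = new StringBuilder(hash.Length * 2);
        foreach (byte b in hash)
            sb.Append(b.ToString("x2"));   // two lowercase hex digits per byte

        return sb.ToString();
    }
}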
One problem is with the very concept of "the HEXed representation of [a string]".
A string is a sequence of characters. How those characters are represented as individual bits depends on the encoding. The "native" encoding to .NET is UTF-16, but usually a more compact representation is achieved (while preserving the ability to encode any string) using UTF-8.
You can use Encoding.GetBytes to get the encoded version of a string once you've chosen an appropriate encoding - but the fact that there is that choice to make is the reason that there aren't many APIs which go straight from string to base64/hex or which perform encryption/hashing directly on strings. Any such APIs which do exist will almost certainly be doing the "encode to a byte array, perform appropriate binary operation, decode opaque binary data to hex/base64".
(That makes me wonder whether it wouldn't be worth writing a utility class which could take an encoding, a Func<byte[], byte[]> and an output format such as hex/base64 - that could represent an arbitrary binary operation applied to a string.)
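For what it's worth, a rough sketch of what such a utility might look like (the names here are invented for illustration):

using System;
using System.Security.Cryptography;
using System.Text;

// Apply an arbitrary byte[] -> byte[] operation (hash, encryption, ...) to a
// string, being explicit about the text encoding and the output format.
using var md5 = MD5.Create();
Console.WriteLine(StringTransform("secret", Encoding.UTF8, md5.ComputeHash, asHex: true));

static string StringTransform(string input, Encoding encoding,
                              Func<byte[], byte[]> operation, bool asHex)
{
    byte[] result = operation(encoding.GetBytes(input));
    return asHex
        ? BitConverter.ToString(result).Replace("-", "").ToLowerInvariant()
        : Convert.ToBase64String(result);
}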
