char8 & uchar8 Equivalent - C#

I am working on a program that communicates with a card via UDP. I receive the card's responses in a byte array and store them in an Int32, but the specification requires the gain to be encoded as uchar8 and the pedestal as char8. What are the equivalents of char8 and uchar8 in C#, and are 0 and 255 really the min and max values?

As far as the specification you're asking about goes, you'll need to ask the author of the specification for specifics. However, based solely on the image you've provided, I'd say that yes, the numerical values given indicate the minimum and maximum values allowed for the types. As such, the equivalent C# types would be sbyte for your char8 type, and byte for the uchar8 type.
The former has a range of -128 to 127 (it's a signed type), while the latter has a range of 0 to 255 (being an unsigned type). Both are stored as single bytes, and as such have 256 different possible values.
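As a minimal sketch, here is how those two fields might be pulled out of a received datagram; the byte offsets are hypothetical and would come from the card's actual protocol:
byte[] payload = udpClient.Receive(ref remoteEndPoint); // assumes an existing UdpClient and IPEndPoint
byte gain = payload[4];              // uchar8: 0 to 255
sbyte pedestal = (sbyte)payload[5];  // char8: -128 to 127 (the same 8 bits, reinterpreted)
Console.WriteLine($"Gain = {gain}, Pedestal = {pedestal}");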

Related

Converting an amount to a 4 byte array

I haven't ever had to deal with this before. I need to convert a sale amount (48.58) to a 4 byte array and use network byte order. The code below is how I am doing it, but it is wrong and I am not understanding why. Can anyone help?
float saleamount = 48.58F;
byte[] data2 = BitConverter.GetBytes(saleamount).Reverse().ToArray();
What I am getting is 66 66 81 236 in the array. I am not certain what it should be though. I am interfacing with a credit card terminal and need to send the amount in "4 bytes, fixed length, max value is 0xffffffff, use network byte order"
The first question you should ask is, "What data type?" IEEE single-precision float? Two's-complement integer? If it's an integer, what is the implied scale? Is $48.58 represented as 4,858 or 485,800?
It's not uncommon for monetary values to be represented by an integer with an implied scale of either +2 or +4. In your example, $48.58 would be represented as the integer value 4858 or 0x000012FA.
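For illustration, a minimal sketch of that encoding, assuming an implied scale of +2 and an unsigned 32-bit integer in network byte order (both assumptions need to be confirmed against the terminal's documentation):
decimal saleAmount = 48.58m;
// Assumed implied scale of +2: $48.58 -> 4858 (0x000012FA).
uint scaled = (uint)decimal.Round(saleAmount * 100);
// Write big-endian (network byte order) by hand so the result
// does not depend on the machine's endianness.
byte[] data = new byte[4];
data[0] = (byte)(scaled >> 24);
data[1] = (byte)(scaled >> 16);
data[2] = (byte)(scaled >> 8);
data[3] = (byte)scaled;
// data is now 00 00 12 FA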
Once you've established what they actually want...use an endian-aware BitConverter or BinaryWriter to create it. Jon Skeet's MiscUtil, for instance, offers:
EndianBinaryReader
EndianBinaryWriter
BigEndianBitConverter
LittleEndianBitConverter
There are other implementations out there as well. See my answer to the question "Helpful byte array extensions to handle BigEndian data" for links to some.
Code you don't write is code you don't have to maintain.
Network byte order is essentially a synonym for big-endian, hence (as itsme86 already mentioned) you can check BitConverter.IsLittleEndian:
float saleamount = 48.58F;
byte[] data2 = BitConverter.IsLittleEndian
? BitConverter.GetBytes(saleamount).Reverse().ToArray()
: BitConverter.GetBytes(saleamount);
But if you didn't already know this, you are probably using an existing protocol that handles it for you.

I want to store large chunk data in true/false format using bits

Consider an example where I have many types (think of a type as a section). Each type has multiple possible values, and only a few of the available values are actually useful.
Each type stores up to 30 values. Not all 30 values are applicable, but I need to record each one as a 1/0 flag, and spending a whole byte per flag is too costly here.
Please guide me on how to do this.
Consider using the BitArray class.
You can define either an int column (if there are at most 32 values) or a bigint column (a long in C#, if there are at most 64 values) alongside each type, and then treat each bit of that int or long as one value of the type.
For example: suppose each type has the values Physics, Maths, Chemistry, English and so on, up to 32. Now we have a type "Class" which has only the three values Physics, Maths and English, and the rest are not applicable. With Physics as bit 0, Maths as bit 1, Chemistry as bit 2 and English as bit 3, the stored value would be 0b1011 = 11.
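A minimal sketch of both approaches, using the subject names above as placeholder flags (BitArray lives in System.Collections):
[Flags]
enum Subjects { None = 0, Physics = 1 << 0, Maths = 1 << 1, Chemistry = 1 << 2, English = 1 << 3 }

// Option 1: pack the flags into an int (up to 32 values) or a long (up to 64).
Subjects cls = Subjects.Physics | Subjects.Maths | Subjects.English;
Console.WriteLine((int)cls);                        // 11
Console.WriteLine(cls.HasFlag(Subjects.Chemistry)); // False

// Option 2: a BitArray, which scales beyond 64 bits if needed.
var bits = new BitArray(30); // 30 true/false values per type
bits[0] = true;              // Physics
bits[1] = true;              // Maths
bits[3] = true;              // English
Console.WriteLine(bits[2]);  // False (Chemistry)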

Why are bytes in C# named byte and sbyte, unlike the other integral types?

I was just flipping through the specification and found that byte is odd. Others are short, ushort, int, uint, long, and ulong. Why this naming of sbyte and byte instead of byte and ubyte?
It's a matter of semantics. When you think of a byte you usually (at least I do) think of an 8-bit value from 0-255. So that's what byte is. The less common interpretation of the binary data is a signed value (sbyte) of -128 to 127.
With integers, it's more intuitive to think in terms of signed values, so that's what the basic name style represents. The u prefix then allows access to the less common unsigned semantics.
The reason a type "byte", without any other adjective, is often unsigned while a type "int", without any other adjective, is often signed, is that unsigned 8-bit values are often more practical (and thus widely used) than signed bytes, but signed integers of larger types are often more practical (and thus widely used) than unsigned integers of such types.
There is a common linguistic principle that, if a "thing" comes in two types, "usual" and "unusual", the term "thing" without an adjective means a "usual thing"; the term "unusual thing" is used to refer to the unusual type. Following that principle, since unsigned 8-bit quantities are more widely used than signed ones, the term "byte" without modifiers refers to the unsigned flavor. Conversely, since signed integers of larger sizes are more widely used than their unsigned equivalents, terms like "int" and "long" refer to the signed flavors.
As for the reason behind such usage patterns: if one is performing maths on numbers of a certain size, it generally won't matter--outside of comparisons--whether the numbers are signed or unsigned. There are times when it's convenient to regard them as signed (it's more natural, for example, to think in terms of adding -1 to a number than adding 65535), but for the most part, declaring numbers to be signed doesn't require any extra work from the compiler except when one is either performing comparisons or extending the numbers to a larger size. Indeed, if anything, signed integer math may be faster than unsigned integer math (since unsigned integer math is required to behave predictably in case of overflow, whereas signed math isn't).
By contrast, since 8-bit operands must be extended to type 'int' before performing any math upon them, the compiler must generate different code to handle signed and unsigned operands; in most cases, the signed operands will require more code than unsigned ones. Thus, in cases where it wouldn't matter whether an 8-bit value was signed or unsigned, it often makes more sense to use unsigned values. Further, numbers of larger types are often decomposed into a sequence of 8-bit values or reconstituted from such a sequence. Such operations are easier with 8-bit unsigned types than with 8-bit signed types. For these reasons, among others, unsigned 8-bit values are used much more commonly than signed 8-bit values.
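As a small illustration of the decomposition point, reassembling a 32-bit value from unsigned bytes works directly, whereas signed bytes would be sign-extended and need extra masking:
byte[] parts = { 0x12, 0xFA, 0x00, 0xFF };
// Each byte zero-extends to int, so the shifts and ORs just work.
int value = (parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3];
Console.WriteLine(value.ToString("X8")); // 12FA00FF
// With sbyte, 0xFA would widen to 0xFFFFFFFA, so the same expression would
// need each operand masked, e.g. ((sbyte)parts[1] & 0xFF).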
Note that in the C language, "char" is an odd case, since all characters within the C character set are required to translate as non-negative values (so machines which use an 8-bit char type with an EBCDIC character set are required to have "char" be unsigned), but an "int" is required to hold all values that a "char" can hold (so machines where both "char" and "int" are 16 bits are required to have "char" be signed).

Why is Count not an unsigned integer? [duplicate]

Possible Duplicates:
Why does .NET use int instead of uint in certain classes?
Why is Array.Length an int, and not an uint
I've always wondered why .Count isn't an unsigned integer instead of a signed one.
For example, take ListView.SelectedItems.Count. The number of elements can't be less than 0, so why is it a signed int?
If I try to test if there are elements selected, I would like to test
if (ListView.SelectedItems.Count == 0) {}
but because it's a signed integer, I have to test
if (ListView.SelectedItems.Count <= 0) {}
or is there any case when .Count could be < 0 ?
Unsigned integer is not CLS-compliant (Common Language Specification)
For more info on CLS compliant code, see this link:
http://msdn.microsoft.com/en-us/library/bhc3fa7f.aspx
Maybe because the uint data type is not part of the CLS (Common Language Specification), as not all .NET languages support it.
Here is very similar thread about arrays:
Why is Array.Length an int, and not an uint
It's not CLS compliant, largely to allow wider support from different languages.
A signed int offers ease in porting code from C or C++ that uses pointer arithmetic.
Count can be part of an expression where the overall value can be negative. In particular, count has a direct relationship to indices, where valid indices are always in the range [0, Count - 1], but negative results are used e.g. by some binary search methods (including those provided by the BCL) to reflect the position where a new item should be inserted to maintain order.
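For instance, List<T>.BinarySearch follows that convention; a quick sketch:
var list = new List<int> { 1, 3, 5, 7 };
int index = list.BinarySearch(4); // not found, so the result is negative
if (index < 0)
{
    int insertAt = ~index;    // bitwise complement gives the insertion index (2 here)
    list.Insert(insertAt, 4); // list stays sorted: 1, 3, 4, 5, 7
}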
Let’s look at this from a practical angle.
For better or worse, signed ints are the normal sort of ints in use in .NET. It was also normal to use signed ints in C and C++. So, most variables are declared to be int rather than unsigned int unless there is a good reason otherwise.
Converting between an unsigned int and a signed int has issues and is not always safe.
On a 32-bit system it is not possible for a collection to have anywhere close to 2^32 items in it, so a signed int is big enough in all cases.
On a 64-bit system, an unsigned int does not gain you much; in most cases a signed int is still big enough, otherwise you need to use a 64-bit int. (I expect that none of the standard collections would cope well with anywhere near 2^31 items on a 64-bit system!)
Therefore given that using an unsigned int has no clear advantage, why would you use an unsigned int?
In VB.NET, the normal looping construct (a "For/Next loop") executes the loop with values up to and including the specified maximum, unlike C, which naturally loops with values below the upper limit. Thus, it is often necessary to write a loop as e.g. "For I = 0 To Array.Length - 1"; if Array.Length were unsigned and zero, Array.Length - 1 would wrap around and cause an obvious problem. Even in C, one benefits from being able to say "for (i = Array.Length - 1; i >= 0; --i)". Sometimes I think it would be useful to have a 31-bit integer type which would support widening casts to both signed and unsigned int, but I've never heard of a language supporting such.
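A C# sketch of the same pitfall, imagining Count were unsigned:
var items = new List<string>(); // empty collection
// With a signed Count, the loop body simply never runs.
for (int i = items.Count - 1; i >= 0; i--)
{
    Console.WriteLine(items[i]);
}
// If Count were unsigned, the natural loop variable would be unsigned too;
// Count - 1 would wrap around to a huge value on an empty collection and
// i >= 0 would always be true, so this common reverse loop could not be
// written so simply.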

Most significant bit

I haven't dealt with programming against hardware devices in a long while and have forgotten pretty much all the basics.
I have a spec for what I should send in a byte, and each bit is defined from the most significant bit (bit 7) to the least significant (bit 0). How do I build this byte? From MSB to LSB, or vice versa?
If these bits are being 'packeted' (which they usually are), then the order of bits is the native order, 0 being the LSB, and 7 being the MSB. Bits are not usually sent one-by-one, but as bytes (usually more than one byte...).
According to Wikipedia, bit ordering can sometimes run from 7 to 0, but that is probably the rare case.
If you're going to write the whole byte at the same time, i.e. do a parallel transfer as opposed to a serial, the order of the bits doesn't matter.
If the transfer is serial, then you must find out which order the device expects the bits in, it's impossible to tell from the outside.
To assemble a byte from individual bits, just use bitwise OR to "add" the bits one at a time:
byte value = 0;
value |= (byte)(1 << n); // 'n' is the index of the bit to set, with 0 as the LSB. The cast is needed in C# because 1 << n is an int.
If the spec says MSB, then build it MSB. Otherwise if the spec says LSB, then build it LSB. Otherwise, ask for more information.
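For instance, a sketch of packing a byte against a made-up bit layout (the field names and positions are hypothetical):
// Hypothetical layout: bit 7 = enable, bits 6-4 = mode, bits 3-0 = channel.
byte BuildCommand(bool enable, int mode, int channel)
{
    byte value = 0;
    if (enable) value |= (byte)(1 << 7);
    value |= (byte)((mode & 0x7) << 4);
    value |= (byte)(channel & 0xF);
    return value;
}
// BuildCommand(true, 3, 5) == 0xB5 (binary 1011_0101)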
