Splitting a 200 bit hexadecimal bitmask [closed] - c#

I have a 200-bit bitmask stored as a hexadecimal value.
In order to apply the & operator I would first have to convert the hex to an integer, but 200 bits is too big for a UInt64, so my question is: how do I split my bitmask into four separate hexadecimal values without losing data?
That way I can also split my 200-bit data and compare every chunk of data against the corresponding chunk of the bitmask without altering the result.

You can use BigInteger from System.Numerics (it lives in a separate assembly that you must reference):
BigInteger bi = BigInteger.Parse("01ABC000000000000000000000000000000000", System.Globalization.NumberStyles.HexNumber);
VERY IMPORTANT: prepend a "0" to the hex number, because BigInteger.Parse("F", NumberStyles.HexNumber) == -1, while BigInteger.Parse("0F", NumberStyles.HexNumber) == 15.
BigInteger implements the "classical" bitwise operators (&, |, ^).
Requires .NET 4.0
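For the concrete use case in the question, a small hedged sketch (the maskHex and dataHex variable names are invented) of parsing both values and applying the mask in one go:

using System;
using System.Globalization;
using System.Numerics;

// Sketch: parse the 200-bit mask and data, with a leading "0" prepended
// to force a positive value, then AND them directly - no splitting needed.
BigInteger mask = BigInteger.Parse("0" + maskHex, NumberStyles.HexNumber);
BigInteger data = BigInteger.Parse("0" + dataHex, NumberStyles.HexNumber);
BigInteger result = data & mask;
Console.WriteLine(result.ToString("X"));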

The most efficient way of achieving this is to write a class that can store 200 bits of data, perform binary operations on them, accept strings as input, and so on.
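If you do want to split the value instead, a minimal sketch of the chunk idea, assuming the value fits in four 64-bit words (the helper names are invented):

// Split a hex string of up to 64 digits (256 bits) into four 64-bit words,
// most significant word first, so & can be applied word by word.
static ulong[] HexToWords(string hex)
{
    hex = hex.PadLeft(64, '0');
    var words = new ulong[4];
    for (int i = 0; i < 4; i++)
        words[i] = Convert.ToUInt64(hex.Substring(i * 16, 16), 16);
    return words;
}

static ulong[] And(ulong[] a, ulong[] b)
{
    var r = new ulong[4];
    for (int i = 0; i < 4; i++)
        r[i] = a[i] & b[i];   // ANDing chunk by chunk equals ANDing the whole value
    return r;
}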

Related

How to store a bit in C# [closed]

I have the following code:
void MyBitMaker(int inputNumber)
{
    Console.WriteLine(inputNumber << 1);
    //TODO
    //Bit bitHolder = inputNumber << 1;
    //InitMico(ref bitHolder);
}
There is no type for storing a single bit in C#. Is there any way to store bits in a variable?
I am programming a microcontroller using C#. It takes bits coming from a web service, fetches them and sends them to the controller to open, close, sleep and so on, using an interface that converts the input data into the appropriate bits. My problem is that the micro has just 16 bytes of RAM and I cannot store more than two bytes, yet it should also store the history of previous actions. Space is very tight, so I need the smallest possible storage unit. I searched a lot and did not find anything; currently I use a class I implemented myself that works with bit operators, but it is not efficient at all. I was wondering if someone has faced something similar that could help me.
The smallest addressable unit is a byte, so use that, or a bool, which still occupies 8 bits but only has two possible values.
You can't take a reference to a single bit, so that would be useless anyway. If you need to address a specific position within a byte, you can pass the position along. Still, you can only change a bit's value by setting the entire byte.
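A hedged sketch of that idea - passing the bit position along and rewriting the whole byte (the helper names are invented):

static byte SetBit(byte value, int pos)   => (byte)(value | (1 << pos));
static byte ClearBit(byte value, int pos) => (byte)(value & ~(1 << pos));
static bool TestBit(byte value, int pos)  => (value & (1 << pos)) != 0;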
I just want to store bits
Trivial solution: bool array.
If you really need to store the bits in a compacted form, you can use the BitArray type, which uses an int array internally. You can index it like a normal array:
var myBits = new BitArray(20); // initialize for 20 bits (1 int will be stored internally)
myBits[5] = true; // similar to this: myInt |= 1 << 5;
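If the packed bits then need to go back out as bytes (for example, toward the device), BitArray can copy itself into a byte array - a small sketch:

var bits = new BitArray(16);   // two bytes' worth of flags
bits[0] = true;
bits[9] = true;
var packed = new byte[2];
bits.CopyTo(packed, 0);        // packed[0] == 0x01, packed[1] == 0x02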

Convert byte[] to single [closed]

Problem:
I am trying to convert a byte[] to a single. I've tried using BitConverter.ToSingle() and it doesn't give the desired result.
The content of the array is:
0
0
0
100
The desired output is 100; I know an int would work for this, but I just chose that number for easy debugging. I have also tried moving the 100 into every possible position in the array, with no luck.
My output always looks like 9.3345534545E or something similar with different digits.
Any ideas?
IEEE-754 types (Single and Double - float and double in C#) do not have a trivial binary representation, so 0x00 0x00 0x00 0x64 does not represent the value 0x64 (100 in decimal).
The actual raw binary representation of IEEE-754 values is rather complicated, and assembling it by hand to convert from integer to IEEE-754 really isn't worth the effort (unless it's a learning exercise). It's best to let the library/platform or even the processor do it for you:
Because your value is an integer, you need to convert it into an Int32 first, and then use the Convert class (or a simple compiler cast, which performs the type conversion under the hood).
Int32 val = BitConverter.ToInt32(yourArray, 0); // assuming the array is little-endian
Single s1 = (Single)val;                        // compiler cast
Single s2 = Convert.ToSingle(val);              // or via the Convert class
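Since the array in the question ({0, 0, 0, 100}) is big-endian, a hedged end-to-end sketch that also handles the byte order:

byte[] data = { 0, 0, 0, 100 };
if (BitConverter.IsLittleEndian)
    Array.Reverse(data);                  // the source bytes are big-endian
int val = BitConverter.ToInt32(data, 0);  // 100
float f = val;                            // 100f, via an implicit numeric conversion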

How does the computer convert between types [closed]

A common question you see on SO is how to convert between type X and type Z, but I want to know: how does the computer actually do this?
For example, how does it take an int out of a string?
My theory is that a string is a char array at its core, so it goes index by index, checking each character against the ASCII table. If the character falls within the range of digits, it is added to the integer. Does it happen at an even lower level than this? Is there bitmasking taking place? How does this happen?
Disclaimer: not for school, just curious.
This question can only be answered by restricting the types to a somewhat manageable subset. To do so, let us consider three interesting types: strings, integers and floats.
The only other truly different basic type is a pointer, which is not usually converted in any meaningful manner (even the NULL check is not actually a conversion, but a special built-in semantic for the 0 literal).
int to float and vice versa
Converting integers to floats and vice versa is simple, since modern CPUs provide an instruction to deal with that case directly.
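In C# that instruction is hidden behind an ordinary cast; a trivial illustration (the instruction names are for x86 with SSE, as an assumption about the target):

int i = 42;
float f = i;        // JIT-compiles to a single conversion instruction (e.g. cvtsi2ss)
int back = (int)f;  // cvttss2si: converts back, truncating toward zero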
string to integer type
Conversion from string to integer is fairly simple, because no rounding errors can occur. Indeed, any string is just a sequence of code points (which may or may not be represented by char or wchar_t), and the common method works along the lines of the following:
unsigned result = 0;
for (size_t i = 0; i < str.size(); ++i) {
    unsigned c = static_cast<unsigned>(str[i]) - '0';
    if (c > 9) {                  // not a decimal digit (compare against 9, not '9')
        if (i) return result;     // ok: integer over
        else throw "no integer found";
    }
    if ((std::numeric_limits<unsigned>::max() - c) / 10 < result) // requires <limits>
        throw "integer overflow"; // result * 10 + c would not fit
    result = result * 10 + c;
}
If you wish to handle things like additional bases (e.g. strings like 0x123 as a hexadecimal representation) or negative values, a few more tests are obviously required, but the basic algorithm stays the same.
int to string
As expected, this basically works in reverse: an implementation repeatedly takes the remainder of a division by 10 and then divides by 10. Since this produces the digits in reverse order, one can either print into a buffer from the back or reverse the result afterwards.
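A minimal C# sketch of that divide-by-10 loop, assuming a non-negative input and filling the buffer from the back:

static string UIntToString(uint value)
{
    if (value == 0) return "0";
    var buffer = new char[10];                     // uint has at most 10 decimal digits
    int pos = buffer.Length;
    while (value != 0)
    {
        buffer[--pos] = (char)('0' + value % 10);  // digits come out least significant first
        value /= 10;
    }
    return new string(buffer, pos, buffer.Length - pos);
}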
string to floating point type
Parsing strings to a double (or float) is significantly more complex, since the conversion is supposed to happen with the highest possible accuracy. The basic idea is to read the number as a string of digits while only remembering where the dot was and what the exponent is. Then you assemble the mantissa (which is basically a 53-bit integer) and the exponent from this information, build the actual bit pattern of the resulting number, and copy it into your target value.
While this approach works perfectly fine, there are literally dozens of different approaches in use, all varying in performance, correctness and robustness.
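To make the idea concrete, a deliberately naive C# sketch - note that real implementations do not do this, because scaling with Math.Pow is not correctly rounded:

// Naive: collect the digits, remember where the dot was, scale at the end.
// No sign, exponent or error handling; for illustration only.
static double NaiveParseDouble(string s)
{
    long digits = 0;
    int fractionDigits = 0;
    bool seenDot = false;
    foreach (char c in s)
    {
        if (c == '.') { seenDot = true; continue; }
        digits = digits * 10 + (c - '0');
        if (seenDot) fractionDigits++;
    }
    return digits / Math.Pow(10, fractionDigits);
}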
Actual implementations
Note that actual implementations may have to do one more important (and horribly ugly) thing, which is locale handling. For example, in the German locale "," is the decimal point and not the thousands separator, so pi is roughly "3,1415926535".
Perl string to double
TCL string to double
David M. Gay AT&T Paper string to double, double to string and source code
Boost Spirit

Handling special characters [closed]

I am wondering how best to handle a special character such as ’ using C#?
e.g.
public static string DecodeFrom64(string toDecode)
{
    byte[] arrayToDecode = System.Convert.FromBase64String(toDecode);
    return System.Text.Encoding.Unicode.GetString(arrayToDecode);
}
The problem here is that you've stored a UTF-8 string under a different encoding in your database - probably the Windows-1252 code page (CP1252). As a result, the UTF-8 character ’, represented by the byte sequence E2 80 99, is translated into the CP1252 single-byte characters ’. This was all explained to you previously in this answer, which also gives a solution to your current problem.
In order to get back to the original UTF-8 encoding you will need to take the string returned from your database and correct it with the following code:
public static string UTF8From1252(string source)
{
    // get original UTF-8 bytes from CP1252-encoded string
    byte[] bytes = System.Text.Encoding.GetEncoding("windows-1252").GetBytes(source);
    return System.Text.Encoding.UTF8.GetString(bytes);
}
This highlights the fact that it is vital to use the correct encoding at all times when using the GetBytes method.
It is important to note that the reverse of this transformation is not always possible, since there are gaps in the CP1252 code space - byte values that will be discarded or altered during conversion.
The hex values for these gaps are: 81 8D 8F 90 9D.
Unfortunately these byte values do occur in the UTF-8 encodings of various characters, such as ” (E2 80 9D). If one of these values is present in your database, it will not load correctly. Depending on how you did the first-stage conversion, the third byte may have been lost or corrupted in the database, in which case you cannot retrieve it.
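A small round-trip demonstration of the repair (note that on .NET Core the windows-1252 encoding must first be registered via the System.Text.Encoding.CodePages package):

string original = "it’s";
byte[] utf8 = System.Text.Encoding.UTF8.GetBytes(original);
// Reading the UTF-8 bytes as CP1252 produces the mojibake from the question:
string garbled = System.Text.Encoding.GetEncoding("windows-1252").GetString(utf8); // "it’s"
string repaired = UTF8From1252(garbled);                                           // "it’s"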

Message parity check [closed]

Can someone help me out with implementing this sequence of calculations in C#?
This problem essentially describes a CRC with a 24-bit polynomial.
You can solve the problem simply using shift and XOR operations and a 24-bit (or larger) variable; no bigint required.
Recommended introductory reading:
http://en.wikipedia.org/wiki/Cyclic_redundancy_check
http://www.mathpages.com/home/kmath458.htm
http://www.ross.net/crc/download/crc_v3.txt
I took the opportunity to dabble with this. Interpreting the equations in the context of a software implementation is tricky, because there are many ways the polynomials can be mapped to data structures in memory - and, I assume, you'll want your solution to inter-operate seamlessly with other implementations. In this context it matters whether your byte ordering is MSB- or LSB-first, and whether bit-strings that aren't a multiple of 8 are aligned to the left or to the right. It is also worth noting that the polynomials are denoted in ascending powers of X. One might assume, because the leftmost bit in a byte has the maximum index, that the leftmost bit should correspond to the maximum power of X - but that is not the convention in use.
Essentially, there are two very different approaches to calculating CRCs using generator polynomials. The first, and least efficient, is to use arbitrary precision arithmetic and modulo - as the posted extract suggests. A faster approach involves successive application of the polynomial and exclusive-or.
An implementation in Pascal can be found here: http://jetvision.de/sbs/adsb/crc.htm - translating it to C# should prove trivial.
A more direct approach might involve encoding the message and the generator polynomial as System.Numerics.BigInteger objects (using C#/.NET 4.0) and calculating the parity bits exactly as the text above suggests - by finding the message modulo the polynomial, simply using the "%" operator on suitably encoded BigIntegers. The only challenge here is converting your message and parity bits to/from a format suitable for your application.
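For the shift-and-XOR route, a hedged bitwise sketch - the generator, initial value and bit order below are placeholders (here the CRC-24/OpenPGP polynomial) and must be matched to the actual specification in the question:

static uint Crc24(byte[] message)
{
    const uint Poly = 0x864CFB;           // example 24-bit generator; an assumption
    uint crc = 0;                         // initial value is also specification-dependent
    foreach (byte b in message)
    {
        crc ^= (uint)b << 16;             // feed the next byte in, MSB-first
        for (int i = 0; i < 8; i++)
        {
            crc <<= 1;
            if ((crc & 0x1000000) != 0)   // bit 24 fell out: subtract the polynomial
                crc ^= Poly;
        }
    }
    return crc & 0xFFFFFF;                // keep the low 24 bits
}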
